Roundings Respecting Hard Constraints
Benjamin Doerr
Mathematisches Seminar II, Christian-Albrechts-Universität zu Kiel, Kiel, Germany, bed@numerik.uni-kiel.de

Abstract. A problem arising in integer linear programming is to transform a solution of a linear system into an integer one that is close to it. The customary model for investigating such problems is: given a matrix A and a [0,1]-valued vector x, find a binary vector y such that ‖A(x − y)‖∞ (the violation of the constraints) is small. Randomized rounding and the algorithm of Beck and Fiala are ways to compute such solutions y, whereas linear discrepancy is a lower-bound measure. In many applications one is looking for roundings that, in addition to being close to the original solution, satisfy some constraints without any violation. The objective of this paper is to investigate such problems in a unified way. To this aim, we extend the notion of linear discrepancy, the theorem of Beck and Fiala, and the method of randomized rounding to this setting. Whereas some of our examples show that additional hard constraints may seriously increase the linear discrepancy, the latter two sets of results demonstrate that a reasonably broad notion of hard constraints may be added to the rounding problem without worsening the obtained solution significantly. Of particular interest might be our results on randomized rounding. We provide a simpler way to randomly round fixed-weight vectors (cf. Srinivasan, FOCS 2001). It has the additional advantage that it can be derandomized with standard methods.

1 Introduction and Results

1.1 Rounding Problems, Randomized Rounding and Linear Discrepancy

Solving integer linear programs (ILPs) is NP-hard; solving linear programs without integrality constraints is easy (in several respects). Therefore a natural and widely used technique is to solve the linear relaxation of the ILP and then transform (typically by rounding) its solution into an integer one.
In doing so, one usually has to accept that the constraints are violated to some extent. There are several ways to deal with such violations, including simply accepting them, repairing them, and preventing them by solving a linear program with stricter constraints in the first step. We do not want to go into detail here, but note that in any case the central theme is rounding the solution of the relaxation in such a way that the constraints are violated not too much. The underlying theoretical concept is that of linear discrepancy.
Definition 1 (Linear Discrepancy Problem). Given a matrix A ∈ ℝ^{m×n} and a vector x ∈ [0,1]^n, find a y ∈ {0,1}^n such that ‖A(x − y)‖∞ is small. We write

  lindisc(A, x) := min_{y ∈ {0,1}^n} ‖A(x − y)‖∞,
  lindisc(A) := max_{x ∈ [0,1]^n} lindisc(A, x).

Thus lindisc(A, x) is the rounding error inflicted by an optimal rounding of x. It is known that this can be quite high. Spencer [Spe87] gives an example of a binary n × n matrix A such that lindisc(A) = Ω(√n).

Whereas linear discrepancies provide bounds on how good roundings can possibly be, there are a number of positive results. A very general approach is that of randomized rounding, introduced by Raghavan and Thompson [RT87,Rag88]. Here the integer vector y is obtained from the solution x of the relaxation by rounding each component j independently with probabilities derived from x_j. In particular, if x ∈ [0,1]^n, we have Pr(y_j = 1) = x_j and Pr(y_j = 0) = 1 − x_j for all j. Since the components are rounded independently, the deviation (A(x − y))_i in constraint i is a sum of independent random variables. Thus it is highly concentrated around its mean, which by the choice of the probabilities is zero. Large deviation bounds like the Chernoff inequality allow one to quantify such violations. Derandomizations transform this randomized approach into a deterministic algorithm (see [Rag88,SS96]).

Another well-known rounding result is due to Beck and Fiala [BF81]. They give a polynomial-time algorithm computing a rounding y such that ‖A(x − y)‖∞ < ‖A‖₁, where ‖A‖₁ = max_{j ∈ [n]} Σ_{i=1}^m |a_ij|. This result is particularly useful for sparse matrices. A one-sided version was proven by Karp et al. [KLR+87] and applied to a global routing problem.

1.2 Hard Constraints

The notion of linear discrepancy prices all violations of constraints the same. This is reasonable if all constraints are of the same kind. There are, however, a number of problems where this is definitely not the case. We sketch a simple one that carries most of the typical structure we are interested in.
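In code, plain independent randomized rounding is a one-liner. The following is an illustrative sketch in our own notation (NumPy-based; the function name is ours, not from the paper):

```python
import numpy as np

def randomized_rounding(x, rng):
    """Round each x_j in [0,1] to {0,1} independently with Pr(y_j = 1) = x_j."""
    return (rng.random(len(x)) < x).astype(int)

# The violation (A(x - y))_i of each constraint is a sum of independent
# mean-zero random variables, so Chernoff bounds keep it small with high
# probability for 0/1 constraint rows.
A = np.ones((1, 1000))                      # one constraint summing all variables
x = np.full(1000, 0.5)
y = randomized_rounding(x, np.random.default_rng(0))
violation = float(np.abs(A @ (x - y)).max())
```

Note that nothing here prevents a constraint from being violated; this is exactly the gap that hard constraints, as in the routing example that follows, expose.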
Raghavan and Thompson [RT87] investigate the following routing problem. Given an undirected graph and several source-sink pairs (s_i, t_i), we are looking for paths f_i from s_i to t_i such that the maximum edge congestion is minimized. Solving the non-integral relaxation and applying path stripping (cf. [GKR+99]), we end up with the following rounding problem: Round the solution (x_P)_P of the linear
system

  Minimize W
  subject to  Σ_{P: e ∈ P} x_P ≤ W   for all edges e,
              Σ_{P ∈ 𝒫_i} x_P = 1    for all i,
              x_P ≥ 0                for all P,

to an integer one such that the first set of constraints is violated not too much and the second one is satisfied without any violation. The first group of constraints ensures that W is the maximum congestion of an edge. Here a rounding error just enlarges the congestion (our objective value). The second kind of constraint is different. It ensures that each request is satisfied exactly once. Here no violation can be tolerated: it would result in demands being satisfied more than once or not at all.

Further examples of rounding problems with hard constraints include other routing applications [RT91,Sri01], many flow problems [RT87,RT91,GKR+99], partial and capacitated covering problems [GKPS02], the assignment problem with extra constraints [AFK02], and the linear discrepancy problem for hypergraphs in more than two colors [DS03].

1.3 Prior Work

For linear programs in which the right-hand side of the hard constraints equals one and the hard constraints depend on disjoint sets of variables, Raghavan and Thompson [RT87] presented an easy solution. In the example above, for each i they pick one P ∈ 𝒫_i with probability x_P and set y_P = 1 and y_{P'} = 0 for all P' ∈ 𝒫_i \ {P}. The general case of the integer splittable flow problem, however, seems to require a more complicated random experiment. In the integer splittable flow problem, each source-sink pair has an associated integral demand d_i, and the task is to find an integer flow f_i from s_i to t_i having value d_i. Using the approach sketched in the previous subsection, we would end up with the same rounding problem with the 1 replaced by d_i in the second set of constraints. Note that for this rounding problem, the ideas of Raghavan and Thompson (and all promising-looking simple extensions) fail. Guruswami et al.
[GKR+99] state about the integral splittable flow problem (ISF), in comparison to the unsplittable flow problem, that standard rounding techniques are not as easily applied to ISF.

At FOCS 2001, Srinivasan [Sri01] presented a way to compute randomized roundings that respect the constraint that the sum of all variables remains unchanged (cardinality constraint) and that fulfill some negative correlation properties (which imply Chernoff bounds). Among other results, this yields a randomized algorithm for the integer splittable flow problem. The deterministic pipage rounding algorithm of Ageev and Sviridenko [AS] allows one to round edge weights in a bipartite graph in such a way that the sum of the weights incident with a vertex changes by less than one ("degree preservation"). This yields improved approximation algorithms for maximum coverage problems
and max-cut problems with given sizes of parts. Ageev and Sviridenko note that their ideas could be used in a randomized way, but the resulting algorithm would be too sophisticated to admit derandomization. The ideas of [AS] and [Sri01] were combined by Gandhi, Khuller, Parthasarathy and Srinivasan [GKPS02] to obtain randomized roundings of edge weights in bipartite graphs that are degree preserving and fulfill negative correlation properties on sets of edges incident with a common vertex. This again yields improved randomized approximation algorithms for several problems as well as some nice per-user fairness properties.

1.4 Our Contribution

As can be seen from the previous subsection, there is now a decent amount of knowledge on rounding problems with hard constraints. However, most of these results focus on a particular application rather than on the common theme of respecting hard constraints. While still keeping an eye on the applications, the main aim of this paper is to investigate rounding problems with hard constraints in a unified way. To this end, we introduce the corresponding linear discrepancy notion and extend previous rounding results to deal with hard constraints. Though we find examples showing that the linear discrepancy can increase unexpectedly when hard constraints are added (Theorem 8), our algorithmic results show that reasonable hard constraints can be added without seriously worsening the optima. We show that for constraints on disjoint sets of variables, a rounding error of 2‖A‖₁ can be achieved, which is twice the bound of Beck and Fiala. For constraints of type By = Bx, where B is an arbitrary totally unimodular m_B × n matrix, we have a bound of (1 + m_B)‖A‖₁. We provide a way to generate randomized roundings that satisfy hard constraints as in [Sri01]. They satisfy the key properties of the roundings given there (hence ours yield all his results as well), but seem to be conceptually much simpler.
This allows us to derandomize them with respect to large deviation results. Our approach can be extended to the setting of [GKPS02], but we will not discuss this here. We have to defer detailed descriptions to the remainder of the paper. In simple words though, our results show that many known rounding results (in particular, randomized rounding and its derandomizations) still work when suitable hard constraints are added. For reasons of space, many proofs are omitted in the paper.

2 Definitions and Notation

For a number r write [r] := {n ∈ ℕ : n ≤ r}. For a matrix A ∈ ℝ^{m×n} let ‖A‖₁ := max_{j ∈ [n]} Σ_{i ∈ [m]} |a_ij| denote the operator norm induced by the L₁ norm. For matrices A and vectors x we write A_{I×J} and x_J to denote the restrictions (submatrices or subvectors) to the index sets I × J and J, respectively.
Throughout the paper let A ∈ ℝ^{m_A × n}, B ∈ ℝ^{m_B × n} and x ∈ [0,1]^n such that Bx ∈ ℤ^{m_B}. We call the problem of finding a y ∈ {0,1}^n such that Bx = By and ‖A(x − y)‖∞ is small a rounding problem with hard constraints.

Definition 2 (Linear Discrepancy with Hard Constraints). Let A ∈ ℝ^{m_A × n}, B ∈ ℝ^{m_B × n} and x ∈ [0,1]^n such that Bx ∈ ℤ^{m_B}. Put E(B, x) = {y ∈ {0,1}^n : Bx = By}. Then

  lindisc(A, B, x) := min_{y ∈ E(B,x)} ‖A(x − y)‖∞,
  lindisc(A, B) := max_{x ∈ [0,1]^n, Bx ∈ ℤ^{m_B}} lindisc(A, B, x).

If E(B, x) = ∅, we have lindisc(A, B, x) = ∞. Of course, the interesting case for our problem is that E(B, x) is not empty. Therefore, we will assume that B is totally unimodular. This is justified by the following corollary of the theorems of Hoffman and Kruskal [HK56] and Ghouila-Houri [GH62].

Theorem 1. The following properties are equivalent:
(i) B is totally unimodular.
(ii) For all x ∈ ℝ^n there is a y ∈ ℤ^n such that ‖x − y‖∞ < 1 and ‖B(x − y)‖∞ < 1.

3 Sparse Matrices

In this section, we extend the theorem of Beck and Fiala (cf. Section 1.1) to include hard constraints.

Theorem 2. Let B be totally unimodular. Then
a) lindisc(A, B) < (1 + m_B)‖A‖₁.
b) If ‖B‖₁ = 1, then lindisc(A, B) < 2‖A‖₁, independent of m_B.

Proof (Theorem 2). Set Δ := ‖A‖₁ and y = x. Successively we will round y to a 0,1-vector. Let δ > 0 be a parameter to be determined later. We repeat the following rounding process: Put J := {j ∈ [n] : y_j ∉ {0,1}}, and call these columns floating (the others fixed). Set I_A := {i ∈ [m_A] : Σ_{j∈J} |a_ij| > δ} and I_B := {i ∈ [m_B] : Σ_{j∈J} |b_ij| > 0}, and call these rows active (the others ignored). We will ensure that during the rounding process the following conditions are fulfilled (this is clear at the start, because y = x):
(i) (A(x − y))_{I_A} = 0,
(ii) (B(x − y))_{I_B} = 0,
(iii) y ∈ [0,1]^n.
If there is no floating column, that is, J = ∅, then our rounding process terminates with y ∈ {0,1}^n. Hence assume that there are still floating columns. We consider the system of equations

  A_{I_A × J} z_J = 0,  B_{I_B × J} z_J = 0,  z_{[n]\J} = 0.   (1)

We have Δ|J| ≥ Σ_{j∈J} Σ_{i∈I_A} |a_ij| = Σ_{i∈I_A} Σ_{j∈J} |a_ij| > |I_A| δ, hence |J| > |I_A| δ/Δ.

Case 1: I_A ≠ ∅. The system (1) consists of at most |I_A| + |I_B| + (n − |J|) equations. We will determine δ later in such a way that the system (1) is underdetermined. Then it has a non-trivial solution z. By the definition of J and by (iii), there is a λ > 0 such that at least one component of y + λz becomes fixed while still y + λz ∈ [0,1]^n. Note that y + λz instead of y also fulfills (i) and (ii). Set y := y + λz. Since (i) to (iii) are fulfilled for this new y and no previously fixed y_j becomes floating again (due to (iii)), we can continue this rounding process until all y_j ∈ {0,1}.

Case 2: I_A = ∅. Since B_{I_B × J} y_J is integral and B (and thus B_{I_B × J}) is totally unimodular, there is a z ∈ {0,1}^J such that B_{I_B × J} z = B_{I_B × J} y_J (cf. e.g. Theorem 1). Define ỹ ∈ {0,1}^n by ỹ_j = z_j for j ∈ J and ỹ_j = y_j otherwise. Note that this implies B(x − ỹ) = 0. Since ỹ ∈ {0,1}^n, we end the rounding process with result ỹ.

We show ‖A(x − y)‖∞ < δ for the resulting y. Let i ∈ [m_A]. Denote by y^(0) and J^(0) the values of y and J at the moment row i first became ignored. We have y_j^(0) = y_j for all j ∉ J^(0) and |y_j^(0) − y_j| < 1 for all j ∈ J^(0). Note that Σ_{j∈J^(0)} |a_ij| ≤ δ, since i is ignored. Thus

  |(A(x − y))_i| = |(A(x − y^(0)))_i + (A(y^(0) − y))_i| = |0 + Σ_{j∈J^(0)} a_ij (y_j^(0) − y_j)| < δ.

It remains to determine δ in such a way that the linear systems regarded are underdetermined.

Part a) For the general case, put δ = (1 + m_B)Δ. Since I_A ≠ ∅ in Case 1, |I_B| ≤ m_B and |J| > |I_A| δ/Δ, we have |I_A| + |I_B| + (n − |J|) < |I_A| + m_B + n − |I_A|(1 + m_B) ≤ n.

Part b) Assume now that ‖B‖₁ = 1, that is, the constraints encoded in B concern disjoint sets of variables.
Then |J| ≥ 2|I_B| holds throughout the rounding process: if a constraint from B is active, it depends on at least two variables not yet fixed, simply because B_{I_B × J} y_J is integral and B ∈ {−1,0,1}^{m_B × n}. Therefore, δ = 2Δ suffices. We then have |I_A| + |I_B| + (n − |J|) ≤ |I_A| + n − |J|/2 < n.

The dependence on m_B in Part a) is of the right order, as the first example in Section 5 shows. In particular, a bound like lindisc(A, B, x) ≤ (1 + ‖B‖₁)‖A‖₁, as could be conjectured from a) and b), does not hold. Let us also remark that the rounding algorithm of Karp et al. [KLR+87] admits similar extensions. We omit the details.
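The rounding process in the proof of Theorem 2 can be implemented with standard linear algebra. The sketch below covers only the classical Beck-Fiala situation (no hard-constraint rows B, i.e. m_B = 0, so Case 2 never occurs); all names and numerical tolerances are ours:

```python
import numpy as np

def beck_fiala_round(A, x, tol=1e-9):
    """Round x in [0,1]^n to {0,1}^n with ||A(x - y)||_inf < ||A||_1.

    Floating variables move inside the null space of the active rows
    (those whose mass on floating columns still exceeds delta = ||A||_1),
    so active constraints stay exactly satisfied until they are ignored.
    """
    A = np.asarray(A, dtype=float)
    y = np.asarray(x, dtype=float).copy()
    delta = np.abs(A).sum(axis=0).max()             # ||A||_1, maximum column sum
    while True:
        J = np.where((y > tol) & (y < 1 - tol))[0]  # floating columns
        if len(J) == 0:
            break
        I = np.where(np.abs(A[:, J]).sum(axis=1) > delta)[0]  # active rows
        M = A[np.ix_(I, J)]
        if M.size:
            # Counting argument: |I| < |J|, so M has a nontrivial null vector.
            _, s, Vh = np.linalg.svd(M)             # full Vh of shape (|J|, |J|)
            rank = int((s > 1e-12).sum())
        else:
            Vh, rank = np.eye(len(J)), 0
        z = Vh[rank]                                # unit null-space direction
        # Largest step keeping y in [0,1]^n; at least one component gets fixed.
        steps = np.full(len(J), np.inf)
        steps[z > tol] = (1 - y[J][z > tol]) / z[z > tol]
        steps[z < -tol] = -y[J][z < -tol] / z[z < -tol]
        y[J] += steps.min() * z
    return (y > 0.5).astype(int)
```

Adding the hard-constraint rows B would amount to appending the active B-rows to M, and to handling Case 2 (I_A empty) via an integral solution guaranteed by total unimodularity.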
4 Randomized Rounding

In this section, we modify the approach of randomized rounding to respect hard constraints. The particular problem is to design a random experiment that at the same time respects the hard constraints and generates independent-looking randomized roundings (satisfying Chernoff bounds, for example). Our random experiment is different from the one in [Sri01], which enables us to derandomize it. However, it also satisfies the main properties (A1) to (A3) of his approach. To ease reading, we describe our result in its simplest version in the following subsection and sketch possible extensions in the one after it.

4.1 Randomized Construction and Derandomization

In this subsection, we only treat the case that B ∈ {0,1}^{m_B × n} and ‖B‖₁ = 1. Hence, we only regard so-called cardinality constraints on disjoint sets of variables.

Randomized construction: Assume first that all x_j are in {0, 1/2, 1}. Since Σ_{j∈[n]} b_ij x_j ∈ ℤ for all i ∈ [m_B] by assumption, we conclude that all E_i := {j ∈ [n] : x_j = 1/2, b_ij = 1} have even cardinality. Now partitioning each E_i into pairs¹ (j₁, j₂) and independently flipping a coin to decide whether (y_{j₁}, y_{j₂}) = (1, 0) or (y_{j₁}, y_{j₂}) = (0, 1) solves the problem in a randomized way (variables x_j with j contained in no E_i can be rounded independently at random).

For x_j having finite binary expansion, we iterate this procedure digit by digit: If x has binary length ℓ, write x = x̃ + 2^{−ℓ+1} x̄ with x̄ ∈ {0, 1/2}^n and x̃ ∈ [0,1]^n having binary length ℓ − 1. Compute ȳ as a rounding of x̄ as above. Put x := x̃ + 2^{−ℓ+1} ȳ. Note that x now has binary length ℓ − 1. Repeat this procedure until a binary vector is obtained. For each x having finite binary expansion, this defines a probability distribution D(B, x) on {0,1}^n.

Theorem 3. Let y = (y₁, ..., y_n) be a sample from D(B, x). Then:
(A1) y is a randomized rounding of x: for all j ∈ [n], Pr(y_j = 1) = x_j.
(A2) D(B, x) is concentrated on E(B, x): Pr(By = Bx) = 1.
(A3) For all S ⊆ [n] and b ∈ {0,1}, Pr(∀ j ∈ S: y_j = b) ≤ Π_{j∈S} Pr(y_j = b).

Proof. (A1): Let j ∈ [n]. If x_j ∈ {0,1}, the claim is trivial. Let x_j therefore have binary length ℓ ≥ 1. Let x′_j be the outcome of the first random experiment (i.e., x′_j is a random variable having binary length at most ℓ − 1). By induction,

  Pr(y_j = 1) = Σ_{ε∈{−1,1}} Pr(x′_j = x_j + ε2^{−ℓ}) Pr(y_j = 1 | x′_j = x_j + ε2^{−ℓ}) = Σ_{ε∈{−1,1}} (1/2)(x_j + ε2^{−ℓ}) = x_j.

¹ As we will see, the particular choice of this partition is completely irrelevant. Assume therefore that we have fixed some deterministic way to choose it (e.g., greedily in the natural order of [n]).
(A2): By the definition of D(B, x), in each rounding step the sum of the values with index in E_i is unchanged for all i ∈ [m_B]. Hence (By)_i = Σ_{j: b_ij = 1} y_j = Σ_{j: b_ij = 1} x_j = (Bx)_i.

(A3): Let S ⊆ [n]. We show the claim for b = 1. Again, if x ∈ {0,1}^n, there is nothing to show. Let x therefore have binary length ℓ ≥ 1. Let x′ be the outcome of the first rounding step. This is a random variable that is uniformly distributed on the set R(x) of possible outcomes (which is determined by x and the way we choose the partition into pairs). Note that for each z ∈ R(x), also z̄ := 2x − z ∈ R(x). Note also that Π_{j∈S} z_j + Π_{j∈S} z̄_j ≤ 2 Π_{j∈S} x_j. Hence by induction,

  Pr(∀ j ∈ S: y_j = 1) = Σ_{z∈R(x)} Pr(x′ = z) Pr(∀ j ∈ S: y_j = 1 | x′ = z)
    ≤ (1/|R(x)|) Σ_{z∈R(x)} Π_{j∈S} z_j
    = (1/|R(x)|) (1/2) Σ_{z∈R(x)} (Π_{j∈S} z_j + Π_{j∈S} z̄_j)
    ≤ (1/|R(x)|) (1/2) |R(x)| · 2 Π_{j∈S} x_j = Π_{j∈S} x_j = Π_{j∈S} Pr(y_j = 1).

As shown in [PS97], (A3) implies the usual Chernoff-Hoeffding bounds on large deviations. We build on the following theorem of Raghavan [Rag88], which is a derandomization of the (independent) randomized rounding technique.

Theorem 4 (Raghavan (1988)). For any A ∈ {0,1}^{m×n} and x ∈ [0,1]^n, a y ∈ {0,1}^n can be computed in O(mn) time such that ‖A(x − y)‖∞ ≤ (e − 1)√(s ln(2m)), where s = max{‖Ax‖∞, ln(2m)}.

Noting that the pairing trick in a single iteration allows us to write Ay in the form "matrix times vector of independent random variables", we prove the following result.

Theorem 5. Let A ∈ {0,1}^{m_A × n} and B ∈ {0,1}^{m_B × n} such that ‖B‖₁ = 1.
a) Let x ∈ [0,1]^n such that Bx ∈ ℤ^{m_B}. Then for all ℓ ∈ ℕ, a binary vector y such that Bx = By and ‖A(x − y)‖∞ ≤ 52 √(max{‖Ax‖∞, ln(4m_A)} ln(4m_A)) + n2^{−ℓ} can be computed in time O(mnℓ).
b) lindisc(A, B) ≤ 5 √(n ln(4m_A)).

4.2 Extensions

(1) We always assumed that Bx is integral. A trivial reduction (by adding dummy variables) extends our results to arbitrary Bx. We then have:
(A2+) For all i ∈ [m_B], (By)_i is a randomized rounding of (Bx)_i. In particular, (By)_i ∈ {⌊(Bx)_i⌋, ⌈(Bx)_i⌉} with probability one.
(2) Raghavan [Rag88] also obtains the bound ‖A(x − y)‖∞ ≤ e ln(2m) / ln(e ln(2m) / ‖Ax‖∞) for the case that ‖Ax‖∞ ≤ ln(2m). This is strongest for constant ‖Ax‖∞, where it yields a bound of O(log m / log log m) instead of our bound of O(log m). Since the typical application of randomized rounding seems to be one where ‖Ax‖∞ is large, we do not try to improve our result in this direction.

(3) One subtle aspect in derandomizing Chernoff bounds lies in the computation of the involved pessimistic estimators. There is no problem if one works in a model that allows exact computations with real numbers. In the more realistic RAM model, things are more complicated. Raghavan's derandomization then only works for 0,1-matrices A. Srivastav and Stangier [SS96] gave a solution that works for matrices having arbitrary entries in [0,1] ∩ ℚ, though it has a higher time complexity of O(mn² log(mn)). Here again the simplicity of our approach pays off. Since we only need to derandomize Chernoff-type large deviation bounds, we can plug in any algorithmic version of the underlying large deviation inequality.

(4) If B ∈ {−1,0,1}^{m_B × n}, one can modify the definition of ỹ in the proof above in such a way that B(x − ỹ) = 0. An extension to further values, however, is not possible, as we might run into the problem that no integral solution exists at all. For example, the single constraint Σ_{i∈[3]} (4/5) x_i = 2 is satisfied by x_i = 5/6, but clearly no 0,1-solution exists.

(5) The constant of 52 is not the full truth. Things become much better if ‖Ax‖∞ ≥ ln(4m_A). In this case, the constant reduces considerably.

4.3 Applications

In this subsection, we sketch two applications. Note that (and this is one advantage of the results presented above) our results, in simple words, just state that randomized rounding and the corresponding derandomizations work as before even if a few hard constraints are added to the problem.
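For concreteness, the digit-by-digit pairing construction of Section 4.1 is short enough to state in full. This is a sketch in our own notation (the explicit bit-length parameter and all names are ours); groups are the supports of the disjoint cardinality constraints, and pairs are formed greedily in the natural order, as in the footnote above:

```python
import numpy as np

def pair_round(x, groups, num_bits, rng):
    """Sample from D(B, x) for disjoint cardinality constraints.

    x        : vector in [0,1]^n, each entry a multiple of 2**-num_bits
    groups   : disjoint index lists; sum(x[j] for j in group) must be
               integral and is preserved exactly (hard constraint By = Bx)
    Rounds the least significant digit first: within each group, entries
    whose current bit is 1 are paired and a fair coin decides which of the
    pair receives the upward carry; ungrouped entries flip their own coin.
    """
    y = np.asarray(x, dtype=float).copy()
    grouped = {j for idx in groups for j in idx}
    for bit in range(num_bits, 0, -1):
        unit = 2.0 ** (-bit)
        odd = lambda j: int(round(y[j] / unit)) % 2 == 1
        for idx in groups:
            E = [j for j in idx if odd(j)]       # integral group sum => |E| even
            for j1, j2 in zip(E[::2], E[1::2]):  # greedy pairing in natural order
                if rng.random() < 0.5:
                    j1, j2 = j2, j1
                y[j1] += unit                    # carry up ...
                y[j2] -= unit                    # ... and down: sum unchanged
        for j in range(len(y)):
            if j not in grouped and odd(j):
                y[j] += unit if rng.random() < 0.5 else -unit
    return y.astype(int)
```

Each digit step changes an entry by at most the current unit and never changes a group sum, and property (A1), Pr(y_j = 1) = x_j, follows by the induction in the proof of Theorem 3.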
Such robustness seems particularly useful for real-world applications, which usually lack the plainness of the problems regarded in theoretical work. We start with derandomizing Srinivasan's [Sri01] solution for the integral splittable flow problem (cf. Subsections 1.2 and 1.3). Note that for most of the other randomized results in [Sri01], deterministic algorithms of the same quality had already been given earlier by Ageev and Sviridenko [AS]. The integral splittable flow problem extends the unit-flow version of Raghavan and Thompson [RT87]. From the problem formulation, it is clear that Theorems 3 and 5 can be applied: the hard constraints depend on disjoint sets of variables, namely the paths obtained from applying the path-stripping procedure to the flow satisfying a particular demand. Analogously to the result of Raghavan
and Thompson for unit flows, and derandomizing Srinivasan [Sri01] (with larger constants), we obtain the following.

Theorem 6. A solution of the relaxation with objective value W ≥ ln(4|E|) can efficiently be transformed into an integer solution with objective value W + 52 √(W ln(4m_A)).

As a second example, let us consider the packing problem max c^T x such that Ax ≤ k𝟙, x ∈ {0,1}^n. We may view this as a scheduling problem: we want to select a set of jobs maximizing our profit in such a way that all m machines are busy for at most k time units. Using an additional scaling trick, Raghavan [Rag88] showed that for k = Ω(log m), approximations with small additive error exist.

In a real-world scenario, additional constraints are often present (or show up while a first solution is being analyzed). Here, one may assume that different parties have a particular interest in certain jobs being scheduled. In this case, we have disjoint sets F₁, ..., F_l of jobs favored by party i ∈ [l], and a fairness condition might impose that from each set F_i, at least a given number r of jobs has to be scheduled. Note that r can (and usually will) be small compared to k. Hence large deviation bounds will not be applicable. However, the following easily solves the problem:
(i) Solve the relaxation with the additional constraints Σ_{j∈F_i} x_j ≥ r, i ∈ [l]. Denote the solution by x̂.
(ii) Apply randomized rounding or its derandomization to x̂ with the additional hard constraints that Σ_{j∈F_i} y_j is a randomized rounding of Σ_{j∈F_i} x̂_j for all i ∈ [l] (cf. the extensions subsection for a remark on these dependencies).

We thus obtain an integer solution of similar quality as Raghavan's that also satisfies our fairness requirements.

4.4 Comparison to the Approach of Srinivasan

In Srinivasan [Sri01], randomized roundings satisfying hard constraints as in Theorem 3 were generated.
His approach is to repeatedly regard two variables only, fixing one to an integer value and propagating the other with an updated probability distribution. This sequential rounding approach seems to be much harder to work with. We currently do not see how this algorithm can be derandomized. Also, we feel that proving the properties (A1) to (A3) must be quite complicated (the proofs are omitted in [Sri01]). Note that the complexity of both approaches is very similar. Working with real numbers in [Sri01] hides part of the complexity that is present in the bit-wise model used in this paper.

5 Examples and Lower Bounds

The following simple example shows that hard constraints may increase the rounding error significantly. It also shows that the dependence on m_B in part a) of Theorem 2 is of the right order.
Example 1: Let n be a multiple of 4. Let A = ( ··· ) ∈ ℝ^{1×n}, m_B = n − 1, and B ∈ {0,1}^{m_B × n} such that b_ij = 1 if and only if j ∈ {i, i+1}. Let x = ( ··· ) ∈ [0,1]^n. Then

  lindisc(A, x) = 0,
  lindisc(A, x′) ≤ 1/2 for all x′ ∈ [0,1]^n,
  lindisc(A, B, x) = n/4 (= (1/4)(1 + m_B)‖A‖₁).

Example 2: The linear discrepancy problem for hypergraphs is to compute, for a given mixed coloring (each vertex receives a weighted mixture of colors), a pure coloring such that each hyperedge in total contains (roughly) the same amount of each color with respect to both colorings.

Definition 3 (Linear Discrepancy Problem for Hypergraphs). Let c ∈ ℕ, c ≥ 2. Let H = (V, E) be a hypergraph. A mapping p: V → [0,1]^c such that Σ_{d∈[c]} p(v)_d = 1 for all v ∈ V is called a mixed coloring of H. It is called a pure coloring if for all v ∈ V there is a (unique) d ∈ [c] such that p(v)_d = 1. In this case, we say that v has color d and write p̂(v) = d. The discrepancy of two mixed colorings p, q is

  disc(H, p, q) = max_{d∈[c]} max_{E∈E} |Σ_{v∈E} p(v)_d − Σ_{v∈E} q(v)_d|.

The objective in the linear discrepancy problem for hypergraphs is to find, for a given hypergraph H and mixed coloring p, a pure coloring q such that disc(H, p, q) is small. Put lindisc(H, c) := max_p min_q disc(H, p, q).

A hypergraph is called totally unimodular if its incidence matrix is totally unimodular. It is well known that totally unimodular hypergraphs behave nicely in linear discrepancy problems.

Theorem 7. Let H = (V, E) be a totally unimodular hypergraph.
a) De Werra [dW71]: For all numbers c of colors, the uniform mixed coloring (1/c)𝟙_V admits a pure coloring at discrepancy less than 1, i.e., min_q disc(H, (1/c)𝟙_V, q) < 1.
b) Hoffman, Kruskal [HK56]: The linear discrepancy lindisc(H, 2) of H in 2 colors is less than 1.

The constant in b) was recently [Doe01] improved to the sharp bound of |V|/(|V| + 1). Contrary to what one might expect, a combination of a) and b) is not true:

Theorem 8. For all c ≥ 3 there is a totally unimodular hypergraph H such that lindisc(H, c) ≥ ln(c + 1) − 1.
In consequence, the bound lindisc(H, c) < 1 for totally unimodular hypergraphs holds only in the case c = 2.

References

[AFK02] S. Arora, A. Frieze, and H. Kaplan. A new rounding procedure for the assignment problem with applications to dense graph arrangement problems. Math. Program., 92:1–36, 2002.
[AS] A. Ageev and M. Sviridenko. Pipage rounding: a new method of constructing algorithms with proven performance guarantee. Journal of Combinatorial Optimization. To appear. Also available from the authors' homepages.
[BF81] J. Beck and T. Fiala. "Integer-making" theorems. Discrete Applied Mathematics, 3:1–8, 1981.
[Doe01] B. Doerr. Lattice approximation and linear discrepancy of totally unimodular matrices. In Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2001.
[DS03] B. Doerr and A. Srivastav. Multicolour discrepancies. Combinatorics, Probability and Computing, 12, 2003.
[dW71] D. de Werra. Equitable colorations of graphs. Rev. Française Informat. Recherche Opérationnelle, 5(Ser. R-3):3–8, 1971.
[GH62] A. Ghouila-Houri. Caractérisation des matrices totalement unimodulaires. C. R. Acad. Sci. Paris, 254, 1962.
[GKPS02] R. Gandhi, S. Khuller, S. Parthasarathy, and A. Srinivasan. Dependent rounding in bipartite graphs. In Proc. IEEE Symposium on Foundations of Computer Science (FOCS), 2002.
[GKR+99] V. Guruswami, S. Khanna, R. Rajaraman, B. Shepherd, and M. Yannakakis. Near-optimal hardness results and approximation algorithms for edge-disjoint paths and related problems. In Annual ACM Symposium on Theory of Computing (STOC), pages 19–28, 1999.
[HK56] A. J. Hoffman and J. B. Kruskal. Integral boundary points of convex polyhedra. In H. W. Kuhn and A. W. Tucker, editors, Linear Inequalities and Related Systems, 1956.
[KLR+87] R. M. Karp, F. T. Leighton, R. L. Rivest, C. D. Thompson, U. V. Vazirani, and V. V. Vazirani. Global wire routing in two-dimensional arrays. Algorithmica, 2, 1987.
[PS97] A. Panconesi and A. Srinivasan. Randomized distributed edge coloring via an extension of the Chernoff-Hoeffding bounds. SIAM J. Comput., 26, 1997.
[Rag88] P. Raghavan. Probabilistic construction of deterministic algorithms: Approximating packing integer programs. J. Comput. Syst. Sci., 37, 1988.
[RT87] P. Raghavan and C. D. Thompson.
Randomized rounding: A technique for provably good algorithms and algorithmic proofs. Combinatorica, 7, 1987.
[RT91] P. Raghavan and C. D. Thompson. Multiterminal global routing: a deterministic approximation scheme. Algorithmica, 6:73–82, 1991.
[Spe87] J. Spencer. Ten Lectures on the Probabilistic Method. SIAM, 1987.
[Sri01] A. Srinivasan. Distributions on level-sets with applications to approximation algorithms. In Proc. 42nd Ann. IEEE Symp. on Foundations of Computer Science (FOCS), 2001.
[SS96] A. Srivastav and P. Stangier. Algorithmic Chernoff-Hoeffding inequalities in integer programming. Random Structures & Algorithms, 8:27–58, 1996.
CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source
More informationThe Maximum Flow Problem with Disjunctive Constraints
The Maximum Flow Problem with Disjunctive Constraints Ulrich Pferschy Joachim Schauer Abstract We study the maximum flow problem subject to binary disjunctive constraints in a directed graph: A negative
More informationAn 0.5-Approximation Algorithm for MAX DICUT with Given Sizes of Parts
An 0.5-Approximation Algorithm for MAX DICUT with Given Sizes of Parts Alexander Ageev Refael Hassin Maxim Sviridenko Abstract Given a directed graph G and an edge weight function w : E(G) R +, themaximumdirectedcutproblem(max
More informationBasic Research in Computer Science BRICS RS Ageev & Sviridenko: An Approximation Algorithm for Hypergraph Max k-cut
BRICS Basic Research in Computer Science BRICS RS-99-49 Ageev & Sviridenko: An Approximation Algorithm for Hypergraph Max k-cut An Approximation Algorithm for Hypergraph Max k-cut with Given Sizes of Parts
More informationMachine Minimization for Scheduling Jobs with Interval Constraints
Machine Minimization for Scheduling Jobs with Interval Constraints Julia Chuzhoy Sudipto Guha Sanjeev Khanna Joseph (Seffi) Naor Abstract The problem of scheduling jobs with interval constraints is a well-studied
More information11.1 Set Cover ILP formulation of set cover Deterministic rounding
CS787: Advanced Algorithms Lecture 11: Randomized Rounding, Concentration Bounds In this lecture we will see some more examples of approximation algorithms based on LP relaxations. This time we will use
More informationApproximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko
Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru
More informationTHE METHOD OF CONDITIONAL PROBABILITIES: DERANDOMIZING THE PROBABILISTIC METHOD
THE METHOD OF CONDITIONAL PROBABILITIES: DERANDOMIZING THE PROBABILISTIC METHOD JAMES ZHOU Abstract. We describe the probabilistic method as a nonconstructive way of proving the existence of combinatorial
More information1 The linear algebra of linear programs (March 15 and 22, 2015)
1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real
More informationCS261: A Second Course in Algorithms Lecture #18: Five Essential Tools for the Analysis of Randomized Algorithms
CS261: A Second Course in Algorithms Lecture #18: Five Essential Tools for the Analysis of Randomized Algorithms Tim Roughgarden March 3, 2016 1 Preamble In CS109 and CS161, you learned some tricks of
More informationA robust APTAS for the classical bin packing problem
A robust APTAS for the classical bin packing problem Leah Epstein Asaf Levin Abstract Bin packing is a well studied problem which has many applications. In this paper we design a robust APTAS for the problem.
More informationLecture 5: Probabilistic tools and Applications II
T-79.7003: Graphs and Networks Fall 2013 Lecture 5: Probabilistic tools and Applications II Lecturer: Charalampos E. Tsourakakis Oct. 11, 2013 5.1 Overview In the first part of today s lecture we will
More informationBalanced Partitions of Vector Sequences
Balanced Partitions of Vector Sequences Imre Bárány Benjamin Doerr December 20, 2004 Abstract Let d,r N and be any norm on R d. Let B denote the unit ball with respect to this norm. We show that any sequence
More informationOut-colourings of Digraphs
Out-colourings of Digraphs N. Alon J. Bang-Jensen S. Bessy July 13, 2017 Abstract We study vertex colourings of digraphs so that no out-neighbourhood is monochromatic and call such a colouring an out-colouring.
More information3.3 Easy ILP problems and totally unimodular matrices
3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =
More informationApproximating maximum satisfiable subsystems of linear equations of bounded width
Approximating maximum satisfiable subsystems of linear equations of bounded width Zeev Nutov The Open University of Israel Daniel Reichman The Open University of Israel Abstract We consider the problem
More informationOn a hypergraph matching problem
On a hypergraph matching problem Noga Alon Raphael Yuster Abstract Let H = (V, E) be an r-uniform hypergraph and let F 2 V. A matching M of H is (α, F)- perfect if for each F F, at least α F vertices of
More informationRandomized Pipage Rounding for Matroid Polytopes and Applications
Randomized Pipage Rounding for Matroid Polytopes and Applications Chandra Chekuri Jan Vondrák September 23, 2009 Abstract We present concentration bounds for linear functions of random variables arising
More informationColoring Graphs to Minimize Load
Coloring Graphs to Minimize Load - Extended Abstract - Nitin Ahuja Andreas Baltz Benjamin Doerr Aleš Přívětivý Anand Srivastav Abstract Given a graph G = (V, E) with n vertices, m edges and maximum vertex
More informationImproved Parallel Approximation of a Class of Integer Programming Problems
Improved Parallel Approximation of a Class of Integer Programming Problems Noga Alon 1 and Aravind Srinivasan 2 1 School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences,
More informationBudgeted Allocations in the Full-Information Setting
Budgeted Allocations in the Full-Information Setting Aravind Srinivasan 1 Dept. of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. Abstract.
More informationA Polynomial-Time Algorithm for Pliable Index Coding
1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n
More informationCPSC 536N: Randomized Algorithms Term 2. Lecture 2
CPSC 536N: Randomized Algorithms 2014-15 Term 2 Prof. Nick Harvey Lecture 2 University of British Columbia In this lecture we continue our introduction to randomized algorithms by discussing the Max Cut
More informationSanta Claus Schedules Jobs on Unrelated Machines
Santa Claus Schedules Jobs on Unrelated Machines Ola Svensson (osven@kth.se) Royal Institute of Technology - KTH Stockholm, Sweden March 22, 2011 arxiv:1011.1168v2 [cs.ds] 21 Mar 2011 Abstract One of the
More informationprinceton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming
princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Matt Weinberg Scribe: Sanjeev Arora One of the running themes in this course is
More informationApproximability of Dense Instances of Nearest Codeword Problem
Approximability of Dense Instances of Nearest Codeword Problem Cristina Bazgan 1, W. Fernandez de la Vega 2, Marek Karpinski 3 1 Université Paris Dauphine, LAMSADE, 75016 Paris, France, bazgan@lamsade.dauphine.fr
More informationA packing integer program arising in two-layer network design
A packing integer program arising in two-layer network design Christian Raack Arie M.C.A Koster Zuse Institute Berlin Takustr. 7, D-14195 Berlin Centre for Discrete Mathematics and its Applications (DIMAP)
More informationLecture 24: April 12
CS271 Randomness & Computation Spring 2018 Instructor: Alistair Sinclair Lecture 24: April 12 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They
More informationUnsplittable Flow in Paths and Trees and Column-Restricted Packing Integer Programs
Unsplittable Flow in Paths and Trees and Column-Restricted Packing Integer Programs Chandra Chekuri, Alina Ene, and Nitish Korula Dept. of Computer Science, University of Illinois, Urbana, IL 61801. {chekuri,
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationAn Improved Approximation Algorithm for Requirement Cut
An Improved Approximation Algorithm for Requirement Cut Anupam Gupta Viswanath Nagarajan R. Ravi Abstract This note presents improved approximation guarantees for the requirement cut problem: given an
More informationCS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash
CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash Equilibrium Price of Stability Coping With NP-Hardness
More informationInterference in Cellular Networks: The Minimum Membership Set Cover Problem
Interference in Cellular Networks: The Minimum Membership Set Cover Problem Fabian Kuhn 1, Pascal von Rickenbach 1, Roger Wattenhofer 1, Emo Welzl 2, and Aaron Zollinger 1 kuhn@tikeeethzch, pascalv@tikeeethzch,
More informationLabel Cover Algorithms via the Log-Density Threshold
Label Cover Algorithms via the Log-Density Threshold Jimmy Wu jimmyjwu@stanford.edu June 13, 2017 1 Introduction Since the discovery of the PCP Theorem and its implications for approximation algorithms,
More informationCMPUT 675: Approximation Algorithms Fall 2014
CMPUT 675: Approximation Algorithms Fall 204 Lecture 25 (Nov 3 & 5): Group Steiner Tree Lecturer: Zachary Friggstad Scribe: Zachary Friggstad 25. Group Steiner Tree In this problem, we are given a graph
More informationTesting Equality in Communication Graphs
Electronic Colloquium on Computational Complexity, Report No. 86 (2016) Testing Equality in Communication Graphs Noga Alon Klim Efremenko Benny Sudakov Abstract Let G = (V, E) be a connected undirected
More informationLecture 7 Limits on inapproximability
Tel Aviv University, Fall 004 Lattices in Computer Science Lecture 7 Limits on inapproximability Lecturer: Oded Regev Scribe: Michael Khanevsky Let us recall the promise problem GapCVP γ. DEFINITION 1
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationCS 6820 Fall 2014 Lectures, October 3-20, 2014
Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given
More informationTight Hardness Results for Minimizing Discrepancy
Tight Hardness Results for Minimizing Discrepancy Moses Charikar Alantha Newman Aleksandar Nikolov Abstract In the Discrepancy problem, we are given M sets {S 1,..., S M } on N elements. Our goal is to
More informationCSE525: Randomized Algorithms and Probabilistic Analysis April 2, Lecture 1
CSE525: Randomized Algorithms and Probabilistic Analysis April 2, 2013 Lecture 1 Lecturer: Anna Karlin Scribe: Sonya Alexandrova and Eric Lei 1 Introduction The main theme of this class is randomized algorithms.
More informationDependent Randomized Rounding for Matroid Polytopes and Applications
Dependent Randomized Rounding for Matroid Polytopes and Applications Chandra Chekuri Jan Vondrák Rico Zenklusen November 4, 2009 Abstract Motivated by several applications, we consider the problem of randomly
More informationEdge-disjoint induced subgraphs with given minimum degree
Edge-disjoint induced subgraphs with given minimum degree Raphael Yuster Department of Mathematics University of Haifa Haifa 31905, Israel raphy@math.haifa.ac.il Submitted: Nov 9, 01; Accepted: Feb 5,
More informationThe discrepancy of permutation families
The discrepancy of permutation families J. H. Spencer A. Srinivasan P. Tetali Abstract In this note, we show that the discrepancy of any family of l permutations of [n] = {1, 2,..., n} is O( l log n),
More informationGRAPH PARTITIONING USING SINGLE COMMODITY FLOWS [KRV 06] 1. Preliminaries
GRAPH PARTITIONING USING SINGLE COMMODITY FLOWS [KRV 06] notes by Petar MAYMOUNKOV Theme The algorithmic problem of finding a sparsest cut is related to the combinatorial problem of building expander graphs
More informationPartitions and Covers
University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Dong Wang Date: January 2, 2012 LECTURE 4 Partitions and Covers In previous lectures, we saw
More informationAn Elementary Construction of Constant-Degree Expanders
An Elementary Construction of Constant-Degree Expanders Noga Alon Oded Schwartz Asaf Shapira Abstract We describe a short and easy to analyze construction of constant-degree expanders. The construction
More informationWeak Graph Colorings: Distributed Algorithms and Applications
Weak Graph Colorings: Distributed Algorithms and Applications Fabian Kuhn Computer Science and Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 0139, USA fkuhn@csail.mit.edu
More informationComplexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler
Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard
More informationOutline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.
Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität
More information12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria
12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos
More informationImproved Bounds for Flow Shop Scheduling
Improved Bounds for Flow Shop Scheduling Monaldo Mastrolilli and Ola Svensson IDSIA - Switzerland. {monaldo,ola}@idsia.ch Abstract. We resolve an open question raised by Feige & Scheideler by showing that
More informationA simple LP relaxation for the Asymmetric Traveling Salesman Problem
A simple LP relaxation for the Asymmetric Traveling Salesman Problem Thành Nguyen Cornell University, Center for Applies Mathematics 657 Rhodes Hall, Ithaca, NY, 14853,USA thanh@cs.cornell.edu Abstract.
More informationTheoretical Computer Science
Theoretical Computer Science 411 (010) 417 44 Contents lists available at ScienceDirect Theoretical Computer Science journal homepage: wwwelseviercom/locate/tcs Resource allocation with time intervals
More informationLecture 2: January 18
CS271 Randomness & Computation Spring 2018 Instructor: Alistair Sinclair Lecture 2: January 18 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They
More informationChapter 1. Comparison-Sorting and Selecting in. Totally Monotone Matrices. totally monotone matrices can be found in [4], [5], [9],
Chapter 1 Comparison-Sorting and Selecting in Totally Monotone Matrices Noga Alon Yossi Azar y Abstract An mn matrix A is called totally monotone if for all i 1 < i 2 and j 1 < j 2, A[i 1; j 1] > A[i 1;
More informationACO Comprehensive Exam March 17 and 18, Computability, Complexity and Algorithms
1. Computability, Complexity and Algorithms (a) Let G(V, E) be an undirected unweighted graph. Let C V be a vertex cover of G. Argue that V \ C is an independent set of G. (b) Minimum cardinality vertex
More informationOn the Complexity of Budgeted Maximum Path Coverage on Trees
On the Complexity of Budgeted Maximum Path Coverage on Trees H.-C. Wirth An instance of the budgeted maximum coverage problem is given by a set of weighted ground elements and a cost weighted family of
More informationCombinatorial Algorithms for the Unsplittable Flow Problem
Combinatorial Algorithms for the Unsplittable Flow Problem Yossi Azar Oded Regev January 1, 25 Abstract We provide combinatorial algorithms for the unsplittable flow problem (UFP) that either match or
More informationProperly colored Hamilton cycles in edge colored complete graphs
Properly colored Hamilton cycles in edge colored complete graphs N. Alon G. Gutin Dedicated to the memory of Paul Erdős Abstract It is shown that for every ɛ > 0 and n > n 0 (ɛ), any complete graph K on
More informationApproximability of Packing Disjoint Cycles
Approximability of Packing Disjoint Cycles Zachary Friggstad Mohammad R. Salavatipour Department of Computing Science University of Alberta Edmonton, Alberta T6G 2E8, Canada zacharyf,mreza@cs.ualberta.ca
More informationEquitable and semi-equitable coloring of cubic graphs and its application in batch scheduling
Equitable and semi-equitable coloring of cubic graphs and its application in batch scheduling Hanna Furmańczyk, Marek Kubale Abstract In the paper we consider the problems of equitable and semi-equitable
More informationFinite Induced Graph Ramsey Theory: On Partitions of Subgraphs
inite Induced Graph Ramsey Theory: On Partitions of Subgraphs David S. Gunderson and Vojtěch Rödl Emory University, Atlanta GA 30322. Norbert W. Sauer University of Calgary, Calgary, Alberta, Canada T2N
More informationBicolorings and Equitable Bicolorings of Matrices
Bicolorings and Equitable Bicolorings of Matrices Michele Conforti Gérard Cornuéjols Giacomo Zambelli dedicated to Manfred Padberg Abstract Two classical theorems of Ghouila-Houri and Berge characterize
More information3.7 Cutting plane methods
3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x
More informationMAL TSEV CONSTRAINTS MADE SIMPLE
Electronic Colloquium on Computational Complexity, Report No. 97 (2004) MAL TSEV CONSTRAINTS MADE SIMPLE Departament de Tecnologia, Universitat Pompeu Fabra Estació de França, Passeig de la circumval.lació,
More informationMulticriteria approximation through decomposition
Multicriteria approximation through decomposition Carl Burch Sven Krumke y Madhav Marathe z Cynthia Phillips x Eric Sundberg { Abstract We propose a general technique called solution decomposition to devise
More informationPartitioning Metric Spaces
Partitioning Metric Spaces Computational and Metric Geometry Instructor: Yury Makarychev 1 Multiway Cut Problem 1.1 Preliminaries Definition 1.1. We are given a graph G = (V, E) and a set of terminals
More informationLecture 23 Branch-and-Bound Algorithm. November 3, 2009
Branch-and-Bound Algorithm November 3, 2009 Outline Lecture 23 Modeling aspect: Either-Or requirement Special ILPs: Totally unimodular matrices Branch-and-Bound Algorithm Underlying idea Terminology Formal
More informationApproximation algorithms for cycle packing problems
Approximation algorithms for cycle packing problems Michael Krivelevich Zeev Nutov Raphael Yuster Abstract The cycle packing number ν c (G) of a graph G is the maximum number of pairwise edgedisjoint cycles
More informationConflict-Free Colorings of Rectangles Ranges
Conflict-Free Colorings of Rectangles Ranges Khaled Elbassioni Nabil H. Mustafa Max-Planck-Institut für Informatik, Saarbrücken, Germany felbassio, nmustafag@mpi-sb.mpg.de Abstract. Given the range space
More informationLecture 9: Matrix approximation continued
0368-348-01-Algorithms in Data Mining Fall 013 Lecturer: Edo Liberty Lecture 9: Matrix approximation continued Warning: This note may contain typos and other inaccuracies which are usually discussed during
More informationOn the complexity of approximate multivariate integration
On the complexity of approximate multivariate integration Ioannis Koutis Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 USA ioannis.koutis@cs.cmu.edu January 11, 2005 Abstract
More informationList coloring hypergraphs
List coloring hypergraphs Penny Haxell Jacques Verstraete Department of Combinatorics and Optimization University of Waterloo Waterloo, Ontario, Canada pehaxell@uwaterloo.ca Department of Mathematics University
More informationPARTITIONING PROBLEMS IN DENSE HYPERGRAPHS
PARTITIONING PROBLEMS IN DENSE HYPERGRAPHS A. CZYGRINOW Abstract. We study the general partitioning problem and the discrepancy problem in dense hypergraphs. Using the regularity lemma [16] and its algorithmic
More informationA better approximation ratio for the Vertex Cover problem
A better approximation ratio for the Vertex Cover problem George Karakostas Dept. of Computing and Software McMaster University October 5, 004 Abstract We reduce the approximation factor for Vertex Cover
More informationReachability-based matroid-restricted packing of arborescences
Egerváry Research Group on Combinatorial Optimization Technical reports TR-2016-19. Published by the Egerváry Research Group, Pázmány P. sétány 1/C, H 1117, Budapest, Hungary. Web site: www.cs.elte.hu/egres.
More informationTight Hardness Results for Minimizing Discrepancy
Tight Hardness Results for Minimizing Discrepancy Moses Charikar Alantha Newman Aleksandar Nikolov January 13, 2011 Abstract In the Discrepancy problem, we are given M sets {S 1,..., S M } on N elements.
More informationInteger Linear Programs
Lecture 2: Review, Linear Programming Relaxations Today we will talk about expressing combinatorial problems as mathematical programs, specifically Integer Linear Programs (ILPs). We then see what happens
More informationOptimal Auctions with Correlated Bidders are Easy
Optimal Auctions with Correlated Bidders are Easy Shahar Dobzinski Department of Computer Science Cornell Unversity shahar@cs.cornell.edu Robert Kleinberg Department of Computer Science Cornell Unversity
More informationLecture 20: LP Relaxation and Approximation Algorithms. 1 Introduction. 2 Vertex Cover problem. CSCI-B609: A Theorist s Toolkit, Fall 2016 Nov 8
CSCI-B609: A Theorist s Toolkit, Fall 2016 Nov 8 Lecture 20: LP Relaxation and Approximation Algorithms Lecturer: Yuan Zhou Scribe: Syed Mahbub Hafiz 1 Introduction When variables of constraints of an
More informationOn-line Scheduling to Minimize Max Flow Time: An Optimal Preemptive Algorithm
On-line Scheduling to Minimize Max Flow Time: An Optimal Preemptive Algorithm Christoph Ambühl and Monaldo Mastrolilli IDSIA Galleria 2, CH-6928 Manno, Switzerland October 22, 2004 Abstract We investigate
More informationBounds for pairs in partitions of graphs
Bounds for pairs in partitions of graphs Jie Ma Xingxing Yu School of Mathematics Georgia Institute of Technology Atlanta, GA 30332-0160, USA Abstract In this paper we study the following problem of Bollobás
More informationOn shredders and vertex connectivity augmentation
On shredders and vertex connectivity augmentation Gilad Liberman The Open University of Israel giladliberman@gmail.com Zeev Nutov The Open University of Israel nutov@openu.ac.il Abstract We consider the
More information