A Parallel Approximation Algorithm for Positive Linear Programming


A Parallel Approximation Algorithm for Positive Linear Programming

Michael Luby*    Noam Nisan†

Abstract

We introduce a fast parallel approximation algorithm for the positive linear programming optimization problem, i.e., the special case of the linear programming optimization problem where the input constraint matrix and constraint vector consist entirely of positive entries. The algorithm is elementary, and has a simple parallel implementation that runs in polylog time using a linear number of processors.

1 Introduction

The positive linear programming optimization problem (hereafter referred to as the positive problem) is the special case of the linear programming optimization problem where the input constraint matrix and constraint vector consist entirely of non-negative entries. We introduce an algorithm that takes as input the description of a positive problem and an error parameter ε > 0, and produces both a primal feasible solution and a dual feasible solution, where the values of these two solutions are within a multiplicative factor of 1 + ε of each other. Because the optimal values for the primal and dual problems are equal, this implies that the primal and dual feasible solutions produced by the algorithm have a value within ε (with respect to relative error) of an optimal feasible solution.

Let N be the number of non-zero coefficients associated with an instance of the problem. Our algorithm can be implemented on a parallel machine using O(N) processors with a running time polynomial in log(N)/ε. The algorithm is elementary and has a simple parallel implementation.

Note that the problem of approximating the value of a general linear program to within a constant factor is P-complete. This can be shown by a reduction from the circuit value problem to a linear programming problem where all coefficients are small constants and the linear programming problem has exactly one feasible solution with value either 0 or 1, depending upon the answer to the circuit value problem (see, e.g., [5, Já Já]).

Previously, [8, Plotkin, Shmoys, Tardos] developed fast sequential algorithms for both the primal and dual versions of the positive problem, which they call fractional packing and covering problems (as well as for some generalizations of this problem). They introduce algorithms that are much simpler and far superior in terms of running times than known algorithms for the general linear programming optimization problem. However, the algorithms in [8] do not have fast parallel implementations. The algorithm we introduce is competitive with their algorithms in terms of running times when implemented sequentially.

* International Computer Science Institute and UC Berkeley. Research supported in part by NSF Grant CCR and a grant from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel.
† Hebrew University, Jerusalem, Israel. Supported by the USA-Israel BSF and by a Wolfson research award. Research partially done while visiting the International Computer Science Institute.

We first introduce an elementary (but unimplementable) continuous algorithm that produces optimal primal and dual feasible solutions to the problem, and based on this we describe a fast parallel approximation algorithm. We use ideas that were previously employed in similar contexts by [1, Berger, Rompel, Shor], [8, Plotkin, Shmoys, Tardos] and [2, Chazelle, Friedman]. We use the general idea also used in [1] of incrementing the values of many variables in parallel, and we use the general idea also used in [8] and [2] of changing the weight function on the constraints by an amount exponential in the change in the variables. The overall method we introduce is novel in several respects, including the details of how to increment the values of the variables at each iteration (this choice reflects a carefully chosen tradeoff between the overall running time of the algorithm and the quality of the solution it produces) and the way the solution output is normalized at the end.

Positive linear programs are strong enough to represent several combinatorial problems. The first example is matching in a bipartite graph. From matching theory we know that relaxing the {0,1} program that defines the largest matching to a linear program does not change the optimal value. This program is positive, and thus our algorithm can be used to approximate the size of the largest matching in a bipartite graph. This essentially matches the results of [3, Cohen], except that we don't know how to get the matching itself without using some of Cohen's techniques.

The second example is that of set cover. In this case it is known that relaxing the 0-1 program that defines the minimum set cover to a linear program can decrease the optimum by at most a factor of log(Δ), where Δ is the maximum degree in the set system. This program is positive, and thus our algorithm can be used to approximate the size of the set cover to within a factor of (1 + ε) log(Δ). This is essentially optimal (up to NP-completeness [7, Lund, Yannakakis]) and matches the results of [1, Berger, Rompel, Shor]; our results are a slight improvement over those in [1] in the sense that our multiplicative constant is 1 + ε, where ε is an input parameter, whereas their multiplicative constant is fixed to something like 2. In this case finding the set cover itself is also possible using, e.g., ideas found in [9, Raghavan].

2 The problem

Throughout this paper, n is the number of variables and m is the number of constraints (not including constraints of the form x_i ≥ 0). We use i to index variables, and whenever it is unspecified we assume that i ranges over all of {1, ..., n}. We use j to index the non-trivial constraints, and whenever it is unspecified we assume that j ranges over all of {1, ..., m}. Unless otherwise specified, if x = ⟨x_1, ..., x_n⟩ then sum(x) is defined as Σ_i x_i. (In a few places, sum(x) is defined as a weighted sum.) We consider linear problems in the following standard form.

The Primal Problem: The objective is to find z = ⟨z_1, ..., z_n⟩ that minimizes sum(z) = Σ_i d_i z_i subject to the following constraints: for all i, z_i ≥ 0; for all j, Σ_i c_{i,j} z_i ≥ b_j.

We say z is primal feasible if z satisfies all the constraints. Let opt(z) be an optimal solution, i.e., opt(z) is a primal feasible solution such that sum(opt(z)) = min { sum(z) : z is primal feasible }.

We consider also the dual of such problems.

The Dual Problem: The objective is to find q = ⟨q_1, ..., q_m⟩ that maximizes sum(q) = Σ_j b_j q_j subject to the following constraints: for all j, q_j ≥ 0; for all i, Σ_j c_{i,j} q_j ≤ d_i.

We say q is dual feasible if q satisfies all the constraints. Let opt(q) be an optimal solution, i.e., opt(q) is a dual feasible solution such that sum(opt(q)) = max { sum(q) : q is dual feasible }.
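As a concrete illustration of the standard form just defined, the following short Python sketch (our own, with illustrative names and a dense coefficient array, not part of the paper) checks primal and dual feasibility of a tiny positive instance and confirms weak duality, i.e., that Σ_j b_j q_j never exceeds Σ_i d_i z_i.

import numpy as np

def is_primal_feasible(c, b, z, tol=1e-9):
    # z >= 0 and, for every constraint j, sum_i c[i][j] * z[i] >= b[j]
    return bool(np.all(z >= -tol) and np.all(c.T @ z >= b - tol))

def is_dual_feasible(c, d, q, tol=1e-9):
    # q >= 0 and, for every variable i, sum_j c[i][j] * q[j] <= d[i]
    return bool(np.all(q >= -tol) and np.all(c @ q <= d + tol))

# Tiny positive instance: n = 2 variables, m = 2 constraints.
d = np.array([1.0, 1.0])                  # objective weights d_i
b = np.array([1.0, 1.0])                  # constraint bounds b_j
c = np.array([[1.0, 0.5],                 # c[i][j]
              [0.5, 1.0]])

z = np.array([2.0 / 3.0, 2.0 / 3.0])      # a primal feasible point
q = np.array([2.0 / 3.0, 2.0 / 3.0])      # a dual feasible point
assert is_primal_feasible(c, b, z) and is_dual_feasible(c, d, q)
assert b @ q <= d @ z + 1e-9              # weak duality: sum(q) <= sum(z)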

Description of the Positive Problem: We say that a linear program is positive if all the coefficients are non-negative, i.e.: for all i, d_i > 0; for all j, b_j > 0; for all i and j, c_{i,j} ≥ 0. (It can easily be seen that restricting d_i and b_j to be strictly positive instead of non-negative causes no loss of generality.)

We can look at the primal problem as trying to put weights on the z_i's such that each j is covered with weight at least b_j, where each unit of weight on z_i puts c_{i,j} weight on each j. Similarly, we can look at the dual problem as trying to put weights on the q_j's such that we pack at most d_i weight into each i. For these reasons a positive linear program in the primal form is sometimes called a fractional covering problem, and in the dual form a fractional packing problem. We will sometimes use this terminology in our analysis.

In this paper we develop an approximation algorithm for both these problems with the following properties. On input ε > 0 and the description of the problem, the algorithm produces a primal feasible solution z and a dual feasible solution q such that sum(z) ≤ sum(q) · (1 + ε). The algorithm consists of O(log(n) · log(m/ε) / ε⁴) iterations. Each iteration can be executed in parallel using O(N) processors in time O(log(N)) on an EREW PRAM, where N is the number of entries ⟨i, j⟩ where c_{i,j} > 0.

2.1 A special form

In the appendix we show that without loss of generality we may assume that the linear program is in the following special form.

Input to the special form: For all ⟨i, j⟩, the input is a_{i,j} such that either a_{i,j} = 0 or 1 ≥ a_{i,j} ≥ 1/Γ, where Γ = m²/ε².

Special Form Primal Problem: The objective is to find z = ⟨z_1, ..., z_n⟩ that minimizes sum(z) = Σ_i z_i subject to the following constraints: for all i, z_i ≥ 0; for all j, Σ_i a_{i,j} z_i ≥ 1.

Special Form Dual Problem: The objective is to find q = ⟨q_1, ..., q_m⟩ that maximizes sum(q) = Σ_j q_j subject to the following constraints: for all j, q_j ≥ 0; for all i, Σ_j a_{i,j} q_j ≤ 1.
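Because the algorithm returns both a primal and a dual feasible solution, its output is self-certifying: checking feasibility and the (1 + ε) gap requires only the special-form coefficients. A minimal sketch of such a check (our helper names; a is a dense n × m array; not part of the paper):

import numpy as np

def in_special_form(a, eps):
    # Every entry is either 0 or in the range [1/Gamma, 1], with Gamma = m^2 / eps^2.
    m = a.shape[1]
    gamma = (m * m) / (eps * eps)
    nz = a[a > 0]
    return bool(nz.size == 0 or (nz.min() >= 1.0 / gamma and nz.max() <= 1.0))

def certify(a, z, q, eps, tol=1e-9):
    # Special-form primal/dual feasibility plus the promised (1 + eps) gap.
    primal_ok = bool(np.all(z >= -tol) and np.all(a.T @ z >= 1.0 - tol))
    dual_ok = bool(np.all(q >= -tol) and np.all(a @ q <= 1.0 + tol))
    gap_ok = z.sum() <= (1.0 + eps) * q.sum() + tol
    return primal_ok and dual_ok and gap_ok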

2.2 The algorithm

Given a problem instance in special form, the algorithm we develop below has the following properties. Let a_{i,j} be the coefficients for the input problem, and let

λ = min { sum(z) : z is primal feasible } = max { sum(q) : q is dual feasible }.

On input ε > 0 and the a_{i,j}, the output is a primal feasible solution z = ⟨z_1, ..., z_n⟩ and a dual feasible solution q = ⟨q_1, ..., q_m⟩ such that sum(z) ≤ sum(q) · (1 + ε). Since sum(q) ≤ λ ≤ sum(z), this immediately implies that sum(z) ≤ λ · (1 + ε) and that sum(q) ≥ λ / (1 + ε).

The parallel algorithm we present below can be viewed as a parallel discretization of the following simple (but unimplementable) continuous algorithm. Based on the analysis of the parallel algorithm given below, it is not hard to see that the continuous algorithm produces optimal primal and dual solutions.

Continuous Algorithm: The values of all variables are driven by x = ⟨x_1, ..., x_n⟩, and this vector is initially all zeroes. For all j, define α_j = Σ_i a_{i,j} x_i, α = min_j {α_j} and y_j = e^(−α_j). For all i, define D_i = Σ_j a_{i,j} y_j, D = max_i {D_i} and B = { i : D_i = D }. The continuous algorithm increases the values of x_i for all i ∈ B in the direction which makes all D_i for i ∈ B decrease at the same rate. Continue the process forever. The optimal primal solution is z_i, where z_i is the limiting ratio of x_i / α. The optimal dual solution is q_j, where q_j = ŷ_j / D̂, and ŷ and D̂ are defined as the y and D that maximize sum(y)/D over all time.

We now describe the parallel (and implementable) version of this continuous algorithm.

Initialization: For all i, initialize x_i = 0. For all j, initialize y_j = 1.

We now describe a phase of the algorithm. Each phase is indexed by an integer k, and the index of the next phase is one smaller than the index of the previous phase. The index k_s of the start phase and the index −k_f of the final phase are fixed later.

Definition: ε_0 = ε_1 = ε_2 = ε_3 = ε_4 = ε/5.

Phase k: Each phase consists of a sequence of iterations, and each iteration consists of the following (a sequential sketch of one iteration is given after this description).

- For all i, let D_i = Σ_j a_{i,j} y_j.
- Let B = { i : D_i ≥ (1 + ε_0)^k }. If B = ∅ then the phase ends.
- For all δ > 0, define
  E(δ) = { j : δ · Σ_{i∈B} a_{i,j} = ε_1 },
  S(δ) = { j : δ · Σ_{i∈B} a_{i,j} < ε_1 },
  L(δ) = { j : δ · Σ_{i∈B} a_{i,j} > ε_1 },
  D_i^−(δ) = Σ_{j ∈ E(δ)∪S(δ)} a_{i,j} y_j,
  D_i^+(δ) = Σ_{j ∈ E(δ)∪L(δ)} a_{i,j} y_j.
- Choose δ_0 such that:
  (a) for at least a fraction 1 − ε_2 of the i ∈ B, D_i^−(δ_0) ≥ (1 − ε_3) · D_i;
  (b) for at least a fraction ε_2 of the i ∈ B, D_i^+(δ_0) ≥ ε_3 · D_i.
- For all i ∈ B, x_i ← x_i + δ_0.
- For all j, y_j ← y_j · e^(−δ_0 · Σ_{i∈B} a_{i,j}).
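The sketch below is a sequential simulation of one such iteration for a dense n × m array a; it is our illustration rather than the paper's PRAM implementation, and it assumes a routine choose_delta0 (sketched after Section 2.4) that returns a value satisfying conditions (a) and (b).

import numpy as np

def iteration(a, x, y, k, eps):
    # One iteration of phase k (sequential simulation of the parallel step).
    # Returns False when B is empty, i.e. when phase k ends.
    eps0 = eps / 5.0
    D = a @ y                                    # D_i = sum_j a[i][j] * y[j]
    B = np.where(D >= (1.0 + eps0) ** k)[0]      # B = { i : D_i >= (1 + eps0)^k }
    if B.size == 0:
        return False
    delta0 = choose_delta0(a, y, B, D, eps)      # satisfies conditions (a) and (b)
    x[B] += delta0                               # raise all selected variables together
    y *= np.exp(-delta0 * a[B].sum(axis=0))      # y_j *= exp(-delta0 * sum_{i in B} a[i][j])
    return True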

Definition: D = max_i {D_i}.

Start Phase: The index of the start phase is the smallest positive integer k_s such that the initial value of D satisfies (1 + ε_0)^{k_s} ≤ D < (1 + ε_0)^{k_s + 1}.

Final Phase: The final phase executed is the first phase where sum(y) ≤ 1/m^{1/ε_4} is true at the end of the phase.

The following invariants are maintained throughout the course of the algorithm.

Degree invariant: At the beginning of an iteration within the execution of phase k it is the case that, for all i ∈ B, (1 + ε_0)^k ≤ D_i < (1 + ε_0)^{k+1}.

Coverage invariant: Define α_j = Σ_i a_{i,j} x_i and let α = min_j {α_j}. Then α_j is the amount that j is covered by the current value of x, and α is the minimal coverage of any j.

Conventions: We use the convention that x_i is the value assigned to variable i at the beginning of an iteration within a phase, x'_i is the value after the end of the iteration, and x*_i is the value after termination of the final phase. This same convention applies to all the other variables, i.e., D, y_j, α_j, α, etc.

Consider the iteration of the algorithm when sum(y)/D is maximum among all iterations of the algorithm. Let ŷ = ⟨ŷ_1, ..., ŷ_m⟩ be y from this iteration and let D̂ be D from this iteration.

Primal Feasible Solution Output: The output is, for all i, z_i = x*_i / α*.

Dual Feasible Solution Output: The output is, for all j, q_j = ŷ_j / D̂.

Caveat: In the analysis given below, for simplicity we are slightly inaccurate in the following sense. We say, for example, that (1 + ε_0)(1 + ε_1) = 1 + ε_0 + ε_1 and 1 − ε_0 = 1/(1 + ε_0). Each time we do this we introduce a small error, but we only do this a constant number of times overall.
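Putting the pieces together, a sequential simulation of the whole procedure looks roughly as follows; again this is our sketch (it reuses iteration() from above and choose_delta0 from Section 2.4), not the parallel implementation analyzed in the paper.

import numpy as np

def positive_lp_approx(a, eps):
    # Approximate primal/dual pair for a special-form instance a (n x m array).
    n, m = a.shape
    eps0, eps4 = eps / 5.0, eps / 5.0
    x, y = np.zeros(n), np.ones(m)

    D = float((a @ y).max())
    k = int(np.floor(np.log(D) / np.log(1.0 + eps0)))     # start phase k_s
    best, y_hat, D_hat = y.sum() / D, y.copy(), D          # track the maximum of sum(y)/D

    while True:
        while iteration(a, x, y, k, eps):                  # run phase k until B is empty
            D = float((a @ y).max())
            if y.sum() / D > best:
                best, y_hat, D_hat = y.sum() / D, y.copy(), D
        if y.sum() <= m ** (-1.0 / eps4):                  # final-phase condition
            break
        k -= 1                                             # next phase index is one smaller

    alpha = float((a.T @ x).min())                         # alpha* = min_j alpha*_j
    return x / alpha, y_hat / D_hat                        # primal z and dual q outputs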

2.3 Running time analysis

We first determine an upper bound on the number of phases. After the initialization, D ≤ m. This is because a_{i,j} ≤ 1 for all ⟨i, j⟩ and because initially y_j = 1 for all j. Let k_s be the index of the first phase. To guarantee that D < (1 + ε_0)^{k_s + 1} at the start of this phase, it is sufficient that k_s satisfy (1 + ε_0)^{k_s + 1} > m, and this is true when k_s = O(log(m)/ε).

Note that D ≥ sum(y)/(mΓ): for each j there is some i with a_{i,j} ≥ 1/Γ, so y_j ≤ Γ · D for every j. Thus, if D ≤ 1/(mΓ · m^{1/ε_4}) then sum(y) ≤ 1/m^{1/ε_4}, so the algorithm never reaches phase −k once (1 + ε_0)^{−k} ≤ 1/(mΓ · m^{1/ε_4}). Thus the last phase occurs before phase −k_f, where k_f = O(log(m/ε)/ε²), and the total number of phases is O(log(m/ε)/ε²).

We now analyze the number of iterations per phase. At least a fraction ε_2 of the i ∈ B have the property that D_i^+(δ_0) ≥ ε_3 · D_i. For each j ∈ E(δ_0) ∪ L(δ_0), y_j goes down by at least a factor of e^{−ε_1} ≤ 1 − ε_1(1 − ε_1). Let D'_i be the value of D_i at the end of the iteration. From the above, a fraction of at least ε_2 of the i ∈ B have the property that D'_i ≤ D_i · (1 − ε_1 ε_3) (using the caveat).

Note that after the value of D_i drops by a factor of at least 1 − ε_0, i is removed from B. From this it follows that the number of iterations during phase k is at most O(log(|B_k|)/ε²), where B_k is the set B at the beginning of phase k. Since |B_k| ≤ n for all k, it follows that the total number of iterations overall is O(log(n) · log(m/ε)/ε⁴).

2.4 Computing δ_0

The most difficult part of each iteration is the computation of δ_0. This can be done as follows.

For each j ∈ {1, ..., m}, compute γ_j = ε_1 / Σ_{i∈B} a_{i,j}, and find a permutation π of {1, ..., m} that satisfies γ_{π(1)} ≥ ... ≥ γ_{π(m)} by sorting the set { (γ_j, j) : j ∈ {1, ..., m} }.

For fixed j, let j^− be the smallest index and j^+ be the largest index for which γ_{π(j^−)} = γ_{π(j)} = γ_{π(j^+)}. From these definitions it follows that

E(γ_{π(j)}) = { π(j^−), ..., π(j^+) },
S(γ_{π(j)}) = { π(1), ..., π(j^− − 1) },
L(γ_{π(j)}) = { π(j^+ + 1), ..., π(m) }.

For each i ∈ B and each j ∈ {1, ..., m}, compute

D_i^−(γ_{π(j)}) = Σ_{k ≤ j^+} a_{i,π(k)} y_{π(k)},
D_i^+(γ_{π(j)}) = Σ_{k ≥ j^−} a_{i,π(k)} y_{π(k)},

using a parallel prefix computation. Note that D_i^−(γ_{π(1)}) ≤ ... ≤ D_i^−(γ_{π(m)}) and D_i^+(γ_{π(1)}) ≥ ... ≥ D_i^+(γ_{π(m)}).

Then, for each i ∈ B, compute an index v(i) ∈ {1, ..., m} for which D_i^−(γ_{π(v(i))}) ≥ (1 − ε_3) · D_i and D_i^+(γ_{π(v(i))}) ≥ ε_3 · D_i, using binary search on j. There is such an index v(i) because, for all i and j, D_i^−(γ_{π(j)}) + D_i^+(γ_{π(j+1)}) ≥ D_i.

Then, find an ordering ⟨b_1, ..., b_{|B|}⟩ of B that satisfies γ_{π(v(b_1))} ≥ ... ≥ γ_{π(v(b_{|B|}))} by sorting the set { γ_{π(v(i))} : i ∈ B }. Note that, for all 1 ≤ l < l' ≤ |B| and all i, D_i^−(γ_{π(v(b_{l'}))}) ≥ D_i^−(γ_{π(v(b_l))}) and D_i^+(γ_{π(v(b_{l'}))}) ≤ D_i^+(γ_{π(v(b_l))}).

Compute an index i_0 so that i_0/|B| ≥ 1 − ε_2 and (|B| − i_0 + 1)/|B| ≥ ε_2, using binary search. Finally, set δ_0 = γ_{π(v(b_{i_0}))}. It is not hard to verify that δ_0 satisfies the conditions specified in the description of the algorithm.

Each iteration consists of a constant number of parallel prefix, parallel sorting, and parallel binary search operations. By the results described in [6, Ladner, Fischer] and [4, Cole], each iteration can be executed in parallel using O(N) processors in time O(log(N)) on an EREW PRAM.
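For completeness, here is a sequential sketch of the δ_0 computation just described. Instead of parallel sorting, prefix sums, and binary search it simply scans the candidate values γ_j, so it is far slower than the PRAM procedure, but the quantile choice of i_0 mirrors the last step above; it is our illustration, not the paper's implementation.

import numpy as np

def choose_delta0(a, y, B, D, eps):
    # Scan candidates gamma_j = eps1 / sum_{i in B} a[i][j] from largest to smallest and
    # record, for each i in B, the first candidate at which conditions (a) and (b) both hold;
    # delta0 is then taken at the (1 - eps2)-quantile of these per-variable positions.
    eps1 = eps2 = eps3 = eps / 5.0
    w = a[B].sum(axis=0)                          # w_j = sum_{i in B} a[i][j]
    cand = np.unique(eps1 / w[w > 0])[::-1]       # candidate deltas, decreasing
    v = np.zeros(B.size, dtype=int)
    for t, i in enumerate(B):
        for pos, delta in enumerate(cand):
            low = delta * w <= eps1               # E(delta) union S(delta)
            high = delta * w >= eps1              # E(delta) union L(delta)
            if (a[i][low] @ y[low] >= (1.0 - eps3) * D[i]
                    and a[i][high] @ y[high] >= eps3 * D[i]):
                v[t] = pos
                break
    i0 = int(np.ceil((1.0 - eps2) * B.size)) - 1  # at least (1 - eps2)|B| keep condition (a),
    return cand[int(np.sort(v)[i0])]              # at least eps2|B| keep condition (b)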

2.5 Feasibility

Primal: We prove that the solution z output by the algorithm is a primal feasible solution. Recall that y*_j = e^{−α*_j}. Because sum(y*) < 1 at termination, α*_j ≥ ln(1/sum(y*)) > 0 for every j, and hence α* ≥ ln(1/sum(y*)) > 0. Since z_i = x*_i / α*, it follows that each j is covered at least α*_j / α* ≥ 1 times at the end of the algorithm. This implies that z is a primal feasible solution.

Dual: Because D̂ is the maximum over all i of D̂_i, it follows that each i is packed with a total of D̂_i / D̂ ≤ 1 with respect to q. It follows that q is a dual feasible solution.

2.6 Optimality

Let β = ε_0 + ε_1 + ε_2 + ε_3 = 4ε/5.

Lemma 1: sum(y') ≤ sum(y) − δ_0 · D · |B| · (1 − β).

PROOF: Let r_j = y_j − y'_j = y_j · (1 − e^{−δ_0 Σ_{i∈B} a_{i,j}}). We want to show that Σ_j r_j ≥ δ_0 · D · |B| · (1 − β). Note that Σ_j r_j ≥ Σ_{j ∈ E(δ_0)∪S(δ_0)} r_j.

For all j ∈ E(δ_0) ∪ S(δ_0),

1 − e^{−δ_0 Σ_{i∈B} a_{i,j}} ≥ δ_0 · (Σ_{i∈B} a_{i,j}) · (1 − ε_1).

This inequality is because of the following: for all μ ≤ 1, 1 − e^{−μ} ≥ μ(1 − μ). Letting μ = δ_0 Σ_{i∈B} a_{i,j}, and noting that j ∈ E(δ_0) ∪ S(δ_0) implies that μ ≤ ε_1, this gives 1 − e^{−μ} ≥ μ(1 − μ) ≥ μ(1 − ε_1).

Thus,

Σ_{j ∈ E(δ_0)∪S(δ_0)} r_j = Σ_{j ∈ E(δ_0)∪S(δ_0)} y_j · (1 − e^{−δ_0 Σ_{i∈B} a_{i,j}}) ≥ δ_0 · (1 − ε_1) · Σ_{i∈B} Σ_{j ∈ E(δ_0)∪S(δ_0)} a_{i,j} y_j.

But Σ_{j ∈ E(δ_0)∪S(δ_0)} a_{i,j} y_j = D_i^−(δ_0), and for at least a fraction 1 − ε_2 of the i ∈ B, D_i^−(δ_0) ≥ (1 − ε_3) · D_i. Since D_i ≥ D/(1 + ε_0) for i ∈ B, it follows that this sum is at least (1 − ε_2)(1 − ε_3) · D · |B| / (1 + ε_0). Putting all this together (and using the caveat) yields Σ_j r_j ≥ δ_0 · D · |B| · (1 − β).
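The elementary inequality invoked in the proof of Lemma 1 can be checked directly; the following one-line derivation is ours, not text from the paper.

\[
  e^{-\mu} \;\le\; 1 - \mu + \tfrac{\mu^2}{2} \;\le\; 1 - \mu(1-\mu)
  \qquad (0 \le \mu \le 1),
\]

so that $1 - e^{-\mu} \ge \mu(1-\mu)$, and for $\mu \le \varepsilon_1$ the right-hand side is at least $\mu(1-\varepsilon_1)$.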

Lemma 2: sum(x) ≤ sum(q) · ln(m/sum(y)) / (1 − β).

PROOF: From Lemma 1 it follows that

sum(y') ≤ sum(y) · (1 − δ_0 · D · |B| · (1 − β)/sum(y)) ≤ sum(y) · e^{−δ_0 D |B| (1 − β)/sum(y)} ≤ sum(y) · e^{−δ_0 |B| (1 − β)/sum(q)},

where the last inequality is because sum(q) ≥ sum(y)/D. Because Σ_i (x'_i − x_i) = δ_0 · |B| in each iteration, and because initially sum(y) = m, it follows that at any point in time

sum(y) ≤ m · e^{−(1 − β) · Σ_i x_i / sum(q)}.

This implies that sum(x) ≤ sum(q) · ln(m/sum(y)) / (1 − β).

Theorem: sum(z) ≤ sum(q) · (1 + ε).

PROOF: From Lemma 2 and the definition of z it follows that

sum(z) = sum(x*)/α* ≤ sum(q) · ln(m/sum(y*)) / ((1 − β) · α*).

Since α* ≥ ln(1/sum(y*)), it follows that

sum(z) ≤ sum(q) · ln(m/sum(y*)) / ((1 − β) · ln(1/sum(y*))).

Since sum(y*) ≤ 1/m^{1/ε_4} at the termination of the algorithm, ln(m/sum(y*)) ≤ (1 + ε_4) · ln(1/sum(y*)), so sum(z) ≤ sum(q) · (1 + ε_4)/(1 − β) = sum(q) · (1 + ε) (using the caveat), and the theorem follows.

3 Acknowledgments

The authors would like to thank Richard Karp, Seffi Naor, Serge Plotkin and Eva Tardos for pointing out relationships between this work and previous work. The first author would like to thank Chu-Cheow Lim for discussions which clarified the description of the implementation.

References

[1] Berger, B., Rompel, J., Shor, P., "Efficient NC Algorithms for Set Cover with Applications to Learning and Geometry", 30th Annual Symposium on Foundations of Computer Science, 1989.
[2] Chazelle, B., Friedman, J., "A Deterministic View of Random Sampling and Its Use in Geometry", Princeton Technical Report No. CS-TR-436. A preliminary version appears in FOCS 1988.
[3] Cohen, E., "Approximate max flow on small depth networks", FOCS 1992.
[4] Cole, R., "Parallel merge sort", SIAM J. Comput., 17(4), 1988.
[5] Já Já, J., An Introduction to Parallel Algorithms, Addison-Wesley, 1992.
[6] Ladner, R., Fischer, M., "Parallel prefix computation", JACM, 27(4), 1980.
[7] Lund, C., Yannakakis, M., "On the Hardness of Approximating Minimization Problems", preprint.
[8] Plotkin, S., Shmoys, D., Tardos, E., "Fast Approximation Algorithms for Fractional Packing and Covering Problems", Stanford Technical Report.
[9] Raghavan, P., "Probabilistic construction of deterministic algorithms: approximating packing integer programs", JCSS, 37, October 1988.

Appendix: Transforming to special form

Given an instance of a positive problem, we first perform a normalization step to eliminate the weights d_i and the constraint bounds b_j.

Normalized problem: For all ⟨i, j⟩, let c'_{i,j} = c_{i,j} / (b_j · d_i). For all i, define the variable z'_i = d_i · z_i.

The objective is to find z' = ⟨z'_1, ..., z'_n⟩ that minimizes sum(z') subject to the following constraints: for all i, z'_i ≥ 0; for all j, Σ_i c'_{i,j} z'_i ≥ 1.

There is a one-to-one correspondence between primal feasible solutions z' to the normalized problem and primal feasible solutions z to the primal positive problem, with the property that sum(z) = sum(z').

The next step in transforming the problem to the special form is to limit the range of the coefficients c'_{i,j}. This step will introduce an error of at most ε. This turns out to be important for the analysis of the approximation algorithm we develop.

For all j, let β_j = max_i {c'_{i,j}}, and let β = min_j {β_j}.

Fact: m/β ≥ sum(opt(z')) ≥ 1/β.

The transformation consists of forming the new set of coefficients c''_{i,j} as follows:

- For all ⟨i, j⟩, if c'_{i,j} > mβ/ε then c''_{i,j} = mβ/ε.
- For all ⟨i, j⟩, if c'_{i,j} < εβ/m then c''_{i,j} = 0.
- For all ⟨i, j⟩, if εβ/m ≤ c'_{i,j} ≤ mβ/ε then c''_{i,j} = c'_{i,j}.

The objective is to find z'' = ⟨z''_1, ..., z''_n⟩ that minimizes sum(z'') subject to the following constraints: for all i, z''_i ≥ 0; for all j, Σ_i c''_{i,j} z''_i ≥ 1.

Let t = max_{i,j} {c''_{i,j}}, let b = min_{i,j} { c''_{i,j} : c''_{i,j} > 0 }, and fix Γ = m²/ε². The main point of this transformation is that the ratio t/b is not too large, i.e., t/b ≤ Γ.

Lemma A: (1) Any primal feasible solution z'' to the transformed problem is also a primal feasible solution to the normalized problem. (2) Let opt(z'') be an optimal solution to the transformed problem. Then sum(opt(z'')) ≤ sum(opt(z)) · (1 + ε).

PROOF (of part (1)): This follows immediately because, for all ⟨i, j⟩, c''_{i,j} ≤ c'_{i,j}. Thus, if z'' covers each j at least once with respect to c'', then z'' covers each j at least once with respect to c'.

(of part (2)): Let opt(z') be an optimal solution to the normalized problem, i.e., sum(opt(z')) = sum(opt(z)). Let λ = sum(opt(z')). For all j, let m_j ∈ {1, ..., n} be an index such that c'_{m_j, j} is maximal, i.e., for all i, c'_{m_j, j} ≥ c'_{i,j}. Let I = { m_j : j ∈ {1, ..., m} }.

Define z'' as follows: for all i ∉ I, let z''_i = opt(z')_i; for all i ∈ I, let z''_i = opt(z')_i + ελ/m.

Let |I| denote the cardinality of I. Since |I| ≤ m, it is easy to see that sum(z'') ≤ λ · (1 + ε).
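A compact Python sketch of the two appendix steps defined so far, normalization followed by coefficient truncation, is given below; it is our rendering, and the clipping thresholds mβ/ε and εβ/m are as reconstructed above.

import numpy as np

def normalize(c, b, d):
    # c'_{i,j} = c_{i,j} / (b_j * d_i); the corresponding variables are z'_i = d_i * z_i.
    return c / np.outer(d, b)

def truncate(c1, eps):
    # beta_j = max_i c'_{i,j}, beta = min_j beta_j; clip large coefficients to m*beta/eps
    # and zero out coefficients below eps*beta/m, leaving the rest unchanged.
    m = c1.shape[1]
    beta = c1.max(axis=0).min()
    hi, lo = m * beta / eps, eps * beta / m
    c2 = np.minimum(c1, hi)
    c2[c2 < lo] = 0.0
    return c2

With these thresholds the ratio of the largest to the smallest non-zero coefficient of c'' is at most m²/ε² = Γ, which is exactly what the special form requires.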

We now verify that z'' is a primal feasible solution to the transformed problem. The only concern with respect to feasibility is that some j is no longer covered at least once by z'', and the only reason this can happen is the lowering of the coefficients from c' to c''. We show that this loss in coverage is compensated for by the increase of z'' above opt(z').

Suppose j is such that for at least one index i it is the case that c'_{i,j} > mβ/ε. By the definition of m_j and c''_{m_j,j}, it follows that c''_{m_j,j} = mβ/ε. Since m_j ∈ I, z''_{m_j} = opt(z')_{m_j} + ελ/m ≥ ελ/m. By the Fact above, λ ≥ 1/β, and thus the coverage of j is at least z''_{m_j} · c''_{m_j,j} ≥ (ελ/m) · (mβ/ε) = λβ ≥ 1.

Suppose j is such that for all indices i it is the case that c'_{i,j} ≤ mβ/ε. The decrease in the coverage of j is caused by the pairs ⟨i, j⟩ where c'_{i,j} < εβ/m. Because each such c''_{i,j} is set to 0 and because sum(opt(z')) = λ, the total loss in coverage at j is at most εβλ/m. On the other hand, by the definitions of β, m_j and c''_{m_j,j}, we have c''_{m_j,j} = c'_{m_j,j} = β_j ≥ β, and z''_{m_j} is larger than opt(z')_{m_j} by ελ/m. Thus, the coverage of j increases by at least εβλ/m, which compensates for the loss.

For convenience, we normalize the largest coefficient as follows.

Special form: For all ⟨i, j⟩, let a_{i,j} = c''_{i,j} / t. For all i, define the variable x_i = z''_i · t.

The objective is to find x = ⟨x_1, ..., x_n⟩ that minimizes sum(x) subject to the following constraints: for all i, x_i ≥ 0; for all j, Σ_i a_{i,j} x_i ≥ 1.

Note that max_{i,j} {a_{i,j}} = 1 and min_{i,j} { a_{i,j} : a_{i,j} > 0 } ≥ 1/Γ. There is a one-to-one correspondence between primal feasible solutions x to the special form problem and primal feasible solutions z'' to the transformed problem, with the property that sum(z'') = sum(x)/t.

Lemma B is the culmination of the above development.

Lemma B: Let x be a primal feasible solution to a special form problem derived as described above, with error parameter ε, from a primal positive problem. A primal feasible solution z to the primal positive problem can be easily derived from x with the property that sum(z)/sum(opt(z)) ≤ (1 + ε) · sum(x)/sum(opt(x)).

A similar transformation is possible to preserve the quality of solutions to the dual positive problem, but it is omitted from this paper due to lack of space.
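The last scaling step, and the recovery of a solution to the original positive problem (the map asserted by Lemma B), then amount to a couple of lines; again this is a hedged sketch with our own names.

import numpy as np

def to_special_form(c2):
    # a_{i,j} = c''_{i,j} / t with t = max_{i,j} c''_{i,j}; then x = t * z'' and sum(z'') = sum(x)/t.
    t = float(c2.max())
    return c2 / t, t

def recover_primal(x, t, d):
    # special-form x  ->  z'' = x / t  ->  original z_i = z''_i / d_i  (Lemma A(1) gives feasibility).
    return x / (t * d)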
