Solutions to Exercises

The exercises referred to as WS 1.1(a), and so forth, are from the course book: Williamson and Shmoys, The Design of Approximation Algorithms, Cambridge University Press, 2011, available online at the authors' website. The solutions are by the instructor of the present course. The solutions may be suboptimal, incomplete, contain errors, or even be simply wrong.

Week I

I-1 (WS 1.1(a))

Denote by $n$ the size of $E$. Consider the greedy algorithm given in WS, modified only so that the algorithm terminates when at least $pn$ elements have been covered. Clearly, the algorithm returns a valid partial cover and runs in time polynomial in the input size.

Let us revisit the analysis of the performance guarantee in the proof of the corresponding theorem in WS. Suppose the algorithm takes $\ell$ iterations. Denote by $n_k$ the number of elements that remain uncovered at the start of the $k$th iteration. Thus $n_1 = n$, $n_\ell > (1-p)n$, and $n_{\ell+1} \le (1-p)n$. Also denote by $S_j^k$ the subset of $S_j$ that remains uncovered at the start of the $k$th iteration (denoted by $\hat S_j$ in WS). Let $O$ be an optimal solution (the index set) to the respective instance of the set cover problem (i.e., with $p = 1$). Again
$$\min_{j : S_j^k \neq \emptyset} \frac{w_j}{|S_j^k|} \le \frac{\sum_{j \in O} w_j}{\sum_{j \in O} |S_j^k|} = \frac{\mathrm{OPT}}{\sum_{j \in O} |S_j^k|}.$$
Furthermore, since $O$ is a set cover, the set $\bigcup_{j \in O} S_j^k$ must include at least $n_k$ elements. Thus the set $I$ returned by the algorithm satisfies
$$\sum_{j \in I} w_j \le \sum_{k=1}^{\ell} \mathrm{OPT}\,\frac{n_k - n_{k+1}}{n_k} \le \mathrm{OPT}\,(H_n - H_{n_\ell} + 1) \le \mathrm{OPT}\,\big(1 + \ln n - \ln[(1-p)n] + 1\big) = \mathrm{OPT}\,\big(2 - \ln(1-p)\big),$$
where the first $\ell - 1$ terms telescope into $H_n - H_{n_\ell}$ via $(n_k - n_{k+1})/n_k \le H_{n_k} - H_{n_{k+1}}$, the last term is at most $1$, and we used the fact that $\ln n \le H_n \le 1 + \ln n$ for all $n \ge 1$.

Tighter analysis. As $x \le -\ln(1-x)$ for all $x < 1$, we have
$$\frac{n_k - n_{k+1}}{n_k} \le \ln \frac{n_k}{n_{k+1}}.$$
Thus the weight of the set chosen in the $k$th iteration satisfies $w_{j_k} \le \mathrm{OPT} \ln(n_k/n_{k+1})$ for $k < \ell$, and $w_{j_\ell} \le \mathrm{OPT}$, yielding
$$\sum_{j \in I} w_j \le \mathrm{OPT}\left(1 + \sum_{k=1}^{\ell-1} \ln \frac{n_k}{n_{k+1}}\right) = \mathrm{OPT}\left(1 + \ln \frac{n}{n_\ell}\right) \le \mathrm{OPT}\left(1 + \ln \frac{1}{1-p}\right).$$
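To make the modified algorithm concrete, here is a minimal Python sketch (the instance encoding as Python sets and the function name are ours, not WS's); it repeatedly picks the set with the smallest weight-per-newly-covered-element ratio and stops as soon as a $p$-fraction of the universe is covered:

```python
import math

def greedy_partial_cover(universe, sets, weights, p):
    """Greedy partial set cover: pick the set minimizing
    weight / (# newly covered elements) until at least
    ceil(p * |universe|) elements are covered."""
    target = math.ceil(p * len(universe))
    covered, chosen = set(), []
    while len(covered) < target:
        best = min(
            (j for j in range(len(sets)) if sets[j] - covered),
            key=lambda j: weights[j] / len(sets[j] - covered),
        )
        chosen.append(best)
        covered |= sets[best]
    return chosen

# Toy instance: cover at least half of {0, ..., 5}.
U = set(range(6))
S = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
w = [3.0, 1.0, 2.5, 2.0]
print(greedy_partial_cover(U, S, w, p=0.5))   # -> [1, 3]
```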

I-2 (WS 1.1(b))

Consider again the greedy algorithm, but now with a more significant modification: in the $k$th iteration choose a set $S_j$ that minimizes the ratio $w_j / \min\{r_k, |S_j^k|\}$, where $r_k = \max\{0,\, pn - (n - n_k)\}$ is the minimum number of elements still to be covered. Observe that $pn = r_1 > r_2 > \cdots > r_{\ell+1} = 0$.

Let us again revisit the analysis. Let $O$ be an optimal solution to the partial set cover problem and let $\mathrm{OPT}_p$ denote the respective optimal value. We have
$$\min_{j : S_j^k \neq \emptyset} \frac{w_j}{\min\{r_k, |S_j^k|\}} \le \frac{\sum_{j \in O} w_j}{\sum_{j \in O} \min\{r_k, |S_j^k|\}} \le \frac{\mathrm{OPT}_p}{\min\{r_k,\ |\bigcup_{j \in O} S_j^k|\}}.$$
Now, since $O$ is a partial cover, the set $\bigcup_{j \in O} S_j^k$ must include at least $r_k$ elements, for $\bigcup_{j \in O} (S_j \setminus S_j^k)$ is contained in the $n - n_k = pn - r_k$ already covered elements. Note that when $k < \ell$ the algorithm selects a set $S_j$ such that $|S_j^k| < r_k$, implying $\min\{r_k, |S_j^k|\} = r_k - r_{k+1}$ for all $k$ (for $k = \ell$ we have $\min\{r_\ell, |S_j^\ell|\} = r_\ell = r_\ell - r_{\ell+1}$). Thus the set $I$ returned by the algorithm satisfies
$$\sum_{j \in I} w_j \le \sum_{k=1}^{\ell} \mathrm{OPT}_p\,\frac{r_k - r_{k+1}}{r_k} \le \mathrm{OPT}_p\,(H_{r_1} - H_{r_{\ell+1}}) = \mathrm{OPT}_p\,H_{pn}.$$

I-3 (WS 1.4(a-b))

(a) Map any instance $I$ of the set cover problem to an instance $I'$ of the uncapacitated facility location problem as follows. Let $F$ consist of the sets $S_j$ and $D$ of the elements $e_i$. Let the cost $c_{S_j e_i}$ be $0$ if $e_i \in S_j$ and $\infty$ otherwise. Let the cost $f_{S_j}$ equal the weight $w_j$ of $S_j$. Observe that any finite-cost solution to $I'$ corresponds to a solution to $I$ of an equal cost, and vice versa. Because the mapping between the instances and the mapping between the solutions can be computed in polynomial time, a $c \log |D|$-approximation algorithm for the uncapacitated facility location problem would yield a $c \log |E|$-approximation algorithm for the set cover problem. By Theorem 1.14 the constant $c$ cannot be arbitrarily small, unless P = NP.

(b) Consider an instance of the set cover problem where the set of elements is $D$ and each nonempty subset $S_t \subseteq D$ is assigned the weight
$$w_t = \min_{i \in F} \Big( f_i + \sum_{j \in S_t} c_{ij} \Big).$$
Clearly, an optimal solution to the original instance of the uncapacitated facility location problem directly gives a solution to the set cover problem instance, with equal costs (let the clients associated with the same facility form a set in the cover). Thus, it remains to (i) give an $O(\log |D|)$-approximation algorithm for the set cover problem and (ii) to show how the obtained set cover can be turned into a solution to the uncapacitated facility location problem with an equal or smaller cost.

To this end, we show that the greedy algorithm can be implemented to run in polynomial time. The difficulty is that the number of sets $S_t$ is exponential. The key

observation is that in the $k$th iteration we have
$$\min_{t : \hat S_t \neq \emptyset} \frac{w_t}{|\hat S_t|} = \min_{i \in F}\ \min_{1 \le q \le n_k} \frac{f_i + \sum_{j=1}^{q} \hat c_{ij}}{q},$$
where we assume w.l.o.g. that for each $i \in F$ the costs for the remaining $n_k$ clients are labeled by $\hat c_{i1}, \hat c_{i2}, \ldots, \hat c_{i n_k}$ and satisfy $\hat c_{i1} \le \hat c_{i2} \le \cdots \le \hat c_{i n_k}$. Thus a set $S_t$ that minimizes the ratio can be found in polynomial time, addressing the first issue (i).

To address the second issue (ii), suppose $I$ is the index set of the set cover returned by the greedy algorithm. For each $t \in I$, let $\psi(t)$ be the facility $i \in F$ that minimizes $f_i + \sum_{j \in S_t} c_{ij}$, and construct a solution $F'$ to the uncapacitated facility location problem by letting $F' = \{\psi(t) : t \in I\}$. Now the cost of $F'$ is
$$\sum_{i \in F'} f_i + \sum_{j \in D} \min_{i \in F'} c_{ij} \le \sum_{t \in I} \Big( f_{\psi(t)} + \sum_{j \in S_t} c_{\psi(t) j} \Big) = \sum_{t \in I} w_t.$$
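The polynomial-time implementation of the greedy step in (b) is perhaps easiest to see in code. The sketch below (the cost-matrix encoding is ours) evaluates, for every facility $i$ and every prefix length $q$ of its sorted assignment costs, the ratio $(f_i + \sum_{j=1}^q \hat c_{ij})/q$, which by the observation above suffices to find the ratio-minimizing set $S_t$:

```python
def best_greedy_set(c, f, remaining):
    """For each facility i, sort the assignment costs to the remaining
    clients and scan prefixes; the ratio-minimizing set is always the
    q cheapest remaining clients of some facility.
    c[i][j] = cost of assigning client j to facility i; f[i] = opening cost.
    Returns (ratio, facility, clients)."""
    best = None
    for i, f_i in enumerate(f):
        order = sorted(remaining, key=lambda j: c[i][j])
        total = f_i
        for q, j in enumerate(order, start=1):
            total += c[i][j]
            if best is None or total / q < best[0]:
                best = (total / q, i, order[:q])
    return best

# Two facilities, three remaining clients.
c = [[1, 4, 9], [5, 1, 2]]
f = [2, 3]
print(best_greedy_set(c, f, remaining={0, 1, 2}))   # -> (3.0, 0, [0])
```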

Week II

II-1 (WS 2.1(b))

Suppose there is a $(3 - \epsilon)$-approximation algorithm for the problem. Map any instance $(V, E, k)$ of the dominating set problem to an instance $(F, D, d)$ of the $k$-supplier problem as follows: For each vertex $v \in V$ introduce one vertex $x_v$ to $F$ and another vertex $y_v$ to $D$. Let $d(x_u, y_v) = 1$ if $u = v$ or $(u, v) \in E$, and $d(x_u, y_v) = 3$ otherwise. Furthermore, let $d(x_u, x_v) = d(y_u, y_v) = 0$ if $u = v$, and $d(x_u, x_v) = d(y_u, y_v) = 2$ otherwise. Observe that $d$ satisfies the triangle inequality. (Note that you cannot replace, say, $3$ by $4$ and $2$ by $3$.) Observe there is a dominating set of size $k$ in $(V, E)$ if and only if there is a solution $S \subseteq F$ of size $k$ with cost $1$. And, if there is no dominating set of size $k$, then the cost of an optimal solution must be $3$. Thus the dominating set problem can be solved in polynomial time by running the $(3 - \epsilon)$-approximation algorithm on $(F, D, d)$ and checking whether the obtained cost is less than $3$. As the dominating set problem is NP-complete, we would get P = NP.

II-2 (WS 2.3)

The analysis of the list scheduling algorithm is somewhat similar to that in the case of no precedence constraints. Let $l$ be a job that completes last in the final schedule. We want to show that the completion time $C_l$ is at most $2\,\mathrm{OPT}$. To this end, we partition the time interval $[0, C_l]$ into two sets, namely, the set of times $F$ where all machines process some job (full schedule) and the set of times $P$ where some machine is idle (partial schedule). Observe that $F$ spans time at most $\sum_{j=1}^{n} p_j / m \le \mathrm{OPT}$. Thus it remains to show that $P$ spans time at most $\mathrm{OPT}$.

We construct a sequence of jobs $j_1, \ldots, j_k$ such that $j_k \prec \cdots \prec j_1 \prec l$, as follows. Denote by $S_j$ the start time of job $j$ in the schedule. Consider the last time point $t_1 \le S_l = C_l - p_l$ in $P$. Clearly some predecessor $j_1$ of $l$ is being processed at time $t_1$, because otherwise $l$ could have been scheduled earlier. Similarly, consider the last time point $t_2 \le S_{j_1} \le t_1$ in $P$. Again some predecessor $j_2$ of $j_1$ is being processed at time $t_2$, and so forth, until there is no such point in $P$. We get that the total processing time of the jobs $l, j_1, \ldots, j_k$ is at least the span of $P$, since the times these jobs are processed cover $P$. As an upper bound, the span of $P$ is thus at most the total time needed to process a maximum-length (in terms of the total processing time) chain in the precedence structure, and this length is at most $\mathrm{OPT}$.

II-3 (WS 2.10)

We prove the extended version of Lemma 2.15: If $S$ is a subset constructed so far by the algorithm, and $i$ is the element chosen in the next iteration, then
$$f(S \cup \{i\}) - f(S) \ge \frac{1}{k}\,\big(f(O) - f(S)\big),$$
where $O \subseteq E$ is an optimal solution.

We first extend Lemma 2.17. Let $X \subseteq Y$ and $l \notin Y$. Then the submodularity of $f$ implies that
$$f\big((X \cup \{l\}) \cup Y\big) + f\big((X \cup \{l\}) \cap Y\big) \le f(X \cup \{l\}) + f(Y).$$
Rearranging and applying $X \subseteq Y$ and $l \notin Y$ gives us
$$f(Y \cup \{l\}) - f(Y) \le f(X \cup \{l\}) - f(X).$$

Let $O \setminus S = \{i_1, \ldots, i_p\}$. Consider the telescoping sum representation
$$f(O \cup S) = f(S) + \sum_{j=1}^{p} \big[ f(S \cup \{i_1, \ldots, i_j\}) - f(S \cup \{i_1, \ldots, i_{j-1}\}) \big].$$
We upper bound the right-hand side, using the extended Lemma 2.17, by
$$f(S) + \sum_{j=1}^{p} \big[ f(S \cup \{i_j\}) - f(S) \big].$$
Because the algorithm chooses $i \in E$ that maximizes $f(S \cup \{i\}) - f(S)$, we arrive at
$$f(O) \le f(O \cup S) \le f(S) + p\,\big[f(S \cup \{i\}) - f(S)\big],$$
where the first inequality follows from the monotonicity of $f$. Rewriting and observing that $p \le k$ completes the proof. Finally, we apply the proof of Theorem 2.16 as is.
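The algorithm analyzed in II-3 is the standard greedy for monotone submodular maximization under a cardinality constraint; a minimal Python sketch (with a coverage function as the example, since coverage functions are monotone and submodular) could look as follows:

```python
def greedy_submodular(f, ground, k):
    """Greedily add the element with the largest marginal gain
    f(S + i) - f(S); by Theorem 2.16 this yields a (1 - 1/e)-approximation
    for monotone submodular f with f(emptyset) = 0."""
    S = set()
    for _ in range(k):
        i = max(ground - S, key=lambda e: f(S | {e}) - f(S))
        S.add(i)
    return S

sets = {1: {"a", "b"}, 2: {"b", "c", "d"}, 3: {"d", "e"}}
cover = lambda S: len(set().union(*[sets[i] for i in S]))
print(greedy_submodular(cover, {1, 2, 3}, k=2))   # e.g. {1, 2} or {2, 3}
```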

Week III

III-1 (WS 3.1)

Denote $[k] = \{1, \ldots, k\}$, let $O \subseteq \{1, \ldots, n\}$ be an optimal solution, and let $I = [k] \cap O$. For any set $J$ denote $v_J = \sum_{i \in J} v_i$ and $s_J = \sum_{i \in J} s_i$. We will use (Fact 1.10 in WS)
$$\frac{v_{[k] \setminus I}}{s_{[k] \setminus I}} \ge \frac{v_{k+1}}{s_{k+1}} \ge \frac{v_{O \setminus I}}{s_{O \setminus I}}.$$
Since $s_{[k]} + s_{k+1} > B$, we get that $s_{[k] \setminus I} > B - s_I - s_{k+1}$, and hence
$$v_{[k] \setminus I} > \frac{v_{k+1}}{s_{k+1}}\,(B - s_I - s_{k+1}) = \frac{v_{k+1}}{s_{k+1}}\,(B - s_I) - v_{k+1} \ge \frac{v_{O \setminus I}}{s_{O \setminus I}}\,(B - s_I) - v_{k+1} \ge v_{O \setminus I} - v_{k+1},$$
where the last inequality holds because $B \ge s_O$, so $s_{O \setminus I} \le B - s_I$. Thus $v_{[k]} \ge \mathrm{OPT} - v_{k+1}$. Now, if $v_{k+1} < \mathrm{OPT}/2$, we have $v_{[k]} \ge \mathrm{OPT}/2$. Otherwise $\max_i v_i \ge v_{k+1} \ge \mathrm{OPT}/2$.

Alternative proof. We claim that no solution of a total size at most $B_m = s_1 + \cdots + s_m$ can achieve a total value larger than $v_1 + \cdots + v_m$. To see that the claim holds, consider a relaxed problem where each item $i$ is replaced by $s_i$ items $(i, j)$ for $j = 1, \ldots, s_i$, each of size $s_{i,j} = 1$ and value $v_{i,j} = v_i / s_i$. Clearly $v_{i,j} \ge v_{i',j'}$ if and only if $v_i / s_i \ge v_{i'} / s_{i'}$. Consequently, if we sort the items $(i, j)$ in decreasing order by the values $v_{i,j}$, then the total value of the first $B_m$ items is $v_1 + \cdots + v_m$. For the size bound $B_m$ this must be optimal (e.g., by the exchange argument). To complete the proof, we apply the claim for $m = k + 1$ and conclude
$$\max\{v_1 + \cdots + v_k,\ \max_i v_i\} \ge \max\{v_1 + \cdots + v_k,\ v_{k+1}\} \ge \tfrac{1}{2}\,(v_1 + \cdots + v_{k+1}) \ge \tfrac{1}{2}\,\mathrm{OPT}.$$

III-2 (WS 3.2)

We replace the bound $M$ in the construction by the value of the greedy solution, $M'$. Observe that $\mathrm{OPT} \ge M' \ge \mathrm{OPT}/2$. Thanks to the first inequality, the approximation guarantee is unaffected by the replacement. Consider an arbitrary feasible solution $S \subseteq \{1, \ldots, n\}$ for the scaled instance with values $v_i' = \lfloor v_i / \mu \rfloor$, where $\mu = \epsilon M' / n$. We can upper bound the value of $S$ by
$$\sum_{i \in S} v_i' \le \sum_{i \in S} \frac{v_i}{\mu} = \frac{\sum_{i \in S} v_i}{\epsilon M' / n} \le \frac{\mathrm{OPT}}{\epsilon\,\mathrm{OPT}/(2n)} = \frac{2n}{\epsilon},$$
thus eliminating a factor of $n$ from the original bound $O(n^2/\epsilon)$.

III-3 (WS 3.6)

We will imitate the proof of Theorem 3.5 for the knapsack problem. We will, however, encounter some difficulty in finding a good upper bound for the optimal cost OPT. Suppose for a moment that we know an upper bound $U \ge \mathrm{OPT}$. Consider the following algorithm. First remove (i.e., ignore) every edge whose cost is larger than $U$; clearly such an edge cannot appear in an optimal solution. Scale the cost of each remaining edge $e$ by setting $c_e' = \lfloor c_e / \mu \rfloor$, where $\mu = \epsilon U / n$. Next, solve the problem for the scaled costs by dynamic programming, e.g., using the recurrence
$$f(v, C') = \min_{(u,v) \in E} \big\{ f(u, C' - c'_{(u,v)}) + l_{(u,v)} \big\},$$

where $f(v, C')$ is the minimum length of a path from $s$ to $v$ of cost at most $C'$. Return a path that achieves the minimum cost, i.e., $\min\{C' : f(t, C') \le L\}$. The running time is polynomial in $n$ and $1/\epsilon$, as $C'$ only needs to run from $0$ to $n \lceil n/\epsilon \rceil$.

For an analysis of the approximation guarantee, denote the set of edges in the found path by $S$ and in an optimal path by $O$. We have
$$\sum_{e \in S} c_e \le \mu \sum_{e \in S} (c_e' + 1) \le \mu \sum_{e \in O} c_e' + n\mu \le \sum_{e \in O} c_e + n\mu = \mathrm{OPT} + \epsilon U.$$

Observe that if $U$ were a constant-factor (or even polynomial-factor) approximation of OPT, then we would already be done, as we could just set $\epsilon$ small enough to get a $(1 + \epsilon')$-approximation algorithm for the problem for any $\epsilon' > 0$, running in time polynomial in $1/\epsilon'$.

To get such an upper bound $U$, we resort to an iterative, yet very simple, initialization routine. We set $\epsilon = 1/2$ and $U$ initially to $U_0 = n \max_e c_e$. In the first iteration we get a new upper bound $U_1 = \sum_{e \in S_0} c_e \le \mathrm{OPT} + U_0/2$. After $k = \lceil \log_2 U_0 \rceil$ iterations the found solution $S_k$ yields the cost
$$U_{k+1} = \sum_{e \in S_k} c_e \le \mathrm{OPT} + U_k/2 \le \mathrm{OPT} + \mathrm{OPT}/2 + \cdots + \mathrm{OPT}/2^{k-1} + U_0/2^k \le 2\,\mathrm{OPT}.$$
Because the number of iterations is only logarithmic in the edge costs, the total running time is polynomial in the input size. We can use the bound $U_{k+1}$ to get an FPTAS.

Alternative upper bound construction. Sort the edges in increasing order by their costs, $c_1 \le c_2 \le \cdots \le c_m$. For $k = 1, 2, 3, \ldots$ consider an instance where only the first $k$ edges in the order are included, the rest being deleted. Find a shortest path from $s$ to $t$, if any, in this reduced graph, disregarding the costs. If the length of the path is at most $L$, that is, there is a feasible path, then set the bound $U = n c_k$ and terminate the construction. To see that $\mathrm{OPT} \le U$, it suffices to observe that a feasible path contains at most $n - 1 < n$ edges, each of cost at most $c_k$. To see that $U/n \le \mathrm{OPT}$, observe that any feasible path from $s$ to $t$ in the original graph with all the edges must include at least one edge of cost at least $c_k$.
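A sketch of the dynamic program at the heart of III-3 is given below; it assumes, for simplicity, that the (scaled) edge costs are integers at least $1$ and the lengths are nonnegative, and the variable names are ours:

```python
def min_cost_feasible_path(n, edges, s, t, L, C_max):
    """f[C][v] = minimum length of an s->v path of total cost at most C.
    Since every edge cost is an integer >= 1, budget level C depends only
    on strictly smaller levels. Returns the least C with f[C][t] <= L."""
    INF = float("inf")
    f = [[INF] * n for _ in range(C_max + 1)]
    for C in range(C_max + 1):
        f[C][s] = 0.0
        for u, v, cost, length in edges:
            if 1 <= cost <= C and f[C - cost][u] + length < f[C][v]:
                f[C][v] = f[C - cost][u] + length
        if C > 0:
            for v in range(n):   # a budget of C is at least as good as C - 1
                f[C][v] = min(f[C][v], f[C - 1][v])
        if f[C][t] <= L:
            return C
    return None

# Direct edge: cost 5, length 1. Detour via node 2: cost 2, length 4.
edges = [(0, 1, 5, 1.0), (0, 2, 1, 2.0), (2, 1, 1, 2.0)]
print(min_cost_feasible_path(3, edges, s=0, t=1, L=4.0, C_max=10))  # -> 2
```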

Week IV

IV-1 (WS 4.1)

Consider the algorithm that routes each call via the shortest path between the two nodes in the ring. Denote the respective routing by $S$. Clearly the algorithm runs in polynomial time. It remains to prove that the approximation factor is at most $2$.

Denote by $\bar i$ the opposite node of each node $i$, that is, $\bar i = i + n/2$. For a routing $R$ and node $i$, let $L_i^R$ be the number of calls $(u, v) \in C$ for which the routing $R_{u,v}$ contains the link $(i, i+1)$. Now, let $i$ be the node that maximizes $L_i^S$. Let $(u, v)$ be a call that contributes $1$ to $L_i^S$, that is, $(i, i+1)$ is in the shortest path between $u$ and $v$. We claim that for any routing $R$ the call $(u, v)$ contributes $1$ to either $L_i^R$ or $L_{\bar i}^R$. From this we obtain $\max\{L_i^R, L_{\bar i}^R\} \ge L_i^S / 2$, which suffices for showing that $L_i^S \le 2\,\mathrm{OPT}$.

To prove the claim we consider two cases. If $R_{u,v} = S_{u,v}$, then the call $(u, v)$ contributes $1$ to $L_i^R$ and we are done. Otherwise $R_{u,v} \neq S_{u,v}$, meaning that $R_{u,v}$ is the longer path from $u$ to $v$ in the ring. Because $S_{u,v}$ is the shortest path, it cannot contain both $(i, i+1)$ and $(\bar i, \bar i + 1)$. Therefore $(\bar i, \bar i + 1)$ must be in $R_{u,v}$, and thus $(u, v)$ contributes $1$ to $L_{\bar i}^R$.

Alternative proof. For each call $c \in C$ let $P_c$ be the set of the two paths in the ring for routing the call either clockwise or counterclockwise. Let $P = \bigcup_c P_c$. Denote by $E$ the set of edges (or links) of the ring. Consider the following integer linear program:
$$\begin{aligned}
\text{minimize}\quad & z \\
\text{subject to}\quad & \textstyle\sum_{p \in P : e \in p} x_p \le z, && e \in E, \\
& \textstyle\sum_{p \in P_c} x_p = 1, && c \in C, \\
& x_p \in \{0, 1\}, && p \in P.
\end{aligned}$$
We observe that the program models the SONET ring loading problem. Let $x^*$ be an optimal solution to the linear programming relaxation, obtained by replacing the constraint $x_p \in \{0,1\}$ by $x_p \ge 0$. Let $z^*$ be the respective optimal value. Let $\hat x$ be a rounded version of $x^*$ obtained by setting each $\hat x_p$ to $1$ if $x_p^* \ge 1/2$ and to $0$ otherwise. For the total load we obtain the guarantee
$$\max_e \sum_{p \in P : e \in p} \hat x_p \le \max_e \sum_{p \in P : e \in p} 2 x_p^* = 2 z^* \le 2\,\mathrm{OPT}.$$

IV-2 (WS 4.7(a-b))

(a) We consider the obvious bijection $\varphi$ between the vertex subsets $U \in \binom{V}{k}$ and the vectors $x \in \{0,1\}^V$ satisfying $\sum_{i \in V} x_i = k$, namely, $\varphi(x) = \{i \in V : x_i = 1\}$. It remains to show that the objective functions are the same, that is,
$$\sum_{(i,j) \in E} w_{ij}\,(x_i + x_j - 2 x_i x_j) = \sum_{(i,j) \in E} w_{ij}\,\big[\,|\{i, j\} \cap \varphi(x)| = 1\,\big],$$
where $[Q]$ is the indicator function of the proposition $Q$. But this holds because $x_i + x_j - 2 x_i x_j = [x_i \neq x_j]$, which can be verified by considering the four cases.
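The shortest-path routing algorithm of IV-1 is a few lines of code; the sketch below (our encoding: nodes $0, \ldots, n-1$, link $i$ meaning the edge $(i, i+1 \bmod n)$) routes every call along its shorter arc and reports the maximum link load $L_i^S$:

```python
def ring_load_shortest(n, calls):
    """Route each call (u, v) on the shorter of its two arcs and
    return the maximum link load; by IV-1 this is at most 2 * OPT."""
    load = [0] * n                       # load[i] = calls using link (i, i+1)
    for u, v in calls:
        cw = (v - u) % n                 # clockwise uses links u, ..., v-1
        if cw <= n - cw:
            links = [(u + d) % n for d in range(cw)]
        else:                            # counterclockwise = clockwise v -> u
            links = [(v + d) % n for d in range(n - cw)]
        for i in links:
            load[i] += 1
    return max(load)

print(ring_load_shortest(8, [(0, 3), (1, 5), (6, 2)]))   # -> 3
```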

(b) Let $x$ be a feasible solution to the nonlinear integer program. We will show that there exists a $z$ such that (i) $(x, z)$ is a feasible solution to the linear programming relaxation and (ii) $F(x) = L(z)$, where $F$ and $L$ are the objective functions of the nonlinear and linear program, respectively. We put $z_{ij} = x_i + x_j - 2 x_i x_j$. Clearly the condition (ii) holds. To see that the condition (i) holds, observe that $z_{ij} \in \{0, 1\}$, that $z_{ij} \le x_i + x_j$, and that $z_{ij} \le 2 - x_i - x_j$, since $x_i + x_j - x_i x_j \le 1$ for all $x_i, x_j \in \{0, 1\}$.

IV-3 (WS 4.7(c-e))

(c) Let $(x, z)$ be a feasible solution to the linear program. To prove $F(x) \ge L(z)/2$, we will show that $z_{ij} \le 2(x_i + x_j - 2 x_i x_j)$ for all $(i,j) \in E$. Consider the two nontrivial constraints for $z_{ij}$ and, for convenience, write $a$ for $x_i$ and $b$ for $x_j$. We have that $a + b \le 2 - a - b$ if and only if $a + b \le 1$. First assume $a + b \le 1$. We have $a + b \le 2(a + b - 2ab)$ if and only if $4ab \le a + b$. But the latter inequality holds because $4ab \le (a+b)^2$ and $a + b \le 1$. Then assume $a + b > 1$. We have $2 - a - b \le 2(a + b - 2ab)$ if and only if $4ab \le 3(a+b) - 2$. But the latter inequality holds because $4ab \le (a+b)^2$ and $(a+b)^2 \le 3(a+b) - 2$ for $1 \le a + b \le 2$.

(d) Let $x$ be a fractional solution to the nonlinear program. Clearly there exist two indices $i$ and $j$ such that $0 < x_i, x_j < 1$ (if one coordinate is fractional then, since $\sum_i x_i = k$ is an integer, another one must be, too). For a real number $\epsilon$, denote by $x^\epsilon$ the vector obtained from $x$ by replacing $x_i$ by $x_i + \epsilon$ and $x_j$ by $x_j - \epsilon$. Calculation shows that
$$F(x^\epsilon) - F(x) = \epsilon \Big( \sum_{(i,s) \in E :\, s \neq j} w_{is}\,(1 - 2x_s) - \sum_{(s,j) \in E :\, s \neq i} w_{sj}\,(1 - 2x_s) \Big) + 2\epsilon\,(x_i - x_j + \epsilon)\, w_{ij}\,[(i,j) \in E].$$
Assume w.l.o.g. (due to the symmetry of $i$ and $j$) that in the first term the factor of $\epsilon$ is nonnegative. Set $\epsilon$ to $\min\{1 - x_i, x_j\}$, implying that either $x_i + \epsilon = 1$ or $x_j - \epsilon = 0$. It remains to see that
$$x_i - x_j + \epsilon = x_i - x_j + \min\{1 - x_i,\ x_j\} = \min\{1 - x_j,\ x_i\} > 0.$$

(e) Consider the algorithm that first finds an optimal solution $(x^*, z^*)$ to the linear programming relaxation; clearly this can be done in polynomial time, as the number of variables and constraints is polynomial. Then the algorithm repeatedly rounds each noninteger coordinate of $x^*$ to either $0$ or $1$ using the above scheme; this results in a vector $\hat x$ in polynomial time. We have the following guarantees:
$$F(\hat x) \overset{(d)}{\ge} F(x^*) \overset{(c)}{\ge} \tfrac{1}{2}\,L(z^*) \overset{(b)}{\ge} \tfrac{1}{2} \max_x F(x) \overset{(a)}{=} \tfrac{1}{2}\,\mathrm{OPT},$$
where $x$ runs through the feasible points of the nonlinear program.
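The rounding scheme of parts (d) and (e) can be sketched as follows; the loop relies on the fact, proved in (d), that for objectives of the form $F(x) = \sum_{(i,j)\in E} w_{ij}(x_i + x_j - 2x_ix_j)$ one of the two shift directions never decreases $F$ (the function and variable names are ours):

```python
def round_preserving_value(x, F, tol=1e-12):
    """IV-3(d): while x has two fractional coordinates, shift mass
    +eps/-eps between them in a direction that does not decrease F;
    each step makes one coordinate integral and preserves sum(x)."""
    x = list(x)
    while True:
        frac = [i for i, xi in enumerate(x) if tol < xi < 1 - tol]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        for a, b in ((i, j), (j, i)):    # try both shift directions
            eps = min(1 - x[a], x[b])
            y = list(x)
            y[a], y[b] = y[a] + eps, y[b] - eps
            if F(y) >= F(x) - tol:       # guaranteed for one direction
                x = y
                break
        else:
            raise AssertionError("F is not of the expected form")
    return [round(xi) for xi in x]

# Unit-weight 3-cycle; sum(x) = 2 is preserved by the rounding.
E = [(0, 1), (1, 2), (0, 2)]
F = lambda x: sum(x[i] + x[j] - 2 * x[i] * x[j] for i, j in E)
print(round_preserving_value([0.5, 0.5, 1.0], F))   # -> [1, 0, 1]
```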

Week V

V-1 (WS 5.6(a))

Let $U$ be a solution to MAX DICUT. Put $x_i = [i \in U]$ for each $i \in V$. Also put $z_{ij} = [x_i = 1, x_j = 0]$. Clearly $(x, z)$ is a feasible solution to the integer linear program (ILP) and its value is the sum of the weights $w_{ij}$ of all arcs $(i,j) \in A$ for which $x_i = 1$ and $x_j = 0$, thus equalling the value of $U$. We have shown that the optimal value of MAX DICUT is at most the optimal value of the ILP. Let then $(x, z)$ be a feasible solution to the ILP. Put $U = \{i \in V : x_i = 1\}$. We observe that $z_{ij} \le \min\{x_i, 1 - x_j\} = [x_i = 1, x_j = 0]$ for all arcs $(i,j) \in A$. Thus we have shown that the optimal value of the ILP is at most the optimal value of MAX DICUT. The set $U$ can be trivially read from the solution $(x, z)$.

V-2 (WS 5.6(b))

Let $f(r) = 1/4 + r/2$ for all real numbers $r$. Let $(x^*, z^*)$ be an optimal solution to the LP, and let $\hat x_i$ be independent Bernoulli$(f(x_i^*))$ random variables for $i \in V$. The expected total weight of $U = \{i \in V : \hat x_i = 1\}$ is
$$\mathbb{E}\Big( \sum_{(i,j) \in A} w_{ij}\,[\hat x_i = 1, \hat x_j = 0] \Big) = \sum_{(i,j) \in A} w_{ij}\,\Pr(\hat x_i = 1, \hat x_j = 0).$$
On the other hand,
$$\mathrm{OPT} \le \sum_{(i,j) \in A} w_{ij}\, z_{ij}^* \le \sum_{(i,j) \in A} w_{ij} \min\{x_i^*,\ 1 - x_j^*\}.$$
(The second inequality is in fact an equality, as the objective function is maximized by setting $z_{ij}$ as large as possible.) To prove that the algorithm is a randomized $1/2$-approximation algorithm, it thus suffices to show that $f(r)(1 - f(s)) \ge \min\{r, 1-s\}/2$ for all $0 \le r, s \le 1$. To this end, let $m = \min\{r, 1-s\}$. Because $f$ is an increasing function and $s \le 1 - m$, we have
$$f(r)\,(1 - f(s)) \ge f(m)\,(1 - f(1 - m)) = \Big( \frac{1}{4} + \frac{m}{2} \Big)^2 \ge \frac{m}{2},$$
where the final inequality holds since the difference of the two sides equals $(m - 1/2)^2/4 \ge 0$.

Alternative calculation. Because of the rounding rule, we have
$$\Pr(\hat x_i = 1, \hat x_j = 0) = \Big( \frac{1}{4} + \frac{x_i^*}{2} \Big)\Big( 1 - \Big( \frac{1}{4} + \frac{x_j^*}{2} \Big) \Big) = \Big( \frac{1}{4} + \frac{x_i^*}{2} \Big)\Big( \frac{1}{4} + \frac{1 - x_j^*}{2} \Big).$$
Since $z_{ij}^* \le x_i^*$ and $z_{ij}^* \le 1 - x_j^*$, we get
$$\Pr(\hat x_i = 1, \hat x_j = 0) \ge \Big( \frac{1}{4} + \frac{z_{ij}^*}{2} \Big)^2 \ge \frac{z_{ij}^*}{2}.$$

V-3 (WS 5.8)

The following ILP is a straightforward modification of the one given for MAX SAT:
$$\begin{aligned}
\text{maximize}\quad & \sum_{j=1}^{m} w_j z_j + \sum_{i=1}^{n} v_i\,(1 - y_i) \\
\text{subject to}\quad & \textstyle\sum_{i \in C_j} y_i \ge z_j, && j = 1, \ldots, m, \\
& y_i \in \{0, 1\}, && i = 1, \ldots, n, \\
& z_j \in [0, 1], && j = 1, \ldots, m.
\end{aligned}$$

Let $\lambda > 0$ and $f(r) = 1 - \lambda + \lambda r$. Consider the algorithm that first finds an optimal solution $y^*$ to the obvious linear programming relaxation of the above ILP and then constructs a truth assignment $x$ by setting $x_i$ to true with probability $f(y_i^*)$, independently for each $i$. We have that the expected total weight of $x$ is
$$\begin{aligned}
\mathbb{E}(W) &= \sum_j w_j \big(1 - \Pr(x_i = \text{false for all } i \in C_j)\big) + \sum_i v_i \big(1 - \Pr(x_i = \text{true})\big) \\
&= \sum_j w_j \Big(1 - \prod_{i \in C_j} (1 - f(y_i^*))\Big) + \sum_i v_i\,(1 - f(y_i^*)) \\
&= \sum_j w_j \Big(1 - \prod_{i \in C_j} (\lambda - \lambda y_i^*)\Big) + \sum_i v_i\,(\lambda - \lambda y_i^*).
\end{aligned}$$
We observe that the latter term in the sum equals $\lambda \sum_i v_i (1 - y_i^*)$. Thus, we have a randomized $\lambda$-approximation algorithm for the problem, provided that the first term in the sum is at least $\lambda \sum_j w_j z_j^*$. We will show that this holds for $\lambda = 2(\sqrt{2} - 1)$.

Denote $s = \sum_{i \in C_j} y_i^*$ and $k = |C_j|$, the number of variables in the clause $C_j$. Because the product of the numbers $(1 - y_i^*)$ cannot be larger than the $k$th power of their arithmetic mean, we have
$$1 - \prod_{i \in C_j} (\lambda - \lambda y_i^*) \ge 1 - \lambda^k \Big( 1 - \frac{s}{k} \Big)^k \ge 1 - \lambda^k \Big( 1 - \frac{z_j^*}{k} \Big)^k. \tag{1}$$
It suffices to show that
$$g_k(z_j^*) := 1 - \lambda^k \Big( 1 - \frac{z_j^*}{k} \Big)^k - \lambda z_j^* \ge 0 \quad \text{for all } 0 \le z_j^* \le 1.$$
The function $g_k(z_j)$ is decreasing, since
$$g_k'(z_j) = \lambda^k \Big(1 - \frac{z_j}{k}\Big)^{k-1} - \lambda \le 0 \quad \text{for all } k \ge 1 \text{ and } 0 \le \lambda \le 1.$$
Thus it remains to show that $g_k(1) \ge 0$ for all $k = 1, 2, \ldots$, with the particular choice of $\lambda = 2(\sqrt{2} - 1) \approx 0.828$. We consider the cases separately:

Case $k = 1$: $g_1(1) = 1 - 0 - \lambda = 1 - \lambda \ge 0$.

Case $k = 2$: $g_2(1) \ge 0 \iff 1 - \lambda^2/4 - \lambda \ge 0 \iff -2(\sqrt{2} + 1) \le \lambda \le 2(\sqrt{2} - 1)$.

Case $k = 3$: $g_3(1) = 1 - \lambda - (2\lambda/3)^3 \ge 17/100 - (166/300)^3 \ge 0$.

Case $k = 4$: $g_4(1) = 1 - \lambda - (3\lambda/4)^4 \approx 0.023 \ge 0$.

Case $k \ge 5$: $g_k(1) = 1 - \lambda - \lambda^k (1 - 1/k)^k \ge 1 - \lambda - \lambda^5/e \ge 0$.
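The randomized rounding itself is trivial to implement; a sketch with the particular $\lambda = 2(\sqrt{2} - 1)$ (the helper name is ours):

```python
import random

LAM = 2 * (2 ** 0.5 - 1)   # lambda = 2(sqrt(2) - 1), about 0.828

def round_assignment(y_star):
    """V-3: set x_i true with probability f(y_i*) = 1 - LAM + LAM * y_i*,
    independently for each i, given an optimal fractional LP solution."""
    return [random.random() < 1 - LAM + LAM * y for y in y_star]

print(round_assignment([0.0, 0.3, 1.0]))   # a random truth assignment
```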

Week VI

VI-1 (WS 7.1)

Dijkstra's algorithm maintains a partitioning of the vertices into visited and unvisited vertices. Initially all vertices are unvisited. Each vertex $v$ is also assigned a tentative distance to $s$, denoted by $d[v]$. Initially, $d[s] = 0$ and $d[v] = \infty$ for all $v \neq s$. In each iteration, the algorithm picks the unvisited vertex $u$ with the smallest tentative distance $d[u]$, calculates the distance through it to each unvisited neighbor $v$ of $u$, as $d[u] + c_{u,v}$, and updates the neighbor's tentative distance $d[v]$ if the obtained distance is smaller. It is known that each time the algorithm picks a vertex $u$ (and moves it to the visited vertices), the distance $d[u]$ is the cost of the shortest path from $s$ to $u$. Let $p(u)$ denote the predecessor of $u$ in the shortest path.

Suppose the first $k$ vertices the algorithm picks (after picking $s$) are $u_1, \ldots, u_k$, in this order. We claim that the edge set $F' = \{(p(u_1), u_1), \ldots, (p(u_k), u_k)\}$ equals the edge set $F$ constructed by the primal-dual algorithm after $k$ iterations.

The claim clearly holds for $k = 1$. Namely, Dijkstra's algorithm picks the neighbor $u$ of $s$ that minimizes $c_{s,u}$. Likewise the primal-dual algorithm increases $y_{\{s\}}$ to $c_{s,u}$ and adds $(s, u)$ to $F$.

Now, consider the $k$th iteration of the algorithm. Let $C$ be the connected component of $(V, F)$ containing $s$. Note that $C$ contains $k$ vertices. Define the time spent by the algorithm as the total amount the $y$-variables have been increased, $t_C = \sum_S y_S$. Let $t_C[u]$ be the time at which a vertex $u \notin C$ would be added by the primal-dual algorithm to the connected component $C$ of $(V, F)$, if the connected component were not to change. More formally,
$$t_C[u] = t_C + \varepsilon_C(u), \qquad \varepsilon_C(u) = \min_{p \in C} \Big( c_{p,u} - \sum_{S : (p,u) \in \delta(S)} y_S \Big).$$
In particular, $t_{\{s\}}[u] = c_{s,u}$. Observe that the next vertex $u \notin C$ the algorithm adds to the connected component is the one that minimizes $t_C[u]$. Furthermore, once $u$ has been added, we have the update rule
$$t_{C \cup \{u\}}[v] = \min\{\, t_C[v],\ t_C[u] + c_{u,v} \,\}, \qquad \text{for } v \notin C \cup \{u\}.$$
Intuitively speaking, this holds because the time was increased by $y_C = \varepsilon_C(u)$, but also each sum containing the term $y_C$ was increased by this amount. More rigorously,
$$\varepsilon_{C \cup \{u\}}(v) = \min\{\, \varepsilon_C(v) - y_C,\ c_{u,v} \,\},$$
as the minimizing $p$ either belongs to $C$ or equals $u$. Adding $t_{C \cup \{u\}} = t_C + y_C = t_C[u]$ yields the said update rule.

To complete the proof, it suffices to observe that $t_C[u] = d[u]$, where $d[u]$ is the tentative distance to $u$ after picking the vertices in $C$. This holds because the initial values and the update rules are the same.

VI-2 (WS 7.8)

Suppose the algorithm opens the set $T$ of facilities and constructs the dual solution $(v, w)$. We will prove that
$$\sum_{j \in D} \min_{i \in T} c_{ij} + 3 \sum_{i \in T} f_i \le 3 \sum_{j \in D} v_j. \tag{2}$$

Note that this strengthens the already proved guarantee by allowing $3$ times larger facility costs $f_i$ without losing the approximation factor of $3$. We start with the following equation given in the proof of Theorem 7.14:
$$\sum_{i \in T} f_i + \sum_{i \in T} \sum_{j \in A(i)} c_{ij} = \sum_{i \in T} \sum_{j \in A(i)} v_j.$$
Recall that here $A(i)$ is the set of neighboring clients assigned to a facility $i \in T$. By rearranging and denoting $A = \bigcup_{i \in T} A(i)$ we obtain
$$\sum_{i \in T} f_i = \sum_{j \in A} v_j - \sum_{i \in T} \sum_{j \in A(i)} c_{ij} \le \sum_{j \in A} v_j - \sum_{j \in A} \min_{i \in T} c_{ij}.$$
To prove (2) we denote $Z = D \setminus A$ and write
$$\sum_{j \in D} \min_{i \in T} c_{ij} + 3 \sum_{i \in T} f_i \le \sum_{j \in Z} \min_{i \in T} c_{ij} + \sum_{j \in A} \min_{i \in T} c_{ij} - 3 \sum_{j \in A} \min_{i \in T} c_{ij} + 3 \sum_{j \in A} v_j \le \sum_{j \in Z} \min_{i \in T} c_{ij} + 3 \sum_{j \in A} v_j \le 3 \sum_{j \in Z} v_j + 3 \sum_{j \in A} v_j,$$
where the last inequality follows from Lemma 7.13 (like in the proof of Theorem 7.14).
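For reference, the variant of Dijkstra's algorithm described in VI-1 (tentative distances plus a visited set) is sketched below with a standard binary-heap implementation; the adjacency-list encoding is ours:

```python
import heapq

def dijkstra(adj, s):
    """Pick the unvisited vertex with the smallest tentative distance
    d[u]; relaxing its edges realizes the update rule
    t_{C+u}[v] = min{t_C[v], t_C[u] + c(u, v)} from VI-1.
    adj[u] = list of (v, cost) pairs."""
    d = {u: float("inf") for u in adj}
    d[s] = 0
    visited, heap = set(), [(0, s)]
    while heap:
        du, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)                 # d[u] is now the true distance
        for v, c in adj[u]:
            if du + c < d[v]:          # update tentative distance
                d[v] = du + c
                heapq.heappush(heap, (d[v], v))
    return d

adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(adj, "s"))   # -> {'s': 0, 'a': 2, 'b': 3}
```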
