Solutions to Exercises


The exercises referred to as WS 1.1(a), and so forth, are from the course book: Williamson and Shmoys, The Design of Approximation Algorithms, Cambridge University Press, 2011, available online at http://www.designofapproxalgs.com. The solutions are by the instructor of the present course. The solutions may be suboptimal, incomplete, contain errors, or even be simply wrong.

Week I

I-1 (WS 1.1(a))
Denote by $n$ the size of $E$. Consider the greedy algorithm given in WS, modified only so that the algorithm terminates when at least $pn$ elements have been covered. Clearly, the algorithm returns a valid partial cover and runs in time polynomial in the input size.

Let us revisit the analysis of the performance guarantee in the proof of Theorem 1.11. Suppose the algorithm takes $l$ iterations. Denote by $n_k$ the number of elements that remain uncovered at the start of the $k$th iteration. Thus $n_1 = n$, $n_l > (1-p)n$, and $n_{l+1} \le (1-p)n$. Also denote by $S_j^k$ the subset of $S_j$ that remains uncovered at the start of the $k$th iteration (denoted by $\hat{S}_j$ in WS). Let $O$ be an optimal solution (the index set) to the respective instance of the set cover problem (i.e., with $p = 1$). Again

$$\min_{j : S_j^k \neq \emptyset} \frac{w_j}{|S_j^k|} \le \frac{\sum_{j \in O} w_j}{\sum_{j \in O} |S_j^k|} = \frac{\mathrm{OPT}}{\sum_{j \in O} |S_j^k|}.$$

Furthermore, since $O$ is a set cover, the set $\bigcup_{j \in O} S_j^k$ must include at least $n_k$ elements. Thus the set $I$ returned by the algorithm satisfies

$$\sum_{j \in I} w_j \le \sum_{k=1}^{l} \mathrm{OPT}\,\frac{n_k - n_{k+1}}{n_k} \le \mathrm{OPT}\left( \frac{1}{n} + \frac{1}{n-1} + \cdots + \frac{1}{n_l + 1} + \frac{n_l - n_{l+1}}{n_l} \right) \le \mathrm{OPT}\,(H_n - H_{n_l} + 1) \le \mathrm{OPT}\,(1 + \ln n - \ln[(1-p)n] + 1) = \mathrm{OPT}\,(2 - \ln(1-p)),$$

where we used the fact that $\ln n \le H_n \le 1 + \ln n$ for all $n \ge 1$.

Tighter analysis. As $x \le -\ln(1-x)$ for all $x < 1$, we have

$$\frac{n_k - n_{k+1}}{n_k} \le -\ln\frac{n_{k+1}}{n_k} = \ln\frac{n_k}{n_{k+1}}.$$

Thus the set $j_k$ chosen in the $k$th iteration satisfies $w_{j_k} \le \mathrm{OPT}\,\ln\frac{n_k}{n_{k+1}}$, and $w_{j_l} \le \mathrm{OPT}$, yielding

$$\sum_{j \in I} w_j \le \mathrm{OPT}\left( 1 + \sum_{k=1}^{l-1} \ln\frac{n_k}{n_{k+1}} \right) = \mathrm{OPT}\left( 1 + \ln\frac{n}{n_l} \right) \le \mathrm{OPT}\left( 1 + \ln\frac{1}{1-p} \right).$$
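As a concrete reference, here is a minimal Python sketch of the modified greedy algorithm from I-1: pick the set minimizing weight per newly covered element, and stop once at least $\lceil pn \rceil$ elements are covered. The instance encoding (a list of sets with weights) and the function name are my own illustrative choices, not from WS.

    import math

    def greedy_partial_cover(elements, sets, weights, p):
        """Greedy set cover, terminated once at least ceil(p*n) elements are covered.

        elements: collection of ground-set elements E
        sets: list of Python sets S_j over the elements
        weights: list of nonnegative weights w_j
        p: required coverage fraction, 0 < p <= 1
        """
        n = len(set(elements))
        target = math.ceil(p * n)
        covered = set()
        chosen = []
        while len(covered) < target:
            # pick the set minimizing weight per newly covered element
            best, best_ratio = None, float("inf")
            for j, S in enumerate(sets):
                new = len(S - covered)
                if new > 0 and weights[j] / new < best_ratio:
                    best, best_ratio = j, weights[j] / new
            if best is None:
                break  # nothing covers new elements; no partial cover for this p
            chosen.append(best)
            covered |= sets[best]
        return chosen, covered

For example, greedy_partial_cover([1, 2, 3, 4], [{1, 2}, {3}, {4}], [1, 1, 1], 0.5) covers two of the four elements with a single set.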

I-2 (WS 1.1(b))
Consider again the greedy algorithm, but now with a more significant modification: in the $k$th iteration choose a set $S_j$ that minimizes the ratio $w_j / \min\{r_k, |S_j^k|\}$, where $r_k = \max\{0,\, pn - (n - n_k)\}$ is the minimum number of elements still to be covered. Observe that $pn = r_1 > r_2 > \cdots > r_{l+1} = 0$.

Let us again revisit the analysis. Let $O$ be an optimal solution to the partial set cover problem and let $\mathrm{OPT}_p$ denote the respective optimal value. We have

$$\min_{j : S_j^k \neq \emptyset} \frac{w_j}{\min\{r_k, |S_j^k|\}} \le \frac{\sum_{j \in O} w_j}{\sum_{j \in O} \min\{r_k, |S_j^k|\}} \le \frac{\mathrm{OPT}_p}{\min\{r_k, |\bigcup_{j \in O} S_j^k|\}}.$$

Now, since $O$ is a partial cover, the set $\bigcup_{j \in O} S_j^k$ must include at least $r_k$ elements, for $\bigcup_{j \in O} (S_j \setminus S_j^k)$ is contained in the already covered $pn - r_k$ elements. Note that when $k < l$ the algorithm selects a set $S_j$ with $|S_j^k| < r_k$, implying $\min\{r_k, |S_j^k|\} = r_k - r_{k+1}$ for all $k$ (including $k = l$, for which $r_{l+1} = 0$). Thus the set $I$ returned by the algorithm satisfies

$$\sum_{j \in I} w_j \le \sum_{k=1}^{l} \mathrm{OPT}_p\,\frac{r_k - r_{k+1}}{r_k} \le \mathrm{OPT}_p \left( \frac{1}{r_1} + \frac{1}{r_1 - 1} + \cdots + \frac{1}{r_{l+1} + 1} \right) = \mathrm{OPT}_p\,H_{pn}.$$

I-3 (WS 1.4(a-b))
(a) Map any instance $I$ of the set cover problem to an instance $I'$ of the uncapacitated facility location problem as follows. Let $F$ consist of the sets $S_j$ and $D$ of the elements $e_i$. Let the cost $c_{S_j e_i}$ be $0$ if $e_i \in S_j$ and $\infty$ otherwise. Let the cost $f_{S_j}$ equal the weight $w_j$ of $S_j$. Observe that any finite-cost solution to $I'$ corresponds to a solution to $I$ of an equal cost, and vice versa. Because the mapping between the instances and the mapping between the solutions can be computed in polynomial time, a $c \log |D|$-approximation algorithm for the uncapacitated facility location problem would yield a $c \log |E|$-approximation algorithm for the set cover problem. By Theorem 1.14 the constant $c$ cannot be arbitrarily small, unless P = NP.

(b) Consider an instance of the set cover problem where the set of elements is $D$ and each nonempty subset $S_t \subseteq D$ is assigned the weight

$$w_t = \min_{i \in F} \left( f_i + \sum_{j \in S_t} c_{ij} \right).$$

Clearly, an optimal solution to the original instance of the uncapacitated facility location problem directly gives a solution to the set cover problem instance, with equal costs (let the clients assigned to the same facility form a set in the cover). Thus, it remains to (i) give an $O(\log |D|)$-approximation algorithm for the set cover problem and (ii) to show how the obtained set cover can be turned into a solution to the uncapacitated facility location problem with an equal or smaller cost.

To this end, we show that the greedy algorithm can be implemented to run in polynomial time. The difficulty is that the number of sets $S_t$ is exponential. The key observation is that in the $k$th iteration we have

$$\min_{t : \hat{S}_t \neq \emptyset} \frac{w_t}{|\hat{S}_t|} = \min_{i \in F}\; \min_{1 \le q \le n_k} \frac{f_i + \sum_{j=1}^{q} \hat{c}_{ij}}{q},$$

where we assume w.l.o.g. that for each $i \in F$ the costs for the remaining $n_k$ clients are labeled so that $\hat{c}_{i1} \le \hat{c}_{i2} \le \cdots \le \hat{c}_{i n_k}$. Thus a set $S_t$ that minimizes the ratio can be found in polynomial time, addressing the first issue (i).

To address the second issue (ii), suppose $I$ is the index set of the set cover returned by the greedy algorithm. For each $t \in I$, let $\psi(t)$ be the facility $i \in F$ that minimizes $f_i + \sum_{j \in S_t} c_{ij}$, and construct a solution $F'$ to the uncapacitated facility location problem by letting $F' = \{\psi(t) : t \in I\}$. Now the cost of $F'$ is

$$\sum_{i \in F'} f_i + \sum_{j \in D} \min_{i \in F'} c_{ij} \le \sum_{t \in I} \left( f_{\psi(t)} + \sum_{j \in S_t} c_{\psi(t) j} \right) = \sum_{t \in I} w_t.$$
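The key observation in (b) — that the best ratio can be found facility by facility after sorting the remaining connection costs — is easy to implement. Below is a small Python sketch of one greedy iteration; the data structures (facility costs f, a complete cost table c, the set of still-uncovered clients) and the function name are illustrative assumptions of mine, not WS's.

    def best_greedy_set(f, c, uncovered):
        """One iteration of the greedy algorithm from I-3(b).

        f: dict facility -> opening cost f_i
        c: dict (facility, client) -> connection cost c_ij
        uncovered: set of clients not yet covered
        Returns (facility, chosen client set, ratio), minimizing
        (f_i + sum of the q cheapest remaining connection costs) / q.
        """
        best = None
        for i in f:
            # sort remaining clients by their connection cost to facility i
            clients = sorted(uncovered, key=lambda j: c[(i, j)])
            total = f[i]
            for q in range(1, len(clients) + 1):
                total += c[(i, clients[q - 1])]
                ratio = total / q
                if best is None or ratio < best[2]:
                    best = (i, set(clients[:q]), ratio)
        return best

Repeating this until all clients are covered, and then opening the facility $\psi(t)$ that realizes the minimum for each chosen set, gives the set-cover-based $O(\log |D|)$-approximation described above.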

Week II

II-1 (WS 2.1(b))
Suppose there is a $(3 - \epsilon)$-AA for the problem. Map any instance of the dominating set problem $(V, E, k)$ to an instance of the $k$-supplier problem $(F, D, d)$ as follows: For each vertex $v \in V$ introduce one vertex $x_v$ to $F$ and another vertex $y_v$ to $D$. Let $d(x_u, y_v) = 1$ if $u = v$ or $(u, v) \in E$, and $d(x_u, y_v) = 3$ otherwise. Furthermore, let $d(x_u, x_v) = d(y_u, y_v) = 0$ if $u = v$, and $d(x_u, x_v) = d(y_u, y_v) = 2$ otherwise. Observe that $d$ satisfies the triangle inequality. (Note that you cannot replace, say, 3 by 4 and 2 by 3.)

Observe that there is a dominating set of size $k$ in $(V, E)$ if and only if there is a solution $S \subseteq F$ of size $k$ with cost 1. And, if there is no dominating set of size $k$, then the cost of an optimal solution must be 3. Thus the dominating set problem can be solved in polynomial time by running the $(3 - \epsilon)$-AA on $(F, D, d)$ and checking whether the obtained cost is less than 3. As the dominating set problem is NP-complete, we get P = NP.

II-2 (WS 2.3)
The analysis of the list scheduling algorithm is somewhat similar to that in the case of no precedence constraints. Let $l$ be a job that completes last in the final schedule. We want to show that the completion time $C_l$ is at most $2\,\mathrm{OPT}$. To this end, we partition the time interval $[0, C_l]$ into two sets, namely, the set of times $F$ where all machines process some job (full schedule) and the set of times $P$ where some machine is idle (partial schedule). Observe that $F$ spans time at most $\sum_{j=1}^{n} p_j / m \le \mathrm{OPT}$. Thus it remains to show that $P$ spans time at most $\mathrm{OPT}$.

We construct a sequence of jobs $j_1, \ldots, j_k$ such that $j_k \prec \cdots \prec j_1 \prec l$, as follows. Denote by $S_j$ the start time of job $j$ in the schedule. Consider the last time point $t_1 \le S_l = C_l - p_l$ in $P$. Clearly some predecessor $j_1$ of $l$ is being processed at time $t_1$, because otherwise $l$ could have been scheduled earlier. Similarly, consider the last time point $t_2 \le S_{j_1} \le t_1$ in $P$. Again some predecessor $j_2$ of $j_1$ is being processed at time $t_2$, and so forth, until there is no such point in $P$. We get that the total processing time of the jobs $l, j_1, \ldots, j_k$ is at least the span of $P$, since the times these jobs are processed cover $P$. As an upper bound, the span of $P$ is at most the total time needed to process a maximum-length (in terms of total processing time) chain in the precedence structure, and this length is at most $\mathrm{OPT}$.
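For reference, here is a compact Python simulation of list scheduling with precedence constraints, the algorithm analyzed in II-2. The event-driven structure and all names are my own illustrative choices; it assumes the precedence relation is acyclic.

    import heapq

    def list_schedule(p, preds, m):
        """Greedy list scheduling on m identical machines.

        p: list of processing times p_j
        preds: list of sets, preds[j] = predecessors of job j
        m: number of machines
        Returns the makespan of the produced schedule.
        """
        n = len(p)
        remaining = [len(preds[j]) for j in range(n)]
        succs = [[] for _ in range(n)]
        for j in range(n):
            for q in preds[j]:
                succs[q].append(j)
        ready = [j for j in range(n) if remaining[j] == 0]
        running = []          # heap of (finish_time, job)
        time, makespan, free, done = 0, 0, m, 0
        while done < n:
            # start as many ready jobs as there are idle machines
            while ready and free > 0:
                j = ready.pop()
                heapq.heappush(running, (time + p[j], j))
                free -= 1
            # advance to the next job completion
            finish, j = heapq.heappop(running)
            time = max(time, finish)
            makespan = max(makespan, finish)
            free += 1
            done += 1
            for s in succs[j]:
                remaining[s] -= 1
                if remaining[s] == 0:
                    ready.append(s)
        return makespan

By II-2, the returned makespan is at most twice the optimal makespan.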

II-3 (WS 2.10)
We prove the extended version of Lemma 2.15: If $S$ is the subset constructed so far by the algorithm, and $i$ is the element chosen in the next iteration, then

$$f(S \cup \{i\}) - f(S) \ge \frac{1}{k}\bigl(f(O) - f(S)\bigr),$$

where $O \subseteq E$ is an optimal solution.

We first extend Lemma 2.17. Let $X \subseteq Y$ and $l \notin Y$. Then the submodularity of $f$ implies that

$$f((X \cup \{l\}) \cup Y) + f((X \cup \{l\}) \cap Y) \le f(X \cup \{l\}) + f(Y).$$

Rearranging and applying $X \subseteq Y$ and $l \notin Y$ gives us

$$f(Y \cup \{l\}) - f(Y) \le f(X \cup \{l\}) - f(X).$$

Let $O \setminus S = \{i_1, \ldots, i_p\}$. Consider the telescoping sum representation

$$f(O \cup S) = f(S) + \sum_{j=1}^{p} \bigl[ f(S \cup \{i_1, \ldots, i_j\}) - f(S \cup \{i_1, \ldots, i_{j-1}\}) \bigr].$$

We upper bound the right-hand side, using the extended Lemma 2.17, by

$$f(S) + \sum_{j=1}^{p} \bigl[ f(S \cup \{i_j\}) - f(S) \bigr].$$

Because the algorithm chooses $i \in E$ that maximizes $f(S \cup \{i\}) - f(S)$, we arrive at

$$f(O) \le f(O \cup S) \le f(S) + p\bigl[ f(S \cup \{i\}) - f(S) \bigr],$$

where the first inequality follows from the monotonicity of $f$. Rewriting and observing that $p \le k$ completes the proof. Finally, we apply the proof of Theorem 2.16 as is.
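The algorithm analyzed in II-3 is the standard greedy for maximizing a monotone submodular function under a cardinality constraint. A minimal Python sketch, with the set function f passed in as an argument (all names are illustrative):

    def greedy_submodular(ground_set, f, k):
        """Greedily pick k elements, each time maximizing the marginal gain of f.

        ground_set: collection of elements E
        f: function taking a frozenset of elements and returning a real value
        k: cardinality bound
        """
        S = frozenset()
        for _ in range(k):
            best, best_gain = None, float("-inf")
            for i in ground_set:
                if i in S:
                    continue
                gain = f(S | {i}) - f(S)
                if gain > best_gain:
                    best, best_gain = i, gain
            if best is None:
                break
            S = S | {best}
        return S

For instance, with the coverage function $f(S) = |\bigcup_{j \in S} S_j|$ this is exactly the greedy maximum coverage algorithm, and by Lemma 2.15 and Theorem 2.16 the returned set has value at least $(1 - 1/e)$ times the optimum.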

Week III

III-1 (WS 3.1)
Denote $[k] = \{1, \ldots, k\}$, let $O \subseteq \{1, \ldots, n\}$ be an optimal solution, and let $I = [k] \cap O$. For any set $J$ denote $v_J = \sum_{i \in J} v_i$ and $s_J = \sum_{i \in J} s_i$. We will use (Fact 1.10 in WS)

$$\frac{v_{[k] \setminus I}}{s_{[k] \setminus I}} \ge \frac{v_{k+1}}{s_{k+1}} \ge \frac{v_{O \setminus I}}{s_{O \setminus I}}.$$

Since $s_{[k]} + s_{k+1} > B$, we get that $s_{[k] \setminus I} > B - s_I - s_{k+1}$, and hence

$$v_{[k] \setminus I} > \frac{v_{k+1}}{s_{k+1}}\,(B - s_I - s_{k+1}) = \frac{v_{k+1}}{s_{k+1}}\,(B - s_I) - v_{k+1} \ge \frac{v_{O \setminus I}}{s_{O \setminus I}}\,(B - s_I) - v_{k+1} \ge v_{O \setminus I} - v_{k+1},$$

where the last inequality holds because $B \ge s_O$, so that $B - s_I \ge s_{O \setminus I}$. Thus $v_{[k]} \ge \mathrm{OPT} - v_{k+1}$. Now, if $v_{k+1} < \mathrm{OPT}/2$, we have $v_{[k]} \ge \mathrm{OPT}/2$. Otherwise $\max_i v_i \ge v_{k+1} \ge \mathrm{OPT}/2$.

Alternative proof. We claim that no solution of a total weight at most $B_m = s_1 + \cdots + s_m$ can achieve a total value larger than $v_1 + \cdots + v_m$. To see that the claim holds, consider a relaxed problem where each item $i$ is replaced by $s_i$ items $(i, j)$ for $j = 1, \ldots, s_i$, each of size $s_{i,j} = 1$ and value $v_{i,j} = v_i / s_i$. Clearly $v_{i,j} \ge v_{i',j'}$ if and only if $v_i / s_i \ge v_{i'} / s_{i'}$. Consequently, if we sort the items $(i, j)$ in decreasing order by the values $v_{i,j}$, then the total value of the first $B_m$ items is $v_1 + \cdots + v_m$. For the size bound $B_m$ this must be optimal (e.g., by the exchange argument). To complete the proof, we apply the claim for $m = k + 1$ and conclude

$$\max\{v_1 + \cdots + v_k,\; \max_i v_i\} \ge \max\{v_1 + \cdots + v_k,\; v_{k+1}\} \ge \frac{1}{2}(v_1 + \cdots + v_{k+1}) \ge \frac{1}{2}\,\mathrm{OPT}.$$

III-2 (WS 3.2)
We replace the bound $M$ in the construction by the value of the greedy solution, $M'$. Observe that $\mathrm{OPT} \ge M' \ge \mathrm{OPT}/2$. Thanks to the first inequality, the approximation guarantee is unaffected by the replacement. Consider an arbitrary feasible solution $S \subseteq \{1, \ldots, n\}$ for the scaled instance with values $v_i' = \lfloor v_i / \mu \rfloor$, where $\mu = \epsilon M' / n$. We can upper bound the value of $S$ by

$$\sum_{i \in S} v_i' \le \sum_{i \in S} \frac{v_i}{\epsilon M' / n} \le \frac{\mathrm{OPT}}{\epsilon\,\mathrm{OPT}/(2n)} = 2n/\epsilon,$$

thus eliminating a factor of $n$ from the original bound $O(n^2/\epsilon)$.
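The greedy solution analyzed in III-1 is also what III-2 uses as the bound $M'$. Here is a minimal Python sketch of that 1/2-approximation: fill by value density and return the better of the greedy selection and the single most valuable item. It assumes every single item fits in the knapsack; the names and input encoding are mine.

    def knapsack_half_approx(values, sizes, B):
        """Return an item set of value at least OPT/2 (assumes every s_i <= B, s_i > 0)."""
        order = sorted(range(len(values)), key=lambda i: values[i] / sizes[i], reverse=True)
        greedy, used = [], 0
        for i in order:
            if used + sizes[i] <= B:
                greedy.append(i)
                used += sizes[i]
        best_single = max(range(len(values)), key=lambda i: values[i])
        if sum(values[i] for i in greedy) >= values[best_single]:
            return greedy
        return [best_single]

The returned value is at least the maximum of the greedy prefix value and $\max_i v_i$, which by III-1 is at least $\mathrm{OPT}/2$.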

III-3 (WS 3.6)
We will imitate the proof of Theorem 3.5 for the knapsack problem. We will, however, encounter some difficulty in finding a good upper bound for the optimal cost $\mathrm{OPT}$.

Suppose for a moment that we know an upper bound $U \ge \mathrm{OPT}$. Consider the following algorithm. First remove (i.e., ignore) every edge whose cost is larger than $U$; clearly such an edge cannot appear in an optimal solution. Scale the cost of each remaining edge $e$ by setting $c_e' = \lceil c_e / \mu \rceil$, where $\mu = \epsilon U / n$. Next, solve the problem for the scaled costs by dynamic programming, e.g., using the recurrence

$$f(v, C') = \min_{(u,v) \in E} \bigl\{ f(u, C' - c'_{(u,v)}) + l_{(u,v)} \bigr\},$$

where $f(v, C')$ is the minimum length of a path from $s$ to $v$ of cost at most $C'$. Return a path that achieves the minimum cost, i.e., $\min\{C' : f(t, C') \le L\}$. The running time is polynomial in $n$ and $1/\epsilon$, as $C'$ only needs to run from $0$ to $n \lceil n/\epsilon \rceil$.

For an analysis of the approximation guarantee, denote the set of edges in the found path by $S$ and in an optimal path by $O$. We have

$$\sum_{e \in S} c_e \le \mu \sum_{e \in S} c_e' \le \mu \sum_{e \in O} c_e' \le \mu \sum_{e \in O} (c_e/\mu + 1) \le \sum_{e \in O} c_e + n\mu = \mathrm{OPT} + \epsilon U.$$

Observe that if $U$ were a constant-factor (or even polynomial-factor) approximation of $\mathrm{OPT}$, then we would already be done, as we could just set $\epsilon$ small enough to get a $(1 + \epsilon')$-AA for the problem for any $\epsilon' > 0$, running in time polynomial in $1/\epsilon'$.

To get such an upper bound $U$, we resort to an iterative, yet very simple, initialization routine. We set $\epsilon = 1/2$ and $U$ initially to $U_0 = n \max_e c_e$. In the first iteration we get a new upper bound $U_1 = \sum_{e \in S_0} c_e \le \mathrm{OPT} + U_0/2$. After $k = \lceil \log_2 U_0 \rceil$ iterations the found solution $S_k$ yields the cost

$$U_{k+1} = \sum_{e \in S_k} c_e \le \mathrm{OPT} + U_k/2 \le \mathrm{OPT} + \mathrm{OPT}/2 + \cdots + \mathrm{OPT}/2^{k-1} + U_0/2^k \le 2\,\mathrm{OPT}.$$

Because the number of iterations is only logarithmic in the edge costs, the overall running time is polynomial in the input size. We can use the bound $U_{k+1}$ to get an FPTAS.

Alternative upper bound construction. Sort the edges in increasing order by their costs, $c_1 \le c_2 \le \cdots \le c_m$. For $k = 1, 2, 3, \ldots$ consider an instance where only the first $k$ edges in the order are included, the rest being deleted. Find a shortest path from $s$ to $t$, if any, in this reduced graph, disregarding the costs. If the length of the path is at most $L$, that is, there is a feasible path, then set the bound $U = n c_k$ and terminate the construction. To see that $\mathrm{OPT} \le U$, it suffices to observe that a feasible path contains at most $n - 1 < n$ edges. To see that $U/n \le \mathrm{OPT}$, observe that any feasible path from $s$ to $t$ in the original graph with all the edges must include at least one edge of cost at least $c_k$.
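A minimal Python sketch of the scaled dynamic program in III-3. The graph encoding, the function name, and the dense table are my own choices; the edge costs are assumed to be already scaled to small nonnegative integers, as in the construction above.

    import math

    def cheapest_feasible_path_cost(n, edges, s, t, L, max_cost):
        """f[v][C] = minimum length of an s-v path of scaled cost at most C.

        n: number of vertices 0..n-1
        edges: list of (u, v, scaled_cost, length) tuples
        Returns the smallest scaled cost C with f[t][C] <= L, or None.
        """
        INF = math.inf
        f = [[INF] * (max_cost + 1) for _ in range(n)]
        for C in range(max_cost + 1):
            f[s][C] = 0.0
            changed = True
            while changed:          # relax until stable (handles zero-cost edges)
                changed = False
                for (u, v, c, length) in edges:
                    if c <= C and f[u][C - c] + length < f[v][C]:
                        f[v][C] = f[u][C - c] + length
                        changed = True
            if f[t][C] <= L:
                return C
        return None

With max_cost set to $n \lceil n/\epsilon \rceil$, this realizes the recurrence used above; unscaling the returned cost and comparing with the analysis gives the $(1 + \epsilon)$ guarantee.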

Week IV

IV-1 (WS 4.1)
Consider the algorithm that routes each call via the shortest path between the two nodes in the ring. Denote the respective routing by $S$. Clearly the algorithm runs in polynomial time. It remains to prove that the approximation factor is at most 2.

Denote by $\bar{i}$ the opposite node of each node $i$, that is, $\bar{i} = i + n/2$. For a routing $R$ and node $i$, let $L_i^R$ be the number of calls $(u, v) \in C$ for which the routing $R_{u,v}$ contains the link $(i, i+1)$. Now, let $i$ be the node that maximizes $L_i^S$. Let $(u, v)$ be a call that contributes 1 to $L_i^S$, that is, $(i, i+1)$ is in the shortest path between $u$ and $v$. We claim that for any routing $R$ the call $(u, v)$ contributes 1 to either $L_i^R$ or $L_{\bar{i}}^R$. From this we obtain $\max\{L_i^R, L_{\bar{i}}^R\} \ge L_i^S / 2$, which suffices for showing that $L_i^S \le 2\,\mathrm{OPT}$.

To prove the claim we consider two cases. If $R_{u,v} = S_{u,v}$, then the call $(u, v)$ contributes 1 to $L_i^R$ and we are done. Otherwise $R_{u,v} \neq S_{u,v}$, meaning that $R_{u,v}$ is the longer path from $u$ to $v$ in the ring. Because $S_{u,v}$ is the shortest path, it cannot contain both $(i, i+1)$ and $(\bar{i}, \bar{i}+1)$. Therefore $(\bar{i}, \bar{i}+1)$ must be in $R_{u,v}$, and thus $(u, v)$ contributes 1 to $L_{\bar{i}}^R$.

Alternative proof. For each call $c \in C$ let $P_c$ be the set of the two paths in the ring for routing the call either clockwise or counterclockwise. Let $P = \bigcup_c P_c$. Denote by $E$ the set of edges (or links) of the ring. Consider the following integer linear program:

minimize $z$
subject to $\sum_{p \in P : e \in p} x_p \le z$ for all $e \in E$,
$\sum_{p \in P_c} x_p = 1$ for all $c \in C$,
$x_p \in \{0, 1\}$ for all $p \in P$.

We observe that the program models the SONET ring loading problem. Let $x^*$ be an optimal solution to the linear programming relaxation, obtained by replacing the constraint $x_p \in \{0,1\}$ by $x_p \ge 0$. Let $z^*$ be the respective optimal value. Let $\hat{x}$ be a rounded version of $x^*$ obtained by setting each $\hat{x}_p$ to 1 if $x_p^* \ge 1/2$ and to 0 otherwise. For the total load we obtain the guarantee

$$\max_{e} \sum_{p : e \in p} \hat{x}_p \le \max_{e} \sum_{p : e \in p} 2 x_p^* = 2 z^* \le 2\,\mathrm{OPT}.$$
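A tiny Python sketch of the first algorithm in IV-1: route every call the short way around the ring and report the maximum link load. The encoding of link $(i, i+1 \bmod n)$ by its index $i$ and the function name are illustrative choices.

    def ring_load_shortest(n, calls):
        """Route each call (u, v) along its shorter arc on an n-node ring.

        Returns the maximum number of calls using any single link (i, i+1 mod n).
        """
        load = [0] * n                     # load[i] counts calls on link (i, i+1 mod n)
        for (u, v) in calls:
            clockwise = (v - u) % n        # number of links going u -> v clockwise
            if clockwise <= n - clockwise:
                links = [(u + step) % n for step in range(clockwise)]
            else:
                links = [(v + step) % n for step in range(n - clockwise)]
            for i in links:
                load[i] += 1
        return max(load)

By the argument above, the returned load is at most twice the optimal ring load.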

IV-2 (WS 4.7(a-b))
(a) We consider the obvious bijection $\phi$ between the vertex subsets $U \in \binom{V}{k}$ and the vectors $x \in \{0,1\}^V$ satisfying $\sum_{i \in V} x_i = k$, namely, $\phi(x) = \{i \in V : x_i = 1\}$. It remains to show that the objective functions are the same, that is,

$$\sum_{(i,j) \in E} w_{ij}\,(x_i + x_j - 2 x_i x_j) = \sum_{(i,j) \in E} w_{ij}\,\bigl[\,|\{i, j\} \cap \phi(x)| = 1\,\bigr],$$

where $[Q]$ is the indicator function of the proposition $Q$. But this holds because $x_i + x_j - 2 x_i x_j = [x_i \neq x_j]$, which can be verified by considering the four cases.

(b) Let $x$ be a feasible solution to the nonlinear integer program. We will show that there exists a $z$ such that (i) $(x, z)$ is a feasible solution to the linear programming relaxation and (ii) $F(x) = L(z)$, where $F$ and $L$ are the objective functions of the nonlinear and the linear program, respectively. We put $z_{ij} = x_i + x_j - 2 x_i x_j$. Clearly the condition (ii) holds. To see that the condition (i) holds, observe that $z_{ij} \in \{0, 1\}$ and that $z_{ij} \le 2 - x_i - x_j$, since $x_i + x_j - x_i x_j \le 1$ for all $x_i, x_j \in \{0, 1\}$.

IV-3 (WS 4.7(c-e))
(c) Let $(x, z)$ be a feasible solution to the linear program. To prove $F(x) \ge L(z)/2$, we will show that $z_{ij} \le 2(x_i + x_j - 2 x_i x_j)$ for all $(i, j) \in E$. Consider the two nontrivial constraints for $z_{ij}$ and, for convenience, write $a$ for $x_i$ and $b$ for $x_j$. We have that $a + b \le 2 - a - b$ if and only if $a + b \le 1$.

First assume $a + b \le 1$. We have $a + b \le 2(a + b - 2ab)$ if and only if $4ab \le a + b$. But the latter inequality holds because $4ab \le (a+b)^2$ and $a + b \le 1$.

Then assume $a + b > 1$. We have $2 - a - b \le 2(a + b - 2ab)$ if and only if $2(a+b) - 4ab \ge 2 - (a+b)$. But the latter inequality holds because $4ab \le (a+b)^2$ and $a + b \ge 1$.

(d) Let $x$ be a fractional solution to the nonlinear program. Clearly there exist two indices $i$ and $j$ such that $0 < x_i, x_j < 1$. For a real number $\epsilon$, denote by $x^\epsilon$ the vector obtained from $x$ by replacing $x_i$ by $x_i + \epsilon$ and $x_j$ by $x_j - \epsilon$. Calculation shows that

$$F(x^\epsilon) - F(x) = \epsilon \left( \sum_{(i,s) \in E : s \neq j} w_{is}\,(1 - 2 x_s) - \sum_{(s,j) \in E : s \neq i} w_{sj}\,(1 - 2 x_s) \right) + 2 \epsilon\,(x_i - x_j + \epsilon)\, w_{ij}\,[(i,j) \in E].$$

Assume w.l.o.g. (due to the symmetry of $i$ and $j$) that in the first term the factor of $\epsilon$ is nonnegative. Set $\epsilon$ to $\min\{1 - x_i, x_j\}$, implying that either $x_i + \epsilon = 1$ or $x_j - \epsilon = 0$. It remains to see that

$$x_i - x_j + \epsilon = x_i - x_j + \min\{1 - x_i, x_j\} = \min\{1 - x_j, x_i\} > 0.$$

(e) Consider the algorithm that first finds an optimal solution $(x^*, z^*)$ to the linear programming relaxation; clearly this can be done in polynomial time, as the number of variables and constraints is polynomial. Then the algorithm repeatedly rounds each noninteger $x_i^*$ to either 0 or 1 using the above scheme; this results in a vector $\hat{x}$ in polynomial time. We have the following guarantees:

$$F(\hat{x}) \ge F(x^*) \ge \frac{1}{2} L(z^*) \ge \frac{1}{2} \max_x F(x) = \frac{1}{2}\,\mathrm{OPT},$$

where the inequalities follow from parts (d), (c), and (b), respectively, the equality from part (a), and $x$ runs through the feasible points of the nonlinear program.
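The rounding scheme of parts (d) and (e) can be phrased as a simple loop: as long as two coordinates are fractional, shift mass between them in the direction that does not decrease $F$, until one of them becomes integral. Here is a Python sketch under illustrative data structures (x as a list, w as a dict over unordered edges (i, j) with i < j); it is my own rendering of the argument above, not code from WS.

    def round_fractional(x, w):
        """Round x in place so that F(x) = sum w[i,j]*(x_i + x_j - 2*x_i*x_j)
        never decreases and sum(x) is preserved."""

        def bracket(i, j):
            # coefficient of epsilon when x_i grows and x_j shrinks (edge (i,j) excluded)
            total = 0.0
            for (a, b), wt in w.items():
                for (p, q) in ((a, b), (b, a)):
                    if p == i and q != j:
                        total += wt * (1 - 2 * x[q])
                    if p == j and q != i:
                        total -= wt * (1 - 2 * x[q])
            return total

        while True:
            frac = [i for i in range(len(x)) if 0.0 < x[i] < 1.0]
            if len(frac) < 2:
                return x
            i, j = frac[0], frac[1]
            if bracket(i, j) < 0:
                i, j = j, i              # by symmetry, grow the other coordinate instead
            if 1.0 - x[i] <= x[j]:       # epsilon = 1 - x_i: x_i becomes exactly 1
                x[j] -= 1.0 - x[i]
                x[i] = 1.0
            else:                        # epsilon = x_j: x_j becomes exactly 0
                x[i] += x[j]
                x[j] = 0.0

Each iteration makes at least one coordinate integral, so the loop terminates after at most $|V|$ rounds, and by (d) the objective never decreases.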

Week V

V-1 (WS 5.6(a))
Let $U$ be a solution to MAX DICUT. Put $x_i = [i \in U]$ for each $i \in V$. Also put $z_{ij} = [x_i = 1, x_j = 0]$. Clearly $(x, z)$ is a feasible solution to the integer linear program (ILP) and its value is the sum of the weights $w_{ij}$ of all arcs $(i, j) \in A$ for which $x_i = 1$ and $x_j = 0$, thus equalling the value of $U$. We have shown that the optimal value of MAX DICUT is at most the optimal value of the ILP.

Let then $(x, z)$ be a feasible solution to the ILP. Put $U = \{i \in V : x_i = 1\}$. We observe that $z_{ij} \le \min\{x_i, 1 - x_j\} = [x_i = 1, x_j = 0]$ for all arcs $(i, j) \in A$. Thus we have shown that the optimal value of the ILP is at most the optimal value of MAX DICUT. The set $U$ can be trivially read from the solution $(x, z)$.

V-2 (WS 5.6(b))
Let $f(r) = 1/4 + r/2$ for all real numbers $r$. Let $(x^*, z^*)$ be an optimal solution to the LP, and let $\hat{x}_i$ be independent Bernoulli$(f(x_i^*))$ random variables for $i \in V$. The expected total weight of $U = \{i \in V : \hat{x}_i = 1\}$ is

$$\mathrm{E}\left( \sum_{(i,j) \in A} w_{ij}\,[\hat{x}_i = 1, \hat{x}_j = 0] \right) = \sum_{(i,j) \in A} w_{ij} \Pr(\hat{x}_i = 1, \hat{x}_j = 0).$$

On the other hand,

$$\mathrm{OPT} \le \sum_{(i,j) \in A} w_{ij}\,z_{ij}^* \le \sum_{(i,j) \in A} w_{ij} \min\{x_i^*, 1 - x_j^*\}.$$

(The second inequality is in fact an equality, as the objective function is maximized by setting $z_{ij}$ as large as possible.) To prove that the algorithm is a randomized 1/2-approximation algorithm, it thus suffices to show that $f(r)(1 - f(s)) \ge \min\{r, 1-s\}/2$ for all $0 \le r, s \le 1$. To this end, let $m = \min\{r, 1 - s\}$. Because $f$ is an increasing function and $s \le 1 - m$, we have

$$f(r)(1 - f(s)) \ge f(m)\bigl(1 - f(1 - m)\bigr) = \left( \frac{1}{4} + \frac{m}{2} \right)^{\!2} \ge \frac{m}{2}.$$

Alternative calculation. Because of the rounding rule, we have

$$\Pr(\hat{x}_i = 1, \hat{x}_j = 0) = \left( \frac{1}{4} + \frac{x_i^*}{2} \right)\left( 1 - \left( \frac{1}{4} + \frac{x_j^*}{2} \right) \right) = \left( \frac{1}{4} + \frac{x_i^*}{2} \right)\left( \frac{1}{4} + \frac{1 - x_j^*}{2} \right).$$

Since $z_{ij}^* \le x_i^*$ and $z_{ij}^* \le 1 - x_j^*$, we get

$$\Pr(\hat{x}_i = 1, \hat{x}_j = 0) \ge \left( \frac{1}{4} + \frac{z_{ij}^*}{2} \right)^{\!2} \ge \frac{1}{2}\,z_{ij}^*.$$
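A small Python sketch of the rounding step in V-2, assuming an LP solution x_star is already available (obtaining it requires an LP solver, which is outside this sketch); all names are illustrative.

    import random

    def round_dicut(x_star, arcs, w):
        """Randomized rounding of V-2: put vertex i in U with probability 1/4 + x*_i/2.

        x_star: dict vertex -> LP value in [0, 1]
        arcs: list of directed arcs (i, j)
        w: dict arc -> nonnegative weight
        Returns (U, weight of the directed cut from U to its complement).
        """
        U = {i for i, xi in x_star.items() if random.random() < 0.25 + 0.5 * xi}
        value = sum(w[(i, j)] for (i, j) in arcs if i in U and j not in U)
        return U, value

By the argument above, the expected value of the returned cut is at least half of the LP optimum, and hence at least $\mathrm{OPT}/2$.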

V-3 (WS 5.8)
The following ILP is a straightforward modification of the one given for MAX SAT:

maximize $\sum_{j=1}^{m} w_j z_j + \sum_{i=1}^{n} v_i (1 - y_i)$
subject to $\sum_{i \in C_j} y_i \ge z_j$ for $j = 1, \ldots, m$,
$y_i \in \{0, 1\}$ for $i = 1, \ldots, n$,
$z_j \in [0, 1]$ for $j = 1, \ldots, m$.

Let $\lambda > 0$ and $f(r) = 1 - \lambda + \lambda r$. Consider the algorithm that first finds an optimal solution $(y^*, z^*)$ to the obvious linear programming relaxation of the above ILP and then constructs a truth assignment $x$ by setting $x_i$ to true with probability $f(y_i^*)$, independently for each $i$. We have that the expected total weight of $x$ is

$$\mathrm{E}(W) = \sum_j w_j \bigl( 1 - \Pr(x_i = \text{false for all } i \in C_j) \bigr) + \sum_i v_i \bigl( 1 - \Pr(x_i = \text{true}) \bigr) = \sum_j w_j \Bigl( 1 - \prod_{i \in C_j} (1 - f(y_i^*)) \Bigr) + \sum_i v_i \bigl( 1 - f(y_i^*) \bigr) = \sum_j w_j \Bigl( 1 - \prod_{i \in C_j} (\lambda - \lambda y_i^*) \Bigr) + \sum_i v_i (\lambda - \lambda y_i^*).$$

We observe that the latter term in the sum equals $\lambda \sum_i v_i (1 - y_i^*)$. Thus, we have a randomized $\lambda$-approximation algorithm for the problem, provided that the first term in the sum is at least $\lambda \sum_j w_j z_j^*$. We will show that this holds for $\lambda = 2(\sqrt{2} - 1)$.

Denote $s = \sum_{i \in C_j} y_i^*$ and $k = |C_j|$, the number of variables in the clause $C_j$. Because the product of the numbers $(1 - y_i^*)$ cannot be larger than the $k$th power of their arithmetic mean, we have

$$1 - \prod_{i \in C_j} (\lambda - \lambda y_i^*) \ge 1 - \lambda^k \left( 1 - \frac{s}{k} \right)^{\!k} \ge 1 - \lambda^k \left( 1 - \frac{z_j^*}{k} \right)^{\!k}. \qquad (1)$$

It suffices to show that

$$g_k(z_j^*) := 1 - \lambda^k \left( 1 - \frac{z_j^*}{k} \right)^{\!k} - \lambda z_j^* \ge 0 \quad \text{for all } 0 \le z_j^* \le 1.$$

The function $g_k(z_j)$ is decreasing, since $g_k'(z_j) = \lambda^k (1 - z_j/k)^{k-1} - \lambda \le 0$ for all $k \ge 1$ and $0 \le \lambda \le 1$. Thus it remains to show that $g_k(1) \ge 0$ for all $k = 1, 2, \ldots$, with the particular choice of $\lambda = 2(\sqrt{2} - 1)$. We consider five cases separately:

Case $k = 1$: $g_1(1) = 1 - \lambda \cdot 0 - \lambda = 1 - \lambda \ge 0$.

Case $k = 2$: $g_2(1) \ge 0 \iff 1 - \lambda^2/4 - \lambda \ge 0 \iff -2(\sqrt{2}+1) \le \lambda \le 2(\sqrt{2}-1)$.

Case $k = 3$: $g_3(1) = 1 - \lambda - (2\lambda/3)^3 \ge 17/100 - (166/300)^3 \ge 0$.

Case $k = 4$: $g_4(1) = 1 - \lambda - (3\lambda/4)^4 \ge 0$, since $(3\lambda/4)^4 < 3/20 < 1 - \lambda$.

Case $k \ge 5$: $g_k(1) = 1 - \lambda - \lambda^k (1 - 1/k)^k \ge 1 - \lambda - \lambda^5/e \ge 0$.
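The case analysis for $g_k(1)$ is easy to sanity-check numerically; the following short Python snippet is purely illustrative and not part of the proof.

    import math

    lam = 2 * (math.sqrt(2) - 1)

    def g(k, z):
        return 1 - lam**k * (1 - z / k)**k - lam * z

    # g_k is decreasing in z, so checking z = 1 suffices; k = 1..50 as a spot check.
    # The small tolerance absorbs floating-point error in the tight case k = 2.
    assert all(g(k, 1.0) >= -1e-12 for k in range(1, 51))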

Week VI

VI-1 (WS 7.1)
Dijkstra's algorithm maintains a partition of the vertices into visited and unvisited vertices. Initially all vertices are unvisited. Each vertex $v$ is also assigned a tentative distance to $s$, denoted by $d[v]$. Initially, $d[s] = 0$ and $d[v] = \infty$ for all $v \neq s$. In each iteration, the algorithm picks the unvisited vertex $u$ with the lowest tentative distance $d[u]$, calculates the distance through it to each unvisited neighbor $v$ of $u$ as $d[u] + c_{u,v}$, and updates the neighbor's tentative distance $d[v]$ if the obtained distance is smaller. It is known that each time the algorithm picks a vertex $u$ (and moves it to the visited vertices), the distance $d[u]$ is the cost of a shortest path from $s$ to $u$. Let $p(u)$ denote the predecessor of $u$ on that shortest path.

Suppose the first $k$ vertices the algorithm picks (after picking $s$) are $u_1, \ldots, u_k$ in this order. We claim that the edge set $F' = \{(p(u_1), u_1), \ldots, (p(u_k), u_k)\}$ equals the edge set $F$ constructed by the primal-dual algorithm after $k$ iterations.

The claim clearly holds for $k = 1$. Namely, Dijkstra's algorithm picks the neighbor $u$ of $s$ that minimizes $c_{s,u}$. Likewise the primal-dual algorithm increases $y_{\{s\}}$ to $c_{s,u}$ and adds $(s, u)$ to $F$.

Now, consider the $k$th iteration of the algorithm. Let $C$ be the connected component of $(V, F)$ containing $s$. Note that $C$ contains $k$ vertices. Define the time spent by the algorithm as the total amount the $y$-variables have been increased, $t_C = \sum_S y_S$. Let $t_C[u]$ be the time at which a vertex $u \notin C$ would be added by the primal-dual algorithm to the connected component $C$ of $(V, F)$, if the connected component were not to change. More formally,

$$t_C[u] = t_C + \varepsilon_C(u), \qquad \varepsilon_C(u) = \min_{p \in C} \Bigl( c_{p,u} - \sum_{S : (p,u) \in \delta(S)} y_S \Bigr).$$

In particular, $t_{\{s\}}[u] = c_{s,u}$. Observe that the next vertex $u \notin C$ the algorithm adds to the connected component is the one that minimizes $t_C[u]$. Furthermore, once $u$ has been added, we have the update rule

$$t_{C \cup \{u\}}[v] = \min\bigl\{\, t_C[v],\; t_C[u] + c_{u,v} \,\bigr\} \qquad \text{for } v \notin C \cup \{u\}.$$

Intuitively speaking, this holds because the time increased by $y_C = \varepsilon_C(u)$, but also each sum containing the term $y_C$ increased by this amount. More rigorously,

$$\varepsilon_{C \cup \{u\}}(v) = \min\bigl\{\, \varepsilon_C(v) - y_C,\; c_{u,v} \,\bigr\},$$

as the minimizing $p$ either belongs to $C$ or equals $u$. Adding $t_{C \cup \{u\}} = t_C + y_C = t_C[u]$ yields the said update rule. To complete the proof, it suffices to observe that $t_C[u] = d[u]$, where $d[u]$ is the tentative distance to $u$ after picking the vertices in $C$. This holds because the initial values and the update rules are the same.
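For concreteness, here is a standard Dijkstra implementation that also records the predecessor edges $(p(u), u)$; by the argument in VI-1 these are exactly the edges the primal-dual algorithm builds. The adjacency-list encoding and the function name are illustrative choices of mine.

    import heapq

    def dijkstra_tree_edges(adj, s):
        """adj: dict u -> list of (v, cost); returns (d, tree_edges), where
        tree_edges lists (p(u), u) in the order the vertices are picked."""
        d = {s: 0}
        pred = {}
        picked = set()
        tree_edges = []
        heap = [(0, s)]
        while heap:
            du, u = heapq.heappop(heap)
            if u in picked:
                continue
            picked.add(u)
            if u != s:
                tree_edges.append((pred[u], u))
            for v, c in adj.get(u, []):
                if v not in picked and du + c < d.get(v, float("inf")):
                    d[v] = du + c
                    pred[v] = u
                    heapq.heappush(heap, (d[v], v))
        return d, tree_edges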

VI-2 (WS 7.8)
Suppose the algorithm opens the set $T$ of facilities and constructs the dual solution $(v, w)$. We will prove that

$$\sum_{j \in D} \min_{i \in T} c_{ij} + 3 \sum_{i \in T} f_i \le 3 \sum_{j \in D} v_j. \qquad (2)$$

Note that this strengthens the already proved guarantee by allowing 3 times larger facility costs $f_i$ without losing the approximation factor of 3. We start with the following equation given in the proof of Theorem 7.14:

$$\sum_{i \in T} f_i + \sum_{i \in T} \sum_{j \in A(i)} c_{ij} = \sum_{i \in T} \sum_{j \in A(i)} v_j.$$

Recall that here $A(i)$ is the set of neighboring clients assigned to a facility $i \in T$. By rearranging and denoting $A = \bigcup_{i \in T} A(i)$ we obtain

$$\sum_{i \in T} f_i = \sum_{j \in A} v_j - \sum_{i \in T} \sum_{j \in A(i)} c_{ij} \le \sum_{j \in A} v_j - \sum_{j \in A} \min_{i \in T} c_{ij}.$$

To prove (2) we denote $Z = D \setminus A$ and write

$$\sum_{j \in D} \min_{i \in T} c_{ij} + 3 \sum_{i \in T} f_i \le \sum_{j \in Z} \min_{i \in T} c_{ij} + \sum_{j \in A} \min_{i \in T} c_{ij} - 3 \sum_{j \in A} \min_{i \in T} c_{ij} + 3 \sum_{j \in A} v_j \le \sum_{j \in Z} \min_{i \in T} c_{ij} + 3 \sum_{j \in A} v_j \le 3 \sum_{j \in Z} v_j + 3 \sum_{j \in A} v_j,$$

where the last inequality follows from Lemma 7.13 (as in the proof of Theorem 7.14).