Network Design and Game Theory Spring 2008 Lecture 6


Guest Lecturer: Aaron Archer
Instructor: Mohammad T. Hajiaghayi
Scribe: Fengming Wang
March 3, 2008

1 Overview

We study the primal-dual / Lagrangian-relaxation paradigm for approximation algorithms, applied to two NP-hard problems: the Steiner tree problem (both the standard and the prize-collecting version) and the k-minimum spanning tree (k-MST) problem. The corresponding reference is [1].

2 General Steiner Tree

2.1 Definition

Given a graph G = (V, E), a cost function C : E -> R+, and a subset of terminal nodes T ⊆ V, find a tree F ⊆ E that spans T and has minimum total cost. For example, let the terminal set T be the three outer nodes in Figure 1; the extra center node c is called a Steiner node. It is easy to see that the corresponding Steiner tree is the star centered at c, of total weight 3, while any spanning tree that avoids c costs 4 units.

Without loss of generality we assume that G is a complete graph and that C obeys the triangle inequality, since the cost function can always be transformed into a metric by taking the shortest-path distance between any two nodes.

2.2 2-Approximation via MST

Compute the minimum spanning tree MST on the subgraph induced by T; we claim that this 2-approximates the optimal Steiner tree OPT. The proof consists of two steps. Since OPT is a tree, doubling its edges and running a depth-first search from an arbitrary node constructs an Eulerian tour

Figure 1: Steiner Tree

on the nodes spanned by OPT. Note that the cost of the tour is twice that of OPT. Then shortcut the tour by skipping all of the Steiner nodes and joining consecutive terminal nodes in the order the tour visits them; this yields a spanning tree on T. By the triangle inequality on C, the cost of this spanning tree is no larger than that of the tour; therefore MST is a valid 2-approximation.

2.3 2-Approximation via LP

First we fix some notation. For any S ⊆ V, let δ(S) = {(u, v) ∈ E : u ∈ S, v ∉ S}. Let C = {S ⊆ V : ∅ ≠ S ∩ T ≠ T} be the family of cuts separating the terminals. For any F ⊆ E, C(F) = Σ_{e ∈ F} C_e. The following observation is easy to verify and is crucial for our algorithm.

Claim 1 F ⊆ E spans T if and only if F ∩ δ(S) ≠ ∅ for every S ∈ C.

Proof: Apply the max-flow/min-cut theorem.

We interpret this in terms of integer programming, defining an indicator variable for each edge e ∈ E: χ_e = 1 if e is in our solution tree, 0 otherwise.

    min  Σ_{e ∈ E} C_e χ_e
    s.t. Σ_{e ∈ δ(S)} χ_e ≥ 1   for all S ∈ C

Relax the condition χ_e ∈ {0, 1} to nonnegative real variables. We now have a linear program; let us take a close look at its dual:

    max  Σ_{S ∈ C} y_S
    s.t. Σ_{S : e ∈ δ(S)} y_S ≤ C_e   for all e ∈ E
         y_S ≥ 0

Both the primal and the dual programs are clearly feasible, so the complementary slackness conditions are satisfied for any pair of optimal solutions.

1. For every e ∈ E: χ_e > 0 ⟹ Σ_{S : e ∈ δ(S)} y_S = C_e.

2. For every S ∈ C: y_S > 0 ⟹ Σ_{e ∈ δ(S)} χ_e = 1.

These two implications have economic interpretations. The first says that for every edge bought by the primal, the dual pays for it exactly; the second says that each cut with positive dual pays for exactly one edge. Therefore we get the following chain of equalities for optimal primal/dual pairs:

    Σ_{e ∈ E} C_e χ_e = Σ_{e ∈ E} χ_e Σ_{S : e ∈ δ(S)} y_S = Σ_{S ∈ C} y_S Σ_{e ∈ δ(S)} χ_e = Σ_{S ∈ C} y_S,

where the last step uses that Σ_{e ∈ δ(S)} χ_e = 1 whenever y_S ≠ 0. More generally, by weak duality, any feasible dual solution is a lower bound on the primal optimum. Next we devise an algorithm whose output never costs more than twice one specific feasible dual solution.

Agrawal-Klein-Ravi (AKR) Algorithm for Steiner Tree

F = ∅.
repeat
    repeat
        Grow y_S at unit rate for every component induced by F that contains a terminal.
    until some edge e goes tight (is fully paid for by the cuts it crosses) [break ties arbitrarily].
    F <- F + e. [This never creates a cycle, because edge (u, v) stops being paid for once u and v lie in the same component.]
until F spans T.
Pruning: iteratively delete edges to Steiner nodes that are leaves.

Consider the graph in Figure 2, where the terminal nodes are a, b, e and f. The first several steps of the algorithm run as follows:

1. Grow y_{a}, y_{b}, y_{e}, y_{f} at rate 1. Merge ac, bd, eg at time 1.

2. Grow y_{a,c}, y_{b,d}, y_{e,g}, y_{f}. Merge aceg, bdf at time 2.

3. ...

Remark: The y_S values produced by the algorithm always satisfy the constraints of the dual program, so they form a feasible solution.

Remark: The ratio between the cost of the algorithm's output and the optimal value of the linear program can be as large as 2; it is not hard to see that a simple cycle exhibits this gap.
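The moat-growing loop of AKR can be written as a small discrete-event simulation. The sketch below is illustrative only: it assumes a hypothetical input format `edges = [(u, v, cost), ...]` on vertices `0..n-1`, tracks the dual money paid into each edge (rather than the y_S variables explicitly), assumes G is connected, and omits the pruning step.

```python
def akr_steiner(n, edges, terminals, eps=1e-9):
    """Growth phase of the AKR primal-dual algorithm (sketch, no pruning).

    Every component containing a terminal grows its dual at unit rate;
    an edge is bought the moment the duals crossing it pay its cost.
    Returns the indices of bought edges.
    """
    parent = list(range(n))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    paid = [0.0] * len(edges)         # dual money accumulated on each edge
    forest = []
    terminals = set(terminals)

    while len({find(t) for t in terminals}) > 1:
        roots = {find(t) for t in terminals}   # active (terminal) components
        # find the next edge between two components to go tight
        best_dt, best_i = None, None
        for i, (u, v, c) in enumerate(edges):
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            rate = (ru in roots) + (rv in roots)  # moats paying this edge
            if rate == 0:
                continue
            dt = (c - paid[i]) / rate
            if best_dt is None or dt < best_dt - eps:
                best_dt, best_i = dt, i
        # advance time: every active moat pays into every edge it crosses
        for i, (u, v, c) in enumerate(edges):
            ru, rv = find(u), find(v)
            if ru != rv:
                paid[i] += best_dt * ((ru in roots) + (rv in roots))
        u, v, c = edges[best_i]       # buy the tight edge, merge components
        forest.append(best_i)
        parent[find(u)] = find(v)
    return forest
```

On the star instance from Section 2.1 (three terminals at distance 1 from a center, pairwise distance 2), the algorithm buys the three center edges for a total cost of 3, which is the optimum here.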

Figure 2: Example

Theorem 1 The forest F output by the algorithm satisfies C(F) ≤ 2(1 - 1/n) Σ y_S, where the y_S values untouched by the algorithm are implicitly set to 0.

The proof is a backward analysis and uses the fact that in any tree the total degree never exceeds twice the number of nodes. For more details, we refer the reader to Chapter 22 of the book Approximation Algorithms.

3 Prize-collecting Steiner Tree (PCST)

This is similar to the problem studied above, with two additional parameters: a prize function Π : V -> R+, which assigns a nonnegative value to each node, collected once the node is reached; and a root r ∈ V from which one tries to build the Steiner tree. The goal changes as well: we want to construct a tree T rooted at r maximizing the gain, i.e., the difference between the sum of prizes of the connected nodes and the total cost of the tree. Formally, we want to maximize

    Π(T) - C(T)                                  (1)

which, since Π(V) is a constant and Π(T) = Π(V) - Π(V \ T), is equivalent to maximizing

    Π(T) - C(T) - Π(V) = -(C(T) + Π(V \ T))      (2)

or, equivalently, to minimizing

    C(T) + Π(V \ T).                             (3)

Remark 1 Minimizing C(T) + Π(V \ T) may seem an unnatural objective function. The original motivation for studying it is that it is NP-hard even to determine whether the optimum value of (1) is positive, and hence

no bounded approximation ratio for (1) is possible unless P = NP. In contrast, we can get a 2-approximation for (3), as shown below. Moreover, the algorithm has a Lagrangian-multiplier-preserving property that turns out to have both theoretical and practical consequences, which we will exploit.

For every U ⊆ V \ {r}, let z_U = 1 if U is exactly the set of nodes we do not span, and 0 otherwise. An easy observation then leads to the following integer program:

    min  Σ_{e ∈ E} C_e χ_e + Σ_{U ⊆ V\{r}} Π(U) z_U
    s.t. Σ_{e ∈ δ(S)} χ_e + Σ_{U ⊇ S} z_U ≥ 1   for all S ⊆ V \ {r}
         χ_e, z_U ≥ 0

The corresponding dual is

    max  Σ_S y_S
    s.t. Σ_{S : e ∈ δ(S)} y_S ≤ C_e   for all e ∈ E
         Σ_{S ⊆ U} y_S ≤ Π(U)         for all U ⊆ V \ {r}        (4)
         y_S ≥ 0

Complementary slackness gains one new implication:

    z_U > 0 ⟹ Σ_{S ⊆ U} y_S = Π(U).

Goemans-Williamson Algorithm for PCST

F = ∅.
A = {{v} : Π_v > 0}
I = {{v} : Π_v = 0} [This includes {r}, since we assume Π_r = 0.]
For all v ∈ V, set P_{v} = Π_v. [We call the P variables potentials.]
repeat
    repeat
        Grow y_S and shrink P_S for all S ∈ A.
    until some edge e goes tight, or P_S hits 0 for some S ∈ A. [Ties are broken arbitrarily.]
    if edge e goes tight then
        F <- F + e. Merge the components S_1, S_2 that e joins and set P_{S_1 ∪ S_2} = P_{S_1} + P_{S_2}.
        Delete S_1, S_2 from A (or I) and add S_1 ∪ S_2 to A.
        if r ∈ S_1 ∪ S_2 then move S_1 ∪ S_2 from A to I end if
    else if P_S hits 0 for some S ∈ A then
        move S from A to I
    end if
until A = ∅ or F spans the whole graph.
Pruning: throw away every component that was ever an inactive leaf.
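The growth phase above can be sketched in the same discrete-event style, with two event types: an edge going tight, and a potential hitting zero. This is an illustrative sketch only, not GW's O(n^2 log n) priority-queue implementation; it assumes the same hypothetical `edges = [(u, v, cost), ...]` format, keeps only the money paid into each edge instead of the y_S values, and omits the pruning phase.

```python
def gw_pcst_growth(n, edges, prize, root):
    """Growth phase of the Goemans-Williamson PCST algorithm (sketch).

    Active components grow duals at unit rate while their potential lasts;
    components containing the root are inactive. Returns the indices of
    bought edges (un-pruned forest).
    """
    parent = list(range(n))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    potential = {v: float(prize[v]) for v in range(n)}   # P_{v} = Pi_v
    active = {v for v in range(n) if v != root and prize[v] > 0}
    paid = [0.0] * len(edges)
    forest = []

    while active:
        # event 1: time until some edge between two components goes tight
        dt_edge, i_edge = float("inf"), None
        for i, (u, v, c) in enumerate(edges):
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            rate = (ru in active) + (rv in active)
            if rate:
                dt = (c - paid[i]) / rate
                if dt < dt_edge:
                    dt_edge, i_edge = dt, i
        # event 2: time until the smallest active potential hits zero
        dt_pot, r_pot = min((potential[r], r) for r in active)
        dt = min(dt_edge, dt_pot)

        for i, (u, v, c) in enumerate(edges):    # all active moats pay in
            ru, rv = find(u), find(v)
            if ru != rv:
                paid[i] += dt * ((ru in active) + (rv in active))
        for r in active:
            potential[r] -= dt

        if dt_edge <= dt_pot:                    # merge along the tight edge
            u, v, c = edges[i_edge]
            ru, rv = find(u), find(v)
            forest.append(i_edge)
            parent[ru] = rv
            potential[rv] += potential[ru]
            active.discard(ru)
            active.discard(rv)
            if find(root) != rv:                 # root component stays inactive
                active.add(rv)
        else:                                    # component runs out of potential
            active.discard(r_pot)
    return forest
```

For instance, on a two-node graph with edge cost 1, a node of prize 2 is connected to the root, while a node of prize 0.5 is abandoned once its potential runs out.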

Note 1 GW showed how to implement this algorithm with standard priority queues in O(n^2 log n) time.

At the end of the algorithm we have some tree T. The nodes we fail to span can be partitioned into components that ran out of potential, and the potential was constructed so that running out is equivalent to constraint (4) being tight for that component; hence

    Σ_{S ⊆ V \ T} y_S = Π(V \ T).

Moreover, an analysis analogous to that of the last section gives

    C(T) ≤ 2 Σ_{S : δ(S) ∩ T ≠ ∅} y_S.

Combining the two, we get

Theorem 2 C(T) + 2Π(V \ T) ≤ 2 Σ y_S.

For a plain 2-approximation for PCST it would have sufficed to show C(T) + Π(V \ T) ≤ 2 Σ y_S; the factor 2 in front of Π(V \ T) is what makes the guarantee Lagrangian-multiplier-preserving. This is very useful when we multiply the prize function by a parameter λ and find a tree T(λ) via the GW PCST algorithm, which amounts to the following claim.

Claim 2 Among all trees that capture at least as many prizes as T(λ), the cost of T(λ) is no worse than twice that of the optimal one.

Proof: As before, consider the linear program

    min  Σ_e C_e χ_e
    s.t. Σ_{e ∈ δ(S)} χ_e + Σ_{U ⊇ S} z_U ≥ 1   for all S ⊆ V \ {r}
         Σ_{U ⊆ V\{r}} Π(U) z_U ≤ G
         χ_e, z_U ≥ 0

Its dual is

    max  Σ_{S ⊆ V\{r}} y_S - λG
    s.t. Σ_{S : e ∈ δ(S)} y_S ≤ C_e      for all e ∈ E
         Σ_{S ⊆ U} y_S ≤ λ Π(U)          for all U ⊆ V \ {r}
         λ, y_S ≥ 0

Think of G as Π(V \ T(λ)). Then, similarly to the previous analysis, C(T(λ)) ≤ 2(Σ y_S - λ Π(V \ T(λ))).

The theorem above is the basis for an algorithmic paradigm that turns the λ knob to search for a better approximation.
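One way to spell out the weak-duality chain behind Claim 2 (a sketch, writing G = Π(V \ T(λ)) and OPT_LP(G) for the optimum of the budgeted linear program above):

```latex
C(T(\lambda))
  \;\le\; 2\Bigl(\sum_{S} y_S \;-\; \lambda\,\Pi(V\setminus T(\lambda))\Bigr)
  \;\le\; 2\,\mathrm{OPT}_{\mathrm{LP}}(G)
  \;\le\; 2\,C(T')
```

for every tree T' with Π(V \ T') ≤ G. The first inequality is Theorem 2 applied to the λ-scaled prizes, the second is weak duality (the pair (y, λ) produced by GW is dual-feasible with objective Σ y_S - λG), and the third holds because the indicator solution of any such tree T' is feasible for the budgeted primal.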

4 k-mst problem Given G = (V, E), the cost function C : E R +, the root r V and k Z +, find the tree T rooted at r spanning at least k nodes of the smallest total edge-cost. Notation: T(λ) = tree given by GW PCST run with edge profit λ. OPT k = cost of the optimal k-mst. LB k = a valid lower bound on OPT k Figure 3: Search The sketch of the idea is that find a value of λ (using binary search, Meggido parametric search, etc.) such that T(λ ) k and T(λ + ) > k, where T(λ ) and T(λ + ) are 2-approximate k -MSTs for k = T(λ ) and k = T(λ + ). If k = T(λ ), we are done and have a 2-approximate solution. Otherwise, we must combine the trees T(λ ) and T(λ + ) to get our final solution T. If c(t) are exactly the linear interpretation of c(t(λ )) and c(t(λ + )), then we would have a 2-approximation. This is essentially how a 2-approximation is obtained, although it is possibly to get a 5 or a 3 much more simply. 5 Real world Application At AT&T a team that includes Aaron has built a tool that uses the Goemans- Williamson PCST algorithm to help decide where to lay new cables. The tree produced by the GW algorithm is a first cut that is fed into some heuristics to augment the tree into a 2-connected network. We have found this approach to be practical and effective. The multiplier λ can be set to reflect the target payback period for this capital investment. 7

References

[1] David S. Johnson, Maria Minkoff, and Steven Phillips. The prize-collecting Steiner tree problem: theory and practice. In SODA, pages 760-769, 2000.