Discrete Optimization 2010 Lecture 12 TSP, SAT & Outlook


Discrete Optimization 2010, Lecture 12: TSP, SAT & Outlook
Marc Uetz, University of Twente, m.uetz@utwente.nl
Lecture 12: sheet 1 / 29 Marc Uetz Discrete Optimization

Outline

1 Approximation Algorithms for the TSP
2 Randomization & Derandomization for MAXSAT
3 Outlook on Further Topics: Discrete Optimization, Online Optimization, Algorithmic Game Theory


The TSP is Really Hard

Symmetric TSP: Given an undirected, complete graph G = (V, E) with nonnegative integer edge lengths c_e, e ∈ E, find a Hamiltonian cycle (a tour visiting each vertex) of minimum length. (Asymmetric TSP: directed graph, so c_ij ≠ c_ji is possible.)

Theorem. For any constant α > 1, there cannot exist an α-approximation algorithm for the (symmetric) TSP, unless P = NP.

Proof: Exercise.

Metric and Euclidean TSP

1 Metric TSP: the distance function c on the edges is required to be a metric, i.e. the Δ-inequality holds: c_ik ≤ c_ij + c_jk
2 Euclidean TSP: the nodes are points in R^2 and the metric is given by Euclidean distances: c_ij = c_ji = ((x_i − x_j)^2 + (y_i − y_j)^2)^{1/2}

The Symmetric TSP: Overview of Results

Theorems on TSP
1 General TSP: no α-approximation algorithm unless P = NP
2 Metric TSP: there is a simple 2-approximation algorithm (Double-Tree Algorithm), and a simple 3/2-approximation algorithm (Christofides' Tree-Matching Algorithm, 1976)
3 Euclidean TSP: a PTAS, i.e. for any given ε > 0 there is a (1 + ε)-approximation algorithm (Arora, Mitchell 1996)

Facts on Euler Tours

Definition. An Euler tour is a closed walk in a graph or multigraph (parallel edges allowed) that traverses each edge exactly once.

Theorem (Euler 1741; Bridges of Königsberg). A connected (multi)graph has an Euler tour if and only if each node has even degree. Moreover, an Euler tour can be found in O(n + m) time.
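The O(n + m) bound can be made concrete with Hierholzer's algorithm. A minimal sketch, assuming edges are given as a list of node pairs (the input encoding is my choice, not the slides'):

```python
from collections import defaultdict

def euler_tour(edges):
    """Hierholzer's algorithm: returns a closed walk using every edge
    exactly once, assuming the multigraph is connected and every node
    has even degree (Euler's condition)."""
    adj = defaultdict(list)            # node -> list of (neighbor, edge id)
    for eid, (u, v) in enumerate(edges):
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    used = [False] * len(edges)
    stack, tour = [edges[0][0]], []
    while stack:
        v = stack[-1]
        # lazily discard edges already traversed from the other side
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:                     # extend the current walk
            w, eid = adj[v].pop()
            used[eid] = True
            stack.append(w)
        else:                          # dead end: node is finished
            tour.append(stack.pop())
    return tour[::-1]                  # closed walk: first node == last node
```

Each edge is pushed and popped a constant number of times, which gives the linear running time claimed on the slide.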

2-Approximation: The Double-Tree Algorithm

(1) Compute a minimum spanning tree: c(MST) ≤ OPT
(2) Double the MST and compute an Euler tour ET: c(ET) ≤ 2 OPT
(3) Shortcutting (uses the Δ-inequality): c(Tour) ≤ c(ET) ≤ 2 OPT
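A compact sketch of the Double-Tree Algorithm on a distance matrix, assuming the matrix is symmetric and metric (Prim's algorithm for the MST, and DFS preorder as the shortcut Euler tour; the function name and input format are my choices):

```python
import math

def double_tree_tsp(dist):
    """Double-Tree 2-approximation for the metric TSP.
    dist: symmetric n x n matrix satisfying the triangle inequality.
    Returns a closed tour (node list) of cost at most 2 * OPT."""
    n = len(dist)
    # Prim's algorithm, storing the MST as a children list rooted at 0
    in_tree, parent = [False] * n, [0] * n
    key = [math.inf] * n
    key[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: key[u])
        in_tree[v] = True
        if v != 0:
            children[parent[v]].append(v)
        for u in range(n):
            if not in_tree[u] and dist[v][u] < key[u]:
                key[u], parent[u] = dist[v][u], v
    # Doubling the MST and shortcutting its Euler tour visits the nodes
    # in DFS preorder, so we can emit the preorder directly
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]
```

By the Δ-inequality every shortcut is no longer than the walk it replaces, so the tour costs at most c(ET) = 2 c(MST) ≤ 2 OPT.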

3/2-Approximation: Christofides' Tree-Matching Algorithm (1976)

(1) Compute a minimum spanning tree: c(MST) ≤ OPT
(2) Compute a minimum-weight perfect matching M on the odd-degree nodes of the MST: c(M) ≤ 1/2 OPT
(3) Compute an Euler tour in MST + M; shortcutting (uses the Δ-inequality): c(Tour) ≤ 3/2 OPT

Proofs

Claim 1: An Euler tour always exists.
Proof: In both cases there are no odd-degree nodes in the (multi)graph in which we compute the Euler tour.

Claim 2: Shortcutting works, even in linear time.
Proof: Walk along the Euler tour; as soon as a node is seen for the second time, store the last visited node i, continue along the Euler tour, and as soon as the next unvisited node j is seen, introduce the shortcut {i, j}. This is linear time, O(n).

Claim 3: A perfect matching exists on the odd-degree nodes, computable in poly-time.
Proof: Recall that we assume (w.l.o.g.) a complete graph. Moreover, the MST must have an even number of odd-degree nodes, since 2|E| = Σ_v d(v) for any graph. We can use Edmonds' matching algorithm to compute a minimum-weight perfect matching.

Claim 4: Cost of Matching c(M) ≤ 1/2 Cost of TSP-OPT

Proof: Shortcut the TSP-OPT tour onto only the odd-degree nodes of the MST; by the Δ-inequality the resulting cycle C has c(C) ≤ TSP-OPT. This cycle has an even number of nodes, so it contains exactly two perfect matchings on the odd-degree nodes, say M_1 and M_2 (alternating edges). So c(M_1) + c(M_2) = c(C) ≤ TSP-OPT. But c(M) ≤ c(M_1), c(M_2), as M is a minimum-cost matching, so

c(M) ≤ 1/2 (c(M_1) + c(M_2)) ≤ 1/2 TSP-OPT


Good Solutions for Satisfiability

Given a SAT formula in conjunctive normal form, F = C_1 ∧ C_2 ∧ ... ∧ C_m, on n boolean variables x_1, ..., x_n. At least how many of the m clauses can always be satisfied?

Theorem. There exists a truth assignment fulfilling at least 1/2 of the clauses.

Proof:
1 Let x_j = true with probability 1/2, for each x_j independently
2 Show E[# fulfilled clauses] ≥ m/2
3 So there exists x ∈ {true, false}^n that fulfills ≥ m/2 clauses (otherwise the expectation could not be that large)

Proof of the Claim

We show even more: if each clause has at least k literals (variables or their negations), then the expected number of fulfilled clauses is at least

(1 − (1/2)^k) m

(As any clause must have at least 1 literal, the claim follows; e.g. for 3-SAT, that is at least 7/8 = 87.5% of the clauses.)

Proof: A clause with l ≥ k literals is false with probability (1/2)^l ≤ (1/2)^k. Hence

E[# true clauses] = Σ_{i=1}^m P(C_i = true) ≥ Σ_{i=1}^m (1 − (1/2)^k) = (1 − (1/2)^k) m
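The bound is easy to check exactly for small formulas by brute force. A sketch, using a DIMACS-style clause encoding (literal +j for x_j, −j for its negation; this encoding is my choice, not the slides'):

```python
from itertools import product
from fractions import Fraction

def expected_satisfied(n, clauses):
    """Exact E[# satisfied clauses] under a uniform random assignment,
    computed by enumerating all 2^n assignments (fine for small n)."""
    total = Fraction(0)
    for bits in product([False, True], repeat=n):
        # a clause is satisfied if any of its literals evaluates to true
        total += sum(any((l > 0) == bits[abs(l) - 1] for l in c)
                     for c in clauses)
    return total / 2 ** n
```

For a single 3-literal clause this returns exactly 7/8, matching the 3-SAT figure above.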

Randomized Algorithm for Max-SAT

Max-SAT: Given a formula F, find a truth assignment x maximizing the number of fulfilled clauses. Max-SAT is (strongly) NP-hard (SAT: is there an x fulfilling all m clauses?)

What we have: Randomized Algorithm
For (j = 1, ..., n): let x_j = true with probability 1/2, x_j = false otherwise

If drawing a random bit takes O(1) time, this is a linear-time algorithm. It produces a solution x that is reasonably good in expectation (# fulfilled clauses ≥ m/2 ≥ 1/2 OPT, as OPT ≤ m). But can we also find such an x deterministically in poly-time?

Derandomization by Conditional Expectations

Let X := # of true clauses produced by the algorithm (a random variable).

E[X] = 1/2 E[X | x_1 = true] + 1/2 E[X | x_1 = false]   ... terms (1) and (2)

Note that (1) and (2) can be computed easily (in time O(nm)), as E[X] = Σ_i P(C_i = true). For example, for C_i = (x_1 ∨ x_2 ∨ x_7):
P(C_i = true | x_1 = true) = 1
P(C_i = true | x_1 = false) = 1 − P(C_i = false | x_1 = false) = 1 − 1/4 = 3/4

If (1) ≥ (2), then (1) ≥ E[X]; fix x_1 = true (else, fix x_1 = false). Assuming (1) ≥ (2), the next step fixes x_2 by the larger of E[X | x_1 = true, x_2 = true] and E[X | x_1 = true, x_2 = false], etc. Keeping x_1 and x_2 fixed, do the same with x_3, etc. We thus get a fixed x fulfilling at least E[X] ≥ m/2 clauses, in O(n^2 m) time.
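The fixing procedure above can be sketched directly. A minimal implementation, again with the DIMACS-style literal encoding (+j for x_j, −j for ¬x_j) as an assumed input format; a clause with u unset literals and no satisfying set literal is true with probability 1 − (1/2)^u:

```python
def derandomized_maxsat(n, clauses):
    """Method of conditional expectations for Max-SAT.
    Returns an assignment (dict: variable -> bool) satisfying
    at least m/2 clauses, since the conditional expectation
    never drops below E[X] >= m/2."""
    def expected(assign):
        # E[# satisfied clauses] when unset variables are fair coin flips
        total = 0.0
        for cl in clauses:
            unset, sat = 0, False
            for lit in cl:
                val = assign.get(abs(lit))
                if val is None:
                    unset += 1
                elif (lit > 0) == val:
                    sat = True
                    break
            total += 1.0 if sat else 1.0 - 0.5 ** unset
        return total

    assign = {}
    for j in range(1, n + 1):
        assign[j] = True
        e_true = expected(assign)
        assign[j] = False
        e_false = expected(assign)
        # keep whichever value has the larger conditional expectation
        assign[j] = e_true >= e_false
    return assign
```

Each call to `expected` costs O(nm), and there are 2n calls, matching the O(n^2 m) bound on the slide.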

Computing Conditional Expectations: Example
[worked example slide; figure not recoverable from the transcript]

!"#$%&'()*'+#,-.$/0#1/., TSP Randomization Outlook 23)4.,-/1/.,#&)!"%'51#1/.,6 Derandomization by Conditional Expectations: Example Lecture 12: sheet 18 / 29 Marc Uetz Discrete Optimization

1/2-Approximation for MaxCut

MaxCut: Given an undirected graph G = (V, E), find a subset W ⊆ V of the nodes of G maximizing the number of cut edges |δ(W)| (note δ(W) = δ(V \ W)). MaxCut is (strongly) NP-hard.

Theorem. There exists a randomized 1/2-approximation algorithm, which can (easily) be derandomized to yield a deterministic 1/2-approximation algorithm.
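The derandomization for MaxCut is particularly simple: place each vertex, in turn, on the side that cuts at least half of its edges to already-placed neighbors. A sketch (function name and input format are my choices):

```python
def half_maxcut(n, edges):
    """Derandomized 1/2-approximation for MaxCut via conditional
    expectations: every edge is cut with probability >= 1/2 once its
    second endpoint is placed greedily, so the cut has >= |E|/2 edges."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = {}
    for v in range(n):
        t = sum(1 for w in adj[v] if side.get(w) is True)
        f = sum(1 for w in adj[v] if side.get(w) is False)
        side[v] = f >= t          # join the side that cuts more placed edges
    W = [v for v in range(n) if side[v]]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return W, cut
```

Since |E|/2 ≥ OPT/2, this is a 1/2-approximation, matching the theorem on the slide.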


Approximation Algorithms

LP-based algorithms with clever rounding schemes (for example, Shmoys & Tardos 1993)
0.878-approximation for MaxCut using a semidefinite programming relaxation (Goemans & Williamson 1994)
The PTAS for the Euclidean TSP (Arora 1996)
The PCP theorem (an alternative characterization of NP)

Integer Linear Programming

Separation & optimization are equivalent (Grötschel, Lovász, Schrijver 1981)
Column generation algorithms (the dual of adding cuts: adding variables)
Dantzig-Wolfe decomposition (problem reformulation, then column generation)

Online Optimization

An example, the Ski Rental problem: going skiing for n days, should I rent for $1 per day (with sunshine) or buy a pair of skis right away for $11?

Competitive Analysis: Online Algorithm ≤ α · Offline Optimum

Buying a pair of skis only after having spent $10 on rent, we never pay more than twice the optimum (a 2-competitive algorithm). And no algorithm can be better than 3/2-competitive (no matter whether P = NP or not).
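The break-even strategy from the slide can be sketched in a few lines (the prices $1/$11 are the slide's instance; function names are my choices):

```python
def online_ski(n_days, rent=1, buy=11):
    """Break-even online strategy: rent until the total rent would reach
    the purchase price, then buy. With rent=1, buy=11 this rents on
    days 1..10 and buys on day 11, paying at most 21 in total."""
    spent = 0
    for day in range(1, n_days + 1):
        if spent < buy - rent:
            spent += rent      # keep renting
        else:
            spent += buy       # buy once; done for the season
            break
    return spent

def offline_opt(n_days, rent=1, buy=11):
    """Offline optimum: knowing n_days, rent throughout or buy on day 1."""
    return min(n_days * rent, buy)
```

The worst case here is a long season: the online cost is 10 + 11 = 21 against an optimum of 11, so the ratio stays below 2 on every input.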

Algorithmic Game Theory (AGT)

Assume I have a (poly-time) algorithm that routes all daily traffic on Dutch highways, avoiding congestion. Great, but nobody will listen: drivers behave selfishly, only in their own interest.

Price of Anarchy is α if Selfish Equilibrium = α · System Optimum (α ≥ 1)

Mechanism Design: define incentives (e.g., taxation) such that Selfish Equilibrium = System Optimum

Example: Price of Anarchy

Two parallel links from s to t, with latencies l(x) = 1 (upper) and l(x) = x (lower); one (splittable) unit of flow is sent.

System optimum: split the flow evenly, total latency = 1/2 + 1/4 = 3/4
Nash equilibrium: all flow on the lower link, total latency = 1
Price of Anarchy: PoA = 4/3

Roughgarden/Tardos (2002) show PoA ≤ 4/3 for networks with linear latency functions, and this simple instance attains the worst possible ratio. With nonlinear latencies the ratio can be much worse: replacing the lower link's latency by l(x) = x^p, the Nash flow again places the entire unit on the lower link (cost 1), while the optimal flow assigns (p + 1)^{−1/p} units to the lower link and the remainder to the upper link.
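The PoA of this two-link (Pigou-style) network can be computed in closed form: the optimum puts f = (p + 1)^{−1/p} units on the lower link, for a total cost of f^{p+1} + (1 − f). A small sketch:

```python
def pigou_poa(p):
    """Price of anarchy for the two-link network with latencies
    l_lower(x) = x**p and l_upper(x) = 1, one unit of splittable flow.
    Nash routes everything on the lower link (total latency 1);
    the optimum splits the flow to minimize f*f**p + (1 - f)*1."""
    f = (p + 1) ** (-1.0 / p)        # optimal flow on the lower link
    opt = f ** (p + 1) + (1 - f)     # total latency of the optimal split
    nash = 1.0
    return nash / opt
```

For p = 1 this gives exactly 4/3, and the ratio grows without bound as p increases, matching the discussion above.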

Example, cont.: Do They Need 42nd Street?

Braess's Paradox: the addition of an intuitively helpful link can negatively impact all users of a congested network.

Before: one unit of traffic from s to t over two routes, s-v-t (latencies x, then 1) and s-w-t (latencies 1, then x). In the unique Nash equilibrium, which coincides with the optimal flow, the traffic splits evenly and every agent travels on a path of latency 3/2. So Nash = OPT, total latency = 3/2.

After: a fifth edge v-w of latency 0 (independent of congestion) is added. The optimal flow is unaffected, OPT = 3/2 (still), but in the new (unique) Nash equilibrium all traffic follows the path s-v-w-t, and the total latency is 2.

"What if they closed 42nd street?" by Gina Kolata, New York Times, December 25, 1990

Example: Private Information & Mechanism Design

Single machine, jobs j ∈ {1, ..., n} = agents
Processing times p_j are public knowledge
Weights w_j are private information to job j (job j's type)
Interpretation: w_j = job j's individual cost for waiting

Task: Schedule the jobs, but reimburse them for the disutility of waiting.
Problem: We do not know the w_j's, and jobs may lie...

Theorem. If (and only if) S_j is monotone in w_j, payments can be defined such that all jobs tell their true w_j (in equilibrium).

Example: Complexity of Nash

Nash (1951): A (mixed) Nash equilibrium always exists. The proof uses Brouwer's fixed point theorem.
Consequence: If we can find Brouwer fixed points efficiently, we can find Nash equilibria efficiently.

Question: Is there an efficient algorithm to find a Nash equilibrium?

Daskalakis, Goldberg, Papadimitriou (2005): If we can compute Nash equilibria efficiently, we can find Brouwer fixed points efficiently.
Consequence: Computing Nash equilibria is (PPAD-)hard.

Thanks for coming. Please fill in the questionnaires, now.