Discrete Optimization 2010 Lecture 12 TSP, SAT & Outlook


Discrete Optimization 2010
Lecture 12: TSP, SAT & Outlook
Marc Uetz, University of Twente, m.uetz@utwente.nl

Outline
1 Approximation Algorithms for the TSP
2 Randomization & Derandomization for MAXSAT
3 Outlook on Further Topics: Discrete Optimization, Online Optimization, Algorithmic Game Theory

1 Approximation Algorithms for the TSP

The TSP is Really Hard

Symmetric TSP: Given an undirected, complete graph G = (V, E) with nonnegative integer edge lengths c_e, e ∈ E, find a Hamiltonian cycle (a tour visiting each vertex) of minimum length. (Asymmetric TSP: directed graph, so c_ij ≠ c_ji is possible.)

Theorem. For any constant α > 1, there cannot exist an α-approximation algorithm for the (symmetric) TSP, unless P = NP.

Proof: Exercise.

Metric and Euclidean TSP

1 Metric TSP: The distance function c on the edges is required to be a metric, that is, the triangle inequality holds: c_ik ≤ c_ij + c_jk.

2 Euclidean TSP: The nodes are points in R^2 and the metric is given by Euclidean distances: c_ij = c_ji = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 ).
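
For concreteness, here is a minimal sketch (not from the lecture) of how such a Euclidean instance can be built as a complete graph with metric edge weights, assuming the networkx library; the helper name euclidean_tsp_instance is illustrative. The later TSP sketches operate on a graph of this form.

# Illustrative setup: complete graph on random points in the unit square,
# with Euclidean edge weights c_ij = c_ji = sqrt((x_i-x_j)^2 + (y_i-y_j)^2).
import math
import random
import networkx as nx

def euclidean_tsp_instance(n, seed=0):
    rng = random.Random(seed)
    pts = {i: (rng.random(), rng.random()) for i in range(n)}
    G = nx.complete_graph(n)
    for i, j in G.edges:
        G[i][j]["weight"] = math.dist(pts[i], pts[j])
    return G, pts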

The Symmetric TSP: Overview of Results

Theorems on TSP
1 General TSP: no α-approximation algorithm for any constant α, unless P = NP.
2 Metric TSP: there is a simple 2-approximation algorithm (Double-Tree Algorithm), and a simple 3/2-approximation algorithm (Christofides' Tree-Matching Algorithm, 1976).
3 Euclidean TSP: ∃ PTAS, i.e. for any given ε > 0, there is a (1 + ε)-approximation algorithm (Arora, Mitchell 1996).

Facts on Euler Tours

Definition. An Euler tour is a closed walk in a graph or multigraph (parallel edges allowed) that traverses each edge exactly once.

Theorem (Euler 1741, Bridges of Königsberg). An Euler tour exists if and only if each node has even degree. Moreover, it can be found in O(n + m) time.

2-approximation: The Double-Tree Algorithm

(1) Compute MST (minimum spanning tree): MST ≤ TSP-OPT
(2) Double the MST and compute an Euler tour ET (poly-time): ET ≤ 2 TSP-OPT
(3) Shortcutting (uses the triangle inequality): Tour ≤ 2 TSP-OPT
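
A minimal Python sketch of the Double-Tree algorithm, assuming a complete networkx graph whose 'weight' attributes satisfy the triangle inequality (for instance the Euclidean instance sketched above); the function name is illustrative, not part of the lecture.

import networkx as nx

def double_tree_tour(G, weight="weight"):
    # (1) Minimum spanning tree: c(MST) <= TSP-OPT
    mst = nx.minimum_spanning_tree(G, weight=weight)
    # (2) Double every MST edge; all degrees become even, so an Euler tour exists
    doubled = nx.MultiGraph()
    doubled.add_nodes_from(mst.nodes)
    for u, v, data in mst.edges(data=True):
        doubled.add_edge(u, v, **data)
        doubled.add_edge(u, v, **data)
    # (3) Shortcut the Euler tour: keep only the first visit of each node
    # (the triangle inequality guarantees shortcuts never increase the length)
    tour, seen = [], set()
    for u, v in nx.eulerian_circuit(doubled):
        if u not in seen:
            tour.append(u)
            seen.add(u)
    tour.append(tour[0])  # close the cycle
    return tour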

3/2-approximation: Tree-Matching Algorithm (Christofides 1976)

(1) Compute MST (minimum spanning tree): MST ≤ TSP-OPT
(2a) Compute a minimum weight perfect matching M on all odd-degree nodes of the MST: M ≤ 1/2 TSP-OPT
(2b) Compute an Euler tour ET in MST + M: ET ≤ 3/2 TSP-OPT
(3) Shortcutting (uses the triangle inequality): Tour ≤ 3/2 TSP-OPT
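
A corresponding sketch of Christofides' algorithm, again assuming a complete metric networkx graph; it relies on nx.min_weight_matching, which on a complete graph with an even number of nodes returns a minimum-weight perfect matching (Edmonds' matching algorithm underneath). Names are illustrative.

import networkx as nx

def christofides_tour(G, weight="weight"):
    # (1) Minimum spanning tree
    mst = nx.minimum_spanning_tree(G, weight=weight)
    # (2a) Minimum-weight perfect matching on the odd-degree MST nodes
    odd = [v for v in mst.nodes if mst.degree(v) % 2 == 1]  # always an even count
    matching = nx.min_weight_matching(G.subgraph(odd), weight=weight)
    # (2b) MST + matching has only even degrees, so an Euler tour exists
    multi = nx.MultiGraph(mst)
    for u, v in matching:
        multi.add_edge(u, v, **G[u][v])
    # (3) Shortcut the Euler tour as before
    tour, seen = [], set()
    for u, v in nx.eulerian_circuit(multi):
        if u not in seen:
            tour.append(u)
            seen.add(u)
    tour.append(tour[0])
    return tour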

Proofs

Claim 1: An Euler tour always exists.
Proof: In both cases there are no odd-degree nodes in the (multi)graph in which we compute an Euler tour.

Claim 2: Shortcutting works, even in linear time.
Proof: Walk along the Euler tour; as soon as a node is seen for the second time, store the last visited node i, continue along the Euler tour, and as soon as the next unvisited node j is seen, introduce the shortcut {i, j}. This is linear time, O(n).

Claim 3: A perfect matching exists on the odd-degree nodes and is computable in poly-time.
Proof: Recall that we assume (w.l.o.g.) a complete graph. Moreover, we must have an even number of odd-degree nodes in the MST, as 2|E| = Σ_v d(v) for any graph. We can use Edmonds' Matching Algorithm to compute it.

Claim 4: Cost of the matching, c(M) ≤ 1/2 · cost of TSP-OPT.
Proof: Shortcut the TSP-OPT tour onto only the odd-degree nodes, i.e. the nodes covered by M. The resulting cycle decomposes into exactly two matchings, say M_1 and M_2. So c(M_1) + c(M_2) ≤ TSP-OPT. But c(M) ≤ c(M_1), c(M_2), as M is a min-cost matching, so c(M) ≤ 1/2 (c(M_1) + c(M_2)) ≤ 1/2 TSP-OPT.

2 Randomization & Derandomization for MAXSAT

Good Solutions for Satisfiability

Given a SAT formula in conjunctive normal form, F = C_1 ∧ C_2 ∧ ... ∧ C_m, on n boolean variables x_1, ..., x_n: at least how many of the m clauses are satisfiable?

Theorem. There exists a truth assignment fulfilling at least 1/2 of the clauses.

Proof:
1 Let x_j = true with probability 1/2, for each x_j independently.
2 Show E[number of fulfilled clauses] ≥ 1/2 m.
3 So ∃ x ∈ {true, false}^n that fulfills ≥ 1/2 m clauses (otherwise the expectation could not be that large).

Proof of the Claim

We show even more: if each clause has at least k literals (variables or their negations), then the expected number of fulfilled clauses is at least (1 - (1/2)^k) m.

(As any clause must have at least 1 literal, the claim follows; e.g. for 3-SAT this is at least 7/8 = 87.5% of the clauses.)

Proof: A clause with ℓ ≥ k literals is false with probability (1/2)^ℓ ≤ (1/2)^k. Hence,
E[# true clauses] = Σ_{i=1}^m P(C_i = true) ≥ Σ_{i=1}^m (1 - (1/2)^k) = (1 - (1/2)^k) m.

Randomized Algorithm for Max-SAT

Max-SAT: Given a formula F, find a truth assignment x maximizing the number of fulfilled clauses. Max-SAT is (strongly) NP-hard (SAT: ∃ x fulfilling all m clauses?).

What we have: a Randomized Algorithm
For j = 1, ..., n: let x_j = true with probability 1/2, x_j = false otherwise.

If randomization takes O(1) time, this is a linear time algorithm, and it produces a solution x that is reasonably good in expectation (# fulfilled clauses ≥ 1/2 m ≥ 1/2 OPT, as OPT ≤ m). But can we also find such an x in poly-time?
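
A minimal sketch of this randomized algorithm; the clause encoding (DIMACS-style lists of signed integers, +j for x_j and -j for its negation) is an assumption made purely for illustration.

import random

def random_assignment(n):
    # x_j = true with probability 1/2, independently
    return {j: random.random() < 0.5 for j in range(1, n + 1)}

def num_satisfied(clauses, x):
    def literal_true(l):
        return x[abs(l)] if l > 0 else not x[abs(l)]
    return sum(any(literal_true(l) for l in clause) for clause in clauses)

# Example: F = (x1 or x2) and (not x1 or x3) and (not x2 or not x3), m = 3
clauses = [[1, 2], [-1, 3], [-2, -3]]
x = random_assignment(3)
print(num_satisfied(clauses, x))  # >= m/2 = 1.5 in expectation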

Derandomization by Conditional Expectations

Let X := # of true clauses produced by the algorithm (X is a random variable). Then

E[X] = 1/2 · E[X | x_1 = true] + 1/2 · E[X | x_1 = false]; call the two conditional expectations (1) and (2).

Note that (1) and (2) can be computed easily (in time O(nm)), as E[X] = Σ_i P(C_i = true). For example, for C_i = (x_1 ∨ x_2 ∨ x_7):
P(C_i = true | x_1 = true) = 1
P(C_i = true | x_1 = false) = 1 - P(C_i = false | x_1 = false) = 3/4

If (1) ≥ (2), then (1) ≥ E[X], so fix x_1 = true (else, fix x_1 = false). Assuming (1) ≥ (2), the next step is to fix x_2 by the larger of E[X | x_1 = true, x_2 = true] and E[X | x_1 = true, x_2 = false], etc. Keeping x_1 and x_2 fixed, do the same with x_3, etc. We thus get a fixed x fulfilling at least E[X] ≥ 1/2 m clauses, in O(n² m) time.
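
A sketch of the derandomization by conditional expectations, using the same illustrative clause encoding as above; variables that are not yet fixed are treated as fair coins when evaluating E[X | partial assignment].

def conditional_expectation(clauses, assignment):
    # E[# true clauses | the variables appearing in `assignment` are fixed]
    total = 0.0
    for clause in clauses:
        satisfied, unset = False, 0
        for l in clause:
            v = assignment.get(abs(l))
            if v is None:
                unset += 1
            elif (v and l > 0) or (not v and l < 0):
                satisfied = True
        if satisfied:
            total += 1.0
        elif unset > 0:
            total += 1.0 - 0.5 ** unset
        # otherwise the clause is already falsified and contributes 0
    return total

def derandomized_maxsat(clauses, n):
    x = {}
    for j in range(1, n + 1):
        e_true = conditional_expectation(clauses, {**x, j: True})
        e_false = conditional_expectation(clauses, {**x, j: False})
        x[j] = e_true >= e_false  # fix x_j to the branch with the larger expectation
    return x  # satisfies at least E[X] >= m/2 clauses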

Computing Conditional Expectations: Example

Derandomization by Conditional Expectations: Example

1/2-approximation for MaxCut

MaxCut: Given an undirected graph G = (V, E), find a subset W ⊆ V of the nodes of G such that the cut δ(W) = δ(V \ W) is maximal. MaxCut is (strongly) NP-complete.

Theorem. There exists a randomized 1/2-approximation algorithm, which can (easily) be derandomized to yield a deterministic 1/2-approximation algorithm.
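
A minimal sketch of the randomized part, assuming a networkx graph with unit edge weights: each node joins W with probability 1/2, so every edge crosses the cut with probability 1/2 and the expected cut size is |E|/2 ≥ OPT/2. Names are illustrative.

import random

def random_cut(G):
    W = {v for v in G.nodes if random.random() < 0.5}
    cut_size = sum(1 for u, v in G.edges if (u in W) != (v in W))
    return W, cut_size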

3 Outlook on Further Topics: Discrete Optimization, Online Optimization, Algorithmic Game Theory

Approximation Algorithms

- LP-based algorithms with clever rounding schemes (for example, Shmoys & Tardos 1993)
- 0.878-approximation for MaxCut using a semidefinite programming relaxation (Goemans & Williamson 1994)
- The PTAS for the Euclidean TSP (Arora 1996)
- The PCP Theorem (an alternative characterization of NP)

Integer Linear Programming

- Separation & Optimization are equivalent (Grötschel, Lovász, Schrijver 1981)
- Column generation algorithms (the dual of adding cuts: adding variables)
- Dantzig & Wolfe decomposition (problem reformulation, then column generation)

Online Optimization

An example, the Ski Rental problem: going skiing for n days, should I rent skis for $1 per day (with sunshine) or buy a pair right away for $11?

Competitive Analysis: Online Algorithm ≤ α · Offline Optimum (an α-competitive algorithm).

Buying a pair of skis only after having spent $10 on rent, we never pay more than twice the optimum (a 2-competitive algorithm). And no algorithm can be better than 3/2-competitive (no matter whether P = NP or not).
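
A tiny sketch of the break-even strategy from the slide (rent at $1 per day, buy for $11 once $10 of rent has been paid), compared with the offline optimum; the parameter names are illustrative.

def online_cost(n_days, rent=1, buy=11, rent_budget=10):
    if n_days * rent <= rent_budget:
        return n_days * rent       # season ended before the break-even point
    return rent_budget + buy       # rented for 10 days, then bought the skis

def offline_cost(n_days, rent=1, buy=11):
    return min(n_days * rent, buy)

for n in (5, 10, 11, 100):
    print(n, online_cost(n), offline_cost(n))  # the ratio never exceeds 2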

Algorithmic Game Theory (AGT)

Assume I have a (poly-time) algorithm that routes all daily traffic on Dutch highways, avoiding congestion. Great, but nobody will listen: drivers behave selfishly, only in their own interest.

Price of Anarchy: is α if Selfish Equilibrium = α · System Optimum (α ≥ 1).

Mechanism Design: define incentives (e.g., taxation) such that Selfish Equilibrium ≈ System Optimum.

Example: Price of Anarchy

Two parallel links from s to t with latency functions ℓ(x) = 1 and ℓ(x) = x; send one (splittable) unit of flow.

System optimum (split the flow 1/2, 1/2): total latency = 1/2 + 1/4 = 3/4.
Nash equilibrium (all flow on ℓ(x) = x): total latency = 1.
⇒ Price of Anarchy PoA = 4/3 in this example.

Roughgarden/Tardos (2002) show PoA ≤ 4/3 for all networks and all linear latency functions ℓ.
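
A quick numeric check of this example (illustrative, not from the slides): route x ∈ [0, 1] units over the link with latency ℓ(x) = x and the rest over the constant link, and minimize the total latency x·x + (1 - x)·1.

best_x = min((i / 1000 for i in range(1001)), key=lambda x: x * x + (1 - x))
print(best_x, best_x ** 2 + (1 - best_x))  # ~0.5 and ~0.75: the system optimum
print(1.0 ** 2 + 0.0)                      # 1.0: Nash, all flow on l(x) = x
print(1.0 / 0.75)                          # Price of Anarchy = 4/3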

Example, cont.: Do they need 42nd street?

(a) Before: two disjoint s-t routes, s → v → t with latencies ℓ(x) = x then ℓ(x) = 1, and s → w → t with latencies ℓ(x) = 1 then ℓ(x) = x. Nash = OPT, total latency = 3/2.

(b) After: add a link v → w with ℓ(x) = 0. OPT = 3/2 (still), but the Nash total latency = 2.

New York Times, December 25, 1990: "What if they closed 42nd street?" by Gina Kolata.

Example: Private Information & Mechanism Design

Single machine, jobs j ∈ {1, ..., n} = agents. Processing times p_j are public knowledge; weights w_j are private information of job j (job j's type). Interpretation: w_j is job j's individual cost for waiting.

Task: Schedule the jobs, but reimburse them for the disutility of waiting.
Problem: We do not know the w_j's, and jobs may lie...

Theorem. If (and only if) S_j decreases as w_j increases, payments can be defined such that all jobs will tell their true w_j (in equilibrium).

Example: Complexity of Nash

Nash (1951): A (mixed) Nash equilibrium always exists. The proof uses Brouwer's fixed point theorem. Consequence: if we can find Brouwer fixed points (efficiently), we can find Nash equilibria (efficiently).

Question: ∃ an efficient algorithm to find a Nash equilibrium?

Daskalakis, Goldberg, Papadimitriou (2005): If we can compute a Nash equilibrium (efficiently), we can find Brouwer fixed points (efficiently). Consequence: computing Nash equilibria is (PPAD-)hard.

Thanks for coming. Please fill in the questionnaires now.