Approximation Algorithms

What do you do when a problem is NP-complete, or when the polynomial-time solution is impractically slow?

- Assume the input is random and analyze expected performance, e.g., Hamiltonian path under a random input distribution. Problem: agreeing on a good distribution.
- Run a non-polynomial (hopefully only slightly) algorithm such as branch and bound. Usually there is no proven bound on the runtime, but sometimes there is.
- Settle for a heuristic, but prove it does well enough (our focus).

Definitions:

- An optimization problem has instances $I$ and solutions $S(I)$ with values $f : S(I) \to \mathbb{R}$.
- Maximization/minimization: find the solution in $S(I)$ maximizing/minimizing $f$; its value is called $OPT(I)$.
- E.g., bin packing: an instance is a set of sizes $s_i \in (0,1]$; partition the items into as few subsets as possible so that no subset's total exceeds 1.

Technical assumptions we'll often make:

- All inputs and the range of $f$ are integers/rationals (we can't represent reals, and this allows, e.g., LP and binary search).
- $f(\sigma)$ is a polynomial-size (in number of bits) number, else even writing the output takes too long; we look for polytime in bit complexity.

NP-hardness: an optimization problem is NP-hard if an NP-hard decision problem can be reduced to it (e.g., the question "is the optimum for this instance at most $k$?"), but we use the more general notion of Turing reducibility (Garey & Johnson).

Approximation algorithm: any algorithm that gives a feasible answer, e.g., putting each item in its own bin. Of course, we want a good algorithm. How do we measure that?
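As a contrast to the trivial one-item-per-bin answer, here is a minimal sketch of first-fit bin packing (our illustration, not an algorithm developed in these notes; sizes are assumed to lie in (0, 1]):

    def first_fit(sizes):
        """Place each item in the first open bin where it fits.

        Feasible by construction. At most one bin is half-empty
        (items in a later half-empty bin would have fit in the
        earlier one), so it uses fewer than 2 * OPT + 1 bins."""
        bins = []                       # bins[b] = total size packed in bin b
        for s in sizes:
            for b in range(len(bins)):
                if bins[b] + s <= 1:
                    bins[b] += s
                    break
            else:
                bins.append(s)          # no bin fits: open a new one
        return len(bins)

    # e.g. first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]) -> 4 bins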

Absolute Approximations

Definition: $A$ is a $k$-absolute approximation if on any instance $I$ we have $|A(I) - OPT(I)| \le k$.

Example: planar graph coloring. It is NP-complete to decide whether a planar graph is 3-colorable; we know every planar graph is 4-colorable, and it is easy to 5-color.

Ditto for edge coloring: Vizing's theorem says the optimum is $\Delta$ or $\Delta + 1$, and the proof is constructive, giving a $(\Delta + 1)$-coloring.

Absolute approximations are known only for trivial cases, where OPT is bounded by a constant. Often, one can show they are impossible by scaling the problem.

E.g., knapsack: define profits $p_i$, sizes $s_i$, sack capacity $B$; wlog all integers. Suppose we had a $k$-absolute approximation: multiply all $p_i$ by $k + 1$, solve, scale down.

E.g., independent set (clique): run the $k$-absolute algorithm on $k + 1$ disjoint copies of $G$.

Relative Approximation

Definitions:
- An $\alpha$-approximate solution has value at most $\alpha$ times optimum for minimization, and at least $1/\alpha$ times optimum for maximization.
- An algorithm has approximation ratio $\alpha$ if on any input it outputs an $\alpha$-approximate feasible solution; it is called an $\alpha$-approximation algorithm.

How do we prove algorithms have relative approximation guarantees? We can't describe OPT, so we can't compare to it directly; instead, we compare to computable lower bounds.
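To spell out the knapsack scaling argument (our rendering of the step above, in the notes' notation):

$$p_i' = (k+1)\,p_i \implies OPT' = (k+1)\,OPT,$$

and every feasible value in the scaled instance is a multiple of $k+1$. A $k$-absolute algorithm returns $A(I')$ with $OPT' - A(I') \le k < k+1$, which forces $A(I') = OPT'$: the algorithm would solve knapsack exactly, impossible unless P = NP. Similarly for independent set: the optimum over $k+1$ disjoint copies of $G$ is $(k+1)\,\alpha(G)$, and an answer missing even one vertex in every copy falls $k+1 > k$ short, so a $k$-absolute answer must contain a maximum independent set of some copy.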

Greedy Algorithms

Do the obvious thing at each step. The hard part is proving that it works; usually this is done by attention to the right upper/lower bound.

Max cut: the trivial upper bound is all $m$ edges. Greedy: place each vertex on the side that cuts more of its already-placed edges; at least half of the edges get cut, a 2-approximation.

Min-diameter clustering: Gonzalez's algorithm. Repeatedly pick the point farthest from the centers chosen so far; the distances to existing centers keep dropping. Suppose that after $k$ centers are chosen, the farthest remaining point is at distance $d$. Then $OPT \ge d$: the centers plus that point are $k+1$ mutually-distance-$\ge d$ points, so some two must share a cluster. Now assign each point to its closest center: the max distance from a center (the radius) is $d$, so the max diameter is $2d$, a 2-approximation.

Set cover: $n$ items, $OPT = k$. At each step, the remaining items can still be covered by $k$ sets, so the greedy choice covers at least a $1/k$ fraction of the remainder; after $k \ln n$ steps nothing is left, an $O(\ln n)$-approximation.

Vertex cover (define the problem: find a minimum set of vertices touching every edge):
- Suppose we repeatedly pick any uncovered edge and cover it with one endpoint: no approximation ratio.
- Suppose we pick an uncovered edge and cover both its endpoints: a 2-approximation, since the picked edges form a matching and any cover must take an endpoint of each.
Sometimes you need to be extra greedy.
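A minimal sketch of Gonzalez's farthest-point procedure just described (our code; points in the plane with Euclidean distance assumed):

    import math

    def gonzalez(points, k):
        """Greedy farthest-point clustering: returns k centers.

        Assigning each point to its nearest chosen center then gives
        clusters of diameter at most 2 * OPT."""
        centers = [points[0]]               # arbitrary first center
        dist = [math.dist(p, centers[0]) for p in points]
        for _ in range(k - 1):
            i = max(range(len(points)), key=lambda j: dist[j])
            centers.append(points[i])       # farthest point becomes a center
            # nearest-center distances only drop as centers are added
            dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
        return centers

    # usage: gonzalez([(0, 0), (1, 0), (5, 5), (5, 6)], 2)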

Explicit attention to where the lower bound comes from: the lower bound informs the algorithm.

Graham's rule for $P \,\|\, C_{\max}$ is a $(2 - \frac{1}{m})$-approximation algorithm.

- Explain the problem: $m$ machines, $n$ jobs with processing times $p_j$; minimize the makespan. Can also think of it as minimizing the max load of continuously running jobs.
- Use a greedy algorithm: assign each job in turn to the least-loaded machine (a code sketch appears after this section).
- Proof by comparison to lower bounds:
  - First lower bound, the average load: $OPT \ge \frac{1}{m} \sum_j p_j$.
  - Second lower bound: $OPT \ge \max_j p_j$.
- Suppose machine $M$ has the max runtime $L$ at the end, and $j$ was the last job added to $M$. Then before $j$ arrived, $M$ had load $L - p_j$, which was minimum, so $\sum_i p_i \ge m(L - p_j) + p_j$. Hence $OPT \ge \frac{1}{m} \sum_i p_i \ge L - (1 - \frac{1}{m}) p_j$, so $L \le OPT + (1 - \frac{1}{m}) p_j \le (2 - \frac{1}{m}) OPT$.
- Notice: this algorithm is online, with competitive ratio $2 - \frac{1}{m}$.
- We had no idea of the optimum schedule; we just used lower bounds.
- We used a greedy strategy.
- Tight bound: consider $m(m-1)$ size-1 jobs, then one size-$m$ job. Where was the problem? The last job might be big.
- LPT (longest processing time first) achieves 4/3, but is not online. Newer online algorithms achieve 1.8 or so.

Approximation Schemes

So far, we've seen various constant-factor approximations. What is the best constant achievable? Lower bounds: APX-hardness/MAX-SNP.

An approximation scheme is a family of algorithms $A_\epsilon$ such that:
- each algorithm runs in polynomial time, and
- $A_\epsilon$ achieves a $(1 + \epsilon)$-approximation.
But note: the runtime might be awful as a function of $\epsilon$.
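The promised sketch of Graham's list scheduling (our code; a heap tracks the least-loaded machine):

    import heapq

    def list_schedule(jobs, m):
        """Graham's rule: put each job on the least-loaded machine.

        Returns the makespan, which is at most (2 - 1/m) * OPT."""
        loads = [(0, i) for i in range(m)]      # (load, machine) min-heap
        heapq.heapify(loads)
        for p in jobs:
            load, i = heapq.heappop(loads)
            heapq.heappush(loads, (load + p, i))
        return max(load for load, _ in loads)

    # LPT variant: list_schedule(sorted(jobs, reverse=True), m) achieves 4/3.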

FPAS, Pseudopolynomial Algorithms

Knapsack.
- Dynamic program for bounded profits: $B(j, p)$ = minimum total size of a subset of items $1, \ldots, j$ achieving profit $p$; the table has one column per achievable profit, so the runtime is pseudopolynomial.
- Rounding: let the optimum be $P$. Scale profits to $p_i' = \lfloor (n/\epsilon P)\, p_i \rfloor$. The new optimum is at least $n/\epsilon - n = (1 - \epsilon)\, n/\epsilon$, since rounding loses less than 1 per item. So solving the rounded instance exactly finds a solution within an $\epsilon$ fraction of the original optimum, and now the table size is polynomial (see the code sketch after this section).
- Did this prove P = NP? No: recall pseudopolynomial algorithms.
- Pseudopoly gives an FPAS; the converse is almost true.
- Knapsack is only weakly NP-hard. Strong NP-hardness (define: NP-hard even with unary-encoded numbers) means no pseudopoly algorithm exists unless P = NP.

Enumeration

A more powerful idea: $k$-enumeration. Return to $P \,\|\, C_{\max}$.
- Schedule the $k$ largest jobs optimally; schedule the remainder greedily.
- Analysis: note $A(I) \le OPT(I) + p_{k+1}$. Consider the job with max completion time $c_j$. If it is one of the $k$ largest, we are done and at OPT. Else, it was assigned to the min-load machine, so $c_j \le OPT + p_j \le OPT + p_{k+1}$.
- So we are done if $p_{k+1}$ is small. But note $OPT(I) \ge (k/m)\, p_{k+1}$: some machine gets at least $k/m$ of the $k+1$ jobs of size at least $p_{k+1}$.
- Deduce $A(I) \le (1 + m/k)\, OPT(I)$. So for fixed $m$, we can get any desired approximation ratio.
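The promised knapsack sketch, combining the profit DP with the scaling step (our code; for simplicity it scales by $n/(\epsilon P)$ with $P = \max_i p_i$, a lower bound on OPT, which only strengthens the guarantee):

    def knapsack_fptas(sizes, profits, B, eps):
        """(1 - eps)-approximate 0/1 knapsack via profit scaling + DP.

        best[p] = minimum total size achieving scaled profit p."""
        n = len(profits)
        P = max(profits)                        # max single profit, P <= OPT
        scale = n / (eps * P)
        sp = [int(p * scale) for p in profits]  # rounding loses < 1 per item
        pmax = sum(sp)                          # at most n * n / eps entries
        INF = float('inf')
        best = [0] + [INF] * pmax
        for i in range(n):                      # classic 0/1 knapsack DP
            for p in range(pmax, sp[i] - 1, -1):
                best[p] = min(best[p], best[p - sp[i]] + sizes[i])
        return max(p for p in range(pmax + 1) if best[p] <= B) / scale

    # knapsack_fptas([3, 4, 5], [30, 50, 60], B=8, eps=0.5) returns 90.0,
    # the true optimum here; in general the guarantee is (1 - eps) * OPT.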

Scheduling any number of machines: combine enumeration and rounding.

- Suppose there are only $k$ distinct job sizes. The vector giving the number of jobs of each type on a machine defines a machine type, and there are only $n^k$ distinct vectors/machine types.
- So we need to decide how many machines of each type to use. Use a dynamic program: enumerate all job profiles that can be completed by $j$ machines in time $T$; a profile is in the set if it is the sum of a $(j-1)$-machine profile and a 1-machine profile. This works because there are only polynomially many job profiles.
- Use rounding to make few important job types:
  - Guess $OPT = T$ to within $\epsilon$ (by binary search).
  - All jobs of size exceeding $\epsilon T$ are large. Round each large job up to the next power of $(1 + \epsilon)$: only $O(\frac{1}{\epsilon} \ln \frac{1}{\epsilon})$ large types.
  - Solve the large jobs optimally; greedily schedule the remainder.
  - If the last job to finish is large, we are optimal for the rounded problem, so within $\epsilon$ of opt. If the last job is small, the greedy analysis shows we are within $\epsilon$ of opt.

Relaxations

TSP:
- Requiring an exact tour: no approximation ratio is achievable, since finding a Hamiltonian cycle is NP-hard.
- Undirected metric: the MST relaxation gives a 2-approximation; Christofides gives 3/2.
- Directed: cycle-cover relaxation.

LP relaxations. Three steps:
1. Write an integer linear program.
2. Relax it to a linear program.
3. Round the fractional solution.

Vertex cover (sketched below).
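A minimal sketch of the three LP steps for vertex cover (our code, assuming scipy is available for the LP solve; round every variable at the 1/2 threshold for a 2-approximation):

    from scipy.optimize import linprog

    def vertex_cover_lp_round(n, edges):
        """ILP: min sum x_v s.t. x_u + x_v >= 1 per edge, x binary.
        Relax to 0 <= x <= 1, solve, round up every x_v >= 1/2.

        Each edge has an endpoint with x >= 1/2, so the result is a
        cover, of cost <= 2 * (LP value) <= 2 * OPT."""
        # x_u + x_v >= 1 rewritten as -x_u - x_v <= -1 for linprog
        A = [[-1 if v in e else 0 for v in range(n)] for e in edges]
        res = linprog(c=[1] * n, A_ub=A, b_ub=[-1] * len(edges),
                      bounds=[(0, 1)] * n)
        return [v for v in range(n) if res.x[v] >= 0.5 - 1e-9]

    # e.g. vertex_cover_lp_round(4, [(0, 1), (1, 2), (2, 3)])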

MAX SAT

Define: literals, clauses; the problem is NP-complete.

Random setting: set each variable true with probability 1/2. A $k$-literal clause is satisfied with probability $1 - 2^{-k}$: very nice for large $k$, but only 1/2 for $k = 1$.

LP: maximize $\sum_j z_j$ subject to

$$\sum_{i \in C_j^+} y_i + \sum_{i \in C_j^-} (1 - y_i) \ge z_j \quad \text{for every clause } C_j, \qquad 0 \le y_i, z_j \le 1,$$

where $C_j^+$ and $C_j^-$ are the variables occurring positively and negatively in $C_j$.

Analysis. Let $\beta_k = 1 - (1 - 1/k)^k$; its values are $1, 3/4, .704, \ldots$

Randomized rounding: set $x_i$ true with probability $\hat{y}_i$.

Lemma: a $k$-literal clause is satisfied with probability at least $\beta_k \hat{z}_j$.
Proof: assume all literals positive. The probability the clause fails is $\prod_i (1 - \hat{y}_i)$, which is maximized when all $\hat{y}_i = \hat{z}_j / k$. Then show $1 - (1 - \hat{z}/k)^k \ge \beta_k \hat{z}$: the left side is concave in $\hat{z}$, so check at $\hat{z} = 0, 1$.

Result: a $(1 - 1/e)$-approximation (by convergence of $(1 - 1/k)^k$); much better for small $k$, e.g., a 1-approximation for $k = 1$.

So the LP is good for small clauses, random setting for large ones. Better: try both methods and take the better result (a code sketch follows). Let $n_1, n_2$ be the expected numbers of clauses satisfied by the two methods; we show $(n_1 + n_2)/2 \ge \frac{3}{4} \sum_j \hat{z}_j$. Writing $S^k$ for the set of $k$-literal clauses,

$$n_1 \ge \sum_k \sum_{C_j \in S^k} (1 - 2^{-k})\, \hat{z}_j, \qquad n_2 \ge \sum_k \sum_{C_j \in S^k} \beta_k\, \hat{z}_j,$$

$$n_1 + n_2 \ge \sum_j \left(1 - 2^{-k} + \beta_k\right) \hat{z}_j \ge \frac{3}{2} \sum_j \hat{z}_j,$$

since $1 - 2^{-k} + \beta_k \ge 3/2$ for every $k$.
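The combined method as code (our sketch; it assumes LP values y are already available from any LP solver rather than constructing the LP here):

    import random

    def random_setting(n_vars):
        """Uniform assignment: satisfies a k-clause w.p. 1 - 2^{-k}."""
        return [random.random() < 0.5 for _ in range(n_vars)]

    def lp_rounding(y):
        """Set variable i true w.p. y[i]: satisfies a k-clause
        w.p. at least beta_k * z_j."""
        return [random.random() < yi for yi in y]

    def satisfied(clauses, assign):
        # literal v > 0 means x_{v-1}; v < 0 means its negation
        return sum(any(assign[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)

    def best_of_both(clauses, n_vars, y):
        """Run both roundings, keep the better: 3/4-approx in expectation."""
        candidates = [random_setting(n_vars), lp_rounding(y)]
        return max(candidates, key=lambda a: satisfied(clauses, a))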

Chernoff-Bound Rounding

Motivating application: set cover.

Theorem: Let $X_i$ be Poisson (i.e., independent 0/1) trials with $X = \sum X_i$ and $E[X] = \mu$. Then

$$\Pr[X > (1 + \epsilon)\mu] < \left[ \frac{e^\epsilon}{(1 + \epsilon)^{(1+\epsilon)}} \right]^\mu.$$

Proof. Note the bound is independent of $n$ and exponential in $\mu$. For any $t > 0$, by Markov's inequality,

$$\Pr[X > (1+\epsilon)\mu] = \Pr[\exp(tX) > \exp(t(1+\epsilon)\mu)] < \frac{E[\exp(tX)]}{\exp(t(1+\epsilon)\mu)}.$$

Use independence:

$$E[\exp(tX)] = \prod_i E[\exp(tX_i)],$$
$$E[\exp(tX_i)] = p_i e^t + (1 - p_i) = 1 + p_i(e^t - 1) \le \exp(p_i(e^t - 1)),$$
$$\prod_i \exp(p_i(e^t - 1)) = \exp(\mu(e^t - 1)).$$

So the overall bound is $\exp((e^t - 1)\mu) / \exp(t(1+\epsilon)\mu)$, true for any $t > 0$. To minimize, plug in $t = \ln(1 + \epsilon)$.

Simpler bounds: less than $e^{-\mu \epsilon^2 / 3}$ for $\epsilon < 1$; less than $e^{-\mu \epsilon^2 / 4}$ for $\epsilon < 2e - 1$; less than $2^{-(1+\epsilon)\mu}$ for larger $\epsilon$.

By the same argument applied to $\exp(-tX)$,

$$\Pr[X < (1 - \epsilon)\mu] < \left[ \frac{e^{-\epsilon}}{(1 - \epsilon)^{(1-\epsilon)}} \right]^\mu,$$

which we can bound by $e^{-\mu \epsilon^2 / 2}$.
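A quick numerical sanity check of the upper-tail bound (our sketch; the parameters are illustrative):

    import math, random

    def chernoff_upper(mu, eps):
        """The theorem's bound [e^eps / (1+eps)^(1+eps)]^mu."""
        return (math.exp(eps) / (1 + eps) ** (1 + eps)) ** mu

    def empirical_tail(n, p, eps, trials=10_000):
        """Estimate Pr[X > (1+eps) * mu] for X ~ Binomial(n, p)."""
        mu = n * p
        hits = sum(sum(random.random() < p for _ in range(n)) > (1 + eps) * mu
                   for _ in range(trials))
        return hits / trials

    # chernoff_upper(100, 0.5) is about 2e-5, so with n=1000, p=0.1 a
    # simulation essentially never sees X exceed 150.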

Basic application: $cn \log n$ balls in $n$ bins. The max load matches the average load (within constants); a fortiori, the same $O(\log n)$ bound holds for $n$ balls in $n$ bins.

General observations:
- The bound trails off when $\epsilon \approx 1/\sqrt{\mu}$, i.e., at absolute error $\sqrt{\mu}$; no surprise, since the standard deviation is around $\sqrt{\mu}$ (recall Chebyshev).
- If $\mu = \Omega(\log n)$, the probability of a constant-$\epsilon$ deviation is $O(1/n)$: useful when there are a polynomial number of events.
- Note the similarity to the Gaussian distribution.
- Generalizes: the bound applies to any independent variables distributed in the range $[0, 1]$.

Zillions of Chernoff applications. Wiring:
- multicommodity flow relaxation,
- Chernoff bound,
- union bound.
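A simulation of the balls-in-bins application above (our sketch):

    import random
    from collections import Counter

    def max_load(balls, bins):
        """Throw balls into bins uniformly; return the max bin load."""
        counts = Counter(random.randrange(bins) for _ in range(balls))
        return max(counts.values())

    # With c * n * log n balls the average load is Omega(log n) and the
    # max stays within a constant factor of it: max_load(140_000, 10_000)
    # is typically around 30 against an average of 14, while
    # max_load(10_000, 10_000) (average 1) grows like log n / log log n.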