Review: relative approximation algorithms, analyzed by comparison to lower bounds.

TSP with triangle inequality: a non-greedy algorithm, relating to a clever lower bound.
- Find a minimum spanning tree; claim it costs less than the optimal TSP tour.
- Double each edge; this makes the graph Eulerian.
- Find an Euler tour; it costs at most twice OPT.
- Shortcut repeated vertices; by the triangle inequality, the cost only decreases.
- So we get a 2-approximation.
- Christofides' heuristic is smarter: find a min-cost matching on the odd-degree vertices; claim its cost is at most 1/2 of OPT. Now the graph is Eulerian, so find an Euler tour and shortcut. This gives a 3/2-approximation, still the best known for metric TSP.

LP relaxation: vertex cover.
- Write an integer program; its linear relaxation lower-bounds OPT.
- Round the fractional solution to integers, and compare to the linear-relaxation bound.

Big question in general: what is the best possible relative performance? We want matching upper and lower bounds, for both plain and asymptotic ratios.

Approximation Schemes

We've seen constant approximation ratios. Just how good a constant can we achieve?

Definitions: a polynomial approximation scheme (PAS) is an algorithm A that accepts an instance I and an ε > 0 and returns a (1+ε)-approximate solution in time polynomial in the size of I.
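The double-tree 2-approximation above is easy to code; here is a minimal Python sketch (Prim's MST plus a preorder walk of the tree, which is exactly the shortcut Euler tour; the function names are mine, not from the notes):

```python
def mst_preorder_tour(dist):
    """2-approximate metric TSP: build an MST with Prim's algorithm, then
    shortcut a depth-first walk of the doubled tree into a single tour.
    dist is a symmetric matrix satisfying the triangle inequality."""
    n = len(dist)
    in_tree = [False] * n
    parent = [0] * n
    best = [float("inf")] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # Preorder walk of the tree = Euler tour with repeated vertices shortcut.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_cost(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))
```

The MST costs less than the optimal tour (delete a tour edge to get a spanning tree), and shortcutting only shrinks the doubled walk, so the tour costs at most 2·OPT.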

Can also think of it as a family of (1+ε)-approximation algorithms A_ε, one for each ε > 0, but we require "uniformity": a single scheme generating all of them. This allows nonpolynomial dependence on ε, e.g. running time n^(1/ε). We can then truthfully say "achieves any desired approximation in polynomial time", but this is not completely satisfactory. A fully polynomial approximation scheme (FPAS) has runtime polynomial in both the problem size and 1/ε.

0.1 PAS

General idea for a PAS: solve a "core" problem by exhaustive search, then fill in the rest (often greedily). E.g., scheduling when the number of machines m is constant:
- Sort jobs in decreasing order of processing time.
- Schedule the largest k jobs optimally (constant time, since k and m are constant).
- Use list scheduling on the rest.
Analysis: let A_k(I) be the makespan on instance I, and suppose the first k jobs finish at time T.
- If A_k(I) = T, then the makespan is that of an optimal schedule of the k largest jobs, which is at most OPT, so we are done.
- Else, some job j >= k+1 finishes at time A_k(I).
- Then all processors are busy until time A_k(I) - p_j (else job j would have been started sooner elsewhere).
- So OPT >= A_k(I) - p_j >= A_k(I) - p_{k+1}, i.e., A_k(I) <= OPT + p_{k+1}.
- Now just show p_{k+1} is small vs. OPT: in any schedule, some machine gets at least k/m jobs of size at least p_{k+1}, so p_{k+1} <= m·OPT/k.
- So A_k(I) <= (1 + m/k)·OPT. A more careful analysis gives ratio 1 + (1 - 1/m)/(1 + floor(k/m)).
- Running time: O(m^k + n) -- linear time!
Similarly, Knapsack: guess the k largest-profit items in the optimum (n^k time), then greedily add more.
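The core-plus-greedy scheme above can be sketched as follows (a toy Python version; it tries all m^k placements of the k largest jobs and list-schedules the rest, so its makespan is at least as good as the schedule the analysis bounds; names are mine):

```python
from itertools import product

def schedule_pas(jobs, m, k):
    """PAS sketch for makespan on m identical machines: enumerate every
    assignment of the k largest jobs, finish each with greedy list
    scheduling on the least-loaded machine, and keep the best makespan."""
    jobs = sorted(jobs, reverse=True)
    head, tail = jobs[:k], jobs[k:]
    best = float("inf")
    for assign in product(range(m), repeat=len(head)):
        loads = [0] * m
        for job, machine in zip(head, assign):
            loads[machine] += job
        for job in tail:                      # list scheduling on the rest
            loads[loads.index(min(loads))] += job
        best = min(best, max(loads))
    return best
```

With k and m constant, the m^k enumeration is a constant factor, matching the O(m^k + n) bound in the notes.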

This gives a (1 + 1/k)-approximation. Result of Korte: this "k-enumeration" approach is essentially the "only way" to build PASs. So we can achieve any fixed approximation ratio in polynomial time, but it is disturbing that the polynomial gets very big!

Negative Results

To decide whether to stop seeking approximation algorithms/schemes, we need to show that certain approximation ratios cannot be achieved. General method: show that achieving the approximation would solve an NP-complete problem; deduce that it is NP-hard to approximate.

Classical method: prove it is impossible to distinguish between optimum value k and k+1; deduce there is no (1 + 1/k)-approximation.
- E.g., bin packing: can't distinguish between 2 and 3 bins (doing so would solve Partition), so no approximation ratio below 3/2.
- Asymptotic approximation dodges this: restrict attention to instances whose optimum exceeds some N_0, and the technique no longer applies.

E.g., general (non-metric) TSP: suppose we could achieve some ratio c.
- We could then solve Hamiltonian cycle: keep graph edges at cost 1, and put an edge of cost n(c+1) where before there was no edge.
- If there is a Hamiltonian cycle, the optimal tour costs n; else every tour costs at least n(c+1) > c·n, so a c-approximation distinguishes the two cases.
- Since c was arbitrary, TSP can't be approximated to within any such factor. The construction is scalable, so the asymptotic ratio is the same as the plain ratio -- also impossible.

These approaches were very limited. Recent breakthroughs:
- Defined PCPs (probabilistically checkable proofs); showed them equivalent to NP.
- Showed approximation algorithms could be used to check PCPs; deduce that approximating is hard.
- Very strong hardness results follow, e.g., can't approximate clique within n^(1-ε).
- Set cover: approximation ratio exactly (1 + o(1)) ln n.
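The TSP gap reduction above can be written down directly; a sketch with illustrative names:

```python
def tsp_gap_instance(n, edges, c):
    """Reduction from Hamiltonian cycle: graph edges cost 1, non-edges cost
    c*n + 1.  If a Hamiltonian cycle exists the optimal tour costs n;
    otherwise every tour uses a non-edge and costs more than c*n, so any
    c-approximation algorithm would decide Hamiltonian cycle."""
    edge_set = {frozenset(e) for e in edges}
    big = c * n + 1
    return [[0 if i == j else (1 if frozenset((i, j)) in edge_set else big)
             for j in range(n)] for i in range(n)]
```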

- MAX 3-SAT: ratio exactly 3/4.
- This led to a whole class of MAX-SNP-complete problems: all can be approximated to within some constant factor, but not to within every constant factor (no PAS).

Pseudo-Polynomial Algorithms

These lead into FPASs and avoid the PAS's bad ε-dependence. Often, problems come in two parts: structure plus numbers:
- graphs and edge weights
- bin packing with item sizes
- jobs with processing times
An algorithm is pseudo-polynomial if its running time is polynomial in the structure size and the maximum number appearing in the input. Note: to be truly polynomial, it would have to be polynomial in the log of the maximum number. Equivalently: pseudo-polynomial means polynomial when all numbers are written in unary.

E.g., Subset Sum / Partition: define T[i][j] to be true iff some subset of items 1...i sums to j.
- T[i][0] and T[1][j] are trivial.
- T[i+1][j] = T[i][j] or T[i][j - s_{i+1}].
- If the maximum number is U, the table has size nU and we do n iterations, so the time is O(nU).

E.g., Knapsack on n items: a dynamic program over subsets of maximum profit? Might not extend. Instead, dynamic program over the minimum-size subset achieving a given profit: T[i][p] is the minimum total size of a subset of items 1...i with profit exactly p (if any); it extends to i+1 as in Subset Sum. If the maximum item profit is P, then the maximum knapsack profit is nP and the table has size n^2·P.
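The profit-indexed knapsack DP above can be sketched in Python, collapsed to one dimension (processing profits downward makes each item usable at most once; names are mine):

```python
def knapsack_by_profit(sizes, profits, capacity):
    """Pseudo-polynomial knapsack: T[p] = minimum total size of a subset
    achieving profit exactly p.  The table has size sum(profits) <= n * P,
    so the running time is polynomial in n and the maximum profit P."""
    INF = float("inf")
    max_profit = sum(profits)
    T = [INF] * (max_profit + 1)
    T[0] = 0
    for s, p in zip(sizes, profits):
        for q in range(max_profit, p - 1, -1):   # downward: 0/1 items
            if T[q - p] + s < T[q]:
                T[q] = T[q - p] + s
    # Answer: the largest profit reachable within the capacity.
    return max(p for p in range(max_profit + 1) if T[p] <= capacity)
```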

Do all problems have pseudo-polynomial algorithms?
- For some, e.g., Clique, no large numbers are possible. So a pseudo-polynomial algorithm would already be polynomial, which would imply P = NP. The same is true of all "non-number" problems.
- More generally, call a problem strongly NP-hard if it is NP-hard even when restricted to instances whose maximum number is polynomial in the input size. Knapsack is not strongly NP-hard, since we can solve it in polytime when the numbers are polynomial. If a problem is strongly NP-hard, it has no pseudo-polynomial algorithm (unless P = NP).
- How to show a problem is strongly NP-hard? Give a "pseudo-polynomial reduction" from a strongly NP-hard problem. The canonical such problem is 3-Partition (partition into 3-element sets); see Garey and Johnson.
- Immediate consequence: bin packing and scheduling are strongly NP-hard.

0.2 Rounding to an FPAS

From a pseudo-polynomial algorithm, we can often get an FPAS: round/scale the numbers to polynomial size, solve using the pseudo-polynomial (now polynomial) algorithm, and show that the rounding introduces only negligible error.

Example: Knapsack on n items. Suppose we know the optimum profit P.
- Round every item profit up to the nearest integer multiple of εP/n.
- Total change in the profit of any solution: at most εP. So the rounded optimum is some solution of profit at most (1+ε)P.
- Divide all profits by εP/n: this scales all solutions the same way, and every profit becomes an integer.
- The optimum profit is now at most (1+ε)n/ε, so the dynamic program runs in (n/ε)^O(1) time.
- Get a solution of (scaled) value at most (1+ε)n/ε; returning to the original values, its profit is within εP of the optimum, i.e., at least (1-ε)P.

This approach works for any pseudo-polynomial algorithm whose pseudo-dependence is only on the objective function.
- It might break if the pseudo-polynomial dependence is on the constraints: "unrounding" might then give a solution that only approximately satisfies the constraints. Sometimes that's good enough!
- Arora's TSP PAS is similar in spirit: it "rounds" to integer grid points so that a dynamic program can patch subtours together.

The converse is also true: if a problem has an FPAS, it has a pseudo-polynomial algorithm.
- Suppose the input values are integers, polynomially bounded by the input size; then (by technical assumptions) the output value is an integer, polynomial in the input size.
- Set ε = 1/(2·OPT); the FPAS runs in polynomial time and returns a solution of value at most OPT + 1/2.
- Since the value is an integer, it must be OPT.
- Deduce: no FPAS for strongly NP-complete problems (but maybe a PAS)!

Bin Packing

We've seen bin packing is strongly NP-hard and has no PAS (in fact, no approximation ratio below 3/2). It was therefore a big shock in 1981 when de la Vega and Lueker gave an asymptotic PAS. This was compounded when Karmarkar and Karp extended it to an asymptotic FPAS. Later work has shown that in fact one can pack in OPT + O(log^2 OPT) bins! Big open question: maybe one can do OPT + O(1) bins? Note this still wouldn't violate the nonexistence of a PAS, but it would be eminently satisfactory!

Start with the asymptotic PAS:
- uses (1+ε)OPT + 1 bins
- linear time in n (the number of items), but very exponential in 1/ε
Key ideas:

- rounding of item sizes (like the FPAS)
- exhaustive enumeration (like the PAS)
- the FPAS then does the enumeration implicitly, in polytime

General principle: we can treat ε, and any f(ε), as a constant and ignore it in asymptotic time bounds.

First rounding step: eliminate small items.
- Ignore all items of size less than ε, and pack the remainder with approximation ratio (1+ε).
- Put the small items back greedily.
- If the small items don't open a new bin, we still have a (1+ε)-approximation.
- If they do open a new bin, then every bin but the last is full to (1-ε), so we use at most OPT/(1-ε) + 1 = (1+O(ε))OPT + 1 bins.
- Either way, take the worse of the two bounds.

Second rounding step: reduce to few distinct item sizes, so we can use enumeration techniques.
- Goal: m = n/k distinct item sizes.
- Sort the item sizes s_1 >= s_2 >= ... >= s_n.
- Let group G_i = s_{(i-1)k+1}, ..., s_{ik}, and round each size in G_i down to s_{ik}, yielding G'_i.
- Claim: this adds only k bins to the optimum. Proof: given a packing of all the G'_i, replace the items of G'_i by the items of G_{i+1} (which are smaller, so they fit). This packs all items of the original problem except those in G_1; use k extra bins to pack G_1.

Result: a restricted bin packing problem (RBP), where all sizes exceed ε and there are at most m distinct sizes. Solving RBP solves BP:
- Set ε' = ε/2 and remove the items of size less than ε'; putting them back later adds relative error at most ε/2.
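The linear-grouping step can be sketched directly (a toy version; it rounds each group down to its smallest member, as above):

```python
def linear_grouping(sizes, k):
    """Sort sizes in decreasing order, cut into groups of k consecutive
    items, and round every size in a group down to the group's smallest
    size.  The result has at most ceil(n/k) distinct sizes, and a packing
    of it converts to a packing of the original with k extra bins."""
    s = sorted(sizes, reverse=True)
    rounded = []
    for i in range(0, len(s), k):
        group = s[i:i + k]
        rounded.extend([group[-1]] * len(group))
    return rounded
```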

Suppose we solve RBP; then add back the small items, plus k extra bins for G_1. But note: each item has size at least ε/2, so we need at least εn/2 bins, i.e., OPT >= εn/2.
- Set k = ceil(ε^2 n / 2); then k <= ε·OPT (so the k extra bins are ok), and m <= n/k <= 2/ε^2.
- Note m and the size threshold are fixed constants, independent of n.

Bin types: since there are only m sizes, write the input as {n_1 : v_1, ..., n_m : v_m}, where n_i is the number of items of size v_i.
- Each bin contains some multiset of items {b_1 : v_1, b_2 : v_2, ..., b_m : v_m} with sum b_i v_i <= 1.
- A bin type is a vector T = (T_1, ..., T_m), where T_i is the number of size-v_i items, such that sum T_i v_i <= 1.
- How many bin types? Each item has size at least ε, so sum T_i <= 1/ε. So each T is one way to write a number at most 1/ε as a sum of counts over the m sizes; the number of ways is at most C(m + 1/ε, 1/ε), independent of n.
- The optimal solution is just a certain number of bins of each type; we only need to know these numbers.
- Let x_T denote the number of bins of type T. The number of bins used is sum_T x_T, and the number of packed items of size v_i is sum_T x_T T_i.

So, we want a solution to the following problem:

    w = min sum_T x_T  subject to  x >= 0,  xA = n

where n is the vector of item counts and the rows of A are the bin types T.

Is this a linear program? No: we need the x_T to be integers.
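Bin types can be enumerated by a short recursion; a sketch (floating-point sizes get a small tolerance, and the all-zero "empty bin" type is included but harmless):

```python
def bin_types(sizes, capacity=1.0):
    """Enumerate all vectors (T_1, ..., T_m) with sum(T_i * v_i) <= capacity,
    one count per distinct item size v_i.  When every size is at least eps,
    a bin holds at most 1/eps items, so the count depends only on m and eps,
    not on n."""
    types = []
    def extend(i, counts, remaining):
        if i == len(sizes):
            types.append(tuple(counts))
            return
        c = 0
        while c * sizes[i] <= remaining + 1e-9:
            extend(i + 1, counts + [c], remaining - c * sizes[i])
            c += 1
    extend(0, [], capacity)
    return types
```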

Unfortunately, integer linear programming is NP-complete. Fortunately, Lenstra showed how to solve an integer program in time linear in the number of constraints if the number of variables is fixed (by exhaustive enumeration). That is true in our case, since the number of variables depends only on m and ε. So we can solve this integer program in O(n) time!

Why is this not an FPAS? It is only polynomially solvable for constant m; in an FPAS, m and ε are no longer constant.

But suppose we solve the LP relaxation instead.
- Note we get a value no worse than the integer optimum.
- We want to turn the fractional solution into an integer one.
- Recall there are only m constraints, so the LP has a basic feasible solution with only m nonzero variables.
- Round each of these up: this adds at most m bins!
- So we get a solution of value at most OPT + m; recalling m <= 2/ε'^2, this is a small additive error.

Wait, there is a problem: the LP has an exponential number of variables!
- But it has only m constraints. What to do?
- The dual has m variables but exponentially many constraints.
- No problem: the ellipsoid method is happy if we can separate!
- What is the separation problem? It turns out to be knapsack!
- Uh oh -- we can't solve knapsack.
- No problem: we have an FPAS for knapsack. That is good enough for approximate separation.
- So we can solve the dual, which means we get the value of the primal.
- In this case, we can also use it to solve the primal: consider adding a bin and see if it changes the optimum. If it does, it was the wrong bin; if not, keep it and start again (revising the LP to force the choice).

So we get an asymptotic FPAS for RBP, and thus an asymptotic FPAS for BP.