APTAS for Bin Packing


APTAS for Bin Packing
Bin Packing has an asymptotic PTAS (APTAS) [Fernandez de la Vega and Lueker, 1981]. For every fixed ε > 0, the algorithm outputs a solution of size at most (1+ε)·OPT + 1, in time polynomial in n.

APTAS for Bin Packing
Split the items into large and small: item i is large if s_i ≥ ε, otherwise it is small. First pack the large items, then pack the small items greedily on top of the large items.

Packing large items: the shifting trick
Idea: change the instance so that the number of distinct item sizes is a constant; the problem can then be solved by dynamic programming. Let L be the set of large items. Then OPT ≥ s(L) = Σ_{i∈L} s_i ≥ ε·|L|.

Grouping large items
For any integer k with 1 ≤ k ≤ |L|, we can partition L into groups L_1, L_2, ..., L_k such that for 1 ≤ i < k, the items in L_i are all no larger than the smallest item in L_{i+1}, and |L_1| ≤ |L_2| = |L_3| = ... = |L_k|. Concretely: sort the items and let L_k be the largest ⌈|L|/k⌉ items, L_{k-1} the next largest ⌈|L|/k⌉ items, and so on.

Shifting
Let a_i be the size of the smallest item in L_i. For i = 1 to k-1: for every item j ∈ L_i, set s_j = a_{i+1}. Let L' = L_1 ∪ L_2 ∪ ... ∪ L_{k-1} with the new sizes.
Claim: L' can be packed in at most OPT bins.
Claim: the number of distinct sizes in L' is at most k-1.
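A minimal Python sketch of the grouping-and-shifting step; the function name group_and_shift and its arguments are illustrative, not from the lecture.

```python
def group_and_shift(sizes, k):
    """Linear grouping ('shifting trick') for the large items: a sketch.

    sizes: the sizes of the large items L (each assumed >= eps)
    k: the number of groups
    Returns (rounded, discarded):
      rounded   -- sizes of L_1, ..., L_{k-1}, each rounded UP to the size
                   of the smallest item of the next group, so that at most
                   k-1 distinct sizes remain
      discarded -- the group L_k of largest items, later packed one per bin
    """
    desc = sorted(sizes, reverse=True)
    g = -(-len(desc) // k)                     # ceil(|L| / k) items per group
    chunks = [desc[i:i + g] for i in range(0, len(desc), g)]
    # chunks[0] = L_k (largest items), chunks[1] = L_{k-1}, ..., last = L_1
    discarded = chunks[0]
    rounded = []
    for above, group in zip(chunks, chunks[1:]):
        a_next = above[-1]                     # smallest size in the group above
        rounded.extend([a_next] * len(group))  # shift every size in L_i up to a_{i+1}
    return rounded, discarded
```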

Packing large items
L' can be packed in at most OPT bins in O(n^{2k}) time using dynamic programming (one can do better using other methods). Pack L_k in |L_k| bins, each item in its own bin. The number of bins used is at most OPT + |L_k| ≤ OPT + 2|L|/k. Choose k = 2/ε² (assuming |L| ≥ k²).
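A sketch of a dynamic program over instances with a constant number d of distinct sizes: a state is the vector of remaining item counts (at most n^d states), and each bin is filled with one of at most n^d feasible configurations, matching the n^{O(k)} bound quoted above. This is a plausible formulation, not necessarily the lecture's exact DP; all names are illustrative.

```python
import itertools
from functools import lru_cache

def pack_exactly(size_counts, cap=1.0):
    """Exact bin packing when the number d of distinct sizes is constant.

    size_counts: dict mapping each distinct size (<= cap) to its multiplicity
    Returns the minimum number of bins of capacity cap.
    """
    sizes = sorted(size_counts)
    start = tuple(size_counts[s] for s in sizes)

    def bin_configs(remaining):
        # every way to fill a single bin from the remaining items
        ranges = [range(min(r, int(cap // s)) + 1)
                  for s, r in zip(sizes, remaining)]
        for c in itertools.product(*ranges):
            if any(c) and sum(ci * si for ci, si in zip(c, sizes)) <= cap:
                yield c

    @lru_cache(maxsize=None)
    def opt(remaining):
        # minimum bins needed for the given vector of remaining counts
        if not any(remaining):
            return 0
        return 1 + min(opt(tuple(r - ci for r, ci in zip(remaining, c)))
                       for c in bin_configs(remaining))

    return opt(start)
```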

Packing large items
L' can be packed in at most OPT bins in O(n^{2k}) time using dynamic programming. The number of bins used is at most OPT + |L_k| ≤ OPT + 2|L|/k. Choosing k = 2/ε², the number of bins used is at most OPT + ε²|L| ≤ OPT + ε·OPT = (1+ε)·OPT (since OPT ≥ ε|L|). The running time is O(n^{4/ε²}).

Packing large items
What if |L| ≤ 4/ε⁴?
Lemma: an optimum solution to a Bin Packing instance can be computed in O(n log n · 2^n) time.
So if |L| ≤ 4/ε⁴ we can compute an optimum solution for the large items in 2^{O(1/ε⁴)} time.

Packing small items
Let m be the number of bins used to pack the large items. Run the greedy (first-fit) algorithm with the small items, and let m' be the total number of bins used after the small items are packed.
Claim: m' ≤ max{ m, ⌈(Σ_i s_i)/(1-ε)⌉ }.
Note that ⌈(Σ_i s_i)/(1-ε)⌉ ≤ (Σ_i s_i)(1+2ε) + 1 for sufficiently small ε.

Packing small items
Claim: m' ≤ max{ m, ⌈(Σ_i s_i)/(1-ε)⌉ }.
Proof: if m' > m then at most one bin is less than (1-ε)-full; whenever greedy opens a new bin, every previously opened bin has load more than 1-ε, since each small item has size less than ε. Hence (1-ε)(m'-1) < Σ_i s_i.
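Spelling out the last step (m' is an integer, so m' < x + 1 implies m' ≤ ⌈x⌉):

```latex
(1-\varepsilon)\,(m'-1) \;<\; \sum_i s_i
\quad\Longrightarrow\quad
m' \;<\; \frac{\sum_i s_i}{1-\varepsilon} + 1
\quad\Longrightarrow\quad
m' \;\le\; \left\lceil \frac{\sum_i s_i}{1-\varepsilon} \right\rceil .
```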

Packing small items
Claim: m' ≤ max{ m, ⌈(Σ_i s_i)/(1-ε)⌉ }. Since m ≤ (1+ε)·OPT and Σ_i s_i ≤ OPT, this gives m' ≤ (1+2ε)·OPT + 1 for sufficiently small ε (ε ≤ ½).
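A minimal first-fit sketch for the small-item phase; here bins is assumed to be the packing already computed for the large items (names illustrative).

```python
def add_small_items(bins, small, cap=1.0):
    """First-fit the small items (each of size < eps) on top of an existing
    packing.  A new bin is opened only when every open bin has load
    greater than cap - eps, which is what drives the (1-eps) argument above.

    bins: list of bins, each a list of item sizes (the large-item packing)
    small: list of small-item sizes
    Returns the final list of bins.
    """
    loads = [sum(b) for b in bins]
    for s in small:
        for i, load in enumerate(loads):
            if load + s <= cap:            # first open bin with enough room
                bins[i].append(s)
                loads[i] += s
                break
        else:                              # no open bin fits item s
            bins.append([s])
            loads.append(s)
    return bins
```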

Summary of APTAS
m' ≤ (1+2ε)·OPT + 1. The running time is dominated by packing the large items: n^{O(1/ε²)} + 2^{O(1/ε⁴)}.

Better algorithms for Bin Packing
[Karmarkar-Karp, 1982]: sophisticated ideas; a polynomial-time algorithm that packs using OPT + O(log² OPT) bins. There is also an AFPTAS (asymptotic FPTAS): (1+ε)·OPT + f(ε) bins in time poly(n, 1/ε).
Open problems: is there a polynomial-time algorithm that packs using OPT + 1 bins? Using OPT + c bins, where c is some absolute constant? Is there a sub-exponential algorithm?

Multiprocessor scheduling with precedence constraints
Jobs J_1, J_2, ..., J_n are to be executed on m identical processors/machines M_1, M_2, ..., M_m. Job J_i has processing time/size p_i. The jobs have precedence constraints between them: J_i ≺ J_k means J_k cannot start before J_i completes. A directed acyclic graph (DAG) G encodes the constraints.

Precedence constraints
Precedence constraints encode dependencies which prevent parallelism: data and control dependencies. Two important applications: parallel programming, and instruction scheduling in multi-issue processors.

Scheduling problem
Jobs are to be assigned to machines. The schedule is non-preemptive: once a job is started it cannot be interrupted. C_j: finish time of job j. s_j = C_j - p_j: start time of job j; job j occupies time slots s_j + 1, ..., C_j. Goal: minimize the makespan max_j C_j.

Example (figure)
m = 3 machines; a precedence DAG on 11 jobs with processing times shown in red; the schedule shown has C_max = 10.

List scheduling
A list is an ordering of the jobs according to some priority, with higher priority given to jobs earlier in the list. Assume the list is a topological sort of the DAG. Greedy list scheduling algorithm: schedule jobs as early as possible. Job i is ready at time t if all jobs in Pred(i) have completed by t.

List scheduling
for k = 1 to m do a_k = 0        // a_k: time at which M_k becomes available
for t = 0 to T_max do
    if no jobs left to schedule: break
    for k = 1 to m do
        if a_k > t: continue     // M_k is still busy at time t
        if no ready jobs: break
        i ← highest-priority ready job
        a_k ← t + p_i
        remove i from the list
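An event-driven Python version of the same greedy rule, as a sketch: it is equivalent to the slot-by-slot pseudocode above but advances time to the next job completion instead of one slot at a time. All names are illustrative.

```python
import heapq

def list_schedule(p, pred, m, prio):
    """Greedy list scheduling on m identical machines with precedences.

    p    : dict job -> processing time
    pred : dict job -> iterable of predecessors (must form a DAG)
    prio : dict job -> position in the list (smaller = higher priority)
    Returns (start, finish, makespan).
    """
    succ = {j: [] for j in p}
    indeg = {j: 0 for j in p}
    for j in p:
        for q in pred[j]:
            succ[q].append(j)
            indeg[j] += 1
    ready = [(prio[j], j) for j in p if indeg[j] == 0]
    heapq.heapify(ready)                  # ready jobs, best priority first
    running = []                          # heap of (finish time, job)
    free, t, done = m, 0, 0
    start, finish = {}, {}
    while done < len(p):
        # start ready jobs as long as a machine is idle
        while ready and free > 0:
            _, j = heapq.heappop(ready)
            start[j], finish[j] = t, t + p[j]
            heapq.heappush(running, (finish[j], j))
            free -= 1
        # advance to the next completion and release its successors
        t, j = heapq.heappop(running)
        free += 1
        done += 1
        for s in succ[j]:
            indeg[s] -= 1
            if indeg[s] == 0:
                heapq.heappush(ready, (prio[s], s))
    return start, finish, max(finish.values())

# Example: job 1 precedes jobs 2 and 3; two machines; makespan is 7.
# list_schedule({1: 3, 2: 2, 3: 4}, {1: [], 2: [1], 3: [1]},
#               m=2, prio={1: 0, 2: 1, 3: 2})
```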

Predecessors and successors
Take the transitive closure of the DAG. Pred(i) = { j : j ≺ i }, the predecessors of i. Succ(i) = { j : i ≺ j }, the successors of i. A chain is a sequence i_1 ≺ i_2 ≺ ... ≺ i_k.

List scheduling analysis
Theorem: list scheduling with any list is a (2 - 1/m)-approximation.
The analysis uses two lower bounds:
OPT ≥ (Σ_i p_i)/m (average load) (LB1)
OPT ≥ Σ_{i∈A} p_i for any chain A, i.e., OPT ≥ max_{chain A} p(A) (LB2)

List scheduling analysis
Two lower bounds: OPT ≥ (Σ_i p_i)/m (average load), and OPT ≥ Σ_{i∈A} p_i for any chain A. Let C_max = max_j C_j be the maximum completion time in the list schedule.
Theorem: C_max ≤ LB1 + LB2.

List scheduling analysis
Theorem: C_max ≤ LB1 + LB2. For t ∈ (0, C_max], call t a full time slot if all machines are busy during t; otherwise t is a partial time slot. The number of full time slots is at most (Σ_i p_i)/m = LB1. It remains to prove: the number of partial time slots is at most p(A) for some chain A.

List scheduling analysis
Theorem: C_max ≤ LB1 + LB2. The number of full time slots is at most (Σ_i p_i)/m = LB1, and the number of partial time slots is at most p(A) for some chain A, so C_max = #full + #partial ≤ LB1 + p(A) ≤ LB1 + LB2. Refinement: the number of full time slots is in fact at most (Σ_i p_i - p(A))/m, hence C_max ≤ LB1 + (1 - 1/m)·LB2 ≤ (2 - 1/m)·OPT.
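The refined bound, written out as one chain of inequalities:

```latex
C_{\max} \;=\; \#\mathrm{full} + \#\mathrm{partial}
  \;\le\; \frac{\sum_i p_i - p(A)}{m} \;+\; p(A)
  \;=\; \frac{\sum_i p_i}{m} \;+\; \Bigl(1-\frac{1}{m}\Bigr)\,p(A)
  \;\le\; \mathrm{LB1} + \Bigl(1-\frac{1}{m}\Bigr)\,\mathrm{LB2}
  \;\le\; \Bigl(2-\frac{1}{m}\Bigr)\,\mathrm{OPT}.
```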

List scheduling analysis
Create a chain inductively. i_1: a job with C_{i_1} = C_max (the last job to complete); let t_1 = C_{i_1}. t_2: the largest integer in [0, s_{i_1}] such that t_2 is a partial slot; if t_2 = 0, stop: the chain is just i_1.
Claim: some predecessor of i_1 is executing during slot t_2, otherwise i_1 would have been scheduled at t_2.

List scheduling analysis
i_1: a job with C_{i_1} = C_max (the last job to complete); let t_1 = C_{i_1}. t_2: the largest integer in [0, s_{i_1}] such that t_2 is a partial slot; if t_2 = 0, stop: the chain is just i_1. i_2: (any) predecessor of i_1 executing at t_2. t_3: the largest integer in [0, s_{i_2}] such that t_3 is a partial slot.

List scheduling analysis
Inductively: t_{k+1} is the largest integer in [0, s_{i_k}] such that t_{k+1} is a partial slot; stop if t_{k+1} = 0, else let i_{k+1} be a predecessor of i_k that is executing at t_{k+1}. Let i_1 ≻ i_2 ≻ ... ≻ i_h be the chain created, and t_1 > t_2 > ... > t_h > t_{h+1} = 0 the associated times.

Picture (figure)
A timeline from 0 with marks t_5 < t_4 < t_3 < t_2 < t_1 and the chain jobs i_4, i_3, i_2, i_1, where job i_j executes in the interval [s_{i_j}+1, t_j]. Note: the execution intervals of the jobs in the chain don't overlap.

List scheduling analysis
Let i_1 ≻ i_2 ≻ ... ≻ i_h be the chain created and t_1 > t_2 > ... > t_h > t_{h+1} = 0 the associated times.
Claim: for 1 ≤ j ≤ h: (1) all slots in [t_{j+1}+1, s_{i_j}] are full; (2) i_j is executing in [s_{i_j}+1, t_j].
Hence every partial slot can be charged to some interval in which a job from the chain is executing.

Picture (figure)
The same timeline as before, now with the slots in the intervals [t_{j+1}+1, s_{i_j}] highlighted: all of these slots are full.

Critical path scheduling
Create a good list: give importance to jobs that are critical for other jobs. i_1: the job with the longest chain (break ties arbitrarily); remove i_1 from the DAG. i_2: the job with the longest chain in the remaining DAG; remove i_2 from the DAG. And so on.
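One standard way to realize this rule, as a sketch: rank each job statically by the length of the longest chain starting at it, rather than literally deleting jobs one by one. Names are illustrative.

```python
def critical_path_list(p, succ):
    """Build a priority list for critical-path scheduling.

    p    : dict job -> processing time
    succ : dict job -> iterable of successors (a DAG)
    Returns the jobs sorted so that jobs on long chains come first;
    feed this list to the greedy list-scheduling algorithm above.
    """
    memo = {}
    def chain(j):          # length of the longest chain starting at j
        if j not in memo:
            memo[j] = p[j] + max((chain(s) for s in succ[j]), default=0)
        return memo[j]
    return sorted(p, key=chain, reverse=True)
```

Since chain(j) strictly exceeds chain(s) for every successor s when processing times are positive, this order is automatically a topological sort of the DAG.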

Critical path scheduling
Works very well in practice, and there are several rules for breaking ties. However, there are examples showing that critical path scheduling does not improve on the 2-approximation bound, irrespective of how ties are broken.

More on scheduling
Open problem: obtain a (2-ε)-approximation (open for ≥ 35 years), even for the case where all jobs have unit size.
For unit job sizes, Theorem: unless P = NP, no approximation ratio better than 4/3 - ε is possible, for any ε > 0. Proof: reduction from Clique.

Reduction from Clique
Clique: given G = (V, E) and an integer k, does G have a clique of size k? (NP-complete.) Given G = (V, E) and k, create a scheduling instance as follows. There are two classes of jobs, graph jobs and dummy jobs, and m machines where m = n(n-1)/2 + 1 (with n = |V|).

Reduction from Clique
Two classes of jobs: graph jobs and dummy jobs. There are |E| + |V| graph jobs, one for each edge and one for each vertex. If J_e is the job corresponding to edge e = uv, then J_u and J_v are predecessors of J_e. Dummy jobs: three sets D_1, D_2, D_3 with
|D_1| = m - k
|D_2| = m - k(k-1)/2 - |V| + k
|D_3| = m - |E| + k(k-1)/2

Reduction from Clique
Dummy jobs: three sets D_1, D_2, D_3 with |D_1| = m - k, |D_2| = m - k(k-1)/2 - |V| + k, |D_3| = m - |E| + k(k-1)/2. Each job in D_1 is a predecessor of each job in D_2, and each job in D_2 is a predecessor of each job in D_3. Clearly OPT ≥ 3 because of the dummy jobs.

Reduction from Clique
Claim: if G has a clique of size k then OPT = 3. The dummy jobs have to be scheduled in order: all of D_1 in slot 1, all of D_2 in slot 2, all of D_3 in slot 3. Schedule the k clique vertices in slot 1; schedule the k(k-1)/2 clique edges plus the remaining |V| - k vertices in slot 2; schedule the remaining edges in slot 3.
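As a sanity check, the dummy-job counts are chosen so that each of the three slots is exactly full (here n = |V| and m = n(n-1)/2 + 1):

```latex
\begin{aligned}
\text{slot 1: } & |D_1| + k = (m-k) + k = m,\\
\text{slot 2: } & |D_2| + \tfrac{k(k-1)}{2} + (|V|-k)
  = \Bigl(m - \tfrac{k(k-1)}{2} - |V| + k\Bigr) + \tfrac{k(k-1)}{2} + |V| - k = m,\\
\text{slot 3: } & |D_3| + \Bigl(|E| - \tfrac{k(k-1)}{2}\Bigr)
  = \Bigl(m - |E| + \tfrac{k(k-1)}{2}\Bigr) + |E| - \tfrac{k(k-1)}{2} = m.
\end{aligned}
```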

Reduction from Clique
Claim: if OPT = 3 then G has a clique of size k (essentially the same argument as the previous claim, run in reverse). Therefore, if G does not have a clique of size k then OPT ≥ 4. This implies that a (4/3 - ε)-approximation is not possible unless P = NP.

Open problem
For unit jobs and m fixed (independent of the number of jobs), it is unknown whether the problem is NP-complete, even for m = 3! This has been open for more than 30 years. Also open: obtain a (2-ε)-approximation for fixed m.