Single processor scheduling with time restrictions

J Sched manuscript No. (will be inserted by the editor)

Single processor scheduling with time restrictions

O. Braun · F. Chung · R. Graham

Received: date / Accepted: date

Abstract We consider the following list scheduling problem. We are given a set S of jobs which are to be scheduled sequentially on a single processor. Each job has an associated processing time which is required for its processing. Given a particular permutation of the jobs in S, the jobs are processed in that order with each job started as soon as possible, subject only to the following constraint: for a fixed integer B ≥ 2, no unit time interval [x, x+1) is allowed to intersect more than B jobs for any real x. It is not surprising that this problem is NP-hard when the value B is variable (which is typical of many scheduling problems). There are several real-world situations for which this restriction is natural. For example, suppose that in addition to our jobs being executed sequentially on a single main processor, each job also requires the use of one of B identical subprocessors during its execution. Each time a job is completed, the subprocessor it was using requires one unit of time in order to reset itself. In this way, it is never possible for more than B jobs to be worked on during any unit interval. In this paper we carry out a classical worst-case analysis for this situation. In particular, we show that any permutation of the jobs can be processed within a factor of 2 − 1/(B−1) of the optimum (plus an additional small constant) when B ≥ 3, and this factor is best possible.

Research supported in part by grants DFG BR 7/7- (O. Braun), ONR MURI N00008077 and FA9550-09--0900 (F. Chung), and NSF 005079 (R. Graham).

O. Braun, Environmental Campus Birkenfeld, Trier University of Applied Sciences. E-mail: o.braun@umwelt-campus.de
F. Chung, University of California, San Diego. E-mail: fan@ucsd.edu
R. Graham, University of California, San Diego. E-mail: graham@ucsd.edu
For the case B = 2, the situation is rather different, and in this case the corresponding factor we establish is 4/3 (plus an additional small constant), which is also best possible. It is fairly rare that best possible bounds can be obtained for the competitive ratios of list scheduling problems of this general type.

Keywords Single processor scheduling · Time restrictions · Worst-case analysis

1 Introduction

We are initially given a set S = {S_1, S_2, ..., S_n} of jobs and an integer B ≥ 2. Each job S_i has associated with it a length s_i ≥ 0. The jobs are all processed sequentially on a single processor. A job S_i will be processed during a (semi-open) time interval [α, α + s_i) for some α ≥ 0. A special type of job S_i, called a zero-job Z_i, has length s_i = 0. By convention, each zero-job is processed at some particular point in time. Furthermore, only one job can be worked on at any point in time, except in the case of zero-jobs, where several zero-jobs may be processed at the same point in time, provided the constraint (1) below is not violated. Given some permutation of S, say π(S) = T = (T_1, T_2, ..., T_n) with corresponding job lengths (t_1, t_2, ..., t_n), the jobs are placed sequentially on the real line
as follows. The initial job T_1 in the list T begins at time 0 and finishes at time t_1. In general, T_{i+1} begins as soon as T_i is completed, provided the following B-constraint is always observed. The constraint that differentiates our problem from many similar ones in the literature (e.g., see [4]) is this:

For every real x ≥ 0, the unit interval [x, x+1) can intersect at most B jobs. (1)

Thus, if it happens that some job T_i finishes at time t and t_{i+1} + t_{i+2} + ... + t_{i+B−1} ≤ 1, then the job T_{i+B} cannot start until time t + 1 (since if it started strictly before this time, then for a suitable ε > 0, the interval [t − ε, t − ε + 1) would intersect more than B jobs). Constraint (1) reflects the condition that each job needs one of B additional resources for its processing and that a resource has to be renewed after the processing of a job has been finished.

The preceding procedure results in a unique placement (or schedule) of the jobs on the real line. We define the finishing time τ(T) to be the time at which the last job T_n is finished. A natural goal might be, for a given job set S, to find those permutations T = π(S) which minimize the finishing time τ(T). In Section 2 and Section 3, our focus will be on a worst-case analysis of the problem. That is, how much worse can an arbitrary schedule be compared to an optimal schedule for a given set S of jobs? We will show in Section 4 that the problem is NP-hard in general. Although this simple scheduling model has potential applicability to a number of application areas such as crew scheduling, flexible manufacturing systems, printed circuit board assembly, and cardinality constrained scheduling, this is the first time that a precise worst-case analysis of it has been carried out (e.g., see [4]).

Let us denote by τ_w(S) the largest possible finishing time for any permutation of S, and let τ_o(S) denote the optimal (i.e., the shortest possible) finishing time for any permutation of S.
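The placement procedure just described can be simulated directly. The following sketch is not part of the original analysis; it assumes that, for the greedy left-to-right placement, constraint (1) reduces to the start-time recurrence start(T_k) = max(finish(T_{k−1}), finish(T_{k−B}) + 1):

```python
def finishing_time(lengths, B):
    """Greedily place jobs on the real line in list order.

    Each job starts when its predecessor finishes, except that job k
    must additionally wait until one unit of time after job k-B has
    finished -- otherwise some unit interval would intersect more
    than B jobs.  Zero-length jobs are points on the time axis.
    """
    finish = []
    for k, t in enumerate(lengths):
        start = finish[-1] if finish else 0.0
        if k >= B:  # B-constraint against job k-B (0-indexed)
            start = max(start, finish[k - B] + 1.0)
        finish.append(start + t)
    return finish[-1] if finish else 0.0
```

For instance, with B = 2 the permutation (Z, S, S, S, Z, Z) of three unit jobs and three zero-jobs finishes at time 4, while the alternating permutation (S, Z, S, Z, S) finishes at time 5.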
We are interested in seeing how much worse τ_w(S) can be compared to τ_o(S). In what follows, we will prove the following worst-case bounds:

Theorem 1 For B = 2 and any set S,

τ_w(S) ≤ (4/3) τ_o(S) + 1. (2)

The factor 4/3 is best possible.

Theorem 2 For B ≥ 3 and any set S,

τ_w(S) ≤ (2 − 1/(B−1)) τ_o(S) + B + 2. (3)

The factor 2 − 1/(B−1) is best possible.

It is interesting that for this problem, there is a significant difference between the cases B = 2 and B ≥ 3.

We first make some preliminary remarks that will apply in the remainder of the paper. We are interested in upper-bounding the expression Δ_B(S) = τ_w(S) − c_B τ_o(S), where

c_B = 4/3 if B = 2, and c_B = 2 − 1/(B−1) if B ≥ 3,

and S = {S_1, S_2, ..., S_n} is some arbitrary set of jobs. Thus, we can assume that all the lengths s_i of the S_i satisfy s_i ≤ 1, since if any S_i had s_i = 1 + δ > 1, then by decreasing s_i to 1, we decrease both τ_w(S) and τ_o(S) by δ, thereby increasing Δ_B(S). We should also keep in mind that in a schedule, it is possible that a job S_i finishes at some time t, then j zero-jobs are processed at this time t for some j ≤ B − 2 (which takes time 0), and then another job S_k immediately begins its processing at time t.

2 The case B = 2

Our goal in this section will be to prove (2). Let us examine the schedule for the optimal permutation for S, which we will assume (by relabeling if necessary) to be (S_1, S_2, ..., S_n). Since B = 2, then by (1) we have

τ_o(S) ≥ s_1 + 1 + s_3 + 1 + s_5 + ... , and also τ_o(S) ≥ s_2 + 1 + s_4 + 1 + s_6 + ... .

Consequently, it is not hard to check that this implies

2 τ_o(S) ≥ s_1 + s_n + Σ_{i=2}^{n−1} s_i + (n − 2),

which in turn implies

τ_o(S) ≥ (1/2) (Σ_{i=1}^{n} s_i + n − 2). (4)

Case 1: Σ_{i=1}^{n} s_i ≥ (n+1)/2. Because τ_w(S) ≤ n (note that we only have jobs with lengths ≤ 1) and with (4) we come to

Δ_2(S) = τ_w(S) − (4/3) τ_o(S) ≤ n − (2/3)(Σ_{i=1}^{n} s_i + n − 2) ≤ n − (2/3)((n+1)/2 + n − 2) = 1.

Case 2: Σ_{i=1}^{n} s_i < (n+1)/2. In the schedule for T, let s(T_i) and f(T_i) denote the starting time and the finishing time, respectively, of T_i (so that f(T_i) − s(T_i) = t_i). In particular, because of the constraint (1), we must have s(T_3) ≤ f(T_1) + 1 (we say that there is a jump from T_1 to T_3), s(T_5) ≤ f(T_3) + 1, and so on. Also we must have s(T_4) ≤ f(T_2) + 1, s(T_6) ≤ f(T_4) + 1, and so on. In any case, we can see that there are at most (n−1)/2 jumps from a job i to a job i + B = i + 2 that can create idle time. Therefore we have for the makespan of any schedule

τ_w(S) ≤ Σ_{i=1}^{n} s_i + (n−1)/2. (5)

With (4) and (5) we come to

Δ_2(S) = τ_w(S) − (4/3) τ_o(S) ≤ Σ_{i=1}^{n} s_i + (n−1)/2 − (2/3)(Σ_{i=1}^{n} s_i + n − 2) = (1/3) Σ_{i=1}^{n} s_i − n/6 + 5/6 < (1/6)(n+1) − n/6 + 5/6 = 1.

This proves Theorem 1.

To see that the factor 4/3 is best possible, consider the set S consisting of t + 1 jobs S_i of length 1 and t zero-jobs Z_j, for some even positive integer t. The (good) permutation (Z_1, S_1, S_2, ..., S_{t+1}, Z_2, Z_3, ..., Z_t) has a finishing time of (3t+2)/2, since the unit jobs can be processed back to back and the remaining zero-jobs can then be processed two per unit of time. On the other hand, the (worst) permutation (S_1, Z_1, S_2, Z_2, ..., S_t, Z_t, S_{t+1}) has a finishing time of τ_w(S) = 2t + 1. Thus,

τ_w(S) / τ_o(S) ≥ (2t + 1) / ((3t+2)/2) = (4t+2)/(3t+2) → 4/3

as t → ∞, so the factor 4/3 in (2) cannot be improved.

3 The case B ≥ 3

Our next goal will be to establish (3). As before, we will assume that the permutation (S_1, S_2, ..., S_n) is the optimal permutation for the set of jobs S = {S_1, S_2, ..., S_n}, so that this permutation has finishing time τ_o(S).

We now want to create an auxiliary job set T = {T_1, T_2, ..., T_n} as follows. The size of t_i = |T_i| will be defined by

t_i = 1/(B−1) if s_i ≤ 1/(B−1), and t_i = s_i if s_i > 1/(B−1).

Observe that for any permutation T′ of T,

τ(T′) = τ(T) = Σ_{i=1}^{n} t_i ≥ τ_w(S). (6)

We next examine the schedule for S. Let us replace each job S_i that has s_i ≤ 1/(B−1) by a zero-job S′_i placed on the time axis at the starting time of S_i (the lengths of the jobs with s_i > 1/(B−1) remain unchanged).
This certainly causes no violations of the B-constraint (1). We will use the abbreviation ε := 1/(B−1), since this quantity will occur so frequently in what follows. We will now assign a weight of size ε to each zero-job in this modified job set S′. Thus, the schedule for S′ consists of the original large jobs S_i of length s_i > ε and a number of point masses of weight ε, all placed so that condition (1) still holds, i.e., no unit interval intersects more than B of these jobs. The sum of all the weights of the jobs in S′ is just equal to the sum of all the lengths of the jobs in T, which by (6) is an upper bound on τ_w(S). Also (since some of the jobs have been replaced by zero-jobs), all of the jobs in S′ still fit in the interval [0, τ_o(S)).

So we have reduced our problem to that of finding an upper bound W on the total weight of an arbitrary assignment of (semi-open) intervals of length s_i > ε and point masses of weight ε so that condition (1) is satisfied (and, of course, so that positive-length intervals are disjoint, and point masses can only intersect a positive interval at its starting point). In particular, we can conclude that τ_w(S) ≤ τ(T) ≤ W.

Let us now define the blocks U_i = [i, i + 1) for 0 ≤ i ≤ N − 1, where N = ⌈τ_o(S)⌉. Also, for each U_i we define its subblocks U_{i,j} = [i + (j−1)ε, i + jε) for 1 ≤ j ≤ B − 1. We want to study how the weight from the jobs in S′ is distributed in the U_i. Define the content c(U_i) to be the sum of all the weight (or "mass") in U_i. In other words, we add up all the contributions of the point masses in U_i together with all the portions of those S′_k that happen to lie within U_i.

Our next task will be to analyze various possibilities for the U_i. Let us say that the interval U_i is bad if c(U_i) > 2 − ε. Otherwise, we will say that U_i is good.

First, suppose that U_i contains B zero-jobs, each contributing ε to c(U_i). Then U_i cannot intersect any other jobs in S′, and we have c(U_i) = Bε ≤ 2 − ε when B ≥ 3, so that U_i is good in this case. Next, suppose that U_i contains j zero-jobs for some j with 0 ≤ j ≤ B − 2. Then c(U_i) ≤ jε + 1 ≤ (B−2)ε + 1 = 2 − ε and, again, U_i is good. Hence, any bad U_i must contain exactly B − 1 zero-jobs. In addition, it must also intersect exactly one other job in S′ in an interval I_i (called its S-interval) with weight (= length) greater than 1 − ε, in order that its total mass (i.e., content) is greater than 2 − ε.

The plan is now to show that if we add up the contents of all of the U_i, the sum will be at most (2 − ε) N + B. We will do this by keeping a running total of the mass accumulated as we proceed from the beginning block U_0 and move to U_1, U_2, etc., until we reach the final block U_{N−1}. We would like to show that the average mass in the blocks is at most 2 − ε (plus a small increment). While any good block has mass (= content) at most 2 − ε, every now and then we may have a bad block, which could have content as much as 2. Our goal is to show that having too many bad blocks will always force there to be a compensating number of very good blocks, i.e., blocks that have content a lot less than 2 − ε, and this will keep the accumulated average mass close to 2 − ε.

First, observe that for any bad U_i, since its S-interval I_i has length exceeding 1 − ε, it must intersect all of the subblocks U_{i,j} of U_i. Hence, each of the B − 1 zero-jobs in U_i must lie in either the first or the last subblock, i.e., in U_{i,1} ∪ U_{i,B−1}. Let us say that a block U_i has rank r if it intersects exactly r jobs of S′ in its last subblock U_{i,B−1}. We will denote this by writing ρ(U_i) = r.

Let us now examine the relationship between two consecutive blocks U_i and U_{i+1} that have ρ(U_i) = s and ρ(U_{i+1}) = t, respectively, where we assume that s > t.
Since ρ(U_i) = s, then because of condition (1), the number of jobs which can intersect any of the first B − 2 subblocks of U_{i+1} is at most B − s (note that the last subblock of U_i together with the first B − 2 subblocks of U_{i+1} spans exactly one unit of time). However, since ρ(U_{i+1}) = t, then it is not hard to see that the content c(U_{i+1}) of U_{i+1} satisfies

c(U_{i+1}) ≤ (B − s − 1)ε + (t − 1)ε + 1 = 2 − (s − t + 1)ε.

(This can be seen by considering the possible contributions C of the non-zero-jobs to the content of U_{i+1}. If C = 0, then c(U_{i+1}) ≤ (B − s + t)ε ≤ 2 − (s − t + 1)ε for B ≥ 3. On the other hand, if 0 < C ≤ 1 − ε, then either c(U_{i+1}) ≤ 1 − ε + (B − s − 1)ε + tε if the non-zero-jobs in U_{i+1} intersect the first B − 2 subblocks of U_{i+1}, or c(U_{i+1}) ≤ 1 − ε + (B − s)ε + (t − 1)ε if the non-zero-jobs in U_{i+1} intersect only the last subblock of U_{i+1}; in both cases the bound equals 2 − (s − t + 1)ε. Finally, if C > 1 − ε, then the last subblock of U_{i+1} can contain at most t − 1 zero-jobs, so that c(U_{i+1}) ≤ 1 + (B − s − 1)ε + (t − 1)ε = 2 − (s − t + 1)ε, as claimed.)

In other words, if the ranks of two consecutive blocks drop by s − t > 0, then the second block has content at least (s − t)ε below the target value 2 − ε. On the other hand, if s ≤ t, then the content of U_{i+1} might well be 2 (which is ε above the desired average value of 2 − ε).

Let us examine the situation for a bad block U_i, i > 0, with rank s (so, in particular, s > 0). Thus, there are s − 1 point masses in the last subblock of U_i (since its S-interval I_i intersects all the subblocks of U_i). Consequently, the first subblock of U_i must have B − s point masses in it (since U_i is bad), and so intersects B − s + 1 jobs of S′. This means that the preceding block U_{i−1} can have rank at most s − 1.

We now consider the sequence of ranks ρ(U_i), i = 0, 1, 2, .... We will also consider the corresponding loss or gain with respect to our average value 2 − ε that we make as we encounter each new block. As we have seen, if the rank in going from one block to the next one decreases by m, then the content of the second block is at least mε below the value 2 − ε. On the other hand, if it increases, then the content of the second block might be as much as ε above the value 2 − ε.
If the rank doesn't change, then we can't conclude that the content of the second block is deficient, i.e., below 2 − ε. However, suppose that we have a sequence of blocks U_j, U_{j+1}, ..., U_k, where the only bad blocks in the sequence are the first and the last blocks, and both blocks have the same rank s. Then by the preceding remark, the block U_{k−1} preceding U_k has rank at most s − 1. Hence, in the sequence of ranks ρ(U_j), ρ(U_{j+1}), ..., ρ(U_{k−1}), there will be a pair of consecutive blocks for which the rank drops by at least 1. Putting all these facts together, we can conclude that the total gain over the average value 2 − ε as we sum up the cumulative content of the complete sequence of blocks is at most B. That is, the total weight
Σ_{i=0}^{N−1} c(U_i) of the blocks of S′ satisfies

τ_w(S) ≤ τ(T) = Σ_{i=1}^{n} t_i = Σ_{i=0}^{N−1} c(U_i) ≤ N(2 − ε) + B ≤ ⌈τ_o(S)⌉(2 − ε) + B < (τ_o(S) + 1)(2 − ε) + B ≤ (2 − ε) τ_o(S) + B + 2.

This proves Theorem 2.

To see that the constant factor 2 − ε is best possible, we consider the job set S consisting of (B−1)t + 1 jobs of length 1 and (B−2)((B−1)t + 1) + B − 1 zero-jobs, for a positive integer t (t must be large enough). It is then not hard to see that an optimal permutation for S consists of B − 2 jobs of size 0, followed by (a job of length 1 followed by B − 2 zero-jobs) repeated (B−1)t + 1 times, followed by a zero-job. We denote this by

0^{B−2} [1 0^{B−2}]^{(B−1)t+1} 0,

which has a finishing time τ_o(S) = (B−1)t + 1. On the other hand, the permutation

[0^{B−1} 1]^{(B−2)t+1} 1^t 0^{B−2}

has finishing time (2B−3)t + 1 ≤ τ_w(S). Hence,

τ_w(S) / τ_o(S) ≥ ((2B−3)t + 1) / ((B−1)t + 1) → 2 − 1/(B−1)

as t → ∞. This shows that the constant B + 2 in (3) is not too far from the truth. In fact, we believe that the theorem should be true when the constant is replaced by B, which, because of examples of the above type, would then be the best possible value for the constant.

4 NP-hardness of the general problem

In this section we will show that determining whether there is a perfect schedule for S (i.e., a schedule whose finishing time is equal to the sum of all the job lengths) is an NP-hard problem when the bound B is a variable parameter of the problem. We will polynomially reduce the NP-hard problem PARTITION (see [1]) to a special case of our scheduling problem. Recall that for PARTITION, we are given a set T = {t_1, t_2, ..., t_r} of positive integers with sum 2R, and we are asked to decide if there is a partition of T = T_1 ∪ T_2 such that each subset T_i has sum R.

Let us first choose an integer X slightly larger than Σ_{i=1}^{r} t_i. Now define a set of 2r jobs S = {S_1, S_2, ..., S_{2r}}, where S_i has length s_i = 10X for 1 ≤ i ≤ r, and s_{r+i} = 10(X + t_i) for 1 ≤ i ≤ r. Further, define a set E = {E_1, E_2, E_3, E_4} of four jobs, each having length 1.
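The two permutations in this example can be checked numerically for small B and t. The sketch below is not from the original paper; it again assumes that the greedy placement obeys the start-time recurrence start(T_k) = max(finish(T_{k−1}), finish(T_{k−B}) + 1):

```python
def finishing_time(lengths, B):
    # Greedy placement under the B-constraint (see Section 1).
    finish = []
    for k, t in enumerate(lengths):
        start = finish[-1] if finish else 0.0
        if k >= B:
            start = max(start, finish[k - B] + 1.0)
        finish.append(start + t)
    return finish[-1]

def example_ratio(B, t):
    """(worst, optimal) finishing times for the job set with
    (B-1)t+1 unit jobs and (B-2)((B-1)t+1)+B-1 zero-jobs."""
    good = [0] * (B - 2) + ([1] + [0] * (B - 2)) * ((B - 1) * t + 1) + [0]
    bad = ([0] * (B - 1) + [1]) * ((B - 2) * t + 1) + [1] * t + [0] * (B - 2)
    return finishing_time(bad, B), finishing_time(good, B)
```

For B = 3 and t = 1 this gives finishing times 4 and 3; as t grows, the ratio tends to (2B−3)/(B−1) = 2 − 1/(B−1).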
It follows from the choice of X that for any t, the sum of any t + 1 of the s_i's is greater than the sum of any t of the s_i's. In particular, any partition of {s_1, s_2, ..., s_{2r}} into two subsets with equal sums can only happen if the two subsets each have r elements. We are going to choose B = r + 2 and scale up our problem so that we will require that no interval of length L = (1/2) Σ_{i=1}^{2r} s_i = 10rX + 5 Σ_{i=1}^{r} t_i can intersect more than B jobs.

Suppose now that S ∪ E has a perfect schedule, say S_{i_1}, S_{i_2}, ..., S_{i_{2r}}, where the E_i are also intermixed in with the S_{i_k}. Assume without loss of generality that the indices of the E_i increase from 1 to 4 as we go from left to right. Since the schedule is perfect, the finishing time will be 2L + 4 = Σ_{i=1}^{2r} s_i + 4 = 20rX + 10 Σ_{i=1}^{r} t_i + 4. Define Σ_1 = Σ_{k=1}^{r} s_{i_k} and Σ_2 = Σ_{k=r+1}^{2r} s_{i_k}.

First, suppose that Σ_1 < Σ_2. Then, in fact, Σ_1 ≤ Σ_2 − 10. Since Σ_1 + Σ_2 = 2L, we see that Σ_1 ≤ L − 5. Consequently, if E_1 and E_2 came before S_{i_{r+1}}, then the sum of the lengths of the first r jobs together with the two (or more) E_i is still less than L, so that the interval [0, L) intersects more than B = r + 2 jobs (since it also intersects S_{i_{r+1}}). On the other hand, if E_2, E_3 and E_4 come after S_{i_{r+1}}, then it is easy to see that there is again an interval of length L which intersects more than B jobs (since the sum of the lengths of the last r − 1 jobs S_{i_k} together with these E_i is less than L, so that an interval of length L at the end of the schedule also intersects S_{i_{r+1}}). A similar argument applies if Σ_1 > Σ_2.

Thus, if there is to be a perfect schedule for S ∪ E, then we must have Σ_1 = Σ_2. But as we have seen, this implies that we have exactly r terms in each sum. This in turn implies that each sum has a contribution of 10rX in addition to the sum of various 10 t_i terms. Canceling out the 10rX terms from each side, we have deduced the existence of a valid solution to the original PARTITION problem. This completes the reduction of PARTITION to a special case of our scheduling problem.
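For concreteness, the construction above can be written out as follows. This is a sketch only; the function name and the specific choice X = Σ t_i + 1 (reading "slightly larger" literally) are mine:

```python
def reduction(ts):
    """Map a PARTITION instance (positive integers t_1..t_r) to the
    scheduling instance used in the reduction above."""
    r = len(ts)
    X = sum(ts) + 1                      # "slightly larger" than sum t_i
    S = [10 * X] * r + [10 * (X + t) for t in ts]
    E = [1, 1, 1, 1]                     # four jobs of length 1
    B = r + 2
    L = 10 * r * X + 5 * sum(ts)         # half of the total length of S
    return S + E, B, L
```

A perfect schedule of total length 2L + 4 then exists exactly when the t_i can be split into two halves of equal sum.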

5 Concluding remarks

It is interesting that our bounds are similar in spirit to some of the bounds in the very early list scheduling literature ([2], [3]). One would suspect that there are polynomial-time algorithms with which one can improve the bounds in our theorems. However, we have not looked at this problem yet. There are also a number of variations and generalizations which might be interesting to investigate. For example, what if there is more than one processor, and/or more than one type of constraint? An example would be a setting where not only can no unit interval intersect more than B jobs, but also no set of jobs that together use more than a certain amount of some resource (such as power) can be processed at the same time (in the multi-processor case). It would also be interesting to know if our problem remains NP-hard if B is fixed.

Acknowledgements The authors would like to thank David Johnson and the referees for helpful suggestions.

References

1. Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman.
2. Graham, R. L. (1966). Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45, 1563–1581.
3. Graham, R. L. (1969). Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics, 17, 416–429.
4. Leung, J. Y.-T. (Ed.) (2004). Handbook of Scheduling: Algorithms, Models, and Performance Analysis. Chapman and Hall/CRC.