Single processor scheduling with time restrictions

Single processor scheduling with time restrictions

Oliver Braun, Fan Chung, Ron Graham

Abstract. We consider the following scheduling problem¹. We are given a set S of jobs which are to be scheduled sequentially on a single processor. Each job has an associated processing time which is required for its processing. Given a particular permutation of the jobs in S, the jobs are processed in that order with each job started as soon as possible, subject only to the following constraint: For a fixed integer B ≥ 2, no unit time interval [x, x + 1) is allowed to intersect more than B jobs for any real x. In this paper we carry out a worst-case analysis for this situation. In particular, we show that any permutation of the jobs can be processed essentially within a factor of 2 − 1/(B−1) of the optimum when B ≥ 3, and this bound is best possible. For the case B = 2, the situation is rather different, and in this case the corresponding factor we establish is 4/3, which is also best possible. We also show that the general problem of deciding whether there is a permutation whose schedule has no idle time is NP-hard (which is typical of many scheduling problems). It is interesting that these worst-case bounds are similar to some earlier bounds that occur in the multiprocessor scheduling literature [7, 8].

1 Introduction.

We are initially given a set S = {S_1, S_2, ..., S_n} of jobs and an integer B ≥ 2. Each job S_i has associated with it a length s_i ≥ 0. The jobs are all processed sequentially on a single processor. A job S_i will be processed during a (semi-open) time interval [α, α + s_i) for some α ≥ 0. A job S_i with s_i = 0 is called a zero-job. Each zero-job is therefore processed at some particular point in time. Furthermore, it is allowed for several zero-jobs to be processed at the same point in time, provided the constraint (1) below is not violated.

Quantitative Methods for Business and Management, University of Applied Sciences Trier, Environmental Campus Birkenfeld, P. O. Box 18, D-55761 Birkenfeld, braun@fh-trier.de, DFG grant BR 7/7-1. University of California, San Diego; research supported in part by grants ONR MURI N000140810747 and FA9550-09-1-0900. University of California, San Diego; research supported in part by NSF 1005079.

¹ first suggested to us by Bill Bradley.

Given some permutation π of S, say π(S) = T = (T_1, T_2, ..., T_n) with corresponding job lengths (t_1, t_2, ..., t_n), the jobs are placed sequentially on the real line as follows. The initial job T_1 in the list T begins at time 0 and finishes at time t_1. In general, T_{i+1} begins as soon as T_i is completed, provided the following B-constraint is always observed. The constraint that differentiates our problem from many similar ones in the literature (e.g., see [9]) is this:

For every real x ≥ 0, the unit interval [x, x + 1) can intersect at most B jobs. (1)

Thus, if it happens that some job T_i finishes at time t and ∑_{j=1}^{B−1} t_{i+j} < 1, then the job T_{i+B} cannot start until time t + 1 (since if it started strictly before this time, then for a suitable δ > 0, the interval [t − δ, t + 1 − δ) would intersect more than B jobs).

The preceding procedure results in a unique placement (or schedule) of the jobs on the real line. We define the finishing time τ(T) to be the time at which the last job T_n is finished. A natural goal might be, for a given job set S, to find those permutations T = π(S) which minimize the finishing time τ(T). Since this will be shown (in Section 4) to be an NP-hard problem in general, our main focus will be on a worst-case analysis of the problem. That is, how much worse can an arbitrary schedule be compared to an optimal schedule for a given set S of jobs. Let us denote by τ_w(S) the largest possible finishing time for any permutation of S, and let τ_o(S) denote the optimal (i.e., the shortest possible) finishing time for any permutation of S. We are interested in seeing how much worse τ_w(S) can be compared to τ_o(S). In what follows, we will prove the following worst-case bounds:

Theorem 1 For B = 2 and any set S,

τ_w(S) ≤ (4/3) τ_o(S) + 1. (2)

The factor 4/3 is best possible.

Theorem 2 For B ≥ 3 and any set S,

τ_w(S) ≤ (2 − 1/(B−1)) τ_o(S) + 3. (3)

The factor 2 − 1/(B−1) is best possible.

It is interesting that for this problem, there is a significant difference between the cases B = 2 and B ≥ 3.
This often happens in algorithmic problems, for example, such as deciding whether the chromatic number of a graph is 2 (which is easy), or 3 (which is NP-hard).
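The greedy placement procedure described above is easy to simulate. The following Python sketch is our own illustration (not part of the paper); the function names are ours, and it uses the equivalent start rule derived from (1), namely that the job B positions after T_i cannot start before T_i's finishing time plus 1. The instance shown is the tight example for Theorem 1 discussed in Section 2.

```python
def finishing_times(lengths, B):
    """Greedy placement of jobs (given lengths, in order) on the real line.

    Each job starts when its predecessor finishes, except that, by the
    B-constraint (1), the job B positions after T_i cannot start before
    T_i's finishing time plus 1.  Zero-jobs occupy a single point."""
    finish = []
    for j, t in enumerate(lengths):
        start = finish[-1] if finish else 0
        if j >= B:  # a unit interval may intersect at most B jobs
            start = max(start, finish[j - B] + 1)
        finish.append(start + t)
    return finish

def tau(lengths, B):
    """Finishing time of the complete schedule."""
    return finishing_times(lengths, B)[-1]

# Tight instance for Theorem 1 (B = 2): 2t+1 jobs of length 1, 2t zero-jobs.
t = 3
worst = [1, 0] * (2 * t) + [1]                       # S1,Z1,S2,Z2,...,S_{2t+1}
best  = [0] + [1] * (2 * t + 1) + [0] * (2 * t - 1)  # Z1,S1,...,S_{2t+1},Z2,...
print(tau(worst, 2))   # 4t + 1 = 13
print(tau(best, 2))    # 3t = 9
```

Note that checking only the job B positions back suffices: if every such pair is at least 1 apart, then no B + 1 consecutive jobs can meet a common unit interval.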

Although this simple scheduling model has applicability to a number of application areas such as crew scheduling [1], flexible manufacturing systems [10], printed circuit board assembly [3], and cardinality constrained scheduling [2, 4], for example, this is the first time that a worst-case analysis of it has been carried out (e.g., see [9]).

2 The case B = 2.

We first make some preliminary remarks that will apply in the remainder of the paper. We are interested in upper-bounding the expression

Δ_B(S) = τ_w(S) − c_B τ_o(S)

where

c_B = 4/3 if B = 2, and c_B = 2 − 1/(B−1) if B ≥ 3,

and S = {S_1, S_2, ..., S_n} is some arbitrary set of jobs. Thus, we can assume that all the lengths s_i of the S_i satisfy s_i ≤ 1 for all i, since if any S_i had s_i = 1 + ɛ > 1, then by decreasing s_i to 1, we decrease both τ_w(S) and τ_o(S) by ɛ, thereby increasing Δ_B(S). We should also keep in mind that in a schedule, it is possible that a job S_i finishes at some time t, and then j zero-jobs are processed at this time t for some j ≤ B − 2 (which takes time 0), and then another job S_k immediately begins its processing at time t.

Our goal in this section will be to prove (2). Given a set of jobs S, let T = (T_1, T_2, ..., T_n) be a permutation of S which maximizes the finishing time, i.e., τ(T) = τ_w(S). In the schedule for T, let s(T_i) and f(T_i) denote the starting times and finishing times, respectively, of T_i (so that f(T_i) − s(T_i) = t_i = |T_i|). In particular, because of the constraint (1), we must have s(T_3) − f(T_1) = 1. Thus, we can replace T_2 by a zero-job T_2' at time f(T_1) without affecting the overall finishing time τ(T) = τ_w(S) of the schedule for T. (Of course, T_2 may already have been a zero-job.) Such a replacement could only decrease τ_o(S), and so could only increase Δ(S). Now, we can apply the same argument to T_4 (if there is a job T_5). In other words, because of (1), s(T_5) − f(T_3) = 1, and consequently, replacing T_4 by a zero-job T_4' at time f(T_3) does not change τ(T), and is still a valid schedule (i.e., there is no conflict with T_2').

Repeating this process, we end up with a (modified) schedule in which all the even labelled jobs are zero-jobs, except possibly at the very end, at which we might have two consecutive non-zero jobs. In particular, there are (n−1)/2 unit distances between the odd labelled jobs in the schedule for T. Thus, we can assume that our starting set S has at least (n−1)/2 zero-jobs. In particular, we

have

τ_w(S) = τ(T) = ∑_{i=1}^n s_i + (n−1)/2. (4)

Now let us examine the schedule for the optimal permutation for S, which we will assume (by relabeling if necessary) to be (S_1, S_2, ..., S_n). Since B = 2, then by (1) we have:

τ_o(S) ≥ s_1 + 1 + s_3 + 1 + ⋯ + s_{2k−1} + 1 + ⋯, (5)

and also

τ_o(S) ≥ s_1 + s_2 + 1 + s_4 + 1 + ⋯ + s_{2k} + 1 + ⋯. (6)

Consequently,

2 τ_o(S) ≥ s_1 + s_n + ∑_{i=1}^n s_i + (n − 2), (7)

which implies

τ_o(S) ≥ (1/2) (∑_{i=1}^n s_i + (n − 2)). (8)

Thus,

Δ(S) = τ_w(S) − (4/3) τ_o(S) ≤ ∑_{i=1}^n s_i + (n−1)/2 − (2/3) (∑_{i=1}^n s_i + (n − 2)) = (1/3) ∑_{i=1}^n s_i + (n−1)/2 − (2/3)(n − 2). (9)

However,

∑_{i=1}^n s_i ≤ (n+1)/2,

since S has at least (n−1)/2 zero-jobs, and all the other jobs have length at most 1. Substituting this inequality into (9), we get

Δ(S) ≤ (1/3)·(n+1)/2 + (n−1)/2 − (2/3)(n − 2) = (n+1)/6 + (3n−3)/6 − (4n−8)/6 = 1.

This proves (2). To see that this bound is best possible, consider the set S consisting of 2t+1 jobs S_i of length 1 and 2t zero-jobs Z_j, for some positive integer t. The (optimal) permutation (Z_1, S_1, S_2, ..., S_{2t+1}, Z_2, Z_3, ..., Z_{2t}) has a finishing time of τ_o(S) = 3t. On the other hand, the (worst) permutation (S_1, Z_1, S_2, Z_2, ..., S_{2t}, Z_{2t}, S_{2t+1}) has a finishing time of τ_w(S) = 4t + 1. Thus,

τ_w(S) − (4/3) τ_o(S) = 4t + 1 − (4/3)·3t = 1,

achieving equality in (2).

3 The case B ≥ 3.

Our next goal will be to establish (3). As before, we will assume that the permutation (S_1, S_2, ..., S_n) is the optimal permutation for the set of jobs S = {S_1, S_2, ..., S_n}, so that this permutation has finishing time τ(S) = τ_o(S). We now want to create an auxiliary job set T = {T_1, T_2, ..., T_n} as follows. The size t_i = |T_i| will be defined by:

t_i = 1/(B−1) if s_i ≤ 1/(B−1), and t_i = s_i if s_i > 1/(B−1).

Observe that for any permutation T' of T,

τ(T') = τ(T) = ∑_{i=1}^n t_i ≥ τ_w(S). (10)

We next examine the schedule for S. Let us replace each job S_i which has s_i ≤ 1/(B−1) by a zero-job S_i' placed on the time axis at the starting time of S_i. This certainly causes no violations of the B-constraint (1). We will now assign a weight of size 1/(B−1) to each zero-job in this modified job set S'. Thus, the schedule for S' consists of the original large jobs S_i of size s_i > 1/(B−1) and a number of point masses of weight 1/(B−1), all placed so that condition (1) still holds, i.e., no unit interval intersects more than B of these jobs. The sum of all the weights of the jobs in S' is just equal to the sum of all the lengths of the jobs in T, which by (10) is an upper bound on τ_w(S). Also, all of the jobs in S' fit in the interval [0, τ(S)).
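Relation (10) is easy to check numerically: once every length is raised to at least 1/(B−1), any B − 1 consecutive jobs span at least 1, so the B-constraint never forces idle time and every permutation of T finishes exactly at ∑ t_i. The following Python check is our own illustration (not from the paper), using exact rational arithmetic and a small made-up instance:

```python
from fractions import Fraction
from itertools import permutations

def tau(lengths, B):
    # Greedy placement under the B-constraint (1): job j cannot start
    # before the finishing time of job j-B plus 1.
    finish = []
    for j, t in enumerate(lengths):
        start = finish[-1] if finish else Fraction(0)
        if j >= B:
            start = max(start, finish[j - B] + 1)
        finish.append(start + t)
    return finish[-1]

B = 3
theta = Fraction(1, B - 1)                    # = 1/2
s = [Fraction(1, 2), Fraction(1, 4), Fraction(0), Fraction(3, 4)]
t = [max(si, theta) for si in s]              # auxiliary job set T

for perm in permutations(range(len(s))):
    # every permutation of T finishes exactly at sum(t) ...
    assert tau([t[i] for i in perm], B) == sum(t)
    # ... and dominates the corresponding schedule of S
    assert tau([s[i] for i in perm], B) <= sum(t)
print(sum(t))                                 # 9/4, an upper bound on tau_w(S)
```

The second assertion holds because enlarging job lengths can only delay every later start, so τ_w(S) ≤ ∑ t_i, as (10) states.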
So we have reduced our problem to that of finding an upper bound W on the total weight of an arbitrary assignment of (semi-open) intervals of length s_i > 1/(B−1) and point masses of weight 1/(B−1) so that condition (1) is satisfied and, of course, so that positive length intervals are disjoint, and point masses can only intersect a positive interval at its starting point, in other words, so

that this distribution could be a schedule generated by our process (if we allowed delays before jobs had to be started). In particular, we can conclude that τ_w(S) ≤ τ(T) ≤ W.

Let us now define the blocks U_i = [i, i+1) for 0 ≤ i ≤ N − 1, where N = ⌈τ(S)⌉. Also, for each U_i we define its B − 1 subblocks U_{i,j} = [i + (j−1)/(B−1), i + j/(B−1)) for 1 ≤ j ≤ B − 1. We want to study how the weight from the jobs in S' is distributed in the U_i. Define the content c(U_i) to be the sum of all the weight (or "mass") in U_i. In other words, we add up all the contributions of the point masses in U_i together with all the portions of those S_k that happen to lie within U_i. We will use the abbreviation θ := 1/(B−1), since this quantity will occur so frequently in what follows. Our next task will be to analyze various possibilities for the U_i. Let us say that the interval U_i is bad if c(U_i) > 2 − θ. Otherwise, we will say that U_i is good.

First, suppose that U_i contains B zero-jobs, each contributing θ to c(U_i). Then U_i cannot intersect any other jobs in S' and we have c(U_i) = Bθ ≤ 2 − θ when B ≥ 3, so that U_i is good in this case. Next, suppose that U_i contains j zero-jobs for some j with 0 ≤ j ≤ B − 2. Then

c(U_i) ≤ jθ + 1 ≤ (B − 2)θ + 1 = 2 − θ

and, again, U_i is good.

Hence, any bad U_i must contain exactly B − 1 zero-jobs. In addition, it must also intersect exactly one other job in S' in an interval I_i (called its S-interval) with weight (= length) greater than 1 − θ, in order that its total mass (i.e., content) is greater than 2 − θ.

The plan is now to show that if we add up the contents of all of the U_i, the sum will be at most (2 − θ)N + Bθ. We will do this by keeping a running total of the mass accumulated as we proceed from the beginning block U_0 and move to U_1, U_2, etc., until we reach the final block U_{N−1}. We would like to show that the average mass in the blocks is at most 2 − θ (plus a small increment). While any good block has mass (= content) at most 2 − θ, every now and then we may have a bad block, which could have content as much as 2.
Our goal is to show that having too many bad blocks will always force there to be a compensating number of very good blocks, i.e., blocks whose content is a lot less than 2 − θ, and this will keep the accumulated average mass close to 2 − θ. First, observe that for any bad U_i, since its S-interval I_i has length exceeding 1 − θ, it must intersect all of the B − 1 subblocks U_{i,j} of U_i. Hence, each of the B − 1 zero-jobs in U_i must lie in either the first or last subblocks, i.e.,

in U_{i,1} ∪ U_{i,B−1}. Let us say that a block U_i has rank r if it intersects exactly r jobs of S' in its last subblock U_{i,B−1}. We will denote this by writing ρ(U_i) = r.

Let us now examine the relationship between two consecutive blocks U_i and U_{i+1} which have ρ(U_i) = s and ρ(U_{i+1}) = t, respectively, where we assume that s > t. Since ρ(U_i) = s, then because of condition (1), the number of jobs which can intersect any of the first B − 2 subblocks of U_{i+1} is at most B − s. However, since ρ(U_{i+1}) = t, then it is not hard to see that the content c(U_{i+1}) of U_{i+1} satisfies

c(U_{i+1}) ≤ (B − s − 1)θ + (t − 1)θ + 1 = 2 − θ − (s − t)θ. (11)

(This can be seen by considering the possible contributions C of the nonzero jobs to the content of U_{i+1}. If C = 0, then c(U_{i+1}) ≤ (B − s + t)θ ≤ 2 − θ − (s − t)θ for B ≥ 3. On the other hand, if 0 < C ≤ 1 − θ, then c(U_{i+1}) ≤ 1 − θ + (B − s − 1)θ + tθ. Finally, if C > 1 − θ, then the last subblock of U_{i+1} can contain at most t − 1 zero-jobs, so that c(U_{i+1}) ≤ 1 + (B − s − 1)θ + (t − 1)θ, as claimed.)

In other words, if the ranks of two consecutive blocks drop by s − t > 0, then the second block has content at least (s − t)θ below the target value 2 − θ. On the other hand, if s < t, then the content of U_{i+1} might well be 2 (which is θ above the desired average value of 2 − θ).

Let us examine the situation for a bad block U_i, i > 0, with rank s (so, in particular, s ≥ 1). Thus, there are s − 1 point masses in the last subblock of U_i (since its S-interval I_i intersects all the subblocks of U_i). Consequently, the first subblock of U_i must have B − s point masses in it (since U_i is bad), and so, intersects B − s + 1 jobs of S'. This means that the preceding block U_{i−1} can have rank at most s − 1.

We now consider the sequence of ranks ρ(U_i), i = 0, 1, 2, .... We will also consider the corresponding loss or gain with respect to our average value 2 − θ that we make as we encounter each new block. As we have seen, if the rank in going from one block to the next one decreases by m, then the content of the second block is at least mθ below the value 2 − θ.
On the other hand, if it increases, then the content of the second block might be as much as θ above the value 2 − θ. If the rank doesn't change, then we can't conclude that the content of the second block is deficient, i.e., below 2 − θ. However, suppose that we have a sequence of blocks U_j, U_{j+1}, ..., U_k, where the only bad blocks in the sequence are the first and last blocks, and both blocks have the same rank s. Then by the preceding remark, the block U_{k−1} preceding U_k has rank at most s − 1. Hence, in the sequence of ranks ρ(U_j), ρ(U_{j+1}), ..., ρ(U_k), there will be a pair of consecutive blocks for which the rank drops by at least 1. Putting all these facts together, we can conclude that the total gain over the average value 2 − θ as we sum up the cumulative content of the complete sequence of blocks is at most Bθ. That is, the total weight ∑_{i=0}^{N−1} c(U_i) of the blocks of

S' satisfies

τ_w(S) ≤ τ(T) = ∑_{i=1}^n t_i = ∑_{i=0}^{N−1} c(U_i) ≤ N(2 − θ) + Bθ ≤ ⌈τ_o(S)⌉(2 − θ) + Bθ < (τ_o(S) + 1)(2 − θ) + Bθ = (2 − 1/(B−1)) τ_o(S) + 3.

This proves Theorem 2.

To see that the constant factor 2 − 1/(B−1) is best possible, we consider the job set S consisting of (B−1)t jobs of length 1 and (B−1)(B−2)t zero-jobs for an arbitrary positive integer t. It is then not hard to see that the optimal permutation for S is a job of size 1 followed by B − 2 zero-jobs, which is repeated (B−1)t times. We denote this by

1 0^{B−2} 1 0^{B−2} ... 1 0^{B−2} = [1 0^{B−2}]^{(B−1)t}

which has a finishing time τ_o(S) = (B−1)t. On the other hand, the permutation

[1 0^{B−1}]^{(B−2)t} 1^t

has finishing time 2(B−2)t + t = (2B−3)t ≤ τ_w(S). Hence,

τ_w(S) − (2 − 1/(B−1)) τ_o(S) ≥ (2B−3)t − (B−1)t (2 − 1/(B−1)) = 0.

This shows that the constant 3 in (3) is not too far from the truth. The above example can be tweaked (by adding B − 1 zero-jobs) to get the difference to be 1, but we know that it cannot be as large as 3.

4 NP-hardness of the general problem

In this section we will show that determining whether there is a schedule for S which has no gaps (i.e., so that τ(S) is equal to the sum of all the job times) is an NP-hard problem, when the bound B is a variable parameter of the problem. We will consider the following list S = (S_1, S_2, ..., S_{2B−4}, E_1, E_2, E_3, E_4) where s_i = |S_i| = 10 a_i for some positive integer a_i, and all |E_i| = 1. Furthermore, we assume that the sum of the t largest s_i's is less than the sum of the t + 1 smallest s_i's for all t < B − 2, and that ∑_{i=1}^{2B−4} s_i = 2T. Finally, we will require that no interval of length T be allowed to intersect more than B jobs. This is a scaled-up version of the previous problem in which no unit interval was allowed to intersect more than B jobs.

Let us say that a schedule for S is perfect if τ(S) = 2T + 4. In this case, there are no gaps in the schedule.

Theorem 3. A perfect schedule for S exists if and only if there is a partition of S into two parts, each of size B − 2, and each having a sum equal to T.

Proof. First, suppose S has a perfect schedule, where the S_i have been relabelled so that the order of the jobs in this schedule from left to right is S_1, S_2, S_3, ..., S_{2B−4}, with the four E_j's intermixed within these jobs (but with increasing indices from left to right). As usual, we will think of the S_i and E_i as represented by half-open intervals [ , ) of lengths s_i and 1, respectively. We assume that the first (= left-most) job (which might be S_1 or E_1) starts at time 0. Let T_1 = s_1 + s_2 + ... + s_{B−2} and let T_2 = s_{B−1} + s_B + ... + s_{2B−4}. Suppose T_1 ≠ T_2. Without loss of generality, we can assume that T_1 < T_2. Thus, we have

T_1 ≤ T_2 − 10. (12)

This implies that at most one of the E_j's is to the left of S_{B−1} (since otherwise, there would be an interval of length T intersecting more than B jobs). Thus, by our choice of labelling of the E_j's, we must have E_2, E_3 and E_4 all occurring to the right of S_{B−1}. Now, since T_2 = ∑_{i=B−1}^{2B−4} s_i, then

T_2 ≥ (sum of the B − 2 smallest s_i) > (sum of the B − 3 largest s_i) ≥ ∑_{i=B}^{2B−4} s_i.

Hence,

T_2 ≥ ∑_{i=B}^{2B−4} s_i + 10,

since 2T and all the s_i are multiples of 10. Consequently, the interval I = [c − T, c) (where c is the finishing time of the last job in the schedule, which is either S_{2B−4} or E_4) must hit S_{B−1} and therefore also E_2, E_3 and E_4. However, this means that I hits more than B jobs, which is impossible. Thus, if the schedule is perfect then we must have T_1 = T_2. On the other hand, if there is a partition so that T_1 = T_2, then we can construct the valid schedule E_1, E_2, S_1, S_2, ..., S_{2B−4}, E_3, E_4.

Finally, if we have an arbitrary input set of positive integers {m_1, m_2, ..., m_r}, then setting M = ∑_{i=1}^r m_i and m_i' = m_i + M for all i, we see that any partition of the m_i' into two equal subset sums must have r/2 summands in each sum.
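The padding step at the end of the proof is easy to check directly: adding M = ∑ m_i to every element forces any two equal-sum parts to contain equally many summands. The following small Python check is our own illustration (not from the paper), on an arbitrary made-up instance:

```python
from itertools import combinations

def equal_sum_partitions(values):
    """Yield index sets A whose sum equals the sum of the complement."""
    total = sum(values)
    for k in range(len(values) + 1):
        for comb in combinations(range(len(values)), k):
            if 2 * sum(values[i] for i in comb) == total:
                yield set(comb)

m = [3, 1, 4, 2, 9, 5]     # arbitrary small instance, r = 6
M = sum(m)                 # padding value, M = 24
padded = [x + M for x in m]

# The unpadded instance admits an unbalanced equal-sum split
# ({3, 9} vs {1, 4, 2, 5}), but after padding every equal-sum split
# uses exactly r/2 = 3 summands on each side.
assert any(len(p) != len(m) // 2 for p in equal_sum_partitions(m))
parts = list(equal_sum_partitions(padded))
assert parts and all(len(p) == len(m) // 2 for p in parts)
print(len(parts))          # 2 (one split and its complement)
```

The reason is that if one side had k summands, the padded sums differ by (2k − r)M plus a quantity of absolute value less than M, so equality forces k = r/2.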
Hence, from the well-known NP-hardness of the problem of partitioning

the m_i into two equal subset sums (see [6]), it follows that our restricted problem requiring the two subsets to have the same cardinality is also NP-hard.

5 Concluding remarks.

One would suspect that there are polynomial-time algorithms with which one can improve the bounds in the theorems. However, we have not looked at this problem yet. There are also a number of variations and generalizations which might be interesting to investigate. For example, what if you have more than one processor, and/or more than one type of constraint? An example would be jobs for which not only can no unit interval intersect more than B jobs, but also no set of jobs that together use more than a certain amount of some resource (such as power) can be processed at the same time (in the multi-processor case). It would also be interesting to know if our problem remains NP-hard if B is fixed.

References

[1] C. Barnhart, E. Johnson, G. Nemhauser and P. Vance, Crew scheduling, in Hall R., ed., Handbook of Transportation Science, Kluwer, 49-51, (1999).

[2] J. Blazewicz, W. Cellary, R. Slowinski and J. Weglarz, Scheduling under Resource Constraints: Deterministic Models, J.C. Baltzer, Basel, (1987).

[3] Y. Crama, J. van de Klundert and F. C. R. Spieksma, Production planning problems in printed circuit board assembly, Discrete Applied Mathematics, 123, 339-361, (2002).

[4] M. Dell'Amico and S. Martello, Bounds for the cardinality constrained P||Cmax problem, Journal of Scheduling, 4, 123-138, (2001).

[5] T. Dereli and A. Baykasoglu, Concurrent engineering utilities for controlling interactions in process planning, Journal of Intelligent Manufacturing, 15, 471-479, (2004).

[6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, New York, (1979).

[7] R. L. Graham, Bounds on certain multiprocessing anomalies, Bell Sys. Tech. Jour. 45, (1966), 1563-1581.

[8] R. L. Graham, Bounds on multiprocessing timing anomalies, SIAM Jour. Appl. Math. 17, (1969), 416-429.

[9] Joseph Y.-T. Leung, ed., Handbook of Scheduling, Chapman and Hall, (2004).

[10] K. E. Stecke, Formulation and solution of nonlinear integer production planning problems for flexible manufacturing systems, Management Science, 29, 273-288, (1983).