Scheduling problems in master-slave model


Ann Oper Res (2008) 159: 215-231
DOI 10.1007/s10479-007-0271-4

Joseph Y.-T. Leung · Hairong Zhao

Published online: 1 December 2007
Springer Science+Business Media, LLC 2007

Abstract  We consider scheduling problems in the master-slave model, which was introduced by Sahni in 1996. The goal is to minimize the makespan and the total completion time. It has been shown that the problem of minimizing the makespan is NP-hard. Sahni and Vairaktarakis developed approximation algorithms that generate schedules whose makespan is at most a constant times the optimal. In this paper, we show that the problem of minimizing the total completion time is NP-hard in the strong sense. We then develop algorithms that generate schedules whose total completion time and makespan are both bounded by constants times their optimal values.

Keywords  Total completion time · Makespan · Approximation algorithms · NP-hard · Master-slave model

Research supported in part by the National Science Foundation through grant DMI-0300156.

J.Y.-T. Leung, Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA. E-mail: leung@oak.njit.edu

H. Zhao, Department of Mathematics, Computer Science, and Statistics, Purdue University Calumet, 2200 169th Street, Hammond, IN 46323-2094, USA. E-mail: hairong@calumet.purdue.edu

1 Master-slave model and its applications

1.1 Master-slave model

The master-slave model was introduced by Sahni in 1996. In this model, each job has to be processed sequentially in three stages. In the first stage, the preprocessing task runs on a master machine; in the second stage, the slave task runs on a dedicated slave machine; and in the last stage, the postprocessing task again runs on a master machine, possibly different from the master machine of the first stage.

The preprocessing, slave and postprocessing tasks of job i, and their lengths, are denoted by a_i, b_i and c_i, respectively. It is assumed that a_i > 0, b_i > 0 and c_i > 0. A job may have a release time r_i ≥ 0, i.e., a_i cannot start before r_i. Without loss of generality, we may assume that min r_j = 0. Unless stated otherwise, all jobs are assumed to have the same release time. When arbitrary release times are present, there are two cases. The first case deals with offline problems, i.e., the release times and processing times of all jobs are known in advance. The second case deals with online problems, i.e., no information about job i is given until it arrives at r_i, and when it arrives, all parameters of job i become known. We use the quadruple (r_i, a_i, b_i, c_i) to denote job i. For simplicity, if r_i = 0, we use the triplet (a_i, b_i, c_i) to represent job i.

Each machine is either a master machine or a slave machine. The master machines are used to run preprocessing and/or postprocessing tasks, and the slave machines are used to run slave tasks, one slave machine for each slave task. In a single-master system, there is a single master that executes all preprocessing tasks (a tasks) and postprocessing tasks (c tasks). In a multi-master system, there is more than one master, each of which is capable of processing both a tasks and c tasks. Finally, in some systems there are distinct preprocessing masters (preprocessors) and postprocessing masters (postprocessors), which are dedicated to processing a tasks and c tasks, respectively.

The master-slave model is closely related to the flow shop model. A system with a single preprocessor and a single postprocessor can be seen as a two-machine flow shop with transfer lags. In this flow shop model, each job j has two operations: the first operation is scheduled on the upstream machine and the second operation is scheduled on the downstream machine. The interval, or time lag, between the finish time of the first operation and the start time of the second operation must be exactly, or at least, l_j. If the l_j's are large enough that all of the first operations finish before the start of any second operation, then the flow shop problem is equivalent to scheduling on a single machine with time lags and two tasks per job, subject to the constraint that all of the first operations are scheduled first. The latter problem is a special case of the single-master master-slave scheduling model.

When there is more than one preprocessing and postprocessing master, the master-slave model can be seen as a two-stage hybrid flow shop with transfer lags. In this sense, the single-master case can be regarded as a three-stage hybrid flow shop where the first and the last stage each have a single machine and the second stage has n machines. Hybrid flow shops are often found in electronic manufacturing environments such as IC packaging and make-to-stock wafer manufacturing. In recent years, the hybrid flow shop has received significant attention; see Buten and Shen (1973), Langston (1987), Sriskandarajah and Sethi (1989), Cheng and Sin (1990), Lee and Vairaktarakis (1994), Gupta and Tunc (1994), Guinet and Solomon (1996) and Allaoui and Artiba (2006).

1.2 Applications of the master-slave model

The master-slave model finds many applications in parallel computer scheduling and in industrial settings such as semiconductor testing, machine scheduling, and transportation maintenance. Some of them are listed in the following.
For more applications, see Sahni (1996), Sahni and Vairaktarakis (1996), Sahni and Vairaktarakis (2004) and Vairaktarakis (1997).

Industrial applications of the master-slave paradigm include the case of consolidators that receive orders to manufacture quantities of various items. The actual manufacturing is done by a collection of slave agencies. In this example, the consolidator is the master machine and the slave agencies are the slave machines.

The consolidator needs to assemble the raw material needed for each task, load the trucks that will deliver this material to the slave machines, and perform an inspection before the consignment leaves. All of this work belongs to the preprocessing task. The slave machines need to wait for the arrival of the raw material, inspect the received goods, perform the manufacturing, load the goods onto the trucks for delivery, and perform an inspection as the trucks are leaving. These activities, together with the delay involved in getting the trucks to their destination (i.e., the consolidator), represent the slave work. When the finished goods arrive at the consolidator, they are inspected and inventoried. This represents the postprocessing.

Several applications of the master-slave model are found in parallel computer scheduling. A common parallel programming paradigm involves a main computational thread whose function is to prepare data and then fork and initiate new child threads that do the computations on different processors. After the computation of a child thread, the main thread collects the computation results and performs some processing on them. Here, each child thread can be seen as a job with three tasks: the thread initiation and data preparation is the preprocessing task, the computation is the slave task, and the postprocessing of the results of the computation is the postprocessing task.

It is easy to see that both of the above examples generalize to multi-master systems or to systems with distinct preprocessing and postprocessing masters.

2 Scheduling problems in the master-slave model

2.1 Definitions and notations

Given a set of jobs in the master-slave system, a non-preemptive schedule is one that schedules each task without interruption. Note that in such a schedule, it is still possible that there is an interval between the finish time of a_i and the start time of b_i, or between the finish time of b_i and the start time of c_i. However, without loss of generality, one can always assume that b_i is scheduled immediately after a_i completes. In a preemptive schedule, a job running on one machine may be interrupted for some time and later resumed, possibly on a different machine.

A non-preemptive schedule S is order preserving if for any two jobs i and j such that a_i completes before a_j, c_i also completes before c_j. A no-wait-in schedule is one in which each slave task is scheduled immediately after the corresponding preprocessing task finishes and each postprocessing task is scheduled immediately after the corresponding slave task finishes. In other words, once a job starts, it does not stop until it finishes. It is easy to see that a no-wait-in schedule must be non-preemptive. A canonical schedule on a single-master system is one in which all the preprocessing tasks complete before any postprocessing task starts. (Note that this definition of canonical schedule is slightly different from the one given in Sahni 1996.) In a multi-master system, a canonical schedule is one that is canonical on each master.
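
To make these schedule classes concrete, here is a minimal Python sketch (ours, not from the paper; the class and function names are illustrative assumptions) that checks whether a given non-preemptive single-master schedule is no-wait-in, canonical, and/or order preserving. It assumes the schedule is supplied as explicit start times for the three tasks of each job and that basic feasibility (release times, task precedence, no overlap on the master) has been verified separately.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Job:
    r: float  # release time r_i
    a: float  # preprocessing length a_i
    b: float  # slave length b_i
    c: float  # postprocessing length c_i

@dataclass
class Placement:
    start_a: float
    start_b: float
    start_c: float

def classify(jobs: List[Job], plan: List[Placement], eps: float = 1e-9) -> Dict[str, bool]:
    """Classify a feasible non-preemptive single-master schedule (Sect. 2.1)."""
    n = len(jobs)
    finish_a = [p.start_a + j.a for j, p in zip(jobs, plan)]
    finish_b = [p.start_b + j.b for j, p in zip(jobs, plan)]
    finish_c = [p.start_c + j.c for j, p in zip(jobs, plan)]
    # no-wait-in: b_i starts right after a_i, and c_i right after b_i
    no_wait_in = all(abs(plan[i].start_b - finish_a[i]) <= eps and
                     abs(plan[i].start_c - finish_b[i]) <= eps for i in range(n))
    # canonical: every a task completes before any c task starts
    canonical = max(finish_a) <= min(p.start_c for p in plan) + eps
    # order preserving: if a_i completes before a_j, then c_i completes before c_j
    order_preserving = all(finish_c[i] <= finish_c[j] + eps
                           for i in range(n) for j in range(n)
                           if finish_a[i] < finish_a[j] - eps)
    return {"no-wait-in": no_wait_in, "canonical": canonical,
            "order preserving": order_preserving}
```
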
The completion (or finish) time of job i in a schedule S is the time when the postprocessing task c_i finishes. The completion time of i in S is denoted by C_i(S); if S is clear from the context, C_i is used instead of C_i(S). The makespan of S is the earliest time by which all the tasks have been completed; it is denoted by C_max(S), or C_max if S is clear from the context. The total completion time of S, denoted by ΣC(S), is the sum of the completion times of all n jobs, i.e., ΣC(S) = Σ_{j=1}^{n} C_j(S). Makespan and total completion time are two common objectives to minimize. The problems of finding a schedule that minimizes the makespan and the total completion time are referred to as the makespan (C_max) problem and the total completion time (ΣC_j) problem, respectively.

Corresponding to various constraints, we also have the order preserving makespan (or total completion time) problem, the no-wait-in makespan (or total completion time) problem, the canonical total completion time problem, etc. In all cases, a schedule that minimizes C_max or ΣC_j is usually denoted by S*, and the minimum makespan and the minimum total completion time are denoted by C*_max and ΣC*, respectively.

In many cases, the problem of minimizing the makespan or the total completion time is NP-hard, i.e., unless P = NP there is no polynomial time algorithm for these problems. One therefore turns to approximation algorithms. An α-approximation algorithm for makespan (or total completion time) is an algorithm that, for any set of jobs, generates a schedule S whose makespan (or total completion time) is at most α times the optimal makespan (or total completion time). It is an (α, β)-approximation algorithm if it is an α-approximation algorithm for makespan and at the same time a β-approximation algorithm for total completion time. For a schedule S, if C_max(S) ≤ α·C*_max and ΣC(S) ≤ β·ΣC*, then S is said to be an (α, β)-schedule.

It is easy to see that if all jobs have the same release time, one can always rearrange a schedule to be canonical without increasing the makespan. Thus, in order to minimize the makespan, we only need to consider canonical schedules. However, this is not true if we want to minimize ΣC_j. In fact, the ratio of the total completion time of the best canonical schedule to that of the best non-canonical schedule can be arbitrarily large. Consider the example of (n−1) identical jobs (1, ε, 1) and one job (n², ε, 1), where ε is an arbitrarily small positive number. The optimal canonical schedule has total completion time O(n³), while the optimal non-canonical schedule has total completion time O(n²).

2.2 Previous work

Kern and Nawijn (1991) showed that the makespan problem is NP-hard in the ordinary sense. Sahni (1996) showed that both the no-wait-in makespan problem and the order preserving no-wait-in makespan problem are NP-hard in the ordinary sense. He gave an O(n log n) algorithm that solves the order preserving makespan problem. For the general problem on single-master systems, Sahni and Vairaktarakis (1996) developed an approximation algorithm with a worst-case bound of 3/2. For multi-master systems, they gave approximation algorithms with worst-case bounds of 2. Further algorithms were given by Vairaktarakis (1997) for the case of m_1 preprocessors and m_2 postprocessors. Let m = max{m_1, m_2}. He gave approximation algorithms with a worst-case bound of 2 − 1/m for the makespan problem with no constraint, or with the order preserving constraint.

2.3 Organization of the paper

We first present our new complexity results for the total completion time problem in Sect. 3. We show that the total completion time problem, with or without constraints, is NP-hard in the strong sense. We then consider a special case of the problem in Sect. 4. In this section, we assume that (1) there is a single master, (2) for all i, 1 ≤ i ≤ n, r_i = 0, a_i = a and c_i = c, where a and c are constants, i.e., the jobs differ from each other only in their slave tasks, (3) no preemption is allowed, and (4) only canonical schedules are considered. Our result is that if a ≤ c and we are restricted to canonical and order preserving schedules, then in O(n log n) time we can find an optimal schedule that minimizes the total completion time and the makespan at the same time.
Then we develop an approximation algorithm that generates schedules which not only approximate the minimum total completion time very well, but also provide a constant-factor approximation of the minimum makespan. In Sects. 5 and 6 we consider general cases of the total completion time and makespan problems.

In Sect. 5, we develop efficient approximation algorithms that generate preemptive schedules which approximate both the total completion time and the makespan within constant bounds in various settings. These are the first general results for these problems in the master-slave model. Then, in Sect. 6, we show that one can convert those preemptive schedules into non-preemptive schedules with a slight degradation of the approximation ratios. Finally, we end the paper with some conclusions in Sect. 7.

3 Complexity of the total completion time problem

Yu et al. (2004) showed that the problem F2 | l_j, p_ij = 1 | C_max is strongly NP-hard. In fact, they showed that the problem remains strongly NP-hard even under the exact-delay constraint. We can adapt their proof to show that the problem of minimizing the total completion time (and the makespan as well), with or without constraints, in the single-master master-slave model is strongly NP-hard.

Theorem 3.1 The problem of minimizing total completion time is strongly NP-hard, even if preemption is allowed and a_i = c_i = 1 for 1 ≤ i ≤ n. Furthermore, it remains strongly NP-hard even if we are restricted to canonical schedules, or no-wait-in schedules, or both canonical and no-wait-in schedules.

As it turns out, the proof of the above theorem does not apply to order preserving scheduling problems. However, by reducing from the 3-partition problem, we can show that the total completion time problem with the no-wait-in and order preserving constraints is NP-hard in the strong sense. Because of space limits, we only state the result; for the detailed proof, please refer to Leung and Zhao (2005).

Theorem 3.2 The problem of minimizing the order preserving and no-wait-in total completion time is strongly NP-hard even if a_i = c_i for 1 ≤ i ≤ n.

4 Optimal and approximation algorithms: Special cases

The result of Kern and Nawijn (1991) and Theorem 3.1 tell us that unless P = NP, there is no hope of finding an optimal schedule for the total completion time problem or the makespan problem in general. In this section, we consider special cases of these problems. We assume that (1) there is a single master, (2) for all jobs i, 1 ≤ i ≤ n, we have r_i = 0, a_i = a and c_i = c, where a and c are constants, i.e., the jobs differ from each other only in their slave tasks, (3) no preemption is allowed, and (4) only canonical schedules are considered. As our complexity results show, even in this very special case, the makespan and the total completion time problems are still NP-hard.

For this special case, we first show that if a ≤ c and we are restricted to order preserving schedules, then in O(n log n) time one can find an optimal schedule that minimizes both the total completion time and the makespan. Then we show that the canonical schedule that sequences jobs in non-decreasing order of the slave tasks has total completion time only slightly larger than the minimum total completion time, and has makespan at most a constant times the minimum makespan.

Fig. 1 Illustration of the proof of Theorem 4.1, t_{i+1} ≥ t_i ≥ b_i ≥ b_{i+1} (shaded area represents idle time)

4.1 Optimal canonical and order preserving schedules

While the total completion time problem and the makespan problem are strongly NP-hard in general, there is a special case that admits a polynomial time solution.

Theorem 4.1 For the special case of a ≤ c, one can find a schedule that minimizes both the total completion time and the makespan among all canonical and order preserving schedules in O(n log n) time.

Proof Let S be the canonical and order preserving schedule that schedules jobs in non-decreasing order of the b_i's. Let S* be an optimal canonical and order preserving schedule with respect to the total completion time. Suppose S* is not the same as S. We show that we can convert S* into S without increasing the total completion time, which means that S is also optimal with respect to the total completion time.

For convenience, suppose S* schedules the jobs in the order 1, 2, ..., n. Since S* is not the same as S, there must exist two adjacent jobs i and i+1 in S* such that a_i is scheduled before a_{i+1} but b_i > b_{i+1}. Because S* is order preserving, c_{i+1} must be scheduled after c_i completes. We first show that c_{i+1} must be scheduled immediately after c_i finishes in S*, i.e., there is no idle time between c_i and c_{i+1} in S*. Let t_i be the interval between the finishing time of a_i and the starting time of c_i; then we must have t_i ≥ b_i. Furthermore, the interval t_{i+1} between the time a_{i+1} finishes and the time c_i finishes is t_{i+1} = t_i + c − a. By the assumption a ≤ c, we have t_{i+1} ≥ t_i ≥ b_i ≥ b_{i+1}. In other words, by the time c_i finishes, b_{i+1} has already finished. Since S* is an optimal schedule, c_{i+1} must be scheduled immediately without any delay.

Now, since t_i ≥ b_i ≥ b_{i+1} and t_{i+1} ≥ b_i ≥ b_{i+1}, if we interchange a_i with a_{i+1} and c_i with c_{i+1}, and keep all other tasks unchanged, we still get a feasible schedule with the same total completion time as S* (see Fig. 1). This means that the new schedule is also optimal with respect to the total completion time. By repeatedly interchanging jobs in this way, we arrive at the schedule S. Since S then has the same total completion time as S*, S is also optimal with respect to the total completion time. Notice that the interchanges above do not change the makespan either, so the same argument shows that scheduling jobs in non-decreasing order of the b_i's also generates an optimal schedule with respect to the makespan.

Note that if a > c, then the canonical schedule that sequences jobs in non-decreasing order of the processing times of the slave tasks is still order preserving, but may not be optimal with respect to both the total completion time and the makespan. For the makespan, Sahni and Vairaktarakis (1996) showed that when a_j ≥ c_j for every job j, scheduling jobs in non-increasing order of the b_j's yields an optimal canonical and order preserving schedule. On the other hand, the complexity of finding an optimal canonical and order preserving schedule with respect to the total completion time when a > c is not known at the present time. However, we will show in the next subsection that scheduling jobs in non-decreasing order of the b_j's gives a 5/4-approximation with respect to the total completion time.
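
As a concrete illustration of the schedule used in Theorem 4.1, the following Python sketch builds the canonical, order-preserving schedule that sequences jobs in non-decreasing order of b_i for the special case of this section (single master, r_i = 0, a_i = a, c_i = c). It is a minimal simulation of the timing rules described above, not code from the paper; the function name and the example data are illustrative assumptions.

```python
from typing import List, Tuple

def special_case_schedule(b: List[float], a: float, c: float) -> Tuple[List[float], float, float]:
    """Canonical, order-preserving schedule with jobs in non-decreasing b_i order.

    All a tasks run back to back on the single master; the c tasks then run in
    the same order, each one starting as soon as (i) all a tasks are done,
    (ii) its own slave task has finished, and (iii) the previous c task is done.
    Returns (completion times in the sorted order, total completion time, makespan).
    """
    n = len(b)
    order = sorted(range(n), key=lambda j: b[j])   # non-decreasing b_i
    completions = []
    master_free = n * a                            # canonical: no c before all a's end
    for pos, j in enumerate(order):
        a_finish = (pos + 1) * a                   # a tasks run back to back from time 0
        slave_finish = a_finish + b[j]             # b_j starts right after a_j
        finish_c = max(master_free, slave_finish) + c
        completions.append(finish_c)
        master_free = finish_c                     # order preserving: next c waits for this one
    return completions, sum(completions), completions[-1]

# Example: three jobs with slave tasks of different lengths, a = 1 <= c = 2.
if __name__ == "__main__":
    comps, total, makespan = special_case_schedule(b=[5.0, 1.0, 3.0], a=1.0, c=2.0)
    print(comps, total, makespan)
```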

4.2 Approximation algorithms: Special case

In this subsection, we consider how to approximately solve the total completion time and makespan problems in the special case.

Theorem 4.2 Let S be a canonical schedule that processes the jobs in an arbitrary order. If a < c, then S is a (2, 2) schedule; if a = c, then S is a (2, 4/3) schedule; if a > c, then S is a (2, 3/2) schedule.

Proof For the makespan, Sahni and Vairaktarakis (1996) have shown that any canonical schedule is a 2-approximation. Leung and Zhao (2005) showed that if a ≤ c, then an arbitrary canonical schedule gives a (1 + 1/(1 + 2a/c))-approximation for the total completion time, which is asymptotically 2 when a < c and 4/3 when a = c; and if a > c, then an arbitrary canonical schedule gives an approximation ratio for the total completion time that is asymptotically 3/2. Combining these two results concludes the proof.

A better approximation ratio can be obtained by scheduling jobs in non-decreasing order of the b_i's.

Theorem 4.3 Let S be a schedule that schedules jobs in non-decreasing order of the b_i's. If a < c, then S is a (3/2, 4/3) schedule; if a = c, then S is a (3/2, 7/6) schedule; if a > c, then S is a (2, 5/4) schedule.

Proof As mentioned before, Sahni and Vairaktarakis (1996) showed that any canonical schedule is a 2-approximation for the makespan. Furthermore, they showed that if a_i ≤ c_i for every i, then scheduling jobs in non-decreasing order of the b_i's generates a schedule whose makespan is at most 3/2 times the optimal. For the total completion time, Leung and Zhao (2005) showed that if a ≤ c, then S gives an approximation ratio that is asymptotically 4/3 when a < c and 7/6 when a = c; and if a > c, then S is a (1 + 1/(4 + 2c/a))-approximation, which is asymptotically 5/4. Combining these two results concludes the proof.

5 Approximation algorithms: General cases

In this and the next section, we consider general cases of the makespan and total completion time problems. We make no assumptions about the processing times, so a_i, b_i and c_i can be arbitrary positive numbers, and a job i can also have a release time r_i. We consider not only canonical schedules but also non-canonical ones, and not only single-master systems but also multi-master systems and systems with distinct preprocessing and postprocessing masters. Again, we focus on approximation algorithms. In this section, we design algorithms that generate preemptive schedules and analyze their performance. In the next section, we show how to convert these preemptive schedules into non-preemptive schedules with a slight degradation in the quality of approximation.

5.1 Main idea

The Shortest-Processing-Time (SPT) rule, which always runs the job with the least processing time, and the Shortest-Remaining-Processing-Time (SRPT) rule (Schrage 1968; Smith 1976), which always runs the job with the least remaining processing time, are two well-known algorithms for minimizing total completion time. Usually, the SPT rule is used to generate non-preemptive schedules, while the SRPT rule is used to generate preemptive schedules. Suppose each job consists of a single task. If all jobs are available at time 0, then the SPT rule is optimal for total completion time in the single-machine and multi-machine environments. If the release times are arbitrary, then the SRPT rule is optimal on a single machine and is a 2-approximation (Phillips et al. 1998) in the multi-machine environment.

We will adapt both rules to the problems in the master-slave model and use them to generate preemptive schedules. All of these schedules are shown to have small total completion time and small makespan as well. A scheduling decision is made when an a task or a c task completes, so that a master machine becomes free, or when a new a task or c task becomes available. At any such time instant, the SPT rule schedules, from the set of available tasks (including those that have been preempted but have not yet completed), the one with the smallest processing time. Depending on how one chooses from the set of available tasks, one obtains the SPT_a rule and the SPT_c rule. Specifically, in the SPT_a rule, preemption occurs only among the a tasks and is based on the length of a_i; in the SPT_c rule, preemption occurs only among the c tasks and is based on the length of c_i. On the other hand, the SRPT rule schedules, from the set of available tasks, the one with the smallest remaining processing time. Similarly, one can define the SRPT_a rule and the SRPT_c rule. Both the SPT rule and the SRPT rule may generate schedules with migration when there are multiple machines, i.e., after being interrupted on one machine, a task may be resumed on a different machine.

Preemptive relaxation and linear programming relaxation are two important techniques for obtaining constant-factor approximations of the total completion time of non-preemptive schedules (Phillips et al. 1998; Hall et al. 1997; Chakrabarti et al. 1996; Goemans 1997; Schulz and Skutella 1997). Most of these algorithms work by first constructing a relaxed solution, either a preemptive schedule or a linear programming relaxation. The relaxation is then used to obtain an ordering of the jobs, and the jobs are list scheduled (i.e., with no unforced idle time) in this order. In this paper we use the first approach. By applying the ideas of Phillips et al. (1998) and Chekuri et al. (2001) to the master-slave model, we show that the preemptive schedules generated above can be converted into non-preemptive schedules with a certain degradation in the quality of approximation.

5.2 Single-master systems

For convenience, throughout this section, let A = Σ_{j=1}^n a_j, B = Σ_{j=1}^n b_j, C = Σ_{j=1}^n c_j, and R = Σ_{j=1}^n r_j. In all cases we have the following trivial lower bound for ΣC*:

    ΣC* ≥ A + B + C + R.    (1)

For the makespan, we have

    C*_max ≥ (A + C)/m,    (2)

where m ≥ 1 is the number of master machines in the system, and

    C*_max ≥ r_j + a_j + b_j + c_j,  1 ≤ j ≤ n,    (3)

for any single-master or multi-master system.
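
To make the bounds concrete, here is a small Python sketch (ours, not from the paper; the function name and the example instance are illustrative) that evaluates the lower bounds (1)-(3) for a given instance.

```python
from typing import Sequence, Tuple

def lower_bounds(r: Sequence[float], a: Sequence[float], b: Sequence[float],
                 c: Sequence[float], m: int = 1) -> Tuple[float, float]:
    """Evaluate the trivial lower bounds (1)-(3) for an instance.

    r, a, b, c hold the release times and task lengths of the n jobs;
    m is the number of master machines.
    """
    A, B, C, R = sum(a), sum(b), sum(c), sum(r)
    total_completion_lb = A + B + C + R                       # bound (1)
    makespan_lb = max((A + C) / m,                            # bound (2)
                      max(rj + aj + bj + cj                   # bound (3)
                          for rj, aj, bj, cj in zip(r, a, b, c)))
    return total_completion_lb, makespan_lb

# Example with three jobs and a single master machine.
print(lower_bounds(r=[0, 0, 2], a=[1, 2, 1], b=[4, 1, 3], c=[2, 2, 1], m=1))
```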

Algorithm Canonical-SPT_c. Schedule the a tasks in an arbitrary order without preemption. After all the a tasks finish, schedule the available c tasks by the SPT_c rule.

Theorem 5.1 Algorithm Canonical-SPT_c generates a (2, 2) canonical preemptive schedule in O(n log n) time when there is a single master and r_i = 0 for all jobs i.

Proof Since preemption among the c tasks cannot increase the makespan, by Sahni and Vairaktarakis (1996) the schedule generated by Algorithm Canonical-SPT_c has makespan at most two times the optimal. Let C_{a_j} denote the time a_j completes. Then at time t_j = max(A, C_{a_j} + b_j), all the a tasks have finished and the task c_j is available to be scheduled. Since C_{a_j} ≤ A, we have t_j ≤ A + b_j. According to the algorithm, if there is another available task c_i that is running but has not finished at time t_j and c_i < c_j, then c_j has to wait until c_i finishes. Also, during the execution of c_j, if another task c_i < c_j becomes available, then c_i preempts c_j. In both cases, we say that c_j is delayed by c_i. Let C_j be the completion time of c_j in the schedule generated by Algorithm Canonical-SPT_c. Then

    C_j = t_j + c_j + Σ_{c_i delays c_j} c_i ≤ (A + b_j + c_j) + Σ_{c_i < c_j} c_i.

For canonical schedules, whether preemption is allowed or not, we have the following lower bound on the minimum total completion time:

    ΣC* ≥ nA + Σ_j Σ_{c_i ≤ c_j} c_i,    (4)

which is based on the fact that a canonical schedule that has no idle time and schedules the c tasks in non-decreasing order of their lengths must be an optimal schedule. Thus, the total completion time is

    ΣC_j ≤ Σ_j (A + b_j + c_j + Σ_{c_i < c_j} c_i) = (nA + Σ_j Σ_{c_i < c_j} c_i) + (B + C) < 2ΣC*,

where the last inequality comes from the lower bounds (4) and (1).

Let S_1 = {i : a_i ≤ c_i} and S_2 = {i : a_i > c_i}. Suppose the jobs in S_1 are arranged in increasing order of the b's and the jobs in S_2 are arranged in decreasing order of the b's. In Sahni and Vairaktarakis (1996), it was shown that the canonical schedule in which the a tasks of S_1 are scheduled before the a tasks of S_2 has makespan at most 3/2 times that of the optimal schedule. If the a tasks are scheduled in this order in Algorithm Canonical-SPT_c, then one still gets a 3/2-approximation for the makespan, since preemption among the available c tasks does not increase the makespan.

Corollary 5.1 There is an O(n log n) time algorithm that generates a (3/2, 2) canonical preemptive schedule when there is a single master and r_i = 0 for all i.
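
The following Python sketch (ours; the function name and data layout are illustrative assumptions, not code from the paper) simulates Algorithm Canonical-SPT_c for the single-master, r_i = 0 case: the a tasks run back to back, and the c tasks are then processed preemptively, always favouring the available task with the smallest c_i.

```python
import heapq
from typing import List, Tuple

def canonical_spt_c(a: List[float], b: List[float], c: List[float]) -> Tuple[List[float], float]:
    """Simulate Algorithm Canonical-SPT_c on a single master with r_i = 0.

    The a tasks run non-preemptively in index order (any order is allowed by
    the algorithm). Once all a tasks are done, the available c tasks are run by
    the SPT_c rule: at every moment the available, unfinished c task with the
    smallest length c_i runs, preempting a longer one if necessary.
    Returns (completion time of each job, makespan).
    """
    n = len(a)
    A = sum(a)
    finish_a, t = [], 0.0
    for ai in a:                                   # non-preemptive a tasks
        t += ai
        finish_a.append(t)
    # (time at which c_j becomes available, priority c_j, job index)
    ready = sorted((max(A, finish_a[j] + b[j]), c[j], j) for j in range(n))
    completions = [0.0] * n
    avail = []                                     # heap entries [c_j, j, remaining]
    time, k = A, 0
    while k < n or avail:
        if not avail:                              # idle until the next c task is ready
            time = max(time, ready[k][0])
        while k < n and ready[k][0] <= time:       # release newly available c tasks
            heapq.heappush(avail, [ready[k][1], ready[k][2], ready[k][1]])
            k += 1
        cj, j, rem = avail[0]
        next_arrival = ready[k][0] if k < n else float("inf")
        run = min(rem, next_arrival - time)        # run until completion or next arrival
        time += run
        if run == rem:                             # c task of job j completes
            heapq.heappop(avail)
            completions[j] = time
        else:                                      # preempted by a newly arrived shorter task
            avail[0][2] = rem - run
    return completions, time
```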

Algorithm Non-canonical-SPT_{a+c}. For any two jobs, if (a_j + c_j) < (a_i + c_i), then both a_j and c_j are said to have higher priority than a_i and c_i. At any time, if the master processor is free for assignment, assign the available task with the highest priority. If a new task becomes available and has higher priority than the currently running task, the new task preempts the currently running task.

Theorem 5.2 Algorithm Non-canonical-SPT_{a+c} generates a (2, 2) preemptive schedule for a single-master system. Furthermore, if the jobs have arbitrary release times, the schedule can be generated online.

Proof To bound the makespan, we pick the last job j such that c_j starts immediately after it becomes ready. Then the intervals I_1 = [r_j, C_{a_j}) and I_2 = [C_{a_j} + b_j, C_max) must both be busy, and the total length |I_1| + |I_2| of these two intervals is at most Σ_j a_j + Σ_j c_j = A + C ≤ C*_max. Thus,

    C_max = r_j + |I_1| + b_j + |I_2| ≤ (r_j + b_j) + C*_max ≤ 2C*_max.

Now we bound the total completion time of the schedule generated by Algorithm Non-canonical-SPT_{a+c}. The schedules generated by Algorithm Non-canonical-SPT_{a+c} are non-canonical; the a tasks and the c tasks can be scheduled alternately. A lower bound on ΣC* can be obtained by assuming that all the b tasks have length 0:

    ΣC* ≥ Σ_j Σ_{a_i + c_i ≤ a_j + c_j} (a_i + c_i).    (5)

Let S be the schedule generated by Algorithm Non-canonical-SPT_{a+c}, and let C_{a_j} be the completion time of a_j in S. Then

    C_{a_j} = r_j + Σ_{a_i delays a_j} a_i + Σ_{c_i delays a_j} c_i + a_j

and

    C_j = C_{a_j} + b_j + c_j + Σ_{a_i delays c_j} a_i + Σ_{c_i delays c_j} c_i
        = r_j + a_j + b_j + c_j + Σ_{a_i delays a_j} a_i + Σ_{c_i delays a_j} c_i + Σ_{a_i delays c_j} a_i + Σ_{c_i delays c_j} c_i
        ≤ Σ_{a_i + c_i < a_j + c_j} (a_i + c_i) + (r_j + a_j + b_j + c_j),

where the last inequality comes from the fact that the two sets of tasks delaying a_j and c_j are disjoint, and they all have higher priority than a_j and c_j. Thus, we can bound the total completion time:

    ΣC_j ≤ Σ_j [ Σ_{a_i + c_i < a_j + c_j} (a_i + c_i) + (r_j + a_j + b_j + c_j) ]
         ≤ Σ_j Σ_{a_i + c_i < a_j + c_j} (a_i + c_i) + (R + A + B + C)
         < 2ΣC*,

by (5) and (1). To conclude the proof, note that Algorithm Non-canonical-SPT_{a+c} schedules jobs in an online fashion.

5.3 Multi-master systems

Algorithm Multi-Master-SPT_{a+c}

(1) Without loss of generality, assume that the jobs are indexed in non-decreasing order of a_j + c_j, that is, a_j + c_j ≤ a_{j+1} + c_{j+1} for 1 ≤ j ≤ n − 1. We may also assume that n is a multiple of m; otherwise, one can add dummy jobs with a_i = b_i = c_i = 0.
(2) Assign the jobs to the machines such that jobs 1, 2, ..., m go to machines 1, 2, ..., m, respectively; jobs m+1, m+2, ..., 2m go to machines m, m−1, ..., 1, respectively; jobs 2m+1, 2m+2, ..., 3m go to machines 1, 2, ..., m, respectively; and so on until all jobs are assigned.
(3) Apply Algorithm Non-canonical-SPT_{a+c} to each master machine to schedule the jobs assigned to it.

Theorem 5.3 Algorithm Multi-Master-SPT_{a+c} generates a (3, 2) offline preemptive schedule without migration for multi-master systems when jobs have arbitrary release times.

Proof Let L_p = Σ_{j scheduled on p} (a_j + c_j). By the pigeonhole principle, it is easy to see that

    min_{1 ≤ p ≤ m} L_p ≤ (1/m) Σ_j (a_j + c_j) ≤ C*_max.

For any two machines p and q, by the way the jobs are assigned to the machines,

    L_p − L_q ≤ max_{1 ≤ k ≤ n} (a_k + c_k) − min_{1 ≤ k ≤ n} (a_k + c_k) ≤ max_{1 ≤ k ≤ n} (a_k + c_k) < C*_max.

This means that for any machine p,

    L_p ≤ min_{1 ≤ q ≤ m} L_q + C*_max ≤ 2C*_max.

First we bound the makespan. Suppose the job with the maximum completion time among all jobs is assigned to machine p. Let l be the last job on machine p such that c_l is scheduled immediately after it is ready, and define C_{a_l} as before. Then machine p is busy during the intervals I_1 = [r_l, C_{a_l}) and I_2 = [C_{a_l} + b_l, C_max). The total length of the two intervals is |I_1| + |I_2| ≤ L_p < 2C*_max. Therefore, the makespan is

    C_max = r_l + |I_1| + b_l + |I_2| < 3C*_max.

To bound the total completion time, we first give a lower bound. Without loss of generality, one may assume that n = mk for some integer k. For convenience, one can reindex the jobs assigned to each machine p in the form (p, q) such that a_(p,q) + c_(p,q) ≤ a_(p,q+1) + c_(p,q+1). Let B_p = Σ_{q=1}^k b_(p,q).

A lower bound on the total completion time comes from the fact that Algorithm Multi-Master-SPT_{a+c} is optimal if b_(p,q) = 0 for every job (p, q):

    ΣC* ≥ Σ_{p=1}^m Σ_{q=1}^k (k − q + 1)(a_(p,q) + c_(p,q)).    (6)

Now fix a machine p. Using a similar argument as in the proof of Theorem 5.2, we have

    Σ_{q=1}^k C_(p,q) ≤ Σ_{q=1}^k r_(p,q) + Σ_{q=1}^k (k − q + 1)(a_(p,q) + c_(p,q)) + B_p.

Thus, the total completion time is

    Σ_{p=1}^m Σ_{q=1}^k C_(p,q) ≤ Σ_{p=1}^m Σ_{q=1}^k (k − q + 1)(a_(p,q) + c_(p,q)) + B + R < 2ΣC*,

where the last inequality comes from (6) and (1).

The schedules generated by Algorithm Multi-Master-SPT_{a+c} are offline schedules. To obtain online schedules, one can apply Algorithm Non-canonical-SPT_{a+c} to multi-master systems. We have the following theorem, whose proof is omitted; see Leung and Zhao (2006).

Theorem 5.4 Algorithm Non-canonical-SPT_{a+c} generates a (3, 2) online preemptive schedule with migration on multi-master systems when jobs have arbitrary release times.

5.4 Distinct preprocessing and postprocessing master systems

Algorithm SRPT_a–SPT_c. Schedule the available a tasks using the SRPT_a rule on the preprocessing masters. Schedule the available c tasks using the SPT_c rule on the postprocessing masters.

In the following, we let m_1 and m_2 denote the numbers of preprocessing masters and postprocessing masters, respectively.

Theorem 5.5 Algorithm SRPT_a–SPT_c generates a (3, 2) online preemptive schedule when m_1 = m_2 = 1 and r_j ≥ 0 for all j.

Proof For the makespan, consider the last job l such that c_l runs immediately when it becomes available at time C_{a_l} + b_l. There is no idle time in the interval I_1 = [r_l, C_{a_l}) nor in the interval I_2 = [C_{a_l} + b_l, C_max). The length of each interval is at most C*_max. Therefore, the makespan is

    C_max = r_l + |I_1| + b_l + |I_2| ≤ 3C*_max.

Now we consider the total completion time. Let C*_{a_j} be the time a_j finishes in an optimal schedule. Then

    ΣC* ≥ Σ_j (C*_{a_j} + b_j + c_j) = Σ_j C*_{a_j} + B + C.

Let C_{a_j} be the time a_j finishes in the schedule obtained by Algorithm SRPT_a–SPT_c. Since the SRPT_a rule is optimal if b_j = c_j = 0, Algorithm SRPT_a–SPT_c must have the minimum Σ_{j=1}^n C_{a_j} among all possible schedules; that is, Σ_j C_{a_j} ≤ Σ_j C*_{a_j}. Thus, the total completion time is at most

    Σ_j (C_{a_j} + b_j + c_j + Σ_{c_i delays c_j} c_i) ≤ Σ_j (C*_{a_j} + b_j + c_j) + Σ_j Σ_{c_i < c_j} c_i ≤ 2ΣC*.

Theorem 5.6 Algorithm SRPT_a–SPT_c generates a (4, 2) preemptive schedule with migration when m_1 ≥ 1, m_2 ≥ 1 and r_j = 0 for all j.

Proof As before, let k be the job with the maximum completion time, and let l be the last job such that the task c_l runs immediately after it is ready at C_{a_l} + b_l. Then the intervals I_1 = [0, C_{a_l} − a_l) and I_2 = [C_{a_l} + b_l, C_max − c_k) must both be busy, and

    |I_1| ≤ Σ_{a_j < a_l} a_j / m_1 ≤ C*_max  and  |I_2| ≤ Σ_{c_j < c_l} c_j / m_2 ≤ C*_max.

Therefore,

    C_max = |I_1| + (a_l + b_l) + |I_2| + c_k ≤ 4C*_max.

Since all a tasks are available at time 0, the SRPT_a rule is the same as the SPT_a rule. As mentioned before, the SPT_a rule minimizes the total completion time of the a tasks. Let C*_{a_i} be the finish time of a_i in an optimal schedule, and let C_{a_i} be the finish time of a_i in the schedule generated by Algorithm SRPT_a–SPT_c. Then, as in the case of m_1 = m_2 = 1, a lower bound on the total completion time is

    ΣC* ≥ Σ_{i=1}^n (C*_{a_i} + b_i + c_i) = (Σ_{i=1}^n C*_{a_i}) + B + C ≥ (Σ_{i=1}^n C_{a_i}) + B + C.    (7)

When the task c_j is ready, it can be delayed by a task c_i only if c_i < c_j. The length of the interval [C_{a_j} + b_j, C_j − c_j) is at most Σ_{c_i < c_j} c_i / m_2, since all postprocessing masters must be busy during this interval and can only run tasks c_i with c_i < c_j. Hence the total completion time is at most

    ΣC_j ≤ Σ_j (C_{a_j} + b_j + c_j + Σ_{c_i < c_j} c_i / m_2) = (Σ_{i=1}^n C_{a_i}) + B + C + Σ_j Σ_{c_i < c_j} c_i / m_2 ≤ 2ΣC*,

where the last inequality comes from (7) and the trivial lower bound ΣC* ≥ Σ_{j=1}^n Σ_{c_i < c_j} c_i / m_2.
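
The proof above uses the observation that, with all r_j = 0, SRPT_a reduces to SPT_a and minimizes the total completion time of the a tasks on the m_1 preprocessors. The short Python sketch below (ours; the function name and the example instance are illustrative assumptions) computes these SPT a-task completion times by always starting the next-shortest a task on the preprocessor that frees up first.

```python
import heapq
from typing import List

def spt_a_completions(a: List[float], m1: int) -> List[float]:
    """SPT scheduling of the a tasks on m1 identical preprocessors (all r_j = 0).

    With equal release times SRPT_a never preempts, so it reduces to SPT_a:
    process the a tasks in non-decreasing order of length, always on the
    preprocessor that becomes free first. The returned completion times C_{a_j}
    have the minimum possible sum, which is what lower bound (7) relies on.
    """
    order = sorted(range(len(a)), key=lambda j: a[j])
    machine_free = [0.0] * m1                  # min-heap of machine-free times
    heapq.heapify(machine_free)
    C_a = [0.0] * len(a)
    for j in order:
        start = heapq.heappop(machine_free)    # earliest available preprocessor
        C_a[j] = start + a[j]
        heapq.heappush(machine_free, C_a[j])
    return C_a

# Example: five a tasks on two preprocessors.
print(spt_a_completions([3.0, 1.0, 2.0, 5.0, 2.0], m1=2))
```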

Theorem 5.7 Algorithm SRPT_a–SPT_c generates a (4, 3) preemptive schedule with migration when m_1 ≥ 1, m_2 ≥ 1 and r_j ≥ 0 for all j.

Proof Using a similar argument as in the proof of Theorem 5.6, one can show that C_max ≤ 4C*_max. Let C*_{a_j} be the completion time of a_j in an optimal schedule. Again we have

    ΣC* ≥ (Σ_j C*_{a_j}) + B + C.

Let C_{a_j} be the completion time of a_j in the schedule generated by Algorithm SRPT_a–SPT_c. As mentioned before, the SRPT_a rule is a 2-approximation when b_j = c_j = 0. Thus, it must be true that Σ_j C_{a_j} ≤ 2 Σ_j C*_{a_j}, and

    ΣC_j ≤ Σ_j (C_{a_j} + b_j + c_j + Σ_{c_i < c_j} c_i / m_2)
         ≤ Σ_j C_{a_j} + B + C + Σ_j Σ_{c_i < c_j} c_i / m_2
         ≤ (Σ_j C*_{a_j}) + (Σ_j C*_{a_j} + B + C) + Σ_j Σ_{c_i < c_j} c_i / m_2
         ≤ 3ΣC*.

This concludes our proof.

6 Converting preemptive schedules into non-preemptive schedules

As mentioned before, we obtain non-preemptive schedules by converting preemptive schedules. Our approach is based on the technique introduced by Phillips et al. (1998) and improved by Chekuri et al. (2001). The model studied in Phillips et al. (1998) consists of one or more identical machines and n simple jobs. Let S be a preemptive schedule. To obtain a non-preemptive schedule S', they form a list of the jobs in increasing order of their completion times in S and then list schedule the jobs in this order, one by one, respecting their release times. They showed that if S is a β-approximation for total completion time, then S' is a 2β-approximation in the single-machine environment and a 3β-approximation in the multi-machine environment. In addition, this conversion yields an online non-preemptive algorithm if the preemptive schedule can be generated online. Later, Chekuri et al. (2001) improved the above results in the single-machine case: they designed a deterministic O(n²) time offline algorithm such that the schedule obtained has total completion time at most e/(e−1) times that of the preemptive schedule, where e is the base of the natural logarithm. They also gave a randomized online algorithm with expected performance e/(e−1). In the multi-machine case, Chakrabarti et al. (1996) showed that the conversion procedure given in Phillips et al. (1998) has a bound of 7/3, instead of 3, times that of S. In both cases one can show that if S is an α-approximation for makespan, then S' is an (α + 1)-approximation for makespan.

In the following, we describe how to convert the preemptive schedules generated in the previous section into non-preemptive schedules in the master-slave model.

The difficulty of our conversion is that we need to respect not only the release time of a_i, 1 ≤ i ≤ n, but also the constraint that the interval between the finish time of a_i and the start time of c_i has length at least b_i.

Theorem 6.1 In O(n²) time, one can obtain a (5/2, 2e/(e−1)) non-preemptive canonical schedule when there is a single master and r_j = 0 for all j.

Proof Let S be a preemptive canonical schedule of the n jobs obtained by applying Corollary 5.1. Let S_a be the partial schedule of S during the interval (0, A], and S_c be the partial schedule of S during the interval (A, C_max]. Clearly S_a contains only a tasks, and by Algorithm Canonical-SPT_c there is no preemption in S_a. Let C_{a_j} be the completion time of a_j. It is easy to see that the partial schedule S_c contains only c tasks, and it can be seen as a preemptive schedule of n tasks on a single machine where each task j has release time max(A, C_{a_j} + b_j) and processing time c_j.

To convert S into a non-preemptive schedule S', one fixes S_a and converts S_c into a non-preemptive schedule S'_c of the c tasks by using the approach of Chekuri et al. (2001). Let C_j and C'_j be the completion times of c_j in S and S', respectively. As mentioned at the beginning of this section, it has been shown in Chekuri et al. (2001) that C'_j ≤ (e/(e−1)) C_j and C'_j ≤ C_j + C*_max. Since S is a (3/2, 2) canonical schedule, the obtained schedule is a (5/2, 2e/(e−1)) non-preemptive canonical schedule. This concludes the proof.

Theorem 6.2 In O(n log n) time, one can obtain a (3, 4) online non-preemptive schedule when there is a single master and r_j ≥ 0 for all jobs j.

Proof Let S be the (2, 2) non-canonical schedule of Theorem 5.2. We obtain a λ-schedule S' with λ = 1, as in Phillips et al. (1998). Using arguments similar to those in Phillips et al. (1998), we can show that the obtained schedule is a (3, 4) non-preemptive schedule. Furthermore, it can be implemented online if the preemptive schedule is online.

For multi-master systems, let S be the (3, 2) schedule generated by Algorithm Multi-Master-SPT_{a+c}. Then S has no migration. One can obtain a non-preemptive schedule S' by converting the schedule on each machine separately in the same way as described in the proof of Theorem 6.2. The following can be shown; see Leung and Zhao (2006) for details.

Theorem 6.3 For multi-master systems, one can obtain a (5, 4) non-preemptive offline schedule.

Now we consider systems which have m_1 preprocessors and m_2 postprocessors. Suppose m_1 = m_2 = 1. When the release times of all jobs are identical, there is no preemption on the single preprocessor, so we simply perform the conversion on the single postprocessor using the approach given in Chekuri et al. (2001). When the release times are arbitrary, we need to do the conversion carefully so as to make sure that the difference between the finish time of a_j and the start time of c_j is at least b_j. We first remove the preemptions among the a tasks as in Chekuri et al. (2001), respecting the release times of the a tasks. Next, we remove the preemptions among the c tasks as in Phillips et al. (1998) and make sure that the interval between the finish time of a_j and the start time of c_j is at least b_j for each job j. We summarize the results as follows.

Theorem 6.4 When there is a single preprocessor and a single postprocessor, one can obtain a (4, 2e/(e−1)) non-preemptive offline schedule if all jobs have the same release time, and a (4, 4 + 2e/(e−1)) non-preemptive offline schedule, or an online non-preemptive schedule with expected performance (4, 4 + 2e/(e−1)), if the jobs have arbitrary release times.

Now suppose m_1, m_2 > 1. When the release times of all jobs are identical, no preemption occurs on the preprocessors, so we perform the conversion on the postprocessors using the approach given in Phillips et al. (1998). When the release times are arbitrary, we first remove the preemptions among the a tasks, respecting the release times of the a tasks, as in Phillips et al. (1998). Next, we remove the preemptions among the c tasks and make sure that the interval between the finish time of a_j and the start time of c_j is at least b_j for each job j.

Theorem 6.5 When there are m_1 preprocessors and m_2 postprocessors, in O(n log n) time one can obtain a (4, 14/3) non-preemptive schedule when all jobs have the same release times, and a (5, 13) non-preemptive online schedule when the jobs have arbitrary release times.

7 Conclusion

In this paper we have considered the problems of minimizing the total completion time and the makespan in the master-slave model in various settings. We first showed that the problem of minimizing the total completion time is NP-hard in the strong sense. We then considered special cases of the problems and showed that, while the total completion time and makespan problems are strongly NP-hard in general, there is a special case that admits a polynomial time solution. We then turned to approximation algorithms and designed efficient algorithms that generate preemptive schedules with good performance ratios; in all cases, these schedules have small makespan as well. Finally, we converted the preemptive schedules into non-preemptive schedules using the techniques developed in Phillips et al. (1998), Chakrabarti et al. (1996) and Chekuri et al. (2001). Most of the performance bounds derived in this paper are not tight. For future research, it will be interesting to tighten these bounds or to develop better approximation algorithms.

References

Allaoui, H., & Artiba, A. (2006). Scheduling two-stage hybrid flow shop with availability constraints. Computers and Operations Research, 33(5), 1399-1419.
Buten, R. E., & Shen, V. Y. (1973). A scheduling model for computer systems with two classes of processors. In Proceedings of the 1973 Sagamore computer conference on parallel processing (pp. 130-138).
Chakrabarti, S., Phillips, C., Schulz, A., Shmoys, D. B., Stein, C., & Wein, J. (1996). Improved scheduling algorithms for minsum criteria. In Proceedings of the 23rd international colloquium on automata, languages and programming (pp. 646-657).
Chekuri, C., Motwani, R., Natarajan, B., & Stein, C. (2001). Approximation techniques for average completion time scheduling. SIAM Journal on Computing, 31(1), 146-166.
Cheng, T. C. E., & Sin, C. C. S. (1990). State-of-the-art review of parallel-machine scheduling research. European Journal of Operational Research, 47, 271-290.
Goemans, M. X. (1997). Improved approximation algorithms for scheduling with release dates. In Proceedings of the eighth ACM-SIAM symposium on discrete algorithms (pp. 591-598).

Guinet, A. G. P., & Solomon, M. M. (1996). Scheduling hybrid flowshops to minimize maximum tardiness or maximum completion time. International Journal of Production Research, 34, 1643-1654.
Gupta, J. N. D., & Tunc, E. A. (1994). Scheduling a 2-stage hybrid flowshop with separable setup and removal times. European Journal of Operational Research, 77, 415-428.
Hall, L. A., Schulz, A. S., Shmoys, D. B., & Wein, J. (1997). Scheduling to minimize average completion time: Offline and online algorithms. Mathematics of Operations Research, 22, 513-544.
Kern, W., & Nawijn, W. (1991). Scheduling multi-operation jobs with time lags on a single machine. In U. Faigle & C. Hoede (Eds.), Proceedings of the 2nd Twente workshop on graphs and combinatorial optimization, Enschede.
Langston, M. A. (1987). Interstage transportation planning in the deterministic flow-shop environment. Operations Research, 35(4), 556-564.
Lee, C.-Y., & Vairaktarakis, G. L. (1994). Minimizing makespan in hybrid flowshops. Operations Research Letters, 16, 149-158.
Leung, J. Y.-T., & Zhao, H. (2005). Minimizing mean flowtime and makespan on master-slave systems. Journal of Parallel and Distributed Computing, 65, 843-856.
Leung, J. Y.-T., & Zhao, H. (2006). Minimizing sum of completion times and makespan in master-slave systems. IEEE Transactions on Computers, 55, 985-999.
Phillips, C., Stein, C., & Wein, J. (1998). Minimizing average completion time in the presence of release dates. Mathematical Programming, 82, 199-223.
Sahni, S. (1996). Scheduling master-slave multiprocessor systems. IEEE Transactions on Computers, 45(10), 1195-1199.
Sahni, S., & Vairaktarakis, G. (1996). The master-slave paradigm in parallel computer and industrial settings. Journal of Global Optimization, 9, 357-377.
Sahni, S., & Vairaktarakis, G. (2004). The master-slave scheduling model. In J. Y.-T. Leung (Ed.), Handbook of scheduling: Algorithms, models, and performance analysis. Boca Raton: CRC Press.
Schrage, L. (1968). A proof of the optimality of the shortest remaining processing time discipline. Operations Research, 16, 687-690.
Schulz, A. S., & Skutella, M. (1997). Scheduling-LPs bear probabilities: Randomized approximations for min-sum criteria. In Proceedings of the fifth annual European symposium on algorithms (pp. 416-429).
Smith, D. (1976). A new proof of the optimality of the shortest remaining processing time discipline. Operations Research, 26(1), 197-199.
Sriskandarajah, C., & Sethi, S. P. (1989). Scheduling algorithms for flexible flowshops: Worst and average case performance. European Journal of Operational Research, 43, 143-160.
Vairaktarakis, G. (1997). Analysis of algorithms for master-slave system. IIE Transactions, 29(11), 939-949.
Yu, W., Hoogeveen, H., & Lenstra, J. K. (2004). Minimizing makespan in a two-machine flowshop with delays and unit-time operations is NP-hard. Journal of Scheduling, 7(5), 333-348.