SUPPLY CHAIN SCHEDULING: ASSEMBLY SYSTEMS. Zhi-Long Chen. Nicholas G. Hall

SUPPLY CHAIN SCHEDULING: ASSEMBLY SYSTEMS
Zhi-Long Chen, University of Pennsylvania
Nicholas G. Hall, The Ohio State University
December 27, 2000

Abstract

We study the issue of cooperation in supply chain manufacturing. Consider an assembly system in which several suppliers provide component parts to a manufacturer. A product cannot be delivered until all its parts have been supplied. The manufacturer performs nonbottleneck operations, for example outsourced assembly, packaging and delivery, for each product. We consider a variety of manufacturer's problems, where the delivery time of a product is the maximum of the suppliers' delivery times for the parts needed for that product. For most problems, we provide either an efficient algorithm or a proof of intractability. Similar results are provided for problems where the total suppliers' cost is minimized, subject to a requirement that each supplier uses the same sequence. We show that an optimal schedule for the manufacturer's problem can be far from optimal for the suppliers' problem, and vice versa. Therefore, we consider bicriteria problems where a composite measure of the manufacturer's cost and the total suppliers' cost is minimized. We also describe heuristics for several problems that we show are intractable, and analyze their worst case performance or demonstrate asymptotic optimality of the solutions which they provide. Finally, incentives and practical mechanisms for cooperation between the decision makers are discussed.

Key Words and Phrases: manufacturing and scheduling, supply chains, assembly system, algorithms and heuristics.

One of the most active topics in manufacturing research over the last ten years has been supply chain management. A supply chain represents all the stages at which value is added to a manufactured product. These stages include the supply of raw materials and intermediate components, finished goods manufacture, packaging, transportation, warehousing and logistics. The most important issue in supply chain management research is cooperation between the decision makers in a supply chain. We demonstrate that cooperation between decision makers in an assembly system can greatly reduce cost, relative to an uncooperative solution where various decision makers use schedules that work poorly with those of others. In our assembly system, there are several suppliers and one manufacturer. Each supplier provides component parts to the manufacturer. The manufacturer makes products that are dependent on all the component parts. Therefore, the manufacturer waits until all the components for a job have arrived and then initiates a final stage of manufacturing. Since our focus is on the coordination of the suppliers' schedules, rather than on the manufacturing stage, we assume that this final stage is nonbottleneck. That is, as many products as needed can be processed through the final stage simultaneously. Practical examples of nonbottleneck final manufacturing stages include outsourced assembly, packaging and delivery. The aim is to minimize the overall scheduling cost, which is defined in various ways. This is achieved by sequencing and scheduling decisions made by the suppliers.

Thomas and Griffin (1996) provide an extensive review and discussion of the literature on supply chain management. They point out that over 11% of the U.S. Gross National Product is devoted to non-military logistics expenditures. Moreover, for many products, logistics expenditures can constitute over 30% of the cost of goods sold.
In their conclusions, they discuss the need for research that addresses supply chain issues at an operational rather than a strategic level, and that uses deterministic rather than stochastic models. This paper directly addresses both of these needs. Sarmiento and Nagi (1999) survey the literature of integrated production and distribution models. They point out that the modern trend towards reduced inventory levels creates a closer interaction between the stages in a supply chain, which increases the practical usefulness of integrated models. They also discuss several possible topics for future research. In view of these two extensive survey papers, we do not discuss the general supply chain management literature in detail. Within supply chain management research, the literature that is most closely related to

this paper is on production and distribution systems with multiple stages. Ow, Smith and Howie (1988) describe a multi-agent, or distributed, scheduling system. They do not discuss algorithms for schedule construction, or the benefits of cooperation between the agents, in detail. Weng (1997) considers a two-stage supply chain in which demand is a function of both price and a random component. He develops models to find the sales price and production/order quantity that maximize the expected profit of each stage individually, and also the expected profit of the system as a whole where there is cooperation. He shows that the increase in expected profit resulting from cooperation increases with the price elasticity of demand and with the transfer price between the stages. Lee and Chen (2000) consider the integration of transportation constraints including time and capacity, with scheduling decisions, and study a variety of mathematical models related to this issue. Moses and Seshadri (2000) describe a model that determines a review period and a stocking policy for a two-stage supply chain with lost sales. They establish that, for a given review period, there exists an equilibrium stocking level if and only if decision makers at the first stage agree to share a fraction of the holding cost of safety stock held at the second stage. They describe an algorithm that finds the optimal cost sharing fraction, review period, and base stock level. There are at least two previous studies of scheduling problems in assembly systems, but neither discusses any supply chain issues such as the cooperation between decision makers that is the main focus of our paper. Lee, Cheng and Lin (1993) consider the minimization of makespan in an assembly system with two suppliers and one manufacturer. They prove that the general version of this problem is intractable, provide an enumerative algorithm, and also study several special cases and heuristics. Potts et al. 
(1995) consider a more general problem with several suppliers and one manufacturer, and describe heuristics with small worst case error bounds. The first paper on supply chain scheduling, by Hall and Potts (2000), studies the benefits of cooperative decision making in a supply chain where a supplier makes deliveries to several manufacturers, who may in turn supply several customers. They develop models which minimize the total of scheduling and delivery costs. They demonstrate that cooperation between a supplier and a manufacturer may reduce the total cost in this system by at least 20% and as much as 100%, depending upon the scheduling cost function being considered. Whereas the decision makers in the supply chain studied by Hall and Potts (2000) are

at different stages, in the assembly system considered here they are suppliers at the same stage who supply the same manufacturer. The issue of cooperation between the suppliers arises from the requirement that all parts for a job must be received from all suppliers before the final stage of manufacturing for that job can begin. Thus, the performance of the manufacturer is dependent on the last supplier to deliver a part for each job, and consequently on decisions made by all the suppliers. However, the suppliers are concerned not only about the manufacturer's performance but also about their own costs. The suppliers' individual cost functions are not, in general, optimized by the same scheduling decisions which optimize the manufacturer's cost. Therefore, it is important to study the optimization of the manufacturer's performance and how it may affect the costs of the suppliers. A natural constraint which the manufacturer may try to impose on the suppliers is that they all use the same job sequence. This constraint simplifies material handling and production, and also reduces work in process at the manufacturer. This constraint also produces part delivery schedules that are optimal from the manufacturer's perspective. However, the imposition of such a constraint reduces the suppliers' scheduling options, and therefore needs careful evaluation. In view of these potential tradeoffs between the manufacturer's cost and the total suppliers' cost, it is also valuable to study composite objectives that contain measures of both. In addition to the effects of these various scenarios on scheduling cost, the computational tractability of scheduling decisions is also relevant. This is because scheduling problems that are intractable often need to be solved by heuristic methods, which may increase costs significantly, relative to an optimal solution. This paper is organized as follows.
In Section 1, we specify our notation, assumptions, and scheme for classifying supply chain scheduling problems in an assembly system. Next, we present some preliminary general results that simplify the analysis which follows. We also provide an overview of our algorithmic and computational complexity results. Section 2 considers the minimization of the total manufacturer's cost, using a variety of classical scheduling objectives. For each problem, we provide either an efficient algorithm, or a proof of intractability which demonstrates that such an algorithm is unlikely to exist. In Section 3, we similarly consider the minimization of the total suppliers' cost, again variously defined, subject to a requirement that each supplier uses the same sequence. In Section 4, we study the extent to which an optimal schedule for the manufacturer's problem can be

suboptimal for the suppliers' problem, and vice versa, using two different cost objectives. These results motivate the analysis, in Section 5, of problems with a composite objective containing measures of both manufacturer's cost and total suppliers' cost. In Section 6, we describe two heuristics for a manufacturer's problem and a system problem that we show are intractable, and analyze their worst case performance. We also describe a third heuristic for both problems, and demonstrate that it provides solutions which are asymptotically optimal in the number of jobs. Section 7 considers incentives and mechanisms for cooperation between different decision makers. Finally, Section 8 contains a conclusion and some suggestions for future research.

1 Preliminaries

The assembly system which is studied in this paper consists of s suppliers S_1, ..., S_s, which supply parts to a manufacturer M, as illustrated in Figure 1.

[Figure 1: Structure of Assembly Problems. Suppliers S_1, S_2, ..., S_s each deliver parts to the manufacturer M.]

In this section, we first describe the notation and assumptions of our models. Then, we present some preliminary general results that simplify the analysis of the scheduling problems discussed. We also present an overview of the computational complexity results in the paper. Other results in the paper are described in their problem context.

1.1 Notation and Assumptions

Let N = {1, ..., n} denote the set of jobs to be processed. Each job consists of several parts to be processed nonpreemptively at suppliers S_1, ..., S_s. Let p_ij denote the total processing time of the parts to be processed by supplier S_i for job j, for i = 1, ..., s, j = 1, ..., n.

Other parameters of job j that occur in some problems include: a supplier-dependent weight or value w_ij, for i = 1, ..., s, a manufacturer's weight or value w_j, and a due date d_j. We assume that all parameters p_ij, w_ij, w_j and d_j are nonnegative integers. Each supplier may produce one or more parts for each job. When the last part for each job from a supplier completes processing, the supplier delivers all the parts for that job to the manufacturer in a single batch. In Section 1.2, we prove a preliminary result about the way in which the parts at a supplier are scheduled. We define the following variables in schedule σ:

C_ij(σ) = time at which the parts for job j are delivered from supplier S_i to the manufacturer;
L_ij(σ) = C_ij(σ) − d_j, the lateness of the parts for job j from supplier S_i;
U_ij(σ) = 1 if the parts for job j from supplier S_i are late, i.e., C_ij(σ) > d_j, and 0 if they are delivered to the manufacturer by its due date, i.e., C_ij(σ) ≤ d_j;
T_ij(σ) = max{C_ij(σ) − d_j, 0}, the tardiness of the parts for job j from supplier S_i;
C_j(σ) = max_{1≤i≤s} {C_ij(σ)}, the time at which the last part for job j is delivered to the manufacturer;
L_j(σ) = C_j(σ) − d_j, the lateness of job j;
U_j(σ) = 1 if job j is late, i.e., C_j(σ) > d_j, and 0 if job j is delivered to the manufacturer by its due date, i.e., C_j(σ) ≤ d_j;
T_j(σ) = max{C_j(σ) − d_j, 0}, the tardiness of job j.

When there is no ambiguity, we simplify C_ij(σ), L_ij(σ), U_ij(σ), T_ij(σ), C_j(σ), L_j(σ), U_j(σ) and T_j(σ) to C_ij, L_ij, U_ij, T_ij, C_j, L_j, U_j and T_j, respectively. The standard classification scheme for scheduling problems (Graham et al., 1979) is ψ1 | ψ2 | ψ3, where ψ1 indicates the scheduling environment, ψ2 describes the job characteristics or restrictive requirements, and ψ3 defines the objective function to be minimized. We let ψ1 = As, denoting an assembly shop with s suppliers and one manufacturer.
Under ψ2, we may have coseq, requiring that the suppliers use a common job sequence. The objective functions that we consider under ψ3 require the minimization of the following cost functions:

Σ (w_ij) C_ij = total (weighted) completion time of the parts;
max_{i,j} {L_ij} = maximum lateness of the parts;
Σ (w_ij) U_ij = (weighted) number of parts not completed by their due dates;
Σ (w_ij) T_ij = total (weighted) tardiness of the parts;
Σ (w_j) C_j = total (weighted) completion time of the jobs;
L_max = max_{1≤j≤n} {L_j}, maximum lateness of the jobs;
Σ (w_j) U_j = (weighted) number of jobs not completed by their due dates;
Σ (w_j) T_j = total (weighted) tardiness of the jobs.
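As a concrete illustration of these definitions, the following sketch (with made-up data, not from the paper) computes the job-level quantities C_j, L_j, T_j and U_j from a table of supplier delivery times:

```python
def job_metrics(C, d):
    """Given C[i][j] = delivery time of job j's parts from supplier i,
    and due dates d[j], return per-job (C_j, L_j, T_j, U_j)."""
    s, n = len(C), len(C[0])
    metrics = []
    for j in range(n):
        Cj = max(C[i][j] for i in range(s))  # time the last part arrives
        Lj = Cj - d[j]                        # lateness
        Tj = max(Lj, 0)                       # tardiness
        Uj = 1 if Cj > d[j] else 0            # late indicator
        metrics.append((Cj, Lj, Tj, Uj))
    return metrics

# Two suppliers, three jobs (hypothetical delivery times and due dates).
C = [[3, 5, 9],   # supplier S_1
     [4, 6, 8]]   # supplier S_2
d = [4, 7, 8]
print(job_metrics(C, d))  # [(4, 0, 0, 0), (6, -1, 0, 0), (9, 1, 1, 1)]
```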

1.2 General Results

Theorem 1. For all the problems we consider, there exists an optimal schedule in which all the parts produced for a single job by the same supplier are scheduled consecutively by that supplier.

Proof. By assumption, all the parts for the same job are delivered together from the supplier to the manufacturer. Consider a supplier's schedule in which the parts for job i are delivered to the manufacturer before the parts for job j. Assume that the supplier schedules some part for job j before the last part for job i. Then, interchanging these two parts in the schedule enables the parts for job i to be delivered to the manufacturer earlier, which reduces the supplier's cost, while the delivery time of the parts for job j is unchanged.

Theorem 2. For any manufacturer's problem with a regular objective function of (C_1, ..., C_n), denoted by As || max_{1≤j≤n} {f_j(C_j)} or As || Σ f_j(C_j), where f_j(·) is a nondecreasing function, there exists an optimal schedule in which the job sequences at different suppliers are identical.

Proof. Consider a schedule with two jobs i and j such that job i is scheduled before job j at supplier S_1 and job j is scheduled before job i at supplier S_2. There are two possible cases: (i) if C_1j ≤ C_2i, then we construct a new schedule by interchanging the positions of jobs i and j at supplier S_1; (ii) if C_1j > C_2i, then we construct a new schedule by interchanging the positions of jobs i and j at supplier S_2. In each case, the completion time of each job is either unchanged or decreased in the new schedule.

1.3 Overview of the Complexity Results

Table 1 provides a summary of our computational complexity results for a variety of manufacturer's, suppliers', and system scheduling problems. Since the feasibility and cost of a schedule can be checked efficiently in all the problems considered here, the recognition versions of those problems belong to the class NP. We use BNPC (respectively, UNPC) to

denote that the recognition version of a problem is binary (resp., unary) NP-complete. We also use Intr. to denote that the optimization version of a problem cannot be solved in pseudopolynomial time, unless P = NP. Related definitions can be found in Garey and Johnson (1979). Where a polynomial or pseudopolynomial time algorithm exists, we indicate its time complexity. Included in the appropriate cell is a reference to where a proof of that result can be found. The paper contains other types of results also, which are described in their problem context.

Manufacturer's Problems:
As || Σ C_j : UNPC (Thm 3)
As || Σ w_j C_j : UNPC (Thm 3)
As || L_max : O(n log n) (Thm 4)
As || Σ U_j : O(n P^s) (Thm 6), BNPC (Thm 5)
As || Σ w_j U_j : O(n P^s) (Thm 6), BNPC (Thm 5)
As || Σ T_j : UNPC (Thm 3)
As || Σ w_j T_j : UNPC (Thm 3)

Suppliers' Problems:
As | coseq | Σ C_ij : O(n log n) (Cor 1)
As | coseq | Σ w_ij C_ij : Open
As | coseq | max_{i,j} {L_ij} : O(n log n) (Thm 8)
As | coseq | Σ U_ij : Open, BNPC (Thm 9)
As | coseq | Σ w_ij U_ij : Open, BNPC (Thm 9)
As | coseq | Σ T_ij : UNPC (Thm 10)
As | coseq | Σ w_ij T_ij : UNPC (Thm 10)

System Problems:
As | coseq | α Σ C_ij + (1 − α) Σ C_j : UNPC (Thm 15)
As | coseq, Σ C_j ≤ C | Σ C_ij : UNPC (Thm 16)
As | coseq, L_max ≤ L | Σ C_ij : UNPC (Thm 17)
As | coseq | α Σ C_ij + (1 − α) L_max : Intr. (Thm 18)

Table 1: The Complexity of Supply Chain Scheduling Assembly Problems.

2 Manufacturer's Problems

In this section, we discuss the solvability of several manufacturer's problems, where the completion time of a job is defined as the time when its last part is delivered from the suppliers to the manufacturer. We first consider problem As || Σ C_j, for which we provide an intractability result.

Theorem 3. The recognition versions of problems As || Σ C_j, As || Σ w_j C_j, As || Σ T_j and As || Σ w_j T_j are unary NP-complete.

Proof. We show that the recognition version of problem As || Σ C_j is unary NP-complete, by

reduction from the following problem, which is known to be unary NP-complete.

3-Partition (Garey and Johnson, 1979): given 3m elements with integer sizes a_1, ..., a_3m, where Σ_{i=1}^{3m} a_i = mB and B/4 < a_i < B/2 for i = 1, ..., 3m, does there exist a partition H_1, ..., H_m of the index set H = {1, ..., 3m} such that |H_j| = 3 and Σ_{i∈H_j} a_i = B, for j = 1, ..., m?

As part of this definition we assume that, if there exists a solution to 3-Partition, then the elements are numbered such that a_{3i−2} + a_{3i−1} + a_{3i} = B, for i = 1, ..., m. Consider an instance of the recognition version of problem As || Σ C_j defined by:

n = 4m, s = 4,
p_1j = 4B, p_2j = 4B − a_j, p_3j = 3B + a_j, p_4j = 0, for j = 1, ..., 3m,
p_1j = 0, p_2j = B, p_3j = 2B, p_4j = 12B, for j = 3m + 1, ..., 4m, and
C = 24m²B + 12mB, where C is a threshold cost.

We prove that there exists a schedule for this instance of As || Σ C_j with cost less than or equal to C if and only if there exists a solution to 3-Partition.

(⇐) Consider the sequence (1, 2, 3, 3m + 1, ..., 3m − 2, 3m − 1, 3m, 4m). The bottleneck supplier for the first three jobs is S_1, and those jobs are completed at times 4B, 8B and 12B, respectively. Each of the four suppliers completes job 3m + 1 at time 12B. Each subsequent subset of four jobs is scheduled similarly, requiring a total processing time of 12B at each supplier. It can be seen that the total cost of the schedule is equal to C.

(⇒) Assume the existence of a schedule σ for problem As || Σ C_j for which Σ C_j ≤ C. Without loss of generality, suppose that the jobs {1, ..., 3m} are sequenced in this order at supplier S_1, and that the jobs {3m + 1, ..., 4m} are sequenced in this order at supplier S_4. Then it can be seen that C_j ≥ C_1j = 4jB for j ∈ {1, ..., 3m}, and C_j ≥ C_4j = 12(j − 3m)B for j ∈ {3m + 1, ..., 4m}. Hence Σ C_j ≥ C, and Σ C_j ≤ C if and only if C_j = 4jB for j ∈ {1, ..., 3m}, and C_j = 12(j − 3m)B for j ∈ {3m + 1, ..., 4m}. This means that, in

σ, C_2j ≤ 4jB and C_3j ≤ 4jB for j ∈ {1, ..., 3m}, and C_2j ≤ 12(j − 3m)B and C_3j ≤ 12(j − 3m)B for j ∈ {3m + 1, ..., 4m}. Now consider the following four jobs: 1, 2, 3, 3m + 1. They are all completed no later than 12B at both suppliers S_2 and S_3. Thus p_21 + p_22 + p_23 + p_2,3m+1 = 13B − (a_1 + a_2 + a_3) ≤ 12B, and p_31 + p_32 + p_33 + p_3,3m+1 = 11B + (a_1 + a_2 + a_3) ≤ 12B. This implies that a_1 + a_2 + a_3 = B. Thus the four jobs are completed exactly at 12B at each supplier. This same argument can be repeated for each group of four jobs {3j − 2, 3j − 1, 3j, 3m + j} to show that a_{3j−2} + a_{3j−1} + a_{3j} = B, for j = 2, ..., m. This implies the existence of a solution to 3-Partition. Since problems As || Σ w_j C_j, As || Σ T_j and As || Σ w_j T_j are generalizations of problem As || Σ C_j, these problems are thus all unary NP-complete.

Two heuristics for problem As || Σ w_j C_j are described and analyzed in Section 6. We next consider problem As || L_max. Our result generalizes that of Jackson (1955) for the classical single machine scheduling problem 1 || L_max.

Theorem 4. An optimal schedule for problem As || L_max is provided by an earliest due date (EDD) ordering of the jobs.

Proof. Consider a hypothetical schedule in which job k immediately precedes job j in some common sequence σ, where d_j < d_k. Let t_i denote the total processing time of the jobs before j and k at supplier S_i, for i = 1, ..., s. Then the maximum lateness among jobs j and k is given by

L_max = max{ max_{1≤i≤s} {t_i + p_ik} − d_k, max_{1≤i≤s} {t_i + p_ik + p_ij} − d_j }.

If we interchange jobs j and k to obtain a sequence σ′, then the maximum lateness among jobs j and k is given by

L′_max = max{ max_{1≤i≤s} {t_i + p_ij} − d_j, max_{1≤i≤s} {t_i + p_ij + p_ik} − d_k }.

It is immediately clear that the second term in L_max is at least as large as the first term in L′_max. The fact that the second term in L_max is larger than the second term in L′_max follows immediately from the assumption that d_j < d_k. Thus, L′_max ≤ L_max.
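Theorem 4's EDD rule is straightforward to state in code. The sketch below (illustrative only, with hypothetical data) builds the EDD sequence and evaluates L_max, assuming each supplier processes the jobs consecutively in that common order:

```python
def edd_lmax(p, d):
    """p[i][j] = processing time of job j's parts at supplier i;
    d[j] = due date. Returns (EDD order, resulting L_max)."""
    s, n = len(p), len(p[0])
    order = sorted(range(n), key=lambda j: d[j])  # earliest due date first
    t = [0] * s                  # current finish time at each supplier
    lmax = float("-inf")
    for j in order:
        for i in range(s):
            t[i] += p[i][j]
        Cj = max(t)              # job completes when its last part arrives
        lmax = max(lmax, Cj - d[j])
    return order, lmax

# Two suppliers, two jobs (hypothetical data).
print(edd_lmax([[2, 1], [1, 3]], [3, 4]))  # ([0, 1], 0)
```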

We conclude this section by proving two results for problems As || Σ U_j and As || Σ w_j U_j. We begin with a negative result.

Theorem 5. The recognition versions of problems As || Σ U_j and As || Σ w_j U_j are binary NP-complete.

Proof. We show that the recognition version of problem As || Σ U_j is binary NP-complete, by reduction from the following problem, which is known to be binary NP-complete.

Equal Cardinality Partition (Garey and Johnson, 1979): given 2m elements with integer sizes a_1, ..., a_2m, where Σ_{i=1}^{2m} a_i = 2B, does there exist a partition H_1, H_2 of the index set H = {1, ..., 2m} such that |H_1| = |H_2| = m and Σ_{i∈H_1} a_i = Σ_{i∈H_2} a_i = B?

As part of this definition we assume that, if there exists a solution to Equal Cardinality Partition, then the elements are numbered such that Σ_{i=1}^{m} a_i = Σ_{i=m+1}^{2m} a_i = B. Consider an instance of the recognition version of problem As || Σ U_j defined by:

n = 2m, s = 2,
p_1j = B + a_j, p_2j = (m + 2)B/m − a_j, d_j = (m + 1)B, for j = 1, ..., 2m, and
C = m, where C is a threshold cost.

We prove that there exists a schedule for this instance of As || Σ U_j with cost less than or equal to C if and only if there exists a solution to Equal Cardinality Partition.

(⇐) Consider the sequence (1, ..., n). From the definition of Equal Cardinality Partition, jobs 1, ..., m complete processing at both suppliers exactly at time (m + 1)B. Jobs m + 1, ..., 2m are late.

(⇒) Consider a hypothetical schedule in which a subset σ of jobs is scheduled on time in arbitrary order, and which satisfies |σ| ≥ m, thus Σ U_j ≤ C. Since the total processing time of any (m + 1) jobs at supplier S_1 is greater than all the due dates, exactly m jobs must be on time. If Σ_{j∈σ} a_j > B, then at least one job is delivered late by S_1. If Σ_{j∈σ} a_j < B, then at least one job is delivered late by S_2. Therefore, Σ_{j∈σ} a_j = B, which implies the existence of a solution to Equal Cardinality Partition.
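To make the Theorem 5 construction concrete, here is a small sanity check on a hypothetical yes-instance of Equal Cardinality Partition (m = 2, B = 5, chosen so that m divides (m + 2)B and the p_2j values are integral): sequencing the first m jobs first leaves exactly m late jobs, matching the threshold C = m.

```python
m, B = 2, 5
a = [2, 3, 1, 4]                           # a_1 + a_2 = a_3 + a_4 = B
n = 2 * m

p1 = [B + aj for aj in a]                  # supplier S_1 processing times
p2 = [(m + 2) * B // m - aj for aj in a]   # supplier S_2 processing times
d = (m + 1) * B                            # common due date

t1 = t2 = late = 0
for j in range(n):
    t1, t2 = t1 + p1[j], t2 + p2[j]
    if max(t1, t2) > d:                    # job j is late at the manufacturer
        late += 1
print(late)  # 2, i.e., exactly m late jobs
```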

We now describe a pseudopolynomial time algorithm, WU, for problems As w j U j and As U j. Algorithm WU Reindex the jobs in EDD order, i.e. d 1 d n. Value Function f k (t 1,..., t s ) = minimum total weight of late jobs from 1,..., k, where the on-time jobs require a total processing time of t i at supplier S i, for i = 1,..., s. Boundary Condition f 0 (0,..., 0) = 0. Optimal Solution Value f n ( j p 1j,..., j p sj ). Recurrence Relation{ fk 1 (t 1 p 1k,..., t s p sk ), if max f k (t 1,..., t s ) = min {t i} d k 1 i s w k + f k 1 (t 1,..., t s ). The recurrence relation compares the cost of scheduling job k either on time which results in an increase in processing time of p ik at each supplier S i, or late which incurs a cost of w k. Theorem 6 Algorithm WU finds an optimal schedule for problem As w j U j in O(nP s ) time. Proof. Theorem 4 establishes that, without loss of generality, the on-time jobs can be processed in EDD order in an optimal schedule. Therefore, the recurrence relation compares the cost of all possible state transitions, and finds an optimal schedule. There are O(n) possible stages k. The number of possible values for the state variables t 1,..., t s is O(P s ). For each state the recurrence relation requires only constant time. Therefore, the overall time complexity of Algorithm WU is O(nP s ). It follows from Theorem 5 that the pseudopolynomial time complexity of Algorithm WU is the best type of result possible, unless P = NP. 3 Suppliers Problems In this section, we consider suppliers scheduling problems with the objectives max i,j {f ij (C ij )} and f ij (C ij ), where f ij is a cost function associated with supplier i and job j. We con- 11

sider these problems with the constraint that the suppliers process jobs in a common sequence. From Theorem 2, we know that a common sequence solves the manufacturer's problem optimally. Moreover, this constraint simplifies production and reduces work in process at the manufacturer; therefore, the manufacturer will naturally wish to impose such a constraint. The best common sequence still needs to be determined. We denote this constraint by coseq.

Theorem 7 An optimal schedule for problem As|coseq|∑∑w_ijC_ij with w_ij = α_iβ_j is given by sequencing the jobs in nondecreasing order of ∑_{i=1}^{s} α_i p_ij / β_j.

Proof. Consider a hypothetical schedule in which job k immediately precedes job j in some common sequence σ, where ∑_{i=1}^{s} α_i p_ik / β_k > ∑_{i=1}^{s} α_i p_ij / β_j. Let t_i denote the total processing time of the jobs that are scheduled before j and k at supplier S_i, for i = 1, ..., s. Then the total weighted completion time of jobs k and j is given by

∑_{i=1}^{s} (w_ik C_ik + w_ij C_ij) = ∑_{i=1}^{s} t_i (w_ik + w_ij) + ∑_{i=1}^{s} p_ik (w_ik + w_ij) + ∑_{i=1}^{s} w_ij p_ij.

If we interchange jobs j and k to obtain a sequence σ′, then the total weighted completion time of jobs k and j becomes

∑_{i=1}^{s} (w_ik C′_ik + w_ij C′_ij) = ∑_{i=1}^{s} t_i (w_ik + w_ij) + ∑_{i=1}^{s} p_ij (w_ik + w_ij) + ∑_{i=1}^{s} w_ik p_ik.

Since ∑_{i=1}^{s} α_i p_ik / β_k > ∑_{i=1}^{s} α_i p_ij / β_j, we have

∑_{i=1}^{s} (w_ik C′_ik + w_ij C′_ij) − ∑_{i=1}^{s} (w_ik C_ik + w_ij C_ij) = ∑_{i=1}^{s} (w_ik p_ij − w_ij p_ik) = β_k ∑_{i=1}^{s} α_i p_ij − β_j ∑_{i=1}^{s} α_i p_ik < 0.

Thus, the total weighted completion time of sequence σ′ is lower than that of σ. Repeating this argument for all adjacent pairs of jobs that do not satisfy the stated inequality proves the theorem. □

Corollary 1 An optimal schedule for problem As|coseq|∑∑C_ij is given by sequencing the jobs in nondecreasing order of ∑_{i=1}^{s} p_ij.

Proof. The result follows immediately from Theorem 7, with α_i = 1 for i = 1, ..., s and β_j = 1 for j = 1, ..., n. □

We note that the question of whether problem As|coseq|∑∑w_ijC_ij with general w_ij values is solvable in polynomial time remains open.
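The ordering rule of Theorem 7 is easy to apply and to verify by brute force on small instances. The sketch below (illustrative data, not from the paper) sorts the jobs by ∑_i α_i p_ij / β_j and checks against complete enumeration that no common sequence does better:

```python
from itertools import permutations

# Illustrative instance with s = 2 suppliers and n = 3 jobs.
alpha = [1, 2]                      # supplier weights
beta = [3, 1, 2]                    # job weights; w_ij = alpha[i] * beta[j]
p = [[2, 1, 4],                     # p[i][j]: processing time of job j at S_i
     [1, 3, 2]]
s, n = len(p), len(beta)

def total_cost(seq):
    """Total weighted completion time sum_ij w_ij * C_ij under common sequence seq."""
    cost = 0
    for i in range(s):
        t = 0
        for j in seq:
            t += p[i][j]            # C_ij under the common sequence
            cost += alpha[i] * beta[j] * t
    return cost

# Theorem 7 rule: nondecreasing sum_i alpha_i * p_ij / beta_j.
rule = sorted(range(n), key=lambda j: sum(alpha[i] * p[i][j] for i in range(s)) / beta[j])
best = min(total_cost(seq) for seq in permutations(range(n)))
assert total_cost(rule) == best
```

With α_i = β_j = 1 the same code checks Corollary 1, i.e., shortest total processing time order.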

Theorem 8 An optimal schedule for problem As|coseq|max_{i,j}{L_ij} is provided by an EDD ordering of the jobs.

Proof. This can be shown by a proof similar to that of Theorem 4. □

We conclude our discussion of suppliers' problems with two intractability results.

Theorem 9 The recognition versions of problems As|coseq|∑∑U_ij and As|coseq|∑∑w_ijU_ij are at least binary NP-complete.

Proof. The proof is similar to that of Theorem 5, with a threshold cost C = 2m. □

The question of whether these problems are unary NP-complete remains open.

Theorem 10 The recognition version of problem As|coseq|∑∑T_ij is unary NP-complete.

Proof. We show that the recognition version of problem As|coseq|∑∑T_ij is unary NP-complete, by reduction from 3-Partition, as described in the proof of Theorem 3. Consider an instance of the recognition version of problem As|coseq|∑∑T_ij defined by: n = 3m + mq, s = 2, p_1j = 3Aa_j, for j = 1, ..., 3m, p_2j = 2AB − 3Aa_j, for j = 1, ..., 3m, d_j = 0, for j = 1, ..., 3m, p_1j = p_2j = 1, for j = 3m+1, ..., 3m+mq, d_j = (3AB + q)k, for j ∈ Q_k = {3m + 1 + q(k−1), ..., 3m + qk}, k = 1, ..., m, and C = 3m(3m+1)AB + 3m(m−1)q, where q = m³B³, A = m⁴B⁴, and C is a threshold cost.

We prove that there exists a schedule for this instance of As|coseq|∑∑T_ij with cost less than or equal to C if and only if there exists a solution to 3-Partition.

(⇐) Consider the sequence (H_1, Q_1, ..., H_m, Q_m). Let C_i(Q_k) denote the completion time of the last job in Q_k at supplier S_i. Then C_i(Q_k) = ∑_{l=1}^{k} (∑_{j∈H_l} p_ij + q), and hence C_1(Q_k) = C_2(Q_k) = (3AB + q)k, for k = 1, ..., m. Thus, all jobs from Q_1 ∪ ... ∪ Q_m are on time at both suppliers. Let H_k = {x, y, z}, where these jobs are scheduled in the sequence (x, y, z). Then, the total tardiness of these jobs is f_1(H_k) + f_2(H_k) = 6(3AB + q)(k−1) + 3(p_1x + p_2x) + 2(p_1y + p_2y) +

(p_1z + p_2z) = (18AB + 6q)(k−1) + 12AB. It follows that the total tardiness of all the jobs from H is ∑_{k=1}^{m} (f_1(H_k) + f_2(H_k)) = ∑_{k=1}^{m} [(18AB + 6q)(k−1) + 12AB] = C.

(⇒) We assume the existence of a schedule σ for the given instance with cost less than or equal to C. We prove a series of facts about σ.

Fact 1. In σ, jobs from Q_k are scheduled earlier than jobs from Q_l, for all 1 ≤ k < l ≤ m.

Fact 1 can be proved by an interchange argument.

Fact 2. In σ, all jobs from Q_k are scheduled consecutively, for k = 1, ..., m.

To prove Fact 2, suppose σ contains the subsequence (u, ρ, v), where u, v ∈ Q_k for some k, and ρ is a subset of jobs from H. Let t_i denote the starting time of job u at supplier S_i, and P_i(ρ) denote the total processing time of the jobs in ρ at supplier S_i, for i = 1, 2. If we interchange job u and subset ρ, then the total cost change is

δ(1) = −2|ρ| + ∑_{i=1}^{2} [max(t_i + P_i(ρ) + 1 − (3AB + q)k, 0) − max(t_i + 1 − (3AB + q)k, 0)].

Similarly, if we interchange job v and subset ρ, then the total cost change is

δ(2) = −∑_{i=1}^{2} [max(t_i + P_i(ρ) + 2 − (3AB + q)k, 0) − max(t_i + 2 − (3AB + q)k, 0)] + 2|ρ|.

It can be seen that min{δ(1), δ(2)} ≤ 0, which completes the proof of Fact 2.

It follows from Facts 1 and 2 that we may write σ = (H_1, Q_1, ..., H_m, Q_m, H_{m+1}), where H_l ⊆ H for l = 1, ..., m+1. Let h_k denote the cardinality of H_k in σ, for k = 1, ..., m+1.

Fact 3. In σ, ∑_{l=1}^{k} h_l ≤ 3k, for k = 1, ..., m.

Suppose there exists some k ∈ {1, ..., m} such that ∑_{l=1}^{k} h_l ≥ 3k + 1. It can be seen that C_i(H_k) > ∑_{l=1}^{k} ∑_{j∈H_l} p_ij, for i = 1, 2. We consider two cases.

Case 1: If ∑_{l=1}^{k} ∑_{j∈H_l} a_j ≥ (3k+1)B/3, then C_1(H_k) > 3A ∑_{l=1}^{k} ∑_{j∈H_l} a_j ≥ (3k+1)AB.

Case 2: If ∑_{l=1}^{k} ∑_{j∈H_l} a_j < (3k+1)B/3, then C_2(H_k) > 2AB(3k+1) − 3A ∑_{l=1}^{k} ∑_{j∈H_l} a_j > (3k+1)AB.

Since (3k+1)AB > (3AB + q)k, which is the due date of all the q jobs of Q_k, the total tardiness of these jobs in either case is at least 1 + ... + q = q(q+1)/2 > C. This contradiction completes the proof of Fact 3.
Fact 4. In σ, h_k = 3 for k = 1, ..., m, and h_{m+1} = 0.

Since p_1j + p_2j = 2AB for j = 1, ..., 3m, and from the coseq assumption, the total tardiness of the jobs from H in σ is independent of their sequence, and is given by F(H) = 3m(3m +

1)AB + 2q[∑_{k=2}^{m+1} h_k(k−1)]. From Fact 3, we have that 2[∑_{k=2}^{m+1} h_k(k−1)] ≥ 3m(m−1), where equality holds if and only if h_k = 3 for k = 1, ..., m, and h_{m+1} = 0. Thus, if Fact 4 is not true, then 2[∑_{k=2}^{m+1} h_k(k−1)] ≥ 3m(m−1) + 1, and hence F(H) ≥ 3m(3m+1)AB + [3m(m−1) + 1]q > C. This contradiction completes the proof of Fact 4.

Fact 5. In σ, ∑_{j∈H_k} a_j = B, for k = 1, ..., m.

To prove Fact 5, we first consider k = 1. If ∑_{j∈H_1} a_j ≥ B + 1, then C_1(H_1) = 3A ∑_{j∈H_1} a_j ≥ 3AB + 3A > 3AB + q. Alternatively, if ∑_{j∈H_1} a_j ≤ B − 1, then C_2(H_1) = 6AB − 3A ∑_{j∈H_1} a_j ≥ 3AB + 3A > 3AB + q. Since 3AB + q is the due date for all the jobs of Q_1, the total tardiness of σ in either case is at least 1 + ... + q = q(q+1)/2 > C. This contradiction shows that ∑_{j∈H_1} a_j = B. A similar argument for k = 2, ..., m completes the proof of Fact 5.

Facts 1 through 5 establish that subsets H_1, ..., H_m form a solution to 3-Partition. □

4 Conflicts Between the Manufacturer's and Suppliers' Problems

In this section, we study the extent to which an optimal schedule for the manufacturer's problem can be suboptimal for the suppliers' problem, and vice versa. Here we provide several results. First, we consider total completion time objectives, for which we show that optimizing the manufacturer's cost can result in a solution that provides very poor performance for the suppliers.

Theorem 11 If the manufacturer optimally solves problem As||∑C_j, then the ratio of the suppliers' total cost, as measured by the corresponding problem As|coseq|∑∑C_ij, to the optimal suppliers' cost can be more than 1.28 for small s, and arbitrarily close to s/2 for large s.

Proof. By example. Consider the following instance: p_ij = 0, i = 1, ..., s−1, and p_sj = 1, for j = 1, ..., n−1; p_in = 1, i = 1, ..., s. From Theorem 3, the recognition version of problem As||∑C_j is unary NP-complete. However, in the instance described, the completion time of each job is determined by the completion time of its component at supplier S_s.
Moreover, since p_sj = 1 for j = 1, ..., n, an optimal solution to problem As||∑C_j is given by an arbitrary sequence of the jobs. We assume without loss of generality that this sequence is (n, n−1, ..., 1). The resulting total suppliers' cost is ∑∑C_ij = ns + n(n−1)/2.

However, if the suppliers use an alternative common sequence, (1, ..., n), then the total manufacturer's cost is still optimal, but the resulting total suppliers' cost is reduced to ∑∑C*_ij = s + (n−1)(n+2)/2. Therefore, for any fixed s, the ratio of the suppliers' cost under the manufacturer's chosen sequence to that in the suppliers' optimal solution is

r(n) = [ns + n(n−1)/2] / [s + (n−1)(n+2)/2].

It can be shown that, for n ≥ 1, r(n) is a concave function of n, which achieves its maximum value when either n = ⌊1 + √(2s)⌋ or n = ⌈1 + √(2s)⌉. Thus, for any s ≥ 2, the worst-case ratio is R(s) = max{r(⌊1 + √(2s)⌋), r(⌈1 + √(2s)⌉)}. It can also be shown that: (i) when s = 2, R(s) = 9/7 > 1.28; (ii) R(s) is a strictly increasing function, and hence R(s) > 1.28 for all s ≥ 2; and (iii) R(s) approaches s/2 as s → ∞. □

We now present a symmetric result about the performance of the optimal suppliers' solution with respect to the total manufacturer's cost.

Theorem 12 If the suppliers optimally solve problem As|coseq|∑∑C_ij, then the ratio of the manufacturer's cost, as measured by the corresponding problem As||∑C_j, to the optimal manufacturer's cost can be more than 1.28 for small s, and arbitrarily close to s/2 for large s.

Proof. By example. Consider the following instance: p_ij = 1, i = 1, ..., s, j = 1, ..., n−1; p_in = 0, i = 1, ..., s−1; p_sn = s. From Corollary 1, the suppliers will minimize ∑∑C_ij by scheduling the jobs in shortest total processing time order. Since the total processing time of each job is s, we may assume without loss of generality that the suppliers choose the sequence (n, n−1, ..., 1). The resulting manufacturer's cost is ∑C_j = ns + n(n−1)/2. However, if the suppliers use an alternative common sequence, (1, ..., n), then the total suppliers' cost is still optimal, but the resulting manufacturer's cost is reduced to ∑C*_j = s + (n−1)(n+2)/2.

Therefore, for any fixed s, the ratio of the manufacturer's cost in the suppliers' solution to that in the optimal solution is r(n) = [ns + n(n−1)/2] / [s + (n−1)(n+2)/2], which is the same ratio as in the proof of Theorem 11. Therefore, we obtain the same result. □

As a second illustration of conflicts between the manufacturer's and suppliers' problems, we also provide two results for number of late jobs objectives.

Theorem 13 If the manufacturer optimally solves problem As||∑U_j, then the ratio of the suppliers' total cost, as measured by the corresponding problem As|coseq|∑∑U_ij, to the optimal suppliers' cost can approach s as n → ∞.

Proof. By example. Consider the following instance: p_ij = 1, i = 1, ..., s−1, j = 1, ..., n; p_sj = n, j = 1, ..., n; d_j = j, j = 1, ..., n. For this example, the sequence (n, 1, 2, ..., n−1) is optimal for problem As||∑U_j, and gives ∑U*_j = n − 1 and ∑∑U_ij = (n−1)s. However, the sequence (1, ..., n) is optimal for problem As|coseq|∑∑U_ij, and gives ∑∑U*_ij = n. Thus, if an optimal schedule for the manufacturer's problem As||∑U_j is used, then the total suppliers' cost ∑∑U_ij is [(n−1)s/n] times the optimal cost of the suppliers, and this ratio approaches s as n → ∞. □

Theorem 14 If the suppliers optimally solve problem As|coseq|∑∑U_ij, then the ratio of the manufacturer's cost, as measured by the corresponding problem As||∑U_j, to the optimal manufacturer's cost can equal s.

Proof. By example. Consider the following instance: n = s; p_ij = 1, i = 1, ..., s−1, j = 1, ..., s; p_s1 = s; p_sj = 1, j = 2, ..., s; d_1 = 1; d_j = s, j = 2, ..., s. For this example, the sequence (1, ..., s) is optimal for problem As|coseq|∑∑U_ij, and gives ∑∑U*_ij = s and ∑U_j = s. However, the sequence (2, 3, ..., s, 1) is optimal for problem As||∑U_j, and gives ∑U*_j = 1. Thus, if the optimal schedule (1, ..., s) for problem As|coseq|∑∑U_ij is used, then the manufacturer's cost ∑U_j can be s times optimal. □
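The instance in the proof of Theorem 13 is easy to evaluate numerically. The sketch below (with the illustrative choice s = 3, n = 4, not taken from the paper) computes both objectives for the two sequences used in the proof:

```python
# Instance from the proof of Theorem 13, with s = 3, n = 4 for illustration:
# p_ij = 1 for i < s, p_sj = n, d_j = j (jobs are 1..n in the text, 0..n-1 here).
s, n = 3, 4
p = [[1] * n for _ in range(s - 1)] + [[n] * n]
d = [j + 1 for j in range(n)]

def late_counts(seq):
    """Return (manufacturer's sum U_j, suppliers' sum U_ij) for a common sequence."""
    C = [[0] * n for _ in range(s)]
    for i in range(s):
        t = 0
        for j in seq:
            t += p[i][j]
            C[i][j] = t
    u_ij = sum(1 for i in range(s) for j in range(n) if C[i][j] > d[j])
    u_j = sum(1 for j in range(n) if max(C[i][j] for i in range(s)) > d[j])
    return u_j, u_ij

# Manufacturer-optimal sequence (n, 1, ..., n-1) versus the suppliers' EDD sequence.
assert late_counts([n - 1] + list(range(n - 1))) == (n - 1, (n - 1) * s)  # (3, 9)
assert late_counts(list(range(n))) == (n, n)                              # (4, 4)
```

The suppliers' cost ratio (n−1)s/n approaches s as n grows, matching the theorem.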

Theorems 11 through 14 establish that the performance outcomes of the suppliers collectively and of the manufacturer can be very different. These results thus demonstrate the importance of cooperation if the manufacturer and the suppliers are both to achieve good outcomes, and motivate our consideration of system problems in the following section.

5 System Problems

In this section, we consider problems with a composite objective which includes a cost function of the manufacturer and a cost function of the suppliers.

Theorem 15 The recognition version of problem As|coseq|α∑∑C_ij + (1−α)∑C_j, where α is a given parameter with 0 < α < 1, is unary NP-complete.

Proof. By reduction from 3-Partition, as described in the proof of Theorem 3. Construct an instance with n = 4m and s = 5, where the job processing times at the first four suppliers are exactly the same as in the instance in the proof of Theorem 3, and the job processing times at S_5 are identical to those at S_1. Let the threshold cost be C = α[15Bn(n+1)/2] + (1−α)(24m²B + 12mB).

We prove that there exists a schedule for this instance of problem As|coseq|α∑∑C_ij + (1−α)∑C_j with cost less than or equal to C if and only if there exists a solution to 3-Partition.

(⇐) Consider the sequence (1, 2, 3, 3m+1, ..., 3m−2, 3m−1, 3m, 4m). From the (⇐) part of the proof of Theorem 3, (1−α)∑C_j = (1−α)(24m²B + 12mB). Also, since ∑_{i=1}^{s} p_i1 = ... = ∑_{i=1}^{s} p_in = 15B, we have that α∑∑C_ij = α[15B(1 + ... + n)] = α[15Bn(n+1)/2]. Thus, the total cost is α∑∑C_ij + (1−α)∑C_j = C.

(⇒) Since ∑_{i=1}^{s} p_i1 = ... = ∑_{i=1}^{s} p_in = 15B, and from Corollary 1, we have that α∑∑C_ij = α[15Bn(n+1)/2] for any sequence. The remainder of the proof follows from the (⇒) part of the proof of Theorem 3. □

A heuristic for problem As|coseq|α∑∑C_ij + (1−α)∑C_j is described and analyzed in Section 6.

Theorem 16 The recognition version of problem As|coseq, ∑C_j ≤ C|∑∑C_ij is unary NP-complete.

Proof. By reduction from the recognition version of problem As||∑C_j, which is shown to be unary NP-complete in Theorem 3. It follows from Theorem 2 that problems As||∑C_j and As|coseq|∑C_j are equivalent. Given an arbitrary instance of problem As|coseq|∑C_j with processing times p_ij and threshold cost C, we construct an instance of problem As|coseq, ∑C_j ≤ C|∑∑C_ij with the same processing times p_ij, the constraint threshold C, and a threshold cost of n∑_i∑_j p_ij. We prove that there exists a schedule for this instance of problem As|coseq, ∑C_j ≤ C|∑∑C_ij with ∑∑C_ij ≤ n∑_i∑_j p_ij if and only if there exists a schedule for the instance of problem As|coseq|∑C_j with ∑C_j ≤ C.

(⇐) Let σ denote a schedule for problem As|coseq|∑C_j that has ∑C_j ≤ C. From the construction, σ also satisfies ∑C_j ≤ C in problem As|coseq, ∑C_j ≤ C|∑∑C_ij. Moreover, ∑∑C_ij ≤ n∑_i∑_j p_ij in any schedule without inserted idle time.

(⇒) From the construction, any schedule that is feasible for problem As|coseq, ∑C_j ≤ C|∑∑C_ij also satisfies ∑C_j ≤ C in problem As|coseq|∑C_j. □

Theorem 17 The recognition version of problem As|coseq, L_max ≤ L|∑∑C_ij is unary NP-complete.

Proof. By reduction from 3-Partition, as described in the proof of Theorem 3. Note that this problem is equivalent to As|coseq, d̄_j|∑∑C_ij, where d̄_j is the deadline of job j, by which job j must be completed at all the suppliers. Consider an instance of the recognition version of problem As|coseq, d̄_j|∑∑C_ij defined by: n = 4m, s = 2, p_1j = m²B + 2a_j, for j = 1, ..., 3m, p_2j = m²B + B − a_j, for j = 1, ..., 3m, d̄_j = ∞, for j = 1, ..., 3m, p_1j = p_2j = Q, for j = 3m+1, ..., 4m, d̄_j = (j − 3m)(A + Q), for j = 3m+1, ..., 4m, and C = (4m² − 2m + 1)Q, where A = 3m²B + 2B, Q = 4m(m+1)A, and C is a threshold cost.

We prove that there exists a schedule for this instance of As|coseq, d̄_j|∑∑C_ij with cost less than or equal to C if and only if there exists a solution to 3-Partition.

( ) Consider the sequence (H 1, 3m + 1,..., H m, 4m). Now, C 1 (H k ) = C 2 (H k ) = (k 1)Q + k(3m 2 B + 2B) = (k 1)Q + ka, for k = 1,..., m 2 m C ij 6 [(k 1)Q + ka] = 3Qm(m 1) + 3Am(m + 1). (1) i=1 j H k=1 Also, C 1,3m+k = C 2,3m+k = kq + ka = d 3m+k, for k = 1,..., m. Thus, 2 4m m C ij = 2 (kq + ka) = Qm(m + 1) + Am(m + 1) i=1 j=3m+1 k=1 2 4m C ij 3Qm(m 1) + 3Am(m + 1) + Qm(m + 1) + Am(m + 1) = C, from (1). i=1 j=1 ( ) We assume the existence of a schedule σ for the given instance with cost less than or equal to C. Without loss of generality, let σ = (H 1, 3m + 1,..., H m, 4m, H m+1 ), where H k is a (possibly empty) subset of jobs from H. Let h k denote the cardinality of H k. We prove a series of facts about σ. Fact 1. In σ, k i=1 h i 3k, for k = 1,..., m. To prove Fact 1, assume that k i=1 h i > 3k for some k {1,..., m}. Then C i,3m+k > ka + kq = d 3m+k, for i = 1, 2. This contradiction proves Fact 1. Fact 2. In σ, h k = 3 for k = 1,..., m, and h m+1 = 0. To prove Fact 2, we first note that since p 1j = p 2j = Q for j = 3m + 1,..., 4m, we have 2 4m m C ij 2 kq = Qm(m + 1). (2) i=1 j=3m+1 k=1 Also, C 1j, C 2j (k 1)Q, for j H k, k = 1,..., m + 1. Therefore, 2 i=1 j H m+1 C ij 2 k=1 (k 1)Qh k. (3) From Fact 1, it is easy to show that m+1 k=1 (k 1)h k 3m(m 1)/2, where equality is achieved if and only if h k = 3 for k = 1,..., m 1, and h m+1 = 0. Thus, if σ does not satisfy Fact 2, then m+1 k=1 (k 1)h k 3m(m 1)/2 + 1, and therefore 2 C ij 3m(m 1)Q + 2Q, from (3) i=1 j H 2 4m C ij 3m(m 1)Q + 2Q + Qm(m + 1) = 4m 2 Q 2mQ + 2Q > C, from (2). i=1 j=1 20

This contradiction completes the proof of Fact 2.

Fact 3. In σ, ∑_{j∈H_k} a_j = B, for k = 1, ..., m.

To prove Fact 3, we first consider k = 1. If ∑_{j∈H_1} a_j > B, then C_{1,3m+1} > d̄_{3m+1}. Alternatively, if ∑_{j∈H_1} a_j < B, then C_{2,3m+1} > d̄_{3m+1}. In either case, the contradiction shows that ∑_{j∈H_1} a_j = B. A similar argument for k = 2, ..., m completes the proof of Fact 3.

Facts 1 through 3 establish that subsets H_1, ..., H_m form a solution to 3-Partition. □

Theorem 18 There exists no pseudopolynomial time algorithm for problem As|coseq|α∑∑C_ij + (1−α)L_max, where α is a given parameter with 0 < α < 1, unless P = NP.

Proof. Assume that we are given an arbitrary instance of problem As|coseq, d̄_j|∑∑C_ij. We conduct a binary search on the value of α to find α* = max{α | L_max ≤ 0 in an optimal schedule for problem As|coseq|α∑∑C_ij + (1−α)L_max}. Note that L_max is a nondecreasing function of α. At each test value of α, we apply a hypothetical pseudopolynomial time algorithm for problem As|coseq|α∑∑C_ij + (1−α)L_max. Also, ∑∑C_ij ≤ n∑∑p_ij and L_max ≤ max_{1≤i≤s}{∑_j p_ij} ≤ ∑∑p_ij for any schedule without inserted idle time. Therefore, the number of binary search iterations required to find α* is no more than O(log[n(∑∑p_ij)²]), a polynomial number. When α* is found, the optimal schedule for problem As|coseq|α*∑∑C_ij + (1−α*)L_max provides an optimal schedule for problem As|coseq, d̄_j|∑∑C_ij. Moreover, since this procedure requires only a polynomial number of calls on a pseudopolynomial time algorithm, the overall procedure runs in pseudopolynomial time. The proof of Theorem 17 then provides a contradiction to the existence of the pseudopolynomial time algorithm, unless P = NP. □

The use of an optimization version of a problem to prove intractability, as a proxy for the more standard use of a recognition version, is illustrated by Van Hoesel and Wagelmans (1999).

6 Heuristics

In this section, we first propose two polynomial time heuristics for solving problems As||∑w_jC_j and As|coseq|α∑∑C_ij + (1−α)∑C_j, respectively.
The worst-case performance of these heuristics is analyzed. We then describe a third heuristic, also for problem As||∑w_jC_j, and

prove, under mild assumptions about the job processing times and weights, that the solutions which it delivers are asymptotically optimal in the number of jobs.

The first heuristic is based on solving the linear programming relaxation of an integer programming formulation of problem As||∑w_jC_j. The approach is similar to that of the heuristics proposed by Schulz (1996) and Hall et al. (1997) for problems 1|prec|∑w_jC_j and 1|r_j|∑w_jC_j. To formulate problem As||∑w_jC_j as an integer program, we first observe that a feasible schedule at each supplier S_i must satisfy the constraint that C_ij ≥ C_ik + p_ij, or C_ik ≥ C_ij + p_ik, for all j and k, j ≠ k. Queyranne (1993) shows that the convex hull of the feasible completion times C_ij that satisfy this disjunctive constraint is completely characterized by the following linear inequalities:

∑_{j∈J} p_ij C_ij ≥ (1/2)[(∑_{j∈J} p_ij)² + ∑_{j∈J} p²_ij], for all J ⊆ N.

Thus, problem As||∑w_jC_j can be formulated as the following integer program, which uses the job completion times C_ij, C_j as variables:

min ∑_{j∈N} w_j C_j   (4)

subject to

∑_{j∈J} p_ij C_ij ≥ (1/2)[(∑_{j∈J} p_ij)² + ∑_{j∈J} p²_ij],   J ⊆ N, i ∈ {1, ..., s}   (5)

C_ij ≥ p_ij,   j ∈ N, i ∈ {1, ..., s}   (6)

C_j ≥ C_ij,   j ∈ N, i ∈ {1, ..., s}   (7)

C_ij, C_j integer,   j ∈ N, i ∈ {1, ..., s}.   (8)

If we remove the last set of constraints (8), then we obtain the linear programming (L.P.) relaxation of the formulation. Clearly, the optimal objective value of the L.P. relaxation is a lower bound on that of problem As||∑w_jC_j. By applying a variant of the ellipsoid method, Grötschel et al. (1993) show that if the separation problem for a convex set is solvable in polynomial time, then linear optimization over this set is also solvable in polynomial time. Queyranne (1993) provides a polynomial time algorithm for the separation problem associated with constraint set (5), for i = 1, ..., s. Thus, if we apply the algorithm of Queyranne (1993) whenever a separation problem needs to be solved in the ellipsoid method

described by Grötschel et al. (1993), the ellipsoid method can solve our L.P. relaxation in polynomial time. This motivates the following heuristic.

Heuristic WC

Step 1. Solve the linear program formed by (4)–(7) using the ellipsoid method. Denote the optimal solution of this linear program by the vectors (C^LP_ij, C^LP_j).

Step 2. Reindex the jobs such that C^LP_1 ≤ ... ≤ C^LP_n. Schedule the jobs at each supplier S_i in the order (1, ..., n) without idle time.

Theorem 19 For problem As||∑w_jC_j, let the value of the solution delivered by Heuristic WC be F_H, and the optimal objective value be F*. Then, F_H ≤ 2F*.

Proof. Let C^H_ij and C^H_j denote the completion times of jobs in the schedule obtained by the heuristic. We prove the theorem by showing that ∑w_jC^H_j ≤ 2∑w_jC^LP_j. For any fixed i ∈ {1, ..., s} and j ∈ N, substituting J = {1, ..., j} into inequality (5), we have

∑_{k=1}^{j} p_ik C^LP_ik ≥ (1/2)[(∑_{k=1}^{j} p_ik)² + ∑_{k=1}^{j} p²_ik] ≥ (1/2)(C^H_ij)².

Since C^LP_ik ≤ C^LP_k ≤ C^LP_j for any k < j and i ∈ {1, ..., s}, we have that

(1/2)(C^H_ij)² ≤ C^LP_j ∑_{k=1}^{j} p_ik.

Since C^H_ij = ∑_{k=1}^{j} p_ik, this implies that C^LP_j ≥ (1/2)C^H_ij, for all j ∈ N and i ∈ {1, ..., s}. Thus, using the fact that C^H_j = max_{i=1,...,s}{C^H_ij}, we have that C^LP_j ≥ (1/2)C^H_j for all j ∈ N. This implies that F_H = ∑w_jC^H_j ≤ 2∑w_jC^LP_j. Since F* ≥ ∑w_jC^LP_j, it follows that F_H ≤ 2F*. □

We now describe a second heuristic, Alpha, for the system problem As|coseq|α∑∑C_ij + (1−α)∑C_j.

Heuristic Alpha

Step 1. Reindex the jobs such that ∑_{i=1}^{s} p_i1 ≤ ... ≤ ∑_{i=1}^{s} p_in. Schedule the jobs at each supplier S_i in the order (1, ..., n) without idle time.

Note that for problem As|coseq|α∑∑C_ij + (1−α)∑C_j with α = 1, this heuristic is optimal, from Corollary 1.

Theorem 20 For problem As|coseq|α∑∑C_ij + (1−α)∑C_j, let the value of the solution delivered by Heuristic Alpha be F_H, and the optimal objective value be F*. Then, F_H ≤ [s/(αs − α + 1)]F*.

Proof. Let C^H_ij and C^H_j denote the job completion times in the schedule obtained by the heuristic, and C*_ij and C*_j the job completion times in an optimal schedule. Since C*_j ≥ (1/s)∑_{i=1}^{s} C*_ij, we have

F* = α∑∑C*_ij + (1−α)∑C*_j ≥ [α + (1−α)/s]∑∑C*_ij ≥ [α + (1−α)/s]∑∑C^H_ij.   (9)

Also, since C^H_j ≤ ∑_{i=1}^{s} C^H_ij, we have

F_H = α∑∑C^H_ij + (1−α)∑C^H_j ≤ ∑∑C^H_ij.   (10)

From (9) and (10), we have

F_H ≤ {1/[α + (1−α)/s]}F* = [s/(αs − α + 1)]F*,

which establishes the theorem. □

We now propose a third heuristic, BATCH, for problem As||∑w_jC_j. We demonstrate that, under mild assumptions about the job processing times and weights, BATCH provides solutions that are asymptotically optimal in the number of jobs. The assumptions we make about the generation of data are as follows.

A1. The processing times p_ij, for i = 1, ..., s, and j = 1, ..., n, take integer values only and follow an independent and identical discrete distribution Φ(·) over a finite interval [L_p, U_p], where L_p and U_p are nonnegative integers.

A2. The weights w_j, for j = 1, ..., n, take integer values only and follow an independent and identical discrete distribution Γ(·) over a finite interval [L_w, U_w], where L_w and U_w are nonnegative integers.

The intuition behind the heuristic is to schedule most of the jobs in nonincreasing order of the ratio w_j/(∑_{i=1}^{s} p_ij), and to use special sequences within jobs having the same ratio. The heuristic first divides the jobs into different subsets based on their processing times and weights. We refer to the vector of data (w_j, p_1j, ..., p_sj) associated with a job j as the parameter vector of this job. We divide the jobs in N into different classes such that all the jobs in a class have the same parameter vector, element-wise. Thus, all the jobs in a class are identical. Let K denote the number of different classes, and N_1, ..., N_K denote these classes of jobs, where N = ∪_{k=1}^{K} N_k. Note that K is finite even when n goes to infinity. The following O(n²) time procedure partitions N into classes N_1, ..., N_K.
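The partitioning procedure itself lies beyond this excerpt; a straightforward pairwise-comparison version consistent with the description above might look as follows (the function name and data layout are illustrative assumptions, not from the paper):

```python
def partition_into_classes(params):
    """Group jobs with identical parameter vectors (w_j, p_1j, ..., p_sj).

    params: list of parameter vectors, one tuple per job.
    Returns a list of classes, each a list of job indices, so that all
    jobs in a class are identical. Pairwise comparison gives O(n^2) time.
    """
    classes = []                         # classes[k] holds the indices of N_{k+1}
    for j, vec in enumerate(params):
        for cls in classes:
            if params[cls[0]] == vec:    # same parameter vector, element-wise
                cls.append(j)
                break
        else:
            classes.append([j])          # job j starts a new class
    return classes

# Example: jobs 0 and 2 share the parameter vector (2, 1, 1).
assert partition_into_classes([(2, 1, 1), (3, 2, 1), (2, 1, 1)]) == [[0, 2], [1]]
```

Hashing the parameter vectors would reduce this to O(n) expected time, but the pairwise loop matches the O(n²) bound stated in the text.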