ONLINE SUPPLEMENT FOR
Non-Cooperative Games for Subcontracting Operations
by George L. Vairaktarakis
Weatherhead School of Management, Department of Operations, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106-735
email: gxv5@case.edu

Proofs for all the results presented in the body of the article are presented below.

Proof of Lemma 1: Consider a Nash schedule $S$ where $x_i^O > P_i/2$ for some $i \in M$. Then, the workload of player $i$ on $M_i$ is $P_i - x_i^O$ and his makespan $C_i$ is attained on $F$. Hence, $C_i \ge x_i^O$. Let $y_i$ be the workload of player $i$ processed on $F$ during the time interval $[P_i - x_i^O, C_i]$. By definition, $0 < y_i \le x_i^O$. Also, $C_i \ge P_i - x_i^O + y_i$ because the $y_i$ units start on $F$ after time $P_i - x_i^O$. Then, remove the $y_i/2$ units of workload from $F$ and schedule them on $M_i$; let $S'$ be the resulting schedule and $C_i'$ the new makespan of player $i$. Clearly, $C_i' \le C_i - y_i/2 < C_i$, and hence the makespan of player $i$ in $S'$ is better than that in $S$; this is a contradiction to $S$ being a Nash schedule.

Proof of Theorem 1: For proof by contradiction, let $S$ be a Nash schedule such that $P_i \le P_{i+1}$ but $x_i^O > x_{i+1}^O$ for some $1 \le i < |M|$. We assume that $i$ is the smallest index for which this property holds; we refer to this as the minimality assumption for index $i$. Then, consider the difference $\delta = (x_i^O - x_{i+1}^O)/2$. In $S$, player $i+1$ is processed before $i$ on $F$ because $x_{i+1}^O < x_i^O$. Let $t$, $t'$ be the completion times for player $i+1$ on machines $M_{i+1}$, $F$ respectively. If $t' < t$, player $i+1$ could increase his workload on $F$ by $\delta' = \min\{(t-t')/2, \delta\}$. This would reduce his makespan by $\delta'$ and wouldn't change his processing order on $F$ because $x_{i+1}^O + \delta' \le x_{i+1}^O + \delta < x_i^O$, while obviously $x_{i-1}^O \le x_{i+1}^O$ (due to the minimality assumption) before the reallocation and hence $x_{i-1}^O \le x_{i+1}^O + \delta'$ afterwards. On the other hand, if $t' > t$, player $i+1$ could decrease his workload on $F$ by $\delta'' = \min\{(t'-t)/2, \delta\}$ and improve his makespan by $\delta''$, and possibly have his workload scheduled earlier on $F$ because $x_{i+1}^O - \delta'' < x_{i+1}^O$.
In either case player $i+1$ could improve his objective, contradicting that $S$ is a Nash schedule. Therefore, it must hold that $t = t'$ and the Gantt diagram for $M_i$, $M_{i+1}$, $F$ has the configuration depicted in Figure 3. Note that $P_i \le P_{i+1}$ and $x_{i+1}^O < x_i^O$ imply $P_i - x_i^O < P_{i+1} - x_{i+1}^O$. But then, player $i$ can improve his makespan by reducing his workload $x_i^O$ on $F$; contradiction to the fact that $S$ is Nash. This completes the first part of the theorem. An argument similar to proving $t = t'$ and Lemma 1 yields that player $i$ attains his makespan on $M_i$, $i \in M$. Equivalently, $\sum_{k=1}^{i} x_k^O \le P_i - x_i^O$.

Proof of Theorem 2: Given Nash strategies $x_1, x_2, \ldots, x_{|M|}$ for the players, the makespan $C_i$ of player $i$ at equilibrium is $C_i = \max\{\sum_{k=1}^{i} x_k, P_i - x_i\} = P_i - x_i$, $i \in M$, due to Theorem 1. Hence,

Figure 3: Schedule configuration for players $[i]$, $[i+1]$.

$x_1 + \ldots + x_{i-1} + 2x_i \le P_i$ and $x_1 \le \ldots \le x_i$ due to IRO. Then, $(i+1)x_1 \le P_i$, or $x_1 \le \frac{P_i}{i+1}$ for $i \ge 1$. From
$$x_1 + x_2 + \ldots + x_{i-1} + x_i \le P_i - x_i \qquad (5)$$
and $i = 1$ we see that the larger $x_1$, the smaller the makespan $C_1 = P_1 - x_1$ for player 1. Therefore, player 1 will select strategy
$$x_1^O = \min_{i \ge 1} \frac{P_i}{i+1}. \qquad (6)$$
For $x_1 = x_1^O$ inequality (5) yields $x_1^O + x_2 + \ldots + x_{i-1} + 2x_i \le P_i$, or $x_2 + \ldots + x_{i-1} + 2x_i \le P_i - x_1^O$, or $i x_2 \le P_i - x_1^O$ for $i \ge 2$ (due to IRO), or $x_2 \le \frac{P_i - x_1^O}{i}$. From (5) and $i = 2$ we observe that the larger $x_2$, the smaller the makespan $C_2 = P_2 - x_2$ of player 2. Therefore,
$$x_2^O = \min_{i \ge 2} \frac{P_i - x_1^O}{i}. \qquad (7)$$
Iterative arguments and expressions (6), (7) yield the result.

Proof of Lemma 2: We first consider property (a). Let $S$ be a Nash schedule. We first prove $x_i^P \le P_i/2$ by revising $S$ (if necessary) so that the property holds for every player. Suppose that there exists a player $i$ with $x_i^P > P_i/2$ who attains his makespan (say $C_i$) on $F$ while the completion time on $M_i$ is $t_i = P_i - x_i^P < P_i/2 < x_i^P \le C_i$. Then, we can assign on $M_i$ the workload processed on $F$ during the interval $[t_i, C_i]$. This reallocation does not increase the makespan of player $i$, and the revised subcontracted workload, say $x_i'$, is no more than $t_i = P_i - x_i^P < P_i/2$, i.e., $x_i' \le P_i/2$ in the revised schedule. Hence, there exists a Nash schedule such that $x_i^P \le P_i/2$ for all $i \in M$; let $S$ be such a schedule. If $P_i - p_{\max}^i \ge P_i/2$ (i.e., $p_{\max}^i \le P_i/2$) then $\min\{P_i - p_{\max}^i, P_i/2\} = P_i/2$ and (a) holds trivially. Hence, it suffices to consider the case where $p_{\max}^i > P_i/2$ for some player $i$. Let job $j \in N_i$ attain $p_{ij} = p_{\max}^i$. If $j$ is processed exclusively on $M_i$, we have $x_i^P \le P_i - p_{ij} < P_i/2$ and (a) holds. If $p_{ij}$ is not processed on $M_i$ exclusively, then the makespan of player $i$ is $C_i \ge p_{ij} > P_i/2$ since no overlapping is allowed in $S$. Let $y_1$, $y_2$ be the workload portions of $p_{ij}$ processed on $M_i$, $F$ respectively. Then, exchange the $y_2$ periods of time that $F$ is busy with $p_{ij}$ with $y_2$ units of workload of player $i$ processed on $M_i$.
This exchange preserves the property that jobs of player $i$ are not processed simultaneously on $M_i$ and $F$. Also, the makespan

of $i$ does not increase because the $y_2$ time units of $p_{ij}$ are processed at the same time, just on a different machine. Also, the total workload of $i$ processed on $F$ does not increase, and (according to IRP) $i$ is not scheduled later on $F$. However, after the exchange the job $j$ is processed entirely on $M_i$ and hence $x_i^P \le P_i - p_{ij} = P_i - p_{\max}^i = \min\{P_i - p_{\max}^i, P_i/2\}$ because $p_{\max}^i > P_i/2$. This concludes property (a).

We now consider property (b). For contradiction, suppose $S$ is a Nash schedule that satisfies property (a) and there exists a player $[i]$ with
$$x_{[1]}^P + \ldots + x_{[i]}^P > P_{[i]} - x_{[i]}^P. \qquad (8)$$
Then, the makespan of player $[i]$ is $C_{[i]} = x_{[1]}^P + \ldots + x_{[i]}^P$. Define $\Delta = \min\{\tfrac{1}{2}(C_{[i]} - P_{[i]} + x_{[i]}^P),\, x_{[i]}^P\} > 0$. Replace $x_{[i]}^P$ by $x_{[i]}' = x_{[i]}^P - \Delta$. Following the reallocation of his workload, the makespan of player $[i]$ becomes $C_{[i]}' \le C_{[i]} - \Delta < C_{[i]}$ because $\Delta > 0$. If $\Delta = x_{[i]}^P$, then $x_{[i]}' = 0$ and no overlapping is possible. Otherwise, one can start processing jobs in $N_{[i]}$ on $F$, one at a time, starting at time $C_{[i]}' - x_{[i]}'$ and continuing until time $C_{[i]}'$, possibly preempting a single job, say $j \in N_{[i]}$, continuing with job $j$ on $M_{[i]}$ at time 0, and continuing with the rest of the jobs in $N_{[i]}$ until time $C_{[i]}'$, preventing any overlapping due to the fact that $x_{[i]}'$ satisfies property (a). Therefore, $S$ can be revised to satisfy property (b) for player $[i]$ or any other player violating property (b). This completes the proof of the lemma.

Proof of Theorem 3: Let $x_1^P, x_2^P, \ldots, x_{|M|}^P$ be the player strategies in a Nash schedule $S$ chosen so that they satisfy Lemma 2. We prove the theorem by establishing that:
a) If $P_i \ge P_k$ and $P_i - p_{\max}^i \ge P_k - p_{\max}^k$, then $x_i^P \ge x_k^P$ and player $k$'s workload precedes $i$'s on $F$, and
b) If $P_i \ge P_k$ and $P_i - p_{\max}^i < P_k - p_{\max}^k$, then player $k$'s workload precedes $i$'s on $F$.
The properties imply the result. We prove claim a) by contradiction. Suppose $S$ is a Nash schedule such that the hypothesis of a) holds but $x_i^P < x_k^P$.
Then, according to IRP player $i$ precedes $k$ on $F$ as in Figure 4a (where the makespans of $i$, $k$ are attained on $M_i$, $M_k$ respectively, due to Lemma 2(b)). Since $P_i \ge P_k$ and $P_i - p_{\max}^i \ge P_k - p_{\max}^k$ we have $\min\{P_i/2, P_i - p_{\max}^i\} \ge \min\{P_k/2, P_k - p_{\max}^k\}$ and hence $x_i^P < x_k^P \le \min\{P_k/2, P_k - p_{\max}^k\} \le \min\{P_i/2, P_i - p_{\max}^i\}$. Therefore player $i$ could (if it were beneficial) subcontract at least $x_k^P$ units of workload to $F$. Let $\Delta = \tfrac{1}{2}(x_k^P - x_i^P) > 0$. Suppose that $i$ reallocates to $F$ $x_i' = x_i^P + \Delta$ units of workload instead of $x_i^P$. As a result player $i$ will reduce his makespan by $\Delta$, even though $F$ may now reschedule player $i$ later, still before player $k$, as in Figure 4b. In all cases $i$'s makespan decreases by $\Delta$, contradicting that $S$ is a Nash schedule. This proves claim a). To prove claim b) we distinguish 3 cases based on the disposable workload of players $i$ and $k$ as follows:
Case i) $P_i - p_{\max}^i \ge P_i/2$ and $P_k - p_{\max}^k \ge P_k/2$. In this case we can show as in claim a) that $x_i^P \ge x_k^P$ and according to IRP player $k$ precedes $i$ on $F$. Note that in this case $\min\{P_i/2, P_i - p_{\max}^i\} = P_i/2$ and $\min\{P_k/2, P_k - p_{\max}^k\} = P_k/2$.
Case ii) $P_i - p_{\max}^i \ge P_i/2$ and $P_k - p_{\max}^k \le P_k/2$. This case is not possible because $P_i/2 \le P_i - p_{\max}^i < P_k - p_{\max}^k \le P_k/2$

Figure 4: Schedule configurations for players $i$, $k$.

implies $P_i < P_k$; this is a contradiction to the assumption $P_i \ge P_k$.
Case iii) $P_i - p_{\max}^i < P_i/2$. For proof by contradiction, suppose that player $i$ precedes $k$ on $F$ as in Figure 4a. We must assume that $x_i^P < x_k^P$, since otherwise we could schedule the $x_i^P$ units of player $i$ immediately after $x_k^P$ without affecting the makespan of either $i$ or $k$, while ordering the two players in nondecreasing order of their total processing. If $x_i^P = P_i - p_{\max}^i$ then player $i$ cannot benefit by subcontracting more workload to $F$ (due to the assumption that $x_i^P$ satisfies Lemma 2) and (according to IRP) can be rescheduled immediately after player $k$ without affecting anyone's makespan. If $x_i^P < P_i - p_{\max}^i$ then consider increasing $x_i^P$ to $x_i' = x_i^P + \Delta$ where $\Delta = \min\{P_i - p_{\max}^i - x_i^P,\ \tfrac{x_k^P - x_i^P}{2}\} > 0$. Then, $i$ will still precede player $k$ on $F$ because
$$x_i' = x_i^P + \Delta \le x_i^P + \frac{x_k^P - x_i^P}{2} = \frac{x_k^P + x_i^P}{2} < x_k^P,$$
and hence the makespan of $i$ would improve by $\Delta$, contradicting that $S$ is a Nash schedule. In all subcases, claim b) holds. This completes the proof of the theorem.

Proof of Theorem 4: Let $x_i^P$, $i \in M$, be the player strategies in a pure Nash equilibrium that satisfy Lemma 2 and let $S$ be the associated Nash schedule. We prove the result by contradiction. Suppose that there exist players $i$, $k$ in $S$ whose workload is processed consecutively on $F$ and are such that $\min\{P_i/2, P_i - p_{\max}^i\} \le \min\{P_k/2, P_k - p_{\max}^k\}$ but $x_i^P > x_k^P$. Let $\Delta = x_i^P - x_k^P$. Consider increasing $x_k^P$ to $x_k' = x_k^P + \Delta$ or decreasing $x_i^P$ to $x_i' = x_i^P - \Delta$. This is doable because $x_k^P < x_i^P \le \min\{P_i/2, P_i - p_{\max}^i\} \le \min\{P_k/2, P_k - p_{\max}^k\}$.

Both reallocations can be made without incurring overlapping, as in Lemma 2. Changing either $x_i^P$ or $x_k^P$ will not affect the ordering of $i$, $k$ on $F$. If the makespan of either $i$ or $k$ decreases, then $S$ cannot be a Nash schedule. The configuration in Figure 5 is the only configuration where $x_i'$ and $x_k'$ yield worse schedules for both players.

Figure 5: The equilibrium schedule $S$.

This configuration, however, does not correspond to a Nash schedule because player $k$ can reduce his workload on $F$ by $\delta = \tfrac{1}{2}\min\{C_k - P_k + x_k^P,\, x_k^P\} > 0$ and improve his makespan by $\delta$. And since $x_k^P - \delta < x_k^P \le \min\{P_k/2, P_k - p_{\max}^k\}$, jobs can be rescheduled on $M_k$ so as to avoid overlapping; contradicting the fact that $S$ was a Nash schedule before the reallocation. This completes the proof of the theorem.

Proof of Lemma 3: Theorems 1 and 3 indicate that the nondecreasing order of the $P_i$'s is an equilibrium order when overlapping or preemption is allowed. This means that the quasi-SPT order stipulated by IRP does not affect the ordering of a player on $F$ when overlaps are not allowed. Consider equilibrium schedules $S^O$, $S^P$ with strategies $\{x_i^O : i \in M\}$, $\{x_i^P : i \in M\}$ when overlapping or just preemption is allowed respectively, where $1, \ldots, |M|$ is an SPT order of the $P_i$'s. If $x_k^P \ge x_k^O$ for every $k \in M$ the result holds trivially. Otherwise, let $k$ be the first index such that $x_k^P < x_k^O$. If there exists $i < k$ with $x_i^P \ge x_k^P$ then IRP implies that $x_k^P = P_k - p_{\max}^k$ and the result holds. Therefore we assume that $x_i^P < x_k^P < x_k^O$ for all $i < k$. Still, if $x_k^P = P_k - p_{\max}^k$ the result holds. The last observations and the choice of index $k$ imply
$$x_i^O \le x_i^P < x_k^P < P_k - p_{\max}^k \quad \text{for every } i < k. \qquad (9)$$
We now consider the following subcases.
Case i) $\sum_{i=1}^{k} x_i^P < \sum_{i=1}^{k} x_i^O$. Then, consider revising the strategy of player $k$ to $\tilde{x}_k^P = \min\{x_k^P + \sum_{i=1}^{k}(x_i^O - x_i^P),\ P_k - p_{\max}^k\}$. Evidently, $x_k^P < \tilde{x}_k^P \le x_k^O$ because $\sum_{i=1}^{k}(x_i^O - x_i^P) > 0$ and
$$\tilde{x}_k^P \le x_k^P + \sum_{i=1}^{k}(x_i^O - x_i^P) = x_k^O + \sum_{i=1}^{k-1}(x_i^O - x_i^P) \le x_k^O$$

because $x_i^O \le x_i^P$ for all $i < k$. Moreover, expressions (9) and IRP imply
$$x_1^P \le x_2^P \le \ldots \le x_{k-1}^P < x_k^P < \tilde{x}_k^P \le x_k^O,$$
which suggests that the strategies $\tilde{x}_i^P = \min\{x_i^O, P_i - p_{\max}^i\}$ for $i > k$ are feasible for players $k+1, \ldots, |M|$. Since (according to IRO) $x_i^O \ge x_k^O$ for $i > k$, IRP will preserve the $k$-th position for player $k$ when using strategy $\tilde{x}_k^P$. Consequently, the makespan of player $k$ is attained on $M_k$ because his revised makespan $\tilde{C}_k = P_k - \tilde{x}_k^P$ is such that
$$\sum_{i=1}^{k-1} x_i^P + \tilde{x}_k^P \le \sum_{i=1}^{k} x_i^P + \sum_{i=1}^{k}(x_i^O - x_i^P) = \sum_{i=1}^{k} x_i^O \le P_k - x_k^O \ \text{(due to Theorem 1)} \le P_k - \tilde{x}_k^P.$$
And since $P_k - \tilde{x}_k^P < P_k - x_k^P$, the revised strategy $\tilde{x}_k^P$ is more beneficial to player $k$ than strategy $x_k^P$, which contradicts the assumption that $S^P$ is a Nash schedule.
Case ii) $\sum_{i=1}^{k} x_i^P \ge \sum_{i=1}^{k} x_i^O$. Then, consider strategies $\{x_i^O : i \in M\}$ in $S^O$ and consider revising the strategies of players $1, \ldots, k$ to
$$\tilde{x}_i^O = \min\{x_i^P,\ x_i^O + \tfrac{x_k^O - x_k^P}{k-1}\} \ \text{ for } i < k, \quad \text{and} \quad \tilde{x}_k^O = x_k^P < x_k^O.$$
Clearly, $\tilde{x}_i^O > x_i^O$ for $i < k$ because by the choice of $k$ we have $x_i^P > x_i^O$ for $i < k$. Strategies $\tilde{x}_i^O$ for $i < k$ improve the makespan of players $i < k$ because
$$\sum_{j=1}^{i} \tilde{x}_j^O \le \sum_{j=1}^{i} x_j^P \le P_i - x_i^P < P_i - x_i^O \ \text{ for } i < k.$$
Also, by definition of $\tilde{x}_i^O$ for $i \le k$ and the fact that in $S^P$ we have $x_1^P \le x_2^P \le \ldots < x_k^P$, we get that
$$\tilde{x}_1^O \le \tilde{x}_2^O \le \ldots \le \tilde{x}_{k-1}^O \le \tilde{x}_k^O = x_k^P < x_k^O.$$
The latter observation, together with the fact that
$$\sum_{i=1}^{k-1} \tilde{x}_i^O + x_k^P \le \sum_{i=1}^{k-1}\left(x_i^O + \tfrac{x_k^O - x_k^P}{k-1}\right) + x_k^P = \sum_{i=1}^{k} x_i^O,$$
imply that players $k+1, \ldots, |M|$ can subcontract in $S^O$ amounts $x_{k+1}^O \le \ldots \le x_{|M|}^O$ respectively. Since $x_k^P < x_k^O \le x_i^O$ for $i > k$, players $1, \ldots, k$ maintain their processing priority on $F$ even when using strategies $\tilde{x}_i^O$ for $i \le k$. Then the fact that these strategies improve the makespan of players $1, \ldots, k-1$ contradicts that $S^O$ is a Nash schedule.
In both cases we reached a contradiction because $x_k^P < x_k^O$. Therefore, in $S^P$ we must have $x_k^P \ge \min\{P_k - p_{\max}^k, x_k^O\}$. Equivalently, if $P_k - p_{\max}^k < x_k^O$ then $x_k^P \ge P_k - p_{\max}^k$, i.e., $x_k^P = P_k - p_{\max}^k$. Otherwise $x_k^P \ge x_k^O$. This completes the proof of the lemma.
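As an aside, the equilibrium recursion established in the proof of Theorem 2 (expressions (6)-(7)) is straightforward to evaluate directly. The following sketch is ours, not part of the article; it assumes the players are already indexed in SPT order of the $P_i$'s:

```python
def overlapping_equilibrium(P):
    """Equilibrium subcontracted workloads x_k^O of Theorem 2:
    x_k^O = min over i >= k of (P_i - x_1^O - ... - x_{k-1}^O) / (i + 2 - k),
    with players 1..m indexed in SPT (nondecreasing P_i) order.
    P is the list of total processing requirements P_1 <= ... <= P_m."""
    m = len(P)
    x = []
    for k in range(1, m + 1):           # player k, 1-indexed
        prefix = sum(x)                 # x_1^O + ... + x_{k-1}^O
        x.append(min((P[i - 1] - prefix) / (i + 2 - k)
                     for i in range(k, m + 1)))
    return x
```

For example, with $P = (6, 10)$ the recursion gives $x_1^O = \min\{6/2, 10/3\} = 3$ and $x_2^O = (10-3)/2 = 3.5$; one can check that the Theorem 1 condition $\sum_{j \le i} x_j^O \le P_i - x_i^O$ holds with equality for both players.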

Proof of Theorem 5: Recall from Theorem 2 that
$$x_k^O = \min_{i \ge k} \frac{P_i - x_1^O - \ldots - x_{k-1}^O}{i + 2 - k} \quad \text{for } k = 1, 2, \ldots, |M|. \qquad (10)$$
Consider the inside min operator in expressions (12) for $x_k^P(r)$. Its numerator is no less than the numerator in (10), and its denominator is less than or equal to $i + 2 - k$. And since $\Gamma(0)$ collects all values $k : P_k - p_{\max}^k < x_k^O$, we have that, in (12), $x_k^P(r)$ returns a value no less than $\min\{x_k^O, P_k - p_{\max}^k\}$. When all $i \ge k$ are already elements in $\Gamma(r-1)$, then this min operator is null and $x_k^P(r) = P_k - p_{\max}^k$; i.e., every player subcontracts his entire disposable workload. For $k \in M - \Gamma(0)$, the workloads
$$x_k^P(1) = \min_{\substack{k \le i \le |M| \\ i \notin \Gamma(0)}} \frac{P_i - \sum_{j < k,\, j \notin \Gamma(0)} x_j^P(1) - \sum_{j \le i,\, j \in \Gamma(0)} (P_j - p_{\max}^j)}{i + 2 - k - n_{k,i}(0)}$$
replicate the optimal strategies in Theorem 2, assuming that the workloads of players in $\Gamma(0)$ are fixed. This is because rule IRO coincides with IRP for players $i \in M - \Gamma(0)$ who subcontract less than $P_i - p_{\max}^i$. Note that, when computing the best strategy $x_k^P(1)$ for given $k \in M$, the number of players up until player $i$ ($k \le i$) that have not yet subcontracted the maximum possible amount $P_i - p_{\max}^i$ and whose workload is not otherwise fixed (the workloads of players $1, 2, \ldots, k-1$ are already set) is precisely $i - (k-1) - n_{k,i}(0)$. Therefore, expressions (12) correctly capture the dynamics of (10) for this situation. Value $n_{k,i}(0)$ is computed using set $\Gamma(0)$. However, Lemma 2(a) suggests that $x_k^P(1)$ cannot exceed the amount $P_k - p_{\max}^k$, which is also enforced in (12). If none of the players $i \in M - \Gamma(0)$ subcontracts amount $P_i - p_{\max}^i$, then the strategies $x_k^P(1) : k \in M - \Gamma(0)$ are optimal due to Theorem 2 and the values $x_k^P(r)$ remain unchanged for every $k \in M$ and $r = 2, \ldots, |M|$. If, on the other hand, some player $i \in M - \Gamma(0)$ subcontracts $P_i - p_{\max}^i$, set $\Gamma(1)$ includes at least one more player and relations (12) revise the optimal strategies of players not in $\Gamma(1)$. After at most $|M|$ iterations, either all players subcontract amount $P_i - p_{\max}^i$, or the optimal strategies have been found. In both cases, strategies $x_k^P(|M|)$ are optimal.
This completes the proof of the theorem.

Proof of Lemma 4: If the last job of player $i$ processed on $F$ completes after time $P_i - x_i^N + \min_{j \in A_i} p_{ij}$, player $i$ would be better off rescheduling the smallest job in $A_i$ on $M_i$ and revising his strategy to $x_i' = x_i^N - \min_{j \in A_i} p_{ij}$. Such a reallocation will not worsen his makespan and it may result in an improvement because $x_i' < x_i^N$ and hence earlier players may be forced to subcontract less on $F$. This proves part (a). The proof of part (b) is similar to that of Lemma 2(a).

Proof of Theorem 6: For proof by contradiction, let $S$ be a Nash schedule such that $P_i > P_j$ but player $i$ precedes player $j$ on $F$, as in Figure 6. Since two such players exist, we may assume without loss of generality that they are processed consecutively on $F$, i.e., $i$ immediately precedes $j$. Let $x_i$, $x_j$ be the subcontracted workloads for players $i$, $j$, respectively. According to IRN, we have $w_i \le w_j$. Obviously, $f_i(w_i) = x_i$, $f_j(w_j) = x_j$, and by definition of $f(\cdot)$, $x_i \le w_i$ and $x_j \le w_j$. If $x_j \le x_i$, then $x_j \le w_i$, which suggests $x_j \le f_j(w_i) \le f_j(w_j) = x_j$, i.e., $f_j(w_i) = f_j(w_j) = x_j$. Then, knapsack sizes $w_i$, $w_i$ would be used by IRN to order the subcontracted workloads of players $i$ and $j$ respectively, and since $P_j < P_i$, player $j$ would be scheduled before $i$. This is not the case in $S$; hence we must assume $x_i < x_j$. Then, the start time $t$ of processing the workload of player $j$ on $F$ must satisfy $t < C_j = P_j - x_j$, as depicted in Figure 6. Otherwise, player $j$ would be better off processing his entire workload on $M_j$, and IRN would have scheduled player $j$ before player $i$ on $F$. If $f_i(w_j) = x_i$, then knapsack sizes $w_j$, $w_j$ would be used for the subcontracted workloads of $i$, $j$, and since $P_j < P_i$, player $j$ would again precede $i$ according to IRN. Hence, it must be that $f_i(w_j) \ne x_i$, and since $f_i(w_i) = x_i$ and $w_i \le w_j$, we have $f_i(w_j) > x_i$ and $x_i \le w_i < w_j$.

Figure 6: The nonpreemptive equilibrium schedule $S$.

Suppose that the knapsack sizes satisfy $w_{[k]} = \max_{j \le k} x_{[j]}^N$. Then, $w_i < w_j$ and $x_i < x_j$ imply that $w_j = x_j$. Then, we rewrite $f_i(w_j) > x_i$ as $f_i(x_j) > x_i$, and since by definition of $f(\cdot)$ we have $f_i(x_j) \le x_j$, we get
$$x_i < f_i(x_j) \le x_j.$$
The last expression means that player $i$ can draw from a knapsack of size $x_j$ and improve his makespan because
$$t + (x_j - x_i) < C_i \qquad (11)$$
where $C_i = P_i - x_i$ is the makespan of player $i$. Indeed, we saw that $t < C_j = P_j - x_j$, and since $P_j < P_i$, we have $t < P_i - x_j$. Rearranging terms and subtracting $x_i$ from both sides yields (11), which means that strategy $f_i(x_j)$ results in a smaller makespan for player $i$ compared to strategy $x_i$, contradicting our assumption that $S$ is a Nash schedule. This completes the proof of the theorem.

In what follows we develop an iterative algorithm to find an equilibrium schedule with respect to IRN.

Non-Preemptive Nash (NPN)
Input: Processing time profiles $\{p_{ij} : j \in N_i\}$ for $i \in M$
Output: Equilibrium workloads $x_i^N : i \in M$ for the non-preemptive problem
Begin
1. Order players in SPT order of the $P_i$'s and determine the sets $X_i : i \in M$, as defined in (3). Let $x_i^N := 0$ for $i \in M$, $x_0^N := 0$.
2. For $k := 1$ to $|M|$ do
       For $i := k$ to $|M|$ do begin
           Compute $w_{k-1} = \max_{j < k} x_j^N$
3.         Set $x_k^N :=$ the smallest value $x \in X_k$ that solves problem $IP_k$.
       end
End

When solving $IP_k$ in line 3, there may be more than one most profitable strategy for player $k$. We select the smallest because player $k$ is indifferent amongst them, and selecting the smallest will allow subsequent players to subcontract more. Knowing the sets $X_i : i \in M$, the discussion preceding Theorem 7 establishes that NPN produces a Nash schedule. The effort expended in line 1 of NPN is $O(|N_i| P_i^2)$ for $i \in M$, resulting in $O(\sum_i |N_i| P_i^2)$ in total, because there are no more than $O(P_i)$ possible strategies for player $i$ and the feasibility of each is tested by solving a knapsack problem in $O(|N_i| P_i)$ time (see Martello and Toth, 1990).
Therefore, the total effort required in line 1 of NPN is bounded by $O(|M| \max_i |N_i| P_{|M|}^2)$. Every visit of line 3 of NPN requires effort $O(|X_k|)$ when the values $f_i(x)$ are stored appropriately. Accounting for the $i$- and $k$-loops, the total effort expended is bounded by

$O(|M|^2 P_{|M|})$. In general, it is expected that $|M| < \max_i |N_i|$ and hence the overall complexity of NPN is dominated by $O(\max_i |N_i| \, |M| \, P_{|M|}^2)$, which is also the complexity of finding the sets $X_i$, $i \in M$, by solving $O(|M| P_{|M|})$ knapsack problems.
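To make the revision dynamics in the proof of Theorem 5 concrete, the following is a small sketch of one plausible reading (ours, not the article's implementation): players whose cap $P_k - p_{\max}^k$ binds join $\Gamma$ and subcontract exactly their disposable workload, while the remaining players re-solve the Theorem 2 recursion with those workloads held fixed, until $\Gamma$ stabilizes (at most $|M|$ rounds). The treatment of capped players inside the min operator is our assumption.

```python
def preemptive_equilibrium(P, pmax):
    """Iterative revision in the spirit of Theorem 5 (one possible reading).
    Players 1..m are in SPT order of P_i; pmax[i] is player i's largest job,
    so cap[i] = P[i] - pmax[i] is his disposable workload.  Capped players
    join gamma and subcontract exactly cap[i]; the others re-solve the
    Theorem 2 recursion with gamma's workloads held fixed."""
    m = len(P)
    cap = [P[i] - pmax[i] for i in range(m)]
    gamma = set()                        # players fixed at their cap (0-indexed)
    x = [0.0] * m
    for _ in range(m + 1):               # at most |M| revision rounds
        x = [0.0] * m
        new_gamma = set(gamma)
        for k in range(m):
            if k in gamma:
                x[k] = cap[k]
                continue
            candidates = []
            for i in range(k, m):
                if i in gamma:
                    continue
                # workload already committed by players before k, plus capped
                # players lying between k and i (our reading of the numerator)
                committed = sum(x[:k]) + sum(cap[j] for j in range(k + 1, i + 1)
                                             if j in gamma)
                free = sum(1 for j in range(k, i + 1) if j not in gamma)
                candidates.append((P[i] - committed) / (free + 1))
            best = min(candidates)
            if best > cap[k]:            # Lemma 2(a) cap binds: fix player k
                new_gamma.add(k)
                x[k] = cap[k]
            else:
                x[k] = best
        if new_gamma == gamma:           # gamma stabilized: strategies found
            break
        gamma = new_gamma
    return x
```

With no binding caps the output coincides with the Theorem 2 strategies, e.g., `preemptive_equilibrium([6, 10], [1, 1])` returns `[3.0, 3.5]`; lowering player 1's disposable workload to 2 (`pmax = [4, 1]`) yields `[2, 4.0]`.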