
Theoretical Computer Science 411 (2010) 4217–4234

Resource allocation with time intervals

Andreas Darmann (a), Ulrich Pferschy (b), Joachim Schauer (b)
(a) University of Graz, Institute of Public Economics, Universitaetsstr. 15, 8010 Graz, Austria
(b) University of Graz, Department of Statistics and Operations Research, Universitaetsstr. 15, 8010 Graz, Austria

Article history: Received 8 September 2009; received in revised form 4 August 2010; accepted 6 August 2010. Communicated by G. Ausiello.

Keywords: Resource allocation; Proper intervals; Interval scheduling; Unsplittable flow

Abstract. We study a resource allocation problem where jobs have the following characteristic: each job consumes some quantity of a bounded resource during a certain time interval and induces a given profit. We aim to select a subset of jobs with maximal total profit such that the total resource consumed at any point in time remains bounded by the given resource capacity. While this case is trivially NP-hard in general and polynomially solvable for uniform resource consumptions, our main result shows the NP-hardness for the case of general resource consumptions but uniform profit values, i.e. for the case of maximizing the number of performed jobs. This result applies even for proper time intervals. We also give a deterministic (1/2 − ε)-approximation algorithm for the general problem on proper intervals, improving upon the currently known 1/3 ratio for general intervals. Finally, we study the worst-case performance ratio of a simple greedy algorithm. © 2010 Elsevier B.V. All rights reserved.

1. Introduction

In this paper we investigate a resource allocation problem (RAP) where each task requires a certain amount of a limited resource, but only during a certain time interval. Before and after this time interval the resource is not affected by this task. More formally, we are given a set of jobs
j ∈ {1, …, N}, each of them with a profit p_j ∈ R≥0 and a resource consumption (or weight) w_j ∈ R≥0 (we will also use the notation w(j)). Furthermore, to every job j a start time s_j ∈ R≥0 and an end time t_j ∈ R≥0, t_j > s_j, is assigned, such that the resource consumption occurs only within the time interval [s_j, t_j]. The goal is to select a subset of jobs with maximal total profit such that the total resource consumption does not exceed the given resource capacity W ∈ R≥0 at any point in time.

An interesting application of this problem arises from the scheduling of tasks in the mission of a spacecraft, described in a slightly different model in [18]. The individual projects which researchers wish to carry out during a space mission are strictly restricted to a time interval [s_j, t_j] because of the spacecraft being in a certain configuration with respect to earth and other planetary bodies. Each project requires an amount w_j of a limited resource such as labor, workspace or energy. To choose from the large set of desirable projects, each of them is assigned a priority value p_j. Maximizing the sum of priorities of all scheduled jobs under the given feasibility constraints amounts to solving an instance of RAP.

In the literature, this problem (or close relatives thereof) is also known as bandwidth allocation problem (cf. [11], where profits are given as weight times the interval length), resource constrained scheduling (cf. [3], where a generalization is considered in which each job is allowed to be positioned within a larger time interval) and call admission control; see [4,8] for further references. An exact algorithm for RAP based on a sophisticated ILP-formulation which is solved by column generation was recently developed by [9].

E-mail addresses: andreas.darmann@uni-graz.at (A. Darmann), pferschy@uni-graz.at (U. Pferschy), joachim.schauer@uni-graz.at (J. Schauer).
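A selected subset of jobs is feasible exactly when the resource profile stays within W at every point in time, and the profile only changes at start and end events. A minimal sketch of such a feasibility check (the function and data layout are illustrative, not from the paper):

```python
def feasible(jobs, W):
    """Check that the resource profile of the selected jobs never exceeds W.

    jobs: list of (s, t, w) triples -- start time, end time, resource
    consumption.  The profile only changes at the start/end events, so a
    sweep over +w / -w events suffices.
    """
    events = []
    for s, t, w in jobs:
        events.append((s, +w))   # job starts consuming w
        events.append((t, -w))   # job releases w again
    # Intervals [s_j, t_j] are closed, so a job ending at time t still
    # overlaps one starting at t: process the +w event first at ties.
    events.sort(key=lambda e: (e[0], -e[1]))
    load = 0
    for _, delta in events:
        load += delta
        if load > W:
            return False
    return True
```

For example, two jobs of weight 3 overlapping in time are feasible for W = 6 but not for W = 5.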

The generalization of RAP where the resource capacity varies over time was studied as a temporal knapsack problem in [5], where exact algorithms were developed.

A related problem with a geometric flavor, in which each job requires an adjacent block of the resource over its interval, was considered by several authors. In this case each job can be seen as requiring a rectangle whose length is determined by the time interval [s_j, t_j] and whose width is given by w_j. [3] consider again the generalization with jobs allowed to be positioned in a larger interval, while [1] consider the problem of minimizing W such that all jobs can be processed, i.e. all rectangles can be packed into a strip of width W.

Moreover, RAP is also a well-known special case of the unsplittable flow problem (UFP). By [10], UFP consists of an n-vertex graph G = (V, E) with edge capacities c_e and a set of k vertex pairs (terminals) T = {(s_i, t_i) | i = 1, …, k}. Each pair (s_i, t_i) in T has a demand w_i and a profit p_i. The goal is to find the maximum profit subset of pairs from T, along with a path for each chosen pair, so that the entire demand for each such pair can be routed on its path while respecting the capacity constraints. The special case of UFP where G is a path and all the capacities c_e are uniform is known as the unsplittable flow problem on line graphs with uniform capacities (UFPUC) (see [2]) and corresponds exactly to RAP with W equal to the uniform edge capacity. For a comprehensive overview of RAP (UFPUC) and UFP, the reader is referred to the papers [2,8].

Obviously, RAP is NP-hard, since the classical 0–1 knapsack problem can be understood as a special case of RAP where all time intervals are identical. In [2] a so-called quasi-PTAS is given for the (general) unsplittable flow problem on line graphs. A quasi-polynomial time approximation scheme (quasi-PTAS) is an approximation scheme that runs in time bounded by 2^polylog(m), where m is the input size of the problem instance. This clearly also gives a quasi-PTAS for RAP. Nevertheless, the long standing question of whether there exists a PTAS for RAP is still open [8].

The trivial NP-hardness of RAP gives rise to the study of two relevant special cases:

1. uniform weights: If all resource consumption values w_j are identical, the resource constraint reduces to a cardinality constraint. This version of the problem is well known as an interval scheduling problem (see [1] for a survey and pointers to applications). RAP with uniform weights can easily be seen to be polynomially solvable (see e.g. [1], where an O(N log N) algorithm is given, and [7] for a minimal cost flow model).

2. uniform profits: For the case where all profits p_j are identical, the complexity was open. It is the main result of our paper that RAP with uniform profits is NP-hard, even if restricted to proper interval graphs (see Section 2). Our result not only sharpens the obvious NP-hardness result for the general RAP, but also proves that the existence of an FPTAS for RAP even on proper interval graphs with uniform profits (and thus also for RAP) can be ruled out, since due to uniform profits the regarded objective function value is polynomially bounded. Moreover, we believe that RAP with uniform profits is also an interesting combinatorial problem in its own right, since it combines the aspects of packing and scheduling problems in an intriguing way.

The intersection of the two cases with uniform profits and uniform weights was considered in [7], where a simple O(N log N) algorithm is given. However, for this case it was shown to be NP-hard in [14] to minimize W such that all jobs can be performed. Maximizing the number of performed jobs with the additional restriction that only one job can be performed at a time (i.e. W equals the uniform weight), but with jobs allowed to be positioned within a larger time interval, was considered in [18].

In [8] a deterministic 1/3-approximation algorithm with running time O(N log N) and a randomized (1/2 − ε)-approximation is given for RAP, thus improving upon earlier (and to some extent more general) results in [3] and [4]. Our second result is a deterministic (1/2 − ε)-approximation algorithm for the special case of RAP defined on proper interval graphs, described in Section 3. This algorithm can easily be modified to a 1/2-approximation algorithm for RAP defined on a proper interval graph with uniform profits, where the performance ratio is tight. Finally, in Section 4 we analyze the performance of a simple greedy algorithm, concluding that it also gives a tight 1/2-approximation algorithm for the same special case, but can perform arbitrarily badly if the profit is not uniform or the intervals are not proper.

1.1. Problem formulations

Let s_min := min_{1≤j≤N} s_j and t_max := max_{1≤j≤N} t_j. Then a straightforward ILP-formulation of RAP, based on binary decision variables x_j for each job j = 1, …, N, is given as follows.

Resource allocation problem (time interval formulation):

maximize  Σ_{j=1}^{N} p_j x_j
s.t.      Σ_{j: s_j ≤ t ≤ t_j} w_j x_j ≤ W    for all t ∈ [s_min, t_max]
          x_j ∈ {0, 1}    for j = 1, …, N
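As noted above, RAP with uniform weights reduces to interval scheduling; in the simplest case, where W equals the common weight so that at most one job may run at a time, the classic earliest-end-time greedy already attains the optimum. A sketch of that textbook greedy (my own illustration, not the algorithm of the cited references):

```python
def max_jobs_one_at_a_time(jobs):
    """Earliest-end-time greedy for interval scheduling: a maximum-size
    set of pairwise non-overlapping jobs.  This covers RAP with uniform
    weights and W equal to the common weight.  O(N log N) after sorting.

    jobs: list of (s, t) closed intervals [s, t].
    """
    count = 0
    last_end = float("-inf")
    for s, t in sorted(jobs, key=lambda j: j[1]):   # by end time
        if s > last_end:        # closed intervals: touching jobs conflict
            count += 1
            last_end = t
    return count
```

The exchange argument is the standard one: the job finishing earliest can always be assumed to be part of some optimal solution.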

One can notice that the actual start and end times are not explicitly required in this problem. They only serve to model the overlap between jobs, while the length in time of such an overlap is irrelevant. Hence, we could map w.l.o.g. the sorted list of all start and end times to the discrete set {1, …, 2N} and thus bound the number of constraints by 2N. On the other hand, we will not apply such a transformation, since it will turn out to be more useful to define RAP with the use of an interval graph.

Definition 1. A set τ of intervals is called proper if no interval of τ is contained in another interval of τ.

Definition 2. A graph G = (V, E) is an interval graph if each vertex v ∈ V can be assigned an interval on the real line such that two intervals intersect iff the two corresponding vertices are adjacent. A representation of graph G by such intervals is called an interval representation. A proper interval graph is an interval graph for which there exists an interval representation which is proper.

For a given instance I of RAP we construct an interval graph G = (V, E) in the following way: for each job j define a corresponding vertex v_j, and make two vertices v_i and v_j adjacent if the time intervals of the jobs i and j intersect. If the underlying instance I has the property of being described by proper time intervals, then obviously the graph G is a proper interval graph.

In this paper we will represent the instance I by its corresponding interval graph, i.e., instead of looking at job j with specific profit, weight, and start and ending times, we consider the corresponding vertex v_j that gets assigned the same weight and profit as j. Note that the intersection structure that was defined by the start and end times in I is only kept in the adjacency structure of the interval graph, which means that vertices that had an overlapping time interval are now adjacent. However, in order to formulate the resource allocation problem in terms of an interval graph, we state the following definition.

Definition 3. A clique of a graph G is a complete subgraph of G. A maximal clique is a clique which is not properly contained in any other clique.

Replacing the label v_j by j for j = 1, …, N, the resource allocation problem RAP can also be formulated as follows.

Resource allocation problem (interval graph formulation):

maximize  Σ_{j=1}^{N} p_j x_j
s.t.      Σ_{j ∈ C} w_j x_j ≤ W    for all maximal cliques C of G    (1)
          x_j ∈ {0, 1}    for j = 1, …, N

For any subset of jobs X ⊆ {1, …, N} we denote its weight by w(X) := Σ_{j∈X} w_j and its objective function value as p(X) := Σ_{j∈X} p_j. X is a feasible solution of I if the associated vector x ∈ {0, 1}^N, defined by x_j := 1 if j ∈ X and x_j := 0 otherwise, satisfies all constraints (1). An optimal feasible solution is denoted by X*. It is easy to see that there exists a one-to-one correspondence between feasible solutions of the interval graph representation of I and the time interval representation.

Note that the number of maximal cliques of G, and thus the number of constraints, is bounded by N. Indeed, the non-trivial instance where each job j overlaps only with jobs j−1 and j+1 induces an interval graph with N−1 maximal cliques. In what follows, we will use the interval graph formulation of RAP.

2. Resource allocation with uniform profits

In this section we show that the resource allocation problem RAP is NP-hard even if restricted to jobs that are assigned uniform profits (w.l.o.g. p_j = 1, j = 1, …, N) and proper intervals. We first state the main theorem of this section.

Theorem 1. The resource allocation problem is NP-hard, even if restricted to uniform profits and proper interval graphs.

It has to be mentioned that the strong NP-hardness of the problem is not shown by this result and thus remains an open problem. The following corollary holds because of the fact that the problem considered is a polynomially bounded optimization problem.

Corollary 2. There cannot exist an FPTAS for the resource allocation problem with uniform profits defined on proper interval graphs unless P = NP.

The proof of the NP-hardness result consists in a reduction from the following variant of the partition problem.
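Before turning to the reduction, the interval-graph formulation above can be made concrete: every maximal clique of an interval graph consists of the intervals containing some common point, and it suffices to inspect the start points, so the constraint family (1) can be generated by a sweep. A small illustrative sketch (closed intervals assumed; names are mine, not from the paper):

```python
def maximal_cliques(jobs):
    """Maximal cliques of the interval graph of the given jobs.

    jobs: dict name -> (s, t), closed intervals [s, t].
    Every maximal clique of an interval graph is the set of intervals
    containing some common point, and start points suffice as candidate
    points; non-maximal candidates are filtered out afterwards.
    """
    cliques = set()
    for s, _ in jobs.values():
        cliques.add(frozenset(j for j, (a, b) in jobs.items() if a <= s <= b))
    return [c for c in cliques if not any(c < d for d in cliques)]

def respects_clique_constraints(jobs, weights, chosen, W):
    """Constraints (1): the chosen jobs inside every maximal clique must
    have total weight at most W."""
    return all(sum(weights[j] for j in c if j in chosen) <= W
               for c in maximal_cliques(jobs))
```

This brute-force sweep is quadratic; it only illustrates the one-to-one correspondence between the time-interval and interval-graph formulations.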

Definition 4 (Ordered Partition).
GIVEN: A multiset B := {b_1, b_2, …, b_{2n−1}, b_{2n}} of 2n integers such that b_{2i} < b_{2i−1} for all i, 1 ≤ i ≤ n.
QUESTION: Is there a subset B′ ⊆ B such that
1. B′ contains exactly one of {b_{2i−1}, b_{2i}} for all i, 1 ≤ i ≤ n, and
2. Σ_{b_i ∈ B′} b_i = Σ_{b_i ∈ B\B′} b_i?

Since the variant of the above problem in which the requirement b_{2i} < b_{2i−1} is omitted is NP-complete [16], it immediately follows that Ordered Partition is NP-complete.

Theorem 3 ([16]). Ordered Partition is NP-complete.

It can easily be seen that Ordered Partition is NP-complete both for even n and odd n; for the sake of brevity we omit the proof of this simple observation. Since for the rest of the paper we assume n to be even, we state the following lemma.

Lemma 4. Ordered Partition is NP-complete if n is even.

2.1. The instance R of RAP

Given an even instance OP of Ordered Partition with a set of integers B := {b_1, b_2, …, b_{2n−1}, b_{2n}} such that b_{2i} < b_{2i−1} for all i, 1 ≤ i ≤ n, we will construct an instance R of RAP and state some of its basic features in this section.

2.1.1. Idea of the construction

The idea behind the instance R is as follows. After specifying an appropriate resource bound W, we introduce the jobs a_i and the jobs ā_i, each of which we associate with the integer b_i of Ordered Partition, 1 ≤ i ≤ 2n. Next we introduce two sorts of dummy jobs. Having introduced these jobs, there are 3n maximal cliques in R: the cliques C_2, C_4, …, C_{2n}, the cliques I_2, I_4, …, I_{2n} and the cliques C̄_2, C̄_4, …, C̄_{2n}. The dummy jobs are assigned appropriate resource consumptions such that the resource constraints on the maximal cliques guarantee several structural properties of optimal solutions of R.

First, the construction allows us to establish an upper bound U on the objective function value of solutions of R. Next, the maximal cliques C_{2i} are established in such a way that an optimal solution X* of R whose objective function value meets the bound U contains exactly one element of {a_{2i+2}, a_{2i+1}}. Analogously, the cliques C̄_{2i} are designed in order to make sure that exactly one element of {ā_{2i}, ā_{2i−1}} is contained in such a solution X*. The cliques I_{2i} finally serve the purpose that a_i and ā_i are not both contained in X*. These properties are stated in Section 2.2. In Section 2.3 we show that from the resource constraints on the cliques C_{2n} and C̄_{2n} it then follows that such an optimal solution X* yields the desired subset B′ ⊆ B of instance OP. The construction of B′ is based on the idea of identifying a_i ∈ X* ⟺ b_i ∈ B′. On the other hand, the construction of an optimal solution X* with p(X*) = U from a given solution B′ of OP can be done in a straightforward way by use of the above equivalence and inserting all the dummy jobs in X*. We will now give a detailed specification of instance R.

2.1.2. Resource bound W and the jobs in R

Let M := Σ_{i=1}^{2n} b_i and W := 2Mn² + 2Mn + 1. First we introduce 2n jobs a_i, 1 ≤ i ≤ 2n, and 2n jobs ā_i, 1 ≤ i ≤ 2n. The resource consumptions of these jobs are, respectively,

w(a_{2i+1}) = b_{2i+1} + 2(n − i)M and w(a_{2i+2}) = b_{2i+2} + 2(n − i)M,
w(ā_{2i+1}) = b_{2i+1} + 2(i + 2)M and w(ā_{2i+2}) = b_{2i+2} + 2(i + 2)M,

for 0 ≤ i ≤ n − 1. Note that this implies

w(a_{2i+1}) > w(a_{2i+2}) and w(ā_{2i+1}) > w(ā_{2i+2})    (2)

for all i ∈ {0, …, n − 1}. For illustration reasons the resource consumptions of these jobs are given in Table 1 (all tables can be found in Appendix A).

Next we introduce the following types of sets of dummy jobs: the sets D_i for all i, n + 2 ≤ i ≤ 2n, the sets D̄_i for all i, 1 ≤ i ≤ n − 1, and the sets E_i and Ē_i, 1 ≤ i ≤ 2n. The resource consumption of each of the jobs in these job sets equals 2M. The number of jobs in the dummy job sets D_{n+4}, …, D_{2n} is given by

|D_{n+2i}| = 8i − 4,

for i ∈ {2, …, n/2}. The cardinalities of the dummy job sets D̄_i are given by |D̄_{2i}| = 8i − 4, for i ∈ {2, …, n/2}. The cardinalities of the dummy job sets E_i and Ē_i are as follows:

|E_i| = |Ē_i| = 4n − 4i    if i ∈ {2, …, n − 2},
|E_i| = |Ē_i| = 4i + 4     if i ∈ {n, …, 2n − 4},
|E_i| = |Ē_i| = 4n − 12    if i = 2n − 2,
|E_i| = |Ē_i| = 4n         if i = 2n.

Tables 2 and 3 display the starting and end times of the jobs. The time interval [a, b] of a job set F means that the jobs in this job set have the time intervals [a + i/|F|, b + i/|F|], i ∈ {0, 1, …, |F| − 1}. Note that, for i ∈ {2, …, n/2}, we get

w(a_{2i}) − w(ā_{2i}) = w(D̄_{2i})    (3)

and analogously

w(ā_i) − w(a_i) = w(D_i)    for i ∈ {n + 4, …, 2n}.    (4)

Finally, we assign profit 1 to each of the jobs in instance R and denote the total number of dummy jobs by h.

2.1.3. The maximal cliques in R

We denote the 3n maximal cliques of instance R by C_2, C_4, …, C_{2n}, I_2, I_4, …, I_{2n} and C̄_2, C̄_4, …, C̄_{2n}. In Table 4 the times that constitute these cliques are shown. The description of the cliques according to the jobs they contain is given in Table 5. The proof that these 3n cliques are indeed maximal and that no further maximal cliques exist is given in Appendix A.

2.1.4. Preliminary definitions and observations

In the course of this paper the following definitions will be useful.

Definition 5. Let X be a feasible solution of R and C a maximal clique in R. Then we define
- the set K_j(X) of jobs of {a_{2j}, a_{2j−1}} that are contained in X; formally, K_j(X) := {a_{2j}, a_{2j−1}} ∩ X;
- the set A(X) := {a_i | a_i ∈ X, 1 ≤ i ≤ 2n} of jobs a_i contained in X, 1 ≤ i ≤ 2n;
- the set B_j(X) := {a_k | a_k ∈ (X ∩ C_{2j})} of jobs a_k ∈ C_{2j} that are contained in X;
- the total resource consumption of clique C with respect to X, r_X(C) := Σ_{a ∈ C ∩ X} w(a);
- the set J(X) of indices j ∈ {1, …, n} such that both elements of the set {a_{2j+2}, a_{2j+1}} are contained in X; formally spoken, J(X) := {j ∈ {1, …, n} | |K_{j+1}(X)| = 2} = {j ∈ {1, …, n} | a_{2j+2} ∈ X and a_{2j+1} ∈ X};
- m := min J(X*) and J′(X*) := {j < m | |K_{j+2}(X*)| = 0, j ≥ 1}.

The sets K̄_j(X), Ā(X), B̄_j(X) are defined analogously by replacing each job a_i with job ā_i and each clique C_{2j} by C̄_{2j}, respectively. For formal reasons, however, the set J̄(X) is defined by J̄(X) := {j ∈ {1, …, n} | ā_{2j} ∈ X and ā_{2j−1} ∈ X}. With m̄ := min J̄(X*) we define J̄′(X*) := {j < m̄ | |K̄_{j+2}(X*)| = 0, j ≥ 1}.

2.2. Properties of solutions of R

In this section the key features of optimal solutions of R used for proving the NP-hardness result in Section 2.3 are presented. With Lemmas 21 and 22, which can be found in Appendix B, it can be proven that any optimal solution of R contains at most n jobs of {a_1, a_2, …, a_{2n}} and at most n jobs of {ā_1, ā_2, …, ā_{2n}}. This key feature is stated in the following lemma; its proof is in Appendix B.

Lemma 5. Let X* be an optimal solution of R. Then |A(X*)| ≤ n and |Ā(X*)| ≤ n.

The following two corollaries follow immediately from Lemma 5 resp. its proof (see Appendix B for details).

Corollary 6. Let X* be an optimal solution of R. Then p(X*) ≤ 2n + h.

Corollary 7. Let X* be an optimal solution of R that contains all dummy jobs. Then the following statements hold:
1. If J(X*) ≠ ∅, then J′(X*) ≠ ∅.
2. If J̄(X*) ≠ ∅, then J̄′(X*) ≠ ∅.

With the use of Lemma 5 and Corollary 7 the following important property of an optimal solution X* of R can be derived: if p(X*) = 2n + h, then for all j ∈ {1, …, n} exactly one job of {a_{2j}, a_{2j−1}} (and exactly one of {ā_{2j}, ā_{2j−1}}) must be contained in X*. This property is stated in Lemma 8 below.

Lemma 8. Let X* be an optimal solution of R with p(X*) = 2n + h. Then the following statements hold:
1. |A(X*)| = n and |Ā(X*)| = n.
2. |K_j(X*)| = 1 and |K̄_j(X*)| = 1 for all j, 1 ≤ j ≤ n.

Another useful structural property of an optimal solution X* of R with p(X*) = 2n + h is stated in the next lemma: the jobs a_i and ā_i cannot be simultaneously contained in such a solution X*, for all i, 1 ≤ i ≤ 2n.

Lemma 9. Let X* be an optimal solution of R with p(X*) = 2n + h. Then a_i ∈ X* ⟹ ā_i ∉ X* for all i, 1 ≤ i ≤ 2n.

Lemma 10. Let X* be an optimal solution of R with p(X*) = 2n + h. Then Σ_{i: a_i ∈ X*} b_i ≤ M/2 and Σ_{i: ā_i ∈ X*} b_i ≤ M/2.

Remark. The above lemma states that summing up the b_i over all indices i such that a_i ∈ X* (resp. ā_i ∈ X*) does not exceed the bound M/2. Thus, it clearly indicates the idea used to derive a feasible solution of the Ordered Partition instance OP from an optimal solution X* of R with p(X*) = 2n + h in the proof of the NP-hardness result in the next section.

2.3. The NP-hardness result

In this section we show that the resource allocation problem with uniform profits is NP-hard even if the interval graph is proper. The proof of the NP-hardness consists of a transformation of an arbitrary instance OP of the Ordered Partition problem with an even number n of integer pairs (see Definition 4) to the decision problem corresponding to the resource allocation problem. Given such an instance OP, we constructed an instance R of the resource allocation problem in Section 2.1. With the structural properties of an optimal solution of R presented in Section 2.2, we will now prove the main result of this section. We prove the theorem by showing that OP is a YES-instance of Ordered Partition if and only if there is a solution X of R with p(X) ≥ 2n + h.

Sketch of the proof of Theorem 1. The "if"-part of this equivalence is proven in a straightforward way. Given a solution B′ of OP, simply construct a solution X* of R by
1. letting a_i ∈ X* ⟺ b_i ∈ B′ and ā_i ∈ X* ⟺ b_i ∈ B\B′, and
2. inserting all the dummy jobs in X*.
With simple algebra the feasibility of X* is shown, and the optimality of X* follows directly from Corollary 6.

The "only-if"-part is proven as follows. Given a solution X of R with p(X) = 2n + h, we construct the sets B_1 and B̄_1 by letting b_i ∈ B_1 ⟺ a_i ∈ X and b_i ∈ B̄_1 ⟺ ā_i ∈ X. Lemmas 8 and 10 ensure that B_1 is a solution of OP if B_1 and B̄_1 are disjoint. Thus, the final part of the proof consists in showing that B_1 and B̄_1 are disjoint by simply using the definition of the Ordered Partition problem.

Proof of Theorem 1. "⟸": Let B′ ⊆ B be a solution of OP. Thus,

Σ_{b_i ∈ B′} b_i = Σ_{b_i ∈ B\B′} b_i = M/2,    |B′| = n,    (5)

and, for 1 ≤ j ≤ n,

b_{2j} ∈ B′ ⟺ b_{2j−1} ∈ B\B′.    (6)

Let H denote the set of all dummy jobs in R. Let X* := {a_i | b_i ∈ B′, 1 ≤ i ≤ 2n} ∪ {ā_i | b_i ∈ B\B′} ∪ H. Obviously, p(X*) = 2n + h. Furthermore, note that

a_i ∈ X* ⟺ ā_i ∉ X*    for all 1 ≤ i ≤ 2n.    (7)

It remains to show that X* is a feasible solution of R. (6) implies |K_j(X*)| = 1 and |K̄_j(X*)| = 1 for all 1 ≤ j ≤ n. Thus, for the cliques C_{2j}, 1 ≤ j ≤ n, we get

r_{X*}(C_{2j}) = δ(C_{2j}) + Σ_{a_k ∈ B_j(X*)} w(a_k),

where δ(C_{2j}) denotes the total resource consumption of the dummy jobs in C_{2j}.

Hence, for 1 ≤ j < n, (5) and (6) together with Proposition 1 yield r_{X*}(C_{2j}) < W, and for clique C_{2n} we get r_{X*}(C_{2n}) = W. Thus, the resource constraint holds for all cliques C_{2j}, 1 ≤ j ≤ n. Analogously it follows that the resource constraint is satisfied for all cliques C̄_{2j}, 1 ≤ j ≤ n.

Next consider the clique I_{2n}. Recall that clique I_{2n} contains the jobs a_i, 1 ≤ i ≤ 2n, i ≠ 2n − 1, the job ā_{2n} and all the dummy job sets D_{n+4}, D_{n+6}, …, D_{2n}. Note that (7) implies either a_{2n} ∈ X* or ā_{2n} ∈ X*. Furthermore, recall that w(a_{2i}) < w(a_{2i−1}) (resp. w(ā_{2i}) < w(ā_{2i−1})) for 1 ≤ i ≤ n and w(ā_i) > w(a_i) for i ∈ {n + 4, n + 6, …, 2n} (see Table 1). Thus, with Σ_{i=1}^{2n} b_i = M we get

r_{X*}(I_{2n}) ≤ Σ_{i=n+4}^{2n} w(D_i) + w(ā_{2n}) + Σ_{i=1}^{n} w(a_{2i−1})
            < α := Σ_{i=n+4}^{2n} w(D_i) + w(ā_{2n−1}) + Σ_{i=1}^{n} w(a_{2i−1})
            < W.

Hence, for the clique I_{2n} the resource constraint is satisfied. Now clique I_2 contains the jobs a_i, 1 ≤ i ≤ 2n, i ≠ 2n, the jobs ā_2, ā_{2n−1}, ā_{2n} and the dummy job sets D_{n+4}, D_{n+6}, …, D_{2n}. Analogously to the above we get

r_{X*}(I_2) ≤ Σ_{i=n+4}^{2n} w(D_i) − w(D_{2n}) + w(ā_{2n−1}) + w(ā_2) + Σ_{i=1}^{n} w(a_{2i−1}).    (8)

Now recall that w(ā_i) − w(a_i) = w(D_i) for i ∈ {n + 4, …, 2n} (stated in (4)). Substituting w(D_{2n}) = w(ā_{2n}) − w(a_{2n}) in (8) yields

r_{X*}(I_2) < α < W.

Analogously, we get r_{X*}(I_i) < α < W for all i ∈ {n + 2, …, 2n}.

44 A Darmann et al / Theoretical Computer Science 411 (010) 417 44 For i {,, n}, starting with I and using the equality w(a j ) w(ā j ) = D j for j {4,, n} (stated in ()) the inequality r X (I i ) < W can be shown in an analogous manner Thus, since p(x ) = n + h, X is a feasible and optimal solution of R : Let X be a feasible solution of R with p(x ) = n + h Obviously, X is an optimal solution due to Corollary 6 Let B 1 := {b i a i X, 1 i n} and B1 := {b i ā i X, 1 i n} As stated in Lemma 10, b i and b i (9) i:b i B 1 i:b i B1 hold Due to Lemma 8 A(X ) = n (resp Ā(X ) = n) and thus B 1 = n (resp B1 = n) Note that the fact that exactly one element of {a j, a j1 } (resp {ā j, ā j1 }) is contained in X (see Lemma 8) implies that exactly one element of {b j, b j1 } is contained in B 1 (resp B1 ), for 1 j n If B 1 B1 = then B \ B 1 = B1 Together with (9) this implies that B 1 is a solution of OP Thus, it remains to show that B 1 B1 = holds Assume that B 1 B1 Because of Lemma 9 we know that b i (B 1 B1 ) implies i = j 1 for some 1 j n Furthermore note that because of Lemma 8 b j1 (B 1 B1 ) b j / B 1, b j / B1 (10) for all 1 j n Now create the set B from B 1 by replacing every b j1 (B 1 B1 ) with b j, 1 j n Formally speaking, B := (B 1 \ B1 ) {b j b j1 (B 1 B1 ), 1 j n} Note that (10) yields B B1 = Obviously, B = n because of B 1 = n Thus we must have B \ B = B1 and hence n i=1 b i = i:b i B b i + i:b i B1 b i Since by definition b j < b j1 for 1 j n, we get b i < b i (1) i:b i B i:b i B 1 where the last inequality was stated in (9) However, inserting (9) and (1) in (11) yields n i=1 b i = i:b i B b i + < + = i:b i B1 b i (11) which contradicts the definition of Thus, B 1 B1 = which completes the proof A ( 1 ε)-approximation algorithm In this section we present an approximation algorithm with worst-case performance guarantee 1 ε for instances I of the resource allocation problem RAP in which the time intervals of all jobs are proper intervals This algorithm is based on 
the clique path representation of the proper interval graph which has a one-to-one correspondence to instance I. We first give the formal definition of chordal graphs and clique trees and cite some facts from the literature needed to prove the correctness of our algorithm.

Definition 6. A graph G = (V, E) is called a chordal graph if it does not contain induced cycles other than triangles [1].

Lemma 11. Every interval graph is chordal [17].

Definition 7. A clique tree T = (K, E) of a chordal graph G is a tree that has all the maximal cliques C of G as vertices such that, for each vertex v of G, all the cliques C containing v induce a subtree of T [6].

Definition 8. A subset S ⊆ V is called a separator of G if there are two vertices a and b in one component of G such that, after the removal of S from V, a and b are in different components of G − S.

Lemma 12. Let T be a clique tree of the chordal graph G and let C₁ and C₂ be two maximal cliques adjacent in T. Then C₁ ∩ C₂ is a minimal separator with respect to G for all a ∈ C₁ \ C₂ and b ∈ C₂ \ C₁ [15].

Theorem 13. Let G be a connected chordal graph. Then G is a proper interval graph if and only if G has a unique clique tree and this tree is a path P such that for every subsequence (C₁, C₂, C₃) of P either C₁ ∩ C₃ = ∅ or C₂ ⊆ (C₁ ∪ C₃) [19, Cor. 10].

For algorithmic purposes it is important that the clique tree of a chordal graph G = (V, E) can be computed using O(|V| + |E|) time and space [15].
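The clique path of a proper interval instance can be computed directly from the intervals. The following Python sketch (our own illustration, not the linear-time algorithm of [15]; function and variable names are ours) collects the candidate clique at every interval start point and keeps the inclusion-maximal ones, which are exactly the maximal cliques in left-to-right order:

```python
def clique_path(intervals):
    """Maximal cliques of an interval graph, ordered along the line.

    intervals: list of closed (start, end) pairs; vertices are indices.
    Every maximal clique of an interval graph is the set of intervals
    covering some point, and that point can be taken as the start point
    of one of its members, so it suffices to inspect start points.
    """
    points = sorted({s for s, _ in intervals})
    cands = []
    for p in points:
        clique = frozenset(i for i, (s, e) in enumerate(intervals)
                           if s <= p <= e)
        cands.append(clique)
    # keep inclusion-maximal candidates; identical cliques end up
    # adjacent after filtering, so consecutive duplicates are dropped
    path = []
    for c in cands:
        if not any(c < d for d in cands):
            if not path or path[-1] != c:
                path.append(c)
    return path
```

For the three-job example used later (intervals [1,3], [2,5], [4,6]) this returns the two maximal cliques {1, 2} and {2, 3} in path order. This quadratic sketch is sufficient for illustration; the cited O(|V| + |E|) construction is more involved.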

3.1 The algorithm

Let I be an instance of RAP with proper intervals and let P = (K, E) be the clique path of the corresponding proper interval graph G. Let C₁, …, C_l be the l maximal cliques of G, ordered such that C_i is adjacent to C_{i−1} and C_{i+1} in P for i = 2, …, l − 1. For a subgraph Ḡ of G let N(Ḡ) denote the neighborhood of Ḡ in G, where the vertices of Ḡ themselves are not considered.

Definition 9. Two subgraphs C₁, C₂ of G are called unconnected if N(C₁) ∩ C₂ = ∅ and C₁ ∩ C₂ = ∅.

Lemma 14. Let K be a set of cliques of G which are pairwise unconnected. Then the resource allocation problem restricted to the items contained in the cliques in K and the constraints these cliques impose can be decomposed into |K| independent knapsack problems.

Proof. Follows immediately from the definition of unconnectedness.

In our algorithm we will construct two sets K₁, K₂ of cliques such that their union contains each vertex of G exactly once. The cliques in each of the two subsets will be pairwise unconnected. Solving the problem separately for each of these subsets and taking the better of the two solutions yields a 1/2-approximation algorithm. In polynomial time we can only perform an ε-approximation scheme for every knapsack problem, which decreases the performance ratio by ε.

Let C_{jk} := C_j \ C_k (seen as an induced subgraph of G). The following algorithm gets as input the clique path representation C₁, …, C_l of an instance I.

Algorithm Clique Partitioning:
K₁ := ∅; K₂ := ∅; C_{l+1} := ∅
c := 0; i := 1
while i < l
  c := c + 1
  if c is odd then K₁ := K₁ ∪ {C_i} else K₂ := K₂ ∪ {C_i}
  find j > i with C_i ∩ C_j ≠ ∅ and C_i ∩ C_{j+1} = ∅  (*)
  C_j := C_{ji}
  i := j
end while
if c is odd then K₁ := K₁ ∪ {C_i} else K₂ := K₂ ∪ {C_i}

Lemma 15. Every vertex of G is contained in exactly one clique of K₁ ∪ K₂.

Proof. Every vertex v of G is contained in at least one maximal clique. For all cliques C_k which are skipped in line (*) there is C_i ∩ C_k ≠ ∅ and C_i ∩ C_{k+1} ≠ ∅. It follows from Theorem 13 that C_k ⊆ C_i ∪ C_{k+1}. Hence, no vertices
are lost by skipping C_k. (For general interval graphs that are not proper, this would not be the case when applying this algorithm.) After adding a clique C_i to some K_l, its items are removed from C_j. Since the clique tree is a path and C_i ∩ C_{j+1} = ∅, the items of C_i cannot appear in any other clique of K₁ or K₂. It is obvious from the construction of the algorithm that all elements of K_l are cliques.

Lemma 16. The cliques in each set K₁, K₂ are pairwise unconnected.

Proof. Consider an arbitrary clique C ∈ K₁ and an arbitrary vertex v ∈ C. We have to show that for all vertices u ∈ C′, for some C′ ∈ K₁ with C′ ≠ C, there is no edge (v, u) in G. For simplicity we assume that the cliques C = C_{ba}, C_{cb} ∈ K₂, C′ = C_{dc} are generated in this order. (If C′ is generated further away from C, the following argument holds a fortiori.) Applying Lemma 12 to the corresponding maximal cliques we know that C_c ∩ C_d separates C_{cd} from C_{dc} = C′. If v ∈ C_b ∩ C_c, and hence v ∈ C_{cd} (clearly C_b ∩ C_d = ∅ by construction of the algorithm and the path property), this means that v is separated from u. Otherwise there is v ∈ C_{bc} and we assume that there is an edge (v, u) ∈ E. By the clique property every vertex in C_b ∩ C_c ⊆ C_{cd} (which is non-empty by construction) has an edge to v and thus, via the edge (v, u), a path to u ∈ C_{dc}, in contradiction to the above separation property. The same argument works for K₂.

Theorem 17. Let z* denote the optimal solution value of I. Then there is a polynomial time algorithm with solution value z^A such that for proper interval graphs z^A ≥ (1/2 − ε) z* for every fixed ε > 0.
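Algorithm Clique Partitioning above can be sketched in Python as follows. This is our own rendering of the pseudocode under the assumption that the cliques are given as vertex sets in clique-path order; the in-place shrinking C_j := C_{ji} is modelled by set difference on a local copy:

```python
def clique_partitioning(path):
    """Split a clique path into two families K1, K2 such that every
    vertex appears in exactly one chosen (possibly shrunken) clique.

    path: list of sets, the maximal cliques in clique-path order.
    Chosen cliques are assigned to K1 and K2 alternately; the last
    clique intersecting the chosen one is stripped of the overlap.
    """
    path = [set(c) for c in path]   # local copy, cliques get modified
    K = [[], []]                    # K[0] plays K1, K[1] plays K2
    l, c, i = len(path), 0, 0
    while i < l - 1:
        K[c % 2].append(frozenset(path[i]))
        c += 1
        # step (*): largest j > i whose clique still intersects path[i]
        j = i + 1
        while j + 1 < l and path[i] & path[j + 1]:
            j += 1
        path[j] -= path[i]          # C_j := C_j \ C_i
        i = j
    K[c % 2].append(frozenset(path[i]))
    return K[0], K[1]
```

On the clique path {1,2}, {2,3}, {3,4}, {4,5} this yields K1 = [{1,2}, {4}] and K2 = [{3}, {5}]: each vertex is covered exactly once, matching Lemma 15.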

Proof. After executing Clique Partitioning we solve the standard knapsack problems defined by each of the cliques in K₁ and K₂. By Lemma 15 every item appears in exactly one of these problems. By Lemmas 14 and 16, taking the union of the solutions over all cliques in K_l yields an optimal solution value z*_l, l = 1, 2, for the instance restricted to the subgraph induced by the vertices in the cliques of K_l. Since every feasible solution of I must fulfill the weight constraints for all cliques, we have z* ≤ z*₁ + z*₂. Hence, taking z^A := max{z*₁, z*₂} ≥ (1/2) z* immediately yields a 1/2-approximation algorithm.

To avoid the pseudopolynomial running time for solving the at most n occurring knapsack problems to optimality, we apply an FPTAS (cf. [0]) instead and thus obtain a polynomial running time. Summing up the solutions of the FPTAS with performance guarantee (1 − δ) we get approximate solution values z^A_l ≥ (1 − δ) z*_l. Putting things together we have z* ≤ z^A₁/(1 − δ) + z^A₂/(1 − δ), and again taking the maximum, (1 − δ) z* ≤ 2 max{z^A₁, z^A₂}. Choosing δ ≤ 2ε yields the statement of the theorem.

Corollary 18. There is a polynomial time algorithm with performance guarantee 1/2 for the resource allocation problem with proper interval graphs and uniform profits.

Proof. This result follows immediately since the separate knapsack subproblems with uniform profits can be solved to optimality by sorting the jobs in increasing order of weights.

A simple example with only three jobs shows that the performance ratio of Corollary 18 is tight.

Example.
j   | 1 | 2 | 3
p_j | 1 | 1 | 1
w_j | W | W | W
s_j | 1 | 2 | 4
t_j | 3 | 5 | 6

There are two maximal cliques C₁ = {1, 2} and C₂ = {2, 3}. Algorithm Clique Partitioning yields K₁ = {C₁} and K₂ = {C_{21}}. The approximation algorithm computes as solutions item 1 (or item 2) for z^A₁ and item 3 for z^A₂ and outputs one of them as maximum. The optimal solution packs both items 1 and 3, which yields a 1/2-approximation. Replacing items 1 and 3 by an instance for the knapsack problem where the (1
− ε)-approximation of the FPTAS is tight, it can be shown that the bound of Theorem 17 is also tight.

In practice the algorithm could also be modified in such a way that a maximum independent set of the path formed by the cliques is calculated. The weight of the vertices of this path is derived from an appropriate knapsack algorithm for the subgraphs. Clearly this modification may improve the solution and is definitely not worse than the value of the approach described in the algorithm. However, this modification cannot guarantee a better worst-case performance guarantee.

Remark. An alternative way for deriving a deterministic (1/2 − ε)-approximation algorithm can be found by using the unit interval representation of the proper interval graph [17]. However, such an approach does not yield a solution based on maximal cliques.

4 A simple greedy algorithm

In this section we study the performance of a simple greedy algorithm for the resource allocation problem with proper intervals and uniform profits as considered in Section 3. Since all jobs have the same profit, it is natural to sort the jobs in increasing order of weights and try to add them to the current solution in this order if feasibility is preserved. It will turn out that this straightforward algorithm Greedy has a tight worst-case ratio of 1/3. This does not improve upon the result of Corollary 18. However, we believe that the analysis of such an intuitive approach, which also yields an improved running time, is interesting in its own right.

Algorithm Greedy:
Sort the jobs in increasing order of weights.
GS := ∅  (GS is the set of jobs selected by Greedy)
for i = 1 to N do
  if GS ∪ {i} is feasible then GS := GS ∪ {i}
end for
z^G := |GS|  (z^G is the value of the greedy solution)

Let z* denote the optimal solution value of a resource allocation problem with proper intervals and profits p_j = 1 for j = 1, …, N.
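Algorithm Greedy can be sketched in Python as follows. The function name, the (weight, start, end) job encoding and the naive point-wise feasibility check are our own illustrative choices — the paper later replaces this O(N) check by a search-tree structure:

```python
def greedy(jobs, W):
    """Greedy for RAP with unit profits: scan jobs by increasing
    weight and add a job whenever capacity W is respected at every
    point in time covered by its interval.

    jobs: list of (weight, start, end) with closed intervals.
    Returns the indices of the selected jobs. The load only changes
    at interval endpoints, so checking those points is sufficient.
    """
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    points = sorted({p for _, s, e in jobs for p in (s, e)})
    load = {p: 0 for p in points}
    chosen = []
    for i in order:
        w, s, e = jobs[i]
        if all(load[p] + w <= W for p in points if s <= p <= e):
            for p in points:
                if s <= p <= e:
                    load[p] += w
            chosen.append(i)
    return sorted(chosen)
```

For instance, with unit-profit jobs [1,3], [2,5], [4,6], each of weight 1 and capacity W = 1, Greedy packs the first and third job — the middle job fails the feasibility test once the first is selected.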

Theorem 19. z^G ≥ (1/3) z*.

Proof. Let X* be the optimal set of jobs, i.e. z* = |X*|, and let J be the set of all jobs. The principle of the proof is very simple: we start with the set GS and iteratively remove jobs from GS \ X* and add jobs from X* \ GS until we have completely transformed GS into X*. It will be shown that whenever we remove a job, at most two jobs can be added, which suffices to show the claim. This transformation can be done by the following hypothetical algorithm, to be executed after performing Greedy:

Proof Construction:
Eliminate all jobs J \ (GS ∪ X*) from consideration.
S := GS
while S ≠ X* do
  j_m := arg max{w_j : j ∈ S \ X*}
  S := S \ {j_m}
  for all i ∈ X* \ S in increasing order of weights do
    if S ∪ {i} is feasible then S := S ∪ {i}
  end for
end while

The algorithm starts with S = GS and terminates with S = X*. If jobs are added to S in an iteration, the most recently removed job j_m will be called a split job.

Claim 1: After removing j_m, only jobs with weight ≥ w_{j_m} can be added to S.

This claim follows from the greedy strategy of the algorithm: assume that a job j ∉ S with w_j < w_{j_m} can be added to S. Then job j would have been packed also by Greedy before packing j_m, since at this point the set of packed jobs was a strict subset of the current set S (note that jobs are removed from S in decreasing order of weight).

Claim 2: After removing j_m, at most two jobs from X* \ S can be added.

Assume, contrary to the claim, that three jobs i₁, i₂, i₃ ∈ X* \ S are added after removing j_m. Note that due to Claim 1 we have w_{i₁} ≥ w_{j_m}, w_{i₂} ≥ w_{j_m}, w_{i₃} ≥ w_{j_m}. W.l.o.g. we assume that s_{i₁} < s_{i₂} < s_{i₃}. Let W_t := ∑_{j ∈ S: t ∈ I_j} w_j denote the weight of the jobs currently in S directly after the removal of j_m (i.e. before the possible addition of jobs) whose intervals contain t. In the previous iteration of our hypothetical algorithm, before removing j_m, all these three jobs were tested for possible inclusion, but must have failed the feasibility test. Hence, there must exist
three points in time q_l ∈ [s_{j_m}, t_{j_m}], l = 1, 2, 3, such that

W_{q_l} + w_{j_m} + w_{i_l} > W,  l = 1, 2, 3.  (13)

Case 1: s_{i₁} < s_{j_m}.
Case 1.1: s_{i₂} < s_{j_m}. Since t_{i₂} > t_{i₁}, there must be q₁ ∈ I_{i₂} ∩ I_{j_m} in this case. From (13) we get the violated feasibility constraint at time q₁ as W_{q₁} + w_{i₁} + w_{i₂} > W.
Case 1.2: s_{i₂} > s_{j_m}. Now we have t_{i₃} > t_{j_m} and hence q₂ ∈ I_{i₃} ∩ I_{j_m}. (13) yields the violated constraint W_{q₂} + w_{i₂} + w_{i₃} > W at time q₂.
Case 2: s_{i₁} > s_{j_m}. There is t_{i₁} > t_{j_m} and in analogy to Case 1.2 we have a violated constraint at time q₂ given by W_{q₂} + w_{i₁} + w_{i₂} > W.

Summarizing, we have shown that every removed split job allows the addition of only one or two jobs (namely in Case 1) not chosen by Greedy, which completes the proof.

The following example shows that the bound of Theorem 19 is tight.

Example. Consider the following instance with N = 3k jobs and ε > 0 chosen small enough (e.g., ε = 1/(6(k + 1))):

j   | 1, …, k | k + 1, …, 2k | 2k + 1, …, 3k
w_j | W/k | (W − ε)/k | W/k
s_j | 1 (j ε) | (j ε) 4 | (j ε)
t_j | + (j ε) | 5 + (j ε) | 6 + (j ε)

Greedy selects GS = {k + 1, …, 2k} with z^G = k, whereas the optimal solution consists of the complement with z* = 2k.

Table 1
Resource consumptions of the jobs a_i and ā_i, 1 ≤ i ≤ 2n.

Resource consumptions of jobs a_i | Resource consumptions of jobs ā_i
w(a₁) = b₁ + 2n | w(ā₁) = b₁ + 2
w(a₂) = b₂ + 2n | w(ā₂) = b₂ + 2
w(a₃) = b₃ + 2(n − 1) | w(ā₃) = b₃ + 4
w(a₄) = b₄ + 2(n − 1) | w(ā₄) = b₄ + 4
… | …
w(a_{n−1}) = b_{n−1} + (n + 2) | w(ā_{n−1}) = b_{n−1} + n
w(a_n) = b_n + (n + 2) | w(ā_n) = b_n + n
w(a_{n+1}) = b_{n+1} + n | w(ā_{n+1}) = b_{n+1} + (n + 2)
w(a_{n+2}) = b_{n+2} + n | w(ā_{n+2}) = b_{n+2} + (n + 2)
… | …
w(a_{2n−1}) = b_{2n−1} + 2 | w(ā_{2n−1}) = b_{2n−1} + 2n
w(a_{2n}) = b_{2n} + 2 | w(ā_{2n}) = b_{2n} + 2n

It is easy to show that the 1/3-approximation ratio of Greedy requires both proper intervals and uniform profits. The following easy example shows that Greedy can perform arbitrarily badly for uniform profits and non-proper intervals.

Example. We are given N = k + 1 jobs:

    | j = 1 | j = 2, …, k + 1
w_j | W − 1 | W
s_j | 1 | j
t_j | k + 2 | j

Greedy selects GS = {1}, while the optimal solution consists of X* = {2, …, k + 1} and hence z* = k z^G.

As can be expected, Greedy can also perform arbitrarily badly for general profits even on proper intervals, as shown by the following completely trivial example with only two jobs.

Example.
j   | 1 | 2
p_j | 1 | 2
w_j | W − 1 | W
s_j | 1 | 2
t_j | 3 | 4

Greedy selects job 1 with z^G = 1, while the optimal solution consists of job 2 with z* = 2.

The time complexity of Greedy is determined by O(N log N) for sorting and N feasibility checks. These can be done trivially in O(N) time, which yields a total running time of O(N²). However, a more involved representation of the current weight for each of the 2N starting and end points with a binary search tree permits a feasibility check in O(log N) time and can also be updated within the same time complexity. Thus an overall running time complexity of O(N log N) for Greedy can be obtained. Each subtree of this search tree represents an interval containing all points in time from the leftmost to the rightmost leaf node. In each node we store the range of this interval of the emanating subtree, the weight of
all selected jobs that cover the whole interval, and the maximal additional weight for any point of the interval. The details of searching and updating this data structure require some more elaboration, which is outside the scope of this paper.

Acknowledgements

We would like to thank Gaia Nicosia (Università Roma Tre) for introducing us to the resource allocation problem and pointing out several references.

Appendix A. Appendix to Section 2.1

Lemma 20. The cliques C₂, C₄, C₆, …, C_{2n}, I₂, I₄, …, I_{2n} and C̄₂, C̄₄, C̄₆, …, C̄_{2n} are the maximal cliques of instance R.

Proof. Among the cliques, none is a subset of another. Thus, in order to show that these cliques are maximal and that they are the only maximal cliques of the instance, it suffices to show that there is no other clique containing one of these 3n cliques. Now observe the following facts (going from left to right in Fig. 1).

Table 2
Start and end times of the jobs a_i and the job sets E_i and D_i ([start, end]).

Jobs E_i: E₂: [1, n + 8]; E₄: [2, n + 16]; …; E_{2i}: [i, n + 8i]; …; E_n: [n/2, 5n]; …; E_{2n−2}: [n − 1, 9n − 8]; E_{2n}: [n, 9n]

Jobs a_i: a_{2n}: [n + 8, 9n + 8]; a_{2n−1}: [n + 4, 9n + 4]; a_{2n−2}: [n + 16, 9n + 16]; a_{2n−3}: [n + 12, 9n + 12]; …; a_{2(n−i)+2}: [n + 8i, 9n + 8i]; a_{2(n−i)+1}: [n + 8i − 4, 9n + 8i − 4]; …; a_{n+2}: [5n, 13n]; a_{n+1}: [5n − 4, 13n − 4]; …; a₄: [9n − 8, 17n − 8]; a₃: [9n − 12, 17n − 12]; a₂: [9n, 17n]; a₁: [9n − 4, 17n − 4]

Job sets D_i: D_{2n}: [n + 10, 9n + 10]; D_{2n−2}: [n + 18, 9n + 18]; …; D_{2(n−i)+4}: [n + 8i − 6, 9n + 8i − 6]; …; D_{n+4}: [5n − 6, 13n − 6]

Table 3
Start and end times of the jobs ā_i resp. the job sets D̄_i and Ē_i ([start, end]).

Jobs ā_i: ā_{2n}: [9n + 7, 17n + 7]; ā_{2n−1}: [9n + 13, 17n + 13]; ā_{2n−2}: [9n + 15, 17n + 15]; ā_{2n−3}: [9n + 21, 17n + 21]; …; ā_{2i}: [17n − 8i + 7, 25n − 8i + 7]; ā_{2i−1}: [17n − 8i + 13, 25n − 8i + 13]; …; ā_{n+2}: [13n − 1, 21n − 1]; ā_{n+1}: [13n + 5, 21n + 5]; …; ā₄: [17n − 9, 25n − 9]; ā₃: [17n − 3, 25n − 3]; ā₂: [17n − 1, 25n − 1]; ā₁: [17n + 5, 25n + 5]

Jobs D̄_i: D̄_{2i}: [17n − 8i + 6, 25n − 8i + 6]; in particular D̄₂: [17n − 2, 25n − 2]; D̄₄: [17n − 10, 25n − 10]; …; D̄_{n+6}: [13n − 18, 21n − 18]

Jobs Ē_i: Ē_{2i}: [25n − 8i + 7, 33n − 8i + 7]; in particular Ē_{2n}: [17n + 7, 25n + 7]; Ē_{2n−2}: [17n + 15, 25n + 15]; …; Ē_{n+2}: [21n − 1, 29n − 1]; …; Ē₄: [25n − 9, 33n − 9]; Ē₂: [25n − 1, 33n − 1]

Table 4
The time positions of the maximal cliques in R.

Clique | C_{2i} | I_{2i} | C̄_{2i}
Time   | n + 8i | 17n − 8i + 8 | 25n − 8i + 7

Table 5
The maximal cliques in R with their respective jobs and job sets.

Index | Clique | Dummy jobs in the clique | Non-dummy jobs in the clique
i ∈ {2, 4, …, n} | C_i | E_i, E_{i+2}, …, E_{2n}; D_{i+4}, D_{i+6}, …, D_{2n} | a_{i+1}, a_{i+2}, …, a_{2n}
i ∈ {n + 2, …, 2n} | C_i | E_i, E_{i+2}, …, E_{2n}; D_{n+4}, D_{n+6}, …, D_{2n} | a_{i+1}, a_{i+2}, …, a_{2n}
i ∈ {n + 2, …, 2n} | I_i | D_{n+4}, D_{n+6}, …, D_i | a₁, a₂, …, a_{i−2}, a_i; ā_i, ā_{i+1}, …, ā_{2n}
i ∈ {2, 4, …, n} | I_i | D̄_i, D̄_{i+2}, …, D̄_{2n} | a₁, a₂, …, a_{i−2}, a_i; ā_i, ā_{i+1}, …, ā_{2n}
i ∈ {n, …, 2n} | C̄_i | Ē_i, Ē_{i+2}, …, Ē_{2n}; D̄₂, D̄₄, …, D̄_{2n} | ā₁, ā₂, …, ā_i
i ∈ {2, 4, …, n − 2} | C̄_i | Ē_i, Ē_{i+2}, …, Ē_{2n}; D̄₂, D̄₄, …, D̄_i | ā₁, ā₂, …, ā_i

Considering the transition from C_{2n} to I_{2n}, note that the job set E_{2n} and the job a₁ disappear. Noting that ā_{2n} is the only job j ∈ I_{2n} that is not contained in C_{2n}, it is obvious that each clique C ≠ C_{2n} in between C_{2n} and I_{2n} is contained in C_{2n}. Furthermore, any job that starts in between I_i and I_{i−2} is contained in the clique I_{i−2}. Due to these observations there are no maximal cliques in between I_i and I_{i−2}, and the cliques I_i are maximal, 2 ≤ i ≤ 2n. Since no clique C ≠ I₂ in between I₂ and C̄_{2n} contains a₂, I₂ also is a maximal clique. Analogously to the above observations, it follows that every job that starts in between I₂ and C̄_{2n} is contained in C̄_{2n}, and every job that starts in between C̄_{2i} and C̄_{2i−2} is contained in C̄_{2i−2}, i = 1, …, n − 1. Furthermore, it is clear that C̄₂ is the last maximal clique. As a consequence, all the cliques C₂, C₄, C₆, …, C_{2n}, I₂, I₄, …, I_{2n} and C̄₂, C̄₄, C̄₆, …, C̄_{2n} are maximal cliques, and besides these cliques no maximal clique exists.

Appendix B. Appendix to Section 2.2

An important feature of instance R is the total resource consumption of the dummy jobs contained in the maximal cliques. The following proposition summarizes the main resource consumptions; since the proposition is a result of simple algebra, the proof is omitted here for the sake of brevity.

Proposition 1. For instance R, the following statements hold:
1. The total resource consumption of the dummy jobs D_{n+4}, D_{n+6}, …, D_{2n} sums up to 2(n³ − n + 2).
2. For 1 ≤ i ≤ n − 1, the total resource consumption of the dummy jobs contained in the clique C_{2i} equals δ(C_{2i}) := 2(n³ + n² − ∑_{j=1}^{i} 2j). The total resource consumption of the dummy jobs contained in the clique C_{2n} equals δ(C_{2n}) := 2[(n³ + n² − ∑_{j=1}^{n} 2j) + 1].
3. The total resource consumption of the dummy jobs D̄₂, D̄₄, …, D̄_{2n} sums up to 2(n³ − n + 2).
4. For 1 ≤ i ≤ n − 1, the resource consumption of the dummy jobs in the clique C̄_{2i} equals δ(C̄_{2i}) := 2(n³ + n² − ∑_{j=1}^{i} 2j). The total resource consumption of the dummy jobs contained in the clique C̄_{2n} equals δ(C̄_{2n}) := 2[(n³ + n² − ∑_{j=1}^{n} 2j) + 1].

Lemma 21. Let X be a feasible solution of R. Let J(X) ≠ ∅ and let m := min J(X). For some l, 1 ≤ l < m, let neither a_{2l+2} nor a_{2l+1} be contained in X. Then the solution x₁ := X ∪ {a_{2l+1}} \ {a_{2m+1}} is a feasible solution of R.

Proof. In order to guarantee the feasibility of x₁ we need to show that the resource constraint is satisfied for all the maximal cliques. Obviously the constraint holds for all the cliques that do not contain a_{2l+1}; thus the constraint is trivially satisfied for all the cliques C_{2l′} with 1 ≤ l′ < l. Note that due to the definition of m, for all j < m, j ≥ 1, the jobs a_{2j+2} and a_{2j+1} cannot both be contained in X. Recall that X does not contain a_{2l+2}. Thus for all j < m, j ≥ 1, we get that the solution x₁ contains at most one of {a_{2j+2}, a_{2j+1}}. Observing that j < m implies j < n, we get

r_{x₁}(C_{2j}) ≤ δ(C_{2j}) + ∑_{a_k ∈ B_j(x₁)} a_k < 2(n³ + n² − ∑_{k=1}^{j} 2k) + ∑_{k=1}^{j} 2k + ∑_{k=1}^{j} 2k + 2 = 2(n³ + n² + 1) = W,  (14)

and hence the constraint is satisfied for all the cliques C_{2j} with j < m and j ≥ 1. For the cliques C_{2j} with j ≥ m, j ≤ n, it is sufficient to show that a_{2m+1} > a_{2l+1} holds. This follows directly from the definition of the weights and the fact that m > l: a_{2m+1} = b_{2m+1} + 2m > b_{2l+1} + 2l = a_{2l+1}. Analogously, x₁ satisfies the resource constraint for all the cliques I_{2i}, 1 ≤ i ≤ n, since each of these cliques that contains a_{2l+1} contains a_{2m+1} as well. Finally, the resource constraints for the cliques C̄_{2i}, 1 ≤ i ≤ n, are trivially satisfied because neither a_{2l+1} nor a_{2m+1} is
contained in any of these cliques.

Fig. 1. Interval graph and maximal cliques of instance R.

Lemma 22. Let X be a feasible solution of R. Let J(X) ≠ ∅ and let m := min J(X). If there are two dummy jobs b, c ∈ C_{2m} that are not contained in X, then x₁ := X ∪ {b, c} \ {a_{2m+1}} is a feasible solution of R.

Proof. Note that w(b) = w(c) = 2 implies w(b) + w(c) < w(a_{2m+1}). Thus, the weight constraint of x₁ is obviously satisfied for all cliques that contain a_{2m+1}. It remains to show that the resource constraint is satisfied for the cliques that do not contain a_{2m+1} but contain at least one element of {b, c}, i.e. for the cliques C_{2k}, 1 ≤ k < m. If m = 1 there is nothing to show. Let m > 1. Recall that due to the definition of m, for all j < m, j ≥ 1, we get |K_j⁺(X)| = |{a_{2j+2}, a_{2j+1}} ∩ X| ≤ 1. Thus, for the resource consumption of the clique C_{2j} we get r_{x₁}(C_{2j}) < W analogously to (14).

With Lemmas 21 and 22 it can be proven that any optimal solution of R contains at most n jobs of {a₁, a₂, …, a_{2n}} and at most n jobs of {ā₁, ā₂, …, ā_{2n}}, as stated in Section 2.1.

Lemma 5. Let X* be an optimal solution of R. Then A(X*) ≤ n and Ā(X*) ≤ n.

Proof. We give the proof for A(X*) ≤ n; the proof for Ā(X*) ≤ n follows analogously. Assume that A(X*) ≥ n + 1. Hence for some j, 1 ≤ j ≤ n, both a_{2j+2} and a_{2j+1} must be contained in X*. This implies that the set

J(X*) = {j ∈ {1, …, n} : |K_j⁺(X*)| = 2}  (15)

is non-empty. Let m := min J(X*). Obviously, for all j < m, j ≥ 1, we have |K_j⁺(X*)| ≤ 1. Now we distinguish the following cases.

Case 1: All the dummy jobs are contained in X*.
Case 1a: |J′(X*)| = 0. That is, for all j < m, j ≥ 1, we have |K_j⁺(X*)| = 1. Consider the clique C_{2m}. For this clique, the constraint on the resource consumption is

δ(C_{2m}) + ∑_{a_k ∈ B_m(X*)} a_k ≤ W.  (16)

However, with m ≥ 1 we get

δ(C_{2m}) + ∑_{a_k ∈ B_m(X*)} a_k > 2(n³ + n² − ∑_{j=1}^{m} 2j) + ∑_{j=1}^{m} 2j + ∑_{j=1}^{m} 2j + 2m = 2(n³ + n²) + 2m ≥ 2(n³ + n² + 1) = W,  (17)

contradicting (16). Thus we must have |J′(X*)| ≥ 1.

Case 1b: |J′(X*)| ≥ 1. For some l ∈ J′(X*), let y₁ := X* ∪ {a_{2l+1}} \ {a_{2m+1}}. y₁ is a feasible solution because of Lemma 21. Additionally we have p(y₁) = p(X*), A(y₁) = A(X*) ≥ n + 1 and |J′(y₁)| = |J′(X*)| − 1. Thus, for every optimal solution X with A(X) ≥ n + 1 and |J′(X)| ≥ 1 we can create an optimal solution y with A(y) ≥ n + 1 and |J′(y)| = |J′(X)| − 1. Hence, by iterating we finally find an optimal solution y with |J′(y)| = 0. However, Case 1a shows that this is not possible. Therefore, not all dummy jobs can be contained in X*.

Case 2: There exists a dummy job that is not contained in X*.
Case 2a: |J′(X*)| = 0. That is, for all j < m, j ≥ 1, we have |K_j⁺(X*)| = 1. Let z be the number of dummy jobs contained in C_{2m} that are elements of X*. If m < n, then we get from the weight constraint for the clique C_{2m}

W ≥ r_{X*}(C_{2m}) = ∑_{a_k ∈ B_m(X*)} a_k + 2z > ∑_{j=1}^{m} 2j + 2m + 2z = m(m + 1) + 2m + 2z = m(m + 3) + 2z.

Thus, 2z < W − m(m + 3), i.e. z < ½[2(n³ + n² + 1) − m(m + 3)]. However, from statement 2 in Proposition 1 we know that the clique C_{2m} contains (n³ + n² − ∑_{j=1}^{m} 2j) dummies. This implies that there are at least 4m dummy jobs in C_{2m} that are not contained in X*. Since m ≥ 1, this means that there exist dummies b, c ∈ C_{2m} that are not contained in X*. Due to Lemma 22 the solution X := X* ∪ {b, c} \ {a_{2m+1}} is feasible. However, p(X) = p(X*) + 1, in contradiction to the optimality of X*. Thus we must have |J′(X*)| ≥ 1.

Case 2b: |J′(X*)| ≥ 1. With l = min J′(X*), the solution y₁ := X* ∪ {a_{2l+1}} \ {a_{2m+1}} is feasible due to Lemma 21. We have A(y₁) = A(X*) ≥ n + 1 and thus J(y₁) ≠ ∅. Furthermore, p(y₁) = p(X*) and |J′(y₁)| = |J′(X*)| − 1. As in Case 1, either |J′(y₁)| = 0 or we can proceed to find an optimal solution y with |J′(y)| = 0. Either way we end up in Case 2a, which leads to a contradiction.

Proof of Corollary 7. We prove statement 1; statement 2 follows in an analogous manner. Assume that J′(X*) = ∅. Then we get r_{X*}(C_{2m}) > W exactly as in (17). This violates the resource constraint for the
clique C_{2m}. Next, assume that |J′(X*)| = 1. Let l ∈ J′(X*). Then we get, due to l < m,

r_{X*}(C_{2m}) = δ(C_{2m}) + ∑_{a_k ∈ B_m(X*)} a_k > 2(n³ + n² − ∑_{j=1}^{m} 2j) + ∑_{j=1}^{m} 2j + ∑_{j=1}^{m} 2j + 2m − 2l = 2(n³ + n²) + 2(m − l) ≥ 2(n³ + n² + 1) = W,

which again contradicts the resource constraint for C_{2m}.

Proof of Lemma 8. The first statement follows directly from Lemma 5. We show the second statement for K_j⁺(X*), 1 ≤ j ≤ n; the proof for K̄_j(X*), 1 ≤ j ≤ n, is analogous. Assume that for some j ∈ {1, …, n} we have |K_j⁺(X*)| ≠ 1. Since A(X*) = n this implies that the set J(X*) is non-empty. Let m := min J(X*). Corollary 7 implies that J′(X*) = {j < m : |K_j⁺(X*)| = 0, j ≥ 1} ≠ ∅ holds. Now let l := min J′(X*). Then, due to Lemma 21, y₁ := X* ∪ {a_{2l+1}} \ {a_{2m+1}} is a feasible solution of R. Note that |J(y₁)| = |J(X*)| − 1.