A robust APTAS for the classical bin packing problem

Leah Epstein (Department of Mathematics, University of Haifa, Haifa, Israel; lea@math.haifa.ac.il)
Asaf Levin (Department of Statistics, The Hebrew University, Jerusalem, Israel; levinas@mscc.huji.ac.il)

Abstract

Bin packing is a well studied problem which has many applications. In this paper we design a robust APTAS for the problem. The robust APTAS receives a single input item to be added to the packing at each step. It maintains an approximate solution throughout this process, by slightly adjusting the solution for each new item. At each step, the total size of items which may migrate between bins must be bounded by a constant factor times the size of the new item. We show that such a property cannot be maintained with respect to optimal solutions.

1 Introduction

Consider the classical online bin packing problem, where items arrive one by one and are assigned irrevocably to bins. Items have sizes bounded by 1 and are assigned to bins of size 1 so as to minimize the number of bins used. The associated offline problem assumes that the complete input is given in advance. We follow [12] and allow the online algorithm to change the assignment of items to bins whenever a new item arrives, subject to the constraint that the total size of the moved items is bounded by β times the size of the arriving item. The value β is called the migration factor of the algorithm. We call algorithms that solve an offline problem in the traditional way static, whereas algorithms that receive the input items one by one, assign them upon arrival, and can do some amount of re-packing using a constant migration factor are called dynamic or robust. An example we introduce later shows that an optimal solution for bin packing cannot be maintained using a dynamic (exponential) algorithm. Consequently, we focus on polynomial-time approximation algorithms.

Sanders, Sivadasan and Skutella [12] studied the generalization of the online scheduling problem where jobs that arrive one by one are assigned to identical parallel machines with the objective of minimizing the makespan. In their generalization they allow the current assignment to be changed whenever a new job arrives, subject to the constraint that the total size of moved jobs is bounded by β times the size of the arriving job. They obtained a dynamic polynomial time approximation scheme for this problem, extending an earlier polynomial time approximation scheme of Hochbaum and Shmoys [7] for the static problem. They noted that this result is of particular importance if considered in the context of sensitivity analysis: while a newly arriving job may force a complete change of the entire structure of an optimal schedule, only very limited local changes suffice to preserve near-optimal solutions.

For an input X of the bin packing problem we denote by OPT(X) the minimal number of bins needed to pack the items of X, and let SIZE(X) denote the sum of all sizes of items. Clearly, OPT(X) ≥ SIZE(X). For an algorithm B we denote by B(X) the number of bins used by B. It is known that no approximation algorithm for the classical bin packing problem can have a cost within a constant factor r of the minimum number of required bins for r < 3/2 unless P = NP. This leads to the usage of the standard quality measure for the performance of bin packing algorithms, which is the asymptotic approximation ratio or asymptotic performance guarantee.

A preliminary version of this paper appeared in the Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP 2006), part I.

The asymptotic approximation ratio for an algorithm A is defined to be

R(A) = limsup_{n→∞} sup_X { A(X)/OPT(X) : OPT(X) = n }.

The natural question, whether this measure allows an approximation scheme for bin packing, was answered affirmatively by Fernandez de la Vega and Lueker [3]. They designed an algorithm whose output never exceeds (1 + ε)OPT(I) + f(ε) bins for an input I and a given ε > 0. The running time was linear in n, but depended exponentially on ε; such a class of algorithms is called an APTAS (Asymptotic Polynomial Time Approximation Scheme). The function f(ε) depends only on ε and grows with 1/ε. Two later modifications simplified and improved this seminal result. The first modification allows the function f(ε) to be replaced by 1 (i.e., one additional bin instead of some function of ε), see [17], Chapter 9. The second one, by Karmarkar and Karp [10], allows to develop an AFPTAS (Asymptotic Fully Polynomial Time Approximation Scheme). This means that using a similar (but much more complex) algorithm, it is possible to achieve a running time which depends polynomially on 1/ε. The dependence on n is much worse than linear, and is not better than Θ(n^8). In this case the additive term remains f(ε). Karmarkar and Karp [10] also designed an algorithm which uses at most OPT(I) + log²(OPT(I)) bins for an input I.

In this paper, we design a robust APTAS for bin packing. In our point of view, the main advantage in obtaining an APTAS for the classical bin packing problem with a bounded migration factor is that such a scheme possesses a structure. Hence we are able to gain insight into the structure of the solution even though it results from exhaustive enumeration of a large amount of information.

Related work. The classical online problem was studied in many papers; see the survey papers [2, 1]. It was first introduced and investigated by Ullman [15]. The currently best results are an upper bound on the asymptotic performance ratio given by Seiden [14] and the lower bound given in [16]. From this lower bound we can deduce that in order to maintain a solution which is very close to optimal, the algorithm cannot be online in the usual sense. Several attempts were made to give a semi-online model which allows a small amount of modifications to the solution produced by the algorithm. We next review these attempts.

Gambosi, Postiglione and Talamo [5, 6] introduced a model where a constant number of items (or small items grouped together) can be moved after each arrival of an item. They presented two algorithms. The first moves at most three items on each arrival and has performance guarantee 3/2 = 1.5. The second algorithm moves at most seven items on each arrival and has an improved performance guarantee. The running times of these two algorithms are Θ(n) and Θ(n log n), respectively, where n is the number of items. Ivkovic and Lloyd [9] gave an algorithm which uses O(log n) re-packing moves (these moves are again of a single item or a set of grouped small items). This algorithm is designed to deal with departures of items as well as arrivals, and has performance guarantee 5/4. Ivkovic and Lloyd [9, 8] considered an amortized analysis as well, and showed that for every ε > 0, the performance guarantee 1 + ε can be maintained, with O(log n) amortized re-packing moves if ε is seen as a constant, and with O(log² n) re-packing moves if the running time must be polynomial in 1/ε.

However, the amortized notion here refers to a situation where for most new items no re-packing at all is done, whereas for some arrivals the whole input is re-packed. Galambos and Woeginger [4] adapted the notion of bounded space online algorithms (see [11]), where an algorithm may have a constant number of active bins, and bins that are no longer active cannot be activated again. They allow complete re-packing of the active bins. It turned out that the same lower bound as for the original (bounded space) problem holds for this problem as well, and re-packing only allowed the exact best possible competitive ratio to be reached with three active bins, instead of in the limit.

Outline. We review the adaptation of the algorithm of Fernandez de la Vega and Lueker [3], as it appears in [17], in Section 2. We then state some further helpful adaptations that can be made to the static

algorithm. In Section 3 we describe our dynamic APTAS and prove its correctness. This algorithm uses many ideas from [3]; however, the adaptation into a dynamic APTAS requires careful changes to the scheme. We show that the number of bins used by our APTAS never exceeds (1 + ε)OPT(X) + 1, where X is the list of items that has been considered so far. The running time is O(n log n), where n is the number of items, since the amount of work done upon arrival of an item is a function of ε times log n. In Appendix A we show an example in which there is no optimal solution that can be maintained with a constant migration factor.

2 Preliminaries

We review a simple version of the very first asymptotic polynomial time approximation scheme. This is the algorithm of Fernandez de la Vega and Lueker [3]. The algorithm is static, i.e., it considers the complete set of items in order to compute the approximate solution. Later in this section we adapt it and present another version of it, which is still static. This new version is used in our dynamic APTAS that is presented in the next section.

We are given a value 0 < ε < 1 such that the asymptotic performance guarantee should be at most 1 + ε, and 1/ε is an integer. Consider an input I for the bin packing problem. We define an item to be small if its size is smaller than ε/2. Other items are large.

Algorithm FL [3]:

1. Items are partitioned into two sets according to their size. The multiset of large items is denoted L and the multiset of small items is denoted T. We have I = L ∪ T.

2. A linear grouping is performed on the large items. Let n be the number of large items in the input (n = |L|), and let a_1 ≥ a_2 ≥ ... ≥ a_n be these items. Let m = 2/ε². We partition the sorted set of large items into m consecutive sequences S_j (j = 1, ..., m) of k = ⌈n/m⌉ = ⌈nε²/2⌉ items each (to make the last sequence be of the same cardinality, we define a_i = 0 for i > n). I.e., S_j = {a_{(j−1)k+1}, ..., a_{(j−1)k+k}} for j = 1, 2, ..., m. For j ≥ 2, we define a modified sequence Ŝ_j which is based on the sequence S_j as follows. Ŝ_j is a multiset which contains exactly k items of size a_{(j−1)k+1}, i.e., all items are rounded up to the size of the largest element of S_j. The set S_1 is not rounded and therefore Ŝ_1 = S_1. Let L'' be the union of all multisets Ŝ_j and let L' = ∪_{j=2}^{m} Ŝ_j.

3. The input L' is solved optimally.

4. A packing of the complete input is obtained by first replacing the items of Ŝ_j in the packing by the items of S_j (the items of S_j are never larger than the items of Ŝ_j, and so the resulting packing is feasible), and second, using k bins to pack each item of S_1 in a separate bin. Last, the small items are added to the packed bins (with the original items, without rounding) using some Any Fit algorithm (where Any Fit is a class of algorithms which open a new bin to accommodate a new item if and only if it cannot be packed into any previously opened bin). Additional bins can be opened for small items if necessary.

Lemma 1 Algorithm FL is a polynomial time algorithm.

Proof. It suffices to show that Step 3 can be executed in polynomial time (the other steps are trivially polynomial). The input L' contains Θ(1/ε²) distinct sizes of elements. Next, we show how an exact optimal packing of L' can be found via integer programming. This is an integer program formulated for computing the optimal solution of a bin packing instance with a constant number of distinct large sizes. Let t be the number of sizes. Let ε/2 ≤ b_1, ..., b_t ≤ 1 be the set of sizes. We represent a multiset of items by a vector J = (j_1, ..., j_t),

where j_i is the number of items of size b_i. Let N = (n_1, ..., n_t) denote an input. A pattern is a vector of non-negative integers such that the multiset of items represented by it can fit in a single bin, i.e., p is a pattern if Σ_{i=1}^{t} p_i b_i ≤ 1. Let P be the set of all patterns. A packing can be described by specifying, for every p ∈ P, the number of bins y_p that are packed using pattern p. Note that an empty pattern (for which p_i = 0 for 1 ≤ i ≤ t) is considered to be a legal pattern. We now argue that |P| ≤ (t + 1)^{2/ε}. A bin can contain at most 2/ε large items. There are t + 1 options for each item (an item can be absent as well as of any size among the t possible sizes). This gives an upper bound of (t + 1)^{2/ε} on the number of patterns |P|. A vector y of non-negative integers, indexed by P, specifies a valid packing of an input N into l bins if and only if the following constraints hold:

Σ_{p∈P} y_p = l,  and for all 1 ≤ i ≤ t,  Σ_{p∈P} p_i y_p = n_i.   (1)

The number of variables is constant for constant t and ε. The number of constraints is t + 1, which is constant as well. Therefore, the existence of a feasible integer solution can be decided in constant time. Note that the value of t used is m = 2/ε². To find the minimum value of l for which there exists a feasible solution, we perform an exhaustive search. Clearly OPT ≤ n, and therefore we achieve a linear time algorithm (a binary search is possible as well to achieve a faster search). We note that Step 3 can be implemented by applying a dynamic programming procedure. We presented the integer programming in fixed dimension approach since our modification in Section 3 is based on it.

We next analyze the performance guarantee of Algorithm FL. For two multisets A, B, we say that A is dominated by B, and denote A ≼ B, if there exists an injection f : A → B such that for all a ∈ A, f(a) ≥ a.

Lemma 2 If A and B are multisets such that A ≼ B, then OPT(A) ≤ OPT(B).

Proof. Any packing for B can be converted into a packing for A by removing all items b ∈ B for which there is no element a' ∈ A such that f(a') = b, and replacing each other element b̂ ∈ B by the element â ∈ A such that f(â) = b̂. Since in this situation â ≤ b̂, the resulting packing is feasible.

Theorem 3 Algorithm FL is an APTAS.

Proof. The algorithm returns a feasible solution (because all items are placed in bins) in polynomial time by Lemma 1. It remains to show the asymptotic performance guarantee of Algorithm FL. We first show that L' ≼ L ≼ L''. The mapping from L to L'' is defined by mapping each item into the rounded item which corresponds to it in L''. A mapping from L' to L is defined as follows. The items of Ŝ_j are mapped into the items of S_{j−1}. Since a_{(j−1)k+1} ≤ a_{(j−1)k}, the mapping is valid. Therefore, by Lemma 2, OPT(L') ≤ OPT(L) ≤ OPT(L''). However, L' and L'' differ by only k elements, and therefore OPT(L'') ≤ OPT(L') + k.

First, assume that the addition of small items does not result in additional bins. The returned solution costs OPT(L') + k ≤ OPT(L) + k ≤ OPT(I) + nε²/2 + 1. Note that since there are exactly n large items, SIZE(I) ≥ nε/2, and therefore we obtain a solution of cost at most OPT(I) + ε·OPT(I) + 1 = (1 + ε)OPT(I) + 1. This proves the claim in the case that adding the small items does not result in additional bins. Otherwise, at least one new bin is opened for small items; this means that all bins, except possibly the last one, are full by at least 1 − ε/2. Let l be the number of used bins. Then SIZE(I) > (1 − ε/2)(l − 1). Therefore, l < SIZE(I)/(1 − ε/2) + 1 ≤ (1 + ε)OPT(I) + 1, where the last inequality holds since (1 + ε)(1 − ε/2) ≥ 1 holds for ε < 1.
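To make the linear grouping and the pattern-based integer program concrete, the following Python sketch implements Step 2 of Algorithm FL and enumerates the set P of patterns from the proof of Lemma 1. It is an illustration only; the function names and the handling of the last (possibly shorter) group are our own choices, and the exact optimal packing of L' (Step 3) is not reproduced here.

    from itertools import combinations_with_replacement
    from math import ceil

    def linear_grouping(large_items, eps):
        """Step 2 of Algorithm FL: sort the large items non-increasingly, split them
        into m = 2/eps^2 consecutive groups of k = ceil(n/m) items, keep S_1 as is,
        and round every item of S_j (j >= 2) up to the largest item of S_j."""
        items = sorted(large_items, reverse=True)
        n = len(items)
        m = ceil(2 / eps ** 2)
        k = ceil(n / m)
        groups = [items[i:i + k] for i in range(0, n, k)]  # last group may be shorter
        s1 = groups[0]
        rounded = []  # this is L': groups 2..m with every item rounded up
        for group in groups[1:]:
            rounded.extend([group[0]] * len(group))
        return s1, rounded

    def feasible_patterns(rounded_sizes):
        """Enumerate the patterns of the integer program in Lemma 1: every multiset of
        the distinct rounded sizes whose total size fits into one bin.  Since each
        large item has size at least eps/2, a bin holds at most 2/eps of them, so the
        number of patterns is a constant for fixed eps."""
        distinct = sorted(set(rounded_sizes), reverse=True)
        max_per_bin = int(1 / min(distinct))  # crude bound on items per bin
        patterns = set()
        for r in range(max_per_bin + 1):      # r = 0 yields the empty pattern
            for combo in combinations_with_replacement(distinct, r):
                if sum(combo) <= 1:
                    patterns.add(combo)
        return patterns

    # Tiny example with eps = 1/2 (so m = 8 and every group is a singleton here).
    s1, rounded = linear_grouping([0.9, 0.8, 0.6, 0.55, 0.5, 0.4, 0.3, 0.25], eps=0.5)
    print(s1, rounded, len(feasible_patterns(rounded)))

Finding the minimum number of bins then amounts to searching for the smallest l for which constraints (1) admit an integral solution over these patterns, exactly as described above.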
Before we proceed to define a revised version of the static algorithm and our robust algorithm, we give some motivation for the changes. Specifically, we show that Algorithm FL is not robust.

Proposition 4 Algorithm FL is not robust.

Proof. Consider the following input, consisting of only large items. Let C be a constant integer value and let N be an arbitrarily large integer. We let n = 2NC³ and ε = 1/C. The input at this time consists of NC items of size ε and n − NC = 2NC³ − NC items of size 1. We have m = 2/ε² = 2C² and k = nε²/2 = NC. Therefore, there are m − 1 multisets of large items S_1, ..., S_{m−1} whose largest item is of size 1, and one multiset S_m whose largest item is of size ε. Thus, the input does not change as a result of the rounding. An optimal solution of the input L' (which consists of 2NC³ − 2NC items of size 1, and NC items of size ε) would be to pack each unit sized item in a separate bin, which results in 2NC³ − 2NC such bins, and to pack N bins, each containing C items of size ε. No matter which method is used to find such an optimal solution (even if a linear programming relaxation is used), this is a valid optimal solution that may be produced (note that this is the unique integer optimal solution).

Next, a sequence of new items arrives. We use n', m' and k' to denote the values of n, m and k after the arrival of 2C² additional items. These items are all of size 1. We now have n' = 2NC³ + 2C², m' = m = 2C², and k' = (2NC³ + 2C²)/(2C²) = NC + 1 = k + 1. Therefore, the multisets resulting from the linear grouping have one additional item per multiset. Thus, since there are only NC items of size ε, the largest item of each multiset is of size 1, and the rounded input has only items of this size. Therefore, the optimal solution for the rounded instance contains only bins packed with a single item per bin. We get that in the process of arrival of the new 2C² items, the N bins which used to contain C items each need to be destroyed. For that, at most one item can stay in each bin and the rest must migrate to other bins. Thus, the total size of items that migrate is at least N(1 − ε) ≥ N/2. Since this is done in at most 2C² steps, there exists a step in which items whose total size is at least N/(4C²) migrate. Hence, the migration factor is at least N/(4C²). Since C = 1/ε is a constant and N can be arbitrarily large as a function of ε, we conclude that the migration factor of Algorithm FL is unbounded.

Algorithm Revised FL: We now design a new static adaptation of Algorithm FL which is later generalized into a dynamic APTAS. We modify only Step 2, as follows. The multisets S_j are defined similarly to before, with the following changes. The multisets do not need to have the same size, but their cardinalities need to be monotonically non-increasing. That is, for all j, |S_j| ≥ |S_{j+1}|. Moreover, we require that if nε² ≥ 8, then |S_1| ≤ ε²n/2 and |S_m| ≥ |S_1|/4. Otherwise (i.e., if nε² < 8) each set S_i (for i ≥ 1) has a single element. The rounding is done as follows. Given a multiset S_j which consists of elements c_1 ≥ ... ≥ c_k, the elements are rounded up into at most two values. Let 1 < s ≤ k; then all elements c_1, ..., c_{s−1} are rounded into c_1, and the elements c_s, ..., c_k are rounded into c_t for some 2 ≤ t ≤ s.

The proof of Theorem 3 extends easily to this adaptation as well. Specifically, the number of distinct sizes in the rounded instance is constant (depending on ε), and so is the number of patterns. The mapping is defined similarly, and the set S_1 still satisfies |S_1| ≤ ε²n/2, so it is small enough to be packed into separate bins. If the small items which are added using Any Fit cause the usage of additional bins, the situation is exactly the same as before. Therefore, we establish the following theorem.

Theorem 5 Algorithm Revised FL is an APTAS.

In the sequel we show how to maintain the input grouped as required by Algorithm Revised FL. We also show that the difference between the packings of two subsequent steps is small enough that it can be achieved using a constant (as a function of ε) migration factor, as in [12].
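To illustrate the modified rounding rule of Algorithm Revised FL, here is a minimal Python sketch of rounding a single group to at most two values; the function name and the 1-based index convention are ours and only mirror the description above.

    def round_group_two_values(group, s, t):
        """Round a group c_1 >= c_2 >= ... >= c_k (passed sorted non-increasingly)
        to at most two values, as in Algorithm Revised FL: items c_1, ..., c_{s-1}
        are rounded up to c_1 and items c_s, ..., c_k are rounded up to c_t, for
        indices 1 < s <= k and 2 <= t <= s (1-based, matching the text)."""
        k = len(group)
        assert all(group[i] >= group[i + 1] for i in range(k - 1))
        assert 1 < s <= k and 2 <= t <= s
        return [group[0]] * (s - 1) + [group[t - 1]] * (k - s + 1)

    # Example: with s = 3 and t = 2 the two largest items are rounded to c_1 and
    # the remaining items are rounded to c_2.
    print(round_group_two_values([0.9, 0.8, 0.7, 0.6], s=3, t=2))  # [0.9, 0.9, 0.8, 0.8]

During regular steps of the dynamic scheme of Section 3, each group is rounded to a single value; the two-value form above is only needed while groups are being restructured.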

3 APTAS with an f(1/ε) migration factor

In this section we describe our dynamic APTAS for bin packing, with the additional property that the migration factor of the scheme is f(1/ε), and therefore a constant migration factor for a fixed value of ε. We use the following notation. The number of large items among the first t items is denoted n(t). The size of the i-th arriving item is b_i. The value m is defined as in the previous section, m = 2/ε². We denote by OPT(t) the number of bins used by an optimal solution for the first t items. We assume that after the first t items we have a feasible solution that uses at most (1 + ε)OPT(t) + 1 bins, and show how to maintain such a solution after the arrival of a new item of index t + 1. Later on we describe several structural properties that our solution satisfies, and show how to maintain these properties as well. These structural properties help us to establish the desired migration factor.

Similarly to Algorithm Revised FL, we treat small items and large items differently. Recall that an item is small if its size is smaller than ε/2. When a small item arrives, we use an Any Fit algorithm to find it a suitable bin. It remains to consider the case where the (t + 1)-th item is a large item, i.e., b_{t+1} ≥ ε/2. We keep the following structural properties throughout the algorithm.

1. If n(t) ≥ 4m + 1, then the sorted list of large items which arrived so far is partitioned into M(t) consecutive sequences S_1(t), ..., S_{M(t)}(t), where 4m + 1 ≤ M(t) ≤ 8m + 1, such that |S_1(t)| ≥ |S_2(t)| = |S_3(t)| = ... = |S_{M(t)}(t)|. Otherwise, if n(t) ≤ 4m, then the sorted list of large items which arrived so far is partitioned into n(t) consecutive sequences S_1(t), ..., S_{n(t)}(t), each of which has a single large element.

2. Denote K(t) = |S_{M(t)}(t)|; then |S_1(t)| ≤ 4K(t).

3. There are two special subsets of S_1(t), denoted by S'_1(t) and S''_1(t) (these sets might be empty at some times). Each of these two sets contains at most K(t) items. The sets are special in the sense that we treat them as separate sets while rounding, and in the rounded up instance S'_1(t) and S''_1(t) are rounded. However, in the analysis the cost of the solution is bounded by allowing each element of S_1(t) to be packed in its own bin.

The time index t can be omitted from the notation of a set or a parameter if the time it belongs to is clear from the context. Therefore, when e.g. we discuss the set S_1, this is the set S_1(t) that is associated with the discussed time. The size of the set S_1 is defined according to Algorithm Revised FL. We have n ≥ MK > 4mK, and therefore |S_1| ≤ 4K < n/m = nε²/2. Note that for n ≤ 4m each list consists of at most one element. Therefore, we do not apply rounding to the large items, and we find an optimal solution to the original set of large items, excluding the largest item.

We turn now to discuss steps, where each step is an arrival of a large item. We partition the steps into three types: regular steps, creation steps and union steps. An insertion of a new large item operation takes place in all types of steps. A creation of new sets operation takes place only in creation steps. A union of sets operation takes place only in union steps. We also maintain the following property. During regular steps or creation steps, no set is rounded to a pair of values, but for each set S_j (j ≥ 2) the items of S_j are rounded up to the largest size of any element of S_j. However, during union steps, each set is rounded up to a pair of values (as in Algorithm Revised FL).

Since the structural properties refer only to the partition of the sorted list of large items, the packing of an arriving small item (using an Any Fit algorithm) does not affect the structural properties. Therefore, the following lemma holds.

Lemma 6 The arrival of a small item and allocating it according to an Any Fit algorithm maintains the structural properties.
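The three structural properties can be read as an invariant on a small amount of bookkeeping state. The following Python sketch records that state and checks the invariant; the class and field names are ours and are meant only as a restatement of the properties above.

    from dataclasses import dataclass, field

    @dataclass
    class LargeItemPartition:
        """Bookkeeping for the large items: the partition S_1, ..., S_M of the
        sorted list of large items, plus the two special subsets of S_1."""
        eps: float
        groups: list                                   # groups[0] is S_1, groups[-1] is S_M
        s1_prime: list = field(default_factory=list)   # S'_1
        s1_second: list = field(default_factory=list)  # S''_1

        def satisfies_structural_properties(self):
            m = int(2 / self.eps ** 2)
            n = sum(len(g) for g in self.groups)
            M = len(self.groups)
            if n <= 4 * m:
                # Property 1, small case: every sequence holds a single large item.
                return all(len(g) == 1 for g in self.groups)
            K = len(self.groups[-1])                    # K(t) = |S_M(t)|
            return (4 * m + 1 <= M <= 8 * m + 1                     # property 1
                    and all(len(g) == K for g in self.groups[1:])   # |S_2| = ... = |S_M|
                    and K <= len(self.groups[0]) <= 4 * K           # |S_1| >= K and property 2
                    and len(self.s1_prime) <= K                     # property 3
                    and len(self.s1_second) <= K)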

We next define a series of operations on the sequences. We make sure that when a new item arrives the properties are maintained and the migration factor is constant. When a new item arrives, we first apply the insertion of a new large item operation. Afterwards, if the current step is a creation step or a union step, we apply the creation of new sets operation or the union of sets operation, respectively. When we bound the migration factor, note that changes to the allocation of items into bins are made only when large items arrive. Hence, we only need to consider the case where the size of the arriving item, of index t + 1, is at least ε/2. Therefore, if we can prove that the allocation is changed only for a set of items that we allocated to a fixed number of bins (their number is a function of ε), then we get a constant migration factor throughout the algorithm.

Insertion of a new large item. When a large item arrives, then if n ≤ 4m + 1 we add a new list that contains the new item as its unique element. Note that in this case the resulting set of lists satisfies the structural properties. Otherwise (i.e., n ≥ 4m + 2) we first compute the list to which it belongs, and add it there. The list to which it belongs, S_j, is defined as follows. If the new item is larger than an existing item in S_1, then this list is S_1. Otherwise, we find the set S_{j+1} of smallest index such that the new item is larger than all its elements, and the list to which the item belongs is S_j. We move up items from S_i to S_{i−1} for all 2 ≤ i ≤ j; this operation is defined as follows. We move the largest item of S_i to S_{i−1}, and afterwards we change the value to which the sizes of the items of S_i are rounded up, into the size of the new largest item of S_i. When we consider the effect this operation has on the feasibility of the integer program, we can see that the right hand side does not change (the size of S_r is not affected for all values of r such that 2 ≤ r ≤ M); however, new patterns arise as the size of the rounded up instance is smaller (and so we can pack more items into a bin in some cases). The additional patterns mean new columns of the feasibility constraint matrix. Note that adding the new large item into its list takes O(log n) time (as we need to maintain sorted lists of the large elements).

Theorem 7 [Corollary 17.2a, [13]] Let A be an integral m × d matrix such that each sub-determinant of A is at most Δ in absolute value, let û and u' be column m-vectors, and let v be a row d-vector. Suppose max{vx : Ax ≤ û; x is integral} and max{vx : Ax ≤ u'; x is integral} are finite. Then, for each optimum solution y of the first maximum there exists an optimum solution y' of the second maximum such that ‖y − y'‖_∞ ≤ dΔ(‖û − u'‖_∞ + 2).

Lemma 8 Let A be the constraint matrix of the feasibility integer program. Let d be the number of columns of A and let Δ be the maximum absolute value of a sub-determinant of A. Then, throughout the algorithm dΔ is bounded by a constant (for a fixed value of ε).

Proof. We first bound d, that is, the number of columns in the matrix A, i.e., the number of possible patterns. In the proof of Lemma 1 we showed that the number of patterns is at most (2M + 1)^{2/ε} (in our rounded up instance we have at most 2M distinct sizes). As a result of the structural properties we conclude that M ≤ 8m + 1. Recall that m = 2/ε². Therefore, d ≤ (2(8m + 1) + 1)^{2/ε} = (32/ε² + 3)^{2/ε}. Note that for a fixed value of ε, the right hand side of the last inequality is constant. We next bound Δ. The maximum dimension of a square sub-matrix of A is at most the number of rows in A, which is at most 2M + 2 ≤ 2(8m + 1) + 2 = 16m + 4 = 32/ε² + 4. Each entry in A is at most 2/ε, as the number of large items in a bin is at most 2/ε. Therefore, Δ ≤ ((32/ε² + 4) · 2/ε)^{32/ε² + 4}. Note again that for a fixed value of ε we obtain a constant upper bound on Δ.

The proof of the following lemma is similar to the analysis of [12].

Lemma 9 Assume that before the arrival of the current large item, there is a feasible solution y to the feasibility integer program of the rounded up instance L'. After we apply an insertion of a new large item operation, if the feasibility integer program is feasible then there is a solution y' to it such that it suffices to re-pack the items that reside in a constant number of bins.

Proof. Denote by Ay = u the constraints of the feasibility integer program of the rounded up instance. The matrix A is the constraint matrix and u is the right hand side. Note that the columns of A correspond to patterns and the rows correspond to different sizes of items. The constraint that corresponds to an item size a has the following meaning: the total number of items of size a that we allocate over all possible patterns is exactly the number of items of size a in the rounded up instance.

We first assume that n(t) ≥ 4m + 1. Now consider the change in the constraint matrix A when a new large item of size b_{t+1} arrives. Let A' denote the modified A, which is the feasibility constraint matrix for all the items in L' and the one extra new element of S_1. First, the cardinality of S_1 increases by 1 (at the end of the move up operation), and this does not change the constraint matrix, as we do not have a constraint associated with S_1. Next, we consider the decrease in the size of rounded up items. Note that all the patterns that were feasible in the previous stage clearly remain feasible (given a set of items that can be packed in a single bin, decreasing the size of some of the items still allows to pack them into a single bin). Therefore, the matrix A of the previous stage is a sub-matrix of the matrix after we apply the insertion of a new large item operation. The difference is a possible addition of columns (corresponding to patterns that were infeasible before we decreased the size of some items and before we added the new item, but are now feasible). To bound the change in u, denote the new right hand side by u'. The set S_1 is not represented in A', therefore there is no change and u' = u. Let û = u'. Then, in the case where n(t) ≥ 4m + 1 the change in the right hand side is bounded by a constant, i.e., ‖u' − û‖_∞ = 0.

Otherwise, n(t) ≤ 4m, and the feasibility integer program has a new row that corresponds to the new large item and whose right hand side value is 1. Note that all the patterns that were feasible in the previous stage clearly remain feasible. Therefore, the matrix A of the previous stage is a sub-matrix of the matrix after we apply the insertion of a new large item operation. The difference is a possible addition of columns that pack the new item as well, and a new row that corresponds to the new item. To bound the change in u, denote the new right hand side by u'. Let û be a right hand side equal to u', except for one entry, the component that corresponds to the new row of A', where it equals zero. In this case ‖u' − û‖_∞ = 1.

In the remainder of this proof we do not distinguish between the case where n ≥ 4m + 1 and the case where n ≤ 4m. We can extend y to a vector ŷ that is a feasible solution of A'ŷ = û. To do so, we define the entries of ŷ that correspond to the new columns in A' compared to A to be zero. In the other components (whose columns exist in A) the value of ŷ is exactly the value of y. In order to prove the claim it is enough to show that there is a feasible solution y' such that ‖ŷ − y'‖_∞ is a constant (then we re-pack the items from the bins that correspond to the difference between ŷ and y'). Recall that A'ŷ = û is feasible by the construction of ŷ, and we assume that the new feasibility integer program A'y' = u' is feasible as well. Therefore, the assumptions of Theorem 7 are satisfied. Therefore, by Theorem 7, there is a feasible integer solution y' such that ‖ŷ − y'‖_∞ ≤ dΔ(‖û − u'‖_∞ + 2). We would like to bound the right hand side of the last inequality by a constant. By Lemma 8, d and Δ are bounded by a constant. We have already bounded ‖û − u'‖_∞ by the constant 1, and this completes the proof.

Therefore, the moving up operation causes constant migration. However, in order to prove the performance guarantee of the algorithm we need to show how to maintain the structural properties. To do so, note that the only set whose cardinality increases during the moving up operation is S_1, and therefore we need to show how to deal with cases where S_1 is too large.

Creation of new sets: After S_1 has exactly 3K items, we start a new operation that we name creation of new sets, which lasts for K steps (in each such step a new large item arrives and we charge the operation done in the step to this new item). We consider the items c_1 ≥ c_2 ≥ ... ≥ c_{3K} of S_1. We create new sets S'_1 and S''_1, where eventually S'_1 = {c_{K+1}, ..., c_{2K}} and S''_1 = {c_{2K+1}, ..., c_{3K}}. In each step we will have already rounded i items from S'_1 and from S''_1 to their target values, and i is increased by 1 each step. So after i steps of this creation of new sets operation, the rounded up instance has i copies of the items c_{K+1} and c_{2K+1}.

Then, the resulting instance can be solved in polynomial time, where we put each item of S_1 that has not already been rounded up to either c_{K+1} or to c_{2K+1} in a separate bin. Rounding up two items at each step results in a constant change in the right hand side of the feasibility integer program, and therefore increases the migration factor by at most an additive constant, as we prove in Lemma 10 below. At the end of K steps we declare the sets S'_1 and S''_1 to be the new S_2 and S_3, increasing M by two. Each of the new sets contains exactly K elements and the new S_1 contains exactly 2K elements, and therefore if M(t) ≤ 8m − 1 we are done, while keeping a constant migration factor. Otherwise, the next time the creation of new sets operation takes place we would violate the structural properties, and therefore we currently initiate the union of sets operation, which lasts for K steps as well. Note that we never apply both the creation of new sets and the union of sets operations at the same step.

The moving up procedure during the insertion of a new large item operation will increase S_1 further, and in this case we also apply this procedure to S'_1 and S''_1, thus decreasing the value to which we round up the items that belong to S'_1 and S''_1. So in fact, during the creation of new sets operation, S_1 is partitioned into five sets S_1^1, S'_1, S_1^2, S''_1, S_1^3, such that the items in S_1^1 are the largest items of S_1, S_1^2 contains the items whose size is smaller than the items in S'_1 but larger than the items in S''_1, and S_1^3 contains the other items of S_1. Then, in each step we increase the size of S_1^1, S'_1 and S''_1 by one item each, whereas the size of S_1^2 and S_1^3 is decreased by one. This means that during the move up operation we will move up items also within this collection of five subsets of S_1.

Lemma 10 Assume that before we apply the creation of new sets operation and after we finish the insertion of a new large item operation, there is a feasible solution y to the feasibility integer program of the rounded up instance L'. After we apply the creation of new sets operation, if the feasibility integer program is feasible then there is a solution y' to it such that it is enough to re-pack the items that reside in a constant number of bins.

Proof. Denote by Ay = u the constraints of the feasibility integer program of the rounded up instance. So A is the constraint matrix and u is the right hand side. Now consider the change in A when we apply the creation of new sets operation. If S'_1 and S''_1 are just initialized, we add two rows to A that correspond to S'_1 and S''_1; they each have a right hand side equal to 1. If we are already after at least one step of the creation process, S'_1 and S''_1 already exist in the matrix, and the corresponding right hand side increases by 1 in these two entries. Regarding the change in u, denote the new right hand side by u'. Then, u' has either two new entries equal to 1 (that correspond to the new rows of A), or two existing (old) entries change by one unit each (those that correspond to S'_1 and S''_1). The other entries of u are left without a change. We denote by û the vector resulting from u' by decreasing each of the components that correspond to S'_1 and S''_1 by one. Note that this time A does not have new columns. Similarly to the proof of Lemma 9, in order to prove the claim it is enough to show that there is a feasible solution y' such that ‖y − y'‖_∞ is a constant (then we re-pack the items from the bins that correspond to the difference between y and y'). Recall that we assume that Ay = û and Ay' = u' are feasible integer programs. Therefore, the assumptions of Theorem 7 are satisfied. Therefore, by Theorem 7, there is a feasible integer solution y' such that ‖y − y'‖_∞ ≤ dΔ(‖û − u'‖_∞ + 2). We would like to bound the right hand side of the last inequality by a constant. By Lemma 8, d and Δ are bounded by a constant. It remains to bound ‖û − u'‖_∞ by a constant. However, û differs from u' in exactly two components, where the difference between these two vectors equals one. Therefore, the claim follows.

Union of sets: When the number of sets reaches 8m + 1 we start the following operation, which lasts for K steps (where again a step means an arrival of a large item). First we declare each pair of consecutive sets as a new set. That is, for 2 ≤ j ≤ 4m + 1 we let S_j(t + 1) be S_{2j−1}(t) ∪ S_{2j−2}(t), but we still do not change the way the rounding is performed.

So in the resulting partition into sets, each set has exactly 2K items, and we declare this the new value of K, i.e., K(t + 1) = 2K(t), and the number of sets now is M = 4m + 1. However, each set is rounded to a pair of values, and this is something we will recover in the next steps. While the rounding of each set is to two values, denote by S'_j the items that we round to the largest item of S_j, and by S''_j = S_j \ S'_j. In each step, for all j ≥ 2 we move the largest item of S''_j to S'_j. Thus, we round this moved item to the largest element of S_j, and do not change the value to which we round up the items of S''_j. Both these changes do not increase the migration factor by much, as we prove below in Lemma 11. At the end of this procedure we end up with a collection of subsets, each rounded to a common value, and finish the union of sets operation.

Lemma 11 Assume that before we apply the union of sets operation and after we finish the insertion of a new large item operation, there is a feasible solution y to the feasibility integer program of the rounded up instance L'. After we apply the union of sets operation, if the feasibility integer program is feasible then there is a solution y' to it such that it is enough to re-pack the items that reside in a constant number of bins.

Proof. Denote by Ay = u the constraints of the feasibility integer program of the rounded up instance. So A is the constraint matrix and u is the right hand side. Now consider the change in A when we apply the union of sets operation. We do not add new rows or columns to A, and hence A is unchanged. Regarding the change in u, denote the new right hand side by u', and denote by û the right hand side vector before applying the union of sets operation. Then each component of û differs from its corresponding component of u' by at most one (in absolute value), and therefore

‖û − u'‖_∞ ≤ 1.   (2)

Similarly to the proof of Lemma 9, in order to prove the claim it is enough to show that there is a feasible solution y' such that ‖y − y'‖_∞ is a constant (then we will re-pack the items from the bins that correspond to the difference between y and y'). Recall that we assume that Ay = û and Ay' = u' are feasible integer programs. Therefore, the assumptions of Theorem 7 are satisfied. Therefore, by Theorem 7, there is a feasible integer solution y' such that ‖y − y'‖_∞ ≤ dΔ(‖û − u'‖_∞ + 2). We would like to bound the right hand side of the last inequality by a constant. By Lemma 8, d and Δ are bounded by a constant. It remains to bound ‖û − u'‖_∞ by a constant, which holds by (2).

We now describe the algorithm we apply each time a new item arrives, denoted as Algorithm Dynamic APTAS. If the new arriving item is small, then we use an Any Fit algorithm to pack it into an existing bin, or open a new bin for it if it cannot fit into any existing bin (when we need to pack a small item, we consider the original sizes of large items that are already packed, and not their rounded sizes). In the case where the new small item causes an addition of a new bin, we maintain the solution to the feasibility integer program by adding a bin whose pattern is the empty pattern that does not pack any large item. Otherwise, the item is large and we are allowed (while still getting a constant migration factor) to re-pack the items of a constant number of bins. We consider the optimal rounded-up solution y before the current item has arrived. Note that y is the integer solution after the previous large item was added, with possibly some additional bins packed by the empty pattern in case we opened new bins for small items. Then, after we apply the insertion of a new large item operation, we use Lemma 9 and look for a feasible packing of the resulting new rounded-up instance, y', that is close to y (i.e., the infinity norm of their difference is at most a constant, given by the proof of Lemma 9). The restriction that y' is close to y is given by linear inequalities, and therefore we can solve the resulting feasibility integer program in polynomial time (for a fixed value of ε). If the resulting integer program is infeasible, then we must open a new bin for the new rounded up instance, and then we can put the new item in the new bin and the rest

of the items as they were packed in the solution before the insertion of a new large item operation occurred. Otherwise, we obtain such an integer solution y' that is close to y. Similarly, if we need to apply either the creation of new sets operation or the union of sets operation, we construct a solution y'' that is close to y' (the infinity norm of their difference is a constant). This restriction is again given by linear inequalities, and therefore we can solve the resulting feasibility integer program in polynomial time (for a fixed value of ε). If the resulting integer program is infeasible, then we must open a new bin for the new rounded up instance, and then we can put the new item in the new bin and the rest of the items as they were packed in the solution before the insertion of a new large item operation occurred. Otherwise, we obtain such an integer solution y'' that is close to y'. If we did not apply the creation of new sets operation or the union of sets operation, then we denote y'' = y'.

Note that during creation steps our algorithm packs the items from S'_1 ∪ S''_1 according to their packing in the feasibility integer program; i.e., the integer program has a row for S'_1 and a row for S''_1. However, in the analysis of the performance guarantee of the algorithm in Theorem 13 below, we bound the cost of the solution by a different solution that packs each item of S_1 in its own bin (this applies also to the items of S'_1 ∪ S''_1). Such a solution is not better than our resulting solution, as it corresponds to a solution to the integer program that chooses such patterns for its covering of the elements of S'_1 ∪ S''_1.

It remains to show how to construct a solution to the bin packing instance using the vector y''. For each pattern p we change the packing of max{y_p − y''_p, 0} bins that were packed according to pattern p. For pattern p we select such bins arbitrarily (from the bins that we pack according to pattern p). We complete the packing of the items to a packing that corresponds to y''. This will pack all the large items and all the small items that were not packed in the bins we decided to re-pack. Then, we apply an Any Fit algorithm for the small unpacked items. This algorithm re-packs a constant number of bins in case a large item arrives, and it does not change the packing in case a small item arrives. Therefore, we establish the following corollary.

Corollary 12 Algorithm Dynamic APTAS has a constant migration factor for a fixed value of ε.

We are now ready to prove that Algorithm Dynamic APTAS is a polynomial time algorithm that has a constant migration factor and is an asymptotic polynomial time approximation scheme. Our proof is based on the analysis of Algorithm Revised FL.

Theorem 13 Algorithm Dynamic APTAS is a polynomial time algorithm that has a constant migration factor and uses at most (1 + ε)OPT(t) + 1 bins after t items have arrived.

Proof. Algorithm Dynamic APTAS has a constant migration factor by Corollary 12. It runs in polynomial time since, as we show in the proof of Lemma 1, solving the feasibility integer program to obtain y' and y'' can be done in polynomial time. The rest of the algorithm is clearly polynomial. Note that the time complexity that is needed for adding a new element is dominated by finding its place in its list S_j, that is, O(log n).

It remains to show the performance guarantee of the algorithm. We show the claim via induction over t. For t = 1 the claim is trivial because of the additive constant. Assume that the claim holds after t items have already arrived. We show that the claim holds for t + 1 items. Assume that the new item is a small item; then we use an Any Fit algorithm to pack it. By the induction assumption, in the step prior to this arrival we had a solution that uses at most (1 + ε)OPT(t) + 1 bins. There are two cases. If we do not open a new bin for the new small item, then our solution uses at most (1 + ε)OPT(t) + 1 bins, and since OPT(t + 1) ≥ OPT(t) the claim holds. Otherwise, each of the previous bins is filled with items of total size at least 1 − ε/2. Similarly to the proof of Theorem 3, in this case the resulting solution uses at most (1 + ε)OPT(t + 1) + 1 bins.

Therefore, it remains to consider the case where the new item is a large item. In this case we pack the large items that do not belong to S_1 according to the solution y'', and we allow each item of S_1 to be packed in its own bin. Recall that during creation steps our algorithm packs the items from S'_1 ∪ S''_1 according to their packing in the feasibility integer program; i.e., the integer program has a row for S'_1 and a row for S''_1. However, in the analysis of the performance guarantee of the algorithm, we bound the cost of the solution by a different solution that packs each item of S_1 in its own bin (this applies also to the items of S'_1 ∪ S''_1). Such a solution is not better than our resulting solution, as it corresponds to a solution to the integer program that chooses such patterns for its covering of the elements of S'_1 ∪ S''_1, and so the optimal solution uses fewer bins for the large items. Similarly to the case where the arriving item is small, if the small items cannot be packed into existing bins using an Any Fit algorithm, then all bins except the last one are full by at least 1 − ε/2 and the claim follows. Therefore, we can ignore the presence of the small items in the remaining arguments. Note that y'' is the optimal solution for the rounded up instance L', and we pay an extra additive factor of at most |S_1(t + 1)| ≤ 4K(t + 1) ≤ n(t + 1)ε²/2 ≤ ε·OPT(t + 1) (where the first inequality follows from our structural properties). Therefore, we can apply exactly the same arguments as in the proof of Theorem 5 to obtain that the resulting solution uses at most (1 + ε)OPT(t + 1) + 1 bins.

Remark 14 The feasibility integer programs to find the vectors y' and y'' (in the notation of the algorithm), which are close to y, can be solved by only one integer program of a fixed size.

Proof. In our algorithm, instead of solving the feasibility integer program, we can find the optimal integer solution according to a given linear goal function. Thus, in each step we can define a linear function whose coefficients are lexicographically decreasing according to the order of columns, such that the components that correspond to patterns that were feasible in the previous step get a lexicographically larger value than the new patterns that are added during this iteration. Using this lexicographic goal function, the resulting integer program is guaranteed to have a unique optimal solution (if it is feasible). Then, using Theorem 7, the new optimal integer solution is close to the previous optimal solution, that is, the integer solution from the previous step (this is so because such a solution is the optimal solution of the integer program under the restriction that the variables that correspond to the lexicographically small coefficients in the goal function get zero values). Therefore, we can solve the feasibility integer programs to find the vectors y' and y'' (in the notation of the algorithm), which are close to y, by solving only one integer program of a fixed size. Note that this step can be done in constant time.

4 Concluding Remarks

A similar approach allows to prove the following result. Given bins of size 1 + ε instead of size 1, it is possible to design a dynamic algorithm which uses, at time t, at most OPT(t) bins to pack the items. The algorithm works as follows. Items are partitioned into large (of size at least ε/4) and small (all other items). The sizes of large items are rounded up to powers of 1 + ε/4. This rounding is permanent and the original size is ignored in all steps of the algorithm. Feasible patterns of large items are defined similarly to [12]. At each arrival of a large item we check whether the previous number of bins still allows a feasible solution. It is possible to show that in such a case, a limited amount of re-packing is needed. Otherwise the new item is packed in a new bin. Small items are packed greedily using an Any Fit algorithm.

A question that is left open is whether there exists a dynamic AFPTAS for the classical bin packing problem. It would be interesting to find out which other problems can benefit from the study of dynamic approximation algorithms.
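The concluding-remarks variant relies on a permanent geometric rounding of the large items. The short Python sketch below shows one natural way to read "rounded up to powers of 1 + ε/4"; this interpretation (negative integer powers of 1 + ε/4) and the function name are ours, not taken from the paper.

    import math

    def round_up_geometric(size, eps):
        """Round a large item (eps/4 <= size <= 1) up to the nearest value of the
        form (1 + eps/4) ** (-i) with i a non-negative integer.  The rounded size
        exceeds the true size by a factor of less than 1 + eps/4, so a set of items
        that fits into a unit bin still fits, after rounding, into a bin of size
        1 + eps."""
        base = 1 + eps / 4
        i = math.floor(math.log(1 / size, base))  # largest i with base**(-i) >= size
        return base ** (-i)

    print(round_up_geometric(0.3, eps=0.2))  # rounds 0.3 up to the next power of 1.05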

References

[1] E. G. Coffman, M. R. Garey, and D. S. Johnson. Approximation algorithms for bin packing: A survey. In D. Hochbaum, editor, Approximation Algorithms. PWS Publishing Company.
[2] J. Csirik and G. J. Woeginger. On-line packing and covering problems. In A. Fiat and G. J. Woeginger, editors, Online Algorithms: The State of the Art.
[3] W. Fernandez de la Vega and G. S. Lueker. Bin packing can be solved within 1 + ε in linear time. Combinatorica, 1.
[4] G. Galambos and G. J. Woeginger. Repacking helps in bounded space online bin packing. Computing, 49.
[5] G. Gambosi, A. Postiglione, and M. Talamo. On-line maintenance of an approximate bin-packing solution. Nordic Journal on Computing, 4(2).
[6] G. Gambosi, A. Postiglione, and M. Talamo. Algorithms for the relaxed online bin-packing model. SIAM Journal on Computing, 30(5).
[7] D. S. Hochbaum and D. B. Shmoys. Using dual approximation algorithms for scheduling problems: theoretical and practical results. Journal of the ACM, 34(1).
[8] Z. Ivkovic and E. L. Lloyd. Partially dynamic bin packing can be solved within 1 + ε in (amortized) polylogarithmic time. Information Processing Letters, 63(1):45-50.
[9] Z. Ivkovic and E. L. Lloyd. Fully dynamic algorithms for bin packing: Being (mostly) myopic helps. SIAM Journal on Computing, 28(2).
[10] N. Karmarkar and R. M. Karp. An efficient approximation scheme for the one-dimensional bin-packing problem. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (FOCS 82).
[11] C. C. Lee and D. T. Lee. A simple online bin packing algorithm. Journal of the ACM, 32(3).
[12] P. Sanders, N. Sivadasan, and M. Skutella. Online scheduling with bounded migration. In Proceedings of the 31st International Colloquium on Automata, Languages and Programming (ICALP 2004).
[13] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons.
[14] S. S. Seiden. On the online bin packing problem. Journal of the ACM, 49(5).
[15] J. D. Ullman. The performance of a memory allocation algorithm. Technical Report 100, Princeton University, Princeton, NJ.
[16] A. van Vliet. An improved lower bound for online bin packing algorithms. Information Processing Letters, 43(5).
[17] V. V. Vazirani. Approximation Algorithms. Springer-Verlag.


More information

All-norm Approximation Algorithms

All-norm Approximation Algorithms All-norm Approximation Algorithms Yossi Azar Leah Epstein Yossi Richter Gerhard J. Woeginger Abstract A major drawback in optimization problems and in particular in scheduling problems is that for every

More information

Discrete Applied Mathematics. Tighter bounds of the First Fit algorithm for the bin-packing problem

Discrete Applied Mathematics. Tighter bounds of the First Fit algorithm for the bin-packing problem Discrete Applied Mathematics 158 (010) 1668 1675 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: www.elsevier.com/locate/dam Tighter bounds of the First Fit algorithm

More information

Randomized algorithms for online bounded bidding

Randomized algorithms for online bounded bidding Randomized algorithms for online bounded bidding Leah Epstein Asaf Levin Abstract In the online bidding problem, a bidder is trying to guess a positive number, by placing bids until the value of the bid

More information

Beating the Harmonic lower bound for online bin packing

Beating the Harmonic lower bound for online bin packing Beating the Harmonic lower bound for online bin packing Sandy Heydrich Rob van Stee arxiv:1511.00876v7 [cs.ds] 28 Jun 2018 Received: date / Accepted: date Abstract In the online bin packing problem, items

More information

S. ABERS Vohra [3] then gave an algorithm that is.986-competitive, for all m 70. Karger, Phillips and Torng [] generalized the algorithm and proved a

S. ABERS Vohra [3] then gave an algorithm that is.986-competitive, for all m 70. Karger, Phillips and Torng [] generalized the algorithm and proved a BETTER BOUNDS FOR ONINE SCHEDUING SUSANNE ABERS y Abstract. We study a classical problem in online scheduling. A sequence of jobs must be scheduled on m identical parallel machines. As each job arrives,

More information

Online Removable Square Packing

Online Removable Square Packing Online Removable Square Packing Xin Han 1 Kazuo Iwama 1 Guochuan Zhang 2 1 School of Informatics, Kyoto University, Kyoto 606-8501, Japan, {hanxin, iwama}@kuis.kyoto-u.ac.jp, 2 Department of Mathematics,

More information

A Deterministic Fully Polynomial Time Approximation Scheme For Counting Integer Knapsack Solutions Made Easy

A Deterministic Fully Polynomial Time Approximation Scheme For Counting Integer Knapsack Solutions Made Easy A Deterministic Fully Polynomial Time Approximation Scheme For Counting Integer Knapsack Solutions Made Easy Nir Halman Hebrew University of Jerusalem halman@huji.ac.il July 3, 2016 Abstract Given n elements

More information

Alternatives to competitive analysis Georgios D Amanatidis

Alternatives to competitive analysis Georgios D Amanatidis Alternatives to competitive analysis Georgios D Amanatidis 1 Introduction Competitive analysis allows us to make strong theoretical statements about the performance of an algorithm without making probabilistic

More information

Online Scheduling of Parallel Jobs on Two Machines is 2-Competitive

Online Scheduling of Parallel Jobs on Two Machines is 2-Competitive Online Scheduling of Parallel Jobs on Two Machines is 2-Competitive J.L. Hurink and J.J. Paulus University of Twente, P.O. box 217, 7500AE Enschede, The Netherlands Abstract We consider online scheduling

More information

A lower bound on deterministic online algorithms for scheduling on related machines without preemption

A lower bound on deterministic online algorithms for scheduling on related machines without preemption Theory of Computing Systems manuscript No. (will be inserted by the editor) A lower bound on deterministic online algorithms for scheduling on related machines without preemption Tomáš Ebenlendr Jiří Sgall

More information

The Maximum Resource Bin Packing Problem

The Maximum Resource Bin Packing Problem The Maximum Resource Bin Pacing Problem Joan Boyar Leah Epstein Lene M. Favrholdt Jens S. Kohrt Kim S. Larsen Morten M. Pedersen Sanne Wøhl 3 Department of Mathematics and Computer Science University of

More information

Online Bin Packing with Resource Augmentation

Online Bin Packing with Resource Augmentation Online Bin Packing with Resource ugmentation Leah Epstein Rob van Stee September 17, 2007 bstract In competitive analysis, we usually do not put any restrictions on the computational complexity of online

More information

Online Coloring of Intervals with Bandwidth

Online Coloring of Intervals with Bandwidth Online Coloring of Intervals with Bandwidth Udo Adamy 1 and Thomas Erlebach 2, 1 Institute for Theoretical Computer Science, ETH Zürich, 8092 Zürich, Switzerland. adamy@inf.ethz.ch 2 Computer Engineering

More information

Semi-Online Multiprocessor Scheduling with Given Total Processing Time

Semi-Online Multiprocessor Scheduling with Given Total Processing Time Semi-Online Multiprocessor Scheduling with Given Total Processing Time T.C. Edwin Cheng Hans Kellerer Vladimir Kotov Abstract We are given a set of identical machines and a sequence of jobs, the sum of

More information

Preemptive Online Scheduling: Optimal Algorithms for All Speeds

Preemptive Online Scheduling: Optimal Algorithms for All Speeds Preemptive Online Scheduling: Optimal Algorithms for All Speeds Tomáš Ebenlendr Wojciech Jawor Jiří Sgall Abstract Our main result is an optimal online algorithm for preemptive scheduling on uniformly

More information

1 Ordinary Load Balancing

1 Ordinary Load Balancing Comp 260: Advanced Algorithms Prof. Lenore Cowen Tufts University, Spring 208 Scribe: Emily Davis Lecture 8: Scheduling Ordinary Load Balancing Suppose we have a set of jobs each with their own finite

More information

arxiv: v3 [cs.ds] 16 May 2018 To improve is to change; to be perfect is to change often.

arxiv: v3 [cs.ds] 16 May 2018 To improve is to change; to be perfect is to change often. Fully-Dynamic Bin Packing with Limited Repacking Anupam Gupta 1, Guru Guruganesh 1, Amit Kumar 2, and David Wajc 1 1 Carnegie Mellon University 2 IIT Delhi arxiv:1711.02078v3 [cs.ds] 16 May 2018 To improve

More information

Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007

Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007 CS880: Approximations Algorithms Scribe: Tom Watson Lecturer: Shuchi Chawla Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007 In the last lecture, we described an O(log k log D)-approximation

More information

Approximation Schemes for Scheduling on Parallel Machines

Approximation Schemes for Scheduling on Parallel Machines Approximation Schemes for Scheduling on Parallel Machines Noga Alon Yossi Azar Gerhard J. Woeginger Tal Yadid Abstract We discuss scheduling problems with m identical machines and n jobs where each job

More information

Dispersing Points on Intervals

Dispersing Points on Intervals Dispersing Points on Intervals Shimin Li 1 and Haitao Wang 1 Department of Computer Science, Utah State University, Logan, UT 843, USA shiminli@aggiemail.usu.edu Department of Computer Science, Utah State

More information

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh

More information

Online algorithms for parallel job scheduling and strip packing Hurink, J.L.; Paulus, J.J.

Online algorithms for parallel job scheduling and strip packing Hurink, J.L.; Paulus, J.J. Online algorithms for parallel job scheduling and strip packing Hurink, J.L.; Paulus, J.J. Published: 01/01/007 Document Version Publisher s PDF, also known as Version of Record (includes final page, issue

More information

SIGACT News Online Algorithms Column 26: Bin packing in multiple dimensions

SIGACT News Online Algorithms Column 26: Bin packing in multiple dimensions SIGACT News Online Algorithms Column 26: Bin packing in multiple dimensions Rob van Stee University of Leicester United Kingdom I mentioned in a previous column [22] that the best known lower bound for

More information

Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT

Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT ,7 CMPUT 675: Approximation Algorithms Fall 2011 Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Pacing, MAX-SAT Lecturer: Mohammad R. Salavatipour Scribe: Weitian Tong 6.1 Bin Pacing Problem Recall the bin pacing

More information

arxiv:cs/ v1 [cs.ds] 20 Dec 2006

arxiv:cs/ v1 [cs.ds] 20 Dec 2006 Improved results for a memory allocation problem Leah Epstein Rob van Stee August 6, 018 Abstract arxiv:cs/061100v1 [cs.ds] 0 Dec 006 We consider a memory allocation problem that can be modeled as a version

More information

Approximation Schemes for Scheduling on Uniformly Related and Identical Parallel Machines

Approximation Schemes for Scheduling on Uniformly Related and Identical Parallel Machines Approximation Schemes for Scheduling on Uniformly Related and Identical Parallel Machines Leah Epstein Jiří Sgall Abstract We give a polynomial approximation scheme for the problem of scheduling on uniformly

More information

BIN PACKING IN MULTIPLE DIMENSIONS: INAPPROXIMABILITY RESULTS AND APPROXIMATION SCHEMES

BIN PACKING IN MULTIPLE DIMENSIONS: INAPPROXIMABILITY RESULTS AND APPROXIMATION SCHEMES BIN PACKING IN MULTIPLE DIMENSIONS: INAPPROXIMABILITY RESULTS AND APPROXIMATION SCHEMES NIKHIL BANSAL 1, JOSÉ R. CORREA2, CLAIRE KENYON 3, AND MAXIM SVIRIDENKO 1 1 IBM T.J. Watson Research Center, Yorktown

More information

arxiv: v1 [cs.ds] 6 Jun 2018

arxiv: v1 [cs.ds] 6 Jun 2018 Online Makespan Minimization: The Power of Restart Zhiyi Huang Ning Kang Zhihao Gavin Tang Xiaowei Wu Yuhao Zhang arxiv:1806.02207v1 [cs.ds] 6 Jun 2018 Abstract We consider the online makespan minimization

More information

SPT is Optimally Competitive for Uniprocessor Flow

SPT is Optimally Competitive for Uniprocessor Flow SPT is Optimally Competitive for Uniprocessor Flow David P. Bunde Abstract We show that the Shortest Processing Time (SPT) algorithm is ( + 1)/2-competitive for nonpreemptive uniprocessor total flow time

More information

The Constrained Minimum Weighted Sum of Job Completion Times Problem 1

The Constrained Minimum Weighted Sum of Job Completion Times Problem 1 The Constrained Minimum Weighted Sum of Job Completion Times Problem 1 Asaf Levin 2 and Gerhard J. Woeginger 34 Abstract We consider the problem of minimizing the weighted sum of job completion times on

More information

A PTAS for Static Priority Real-Time Scheduling with Resource Augmentation

A PTAS for Static Priority Real-Time Scheduling with Resource Augmentation A PTAS for Static Priority Real-Time Scheduling with Resource Augmentation Friedrich Eisenbrand and Thomas Rothvoß Institute of Mathematics École Polytechnique Féderale de Lausanne, 1015 Lausanne, Switzerland

More information

First Fit bin packing: A tight analysis

First Fit bin packing: A tight analysis First Fit bin packing: A tight analysis György Dósa 1 and Jiří Sgall 2 1 Department of Mathematics, University of Pannonia Veszprém, Hungary dosagy@almos.vein.hu 2 Computer Science Institute of Charles

More information

Worst case analysis for a general class of on-line lot-sizing heuristics

Worst case analysis for a general class of on-line lot-sizing heuristics Worst case analysis for a general class of on-line lot-sizing heuristics Wilco van den Heuvel a, Albert P.M. Wagelmans a a Econometric Institute and Erasmus Research Institute of Management, Erasmus University

More information

The Maximum Flow Problem with Disjunctive Constraints

The Maximum Flow Problem with Disjunctive Constraints The Maximum Flow Problem with Disjunctive Constraints Ulrich Pferschy Joachim Schauer Abstract We study the maximum flow problem subject to binary disjunctive constraints in a directed graph: A negative

More information

Optimal Preemptive Scheduling for General Target Functions

Optimal Preemptive Scheduling for General Target Functions Optimal Preemptive Scheduling for General Target Functions Leah Epstein Tamir Tassa Abstract We study the problem of optimal preemptive scheduling with respect to a general target function. Given n jobs

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

Johns Hopkins Math Tournament Proof Round: Automata

Johns Hopkins Math Tournament Proof Round: Automata Johns Hopkins Math Tournament 2018 Proof Round: Automata February 9, 2019 Problem Points Score 1 10 2 5 3 10 4 20 5 20 6 15 7 20 Total 100 Instructions The exam is worth 100 points; each part s point value

More information

Approximation Basics

Approximation Basics Approximation Basics, Concepts, and Examples Xiaofeng Gao Department of Computer Science and Engineering Shanghai Jiao Tong University, P.R.China Fall 2012 Special thanks is given to Dr. Guoqiang Li for

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Approximation algorithms for cycle packing problems

Approximation algorithms for cycle packing problems Approximation algorithms for cycle packing problems Michael Krivelevich Zeev Nutov Raphael Yuster Abstract The cycle packing number ν c (G) of a graph G is the maximum number of pairwise edgedisjoint cycles

More information

A primal-simplex based Tardos algorithm

A primal-simplex based Tardos algorithm A primal-simplex based Tardos algorithm Shinji Mizuno a, Noriyoshi Sukegawa a, and Antoine Deza b a Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama,

More information

ALTERNATIVE PERSPECTIVES FOR SOLVING COMBINATORIAL OPTIMIZATION PROBLEMS

ALTERNATIVE PERSPECTIVES FOR SOLVING COMBINATORIAL OPTIMIZATION PROBLEMS ε-optimization SCHEMES AND L-BIT PRECISION: ALTERNATIVE PERSPECTIVES FOR SOLVING COMBINATORIAL OPTIMIZATION PROBLEMS JAMES B. ORLIN, ANDREAS S. SCHULZ, AND SUDIPTA SENGUPTA September 2004 Abstract. Motivated

More information

Some Sieving Algorithms for Lattice Problems

Some Sieving Algorithms for Lattice Problems Foundations of Software Technology and Theoretical Computer Science (Bangalore) 2008. Editors: R. Hariharan, M. Mukund, V. Vinay; pp - Some Sieving Algorithms for Lattice Problems V. Arvind and Pushkar

More information

An approximation algorithm for the minimum latency set cover problem

An approximation algorithm for the minimum latency set cover problem An approximation algorithm for the minimum latency set cover problem Refael Hassin 1 and Asaf Levin 2 1 Department of Statistics and Operations Research, Tel-Aviv University, Tel-Aviv, Israel. hassin@post.tau.ac.il

More information

bound of (1 + p 37)=6 1: Finally, we present a randomized non-preemptive 8 -competitive algorithm for m = 2 7 machines and prove that this is op

bound of (1 + p 37)=6 1: Finally, we present a randomized non-preemptive 8 -competitive algorithm for m = 2 7 machines and prove that this is op Semi-online scheduling with decreasing job sizes Steve Seiden Jir Sgall y Gerhard Woeginger z October 27, 1998 Abstract We investigate the problem of semi-online scheduling jobs on m identical parallel

More information

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu

More information

immediately, without knowledge of the jobs that arrive later The jobs cannot be preempted, ie, once a job is scheduled (assigned to a machine), it can

immediately, without knowledge of the jobs that arrive later The jobs cannot be preempted, ie, once a job is scheduled (assigned to a machine), it can A Lower Bound for Randomized On-Line Multiprocessor Scheduling Jir Sgall Abstract We signicantly improve the previous lower bounds on the performance of randomized algorithms for on-line scheduling jobs

More information

Combinatorial Algorithms for Minimizing the Weighted Sum of Completion Times on a Single Machine

Combinatorial Algorithms for Minimizing the Weighted Sum of Completion Times on a Single Machine Combinatorial Algorithms for Minimizing the Weighted Sum of Completion Times on a Single Machine James M. Davis 1, Rajiv Gandhi, and Vijay Kothari 1 Department of Computer Science, Rutgers University-Camden,

More information

Algorithms. Outline! Approximation Algorithms. The class APX. The intelligence behind the hardware. ! Based on

Algorithms. Outline! Approximation Algorithms. The class APX. The intelligence behind the hardware. ! Based on 6117CIT - Adv Topics in Computing Sci at Nathan 1 Algorithms The intelligence behind the hardware Outline! Approximation Algorithms The class APX! Some complexity classes, like PTAS and FPTAS! Illustration

More information

arxiv: v1 [math.oc] 3 Jan 2019

arxiv: v1 [math.oc] 3 Jan 2019 The Product Knapsack Problem: Approximation and Complexity arxiv:1901.00695v1 [math.oc] 3 Jan 2019 Ulrich Pferschy a, Joachim Schauer a, Clemens Thielen b a Department of Statistics and Operations Research,

More information

The Complexity of Maximum. Matroid-Greedoid Intersection and. Weighted Greedoid Maximization

The Complexity of Maximum. Matroid-Greedoid Intersection and. Weighted Greedoid Maximization Department of Computer Science Series of Publications C Report C-2004-2 The Complexity of Maximum Matroid-Greedoid Intersection and Weighted Greedoid Maximization Taneli Mielikäinen Esko Ukkonen University

More information

Turing Machines, diagonalization, the halting problem, reducibility

Turing Machines, diagonalization, the halting problem, reducibility Notes on Computer Theory Last updated: September, 015 Turing Machines, diagonalization, the halting problem, reducibility 1 Turing Machines A Turing machine is a state machine, similar to the ones we have

More information

Efficient approximation algorithms for the Subset-Sums Equality problem

Efficient approximation algorithms for the Subset-Sums Equality problem Efficient approximation algorithms for the Subset-Sums Equality problem Cristina Bazgan 1 Miklos Santha 2 Zsolt Tuza 3 1 Université Paris-Sud, LRI, bât.490, F 91405 Orsay, France, bazgan@lri.fr 2 CNRS,

More information

An algorithm to increase the node-connectivity of a digraph by one

An algorithm to increase the node-connectivity of a digraph by one Discrete Optimization 5 (2008) 677 684 Contents lists available at ScienceDirect Discrete Optimization journal homepage: www.elsevier.com/locate/disopt An algorithm to increase the node-connectivity of

More information

Lecture 2: Paging and AdWords

Lecture 2: Paging and AdWords Algoritmos e Incerteza (PUC-Rio INF2979, 2017.1) Lecture 2: Paging and AdWords March 20 2017 Lecturer: Marco Molinaro Scribe: Gabriel Homsi In this class we had a brief recap of the Ski Rental Problem

More information

Perfect Packing Theorems and the Average-Case Behavior of Optimal and Online Bin Packing

Perfect Packing Theorems and the Average-Case Behavior of Optimal and Online Bin Packing SIAM REVIEW Vol. 44, No. 1, pp. 95 108 c 2002 Society for Industrial and Applied Mathematics Perfect Packing Theorems and the Average-Case Behavior of Optimal and Online Bin Packing E. G. Coffman, Jr.

More information

Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times

Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times Klaus Jansen 1 and Monaldo Mastrolilli 2 1 Institut für Informatik und Praktische Mathematik, Universität

More information

Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point

Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point Deciding Emptiness of the Gomory-Chvátal Closure is NP-Complete, Even for a Rational Polyhedron Containing No Integer Point Gérard Cornuéjols 1 and Yanjun Li 2 1 Tepper School of Business, Carnegie Mellon

More information

ONLINE SCHEDULING OF MALLEABLE PARALLEL JOBS

ONLINE SCHEDULING OF MALLEABLE PARALLEL JOBS ONLINE SCHEDULING OF MALLEABLE PARALLEL JOBS Richard A. Dutton and Weizhen Mao Department of Computer Science The College of William and Mary P.O. Box 795 Williamsburg, VA 2317-795, USA email: {radutt,wm}@cs.wm.edu

More information

Knapsack. Bag/knapsack of integer capacity B n items item i has size s i and profit/weight w i

Knapsack. Bag/knapsack of integer capacity B n items item i has size s i and profit/weight w i Knapsack Bag/knapsack of integer capacity B n items item i has size s i and profit/weight w i Goal: find a subset of items of maximum profit such that the item subset fits in the bag Knapsack X: item set

More information

Improved Bounds for Flow Shop Scheduling

Improved Bounds for Flow Shop Scheduling Improved Bounds for Flow Shop Scheduling Monaldo Mastrolilli and Ola Svensson IDSIA - Switzerland. {monaldo,ola}@idsia.ch Abstract. We resolve an open question raised by Feige & Scheideler by showing that

More information

arxiv: v1 [cs.ds] 11 Dec 2013

arxiv: v1 [cs.ds] 11 Dec 2013 WORST-CASE PERFORMANCE ANALYSIS OF SOME APPROXIMATION ALGORITHMS FOR MINIMIZING MAKESPAN AND FLOW-TIME PERUVEMBA SUNDARAM RAVI, LEVENT TUNÇEL, MICHAEL HUANG arxiv:1312.3345v1 [cs.ds] 11 Dec 2013 Abstract.

More information

Theoretical Computer Science

Theoretical Computer Science Theoretical Computer Science 411 (010) 417 44 Contents lists available at ScienceDirect Theoretical Computer Science journal homepage: wwwelseviercom/locate/tcs Resource allocation with time intervals

More information

Multicommodity Flows and Column Generation

Multicommodity Flows and Column Generation Lecture Notes Multicommodity Flows and Column Generation Marc Pfetsch Zuse Institute Berlin pfetsch@zib.de last change: 2/8/2006 Technische Universität Berlin Fakultät II, Institut für Mathematik WS 2006/07

More information

Valency Arguments CHAPTER7

Valency Arguments CHAPTER7 CHAPTER7 Valency Arguments In a valency argument, configurations are classified as either univalent or multivalent. Starting from a univalent configuration, all terminating executions (from some class)

More information

On a hypergraph matching problem

On a hypergraph matching problem On a hypergraph matching problem Noga Alon Raphael Yuster Abstract Let H = (V, E) be an r-uniform hypergraph and let F 2 V. A matching M of H is (α, F)- perfect if for each F F, at least α F vertices of

More information

Laver Tables A Direct Approach

Laver Tables A Direct Approach Laver Tables A Direct Approach Aurel Tell Adler June 6, 016 Contents 1 Introduction 3 Introduction to Laver Tables 4.1 Basic Definitions............................... 4. Simple Facts.................................

More information

A packing integer program arising in two-layer network design

A packing integer program arising in two-layer network design A packing integer program arising in two-layer network design Christian Raack Arie M.C.A Koster Zuse Institute Berlin Takustr. 7, D-14195 Berlin Centre for Discrete Mathematics and its Applications (DIMAP)

More information

Task assignment in heterogeneous multiprocessor platforms

Task assignment in heterogeneous multiprocessor platforms Task assignment in heterogeneous multiprocessor platforms Sanjoy K. Baruah Shelby Funk The University of North Carolina Abstract In the partitioned approach to scheduling periodic tasks upon multiprocessors,

More information

5 Set Operations, Functions, and Counting

5 Set Operations, Functions, and Counting 5 Set Operations, Functions, and Counting Let N denote the positive integers, N 0 := N {0} be the non-negative integers and Z = N 0 ( N) the positive and negative integers including 0, Q the rational numbers,

More information

Bi-Immunity Separates Strong NP-Completeness Notions

Bi-Immunity Separates Strong NP-Completeness Notions Bi-Immunity Separates Strong NP-Completeness Notions A. Pavan 1 and Alan L Selman 2 1 NEC Research Institute, 4 Independence way, Princeton, NJ 08540. apavan@research.nj.nec.com 2 Department of Computer

More information

Convergence Rate of Best Response Dynamics in Scheduling Games with Conflicting Congestion Effects

Convergence Rate of Best Response Dynamics in Scheduling Games with Conflicting Congestion Effects Convergence Rate of est Response Dynamics in Scheduling Games with Conflicting Congestion Effects Michal Feldman Tami Tamir Abstract We study resource allocation games with conflicting congestion effects.

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information