Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times


Klaus Jansen¹ and Monaldo Mastrolilli²

¹ Institut für Informatik und Praktische Mathematik, Universität zu Kiel, Germany, kj@informatik.uni-kiel.de
² IDSIA, Galleria 2, 6928 Manno, Switzerland, monaldo@idsia.ch

Scope and Purpose. Most classical scheduling models assume fixed processing times for the jobs. However, in real-life applications the processing time of a job often depends on the amount of resources, such as facilities, manpower, or funds, allocated to it, and so the processing time can be reduced when additional resources are given to the job. This speed-up of the processing time of a job comes at a certain cost. A scheduling problem in which the processing times of the jobs can be reduced at some expense is called a scheduling problem with controllable processing times. We contribute by presenting new approximation algorithms for this kind of problem.

Abstract. We consider the problem of scheduling n independent jobs on m identical machines that operate in parallel. Each job has a controllable processing time, meaning that it is allowed to compress (a part of) the processing time of the job, in return for compression cost. We present the first known polynomial time approximation schemes for the non-preemptive case of several identical parallel machine scheduling problems with controllable processing times. Moreover, we study the problem when preemption is allowed and describe efficient exact and approximation algorithms.

1 Introduction

In a scheduling problem with controllable processing times the job processing time can be compressed through incurring an additional cost. Scheduling problems with controllable processing times have gained importance in scheduling research since the pioneering works of Vickson [?,?]. For a survey of this area until 1990, the reader is referred to [?].
Recent results include [?,?,?,?,?,?,?]. A preliminary version of this paper appeared in the Proceedings of ICALP Workshops 2000 (ARACNE), Carleton Scientific, proceedings in informatics 8, pp Supported by Swiss National Science Foundation project /1, Resource Allocation and Scheduling in Flexible Manufacturing Systems, and by the Metaheuristics Network, grant HPRN-CT

The identical parallel machine scheduling problem is defined as follows. We have a set J of n jobs, J = {J_1, ..., J_n}, and m identical machines, M = {1, ..., m}. Each job J_j must be processed in an uninterrupted fashion on one of the machines, each of which can process at most one job at a time. We will also consider the preemptive case, in which a job may be interrupted on one machine and continued later (possibly on another machine) without penalty. The processing time of job J_j lies in an interval [l_j, u_j] (with 0 ≤ l_j ≤ u_j). For each job J_j we have to choose a machine m_j ∈ {1, ..., m} and a value δ_j ∈ [0, 1]; the processing time and cost then depend linearly on δ_j:

p_j(δ_j) = δ_j l_j + (1 − δ_j) u_j,
c_j(δ_j) = δ_j c^l_j + (1 − δ_j) c^u_j.

We refer to δ_j as the compression level of job J_j, since the processing time p_j(δ_j) of J_j decreases as δ_j increases. Solving a scheduling problem with controllable processing times amounts to specifying an assignment σ = {m_1, m_2, ..., m_n} of jobs to machines (where m_j ∈ {1, ..., m} is the machine job J_j is assigned to), and a selection of the compression levels, δ = {δ_1, δ_2, ..., δ_n}, which defines the job processing times p_j(δ_j) and costs c_j(δ_j). The makespan is the length of the schedule, or equivalently the time when the last job is completed. Given an assignment σ = {m_1, m_2, ..., m_n} and a selection δ = {δ_1, δ_2, ..., δ_n}, the makespan T(σ, δ) is easily computed, i.e.,

T(σ, δ) = max_{1 ≤ i ≤ m} Σ_{J_j ∈ J : m_j = i} p_j(δ_j).

We denote by C(δ) the total cost of δ, i.e., C(δ) = Σ_{J_j ∈ J} c_j(δ_j).

Problems. The problem of scheduling jobs with controllable processing times is a bicriteria problem, and we can define the following three optimization problems.

P1. Minimization of C(δ) subject to T(σ, δ) ≤ τ, for some given value τ > 0.
P2. Minimization of T(σ, δ) subject to C(δ) ≤ κ, for some given value κ > 0.
P3. Minimization of T(σ, δ) + αC(δ), for some parameter α > 0.

Known Results.
The addressed problems are all strongly NP-hard [?], since the special case with fixed processing times is strongly NP-hard [?]. The practical importance of NP-hard problems necessitates tractable relaxations. By tractable we mean efficient solvability, and polynomial time is a robust theoretical notion of efficiency. A very fruitful approach has been to relax the notion of optimality and settle for near-optimal solutions. A near-optimal solution is one whose objective function value is within some small multiplicative factor of the optimal value. Approximation algorithms are heuristics that in polynomial time provide provably good guarantees on the quality of the solutions they return. This approach was pioneered by the influential paper of Johnson [?], in

which he showed the existence of good approximation algorithms for several NP-hard problems. He also remarked that optimization problems that are all indistinguishable in the theory of NP-completeness behave very differently when it comes to approximability. Remarkable work in the last couple of decades, in both the design of approximation algorithms and the proving of inapproximability results, has validated Johnson's remarks. The book on approximation algorithms edited by Hochbaum [?] gives a good glimpse of the current knowledge on the subject. Approximation algorithms for several problems in scheduling have been developed in the last three decades. In fact, it is widely believed that the first optimization problem for which an approximation algorithm was formally designed and analyzed is the makespan minimization of the identical parallel machine scheduling problem; this algorithm was designed by Graham in 1966 [?], preceding the development of the theory of NP-completeness. A Polynomial Time Approximation Scheme (PTAS for short) for an NP-hard optimization problem in minimization (or maximization) form is a polynomial time algorithm which, given any instance of the problem and any value ε > 0 (or 0 < ε < 1), returns a solution whose value is within a factor (1 + ε) (or (1 − ε)) of the optimal solution value for that instance. The main themes of this paper revolve around the design of new polynomial time approximation schemes for identical parallel machine scheduling problems with controllable processing times. Nowicki and Zdrzalka [?] consider the preemptive scheduling of m identical parallel machines. They provide an O(n²) greedy algorithm which generates the set of Pareto-optimal points. (A pair (σ, δ) is called Pareto-optimal if there does not exist another pair (σ′, δ′) that improves on (σ, δ) with respect to one of C(δ) and T(σ, δ), and stays equal or improves with respect to the other one.)
When preemption is not allowed and the machines are not identical, Trick [?] gave a polynomial time constant-factor approximation algorithm (i.e., an algorithm that returns a solution whose value is within a constant factor of the optimal value) to minimize a weighted sum of the cost and the makespan (see problem P3). The latter result was improved by Shmoys and Tardos [?], who provided a polynomial time 2-approximation algorithm. When processing times are fixed, Hochbaum and Shmoys [?] designed a PTAS for the makespan minimization of the identical parallel machine scheduling problem. New Results. Solving a scheduling problem with controllable processing times amounts to specifying a selection of the processing times and costs, and assigning jobs to machines. If we could fix the processing times and costs of jobs as in some optimal (or near-optimal) solution, then we could use the algorithm described by Hochbaum and Shmoys [?] for fixed processing times to obtain a near-optimal solution. Unfortunately, in principle there is an infinite number of possible selections, and clearly we cannot consider all the possibilities. However, assume that it is possible to restrict the set of all possibilities to a small set U of interesting selections, where by interesting selections we mean that a near-optimal solution can always be obtained by using one of these alternatives. In this case, we can compute a near-optimal solution just by applying the algorithm described in [?] for each selection in U, and retaining the best solution found. The returned solution is a near-optimal solution.

In this paper we settle the approximability of the described problems by obtaining the first polynomial time approximation schemes for P1, P2 and P3. Our approach follows the intuitive ideas sketched above. We show how to compute a polynomial number of interesting selections of processing times and costs. All other choices can be neglected. Indeed, we prove that a solution whose value is within a factor (1 + ε) of the optimal value can always be computed by using this restricted set. The set U of selections and the algorithm described in [?] for fixed processing times give a PTAS for P1. The latter provides a solution with optimal cost and makespan at most (1 + ε)τ, for any fixed ε > 0. (Note that for problem P1, deciding if there is a solution with T ≤ τ is already NP-complete; therefore the best that we can expect, unless P=NP, is to find a solution with cost at most the optimal cost and makespan not greater than τ(1 + ε).) In Section 3, by using the described PTAS for P1, we present a PTAS for P2 which delivers a solution with cost C(δ) ≤ κ and makespan within a factor (1 + ε) of the optimal makespan. In Section 4 a PTAS for P3 is obtained by again using the PTAS for P1 as a basic building block. Another contribution of this paper is the use of the continuous relaxation of the knapsack problem [?] to get lower and upper bounds and fast optimal and approximation algorithms. In Section 3.2, by using the relaxed version of the knapsack problem, we present a fast greedy algorithm that returns a solution that is within a factor 2 of the optimal solution value for problem P2. In Section 5 we propose a linear time algorithm to solve problem P1 optimally when preemption is allowed. This algorithm is again based on the continuous knapsack problem and McNaughton's rule [?]. By using a similar approach we obtain exact and approximate solutions for the other two preemptive variants P2 and P3.
2 A PTAS for P1

Our approximation scheme conceptually consists of the following two steps.

1. Fixing processing times and costs: identify a polynomial sized set U of possible selections of processing times and costs; set U is computed so as to make sure that there exists a feasible schedule with optimal cost and makespan at most τ(1 + ε) when job processing times and costs are chosen according to some selection from U.
2. Scheduling jobs: when the processing times and costs of all jobs are fixed, the problem turns out to be the classical identical parallel machine scheduling problem subject to makespan minimization. The latter can be solved approximately with any given accuracy in polynomial time [?]. For each selection of processing times and costs from set U, apply the algorithm described in [?] (with processing times and costs of jobs fixed according to the considered selection); return the best solution found.

The total running time of our PTAS is polynomial in the input size. Indeed, it consists of applying the polynomial time algorithm described in [?] a polynomial number of times, i.e., once for each selection in U. Therefore, the description of the first step immediately defines a PTAS for P1.
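In outline, the two steps might be sketched as follows (a Python sketch under simplifying assumptions: a plain LPT list schedule, with its classical 4/3 guarantee, stands in for the makespan PTAS of [?], and the set U is given explicitly as a list of per-job (time, cost) selections; all function names are ours):

```python
def lpt_schedule(times, m):
    """Greedy LPT list schedule; a crude stand-in for the makespan PTAS."""
    loads = [0.0] * m
    for t in sorted(times, reverse=True):
        i = min(range(m), key=lambda k: loads[k])  # least-loaded machine
        loads[i] += t
    return max(loads)

def ptas_p1(selections, m, tau):
    """Step 2: try every selection of (time, cost) pairs from U and keep
    the cheapest one whose schedule fits (approximately) within tau."""
    best = None  # (total_cost, makespan)
    for sel in selections:            # each sel: one (p_j, c_j) pair per job
        times = [p for p, _ in sel]
        cost = sum(c for _, c in sel)
        T = lpt_schedule(times, m)
        if T <= tau * (4 / 3):        # LPT guarantee used instead of (1 + eps)
            if best is None or cost < best[0]:
                best = (cost, T)
    return best
```

The point of the paper is precisely that `selections` can be made polynomially small while still containing a near-optimal choice.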

Remark 1. Without loss of generality, we may (and will) assume that the makespan must be at most 1. This is obtained by dividing all processing times by τ. Furthermore, we may assume that u_j ≤ 1 for all jobs J_j ∈ J; otherwise we can define u_j = 1 and adjust the value of c^u_j to get an equivalent instance. Moreover, we assume, for simplicity of notation, that 1/ε is an integral value.

Remark 2. Let c be a positive constant and ε any positive value. In this paper we present an algorithm that returns a solution with makespan at most (1 + cε)τ, and not a solution with makespan at most (1 + ε)τ, as claimed. However, the latter can easily be obtained by setting ε := ε/c. Therefore, without loss of generality, in the following we describe an algorithm that returns a solution with makespan at most (1 + O(ε))τ.

2.1 Selection of processing times and costs

The first step in obtaining the set U of selections consists of reducing the number of distinct processing times that may occur in any feasible solution. We begin by transforming the given instance into a more structured one in which every job can assume at most O((1/ε) log(n/ε)) distinct processing times. We prove that by using this restricted set of choices it is still possible to get a solution with cost not larger than the optimal cost and makespan at most 1 + 2ε. The restricted selection of processing times is obtained as follows.

1. Divide the interval (0, 1] into subintervals of length ε/n, i.e., (0, ε/n], (ε/n, 2ε/n], ..., ((n/ε − 1)(ε/n), 1].
2. Let S be the set of values obtained by rounding the values ε/n, (ε/n)(1 + ε), (ε/n)(1 + ε)², ..., 1 up to the nearest value in {ε/n, 2ε/n, ..., 1}.
3. For each job J_j consider as possible processing times the restricted set S_j of values from S that fall in the interval [l_j, u_j].

Note that, in the restricted set, any possible processing time is a multiple of ε/n. The described restricted selection of processing times is motivated by the following lemma.
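The three rounding steps above can be sketched directly (a Python sketch; function names are ours, and a small tolerance guards the grid rounding against floating-point noise):

```python
import math

def rounding_grid(n, eps):
    """Steps 1-2: geometric values (eps/n)*(1+eps)^k, each rounded up to a
    multiple of eps/n, i.e., to the grid {eps/n, 2*eps/n, ..., 1}."""
    step = eps / n
    S, v = set(), step
    while v < 1.0:
        S.add(math.ceil(v / step - 1e-12) * step)  # round up to the grid
        v *= 1.0 + eps
    S.add(1.0)
    return sorted(S)

def restricted_times(S, l, u):
    """Step 3: the values of S that fall inside [l_j, u_j]."""
    return [p for p in S if l <= p <= u]
```

The grid has O((1/ε) log(n/ε)) values, matching the bound claimed above.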
Lemma 1. By using the restricted set of processing times the following holds: every possible processing time is equal to k(ε/n) for some k ∈ {1, ..., n/ε}, and there are O((1/ε) log(n/ε)) different processing times. If C is the minimum cost when the makespan is at most 1 then, by using the restricted set of processing times, there is a solution with cost and makespan at most C and 1 + 2ε, respectively.

Proof. Let b be the cardinality of the set S; then (ε/n)(1 + ε)^{b−2} < 1 (and (ε/n)(1 + ε)^{b−1} ≥ 1), and therefore

b − 2 < log₂(n/ε) / log₂(1 + ε) ≤ (1/ε) log₂(n/ε), for every ε ≤ 1.

We prove that our transformation may potentially increase the makespan value by a factor of 1 + 2ε, while the minimum cost is lower or the same. Indeed, let p_j(δ_j) be any feasible processing time for any given job J_j. It is easy to see that there exists a corresponding value p_j(δ̃_j) belonging to the restricted set S_j such that p_j(δ̃_j) ≤ p_j(δ_j)(1 + ε) + ε/n. Now, let H be a set of jobs on one machine with Σ_{J_j ∈ H} p_j(δ*_j) ≤ 1, as in some optimal solution, where δ*_j are the corresponding δ_j-values, for J_j ∈ H. Then, if we replace each p_j(δ*_j) with the corresponding value p_j(δ̃_j), we obtain a solution with cost at most the optimal cost and makespan

Σ_{J_j ∈ H} p_j(δ̃_j) ≤ Σ_{J_j ∈ H} (p_j(δ*_j)(1 + ε) + ε/n) ≤ 1 + 2ε.

The above lemma allows us to work with instances with O((1/ε) log(n/ε)) distinct processing times and costs (note that for each distinct processing time of the restricted set, there is one associated cost that can be computed by using the linear relationship between costs and processing times). This restriction of the possible processing times immediately reduces the number of possible selections: since there are n jobs and each can assume O((1/ε) log(n/ε)) distinct processing times and costs, the number of possible selections is clearly bounded by n^{O((1/ε) log(n/ε))}. Unfortunately, this number is not polynomial in the input size, and further ideas are needed in order to obtain a polynomial sized set U of selections. In the following we show how to reduce the number of interesting selections to a polynomial number by using a dynamic programming approach.

2.2 Reducing the number of interesting selections

Before presenting the dynamic programming approach, we first present the definition of a configuration, which is necessary for our purposes.
Then, by using this notion, we give an intuitive idea; the dynamic program formalization follows.

Configurations. We call J_j a big job if J_j can assume a processing time greater than ε, and a small job if J_j can assume a processing time not greater than ε. Clearly, we have the following different cases: (1) l_j ≤ ε < u_j: in this case J_j can be small or big. (2) l_j ≤ u_j ≤ ε: in this case J_j is always small. (3) ε < l_j ≤ u_j: in this case J_j is always big. Note that big jobs have β ≤ (1/ε) log(1/ε) + 1 different processing times, since the processing time of each job cannot be larger than 1 and, by our transformation, the distinct processing values that are greater than ε are at most (1/ε) log(1/ε) + 1. Let us denote these β values by p_1, p_2, ..., p_β. Let x_i denote the total number of big jobs with processing time equal to p_i, for i = 1, ..., β. A configuration of

the processing times for big jobs can be given by a vector (x_1, x_2, ..., x_β). Let us call (x_1, x_2, ..., x_β) a β-configuration. Recall that our goal is to find a solution that has optimal cost and makespan within 1 + O(ε). By Lemma 1, in the transformed instance with a restricted set of possible processing times, we allow the makespan to be at most 1 + 2ε. Now, consider a selection of processing times of big jobs and let (x_1, x_2, ..., x_β) be the corresponding β-configuration. If Σ_{i=1}^β x_i p_i > m(1 + 2ε) then, in any given assignment of big jobs to machines, there is at least one big job that completes after time 1 + 2ε, i.e., the makespan is larger than 1 + 2ε. In this case we can discard this selection, since it is guaranteed that, whatever the assignment of big jobs to machines, the makespan is too large, i.e., larger than 1 + 2ε. By this observation, we say that a β-configuration is feasible if

Σ_{i=1}^β x_i p_i ≤ m(1 + 2ε).

Since p_i > ε, for i = 1, ..., β, a necessary condition of feasibility for any β-configuration is that

Σ_{i=1}^β x_i ≤ m(1 + 2ε)/ε.

Therefore, the number h of feasible β-configurations is at most the number of β-tuples of non-negative integers with sum at most m(1 + 2ε)/ε. The number h(d) of β-tuples with sum equal to d is known to be h(d) = (d + β − 1 choose β − 1). Therefore,

h ≤ Σ_{d=0}^{m(1+2ε)/ε} (d + β − 1 choose β − 1) = (m(1 + 2ε)/ε + β choose β).

Since (x choose y) ≤ (ex/y)^y and ε ≤ 1, we see that

h ≤ (e(3m/ε + β)/β)^β ≤ (4em)^β ≤ (4em)^{(1/ε) log(1/ε) + 2}.

A feasible configuration c = (x_1, x_2, ..., x_β, x_{β+1}) is a (β + 1)-dimensional vector where (x_1, x_2, ..., x_β) is a feasible β-configuration and x_{β+1} ∈ {0, ε/n, 2ε/n, ..., εn} represents the sum of the chosen processing times for small jobs. The number of different feasible configurations is bounded by γ ≤ h(n² + 1) = O(n² m^{O((1/ε) log(1/ε))}), a polynomial number in the input size for any fixed error ε > 0.

The Dynamic Programming.
The intuitive idea behind our approach is as follows. Consider two selections of processing times and costs, s_1 and s_2, such that the total cost of s_1 is smaller than that of s_2. Moreover, assume that s_1 and s_2

have the same associated configuration. We prove that we can discard s_2 since, by using the processing times and costs fixed as in s_1, we can compute a solution whose makespan is approximately (i.e., within 1 + O(ε) times) the best makespan of s_2 (i.e., when processing times and costs are chosen as in s_2), and with smaller cost. Therefore, for each feasible configuration we keep at most one selection, the one with the lowest cost. Since there is a polynomial number of feasible configurations, we have also reduced the number of interesting selections to a polynomial number, and we are done. In the following we formalize the above intuition and present a dynamic program which computes, for each feasible configuration, the corresponding selection with the lowest cost. Intuitively, the dynamic program works as follows. Add one job after the other, and for each job consider its restricted set of processing times and costs. Each time a new job is considered, compute all the possible combinations of the job's processing times with the previously added jobs, but if there are two or more selections with the same configuration, then retain the one with the lowest cost and discard the others. The formal description of the dynamic program is given below. To simplify the formulation of the dynamic program we use a vector representation of the reduced set S_j of possible processing times for job J_j. V_j is the set of (β + 1)-dimensional vectors defined as follows: for each p ∈ S_j with p = p_i, for some i = 1, ..., β, there is a vector v ∈ V_j such that the i-th entry of v is set to 1 and the other entries are zeros; otherwise, for each small processing time p, there is a vector v ∈ V_j whose (β + 1)-st entry is p and whose other entries are set to zero. Let c(v) denote the cost of v.
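The vector representation V_j might be built as follows (a Python sketch with names of our own choosing; `big_index` maps each big processing time p_i to its index, and `cost_of` gives the cost associated with a processing time):

```python
def job_vectors(S_j, beta, big_index, cost_of):
    """Build V_j: one (beta+1)-dimensional vector per allowed processing
    time of job J_j, paired with the cost of that choice."""
    V = []
    for p in S_j:
        v = [0.0] * (beta + 1)
        if p in big_index:          # big processing time: unit entry at its index
            v[big_index[p]] = 1
        else:                       # small processing time: add p to the last entry
            v[beta] = p
        V.append((tuple(v), cost_of(p)))
    return V
```

Adding such a vector to a partial configuration is then plain component-wise addition, which is what the recurrence below exploits.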
For every feasible configuration c and for every j, 1 ≤ j ≤ n, we denote by COST(j, c) the minimum total cost when jobs J_1, ..., J_j have been assigned processing times according to configuration c; in case no such assignment exists, COST(j, c) = +∞. It is easy to see that the following recurrence holds:

COST(1, v) = c(v) if v ∈ V_1, and +∞ if v ∉ V_1;
COST(j, c) = min_{v ∈ V_j : v ≤ c} { c(v) + COST(j − 1, c − v) }, for j = 2, ..., n.

For each job we have to consider at most O(γ · (1/ε) log(n/ε)) combinations. Therefore, the running time of the dynamic program is O(n³ log n · m^{O((1/ε) log(1/ε))}). Now we show that among all the generated assignments of processing times to jobs there exists a selection Ã with minimum cost that admits a schedule with makespan at most 1 + O(ε). Let A denote the selection of processing times according to some optimal solution S when only the restricted set of processing times is considered (see Lemma 1). Since we consider all the different feasible configurations, one of them must be the configuration corresponding to A; let us call this configuration c̄. Let Ã denote the selection of processing times with configuration c̄ as computed by our dynamic program. Let L and L̃ be the sums of all processing times according to A and Ã, respectively. The total cost of Ã is not greater than the cost of A. Furthermore, for each big job of A there is a big job of Ã having the same processing time, and vice versa. Finally, the sum of small job processing times in A has the same value as in Ã. It follows that L = L̃. Now, we show that the optimal solution value with processing times according

9 to à is at most 1 + O(ε). Indeed, consider the solution obtained by scheduling first the big jobs as in S and then by adding greedily the small jobs as in the classical list-schedule algorithm [?]. If the machine with the largest completion time finishes with a big job, then we get an optimal solution (with makespan at most 1 + 2ε by Lemma 1). Otherwise, let p k be the processing time of a small job J k that ends last. Then the makespan is equal to s k + p k, where s k is the starting time of job J k. We see that s k + p k < L m + p k = L m + p k 1 + 3ε, by using the classical analysis of the list-schedule algorithm [?]. Theorem 1 There is a PTAS for P1 that provides a solution with minimum cost and makespan at most (1 + ε)τ, if a solution with makespan not greater than τ exists. 3 A PTAS for P2 Problem P2 requires to compute a solution with minimum makespan and cost at most κ. Note that for some values of κ problem P2 might not have a solution. Given a value ε > 0, we present an algorithm that either finds a schedule of length at most (1 + ε) the optimal makespan and cost at most κ, or it decides that a schedule of cost at most κ does not exist. We embed the PTAS for P1, described in the previous section, within a binary search procedure. Assume, without loss of generality, that the input instance has integral data. Clearly, the value of the optimal makespan for problem P2 is in the interval [0, nu max ], where u max = max j u j. Thus we can use UB := 0 and LB := nu max as initial upper and lower bounds, respectively, for the binary search; in each iteration the algorithm performs the following steps: a) it uses the PTAS for P1 to find a schedule of minimum cost and length at most 1 + ε times the value H, where H = (LB + UB)/2 (if a solution of length H exists); b) if the total cost is not greater than κ, then update the upper bound to H, otherwise update the lower bound to H + 1. The algorithm terminates when LB = UB, and outputs the best schedule found. 
A straightforward proof by induction shows that, throughout the execution of the binary search, the value LB is never greater than the optimal makespan value when the cost is constrained to be at most κ. Moreover, if no schedule with cost at most κ is found, then a schedule of cost at most κ does not exist. After O(log(n·u_max)) iterations the search converges and we have obtained a PTAS for P2. The resulting algorithm is polynomial in the binary encoding of the input size.

Theorem 2. There is a PTAS for P2 that provides a solution with makespan at most (1 + ε) times the minimum makespan and cost at most κ, if a solution with cost not greater than κ exists.

We observe that lower and upper bounds on the optimal makespan better than 0 and n·u_max, respectively, help us to improve the efficiency of the binary

search. Moreover, if we knew a makespan value τ ∈ [T, T(1 + ε)], where T is the optimal makespan according to some optimal solution, then by applying the PTAS for P1 with the makespan constrained to be at most τ (see Section 2), we would get a solution with makespan at most τ(1 + ε) ≤ T(1 + O(ε)) and cost not greater than κ. In the next sections we provide an approximation algorithm that delivers a solution with makespan t ≤ 2T and cost at most κ in O(n ln m + m ln² m) time. A value τ ∈ [T, T(1 + ε)] can then be found by applying the previously described binary search to the following log_{1+ε} 2 = O(1/ε) values for H: (t/2)(1 + ε), (t/2)(1 + ε)², ..., t. Therefore, by executing the algorithm of Section 2 O(log((1/ε) log(1/ε))) times, a constant number for any fixed ε, we get a PTAS for P2. This improves on the previously described approach, which requires applying the PTAS for P1 O(log(n·u_max)) times.

3.1 Lower and Upper Bounds

In the following we compute lower (LB) and upper (UB) bounds for the optimal makespan value when the total cost must be at most κ. We start by analyzing the following artificial situation. Consider some optimal solution S* and let δ*_1, ..., δ*_n denote the corresponding δ_j-values. Let us write the corresponding total cost value as C(δ*) = Σ_{j=1}^n c̄_j δ*_j + Σ_{j=1}^n c^u_j, where c̄_j = c^l_j − c^u_j. Let P* = Σ_{j=1}^n p_j(δ*_j) be the sum of processing times in S*. Clearly, the optimal makespan cannot be larger than P*. Moreover, it must be at least P*/m, which represents the best possible situation in which all machines complete processing at exactly the same time. Therefore P* gives us lower and upper bounds on the optimal makespan value. Unfortunately, we do not know P*. However, suppose we can compute n values δ̃_1, ..., δ̃_n, with δ̃_j ∈ [0, 1] for j = 1, ..., n, such that the total sum of processing times P̃ and the total cost according to the δ̃_j values are at most P* and κ, respectively. More formally, δ̃_1, ..., δ̃_n satisfy the following constraints:

P̃ = Σ_{j=1}^n p_j(δ̃_j) ≤ P*,   (1)

Σ_{j=1}^n c̄_j δ̃_j ≤ κ − Σ_{j=1}^n c^u_j.
(2)

(Note that Σ_{j=1}^n c^u_j is a constant and κ − Σ_{j=1}^n c^u_j is non-negative; otherwise no solution with cost at most κ exists.) If we can compute such a value P̃, then a solution with cost at most κ and makespan P̃ exists. Therefore P̃ is an upper bound on the optimal makespan. Moreover, since P̃ ≤ P*, P̃/m is a lower bound. We observe that one such P̃ can be obtained by minimizing the sum of processing times subject to the cost being at most κ. The latter problem turns out to be the continuous relaxation of the knapsack problem, as explained in the following. We start by observing that if κ − Σ_{j=1}^n c^u_j = 0 then the processing time of each job J_j must be u_j in any feasible solution with cost at most κ, and of course we obtain valid bounds by setting LB = P̃/m and UB = P̃, where

P̃ = Σ_{j=1}^n u_j. Otherwise, if κ − Σ_{j=1}^n c^u_j > 0, we simplify the problem instance by dividing all processing costs by κ − Σ_{j=1}^n c^u_j. Let x_j (1 ≤ j ≤ n) be a non-negative variable. We define the vector δ̃ by setting δ̃_j = x*_j, where x* = (x*_1, ..., x*_n) is the optimal solution of the following linear program:

minimize P̃ = Σ_{j=1}^n [x_j l_j + (1 − x_j) u_j]   (3)
subject to Σ_{j=1}^n ĉ_j x_j ≤ 1   (4)
0 ≤ x_j ≤ 1, j = 1, ..., n,   (5)

where ĉ_j = c̄_j / (κ − Σ_{j=1}^n c^u_j). Note that the optimal solution of the linear program (3)-(5) defines a vector δ̃ that satisfies constraints (1) and (2). The objective function can be written as P̃ = Σ_{j=1}^n (l_j − u_j) x_j + Σ_{j=1}^n u_j, where the second summation is a constant. Let t_j = u_j − l_j. An optimal solution of the previous linear program is optimal also for the following linear program:

maximize Σ_{j=1}^n t_j x_j   (6)
subject to Σ_{j=1}^n ĉ_j x_j ≤ 1   (7)
0 ≤ x_j ≤ 1, j = 1, ..., n.   (8)

The previous LP is the continuous relaxation of the classical knapsack problem, which can be solved as follows [?]: first, sort the jobs in non-increasing order of the ratio t_j/ĉ_j, so that, without loss of generality, t_1/ĉ_1 ≥ t_2/ĉ_2 ≥ ... ≥ t_n/ĉ_n. Then assign x_j = 1 until either (a) the jobs are exhausted, or (b) the cost capacity 1 is exactly used up, or (c) it is necessary to set x_j to a value less than 1 to use up the cost capacity 1 exactly. Let s be the last job that uses up the cost capacity; set x_j = 0 for j > s. In all three cases an optimal solution is obtained. The optimal solution of (3)-(5) can be computed in O(n) time by partial sorting [?].

Lemma 2. If a solution with cost not greater than κ exists, then the optimal makespan value when the total cost is at most κ is at least P̃/m and at most P̃, where P̃ is the optimal solution value of the linear program (3)-(5). By dividing all processing times by P̃, we may (and will) assume that the optimal makespan value is within the interval [1/m, 1].
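The greedy procedure for the continuous knapsack relaxation (6)-(8) might look as follows (a Python sketch using plain sorting, hence O(n log n) rather than the O(n) partial-sorting bound stated above; names are ours):

```python
def continuous_knapsack(t, c_hat):
    """Greedy for: maximize sum t_j x_j  s.t.  sum c_hat_j x_j <= 1,
    0 <= x_j <= 1.  Items are taken in non-increasing t_j/c_hat_j order;
    at most one item is taken fractionally."""
    n = len(t)
    order = sorted(range(n), key=lambda j: t[j] / c_hat[j], reverse=True)
    x, cap = [0.0] * n, 1.0
    for j in order:
        if c_hat[j] <= cap:          # take the whole item
            x[j], cap = 1.0, cap - c_hat[j]
        else:                        # fractional item exhausts the capacity
            x[j], cap = cap / c_hat[j], 0.0
            break
    return x
```

Setting δ̃_j = x_j then yields the processing times whose sum P̃ gives the bounds of Lemma 2.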

3.2 A 2-approximation Algorithm for P2

In this section we present a 2-approximation algorithm for problem P2, i.e., an algorithm that returns a solution with cost at most κ and makespan within 2 times the optimal makespan value T. Assume that we have an algorithm A(τ, κ) that, given τ ∈ [1/m, 1] and κ, either decides that there is no schedule with makespan at most τ and cost κ, or produces a schedule with makespan at most (2 − 2/(m + 1))τ and cost κ. Then consider the following set of O(m log m) values:

V = {(1/m)(1 + 1/m), (1/m)(1 + 1/m)², ..., 1}.

Since 1/m ≤ T ≤ 1, there are two values from V, a = (1/m)(1 + 1/m)^i and b = min{1, (1/m)(1 + 1/m)^{i+1}}, such that a ≤ T ≤ b. It is easy to check that b ≤ T(1 + 1/m), and algorithm A(τ, κ), when τ = b, returns a solution with makespan at most

(2 − 2/(m + 1))τ ≤ T(1 + 1/m)(2 − 2/(m + 1)) = 2T

and cost κ. Therefore a 2-approximate solution can be found by performing a binary search for τ over the values in V. To complete the description of the 2-approximation algorithm we need algorithm A(τ, κ), which is as follows. We start by observing that, since we are interested in some solution with makespan at most τ, we can transform the given instance into an equivalent one so that, for all jobs J_j ∈ J, we have u_j ≤ τ. Then, as described in the previous section, solve the continuous relaxation of the classical knapsack problem and determine the vector δ̃ = (δ̃_1, ..., δ̃_n) that minimizes the sum P̃ of processing times. If P̃ > mτ then there is no solution with makespan at most τ and cost κ. Otherwise, according to δ̃, our algorithm works as follows.

Algorithm A(τ, κ)
1. Assign the m jobs with the largest processing times to different machines. Let k = 1.
2. Schedule the remaining jobs as follows. Assign each job (one after another, in any arbitrary order) to machine k until either (a) the jobs are exhausted, or (b) the completion time of the last added job J_{f(k)} is greater than τ.
3. If the jobs are not exhausted, then let k := k + 1.
If k m go to step 2, else go to step Remove jobs J f(1),..., J f(m 1) from the schedule and assign them one after another to a machine with the currently smallest load. Since P mτ all jobs are scheduled by the algorithm above. The algorithm runs in O(n + m log m) time. Theorem 3 If P mτ then algorithm A provides a solution with makespan T A (2 2 m+1 )τ, where processing times are chosen according δ. 12
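Steps 1-4 of algorithm A admit a compact implementation. The following Python sketch is our own illustrative rendering (not the authors' code); it takes the processing times p_j(δ̄_j) as already fixed and returns the assignment together with its makespan:

```python
def algorithm_A(p, m, tau):
    """Sketch of algorithm A for fixed processing times p (p[j] = p_j(delta_j)).
    Assumes sum(p) <= m * tau and len(p) >= m.  Returns (assignment, makespan)."""
    jobs = sorted(range(len(p)), key=lambda j: p[j], reverse=True)
    assign = [[] for _ in range(m)]
    loads = [0.0] * m
    # Step 1: the m largest jobs go to different machines.
    for i, j in enumerate(jobs[:m]):
        assign[i].append(j)
        loads[i] = p[j]
    # Steps 2-3: fill machine k; J_f(k) is the job pushing machine k past tau.
    k, overflow = 0, []
    for j in jobs[m:]:
        assign[k].append(j)
        loads[k] += p[j]
        if loads[k] > tau:
            overflow.append((k, j))
            k += 1
    # Step 4: remove the overflow jobs of the first m-1 machines, then reassign
    # each one to the machine with the currently smallest load.
    for i, j in overflow:
        if i < m - 1:
            assign[i].remove(j)
            loads[i] -= p[j]
    for i, j in overflow:
        if i < m - 1:
            target = min(range(m), key=lambda q: loads[q])
            assign[target].append(j)
            loads[target] += p[j]
    return assign, max(loads)
```

Since sum(p) ≤ mτ, at most one machine can end up overloaded past τ at any point of steps 2-3, so the machine index never runs past m while jobs remain.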

Proof. We assume that T_A > τ, otherwise we are done. Let J_f be a job with the largest completion time and let s_f be the start time of job J_f. Then no machine can be idle at any time prior to s_f, since otherwise job J_f would have been processed there. Two situations are possible:

1. J_f is one of the m biggest jobs. In that case the schedule is optimal, since p_f(δ̄_f) ≤ τ.
2. Otherwise, assume, without loss of generality, that p_1(δ̄_1) ≥ p_2(δ̄_2) ≥ ... ≥ p_n(δ̄_n). The following inequalities hold:

    p_f(δ̄_f) ≤ min_{i=1,...,m} p_i(δ̄_i) ≤ (1/m) Σ_{i=1}^m p_i(δ̄_i),
    Σ_{i=1}^m p_i(δ̄_i) ≤ mτ − p_f(δ̄_f).

It follows that p_f(δ̄_f) ≤ (1/m) Σ_{i=1}^m p_i(δ̄_i) ≤ (1/m)(mτ − p_f(δ̄_f)), hence p_f(δ̄_f) ≤ mτ/(m+1). Since P̄ = Σ_j p_j(δ̄_j) ≤ mτ, we get

    T_A = s_f + p_f(δ̄_f) ≤ (1/m) Σ_{i≠f} p_i(δ̄_i) + p_f(δ̄_f)
        ≤ τ + (1 − 1/m) p_f(δ̄_f)
        ≤ τ + (1 − 1/m) · mτ/(m+1)
        = τ + ((m−1)/(m+1)) τ = (2 − 2/(m+1)) τ.

4 A PTAS for P3

Problem P3 consists of computing a schedule which minimizes T + αC, where T is the makespan, C is the total cost of the schedule, and α > 0 is a parameter. Using modified cost values c′_j(δ_j) = αc_j(δ_j), we can restrict ourselves to the case α = 1 without loss of generality.

To explain the polynomial approximation scheme, we first examine an artificial situation that we shall use as a tool in analyzing the eventual algorithm. Let us focus on a specific instance and a particular optimal solution SOL with value OPT = T* + C*, where T* is the makespan value and C* the total cost according to SOL. Now, assume that we know the value T*. Then by applying the PTAS for P1 (see Section 2) with the makespan constrained to be at most T*, we get a solution with cost at most C* and makespan at most (1 + ε)T*, i.e., a solution with value at most (1 + ε)T* + C* ≤ (1 + ε)(T* + C*), and we are done. Unfortunately, we do not know the value T*. However, in the following we show that we can select a set T of O((1/ε) log(m/ε)) values such that one of these values, say T, is at most T* + OPT·ε and at least T*.
By applying the PTAS for P1 (see Section 2) with the makespan constrained to be at most T, we get a solution with cost at most C* and makespan at most (1 + ε)T ≤ (1 + ε)(T* + OPT·ε). The objective value of this solution is therefore bounded by

    (1 + ε)(T* + OPT·ε) + C* = T* + C* + ε(T* + OPT(1 + ε)) ≤ OPT(1 + 3ε)   (for ε ≤ 1).

Hence, a PTAS for P3 is obtained by calling the PTAS for P1 O((1/ε) log(m/ε)) times and returning the best solution found. In conclusion, what remains to be shown is how to compute the set T. This is accomplished in the following subsection.

4.1 Computing T

We start by computing lower and upper bounds for the minimum objective value OPT. We define d_j = min{c_j^l + l_j, c_j^u + u_j} and D = Σ_{j∈J} d_j. Consider an optimal schedule with makespan T* and total cost C* (where OPT = T* + C*). In this schedule we have δ_j* values with c_j(δ_j*) + p_j(δ_j*) ≥ d_j. Then,

    D = Σ_{j∈J} d_j ≤ Σ_{j∈J} c_j(δ_j*) + Σ_{j∈J} p_j(δ_j*) ≤ C* + mT* ≤ m·OPT.

Therefore the optimal objective value OPT is at least D/m. Now we show that a schedule with value at most D always exists; this gives us an upper bound on the optimal value OPT. A solution of value at most D is obtained as follows. Set the compression level δ_j of each job J_j either to 1 or 0, such that p_j(δ_j) + c_j(δ_j) = d_j. Since c_j(δ_j) ≤ d_j, the total cost is Σ_{j∈J} c_j(δ_j) ≤ Σ_{j∈J} d_j = D. Assign all jobs one after the other to one single machine; then the makespan of the schedule is equal to Σ_{j∈J} p_j(δ_j) ≤ D. Since c_j(δ_j) + p_j(δ_j) = d_j, the objective value of this schedule is exactly D. Therefore, we have proved that OPT ∈ [D/m, D]. Dividing all processing times and cost values by D, we have 1/m ≤ OPT ≤ 1.

Now, recall that our goal is to compute a set T of O((1/ε) log(m/ε)) values such that one of these values, say T, is at most T* + OPT·ε and at least T*. We define T to be the following set of values,

    T = { ε/m, (ε/m)(1 + ε), (ε/m)(1 + ε)², ..., 1 },

and claim that T has the desired properties. Indeed, if T* ≤ ε/m, then the lowest element of T, i.e., ε/m, is at most OPT·ε ≤ T* + OPT·ε, since 1/m ≤ OPT, and at least T* by assumption. If (ε/m)(1 + ε)^i ≤ T* ≤ min{1, (ε/m)(1 + ε)^{i+1}}, then

    min{1, (ε/m)(1 + ε)^{i+1}} ≤ T*(1 + ε) ≤ T* + OPT·ε.

Therefore we have proved that the defined set T always contains an element that is at most T* + OPT·ε and at least T*. Simple calculations show that the size of T is O((1/ε) log(m/ε)), and we are done.

5 Preemptive Problems

The preemptive problems are easy to solve. For instance, an optimal solution for problem P1 can be delivered as follows. First, solve the following linear program, which defines the cost and the processing times of the jobs:

    Minimize   Σ_{j=1}^n Σ_{i=1}^m (c_j^u x_{ij}^u + c_j^l x_{ij}^l)
    s.t.       Σ_{i=1}^m (x_{ij}^u + x_{ij}^l) = 1,         j = 1, ..., n
               Σ_{j=1}^n (u_j x_{ij}^u + l_j x_{ij}^l) ≤ τ,  i = 1, ..., m
               Σ_{i=1}^m (u_j x_{ij}^u + l_j x_{ij}^l) ≤ τ,  j = 1, ..., n
               x_{ij}^u, x_{ij}^l ≥ 0,   j = 1, ..., n,  i = 1, ..., m.

Let δ_j = Σ_{i=1}^m x_{ij}^u, j = 1, ..., n. It is easy to see that the length of a preemptive schedule cannot be smaller than LB = max{max_j p_j(δ_j), (1/m) Σ_j p_j(δ_j)}. A schedule meeting this bound can be constructed in O(n) time by using McNaughton's rule [?]: fill the machines successively, scheduling the jobs in any order and splitting a job into two parts whenever the above time bound is met; schedule the second part of a preempted job on the next machine at time zero. Since p_j(δ_j) ≤ LB for all j, the two parts of a preempted job do not overlap. Similar arguments hold for P2 and P3.

The most time-consuming part of the previous algorithms is the solution of the linear program. As pointed out in [?], these linear programs fall into the class of fractional packing problems, and therefore a solution with cost and makespan at most (1 + ε) times the required values can be found by a randomized algorithm in O(n² log n) expected time, or deterministically in O(mn² log n) time.
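McNaughton's wrap-around rule described above can be sketched in a few lines. The following is an illustrative Python rendering (names are ours; the small numerical tolerance is an implementation detail, not part of the rule):

```python
def mcnaughton(p, m):
    """Preemptive schedule of jobs with processing times p on m machines,
    meeting the bound LB = max(max_j p_j, sum_j p_j / m).  Returns a list of
    (job, start, end) pieces per machine, plus LB."""
    EPS = 1e-12
    LB = max(max(p), sum(p) / m)
    schedule = [[] for _ in range(m)]
    machine, t = 0, 0.0
    for j, pj in enumerate(p):
        remaining = pj
        while remaining > EPS:
            piece = min(remaining, LB - t)   # fill the current machine up to LB
            schedule[machine].append((j, t, t + piece))
            t += piece
            remaining -= piece
            if t >= LB - EPS:                # machine full: wrap to the next one
                machine += 1
                t = 0.0
    return schedule, LB
```

Because p_j ≤ LB, the wrapped second piece of a split job ends no later than its first piece starts, so the two parts never run in parallel.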
In the following we present an alternative approach that finds the optimal solution for P1 in O(n) time. Regarding P2 and P3, we sketch algorithms that find solutions whose makespan is increased only by a factor (1 + ε), in O(n log log m) time.

5.1 Preemptive P1

In Section 3, we described a linear program which finds processing times of jobs such that the sum of processing times is minimized and the cost is at most κ. Similarly, we can formulate a linear program which defines the jobs' processing times by determining the vector (δ̄_j) such that the total cost is minimized, the sum of processing times is at most τm, and each processing time is at most τ. Therefore, by construction, we have max{max_j p_j(δ̄_j), (1/m) Σ_{j=1}^n p_j(δ̄_j)} ≤ τ, and again, a schedule meeting this bound can be constructed in O(n) time by using McNaughton's rule [?]. Before defining the linear program, since we are interested in solutions with makespan at most τ, we can set, without loss of generality, the processing time upper bound of each job J_j to min{τ, u_j}. Of course we have to adjust the value of c_j^u to reflect this change. The linear program is the following:

    Minimize   Σ_{j=1}^n (c_j^l − c_j^u) δ̄_j + Σ_{j=1}^n c_j^u
    s.t.       Σ_{j=1}^n u_j − Σ_{j=1}^n (u_j − l_j) δ̄_j ≤ τm
               0 ≤ δ̄_j ≤ 1,   j = 1, ..., n.

The previous LP is a continuous relaxation of the classical knapsack problem in minimization form. The minimization form can easily be transformed into an equivalent maximization form and solved as described in Section 3. Therefore, for P1 we can find the optimal solution in O(n) time.
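The minimization form can also be solved directly by the same greedy idea, shaving the cheapest time units first. The sketch below is our own illustrative rendering (names are ours); it assumes c_j^l ≥ c_j^u and that the preprocessing u_j := min{τ, u_j} has already been applied:

```python
def min_cost_compression(l, u, cl, cu, tau, m):
    """Greedy optimum of: minimize sum_j [cu_j + (cl_j - cu_j) delta_j]
    subject to sum_j u_j - sum_j (u_j - l_j) delta_j <= tau * m, 0 <= delta_j <= 1.
    Returns (delta, cost), or (None, inf) if even full compression is infeasible."""
    n = len(l)
    W = sum(u) - tau * m                 # total processing time to shave off
    delta = [0.0] * n
    if W > 0:
        # Shave cheapest time units first: sort by marginal cost per shaved unit.
        order = sorted((j for j in range(n) if u[j] > l[j]),
                       key=lambda j: (cl[j] - cu[j]) / (u[j] - l[j]))
        for j in order:
            shave = min(u[j] - l[j], W)
            delta[j] = shave / (u[j] - l[j])
            W -= shave
            if W <= 0:
                break
        if W > 1e-12:                    # even delta = 1 everywhere is not enough
            return None, float("inf")
    cost = sum(cu[j] + (cl[j] - cu[j]) * delta[j] for j in range(n))
    return delta, cost
```

As with the maximization form, sorting can be replaced by weighted median selection to reach the O(n) bound claimed above.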
If KP(T, κ) is infeasible then there is no solution with cost at most κ and makespan at most T. Otherwise, let (δ̄_j) denote the optimal solution of KP(T, κ). If (1/m) Σ_{j=1}^n p_j(δ̄_j) > T then there is no solution with makespan at most T subject to cost at most κ. Therefore, if such a solution exists, then max{max_j p_j(δ̄_j), (1/m) Σ_{j=1}^n p_j(δ̄_j)} ≤ T, and again, a schedule meeting this bound can be constructed in O(n) time by using McNaughton's rule [?]. In Section 3 we have shown how to compute the minimum makespan value when the cost is constrained to be at most κ. The latter is obtained by solving

KP(T, κ) with at most a polynomial number of different values for T. Moreover, we have also pointed out that we can find a makespan that is at most τ(1 + ε) by calling KP(T, κ) with at most O(log log m) different values of T.

5.3 Preemptive P3

To solve P3, first compute a set T of candidate values for the makespan (as described in Section 4) and then solve an LP similar to the one provided for the preemptive P1, which finds a processing time assignment with minimum cost and makespan at most equal to the considered value. Therefore, we can obtain a schedule with objective function value bounded by C* + T*(1 + ε) ≤ (C* + T*)(1 + ε).

References


More information

4 Sequencing problem with heads and tails

4 Sequencing problem with heads and tails 4 Sequencing problem with heads and tails In what follows, we take a step towards multiple stage problems Therefore, we consider a single stage where a scheduling sequence has to be determined but each

More information

Task assignment in heterogeneous multiprocessor platforms

Task assignment in heterogeneous multiprocessor platforms Task assignment in heterogeneous multiprocessor platforms Sanjoy K. Baruah Shelby Funk The University of North Carolina Abstract In the partitioned approach to scheduling periodic tasks upon multiprocessors,

More information

APPROXIMATION ALGORITHMS FOR SCHEDULING ORDERS ON PARALLEL MACHINES

APPROXIMATION ALGORITHMS FOR SCHEDULING ORDERS ON PARALLEL MACHINES UNIVERSIDAD DE CHILE FACULTAD DE CIENCIAS FÍSICAS Y MATEMÁTICAS DEPARTAMENTO DE INGENIERÍA MATEMÁTICA APPROXIMATION ALGORITHMS FOR SCHEDULING ORDERS ON PARALLEL MACHINES SUBMITTED IN PARTIAL FULFILLMENT

More information

Efficient approximation algorithms for the Subset-Sums Equality problem

Efficient approximation algorithms for the Subset-Sums Equality problem Efficient approximation algorithms for the Subset-Sums Equality problem Cristina Bazgan 1 Miklos Santha 2 Zsolt Tuza 3 1 Université Paris-Sud, LRI, bât.490, F 91405 Orsay, France, bazgan@lri.fr 2 CNRS,

More information

Embedded Systems Development

Embedded Systems Development Embedded Systems Development Lecture 3 Real-Time Scheduling Dr. Daniel Kästner AbsInt Angewandte Informatik GmbH kaestner@absint.com Model-based Software Development Generator Lustre programs Esterel programs

More information

Online Learning, Mistake Bounds, Perceptron Algorithm

Online Learning, Mistake Bounds, Perceptron Algorithm Online Learning, Mistake Bounds, Perceptron Algorithm 1 Online Learning So far the focus of the course has been on batch learning, where algorithms are presented with a sample of training data, from which

More information

Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT

Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT ,7 CMPUT 675: Approximation Algorithms Fall 2011 Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Pacing, MAX-SAT Lecturer: Mohammad R. Salavatipour Scribe: Weitian Tong 6.1 Bin Pacing Problem Recall the bin pacing

More information

CO759: Algorithmic Game Theory Spring 2015

CO759: Algorithmic Game Theory Spring 2015 CO759: Algorithmic Game Theory Spring 2015 Instructor: Chaitanya Swamy Assignment 1 Due: By Jun 25, 2015 You may use anything proved in class directly. I will maintain a FAQ about the assignment on the

More information

P,NP, NP-Hard and NP-Complete

P,NP, NP-Hard and NP-Complete P,NP, NP-Hard and NP-Complete We can categorize the problem space into two parts Solvable Problems Unsolvable problems 7/11/2011 1 Halting Problem Given a description of a program and a finite input, decide

More information

Improved Algorithms for Machine Allocation in Manufacturing Systems

Improved Algorithms for Machine Allocation in Manufacturing Systems Improved Algorithms for Machine Allocation in Manufacturing Systems Hans Frenk Martine Labbé Mario van Vliet Shuzhong Zhang October, 1992 Econometric Institute, Erasmus University Rotterdam, the Netherlands.

More information

arxiv: v1 [math.oc] 3 Jan 2019

arxiv: v1 [math.oc] 3 Jan 2019 The Product Knapsack Problem: Approximation and Complexity arxiv:1901.00695v1 [math.oc] 3 Jan 2019 Ulrich Pferschy a, Joachim Schauer a, Clemens Thielen b a Department of Statistics and Operations Research,

More information

Chapter 3: Discrete Optimization Integer Programming

Chapter 3: Discrete Optimization Integer Programming Chapter 3: Discrete Optimization Integer Programming Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-16-17.shtml Academic year 2016-17

More information

THE EXISTENCE AND USEFULNESS OF EQUALITY CUTS IN THE MULTI-DEMAND MULTIDIMENSIONAL KNAPSACK PROBLEM LEVI DELISSA. B.S., Kansas State University, 2014

THE EXISTENCE AND USEFULNESS OF EQUALITY CUTS IN THE MULTI-DEMAND MULTIDIMENSIONAL KNAPSACK PROBLEM LEVI DELISSA. B.S., Kansas State University, 2014 THE EXISTENCE AND USEFULNESS OF EQUALITY CUTS IN THE MULTI-DEMAND MULTIDIMENSIONAL KNAPSACK PROBLEM by LEVI DELISSA B.S., Kansas State University, 2014 A THESIS submitted in partial fulfillment of the

More information

A PTAS for Static Priority Real-Time Scheduling with Resource Augmentation

A PTAS for Static Priority Real-Time Scheduling with Resource Augmentation A PAS for Static Priority Real-ime Scheduling with Resource Augmentation echnical Report Friedrich Eisenbrand and homas Rothvoß Institute of Mathematics EPFL, Lausanne, Switzerland {friedrich.eisenbrand,thomas.rothvoss}@epfl.ch

More information

arxiv: v2 [cs.ds] 6 Apr 2018

arxiv: v2 [cs.ds] 6 Apr 2018 Constant Approximation for k-median and k-means with Outliers via Iterative Rounding Ravishankar Krishnaswamy Shi Li Sai Sandeep April 9, 2018 arxiv:1711.01323v2 [cs.ds] 6 Apr 2018 Abstract In this paper,

More information

CS 374: Algorithms & Models of Computation, Spring 2017 Greedy Algorithms Lecture 19 April 4, 2017 Chandra Chekuri (UIUC) CS374 1 Spring / 1

CS 374: Algorithms & Models of Computation, Spring 2017 Greedy Algorithms Lecture 19 April 4, 2017 Chandra Chekuri (UIUC) CS374 1 Spring / 1 CS 374: Algorithms & Models of Computation, Spring 2017 Greedy Algorithms Lecture 19 April 4, 2017 Chandra Chekuri (UIUC) CS374 1 Spring 2017 1 / 1 Part I Greedy Algorithms: Tools and Techniques Chandra

More information

On bilevel machine scheduling problems

On bilevel machine scheduling problems Noname manuscript No. (will be inserted by the editor) On bilevel machine scheduling problems Tamás Kis András Kovács Abstract Bilevel scheduling problems constitute a hardly studied area of scheduling

More information

Scheduling Lecture 1: Scheduling on One Machine

Scheduling Lecture 1: Scheduling on One Machine Scheduling Lecture 1: Scheduling on One Machine Loris Marchal 1 Generalities 1.1 Definition of scheduling allocation of limited resources to activities over time activities: tasks in computer environment,

More information

INSTITUT FÜR INFORMATIK

INSTITUT FÜR INFORMATIK INSTITUT FÜR INFORMATIK Approximation Algorithms for Scheduling with Reservations Florian Diedrich Klaus Jansen Fanny Pascual Denis Trystram Bericht Nr. 0812 October 2008 CHRISTIAN-ALBRECHTS-UNIVERSITÄT

More information

The polynomial solvability of selected bicriteria scheduling problems on parallel machines with equal length jobs and release dates

The polynomial solvability of selected bicriteria scheduling problems on parallel machines with equal length jobs and release dates The polynomial solvability of selected bicriteria scheduling problems on parallel machines with equal length jobs and release dates Hari Balasubramanian 1, John Fowler 2, and Ahmet Keha 2 1: Department

More information

ARobustPTASforMachineCoveringand Packing

ARobustPTASforMachineCoveringand Packing ARobustPTASforMachineCoveringand Packing Martin Skutella and José Verschae Institute of Mathematics, TU Berlin, Germany {skutella,verschae}@math.tu-berlin.de Abstract. Minimizing the makespan or maximizing

More information

Optimal on-line algorithms for single-machine scheduling

Optimal on-line algorithms for single-machine scheduling Optimal on-line algorithms for single-machine scheduling J.A. Hoogeveen A.P.A. Vestjens Department of Mathematics and Computing Science, Eindhoven University of Technology, P.O.Box 513, 5600 MB, Eindhoven,

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

Online Appendix for Coordination of Outsourced Operations at a Third-Party Facility Subject to Booking, Overtime, and Tardiness Costs

Online Appendix for Coordination of Outsourced Operations at a Third-Party Facility Subject to Booking, Overtime, and Tardiness Costs Submitted to Operations Research manuscript OPRE-2009-04-180 Online Appendix for Coordination of Outsourced Operations at a Third-Party Facility Subject to Booking, Overtime, and Tardiness Costs Xiaoqiang

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

Improved Bounds on Relaxations of a Parallel Machine Scheduling Problem

Improved Bounds on Relaxations of a Parallel Machine Scheduling Problem Journal of Combinatorial Optimization 1, 413 426 (1998) c 1998 Kluwer Academic Publishers Manufactured in The Netherlands Improved Bounds on Relaxations of a Parallel Machine Scheduling Problem CYNTHIA

More information

Minimizing Mean Flowtime and Makespan on Master-Slave Systems

Minimizing Mean Flowtime and Makespan on Master-Slave Systems Minimizing Mean Flowtime and Makespan on Master-Slave Systems Joseph Y-T. Leung,1 and Hairong Zhao 2 Department of Computer Science New Jersey Institute of Technology Newark, NJ 07102, USA Abstract The

More information

Combinatorial Algorithms for Minimizing the Weighted Sum of Completion Times on a Single Machine

Combinatorial Algorithms for Minimizing the Weighted Sum of Completion Times on a Single Machine Combinatorial Algorithms for Minimizing the Weighted Sum of Completion Times on a Single Machine James M. Davis 1, Rajiv Gandhi, and Vijay Kothari 1 Department of Computer Science, Rutgers University-Camden,

More information

Machine Minimization for Scheduling Jobs with Interval Constraints

Machine Minimization for Scheduling Jobs with Interval Constraints Machine Minimization for Scheduling Jobs with Interval Constraints Julia Chuzhoy Sudipto Guha Sanjeev Khanna Joseph (Seffi) Naor Abstract The problem of scheduling jobs with interval constraints is a well-studied

More information

CMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017

CMSC 451: Lecture 7 Greedy Algorithms for Scheduling Tuesday, Sep 19, 2017 CMSC CMSC : Lecture Greedy Algorithms for Scheduling Tuesday, Sep 9, 0 Reading: Sects.. and. of KT. (Not covered in DPV.) Interval Scheduling: We continue our discussion of greedy algorithms with a number

More information

Scheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17

Scheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17 Scheduling on Unrelated Parallel Machines Approximation Algorithms, V. V. Vazirani Book Chapter 17 Nicolas Karakatsanis, 2008 Description of the problem Problem 17.1 (Scheduling on unrelated parallel machines)

More information