APPROXIMATION ALGORITHMS FOR SCHEDULING ORDERS ON PARALLEL MACHINES

UNIVERSIDAD DE CHILE
FACULTAD DE CIENCIAS FÍSICAS Y MATEMÁTICAS
DEPARTAMENTO DE INGENIERÍA MATEMÁTICA

APPROXIMATION ALGORITHMS FOR SCHEDULING ORDERS ON PARALLEL MACHINES

SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MATHEMATICAL CIVIL ENGINEER

SUPERVISOR: JOSÉ RAFAEL CORREA HAEUSSLER
COMMITTEE: MARCOS ABRAHAM KIWI KRAUSKOPF, ROBERTO MARIO COMINETTI COTTI-COMETTI

SANTIAGO, CHILE
AUGUST 2008


SUBMITTED AS PARTIAL FULFILLMENT FOR THE DEGREE OF MATHEMATICAL CIVIL ENGINEER
BY: JOSÉ C. VERSCHAE T.
DATE: 18/08/2008
SUPERVISOR: JOSÉ R. CORREA

APPROXIMATION ALGORITHMS FOR SCHEDULING ORDERS ON PARALLEL MACHINES

The purpose of this thesis was to study the problem of scheduling orders on machines. In this problem a producer has a number of machines on which it must process a set of jobs. Each job belongs to an order, corresponding to the request of a client. The jobs have processing times, which may depend on the machine on which they are processed, and release dates. Finally, each order has an associated weight depending on how important it is to the producer. The completion time of an order is the point in time when all of its jobs have been processed. The producer must decide when and on which machine each job is processed, with the objective of minimizing the weighted sum of completion times of the orders. This model generalizes several classical scheduling problems. First, the objective function of our problem includes as special cases the objectives of minimizing the maximum completion time (makespan) and the weighted sum of completion times of jobs. Furthermore, this thesis shows that our model also generalizes the problem of minimizing the sum of weighted completion times of jobs on one machine under precedence constraints. Since all these problems are NP-hard, their apparent intractability suggests searching for efficient algorithms that yield solutions whose cost is close to the optimum. It is with this objective that, based on time-indexed linear relaxations, a 27/2-approximation algorithm was proposed for the general setting described above. This is the first algorithm with a constant approximation guarantee for this problem, which improves the result of Leung, Li and Pinedo (2007). Based on similar techniques, in the case where jobs can be preempted, an algorithm with an approximation guarantee arbitrarily close to 4 was obtained. Also, a polynomial time approximation scheme (PTAS) was found for the case where the orders are disjoint and the machines are identical and constantly many. Furthermore, it was shown that a variant of this approximation scheme can be applied to the case where the number of machines is part of the input, but the number of jobs per order, or the number of orders, is constant. Finally, the problem of minimizing the makespan on unrelated machines was studied, obtaining an algorithm that transforms any preemptive solution into one where no job is preempted, increasing the makespan by at most a factor of 4. Moreover, it was proven that no such transformation with a better guarantee is possible.

Acknowledgments

First of all, I want to thank my parents and brothers for instilling in me the love of thinking. Their constant support helped me throughout my whole career. I thank my brother Rodrigo for always listening to me and discussing my writing. To my loving wife Natalia, who with her help, love, patience and unconditional support helped me finish this thesis. I especially thank my advisor José R. Correa, who through long hours of discussions introduced me to the world of research. More than only helping me in my work, he gave me friendship and support in general. Without his constant support this thesis would not have been carried out successfully. To all the students of the mathematics department of the University of Chile, for always being willing to talk and cheer me up. I also thank Martin Skutella, who hosted me during my stay in Germany in September and October. His collaboration and important contributions made Chapter 4 of this writing possible. I also thank all his group at TU-Berlin for making my stay in Berlin pleasant. I also thank Nicole Megow for offering me her friendship and support.

Contents

1 Introduction
1.1 Machine scheduling problems
1.2 Approximation algorithms
1.3 Polynomial time approximation schemes
1.4 Problem definition
1.5 Previous work
1.5.1 Single machine
1.5.2 Parallel machines
1.5.3 Unrelated machines
1.6 Contributions of this work

2 On the power of preemption on R||C_max
2.1 R|pmtn|C_max is polynomially solvable
2.2 A new rounding technique for R||C_max
2.3 Power of preemption of R||C_max
2.3.1 Base case
2.3.2 Iterative procedure

3 Approximation algorithms for minimizing Σ w_L C_L on unrelated machines
3.1 A (4 + ε)-approximation algorithm for R|r_ij, pmtn|Σ w_L C_L
3.2 A constant factor approximation for R|r_ij|Σ w_L C_L

4 A PTAS for minimizing Σ w_L C_L on parallel machines
4.1 Algorithm overview
4.2 Localization
4.3 Polynomial Representation of Orders' Subsets
4.4 Polynomial Representation of Frontiers
4.5 A PTAS for a specific block
4.6 Variations

5 Concluding remarks and open problems

Chapter 1

Introduction

1.1 Machine scheduling problems

Machine scheduling problems deal with the allocation of scarce resources over time. They arise in many different situations: for example, a construction site where the boss has to assign jobs to each worker, a CPU that must process tasks requested by several users, or a factory's production lines that must manufacture products for its clients. In general, an instance of a scheduling problem contains a set of n jobs J, and a set of m machines M on which the jobs in J must be processed. A solution of the problem is a schedule, i.e., an assignment that specifies when and on which machine i ∈ M each job j ∈ J is executed. To classify scheduling problems we have to look at the different characteristics or attributes that the machines and jobs have, as well as the objective function to be optimized. One of these is the machine environment, that is, the characteristics of the machines in our model. For example, we can consider identical or parallel machines, where each machine is an identical copy of all the others. In this setting each job j ∈ J takes a time p_j to be processed, independent of the machine on which it is scheduled. On the other hand, we can consider a more general situation where each machine i ∈ M has a different speed s_i, so that the time it takes to process job j on it is inversely proportional to the speed of the machine. Additionally, scheduling problems can be classified depending on the jobs' characteristics. Just to name a few, our model may consider nonpreemptive jobs, i.e., jobs that cannot be interrupted until they are completed, or preemptive jobs, i.e., jobs that can be interrupted at any time and later resumed on the same or on a different machine.

Also, we can classify problems depending on the objective function. One of the most natural objective functions is the makespan, i.e., the point in time at which the last job finishes. More precisely, if for some schedule we define the completion time of a job j ∈ J, denoted C_j, as the time at which job j finishes processing, then the objective is to minimize C_max := max_{j∈J} C_j. Another classical example consists in minimizing the number of late jobs. In this setting, each job j ∈ J has a deadline d_j and the objective is to minimize the number of jobs that finish processing after their deadline. Besides these, there are several other objective functions that can be considered.

A large number of scheduling problems can be obtained by combining the characteristics just mentioned, so it becomes necessary to introduce a standard notation for all these different problems. For this, Graham, Lawler, Lenstra and Rinnooy Kan [20] introduced the three-field notation, where a scheduling problem is represented by an expression of the form α|β|γ. Here, the first field α denotes the machine environment, the second field β contains extra constraints or characteristics of the problem, and the last field γ denotes the objective function. In the following we describe the most common values of α, β and γ.

1. Values of α.

α = 1: Single machine. There is only one machine at our disposal to process the jobs. Each job j ∈ J takes a given time p_j to be processed.

α = P: Parallel machines. We have a number m of identical or parallel machines to process the jobs. The processing time of job j is given by p_j, independently of the machine where job j is processed.

α = Q: Related machines. In this setting each machine i ∈ M has an associated speed s_i. The processing time of job j ∈ J on machine i ∈ M equals p_j/s_i, where p_j is the time it takes to process j on a machine of speed 1.

α = R: Unrelated machines. In this more general setting there is no a priori relation between the processing times of a job on the different machines, i.e., the processing time of job j ∈ J on machine i ∈ M is an arbitrary number denoted by p_ij.

Additionally, in the case that α = P, Q or R, we can add the letter m at the end of the field, indicating that the number of machines m is constant. For example, if under a parallel machine environment the number of machines is constant, then α = Pm. The value of m can also be specified, e.g., α = P2 means that there are exactly 2 parallel machines to process the jobs.

2. Values of β.

β = pmtn: Preemptive jobs. In this setting we consider jobs that can be preempted, i.e., jobs that can be interrupted and resumed later on the same or on a different machine.

β = r_j: Release dates. Each job j ∈ J has an associated release date r_j, such that j cannot start processing before that time.

β = prec: Precedence constraints. Consider a partial order relation ≺ over the jobs (J, ≺). If for some pair of jobs j and k we have j ≺ k, then k must start processing after the completion time of job j.

3. Values of γ.

γ = C_max: Makespan. The objective is to minimize the makespan C_max := max_{j∈J} C_j.

γ = Σ C_j: Average completion time. We must minimize the average of the completion times, or equivalently Σ_{j∈J} C_j.

γ = Σ w_j C_j: Sum of weighted completion times. Consider a weight w_j for each j ∈ J. The objective is to minimize the sum of weighted completion times Σ_{j∈J} w_j C_j.

It is worth noticing that by default we consider nonpreemptive jobs. In other words, if the field β is empty, then jobs cannot be preempted. For example, R||Σ w_j C_j denotes the problem of finding a nonpreemptive schedule of a set of jobs J on a set of machines M, where each job j ∈ J takes p_ij units of time to process on machine i ∈ M, minimizing Σ_{j∈J} w_j C_j. As a second example, R|r_j|Σ w_j C_j denotes the same problem as before, with the only difference that a job j can only start processing after r_j. Also, note that the field β can take more than one value. For example, R|prec, r_j|Σ w_j C_j is the same as the last problem, but with precedence constraints added.

Most non-trivial scheduling problems are NP-hard, and therefore there is no polynomial time algorithm to solve them unless P = NP. In particular, as we will show later, one of the fundamental problems in scheduling, P2||C_max, can easily be proven NP-hard. In the following section we describe some general techniques to address NP-hard optimization problems and some basic applications to scheduling.

1.2 Approximation algorithms

The introduction of the NP-complete class by Cook [11], Karp [24] and, independently, Levin [31] left big challenges about how these problems could be tackled given their apparent intractability. One option that has been widely studied is the use of algorithms that solve the problem exactly, but have no polynomial upper bound on the running time. This kind of algorithm can be useful on small to medium instances, or on instances with some special structure where the algorithm runs fast enough in practice. Nevertheless, there may be other instances where the algorithm takes exponential time to finish, becoming impractical. The most common of these approaches are Branch & Bound, Branch & Cut and Integer Programming techniques.

For the special case of NP-hard optimization problems, another alternative is to use algorithms that run in polynomial time, but may not solve the problem to optimality. Among this kind of algorithms, a particularly interesting class is that of approximation algorithms, i.e., algorithms whose solution is guaranteed to be, in some sense, close to the optimal solution. More formally, let us consider a minimization problem P with cost function c. For α ≥ 1, we say that a solution S to P is an α-approximation if its cost c(S) is within a factor α of the cost OPT of an optimal solution, i.e., if

c(S) ≤ α · OPT.   (1.1)

Now, consider a polynomial-time algorithm A whose output on instance I is A(I). Then, A is an α-approximation algorithm if for any instance I, A(I) is an α-approximation. The number α is called the approximation factor of algorithm A, and if α does not depend on the input we say that A is a constant factor approximation algorithm. Analogously, if P is a maximization problem with objective function c, a solution S is an α-approximation, for α ≤ 1, if c(S) ≥ α · OPT. As before, for α ≤ 1, an algorithm A is an α-approximation algorithm if A(I) is an α-approximation for any instance I. In the remainder of this document we only study minimization problems, and therefore we will not use this definition.

One of the first approximation algorithms for an NP-hard optimization problem was presented by R.L. Graham [19] in 1966, even before the notion of NP-completeness was

formally introduced. Graham studied the problem of minimizing the makespan on parallel machines, P||C_max. He proposed a greedy algorithm consisting of: (1) order the jobs arbitrarily, (j_1, ..., j_n); (2) for k = 1, ..., n, schedule job j_k on the machine where it would begin processing first. Such a procedure is called a list-scheduling algorithm.

Lemma 1.1 (Graham 1966 [19]). List-scheduling is a (2 − 1/m)-approximation algorithm for P||C_max.

Proof. First notice that if OPT denotes the makespan of the optimal solution, then

OPT ≥ (1/m) Σ_{j∈J} p_j,   (1.2)

since otherwise the total machine time available, m · OPT, would be less than the total processing requirement Σ_{j∈J} p_j. Let l be such that C_{j_l} = C_max, and denote by S_j = C_j − p_j the starting time of a job j ∈ J. Then, noting that at the l-th step of the algorithm all machines were busy at time S_{j_l},

S_{j_l} ≤ (1/m) Σ_{k=1}^{l−1} p_{j_k},

and therefore,

C_max = S_{j_l} + p_{j_l} ≤ (1/m) Σ_{k=1}^{l} p_{j_k} + (1 − 1/m) p_{j_l} ≤ (2 − 1/m) OPT,   (1.3)

where the last inequality follows from (1.2) and the fact that p_{j_l} ≤ OPT, since no schedule can finish before p_j for any j ∈ J.

As we can observe, a crucial step in the previous analysis is to obtain a good lower bound on the optimal solution (for example Equation (1.2) in the last lemma), which is then used to upper bound the cost of the solution given by the algorithm (as in Equation (1.3)). Most techniques to find lower bounds are problem specific, and therefore it is hard to give general rules on how to find them. One of the few exceptions that has proven useful in a wide variety of problems consists in formulating the optimization problem as an integer program and then relaxing its integrality constraints. Clearly, the optimal solution of the relaxed problem is a lower bound on the optimal solution of the original problem. An algorithm that uses this technique is called an LP-based approximation algorithm. To illustrate this idea, consider the Minimum Cost Vertex-Cover problem defined below.
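As an aside, the list-scheduling rule analyzed in Lemma 1.1 is easy to state in code. The following Python sketch is ours, not the thesis's; it assumes identical machines and jobs given as a list of processing times in an arbitrary order.

import heapq

def list_schedule(processing_times, m):
    """Assign each job, in the given order, to the machine where it would start first.

    Returns the makespan and, for each job, the (machine, start_time) chosen.
    """
    # Priority queue of (current load, machine index): on identical machines the
    # machine with the smallest load is the one where the next job starts first.
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    assignment = []
    for p in processing_times:
        load, i = heapq.heappop(loads)
        assignment.append((i, load))          # job starts at time `load` on machine i
        heapq.heappush(loads, (load + p, i))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

# Example: 5 jobs on 2 machines.
# print(list_schedule([3, 1, 4, 1, 5], 2))

By Lemma 1.1, whatever order the jobs are fed in, the returned makespan is at most (2 − 1/m) times the optimum.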

Minimum Cost Vertex-Cover:
Input: A graph G = (V, E), and a cost function c : V → Q over the vertices.
Objective: Find a vertex cover, i.e., a set B ⊆ V that intersects every edge in E, minimizing the cost c(B) = Σ_{v∈B} c(v).

It is easy to see that this problem is equivalent to the following integer program:

[LP]  min Σ_{v∈V} y_v c(v)   (1.4)
      y_v + y_w ≥ 1  for all vw ∈ E,   (1.5)
      y_v ∈ {0, 1}  for all v ∈ V.   (1.6)

Therefore, by replacing Equation (1.6) with y_v ≥ 0, we obtain a linear program whose optimal value is a lower bound on the optimum of the Minimum Cost Vertex-Cover problem. To get a constant factor approximation algorithm, we proceed as follows. First solve [LP] (for example, using the ellipsoid method), and call the solution y*_v. To round this fractional solution, first note that Equation (1.5) implies that for every edge vw ∈ E either y*_v ≥ 1/2 or y*_w ≥ 1/2. Then, the set B = {v ∈ V : y*_v ≥ 1/2} is a vertex cover, and furthermore we can bound its cost as

c(B) = Σ_{v : y*_v ≥ 1/2} c(v) ≤ 2 Σ_{v∈V} y*_v c(v) = 2 OPT_LP ≤ 2 OPT,   (1.7)

where OPT denotes the cost of the optimal solution of the vertex cover problem and OPT_LP is the optimal value of [LP]. Thus, the algorithm just described is a 2-approximation algorithm.

Noting that OPT ≤ c(B), Equation (1.7) implies that OPT/OPT_LP ≤ 2 for any instance I of Minimum Cost Vertex-Cover. More generally, any α-approximation algorithm that uses OPT_LP as a lower bound must satisfy

max_I OPT/OPT_LP ≤ α.

The left hand side of this last inequality is called the integrality gap of the linear program.
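For illustration, the LP-rounding 2-approximation just described can be sketched as follows. This is our own sketch, using scipy's general-purpose LP solver in place of the ellipsoid method mentioned above; the function name and input format are ours.

from scipy.optimize import linprog

def vertex_cover_round(vertices, edges, cost):
    """vertices: list of vertex ids; edges: list of pairs (v, w); cost: dict v -> c(v)."""
    index = {v: i for i, v in enumerate(vertices)}
    c = [cost[v] for v in vertices]
    # Constraint (1.5), y_v + y_w >= 1, written as -y_v - y_w <= -1 for linprog.
    A_ub, b_ub = [], []
    for v, w in edges:
        row = [0.0] * len(vertices)
        row[index[v]] = row[index[w]] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None,
                  bounds=[(0, None)] * len(vertices))
    y = res.x
    # Rounding: every edge has an endpoint with fractional value >= 1/2, so B is a
    # cover and c(B) <= 2 OPT_LP <= 2 OPT, as in Equation (1.7). The small tolerance
    # guards against floating-point error.
    return [v for v in vertices if y[index[v]] >= 0.5 - 1e-9]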

Finding a lower bound on the integrality gap is a common technique to determine the best approximation factor that a given linear program can yield. To do this we just need to find an instance with a large ratio OPT/OPT_LP. For example, it is easy to show that the rounding we just described for Minimum Cost Vertex-Cover is best possible. Indeed, taking G as the complete graph on n vertices and the cost function c ≡ 1, we get that OPT = n − 1 and OPT_LP = n/2, and thus OPT/OPT_LP → 2 as n → ∞.

1.3 Polynomial time approximation schemes

For a given NP-hard problem, it is natural to ask what is the best possible approximation algorithm in terms of its approximation factor. Clearly, this depends on the problem. On one side, there are some problems that do not admit any kind of approximation algorithm unless P = NP. For example, the travelling salesman problem with binary costs cannot be approximated within any factor. Indeed, if there existed an α-approximation algorithm for this problem, we could use it to decide whether or not a Hamiltonian circuit of cost zero exists: if the optimal solution is zero, then the approximation algorithm must return zero by (1.1), independently of the value of α; if the optimal solution is greater than zero, then the algorithm also returns a solution with cost greater than zero.

On the other hand, there are some problems that admit arbitrarily good approximation algorithms. To formalize this idea we define a polynomial time approximation scheme (PTAS) as a collection of algorithms {A_ε}_{ε>0} such that each A_ε is a (1 + ε)-approximation algorithm that runs in polynomial time. Let us remark that ε is not considered part of the input, and therefore the running time of the algorithm may depend exponentially on ε.

A common technique for finding a PTAS is to round the instance so that the solution space is significantly decreased, while the value of the optimal solution is only slightly changed. Then, we can use exhaustive search or dynamic programming to find an optimal or near-optimal (i.e., a (1 + ε)-approximate) solution to the rounded problem. To obtain an almost-optimal solution to the original problem, we transform the solution of the rounded instance back without increasing the cost by more than a 1 + O(ε) factor. We briefly illustrate this technique by applying it to P2||C_max, i.e., the problem of minimizing the makespan on two parallel machines. Consider a fixed 0 < ε < 1, and call OPT the makespan of the optimal solution. We will show how to find a schedule of makespan at most (1 + ε)² OPT ≤ (1 + 3ε) OPT, which is enough after redefining ε ← ε/3. Begin by rounding

up the values of each p_j to powers of (1 + ε), i.e., p_j ← (1 + ε)^⌈log_{1+ε} p_j⌉. With this, the processing time of each job is increased by at most a (1 + ε) factor, and so is the optimal makespan. In other words, denoting by OPT_r the optimal makespan of the rounded instance, OPT_r ≤ (1 + ε) OPT. It is therefore enough to find a (1 + ε)-approximation for the rounded instance, since using that assignment of jobs to machines on the original problem can only decrease the makespan of the solution, thus yielding a (1 + ε)²-approximation.

For this, let P = max_j p_j, and define a job to be big if p_j ≥ εP and small otherwise. Thanks to our rounding, the number of different values that the processing time of a big job can take is at most ⌈log_{1+ε}(1/ε)⌉ + 1 = O(1). Also, notice that a schedule of the big jobs is determined by specifying how many jobs of each size are assigned to each of the two machines. Thus, we can enumerate all schedules of big jobs in time (n + 1)^{⌈log_{1+ε}(1/ε)⌉+1} = n^{O(1)} = poly(n), and take the one with the smallest makespan. To schedule the small jobs, a list-scheduling algorithm is enough: process the jobs one at a time, in any order, on the machine that would finish first. Clearly, this yields a (1 + ε)-approximation for the rounded instance. Indeed, if after adding the small jobs the makespan was not increased, then the constructed solution is optimal for the rounded instance. On the other hand, if adding the small jobs increased the makespan, then the difference between the makespans of the two machines is less than εP ≤ ε OPT_r. Therefore, the makespan of the constructed solution is less than (1 + ε) OPT_r ≤ (1 + ε)² OPT. Thus, we can construct a (1 + ε)²-approximation of the original problem in polynomial time.

Although the algorithm we just showed runs in polynomial time for any fixed ε, the running time increases exponentially as ε decreases. Thus, we may ask whether we can do even better, e.g., whether we can find a PTAS whose running time is also polynomial in 1/ε. Such a scheme is called a fully polynomial time approximation scheme (FPTAS). Unfortunately, only few problems admit an FPTAS. Indeed, it can be shown that no strongly NP-hard problem admits an FPTAS, unless P = NP (see for example [42], Ch. 8). In the next section we describe the problem that we will work on in this thesis. Not surprisingly the problem is NP-hard, and thus the tools discussed in this and the previous sections will be helpful in studying it.
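Before turning to the problem definition, here is a compact sketch of the P2||C_max scheme just described. The structure and names are ours; for simplicity it only returns the makespan found for the rounded instance (using the same assignment with the original processing times can only decrease the makespan).

import itertools
import math
from collections import Counter

def ptas_two_machines(processing_times, eps):
    # Round each p_j up to the nearest power of (1 + eps).
    rounded = [(1 + eps) ** math.ceil(math.log(p, 1 + eps)) for p in processing_times]
    P = max(rounded)
    big = [p for p in rounded if p >= eps * P]
    small = [p for p in rounded if p < eps * P]
    sizes = Counter(big)                     # distinct big sizes and their multiplicities
    best = float("inf")
    # Enumerate how many big jobs of each size go to machine 1; machine 2 gets the rest.
    for counts in itertools.product(*[range(k + 1) for k in sizes.values()]):
        load1 = sum(c * s for c, s in zip(counts, sizes))
        load2 = sum(big) - load1
        loads = [load1, load2]
        # List-schedule the small jobs on the currently less loaded machine.
        for p in small:
            i = 0 if loads[0] <= loads[1] else 1
            loads[i] += p
        best = min(best, max(loads))
    return best   # at most (1 + eps)^2 times the optimal makespan of the original instance

For fixed eps the number of distinct big sizes is constant, so the enumeration has polynomially many iterations, matching the running-time bound above.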

1.4 Problem definition

In this writing we study a natural scheduling problem arising in manufacturing environments. Consider a setting where clients place orders, consisting of one or more products, with a given manufacturer. Each product has a machine dependent processing requirement, and has to be processed on any of the m machines available for production. The manufacturer has to find a schedule so as to give the best possible service to its clients.

In its most general form, the problem we consider is as follows. We are given a set of jobs J and a set of orders O ⊆ P(J), such that ∪_{L∈O} L = J. Each job j ∈ J is associated with a value p_ij which represents its processing time on machine i, while each order L has a weight w_L depending on how important it is for the manufacturer. Also, job j is associated with a machine dependent release date r_ij, so it can only start being processed on machine i at or after time r_ij. An order is completed once all its jobs have been processed. Therefore, if C_j denotes the point in time at which job j is completed, C_L = max{C_j : j ∈ L} denotes the completion time of order L. The goal of the manufacturer is to find a nonpreemptive schedule on the m available machines so as to minimize the sum of weighted completion times of the orders, i.e.,

min Σ_{L∈O} w_L C_L.

We refer to this objective function as the sum of weighted completion times of orders. Let us remark that in this general framework we are not restricted to the case where the orders are disjoint, and therefore one job may contribute to the completion time of several orders. To adapt the three-field scheduling notation we denote this problem by R|r_ij|Σ w_L C_L, or R||Σ w_L C_L in case all release dates are zero. When the processing times p_ij do not depend on the machine, we exchange the R for a P. Also, when we impose the additional constraint that orders are disjoint subsets of jobs, we add "part" to the second field β of the notation.

As will be shown later, our problem generalizes several classic machine scheduling problems. Most notably, these include R||C_max, R|r_ij|Σ w_j C_j and 1|prec|Σ w_j C_j. Since all of these are NP-hard in the strong sense (see for example [17]), our more general setting is as well. It is somewhat surprising that the best known approximation algorithms for all these problems have an approximation guarantee of 2 [4, 35, 37]. However, for our more general setting, no constant factor approximation is known.
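As a small illustration of the definitions above, an instance of R|r_ij|Σ w_L C_L and the evaluation of its objective for a given schedule can be represented as follows. The class and field names are ours and are only meant to mirror the notation of this section.

from dataclasses import dataclass

@dataclass
class Instance:
    p: dict          # p[(i, j)]: processing time of job j on machine i
    r: dict          # r[(i, j)]: release date of job j on machine i
    orders: dict     # orders[L]: set of jobs in order L (orders may overlap)
    w: dict          # w[L]: weight of order L

def objective(instance, completion):
    """completion[j] is the completion time C_j of job j in some fixed schedule.

    C_L = max{C_j : j in L}, and the cost is the sum over orders of w_L * C_L.
    """
    total = 0.0
    for L, jobs in instance.orders.items():
        C_L = max(completion[j] for j in jobs)
        total += instance.w[L] * C_L
    return total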

The best known result, due to Leung, Li, Pinedo and Zhang [29], is an algorithm for the special case of related machines (i.e., p_ij = p_j/s_i, where s_i is the speed of machine i) and without release dates on jobs. The approximation factor of the algorithm is 1 + ρ(m − 1)/(ρ + m − 1), where ρ is the ratio of the speed of the fastest machine to that of the slowest machine. In general this guarantee is not constant and can be as bad as m/2.

1.5 Previous work

To illustrate the flexibility of our model, we now review some relevant scheduling models in different machine environments that lie within our framework.

1.5.1 Single machine

We begin by considering the problem of minimizing the sum of weighted completion times of orders on one machine. First we study the simple case where no job belongs to more than one order, 1|part|Σ w_L C_L, and show that it is equivalent to 1||Σ w_j C_j. The latter, as shown by Smith [41], can be solved to optimality by scheduling jobs in non-increasing order of w_j/p_j. In the literature, this greedy algorithm is known as Smith's rule.

To see that these two problems are indeed equivalent, we first show that there is an optimal schedule of 1|part|Σ w_L C_L in which all jobs of an order L ∈ O are processed consecutively. To see this, consider an optimal schedule where this does not hold. Then, there exist jobs j, l ∈ L and a job k belonging to a different order L', such that k starts processing at C_j and l is processed after k. Swapping jobs j and k, i.e., delaying j by p_k units of time and bringing k forward by p_j units of time, does not increase the cost of the solution. Indeed, job k decreases its completion time, and so C_{L'} is not increased. Also, order L does not increase its completion time, since job l ∈ L, which is always processed after j, remains untouched. By iterating this argument, we end up with a schedule where all jobs of an order are processed consecutively. Therefore, each order can be seen as a single larger job with processing time Σ_{j∈L} p_j, and thus our problem is equivalent to 1||Σ w_j C_j.

We now consider the more general problem 1||Σ w_L C_L, where we allow jobs to belong to several orders at the same time. We will prove that this problem is equivalent to single machine scheduling with precedence constraints, denoted by 1|prec|Σ w_j C_j. Recall that in this problem there is a partial order ≺ over the jobs meaning that, if j ≺ k, then job j must finish being processed before job k begins processing. If j ≺ k we say that j is a predecessor of k and k is a successor of j.
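A minimal sketch of Smith's rule mentioned above (our code, not the thesis's). By the merging argument just given, it also solves 1|part|Σ w_L C_L once each order is replaced by a single job of length Σ_{j∈L} p_j and weight w_L.

def smiths_rule(jobs):
    """jobs: list of pairs (w_j, p_j). Returns the optimal order and its cost."""
    # Schedule in non-increasing order of w_j / p_j.
    order = sorted(jobs, key=lambda job: job[0] / job[1], reverse=True)
    t, cost = 0.0, 0.0
    for w, p in order:
        t += p            # completion time of this job
        cost += w * t
    return order, cost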

This classic scheduling problem has attracted much attention since the sixties. Lenstra and Rinnooy Kan [26] showed that it is strongly NP-hard even with unit weights or unit processing times. On the other hand, several 2-approximation algorithms have been proposed: Hall, Schulz, Shmoys & Wein [21] gave an LP-relaxation based 2-approximation, while Chudak & Hochbaum [6] proposed another 2-approximation based on a half-integral programming relaxation. Also, Chekuri & Motwani [4], and Margot, Queyranne & Wang [32], independently developed a very simple combinatorial 2-approximation. Furthermore, the results in [2, 12] imply that 1|prec|Σ w_j C_j is a special case of vertex cover. However, hardness of approximation results were unknown until recently, when Ambühl, Mastrolilli & Svensson [3] proved that there is no PTAS for this problem unless NP-hard problems can be solved in randomized subexponential time.

We now show that 1||Σ w_L C_L and 1|prec|Σ w_j C_j are equivalent, and therefore all results known for the latter also apply to 1||Σ w_L C_L. First, let us see that every α-approximation for 1|prec|Σ w_j C_j implies an α-approximation for 1||Σ w_L C_L. Let I = (J, O) be an instance of 1||Σ w_L C_L, where J is the job set and O the set of orders. We construct an instance I' = (J', ≺) of 1|prec|Σ w_j C_j as follows. For each job j ∈ J there is a job j ∈ J' with p'_j = p_j and w'_j = 0. Also, for every order L ∈ O we consider an extra job j(L) ∈ J' with processing time p'_{j(L)} = 0 and weight w'_{j(L)} = w_L. The only precedence constraints we impose are j ≺ j(L) for all j ∈ L and every L ∈ O. Since p'_{j(L)} = 0, we can restrict ourselves to schedules of I' where each j(L) is processed exactly when the last job of L is completed. Thus, it is clear that the optimal solutions to both problems have the same total cost. Furthermore, given an algorithm for 1|prec|Σ w_j C_j (approximate or not), we can simply apply it to the instance I' above and then impose that j(L) is processed exactly when the last job of L is completed, without increasing the cost. The resulting schedule for I' can then be applied directly to the original instance I of 1||Σ w_L C_L, and its cost remains the same.

To see the other direction, let I = (J, ≺) be an instance of 1|prec|Σ w_j C_j. To construct an instance I' = (J, O) of 1||Σ w_L C_L, consider the same set of jobs J and, for every job j ∈ J, let L(j) ∈ O be the order {k ∈ J : k ≼ j}, i.e., j together with all its predecessors, with weight w_{L(j)} = w_j. With this construction the following lemma holds.

Lemma 1.2. Any schedule of I' can be efficiently transformed into a schedule of the same instance that respects the underlying precedence constraints, without increasing the cost.

Proof. Let k be the last job that violates a precedence constraint, and let j be the last job that is a successor of k but is scheduled before k. We will show that delaying job j right after

job k (see Figure 1.1) does not violate any new precedence constraint and does not increase the total cost. Indeed, if moving j after k violated a precedence constraint, then there would exist a job j', originally processed between j and k, such that j ≺ j'. But then k ≺ j' by transitivity, so j' is a successor of k scheduled before k and later than j, contradicting the choice of j and k. Also, note that every job other than j keeps or decreases its completion time. Furthermore, the completion time of each order containing j is not increased, since each such order also contains job k, and the completion time of j in the new schedule equals the completion time of k in the old schedule.

Figure 1.1: Top: Original schedule. Bottom: Schedule after delaying j.

With this lemma we conclude that the optimal schedule for instance I of 1|prec|Σ w_j C_j has the same cost as that for instance I' of 1||Σ w_L C_L. Moreover, any α-approximate schedule for instance I' of 1||Σ w_L C_L can be transformed into a schedule for instance I of 1|prec|Σ w_j C_j of the same cost. Thus, the following holds.

Theorem 1.3. The approximability thresholds of 1|prec|Σ w_j C_j and 1||Σ w_L C_L coincide.

1.5.2 Parallel machines

In this section we discuss scheduling on parallel machines, where the processing time p_ij = p_j of each job j does not depend on the machine where it is processed. Recall the previously defined problem of minimum makespan scheduling on parallel machines, P||C_max, which consists in finding a schedule of n jobs on m parallel machines so as to minimize the maximum completion time. Notice that if in our setting O contains only one order, then the objective function becomes max_{j∈J} C_j = C_max, and therefore P||C_max is a special case of P||Σ w_L C_L, which in turn is a special case of our more general model R|r_ij|Σ w_L C_L.

The problem P||C_max is a classical machine scheduling problem. It can easily be proven NP-hard, even for 2 machines. Indeed, consider the 2-Partition problem where, for a given multiset of positive integers A = {a_1, ..., a_n}, we must decide whether there exists a partition R, T of A, with R ∪ T = A and R ∩ T = ∅, such that Σ_{j∈R} a_j = Σ_{j∈T} a_j = (1/2) Σ_{j∈A} a_j. For a given multiset A, consider n jobs where job j = 1, ..., n has processing time p_j = a_j. Then, finding the minimum makespan schedule on two parallel machines would let us solve 2-Partition: the minimum makespan equals (1/2) Σ_{j∈J} p_j if and only if there exist sets J_1, J_2 ⊆ J with J_1 ∪ J_2 = J, corresponding to the sets of jobs processed on each machine, such that Σ_{j∈J_1} p_j = Σ_{j∈J_2} p_j = (1/2) Σ_{j∈J} p_j. Thus, since 2-Partition is NP-complete [24, 17], we conclude that P2||C_max is NP-hard. On the other hand, as shown in Lemma 1.1, a list-scheduling approach yields a 2-approximation algorithm. Furthermore, Hochbaum and Shmoys [22] presented a PTAS for the problem (see also [42, Chapter 10]).

On the other hand, when in our model each order contains only one job, the problem becomes equivalent to minimizing the sum of weighted completion times of jobs, Σ_{j∈J} w_j C_j. Thus, in this case, the parallel machine version of our problem with no release dates becomes P||Σ w_j C_j. The study of this problem also goes back to the sixties (see [9] for an early treatment). As in the makespan case, the problem becomes NP-hard already for two machines. On the other hand, a sequence of approximation algorithms had been proposed until Skutella and Woeginger [40] found a PTAS for the problem. Later, Afrati et al. [1] extended this result to the case of non-trivial release dates.

A natural question is thus whether there exists a PTAS for P|part|Σ w_L C_L (notice that, as shown in Section 1.5.1, the slightly more general problem P||Σ w_L C_L is unlikely to have a PTAS). Although we do not know whether the latter holds, Leung, Li, and Pinedo [28] (see also Yang and Posner [44]) presented a 2-approximation algorithm for this problem. We briefly give an alternative analysis of Leung et al.'s algorithm using a classic linear programming framework, first developed by Queyranne [33] for the single machine problem. Let M_j be the midpoint of job j in a given schedule, in other words, M_j = C_j − p_j/2. Eastman et al. [15] implicitly showed that for any set of jobs S ⊆ J and any feasible schedule on m parallel machines, the inequality Σ_{j∈S} p_j M_j ≥ p(S)²/(2m) is satisfied, where p(S) = Σ_{j∈S} p_j. These inequalities are called the parallel inequalities. It follows that if OPT denotes the value of an optimal schedule, then OPT is lower bounded by the linear program [LP] given below.

[LP]  min Σ_{L∈O} w_L C_L
      C_L ≥ M_j + p_j/2  for all L ∈ O and j ∈ L,
      Σ_{j∈S} p_j M_j ≥ p(S)²/(2m)  for all S ⊆ J.

Queyranne [33] showed that [LP] can be solved in polynomial time, since separating the parallel inequalities reduces to submodular function minimization. Let M*_1, ..., M*_n be an optimal solution and assume without loss of generality that M*_1 ≤ M*_2 ≤ ... ≤ M*_n. Clearly, C*_L = max{M*_j + p_j/2 : j ∈ L}, so the optimal LP solution is completely determined by the M* values. Consider the algorithm that first solves [LP] and then schedules the jobs using a list-scheduling algorithm according to the order M*_1 ≤ M*_2 ≤ ... ≤ M*_n. Let C^A_j denote the completion time of job j in the schedule given by the algorithm, so that C^A_L = max{C^A_j : j ∈ L}. It is easy to see that C^A_j equals the time S^A_j at which job j is started by the algorithm, plus p_j. Furthermore, at any point in time before S^A_j all machines were busy processing jobs in {1, ..., j − 1}, thus S^A_j ≤ p({1, ..., j − 1})/m. It follows that

C^A_L ≤ max_{j∈L} { p({1, ..., j − 1})/m + p_j }.

Also, M*_j · p({1, ..., j}) ≥ Σ_{l∈{1,...,j}} p_l M*_l ≥ p({1, ..., j})²/(2m), and hence M*_j ≥ p({1, ..., j})/(2m). Then,

C*_L ≥ max_{j∈L} { p({1, ..., j})/(2m) + p_j/2 }.

We conclude that C^A_L ≤ 2 C*_L, which implies that the algorithm returns a solution within a factor of 2 of OPT. Furthermore, note that this approach works not only for P|part|Σ w_L C_L but also for P||Σ w_L C_L.
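The rounding step of this 2-approximation is easy to sketch, assuming the optimal LP midpoints M*_j are already available (solving [LP] itself requires submodular function minimization and is not reproduced here). The code below is ours and only illustrates the list-scheduling-by-midpoints order.

import heapq

def schedule_by_midpoints(m, p, M, orders, w):
    """m: number of machines; p[j], M[j]: processing time and LP midpoint of job j;
    orders[L]: jobs of order L; w[L]: weight of order L. Returns the objective value."""
    completion = {}
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    for j in sorted(p, key=lambda j: M[j]):      # non-decreasing LP midpoints
        load, i = heapq.heappop(loads)
        completion[j] = load + p[j]              # C_j^A = S_j^A + p_j
        heapq.heappush(loads, (completion[j], i))
    # C_L^A = max over the jobs of the order; the analysis above gives a factor of 2.
    return sum(w[L] * max(completion[j] for j in jobs) for L, jobs in orders.items())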

1.5.3 Unrelated machines

In the unrelated machine setting, our problem is also a common generalization of several classic machine scheduling problems. As before, if there is a single order and r_ij = 0, our problem becomes minimum makespan scheduling (R||C_max), in which the goal is to find a schedule of the n jobs on m unrelated machines so as to minimize the makespan. In a seminal work, Lenstra, Shmoys and Tardos [27] gave a 2-approximation algorithm for R||C_max, and showed that it is NP-hard to approximate it within a constant better than 3/2. Thus, the same hardness result holds for R||Σ w_L C_L.

On the other hand, if orders are singletons and r_ij = 0, our problem becomes minimum sum of weighted completion times scheduling (R||Σ w_j C_j). In this setting each job j ∈ J is associated with a processing time p_ij and a weight w_j, and the goal is to find a schedule minimizing the sum of weighted completion times of the jobs. As in the makespan case, this problem was shown to be APX-hard [23], and therefore there is no PTAS unless P = NP. On the positive side, Schulz and Skutella [35] used a linear programming relaxation to design an approximation algorithm with performance guarantee 3/2 + ε in the case without release dates, and 2 + ε in the more general case. Furthermore, Skutella [38] refined this result by means of a convex quadratic programming relaxation, obtaining a 3/2-approximation algorithm in the case of trivial release dates, and a 2-approximation algorithm in the more general case.

Finally, it is worth mentioning that our problem also generalizes assembly scheduling problems that have received attention recently, which we denote by A||Σ w_j C_j (see e.g. [7, 8, 30]). As explained before, in this setting we are given a set M of m machines and a set of jobs J with associated weights w_j. Each job has m parts, one to be processed by each machine, so p_ij denotes the processing time of the i-th part of job j, which must be processed on machine i. The goal is to minimize the sum of weighted completion times Σ w_j C_j, where the completion time C_j of job j is the time by which all of its parts have been processed. Thus, in our setting, a job with its m parts can be modelled as an order that contains m jobs. To ensure that each job of such an order can only be processed on its corresponding machine, we give it infinite (or sufficiently large) processing time on all the other machines.

Besides proving that the assembly line problem is NP-hard, Chen and Hall [7] and Leung, Li, and Pinedo [30] independently gave a simple 2-approximation algorithm based on the following linear programming relaxation of the problem:

[LP]  min Σ_{j∈N} w_j C_j
      Σ_{j∈S} p_ij C_j ≥ ( p_i(S)² + p_i²(S) ) / 2  for all i = 1, ..., m and S ⊆ N,

where p_i(S) = Σ_{j∈S} p_ij and p_i²(S) = Σ_{j∈S} p_ij².
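The modeling step described above, which turns an assembly instance into an instance of our order-scheduling problem, can be sketched as follows. The helper name and the choice of the large constant are ours; any value exceeding every makespan of interest works in place of true infinity.

def assembly_to_orders(parts, w):
    """parts[j]: list of the m part lengths (p_1j, ..., p_mj) of assembly job j; w[j]: weight."""
    BIG = 1 + sum(sum(row) for row in parts.values())   # larger than any reasonable makespan
    m = len(next(iter(parts.values())))
    p, orders, weights = {}, {}, {}
    for j, row in parts.items():
        orders[j] = set()
        weights[j] = w[j]
        for i in range(m):
            part_job = (j, i)                            # the i-th part of assembly job j
            orders[j].add(part_job)
            for machine in range(m):
                # The part may only run on machine i; elsewhere it gets a huge time.
                p[(machine, part_job)] = row[i] if machine == i else BIG
    return p, orders, weights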

Similarly to the 2-approximation described for P||Σ w_L C_L in Section 1.5.2, the algorithm consists in processing the jobs according to the order given by an optimal LP solution. Clearly, this is a 2-approximation. Indeed, let C^LP_1 ≤ ... ≤ C^LP_n be the optimal LP solution (after reordering if needed) and let S = {1, ..., k}. Call C^H and C* the heuristic and the optimal completion time vectors, respectively. Clearly, p_i(S) C^LP_k ≥ Σ_{j∈S} p_ij C^LP_j ≥ p_i(S)²/2, hence 2 C^LP_k ≥ p_i(S) for all i ∈ M. It follows that C^H_k = max_{1≤i≤m} p_i(S) ≤ 2 C^LP_k, and then Σ w_j C^H_j ≤ 2 Σ w_j C^LP_j ≤ 2 Σ w_j C*_j, and thus the solution constructed is a 2-approximation.

1.6 Contributions of this work

In this thesis we develop approximation algorithms for R|r_ij|Σ w_L C_L and some of its particular cases. In Chapter 2 we begin by showing some techniques used in the subsequent chapters. First, we review the result of Lawler and Labetoulle [25] showing that R|pmtn|C_max, i.e., the problem of minimizing the makespan of preemptive jobs on unrelated machines, is polynomially solvable. Then, we propose a way of rounding any solution of R|pmtn|C_max to a solution of R||C_max such that the cost of the solution is increased by at most a factor of 4. For this we use the classic rounding technique of Shmoys and Tardos [37] for the generalized assignment problem. We conclude the chapter by showing that this rounding is best possible. To this end we construct a sequence of instances for which the ratio between the optimal nonpreemptive makespan and the optimal preemptive makespan is arbitrarily close to 4.

In Chapter 3 we generalize the techniques previously developed. We begin by giving a (4 + ε)-approximation for R|pmtn, r_ij|Σ w_L C_L, i.e., for each fixed ε > 0 we show a (4 + ε)-approximation algorithm. The algorithm is based on a time-indexed linear programming relaxation of the problem, building on that of Dyer and Wolsey [13]. The rounding uses Lawler and Labetoulle's [25] result, described in the previous chapter. We also show a 27/2-approximation algorithm for R|r_ij|Σ w_L C_L. This is the first constant factor approximation algorithm for this problem, and thus improves on the non-constant factor approximation algorithm for Q|part|Σ w_L C_L proposed by Leung et al. [29]. Our approach is based on an interval-indexed linear program proposed by Hall et al. [21], and uses a rounding very similar to the one shown in Chapter 2.

In Chapter 4 we design a PTAS for P||Σ w_L C_L for the cases where the number of orders is constant, the number of jobs inside each order is constant, or the number of machines is constant. Our algorithm works in all three cases and thus generalizes the known PTASs

in [1, 22, 40]. Our approach follows closely the PTAS of Afrati et al. [1] for P|r_j|Σ w_j C_j. However, the main extra difficulty compared with the setting of Afrati et al. is that we might have orders that are processed over a long period of time, and whose cost is only realized when they are completed. To overcome this issue, and thus be able to apply the dynamic programming ideas of [1], we simplify the instance and prove that there is a near-optimal solution in which every order is fully processed within a restricted time span. This requires some careful enumeration plus the introduction of artificial release dates.

Finally, in Chapter 5 we summarize all the results and propose some possible directions for future research.

Chapter 2

On the power of preemption on R||C_max

In this chapter we study the problem of minimizing the makespan on unrelated machines, R||C_max, which, as explained before, is a special case of our more general problem of minimizing the sum of weighted completion times of orders on unrelated machines, R||Σ w_L C_L. The techniques in this chapter will give insight on how to obtain approximation algorithms for the more general problems R|r_ij|Σ w_L C_L and R|r_ij, pmtn|Σ w_L C_L.

In Section 2.1 we begin by reviewing the technique developed by Lawler and Labetoulle [25] to solve R|pmtn|C_max, which shows that this problem is equivalent to solving a linear program. In Section 2.2 we give a quick overview of Lenstra, Shmoys and Tardos's [27] 2-approximation algorithm for R||C_max, and discuss why it is difficult to apply those ideas to our more general setting. Then, we show how to modify this result, obtaining one that is easier to generalize. By doing this we obtain a rounding that turns any preemptive schedule into a nonpreemptive one such that the makespan is increased by at most a factor of 4. On the other hand, in Section 2.3 we prove that this factor is best possible, i.e., there is no rounding that converts a preemptive schedule into a nonpreemptive one with a guarantee better than 4. We achieve this by iteratively constructing a family of almost tight instances.

2.1 R|pmtn|C_max is polynomially solvable

We now present the algorithm developed by Lawler and Labetoulle, which computes an optimal solution of R|pmtn|C_max. It is based on a linear programming formulation that uses

assignment variables x_ij, indicating the fraction of job j ∈ J that is processed on machine i ∈ M. With this, it is enough to give a way of converting any feasible solution of this linear program into a preemptive schedule of equal makespan, i.e., we need a way of distributing the fractions of each job inside each machine such that no two fractions of the same job are processed in parallel. More precisely, let us consider the following linear program:

[LL]  min C
      Σ_{i∈M} x_ij = 1  for all j ∈ J,   (2.1)
      Σ_{j∈J} p_ij x_ij ≤ C  for all i ∈ M,   (2.2)
      Σ_{i∈M} p_ij x_ij ≤ C  for all j ∈ J,   (2.3)
      x_ij ≥ 0  for all i, j.   (2.4)

It is clear that each preemptive schedule induces a feasible solution of [LL]. Indeed, given any preemptive solution, denote by C its makespan and by x_ij the fraction of job j that is processed on machine i. In other words, if y_ij denotes the amount of time that the schedule uses to process job j on machine i, then x_ij = y_ij/p_ij. With this definition, the solution satisfies Equation (2.1), since every job is completely scheduled. Furthermore, Equation (2.2) is also satisfied, since no machine i ∈ M can finish processing before Σ_{j∈J} p_ij x_ij. Similarly, Equation (2.3) holds since no job j can be processed on two machines at the same time, and thus the left hand side of this equation is a lower bound on the completion time of job j.

Let x_ij and C be any feasible solution of [LL]. Consider the following algorithm, which creates a preemptive schedule of makespan C.

Algorithm: Nonparallel Assignment

1. Define the values z_ij := p_ij x_ij / C, for all i ∈ M and j ∈ J. Note that the vector (z_ij)_ij belongs to the matching polyhedron P of all y_ij ∈ R^{nm} satisfying the following inequalities:

Σ_{i∈M} y_ij ≤ 1  for all j ∈ J,   (2.5)
Σ_{j∈J} y_ij ≤ 1  for all i ∈ M,   (2.6)
y_ij ≥ 0  for all i, j.   (2.7)

Also, note that P is integral, since the matrix that defines it is totally unimodular (see for example [34], Ch. 18).

2. Note that by Carathéodory's theorem [14, 16] it is possible to decompose the vector z as a convex combination of a polynomial number of vertices of P. More precisely, we can find vectors Z^k ∈ {0, 1}^{nm} ∩ P and scalars λ_k ≥ 0 for k = 1, ..., nm + 1, such that z_ij = Σ_{k=1}^{nm+1} λ_k Z^k_ij and Σ_{k=1}^{nm+1} λ_k = 1.

3. Build the schedule as follows. For each i ∈ M, j ∈ J and k = 1, ..., nm + 1 such that Z^k_ij = 1, schedule job j on machine i between time C Σ_{l=1}^{k−1} λ_l and C Σ_{l=1}^{k} λ_l.

We first show the correctness of the algorithm, and later show that it can be executed in polynomial time.

Lemma 2.1. Let x_ij and C satisfy equations (2.2), (2.3) and (2.4). Algorithm: Nonparallel Assignment constructs a preemptive schedule of makespan at most C in which the fraction of job j ∈ J processed on machine i ∈ M is x_ij.

Proof. First, note that for each i ∈ M and j ∈ J, Algorithm: Nonparallel Assignment processes job j during p_ij x_ij units of time on machine i. Indeed, for each k = 1, ..., nm + 1, i ∈ M and j ∈ J such that Z^k_ij = 1, the amount of time job j is processed on machine i in the k-th time slot equals C λ_k. Then, since Z^k is binary, the total amount of time job j is processed on machine i equals Σ_{k=1}^{nm+1} C λ_k Z^k_ij = C z_ij = p_ij x_ij. Thus, the fraction of job j that is processed on machine i is x_ij.

Furthermore, no job is processed on two machines at the same time. Indeed, if by contradiction we assume that some job is processed in parallel, then there exist

k ∈ {1, ..., nm + 1}, j ∈ J and i, d ∈ M, i ≠ d, such that Z^k_ij = Z^k_dj = 1. This implies that Σ_{i∈M} Z^k_ij ≥ 2, contradicting that Z^k belongs to P. Finally, the makespan of the schedule is at most C, since the algorithm only assigns jobs between time 0 and C Σ_{k=1}^{nm+1} λ_k = C.

With this, the following holds.

Corollary 2.2. To each feasible solution x_ij, C of [LL] there corresponds a preemptive schedule of makespan C, and vice-versa.

Thus, to solve R|pmtn|C_max it is enough to compute an optimal solution of [LL] and then turn it into a preemptive schedule using Algorithm: Nonparallel Assignment. Finally, we show that this algorithm runs in polynomial time.

Lemma 2.3. Algorithm: Nonparallel Assignment runs in polynomial time.

Proof. We just need to show that step (2) can be done in polynomial time. For this, consider any polytope P = {x ∈ R^N : Ax ≤ b} for some matrix A ∈ R^{K×N} and vector b ∈ R^K. For any z ∈ P, we need to show how to decompose z as a convex combination of vertices of P. Clearly, it is enough to decompose z = λZ + (1 − λ)z', where λ ∈ [0, 1], Z is a vertex of P, and z' belongs to some proper face P' of P. Indeed, if this can be done, we can then iterate the argument over z' ∈ P'. This procedure finishes after at most N steps, since the dimension of the polytope decreases with each iteration.

To this end, consider z ∈ P. Find any vertex Z of P, which can be done, for example, by minimizing a suitable linear function over the polytope P. We define z' by projecting z, along the ray from Z through z, onto the boundary of P. For this, let γ̂ = max{γ ≥ 1 : Z + γ(z − Z) ∈ P}. In other words, if A_i denotes the i-th row of A, then

γ̂ = min { (b_i − A_i Z) / (A_i (z − Z)) : i = 1, ..., K, A_i (z − Z) > 0 }.

With this, define z' := Z + γ̂(z − Z) ∈ P, implying that z = z'/γ̂ + Z (γ̂ − 1)/γ̂. Thus, defining λ := 1 − 1/γ̂ we get that z = λZ + (1 − λ)z'. Finally, note that z' belongs to a proper face of P. For this, it is enough to show that there is i ∈ {1, ..., K} such that A_i z' = b_i and A_i Z < b_i, which is clear from the choice of γ̂. Then, the face P' containing z' is again a polytope of the form P' := {x ∈ R^N : A'x ≤ b'}
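To make the formulation [LL] of Section 2.1 concrete, the following sketch (ours, not the thesis's) builds and solves it with scipy's generic LP solver; the convex decomposition of step (2) of Algorithm: Nonparallel Assignment is not reproduced here.

import numpy as np
from scipy.optimize import linprog

def solve_LL(p):
    """p: m x n array of processing times p_ij. Returns (C*, x*) with x* of shape (m, n)."""
    m, n = p.shape
    num = m * n + 1                      # variables x_ij (row-major), followed by C
    idx = lambda i, j: i * n + j
    c = np.zeros(num)
    c[-1] = 1.0                          # minimize C
    # Equalities (2.1): sum_i x_ij = 1 for every job j.
    A_eq = np.zeros((n, num))
    b_eq = np.ones(n)
    for j in range(n):
        for i in range(m):
            A_eq[j, idx(i, j)] = 1.0
    # Inequalities (2.2) and (2.3): machine loads and job loads are at most C.
    A_ub = np.zeros((m + n, num))
    b_ub = np.zeros(m + n)
    for i in range(m):
        for j in range(n):
            A_ub[i, idx(i, j)] = p[i, j]
        A_ub[i, -1] = -1.0
    for j in range(n):
        for i in range(m):
            A_ub[m + j, idx(i, j)] = p[i, j]
        A_ub[m + j, -1] = -1.0
    # Default bounds (0, None) give (2.4) and C >= 0.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    x = res.x[:-1].reshape(m, n)
    return res.x[-1], x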


More information

Conditional Hardness of Precedence Constrained Scheduling on Identical Machines

Conditional Hardness of Precedence Constrained Scheduling on Identical Machines Conditional Hardness of Precedence Constrained Scheduling on Identical Machines Ola Svensson (osven@kth.se) KTH - Royal Institute of Technology Stockholm, Sweden November 5, 2009 Abstract Already in 966,

More information

1 Ordinary Load Balancing

1 Ordinary Load Balancing Comp 260: Advanced Algorithms Prof. Lenore Cowen Tufts University, Spring 208 Scribe: Emily Davis Lecture 8: Scheduling Ordinary Load Balancing Suppose we have a set of jobs each with their own finite

More information

Complexity analysis of the discrete sequential search problem with group activities

Complexity analysis of the discrete sequential search problem with group activities Complexity analysis of the discrete sequential search problem with group activities Coolen K, Talla Nobibon F, Leus R. KBI_1313 Complexity analysis of the discrete sequential search problem with group

More information

An approximation algorithm for the minimum latency set cover problem

An approximation algorithm for the minimum latency set cover problem An approximation algorithm for the minimum latency set cover problem Refael Hassin 1 and Asaf Levin 2 1 Department of Statistics and Operations Research, Tel-Aviv University, Tel-Aviv, Israel. hassin@post.tau.ac.il

More information

Topics in Theoretical Computer Science April 08, Lecture 8

Topics in Theoretical Computer Science April 08, Lecture 8 Topics in Theoretical Computer Science April 08, 204 Lecture 8 Lecturer: Ola Svensson Scribes: David Leydier and Samuel Grütter Introduction In this lecture we will introduce Linear Programming. It was

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 P and NP P: The family of problems that can be solved quickly in polynomial time.

More information

Recoverable Robustness in Scheduling Problems

Recoverable Robustness in Scheduling Problems Master Thesis Computing Science Recoverable Robustness in Scheduling Problems Author: J.M.J. Stoef (3470997) J.M.J.Stoef@uu.nl Supervisors: dr. J.A. Hoogeveen J.A.Hoogeveen@uu.nl dr. ir. J.M. van den Akker

More information

Minimizing Mean Flowtime and Makespan on Master-Slave Systems

Minimizing Mean Flowtime and Makespan on Master-Slave Systems Minimizing Mean Flowtime and Makespan on Master-Slave Systems Joseph Y-T. Leung,1 and Hairong Zhao 2 Department of Computer Science New Jersey Institute of Technology Newark, NJ 07102, USA Abstract The

More information

Scheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17

Scheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17 Scheduling on Unrelated Parallel Machines Approximation Algorithms, V. V. Vazirani Book Chapter 17 Nicolas Karakatsanis, 2008 Description of the problem Problem 17.1 (Scheduling on unrelated parallel machines)

More information

Vertex Cover in Graphs with Locally Few Colors

Vertex Cover in Graphs with Locally Few Colors Vertex Cover in Graphs with Locally Few Colors Fabian Kuhn 1 and Monaldo Mastrolilli 2 1 Faculty of Informatics, University of Lugano (USI), 6904 Lugano, Switzerland fabian.kuhn@usi.ch 2 Dalle Molle Institute

More information

APTAS for Bin Packing

APTAS for Bin Packing APTAS for Bin Packing Bin Packing has an asymptotic PTAS (APTAS) [de la Vega and Leuker, 1980] For every fixed ε > 0 algorithm outputs a solution of size (1+ε)OPT + 1 in time polynomial in n APTAS for

More information

Partition is reducible to P2 C max. c. P2 Pj = 1, prec Cmax is solvable in polynomial time. P Pj = 1, prec Cmax is NP-hard

Partition is reducible to P2 C max. c. P2 Pj = 1, prec Cmax is solvable in polynomial time. P Pj = 1, prec Cmax is NP-hard I. Minimizing Cmax (Nonpreemptive) a. P2 C max is NP-hard. Partition is reducible to P2 C max b. P Pj = 1, intree Cmax P Pj = 1, outtree Cmax are both solvable in polynomial time. c. P2 Pj = 1, prec Cmax

More information

Approximation Schemes for Job Shop Scheduling Problems with Controllable Processing Times

Approximation Schemes for Job Shop Scheduling Problems with Controllable Processing Times Approximation Schemes for Job Shop Scheduling Problems with Controllable Processing Times Klaus Jansen 1, Monaldo Mastrolilli 2, and Roberto Solis-Oba 3 1 Universität zu Kiel, Germany, kj@informatik.uni-kiel.de

More information

MINIMIZING SCHEDULE LENGTH OR MAKESPAN CRITERIA FOR PARALLEL PROCESSOR SCHEDULING

MINIMIZING SCHEDULE LENGTH OR MAKESPAN CRITERIA FOR PARALLEL PROCESSOR SCHEDULING MINIMIZING SCHEDULE LENGTH OR MAKESPAN CRITERIA FOR PARALLEL PROCESSOR SCHEDULING By Ali Derbala University of Blida, Faculty of science Mathematics Department BP 270, Route de Soumaa, Blida, Algeria.

More information

Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times

Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times Approximation Schemes for Parallel Machine Scheduling Problems with Controllable Processing Times Klaus Jansen 1 and Monaldo Mastrolilli 2 1 Institut für Informatik und Praktische Mathematik, Universität

More information

Single Machine Scheduling: Comparison of MIP Formulations and Heuristics for. Interfering Job Sets. Ketan Khowala

Single Machine Scheduling: Comparison of MIP Formulations and Heuristics for. Interfering Job Sets. Ketan Khowala Single Machine Scheduling: Comparison of MIP Formulations and Heuristics for Interfering Job Sets by Ketan Khowala A Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor

More information

Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations

Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations 58th Annual IEEE Symposium on Foundations of Computer Science Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations Shi Li Department of Computer Science

More information

arxiv: v1 [cs.ds] 17 Feb 2016

arxiv: v1 [cs.ds] 17 Feb 2016 Scheduling MapReduce Jobs under Multi-Round Precedences D Fotakis 1, I Milis 2, O Papadigenopoulos 1, V Vassalos 2, and G Zois 2 arxiv:160205263v1 [csds] 17 Feb 2016 1 School of Electrical and Computer

More information

Approximation Schemes for Scheduling on Parallel Machines

Approximation Schemes for Scheduling on Parallel Machines Approximation Schemes for Scheduling on Parallel Machines Noga Alon Yossi Azar Gerhard J. Woeginger Tal Yadid Abstract We discuss scheduling problems with m identical machines and n jobs where each job

More information

P C max. NP-complete from partition. Example j p j What is the makespan on 2 machines? 3 machines? 4 machines?

P C max. NP-complete from partition. Example j p j What is the makespan on 2 machines? 3 machines? 4 machines? Multiple Machines Model Multiple Available resources people time slots queues networks of computers Now concerned with both allocation to a machine and ordering on that machine. P C max NP-complete from

More information

Online algorithms for parallel job scheduling and strip packing Hurink, J.L.; Paulus, J.J.

Online algorithms for parallel job scheduling and strip packing Hurink, J.L.; Paulus, J.J. Online algorithms for parallel job scheduling and strip packing Hurink, J.L.; Paulus, J.J. Published: 01/01/007 Document Version Publisher s PDF, also known as Version of Record (includes final page, issue

More information

A General Framework for Designing Approximation Schemes for Combinatorial Optimization Problems with Many Objectives Combined into One

A General Framework for Designing Approximation Schemes for Combinatorial Optimization Problems with Many Objectives Combined into One OPERATIONS RESEARCH Vol. 61, No. 2, March April 2013, pp. 386 397 ISSN 0030-364X (print) ISSN 1526-5463 (online) http://dx.doi.org/10.1287/opre.1120.1093 2013 INFORMS A General Framework for Designing

More information

A lower bound for scheduling of unit jobs with immediate decision on parallel machines

A lower bound for scheduling of unit jobs with immediate decision on parallel machines A lower bound for scheduling of unit jobs with immediate decision on parallel machines Tomáš Ebenlendr Jiří Sgall Abstract Consider scheduling of unit jobs with release times and deadlines on m identical

More information

5 Integer Linear Programming (ILP) E. Amaldi Foundations of Operations Research Politecnico di Milano 1

5 Integer Linear Programming (ILP) E. Amaldi Foundations of Operations Research Politecnico di Milano 1 5 Integer Linear Programming (ILP) E. Amaldi Foundations of Operations Research Politecnico di Milano 1 Definition: An Integer Linear Programming problem is an optimization problem of the form (ILP) min

More information

A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio

A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio Elisabeth Günther Olaf Maurer Nicole Megow Andreas Wiese Abstract We propose a new approach to competitive analysis in online

More information

Marjan van den Akker. Han Hoogeveen Jules van Kempen

Marjan van den Akker. Han Hoogeveen Jules van Kempen Parallel machine scheduling through column generation: minimax objective functions, release dates, deadlines, and/or generalized precedence constraints Marjan van den Akker Han Hoogeveen Jules van Kempen

More information

A robust APTAS for the classical bin packing problem

A robust APTAS for the classical bin packing problem A robust APTAS for the classical bin packing problem Leah Epstein Asaf Levin Abstract Bin packing is a well studied problem which has many applications. In this paper we design a robust APTAS for the problem.

More information

Online Scheduling with Bounded Migration

Online Scheduling with Bounded Migration Online Scheduling with Bounded Migration Peter Sanders Universität Karlsruhe (TH), Fakultät für Informatik, Postfach 6980, 76128 Karlsruhe, Germany email: sanders@ira.uka.de http://algo2.iti.uni-karlsruhe.de/sanders.php

More information

All-norm Approximation Algorithms

All-norm Approximation Algorithms All-norm Approximation Algorithms Yossi Azar Leah Epstein Yossi Richter Gerhard J. Woeginger Abstract A major drawback in optimization problems and in particular in scheduling problems is that for every

More information

An on-line approach to hybrid flow shop scheduling with jobs arriving over time

An on-line approach to hybrid flow shop scheduling with jobs arriving over time An on-line approach to hybrid flow shop scheduling with jobs arriving over time Verena Gondek, University of Duisburg-Essen Abstract During the manufacturing process in a steel mill, the chemical composition

More information

Single Machine Scheduling with a Non-renewable Financial Resource

Single Machine Scheduling with a Non-renewable Financial Resource Single Machine Scheduling with a Non-renewable Financial Resource Evgeny R. Gafarov a, Alexander A. Lazarev b Institute of Control Sciences of the Russian Academy of Sciences, Profsoyuznaya st. 65, 117997

More information

Complexity of preemptive minsum scheduling on unrelated parallel machines Sitters, R.A.

Complexity of preemptive minsum scheduling on unrelated parallel machines Sitters, R.A. Complexity of preemptive minsum scheduling on unrelated parallel machines Sitters, R.A. Published: 01/01/2003 Document Version Publisher s PDF, also known as Version of Record (includes final page, issue

More information

Resource Constrained Project Scheduling Linear and Integer Programming (1)

Resource Constrained Project Scheduling Linear and Integer Programming (1) DM204, 2010 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 Resource Constrained Project Linear and Integer Programming (1) Marco Chiarandini Department of Mathematics & Computer Science University of Southern

More information

Deterministic Models: Preliminaries

Deterministic Models: Preliminaries Chapter 2 Deterministic Models: Preliminaries 2.1 Framework and Notation......................... 13 2.2 Examples... 20 2.3 Classes of Schedules... 21 2.4 Complexity Hierarchy... 25 Over the last fifty

More information

Bin packing and scheduling

Bin packing and scheduling Sanders/van Stee: Approximations- und Online-Algorithmen 1 Bin packing and scheduling Overview Bin packing: problem definition Simple 2-approximation (Next Fit) Better than 3/2 is not possible Asymptotic

More information

More Approximation Algorithms

More Approximation Algorithms CS 473: Algorithms, Spring 2018 More Approximation Algorithms Lecture 25 April 26, 2018 Most slides are courtesy Prof. Chekuri Ruta (UIUC) CS473 1 Spring 2018 1 / 28 Formal definition of approximation

More information

Online Scheduling of Parallel Jobs on Two Machines is 2-Competitive

Online Scheduling of Parallel Jobs on Two Machines is 2-Competitive Online Scheduling of Parallel Jobs on Two Machines is 2-Competitive J.L. Hurink and J.J. Paulus University of Twente, P.O. box 217, 7500AE Enschede, The Netherlands Abstract We consider online scheduling

More information

This means that we can assume each list ) is

This means that we can assume each list ) is This means that we can assume each list ) is of the form ),, ( )with < and Since the sizes of the items are integers, there are at most +1pairs in each list Furthermore, if we let = be the maximum possible

More information

Minimizing Average Completion Time in the. Presence of Release Dates. September 4, Abstract

Minimizing Average Completion Time in the. Presence of Release Dates. September 4, Abstract Minimizing Average Completion Time in the Presence of Release Dates Cynthia Phillips Cliord Stein y Joel Wein z September 4, 1996 Abstract A natural and basic problem in scheduling theory is to provide

More information

arxiv: v2 [cs.dm] 2 Mar 2017

arxiv: v2 [cs.dm] 2 Mar 2017 Shared multi-processor scheduling arxiv:607.060v [cs.dm] Mar 07 Dariusz Dereniowski Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, Gdańsk, Poland Abstract

More information

Approximation schemes for parallel machine scheduling with non-renewable resources

Approximation schemes for parallel machine scheduling with non-renewable resources Approximation schemes for parallel machine scheduling with non-renewable resources Péter Györgyi a,b, Tamás Kis b, a Department of Operations Research, Loránd Eötvös University, H1117 Budapest, Pázmány

More information

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 8 Dr. Ted Ralphs ISE 418 Lecture 8 1 Reading for This Lecture Wolsey Chapter 2 Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Duality for Mixed-Integer

More information

CS 6783 (Applied Algorithms) Lecture 3

CS 6783 (Applied Algorithms) Lecture 3 CS 6783 (Applied Algorithms) Lecture 3 Antonina Kolokolova January 14, 2013 1 Representative problems: brief overview of the course In this lecture we will look at several problems which, although look

More information

Travelling Salesman Problem

Travelling Salesman Problem Travelling Salesman Problem Fabio Furini November 10th, 2014 Travelling Salesman Problem 1 Outline 1 Traveling Salesman Problem Separation Travelling Salesman Problem 2 (Asymmetric) Traveling Salesman

More information

Lecture 11 October 7, 2013

Lecture 11 October 7, 2013 CS 4: Advanced Algorithms Fall 03 Prof. Jelani Nelson Lecture October 7, 03 Scribe: David Ding Overview In the last lecture we talked about set cover: Sets S,..., S m {,..., n}. S has cost c S. Goal: Cover

More information

Select and Permute: An Improved Online Framework for Scheduling to Minimize Weighted Completion Time

Select and Permute: An Improved Online Framework for Scheduling to Minimize Weighted Completion Time Select and Permute: An Improved Online Framework for Scheduling to Minimize Weighted Completion Time Samir Khuller 1, Jingling Li 1, Pascal Sturmfels 2, Kevin Sun 3, and Prayaag Venkat 1 1 University of

More information

On bilevel machine scheduling problems

On bilevel machine scheduling problems Noname manuscript No. (will be inserted by the editor) On bilevel machine scheduling problems Tamás Kis András Kovács Abstract Bilevel scheduling problems constitute a hardly studied area of scheduling

More information

The Multiple Traveling Salesman Problem with Time Windows: Bounds for the Minimum Number of Vehicles

The Multiple Traveling Salesman Problem with Time Windows: Bounds for the Minimum Number of Vehicles The Multiple Traveling Salesman Problem with Time Windows: Bounds for the Minimum Number of Vehicles Snežana Mitrović-Minić Ramesh Krishnamurti School of Computing Science, Simon Fraser University, Burnaby,

More information

Approximation Algorithms for scheduling

Approximation Algorithms for scheduling Approximation Algorithms for scheduling Ahmed Abu Safia I.D.:119936343, McGill University, 2004 (COMP 760) Approximation Algorithms for scheduling Leslie A. Hall The first Chapter of the book entitled

More information

Approximation Basics

Approximation Basics Approximation Basics, Concepts, and Examples Xiaofeng Gao Department of Computer Science and Engineering Shanghai Jiao Tong University, P.R.China Fall 2012 Special thanks is given to Dr. Guoqiang Li for

More information

The 2-valued case of makespan minimization with assignment constraints

The 2-valued case of makespan minimization with assignment constraints The 2-valued case of maespan minimization with assignment constraints Stavros G. Kolliopoulos Yannis Moysoglou Abstract We consider the following special case of minimizing maespan. A set of jobs J and

More information

Using column generation to solve parallel machine scheduling problems with minmax objective functions

Using column generation to solve parallel machine scheduling problems with minmax objective functions Using column generation to solve parallel machine scheduling problems with minmax objective functions J.M. van den Akker J.A. Hoogeveen Department of Information and Computing Sciences Utrecht University

More information

CO759: Algorithmic Game Theory Spring 2015

CO759: Algorithmic Game Theory Spring 2015 CO759: Algorithmic Game Theory Spring 2015 Instructor: Chaitanya Swamy Assignment 1 Due: By Jun 25, 2015 You may use anything proved in class directly. I will maintain a FAQ about the assignment on the

More information

The traveling salesman problem

The traveling salesman problem Chapter 58 The traveling salesman problem The traveling salesman problem (TSP) asks for a shortest Hamiltonian circuit in a graph. It belongs to the most seductive problems in combinatorial optimization,

More information

Scheduling and fixed-parameter tractability

Scheduling and fixed-parameter tractability Math. Program., Ser. B (2015) 154:533 562 DOI 10.1007/s10107-014-0830-9 FULL LENGTH PAPER Scheduling and fixed-parameter tractability Matthias Mnich Andreas Wiese Received: 24 April 2014 / Accepted: 10

More information

Embedded Systems 14. Overview of embedded systems design

Embedded Systems 14. Overview of embedded systems design Embedded Systems 14-1 - Overview of embedded systems design - 2-1 Point of departure: Scheduling general IT systems In general IT systems, not much is known about the computational processes a priori The

More information

Outline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model

Outline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 and Mixed Integer Programg Marco Chiarandini 1. Resource Constrained Project Model 2. Mathematical Programg 2 Outline Outline 1. Resource Constrained

More information

Divisible Load Scheduling

Divisible Load Scheduling Divisible Load Scheduling Henri Casanova 1,2 1 Associate Professor Department of Information and Computer Science University of Hawai i at Manoa, U.S.A. 2 Visiting Associate Professor National Institute

More information

Separation Techniques for Constrained Nonlinear 0 1 Programming

Separation Techniques for Constrained Nonlinear 0 1 Programming Separation Techniques for Constrained Nonlinear 0 1 Programming Christoph Buchheim Computer Science Department, University of Cologne and DEIS, University of Bologna MIP 2008, Columbia University, New

More information

University of Twente. Faculty of Mathematical Sciences. Scheduling split-jobs on parallel machines. University for Technical and Social Sciences

University of Twente. Faculty of Mathematical Sciences. Scheduling split-jobs on parallel machines. University for Technical and Social Sciences Faculty of Mathematical Sciences University of Twente University for Technical and Social Sciences P.O. Box 217 7500 AE Enschede The Netherlands Phone: +31-53-4893400 Fax: +31-53-4893114 Email: memo@math.utwente.nl

More information

Unrelated Machine Scheduling with Stochastic Processing Times

Unrelated Machine Scheduling with Stochastic Processing Times Unrelated Machine Scheduling with Stochastic Processing Times Martin Skutella TU Berlin, Institut für Mathematik, MA 5- Straße des 17. Juni 136, 1063 Berlin, Germany, martin.skutella@tu-berlin.de Maxim

More information

A NOTE ON THE PRECEDENCE-CONSTRAINED CLASS SEQUENCING PROBLEM

A NOTE ON THE PRECEDENCE-CONSTRAINED CLASS SEQUENCING PROBLEM A NOTE ON THE PRECEDENCE-CONSTRAINED CLASS SEQUENCING PROBLEM JOSÉ R. CORREA, SAMUEL FIORINI, AND NICOLÁS E. STIER-MOSES School of Business, Universidad Adolfo Ibáñez, Santiago, Chile; correa@uai.cl Department

More information

SUPPLY CHAIN SCHEDULING: ASSEMBLY SYSTEMS. Zhi-Long Chen. Nicholas G. Hall

SUPPLY CHAIN SCHEDULING: ASSEMBLY SYSTEMS. Zhi-Long Chen. Nicholas G. Hall SUPPLY CHAIN SCHEDULING: ASSEMBLY SYSTEMS Zhi-Long Chen Nicholas G. Hall University of Pennsylvania The Ohio State University December 27, 2000 Abstract We study the issue of cooperation in supply chain

More information

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003 CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming

More information

MINIMIZING TOTAL TARDINESS FOR SINGLE MACHINE SEQUENCING

MINIMIZING TOTAL TARDINESS FOR SINGLE MACHINE SEQUENCING Journal of the Operations Research Society of Japan Vol. 39, No. 3, September 1996 1996 The Operations Research Society of Japan MINIMIZING TOTAL TARDINESS FOR SINGLE MACHINE SEQUENCING Tsung-Chyan Lai

More information

INFORMS Journal on Computing

INFORMS Journal on Computing This article was downloaded by: [148.251.232.83] On: 07 November 2018, At: 03:52 Publisher: Institute for Operations Research and the Management Sciences (INFORMS) INFORMS is located in Maryland, USA INFORMS

More information

Chapter 3: Discrete Optimization Integer Programming

Chapter 3: Discrete Optimization Integer Programming Chapter 3: Discrete Optimization Integer Programming Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-16-17.shtml Academic year 2016-17

More information

The Maximum Flow Problem with Disjunctive Constraints

The Maximum Flow Problem with Disjunctive Constraints The Maximum Flow Problem with Disjunctive Constraints Ulrich Pferschy Joachim Schauer Abstract We study the maximum flow problem subject to binary disjunctive constraints in a directed graph: A negative

More information