An Approximate Pareto Set for Minimizing the Maximum Lateness and Makespan on Parallel Machines


Gais Alhadi 1, Imed Kacem 2, Pierre Laroche 3, Izzeldin M. Osman 4

arXiv:1802.10488v1 [cs.DS] 28 Feb 2018

Abstract. We consider the two-parallel machines scheduling problem, with the aim of minimizing the maximum lateness and the makespan. Formally, the problem is defined as follows. We have to schedule a set J of n jobs on two identical machines. Each job i ∈ J has a processing time p_i and a delivery time q_i. The machines are available at time t = 0 and each of them can process at most one job at a given time. The problem is to find a sequence of jobs, with the objective of minimizing the maximum lateness L_max and the makespan C_max. With no loss of generality, we consider that all data are integers and that jobs are indexed in non-increasing order of their delivery times: q_1 ≥ q_2 ≥ ... ≥ q_n. This paper proposes an exact algorithm (based on dynamic programming) to generate the complete Pareto frontier in pseudo-polynomial time. Then, we present an FPTAS (Fully Polynomial Time Approximation Scheme), based on the conversion of the dynamic programming, to generate an approximate Pareto frontier. The proposed FPTAS is strongly polynomial. Some numerical experiments are provided in order to compare the two proposed approaches.

I. INTRODUCTION

We consider the two-parallel machines scheduling problem, with the aim of minimizing the maximum lateness and the makespan. Formally, the problem is defined as follows. We have to schedule a set J of n jobs on two identical machines. Each job i ∈ J has a processing time p_i and a delivery time q_i. The machines are available at time t = 0 and each of them can process at most one job at a time. The problem is to find a sequence of jobs, with the objective of minimizing the maximum lateness L_max and the makespan C_max.
With no loss of generality, we consider that all data are integers and that jobs are indexed in non-increasing order of their delivery times: q_1 ≥ q_2 ≥ ... ≥ q_n. For self-consistency, we recall some necessary definitions related to the approximation area. An algorithm A is called a ρ-approximation algorithm for a given problem if, for any instance I of that problem, the algorithm A yields, within a polynomial time, a feasible solution with an objective value A(I) such that |A(I) − OPT(I)| ≤ ε·OPT(I), where OPT(I) is the optimal value of I and ρ is the performance guarantee (or worst-case ratio) of the approximation algorithm A. For minimization problems, ρ is a real number greater than or equal to 1, with ρ = 1 + ε (which leads to the inequality A(I) ≤ (1 + ε)·OPT(I)); for maximization problems, ρ is a real number from the interval [0, 1], with ρ = 1 − ε (which leads to the inequality A(I) ≥ (1 − ε)·OPT(I)). The Pareto-optimal solutions are the solutions that are not dominated by other solutions. Thus, a solution is Pareto-optimal if there does not exist another solution that is at least as good for all the objectives and strictly better for at least one of them. Noteworthy, Pareto-optimal solutions represent a range of reasonable optimal solutions for all possible functions based on the different objectives. A schedule is called Pareto-optimal if it is not possible to decrease the value of one objective without increasing the value of the other.

1 Gais Alhadi is a member of the Faculty of Mathematical and Computer Sciences, University of Gezira, Wad-Madani, Sudan (gais.alhadi@uofg.edu.sd). 2, 3 Imed Kacem and Pierre Laroche are members of the LCOMS Laboratory, University of Lorraine, F-57045 Metz, France (imed.kacem@univ-lorraine.fr, pierre.laroche@univ-lorraine.fr). 4 Izzeldin M. Osman is from Sudan University of Science and Technology, Khartoum, Sudan (izzeldin@acm.org).
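To make the two criteria concrete, the following sketch (ours, not from the paper) evaluates L_max and C_max for a given assignment of the jobs to the two machines. It assumes that, on each machine, jobs are sequenced in non-increasing order of delivery times, which is optimal for the maximum lateness on a single machine (Jackson's rule); the function name and data layout are illustrative choices.

```python
# Illustrative sketch: computing the two objectives of the paper,
# L_max = max_i (C_i + q_i) and C_max = max_i C_i, for a fixed
# assignment of jobs to the two identical machines.

def evaluate(jobs, assignment):
    """jobs: list of (p_i, q_i); assignment: list of 0/1 machine indices."""
    c_max = 0
    l_max = 0
    for m in (0, 1):
        # jobs on machine m, in non-increasing delivery-time order
        on_m = sorted((jobs[i] for i in range(len(jobs)) if assignment[i] == m),
                      key=lambda job: -job[1])
        t = 0
        for p, q in on_m:
            t += p                      # completion time of this job
            l_max = max(l_max, t + q)   # lateness = completion + delivery time
        c_max = max(c_max, t)
    return l_max, c_max

# Example: three jobs (p_i, q_i), jobs 0 and 2 on machine 0, job 1 on machine 1
jobs = [(3, 5), (2, 4), (4, 1)]
print(evaluate(jobs, [0, 1, 0]))  # -> (8, 7)
```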
It is noteworthy that during the last decade multi-objective scheduling problems have attracted numerous researchers from all over the world and have been widely studied in the literature. For scheduling problems on a serial-batch machine, Geng et al. [17] studied scheduling problems with or without precedence relations, where the objective is to minimize the makespan and the maximum cost. They provided highly efficient polynomial-time algorithms to generate all Pareto-optimal points. An approximate Pareto set of minimal size that approximates within an accuracy ε for multi-objective optimization problems has been studied by Bazgan et al. [2]. They proposed a 3-approximation algorithm for two objectives and also studied the performance of the greedy algorithm for a three-objective case when the points are given explicitly in the input. They showed that the three-objective case is NP-hard. Chen and Zou [5] proposed a runtime analysis of a (µ + 1) multi-objective evolutionary algorithm for three multi-objective optimization problems with unknown attributes. They showed that when the size of the population is less than the total number of Pareto vectors, the (µ + 1) multi-objective evolutionary algorithm cannot obtain the expected polynomial runtime for the exact discrete multi-objective optimization problems. Thus, the size of the population must be set equal to the total number of leading ones and trailing zeros. Furthermore, the expected polynomial runtime for the exponential discrete multi-objective optimization problem can be obtained by the ratio of n/2 to µ − 1 over an appropriate period of time. They also showed that the (µ + 1) multi-objective evolutionary algorithm can run efficiently in polynomial time by obtaining an ε-adaptive

Pareto front. Florios and Mavrotas [6] used AUGMECON2, a multi-objective mathematical programming method (which is suitable for general multi-objective integer programming problems), to produce all the Pareto-optimal solutions for multi-objective traveling salesman and set covering problems. They showed that the performance of the algorithm is slightly better than that of existing ones. Moreover, they showed that their results can be helpful for other multi-objective mathematical programming methods or even multi-objective meta-heuristics. In [10], Sabouni and Jolai proposed an optimal method for the problem of scheduling jobs on a single batch processing machine to minimize the makespan and the maximum lateness. They showed that the proposed method is optimal when the set with the maximum lateness objective has the same processing times. They also proposed an optimal method for the group that has the maximum lateness objective and the same processing times. Geng et al. [3] considered the scheduling problem on an unbounded p-batch machine with family jobs to find all Pareto-optimal points for minimizing makespan and maximum lateness. They presented a dynamic programming algorithm to solve the studied problem. He et al. [4] showed that the Pareto optimization scheduling problem on a single bounded serial-batching machine to minimize makespan and maximum lateness is solvable in O(n^6) time. They also presented an O(n^3)-time algorithm to find all Pareto-optimal solutions when the processing times and deadlines are agreeable. For the bi-criteria scheduling problem, He et al. [8] showed that the problem of minimizing maximum cost and makespan is solvable in O(n^5) time. The authors presented a polynomial-time algorithm in order to find all Pareto-optimal solutions. Also, He et al. [9] showed that the bi-criteria batching problem of minimizing maximum cost and makespan is solvable in O(n^3) time. The bi-criteria scheduling problem on a parallel-batching machine to minimize maximum lateness and makespan has been considered in [11].
The authors presented a polynomial-time algorithm in order to find all Pareto-optimal solutions. Allahverdi and Aldowaisan [13] studied the no-wait flow-shop scheduling problem with the bi-criteria of makespan and maximum lateness. They also proposed a dominance relation and a branch-and-bound algorithm, and showed that these algorithms are quite efficient.

The remainder of this paper is organized as follows. In Section 2, we describe the proposed dynamic programming (DP) algorithm. Section 3 provides the description and the analysis of the FPTAS. In Section 4, we present a practical example for the DP and the FPTAS. Finally, Section 5 concludes the paper.

II. DYNAMIC PROGRAMMING ALGORITHM

The following dynamic programming algorithm A can be applied to solve this problem exactly. This algorithm A generates iteratively some sets of states. At every iteration i, a set χ_i composed of states is generated (1 ≤ i ≤ n). Each state [k, L, C] in χ_i can be associated to a feasible partial schedule for the first i jobs. Let the variable k ∈ {0, 1} denote the most loaded machine, L denote the maximum lateness and C denote the maximum completion time of the corresponding schedule. In the following, P_i = p_1 + p_2 + ... + p_i denotes the total processing time of the first i jobs. The dynamic programming algorithm can be described as follows.

ALGORITHM A
1) Set χ_1 = {[1, p_1 + q_1, p_1]}.
2) For i ∈ {2, 3, ..., n}:
   a) χ_i = ∅.
   b) For every state [k, L, C] in χ_{i−1}:
      (schedule job i on machine k) add [k, max{L, C + p_i + q_i}, C + p_i] to χ_i;
      (schedule job i on machine 1 − k) if (P_i − C ≤ C) add [k, max{L, P_i − C + q_i}, C] to χ_i, else add [1 − k, max{L, P_i − C + q_i}, P_i − C] to χ_i.
   c) For every k and every C: keep only one state with the smallest possible L.
   d) Remove χ_{i−1}.
3) Return the Pareto front of χ_n, by keeping only the non-dominated states.

Remark: To break the symmetry, we start with χ_1 = {[1, p_1 + q_1, p_1]} (i.e., we perform job 1 on the first machine).

III. APPROXIMATE PARETO FRONTIER

The main idea of the approximate Pareto frontier is to remove a part of the states generated by the dynamic programming algorithm A.
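The steps of Algorithm A can be sketched as follows. This is our reconstruction in Python, under the assumption that each state [k, L, C] stores the most loaded machine k, the maximum lateness L and the load C of that machine, with P_i the total processing time of the first i jobs; states are kept in a dictionary keyed by (k, C) so that step 2c retains only the smallest L per pair, and step 3 extracts the Pareto front with a simple sweep.

```python
# Sketch of the dynamic programming (Algorithm A). Jobs are assumed
# pre-sorted in non-increasing delivery-time order: q_1 >= ... >= q_n.

def algorithm_a(jobs):
    """jobs: list of (p_i, q_i). Returns the Pareto front as (L, C) pairs."""
    p1, q1 = jobs[0]
    states = {(1, p1): p1 + q1}   # key (k, C) -> smallest L (step 2c)
    total = p1                    # running value of P_i
    for p, q in jobs[1:]:
        total += p
        new_states = {}
        def keep(k, c, lateness):
            key = (k, c)
            if key not in new_states or lateness < new_states[key]:
                new_states[key] = lateness
        for (k, c), lateness in states.items():
            # schedule job i on machine k (the most loaded one)
            keep(k, c + p, max(lateness, c + p + q))
            # schedule job i on machine 1 - k
            other = total - c     # new load of machine 1 - k
            if other <= c:
                keep(k, c, max(lateness, other + q))
            else:
                keep(1 - k, other, max(lateness, other + q))
        states = new_states
    # Pareto front of chi_n: keep only the non-dominated (L, C) pairs
    pairs = sorted((lateness, c) for (k, c), lateness in states.items())
    front, best_c = [], float("inf")
    for lateness, c in pairs:
        if c < best_c:
            front.append((lateness, c))
            best_c = c
    return front

print(algorithm_a([(3, 5), (2, 4), (4, 1)]))  # -> [(8, 6), (9, 5)]
```

Each iteration touches O(P) distinct loads per machine, which is what makes the exact algorithm pseudo-polynomial.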
Therefore, the modified algorithm A′ described in Lemma III.1 produces an approximate solution instead of the optimal solution. Given an arbitrary ε > 0, we define the following parameters: δ_1 = εP/(2n) and δ_2 = ε(P + q_max)/(3n), where q_max is the maximum delivery time and P is the total sum of the processing times. Let L*_max and C*_max be the optimal values of our two objectives. Let L_MAX and C_MAX be upper bounds for the two considered criteria (obtained by scheduling all the jobs on the same machine), such that: 0 ≤ L_MAX = P + q_max ≤ 3L*_max and 0 ≤ C_MAX = P ≤ 2C*_max. We divide the intervals [0, C_MAX] and [0, L_MAX] into equal sub-intervals of lengths δ_1 and δ_2, respectively. Then, an FPTAS is defined by following the same procedure as in the dynamic programming, except that it keeps only one representative state for every couple of the defined sub-intervals produced from [0, C_MAX] and [0, L_MAX]. Thus, our FPTAS will generate approximate sets χ#_i of states

instead of χ_i. The following lemma shows the closeness of the result generated by the FPTAS compared to the dynamic programming.

Lemma III.1. For every state [k, L, C] ∈ χ_i there exists at least one approximate state [m, L#, C#] ∈ χ#_i such that: L# ≤ L + i·max{δ_1, δ_2} and C − i·δ_1 ≤ C# ≤ C + i·δ_1.

Proof. By induction on i. First, for i = 1 we have χ#_1 = χ_1. Therefore, the statement is trivial. Now, assume that the lemma holds true up to level i − 1. Consider an arbitrary state [k, L, C] ∈ χ_i. Algorithm A introduces this state into χ_i when job i is added to some feasible state for the first i − 1 jobs. Let [k′, L′, C′] be this feasible state. Three cases can be distinguished:

1) [k, L, C] = [k′, max{L′, C′ + p_i + q_i}, C′ + p_i]
2) [k, L, C] = [k′, max{L′, P_i − C′ + q_i}, C′]
3) [k, L, C] = [1 − k′, max{L′, P_i − C′ + q_i}, P_i − C′]

We will prove the statement for level i in the three cases.

1st Case: [k, L, C] = [k′, max{L′, C′ + p_i + q_i}, C′ + p_i].
Since [k′, L′, C′] ∈ χ_{i−1}, there exists [k#, L#, C#] ∈ χ#_{i−1} such that: L# ≤ L′ + (i − 1)·max{δ_1, δ_2} and C′ − (i − 1)δ_1 ≤ C# ≤ C′ + (i − 1)δ_1. Consequently, the state [k#, max{L#, C# + p_i + q_i}, C# + p_i] is created by algorithm A′ at iteration i. However, it may be removed when reducing the state subset. Let [α, λ, µ] be the state in χ#_i that is in the same box as the state [k#, max{L#, C# + p_i + q_i}, C# + p_i]. Hence, we have:

λ ≤ max{L#, C# + p_i + q_i} + δ_2
  ≤ max{L′ + (i − 1)·max{δ_1, δ_2}, C′ + (i − 1)δ_1 + p_i + q_i} + δ_2
  ≤ max{L′, C′ + p_i + q_i} + (i − 1)·max{δ_1, δ_2} + δ_2
  ≤ L + i·max{δ_1, δ_2}.   (1)

In addition,

µ ≤ C# + p_i + δ_1 ≤ C′ + (i − 1)δ_1 + p_i + δ_1 = C + iδ_1,   (2)
µ ≥ C# + p_i − δ_1 ≥ C′ − (i − 1)δ_1 + p_i − δ_1 = C − iδ_1.   (3)

Consequently, [α, λ, µ] is an approximate state verifying the two conditions.

2nd Case: [k, L, C] = [k′, max{L′, P_i − C′ + q_i}, C′].
Since [k′, L′, C′] ∈ χ_{i−1}, there exists [k#, L#, C#] ∈ χ#_{i−1} such that: L# ≤ L′ + (i − 1)·max{δ_1, δ_2} and C′ − (i − 1)δ_1 ≤ C# ≤ C′ + (i − 1)δ_1. Consequently, two sub-cases can occur:

Sub-case 2.1: P_i − C# ≤ C#. Here, the state [k#, max{L#, P_i − C# + q_i}, C#] is created by algorithm A′ at iteration i. However, it may be removed when reducing the state subset. Let [α, λ, µ] be the state in χ#_i that is in the same box as [k#, max{L#, P_i − C# + q_i}, C#]. Hence, we have:

λ ≤ max{L#, P_i − C# + q_i} + δ_2
  ≤ max{L′ + (i − 1)·max{δ_1, δ_2}, P_i − (C′ − (i − 1)δ_1) + q_i} + δ_2
  ≤ max{L′, P_i − C′ + q_i} + (i − 1)·max{δ_1, δ_2} + δ_2
  ≤ L + (i − 1)·max{δ_1, δ_2} + δ_2 < L + i·max{δ_1, δ_2}.   (4)

Moreover,

µ ≤ C# + δ_1 ≤ C′ + (i − 1)δ_1 + δ_1 = C + iδ_1.   (5)

And,

µ ≥ C# − δ_1 ≥ C′ − (i − 1)δ_1 − δ_1 = C − iδ_1.   (6)

Consequently, [α, λ, µ] is an approximate state verifying the two conditions.

Sub-case 2.2: P_i − C# > C#. Here, the state [1 − k#, max{L#, P_i − C# + q_i}, P_i − C#] is created by algorithm A′ at iteration i. However, it may be removed when reducing the state subset. Let [α, λ, µ] be the state in χ#_i that is in the same box as [1 − k#, max{L#, P_i − C# + q_i}, P_i − C#]. Hence, we have:

λ ≤ max{L#, P_i − C# + q_i} + δ_2
  ≤ max{L′ + (i − 1)·max{δ_1, δ_2}, P_i − (C′ − (i − 1)δ_1) + q_i} + δ_2
  ≤ max{L′, P_i − C′ + q_i} + (i − 1)·max{δ_1, δ_2} + δ_2
  ≤ L + (i − 1)·max{δ_1, δ_2} + δ_2 < L + i·max{δ_1, δ_2}.   (7)

Moreover,

µ ≤ P_i − C# + δ_1.   (8)

Since C# ≥ C′ − (i − 1)δ_1, the following relation holds:

µ ≤ P_i − C′ + (i − 1)δ_1 + δ_1 ≤ C + iδ_1 (since P_i − C′ ≤ C′ = C).   (9)

And,

µ ≥ P_i − C# − δ_1 > C# − δ_1 ≥ C′ − (i − 1)δ_1 − δ_1 = C − iδ_1.   (10)

Thus, [α, λ, µ] verifies the necessary conditions.

3rd Case: [k, L, C] = [1 − k′, max{L′, P_i − C′ + q_i}, P_i − C′].
Since [k′, L′, C′] ∈ χ_{i−1}, there exists [k#, L#, C#] ∈ χ#_{i−1} such that: L# ≤ L′ + (i − 1)·max{δ_1, δ_2} and C′ − (i − 1)δ_1 ≤ C# ≤ C′ + (i − 1)δ_1. Consequently, two sub-cases can occur:

Sub-case 3.1: P_i − C# ≥ C#. Here, the state [1 − k#, max{L#, P_i − C# + q_i}, P_i − C#] is created by algorithm A′ at iteration i. However, it may be removed when reducing the state subset. Let [α, λ, µ] be the state in χ#_i that is in the same box as [1 − k#, max{L#, P_i − C# + q_i}, P_i − C#]. Hence, we have:

λ ≤ max{L#, P_i − C# + q_i} + δ_2
  ≤ max{L′ + (i − 1)·max{δ_1, δ_2}, P_i − (C′ − (i − 1)δ_1) + q_i} + δ_2
  ≤ max{L′, P_i − C′ + q_i} + (i − 1)·max{δ_1, δ_2} + δ_2
  ≤ L + i·max{δ_1, δ_2}.   (11)

µ ≤ P_i − C# + δ_1 ≤ P_i − (C′ − (i − 1)δ_1) + δ_1 = C + iδ_1.   (12)

On the other hand, we have

µ ≥ P_i − C# − δ_1 ≥ P_i − (C′ + (i − 1)δ_1) − δ_1 = C − iδ_1.   (13)

Thus, [α, λ, µ] fulfills the conditions.

Sub-case 3.2: P_i − C# < C#. Here, the state [k#, max{L#, P_i − C# + q_i}, C#] is created by algorithm A′ at iteration i. However, it may be removed when reducing the state subset. Let [α, λ, µ] be the state in χ#_i that is in the same box as [k#, max{L#, P_i − C# + q_i}, C#]. Hence, we have:

λ ≤ max{L#, P_i − C# + q_i} + δ_2
  ≤ max{L′ + (i − 1)·max{δ_1, δ_2}, P_i − (C′ − (i − 1)δ_1) + q_i} + δ_2
  ≤ max{L′, P_i − C′ + q_i} + (i − 1)·max{δ_1, δ_2} + δ_2
  ≤ L + i·max{δ_1, δ_2}.   (14)

µ ≤ C# + δ_1 ≤ C′ + (i − 1)δ_1 + δ_1 ≤ C + iδ_1 (since C′ ≤ P_i − C′ = C).   (15)

On the other hand, we have

µ ≥ C# − δ_1 > P_i − C# − δ_1 ≥ P_i − C′ − (i − 1)δ_1 − δ_1 = C − iδ_1.   (16)

Therefore, [α, λ, µ] fulfills the conditions. In conclusion, the statement also holds for level i in the third case, and this completes our inductive proof.

Based on the lemma, we easily deduce that for every non-dominated state [k, L, C] ∈ χ_n, there must remain a close state [m, L#, C#] ∈ χ#_n such that: L# ≤ L + n·max{δ_1, δ_2} ≤ (1 + ε)·L and C# ≤ C + n·δ_1 ≤ (1 + ε)·C. Moreover, it is clear that the FPTAS runs polynomially in n and 1/ε. The overall complexity of our FPTAS is O(n^3/ε^2).

IV. RESULTS

The following results have been obtained after testing the performance of the proposed algorithms. The code has been written in Java and the experiments were performed on an Intel(R) Core(TM) i7 with 8GB of RAM. We randomly generated five sets of instances, with different numbers of jobs and various processing and delivery times: number of jobs from 5 to 25, 26 to 50, 51 to 75, 76 to 100 and 100 to 200; processing times from 1 to 20, 1 to 100 and 1 to 1000; delivery times from 1 to 20, 1 to 100 and 1 to 1000. That gave us 135 instances in each set of instances. Finally, the FPTAS has been tested with two values of ε: 0.3 and 0.9. To ensure the consistency of running times, each test has been run three times. Figure 1 presents a comparison of the FPTAS and the Dynamic Programming. The left part of this figure shows the average size of the Pareto front (i.e., the number of solutions) found by the Dynamic Programming algorithm and by our FPTAS with the two ε values we used. The sizes are given for our five sets of instances, from small instances (5-25 jobs) to bigger ones (100-200 jobs). We can see that the number of solutions decreases as the number of jobs increases. With a lot of jobs, it is more likely to obtain very similar solutions, a lot of them being dominated by others. On the contrary, the number of jobs has no real influence on the Pareto front sizes found by our FPTAS algorithm, whatever the value of ε.
On the right part of the same figure, the average quality of the two objectives of our study is given: L^Aε/L* and C^Aε/C*. We can see that the FPTAS finds solutions closer to the optimal ones when the number of jobs increases. The C values are closer to the optimal than the L values, which is not a surprise, as L depends on C. Worth mentioning, our FPTAS with ε = 0.3 gives better results than with ε = 0.9, which is consistent with the theory. We have also studied the influence of processing and delivery times, in Tables I and II. These tables present average results for our benchmark, considering the 5 sets of instances. Results are presented for ε = 0.3; the analysis is the same with ε = 0.9. Table I shows that instances composed of jobs with various processing times lead to optimal solutions with a bigger Pareto front, as seen in column 2. On the contrary, columns 3 to 5 show that our FPTAS algorithm is not really influenced by this parameter. Table II shows that the delivery time ranges have more influence on the results: the size of the Pareto front of non-dominated solutions grows faster. Our FPTAS algorithm has the same behavior, and we can see that the results are closer to the optimal with smaller values of q_i.

Table I: Quality of FPTAS as a function of processing time ranges (for ε = 0.3)

p_i range | DP size of Pareto front | FPTAS size of Pareto front | C^Aε/C* | L^Aε/L*
1-20      | 2.26                    | 1.23                       | 1.0006  | 1.003
1-100     | 4.07                    | 1.84                       | 1.0008  | 1.007
1-500     | 5.65                    | 1.73                       | 1.0005  | 1.004

Table II: Quality of FPTAS as a function of delivery time ranges (for ε = 0.3)

q_i range | DP size of Pareto front | FPTAS size of Pareto front | C^Aε/C* | L^Aε/L*
1-20      | 2.02                    | 1.25                       | 1.0007  | 1.001
1-100     | 3.03                    | 1.57                       | 1.0007  | 1.002
1-500     | 6.93                    | 1.99                       | 1.0005  | 1.008

Computing times are given in Tables III, IV and V. They compare our Dynamic Programming algorithm and our FPTAS, considering two ε values: 0.3 and 0.9. All values are in milliseconds. Table III shows that all algorithms are slower when the number of states grows.
Table IV shows an interesting result: while the Dynamic Programming algorithm becomes slower when the processing time ranges grow, the FPTAS has the opposite behavior. The FPTAS with ε = 0.3 is even slower than the exact algorithm for the smallest range of processing times. Table V shows that the delivery time ranges have a smaller influence on the Dynamic Programming algorithm: computing times grow, but more slowly than the delivery time ranges. The FPTAS computing times also grow with the delivery time ranges. Note that Tables IV and V are based on mean values over all the sets of instances.

Figure 1: Quality of our FPTAS algorithm with ε = 0.3 and ε = 0.9 (left: size of the Pareto front; right: quality of C and L)

Table III: Average computing times vs size of instances (ms)

#jobs   | DP     | FPTAS ε = 0.3 | FPTAS ε = 0.9
5-25    | 67     | 0.9           | 0.3
26-50   | 1278   | 5.4           | 0.9
51-75   | 7917   | 22            | 3.4
76-100  | 24937  | 58            | 7.8
100-200 | 164332 | 281           | 32

Table IV: Average computing times (ms) vs processing time ranges

p_i ranges | DP     | FPTAS ε = 0.3 | FPTAS ε = 0.9
1-20       | 91     | 144           | 15
1-100      | 2668   | 51            | 7
1-500      | 116362 | 25            | 5

Table V: Average computing times (ms) vs delivery time ranges

q_i ranges | DP    | FPTAS ε = 0.3 | FPTAS ε = 0.9
1-20       | 31447 | 26            | 5
1-100      | 40482 | 46            | 7
1-500      | 47190 | 149           | 15

V. CONCLUSIONS AND PERSPECTIVES

The two-parallel machines scheduling problem has been considered to minimize the maximum lateness and the makespan. We have proposed an exact algorithm (based on dynamic programming) to generate the complete Pareto frontier in pseudo-polynomial time. Then, we presented an FPTAS (Fully Polynomial Time Approximation Scheme), based on the conversion of the exact dynamic programming, to generate an approximate Pareto frontier. For the proposed algorithms, we randomly generated several instances with different ranges, where, for each job J_i, its processing time p_i and delivery time q_i are set to be integer numbers. The results of the experiments showed that the proposed algorithms for the considered problem are very efficient. It is clear that optimizing the maximum lateness (L_max) implicitly implies minimizing the makespan (C_max). Moreover, the values of ε and of the processing and delivery times play an important role in the results (i.e., big processing times, small delivery times and a big ε make the FPTAS faster, and vice versa). In our future works, the study of the multiple-machine scheduling problems seems to be a challenging perspective in the extension of our work.

REFERENCES

[1] G. Sapienza, G. Brestovac, R. Grgurina, T. Seceleanu. On applying multiple criteria decision analysis in embedded systems design. Design Automation for Embedded Systems, Vol 20(3), pp 211-238 (2016).
[2] C. Bazgan, F.
Jamain, D. Vanderpooten. Approximate Pareto sets of minimal size for multi-objective optimization problems. Operations Research Letters, Vol 43(1), pp 1-6 (2015).
[3] Z. Geng and J. Yuan. Pareto optimization scheduling of family jobs on a p-batch machine to minimize makespan and maximum lateness. Theoretical Computer Science, Vol 570, pp 22-29 (2015).
[4] C. He, H. Lin, Y. Lin. Bounded serial-batching scheduling for minimizing maximum lateness and makespan. Discrete Optimization, Vol 16, pp 70-75 (2015).
[5] Y. Chen and X. Zou. Runtime analysis of a multi-objective evolutionary algorithm for obtaining finite approximations of Pareto fronts. Information Sciences, Vol 262, pp 62-77 (2014).
[6] K. Florios and G. Mavrotas. Generation of the exact Pareto set in Multi-Objective Traveling Salesman and Set Covering Problems. Applied Mathematics and Computation, Vol 237, pp 1-19 (2014).
[7] Q. Feng, J. Yuan, H. Liu, C. He. A note on two-agent scheduling on an unbounded parallel-batching machine with makespan and maximum lateness objectives. Applied Mathematical Modelling, Vol 37(10-11), pp 7071-7076 (2013).
[8] C. He, H. Lin, Y. Lin, J. Tian. Bicriteria scheduling on a series-batching machine to minimize maximum cost and makespan. Central European Journal of Operations Research, pp 1-10 (2013).
[9] C. He, X. M. Wang, Y. X. Lin, Y. D. Mu. An Improved Algorithm for a Bicriteria Batching Scheduling Problem. RAIRO-Operations Research, Vol 47(1), pp 1-8 (2013).
[10] M. T. Y. Sabouni and F. Jolai. Optimal methods for batch processing problem with makespan and maximum lateness objectives. Applied Mathematical Modelling, Vol 34(2), pp 314-324 (2010).
[11] C. He, Y. Lin, J. Yuan. Bicriteria scheduling on a batching machine to minimize maximum lateness and makespan. Theoretical Computer Science, Vol 381(1-3), pp 234-240 (2007).
[12] D. Sarkar and J. M. Modak. Pareto-optimal solutions for multi-objective optimization of fed-batch bioreactors using nondominated sorting genetic algorithm. Chemical Engineering Science, Vol 60(2), pp 481-492 (2005).
[13] A.
Allahverdi and T. Aldowaisan. No-wait flowshops with bicriteria of makespan and maximum lateness. European Journal of Operational Research, Vol 152(1), pp 132-147 (2004).
[14] C. M. Silva and E. C. Biscaia. Genetic algorithm development for multi-objective optimization of batch free-radical polymerization reactors. Computers & Chemical Engineering, Vol 27(8), pp 1329-1344 (2003).
[15] P. Kumar. A Framework for Multi-objective Optimization and Multi-criteria Decision Making for Design of Electrical Drives. Universiteit Delft (2008).
[16] S. Chakhar and J. Martel. Multi-Criteria Evaluation Functions Inside Geographical Information Systems: Towards a Spatial Decision Support System (2006).

[17] Z. Geng and J. Yuan. Scheduling with or without precedence relations on a serial-batch machine to minimize makespan and maximum cost. Submitted, 2017.