Hybridizing VNS and path-relinking on a particle swarm framework to minimize total flowtime


Wagner Emanoel Costa
Marco César Goldbarg
Elizabeth G. Goldbarg

Technical Report UFRN-DIMAp RT, April

The contents of this document are the sole responsibility of the authors.

Departamento de Informática e Matemática Aplicada
Universidade Federal do Rio Grande do Norte

Wagner Emanoel Costa (wemano@gmail.com)
Elizabeth G. Goldbarg (beth@dimap.ufrn.br)
Marco César Goldbarg (gold@dimap.ufrn.br)

Abstract. This paper presents a new hybridization of VNS and path-relinking on a particle swarm framework for the permutational flowshop scheduling problem with the total flowtime criterion. The operators of the proposed particle swarm are based on path-relinking and variable neighborhood search methods. The performance of the new approach was tested on the benchmark suite of Taillard, and five novel solutions for the benchmark suite are reported. The results were compared against results obtained with methods from the literature. Statistical analysis favors the new particle swarm approach over the other methods tested.

Keywords: Flowshop, Scheduling, Total Flowtime, Heuristics, PSO.

1 Introduction

The Permutational Flowshop Scheduling Problem (PFSP) with the total flowtime criterion (TFT) is an NP-hard Combinatorial Optimization Problem [11, 15] that deals with the scheduling of a set of jobs, J, through a set of machines, M.
(Programa de Pós-graduação em Sistemas e Computação, UFRN; Departamento de Informática e Matemática Aplicada, UFRN/CCET/DIMAp)

Every job has to be processed

on all machines following the same machine sequence. Each job requires a definite machine processing time and can be processed on one machine at a time. Each machine processes only one job at a time. Once a machine starts processing a job, no preemption is allowed, and the machine becomes available as soon as the operation is finished. The goal is to produce a schedule of jobs such that the sum of completion times is minimized. Many methods have been proposed to address this problem. There are constructive methods [23, 20, 10, 21, 9, 18], local search methods [5], an iterated greedy approach [22], genetic algorithms [33, 31, 7], ant colonies [24, 34], particle swarm optimization [28, 19, 17], bee colony optimization [29], VNS [16, 3], hybrid EDA approaches [16, 32], a hybrid discrete differential evolutionary algorithm [29] and parallel simulated annealing [4]. Particle swarm optimization (PSO) is a metaheuristic proposed by Eberhart & Kennedy [8] for continuous optimization. Due to its simplicity, it has been adapted to many discrete optimization problems, including the PFSP [28, 19, 17]. The PSO approach was based on models explaining the synchronous movements in a flock of birds. In those models the birds attempt to keep an optimal distance from each other, so that each bird is close enough to profit from the discoveries and previous experience of a neighboring bird while avoiding the competition for food [8]. The proposed discrete PSO is based on the work of Goldbarg et al. [14] for the Traveling Salesman Problem. The cited metaphor is translated into an optimization method as follows. Agents, named particles, fly over the solution space. A particle occupies a position on the solution space.
The position of a particle represents a valid solution currently under examination. It is encoded as a permutation of jobs (Π), and it has an objective value associated with it, named TFT(Π). Besides knowing its current position, the particle knows the best site (solution) it previously visited, the current position of a neighboring particle, and the best site previously visited by a neighbor. On any given iteration of the method, each particle commits to one of the following actions: a) to explore the solution space on its own; b) to move towards the current site of a neighboring bird; c) to move towards the best site previously visited by a neighboring bird. A probability is assigned to each possible action. During execution those probabilities are updated, and the update process takes into account the quality of the last solution obtained through each action. The actions of the particles are implemented using search operators such as a variable neighborhood search (VNS) procedure and path-relinking. Given a particle A, if A chooses to explore the search space on its own, A copies the configuration of the best previous site it visited, Π_A,Best, to its current position Π_A,Curr; a VNS procedure is executed over Π_A,Curr, and the resulting solution becomes the current position of A. If the particle chooses to move towards a neighboring solution B, the second possible action, a combination of a path-relinking procedure with VNS is executed. Initially, path-relinking gradually transforms the current position Π_A,Curr (solution) of particle A into the position Π_B,Curr occupied by the neighboring particle B. If during this process an intermediate solution Π′ is found whose objective function value is smaller than the objective function values of both Π_A,Curr and Π_B,Curr, then path-relinking is interrupted and the VNS procedure is executed over Π′; the resulting solution becomes the new Π_A,Curr.
After VNS finishes, path-relinking is resumed, transforming Π_A,Curr into Π_B,Curr. A new interruption may occur if a solution better than both Π_A,Curr and Π_B,Curr is found. The action concludes when no further improvement is found during path-relinking. The third action is implemented similarly to the second one; however, in this case the target position is the best site visited by the neighboring particle B (Π_B,Best) instead of its current position (Π_B,Curr). The remainder of this article is organized into five sections. In Section 2 the description of the problem is given. Section 3 describes the hybrid algorithm proposed for the PFSP with

the TFT criterion. Section 4 addresses the experiments carried out to tune the parameters involved in the proposed approach. Section 5 presents computational experiments comparing the proposed PSO with two state-of-the-art methods from the literature. Statistical analysis over the results indicates significant differences favoring the proposed PSO over the two state-of-the-art approaches. Finally, some conclusions are presented in Section 6.

2 Problem

As stated earlier, in the permutation flowshop context a set of jobs J = {1, ..., n} is processed by a set of machines M = {1, ..., m} in sequence. Job j has a processing time of T_j,r on machine r, 1 ≤ j ≤ n, 1 ≤ r ≤ m. Let the permutation Π = {π_1, π_2, ..., π_n} denote the job-processing order, where π_i corresponds to the i-th job on the schedule Π. The completion time of job π_i on machine r, denoted by C(π_i, r), is given by the time elapsed since the first job begins to be operated on the first machine until job π_i is completed on machine r. C(π_i, r) is properly evaluated through Eqs. (1) to (4), where Eq. (1) refers to the completion time of the first scheduled job on the first machine, and Eq. (2) refers to the completion time of the first scheduled job on the remaining machines. Analogously, Eq. (3) evaluates the completion time of job π_i, 1 < i ≤ n, on the first machine (r = 1), whereas Eq. (4) evaluates the completion time of job π_i, 1 < i ≤ n, on the remaining machines, 1 < r ≤ m. Based on Eqs. (1) to (4), the total flowtime value of a given permutation Π, TFT(Π), is defined as the sum of completion times on the last machine (Eq. 5).
C(π_1, 1) = T_{π_1,1}    (1)

C(π_1, r) = C(π_1, r−1) + T_{π_1,r},   r ∈ {2, ..., m}    (2)

C(π_i, 1) = C(π_{i−1}, 1) + T_{π_i,1},   i ∈ {2, ..., n}    (3)

C(π_i, r) = max{C(π_i, r−1), C(π_{i−1}, r)} + T_{π_i,r},   i ∈ {2, ..., n}, r ∈ {2, ..., m}    (4)

TFT(Π) = Σ_{i=1}^{n} C(π_i, m)    (5)

3 Discrete Particle Swarm Optimization

This section describes the proposed discrete PSO to optimize the total flowtime; it is divided into five subsections. The first subsection shows the pseudo-code of the proposed PSO and discusses the four procedures on which it depends. Each subsequent subsection explains each procedure and its tuning parameters in detail.
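To make the recursion of Eqs. (1) to (4) and the flowtime sum of Eq. (5) concrete, the computation can be sketched as below. This is an illustrative sketch, not the authors' code; the function and variable names are mine, and jobs and machines are 0-indexed.

```python
def total_flowtime(T, pi):
    """TFT(pi) via the recursion of Eqs. (1)-(4) and the sum of Eq. (5).

    T[j][r]: processing time of job j on machine r; pi: 0-indexed permutation.
    """
    n, m = len(pi), len(T[0])
    # C[i][r]: completion time of the i-th scheduled job on machine r
    C = [[0] * m for _ in range(n)]
    for i, job in enumerate(pi):
        for r in range(m):
            ready = C[i][r - 1] if r > 0 else 0  # the job must leave machine r-1 first (Eqs. 2, 4)
            free = C[i - 1][r] if i > 0 else 0   # machine r must finish the previous job (Eqs. 3, 4)
            C[i][r] = max(ready, free) + T[job][r]
    return sum(C[i][m - 1] for i in range(n))    # Eq. (5): sum over the last machine
```

On a toy instance with two jobs and two machines the function reproduces the hand computation of the recursion.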

3.1 Pseudo-code

Algorithm 1 exhibits the pseudo-code of the proposed method. The PSO procedure depends on other procedures; the pseudo-code of PSO is explained first, and the subsequent paragraphs detail the procedures on which it depends. The first line of the algorithm initializes the set of particles (Particles), meaning that the position of each particle is set to an individual solution. The best visited site of a given particle is also initialized with its current position. The main body of PSO lies in the loop from lines 2 to 20. Usually the main loop of a PSO is repeated until a stop criterion is satisfied; in this case the stop criterion was defined as a time limit of 0.4·n·m seconds, as the fastest heuristics for minimizing TFT in the PFSP context use the same stop criterion [16, 32, 31, 3]. In line 4, the probabilities v_a, v_b and v_c are defined in the procedure ComputeProbabilities, standing for the probabilities of particle P choosing the first, second or third action, respectively. In line 5 a random number is picked from the interval [0, 1] and stored in the variable Action. If the value of Action is smaller than or equal to v_a (line 6), then particle P explores the search space in function ExploreAlone; otherwise P will move towards a neighbor or towards the best site of a neighbor. In both cases a destination must be selected, which is done in lines 9 to 12. In line 9, the neighbor, of which either the current position or the best site will be used as a guide, is selected randomly. The loop from line 10 to line 12 ensures that a neighbor other than P itself is selected. If the value of Action is greater than v_a and smaller than or equal to v_b, then particle P moves from its current position (Π_P,Curr) towards the site currently occupied by a neighbor (Π_Target,Curr). That move is made in the procedure MoveTowards, line 14.
If the value of Action is greater than v_b, then P moves towards the best site a neighboring particle has occupied, line 16. The main loop finishes when the time limit is reached. In lines 21 to 26 the algorithm finds the best solution ever achieved by a particle and returns it in BestSol, line 27. The proposed discrete PSO depends on the implementation of the following procedures: a procedure to create an initial position for each particle; a procedure to define the initial probabilities associated with each possible action and to update those probabilities (ComputeProbabilities); the explore-alone procedure; and a procedure that is capable of executing the movement towards a neighbor and the movement towards the best site of a neighboring particle. The following subsections explain how each of these procedures was implemented and the parameters involved.

3.2 Initial positions

The initialization is performed by a randomized version of the greedy algorithm H(1) presented by Liu & Reeves [20]. Several state-of-the-art approaches use either H(1) or a procedure based on this heuristic to provide initial solutions [5, 31, 3], since Dong et al. [5] showed that methods based on the heuristic H(1), like the method H(x), provide better initial solutions than the other heuristics tested. The method H(x) creates x different solutions by placing a distinct job in the first position and applying the greedy criterion of H(1). However, H(x) is a greedy procedure: after the first job is placed, it adds the remaining jobs to a solution deterministically according to the greedy function. Therefore, by using H(x)

Algorithm 1 Discrete PSO
1: Initialize the set of particles Particles
2: repeat
3:   for each particle P with position Π_P,Curr and best visited site Π_P,Best do
4:     (v_a, v_b, v_c) ← ComputeProbabilities()
5:     Action ← random number ∈ [0, 1]
6:     if Action ≤ v_a then
7:       ExploreAlone(P)
8:     else
9:       Target ← random number ∈ {1, 2, ..., |Particles|}
10:      while Target = P do
11:        Target ← random number ∈ {1, 2, ..., |Particles|}
12:      end while
13:      if v_a < Action ≤ v_b then
14:        MoveTowards(Π_P, Π_Target,Curr)
15:      else
16:        MoveTowards(Π_P, Π_Target,Best)
17:      end if
18:    end if
19:  end for
20: until the time limit of 0.4·n·m seconds is reached
21: BestSol ← Π_1,Best
22: for P = 2 to |Particles| do
23:   if TFT(Π_P,Best) < TFT(BestSol) then
24:     BestSol ← Π_P,Best
25:   end if
26: end for
27: return BestSol
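The control flow of Algorithm 1 can be rendered compactly as below. This is an illustrative sketch, not the authors' implementation: the four procedures are passed in as assumed callables, and v_a and v_b are treated as the cumulative thresholds used in the comparisons of lines 6 and 13.

```python
import random, time

def discrete_pso(particles, compute_probabilities, explore_alone, move_towards, tft, n, m):
    """Sketch of Algorithm 1; helper signatures are assumptions, not the paper's code."""
    deadline = time.time() + 0.4 * n * m            # stop criterion: 0.4*n*m seconds
    while time.time() < deadline:
        for p in range(len(particles)):
            v_a, v_b, _ = compute_probabilities(p)  # cumulative thresholds: v_a <= v_b <= 1
            action = random.random()
            if action <= v_a:
                explore_alone(p)                    # action (a): explore on its own
            else:
                target = p
                while target == p:                  # pick a neighbor other than p itself
                    target = random.randrange(len(particles))
                if action <= v_b:
                    move_towards(p, particles[target].current)  # action (b): current site
                else:
                    move_towards(p, particles[target].best)     # action (c): best site
    return min((part.best for part in particles), key=tft)      # lines 21-27
```

With no-op helpers the function simply returns the best recorded site, mirroring lines 21 to 27.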

the number of different solutions is limited to the number of jobs (n) of the considered instance. The number of particles for an instance with 20 jobs, for example, would then be limited to 20. In order to avoid this limitation, a randomized version of H(1) was implemented. The randomized version of H(1) builds, at each step, a subset with the three fittest jobs according to the greedy criterion, and chooses one of the three jobs to be added to the solution. The job is chosen at random according to a uniform distribution. The greedy criterion of H(1) is explained next. The procedure H(1) weighs two criteria: the weighted sum of machine idle times (IT) and the artificial flowtime (AT). The idle time criterion for selecting job i when k jobs were already selected (IT_ik) is defined by Eq. (6), where w_rk is calculated with Eq. (7). The term max{C(i, r−1) − C(π_k, r), 0} stands for the idle time of machine r. The weights, as defined in Eq. (7), stress that idle times on early machines are undesirable, for they delay the remaining jobs. Such stress is stronger if there are many unscheduled jobs (small value of k), and drops when the number k of scheduled jobs increases.

IT_ik = Σ_{r=2}^{m} w_rk · max{C(i, r−1) − C(π_k, r), 0}    (6)

w_rk = m / (r + k(m−r)/(n−2))    (7)

The artificial flowtime (AT_ik) of candidate job i after k jobs were scheduled refers to the TFT value obtained after including the unscheduled job i, plus the completion time of an artificial job placed at the end of the sequence of jobs. The processing time of the artificial job on each machine is equal to the average processing time of all unscheduled jobs, excluding job i, on the corresponding machine. Both criteria, IT_ik and AT_ik, are combined according to Eq. (8).
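The randomized selection step described above, a restricted candidate list of the three fittest jobs under the greedy index of Eq. (8), might be sketched as follows. The function name and the score mapping are illustrative assumptions, not the paper's code.

```python
import random

def randomized_greedy_pick(scores, rng=random):
    """Randomized step added to H(1): choose uniformly among the three fittest
    candidates, i.e. those with the smallest greedy index f_ik of Eq. (8).

    `scores` maps candidate job -> f_ik.
    """
    best_three = sorted(scores, key=scores.get)[:3]  # restricted candidate list
    return rng.choice(best_three)                    # uniform choice among them
```

When fewer than three candidates remain, the slice simply keeps all of them, so the last jobs of the schedule are still placed correctly.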
f_ik = (n − k − 2)·IT_ik + AT_ik    (8)

3.3 Defining and updating the probabilities

The procedure ComputeProbabilities is responsible for defining the initial values of the probabilities and for updating them for each particle. Initially, it assigns probability ρ to the action of exploring the search space on its own, and probability (1 − ρ)/2 to each of the other actions, that is, moving towards a neighbor or towards a best site. Each time an action is performed, the value of the solution obtained is stored in a specific variable, i.e., when a particle P explores the search space on its own, producing a solution Π, the value TFT(Π) is stored in the variable TFT_P,Alone; similarly, if P moves towards a neighbor or towards a best site, the TFT values are stored in TFT_P,Neighbor and TFT_P,Site, respectively. Once particle P has performed each action at least once, ComputeProbabilities compares TFT_P,Alone, TFT_P,Neighbor and TFT_P,Site, and assigns probability ρ to the action corresponding to the best TFT value, and probability (1 − ρ)/2 to each of the other two possible actions. The value of ρ is a parameter to be tuned.

3.4 Exploring the search space on its own

Whenever it is decided that a particle P explores the search space on its own, the procedure ExploreAlone(P) is called. This procedure is based on the VNS4 procedure of [3]. The method VNS4 is a variable neighborhood search. It uses two neighborhood structures, job interchange and job insert. In the job interchange neighborhood, two jobs exchange

positions, i.e., given two solutions Π_A = {π_A1, π_A2, ..., π_An} and Π_B = {π_B1, π_B2, ..., π_Bn}, they are neighbors if there are indices s and t, s ≠ t, such that π_As = π_Bt, π_At = π_Bs and, for all c with c ≠ s, t, π_Ac = π_Bc. In the job insert neighborhood, a job is removed from its position in the solution and re-inserted in a different position, i.e., Π_B = {π_B1, π_B2, ..., π_Bn} is a neighboring solution of Π_A if there are indices s and t, s < t, such that one of the two possibilities is true: π_Bs = π_At and, for all c with s ≤ c < t, π_Bc = π_Ac+1; or π_Bt = π_As and, for all d with s < d ≤ t, π_Bd = π_Ad−1. VNS4 applies fourteen random insert moves on a solution as a Shake procedure, then it explores the job interchange neighborhood until no further improvement is possible. The first-improvement policy is used in the job interchange neighborhood, i.e., the current solution is replaced by the first neighbor whose TFT value is smaller than that of the former. After the job interchange neighborhood is fully explored, VNS4 starts using the job insert neighborhood, and as soon as a neighbor improving the current solution is found, VNS4 resumes the use of the interchange neighborhood. Different stop criteria were tested for the use of VNS4 within the proposed PSO. The experiment comparing all proposed stop criteria is detailed in Section 4. Algorithm 2 exhibits the pseudo-code of ExploreAlone(P). In line 1, Π′ becomes a copy of the best solution found by particle P. The procedure VNS4 is applied over Π′, line 2. After reaching a local optimum, Π′ is tested to verify whether its TFT value is different from the TFT values of the other particles at their current positions or best sites, line 3; if so, the current position of P is updated, line 4, otherwise the current position of P remains unchanged.
If there was an update in the current position of P, then it is tested whether this new site is better than the best site recorded for P (Π_P,Best), line 5; if so, Π_P,Best is updated, line 6.

Algorithm 2 ExploreAlone(P)
1: Π′ ← Π_P,Best
2: VNS4(Π′)
3: if TFT(Π′) is different from any TFT of the other recorded positions then
4:   Π_P,Curr ← Π′
5:   if TFT(Π_P,Curr) < TFT(Π_P,Best) then
6:     Π_P,Best ← Π_P,Curr
7:   end if
8: end if

3.5 Moving towards a different site

The last procedure discussed is MoveToSite, which is responsible for the movement of a particle towards the current site of a neighbor, or towards the best site a neighbor has ever achieved. Algorithm 3 shows the pseudo-code of MoveToSite. Initially, the solution Π′ is a copy of the current site of particle P (line 1). Within the loop from line 2 to 12, the procedure alternates between the use of a truncated path-relinking and VNS4, until no further improvement is possible. The VNS4 procedure is the same used in function ExploreAlone. In TruncatedPathRelinking, a path-relinking method is implemented. The path-relinking strategy was proposed by Glover [13]. Given two solutions, an initiating solution, Π_O, and a destiny solution, Π_Destiny, the path-relinking strategy generates intermediate solutions by inserting into Π_O properties of Π_Destiny [12]. The sequence of solutions Π_O, Π_1, Π_2, ..., Π_Destiny is called a path that links Π_O to Π_Destiny. The intermediate solutions are of interest because some of them may have a better objective function value (in this case, the TFT value) than both Π_O and Π_Destiny.
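The path-generation idea can be sketched with a job-insert transformation from Π_O toward Π_Destiny, truncated as soon as an intermediate solution beats both endpoints. This is one plausible reading of the description, not the authors' code, and all names are mine.

```python
def truncated_path_relinking(origin, destiny, tft):
    """Job-insert path from `origin` toward `destiny`, truncated as soon as an
    intermediate solution beats both endpoints."""
    bound = min(tft(origin), tft(destiny))
    current = list(origin)
    for p in range(len(destiny) - 1, -1, -1):  # fix positions from the back
        if current[p] != destiny[p]:
            current.remove(destiny[p])         # take the job from its current slot
            current.insert(p, destiny[p])      # re-insert it at its destiny slot
            if tft(current) < bound:           # truncation test
                return current
    return current
```

If no intermediate solution improves on both endpoints, the walk ends at the destiny solution itself.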

There are several types of path-relinking, e.g., forward path-relinking, backward path-relinking and back-and-forward path-relinking [25], among others. In forward path-relinking the initiating solution is the one with the worse objective function value. In backward path-relinking the solution with the worse value is the destiny solution. In back-and-forward path-relinking both trajectories, forward and backward, are examined. According to experiments reported in Ribeiro & Resende [25], backward path-relinking often outperforms the forward one, whereas back-and-forward path-relinking produces solutions at least as good as either the forward or the backward option and sometimes outperforms them; this motivated the choice of the path-relinking implemented in this work. During the tuning experiments, three strategies of inserting properties of Π_Destiny into Π_O were tested. The first one uses the job insert neighborhood: it takes π_Destiny,n, the last job of the destiny solution, and finds its current position in the initiating solution (Π_O). Then that job is re-inserted in the last position of Π_O. This procedure is repeated until Π_O is fully transformed into Π_Destiny, or until an intermediate solution Π_Q with a TFT value better than the TFT values of both Π_O and Π_Destiny is found, in which case Π_Q is returned. The second strategy uses the job interchange neighborhood instead of job insert. In the third option, adjacent jobs are swapped. All three strategies stop when an intermediate solution better than both the initiating solution and the destiny one is found.
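The elementary moves underlying these three strategies can be illustrated as below (0-indexed positions; a minimal sketch with names of my choosing, not the paper's code). The adjacent 2-swap of the third strategy is the special case of interchange with t = s + 1.

```python
def interchange(pi, s, t):
    """Job interchange: the jobs at positions s and t trade places."""
    out = list(pi)
    out[s], out[t] = out[t], out[s]
    return out

def insert(pi, s, t):
    """Job insert: remove the job at position s and re-insert it at position t."""
    out = list(pi)
    out.insert(t, out.pop(s))
    return out
```

Both moves return a new permutation and leave the input untouched, which keeps the initiating and destiny solutions intact while a path is being built.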
Algorithm 3 MoveToSite(Π_P,Curr, Π_Destiny)
1: Π′ ← Π_P,Curr
2: repeat
3:   Continue ← False
4:   Π′ ← TruncatedPathRelinking(Π′, Π_Destiny)
5:   if TFT(Π′) < TFT(Π_P,Curr) and TFT(Π′) < TFT(Π_Destiny) then
6:     VNS4(Π′)
7:     if TFT(Π′) is different from any TFT of the other recorded positions then
8:       Π_P,Curr ← Π′
9:       Continue ← True
10:    end if
11:  end if
12: until Continue = False
13: if TFT(Π_P,Curr) < TFT(Π_P,Best) then
14:   Π_P,Best ← Π_P,Curr
15: end if

4 Tuning experiments

This section details the experiments to tune the parameters of the algorithm proposed in Section 3. It is organized in six subsections. First, the methodology employed to tune the parameters is described. The four subsections that follow describe the experiments performed to tune each parameter. The last subsection presents an overall summary of the tuning experiments and recollects the final value chosen for each parameter.

4.1 Methodology

The methodology consists of comparative experiments where each parameter is examined independently [1, 6]. A subset of Taillard's instances was used in the experimentation as a representative sample of the testbed. The complete dataset contains 120 randomly generated instances [27]. The number of jobs of Taillard's benchmark is in the set {20, 50, 100, 200, 500} and the number of machines in {5, 10, 20}. The subset utilized for each comparative experiment reported here comprises the first five test cases of each group of 50 and 100 jobs of Taillard's dataset, which makes a total of 30 instances. Twenty independent executions of each algorithmic version were performed for each instance, and each execution used a different seed value for the random number generator. The experiments were executed on a Pentium Core 2 Quad 2.4GHz (Q6600) with 1GB RAM. Implementations were done in C++ using the GNU C++ compiler with the -O2 flag. Results obtained during the trials were transformed into the relative percentage deviation (RPD), calculated with Eq. (9), wherein Best refers to the best solution found during experimentation for a given instance, and Heuristic refers to a solution obtained by one algorithm on the same instance. Because RPD is a dimensionless value resulting from a normalization procedure, the RPDs from different instances can be compared, treating RPD as a response variable, similar to what is considered in [26].

RPD(%) = 100 · (Heuristic − Best) / Best    (9)

The Shapiro-Wilk test [2] indicated that non-normal distributions were obtained in several tests, with significance levels below the 0.05 threshold. Therefore, the Kruskal-Wallis test [2], a non-parametric counterpart of ANOVA, was used to test whether the results were statistically different.
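Eq. (9) is simple, but a code form pins down the sign convention: an RPD of 0% means the run matched the best known value, and positive values measure how far above it the run landed.

```python
def rpd(heuristic, best):
    """Relative percentage deviation of Eq. (9)."""
    return 100.0 * (heuristic - best) / best
```

For example, a trial with TFT 110 against a best known value of 100 deviates by 10%.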
If the result of Kruskal-Wallis indicates differences between the compared algorithmic versions with a significance level below or equal to 0.05 (5%), the median RPD value, over the 20 independent executions on the 30 tested instances, was used to ascertain which version is the best. Since the parameters were examined independently, they needed to have initial values. These values are listed below:

- The number of particles used by PSO is initially equal to 15;
- The initial value of ρ is 0.60 (60%);
- The path-relinking operator uses the 2-swap as neighborhood structure.

The remaining four subsections refer to experiments related to: 1. the stop criterion of VNS4; 2. the number of particles; 3. the value of ρ; 4. the neighborhood structure used by path-relinking.

4.2 Number of VNS iterations within PSO

The first experiment performed concerns the stop criterion of VNS4. In the work of Costa et al. [3], it is conjectured that a hybrid genetic algorithm with VNS4 had poor performance on instances with 100 jobs or more due to the time spent by VNS4 to find a local optimum. The second stop criterion under consideration for VNS4 is to constrain the number of improvements over a single solution, thereby stopping the procedure before it reaches a local optimum common to both neighborhoods. This criterion is inspired by the work of Xu et al. [31], where a hybrid genetic algorithm limited the number of improvements made by a VNS approach (named E-VNS). For instance, if the limit is set at 60 improvements, the VNS is interrupted as soon as it improves a solution for the 60th time. In total, six versions of VNS4 were tested: five constrained in the number of improvements they can make over a solution, and the original unconstrained version. The limits of the constrained versions ranged from 60 to 100 improvements over a given solution. The results of the experiments are summarized in Table 1. According to the Kruskal-Wallis test there are differences between the implementations, with a p-value lower than 10^−3; these differences favor the unconstrained version, as it is the one with the lowest median RPD value. Therefore, the unconstrained version of VNS4 was adopted.

Table 1: Median RPD values for different stop criteria for VNS4 within PSO (limits of 60 to 100 improvements versus the unconstrained version).

4.3 Number of particles

The second parameter tuned was the size of the set of particles, that is, the number of particles present in the PSO.
As stated in Section 3, the initial position of a particle is defined by a randomized version of H(1), and the initial best site of each particle is the same as its initial position. Nine different sizes for the set of particles were tested, from ten particles up to fifty particles. The median values obtained in the experiments are recorded in Table 2. According to the Kruskal-Wallis test, there are significant differences between the performances (p-value lower than 10^−3). The PSO version with ten particles exhibited the best performance among the tested options; therefore the number of particles was reduced to ten.

4.4 The value of ρ

The parameter ρ is used in the procedure ComputeProbabilities to define the probability assigned to each possible action of a given particle. During the first two tuning experiments this value was fixed at 0.60 (60%). The third experiment aimed at tuning the value of ρ. Six values for ρ were

Table 2: Median RPD values for different sizes of the set of particles (ten to fifty particles).

tested, ranging from 40% to 90%. The experiments were conducted according to the methodology described in Subsection 4.1; the median RPD values obtained for the tested values of ρ are summarized in Table 3. The Kruskal-Wallis test did not point out significant differences between the results obtained in the experiments. This fact indicates that the value of ρ did not influence the performance of the implemented PSO. Therefore the value of ρ remained set to 60%.

Table 3: Median RPD values for different values of ρ (40%, 50%, 60%, 70%, 80%, 90%).

4.5 Neighborhood structure of path-relinking

The last tuning experiment regards the neighborhood structure used for path-relinking. Three neighborhood structures were under consideration: 2-swap, job insert and job interchange. The Kruskal-Wallis test indicated that no significant difference of performance was detected between the tested neighborhoods. Therefore, the neighborhood structure initially used by the path-relinking operator (2-swap) was maintained. Table 4 reports the median RPD values concerning this experiment.

Table 4: Median RPD values associated with the use of different neighborhood structures within path-relinking (2-swap, job insert, job interchange).

4.6 Overall summary

Four distinct parameters of the PSO were examined independently. In each experiment, diverse values for each parameter were compared. According to a non-parametric statistical test, two of the four conducted experiments exhibited significant evidence of performance differences between the tested values (the stop criterion of VNS4 and the number of particles), whereas for the other two parameters, ρ and the neighborhood structure used during path-relinking, no significant difference was indicated by the test. The final configuration of the PSO parameters is as follows: VNS4 stops only when it reaches a local optimum common to both the job interchange and the job insert neighborhoods; the number of particles is set to ten; the value of ρ remained at 0.60 (60%); and the 2-swap neighborhood structure is the one used to perform path-relinking.

5 Comparison with methods from literature

Experiments were performed comparing the performance of the proposed discrete PSO with the performance of two state-of-the-art methods, namely VNS4 [3] and the asynchronous genetic algorithm, AGA [31]. The authors of [31] kindly provided the source code of AGA. The comparison is done by applying the Kruskal-Wallis statistical test to the results. There are also comparisons of the best solutions found by the proposed approach with the best solutions from the state-of-the-art. Each best known result was tagged to identify the method that found it. The tags are: EDA-VNS [16], VNS [16], HGLS [31], SAwGE [4], hDDE [30], DABC [30], PHEDA [32], AGA [31] and VNS4 [3]. The remainder of this section is organized as follows: first the method AGA is described. The method VNS4 was detailed in Section 3.4; so, after Section 5.1, there is a subsection detailing the experiment, its methodology, summaries of the results, and the statistical analysis of the data.
5.1 Asynchronous genetic algorithm (AGA)

The method AGA is a hybrid genetic algorithm with a population of 40 solutions. One of them is the best solution from the H(n/m) heuristic, in which n/m solutions are created using the H criterion and improved by interchange local search; the other 39 are randomly generated. At each iteration every solution undergoes an E-VNS procedure (a specific VNS approach), then crossover, followed by another execution of E-VNS. E-VNS uses both the job insert and job interchange neighborhoods. When using job insert, the job to be re-inserted into another position, π_i, is randomly selected, and so is the position where π_i will be re-inserted. If such an attempt improves the current solution, the new solution is accepted and a new iteration of job insert occurs. E-VNS executes 50 iterations of job insert. After that, 50 iterations using job interchange occur; again, the jobs to be interchanged are randomly selected. If any improvement is achieved while using the interchange neighborhood, the solution is accepted and E-VNS resumes the job insert local search. The number of times job insert can be resumed is limited by a randomly selected value. Each time E-VNS is called, a new limit is randomly picked between 10 and 60; this allows E-VNS to terminate with a solution that is not a local optimum.
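The E-VNS control flow described above can be sketched as follows. The objective function `tft` and the permutation encoding are assumptions; only the move counts (50 insert attempts, then 50 interchange attempts), the accept-if-improving rule and the random resume limit in [10, 60] follow the description.

```python
import random

def e_vns(schedule, tft, rng=random):
    """Sketch of E-VNS: `tft` evaluates the total flowtime of a permutation."""
    best = list(schedule)
    best_cost = tft(best)
    resumes_left = rng.randint(10, 60)  # fresh limit on every E-VNS call

    while True:
        # Phase 1: 50 random job-insert attempts, accepting improving moves.
        for _ in range(50):
            cand = list(best)
            job = cand.pop(rng.randrange(len(cand)))
            cand.insert(rng.randrange(len(cand) + 1), job)
            cost = tft(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
        # Phase 2: 50 random job-interchange attempts.
        improved = False
        for _ in range(50):
            cand = list(best)
            i, j = rng.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]
            cost = tft(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
                improved = True
        if not improved or resumes_left == 0:
            break  # may stop short of a local optimum, as noted in the text
        resumes_left -= 1
    return best
```

Because the resume limit is redrawn on every call, two calls of E-VNS on the same solution may explore very different amounts of the neighborhood.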

Results and statistical analysis

The experiments were executed on a Pentium Core 2 Quad 2.4 GHz (Q6600) with 1 GB RAM. The three methods, discrete PSO, VNS4 and AGA, were encoded in C++. The experiments were performed over all 120 instances of Taillard's benchmark [27]. Thirty independent executions of each method were performed for each instance. The three methods share the same stop criterion, i.e. a time limit of 0.4·n·m seconds. The results on instances with n = 20 jobs are not summarized in tables because most of the trials returned the best known results for them. For all 30 instances with n = 20 jobs the proposed PSO method found the best known solution on all of its independent executions. The AGA heuristic found the best known solution for 29 instances on all of its trials, and failed to do so for one instance¹. The procedure VNS4 converged to the best known solutions on all of its trials in 20 cases, and failed to do so in 10 cases². For the remaining 90 instances, a summary of the results, consisting of the minimum, average, maximum and standard deviation values obtained by each algorithm, is given in Tables 5 to 7. Each line of these tables reports the instance name (Instance) and the best TFT value reported in the literature for the instance (Best). The next column reports the tag of the state-of-the-art method which achieved the best known value (Algorithm); this column is followed by the minimum (Min), average (Ave), maximum (Max) and standard deviation (S.D.) values obtained by each method after 30 independent trials on the corresponding instance. Solutions presenting a TFT value equal to or better than the best known solution reported in the literature are indicated by a star symbol. The best minimum and average values among the three tested methods are indicated in bold face.
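The shared stop criterion can be sketched as a simple wall-clock budget. The `optimize_step` callable is hypothetical; the only element taken from the text is the 0.4·n·m second limit.

```python
import time

def run_with_budget(optimize_step, n_jobs, n_machines):
    """Call `optimize_step` repeatedly until the 0.4*n*m second budget expires.

    `optimize_step` is a hypothetical callable returning the best total
    flowtime found in one optimization step; the best value seen is kept.
    """
    deadline = time.monotonic() + 0.4 * n_jobs * n_machines
    best = float("inf")
    while time.monotonic() < deadline:
        best = min(best, optimize_step())
    return best
```

Tying the budget to n·m gives larger instances proportionally more time, which keeps the comparison between the three methods fair across instance sizes.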
For the 30 instances with n = 50 jobs (Table 5), among the tested methods the proposed discrete PSO found the best minimum value on 24 instances and achieved the best average value on 27 instances. The method VNS4 found the best minimum value in 5 cases and achieved the best average value in 2 cases. In the set of instances with n = 100 jobs (Table 6), the discrete PSO found the best minimum value in 23 cases and exhibited the best average in 26 cases. The AGA heuristic achieved the best minimum value in 1 case (ta082). VNS4 achieved the best minimum in 6 cases and the best average in 4 cases. For the 30 instances with n ≥ 200 jobs (Table 7), the discrete PSO achieved the best minimum solution on 7 instances and the best average on 8 instances. VNS4 exhibited the best minimum value in 23 cases and the best average in 22 cases. Comparing the minimum values obtained by the three tested methods with the best known results, the discrete PSO found 5 novel solutions with TFT values better than those reported in the problem literature, all of them for instances with n = 500 jobs (supposedly the hardest ones). The discrete PSO also found 10 solutions with TFT values equal to the best reported in the literature. The method AGA achieved 2 solutions with TFT values equal to the best known marks from the literature for n = 50 jobs. VNS4 found 7 such solutions: 1 for n = 200 jobs and 6 in cases with n = 500 jobs. In order to conduct the statistical analysis, the results from the 90 instances were converted into RPD values, using the same methodology as in Section 4.1, and the statistical tests were applied to the transformed data. A statistical summary of the data is presented in Table 8. It contains the minimum RPD value, first quartile, median, third quartile, maximum, average and standard

¹ The instance named ta
² The instances named ta001, ta003, ta014, ta016, ta018, ta021, ta022, ta029 and ta030.

Table 5: Summaries of results for instances with n = 50 jobs: best known TFT value (Best) and the algorithm that found it, followed by the minimum (Min), average (Avg), maximum (Max) and standard deviation (S.D.) of discrete PSO, AGA and VNS4 on each instance. [numeric values lost in extraction]

Table 6: Summaries of results for instances with n = 100 jobs: best known TFT value (Best) and the algorithm that found it, followed by the minimum (Min), average (Avg), maximum (Max) and standard deviation (S.D.) of discrete PSO, AGA and VNS4 on each instance. [numeric values lost in extraction]

Table 7: Summaries of results for instances with n ∈ {200, 500}: best known TFT value (Best) and the algorithm that found it, followed by the minimum (Min), average (Avg), maximum (Max) and standard deviation (S.D.) of discrete PSO, AGA and VNS4 on each instance. [numeric values lost in extraction]

deviation values measured for each method. The analysis disregarded only the thirty instances used to tune the parameters of the PSO, as they would constitute a bias in favor of the PSO. The Shapiro-Wilk normality test indicated, with associated p-values lower than 10⁻³, that the results of each algorithm were not normally distributed. The comparison analysis therefore used the Kruskal-Wallis statistical test. The Kruskal-Wallis test returned a p-value lower than 10⁻³, indicating significant differences between the methods and favoring the discrete PSO approach, as it is the one with the lowest RPD median, indicated in bold face in Table 8. The method VNS4 is ranked second, as it presents an RPD median lower than the one produced by AGA.

Table 8: Statistical summary of the RPD values obtained by each method: minimum, first quartile, median, average, third quartile, maximum and standard deviation for discrete PSO, AGA and VNS4. [numeric values lost in extraction]

6 Conclusions

In this paper a discrete Particle Swarm Optimization method (PSO) was proposed to minimize the total flowtime criterion in a permutational flowshop scheduling environment. The proposed method uses operators that combine VNS and truncated path-relinking procedures to explore the search space. The proposed discrete PSO was compared to two state-of-the-art approaches, VNS4 [3] and AGA [31]. The experiment was conducted over the 120 instances of Taillard's benchmark set [27]. During the trials, five novel solutions for Taillard's dataset were found by the proposed discrete PSO. The Kruskal-Wallis test was used to verify whether there were significant differences between the examined methods. The results of that test indicated that there are differences between the methods and favored the proposed PSO over the VNS4 and AGA methods: when the RPD measurements of the examined methods are compared, the discrete PSO has the lowest median value.
Acknowledgements

The authors wish to thank Xu, Xu & Gu, who kindly provided the source code of AGA used in this study. This work was partially supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq - Brasil, under Grants /2009-0, / and /


More information

Multi-objective Quadratic Assignment Problem instances generator with a known optimum solution

Multi-objective Quadratic Assignment Problem instances generator with a known optimum solution Multi-objective Quadratic Assignment Problem instances generator with a known optimum solution Mădălina M. Drugan Artificial Intelligence lab, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels,

More information

ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS

ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS J. of Electromagn. Waves and Appl., Vol. 23, 711 721, 2009 ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS L. Zhang, F. Yang, and

More information

Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms

Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn

More information

Fuzzy adaptive catfish particle swarm optimization

Fuzzy adaptive catfish particle swarm optimization ORIGINAL RESEARCH Fuzzy adaptive catfish particle swarm optimization Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang. Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung, Taiwan

More information

Algorithm Design Strategies V

Algorithm Design Strategies V Algorithm Design Strategies V Joaquim Madeira Version 0.0 October 2016 U. Aveiro, October 2016 1 Overview The 0-1 Knapsack Problem Revisited The Fractional Knapsack Problem Greedy Algorithms Example Coin

More information

A new ILS algorithm for parallel machine scheduling problems

A new ILS algorithm for parallel machine scheduling problems J Intell Manuf (2006) 17:609 619 DOI 10.1007/s10845-006-0032-2 A new ILS algorithm for parallel machine scheduling problems Lixin Tang Jiaxiang Luo Received: April 2005 / Accepted: January 2006 Springer

More information

NILS: a Neutrality-based Iterated Local Search and its application to Flowshop Scheduling

NILS: a Neutrality-based Iterated Local Search and its application to Flowshop Scheduling NILS: a Neutrality-based Iterated Local Search and its application to Flowshop Scheduling Marie-Eleonore Marmion, Clarisse Dhaenens, Laetitia Jourdan, Arnaud Liefooghe, Sébastien Verel To cite this version:

More information

Meta heuristic algorithms for parallel identical machines scheduling problem with weighted late work criterion and common due date

Meta heuristic algorithms for parallel identical machines scheduling problem with weighted late work criterion and common due date DOI 10.1186/s40064-015-1559-5 RESEARCH Open Access Meta heuristic algorithms for parallel identical machines scheduling problem with weighted late work criterion and common due date Zhenzhen Xu, Yongxing

More information

Application of Teaching Learning Based Optimization for Size and Location Determination of Distributed Generation in Radial Distribution System.

Application of Teaching Learning Based Optimization for Size and Location Determination of Distributed Generation in Radial Distribution System. Application of Teaching Learning Based Optimization for Size and Location Determination of Distributed Generation in Radial Distribution System. Khyati Mistry Electrical Engineering Department. Sardar

More information

Zebo Peng Embedded Systems Laboratory IDA, Linköping University

Zebo Peng Embedded Systems Laboratory IDA, Linköping University TDTS 01 Lecture 8 Optimization Heuristics for Synthesis Zebo Peng Embedded Systems Laboratory IDA, Linköping University Lecture 8 Optimization problems Heuristic techniques Simulated annealing Genetic

More information

Part B" Ants (Natural and Artificial)! Langton s Vants" (Virtual Ants)! Vants! Example! Time Reversibility!

Part B Ants (Natural and Artificial)! Langton s Vants (Virtual Ants)! Vants! Example! Time Reversibility! Part B" Ants (Natural and Artificial)! Langton s Vants" (Virtual Ants)! 11/14/08! 1! 11/14/08! 2! Vants!! Square grid!! Squares can be black or white!! Vants can face N, S, E, W!! Behavioral rule:!! take

More information

The single machine earliness and tardiness scheduling problem: lower bounds and a branch-and-bound algorithm*

The single machine earliness and tardiness scheduling problem: lower bounds and a branch-and-bound algorithm* Volume 29, N. 2, pp. 107 124, 2010 Copyright 2010 SBMAC ISSN 0101-8205 www.scielo.br/cam The single machine earliness and tardiness scheduling problem: lower bounds and a branch-and-bound algorithm* DÉBORA

More information

Variable Objective Search

Variable Objective Search Variable Objective Search Sergiy Butenko, Oleksandra Yezerska, and Balabhaskar Balasundaram Abstract This paper introduces the variable objective search framework for combinatorial optimization. The method

More information

Research Article Effect of Population Structures on Quantum-Inspired Evolutionary Algorithm

Research Article Effect of Population Structures on Quantum-Inspired Evolutionary Algorithm Applied Computational Intelligence and So Computing, Article ID 976202, 22 pages http://dx.doi.org/10.1155/2014/976202 Research Article Effect of Population Structures on Quantum-Inspired Evolutionary

More information

Havrda and Charvat Entropy Based Genetic Algorithm for Traveling Salesman Problem

Havrda and Charvat Entropy Based Genetic Algorithm for Traveling Salesman Problem 3 IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.5, May 008 Havrda and Charvat Entropy Based Genetic Algorithm for Traveling Salesman Problem Baljit Singh, Arjan Singh

More information

Overview. Optimization. Easy optimization problems. Monte Carlo for Optimization. 1. Survey MC ideas for optimization: (a) Multistart

Overview. Optimization. Easy optimization problems. Monte Carlo for Optimization. 1. Survey MC ideas for optimization: (a) Multistart Monte Carlo for Optimization Overview 1 Survey MC ideas for optimization: (a) Multistart Art Owen, Lingyu Chen, Jorge Picazo (b) Stochastic approximation (c) Simulated annealing Stanford University Intel

More information

SIMU L TED ATED ANNEA L NG ING

SIMU L TED ATED ANNEA L NG ING SIMULATED ANNEALING Fundamental Concept Motivation by an analogy to the statistical mechanics of annealing in solids. => to coerce a solid (i.e., in a poor, unordered state) into a low energy thermodynamic

More information

Firefly algorithm in optimization of queueing systems

Firefly algorithm in optimization of queueing systems BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES, Vol. 60, No. 2, 2012 DOI: 10.2478/v10175-012-0049-y VARIA Firefly algorithm in optimization of queueing systems J. KWIECIEŃ and B. FILIPOWICZ

More information

3.4 Relaxations and bounds

3.4 Relaxations and bounds 3.4 Relaxations and bounds Consider a generic Discrete Optimization problem z = min{c(x) : x X} with an optimal solution x X. In general, the algorithms generate not only a decreasing sequence of upper

More information

Local Search & Optimization

Local Search & Optimization Local Search & Optimization CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2018 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 4 Some

More information

SPT is Optimally Competitive for Uniprocessor Flow

SPT is Optimally Competitive for Uniprocessor Flow SPT is Optimally Competitive for Uniprocessor Flow David P. Bunde Abstract We show that the Shortest Processing Time (SPT) algorithm is ( + 1)/2-competitive for nonpreemptive uniprocessor total flow time

More information

Improving Search Space Exploration and Exploitation with the Cross-Entropy Method and the Evolutionary Particle Swarm Optimization

Improving Search Space Exploration and Exploitation with the Cross-Entropy Method and the Evolutionary Particle Swarm Optimization 1 Improving Search Space Exploration and Exploitation with the Cross-Entropy Method and the Evolutionary Particle Swarm Optimization Leonel Carvalho, Vladimiro Miranda, Armando Leite da Silva, Carolina

More information

Capacitor Placement for Economical Electrical Systems using Ant Colony Search Algorithm

Capacitor Placement for Economical Electrical Systems using Ant Colony Search Algorithm Capacitor Placement for Economical Electrical Systems using Ant Colony Search Algorithm Bharat Solanki Abstract The optimal capacitor placement problem involves determination of the location, number, type

More information

Hybridizing the Cross Entropy Method: An Application to the Max-Cut Problem

Hybridizing the Cross Entropy Method: An Application to the Max-Cut Problem Hybridizing the Cross Entropy Method: An Application to the Max-Cut Problem MANUEL LAGUNA Leeds School of Business, University of Colorado at Boulder, USA laguna@colorado.edu ABRAHAM DUARTE Departamento

More information

Chapter 8: Introduction to Evolutionary Computation

Chapter 8: Introduction to Evolutionary Computation Computational Intelligence: Second Edition Contents Some Theories about Evolution Evolution is an optimization process: the aim is to improve the ability of an organism to survive in dynamically changing

More information

Lin-Kernighan Heuristic. Simulated Annealing

Lin-Kernighan Heuristic. Simulated Annealing DM63 HEURISTICS FOR COMBINATORIAL OPTIMIZATION Lecture 6 Lin-Kernighan Heuristic. Simulated Annealing Marco Chiarandini Outline 1. Competition 2. Variable Depth Search 3. Simulated Annealing DM63 Heuristics

More information

Flow Shop and Job Shop Models

Flow Shop and Job Shop Models Outline DM87 SCHEDULING, TIMETABLING AND ROUTING Lecture 11 Flow Shop and Job Shop Models 1. Flow Shop 2. Job Shop Marco Chiarandini DM87 Scheduling, Timetabling and Routing 2 Outline Resume Permutation

More information

Restarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census

Restarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census Restarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census Anton V. Eremeev 1,2 1 Dostoevsky Omsk State University, Omsk, Russia 2 The Institute of Scientific Information for Social Sciences

More information

An Effective Chromosome Representation for Evolving Flexible Job Shop Schedules

An Effective Chromosome Representation for Evolving Flexible Job Shop Schedules An Effective Chromosome Representation for Evolving Flexible Job Shop Schedules Joc Cing Tay and Djoko Wibowo Intelligent Systems Lab Nanyang Technological University asjctay@ntuedusg Abstract As the Flexible

More information

The Pickup and Delivery Problem: a Many-objective Analysis

The Pickup and Delivery Problem: a Many-objective Analysis The Pickup and Delivery Problem: a Many-objective Analysis Abel García-Nájera and Antonio López-Jaimes Universidad Autónoma Metropolitana, Unidad Cuajimalpa, Departamento de Matemáticas Aplicadas y Sistemas,

More information

Evolutionary computation

Evolutionary computation Evolutionary computation Andrea Roli andrea.roli@unibo.it DEIS Alma Mater Studiorum Università di Bologna Evolutionary computation p. 1 Evolutionary Computation Evolutionary computation p. 2 Evolutionary

More information