Evolutionary Multitasking in Permutation-Based Combinatorial Optimization Problems: Realization with TSP, QAP, LOP, and JSP


Yuan Yuan, Yew-Soon Ong, Abhishek Gupta, Puay Siew Tan and Hua Xu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Singapore Institute of Manufacturing Technology, Singapore
Department of Computer Science and Technology, Tsinghua University, Beijing, China

Abstract—Evolutionary computation (EC) has gained increasing popularity in dealing with permutation-based combinatorial optimization problems (PCOPs). Traditionally, EC focuses on solving a single optimization task at a time. However, in complex multi-echelon supply chain networks (SCNs), various kinds of PCOPs usually exist at the same time, e.g., the traveling salesman problem (TSP), the job-shop scheduling problem (JSP), etc. It is therefore desirable to solve several PCOPs at once, both effectively and efficiently. Very recently, a new paradigm in EC, namely multifactorial optimization (MFO), has been introduced to explore the potential of evolutionary multitasking, which can serve the purpose of simultaneously optimizing multiple PCOPs in SCNs. In this paper, the evolutionary multitasking of PCOPs is studied. In particular, based on a recently proposed multitasking engine known as the multifactorial evolutionary algorithm (MFEA), two novel mechanisms, namely a new unified representation and a new survivor selection procedure, are introduced to better adapt to PCOPs. Experimental results obtained on well-known benchmark problems not only show the benefits of the two new mechanisms but also verify the promise of evolutionary multitasking for PCOPs. In addition, results on a test case involving four optimization tasks demonstrate the potential scalability of evolutionary multitasking to many-task environments.

I. INTRODUCTION

Evolutionary algorithms (EAs) are a class of population-based metaheuristics inspired by biological evolution. Owing to the simplicity of the approach, its freedom from assumptions about the fitness landscape, its robust response to changing situations, and many other facets, EAs have been widely applied to various kinds of optimization problems, e.g., single-objective optimization [1], [2], multi- or many-objective optimization [3]–[5], and dynamic optimization [6]. Although EAs have become popular in many fields of science and engineering, it is well known that pure EAs usually perform unsatisfactorily on hard problems, as the no-free-lunch theorem [7] implies. Hence, it is common practice to incorporate knowledge about the underlying optimization problem into the evolutionary search so as to enhance the performance of EAs. Memetic algorithms (MAs) [8] realize this by integrating the global search of EAs with problem-dependent local search (LS), and have achieved great success on combinatorial optimization problems (COPs) [9]–[12]. The main idea of MAs is to exploit knowledge of the current problem, whereas some other research efforts [13]–[16] exploit problem knowledge in EAs in a different way: their goal is to use knowledge extracted from one problem to guide the search of EAs on another, similar problem. Such kinds of knowledge transfer in EAs are undoubtedly meaningful and promising. However, how to transfer and what knowledge to transfer are among the non-trivial challenges of such techniques, requiring careful design and a deep understanding of problem characteristics.
Moreover, another limitation of these techniques is that they are usually applicable only within the same problem domain. Very recently, a new paradigm in evolutionary computation (EC), namely multifactorial optimization (MFO) [17], has been introduced to explore the potential of evolutionary multitasking. Unlike EC paradigms that perform explicit transfer of knowledge from one task to another similar one [13]–[16], evolutionary multitasking encompasses multiple optimization tasks at a time and facilitates the implicit transfer of knowledge across diverse problems via simple genetic transfer, thus achieving intra-domain and/or cross-domain optimization in a seamless manner. Generally, there is no need to have a concrete view of the relationships between the optimization tasks before multitasking, which makes the paradigm easy to use. Once a unified representation for the tasks to be optimized is identified, these tasks can be tackled directly using a multitasking solver, where information sharing occurs in the unified search space. The multifactorial evolutionary algorithm (MFEA) [17] is one such recently proposed solver, which has been found to harness the genetic complementarity between tasks. In this paper, extending from the first study on evolutionary multitasking [17], our interest and focus is on the class of permutation-based COPs (PCOPs) [18], [19], which arise widely in many real-world scenarios such as supply chain networks (SCNs). The contributions of the present work are delineated as follows:

1) A new unified representation scheme is introduced for the evolutionary multitasking of PCOPs, leading to higher search efficiency;
2) A new survivor selection procedure is introduced for MFEA, which significantly reduces the computational complexity of the search;
3) The performance of MFEA on more than three tasks is investigated for the first time;
4) Some new insights about evolutionary multitasking are derived from the experimental results obtained on PCOPs.

The rest of this paper is organized as follows. Section II introduces the background knowledge of this paper. Section III describes the details of the algorithms considered in the experimental studies. Section IV provides the proof-of-concept computational results and discussions. Finally, Section V concludes this paper.

II. PRELIMINARIES

A. Multifactorial Optimization

MFO formally describes an evolutionary multitasking environment in which K optimization tasks are to be performed simultaneously. Without loss of generality, all tasks are assumed to be minimization problems. For the j-th task T_j, its objective function is defined as f_j : Ω_j → R, where Ω_j is the search space of T_j. The aim of MFO is to find {x_1*, x_2*, ..., x_K*}, where x_j* = argmin_{x ∈ Ω_j} f_j(x). In MFO, all individuals in a given population P are encoded in a unified search space encompassing Ω_1, Ω_2, ..., Ω_K. Thus each individual p_i, i ∈ {1, 2, ..., |P|}, in the population P can be decoded into a task-specific solution for each of the K optimization tasks. Based on the population P, the following definitions associated with p_i are provided.

Definition 1: The factorial rank r_j^i of p_i on task T_j, where j ∈ {1, 2, ..., K}, is the index of p_i in the list of population members sorted in ascending order with respect to f_j.

Definition 2: The scalar fitness φ_i of p_i is given by φ_i = 1 / min_{j ∈ {1,...,K}} r_j^i.

Definition 3: The skill factor τ_i of p_i is given by τ_i = argmin_{j ∈ {1,...,K}} r_j^i.

According to the scalar fitness, we can readily compare population members in a multitasking environment, which lays the foundation for the design of EAs for multitasking.

B. Multifactorial Evolutionary Algorithm

MFEA is inspired by the bio-cultural models of multifactorial inheritance, which serve the MFO purpose well. The basic procedure of MFEA is shown in Algorithm 1. MFEA has four core features, i.e., unified representation, assortative mating, selective evaluation, and scalar-fitness-based selection.

Algorithm 1 Basic Procedure of MFEA
1: Randomly generate the initial population P with size N.
2: Evaluate each individual on all K optimization tasks.
3: Compute the skill factor (τ) of each individual.
4: while stopping condition is not met do
5:   Apply genetic operators on P to get the offspring population P′.
6:   Evaluate the individuals in P′ for selected optimization tasks only.
7:   Concatenate P and P′ to get the intermediate population Q.
8:   Update the scalar fitness (φ) and skill factor (τ) of each individual in Q.
9:   Select the N best individuals (in terms of scalar fitness) from Q to form the next population P.
10: end while

The random key (RK) [20] is used as the unified representation in MFEA. Suppose the dimension of the j-th task T_j is D_j; then the RK representation of an individual in MFEA can be denoted as y = (y_1, y_2, ..., y_D), where D = max_{j ∈ {1,...,K}} D_j and the j-th RK value y_j ∈ [0, 1]. When addressing the task T_j, only the first D_j RK values in y are referred to.
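For illustration only, the following minimal Python sketch (ours, not part of [17] or [20]; function names are hypothetical) shows the usual way a random-key vector is interpreted as a permutation for a task of dimension D_j:

```python
import numpy as np

def decode_random_key(y, d_j):
    """Decode a random-key vector y (values in [0, 1]) into a permutation
    of {0, ..., d_j - 1} for a task of dimension d_j <= len(y).

    Only the first d_j keys are used; sorting their values yields the
    relative order, which is read as the permutation."""
    keys = np.asarray(y[:d_j])
    # argsort gives the positions of the keys in ascending order of value,
    # i.e., the position holding the smallest key comes first in the sequence.
    return list(np.argsort(keys))

# Example: one individual in the unified space serves two tasks of sizes 3 and 5.
y = [0.71, 0.12, 0.54, 0.93, 0.30]
print(decode_random_key(y, 3))  # permutation for the 3-dimensional task
print(decode_random_key(y, 5))  # permutation for the 5-dimensional task
```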
This sort of unification not only avoids the challenge associated with the curse of dimensionality but also encourages the discovery and implicit transfer of useful genetic material from one task to another.

The principle of assortative mating (i.e., Step 5 of Algorithm 1) is that individuals prefer to mate with those belonging to the same cultural background. The procedure is described in Algorithm 2, where rmp is the random mating probability and rand is a random number in [0, 1].

Algorithm 2 Assortative Mating
1: Randomly select two parents p_i and p_j from P.
2: if τ_i = τ_j or rand < rmp then
3:   Perform crossover on p_i and p_j to get two offspring individuals c_i and c_j.
4: else
5:   Perform mutation on p_i to get an offspring c_i.
6:   Perform mutation on p_j to get an offspring c_j.
7: end if

The selective evaluation (i.e., Step 6 of Algorithm 1) means that each offspring is evaluated on only one task instead of every task, making MFEA computationally practical. Moreover, the mechanism is very simple to comprehend and implement: the offspring is evaluated only on the task T_τ, where τ is the skill factor of its parent (if the offspring has two parents, one of them is chosen at random). In addition, it should be noted that evaluation in MFEA refers to task evaluation, i.e., LS is executed straight after the objective function evaluation.

The scalar-fitness-based selection (i.e., Step 9 of Algorithm 1) follows an elitist strategy. Since every solution in the offspring population P′ is evaluated on only one task, its objective values with respect to all unevaluated tasks are artificially set to ∞. Therefore, according to Definitions 1 and 2, we can easily assign the scalar fitness and skill factor to the solutions in Q (i.e., Step 8 of Algorithm 1) and then perform an elitist selection in terms of scalar fitness.
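As an illustration of this bookkeeping (our own sketch, not the authors' code), the following Python snippet assigns factorial ranks, scalar fitness, and skill factors from an objective matrix in which unevaluated entries are stored as infinity, exactly as described above; the function names are hypothetical:

```python
import numpy as np

def scalar_fitness_and_skill(objectives):
    """objectives: (num_individuals, K) array; objectives[i, j] is f_j of
    individual i, or np.inf if i was never evaluated on task j.
    Returns (scalar_fitness, skill_factor) following Definitions 1-3."""
    num_ind, K = objectives.shape
    # Factorial rank: 1-based position of each individual when sorted by f_j.
    ranks = np.empty((num_ind, K), dtype=int)
    for j in range(K):
        order = np.argsort(objectives[:, j], kind="stable")
        ranks[order, j] = np.arange(1, num_ind + 1)
    best_rank = ranks.min(axis=1)           # best (smallest) rank over all tasks
    scalar_fitness = 1.0 / best_rank        # Definition 2
    skill_factor = ranks.argmin(axis=1)     # Definition 3 (0-based task index)
    return scalar_fitness, skill_factor

def select_survivors(objectives, N):
    """Elitist selection (Step 9): keep the N fittest of the merged population Q."""
    phi, _ = scalar_fitness_and_skill(objectives)
    return np.argsort(-phi, kind="stable")[:N]  # indices of the survivors
```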

C. Permutation-Based Combinatorial Optimization Problems

PCOPs are a class of COPs in which the natural representation of a solution is a permutation. In this paper, we consider four kinds of PCOPs, i.e., the traveling salesman problem (TSP), the quadratic assignment problem (QAP), the linear ordering problem (LOP), and the job-shop scheduling problem (JSP).

1) TSP: Given a list of n cities and an n × n distance matrix D = [d_ij], where d_ij is the distance between city i and city j, the objective of TSP is to find the shortest possible tour that visits each city exactly once and returns to the origin city. A solution can be given by a sequence of cities, i.e., a permutation σ = (σ_1, σ_2, ..., σ_n), and the objective function is formulated as

f(σ) = Σ_{i=2}^{n} d_{σ_{i-1} σ_i} + d_{σ_n σ_1}.   (1)

2) QAP: Given an n × n flow matrix H = [h_ij] and an n × n distance matrix D = [d_ij], the objective of QAP is to minimize the following function, where σ = (σ_1, σ_2, ..., σ_n) is a permutation of {1, 2, ..., n}:

f(σ) = Σ_{i=1}^{n} Σ_{j=1}^{n} h_{ij} d_{σ_i σ_j}.   (2)

3) LOP: Given an n × n matrix C = [c_ij], the objective of LOP is to determine a simultaneous permutation σ of the rows and columns of C such that the sum of the superdiagonal entries is as large as possible. The objective function is

f(σ) = Σ_{i=1}^{n} Σ_{j=i+1}^{n} c_{σ_i σ_j}.   (3)

Since only minimization problems are considered in this paper, we in fact minimize −f(σ) for LOP.

4) JSP: Given n jobs {1, 2, ..., n} and m machines {1, 2, ..., m}, each job i consists of m_i precedence-constrained operations O_{i,1}, O_{i,2}, ..., O_{i,m_i}, where O_{i,j}, j ∈ {1, 2, ..., m_i}, must be processed on a specified machine k with uninterrupted processing time p_{i,j,k}. There are N = Σ_{i=1}^{n} m_i operations in total, and a schedule can be represented as a sequence of the N operations (a permutation σ). Let C_i be the completion time of job i in the schedule σ; then the goal of JSP is to minimize the makespan:

f(σ) = max_{1 ≤ i ≤ n} C_i.   (4)

Note that not every permutation is a feasible schedule for JSP, because of the predefined order of operations within a job; this differs from the situation in TSP, QAP, and LOP. If a permutation is infeasible, a repair procedure described in [12] is used to transform it into a feasible one.
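For concreteness, the three matrix-based objectives above can be evaluated in a few lines of Python; this sketch is ours (0-based permutations, hypothetical function names), and the JSP makespan is omitted since it additionally requires a schedule decoder and the repair step of [12]:

```python
import numpy as np

def tsp_cost(perm, dist):
    """Tour length of Eq. (1): consecutive distances plus the return edge."""
    p = np.asarray(perm)
    return dist[p[:-1], p[1:]].sum() + dist[p[-1], p[0]]

def qap_cost(perm, flow, dist):
    """QAP cost of Eq. (2): sum over i, j of h_ij * d_{perm[i], perm[j]}."""
    p = np.asarray(perm)
    return (flow * dist[np.ix_(p, p)]).sum()

def lop_value(perm, c):
    """LOP value of Eq. (3): sum of superdiagonal entries of the permuted matrix.
    For minimization within MFO, the negative of this value is used."""
    p = np.asarray(perm)
    permuted = c[np.ix_(p, p)]       # simultaneous row/column permutation
    return np.triu(permuted, k=1).sum()
```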
III. PROPOSED ALGORITHM FOR EVOLUTIONARY MULTITASKING OF PCOPS

In Sections III-A and III-B, we introduce two novel mechanisms, namely a new unified representation and a new survivor selection procedure, as extensions of the basic MFEA proposed in [17], designed for the evolutionary multitasking of PCOPs. Section III-C then describes the local search procedures considered for the respective PCOPs.

A. A Unified Representation

The random key representation scheme has broad applicability and can be used on both continuous optimization problems and PCOPs. However, there are two obvious limitations of using an RK representation when dealing with PCOPs. On the one hand, decoding can be inefficient, since the transformation from the RK representation to a permutation is required for every fitness evaluation of the EA, incurring additional computing cost. On the other hand, the decoding process of RK in PCOPs can be highly lossy, since only information on relative order is derived; many solutions that differ in genotype can therefore correspond to a single permutation, i.e., the same solution in phenotype. This has the effect of limiting the explorative capability of EAs. Since the focus of this paper is on the evolutionary multitasking of problems limited to permutation-based combinatorial optimization, it makes sense to consider a permutation-based encoding over the more general RK representation.

Suppose there are K PCOP tasks to be solved and the size of the j-th task T_j is D_j. Then the unified representation σ = (σ_1, σ_2, ..., σ_D) is a permutation of (1, 2, ..., D), where D = max_{j ∈ {1,...,K}} D_j. When σ is associated with the j-th task T_j, simply choose all the integers no larger than D_j from σ and keep them in the same relative order as in σ. These D_j integers then form a permutation of (1, 2, ..., D_j) and can be interpreted directly by T_j. Fig. 1 depicts this procedure for three PCOP tasks, i.e., two TSP tasks with 3 and 5 cities, respectively, and one QAP task of size 4.

Fig. 1. The interpretation of the unified representation for each task: a solution in the unified representation and its decoded solutions for the TSP task with 3 cities, the TSP task with 5 cities, and the QAP task of size 4.
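A minimal sketch of this interpretation step, assuming 1-based labels as in the description above (illustrative Python, not the authors' code):

```python
def decode_unified_permutation(sigma, d_j):
    """Project a unified permutation of (1, ..., D) onto task j of size d_j <= D:
    keep only the integers <= d_j, preserving their relative order in sigma."""
    return [v for v in sigma if v <= d_j]

# Example: a unified permutation for D = 5 interpreted for tasks of sizes 3, 5, and 4.
sigma = [4, 1, 5, 3, 2]
print(decode_unified_permutation(sigma, 3))  # [1, 3, 2]        -> TSP with 3 cities
print(decode_unified_permutation(sigma, 5))  # [4, 1, 5, 3, 2]  -> TSP with 5 cities
print(decode_unified_permutation(sigma, 4))  # [4, 1, 3, 2]     -> QAP of size 4
```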

B. Survivor Selection

In this subsection, we propose a new survivor selection procedure (replacing Steps 8 and 9 of Algorithm 1 in MFEA), referred to as level-based selection (LBS), which is simpler and more computationally efficient. LBS maintains K ordered lists L_1, L_2, ..., L_K during the evolutionary process of MFEA. The list L_j, j ∈ {1, 2, ..., K}, consists of every solution whose skill factor is j in the current population P, and these solutions are sorted in ascending order with respect to f_j. The assortative mating mechanism described in Algorithm 2 remains intact for the generation of offspring solutions. The subtle difference is that when an offspring is assigned for evaluation on a selected task T_j, the offspring is inserted into the ordered list L_j after evaluation and its skill factor is set to j. So, after the reproduction process, we have K ordered lists containing a total of 2N solutions, where N is the population size.

The 2N solutions are then partitioned into different levels. Let F_i be the i-th level, which contains the solutions in the i-th position of L_1, L_2, ..., L_K; if |L_j| < i, then F_i contains no solution from L_j. The LBS procedure, which aims to select N solutions to form the next population, is described in Algorithm 3. Fig. 2 provides an example of this selection process for 3 tasks with N = 7, where a shaded circle indicates that a solution is selected for the next population.

Algorithm 3 Level-Based Selection Procedure
1: Input L_1, L_2, ..., L_K.
2: P ← ∅, u ← 0, l ← 1.
3: while u + |F_l| ≤ N do
4:   P ← P ∪ F_l.
5:   u ← u + |F_l|.
6:   l ← l + 1.
7: end while
8: Randomly select N − u solutions from F_l and add them to P.

Fig. 2. An illustration of the level-based selection procedure: three ordered lists (L1, L2, L3) partitioned into levels F1–F5, with the selected solutions shaded.

In the original MFEA, the computational complexity per search generation is dominated by Step 8 of Algorithm 1. There, every solution in Q must be ranked under every objective function, so O(KN log N) computations are incurred. With the LBS procedure, the computational complexity of MFEA per generation is instead dominated by inserting the N offspring into the K ordered lists, which has an average-case complexity of O(N log(NK^{-1})) and a worst-case complexity of O(N log N). This translates to roughly a K-fold improvement in computational complexity per search generation over the originally proposed MFEA. Moreover, it should be mentioned that the effects of the two selection methods are usually similar. This is because, in the original MFEA, when a solution is evaluated on T_j, its objective function values with respect to all unevaluated tasks are set to ∞, so its minimum factorial rank is achieved on T_j with very high probability.
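A compact Python sketch of the LBS idea under the assumptions above (each list L_j is already sorted by f_j; the function and labels are ours):

```python
import random

def level_based_selection(lists, N):
    """Select N survivors from K skill-factor lists, each sorted by its own objective.
    Level i gathers the i-th best solution of every list; whole levels are taken
    greedily, and the level that would overflow N is sampled at random."""
    survivors, taken, level = [], 0, 0
    max_len = max(len(L) for L in lists)
    while level < max_len:
        F = [L[level] for L in lists if len(L) > level]  # level F_{level+1}
        if taken + len(F) <= N:
            survivors.extend(F)          # take the whole level
            taken += len(F)
            level += 1
        else:
            survivors.extend(random.sample(F, N - taken))  # partial last level
            break
    return survivors

# Example: 3 tasks, population size N = 7 (solutions shown as labels).
L1 = ["s11", "s12", "s13", "s14", "s15"]
L2 = ["s21", "s22", "s23", "s24"]
L3 = ["s31", "s32", "s33", "s34", "s35"]
print(level_based_selection([L1, L2, L3], N=7))
```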
C. Local Search Procedures

A specific local search may be effective on some kinds of problems but not on others. To ensure good search performance, different LS neighborhoods are adopted in this paper for the different kinds of PCOPs, as listed in Table I. All LS procedures use a first-improvement pivoting rule, i.e., once an improved solution is found, it is immediately accepted. Moreover, all LS procedures are performed in the spirit of Lamarckian learning, i.e., the improved solution will replace the original solution in the population. Fig. 3 depicts an example search process of the LS working on the solution given previously in Fig. 1, considering a TSP instance with 3 cities.

TABLE I
LOCAL SEARCH NEIGHBORHOODS ADOPTED FOR TSP, QAP, LOP, AND JSP.
Problem | Local search neighborhood
TSP | 2-opt neighborhood [21]
QAP | swap neighborhood [22]
LOP | insert neighborhood [23]
JSP | N6 neighborhood [24]

Fig. 3. An illustration of the LS: a solution in the unified representation and the improved solution obtained by the LS procedure for the TSP task with 3 cities.

IV. EXPERIMENTAL STUDY

A. Experimental Setup

In our experiments, we investigate the three MFEA variants listed in Table II. MFEA denotes the original MFEA algorithm [17]. MFEA-Perm differs from MFEA only in that it employs the permutation encoding rather than RK as the unified representation. MFEA-Perm-LBS, on the other hand, employs the permutation-based unified representation as well as the LBS procedure described in Section III-B. All three MFEA variants are implemented in Java and run on a PC with a 3.2 GHz processor and 16 GB of RAM.

TABLE II
THE THREE MFEA VARIANTS CONSIDERED.
Algorithm | Representation | Survivor selection
MFEA | Random key | Original selection
MFEA-Perm | Permutation (see Section III-A) | Original selection
MFEA-Perm-LBS | Permutation (see Section III-A) | Level-based selection (see Section III-B)

For each kind of PCOP, we consider three large-scale benchmark instances, which are summarized in Table III. In each test case, two or more of these instances are given to an MFEA variant and solved simultaneously. For each test case, the best, median, and worst performances of each algorithm across 20 independent runs are reported.

The simulated binary crossover (SBX) and polynomial mutation (PM) [25] are used as the genetic operators for the RK representation. For the permutation-based representation, the ordered crossover and swap mutation [26] are used. The parameter settings used in the experimental studies are summarized in Table IV.
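Ordered crossover (OX) and swap mutation are standard permutation operators; the following sketch is our own hedged illustration of the common OX definition and may differ in detail from the implementation used in the paper:

```python
import random

def ordered_crossover(p1, p2):
    """Ordered crossover (OX): copy a random slice from parent 1, then fill the
    remaining positions with the missing elements in the order they appear in parent 2."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [v for v in p2 if v not in child[a:b + 1]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

def swap_mutation(perm):
    """Swap mutation: exchange the contents of two randomly chosen positions."""
    child = list(perm)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# Example on unified permutations of length D = 5.
print(ordered_crossover([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))
print(swap_mutation([1, 2, 3, 4, 5]))
```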

Fig. 4. Comparing the convergence trends of f_1 and f_2 under multitasking and single-tasking (intra-domain).

Fig. 5. Comparing the convergence trends of f_1 and f_2 under multitasking and single-tasking (cross-domain).

TABLE III
BENCHMARK INSTANCES CONSIDERED IN EXPERIMENTS.
PCOPs | Benchmark instances
TSP | kroa200 [27], lin318 [27], linhp318 [27]
QAP | sko100a [28], sko100b [28], tai150b [29]
LOP | N-be75eec_150 [30], N-be75oi_150 [30], N-be75oi_250 [30]
JSP | la27 [31], la28 [31], la39 [31]

TABLE IV
PARAMETER SETTINGS OF THE MFEA VARIANTS.
Parameter | Value
Population size | 60
Maximum number of generations | 300
Maximum local search iterations | 50
Random mating probability (rmp) | 0.3
Distribution index of SBX | 20
Distribution index of PM | 20
Probability of PM | 1/D

B. Effect of the Permutation-Based Unified Representation

In this subsection, we demonstrate the effect of the permutation-based unified representation by comparing MFEA with MFEA-Perm. As observed from the results reported in Table V, MFEA-Perm shows clear benefits over MFEA on the test cases and is always better than MFEA in terms of median results. In terms of computational time, MFEA requires 723 s per test case on average, whereas MFEA-Perm requires 440 s. From these results, MFEA-Perm outperforms MFEA in both effectiveness and efficiency.

TABLE V
COMPARISON BETWEEN MFEA AND MFEA-PERM IN TERMS OF BEST, MEDIAN, AND WORST OBJECTIVE FUNCTION VALUES ON FOUR TWO-TASK TEST CASES (ONE PER PROBLEM DOMAIN). f_i is the objective of the i-th task, and the superior f_i is marked in bold.

C. Effect of the Proposed Survivor Selection

Table VI presents the comparison between MFEA-Perm and MFEA-Perm-LBS. It can be seen that the difference in the average performance of the two MFEA variants is relatively small on the test cases comprising two tasks, verifying the claim made in Section III-B.

TABLE VI
COMPARISON BETWEEN MFEA-PERM AND MFEA-PERM-LBS IN TERMS OF BEST, MEDIAN, AND WORST OBJECTIVE FUNCTION VALUES ON FOUR TWO-TASK TEST CASES (ONE PER PROBLEM DOMAIN). f_i is the objective of the i-th task, and the superior f_i is marked in bold.

TABLE VII
MULTITASKING VS. SINGLE-TASKING RESULTS (BEST, MEDIAN, WORST) IN INTRA-DOMAIN OPTIMIZATION ON TWELVE TWO-TASK TEST CASES (THREE PER PROBLEM DOMAIN). f_i is the objective of the i-th task, and the superior f_i is marked in bold.

TABLE VIII
MULTITASKING VS. SINGLE-TASKING RESULTS (BEST, MEDIAN, WORST) IN CROSS-DOMAIN OPTIMIZATION ON 18 TWO-TASK TEST CASES PAIRING INSTANCES FROM DIFFERENT PROBLEM DOMAINS. f_i is the objective of the i-th task, and the superior f_i is marked in bold.

As discussed in Section III-B, the average computational complexity of the proposed survivor selection procedure is O(N log(NK^{-1})), in contrast to that of the basic MFEA, which is O(KN log N). This implies that when K is small, the superiority of LBS over the original selection procedure in MFEA will not be very apparent. Since these test cases involve only 2 tasks, i.e., K = 2, the highly similar results of MFEA-Perm and MFEA-Perm-LBS are to be expected given the low K value. It therefore makes sense to conduct additional studies on cases where K is large so as to better understand the benefits of the LBS procedure. Taking the cue, our further experimental study indicates that approximately 33% of the compute cost can be saved by MFEA-Perm-LBS over MFEA-Perm when K = 10, i.e., when ten tasks are optimized simultaneously.

D. Evolutionary Multitasking vs. Single-Tasking

In this subsection, we study the search performance of evolutionary multitasking relative to single-tasking for solving PCOPs. Although the potential benefits of evolutionary multitasking have been demonstrated on continuous and discrete benchmark problems in [17], the coverage of PCOPs therein is preliminary. First, the intra-domain optimization of PCOPs is considered, i.e., the multiple tasks solved by MFEA-Perm-LBS belong to the same domain. Table VII presents the experimental results of MFEA-Perm-LBS for intra-domain optimization, pitted against those attained via single-tasking, i.e., with each task or PCOP instance optimized individually. It can be seen that for TSP and QAP, multitasking generally performs better than single-tasking. For the LOP, multitasking fares better on the first test case {N-be75eec_150, N-be75oi_150}. On the remaining two cases, noteworthy improvements are observed on the larger task, albeit at the cost of a minor compromise on the smaller counterpart instance. At this juncture, it is important to keep in mind that the effort expended on a single problem during single-tasking is shared between the two problems during multitasking. From this perspective, the overall effectiveness of evolutionary multitasking is clearer. For the JSP, the observation is similar to that for LOP. Note that there naturally exists the possibility of some negative transfer [17] during multitasking, as is found to be the case for {la28, la39}, where single-tasking outperforms multitasking. Fig. 4 gives the convergence trends for three test cases in intra-domain multitasking, where MT and ST denote multitasking and single-tasking, respectively. Every point in a curve is the median result over 20 runs.

We further consider the case of cross-domain optimization and provide two different types of PCOPs to MFEA-Perm-LBS. Table VIII shows the detailed comparison results. From Table VIII, it is interesting to see that multitasking exhibits even more superiority in cross-domain than in intra-domain optimization. Indeed, multitasking performs better on nearly all of the 18 cross-domain test cases. The convergence trends for three cross-domain cases are plotted in Fig. 5. It can be seen that the single-tasking search usually stagnates at an early stage, whereas multitasking generally decreases the objective values more quickly and finally converges to a better objective value. To our knowledge, there are no other existing techniques that address cross-domain multitask optimization, so the advantage of multitasking in this respect is very encouraging. We suspect that part of the reason for this success is that different types of PCOPs are more likely to provide distinct search biases and thereby promote diversity in the unified search space via the implicit genetic transfer phenomenon.
But this is only a preliminary explanation; more investigation is needed in the future.

TABLE IX
THE MEDIAN RESULTS ON THE TEST CASE WITH FOUR TASKS {kroa200, lin318, sko100a, sko100b}, COMPARING SINGLE-TASKING, MULTITASKING (300 GENERATIONS), AND MULTITASKING (1200 GENERATIONS). f_i is the objective of the i-th task, and the best f_i is marked in bold.

So far, we have only presented the performance of multitasking with two tasks. It is interesting to see the outcome when more than two tasks are given to the MFEA. Here, we consider a test case with four tasks, i.e., {kroa200, lin318, sko100a, sko100b}. Table IX lists the median results obtained by MFEA-Perm-LBS. Note that if multitasking is allocated the same number of generations as single-tasking, only 1/K of the computational cost is paid on average for each task compared to single-tasking, where K is the number of tasks. Under this setting, it is unrealistic to expect the performance of multitasking to consistently surpass that of single-tasking if K is large. Accordingly, in this situation, more computational effort is needed to conclude whether multitasking is beneficial. The worst-case scenario is that the number of generations allocated to multitasking is K times that of single-tasking; if multitasking outperforms single-tasking even within this worst case, we can still say that multitasking is beneficial. From Table IX, it can be seen that multitasking (at the 300-generation mark) is worse than single-tasking in terms of f_2. But if we extend multitasking to 4 × 300 = 1200 generations, all objectives are improved. Some other cases with many tasks were also tested with similar outcomes, i.e., we find that multitasking is usually better than single-tasking at least in the worst-case scenario, validating the feasibility of multitasking for many tasks.

V. CONCLUSION AND FUTURE WORK

This paper focuses on the evolutionary multitasking of PCOPs. Four kinds of well-known PCOPs, i.e., TSP, QAP, LOP, and JSP, are considered. To make MFEA more effective and efficient for PCOPs, a permutation-based unified representation is adopted instead of the random key representation. Moreover, a new survivor selection procedure is introduced for MFEA, whose computational complexity per search generation is much lower than that of the original algorithm. In the experimental study, we first examine the effect of the two newly proposed

mechanisms. Then, through experiments in both intra-domain and cross-domain optimization, we try to determine whether multitasking has advantages over single-tasking for PCOPs. The main experimental findings are as follows:

1) Multitasking is promising in intra-domain optimization within TSP and QAP;
2) There exists the possibility of negative transfer, as observed in some intra-domain cases with JSP;
3) Multitasking has, interestingly, shown more promise in cross-domain optimization than in intra-domain optimization;
4) For cases involving many tasks, multitasking can often outperform single-tasking at least in the worst-case scenario.

These experimental observations have shown the potential of evolutionary multitasking in intra-domain and cross-domain optimization and its scalability to many-task environments. In particular, it is inspiring to see the superior performance of evolutionary multitasking in cross-domain optimization settings, a feature that cannot be readily achieved by any off-the-shelf optimizer. In the future, we should conduct deeper analysis of why evolutionary multitasking works, especially in cross-domain optimization. To handle many tasks, more effective multitasking solvers need to be developed. In addition, evolutionary multitasking should be applied to more practical engineering problems to further demonstrate its efficacy.

ACKNOWLEDGMENT

This work was supported in part by the A*Star-TSRP funding, in part by the Singapore Institute of Manufacturing Technology-Nanyang Technological University (SIMTech-NTU) Joint Laboratory and Collaborative Research Programme on Complex Systems, and in part by the Computational Intelligence Graduate Laboratory at NTU. The work of Hua Xu was also supported by the National Basic Research Program of China (973 Program) (Grant No. 2012CB316301), the National Natural Science Foundation of China, and the National S&T Major Projects of China.

REFERENCES

[1] M. Srinivas and L. M. Patnaik, "Genetic algorithms: A survey," Computer, vol. 27, no. 6.
[2] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4.
[3] C. A. C. Coello, "A comprehensive survey of evolutionary-based multiobjective optimization techniques," Knowledge and Information Systems, vol. 1, no. 3.
[4] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, in press.
[5] Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, "Balancing convergence and diversity in decomposition-based many-objective optimizers," IEEE Transactions on Evolutionary Computation, in press.
[6] J. Branke and H. Schmeck, "Designing evolutionary algorithms for dynamic optimization problems," in Advances in Evolutionary Computing. Springer, 2003.
[7] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1.
[8] Y.-S. Ong, M. H. Lim, and X. Chen, "Research frontier—memetic computation: past, present & future," IEEE Computational Intelligence Magazine, vol. 5, no. 2, p. 24.
[9] J. Tang, M. H. Lim, and Y. S. Ong, "Diversity-adaptive parallel memetic algorithm for solving large scale combinatorial optimization problems," Soft Computing, vol. 11, no. 9.
[10] Y. Yuan, H. Xu, and J. Yang, "A hybrid harmony search algorithm for the flexible job shop scheduling problem," Applied Soft Computing, vol. 13, no. 7.
[11] Y. Yuan and H. Xu, "Flexible job shop scheduling using hybrid differential evolution algorithms," Computers & Industrial Engineering, vol. 65, no. 2.
[12] Y. Yuan and H. Xu, "Multiobjective flexible job shop scheduling using memetic algorithms," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 1.
[13] P. Cunningham and B. Smyth, "Case-based reasoning in scheduling: reusing solution components," International Journal of Production Research, vol. 35, no. 11.
[14] S. J. Louis and J. McDonnell, "Learning with case-injected genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 8, no. 4.
[15] L. Feng, Y. S. Ong, M. Lim, and I. Tsang, "Memetic search with inter-domain learning: A realization between CVRP and CARP," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5.
[16] L. Feng, Y.-S. Ong, A.-H. Tan, and I. W. Tsang, "Memes as building blocks: a case study on evolutionary optimization + transfer learning for routing problems," Memetic Computing, vol. 7, no. 3.
[17] A. Gupta, Y.-S. Ong, and L. Feng, "Multifactorial evolution: Towards evolutionary multitasking," IEEE Transactions on Evolutionary Computation, in press.
[18] G. C. Onwubolu and D. Davendra, Differential Evolution: A Handbook for Global Permutation-Based Combinatorial Optimization. Berlin, Germany: Springer-Verlag.
[19] J. Ceberio, E. Irurozki, A. Mendiburu, and J. A. Lozano, "A review on estimation of distribution algorithms in permutation-based combinatorial optimization problems," Progress in Artificial Intelligence, vol. 1, no. 1.
[20] J. C. Bean, "Genetic algorithms and random keys for sequencing and optimization," ORSA Journal on Computing, vol. 6, no. 2.
[21] G. A. Croes, "A method for solving traveling-salesman problems," Operations Research, vol. 6, no. 6.
[22] U. Benlic and J.-K. Hao, "Breakout local search for the quadratic assignment problem," Applied Mathematics and Computation, vol. 219, no. 9.
[23] J. Ceberio, A. Mendiburu, and J. A. Lozano, "The linear ordering problem revisited," European Journal of Operational Research, vol. 241, no. 3.
[24] C. Zhang, P. Li, Z. Guan, and Y. Rao, "A tabu search algorithm with a new neighborhood structure for the job shop scheduling problem," Computers & Operations Research, vol. 34, no. 11.
[25] K. Deb and R. B. Agrawal, "Simulated binary crossover for continuous search space," Complex Systems, vol. 9, no. 3, pp. 1–15.
[26] L. Davis, "Applying adaptive algorithms to epistatic domains," in Proceedings of the International Joint Conference on Artificial Intelligence, vol. 85, 1985.
[27] G. Reinelt, "TSPLIB—A traveling salesman problem library," ORSA Journal on Computing, vol. 3, no. 4.
[28] J. Skorin-Kapov, "Tabu search applied to the quadratic assignment problem," ORSA Journal on Computing, vol. 2, no. 1.
[29] E. Taillard, "Robust taboo search for the quadratic assignment problem," Parallel Computing, vol. 17, no. 4.
[30] T. Schiavinotto and T. Stützle, "The linear ordering problem: Instances, search space analysis and algorithms," Journal of Mathematical Modelling and Algorithms, vol. 3, no. 4.
[31] S. Lawrence, "Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques (supplement)," Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, Pennsylvania, 1984.


More information

PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE

PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE Artificial Intelligence, Computational Logic PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE Lecture 4 Metaheuristic Algorithms Sarah Gaggl Dresden, 5th May 2017 Agenda 1 Introduction 2 Constraint

More information

A Simple Haploid-Diploid Evolutionary Algorithm

A Simple Haploid-Diploid Evolutionary Algorithm A Simple Haploid-Diploid Evolutionary Algorithm Larry Bull Computer Science Research Centre University of the West of England, Bristol, UK larry.bull@uwe.ac.uk Abstract It has recently been suggested that

More information

Evolutionary computation

Evolutionary computation Evolutionary computation Andrea Roli andrea.roli@unibo.it DEIS Alma Mater Studiorum Università di Bologna Evolutionary computation p. 1 Evolutionary Computation Evolutionary computation p. 2 Evolutionary

More information

Toward Effective Initialization for Large-Scale Search Spaces

Toward Effective Initialization for Large-Scale Search Spaces Toward Effective Initialization for Large-Scale Search Spaces Shahryar Rahnamayan University of Ontario Institute of Technology (UOIT) Faculty of Engineering and Applied Science 000 Simcoe Street North

More information

A pruning pattern list approach to the permutation flowshop scheduling problem

A pruning pattern list approach to the permutation flowshop scheduling problem A pruning pattern list approach to the permutation flowshop scheduling problem Takeshi Yamada NTT Communication Science Laboratories, 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN E-mail :

More information

Beta Damping Quantum Behaved Particle Swarm Optimization

Beta Damping Quantum Behaved Particle Swarm Optimization Beta Damping Quantum Behaved Particle Swarm Optimization Tarek M. Elbarbary, Hesham A. Hefny, Atef abel Moneim Institute of Statistical Studies and Research, Cairo University, Giza, Egypt tareqbarbary@yahoo.com,

More information

Evolving more efficient digital circuits by allowing circuit layout evolution and multi-objective fitness

Evolving more efficient digital circuits by allowing circuit layout evolution and multi-objective fitness Evolving more efficient digital circuits by allowing circuit layout evolution and multi-objective fitness Tatiana Kalganova Julian Miller School of Computing School of Computing Napier University Napier

More information

A new ILS algorithm for parallel machine scheduling problems

A new ILS algorithm for parallel machine scheduling problems J Intell Manuf (2006) 17:609 619 DOI 10.1007/s10845-006-0032-2 A new ILS algorithm for parallel machine scheduling problems Lixin Tang Jiaxiang Luo Received: April 2005 / Accepted: January 2006 Springer

More information

OPTIMIZATION OF THE SUPPLIER SELECTION PROBLEM USING DISCRETE FIREFLY ALGORITHM

OPTIMIZATION OF THE SUPPLIER SELECTION PROBLEM USING DISCRETE FIREFLY ALGORITHM Advanced Logistic Systems Vol. 6. No. 1. (2012) pp. 117-126. OPTIMIZATION OF THE SUPPLIER SELECTION PROBLEM USING DISCRETE FIREFLY ALGORITHM LÁSZLÓ KOTA 1 Abstract: In this article I show a firefly optimization

More information

Solving the Homogeneous Probabilistic Traveling Salesman Problem by the ACO Metaheuristic

Solving the Homogeneous Probabilistic Traveling Salesman Problem by the ACO Metaheuristic Solving the Homogeneous Probabilistic Traveling Salesman Problem by the ACO Metaheuristic Leonora Bianchi 1, Luca Maria Gambardella 1 and Marco Dorigo 2 1 IDSIA, Strada Cantonale Galleria 2, CH-6928 Manno,

More information

A Self-Adaptive Memeplex Robust Search Scheme for solving Stochastic Demands Vehicle Routing Problem

A Self-Adaptive Memeplex Robust Search Scheme for solving Stochastic Demands Vehicle Routing Problem International Journal of Systems Science Vol. 00, No. 00, 00 Month 20xx, 1 27 A Self-Adaptive Memeplex Robust Search Scheme for solving Stochastic Demands Vehicle Routing Problem Xianshun Chen, Liang Feng,

More information

Introduction to integer programming III:

Introduction to integer programming III: Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability

More information

A polynomial-time approximation scheme for the two-machine flow shop scheduling problem with an availability constraint

A polynomial-time approximation scheme for the two-machine flow shop scheduling problem with an availability constraint A polynomial-time approximation scheme for the two-machine flow shop scheduling problem with an availability constraint Joachim Breit Department of Information and Technology Management, Saarland University,

More information

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions Chao Qian,2, Yang Yu 2, and Zhi-Hua Zhou 2 UBRI, School of Computer Science and Technology, University of

More information

The coordinated scheduling of steelmaking with multi-refining and tandem transportation

The coordinated scheduling of steelmaking with multi-refining and tandem transportation roceedings of the 17th World Congress The International Federation of Automatic Control The coordinated scheduling of steelmaking with multi-refining and tandem transportation Jing Guan*, Lixin Tang*,

More information

Fundamentals of Genetic Algorithms

Fundamentals of Genetic Algorithms Fundamentals of Genetic Algorithms : AI Course Lecture 39 40, notes, slides www.myreaders.info/, RC Chakraborty, e-mail rcchak@gmail.com, June 01, 2010 www.myreaders.info/html/artificial_intelligence.html

More information

Center-based initialization for large-scale blackbox

Center-based initialization for large-scale blackbox See discussions, stats, and author profiles for this publication at: http://www.researchgate.net/publication/903587 Center-based initialization for large-scale blackbox problems ARTICLE FEBRUARY 009 READS

More information

Sorting Network Development Using Cellular Automata

Sorting Network Development Using Cellular Automata Sorting Network Development Using Cellular Automata Michal Bidlo, Zdenek Vasicek, and Karel Slany Brno University of Technology, Faculty of Information Technology Božetěchova 2, 61266 Brno, Czech republic

More information

Set-based Min-max and Min-min Robustness for Multi-objective Robust Optimization

Set-based Min-max and Min-min Robustness for Multi-objective Robust Optimization Proceedings of the 2017 Industrial and Systems Engineering Research Conference K. Coperich, E. Cudney, H. Nembhard, eds. Set-based Min-max and Min-min Robustness for Multi-objective Robust Optimization

More information

Totally unimodular matrices. Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems

Totally unimodular matrices. Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Totally unimodular matrices Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Martin Branda Charles University in Prague Faculty of Mathematics and

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress

Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Petr Pošík Czech Technical University, Faculty of Electrical Engineering, Department of Cybernetics Technická, 66 7 Prague

More information

Sensitive Ant Model for Combinatorial Optimization

Sensitive Ant Model for Combinatorial Optimization Sensitive Ant Model for Combinatorial Optimization CAMELIA CHIRA cchira@cs.ubbcluj.ro D. DUMITRESCU ddumitr@cs.ubbcluj.ro CAMELIA-MIHAELA PINTEA cmpintea@cs.ubbcluj.ro Abstract: A combinatorial optimization

More information

Design of Manufacturing Systems Manufacturing Cells

Design of Manufacturing Systems Manufacturing Cells Design of Manufacturing Systems Manufacturing Cells Outline General features Examples Strengths and weaknesses Group technology steps System design Virtual cellular manufacturing 2 Manufacturing cells

More information

Local Search & Optimization

Local Search & Optimization Local Search & Optimization CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2018 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 4 Some

More information

A Compact Linearisation of Euclidean Single Allocation Hub Location Problems

A Compact Linearisation of Euclidean Single Allocation Hub Location Problems A Compact Linearisation of Euclidean Single Allocation Hub Location Problems J. Fabian Meier 1,2, Uwe Clausen 1 Institute of Transport Logistics, TU Dortmund, Germany Borzou Rostami 1, Christoph Buchheim

More information

Upper Bounds on the Time and Space Complexity of Optimizing Additively Separable Functions

Upper Bounds on the Time and Space Complexity of Optimizing Additively Separable Functions Upper Bounds on the Time and Space Complexity of Optimizing Additively Separable Functions Matthew J. Streeter Computer Science Department and Center for the Neural Basis of Cognition Carnegie Mellon University

More information

A Non-Parametric Statistical Dominance Operator for Noisy Multiobjective Optimization

A Non-Parametric Statistical Dominance Operator for Noisy Multiobjective Optimization A Non-Parametric Statistical Dominance Operator for Noisy Multiobjective Optimization Dung H. Phan and Junichi Suzuki Deptartment of Computer Science University of Massachusetts, Boston, USA {phdung, jxs}@cs.umb.edu

More information

HYPER-HEURISTICS have attracted much research attention

HYPER-HEURISTICS have attracted much research attention IEEE TRANSACTIONS ON CYBERNETICS 1 New Insights Into Diversification of Hyper-Heuristics Zhilei Ren, He Jiang, Member, IEEE, Jifeng Xuan, Yan Hu, and Zhongxuan Luo Abstract There has been a growing research

More information

Exact Mixed Integer Programming for Integrated Scheduling and Process Planning in Flexible Environment

Exact Mixed Integer Programming for Integrated Scheduling and Process Planning in Flexible Environment Journal of Optimization in Industrial Engineering 15 (2014) 47-53 Exact ixed Integer Programming for Integrated Scheduling and Process Planning in Flexible Environment ohammad Saidi mehrabad a, Saeed Zarghami

More information

Inter-Relationship Based Selection for Decomposition Multiobjective Optimization

Inter-Relationship Based Selection for Decomposition Multiobjective Optimization Inter-Relationship Based Selection for Decomposition Multiobjective Optimization Ke Li, Sam Kwong, Qingfu Zhang, and Kalyanmoy Deb Department of Electrical and Computer Engineering Michigan State University,

More information

A comparison of sequencing formulations in a constraint generation procedure for avionics scheduling

A comparison of sequencing formulations in a constraint generation procedure for avionics scheduling A comparison of sequencing formulations in a constraint generation procedure for avionics scheduling Department of Mathematics, Linköping University Jessika Boberg LiTH-MAT-EX 2017/18 SE Credits: Level:

More information

Research Article A Hybrid Backtracking Search Optimization Algorithm with Differential Evolution

Research Article A Hybrid Backtracking Search Optimization Algorithm with Differential Evolution Mathematical Problems in Engineering Volume 2015, Article ID 769245, 16 pages http://dx.doi.org/10.1155/2015/769245 Research Article A Hybrid Backtracking Search Optimization Algorithm with Differential

More information

Genetic Algorithms & Modeling

Genetic Algorithms & Modeling Genetic Algorithms & Modeling : Soft Computing Course Lecture 37 40, notes, slides www.myreaders.info/, RC Chakraborty, e-mail rcchak@gmail.com, Aug. 10, 2010 http://www.myreaders.info/html/soft_computing.html

More information

OPTIMIZED RESOURCE IN SATELLITE NETWORK BASED ON GENETIC ALGORITHM. Received June 2011; revised December 2011

OPTIMIZED RESOURCE IN SATELLITE NETWORK BASED ON GENETIC ALGORITHM. Received June 2011; revised December 2011 International Journal of Innovative Computing, Information and Control ICIC International c 2012 ISSN 1349-4198 Volume 8, Number 12, December 2012 pp. 8249 8256 OPTIMIZED RESOURCE IN SATELLITE NETWORK

More information

Part B" Ants (Natural and Artificial)! Langton s Vants" (Virtual Ants)! Vants! Example! Time Reversibility!

Part B Ants (Natural and Artificial)! Langton s Vants (Virtual Ants)! Vants! Example! Time Reversibility! Part B" Ants (Natural and Artificial)! Langton s Vants" (Virtual Ants)! 11/14/08! 1! 11/14/08! 2! Vants!! Square grid!! Squares can be black or white!! Vants can face N, S, E, W!! Behavioral rule:!! take

More information