PIBEA: Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems


Pruet Boonma, Department of Computer Engineering, Chiang Mai University, Chiang Mai 50200, Thailand. Email: pruet@eng.cmu.ac.th
Junichi Suzuki, Department of Computer Science, University of Massachusetts, Boston, Boston, MA 02125, USA. Email: jxs@cs.umb.edu

Abstract—This paper proposes and evaluates an evolutionary multiobjective optimization algorithm (EMOA) that uses a new quality indicator, called the prospect indicator, for its parent selection and environmental selection operators. The prospect indicator measures the potential of each individual to reproduce offspring that dominate itself and spread out in the objective space. The prospect indicator allows the proposed EMOA, PIBEA (Prospect Indicator Based Evolutionary Algorithm), to (1) maintain sufficient selection pressure, even in high dimensional MOPs, thereby improving convergence velocity toward the Pareto front, and (2) diversify individuals, even in high dimensional MOPs, thereby distributing individuals uniformly in the objective space. Experimental results show that PIBEA effectively performs its operators in high dimensional problems and outperforms three existing well-known EMOAs, NSGA-II, SPEA2 and AbYSS, in terms of convergence velocity, diversity of individuals, coverage of the Pareto front and performance stability.

I. INTRODUCTION

This paper proposes and evaluates a new evolutionary algorithm to solve multiobjective optimization problems (MOPs). In general, an MOP is formally described as follows:

    minimize   F(x) = [f_1(x), f_2(x), ..., f_m(x)]^T ∈ O
    subject to x = [x_1, x_2, ..., x_n]^T ∈ S        (1)

O denotes the objective space, and S denotes the decision variable space. x ∈ S denotes a vector of n decision variables; it is called an individual in evolutionary multiobjective optimization algorithms (EMOAs).
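As a concrete toy instance of formulation (1), the sketch below defines a bi-objective problem with n = 2 decision variables; the particular objective functions are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the MOP formulation in Eq. (1), assuming a toy
# bi-objective problem. The objective functions below are illustrative only.
def evaluate(x):
    """Map a decision vector x in S to m = 2 objective values in O."""
    f1 = sum(xi ** 2 for xi in x)          # first objective: distance from 0
    f2 = sum((xi - 1) ** 2 for xi in x)    # second objective: distance from 1
    return [f1, f2]

# An individual is a decision vector; its image under F is an objective vector.
print(evaluate([0.0, 0.0]))  # -> [0.0, 2.0]
```

Minimizing f1 pulls each xi toward 0 while minimizing f2 pulls it toward 1, so the two objectives conflict, as the paragraph above describes.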
A function vector, F : R^n → R^m, consists of m real-valued objective functions that map an individual x to m objective values, (f_1(x), f_2(x), ..., f_m(x)). When m > 3, an MOP is called high dimensional [1]. The goal of an EMOA is to find the individual(s) that minimize the objective values. In an MOP, there rarely exists a single solution that is optimal with respect to all objectives because objectives often conflict with each other. Therefore, EMOAs seek the optimal trade-off individuals, or Pareto-optimal individuals, by balancing the trade-offs among conflicting objectives. An EMOA evolves a population of individuals via genetic operators (e.g., selection, crossover and mutation operators) through generations toward the Pareto-optimal front, or simply Pareto front, which is the collection of Pareto-optimal solutions.

The notion of dominance plays an important role for EMOAs in seeking Pareto optimality. An individual x is said to dominate another individual y (denoted by x ≻ y) iff f_i(x) ≤ f_i(y) for all i = 1, ..., m and f_i(x) < f_i(y) for at least one i ∈ {1, ..., m}. EMOAs often rank individuals based on the dominance relationships among them and exploit their ranks in selection operators. This process is called dominance ranking [2].

A research trend in the design space of EMOAs is to adopt indicator-based selection operators, which use performance/quality indicators to augment or replace dominance ranking [3]. For example, the hypervolume indicator has been used in the selection operators of several EMOAs [4]-[8]. This paper proposes and evaluates an EMOA that leverages a new quality indicator called the prospect indicator. The prospect indicator measures the potential of each individual to reproduce offspring that dominate itself and spread out in the objective space.
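The dominance test just defined translates directly into code. This is a generic sketch for minimization, not code from the paper:

```python
def dominates(x_obj, y_obj):
    """Return True iff x dominates y under minimization:
    x is no worse on every objective and strictly better on at least one."""
    no_worse = all(xf <= yf for xf, yf in zip(x_obj, y_obj))
    strictly_better = any(xf < yf for xf, yf in zip(x_obj, y_obj))
    return no_worse and strictly_better

print(dominates([1, 2], [2, 2]))  # True: no worse everywhere, better on f1
print(dominates([1, 3], [2, 2]))  # False: the two vectors are incomparable
```

Note that incomparable pairs (each better on some objective) are exactly the non-dominated individuals that dominance ranking places at the same rank.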
The proposed EMOA, PIBEA (Prospect Indicator Based Evolutionary Algorithm), uses the prospect indicator in its parent selection operator, which chooses individuals from the population to reproduce offspring, as well as in its environmental selection operator, which chooses individuals for the next generation from the union of the current-generation individuals and their offspring. The prospect indicator allows the two selection operators to (1) maintain sufficient selection pressure, even in high dimensional MOPs, thereby improving convergence velocity toward the Pareto front, and (2) diversify individuals, even in high dimensional MOPs, thereby distributing individuals uniformly in the objective space. PIBEA also performs adaptive mutation rate adjustment. It increases the mutation rate to encourage individuals to explore the decision variable space when they have not converged to a part(s) of the Pareto front. However, when they have, it decreases the mutation rate to exploit them and cover a wider range of the front. The proposed operator examines the entropy of individuals to estimate their proximity to the Pareto front. Experimental results show that PIBEA effectively performs its parent selection, environmental selection and adaptive mutation rate adjustment operators in high dimensional MOPs and outperforms three existing well-known EMOAs, NSGA-II [9], SPEA2 [10] and AbYSS [11], in terms of convergence velocity to the Pareto front, diversity of individuals, coverage of the Pareto front and performance stability.
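The entropy-driven adjustment outlined above can be sketched as follows (the entropy and update rule are detailed in Section III-D); this assumes the population has already been binned into hypercubes, and the function and variable names are ours:

```python
import math

def adjusted_mutation_rate(counts, p_m_max):
    """Entropy-based mutation rate (sketch of Section III-D).
    counts[i] is the number of individuals in hypercube i; |C| = len(counts).
    Assumes the update rule P_m = P_m_max * (1 - (1 - H_o)^2)."""
    n = sum(counts)
    # Shannon entropy H over occupied hypercubes (empty cells contribute 0)
    h = -sum((c / n) * math.log2(c / n) for c in counts if c > 0)
    h_norm = h / math.log2(len(counts))  # H_o = H / H_max = H / log2|C|
    return p_m_max * (1.0 - (1.0 - h_norm) ** 2)

# Disordered (early) population spread over 4 hypercubes: maximum rate.
print(adjusted_mutation_rate([1, 1, 1, 1], 0.1))  # -> 0.1
# Converged population packed into one hypercube: rate drops to 0.
print(adjusted_mutation_rate([4, 0, 0, 0], 0.1))  # -> 0.0
```

A uniformly spread population (high entropy) keeps mutation high for exploration; a population concentrated near the front (low entropy) gets a low rate for exploitation.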

II. RELATED WORK

The dominance ranking operator was first proposed in NSGA [2]. NSGA-II extends NSGA with faster dominance ranking, diversity preservation using crowding distance, and (µ + λ)-elitism in environmental selection [12]. Crowding distance measures the density of each individual and its neighbors in the objective space by computing the distances between the individual and its direct neighbors on an objective-by-objective basis. PIBEA is similar to NSGA-II in that it follows NSGA-II's algorithmic structure (see Section III-A); however, it extends NSGA-II's selection operators with the prospect indicator. NSGA-II does not consider adaptive mutation rate adjustment as PIBEA does. SPEA enhances NSGA's selection operators with a new fitness assignment method and the notion of elite archiving [13]. SPEA2 further extends SPEA's selection operators with an improved fitness assignment method [10]. It also introduces a new diversity preservation method, which examines the density of each individual and its neighbors in the objective space with a k-nearest neighbor algorithm. PIBEA is similar to SPEA and SPEA2 in that it provides enhanced selection operators atop NSGA-II's dominance ranking. However, its enhancement strategy is very different from SPEA's and SPEA2's: PIBEA uses the prospect indicator for both parent selection and diversity preservation, while SPEA and SPEA2 use two different methods for parent selection and diversity preservation. SPEA and SPEA2 do not consider adaptive mutation rate adjustment. Adaptive mutation adjustment was first proposed in [14], and it has been used in [15], [16], among others. In [14], [15], the mutation rate is adjusted based on the fitness values of the current-generation individuals. In [16], the mutation rate is adjusted based on feedback from the previous generation.
For instance, if mutation has limited effects on convergence in the previous generation, the mutation rate is increased by a fixed amount. PIBEA uses an entropy-based diversity metric to decide whether, and by how much, the mutation rate is adjusted.

The hypervolume indicator [4]-[8] is similar to the prospect indicator in that both are volume-based indicators. It measures the volume of the hypercube that each individual dominates in the objective space; the hypercube is formed by the individual and a reference point representing the highest (or worst) possible objective values. In contrast, the prospect indicator measures the volume of a hypercube in the opposite way: it considers a hypercube that dominates an individual. The hypercube is formed by the individual and the Utopian point, which represents the lowest (or best) possible objective values. The Utopian point is the (0, 0) point in a two dimensional objective space. While the hypervolume indicator requires a carefully chosen reference point that depends on the MOP to be solved [8], [17], it is always trivial to choose the Utopian point for the prospect indicator, regardless of the MOP to be solved.

III. PIBEA

This section describes PIBEA's algorithmic structure (Section III-A) and its operators (Sections III-B to III-D).

A. Algorithmic Structure

Algorithm 1 shows PIBEA's algorithmic structure, which extends NSGA-II's.
Algorithm 1 The Algorithmic Structure of PIBEA
1: g = 0
2: P_g = initializePopulation(N)
3: P_m = P_m_max
4: while g < MAX-GENERATION do
5:   O_g = ∅
6:   while |O_g| < N do
7:     p_1 = prospectBasedParentSelection(P_g)
8:     p_2 = prospectBasedParentSelection(P_g)
9:     if random() ≤ P_c then
10:      {o_1, o_2} = crossover(p_1, p_2)
11:      if random() ≤ P_m then
12:        o_1 = mutation(o_1)
13:      end if
14:      if random() ≤ P_m then
15:        o_2 = mutation(o_2)
16:      end if
17:      O_g = {o_1, o_2} ∪ O_g
18:    end if
19:  end while
20:  R_g = P_g ∪ O_g
21:  P_{g+1} = prospectBasedEnvironmentalSelection(R_g)
22:  E = computeEntropy(P_{g+1})
23:  P_m = updateMutationRate(E)
24:  g = g + 1
25: end while

In the 0-th generation, N individuals are randomly generated as the initial population (Line 2). The mutation rate (P_m) is initially set to its maximum value, P_m_max (Line 3). In each generation (g), a pair of individuals, called parents (p_1 and p_2), are chosen from the current population with the proposed parent selection operator, which uses the prospect indicator (prospectBasedParentSelection(), Lines 7-8). With the crossover rate P_c, the two parents reproduce two offspring with the SBX (self-adaptive simulated binary crossover) operator [18] (Line 10). Each offspring undergoes polynomial mutation [9] with the probability P_m (Lines 11 to 16). The selection, crossover and mutation operations are performed repeatedly on P_g to produce N offspring. The offspring (O_g) are combined with the population P_g to form R_g, which is the pool of candidates for the next-generation individuals. Environmental selection follows reproduction: N individuals are selected from the 2N individuals in R_g as the next-generation population P_{g+1} (prospectBasedEnvironmentalSelection(), Line 21); that is, environmental selection performs (N + N)-elitism. Finally, the mutation rate is updated by computing the entropy of the next-generation individuals (Lines 22 and 23).

B.
Prospect Indicator Based Parent Selection

Algorithm 2 shows how the proposed parent selection operator (prospectBasedParentSelection() in Algorithm 1) works with the prospect indicator. It is designed as a variant of binary tournament selection. It randomly draws two individuals from the current population P (Lines 1 and 2), compares them based on the dominance relationship between them and chooses the superior one as a parent (Lines 5 to 8). Note that p_1 ≻ p_2 means that p_1 dominates p_2, as described in Section I. If the two individuals (p_1 and p_2) do not dominate each other and are placed at the same rank, the proposed operator chooses one of them as a parent with the prospect indicator. Lines 10 and 11 compute the prospect indicator values of p_1 and p_2 (I_P(p_1) and I_P(p_2)), and Line 12 compares the two values. The proposed operator chooses the one with the higher I_P (Lines 12 to 16).

The prospect indicator value of an individual i, I_P(i), is computed as follows:

    I_P(i) = V(R_rank(i)) − V(R_rank(i) \ {i})        (2)

rank(i) denotes the rank that i is placed at, and R_rank(i) denotes the set of individuals that are placed at the same rank as i. V(R) denotes the volume of the region that dominates the individuals in R in the objective space. It is calculated with the Lebesgue measure Λ as follows:

    V(R) = Λ( ∪_{x ∈ R} { x' | x_u ≼ x' ≼ x } )        (3)

x_u denotes the Utopian point. The prospect indicator valuates the potential of an individual to reproduce offspring that dominate itself. Figure 1 shows an example measurement of the prospect indicator in a two dimensional objective space. This example considers three nondominated individuals, a, b and c (R_rank(a) = R_rank(b) = R_rank(c) = {a, b, c}). The Utopian point is (0, 0). I_P(b) is the shaded area (i.e., V(R_rank(b)) − V(R_rank(b) \ {b})); it is the area where individual b can exclusively reproduce its offspring by evolving itself.
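To make Eq. (2) and Figure 1 concrete, here is a minimal Python sketch of the indicator for a set of mutually nondominated points in two dimensions, following the per-objective form later given as Algorithm 3. The fallback to the Utopian point (the origin) when no superior neighbor exists on an objective is our assumption, and the example coordinates are illustrative:

```python
def prospect_indicator(p, rank_set):
    """Prospect indicator of individual p within its rank (sketch).
    For each objective, take the gap between p and the closest individual
    that is superior (smaller) on that objective; multiply the gaps.
    Assumption: with no superior individual, use the gap to the Utopian
    point at the origin."""
    v = 1.0
    for o in range(len(p)):
        superior = [q[o] for q in rank_set if q[o] < p[o]]
        v *= p[o] - (max(superior) if superior else 0.0)
    return v

# Three nondominated individuals a, b, c as in Figure 1 (coordinates assumed).
rank = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(prospect_indicator((2.0, 2.0), rank))  # shaded area for b -> 1.0
```

For these points the product of per-objective gaps equals the exclusive area of Eq. (2): the union volume with b is 8, without b it is 7, so I_P(b) = 1.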
Algorithm 2 prospectBasedParentSelection()
Require: P
1: p_1 = randomSelection(P)
2: p_2 = randomSelection(P)
3: if p_1 = p_2 then
4:   return p_1
5: else if p_1 ≻ p_2 then
6:   return p_1
7: else if p_2 ≻ p_1 then
8:   return p_2
9: else
10:  I_P(p_1) = prospectIndicator(p_1, R_rank(p_1))
11:  I_P(p_2) = prospectIndicator(p_2, R_rank(p_2))
12:  if I_P(p_1) > I_P(p_2) then
13:    return p_1
14:  else
15:    return p_2
16:  end if
17: end if

Algorithm 3 prospectIndicator()
Require: p, P
1: v = 1
2: for each o ∈ O do
3:   s = ∅
4:   for each n ∈ P do
5:     if f_o(n) < f_o(p) then
6:       if s = ∅ then
7:         s = n
8:       else if f_o(s) < f_o(n) then
9:         s = n
10:      end if
11:    end if
12:  end for
13:  v = v × (f_o(p) − f_o(s))
14: end for
15: return v

Fig. 1: An Example Measurement of the Prospect Indicator (objectives f_1 and f_2)

Algorithm 3 shows pseudocode to compute I_P(p). P denotes the set of individuals that are placed at the same rank as individual p. For each objective o, the distance between p and s is measured to compute I_P(p), where s denotes the individual that yields the closest yet superior objective value.

C. Prospect Indicator Based Environmental Selection

Algorithm 4 shows how the proposed environmental selection operator (prospectBasedEnvironmentalSelection() in Algorithm 1) works with the prospect indicator. In the environmental selection process, N individuals are selected from the 2N individuals in R_g as the next-generation population (P_{g+1}). The individuals in R_g are selected and moved to P_{g+1} on a rank-by-rank basis, starting with the top rank (Lines 2 to 8). In Line 3, nondominatedIndividualSelection() determines the dominance relationships among a given set of individuals and chooses the non-dominated (i.e., top-ranked) ones as R_ND; Algorithm 5 shows pseudocode of this operator. If the number of individuals in P_{g+1} ∪ R_ND is less than N, all of R_ND moves to P_{g+1}. Otherwise, a subset of R_ND moves to P_{g+1}. The subset is selected with subsetSelection() (Line 5); Algorithm 6 shows how this operator works with the prospect indicator. It sorts the individuals in R_ND from the ones with higher I_P values to the ones with lower I_P values (Line 4). The individuals with higher I_P values have higher chances of being selected into P_{g+1} (Lines 6 to 8).

The prospect indicator also indicates how distant each individual is to its direct neighbors in the objective space. For example, in Figure 1, I_P(b) indicates how distant individual b is to

individuals a and c. A higher I_P(b) means that b is more distant from a and c, and a lower I_P(b) means that b is closer to a and c. Therefore, the prospect indicator valuates the potential of an individual to reproduce offspring that spread out in the objective space and contribute to the population's diversity.

Algorithm 4 prospectBasedEnvironmentalSelection()
Require: R_g
1: P_{g+1} = ∅
2: while |P_{g+1}| < N do
3:   R_ND = nondominatedIndividualSelection(R_g \ P_{g+1})
4:   if |P_{g+1}| + |R_ND| > N then
5:     R_ND = subsetSelection(R_ND, N − |P_{g+1}|)
6:   end if
7:   P_{g+1} = P_{g+1} ∪ R_ND
8: end while
9: return P_{g+1}

Algorithm 5 nondominatedIndividualSelection()
Require: P
1: P' = ∅
2: for each p ∈ P, p ∉ P' do
3:   P' = P' ∪ {p}
4:   for each q ∈ P', q ≠ p do
5:     if p ≻ q then
6:       P' = P' \ {q}
7:     else if q ≻ p then
8:       P' = P' \ {p}
9:     end if
10:  end for
11: end for
12: return P'

Algorithm 6 subsetSelection()
Require: P, T
1: for each p ∈ P do
2:   I_P(p) = prospectIndicator(p, P)
3: end for
4: P_S = sort(P, I_P)
5: R = ∅
6: for each p ∈ P_S, |R| < T do
7:   R = R ∪ {p}
8: end for
9: return R

D. Adaptive Mutation Rate Adjustment

The proposed mutation rate adjustment operator is designed to balance exploration and exploitation in the optimization process. It increases the mutation rate (P_m in Algorithm 1) to encourage individuals to explore the decision variable space when they have not converged enough toward the Pareto front; this aims to increase convergence velocity by placing a higher priority on exploration. On the contrary, when individuals have converged enough to the Pareto front, the proposed operator decreases the mutation rate to exploit them and cover a wider range of the front; this aims to improve the diversity of individuals and stabilize convergence with limited performance fluctuations across generations.

The proposed operator examines the entropy of individuals in the objective space in order to estimate their proximity to the Pareto front. It decreases the mutation rate as the entropy decreases, based on the observation that randomly-generated individuals are disordered in the objective space at the first generation and become more ordered through generations. The entropy of individuals is computed with hypercubes that partition the objective space. As the example in Figure 2 shows, the objective space is bounded by the maximum and minimum objective values that individuals yield and is then divided into hypercubes. Figure 2 shows six individuals (A to F) in a three dimensional objective space that consists of eight hypercubes.

Fig. 2: An Example Hypercube Partition (1st, 2nd and 3rd objectives)

The entropy of individuals (H) is computed as:

    H = − Σ_{i ∈ C} P(i) log_2 P(i)        (4)

where

    P(i) = n_i / Σ_{i ∈ C} n_i        (5)

C denotes the set of hypercubes in the objective space. P(i) denotes the probability that individuals exist in hypercube i, and n_i denotes the number of individuals in hypercube i. In Figure 2, Σ_{i ∈ C} n_i = 6 and |C| = 8. Once the entropy H is obtained, it is normalized as follows:

    H_o = H / H_max = H / log_2 |C|        (6)

The mutation rate P_m is adjusted with H_o as follows, where P_m_max is the maximum mutation rate:

    P_m = P_m_max (1 − (1 − H_o)^2)        (7)

IV. EXPERIMENTAL RESULTS

This section evaluates PIBEA with 45 MOPs that are derived from three standard test problems, DTLZ1, DTLZ3 and DTLZ7 [19], with a varying number of objectives (m, three to nine) and decision variables (n, 12 to 24). This section also compares PIBEA with three existing EMOAs: NSGA-II, SPEA2 and AbYSS. AbYSS is designed to be a best-of-breed

algorithm that integrates the operators of NSGA-II, SPEA2 and PAES [20]: crowding distance from NSGA-II, the initial individual selection with density estimation from SPEA2 and elite archiving from PAES. AbYSS also incorporates the scatter search template [21], which enables scatter search for local search and diversification purposes.

TABLE I: Parameter Setup
Parameter | Value
Maximum Number of Generations | 250
Population Size | 100
SPEA2 and AbYSS External Archive Size | 100
AbYSS Reference Set Sizes | 10/10
AbYSS Improvement Method Iterations | 5
NSGA-II's and SPEA2's Crossover Rate | 0.9
PIBEA's Crossover Rate (P_c) | 0.9
Crossover Operator | SBX
Crossover Distribution Index (η_c) | 20
NSGA-II's and SPEA2's Mutation Rate | 1/(# of decision variables)
PIBEA's Maximum Mutation Rate (P_m_max) | 1/(# of decision variables)
Mutation Operator | Polynomial Mutation
Mutation Distribution Index (η_m) | 20
Size of V_GD Moving Window (w) | 10

NSGA-II, SPEA2 and AbYSS were configured as described in [9], [10] and [11], respectively. All experiments were conducted with jMetal [22]. Each experimental result is the average of 20 independent runs.

A. Performance Metrics

This paper uses the following four evaluation metrics.
Generational distance (GD): measures the average distance from the non-dominated individuals toward the Pareto-optimal front [23].
Extent (EX): measures the coverage of non-dominated individuals in the objective space [].
Spread (SP): measures the distribution of non-dominated individuals in the objective space [11].
Variance of generational distance (V_GD): the variance of GD values over the last w generations (Table I). It measures the stability of the GD performance that the non-dominated individuals yield; a large V_GD indicates greater fluctuation in GD.

B. Convergence

This section discusses the convergence of individuals with the GD metric. Table II shows the results on the DTLZ1 problem after 250 generations.
A lower GD value indicates that the non-dominated individuals are closer to the Pareto-optimal front; thus, lower is better. The first and second columns of each table show the number of objectives (m) and the number of decision variables (n) used in the experiments, respectively. Bold numbers represent the best results among the four algorithms. The GD results show that the non-dominated individuals obtained by PIBEA are closer to the Pareto-optimal front than those obtained by the other algorithms in every combination of the numbers of objectives and decision variables. Prospect-based parent selection and prospect-based environmental selection allow PIBEA to find non-dominated individuals that are better in terms of closeness to the Pareto-optimal front.

TABLE II: GD Results with DTLZ1

TABLE III: GD Results with DTLZ3

Fig. 4a shows the value of GD with respect to the number of generations when the number of objectives is three and the number of decision variables is 12. The figure shows that all four algorithms improve GD toward a particular value; however, PIBEA improves the non-dominated individuals faster than the other algorithms, as can be observed from its steeper slope over the first fifty generations. NSGA-II and SPEA2 show similar performance, while AbYSS is the worst. Fig. 4b shows the value of GD when the number of objectives is increased to nine and the number of decision variables is increased to 24. In this figure, only PIBEA improves the value of GD while the other algorithms cannot. Thus, PIBEA finds solutions faster in a simple problem (i.e., with three objectives) and allows GD to converge in a complicated problem (i.e., with nine objectives) while the others cannot. Table III shows the GD results on the DTLZ3 problem. Even though DTLZ3 is harder than DTLZ1, the same trend as with DTLZ1 can be observed. Table IV shows the GD results on the DTLZ7 problem. Even though DTLZ7 is harder than DTLZ1 and DTLZ3, a similar trend can be observed. In particular, PIBEA finds better solutions in most configurations, except for a few DTLZ7 configurations in which the number of decision variables is 12, 18 or 24.
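The GD metric used throughout this section can be sketched as follows. This is the averaged-distance variant, with an illustrative reference front (the reference points below are assumptions, not the DTLZ fronts):

```python
import math

def generational_distance(front, pareto_front):
    """GD sketch: the average Euclidean distance from each non-dominated
    individual to its nearest point on a reference Pareto-optimal front."""
    def nearest(p):
        return min(math.dist(p, q) for q in pareto_front)
    return sum(nearest(p) for p in front) / len(front)

# Illustrative reference front and an obtained front of two individuals.
ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(generational_distance([(0.5, 0.5), (1.0, 1.0)], ref))  # ~0.354
```

An obtained front lying exactly on the reference front yields GD = 0; as in the tables above, smaller values mean better convergence.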

Fig. 3: The Pareto Front Shapes of DTLZ1, DTLZ3 and DTLZ7 with Three Objectives

Fig. 4: GD Transition with DTLZ1 ((a) Three Objectives and 12 Decision Variables; (b) Nine Objectives and 24 Decision Variables)

TABLE IV: GD Results with DTLZ7

The results in Table IV might suggest that PIBEA is worse than NSGA-II when the number of objectives is large. However, Fig. 5, which shows the GD results on the DTLZ7 problem with nine objectives and 24 decision variables, reveals that the GD of NSGA-II fluctuates from generation to generation while PIBEA's GD is more stable. This is because of the adaptive mutation rate adjustment in PIBEA. Thus, the results show that even though PIBEA does not yield better GD at the final generation, it provides more stable solutions than NSGA-II. The stability aspect of the experimental results is discussed further in Section IV-D.

Fig. 5: GD Transition with DTLZ7 (Nine Objectives and 24 Decision Variables)

C. Coverage and Distribution

This section discusses the coverage of individuals with the EX metric and the distribution of individuals with the SP

metric. Tables V, VI and VII show the results of the EX metric on the DTLZ1, DTLZ and DTLZ7 problems, respectively. A higher value indicates better coverage. The results in these tables show that the non-dominated individuals from PIBEA have better coverage than those from the other algorithms in all combinations of the number of objectives and the number of decision variables. The experimental results indicate that the prospect-based parent selection and prospect-based environmental selection in PIBEA allow the non-dominated individuals to cover the Pareto-optimal front better. Tables VIII, IX and X show the results of the SP metric on the DTLZ1, DTLZ and DTLZ7 problems, respectively. A lower value indicates better distribution.

TABLE V: EX Results with DTLZ1

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.82E+     2.8274E-2  2.788E-1   .4E-
15  .112E-1    5.2211E-2  .42E-2     .78E-
18  .711E-1    .4E-2      .7E-2      5.5E-
21  1.4725E-1  1.471E-2   2.75E-2    .21E-
24  8.74E-2    1.5248E-2  2.4E-2     2.81E-
12  1.274E+    4.777E-    5.71E-     1.22E-2
15  2.27E-1    .477E-     .4841E-    .121E-
18  1.45E-1    2.557E-    .22E-      4.857E-
21  1.288E-1   2.117E-    2.455E-    .77E-
24  1.8E-1     1.814E-    2.42E-     2.822E-
12  4.55E+     5.875E-    5.42E-     2.1E-2
15  1.7E+      .488E-     2.7577E-   8.28E-
18  .551E-1    2.141E-    1.42E-     5.8827E-
21  2.2812E-1  2.281E-    1.814E-    4.775E-
24  7.81E-2    1.51E-     1.788E-    4.142E-

TABLE VI: EX Results with DTLZ

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.82E+     2.8274E-2  2.788E-1   .4E-
15  .112E-1    5.2211E-2  .42E-2     .78E-
18  .711E-1    .4E-2      .7E-2      5.5E-
21  1.4725E-1  1.471E-2   2.75E-2    .21E-
24  8.74E-2    1.5248E-2  2.4E-2     2.81E-
12  1.274E+    4.777E-    5.71E-     1.22E-2
15  2.27E-1    .477E-     .4841E-    .121E-
18  1.45E-1    2.557E-    .22E-      4.857E-
21  1.288E-1   2.117E-    2.455E-    .77E-
24  1.8E-1     1.814E-    2.42E-     2.822E-
12  4.55E+     5.875E-    5.42E-     2.1E-2
15  1.7E+      .488E-     2.7577E-   8.28E-
18  .551E-1    2.141E-    1.42E-     5.8827E-
21  2.2812E-1  2.281E-    1.814E-    4.775E-
24  7.81E-2    1.51E-     1.788E-    4.142E-

TABLE VII: EX Results with DTLZ7

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.414E+    1.E+       1.47E+     1.214E+
15  1.5777E+   1.58E+     1.E+       1.82E+
18  1.5582E+   1.21E+     1.8E+      1.224E+
21  1.54E+     1.E+       1.47E+     1.154E+
24  1.551E+    1.28E+     1.2E+      .8572E-1
12  .28E+      1.5E+      1.25E+     1.58E+
15  .1185E+    1.2484E+   1.84E+     1.4E+
18  .77E+      1.58E+     1.214E+    1.E+
21  .257E+     1.24E+     1.7E+      .77E-1
24  .1521E+    1.54E+     1.48E+     .4542E-1
12  1.77E+     1.141E+    1.5542E+   1.881E+
15  2.24E+     1.412E+    1.217E+    1.15E+
18  1.77E+     1.817E+    1.424E+    1.145E+
21  1.52E+     1.18E+     1.421E+    1.42E+
24  1.4E+      1.451E+    1.574E+    1.457E+

TABLE VIII: SP Results with DTLZ1

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.28E+     1.51E+     1.1E+      1.4E+
15  1.2887E+   1.1E+      1.57E+     1.E+
18  1.17E+     1.188E+    1.15E+     1.115E+
21  1.2E+      1.8E+      1.8E+      1.47E+
24  1.5E+      1.44E+     1.7E+      1.47E+
12  1.287E+    1.47E+     1.48E+     1.4E+
15  1.211E+    1.48E+     1.48E+     1.4E+
18  1.22E+     1.48E+     1.48E+     1.47E+
21  1.4E+      1.48E+     1.48E+     1.47E+
24  1.5E+      1.4E+      1.48E+     1.48E+
12  1.2485E+   1.4E+      1.48E+     1.4E+
15  1.145E+    1.4E+      1.4E+      1.48E+
18  1.28E+     1.4E+      1.4E+      1.4E+
21  1.78E+     1.4E+      1.4E+      1.4E+
24  1.7E+      1.4E+      1.4E+      1.4E+

TABLE IX: SP Results with DTLZ

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.1428E+   1.2E+      1.254E+    1.2712E+
15  1.2E+      1.87E+     1.5E+      1.E+
18  1.27E+     1.41E+     1.7E+      1.7E+
21  1.2E+      1.42E+     1.1E+      1.84E+
24  1.44E+     1.4E+      1.E+       1.E+
12  1.2818E+   1.47E+     1.47E+     1.44E+
15  1.24E+     1.48E+     1.48E+     1.47E+
18  1.E+       1.48E+     1.48E+     1.47E+
21  1.4E+      1.48E+     1.48E+     1.48E+
24  1.8E+      1.4E+      1.4E+      1.48E+
12  1.21E+     1.48E+     1.47E+     1.45E+
15  1.258E+    1.48E+     1.48E+     1.48E+
18  1.2E+      1.4E+      1.4E+      1.48E+
21  1.77E+     1.4E+      1.4E+      1.48E+
24  1.2E+      1.4E+      1.4E+      1.4E+

TABLE X: SP Results with DTLZ7

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.251E+    1.215E+    1.22E+     1.2428E+
15  1.28E+     1.275E+    1.221E+    1.2547E+
18  1.217E+    1.2417E+   1.242E+    1.28E+
21  1.214E+    1.254E+    1.2172E+   1.211E+
24  1.21E+     1.2477E+   1.242E+    1.22E+
12  1.18E+     1.2885E+   1.225E+    1.1E+
15  1.184E+    1.27E+     1.2424E+   1.124E+
18  1.22E+     1.2788E+   1.217E+    1.5E+
21  1.215E+    1.282E+    1.2414E+   1.17E+
24  1.215E+    1.272E+    1.2528E+   1.152E+
12  1.27E+     1.55E+     1.285E+    1.112E+
15  1.24E+     1.14E+     1.272E+    1.1E+
18  1.282E+    1.118E+    1.25E+     1.4E+
21  1.277E+    1.8E+      1.254E+    1.12E+
24  1.285E+    1.285E+    1.285E+    1.1E+
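The exact definitions of the EX and SP metrics are not reproduced in this excerpt. As a rough illustration of how such coverage and distribution measures are typically computed, the sketch below implements two common stand-ins: Schott's spacing (a distribution measure in the spirit of SP, lower is better) and the maximum spread (a coverage-style measure, higher is better). Whether these match the paper's EX and SP exactly is an assumption:

```python
import numpy as np

def spacing(front):
    """Schott's spacing: standard deviation of nearest-neighbour
    distances within the front (L1 distance in objective space).
    Lower means a more even distribution; 0 means perfectly even."""
    f = np.asarray(front, dtype=float)
    # L1 distance from each point to every other point
    d = np.abs(f[:, None, :] - f[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)       # ignore self-distance
    di = d.min(axis=1)                # nearest-neighbour distances
    return np.sqrt(((di - di.mean()) ** 2).sum() / (len(f) - 1))

def maximum_spread(front):
    """Maximum spread: diagonal of the bounding box of the front in
    objective space. Higher means the front covers a wider region."""
    f = np.asarray(front, dtype=float)
    return np.sqrt(((f.max(axis=0) - f.min(axis=0)) ** 2).sum())

# toy usage: three evenly spaced points on a linear front
front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(spacing(front))         # 0.0 (perfectly even)
print(maximum_spread(front))  # ~1.414 (sqrt(2))
```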
The results from the tables show that the non-dominated individuals from PIBEA have a better distribution than those from the other algorithms in all combinations of the number of objectives and the number of decision variables.

D. Stability

This section describes the stability of PIBEA and the other algorithms with the VGD metric. Tables XI, XII and XIII show the results of the VGD metric on the DTLZ1, DTLZ and DTLZ7 problems, respectively. A lower value indicates better stability, i.e., less fluctuation of the GD value from generation to generation. The results in these tables show that the Pareto-optimal fronts from PIBEA are more stable than those from the other algorithms in all combinations of the number of objectives and the number of decision variables, with the exception of a single configuration of the DTLZ7 problem. The experimental results

show that the adaptive mutation rate adjustment in PIBEA allows non-dominated individuals to have more stable convergence toward the Pareto-optimal front.

TABLE XI: VGD Results with DTLZ1

n   NODAME     SPEA2      NSGA-II    AbYSS
12  1.2581E-5  2.15E+     5.882E-    .81E+
15  2.5E-4     1.428E+    8.22E-1    1.181E+1
18  1.58E-4    2.451E+    1.4882E+   1.517E+1
21  4.8E-2     7.2E+      2.4E+      4.7788E+1
24  2.427E-2   .874E+     .558E+     2.74E+1
12  2.77E-2    1.E+1      4.22E+1    8.781E+
15  8.584E-2   1.4285E+2  1.47E+2    1.212E+1
18  4.545E-2   1.5E+2     1.5827E+2  2.14E+1
21  1.7E-1     .54E+1     2.5E+2     2.12E+1
24  .212E-2    .222E+2    2.527E+2   .1111E+1
12  5.41E-     2.11E+2    1.88E+2    .27E+
15  5.2E-2     7.524E+2   4.122E+2   1.75E+1
18  .5187E-1   1.22E+     8.8118E+2  .777E+1
21  .444E-1    2.824E+    2.788E+    2.21E+1
24  1.7E+      2.58E+     2.7E+      .42E+1

TABLE XII: VGD Results with DTLZ

n   NODAME     SPEA2      NSGA-II    AbYSS
12  .12E-      2.52E+     1.7575E-2  5.81E-1
15  2.14E-1    .577E+     2.85E-1    .222E+
18  1.1522E-1  8.45E+     2.27E-1    1.15E+1
21  1.187E-1   1.22E+1    4.248E+    2.42E+1
24  .478E-1    7.5E+      2.145E+    2.25E+1
12  1.8787E-   2.5E+1     2.151E+1   .8258E+
15  .7585E-1   2.17E+1    1.741E+1   1.755E+1
18  8.4842E-1  2.2775E+1  7.517E+1   2.41E+1
21  4.125E-1   4.41E+1    5.5E+1     .825E+1
24  1.288E+    4.E+1      .22E+1     1.82E+1
12  .775E-5    7.7E+      4.155E+1   1.751E+1
15  1.842E-    .45E+1     4.84E+1    7.454E+
18  2.45E-1    .18E+1     7.2E+1     .575E+1
21  .1527E-1   .41E+1     .1145E+2   2.145E+1
24  2.212E+    5.518E+1   8.5817E+1  4.71E+1

TABLE XIII: VGD Results with DTLZ7

n   NODAME     SPEA2      NSGA-II    AbYSS
12  .5E-1      1.521E-11  .1E-12     .822E-
15  1.2E-1     .711E-12   1.4847E-12 1.725E-5
18  1.14E-12   1.244E-12  .52E-1     1.2875E-5
21  1.21E-15   1.177E-12  .211E-1    5.81E-
24  1.122E-15  7.12E-1    4.57E-1    .7151E-
12  4.418E-11  8.881E-    4.1124E-7  2.42E-5
15  1.487E-1   2.85E-     2.8575E-7  .4E-
18  1.27E-1    1.514E-    1.218E-8   7.471E-
21  1.4458E-   .28E-7     1.257E-8   8.52E-
24  .71E-1     5.4E-7     4.57E-8    .71E-
12  8.2E-7     7.2187E-5  4.1225E-5  1.45E-5
15  1.88E-7    5.141E-5   .74E-5     2.54E-5
18  2.71E-7    4.5E-5     5.8E-      .44E-5
21  1.21E-7    1.24E-5    7.4884E-   8.12E-
24  7.5E-8     1.284E-5   .E-        5.4421E-

V. CONCLUSION

This paper studies an EMOA that leverages a new quality indicator, called the prospect indicator, for its parent selection and environmental selection operators.
Experimental results show that PIBEA effectively performs its operators in high-dimensional problems and outperforms three well-known existing EMOAs, NSGA-II, SPEA2 and AbYSS, in terms of convergence velocity, diversity of individuals, coverage of the Pareto front and performance stability.