A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS


HCMC University of Pedagogy, Thong Nguyen Huu et al.

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

Thong Nguyen Huu and Hao Tran Van
Department of Mathematics-Informatics, HCMC University of Pedagogy, 280 An Duong Vuong, Ho Chi Minh City, Viet Nam.
Email: thong_nh2002@yahoo.com, tranvhao@gmail.com

ABSTRACT

This paper proposes a new probabilistic algorithm for solving multi-objective optimization problems: the Probability-Driven Search (PDS) algorithm. The algorithm uses probabilities to control the search for Pareto optimal solutions. In particular, we use an absorbing Markov chain to argue the convergence of the algorithm. We test this approach by implementing the algorithm on some benchmark multi-objective optimization problems, and we find very good and stable results.

Keywords: multi-objective, optimization, stochastic, probability, algorithm.

1. Introduction

We introduced the Search via Probability (SVP) algorithm for solving single-objective optimization problems in [4]. In this paper, we extend the SVP algorithm into the Probability-Driven Search (PDS) algorithm for solving multi-objective optimization problems by replacing the normal order with the Pareto order. We compute the complexity of the Changing Technique of the algorithm, and in particular we use an absorbing Markov chain to argue the convergence of the Changing Technique. We test this approach by implementing the algorithm on some benchmark multi-objective optimization problems, and we find very good and stable results.

2. The model of a multi-objective optimization problem

A general multi-objective problem can be described as follows:

Minimize f_i(x) (i = 1, ..., s)
subject to g_j(x) ≤ 0 (j = 1, ..., r),

where x = (x_1, ..., x_n) and a_i ≤ x_i ≤ b_i (a_i, b_i in R, 1 ≤ i ≤ n).

A solution x is said to dominate a solution y if f_i(x) ≤ f_i(y) for every i in {1, ..., s} and f_i(x) < f_i(y) for at least one i in {1, ..., s}. A solution that is not dominated by any other solution is called a Pareto optimal solution. The set S of Pareto optimal solutions is called the Pareto optimal set.
The set of objective function values corresponding to the variables of S is called the Pareto front.
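The dominance test of Section 2 is the only comparison the algorithm relies on. As a minimal sketch (the function name and the tuple representation of objective vectors are ours, not from the paper), it can be written as:

```python
def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))
```

For example, (1, 2) dominates (2, 2), while (1, 3) and (2, 2) are mutually non-dominated.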

Journal of Science 33 (2012), Nguyen Huu Thong et al.

3. The Changing Technique of the PDS algorithm and its complexity

We consider a class of optimization problems with the following character: there is a fixed number k (1 ≤ k < n), independent of the size n of the problem, such that if we change the values of only k variables then there is the ability to find a better solution than the current one; let us call this class O_k.

We suppose that every variable x_i (1 ≤ i ≤ n) has m digits, listed from left to right as x_i1, x_i2, ..., x_im (each x_ij is an integer, 0 ≤ x_ij ≤ 9, 1 ≤ j ≤ m). Let L be the number of iterations for finding correct values of the j-th digits. The Changing Technique, which changes a solution x into a new solution y, is described in general steps as follows:

Input: a solution x
Output: a new solution y
S1. j ← 1 (determine the j-th digit);
S2. i ← 1 (start the counting variable of the loop);
S3. y ← x;
S4. Randomly select k variables of solution y and randomly change the values of the j-th digits of these variables;
S5. If x is dominated by y then x ← y;
S6. If i < L then i ← i + 1 and return to S3;
S7. If j < m then j ← j + 1 and return to S2;
S8. The end of the Changing Technique.

The Changing Technique finds the value of each digit from left to right, one by one. Consider the j-th digit: on each iteration the Changing Technique randomly selects k variables and randomly changes the values of the j-th digits of these variables, trying to find a better solution than the current one. Let A be the event that the technique finds correct values of the j-th digits of the k variables on one iteration. The probability of A is

p_A = Pr(A) = (k/n · 1/10)^k = k^k / (10^k n^k).

Let X be the number of occurrences of A in N iterations. X has the binomial distribution B(N, p_A) with probability mass function

Pr(X = x) = C(N, x) (p_A)^x (1 - p_A)^(N-x)   (x = 0, 1, ..., N).

Because N is sufficiently large and p_A is sufficiently small, the Poisson distribution can be used as an approximation to the binomial distribution B(N, p_A):

Pr(X = x) = e^(-λ) λ^x / x!,

with parameter λ = N p_A; the expected value of X is E(X) = λ.
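The binomial-to-Poisson approximation above is easy to check numerically (our own illustration with hypothetical values n = 10, k = 2 and N = 100000; none of these figures come from the paper):

```python
from math import comb, exp, factorial

n, k = 10, 2                  # hypothetical problem size and fixed number k
p_A = (k / (10 * n)) ** k     # probability of event A on one iteration
N = 100_000                   # number of iterations
lam = N * p_A                 # Poisson parameter: lambda = N * p_A

def binom_pmf(x):
    # exact binomial probability Pr(X = x) for X ~ B(N, p_A)
    return comb(N, x) * p_A**x * (1 - p_A) ** (N - x)

def poisson_pmf(x):
    # Poisson approximation with parameter lam
    return exp(-lam) * lam**x / factorial(x)

# Since N is large and p_A is small, the two pmfs agree closely near the mean lam.
```

With these values p_A = 4e-4 and λ = 40, so N·p_A comfortably exceeds n/k, matching the iteration-count argument that follows.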

Because the solution has n variables, we select an average number of iterations N such that the event A occurs at least n/k times:

E(X) > n/k  ⟺  N p_A > n/k  ⟺  N > (n/k) / p_A = (n/k) (10n/k)^k = 10^k n^(k+1) / k^(k+1).

Because every variable has m digits, the average number of iterations for finding correct values of a solution for the first time is

m · 10^k n^(k+1) / k^(k+1).

On each iteration the technique performs jobs with complexity O(k). Because k is a fixed number, the complexity of the Changing Technique is O(n^(k+1)).

4. The absorbing Markov chain of the Changing Technique

Without loss of generality, suppose that the solution has n variables and every variable has m = 5 digits. Let E_0 be the starting state, in which a solution is randomly selected, and let E_i (i = 1, 2, ..., 5) be the state of the solution having n variables with correct values found for the digits from the 1st to the i-th (1 ≤ i ≤ 5). Then (X_n; n = 0, 1, 2, ...) is a Markov chain with states {E_0, E_1, E_2, E_3, E_4, E_5}. Let p be the probability of the event in which the i-th digits of the n variables receive correct values. According to Section 3, we have

p = k^(k+1) / (10^k n^(k+1)).

Set q = 1 - p; the transition matrix is then

        | q  p  0  0  0  0 |
        | 0  q  p  0  0  0 |
P =     | 0  0  q  p  0  0 |   = (p_ij), i, j = 0, ..., 5.
        | 0  0  0  q  p  0 |
        | 0  0  0  0  q  p |
        | 0  0  0  0  0  1 |

The states E_i (0 ≤ i ≤ 4) are transient, and the state E_5 is absorbing, so the chain is an absorbing Markov chain. Its model is as follows:

Figure 1. The model of the absorbing Markov chain of the Changing Technique: each transient state E_i loops to itself with probability q and moves to E_(i+1) with probability p.

Absorption probabilities: Let u_i5 (0 ≤ i ≤ 4) be the probability that the chain will be absorbed in the absorbing state E_5 if it starts in the transient state E_i. Computing u_i5 in terms of the possibilities on the outcome of the first step, we have the equations

u_i5 = p_i5 + Σ_{j=0}^{4} p_ij u_j5   (0 ≤ i ≤ 4),

whose solution is u_05 = u_15 = u_25 = u_35 = u_45 = 1. The result tells us that, starting from any state E_i (0 ≤ i ≤ 4), absorption in state E_5 occurs with probability 1.

Time to absorption: Let t_i be the expected number of steps before the chain is absorbed in the absorbing state E_5, given that the chain starts in state E_i (0 ≤ i ≤ 4). Then we have

t_i = (5 - i) / p   (0 ≤ i ≤ 4).

5. The PDS algorithm for solving multi-objective optimization problems

5.1. The Changing Procedure

Without loss of generality, suppose that a solution of the problem has n variables and every variable has m = 5 digits. We use the Changing Technique of Section 3 and increase the speed of convergence by using two sets of probabilities [4] to create the Changing Procedure. The two sets of probabilities [4] are described as follows:

- The changing probabilities q = (0.46, 0.52, 0.6, 0.75, 1) of the digits of a variable increase from left to right. This means that left digits are more stable than right digits, and right digits change more often than left digits. In other words, the role of the left digit x_ij is more important than the role of the right digit x_i,j+1 (1 ≤ j ≤ m - 1) for evaluating the objective functions.
- The probabilities (r_1 = 0.5, r_2 = 0.25, r_3 = 0.25) for selecting the value of a digit: r_1 is the probability of choosing a random integer between 0 and 9 for the j-th digit; r_2 is the probability that the j-th digit is incremented by one or a certain value (+1, ..., +5); r_3 is the probability that the j-th digit is decremented by one or a certain value (-1, ..., -5).

We use a function random(num) that returns a random integer between 0 and num - 1. The Changing Procedure, which changes the values of a solution x via probability to create a new solution y, is described as follows:

The Changing Procedure
Input: a solution x
Output: a new solution y
S1. y ← x;
S2. Randomly select k variables of solution y and call these variables y_i (1 ≤ i ≤ k). Let x_ij be the j-th digit (1 ≤ j ≤ m) of variable x_i. The technique that changes the values of these variables is described as follows:
For i = 1 to k do
Begin_1
  y_i = 0;
  For j = 1 to m do
  Begin_2

    If j = 1 then b = 0 else b = 10;
    If (the probability of a random event is q_j) then
      If (the probability of a random event is r_1) then y_i = b*y_i + random(10);
      Else if (the probability of a random event is r_2) then y_i = b*y_i + (x_ij + 1);
      Else y_i = b*y_i + (x_ij - 1);
    Else y_i = b*y_i + x_ij;
  End_2
  If (y_i < a_i) then y_i = a_i;
  If (y_i > b_i) then y_i = b_i;
End_1;
S3. Return y;
S4. The end of the Changing Procedure.

The Changing Procedure has the following characteristics:

- The central idea of the PDS algorithm is that the variables of an optimization problem are separated into discrete digits, which are then changed under the guidance of probabilities and combined into a new solution.
- Because the role of left digits is more important than the role of right digits for evaluating the objective functions, the algorithm finds the values of each digit from left to right in every variable under the guidance of probabilities, and the newly found values may be better than the current ones (according to the probabilities).
- The parameter k: in practice, we do not know the true value of k for each problem. According to the statistics of many experiments, the best choice of k follows the ratio below:
  o If n ≤ 15, k is an integer selected at random from 1 to 5.
  o If n ≥ 16, k is chosen as follows: with probability 20%, k is an integer chosen randomly from 1 to n/2 (find the best peak of a hill to prepare to climb); with probability 80%, k is an integer selected at random from 1 to 4 (climbing the hill or carrying out the optimal number).

5.2. General steps of the Probability-Driven Search algorithm

We need to set three parameters S, M and L as follows:
- Let S be the set of Pareto optimal solutions to be found.
- Let M be the number of Pareto optimal solutions that the algorithm has the ability to find.
- After generating a random feasible solution X, let L be a number so large that after L repetitions the algorithm has the ability to find a Pareto optimal solution that dominates the solution X.
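A runnable sketch of the Changing Procedure in Python (our own rendering, with simplifying assumptions: variables are treated as m-digit non-negative integers clamped to [a_i, b_i], increments and decrements are fixed at ±1 rather than the paper's ±1..±5, and out-of-range digits are clamped to 0..9):

```python
import random

Q = (0.46, 0.52, 0.60, 0.75, 1.0)  # changing probabilities q, left digit to right
R1, R2 = 0.5, 0.25                 # digit-selection probabilities (r3 = 0.25)

def changing_procedure(x, lo, hi, k, m=5):
    """Randomly pick k variables of x and rebuild each one digit by digit."""
    y = list(x)
    for i in random.sample(range(len(x)), k):
        digits = [int(d) for d in str(x[i]).zfill(m)]
        v = 0
        for j in range(m):
            if random.random() < Q[j]:          # change the j-th digit
                u = random.random()
                if u < R1:
                    d = random.randrange(10)    # r1: fresh random digit
                elif u < R1 + R2:
                    d = digits[j] + 1           # r2: increment
                else:
                    d = digits[j] - 1           # r3: decrement
                d = min(9, max(0, d))
            else:
                d = digits[j]                   # keep the current digit
            v = 10 * v + d
        y[i] = min(hi, max(lo, v))              # clamp to [a_i, b_i]
    return y
```

Note how the left-to-right rebuild mirrors the pseudocode's y_i = b*y_i + digit accumulation, with the leftmost digits changed least often.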

The PDS algorithm for solving multi-objective optimization problems is described in general steps as follows:

S1. i ← 1 (determine the i-th solution);
S2. Select a randomly generated feasible solution X;
S3. j ← 1 (create the j-th loop);
S4. Use the Changing Procedure to transform the solution X into a new solution Y;
S5. If Y is not feasible then return to S4;
S6. If X is dominated by Y then X ← Y;
S7. If j < L then j ← j + 1 and return to S4;
S8. Put X into the set S; remove from S any solution that is dominated by another; remove overlapping solutions from S;
S9. If i < M then i ← i + 1 and return to S2;
S10. The end of the PDS algorithm.

Remarks: After generating a random feasible solution x, the algorithm repeats L times to find a solution that dominates x; thus each solution works independently of the other solutions. The changes of solutions are driven by probabilities, and every solution has the ability to change and to direct its position toward a point of the Pareto front.

6. Illustrative examples

In order to assess the performance of the PDS algorithm, it is benchmarked on the DTLZ optimization test problems developed by Deb et al. [1]. The problems are minimization problems with M = 3 objectives.

DTLZ1:
f_1(x) = (1/2) x_1 x_2 (1 + g(X_M)),
f_2(x) = (1/2) x_1 (1 - x_2) (1 + g(X_M)),
f_3(x) = (1/2) (1 - x_1) (1 + g(X_M)),
g(X_M) = 100 [ |X_M| + Σ_{x_i ∈ X_M} ( (x_i - 0.5)^2 - cos(20π(x_i - 0.5)) ) ],
0 ≤ x_i ≤ 1 (i = 1, ..., 7), x = (x_1, ..., x_7), X_M = (x_3, ..., x_7).

DTLZ2:
f_1(x) = (1 + g(X_M)) cos(x_1 π/2) cos(x_2 π/2),
f_2(x) = (1 + g(X_M)) cos(x_1 π/2) sin(x_2 π/2),
f_3(x) = (1 + g(X_M)) sin(x_1 π/2),
g(X_M) = Σ_{x_i ∈ X_M} (x_i - 0.5)^2,
0 ≤ x_i ≤ 1 (i = 1, ..., 12), x = (x_1, ..., x_12), X_M = (x_3, ..., x_12).

DTLZ3: As DTLZ2, except that the equation for g is replaced by the one from DTLZ1.

DTLZ4: As DTLZ2, except that x_i is replaced by x_i^α, where α > 0 (i = 1, 2).

DTLZ5: As DTLZ2, except that x_2 is replaced by (1 + 2 g(X_M) x_2) / (2 (1 + g(X_M))).

DTLZ6: As DTLZ5, except that the equation for g is replaced by g(X_M) = Σ_{x_i ∈ X_M} x_i^0.1.

DTLZ7:
Min f_1(x) = x_1; Min f_2(x) = x_2; Min f_3(x) = (1 + g(X_M)) h(f_1(x), f_2(x), g(X_M)),
g(X_M) = 1 + (9 / |X_M|) Σ_{x_i ∈ X_M} x_i,
h(f_1(x), f_2(x), g(X_M)) = M - Σ_{i=1}^{M-1} [ f_i(x) / (1 + g(X_M)) · (1 + sin(3π f_i(x))) ],
0 ≤ x_i ≤ 1 (i = 1, ..., 22), x = (x_1, ..., x_22), X_M = (x_3, ..., x_22).

Because the memory of the computer is limited, we divide the Pareto surface into four parts as follows:
Part 1: f_1(x) ≤ 0.5 and f_2(x) ≤ 0.5;
Part 2: f_1(x) ≤ 0.5 and f_2(x) ≥ 0.5;
Part 3: f_1(x) ≥ 0.5 and f_2(x) ≤ 0.5;
Part 4: f_1(x) ≥ 0.5 and f_2(x) ≥ 0.5.

Setting L = 30000 and M = 700, we apply the PDS algorithm to find 700 Pareto optimal solutions for each part. We use two digits after the decimal point for all problems. It takes 20 seconds to run the PDS algorithm to find the Pareto front of each problem. The Pareto fronts of the illustrative examples found by the PDS algorithm are shown in the figures below.
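The general steps S1-S10 of Section 5.2 can be sketched as a compact loop (a non-authoritative sketch: the random-solution generator, changing procedure, feasibility test, and dominance test are placeholders to be supplied per problem; the toy single-variable bi-objective used in testing is ours, not a DTLZ problem):

```python
def pds(random_solution, change, feasible, dominates, L, M):
    """Probability-Driven Search skeleton: M independent restarts, each refined
    L times by the changing procedure; returns the non-dominated archive S."""
    S = []
    for _ in range(M):
        x = random_solution()
        for _ in range(L):
            y = change(x)
            if not feasible(y):
                continue                     # S5: reject infeasible moves
            if dominates(y, x):
                x = y                        # S6: keep the dominating solution
        # S8: archive x, dropping dominated and duplicate entries
        if not any(dominates(s, x) for s in S) and x not in S:
            S = [s for s in S if not dominates(x, s)] + [x]
    return S
```

Each of the M solutions evolves independently for L iterations, matching the remark that solutions work independently of one another; only the final archive step compares them.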

Figure 2. DTLZ1

Figure 3. DTLZ2

Figure 4. DTLZ3

Figure 5. DTLZ4 (α = 10)

Figure 6. DTLZ4 (α = 50)

Figure 7. DTLZ4 (α = 100)

Figure 8. DTLZ5

Figure 9. DTLZ6

Figure 10. Pareto surface of DTLZ7

Remarks: The PDS algorithm has the ability to find a large number of Pareto optimal solutions, and these solutions are able to express the concentrated regions of the Pareto front. The PDS algorithm has the ability to maintain diversity, and the overall distribution of solutions on the Pareto front is acceptable.

Now we use three digits after the decimal point for problem DTLZ4; the Pareto front found by the PDS algorithm with α = 100 is as follows:

Figure 11. DTLZ4 (α = 100, with three digits after the decimal point)

7. Conclusions

In this paper, we consider a class of optimization problems with the following character: there is a fixed number k (1 ≤ k < n), independent of the size n of the problem, such that if we change the values of only k variables then there is the ability to find a better solution than the current one; we call this class O_k. We introduce the PDS

algorithm for solving multi-objective optimization problems of the class O_k. We compute the complexity of the Changing Technique of the PDS algorithm. Specifically, we use an absorbing Markov chain to argue the convergence of the Changing Technique. The PDS algorithm has the following advantages:

- There is no population or swarm; the algorithm is very simple and fast.
- The changes of solutions are driven by probabilities, and every solution has the ability to operate independently.
- The PDS algorithm has the ability to find a large number of Pareto optimal solutions, and the overall distribution of solutions on the Pareto front is acceptable.
- There are not many parameters to be adjusted.
- There is no predefined limit on the objective functions and constraints, and the algorithm does not need a pre-processing of the objective functions and constraints.

In a next paper, we will apply the PDS algorithm to solving optimization problems with equality constraints by increasing the degree of equality accuracy step by step. Moreover, because the memory of the computer is limited, we will study dividing the Pareto front into several parts and applying the PDS algorithm to finding many solutions in each part. Especially, we will apply the PDS algorithm to solving multi-objective portfolio optimization problems.

REFERENCES

1. Deb K., Thiele L., Laumanns M., and Zitzler E. (2002), Scalable Multi-Objective Optimization Test Problems. In Congress on Evolutionary Computation (CEC 2002), pages 825-830. IEEE Press.
2. Grinstead C. M., Snell J. L. (1997), Introduction to Probability.
Published by the American Mathematical Society.
3. Huband S., Hingston P., Barone L., and While L. (2006), A Review of Multiobjective Test Problems and a Scalable Test Problem Toolkit. IEEE Transactions on Evolutionary Computation, 10(5):477-506.
4. Thong Nguyen Huu and Hao Tran Van (2007), Search via Probability Algorithm for Engineering Optimization Problems. In Proceedings of the XIIth International Conference on Applied Stochastic Models and Data Analysis (ASMDA 2007), Chania, Crete, Greece, 2007. In book: Recent Advances in Stochastic Modeling and Data

Analysis, editor: Christos H. Skiadas, publisher: World Scientific Publishing Co Pte Ltd.
5. Zitzler E., Deb K., and Thiele L. (2000), Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173-195.

(Received: 25/7/2011; Accepted: 09/9/2011)