Self-adaptive Differential Evolution Algorithm for Constrained Real-Parameter Optimization
2006 IEEE Congress on Evolutionary Computation, Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada, July 16-21, 2006

Self-adaptive Differential Evolution Algorithm for Constrained Real-Parameter Optimization

V. L. Huang, A. K. Qin, Member, IEEE, and P. N. Suganthan, Senior Member, IEEE

Abstract: In this paper, we propose an extension of the Self-adaptive Differential Evolution algorithm (SaDE) to solve optimization problems with constraints. In comparison with the original SaDE algorithm, the replacement criterion is modified to handle constraints. The performance of the proposed method is reported on the set of 24 benchmark problems provided by the CEC2006 special session on constrained real-parameter optimization.

I. INTRODUCTION

MANY optimization problems in science and engineering have a number of constraints. Evolutionary algorithms have been successful in a wide range of applications. However, evolutionary algorithms naturally perform unconstrained search; when used for solving constrained optimization problems, they require additional mechanisms to handle constraints in their fitness function. In the literature, several constraint-handling techniques have been suggested for solving constrained optimization problems with evolutionary algorithms. Michalewicz and Schoenauer [3] grouped the methods for handling constraints in evolutionary algorithms into four categories: i) preserving the feasibility of solutions, ii) penalty functions, iii) separating feasible from infeasible solutions, and iv) other hybrid methods. The most common approach to dealing with constraints is the method based on penalty functions, which penalize infeasible solutions. However, penalty functions have, in general, several limitations. They require careful tuning to determine the most appropriate penalty factors. They also tend to behave poorly when the optimum lies on the boundary between the feasible and infeasible regions or when the feasible region is disjoint.
For some difficult problems in which it is extremely hard to locate a feasible solution due to an inappropriate representation scheme, researchers have designed special representations and operators that preserve the feasibility of solutions at all times. Recently, a few methods have emphasized the distinction between feasible and infeasible solutions in the search space, such as the behavioral memory method, superiority of feasible solutions over infeasible solutions, and repairing infeasible solutions. Researchers have also developed hybrid methods that combine an evolutionary algorithm with another technique (normally a numerical optimization approach) to handle constraints.

The Self-adaptive Differential Evolution algorithm (SaDE) was introduced in [1]; in SaDE, the choice of learning strategy and the two control parameters F and CR need not be pre-specified. During evolution, suitable learning strategies and parameter settings are gradually self-adapted according to the learning experience. In [1], SaDE was tested on a set of benchmark functions without constraints. In this work, we generalize SaDE to handle problems with constraints and investigate its performance on 24 constrained problems.

II. DIFFERENTIAL EVOLUTION ALGORITHM

The differential evolution (DE) algorithm, proposed by Storn and Price [4], is a simple but powerful population-based stochastic search technique for solving global optimization problems. The original DE algorithm is described in detail as follows. Let S, a subset of R^n, be the n-dimensional search space of the problem under consideration. DE evolves a population of NP n-dimensional individual vectors (i.e., solution candidates) X_i = (x_{i,1}, ..., x_{i,n}) in S, i = 1, ..., NP, from one generation to the next. The initial population should ideally cover the entire parameter space; each parameter of each individual vector is randomly distributed with uniform probability between the prescribed upper and lower parameter bounds x_j^u and x_j^l.
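As a small illustration of this initialization step (a sketch, not the authors' code; the bounds, population size, and function name are arbitrary examples):

```python
import numpy as np

def init_population(pop_size, lower, upper, seed=None):
    """Uniform random initialization of pop_size n-dimensional vectors.

    Each parameter x_{i,j} is drawn uniformly between its lower and upper
    bound, so the initial population covers the whole search space."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random((pop_size, lower.size)) * (upper - lower)

pop = init_population(50, lower=[-5.0] * 3, upper=[5.0] * 3, seed=1)
```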
At each generation G, DE employs mutation and crossover operations to produce a trial vector U_{i,G} for each individual vector X_{i,G}, also called the target vector, in the current population.

(The authors are with the School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Ave., Singapore; email: huangling@pmail.ntu.edu.sg, qinkai@pmail.ntu.edu.sg, epnsugan@ntu.edu.sg.)

©2006 IEEE

Authorized licensed use limited to: Nanyang Technological University. Downloaded on March 24, 2010 at 21:39:19 EDT from IEEE Xplore. Restrictions apply.

A. Mutation operation

For each target vector X_{i,G} at generation G, an associated mutant vector V_{i,G} = (v_{1,i,G}, v_{2,i,G}, ..., v_{n,i,G}) can usually be generated by using one of the following strategies, as shown in the available online DE codes:
DE/rand/1: V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G})
DE/best/1: V_{i,G} = X_{best,G} + F * (X_{r1,G} - X_{r2,G})
DE/current-to-best/1: V_{i,G} = X_{i,G} + F * (X_{best,G} - X_{i,G}) + F * (X_{r1,G} - X_{r2,G})
DE/best/2: V_{i,G} = X_{best,G} + F * (X_{r1,G} - X_{r2,G}) + F * (X_{r3,G} - X_{r4,G})
DE/rand/2: V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G}) + F * (X_{r4,G} - X_{r5,G})

where the indices r1, r2, r3, r4, r5 are random, mutually different integers generated in the range [1, NP], all different from the current target vector's index i. F is a factor in (0, 1+) for scaling the difference vectors, and X_{best,G} is the individual vector with the best fitness value in the population at generation G.

B. Crossover operation

After the mutation phase, a binomial crossover operation is applied to each pair of a generated mutant vector V_{i,G} and its corresponding target vector X_{i,G} to generate a trial vector U_{i,G} = (u_{1,i,G}, u_{2,i,G}, ..., u_{n,i,G}):

u_{j,i,G} = v_{j,i,G}, if (rand_j[0, 1] <= CR) or (j = j_rand); otherwise u_{j,i,G} = x_{j,i,G}, for j = 1, 2, ..., n,

where CR is a user-specified crossover constant in the range [0, 1) and j_rand is a randomly chosen integer in the range [1, n] that ensures the trial vector U_{i,G} differs from its corresponding target vector X_{i,G} in at least one parameter.

C. Selection operation

If the values of some parameters of a newly generated trial vector exceed the corresponding upper and lower bounds, we randomly and uniformly reinitialize them within the search range. Then the fitness values of all trial vectors are evaluated, and a selection operation is performed: the fitness value f(U_{i,G}) of each trial vector is compared to the fitness value f(X_{i,G}) of its corresponding target vector in the current population. If the trial vector has a fitness value smaller than or equal to that of the target vector (for a minimization problem), the trial vector replaces the target vector and enters the population of the next generation. Otherwise, the target vector remains in the population for the next generation.
The operation is expressed as follows:

X_{i,G+1} = U_{i,G}, if f(U_{i,G}) <= f(X_{i,G}); otherwise X_{i,G+1} = X_{i,G}.

The above three steps are repeated generation after generation until some specific stopping criteria are satisfied.

III. SADE ALGORITHM

To achieve good performance on a specific problem with the original DE algorithm, we need to try all available learning strategies in the mutation phase and fine-tune the corresponding critical control parameters CR, F, and NP. The performance of the original DE algorithm is highly dependent on the chosen strategy and parameter settings. Although we may find the most suitable strategy and the corresponding control parameters for a specific problem, doing so may require a huge amount of computation time. Also, during different stages of evolution, different strategies and parameter settings with different global and local search capabilities might be preferred. Therefore, we developed the SaDE algorithm, which can automatically adapt the learning strategies and parameter settings during evolution. The main ideas of the SaDE algorithm are summarized below.

A. Strategy Adaptation

SaDE probabilistically selects one out of several available learning strategies for each individual in the current population. Hence, we need several candidate learning strategies to choose from, as well as a procedure for determining the probability of applying each of them. In the preliminary SaDE version [1], only two candidate strategies were employed, i.e., rand/1/bin and current-to-best/2/bin. Our recent work suggests that incorporating more strategies can further improve the performance of SaDE. Here, we use four strategies instead of the original two:

DE/rand/1: V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G})
DE/current-to-best/2: V_{i,G} = X_{i,G} + F * (X_{best,G} - X_{i,G}) + F * (X_{r1,G} - X_{r2,G}) + F * (X_{r3,G} - X_{r4,G})
DE/rand/2: V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G}) + F * (X_{r4,G} - X_{r5,G})
DE/current-to-rand/1: U_{i,G} = X_{i,G} + K * (X_{r1,G} - X_{i,G}) + F * (X_{r2,G} - X_{r3,G})

In the strategy DE/current-to-rand/1, K is the coefficient of combination in [0, 1].
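The four candidate strategies, together with the binomial crossover of Section II-B, can be sketched as follows (a minimal sketch; the function names and string labels are mine, not from the paper):

```python
import numpy as np

def mutate(pop, i, best_idx, F, K, strategy, rng):
    """Produce a mutant vector for target i with one of the four SaDE
    candidate strategies. r1..r5 are mutually different random indices,
    all different from i, as required by the text."""
    r1, r2, r3, r4, r5 = (
        pop[j] for j in rng.choice(
            [j for j in range(len(pop)) if j != i], size=5, replace=False)
    )
    X, Xbest = pop[i], pop[best_idx]
    if strategy == "rand/1":
        return r1 + F * (r2 - r3)
    if strategy == "current-to-best/2":
        return X + F * (Xbest - X) + F * (r1 - r2) + F * (r3 - r4)
    if strategy == "rand/2":
        return r1 + F * (r2 - r3) + F * (r4 - r5)
    if strategy == "current-to-rand/1":
        return X + K * (r1 - X) + F * (r2 - r3)
    raise ValueError(f"unknown strategy: {strategy}")

def binomial_crossover(target, mutant, CR, rng):
    """u_j = v_j if rand_j <= CR or j == j_rand, else x_j (Section II-B)."""
    n = target.size
    mask = rng.random(n) <= CR
    mask[rng.integers(n)] = True   # j_rand guarantees one mutant parameter
    return np.where(mask, mutant, target)
```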
Since we now have four candidate strategies instead of the two in [1], let p_k, k = 1, 2, 3, 4, denote the probability of applying the k-th strategy to each individual in the current population. The initial probabilities are set equal, i.e., p_1 = p_2 = p_3 = p_4 = 0.25. Therefore, each strategy has an equal probability of being applied
to every individual in the initial population. According to these probabilities, we apply roulette-wheel selection to choose the strategy for each individual in the current population. After evaluating all newly generated trial vectors, the number of trial vectors generated by the k-th strategy that successfully enter the next generation is recorded as ns_k, k = 1, 2, 3, 4, and the number of trial vectors generated by the k-th strategy that are discarded is recorded as nf_k, k = 1, 2, 3, 4. ns_k and nf_k are accumulated within a specified number of generations (20 in our experiments), called the learning period. Then the probability p_k is updated as:

p_k = ns_k / (ns_k + nf_k)

The above expression represents the success rate of trial vectors generated by each strategy during the learning period. The probabilities of applying the four strategies are therefore updated every generation after the learning period. We accumulate ns_k and nf_k only over the most recent 20 generations to avoid possible side effects accumulated in much earlier learning stages. This adaptation procedure can gradually favor the most suitable learning strategy at different stages of the evolution for the problem under consideration.

B. Parameter Adaptation

In the original DE, the three control parameters CR, F, and NP are closely related to the problem under consideration. Here, we keep NP as a user-specified value, as in the original DE, so as to deal with problems of different dimensionalities. Between the two parameters CR and F, CR is much more sensitive to the problem's properties and complexity, such as multi-modality, while F is more related to convergence speed. Here, we allow F to take different random values in the range (0, 2] for different individuals in the current population, drawn from a normal distribution with mean 0.5 and standard deviation 0.3. This scheme maintains both local (small F values) and global (large F values) search ability to generate potentially good mutant vectors throughout the evolution process.
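The strategy-selection bookkeeping described in Section III-A can be sketched as follows (a sketch under assumptions: the final normalization and the small floor `eps`, which keeps unlucky strategies selectable, are my additions; the paper states the update p_k = ns_k / (ns_k + nf_k)):

```python
import numpy as np

def select_strategies(p, pop_size, rng):
    """Roulette-wheel selection: assign each individual one of the
    strategies with probabilities p (p must sum to 1)."""
    return rng.choice(len(p), size=pop_size, p=p)

def update_probabilities(ns, nf, eps=1e-2):
    """Success-rate update over the learning period: p_k proportional to
    ns_k / (ns_k + nf_k), then normalized so the probabilities sum to 1."""
    ns = np.asarray(ns, dtype=float)
    nf = np.asarray(nf, dtype=float)
    rate = ns / np.maximum(ns + nf, 1.0) + eps   # guard empty counts
    return rate / rate.sum()

p = update_probabilities(ns=[30, 10, 5, 15], nf=[10, 30, 35, 25])
```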
For the control parameter K in the strategy DE/current-to-rand/1, experiments show that a normally distributed random value for K consistently succeeds in optimizing a function. So here we set K = F to eliminate one more tuning parameter.

The control parameter CR plays an essential role in the original DE algorithm. A proper choice of CR may lead to good performance under several learning strategies, while a wrong choice may result in performance deterioration under any learning strategy. Moreover, good CR values usually fall within a small range, within which the algorithm can perform consistently well on a complex problem. Therefore, we accumulate the previous learning experience within a certain generational interval so as to dynamically adapt the value of CR to a suitable range. We assume that CR is normally distributed with mean CRm and standard deviation 0.1. Initially, CRm is set to 0.5, and different CR values conforming to this normal distribution are generated for each individual in the current population. These CR values remain fixed for 5 generations, after which a new set of CR values is generated under the same normal distribution. During every generation, the CR values associated with trial vectors that successfully enter the next generation are recorded. After a specified number of generations (20 in our experiments), CR has been redrawn several times (20/5 = 4 times in our experiments) under the same normal distribution with center CRm and standard deviation 0.1; we then recalculate the mean of the normal distribution of CR according to all recorded CR values corresponding to successful trial vectors during this period. With this new mean and standard deviation 0.1, we repeat the above procedure. As a result, a proper range of CR values for the current problem can be learned to suit the particular problem. Note that we reset the record of successful CR values once we recalculate the mean of the normal distribution, to avoid possible inappropriate long-term accumulation effects.
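In outline, the CR adaptation can be organized as below (a sketch; the class and method names are mine, the caller decides the redraw interval and learning period, and clipping CR into [0, 1) is my assumption to keep it a valid crossover rate):

```python
import numpy as np

class CRAdaptation:
    """CR ~ N(CRm, 0.1) per individual; periodically redraw CR and
    recenter CRm on the CR values of successful trial vectors."""

    def __init__(self, pop_size, crm=0.5, sigma=0.1, seed=None):
        self.rng = np.random.default_rng(seed)
        self.pop_size, self.crm, self.sigma = pop_size, crm, sigma
        self.successful = []          # CR values of trials that survived
        self.cr = self._draw()

    def _draw(self):
        cr = self.rng.normal(self.crm, self.sigma, self.pop_size)
        return np.clip(cr, 0.0, 1.0 - 1e-9)   # keep CR inside [0, 1)

    def record_success(self, i):
        """Call when individual i's trial vector enters the next generation."""
        self.successful.append(float(self.cr[i]))

    def regenerate(self):
        """Redraw CR values under the current N(CRm, 0.1)."""
        self.cr = self._draw()

    def end_learning_period(self):
        """Recenter CRm on the recorded successful CR values, then reset
        the record to avoid long-term accumulation effects."""
        if self.successful:
            self.crm = float(np.mean(self.successful))
        self.successful = []
        self.regenerate()
```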
We introduce the above learning-strategy and parameter adaptation schemes into the original DE algorithm to obtain the Self-adaptive Differential Evolution (SaDE) algorithm. SaDE does not require choosing a particular learning strategy or setting specific values for the critical control parameters CR and F. The learning strategy and the control parameter CR, which are highly dependent on the problem's characteristics and complexity, are self-adapted by using the previous learning experience. Therefore, the SaDE algorithm can demonstrate consistently good performance on problems with different properties, such as unimodal and multimodal problems. The influence on the performance of SaDE of the number of generations over which previous learning information is collected is not significant.

C. Local search

To speed up the convergence of the SaDE algorithm, we apply a local search procedure periodically (once every fixed number of generations) to a small percentage of individuals, including the best individual found so far and individuals randomly selected from among the best portion of the current population. Here, we employ the Sequential Quadratic Programming (SQP) method as the local search method.

IV. HANDLING CONSTRAINTS

In real-world applications, most optimization problems have complex constraints. A constrained optimization problem is usually written as a nonlinear programming
problem of the following form:

Minimize: f(x), x = (x_1, x_2, ..., x_n) and x in S

Subject to: g_i(x) <= 0, i = 1, ..., q
h_j(x) = 0, j = q+1, ..., m

S is the whole search space, q is the number of inequality constraints, and the number of equality constraints is m - q. For convenience, the equality constraints are always transformed into inequality form, and then all the constraints can be combined as:

G_i(x) = max{g_i(x), 0}, i = 1, ..., q
G_i(x) = |h_i(x)|, i = q+1, ..., m

Therefore, the objective of our algorithm is to minimize the fitness function f(x) while the optimum solutions obtained satisfy all the constraints G_i(x).

Among the various constraint-handling methods mentioned in the introduction, some methods based on the superiority of feasible solutions, such as the approach proposed by Deb [5], have demonstrated promising performance, as indicated in [6][7], which handle constraints using DE. Besides, Deb's selection criterion [5] has no parameters to fine-tune, which matches the motivation of our SaDE as well: no fine-tuning of parameters, as much as possible. Hence, we incorporate this constraint-handling technique as follows. During the selection procedure, the trial vector is compared to its corresponding target vector in the current population, considering both the fitness value and the constraints. The trial vector replaces the target vector and enters the population of the next generation if any of the following conditions is true:

1) The trial vector is feasible and the target vector is not.
2) The trial vector and target vector are both feasible, and the trial vector has a fitness value smaller than or equal to that of the target vector (for a minimization problem).
3) The trial vector and target vector are both infeasible, but the trial vector has a smaller overall constraint violation.

The overall constraint violation is a weighted mean of all the constraint violations, expressed as follows:

v(x) = ( sum_{i=1..m} w_i * G_i(x) ) / ( sum_{i=1..m} w_i )

where w_i = 1 / G_max,i is a weight parameter and G_max,i is the maximum violation of constraint G_i(x) obtained so far.
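Deb's replacement rules and the weighted violation measure above can be sketched as follows (a minimal sketch; solutions are represented as (fitness, violation) pairs, and the tiny guard against division by zero is my addition):

```python
import numpy as np

def overall_violation(G, Gmax):
    """v(x) = sum_i w_i * G_i(x) / sum_i w_i, with w_i = 1 / Gmax_i,
    where Gmax_i is the largest violation of constraint i seen so far."""
    G = np.asarray(G, dtype=float)
    w = 1.0 / np.maximum(np.asarray(Gmax, dtype=float), 1e-12)  # guard
    return float(np.sum(w * G) / np.sum(w))

def trial_replaces_target(f_trial, v_trial, f_target, v_target):
    """Deb-style comparison used in the modified selection step.
    v == 0 means the vector is feasible."""
    if v_trial == 0.0 and v_target > 0.0:    # 1) only the trial is feasible
        return True
    if v_trial == 0.0 and v_target == 0.0:   # 2) both feasible: fitness
        return f_trial <= f_target
    if v_trial > 0.0 and v_target > 0.0:     # 3) both infeasible: violation
        return v_trial < v_target
    return False                             # target feasible, trial not
```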
Here, the weight w_i = 1 / G_max,i varies during the evolution in order to accurately normalize the constraints of the problem, so that the overall constraint violation represents all constraints more equally.

V. EXPERIMENTAL RESULTS

We evaluate the performance of the SaDE algorithm on the 24 constrained benchmark functions of [2], which include linear, nonlinear, quadratic, cubic, and polynomial constraints. The population size is set to 50, and for each function the SaDE algorithm is run 25 times. We use the fitness values of the best known solutions f(x*) as newly updated in [2]. The error values achieved when FES = 5e+3, 5e+4, and 5e+5 for the 24 test functions are listed in Tables I-IV. The number of function evaluations (FES) needed in each run to find a solution satisfying the success condition of [2] is recorded in Table V, together with the success rate, feasible rate, and success performance. The convergence maps of SaDE on functions 1-6, 7-12, 13-18, and 19-24 are plotted in Figures 1-4, respectively.

Figure 1: Convergence graph for functions g01-g06. (1-a) log(f(x) - f(x*)) vs FES; (1-b) log(v) vs FES.
Figure 2: Convergence graph for functions g07-g12. (2-a) log(f(x) - f(x*)) vs FES; (2-b) log(v) vs FES.

Figure 3: Convergence graph for functions g13-g18. (3-a) log(f(x) - f(x*)) vs FES; (3-b) log(v) vs FES.

Figure 4: Convergence graph for functions g19-g24. (4-a) log(f(x) - f(x*)) vs FES; (4-b) log(v) vs FES.

From the results, we observe that the SaDE algorithm could reach the newly updated best known solutions for all problems except problems 2 and 22. As shown in Table V, the feasible rates of all problems are
100%, except problems 2 and 22. Problem 2 is highly constrained, and no algorithm in the literature has found feasible solutions for it. The success rates are very encouraging, as most problems reach 100%. Problems 2, 3, 1, 14, 18, 21, and 23 have 84%, 96%, 8%, 92%, 6%, and 88%, respectively. Problem 17 has 4%, with a better solution found successfully only once. Although the success rate on problem 22 is 0%, the result we obtained is indeed much better than the previous best known solution and approximates the newly updated best known solution. We set Max_FES to 5e+5; however, the experimental results show that SaDE actually achieved the best known solutions within 5e+4 FES for many problems. We calculate the algorithm complexity according to [2], as shown in Table VI. We use Matlab 6 to implement the algorithm, and the system configuration is: Intel Pentium 4 CPU 3.0 GHz, 2 GB of memory, Windows XP Professional Version 2002.

TABLE VI: COMPUTATIONAL COMPLEXITY (T1, T2, (T2 - T1)/T1)

VI. CONCLUSION

In this paper, we generalized the Self-adaptive Differential Evolution algorithm to handle optimization problems with multiple constraints, without introducing any additional parameters. The performance of our approach was evaluated on the testbed of the CEC2006 special session on constrained real-parameter optimization. The SaDE algorithm demonstrated effectiveness and robustness.

REFERENCES

[1] A. K. Qin and P. N. Suganthan, "Self-adaptive Differential Evolution Algorithm for Numerical Optimization," in IEEE Congress on Evolutionary Computation (CEC 2005), Edinburgh, Scotland, Sep. 2-5, 2005.
[2] J. J. Liang, T. P. Runarsson, E. Mezura-Montes, M. Clerc, P. N. Suganthan, C. A. C. Coello, and K. Deb, "Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter Optimization," Technical Report, 2006.
[3] Z. Michalewicz and M. Schoenauer, "Evolutionary Algorithms for Constrained Parameter Optimization Problems," Evolutionary Computation, 4(1):1-32, 1996.
[4] R. Storn and K. V. Price, "Differential Evolution: A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, 11:341-359, 1997.
[5] K. Deb, "An Efficient Constraint Handling Method for Genetic Algorithms," Computer Methods in Applied Mechanics and Engineering, 186(2/4):311-338, 2000.
[6] J. Lampinen, "A Constraint Handling Approach for the Differential Evolution Algorithm," in Proceedings of the Congress on Evolutionary Computation 2002 (CEC 2002), volume 2, pages 1468-1473, Piscataway, New Jersey, May 2002.
[7] R. Landa-Becerra and C. A. C. Coello, "Optimization with Constraints using a Cultured Differential Evolution Approach," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), volume 1, pages 27-34, Washington DC, USA, June 2005. ACM Press.
TABLE I: ERROR VALUES ACHIEVED WHEN FES = 5e+3, 5e+4, 5e+5 FOR PROBLEMS g01-g06.

TABLE II: ERROR VALUES ACHIEVED WHEN FES = 5e+3, 5e+4, 5e+5 FOR PROBLEMS g07-g12.

TABLE III: ERROR VALUES ACHIEVED WHEN FES = 5e+3, 5e+4, 5e+5 FOR PROBLEMS g13-g18.

TABLE IV: ERROR VALUES ACHIEVED WHEN FES = 5e+3, 5e+4, 5e+5 FOR PROBLEMS g19-g24.

TABLE V: NUMBER OF FES TO ACHIEVE THE FIXED ACCURACY LEVEL (f(x) - f(x*) <= 0.0001), SUCCESS RATE, FEASIBLE RATE, AND SUCCESS PERFORMANCE.
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationSome modelling aspects for the Matlab implementation of MMA
Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationSimultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals
Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,
More informationSingle-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition
Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu
More informationCollege of Computer & Information Science Fall 2009 Northeastern University 20 October 2009
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationAnnexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances
ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationLecture Notes on Linear Regression
Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume
More informationThe Minimum Universal Cost Flow in an Infeasible Flow Network
Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran
More informationInteractive Bi-Level Multi-Objective Integer. Non-linear Programming Problem
Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan
More informationA new Approach for Solving Linear Ordinary Differential Equations
, ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of
More informationADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING
1 ADVANCED ACHINE LEARNING ADVANCED ACHINE LEARNING Non-lnear regresson technques 2 ADVANCED ACHINE LEARNING Regresson: Prncple N ap N-dm. nput x to a contnuous output y. Learn a functon of the type: N
More informationDETERMINATION OF TEMPERATURE DISTRIBUTION FOR ANNULAR FINS WITH TEMPERATURE DEPENDENT THERMAL CONDUCTIVITY BY HPM
Ganj, Z. Z., et al.: Determnaton of Temperature Dstrbuton for S111 DETERMINATION OF TEMPERATURE DISTRIBUTION FOR ANNULAR FINS WITH TEMPERATURE DEPENDENT THERMAL CONDUCTIVITY BY HPM by Davood Domr GANJI
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationLecture 20: November 7
0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:
More informationFUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM
Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More informationA Novel Evolutionary Algorithm for Capacitor Placement in Distribution Systems
DOI.703/s40707-013-0003-x STF Journal of Engneerng Technology (JET), Vol. No. 3, Dec 013 A Novel Evolutonary Algorthm for Capactor Placement n Dstrbuton Systems J-Pyng Chou and Chung-Fu Chang Abstract
More informationCOMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
More informationCS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016
CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng
More informationSecond Order Analysis
Second Order Analyss In the prevous classes we looked at a method that determnes the load correspondng to a state of bfurcaton equlbrum of a perfect frame by egenvalye analyss The system was assumed to
More informationA Hybrid Differential Evolution Algorithm Game Theory for the Berth Allocation Problem
A Hybrd Dfferental Evoluton Algorthm ame Theory for the Berth Allocaton Problem Nasser R. Sabar, Sang Yew Chong, and raham Kendall The Unversty of Nottngham Malaysa Campus, Jalan Broga, 43500 Semenyh,
More informationThe Convergence Speed of Single- And Multi-Objective Immune Algorithm Based Optimization Problems
The Convergence Speed of Sngle- And Mult-Obectve Immune Algorthm Based Optmzaton Problems Mohammed Abo-Zahhad Faculty of Engneerng, Electrcal and Electroncs Engneerng Department, Assut Unversty, Assut,
More informationAppendix B: Resampling Algorithms
407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More information4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA
4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected
More informationJournal of Applied Research and Technology ISSN: Centro de Ciencias Aplicadas y Desarrollo Tecnológico.
Journal of Appled Research and Technology ISSN: 1665-6423 jart@aleph.cnstrum.unam.mx Centro de Cencas Aplcadas y Desarrollo Tecnológco Méxco Lu Chun-Lang; Chu Shh-Yuan; Hsu Chh-Hsu; Yen Sh-Jm Enhanced
More informationCOEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN
Int. J. Chem. Sc.: (4), 04, 645654 ISSN 097768X www.sadgurupublcatons.com COEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN R. GOVINDARASU a, R. PARTHIBAN a and P. K. BHABA b* a Department
More informationA New Evolutionary Computation Based Approach for Learning Bayesian Network
Avalable onlne at www.scencedrect.com Proceda Engneerng 15 (2011) 4026 4030 Advanced n Control Engneerng and Informaton Scence A New Evolutonary Computaton Based Approach for Learnng Bayesan Network Yungang
More informationCopyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor
Taylor Enterprses, Inc. Control Lmts for P Charts Copyrght 2017 by Taylor Enterprses, Inc., All Rghts Reserved. Control Lmts for P Charts Dr. Wayne A. Taylor Abstract: P charts are used for count data
More informationTHE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS
Nnth Internatonal IBPSA Conference Montréal, Canada August 5-8, 2005 THE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS Jonathan Wrght, and Al Alajm Department
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More informationEconomic dispatch solution using efficient heuristic search approach
Leonardo Journal of Scences Economc dspatch soluton usng effcent heurstc search approach Samr SAYAH QUERE Laboratory, Faculty of Technology, Electrcal Engneerng Department, Ferhat Abbas Unversty, Setf
More informationOptimum Design of Steel Frames Considering Uncertainty of Parameters
9 th World Congress on Structural and Multdscplnary Optmzaton June 13-17, 211, Shzuoka, Japan Optmum Desgn of Steel Frames Consderng ncertanty of Parameters Masahko Katsura 1, Makoto Ohsak 2 1 Hroshma
More informationSome Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)
Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998
More informationA HYBRID DIFFERENTIAL EVOLUTION -ITERATIVE GREEDY SEARCH ALGORITHM FOR CAPACITATED VEHICLE ROUTING PROBLEM
IJCMA: Vol. 6, No. 1, January-June 2012, pp. 1-19 Global Research Publcatons A HYBRID DIFFERENTIAL EVOLUTION -ITERATIVE GREEDY SEARCH ALGORITHM FOR CAPACITATED VEHICLE ROUTING PROBLEM S. Kavtha and Nrmala
More informationMaximizing Overlap of Large Primary Sampling Units in Repeated Sampling: A comparison of Ernst s Method with Ohlsson s Method
Maxmzng Overlap of Large Prmary Samplng Unts n Repeated Samplng: A comparson of Ernst s Method wth Ohlsson s Method Red Rottach and Padrac Murphy 1 U.S. Census Bureau 4600 Slver Hll Road, Washngton DC
More informationCHAPTER 2 MULTI-OBJECTIVE GENETIC ALGORITHM (MOGA) FOR OPTIMAL POWER FLOW PROBLEM INCLUDING VOLTAGE STABILITY
26 CHAPTER 2 MULTI-OBJECTIVE GENETIC ALGORITHM (MOGA) FOR OPTIMAL POWER FLOW PROBLEM INCLUDING VOLTAGE STABILITY 2.1 INTRODUCTION Voltage stablty enhancement s an mportant tas n power system operaton.
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationCurve Fitting with the Least Square Method
WIKI Document Number 5 Interpolaton wth Least Squares Curve Fttng wth the Least Square Method Mattheu Bultelle Department of Bo-Engneerng Imperal College, London Context We wsh to model the postve feedback
More informationAn Admission Control Algorithm in Cloud Computing Systems
An Admsson Control Algorthm n Cloud Computng Systems Authors: Frank Yeong-Sung Ln Department of Informaton Management Natonal Tawan Unversty Tape, Tawan, R.O.C. ysln@m.ntu.edu.tw Yngje Lan Management Scence
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More information10.34 Fall 2015 Metropolis Monte Carlo Algorithm
10.34 Fall 2015 Metropols Monte Carlo Algorthm The Metropols Monte Carlo method s very useful for calculatng manydmensonal ntegraton. For e.g. n statstcal mechancs n order to calculate the prospertes of
More informationPrimer on High-Order Moment Estimators
Prmer on Hgh-Order Moment Estmators Ton M. Whted July 2007 The Errors-n-Varables Model We wll start wth the classcal EIV for one msmeasured regressor. The general case s n Erckson and Whted Econometrc
More informationEEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming
EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-
More informationResearch Article Green s Theorem for Sign Data
Internatonal Scholarly Research Network ISRN Appled Mathematcs Volume 2012, Artcle ID 539359, 10 pages do:10.5402/2012/539359 Research Artcle Green s Theorem for Sgn Data Lous M. Houston The Unversty of
More informationConstrained Evolutionary Programming Approaches to Power System Economic Dispatch
Proceedngs of the 6th WSEAS Int. Conf. on EVOLUTIONARY COMPUTING, Lsbon, Portugal, June 16-18, 2005 (pp160-166) Constraned Evolutonary Programmng Approaches to Power System Economc Dspatch K. Shant Swarup
More informationArtificial neural network regression as a local search heuristic for ensemble strategies in differential evolution
Nonlnear Dyn (2016) 84:895 914 DOI 10.1007/s11071-015-2537-8 ORIGINAL PAPER Artfcal neural network regresson as a local search heurstc for ensemble strateges n dfferental evoluton Iztok Fster Ponnuthura
More informationDUE: WEDS FEB 21ST 2018
HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant
More informationCSC 411 / CSC D11 / CSC C11
18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationLossy Compression. Compromise accuracy of reconstruction for increased compression.
Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost
More informationParametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010
Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Survey Workng Group Semnar March 29, 2010 1 Outlne Introducton Proposed method Fractonal mputaton Approxmaton Varance estmaton Multple mputaton
More informationWhich Separator? Spring 1
Whch Separator? 6.034 - Sprng 1 Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng 3 Margn of a pont " # y (w $ + b) proportonal
More informationYong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )
Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often
More informationA discrete differential evolution algorithm for multi-objective permutation flowshop scheduling
A dscrete dfferental evoluton algorthm for mult-objectve permutaton flowshop schedulng M. Baolett, A. Mlan, V. Santucc Dpartmento d Matematca e Informatca Unverstà degl Stud d Peruga Va Vanvtell, 1 Peruga,
More informationTHE general problem tackled using an optimization
Hgh-Dmensonal Real-Parameter Optmzaton usng Self-Adaptve Dfferental Evoluton Algorthm wth Populaton Sze Reducton Janez Brest, Member, IEEE, Aleš Zamuda, Student Member, IEEE, BorkoBoškovć, Student Member,
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationModule 2. Random Processes. Version 2 ECE IIT, Kharagpur
Module Random Processes Lesson 6 Functons of Random Varables After readng ths lesson, ou wll learn about cdf of functon of a random varable. Formula for determnng the pdf of a random varable. Let, X be
More information