An Adaptive Learning Particle Swarm Optimizer for Function Optimization


Changhe Li and Shengxiang Yang

The authors are with the Department of Computer Science, University of Leicester, University Road, Leicester LE1 7RH, United Kingdom (email: {cl160, s.yang}@mcs.le.ac.uk). This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.

Abstract: Traditional particle swarm optimization (PSO) suffers from the premature convergence problem, which usually results in PSO being trapped in local optima. This paper presents an adaptive learning PSO (ALPSO) based on a variant of the PSO learning strategy. In ALPSO, the learning mechanism of each particle is separated into three parts: its own historical best position, the closest neighbor, and the global best one. By using this individual-level adaptive technique, a particle can effectively guide its exploration and exploitation behavior. A set of 21 test functions, including un-rotated, rotated, and composition functions, was used to test the performance of ALPSO. In comparison with several PSO variants, ALPSO shows outstanding performance on most test functions, especially in terms of fast convergence.

I. INTRODUCTION

Particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart in [1], [2]. PSO is motivated by the social behavior of organisms, such as bird flocking and fish schooling. In PSO, a swarm of particles fly through the search space. Each particle follows the previous best position found by its neighbor particles and the previous best position found by itself. In the past decade, PSO has been actively studied and applied to many academic and real-world problems with promising results, due to its fast convergence [8].

Ever since PSO was first introduced, several major versions of the PSO algorithm have been developed [8]. Each particle is represented by a position and a velocity, which are updated as follows:

  V_i^d = ω·V_i^d + η_1·r_1·(pbest_i^d − X_i^d) + η_2·r_2·(gbest^d − X_i^d)   (1)
  X_i^d = X_i^d + V_i^d   (2)

where X_i^d and V_i^d represent the position and velocity of the d-th dimension of particle i, updated from their previous values, pbest_i and gbest are the best position found by particle i so far and the best position found by the whole swarm so far respectively, ω ∈ (0, 1) is an inertia weight, which determines how much of the previous velocity is preserved, η_1 and η_2 are the acceleration constants, and r_1 and r_2 are random numbers generated in the interval [0.0, 1.0].
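For reference, here is a minimal NumPy sketch of the update in Eqs. (1) and (2); since the parameter values used later in the paper are not recoverable from this transcription, the widely used defaults ω ≈ 0.73 and η_1 = η_2 ≈ 1.5 are assumed, and the function name is ours.

    import numpy as np

    def pso_step(X, V, pbest, gbest, omega=0.7298, eta1=1.496, eta2=1.496):
        # One synchronous update of Eqs. (1)-(2) for a swarm X of shape (pop, dim).
        # omega, eta1, eta2 are commonly used values, not necessarily the paper's.
        r1 = np.random.rand(*X.shape)
        r2 = np.random.rand(*X.shape)
        V = omega * V + eta1 * r1 * (pbest - X) + eta2 * r2 * (gbest - X)
        return X + V, V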
There are two main models of the PSO algorithm, called gbest (global best) and lbest (local best), which differ in the way the neighborhood of each particle is defined. In the gbest model, the neighborhood of a particle consists of the particles in the whole swarm, which share information with each other. On the contrary, in the lbest model, the neighborhood of a particle is defined by several fixed particles. The two models give different optimization performances on different problems. Kennedy and Eberhart [3] and Poli et al. [8] pointed out that the gbest model has a faster convergence speed but a higher chance of getting stuck in local optima than lbest. Conversely, the lbest model is less vulnerable to the attraction of local optima but has a slower convergence speed than the gbest model.

In order to improve PSO's performance, we present an adaptive learning PSO (ALPSO) that utilizes a new learning strategy. In ALPSO, each particle can adjust its search strategy according to the selection ratios of four learning operators in different surrounding environments. The selection ratio of each operator is calculated in the same way as in [4]. For the global best particle, we introduce a learning method that can extract the promising information from all improved particles.

The rest of this paper is organized as follows. Section II describes the adaptive learning PSO. The experimental study is presented in Section III, and finally conclusions are given in Section IV.

II. ADAPTIVE LEARNING PARTICLE SWARM OPTIMIZER

Although there are many improved versions of PSO, how to balance the performance of the gbest and lbest models is still an important issue, especially for multi-modal problems. In the gbest model, every particle's social behavior is strictly constrained by learning information from the global best particle. Hence, particles are easily attracted by gbest and quickly converge on that region, even if it is not the global optimum and gbest does not improve. In the lbest model, the attraction of gbest is weaker, but the slow convergence speed is unbearable. In the original PSO, each particle learns from its pbest and the gbest simultaneously, which can cause the above problems. Hence, we can separate the cognition component and the social component to increase diversity, but the proper moment for a particle to learn from gbest or pbest is very hard to know. The following sections give an adaptive method that enables a particle to automatically learn from the global or local information of different particles.

A. Learning Strategy in ALPSO

In ALPSO, the information learned by each particle comes from four sources: the gbest, its own pbest, the pbest of the closest particle, and a random position around itself. The learning equations are as follows:

  a: V_i^d = ω·V_i^d + η·r^d·(pbest_i^d − X_i^d)   (3)
  b: V_i^d = ω·V_i^d + η·r^d·(pbest_nearest^d − X_i^d)   (4)
  c: X_i^d = X_i^d + V_avg^d · N(0, 1)   (5)
  d: V_i^d = ω·V_i^d + η·r^d·(gbest^d − X_i^d)   (6)

where pbest_nearest is the pbest of the particle closest to particle i, V_avg^d is the average velocity of all particles in dimension d, and N(0, 1) is a random number drawn from the normal distribution with mean 0 and variance 1.

Learning from the nearest neighbor enables a particle to explore the region of local optima around itself. Particles that are near a local optimum will get closer and closer to that region, because a pbest is replaced only when a better position is found. Gradually, they will form a local cluster around that local optimum. Particles in one local cluster are not influenced by those far away (in other local clusters), even if the latter have very good fitness. This strategy can help the swarm find several local optima rather than the single optimum that the original PSO converges to, which is especially useful for multi-modal problems.

Once particles converge on a local optimum, or there is a more promising region nearby that no particle covers, particles should have a probability of jumping to that promising region. Hence, learning from a random position around itself is needed.

In ALPSO, each particle thus has four different choices to adjust its behavior. The four choices enable each particle to move to a promising position with a higher probability than in the original PSO. Which choice is most suitable depends on the environment surrounding the particle. However, we cannot know in advance what that environment looks like; each particle should detect the shape of its local environment by itself. Hence, we use the method proposed in [4], which enables a particle to choose the most suitable operator automatically. The method is described in the following section.
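As an illustration, the sketch below applies one of the four operators of Eqs. (3)-(6) to a single particle; the function name, the parameter defaults, and the choice of Euclidean distance for finding the nearest particle are our assumptions, not prescribed by the paper.

    import numpy as np

    def apply_operator(op, i, X, V, pbest, gbest, omega=0.7298, eta=1.496):
        # Apply one of the four ALPSO learning operators (Eqs. 3-6) to particle i.
        r = np.random.rand(X.shape[1])
        if op == 'a':                           # Eq. (3): learn from own pbest
            V[i] = omega * V[i] + eta * r * (pbest[i] - X[i])
            X[i] = X[i] + V[i]
        elif op == 'b':                         # Eq. (4): learn from the nearest particle's pbest
            dist = np.linalg.norm(X - X[i], axis=1)
            dist[i] = np.inf                    # exclude the particle itself
            V[i] = omega * V[i] + eta * r * (pbest[np.argmin(dist)] - X[i])
            X[i] = X[i] + V[i]
        elif op == 'c':                         # Eq. (5): random position around itself
            v_avg = V.mean(axis=0)              # average velocity of all particles
            X[i] = X[i] + v_avg * np.random.randn(X.shape[1])
        else:                                   # Eq. (6): learn from gbest
            V[i] = omega * V[i] + eta * r * (gbest - X[i])
            X[i] = X[i] + V[i]
        return X, V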
B. The Adaptive Learning Mechanism

Borrowing the idea of probability matching [12], we introduce an adaptive framework using the four learning operators above, each of which is assigned a selection ratio. The selection ratio of each operator is initialized to 1/4 and is adaptively updated according to the operator's relative performance. For each particle, one of the four learning operators is selected according to the selection ratios, and the fitness of the resulting offspring is evaluated. An operator that produces offspring of higher fitness has its selection ratio increased; an operator that produces offspring of lower fitness has its selection ratio decreased. Gradually, the most suitable operator is chosen automatically and controls the learning behavior of each particle in its environment. Without loss of generality, we discuss minimization problems in this paper.

Based on our previous work in [4], we extend the adaptive framework from the population level to the individual level in this paper. The selection ratios are updated every U_f generations, where U_f is called the updating frequency. During the updating period, the progress value and the reward value of operator i are calculated for each particle as follows.

The progress value prog_i(t) of operator i at generation t is defined as:

  prog_i(t) = Σ_{j=1}^{M_i} [ f(p_j(t)) − min(f(p_j(t)), f(c_j(t))) ]   (7)

where p_j(t) and c_j(t) denote a particle and its child produced by operator i at generation t, and M_i is the number of times operator i was selected by the particle.

The reward value reward_i(t) of operator i at generation t is defined as:

  reward_i(t) = exp( (prog_i(t) / Σ_{j=1}^{N} prog_j(t))·α + (s_i / M_i)·(1 − α) ) + c_i·p_i(t) − 1   (8)

where s_i is a counter that records the number of children that are fitter than their parent particles when applying operator i, p_i(t) is the selection ratio of operator i at generation t, α is a random weight between 0.0 and 1.0, N is the number of operators, and c_i is a penalty factor for operator i, defined as follows:

  c_i = 0.9, if s_i = 0 and p_i(t) = max_{j=1,...,N} p_j(t);  c_i = 1, otherwise   (9)

With the above definitions, the selection ratio of operator i is updated every U_f generations according to the following equation:

  p_i(t + 1) = ( reward_i(t) / Σ_{j=1}^{N} reward_j(t) )·(1 − N·γ) + γ   (10)

where γ is the minimum selection ratio of each operator, which is set to 0.01 in all the experiments in this paper.
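A compact sketch of the ratio update of Eqs. (8)-(10) for one particle is given below; the array-based interface and the guard for operators that were never selected are our additions.

    import numpy as np

    def update_selection_ratios(p, prog, s, M, gamma=0.01):
        # p: current selection ratios of the N operators; prog: progress values (Eq. 7);
        # s: counts of children fitter than their parents; M: selection counts.
        N = len(p)
        alpha = np.random.rand()                  # random weight in [0, 1]
        M_safe = np.maximum(M, 1)                 # guard: operator never selected
        c = np.ones(N)
        best = np.argmax(p)
        if s[best] == 0:                          # Eq. (9): penalize a dominant but unproductive operator
            c[best] = 0.9
        total = prog.sum()
        rel = prog / total if total > 0 else np.zeros(N)
        reward = np.exp(rel * alpha + (s / M_safe) * (1 - alpha)) + c * p - 1.0   # Eq. (8)
        return reward / reward.sum() * (1 - N * gamma) + gamma                    # Eq. (10)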

C. Information Learning for gbest

In the original PSO, the gbest is updated only when a particle finds a better position than the current gbest. Once gbest is updated, the information in all of its dimensions is replaced with that of the better position. This updating mechanism has a disadvantage: promising information in some dimensions of a particle cannot be kept, because bad information in its other dimensions gives the particle a low overall fitness. This problem is called "two steps forward, one step back" in [13]. If a particle improves, the information in some of its dimensions has probably become more promising, and other particles should be able to learn useful information from the improved particle even if its fitness is very low.

In ALPSO, the gbest learns useful information from the dimensions of any particle that improves. Once promising information is extracted from the improved dimensions of such a particle, the corresponding dimensions of gbest are updated. The update happens only when a particle improves, as shown in Algorithm 1.

Algorithm 1 GbestUpdate(particle p)
1: for each dimension d of gbest do
2:   X_tgbest := X_gbest; X_tgbest[d] := X_p[d]
3:   if tgbest is better than gbest then
4:     X_gbest[d] := X_tgbest[d]
5:   end if
6: end for

We cannot apply this strategy to all particles, because the learning method is time-consuming. Hence, we choose only the gbest as the learner. The framework of the ALPSO algorithm is given in Algorithm 2.

Algorithm 2 The ALPSO Algorithm
1: Generate the initial particles by randomly generating the position and velocity of each particle
2: Set the generation counter t := 0
3: while the stop criterion is not satisfied do
4:   for each particle i do
5:     Select one learning operator according to its selection ratio to update particle i
6:     if the updated particle is better than its pbest then
7:       Update pbest
8:       Perform GbestUpdate() for gbest
9:     end if
10:    if the updated particle is better than gbest then
11:      Update gbest
12:    end if
13:    if t % U_f == 0 then
14:      Update the selection ratio of each learning operator according to Eq. (10)
15:    else
16:      Calculate the accumulative reward value of each operator
17:    end if
18:  end for
19:  t := t + 1
20: end while
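Putting the pieces together, the following condensed sketch mirrors Algorithms 1 and 2 for minimization, reusing apply_operator and update_selection_ratios from the sketches above; the initialization range, velocity bounds, and evaluation bookkeeping are simplifying assumptions.

    import numpy as np

    def gbest_update(gbest, particle, f):
        # Algorithm 1: copy each dimension of an improved particle into gbest
        # and keep the substitution whenever it lowers the fitness value.
        best_val = f(gbest)
        for d in range(len(gbest)):
            trial = gbest.copy()
            trial[d] = particle[d]
            val = f(trial)
            if val < best_val:
                gbest, best_val = trial, val
        return gbest, best_val

    def alpso(f, dim=10, pop=20, max_evals=100000, U_f=5, bound=100.0):
        # Condensed sketch of Algorithm 2 (minimization); bounds are hypothetical.
        X = np.random.uniform(-bound, bound, (pop, dim))
        V = np.random.uniform(-1, 1, (pop, dim))
        cur = np.array([f(x) for x in X])
        pbest, pbest_val = X.copy(), cur.copy()
        g = np.argmin(pbest_val)
        gbest, gbest_val = pbest[g].copy(), pbest_val[g]
        ratios = np.full((pop, 4), 0.25)                 # one ratio vector per particle
        prog, s, M = (np.zeros((pop, 4)) for _ in range(3))
        evals, t = pop, 0
        while evals < max_evals:
            for i in range(pop):
                op = np.random.choice(4, p=ratios[i] / ratios[i].sum())
                X, V = apply_operator('abcd'[op], i, X, V, pbest, gbest)
                child = f(X[i]); evals += 1
                M[i, op] += 1
                prog[i, op] += cur[i] - min(cur[i], child)           # Eq. (7)
                s[i, op] += child < cur[i]
                if child < pbest_val[i]:
                    pbest[i], pbest_val[i] = X[i].copy(), child
                    gbest, gbest_val = gbest_update(gbest, X[i], f)  # Algorithm 1
                    evals += dim + 1
                if child < gbest_val:
                    gbest, gbest_val = X[i].copy(), child
                cur[i] = child
            t += 1
            if t % U_f == 0:                             # update every U_f generations
                for i in range(pop):
                    ratios[i] = update_selection_ratios(ratios[i], prog[i], s[i], M[i])
                prog[:], s[:], M[:] = 0, 0, 0
        return gbest, gbest_val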
III. EXPERIMENTAL STUDY

A. Test Functions

In order to test the performance of ALPSO, we chose three unimodal functions and 18 multimodal functions that are widely used as test functions in the literature [5], [11], [14]. The details of these test functions are given in Table I below. Function f16 is a composition function proposed by Liang et al. [5], which is composed of ten benchmark functions: two rotated and shifted copies of each of f1, f2, f3, f4, and f5. Functions f18 to f21 are rotated functions, where the rotation matrix M for each function is obtained using the method in [9].
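To illustrate how the rotated functions in Table I relate to their base functions, here is a small sketch for f2 and its rotated counterpart f20; generating M via a QR decomposition is only a convenient stand-in for the rotation-matrix method of [9].

    import numpy as np

    def rastrigin(x):
        # f2 in Table I
        return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

    def rotated_rastrigin(x, M):
        # f20 in Table I: the same function evaluated on y = M x
        return rastrigin(M @ x)

    n = 10
    M, _ = np.linalg.qr(np.random.randn(n, n))   # random orthogonal matrix (stand-in for [9])
    print(rastrigin(np.zeros(n)))                # 0.0 at the global optimum
    print(rotated_rastrigin(np.zeros(n), M))     # rotation preserves the optimum at the origin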

TABLE I
THE TEST FUNCTIONS, WHERE n AND f_min ARE THE NUMBER OF DIMENSIONS AND THE MINIMUM VALUE OF A FUNCTION RESPECTIVELY, AND S ⊆ R^n IS THE SEARCH SPACE

f1(x)  = Σ_{i=1}^n x_i^2;  n = 10, S = [-100, 100], f_min = 0
f2(x)  = Σ_{i=1}^n (x_i^2 − 10·cos(2πx_i) + 10);  n = 10, S = [-5.12, 5.12], f_min = 0
f3(x)  = Σ_{i=1}^n ( Σ_{k=0}^{k_max} [a^k·cos(2πb^k(x_i + 0.5))] ) − n·Σ_{k=0}^{k_max} [a^k·cos(πb^k)], a = 0.5, b = 3, k_max = 20;  n = 10, S = [-0.5, 0.5], f_min = 0
f4(x)  = (1/4000)·Σ_{i=1}^n (x_i − 100)^2 − Π_{i=1}^n cos((x_i − 100)/√i) + 1;  n = 10, S = [-600, 600], f_min = 0
f5(x)  = −20·exp(−0.2·√((1/n)·Σ_{i=1}^n x_i^2)) − exp((1/n)·Σ_{i=1}^n cos(2πx_i)) + 20 + e;  n = 10, S = [-32, 32], f_min = 0
f6(x)  = Σ_{i=1}^n (⌊x_i + 0.5⌋)^2;  n = 10, S = [-100, 100], f_min = 0
f7(x)  = Σ_{i=1}^n i·x_i^4 + U(0, 1);  n = 10, S = [-1.28, 1.28], f_min = 0
f8(x)  = Σ_{i=1}^{n−1} [100·(x_{i+1} − x_i^2)^2 + (x_i − 1)^2];  n = 10, S = [-30, 30], f_min = 0
f9(x)  = Σ_{i=1}^n x_i·sin(√|x_i|);  n = 10, S = [-500, 500], f_min = −4189.829
f10(x) = 418.9829·n − Σ_{i=1}^n x_i·sin(√|x_i|);  n = 10, S = [-500, 500], f_min = 0
f11(x) = Σ_{i=1}^n |x_i| + Π_{i=1}^n |x_i|;  n = 10, S = [-10, 10], f_min = 0
f12(x) = Σ_{i=1}^n ( Σ_{j=1}^i x_j )^2;  n = 10, S = [-100, 100], f_min = 0
f13(x) = max_{i=1}^n |x_i|;  n = 10, S = [-100, 100], f_min = 0
f14(x) = (π/30)·{ 10·sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2·[1 + 10·sin^2(πy_{i+1})] + (y_n − 1)^2 } + Σ_{i=1}^n u(x_i, 5, 100, 4), y_i = 1 + (x_i + 1)/4;  n = 10, S = [-50, 50], f_min = 0
f15(x) = 0.1·{ 10·sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2·[1 + sin^2(3πx_{i+1})] + (x_n − 1)^2·[1 + sin^2(2πx_n)] } + Σ_{i=1}^n u(x_i, 5, 100, 4);  n = 10, S = [-50, 50], f_min = 0
f16(x) = composition function 5 (CF5) in [5];  n = 10, S = [-5, 5], f_min = 0
f17(x) = Σ_{i=1}^{n−1} [100·(y_{i+1} − y_i^2)^2 + (y_i − 1)^2], y = M·x;  n = 10, S = [-100, 100], f_min = 0
f18(x) = (1/4000)·Σ_{i=1}^n (y_i − 100)^2 − Π_{i=1}^n cos((y_i − 100)/√i) + 1, y = M·x;  n = 10, S = [-600, 600], f_min = 0
f19(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^n y_i^2)) − exp((1/n)·Σ_{i=1}^n cos(2πy_i)) + 20 + e, y = M·x;  n = 10, S = [-32, 32], f_min = 0
f20(x) = Σ_{i=1}^n (y_i^2 − 10·cos(2πy_i) + 10), y = M·x;  n = 10, S = [-5, 5], f_min = 0
f21(x) = Σ_{i=1}^n ( Σ_{k=0}^{k_max} [a^k·cos(2πb^k(y_i + 0.5))] ) − n·Σ_{k=0}^{k_max} [a^k·cos(πb^k)], a = 0.5, b = 3, k_max = 20, y = M·x;  n = 10, S = [-0.5, 0.5], f_min = 0

B. Experimental Setting

Experiments were conducted to compare five PSO algorithms on the 21 test problems: the standard PSO, CPSO-H_k [13], FIPS [7], CLPSO [6], and ALPSO. For the standard PSO, the acceleration constants η_1 and η_2 are both set to … and the inertia weight ω = …. Equations (1) and (2) are used for the velocity and position updates in the standard PSO. CPSO-H_k [13] is a cooperative PSO model combined with the standard PSO; the same value of k = 6 as in [13] is used. For the fully informed PSO (FIPS) [7], the U-ring topology that achieved the highest success rate is used. The comprehensive learning PSO (CLPSO) [6] uses the historical best information of all other particles to update a particle's velocity. CLPSO is designed for solving multimodal problems, and it showed good performance in [6] compared with eight other PSO algorithms.

To achieve better performance with ALPSO, we use particular settings, chosen by experience for each problem, owing to the different complexity of the problems. In ALPSO, the parameters are the same as in the standard PSO, and the updating frequency is given in Table II. Each problem, with 10 dimensions, was run independently 30 times. The initial population is the same for all algorithms on each test problem, and the population size is given in Table II. The maximal number of fitness evaluations is set to … for all algorithms on each test problem. The code of the five algorithms is available online at the following website: …

TABLE II
POPULATION SIZE AND UPDATING FREQUENCY, WHERE THE UPDATING FREQUENCY IS SHOWN IN BRACKETS

f1: 5(5)    f2: 10(5)   f3: 10(5)   f4: 20(5)   f5: 10(5)   f6: 10(5)   f7: 20(5)
f8: 10(1)   f9: 40(5)   f10: 40(5)  f11: 10(5)  f12: 5(5)   f13: 5(5)   f14: 10(5)
f15: 10(5)  f16: 40(10) f17: 10(5)  f18: 20(5)  f19: 30(10) f20: 20(5)  f21: 20(15)

C. Experimental Results and Discussions

1) Experimental Results: Table III presents the mean and variance values over 30 runs for the five algorithms on all test problems. The best result on each problem is shown in bold, except on function f6, where all five algorithms obtained the global optimum 0. A two-tailed t-test with 58 degrees of freedom at a 0.05 level of significance was conducted between ALPSO and the best result obtained by one of the other four algorithms; these results are also shown in Table III, where *** means the results of the two algorithms are the same. A performance difference is significant if the absolute value of the t-test statistic is greater than the critical value (approximately 2.00 for 58 degrees of freedom).
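The significance test described above can be reproduced along the following lines; the two arrays are placeholders standing in for the recorded final results of ALPSO and its best competitor on one problem.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpso_runs = rng.random(30)       # placeholder: ALPSO's 30 final results on one problem
    other_runs = rng.random(30)       # placeholder: the best competitor's 30 final results

    # Two-tailed two-sample t-test; 30 + 30 - 2 = 58 degrees of freedom.
    t_stat, p_value = stats.ttest_ind(alpso_runs, other_runs)
    critical = stats.t.ppf(0.975, df=58)          # about 2.00 at the 0.05 level
    print(t_stat, p_value, abs(t_stat) > critical)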

TABLE III
COMPARISON RESULTS OF MEANS AND VARIANCES
[Most mean values and t-test entries of this table are illegible in this transcription; the recoverable variance entries are listed below.]

Function  f1           f2       f3          f4        f5           f6    f7
ALPSO     ±4.97e-162   ±0       ±0          ±0.0298   ±2.55e-15    ±0    …
CLPSO     ±2.09e-154   ±0       ±0          …         ±6.49e-16    ±0    …
FIPS      ±2.83e-052   ±5.54    ±0          ±0.0884   ±3.28        ±0    …
CPSO-H6   ±6.74e-103   ±0.344   ±2.17e-15   ±0.0166   ±4.27e-15    ±0    …
PSO       ±1.53e-026   ±10.8    ±0.88       ±0.0505   ±1.24        ±0    …

Function  f8          f9          f10         f11         f12         f13         f14
ALPSO     ±0.617      ±9.25e-13   ±2.76e-20   ±3.6e-50    ±2.08e-50   ±1.73e-67   ±5.57e-48
CLPSO     ±1.68       ±9.25e-13   ±2.76e-20   ±1.42e-51   ±1.14e-05   ±0.748      ±5.57e-48
FIPS      ±6.92       ±162        ±162        ±4.25e-17   ±1.9e+03    ±1.38e-13   ±1.82e-25
CPSO-H6   ±6.87       ±189        ±189        ±7.54e-45   ±1.9e-13    ±2.44e-18   ±5.57e-48
PSO       ±2.74e+04   ±305        ±305        ±3.05       ±6.07e+03   ±2.55e-44   ±6.36

Function  f15     f16     f17         f18       f19          f20      f21
ALPSO     ±0      ±169    ±6.31       ±0.0458   ±7.19e-015   ±2.98    ±0.735
CLPSO     ±0      ±36.1   ±3.35       ±0.0111   ±4e-013      ±0.675   ±1.68e-05
FIPS      ±47.5   ±179    ±7.28e+07   ±0.1      …            ±4.24    …
CPSO-H6   ±0      ±398    ±176        ±0.0676   ±0.497       ±3.91    ±1.78
PSO       ±118    ±195    ±4.94e+5    ±0.0648   ±0.211       ±5.37    ±1.68

Figs. 1 and 2 depict the convergence speed of the five PSO algorithms on all test problems.

[Fig. 1. Evolution process of the average best fitness of PSO, CLPSO, CPSO-H6, FIPS, and ALPSO on functions f1 to f15.]

[Fig. 2. Evolution process of the average best fitness of PSO, CLPSO, CPSO-H6, FIPS, and ALPSO on functions f16 to f21.]

From Table III, ALPSO shows outstanding performance over the other four algorithms on functions f1, f8, f12, and f13. Especially on functions f2, f3, f6, and f9, ALPSO obtained the global optimum in all 30 runs. Comparing ALPSO with CLPSO, though the performance of ALPSO is worse than that of CLPSO on functions f4, f16, f20, and f21, it is much better than that of CLPSO on functions f12, f13, and f17, and is similar or identical on the other functions.

For unimodal functions, ALPSO shows fast convergence to the global optima. For multimodal functions, ALPSO and CLPSO give much better performance than the other three algorithms. For example, on Schwefel's functions f9 and f10, the other three algorithms are all trapped in local optima that are far away from the global optimum, whereas ALPSO and CLPSO both successfully avoid falling into the deep local optima. On the rotated functions, the two algorithms also show a leading performance compared with the other three. Among the other three algorithms, CPSO-H6 gives comparatively better performance on most problems, and it obtains the best result on problem f7, slightly better than the result obtained by ALPSO. FIPS with the U-ring topology gives relatively better results than the standard PSO. The standard PSO falls into local optima on almost all multimodal problems.

From Figs. 1 and 2, one interesting observation is that ALPSO shows the fastest convergence speed on all test problems. These results clearly show that the learning strategy for gbest is efficient at alleviating the "two steps forward, one step back" problem: it does help the gbest learn promising information from improved particles.

Fig. 3 presents the selection ratio of each operator on some test problems. From the results, we can see that the selection ratio of each operator differs considerably from problem to problem. Even on a particular test problem, the selection ratio of the best operator changes across evolutionary stages. As can be seen from the results for f2, f3, f5, and f6 in Fig. 3, most particles quickly learn from the best particle at the start of a run; however, the selection ratio of learning from a particle's private best position surpasses that of learning from the best particle when the number of generations reaches 500. These results validate our prediction that different learning strategies are needed at different evolutionary stages.

[Fig. 3. The evolution of the selection ratio of each operator on different problems.]

2) Discussions: From the above results on the 21 test problems, we can conclude that ALPSO performs much better than the other algorithms on all unimodal test problems, thanks to the learning strategy for gbest. Though ALPSO does not perform best on all multimodal test problems, it gives competitive results compared with the other three improved PSO algorithms. We can also conclude that the adaptive learning mechanism gives particles more chances to move to a more promising region, especially particles trapped in local optima on multimodal problems.

IV. CONCLUSIONS

This paper presents an adaptive learning PSO that uses an adaptive framework at the individual level to adapt four learning strategies for each particle in the swarm. A new learning mechanism for gbest is introduced, which extracts useful information from improved particles in all dimensions. The four learning strategies give each particle more chances to search a larger space. A particle is not simply influenced by its own previous best position and the global best one; the nearest neighbor can also help it search a local region. The balance between local search and global search can be maintained by the adaptive technique, which enables each particle to make its own choice according to its surrounding environment.

The performance of ALPSO was tested on 21 test problems in comparison with three improved PSO variants and the standard PSO. The results show that ALPSO gives the best performance on all unimodal test problems and outstanding performance on most multimodal problems. Although ALPSO is not the best on all multimodal test problems, the adaptive framework helps particles decide their own search direction. Especially for real-world problems, we cannot know the distribution of the solution space in advance; however, we can design different strategies for different situations and let particles choose the most suitable strategy by themselves through the adaptive technique.


REFERENCES

[1] R. C. Eberhart and J. Kennedy. A new optimizer using particle swarm theory. Proc. of the 6th Int. Symp. on Micro Machine and Human Science, pp. 39-43, 1995.
[2] J. Kennedy and R. C. Eberhart. Particle swarm optimization. Proc. of the 1995 IEEE Int. Conf. on Neural Networks, pp. 1942-1948, 1995.
[3] J. Kennedy and R. C. Eberhart. Swarm Intelligence. Morgan Kaufmann Publishers, 2001.
[4] C. Li, S. Yang, and I. A. Korejo. An adaptive mutation operator for particle swarm optimization. Proc. of the 2008 UK Workshop on Computational Intelligence, 2008.
[5] J. J. Liang, P. N. Suganthan, and K. Deb. Novel composition test functions for numerical global optimization. Proc. of the 2005 IEEE Swarm Intelligence Symposium, pp. 68-75, 2005.
[6] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. on Evol. Comput., 10(3): 281-295, 2006.
[7] R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. IEEE Trans. on Evol. Comput., 8(3): 204-210, 2004.
[8] R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: an overview. Swarm Intelligence, 1(1): 33-57, 2007.
[9] R. Salomon. Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions: a survey of some theoretical and practical aspects of genetic algorithms. BioSystems, 39(3): 263-278, 1996.
[10] Y. Shi and R. C. Eberhart. A modified particle swarm optimizer. Proc. of the IEEE Int. Conf. on Evol. Comput., pp. 69-73, 1998.
[11] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical Report, Nanyang Technological University, Singapore, 2005.
[12] D. Thierens. An adaptive pursuit strategy for allocating operator probabilities. Proc. of the 2005 Conference on Genetic and Evolutionary Computation, pp. 1539-1546, 2005.
[13] F. van den Bergh and A. P. Engelbrecht. A cooperative approach to particle swarm optimization. IEEE Trans. on Evol. Comput., 8(3): 225-239, 2004.
[14] X. Yao, Y. Liu, and G. Lin. Evolutionary programming made faster. IEEE Trans. on Evol. Comput., 3(2): 82-102, 1999.

