
© 2003 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Tuning of the Structure and Parameters of Neural Networks using an Improved Genetic Algorithm

H.K. Lam, S.H. Ling, F.H.F. Leung and P.K.S. Tam
Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.

Abstract. This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It will also be shown that the improved GA performs better than the standard GA based on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure. The number of hidden nodes should be chosen manually, starting from a small number, and increased if the learning performance in terms of fitness value is not acceptable. Using the improved GA, the structure and the parameters of the neural network can be tuned. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.

I. INTRODUCTION

GA is a directed random search technique [1] that is widely applied in optimization problems [1-2, 5]. It is especially useful for complex optimization problems where the number of parameters is large and analytical solutions are difficult to obtain. GA can help to find the optimal solution globally over a domain [1-2, 5]. It has been applied in different areas such as fuzzy control [9-11, 15], path planning [12], greenhouse climate control [13], modelling and classification [14], etc. Much research effort has been spent on improving the performance of GA, and different selection schemes and genetic operators have been proposed. Selection schemes such as rank-based selection, elitist strategies, steady-state selection and tournament selection have been reported [32].

There are two kinds of genetic operators, namely crossover and mutation. Apart from random mutation and crossover, other crossover and mutation mechanisms have been proposed. For crossover mechanisms, two-point crossover, multipoint crossover, arithmetic crossover and heuristic crossover have been reported [2, 31-33]. For mutation mechanisms, boundary mutation, uniform mutation and non-uniform mutation can be found [2, 31-33].

The neural network was proved to be a universal approximator [16]: a 3-layer feed-forward neural network can approximate any nonlinear continuous function to an arbitrary accuracy. Neural networks are widely applied in areas such as prediction [7] and system modelling and control [16]. Owing to its particular structure, a neural network is very good at learning [2] using learning algorithms such as the GA [1] and the back-propagation algorithm [21]. In general, the learning steps of a neural network are as follows. First, a network structure is defined with fixed numbers of inputs, hidden nodes and outputs. Second, an algorithm is chosen to realize the learning process. A fixed structure, however, may not provide the optimal performance within a given training period. A small network may not provide good performance owing to its limited information-processing power, while a large network may have some of its connections redundant [18-19]; moreover, the implementation cost of a large network is high. To obtain the network structure automatically, constructive and destructive algorithms can be used [18]. The constructive algorithm starts with a small network; hidden layers, nodes and connections are added to expand the network dynamically [19-24]. The destructive algorithm starts with a large network; hidden layers, nodes and connections are then deleted to contract the network dynamically [25-26]. The design of a network structure can also be formulated as a search problem, and genetic algorithms [27-28] were employed to obtain the solution. Pattern-classification approaches [29] can also be found to design the network structure. Some other methods have been proposed to learn both the network structure and the connection weights. The evolution cycle of these methods can be summarized by three steps: 1) evaluate each individual, i.e. chromosome, according to a defined fitness function; 2) select individuals for reproduction and genetic operation; 3) apply different kinds of genetic operations to the chromosomes to obtain the next generation.

An ANNA ELEONORA algorithm was proposed in [36]. New genetic operators and (binary) encoding procedures, which allow the algorithm to obtain an opportune length of the coding string, were introduced. Each gene consists of two parts: the connectivity bits, which indicate the absence or presence of a link, and the connection weight bits, which are related to the value of the weight of a link. A GNARL algorithm was proposed in [37]. At first, a population of chromosomes representing network structures is generated randomly; the number of hidden nodes and connection links for each network is randomly chosen within some defined ranges. Three steps are used to generate an offspring: copying the parent, determining the mutations to be performed, and mutating the copy. The severity of mutation, which measures the performance of the network, is used to anneal the structural and parametric similarity between parent and offspring: networks with high similarity are mutated severely, while networks with low similarity are mutated slightly. Mutation of the copy is separated into two classes: parametric mutations, which alter the connection weights, and structural mutations, which alter the number of hidden nodes and the presence of links in the network. An evolutionary system named EPNet [19] can also be found for evolving neural networks. Rank-based selection and five mutations were employed to modify the network structure and connection weights. The five mutations are hybrid training, node deletion, connection deletion, connection addition and node addition. Hybrid training, based on a modified back-propagation with adaptive learning rate and simulated annealing, is the mutation used to modify the connection weights; the other four mutations are used to grow and prune the hidden nodes and connections of the network. Some other algorithms can also be found to evolve the network structure and connection weights simultaneously.

In this paper, a three-layer neural network with switches introduced in some links is proposed to facilitate the tuning of the network structure. As a result, a given fully connected feed-forward neural network may no longer be fully connected after learning. This implies that the cost of implementing the proposed neural network, in terms of hardware and processing time, can be reduced. The network structure and parameters will be tuned simultaneously using a proposed improved GA.

As application examples, the proposed neural network with link switches tuned by the improved GA is used to estimate the number of sunspots [7-8] and to realize an associative memory. The results will be compared with those obtained by traditional feed-forward networks [21] trained by (1) the standard GA with arithmetic crossover and non-uniform mutation [1-2, 5] and (2) back-propagation with momentum and adaptive learning rate [30].

This paper is organized as follows. In Section II, the improved genetic algorithm is presented. In Section III, it is shown that the improved GA performs more efficiently than the standard GA [1-2, 5] based on some benchmark test functions [3-4, 6, 17]. In Section IV, the neural network with link switches, and the tuning of its structure and parameters using the improved GA, are presented. Application examples are given in Section V, and a conclusion is drawn in Section VI.

II. IMPROVED GENETIC ALGORITHM

Genetic algorithms (GAs) are powerful searching algorithms. The standard GA process [1-2, 5] is shown in Fig. 1. First, a population of chromosomes is created. Second, the chromosomes are evaluated by a defined fitness function. Third, some of the chromosomes are selected for performing genetic operations. Fourth, genetic operations of crossover and mutation are performed. The produced offspring replace their parents in the initial population; in this reproduction process, only the parents selected in the third step are replaced by their corresponding offspring. This GA process repeats until a user-defined criterion is reached. In this paper, the standard GA is modified and new genetic operators are introduced to improve its performance. The improved GA process is shown in Fig. 2, and its details are given as follows.

A. Initial Population

The initial population is a potential solution set P. The first set of population is usually generated randomly:

P = \{p_1, p_2, \ldots, p_{pop\_size}\}   (1)

p_i = [p_{i1}, p_{i2}, \ldots, p_{i\,no\_vars}], \quad i = 1, 2, \ldots, pop\_size   (2)

para^j_{min} \le p_{ij} \le para^j_{max}, \quad j = 1, 2, \ldots, no\_vars   (3)

where pop_size denotes the population size; no_vars denotes the number of variables to be tuned; p_{ij}, i = 1, 2, …, pop_size, j = 1, 2, …, no_vars, are the parameters to be tuned; and para^j_{min} and para^j_{max} are the minimum and maximum values of the parameter p_{ij} for all i. It can be seen from (1) to (3) that the potential solution set P contains some candidate solutions p_i (chromosomes), and each chromosome p_i contains some variables p_{ij} (genes).

B. Evaluation

Each chromosome in the population is evaluated by a defined fitness function, and the better chromosomes return higher values in this process. The fitness function to evaluate a chromosome in the population can be written as

fitness = f(p_i)   (4)

The form of the fitness function depends on the application.

C. Selection

Two chromosomes in the population are selected to undergo genetic operations for reproduction by the method of spinning the roulette wheel [2]. It is believed that high-potential parents will produce better offspring (survival of the fittest), so a chromosome having a higher fitness value should have a higher chance of being selected. The selection can be done by assigning a probability q_i to the chromosome p_i:

q_i = \frac{f(p_i)}{\sum_{k=1}^{pop\_size} f(p_k)}, \quad i = 1, 2, \ldots, pop\_size   (5)

The cumulative probability \hat{q}_i for the chromosome p_i is defined as

\hat{q}_i = \sum_{k=1}^{i} q_k, \quad i = 1, 2, \ldots, pop\_size   (6)

The selection process starts by randomly generating a nonzero floating-point number 0 < d \le 1. The chromosome p_i is then chosen if \hat{q}_{i-1} < d \le \hat{q}_i (with \hat{q}_0 = 0). It can be observed from this selection process that a chromosome having a larger f(p_i) has a higher chance of being selected. Consequently, the best chromosomes get more offspring, the average stay even, and the worst die off. In the selection process, only two chromosomes are selected to undergo the genetic operations.
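The selection mechanism of (5) and (6) translates directly into code. The following is a minimal sketch in Python (the paper gives no code; the function and variable names here are ours), assuming non-negative fitness values:

    import numpy as np

    def roulette_select(population, fitness_fn, rng):
        # Spin the roulette wheel once: eq. (5) assigns each chromosome a
        # selection probability proportional to its fitness; eq. (6) accumulates.
        f = np.array([fitness_fn(p) for p in population])
        q = f / f.sum()                      # selection probabilities, eq. (5)
        q_hat = np.cumsum(q)                 # cumulative probabilities, eq. (6)
        d = rng.uniform(0.0, 1.0)            # random number with 0 < d <= 1
        i = int(np.searchsorted(q_hat, d))   # first i with q_hat[i] >= d
        return population[min(i, len(population) - 1)]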

D. Genetic Operations

The genetic operations generate new chromosomes (offspring) from their parents after the selection process. They include the crossover and the mutation operations.

1. Crossover

The crossover operation is mainly for exchanging information between the two parents, chromosomes p_1 and p_2, obtained in the selection process. The two parents produce one offspring. The details of the crossover operation are as follows. First, four candidate offspring are generated according to the following mechanisms:

os_c^1 = [os_1^1, os_2^1, \ldots, os_{no\_vars}^1] = \frac{p_1 + p_2}{2}   (7)

os_c^2 = [os_1^2, os_2^2, \ldots, os_{no\_vars}^2] = p_{max}(1 - w) + \max(p_1, p_2)\,w   (8)

os_c^3 = [os_1^3, os_2^3, \ldots, os_{no\_vars}^3] = p_{min}(1 - w) + \min(p_1, p_2)\,w   (9)

os_c^4 = [os_1^4, os_2^4, \ldots, os_{no\_vars}^4] = \frac{(p_{max} + p_{min})(1 - w) + (p_1 + p_2)\,w}{2}   (10)

p_{max} = [para^1_{max}, para^2_{max}, \ldots, para^{no\_vars}_{max}]   (11)

p_{min} = [para^1_{min}, para^2_{min}, \ldots, para^{no\_vars}_{min}]   (12)

where 0 \le w \le 1 denotes a weight to be determined by users, and \max(p_1, p_2) denotes the vector with each element obtained by taking the maximum of the corresponding elements of p_1 and p_2; for instance, \max([1\ -2\ 3], [2\ 1\ 1]) = [2\ 1\ 3]. Similarly, \min(p_1, p_2) gives a vector by taking the minimum values; for instance, \min([1\ -2\ 3], [2\ 1\ 1]) = [1\ -2\ 1]. Among os_c^1 to os_c^4, the one with the largest fitness value is used as the offspring of the crossover operation:

os = [os_1, os_2, \ldots, os_{no\_vars}] = os_c^{\iota}   (13)

where \iota denotes the index i that gives the maximum value of f(os_c^i), i = 1, 2, 3, 4.

If the crossover operation can provide a good offspring, a higher fitness value can be reached in fewer iterations. In general, two-point crossover, multipoint crossover, arithmetic crossover or heuristic crossover can be employed to realize the crossover operation [2, 31-33]; however, the offspring generated by these methods may not be better than those from our approach. As seen from (7) to (10), the candidate offspring spread over the domain. Equations (7) and (10) move the offspring near the centre region of the domain (as w in (10) approaches 1, os_c^4 approaches (p_1 + p_2)/2), while (8) and (9) move the offspring near the domain boundary (as w in (8) and (9) approaches 0, os_c^2 and os_c^3 approach p_{max} and p_{min} respectively).
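Read this way, the crossover of (7)-(13) is a four-candidate tournament. Below is a hedged Python sketch of our reading of the equations (the (1-w)/w placement follows the reconstruction above, not verified source text); p1, p2, p_min and p_max are NumPy vectors:

    import numpy as np

    def crossover(p1, p2, p_min, p_max, w, fitness_fn):
        # Four candidate offspring of eqs. (7)-(10); the fittest is kept, eq. (13).
        c1 = (p1 + p2) / 2.0                                    # eq. (7): centre of the parents
        c2 = p_max * (1 - w) + np.maximum(p1, p2) * w           # eq. (8): toward the upper bound
        c3 = p_min * (1 - w) + np.minimum(p1, p2) * w           # eq. (9): toward the lower bound
        c4 = ((p_max + p_min) * (1 - w) + (p1 + p2) * w) / 2.0  # eq. (10): centre of the domain
        return max((c1, c2, c3, c4), key=fitness_fn)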

2. Mutation

The offspring (13) then undergoes the mutation operation, which changes the genes of the chromosome. Consequently, the features of the chromosomes inherited from their parents can be changed. Three new offspring are generated by the mutation operation:

nos^j = [os_1, os_2, \ldots, os_{no\_vars}] + [b_1 \Delta nos_1, b_2 \Delta nos_2, \ldots, b_{no\_vars} \Delta nos_{no\_vars}], \quad j = 1, 2, 3   (14)

where b_i, i = 1, 2, …, no_vars, can only take the value 0 or 1, and \Delta nos_i are randomly generated numbers such that para^i_{min} \le os_i + \Delta nos_i \le para^i_{max}. The first new offspring (j = 1) is obtained according to (14) with only one b_i (the position i being randomly chosen) set to 1 and all the others set to 0. The second new offspring is obtained according to (14) with some randomly chosen b_i set to 1 and the others set to 0. The third new offspring is obtained according to (14) with all b_i = 1. These three new offspring are then evaluated using the fitness function of (4). A real number is generated randomly and compared with a user-defined number 0 \le p_a \le 1. If the random number is smaller than p_a, the offspring with the largest fitness value f_l among the three replaces the chromosome with the smallest fitness value f_s in the population. If the random number is larger than p_a, the first offspring replaces the chromosome with the smallest fitness value f_s in the population only if its fitness is larger than f_s, and the second and the third offspring do the same (see Fig. 2). p_a is effectively the probability of accepting a bad offspring in order to reduce the chance of converging to a local optimum; hence the possibility of reaching the global optimum is kept.
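A sketch of the three-offspring mutation of (14), again in Python with names of our own choosing; the p_a replacement logic is kept in the main loop shown later:

    import numpy as np

    def mutate(os, p_min, p_max, rng):
        # Perturbations keep each mutated gene inside [para_min, para_max],
        # since os + delta is a fresh uniform sample from the domain.
        n = len(os)
        delta = rng.uniform(p_min, p_max, size=n) - os
        masks = np.zeros((3, n))
        masks[0, rng.integers(n)] = 1          # 1st offspring: one randomly chosen gene
        masks[1] = rng.integers(0, 2, size=n)  # 2nd offspring: some randomly chosen genes
        masks[2] = 1.0                         # 3rd offspring: all genes
        return [os + m * delta for m in masks]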

In general, various methods like boundary mutation, uniform mutation or non-uniform mutation [2, 31-33] can be employed to realize the mutation operation. Boundary mutation changes the value of a randomly selected gene to its upper or lower bound. Uniform mutation changes the value of a randomly selected gene to a value between its upper and lower bounds. Non-uniform mutation is capable of fine-tuning the parameters: the value of a randomly selected gene is increased or decreased by a weighted random number, where the weight is usually a monotonically decreasing function of the number of iterations. In our approach, three offspring are generated in the mutation process. From (14), the first mutation is in fact a uniform mutation. The second mutation allows some randomly selected genes to change simultaneously, and the third mutation changes all genes simultaneously. Because the second and the third mutations allow multiple genes to be changed, the domain to be searched is larger than a domain characterized by changing a single gene. As the initial values are generated randomly, the genes have a larger space for improving the fitness value when the fitness value is small. On the contrary, when the fitness values are large and nearly steady, changing the value of a single gene (the first mutation) may be enough, as some genes may already have reached their optimal values.

After the operations of selection, crossover and mutation, a new population is generated, and this new population repeats the same process. Such an iterative process is terminated when the result reaches a defined condition, e.g. the change of the fitness values between the current and the previous iteration is less than 0.001, or a defined number of iterations is reached.
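Putting the pieces together, one possible assembly of the improved GA of Fig. 2 is sketched below, reusing the roulette_select, crossover and mutate sketches above. The fixed iteration count as a stopping rule is our assumption, not the authors' code:

    import numpy as np

    def improved_ga(fitness_fn, p_min, p_max, pop_size, w, p_a, n_iter, rng):
        P = rng.uniform(p_min, p_max, size=(pop_size, len(p_min)))
        for _ in range(n_iter):
            p1 = roulette_select(P, fitness_fn, rng)
            p2 = roulette_select(P, fitness_fn, rng)
            os = crossover(p1, p2, p_min, p_max, w, fitness_fn)
            offspring = mutate(os, p_min, p_max, rng)
            if rng.uniform() < p_a:
                # Accept the best of the three offspring even if it is bad.
                worst = int(np.argmin([fitness_fn(p) for p in P]))
                P[worst] = max(offspring, key=fitness_fn)
            else:
                # Each offspring replaces the current worst only if it is better.
                for nos in offspring:
                    f = [fitness_fn(p) for p in P]
                    worst = int(np.argmin(f))
                    if fitness_fn(nos) > f[worst]:
                        P[worst] = nos
        return P[int(np.argmax([fitness_fn(p) for p in P]))]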

III. BENCHMARK TEST FUNCTIONS

Some benchmark test functions [3-4, 6, 17] are used to examine the applicability and efficiency of the improved GA. Six test functions f_i(x), i = 1, 2, 3, 4, 5, 6, are used, where x = [x_1, x_2, \ldots, x_n] and n is an integer denoting the dimension of the vector x. The six test functions are defined as follows:

f_1(x) = \sum_{i=1}^{n} x_i^2, \quad -5.12 \le x_i \le 5.12   (15)

where n = 3 and the minimum point is at f_1(0, 0, 0) = 0.

f_2(x) = 100\,(x_1^2 - x_2)^2 + (1 - x_1)^2, \quad -2.048 \le x_i \le 2.048   (16)

where n = 2 and the minimum point is at f_2(1, 1) = 0.

f_3(x) = 6n + \sum_{i=1}^{n} \lfloor x_i \rfloor, \quad -5.12 \le x_i \le 5.12   (17)

where n = 5 and the minimum points lie at f_3([-5.12, -5), \ldots, [-5.12, -5)) = 0. The floor function \lfloor \cdot \rfloor rounds its argument down to an integer.

f_4(x) = \sum_{i=1}^{n} i\,x_i^4 + Gauss(0, 1), \quad -1.28 \le x_i \le 1.28   (18)

where n = 3 and the minimum point is at f_4(0, 0, 0) = 0. Gauss(0, 1) is a function that generates uniformly a floating-point number between 0 and 1 inclusively.

f_5(x) = \frac{1}{k} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6}, \quad -65.536 \le x_i \le 65.536   (19)

where (a_{1j}, a_{2j}), j = 1, 2, \ldots, 25, are the 25 grid points with coordinates taken from \{-32, -16, 0, 16, 32\}, k = 500, and the maximum point is at f_5(-32, -32).

f_6(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) \right] + 10n, \quad -5.12 \le x_i \le 5.12   (20)

where n = 3 and the minimum point is at f_6(0, 0, 0) = 0.

It should be noted that the minimum values of all functions in the defined domain are zero except for f_5(x). The fitness functions for f_1 to f_4 and f_6 are defined as

fitness = \frac{1}{1 + f_i(x)}, \quad i = 1, 2, 3, 4, 6   (21)

and the fitness function for f_5 is defined as

fitness = f_5(x)   (22)
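The test functions themselves are one-liners. A sketch of three of them in Python, together with the fitness mapping of (21); the function names are ours:

    import numpy as np

    def f1(x):  # eq. (15): sphere function, n = 3
        return np.sum(x ** 2)

    def f2(x):  # eq. (16): Rosenbrock-type function, n = 2
        return 100.0 * (x[0] ** 2 - x[1]) ** 2 + (1.0 - x[0]) ** 2

    def f6(x):  # eq. (20): Rastrigin function, n = 3
        return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * len(x)

    def fitness_min(fx):  # eq. (21): converts a minimization cost into a fitness
        return 1.0 / (1.0 + fx)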

The proposed GA is run on these six test functions, and the results are compared with those obtained by the standard GA with arithmetic crossover and non-uniform mutation [2, 31-33]. For each test function, the simulation takes 500 iterations and the population size is 10 for both the proposed and the standard GAs. When the standard GA is used, the probability of crossover is set at 0.8 for all functions, and the probabilities of mutation for functions f_1 to f_6 are 0.8, 0.8, 0.7, 0.8, 0.8 and 0.35 respectively. The shape parameter b of the standard GA for non-uniform mutation, selected by trial and error through experiments for good performance, is set at b = 5 for f_1, f_2 and f_5, b = 0.1 for f_3, and b = 1 for f_4 and f_6. For the proposed GA, the values of w are set at 0.5, 0.99, 0.1, 0.5, 0.01 and 0.01 for the six test functions respectively, and the probability of acceptance p_a is set at 0.1 for all functions. These values are selected by trial and error through experiments for good performance. The initial values of x in the population for a test function are set to be the same for both the proposed and the standard GAs (fixed vectors chosen inside the search domains). The averaged fitness values over 100 simulation runs obtained by the proposed and standard GAs are shown in Fig. 3 and tabulated in Table I. Generally, it can be seen that the performance of the proposed GA is better than that of the standard GA.

IV. NEURAL NETWORK WITH LINK SWITCHES AND TUNING USING THE IMPROVED GA

In this section, a neural network with link switches is presented. By introducing a switch to each link, the parameters and the structure of the neural network can be tuned using the improved GA.

A. Neural Network with Link Switches

Neural networks [5] for tuning usually have a fixed structure. The number of connections has to be large enough to fit a given application, which may cause the network structure to be unnecessarily complex and increase the implementation cost. In this section, a multiple-input multiple-output three-layer neural network is proposed as shown in Fig. 4. The main difference from a conventional network is that a unit step function is introduced to each link. This unit step function is defined as

\delta(\zeta) = \begin{cases} 0, & \zeta < 0 \\ 1, & \zeta \ge 0 \end{cases}   (23)

This is equivalent to adding a switch to each link of the neural network. Referring to Fig. 4, the input-output relationship of the proposed multiple-input multiple-output three-layer neural network is as follows:

y_k(t) = \sum_{j=1}^{n_h} \delta(s^2_{jk})\, w_{jk}\, \mathrm{logsig}\!\left( \sum_{i=1}^{n_{in}} \delta(s^1_{ij})\, v_{ij}\, z_i(t) + \delta(s^{b1}_j)\, b^1_j \right) + \delta(s^{b2}_k)\, b^2_k, \quad k = 1, 2, \ldots, n_{out}   (24)

where z_i(t), i = 1, 2, …, n_in, are the inputs, which are functions of a variable t; n_in denotes the number of inputs; n_h denotes the number of hidden nodes; w_{jk}, j = 1, 2, …, n_h, k = 1, 2, …, n_out, denotes the weight of the link between the j-th hidden node and the k-th output; v_{ij} denotes the weight of the link between the i-th input and the j-th hidden node; s^1_{ij} denotes the parameter of the link switch from the i-th input to the j-th hidden node; s^2_{jk} denotes the parameter of the link switch from the j-th hidden node to the k-th output; n_out denotes the number of outputs of the proposed neural network; b^1_j and b^2_k denote the biases for the hidden and output nodes respectively; s^{b1}_j and s^{b2}_k denote the parameters of the link switches of the biases to the hidden and output layers respectively; and logsig(·) denotes the logarithmic sigmoid function:

\mathrm{logsig}(\zeta) = \frac{1}{1 + e^{-\zeta}}   (25)

y_k(t), k = 1, 2, …, n_out, is the k-th output of the proposed neural network. By introducing the switches, the weights w_{jk} and v_{ij} and the switch states can be tuned. It can be seen that the weights of the links govern the input-output relationship of the neural network, while the switches of the links govern the structure of the neural network.
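The forward pass of (23)-(25) is compact. Below is a Python sketch with assumed shapes (z of length n_in, v of shape n_h by n_in, w of shape n_out by n_h, biases of matching lengths; each switch-parameter array s_* shares the shape of the weights it gates):

    import numpy as np

    def logsig(z):
        return 1.0 / (1.0 + np.exp(-z))            # eq. (25)

    def step(s):
        return (np.asarray(s) >= 0).astype(float)  # unit step delta(.), eq. (23)

    def forward(z, v, w, b_h, b_o, s_v, s_w, s_bh, s_bo):
        # Eq. (24): every weight and bias is gated by the unit step of its
        # switch parameter, so a negative switch parameter disconnects the link.
        hidden = logsig((step(s_v) * v) @ z + step(s_bh) * b_h)
        return (step(s_w) * w) @ hidden + step(s_bo) * b_o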

B. Tuning of the Parameters and Structure

The proposed neural network can be employed to learn the input-output relationship of an application using the improved GA. The input-output relationship is described by

y^d(t) = g(z^d(t)), \quad t = 1, 2, \ldots, n_d   (26)

where z^d(t) = [z_1^d(t), z_2^d(t), \ldots, z_{n_{in}}^d(t)] and y^d(t) = [y_1^d(t), y_2^d(t), \ldots, y_{n_{out}}^d(t)] are the given inputs and the desired outputs of an unknown nonlinear function g(\cdot) respectively, and n_d denotes the number of input-output data pairs. The fitness function is defined as

fitness = \frac{1}{1 + err}   (27)

err = \frac{\sum_{t=1}^{n_d} \sum_{k=1}^{n_{out}} \left| y_k^d(t) - y_k(t) \right|}{n_d}   (28)

The objective is to maximize the fitness value of (27) using the improved GA by setting the chromosome to be the vector [s^1_{ij}\ w_{jk}\ s^2_{jk}\ v_{ij}\ s^{b1}_j\ b^1_j\ s^{b2}_k\ b^2_k] over all i, j, k. It can be seen from (27) and (28) that a larger fitness value implies a smaller error value.
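Under our reading of (27) and (28) as a mean absolute error over the data pairs, the fitness evaluation of one chromosome reduces to a few lines (a sketch, not the authors' code):

    import numpy as np

    def err_mae(y_desired, y_actual):
        # Eq. (28): error summed over all outputs and data pairs, averaged over n_d.
        y_d, y = np.asarray(y_desired), np.asarray(y_actual)
        return np.sum(np.abs(y_d - y)) / len(y_d)

    def fitness_of(y_desired, y_actual):
        return 1.0 / (1.0 + err_mae(y_desired, y_actual))  # eq. (27)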

V. APPLICATION EXAMPLES

Two application examples are given in this section to illustrate the merits of the proposed neural network tuned by the improved GA.

A. Forecasting of the Sunspot Number

An application example on forecasting the sunspot number [7-8, 27] is given in this section. The sunspot numbers from 1700 to 1980 are shown in Fig. 5. The cycles generated are non-linear, non-stationary and non-Gaussian, and are difficult to model and predict. We use the proposed 3-layer neural network (3-input-single-output) with link switches for the sunspot number forecasting. The inputs of the proposed neural network are defined as z_1(t) = y^d(t-1), z_2(t) = y^d(t-2) and z_3(t) = y^d(t-3), where t denotes the year and y^d(t) is the sunspot number in year t. The sunspot numbers of the first 180 years (i.e. 1705 \le t \le 1884) are used to train the proposed neural network. Referring to (24), the proposed neural network used for the sunspot forecasting is governed by

y(t) = \sum_{j=1}^{n_h} \delta(s^2_j)\, w_j\, \mathrm{logsig}\!\left( \sum_{i=1}^{3} \delta(s^1_{ij})\, v_{ij}\, z_i(t) + \delta(s^{b1}_j)\, b^1_j \right) + \delta(s^{b2})\, b^2   (29)

The value of n_h is changed from 3 to 7 to test the learning performance. The fitness function is defined as

fitness = \frac{1}{1 + err}   (30)

err = \frac{\sum_{t=1705}^{1884} \left| y^d(t) - y(t) \right|}{180}   (31)

The improved GA is employed to tune the parameters and structure of the neural network of (29), and the objective is to maximize the fitness function of (30). The best fitness value is 1 and the worst one is 0. The population size used for the improved GA is 10, with w = 0.9 and p_a = 0.1 for all values of n_h. The lower and upper bounds of the link weights are defined as -3 \le v_{ij}, w_j, b^1_j, b^2 \le 3 and -1 \le s^1_{ij}, s^2_j, s^{b1}_j, s^{b2} \le 1, i = 1, 2, 3; j = 1, 2, \ldots, n_h [16]. The chromosomes used for the improved GA are [s^1_{ij}\ w_j\ s^2_j\ v_{ij}\ s^{b1}_j\ b^1_j\ s^{b2}\ b^2]. The initial values of the link weights between the input and hidden layers, and those between the hidden and output layers, are fixed before learning; the initial values of the switches are all 0.5.

For comparison purposes, a fully connected 3-layer feed-forward neural network (3-input-1-output) [21] is trained by (1) the standard GA with arithmetic crossover and non-uniform mutation [1-2, 5] and (2) back-propagation with momentum and adaptive learning rate [30]. The proposed neural network is also trained with the standard GA for comparison. For the standard GA, the population size is 10, the probability of crossover is 0.8 and the probability of mutation is 0.1. The shape parameter b of the standard GA with arithmetic crossover and non-uniform mutation, selected by trial and error through experiments for good performance, is set at 1.
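Building the training pairs for this forecaster is mechanical. The sketch below assumes a hypothetical mapping series[year] -> sunspot number and follows the three-lag input definition above:

    import numpy as np

    def make_sunspot_pairs(series, t0=1705, t1=1884):
        # z(t) = (y_d(t-1), y_d(t-2), y_d(t-3)) with target y_d(t), per Section V-A.
        pairs = []
        for t in range(t0, t1 + 1):
            z = np.array([series[t - 1], series[t - 2], series[t - 3]])
            pairs.append((z, series[t]))
        return pairs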

For the back-propagation with momentum and adaptive learning rate, the learning rate is 0.1, the ratio to increase the learning rate is 1.05, the ratio to decrease the learning rate is 0.7, the maximum number of validation failures is 5, the maximum performance increase is 1.04, and the momentum constant is 0.9. The initial values of the link weights are the same as those in the proposed neural network. For all approaches, the learning processes are carried out on a personal computer with a P4 1.4 GHz CPU, and the number of iterations for all approaches is 1000.

The tuned neural networks are used to forecast the sunspot numbers during the years 1885-1980. Fig. 6 shows the simulation results of the forecasting using the proposed neural network trained with the improved GA (dashed lines) against the actual sunspot numbers (solid lines), with the number of hidden nodes n_h changed from 4 to 8. The simulation results for the comparisons are tabulated in Tables II and III. From Table II, it is observed that the proposed neural network trained with the improved GA provides better results than the proposed neural network with the standard GA, the traditional feed-forward neural network trained with the standard GA, and back-propagation with momentum and adaptive learning rate, in terms of accuracy (fitness values) and number of links. The training error (governed by (31)) and the forecasting error (governed by \sum_{t=1885}^{1980} |y^d(t) - y(t)| / 96) are tabulated in Table III. It can be observed from Table III that our approach performs better than the traditional approaches. Referring to Table III, the best result is obtained when the number of hidden nodes is 6: the number of connected links is 18 after learning, against the 31 links (including the bias links) of a fully connected network, a reduction of about 41.9% in the number of links. The corresponding training and forecasting errors in terms of mean absolute error (MAE) are listed in Table III.
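The forecasting error used in Table III is the mean absolute error over the 96 test years. A short sketch, where predict(t) stands for a hypothetical one-step forecaster built on the tuned network:

    def forecast_mae(series, predict, t0=1885, t1=1980):
        errs = [abs(series[t] - predict(t)) for t in range(t0, t1 + 1)]
        return sum(errs) / len(errs)  # 96 years: t = 1885, ..., 1980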

B. Associative Memory

Another application example on tuning an associative memory is given in this section. In this example, the associative memory, which maps its input vector into itself, has 10 inputs and 10 outputs; thus, the desired output vector is its input vector. Referring to (24), the proposed neural network is used again and is governed by

y_k(t) = \sum_{j=1}^{n_h} \delta(s^2_{jk})\, w_{jk}\, \mathrm{logsig}\!\left( \sum_{i=1}^{10} \delta(s^1_{ij})\, v_{ij}\, z_i(t) + \delta(s^{b1}_j)\, b^1_j \right) + \delta(s^{b2}_k)\, b^2_k, \quad k = 1, 2, \ldots, 10   (32)

50 sets of input vectors are employed to train the proposed neural network, and the value of n_h is changed from 4 to 8 to test the learning performance. The fitness function is defined as

fitness = \frac{1}{1 + err}   (33)

err = \frac{\sum_{t=1}^{n_d} \sum_{k=1}^{10} \left| z_k(t) - y_k(t) \right|}{n_d}   (34)

where n_d is the number of training vectors. The improved GA is employed to tune the parameters and structure of the neural network of (32), and the objective is to maximize the fitness function of (33); a larger value of the fitness function indicates a smaller value of err of (34). The best fitness value is 1 and the worst one is 0. The population size used for the improved GA is 10, with w = 0.8 and p_a = 0.1 for all values of n_h. The lower and upper bounds of the link weights are defined as -3 \le v_{ij}, w_{jk}, b^1_j, b^2_k \le 3 and -1 \le s^1_{ij}, s^2_{jk}, s^{b1}_j, s^{b2}_k \le 1, i = 1, 2, \ldots, 10; j = 1, 2, \ldots, n_h; k = 1, 2, \ldots, 10 [16]. The chromosomes used for the improved GA are [s^1_{ij}\ w_{jk}\ s^2_{jk}\ v_{ij}\ s^{b1}_j\ b^1_j\ s^{b2}_k\ b^2_k]. The initial values of the link weights are all zero.

For comparison purposes, the proposed neural network trained by the standard GA (with arithmetic crossover and non-uniform mutation [1-2, 5]), a fully connected 3-layer feed-forward neural network (10-input-10-output) [21] trained by the standard GA, and back-propagation (with momentum and adaptive learning rate [30]) are used again. For the standard GA, the population size is 10, the probability of crossover is 0.8 and the probability of mutation is 0.1. The shape parameter b of the standard GA with arithmetic crossover and non-uniform mutation, selected by trial and error through experiments for good performance, is set at 3.

For the back-propagation with momentum and adaptive learning rate, the learning rate is 0.1, the ratio to increase the learning rate is 1.05, the ratio to decrease the learning rate is 0.7, the maximum number of validation failures is 5, the maximum performance increase is 1.04, and the momentum constant is 0.9. The initial values of the link weights are the same as those of the proposed approach. The number of iterations for all approaches is 500. The simulation results are tabulated in Table IV. It can be seen from Table IV that the fitness values for different n_h obtained by the standard GA (with arithmetic crossover and non-uniform mutation) and by the back-propagation (with momentum and adaptive learning rate) are similar to those obtained by our approach, which, however, offers smaller networks.

VI. CONCLUSION

An improved GA has been proposed in this paper. Using benchmark test functions, it has been shown that the improved GA performs more efficiently than the standard GA. Besides, by introducing a switch to each link, a neural network that facilitates the tuning of its structure has been proposed. Using the improved GA, the proposed neural network is able to learn both the input-output relationship of an application and the network structure. As a result, a given fully connected neural network can be reduced to a partly connected network after learning, which implies a lower cost of implementing the neural network. Application examples on forecasting the sunspot numbers and tuning an associative memory using the proposed neural network trained with the improved GA have been given. The simulation results have been compared with those obtained by a traditional feed-forward network trained by (1) the standard GA with arithmetic crossover and non-uniform mutation and (2) the back-propagation with momentum and adaptive learning rate.

ACKNOWLEDGEMENT

The work described in this paper was substantially supported by a Research Grant of the Centre for Multimedia Signal Processing, The Hong Kong Polytechnic University (project number A43).

REFERENCES

[1] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[2] D.T. Pham and D. Karaboga, Intelligent Optimization Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks, Springer, 2000.
[3] Y. Hanaki, T. Hashiyama and S. Okuma, "Accelerated evolutionary computation using fitness estimation," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics (SMC '99), 1999.
[4] K.A. De Jong, "An analysis of the behavior of a class of genetic adaptive systems," Ph.D. thesis, University of Michigan, Ann Arbor, MI, 1975.
[5] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 2nd extended ed., Springer-Verlag, 1994.
[6] X. Yao and Y. Liu, "Evolutionary programming made faster," IEEE Trans. Evolutionary Computation, vol. 3, no. 2, pp. 82-102, July 1999.
[7] M. Li, K. Mehrotra, C. Mohan and S. Ranka, "Sunspot numbers forecasting using neural networks," in Proc. 5th IEEE Int. Symp. Intelligent Control, 1990.
[8] T.J. Cholewo and J.M. Zurada, "Sequential network construction for time series prediction," in Proc. Int. Conf. Neural Networks, vol. 4, 1997.
[9] B.D. Liu, C.Y. Chen and J.Y. Tsao, "Design of adaptive fuzzy logic controller based on linguistic-hedge concepts and genetic algorithms," IEEE Trans. Systems, Man and Cybernetics, Part B, vol. 31, no. 1, Feb. 2001.
[10] Y.S. Zhou and L.Y. Lai, "Optimal design for fuzzy controllers by genetic algorithms," IEEE Trans. Industry Applications, vol. 36, no. 1, Jan.-Feb. 2000.

[11] C.F. Juang, J.Y. Lin and C.T. Lin, "Genetic reinforcement learning through symbiotic evolution for fuzzy controller design," IEEE Trans. Systems, Man and Cybernetics, Part B, vol. 30, no. 2, April 2000.
[12] H. Juidette and H. Youlal, "Fuzzy dynamic path planning using genetic algorithms," Electronics Letters, vol. 36, no. 4, Feb. 2000.
[13] R. Caponetto, L. Fortuna, G. Nunnari, L. Occhipinti and M.G. Xibilia, "Soft computing for greenhouse climate control," IEEE Trans. Fuzzy Systems, vol. 8, no. 6, Dec. 2000.
[14] M. Setnes and H. Roubos, "GA-fuzzy modeling and classification: complexity and performance," IEEE Trans. Fuzzy Systems, vol. 8, no. 5, Oct. 2000.
[15] K. Belarbi and F. Titel, "Genetic algorithm for the design of a class of fuzzy controllers: an alternative approach," IEEE Trans. Fuzzy Systems, vol. 8, no. 4, Aug. 2000.
[16] M. Brown and C. Harris, Neurofuzzy Adaptive Modelling and Control, Prentice Hall, 1994.
[17] S. Amin and J.L. Fernandez-Villacanas, "Dynamic local search," in Proc. 2nd Int. Conf. Genetic Algorithms in Engineering Systems: Innovations and Applications, pp. 129-132, 1997.
[18] X. Yao, "Evolving artificial neural networks," Proceedings of the IEEE, vol. 87, no. 7, pp. 1423-1447, 1999.
[19] X. Yao and Y. Liu, "A new evolutionary system for evolving artificial neural networks," IEEE Trans. Neural Networks, vol. 8, no. 3, pp. 694-713, 1997.
[20] F.J. Lin, C.H. Lin and P.H. Shen, "Self-constructing fuzzy neural network speed controller for permanent-magnet synchronous motor drive," IEEE Trans. Fuzzy Systems, vol. 9, no. 5, Oct. 2001.
[21] Y. Hirose, K. Yamashita and S. Hijiya, "Back-propagation algorithm which varies the number of hidden units," Neural Networks, vol. 4, no. 1, pp. 61-66, 1991.
[22] A. Roy, L.S. Kim and S. Mukhopadhyay, "A polynomial time algorithm for the construction and training of a class of multilayer perceptrons," Neural Networks, vol. 6, no. 4, pp. 535-545, 1993.

[23] N.K. Treadgold and T.D. Gedeon, "Exploring constructive cascade networks," IEEE Trans. Neural Networks, vol. 10, no. 6, Nov. 1999.
[24] C.C. Teng and B.W. Wah, "Automated learning for reducing the configuration of a feedforward neural network," IEEE Trans. Neural Networks, vol. 7, no. 5, Sept. 1996.
[25] Y.Q. Chen, D.W. Thomas and M.S. Nixon, "Generating-shrinking algorithm for learning arbitrary classification," Neural Networks, vol. 7, no. 9, 1994.
[26] M.C. Mozer and P. Smolensky, "Using relevance to reduce network size automatically," Connection Science, vol. 1, no. 1, pp. 3-16, 1989.
[27] H.K. Lam, S.H. Ling, F.H.F. Leung and P.K.S. Tam, "Tuning of the structure and parameters of neural network using an improved genetic algorithm," in Proc. 27th Annual Conf. IEEE Industrial Electronics Society (IECON 2001), Denver, Nov. 2001.
[28] G.F. Miller, P.M. Todd and S.U. Hegde, "Designing neural networks using genetic algorithms," in Proc. 3rd Int. Conf. Genetic Algorithms and Their Applications, J.D. Schaffer, Ed., San Mateo, CA: Morgan Kaufmann, 1989.
[29] N. Weymaere and J. Martens, "On the initialization and optimization of multilayer perceptrons," IEEE Trans. Neural Networks, vol. 5, Sept. 1994.
[30] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Upper Saddle River, NJ: Prentice Hall, 1999.
[31] X. Wang and M. Elbuluk, "Neural network control of induction machines using genetic algorithm training," in Conf. Record of the 1996 IEEE Industry Applications Conference, 31st IAS Annual Meeting, vol. 3, 1996.
[32] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
[33] M. Srinivas and L.M. Patnaik, "Genetic algorithms: a survey," IEEE Computer, vol. 27, no. 6, pp. 17-26, June 1994.

[34] J.D. Schaffer, D. Whitley and L.J. Eshelman, "Combinations of genetic algorithms and neural networks: a survey of the state of the art," in Proc. Int. Workshop on Combinations of Genetic Algorithms and Neural Networks (COGANN-92), pp. 1-37, 1992.
[35] S. Bornholdt and D. Graudenz, "General asymmetric neural networks and structure design by genetic algorithms: a learning rule for temporal patterns," in Proc. Int. Conf. Systems, Man and Cybernetics, 1993.
[36] V. Maniezzo, "Genetic evolution of the topology and weight distribution of neural networks," IEEE Trans. Neural Networks, vol. 5, no. 1, Jan. 1994.
[37] P.J. Angeline, G.M. Saunders and J.B. Pollack, "An evolutionary algorithm that constructs recurrent neural networks," IEEE Trans. Neural Networks, vol. 5, no. 1, pp. 54-65, Jan. 1994.

Procedure of the standard GA
begin
    t <- 0                      // t: iteration (generation) number
    initialize P(t)             // P(t): population for iteration t
    evaluate f(P(t))            // f(P(t)): fitness function
    while (not termination condition) do
    begin
        t <- t + 1
        select parents p1 and p2 from P(t-1)
        perform genetic operations (crossover and mutation)
        reproduce a new P(t)
        evaluate f(P(t))
    end
end

Fig. 1. Procedure of the standard GA.

Procedure of the improved GA
begin
    t <- 0                      // t: iteration number
    initialize P(t)             // P(t): population for iteration t
    evaluate f(P(t))            // f(P(t)): fitness function
    while (not termination condition) do
    begin
        t <- t + 1
        select parents p1 and p2 from P(t-1)
        perform crossover operation according to equations (7) to (13)
        perform mutation operation according to equation (14) to obtain three offspring nos1, nos2 and nos3
        // reproduce a new P(t)
        if random number < pa   // pa: probability of acceptance
            the one among nos1, nos2 and nos3 with the largest fitness value replaces the chromosome with the smallest fitness value in the population
        else
            if f(nos1) > smallest fitness value in P(t-1)
                nos1 replaces the chromosome with the smallest fitness value
            end
            if f(nos2) > smallest fitness value in the updated P(t-1)
                nos2 replaces the chromosome with the smallest fitness value
            end
            if f(nos3) > smallest fitness value in the updated P(t-1)
                nos3 replaces the chromosome with the smallest fitness value
            end
        end
    end
end

Fig. 2. Procedure of the improved GA.

Fig. 3. Simulation results of the improved and standard GAs on the benchmark test functions. Panels (a) to (f) plot the averaged fitness value against the number of iterations for f_1(x) to f_6(x) respectively, obtained by the improved (solid line) and standard (dotted line) GAs.

Fig. 4. Proposed 3-layer neural network, with a unit-step switch on every input-to-hidden link, hidden-to-output link and bias link.

Fig. 5. Sunspot numbers from the year 1700 to 1980.

Fig. 6. Simulation results of the 96-year prediction (years 1885-1980) using the proposed neural network with the proposed GA (dashed line) and the actual sunspot numbers (solid line): (a) n_h = 4; (b) n_h = 5; (c) n_h = 6; (d) n_h = 7; (e) n_h = 8.

Table I. Simulation results of the proposed GA and the standard GA based on the benchmark test functions f_1(x) to f_6(x).

Table II. Simulation results for the application example of forecasting the sunspot number after 1000 iterations of learning: (a) our approach and the standard GA with the proposed neural network; (b) the standard GA with the traditional neural network and back-propagation with momentum and adaptive learning rate. For each value of n_h, the fitness values and the numbers of links are listed.

Table III. Training error and forecasting error in mean absolute error (MAE) for the application example of forecasting the sunspot number: (a) our approach and the standard GA with the proposed neural network; (b) the standard GA with the traditional neural network and back-propagation with momentum and adaptive learning rate.

Table IV. Simulation results for the application example of the associative memory after 500 iterations of learning: (a) our approach and the standard GA with the proposed neural network; (b) the standard GA with the traditional neural network and back-propagation with momentum and adaptive learning rate. For each value of n_h, the fitness values and the numbers of links are listed.


More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

Fuzzy Boundaries of Sample Selection Model

Fuzzy Boundaries of Sample Selection Model Proceedngs of the 9th WSES Internatonal Conference on ppled Mathematcs, Istanbul, Turkey, May 7-9, 006 (pp309-34) Fuzzy Boundares of Sample Selecton Model L. MUHMD SFIIH, NTON BDULBSH KMIL, M. T. BU OSMN

More information

Multigradient for Neural Networks for Equalizers 1

Multigradient for Neural Networks for Equalizers 1 Multgradent for Neural Netorks for Equalzers 1 Chulhee ee, Jnook Go and Heeyoung Km Department of Electrcal and Electronc Engneerng Yonse Unversty 134 Shnchon-Dong, Seodaemun-Ku, Seoul 1-749, Korea ABSTRACT

More information

Pop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing

Pop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,

More information

An Improved multiple fractal algorithm

An Improved multiple fractal algorithm Advanced Scence and Technology Letters Vol.31 (MulGraB 213), pp.184-188 http://dx.do.org/1.1427/astl.213.31.41 An Improved multple fractal algorthm Yun Ln, Xaochu Xu, Jnfeng Pang College of Informaton

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

A DNA Coding Scheme for Searching Stable Solutions

A DNA Coding Scheme for Searching Stable Solutions A DNA odng Scheme for Searchng Stable Solutons Intaek Km, HeSong Lan, and Hwan Il Kang 2 Department of ommuncaton Eng., Myongj Unversty, 449-728, Yongn, South Korea kt@mju.ac.kr, hslan@hotmal.net 2 Department

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

Parameter Estimation for Dynamic System using Unscented Kalman filter

Parameter Estimation for Dynamic System using Unscented Kalman filter Parameter Estmaton for Dynamc System usng Unscented Kalman flter Jhoon Seung 1,a, Amr Atya F. 2,b, Alexander G.Parlos 3,c, and Klto Chong 1,4,d* 1 Dvson of Electroncs Engneerng, Chonbuk Natonal Unversty,

More information

Boostrapaggregating (Bagging)

Boostrapaggregating (Bagging) Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod

More information

Capacitor Placement In Distribution Systems Using Genetic Algorithms and Tabu Search

Capacitor Placement In Distribution Systems Using Genetic Algorithms and Tabu Search Capactor Placement In Dstrbuton Systems Usng Genetc Algorthms and Tabu Search J.Nouar M.Gandomar Saveh Azad Unversty,IRAN Abstract: Ths paper presents a new method for determnng capactor placement n dstrbuton

More information

THE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS

THE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS Nnth Internatonal IBPSA Conference Montréal, Canada August 5-8, 2005 THE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS Jonathan Wrght, and Al Alajm Department

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Queueing Networks II Network Performance

Queueing Networks II Network Performance Queueng Networks II Network Performance Davd Tpper Assocate Professor Graduate Telecommuncatons and Networkng Program Unversty of Pttsburgh Sldes 6 Networks of Queues Many communcaton systems must be modeled

More information

The Chaotic Robot Prediction by Neuro Fuzzy Algorithm (2) = θ (3) = ω. Asin. A v. Mana Tarjoman, Shaghayegh Zarei

The Chaotic Robot Prediction by Neuro Fuzzy Algorithm (2) = θ (3) = ω. Asin. A v. Mana Tarjoman, Shaghayegh Zarei The Chaotc Robot Predcton by Neuro Fuzzy Algorthm Mana Tarjoman, Shaghayegh Zare Abstract In ths paper an applcaton of the adaptve neurofuzzy nference system has been ntroduced to predct the behavor of

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

The Synchronous 8th-Order Differential Attack on 12 Rounds of the Block Cipher HyRAL

The Synchronous 8th-Order Differential Attack on 12 Rounds of the Block Cipher HyRAL The Synchronous 8th-Order Dfferental Attack on 12 Rounds of the Block Cpher HyRAL Yasutaka Igarash, Sej Fukushma, and Tomohro Hachno Kagoshma Unversty, Kagoshma, Japan Emal: {garash, fukushma, hachno}@eee.kagoshma-u.ac.jp

More information

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL

More information

DUE: WEDS FEB 21ST 2018

DUE: WEDS FEB 21ST 2018 HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant

More information

Internet Engineering. Jacek Mazurkiewicz, PhD Softcomputing. Part 3: Recurrent Artificial Neural Networks Self-Organising Artificial Neural Networks

Internet Engineering. Jacek Mazurkiewicz, PhD Softcomputing. Part 3: Recurrent Artificial Neural Networks Self-Organising Artificial Neural Networks Internet Engneerng Jacek Mazurkewcz, PhD Softcomputng Part 3: Recurrent Artfcal Neural Networks Self-Organsng Artfcal Neural Networks Recurrent Artfcal Neural Networks Feedback sgnals between neurons Dynamc

More information

A Neuro-Fuzzy System on System Modeling and Its. Application on Character Recognition

A Neuro-Fuzzy System on System Modeling and Its. Application on Character Recognition A Neuro-Fuzzy System on System Modelng and Its Applcaton on Character Recognton C. J. Chen 1, S. M. Yang 2, Z. C. Wang 3 1 Department of Avaton Servce Management Alethea Unversty Tawan, ROC 2,3 Department

More information

Statistics II Final Exam 26/6/18

Statistics II Final Exam 26/6/18 Statstcs II Fnal Exam 26/6/18 Academc Year 2017/18 Solutons Exam duraton: 2 h 30 mn 1. (3 ponts) A town hall s conductng a study to determne the amount of leftover food produced by the restaurants n the

More information

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1] DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm

More information

IV. Performance Optimization

IV. Performance Optimization IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton

More information

AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING

AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING Qn Wen, Peng Qcong 40 Lab, Insttuton of Communcaton and Informaton Engneerng,Unversty of Electronc Scence and Technology

More information

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs

More information

Homework Assignment 3 Due in class, Thursday October 15

Homework Assignment 3 Due in class, Thursday October 15 Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.

More information

Finite Mixture Models and Expectation Maximization. Most slides are from: Dr. Mario Figueiredo, Dr. Anil Jain and Dr. Rong Jin

Finite Mixture Models and Expectation Maximization. Most slides are from: Dr. Mario Figueiredo, Dr. Anil Jain and Dr. Rong Jin Fnte Mxture Models and Expectaton Maxmzaton Most sldes are from: Dr. Maro Fgueredo, Dr. Anl Jan and Dr. Rong Jn Recall: The Supervsed Learnng Problem Gven a set of n samples X {(x, y )},,,n Chapter 3 of

More information

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests Smulated of the Cramér-von Mses Goodness-of-Ft Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth

More information

Neural networks. Nuno Vasconcelos ECE Department, UCSD

Neural networks. Nuno Vasconcelos ECE Department, UCSD Neural networs Nuno Vasconcelos ECE Department, UCSD Classfcaton a classfcaton problem has two types of varables e.g. X - vector of observatons (features) n the world Y - state (class) of the world x X

More information

Determining Transmission Losses Penalty Factor Using Adaptive Neuro Fuzzy Inference System (ANFIS) For Economic Dispatch Application

Determining Transmission Losses Penalty Factor Using Adaptive Neuro Fuzzy Inference System (ANFIS) For Economic Dispatch Application 7 Determnng Transmsson Losses Penalty Factor Usng Adaptve Neuro Fuzzy Inference System (ANFIS) For Economc Dspatch Applcaton Rony Seto Wbowo Maurdh Hery Purnomo Dod Prastanto Electrcal Engneerng Department,

More information

Multilayer Perceptrons and Backpropagation. Perceptrons. Recap: Perceptrons. Informatics 1 CG: Lecture 6. Mirella Lapata

Multilayer Perceptrons and Backpropagation. Perceptrons. Recap: Perceptrons. Informatics 1 CG: Lecture 6. Mirella Lapata Multlayer Perceptrons and Informatcs CG: Lecture 6 Mrella Lapata School of Informatcs Unversty of Ednburgh mlap@nf.ed.ac.uk Readng: Kevn Gurney s Introducton to Neural Networks, Chapters 5 6.5 January,

More information

Second Order Analysis

Second Order Analysis Second Order Analyss In the prevous classes we looked at a method that determnes the load correspondng to a state of bfurcaton equlbrum of a perfect frame by egenvalye analyss The system was assumed to

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1 Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com

More information

Discretization of Continuous Attributes in Rough Set Theory and Its Application*

Discretization of Continuous Attributes in Rough Set Theory and Its Application* Dscretzaton of Contnuous Attrbutes n Rough Set Theory and Its Applcaton* Gexang Zhang 1,2, Lazhao Hu 1, and Wedong Jn 2 1 Natonal EW Laboratory, Chengdu 610036 Schuan, Chna dylan7237@sna.com 2 School of

More information

Other NN Models. Reinforcement learning (RL) Probabilistic neural networks

Other NN Models. Reinforcement learning (RL) Probabilistic neural networks Other NN Models Renforcement learnng (RL) Probablstc neural networks Support vector machne (SVM) Renforcement learnng g( (RL) Basc deas: Supervsed dlearnng: (delta rule, BP) Samples (x, f(x)) to learn

More information

An improved multi-objective evolutionary algorithm based on point of reference

An improved multi-objective evolutionary algorithm based on point of reference IOP Conference Seres: Materals Scence and Engneerng PAPER OPEN ACCESS An mproved mult-objectve evolutonary algorthm based on pont of reference To cte ths artcle: Boy Zhang et al 08 IOP Conf. Ser.: Mater.

More information

Optimal Solution to the Problem of Balanced Academic Curriculum Problem Using Tabu Search

Optimal Solution to the Problem of Balanced Academic Curriculum Problem Using Tabu Search Optmal Soluton to the Problem of Balanced Academc Currculum Problem Usng Tabu Search Lorna V. Rosas-Téllez 1, José L. Martínez-Flores 2, and Vttoro Zanella-Palacos 1 1 Engneerng Department,Unversdad Popular

More information

Note 10. Modeling and Simulation of Dynamic Systems

Note 10. Modeling and Simulation of Dynamic Systems Lecture Notes of ME 475: Introducton to Mechatroncs Note 0 Modelng and Smulaton of Dynamc Systems Department of Mechancal Engneerng, Unversty Of Saskatchewan, 57 Campus Drve, Saskatoon, SK S7N 5A9, Canada

More information

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton

More information