Adaptive LRBP Using Learning Automata for Neural Networks


B. MASHOUFI*, MOHAMMAD B. MENHAJ*(#), SAYED A. MOTAMEDI* and MOHAMMAD R. MEYBODI**
*Electrical Engineering Department, **Computer Engineering Department
Amirkabir University of Technology, Hafez Ave. 424, Tehran 15914, IRAN
(#) Currently with the Department of Electrical and Computer Engineering at Oklahoma State University.

Abstract: One of the biggest limitations of the BP algorithm is its low rate of convergence. In this paper, the Variable Learning Rate (VLR) algorithm and learning automata based learning rate adaptation algorithms are described and compared with each other. Because the VLR parameters have an important influence on its performance, we use learning automata to adjust them. In the proposed algorithm, named the Adaptive Variable Learning Rate (AVLR) algorithm, the VLR parameters are changed dynamically by learning automata according to changes in the error. Simulation results on a second order discrete time nonlinear function approximation problem highlight the merit of the proposed AVLR.

Key-Words: Multilayer Neural Network, Backpropagation, Variable Learning Rate, Learning Automata

1 Introduction
Consider the problem of backpropagation learning with the batch method. To obtain an optimal weight vector in an iterative manner, a descent type algorithm was first developed by Werbos in 1974, rediscovered by Parker in 1982, and once again by Rumelhart in 1986. Unfortunately, it has been observed that the convergence rate of BP is extremely slow, especially for networks with more than one hidden layer. The intrinsic reason behind this is the saturation property of the activation function used for the hidden and output units. Once the output of a unit lies in the saturation area, the corresponding descent gradient takes a very small value even if the output error is very large. This results in very little progress in the weight adjustment if one takes a fixed small learning rate parameter. To avoid this undesired phenomenon, one may consider a relatively large learning rate. This would be dangerous, however, because it may lead to divergence of the iteration, especially when the weight adjustment happens to fall into surface regions with large steepness. Therefore, an efficient learning algorithm should be able to dynamically vary its learning rate in accordance with changes in the gradient values. Research into dynamic change of the learning rate of the BP algorithm has been reported in [4]-[6]; basically, these methods all dynamically increase or decrease the learning rate and momentum by a fixed factor based on the observation of error signals. Some other acceleration methods have also been presented, including modifications of the optimization criterion and the use of second order methods, e.g., the Newton method [7]-[8], and the Broyden-Fletcher-Goldfarb-Shanno and Levenberg-Marquardt methods [9]-[10]. There are also some learning automata (LA) based methods [11]-[19]. Although M. L. Tsetlin and his co-workers started work on learning automata in the 1960s in the Soviet Union, variable structure learning automata (VSLA) and fixed structure learning automata (FSLA) have only recently been used to find appropriate values of the parameters of the BP training algorithm [11]-[19].

In this paper, different methods of dynamically changing the learning rate are considered. The Variable Learning Rate (VLR) algorithm and learning automata based learning rate adaptation algorithms are described and compared with each other. Because the VLR parameters have an important influence on its performance, we use learning automata to adjust them. In the proposed algorithm, named the Adaptive Variable Learning Rate (AVLR) algorithm, the VLR parameters are changed dynamically by learning automata according to changes in the error.
Simulation results on a second order discrete time nonlinear function approximation problem highlight the merit of the proposed AVLR. The rest of the paper is organized as follows: Section 2 briefly presents the standard backpropagation algorithm. An introduction to learning automata is given in Section 3. In Section 4 the variable learning rate algorithm is described. Section 5 presents learning automata based methods. In Section 6 a new algorithm, named the adaptive variable learning rate, is introduced. The simulation results are given in Section 7. Section 8 concludes the paper.

2 Backpropagation Algorithm
BP is a systematic method for training multilayer neural networks. The BP algorithm has two computational phases: the first is the forward phase and the second is the backward phase [1].

Forward phase: this phase is described by the following equations:

    a^0 = p_k
    a^{l+1} = F^{l+1}( W^{l+1} a^l + b^{l+1} ),   l = 0, 1, ..., L-1
    a = a^L

The activation function acts on all neurons of a layer, that is:

    F^{l+1}(n^{l+1}) = [ f^{l+1}(n_1^{l+1}), ..., f^{l+1}(n_{S^{l+1}}^{l+1}) ]^T

Backward phase: in this phase, sensitivity vectors are propagated from the output layer to the input layer. The following equations describe the dynamics of the backward phase:

    e(k) = t(k) - a(k)
    \delta^L = -2 \dot{F}^L(n^L) e(k)
    \delta^l = \dot{F}^l(n^l) (W^{l+1})^T \delta^{l+1},   l = L-1, ..., 1

In the backward phase, the error vector is computed first. Then the error vector is propagated from right to left, from the output layer to the input layer, and the local gradients are computed neuron by neuron.

Parameter adjusting: in this step the weight matrices and biases are adjusted as follows:

    W^l(k+1) = W^l(k) - \alpha \delta^l (a^{l-1})^T
    b^l(k+1) = b^l(k) - \alpha \delta^l,   l = 1, 2, ..., L

Stopping criterion: if the average of the squared errors in each epoch (the sum of squared errors over all training patterns) is smaller than a predetermined value, the BP algorithm is stopped.
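For concreteness, the batch update above can be sketched in a few lines of Python/NumPy for a network with one hidden layer (tanh hidden units, linear output). This is a minimal sketch of the standard rule, not the authors' code; the helper name bp_epoch, the column-per-pattern layout and the two-layer restriction are assumptions made for brevity.

    import numpy as np

    def bp_epoch(W1, b1, W2, b2, P, T, lr):
        """One batch epoch of BP for a one-hidden-layer network (tanh/linear).

        P: inputs, one pattern per column; T: targets, same layout.
        Returns the sum of squared errors before the weight update.
        """
        # Forward phase: a^0 = p, a^{l+1} = F^{l+1}(W^{l+1} a^l + b^{l+1})
        a1 = np.tanh(W1 @ P + b1)          # hidden layer output
        a2 = W2 @ a1 + b2                  # linear output layer, a = a^L

        # Backward phase: delta^L = -2 F'(n^L) e, with e = t - a
        e = T - a2
        delta2 = -2.0 * e                            # F' = I for a linear output
        delta1 = (1.0 - a1 ** 2) * (W2.T @ delta2)   # tanh'(n) = 1 - tanh(n)^2

        # Parameter adjusting: W^l <- W^l - alpha delta^l (a^{l-1})^T, batch sum
        W2 -= lr * (delta2 @ a1.T)
        b2 -= lr * delta2.sum(axis=1, keepdims=True)
        W1 -= lr * (delta1 @ P.T)
        b1 -= lr * delta1.sum(axis=1, keepdims=True)
        return float(np.sum(e ** 2))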

3 Learning Automata
Learning automata (LA) can be classified into two main groups, fixed and variable structure learning automata (FSLA and VSLA) [16]-[19]. Examples of the FSLA type are the Tsetlin, Krinsky and Krylov automata. A fixed structure learning automaton is a quintuple <α, Φ, β, F, G> where:
1. α = (α_1, ..., α_r) is the set of actions that it must choose from;
2. Φ = (Φ_1, ..., Φ_s) is the set of states;
3. β = {0, 1} is the set of inputs, where 1 represents a penalty and 0 a reward;
4. F: Φ × β → Φ is a map called the transition map;
5. G: Φ → α is the output map.

The selected action serves as the input to the environment, which in turn emits a stochastic response β(n) at time n. β(n) is an element of β = {0, 1} and is the feedback response of the environment to the automaton. The environment penalizes the automaton (i.e., β(n) = 1) with the penalty c_i, which is action dependent. On the basis of the response β(n), the state of the automaton Φ(n) is updated and a new action is chosen at time (n+1). The interconnection of a learning automaton and its environment is shown in Figure 1. In the following subsections, we describe fixed structure and variable structure learning automata.

Figure 1: The interconnection of a learning automaton and its environment

3.1 The two-state automaton (L_{2,2})
This automaton has two states, Φ_1 and Φ_2, and two actions, α_1 and α_2. The automaton accepts input from the set {0, 1}; it switches its state upon encountering the input 1 (unfavorable response) and remains in the same state on receiving the input 0 (favorable response). The state transitions are shown in Figure 2.

Figure 2: The state transition graph for L_{2,2}

3.2 The two-action automaton with memory (L_{2N,2})
This automaton has 2N states and two actions, and attempts to incorporate the past behavior of the system into its decision rule for choosing the sequence of actions. The automaton keeps an account of the number of successes and failures received for each action; only when the number of failures exceeds the number of successes, or some maximum value N, does the automaton switch from one action to the other. For every favorable response, the state of the automaton moves deeper into the memory of the corresponding action; for an unfavorable response, it moves out of it. The state transition graph of the L_{2N,2} automaton is shown in Figure 3.

Figure 3: The state transition graph for L_{2N,2}

3.3 The Krinsky automaton
This automaton behaves exactly like the L_{2N,2} automaton when the response of the environment is unfavorable, but for a favorable response, any state Φ_i (for i = 1, ..., N) passes to the state Φ_1 and any state Φ_i (for i = N+1, ..., 2N) passes to the state Φ_{N+1}. The state transition graph of the Krinsky automaton is shown in Figure 4.

Figure 4: The state transition graph for the Krinsky automaton

3.4 The Krylov automaton
This automaton has state transitions identical to the L_{2N,2} automaton when the output of the environment is favorable. However, when the response of the environment is unfavorable, a state Φ_i (i ≠ 1, N, N+1, 2N) passes to the state Φ_{i+1} with probability 0.5 and to the state Φ_{i-1} with probability 0.5. When i = 1 or i = N+1, Φ_i stays in the same state with probability 0.5 and moves to Φ_{i+1} with the same probability. When i = N, the automaton moves from Φ_N to Φ_{N-1} and to Φ_{2N}, each with probability 0.5; when i = 2N, it moves from Φ_{2N} to Φ_{2N-1} and to Φ_N, each with probability 0.5. The state transitions of the Krylov automaton are shown in Figure 5.

Figure 5: The state transition graph for the Krylov automaton

3.5 Variable structure learning automata
This automaton is represented by the sextuple <β, Φ, α, P, G, T>, where β is a set of inputs, Φ is a set of internal states, α a set of outputs (actions), P denotes the state probability vector determining the choice of the state at each stage, G is the output mapping, and T is the learning algorithm. The learning algorithm is a recurrence relation used to modify the state probability vector; various learning algorithms have been reported in the literature [16]. Let α_i be the action chosen at time n as a sample realization from the distribution p(n). In the linear reward-penalty (L_{R-P}) algorithm, the recurrence equations for updating p are defined as:

Favorable response (β(n) = 0):
    p_i(n+1) = p_i(n) + a [1 - p_i(n)]
    p_j(n+1) = (1 - a) p_j(n),   for all j ≠ i

Unfavorable response (β(n) = 1):
    p_i(n+1) = (1 - b) p_i(n)
    p_j(n+1) = b/(r-1) + (1 - b) p_j(n),   for all j ≠ i

The parameters a and b are called step sizes and determine the amount of increment (decrement) of the action probabilities. Another learning algorithm that is usually used is the linear reward-inaction (L_{R-I}) algorithm. In the L_{R-I} algorithm, for a favorable response β(n) = 0 the probability corresponding to α_i increases and the others decrease, but for an unfavorable response β(n) = 1 the probabilities are not changed. The recursive equations for changing P are as follows:

Favorable response (β(n) = 0):
    p_i(n+1) = p_i(n) + a [1 - p_i(n)]
    p_j(n+1) = (1 - a) p_j(n),   for all j ≠ i

Unfavorable response (β(n) = 1):
    p_j(n+1) = p_j(n),   j = 1, ..., r

In the above equations, r represents the number of actions.
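Both kinds of update are easy to state in code. Below is a minimal Python sketch, with naming of our own choosing, of a Tsetlin-type two-action automaton with memory depth N and of the L_{R-P} probability update for a NumPy probability vector; setting b = 0 in lrp_update reduces it to the L_{R-I} rule.

    import numpy as np

    class TsetlinAutomaton:
        """Fixed-structure two-action automaton with memory depth N (L_{2N,2}).

        States 0..N-1 belong to action 0 (state 0 deepest); states N..2N-1
        belong to action 1 (state N deepest)."""
        def __init__(self, N):
            self.N, self.state = N, 0

        def action(self):
            return 0 if self.state < self.N else 1

        def update(self, beta):
            N, s = self.N, self.state
            if beta == 0:                      # favorable: move deeper
                deepest = 0 if s < N else N
                if s != deepest:
                    self.state = s - 1
            elif s == N - 1:                   # unfavorable at the boundary:
                self.state = 2 * N - 1         # switch to the other action
            elif s == 2 * N - 1:
                self.state = N - 1
            else:                              # unfavorable: move outward
                self.state = s + 1

    def lrp_update(p, i, beta, a=0.1, b=0.1):
        """Linear reward-penalty (L_{R-P}) update of the action probability
        vector p (NumPy array) after action i drew response beta
        (0 = reward, 1 = penalty). b = 0 gives linear reward-inaction."""
        r = len(p)
        if beta == 0:              # p_i += a(1 - p_i), p_j *= (1 - a)
            p = (1.0 - a) * p
            p[i] += a
        else:                      # p_i *= (1 - b), p_j = b/(r-1) + (1 - b) p_j
            p = b / (r - 1) + (1.0 - b) * p
            p[i] -= b / (r - 1)
        return p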

4 Variable Learning Rate (VLR)
The performance of the steepest descent algorithm can be improved if we allow the learning rate to change during the training process. An adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable. First, the initial network output and error are calculated. At each epoch, new weights and biases are calculated using the current learning rate, and the new outputs and errors are then calculated. If the new error exceeds the old error by more than a predefined ratio max_perf_inc (typically 1.04), the new weights and biases are discarded and, in addition, the learning rate is decreased (typically by multiplying it by lr_dec = 0.7). Otherwise the new weights, etc., are kept. If the new error is less than the old error, the learning rate is increased (typically by multiplying it by lr_inc = 1.05).

5 Learning Automata Based Methods
In this section we describe LA based methods for the adaptation of BP parameters. In these methods the neural network acts as an environment; the interconnection of the neural network and the learning automaton is shown in Figure 6. Different values of a BP parameter act as the set of automaton actions. In each step an action is selected and fed to the environment: the neural network uses this parameter value and runs the BP algorithm a fixed number of times. Then a function of the network error, which can for example be the minimum of the error over these iterations, is compared with the corresponding value from the preceding iterations. If the function value decreases, the neural network generates a favorable response (β(n) = 0); if it increases, an unfavorable response (β(n) = 1). Using the network response, the learning automaton changes its action probabilities (in the case of an LA with variable structure) or changes its state (in the case of an LA with fixed structure).

Figure 6: The interconnection of the neural network and the learning automaton
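Putting the pieces together, the interaction of Figure 6 amounts to a simple outer loop around BP. The sketch below reuses the TsetlinAutomaton class from the previous listing; train_fn, the candidate list and the comparison of consecutive error minima are illustrative choices of ours, not fixed by the paper.

    def la_adapted_bp(train_fn, candidates, automaton, rounds=100):
        """Adapt the BP learning rate with a learning automaton (Section 5).

        train_fn(lr) runs the BP algorithm a fixed number of times with
        learning rate lr and returns the minimum SSE observed; the neural
        network thereby plays the role of the environment."""
        prev_best = float("inf")
        for _ in range(rounds):
            lr = candidates[automaton.action()]   # selected action = a learning rate
            best = train_fn(lr)
            beta = 0 if best < prev_best else 1   # error decreased -> favorable
            automaton.update(beta)                # FSLA: change state (VSLA: update p)
            prev_best = best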

6 Adaptive Variable Learning Rate
As mentioned in the variable learning rate section, this algorithm has three important parameters: the learning rate increment coefficient, the learning rate decrement coefficient, and the maximum error ratio. These parameters have an important influence on the learning speed of the neural network, for the following reason. It has been found that when error surfaces are far from the form of a quadratic bowl, they usually consist of a large number of flat regions as well as long, narrow, extremely steep regions. With standard steepest descent, the learning rate is held constant throughout training, and the performance of the algorithm is very sensitive to its proper setting. If the learning rate is set too high, the algorithm may oscillate and become unstable; if the learning rate is too small, the algorithm will take too long to converge. It is not practical to determine the optimal setting for the learning rate before training and, in fact, the optimal learning rate changes during the training process as the algorithm moves across the performance surface.

The performance of the steepest descent algorithm can be improved if we allow the learning rate to change during the training process. An adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable. First, the initial network output and error are calculated. At each epoch, new weights and biases are calculated using the current learning rate, and the new outputs and errors are then calculated. If we are at a point in an extremely steep region, the new error exceeds the old error by more than the predefined ratio max_perf_inc (typically 1.04); in this case the new weights and biases are discarded and the learning rate is decreased (typically by multiplying it by lr_dec = 0.7). Otherwise the new weights are kept. If the slope of the error surface is very large, we must decrease the learning rate strongly; if the slope is not very large, we can decrease it gently. So by choosing the learning rate decrement coefficient dynamically, we can pass quickly through regions with high slopes. On the other hand, if we are in a flat region, the new error will be smaller than the old error and the learning rate is increased (typically by multiplying it by lr_inc). If the error surface is very flat, we must increase the learning rate strongly; if it is not very flat, we can increase it gently. So by choosing the increment coefficient dynamically, we can pass through flat regions at high speed. Figures 7 through 9 show how the learning speed, measured in epochs, changes versus the different parameters of the variable learning rate algorithm.

Figure 7: Epochs versus Maximum Error Ratio
Figure 8: Epochs versus Learning Rate Increment Coefficient
Figure 9: Epochs versus Learning Rate Decrement Coefficient

In this paper we introduce a new algorithm named the Adaptive Variable Learning Rate (AVLR). In this algorithm, in contrast to the variable learning rate algorithm, the parameters Max_Err_Ratio, Lr_Dec and Lr_Inc are changed dynamically during the learning process. For adapting the parameters we use the fixed structure Tsetlin automaton. Simulation results over various problems show that the proposed algorithm has the highest learning speed. The AVLR algorithm is shown in Figure 10.

    Initialize automaton parameters
    Initialize neural network weights and biases
    Set training parameters
    for i = 1:time
        Select_VLR_Parameters
        for j = 1:STEP_SIZE
            if SSE < goal, i = i - 1; break; end
            Feedforward;
            Backward;
            Compute new weights and biases;
            Compute new_SSE;
            if new_SSE > SSE * max_perf_inc
                lr = lr * Lr_dec;
                MC = 0;
            elseif new_SSE < SSE
                lr = lr * Lr_inc;
                w = new_w; b = new_b; a = new_a;
            else
                w = new_w; b = new_b; a = new_a;
            end
            e = new_e; SSE = new_SSE;
        end
        if new_MinimumOfSSE >= MinimumOfSSE * COEFFICIENT
            EnvironmentResponse = 1;   % penalize automaton
        else
            EnvironmentResponse = 0;   % reward automaton
        end
        UpdateActiveState_Tsetlin
    end

Figure 10: The Adaptive Variable Learning Rate Algorithm
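The accept/reject rule in the inner loop of Figure 10 is the standard VLR epoch update; a Python rendering is given below for clarity. It is a sketch: evaluate and apply_update stand for the feedforward and weight computation steps, and the three keyword defaults are the typical VLR values quoted in Section 4. In AVLR, these three values are whatever the Tsetlin automaton currently selects.

    def vlr_step(params, lr, mc, sse, evaluate, apply_update,
                 max_perf_inc=1.04, lr_dec=0.7, lr_inc=1.05):
        """One epoch of the variable-learning-rate rule (inner loop of Fig. 10).

        evaluate(params) returns the SSE of the network with those parameters;
        apply_update(params, lr, mc) returns tentative parameters after one
        batch-BP step with learning rate lr and momentum coefficient mc."""
        new_params = apply_update(params, lr, mc)
        new_sse = evaluate(new_params)
        if new_sse > sse * max_perf_inc:
            # error grew by more than the allowed ratio: discard the step,
            # shrink the learning rate and reset the momentum coefficient
            return params, lr * lr_dec, 0.0, sse
        if new_sse < sse:
            # error decreased: keep the step and grow the learning rate
            return new_params, lr * lr_inc, mc, new_sse
        # error grew only slightly: keep the step, leave the rate unchanged
        return new_params, lr, mc, new_sse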

7 Simulation Results
In this section we compare the different methods of adapting the learning rate. We implement these methods on a second order discrete time nonlinear function approximation problem.

7.1 Second order discrete time nonlinear function approximation problem
Consider the second order discrete time nonlinear function

    y_{k+1} = \frac{0.5\, y_k\, y_{k-1}}{1 + y_k^2 + y_{k-1}^2} + 0.35\,(y_k + y_{k-1}) + 1.2\, u_k

We would like to approximate this function with reasonable error using a three-layer neural network. For this purpose we generate inputs randomly between -1 and +1 and feed them to the network; we also choose the initial conditions randomly between -1 and +1. As a result, a set of learning patterns is generated which we use for training the neural network. In the above equation, u_k and y_k are the input and output at instant k, and y_{k-1} and y_{k+1} are the outputs at instants k-1 and k+1. For approximating the function we use a three-layer neural network with 3 neurons in the input layer, 8 neurons in the hidden layer and 1 neuron in the output layer.

We now present the simulation results. Different methods of adapting the learning rate parameter are applied. In all experiments the momentum coefficient is selected as 0.98, and the batch style is used in all methods. In the following we describe each of the methods.

7.2 Standard BP
In this method a fixed small learning rate is selected, the batch style is used for training, and the momentum coefficient is chosen as 0.98.

7.3 Variable Learning Rate (VLR)
The algorithm starts from an initial learning rate of 4 and changes the learning rate dynamically through learning.

7.4 Variable Structure Automata
In this method we use an automaton with variable structure for adapting the learning rate. The automaton action set is a set of candidate learning rates, the largest being 0.94. The action probability increment and decrement coefficients (the step sizes a and b) are set to small constant values. The step size is 5: for a selected action, the BP algorithm is repeated 5 times, and the minimum error of the previous step is compared with the minimum error of the current step. If the error increases, the neural network penalizes the automaton; otherwise it rewards it.

7.5 Fixed Structure Automata
We use the Tsetlin, Krinsky and Krylov automata for adapting the learning rate. Similar to the variable structure automaton, we choose 5 actions (candidate learning rates). The depth of memory and the step size are selected as 3 and 5, respectively.

7.6 Adaptive Variable Learning Rate
In this method, introduced in this paper, we use the Tsetlin automaton for adapting the variable learning rate parameters. We choose candidate values for the increment coefficient, the decrement coefficient and the maximum error ratio; the chosen values form the automaton action set. The step size is again selected as 5.

Figure 12 shows the result of the simulation for the second order discrete time nonlinear function approximation problem. As shown in the figure, AVLR has the maximum speed and standard BP the minimum speed. Using the AVLR learning algorithm, we approximate the second order nonlinear discrete function given above with a three-layer neural network with 3 neurons in the input layer, 8 neurons in the hidden layer and 1 neuron in the output layer. After 8 iterations the neural network converges with an error of 0.898. For training the network, we use random patterns in the interval [-1, +1].

Figure 11: (a) input, (b) target output, (c) actual output
Figure 12: Speed of convergence (SSE versus epochs) of the different methods on the second order discrete time nonlinear function approximation problem: (a) standard BP, (b) VLR, (c) VSLA(5), (d) F_Tsetlin(5,3), (e) F_Krinsky(5,3), (f) F_Krylov(5,3), (g) AVLR
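As a usage note, training patterns for the plant of Section 7.1 can be generated directly from the difference equation. The following is a sketch under our reading of the equation above (the 1.2 input gain in particular); the helper name, the pattern count n and the seed are arbitrary illustrative choices.

    import numpy as np

    def generate_patterns(n=500, seed=0):
        """Build (u_k, y_k, y_{k-1}) -> y_{k+1} training pairs for the plant."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(-1.0, 1.0, n)            # random inputs in [-1, +1]
        y = np.empty(n + 1)
        y[0], y[1] = rng.uniform(-1.0, 1.0, 2)   # random initial conditions
        for k in range(1, n):
            y[k + 1] = (0.5 * y[k] * y[k - 1] / (1.0 + y[k] ** 2 + y[k - 1] ** 2)
                        + 0.35 * (y[k] + y[k - 1]) + 1.2 * u[k])
        # 3 network inputs per pattern, matching the 3-8-1 architecture above
        X = np.column_stack([u[1:n], y[1:n], y[0:n - 1]])  # [u_k, y_k, y_{k-1}]
        T = y[2:n + 1]                                     # target y_{k+1}
        return X, T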

For testing the network, we apply a sinusoidal input. Figure 11 shows the input, target output and actual output waveforms; the target and actual waveforms are shown with stars and solid lines, respectively. As shown in Figure 11, the output of the neural network coincides with the target waveform with acceptable precision.

8 Conclusion
One of the biggest limitations of the BP algorithm is its low rate of convergence. To increase the speed of learning, several algorithms have been proposed which change the learning rate dynamically according to gradient variations. The simulation results show that, because the VLR learning algorithm has great flexibility in choosing the learning rate, it is faster than the learning automata based methods. Because the VLR parameters have an important influence on its performance, in this paper we used learning automata to adjust them. In the proposed algorithm, named the Adaptive Variable Learning Rate (AVLR) algorithm, the VLR parameters are changed dynamically by learning automata according to changes in the error. Simulation results on a second order discrete time nonlinear function approximation problem highlight the merit of the proposed AVLR.

References:
[1] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. I, II and III, Cambridge, MA: MIT Press, 1986 and 1987.
[2] B. Widrow and M. A. Lehr, "30 years of adaptive neural networks: Perceptron, madaline, and backpropagation," Proc. IEEE, Vol. 78, No. 9, pp. 1415-1442, 1990.
[3] D. R. Hush and B. G. Horne, "Progress in supervised neural networks," IEEE Signal Processing Magazine, Vol. 10, No. 1, pp. 8-39, 1993.
[4] R. A. Jacobs, "Increased rates of convergence through learning rate adaptation," Neural Networks, Vol. 1, No. 4, pp. 295-307, 1988.
[5] T. P. Vogl et al., "Accelerating the convergence of the backpropagation method," Biological Cybernetics, Vol. 59, pp. 257-263, 1988.
[6] L. G. Allred and G. E. Kelly, "Supervised learning techniques for backpropagation networks," in Proc. of IJCNN, Vol. 1, San Diego, June 1990.
[7] L. P. Ricotti, S. Ragazzini, and G. Martinelli, "Learning of word stress in a sub-optimal second order backpropagation neural network," in Proc. 1st Int. Conf. Neural Networks, Vol. 1, New York, 1988.
[8] R. L. Watrous, "Learning algorithms for connectionist networks: applied gradient methods of nonlinear optimization," in Proc. 1st Int. Conf. Neural Networks, Vol. 2, 1987.
[9] M. F. Møller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, Vol. 6, No. 4, pp. 525-533, 1993.
[10] A. R. Webb, D. Lowe, and M. D. Bedworth, "A comparison of nonlinear optimization strategies for feed-forward adaptive layered networks," Royal Signals and Radar Establishment, Memorandum No. 4157, July 1988.
[11] M. B. Menhaj and M. R. Meybodi, "Application of learning automata to neural networks," Proc. of Second Annual CSI Computer Conference, CSICC'96, Tehran, Iran, Dec. 1996.
[12] M. B. Menhaj and M. R. Meybodi, "A novel learning scheme for feedforward neural networks," Proc. of ICEE-95, University of Science and Technology, Tehran, 1995.
[13] M. B. Menhaj and M. R. Meybodi, "Flexible sigmoidal type functions for neural networks using game of automata," Proc. of Second Annual CSI Computer Conference, CSICC'96, Tehran, Iran, Dec. 1996.
[14] M. B. Menhaj and M. R. Meybodi, "Using learning automata in backpropagation algorithm with momentum," Technical Report, Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran, 1997.
[15] H. Beigy, M. R. Meybodi, and M. B. Menhaj, "Adaptation of learning rate in backpropagation algorithm using fixed structure learning automata," Proc. of ICEE-98, 1998.
[16] K. S. Narendra and M. A. L. Thathachar, Learning Automata: An Introduction, Prentice Hall, Englewood Cliffs, 1989.
[17] M. R. Meybodi and S. Lakshmivarahan, "Optimality of a general class of learning algorithms," Information Sciences, Vol. 28, 1982.
[18] M. R. Meybodi and S. Lakshmivarahan, "On a class of learning algorithms which have a symmetric behavior under success and failure," Springer-Verlag Lecture Notes in Statistics, 1984.
[19] M. B. Menhaj, Computational Intelligence (Vol. 1: Fundamentals of Neural Networks), Professor Hessabi Publication, 1998.
