Gauss-Newton Based Learning for Fully Recurrent Neural Networks


University of Central Florida, Electronic Theses and Dissertations, Masters Thesis (Open Access)

Gauss-Newton Based Learning For Fully Recurrent Neural Networks, 2004, Aniket Arun Vartak, University of Central Florida

Find similar works at: http://stars.library.ucf.edu/etd
University of Central Florida Libraries: http://library.ucf.edu
Part of the Electrical and Computer Engineering Commons

STARS Citation: Vartak, Aniket Arun, "Gauss-Newton Based Learning For Fully Recurrent Neural Networks" (2004). Electronic Theses and Dissertations. 54. http://stars.library.ucf.edu/etd/54

This Masters Thesis (Open Access) is brought to you for free and open access by STARS. It has been accepted for inclusion in Electronic Theses and Dissertations by an authorized administrator of STARS. For more information, please contact lee.dotson@ucf.edu.

GAUSS-NEWTON BASED LEARNING FOR FULLY RECURRENT NEURAL NETWORKS

by

ANIKET A. VARTAK
B.S. University of Mumbai

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of Electrical and Computer Engineering in the College of Engineering and Computer Science at the University of Central Florida

Orlando, Florida
Summer Term 2004

ABSTRACT

The thesis discusses a novel off-line and on-line learning approach for Fully Recurrent Neural Networks (FRNNs). The most popular algorithm for training FRNNs, the Real Time Recurrent Learning (RTRL) algorithm, employs the gradient descent technique for finding the optimum weight vectors in the recurrent neural network. Within the framework of the research presented, a new off-line and on-line variation of RTRL is presented, which is based on the Gauss-Newton method. The method itself is an approximate Newton's method tailored to the specific optimization problem (non-linear least squares), which aims to speed up the process of FRNN training. The new approach stands as a robust and effective compromise between the original gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). By gathering information over time in order to form Gauss-Newton search vectors, the new learning algorithm, GN-RTRL, is capable of converging faster to a better quality solution than the original algorithm. Experimental results reflect these qualities of GN-RTRL, as well as the fact that GN-RTRL may in practice have lower computational cost in comparison, again, to the original RTRL.

ACKNOWLEDGMENTS

First of all, I would like to thank my academic advisors, Dr. M. Georgiopoulos and Dr. G. Anagnostopoulos. Their approach to the research process has helped me learn many new things required to carry out this kind of research. I also want to extend my warmest gratitude for their being patient with, and interested in, my progress throughout this research effort. I am privileged to have worked with them on this research effort, and I will always be thankful for this. I would also like to extend my sincere thanks to my committee members, Dr. Kasparis and Dr. Haralambous, for their time and effort in reviewing my work and providing me with valuable feedback, so that this document could be as accurate and as thorough as possible. Finally, I would like to thank my parents, Arun Vartak and Vrushali Vartak, and my sister Mrunmayee Vartak, for always being a source of energy and support.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
1: INTRODUCTION
2: ORGANIZATION OF THE THESIS
2.1 Literature Review
2.2 Contribution of the Thesis - Background Information
2.2.1 Minimization Techniques in the Non-linear Least Squares Problem
2.2.1.1 Steepest Descent
2.2.1.2 Newton's Method
2.2.1.3 Gauss-Newton Method
2.2.2 Example Illustrating the Working of the Minimization Schemes
2.2.2.1 Forming the Direction Vector
2.2.2.2 Line Search
2.3 Off-Line RTRL Algorithm
2.3.1 Finding the Direction Vector
2.3.2 Line Searches
2.3.2.1 Bisection
2.3.2.2 Parabolic Interpolation (Brent's Method)
2.4 Off-Line GN-RTRL Algorithm
3: EXPERIMENTS
4: SUMMARY, CONCLUSIONS AND FUTURE RESEARCH
LIST OF REFERENCES

LIST OF FIGURES

Figure 1: Structural differences in the feed forward and recurrent networks
Figure 2: Example demonstrating the power of RNNs to learn temporal tasks
Figure 3: Block diagram of a Fully Recurrent Neural Network (FRNN)
Figure 4: Surface of the function to be minimized
Figure 5: Direction vectors for different methods of minimization of a sum of squares function
Figure 6: State of the solution after 4 backtracking steps
Figure 7: Fully recurrent neural network architecture
Figure 8: Parabolic interpolation (Brent's method)
Figure 9: Santa-Fe time series data set
Figure 10: Boxplot of KFlops for GN-RTRL and GD-RTRL for the Santa-Fe Time Series
Figure 11: The SSE versus TSC results of the FRNN trained with the GD-RTRL

LIST OF TABLES

Table 1: Results of the line search on the found directions
Table 2: Performance - Santa-Fe Time Series
Table 3: Performance - Sunspot Time Series

LIST OF ABBREVIATIONS

J - Jacobian matrix
g(X) - Gradient vector
G(X) - Hessian matrix
p - Search direction vector
O - Number of nodes in the output layer
L - Number of observable nodes
V - Number of input layer nodes
I - Number of input nodes
H - Number of hidden nodes
W - Weight matrix
θ - Column vector of all adaptable parameters
φ - Objective function (sum of squared errors)
e_k(t) - Error between desired and actual outputs of node k at time t
d_k(t) - Desired output of node k at time t
y_k(t) - Output of node k at time t
r - Residual vector

1: INTRODUCTION

Recurrent neural networks (RNN) are very effective in learning temporal data sequences, due to their feedback connections. These feedback connections make the RNN different from feed forward networks. Due to the use of these recurrent connections we add another dimension to our network, which is time. The hidden units, whose output is of no immediate interest, act as dynamic memory units, and information about the previous time instances is used to calculate the information at the present time instant. The following figure illustrates the structural differences between recurrent neural networks and their more popular counterparts, feedforward neural networks.

Figure 1: Structural differences in the feed forward and recurrent networks. Left: feed forward network (input layer, hidden layer, output layer); right: recurrent network (input layer, context layer, hidden layer, output layer).

In the structure of a recurrent neural network, depicted in Figure 1, the context layer acts as a buffer that stores the past information about the data. Due to this structure the recurrent neural network is very effective when the data set is a time varying signal. In the following we present an example, from the experiments in the Real Time Recurrent Learning (RTRL) paper (Williams, Zipser 89B), that demonstrates the power of recurrent neural networks to learn a temporal task.

Figure 2: Example demonstrating the power of RNNs to learn temporal tasks. A recurrent neural network observes the lines a, b, c, d over time and must output 1 only when the lines appear in the required order.

Consider the above system, whose output becomes 1 only when a particular pattern of lines appears in a specific order. As shown in the table, the output is 1 when line b is turned on some time after line a was on; otherwise the output is off. The lines c and d are distractors. This task consists of recognizing events in a specific order (that is, a then b), regardless of the number of intervening events (that is, c or d) in between a sequence of a, then b. Due to the feedback structure of the recurrent neural network, where the inputs consist of the external inputs as well as the delayed outputs, this task is accomplished efficiently. In order for the same task to be accomplished by a feed forward neural network, a tapped delay line is required to store the past pattern information. Due to the finite number of tapped delay lines, such a network will fail to achieve its goal for an arbitrary number of intervening c's and d's in between a sequence of a, then b.

In general, recurrent neural networks can effectively address tasks that contain some sort of time element in them. Examples of these tasks include, but are not limited to, stock market prediction, speech recognition, learning formal grammar sequences, and one-step-ahead prediction. There are different architectures of recurrent neural networks. These architectures vary in the way they feed the outputs of the units (nodes) in the network, at a particular time instance, as inputs to the same units (nodes) in the network, at a future time instance. For example, the Elman recurrent neural network feeds back from each unit in its hidden layer to each other unit in the hidden layer. On the other hand, the fully recurrent neural network (FRNN) has the output of every node in the network connected to all the other nodes in the network (see the figure below for a block diagram of the FRNN).

Figure 3: Block diagram of a Fully Recurrent Neural Network (FRNN). Visible and hidden nodes form the output layer; their outputs pass through unit-delay (z^-1) nodes back to the input layer, which also contains the external input nodes x and a bias node; W is the weight matrix.

The FRNN consists of an input layer and an output layer. The input layer is fully connected to the output layer via adjustable, weighted connections, which represent the system's training parameters (weights). The inputs to the input layer are signals from the external environment or unit-gain, unit-delay feedback connections from the output layer to the input layer. FRNNs accomplish their task by learning a mapping between a set of input sequences and another set of output sequences. In particular, the nodes in the input layer of an FRNN accept input sequences from the outside world, delayed output activations from the output nodes of the network, and a constant-valued node that serves as the bias node for all the output nodes in the network. On the other hand, the nodes in the output layer generate the set of output sequences. Typically, nodes in the output layer are distinguished as output nodes, which produce the desired outputs, and hidden nodes, whose activations are not related to any of the outputs of the task to be learned, but act as a secondary, dynamic memory of the system. The feedback connection from the output layer to the input layer in a recurrent neural network is the mechanism that allows the network to be influenced not only by the currently applied inputs but also by past applied inputs; this feature gives recurrent networks the power to effectively learn relationships between temporal sequences.

Simple feed-forward neural networks are usually trained by the popular back-propagation algorithm (Rumelhart et al. 86). The recurrent neural networks, on the other hand, due to their dynamic processing, require more complex algorithms for learning. An attempt to extend the back-propagation technique to learning in recurrent networks has led to a learning approach termed back-propagation through time (Werbos 90). The implementation involves unfolding the recurrent network in time, so that it grows one layer at each time step. This approach has the disadvantage of requiring a memory size that grows large, especially for arbitrarily long training sequences. Another popular algorithm for learning in FRNNs is the Real Time Recurrent Learning (RTRL) algorithm (Williams & Zipser 89). The main emphasis of the algorithm is to learn the temporal sequences by starting from a network topology that takes into consideration the knowledge about the temporal nature of the problem. RTRL is a gradient descent-based algorithm that is used for adjusting the network's interconnection weights. Williams & Zipser present two variations of RTRL, one for off-line (batch) and another one for on-line (incremental) learning. In both of its forms, RTRL has been successfully used to train FRNNs for a variety of applications, such as speech recognition and controller modeling.

2: ORGANIZATION OF THE THESIS

The thesis is organized as follows. The first section (Section 2.1) is the literature review. It focuses on recurrent neural networks and several approaches for learning the interconnection weights of recurrent neural networks. It also focuses on different variants of RTRL introduced into the literature, as well as successful applications of RTRL-trained neural networks. In Section 2.2 we discuss the contribution of this thesis to the field of recurrent neural network learning, along with the motivation behind our approach for training these kinds of networks. In Section 2.3 the theory behind the original RTRL (Williams & Zipser 89) is thoroughly discussed, including the formation of the direction vector, finding the adaptive learning rate, etc. Furthermore, in Section 2.4 we discuss the derivation of the RTRL algorithm based on the Gauss-Newton direction. In Section 3 we elaborate on the data sets we used for the experimentation and the associated experimental results. In the last section of the thesis (Section 4) concluding remarks are provided.

2.1 Literature Review

There are several algorithms that have been developed for training recurrent neural networks, the principal ones being real-time recurrent learning (RTRL) (Williams & Zipser 89), back-propagation-through-time (BPTT) (Werbos 90) and the extended Kalman filter (Williams 92). All these algorithms make use of the gradient of the error function with respect to the weights to perform the weight updates. The specific method in which the gradient is incorporated in the weight updates distinguishes the different methods. The RTRL algorithm has several advantages, as the gradient information is computed by integrating forward in time as the network runs, as opposed to the BPTT algorithm, where the computation is done by integrating backward in time after the network takes a single step forward.

RTRL is a gradient-based algorithm that is used for the modification of the network's interconnection weights. Williams & Zipser present two variations of RTRL, one for off-line (batch) and one for on-line (incremental) learning. Using both variations, RTRL has been used to train FRNNs in a variety of applications, such as speech recognition and controller modeling, amongst others. Specifically, RTRL has been used for training a robust manufacturing process controller in (Hambaba 00). The problem of speech enhancement and recognition is addressed in (Juang & Lin 01), where RTRL is used to construct adaptive fuzzy filters. RTRL has also been used to train FRNNs for next-symbol prediction in an English text processing application (Perez-Ortiz et al. 01). The RTRL/FRNN combination has also been used in applications of communication systems. (Li et al. 02) use FRNNs, trained by RTRL, for adaptive pre-distortion linearization of RF amplifiers, and these have been shown to attain superior performance in comparison with other well-known pre-distortion models. Furthermore, the authors show significant improvements in the Bit Error Rate (BER) performance as compared with linear techniques in the field of digital, mobile-radio systems. Finally, RTRL has been used to train FRNNs to effectively remove artifacts in EEG (Electroencephalogram) signals (Selvan & Srinivasan 00).

A plethora of RTRL variants or substitutes have been suggested in the literature that aimed to enhance different aspects of the training procedure, such as its computational complexity (especially when used in off-line mode), convergence speed/properties, and its sensitivity to the choice of initial weight values. In (Catfolis 93) a technique is presented for reinitializing RTRL after a specific time interval, so that weight changes depend on fewer past values and weight updates follow the error gradient more precisely. Also, the relationship between some inherent parameters, like the slope of the sigmoidal activation functions and the learning rate, has been taken into account to reduce the degrees of freedom of the associated non-linear optimization problems (Mandic & Chambers 99). In (Schmidhuber 92), the gradient calculation has been decomposed into blocks to produce an algorithm which is an order of magnitude faster than the original RTRL. Additional constraints have been imposed on the synaptic weight matrix to achieve reduced learning time, while network forgetting is reduced (Druaux et al. 98). In (Chang & Mak 98), a conjugate-gradient variation of RTRL has been developed. Other techniques suggested to improve the convergence rate include the use of Normalized RTRL (Mandic & Chambers 00), and the use of genetic algorithms (Mak et al. 98), (Blanco et al. 01). It has been shown (Atiya & Parlos 00) that training approaches like back-propagation through time (BPTT), RTRL and the fast forward propagation approach are based on different computational ways to efficiently obtain the gradient of the error function, and can be generally grouped into five major groups. Furthermore, these five approaches are only five different ways of solving a particular matrix equation. Dynamic sub-grouping of processing elements has also been proposed to reduce RTRL's computational complexity from O(n^4) to O(n^2) (Euliano & Principe 00). Finally, Newton-based approaches for small FRNNs have also been used to exploit Newton's method's quadratic rate of convergence (Coelho 00).

In this thesis, we present a novel, on-line RTRL variation, namely the GN-RTRL. While the original RTRL training procedure utilizes gradient information to guide the search towards the minimum training error (and therefore we are going to refer to it as GD-RTRL), GN-RTRL uses the Gauss-Newton direction vector for the same purpose. The development of a GN-based training algorithm for FRNNs was motivated by the very nature of the optimization problem at hand. The function to be minimized is of non-linear, squared-error type, which makes it a Non-linear Least-Squares (NLS) optimization problem. While gradient descent methods are straightforward and easy to implement for NLS problems, their convergence rate is linear (Nocedal & Wright 99), which typically translates to long training times. The problem is further worsened when the model size (the number of interconnection weights) increases. On the other side of the spectrum, Newton-based methods attain a theoretical quadratic rate of convergence (Dennis & Schnabel 83), which makes them appealing from the perspective of achieving reduced training time. Nevertheless, Newton-based algorithms require second-order derivative information, such as the associated Hessian matrix or approximations to it. This requirement implies a high computational cost, which prohibits the usability of Newton-based learning for moderate or large size FRNN structures. We propose an RTRL scheme based on the Gauss-Newton method as a compromise between gradient descent and Newton's methods. The GN-RTRL features a super-linear convergence profile (faster than GD-RTRL) and lower computational cost compared to second-order algorithms, which makes it practical for training small to moderate size FRNNs.

2.2 Contribution of the Thesis - Background Information

The thesis discusses a novel approach to learning in fully recurrent neural networks. The most popular algorithm for learning, the RTRL, employs the gradient descent technique of minimization for finding the optimum weight vectors in the recurrent neural network. By using an approximation to Newton's method, i.e. the Gauss-Newton method, which is suitable for non-linear least squares minimization problems, we can speed up the process of learning the recurrent weights.

2.2.1 Minimization Techniques in the Non-linear Least Squares Problem

If an objective function is expressed as a sum of squares of other non-linear functions,

F(X) = Σ_{i=1}^{m} f_i(X)^2    (1)

then it is possible to devise some efficient methods to minimize it. If we gather the functions f_i in vector form as

f = [f_1, f_2, ..., f_m]^T    (2)

then F(X) = f(X)^T f(X).

To obtain the gradient vector of the objective function, the first partial derivative with respect to x_j can be written as

g_j(X) = 2 Σ_{i=1}^{m} f_i ∂f_i/∂x_j    (3)

Now, if we define the Jacobian matrix as the m x n matrix with entries

J_{ij} = ∂f_i/∂x_j    (4)

then the gradient vector can be written as

g(X) = 2 J(X)^T f(X)    (5)

Now, differentiating Equation (3) with respect to x_k gives the (j,k)-element of the Hessian matrix:

G_{jk}(X) = 2 Σ_{i=1}^{m} ( ∂f_i/∂x_j ∂f_i/∂x_k + f_i ∂^2 f_i/∂x_j ∂x_k )    (6)

If we denote by T_i(X) the Hessian matrix of the function f_i,

T_i(X) = ∇^2 f_i(X)    (7)

then the complete Hessian matrix can be written as

G(X) = 2 ( J(X)^T J(X) + S(X) )    (8)

where

S(X) = Σ_{i=1}^{m} f_i(X) T_i(X)    (9)
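To make the structure of these quantities concrete, the following NumPy sketch (our own illustration, not code from the thesis) assembles F, g, S and G of Equations (1)-(9) for a toy problem with m = 2 residuals and n = 2 variables; the residual functions and the evaluation point are assumptions made purely for the demonstration.

```python
import numpy as np

def f(X):
    # An arbitrarily chosen residual vector f(X), m = 2 components.
    return np.array([X[0]**2 - X[1], 1.0 - X[0]])

def jacobian(X):
    # J[i, j] = d f_i / d x_j, Equation (4).
    return np.array([[2.0 * X[0], -1.0],
                     [-1.0,        0.0]])

def residual_hessians(X):
    # T_i(X) = Hessian of the i-th residual, Equation (7).
    T1 = np.array([[2.0, 0.0], [0.0, 0.0]])
    T2 = np.zeros((2, 2))
    return [T1, T2]

X = np.array([0.5, 0.5])
fX, J = f(X), jacobian(X)
F = fX @ fX                                                    # Equation (1)
g = 2.0 * J.T @ fX                                             # Equation (5)
S = sum(fi * Ti for fi, Ti in zip(fX, residual_hessians(X)))   # Equation (9)
G = 2.0 * (J.T @ J + S)                                        # Equation (8)
print(F, g, G, sep="\n")
```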

This concludes our discussion of the basics of the formation of the various matrices in the case of least squares problems, needed for the further developments.

2.2.1.1 Steepest Descent

Now, to find the minimum of a function F(X) iteratively, the function should decrease in value from iteration to iteration; in other words,

F(X_{k+1}) < F(X_k)    (10)

Now we need to choose a direction, so that we can move downhill. Let us consider the first-order Taylor series expansion about X_k:

F(X_{k+1}) = F(X_k + p) ≈ F(X_k) + g_k^T p    (11)

where g_k is the gradient evaluated at the old guess X_k:

g_k = ∇F(X) |_{X = X_k}    (12)

It is evident that the steepest descent would occur if we choose

p = -g_k    (13)

The original RTRL algorithm uses this direction for minimizing the objective function.
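A minimal sketch of the resulting iteration (our illustration, assuming the sum-of-squares function used in the example of Section 2.2.2 and a fixed rather than line-searched learning rate):

```python
import numpy as np

def F(X):       # objective: a sum of squared residuals
    return np.sin(X[0])**2 + np.sin(X[1])**2

def grad(X):    # g(X) = 2 J^T f, written out analytically for this F
    return np.array([2*np.sin(X[0])*np.cos(X[0]),
                     2*np.sin(X[1])*np.cos(X[1])])

X, lam = np.array([0.5, 0.5]), 0.1   # lam: assumed fixed learning rate
for k in range(100):
    p = -grad(X)                      # Equation (13): steepest-descent direction
    X = X + lam * p                   # move downhill, Equations (10)-(11)
print(X, F(X))                        # converges towards the minimizer [0, 0]
```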

2.2.1.2 Newton's Method

The steepest descent algorithm considers the first-order Taylor series expansion. The next method we will discuss is based on the second-order Taylor series expansion:

F(X_{k+1}) = F(X_k + p) ≈ F(X_k) + g_k^T p + (1/2) p^T G_k p    (14)

Taking the gradient of this quadratic function with respect to p and setting it equal to zero (for X_{k+1} to be a minimum), we get g_k + G_k p = 0, i.e.

p = -G_k^{-1} g_k    (15)

If we use Equations (5) and (8) to find the gradient g and the Hessian G at the old guess X_k, and substitute them into Equation (15), we get

p = -[J^T J + S]^{-1} J^T f    (16)

This is Newton's method for the specialized, non-linear least squares objective function. Newton's method involves the computation of the S term, which involves the evaluation of mn(n+1)/2 terms for an objective function comprising a sum of squares of m functions in n-dimensional space.

2.2.1.3 Gauss-Newton Method

By neglecting the S term, Newton's equation (16) becomes

p = -[J^T J]^{-1} J^T f    (17)

This equation defines the Gauss-Newton method. It is clear from (8), (9) & (16) that if f_i(X) → 0 as X → X*, then also S(X) → 0, and then Gauss-Newton tends to Newton's method as the minimum is approached, which has favorable consequences, such as quadratic convergence. It has been shown by (Meyer 70) that the convergence constant K can be written as

K = || [J(X*)^T J(X*)]^{-1} S(X*) ||    (18)

So, the convergence is superlinear if S(X*) = 0; otherwise it is first-order, and slower the larger S(X*) is. Thus, it can be concluded that the Gauss-Newton approximation to Newton's method would not perform as expected (i.e. with super-linear convergence) if the size of the term S(X) is larger than the eigenvalues of J^T J. Also, at X* the vector J^T f, being proportional to the gradient vector, must be zero. Therefore, if f(X*) is not zero, then J(X*) is rank deficient and J(X*)^T J(X*) is singular, so Gauss-Newton cannot be expected to perform satisfactorily in large residual problems, where the residuals of the errors are going to be comparatively larger.

The original RTRL algorithm uses the steepest descent of the gradient for minimization, which has a linear rate of convergence. Our approach is to use the Gauss-Newton method for the minimization, as the objective function to be minimized is a sum of squares error function, and we assume that the minimization method is already inside the close neighborhood of a local minimum, where the residuals will be small, so that it can be considered a small residual problem. On the other hand, if we are not close to the local minimum, Gauss-Newton will be extremely slow and might produce inaccurate results in some cases.

2.2.2 Example Illustrating the Working of the Minimization Schemes

Let us take as an example the minimization of the function F(X) = sin^2(X_1) + sin^2(X_2) from an initial guess of X_0 = [0.5, 0.5]^T. Figure 4 illustrates the surface of the function.

Figure 4: Surface of the function to be minimized

2.2.2.1 Forming the Direction Vector

From Equation (2) we can write

f(X) = [sin X_1, sin X_2]^T

The Jacobian can be written as

J(X) = diag(cos X_1, cos X_2)

Now, from Equation (5), we can write the gradient as

g(X) = 2 J(X)^T f(X) = 2 [sin X_1 cos X_1, sin X_2 cos X_2]^T

The Hessian is G(X) = 2 (J(X)^T J(X) + S(X)). The S(X) term can be written as

S(X) = Σ_{i=1}^{2} f_i(X) T_i(X) = diag(-sin^2 X_1, -sin^2 X_2)

so the Hessian becomes

G(X) = 2 diag(cos^2 X_1 - sin^2 X_1, cos^2 X_2 - sin^2 X_2)

Now, from the initial guess X_0 = [0.5, 0.5]^T, we get the direction for gradient descent as

p_GD = -g(X_0) = -[0.8415, 0.8415]^T

the Newton direction as

p_N = -G(X_0)^{-1} g(X_0) = -[0.7787, 0.7787]^T

and the Gauss-Newton direction as

p_GN = -[J(X_0)^T J(X_0)]^{-1} J(X_0)^T f(X_0) = -[0.5463, 0.5463]^T

These directions are shown in the figure below.

Figure 5: Direction vectors for different methods of minimization of a sum of squares function
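As a quick numerical cross-check of the three directions plotted in Figure 5 (our own sketch, not part of the thesis; the values in the comments are those derived above):

```python
import numpy as np

X = np.array([0.5, 0.5])
f = np.sin(X)                        # residual vector f(X)
J = np.diag(np.cos(X))               # Jacobian of f
g = 2 * J.T @ f                      # gradient, Equation (5)
S = np.diag(-np.sin(X)**2)           # S(X) = sum_i f_i T_i
G = 2 * (J.T @ J + S)                # full Hessian, Equation (8)

p_gd = -g                                   # steepest-descent direction
p_n  = -np.linalg.solve(G, g)               # Newton direction
p_gn = -np.linalg.solve(J.T @ J, J.T @ f)   # Gauss-Newton direction

print(p_gd)   # approx [-0.8415, -0.8415]
print(p_n)    # approx [-0.7787, -0.7787]
print(p_gn)   # approx [-0.5463, -0.5463]
```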

2.2.2.2 Line Search

Once we find a minimizing direction, the next step is to find a learning rate (step length) along the search direction that minimizes the function along the found direction. This process is called a line search, and is nothing else but a uni-dimensional minimization procedure. There are several methods to perform a line search, the simplest being the bisection method. In this method we start with a bracket of [a, b] = [0, 1], and evaluate the function in the found direction at the bracket end points. The function is also evaluated at the mid point c = (a + b)/2. Then the point with the maximum function value is excluded and a new bracket is formed. This iterative process is continued until the minimum, within an acceptable range, is found.

Figure 6: State of the solution after 4 backtracking steps

The bisection performed on the 3 directions found yielded the following results:

Table 1: Results of the line search on the found directions (F(X) after 1 to 4 minimization steps, for the gradient, Newton and Gauss-Newton methods)

It can be seen from Table 1 that, for an equal number of minimization steps, the quality of the solution produced by the Gauss-Newton method is superior to the one provided by the steepest gradient method. On the other hand, when comparing the Gauss-Newton and Newton's methods, both solutions are comparable in accuracy, while we have saved the computation of mn(n+1)/2 = 6 terms by omitting the S(X) term from the Gauss-Newton calculation.
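A compact sketch of the bracketing line search described above, applied to the gradient-descent direction of the example; the function name, the handling of the excluded point and the tolerance are our assumptions, not the thesis implementation:

```python
import numpy as np

def line_search_bisection(phi, a=0.0, b=1.0, tol=1e-6):
    """Shrink the bracket [a, b] around a minimizer of phi by halving."""
    while b - a > tol:
        c = 0.5 * (a + b)
        vals = {a: phi(a), c: phi(c), b: phi(b)}
        worst = max(vals, key=vals.get)   # the point to be excluded
        if worst == a:
            a = c                          # keep [c, b]
        elif worst == b:
            b = c                          # keep [a, c]
        else:
            # mid point is worst: keep the half with the better end point
            if vals[a] < vals[b]:
                b = c
            else:
                a = c
    return 0.5 * (a + b)

# F(X0 + lam * p_GD) along the gradient-descent direction of the example:
phi = lambda lam: 2.0 * np.sin(0.5 - 0.8415 * lam) ** 2
print(line_search_bisection(phi))   # approx 0.594, where the argument of sin vanishes
```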

2.3 Off-Line RTRL Algorithm

In this section we present an algorithm for off-line training of FRNNs. The algorithm, which appears in (Williams & Zipser 89), is based on the Gradient Descent method, where the total Sum of Squared Errors (SSE) is used as the objective function to be minimized.

Figure 7: Fully recurrent neural network architecture. The output layer contains the output (visible) nodes and the hidden nodes; their activations are fed back through unit-delay (z^-1) nodes into the input layer, which also holds the I external input nodes and a bias node.

The above figure shows the notation used for ease of understanding the derivations ahead. The output layer consists of O nodes, of which L are the observable nodes and H are the hidden nodes. The input layer has a total of V nodes, comprising the I input nodes, the bias node and the O unit-delay nodes. The indexing shown inside the nodes is followed for ease of writing the equations. The weights are denoted as w_{k,l}, with k the index of the "to" node and l the index of the "from" node. Note that the recurrent connections do not have any weights associated with them; they just feed back the current output to the delay element. The outputs of the output nodes have some initial value at time t = -1, which is denoted by y_k(-1) for k = 1..O.

The operations at a node in the output layer are depicted in the following figure:

Figure 8: Operations at the output node

All the weighted inputs converging to the node are summed together, and the output of the node is the output of a nonlinear function applied to that sum. Typical choices for this nonlinear function include the logistic and hyperbolic tangent functions, as shown below:

f_tanh(s) = tanh(s) = (1 - e^{-2s}) / (1 + e^{-2s})    (19)

f_log(s) = 1 / (1 + e^{-s}) = (1 + tanh(s/2)) / 2    (20)

In the performance phase of the network, let T be the number of test patterns, with the first pattern being presented at time t = 0. Let y_k(t), k = 1..L, be the observed outputs and y_k(t), k = L+1..O, be the H unobserved outputs of the context units. The outputs of the output units can be written as

y_k(t) = f(s_k(t))    (21)

where

s_k(t) = Σ_{l=1}^{I} w_{k,l} x_l(t) + w_{k,I+1} + Σ_{l=1}^{O} w_{k,l+I+1} y_l(t-1)    (22)
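The forward pass of Equations (21)-(22) can be sketched in a few lines of NumPy. The layer sizes, the random weights and the choice of tanh are assumptions for illustration only:

```python
import numpy as np

I, O = 3, 6                 # I external inputs; O output-layer nodes (visible + hidden)
V = I + 1 + O               # input-layer size: inputs + bias + delayed outputs
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((O, V))   # weight matrix, one row per output node

def forward(W, x_t, y_prev):
    """One time step: concatenate inputs, bias and delayed outputs, then squash."""
    u = np.concatenate([x_t, [1.0], y_prev])   # the V input-layer values
    s = W @ u                                  # Equation (22)
    return np.tanh(s)                          # Equation (21), tanh activation

y = np.zeros(O)             # y(-1): initial output state
for t in range(5):
    x_t = rng.standard_normal(I)
    y = forward(W, x_t, y)
print(y)
```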

Now, learning in the RNN architecture can be performed by minimizing a suitable objective function φ. Furthermore, the minimization can be achieved by finding appropriate values for all the free parameters of the network. While up to this point we have considered only the weight matrix W as a network parameter, the initial values y_k(-1), k = 1..O, are arbitrary and, therefore, we can consider them too as network parameters that can be tuned during the training process. These values can be summarized in the output state vector y(-1). Moreover, both the output state vector and the weight matrix can be summarized in a single column vector θ of OV + O = (L+H)(1+I+L+H) + (L+H) elements:

θ = [θ_1, θ_2, ..., θ_{OV+O}]^T = [vec[W]^T, y(-1)^T]^T    (23)

where vec[.] is used to indicate that the weight matrix W is arranged in a single column vector. The relationship between the θ_r and the w_{k,l}, y_k(-1) parameters is given below:

θ_r = w_{k,l}, with k = ceil(r/V) and l = r - (k-1)V, for r = 1..OV
θ_r = y_{r-OV}(-1), for r = OV+1..OV+O    (24)

To understand the above mapping better, let us consider an example, where we have a network with 3 input nodes (I = 3), 4 hidden nodes (H = 4) and 2 output nodes (L = 2). The weight matrix will have dimensionality 6x10 and y(-1) will have dimensionality 1x6. Now the θ vector can be written as follows:

θ = [w_{1,1} ... w_{1,10}, w_{2,1} ... w_{6,10}, y_1(-1) ... y_6(-1)]^T = [θ_1 ... θ_60, θ_61 ... θ_66]^T
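A short sketch of the packing and unpacking of Equations (23)-(24), using the dimensions of the example above; the helper name unpack is ours:

```python
import numpy as np

I, L, H = 3, 2, 4
O = L + H                   # 6 output-layer nodes
V = I + 1 + O               # 10 input-layer nodes
W = np.arange(O * V, dtype=float).reshape(O, V)
y_init = np.zeros(O)        # y(-1)

theta = np.concatenate([W.reshape(-1), y_init])   # vec(W) row by row, then y(-1)
assert theta.size == O * V + O == 66

def unpack(r):
    """Inverse mapping of Equation (24); r is 1-based as in the text."""
    if r <= O * V:
        k, l = (r - 1) // V + 1, (r - 1) % V + 1
        return ("w", k, l)
    return ("y(-1)", r - O * V)

print(unpack(1), unpack(60), unpack(61), unpack(66))
```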

2.3.1 Finding the Direction Vector

Continuing with our presentation, our objective function is the Sum of Squared Errors (SSE), which for the RNN is defined as

SSE = φ(θ) = Σ_{t=0}^{T-1} φ(θ, t)    (25)

where we define the instantaneous SSE φ(θ, t) as

φ(θ, t) = Σ_k e_k(t)^2    (26)

From Equation (25) we can see that the SSE is proportional to the average of the instantaneous SSE over the time period 0..T-1. The instantaneous error e_k(t) for output neuron k depends implicitly on θ and is given as

e_k(t) = d_k(t) - y_k(t) if d_k(t) is specified, and 0 otherwise, for k = 1..O    (27)

In other words, e_k(t) is zero if there is no specific desired response d_k(t) for output k at time instance t. We can combine Equations (25) and (26) into

SSE = φ(θ) = Σ_{t=0}^{T-1} Σ_k e_k(t)^2    (28)

In this section we are going to apply Gradient Descent to minimize the SSE. This is the off-line learning procedure, since we consider the errors over the period of all the time instances t simultaneously. The gradient is given as

∇_θ φ(θ) = [∇_W φ^T, ∇_{y(-1)} φ^T]^T    (29)

where we define the following column vectors:

∇_W φ = vec[ ∂φ/∂w_{k,l} ], k = 1..O, l = 1..V    (30)

∇_{y(-1)} φ = [ ∂φ/∂y_k(-1) ], k = 1..O    (31)

In order to specify the gradient completely, we need to find expressions for the partial derivatives in Equations (30) and (31). From Equation (26), the partial derivative with respect to the parameter θ_r (which might be a specific w_{k,l} or a specific y_k(-1)) is given as

∂φ(θ,t)/∂θ_r = 2 Σ_k e_k(t) ∂e_k(t)/∂θ_r, r = 1..OV+O    (32)

Utilizing Equation (27), Equation (32) can be rewritten as

∂φ(θ,t)/∂θ_r = -2 Σ_k e_k(t) ∂y_k(t)/∂θ_r, r = 1..OV+O    (33)

In order to proceed further we need expressions for the partial derivatives inside the summations. These can be calculated via Equation (21):

∂y_k(t)/∂θ_r = f'(s_k(t)) ∂s_k(t)/∂θ_r, r = 1..OV+O    (34)

We can express the derivative of the activation functions in Equations (19) and (20) as

f'_tanh(s_k(t)) = 1 - y_k(t)^2    (35)

f'_log(s_k(t)) = y_k(t) [1 - y_k(t)]    (36)

The form of the derivatives in Equations (35) and (36) is convenient from an implementation perspective, since they require only a few floating-point operations. At this point we still need to find expressions for the partial derivatives in Equation (34). For clarity, we now need to differentiate between θ_r being a specific w_{i,j} or a specific y_j(-1). Thus, Equation (34) becomes

∂y_k(t)/∂w_{i,j} = f'(s_k(t)) ∂s_k(t)/∂w_{i,j}, k, i = 1..O, j = 1..V    (37)

∂y_k(t)/∂y_j(-1) = f'(s_k(t)) ∂s_k(t)/∂y_j(-1), k, j = 1..O    (38)

The above partial derivatives can be calculated with the help of Equation (22), so that Equations (37) and (38) become

∂y_k(t)/∂w_{i,j} = f'(s_k(t)) [ δ_{k,i} x̃_j(t) + Σ_{l=1}^{O} w_{k,l+I+1} ∂y_l(t-1)/∂w_{i,j} ]    (39)

∂y_k(t)/∂y_j(-1) = f'(s_k(t)) Σ_{l=1}^{O} w_{k,l+I+1} ∂y_l(t-1)/∂y_j(-1)    (40)

where x̃(t) = [x_1(t) ... x_I(t), 1, y_1(t-1) ... y_O(t-1)]^T denotes the vector of input-layer values at time t. If we proceed to define the training state quantities

p_{k,i,j}(t) = ∂y_k(t)/∂w_{i,j}, k, i = 1..O, j = 1..V    (41)

q_{k,j}(t) = ∂y_k(t)/∂y_j(-1), k, j = 1..O    (42)

then we can rewrite Equations (39) and (40) as

p_{k,i,j}(t) = f'(s_k(t)) [ δ_{k,i} x̃_j(t) + Σ_{l=1}^{O} w_{k,l+I+1} p_{l,i,j}(t-1) ]    (43)

q_{k,j}(t) = f'(s_k(t)) Σ_{l=1}^{O} w_{k,l+I+1} q_{l,j}(t-1)    (44)

where δ_{k,i} is the Kronecker delta symbol. Due to their definition in Equations (41) and (42), the training state quantities are initialized to

p_{k,i,j}(-1) = 0    (45)

q_{k,j}(-1) = δ_{k,j}    (46)

By introducing the training state quantities, the calculation of the gradient components in Equation (33) is performed as follows:

∂φ/∂w_{i,j} = -2 Σ_t Σ_k e_k(t) p_{k,i,j}(t), i = 1..O, j = 1..V    (47)

∂φ/∂y_j(-1) = -2 Σ_t Σ_k e_k(t) q_{k,j}(t), j = 1..O    (48)

Up to this point we have managed to calculate the gradient vector. Training using the Gradient Descent method produces parameter updates that are of the form

W ← W - λ ∇_W φ, y(-1) ← y(-1) - λ ∇_{y(-1)} φ    (49)

where λ > 0 is the learning rate. The update equation can thus be written as

W_new = W_old + λ p    (50)

where p is an appropriate direction vector and λ is the step length in the direction of p (also referred to as the learning rate). Our direction vector here is p = p_GD = -∇_W φ, so

W_new = W_old - λ ∇_W φ    (51)

Before training commences it is important to normalize (adjust the range of values of) the training patterns.
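Putting Equations (43), (45), (47) and (51) together, a minimal off-line RTRL sketch for the weight part of θ might look as follows. This is an illustration under assumed network sizes and synthetic data, not the thesis implementation; the q-sensitivities for y(-1) are omitted for brevity:

```python
import numpy as np

I, L, H, T = 2, 1, 2, 20
O, V = L + H, I + 1 + (L + H)
rng = np.random.default_rng(1)
W = 0.2 * rng.standard_normal((O, V))
xs = rng.standard_normal((T, I))          # input sequence (assumed)
ds = np.sin(np.arange(T) / 3.0)[:, None]  # desired outputs for the L visible nodes

y = np.zeros(O)                 # y(-1)
p = np.zeros((O, O, V))         # p_{k,i,j}(-1) = 0, Equation (45)
grad_W = np.zeros((O, V))
Wrec = W[:, I + 1:]             # weights on the delayed outputs

for t in range(T):
    u = np.concatenate([xs[t], [1.0], y])      # input-layer vector x~(t)
    s = W @ u
    y_new = np.tanh(s)
    fprime = 1.0 - y_new**2                    # Equation (35)
    # Equation (43): p(t) = f'(s_k) [ delta_{k,i} x~_j(t) + sum_l w_{k,l+I+1} p_l(t-1) ]
    net = np.einsum('kl,lij->kij', Wrec, p)
    net[np.arange(O), np.arange(O), :] += u
    p = fprime[:, None, None] * net
    e = np.zeros(O)
    e[:L] = ds[t] - y_new[:L]                  # errors only on visible nodes
    grad_W += -2.0 * np.einsum('k,kij->ij', e, p)   # Equation (47)
    y = y_new

W -= 0.01 * grad_W              # one gradient-descent update, Equation (51)
```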

2.3.2 Line Searches

After finding the direction vector we need to find the learning rate adaptively, as a constant learning rate would not necessarily minimize the sum of the squared errors. So, for the given direction vector, we find the learning rate. The problem of finding the learning rate that minimizes the SSE becomes a one-dimensional minimization problem, and this problem can be solved by several methods.

2.3.2.1 Bisection

One of the simplest methods is backtracking. In backtracking we start with a bracket of learning rates, such as [0, maximum learning rate]. The function to be minimized is evaluated at the bracket mid point, the point with the maximum value of the function is eliminated, and this process is continued until we reach the minimum of the function, with the points separated by a distance of the machine's floating point precision. The bracket of learning rates, in our case, would always be bounded by [0, maximum learning rate], so our initial guess bracketing the minimum would always be this bracket.

2.3.2.2 Parabolic Interpolation (Brent's Method)

Assuming that our function is parabolic near the minimum, a parabola fitted through any three points would take us to the minimum in a single step, or at least very near to it.

Figure 8: Parabolic interpolation (Brent's method). A parabola ψ(x) is fitted through the bracket points A_k, B_k and the interior point C_k; its minimum x* provides the next estimate.

If we denote the equation of the parabola by

ψ(x) = a x^2 + b x + c    (52)

then, to fit a parabola through our function, we can use the bracket end points to find the coefficients a, b and c. In fact, we do not need the complete form of ψ(x), as only the position of the minimum of the parabola, x*, is required. Differentiating Equation (52) with respect to x and equating to 0, we get

ψ'(x*) = 2 a x* + b = 0, so x* = -b / (2a)    (53)

so only the ratio of b to a is needed. Now we have the bracket as [A, B] and another internal point C such that ψ(C) < ψ(A) and ψ(C) < ψ(B). We solve the three simultaneous linear equations

a A^2 + b A + c = ψ(A), a B^2 + b B + c = ψ(B), a C^2 + b C + c = ψ(C)    (54)

The x* is then obtained as

x* = C - (1/2) [ (C-A)^2 (ψ(C)-ψ(B)) - (C-B)^2 (ψ(C)-ψ(A)) ] / [ (C-A) (ψ(C)-ψ(B)) - (C-B) (ψ(C)-ψ(A)) ]    (55)

As shown in Figure 8, A_{k+1} becomes C_k, and C_{k+1} becomes the newly found point x* for the next iteration k+1.
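A one-step sketch of Equations (52)-(55); the test function is an assumption chosen so that the parabolic model is exact:

```python
def parabolic_step(psi, A, B, C):
    """Minimizer x* of the parabola through (A, psi(A)), (B, psi(B)), (C, psi(C))."""
    pa, pb, pc = psi(A), psi(B), psi(C)
    num = (C - A)**2 * (pc - pb) - (C - B)**2 * (pc - pa)
    den = (C - A) * (pc - pb) - (C - B) * (pc - pa)
    return C - 0.5 * num / den     # Equation (55)

# Example: psi has its minimum at x = 0.3; one step from a rough bracket lands on it.
psi = lambda x: (x - 0.3)**2 + 1.0
print(parabolic_step(psi, 0.0, 1.0, 0.5))   # -> 0.3 exactly, since psi is a parabola
```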

2.4 Off-Line GN-RTRL Algorithm

In this section we present our new version of the RTRL algorithm, obtained by using the Gauss-Newton technique for minimizing the least squares objective function. The Gauss-Newton method replaces the direction used in the original approach described above with the Gauss-Newton direction. The learning rate is adaptively calculated in the same way, using the Brent method based on the parabolic interpolation technique described in the last section. The function to be minimized is the instantaneous SSE of Equation (26), with respect to the adaptable parameters, i.e. the weights. Let us first define

r = [e_1(t-T+1) ... e_L(t-T+1) ... e_1(t) ... e_L(t)]^T

where r is the vector containing the errors for the different visible outputs across T consecutive time instances (the current time instance and the T-1 previous time instances); so r is a vector in R^{LT}. Also, if θ̂ = vec[W], then the (m, n) entry of the Jacobian matrix can be written as

J_{m,n} = ∂r_m / ∂θ̂_n    (56)

The gradient vector of φ(θ, t) is then defined as

∇φ(θ, t) = 2 J^T r    (57)

and the Hessian of φ(θ, t) as

∇^2 φ(θ, t) = 2 ( J^T J + Σ_i r_i ∇^2 r_i )    (58)

Note here that, after calculating the gradient g of the objective function, we get the first part of Equation (58) without any further evaluations. The Gauss-Newton method assumes that the minimization problem at hand is of small residual and, therefore, ignores the second part of the Hessian calculation. Let g denote the gradient vector of φ(θ, t), and G the Hessian of φ(θ, t); they are defined in the following equations:

g = ∇_θ φ(θ, t)    (59)

G = ∇^2_θ φ(θ, t)    (60)

where ∇φ(θ, t) and ∇^2 φ(θ, t) are defined in Equations (57) & (58).

Now, a Taylor series gives the expansion of the gradient of the quadratic model of the function at θ + p as

g(θ + p) ≈ g + G p    (61)

If we take the gradient of this quadratic function with respect to p and set it to zero, we get g + G p = 0, or

p = -G^{-1} g    (62)

Now, from (57), (58) & (62),

p = -[ J^T J + Σ_i r_i ∇^2 r_i ]^{-1} J^T r    (63)

The main problem in applying this method is the computation of the second (S) term. This calculation involves the evaluation of mn(n+1)/2 terms. This difficulty leads us to the Gauss-Newton modification of Newton's method. If we ignore the S term in Equation (63), we get the Gauss-Newton equation for the optimal direction vector, defined as

p_GN = -[ J^T J ]^{-1} J^T r    (64)

The approximation in Equation (64) can be viewed as a low computational cost approximation to the Hessian matrix associated with the SSE minimization, which is sufficiently accurate for small-residual problems (Scales 85). In our case, the SSE φ(θ, t) is to be minimized with respect to the adaptable weight vector θ. The residual vector r can be written as

r = [d_1(t-T+1) - y_1(t-T+1) ... d_L(t-T+1) - y_L(t-T+1) ... d_1(t) - y_1(t) ... d_L(t) - y_L(t)]^T    (65)

43 r y so, p. J ; m, n where: m / L, m L, n / V, n V 6 θ θ r r hs equaon lns he Jacoban marx o he oupu sensves p,. hs equaon also underlnes he fac ha GN-RRL s closely relaed o GD-RRL, snce boh approaches ulze oupu sensvy nformaon for he compuaon of her correspondng search drecons. Neverheless, GN-RRL promses shorer ranng me whou he need of compung second order dervaves. In order o solve he equaon o ge he Gauss-Newon drecon, we avod he formaon of he erm [ J J ] J drecly as s prone o consderable, numercal errors durng s compuaon. Insead we solve: J Jp GN J r 63 usng a numercally sable approach, namely he Sngular Value Decomposon SVD, as presened n Golub & Van Loan, 996. We decompose he Jacoban marx so ha, S S J V [ ] V SV, 64 Where, s m x m orhogonal; conans frs n columns of, he las m-n columns; V n n x n orhogonal; S s n x n dagonal, wh dagonal elemens σ σ Λ σ. for a m x n Jacoban marx. he n > marces, V should no be confused wh he szes of he npu and oupu layers of he FRNN. 34

Now, using this decomposition, we can write

p_GN = -V S^{-1} U_1^T r    (69)

In terms of the left and right singular vectors u_i, v_i and the singular values σ_i of the Jacobian matrix, the direction vector can be written as

p_GN = -Σ_i (u_i^T r / σ_i) v_i    (70)

In the above summation, terms that correspond to relatively small singular values are omitted, to increase robustness in the direction vector calculations. This phenomenon may occur when J is close to being rank deficient, in which case the unaltered search vector may not represent a descent direction. Another alternative would be to use the negative gradient (as in GD-RTRL) for that particular time step, until J becomes full rank again.
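A sketch of the SVD-based solve of Equations (67)-(70), with the truncation of small singular values; the matrix sizes, the tolerance and the function name are our assumptions, not the thesis implementation:

```python
import numpy as np

def gauss_newton_direction(J, r, rel_tol=1e-8):
    U, sigma, Vt = np.linalg.svd(J, full_matrices=False)   # J = U_1 S V^T
    keep = sigma > rel_tol * sigma[0]     # omit relatively small singular values
    coeffs = (U[:, keep].T @ r) / sigma[keep]
    return -Vt[keep].T @ coeffs           # Equation (70): -sum_i (u_i^T r / s_i) v_i

rng = np.random.default_rng(2)
J = rng.standard_normal((40, 8))          # L*T rows, one column per weight (assumed)
r = rng.standard_normal(40)               # residual window (assumed)
p_gn = gauss_newton_direction(J, r)
# sanity check against the normal equations (valid here, since J is full rank)
print(np.allclose(J.T @ J @ p_gn, -J.T @ r))
```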

Some of the advantages of using the Gauss-Newton method for the minimization in the least squares solutions problem can be listed as follows. First, we ignore the second order Hessian term from Newton's method, which saves us the trouble of computing the individual Hessians. Secondly, our minimization task is a small residual problem, as the residual vector r is the error between the desired output and the actual output, and it tends to zero as the learning progresses. In small residual problems the first term in Equation (58) is much more significant than the second term, and the Gauss-Newton method gives performance similar to Newton's method, with reduced computational effort (Nocedal & Wright 99).

The original RTRL algorithm comes in 2 versions: batch mode (off-line training) and continuous mode (on-line learning). In continuous mode the weights are updated as the network is running. The advantage of this approach is that one does not have to specify any epoch boundaries, which leads to a conceptually and implementation-wise simple algorithm. The disadvantage is that the algorithm no longer follows the exact negative gradient of the total error along the trajectory. In the case of GN-RTRL, for the super-linear convergence the Jacobian matrix must be of full column rank. In an effort to fulfill this last requirement, GN-RTRL must compute its direction vector based on at least V/L time instances, which contrasts with GD-RTRL, which needs only current time instance information. Although this very fact might be considered a disadvantage of GN-RTRL from a computational or memory-storage perspective, utilizing the information of T instead of just one time instance to perform on-line learning may cause a smoothing/averaging effect in time and allow GN-RTRL to converge faster in an on-line mode.

3: EXPERIMENTS

In order to demonstrate the merits of the Gauss-Newton RTRL, we have chosen to compare it to the original algorithm (GD-RTRL) on one-step-ahead prediction problems. For both GN- and GD-RTRL we used the same parabolic interpolation technique for approximate line minimization, the same set of initial weights and the same maximum learning rate. For each dataset the two algorithms performed training using different initial configurations. Once the algorithms converged, they were tested on all available data and the produced SSE was measured. The datasets we considered were:

Santa-Fe Time Series Competition dataset: the dataset consists of a computer-generated, 1-dimensional temporal sequence, generated by numerical integration of the equation of motion of a damped particle. The data set looks as follows:

Figure 9: Santa-Fe time series data set

[Link: http://www-psych.stanford.edu/~andreas/Time-Series/SantaFe.html]

Sunspot dataset: the dataset consists of a temporal sequence representing the annual average number of sunspots, as measured in the interval 1749 to 1950. The sequence is 1-dimensional and contains 202 samples.

[Link: http://science.msfc.nasa.gov/ssl/pad/solar/greenwch.htm]

The results obtained are summarized in Tables 2 and 3, depicted below. TSC denotes time steps until convergence, while KFlops denotes thousands of floating point operations performed during training. In the on-line learning case, the condition of convergence is that the L-infinity norm of the gradient vector must be smaller than an accuracy value ε for that particular time instant:

|| g(t) ||_inf < ε    (71)

where ε > 0 is the accuracy threshold. The experimental results for both datasets verify that GN-RTRL is superior in terms of convergence, as expected. In the first dataset, GN-RTRL achieved convergence on average in only 7% of the time steps required by GD-RTRL, while in the second one in 35%. In terms of computational effort (KFlops), GN-RTRL seems to be on average better than the GD-RTRL method for the Santa-Fe series and comparable for the Sunspot series.

Table 2: Performance - Santa-Fe Time Series (minimum, mean and maximum of TSC, KFlops and SSE, for GD-RTRL and GN-RTRL)

Table 3: Performance - Sunspot Time Series (minimum, mean and maximum of TSC, KFlops and SSE, for GD-RTRL and GN-RTRL)

Figure 10 shows boxplots describing the distribution of the computational effort required by the 2 algorithms, in terms of kiloflops, for the computer generated dataset. The boxplots have blue lines at the lower and upper quartiles, while the red line indicates the median of the KFlops for each of the methods. The lines extending from the box indicate the rest of the points. These plots show that GN-RTRL is relatively more computationally efficient in comparison to GD-RTRL. Despite the fact that GD-RTRL has a small computational overhead per time instance when compared to GN-RTRL, which incorporates an SVD step, the latter method compensates by performing significantly fewer iterations.

Figure 10: Boxplot of KFlops for GN-RTRL and GD-RTRL for the Santa-Fe Time Series

In terms of solution quality, Tables 2 and 3 clearly emphasize the superiority of GN-RTRL. The SSE of GN-RTRL is almost an order of magnitude smaller than the one achieved by GD-RTRL. Figure 11 shows a representative plot of the SSE computed during the testing phase versus the TSC for both methods, which demonstrates that GN-RTRL converges much faster to a solution and, simultaneously, the solution is of higher quality when compared to the GD-RTRL solution. Additionally, Figure 11 indicates cases where it seems that GN-RTRL may have terminated training rather prematurely, which resulted in high SSE values. This may be attributed to the initialization of the weight matrix, and in these cases the algorithm may have converged to local minima. To avoid the random initializations which converge to local minima, a method may be devised where, at the initial stages of the algorithm, the GN direction is watched and, if it is observed to be diverging away from the steepest descent direction, the initialization is discarded and another random weight matrix is selected.

Figure 11: The SSE versus TSC results of the FRNN trained with the GN-RTRL and GD-RTRL methods

4: SUMMARY, CONCLUSIONS AND FUTURE RESEARCH

We have presented a Gauss-Newton variation of the Real Time Recurrent Learning algorithm (Williams & Zipser 89) for the on-line training of Fully Recurrent Neural Networks. The modified algorithm, GN-RTRL, performs error minimization using Gauss-Newton direction vectors that are computed from information collected over a period of time, rather than only using instantaneous gradient information. GN-RTRL is a robust and effective compromise between the original, gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). Experimental results were reported that reflect the superiority of GN-RTRL over the original version in terms of speed of convergence and solution quality. Furthermore, the results indicate that, in practice, GN-RTRL features a lower-than-expected computational cost due to its fast convergence: GN-RTRL required fewer computations than the original RTRL to accomplish its learning task. This is indicated by the results obtained from the tests performed on the Sunspot and Santa-Fe time series data sets.

The formation of the Gauss-Newton direction requires the computation of the singular value decomposition of the Jacobian matrix, in order to circumvent the problem of inverting that matrix. The majority of the computational load is due to this computation of the SVD. Moreover, if the Jacobian matrix is rank deficient, the direction vector may not point in a descent direction, and in that case we have to take the gradient descent direction. This is one of the shortcomings of the method: the quality of the GN direction depends upon the rank of the Jacobian matrix.

LIST OF REFERENCES

[Werbos 90] Werbos P., "Backpropagation through time: What it does and how to do it", Proceedings of the IEEE, 78, 1990.

[Williams & Zipser 89] Williams R.J., Zipser D., "A learning algorithm for continually running fully recurrent neural networks", Neural Computation, 1, 270-280, 1989.

[Williams, Zipser 89B] Williams R.J., Zipser D., "Experimental analysis of the real-time recurrent learning algorithm", Connection Science, Vol. 1, No. 1, 87-111, 1989.

[Scales 85] Scales L.E., Introduction to Non-Linear Optimization, Springer-Verlag New York Inc., 1985.

[Dennis & Schnabel 83] Dennis J.E., Schnabel R.B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, NJ, 1983.

[Nocedal & Wright 99] Nocedal J., Wright S.J., Numerical Optimization, Springer-Verlag, New York, NY, 1999.

[Golub & Van Loan 96] Golub G.H., Van Loan C.F., Matrix Computations, 3rd Ed., Johns Hopkins University Press, Baltimore, MD, 1996.

[Williams 92] Williams R.J., "Training recurrent networks using the extended Kalman filter", in International Joint Conference on Neural Networks, Baltimore, Vol. 4, 1992.

[Catfolis 93] Catfolis T., "A method for improving the real-time recurrent learning algorithm", Neural Networks, Vol. 6, pp. 807-821, 1993.

[Druaux et al. 98] Druaux F., Rogue E., Faure A., "Constrained RTRL to reduce learning rate and forgetting phenomenon", Neural Processing Letters, 7, 161-167, 1998.

[Mak et al. 98] Mak M.W., Ku K.W., Lu Y.L., "On the improvement of the real time recurrent learning algorithm for recurrent neural networks", Neurocomputing, 24(1-3), 1998.

[Mandic & Chambers 99] Mandic D.P., Chambers J.A., "Relating the slope of the activation function and the learning rate within a recurrent neural network", Neural Computation, 11, 1069-1077, 1999.

[Mandic & Chambers 00] Mandic D.P., Chambers J.A., "A normalized real time recurrent learning algorithm", Signal Processing, 80, 1909-1916, 2000.

[Atiya & Parlos 00] Atiya A.F., Parlos A.G., "New results on recurrent network training: Unifying the algorithms and accelerating convergence", IEEE Transactions on Neural Networks, Vol. 11, No. 3, 697-709, 2000.

[Schmidhuber 92] Schmidhuber J., "A fixed size storage O(n^3) time complexity learning algorithm for fully recurrent continually running networks", Neural Computation, 4, pp. 243-248, 1992.

[Coelho 00] Coelho P.H.G., "An extended RTRL training algorithm using Hessian matrix", IEEE Proceedings on Neural Networks, 2000.

[Chang & Mak 98] Chang W.F., Mak M.W., "A conjugate gradient learning algorithm for recurrent neural networks", Neurocomputing, 24(1-3), 173-189, 1998.

[Euliano & Principe 00] Euliano N.R., Principe J.C., "Dynamic subgrouping in RTRL provides a faster O(N^2) algorithm", Acoustics, Speech, and Signal Processing (ICASSP), Vol. 6, 2000.

[Hambaba 00] Hambaba A., "Robust hybrid architecture to signals from manufacturing and machine monitoring", Journal of Intelligent & Fuzzy Systems, 9(1-2), 29-41, 2000.

[Juang & Lin 01] Juang C.F., Lin C.T., "Noisy speech processing by recurrently adaptive fuzzy filters", IEEE Transactions on Fuzzy Systems, 9(1), 139-152, 2001.

[Perez-Ortiz et al. 01] Perez-Ortiz J.A., Calera-Rubio J., Forcada M.L., "Online symbolic-sequence prediction with discrete-time recurrent neural networks", Artificial Neural Networks - ICANN 2001, Proceedings, Lecture Notes in Computer Science, 2130, 719-724, 2001.

[Li et al. 02] Li C.G., He S.B., Liao X.F., Yu J.B., "Using recurrent neural network for adaptive predistortion linearization of RF amplifiers", International Journal of RF and Microwave Computer-Aided Engineering, 12, 125-130, 2002.

[Rumelhart et al. 86] Rumelhart D.E., Hinton G.E., Williams R.J., "Learning representations by back-propagating errors", Nature, vol. 323, pp. 533-536, 1986.

[Selvan & Srinivasan 00] Selvan S., Srinivasan R., "Recurrent neural network based efficient adaptive filtering technique for the removal of ocular artefacts from EEG", IETE Technical Review, 17(1-2), 73-78, 2000.

[Blanco et al. 01] Blanco A., Delgado M., Pegalajar M.C., "A real-coded genetic algorithm for training recurrent neural networks", Neural Networks, 14(1), 93-105, 2001.

[Meyer 70] Meyer R.R., "Theoretical and computational aspects of nonlinear regression", in J.B. Rosen, O.L. Mangasarian and K. Ritter (eds), Nonlinear Programming, Academic Press, London and New York, 1970.


More information

DEEP UNFOLDING FOR MULTICHANNEL SOURCE SEPARATION SUPPLEMENTARY MATERIAL

DEEP UNFOLDING FOR MULTICHANNEL SOURCE SEPARATION SUPPLEMENTARY MATERIAL DEEP UNFOLDING FOR MULTICHANNEL SOURCE SEPARATION SUPPLEMENTARY MATERIAL Sco Wsdom, John Hershey 2, Jonahan Le Roux 2, and Shnj Waanabe 2 Deparmen o Elecrcal Engneerng, Unversy o Washngon, Seale, WA, USA

More information

Advanced Machine Learning & Perception

Advanced Machine Learning & Perception Advanced Machne Learnng & Percepon Insrucor: Tony Jebara SVM Feaure & Kernel Selecon SVM Eensons Feaure Selecon (Flerng and Wrappng) SVM Feaure Selecon SVM Kernel Selecon SVM Eensons Classfcaon Feaure/Kernel

More information

Online Supplement for Dynamic Multi-Technology. Production-Inventory Problem with Emissions Trading

Online Supplement for Dynamic Multi-Technology. Production-Inventory Problem with Emissions Trading Onlne Supplemen for Dynamc Mul-Technology Producon-Invenory Problem wh Emssons Tradng by We Zhang Zhongsheng Hua Yu Xa and Baofeng Huo Proof of Lemma For any ( qr ) Θ s easy o verfy ha he lnear programmng

More information

John Geweke a and Gianni Amisano b a Departments of Economics and Statistics, University of Iowa, USA b European Central Bank, Frankfurt, Germany

John Geweke a and Gianni Amisano b a Departments of Economics and Statistics, University of Iowa, USA b European Central Bank, Frankfurt, Germany Herarchcal Markov Normal Mxure models wh Applcaons o Fnancal Asse Reurns Appendx: Proofs of Theorems and Condonal Poseror Dsrbuons John Geweke a and Gann Amsano b a Deparmens of Economcs and Sascs, Unversy

More information

Mechanics Physics 151

Mechanics Physics 151 Mechancs Physcs 5 Lecure 9 Hamlonan Equaons of Moon (Chaper 8) Wha We Dd Las Tme Consruced Hamlonan formalsm H ( q, p, ) = q p L( q, q, ) H p = q H q = p H = L Equvalen o Lagrangan formalsm Smpler, bu

More information

( t) Outline of program: BGC1: Survival and event history analysis Oslo, March-May Recapitulation. The additive regression model

( t) Outline of program: BGC1: Survival and event history analysis Oslo, March-May Recapitulation. The additive regression model BGC1: Survval and even hsory analyss Oslo, March-May 212 Monday May 7h and Tuesday May 8h The addve regresson model Ørnulf Borgan Deparmen of Mahemacs Unversy of Oslo Oulne of program: Recapulaon Counng

More information

CS 268: Packet Scheduling

CS 268: Packet Scheduling Pace Schedulng Decde when and wha pace o send on oupu ln - Usually mplemened a oupu nerface CS 68: Pace Schedulng flow Ion Soca March 9, 004 Classfer flow flow n Buffer managemen Scheduler soca@cs.bereley.edu

More information

Dual Approximate Dynamic Programming for Large Scale Hydro Valleys

Dual Approximate Dynamic Programming for Large Scale Hydro Valleys Dual Approxmae Dynamc Programmng for Large Scale Hydro Valleys Perre Carpener and Jean-Phlppe Chanceler 1 ENSTA ParsTech and ENPC ParsTech CMM Workshop, January 2016 1 Jon work wh J.-C. Alas, suppored

More information

Learning Objectives. Self Organization Map. Hamming Distance(1/5) Introduction. Hamming Distance(3/5) Hamming Distance(2/5) 15/04/2015

Learning Objectives. Self Organization Map. Hamming Distance(1/5) Introduction. Hamming Distance(3/5) Hamming Distance(2/5) 15/04/2015 /4/ Learnng Objecves Self Organzaon Map Learnng whou Exaples. Inroducon. MAXNET 3. Cluserng 4. Feaure Map. Self-organzng Feaure Map 6. Concluson 38 Inroducon. Learnng whou exaples. Daa are npu o he syse

More information

Lecture 2 L n i e n a e r a M od o e d l e s

Lecture 2 L n i e n a e r a M od o e d l e s Lecure Lnear Models Las lecure You have learned abou ha s machne learnng Supervsed learnng Unsupervsed learnng Renforcemen learnng You have seen an eample learnng problem and he general process ha one

More information

Mechanics Physics 151

Mechanics Physics 151 Mechancs Physcs 5 Lecure 9 Hamlonan Equaons of Moon (Chaper 8) Wha We Dd Las Tme Consruced Hamlonan formalsm Hqp (,,) = qp Lqq (,,) H p = q H q = p H L = Equvalen o Lagrangan formalsm Smpler, bu wce as

More information

10. A.C CIRCUITS. Theoretically current grows to maximum value after infinite time. But practically it grows to maximum after 5τ. Decay of current :

10. A.C CIRCUITS. Theoretically current grows to maximum value after infinite time. But practically it grows to maximum after 5τ. Decay of current : . A. IUITS Synopss : GOWTH OF UNT IN IUIT : d. When swch S s closed a =; = d. A me, curren = e 3. The consan / has dmensons of me and s called he nducve me consan ( τ ) of he crcu. 4. = τ; =.63, n one

More information

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. Ths documen s downloaded from DR-NTU, Nanyang Technologcal Unversy Lbrary, Sngapore. Tle A smplfed verb machng algorhm for word paron n vsual speech processng( Acceped verson ) Auhor(s) Foo, Say We; Yong,

More information

HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD

HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Journal of Appled Mahemacs and Compuaonal Mechancs 3, (), 45-5 HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Sansław Kukla, Urszula Sedlecka Insue of Mahemacs,

More information

Performance Analysis for a Network having Standby Redundant Unit with Waiting in Repair

Performance Analysis for a Network having Standby Redundant Unit with Waiting in Repair TECHNI Inernaonal Journal of Compung Scence Communcaon Technologes VOL.5 NO. July 22 (ISSN 974-3375 erformance nalyss for a Nework havng Sby edundan Un wh ang n epar Jendra Sngh 2 abns orwal 2 Deparmen

More information

Introduction ( Week 1-2) Course introduction A brief introduction to molecular biology A brief introduction to sequence comparison Part I: Algorithms

Introduction ( Week 1-2) Course introduction A brief introduction to molecular biology A brief introduction to sequence comparison Part I: Algorithms Course organzaon Inroducon Wee -2) Course nroducon A bref nroducon o molecular bology A bref nroducon o sequence comparson Par I: Algorhms for Sequence Analyss Wee 3-8) Chaper -3, Models and heores» Probably

More information

Supplementary Material to: IMU Preintegration on Manifold for E cient Visual-Inertial Maximum-a-Posteriori Estimation

Supplementary Material to: IMU Preintegration on Manifold for E cient Visual-Inertial Maximum-a-Posteriori Estimation Supplemenary Maeral o: IMU Prenegraon on Manfold for E cen Vsual-Ineral Maxmum-a-Poseror Esmaon echncal Repor G-IRIM-CP&R-05-00 Chrsan Forser, Luca Carlone, Fran Dellaer, and Davde Scaramuzza May 0, 05

More information

Comb Filters. Comb Filters

Comb Filters. Comb Filters The smple flers dscussed so far are characered eher by a sngle passband and/or a sngle sopband There are applcaons where flers wh mulple passbands and sopbands are requred Thecomb fler s an example of

More information

Including the ordinary differential of distance with time as velocity makes a system of ordinary differential equations.

Including the ordinary differential of distance with time as velocity makes a system of ordinary differential equations. Soluons o Ordnary Derenal Equaons An ordnary derenal equaon has only one ndependen varable. A sysem o ordnary derenal equaons consss o several derenal equaons each wh he same ndependen varable. An eample

More information

Reactive Methods to Solve the Berth AllocationProblem with Stochastic Arrival and Handling Times

Reactive Methods to Solve the Berth AllocationProblem with Stochastic Arrival and Handling Times Reacve Mehods o Solve he Berh AllocaonProblem wh Sochasc Arrval and Handlng Tmes Nsh Umang* Mchel Berlare* * TRANSP-OR, Ecole Polyechnque Fédérale de Lausanne Frs Workshop on Large Scale Opmzaon November

More information

Chapter 6: AC Circuits

Chapter 6: AC Circuits Chaper 6: AC Crcus Chaper 6: Oulne Phasors and he AC Seady Sae AC Crcus A sable, lnear crcu operang n he seady sae wh snusodal excaon (.e., snusodal seady sae. Complee response forced response naural response.

More information

Machine Learning 2nd Edition

Machine Learning 2nd Edition INTRODUCTION TO Lecure Sldes for Machne Learnng nd Edon ETHEM ALPAYDIN, modfed by Leonardo Bobadlla and some pars from hp://www.cs.au.ac.l/~aparzn/machnelearnng/ The MIT Press, 00 alpaydn@boun.edu.r hp://www.cmpe.boun.edu.r/~ehem/mle

More information

Testing a new idea to solve the P = NP problem with mathematical induction

Testing a new idea to solve the P = NP problem with mathematical induction Tesng a new dea o solve he P = NP problem wh mahemacal nducon Bacground P and NP are wo classes (ses) of languages n Compuer Scence An open problem s wheher P = NP Ths paper ess a new dea o compare he

More information

A Novel Efficient Stopping Criterion for BICM-ID System

A Novel Efficient Stopping Criterion for BICM-ID System A Novel Effcen Soppng Creron for BICM-ID Sysem Xao Yng, L Janpng Communcaon Unversy of Chna Absrac Ths paper devses a novel effcen soppng creron for b-nerleaved coded modulaon wh erave decodng (BICM-ID)

More information

Econ107 Applied Econometrics Topic 5: Specification: Choosing Independent Variables (Studenmund, Chapter 6)

Econ107 Applied Econometrics Topic 5: Specification: Choosing Independent Variables (Studenmund, Chapter 6) Econ7 Appled Economercs Topc 5: Specfcaon: Choosng Independen Varables (Sudenmund, Chaper 6 Specfcaon errors ha we wll deal wh: wrong ndependen varable; wrong funconal form. Ths lecure deals wh wrong ndependen

More information

Motion in Two Dimensions

Motion in Two Dimensions Phys 1 Chaper 4 Moon n Two Dmensons adzyubenko@csub.edu hp://www.csub.edu/~adzyubenko 005, 014 A. Dzyubenko 004 Brooks/Cole 1 Dsplacemen as a Vecor The poson of an objec s descrbed by s poson ecor, r The

More information

UNIVERSITAT AUTÒNOMA DE BARCELONA MARCH 2017 EXAMINATION

UNIVERSITAT AUTÒNOMA DE BARCELONA MARCH 2017 EXAMINATION INTERNATIONAL TRADE T. J. KEHOE UNIVERSITAT AUTÒNOMA DE BARCELONA MARCH 27 EXAMINATION Please answer wo of he hree quesons. You can consul class noes, workng papers, and arcles whle you are workng on he

More information

Advanced time-series analysis (University of Lund, Economic History Department)

Advanced time-series analysis (University of Lund, Economic History Department) Advanced me-seres analss (Unvers of Lund, Economc Hsor Dearmen) 3 Jan-3 Februar and 6-3 March Lecure 4 Economerc echnues for saonar seres : Unvarae sochasc models wh Box- Jenns mehodolog, smle forecasng

More information

Tight results for Next Fit and Worst Fit with resource augmentation

Tight results for Next Fit and Worst Fit with resource augmentation Tgh resuls for Nex F and Wors F wh resource augmenaon Joan Boyar Leah Epsen Asaf Levn Asrac I s well known ha he wo smple algorhms for he classc n packng prolem, NF and WF oh have an approxmaon rao of

More information

New M-Estimator Objective Function. in Simultaneous Equations Model. (A Comparative Study)

New M-Estimator Objective Function. in Simultaneous Equations Model. (A Comparative Study) Inernaonal Mahemacal Forum, Vol. 8, 3, no., 7 - HIKARI Ld, www.m-hkar.com hp://dx.do.org/.988/mf.3.3488 New M-Esmaor Objecve Funcon n Smulaneous Equaons Model (A Comparave Sudy) Ahmed H. Youssef Professor

More information

A NEW TECHNIQUE FOR SOLVING THE 1-D BURGERS EQUATION

A NEW TECHNIQUE FOR SOLVING THE 1-D BURGERS EQUATION S19 A NEW TECHNIQUE FOR SOLVING THE 1-D BURGERS EQUATION by Xaojun YANG a,b, Yugu YANG a*, Carlo CATTANI c, and Mngzheng ZHU b a Sae Key Laboraory for Geomechancs and Deep Underground Engneerng, Chna Unversy

More information

A Novel Iron Loss Reduction Technique for Distribution Transformers. Based on a Combined Genetic Algorithm - Neural Network Approach

A Novel Iron Loss Reduction Technique for Distribution Transformers. Based on a Combined Genetic Algorithm - Neural Network Approach A Novel Iron Loss Reducon Technque for Dsrbuon Transformers Based on a Combned Genec Algorhm - Neural Nework Approach Palvos S. Georglaks Nkolaos D. Doulams Anasasos D. Doulams Nkos D. Hazargyrou and Sefanos

More information

Department of Economics University of Toronto

Department of Economics University of Toronto Deparmen of Economcs Unversy of Torono ECO408F M.A. Economercs Lecure Noes on Heeroskedascy Heeroskedascy o Ths lecure nvolves lookng a modfcaons we need o make o deal wh he regresson model when some of

More information

Neural Networks-Based Time Series Prediction Using Long and Short Term Dependence in the Learning Process

Neural Networks-Based Time Series Prediction Using Long and Short Term Dependence in the Learning Process Neural Neworks-Based Tme Seres Predcon Usng Long and Shor Term Dependence n he Learnng Process J. Puchea, D. Paño and B. Kuchen, Absrac In hs work a feedforward neural neworksbased nonlnear auoregresson

More information

Single-loop System Reliability-Based Design & Topology Optimization (SRBDO/SRBTO): A Matrix-based System Reliability (MSR) Method

Single-loop System Reliability-Based Design & Topology Optimization (SRBDO/SRBTO): A Matrix-based System Reliability (MSR) Method 10 h US Naonal Congress on Compuaonal Mechancs Columbus, Oho 16-19, 2009 Sngle-loop Sysem Relably-Based Desgn & Topology Opmzaon (SRBDO/SRBTO): A Marx-based Sysem Relably (MSR) Mehod Tam Nguyen, Junho

More information

Li An-Ping. Beijing , P.R.China

Li An-Ping. Beijing , P.R.China A New Type of Cpher: DICING_csb L An-Png Bejng 100085, P.R.Chna apl0001@sna.com Absrac: In hs paper, we wll propose a new ype of cpher named DICING_csb, whch s derved from our prevous sream cpher DICING.

More information

How about the more general "linear" scalar functions of scalars (i.e., a 1st degree polynomial of the following form with a constant term )?

How about the more general linear scalar functions of scalars (i.e., a 1st degree polynomial of the following form with a constant term )? lmcd Lnear ransformaon of a vecor he deas presened here are que general hey go beyond he radonal mar-vecor ype seen n lnear algebra Furhermore, hey do no deal wh bass and are equally vald for any se of

More information

ISSN MIT Publications

ISSN MIT Publications MIT Inernaonal Journal of Elecrcal and Insrumenaon Engneerng Vol. 1, No. 2, Aug 2011, pp 93-98 93 ISSN 2230-7656 MIT Publcaons A New Approach for Solvng Economc Load Dspach Problem Ansh Ahmad Dep. of Elecrcal

More information

Genetic Algorithm in Parameter Estimation of Nonlinear Dynamic Systems

Genetic Algorithm in Parameter Estimation of Nonlinear Dynamic Systems Genec Algorhm n Parameer Esmaon of Nonlnear Dynamc Sysems E. Paeraks manos@egnaa.ee.auh.gr V. Perds perds@vergna.eng.auh.gr Ah. ehagas kehagas@egnaa.ee.auh.gr hp://skron.conrol.ee.auh.gr/kehagas/ndex.hm

More information

Inversion of Complex Valued Neural Networks Using Complex Back-propagation Algorithm

Inversion of Complex Valued Neural Networks Using Complex Back-propagation Algorithm ITERATIOAL JOURAL OF ATHEATICS AD COPUTERS I SIULATIO Inverson of Complex Valued eural ewors Usng Complex Bac-propagaon Algorhm Ana S. Gangal, P.K. Kalra, and D.S.Chauhan Absrac Ths paper presens he nverson

More information

Chapter 4. Neural Networks Based on Competition

Chapter 4. Neural Networks Based on Competition Chaper 4. Neural Neworks Based on Compeon Compeon s mporan for NN Compeon beween neurons has been observed n bologcal nerve sysems Compeon s mporan n solvng many problems To classfy an npu paern _1 no

More information

Bayes rule for a classification problem INF Discriminant functions for the normal density. Euclidean distance. Mahalanobis distance

Bayes rule for a classification problem INF Discriminant functions for the normal density. Euclidean distance. Mahalanobis distance INF 43 3.. Repeon Anne Solberg (anne@f.uo.no Bayes rule for a classfcaon problem Suppose we have J, =,...J classes. s he class label for a pxel, and x s he observed feaure vecor. We can use Bayes rule

More information

Fall 2010 Graduate Course on Dynamic Learning

Fall 2010 Graduate Course on Dynamic Learning Fall 200 Graduae Course on Dynamc Learnng Chaper 4: Parcle Flers Sepember 27, 200 Byoung-Tak Zhang School of Compuer Scence and Engneerng & Cognve Scence and Bran Scence Programs Seoul aonal Unversy hp://b.snu.ac.kr/~bzhang/

More information

The Finite Element Method for the Analysis of Non-Linear and Dynamic Systems

The Finite Element Method for the Analysis of Non-Linear and Dynamic Systems Swss Federal Insue of Page 1 The Fne Elemen Mehod for he Analyss of Non-Lnear and Dynamc Sysems Prof. Dr. Mchael Havbro Faber Dr. Nebojsa Mojslovc Swss Federal Insue of ETH Zurch, Swzerland Mehod of Fne

More information

CSCE 478/878 Lecture 5: Artificial Neural Networks and Support Vector Machines. Stephen Scott. Introduction. Outline. Linear Threshold Units

CSCE 478/878 Lecture 5: Artificial Neural Networks and Support Vector Machines. Stephen Scott. Introduction. Outline. Linear Threshold Units (Adaped from Ehem Alpaydn and Tom Mchell) Consder humans: Toal number of neurons Neuron schng me 3 second (vs ) Connecons per neuron 4 5 Scene recognon me second nference seps doesn seem lke enough ) much

More information

Boosted LMS-based Piecewise Linear Adaptive Filters

Boosted LMS-based Piecewise Linear Adaptive Filters 016 4h European Sgnal Processng Conference EUSIPCO) Boosed LMS-based Pecewse Lnear Adapve Flers Darush Kar and Iman Marvan Deparmen of Elecrcal and Elecroncs Engneerng Blken Unversy, Ankara, Turkey {kar,

More information

Pendulum Dynamics. = Ft tangential direction (2) radial direction (1)

Pendulum Dynamics. = Ft tangential direction (2) radial direction (1) Pendulum Dynams Consder a smple pendulum wh a massless arm of lengh L and a pon mass, m, a he end of he arm. Assumng ha he fron n he sysem s proporonal o he negave of he angenal veloy, Newon s seond law

More information

2.1 Constitutive Theory

2.1 Constitutive Theory Secon.. Consuve Theory.. Consuve Equaons Governng Equaons The equaons governng he behavour of maerals are (n he spaal form) dρ v & ρ + ρdv v = + ρ = Conservaon of Mass (..a) d x σ j dv dvσ + b = ρ v& +

More information

A NOVEL NETWORK METHOD DESIGNING MULTIRATE FILTER BANKS AND WAVELETS

A NOVEL NETWORK METHOD DESIGNING MULTIRATE FILTER BANKS AND WAVELETS A NOVEL NEWORK MEHOD DESIGNING MULIRAE FILER BANKS AND WAVELES Yng an Deparmen of Elecronc Engneerng and Informaon Scence Unversy of Scence and echnology of Chna Hefe 37, P. R. Chna E-mal: yan@usc.edu.cn

More information

Math 128b Project. Jude Yuen

Math 128b Project. Jude Yuen Mah 8b Proec Jude Yuen . Inroducon Le { Z } be a sequence of observed ndependen vecor varables. If he elemens of Z have a on normal dsrbuon hen { Z } has a mean vecor Z and a varancecovarance marx z. Geomercally

More information

The Analysis of the Thickness-predictive Model Based on the SVM Xiu-ming Zhao1,a,Yan Wang2,band Zhimin Bi3,c

The Analysis of the Thickness-predictive Model Based on the SVM Xiu-ming Zhao1,a,Yan Wang2,band Zhimin Bi3,c h Naonal Conference on Elecrcal, Elecroncs and Compuer Engneerng (NCEECE The Analyss of he Thcknesspredcve Model Based on he SVM Xumng Zhao,a,Yan Wang,band Zhmn B,c School of Conrol Scence and Engneerng,

More information

Should Exact Index Numbers have Standard Errors? Theory and Application to Asian Growth

Should Exact Index Numbers have Standard Errors? Theory and Application to Asian Growth Should Exac Index umbers have Sandard Errors? Theory and Applcaon o Asan Growh Rober C. Feensra Marshall B. Rensdorf ovember 003 Proof of Proposon APPEDIX () Frs, we wll derve he convenonal Sao-Vara prce

More information

Relative controllability of nonlinear systems with delays in control

Relative controllability of nonlinear systems with delays in control Relave conrollably o nonlnear sysems wh delays n conrol Jerzy Klamka Insue o Conrol Engneerng, Slesan Techncal Unversy, 44- Glwce, Poland. phone/ax : 48 32 37227, {jklamka}@a.polsl.glwce.pl Keywor: Conrollably.

More information