Approximate, Computationally Efficient Online Learning in Bayesian Spiking Neurons


Levin Kuhlmann levink@unimelb.edu.au
NeuroEngineering Laboratory, Department of Electrical & Electronic Engineering, The University of Melbourne, and the Centre for Neural Engineering, The University of Melbourne, Victoria 3010, Australia.

Michael Hauser-Raspe michael.hauser-raspe@cantab.net
The MARCS Institute, University of Western Sydney, Penrith, Australia.

Jonathan H. Manton jonathan.manton@gmail.com
Control & Signal Processing Laboratory, Department of Electrical & Electronic Engineering, The University of Melbourne, Victoria 3010, Australia.

David B. Grayden grayden@unimelb.edu.au
NeuroEngineering Laboratory, Department of Electrical & Electronic Engineering, The University of Melbourne, and the Centre for Neural Engineering, The University of Melbourne, Victoria 3010, Australia.

Jonathan Tapson j.tapson@uws.edu.au
The MARCS Institute, University of Western Sydney, Penrith, Australia.

André van Schaik a.vanschaik@uws.edu.au
The MARCS Institute, University of Western Sydney, Penrith, Australia.

Running title: Approximate, Efficient Learning in Bayesian Neurons

Abstract

Bayesian Spiking Neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves maximum-likelihood expectation-maximisation (ML-EM) based parameter estimation, which is computationally slow and limits the potential of

studying networks of BSNs. An online learning algorithm, Fast Learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialisation of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm, therefore, provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy efficient spike coding of BSNs.

1 Introduction

Bayesian inference provides an intuitive understanding of sensory, cognitive and motor processing by describing these processes in probabilistic terms (Knill & Richards, 1996; Körding & Wolpert, 2004). In addition to the application of Bayesian inference to neural population models (Zemel et al., 1998; Barber et al., 2003), more

recently Bayesian inference has been applied at the level of single neurons in the form of Bayesian Spiking Neurons (BSNs) (Denève, 2008a; 2008b; Lochmann & Denève, 2008). BSNs provide a probabilistic interpretation of how neurons can perform inference and learning, and BSN theory has been extended to provide explanations of sensory processing (Lochmann et al., 2012) and working memory (Boerlin & Denève, 2011). Learning in BSNs is local to a single neuron, meaning that BSNs can self-organise using only their inputs, although their organisation is still constrained by underlying assumptions about how the inputs were generated. As a result of the self-organisation of single BSNs, hierarchical feed-forward networks of BSNs can generally be thought of as clustering systems. Moreover, BSN networks can be used to perform perceptual inference and learning in sensory networks, such as feature and object recognition, and this can be done using the efficient spike coding properties of BSNs. Specifically, object neurons at the top of a feed-forward hierarchical system which receive no supervisory signals can learn to connect to lower level feature neurons that are simultaneously active when the object stimulus is present. In addition, lateral interactions can be used to activate different object neuron clusters when different object classes are present in order to allow clustered learning of object classes, where groups of few to many neurons could represent the presence or absence of different object classes. Although some BSN networks have been studied (Boerlin & Denève, 2011; Lochmann et al., 2012), learning in large feed-forward, and potentially feedback, BSN networks has not yet been studied in detail.

Given that learning in BSNs involves Maximum-Likelihood Expectation-Maximisation (ML-EM) (Denève, 2008b; Mongillo and Denève, 2008), learning is computationally intensive and, therefore, it can be time consuming to study networks of interesting complexity. Here, we present the Fast Learning (FL) algorithm, which increases the speed of learning in BSNs and should therefore make it easier to study the properties of networks of BSNs. The FL algorithm presented here is more robust and stable with respect to initialisation of parameter estimates and a large set of true parameter combinations than an earlier version (Kuhlmann et al., 2012).

This paper is organised as follows. The methods are first presented, outlining the definition of a BSN, the new FL algorithm and a benchmark ML-EM algorithm. In the results, the performance (parameter estimation accuracy and run time) of the FL algorithm and its dependence on various factors is explored. The FL and ML-EM algorithms are compared for a large number of true parameter combinations and for varied degrees in the uncertainty of the initial parameter estimates relative to the true parameter values. This is done in order to assess robustness of the FL algorithm and highlight its strengths and weaknesses compared to ML-EM. We also consider the performance of FL in three-layer networks. It is worth noting that the FL and ML-EM algorithms are both online and adaptive, meaning that they can be applied to data in real-time as it enters the system and that their parameter estimates will converge when the input statistics are stationary, but can adapt to estimate new parameter values if the input statistics change to a different stationary distribution.

2 Methods

2.1 Bayesian Spiking Neuron (BSN) Equations

A BSN (Denève, 2008a; 2008b) is defined to have an explicit space and an implicit space (see Figures 1A and 1B). The explicit space represents the neuron, with N synaptic inputs indexed by i, input spike sequences, s_t^i (s_t^i = 1 corresponds to occurrence of a spike at time t on the i-th synapse, otherwise s_t^i = 0), synaptic weights, w_i, and output spike sequence, O_t. The implicit space is defined by a 2-state Hidden Markov Model (HMM), where the hidden state takes on values of x_t = {0, 1} that, for example, model the presence or absence of a stimulus. When the hidden state is x_t = 1 it is considered on or present, whereas when the hidden state is x_t = 0 it is considered off or absent. The hidden states transition with the probabilities,

P(x_t = d | x_{t−Δt} = c) = a_cd = [[1 − r_on Δt, r_on Δt], [r_off Δt, 1 − r_off Δt]],   (1)

where r_on and r_off are the rates of off→on and on→off transitions, t is the current time, and Δt is the time step.

The HMM produces N independent observable outputs that can produce one of two observations within a time step, s_t^i = {0, 1}, where i indexes the N outputs. These N independent output observations of the HMM correspond to the inputs to the BSN and are modelled as inhomogeneous Poisson processes, where the i-th process occurs as a homogeneous Poisson process either with rate q_on^i when x_t = 1 or with rate q_off^i when

x_t = 0. For small time steps, the Poisson process can be described by a binomial process and the observation probabilities for each synapse become

P(s_t^i = e | x_t = d) = b_de^i = [[1 − q_off^i Δt, q_off^i Δt], [1 − q_on^i Δt, q_on^i Δt]].   (2)

For a BSN, when the stimulus is either present or absent, the distribution of the number of output spikes per a given time interval matches closely to a Poisson distribution (Denève, 2008a) and, therefore, it is possible to build hierarchies of BSNs.

The aim of a single BSN is to code the log-odds ratio of the hidden state, x_t, in its output spikes. This is achieved by numerically integrating the differential equation of the log-odds ratio, L_t, along with a differential equation for the prediction of the log-odds ratio, G_t, which depends on the output spikes of the BSN. When the log-odds ratio goes above its prediction by g_o/2, an output spike is generated (represented by O_t = 1). This can be seen in the following equations (Denève, 2008a):

L̇_t = r_on (1 + e^{−L_t}) − r_off (1 + e^{L_t}) + Σ_i w_i s_t^i − θ_h   (3)

Ġ_t = r_on (1 + e^{−G_t}) − r_off (1 + e^{G_t}) + g_o O_t   (4)

O_t = 1 if L_t > G_t + g_o/2; O_t = 0 otherwise,   (5)
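To make the discrete-time dynamics concrete, equations (3)-(5) can be integrated with a simple forward-Euler scheme. The sketch below (Python with NumPy; the function name and argument packaging are our own, with w_i and θ_h computed from equations (6) and (7)) is illustrative rather than an optimised implementation:

```python
import numpy as np

def run_bsn(s, r_on, r_off, q_on, q_off, g_o, dt):
    """Integrate the BSN log-odds dynamics of equations (3)-(5).

    s          : (n_steps, N) binary input spike array
    q_on/q_off : length-N arrays of observation rates (s^-1)
    """
    w = np.log(q_on / q_off)           # synaptic weights, equation (6)
    theta_h = np.sum(q_on - q_off)     # drift term, equation (7)
    n_steps = s.shape[0]
    L = np.zeros(n_steps)
    G = np.zeros(n_steps)
    O = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        dL = r_on * (1 + np.exp(-L[t-1])) - r_off * (1 + np.exp(L[t-1])) - theta_h
        L[t] = L[t-1] + dt * dL + w @ s[t]     # spikes add w_i per input spike
        dG = r_on * (1 + np.exp(-G[t-1])) - r_off * (1 + np.exp(G[t-1]))
        G[t] = G[t-1] + dt * dG + g_o * O[t-1]  # one-step-delayed output feedback
        if L[t] > G[t] + g_o / 2:              # fire when L exceeds its prediction, eq (5)
            O[t] = 1
    return L, G, O
```

With Δt = 0.1 ms the per-step drift terms remain small, so an explicit Euler step is adequate for exploratory simulations.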

where

w_i = log(q_on^i / q_off^i),   (6)

θ_h = Σ_i (q_on^i − q_off^i),   (7)

and g_o is a free parameter that controls coding efficiency. Unless otherwise stated, g_o was held at the same fixed value for all simulations.

Learning in a BSN involves estimation of the unknown parameters r_on, r_off, q_on^i, and q_off^i. Here, we compare learning with the ML-EM and FL algorithms. In what follows the parameter estimates are denoted as r̂_on, r̂_off, q̂_on^i, and q̂_off^i, and θ and θ̂ denote the vectors of the parameters and the parameter estimates, respectively.

2.2 Benchmark ML-EM algorithm

The benchmark online ML-EM algorithm is based on computing the sufficient statistics of the maximum likelihood function (Denève, 2008b; Mongillo & Denève, 2008). The algorithm converges to a local maximum if run for an adequate amount of time. Mongillo & Denève (2008) do not provide a proof of convergence for their online algorithm, although they explain that their online method converges to the same estimate as batch EM (Rabiner, 1989) for an appropriate discount factor and scheduling of parameter updates. For each time step, at time t, the following equations (8)-(11) are computed in the order presented. Equation (8) defines γ_cd(s_t; θ̂(t−Δt)),

which is a weighting term that depends on the previous parameter estimates and determines the trajectories of the variables in equations (9) and (10),

γ_cd(s_t; θ̂(t−Δt)) = â_cd(t−Δt) b̂_{d,s_t}(t−Δt) / [ Σ_{m,n} â_mn(t−Δt) b̂_{n,s_t}(t−Δt) Q_m(t−Δt) ],   (8)

where θ̂(t−Δt) is the set of previous parameter estimates which determine the previous estimates of the transition, â_cd(t−Δt), and observation, b̂_de(t−Δt), probabilities as defined in equations (1) and (2), respectively. The variable Q_l(t) represents the probability of being in state l at time t and is updated according to

Q_l(t) = Σ_m γ_ml(s_t; θ̂(t−Δt)) Q_m(t−Δt).   (9)

A statistic is then computed from which the current transition and observation probabilities can be estimated:

φ^h_cde(t) = Σ_l γ_lh(s_t; θ̂(t−Δt)) { φ^l_cde(t−Δt) + η [ δ(s_t − e) δ(c − l) δ(d − h) Q_l(t−Δt) − φ^l_cde(t−Δt) ] },   (10)

where η is a positive-valued temporal forgetting factor. The current transition and observation probabilities are given by:

â_cd(t) = Σ_{e,h} φ^h_cde(t) / Σ_{d,e,h} φ^h_cde(t);   b̂_de(t) = Σ_{c,h} φ^h_cde(t) / Σ_{c,e,h} φ^h_cde(t).   (11)
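For a single binary observation stream, one time step of equations (8)-(11) can be sketched as follows (Python with NumPy; the array layout phi[h, c, d, e] and the function name are our own conventions, not part of the original presentation):

```python
import numpy as np

def ml_em_step(s_t, a_hat, b_hat, Q, phi, eta):
    """One online ML-EM update (equations (8)-(11)) for one binary
    observation stream. a_hat, b_hat are 2x2; Q has length 2;
    phi is a 2x2x2x2 array indexed [h, c, d, e]."""
    # eq (8): gamma[c, d] = a_hat[c, d] b_hat[d, s_t] / sum_{m,n} a_hat[m,n] b_hat[n,s_t] Q[m]
    num = a_hat * b_hat[:, s_t][np.newaxis, :]
    gamma = num / np.einsum('mn,m->', num, Q)
    # eq (9): filtered state probability, self-normalising by construction
    Q_new = gamma.T @ Q
    # eq (10): decay the statistic, then add the delta(c=l) delta(d=h) delta(e=s_t) term
    phi_new = (1 - eta) * np.einsum('lh,lcde->hcde', gamma, phi)
    for c in range(2):
        for h in range(2):
            phi_new[h, c, h, s_t] += eta * gamma[c, h] * Q[c]
    # eq (11): normalise the statistic into transition and observation estimates
    A = phi_new.sum(axis=(0, 3))                    # sum over h, e -> indexed [c, d]
    a_new = A / A.sum(axis=1, keepdims=True)
    B = phi_new.sum(axis=(0, 1))                    # sum over h, c -> indexed [d, e]
    b_new = B / B.sum(axis=1, keepdims=True)
    return a_new, b_new, Q_new, phi_new
```

For a BSN with N synapses, an update of this kind is maintained per synapse, which is where the unfavourable scaling of ML-EM with N arises.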

The BSN parameter estimates, r̂_on, r̂_off, q̂_on^i, and q̂_off^i, can then be inferred from equations (1) and (2). The initialisation and settings of the algorithm that were applied are: φ^h_cde(0) = 0, Q_l(0) = 0.5, η = 10^{−5}. The statistic, φ^h_cde(t), is updated for the first 10^2 time steps without applying equation (11) so that it can stabilise. It should also be noted that this form of learning of the BSN only directly depends on the input spikes to the BSN and its parameters and not on any other variables of the BSN.

2.3 The Fast Learning (FL) algorithm

As noted by Denève (2008b), the parameters of the BSN (i.e. the transition rates and observation rates of the HMM) depend on the following intermediate statistics: the proportion of time spent in the on state, τ_on(t), the number of on→off transitions, N_on→off(t), the number of off→on transitions, N_off→on(t), the number of spikes occurring at the i-th synapse during the on state, N_on^i(t), and the number of spikes occurring at the i-th synapse, N^i(t). For each time step, the FL algorithm first estimates the hidden state from the log-odds ratio of the hidden state, then estimates these statistics in order to determine the parameters.

First we note that the definition of the log-odds ratio of the hidden state, x_t, is

L_t = log[ P(x_t = 1 | s_{0:t}) / P(x_t = 0 | s_{0:t}) ],   (12)

implying that the probability that the hidden state, x_t, is equal to 1 given the past synaptic inputs from all synapses can be computed using

P(x_t = 1 | s_{0:t}) = e^{L_t} / (1 + e^{L_t}).   (13)

A robust way to determine if a hidden state transition has occurred is to set an auxiliary variable, x̃_t (i.e. a hidden state estimate), such that

x̃_t = 1 if P(x_t = 1 | s_{0:t}) > U_t; x̃_t = 0 if P(x_t = 1 | s_{0:t}) < D_t; x̃_t = x̃_{t−Δt} otherwise,   (14)

where U_t and D_t are dynamic thresholds that depend on the maximum and minimum values of P(x_t = 1 | s_{0:t}), respectively, over a window of length T:

U_t = θ_U (M_t − m_t) + m_t   (15)

D_t = θ_D (M_t − m_t) + m_t   (16)

with

M_t = max{ P(x_u = 1 | s_{0:u}) : u ∈ [t − T, t] }   (17)

m_t = min{ P(x_u = 1 | s_{0:u}) : u ∈ [t − T, t] }   (18)

and θ_U and θ_D are factors defining the position of the thresholds within the range of P(x_t = 1 | s_{0:t}) over the last T seconds. These factors should be set such that θ_U > 0.5 and θ_D < 0.5, simply because P(x_t = 1 | s_{0:t}) values above the mid-range of P(x_t = 1 | s_{0:t}) over the last T seconds best indicate x_t is likely to be 1, whereas

P(x_t = 1 | s_{0:t}) values below this mid-range best indicate x_t is likely to be 0. In all simulations, T = 0.5 s. The dynamic thresholding is a heuristic method that provides stability of estimation that cannot be achieved with static thresholds alone (such as by Kuhlmann et al., 2012). In particular, it ensures that state transitions can still be detected when transition rates are low, the number of input spikes is low, and/or the parameter estimates are far from the true parameter values.

An on→off transition variable is defined as

T_on→off(t) = 1 if x̃_t = 0 and x̃_{t−Δt} = 1; 0 otherwise,   (19)

and an off→on transition variable is defined as

T_off→on(t) = 1 if x̃_t = 1 and x̃_{t−Δt} = 0; 0 otherwise.   (20)

It is then possible to write update rules for the intermediate statistics respectively as follows:

τ_on(t) = η x̃_t + (1 − η) τ_on(t − Δt)   (21)

N_on→off(t) = η T_on→off(t) + (1 − η) N_on→off(t − Δt)   (22)

N_off→on(t) = η T_off→on(t) + (1 − η) N_off→on(t − Δt)   (23)

N_on^i(t) = η s_t^i x̃_t + (1 − η) N_on^i(t − Δt)   (24)

N^i(t) = η s_t^i + (1 − η) N^i(t − Δt),   (25)

where each equation represents a moving average with exponential decay with a time constant τ and η = Δt/(Δt + τ) is a positive-valued forgetting factor. The forgetting factor η is present in both the FL and ML-EM algorithms. A small η means that learning/parameter estimation is smoother but also takes longer to converge. A large η means the algorithm forgets quickly but can also converge quicker, provided τ is of adequate duration to accurately estimate the intermediate statistics required to estimate the parameters.

The parameter estimates can then be calculated from equations (21)-(25) as follows:

r̂_on(t) = N_off→on(t) / [ (1 − τ_on(t)) Δt ]   (26)

r̂_off(t) = N_on→off(t) / [ τ_on(t) Δt ]   (27)

q̂_on^i(t) = N_on^i(t) / [ τ_on(t) Δt ]   (28)

q̂_off^i(t) = [ N^i(t) − N_on^i(t) ] / [ (1 − τ_on(t)) Δt ].   (29)

In equations (26) and (29), it is important to note that the denominator includes 1 − τ_on(t) rather than the time constant of the online forgetting window, τ, and τ_on(t) represents a proportion of time rather than an actual time, due to our normalised low-pass filter (see equations (21)-(25)) and the use of a discrete time step, Δt.
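Putting equations (13)-(29) together, one FL time step takes the current log-odds value and input spike vector and returns the four parameter estimates. The following Python/NumPy sketch is illustrative; the FLState container, the function name, and the small clamping constants (which echo the stability bounds discussed later in this section) are our own choices:

```python
from collections import deque
import numpy as np

class FLState:
    """Running quantities for the Fast Learning update (illustrative container)."""
    def __init__(self, n_synapses, window_steps):
        self.p_window = deque(maxlen=window_steps)  # recent P(x=1|s) values, eqs (17)-(18)
        self.x_prev = 0                             # previous auxiliary state estimate
        self.tau_on = 0.0                           # eq (21)
        self.N_onoff = 0.0                          # eq (22)
        self.N_offon = 0.0                          # eq (23)
        self.N_on = np.zeros(n_synapses)            # eq (24)
        self.N = np.zeros(n_synapses)               # eq (25)

def fl_step(L_t, s_t, st, eta, dt, theta_U=0.75, theta_D=0.25):
    """One FL update: dynamic-threshold state estimate (eqs (13)-(20)),
    moving-average statistics (eqs (21)-(25)), parameter estimates
    (eqs (26)-(29)). Returns (r_on, r_off, q_on, q_off) estimates."""
    p = 1.0 / (1.0 + np.exp(-L_t))              # eq (13)
    st.p_window.append(p)
    M, m = max(st.p_window), min(st.p_window)   # eqs (17)-(18)
    U = theta_U * (M - m) + m                   # eq (15)
    D = theta_D * (M - m) + m                   # eq (16)
    if p > U:                                   # eq (14)
        x = 1
    elif p < D:
        x = 0
    else:
        x = st.x_prev
    T_onoff = 1.0 if (st.x_prev == 1 and x == 0) else 0.0   # eq (19)
    T_offon = 1.0 if (st.x_prev == 0 and x == 1) else 0.0   # eq (20)
    st.tau_on = eta * x + (1 - eta) * st.tau_on             # eq (21)
    st.N_onoff = eta * T_onoff + (1 - eta) * st.N_onoff     # eq (22)
    st.N_offon = eta * T_offon + (1 - eta) * st.N_offon     # eq (23)
    st.N_on = eta * s_t * x + (1 - eta) * st.N_on           # eq (24)
    st.N = eta * s_t + (1 - eta) * st.N                     # eq (25)
    st.x_prev = x
    tau_on = max(st.tau_on, 1e-9)        # keep tau_on away from 0 (illustrative bound)
    tau_off = max(1.0 - st.tau_on, 1e-9)
    r_on = st.N_offon / (tau_off * dt)   # eq (26)
    r_off = st.N_onoff / (tau_on * dt)   # eq (27)
    q_on = st.N_on / (tau_on * dt)       # eq (28)
    q_off = (st.N - st.N_on) / (tau_off * dt)   # eq (29)
    return r_on, r_off, q_on, q_off
```

In a full simulation the returned estimates would be fed back into the log-odds update of equation (3) on the next time step, after applying the lower bounds described below.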

The FL algorithm runs online by updating L_t using equation (3) and the previous parameter estimates. Then L_t is used in equation (13) to determine the auxiliary state estimate, x̃_t, in equation (14), which is needed to estimate the intermediate statistics in equations (21)-(25) via equations (15)-(20). From these statistics the current parameter estimates can be determined using equations (26)-(29). The current parameter estimates are then used to determine L_t in the next time step.

The FL algorithm is initialised by setting the intermediate statistics in equations (21)-(25) to zero and, as with the ML-EM algorithm, η = 10^{−5}. Moreover, the intermediate statistics were updated for the first 10^5 time steps without applying equations (26)-(29). In all simulations, the time step was set to Δt = 0.1 ms.

To ensure numerical stability of the FL algorithm, it is important to prevent the observation rate estimates q̂_on^i and q̂_off^i from being very close to zero because this can cause the synaptic weights w_i to be undefined, or to approach very large positive or negative numbers (as can be seen by inspecting equation (6)). Moreover, during simulation, it is necessary to provide a small lower bound on the transition rate estimates r̂_on and r̂_off in order to prevent either parameter estimate becoming too small and causing the log-odds ratio to grow very large in either the positive or negative direction, as the resting potential of the BSN corresponds to log(r̂_on / r̂_off). This is especially important when a BSN receives only a small number of input spikes, which makes it harder to detect

off→on and on→off transitions. In the simulations presented, the estimated transition and observation rates were forced to be above 0.1 s⁻¹ and 10⁻³ s⁻¹, respectively. For similar reasons, the intermediate statistic, τ_on(t), was prevented from being zero by adding a small number to this statistic.

The FL algorithm emulates the ML-EM algorithm in that it performs iterative online estimation of the state and the parameters, although ML-EM only provides a partial state estimate. In addition, there is no proven guarantee that the FL algorithm will converge on a fixed set of parameters. In contrast, and as mentioned above, the ML-EM algorithm presented here converges to the same estimates as batch EM under certain conditions, although a general proof of convergence has not been published. Nevertheless, the computational simplicity of the FL algorithm means that it will be more computationally efficient than the ML-EM algorithm.

2.4 Computational comparison of the FL and ML-EM algorithms

Here we explore online learning in a simple network with two BSNs, where Neuron 1 receives N = 20 Poisson process inputs and Neuron 2 receives the output of Neuron 1 as its only input (see Figure 1C). Many simulations of the FL algorithm were performed over a large set of true parameter values to give evidence that the FL algorithm converges reliably in the absence of convergence proofs. The performances of the FL and ML-EM algorithms are compared.

The online learning problem that is analysed first involves simulation of the implicit space HMM of Neuron 1. The simulated hidden state time series (which depends on r_on and r_off) reflects the presence or absence of a feature out in the world that the BSN response encodes. From the simulated hidden states, the simulated observations are produced and fed as input to Neuron 1. When the hidden state is 1, the observations are simulated as a homogeneous Poisson process with rate q_on^i; when the hidden state is 0, the observations are simulated as a homogeneous Poisson process with rate q_off^i. Learning in Neuron 1 then involves estimation of the parameters r_on, r_off, q_on^i, and q_off^i.

The output of a BSN is approximately Poisson (Denève, 2008a) and therefore the output of Neuron 1 can be used as the only input to Neuron 2. Note that the hidden state, x_t, is the same for Neuron 1 and Neuron 2 because Neuron 2 only receives input from Neuron 1 and therefore will only learn the same state transition rates coded in the output of Neuron 1. This is synonymous with a hierarchical causal generative HMM model with two states where the top state causes the bottom state (i.e. it follows the top state) and the bottom state produces the observations (Denève, 2008a). Neuron 2 is considered here to highlight some features of the FL algorithm in simple chain networks.
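The generative side of this learning problem is straightforward to simulate: a two-state Markov chain (equation (1)) gates N independent Poisson spike trains (equation (2)). A minimal Python/NumPy sketch (the function name and packaging are ours):

```python
import numpy as np

def simulate_inputs(r_on, r_off, q_on, q_off, dt, n_steps, seed=0):
    """Simulate the implicit-space HMM of Neuron 1: a 2-state Markov chain
    (equation (1)) whose state gates N independent Poisson spike trains
    (equation (2)). q_on and q_off are length-N arrays of rates in s^-1."""
    rng = np.random.default_rng(seed)
    q_on, q_off = np.asarray(q_on), np.asarray(q_off)
    x = np.zeros(n_steps, dtype=int)
    s = np.zeros((n_steps, len(q_on)), dtype=int)
    for t in range(n_steps):
        if t > 0:
            if x[t-1] == 0:                       # off -> on with prob r_on*dt
                x[t] = int(rng.random() < r_on * dt)
            else:                                 # on -> off with prob r_off*dt
                x[t] = int(rng.random() >= r_off * dt)
        rate = q_on if x[t] == 1 else q_off       # state-gated Poisson rates
        s[t] = rng.random(len(q_on)) < rate * dt  # at most one spike per time step
    return x, s
```

Feeding the resulting spike array s to Neuron 1 and then the output of Neuron 1 to Neuron 2 reproduces the two-neuron chain used in the comparisons below.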

2.5 The FL algorithm applied to three-layer networks

The FL algorithm is also applied to three-layer networks to explore parameter estimation and hidden state decoding accuracy of the FL algorithm across layers in a network. Two three-layer networks are considered, one small, one large. The small three-layer network has 4, 2, and 1 neurons in the 1st, 2nd, and 3rd layers, respectively (see Figure 1D), and each neuron in layer 1 receives 20 inputs. The large three-layer network is similar to the small network; however, it involves a doubling of the synaptic inputs to the higher layers. This second network has 16, 4, and 1 neurons in the 1st, 2nd, and 3rd layers, respectively. Learning is considered for a hierarchical causal generative HMM model where the top state corresponding to layer 3 causes the states corresponding to layer 2, which in turn cause the states corresponding to layer 1, and these bottom states produce the observations. BSN theory has been derived within the context of hierarchical causal models (Denève, 2008a), therefore we restrict our study to these models.

3 Results

3.1 An example simulation using FL

For the two neuron network, Figure 2 shows an example simulation of Neuron 1 and Neuron 2 while running the FL algorithm over 10^6 time steps. The time series that are shown correspond to the last 10^5 time steps of the simulation, when the parameter estimates have stabilised. For both neurons, it can be seen that the log-odds ratio of the hidden state, L_t, and the output spike sequence, O_t, show strong

correlation with the hidden state, x_t. Moreover, there is less fluctuation in the log-odds ratio of Neuron 2 than there is for Neuron 1 because Neuron 1 receives many more input spikes. Given this greater number of inputs, the output spikes of Neuron 1 also provide a better representation of the hidden state.

3.2 FL estimation accuracy depends on various factors

In addition to the BSN's free parameters, g_o and N (free in the sense that they are selected prior to running any algorithm), the FL algorithm introduces three free parameters to select before running the algorithm: the factors θ_U and θ_D, and the window length, T, used to define the dynamic thresholds. Based on simulations, a value of T = 0.5 s was empirically found to adequately capture the range of state transition rates considered here, and therefore we focus our analysis on the threshold factors θ_U and θ_D, the spiking threshold, g_o, and the range of true parameter values instead.

Based on equations (1) and (2), and given that a time step of 0.1 ms is used, the transition and observation rates should be limited to the range of 0-10^4 s⁻¹ (so that the per-time-step transition and observation probabilities remain below 1). For the two neuron network, in simulations with FL it was found that if the transition and observation rates covered this full range and θ_U = 0.75 and θ_D = 0.25, then the transition rates are significantly underestimated while observation rate estimates are reasonably accurate (Neuron 1 median estimation error values: r_on: -97.6%, r_off: -97.7%, q_on: 0.01%, q_off: -0.04%). Nevertheless, if we restrict ourselves to a

physiologically plausible range of transition rates (1-115 s⁻¹; these rates are adequate for capturing ecologically relevant changes of stimuli in the outside world) and observation rates (up to 1000 s⁻¹; these rates span the allowed rates of neuronal spiking assuming a 1 ms refractory period), we can achieve excellent average estimation performance, as the remaining figures will show.

For the two neuron network, Figures 3A-3I show the sensitivity analysis of the FL algorithm for the threshold factors θ_U and θ_D for 100 simulations of 10^6 time steps with different true parameter combinations for each factor pair (i.e. (θ_U, θ_D)), with the true parameters lying in the physiologically plausible range. For each simulation the initial parameter estimates were set to five times the values of the true parameters. It can be seen that the median estimation errors in r_on and r_off for Neuron 1 (top row of Figures 3A and 3B) are quite varied, showing best performance away from low θ_U and high θ_D values, whereas the median error for q_on and q_off for Neuron 1 (top row of Figures 3C and 3D) is minimally affected by the threshold factors, with errors under about 1% for all factor pairs. The median errors in r_on and r_off for Neuron 2 (top row of Figures 3E and 3F) show a similar sensitivity pattern to that seen for Neuron 1, although errors are larger for Neuron 2. These greater errors result from Neuron 2 receiving fewer input spikes and, therefore, less information about state transitions. Note that sensitivity/error maps are not provided for q_on and q_off for Neuron 2 because there are no true parameter values for this neuron to determine

the error. The sign of the error/bias as a function of threshold factors θ_U and θ_D is captured in the bottom row of Figures 3A-3F. Generally it can be seen that the errors are large for r_on and r_off for both Neuron 1 and Neuron 2 and the bias tends to be positive (i.e. overestimation) when θ_U and θ_D are both closer to 0.5. For q_on and q_off of Neuron 1 it can be seen that the bias is generally negative (i.e. underestimation) and positive, respectively.

For the same case as Figures 3A-3F, Figure 3G demonstrates the median RMS error in the probability that the hidden state is 1, P_rms, as a function of the threshold factors for Neuron 1. For a given 10^6 time step simulation, P_rms was computed over the last 10^5 time steps as the difference between the simulated P(x_t = 1 | s_{0:t}) time series using the estimated parameters and the true parameters:

P_rms = sqrt( (1/N_rms) Σ_{j=0}^{N_rms−1} [ P(x_{t_j} = 1 | s_{0:t_j}; θ̂(t_j)) − P(x_{t_j} = 1 | s_{0:t_j}; θ) ]^2 ),   (30)

where t_f is the end time of the simulation, N_rms = 10^5 is the number of time steps over which the RMS error is calculated, and t_j = t_f − j Δt. Moreover, Figures 3H and 3I show the percent Hamming error between the true and estimated hidden states as a function of the threshold parameters for Neurons 1 and 2, respectively. Percent Hamming error was computed in a similar way to P_rms and was defined as follows:

H = (100/N_rms) Σ_{j=0}^{N_rms−1} | x̃_{t_j} − x_{t_j} |,   (31)

where t_j = t_f − j Δt also. In Figures 3G-3I it can be seen that both P_rms and the percent Hamming error are lowest for factors θ_U and θ_D close to 0.75 and 0.25, respectively.

To assess how the true parameter value ranges affect parameter estimation, the maximum values for the range of the transition rates, r_max, and the observation rates, q_max, are varied within a physiologically plausible range in Figures 3J and 3K. In Figure 3J it can be seen that if q_max = 1000 s⁻¹ and r_max is varied, then the transition rates are overestimated or underestimated if r_max is below or above 115 s⁻¹, respectively. In Figure 3K it can be seen that if r_max = 115 s⁻¹ and q_max is varied, then the transition rates are underestimated or overestimated if q_max is below or above 1000 s⁻¹, respectively. Generally it was found that, to minimise transition rate estimation error bias, one needs to have transition rates that are sufficiently lower than the observation rates, but not too low. Figures 3J and 3K also demonstrate that the median estimation error of the observation rates is always quite small and is independent of the true parameter range.

To assess the influence of the spiking threshold, g_o, in one layer on the estimation performance in the next layer, the spiking threshold for Neuron 1 is varied for true parameter values corresponding to our physiologically plausible range and the estimation performance is plotted in Figure 3L. It can be seen that small and large

values of Neuron 1's g_o lead to overestimation and underestimation of the transition rates for Neuron 2, respectively, with an optimal value of g_o in between these extremes. Note that observation rate errors are not plotted because no true observation rates are defined for Neuron 2. A low spiking threshold value means more input spikes to Neuron 2 and leads to too many false state transition detections, while a high spiking threshold value leads to fewer input spikes and fewer true state transition detections. In Figures 3J-3K it can generally be seen that the on and off observation rate estimation errors closely overlap each other, as do the on and off transition rate estimation errors in Figures 3J-3L.

3.3 A comparison between FL and ML-EM for the two neuron network

For the two neuron network, Figure 4 shows box whisker plots of the parameter estimation errors for the FL and ML-EM algorithms in cases where the initial parameter estimates were set to the true parameter values (0% initial parameter perturbation case) or the initial parameter estimates were set to five times the values of the true parameters (400% initial parameter perturbation case). Moreover, θ_U = 0.75 and θ_D = 0.25 and the true parameter values lie in our physiologically plausible range. Each case was simulated 1000 times for 10^6 time steps with different true parameter combinations each time. It can be seen that ML-EM performs the best for the 0% perturbation case, whereas for the 400% perturbation case, FL performs the best for r_on and r_off, and is comparable to ML-EM for q_on and q_off. The FL estimation errors do not change much between the 0% and 400% perturbation cases.
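For reference, the two performance measures of equations (30) and (31) reduce to a few lines of code (Python/NumPy sketch; the function names are ours):

```python
import numpy as np

def p_rms(p_est, p_true):
    """RMS error between the P(x=1|s) time series computed with the
    estimated and with the true parameters (equation (30))."""
    p_est, p_true = np.asarray(p_est, dtype=float), np.asarray(p_true, dtype=float)
    return float(np.sqrt(np.mean((p_est - p_true) ** 2)))

def percent_hamming(x_est, x_true):
    """Percent Hamming error between the estimated and true hidden
    state sequences (equation (31))."""
    x_est, x_true = np.asarray(x_est, dtype=float), np.asarray(x_true, dtype=float)
    return float(100.0 * np.mean(np.abs(x_est - x_true)))
```

Both are evaluated here over the last 10^5 time steps of each simulation, after the parameter estimates have stabilised.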

We focus here on a fixed 400% initial parameter estimate perturbation because we are mostly interested in robustness and stability when the true parameters are varied over a large range. The behaviour of FL for 0% and 400% initial parameter estimate perturbations is similar to that shown in Figure 3.

Figures 5A and 5B show that, in terms of the percent Hamming error and P_rms, respectively, FL performs better than ML-EM for the 400% perturbation case, and increasing the number of time steps by a factor of 10 further improves the convergence of FL for the 400% perturbation case. ML-EM was only simulated for 10^6 time steps because the run time was too long. Moreover, for ML-EM the P_rms is most meaningful, since the percent Hamming error values for ML-EM should only be considered to be approximate. This is because ML-EM only gives a partial state estimate, i.e. the probability P(x_t = 1 | s_{0:t}), which needs to be thresholded (threshold set to 0.5) in order to obtain a non-optimal state estimate used to compute the percent Hamming error.

Figure 5C captures the median run times for the FL and ML-EM algorithms applied to Neurons 1 and 2 simulated for 10^6 time steps over 1000 different true parameter value combinations in the 0% initial parameter estimate perturbation case. For Neuron 1, it can be seen that FL is approximately 16.5 times more computationally efficient than ML-EM for a 20 synapse BSN. For Neuron 2, it can be seen that FL is approximately 6.1 times more computationally efficient than ML-EM for a 1 synapse BSN. These results indicate that FL becomes more computationally

efficient than ML-EM as the number of synapses, and also the number of input spikes to a BSN, increases. Note also that, for Neuron 1, simulating FL for 10^7 time steps still results in run times of less than half the time required for ML-EM to run 10^6 time steps.

The real time (seconds of synaptic input) convergence properties of FL and ML-EM are summarised in Figures 5D and 5E. Figure 5D shows an example of parameter estimation for Neuron 1 when the initial parameter estimates are 400% above the true values. As mentioned in the methods, the parameter estimates are held constant for the first 10^5 and 10^2 time steps for FL and ML-EM, respectively, in order to let the intermediate statistics stabilise. It can be seen that FL estimates converge close to the true values for all parameters (note that only r_on, r_off, q_on^1 and q_off^1 are shown), while the ML-EM estimates converge close to the true values for all the observation rates (note only q_on^1 and q_off^1 are shown), but not for the transition rates, r_on and r_off. These observations for FL and ML-EM were generally true, as can be observed in Figure 4.

Figure 5E shows the real time convergence distributions for Neuron 1 for the 100% and 400% initial parameter estimate perturbation cases under the same simulation conditions as Figure 4. For the FL and ML-EM algorithms, convergence was deemed to have occurred if over 7.5 seconds of real time all the parameter estimates remained within a deviation of 3%. It can be seen that the real times to convergence for ML-EM are shorter than for FL. Note, the benefits of the faster convergence of ML-EM than FL for the 400% initial

perturbation case are negated by the poorer performance of ML-EM as seen in Figure 5B.

3.4 Three-layer network simulations

To look at FL operating in larger networks and to demonstrate the effects of FL estimation accuracy in one network layer on subsequent network layers, Figure 6 shows the estimation accuracy across the layers of the small and large three-layer networks described in the methods. Similar simulation settings as in Figures 4 and 5 are applied, with the exception that the initial transition and observation rate estimates are selected uniformly within the physiologically plausible parameter range. Figures 6A and 6B illustrate the distributions of the percent estimation error for r_on and r_off across the layers of the small three-layer network, respectively. By looking at the median values, it can be seen there is a slight increase in underestimation of the transition rates across layers. Figure 6C shows the distributions of the percent estimation error for the observation rates for the neurons in layer 1 of the small network. As in the other figures, the median percent estimation error for the observation rates is small. Figure 6D shows the distributions of the percent Hamming error across the layers of the small network. It can be seen there is a slight increase in the median percent Hamming error across layers; however, this decoding error remains small at the final layer of the network.

Similar results are obtained when each unit in layers 2 and 3 pools over twice as many synaptic inputs; however, the increase in pooling leads to better decoding

performance in layers 2 and 3. This can be seen by comparing the Hamming error in Figures 6D (small network) and 6E (large network with twice as many inputs in higher layers). The increased accuracy in the higher layers in the large network occurs because increasing the input to an adequate level provides a more informative level of evidence about the state changes.

An important property in layered networks is to recode relevant information in the inputs in a sparse manner with fewer output spikes at the top layers. This can be achieved with stronger synaptic weights at the higher layers. Here the informativeness of a given network layer is quantified by the distribution and median value of the final absolute value of the synaptic weights, |log( q̂_on^i(t_f) / q̂_off^i(t_f) )|, in the corresponding layer. Figures 6F (small network) and 6G (large network with twice as many inputs in higher layers) demonstrate that the informativeness of each network layer increases across the layers. Moreover, comparing Figures 6F and 6G indicates that doubling the number of inputs to the higher layers leads to more informative higher layers. This reflects the improved decoding performance of the large network with twice as many inputs in the higher layers.

To quantify the sparsity of spike coding, output spiking rates are estimated over the last 10^5 time steps of each simulation, when learning has effectively stabilised. For the small three-layer network, although the median output spiking rate per neuron in each layer increases from layer 1 to layers 2 and 3 by 27.6% and 55.8%, respectively, the median total output spiking rate of each layer decreases from layer 1

to layers 2 and 3 by 38.5% and 64.3%, respectively. This indicates that fewer spikes are needed at the higher layers to encode the presence or absence of the input stimuli. Qualitatively similar changes were observed for the large three-layer network. The cost of spiking in higher layers can be reduced by increasing the spiking threshold g_o at each layer; however, for the case of FL-based learning, this needs to be traded off against parameter estimation accuracy, as indicated in Figure 3L.

4 Discussion

We have demonstrated, for a physiologically plausible range of parameters, that FL is more computationally efficient than the benchmark ML-EM algorithm presented, and that FL is also more robust when there is high uncertainty about the initial parameter estimates. The latter occurs because the ML-EM algorithm seeks out local maxima of the ML function and therefore typically will find local maxima rather than the global maximum when there is a large mismatch between the initial parameter estimates and the true parameter values. FL, on the other hand, is a heuristic approach that seeks to count the number of state transitions and input spikes, determine the time spent in each state and then estimate the BSN parameters. The FL estimation is robust to high uncertainty about the initial parameter estimates because the dynamic threshold enables estimation of the state even when the transition rate estimates are inaccurate (i.e. the resting potential of the log-odds ratio of the hidden state is offset from the true value) or when the synaptic rate estimates are inaccurate (i.e. synaptic weight magnitudes are not the appropriate size or sign). This is because the dynamic

threshold fits to the short-term range of the log-odds ratio of the hidden state and, for the most part, can capture important log-odds ratio changes induced by input spike events regardless of their size or sign, or of the offset of the resting membrane potential/log-odds ratio. It was observed that the computational run times for a fixed number of time steps are much longer for ML-EM than for FL, while the real (i.e. biological) time to convergence was shorter for ML-EM. This faster real-time convergence of ML-EM, which can also be thought of as a more efficient use of the available data, may simply be due to the smoother convergence trajectory of ML-EM and the greater sensitivity of FL to fluctuations in the input statistics. It is also worth noting that these real convergence times are only approximate and primarily capture the times over which major estimate changes are taking place. Both FL and ML-EM can potentially give more accurate estimates if the simulations are run for longer. Figures 5A and 5B show that FL estimates become more accurate for longer simulations. We did not consider longer simulations for ML-EM, given that it becomes less tractable to complete a large number of simulations for different true parameter value combinations. The reason for the shorter computational run times for a fixed number of time steps of FL compared to ML-EM stems largely from the number of equations that need to be calculated by each algorithm at each time step. As the number of input synapses, N, increases, the number of equations grows more quickly for ML-EM than for FL. For FL and ML-EM, this number is 4N + 13 and 28N + 6, respectively. Although ML-EM appears to converge faster than FL (i.e. requires fewer time steps),

the computational cost of ML-EM means that ML-EM still takes a much longer time to simulate to convergence than FL. Both the FL and ML-EM algorithms were implemented as sequential programs. For a single neuron, it would be possible to improve the computational efficiency of ML-EM relative to FL if multi-core or GPU implementations were used. In particular, the computations for the N synapses can be parallelised for both algorithms. Assuming full parallelisation of the synapse computations/equations, this would effectively reduce the number of equations updated at each time step to 4 + 13 = 17 and 28 + 6 = 34 for FL and ML-EM, respectively. Although this would improve performance for a single neuron, parallelisation still consumes computational resources and will therefore limit the size of the network that can be studied. Where only small networks are considered and computational resources are not an issue, FL would still be useful given its robustness to uncertainty in the initial parameter estimates. The computational speed and robustness of FL indicate that it will be useful in studying hierarchical feedforward networks of BSNs within tractable simulation times. The study of larger networks is important for more complex inference problems such as feature and object recognition. Moreover, the simplicity of the FL algorithm makes it easier to take advantage of the spike coding efficiency of Bayesian spiking neurons in energy-efficient neuromorphic VLSI circuits than if online EM were to be implemented. The FL algorithm that we present is not necessarily intended to be biologically motivated, although its simplicity may make it computationally and mechanistically possible for real neurons to implement. Rather,

the FL algorithm, which specifically applies to BSNs, represents a balance between simplicity and efficiency, and provides us with a tool to study BSNs in greater detail in order to further understand their biological relevance and plausibility. The study of larger networks is beyond the scope of this letter, but here we have considered the first step in this direction by simulating learning in a two-neuron network and in three-layer networks. Considering the two-neuron network, given the one input and the lower number of input spikes to Neuron 2 compared with Neuron 1, it was difficult for Neuron 2 to accurately capture state transitions without the incorporation of the dynamic thresholds to estimate the auxiliary state. The dynamic thresholds thus enhance the stability and accuracy of the FL algorithm. Given that learning in BSNs is local to the neuron, and that the FL algorithm appears stable over a large number of true parameter combinations, it is expected that the FL algorithm will be stable in larger networks; this is supported by the simulations with the three-layer networks. It is important to note that, due to the symmetry of the HMM, although the FL algorithm can estimate the on and off parameters reasonably accurately, it cannot determine whether on is on or on is off. For example, the estimate r̂_on may be close to the true value of r_off, and vice versa. Thus, Figures 3-6 have been produced by checking whether such parameter flips have occurred for both the FL and ML-EM algorithms. Although this may seem undesirable, the labelling of on and off is arbitrary and can be reversed without affecting estimation in the network.
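Because the on/off labelling is arbitrary, estimates can be scored under both labellings and the better-matching assignment kept before computing errors. A minimal sketch of such a flip check in Python; the helper `resolve_label_flip` and its dictionary layout are our own illustrative assumptions, not the authors' code:

```python
def resolve_label_flip(est, true):
    """Return estimates relabelled so that (on, off) matches the true
    parameters as closely as possible, exploiting the HMM's symmetry.

    est, true: dicts with keys 'r_on' and 'r_off' (transition rates).
    """
    # Total relative error under the original and the flipped labelling.
    direct = (abs(est['r_on'] - true['r_on']) / true['r_on']
              + abs(est['r_off'] - true['r_off']) / true['r_off'])
    flipped = (abs(est['r_off'] - true['r_on']) / true['r_on']
               + abs(est['r_on'] - true['r_off']) / true['r_off'])
    if flipped < direct:
        # A label flip occurred: swap the estimates before scoring.
        return {'r_on': est['r_off'], 'r_off': est['r_on']}
    return dict(est)

# r_on estimated near the true r_off and vice versa -> flip detected.
est = {'r_on': 10.2, 'r_off': 8.3}
true = {'r_on': 8.0, 'r_off': 10.5}
print(resolve_label_flip(est, true))  # {'r_on': 8.3, 'r_off': 10.2}
```

The same comparison can be applied to the observation rates; the labelling that minimises the combined relative error is then used consistently for all parameters of a neuron.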

It is also worth noting that, given that the BSN only spikes when the hidden state is on (or off, depending on how the parameters are defined or evolve), the learned observation/synaptic rates in the receiving neuron evolve such that the observation rate for the on state (i.e. q_on) takes a high value while the observation rate for the off state (i.e. q_off) takes a value close to zero. This applies to both the FL and ML-EM algorithms, and was observed for BSNs in feedforward networks that are not in the first layer of the network (not shown). To ensure this property did not affect the numerical stability of FL, very small observation rates were prevented (see Methods). The ML-EM algorithm analysed here (Denève, 2008b; Mongillo & Denève, 2008) is just one possibility; it may be possible to find faster online EM algorithms to estimate HMM parameters (Cappé, 2011), and many variants of online EM algorithms for HMMs exist (Krishnamurthy & Moore, 1993; Elliott et al., 1995; Lindgren & Holst, 1995; Le Gland & Mevel, 1997; Rydén, 1997; Stiller & Radons, 1999; Andrieu & Doucet, 2003; Tadić, 2010; Cappé, 2011; Le Corff & Fort, 2012). However, it is expected that FL will still be more computationally efficient, because many of these methods aim for exactness of estimation whereas FL focuses on simplicity of the algorithm. One possible shortcoming of the application of FL in hierarchical networks could be an accumulation of parameter estimation errors across the layers of neurons. However, as is shown in Figure 6, the hidden state decoding performance can be

reasonably stable, with median percent Hamming errors below 7% across the layers in the three-layer networks. Based on the results in Figure 3L, it may be possible in future versions of FL to control any under- or over-estimation of parameters in subsequent layers by adapting the spiking threshold in each layer. Perhaps the main shortcoming of FL is the dependence of estimation accuracy/bias on the range of the true parameters of the generative model. This is a reflection of the approximate nature of the FL algorithm. ML-EM, on the other hand, works over the full theoretically allowed range of true parameters. Future work will be needed to find an FL variant that improves on this feature. Nevertheless, for a given problem, provided one knows what parameter range one is interested in, one can test FL for its estimation accuracy/bias on that parameter range. As is shown for a physiologically plausible parameter range, average FL-based estimation accuracy can be very good for an appropriately defined parameter range. Moreover, if FL is not very accurate over a certain parameter range, this may not be a significant problem because in many cases approximate solutions may be adequate and there will be high uncertainty about the true values of parameters, thereby limiting the accuracy of EM approaches, which can become trapped in local maxima. FL, on the other hand, gives much more consistent parameter estimation regardless of the degree of uncertainty about the true parameter values. Moreover, the problems to which BSN networks can be applied are stochastic in nature and thus small errors can be better tolerated. An additional possibility, which reflects the complementarity of the FL and ML-EM algorithms, would be to first run the FL algorithm to get reasonably small

parameter estimation error and position the parameter estimates close to the optimum, and then apply ML-EM, which is more exact and has a better chance of finding the optimum if the initial parameter estimates are close to the true parameter values. These ideas should be the subject of future study, with a major aim of seeking to understand how neurons can perform probabilistic inference in sensory (Lochmann et al., 2012) and cognitive networks (Boerlin & Denève, 2011), and how such networks can efficiently learn to extract and/or integrate complex information about the external and internal environment.

Acknowledgements

This work was supported by the Australian Research Council Discovery Project grant DP and the University of Western Sydney.

References

Andrieu, C., & Doucet, A. (2003). Online expectation-maximization type algorithms for parameter estimation in general state space models. In Int. Conf. Acoustics, Speech and Signal Processing (Vol. 6). Piscataway, NJ: IEEE Press.

Barber, M., Clark, J., & Anderson, C. (2003). Neural representation of probabilistic information. Neural Comput., 15(8).

Boerlin, M., & Denève, S. (2011). Spike-based population coding and working memory. PLoS Comput. Biol., 7(2).

Cappé, O. (2011). Online EM algorithm for hidden Markov models. J. Comput. Graph. Statist., 20(3).

Denève, S. (2008a). Bayesian spiking neurons I: Inference. Neural Comput., 20.

Denève, S. (2008b). Bayesian spiking neurons II: Learning. Neural Comput., 20.

Elliott, R. L., Aggoun, L., & Moore, J. B. (1995). Hidden Markov Models: Estimation and Control. New York, NY: Springer.

Knill, D., & Richards, W. (1996). Perception as Bayesian Inference. Cambridge: Cambridge University Press.

Kording, K., & Wolpert, D. (2004). Bayesian integration in sensorimotor learning. Nature, 427.

Krishnamurthy, V., & Moore, J. B. (1993). On-line estimation of hidden Markov model parameters based on the Kullback-Leibler information measure. IEEE Trans. Signal Process., SP-41.

Kuhlmann, L., Hauser-Raspe, M., Manton, J., Grayden, D. B., Tapson, J., & van Schaik, A. (2012). Online learning in Bayesian spiking neurons. In Proceedings of IJCNN 2012. In press.

Le Gland, F., & Mevel, L. (1997). Recursive estimation in HMMs. In Proc. IEEE Conf. Decision and Control.

Lindgren, G., & Holst, H. (1995). Recursive estimation of parameters in Markov-modulated Poisson processes. IEEE Transactions on Communications, 43(11).

Le Corff, S., & Fort, G. (2012). Online expectation maximization based algorithms for inference in hidden Markov models. arXiv preprint (math.ST), v2.

Lochmann, T., & Denève, S. (2008). Information transmission with spiking Bayesian neurons. New J. Phys., 10 (19pp).

Lochmann, T., Ernst, U. A., & Denève, S. (2012). Perceptual inference predicts contextual modulations of sensory responses. J. Neurosci., 32(12).

Mongillo, M., & Denève, S. (2008). Online expectation-maximization in hidden Markov models. Neural Comput., 20.

Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77.

Rydén, T. (1997). On recursive estimation for hidden Markov models. Stochastic Processes and Their Applications, 66(1).

Stiller, J. C., & Radons, G. (1999). Online estimation of hidden Markov models. IEEE Signal Processing Letters, 6(8).

Tadić, V. B. (2010). Analyticity, convergence, and convergence rate of recursive maximum-likelihood estimation in hidden Markov models. IEEE Trans. Inf. Theory, 56.

Zemel, R., Dayan, P., & Pouget, A. (1998). Probabilistic interpretation of population codes. Neural Comput., 10(2).

Figure Captions:

Figure 1: (A) The implicit space of a BSN involves a 2-state HMM (i.e. x_t = {0,1}) that produces N independent outputs with 2 possible observations, where the outputs are indexed by i (i.e. s_i,t = {0,1}). The state transition rates are represented by r_on and r_off, and the observation rates are represented by q_on and q_off. (B) The explicit space of the BSN involves N synapses with weights w_i; the input spike sequences are assumed to be the outputs of the HMM (i.e. s_i,t = {0,1}), and the output spike sequence is given by O_t. (C) The simple two-neuron network studied in this paper. Neuron 1 receives N = 20 inputs derived from independent Poisson processes and provides the only input to Neuron 2. (D) The small three-layer network studied in this paper. Each layer 1 neuron receives N = 20 inputs.

Figure 2: Example BSN simulations for (A) Neuron 1 and (B) Neuron 2 in the two-neuron network while running the FL algorithm. Each sub-figure shows the last 10^5 time steps of a 10^6 time step simulation with r_on = 8 s^-1, r_off = 10.5 s^-1, and q_on and q_off uniformly selected in the range of spikes/s for Neuron 1. Initial parameter estimates were set to the true parameter values for Neuron 1, while for Neuron 2 the initial transition rates were set to the same values as for Neuron 1. The initial observation rates for the single input synapse to Neuron 2 were set equal to the values of the first input synapse to Neuron 1. In each neuron-specific sub-figure, the top plot shows the log-odds ratio L_t, the prediction G_t, and the state transition thresholds, U_t and D_t, mapped onto the log-odds ratio axis, with g_o = 2. The middle plot shows the output spike sequence, O_t, and the lower plot shows the HMM state, x_t, unknown to the BSN.

Figure 3: Estimation accuracy as a function of the dynamic threshold and the true parameter range for the FL algorithm for the two-neuron network. The median percent parameter estimation error (top row) and the corresponding sign of the error/bias (bottom row) plotted as functions of the dynamic threshold factors θ_U and θ_D for Neuron 1 parameters (A) r_on, (B) r_off, (C) q_on, (D) q_off, and Neuron 2 parameters (E) r_on and (F) r_off. The Neuron 1 median RMS error of (G) P(x_t = 1 | s_0:t), P_rms, and (H) median percent Hamming error, H, and (I) the Neuron 2 median percent Hamming error, H, as functions of θ_U and θ_D. For each (θ_D, θ_U) pair, the median error was calculated over 100 simulations lasting 10^6 time steps for the 400% initial parameter perturbation case and, for Neurons 1 and 2, r_on and r_off are uniformly selected in the range of s^-1, and q_on and q_off are uniformly selected in the range of spikes/s. To improve image clarity, the median state transition rate error was clipped at 150 s^-1. In the bias map legend, white indicates positive bias/overestimation and black indicates negative bias/underestimation for a given θ_U and θ_D pair. Median parameter estimation error of Neuron 1 as a function of the true parameter range when (J) r_max is varied and q_on and q_off are uniformly selected in the range of spikes/s and (K) q_max is varied and r_on and r_off are uniformly selected in the range of s^-1. (L) Parameter estimation error of Neuron 2 as a function of the Neuron 1 threshold, g_o, for the same true parameter range as in (A)-(I). The legend in (J) applies to (K) and (L). For (J)-(L), θ_U = and θ_D = 0.25, and 1000 simulations were performed to obtain each median value.

Figure 4: Comparison of parameter estimation performance between FL and ML-EM for the 0% and 400% initial parameter perturbation cases for the two-neuron network. The Neuron 1 and Neuron 2 distributions of percent parameter estimation error for (A) r_on and (B) r_off. The Neuron 1 distributions of percent parameter estimation error for (C) q_on and (D) q_off. For each FL and EM case, data were collected over 1000 simulations lasting 10^6 time steps where, for Neurons 1 and 2, r_on and r_off are uniformly selected in the range of s^-1, and q_on and q_off are uniformly selected in the range of spikes/s. To gauge the HMM simulation errors resulting from finite simulation times, the SIM distributions in (A)-(D) show the percent error of the parameters calculated from the hidden state, x_t, known to us but not the BSN, as well as the observations, s_t. In each box-whisker plot, the gray box bounds the 25th and 75th percentiles of the data, the black horizontal line represents the median, and the black dashed whiskers span the extreme points within 1.5 times the interquartile range.

Figure 5: Estimation accuracy, computation time and time to convergence in the two-neuron network. Comparison between FL and ML-EM for (A) % Hamming error distributions for Neuron 1 and Neuron 2, (B) P_rms distributions for Neuron 1 and (C) run-time distributions for Neuron 1 and Neuron 2 for the 0% and 400% initial parameter perturbation cases. FL6 and FL7 refer to FL simulations involving 10^6 and 10^7 time steps, respectively. EM refers to EM simulations involving 10^6 time steps. (D) Example of parameter estimation evolution for Neuron 1 for both FL (solid black) and ML-EM (solid gray) when the estimates begin 400% above the true parameter values. The dashed gray line indicates the true parameter values. (E) Distributions of the real time to convergence for the FL and ML-EM algorithms for Neuron 1 for the 100% and 400% initial parameter perturbation cases. For (E), all simulations lasted 10^6 time steps (i.e. 100 s). For (A)-(E), simulation settings and box-whisker details are the same as for Figure 4.

Figure 6: Three-layer feedforward network simulations using FL. The network layer 1, 2 and 3 distributions of percent parameter estimation error for (A) r_on and (B) r_off for the small network. (C) The layer 1 distributions of percent parameter estimation error for q_on and q_off for the small network. The layer 1, 2 and 3 distributions of percent Hamming error of the hidden state estimate for (D) the small network and (E) the large network with twice as many inputs in the higher layers. The layer 1, 2 and 3 distributions of informativeness for (F) the small network and (G) the large network. Data were collected over 1000 simulations lasting 10^6 time steps where r_on and r_off are uniformly selected in the range of s^-1, and q_on and q_off are uniformly selected in the range of spikes/s. Initial parameter estimates were also selected uniformly in the same ranges.
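The two summary measures used in these captions, the percent Hamming error between the true and decoded hidden state and the informativeness |log(q̂_on/q̂_off)|, can be sketched as follows. This is an illustrative reimplementation under our own naming assumptions, not the authors' code:

```python
import numpy as np

def percent_hamming_error(x_true, x_dec):
    """Percent of time steps at which the decoded binary hidden state
    disagrees with the true HMM state."""
    x_true = np.asarray(x_true)
    x_dec = np.asarray(x_dec)
    return 100.0 * np.mean(x_true != x_dec)

def informativeness(q_on_hat, q_off_hat):
    """Absolute log-odds synaptic weight magnitude per synapse,
    |log(q_on_hat / q_off_hat)|; larger values indicate a more
    informative synapse/layer."""
    return np.abs(np.log(np.asarray(q_on_hat) / np.asarray(q_off_hat)))

# Two disagreements over eight time steps -> 25% Hamming error.
x_true = [0, 0, 1, 1, 1, 0, 1, 0]
x_dec  = [0, 1, 1, 1, 0, 0, 1, 0]
print(percent_hamming_error(x_true, x_dec))  # 25.0

# Median informativeness across a layer's estimated observation rates.
print(np.median(informativeness([40.0, 60.0], [10.0, 20.0])))
```

A per-layer distribution, as in Figures 6F and 6G, would simply collect `informativeness` over all synapses of all neurons in that layer.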

Figure 1

Figure 2

Figure 3

Figure 4

Figure 5

Figure 6


More information

Boosted LMS-based Piecewise Linear Adaptive Filters

Boosted LMS-based Piecewise Linear Adaptive Filters 016 4h European Sgnal Processng Conference EUSIPCO) Boosed LMS-based Pecewse Lnear Adapve Flers Darush Kar and Iman Marvan Deparmen of Elecrcal and Elecroncs Engneerng Blken Unversy, Ankara, Turkey {kar,

More information

SOME NOISELESS CODING THEOREMS OF INACCURACY MEASURE OF ORDER α AND TYPE β

SOME NOISELESS CODING THEOREMS OF INACCURACY MEASURE OF ORDER α AND TYPE β SARAJEVO JOURNAL OF MATHEMATICS Vol.3 (15) (2007), 137 143 SOME NOISELESS CODING THEOREMS OF INACCURACY MEASURE OF ORDER α AND TYPE β M. A. K. BAIG AND RAYEES AHMAD DAR Absrac. In hs paper, we propose

More information

John Geweke a and Gianni Amisano b a Departments of Economics and Statistics, University of Iowa, USA b European Central Bank, Frankfurt, Germany

John Geweke a and Gianni Amisano b a Departments of Economics and Statistics, University of Iowa, USA b European Central Bank, Frankfurt, Germany Herarchcal Markov Normal Mxure models wh Applcaons o Fnancal Asse Reurns Appendx: Proofs of Theorems and Condonal Poseror Dsrbuons John Geweke a and Gann Amsano b a Deparmens of Economcs and Sascs, Unversy

More information

Performance Analysis for a Network having Standby Redundant Unit with Waiting in Repair

Performance Analysis for a Network having Standby Redundant Unit with Waiting in Repair TECHNI Inernaonal Journal of Compung Scence Communcaon Technologes VOL.5 NO. July 22 (ISSN 974-3375 erformance nalyss for a Nework havng Sby edundan Un wh ang n epar Jendra Sngh 2 abns orwal 2 Deparmen

More information

HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD

HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Journal of Appled Mahemacs and Compuaonal Mechancs 3, (), 45-5 HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Sansław Kukla, Urszula Sedlecka Insue of Mahemacs,

More information

5th International Conference on Advanced Design and Manufacturing Engineering (ICADME 2015)

5th International Conference on Advanced Design and Manufacturing Engineering (ICADME 2015) 5h Inernaonal onference on Advanced Desgn and Manufacurng Engneerng (IADME 5 The Falure Rae Expermenal Sudy of Specal N Machne Tool hunshan He, a, *, La Pan,b and Bng Hu 3,c,,3 ollege of Mechancal and

More information

Cubic Bezier Homotopy Function for Solving Exponential Equations

Cubic Bezier Homotopy Function for Solving Exponential Equations Penerb Journal of Advanced Research n Compung and Applcaons ISSN (onlne: 46-97 Vol. 4, No.. Pages -8, 6 omoopy Funcon for Solvng Eponenal Equaons S. S. Raml *,,. Mohamad Nor,a, N. S. Saharzan,b and M.

More information

Math 128b Project. Jude Yuen

Math 128b Project. Jude Yuen Mah 8b Proec Jude Yuen . Inroducon Le { Z } be a sequence of observed ndependen vecor varables. If he elemens of Z have a on normal dsrbuon hen { Z } has a mean vecor Z and a varancecovarance marx z. Geomercally

More information

10. A.C CIRCUITS. Theoretically current grows to maximum value after infinite time. But practically it grows to maximum after 5τ. Decay of current :

10. A.C CIRCUITS. Theoretically current grows to maximum value after infinite time. But practically it grows to maximum after 5τ. Decay of current : . A. IUITS Synopss : GOWTH OF UNT IN IUIT : d. When swch S s closed a =; = d. A me, curren = e 3. The consan / has dmensons of me and s called he nducve me consan ( τ ) of he crcu. 4. = τ; =.63, n one

More information

On computing differential transform of nonlinear non-autonomous functions and its applications

On computing differential transform of nonlinear non-autonomous functions and its applications On compung dfferenal ransform of nonlnear non-auonomous funcons and s applcaons Essam. R. El-Zahar, and Abdelhalm Ebad Deparmen of Mahemacs, Faculy of Scences and Humanes, Prnce Saam Bn Abdulazz Unversy,

More information

A Novel Efficient Stopping Criterion for BICM-ID System

A Novel Efficient Stopping Criterion for BICM-ID System A Novel Effcen Soppng Creron for BICM-ID Sysem Xao Yng, L Janpng Communcaon Unversy of Chna Absrac Ths paper devses a novel effcen soppng creron for b-nerleaved coded modulaon wh erave decodng (BICM-ID)

More information

EEL 6266 Power System Operation and Control. Chapter 5 Unit Commitment

EEL 6266 Power System Operation and Control. Chapter 5 Unit Commitment EEL 6266 Power Sysem Operaon and Conrol Chaper 5 Un Commmen Dynamc programmng chef advanage over enumeraon schemes s he reducon n he dmensonaly of he problem n a src prory order scheme, here are only N

More information

( t) Outline of program: BGC1: Survival and event history analysis Oslo, March-May Recapitulation. The additive regression model

( t) Outline of program: BGC1: Survival and event history analysis Oslo, March-May Recapitulation. The additive regression model BGC1: Survval and even hsory analyss Oslo, March-May 212 Monday May 7h and Tuesday May 8h The addve regresson model Ørnulf Borgan Deparmen of Mahemacs Unversy of Oslo Oulne of program: Recapulaon Counng

More information

Machine Learning Linear Regression

Machine Learning Linear Regression Machne Learnng Lnear Regresson Lesson 3 Lnear Regresson Bascs of Regresson Leas Squares esmaon Polynomal Regresson Bass funcons Regresson model Regularzed Regresson Sascal Regresson Mamum Lkelhood (ML)

More information

Existence and Uniqueness Results for Random Impulsive Integro-Differential Equation

Existence and Uniqueness Results for Random Impulsive Integro-Differential Equation Global Journal of Pure and Appled Mahemacs. ISSN 973-768 Volume 4, Number 6 (8), pp. 89-87 Research Inda Publcaons hp://www.rpublcaon.com Exsence and Unqueness Resuls for Random Impulsve Inegro-Dfferenal

More information

Filtrage particulaire et suivi multi-pistes Carine Hue Jean-Pierre Le Cadre and Patrick Pérez

Filtrage particulaire et suivi multi-pistes Carine Hue Jean-Pierre Le Cadre and Patrick Pérez Chaînes de Markov cachées e flrage parculare 2-22 anver 2002 Flrage parculare e suv mul-pses Carne Hue Jean-Perre Le Cadre and Parck Pérez Conex Applcaons: Sgnal processng: arge rackng bearngs-onl rackng

More information

THEORETICAL AUTOCORRELATIONS. ) if often denoted by γ. Note that

THEORETICAL AUTOCORRELATIONS. ) if often denoted by γ. Note that THEORETICAL AUTOCORRELATIONS Cov( y, y ) E( y E( y))( y E( y)) ρ = = Var( y) E( y E( y)) =,, L ρ = and Cov( y, y ) s ofen denoed by whle Var( y ) f ofen denoed by γ. Noe ha γ = γ and ρ = ρ and because

More information

Motion in Two Dimensions

Motion in Two Dimensions Phys 1 Chaper 4 Moon n Two Dmensons adzyubenko@csub.edu hp://www.csub.edu/~adzyubenko 005, 014 A. Dzyubenko 004 Brooks/Cole 1 Dsplacemen as a Vecor The poson of an objec s descrbed by s poson ecor, r The

More information

Bernoulli process with 282 ky periodicity is detected in the R-N reversals of the earth s magnetic field

Bernoulli process with 282 ky periodicity is detected in the R-N reversals of the earth s magnetic field Submed o: Suden Essay Awards n Magnecs Bernoull process wh 8 ky perodcy s deeced n he R-N reversals of he earh s magnec feld Jozsef Gara Deparmen of Earh Scences Florda Inernaonal Unversy Unversy Park,

More information

Relative controllability of nonlinear systems with delays in control

Relative controllability of nonlinear systems with delays in control Relave conrollably o nonlnear sysems wh delays n conrol Jerzy Klamka Insue o Conrol Engneerng, Slesan Techncal Unversy, 44- Glwce, Poland. phone/ax : 48 32 37227, {jklamka}@a.polsl.glwce.pl Keywor: Conrollably.

More information

Sampling Procedure of the Sum of two Binary Markov Process Realizations

Sampling Procedure of the Sum of two Binary Markov Process Realizations Samplng Procedure of he Sum of wo Bnary Markov Process Realzaons YURY GORITSKIY Dep. of Mahemacal Modelng of Moscow Power Insue (Techncal Unversy), Moscow, RUSSIA, E-mal: gorsky@yandex.ru VLADIMIR KAZAKOV

More information

Approximate Analytic Solution of (2+1) - Dimensional Zakharov-Kuznetsov(Zk) Equations Using Homotopy

Approximate Analytic Solution of (2+1) - Dimensional Zakharov-Kuznetsov(Zk) Equations Using Homotopy Arcle Inernaonal Journal of Modern Mahemacal Scences, 4, (): - Inernaonal Journal of Modern Mahemacal Scences Journal homepage: www.modernscenfcpress.com/journals/jmms.aspx ISSN: 66-86X Florda, USA Approxmae

More information

M. Y. Adamu Mathematical Sciences Programme, AbubakarTafawaBalewa University, Bauchi, Nigeria

M. Y. Adamu Mathematical Sciences Programme, AbubakarTafawaBalewa University, Bauchi, Nigeria IOSR Journal of Mahemacs (IOSR-JM e-issn: 78-578, p-issn: 9-765X. Volume 0, Issue 4 Ver. IV (Jul-Aug. 04, PP 40-44 Mulple SolonSoluons for a (+-dmensonalhroa-sasuma shallow waer wave equaon UsngPanlevé-Bӓclund

More information

Example: MOSFET Amplifier Distortion

Example: MOSFET Amplifier Distortion 4/25/2011 Example MSFET Amplfer Dsoron 1/9 Example: MSFET Amplfer Dsoron Recall hs crcu from a prevous handou: ( ) = I ( ) D D d 15.0 V RD = 5K v ( ) = V v ( ) D o v( ) - K = 2 0.25 ma/v V = 2.0 V 40V.

More information

Lecture 11 SVM cont

Lecture 11 SVM cont Lecure SVM con. 0 008 Wha we have done so far We have esalshed ha we wan o fnd a lnear decson oundary whose margn s he larges We know how o measure he margn of a lnear decson oundary Tha s: he mnmum geomerc

More information

Discrete Markov Process. Introduction. Example: Balls and Urns. Stochastic Automaton. INTRODUCTION TO Machine Learning 3rd Edition

Discrete Markov Process. Introduction. Example: Balls and Urns. Stochastic Automaton. INTRODUCTION TO Machine Learning 3rd Edition EHEM ALPAYDI he MI Press, 04 Lecure Sldes for IRODUCIO O Machne Learnng 3rd Edon alpaydn@boun.edu.r hp://www.cmpe.boun.edu.r/~ehem/ml3e Sldes from exboo resource page. Slghly eded and wh addonal examples

More information

Robust and Accurate Cancer Classification with Gene Expression Profiling

Robust and Accurate Cancer Classification with Gene Expression Profiling Robus and Accurae Cancer Classfcaon wh Gene Expresson Proflng (Compuaonal ysems Bology, 2005) Auhor: Hafeng L, Keshu Zhang, ao Jang Oulne Background LDA (lnear dscrmnan analyss) and small sample sze problem

More information

Chapter 6: AC Circuits

Chapter 6: AC Circuits Chaper 6: AC Crcus Chaper 6: Oulne Phasors and he AC Seady Sae AC Crcus A sable, lnear crcu operang n he seady sae wh snusodal excaon (.e., snusodal seady sae. Complee response forced response naural response.

More information

Bayesian Inference of the GARCH model with Rational Errors

Bayesian Inference of the GARCH model with Rational Errors 0 Inernaonal Conference on Economcs, Busness and Markeng Managemen IPEDR vol.9 (0) (0) IACSIT Press, Sngapore Bayesan Inference of he GARCH model wh Raonal Errors Tesuya Takash + and Tng Tng Chen Hroshma

More information

Tools for Analysis of Accelerated Life and Degradation Test Data

Tools for Analysis of Accelerated Life and Degradation Test Data Acceleraed Sress Tesng and Relably Tools for Analyss of Acceleraed Lfe and Degradaon Tes Daa Presened by: Reuel Smh Unversy of Maryland College Park smhrc@umd.edu Sepember-5-6 Sepember 28-30 206, Pensacola

More information

Chapter 6 DETECTION AND ESTIMATION: Model of digital communication system. Fundamental issues in digital communications are

Chapter 6 DETECTION AND ESTIMATION: Model of digital communication system. Fundamental issues in digital communications are Chaper 6 DEECIO AD EIMAIO: Fundamenal ssues n dgal communcaons are. Deecon and. Esmaon Deecon heory: I deals wh he desgn and evaluaon of decson makng processor ha observes he receved sgnal and guesses

More information

TSS = SST + SSE An orthogonal partition of the total SS

TSS = SST + SSE An orthogonal partition of the total SS ANOVA: Topc 4. Orhogonal conrass [ST&D p. 183] H 0 : µ 1 = µ =... = µ H 1 : The mean of a leas one reamen group s dfferen To es hs hypohess, a basc ANOVA allocaes he varaon among reamen means (SST) equally

More information

FI 3103 Quantum Physics

FI 3103 Quantum Physics /9/4 FI 33 Quanum Physcs Aleander A. Iskandar Physcs of Magnesm and Phooncs Research Grou Insu Teknolog Bandung Basc Conces n Quanum Physcs Probably and Eecaon Value Hesenberg Uncerany Prncle Wave Funcon

More information

Polymerization Technology Laboratory Course

Polymerization Technology Laboratory Course Prakkum Polymer Scence/Polymersaonsechnk Versuch Resdence Tme Dsrbuon Polymerzaon Technology Laboraory Course Resdence Tme Dsrbuon of Chemcal Reacors If molecules or elemens of a flud are akng dfferen

More information

Tight results for Next Fit and Worst Fit with resource augmentation

Tight results for Next Fit and Worst Fit with resource augmentation Tgh resuls for Nex F and Wors F wh resource augmenaon Joan Boyar Leah Epsen Asaf Levn Asrac I s well known ha he wo smple algorhms for he classc n packng prolem, NF and WF oh have an approxmaon rao of

More information

Single-loop System Reliability-Based Design & Topology Optimization (SRBDO/SRBTO): A Matrix-based System Reliability (MSR) Method

Single-loop System Reliability-Based Design & Topology Optimization (SRBDO/SRBTO): A Matrix-based System Reliability (MSR) Method 10 h US Naonal Congress on Compuaonal Mechancs Columbus, Oho 16-19, 2009 Sngle-loop Sysem Relably-Based Desgn & Topology Opmzaon (SRBDO/SRBTO): A Marx-based Sysem Relably (MSR) Mehod Tam Nguyen, Junho

More information

Modélisation de la détérioration basée sur les données de surveillance conditionnelle et estimation de la durée de vie résiduelle

Modélisation de la détérioration basée sur les données de surveillance conditionnelle et estimation de la durée de vie résiduelle Modélsaon de la dééroraon basée sur les données de survellance condonnelle e esmaon de la durée de ve résduelle T. T. Le, C. Bérenguer, F. Chaelan Unv. Grenoble Alpes, GIPSA-lab, F-38000 Grenoble, France

More information

Learning Objectives. Self Organization Map. Hamming Distance(1/5) Introduction. Hamming Distance(3/5) Hamming Distance(2/5) 15/04/2015

Learning Objectives. Self Organization Map. Hamming Distance(1/5) Introduction. Hamming Distance(3/5) Hamming Distance(2/5) 15/04/2015 /4/ Learnng Objecves Self Organzaon Map Learnng whou Exaples. Inroducon. MAXNET 3. Cluserng 4. Feaure Map. Self-organzng Feaure Map 6. Concluson 38 Inroducon. Learnng whou exaples. Daa are npu o he syse

More information

January Examinations 2012

January Examinations 2012 Page of 5 EC79 January Examnaons No. of Pages: 5 No. of Quesons: 8 Subjec ECONOMICS (POSTGRADUATE) Tle of Paper EC79 QUANTITATIVE METHODS FOR BUSINESS AND FINANCE Tme Allowed Two Hours ( hours) Insrucons

More information

Including the ordinary differential of distance with time as velocity makes a system of ordinary differential equations.

Including the ordinary differential of distance with time as velocity makes a system of ordinary differential equations. Soluons o Ordnary Derenal Equaons An ordnary derenal equaon has only one ndependen varable. A sysem o ordnary derenal equaons consss o several derenal equaons each wh he same ndependen varable. An eample

More information

2. SPATIALLY LAGGED DEPENDENT VARIABLES

2. SPATIALLY LAGGED DEPENDENT VARIABLES 2. SPATIALLY LAGGED DEPENDENT VARIABLES In hs chaper, we descrbe a sascal model ha ncorporaes spaal dependence explcly by addng a spaally lagged dependen varable y on he rgh-hand sde of he regresson equaon.

More information

Neural Networks-Based Time Series Prediction Using Long and Short Term Dependence in the Learning Process

Neural Networks-Based Time Series Prediction Using Long and Short Term Dependence in the Learning Process Neural Neworks-Based Tme Seres Predcon Usng Long and Shor Term Dependence n he Learnng Process J. Puchea, D. Paño and B. Kuchen, Absrac In hs work a feedforward neural neworksbased nonlnear auoregresson

More information

Survival Analysis and Reliability. A Note on the Mean Residual Life Function of a Parallel System

Survival Analysis and Reliability. A Note on the Mean Residual Life Function of a Parallel System Communcaons n Sascs Theory and Mehods, 34: 475 484, 2005 Copyrgh Taylor & Francs, Inc. ISSN: 0361-0926 prn/1532-415x onlne DOI: 10.1081/STA-200047430 Survval Analyss and Relably A Noe on he Mean Resdual

More information

A Deterministic Algorithm for Summarizing Asynchronous Streams over a Sliding Window

A Deterministic Algorithm for Summarizing Asynchronous Streams over a Sliding Window A Deermnsc Algorhm for Summarzng Asynchronous Sreams over a Sldng ndow Cosas Busch Rensselaer Polyechnc Insue Srkana Trhapura Iowa Sae Unversy Oulne of Talk Inroducon Algorhm Analyss Tme C Daa sream: 3

More information

Advanced Machine Learning & Perception

Advanced Machine Learning & Perception Advanced Machne Learnng & Percepon Insrucor: Tony Jebara SVM Feaure & Kernel Selecon SVM Eensons Feaure Selecon (Flerng and Wrappng) SVM Feaure Selecon SVM Kernel Selecon SVM Eensons Classfcaon Feaure/Kernel

More information

Reactive Methods to Solve the Berth AllocationProblem with Stochastic Arrival and Handling Times

Reactive Methods to Solve the Berth AllocationProblem with Stochastic Arrival and Handling Times Reacve Mehods o Solve he Berh AllocaonProblem wh Sochasc Arrval and Handlng Tmes Nsh Umang* Mchel Berlare* * TRANSP-OR, Ecole Polyechnque Fédérale de Lausanne Frs Workshop on Large Scale Opmzaon November

More information

Chapter Lagrangian Interpolation

Chapter Lagrangian Interpolation Chaper 5.4 agrangan Inerpolaon Afer readng hs chaper you should be able o:. dere agrangan mehod of nerpolaon. sole problems usng agrangan mehod of nerpolaon and. use agrangan nerpolans o fnd deraes and

More information

Parametric Estimation in MMPP(2) using Time Discretization. Cláudia Nunes, António Pacheco

Parametric Estimation in MMPP(2) using Time Discretization. Cláudia Nunes, António Pacheco Paramerc Esmaon n MMPP(2) usng Tme Dscrezaon Cláuda Nunes, Anóno Pacheco Deparameno de Maemáca and Cenro de Maemáca Aplcada 1 Insuo Superor Técnco, Av. Rovsco Pas, 1096 Lsboa Codex, PORTUGAL In: J. Janssen

More information

The topology and signature of the regulatory interactions predict the expression pattern of the segment polarity genes in Drosophila m elanogaster

The topology and signature of the regulatory interactions predict the expression pattern of the segment polarity genes in Drosophila m elanogaster The opology and sgnaure of he regulaory neracons predc he expresson paern of he segmen polary genes n Drosophla m elanogaser Hans Ohmer and Réka Alber Deparmen of Mahemacs Unversy of Mnnesoa Complex bologcal

More information

RELATIONSHIP BETWEEN VOLATILITY AND TRADING VOLUME: THE CASE OF HSI STOCK RETURNS DATA

RELATIONSHIP BETWEEN VOLATILITY AND TRADING VOLUME: THE CASE OF HSI STOCK RETURNS DATA RELATIONSHIP BETWEEN VOLATILITY AND TRADING VOLUME: THE CASE OF HSI STOCK RETURNS DATA Mchaela Chocholaá Unversy of Economcs Braslava, Slovaka Inroducon (1) one of he characersc feaures of sock reurns

More information

Dual Approximate Dynamic Programming for Large Scale Hydro Valleys

Dual Approximate Dynamic Programming for Large Scale Hydro Valleys Dual Approxmae Dynamc Programmng for Large Scale Hydro Valleys Perre Carpener and Jean-Phlppe Chanceler 1 ENSTA ParsTech and ENPC ParsTech CMM Workshop, January 2016 1 Jon work wh J.-C. Alas, suppored

More information

Li An-Ping. Beijing , P.R.China

Li An-Ping. Beijing , P.R.China A New Type of Cpher: DICING_csb L An-Png Bejng 100085, P.R.Chna apl0001@sna.com Absrac: In hs paper, we wll propose a new ype of cpher named DICING_csb, whch s derved from our prevous sream cpher DICING.

More information