Hierarchical Bayesian Inference in Networks of Spiking Neurons
To appear in Advances in NIPS, Vol. 17, MIT Press, 2005.

Rajesh P. N. Rao
Department of Computer Science and Engineering
University of Washington, Seattle, WA
rao@cs.washington.edu

Abstract

There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4.

1 Introduction

A wide range of psychophysical results have recently been successfully explained using Bayesian models [7, 8, 16, 19]. These models have been able to account for human responses in tasks ranging from 3D shape perception to visuomotor control. Simultaneously, there is accumulating evidence from human and monkey experiments that Bayesian mechanisms are at work during visual decision making [2, 5]. The versatility of Bayesian models stems from their ability to combine prior knowledge with sensory evidence in a rigorous manner: Bayes' rule prescribes how prior probabilities and stimulus likelihoods should be combined, allowing the responses of subjects or neural responses to be interpreted in terms of the resulting posterior distributions.

An important question that has only recently received attention is how networks of cortical neurons can implement algorithms for Bayesian inference. One powerful approach has been to build on the known properties of population coding models that represent information using a set of neural tuning curves or kernel functions [1, 20]. Several proposals have been made regarding how a probability distribution could be encoded using population codes ([3, 18]; see [14] for an excellent review). However, the problem of implementing general inference algorithms for arbitrary graphical models using population codes remains unresolved (some encouraging initial results are reported in Zemel et al., this volume).
An alternate approach advocates performing Bayesian inference in the log domain, such that multiplication of probabilities is turned into addition and division into subtraction, the latter operations being easier to implement in standard neuron models [2, 5, 15] (see also the papers by Deneve and by Yu and Dayan in this volume). For example, a neural implementation of approximate Bayesian inference for a hidden Markov model was investigated in [15]. The question of how such an approach could be generalized to spiking neurons and arbitrary graphical models remained open.

In this paper, we propose a method for implementing Bayesian belief propagation in networks of spiking neurons. We show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. In the model, the dynamics of the membrane potential is used to implement on-line belief propagation in the log domain [15]. A neuron's spiking probability is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We first show that for a visual motion detection task, the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in the neuron's preferred direction. We then show that in a two-level network, hierarchical Bayesian inference [9] produces responses that mimic the attentional effects seen in visual cortical areas V2 and V4.

2 Modeling Networks of Noisy Integrate-and-Fire Neurons

2.1 Integrate-and-Fire Model of Spiking Neurons

We begin with a recurrently connected network of integrate-and-fire (IF) neurons receiving feedforward inputs denoted by the vector $\mathbf{I}$. The membrane potential of neuron $i$ changes according to:
$$\tau \frac{dv_i}{dt} = -v_i + \sum_j w_{ij} I_j + \sum_j u_{ij} v_j \quad (1)$$
where $\tau$ is the membrane time constant, $I_j$ denotes the synaptic current due to input neuron $j$, $w_{ij}$ represents the strength of the synapse from input $j$ to recurrent neuron $i$, $v_j$ denotes the synaptic current due to recurrent neuron $j$, and $u_{ij}$ represents the corresponding synaptic strength. If $v_i$ crosses a threshold $T$, the neuron fires a spike and $v_i$ is reset to the potential $v_{\text{reset}}$. Equation 1 can be rewritten in discrete form as:
$$v_i(t+1) = v_i(t) + \epsilon\Big(-v_i(t) + \sum_j w_{ij} I_j(t) + \sum_j u_{ij} v_j(t)\Big) \quad (2)$$
i.e.
$$v_i(t+1) = \epsilon \sum_j w_{ij} I_j(t) + \sum_j u'_{ij} v_j(t) \quad (3)$$
where $\epsilon$ is the integration rate, $u'_{ii} = 1 + \epsilon(u_{ii} - 1)$, and $u'_{ij} = \epsilon u_{ij}$ for $i \neq j$. A more general integrate-and-fire model that takes into account some of the effects of nonlinear filtering in dendrites can be obtained by generalizing Equation 3 as follows:
$$v_i(t+1) = f\Big(\sum_j w_{ij} I_j(t)\Big) + g\Big(\sum_j u'_{ij} v_j(t)\Big) \quad (4)$$
where $f$ and $g$ model potentially different dendritic filtering functions for feedforward and recurrent inputs.
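To make the discrete-time dynamics concrete, here is a minimal Python/NumPy sketch of the update in Equation 3 together with the threshold-and-reset rule described above. All parameter values and weight matrices are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_in, N_rec = 10, 5          # numbers of input and recurrent neurons (illustrative)
eps = 0.1                    # integration rate epsilon
T, v_reset = 1.0, 0.0        # spike threshold and reset potential

W = rng.normal(0, 0.3, (N_rec, N_in))   # feedforward weights w_ij (placeholder)
U = rng.normal(0, 0.1, (N_rec, N_rec))  # recurrent weights u_ij (placeholder)
# Discretization of Eq. 3: u'_ii = 1 + eps*(u_ii - 1), u'_ij = eps*u_ij for i != j.
U_prime = eps * U
np.fill_diagonal(U_prime, 1 + eps * (np.diag(U) - 1))

v = np.zeros(N_rec)
for t in range(100):
    I = rng.poisson(0.5, N_in)           # toy input currents I_j(t)
    v = eps * (W @ I) + U_prime @ v      # membrane update (Eq. 3)
    spikes = v >= T                      # threshold crossing
    v[spikes] = v_reset                  # reset after a spike
```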
2.2 Stochastic Spiking in Noisy IF Neurons

To model the effects of background inputs and the random openings of membrane channels, one can add a Gaussian white noise term to the right-hand side of Equations 3 and 4. This makes the spiking of neurons in the recurrent network stochastic. Plesser and Gerstner [13] and Gerstner [4] have shown that under reasonable assumptions, the probability of spiking in such noisy neurons can be approximated by an escape function (or hazard function) that depends only on the distance between the (noise-free) membrane potential $v_i$ and the threshold $T$. Several different escape functions were studied. Of particular interest to the present paper is the following exponential function for spiking probability, suggested in [4] for noisy integrate-and-fire networks:
$$P(\text{neuron } i \text{ spikes at time } t) = k e^{(v_i(t) - T)/c} \quad (5)$$
where $k$ and $c$ are arbitrary constants. We used a model that combines Equations 4 and 5 to generate spikes, with an absolute refractory period of 1 time step.

3 Bayesian Inference using Spiking Neurons

3.1 Inference in a Single-Level Model

We first consider on-line belief propagation in a single-level dynamic graphical model and show how it can be implemented in spiking networks. The graphical model is shown in Figure 1A and corresponds to a classical hidden Markov model. Let $\theta(t)$ represent the hidden state of a Markov model at time $t$ with transition probabilities given by $P(\theta(t) = \theta_i \mid \theta(t-1) = \theta_j) = P(\theta^t_i \mid \theta^{t-1}_j)$ for $i, j = 1 \ldots N$. Let $\mathbf{I}(t)$ be the observable output governed by the probabilities $P(\mathbf{I}(t) \mid \theta(t))$. Then, the forward component of the belief propagation algorithm [12] prescribes the following message for state $i$ from time step $t$ to $t+1$:
$$m^{t,t+1}_i = P(\mathbf{I}(t) \mid \theta^t_i) \sum_j P(\theta^t_i \mid \theta^{t-1}_j)\, m^{t-1,t}_j \quad (6)$$
If $m^{0,1}_j = P(\theta_j)$ (the prior distribution over states), then it is easy to show using Bayes' rule that $m^{t,t+1}_i = P(\theta^t_i, \mathbf{I}(t), \ldots, \mathbf{I}(1))$. If the probabilities are normalized at each update step:
$$m^{t,t+1}_i = P(\mathbf{I}(t) \mid \theta^t_i) \sum_j P(\theta^t_i \mid \theta^{t-1}_j)\, m^{t-1,t}_j / n^{t-1,t} \quad (7)$$
where $n^{t-1,t} = \sum_i m^{t-1,t}_i$, then the message becomes equal to the posterior probability of the state and current input, given all past inputs:
$$m^{t,t+1}_i = P(\theta^t_i, \mathbf{I}(t) \mid \mathbf{I}(t-1), \ldots, \mathbf{I}(1)) \quad (8)$$

3.2 Neural Implementation of the Inference Algorithm

By comparing the membrane potential equation (Eq. 4) with the on-line belief propagation equation (Eq. 7), it is clear that the first equation can implement the second if belief propagation is performed in the log domain [15], i.e., if:
$$v_i(t+1) \propto \log m^{t,t+1}_i \quad (9)$$
$$f\Big(\sum_j w_{ij} I_j(t)\Big) = \log P(\mathbf{I}(t) \mid \theta^t_i) \quad (10)$$
$$g\Big(\sum_j u'_{ij} v_j(t)\Big) = \log\Big(\sum_j P(\theta^t_i \mid \theta^{t-1}_j)\, m^{t-1,t}_j / n^{t-1,t}\Big) \quad (11)$$
In this model, the dendritic filtering functions $f$ and $g$ approximate the logarithm function (see Footnote 1), the synaptic currents $I_j(t)$ and $v_j(t)$ are approximated by the corresponding instantaneous firing rates, and the recurrent synaptic weights $u_{ij}$ encode the transition probabilities $P(\theta^t_i \mid \theta^{t-1}_j)$. Normalization by $n^{t-1,t}$ is implemented by subtracting $\log n^{t-1,t}$ using inhibition.

Footnote 1: An alternative approach, which was also found to yield satisfactory results, is to approximate the log-sum with a linear weighted sum [15], the weights being chosen to minimize the approximation error.
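The equivalence between the probability-domain update (Equation 7) and the log-domain update (Equations 9-11) can be checked numerically. The sketch below, under the idealization that f and g compute exact logarithms, runs both updates side by side on a made-up transition matrix and likelihood sequence and verifies that they agree.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                      # number of hidden states (illustrative)

# P_trans[i, j] = P(theta_i^t | theta_j^{t-1}); columns sum to 1.
P_trans = rng.dirichlet(np.ones(N), N).T
m = np.full(N, 1.0 / N)                    # m^{0,1}_i = P(theta_i), the prior
log_m = np.log(m)

for t in range(50):
    lik = rng.random(N) + 0.1              # stand-in for P(I(t) | theta_i^t)

    # Probability domain (Eq. 7): divide by n = sum of previous messages,
    # propagate through the transition matrix, multiply by the likelihood.
    m = lik * (P_trans @ (m / m.sum()))

    # Log domain (Eqs. 9-11): membrane potential = log-likelihood
    # (feedforward term f) plus log of the normalized prediction (recurrent term g).
    prev = np.exp(log_m)
    log_m = np.log(lik) + np.log(P_trans @ (prev / prev.sum()))

assert np.allclose(m, np.exp(log_m))       # both updates compute the same message
```

In a spiking implementation, the log-domain quantity plays the role of the membrane potential, and the division by $n^{t-1,t}$ corresponds to subtractive inhibition, as described above.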
Figure 1: Graphical Models and their Neural Implementation. (A) Single-level dynamic graphical model. Each circle represents a node denoting the state variable $\theta^t$, which can take on values $\theta_1, \ldots, \theta_N$. (B) Recurrent network for implementing on-line belief propagation for the graphical model in (A). Each circle represents a neuron encoding a state $\theta_i$. Arrows represent synaptic connections. The probability distribution over state values at each time step is represented by the entire population. (C) Two-level dynamic graphical model. (D) Two-level network for implementing on-line belief propagation for the graphical model in (C). Arrows represent synaptic connections in the direction pointed by the arrow heads. Lines without arrow heads represent bidirectional connections.

Finally, since the membrane potential $v_i(t+1)$ is assumed to be proportional to $\log m^{t,t+1}_i$ (Equation 9), we have:
$$v_i(t+1) = c \log m^{t,t+1}_i + T \quad (12)$$
for some constants $c$ and $T$. For noisy integrate-and-fire neurons, we can use Equation 5 to calculate the probability of spiking for each neuron as:
$$P(\text{neuron } i \text{ spikes at time } t+1) \propto e^{(v_i(t+1) - T)/c} \quad (13)$$
$$= e^{\log m^{t,t+1}_i} = m^{t,t+1}_i \quad (14)$$
Thus, the probability of spiking (or equivalently, the instantaneous firing rate) for neuron $i$ in the recurrent network is directly proportional to the posterior probability of the neuron's preferred state and the current input, given all past inputs. Figure 1B illustrates the single-level recurrent network model that implements the on-line belief propagation equation (7).
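Equations 12-14 predict that a neuron's empirical spike rate over repeated trials should track its message value. A minimal sketch of this readout, with arbitrary illustrative constants for k, c, and T:

```python
import numpy as np

rng = np.random.default_rng(2)
k, c, T = 0.5, 1.0, 1.0              # arbitrary constants (Eq. 5)

m = np.array([0.6, 0.3, 0.1])        # example message values m_i^{t,t+1}
v = c * np.log(m) + T                # membrane potentials (Eq. 12)
p_spike = k * np.exp((v - T) / c)    # exponential escape function (Eqs. 5, 13)

# p_spike equals k * m exactly (Eq. 14); a Monte Carlo estimate agrees:
trials = rng.random((100000, 3)) < p_spike
print(p_spike)                       # [0.3  0.15 0.05]
print(trials.mean(axis=0))           # approximately the same values
```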
3.3 Hierarchical Inference

The model described above can be extended to perform on-line belief propagation and inference for arbitrary graphical models. As an example, we describe the implementation for the two-level hierarchical graphical model in Figure 1C. As in the case of the 1-level dynamic model, we define the following messages within a particular level and between levels: $m^{t,t+1}_{1,i}$ (message from state $i$ to other states at level 1 from time step $t$ to $t+1$), $m^t_{1\to2,i}$ ("feedforward" message from state $i$ at level 1 sent to level 2 at time $t$), $m^{t,t+1}_{2,i}$ (message from state $i$ to other states at level 2 from time step $t$ to $t+1$), and $m^t_{2\to1,i}$ ("feedback" message from state $i$ at level 2 sent to level 1 at time $t$). Each of these messages can be calculated based on an on-line version of loopy belief propagation [11] for the multiply connected two-level graphical model in Figure 1C:
$$m^t_{1\to2,i} = \sum_k P(\mathbf{I}(t) \mid \theta^t_{1,k}) \sum_j P(\theta^t_{1,k} \mid \theta^t_{2,i}, \theta^{t-1}_{1,j})\, m^{t-1,t}_{1,j} \quad (15)$$
$$m^t_{2\to1,i} = \sum_j P(\theta^t_{2,i} \mid \theta^{t-1}_{2,j})\, m^{t-1,t}_{2,j} \quad (16)$$
$$m^{t,t+1}_{1,i} = P(\mathbf{I}(t) \mid \theta^t_{1,i}) \sum_{j,k} P(\theta^t_{1,i} \mid \theta^t_{2,j}, \theta^{t-1}_{1,k})\, m^t_{2\to1,j}\, m^{t-1,t}_{1,k} \quad (17)$$
$$m^{t,t+1}_{2,i} = m^t_{1\to2,i} \sum_k P(\theta^t_{2,i} \mid \theta^{t-1}_{2,k})\, m^{t-1,t}_{2,k} \quad (18)$$
Note the similarity between the last equation and the equation for the single-level model (Equation 6). The equations above can be implemented in a two-level hierarchical recurrent network of integrate-and-fire neurons in a manner similar to the 1-level case. We assume that neuron $i$ in level 1 encodes $\theta_{1,i}$ as its preferred state, while neuron $i$ in level 2 encodes $\theta_{2,i}$. We also assume specific feedforward and feedback neurons for computing and conveying $m^t_{1\to2,i}$ and $m^t_{2\to1,i}$, respectively. Taking the logarithm of both sides of Equations 17 and 18, we obtain equations that can be computed using the membrane potential dynamics of integrate-and-fire neurons (Equation 4). Figure 1D illustrates the corresponding two-level hierarchical network. A modification needed to accommodate Equation 17 is to allow bilinear interactions between synaptic inputs, which changes Equation 4 to:
$$v_i(t+1) = f\Big(\sum_j w_{ij} I_j(t)\Big) + g\Big(\sum_{j,k} u_{ijk}\, v_j(t)\, x_k(t)\Big) \quad (19)$$
Multiplicative interactions between synaptic inputs have previously been suggested by several authors (e.g., [10]), and potential implementations based on active dendritic interactions have been explored. The model suggested here utilizes these multiplicative interactions within dendritic branches, in addition to a possible logarithmic transform of the signal before it sums with other signals at the soma. Such a model is comparable to recent models of dendritic computation (see [6] for more details).
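For concreteness, the following sketch performs one step of the four message updates in Equations 15-18 on small, randomly generated conditional probability tables; the index conventions follow the equations above, and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
N1, N2 = 4, 2                        # numbers of states at levels 1 and 2

# P1[i, j, k] = P(theta1_i^t | theta2_j^t, theta1_k^{t-1}), normalized over i.
P1 = rng.random((N1, N2, N1))
P1 /= P1.sum(axis=0, keepdims=True)
# P2[i, j] = P(theta2_i^t | theta2_j^{t-1}), normalized over i.
P2 = rng.random((N2, N2))
P2 /= P2.sum(axis=0, keepdims=True)

m1 = np.full(N1, 1 / N1)             # within-level message m^{t-1,t}_{1,k}
m2 = np.full(N2, 1 / N2)             # within-level message m^{t-1,t}_{2,j}
lik = rng.random(N1) + 0.1           # stand-in for P(I(t) | theta1_k^t)

# Eq. 15: feedforward message from level 1 to level 2.
m_1to2 = np.einsum('k,kij,j->i', lik, P1, m1)
# Eq. 16: feedback message from level 2 to level 1.
m_2to1 = P2 @ m2
# Eq. 17: new within-level message at level 1; note the bilinear
# combination of the feedback message and the previous level-1 message,
# which is what motivates the multiplicative term in Eq. 19.
m1 = lik * np.einsum('ijk,j,k->i', P1, m_2to1, m1)
# Eq. 18: new within-level message at level 2.
m2 = m_1to2 * (P2 @ m2)
```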
4 Results

4.1 Single-Level Network: Probabilistic Motion Detection and Direction Selectivity

We first tested the model in a 1D visual motion detection task [15]. A single-level recurrent network of 30 neurons was used (see Figure 1B). Figure 2A shows the feedforward weights for neurons 1, ..., 15; these were recurrently connected to encode transition probabilities biased for rightward motion, as shown in Figure 2B. Feedforward weights for neurons 16, ..., 30 were identical to Figure 2A, but their recurrent connections encoded transition probabilities for leftward motion (see Figure 2B). As seen in Figure 2C, neurons in the network exhibited direction selectivity. Furthermore, the spiking probability of neurons reflected the posterior probabilities over time of motion direction at a given location (Figure 2D), suggesting a probabilistic interpretation of direction-selective spiking responses in visual cortical areas such as V1 and MT.

Figure 2: Responses from the Single-Level Motion Detection Network. (A) Feedforward weights for neurons 1, ..., 15 (rightward motion selective neurons). Feedforward weights for neurons 16, ..., 30 (leftward motion selective) are identical. (B) Recurrent weights encoding the transition probabilities $P(\theta^t_i \mid \theta^{t-1}_j)$ for $i, j = 1, \ldots, 30$. Probability values are proportional to pixel brightness. (C) Spiking responses of three of the first 15 neurons in the recurrent network (neurons 8, 10, and 12). As is evident, these neurons have become selective for rightward motion as a consequence of the recurrent connections specified in (B). (D) Posterior probabilities over time of motion direction (at a given location) encoded by the three neurons for rightward and leftward motion.

4.2 Two-Level Network: Spatial Attention as Hierarchical Bayesian Inference

We tested the two-level network implementation (Figure 1D) of hierarchical Bayesian inference using a simple attention task previously used in primate studies [17]. In an input image, a vertical or horizontal bar could occur either on the left side, right side, or both sides (see Figure 3). The corresponding two-level generative model consisted of two states at level 2 (left or right side) and four states at level 1: vertical left, horizontal left, vertical right, horizontal right. Each of these states was encoded by a neuron at the respective level. The feedforward connections at level 1 were chosen to be vertically or horizontally oriented Gabor filters localized to the left or right side of the image. Since the experiment used static images, the recurrent connections at each level implemented transition probabilities close to 1 for the same state and small random values for other states. The transition probabilities $P(\theta^t_{1,k} \mid \theta^t_{2,i}, \theta^{t-1}_{1,j})$ were chosen such that for $\theta^t_2 =$ left side, the transition probabilities for states $\theta^t_1$ coding for the right side were set to values close to zero (and vice versa for $\theta^t_2 =$ right side). As shown in Figure 3, the response of a neuron at level 1 that, for example, prefers a vertical edge on the right mimics the response of a V4 neuron with and without attention (see figure caption for more details). The initial setting of the priors at level 2 is the crucial determinant of attentional modulation in level 1 neurons, suggesting that feedback from higher cortical areas may convey task-specific priors that are integrated into V4 responses.
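The attentional manipulation described above amounts to changing the level-2 prior that gates level-1 inference. The toy sketch below collapses the dynamics into a single static mixture step in order to isolate that gating effect; the state names, likelihood values, and probability tables are invented for illustration and are not the paper's Gabor-filter likelihoods.

```python
import numpy as np

L1_states = ['vert-left', 'horiz-left', 'vert-right', 'horiz-right']
L2_states = ['left', 'right']

# P(theta1 | theta2): attending a side suppresses the other side's states
# (a static stand-in for the gated transition probabilities of Section 4.2).
eps = 0.01
P1_given_2 = np.array([[0.49, 0.49, eps, eps],      # theta2 = left
                       [eps, eps, 0.49, 0.49]])     # theta2 = right

lik_pair = np.array([0.45, 0.05, 0.45, 0.05])  # toy likelihoods: vertical bars on both sides

def level1_posterior(prior2):
    # Mix the level-1 prior over level-2 states, then weight by the likelihood.
    post = (prior2 @ P1_given_2) * lik_pair
    return post / post.sum()

print(level1_posterior(np.array([0.5, 0.5])))  # no attentional bias
print(level1_posterior(np.array([0.1, 0.9])))  # attention to the right side:
                                               # 'vert-right' posterior increases
```

Running this with increasing weight on the attended side qualitatively mirrors the firing-rate increase for the attended stimulus shown in Figure 3C.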
Figure 3: Responses from the Two-Level Hierarchical Network. (A) Top panel: Input image (lasting the first 15 time steps) containing a vertical bar ("Reference") on the right side. Each input was convolved with a retinal spatiotemporal filter. Middle: Three sample spike trains from the 1st-level neuron whose preferred stimulus was a vertical bar on the right side. Bottom: Posterior probability of a vertical bar (= spiking probability or instantaneous firing rate of the neuron) plotted over time. (B) Top panel: An input containing two stimuli ("Pair"). Below: Sample spike trains and posterior probability for the same neuron as in (A). (C) When attention is focused on the right side (depicted by the white oval) by initializing the prior probability encoded by the 2nd-level right-coding neuron at a higher value than the left-coding neuron, the firing rate for the 1st-level neuron in (A) increases to a level comparable to that in (A). (D) Responses from a neuron in primate area V4 without attention (top panel, "Ref Att Away" and "Pair Att Away"; compare with (A) and (B)) and with attention (bottom panel, "Pair Att Ref"; compare with (C)) (from [17]). Similar responses are seen in V2 [17].
5 Discussion and Conclusions

We have shown that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for single- and multi-level dynamic graphical models. The model suggests a new interpretation of the spiking probability of a neuron in terms of the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrated the model using two problems: inference of motion direction in a single-level network and hierarchical inference of object identity at an attended visual location in a two-level network. In the first case, neurons generated direction-selective spikes encoding the probability of motion in a particular direction. In the second case, attentional effects similar to those observed in primate cortical areas V2 and V4 emerged as a result of imposing appropriate priors at the highest level.

The results obtained thus far are encouraging, but several important questions remain. How does the approach scale to more realistic graphical models? The two-level model explored in this paper assumed stationary objects, resulting in simplified dynamics for the two levels in our recurrent network. Experiments are currently underway to test the robustness of the proposed model when richer classes of dynamics are introduced at the different levels. Another open question is how active dendritic processes could support probabilistic integration of messages from local, lower-level, and higher-level neurons, as suggested in Section 3. We intend to investigate this question using biophysical (compartmental) models of cortical neurons. Finally, how can the feedforward, feedback, and recurrent synaptic weights in the networks be learned directly from input data? We hope to investigate this question using biologically plausible approximations to the expectation-maximization (EM) algorithm.

Acknowledgments. This research was supported by grants from ONR, NSF, and the Packard Foundation. I am grateful to Wolfram Gerstner, Michael Shadlen, Aaron Shon, Eero Simoncelli, and Yair Weiss for discussions on topics related to this paper.

References

[1] C. H. Anderson and D. C. Van Essen. Neurobiological computational systems. In Computational Intelligence: Imitating Life. New York, NY: IEEE Press, 1994.
[2] R. H. S. Carpenter and M. L. L. Williams. Neural computation of log likelihood in control of saccadic eye movements. Nature, 377:59-62, 1995.
[3] S. Deneve and A. Pouget. Bayesian estimation by interconnected neural networks (abstract). Society for Neuroscience Abstracts, 27, 2001.
[4] W. Gerstner. Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Computation, 12(1):43-89, 2000.
[5] J. I. Gold and M. N. Shadlen. Neural computations that underlie decisions about sensory stimuli. Trends in Cognitive Sciences, 5(1):10-16, 2001.
[6] M. Häusser and B. Mel. Dendrites: bug or feature? Current Opinion in Neurobiology, 13, 2003.
[7] D. C. Knill and W. Richards. Perception as Bayesian Inference. Cambridge, UK: Cambridge University Press, 1996.
[8] K. P. Körding and D. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427, 2004.
[9] T. S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A, 20(7), 2003.
[10] B. W. Mel. NMDA-based pattern discrimination in a modeled cortical neuron. Neural Computation, 4(4):502-517, 1992.
[11] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of UAI (Uncertainty in AI), 1999.
[12] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988.
[13] H. E. Plesser and W. Gerstner. Noise in integrate-and-fire neurons: From stochastic input to escape rates. Neural Computation, 12(2), 2000.
[14] A. Pouget, P. Dayan, and R. S. Zemel. Inference and computation with population codes. Annual Review of Neuroscience, 26:381-410, 2003.
[15] R. P. N. Rao. Bayesian computation in recurrent neural circuits. Neural Computation, 16(1):1-38, 2004.
[16] R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki. Probabilistic Models of the Brain: Perception and Neural Function. Cambridge, MA: MIT Press, 2002.
[17] J. H. Reynolds, L. Chelazzi, and R. Desimone. Competitive mechanisms subserve attention in macaque areas V2 and V4. Journal of Neuroscience, 19, 1999.
[18] M. Sahani and P. Dayan. Doubly distributional population codes: Simultaneous representation of uncertainty and multiplicity. Neural Computation, 15, 2003.
[19] Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598-604, 2002.
[20] R. S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Computation, 10(2):403-430, 1998.