Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
Stefan Klampfl, Robert Legenstein, Wolfgang Maass
Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria

Abstract

The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representations according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles.

1 Introduction

The Information Bottleneck (IB) approach and independent component analysis (ICA) have both attracted substantial interest as general principles for unsupervised learning [1, 2]. A hope has been that they might also help us to understand strategies for unsupervised learning in biological systems. However, it has turned out to be quite difficult to establish links between known learning algorithms that have been derived from these general principles, and learning rules that could possibly be implemented by synaptic plasticity of a spiking neuron. Fortunately, in a simpler context, a direct link between an abstract information-theoretic optimization goal and a rule for synaptic plasticity has recently been established [3].
The resulting rule for the change of synaptic weights in [3] maximizes the mutual information between pre- and postsynaptic spike trains, under the constraint that the postsynaptic firing rate stays close to some target firing rate. We show in this article that this approach can be extended to situations where simultaneously the mutual information between the postsynaptic spike train of the neuron and other signals (such as, for example, the spike trains of other neurons) has to be minimized (Figure 1). This opens the door to the exploration of learning rules for information bottleneck analysis and independent component extraction with spiking neurons that would be optimal from a theoretical perspective. We review in section 2 the neuron model and learning rule from [3]. We show in section 3 how this learning rule can be extended so that it not only maximizes mutual information with some given spike trains and keeps the output firing rate within a desired range, but simultaneously minimizes mutual information with other spike trains, or other time-varying signals.

Figure 1: Different learning situations analyzed in this article. A In an information bottleneck task the learning neuron (neuron 1) wants to maximize the mutual information between its output Y_1^K and the activity of one or several target neurons Y_2^K, Y_3^K, ... (which can be functions of the inputs X^K and/or other external signals), while at the same time keeping the mutual information between the inputs X^K and the output Y_1^K as low as possible (and its firing rate within a desired range). Thus the neuron should learn to extract from its high-dimensional input those aspects that are related to these target signals. This setup is discussed in sections 3 and 4. B Two neurons receiving the same inputs X^K from a common set of presynaptic neurons both learn to maximize information transmission, and simultaneously to keep their outputs Y_1^K and Y_2^K statistically independent. Such extraction of independent components from the input is described in section 5.

Applications to information bottleneck tasks are discussed in section 4. In section 5 we show that a modification of this learning rule allows a spiking neuron to extract information from its input spike trains that is independent from the component extracted by another neuron.

2 Neuron model and a basic learning rule

We use the model from [3], which is a stochastically spiking neuron model with refractoriness, where the probability of firing in each time step depends on the current membrane potential and the time since the last output spike. It is convenient to formulate the model in discrete time with step size Δt. The total membrane potential of a neuron i in time step t^k = kΔt is given by

    u_i(t^k) = u_r + Σ_{j=1}^N Σ_{n=1}^k w_ij ɛ(t^k − t^n) x_j^n,    (1)

where u_r = −70 mV is the resting potential and w_ij is the weight of synapse j (j = 1, ..., N). An input spike train at synapse j up to the k-th time step is described by a sequence X_j^k = (x_j^1, x_j^2, ..., x_j^k) of zeros (no spike) and ones (spike); each presynaptic spike at time t^n (x_j^n = 1) evokes a postsynaptic potential (PSP) with exponentially decaying time course ɛ(t − t^n) with time constant τ_m = 10 ms.
The probability ρ_i^k of firing of neuron i in each time step t^k is given by

    ρ_i^k = 1 − exp[−g(u_i(t^k)) R_i(t^k) Δt] ≈ g(u_i(t^k)) R_i(t^k) Δt,    (2)

where g(u) = r_0 log{1 + exp[(u − u_0)/Δu]} is a smooth increasing function of the membrane potential u (u_0 = −65 mV, Δu = 2 mV, r_0 = 11 Hz). The approximation is valid for sufficiently small Δt (ρ_i^k ≪ 1). The refractory variable

    R_i(t) = [(t − t̂_i − τ_abs)^2 / (τ_refr^2 + (t − t̂_i − τ_abs)^2)] Θ(t − t̂_i − τ_abs)

assumes values in [0, 1] and depends on the last firing time t̂_i of neuron i (absolute refractory period τ_abs = 3 ms, relative refractory time τ_refr = 10 ms). The Heaviside step function Θ takes a value of 1 for non-negative arguments and 0 otherwise. This model from [3] is a special case of the spike-response model, and with a refractory variable R(t) that depends only on the time since the last postsynaptic event it has renewal properties [4].
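The dynamics above are straightforward to simulate. The following is a minimal Python sketch of the neuron model; the input format, helper names, and the interpretation of w·ɛ directly in millivolts are assumptions of this sketch, not part of the original model:

```python
import math
import random

DT = 0.001            # time step Delta t (s)
TAU_M = 0.010         # PSP time constant tau_m (s)
U_R = -70.0           # resting potential (mV)
U0, DU, R0 = -65.0, 2.0, 11.0       # parameters of the gain function g(u)
TAU_ABS, TAU_REFR = 0.003, 0.010    # refractory time constants (s)

def g(u):
    """Smooth increasing gain function g(u) in Hz."""
    return R0 * math.log(1.0 + math.exp((u - U0) / DU))

def refractory(t_since_last):
    """Refractory variable R(t) in [0, 1]."""
    s = t_since_last - TAU_ABS
    if s < 0:
        return 0.0
    return s ** 2 / (TAU_REFR ** 2 + s ** 2)

def simulate(weights, input_spikes, seed=0):
    """Run the neuron; input_spikes[k][j] is 1 if synapse j spikes in bin k.
    Returns the list of binary output spikes y^k."""
    rng = random.Random(seed)
    n_syn = len(weights)
    psp = [0.0] * n_syn                 # exponentially decaying PSP traces
    decay = math.exp(-DT / TAU_M)
    t_last = -1.0                       # time of last output spike
    out = []
    for k, x in enumerate(input_spikes):
        for j in range(n_syn):
            psp[j] = psp[j] * decay + x[j]
        # membrane potential, eq. (1); w*psp is taken directly in mV here
        u = U_R + sum(w * e for w, e in zip(weights, psp))
        t = k * DT
        # firing probability for this bin, eq. (2)
        rho = 1.0 - math.exp(-g(u) * refractory(t - t_last) * DT)
        y = 1 if rng.random() < rho else 0
        if y:
            t_last = t
        out.append(y)
    return out
```

With the parameters above, the neuron fires at roughly 1 Hz at rest and saturates only through the refractory variable, since g itself is unbounded.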
The output of neuron i at the k-th time step is denoted by a variable y_i^k that assumes the value 1 if a postsynaptic spike occurred and 0 otherwise. A specific spike train up to the k-th time step is written as Y_i^k = (y_i^1, y_i^2, ..., y_i^k). The information transmission between an ensemble of input spike trains X^K and the output spike train Y^K can be quantified by the mutual information [5]

    I(X^K; Y^K) = Σ_{X^K, Y^K} P(X^K, Y^K) log [P(Y^K | X^K) / P(Y^K)].    (3)

The idea in [3] was to maximize the quantity I(X^K; Y^K) − γ D_KL(P(Y^K) ‖ P̃(Y^K)), where D_KL(P(Y^K) ‖ P̃(Y^K)) = Σ_{Y^K} P(Y^K) log[P(Y^K) / P̃(Y^K)] denotes the Kullback-Leibler divergence [5], imposing the additional constraint that the firing statistics P(Y) of the neuron should stay as close as possible to a target distribution P̃(Y). This distribution was chosen to be that of a constant target firing rate g̃, accounting for homeostatic processes. An online learning rule performing gradient ascent on this quantity was derived for the weights w_1j of neuron 1, with Δw_1j^k denoting the weight change during the k-th time step:

    Δw_1j^k = α C_1j^k B_1^k(γ),    (4)

which consists of the correlation term C_1j^k and the postsynaptic term B_1^k(γ) [3]. The term C_1j^k measures coincidences between postsynaptic spikes at neuron 1 and PSPs generated by presynaptic action potentials arriving at synapse j, in an exponential time window with time constant τ_C = 1 s:

    C_1j^k = C_1j^{k−1} (1 − Δt/τ_C) + [Σ_{n=1}^k ɛ(t^k − t^n) x_j^n] · [g′(u_1(t^k)) / g(u_1(t^k))] [y_1^k − ρ_1^k],    (5)

with g′(u_1(t^k)) denoting the derivative of g with respect to u. The term

    B_1^k(γ) = y_1^k [log(g(u_1(t^k)) / ḡ_1(t^k)) − γ log(ḡ_1(t^k) / g̃)] + (1 − y_1^k) R_1(t^k) Δt [−g(u_1(t^k)) + (1 + γ) ḡ_1(t^k) − γ g̃]    (6)

compares the current firing rate g(u_1(t^k)) with its average firing rate² ḡ_1(t^k), and simultaneously the running average ḡ_1(t^k) with the constant target rate g̃. The argument γ indicates that this term also depends on the optimization parameter γ.
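To make the two factors of the basic rule concrete, here is a hedged Python transcription of the correlation term and the postsynaptic term above; the scalar interface and the argument names are illustrative assumptions, and the transcription has not been checked against the authors' implementation:

```python
import math

TAU_C = 1.0  # time constant of the exponential eligibility window (s)

def correlation_term(c_prev, psp_j, g_u, dg_u, y, rho, dt):
    """One update of C_1j^k: coincidence of the PSP trace at synapse j
    with the deviation of the output spike y from its probability rho."""
    return c_prev * (1.0 - dt / TAU_C) + psp_j * (dg_u / g_u) * (y - rho)

def postsynaptic_term(y, R, g_u, gbar, g_target, gamma, dt):
    """B_1^k(gamma): compares the current rate g_u with its running
    average gbar, and gbar with the homeostatic target rate g_target."""
    if y:
        return math.log(g_u / gbar) - gamma * math.log(gbar / g_target)
    return R * dt * (-g_u + (1.0 + gamma) * gbar - gamma * g_target)
```

When the neuron fires exactly at its average rate and that average matches the target (g_u = ḡ = g̃), both branches vanish, so the postsynaptic term drives no weight change.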
3 Learning rule for multi-neuron interactions

We extend the learning rule presented in the previous section to a more complex scenario, where the mutual information between the output spike train Y_1^K of the learning neuron (neuron 1) and some target spike trains Y_l^K (l > 1) has to be maximized, while simultaneously minimizing the mutual information between the inputs X^K and the output Y_1^K. Obviously, this is the generic IB scenario applied to spiking neurons (see Figure 1A). A learning rule for extracting independent components with spiking neurons (see section 5) can be derived in a similar manner. For simplicity, we consider the case of an IB optimization for only one target spike train Y_2^K, and derive an update rule for the synaptic weights w_1j of neuron 1. The quantity to maximize is therefore¹

    L = −I(X^K; Y_1^K) + β I(Y_1^K; Y_2^K) − γ D_KL(P(Y_1^K) ‖ P̃(Y_1^K)),    (7)

where β and γ are optimization constants. To maximize this objective function, we derive the weight change Δw_1j^k during the k-th time step by gradient ascent on (7), assuming that the weights w_1j can change between some bounds 0 ≤ w_1j ≤ w_max (we assume w_max = 1 throughout this paper).

¹ We use boldface letters (X^k) to distinguish random variables from specific realizations (X^k).
² The rate ḡ_1(t^k) = ⟨g(u_1(t^k))⟩_{X|Y} denotes an expectation of the firing rate over the input distribution given the postsynaptic history and is implemented as a running average with an exponential time window (with a time constant of 100 ms).
Note that all three terms of (7) implicitly depend on w_1j because the output distribution P(Y_1^K) changes if we modify the weights w_1j. Since the first and the last term of (7) have already been considered (up to the sign) in [3], we will concentrate here on the middle term L_2 := β I(Y_1^K; Y_2^K) and denote the contribution of the gradient of L_2 to the total weight change Δw_1j^k in the k-th time step by Δ_2 w_1j^k. In order to get an expression for the weight change in a specific time step t^k, we write the probabilities P(Y_1^K) and P(Y_1^K, Y_2^K) occurring in (7) as products over individual time bins, i.e., P(Y_1^K) = Π_{k=1}^K P(y_1^k | Y_1^{k−1}) and P(Y_1^K, Y_2^K) = Π_{k=1}^K P(y_1^k, y_2^k | Y_1^{k−1}, Y_2^{k−1}), according to the chain rule of information theory [5]. Consequently, we rewrite L_2 as a sum over the contributions of the individual time bins, L_2 = Σ_{k=1}^K L_2^k, with

    L_2^k = β ⟨ log [ P(y_1^k, y_2^k | Y_1^{k−1}, Y_2^{k−1}) / (P(y_1^k | Y_1^{k−1}) P(y_2^k | Y_2^{k−1})) ] ⟩_{X^k, Y_1^k, Y_2^k}.    (8)

The weight change Δ_2 w_1j^k is then proportional to the gradient of this expression with respect to the weights w_1j, i.e., Δ_2 w_1j^k = α (∂L_2^k / ∂w_1j), with some learning rate α > 0. The evaluation of the gradient yields Δ_2 w_1j^k = α β ⟨ C_1j^k F_2^k ⟩_{X^k, Y_1^k, Y_2^k}, with a correlation term C_1j^k as in (5) and a term

    F_2^k = y_1^k y_2^k log [ḡ_12(t^k) / (ḡ_1(t^k) ḡ_2(t^k))]
          + y_1^k (1 − y_2^k) R_2(t^k) Δt [ḡ_2(t^k) − ḡ_12(t^k)/ḡ_1(t^k)]
          + (1 − y_1^k) y_2^k R_1(t^k) Δt [ḡ_1(t^k) − ḡ_12(t^k)/ḡ_2(t^k)]
          + (1 − y_1^k)(1 − y_2^k) R_1(t^k) R_2(t^k) (Δt)^2 [ḡ_12(t^k) − ḡ_1(t^k) ḡ_2(t^k)].    (9)

Here, ḡ_1(t^k) = ⟨g(u_1(t^k))⟩_{X|Y_1} denotes the average firing rate of neuron 1 and ḡ_12(t^k) = ⟨g(u_1(t^k)) g(u_2(t^k))⟩_{X|Y_1,Y_2} denotes the average product of firing rates of both neurons. Both quantities are implemented online as running exponential averages with a time constant of 10 s. Under the assumption of a small learning rate α we can approximate the expectation ⟨·⟩_{X,Y_1,Y_2} by averaging over a single long trial. Considering now all three terms in (7) we finally arrive at an online rule for maximizing (7),

    Δw_1j^k = α C_1j^k [−B_1^k(−γ) + β B_2^k],    (10)

which consists of a term C_1j^k sensitive to correlations between the output of the neuron and its presynaptic input at synapse j ("correlation term") and terms B_1^k and B_2^k that characterize the postsynaptic state of the neuron ("postsynaptic terms"). Note that the argument of B_1^k is different from (4) because some of the terms of the objective function (7) have a different sign. In order to compensate the effect of a small Δt, the constant β has to be large enough for the term β B_2^k to have an influence on the weight change. The factors C_1j^k and B_1^k were described in the previous section. In addition, our learning rule contains an extra term B_2^k = F_2^k / (Δt)^2 that is sensitive to the statistical dependence between the output spike train of the neuron and the target. It is given by

    B_2^k = [y_1^k y_2^k / (Δt)^2] log [ḡ_12(t^k) / (ḡ_1(t^k) ḡ_2(t^k))]
          + [y_1^k (1 − y_2^k) R_2(t^k) / Δt] [ḡ_2(t^k) − ḡ_12(t^k)/ḡ_1(t^k)]
          + [(1 − y_1^k) y_2^k R_1(t^k) / Δt] [ḡ_1(t^k) − ḡ_12(t^k)/ḡ_2(t^k)]
          + (1 − y_1^k)(1 − y_2^k) R_1(t^k) R_2(t^k) [ḡ_12(t^k) − ḡ_1(t^k) ḡ_2(t^k)].    (11)

This term basically compares the average product of firing rates ḡ_12 (which corresponds to the joint probability of spiking) with the product of average firing rates ḡ_1 ḡ_2 (representing the probability of independent spiking). In this way, it measures the momentary mutual information between the output of the neuron and the target spike train.
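The extra postsynaptic term above can be transcribed directly into code. This is a hedged sketch (the argument names and the scalar interface are my own; gbar1, gbar2, gbar12 stand for the running averages ḡ_1, ḡ_2, ḡ_12), and it has not been checked against the authors' implementation:

```python
import math

def dependence_term(y1, y2, R1, R2, gbar1, gbar2, gbar12, dt):
    """B_2^k: compares the running average of the product of rates (gbar12)
    with the product of running-average rates (gbar1 * gbar2). Exactly one
    summand is active per combination of the binary spike variables y1, y2."""
    if y1 and y2:
        return math.log(gbar12 / (gbar1 * gbar2)) / dt ** 2
    if y1:
        return (R2 / dt) * (gbar2 - gbar12 / gbar1)
    if y2:
        return (R1 / dt) * (gbar1 - gbar12 / gbar2)
    return R1 * R2 * (gbar12 - gbar1 * gbar2)
```

If the two spike trains are statistically independent (ḡ_12 = ḡ_1 ḡ_2), all four branches vanish, so this term only drives weight changes when the joint firing statistics deviate from independence.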
For a simplified neuron model without refractoriness (R(t) = 1) the update rule (4) resembles the BCM rule [6], as shown in [3]. With the objective function (7) to maximize, we expect an anti-Hebbian BCM rule with another term accounting for statistical dependencies between Y_1^K and Y_2^K. Since there is no refractoriness, the postsynaptic rate ν_1^k is given directly by the current value of g(u_1(t^k)), and the update rule (10) reduces to the rate model³

    Δw_1j^k = α ν_j^{pre,k} f(ν_1^k) { −[log(ν_1^k / ν̄_1^k) + γ log(ν̄_1^k / g̃)] + β [ν_2^k log(ν̄_12^k / (ν̄_1^k ν̄_2^k)) − (ν̄_12^k / ν̄_1^k − ν̄_2^k)] },    (12)

where the presynaptic rate at synapse j at time t^k is denoted by ν_j^{pre,k} = a Σ_{n=1}^k ɛ(t^k − t^n) x_j^n with a = 1 in units 1/(Vs). The values ν̄_1^k, ν̄_2^k, and ν̄_12^k are running averages of the output rate ν_1^k, the rate of the target signal ν_2^k, and of the product of these values, ν_1^k ν_2^k, respectively. The function f(ν_1^k) = g′(g^{−1}(ν_1^k)) / a is proportional to the derivative of g with respect to u, evaluated at the current membrane potential. The first term in the curly brackets accounts for the homeostatic process (similar to the BCM rule, see [3]), whereas the second term reinforces dependencies between Y_1^K and Y_2^K. Note that this term is zero if the rates of the two neurons are independent. It is interesting to note that if we rewrite the simplified rate-based learning rule (12) in the following way,

    Δw_1j^k = α ν_j^{pre,k} Φ(ν_1^k, ν_2^k),    (13)

we can view it as an extension of the classical Bienenstock-Cooper-Munro (BCM) rule [6] with a two-dimensional synaptic modification function Φ(ν_1^k, ν_2^k). Here, values of Φ > 0 produce LTD whereas values of Φ < 0 produce LTP. These regimes are separated by a sliding threshold; however, in contrast to the original BCM rule this threshold does not only depend on the running average of the postsynaptic rate ν̄_1^k, but also on the current values of ν_2^k and ν̄_12^k.

4 Application to Information Bottleneck Optimization

We use a setup as in Figure 1A where we want to maximize the information which the output Y_1^K of a learning neuron conveys about two target signals Y_2^K and Y_3^K.
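Stepping back to the simplified rate-based rule of section 3, it can be sketched in a few lines of Python. The closed form of f comes from inverting g analytically (with a = 1), which is an assumption of this sketch, as is the exact shape of the dependence part:

```python
import math

R0, DU = 11.0, 2.0   # parameters of the gain function g from section 2

def f(nu):
    # f(nu) = g'(g^{-1}(nu)) with a = 1; for g(u) = r0*log(1 + exp((u-u0)/du))
    # this simplifies to (r0/du) * (1 - exp(-nu/r0))
    return (R0 / DU) * (1.0 - math.exp(-nu / R0))

def rate_rule_dw(alpha, beta, gamma, nu_pre, nu1, nu2,
                 nu1_bar, nu2_bar, nu12_bar, g_target):
    """One step of the simplified rate model: a homeostatic (BCM-like)
    part plus a part that reinforces dependencies between the rates."""
    homeo = -(math.log(nu1 / nu1_bar) + gamma * math.log(nu1_bar / g_target))
    # the dependence part vanishes when nu12_bar == nu1_bar * nu2_bar
    dep = beta * (nu2 * math.log(nu12_bar / (nu1_bar * nu2_bar))
                  - (nu12_bar / nu1_bar - nu2_bar))
    return alpha * nu_pre * f(nu1) * (homeo + dep)
```

For independent rates the dependence part is zero, and the rule reduces to the homeostatic BCM-like part alone.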
If the target signals are statistically independent from each other we can optimize the mutual information to each target signal separately. This leads to an update rule

    Δw_1j^k = α C_1j^k [−B_1^k(−γ) + β (B_2^k + B_3^k)],    (14)

where B_2^k and B_3^k are the postsynaptic terms (11) sensitive to the statistical dependence between the output and target signals 1 and 2, respectively. We choose g̃ = 30 Hz for the target firing rate, and we use discrete time with Δt = 1 ms. In this experiment we demonstrate that it is possible to consider two very different kinds of target signals: one target spike train has a similar rate modulation as one part of the input, while the other target spike train has a high spike-spike correlation with another part of the input. The learning neuron receives input at 100 synapses, which are divided into 4 groups of 25 inputs each. The first two input groups consist of rate-modulated Poisson spike trains⁴ (Figure 2A). Spike trains from the remaining groups 3 and 4 are correlated with a coefficient of 0.5 within each group; however, spike trains from different groups are uncorrelated. Correlated spike trains are generated by the procedure described in [7]. The first target signal is chosen to have the same rate modulation as the inputs from group 1, except that Gaussian random noise is superimposed with a standard deviation of 2 Hz. The second target spike train is correlated with inputs from group 3 (with a coefficient of 0.5), but uncorrelated to inputs from group 4. Furthermore, both target signals are silent during random intervals: at each time step, each target signal is independently set to 0 with a certain probability (10^{−5}) and remains silent for a duration chosen from a Gaussian distribution with mean 5 s and SD 1 s (minimum duration 1 s). Hence this experiment tests whether learning works even if the target signals are not available all of the time.

³ In the absence of refractoriness we use an alternative gain function g_alt(u) = [1/g_max + 1/g(u)]^{−1} in order to pose an upper limit of g_max = 100 Hz on the postsynaptic firing rate.

Figure 2: Performance of the spike-based learning rule (10) for the IB task. A Modulation of input rates to input groups 1 and 2. B Evolution of weights during 60 minutes of learning (bright: strong synapses, w_1j ≈ 1; dark: depressed synapses, w_1j ≈ 0). Weights are initialized randomly between 0.1 and 0.2; α = 10^{−4}, β = 2·10^{3}, γ = 50. C Output rate and rate of target signal 1 during 5 seconds after learning. D Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. E Evolution of the average mutual information per time bin between output and both target spike trains as a function of time. F Trace of the correlation between output rate and rate of target signal 1 (solid line) and the spike-spike correlation (dashed line) between the output and target spike train 2 during learning. Correlation coefficients are calculated every 10 seconds.

Figure 2B shows that strong weights evolve for the first and third group of synapses, whereas the efficacies for the remaining inputs are depressed. Both groups with growing weights are correlated with one of the target signals; therefore the mutual information between output and target spike trains increases. Since spike-spike correlations convey more information than rate modulations, synaptic efficacies develop more strongly to group 3 (the group with spike-spike correlations). This results in an initial decrease in correlation with the rate-modulated target to the benefit of a higher correlation with the second target.
However, after about 30 minutes, when the weights become stable, the correlations as well as the mutual information quantities stay roughly constant. An application of the simplified rule (12) to the same task is shown in Figure 3, where it can be seen that strong weights close to w_max are developed for the rate-modulated input. To some extent weights grow also for the inputs with spike-spike correlations, in order to reach the constant target firing rate g̃. In contrast to the spike-based rule, the simplified rule is not able to detect spike-spike correlations between output and target spike trains.

⁴ The rate of the first 25 inputs is modulated by a Gaussian white-noise signal with mean 20 Hz that has been low-pass filtered with a cut-off frequency of 5 Hz. Synapses 26 to 50 receive a rate that has a constant value of 20 Hz, except that a burst is initiated at each time step with a probability of 5·10^{−5}. Thus there is a burst on average every 20 s. The duration of a burst is chosen from a Gaussian distribution with mean 0.5 s and SD 0.2 s; the minimum duration is chosen to be 0.1 s. During a burst the rate is set to 50 Hz. In the simulations we use discrete time with Δt = 1 ms.
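Correlated Poisson inputs of the kind used in these experiments can be approximated with a standard "mother process" construction, in which every train independently copies spikes of a common Poisson train. This is an illustrative sketch and not necessarily the exact procedure of [7]; for low rates the pairwise spike-spike correlation comes out close to the requested coefficient c:

```python
import random

def correlated_trains(n_trains, rate_hz, c, n_steps, dt=0.001, seed=0):
    """Binary spike trains with rate ~rate_hz and pairwise correlation ~c.
    A 'mother' Poisson train of rate rate_hz/c is thinned independently
    for each output train with keep-probability c."""
    rng = random.Random(seed)
    p_mother = rate_hz * dt / c
    assert p_mother <= 1.0, "rate*dt/c must stay a valid probability"
    trains = [[0] * n_steps for _ in range(n_trains)]
    for k in range(n_steps):
        if rng.random() < p_mother:         # mother spike in this bin
            for tr in trains:
                if rng.random() < c:        # independent thinning
                    tr[k] = 1
    return trains
```

With rate 20 Hz, c = 0.5 and Δt = 1 ms, the mother train fires with probability 0.04 per bin and each output train keeps about half of those spikes, so any two trains share roughly half of their spike times.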
Figure 3: Performance of the simplified update rule (12) for the IB task. A Evolution of weights during 30 minutes of learning (bright: strong synapses, w_1j ≈ 1; dark: depressed synapses, w_1j ≈ 0). Weights are initialized randomly between 0.1 and 0.2; α = 10^{−3}, β = 10^{4}, γ = 10. B Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. C Trace of the correlation between output rate and target rate during learning. Correlation coefficients are calculated every 10 seconds.

5 Extracting Independent Components

With a slight modification in the objective function (7) the learning rule allows us to extract statistically independent components from an ensemble of input spike trains. We consider two neurons receiving the same input at their synapses (see Figure 1B). For both neurons i = 1, 2 we maximize information transmission under the constraint that their outputs stay as statistically independent from each other as possible. That is, we maximize

    L_i = I(X^K; Y_i^K) − β I(Y_1^K; Y_2^K) − γ D_KL(P(Y_i^K) ‖ P̃(Y_i^K)).    (15)

Since the same terms (up to the sign) are optimized in (7) and (15) we can derive a gradient ascent rule for the weights w_ij of neuron i analogously to section 3:

    Δw_ij^k = α C_ij^k [B_i^k(γ) − β B_2^k].    (16)

Figure 4 shows the results of an experiment where two neurons receive the same Poisson input with a rate of 20 Hz at their 100 synapses. The input is divided into two groups of 40 spike trains each, such that synapses 1 to 40 and 41 to 80 receive correlated input with a correlation coefficient of 0.5 within each group; however, any spike trains belonging to different input groups are uncorrelated. The remaining 20 synapses receive uncorrelated Poisson input.
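The IB rule and the ICA rule of this section differ only in the signs with which the two postsynaptic terms enter the weight change; a small helper (the names are my own) makes that relationship explicit. Here b1 is a callable evaluating the basic postsynaptic term at a given γ, and b2 is the dependence term:

```python
def dw_ib(alpha, beta, gamma, corr, b1, b2):
    # IB task: minimize information about the inputs,
    # maximize information about the target spike train
    return alpha * corr * (-b1(-gamma) + beta * b2)

def dw_ica(alpha, beta, gamma, corr, b1, b2):
    # ICA task: maximize information transmission,
    # penalize dependence between the two outputs
    return alpha * corr * (b1(gamma) - beta * b2)
```

With the same correlation term and postsynaptic terms, a positive dependence term pushes the weights in opposite directions in the two rules, which is exactly the symmetry between the two objective functions.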
Weights close to the maximal efficacy w_max = 1 are developed for one of the groups of synapses that receives correlated input (group 2 in this case), whereas those for the other correlated group (group 1) as well as those for the uncorrelated group (group 3) stay low. Neuron 2 develops strong weights to the other correlated group of synapses (group 1), whereas the efficacies of the second correlated group (group 2) remain depressed, thereby trying to produce a statistically independent output. For both neurons the mutual information is maximized and the target output distribution of a constant firing rate of 30 Hz is approached well. After an initial increase in the mutual information and in the correlation between the outputs, when the weights of both neurons start to grow simultaneously, the amounts of information and correlation drop as both neurons develop strong efficacies to different parts of the input.

Figure 4: Extracting independent components. A, B Evolution of weights during 30 minutes of learning for both postsynaptic neurons (red: strong synapses, w_ij ≈ 1; blue: depressed synapses, w_ij ≈ 0). Weights are initialized randomly between 0.1 and 0.2; α = 10^{−3}, β = 10, γ = 10. C Evolution of the average mutual information per time bin between both output spike trains as a function of time. D, E Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) for both neurons as a function of time. Averages are calculated over segments of 1 minute. F Trace of the correlation between both output spike trains during learning. Correlation coefficients are calculated every 10 seconds.

6 Discussion

Information Bottleneck (IB) and Independent Component Analysis (ICA) have been proposed as general principles for unsupervised learning in lower cortical areas; however, learning rules that can implement these principles with spiking neurons have been missing. In this article we have derived, from information-theoretic principles, learning rules which enable a stochastically spiking neuron to solve these tasks. These learning rules are optimal from the perspective of information theory, but they are not local in the sense that they use only information that is available at a single synapse without an auxiliary network of interneurons or other biological processes. Rather, they tell us what type of information would have to be ideally provided by such an auxiliary network, and how the synapse should change its efficacy in order to approximate a theoretically optimal learning rule.

Acknowledgments

We would like to thank Wulfram Gerstner and Jean-Pascal Pfister for helpful discussions. This paper was written under partial support by the Austrian Science Fund FWF, projects # S9102-N13 and # P17229-N04, and was also supported by PASCAL, project # IST, and FACETS, project # 15879, of the European Union.

References

[1] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.
[2] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, New York, 2001.
[3] T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner. Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission. Proc. Natl. Acad. Sci. USA, 102:5239-5244, 2005.
[4] W. Gerstner and W. M. Kistler. Spiking Neuron Models.
Cambridge University Press, Cambridge, 2002.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[6] E. L. Bienenstock, L. N. Cooper, and P. W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2(1):32-48, 1982.
[7] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky. Learning input correlations through non-linear temporally asymmetric Hebbian plasticity. J. Neurosci., 23:3697-3714, 2003.
Chapter 7 Channel Capacty and Codng Contents 7. Channel models and channel capacty 7.. Channel models Bnary symmetrc channel Dscrete memoryless channels Dscrete-nput, contnuous-output channel Waveform
More informationGoodness of fit and Wilks theorem
DRAFT 0.0 Glen Cowan 3 June, 2013 Goodness of ft and Wlks theorem Suppose we model data y wth a lkelhood L(µ) that depends on a set of N parameters µ = (µ 1,...,µ N ). Defne the statstc t µ ln L(µ) L(ˆµ),
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationWinter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan
Wnter 2008 CS567 Stochastc Lnear/Integer Programmng Guest Lecturer: Xu, Huan Class 2: More Modelng Examples 1 Capacty Expanson Capacty expanson models optmal choces of the tmng and levels of nvestments
More informationConvergence of random processes
DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large
More informationELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM
ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM An elastc wave s a deformaton of the body that travels throughout the body n all drectons. We can examne the deformaton over a perod of tme by fxng our look
More informationPop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing
Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,
More informationIV. Performance Optimization
IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton
More informationResearch Article Green s Theorem for Sign Data
Internatonal Scholarly Research Network ISRN Appled Mathematcs Volume 2012, Artcle ID 539359, 10 pages do:10.5402/2012/539359 Research Artcle Green s Theorem for Sgn Data Lous M. Houston The Unversty of
More informationA new construction of 3-separable matrices via an improved decoding of Macula s construction
Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula
More informationOn an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1
On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool
More informationProbability Theory (revisited)
Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted
More informationCS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016
CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng
More informationBayesian predictive Configural Frequency Analysis
Psychologcal Test and Assessment Modelng, Volume 54, 2012 (3), 285-292 Bayesan predctve Confgural Frequency Analyss Eduardo Gutérrez-Peña 1 Abstract Confgural Frequency Analyss s a method for cell-wse
More informationSupplementary material 1
Supplementary materal 1 to Correlated connectvty and the dstrbuton of frng rates n the neocortex by Alexe Koulakov, Tomas Hromadka, and Anthony M. Zador The emergence of log-normal dstrbuton n neural nets.
More informationCS-433: Simulation and Modeling Modeling and Probability Review
CS-433: Smulaton and Modelng Modelng and Probablty Revew Exercse 1. (Probablty of Smple Events) Exercse 1.1 The owner of a camera shop receves a shpment of fve cameras from a camera manufacturer. Unknown
More informationUNIVERSITY OF TORONTO Faculty of Arts and Science. December 2005 Examinations STA437H1F/STA1005HF. Duration - 3 hours
UNIVERSITY OF TORONTO Faculty of Arts and Scence December 005 Examnatons STA47HF/STA005HF Duraton - hours AIDS ALLOWED: (to be suppled by the student) Non-programmable calculator One handwrtten 8.5'' x
More informationParametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010
Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Survey Workng Group Semnar March 29, 2010 1 Outlne Introducton Proposed method Fractonal mputaton Approxmaton Varance estmaton Multple mputaton
More informationAppendix B: Resampling Algorithms
407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles
More informationTemperature. Chapter Heat Engine
Chapter 3 Temperature In prevous chapters of these notes we ntroduced the Prncple of Maxmum ntropy as a technque for estmatng probablty dstrbutons consstent wth constrants. In Chapter 9 we dscussed the
More informationNegative Binomial Regression
STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...
More informationLecture 23: Artificial neural networks
Lecture 23: Artfcal neural networks Broad feld that has developed over the past 20 to 30 years Confluence of statstcal mechancs, appled math, bology and computers Orgnal motvaton: mathematcal modelng of
More informationCopyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor
Taylor Enterprses, Inc. Control Lmts for P Charts Copyrght 2017 by Taylor Enterprses, Inc., All Rghts Reserved. Control Lmts for P Charts Dr. Wayne A. Taylor Abstract: P charts are used for count data
More informationANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)
Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of
More informationLecture 7: Boltzmann distribution & Thermodynamics of mixing
Prof. Tbbtt Lecture 7 etworks & Gels Lecture 7: Boltzmann dstrbuton & Thermodynamcs of mxng 1 Suggested readng Prof. Mark W. Tbbtt ETH Zürch 13 März 018 Molecular Drvng Forces Dll and Bromberg: Chapters
More informationThe Minimum Universal Cost Flow in an Infeasible Flow Network
Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran
More informationAverage Decision Threshold of CA CFAR and excision CFAR Detectors in the Presence of Strong Pulse Jamming 1
Average Decson hreshold of CA CFAR and excson CFAR Detectors n the Presence of Strong Pulse Jammng Ivan G. Garvanov and Chrsto A. Kabachev Insttute of Informaton echnologes Bulgaran Academy of Scences
More informationOPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION. Christophe De Luigi and Eric Moreau
OPTIMAL COMBINATION OF FOURTH ORDER STATISTICS FOR NON-CIRCULAR SOURCE SEPARATION Chrstophe De Lug and Erc Moreau Unversty of Toulon LSEET UMR CNRS 607 av. G. Pompdou BP56 F-8362 La Valette du Var Cedex
More informationCOMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
More informationProf. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model
EXACT OE-DIMESIOAL ISIG MODEL The one-dmensonal Isng model conssts of a chan of spns, each spn nteractng only wth ts two nearest neghbors. The smple Isng problem n one dmenson can be solved drectly n several
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationAPPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14
APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce
More informationMultilayer Perceptron (MLP)
Multlayer Perceptron (MLP) Seungjn Cho Department of Computer Scence and Engneerng Pohang Unversty of Scence and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjn@postech.ac.kr 1 / 20 Outlne
More informationWeek3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity
Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle
More informationMaximizing Overlap of Large Primary Sampling Units in Repeated Sampling: A comparison of Ernst s Method with Ohlsson s Method
Maxmzng Overlap of Large Prmary Samplng Unts n Repeated Samplng: A comparson of Ernst s Method wth Ohlsson s Method Red Rottach and Padrac Murphy 1 U.S. Census Bureau 4600 Slver Hll Road, Washngton DC
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More informationCHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE
CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng
More information4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA
4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected
More informationPHYS 450 Spring semester Lecture 02: Dealing with Experimental Uncertainties. Ron Reifenberger Birck Nanotechnology Center Purdue University
PHYS 45 Sprng semester 7 Lecture : Dealng wth Expermental Uncertantes Ron Refenberger Brck anotechnology Center Purdue Unversty Lecture Introductory Comments Expermental errors (really expermental uncertantes)
More informationLecture 4: November 17, Part 1 Single Buffer Management
Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input
More informationMarkov Chain Monte Carlo Lecture 6
where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways
More informationLinear Regression Analysis: Terminology and Notation
ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented
More informationSTATISTICAL MECHANICS
STATISTICAL MECHANICS Thermal Energy Recall that KE can always be separated nto 2 terms: KE system = 1 2 M 2 total v CM KE nternal Rgd-body rotaton and elastc / sound waves Use smplfyng assumptons KE of
More informationA PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS
HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,
More informationarxiv:cs.cv/ Jun 2000
Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São
More informationLaboratory 1c: Method of Least Squares
Lab 1c, Least Squares Laboratory 1c: Method of Least Squares Introducton Consder the graph of expermental data n Fgure 1. In ths experment x s the ndependent varable and y the dependent varable. Clearly
More informationAn identification algorithm of model kinetic parameters of the interfacial layer growth in fiber composites
IOP Conference Seres: Materals Scence and Engneerng PAPER OPE ACCESS An dentfcaton algorthm of model knetc parameters of the nterfacal layer growth n fber compostes o cte ths artcle: V Zubov et al 216
More informationOutline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]
DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm
More informationLab 2e Thermal System Response and Effective Heat Transfer Coefficient
58:080 Expermental Engneerng 1 OBJECTIVE Lab 2e Thermal System Response and Effectve Heat Transfer Coeffcent Warnng: though the experment has educatonal objectves (to learn about bolng heat transfer, etc.),
More informationLecture 14: Forces and Stresses
The Nuts and Bolts of Frst-Prncples Smulaton Lecture 14: Forces and Stresses Durham, 6th-13th December 2001 CASTEP Developers Group wth support from the ESF ψ k Network Overvew of Lecture Why bother? Theoretcal
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationManaging Capacity Through Reward Programs. on-line companion page. Byung-Do Kim Seoul National University College of Business Administration
Managng Caacty Through eward Programs on-lne comanon age Byung-Do Km Seoul Natonal Unversty College of Busness Admnstraton Mengze Sh Unversty of Toronto otman School of Management Toronto ON M5S E6 Canada
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationDigital Modems. Lecture 2
Dgtal Modems Lecture Revew We have shown that both Bayes and eyman/pearson crtera are based on the Lkelhood Rato Test (LRT) Λ ( r ) < > η Λ r s called observaton transformaton or suffcent statstc The crtera
More informationStatistics for Economics & Business
Statstcs for Economcs & Busness Smple Lnear Regresson Learnng Objectves In ths chapter, you learn: How to use regresson analyss to predct the value of a dependent varable based on an ndependent varable
More informationPhysics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1
P. Guterrez Physcs 5153 Classcal Mechancs D Alembert s Prncple and The Lagrangan 1 Introducton The prncple of vrtual work provdes a method of solvng problems of statc equlbrum wthout havng to consder the
More informationStat260: Bayesian Modeling and Inference Lecture Date: February 22, Reference Priors
Stat60: Bayesan Modelng and Inference Lecture Date: February, 00 Reference Prors Lecturer: Mchael I. Jordan Scrbe: Steven Troxler and Wayne Lee In ths lecture, we assume that θ R; n hgher-dmensons, reference
More informationLaboratory 3: Method of Least Squares
Laboratory 3: Method of Least Squares Introducton Consder the graph of expermental data n Fgure 1. In ths experment x s the ndependent varable and y the dependent varable. Clearly they are correlated wth
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationChapter 13: Multiple Regression
Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to
More informationInternational Journal of Pure and Applied Sciences and Technology
Int. J. Pure Appl. Sc. Technol., 4() (03), pp. 5-30 Internatonal Journal of Pure and Appled Scences and Technology ISSN 9-607 Avalable onlne at www.jopaasat.n Research Paper Schrödnger State Space Matrx
More informationGaussian Mixture Models
Lab Gaussan Mxture Models Lab Objectve: Understand the formulaton of Gaussan Mxture Models (GMMs) and how to estmate GMM parameters. You ve already seen GMMs as the observaton dstrbuton n certan contnuous
More informationA New Evolutionary Computation Based Approach for Learning Bayesian Network
Avalable onlne at www.scencedrect.com Proceda Engneerng 15 (2011) 4026 4030 Advanced n Control Engneerng and Informaton Scence A New Evolutonary Computaton Based Approach for Learnng Bayesan Network Yungang
More information