Mathematical Equivalence of Two Common Forms of Firing Rate Models of Neural Networks
NOTE  Communicated by Terrence Sejnowski

Kenneth D. Miller, ken@neurotheory.columbia.edu
Center for Theoretical Neuroscience, Dept. of Neuroscience, Swartz Program in Theoretical Neuroscience, and Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA

Francesco Fumarola, fumarola@phys.columbia.edu
Department of Physics, Columbia University, New York, NY 10027, USA

We demonstrate the mathematical equivalence of two commonly used forms of firing rate model equations for neural networks. In addition, we show that what is commonly interpreted as the firing rate in one form of model may be better interpreted as a low-pass-filtered firing rate, and we point out a conductance-based firing rate model.

At least since the pioneering work of Wilson and Cowan (1972), it has been common to study neural circuit behavior using rate equations: equations that specify neural activities simply in terms of their rates of firing action potentials, as opposed to spiking models, in which the actual emissions of action potentials, or spikes, are modeled. Rate models can be derived as approximations to spiking models in a variety of ways (Wilson & Cowan, 1972; Mattia & Del Giudice, 2002; Shriki, Hansel, & Sompolinsky, 2003; Ermentrout, 1994; La Camera, Rauch, Luscher, Senn, & Fusi, 2004; Aviel & Gerstner, 2006; Ostojic & Brunel, 2011; reviewed in Ermentrout & Terman, 2010; Gerstner & Kistler, 2002; and Dayan & Abbott, 2001).

Two forms of rate model most commonly used to model neural circuits are the following, which we will refer to as the v-equation and the r-equation, respectively:

    τ dv/dt = −v + Ĩ + W f(v),    (1)

    τ dr/dt = −r + f(Wr + I).    (2)

Neural Computation 24 (2012), © 2011 Massachusetts Institute of Technology
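The two dynamics can be compared directly in simulation. The following is an illustrative sketch (not from the original note) using forward Euler integration, a tanh nonlinearity, and a constant input, so that Ĩ = I; the weight matrix and all parameter values are arbitrary assumptions. With matched initial conditions v(0) = W r(0) + I, the two trajectories stay related by v = Wr + I.

```python
import numpy as np

rng = np.random.default_rng(0)
D, tau, dt, steps = 4, 10.0, 0.01, 5000

f = np.tanh                                # example static nonlinearity
W = rng.normal(scale=0.5, size=(D, D))     # arbitrary synaptic weight matrix
I = rng.normal(size=D)                     # constant input, so I~ = I

r = rng.normal(size=D)                     # r-model initial condition
v = W @ r + I                              # matched v-model initial condition

for _ in range(steps):
    v = v + (dt / tau) * (-v + I + W @ f(v))       # equation 1 (v-equation)
    r = r + (dt / tau) * (-r + f(W @ r + I))       # equation 2 (r-equation)

# v and Wr + I agree up to floating-point error at every step
print(np.max(np.abs(v - (W @ r + I))))
```

For constant input the relation is preserved exactly even by the Euler map, since one Euler step of equation 1 applied to W r_n + I reproduces W r_{n+1} + I; the printed discrepancy is pure round-off.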
Here, v and r are each vectors representing neural activity, with each element representing the activity of one neuron in the modeled circuit. v is commonly thought of as representing voltage, while r is commonly thought of as representing firing rate (probability of spiking per unit time). f(x) is a nonlinear input-output function that acts element-by-element on the elements of x; that is, it has ith element (f(x))_i = f(x_i) for some nonlinear function of one variable, f. f typically takes such forms as an exponential, a power law, or a sigmoid function, and f(v_i) is typically regarded as a static nonlinearity converting the voltage of the ith cell, v_i, to the cell's instantaneous firing rate. W is the matrix of synaptic weights between the neurons in the modeled circuit. Ĩ and I are the vectors of external inputs to the neurons in the v or r networks, respectively, which may be time dependent.

In the appendix, we illustrate a simple heuristic derivation of the v-equation, starting from the biophysical equation for the voltages v_i. Along the way, we also point to a conductance-based version of the rate equation.

When developing a rate model of a network, it can be unclear which form of equation to use or whether it makes a difference. Here we demonstrate that the choice between equations 1 and 2 makes no difference: the two models are mathematically equivalent, and so will display the same set of behaviors. It has been noted previously (Beer, 2006) that when I is constant and W is invertible, the two equations are equivalent under the relationship v = Wr + I, Ĩ = I. We generalize this result to demonstrate the equivalence of the two equations when W is not invertible and inputs may be time dependent.

The v-equation is defined when we specify the input across time, Ĩ(t), and the initial condition v(0); we will call the combination of these and equation 1 a v-model. The r-equation is defined when we specify I(t) and r(0); we will call the combination of these and equation 2 an r-model. We will show that any v-model can be mapped to an r-model and any r-model can be mapped to a v-model
such that the solutions to equations 1 and 2 satisfy v = Wr + I. As we will see, the inputs in equivalent models are related by Ĩ = I + τ dI/dt, or τ dI/dt = −I + Ĩ. That is, I is a low-pass-filtered version of Ĩ. Note that there is an equivalence class of I, parameterized by I(0), that all correspond to the same Ĩ under this equivalence. We assume that the equivalence class has been specified, that is, Ĩ has been specified (if I has been specified, Ĩ can be found as Ĩ = I + τ dI/dt). Then a v-model is defined by specifying v(0), while an r-model is defined by specifying the set {r(0), I(0)}. If W is D × D, then v(0) is D-dimensional, while {r(0), I(0)} is 2D-dimensional, so we can guess that the map from r to v takes a D-dimensional space of r-models to a single v-model, and conversely the map from v to r takes a single v-model back to a D-dimensional space of r-models; we will show that this is true.
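The relation between the two inputs is a first-order low-pass filter, which can be made concrete numerically. In this illustrative sketch (not from the note), a step input Ĩ(t) is filtered by integrating τ dI/dt = −I + Ĩ from an arbitrary I(0), and Ĩ is then recovered as I + τ dI/dt by finite differences; all values are assumptions.

```python
import numpy as np

tau, dt = 10.0, 0.01
t = np.arange(0.0, 100.0, dt)
I_tilde = (t > 30.0).astype(float)        # example step input I~(t)

# Integrate tau dI/dt = -I + I~ by forward Euler; the choice of I(0)
# parameterizes the equivalence class of r-model inputs for this I~.
I = np.empty_like(t)
I[0] = 0.2                                # one arbitrary member of the class
for n in range(len(t) - 1):
    I[n + 1] = I[n] + (dt / tau) * (-I[n] + I_tilde[n])

# I~ = I + tau dI/dt recovers the v-model input from the r-model input
I_tilde_rec = I[:-1] + tau * np.diff(I) / dt
print(np.max(np.abs(I_tilde_rec - I_tilde[:-1])))
```

Because the forward-difference quotient inverts the Euler update exactly, the recovery error here is round-off only; any other I(0) would give a different transient I(t) but the same recovered Ĩ.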
We first show that if r evolves according to the r-equation, then Wr + I evolves according to the v-equation. Setting v = Wr + I, we find:

    τ dv/dt = W τ dr/dt + τ dI/dt = W(−r + f(Wr + I)) + τ dI/dt    (3)
            = −(v − I) + W f(v) + τ dI/dt    (4)
            = −v + Ĩ + W f(v).    (5)

Therefore, if v evolves according to the v-equation and r evolves according to the r-equation and v(0) = Wr(0) + I(0), then, since the v-equation propagates Wr + I forward in time, v = Wr + I at all times t > 0. We will thus have established the desired equivalence if we can solve v(0) = Wr(0) + I(0) for any v-model, specified by v(0), or for any r-model, specified by {r(0), I(0)}. Note that, as expected, a D-dimensional space of r-models converges on the same v-model: since {r(0), I(0)} forms a 2D-dimensional space, which is constrained by the D-dimensional equation v(0) = Wr(0) + I(0), the D-dimensional subspace of r-models {r(0), I(0)} that satisfy this equation all converge on the same v-model.

To go from an r-model to a v-model is straightforward: we simply set v(0) = Wr(0) + I(0). To go from a v-model to an r-model, we first define some useful notation:¹

- N_W is the null space of W, that is, the subspace of all vectors that W maps to 0. P_N is the projection operator into N_W.
- N_W⊥ is the subspace perpendicular to N_W. This is the subspace spanned by the rows of W. P_N⊥ is the projection operator into N_W⊥.
- R_W is the range of W, that is, the subspace of vectors that can be written Wx for some x. This is the subspace spanned by the columns of W. P_R is the projection operator into R_W.
- R_W⊥ is the subspace perpendicular to R_W, also called the left null space. P_R⊥ is the projection operator into R_W⊥.

For any vector x, we define x_N ≡ P_N x, x_N⊥ ≡ P_N⊥ x, x_R ≡ P_R x, x_R⊥ ≡ P_R⊥ x. We rely on the fact that x = x_N + x_N⊥ = x_R + x_R⊥.

¹ If W is normal, the eigenvectors are orthogonal, so the null space is precisely the space orthogonal to the range: P_N = P_R⊥ and P_N⊥ = P_R. However, if W is nonnormal, then vectors orthogonal to the null space can be mapped into the null space; the range always has the dimension of the full space minus the dimension of the null space, but it need not be orthogonal to the null space.
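These projectors can be computed from the singular value decomposition of W. The sketch below (an illustration, not from the note) builds a rank-deficient nonnormal 3 × 3 example and checks the decomposition x = x_N + x_N⊥ and the footnote's point that P_N ≠ P_R⊥ when W is nonnormal; the particular matrix is an arbitrary choice.

```python
import numpy as np

# Rank-2, nonnormal 3x3 example (third row = sum of first two)
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])

U, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-10 * s[0]))

B_R     = U[:, :rank]            # orthonormal basis of the range R_W
B_Nperp = Vt[:rank, :].T         # orthonormal basis of N_W-perp (row space)

P_R     = B_R @ B_R.T            # projector into R_W
P_Rperp = np.eye(3) - P_R        # projector into R_W-perp (left null space)
P_Nperp = B_Nperp @ B_Nperp.T    # projector into N_W-perp
P_N     = np.eye(3) - P_Nperp    # projector into the null space N_W

x = np.array([0.3, -1.2, 2.0])
print(np.allclose(x, P_N @ x + P_Nperp @ x))   # x = x_N + x_N-perp
print(np.allclose(W @ (P_N @ x), 0.0))         # W maps N_W to zero
print(np.allclose(P_N, P_Rperp))               # False here: W is nonnormal
```

Here the null space is spanned by (1, −1, 1) while the left null space is spanned by (1, 1, −1), so the last check prints False, illustrating the nonnormal case.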
Given a v-model, the equation v(0) = Wr(0) + I(0) has a solution if and only if v(0) − I(0) ∈ R_W, which is true if and only if v_R⊥(0) − I_R⊥(0) = 0,² so we must choose

    I_R⊥(0) = v_R⊥(0).    (9)

Letting D_R be the dimension of R_W and D_N the dimension of N_W, the fundamental theorem of linear algebra states that D_R + D_N = D. So I_R⊥(0) has dimension D_N. This leaves unspecified I_R(0), which has dimension D_R.

To solve for r_N⊥(0), we note that the equation v = Wr + I can equivalently be written v = W r_N⊥ + I (because W r_N = 0, so Wr = W r_N⊥). That is, knowledge of v specifies only r_N⊥. We define W⁻¹ to be the Moore-Penrose pseudo-inverse of W. This is the matrix that gives the one-to-one mapping of R_W into N_W⊥ that inverts the one-to-one mapping of N_W⊥ to R_W induced by W, and that maps all vectors in R_W⊥ to 0.³ The pseudo-inverse has the property that W⁻¹W = P_N⊥ while WW⁻¹ = P_R. Then we can solve for r_N⊥(0) as

    r_N⊥(0) = W⁻¹(v(0) − I(0)) = W⁻¹(v_R(0) − I_R(0)).    (10)

This is a D_R-dimensional equation for the 2D_R-dimensional set of unknowns {r_N⊥(0), I_R(0)}, so it determines D_R of these parameters and leaves D_R free. For example, it could be solved by freely choosing I_R(0) and then setting r_N⊥(0) = W⁻¹(v_R(0) − I_R(0)), or by freely choosing r_N⊥(0) and then setting I_R(0) = v_R(0) − W r_N⊥(0).

Equations 10 and 9 together ensure the equality v(0) = Wr(0) + I(0). Applying W to both sides of equation 10 yields v_R(0) = W r_N⊥(0) + I_R(0) = Wr(0) + I_R(0). This states that the equality holds within the range of W;

² Note that the condition v − I ∈ R_W, meaning that v = Wr + I can be solved, is true for all time if it is true in the initial condition. We compute:

    τ d(v − I)/dt = −v + Ĩ + W f(v) − τ dI/dt    (6)
                  = −(v − I) + W f(v).    (7)

Applying P_R⊥ to equation 7 and noting that P_R⊥ W = 0, we find

    τ d(v_R⊥ − I_R⊥)/dt = −(v_R⊥ − I_R⊥).    (8)

If v(0) − I(0) ∈ R_W, then v_R⊥(0) − I_R⊥(0) = 0, and hence v_R⊥ − I_R⊥ = 0 at all subsequent times, so v − I ∈ R_W at all subsequent times. Note also that for any initial conditions, the condition v(t) − I(t) ∈ R_W becomes true asymptotically as t → ∞.

³ If the singular value decomposition of a matrix M is M = USV†, where S is the diagonal matrix of singular values and U and V are unitary matrices, then its pseudoinverse is M⁻¹ = VS⁻¹U†, where S⁻¹ is the pseudoinverse of S, obtained by inverting all nonzero singular values in S.
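NumPy's pinv implements this pseudoinverse, which makes the v-model to r-model construction easy to check numerically. In the sketch below (illustrative choices throughout, not from the note), a random rank-deficient W supplies the projectors via W⁻¹W = P_N⊥ and WW⁻¹ = P_R, and one equivalent r-model initial condition is built from a given v(0) via equations 9 and 10, with the free pieces I_R(0) and r_N(0) chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4
W = rng.normal(size=(D, 2)) @ rng.normal(size=(2, D))   # rank-2, D x D

Winv = np.linalg.pinv(W)             # Moore-Penrose pseudoinverse W^-1
P_Nperp = Winv @ W                   # W^-1 W = projector into N_W-perp
P_R     = W @ Winv                   # W W^-1 = projector into R_W
P_N     = np.eye(D) - P_Nperp
P_Rperp = np.eye(D) - P_R

v0 = rng.normal(size=D)              # the given v-model initial condition

# Equation 9: I_Rperp(0) = v_Rperp(0); I_R(0) is free (arbitrary here).
I0 = P_Rperp @ v0 + P_R @ rng.normal(size=D)
# Equation 10: r_Nperp(0) = W^-1 (v(0) - I(0)); r_N(0) is free.
r0 = Winv @ (v0 - I0) + P_N @ rng.normal(size=D)

print(np.allclose(P_Nperp @ P_Nperp, P_Nperp))   # projector: idempotent
print(np.allclose(v0, W @ r0 + I0))              # v(0) = W r(0) + I(0)
```

Varying the two arbitrary random vectors sweeps out the D-dimensional family of r-models equivalent to this v-model; every choice satisfies the final check.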
orthogonal to the range of W, we have P_R⊥ Wr = 0 and v_R⊥(0) = I_R⊥(0). Together, these yield v(0) = Wr(0) + I(0). Finally, we can freely choose r_N(0), which has no effect on the equation v(0) = Wr(0) + I(0). r_N(0) has D_N dimensions, so we have freely chosen D_R + D_N = D dimensions in finding an r-model that is equivalent to the v-model. That is, we have found a D-dimensional subspace of such r-models: those that satisfy v(0) = Wr(0) + I(0).

To summarize, we have established the equivalence between r-models and v-models. For each fixed choice of W, τ, and Ĩ(t), an r-model is specified by {r(0), I(0)} and equation 2, while a v-model is specified by v(0) and equation 1. The equivalence is established by setting v(0) = Wr(0) + I(0), which yields a D-dimensional subspace of equivalent r-models for a given v-model. Under this equivalence, v obeys equation 1, r obeys equation 2, and the two are related at all times by v = Wr + I, with τ dI/dt = −I + Ĩ. To go from an r-model to its equivalent v-model, we simply set v(0) = Wr(0) + I(0). To go from a v-model to one of its equivalent r-models, we set I_R⊥(0) = v_R⊥(0), freely choose r_N(0), and freely choose {r_N⊥(0), I_R(0)} from the D_R-dimensional subspace of such choices that satisfy r_N⊥(0) = W⁻¹(v_R(0) − I_R(0)), where W⁻¹ is the pseudoinverse of W.

Finally, note that equation 2 can be written τ dr/dt = −r + f(v). That is, if we regard v as a voltage and f(v) as a firing rate, as suggested by the derivation in the appendix, then r is a low-pass-filtered version of the firing rate, just as I is a low-pass-filtered version of the input Ĩ.

Appendix: Simple Derivation of the v-Equation

As an example of an unsophisticated and heuristic derivation of these equations (more sophisticated derivations can be found in the references in the main text), the v-equation can be derived as follows. We start with the equation for the membrane voltage of the ith neuron:

    C_i dv_i/dt = Σ_j g_ij (E_ij − v_i),    (A1)

where C_i is the capacitance of the ith neuron and g_ij is the jth conductance onto the neuron, with reversal potential E_ij. We assume that the g_ij are composed of an intrinsic conductance g_L^i with reversal potential E_L^i; external input g_ext^i with reversal potential E_ext^i; and within-network synaptic conductances, with g̃_ij representing input from neuron j, with reversal potential Ẽ_ij. Dividing by Σ_k g_ik and defining τ_i(t) = C_i / Σ_k g_ik gives

    τ_i(t) dv_i/dt = −v_i + (g_L^i E_L^i + g_ext^i E_ext^i + Σ_j g̃_ij Ẽ_ij) / Σ_k g_ik.    (A2)
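Equation A2 can be integrated directly, keeping the state-dependent conductances and time constant; this is the conductance-based rate model pointed to in the main text. The sketch below is an illustrative implementation with made-up parameter values (the network size, conductances, reversal potentials, and nonlinearity are all assumptions, not from the note); voltages are in transformed units with excitatory and inhibitory reversals at +1 and −1.

```python
import numpy as np

D = 3
C     = np.ones(D)                     # capacitances C_i
g_L   = 10.0 * np.ones(D)              # leak conductances
E_L   = -1.0 * np.ones(D)              # leak reversal potentials
g_ext = np.array([2.0, 1.0, 0.5])      # external input conductances
E_ext = np.ones(D)                     # external reversal potentials
Wt    = np.array([[0.0, 0.5, 0.2],     # nonnegative W~_ij: g~_ij = W~_ij r_j
                  [0.3, 0.0, 0.4],
                  [0.1, 0.6, 0.0]])
E_syn = np.array([1.0, 1.0, -1.0])     # reversal E~_ij of presynaptic cell j

f = lambda v: np.maximum(v + 1.0, 0.0) # example static nonlinearity r = f(v)
v = -1.0 * np.ones(D)                  # start at the leak reversal
dt = 0.001

for _ in range(20000):
    g_syn = Wt * f(v)[None, :]                 # g~_ij as an (i, j) array
    g_tot = g_L + g_ext + g_syn.sum(axis=1)    # sum_k g_ik
    tau   = C / g_tot                          # tau_i(t) = C_i / sum_k g_ik
    drive = (g_L * E_L + g_ext * E_ext + g_syn @ E_syn) / g_tot
    v = v + (dt / tau) * (-v + drive)          # forward Euler on equation A2
print(v)
```

Because the drive term is a conductance-weighted average of the reversal potentials, each v_i remains between −1 and +1 throughout; with the strong leak chosen here the network settles to a steady state.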
We now make a number of further simplifying assumptions. We assume that g̃_ij is proportional to the firing rate r_j of neuron j, with proportionality constant W̃_ij ≥ 0: g̃_ij = W̃_ij r_j. This ignores synaptic time courses, among other things. We assume that r_j is given by the static nonlinearity r_j = f(v_j) (see Miller & Troyer, 2002; Hansel & van Vreeswijk, 2002; Priebe, Mechler, Carandini, & Ferster, 2004, for such a relationship between firing rate and voltage averaged over a few tens of milliseconds). We assume synapses are either excitatory with reversal potential E_E or inhibitory with reversal potential E_I, and linearly transform the units of voltage so that E_E = 1 and E_I = −1. We define W_ij = W̃_ij Ẽ_ij. This is now a synaptic weight that is positive for excitatory synapses and negative for inhibitory synapses. We define Ĩ_i ≡ g_L^i E_L^i + g_ext^i E_ext^i and define ḡ_i ≡ g_L^i + g_ext^i. This yields the conductance-based rate equation,

    τ_i(t) dv_i/dt = −v_i + (Ĩ_i + Σ_j W_ij f(v_j)) / (ḡ_i + Σ_k W̃_ik f(v_k)),    (A3)

with τ_i(t) = C_i / (ḡ_i + Σ_k W̃_ik f(v_k)). Finally, we assume that the total conductance, represented by the denominator in the last term of equation A3, can be taken to be constant, for example, if g_L^i is much larger than synaptic and external conductances or if inputs tend to be push-pull, with withdrawal of some inputs compensating for addition of others. We absorb the constant denominator into the definitions of Ĩ_i and W_ij and note that this also implies that τ_i is constant, to arrive finally at the v-equation:

    τ dv_i/dt = −v_i + Σ_j W_ij f(v_j) + Ĩ_i.    (A4)

Acknowledgments

This work was supported by R01-EY11001 from the National Eye Institute and by the Gatsby Charitable Foundation through the Gatsby Initiative in Brain Circuitry at Columbia University.

References

Aviel, Y., & Gerstner, W. (2006). From spiking neurons to rate models: A cascade model as an approximation to spiking neuron models with refractoriness. Phys. Rev. E, 73.
Beer, R. D. (2006). Parameter space structure of continuous-time recurrent neural networks. Neural Comput., 18.
Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.
Ermentrout, B. (1994). Reduction of conductance based models with slow synapses to neural nets. Neural Comput., 6.
Ermentrout, G. B., & Terman, D. H. (2010). Mathematical foundations of neuroscience. New York: Springer.
Gerstner, W., & Kistler, W. (2002). Spiking neuron models. Cambridge: Cambridge University Press.
Hansel, D., & van Vreeswijk, C. (2002). How noise contributes to contrast invariance of orientation tuning in cat visual cortex. J. Neurosci., 22.
La Camera, G., Rauch, A., Luscher, H. R., Senn, W., & Fusi, S. (2004). Minimal models of adapted neuronal response to in vivo-like input currents. Neural Comput., 16.
Mattia, M., & Del Giudice, P. (2002). Population dynamics of interacting spiking neurons. Phys. Rev. E, 66.
Miller, K. D., & Troyer, T. W. (2002). Neural noise can explain expansive, power-law nonlinearities in neural response functions. J. Neurophysiol., 87.
Ostojic, S., & Brunel, N. (2011). From spiking neuron models to linear-nonlinear models. PLoS Comput. Biol., 7, e.
Priebe, N., Mechler, F., Carandini, M., & Ferster, D. (2004). The contribution of spike threshold to the dichotomy of cortical simple and complex cells. Nat. Neurosci., 7(10).
Shriki, O., Hansel, D., & Sompolinsky, H. (2003). Rate models for conductance-based cortical neuronal networks. Neural Comput., 15.
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biol. Cybern., 12, 1-24.

Received July 6, 2011; accepted July 10, 2011.
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationThe Feynman path integral
The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space
More informationFisher Linear Discriminant Analysis
Fsher Lnear Dscrmnant Analyss Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan Fsher lnear
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016
U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationMarkov chains. Definition of a CTMC: [2, page 381] is a continuous time, discrete value random process such that for an infinitesimal
Markov chans M. Veeraraghavan; March 17, 2004 [Tp: Study the MC, QT, and Lttle s law lectures together: CTMC (MC lecture), M/M/1 queue (QT lecture), Lttle s law lecture (when dervng the mean response tme
More informationRandom Walks on Digraphs
Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected
More informationComplex Numbers. x = B B 2 4AC 2A. or x = x = 2 ± 4 4 (1) (5) 2 (1)
Complex Numbers If you have not yet encountered complex numbers, you wll soon do so n the process of solvng quadratc equatons. The general quadratc equaton Ax + Bx + C 0 has solutons x B + B 4AC A For
More informationMath 217 Fall 2013 Homework 2 Solutions
Math 17 Fall 013 Homework Solutons Due Thursday Sept. 6, 013 5pm Ths homework conssts of 6 problems of 5 ponts each. The total s 30. You need to fully justfy your answer prove that your functon ndeed has
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationHowever, since P is a symmetric idempotent matrix, of P are either 0 or 1 [Eigen-values
Fall 007 Soluton to Mdterm Examnaton STAT 7 Dr. Goel. [0 ponts] For the general lnear model = X + ε, wth uncorrelated errors havng mean zero and varance σ, suppose that the desgn matrx X s not necessarly
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationPositive feedback Derivative feedback Pos. + der. feedback c Stronger. E I e f g Time (s)
Frng rate (Hz) a f nput current Transent nput Step-lke nput Postve feedback Dervatve feedback Pos. + der. feedback b c Stronger d e f g Frng rate (Hz) h j Frng rate (Hz) 5 5 3 4 5 3 4 5 Tme (s) 6 4 5 3
More informationStanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationCHALMERS, GÖTEBORGS UNIVERSITET. SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD
CHALMERS, GÖTEBORGS UNIVERSITET SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS COURSE CODES: FFR 35, FIM 72 GU, PhD Tme: Place: Teachers: Allowed materal: Not allowed: January 2, 28, at 8 3 2 3 SB
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationMATH 5630: Discrete Time-Space Model Hung Phan, UMass Lowell March 1, 2018
MATH 5630: Dscrete Tme-Space Model Hung Phan, UMass Lowell March, 08 Newton s Law of Coolng Consder the coolng of a well strred coffee so that the temperature does not depend on space Newton s law of collng
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM
ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM An elastc wave s a deformaton of the body that travels throughout the body n all drectons. We can examne the deformaton over a perod of tme by fxng our look
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationBezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0
Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the
More informationw ). Then use the Cauchy-Schwartz inequality ( v w v w ).] = in R 4. Can you find a vector u 4 in R 4 such that the
Math S-b Summer 8 Homework #5 Problems due Wed, July 8: Secton 5: Gve an algebrac proof for the trangle nequalty v+ w v + w Draw a sketch [Hnt: Expand v+ w ( v+ w) ( v+ w ) hen use the Cauchy-Schwartz
More informationDeterminants Containing Powers of Generalized Fibonacci Numbers
1 2 3 47 6 23 11 Journal of Integer Sequences, Vol 19 (2016), Artcle 1671 Determnants Contanng Powers of Generalzed Fbonacc Numbers Aram Tangboonduangjt and Thotsaporn Thanatpanonda Mahdol Unversty Internatonal
More informationTHE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens
THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of
More informationFoundations of Arithmetic
Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an
More informationThe equation of motion of a dynamical system is given by a set of differential equations. That is (1)
Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence
More informationCanonical transformations
Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More informationOnline Appendix. t=1 (p t w)q t. Then the first order condition shows that
Artcle forthcomng to ; manuscrpt no (Please, provde the manuscrpt number!) 1 Onlne Appendx Appendx E: Proofs Proof of Proposton 1 Frst we derve the equlbrum when the manufacturer does not vertcally ntegrate
More informationYong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )
Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often
More informationTracking with Kalman Filter
Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,
More information