NIP: Hebbian Learning
Neural Information Processing
Amos Storkey
Outline
- Types of learning, learning strategies
- Neurophysiology, LTP/LTD
- Basic Hebb rule, covariance rule, BCM rule
- Eigenanalysis
- Constraints: subtractive and multiplicative normalization
- Multiple output neurons
- Timing-based rules

Reading: [Dayan and Abbott, 2002] ch. 8; [Hertz et al., 1991]

Types of Learning
- supervised: input u, output v; model p(v|u)
- reinforcement: input u and scalar reward r; often associated with a temporal credit assignment problem
- unsupervised: model p(u)
Learning Strategies
- Bottom-up: rules of neural plasticity, e.g. Hebbian.
- Top-down: use of an objective function E(w), with updates determined by the gradient ∇_w E(w).
- The focus in the course will be mostly on the objective-function approach, but we start with Hebbian learning.

Hebbian Learning
- Hebb [Hebb, 1949] conjectured that if input from neuron A often contributes to the firing of neuron B, then the synapse from A to B should be strengthened.
- Strengthening is long-term potentiation (LTP); weakening is long-term depression (LTD).
- Strong element of causality; extensive biochemical mechanisms, protein synthesis.
- Observed in many systems: hippocampus, neocortex, cerebellum. LTP and LTD in Schaffer collateral inputs to CA1. [Figure from Dayan and Abbott (2001)]
- Hebbian learning: long-term modification of synapses based on pre- and post-synaptic activity. (In contrast to: homeostasis, excitability changes, non-local learning such as backpropagation, etc.)

Firing-Rate Model
τ_r dv/dt = −v + w·u
Assume that τ_r is small compared to the timescale of weight change, so set dv/dt = 0 to give v = w·u. The weight dynamics then take the form
τ_w dw/dt = f(v, u, w)
The basic Hebb rule is
τ_w dw/dt = u v
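To make the positive-feedback behaviour of the basic Hebb rule concrete, here is a minimal simulation sketch. The 2-D input distribution, time constant, and step size are illustrative assumptions, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D correlated Gaussian inputs (an assumption for the demo).
U = rng.multivariate_normal([0.0, 0.0], [[2.0, 1.2], [1.2, 1.0]], size=1000)

w = 0.01 * rng.normal(size=2)
dt, tau_w = 0.01, 1.0

norms = []
for u in U:
    v = w @ u                      # steady-state firing rate: v = w . u
    w += (dt / tau_w) * u * v      # basic Hebb rule: tau_w dw/dt = u v
    norms.append(np.linalg.norm(w))

# Positive feedback: |w| grows throughout the run.
assert norms[-1] > norms[0]
```

Left unchecked, |w| grows exponentially at a rate set by the largest eigenvalue of the input correlation matrix, which is why the later slides introduce constraints.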
Basic Hebb Rule
Averaging over the input distribution ⟨·⟩ gives
τ_w dw/dt = ⟨u v⟩ = ⟨u uᵀ⟩ w = Q w
where Q is the input correlation matrix.
Positive feedback instability:
τ_w d|w|²/dt = 2 τ_w w·(dw/dt) = 2⟨v²⟩ > 0
Discrete-time version of the Hebb rule: w → w + ε Q w.

Covariance Rule
Implementing LTD at low activity:
τ_w dw/dt = ⟨(u − θ_u) v⟩  or  τ_w dw/dt = ⟨u (v − θ_v)⟩
If θ_v = ⟨v⟩ or θ_u = ⟨u⟩ then
τ_w dw/dt = C w
where C = ⟨(u − ⟨u⟩)(u − ⟨u⟩)ᵀ⟩ is the input covariance matrix. Still unstable, as
τ_w d|w|²/dt = 2⟨v(v − ⟨v⟩)⟩ = 2 var(v) > 0

Hebb Rule: Single Postsynaptic Neuron — PCA
Basic Hebb rule τ_w dw/dt = Q w; analyse using an eigendecomposition of Q:
Q e_μ = λ_μ e_μ,  λ_1 ≥ λ_2 ≥ …
As Q is symmetric and positive semi-definite, the eigenvalues are real and non-negative, and the eigenvectors are orthogonal. Write
w(t) = Σ_{μ=1}^{N_u} (w(0)·e_μ) exp(λ_μ t/τ_w) e_μ
As e_1 has the largest eigenvalue, its component grows fastest, so w ∝ e_1 for large t. A similar analysis applies for τ_w dw/dt = C w. [Figure: Dayan and Abbott 2001]
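The eigen-analysis can be checked numerically. The sketch below uses an illustrative 2×2 correlation matrix Q (an assumption, not from the notes), integrates τ_w dw/dt = Qw, and confirms that w aligns with the principal eigenvector e₁; the per-step renormalisation only keeps the numbers finite and does not change the direction:

```python
import numpy as np

# Illustrative symmetric correlation matrix.
Q = np.array([[2.0, 0.8],
              [0.8, 1.0]])

evals, evecs = np.linalg.eigh(Q)
e1 = evecs[:, np.argmax(evals)]          # principal eigenvector of Q

w = np.array([1.0, 0.3])
dt, tau_w = 0.01, 1.0
for _ in range(5000):
    w += (dt / tau_w) * Q @ w            # averaged Hebb rule: tau_w dw/dt = Q w
    w /= np.linalg.norm(w)               # renormalise for numerics only

# The weight direction matches e1 (up to sign).
assert abs(abs(w @ e1) - 1.0) < 1e-6
```

This is just power iteration in disguise: the component along e₁ grows fastest, exactly as the exponential solution above predicts.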
Energy Function Analysis
Under gradient-descent dynamics [Linsker, 1988], define
E = −(1/2) wᵀ C w
so that
τ_w dw/dt = −∇_w E = C w

BCM Rule
The covariance rules τ_w dw/dt = ⟨(u − θ_u) v⟩ and τ_w dw/dt = ⟨u (v − θ_v)⟩ give LTD with v = 0 or u = 0 respectively, which is odd. Bienenstock, Cooper and Munro [Bienenstock et al., 1982] proposed the BCM rule
τ_w dw/dt = v u (v − θ_v)
for which there is experimental evidence. If θ_v is fixed then the BCM rule is unstable. But if θ_v is set to a low-pass filtered version of v² (a sliding threshold) then stability is achieved. With a sliding threshold the BCM rule implements competition between synapses.

Stability and Competition
Hebbian learning involves positive feedback. It can be controlled by:
- LTD: not enough on its own (the covariance and correlation rules remain unstable)
- saturation or bounds, which prevent weights getting too big or too small
- normalization over pre-synaptic or post-synaptic arbors: subtractive (decrease all synapses by the same amount, whether large or small) or multiplicative (decrease large synapses more than small synapses)

Subtractive Normalization
For non-negative w, keep Σ_i w_i = n·w constant, where n = (1, 1, …, 1)ᵀ. The dynamics
τ_w dw/dt = v u − v (n·u)/N_u n
keep n·w constant, since
τ_w d(n·w)/dt = v n·u (1 − n·n/N_u) = 0
Subtractive normalization needs a lower bound to stop weights becoming negative. If there is no upper saturation constraint then the final outcome is that all weights but one become 0.
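The conservation of n·w under subtractive normalisation can be verified directly. This minimal sketch uses illustrative random inputs (an assumption; any non-negative inputs would do):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hebb rule with subtractive normalisation:
# tau_w dw/dt = v u - v (n.u)/N_u n, which conserves the summed weight n.w.
N_u = 4
n = np.ones(N_u)
w = rng.random(N_u)
total0 = n @ w                       # initial summed weight

dt, tau_w = 0.001, 1.0
for _ in range(1000):
    u = rng.random(N_u)
    v = w @ u
    w += (dt / tau_w) * (v * u - v * (n @ u) / N_u * n)

# n.w is conserved by construction (up to floating-point rounding).
assert abs(n @ w - total0) < 1e-9
```

The conservation follows term by term: the update's projection onto n is v(n·u) − v(n·u)(n·n)/N_u = 0, since n·n = N_u.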
Multiplicative Normalization and the Oja Rule
Ensure |w|² remains bounded by setting [Oja, 1982]
τ_w dw/dt = v u − α v² w
Taking the dot product with 2w gives
τ_w d|w|²/dt = 2 v² (1 − α |w|²)
which shows that |w|² relaxes to 1/α, preventing the weights growing without bound.

The Effect of Constraints
- The Oja rule gives w(t) → e_1/√α.
- Saturation can change this outcome. [Figure: Hebbian dynamics with saturation; principal eigenvector is (1, 1)ᵀ/√2. Dayan and Abbott 2001]
- Subtractive normalization: if e_1·n ≠ 0 then growth of w in the direction of n is stunted.

Example: Ocular Dominance
Consider a simplified model in which a single cortical cell receives input from two LGN afferents, with activities u_l and u_r from the left and right eyes:
v = w_r u_r + w_l u_l
The weights are constrained to be non-negative. The input correlation matrix is
Q = ⟨u uᵀ⟩ = [ q_s  q_d ; q_d  q_s ]
with q_s > q_d (Same and Different). Eigenvectors and eigenvalues:
e_1 = (1, 1)ᵀ/√2,  λ_1 = q_s + q_d
e_2 = (1, −1)ᵀ/√2,  λ_2 = q_s − q_d
Ocular dominance occurs when one weight becomes 0 while the other is positive. The principal eigenvector does not give rise to an OD solution; OD can be obtained by use of subtractive normalization, as e_1 ∝ n.
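The Oja rule's self-normalisation is easy to check numerically. The sketch below uses illustrative white-noise inputs (an assumption; with whitened inputs only the claim |w|² → 1/α is being tested, not the PCA direction):

```python
import numpy as np

rng = np.random.default_rng(3)

# Oja rule: tau_w dw/dt = v u - alpha v^2 w.  |w|^2 should relax to 1/alpha.
alpha = 1.0
w = 0.1 * rng.normal(size=3)
dt, tau_w = 0.01, 1.0

for _ in range(20000):
    u = rng.normal(size=3)           # illustrative white-noise input
    v = w @ u
    w += (dt / tau_w) * (v * u - alpha * v**2 * w)

# The squared norm has relaxed to approximately 1/alpha.
assert abs(w @ w - 1.0 / alpha) < 0.05
```

With correlated inputs the same rule additionally rotates w towards the principal eigenvector e₁, giving w → e₁/√α as stated above.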
Example: Orientation Selectivity
Given ON and OFF LGN cells, Hebbian learning can give rise to cortical orientation selectivity [Miller, 1994]. [Figure: Dayan and Abbott (2001), after Miller (1994)]

Linsker's Model
Linsker [Linsker, 1988] combined a layered architecture (layers A, B, C, D, …) and arbor-function constraints with Hebbian learning. With uncorrelated noise in the retina (layer A), the model develops learned centre-surround receptive fields in layer C, and orientation-selective RFs (bar detectors) in layer G. Constraints are needed to avoid the uniform component.

Analysis by MacKay and Miller [MacKay and Miller, 2001]:
τ_w dw/dt = (Q + k_2 J) w + k_1 n
where J is the matrix of ones and n is the vector of ones. Depending on the choice of the parameters k_1 and k_2, uniform, centre-surround and orientation-selective RFs can be obtained.
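One piece of the MacKay–Miller analysis is easy to check numerically: adding k₂J to Q shifts only the eigenvalue of the uniform mode n, since Jn = Nn while Je = 0 for any e orthogonal to n. A sketch with an illustrative circulant Q (chosen so the uniform vector is one of its eigenvectors):

```python
import numpy as np

# Q here is an illustrative symmetric circulant correlation matrix; for a
# circulant matrix the uniform vector n is an eigenvector.
N = 5
J = np.ones((N, N))
n = np.ones(N)
k2 = 0.7

Q = np.array([[np.exp(-min(abs(i - j), N - abs(i - j))) for j in range(N)]
              for i in range(N)])

lam_Q = (Q @ n)[0]                    # eigenvalue of Q on the uniform mode
lam_shifted = ((Q + k2 * J) @ n)[0]   # eigenvalue of Q + k2 J on that mode

# The uniform-mode eigenvalue is shifted by exactly k2 * N.
assert np.isclose(lam_shifted, lam_Q + k2 * N)
```

This is why k₂ gives direct control over whether the uniform component dominates or is suppressed relative to the structured (centre-surround, oriented) modes.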
Multiple Output Neurons
With multiple output neurons and no interaction between them, each output neuron would behave similarly. (Different initial conditions could still lead to different RFs.) Introduce fixed recurrent connections M:
τ_r dv/dt = −v + W u + M v
which at steady state gives
v = (I − M)⁻¹ W u ≡ K W u
Hebbian learning of W then gives
τ_w dW/dt = ⟨v uᵀ⟩ = K W Q
Three cases can be considered: (i) plastic W, fixed M; (ii) fixed W, plastic M; (iii) plastic W and M. Dayan and Abbott (2001) show OD stripes arising from fixed Mexican-hat interactions K and plastic W. [Figure: Dayan and Abbott 2001, after (right) Nicholls et al. 1992]

Anti-Hebbian Modification of Lateral Connections
Instead of a fixed M, make it plastic. One possible goal is to decorrelate the outputs, i.e. ⟨v vᵀ⟩ = I. Anti-Hebbian learning: synapses decrease when there is simultaneous pre- and post-synaptic activity. Recurrent interactions can prevent different outputs from representing the same eigenvector:
τ_m dM/dt = −v vᵀ + β M   (*)
(β > 0 is needed to avoid the M weights decaying to 0). For suitable β and τ_m, a combination of the Oja rule and (*) leads to the rows of W being different eigenvectors of Q, with M ultimately decaying to 0. Other neural algorithms for multiple-output PCA are possible, e.g. [Sanger, 1989, Oja, 1989].
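The steady-state relation v = (I − M)⁻¹Wu can be checked by integrating the recurrent dynamics directly. W, M, and u below are illustrative values, with M chosen small enough that the dynamics are stable:

```python
import numpy as np

# Recurrent firing-rate model: tau_r dv/dt = -v + W u + M v.
# Its steady state is v = (I - M)^{-1} W u = K W u.
W = np.array([[1.0, 0.2],
              [0.3, 0.8]])
M = np.array([[0.0, -0.4],
              [-0.4, 0.0]])           # fixed inhibitory lateral weights
u = np.array([1.0, 0.5])

K = np.linalg.inv(np.eye(2) - M)
v_ss = K @ W @ u                      # predicted steady state

# Verify by integrating the dynamics to convergence.
v = np.zeros(2)
dt, tau_r = 0.01, 1.0
for _ in range(5000):
    v += (dt / tau_r) * (-v + W @ u + M @ v)

assert np.allclose(v, v_ss, atol=1e-6)
```

The inverse exists (and the integration converges) because the eigenvalues of M here have magnitude below 1, so −I + M has strictly negative eigenvalues.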
Timing-Based Rules
[Figures: Dayan and Abbott (2001), after (left) Markram et al. (1997) and (right) Zhang et al. (1998)] Left: LTP and LTD. Right: spike-timing-dependent plasticity (STDP) in Xenopus. There is a window of ±50 ms, and the causal order agrees with Hebb's conjecture.

As an approximation to spiking models,
τ_w dw/dt = ∫_0^∞ [H(τ) v(t) u(t − τ) + H(−τ) v(t − τ) u(t)] dτ
If H(τ) is positive for τ > 0 and negative for τ < 0, then the first term on the RHS is LTP and the second is LTD.

Temporal Hebbian Rules and Trace Learning
An approximate solution of the timing-based rule is
w = (1/T) ∫_0^T dt v(t) ∫_0^∞ dτ H(τ) u(t − τ)
so temporally dependent Hebbian plasticity depends on the correlation between v(t) and a filtered version of u(t). The filtered version is a (memory) trace of u(t). This rule can be used to model the development of invariant responses, e.g. to the presence of an object in an image [Földiák, 1991, Wallis and Rolls, 1996].

References I
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2.
Dayan, P. and Abbott, L. F. (2002). Theoretical Neuroscience. MIT Press, Cambridge, MA.
Földiák, P. (1991). Learning invariance from transformation sequences. Neural Comp., 3.
Hebb, D. (1949). The Organization of Behavior. Wiley, New York.
Hertz, J., Krogh, A., and Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Perseus, Reading, MA.
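The window function H(τ) is often modelled with exponentials. The sketch below uses an assumed exponential form (the amplitudes and the 20 ms decay constant are illustrative choices, not from the notes) and checks the LTP/LTD sign structure over the ±50 ms window:

```python
import numpy as np

# Assumed exponential STDP window: positive for tau > 0 (pre before post, LTP)
# and negative for tau < 0 (post before pre, LTD).  A_plus, A_minus and
# tau_stdp are illustrative parameters.
def H(tau, A_plus=1.0, A_minus=1.0, tau_stdp=20.0):
    return np.where(tau > 0,
                    A_plus * np.exp(-tau / tau_stdp),
                    -A_minus * np.exp(tau / tau_stdp))

taus = np.linspace(-50.0, 50.0, 101)   # lags in ms, spanning the +/-50 ms window
window = H(taus)

assert window[taus > 0].min() > 0      # causal order gives LTP
assert window[taus < 0].max() < 0      # anti-causal order gives LTD
```

Plugging such an H into the integral rule above makes the first term strengthen synapses whose input precedes the output spike, in line with Hebb's causal conjecture.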
References II
Linsker, R. (1988). Self-organization in a perceptual network. Computer, 21(3).
MacKay, D. and Miller, K. (2001). Analysis of Linsker's simulations of Hebbian rules. In Self-Organizing Map Formation: Foundations of Neural Computation.
Miller, K. D. (1994). A model for the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between on- and off-center inputs. J. Neurosci., 14(1).
Oja, E. (1982). A simplified neuron model as a principal component analyzer. J. Math. Biol., 15.
Oja, E. (1989). Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1(1).

References III
Sanger, T. (1989). Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2(6).
Wallis, G. and Rolls, E. T. (1996). A model of invariant object recognition in the visual system. Prog. Neurobiol., 51.
More informationBayesian synaptic plasticity makes predictions about plasticity experiments in vivo
Bayesian synaptic plasticity makes predictions about plasticity experiments in vivo Laurence Aitchison and Peter E. Latham October 13, 014 arxiv:1410.109v [q-bio.nc] 10 Oct 014 Abstract Humans and other
More informationDeep learning in the brain. Deep learning summer school Montreal 2017
Deep learning in the brain Deep learning summer school Montreal 207 . Why deep learning is not just for AI The recent success of deep learning in artificial intelligence (AI) means that most people associate
More informationPrinciples of DCM. Will Penny. 26th May Principles of DCM. Will Penny. Introduction. Differential Equations. Bayesian Estimation.
26th May 2011 Dynamic Causal Modelling Dynamic Causal Modelling is a framework studying large scale brain connectivity by fitting differential equation models to brain imaging data. DCMs differ in their
More informationEEE 241: Linear Systems
EEE 4: Linear Systems Summary # 3: Introduction to artificial neural networks DISTRIBUTED REPRESENTATION An ANN consists of simple processing units communicating with each other. The basic elements of
More informationREPORT DOCUMENTATION PAGE
, UNCLASSIFIED SECURITY CLASSIFICATION OF THIS PAGE j?5 REPORT DOCUMENTATION PAGE la. REPORT SECURITY CLASSIFICATION Unclassified 2a. SECURITY CLASSIFICATION AUTHORITY 2b. DECLASSIFICAHON/DOWNGRADING SCHEDULE
More informationModels for Pattern Formation in Development.
Models for Pattern Formation in Development. Bard Ermentrout Remus Osan April 2, 2002 Abstract We describe a number of general principles that underlie the formation of cortical maps such as occular dominance
More informationFundamentals of Computational Neuroscience 2e
Fundamentals of Computational Neuroscience 2e January 1, 2010 Chapter 10: The cognitive brain Hierarchical maps and attentive vision A. Ventral visual pathway B. Layered cortical maps Receptive field size
More informationA summary of Deep Learning without Poor Local Minima
A summary of Deep Learning without Poor Local Minima by Kenji Kawaguchi MIT oral presentation at NIPS 2016 Learning Supervised (or Predictive) learning Learn a mapping from inputs x to outputs y, given
More information15 Grossberg Network 1
Grossberg Network Biological Motivation: Vision Bipolar Cell Amacrine Cell Ganglion Cell Optic Nerve Cone Light Lens Rod Horizontal Cell Retina Optic Nerve Fiber Eyeball and Retina Layers of Retina The
More informationPrincipal Component Analysis
Principal Component Analysis Yingyu Liang yliang@cs.wisc.edu Computer Sciences Department University of Wisconsin, Madison [based on slides from Nina Balcan] slide 1 Goals for the lecture you should understand
More informationReducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity
Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity Sander M. Bohte a Michael C. Mozer b a CWI, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands b Dept. of Computer
More informationNeocortical Pyramidal Cells Can Control Signals to Post-Synaptic Cells Without Firing:
Neocortical Pyramidal Cells Can Control Signals to Post-Synaptic Cells Without Firing: a model of the axonal plexus Erin Munro Department of Mathematics Boston University 4/14/2011 Gap junctions on pyramidal
More informationConnecting Epilepsy and Alzheimer s Disease: A Computational Modeling Framework
Connecting Epilepsy and Alzheimer s Disease: A Computational Modeling Framework Péter Érdi perdi@kzoo.edu Henry R. Luce Professor Center for Complex Systems Studies Kalamazoo College http://people.kzoo.edu/
More informationLeo Kadanoff and 2d XY Models with Symmetry-Breaking Fields. renormalization group study of higher order gradients, cosines and vortices
Leo Kadanoff and d XY Models with Symmetry-Breaking Fields renormalization group study of higher order gradients, cosines and vortices Leo Kadanoff and Random Matrix Theory Non-Hermitian Localization in
More informationSpike-Timing Dependent Plasticity and Relevant Mutual Information Maximization
Spike-Timing Dependent Plasticity and Relevant Mutual Information Maximization Gal Chechik The Interdisciplinary Center for Neural Computation The Hebrew University, Givat Ram 91904, Jerusalem, Israel
More informationDecoding. How well can we learn what the stimulus is by looking at the neural responses?
Decoding How well can we learn what the stimulus is by looking at the neural responses? Two approaches: devise explicit algorithms for extracting a stimulus estimate directly quantify the relationship
More information