Learning Spatio-Temporally Encoded Pattern Transformations in Structured Spiking Neural Networks
1 Learning Spatio-Temporally Encoded Pattern Transformations in Structured Spiking Neural Networks. André Grüning, Brian Gardner and Ioana Sporea, Department of Computer Science, University of Surrey, Guildford, UK. 9th November
2
3
4 1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary
5 What are we doing? Formulate a supervised learning rule for spiking neural networks that can train spiking networks containing a hidden layer of neurons, and can map arbitrary spatio-temporal input into arbitrary output spike patterns, i.e. multiple spike trains. Why worthwhile? Understand how spike-pattern based information processing takes place in the brain. A learning rule for spiking neural networks with technical potential. Find a rule that is to spiking networks what backprop is to rate neuron networks. Human Brain Project.
6 Scientific Area: Where are we scientifically? In the middle of nowhere between: computational neuroscience, cognitive science, and artificial intelligence / machine learning.
7
8 1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary
9 Spiking Neurons. [Figure with panels (a)-(c), described in 1-3 below.] Spiking neurons: real neurons communicate with each other via sequences of pulses, called spikes. 1 Dendritic tree, axon and cell body of a neuron. 2 Top: spikes arrive from other neurons and the membrane potential rises. Bottom: incoming spikes on various dendrites elicit timed spike responses as the output. 3 Response of the membrane potential to incoming spikes: if the threshold θ is crossed, the membrane potential is reset to a low value and a spike is fired. From André Grüning and Sander Bohte. Spiking neural networks: Principles and challenges. In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Brugge, 2014. Invited contribution.
10 Spiking Neurons Spiking Information Processing The precise timing of spikes generated by neurons conveys meaningful information. Synaptic plasticity forms the basis of learning. Changes in synaptic strength depend on relative pre- and postsynaptic spike times, and third signals. Challenge: to relate such localised plasticity changes to learning on the network level.
11 Learning for Spiking NN: General Learning Algorithms for Spiking NN? There is no general-purpose learning algorithm for spiking neural networks. Challenge: the discontinuous nature of spiking events. Various supervised learning algorithms exist, each with its own limitations, e.g. restrictions on network topology, adaptability (e.g. reservoir computing), or limited spike encoding (e.g. latency coding, or spike vs. no spike). Most focus on classification rather than more challenging tasks like mapping from one spike train to another.
12 Some Learning Algorithms for Spiking NN: SpikeProp [3], ReSuMe [4], Tempotron [5], Chronotron [6], SPAN [7], Urbanczik and Senn [8], Brea et al. [9], Frémaux et al. [10], ...
[3] S. M. Bohte, J. N. Kok, and H. La Poutré. Spike-prop: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48(1-4):17-37, 2002.
[4] Filip Ponulak and Andrzej Kasiński. Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification and spike shifting. Neural Computation, 22:467-510, 2010.
[5] Robert Gütig and Haim Sompolinsky. The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 2006.
[6] Răzvan V. Florian. The chronotron: A neuron that learns to fire temporally precise spike patterns. PLoS ONE, 7(8):e40233, 2012.
[7] A. Mohemmed, S. Schliebs, and N. Kasabov. SPAN: Spike pattern association neuron for learning spatio-temporal sequences. Int. J. Neural Systems, 2012.
[8] R. Urbanczik and W. Senn. A gradient learning rule for the tempotron. Neural Computation, 21:340-352, 2009.
[9] Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23), 2013.
[10] Nicolas Frémaux, Henning Sprekeler, and Wulfram Gerstner. Functional requirements for reward-modulated spike-timing-dependent plasticity. The Journal of Neuroscience, 30(40), 2010.
13
14 1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary
15 Our Approach: MultilayerSpiker. Generalise backpropagation to spiking neural networks with hidden neurons. Use a stochastic neuron model to connect smooth quantities (for which derivatives exist) with discrete spike trains (for which no derivative exists).
16 Neuron model: Membrane potential

u_o(t) := \sum_h w_{oh} \int_{-\infty}^{t} Y_h(t')\,\epsilon(t - t')\,dt' + \int_{-\infty}^{t} Z_o(t')\,\kappa(t - t')\,dt',   (1)

where o indexes postsynaptic neurons and h presynaptic neurons; u_o is the membrane potential of o; w_{oh} is the strength of the synaptic connection from h to o; Y_h(t) = \sum_{t_h < t} \delta(t - t_h) is the spike train of neuron h, where the t_h are the firing times of h; Z_o(t) = \sum_{t_o < t} \delta(t - t_o) is the spike train of neuron o, where the t_o are the firing times of o.
17 Neuron model: Spike response kernel ε and reset kernel κ

\epsilon(s) = \epsilon_0 \left[ e^{-s/\tau_m} - e^{-s/\tau_s} \right] \Theta(s)  and  \kappa(s) = \kappa_0\, e^{-s/\tau_m}\, \Theta(s),   (2)

with spike response kernel amplitude \epsilon_0 = 4 mV, reset kernel amplitude \kappa_0 = 15 mV, membrane time constant \tau_m = 10 ms, synaptic rise time \tau_s = 5 ms, and Heaviside step function \Theta(s).
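To make the neuron model concrete, here is a minimal discrete-time sketch (not the authors' code) of Eqs. (1)-(2) in Python. The time step dt, the simulation length and the sign convention on the reset kernel are illustrative assumptions; the kernel amplitudes and time constants follow the slide.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the neuron model, Eqs. (1)-(2),
# discretised on a regular time grid. dt and T_steps are illustrative assumptions.
dt = 0.1                     # ms, simulation time step (assumed)
T_steps = 5000               # 500 ms of simulated time (assumed)
tau_m, tau_s = 10.0, 5.0     # ms, membrane and synaptic rise time constants
eps0, kappa0 = 4.0, 15.0     # mV, kernel amplitudes

s = np.arange(T_steps) * dt
# Spike-response kernel eps(s) and reset kernel kappa(s); the Heaviside factor
# Theta(s) is implicit because s >= 0 on this grid.
eps_kernel = eps0 * (np.exp(-s / tau_m) - np.exp(-s / tau_s))
kappa_kernel = -kappa0 * np.exp(-s / tau_m)   # minus sign: a spike resets u downwards (assumed convention)

def membrane_potential(w_oh, Y_h, Z_o):
    """u_o(t) = sum_h w_oh (Y_h * eps)(t) + (Z_o * kappa)(t), cf. Eq. (1).

    w_oh : (n_h,) weights onto output neuron o
    Y_h  : (n_h, T_steps) presynaptic spike trains, one 0/1 entry per bin
    Z_o  : (T_steps,) spike train of neuron o itself (0/1 per bin)
    """
    psp = np.array([np.convolve(y, eps_kernel)[:T_steps] for y in Y_h])
    reset = np.convolve(Z_o, kappa_kernel)[:T_steps]
    return w_oh @ psp + reset
```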
18 Neuron model: Stochastic intensity (instantaneous firing rate) and spikes

\rho(t) = \rho[u(t)] = \rho_0 \exp\!\left( \frac{u(t) - \vartheta}{\Delta u} \right),   (3)

with firing rate at threshold \rho_0 = 0.1 ms^{-1}, threshold \vartheta = 15 mV, and smoothness of the threshold \Delta u_o = 0.2 mV (output layer) or \Delta u_h = 2 mV (hidden layer). Spikes are generated by a point process with stochastic intensity \rho_o(t), i.e. in a small time interval [t, t + \delta t) a spike is generated with probability \rho_o(t)\,\delta t.
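The escape-noise spike generation of Eq. (3) can be sketched in the same way. ρ₀, ϑ and Δu are taken from the slide; dt, the random seed and the clipping of per-bin probabilities are assumptions, and the feedback of generated spikes through the reset kernel is omitted for brevity.

```python
import numpy as np

# Minimal sketch of escape-noise spiking, Eq. (3): in each bin of width dt a spike
# is drawn with probability rho(u) * dt.
rho0 = 0.1        # ms^-1, firing rate at threshold
theta = 15.0      # mV, firing threshold
delta_u = 0.2     # mV, threshold smoothness (0.2 mV output layer, 2 mV hidden layer)
dt = 0.1          # ms (assumed)

rng = np.random.default_rng(0)

def stochastic_intensity(u):
    """rho[u(t)] = rho0 * exp((u - theta) / delta_u)."""
    return rho0 * np.exp((u - theta) / delta_u)

def sample_spikes(u):
    """Draw a 0/1 spike train from a membrane-potential trace u (one value per bin)."""
    p_spike = np.clip(stochastic_intensity(u) * dt, 0.0, 1.0)
    return (rng.random(u.shape) < p_spike).astype(float)
```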
19 Backpropagation: Objective ("error") function

P(z_o^{ref} \mid x) = \exp\!\left( \int \left[ \log\!\big(\rho_o(t)\big)\, Z_o^{ref}(t) - \rho_o(t) \right] dt \right),   (4)

where Z_o^{ref}(t) = \sum_f \delta(t - t_o^f) is the target output spike train for input x. [J. P. Pfister, T. Toyoizumi, K. Aihara, and W. Gerstner. Optimal spike-timing dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6), 2006.]

Backprop approach:

\Delta w_{oh} = \eta_o \frac{\partial \log P(z^{ref} \mid x)}{\partial w_{oh}}   (5)
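The step from the objective (4) and the gradient ansatz (5) to the weight-update rules two slides below can be made explicit. A sketch of the output-layer gradient, assuming the exponential stochastic intensity of Eq. (3) and treating the emitted spike train in the reset term as fixed:

```latex
\begin{align*}
\log P(z_o^{\mathrm{ref}}\mid x) &= \int \Big[\log\!\big(\rho_o(t)\big)\, Z_o^{\mathrm{ref}}(t) - \rho_o(t)\Big]\,dt, \\
\frac{\partial u_o(t)}{\partial w_{oh}} &= (Y_h * \epsilon)(t),
\qquad
\frac{\partial \rho_o(t)}{\partial u_o} = \frac{\rho_o(t)}{\Delta u_o}, \\
\frac{\partial \log P}{\partial w_{oh}}
  &= \int \Big[\frac{Z_o^{\mathrm{ref}}(t)}{\rho_o(t)} - 1\Big]
     \frac{\partial \rho_o(t)}{\partial u_o}\,
     \frac{\partial u_o(t)}{\partial w_{oh}}\,dt
   = \frac{1}{\Delta u_o}\int \big[Z_o^{\mathrm{ref}}(t) - \rho_o(t)\big]\,(Y_h * \epsilon)(t)\,dt .
\end{align*}
```

Multiplying by η_o recovers the hidden-to-output rule (7) with the error signal (6); the input-to-hidden rule (8) follows by chaining the same construction through the hidden layer.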
20 Backprop approach... and some ten slides later: lots of derivatives, indices and probabilities. Derivatives are only possible due to the smoothness of the probability function. We switch relatively freely between expected values and their best estimates, which is all one has when only a single realisation is available.
21 Backprop Weight Update

Backpropagated error signal:

\delta_o(t) := \frac{1}{\Delta u_o} \left[ Z_o^{ref}(t) - \rho_o(t) \right],   (6)

Hidden-to-output weights:

\Delta w_{oh} = \eta_o \int_0^T \delta_o(t)\, (Y_h * \epsilon)(t)\, dt.   (7)

Input-to-hidden weights:

\Delta w_{hi} = \frac{\eta_h}{\Delta u_h} \int_0^T \sum_o w_{oh}\, \delta_o(t)\, \big( [Y_h \cdot (X_i * \epsilon)] * \epsilon \big)(t)\, dt.   (8)

[Brian Gardner, Ioana Sporea, and André Grüning. Learning spatio-temporally encoded pattern transformations in structured spiking neural networks. Neural Computation, to appear. Preprint available.]
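A minimal discrete-time sketch of the updates (6)-(8) in Python follows; this is not the authors' implementation. It assumes 0/1 spike trains per time bin and reuses the spike-response kernel array from the neuron-model sketch above, passed in as an argument.

```python
import numpy as np

# Minimal discrete-time sketch of the weight updates, Eqs. (6)-(8).
def conv_eps(x, eps_kernel, T):
    """Causal convolution with the spike-response kernel, truncated to T bins."""
    return np.convolve(x, eps_kernel)[:T]

def weight_updates(w_oh, X, Y, rho_o, Z_ref, eps_kernel, dt,
                   eta_o, eta_h, du_o, du_h):
    """Return (dW_oh, dW_hi) for one learning episode.

    w_oh  : (n_o, n_h) hidden-to-output weights
    X     : (n_i, T) input spike trains      Y     : (n_h, T) hidden spike trains
    rho_o : (n_o, T) output intensities      Z_ref : (n_o, T) target output spike trains
    """
    n_o, n_h = w_oh.shape
    n_i, T = X.shape

    # Eq. (6): backpropagated error signal; Z_ref/dt approximates the Dirac spike train.
    delta_o = (Z_ref / dt - rho_o) / du_o                      # (n_o, T)

    # PSPs evoked by hidden and input spikes.
    psp_h = np.array([conv_eps(y, eps_kernel, T) for y in Y])  # (n_h, T)
    psp_i = np.array([conv_eps(x, eps_kernel, T) for x in X])  # (n_i, T)

    # Eq. (7): hidden-to-output updates, integral approximated by a sum over bins.
    dW_oh = eta_o * (delta_o @ psp_h.T) * dt                   # (n_o, n_h)

    # Eq. (8): the hidden spike train gates the input PSP, the result is convolved
    # with eps again, then correlated with the error summed over output neurons.
    back_err = w_oh.T @ delta_o                                # (n_h, T)
    dW_hi = np.zeros((n_h, n_i))
    for h in range(n_h):
        for i in range(n_i):
            gated = conv_eps(Y[h] * psp_i[i], eps_kernel, T)
            dW_hi[h, i] = (eta_h / du_h) * np.sum(back_err[h] * gated) * dt
    return dW_oh, dW_hi
```

The nested loop is written for clarity; in practice the gated convolutions can be vectorised across neuron pairs.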
22
23 1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary
24 Task. Purpose: explore the properties of the new learning algorithm. Map an input (given as a set of spike trains) to an output (again given as a set of spike trains). Simulation details in: Brian Gardner, Ioana Sporea, and André Grüning. Learning spatio-temporally encoded pattern transformations in structured spiking neural networks. Neural Computation, to appear. Preprint available.
25 Network Setup. [Figure: panels A-E; axes include time (ms), episodes and distance.] Left: spike rasters of the input, hidden and output layers (with targets). Right, top: network structure; bottom: van Rossum distance over learning episodes.
26 Network in Action. [Figure] Left: input spike train X_i (top) and its evoked postsynaptic potential (X_i * ε) (bottom). Middle: fluctuations of a hidden neuron's membrane potential u_h relative to the firing threshold ϑ, in response to inputs from the input layer (top); the potential-dependent factor of the backpropagated error from the hidden to the input layer, [Y_h (X_i * ε)]; bottom: the corresponding PSP term, ([Y_h (X_i * ε)] * ε). Right: membrane potential of an output neuron u_o in response to hidden layer activity, with the target indicated by dotted lines (top); bottom: weight changes of an input-to-hidden weight w_hi due to the learning rule.
27 Experiments. [Figure: performance (%) and learning episodes vs. number of input patterns; legend: free w_hi, fixed w_hi, single layer.] Dependence of the performance on the number of input patterns and the network setup. Each input pattern is mapped to a unique target output consisting of a single output neuron and a single spike. Left: performance as a function of the number of input patterns. Right: number of episodes to convergence in learning. Blue curves: hidden weights w_hi updated according to the learning algorithm; red curves: fixed random hidden weights (plus homeostasis); green: single layer.
28 Experiments. [Figure: panels A and B; performance (%) vs. the ratio n_h / n_o for different numbers of output neurons n_o, and minimum n_h / n_o vs. the number of output spikes.] Dependence of the performance on the ratio of hidden to output neurons and on the number of target output spikes; p = 5, with a unique target output spike pattern for each output neuron. Left (A): performance as a function of the ratio of hidden to output neurons. Right (B): minimum ratio of hidden to output neurons required to achieve 90% performance.
29
30 1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary
31 Summary: Results. Compared to other learning algorithms for spiking neuron networks, we can learn: more input-output mappings (2 classes or 2 individual patterns here vs. 3-4); more timed output spikes (up to 10 individually timed spikes here vs. 3-5); and multiple outputs (up to 3 here vs. 1). Apply it! MultilayerSpiker opens up the use of spiking neural networks for technical/cognitive modelling tasks. Spiking networks are biologically plausible. Explore how computations can be done with neural networks. Next step in the Human Brain Project: implementation on SpiNNaker and other neural hardware.
32 Spiking Neural Networks: Open Questions. How do networks of spiking neurons carry out computations? How can they learn such computations? Does this explain how real biological neurons compute? What is the killer application?