Learning Spatio-Temporally Encoded Pattern Transformations in Structured Spiking Neural Networks (1,2)

André Grüning, Brian Gardner and Ioana Sporea
Department of Computer Science, University of Surrey, Guildford, UK
9th November 2015

(1) http://dx.doi.org/10.6084/m9.figshare.1517779
(2) $Id: multilayerspiker.txt 1827 2015-11-09 07:39:53Z ag15 $

1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary

What are we doing? Formulate a supervised learning rule for spiking neural networks that can train spiking networks containing a hidden layer of neurons, and can map arbitrary spatio-temporal input into arbitrary output spike patterns, i.e. multiple spike trains.

Why worthwhile? Understand how spike-pattern based information processing takes place in the brain. A learning rule for spiking neural networks with technical potential. Find a rule that is to spiking networks what backprop is to rate-neuron networks. Human Brain Project.

Scientific Area

Where are we scientifically? In the middle of nowhere between: computational neuroscience, cognitive science, and artificial intelligence / machine learning.

1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary

Spiking Neurons

[Figure: (a) dendritic tree, axon and cell body of a neuron; (b) input spikes and resulting output spikes; (c) membrane potential u in response to input spikes, with an output spike at threshold crossing.]

Spiking neurons: real neurons communicate with each other via sequences of pulses, "spikes". (a) Dendritic tree, axon and cell body of a neuron. (b) Top: spikes arrive from other neurons and the membrane potential rises; bottom: incoming spikes on various dendrites elicit timed spike responses as the output. (c) Response of the membrane potential to incoming spikes: if the threshold θ is crossed, the membrane potential is reset to a low value and a spike is fired.

From: André Grüning and Sander Bohte. Spiking neural networks: Principles and challenges. In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, 2014. Invited contribution.

Spiking Information Processing

The precise timing of spikes generated by neurons conveys meaningful information. Synaptic plasticity forms the basis of learning: changes in synaptic strength depend on the relative pre- and postsynaptic spike times, and on third signals. Challenge: to relate such localised plasticity changes to learning at the network level.

General Learning Algorithms for Spiking NNs?

There is no general-purpose learning algorithm for spiking neural networks. Challenge: the discontinuous nature of spiking events. Various supervised learning algorithms exist, each with its own limitations, e.g. in network topology, adaptability (e.g. reservoir computing), or limited spike encoding (e.g. latency coding, or spike vs no spike). Most focus on classification rather than more challenging tasks such as mapping one set of spike trains to another.

Some Learning Algorithms for Spiking NN

SpikeProp [3], ReSuMe [4], Tempotron [5], Chronotron [6], SPAN [7], Urbanczik and Senn [8], Brea et al. [9], Frémaux et al. [10], ...

[3] S. M. Bohte, J. N. Kok, and H. La Poutré. SpikeProp: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48(1-4):17-37, 2002.
[4] Filip Ponulak and Andrzej Kasiński. Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification and spike shifting. Neural Computation, 22:467-510, 2010.
[5] Robert Gütig and Haim Sompolinsky. The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 2006. doi: 10.1038/nn1643.
[6] Răzvan V. Florian. The chronotron: A neuron that learns to fire temporally precise spike patterns. PLoS ONE, 7(8):e40233, 2012.
[7] A. Mohemmed, S. Schliebs, and N. Kasabov. SPAN: Spike pattern association neuron for learning spatio-temporal sequences. Int. J. Neural Systems, 2011.
[8] R. Urbanczik and W. Senn. A gradient learning rule for the tempotron. Neural Computation, 21:340-352, 2009.
[9] Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23):9565-9575, 2013.
[10] Nicolas Frémaux, Henning Sprekeler, and Wulfram Gerstner. Functional requirements for reward-modulated spike-timing-dependent plasticity. The Journal of Neuroscience, 30(40):13326-13337, 2010.

1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary

Our Approach: MultilayerSpiker

Generalise backpropagation to spiking neural networks with hidden neurons. Use a stochastic neuron model to connect smooth quantities (for which derivatives exist) with discrete spike trains (for which they do not).

Neuron model: Membrane potential

u_o(t) := \sum_h w_{oh} \int_{-\infty}^{t} Y_h(t')\,\epsilon(t - t')\,dt' + \int_{-\infty}^{t} Z_o(t')\,\kappa(t - t')\,dt',   (1)

where o indexes the postsynaptic neuron and h the presynaptic neurons; u_o is the membrane potential of o; w_{oh} is the strength of the synaptic connection from h to o; Y_h(t) = \sum_{t^h < t} \delta(t - t^h) is the spike train of neuron h, where the t^h are the firing times of h; and Z_o(t) = \sum_{t^o < t} \delta(t - t^o) is the spike train of neuron o, where the t^o are the firing times of o.

Neuron model: Spike response kernel ε and reset kernel κ

\epsilon(s) = \epsilon_0 \left[ e^{-s/\tau_m} - e^{-s/\tau_s} \right] \Theta(s) \quad \text{and} \quad \kappa(s) = \kappa_0\, e^{-s/\tau_m}\, \Theta(s),   (2)

with spike response kernel amplitude \epsilon_0 = 4 mV, reset kernel amplitude \kappa_0 = -15 mV, membrane time constant \tau_m = 10 ms, synaptic rise time \tau_s = 5 ms, and the Heaviside step function \Theta(s).
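As an illustration, here is a minimal numpy sketch of Eqs. (1) and (2) on a discrete time grid. The time step, function names and calling convention are our assumptions, not part of the slides.

```python
# Minimal numpy sketch of Eqs. (1)-(2): membrane potential of neuron o as a
# sum of PSP kernels over presynaptic spikes plus reset kernels over its own
# past spikes. Discretisation and all names are our choices.
import numpy as np

dt = 0.1                      # time step (ms); our choice
tau_m, tau_s = 10.0, 5.0      # membrane and synaptic time constants (ms)
eps0, kappa0 = 4.0, -15.0     # kernel amplitudes (mV)

def eps_kernel(s):
    """Spike response (PSP) kernel eps(s) of Eq. (2); zero for s < 0."""
    return eps0 * (np.exp(-s / tau_m) - np.exp(-s / tau_s)) * (s >= 0)

def kappa_kernel(s):
    """Reset kernel kappa(s) of Eq. (2); zero for s < 0."""
    return kappa0 * np.exp(-s / tau_m) * (s >= 0)

def membrane_potential(t_grid, w_oh, pre_spike_times, own_spike_times):
    """u_o(t) per Eq. (1). w_oh: weight per presynaptic neuron;
    pre_spike_times: one array of spike times per presynaptic neuron h."""
    u = np.zeros_like(t_grid)
    for w, spikes in zip(w_oh, pre_spike_times):
        for t_f in spikes:                  # PSP evoked by each input spike
            u += w * eps_kernel(t_grid - t_f)
    for t_f in own_spike_times:             # reset after each own spike
        u += kappa_kernel(t_grid - t_f)
    return u
```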

Neuron model: Stochastic intensity (instantaneous firing rate) and spikes

\rho(t) = \rho[u(t)] = \rho_0 \exp\!\left( \frac{u(t) - \vartheta}{\Delta u} \right),   (3)

with firing rate at threshold \rho_0 = 0.01 ms^{-1}, threshold \vartheta = 15 mV, and smoothness of the threshold \Delta u_o = 0.2 mV (output layer) or \Delta u_h = 2 mV (hidden layer). Spikes are generated by a point process with stochastic intensity \rho_o(t), i.e. in a small time interval [t, t + \delta t) a spike is generated with probability \rho_o(t)\,\delta t.
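A sketch of this spike-generation mechanism: spikes are drawn bin by bin with probability ρ(t)·δt. The parameter values follow the slide; the Bernoulli discretisation and all names are ours.

```python
# Escape-rate spike generation of Eq. (3): in each small interval
# [t, t + dt) a spike is drawn with probability rho(t) * dt.
import numpy as np

rho0, theta = 0.01, 15.0      # rate at threshold (ms^-1), threshold (mV)

def stochastic_intensity(u, delta_u):
    """rho[u] = rho0 * exp((u - theta) / delta_u), Eq. (3)."""
    return rho0 * np.exp((u - theta) / delta_u)

def sample_spike_train(u_trace, delta_u, dt, rng=None):
    """Bernoulli approximation of the point process on a grid of step dt;
    returns a boolean spike indicator per time bin."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.clip(stochastic_intensity(u_trace, delta_u) * dt, 0.0, 1.0)
    return rng.random(len(u_trace)) < p
```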

Backpropagation: Objective ("error") function

P(z_o^{ref} \mid x) = \exp\!\left( \int \left[ \log(\rho_o(t))\, Z_o^{ref}(t) - \rho_o(t) \right] dt \right),   (4)

where Z_o^{ref}(t) = \sum_f \delta(t - t_o^f) is the target output spike train for input x. (a)

(a) J. P. Pfister, T. Toyoizumi, K. Aihara, and W. Gerstner. Optimal spike-timing dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6):1309-1339, 2006.

Backprop approach:

\Delta w_{oh} = \eta_o \frac{\partial \log P(z^{ref} \mid x)}{\partial w_{oh}}   (5)
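In log space, Eq. (4) is the standard point-process log-likelihood, log P = \sum_f \log \rho_o(t^f) - \int \rho_o(t)\,dt, which a few lines of numpy can evaluate on a grid. This sketch and its names are ours.

```python
# Grid-based evaluation of the log of Eq. (4):
# log P(z_ref | x) = sum over target spikes of log rho_o(t_f)
#                    minus the integral of rho_o(t) over the trial.
import numpy as np

def log_likelihood(rho_o, target_spikes, dt):
    """rho_o: stochastic intensity per time bin (ms^-1);
    target_spikes: boolean array marking bins containing a target spike."""
    point_term = np.sum(np.log(rho_o[target_spikes]))  # sum over target spikes
    rate_term = np.sum(rho_o) * dt                     # integral of rho_o(t)
    return point_term - rate_term
```

Gradient ascent on this quantity with respect to the weights, as in Eq. (5), is what the following slides carry out.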

Backprop approach: ... and some ten slides later. Lots of derivatives, indices and probabilities. The derivatives exist only thanks to the smoothness of the probability function. We switch relatively freely between expected values and their best estimates available when you have only a single realisation.

Backprop Weight Update

Backpropagated error signal:

\delta_o(t) := \frac{1}{\Delta u_o} \left[ Z_o^{ref}(t) - \rho_o(t) \right],   (6)

Hidden-to-output weights:

\Delta w_{oh} = \eta_o \int_0^T \delta_o(t)\, (Y_h * \epsilon)(t)\, dt.   (7)

Input-to-hidden weights: (a)

\Delta w_{hi} = \frac{\eta_h}{\Delta u_h} \int_0^T \sum_o w_{oh}\, \delta_o(t)\, \big( [Y_h \cdot (X_i * \epsilon)] * \epsilon \big)(t)\, dt.   (8)

(a) Brian Gardner, Ioana Sporea, and André Grüning. Learning spatio-temporally encoded pattern transformations in structured spiking neural networks. Neural Computation, to appear, 2015. Preprint available at http://arxiv.org/abs/1503.09129
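A grid-based sketch of how Eqs. (6)-(8) might be evaluated, with spike trains as boolean arrays whose entries divided by dt stand in for the delta functions. The helper names and discretisation are ours, not the authors' implementation.

```python
# Sketch of the weight updates, Eqs. (6)-(8), on a time grid of step dt.
import numpy as np

def causal_filter(signal, kernel_values, dt):
    """Discrete causal convolution (signal * eps)(t), truncated to the window."""
    return np.convolve(signal, kernel_values)[: len(signal)] * dt

def delta_o(target_spikes, rho_o, delta_u_o, dt):
    """Backpropagated output error, Eq. (6): (Z_o^ref - rho_o) / Delta_u_o."""
    return (target_spikes.astype(float) / dt - rho_o) / delta_u_o

def dw_oh(err_o, hidden_spikes, eps_vals, eta_o, dt):
    """Hidden-to-output update, Eq. (7): integrate delta_o(t) * (Y_h * eps)(t)."""
    psp_h = causal_filter(hidden_spikes.astype(float) / dt, eps_vals, dt)
    return eta_o * np.sum(err_o * psp_h) * dt

def dw_hi(errs_o, w_col, hidden_spikes, input_spikes, eps_vals,
          eta_h, delta_u_h, dt):
    """Input-to-hidden update, Eq. (8): hidden spikes gate the input PSP,
    the result is filtered by eps again and weighted by each w_oh delta_o."""
    psp_i = causal_filter(input_spikes.astype(float) / dt, eps_vals, dt)
    gated = hidden_spikes.astype(float) / dt * psp_i   # Y_h(t) (X_i * eps)(t)
    back = causal_filter(gated, eps_vals, dt)          # [...] * eps
    total = sum(w * np.sum(e * back) * dt for w, e in zip(w_col, errs_o))
    return (eta_h / delta_u_h) * total
```

In a real implementation these per-synapse updates would be vectorised over all weights; the explicit loops here just keep the correspondence with the equations visible.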

1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary

Task

Purpose: explore the properties of the new learning algorithm. Map an input (given as a set of spike trains) to an output (again given as a set of spike trains). Simulation details in the paper. (a)

(a) Brian Gardner, Ioana Sporea, and André Grüning. Learning spatio-temporally encoded pattern transformations in structured spiking neural networks. Neural Computation, to appear, 2015. Preprint available at http://arxiv.org/abs/1503.09129
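To make the simulation setup concrete, here is a hypothetical forward pass of a single stochastic neuron driven by Poisson input spike trains, using the kernels of Eq. (2) and the escape rate of Eq. (3). All layer sizes, input rates, weight statistics and the seed are our assumptions, not the paper's settings.

```python
# Hypothetical forward pass: step through time, accumulate the two
# exponential traces that compose the eps kernel, add the reset kernel
# after each own spike, and sample spikes with the escape rate.
import numpy as np

dt, tau_m, tau_s = 0.1, 10.0, 5.0        # ms (dt is our choice)
eps0, kappa0 = 4.0, -15.0                # mV
rho0, theta, delta_u = 0.01, 15.0, 2.0   # ms^-1, mV, mV (hidden-layer Delta_u)

rng = np.random.default_rng(1)
T_ms, n_in = 500.0, 100                  # trial length and input count (ours)
n_steps = int(T_ms / dt)
w = rng.normal(1.0, 0.5, size=n_in)      # positive-mean weights so the neuron
                                         # sits near threshold (our choice)
inputs = rng.random((n_in, n_steps)) < 0.006 * dt   # ~6 Hz Poisson afferents

decay_m, decay_s = np.exp(-dt / tau_m), np.exp(-dt / tau_s)
a_m = a_s = reset = 0.0    # exponential traces composing eps and kappa
spike_times = []
for k in range(n_steps):
    drive = float(w @ inputs[:, k].astype(float))  # weighted input this bin
    a_m = a_m * decay_m + drive
    a_s = a_s * decay_s + drive
    reset *= decay_m
    u = eps0 * (a_m - a_s) + reset                 # u(t), Eqs. (1)-(2)
    if rng.random() < rho0 * np.exp((u - theta) / delta_u) * dt:
        spike_times.append(k * dt)                 # escape-rate spike, Eq. (3)
        reset += kappa0                            # start a new reset kernel
print(f"{len(spike_times)} output spikes in {T_ms:.0f} ms")
```

Training an episode then amounts to comparing the sampled output against the target train and applying the updates of Eqs. (6)-(8).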

Network Setup

[Figure: spike rasters of the input, hidden and output neurons over time (ms) and across learning episodes; network structure; van Rossum distance over episodes.]

Left: spike rasters of the input, hidden and output layers (with targets). Right top: network structure; right bottom: van Rossum distance.

Network in Action

[Figure: input spike train X_i and its PSP; hidden membrane potential u_h against threshold ϑ; the backpropagated trace [Y_h · (X_i * ε)] and its filtered version; output membrane potential u_o against ϑ; weight change Δw_hi.]

Left: input spike train X_i (top) and its evoked postsynaptic potential (X_i * ε) (bottom). Middle: fluctuations of a hidden neuron's membrane potential u_h relative to the firing threshold ϑ, in response to inputs from the input layer (top); the potential-dependent factor [Y_h · (X_i * ε)] of the error backpropagated from the hidden to the input layer, and (bottom) the corresponding PSP-filtered trace ([Y_h · (X_i * ε)] * ε). Right: membrane potential u_o of an output neuron in response to hidden-layer activity, with target spike times indicated by dotted lines (top); resulting changes of an input-to-hidden weight w_hi under the learning rule (bottom).

Experiments

[Figure: Left: performance (%) vs number of input patterns (8-40). Right: episodes to convergence vs number of input patterns. Curves: free w_hi, fixed w_hi, single layer.]

Dependence of the performance on the number of input patterns and on the network setup. Each input pattern is mapped to a unique target output: a single spike of a single output neuron. Left: performance as a function of the number of input patterns. Right: number of episodes to convergence in learning. Blue curves: hidden weights w_hi updated according to the learning algorithm; red curves: fixed random hidden weights (plus homoeostasis); green: single-layer network.

Experiments

[Figure: (A) performance (%) vs the ratio n_h / n_o, for n_o = 1, 2, 3. (B) minimum ratio n_h / n_o vs the number of output spikes (2-10).]

Dependence of the performance on the ratio of hidden to output neurons, and on the number of target output spikes; p = 5 input patterns, with a unique target output spike pattern for each output neuron. (A) Performance as a function of the ratio n_h / n_o of hidden to output neurons. (B) Minimum ratio n_h / n_o required to achieve 90% performance, as a function of the number of output spikes.

1 Introduction 2 Background 3 Our Approach 4 Results 5 Summary

Summary: Results

Compared to other learning algorithms for spiking neural networks, we can learn more input-output mappings (20 classes or 20 individual patterns here vs 3-4), more timed output spikes (up to 10 individually timed spikes here vs 3-5), and with multiple outputs (up to 3 here vs 1).

Apply it! MultilayerSpiker opens up the use of spiking neural networks for technical and cognitive modelling tasks. Spiking networks are biologically plausible. Explore how computations can be done with neural networks. Next step in the Human Brain Project: implementation on SpiNNaker and other neural hardware.

Spiking Neural Networks: Open Questions

How do networks of spiking neurons carry out computations? How can they learn such computations? Does this explain how real biological neurons compute? What is the killer application?