
Introduction

The goal of neuromorphic engineering is to design and implement microelectronic systems that emulate the structure and function of the brain. Address-event representation (AER) is a communication protocol originally proposed as a means to communicate sparse neural events between neuromorphic chips. Previous work has shown that AER can also be used to construct large-scale networks with arbitrary, configurable synaptic connectivity. Here, we further extend the functionality of AER to implement arbitrary, configurable synaptic plasticity in the address domain.

Address-Event Representation (AER)

(Figure: a sender chip's encoder places each spike's address on a shared data bus with REQ/ACK handshaking; a decoder on the receiver chip routes the event to the target cell. Mahowald, 1994; Lazzaro et al., 1993.)

The AER communication protocol emulates massive connectivity between cells by time-multiplexing many connections on the same data bus. For a one-to-one connection topology, the required number of wires is reduced from N to log2 N. Each spike is represented by:
- Its location: explicitly encoded as an address.
- The time at which it occurs: implicitly encoded.
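
To make the address-based representation concrete, here is a minimal software sketch (not the actual hardware handshake): a spike from cell i is broadcast as its binary address on a shared bus, and spike timing is implicit in when the address appears. The names encode_spike and decode_spike, and the 8-cell example, are illustrative assumptions, not part of the system described above.

```python
import math

N_CELLS = 8
ADDR_BITS = math.ceil(math.log2(N_CELLS))   # log2(N) address lines instead of N wires

def encode_spike(cell_index: int) -> str:
    """Sender-side encoder: cell index -> address bits on the bus."""
    return format(cell_index, f"0{ADDR_BITS}b")

def decode_spike(address_bits: str) -> int:
    """Receiver-side decoder: address bits -> target cell index."""
    return int(address_bits, 2)

# A spike train is just a sequence of (time, address) events; the time at
# which an address appears on the bus is the spike time.
events = [(0.1, encode_spike(3)), (0.4, encode_spike(5))]
for t, addr in events:
    print(f"t={t:.1f}s  bus={addr}  ->  cell {decode_spike(addr)}")
```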

Learning on Silicon

Adaptive hardware systems commonly employ learning circuitry embedded into the individual cells. Executing learning rules locally requires the inputs and outputs of the algorithm to be local in both space and time. Implementing learning circuits locally also increases the size of the repeating unit. This approach can be effective for small systems, but it is not efficient when the number of cells increases.

(Figure: a single-layer network with inputs x1...x4 connected to outputs y1 and y2 through weights w_ij, with learning circuitry replicated at every synapse.)

Address Domain Learning

By performing learning in the address domain, we can:
- Move learning circuits to the periphery.
- Create scalable adaptive systems.
- Maintain the small size of our analog cells.
- Construct arbitrarily complex and reconfigurable learning rules.

Because any measure of cellular activity can be made globally available using AER, many adaptive algorithms based on incremental outer-product computations can be implemented in the address domain. By implementing learning circuits on the periphery, we reduce restrictions of locality on the constituents of the learning rule. Spike timing-based learning rules are particularly well-suited for implementation in the address domain.

Enhanced AER

In its original formulation, AER implements a one-to-one connection topology. To create more complex neural circuits, convergent and divergent connections are required. The connectivity of AER systems can be enhanced by routing address-events to multiple receiver locations via a look-up table (Andreou et al., 1997; Deiss et al., 1999; Boahen, 2000; Higgins & Koch, 1999). Continuous-valued synaptic weights can be obtained by manipulating event transmission (Goldberg et al., 2001):

W = n · p · q

where W is the synaptic weight, n the number of spikes sent, p the probability of transmission, and q the amplitude of the postsynaptic response.
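
The sketch below illustrates the W = n · p · q weighting scheme: each stored connection sends n events, each event is transmitted with probability p, and each delivered event moves the postsynaptic cell by amplitude q. The function names and parameter values are illustrative assumptions, not the hardware implementation.

```python
import random

def expected_weight(n: int, p: float, q: float) -> float:
    """Effective synaptic weight W = n * p * q."""
    return n * p * q

def deliver(n: int, p: float, q: float, rng=random) -> float:
    """Total postsynaptic contribution of one presynaptic spike:
    n candidate events, each transmitted with probability p and amplitude q."""
    return sum(q for _ in range(n) if rng.random() < p)

n, p, q = 4, 0.5, 1.0
print("expected weight:", expected_weight(n, p, q))   # 2.0
print("one realization:", deliver(n, p, q))           # somewhere between 0.0 and 4.0
```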

Enhanced AER: Example

(Figure: a sender chip drives a look-up table whose fields are sender address, synapse index, receiver address, weight polarity, and weight magnitude; an event generator (EG) and decoder deliver the resulting events to an integrate-and-fire array, whose output spikes are encoded back into address-events.)

A two-layer neural network is mapped to the AER framework by means of a look-up table (LUT). The event generator (EG) sends as many events as are specified in the weight magnitude field of the LUT. The integrate-and-fire array transceiver (IFAT) spatially and temporally integrates events.
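
A minimal sketch of this LUT routing, under the assumptions above: each sender address maps to a list of (receiver address, polarity, magnitude) entries, and the event generator emits `magnitude` copies of the event per entry. The table contents and names are invented for illustration, not the actual LUT programming.

```python
# Hypothetical look-up table: sender address -> list of (receiver, polarity, magnitude)
LUT = {
    0: [(2, +1, 3), (5, -1, 1)],   # sender 0 excites cell 2 (weight 3), inhibits cell 5 (weight 1)
    1: [(2, +1, 2)],
}

def route_event(sender_addr):
    """Expand one incoming address-event into the outgoing event stream."""
    out = []
    for receiver, polarity, magnitude in LUT.get(sender_addr, []):
        out.extend([(receiver, polarity)] * magnitude)   # EG-style event repetition
    return out

print(route_event(0))   # [(2, 1), (2, 1), (2, 1), (5, -1)]
```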

Architecture: IFAT System

(Figure: block diagram of the IFAT system. The IFAT chip contains the input control and event-scanning circuitry; an external RAM holds the look-up table (sender address, synapse index, receiver address, weight polarity, weight magnitude); a microcontroller (MCU) sequences events between the RAM and the IFAT; all components reside on a single PC board.)

Implementation: IFAT System

(Figure: photographs of the fabricated system, showing the RAM, MCU, and IFAT chip on the PC board; the IFAT die with its row/column decoding, scanning, and encoding circuitry; and a close-up of a single integrate-and-fire cell.)

Spike Timing-Dependent Plasticity

In spike timing-dependent plasticity (STDP), changes in synaptic strength depend on the time between each pair of presynaptic and postsynaptic events. The most recent inputs to a postsynaptic cell make larger contributions to its membrane potential than past inputs due to passive leakage currents. Postsynaptic events immediately following incoming presynaptic spikes are considered to be causal and induce weight increments. Presynaptic inputs that arrive shortly after a postsynaptic spike are considered to be anti-causal and induce weight decrements.

(Figure: experimental STDP data, from Bi & Poo, 1998.)

Address Domain STDP: Event Queues

To implement our STDP synaptic modification rule in the address domain, we augmented our AER architecture with two event queues, one for presynaptic events and one for postsynaptic events. When an event occurs, its address is entered into the appropriate queue along with an associated value ϕ initialized to τ+ (presynaptic) or τ− (postsynaptic). This value is decremented over time:

ϕ_pre(t − t_pre) = τ+ − (t − t_pre)   if t − t_pre ≤ τ+, and 0 otherwise
ϕ_post(t − t_post) = τ− − (t − t_post)   if t − t_post ≤ τ−, and 0 otherwise

(Figure: example contents of the presynaptic and postsynaptic queues, each entry holding an address and its current ϕ value, alongside the corresponding spike rasters.)
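
A small sketch of these queues and ϕ functions, under the definitions above: each queue entry stores the event address and its timestamp, and ϕ is recovered from the age of the entry. The constants TAU_PLUS and TAU_MINUS, and all names, are illustrative assumptions.

```python
from collections import deque

TAU_PLUS = 4.0    # causal window (pre -> post)
TAU_MINUS = 6.0   # anti-causal window (post -> pre), chosen larger for stability

pre_queue = deque()    # entries: (presynaptic address, spike time)
post_queue = deque()   # entries: (postsynaptic address, spike time)

def phi_pre(dt: float) -> float:
    """phi_pre(t - t_pre): starts at tau_plus and decays linearly to zero."""
    return TAU_PLUS - dt if 0.0 <= dt <= TAU_PLUS else 0.0

def phi_post(dt: float) -> float:
    """phi_post(t - t_post): starts at tau_minus and decays linearly to zero."""
    return TAU_MINUS - dt if 0.0 <= dt <= TAU_MINUS else 0.0
```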

Address Domain STDP: Weight Updates

(Figure: the weight update procedure, showing the presynaptic and postsynaptic queues and example pre- and postsynaptic spike trains against the τ+ and τ− windows.)

Weight update procedure:
- For each postsynaptic event, we iterate backwards through the presynaptic queue to find the causal spikes and increment the appropriate weights in the LUT.
- For each presynaptic event, we iterate backwards through the postsynaptic queue to find the anti-causal spikes and decrement the appropriate weights in the LUT.

The magnitude of each weight update is specified by the value stored in the queue:

Δw = −η · ϕ_post(t_pre − t_post)   if 0 ≤ t_pre − t_post ≤ τ−
Δw = +η · ϕ_pre(t_post − t_pre)    if −τ+ ≤ t_pre − t_post < 0
Δw = 0                             otherwise
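
The sketch below walks the queues exactly as described: on a postsynaptic event the presynaptic queue is traversed backwards to potentiate causal synapses, and on a presynaptic event the postsynaptic queue is traversed backwards to depress anti-causal ones. The learning rate ETA, the example weight table standing in for the RAM LUT, and all names are illustrative assumptions, not the hardware implementation.

```python
from collections import deque

TAU_PLUS, TAU_MINUS, ETA = 4.0, 6.0, 0.05
pre_queue, post_queue = deque(), deque()     # entries: (address, spike time), appended in time order
weights = {("x0", "y0"): 0.5}                # stands in for the weight field of the RAM look-up table

def on_post_event(post_addr, t):
    """Postsynaptic event: potentiate synapses from recent (causal) presynaptic spikes."""
    post_queue.append((post_addr, t))
    for pre_addr, t_pre in reversed(pre_queue):      # most recent entries first
        dt = t - t_pre
        if dt > TAU_PLUS:
            break                                    # older entries fall outside the window
        key = (pre_addr, post_addr)
        if key in weights:
            weights[key] += ETA * (TAU_PLUS - dt)    # +eta * phi_pre(t_post - t_pre)

def on_pre_event(pre_addr, t):
    """Presynaptic event: depress synapses onto recent (anti-causal) postsynaptic spikes."""
    pre_queue.append((pre_addr, t))
    for post_addr, t_post in reversed(post_queue):
        dt = t - t_post
        if dt > TAU_MINUS:
            break
        key = (pre_addr, post_addr)
        if key in weights:
            weights[key] -= ETA * (TAU_MINUS - dt)   # -eta * phi_post(t_pre - t_post)

on_pre_event("x0", 1.0)
on_post_event("y0", 2.0)      # causal pairing: the weight increases
print(weights)
```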

Address Domain STDP: Details

(Figure: the resulting synaptic modification curve Δw(t_pre − t_post), with potentiation over the causal window τ+ and depression over the anti-causal window τ−.)

For stable learning, the area under the synaptic modification curve in the anti-causal regime must be greater than that in the causal regime. This ensures convergence of the synaptic strengths (Song et al., 2000). In our implementation of STDP, this constraint is met by setting τ− > τ+: with the triangular ϕ functions defined above, the areas of the two lobes scale as τ−² and τ+², so a longer anti-causal window makes depression dominate.

Experiment: Grouping Correlated Inputs

(Figure: a single output neuron y driven by ten input-layer neurons x1...x10, with x1...x7 labeled uncorrelated and x8...x10 labeled correlated.)

Each of the neurons in the input layer is driven by an externally supplied, randomly generated list of events. Our randomly generated list of events simulates two groups of neurons, one correlated and one uncorrelated. The uncorrelated group drives input layer cells x1...x7, and the correlated group drives input layer cells x8...x10. Although each neuron in the input layer has the same average firing rate, neurons x8...x10 fire synchronous spikes more often than any other combination of neurons.
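
A hedged sketch of such an input stimulus: ten input cells with equal mean rates, where the correlated cells draw part of their events from a shared source and therefore fire synchronously more often than any other subset. The rates, correlation fraction, and time step are illustrative parameters, not the values used in the experiment.

```python
import random

RATE = 20.0          # mean rate per cell (events per second)
CORR_FRACTION = 0.5  # fraction of a correlated cell's events drawn from the shared source
DURATION = 10.0      # seconds of stimulus
DT = 0.001           # time step for event generation

uncorrelated_cells = [f"x{i}" for i in range(1, 8)]    # x1..x7
correlated_cells = [f"x{i}" for i in range(8, 11)]     # x8..x10

events = []   # (time, address) list, to be fed into the AER sender
t = 0.0
while t < DURATION:
    # independent events for every cell (correlated cells at a reduced independent rate)
    for cell in uncorrelated_cells + correlated_cells:
        rate = RATE * (1 - CORR_FRACTION) if cell in correlated_cells else RATE
        if random.random() < rate * DT:
            events.append((t, cell))
    # shared events delivered to all correlated cells at the same instant
    if random.random() < RATE * CORR_FRACTION * DT:
        events.extend((t, cell) for cell in correlated_cells)
    t += DT

events.sort()
print(len(events), "events generated")   # every cell ends up near RATE on average
```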

Experimental Results

(Figure: learned synaptic weight distributions, shown for a single trial and averaged over trials.)

STDP has been shown to be effective at detecting correlations between groups of inputs (Song et al., 2000). We demonstrate that this can be accomplished in hardware in the address domain. Given a random starting distribution of synaptic weights for a set of presynaptic inputs, a neuron using STDP should maximize the weights of correlated inputs and minimize the weights of uncorrelated inputs. Our results illustrate this principle when all synaptic weights are initialized to a uniform value and the network is allowed to process a long sequence of input events.

Conclusion

The address domain provides an efficient representation to implement synaptic plasticity based on the relative timing of events:
- Learning circuitry can be moved to the periphery.
- The constituents of learning rules need not be constrained in space or time.

We have implemented an address domain learning system using a hybrid analog/digital architecture. Our experimental results illustrate an application of this approach using a temporally asymmetric Hebbian learning rule.

Extensions

The mixed-signal approach provides the best of both worlds:
- Analog cells are capable of efficiently modelling sophisticated neural dynamics in continuous time.
- Nearest-neighbor connectivity can be incorporated into an address-event framework to exploit the parallel processing capabilities of analog circuits.
- Storing the connections in a digital LUT affords the opportunity to implement learning rules that reconfigure the network topology on the fly.

In the future, we will combine all of the system elements on a single chip. The local embedding of memory will enable high-bandwidth distribution of events.