Neurophysiology of a VLSI spiking neural network: LANN21

Stefano Fusi
INFN, Sezione Roma I, Università di Roma La Sapienza
Pza Aldo Moro 2, I-00185 Roma
fusi@jupiter.roma1.infn.it (http://jupiter.roma1.infn.it)

Paolo Del Giudice
Physics Laboratory, Istituto Superiore di Sanità, and INFN, Gruppo Collegato Sanità
Vle Regina Elena 299, Roma
paolo@ibmteo.iss.infn.it

Daniel J. Amit
Dipartimento di Fisica, Università di Roma La Sapienza, and INFN, Sezione Roma I
Ple Aldo Moro 2, I-00185 Roma
amit@surete.roma1.infn.it

Abstract

A recurrent network of 21 linear integrate-and-fire (LIF) neurons (14 excitatory, 7 inhibitory), connected by 6 spike-driven, excitatory, plastic synapses and 35 inhibitory synapses, is implemented in analog VLSI. The connectivity pattern is random, at a level of 3%. Each synaptic efficacy has two stable values, serving as long-term memory. Each neuron also receives an external afferent current. We present neurophysiological recordings of the collective characteristics of the network at frozen synaptic efficacies. Examining spike rasters, we show that in the absence of synaptic couplings and for constant external currents the neurons spike in a regular fashion. Keeping the excitatory part of the network isolated, as the strength of the synapses rises the neuronal spiking becomes increasingly irregular, as expressed by the coefficient of variability (CV) of the inter-spike intervals (ISI). In the absence of inhibition the rates are high and are well described by mean-field theory. When inhibition is turned on, the rates decrease; the variability remains and population activity fluctuations appear, as predicted by mean-field theory. We conclude that the collective behavior of the pilot network produces distributed noise, expressed in the ISI distribution, as would be required to control slow stochastic learning, and that the random connectivity acts to make the dynamics of the network noisy even in the absence of noise in the external afferents.

1 The neuron

The implemented LIF (linear integrate-and-fire) neuron is a modified version of the Mead neuron [1, 2]. The dynamics of the neuron's depolarization V is given by

    dV/dt = −β + I(t)    (1)

where I is the total afferent current and β is a constant leak. This equation is complemented by the constraint that V cannot go below 0, whatever its input. The neuron linearly integrates the afferent current, with the constant leak β, and emits a spike when V = θ; after τ (= 5 µs, the width of the spike) V is reset to H. τ plays the role of an absolute refractory period, since during the emission of the spike no current is injected into the neuron. In the implementation θ = 1.4 V and H = 0.5 V.

Figure 1: Single neuron response. A: Output rate ν vs drift (µ = ⟨I⟩ − β) for three values of σ (dashed line: σ² = 0, dotted line: σ² = 0.25, solid line: σ² = 0.5, in units of θ²/ms), theory [3]. B: CV vs µ for the same values of σ² as in A. C: Simulated evolution of the depolarization (Eq. 1) for a Gaussian input current with µ = 0.19 θ/ms, σ² = 0.11 θ²/ms (signal-dominated regime); the CV is 0.23. D: Simulation for µ = 0.96 θ/ms, σ² = 0.26 θ²/ms (noise-dominated regime); CV = 0.85. In both C and D the output rate is 1 Hz and H = 0.

Single neuron response to noisy input: when the afferent current I is drawn from a Gaussian distribution with mean ⟨I⟩ and variance σ², the spike emission rate ν(µ, σ) is [3]

    ν = Φ(µ, σ) = [ τ + (σ²/(2µ²)) (e^{−2µθ/σ²} − e^{−2µH/σ²}) + (θ − H)/µ ]^{−1}.    (2)
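As an illustration of Eqs. (1) and (2), the following Python sketch (not part of the original work; the dimensionless units with θ = 1, H = 0, time in ms, and the parameter values are assumptions chosen for illustration) integrates the depolarization driven by a Gaussian afferent current, applies the reflecting barrier at 0, and compares the simulated emission rate and CV with the prediction Φ(µ, σ):

    import numpy as np

    def phi(mu, sigma2, theta=1.0, H=0.0, tau_arp=0.0):
        """Eq. (2): mean emission rate of the linear IF neuron for drift mu
        (= <I> - beta) and variance sigma2 of the Gaussian afferent current."""
        a = sigma2 / (2.0 * mu**2) * (np.exp(-2.0 * mu * theta / sigma2)
                                      - np.exp(-2.0 * mu * H / sigma2))
        return 1.0 / (tau_arp + a + (theta - H) / mu)

    def simulate_neuron(mu, sigma2, theta=1.0, H=0.0, tau_arp=0.0,
                        dt=0.01, T=5000.0, seed=0):
        """Euler integration of Eq. (1); V is reflected at 0, reset to H
        after each spike, and frozen during the refractory period."""
        rng = np.random.default_rng(seed)
        V, t_last, spikes = H, -1e9, []
        for i in range(int(T / dt)):
            t = i * dt
            if t - t_last < tau_arp:            # absolute refractory period
                continue
            V += mu * dt + np.sqrt(sigma2 * dt) * rng.standard_normal()
            V = max(V, 0.0)                     # depolarization cannot go below 0
            if V >= theta:                      # threshold crossing: emit and reset
                spikes.append(t)
                V, t_last = H, t
        return np.array(spikes)

    mu, sigma2, T = 0.05, 0.2, 5000.0           # theta/ms, theta^2/ms, ms
    spikes = simulate_neuron(mu, sigma2, T=T)
    isi = np.diff(spikes)
    print("simulated rate (Hz):", 1e3 * len(spikes) / T)
    print("Eq. (2) rate   (Hz):", 1e3 * phi(mu, sigma2))
    print("CV of the ISI      :", isi.std() / isi.mean())

For µ < 0 the same expression yields the small but non-zero rate produced purely by fluctuations, i.e. the noise-dominated regime of Fig. 1D.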

The rate of Eq. (2) is depicted in Fig. 1A vs the drift µ (= ⟨I⟩ − β) for three values of σ. If σ = 0, the response is threshold-linear [4] (dashed curve): ν = 0 if ⟨I⟩ ≤ β; otherwise

    ν = [ (θ − H)/µ + τ ]^{−1}.    (3)

This would be the expected response if the neurons were decoupled and the afferent current noiseless. If noise is present in the afferent current, the rate vs the drift becomes concave at low currents and the rate is non-zero even for negative drift. In Fig. 1C,D we show the (simulated) temporal evolution of the depolarization of the implemented neuron for positive and negative drift, and the output frequency of the neuron as a function of the drift, for different values of σ. In Fig. 1B we present the coefficient of variability CV, defined as

    CV = sqrt(⟨T²⟩ − ⟨T⟩²) / ⟨T⟩,

where T is the interval between two subsequent spikes emitted by the neuron and ⟨T⟩ = 1/ν.

2 The network

The neurons are connected by spike-driven plastic synapses [5] between the excitatory neurons, and by fixed-value synapses for the other connections (excitatory-to-inhibitory, inhibitory-to-excitatory, inhibitory-to-inhibitory). In this report we concentrate on the collective properties of the neurons, so all synapses are kept fixed and a detailed description of the synapse is omitted. The connectivity in all directions is random, and on average each neuron is connected to 3% of the others. Fig. 2 (left) shows the layout of the LANN21 (Learning Attractor Neural Network); on the right the connectivity scheme and the chip I/O are illustrated.

Figure 2: The LANN21 chip (CMOS technology, 1.2 µm; the core is 1.5 × 1.38 mm). Left: layout of the chip. The 14 excitatory neurons are in the topmost, central row, and the 7 inhibitory neurons are in the rightmost column. The scattered patches in the lower part of the chip are the 6 plastic, excitatory-to-excitatory synapses.
The other synapses are barely visible in the lower strip. Right: four-way connectivity scheme of the network: gray squares are existing synapses; each row represents the dendritic tree of one neuron. Black squares indicate that there are no self-interactions. Equal external stimulation current arrives within the groups indicated by the pointed boxes on the left; spikes can be observed only for the neurons marked with a downward arrow (10 excitatory, 5 inhibitory); spikes whose arrows go into the same Mux box come out on the same pin and are then de-multiplexed.

Note that when all the neurons in the network are excitatory and the stimulating current is constant, the neurons must operate at positive drift, otherwise no spike is emitted and no noise is generated. If the number of neurons is large enough, and the firing of each neuron is an independent random variable, the afferent current to each neuron is random and can be approximated by a Gaussian current [7, 3]. In particular, the mean µ of the current and its variance σ² can be estimated by

    µ = cJν − β + I_ext,    σ² = cJ²ν,

where c is the average number of afferents, ν is the mean frequency of the afferent neurons, J is the mean strength of the couplings, and I_ext is the external afferent current. This is a good approximation even in the case of LANN21, where the number of afferents is rather low.

3 Methods: measuring efficacies

The existing synaptic efficacies are either zero or have another (fixed) value, whose magnitude is externally regulated and set by a host PC via a programmable microprocessor-based board. The actual values of the efficacies must be measured following their regulation. In LANN21 only the depolarization of one neuron is visible (neuron 0); this neuron has two afferent excitatory synapses, whose efficacy can be directly measured from the jumps induced in the depolarization by pre-synaptic spikes. We have adopted the following procedure to estimate all the synaptic efficacies:

1. All synapses are regulated, the external (constant) currents are turned on, and spike rates and CVs are measured for all neurons.

2. All synapses are turned off and the resulting rates provide a measurement of the mean afferent currents, which are composed of the external currents and the neuron's linear decay. Since in the absence of interaction the integration is linear, the mean afferent current µ to each neuron is directly related to its mean spiking frequency ν (see Eq. 3):

    µ = ⟨I⟩ − β = νθ / (1 − ντ).

The contribution of the external current can be separated from the linear decay by setting β = 0 and repeating the procedure.

3. Each of the four types of synapse is turned on exclusively, and rates and CVs are measured.

4. For each of the four classes, the single type of synaptic efficacy involved is varied in MF theory until the computed rates match the measured rates. For example, to determine the mean inhibitory-to-excitatory efficacy J_IE, all the other couplings are turned off and the average rates are measured. The population of excitatory neurons receives an extra inhibitory current with respect to case 2, and the mean frequency ν_E is lowered. The estimate of J_IE should satisfy the following self-consistency equations:

    ν_E = Φ(µ_E, σ_E),    µ_E = I_ext^E − β_E − c_IE J_IE ν_I,    σ_E² = c_IE J_IE² ν_I,

where ν_I and ν_E are the measured average rates of the inhibitory and excitatory neurons respectively, c_IE is the mean number of inhibitory afferents per excitatory neuron, and I_ext^E is the mean external afferent current, which has already been determined in step 2.
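As a concrete illustration of step 4 (a sketch only, not the authors' analysis code; the bisection bracket and the numerical values of the rates, currents and connectivity are invented for the example), the self-consistency equations can be solved for J_IE by scanning the MF rate Φ(µ_E, σ_E) until it matches the measured ν_E:

    import numpy as np

    def phi(mu, sigma2, theta=1.0, H=0.0, tau_arp=0.0):
        """Eq. (2), as in the previous sketch (mu must be non-zero)."""
        a = sigma2 / (2.0 * mu**2) * (np.exp(-2.0 * mu * theta / sigma2)
                                      - np.exp(-2.0 * mu * H / sigma2))
        return 1.0 / (tau_arp + a + (theta - H) / mu)

    def estimate_J_IE(nu_E, nu_I, I_ext_E, beta_E, c_IE,
                      J_lo=1e-4, J_hi=0.5, iters=60):
        """Bisect for the inhibitory-to-excitatory efficacy J_IE (units of theta)
        such that Phi(mu_E, sigma_E) equals the measured excitatory rate nu_E,
        with only the I->E couplings turned on."""
        def mismatch(J):
            mu_E = I_ext_E - beta_E - c_IE * J * nu_I      # drift, theta/ms
            sigma2_E = c_IE * J**2 * nu_I                  # variance, theta^2/ms
            return phi(mu_E, sigma2_E) - nu_E
        lo, hi, f_lo = J_lo, J_hi, mismatch(J_lo)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            f_mid = mismatch(mid)
            if f_mid * f_lo > 0:
                lo, f_lo = mid, f_mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Illustrative numbers only; rates are in spikes/ms (0.10 = 100 Hz)
    J_IE = estimate_J_IE(nu_E=0.10, nu_I=0.20, I_ext_E=0.20, beta_E=0.02, c_IE=2.0)
    print("estimated J_IE = %.3f theta" % J_IE)

The same scan, applied in turn to each of the four coupling classes, mimics the procedure of this section; the J_EE estimate can then be cross-checked against the jumps observed directly on neuron 0.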

After all four (average) efficacies are determined this way, MF theory is applied to the entire network and the rates are computed. These are compared with the rates measured in step 1. This is a strong consistency check, in addition to comparing the excitatory-to-excitatory efficacy with the one measured directly on neuron 0. It turns out that when the network does not synchronize, the emission rates can be obtained by a MF approach, despite the fact that the number of neurons is rather low.

Figure 3: Activity in the excitatory network. Left: spike rasters (0.5 s) for 10 excitatory neurons (no inhibition) for three values of the average synaptic efficacy (A: J = 0, B: J = 0.07 θ, C: J = 0.11 θ), in units of the neural threshold (β = 0). Note that the spike rates of different neurons are quite different, due to the random connectivity of the network, which results in a large variability in the number of afferent synapses among neurons. Right: average CV vs J_EE. The black circles correspond to the raster sets on the left. The rates are kept constant as J_EE varies, by adjusting the external currents.

4 Results

4.1 Excitatory network

As the synaptic efficacies are turned on, the incoming current to a neuron includes the feedback spikes from the other neurons in the network, over and above the (constant) external current. For a disordered system (here, disordered due to the random connectivity) the spike trains become increasingly irregular. Even though the system is completely deterministic, on relatively long time scales the spike trains cannot be distinguished from noise (see e.g. [6]). In Fig. 3 (left) we present 0.5-second spike rasters of 10 excitatory neurons of the network (in the absence of inhibition) for three values of the average excitatory synaptic efficacy. Fig. 3 (right) presents the average CV vs the (measured) synaptic efficacy. The excitatory efficacy is varied while holding the average neuronal rate constant, by adjusting the afferent current. The CV increases with the efficacy, as well as with an increasing number of afferent synapses. Note that for weak synapses the neurons emit quite regularly.
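For the rasters of Figs. 3 and 4 the irregularity is quantified per neuron by the CV of its ISIs. The sketch below (an illustration only; the network size, connectivity level, coupling strength, instantaneous-jump synapse model and all numerical values are assumptions, not a model of the chip's circuits) simulates a small all-excitatory network of linear IF neurons with random connectivity and a constant external current, and computes the average CV, the quantity plotted in Fig. 3 (right):

    import numpy as np

    def simulate_network(N=14, p=0.3, J=0.07, I_ext=0.25, beta=0.05,
                         theta=1.0, H=0.0, dt=0.05, T=2000.0, seed=1):
        """Excitatory network of linear IF neurons (Eq. 1) with random
        connectivity; each presynaptic spike instantaneously adds J to the
        postsynaptic depolarization. Returns a list of spike-time arrays."""
        rng = np.random.default_rng(seed)
        # C[i, j] = 1 means a synapse from neuron j onto neuron i
        # (each row is a dendritic tree, as in Fig. 2); no self-coupling.
        C = ((rng.random((N, N)) < p) & ~np.eye(N, dtype=bool)).astype(float)
        V = rng.random(N) * theta
        spikes = [[] for _ in range(N)]
        for step in range(int(T / dt)):
            V += (I_ext - beta) * dt                 # constant drift
            fired = V >= theta
            for i in np.flatnonzero(fired):
                spikes[i].append(step * dt)
            V[fired] = H                             # reset after emission
            V += J * (C @ fired.astype(float))       # recurrent feedback jumps
            np.maximum(V, 0.0, out=V)                # reflecting barrier at 0
        return [np.array(s) for s in spikes]

    def cv(spike_times):
        isi = np.diff(spike_times)
        return isi.std() / isi.mean() if len(isi) > 1 else np.nan

    T = 2000.0                                       # ms
    trains = simulate_network(T=T)
    print("mean rate (Hz):", np.mean([1e3 * len(s) / T for s in trains]))
    print("mean CV      :", np.nanmean([cv(s) for s in trains]))

Sweeping J while retuning I_ext to keep the mean rate fixed reproduces, qualitatively, the trend of CV vs coupling strength discussed above.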

For not too high values of the synaptic efficacies (high Js tend to synchronize the neurons' firing and to break the basic hypotheses of the MF approach) we compare the observations with the predictions of the MF approximation. We made the comparison for the parameters corresponding to the black points B and C in Fig. 3. For point B, MF gives J_EE = 95.4 mV; the measured J_EE is 12 ± 7 mV. For point C, MF gives J_EE = 138 mV; measured: 13 ± 21 mV. It is seen that even for such a small network there is good agreement with the MF predictions.

4.2 Inhibition included

Turning on the inhibition, one can obtain neurons with a small negative drift. Inhibition lowers the neuronal drift and hence the rates. This is accompanied by an increase of the ISI variability, as measured by the CV. It is a consequence of the fact that the neurons move into their noise-driven regime, where spikes are emitted due to fluctuations (see e.g. Fig. 1D). Fig. 4 presents two sets of raster plots of the 15 visible neurons (5 inhibitory at the bottom) for the two sets of couplings in Table 1. For strong couplings (upper part of Fig. 4) we would have to inject small external currents to avoid a very high level of synchronization in the network; on the other hand, working with very small currents (of the order of nA) is a source of instabilities and needs fine tuning. We therefore inject an equal, constant current into the excitatory neurons belonging to group A in Fig. 2 (they have a common external input), and no current into the other neurons. High rates are provoked only in these 4 neurons, which in turn excite the other excitatory neurons as well as the inhibitory ones. In this regime the neurons exhibit high variability in their spiking times (the average measured CV is 1.6 for the excitatory neurons that do not receive external currents). A typical weak-coupling regime is shown in the lower part of Fig. 4. After estimating the couplings following the procedure described in Section 3, the CV is calculated in the MF approximation. The measured average CV of all excitatory neurons is 0.35, while the MF prediction is 0.38 (for inhibitory neurons, measured CV: 0.13, MF: 0.14). For the J_EE coupling we could compare the MF estimate with the direct measurement of the jumps induced in the depolarization of neuron 0; MF: 0.134 θ, measured: 0.135 θ. The measured average rates are ν_E = 139 Hz and ν_I = 199 Hz; the MF predictions are ν_E = 134 Hz and ν_I = 199 Hz.

    Network couplings    J_EE     J_IE     J_EI     J_II
    Strong couplings     0.235    0.168    0.14     0.06
    Weak couplings       0.134    0.084    0.07     0.03

Table 1: Two sets of couplings for the network of Fig. 4. The values (in units of θ) are estimated from MF as described in Section 3.

We conclude that the dynamics of the connected network generates (effective) noise even in the absence of noise in the afferent current, and that this noise can underpin slow stochastic learning [5].

Acknowledgments

We acknowledge the support of INFN through the NALS initiative. D. Badoni, A. Salamon and V. Dante designed the chip and the data acquisition system; we thank them for their support during the present work.

Figure 4: Network with inhibitory neurons: two raster sets for the two sets of synaptic parameters of Table 1 (upper panel: strong couplings; lower panel: weak couplings; all interactions turned on). The excitatory neurons are plotted above the 5 inhibitory neurons. Only the 4 neurons corresponding to the 4 topmost rows receive external current (equal and constant).

References

[1] Mead C. (1989) Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.
[2] Badoni D., Salamon A., Salina A., Fusi S. (1999) aVLSI implementation of a learning network with stochastic spike-driven synapse, in preparation.
[3] Fusi S. and Mattia M. (1999) Collective behavior of networks with linear (VLSI) integrate-and-fire neurons. Neural Computation 11, 643.
[4] Gerstein G.L. and Mandelbrot B. (1964) Random walk models for the spike activity of a single neuron. Biophysical Journal 4, 41.
[5] Fusi S., Annunziato M., Badoni D., Salamon A. and Amit D.J. (1999) Spike-driven synaptic plasticity: theory, simulation, VLSI implementation. Neural Computation (submitted; URL: jupiter.roma1.infn.it).
[6] van Vreeswijk C. and Sompolinsky H. (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724.
[7] Amit D.J. and Brunel N. (1997) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex 7, 237.