Computing with inter-spike interval codes in networks of integrate and fire neurons


Neurocomputing 65–66 (2005) 415–420
www.elsevier.com/locate/neucom

Computing with inter-spike interval codes in networks of integrate and fire neurons

Dileep George a,b,*, Friedrich T. Sommer b
a Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
b Redwood Neuroscience Institute, 1010 El Camino Real, Menlo Park, CA 94025, USA

Available online 15 December 2004

Abstract

Information encoding in spikes and computations performed by spiking neurons are two sides of the same coin and should be consistent with each other. This study uses this consistency requirement to derive some new results for inter-spike interval (ISI) coding in networks of integrate and fire (IF) neurons. Our analysis shows that such a model can carry out useful computations and that it can also account for the variability in spike timing observed in cortical neurons. Our general result is that IF type neurons, though highly nonlinear, perform a simple linear weighted sum operation on ISI coded quantities. Further, we derive bounds on the variation of ISIs that occurs in the model even though the neurons are deterministic. We also derive useful estimates of the maximum processing speed in a hierarchical network.
© 2004 Elsevier B.V. All rights reserved.

Keywords: ISI coding; Integrate and fire neuron; Spike timing; Cortical variability

* Corresponding author. Department of Electrical Engineering, Stanford University, 126 Escondido Village, Apt. 10-C, Stanford 94305, USA. Tel.: +1 650 8237643. E-mail addresses: dil@stanford.edu, dgeorge@rni.org (D. George), fsommer@rni.org (F.T. Sommer).
doi:10.1016/j.neucom.2004.10.038

1. Introduction

It is well known that neurons communicate using spikes. But how is information encoded in the spikes that travel from one neuron to another? Since outputs of

neurons are fed to other neurons for further processing, a generic information encoding scheme will need to have the same encoding mechanism at the input to a neuron as at the output. Given such an encoding scheme, neurons of the same type can be organized in multiple layers to perform useful computations [2]. A computational model for a neuron should, therefore, perform computations on its input spike trains and produce the result of the computation at its output with the same encoding as at its inputs. Thus, information encoding, computation and neuron models are part of the same problem and should be consistent with each other.

As an example, consider the Poisson rate coding hypothesis, where the spike trains are modelled as Poisson processes, with the rate of the process carrying the relevant information [4]. Softky and Koch showed that the rate coding hypothesis is inconsistent with integrate and fire neuron models [6]. A neuron model consistent with the rate coding hypothesis would require a random number generator inside the neuron so that the outputs are Poisson as well. This example illustrates that it is important to consider neuron models together with an information encoding and processing mechanism. What is a consistent kind of coding mechanism for the widely used integrate and fire (IF) neurons? What computation do IF neurons perform with their inputs and outputs coded in that fashion? In this paper, we show that inter-spike interval (ISI) coding is a viable coding mechanism that can produce useful computations in networks of IF neurons while being consistent with the above requirements. In addition, we show that the ISI coding and computation mechanisms suggested here could account for the spike timing variability observed in cortical spike trains.

The rest of this paper is organized as follows. Section 2 establishes the general relationship between ISI coding and IF neurons.
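The inconsistency pointed out by Softky and Koch can be reproduced in a few lines. The sketch below is not from the paper and all parameter values (input count, rates, threshold, increment) are hypothetical: a non-leaky IF neuron sums many independent Poisson input trains and the coefficient of variation (CV) of its output ISIs is measured. A Poisson output would have CV ≈ 1; the integrator instead fires far more regularly.

```python
import random
import statistics

random.seed(0)

def poisson_train(rate, t_end):
    """Spike times of a homogeneous Poisson process on [0, t_end)."""
    t, spikes = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= t_end:
            return spikes
        spikes.append(t)

# 100 independent Poisson inputs at 10 Hz each; a non-leaky IF neuron sums
# them: every input spike adds a fixed increment, fire and reset at threshold.
n_inputs, rate, t_end = 100, 10.0, 50.0
inputs = sorted(t for _ in range(n_inputs) for t in poisson_train(rate, t_end))

v_th, v_d = 1.0, 0.02            # 50 input spikes needed per output spike
v, out = 0.0, []
for t in inputs:
    v += v_d
    if v >= v_th:
        out.append(t)
        v = 0.0

isis = [b - a for a, b in zip(out, out[1:])]
cv = statistics.stdev(isis) / statistics.mean(isis)
print(f"output ISI CV = {cv:.2f}")   # ~1/sqrt(50), far below the Poisson CV of 1
```

Because each output spike pools 50 input events, the output waiting time is approximately Gamma-distributed and its CV shrinks like 1/√50, which is why a temporally integrating neuron cannot itself emit Poisson-like trains.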
Section 3 describes the nature of computations performed on ISI coded streams by IF neurons. This section also describes the source of variability arising from these computations. Section 4 compares these results with previous work and suggests topics for further investigation.

2. ISI coding and IF neurons

One way to encode a stream of non-negative numbers s_1, s_2, ..., s_N based on ISIs is to use these values to control the time elapsed between successive spikes, as given by

t_{n+1} − t_n = f(s_{n+1}),   (1)

where the t_n are the spike instants and f is a positive valued function.

Consider an IF neuron without leak driven by a constant input current I. Such a neuron would produce output spikes at regular intervals T_out given by

T_out = V_θ C / I,   (2)

where V_θ is the threshold of firing and C is the capacitance of the neuron. We assume that after firing, the membrane potential is reset to zero. The corresponding

expression for a leaky integrate and fire (LIF) neuron with leakage resistance R is given by [7]

T_out = RC ln( 1 / (1 − V_θ/(IR)) ).   (3)

Eqs. (2) and (3) satisfy the general form of ISI encoding given in Eq. (1), with f(s) = V_θ C/s for Eq. (2) and f(s) = RC ln(1/[1 − V_θ/(sR)]) for Eq. (3). In both cases, T_out decreases with increasing I, i.e., f is a decreasing function of the input magnitude.

Inputs to cortical neurons are spikes rather than constant currents. Moreover, any given neuron will have many input synapses carrying different spike streams. Therefore, it is important to know how the output spike intervals of a neuron are related to the input spike intervals on its synapses. In the next section we derive this relationship for an IF neuron with two synaptic inputs.

3. Computations with ISI encoded input streams

Consider ISI streams encoding static values s_1 and s_2 applied as inputs to an IF neuron with synaptic weights w_1 and w_2. We assume that streams s_1 and s_2 were produced by IF neurons with the same threshold and capacitance. Initially we consider neurons with no leakage. We then know that the values s_1 and s_2 correspond to input inter-spike intervals T_1 = V_θ C/s_1 and T_2 = V_θ C/s_2. Let T_out be an output ISI for these inputs. We assume that the firing threshold is such that T_out > 2 max{T_1, T_2}. Then during the interval T_out, synapse 1 received approximately T_out/T_1 spikes and synapse 2 received approximately T_out/T_2 spikes. These input spikes increment the membrane potential of the neuron, eventually making it reach the threshold and fire T_out units after its previous firing. If each spike causes an increment V_d in the membrane potential, on average the neuron fires when the following condition is met:

w_1 (T_out/T_1) + w_2 (T_out/T_2) = V_θ/V_d.   (4)

Thus, using Eq.
(2), the quantity s_out encoded by the output ISI T_out can be derived as

s_out = V_d C (w_1/T_1 + w_2/T_2) = (V_d/V_θ)(w_1 s_1 + w_2 s_2).   (5)

If we assume the ratio V_d/V_θ to be the same for all neurons producing the input spike streams, a general approximate input–output relationship holds for a neuron with multiple inputs: s_out = Σ_i w_i s_i (up to the common factor V_d/V_θ), where i runs over all synapses of this neuron. Thus, even though the IF neuron is highly nonlinear, the input–output relation between the spike intervals remains a linear function.

The above derivations of output ISIs are only approximate, since we did not consider the cases where two simultaneous input spikes fire an output spike. The

maximum output ISI, T_max, will occur subsequent to a spike that was fired by two coincident spikes on the input streams. The next firing, which terminates the ISI corresponding to T_max, would then occur synchronously with spikes on one of the synapses. Without loss of generality, we assume that this happens synchronously at synapse 1. Then T_max = n_1 T_1, where n_1 is the number of spikes received at synapse 1. Further, the number of spikes received on synapse 2 during this interval is given by n_2 = ⌊T_max/T_2⌋. Equating the weighted number of spikes to the firing threshold, we get an upper bound on the maximum output ISI T_max. Similarly, a lower bound for the minimum ISI T_min can be derived. The two bounds are given by

T_max ≤ [(V_θ/V_d) + max{w_1, w_2}] / [(w_1/T_1) + (w_2/T_2)],
T_min ≥ [(V_θ/V_d) − max{w_1, w_2}] / [(w_1/T_1) + (w_2/T_2)],   (6)

where V_θ is the firing threshold and V_d is the increment in membrane potential caused by an incoming spike.

Although the exact probability density of the ISI variation was not derived, from symmetry arguments we conclude that the average output ISI is the one given by Eq. (4). This implies a new way of interpreting the output of an IF neuron with static ISI coded inputs: the output ISIs, on average, represent the weighted sum of the quantities represented by the input ISIs. The output ISIs vary about this average, and the variance depends on the relative magnitude of the voltage increments caused by an input spike. This jitter could account for part of the variability of cortical spike trains.

The stated results can be extended to time varying input streams by imposing appropriate constraints on the maximum rate of change of the input streams.
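A short event-driven simulation can illustrate the weighted-sum result of Eqs. (4)–(5) and the spread described by Eq. (6). This is an illustrative sketch, not code from the paper; the parameter values (threshold, increment, capacitance, weights, encoded inputs) are arbitrary choices satisfying the assumption T_out > 2 max{T_1, T_2}.

```python
# Non-leaky IF neuron driven by two periodic, ISI-coded input streams.
v_th, v_d, C = 1.0, 0.05, 1.0     # threshold, per-spike increment, capacitance
w1, w2 = 0.6, 0.4                 # synaptic weights
s1, s2 = 5.0, 8.0                 # encoded input values
T1, T2 = v_th * C / s1, v_th * C / s2   # input ISIs from Eq. (2)

# Merge both periodic spike trains and integrate; fire and reset at threshold.
t_end = 200.0
events = sorted([(k * T1, w1) for k in range(1, int(t_end / T1))] +
                [(k * T2, w2) for k in range(1, int(t_end / T2))])
v, out = 0.0, []
for t, w in events:
    v += w * v_d
    if v >= v_th:
        out.append(t)
        v = 0.0                   # reset to zero, as assumed in the text

isis = [b - a for a, b in zip(out, out[1:])]
mean_T = sum(isis) / len(isis)
s_out = v_th * C / mean_T                       # decode output via Eq. (2)
s_pred = (v_d / v_th) * (w1 * s1 + w2 * s2)     # linear sum of Eq. (5)
print(f"decoded s_out = {s_out:.3f}, Eq. (5) predicts {s_pred:.3f}")

# Approximate bounds of Eq. (6) on individual output ISIs
denom = w1 / T1 + w2 / T2
T_max = (v_th / v_d + max(w1, w2)) / denom
T_min = (v_th / v_d - max(w1, w2)) / denom
print(f"observed ISIs in [{min(isis):.3f}, {max(isis):.3f}], "
      f"Eq. (6) gives [{T_min:.3f}, {T_max:.3f}]")
```

The decoded average output agrees with the linear prediction to within a few percent (the small bias comes from discarding the overshoot at reset), while individual ISIs jitter around that average even though every component of the simulation is deterministic.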
Since an IF neuron loses all history of its inputs after firing, a step change at any of the input streams is reflected at the output within a time t ≤ 2 T_out^max, where T_out^max is the maximum ISI of the output spike stream. Let T_out^max = K T_in^max, where T_in^max is the maximum ISI at the input and K ≥ 2 is a constant that depends on the time constant of the neuron and its synaptic strengths. If we assume that the input spike trains are faithful samples of a bandlimited input current, then by the sampling theorem T_in^max ≤ 1/(2 f_max), where f_max is the maximum frequency component of the input current. Then T_out^max ≤ K/(2 f_max). This means that the maximum frequency that can be represented by the spike trains drops by a factor of K ≥ 2 for every layer of IF neurons in a layered architecture. If this condition is violated for any layer, the output spike stream can no longer be interpreted as ISI coded as described in this paper. Knowing the minimum rate at which neurons at the highest level of the hierarchy should respond would give us an upper bound on the input frequencies that can be faithfully represented and processed using the ISI coding scheme described here.

The result that the basic type of computation is of the sort s_out = w_1 s_1 + w_2 s_2 is not only valid for IF neurons, but also holds for LIF neurons. For a LIF neuron with leakage resistance R and capacitance C, the membrane voltage decays exponentially with a time constant τ = RC, as in Eq. (3). Therefore, on average, the neuron fires when the following

condition is met:

w_1 (1 + e^{−T_1/τ} + e^{−2T_1/τ} + ... + e^{−((T_out/T_1) − 1) T_1/τ})
  + w_2 (1 + e^{−T_2/τ} + e^{−2T_2/τ} + ... + e^{−((T_out/T_2) − 1) T_2/τ})
  = (1 − e^{−T_out/τ}) [ w_1/(1 − e^{−T_1/τ}) + w_2/(1 − e^{−T_2/τ}) ] = V_θ/V_d.

By comparing this with Eq. (3), the quantity s_out encoded by the output ISI T_out becomes

s_out = (V_d/R) [ w_1/(1 − e^{−T_1/τ}) + w_2/(1 − e^{−T_2/τ}) ] = (V_d/V_θ)(w_1 s_1 + w_2 s_2).   (7)

4. Discussion

The question of how spike timing is used for computations has been under debate for a long time [1–3]. The challenge lies in finding coding and computational mechanisms that are consistent with neuron models. Rate coding with Poisson distributed spikes has been suggested as one mechanism for encoding information in spikes [4]. But the traditional view of Poisson spikes cannot be implemented with IF neurons [6]. The ISI coding mechanism we suggested for IF neurons can be interpreted as an instantaneous rate code. However, in our scheme the spikes are more precisely timed and hence can carry information at a higher rate compared to Poisson rate coding [5]. Rank order coding (ROC) [2] is another spike-timing-based computation suggested in the literature. However, ROC has the disadvantage that a slight delay in the onset of one of the inputs can disrupt the entire chain. Poisson rate coding and ROC can be considered to be at the opposite extremes of a continuum of spike coding mechanisms. The ISI coding we suggested here can be positioned somewhere between these extremes, thus giving a tradeoff between information rate and synchronicity requirements. Since ISI codes are nothing but rate codes with precise timing, all the results we derived in Section 3 are valid in general for all rate codes.

There has been extensive discussion about the nature of variability of spike trains in cortex [6,4]. Suggested mechanisms for this variability include random firing thresholds and stochastic synapses.
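The leaky case can be checked numerically in the same way. The sketch below is illustrative rather than the paper's own code, with hypothetical parameters chosen so that the neuron actually reaches threshold (the mean input drive times τ must exceed V_θ): an LIF neuron is simulated event by event, and its mean output ISI is compared with the value obtained by solving the firing condition above for T_out.

```python
import math

# LIF neuron with two periodic ISI-coded inputs; the membrane decays with
# time constant tau = RC between input events (all parameters hypothetical).
v_th, v_d = 1.0, 0.05             # threshold, per-spike increment
R, C = 20.0, 1.0
tau = R * C
w1, w2 = 0.6, 0.4                 # synaptic weights
T1, T2 = 0.2, 0.125               # input ISIs

t_end = 800.0
events = sorted([(k * T1, w1) for k in range(1, int(t_end / T1))] +
                [(k * T2, w2) for k in range(1, int(t_end / T2))])
v, t_prev, out = 0.0, 0.0, []
for t, w in events:
    v *= math.exp(-(t - t_prev) / tau)   # exponential leak since last event
    t_prev = t
    v += w * v_d
    if v >= v_th:
        out.append(t)
        v = 0.0

isis = [b - a for a, b in zip(out, out[1:])]
mean_T = sum(isis) / len(isis)

# Solve the firing condition for T_out:
# (1 - e^{-T_out/tau}) [w1/(1 - e^{-T1/tau}) + w2/(1 - e^{-T2/tau})] = v_th/v_d
g = w1 / (1 - math.exp(-T1 / tau)) + w2 / (1 - math.exp(-T2 / tau))
T_pred = -tau * math.log(1 - (v_th / v_d) / g)
print(f"mean simulated T_out = {mean_T:.3f}, firing condition predicts {T_pred:.3f}")
```

The simulated mean ISI tracks the closed-form prediction closely; the small residual bias again stems from discarding the overshoot when the membrane is reset to zero.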
The ISI coding mechanisms we suggested here demonstrate that the variability of the spike trains could, at least in part, arise as a natural consequence of computations based on ISI coded spike stream inputs, even in entirely deterministic IF neurons. More studies are required to verify what fraction of the observed cortical variability of spike trains can be explained using this approach.

We know that neurons in the lower levels of a cortical hierarchy need to respond faster compared to neurons higher up in the hierarchy. Our model predicts a drop in the maximum frequency as more computational layers are stacked, as in a hierarchical architecture. Clearly, there is a tradeoff between speed of processing and

accuracy of representation at each stage. Further studies are required to quantify this tradeoff. Also, the question of how learning could occur in ISI coded networks is left as a topic for further study.

References

[1] C.D. Brody, J.J. Hopfield, Simple networks for spike-timing-based computation, with application to olfactory processing, Neuron 37 (5) (2003) 843–852.
[2] A. Delorme, S.J. Thorpe, Face identification using one spike per neuron: resistance to image degradations, Neural Networks 14 (6–7) (2001) 795–803.
[3] W. Maass, On the relevance of time in neural computation and learning, Theoret. Comput. Sci. 261 (1) (2001) 157–178.
[4] M.N. Shadlen, W.T. Newsome, Noise, neural codes and cortical organization, Curr. Opin. Neurobiol. 4 (4) (1994) 569–579.
[5] W.R. Softky, Simple codes versus efficient codes, Curr. Opin. Neurobiol. 5 (2) (1995) 239–247.
[6] W.R. Softky, C. Koch, The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs, J. Neurosci. 13 (1) (1993) 334–350.
[7] R.B. Stein, The frequency of nerve action potentials generated by applied currents, Proc. R. Soc. London B Ser. 167 (6) (1967) 64–86.