When is an Integrate-and-fire Neuron like a Poisson Neuron?


When is an Integrate-and-fire Neuron like a Poisson Neuron?

Charles F. Stevens, Salk Institute MNL/S, La Jolla, CA 92037, cfs@salk.edu
Anthony Zador, Salk Institute MNL/S, La Jolla, CA 92037, zador@salk.edu

Abstract

In the Poisson neuron model, the output is a rate-modulated Poisson process (Snyder and Miller, 1991); the time-varying rate parameter $r(t)$ is an instantaneous function $G[\cdot]$ of the stimulus, $r(t) = G[s(t)]$. In a Poisson neuron, then, $r(t)$ gives the instantaneous firing rate (the instantaneous probability of firing at any instant $t$) and the output is a stochastic function of the input. In part because of its great simplicity, this model is widely used (usually with the addition of a refractory period), especially in in vivo single-unit electrophysiological studies, where $s(t)$ is usually taken to be the value of some sensory stimulus. In the integrate-and-fire neuron model, by contrast, the output is a filtered and thresholded function of the input: the input is passed through a low-pass filter (determined by the membrane time constant $\tau$) and integrated until the membrane potential $v(t)$ reaches the threshold $\theta$, at which point $v(t)$ is reset to its initial value. By contrast with the Poisson model, in the integrate-and-fire model the output is a deterministic function of the input. Although the integrate-and-fire model is a caricature of real neural dynamics, it captures many of the qualitative features, and is often used as a starting point for conceptualizing the biophysical behavior of single neurons. Here we show how a slightly modified Poisson model can be derived from the integrate-and-fire model with noisy inputs $y(t) = s(t) + n(t)$. In the modified model, the transfer function $G[\cdot]$ is a sigmoid (erf) whose shape is determined by the noise variance $\sigma_n^2$. Understanding the equivalence between the dominant in vivo and in vitro simple neuron models may help forge links between the two levels.

104 C. F. STEVENS, A. ZADOR

1 Introduction

In the Poisson neuron model, the output is a rate-modulated Poisson process; the time-varying rate parameter $r(t)$ is an instantaneous function $G[\cdot]$ of the stimulus, $r(t) = G[s(t)]$. In a Poisson neuron, then, $r(t)$ gives the instantaneous firing rate (the instantaneous probability of firing at any instant $t$) and the output is a stochastic function of the input. In part because of its great simplicity, this model is widely used (usually with the addition of a refractory period), especially in in vivo single-unit electrophysiological studies, where $s(t)$ is usually taken to be the value of some sensory stimulus. In the integrate-and-fire neuron model, by contrast, the output is a filtered and thresholded function of the input: the input is passed through a low-pass filter (determined by the membrane time constant $\tau$) and integrated until the membrane potential $v(t)$ reaches threshold $\theta$, at which point $v(t)$ is reset to its initial value. By contrast with the Poisson model, in the integrate-and-fire model the output is a deterministic function of the input. Although the integrate-and-fire model is a caricature of real neural dynamics, it captures many of the qualitative features, and is often used as a starting point for conceptualizing the biophysical behavior of single neurons (Softky and Koch, 1993; Amit and Tsodyks, 1991; Shadlen and Newsome, 1995; Shadlen and Newsome, 1994; Softky, 1995; DeWeese, 1995; DeWeese, 1996; Zador and Pearlmutter, 1996). Here we show how a slightly modified Poisson model can be derived from the integrate-and-fire model with noisy inputs $y(t) = s(t) + n(t)$. In the modified model, the transfer function $G[\cdot]$ is a sigmoid (erf) whose shape is determined by the noise variance $\sigma_n^2$. Understanding the equivalence between the dominant in vivo and in vitro simple neuron models may help forge links between the two levels.
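As a concrete illustration of the Poisson neuron model just described, the sketch below (not from the paper) generates a spike train bin by bin, spiking in each bin with probability $r(t)\,\Delta t = G[s(t)]\,\Delta t$. The sinusoidal stimulus, the half-rectifying choice of $G$, and all parameter values are hypothetical choices for illustration only.

```python
import numpy as np

def poisson_neuron(s, G, dt, rng):
    """Rate-modulated Poisson neuron: spike in each bin with probability
    r(t)*dt, where r(t) = G[s(t)] is an instantaneous function of the
    stimulus. Assumes r(t)*dt << 1 for all t."""
    r = G(np.asarray(s))                  # instantaneous rate (Hz)
    return rng.random(len(r)) < r * dt    # boolean spike train

# Hypothetical example: 2 Hz sinusoidal stimulus, half-rectifying G
dt = 1e-3                                 # 1 ms bins
t = np.arange(0, 10, dt)                  # 10 s of stimulus
s = np.sin(2 * np.pi * 2 * t)
G = lambda x: 40 * np.maximum(x, 0)       # illustrative rate function (Hz)
spikes = poisson_neuron(s, G, dt, rng=np.random.default_rng(0))
```

Because the output is stochastic, repeated calls with different random seeds produce different spike trains for the identical stimulus, which is precisely the property contrasted with the integrate-and-fire model below.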
2 The integrate-and-fire model

Here we describe the forgetful leaky integrate-and-fire model. Suppose we add a signal $s(t)$ to some noise $n(t)$, $y(t) = n(t) + s(t)$, and threshold the sum to produce a spike train $z(t) = F[s(t) + n(t)]$, where $F$ is the thresholding functional and $z(t)$ is a list of firing times generated by the input. Specifically, suppose the voltage $v(t)$ of the neuron obeys

$$\dot{v}(t) = \frac{-v(t) + y(t)}{\tau}, \qquad (1)$$

where $\tau$ is the membrane time constant. We assume that the noise $n(t)$ has zero mean and is white with variance $\sigma_n^2$. Thus $y(t)$ can be thought of as a Gaussian white process with variance $\sigma_n^2$ and a time-varying mean $s(t)$. If the voltage reaches the threshold $\theta_0$ at some time $t$, the neuron emits a spike at that time and resets to the initial condition $V_0$. This is therefore a five-parameter model: the membrane time constant $\tau$, the mean input signal $\mu$, the variance of the input signal $\sigma^2$, the threshold $\theta$, and the reset value $V_0$. Of course, if $n(t) = 0$, we recover a purely deterministic integrate-and-fire model.
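Equation (1) with the threshold-and-reset rule can be simulated directly. The sketch below (not from the paper) uses a simple Euler-Maruyama discretization in which the white noise contributes $\sigma_n \sqrt{\Delta t}\,\xi$ per step; that discretization convention, the function name, and all parameter values are illustrative assumptions.

```python
import numpy as np

def lif_spike_times(s, tau, sigma_n, theta, v0, dt, rng):
    """Euler-Maruyama simulation of tau*dv/dt = -v + s(t) + n(t) (eq. 1),
    with a spike and reset to v0 whenever v crosses threshold theta.
    The noise term sigma_n*sqrt(dt)*xi is one common discretization of
    white noise n(t); this is an illustrative convention, not the paper's."""
    v, times = v0, []
    noise = sigma_n * np.sqrt(dt) * rng.standard_normal(len(s))
    for i, si in enumerate(s):
        v += (dt * (-v + si) + noise[i]) / tau
        if v >= theta:
            times.append(i * dt)
            v = v0  # reset to the initial condition
    return times

# With the noise switched off, the model is purely deterministic: constant
# drive s = 2 with theta = 1 and v0 = 0 gives a perfectly regular spike
# train with period roughly tau*ln(2) (parameter values are illustrative).
dt, tau = 1e-4, 0.02
s = np.full(10000, 2.0)  # 1 s of constant drive
times = lif_spike_times(s, tau, sigma_n=0.0, theta=1.0, v0=0.0,
                        dt=dt, rng=np.random.default_rng(0))
```

With `sigma_n = 0` every interspike interval is identical, illustrating the determinism emphasized in the text; setting `sigma_n > 0` makes the spike times vary from trial to trial.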

In order to forge the link between the integrate-and-fire neuron dynamics and the Poisson model, we will treat the firing times $T$ probabilistically. That is, we will express the output of the neuron to some particular input $s(t)$ as a conditional distribution $p(t|s(t))$, i.e. the probability of obtaining any firing time $T$ given some particular input $s(t)$. Under these assumptions, $P(t)$ is given by the first passage time distribution (FPTD) of the Ornstein-Uhlenbeck process (Uhlenbeck and Ornstein, 1930; Tuckwell, 1988). This means that the time evolution of the voltage prior to reaching threshold is given by the Fokker-Planck equation (FPE),

$$\frac{\partial}{\partial t} g(t,v) = \frac{\sigma_y^2}{2} \frac{\partial^2}{\partial v^2} g(t,v) - \frac{\partial}{\partial v}\left[\left(s(t) - \frac{v(t)}{\tau}\right) g(t,v)\right], \qquad (2)$$

where $\sigma_y = \sigma_n$ and $g(t,v)$ is the distribution at time $t$ of voltage $-\infty < v \le \theta_0$. Then the first passage time distribution is related to $g(t,v)$ by

$$P(t) = -\frac{\partial}{\partial t} \int_{-\infty}^{\theta_0} g(t,v)\, dv. \qquad (3)$$

The integrand is the fraction of all paths that have not yet crossed threshold. $P(t)$ is therefore just the interspike interval (ISI) distribution for a given signal $s(t)$. A general eigenfunction expansion solution for the ISI distribution is known, but it converges slowly and its terms offer little insight into the behavior (at least to us).

We now derive an expression for the probability of crossing threshold in some very short interval $\Delta t$, starting at some $v$. We begin with the "free" distribution of $g$ (Tuckwell, 1988): the probability of the voltage jumping to $v'$ at time $t' = t + \Delta t$, given that it was at $v$ at time $t$, assuming von Neumann boundary conditions at plus and minus infinity,

$$g(t', v' \mid t, v) = \frac{1}{\sqrt{2\pi\, q(\Delta t; \sigma_y)}} \exp\left[-\frac{\left(v' - m(\Delta t; v)\right)^2}{2\, q(\Delta t; \sigma_y)}\right], \qquad (4)$$

with $m(\Delta t) = v e^{-\Delta t/\tau} + s(t) * \tau(1 - e^{-\Delta t/\tau})$, where $*$ denotes convolution. The free distribution is a Gaussian with a time-dependent mean $m(\Delta t)$ and variance $q(\Delta t; \sigma_y)$; for this Ornstein-Uhlenbeck process the variance is $q(\Delta t; \sigma_y) = \tfrac{1}{2}\sigma_y^2 \tau \left(1 - e^{-2\Delta t/\tau}\right)$. This expression is valid for all $\Delta t$.
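The Gaussian free distribution of eq. (4) can be checked numerically for a constant input: simulate many realizations of the diffusion underlying eq. (2) and compare the empirical mean and variance after an interval $\Delta t$ against $m(\Delta t) = v e^{-\Delta t/\tau} + s\tau(1 - e^{-\Delta t/\tau})$ and $q(\Delta t;\sigma_y) = \tfrac{1}{2}\sigma_y^2\tau(1 - e^{-2\Delta t/\tau})$. This sketch is not from the paper, and all parameter values in it are arbitrary.

```python
import numpy as np

# Monte Carlo check of the free OU transition density (eq. 4) for a
# constant input s; all parameter values here are illustrative.
rng = np.random.default_rng(1)
tau, sigma_y, s, v_start = 0.02, 1.0, 0.5, -0.3
dt, Dt, n_paths = 1e-4, 0.01, 20000

v = np.full(n_paths, v_start)
for _ in range(int(Dt / dt)):
    # dv = (s - v/tau) dt + sigma_y dW, matching the drift and diffusion
    # terms of the Fokker-Planck equation (eq. 2)
    v += dt * (s - v / tau) + sigma_y * np.sqrt(dt) * rng.standard_normal(n_paths)

# Theoretical mean and variance of the Gaussian free distribution
m_th = v_start * np.exp(-Dt / tau) + s * tau * (1 - np.exp(-Dt / tau))
q_th = 0.5 * sigma_y**2 * tau * (1 - np.exp(-2 * Dt / tau))
```

With 20,000 paths the sample mean and variance agree with `m_th` and `q_th` to within sampling error, which is the content of eq. (4) for this special case.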
The probability of making a jump $\Delta v = v' - v$ in a short interval $\Delta t \ll \tau$ depends only on $\Delta v$ and $\Delta t$,

$$g_\Delta(\Delta t, \Delta v; \sigma_y) = \frac{1}{\sqrt{2\pi\, q_\Delta(\sigma_y)}} \exp\left[-\frac{\Delta v^2}{2\, q_\Delta(\sigma_y)}\right]. \qquad (5)$$

For small $\Delta t$, we expand to get $q_\Delta(\sigma_y) \approx \sigma_y^2 \Delta t$, which is independent of $\tau$, showing that the leak can be neglected for short times.

Now the probability $P_\Delta$ that the voltage exceeds threshold in some short $\Delta t$, given that it started at $v$, depends on how far $v$ is from threshold; it is

$$P_\Delta = \Pr[v + \Delta v \ge \theta] = \Pr[\Delta v \ge \theta - v] = \int_{\theta - v}^{\infty} d\Delta v\; g_\Delta(\Delta t, \Delta v; \sigma_y) = \frac{1}{2}\operatorname{erfc}\left(\frac{\theta - v}{\sqrt{2\, q_\Delta(\sigma_y)}}\right) = \frac{1}{2}\operatorname{erfc}\left(\frac{\theta - v}{\sigma_y \sqrt{2\Delta t}}\right), \qquad (6)$$

where $\operatorname{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\, dt$ goes from 2 to 0. This then is the key result: it gives the instantaneous probability of firing as a function of the instantaneous voltage $v$. erfc is sigmoidal with a slope determined by $\sigma_y$, so a smaller noise yields a steeper (more deterministic) transfer function; in the limit of zero noise, the transfer function is a step and we recover a completely deterministic neuron. Note that $P_\Delta$ is actually an instantaneous function of $v(t)$, not the stimulus itself $s(t)$. If the noise is large compared with $s(t)$ we must consider the distribution $g_s(v, t; \sigma_y)$ of voltages reached in response to the input $s(t)$:

$$P_y(t) = \int_{-\infty}^{\theta_0} dv\; g_s(v, t; \sigma_y)\, P_\Delta(v). \qquad (7)$$

3 Ensemble of Signals

What if the inputs $s(t)$ are themselves drawn from an ensemble? If their distribution is also Gaussian and white with mean $\mu$ and variance $\sigma_s^2$, and if the firing rate is low ($E[T] \gg \tau$), then the output spike train is Poisson. Why is firing Poisson only in the slow firing limit? The reason is that, by assumption, immediately following a spike the membrane potential resets to $V_0$; it must then rise (assuming $\mu > 0$) to some asymptotic level that is independent of the initial conditions. During this rise the firing rate is lower than the asymptotic rate, because on average the membrane is farther from threshold, and its variance is lower. The rate at which the asymptote is achieved depends on $\tau$. In the limit as $t \gg \tau$, some asymptotic distribution of voltage $g_\infty(v)$ is attained. Note that if we made the reset $V_0$ stochastic, with a distribution given by $g_\infty(v)$, then the firing probability would be the same even immediately after spiking, and firing would be Poisson for all firing rates.
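The erfc expression in eq. (6) is easy to verify against a direct Monte Carlo estimate: draw Gaussian jumps $\Delta v$ with the short-time variance $q_\Delta = \sigma_y^2 \Delta t$ and count how often $v + \Delta v$ exceeds threshold. This sketch is not from the paper; the parameter values are arbitrary illustrations.

```python
import numpy as np
from math import erfc, sqrt

# Check eq. (6): P_Delta = 0.5 * erfc((theta - v) / (sigma_y * sqrt(2*Dt))).
# Parameter values are illustrative.
rng = np.random.default_rng(2)
sigma_y, Dt, theta, v = 1.0, 1e-3, 1.0, 0.95

q_D = sigma_y**2 * Dt                        # short-time jump variance
dv = sqrt(q_D) * rng.standard_normal(200_000)
p_mc = np.mean(v + dv >= theta)              # Monte Carlo crossing probability
p_th = 0.5 * erfc((theta - v) / (sigma_y * sqrt(2 * Dt)))
```

Varying `v` toward and away from `theta` traces out the sigmoidal transfer function described in the text: the crossing probability rises smoothly from near 0 far below threshold to near 1/2 at threshold.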
A Poisson process is characterized by its mean alone. We therefore solve the FPE (eq. 2) for the steady state by setting $\frac{\partial}{\partial t} g(t,v) = 0$ (we consider only threshold crossings at times $t \gg \tau$; neglecting the early events results in only a small error, since we have assumed $E[T] \gg \tau$). Thus with the absorbing boundary

at $\theta$ the distribution at time $t \gg \tau$ (given here for $\mu = 0$) is

$$g_\infty(v; \sigma_y) = k_1 \left(1 - k_2 \operatorname{erfi}\left[\frac{v}{\sigma_y\sqrt{\tau}}\right]\right) \exp\left[-\frac{v^2}{\sigma_y^2 \tau}\right], \qquad (8)$$

where $\sigma_y^2 = \sigma_n^2 + \sigma_s^2$, $\operatorname{erfi}(z) = -i\,\operatorname{erf}(iz)$, $k_1$ determines the normalization (the sign of $k_1$ determines whether the solution extends to positive or negative infinity) and $k_2 = 1/\operatorname{erfi}(\theta/(\sigma_y\sqrt{\tau}))$ is determined by the boundary. The instantaneous Poisson rate parameter (eq. 9) is then obtained through eq. (7), by substituting this steady-state distribution for $g_s$.

Fig. 1 tests the validity of the exponential approximation. The top graph shows the ISI distribution near the "balance point", when the excitation is in balance with the inhibition and the membrane potential hovers just subthreshold. The bottom curves show the ISI distribution far below the balance point. In both cases, the exponential distribution provides a good approximation for $t \gg \tau$.

4 Discussion

The main point of this paper is to make explicit the relation between the Poisson and integrate-and-fire models of neuronal activity. The key difference between them is that the former is stochastic while the latter is deterministic. That is, given exactly the same stimulus, the Poisson neuron produces different spike trains on different trials, while the integrate-and-fire neuron produces exactly the same spike train each time. It is therefore clear that if some degree of stochasticity is to be obtained in the integrate-and-fire model, it must arise from noise in the stimulus itself. The relation we have derived here is purely formal; we have intentionally remained agnostic about the deep issues of what is signal and what is noise in the inputs to a neuron. We observe nevertheless that although we derive a limit (eq. 9) where the spike train of an integrate-and-fire neuron is a Poisson process (i.e. the probability of obtaining a spike in any interval is independent of obtaining a spike in any other interval, except for very short intervals), from the point of view of information processing it is a very different process from the purely stochastic rate-modulated Poisson neuron. In fact, in this limit the spike train is deterministically Poisson if $\sigma_y = \sigma_s$, i.e. when $n(t) = 0$; in this case the output is a purely deterministic function of the input, but the ISI distribution is exponential.
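The exponential limit discussed above can be probed numerically: in the subthreshold regime, where the asymptotic mean voltage sits below threshold and $E[T] \gg \tau$, the simulated ISI distribution of a noisy integrate-and-fire neuron should be nearly exponential, so the coefficient of variation (CV = std/mean) of the ISIs should approach 1. The following sketch is not from the paper; the drive, noise level, and all other parameter values are illustrative choices.

```python
import numpy as np

# ISI statistics of a noisy integrate-and-fire neuron in the subthreshold
# regime; an exponential (Poisson) ISI distribution has CV = std/mean = 1.
# All parameter values are illustrative.
rng = np.random.default_rng(3)
tau, theta, v0, dt = 0.02, 1.0, 0.0, 5e-4
drive, sigma = 25.0, 4.0        # asymptotic mean voltage drive*tau = 0.5 < theta

n_steps = int(100.0 / dt)       # 100 s of simulated time
noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
v, t_last, isis = v0, 0.0, []
for i in range(n_steps):
    v += dt * (drive - v / tau) + noise[i]
    if v >= theta:              # threshold crossing: record ISI and reset
        isis.append(i * dt - t_last)
        t_last, v = i * dt, v0
isis = np.array(isis)
cv = isis.std() / isis.mean()   # near 1 in the slow-firing limit
```

Pushing the asymptotic voltage `drive*tau` closer to `theta` (faster firing) pulls the CV below 1, consistent with the paper's point that the Poisson approximation holds only in the slow-firing limit.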

References

Amit, D. and Tsodyks, M. (1991). Quantitative study of attractor neural network retrieving at low spike rates. I. Substrate-spikes, rates and neuronal gain. Network: Computation in Neural Systems, 2:259-273.

DeWeese, M. (1995). Optimization principles for the neural code. PhD thesis, Dept. of Physics, Princeton University.

DeWeese, M. (1996). Optimization principles for the neural code. In Hasselmo, M., editor, Advances in Neural Information Processing Systems, vol. 8. MIT Press, Cambridge, MA.

Shadlen, M. and Newsome, W. (1994). Noise, neural codes and cortical organization. Current Opinion in Neurobiology, 4:569-579.

Shadlen, M. and Newsome, W. (1995). Is there a signal in the noise? [comment]. Current Opinion in Neurobiology, 5:248-250.

Snyder, D. and Miller, M. (1991). Random Point Processes in Time and Space, 2nd edition. Springer-Verlag.

Softky, W. (1995). Simple codes versus efficient codes. Current Opinion in Neurobiology, 5:239-247.

Softky, W. and Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neuroscience, 13:334-350.

Tuckwell, H. (1988). Introduction to Theoretical Neurobiology (2 vols.). Cambridge.

Uhlenbeck, G. and Ornstein, L. (1930). On the theory of Brownian motion. Phys. Rev., 36:823-841.

Zador, A. M. and Pearlmutter, B. A. (1996). VC dimension of an integrate and fire neuron model. Neural Computation, 8(3). In press.

Figure 1: ISI distributions. (A; top) ISI distribution for the leaky integrate-and-fire model at the balance point, where the asymptotic membrane potential is just subthreshold, for two values of the signal variance $\sigma^2$. Increasing $\sigma^2$ shifts the distribution to the left. For the left curve, the parameters were chosen so that $E[T] \gg \tau$, giving a nearly exponential distribution; for the right curve, the distribution would be hard to distinguish experimentally from an exponential distribution with a refractory period. ($\tau = 50$ msec; left: $E[T] = 166$ msec; right: $E[T] = 57$ msec.) (B; bottom) In the subthreshold regime, the ISI distribution (solid) is nearly exponential (dashed) for intervals greater than the membrane time constant. ($\tau = 50$ msec; $E[T] = 500$ msec.)