Fast neural network simulations with population density methods


Duane Q. Nykamp a,1, Daniel Tranchina b,a,c,2

a Courant Institute of Mathematical Sciences
b Department of Biology
c Center for Neural Science
New York University, New York, NY 10012

1 Supported by a National Science Foundation Graduate Fellowship
2 Corresponding author. Tel.: (212) 998-319; fax: (212) 995-4121. E-mail: tranchin@cims.nyu.edu

Preprint submitted to Elsevier Preprint, 12 November 1999

Abstract

The complexity of neural networks of the brain makes studying these networks through computer simulation challenging. Conventional methods, in which one models thousands of individual neurons, can take enormous amounts of computer time even for models of small cortical areas. An alternative is the population density method, in which neurons are grouped into large populations and one tracks the distribution of neurons over state space for each population. We discuss the method in general and illustrate the technique for integrate-and-fire neurons.

Key words: Probability Density Function; Network Modeling; Computer Simulation; Populations

1 Introduction

One challenge in the modeling of neural networks in the brain is to reduce the enormous computational time required for realistic simulations. Conventional methods, where one tracks each neuron and synapse in the network, can require tremendous computer time even for small parts of the brain (e.g. [10]). In the simplifying approach of Wilson and Cowan [11], neurons are grouped into populations of similar neurons, and the state of each population is summarized by a single quantity, the mean firing rate [11] or mean synaptic drive [8].

Crude approximations of this sort cannot reproduce the fast temporal dynamics observed in transient activity [2,5] and break down when the network is synchronized [1]. In the population density approach, one tracks the distribution of neurons over state space for each population [4,7,9,3]. The state space is determined by the dynamic variables in the explicit model for the underlying individual neuron. Because individual neurons and synapses are not tracked, these simulations can be hundreds of times faster than direct simulations. We have previously presented a detailed analysis of this method specialized to fast synapses [5] and slow inhibitory synapses [6].

2 The general population density formulation

We divide the state variables into two categories: the voltage of the neuron's cell body, denoted by $V(t)$, and all other variables, denoted by $\vec{X}(t)$, which could represent the states of channels, synapses, and dendrites, and any other quantities needed to specify the state of a neuron. For each population $k$, we define $\rho_k(v, \vec{x}, t)$ to be the probability density function of a neuron:

$$\int_\Omega \rho_k(v, \vec{x}, t)\, dv\, d\vec{x} = \Pr(\text{the state of a neuron} \in \Omega).$$

For a population of many similar neurons, we can interpret $\rho$ as a population density:

$$\int_\Omega \rho_k(v, \vec{x}, t)\, dv\, d\vec{x} = \text{Fraction of neurons whose state} \in \Omega. \qquad (1)$$

From each $\rho_k$, we also calculate a population firing rate, $r_k(t)$, the probability per unit time of firing a spike for a randomly selected neuron. To implement the network connectivity, we let $\nu^k_{e/i}(t)$ be the rate of excitatory/inhibitory input to population $k$, and $W_{jk}$ be the average number of neurons from population $j$ that project to each neuron in population $k$. Then, the input rate for each population is the weighted sum of the firing rates of the presynaptic populations:

$$\nu^k_{e/i}(t) = \nu^k_{e/i,o}(t) + \sum_{j \in \Lambda_{E/I}} W_{jk} \int \alpha_{jk}(t')\, r_j(t - t')\, dt', \qquad (2)$$

where $\alpha_{jk}(t)$ is the distribution of synaptic latencies from population $j$ to population $k$, $\Lambda_{E/I}$ indexes the excitatory/inhibitory populations, and $\nu^k_{e/i,o}(t)$ is any imposed external excitatory/inhibitory input rate to population $k$.
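To make equation (2) concrete, the sketch below evaluates the input rate to one population as its external drive plus the latency-filtered, weight-scaled firing rates of its presynaptic populations. It is a minimal illustration, not code from the paper; the exponential latency kernel, the single presynaptic population, and all numerical values are assumptions chosen only for the example.

```python
import numpy as np

# Minimal sketch of equation (2): the input rate to population k is its
# external drive plus the weighted, latency-filtered firing rates of the
# presynaptic populations.  The exponential latency kernel and all numbers
# are illustrative assumptions, not values from the paper.

dt = 0.1                                   # time step (ms)
t = np.arange(0.0, 200.0, dt)

def latency_kernel(mean_latency_ms):
    """Normalized distribution of synaptic latencies, alpha_jk(t)."""
    alpha = np.exp(-t / mean_latency_ms)
    return alpha / (alpha.sum() * dt)

def input_rate(nu_ext, presyn_rates, weights, kernels):
    """nu_k(t) = nu_ext(t) + sum_j W_jk * integral alpha_jk(t') r_j(t - t') dt'."""
    nu = nu_ext.copy()
    for r_j, W_jk, alpha_jk in zip(presyn_rates, weights, kernels):
        # discrete (causal) convolution truncated to the simulation window
        nu += W_jk * np.convolve(r_j, alpha_jk)[: len(t)] * dt
    return nu

# Example: one excitatory presynaptic population with a transient rate bump.
r_pre = 0.02 * np.exp(-((t - 50.0) / 10.0) ** 2)      # spikes per ms per neuron
nu_e = input_rate(nu_ext=np.full_like(t, 0.5),        # external events per ms
                  presyn_rates=[r_pre],
                  weights=[100.0],                    # W_jk: inputs per neuron
                  kernels=[latency_kernel(3.0)])
print(f"peak excitatory input rate: {nu_e.max():.3f} events per ms")
```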

3 The population density for integrate-and-fire neurons

The voltage of a single-compartment integrate-and-fire neuron evolves via the equation

$$c\,\frac{dV}{dt} + g_r (V - E_r) + G_e(t)(V - E_e) + G_i(t)(V - E_i) = 0,$$

where $c$ is the membrane capacitance, $g_r$ is the fixed resting conductance, $G_{e/i}(t)$ is the time-varying excitatory/inhibitory synaptic conductance, and $E_{r/e/i}$ is the resting/excitatory/inhibitory reversal potential. A spike time $T^{sp}$ is defined by $V(T^{sp}) = v_{th}$, where $v_{th}$ is a fixed threshold voltage. After each spike, the voltage $V$ is reset to the fixed voltage $v_{reset}$. The fixed voltages are defined so that $E_i < E_r, v_{reset} < v_{th} < E_e$. The synaptic conductances $G_{e/i}(t)$ are functions of the random arrival times of synaptic inputs, $T^j_{e/i}$. We denote the state of the synaptic conductances by $\vec{X}$, whose dimension depends on the number of conductance gating variables.

4 Population Density Evolution Equation

The evolution equation for each population density is a conservation equation:

$$\frac{\partial \rho}{\partial t}(v, \vec{x}, t) = -\nabla \cdot \vec{J}\left(v, \vec{x}, \nu_{e/i}(t)\right) + \delta(v - v_{reset})\, J_V\left(v_{th}, \vec{x}, \nu_{e/i}(t)\right), \qquad (3)$$

where $\vec{J} = (J_V, \vec{J}_X)$ is the flux of probability. Note that the total flux across voltage, $\int J_V\left(v, \vec{x}, \nu_{e/i}(t)\right) d\vec{x}$, is the probability per unit time of crossing $v$ from below minus the probability per unit time of crossing $v$ from above. Thus, the population firing rate is the total flux across threshold:

$$r(t) = \int J_V\left(v_{th}, \vec{x}, \nu_{e/i}(t)\right) d\vec{x}.$$

The second term on the right-hand side of (3) is due to the reset of voltage to $v_{reset}$ after a neuron fires a spike by crossing $v_{th}$. The speed at which one can solve (3) diminishes rapidly as the dimension of the population density, $1 + \dim \vec{x}$, increases. When $\dim \vec{x} > 0$, dimension reduction techniques are necessary for computational efficiency.

4.1 Fast Synapses

When all synaptic events are instantaneous, $\dim \vec{x} = 0$. In this case, $G_{e/i}(t) = \sum_j A^j_{e/i}\, \delta(t - T^j_{e/i})$, where the synaptic input sizes $A^j_{e/i}$ are random numbers with some given distribution, and the synaptic input times $T^j_{e/i}$ are given by a modulated Poisson process with mean rate $\nu_{e/i}(t)$. In this case, no state variables are needed to describe the synapses, and $\rho = \rho(v, t)$ is one-dimensional.
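For intuition about what $\rho(v, t)$ and $r(t)$ represent in this fast-synapse case, the following sketch runs a direct Monte Carlo simulation of many independent integrate-and-fire neurons driven by Poisson trains of instantaneous conductance events; integrating the membrane equation across a delta-conductance event of size $A$ moves $V$ toward the corresponding reversal potential by a factor $1 - e^{-A/c}$. A histogram of the voltages approximates $\rho(v, t)$, and the fraction of neurons crossing threshold per unit time approximates $r(t)$. This is an illustrative sketch only: the parameter values are assumptions, and the scheme is a sampling-based reference, not the fast population density solver of [5].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's values).
# Units: time in ms, voltage in mV, c in uF/cm^2, conductance in mS/cm^2.
c, g_r = 1.0, 0.05               # capacitance, resting conductance (tau_m = 20 ms)
E_r, E_e = -65.0, 0.0            # resting and excitatory reversal potentials
v_th, v_reset = -55.0, -65.0     # spike threshold and reset voltage
A_e = 0.01                       # size (conductance x time) of one synaptic event
nu_e = 1.0                       # excitatory input rate (events per ms)

n_neurons, dt, n_steps = 50_000, 0.1, 1000
V = np.full(n_neurons, E_r)
rate = np.zeros(n_steps)         # population firing rate estimate (spikes per ms)

for step in range(n_steps):
    # Leak between synaptic events: exponential relaxation toward E_r.
    V = E_r + (V - E_r) * np.exp(-g_r * dt / c)
    # Poisson count of excitatory events per neuron this step; each event
    # jumps V toward E_e by the factor 1 - exp(-A_e / c).
    n_ev = rng.poisson(nu_e * dt, size=n_neurons)
    V = E_e + (V - E_e) * np.exp(-n_ev * A_e / c)
    # Neurons crossing threshold fire and are reset; their fraction per unit
    # time estimates the population firing rate r(t).
    fired = V >= v_th
    rate[step] = fired.mean() / dt
    V[fired] = v_reset

# Voltage histogram at the final time approximates the density rho(v, t).
rho, edges = np.histogram(V, bins=60, range=(E_r, v_th), density=True)
print(f"steady-state firing rate: {1000.0 * rate[-200:].mean():.1f} spikes/s")
```

Increasing n_neurons sharpens the histogram estimate of $\rho(v, t)$; the population density method obtains the same quantities by evolving $\rho$ directly, without sampling noise.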

Fig. 1. Results of a simulation of an uncoupled population with fast synapses. A: Population firing rate in response to sinusoidally modulated input rates. B: Snapshots of the distribution of neurons across voltage for the simulation shown in A.

The resulting specific form of (3) (described in detail in [5]) can be solved much more quickly than direct simulations. Figure 1 shows the results of a single uncoupled population with fast synapses in response to sinusoidally modulated input rates. The snapshots in panel B show that many neurons are near $v_{th} = -55$ mV when the firing rate is high at $t = 30$ ms.

4.2 Slow Inhibitory Synapses

The assumption of instantaneous inhibitory synapses is often not justified by physiological measurement. Furthermore, the fact that inhibitory synapses are slower than excitatory synapses can have a dramatic effect on the network dynamics [6]. Thus, for the inhibitory conductance $G_i(t)$, we may need to use a set of equations like

$$\tau_i \frac{dG_i}{dt} = -G_i(t) + S(t), \qquad (4)$$

$$\tau_s \frac{dS}{dt} = -S(t) + \sum_j A^j_i\, \delta(t - T^j_i), \qquad (5)$$

where $\tau_{s/i}$ is the time constant for the rise/decay of the inhibitory conductance ($\tau_s \le \tau_i$). The response of $G_i(t)$ to a single inhibitory synaptic input at $T = 0$ is

$$G_i(t) = \frac{A_i}{\tau_i - \tau_s}\left(e^{-t/\tau_i} - e^{-t/\tau_s}\right),$$

unless $\tau_i = \tau_s$, in which case $G_i(t) = (A_i t/\tau_i^2)\, e^{-t/\tau_i}$. In this model, two variables describe the inhibitory conductance state, $\vec{X}(t) = (G_i(t), S(t))$, and each population density is three-dimensional: $\rho = \rho(v, g, s, t)$. Thus, direct solution of equation (3) for $\rho(t)$ would take much longer than it would in the fast-synapse case.
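As a quick check on equations (4)-(5), the sketch below integrates them numerically for a single inhibitory event arriving at $t = 0$ and compares the result against the closed-form double-exponential response quoted above. The time constants and event size are illustrative assumptions, not the values used in the network simulations.

```python
import numpy as np

# Illustrative values (assumptions): rise and decay time constants in ms.
tau_s, tau_i, A_i = 2.0, 8.0, 1.0
dt, n_steps = 0.01, 5000
t = np.arange(n_steps) * dt

# Forward-Euler integration of eqs. (4)-(5) for one event at t = 0:
# the delta input gives S an initial jump of A_i / tau_s.
G, S = 0.0, A_i / tau_s
G_num = np.empty(n_steps)
for k in range(n_steps):
    G_num[k] = G
    G += dt * (-G + S) / tau_i        # eq. (4)
    S += dt * (-S) / tau_s            # eq. (5), no further inputs after t = 0

# Closed-form response quoted in the text (tau_i != tau_s case).
G_exact = A_i / (tau_i - tau_s) * (np.exp(-t / tau_i) - np.exp(-t / tau_s))

print("max absolute error:", np.abs(G_num - G_exact).max())
```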

Fig. 2. Comparison of population density and direct simulation firing rates for a model of a hypercolumn in visual cortex. Response is to a bar flashed at 0 deg. A: Response of the population with a preferred orientation of 0 deg. B: Response of the population with a preferred orientation of 90 deg. Note the change of scale. $\tau_s = 2$ ms, $\tau_i = 8$ ms. All other parameters are as in [5].

In the case of fast excitatory synapses, $r(t)$ can be calculated from the marginal distribution in $v$, $f_V(v, t) = \int\!\!\int \rho(v, g, s, t)\, dg\, ds$. Thus, we can reduce the dimension back to one by computing just $f_V(v, t)$. The evolution equation for $f_V$, obtained by integrating (3) with respect to $\vec{x} = (g, s)$, depends on the unknown quantity $\mu_{G|V}(v, t)$, which is the expected value of $G_i$ given $V = v$ [6]. Nonetheless, this equation can be solved by assuming that the expected value of $G_i$ is independent of $V$, i.e., $\mu_{G|V}(v, t) = \mu_G(t)$, where $\mu_G(t)$ is the expected value of $G_i$ averaged over all neurons in the population. Since the equations for the inhibitory synapses (4)-(5) do not depend on voltage, the equation for $\mu_G(t)$ can be derived directly (a sketch of this closure follows the results below). Although the independence assumption is not strictly justified, in practice it gives good results.

We illustrate the performance of the population density method by comparing the population density firing rates with those of a direct simulation of the same network. We use a network model for orientation tuning in one hypercolumn of primary visual cortex of the cat, based on the model by Somers et al. [10]. The network consists of 18 excitatory and 18 inhibitory populations with preferred orientations between 0 and 180 deg [5]. The responses of two populations to a flashed bar oriented at 0 deg are shown in figure 2. The population density simulation was over 10 times faster than the direct simulation (22 seconds versus 5 minutes on a Silicon Graphics Octane computer) without sacrificing accuracy in the firing rate.
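Because equations (4)-(5) are linear and do not involve the voltage, taking expectations over the Poisson input stream yields closed ODEs for the population-averaged conductance, $\tau_s\, d\mu_S/dt = -\mu_S + \langle A_i \rangle\, \nu_i(t)$ and $\tau_i\, d\mu_G/dt = -\mu_G + \mu_S$. The sketch below integrates this pair for a time-varying inhibitory input rate; it is one plausible reading of how "the equation for $\mu_G(t)$ can be derived directly," not necessarily the exact formulation of [6], and all numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch of the closure used for slow inhibition: evolve the mean inhibitory
# conductance mu_G(t) from the input rate nu_i(t), then feed mu_G(t) into the
# reduced one-dimensional equation for f_V(v, t) in place of mu_{G|V}(v, t).
# All parameter values are illustrative assumptions.

tau_s, tau_i = 2.0, 8.0            # ms
mean_A_i = 0.05                    # mean event size, conductance x time
dt, n_steps = 0.1, 2000            # ms
t = np.arange(n_steps) * dt
nu_i = 0.5 * (1.0 + np.sin(2.0 * np.pi * t / 50.0))   # inhibitory rate, events/ms

mu_S, mu_G = 0.0, 0.0
mu_G_trace = np.empty(n_steps)
for k in range(n_steps):
    mu_G_trace[k] = mu_G
    mu_S += dt * (-mu_S + mean_A_i * nu_i[k]) / tau_s   # mean of eq. (5)
    mu_G += dt * (-mu_G + mu_S) / tau_i                 # mean of eq. (4)

print(f"mean inhibitory conductance at t = {t[-1]:.0f} ms: {mu_G_trace[-1]:.4f}")
```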

5 Conclusion

Population density methods can greatly reduce the computation time required for network simulations without sacrificing accuracy. These methods can, in principle, be used with more elaborate models of the underlying single neuron, but dimension reduction techniques, like those considered here, must be found to make the method practical.

References

[1] L. F. Abbott and C. van Vreeswijk. Asynchronous states in networks of pulse-coupled oscillators. Physical Review E, 48 (1993) 1483-1490.

[2] W. Gerstner. Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Computation, (1999), to appear.

[3] B. W. Knight. Dynamics of encoding in neuron populations: Some general mathematical features. Neural Computation, (2000), to appear.

[4] B. W. Knight, D. Manin, and L. Sirovich. Dynamical models of interacting neuron populations. In E. C. Gerf, ed., Symposium on Robotics and Cybernetics: Computational Engineering in Systems Applications (Cité Scientifique, Lille, France, 1996).

[5] D. Q. Nykamp and D. Tranchina. A population density approach that facilitates large-scale modeling of neural networks: Analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8 (2000) 19-50.

[6] D. Q. Nykamp and D. Tranchina. A population density approach that facilitates large-scale modeling of neural networks: Extension to slow inhibitory synapses. (2000). Submitted.

[7] A. Omurtag, B. W. Knight, and L. Sirovich. On the simulation of large populations of neurons. Journal of Computational Neuroscience, 8 (2000) 51-63.

[8] D. J. Pinto, J. C. Brumberg, D. J. Simons, and G. B. Ermentrout. A quantitative population model of whisker barrels: Re-examining the Wilson-Cowan equations. Journal of Computational Neuroscience, 3 (1996) 247-264.

[9] L. Sirovich, B. Knight, and A. Omurtag. Dynamics of neuronal populations: The equilibrium solution. SIAM, (1999), to appear.

[10] D. C. Somers, S. B. Nelson, and M. Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. Journal of Neuroscience, 15 (1995) 5448-5465.

[11] H. R. Wilson and J. D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12 (1972) 1-24.