Fitting a Stochastic Neural Network Model to Real Data


Fitting a Stochastic Neural Network Model to Real Data Christophe Pouzat, Ludmila Brochini, Pierre Hodara and Guilherme Ost MAP5 Univ. Paris-Descartes and CNRS Neuromat, USP christophe.pouzat@parisdescartes.fr Random Structures on the Brain, December 8 2017

Outline
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

Where are we?
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

Experimental model and setting The locust, Schistocerca americana, preparation.

Insects do have a brain we can record from! Left: Opened head and brain; center: antennal lobe and recording probe; right: 1 sec recorded from a tetrode.

Locust and cockroach olfactory systems Left: Locust olfactory system (Laurent, 1996); center: Cockroach olfactory system (Boeckh & Ernst, 1987); right: Locust local neurons (MacLeod & Laurent, 1996).

Spike trains After a "rather heavy" pre-processing stage called spike sorting, the raster plot representing the spike trains can be built:

Inter Spike Intervals (ISI) Estimated ISI densities of 7 neurons recorded simultaneously from the locust antennal lobe.

Responses to stimulation Raster plots from one of the neurons showing the responses to 25 presentations (gray background) of cis-3-hexen-1-ol.

The observed counting processes associated with the 25 observed point processes just shown. In red the empirical mean of the 25.

Modeling spike trains: Why and How? A key working hypothesis in neuroscience states that the spikes' occurrence times, as opposed to their waveforms, are the only information carriers between brain regions (Adrian and Zotterman, 1926). This hypothesis encourages the development of models whose goal is to predict the probability of occurrence of a spike at a given time, without necessarily considering the biophysical spike generation mechanisms.

Where are we?
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

Getting the spike trains is not "neutral" The raw data used in this presentation are available (under a Creative Commons Zero license) on zenodo: https://zenodo.org/record/21589. Going from the raw data to the spike trains (doing the spike sorting) is not neutral and is error-prone. You can criticize, check, modify and improve what I did; it's all available on github: https://christophe-pouzat.github.io/zenodo-locust-datasets-analysis/. The analysis presented in this talk is available on gitlab: https://gitlab.com/c_pouzat/empirical-glm.

On the relation of extracellular spike and neuronal output Left, Rinberg and Davidowitz (2002); top, Spira et al (1969); bottom, Hamon et al (1990).

Potential consequences Cooley and Dodge (1966). First numerical solution of the H & H equations.

Eyzaguirre and Kuffler (1955), Lobster stretch receptor.

Stuart and Sakmann (2009).

Williams and Stuart (1999).

CA3 pyramidal cell simulation. After Traub et al (1994). Current injected in the soma. Voltage response in the soma and dendrite (blue), axon initial segment (red), axon (black).

Extracellular recordings from regions with neurons of different "sizes", like the neocortex, result in a biased sampling of the neuronal populations. Since what goes on in the soma is not necessarily what goes on in the axon, and since the extracellular potential is dominated by what goes on in the soma, we might get wrong estimates of neuronal output from extracellular recordings.

Where are we?
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

Membrane noise can lead to output fluctuations

Synaptic noise is ubiquitous (Pouzat & Marty, 1998)

Action potential propagation failures can occur

Where are we?
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

A random walk with a drift (Biophysical Journal, 1964)

A Hawkes process model

A generalized linear model (GLM)

Where are we?
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

A few definitions Counting process: for points $\{t_j\}$ randomly scattered along a line, the counting process $N(t)$ gives the number of points observed in the interval $(0, t]$: $$N(t) = \#\{t_j : 0 < t_j \le t\},$$ where $\#$ stands for the number of elements of a set.
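The definition above translates directly into code. A minimal sketch (the function name and the example spike times are made up for illustration):

```python
import numpy as np

# A direct transcription of the definition: N(t) = #{t_j : 0 < t_j <= t}.
def counting_process(spike_times, t):
    """Number of spikes falling in the interval (0, t]."""
    spikes = np.sort(np.asarray(spike_times, dtype=float))
    spikes = spikes[spikes > 0.0]                         # keep points in (0, inf)
    return int(np.searchsorted(spikes, t, side="right"))  # count of t_j <= t

print(counting_process([0.5, 1.2, 3.0], 2.0))  # -> 2
```

The `side="right"` argument makes the count include a spike falling exactly at $t$, matching the half-open interval $(0, t]$.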

History: the history, $H_t$, consists of the variates determined up to and including time $t$ that are necessary to describe the evolution of the counting process. $H_t$ can include all or part of the neuron's discharge up to $t$, but also the discharge sequences of other neurons recorded simultaneously, the elapsed time since the onset of a stimulus, the nature of the stimulus, etc. One of the major problems facing the neuroscientist analysing spike trains is the determination of what constitutes $H_t$ for the data at hand. A prerequisite for practical applications of the approach described here is that $H_t$ involves only a finite (but possibly random) time period prior to $t$.

Conditional intensity: for the process $N$ and history $H_t$, the conditional intensity at time $t$ is defined by: $$\lambda(t \mid H_t) = \lim_{\delta \downarrow 0} \frac{\mathrm{Prob}\{N(t, t+\delta) - N(t) = 1 \mid H_t\}}{\delta}.$$
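The limit in this definition can be illustrated numerically. A small Monte Carlo sketch for the simplest case, a homogeneous Poisson process (the rate, bin width and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# For a homogeneous Poisson process with rate lam, the count in a bin of
# width delta is Poisson(lam * delta), so Prob{count == 1} / delta -> lam
# as delta -> 0: a Monte Carlo illustration of the definition above.
lam, delta, n = 5.0, 1e-3, 200_000
counts = rng.poisson(lam * delta, size=n)
est = np.mean(counts == 1) / delta
print(est)  # close to lam = 5
```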

Probability of an ISI based on the intensity We will find the probability density of the interval between two successive events, $I_j = t_{j+1} - t_j$. Defining $\delta = \frac{t_{j+1} - t_j}{K}$, where $K \in \mathbb{N}$, we write the probability of the interval as the following product: $$\begin{aligned} \Pr\{I_j\} = {} & \Pr\{N(t_j+\delta) - N(t_j) = 0 \mid H_{t_j}\} \\ & \times \Pr\{N(t_j+2\delta) - N(t_j+\delta) = 0 \mid H_{t_j+\delta}\} \\ & \times \cdots \\ & \times \Pr\{N(t_j+K\delta) - N(t_j+(K-1)\delta) = 0 \mid H_{t_j+(K-1)\delta}\} \\ & \times \Pr\{N(t_j+(K+1)\delta) - N(t_j+K\delta) = 1 \mid H_{t_j+K\delta}\}. \end{aligned}$$

If we interpret our definition of the conditional intensity as meaning: $$\begin{aligned} \mathrm{Prob}\{N(t,t+\delta) - N(t) = 0 \mid H_t\} &= 1 - \lambda(t \mid H_t)\,\delta + o(\delta),\\ \mathrm{Prob}\{N(t,t+\delta) - N(t) = 1 \mid H_t\} &= \lambda(t \mid H_t)\,\delta + o(\delta),\\ \mathrm{Prob}\{N(t,t+\delta) - N(t) > 1 \mid H_t\} &= o(\delta), \end{aligned}$$ where $o(\delta)$ is such that $\lim_{\delta \to 0} \frac{o(\delta)}{\delta} = 0$.

The interval's probability becomes the outcome of a sequence of Bernoulli trials, each with an inhomogeneous success probability given by $\lambda_i\,\delta + o(\delta)$, where $\lambda_i = \lambda(t_j + i\,\delta \mid H_{t_j + i\,\delta})$, and we get: $$\Pr\{I_j = t_{j+1} - t_j\} = \left( \prod_{k=1}^{K} \big(1 - \lambda_k\,\delta + o(\delta)\big) \right) \big(\lambda_{K+1}\,\delta + o(\delta)\big).$$
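This Bernoulli-trial picture can be turned into a simple simulator. A minimal sketch (function name, rate and step size are illustrative choices): discretize time into bins of width $\delta$ and spike in each bin with probability $\lambda(t)\,\delta$; for a constant rate, the resulting ISIs are approximately exponential.

```python
import numpy as np

rng = np.random.default_rng(1)

# The Bernoulli-trial picture made concrete: discretize time into bins of
# width delta and spike in each bin with probability lam(t) * delta.
def simulate_bernoulli_spikes(lam_fn, t_max, delta):
    times = np.arange(0.0, t_max, delta)
    p = np.clip(np.array([lam_fn(t) for t in times]) * delta, 0.0, 1.0)
    return times[rng.random(times.size) < p]

# With a constant 10 Hz intensity the ISIs should be close to Exp(10).
spikes = simulate_bernoulli_spikes(lambda t: 10.0, 50.0, 1e-4)
isi = np.diff(spikes)
print(isi.mean())  # close to 1/10 s
```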

We can rewrite the first term on the right-hand side as: $$\begin{aligned} \prod_{k=1}^{K} \big(1 - \lambda_k\,\delta + o(\delta)\big) &= \exp\left(\log \prod_{k=1}^{K} \big(1 - \lambda_k\,\delta + o(\delta)\big)\right)\\ &= \exp\left(\sum_{k=1}^{K} \log\big(1 - \lambda_k\,\delta + o(\delta)\big)\right)\\ &= \exp\left(\sum_{k=1}^{K} \big(-\lambda_k\,\delta + o(\delta)\big)\right)\\ &= \exp\left(-\sum_{k=1}^{K} \lambda_k\,\delta\right) \exp\big(K\,o(\delta)\big). \end{aligned}$$

Using the continuity of the exponential function, the definition of the Riemann integral, the definition of $\delta$ and the property of the $o()$ function, we can take the limit as $K \to \infty$ on both sides of our last equation to get: $$\lim_{K \to \infty} \prod_{k=1}^{K} \big(1 - \lambda_k\,\delta + o(\delta)\big) = \exp\left(-\int_{t_j}^{t_{j+1}} \lambda(t \mid H_t)\,dt\right).$$ And the probability density of the interval becomes: $$\lim_{K \to \infty} \frac{\Pr\{I_j = t_{j+1} - t_j\}}{\frac{t_{j+1} - t_j}{K}} = \lambda(t_{j+1} \mid H_{t_{j+1}}) \exp\left(-\int_{t_j}^{t_{j+1}} \lambda(t \mid H_t)\,dt\right).$$
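A quick numerical sanity check of this density: for any intensity, $\lambda(t)\exp(-\int_{t_j}^{t} \lambda)$ should integrate to 1 over $t > t_j$. A sketch with an arbitrary illustrative intensity:

```python
import numpy as np

# Numerical check that lam(t) * exp(-int_{t_j}^{t} lam(u) du) is a
# probability density (integrates to 1 over t > t_j), here with the
# arbitrary intensity lam(t) = 2 + sin(t) and t_j = 0.
t = np.linspace(0.0, 30.0, 300_001)
dt = t[1] - t[0]
lam = 2.0 + np.sin(t)
Lam = np.concatenate(([0.0], np.cumsum((lam[1:] + lam[:-1]) / 2.0 * dt)))
density = lam * np.exp(-Lam)
total = np.sum((density[1:] + density[:-1]) / 2.0) * dt  # trapezoid rule
print(total)  # ~ 1 (the tail beyond 30 s is negligible since lam >= 1)
```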

If we now define the integrated conditional intensity by: $$\Lambda(t) = \int_{u=0}^{t} \lambda(u \mid H_u)\,du,$$ we see that $\Lambda$ is increasing since, by definition, $\lambda \ge 0$. The mapping: $$t \in \mathcal{T} \subset \mathbb{R}^+ \mapsto \Lambda(t) \in \mathbb{R}^+ \tag{1}$$ is then one-to-one and we can transform our $\{t_1, \ldots, t_n\}$ into $\{\Lambda_1 = \Lambda(t_1), \ldots, \Lambda_n = \Lambda(t_n)\}$.

If we now consider the probability densities of the intervals $t_{j+1} - t_j$ and $\Lambda_{j+1} - \Lambda_j$, we get: $$\begin{aligned} p(t_{j+1} - t_j)\,dt_{j+1} &= \lambda(t_{j+1} \mid H_{t_{j+1}}) \exp\left(-\int_{t_j}^{t_{j+1}} \lambda(t \mid H_t)\,dt\right) dt_{j+1}\\ &= \frac{d\Lambda(t_{j+1})}{dt_{j+1}} \exp\big(-(\Lambda(t_{j+1}) - \Lambda(t_j))\big)\,dt_{j+1}\\ &= d\Lambda_{j+1} \exp\big(-(\Lambda_{j+1} - \Lambda_j)\big). \end{aligned}$$ That is, the mapped intervals $\Lambda_{j+1} - \Lambda_j$ follow an exponential distribution with rate 1. This is the substance of the time transformation of Ogata (1988) and of the time-rescaling theorem of Brown et al (2002). It has been repeatedly derived in the neuroscience literature: Hagiwara (1954), Brillinger (1988), Chornoboy et al (1988), Johnson (1996), ...
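This rescaling property is easy to check by simulation. A minimal sketch: simulate an inhomogeneous Poisson process by thinning (Lewis-Shedler style), map the spike times through $\Lambda$, and check that the mapped intervals have unit mean. The intensity $\lambda(t) = 5 + 4\sin t$ is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an inhomogeneous Poisson process with lam(t) = 5 + 4*sin(t)
# by thinning: candidates at rate lam_max, each accepted with
# probability lam(t) / lam_max.
lam_max = 9.0
t, spikes = 0.0, []
while t < 200.0:
    t += rng.exponential(1.0 / lam_max)
    if rng.random() < (5.0 + 4.0 * np.sin(t)) / lam_max:
        spikes.append(t)
spikes = np.array(spikes)

# Lambda(t) = int_0^t (5 + 4*sin(u)) du = 5t - 4*cos(t) + 4
Lam = 5.0 * spikes - 4.0 * np.cos(spikes) + 4.0
mapped = np.diff(Lam)
print(mapped.mean())  # should be close to 1 (unit-rate exponential intervals)
```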

Consequences
1. The conditional intensity, with its dependence on the past through $H_t$, describes a process "making a decision" to spike or not to spike at each time step, a decision that can include time-dependent effects of synaptic coupling and stimulation.
2. The conditional intensity allows us to compute the likelihood of an observed spike train.
3. The time transformation $\{t_1, \ldots, t_n\} \to \{\Lambda_1 = \Lambda(t_1), \ldots, \Lambda_n = \Lambda(t_n)\}$ leads to goodness-of-fit tests, since the mapped intervals should be the realization of a Poisson process with rate 1.
4. This time transformation can also be used for simulations.
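Point 4 can be sketched as follows: draw unit-rate exponential increments and map their partial sums back through $\Lambda^{-1}$, here inverted numerically on a fine grid. The intensity $\lambda(t) = 3 + 2\cos^2 t$ is an arbitrary illustration, not a fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulation through the inverse time transformation: partial sums of
# Exp(1) increments are mapped back through Lambda^{-1}.
grid = np.linspace(0.0, 100.0, 1_000_001)
dt = grid[1] - grid[0]
lam = 3.0 + 2.0 * np.cos(grid) ** 2
Lam = np.concatenate(([0.0], np.cumsum((lam[1:] + lam[:-1]) / 2.0 * dt)))

targets = np.cumsum(rng.exponential(1.0, size=300))  # Exp(1) partial sums
targets = targets[targets < Lam[-1]]                 # stay inside the grid
spikes = np.interp(targets, Lam, grid)               # invert Lambda numerically
mean_rate = len(spikes) / spikes[-1]
print(mean_rate)  # close to the time-averaged intensity, 4 Hz
```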

Where are we?
- Introduction
  - Few words of caution
  - In defense of a modeling approach with a strong stochastic element
- Some stochastic models of neuronal discharges
- Working with the intensity of the process
- Model and inference

Model considered We are going to write our $\lambda(t \mid H_t)$ as a transformation of a "more basic" quantity, the membrane potential process (MPP), $u(t \mid H_t)$: $$\lambda(t \mid H_t) = \lambda_{\max}\big(1 + \exp(-u(t \mid H_t))\big)^{-1},$$ where $\lambda_{\max} > 0$ is a parameter allowing us to have the proper rate (in Hz). We are going to write $u(t \mid H_t)$ as: $$u(t \mid H_t) = s(t - t_l) + \sum_{j \in P}\ \sum_{x \in T_j,\, x > t_l} g_j(t - x), \quad \text{for } t > t_l,$$ where $t_l$ stands for the time of the last spike of the neuron of interest, $P$ is the index set of the neurons of the network that are presynaptic to the neuron of interest, $T_j$ stands for the set of spike times of neuron $j$, $g_j(t - x)$ is the effect of a spike of neuron $j$ at time $x$, and $s(t - t_l)$ stands for the "self" or, more appropriately, "unobserved" effect.
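A minimal sketch of this model, assuming the logistic form $\lambda_{\max}(1 + e^{-u})^{-1}$ for the intensity. The value of $\lambda_{\max}$ and the shapes of the "self" and synaptic kernels below are made-up illustrations, not the ones estimated from the data:

```python
import numpy as np

lam_max = 100.0  # Hz, hypothetical value

def self_effect(dt):
    """Hypothetical s(t - t_l): strongly negative just after a spike
    (refractoriness), relaxing towards 0."""
    return -5.0 * np.exp(-dt / 0.05)

def syn_effect(dt):
    """Hypothetical g_j(t - x): transient excitatory bump of bounded support."""
    return np.where((dt > 0.0) & (dt < 0.1), 1.5, 0.0)

def intensity(t, t_last, presyn_times):
    """lam(t | H_t) = lam_max / (1 + exp(-u(t | H_t)))."""
    u = self_effect(t - t_last) + sum(syn_effect(t - x) for x in presyn_times)
    return lam_max / (1.0 + np.exp(-u))

print(intensity(0.01, 0.0, []))     # just after a spike: intensity is low
print(intensity(0.5, 0.0, [0.45]))  # after a presynaptic spike: intensity is high
```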

On the "self" or "unobserved" effect In an actual setting, only a tiny fraction of the neurons of a network are observed, but we know from the biophysics of these neurons and from the anatomy and function of the first olfactory relay that three "factors" contribute to making a neuron spike:
- The so-called "intrinsic properties" of the neuron, that is, the set of voltage-dependent conductances present in the neuron's membrane, as well as their localization (not to mention the actual geometry of the neuron...).
- The continuous, asynchronous and "random" input the neuron gets from the olfactory receptors in the "spontaneous" regime. We know that this factor is a key contributor to the spontaneous activity in the first olfactory relay, since this activity essentially disappears if we cut the antennal nerve (that is, the bundle of olfactory receptor axons entering the first olfactory relay).
- The synaptic inputs from the other neurons of the network.

"Pragmatic" choices Since the likelihood computation implies the evaluation of many terms like: $$\lim_{K \to \infty} \frac{\Pr\{I_j = t_{j+1} - t_j\}}{\frac{t_{j+1} - t_j}{K}} = \lambda(t_{j+1} \mid H_{t_{j+1}}) \exp\left(-\int_{t_j}^{t_{j+1}} \lambda(t \mid H_t)\,dt\right),$$ we will model our "self" and synaptic coupling effects as piecewise constant functions. Our MPP and $\lambda(t \mid H_t)$ will therefore be piecewise constant functions, and the integrated intensity $\Lambda(t)$ will be piecewise linear. The "self" effect will be $-\infty$ on the left of a refractory period and constant on the right of a cutoff time. The synaptic effects will have a bounded support.
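With a piecewise constant intensity, the integral term in the spike-train log-likelihood, $\sum_i \log \lambda(t_i) - \int_0^T \lambda(t)\,dt$, becomes an exact sum of rate-times-bin-width terms ($\Lambda$ is piecewise linear). A minimal sketch; the bin edges, rates and spike times below are made up for illustration:

```python
import numpy as np

# Log-likelihood sum_i log lam(t_i) - int_0^T lam(t) dt with a
# piecewise constant intensity: the integral is exact, no quadrature needed.
bins = np.array([0.0, 1.0, 2.0, 3.0])   # bin edges (s)
rates = np.array([2.0, 5.0, 1.0])       # intensity on each bin (Hz)
spikes = np.array([0.4, 1.1, 1.6, 2.5])

Lam_total = np.sum(rates * np.diff(bins))  # int_0^3 lam(t) dt, exactly
lam_at_spikes = rates[np.searchsorted(bins, spikes, side="right") - 1]
loglik = np.sum(np.log(lam_at_spikes)) - Lam_total
print(loglik)  # log(2) + 2*log(5) - 8
```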

Justification of the constant rate after a cutoff

Example of "self" effect

Example of synaptic coupling

Example of MPP realization The presynaptic neuron (black ticks) spikes independently as a renewal process.

Example of fitted MPP