Infinite systems of interacting chains with memory of variable length - a stochastic model for biological neural nets


Infinite systems of interacting chains with memory of variable length - a stochastic model for biological neural nets
Antonio Galves, Universidade de São Paulo, Fapesp Center for Neuromathematics
Eurandom, September 2013

This is a joint work with Eva Löcherbach, as part of the Fapesp Project NeuroMat. Journal of Statistical Physics, June 2013. This article is dedicated to Errico Presutti.

Spike trains
Neurons talk to each other by firing sequences of action potentials. One emission of such an action potential is called a spike. The duration of a spike is very short (about 1 ms), so we only report the presence or absence of a spike in a given time interval (of about 3 ms). The point process formed by the times at which a neuron spikes is called a spike train.
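The binning convention just described can be sketched in code (a minimal illustration; the 3 ms window and the spike times are assumptions made for the example):

```python
def bin_spike_train(spike_times_ms, t_end_ms, window_ms=3.0):
    """Convert a list of spike times (in ms) into a binary spike train.

    Each entry of the result is 1 if at least one spike falls into the
    corresponding time window of width window_ms, and 0 otherwise.
    """
    n_bins = int(t_end_ms // window_ms)
    train = [0] * n_bins
    for t in spike_times_ms:
        k = int(t // window_ms)
        if 0 <= k < n_bins:
            train[k] = 1
    return train

# Three spikes within 15 ms, observed through 3 ms windows.
print(bin_spike_train([1.2, 7.9, 8.4], t_end_ms=15.0))  # -> [1, 0, 1, 0, 0]
```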

Spike trains
Figure: spike trains of several neurons (picture by W. Maass).

Important - and open - questions
How is information/external stimulus encoded in such patterns? How to model brain plasticity? How to explain the appearance of synchronized spiking patterns (evoked potentials)?
Goal: to find a model in which this type of question can be addressed.

The model
Huge system of N ≈ 10^11 interacting neurons. Spike train: for each neuron i we indicate whether there is a spike or not at time t, t ∈ Z:
X_t(i) ∈ {0, 1}, with X_t(i) = 1 iff neuron i has a spike at time t.
Here t indexes the time window in which we observe the neuron. In the data we considered, the width of this window is typically 3 ms.

Background
Integrate-and-fire models: the membrane potential process of one neuron accumulates the stimuli coming from the other neurons. The neuron spikes depending on the height of the accumulated potential; it is then reset to a resting potential and restarts accumulating the potentials coming from the other neurons.
Hence, variable length memory: the memory of the neuron goes back to its last spike, at least at first glance.
This is the framework considered by Cessac (2011), but only for a finite number of neurons.
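The integrate-and-fire mechanism sketched above can be caricatured in a few lines of discrete-time code (the threshold, leak factor and input values are illustrative assumptions, not part of the model of the talk):

```python
def lif_step(v, total_input, threshold=1.0, leak=0.9, v_rest=0.0):
    """One discrete time step of a leaky integrate-and-fire caricature.

    The membrane potential leaks, accumulates the input, and is reset
    to v_rest when it crosses the threshold; returns (new_v, spiked).
    """
    v = leak * v + total_input
    if v >= threshold:
        return v_rest, 1  # spike and reset: the memory restarts here
    return v, 0

# Constant drive makes the unit spike periodically (every 4th step here).
v, spikes = 0.0, []
for _ in range(20):
    v, s = lif_step(v, 0.3)
    spikes.append(s)
print(spikes)
```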

The model
Chain X_t ∈ {0, 1}^I, X_t = (X_t(i), i ∈ I), X_t(i) ∈ {0, 1}, t ∈ Z, where the countable set I is the set of neurons. We will work in the case where I is infinite.
Time evolution: at each time step, given the past history of the system, the neurons update independently of each other. For any finite subset J of neurons,
\[
P(X_t(i) = a_i,\ i \in J \mid \mathcal{F}_{t-1}) = \prod_{i \in J} P(X_t(i) = a_i \mid \mathcal{F}_{t-1}),
\]
where \mathcal{F}_{t-1} denotes the past history of the system up to time t-1.

The model II
\[
P(X_t(i) = 1 \mid \mathcal{F}_{t-1}) = \Phi_i\Big( \sum_{j} W_{j \to i} \sum_{s = L^i_t}^{t-1} g(t-s)\, X_s(j),\ t - L^i_t \Big).
\]
Here:
W_{j→i} ∈ R is the synaptic weight of neuron j on neuron i.
L^i_t = sup{s < t : X_s(i) = 1} is the last spike time of neuron i strictly before time t.
g : N → R_+ describes a leak effect.
t − L^i_t describes an aging effect.
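For intuition, here is a small finite-N simulation of these dynamics, with an illustrative choice of Φ (a sigmoid bounded below by a spontaneous rate δ, the same for all i, and dropping the aging argument for simplicity) and a geometric leak function g; all parameter values are assumptions for the sketch, not the ones of the talk:

```python
import math
import random

def simulate_gl(N, T, W, g=lambda d: 0.8 ** d, delta=0.05, gamma=2.0, seed=0):
    """Simulate a finite-N sketch of the dynamics above.

    At each time t, neuron i spikes with probability
    phi(sum_j W[j][i] * sum_{s=L^i_t}^{t-1} g(t-s) X_s(j)),
    where L^i_t is i's last spike time.  phi, g, delta and gamma are
    illustrative assumptions (the aging argument of Phi is dropped).
    """
    rng = random.Random(seed)

    def phi(u):
        # Lipschitz in u and bounded below by the spontaneous rate delta.
        return delta + (1.0 - delta) / (1.0 + math.exp(-gamma * u))

    X = [[1] * N]        # start with a spike everywhere, so L^i_1 = 0
    last = [0] * N       # last spike time of each neuron
    for t in range(1, T + 1):
        row = []
        for i in range(N):
            u = sum(W[j][i] * sum(g(t - s) * X[s][j]
                                  for s in range(last[i], t))
                    for j in range(N))
            row.append(1 if rng.random() < phi(u) else 0)
        X.append(row)
        for i in range(N):
            if row[i] == 1:
                last[i] = t
    return X

# Two mutually excitatory neurons, 50 time steps.
X = simulate_gl(N=2, T=50, W=[[0.0, 1.0], [1.0, 0.0]])
print("spikes of neuron 0:", sum(row[0] for row in X))
```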

Excitatory versus inhibitory influence
The neurons having a direct influence on i are those belonging to V_i := {j : W_{j→i} ≠ 0}. This influence is either excitatory, W_{j→i} > 0, or inhibitory, W_{j→i} < 0.

This is a new class of non-Markovian processes having a countable number of interacting components.
It extends in a non-trivial way Spitzer's interacting particle systems (which are Markovian).
It also extends Rissanen's stochastic chains with memory of variable length (it is only locally of variable length).
It is a chain of infinite order with a non-countable state space.
So it is an interesting mathematical object...

The discrete time frame is not important: a continuous-time description is analogous.
Our model is a discrete-time version of the so-called Hawkes process (see Brémaud & Massoulié 1991), but with an infinity of components and, locally, a structure of variable memory.

Basic mathematical questions
Given (W_{i→j}), Φ and g, does a chain with the above dynamics exist, and if so, is it unique? Yes, under some conditions, see in two minutes.
Are neighboring inter-spike intervals correlated? This is both a mathematical and a biological question, and there are experimental facts that we have to explain...

The proof of existence and uniqueness is based on the study of the transition probability
(1)
\[
p_{(i,t)}(1 \mid x) = \Phi_i\Big( \sum_{j} W_{j \to i} \sum_{s = L^i_t(x)}^{t-1} g(t-s)\, x_s(j),\ t - L^i_t(x) \Big),
\]
which depends on the space-time configuration of spike times x_{L^i_t(x)}^{t-1}(V_i): locally of variable length in time, of infinite range in space. Globally, of infinite-range memory!

But attention: the function x ↦ p_{(i,t)}(1 | x) is in general not continuous! We do not have
\[
\sup_{x, x' : x \stackrel{k}{=} x'} \big| p_{(i,t)}(1 \mid x) - p_{(i,t)}(1 \mid x') \big| \to 0 \quad \text{as } k \to \infty
\]
(where x \stackrel{k}{=} x' means that x and x' coincide on the k most recent coordinates), since no summability is imposed on g. Continuity is usually what is required in the study of chains of infinite order (see work by R. Fernández, G. Maillard, ...).

Hypotheses
1) Lipschitz continuity: there exists γ > 0 such that for all z, z', n,
|Φ(z, n) − Φ(z', n)| ≤ γ |z − z'|.
2) Uniform summability of the synaptic weights: sup_i Σ_j |W_{j→i}| < ∞.
3) Spontaneous spiking activity with intensity δ: Φ(·, ·) ≥ δ > 0.

Theorem
Under the above hypotheses, if δ is large enough and the synaptic weights decay fast enough, then:
1. There exists a unique stationary chain X_t(i), t ∈ Z, i ∈ I, consistent with the dynamics.
2. The speed of convergence to equilibrium is bounded above:
(2)
\[
\big| E[f(X_s^t(i)) \mid \mathcal{F}_0] - E[f(X_s^t(i))] \big| \le C\,(t-s+1)\,\|f\|_\infty\,\varphi(s),
\]
where X_s^t(i) = (X_s(i), ..., X_t(i)) and φ(s) → 0 as s → ∞.
3. If moreover g(n) ≤ C e^{−βn}, then in (2) we have φ(s) ≤ C ϱ^s for some ϱ ∈ ]0, 1[, provided β ≫ 1.

Proof: conditional Kalikow decomposition
Since Φ(·, ·) ≥ δ, we can couple with an i.i.d. field: X_t(i) ≥ ξ_t(i) for all t, i, where the ξ_t(i), t ∈ Z, i ∈ I, are i.i.d. Bernoulli(δ). We then work in the configuration space conditioned on the realization of ξ:
S_ξ = {x ∈ {0, 1}^{Z×I} : x_t(i) ≥ ξ_t(i) for all t, i}.

Continuation of the proof
Each site (i, t) has its memory bounded by R^i_t = sup{s < t : ξ_s(i) = 1}. Introduce V_i(0) := {i} and V_i(k) ↑ V_i ∪ {i}, where V_i = {j : W_{j→i} ≠ 0}.
Proposition
\[
p_{(i,t)}(a \mid x) = \lambda(-1)\, p^{[-1]}(a) + \sum_{k \ge 0} \lambda(k)\, p^{[k]}\big(a \mid x_{R^i_t}^{t-1}(V_i(k))\big),
\]
where λ(k) ∈ [0, 1], Σ_k λ(k) = 1, and
\[
\lambda(k) \le 2\gamma \sum_{s = R^i_t}^{t-1} g(t-s) \sum_{j \notin V_i(k-1)} |W_{j \to i}|, \quad k \ge 1.
\]

Comments
This is a conditional decomposition, conditional on the realization of the spontaneous spikes: the reproduction probabilities λ(k) are random variables depending on ξ.
We get uniqueness via a dual process, the clan of ancestors: in order to decide the value at site (i, t), we have to know the values of all sites in C^1_{(i,t)} = V_i(k) × [R^i_t, t − 1], with k chosen with probability λ(k)... Iterate! If this process stops in finite time a.s., then we are done.
This is granted by a comparison with a multi-type branching process in random environment.
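The extinction argument can be illustrated numerically: a subcritical branching process, standing in for the clan of ancestors, dies out almost surely, so the backward exploration stops in finite time (the offspring law and all parameters are illustrative assumptions):

```python
import random

def extinct_fraction(trials=2000, p=0.35, max_gen=200, seed=1):
    """Fraction of runs of a subcritical branching process that die out.

    Each individual has two potential children, each kept with
    probability p, so the mean offspring number is 2p = 0.7 < 1; the
    exploration therefore stops in finite time almost surely.
    """
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        alive = 1
        for _ in range(max_gen):
            if alive == 0:
                break
            alive = sum(1 for _ in range(2 * alive) if rng.random() < p)
        if alive == 0:
            extinct += 1
    return extinct / trials

print(extinct_fraction())  # close to 1: the dual process stops a.s.
```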

Back to neuroscience
Goldberg et al. (1964), in their article "Response of neurons of the superior olivary complex of the cat to acoustic stimuli of long duration", observe that in many experimental setups the empirical correlation between successive inter-spike intervals is very small, indicating that a description of spiking as a stationary renewal process is a good approximation (Gerstner and Kistler 2002).

In the same direction: the statistical analysis of the activity of several (but not all!) neurons in the hippocampus selects a renewal process as the best model. Data registered by Sidarta Ribeiro (Brain Institute, UFRN) in 2005, and analyzed by Andrés Rodríguez and Karina Y. Yaginuma using the SMC (smallest maximizer criterion).

HOWEVER: Nawrot et al. (2007), in their article "Serial interval statistics of spontaneous activity in cortical neurons in vivo and in vitro", find statistical evidence that neighboring inter-spike intervals are correlated, with negative correlation!
Can we account for these apparently contradictory facts with our model?
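The statistic at stake in both studies, the serial (lag-1) correlation between neighboring inter-spike intervals, can be computed from a binary spike train as follows (the example train is a hypothetical one, chosen so that short and long intervals alternate):

```python
def isi_lag1_correlation(train):
    """Empirical lag-1 correlation between neighboring inter-spike intervals.

    train is a binary spike train (a list of 0/1); the inter-spike
    intervals are the gaps between consecutive 1s.
    """
    times = [t for t, x in enumerate(train) if x == 1]
    isis = [b - a for a, b in zip(times, times[1:])]
    x, y = isis[:-1], isis[1:]       # neighboring interval pairs
    n = len(x)
    if n < 2:
        raise ValueError("need at least four spikes")
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

# Strictly alternating short/long intervals: perfectly negative correlation.
print(isi_lag1_correlation([1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1]))  # -> -1.0
```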

Random synaptic weights
We must describe more precisely the directed graph defined by the synaptic weights:
Vertices = neurons.
There is a directed edge from neuron i to neuron j iff W_{i→j} ≠ 0.
In what follows this graph will be a realization of a critical directed Erdős-Rényi graph. In such a graph there is a unique giant cluster, and we work within this giant cluster.

Critical directed Erdős-Rényi random graph
Large but finite system of neurons, I = {1, ..., N}, with N ≈ 10^11.
Random synaptic weights: the W_{i→j}, i ≠ j, are independent random variables taking values 0 or 1, with
P(W_{i→j} = 1) = 1 − P(W_{i→j} = 0) = p.
Here p = λ/N with λ = 1 + ϑ/N, ϑ > 0.
Observe that W_{i→j} and W_{j→i} are distinct and independent: being influenced by neuron i is different from influencing neuron i...
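Such a weight matrix is easy to sample for a small N (here N = 500 and ϑ = 1 are illustrative choices; the N ≈ 10^11 of the talk is of course out of computational reach):

```python
import random

def critical_er_weights(N, theta=1.0, seed=0):
    """Sample directed Erdos-Renyi synaptic weights W[i][j] in {0, 1}.

    Each ordered pair (i, j) with i != j gets an edge independently
    with probability p = lambda/N, lambda = 1 + theta/N (the critical
    regime); W[i][j] and W[j][i] are sampled independently.
    """
    rng = random.Random(seed)
    p = (1.0 + theta / N) / N
    W = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j and rng.random() < p:
                W[i][j] = 1
    return W

W = critical_er_weights(N=500)
mean_out_degree = sum(sum(row) for row in W) / len(W)
print("mean out-degree:", mean_out_degree)  # close to lambda = 1 + theta/N
```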

Does the past before the last spike of a neuron influence the future?
[Picture: the spike train of neuron i, spiking at time L^i_t, silent up to time t, with the value at time t still to be decided.]
Does it affect the future whether the last spike before L^i_t took place immediately before L^i_t, or many steps before?
The point is: the last spike of neuron i before time L^i_t affects many neurons different from i, which in turn affect other neurons, and so on. How long does it take until this influence returns to the starting neuron i?

This time is a sort of recurrence time in the random graph: define C^i_1 = {j : W_{j→i} ≠ 0} and, recursively, C^i_n = {j : ∃ k ∈ C^i_{n−1} with W_{j→k} ≠ 0}. Then the recurrence time is T^i = inf{n : i ∈ C^i_n}.
Proposition
P(T^i ≤ k) ≤ (k/N) e^{ϑk/N}, where N is the number of neurons and ϑ is the parameter appearing in the definition of the synaptic weight probabilities, Np = 1 + ϑ/N.
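The recurrence time T^i can be computed by exploring the graph backwards, exactly as in the definition of the sets C^i_n (a sketch; the 3-cycle example graph is a hypothetical toy case):

```python
def recurrence_time(W, i, max_steps=None):
    """Recurrence time T^i of neuron i in the directed weight graph.

    C_1 = {j : W[j][i] != 0}, and C_n is the set of neurons with an
    edge into C_{n-1}; T^i is the first n with i in C_n (None if the
    influence never returns within max_steps).
    """
    N = len(W)
    if max_steps is None:
        max_steps = N
    current = {j for j in range(N) if W[j][i]}
    for n in range(1, max_steps + 1):
        if i in current:
            return n
        current = {j for j in range(N) if any(W[j][k] for k in current)}
        if not current:
            return None
    return None

# Toy 3-cycle 0 -> 1 -> 2 -> 0: the influence returns after 3 steps.
W = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(recurrence_time(W, 0))  # -> 3
```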

This implies
Theorem
On a good set of random synaptic weights, the covariance of neighboring inter-spike intervals is bounded by
\[
C\, \frac{1}{\delta^2\, N\, (1-\delta)^N},
\]
where δ is the spontaneous spiking activity. Moreover, P(good set) ≥ 1 − C N^{−1/2}.
This conciliates the empirical results both of Goldberg et al. (1964) and of Nawrot et al. (2007)!