John Z. Sun and Da Wang
Massachusetts Institute of Technology
October 14, 2009
Outline
- System Model & Problem Formulation
- Information Rate Analysis
- Recap
Neurons
- Neuron (denoted by j)
- I/O: via synapses
  - excitatory & inhibitory synapses
- afferent cohort a(j), efferent cohort
Neurons
[Figure: neuron j receiving excitatory and inhibitory synapses from its afferent cohort and projecting to its efferent cohort]
- 85% of synapses are excitatory
- a synapse count of n = 8500 is typical in primary sensory cortex
- |a(j)| is percentagewise close to n, though some axons form more than one synapse with j
Neural Spike Train
- Neural spike pulse shape
- time of arrival (TOA) conventions: several options; one must be used consistently
- asynchronous operation among a(j): no links among spike production times from different neurons
  - when n >= 2, two spikes can be arbitrarily close
  - when n = 1, the refractory period must elapse between spikes
Neural Spike Train
[Figure: union of spikes from three neurons; AP amplitude x weight vs. time t]
Afferent Spike Train Process
[Figure: afferent cohort with excitatory and inhibitory synapses onto neuron j, which projects to its efferent cohort]
- Intensity of afferent neuron i:
  A_i(t) \triangleq \lim_{dt \to 0} P[i produces a spike in (t, t + dt)] / dt
- Union of spike trains:
  A(t) = A^+(t) / (1 + C A^-(t))
- A^+(t) & A^-(t): random excitatory/inhibitory intensities:
  A^+(t) \triangleq \sum_{i: w_i > 0} w_i A_i(t),  A^-(t) \triangleq -\sum_{i: w_i < 0} w_i A_i(t)
- A(t) is always non-negative.
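As a sanity check on the reconstructed combination rule above, here is a minimal numerical sketch; the helper name `net_intensity` and the constant `c` are illustrative choices, not from the slides.

```python
import numpy as np

# Sketch (assumed helper): combine per-synapse intensities A_i(t) into the net
# excitation A(t) = A+(t) / (1 + c * A-(t)), under the reading of the slides
# that A-(t) is the (non-negative) inhibitory intensity.
def net_intensity(weights, intensities, c=1.0):
    """weights: shape (n,); intensities: shape (n, T), rows are A_i(t)."""
    w = np.asarray(weights, dtype=float)[:, None]
    a = np.asarray(intensities, dtype=float)
    a_plus = np.sum(np.where(w > 0, w * a, 0.0), axis=0)    # excitatory part
    a_minus = np.sum(np.where(w < 0, -w * a, 0.0), axis=0)  # inhibitory part (>= 0)
    return a_plus / (1.0 + c * a_minus)                     # always non-negative
```

With no inhibitory synapses the denominator is 1 and A(t) reduces to the plain weighted sum of excitatory intensities.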
Spike Train Process (cont.)
[Figure: top panel, spikes and inter-spike intervals (ISIs) for a(t) = 40; bottom panel, normalized A(t) and D(t) vs. t]
- A(t): instantaneous random mean spiking frequency (unit: spikes/second)
- D(t): instantaneous random mean excitatory afferent ISI duration (unit: time)
Why D(t)?
Introducing D(t) lets us view the neuron as a communication channel:
- input (excitations): has units of time
- output (firing): has units of time [coming soon]
- the channel stochastically converts input time signals to output time signals
Statistics of D(t):
- \rho_D(\tau): autocorrelation of D(t)
- \tau_D: coherence time of D(t), defined by \rho_D(\tau_D) = 1/2
- stationarity assumption, analogous to the modeling of a wireless channel
Channel Model
input -> channel -> ???
What is the output?
Efferent Spike Train Process
[Figure: action potentials delivered to the efferent cohort]
- Efferent spike: action potential (AP) delivered to neurons in the efferent cohort
- T_k: duration of the k-th ISI (T_k ~ P_T for all k)
- S_k: time at which the k-th AP is generated:
  S_k = \sum_{i=1}^{k} T_i
- T_k >= refractory period
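A quick sketch of generating the AP times S_k from the ISIs. The exponential distribution for P_T (on top of the refractory floor delta) is purely an illustrative assumption, and `spike_times` is a hypothetical helper.

```python
import numpy as np

# Sketch: draw i.i.d. ISIs T_k >= delta (refractory floor plus an assumed
# exponential tail), then accumulate them into AP times S_k = sum_{i<=k} T_i.
def spike_times(n_spikes, delta=0.002, rate=10.0, rng=None):
    rng = np.random.default_rng(rng)
    isi = delta + rng.exponential(1.0 / rate, size=n_spikes)  # T_k >= delta
    return np.cumsum(isi)                                     # S_1, ..., S_n
```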
Channel Model
input -> channel -> output
Both input & output have units of time, but different indexing!
The Mean Value Assumption
  D_k \triangleq \frac{1}{T_k} \int_{S_{k-1}}^{S_k} D(u) \, du
[Figure: example with k = 4]
Construct a piecewise-constant random process \bar{D}(t), where \bar{D}(t) = D_k for S_{k-1} <= t < S_k.
Mean Value Assumption: {\bar{D}(t)} adequately represents D(t) for the purpose of analysis.
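The per-ISI averaging above can be sketched numerically on a sampled D(t); `mean_value_process` is an illustrative helper, not part of the slides.

```python
import numpy as np

# Sketch of the mean value assumption: average samples of D(t) over each ISI
# [S_{k-1}, S_k) to obtain D_k, then hold D_k constant over that interval to
# build the piecewise-constant process \bar{D}(t).
def mean_value_process(d, t, spike_times):
    """d: samples of D(t) on grid t; spike_times: [S_1, ..., S_K]."""
    edges = np.concatenate(([t[0]], spike_times))
    d_bar = np.empty_like(d)
    d_k = []
    for k in range(len(spike_times)):
        mask = (t >= edges[k]) & (t < edges[k + 1])
        dk = d[mask].mean()      # D_k: time-average of D(t) over the k-th ISI
        d_k.append(dk)
        d_bar[mask] = dk         # \bar{D}(t) = D_k on [S_{k-1}, S_k)
    return np.array(d_k), d_bar
```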
Feedback Among Neurons
For neuron j, its excitations come from:
- top-down feedback from higher levels of the cortex
- horizontal feedback from the same cortical region
- bottom-up signals from lower levels of the cortex
Causal feedback: the input D_n may depend on past outputs (T_1, ..., T_{n-1}) as well as past inputs (D_1, ..., D_{n-1}), while given D_n, the output T_n is conditionally independent of the past.
[Diagram: D_1 -> T_1, D_2 -> T_2, D_3 -> T_3, ..., D_n -> T_n, with outputs feeding back to later inputs]
Channel Model
integer-indexed input -> channel -> integer-indexed output
Any model for the channel?
Classical Integrate and Fire (CIF) Neuron
CIF neuron:
- excitatory synapses all have the same weight w
- unit step response to each afferent spike
- fixed threshold \eta \in ((m-1)w, mw]
- => always needs exactly m afferent spikes to fire
[Figure: m afferent spikes build the PSP up to threshold, triggering an AP to the efferent cohort]
Input-output relationship:
  T_k = \Delta + \sum_{i=1}^{m} I_i   (refractory period, then integrate and fire)
where E[I | D_k = d_k] = d_k.
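Simulating one ISI of the relationship above; the i.i.d. exponential gaps are an assumption for illustration (any gap distribution with conditional mean d_k fits the model), and `cif_isi` is a hypothetical helper.

```python
import numpy as np

# Sketch of one CIF ISI: a refractory period delta followed by the time to
# integrate m excitatory arrivals whose gaps I_i have mean d_k.
def cif_isi(d_k, m=20, delta=0.002, rng=None):
    rng = np.random.default_rng(rng)
    gaps = rng.exponential(d_k, size=m)  # E[I_i | D_k = d_k] = d_k (assumed exp.)
    return delta + gaps.sum()            # T_k = delta + sum_{i=1}^m I_i
```

Under this sketch E[T_k | D_k = d_k] = delta + m * d_k, which a quick Monte Carlo average confirms.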
CIF Neuron: Energy Constraints
Energy that j expends during an ISI:
- metabolic energy: e_1 = C_1 T
- energy to construct the PSP during (\Delta, T]: e_2 = C_2 M
- energy to generate the AP: e_3 = C_3
where T is the random duration of the ISI and M is the random number of afferent spikes in (\Delta, T].
For the CIF model, M = m always, hence
  e_CIF(T) = C_0 + C_1 T,  with C_0 = C_2 m + C_3.
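The energy budget above in one line; all constants here are illustrative placeholders.

```python
# Sketch of the CIF energy per ISI: e_CIF(T) = C0 + C1*T, with C0 = C2*m + C3
# (M = m always for the CIF neuron). Constants are placeholders for illustration.
def e_cif(T, m, C1, C2, C3):
    C0 = C2 * m + C3     # fixed cost per ISI: PSP construction + AP generation
    return C0 + C1 * T   # plus metabolic cost growing linearly with ISI length
```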
Problem Formulation
Channel model:
- input: {D_k}; output: {T_k}
- memoryless and time-invariant channel, but with (lots of) causal feedback!
- CIF neuron => memoryless
Central tenet: The optimality criterion apropos of neuronal information transmission is the maximization of bits per joule (bpj).
Main objective: Determine the optimal input & output distributions f_D(.) and f_T(.) based on the above principle.
Information Rate
We start by investigating the information rate I(D; T), where
  D \triangleq (D_1, D_2, \ldots, D_n),  T \triangleq (T_1, T_2, \ldots, T_n).
Memoryless:
  f_{T|D}(t | d) = \prod_{i=1}^{n} f_{T|D}(t_i | d_i)
However, as {D_i} may not be independent,
  I(D; T) \le \sum_{i=1}^{n} I(D_i; T_i),
with equality only when {D_i} are independent. Are they?
Implications of \tau_D
Recall \tau_D. When T_k is not much larger than \tau_D:
- D_{k+1} and D_k are highly correlated
- => T_{k+1} and T_k are similarly correlated
- => T_{k+1} will be similarly small
For two jointly Gaussian r.v.s with correlation \rho,
  I_{jointly Gaussian} = -\frac{1}{2} \log(1 - \rho^2),
where I -> \infty as \rho -> 1.
Message: correlation between D_k and D_{k+1} results in a penalty in information rate.
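The Gaussian mutual-information formula quoted above, as a tiny sketch (`gaussian_mi` is an illustrative helper name):

```python
import numpy as np

# I(X; Y) for jointly Gaussian X, Y with correlation coefficient rho:
# I = -(1/2) * log(1 - rho^2)  (in nats); diverges as |rho| -> 1.
def gaussian_mi(rho):
    return -0.5 * np.log(1.0 - rho**2)
```

Evaluating it shows the penalty grow sharply with correlation: I is 0 at rho = 0 and about 0.83 nats already at rho = 0.9.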
Long Term Information Rate
Incremental conditional mutual information: I \triangleq \lim_{n \to \infty} I_n, where
  I_n \triangleq I(D_1, \ldots, D_n; T_n | T_1, \ldots, T_{n-1})
      = I(D_n; T_n | T_1, \ldots, T_{n-1}) + I(D_1, \ldots, D_{n-1}; T_n | D_n, T_1, \ldots, T_{n-1})
      = h(T_n | T_1, \ldots, T_{n-1}) - h(T_n | D_n, T_1, \ldots, T_{n-1})
      = h(T_n | T_1, \ldots, T_{n-1}) - h(T_n | D_n)
      = I(D_n; T_n) - I(T_n; T_1, \ldots, T_{n-1})
Since {(D_k, T_k)} is strictly stationary:
  I = I(D_1; T_1) - [ I(T_1; T_2) + \lim_{n \to \infty} I(T_n; T_1, \ldots, T_{n-2} | T_{n-1}) ]
where the bracketed term is the information decrement.
Long Term Information Rate (cont.)
  I = I(D_1; T_1) - [ I(T_1; T_2) + \lim_{n \to \infty} I(T_n; T_1, \ldots, T_{n-2} | T_{n-1}) ]
(the bracketed term is the information decrement)
- I(T_1; T_2): the principal information decrement
- \lim_{n \to \infty} I(T_n; T_1, \ldots, T_{n-2} | T_{n-1}) is negligible, as having T_{n-1} is almost as effective as having D_{n-1}.
Information Decrement I(T_1; T_2)
  I(T_1; T_2) = P[T_1 > \tau_D] I(T_1; T_2 | T_1 > \tau_D) + P[T_1 \le \tau_D] I(T_1; T_2 | T_1 \le \tau_D)
              \approx P[T_1 \le \tau_D] I(T_1; T_2 | T_1 \le \tau_D),
since when T_1 > \tau_D, T_2 is effectively independent of T_1.
When T_1 < \tau_D, \rho_{T_1, T_2} \approx \rho_D(T_1), giving
  I(T_1; T_2) \approx \kappa E[\log T_1] + C,
where \kappa and C are constants determined by \rho_D(\tau).
(The high correlation is obtained by analyzing the conditional variance of T_2.)
Recap
- Model channel input and output as time signals
- Mean value assumption: simplifies analysis
- Memoryless channel with causal feedback
- CIF neuron model: energy e_CIF(T) = C_0 + C_1 T
- Information rate analysis: coherence time, information decrement
  - information rate I = I(D_1; T_1) - I(T_1; T_2)
- Lots of hand-waving in modeling...