Information Theory and Neuroscience II


John Z. Sun and Da Wang
Massachusetts Institute of Technology
October 14, 2009

Outline
- System Model & Problem Formulation
- Information Rate Analysis
- Recap

Neurons
- Neuron (denoted by j)
- I/O via synapses: excitatory & inhibitory synapses
- Afferent cohort a(j), efferent cohort

Neurons

[Figure: neuron j with afferent cohort neurons (excitatory and inhibitory synapses) and efferent cohort neurons.]

- 85% of synapses are excitatory
- The number of synapses n = 8500 is typical in primary sensory cortex
- |a(j)| is percentagewise close to n, though some axons form more than one synapse with j

Neural Spike Train
- Neural spike pulse shape: several time-of-arrival (TOA) conventions exist; one must be used consistently.
- Asynchronous operation among a(j): spike production times from different neurons are unlinked.
- When n ≥ 2, two spikes can be arbitrarily close; when n = 1, the refractory period must elapse between spikes.

[Figure: union of spikes from three neurons; vertical axis AP × weight (0 to 0.7), horizontal axis t (0 to 10).]
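Below is a minimal Python sketch of this union-of-trains picture: three independent homogeneous Poisson spike trains are superposed, each spike carrying its synaptic weight. The rates and weights are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_train(rate, t_max):
    """Spike times on [0, t_max] from a homogeneous Poisson process."""
    n = rng.poisson(rate * t_max)
    return np.sort(rng.uniform(0.0, t_max, n))

# Illustrative rates and synaptic weights (assumed values, not from the talk).
rates = [1.0, 1.5, 0.8]      # spikes per time unit
weights = [0.7, 0.5, 0.3]    # weighted AP amplitude, as on the slide's y-axis

trains = [poisson_spike_train(r, 10.0) for r in rates]

# Union of the three trains, each spike tagged with its synaptic weight.
union = sorted((t, w) for train, w in zip(trains, weights) for t in train)
for t, w in union[:5]:
    print(f"t = {t:5.2f}, weighted amplitude = {w}")
```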

Afferent Spike Train Process

$$A_i(t) \triangleq \lim_{dt \to 0} \frac{P[\text{neuron } i \text{ produces a spike in } (t, t+dt)]}{dt}$$

[Figure: neuron j with excitatory and inhibitory afferent cohort inputs and efferent cohort outputs.]

Union of spike trains:
$$A(t) = \frac{A_+(t)}{1 + C A_-(t)}$$

$A_+(t)$ and $A_-(t)$ are the random excitatory and inhibitory intensities:
$$A_+(t) \triangleq \sum_{i:\, w_i > 0} w_i A_i(t), \qquad A_-(t) \triangleq -\sum_{i:\, w_i < 0} w_i A_i(t)$$

$A(t)$ is always non-negative.
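A small numerical sketch of the intensity algebra above, assuming sinusoidal per-neuron intensities $A_i(t)$ and $C = 1$ (both assumptions made only for illustration):

```python
import numpy as np

# A minimal sketch of the afferent intensity algebra above. The sinusoidal
# per-neuron intensities A_i(t), the weights w_i, and C = 1 are assumptions.
w = np.array([0.6, 0.4, -0.5])     # w_i > 0 excitatory, w_i < 0 inhibitory
t = np.linspace(0.0, 10.0, 500)
A_i = np.stack([1.0 + 0.5 * np.sin(t + k) for k in range(len(w))])  # each >= 0

A_plus = np.sum(np.clip(w, 0, None)[:, None] * A_i, axis=0)    # excitatory intensity
A_minus = -np.sum(np.clip(w, None, 0)[:, None] * A_i, axis=0)  # inhibitory intensity

C = 1.0
A = A_plus / (1.0 + C * A_minus)   # union intensity A(t), non-negative by construction
assert np.all(A >= 0)
```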

Spike Train Process (cont.)
- Spikes, inter-spike interval (ISI)
- $A(t)$: instantaneous random mean spiking frequency (unit: spikes/second)
- $D(t)$: instantaneous random mean excitatory afferent ISI duration (unit: time)

[Figure: a sample spike train with a(t) = 40, and normalized traces of A(t) and D(t) for 0 ≤ t ≤ 14.]

Why D(t)?

Introducing $D(t)$ allows us to conceptualize the neuron as a communication channel:
- input (excitations): has units of time
- output (firing): has units of time [coming soon]
- stochastically converts input time signals to output time signals

Statistics of D(t):
- $\rho_D(\tau)$: correlation of $D(t)$
- $\tau_D$: coherence time of $D(t)$, defined by $\rho_D(\tau_D) = 1/2$
- stationarity assumption, analogous to the modeling of a wireless channel
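To make the coherence-time definition concrete, here is a sketch that estimates $\tau_D$ as the smallest lag at which the sample autocorrelation drops to 1/2, using an AR(1) surrogate for $D(t)$ (the AR(1) model and its parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate stationary D(t): a discrete-time AR(1) process. The AR(1) form
# and its parameters are assumptions used only to illustrate the definitions.
phi, n = 0.95, 50_000
d = np.empty(n)
d[0] = 0.0
for k in range(1, n):
    d[k] = phi * d[k - 1] + rng.standard_normal()

def rho_D(tau):
    """Sample autocorrelation of D at integer lag tau."""
    x = d - d.mean()
    return float(np.dot(x[:-tau], x[tau:]) / np.dot(x, x))

# Coherence time tau_D: smallest lag with rho_D(tau) <= 1/2.
tau_D = next(tau for tau in range(1, n) if rho_D(tau) <= 0.5)
print("estimated tau_D:", tau_D)   # theory: log(1/2)/log(phi) ~= 13.5
```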

Channel Model: input

[Diagram: input → channel → ???]

What is the output?

Efferent Spike Train Process
- Efferent spike: action potential (AP) delivered to neurons in the efferent cohort
- $T_k$: duration of the k-th ISI ($T_k \sim P_T$ for all k)
- $S_k$: time at which the k-th AP is generated:
$$S_k = \sum_{i=1}^{k} T_i$$
- $T_k \geq$ the refractory period
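A hedged sketch of this efferent point process, with $P_T$ modeled (as an assumption) by a refractory offset plus an exponential tail:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the efferent point process: draw ISIs T_k i.i.d. from P_T,
# modeled here (an assumption) as a refractory offset plus an exponential tail.
delta = 0.002                                  # refractory period, s (assumed)
T = delta + rng.exponential(0.020, size=20)    # guarantees T_k >= delta

# AP generation times: S_k = sum_{i=1}^k T_i.
S = np.cumsum(T)
print("first five AP times:", np.round(S[:5], 4))
```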

Channel Model: input & output

[Diagram: $D(t)$ → channel → $\{T_k\}$]

Both input and output have units of time, but different indexing!

The Mean Value Assumption

$$D_k \triangleq \frac{1}{T_k} \int_{S_{k-1}^+}^{S_k} D(u)\, du$$

[Figure: example with k = 4.]

Construct a piecewise constant random process $\bar{D}(t)$, where $\bar{D}(t) = D_k$ for $S_{k-1}^+ \le t < S_k$.

Mean Value Assumption: $\{\bar{D}(t)\}$ adequately represents $D(t)$ for the purpose of analysis.
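A short sketch of the construction, with an assumed $D(t)$ and assumed AP times $S_k$:

```python
import numpy as np

# Sketch of the mean value assumption: average D(t) over each ISI and hold
# it constant there. D(t) and the AP times S_k are illustrative assumptions.
t = np.linspace(0.0, 10.0, 10_001)
D = 1.0 + 0.3 * np.sin(2.0 * t)           # assumed afferent ISI-duration signal
S = np.array([0.0, 2.0, 3.5, 6.0, 10.0])  # S_0..S_4 bounding four ISIs

D_bar = np.empty_like(D)
for k in range(1, len(S)):
    mask = (t >= S[k - 1]) & (t <= S[k])
    D_k = D[mask].mean()                   # ~ (1/T_k) * integral of D over the ISI
    D_bar[mask] = D_k                      # piecewise constant on [S_{k-1}, S_k]
```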

Feedback Among Neurons

For neuron j, its excitations come from:
- top-down feedback from higher levels of the cortex
- horizontal feedback from the same neuron region
- bottom-up signals from lower levels of the cortex

Causal feedback: each new input depends on the history of both inputs and outputs, $(D_1, \ldots, D_{n-1}) \to D_n$ and $(T_1, \ldots, T_{n-1}) \to D_n$, while the channel itself maps each $D_k$ to $T_k$.

[Diagram: $D_1 \to T_1$, $D_2 \to T_2$, $D_3 \to T_3$, ..., $D_n \to T_n$, with feedback from past outputs to future inputs.]

Channel Model: integer-indexed input & output

[Diagram: $\{D_k\}$ → channel → $\{T_k\}$]

Any model for the channel?

Classical Integrate-and-Fire (CIF) Neuron

CIF neuron:
- excitatory synapses have the same weight w
- unit step response to each afferent spike
- fixed threshold $\eta \in ((m-1)w,\ mw]$: always needs m spikes to fire

[Figure: m afferent spikes integrate the PSP up to threshold, triggering an AP to the efferent cohort.]

Input-output relationship:
$$T_k = \underbrace{\Delta}_{\text{refractory}} + \underbrace{m\bar{I}}_{\text{integrate-and-fire}}, \qquad E[\bar{I} \mid D_k = d_k] = d_k$$
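A minimal simulation of this input-output relationship, drawing the $m$ afferent ISIs from an exponential distribution with mean $d_k$ (the exponential choice and the numbers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal CIF sketch: the neuron fires after the refractory period plus the
# time to collect m afferent spikes. The exponential afferent ISIs and all
# numeric values are assumptions made for illustration.
m = 20          # spikes needed to cross the threshold eta in ((m-1)w, mw]
delta = 0.002   # refractory period (s)
d_k = 0.005     # current mean afferent ISI, so E[I] = d_k

def cif_isi():
    """One efferent ISI: T_k = delta + sum of m afferent ISIs."""
    return delta + rng.exponential(d_k, size=m).sum()

samples = np.array([cif_isi() for _ in range(10_000)])
print("E[T_k] ~=", samples.mean(), "; theory:", delta + m * d_k)
```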

CIF Neuron: Energy Constraints

Energy that j expends during an ISI:
- metabolic energy: $e_1 = C_1 T$
- energy to construct PSPs during $(\Delta, T]$: $e_2 = C_2 M$
- energy to generate the AP: $e_3 = C_3$

For the CIF model we have $M = m$ always, hence
$$e_{\mathrm{CIF}}(T) = C_0 + C_1 T, \qquad C_0 = C_2 m + C_3$$
where $T$ is the random duration of the ISI and $M$ is the random number of afferent spikes in $(\Delta, T]$.

Problem Formulation

Channel model:
- Input: $\{D_k\}$; Output: $\{T_k\}$
- memoryless and time-invariant channel, but with (lots of) causal feedback!
- CIF neuron ⇒ memoryless

Central tenet: The optimality criterion apropos of neuronal information transmission is the maximization of bits per joule (bpj).

Main objective: Determine the optimal input and output distributions $f_D(\cdot)$ and $f_T(\cdot)$ based on the above principle.
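A toy computation of the bits-per-joule objective under the CIF energy model; the per-ISI information rate and energy constants are placeholders, not derived values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Bits-per-joule sketch under the CIF energy model e_CIF(T) = C0 + C1*T.
# The information rate per ISI and the constants are placeholder assumptions.
C0, C1 = 1.0, 0.5              # assumed energy constants (J and J/s)
bits_per_isi = 2.3             # assumed bits conveyed per ISI

T = 0.002 + rng.exponential(0.020, size=100_000)   # sample ISI durations
bpj = bits_per_isi / (C0 + C1 * T).mean()          # bits-per-joule objective
print(f"bits per joule ~= {bpj:.3f}")
```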

Information Rate

We start by investigating the information rate $I(\mathbf{D}; \mathbf{T})$, where
$$\mathbf{D} \triangleq (D_1, D_2, \ldots, D_n), \qquad \mathbf{T} \triangleq (T_1, T_2, \ldots, T_n).$$

Memoryless:
$$f_{\mathbf{T}|\mathbf{D}}(\mathbf{t} \mid \mathbf{d}) = \prod_{i=1}^{n} f_{T|D}(t_i \mid d_i)$$

However, as $\{D_i\}$ may not be independent,
$$I(\mathbf{D}; \mathbf{T}) \le \sum_{i=1}^{n} I(D_i; T_i),$$
with equality only when the $\{D_i\}$ are independent. Are they?
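The inequality can be checked in closed form for a stand-in memoryless Gaussian channel $T_i = D_i + Z_i$ with correlated Gaussian inputs (an assumption used only to illustrate the direction of the bound):

```python
import numpy as np

# Toy check of I(D;T) <= sum_i I(D_i;T_i): a memoryless additive Gaussian
# channel T_i = D_i + Z_i stands in for the neuron (an assumption), with
# correlated Gaussian inputs D. Closed forms via log-determinants, in nats.
rho, sigma2 = 0.8, 0.5
K_D = np.array([[1.0, rho], [rho, 1.0]])   # input covariance, n = 2

I_joint = 0.5 * np.log(np.linalg.det(np.eye(2) + K_D / sigma2))
I_sum = sum(0.5 * np.log(1.0 + K_D[i, i] / sigma2) for i in range(2))
print(I_joint, "<=", I_sum)   # the gap is the penalty from input correlation
```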

Implications of $\tau_D$

Recall $\tau_D$. When $T_k$ is not much larger than $\tau_D$, $D_{k+1}$ and $D_k$ are highly correlated, so $T_{k+1}$ and $T_k$ are similarly correlated: a small $T_k$ means $T_{k+1}$ will be similarly small.

For two jointly Gaussian r.v.,
$$I_{\text{jointly Gaussian}} = -\tfrac{1}{2} \log(1 - \rho^2),$$
where $I \to \infty$ as $\rho \to 1$.

Message: correlation between $D_k$ and $D_{k+1}$ results in a penalty in information rate.
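The divergence as $\rho \to 1$ is easy to tabulate:

```python
import numpy as np

def gaussian_mi(rho):
    """I(X;Y) = -0.5 * log(1 - rho^2) for jointly Gaussian X, Y (in nats)."""
    return -0.5 * np.log(1.0 - rho**2)

for rho in (0.5, 0.9, 0.99, 0.999):
    print(f"rho = {rho}: I = {gaussian_mi(rho):.3f} nats")   # diverges as rho -> 1
```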

Long Term Information Rate

Incremental conditional mutual information: $I_\infty = \lim_{n \to \infty} I_n$, where
$$\begin{aligned}
I_n &= I(D_1, \ldots, D_n; T_n \mid T_1, \ldots, T_{n-1}) \\
&= I(D_n; T_n \mid T_1, \ldots, T_{n-1}) + I(D_1, \ldots, D_{n-1}; T_n \mid D_n, T_1, \ldots, T_{n-1}) \\
&= h(T_n \mid T_1, \ldots, T_{n-1}) - h(T_n \mid D_n, T_1, \ldots, T_{n-1}) \\
&= h(T_n \mid T_1, \ldots, T_{n-1}) - h(T_n \mid D_n) \\
&= I(D_n; T_n) - I(T_n; T_1, \ldots, T_{n-1})
\end{aligned}$$

Since $\{(D_k, T_k)\}$ is strictly stationary,
$$I_\infty = I(D_1; T_1) - \underbrace{\Big[ I(T_1; T_2) + \lim_{n \to \infty} I(T_n; T_1, \ldots, T_{n-2} \mid T_{n-1}) \Big]}_{\text{information decrement}}$$

Long Term Information Rate (cont.)

$$I_\infty = I(D_1; T_1) - \underbrace{\Big[ I(T_1; T_2) + \lim_{n \to \infty} I(T_n; T_1, \ldots, T_{n-2} \mid T_{n-1}) \Big]}_{\text{information decrement}}$$

- $I(T_1; T_2)$: principal information decrement
- $\lim_{n \to \infty} I(T_n; T_1, \ldots, T_{n-2} \mid T_{n-1})$ is negligible, as having $T_{n-1}$ is almost as effective as having $D_{n-1}$.

Information Decrement $I(T_1; T_2)$

$$\begin{aligned}
I(T_1; T_2) &= P[T_1 > \tau_D]\, I(T_1; T_2 \mid T_1 > \tau_D) + P[T_1 \le \tau_D]\, I(T_1; T_2 \mid T_1 \le \tau_D) \\
&\approx P[T_1 \le \tau_D]\, I(T_1; T_2 \mid T_1 \le \tau_D)
\end{aligned}$$

since, when $T_1 > \tau_D$, $T_2$ is effectively independent of $T_1$. When $T_1 < 2\tau_D$, $\rho_{T_1, T_2} \approx \rho_D(T_1)$, and

$$I(T_1; T_2) \approx \kappa\, E[\log T_1] + C$$

where $\kappa$ and $C$ are constants based on $\rho_D(\tau)$. The high correlation is obtained by analyzing the conditional variance of $T_2$.

Recap
- Model channel input and output as time signals
- Mean value assumption: simplifies the analysis
- Memoryless channel with causal feedback
- CIF neuron model: energy $e_{\mathrm{CIF}}(T) = C_0 + C_1 T$
- Information rate analysis: coherence time, information decrement, information rate $I_\infty \approx I(D_1; T_1) - I(T_1; T_2)$
- Lots of hand-waving in the modeling...