
About synchrony and oscillations

So far, our discussions have assumed that we are either observing a single neuron at a time, or that neurons fire independently of each other. This assumption may be correct in the case of sensory neurons, which are not usually interconnected; however, when circuits come into play, neurons cannot be considered independent of each other. A specific parameter that is often measured when the action potentials of several neurons are recorded at the same time is neural synchrony.

Reminder: Neurons function as "coincidence detectors". At the level of the synapse, as well as at the level of the membrane voltage, signals are summed in time. Action potentials from presynaptic neurons are more efficient at depolarizing a postsynaptic neuron, for example, if they arrive close to each other in time (synaptic summation). One can think of coincidence detection at many different time scales. For example, a glutamatergic synapse with AMPA receptors sums up signals arriving "simultaneously" at a scale of maybe 10-20 ms. A synapse functioning via NMDA receptors, however, may have an integration time as long as 300 ms. In a behavioral experiment, simultaneous may mean within hours of each other. Coincidence can thus be defined only with a specific question and scenario in mind.

Synchrony refers to the observation that, in many cases, action potentials from different neurons are emitted at the same time, or very close in time. Again, there is no "true" definition of synchrony; the term depends on the question and experimental situation. Consider the following spike trains from two different neurons N1 and N2:

[Figure: spike trains of N1 and N2 (5 ms scale bar)]

N2's action potentials always precede N1's by 5 ms. In a strict sense, this means that the two spike trains (sequences of action potentials) are "phase-locked". You may also say that N1 lags N2 by 5 ms. Depending on the resolution of the analysis, however, one could conclude that N1 and N2 are synchronous.

[Figure: the same spike trains viewed at 5 ms and 10 ms resolution]

How can we measure synchrony in the brain? Obviously, in order to detect synchronous or phase-locked events, one has to be able to record from more than one neuron at a time. This can be achieved by either recording from several individual neurons (multi-unit recordings) or from a population (field potentials). Local field potential (LFP) recordings refer to extracellular recordings with relatively large electrodes, with which the activity of many neurons can be recorded. Large variations in these LFP recordings usually indicate that many neurons were activated or spiking simultaneously. An LFP is a "summated" activity measure, although the contributions of individual neurons are weighted by their distance from the electrode.

[Figure: examples of odor-evoked oscillations and synchronous spiking from the antennal lobe of the locust. LFP: local field potential; PN: intracellular recording from a projection neuron.]

Synchronous events can be recorded at various levels of observation:

In order to calculate how "synchronous" two spike trains are, a variety of methods can be used. One simple example consists in counting the total number of action potentials, then counting the number of synchronous action potentials, and dividing the latter by the former. The result will of course depend on the definition of "synchronous". Many of the commonly employed methods (cross-correlation, coherence; next lecture) can only be used to measure reoccurring synchronous events.

When counting synchronous events, one has to be careful to check whether these events could be due to purely random coincidence. If one assumes that two spike trains are independent of each other, and that both of them fire randomly at a given rate, then a certain number of synchronous events will always occur (this number increases as the rates increase). In many cases, the "expected" number of synchronous events can be calculated, and the "measured" number of synchronous events can be compared to that number. A different, often used method is "spike shuffling", in which the temporal occurrence of action potentials in the spike trains is changed while preserving the rate and interspike interval distributions. The shuffled spike trains are then used as the baseline against which the measured number of synchronous events can be compared.

The rest of the lecture will consist of a discussion about the readings!

Aside: A complete description of the relationship between a stimulus and a response would require the probabilities corresponding to every sequence of spikes that can be evoked by the stimulus. This would typically be expressed as the probability (chance, likelihood) of a spike occurring in a small interval Δt. The firing rate r(t) determines the probability of firing a spike in a small interval around time t.
If the probability of generating an action potential can be considered independent of the presence or timing of other action potentials (not usually true in biological neurons), then the firing rate is all that is needed to compute the probabilities for all possible action potential sequences. An example of a process which generates sequences of events (action potentials) that are independent of each other is the often used Poisson process. The Poisson process provides a very useful approximation of stochastic neuronal firing.
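The chance-coincidence and shuffle-baseline ideas above can be made concrete with Poisson trains as the independence model. A minimal sketch (the rates, duration, and bin size are arbitrary illustrative choices; each 1 ms bin contains a spike with probability rΔt, and the shuffle permutes interspike intervals so that rate and ISI distribution are preserved):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate, dt, n_bins):
    """Binary train: each bin contains a spike with probability rate*dt."""
    return rng.random(n_bins) < rate * dt

def isi_shuffle(spikes):
    """Permute interspike intervals: rate and ISI distribution are preserved."""
    times = np.flatnonzero(spikes)
    isis = rng.permutation(np.diff(times))
    new_times = np.concatenate(([times[0]], times[0] + np.cumsum(isis)))
    out = np.zeros_like(spikes)
    out[new_times] = True
    return out

dt, T = 0.001, 100.0                            # 1 ms bins, 100 s of data
n_bins = int(T / dt)
n1 = poisson_train(20.0, dt, n_bins)            # 20 Hz train
n2 = poisson_train(30.0, dt, n_bins)            # 30 Hz train

measured = np.sum(n1 & n2)                      # same-bin coincidences
expected = n_bins * (20.0 * dt) * (30.0 * dt)   # chance level for independent trains
shuffled = np.sum(n1 & isi_shuffle(n2))         # shuffle-based baseline
print(measured, expected, shuffled)
```

Since these two trains really are independent, the measured count comes out close to both the analytic chance level and the shuffle baseline; an excess of measured over baseline coincidences is what would indicate genuine synchrony.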

In a homogeneous Poisson process, the firing rate is constant. If r is the firing rate and Δt is the size of the bins under consideration, then the probability of generating an action potential in one specific bin is rΔt. Similarly, the probability of not having a spike in a given bin is (1 − rΔt). For a Poisson process, the mean and the variance of the spike count are both equal to rΔt. If the spike rate varies in time, the process that generates such a varying rate is called an inhomogeneous Poisson process. Spike sequences can be simulated by using some estimate (measurement) of the firing rate, r(t), to drive a Poisson process.

Oscillations

In many brain areas, regular variations of the electric field potential, of spiking activity, or of individual neurons' activities can be observed experimentally. Many of these oscillations are apparent only during certain behavioral states (for example exploration, REM sleep, ...). Oscillations occur in several frequency ranges; these include theta oscillations (2-9 Hz) and gamma oscillations (40-100 Hz).

Fig. 1. Simultaneous gamma and theta frequency oscillations in the rat hippocampus in vivo. Field potential recording from stratum pyramidale of the CA1 region.

If large numbers of active neurons are synchronized (i.e., emit their action potentials at the same time), and do this in a regular, repetitive fashion, a regular oscillation in the field potential can be observed.

LFP oscillations can be recorded with a variety of experimental techniques. In order to measure the reoccurrence of patterns in a spike train, one can use the autocorrelation function, which is related to the correlations you may know from statistics. For spike trains one uses the spike-train autocorrelation function (which should really be called an autocovariance, because the average is subtracted from the neural response function); the lag τ is varied only between 0 and T, the interval of interest:

C_a(τ) = (1/T) ∫_0^T (s(t) − <s>) (s(t + τ) − <s>) dt

Example and very approximate description of the autocorrelation function:

Calculating the correlation coefficient of a signal with itself returns a scalar equal to 1. Autocorrelation: align the signal with itself and calculate the correlation coefficient. Then delay the signal relative to itself by Δt and calculate the correlation coefficient again. Repeat: you get a value for each delay Δt. Plotting these values against the delay gives the autocorrelation histogram.

[Figure: panels A-C illustrating C_a = 1.0 at zero delay and C_a < 1.0 at nonzero delays]
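The shift-and-correlate recipe can be written out directly; the same machinery applied to two different signals gives the cross-correlation discussed next. A sketch (the 10 Hz sine and the 50 ms spike trains are arbitrary illustrative choices; np.corrcoef subtracts the mean, as the autocovariance definition requires):

```python
import numpy as np

def corr_at_lags(x, y, max_lag):
    """Correlation coefficient of x with y delayed by 0, 1, ..., max_lag-1 samples."""
    return np.array([
        np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
        for lag in range(max_lag)
    ])

t = np.arange(0, 1, 0.001)              # 1 s sampled at 1 kHz
sine = np.sin(2 * np.pi * 10 * t)       # 10 Hz sine: period = 100 samples

ac = corr_at_lags(sine, sine, 150)      # autocorrelation
print(ac[0], ac[50], ac[100])           # 1 at lag 0, trough at half a period, peak at a full period

# Cross-correlation of two phase-locked spike trains: a peak at their lag.
n1 = np.zeros(1000)
n1[np.arange(50, 1000, 50)] = 1.0       # N1 spikes every 50 ms
n2 = np.roll(n1, -5)                    # N2 leads N1 by 5 ms

lags = np.arange(-20, 21)
cc = np.array([np.sum(n1 * np.roll(n2, lag)) for lag in lags])
print(lags[np.argmax(cc)])              # peak at +5: shift N2 by 5 ms to align with N1
```

Note the sign convention: whether a lead appears as a positive or a negative lag depends on which train is shifted, which is one reason published cross-correlograms must state their convention.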

The autocorrelation of a white-noise signal is a single peak at zero lag (flat everywhere else), that of a signal which repeats itself once has two peaks (one at zero), and that of a sine wave is a sine wave with a peak at zero. (Note: very commonly, you will see autocorrelation and cross-correlation functions drawn on an x-axis which runs from negative to positive values. The negative part of the axis adds no information; it is a convention arising from the definition and calculation of the function.) In order to compare two signals s1 and s2 (or two spike trains) with each other, one calculates a cross-correlation function, which is defined analogously to the autocorrelation function. The cross-correlation of two perfectly synchronous spike trains is a single peak at zero, whereas the cross-correlation of two phase-locked spike trains is a single peak at the time lag between their spikes.

[Figure: cross-correlation of synchronous trains (peak at 0) and of phase-locked trains (peak at the lag, here T/4)]

Of course, when real spike trains are compared, the cross-correlation is more difficult to interpret:

Autocorrelation (A) and cross-correlation (B) functions for neurons in the primary visual cortex of a cat. A: Autocorrelation histograms for neurons recorded in the right (upper) and left (lower) hemispheres show a periodic pattern, indicating oscillations at about 40 Hz. B: The cross-correlation histogram for these two neurons shows that their oscillations are synchronized with little delay.

Oscillations of varying mean frequencies have been found in many brain areas. These include, for example, the theta and gamma rhythms (8-10 Hz and 40-60 Hz). In many brain structures, individual action potentials are aligned with a specific phase of these oscillations (phase-locked). Often, inhibitory and excitatory neurons are phase-locked with each other but spike at different phases of the field potential oscillation.

Aside:

[Figure: spike trains phase-locked at 90° and at 180°; one full cycle is 360°]

T (period) = 1/f. Example: if the period is 40 ms, then 360° corresponds to 40 ms, so 90° corresponds to 10 ms.

Oscillations can occur in individual neurons because of their intrinsic properties, as well as in networks of neurons because of network properties. Most often, network oscillations are due to a combination of individual and network properties. Regular firing in neurons is often considered a simple form of oscillation:
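As a quick aside, the period-phase arithmetic in code form (the 25 Hz value reproduces the 40 ms period of the example above):

```python
def phase_to_ms(phase_deg, freq_hz):
    """Time delay corresponding to a phase angle at a given oscillation frequency."""
    period_ms = 1000.0 / freq_hz        # T = 1/f, in milliseconds
    return phase_deg / 360.0 * period_ms

print(phase_to_ms(90, 25.0))            # 40 ms period: 90 degrees -> 10.0 ms
print(phase_to_ms(180, 25.0))           # 180 degrees -> 20.0 ms
```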

Activity modes of a model thalamic neuron. With no injected current, the neuron is silent (upper panel). When a positive current is injected, the model neuron fires action potentials in a regular, periodic pattern (middle panel). When a negative current is injected, the neuron fires action potentials in periodic bursts (lower panel).

When neurons that fire regularly (oscillators) are connected together via synapses, several different scenarios can arise. Consider two synaptically coupled leaky integrate-and-fire neurons, each of which fires in a regular manner when isolated from the other. A: When the two neurons interact via reciprocal excitatory synapses, they produce alternating, out-of-phase (phase-locked) firing. B: When they are coupled via reciprocal inhibitory synapses, they produce synchronous firing.

A very common oscillation mechanism that has been extensively studied consists of an excitatory and an inhibitory neuron connected together.

E: excitatory neuron (receives the external Input)
I: inhibitory neuron
v_E, v_I: membrane potentials
x_E, x_I: outputs (activity, firing rate)
W_i: inhibitory synaptic weight (I -> E)
W_e: excitatory synaptic weight (E -> I)
θ_E, θ_I: thresholds

τ_E dv_E/dt = -v_E + w_E x_E + w_I x_I + Input
x_E = v_E if v_E >= θ_E; x_E = 0.0 if v_E < θ_E

τ_I dv_I/dt = -v_I + w_E x_E + w_I x_I
x_I = v_I if v_I >= θ_I; x_I = 0.0 if v_I < θ_I
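This loop can be simulated with forward Euler. A minimal sketch, keeping only the two connections in the diagram (E excites I with weight w_E, I inhibits E with a negative weight w_I; all parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Arbitrary illustrative parameters
tau_E, tau_I = 0.010, 0.010          # 10 ms time constants
w_E, w_I = 2.0, -2.0                 # E -> I excitation, I -> E inhibition
theta_E, theta_I = 0.0, 0.0          # thresholds
Input = 1.0                          # constant external drive to E
dt, steps = 1e-4, 2000               # 0.1 ms Euler steps, 200 ms total

v_E, v_I, trace = 0.0, 0.0, []
for _ in range(steps):
    x_E = v_E if v_E >= theta_E else 0.0   # threshold-linear outputs
    x_I = v_I if v_I >= theta_I else 0.0
    v_E += dt / tau_E * (-v_E + w_I * x_I + Input)
    v_I += dt / tau_I * (-v_I + w_E * x_E)
    trace.append(v_E)

trace = np.array(trace)
steady = Input / (1 - w_E * w_I)     # fixed point of the loop: v_E* = 0.2 here
# v_E overshoots and rings around the fixed point: a damped ~30 Hz oscillation
crossings = np.sum(np.diff(np.sign(trace - steady)) != 0)
print(steady, crossings)
```

With these parameters the fixed point is stable and the oscillation is damped; the next paragraphs discuss how, for other parameter values, such a circuit can instead show sustained oscillatory activity.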

Reminder: the leaky integrate-and-fire neuron. In contrast to the simple integrate-and-fire neuron, a leaky integrator assumes a decay of the internal variable u(t) (for example, membrane leakage; see the low-pass filter and membrane voltage equations from Lecture 2). Incorporating this assumption leads to a differential equation relating the stimulus magnitude, s, and the internal variable, u, with a decay term γ:

du/dt = -γ u(t) + s(t)

As before, the output becomes active when u(t) exceeds threshold, and u(t) is then reset to zero. In this case the relation between the firing frequency f and the amplitude of the input s looks linear, except at low values of s, where it shows threshold behavior (see Anderson, page 54). This means that for the leaky integrate-and-fire neuron, an input threshold exists below which the neuron's output stays inactive.

Leaky integrate-and-fire neuron:
s_1(t), s_2(t), ... : inputs
s(t) = Σ s_i(t) : total input
u(t) : internal variable, du/dt = -γ u(t) + s(t)
x(t) = f(u(t), θ) : output

[Figure: input s(t), internal variable u(t) with threshold θ, output x(t), and the f-I curve showing threshold behavior]

Back to the model of interacting excitatory and inhibitory neurons (or neuronal populations: in the equations above, E and I can represent large numbers of identical excitatory and inhibitory neurons).
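As an aside, the threshold behavior of the leaky integrate-and-fire unit can be checked numerically: with du/dt = -γu + s and reset to zero at threshold θ, the output rate is zero for s < γθ and grows with s above it. A sketch (all parameter values are arbitrary):

```python
def lif_rate(s, gamma=10.0, theta=1.0, dt=1e-4, T=5.0):
    """Firing rate (Hz) of a leaky integrate-and-fire unit for constant input s."""
    u, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        u += dt * (-gamma * u + s)   # Euler step of du/dt = -gamma*u + s
        if u >= theta:               # threshold crossing: count a spike, reset
            spikes += 1
            u = 0.0
    return spikes / T

# Below the input threshold s = gamma*theta = 10 the unit is silent;
# above it, the f-I curve rises steeply and then becomes roughly linear.
for s in (5.0, 15.0, 30.0, 60.0):
    print(s, lif_rate(s))
```

For s = 5 the internal variable saturates at s/γ = 0.5, below threshold, so the rate is exactly zero; for larger s the rate follows f = γ / ln(s / (s − γθ)), which is the steep-then-linear curve described above.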

[Figure: the E-I circuit with Input, v_E, and v_I]

This system can oscillate because the E neuron activates the I neuron, which in turn inhibits the E neuron, which then drives the I neuron less, which in turn inhibits the E neuron less, and so on. This model is a good opportunity to look at some of the properties of the nonlinear dynamics of neural systems. Depending on the parameter values, it exhibits fixed points (v_E and v_I are constants which don't vary in time) as well as oscillatory activity. The values of v_E and v_I can be drawn as a function of time, as shown in the figure below.