Mathematical Neuroscience
Course: Dr. Conor Houghton, 2010
Typeset: Cathal Ormond, May 6, 2011

Contents

1 Introduction
  1.1 The Brain
  1.2 Pyramidal Neuron
  1.3 Signalling
  1.4 Connection Between Neurons
2 Electrodynamics
  2.1 Introduction
  2.2 Equilibrium Potential
  2.3 Nernst Equation
  2.4 Gates and Transient Channels
    2.4.1 Persistent Channels
    2.4.2 Transient Channels
  2.5 Hodgkin-Huxley Model
  2.6 Integrate-and-Fire Models
  2.7 Synapses
  2.8 Post-Synaptic Conductances
3 Coding
  3.1 Spike Trains
  3.2 Tuning Curves
  3.3 Spike-Triggered Averages
  3.4 Linear Models
  3.5 Problems with the Linear Model
  3.6 Rate-Based Spiking

Chapter 1: Introduction

1.1 The Brain

The brain consists of neurons (grey matter) and of glial cells (white matter). Neurons participate actively in signalling and in computations. Glial cells offer structural support and have a metabolic and modulating role. We will be dealing mostly with neurons.

1.2 Pyramidal Neuron

[Table 1.1: Parts of a Pyramidal Neuron]

The Soma is the cell body. It is the site of metabolic processes and contains the nucleus. This is where the incoming signals are integrated.

Dendrites carry signals into the soma. They are passive, in the sense that the signals diffuse along them, and they are quite short (approx. 0.4 mm). Axons carry signals away from the soma by active signalling, and they are quite long (approx. 40 mm).

1.3 Signalling

- Dendrites: passive, the signal comes in.
- Soma: sums up the signals, weighted in time:
$$\tau \frac{dV}{dt} = \underbrace{-V}_{\text{linear relaxation}} + \underbrace{\text{signals}}_{\text{voltage change due to incoming signals}}$$
  where V is the voltage and τ is a time constant.
- Axons: actively propagate signals. If the voltage in the soma passes some threshold, a spike (or voltage pulse) is sent down the axon.

1.4 Connection Between Neurons

An axon terminates at a Synapse. When a spike arrives at a synapse, the voltage in the dendrite changes.

[Table 1.2: A Synapse]

[Table 1.3: When a spike arrives at a synapse]

The chemical gradients involved are sodium (Na+), potassium (K+), calcium (Ca2+) and chloride (Cl-). These gradients are maintained by ion pumps: tiny machines which consume energy while transporting ions. Ion Gates are ion-selective gated (i.e. open or closed) channels. The gate is usually controlled by voltage gradients or chemical signals. There are several types of channels:

- Passive Channels: allow specific ions to leak through.
- Pumps: pump some ions in and some ions out, e.g. sodium in, calcium out.
- Gated Channels: can open or close in response to voltage gradients, concentration gradients or chemical signals.

Note: the word gradient is used here, but it is slightly misleading, in that the voltages and concentrations vary discontinuously across the membrane.

Spikes, also known as action potentials, are voltage pulses which propagate along the axon. Depolarization is where current flowing into the cell changes the membrane potential to less negative (more positive) values. The opposite is Hyperpolarization. If a neuron is depolarized enough to raise the membrane potential above a certain threshold, the neuron generates an action potential, called a Spike, which has an amplitude of about 100 mV and lasts about 1 ms. For a few milliseconds after a spike, it may be virtually impossible to evoke another spike. This is called the Absolute Refractory Period. For a longer interval (approx. 10 ms), known as the Relative Refractory Period, it is more difficult, but not impossible, to evoke an action potential. This is important, as action potentials are the only type of membrane potential fluctuation which can propagate over large distances.

[Table 1.4: Voltage of a Spike]

In the synapse, the voltage transient of the action potential opens ion channels, producing an influx of Ca2+ that prompts the release of a neurotransmitter. This binds to receptors at the postsynaptic (signal receiving) side of the synapse, causing ion-conducting channels to open. When a spike arrives at a synapse, it changes the voltage in the dendrite. A spike is non-linear. The energy for a spike comes from the energy stored in the membrane by the gradient, so the membrane sustains the spike. Spikes propagate without dissipation. At branches, the spike continues equally down each branch. If the pump shuts off, the cell can still produce around 70,000 spikes.

When a spike arrives at the synapse, the vesicles migrate towards the cleft and some of them burst. This migration is due to an increase in calcium levels. Channels open and ions can pass into or out of the dendrites, causing the change in voltage.

Chapter 2: Electrodynamics

2.1 Introduction

The neuron relies on moving ions around using voltages and dissipation (i.e. Brownian motion of ions and atoms). All particles have thermal energy, and on average this energy is proportional to the temperature; in particular, at temperature T we have
$$E_{ion} = k_B T$$
where E_ion is the typical energy per ion and k_B is the Boltzmann constant. We will calculate the typical voltage scale of a neuron by requiring that the voltage gaps have roughly this potential energy.

A Mole of something is a specific number of constituent particles, namely Avogadro's number L = 6.02 × 10^23. The thermal energy of a mole is RT, where R is the gas constant, R = L k_B = 8.31 J/(mol K). We need the thermal energy to be similar to the potential gap due to voltages in the neuron. If you have a potential gap of V_T, then the energy required to move a charge q (the charge of one proton) across the gap is q V_T. Similarly, the energy required to move one mole of singly charged ions against a potential of V_T is F V_T, where F is Faraday's constant, given by F = qL. Balancing these, we get
$$q V_T = k_B T \quad\Rightarrow\quad V_T = \frac{k_B T}{q} = \frac{RT}{F} \approx 27\,\mathrm{mV}$$
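As a quick numerical check, here is a minimal sketch computing the thermal voltage from the constants above; the choice of body temperature (310 K) is an assumption made for illustration:

```python
# Thermal voltage V_T = RT/F = k_B*T/q, evaluated at body temperature.
R = 8.31        # gas constant, J/(mol K)
F = 96485.0     # Faraday's constant, C/mol
T = 310.0       # temperature in kelvin (approx. 37 C, an assumed value)

V_T = R * T / F
print(f"V_T = {1000 * V_T:.1f} mV")  # prints roughly 26.7 mV
```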

The intracellular resistance to current flow can cause substantial differences in the membrane potential measured in different parts of a neuron. Long, narrow stretches of dendritic or axonal cable have a high resistance. Neurons that have few of these may have relatively uniform membrane potentials across their surface; these are called Electrotonically Compact neurons.

We assume that the voltage is the same everywhere in the cell. This is equivalent to saying that the time-scales of charge dissipation across the cell are small compared to the time-scales of interest. This is a harmless enough assumption, as we are really dealing with a small section of membrane.

If we have a voltage across the membrane, charge is stored on the membrane. The amount of charge stored, Q, depends linearly on the voltage, so
$$Q = CV$$
where C is the capacitance, given by C = cA, with c the specific capacitance (capacitance per unit area) and A the area of the membrane. The current through the membrane is then
$$I = \frac{dQ}{dt} = C\frac{dV}{dt}$$
Ohm's Law tells us that
$$I = \frac{1}{R}V$$
i.e. that the current is linearly proportional to the voltage, where G = 1/R is the conductance and R is the resistance. We also have the specific resistance and specific conductance, given respectively by
$$R = \frac{r}{A}, \qquad G = gA$$
From the above equations we can read off the dimensional relations:
$$[C] = [Q][V^{-1}], \qquad [R] = [V][I^{-1}] = [V][Q^{-1}][T], \qquad [RC] = [T]$$
We wish to have a single equation for V, but before we write it down we need to think about chemical gradients, i.e. the differences in ion concentrations across the membrane. Even if there is no voltage gap, a concentration difference drives a current: with high conductivity there is a sodium current in the absence of a potential. Ohm's Law can be modified in the presence of concentration differences:
$$V - E_i = IR$$
where E_i is the Reversal Potential: the potential that would be required to prevent net diffusion across the membrane. This value will change as a current changes the concentrations; we will ignore this and assume that the current is small.

2.2 Equilibrium Potential

The Equilibrium Potential is the voltage gap required to prevent a current in the presence of a chemical gradient. It is given by the Nernst equation, which we will now derive. Imagine ions of charge zq, where q is the charge of a single proton and, for example, z = 1 for Na+. These ions need energy zqV to cross a potential barrier V. What is the probability that an ion has that energy? The distribution of energy is given by the Boltzmann distribution:
$$p(\epsilon) = \frac{1}{Z}\exp\left(-\frac{\epsilon}{k_B T}\right), \qquad P(\epsilon_1 < \text{energy of ion} < \epsilon_2) = \int_{\epsilon_1}^{\epsilon_2}\frac{1}{Z}e^{-\epsilon/k_B T}\,d\epsilon$$
Normalization implies that
$$1 = \int_0^\infty \frac{1}{Z}\exp\left(-\frac{\epsilon}{k_B T}\right)d\epsilon = \left[-\frac{k_B T}{Z}\exp\left(-\frac{\epsilon}{k_B T}\right)\right]_0^\infty = \frac{k_B T}{Z}$$
which gives us Z = k_B T. We then have
$$P(\epsilon > zqV) = \frac{1}{k_B T}\int_{zqV}^\infty \exp\left(-\frac{\epsilon}{k_B T}\right)d\epsilon = \exp\left(-\frac{zqV}{k_B T}\right) = \exp\left(-\frac{zV}{V_T}\right)$$
where V_T = k_B T / q is the typical voltage from before.

2.3 Nernst Equation

Consider the cell membrane with a potential gap E across it. For a positive ion, crossing in one direction means climbing the barrier, so only a fraction exp(-zE/V_T) of the ions on that side have enough energy to diffuse across, while all ions on the other side can diffuse back. Let p_i and p_e be the concentrations of ions in the interior and exterior respectively. Assume, near equilibrium, that the diffusion flow in each direction is proportional to the concentration of energetically available ions. Then:
$$p_e \exp\left(-\frac{zE}{V_T}\right) = p_i \quad\Rightarrow\quad E = \frac{V_T}{z}\log\left(\frac{p_e}{p_i}\right)$$
The latter is the Nernst Equation. Each ion has a different equilibrium potential; Na+, for example, has an equilibrium potential of circa 70 mV.
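A short numerical sketch of the Nernst equation; the ion concentrations below are illustrative textbook-style values, not taken from the notes:

```python
import math

V_T = 0.027  # thermal voltage in volts, RT/F, as derived above

def nernst(z, p_e, p_i):
    """Equilibrium potential E = (V_T / z) * ln(p_e / p_i), in volts."""
    return (V_T / z) * math.log(p_e / p_i)

# Illustrative concentrations in mM (exterior, interior); assumed values.
print(f"E_Na = {1000 * nernst(+1, 145.0, 12.0):+.0f} mV")  # roughly +67 mV
print(f"E_K  = {1000 * nernst(+1, 4.0, 155.0):+.0f} mV")   # roughly -99 mV
```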

The Hodgkin-Huxley Equation for the membrane potential is then
$$c_m\frac{dV}{dt} = -i_m + \frac{I_e}{A}$$
where I_e is an electrode current. This accounts for experimental situations with injected current; in the intact brain it is replaced by a synaptic current. The current for each ion is given by the modified Ohm's law, i_x = (1/r_x)(V - E_x); we will frequently make use of the conductance g_x = 1/r_x, so that, for example, the sodium current is g_Na(V - E_Na). This gives:
$$i_m = \underbrace{g_l(V - E_l)}_{\text{leak current}} + g_{Na}(V - E_{Na}) + g_K(V - E_K)$$
Here g_l is the conductance of all permanently open channels, whereas g_Na and g_K are the conductances through the gated channels: channels that generate a particular conductance and allow only one type of ion to pass through.

Models that describe the membrane potential of a neuron by just a single variable V are called Single-Compartment Models. The basic equation for all single-compartment models is, as above,
$$c_m\frac{dV}{dt} = -i_m + \frac{I_e}{A}$$

[Table 2.1: The Equivalent Circuit of a Neuron]

The structure of such a model is the same as an electrical circuit, called an Equivalent Circuit, which consists of a capacitor and a set of variable and non-variable resistors corresponding to the different membrane conductances. The membrane resistance is given by Ohm's law, V = I R_m. Note that we will often use specific resistances and capacitances, denoted by small letters: R_m = r_m/A and C_m = c_m A, where A is the surface area of the neuron.
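Note that the product R_m C_m = r_m c_m = τ_m, the membrane time constant, is independent of the area A. A minimal worked sketch with order-of-magnitude values; the specific resistance and capacitance below are assumptions of the size usually quoted for neuronal membrane:

```python
# Membrane time constant tau_m = R_m * C_m = r_m * c_m: the area A cancels.
r_m = 1.0e6 * 1e-6   # specific membrane resistance, ~1 MOhm mm^2, in Ohm m^2 (assumed)
c_m = 10e-9 / 1e-6   # specific membrane capacitance, ~10 nF/mm^2, in F/m^2 (assumed)

tau_m = r_m * c_m
print(f"tau_m = {1000 * tau_m:.0f} ms")  # roughly 10 ms
```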

2.4 Gates and Transient Channels

2.4.1 Persistent Channels

Voltage-dependent channels open and close as a function of the membrane potential. A channel that acts as if it has a single type of gate is a Persistent Channel; opening of the gate is called activation of the conductance. We denote the probability that a channel is open by P.

[Table 2.2: The Equivalent Circuit of a Neuron]

The opening of a persistent gate may involve a number of different changes. In general, if k independent, identical events are required for a channel to open, P can be written as
$$P = n^k$$
where n is the probability that any one of the k independent gating events has occurred. For the K+ conductance in the Hodgkin-Huxley model, k = 4. The rate at which the open probability for a gate changes is given by
$$\frac{dn}{dt} = \alpha_n(1 - n) - \beta_n n$$
where α_n is the opening rate and β_n is the closing rate. Simplifying, we have
$$\tau_n\frac{dn}{dt} = n_\infty - n, \qquad \tau_n = \frac{1}{\alpha_n + \beta_n}, \qquad n_\infty = \frac{\alpha_n}{\alpha_n + \beta_n}$$
We can look at this as an inhomogeneous first-order ODE:
$$\tau\frac{df}{dt} = f_\infty - f$$

Assume that f_∞ and τ are constant. Then solving this we have
$$f(t) = f_\infty + (f(0) - f_\infty)\exp\left(-\frac{t}{\tau}\right)$$

2.4.2 Transient Channels

[Table 2.3: The Equivalent Circuit of a Neuron]

The activation gate is coupled to a voltage sensor and acts like the gate in a persistent channel. A second gate, the inactivation gate, can block the channel once it is open. Only the middle panel of the figure corresponds to an open, ion-conducting state. Since the first gate acts like the one in the persistent channel, we can say that
$$P(\text{gate 1 is open}) = m^k$$
where m is an activation variable similar to n from before and k is an integer. The ball acts as the second gate. We have
$$P(\text{ball does not block the channel pore}) = h$$
where h is called the Inactivation Variable. The activation and inactivation variables m and h are distinguished by having opposite voltage dependencies. For the transient channel to conduct, both gates must be open; assuming they act independently, this has probability
$$P_{Na} = m^3 h$$

As with the persistent channels, we get
$$\frac{dm}{dt} = \alpha_m(1 - m) - \beta_m m, \qquad \frac{dh}{dt} = \alpha_h(1 - h) - \beta_h h$$
Functions m_∞ and h_∞ describing the steady-state activation and inactivation levels, and voltage-dependent time constants for m and h, can be defined as for persistent channels. To turn on a conductance maximally, it may first be necessary to hyperpolarize the neuron below its resting potential and then depolarize it. Hyperpolarization raises the value of the inactivation variable h; this is called Deinactivation. The second step, depolarization, increases the value of m, the activation variable. Only when m and h are both non-zero is the conductance turned on. Note that the conductance can be reduced in magnitude either by decreasing m or by decreasing h. Decreasing h is called Inactivation and decreasing m is called Deactivation.

2.5 Hodgkin-Huxley Model

The full Hodgkin-Huxley membrane current is
$$i_m = \bar{g}_l(V - E_l) + \bar{g}_{Na}m^3h(V - E_{Na}) + \bar{g}_K n^4(V - E_K)$$
where the bar indicates a constant maximal conductance. This is constructed by writing the membrane current as the sum of a leakage current, a delayed-rectifier K+ current and a transient Na+ current.

Suppose a positive electrode current is injected into the model, causing an initial rise of the membrane potential. When the membrane potential has risen to about -50 mV, the m variable that describes the activation of the Na+ conductance suddenly jumps from nearly 0 to nearly 1. Initially, the h variable (expressing the degree of inactivation of the Na+ conductance) is about 0.6. Thus, for a brief period, both m and h are significantly different from 0. This causes a large influx of Na+ ions, producing a sharp downward spike of inward current, which causes the membrane potential to rise rapidly to around +50 mV, near the Na+ equilibrium potential. The rapid increase in both V and m is due to a positive feedback effect: depolarization of the membrane causes m to increase, and the resulting activation of the Na+ conductance makes V increase. The rise in V drives h towards 0, causing the Na+ current to shut off, and also activates the K+ conductance by driving n towards 1. This increases the K+ current, which drives the membrane potential back down to negative values. After V has returned to the reset value, the gates return to their resting states, i.e.
$$n \to 0, \qquad m \to 0, \qquad h \to 1$$
This relaxation is not instantaneous, and so there is a refractory period.
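A minimal simulation sketch of these dynamics, using forward Euler; the rate functions and parameter values are the standard squid-axon fits, quoted here as assumptions rather than taken from the notes (units: mV, ms, mS/cm^2, uA/cm^2):

```python
import math

# Standard squid-axon Hodgkin-Huxley parameters (assumed values).
g_Na, g_K, g_l = 120.0, 36.0, 0.3
E_Na, E_K, E_l = 50.0, -77.0, -54.4
c_m = 1.0  # uF/cm^2

def rates(V):
    """Opening (alpha) and closing (beta) rates, in 1/ms, for n, m and h."""
    a_n = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    b_n = 0.125 * math.exp(-(V + 65) / 80)
    a_m = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    b_m = 4.0 * math.exp(-(V + 65) / 18)
    a_h = 0.07 * math.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + math.exp(-(V + 35) / 10))
    return a_n, b_n, a_m, b_m, a_h, b_h

dt, V = 0.01, -65.0
n, m, h = 0.32, 0.05, 0.6           # roughly the resting gate values
n_spikes, above = 0, False
for step in range(int(100 / dt)):   # 100 ms of simulated time
    I_e = 10.0 if step * dt > 10 else 0.0        # current step after 10 ms
    a_n, b_n, a_m, b_m, a_h, b_h = rates(V)
    n += dt * (a_n * (1 - n) - b_n * n)          # dn/dt = a_n(1-n) - b_n n
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    i_m = (g_l * (V - E_l) + g_Na * m**3 * h * (V - E_Na)
           + g_K * n**4 * (V - E_K))
    V += dt * (-i_m + I_e) / c_m                 # c_m dV/dt = -i_m + I_e
    if V > 0 and not above:                      # crude spike count at 0 mV
        n_spikes += 1
    above = V > 0
print(f"{n_spikes} spikes during the current step")
```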

The Connor-Stevens Model provides an alternative description of action-potential generation. The membrane current in this model is given by:
$$i_m = \bar{g}_l(V - E_l) + \bar{g}_{Na}m^3h(V - E_{Na}) + \bar{g}_K n^4(V - E_K) + \bar{g}_A a^3 b(V - E_A)$$
This model has an additional K+ conductance (called the A-current) which is transient. The A-current causes the firing rate to rise continuously from 0, and to increase roughly linearly with the injected current over the relevant range; this is known as a Type I neuron. If the A-current is switched off, the firing rate is much higher and jumps discontinuously to a non-zero value (Type II). The A-current also delays the occurrence of the first action potential (it lowers the internal voltage and so reduces spiking).

This model can be extended by including a transient Ca2+ conductance, as found e.g. in thalamocortical neurons. A transient Ca2+ conductance acts, in many ways, like a slower version of the transient Na+ conductance that generates action potentials. Instead of producing an action potential, a transient Ca2+ conductance generates a slower transient depolarization, sometimes called a Ca2+ spike. This causes the neuron to fire a burst of action potentials, which are Na+ spikes riding on the slower Ca2+ spike. Neurons can fire action potentials either at a steady rate or in bursts, even without current injection or synaptic input. Periodic bursting gives rise to transient Ca2+ spikes with action potentials riding on them. The Ca2+ current during these bursts causes a dramatic increase in the intracellular Ca2+ concentration. This activates a Ca2+-dependent K+ current which, along with the inactivation of the Ca2+ current, terminates the burst. The interburst interval is determined primarily by the time it takes for the intracellular Ca2+ concentration to relax to a low value, which deactivates the Ca2+-dependent K+ current, allowing another burst to be generated.

Membrane potentials can vary considerably over the surface of the cell membrane, especially for neurons with long and narrow processes, or if we consider rapidly changing membrane potentials. The attenuation and delay within a neuron are most severe when electrical signals travel down the long, narrow, cable-like structures of dendritic or axonal branches. For this reason, the mathematical analysis of signal propagation within neurons is called Cable Theory. The voltage drop across a cable segment of length Δx, radius a and intracellular resistivity r_L is
$$\Delta V = V(x + \Delta x) - V(x) = -R_L I_L, \qquad R_L = \frac{r_L\,\Delta x}{\pi a^2}$$
so that
$$I_L = -\frac{\pi a^2}{r_L}\frac{\Delta V}{\Delta x} \to -\frac{\pi a^2}{r_L}\frac{\partial V}{\partial x}$$

Many axons are covered with an insulating sheath of myelin, except at certain gaps, called the Nodes of Ranvier, where there is a high density of Na+ channels. There is no spike along the myelinated stretches, since the myelin acts as an insulator, and so the signal travels there as a passive current. The signal therefore gets weaker but travels faster, and is actively regenerated at the nodes of Ranvier. Action potential propagation is thus sped up.

2.6 Integrate-and-Fire Models

The mechanisms by which the K+ and Na+ conductances produce action potentials are well understood and can be modelled quite accurately. However, neuron models can be simplified, and simulations drastically accelerated, if these biophysical mechanisms are not explicitly included in the model. Integrate-and-Fire models do this by stating that an action potential occurs whenever the membrane potential of the model neuron reaches a threshold value V_th. After the spike, the potential is reset to a value V_r, where V_r < V_th. Integrate-and-fire models only model the subthreshold membrane potential dynamics. In the simplest model, all active membrane conductances are ignored, including synaptic inputs, and the entire membrane conductance is modelled as a single passive leakage term:
$$i_m = g_l(V - E_l)$$
This is known as the Leaky Integrate-and-Fire model. The membrane potential in this model is determined by
$$c_m\frac{dV}{dt} = -g_l(V - E_l) + \frac{I_e}{A}$$
If we multiply across by r_m = 1/g_l and define τ_m = r_m c_m, then
$$\tau_m\frac{dV}{dt} = E_l - V + R_m I_e$$
To generate action potentials in the model, we augment this with the rule that whenever V reaches the threshold value V_th, an action potential is fired and the potential is reset to V_r. When I_e = 0 the equilibrium is V = E_L, so E_L is the resting potential. To get the membrane potential, we simply integrate the above equation. The firing rate of an integrate-and-fire model in response to a constant injected current can be computed analytically:
$$V(t) = E_L + R_m I_e + (V(0) - E_L - R_m I_e)\exp\left(-\frac{t}{\tau_m}\right)$$
This is valid only as long as V(t) < V_th. Suppose that at t = 0 an action potential has just fired, so V(0) = V_r. If t_isi is the time to the next spike, then
$$V_{th} = V(t_{isi}) = E_L + R_m I_e + (V_r - E_L - R_m I_e)\exp\left(-\frac{t_{isi}}{\tau_m}\right)$$
Rearranging,
$$\exp\left(-\frac{t_{isi}}{\tau_m}\right) = \frac{V_{th} - E_L - R_m I_e}{V_r - E_L - R_m I_e}
\quad\Rightarrow\quad
t_{isi} = \tau_m\log\left(\frac{R_m I_e + E_L - V_r}{R_m I_e + E_L - V_{th}}\right)$$
whenever R_m I_e > V_th - E_L; otherwise t_isi = ∞. We call t_isi the Interspike Interval for the constant current I_e. Equivalently, we can calculate the interspike-interval firing rate of the neuron:
$$r_{isi} = \frac{1}{t_{isi}} = \left[\tau_m\log\left(\frac{R_m I_e + E_L - V_r}{R_m I_e + E_L - V_{th}}\right)\right]^{-1}$$
whenever R_m I_e > V_th - E_L, and r_isi = 0 otherwise. For sufficiently large values of I_e (i.e. R_m I_e ≫ V_th - E_L), we can use the linear approximation of the logarithm (log(1 + z) ≈ z) to see that
$$r_{isi} \approx \frac{R_m I_e + E_L - V_{th}}{\tau_m(V_{th} - V_r)}$$
which shows that the firing rate grows linearly with I_e for large I_e.
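A minimal leaky integrate-and-fire sketch, which also checks the simulated interspike interval against the analytic t_isi above; the parameter values are illustrative assumptions:

```python
import math

# Illustrative LIF parameters (assumed values): ms, mV, MOhm, nA.
tau_m, E_L, V_th, V_r, R_m = 10.0, -70.0, -54.0, -80.0, 10.0
I_e = 2.0                                         # so R_m * I_e = 20 mV

dt, V, spikes = 0.01, V_r, []
for step in range(int(200 / dt)):                 # 200 ms of simulated time
    V += dt * (E_L - V + R_m * I_e) / tau_m       # tau_m dV/dt = E_L - V + R_m I_e
    if V >= V_th:                                 # threshold crossing: spike and reset
        spikes.append(step * dt)
        V = V_r

t_isi = tau_m * math.log((R_m * I_e + E_L - V_r) / (R_m * I_e + E_L - V_th))
print(f"simulated ISI ~ {spikes[1] - spikes[0]:.2f} ms, analytic {t_isi:.2f} ms")
```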

Real neurons exhibit spike-rate adaptation: t_isi lengthens over time when a constant current is injected into the cell, before settling to a steady-state value. So far, our passive integrate-and-fire model has been based on two separate approximations: a highly simplified description of the action potential, and a linear approximation for the total membrane current. We will keep the first assumption, but we can model the membrane current in as much detail as necessary. We can model spike-rate adaptation by including an additional current in the model:
$$\tau_m\frac{dV}{dt} = E_l - V - r_m g_{sra}(V - E_K) + \underbrace{R_m I_e}_{\text{inputs}}$$
where E_K is below both E_L and V_r, and g_sra is the spike-rate adaptation conductance. It is modelled as a K+ conductance, so when activated it hyperpolarizes the neuron, i.e. moves it away from firing. We assume that g_sra relaxes exponentially to 0 with a time constant τ_sra,
$$\tau_{sra}\frac{dg_{sra}}{dt} = -g_{sra}$$
and that g_sra is incremented each time the neuron fires. Clearly, a non-zero g_sra changes the equilibrium potential:
$$\tau_m\frac{dV}{dt} = E_L + r_m g_{sra}E_K - (1 + r_m g_{sra})V + \text{inputs}$$
$$\frac{\tau_m}{1 + r_m g_{sra}}\frac{dV}{dt} = \frac{E_L + r_m g_{sra}E_K}{1 + r_m g_{sra}} - V + \text{reduced inputs}$$
The refractory effect is not included in the basic integrate-and-fire model. Refractoriness can be incorporated by adding a conductance similar to the g_sra described above, but with a much smaller τ and a much larger conductance increment Δg.
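A sketch of the adapting integrate-and-fire neuron, extending the previous loop; the adaptation increment and time constant are assumed illustrative values:

```python
# Adapting LIF sketch; parameters as in the previous sketch, plus an assumed
# adaptation step and time constant. Here g stands for r_m * g_sra.
tau_m, E_L, V_th, V_r, R_m, I_e = 10.0, -70.0, -54.0, -80.0, 10.0, 2.0
E_K, tau_sra, dg = -80.0, 100.0, 0.04

dt, V, g, spikes = 0.01, V_r, 0.0, []
for step in range(int(500 / dt)):
    # tau_m dV/dt = E_L - V - r_m g_sra (V - E_K) + R_m I_e
    V += dt * (E_L - V - g * (V - E_K) + R_m * I_e) / tau_m
    g += dt * (-g / tau_sra)                  # tau_sra dg/dt = -g
    if V >= V_th:
        spikes.append(step * dt)
        V, g = V_r, g + dg                    # reset, increment adaptation

isis = [b - a for a, b in zip(spikes, spikes[1:])]
print("ISIs (ms), lengthening then settling:", [round(i, 1) for i in isis[:6]])
```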

2.7 Synapses

Synaptic transmission begins when a spike invades the pre-synaptic terminal and activates Ca2+ channels, leading to a rise in the concentration of Ca2+. Ca2+ enters the bouton, and the vesicles migrate to the cell membrane and fuse. They then burst, releasing neurotransmitter into the cleft. The transmitter diffuses across the cleft and binds to receptors on the post-synaptic neuron, leading to the opening of ion channels that modify the conductance of the post-synaptic neuron. The neurotransmitter is then reabsorbed and the channels close again. We want to model this mathematically.

A Ligand-Gated Channel is one which opens or closes in response to the binding of a neurotransmitter to a receptor. There are two broad classes of synaptic conductances:

- Ionotropic Gates: the neurotransmitter binds directly to the gate (fast, simple).
- Metabotropic Gates: the neurotransmitter binds to receptors that are not on the gate, but the binding initiates a biochemical process that opens the gate and has other effects.

The two major neurotransmitters found in the brain are:

- Glutamate: an excitatory transmitter. The principal ionotropic receptors are AMPA and NMDA.
- GABA (gamma-aminobutyric acid): an inhibitory transmitter.

A synapse will release either glutamate or GABA, and then has a mixture of the corresponding gates. As with a voltage-dependent conductance, a synaptic conductance can be written as the product of a maximal conductance and an open-channel probability:
$$g_s = \bar{g}_s P$$
where P is the probability that an individual gate is open. P can be expressed as a product of two terms that reflect processes occurring on the pre- and post-synaptic sides of the synapse:
$$P = P_r P_s$$
where P_r is the probability that transmitter is released by the pre-synaptic terminal following the arrival of an action potential, and P_s is the probability that a post-synaptic channel is open. P_r varies to take account of vesicle depletion; here, we let P_r = 1.

2.8 Post-Synaptic Conductances

In a simple model of a directly activated receptor channel, the transmitter interacts with the channel through a binding reaction in which k transmitter molecules bind to a closed receptor and open it; in the reverse reaction, the transmitter molecules unbind from the receptor and it closes. This is modelled by
$$\frac{dP_s}{dt} = \alpha_s(1 - P_s) - \beta_s P_s$$
where β_s is the constant closing rate and α_s is the opening rate, which depends on the concentration of transmitter available for binding, i.e. α_s depends on the chance that a neurotransmitter molecule is close enough to a receptor to bind. We will assume that α_s ≫ β_s, so we can ignore β_s in our initial calculations. When an action potential invades the pre-synaptic terminal, the transmitter concentration rises and α_s grows rapidly, causing P_s to increase. P_s rises towards 1 with a time-scale τ_α = 1/α_s. Assume that the spike arrives at t = 0 and that transmitter remains present for t ∈ [0, T]. Then we have
$$P_s(t) = 1 + (P_s(0) - 1)\exp\left(-\frac{t}{\tau_\alpha}\right)$$
If P_s(0) = 0, then
$$P_s(t) = 1 - \exp\left(-\frac{t}{\tau_\alpha}\right)$$

so the largest change in P_s occurs in this case. Following the release of the transmitter, the transmitter concentration falls rapidly. This sets α_s = 0, and P_s then decays exponentially with timescale τ_β = 1/β_s. Typically τ_β ≫ τ_α. The open probability takes its maximum value at t = T, and then for t ≥ T decays exponentially at a rate determined by β_s:
$$P_s(t) = P_s(T)\exp(-\beta_s(t - T))$$
If P_s(0) = 0 (as it will be if there is no synaptic release immediately before the release at t = 0), the maximum value for P_s is
$$P_{max} = 1 - \exp\left(-\frac{T}{\tau_\alpha}\right)$$
This gives us, from before,
$$P_s(T) = P_s(0) + P_{max}(1 - P_s(0))$$
i.e. if a spike arrives at time t, then
$$P_s(t + T) = P_s(t) + \Delta P_s, \qquad \Delta P_s = P_{max}(1 - P_s(t))$$
One simple model leaves out the T-scale dynamics altogether, keeping only the decay
$$\tau_s\frac{dP_s}{dt} = -P_s \qquad (\text{note: } \tau_s = \tau_\beta)$$
together with the update rule above whenever a spike arrives. The model discussed above, i.e.
$$P_s(t) = \begin{cases} 1 + (P_s(0) - 1)\exp(-t/\tau_\alpha) & t \in [0, T] \\ P_s(T)\exp(-\beta_s(t - T)) & t \geq T \end{cases}$$
can be used to describe synapses with slower rise times, but there are many other models. One way of describing both the rise and fall of a synaptic conductance is to express P_s as the difference of two exponentials:
$$P_s(t) = B\,P_{max}\left(\exp\left(-\frac{t}{\tau_1}\right) - \exp\left(-\frac{t}{\tau_2}\right)\right)$$
where τ_1 and τ_2 are the two time-scales of the response to a spike arriving when P_s = 0, and B is a normalization constant. This model allows for a smooth rise as well as a smooth fall. Another popular synaptic response is given by the α-function
$$P_s(t) = \frac{P_{max}\,t}{\tau_s}\exp\left(1 - \frac{t}{\tau_s}\right)$$
This response starts at 0, reaches its peak value at t = τ_s and decays with time constant τ_s. It is favoured for its simplicity and because it somewhat resembles the actual conductance response, albeit with too slow a rise. As with the previous model, to implement it properly it should be understood as the solution to a differential equation.
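A small sketch comparing these two response shapes for a single spike at t = 0 with P_s(0) = 0; the time constants are illustrative assumptions:

```python
import math

tau_s, tau_1, tau_2, P_max = 5.0, 5.0, 1.0, 1.0   # ms; assumed values

def alpha_fn(t):
    """Alpha-function response: peaks at t = tau_s with value P_max."""
    return P_max * (t / tau_s) * math.exp(1 - t / tau_s) if t > 0 else 0.0

# Normalization B chosen so the double exponential also peaks at P_max.
t_peak = (tau_1 * tau_2 / (tau_1 - tau_2)) * math.log(tau_1 / tau_2)
B = 1.0 / (math.exp(-t_peak / tau_1) - math.exp(-t_peak / tau_2))

def double_exp(t):
    """Difference of exponentials, with rise time tau_2 and decay time tau_1."""
    return B * P_max * (math.exp(-t / tau_1) - math.exp(-t / tau_2)) if t > 0 else 0.0

for t in [0.0, 1.0, 2.0, 5.0, 10.0, 20.0]:
    print(f"t = {t:5.1f} ms: alpha = {alpha_fn(t):.3f}, double exp = {double_exp(t):.3f}")
```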

Chapter 3: Coding

3.1 Spike Trains

A Spike Train is a series of spike times, and is the result of an extracellular recording. It is believed that the spike times are the information-carrying component of spike trains. If we ignore the brief duration of an action potential, a train of n spikes at times t_i, for i = 1, ..., n, in a trial starting at time 0 and ending at time T, can be described by the neural response function
$$\rho(t) = \sum_{i=1}^{n}\delta(t - t_i)$$
The spike count is
$$n = \int_0^T \rho(\tau)\,d\tau$$
and we denote the spike-count rate by r, given by
$$r = \frac{n}{T} = \frac{1}{T}\int_0^T \rho(\tau)\,d\tau$$
The next step is to discretize time and produce a histogram: divide [0, T] into subintervals of the form [nδt, (n+1)δt] and, for t ∈ [nδt, (n+1)δt], define
$$r(t) = \frac{1}{\delta t}\int_{n\delta t}^{(n+1)\delta t}\rho(\tau)\,d\tau$$
so that r(t)δt is the number of spikes in the corresponding interval. We repeat over numerous trials, so that the firing rate is the average
$$r(t) = \frac{1}{\delta t}\left\langle\int_{n\delta t}^{(n+1)\delta t}\rho(\tau)\,d\tau\right\rangle_{\text{trials}}$$
A more sophisticated point of view is to use a moving window:
$$r(t) = \frac{1}{\delta t}\int_{t - \delta t/2}^{t + \delta t/2}\rho(\tau)\,d\tau$$

which gives a histogram without the rigid discretisation. Again, with multiple trials, the average is
$$r(t) = \frac{1}{\delta t}\left\langle\int_{t-\delta t/2}^{t+\delta t/2}\rho(\tau)\,d\tau\right\rangle_{\text{trials}}$$
Thus r(t)δt is the number of spikes in [t - δt/2, t + δt/2], and if you average over trials, r(t)δt is the average number of spikes that fall in that interval. We regard a neuron as having an underlying firing rate
$$r(t) = \lim_{\#\text{trials}\to\infty}\frac{1}{\delta t}\left\langle\int_{t-\delta t/2}^{t+\delta t/2}\rho(\tau)\,d\tau\right\rangle_{\text{trials}}$$
which may be approximated by the estimates above. The firing rate for a set of repeated trials at a resolution Δt is defined as
$$r(t) = \frac{1}{\Delta t}\int_{t}^{t+\Delta t}\langle\rho(\tau)\rangle\,d\tau$$
so r(t)Δt is the fraction of trials on which a spike occurred in [t, t + Δt). Note:

- r is the spike-count rate,
- r(t) is the firing rate,
- ⟨r⟩ is the average firing rate, equal to ⟨n⟩/T = (1/T)∫₀^T ⟨ρ(τ)⟩ dτ.

In practice, the firing rate is something we calculate from a finite number of trials, and what matters is the usefulness of a given prescription for calculating the firing rate, in terms of how well it can be modelled. The basic point is that since spike trains are so variable, they do not by themselves give us a good way to describe the response. The firing rate is a way of producing a smoothed, averaged quantity which can easily fit into models and be compared to experiments.

A simple way of extracting an estimate of the firing rate from a spike train is to divide time into discrete bins of duration Δt, count the number of spikes within each bin and divide by Δt. Equivalently,
$$r_{approx}(t) = \sum_{i=1}^{n}w(t - t_i)$$
where w is the window function defined by
$$w(t) = \begin{cases} 1/\Delta t & t \in [-\Delta t/2, \Delta t/2] \\ 0 & \text{otherwise}\end{cases}$$
Alternatively, we have
$$r_{approx}(t) = \int_{-\infty}^{\infty}w(\tau)\rho(t - \tau)\,d\tau = (w * \rho)(t)$$
This integral is called a Linear Filter, and the window function (also called the Filter Kernel) specifies how the neural response function evaluated at time t - τ contributes to the firing rate approximated at time t.

This use of a sliding window avoids the arbitrariness of the bin placement and produces a rate that might appear to have better temporal resolution. It is also common to replace the rectangular filter kernel with a smoother function, like a Gaussian:
$$w(t) = \frac{1}{\sqrt{2\pi}\,\sigma_w}\exp\left(-\frac{t^2}{2\sigma_w^2}\right)$$
In such a filter calculation, the choice of filter forms part of the prescription. Other choices include the causal kernels
$$w(t) = \begin{cases} \frac{1}{\Delta t}e^{-t/\Delta t} & t > 0 \\ 0 & \text{otherwise}\end{cases}
\qquad\text{and}\qquad
w(t) = \begin{cases} \alpha^2 t\,e^{-\alpha t} & t > 0 \\ 0 & \text{otherwise}\end{cases}$$
There is no experimental evidence to show that any given filter is better than another, nor is there any derivation from first principles. The choice of Δt or σ_w does matter; it is usually chosen by validating against the data.
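A sketch of these kernel estimates applied to a synthetic spike train; the spike times and kernel widths are invented for illustration:

```python
import math

spike_times = [12.0, 15.5, 17.0, 42.3, 44.1, 45.0, 47.2, 80.4]   # ms, invented

def w_gauss(t, sigma):
    """Gaussian filter kernel with width sigma."""
    return math.exp(-t**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

def w_alpha(t, alpha):
    """Causal alpha kernel: only spikes in the past contribute."""
    return alpha**2 * t * math.exp(-alpha * t) if t > 0 else 0.0

def r_approx(t, kernel):
    """Firing-rate estimate r_approx(t) = sum_i w(t - t_i), in spikes/ms."""
    return sum(kernel(t - t_i) for t_i in spike_times)

for t in [10.0, 20.0, 45.0, 60.0]:
    g = r_approx(t, lambda u: w_gauss(u, sigma=3.0))
    a = r_approx(t, lambda u: w_alpha(u, alpha=0.3))
    print(f"t = {t:5.1f} ms: Gaussian {1000 * g:.1f} Hz, alpha {1000 * a:.1f} Hz")
```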

3.2 Tuning Curves

Neuronal responses typically depend on many different properties of the stimulus. A simple way of characterizing the response of a neuron is to count the number of action potentials fired during the presentation of a stimulus, repeat over many trials, and average. A Tuning Curve is the graph of the average firing rate against some experimental parameter of the stimulus. Response tuning curves characterize the average response of a neuron to a given stimulus. We now consider the complementary procedure: averaging the stimuli that produce a given response. The resulting quantity, called the spike-triggered average stimulus, provides a useful way of characterizing neuronal selectivity. Spike-triggered averages are computed using stimuli characterized by a parameter s(t) that varies over time.

3.3 Spike-Triggered Averages

This is another way of describing the relationship between stimulus and response, and it leads directly to the linear models of the next section. The Spike-Triggered Average stimulus (STA), denoted C(τ), is the average value of the stimulus at a time interval τ before a spike is fired:
$$C(\tau) = \left\langle\frac{1}{n}\sum_{i=1}^{n}s(t_i - \tau)\right\rangle$$
In other words, for a spike occurring at time t_i, we determine s(t_i - τ), sum over all n spikes in a trial, divide the total by n, and in addition average over trials. Approximating the sum over spikes by an integral against the response function,
$$C(\tau) = \left\langle\frac{1}{n}\sum_{i=1}^{n}s(t_i - \tau)\right\rangle
\approx \frac{1}{\langle n\rangle}\left\langle\int_0^T \rho(t)s(t - \tau)\,dt\right\rangle
= \frac{1}{\langle n\rangle}\int_0^T \langle\rho(t)\rangle s(t - \tau)\,dt
= \frac{1}{\langle n\rangle}\int_0^T r(t)s(t - \tau)\,dt$$
which is a stimulus-response correlation. Correlation functions are a useful way of determining how two quantities that vary over time are related to each other. The correlation function of the firing rate and the stimulus is
$$Q_{rs}(\tau) = \frac{1}{T}\int_0^T r(t)s(t + \tau)\,dt$$
From this we can see that
$$C(\tau) = \frac{1}{\langle r\rangle}Q_{rs}(-\tau)$$
Because the argument of this correlation function is -τ, the STA stimulus is often called the reverse correlation function. The STA stimulus is widely used to study and characterize neural responses. Because C(τ) is the average value of the stimulus at a time τ before a spike, larger values of τ represent times further in the past relative to the triggering spike. For this reason, we plot the STA with the time axis going backwards compared to the normal convention; this allows the average spike-triggered stimulus to be read off from the plots in the usual left-to-right order.

The results obtained by spike-triggered averaging depend on the particular set of stimuli used during an experiment. There are certain advantages to using a stimulus that is uncorrelated from one time to the next, e.g. a white-noise stimulus. This condition can be expressed using the stimulus-stimulus correlation function:
$$Q_{ss}(\tau) = \frac{1}{T}\int_0^T s(t)s(t + \tau)\,dt$$
For a white-noise stimulus, Q_ss(τ) is proportional to δ(τ), and by causality you would expect that for negative values of τ we have C(τ) = 0.
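A minimal sketch of computing an STA from a spike train and a sampled stimulus; the data here are synthetic (in a real analysis ρ and s come from experiment):

```python
import random

dt, T = 1.0, 10000.0                 # ms; sampling step and trial length
nbins = int(T / dt)
s = [random.gauss(0.0, 1.0) for _ in range(nbins)]   # white-noise stimulus

# Toy response: a spike is more likely when the stimulus 5 ms ago was positive.
spikes = [b for b in range(10, nbins)
          if random.random() < 0.01 * max(0.0, s[b - 5])]

def sta(lag_bins):
    """C(tau) = average of s(t_i - tau) over spikes, tau = lag_bins * dt."""
    valid = [t for t in spikes if t - lag_bins >= 0]
    return sum(s[t - lag_bins] for t in valid) / len(valid)

for lag in range(10):
    print(f"tau = {lag * dt:4.1f} ms: C = {sta(lag):+.3f}")   # peak near tau = 5 ms
```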

3.4 Linear Models

From before, we have
$$C(\tau) = \frac{1}{\langle r\rangle}Q_{rs}(-\tau)$$
so C(τ) depends only on Q_rs. A better description would take Q_ss into account: from the formula for Q_rs alone, we cannot be sure whether a non-zero value reflects a direct statistical relationship between s(t) and r(t + τ) or, for example, one between s(t) and s(t + τ) combined with another between s(t + τ) and r(t + τ). Thus, the problems with the STA are:

1. no accounting for Q_ss,
2. it only depends on second-order statistics,
3. no accounting for spike-spike effects.

Linear Models can solve the first problem, but not the other two. We consider:
$$r_{est}(t) = r_0 + \int_0^\infty D(\tau)s(t - \tau)\,d\tau$$
where r_0 is a constant which accounts for any background firing when s = 0, and D(τ) is a weighting factor that determines how strongly, and with what sign, the value of s(t - τ) affects the firing rate at time t. The integral in this equation is a linear filter of the same form as those defined before. In the linear model, a neuron has a kernel associated with it, and the predicted firing rate is the convolution of the kernel and the stimulus. We can think of this equation as the first two terms in a Volterra Expansion, the functional equivalent of the Taylor series expansion used to generate power-series approximations of functions:
$$r_{est}(t) = r_0 + \int D_1(\tau)s(t - \tau)\,d\tau + \iint D_2(\tau_1, \tau_2)s(t - \tau_1)s(t - \tau_2)\,d\tau_1\,d\tau_2 + \iiint D_3(\tau_1, \tau_2, \tau_3)s(t - \tau_1)s(t - \tau_2)s(t - \tau_3)\,d\tau_1\,d\tau_2\,d\tau_3 + \cdots$$
The question now is what D(τ) is and how to calculate it. The standard method is reverse correlation. Without loss of generality, we absorb r_0 into r_est and r, i.e. we consider r_est - r_0 and r - r_0, so that we may take r_0 = 0. We wish to choose the kernel D to minimize the squared difference between the estimated response to a stimulus and the actual measured response, averaged over the duration of the trial T:
$$\epsilon = \frac{1}{T}\int_0^T \left(r_{est}(t) - r(t)\right)^2\,dt$$
This is called the Objective Function. To optimize this, we want δε/δD(τ) = 0, which is a problem in the calculus of variations. Instead, we phrase the problem as a simple variation: we send D(τ) to D(τ) + δD(τ) and calculate the corresponding variation in ε. Let ε' be the new error under this change:
$$\epsilon' = \frac{1}{T}\int_0^T \left(r^2 - 2r\,r'_{est} + (r'_{est})^2\right)dt$$

where the prime denotes the new estimate, not a derivative. The new estimate is
$$r'_{est}(t) = \int_0^\infty \left(D(\tau) + \delta D(\tau)\right)s(t - \tau)\,d\tau = r_{est}(t) + \int_0^\infty \delta D(\tau)s(t - \tau)\,d\tau$$
If we let ε' = ε + δε, we have
$$\delta\epsilon = \frac{1}{T}\int_0^T \left(r^2 - 2r\,r'_{est} + (r'_{est})^2\right)dt - \frac{1}{T}\int_0^T \left(r^2 - 2r\,r_{est} + r_{est}^2\right)dt
= \frac{2}{T}\int_0^T (r_{est}(t) - r(t))\int_0^\infty \delta D(\tau)s(t - \tau)\,d\tau\,dt + O(\delta D^2)
= \frac{2}{T}\int_0^\infty \delta D(\tau)\left[\int_0^T s(t - \tau)\left(r_{est}(t) - r(t)\right)dt\right]d\tau + O(\delta D^2)$$
where we have changed the order of integration. For the optimal D(τ) we need δε = 0 for every variation δD(τ), so we require
$$\int_0^T s(t - \tau)\left(r_{est}(t) - r(t)\right)dt = 0$$
which is an integral equation for D(τ). It says that
$$\int_0^T s(t - \tau)r(t)\,dt = \int_0^T s(t - \tau)r_{est}(t)\,dt$$
Now consider the right-hand side:
$$\int_0^T s(t - \tau)r_{est}(t)\,dt = \int_0^T s(t - \tau)\int_0^\infty D(\sigma)s(t - \sigma)\,d\sigma\,dt = \int_0^\infty D(\sigma)\int_0^T s(t - \tau)s(t - \sigma)\,dt\,d\sigma$$
Letting t' = t - σ in the inner integral, we have
$$\int_0^T s(t - \tau)s(t - \sigma)\,dt = \int s(t' + \sigma - \tau)s(t')\,dt' = T\,Q_{ss}(\sigma - \tau)$$
Recalling that
$$Q_{rs}(-\tau) = \frac{1}{T}\int_0^T r(t)s(t - \tau)\,dt$$

we conclude that
$$Q_{rs}(-\tau) = \int_0^\infty D(\sigma)Q_{ss}(\tau - \sigma)\,d\sigma = (D * Q_{ss})(\tau)$$
using the fact that (it can be shown) Q_ss is an even function of τ. This method is known as reverse correlation because the firing rate-stimulus correlation function is evaluated at -τ in this equation.

What happens if the stimulus is white noise, i.e. if knowing s(t) tells you nothing about s(t + τ) for τ ≠ 0? Then
$$Q_{ss}(\tau) = \sigma^2\delta(\tau)$$
where σ² is the variance of s(t) at a point. Substituting this into the above equation, we have
$$Q_{rs}(-\tau) = (D * Q_{ss})(\tau) = \int Q_{ss}(\tau - \tau')D(\tau')\,d\tau' = \sigma^2\int \delta(\tau - \tau')D(\tau')\,d\tau' = \sigma^2 D(\tau)$$
whence we conclude
$$D(\tau) = \frac{1}{\sigma^2}Q_{rs}(-\tau)$$
Previously, we saw that the spike-triggered average satisfies C(τ) = Q_rs(-τ)/⟨r⟩, so
$$D(\tau) = \frac{\langle r\rangle\,C(\tau)}{\sigma^2}$$
Thus, the linear kernel is proportional to the STA for a white-noise stimulus. We can also think of the linear kernel as encoding information about how the neuron responds to the stimulus in a way that separates the response from the structure of the stimulus. This makes it useful in situations where we need to use a highly structured stimulus to study the sort of behaviour the neuron shows when performing its computational tasks.

Calculating D(τ) can be tricky. At its simplest, we use the Fourier Transform, recalling that
$$\mathcal{F}[f * g] = \mathcal{F}[f]\,\mathcal{F}[g]$$
So we have
$$\mathcal{F}[Q_{rs}(-\tau)] = \mathcal{F}[Q_{ss}(\tau)]\,\mathcal{F}[D(\tau)]$$
giving
$$D(\tau) = \mathcal{F}^{-1}\left[\frac{\mathcal{F}[Q_{rs}(-\tau)]}{\mathcal{F}[Q_{ss}(\tau)]}\right]$$
However, this will not always work, as our convolution is not quite a true convolution (the integral over σ runs only over [0, ∞)). Also, F[Q_ss(τ)] is sometimes quite small, so the division can give rise to large errors.

Another approach is to rewrite the equation as a matrix equation by discretizing time. We write
$$Q_{rs}(-n\,\delta\tau) = Q^{rs}_n \;(\text{a vector}), \qquad D_n = D(n\,\delta\tau), \qquad Q^{ss}_{nn'} = Q_{ss}((n - n')\,\delta\tau)$$
and we can see that
$$Q^{rs}_n = \sum_{n'}Q^{ss}_{nn'}D_{n'}$$
so
$$D_n = \sum_{n'}\left(Q^{ss}\right)^{-1}_{nn'}Q^{rs}_{n'}$$
It turns out that Q^ss is always invertible, but often some of its eigenvalues are small, and these dominate the inverse matrix.
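A sketch of the discretized estimate D = (Q^ss)^(-1) Q^rs using numpy; the data are synthetic (a real analysis would estimate both correlation functions from recordings, and would usually regularize the small eigenvalues mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
nbins, lags = 5000, 30
s = rng.normal(0.0, 1.0, nbins)                  # white-noise stimulus samples

D_true = np.exp(-np.arange(lags) / 5.0)          # hidden kernel to recover
r = np.convolve(s, D_true)[:nbins]               # linear response, r_0 = 0

# Correlation functions estimated from the data.
Q_rs = np.array([np.dot(r[lags:], s[lags - k:nbins - k]) / nbins
                 for k in range(lags)])          # Q_rs(-k * dt)
Q_ss = np.array([[np.dot(s[lags:], s[lags - abs(i - j):nbins - abs(i - j)]) / nbins
                  for j in range(lags)] for i in range(lags)])

D_est = np.linalg.solve(Q_ss, Q_rs)              # D = (Q_ss)^(-1) Q_rs
print("first kernel values:", np.round(D_est[:5], 2))   # close to exp(-k/5)
```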

3.5 Problems with the Linear Model

The linear model has the following failings:

1. The objective function ε is not chosen from first principles; there is a subtle model dependence in what we did.
2. The choice of stimulus ensemble s(t) introduces more model dependence in real applications.
3. There are no spikes. Ideally, we would take a stimulus, estimate the spike train it evokes under a given model, and compare that to the experimentally determined spike train. Here, we estimate the firing rate based on our model and compare it to the firing rate of the spike train determined by experiment.

We consider the following questions:

1. Are there better models that produce spikes instead of firing rates?
2. Alternatively, can we supplement a firing-rate model with a model that gives spikes?
3. How does a spike model relate to the definition of ε?

3.6 Rate-Based Spiking

The idea is that the probability of a spike depends only on the current value of the firing rate. This gives us a Poisson process. For small time intervals δt, the probability of a spike in [t, t + δt] is
$$P(\text{spike in }[t, t + \delta t]) = r(t)\,\delta t \qquad \text{as } \delta t \to 0$$
We are interested in the probability of a given spike train, so
$$P(t_1^1 < t_1 < t_1^2,\; t_2^1 < t_2 < t_2^2,\; \ldots,\; t_n^1 < t_n < t_n^2) = \int_{t_1^1}^{t_1^2}\int_{t_2^1}^{t_2^2}\cdots\int_{t_n^1}^{t_n^2}P[t_1, t_2, \ldots, t_n]\,dt_n\cdots dt_2\,dt_1$$
Here, P[t_1, t_2, ..., t_n] is the probability density for the spike train. Note that if we have n spikes, the probability density for those spikes occurring at the ordered times (t_1, t_2, ..., t_n) is the sum of the densities for the spikes occurring at (t_σ(1), t_σ(2), ..., t_σ(n)), where σ ranges over the permutations of {1, 2, ..., n}. For a homogeneous process, each spike has a uniform distribution over [0, T], so we get
$$P[t_1, t_2, \ldots, t_n] = \frac{n!}{T^n}P[n]$$
where P[n] is the probability that exactly n spikes occur in the trial. To calculate P[n], we divide the interval into M subintervals of width Δt = T/M. We can assume that Δt is sufficiently small that we never get two spikes within any one subinterval, because at the end of the calculation we take Δt → 0. The probability of a spike occurring in one specific subinterval is rΔt, and the probability of n spikes occurring in n given subintervals is (rΔt)^n. Similarly, the probability that a spike does not occur in a subinterval is (1 - rΔt), so the probability of having the remaining M - n subintervals without spikes is (1 - rΔt)^{M-n}. Finally, the number of ways of putting n spikes into M subintervals is
$$\binom{M}{n} = \frac{M!}{n!\,(M - n)!}$$
This gives us
$$P[n] = \lim_{M\to\infty}\binom{M}{n}\left(\frac{rT}{M}\right)^n\left(1 - \frac{rT}{M}\right)^{M-n}$$
To take the limit, we note that as Δt → 0, M grows without bound because MΔt = T. Because n is fixed, we can write M - n ≈ M = T/Δt. Using this approximation and defining ε = -rΔt, we find that
$$\lim_{\Delta t\to 0}\left(1 - r\Delta t\right)^{M-n} = \lim_{\varepsilon\to 0}\left(\left(1 + \varepsilon\right)^{1/\varepsilon}\right)^{-rT} = e^{-rT}$$
Also, for large enough M, M!/(M - n)! ≈ M^n, so we have
$$P[n] = \frac{(rT)^n}{n!}e^{-rT}$$
which is the Poisson Distribution.
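A quick sketch generating homogeneous Poisson spike trains and checking the count statistics; the rate and trial numbers are arbitrary choices:

```python
import random

r, T, trials = 0.02, 1000.0, 2000    # spikes/ms, ms, number of trials

def poisson_train(rate, duration):
    """Draw spike times with exponentially distributed inter-spike intervals."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > duration:
            return times
        times.append(t)

counts = [len(poisson_train(r, T)) for _ in range(trials)]
mean = sum(counts) / trials
var = sum((c - mean)**2 for c in counts) / trials
# Variance/mean ratio is the Fano factor discussed below; should be near 1.
print(f"mean = {mean:.1f} (rT = {r * T:.0f}), variance/mean = {var / mean:.2f}")
```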

We can compute the mean and variance of this distribution. The mean is
$$\langle n\rangle = \sum_{n=0}^{\infty}nP[n] = \sum_{n=1}^{\infty}n\frac{(rT)^n}{n!}e^{-rT} = rT\,e^{-rT}\sum_{n=1}^{\infty}\frac{(rT)^{n-1}}{(n-1)!} = rT\,e^{-rT}\sum_{m=0}^{\infty}\frac{(rT)^m}{m!} = rT$$
For the second moment, note that
$$\langle n^2\rangle = \sum_{n=0}^{\infty}n^2P[n] = \sum_{n=2}^{\infty}n(n-1)P[n] + \sum_{n=0}^{\infty}nP[n] = (rT)^2e^{-rT}\sum_{n=2}^{\infty}\frac{(rT)^{n-2}}{(n-2)!} + rT = (rT)^2 + rT$$
so the variance is given by
$$\sigma_n^2 = \langle n^2\rangle - \langle n\rangle^2 = rT$$
In general, the ratio of the variance to the mean is known as the Fano Factor, F = σ_n²/⟨n⟩. For a homogeneous Poisson process, F = 1. However, even with a homogeneous stimulus, the Fano factor for real spiking is usually greater than 1. We have the following considerations:

- Homogeneous Poisson spiking does not fully describe real spike trains. More interesting evidence is provided by the distribution of the inter-spike intervals. The probability density of the time interval between adjacent spikes is called the inter-spike interval distribution, and it is a useful statistic for characterizing spiking patterns. Let t_i be the time between adjacent spikes; an argument similar to the previous one shows that
$$P(t_i) = r\,e^{-r t_i}$$
- Neuronal spiking is clearly not Poisson: for a start, there is the refractory period. Even if it were Poisson, it is unlikely that it would be homogeneous.

In the inhomogeneous Poisson process, the firing rate is not constant, but the probability of getting a spike still depends only on the current value of the firing rate r(t). We need a formula for P[t_1, t_2, ..., t_n]. Consider the time between two spikes at t_i and t_{i+1}, and divide it into M subintervals of width
$$\Delta t = \frac{t_{i+1} - t_i}{M}$$
Then
$$P[\text{no spike in }(t_i, t_{i+1})] = \prod_{m=1}^{M}\left(1 - r(t_m)\Delta t\right)$$
where r(t_m)Δt is the probability of a spike in the m-th subinterval. The trick is to take logarithms, so
$$\log P[\text{no spike}] = \sum_{m=1}^{M}\log\left(1 - r(t_m)\Delta t\right) \approx -\sum_{m=1}^{M}r(t_m)\Delta t$$

recalling that log(1 + z) ≈ z for small enough z. Assuming that r is well behaved, in the limit Δt → 0 we have
$$\log P[\text{no spike}] = -\int_{t_i}^{t_{i+1}}r(t)\,dt
\quad\Rightarrow\quad
P[\text{no spike}] = \exp\left(-\int_{t_i}^{t_{i+1}}r(t)\,dt\right)$$
Combining the no-spike factors between spikes with the densities r(t_i) for the spikes themselves, we have
$$P[t_1, t_2, \ldots, t_n] = \prod_{i=1}^{n}r(t_i)\exp\left(-\int_{t_i}^{t_{i+1}}r(t)\,dt\right) = \exp\left(-\int_0^T r(t)\,dt\right)\prod_{i=1}^{n}r(t_i)$$
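A sketch of sampling an inhomogeneous Poisson spike train by thinning (rejection sampling), a standard method though not one described in the notes; the rate function here is an arbitrary example:

```python
import math
import random

def rate(t):
    """Example time-varying rate in spikes/ms: a slow sinusoidal modulation."""
    return 0.02 * (1 + math.sin(2 * math.pi * t / 200.0))

def inhomogeneous_poisson(r_of_t, r_max, duration):
    """Thinning: draw from a homogeneous process at r_max, then keep each
    candidate spike at time t with probability r_of_t(t) / r_max."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(r_max)
        if t > duration:
            return times
        if random.random() < r_of_t(t) / r_max:
            times.append(t)

spikes = inhomogeneous_poisson(rate, r_max=0.04, duration=1000.0)
print(f"{len(spikes)} spikes; expected about 20 (the integral of the rate)")
```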