The variance of phase-resetting curves

G. Bard Ermentrout · Bryce Beverlin II · Todd Troyer · Theoden I. Netoff

Abstract Phase resetting curves (PRCs) provide a measure of the sensitivity of oscillators to perturbations. In a noisy environment, these curves are themselves very noisy. Using perturbation theory, we compute the mean and the variance of PRCs for arbitrary limit cycle oscillators when the noise is small. Phase resetting curves and phase dependent variances are fit to experimental data, and the variance is also estimated using an ad-hoc method. The theoretical curves of the phase dependent method match both simulations and experimental data significantly better than the ad-hoc method. A dual cell network simulation is compared to predictions made using the analytical phase dependent variance estimation presented in this paper. We also discuss how the entrainment of a neuron to a periodic pulse depends on the noise amplitude.

Action Editor: Charles Wilson

G. B. Ermentrout, Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA (bard@pitt.edu)
B. Beverlin II, Department of Physics, University of Minnesota, Minneapolis, USA (beverlin@physics.umn.edu)
T. Troyer, Department of Biology, University of Texas at San Antonio, San Antonio, Texas, USA (todd.troyer@utsa.edu)
T. I. Netoff, Department of Biomedical Engineering, University of Minnesota, Minneapolis, USA (tnetoff@umn.edu)

Keywords Phase resetting · Noise · Neural oscillators · Variance · Synchrony

1 Introduction

Phase resetting curves (PRCs) have become increasingly popular as a tool to study the response properties of neural and other biological oscillators (Winfree 1967; Forger and Paydarfar 2004). The PRC quantifies how the timing of a perturbation shifts the timing of the rhythm, and PRCs have been widely studied in many biological systems (Ariaratnam and Strogatz 2001; Guevara and Glass 1982). The experimental measurement of these curves is associated with some degree of noisiness in the data (Reyes and Fetz 1993; Stoop et al. 2000; Galan et al. 2005; Netoff et al. 2005a). For example, if a neuron is injected with sufficient constant current to cause it to fire repetitively, the distribution of interspike intervals (ISIs) is often quite broad (Abouzeid and Ermentrout 2009). This means the measurement of phase can also be quite broad, thus producing a noisy PRC. Previously, we characterized this noise in experimentally measured PRCs using an ad-hoc fit function for the variance of the PRC and found the variance to be phase dependent (Netoff et al. 2005b). In a recent paper, Ermentrout and Saunders showed that phase-dependent noise may create noise-induced bifurcations in the PRC of a simple model neuron when compared to the deterministic system (Ermentrout and Saunders 2006). Furthermore, it has been shown that phase-dependent variance can affect the dynamics of coupled oscillators, in the sense that a stable synchronous state with a flat variance can be rendered unstable by applying a greater variance at zero phase (Ly and Ermentrout 2010).

In this note, we use perturbation methods to determine the phase dependence of the variance for arbitrary phase-resetting curves. We show that the variance is not a simple function of the PRC but, rather, a functional involving the PRC, its derivative, and the integral of its square. We fit PRCs and ad-hoc variance curves to experimental data obtained from hippocampal pyramidal neurons using a dynamic patch clamp technique. The individually fit PRCs are used to calculate the analytical variance functions presented in this paper. To demonstrate the effect of using the analytical phase dependent variance rather than an ad-hoc variance fit, we construct a dual cell network simulation. We find that the analytical variance predicts the synchrony of the dual cell network better than a flat (constant, phase independent) variance.

2 Derivation

The phase resetting curve (PRC) of an oscillator characterizes how the timing of the oscillation is shifted as a function of the timing of a perturbation. There are many different techniques for experimentally measuring PRCs (Torben-Nielsen et al. 2010); many of them involve defining the time of a spike to be the zero phase, applying a brief current pulse after a fixed time τ, and measuring how this shifts the time of the next event. If a current I₀ is injected for a short length of time w, then the injected charge is just wI₀ and the resulting voltage jumps by β = wI₀/C, where C is the capacitance. The PRC, P(β, τ), is parameterized by the stimulus magnitude and the time of the pulse and is defined by P(β, τ) = T − T̂(β, τ), where T is the natural period of the oscillator and T̂(β, τ) is the time of the next spike. The natural period T can be viewed as resulting from a stimulus of zero magnitude, T̂(0, τ) = T, so that P(0, τ) = 0. As long as P is sufficiently smooth, it follows for small β that P(β, τ) ≈ Δ(τ)β. The function Δ(τ), which arises in any theory of small perturbations of limit cycles, has units of msec/mV and is called the infinitesimal PRC, or iPRC. It should be noted that in experiments it may only be possible to inject current to perturb the cell, so the phase may not be manipulated directly. In the remainder of this paper, we assume that we are in the linear range and that the PRC is proportional to the iPRC. The iPRC can be estimated experimentally by letting the stimulus become an infinitesimal conductance (Preyer and Butera 2005) or an infinitesimal current (Netoff et al. 2005a); in the limit the two are equivalent (Achuthan and Canavier 2009). We will refer to the PRC when the stimulus is of finite amplitude and duration, and to Δ, the iPRC, when we do the derivation.

2.1 Perturbations and the distribution of phase

We start with an arbitrary differential equation

dX/dt = F(X)

and assume that there is an orbitally stable limit cycle solution, X₀(t), with period T. We define the phase of the oscillation, θ(t), as a circular variable representing how far the oscillation has progressed along its limit cycle, i.e. we write X(t) = X₀(θ(t)). The phase is often dimensionless and defined on [0, 1) or [0, 2π). We will view the phase as a time-like variable defined on the interval [0, T).
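The pulse protocol described at the start of this section can be made concrete with a short simulation. The following is a minimal sketch (our own illustration, not from the paper): it measures P(β, τ) by direct perturbation of a planar oscillator with a circular limit cycle of period T = 1, with the "spike" defined as the upward crossing of the y variable through zero. The oscillator and all parameter choices are assumptions made for the example.

```python
# Sketch: direct measurement of the PRC P(beta, tau) = T - T_hat(beta, tau)
# by pulse perturbation of a simple limit-cycle oscillator (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

def clock(t, s):
    # planar oscillator: stable circular limit cycle r = 1, period T = 1
    x, y = s
    r2 = x * x + y * y
    return [x * (1 - r2) - 2 * np.pi * y, y * (1 - r2) + 2 * np.pi * x]

def spike(t, s):            # "spike" = upward crossing of y through 0
    return s[1]
spike.terminal, spike.direction = True, 1

T, beta = 1.0, 0.05         # natural period; size of the instantaneous kick
s0 = [1.0, 0.0]             # phase zero: on the cycle at a spike

taus, prc = np.linspace(0.05, 0.9, 30) * T, []
for tau in taus:
    sol = solve_ivp(clock, (0, tau), s0, rtol=1e-8, atol=1e-10)
    s = sol.y[:, -1].copy()
    s[1] += beta            # brief current pulse ~ instantaneous voltage jump
    nxt = solve_ivp(clock, (0, 3 * T), s, events=spike, rtol=1e-8, atol=1e-10)
    T_hat = tau + nxt.t_events[0][0]
    prc.append(T - T_hat)   # for small beta, prc[i] ~ Delta(tau) * beta
```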
We now introduce a general time-dependent perturbation G(t, X(t)), which includes both the pulse stimulus delivered for computing the PRC and the background noise:

dX/dt = F(X(t)) + G(t, X(t)).   (1)

If G is sufficiently small, then we can retain phase coordinates (Kuramoto 1984), with X(t) = X₀(θ(t)) and

dθ/dt = 1 + Z(θ) · G(t, X₀(θ(t))).

Z(θ) is the solution to a certain linear differential equation (called the adjoint equation) and can easily be found numerically for any limit cycle. In general, the function G perturbs multiple components of the state vector X(t); each component of the vector-valued function Z(θ) represents the iPRC for perturbations applied to that component. For example, the voltage component of Z(θ) is equal to Δ(θ), the iPRC for voltage perturbations resulting from brief current pulses.

We can simplify the above phase model substantially if we consider instantaneous current pulses (Dirac delta functions) with a specified magnitude, β, and additive noise, εξ(t), applied only to the voltage equation, where ξ(t) is zero-mean white noise (the formal derivative of a Wiener process). The voltage perturbation can then be written

G_V(t, X₀(θ(t))) = εξ(t) + βδ(t − τ)

and

dθ/dt = 1 + [εξ(t) + βδ(t − τ)]Δ(θ) + O(ε²).   (2)

Higher order terms come from the Itô correction (e.g., in the case of white noise, (ε²/2)Δ(θ)Δ′(θ)) (Ito 1946; Kloeden and Platen 1992; Gardiner 2004). We assume that the noise amplitude ε is small, and formally expand θ(t) as a series in orders of ε: θ(t) = θ₀(t) + εθ₁(t) + ... Note that θ₀(t) represents the deterministic component of the phase and θ₁(t) is the noise-induced deviation. Substituting this into Eq. (2) and gathering terms to first order, we obtain:

dθ₀/dt = 1 + βδ(t − τ)Δ(θ₀)   (3)

dθ₁/dt = ξ(t)Δ(θ₀(t)) + βδ(t − τ)Δ′(θ₀(t))θ₁(t).   (4)

Here, we have approximated Δ(θ(t)) by Δ(θ₀(t) + εθ₁(t)) and expanded in a Taylor series to get Eq. (4). Integrating Eq. (3), we obtain

θ₀(t) = t + βH(t − τ)Δ(τ),   (5)

where the step function H(x) arises as the integral of the Dirac delta function. We next integrate to find θ₁(t):

θ₁(t) = ∫₀ᵗ ξ(s)Δ(θ₀(s)) ds + βΔ′(τ)H(t − τ)θ₁(τ⁻).

We use u(t⁻) to denote the limit of u(x) as x approaches t from below, and we have used the fact that θ₀(τ⁻) = τ. For t less than the time of the stimulus τ, θ₁(t) is just given by

θ₁(t) = ∫₀ᵗ ξ(s)Δ(s) ds.

This integral can be used to evaluate the term θ₁(τ⁻) in the second part of the equation. We can now write

θ₁(t) = ∫₀ᵗ ξ(s)Δ(θ₀(s)) ds + H(t − τ)βΔ′(τ) ∫₀^τ ξ(s)Δ(s) ds.   (6)

The first integral represents the integrated noise perturbations, and the last term represents the direct effect of the stimulus, which depends on the integral of the noise up to the time of the stimulus. For t > τ, we can evaluate the step functions and rewrite this as

θ₁(t) = (1 + βΔ′(τ)) ∫₀^τ ξ(s)Δ(s) ds + ∫_τ^t ξ(s)Δ(s + βΔ(τ)) ds.   (7)

Note that for t > τ the time dependence of θ₁(t) is confined to the upper limit of the second integral. The first integral is the drift in the phase due to noise prior to the perturbation, plus the deviation in the resetting caused by that drift; the second integral is the noise-driven drift after the perturbation.

2.2 Mean and variance of the interspike interval

Recall that in the noiseless case we defined the PRC by P(β, τ) = T − T̂(β, τ), where T̂(β, τ) is the perturbed interspike interval, defined as the time at which the phase equals T: θ(T̂) = T. For mathematical convenience we make the assumption that τ + βΔ(τ) < T, which guarantees that the neuron will not spike instantaneously upon receiving the perturbation. In the presence of noise, the interspike interval is stochastic, and we can define the mean and variance of the PRC as the mean and variance of T − T̂(β, τ). To compute these statistics, we expand T̂ in orders of ε and use our previous expansion of the phase to obtain

θ₀(T₀ + εT₁) + εθ₁(T₀ + εT₁) + O(ε²) = T.

Here, we have approximated T̂ = T₀ + εT₁. We evaluate t in Eq. (5) at T̂ = T₀ + εT₁, since we need order-ε accuracy; we evaluate t = T̂ ≈ T₀ in Eq. (6), since θ₁ is already order ε. Gathering terms of the same order, we obtain:

T₀ + βΔ(τ) = T   (8)

T₁ + θ₁(T₀) = 0.   (9)

Therefore, T₀ = T − βΔ(τ) and T₁ = −θ₁(T₀). Since the noise ξ(s) has zero mean, the mean PRC is equal to T − T₀ = βΔ(τ), the same as in the noiseless case. Thus, the effect of the noise on the mean PRC is negligible, at least up to first order in ε. Letting E[X] denote the expected value of the random variable X, the variance of the PRC is equal to

E[(T̂ − T − E[T̂ − T])²] = ε²E[T₁²] = ε²E[θ₁(T₀)²].

In squaring θ₁(T₀) we note that ξ(s) in the first and second integrals of Eq. (7) refers to non-overlapping intervals in time.

If we assume the noise is white, then E[ξ(s)ξ(s′)] = δ(s − s′), and we obtain the main result of this paper:

Var(τ) = ε² ( [1 + βΔ′(τ)]² ∫₀^τ Δ²(s) ds + ∫_τ^{T−βΔ(τ)} Δ²(s + βΔ(τ)) ds ).   (10)

Equation (10) is appealing in that when β = 0 it provides an expression for the variance of the interspike interval of the noisy oscillator:

Var_ISI = ε² ∫₀^T Δ²(s) ds.

Once the mean iPRC, Δ(t), is known, this can be used to estimate the strength of the noise, ε². This means that there are no free parameters in Eq. (10). We refer to this method as the phase dependent variance, or PDV.

This approach can be generalized to include stimuli and noise correlations having finite duration. However, this introduces several mathematical complications that are beyond the scope of this paper. The assumptions of fast noise and short pulses provide reasonable approximations to many experimental situations. In the remainder of this paper, we compare Eq. (10) to numerically computed statistics for equations of the form of Eq. (2), for the Hodgkin-Huxley equations, and for experimental data.

2.3 Underlying sources of phase dependent variance

Greater insight into the sources affecting PRC variance can be obtained by changing variables in the second integral and assuming that the perturbation is small, so that [1 + βΔ′(τ)]² ≈ 1 + 2βΔ′(τ). Then

Var(τ) ≈ ε² ( [1 + 2βΔ′(τ)] ∫₀^τ Δ²(s) ds + ∫_{τ+βΔ(τ)}^T Δ²(s) ds )
       = ε² ( ∫₀^T Δ²(s) ds + 2βΔ′(τ) ∫₀^τ Δ²(s) ds − ∫_τ^{τ+βΔ(τ)} Δ²(s) ds ).

The first term is just the variance of the unperturbed interspike interval distribution. The second term is proportional to the derivative of the PRC, βΔ′(τ), multiplied by the variance of the phase at the time of the stimulation, ε² ∫₀^τ Δ²(s) ds. Thus, the stimulus acts to compress or expand the variance, depending on the sign of the slope of the PRC. This can be understood by considering a positive perturbation given at a time when the PRC is increasing. In this case, trajectories that are phase advanced relative to the mean phase will experience a more positive phase shift than average, whereas trajectories that are phase delayed will experience a less positive shift. This results in an increase in the latent phase variance, as illustrated in Fig. 1. Conversely, for a positive pulse given when the PRC is decreasing, trajectories that are phase advanced will get a less positive shift than trajectories that are phase delayed, causing the latent phase variance to decrease.

To understand the third term, consider a case where the mean PRC at the time of the stimulus is positive, βΔ(τ) > 0. The pulse will cause an overall phase advance, reducing the variance by the amount that would have accumulated over the phases that are skipped due to the perturbation. If the pulse instead causes a phase delay, then τ + βΔ(τ) < τ and the sign of the integral is reversed: the final term then represents additional variance that accumulates as the phases between τ + βΔ(τ) and τ are replayed.

Fig. 1 Illustration of how the PRC affects variance. Immediately following an action potential, the phase is very well defined. Because neurons are noisy, however, the phase of the neuron becomes more uncertain as time progresses, indicated by a probability distribution. When the synaptic input is applied, the distribution is mapped through the PRC to determine the new phase and then integrated until the end of the cycle to determine the final distribution. Here we illustrate a synaptic input applied late in the phase, at 0.8 (note that only phases from 0.6 onward are plotted), where the uncertainty is illustrated as a Gaussian distribution around the mean phase. The mean and points one standard deviation above and below the mean are mapped through the PRC to illustrate the distribution after the stimulus. The original distribution is plotted with dotted lines on the y-axis for comparison. If the slope of the PRC is positive, Δ′(τ) > 0, the distribution is widened after the stimulus; if Δ′(τ) < 0, the distribution contracts.
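Since Eq. (10) involves only quadratures of the iPRC, it is straightforward to evaluate numerically. The sketch below (our own illustration) does so for an assumed iPRC Δ(θ) = sin 2πθ with T = 1; any fitted Δ could be substituted.

```python
# Sketch: evaluate the phase dependent variance, Eq. (10), by quadrature
# for an assumed iPRC Delta(theta) = sin(2*pi*theta) with period T = 1.
import numpy as np
from scipy.integrate import quad

T, eps, beta = 1.0, 0.1, 0.2
Delta  = lambda s: np.sin(2 * np.pi * s)
dDelta = lambda s: 2 * np.pi * np.cos(2 * np.pi * s)

def var_prc(tau):
    # noise accumulated before the pulse, rescaled by the PRC slope ...
    a = (1 + beta * dDelta(tau))**2 * quad(lambda s: Delta(s)**2, 0, tau)[0]
    # ... plus noise accumulated after the pulse, with the phases shifted
    b = quad(lambda s: Delta(s + beta * Delta(tau))**2,
             tau, T - beta * Delta(tau))[0]
    return eps**2 * (a + b)

# beta = 0 recovers the ISI variance, Var_ISI = eps^2 * int_0^T Delta^2 ds,
# which can be used to estimate eps^2 from data once Delta is known.
var_isi = eps**2 * quad(lambda s: Delta(s)**2, 0, T)[0]
print(var_isi, [round(var_prc(t), 6) for t in (0.2, 0.5, 0.8)])
```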

3 Comparison to simulations and experiments

In order to test the asymptotic theory for the phase dependent variance of Eq. (10), we simulate a variety of models, ranging from simple phase models to biophysical models such as the Hodgkin-Huxley equations. In each case, we start the model equations on their limit cycle at an initial condition that corresponds to the zero phase. For biophysical models, we use the point at which V(t) crosses zero, since, with noise, the peak of the action potential cannot be accurately determined. We add white noise of a specified magnitude. Small, brief pulses of current are given at different times during the cycle and the time of the next crossing is found. We run each set of stimuli 1,000 times. We compute the PRC by subtracting the resulting list of crossing times from the unforced, noise-free period. Then, for each stimulus time, we compute the mean shift and the standard deviation. These are plotted in the figures; the captions give the magnitude of the noise as well as the shape of the perturbing stimulus.

The simplest comparison of the theory with simulations is to solve Eq. (2) and compute the relevant statistics for simple PRCs. Two standard PRCs are sin θ (Type II) and 1 − cos θ (Type I), as they arise near bifurcations to limit cycles (Brown et al. 2004). Figure 2(a), (b) shows that the phase dependence of the standard deviation (the square root of the variance) is quite different for the two types of PRC. The match with Monte Carlo simulations (1,000 simulations at each of 50 different phase points) is excellent. Figure 2(c) shows that the relationship between the standard deviation and the mean (the shape of the actual PRC) is not simple; for example, it is not the case that the standard deviation is maximal where the slope of the PRC is maximal. Figure 2(d) shows the shape of the standard deviation as the PRC is transformed from Type I to Type II via Δ(θ) = A(r)(r sin 2πθ + (1 − r)(1 − cos 2πθ)), where A(r) is chosen so that the integral of Δ²(θ) is constant. Recall that this integral sets the variance of the ISI. Type II PRCs have a minimum variance near the onset of the spike and a maximum about halfway through the cycle. Type I PRCs have a nearly constant variance except for a dip three quarters of the way through the cycle.

Fig. 2 The standard deviation (square root of the variance) for two examples of Eq. (2), comparing simulations of 1,000 trials with Eq. (10). (a) Δ(θ) = sin 2πθ and (b) Δ(θ) = 1 − cos 2πθ (insets show the two PRCs). (c) Plot of the standard deviation versus the mean. (d) Plot of the standard deviation for a series of PRCs of the form A(r)(r sin 2πθ + (1 − r)(1 − cos 2πθ)), where A(r) is chosen so that the L² norm is constant. The stimulus consists of noise with ε = 0.1 and a rectangular pulse of amplitude 2 and width 0.25/2π.

Figure 3 shows the theory applied to the Hodgkin-Huxley model of the squid axon. A constant current (10 μA/cm²) is applied to the model to generate approximately 60 Hz oscillations. A white noise stimulus is added to the voltage equation with variance 0.0625 mV²/msec. To compute the PRC, perturbations of magnitude 2 μA/cm² lasting 0.5 msec are applied at a series of 50 times during the cycle, and we compute the times of the zero crossings of the potential to estimate the effect of each perturbation. Figure 3(a) shows the raw data for 100 of the 1,000 trials, along with the mean PRC and the PRC with no noise; the latter two curves are nearly indistinguishable. Using the noise-free PRC as an approximation to Δ(θ), we apply Eq. (10) to estimate the SD. Figure 3(b) shows the SD of the data in panel (a) (points) compared to the theory (smooth curve). There is a small discrepancy at the half-cycle point, where the SD is larger than the theory predicts, but the overall shape and magnitude are very similar.

Fig. 3 Hodgkin-Huxley model. The standard HH model is injected with 10 μA/cm² of current and ε = 0.25 mV/ms^(1/2) white noise. The stimulus consists of a 0.5 ms current pulse with amplitude 2 μA/cm². (Since the capacitance is 1 μF/cm², the total voltage shift is 1 mV.) Zero crossings of the voltage determine the timing. (a) Distribution of the time shifts for 100 trials at 50 time points during the cycle. Superimposed are the PRC computed for the deterministic system (no noise) and the average PRC computed from a simulation with 1,000 trials; the two curves are indistinguishable. (b) Standard deviation determined from the 1,000 trials (jagged dotted line) and the standard deviation from Eq. (10) (solid line).
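The Monte Carlo procedure for the phase models can be reproduced in a few lines. The following Euler-Maruyama sketch (our own illustration; it uses an instantaneous pulse rather than the rectangular pulse of Fig. 2) estimates the mean and standard deviation of the resetting at a single stimulus time, which can then be compared against βΔ(τ) and the square root of Eq. (10):

```python
# Sketch: Monte Carlo statistics of the stochastic phase equation (2),
# Euler-Maruyama with an instantaneous pulse of size beta at time tau.
import numpy as np

T, eps, beta, tau = 1.0, 0.1, 0.2, 0.3
dt, ntrials = 1e-3, 1000
rng = np.random.default_rng(0)
Delta = lambda th: np.sin(2 * np.pi * th)      # illustrative iPRC

That = np.empty(ntrials)                       # perturbed interspike intervals
for k in range(ntrials):
    th, t, kicked = 0.0, 0.0, False
    while th < T:
        if not kicked and t >= tau:
            th += beta * Delta(th)             # the delta-function pulse
            kicked = True
        th += dt + eps * np.sqrt(dt) * rng.standard_normal() * Delta(th)
        t += dt
    That[k] = t

prc = T - That                                 # samples of P(beta, tau)
print("mean:", prc.mean(), " vs theory:", beta * Delta(tau))
print("sd:  ", prc.std())                      # vs sqrt(Var(tau)) from Eq. (10)
```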

3.1 Application of phase dependent variance to real neuronal data

We have previously shown a phase dependence of the variance in spike advances in data collected from hippocampal excitatory pyramidal neurons (Netoff et al. 2005b). In this section, we compare the accuracy of describing the phase dependent variance of a real neuron using the analytical phase dependent variance with that of the ad-hoc method we have used previously. The two phase dependent variance functions are fit to the residuals of the PRC fit using maximum likelihood estimation, and the accuracy of each fit to the variance is measured using a χ² metric.

3.1.1 Fitting the phase dependent variance distribution to neuronal data

In order to validate our theory with real neuronal data, we fit the variance function of Eq. (10) to the data with a maximum likelihood fit. If the assumed variance is too large, many points will be close to the mean, but the probability density at the mean will be small and the total likelihood will be small. If the assumed variance is too small, too many points will look like outliers and the total likelihood will again be small. The likelihood is maximal when the coefficients of the function estimating the variance fit the data. Therefore, we can use a gradient descent method on the negative log-likelihood to find the coefficients of our phase dependent variance functions (Harris and Stocker 1998). For the analytical phase dependent variance, ε and β are fit to maximize the likelihood.

The likelihood is found by first estimating the probability of observing a given set of residuals under a function for the expected distribution. The probability of observing a particular set of residuals is the product of the probabilities of the observation at each point. If the residuals at a particular phase are assumed to be Gaussian, the total likelihood can be calculated as

L(ε, β) = ∏ᵢ₌₁ᴺ (1 / (√(2π) σᵢ(θ|ε, β))) exp(−yᵢ(θᵢ)² / (2σᵢ(θ|ε, β)²)),   (11)

where σᵢ(θ|ε, β) is the expected standard deviation at phase θᵢ given the coefficients of the phase dependent variance, ε and β, and yᵢ(θᵢ) are the residuals of the PRC fit. For numerical reasons, this is usually calculated as the sum of the logs of the probabilities:

log L = −(N/2) log(2π) − Σᵢ₌₁ᴺ log σᵢ(θ|ε, β) − (1/2) Σᵢ₌₁ᴺ (yᵢ(θᵢ) / σᵢ(θ|ε, β))².   (12)

Thus, we fit both the ad-hoc and the analytical model by minimizing the negative log-likelihood, which provides us with the values of ε and β in Eq. (10).
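In code, the fit amounts to minimizing the negative of Eq. (12) over (ε, β). The sketch below (our own illustration) does this with a Nelder-Mead search, using the Eq. (10) quadrature for σ(θ|ε, β); the illustrative iPRC, the synthetic residuals standing in for data, and the optimizer choice are all assumptions.

```python
# Sketch: maximum likelihood fit of (eps, beta) by minimizing the negative
# log-likelihood of Eq. (12), with sigma(theta) given by Eq. (10).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

T = 1.0
Delta  = lambda s: np.sin(2 * np.pi * s)           # illustrative iPRC
dDelta = lambda s: 2 * np.pi * np.cos(2 * np.pi * s)

def var_prc(tau, eps, beta):                       # Eq. (10)
    a = (1 + beta * dDelta(tau))**2 * quad(lambda s: Delta(s)**2, 0, tau)[0]
    b = quad(lambda s: Delta(s + beta * Delta(tau))**2,
             tau, T - beta * Delta(tau))[0]
    return eps**2 * (a + b)

def neg_log_like(params, th, y):                   # negative of Eq. (12)
    eps, beta = params
    v = np.array([var_prc(t, eps, beta) for t in th])
    if not np.all(np.isfinite(v)) or np.any(v <= 0):
        return np.inf                              # reject invalid parameters
    return (len(y) / 2 * np.log(2 * np.pi) + 0.5 * np.sum(np.log(v))
            + 0.5 * np.sum(y**2 / v))

rng = np.random.default_rng(1)                     # synthetic "residuals"
th = rng.uniform(0.05, 0.9, 100)
y = rng.standard_normal(100) * np.sqrt([var_prc(t, 0.1, 0.2) for t in th])

fit = minimize(neg_log_like, x0=[0.08, 0.15], args=(th, y), method="Nelder-Mead")
print(fit.x)                                       # recovered (eps, beta)
```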

3.1.2 Application to experimental data

In this section we test the analytical phase dependent variance on experimentally obtained PRCs measured from pyramidal neurons in the hippocampal formation. The accuracy of the analytical phase dependent variance is compared to an ad-hoc phase dependent variance function we have used in a previous paper (Netoff et al. 2005b). The data are drawn from the Netoff lab's experimental database containing phase response curves from hundreds of neurons collected for various experiments (Pervouchine et al. 2006; Netoff et al. 2005a, b). Details of the experimental protocols are published in those papers. Briefly, neurons are patch clamped using the whole-cell patch clamp technique, and PRCs are measured using a dynamic patch clamp running under real-time Linux (Dorval et al. 2001). The dynamic clamp is used to (1) control the spiking rate of the neuron using a closed-loop spike rate controller, (2) deliver current pulses to simulate synaptic conductances, and (3) record the pre- and post-stimulus ISIs to create a PRC. In all cases, the neuron is controlled to fire at 10 Hz by adjusting the applied current. The phase response curve is measured using a stimulus waveform resembling a synaptic conductance. The synaptic conductance waveform is an alpha function; the synaptic current is time dependent and is calculated as

I_s = A (e^{−t/τ_f} − e^{−t/τ_r}) (V_m − E_syn),   (13)

where A controls the stimulus amplitude, τ_r and τ_f are the characteristic rise and fall times of the synaptic input, with values of 2.6 msec and 6.23 msec respectively, E_syn = 0 mV is the reversal potential of the synapse, and V_m is the membrane voltage of the cell. A synaptic input is applied at a randomly selected phase on every 6th period, and the resulting period is recorded.

The phase advances caused by the synaptic inputs are fit with the following polynomial to estimate the PRC:

Δ(θ) = θ(θ − 1)(a₃θ³ + a₂θ² + a₁θ + a₀).

This function forces the PRC to be zero at the beginning and the end of the cycle, θ = 0 and θ = 1. The polynomial is fit to the data with a least squares fit. To estimate the phase dependent variance, we subtract the fit PRC function from each data point to obtain the residuals. The function for the variance of the residuals is then fit by maximizing the likelihood, as described above.

The analytical phase dependent variance function is compared to the ad-hoc method used to fit the variance in an earlier publication. An example of the standard deviation for the ad-hoc fit and the phase dependent fit is given in Fig. 4 for comparison. In the ad-hoc method, we assume that the error is a random walk, such that the variance increases linearly and the standard deviation increases as the square root of time. The standard deviation as a function of phase is calculated using the function

σ(θ) = Y + Z√θ,   (14)

where Y and Z are fit using a least squares minimization algorithm based on the maximum likelihood function described in Eq. (12).

Fig. 4 Comparison of this paper's theoretical estimate of the standard deviation, Eq. (10), to the ad-hoc function of Eq. (14). The inset shows the PRC used to compute the standard deviations.

Residual values and the standard deviation fits for a real neuron are shown in Fig. 5. Although the fits do not seem particularly compelling, the phase dependent variance fits the residual data better than the ad-hoc method. In this example, the ad-hoc method over-estimates the variance while the analytical method under-estimates it.

Fig. 5 PRC and phase dependent variance fits. (a) Experimental data collected from an excitatory pyramidal neuron in a hippocampal brain slice, with a period of approximately 100 ms. The solid line is a 5th-order polynomial fit to the data, constrained to be zero at the beginning and end of the phase. (b) Residual values from comparing the raw data to the polynomial PRC fit. The solid line is the standard deviation fit using the phase dependent variance method of this paper, Eq. (10), and the dashed line is the square-root ad-hoc fit of Eq. (14).

The phase dependent variance is estimated on PRCs measured from 74 different neurons in the hippocampal formation. The accuracy of the estimated phase dependent variance is quantified using the reduced chi-squared statistic

χ²_reduced = (1/N) Σᵢ₌₁ᴺ (yᵢ(θᵢ)/σ(θᵢ))²,

where σ² is the variance estimated from the PRC fit (Plackett 1983). The reduced χ² value is the average of the squared residuals divided by the expected variance; χ²_reduced = 1 if the estimate of the variance as a function of phase is ideal. For the 74 cells, the average reduced χ² value for the ad-hoc variance method is 1.7697, while the value for the analytical form is nearly ideal. Eq. (10) has a lower χ² value than the ad-hoc method in 63 of 74 cells (p = 1.7 × 10⁻⁵ using Welch's t-test) (Welch 1947). Because its χ² value is much closer to 1 on average and it represents most data sets better than the ad-hoc method, we conclude that the phase dependent variance method is more accurate in describing the residuals of experimental data.
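The reduced χ² comparison itself is a one-liner; below is a sketch with hypothetical inputs (y for the PRC residuals and the fitted standard deviations from each of the two methods):

```python
# Sketch: reduced chi-squared of a fitted variance model; y are the PRC
# residuals and sigma the fitted SD at each stimulus phase (hypothetical).
import numpy as np

def chi2_reduced(y, sigma):
    return np.mean((y / sigma)**2)   # equals 1 when the variance model is ideal

# chi2_reduced(y, sigma_pdv) vs chi2_reduced(y, sigma_adhoc): the value
# closer to 1 identifies the better-calibrated variance model.
```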

3.2 Dual cell simulations

In this section we test whether accounting for the phase dependent variance provides a significant improvement over using phase independent noise in a simulation. To answer this, we simulate a coupled pair of excitatory neurons using a Hodgkin-Huxley-like conductance-based model as described by Golomb and Amitai (1997). This model has the functional form

C dV/dt = −(I_Na + I_NaP + I_Kdr + I_KA + I_Kslow + I_L) + I_s + Noise,

where C is the membrane capacitance, V is the voltage difference between the intracellular and extracellular space, I_s is the synaptic current (the alpha function input described in Eq. (13)), and the remaining terms are the ionic membrane currents: sodium (Na), persistent sodium (NaP), delayed rectifier potassium (Kdr), A-type potassium (KA), slow potassium (Kslow), and leak (L). The noise component is used to create spike time variability and is of the same additive form assumed in Eq. (2). The parameters used for the currents and the membrane capacitance are as described by Golomb and Amitai (1997). When one neuron fires an action potential, a synaptic input is applied to the other neuron; here we define an action potential as a positive zero crossing of the voltage.

This full-scale network simulation is compared to simulations performed using an iterative PRC model in which the PRC and the variance are fit to data taken directly from the Golomb-Amitai model. In the iterative model, at each firing of a neuron in the network, the phase of the post-synaptic neuron is advanced according to the PRC, with a noise term whose amplitude is determined by the phase dependent variance equation. For the full Golomb-Amitai simulation and for each of the two reduced PRC model simulations (one with a constant, phase independent variance and the other with the phase dependent variance), a histogram of the spike time differences is made. The results of the three simulations are shown in Fig. 6. In the simulation where the phase dependent variance is accounted for, the iterated PRC model reproduces the full Golomb-Amitai simulation much more accurately than the iterative model using phase independent noise.

Fig. 6 Histogram of spike counts as a function of oscillation phase for dual cell network simulations, comparing simulations in which the fit PRC determines each cell's phase advance to simulations using the full-scale GA model neurons. Gray points: dual cell simulation using the polynomial-fit PRC with a flat variance based on the standard deviation of all residuals from the fit. X-marks: the same fit PRC with the phase dependent variance of Eq. (10). Black points: the full-scale GA model's dual cell simulation. The simulation with the phase dependent variance more accurately predicts the results of the GA model.
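The iterative PRC model described above reduces to a short event-driven loop. The following sketch is our own minimal two-cell version, with an illustrative PRC and a stand-in function for the Eq. (10) standard deviation; a fitted PRC and the fitted PDV would replace them in practice.

```python
# Sketch: two pulse-coupled cells iterated through a PRC map; at each spike
# the other cell's phase advances by the PRC plus phase dependent noise.
import numpy as np

T = 1.0
rng = np.random.default_rng(2)
prc = lambda th: 0.05 * np.sin(2 * np.pi * th / T)    # illustrative fitted PRC
sd  = lambda th: 0.01 * (1 + np.sin(np.pi * th / T))  # stand-in for Eq. (10) SD

phases, diffs = np.array([0.0, 0.37]), []
for _ in range(5000):
    i = int(np.argmax(phases))       # the cell closest to firing
    phases += T - phases[i]          # advance both cells to that spike time
    j = 1 - i
    # post-synaptic update: PRC advance plus phase dependent noise
    phases[j] = (phases[j] + prc(phases[j])
                 + sd(phases[j]) * rng.standard_normal()) % T
    phases[i] = 0.0
    diffs.append(phases[j])          # spike-time difference, in units of phase
# a histogram of diffs corresponds to the distributions compared in Fig. 6
```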

4 Neuronal entrainment as a function of ε and β

In this section we use the example of a neuron, or a population of uncoupled neurons, entrained to a periodic stimulus to illustrate how synchrony depends on the variance. An example PRC is used, with a phase dependent variance calculated using Eq. (10). We determine how entrainment depends on the parameters of the phase dependent variance by varying the amplitude of the noise, ε, and the amplitude of the stimulus pulse, β. A transfer operator is used to map a neuron's phase probabilistically into the next cycle. By analyzing this map, we can predict the steady-state probability distribution of a neuron, or of a population of neurons over infinite time, as well as determine how quickly the neuron or population approaches this steady-state solution. The purpose of this section is to provide a more intuitive understanding of Eq. (10).

4.1 Estimating synchrony in a stochastic system

For a periodically stimulated oscillator with PRC Δ(θ), stimulated at period P, the new phase after each stimulus can be calculated as θᵢ₊₁ = θᵢ + T − P + Δ(P − θᵢ). If the period of the stimulus and that of the neuron are the same, T = P, this equation becomes θᵢ₊₁ = θᵢ + Δ(T − θᵢ). The change in phase from cycle to cycle is the H-function for a single oscillator with a periodic input: H(θᵢ) = θᵢ₊₁ − θᵢ = Δ(T − θᵢ) (Neu 1979; Goel and Ermentrout 2002). The fixed points of the system, where the neuron phase-locks to the stimulus, occur where the H-function crosses zero, indicating that there is no change in phase from one cycle to the next. The stable fixed points are those where the slope of the H-function at the zero crossing is negative.

This H-function is a map that determines how the phase changes from one stimulus cycle to the next in a deterministic system. In a noisy system, the phase advance is probabilistic, with trajectories that start at a single phase θ mapped onto a distribution of phases ρ(θ) = N(θ + Δ(T − θ), σ(θ)), where N(μ, σ) represents a normal distribution with mean μ and standard deviation σ. This mapping can be used to define a Frobenius-Perron operator P that maps the density of phases at iteration i onto the density of phases at iteration i + 1: ρᵢ₊₁(θ) = Pρᵢ(θ). This operator is linear, and by discretizing phase and taking a piecewise linear approximation to the densities ρ(θ), we can approximate the transfer operator by a transition matrix, which we also call P. The matrix P has all positive entries, and because it conserves probability, the columns of P sum to 1. As a result, the largest eigenvalue of P is real and has magnitude 1, and the corresponding eigenvector represents the steady-state distribution of the dynamical system obtained from the iterated map, ρₙ(θ) = Pⁿρ₀(θ).

Fig. 7 Stochastic map. This figure illustrates the effect of a stimulus pulse on the phase distribution of a neuron or a population of neurons. The top panel shows an arbitrary PRC (dashed line) with its corresponding H-function H(θ) (solid line) as a function of phase. The central panel is a graphical illustration of the function that calculates the probability distribution of the neuron after the stimulus (y-axis) given the probability distribution prior to the stimulus (x-axis). The center of probability lies along the line of identity, to which the H-function is added; the width of the probability distribution is determined by Eq. (10). The line of identity is indicated as a thin black line along the diagonal. Two examples of a-priori phase distributions are shown: one is a Dirac delta function (vertical line in the bottom panel), representing the phase distribution of a single neuron or of a synchronous population; the other is a uniform distribution (dotted line in the bottom panel), representing an asynchronous population evenly distributed across the phases at the time the first stimulus is applied. The distribution after applying the map (left panel) indicates the probability distribution after the stimulus. The result of the stochastic map is that the delta function is mapped to a Gaussian and the flat distribution is no longer flat.
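Discretizing the operator makes these statements concrete. The sketch below (our own illustration, using the example PRC of Fig. 7 and a stand-in for the Eq. (10) standard deviation) builds the transition matrix column by column and extracts the steady-state eigenvector and the second eigenvalue discussed in Section 4.2.

```python
# Sketch: discretized Frobenius-Perron (transition) matrix for the stochastic
# phase map, and its leading eigenstructure.
import numpy as np

T, n = 1.0, 200
theta = (np.arange(n) + 0.5) * T / n
Delta = lambda th: 0.1 * th * (1 - th) - 0.06            # PRC of Fig. 7
sd    = lambda th: 0.02 + 0.02 * np.sin(np.pi * th / T)  # stand-in for Eq. (10)

P = np.zeros((n, n))
for i, th in enumerate(theta):
    mu = (th + Delta(T - th)) % T                  # mean of the new phase
    d = (theta - mu + T / 2) % T - T / 2           # circular distance to mean
    w = np.exp(-0.5 * (d / sd(th))**2)             # Gaussian spread of phases
    P[:, i] = w / w.sum()                          # columns sum to 1

vals, vecs = np.linalg.eig(P)
order = np.argsort(-np.abs(vals))
steady = np.real(vecs[:, order[0]])
steady /= steady.sum()            # steady-state phase distribution (|E1| = 1)
E2 = vals[order[1]]
print("|E2| (sets convergence rate):", abs(E2), " angle(E2):", np.angle(E2))
```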

Figure 7 shows the stochastic map corresponding to the PRC Δ(θ) = 0.1θ(1 − θ) − 0.06. The gray scale shows the magnitude of the elements of P. The solid lines show the map applied to an initial state consisting of a delta function centered at a phase of 0.6; this delta function is mapped to a Gaussian distribution whose mean is shifted to a phase slightly smaller than 0.6. Similarly, we can start with a population of unconnected neurons distributed uniformly in phase and iterate the distribution through the transfer function (dashed lines). The output distribution is not perfectly uniform, but is slightly heavier at the negatively sloped zero crossings of the H-function and where the variance of the PRC is lower. In both cases, the phase distribution is attracted to the phase where the H-function crosses zero with a negative slope. In this example, the H-function has two zeros, one at each end of the phase: the crossing at θ = 0 has a negative slope, while the crossing at θ = 1 has a positive slope. This indicates that the system is asymptotically stable from one side and unstable from the other, predicting that the neuron will synchronize to the stimulus as long as the stimulus is leading in phase; the moment the neuron is pushed by noise into the leading position, the two will phase slip until the stimulus is leading again. This results in a sharp peak on the right-hand side of the stimulus and a wider peak at the preceding phases. To show how the distributions change smoothly in time, we have included a movie in the supplementary material.

The transfer operator can be applied iteratively to a distribution of neurons to determine the distribution after n stimuli, ρᵢ₊ₙ(θ) = Pⁿρᵢ(θ). In Fig. 8 we show how the Dirac delta function and the uniform distribution are iteratively mapped through the transfer function 1, 2, 8, 32 and 64 times. The steady-state solution of the phase distribution, for either the single neuron or the population, is the eigenvector with the largest eigenvalue of the transfer operator; this is plotted as the infinite-time solution. Over subsequent stimuli, the distributions starting from the Dirac delta function and from the uniform distribution both approach the steady-state solution. It does not matter whether the initial distribution starts out as a delta function or as a uniform distribution: the final solution is the same.

Fig. 8 Iteration of phase distributions through the stochastic map. The top panel shows H(θ) for two cycles; the negatively sloped roots predict a synchronous solution at the corresponding phase. The middle panel, labeled Spike Dist., illustrates an initial delta-function distribution mapped through the stochastic map for 1 through 64 iterations. The bottom panel, labeled Pop. Dist., represents a population of neurons starting from a uniform initial distribution mapped through the stochastic map. Two consecutive iterations are plotted next to each other so that zero phase appears at the center of the plot. The thick darkest line, labeled inf, is the infinite-time solution, computed from the eigenvector of the largest eigenvalue of the stochastic map.

4.2 Synchrony dependency on ε and β

Whether a neuron or a population of neurons synchronizes to a periodic stimulus depends on the amplitude of the noise in the neuron, ε, and the stimulus amplitude, β. How strongly a population of neurons synchronizes to a periodic stimulus can be determined by measuring the amplitude of the peak of the eigenvector with the largest eigenvalue. By varying these parameters we can determine how they affect the ability of the neuron to synchronize to the periodic stimulus. In Fig. 9, the noise amplitude ε of the neuron is varied while the strength of the stimulus, β, is kept constant. Not surprisingly, as the noise amplitude is increased the peak gets smeared out and its amplitude decreases.

How quickly the neuron approaches the steady-state solution can also be determined from the transition matrix. The largest eigenvalue is always one, so the rate of convergence to the steady state is determined by the magnitude of the second eigenvalue, E₂. The closer |E₂| is to one, the more attracting the corresponding eigenvector is, and the more it competes with the first eigenvector, resulting in a longer time for the system to converge to the steady state. Measuring |E₂| as a function of ε shows that |E₂| gets smaller as the noise amplitude gets larger, indicating that the system converges to the steady state faster in higher noise conditions.

If the second eigenvalue E₂ is complex, the system oscillates as it approaches the steady-state solution, and the angle of E₂ determines the frequency of that oscillation. It is difficult to show this oscillation in a static figure, but it can be seen in the supplementary movie. As ε increases, the angle of E₂ increases, indicating that the oscillations increase in frequency.

Similarly, how synchrony depends on the amplitude of the stimulus, β, with the noise amplitude ε kept constant, is shown in Fig. 10. As β is decreased, the strength of the entrainment weakens and synchrony decreases. Surprisingly, the rate at which the network approaches the steady-state phase distribution depends non-monotonically on β, with a turning point at an intermediate value of β rather than a simple increase with stimulus strength. As β decreases, the frequency at which the system oscillates as it approaches the steady state, as measured by the angle of E₂, increases.

In summary, decreasing the noise ε or increasing the stimulus strength β makes synchrony stronger. However, with higher noise and moderate stimulus amplitude, the population converges to its steady-state behavior faster.

Fig. 9 Synchrony as a function of neuronal noise amplitude ε. The eigenvector with the largest eigenvalue indicates the steady-state phase distribution after an infinite number of stimuli. (a) The peak amplitude of that eigenvector as a function of ε. (b) The rate of convergence to the steady-state solution as a function of ε, determined by the magnitude of the second eigenvalue E₂; the closer |E₂| is to one, the slower the system converges. As ε increases, |E₂| decreases, indicating that the population converges to the steady-state behavior faster with higher noise. (c) The frequency of the oscillation of the population as it converges to the steady-state solution, determined by the angle of E₂; as ε is increased, the angle of E₂ increases, indicating that the frequency of the oscillation increases. (d) The distribution of the stimulus phase for three values of ε at fixed β; as ε increases, noise smears the peak out and synchrony decreases.

Fig. 10 Synchrony as a function of stimulus amplitude, β. (a) As β increases (with ε fixed), synchrony increases, measured by the peak amplitude of the eigenvector with the largest eigenvalue of the transition matrix. (b) The modulus of the second eigenvalue E₂ changes non-monotonically with β, with a turning point near β = 0.45, so the rate of convergence to the steady state is not monotonic in the stimulus amplitude. (c) The phase angle of E₂ determines the frequency of oscillations on the way to convergence; this frequency decreases with increasing β. (d) Eigenvectors for three different values of β. As the stimulus amplitude decreases, the noise term dominates and the population density spreads across the oscillator's phase.

5 Discussion

In this paper we have shown that noisy neural oscillators exhibit a predictable phase dependent variance in their PRCs that depends primarily on the PRC itself.

We used a perturbation method to estimate the change in phase due to the noisy input. With this calculation, we are able to estimate many statistical quantities, such as the effect of the noise on the mean and, in particular, on the variance of the PRC. We have used this perturbation calculation in other papers (Ly and Ermentrout 2009). There is no reason why a similar calculation could not be applied to more realistic forms of noise, such as colored noise arising from synapses, and, as we noted, to more realistic perturbations. The advantage of the white noise approach is that the autocorrelation is a delta function and, thus, the integrals are easy to evaluate.

Our calculations are predicated on the idea that the magnitudes of the noise and of the perturbations are small, since the calculations are based on an asymptotic theory. Thus, they may be of limited use in some realistic experimental settings; indeed, in the high noise case, the notion of phase becomes more difficult to define precisely. On the other hand, numerical simulations, which work no matter what the size of the perturbation or the stimulus, are of limited value for improving understanding of the very general phenomenon of phase-dependent variance. Analytic calculation, even in somewhat unrealistic limits, can help fill in that understanding. Since the analytic calculations are approximations, they are valid up to some limit: the maximum amplitude for which we remain in the linear regime.

By examining the variance of the phase resetting curves of real neurons, we have shown that the result of this paper characterizes the residual data quite well. The noisy nature of these periodically firing cells provides an ideal application of the phase dependent variance theory, which is shown to be statistically significantly better than average-variance and ad-hoc methods such as Eq. (14). Not only does our theory bode well for characterizing individual noisy cell dynamics, it also provides insight into its usefulness in describing network behavior. In modeled dual cell networks, the phase dependent variance produces behavior that closely follows biologically relevant Hodgkin-Huxley-style conductance models such as the Golomb-Amitai model, with its several ionic currents. One could easily apply this theory to much larger networks to analyze the effect of including a phase dependence in the noise.

The noise amplitude ε and the stimulus amplitude β in Eq. (10) affect the dynamics of a stimulated cell. A neuron's ability to synchronize to a periodic stimulus is likely to be important to network dynamics, especially those involved in diseases such as epilepsy and Parkinson's disease. It is interesting that increasing the noise amplitude makes the cell converge to the synchronous solution more rapidly, while the population distribution of the stimulated cell becomes sharper as the noise amplitude is decreased. In a sense, the noise allows the system to reach the solution by providing more variability in the phase of oscillation, while maintaining a greater variability in synchrony. This has important implications for the strength of the noise in a neural system: too much noise spreads the synchronous solution out, while too little noise causes the system to take longer to reach the synchronous solution. When varying the stimulus amplitude β, we find greater synchrony and an increased rate of convergence with greater pulse amplitude. Although the results of varying β may be as intuitively expected, the stimulus amplitude must be balanced against the noise amplitude, the structure of the network, and the firing rates of the individual neurons when considering the application to an entire system of cells.

We believe the theory of this paper will be more useful for experiments and simulations than previously used methods of estimating the variance of a noisy neuronal phase resetting curve, which have been largely based on averages or on assumptions about the general shape of such curves. Until now, the phase dependent variance has been described by an ad-hoc method of the general form of Eq. (14), developed from observing the variance of experimentally measured PRCs. Using the analytical expression developed in this paper, Eq. (10), we can now characterize the phase dependent variance of a phase resetting curve with much greater accuracy. The main result of this paper should provide a useful tool for describing phase resetting curves in greater detail, in both experiment and simulation.

Acknowledgements We would like to acknowledge the NSF, an NSF CAREER Award, and a University of Minnesota Grant-in-Aid.

References

Abouzeid, A., & Ermentrout, B. (2009). Type-II phase resetting curve is optimal for stochastic synchrony. Physical Review E, 80, 011911.

Achuthan, S., & Canavier, C. C. (2009). Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. The Journal of Neuroscience, 29(16).

Ariaratnam, J. T., & Strogatz, S. H. (2001). Phase diagram for the Winfree model of coupled nonlinear oscillators. Physical Review Letters, 86.

Brown, E., Moehlis, J., & Holmes, P. (2004). On the phase reduction and response dynamics of neural oscillator populations. Neural Computation, 16.

Dorval, A. D., Christini, D. J., & White, J. A. (2001). Real-time Linux dynamic clamp: A fast and flexible way to construct virtual ion channels in living cells. Annals of Biomedical Engineering, 29.

Ermentrout, B., & Saunders, D. (2006). Phase resetting and coupling of noisy neural oscillators. Journal of Computational Neuroscience, 20.

Forger, D. B., & Paydarfar, D. (2004). Starting, stopping, and resetting biological oscillators: In search of optimum perturbations. Journal of Theoretical Biology, 230.

Galan, R. F., Ermentrout, G. B., & Urban, N. N. (2005). Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Physical Review Letters, 94, 158101.

Gardiner, C. W. (2004). Handbook of stochastic methods for physics, chemistry and the natural sciences. Springer Series in Synergetics (Vol. 13). Berlin: Springer.

Goel, P., & Ermentrout, B. (2002). Synchrony, stability, and firing patterns in pulse-coupled oscillators. Physica D, 163(3).

Golomb, D., & Amitai, Y. (1997). Propagating neuronal discharges in neocortical slices: Computational and experimental study. Journal of Neurophysiology, 78.

Guevara, M. R., & Glass, L. (1982). Phase locking, period doubling bifurcations and chaos in a mathematical model of a periodically driven oscillator: A theory for the entrainment of biological oscillators and the generation of cardiac dysrhythmias. Journal of Mathematical Biology, 14, 1–23.

Harris, J. J., & Stocker, H. (1998). Handbook of mathematics and computational science. New York: Springer.

Ito, K. (1946). On a stochastic integral equation. Proceedings of the Japan Academy, 22.

Kloeden, P. E., & Platen, E. (1992). Numerical solution of stochastic differential equations. Applications of Mathematics (Vol. 23). Berlin: Springer.

Kuramoto, Y. (1984). Chemical oscillations, waves, and turbulence. Dover Publications.

Ly, C., & Ermentrout, G. B. (2009). Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. Journal of Computational Neuroscience, 26.

Ly, C., & Ermentrout, G. B. (2010). Coupling regularizes individual units in noisy populations. Physical Review E, 81, 011911.

Netoff, T. I., Acker, C. D., Bettencourt, J. C., & White, J. A. (2005a). Beyond two-cell networks: Experimental measurement of neuronal responses to multiple synaptic inputs. Journal of Computational Neuroscience, 18.

Netoff, T. I., Banks, M. I., Dorval, A. D., Acker, C. D., Haas, J. S., Kopell, N., et al. (2005b). Synchronization in hybrid neuronal networks of the hippocampal formation. Journal of Neurophysiology, 93.

Neu, J. C. (1979). Coupled chemical oscillators. SIAM Journal on Applied Mathematics, 37(2).

Pervouchine, D. D., Netoff, T. I., Rotstein, H. G., White, J. A., Cunningham, M. O., Whittington, M. A., et al. (2006). Low-dimensional maps encoding dynamics in entorhinal cortex and hippocampus. Neural Computation, 18.

Plackett, R. L. (1983). Karl Pearson and the chi-squared test. International Statistical Review, 51.

Preyer, A., & Butera, R. (2005). Neuronal oscillators in Aplysia californica that demonstrate weak coupling in vitro. Physical Review Letters, 95(13), 138103.

Reyes, A. D., & Fetz, E. E. (1993). Effects of transient depolarizing potentials on the firing rate of cat neocortical neurons. Journal of Neurophysiology, 69.

Stoop, R., Schindler, K., & Bunimovich, L. A. (2000). Neocortical networks of pyramidal neurons: From local locking and chaos to macroscopic chaos and synchronization. Nonlinearity, 13.

Torben-Nielsen, B., Uusisaari, M., & Stiefel, K. (2010). A comparison of methods to determine neuronal phase-response curves. Frontiers in Neuroinformatics, 4(6).

Welch, B. L. (1947). The generalization of "Student's" problem when several different population variances are involved. Biometrika, 34.

Winfree, A. T. (1967). Biological rhythms and the behavior of populations of coupled oscillators. Journal of Theoretical Biology, 16, 15–42.


More information

Voltage-clamp and Hodgkin-Huxley models

Voltage-clamp and Hodgkin-Huxley models Voltage-clamp and Hodgkin-Huxley models Read: Hille, Chapters 2-5 (best Koch, Chapters 6, 8, 9 See also Hodgkin and Huxley, J. Physiol. 117:500-544 (1952. (the source Clay, J. Neurophysiol. 80:903-913

More information

The Effects of Voltage Gated Gap. Networks

The Effects of Voltage Gated Gap. Networks The Effects of Voltage Gated Gap Junctions on Phase Locking in Neuronal Networks Tim Lewis Department of Mathematics, Graduate Group in Applied Mathematics (GGAM) University of California, Davis with Donald

More information

6.3.4 Action potential

6.3.4 Action potential I ion C m C m dφ dt Figure 6.8: Electrical circuit model of the cell membrane. Normally, cells are net negative inside the cell which results in a non-zero resting membrane potential. The membrane potential

More information

80% of all excitatory synapses - at the dendritic spines.

80% of all excitatory synapses - at the dendritic spines. Dendritic Modelling Dendrites (from Greek dendron, tree ) are the branched projections of a neuron that act to conduct the electrical stimulation received from other cells to and from the cell body, or

More information

Basic elements of neuroelectronics -- membranes -- ion channels -- wiring

Basic elements of neuroelectronics -- membranes -- ion channels -- wiring Computing in carbon Basic elements of neuroelectronics -- membranes -- ion channels -- wiring Elementary neuron models -- conductance based -- modelers alternatives Wires -- signal propagation -- processing

More information

The Theory of Weakly Coupled Oscillators

The Theory of Weakly Coupled Oscillators The Theory of Weakly Coupled Oscillators Michael A. Schwemmer and Timothy J. Lewis Department of Mathematics, One Shields Ave, University of California Davis, CA 95616 1 1 Introduction 2 3 4 5 6 7 8 9

More information

Dynamical phase transitions in periodically driven model neurons

Dynamical phase transitions in periodically driven model neurons Dynamical phase transitions in periodically driven model neurons Jan R. Engelbrecht 1 and Renato Mirollo 2 1 Department of Physics, Boston College, Chestnut Hill, Massachusetts 02467, USA 2 Department

More information

Comparing integrate-and-fire models estimated using intracellular and extracellular data 1

Comparing integrate-and-fire models estimated using intracellular and extracellular data 1 Comparing integrate-and-fire models estimated using intracellular and extracellular data 1 Liam Paninski a,b,2 Jonathan Pillow b Eero Simoncelli b a Gatsby Computational Neuroscience Unit, University College

More information

Neural Modeling and Computational Neuroscience. Claudio Gallicchio

Neural Modeling and Computational Neuroscience. Claudio Gallicchio Neural Modeling and Computational Neuroscience Claudio Gallicchio 1 Neuroscience modeling 2 Introduction to basic aspects of brain computation Introduction to neurophysiology Neural modeling: Elements

More information

Phase Response Properties of Half-Center. Oscillators

Phase Response Properties of Half-Center. Oscillators Phase Response Properties of Half-Center Oscillators Jiawei Calvin Zhang Timothy J. Lewis Department of Mathematics, University of California, Davis Davis, CA 95616, USA June 17, 212 Abstract We examine

More information

3 Action Potentials - Brutal Approximations

3 Action Potentials - Brutal Approximations Physics 172/278 - David Kleinfeld - Fall 2004; Revised Winter 2015 3 Action Potentials - Brutal Approximations The Hodgkin-Huxley equations for the behavior of the action potential in squid, and similar

More information

9 Generation of Action Potential Hodgkin-Huxley Model

9 Generation of Action Potential Hodgkin-Huxley Model 9 Generation of Action Potential Hodgkin-Huxley Model (based on chapter 12, W.W. Lytton, Hodgkin-Huxley Model) 9.1 Passive and active membrane models In the previous lecture we have considered a passive

More information

How Spike Generation Mechanisms Determine the Neuronal Response to Fluctuating Inputs

How Spike Generation Mechanisms Determine the Neuronal Response to Fluctuating Inputs 628 The Journal of Neuroscience, December 7, 2003 23(37):628 640 Behavioral/Systems/Cognitive How Spike Generation Mechanisms Determine the Neuronal Response to Fluctuating Inputs Nicolas Fourcaud-Trocmé,

More information

Strange Nonchaotic Spiking in the Quasiperiodically-forced Hodgkin-Huxley Neuron

Strange Nonchaotic Spiking in the Quasiperiodically-forced Hodgkin-Huxley Neuron Journal of the Korean Physical Society, Vol. 57, o. 1, July 2010, pp. 23 29 Strange onchaotic Spiking in the Quasiperiodically-forced Hodgkin-Huxley euron Woochang Lim and Sang-Yoon Kim Department of Physics,

More information

The Phase Response Curve of Reciprocally Inhibitory Model Neurons Exhibiting Anti-Phase Rhythms

The Phase Response Curve of Reciprocally Inhibitory Model Neurons Exhibiting Anti-Phase Rhythms The Phase Response Curve of Reciprocally Inhibitory Model Neurons Exhibiting Anti-Phase Rhythms Jiawei Zhang Timothy J. Lewis Department of Mathematics, University of California, Davis Davis, CA 9566,

More information

MANY scientists believe that pulse-coupled neural networks

MANY scientists believe that pulse-coupled neural networks IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 3, MAY 1999 499 Class 1 Neural Excitability, Conventional Synapses, Weakly Connected Networks, and Mathematical Foundations of Pulse-Coupled Models Eugene

More information

Phase Response Curves, Delays and Synchronization in Matlab

Phase Response Curves, Delays and Synchronization in Matlab Phase Response Curves, Delays and Synchronization in Matlab W. Govaerts and B. Sautois Department of Applied Mathematics and Computer Science, Ghent University, Krijgslaan 281-S9, B-9000 Ghent, Belgium

More information

Activity Driven Adaptive Stochastic. Resonance. Gregor Wenning and Klaus Obermayer. Technical University of Berlin.

Activity Driven Adaptive Stochastic. Resonance. Gregor Wenning and Klaus Obermayer. Technical University of Berlin. Activity Driven Adaptive Stochastic Resonance Gregor Wenning and Klaus Obermayer Department of Electrical Engineering and Computer Science Technical University of Berlin Franklinstr. 8/9, 187 Berlin fgrewe,obyg@cs.tu-berlin.de

More information

An analysis of how coupling parameters influence nonlinear oscillator synchronization

An analysis of how coupling parameters influence nonlinear oscillator synchronization An analysis of how coupling parameters influence nonlinear oscillator synchronization Morris Huang, 1 Ben McInroe, 2 Mark Kingsbury, 2 and Will Wagstaff 3 1) School of Mechanical Engineering, Georgia Institute

More information

Neural Coding: Integrate-and-Fire Models of Single and Multi-Neuron Responses

Neural Coding: Integrate-and-Fire Models of Single and Multi-Neuron Responses Neural Coding: Integrate-and-Fire Models of Single and Multi-Neuron Responses Jonathan Pillow HHMI and NYU http://www.cns.nyu.edu/~pillow Oct 5, Course lecture: Computational Modeling of Neuronal Systems

More information

Memory and hypoellipticity in neuronal models

Memory and hypoellipticity in neuronal models Memory and hypoellipticity in neuronal models S. Ditlevsen R. Höpfner E. Löcherbach M. Thieullen Banff, 2017 What this talk is about : What is the effect of memory in probabilistic models for neurons?

More information

A Universal Model for Spike-Frequency Adaptation

A Universal Model for Spike-Frequency Adaptation A Universal Model for Spike-Frequency Adaptation Jan Benda 1 & Andreas V. M. Herz 2 1 Department of Physics, University of Ottawa, Ottawa, Ontario, K1N 6N5, Canada j.benda@biologie.hu-berlin.de 2 Institute

More information

Learning Cycle Linear Hybrid Automata for Excitable Cells

Learning Cycle Linear Hybrid Automata for Excitable Cells Learning Cycle Linear Hybrid Automata for Excitable Cells Sayan Mitra Joint work with Radu Grosu, Pei Ye, Emilia Entcheva, I V Ramakrishnan, and Scott Smolka HSCC 2007 Pisa, Italy Excitable Cells Outline

More information

Introduction and the Hodgkin-Huxley Model

Introduction and the Hodgkin-Huxley Model 1 Introduction and the Hodgkin-Huxley Model Richard Bertram Department of Mathematics and Programs in Neuroscience and Molecular Biophysics Florida State University Tallahassee, Florida 32306 Reference:

More information

We observe the model neuron s response to constant input current, studying the dependence of:

We observe the model neuron s response to constant input current, studying the dependence of: BioE332A Lab 2, 21 1 Lab 2 December 23, 29 A Spiking Neuron Like biological neurons, the model neuron we characterize in this lab has a repertoire of (model) ion-channel populations. Here we focus on the

More information

Phase Synchronization

Phase Synchronization Phase Synchronization Lecture by: Zhibin Guo Notes by: Xiang Fan May 10, 2016 1 Introduction For any mode or fluctuation, we always have where S(x, t) is phase. If a mode amplitude satisfies ϕ k = ϕ k

More information

Coupling in Networks of Neuronal Oscillators. Carter Johnson

Coupling in Networks of Neuronal Oscillators. Carter Johnson Coupling in Networks of Neuronal Oscillators Carter Johnson June 15, 2015 1 Introduction Oscillators are ubiquitous in nature. From the pacemaker cells that keep our hearts beating to the predator-prey

More information

2 1. Introduction. Neuronal networks often exhibit a rich variety of oscillatory behavior. The dynamics of even a single cell may be quite complicated

2 1. Introduction. Neuronal networks often exhibit a rich variety of oscillatory behavior. The dynamics of even a single cell may be quite complicated GEOMETRIC ANALYSIS OF POPULATION RHYTHMS IN SYNAPTICALLY COUPLED NEURONAL NETWORKS J. Rubin and D. Terman Dept. of Mathematics; Ohio State University; Columbus, Ohio 43210 Abstract We develop geometric

More information

Nonlinear Observer Design and Synchronization Analysis for Classical Models of Neural Oscillators

Nonlinear Observer Design and Synchronization Analysis for Classical Models of Neural Oscillators Nonlinear Observer Design and Synchronization Analysis for Classical Models of Neural Oscillators Ranjeetha Bharath and Jean-Jacques Slotine Massachusetts Institute of Technology ABSTRACT This work explores

More information

Chapter 24 BIFURCATIONS

Chapter 24 BIFURCATIONS Chapter 24 BIFURCATIONS Abstract Keywords: Phase Portrait Fixed Point Saddle-Node Bifurcation Diagram Codimension-1 Hysteresis Hopf Bifurcation SNIC Page 1 24.1 Introduction In linear systems, responses

More information

FRTF01 L8 Electrophysiology

FRTF01 L8 Electrophysiology FRTF01 L8 Electrophysiology Lecture Electrophysiology in general Recap: Linear Time Invariant systems (LTI) Examples of 1 and 2-dimensional systems Stability analysis The need for non-linear descriptions

More information

A realistic neocortical axonal plexus model has implications for neocortical processing and temporal lobe epilepsy

A realistic neocortical axonal plexus model has implications for neocortical processing and temporal lobe epilepsy A realistic neocortical axonal plexus model has implications for neocortical processing and temporal lobe epilepsy Neocortical Pyramidal Cells Can Send Signals to Post-Synaptic Cells Without Firing Erin

More information

The homogeneous Poisson process

The homogeneous Poisson process The homogeneous Poisson process during very short time interval Δt there is a fixed probability of an event (spike) occurring independent of what happened previously if r is the rate of the Poisson process,

More information

Action Potential Initiation in the Hodgkin-Huxley Model

Action Potential Initiation in the Hodgkin-Huxley Model Action Potential Initiation in the Hodgkin-Huxley Model The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation Published Version

More information

Electrophysiology of the neuron

Electrophysiology of the neuron School of Mathematical Sciences G4TNS Theoretical Neuroscience Electrophysiology of the neuron Electrophysiology is the study of ionic currents and electrical activity in cells and tissues. The work of

More information

Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests

Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests J. Benda, M. Bethge, M. Hennig, K. Pawelzik & A.V.M. Herz February, 7 Abstract Spike-frequency adaptation is a common feature of

More information

STUDENT PAPER. Santiago Santana University of Illinois, Urbana-Champaign Blue Waters Education Program 736 S. Lombard Oak Park IL, 60304

STUDENT PAPER. Santiago Santana University of Illinois, Urbana-Champaign Blue Waters Education Program 736 S. Lombard Oak Park IL, 60304 STUDENT PAPER Differences between Stochastic and Deterministic Modeling in Real World Systems using the Action Potential of Nerves. Santiago Santana University of Illinois, Urbana-Champaign Blue Waters

More information

1. Introduction - Reproducibility of a Neuron. 3. Introduction Phase Response Curve. 2. Introduction - Stochastic synchronization. θ 1. c θ.

1. Introduction - Reproducibility of a Neuron. 3. Introduction Phase Response Curve. 2. Introduction - Stochastic synchronization. θ 1. c θ. . Introduction - Reproducibility of a euron Science (995 Constant stimuli led to imprecise spike trains, whereas stimuli with fluctuations produced spike trains with timing reproducible to less than millisecond.

More information

On the Response of Neurons to Sinusoidal Current Stimuli: Phase Response Curves and Phase-Locking

On the Response of Neurons to Sinusoidal Current Stimuli: Phase Response Curves and Phase-Locking On the Response of Neurons to Sinusoidal Current Stimuli: Phase Response Curves and Phase-Locking Michael J. Schaus and Jeff Moehlis Abstract A powerful technique for analyzing mathematical models for

More information

Deconstructing Actual Neurons

Deconstructing Actual Neurons 1 Deconstructing Actual Neurons Richard Bertram Department of Mathematics and Programs in Neuroscience and Molecular Biophysics Florida State University Tallahassee, Florida 32306 Reference: The many ionic

More information

Functional Identification of Spike-Processing Neural Circuits

Functional Identification of Spike-Processing Neural Circuits Functional Identification of Spike-Processing Neural Circuits Aurel A. Lazar 1, Yevgeniy B. Slutskiy 1 1 Department of Electrical Engineering, Columbia University, New York, NY 127. Keywords: System identification,

More information

Synchrony, stability, and firing patterns in pulse-coupled oscillators

Synchrony, stability, and firing patterns in pulse-coupled oscillators Physica D 163 (2002) 191 216 Synchrony, stability, and firing patterns in pulse-coupled oscillators Pranay Goel, Bard Ermentrout Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260,

More information

Neurophysiology of a VLSI spiking neural network: LANN21

Neurophysiology of a VLSI spiking neural network: LANN21 Neurophysiology of a VLSI spiking neural network: LANN21 Stefano Fusi INFN, Sezione Roma I Università di Roma La Sapienza Pza Aldo Moro 2, I-185, Roma fusi@jupiter.roma1.infn.it Paolo Del Giudice Physics

More information

Effects of Noisy Drive on Rhythms in Networks of Excitatory and Inhibitory Neurons

Effects of Noisy Drive on Rhythms in Networks of Excitatory and Inhibitory Neurons Effects of Noisy Drive on Rhythms in Networks of Excitatory and Inhibitory Neurons Christoph Börgers 1 and Nancy Kopell 2 1 Department of Mathematics, Tufts University, Medford, MA 2155 2 Department of

More information

3.3 Simulating action potentials

3.3 Simulating action potentials 6 THE HODGKIN HUXLEY MODEL OF THE ACTION POTENTIAL Fig. 3.1 Voltage dependence of rate coefficients and limiting values and time constants for the Hodgkin Huxley gating variables. (a) Graphs of forward

More information

Parameterized phase response curves for characterizing neuronal behaviors under transient conditions

Parameterized phase response curves for characterizing neuronal behaviors under transient conditions J Neurophysiol 19: 36 316, 13. First published January 3, 13; doi:1.115/jn.94.1. Parameterized phase response curves for characterizing neuronal behaviors under transient conditions Óscar Miranda-omínguez

More information

Frequency Adaptation and Bursting

Frequency Adaptation and Bursting BioE332A Lab 3, 2010 1 Lab 3 January 5, 2010 Frequency Adaptation and Bursting In the last lab, we explored spiking due to sodium channels. In this lab, we explore adaptation and bursting due to potassium

More information

Synaptic Input. Linear Model of Synaptic Transmission. Professor David Heeger. September 5, 2000

Synaptic Input. Linear Model of Synaptic Transmission. Professor David Heeger. September 5, 2000 Synaptic Input Professor David Heeger September 5, 2000 The purpose of this handout is to go a bit beyond the discussion in Ch. 6 of The Book of Genesis on synaptic input, and give some examples of how

More information

BME 5742 Biosystems Modeling and Control

BME 5742 Biosystems Modeling and Control BME 5742 Biosystems Modeling and Control Hodgkin-Huxley Model for Nerve Cell Action Potential Part 1 Dr. Zvi Roth (FAU) 1 References Hoppensteadt-Peskin Ch. 3 for all the mathematics. Cooper s The Cell

More information

EEG- Signal Processing

EEG- Signal Processing Fatemeh Hadaeghi EEG- Signal Processing Lecture Notes for BSP, Chapter 5 Master Program Data Engineering 1 5 Introduction The complex patterns of neural activity, both in presence and absence of external

More information

Biological Modeling of Neural Networks

Biological Modeling of Neural Networks Week 4 part 2: More Detail compartmental models Biological Modeling of Neural Networks Week 4 Reducing detail - Adding detail 4.2. Adding detail - apse -cable equat Wulfram Gerstner EPFL, Lausanne, Switzerland

More information

Approximate, not perfect synchrony maximizes the downstream effectiveness of excitatory neuronal ensembles

Approximate, not perfect synchrony maximizes the downstream effectiveness of excitatory neuronal ensembles Börgers et al. RESEARCH Approximate, not perfect synchrony maximizes the downstream effectiveness of excitatory neuronal ensembles Christoph Börgers *, Jie Li and Nancy Kopell 2 * Correspondence: cborgers@tufts.edu

More information

Synaptic input statistics tune the variability and reproducibility of neuronal responses

Synaptic input statistics tune the variability and reproducibility of neuronal responses CHAOS 16, 06105 006 Synaptic input statistics tune the variability and reproducibility of neuronal responses Alan D. Dorval II a and John A. White Department of Biomedical Engineering, Center for BioDynamics,

More information

Neocortical Pyramidal Cells Can Control Signals to Post-Synaptic Cells Without Firing:

Neocortical Pyramidal Cells Can Control Signals to Post-Synaptic Cells Without Firing: Neocortical Pyramidal Cells Can Control Signals to Post-Synaptic Cells Without Firing: a model of the axonal plexus Erin Munro Department of Mathematics Boston University 4/14/2011 Gap junctions on pyramidal

More information

1 Hodgkin-Huxley Theory of Nerve Membranes: The FitzHugh-Nagumo model

1 Hodgkin-Huxley Theory of Nerve Membranes: The FitzHugh-Nagumo model 1 Hodgkin-Huxley Theory of Nerve Membranes: The FitzHugh-Nagumo model Alan Hodgkin and Andrew Huxley developed the first quantitative model of the propagation of an electrical signal (the action potential)

More information

Patterns of Synchrony in Neural Networks with Spike Adaptation

Patterns of Synchrony in Neural Networks with Spike Adaptation Patterns of Synchrony in Neural Networks with Spike Adaptation C. van Vreeswijky and D. Hanselz y yracah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem, 9194 Israel

More information

CORRELATION TRANSFER FROM BASAL GANGLIA TO THALAMUS IN PARKINSON S DISEASE. by Pamela Reitsma. B.S., University of Maine, 2007

CORRELATION TRANSFER FROM BASAL GANGLIA TO THALAMUS IN PARKINSON S DISEASE. by Pamela Reitsma. B.S., University of Maine, 2007 CORRELATION TRANSFER FROM BASAL GANGLIA TO THALAMUS IN PARKINSON S DISEASE by Pamela Reitsma B.S., University of Maine, 27 Submitted to the Graduate Faculty of the Department of Mathematics in partial

More information

Dynamical modelling of systems of coupled oscillators

Dynamical modelling of systems of coupled oscillators Dynamical modelling of systems of coupled oscillators Mathematical Neuroscience Network Training Workshop Edinburgh Peter Ashwin University of Exeter 22nd March 2009 Peter Ashwin (University of Exeter)

More information

9 Generation of Action Potential Hodgkin-Huxley Model

9 Generation of Action Potential Hodgkin-Huxley Model 9 Generation of Action Potential Hodgkin-Huxley Model (based on chapter 2, W.W. Lytton, Hodgkin-Huxley Model) 9. Passive and active membrane models In the previous lecture we have considered a passive

More information

Spike-Frequency Adaptation of a Generalized Leaky Integrate-and-Fire Model Neuron

Spike-Frequency Adaptation of a Generalized Leaky Integrate-and-Fire Model Neuron Journal of Computational Neuroscience 10, 25 45, 2001 c 2001 Kluwer Academic Publishers. Manufactured in The Netherlands. Spike-Frequency Adaptation of a Generalized Leaky Integrate-and-Fire Model Neuron

More information

Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995)

Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995) Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten Lecture 2a The Neuron - overview of structure From Anderson (1995) 2 Lect_2a_Mathematica.nb Basic Structure Information flow:

More information

IN THIS turorial paper we exploit the relationship between

IN THIS turorial paper we exploit the relationship between 508 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 3, MAY 1999 Weakly Pulse-Coupled Oscillators, FM Interactions, Synchronization, Oscillatory Associative Memory Eugene M. Izhikevich Abstract We study

More information

Two dimensional synaptically generated traveling waves in a theta-neuron neuralnetwork

Two dimensional synaptically generated traveling waves in a theta-neuron neuralnetwork Neurocomputing 38}40 (2001) 789}795 Two dimensional synaptically generated traveling waves in a theta-neuron neuralnetwork Remus Osan*, Bard Ermentrout Department of Mathematics, University of Pittsburgh,

More information

Proceedings of Neural, Parallel, and Scientific Computations 4 (2010) xx-xx PHASE OSCILLATOR NETWORK WITH PIECEWISE-LINEAR DYNAMICS

Proceedings of Neural, Parallel, and Scientific Computations 4 (2010) xx-xx PHASE OSCILLATOR NETWORK WITH PIECEWISE-LINEAR DYNAMICS Proceedings of Neural, Parallel, and Scientific Computations 4 (2010) xx-xx PHASE OSCILLATOR NETWORK WITH PIECEWISE-LINEAR DYNAMICS WALTER GALL, YING ZHOU, AND JOSEPH SALISBURY Department of Mathematics

More information

ELECTROPHYSIOLOGICAL STUDIES

ELECTROPHYSIOLOGICAL STUDIES COUPLING AND SYNCHRONY IN NEURONAL NETWORKS: ELECTROPHYSIOLOGICAL STUDIES A Dissertation Presented to The Academic Faculty by Amanda Jervis Preyer In Partial Fulfillment Of the Requirements for the Degree

More information

Chimera states in networks of biological neurons and coupled damped pendulums

Chimera states in networks of biological neurons and coupled damped pendulums in neural models in networks of pendulum-like elements in networks of biological neurons and coupled damped pendulums J. Hizanidis 1, V. Kanas 2, A. Bezerianos 3, and T. Bountis 4 1 National Center for

More information

University of Bristol - Explore Bristol Research. Early version, also known as pre-print

University of Bristol - Explore Bristol Research. Early version, also known as pre-print Chizhov, Anto, V., Rodrigues, S., & Terry, JR. (6). A comparative analysis of an EEG model and a conductance-based. https://doi.org/1.116/j.physleta.7.4.6 Early version, also known as pre-print Link to

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION DOI: 1.138/NPHYS535 Spontaneous synchrony in power-grid networks Adilson E. Motter, Seth A. Myers, Marian Anghel and Takashi Nishikawa Supplementary Sections S1. Power-grid data. The data required for

More information

Chapter 2 Phase Resetting Neural Oscillators: Topological Theory Versus the Real World

Chapter 2 Phase Resetting Neural Oscillators: Topological Theory Versus the Real World Chapter 2 Phase Resetting Neural Oscillators: Topological Theory Versus the Real World Trine Krogh-Madsen, Robert Butera, G. Bard Ermentrout, and Leon Glass Abstract Biological oscillations, despite their

More information

Gamma and Theta Rhythms in Biophysical Models of Hippocampal Circuits

Gamma and Theta Rhythms in Biophysical Models of Hippocampal Circuits Gamma and Theta Rhythms in Biophysical Models of Hippocampal Circuits N. Kopell, C. Börgers, D. Pervouchine, P. Malerba, and A. Tort Introduction The neural circuits of the hippocampus are extremely complex,

More information

MATH 723 Mathematical Neuroscience Spring 2008 Instructor: Georgi Medvedev

MATH 723 Mathematical Neuroscience Spring 2008 Instructor: Georgi Medvedev MATH 723 Mathematical Neuroscience Spring 28 Instructor: Georgi Medvedev 2 Lecture 2. Approximate systems. 2. Reduction of the HH model to a 2D system. The original HH system consists of 4 differential

More information

Balance of Electric and Diffusion Forces

Balance of Electric and Diffusion Forces Balance of Electric and Diffusion Forces Ions flow into and out of the neuron under the forces of electricity and concentration gradients (diffusion). The net result is a electric potential difference

More information

MATH 3104: THE HODGKIN-HUXLEY EQUATIONS

MATH 3104: THE HODGKIN-HUXLEY EQUATIONS MATH 3104: THE HODGKIN-HUXLEY EQUATIONS Parallel conductance model A/Prof Geoffrey Goodhill, Semester 1, 2009 So far we have modelled neuronal membranes by just one resistance (conductance) variable. We

More information

STOCHASTIC NEURAL OSCILLATORS. by Aushra Abouzeid B.A., Knox College, 1994 M.S., University of Illinois at Chicago, 2003

STOCHASTIC NEURAL OSCILLATORS. by Aushra Abouzeid B.A., Knox College, 1994 M.S., University of Illinois at Chicago, 2003 STOCHASTIC NEURAL OSCILLATORS by Aushra Abouzeid B.A., Knox College, 1994 M.S., University of Illinois at Chicago, 23 Submitted to the Graduate Faculty of the Department of Mathematics in partial fulfillment

More information

Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons Yan Karklin and Eero P. Simoncelli NYU Overview Efficient coding is a well-known objective for the evaluation and

More information