1 A coupled excitatory-inhibitory (E-I) pair as a neural oscillator


A basic element in the basic circuit is an interacting, or coupled, pair of excitatory and inhibitory cells. We will refer to such a pair as an E-I pair. Let x and y be the states, or membrane potentials, of the excitatory and inhibitory cells respectively, and let their outputs and time constants be g_x(x), g_y(y) and τ_x, τ_y respectively. We also use the notation α_x ≡ 1/τ_x and α_y ≡ 1/τ_y for the constants of decay of the neurons toward their resting states. According to equation (??), this basic element follows the equations of motion

ẋ = −α_x x − h g_y(y) + I        (1)
ẏ = −α_y y + w g_x(x) + I_c        (2)

where I and I_c are the external inputs to the two cells, and w > 0 and h > 0 denote the excitatory and inhibitory connection weights. Since g_x(x) is the output of this pair, the input-output transform of the pair is simply (I, I_c) → g_x(x).

For illustration, consider the example where g_x(x=0) = g_y(y=0) = 0, which is usually the case for real neurons at their resting potential. When I = I_c = 0, the static resting state x(t) = y(t) = 0 is an easily verifiable solution. Thus we have one example point, (I=0, I_c=0) → g_x(x) = 0, in the input-output relationship.

The fixed point of this system is (x̄, ȳ), where ẋ = ẏ = 0, satisfying

0 = −α_x x̄ − h g_y(ȳ) + I        (3)
0 = −α_y ȳ + w g_x(x̄) + I_c        (4)

In the example above, x̄ = ȳ = 0 is the fixed point under input (I, I_c) = (0, 0). With a nonzero input I_c to the inhibitory cell, i.e., (I, I_c) = (0, I_c > 0), the inhibitory cell raises its state to ȳ = I_c/α_y according to equation (4), as long as g_x(x̄) = 0. When the inhibitory cell is charged enough that g_y(ȳ) becomes nonzero, the excitatory cell is inhibited by the input −h g_y(ȳ) to a suppressed state x̄ = −h g_y(ȳ)/α_x according to equation (3). The output g_x(x̄) nevertheless stays zero, since g_x(·) is an increasing function of its argument. Hence (I=0, I_c>0) → g_x(x̄) = 0 is another example point in the input-output relationship.

The (I=0, I_c>0) input gives a trivial, open-loop example for the pair of neurons: while input I_c excites the inhibitory cell, the excitatory cell is passively inhibited by the inhibitory cell without giving nontrivial feedback. The pair becomes a nontrivial, closed-loop system when the input I to the excitatory cell is large enough to evoke a nonzero output g_x(x) > 0 despite the inhibitory input −h g_y(y) from the cell's partner. Intuitively, the excitatory cell's fixed point x̄ increases with input I and decreases with input I_c, i.e., the excitatory cell would be more excited without inhibitory feedback. Meanwhile, the inhibitory cell's fixed point ȳ increases with I_c, and with I via the excitation w g_x(x̄) from the excitatory cell. In general, the fixed point is where the two curves described by equations (3) and (4) intersect in the x-y coordinate plane.
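As a concrete illustration of equations (1)-(4), the following minimal sketch integrates the E-I pair numerically and locates its fixed point. The sigmoid gain function and all parameter values are illustrative assumptions, not the values used in the figures of these notes.

```python
# Minimal sketch of the E-I pair of equations (1)-(2); the sigmoid gain and
# all parameter values here are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

alpha_x = alpha_y = 1.0          # decay constants (1/tau)
h, w = 1.0, 3.0                  # inhibitory and excitatory connection weights
I, Ic = 4.0, 0.0                 # external inputs

def g(u):                        # an assumed sigmoid output function
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def rhs(t, s):                   # equations (1) and (2)
    x, y = s
    return [-alpha_x * x - h * g(y) + I,
            -alpha_y * y + w * g(x) + Ic]

# Fixed point (xbar, ybar): solve equations (3) and (4)
xbar, ybar = fsolve(lambda s: rhs(0.0, s), [0.0, 0.0])

# Integrate the dynamics from rest; with damping the state relaxes toward the fixed point
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.01)
print("fixed point:", xbar, ybar)
print("state at t = 20:", sol.y[:, -1])
```

Solving equations (3)-(4) with a root finder and integrating (1)-(2) with a general-purpose ODE solver keeps the sketch independent of the particular choice of gain functions.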

Figure 1: The basic circuit element. A: a pair of excitatory and inhibitory cells, interacting with each other, under external inputs (I, I_c) and with output g_x(x). B: the piecewise-linear input-output functions g_x(x) and g_y(y) used as the example for illustration. C: the fixed point (x̄, ȳ) as the intersection of the ẋ = 0 and ẏ = 0 curves; here I = 4, I_c = 0, with g_x(x) and g_y(y) as in B. D: the output g_x(x̄) as a function of input I under various inputs I_c to the inhibitory cell, or in the condition without negative feedback, h = 0. Solid, dashed, and dotted curves: three different values of I_c; dot-dashed curve: h = 0. The parameters used are the thresholds and gains of g_x and g_y shown in B, together with the connection weights h and w = 3.

1.1 Gain control in the input-output relationship

To see in more detail how the fixed point (x̄, ȳ) depends on the input (I, I_c), we analyse how small (infinitesimal) changes (δI, δI_c) in the input lead to changes (δx̄, δȳ) of the fixed point. Differentiating equations (3) and (4) above,

0 = −α_x δx̄ − h g'_y(ȳ) δȳ + δI        (5)
0 = −α_y δȳ + w g'_x(x̄) δx̄ + δI_c        (6)

Figure 2: Using the same neural parameters as in Fig. 1, the variations of x̄, ȳ, and g_y(ȳ) with input I, under various inputs I_c or in the open-loop (h = 0) condition. Solid curve: the baseline I_c; dash-dotted: h = 0; dashed and dotted: two other values of I_c.

Solving the linear equations for (δx̄, δȳ),

δx̄ = [α_y δI − h g'_y(ȳ) δI_c] / [α_x α_y + h w g'_y(ȳ) g'_x(x̄)]        (7)
δȳ = [w g'_x(x̄) δI + α_x δI_c] / [α_x α_y + h w g'_y(ȳ) g'_x(x̄)]        (8)

We can understand this solution as follows. First, consider the case h = 0, i.e., no inhibitory feedback to the excitatory cell. Then δx̄ = δI/α_x, or x̄ = I/α_x, the asymptotic state of a cell under input I, as in equation (??). Meanwhile, δȳ = [w g'_x(x̄) δx̄ + δI_c]/α_y, or equivalently ȳ = [w g_x(x̄) + I_c]/α_y, the asymptotic state of a cell driven by input w g_x(x̄) + I_c. Second, consider the case w = 0, when the excitatory cell does not drive the inhibitory one. Then (δx̄, δȳ) = ([δI − h g'_y(ȳ) δȳ]/α_x, δI_c/α_y), or equivalently (x̄, ȳ) = ([I − h g_y(ȳ)]/α_x, I_c/α_y): the asymptotic states of cells driven by inputs I − h g_y(ȳ) and I_c respectively. With mutual interaction (hw > 0), both cells become less sensitive to their direct external inputs.
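The closed-form sensitivities (7) and (8) can be checked numerically by perturbing the inputs and re-solving for the fixed point. The gain function and parameter values below are illustrative assumptions.

```python
# Check of equations (7)-(8): compare the analytic sensitivities of the fixed
# point with finite differences.  Parameters and gains are assumed for illustration.
import numpy as np
from scipy.optimize import fsolve

alpha_x = alpha_y = 1.0
h, w = 1.0, 3.0

def g(u):                                   # assumed sigmoid gain
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def gprime(u):                              # its derivative
    s = g(u)
    return 4.0 * s * (1.0 - s)

def fixed_point(I, Ic):                     # solve equations (3)-(4)
    eqs = lambda s: [-alpha_x * s[0] - h * g(s[1]) + I,
                     -alpha_y * s[1] + w * g(s[0]) + Ic]
    return fsolve(eqs, [0.0, 0.0])

I, Ic, eps = 1.0, 0.0, 1e-6
xb, yb = fixed_point(I, Ic)

det = alpha_x * alpha_y + h * w * gprime(yb) * gprime(xb)
dx_dI_analytic = alpha_y / det              # equation (7) with delta_Ic = 0
dy_dIc_analytic = alpha_x / det             # equation (8) with delta_I  = 0

dx_dI_numeric = (fixed_point(I + eps, Ic)[0] - xb) / eps
dy_dIc_numeric = (fixed_point(I, Ic + eps)[1] - yb) / eps
print("dxbar/dI  :", dx_dI_analytic, dx_dI_numeric)
print("dybar/dIc :", dy_dIc_analytic, dy_dIc_numeric)
```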

More specifically, the sensitivities referred to are

δx̄/δI = 1/α_x  and  δȳ/δI_c = 1/α_y,   when hw = 0        (9)
δx̄/δI = 1/[α_x + h w g'_y(ȳ) g'_x(x̄)/α_y]  and  δȳ/δI_c = 1/[α_y + h w g'_y(ȳ) g'_x(x̄)/α_x],   when hw ≠ 0        (10)

All the sensitivities δx̄/δI, δx̄/δI_c, δȳ/δI, and δȳ/δI_c depend on the state values x̄ and ȳ. They are reduced when g'_x(x̄) g'_y(ȳ) is large, i.e., when the negative feedback loop between the two cells operates in a high-gain region, with high gains g'_x(x̄) and g'_y(ȳ) of the input-output functions of both cells. Meanwhile, a cell is insensitive to the external input to the other cell if the other cell is in a low-gain region (small g'_x(x̄) or g'_y(ȳ)).

Most important, however, is how the output g_x(x) of this basic circuit element depends on the input, i.e., the (I, I_c) → g_x(x) transform and the sensitivities δg_x(x̄)/δI and δg_x(x̄)/δI_c:

δg_x(x̄)/δI = g'_x(x̄) δx̄/δI = g'_x(x̄)/[α_x + h w g'_y(ȳ) g'_x(x̄)/α_y]        (11)
δg_x(x̄)/δI_c = −[h g'_y(ȳ)/α_y] δg_x(x̄)/δI        (12)

For concreteness and illustration, consider the following example for g_x(x) and g_y(y):

g_x(x) = 0 if x < x_th;  g (x − x_th) if x_th ≤ x ≤ x_sat;  g (x_sat − x_th) if x > x_sat        (13)
g_y(y) = 0 if y < y_th1;  g_1 (y − y_th1) if y_th1 ≤ y ≤ y_th2;  g_1 (y_th2 − y_th1) + g_2 (y − y_th2) if y > y_th2        (14)

The excitatory cell has a piecewise-linear input-output function g_x(x), with firing threshold and saturation at x_th and x_sat respectively, and a gain of g within the operating range x ∈ (x_th, x_sat). The inhibitory cell also has a piecewise-linear input-output function: its gain is g_1 above the threshold y_th1 and changes to g_2 at y_th2 > y_th1, continuing without saturation. This is just an approximation of a presumed reality in which the saturation lies beyond the usual operating range, which is not uncommon for some inhibitory cells.

As seen in Fig. 1 (C, D), input I drives the output g_x(x) less effectively under inhibitory feedback h > 0 than in the open-loop condition h = 0, starting at the input values for which the inhibitory cell becomes active, g_y(y) > 0. Since the gain g'_y(y) of the inhibitory cell increases with its state value y, a higher y leads to a lower sensitivity δg_x(x)/δI, the slope of the g_x(x)-versus-I curve. The piecewise-linear changes in the slope δg_x(x)/δI correspond to the piecewise changes in g_x(x) and g_y(y): a closer examination of the plots in Fig. 1 shows that all slope changes are triggered by one of the cells passing a threshold or saturation point, x_th, x_sat, y_th1, or y_th2.
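The piecewise-linear gains of equations (13)-(14) and the resulting I → g_x(x̄) transform of Fig. 1D can be sketched as follows. The numerical thresholds and gains of the original figure were not preserved, so the values here are only illustrative.

```python
# Piecewise-linear gain functions of equations (13)-(14) and the I -> g_x(xbar)
# transform of Fig. 1D.  Threshold and gain values are illustrative assumptions.
import numpy as np
from scipy.optimize import fsolve

x_th, x_sat, g_exc = 0.0, 2.0, 1.0           # assumed excitatory threshold, saturation, gain
y_th1, y_th2, g1, g2 = 0.0, 1.0, 0.5, 1.0    # assumed inhibitory thresholds and gains
alpha_x = alpha_y = 1.0
h, w = 1.0, 3.0

def gx(x):                                   # equation (13)
    return g_exc * (np.clip(x, x_th, x_sat) - x_th)

def gy(y):                                   # equation (14): gain g1, then g2 above y_th2
    y1 = np.clip(y, y_th1, y_th2) - y_th1
    y2 = np.maximum(y - y_th2, 0.0)
    return g1 * y1 + g2 * y2

def output(I, Ic, hval):                     # g_x(xbar) at the fixed point
    eqs = lambda s: [-alpha_x * s[0] - hval * gy(s[1]) + I,
                     -alpha_y * s[1] + w * gx(s[0]) + Ic]
    xb, yb = fsolve(eqs, [0.0, 0.0])
    return gx(xb)

for I in np.linspace(0.0, 4.0, 9):
    print(f"I={I:4.1f}  closed loop: {output(I, 0.0, h):.3f}   open loop (h=0): {output(I, 0.0, 0.0):.3f}")
```

The open-loop column grows with the full slope 1/α_x until the excitatory cell saturates, while the closed-loop column grows more slowly once the inhibitory cell is active, which is the gain-control effect described above.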

Note that although the sensitivity δx̄/δI generally decreases with I, it suddenly increases with I at higher input values, back toward its maximum value 1/α_x. This happens when x reaches saturation, x ≥ x_sat, such that g'_x(x) = 0, effectively severing the transmission of input changes in I to the inhibitory cell and thus disabling the negative feedback in response to the change. This is reflected in the corresponding zero sensitivity δȳ/δI of the inhibitory cell to changes in I. The increased cell state x beyond saturation cannot be reflected in the output g_x(x), because of the zero gain g'_x(x) = 0; see Fig. 1C. It will, however, slow down the response of g_x(x) to any subsequent input changes, since the cell takes time to recover from a higher state.

1.2 Example: Gain control and contextual modulation of visual inputs

The primary visual cortex responds to the contrast of visual input in a way similar to the E-I unit responding to input I. The excitatory cell models one, or a local group of, pyramidal cells that send output to higher visual areas. The inhibitory cell models the local interneurons. Visual image input is filtered through the retina and other neural circuits before affecting the pyramidal cell, in such a way that I relates to the contrast of a small local image area, such as a luminance edge, as in Fig. 3. Contributions to I_c come from other neighbouring neurons or sources outside the cortical area, including top-down feedback from higher visual areas and perhaps some components of the direct visual inputs. From Fig. 1D and equation (11), we can see that the contrast gain δg_x(x̄)/δI of the E-I unit can be modulated by the input level I_c, which could be controlled by higher visual areas or other neural sources to reflect, for instance, contextual or environmental conditions. Given a particular transformation I → g_x(x), the neuron is most sensitive to the range of contrast I in which the gain δg_x(x̄)/δI is large. By adjusting the contrast gain, or the transformation I → g_x(x), within the constraint of the maximum possible response rate g_x(x), the neuron can match its dynamic range to a particular input environment, such as a bright or dim environment, often termed the input adaptation level.

The response of a visual cortical neuron to an edge within its receptive field has been observed to depend on the visual input context outside and near the receptive field of the neuron. In other words, while the contextual input outside the receptive field does not evoke responses without a direct input within the receptive field, it can modulate the neural response to the direct visual input within the receptive field. The contextual modulation can be facilitatory or suppressive, increasing or decreasing the neural response respectively, and it often depends on the contrast of the direct visual input. For example, neural responses to a weak image edge can often be facilitated by a contextual edge aligned with it. Such facilitation can serve to detect a faint edge segment, perhaps faint due to input noise, within a long line or smooth contour. However, the same contextual edge can act to suppress the response to a strong direct edge segment that is easily detectable on its own. The contextual inputs are direct inputs to other E-I units nearby.

The neural substrates for the contextual modulation are mainly the neural connections from the excitatory cells of one E-I unit to another. Fig. 3 shows a schematic example of contextual interactions between two aligned edges in an input image.

Figure 3: A: the neural circuit for visual cortical responses to the direct visual input I and a contextual visual input. The E-I unit (x, y) under study receives the direct input; a neighbouring unit receives the contextual input, and intra-cortical connections from that unit change the total input to the unit under study from (I, I_c) to (I + ΔI, I_c + ΔI_c). B: whether the effect of the contextual modulation (ΔI, ΔI_c) is facilitative or suppressive depends on the contrast input I; the boundary between the facilitative and suppressive regions is ΔI/ΔI_c = h g'_y(ȳ)/α_y.

The contextual modulation can be modelled as follows. Let (I, I_c) be the input to a particular E-I unit when there is no contextual visual input. With the contextual input, the total inputs to the excitatory and inhibitory cells of the E-I unit under study become

I → I + ΔI   and   I_c → I_c + ΔI_c.        (15)

Consequently, the change in g_x(x) is approximately, according to equations (11) and (12),

Δg_x(x) ≈ (δg_x(x)/δI) ΔI + (δg_x(x)/δI_c) ΔI_c = (δg_x(x)/δI) [ΔI − h g'_y(ȳ) ΔI_c/α_y]        (16)

It has a facilitative term, (δg_x(x)/δI) ΔI, caused by ΔI, and a suppressive term, −(δg_x(x)/δI) h g'_y(ȳ) ΔI_c/α_y, caused by ΔI_c. The net result is facilitation or suppression depending on the relative strength of these two terms. It follows that contextual facilitation results when

ΔI/ΔI_c > h g'_y(ȳ)/α_y        (17)

and contextual suppression results otherwise. Note that h g'_y(ȳ)/α_y describes the sensitivity to input I_c relative to that to input I. Accordingly, whether the contextual modulation is facilitatory or suppressive depends on the internal state ȳ. Given a particular contextual modulatory input (ΔI, ΔI_c), the larger the gain g'_y(ȳ), the more likely the contextual modulation is suppressive.

Both physiological and psychophysical data indicate that contextual modulation is more likely to be suppressive when the direct visual input has a higher contrast. This is the so-called contrast dependence of the contextual influence. It can be understood if we assume that a higher direct contrast input I gives a higher gain g'_y(ȳ) in the inhibitory interneuron. Since δȳ/δI ≥ 0, this means that the interneuron y normally operates in a region of increasing gain, i.e., g''_y(ȳ) > 0. The higher gain g'_y(ȳ) makes the E-I unit more sensitive to the suppressive contextual modulation ΔI_c than to the facilitatory modulation ΔI.
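The criterion (17) can be evaluated directly. The sketch below uses toy gain functions of my own choosing, a threshold-linear excitatory output and an expansive (quadratic) inhibitory output, so that g''_y > 0 as assumed in the text; the perturbation (ΔI, ΔI_c) and all other values are illustrative.

```python
# Contrast dependence of contextual modulation, equations (16)-(17).
# Toy gain functions chosen so the inhibitory gain keeps increasing with y
# (g''_y > 0), as assumed in the text; all values are illustrative.
import numpy as np
from scipy.optimize import fsolve

alpha_x = alpha_y = 1.0
h, w = 1.0, 1.0
dI, dIc = 0.3, 0.3                      # an assumed contextual perturbation (dI, dIc)

gx  = lambda x: np.maximum(x, 0.0)      # threshold-linear excitatory output
gxp = lambda x: np.where(x > 0.0, 1.0, 0.0)
gy  = lambda y: np.maximum(y, 0.0) ** 2  # expansive inhibitory output
gyp = lambda y: 2.0 * np.maximum(y, 0.0)

def fixed_point(I, Ic):
    eqs = lambda s: [-alpha_x * s[0] - h * gy(s[1]) + I,
                     -alpha_y * s[1] + w * gx(s[0]) + Ic]
    return fsolve(eqs, [0.5, 0.5])

for I in [0.25, 0.5, 2.0, 4.0]:         # increasing direct contrast
    xb, yb = fixed_point(I, 0.0)
    dgx_dI = gxp(xb) / (alpha_x + h * w * gyp(yb) * gxp(xb) / alpha_y)   # eq. (11)
    dgx = dgx_dI * (dI - h * gyp(yb) * dIc / alpha_y)                    # eq. (16)
    print(f"I={I:4.2f}  g'_y(ybar)={gyp(yb):.2f}  delta g_x={dgx:+.4f}  "
          + ("facilitation" if dgx > 0 else "suppression"))
```

With these assumed gains, the low-contrast inputs give a net facilitation and the high-contrast inputs a net suppression, because the inhibitory gain g'_y(ȳ) grows with the direct contrast.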

1.3 A damped neural oscillator

One can see intuitively that the E-I pair has a tendency to oscillate. The excited excitatory cell drives the inhibitory cell, which increases its response and gives inhibitory feedback to the excitatory cell. Subsequently, the suppressed excitatory cell reduces its drive to the interneuron which, in its less excited state, reduces the inhibitory feedback, causing the excitatory cell to rebound. The cycle then starts again and continues until an equilibrium balance is reached at the fixed point (x̄, ȳ).

To study oscillations around this fixed point, consider small deviations (δx, δy) from it, such that (x, y) = (x̄ + δx, ȳ + δy). For notational simplicity, we perform the coordinate transformation

x → x − x̄,   y → y − ȳ        (18)

so that (x, y) now denote the deviations from the fixed point; in this new coordinate system, the fixed point is at the origin. With the Taylor approximations

g_x(x + x̄) ≈ g_x(x̄) + g'_x(x̄) x,   g_y(y + ȳ) ≈ g_y(ȳ) + g'_y(ȳ) y        (19)

equations (1) and (2) become

ẋ = −α_x x − h g'_y(ȳ) y − α_x x̄ − h g_y(ȳ) + I        (20)
ẏ = −α_y y + w g'_x(x̄) x − α_y ȳ + w g_x(x̄) + I_c        (21)

Using equations (3) and (4), we have

ẋ = −α_x x − h g'_y(ȳ) y        (22)
ẏ = −α_y y + w g'_x(x̄) x        (23)

Taking another temporal derivative d/dt of equation (22),

ẍ = −α_x ẋ − h g'_y(ȳ) ẏ        (24)
  = −α_x ẋ − h g'_y(ȳ) [−α_y y + w g'_x(x̄) x]     (substituting eq. (23) for ẏ)        (25)
  = −(α_x + α_y) ẋ − (h w g'_x(x̄) g'_y(ȳ) + α_x α_y) x     (substituting eq. (22) for y)        (26, 27)

and we arrive at

ẍ + (α_x + α_y) ẋ + (h w g'_x(x̄) g'_y(ȳ) + α_x α_y) x = 0        (28)

This is a second-order linear differential equation in x: the equation of a damped harmonic oscillator. The second term, (α_x + α_y) ẋ, is the friction in the oscillator, arising from the decay, or leak, terms of the excitatory and inhibitory neurons. The third term, (h w g'_x(x̄) g'_y(ȳ) + α_x α_y) x, is the restoring term of the oscillator. We can compare this equation with that of a harmonic oscillator without damping and with oscillation frequency ω,

ẍ + ω² x = 0        (29)

which has the oscillatory solution x(t) = x(0) e^{−iωt} with initial condition x(0). When the decay terms are zero, α_x = α_y = 0, equation (28) has the same form as the undamped oscillator, giving an oscillating solution with frequency ω = √(h w g'_x(x̄) g'_y(ȳ)). To account for nonzero (α_x, α_y), assume a solution x(t) = x(0) e^{λt}; substituting it into equation (28) gives

[λ² + (α_x + α_y) λ + (h w g'_x(x̄) g'_y(ȳ) + α_x α_y)] x(0) e^{λt} = 0        (30)

The value λ is called the eigenvalue of the system, and can be solved as

λ = −(α_x + α_y)/2 ± i √(h w g'_x(x̄) g'_y(ȳ) − (α_x − α_y)²/4)        (31)
  = ±i √(h w g'_x(x̄) g'_y(ȳ))   when α_x = α_y = 0        (32)

So the solution x(t) becomes

x(t) = x(0) e^{−ᾱt − iω̄t},  where        (33)
ᾱ ≡ (α_x + α_y)/2,   ω̄ ≡ √(h w g'_x(x̄) g'_y(ȳ) − (α_x − α_y)²/4)        (34)

Here we have omitted the solution x(t) ∝ e^{−ᾱt + iω̄t}. This is because in reality x(t) is real-valued, which can be obtained as

x(t) → [x(t) + x*(t)]/2 = [x(0) e^{−ᾱt − iω̄t} + x*(0) e^{−ᾱt + iω̄t}]/2 = |x(0)| e^{−ᾱt} cos(ω̄t + φ)        (35)

where the superscript * denotes the complex conjugate of a value, and |x(0)| and φ are the oscillation amplitude and initial phase, determined by the initial conditions (x(0), ẋ(0)), or equivalently (x(0), y(0)), from which ẋ(0) can be deduced via equation (22). For analytical convenience, we will use the convention of complex values e^{−iωt} to denote oscillations unless stated otherwise.

Given the solution x(t), equation (22) gives

y(t) = −[α_x x + ẋ]/[h g'_y(ȳ)]        (36)
     = {(α_y − α_x)/2 + i ω̄}/[h g'_y(ȳ)] x(t) ≡ y(0) e^{−ᾱt − iω̄t}        (37)

hence

y(0) = x(0) √((α_y − α_x)²/4 + ω̄²)/[h g'_y(ȳ)] e^{iθ}        (38)
where θ = arctan[ω̄/((α_y − α_x)/2)]        (39)

which means that y(t) undergoes the same damped oscillation as x(t), but with an oscillation phase lag of θ and an oscillation amplitude scaled by the factor √((α_y − α_x)²/4 + ω̄²)/[h g'_y(ȳ)]. When α ≡ α_x = α_y, θ = π/2, so y(t) lags x(t) by a quarter of an oscillation cycle, with its amplitude scaled by the factor √(w g'_x(x̄)/[h g'_y(ȳ)]). For simplicity, we mainly use the case α ≡ α_x = α_y in our discussion, for which ω̄ = √(h w g'_x(x̄) g'_y(ȳ)).

Fig. 4A shows an example x(t) and y(t) pair, in which the phase lag is apparent. The quarter-cycle phase difference also means that the trajectory (x(t), y(t)) traces out a shrinking spiral in the x-y phase space, converging to the fixed point (x̄, ȳ); see Fig. 4B. If the damping term is zero, ᾱ = 0, the spiral becomes an ellipse and the oscillation amplitude is sustained.

1.3.1 An alternative approach to the solution

The steps from equation (22) to equation (39), which obtained the solutions [x(t), y(t)] of the linear equations (22) and (23), can be replaced by a standard approach (which we will often use) for solving a set of linear differential equations. Treating (x, y) as the components of a two-dimensional vector, we have

d/dt (x; y) = ( −α_x   −h g'_y(ȳ) ;  w g'_x(x̄)   −α_y ) (x; y)        (40)

In general, the linear differential equation ż = M z, where z is an N-dimensional vector and M is an N×N matrix with constant elements, has the solution z = Σ_{i=1}^{N} c_i v_i e^{λ_i t}. Here v_i is the i-th eigenvector of the matrix M, λ_i is the corresponding eigenvalue, such that M v_i = λ_i v_i, and the c_i are constants chosen so that Σ_{i=1}^{N} c_i v_i = z(t = 0) matches the initial condition.

Figure 4: The oscillation dynamics of an E-I pair, with w = 4, I = 6, equal small decay constants α_x = α_y, and both g_x and g_y of the sigmoid form g(u) = 1/(1 + e^{−4(u−0.5)}). A plots x(t) and y(t) starting from the initial condition x(0) = y(0) = 0; note the phase difference between x(t) and y(t). B plots the corresponding spiralling trajectory, converging to (x̄, ȳ), in the x-y coordinates. Note that here x and y are plotted in the coordinate system before the transformation (x, y) → (x − x̄, y − ȳ).

Applying this to our E-I pair, the eigenvalues and eigenvectors of the matrix

M = ( −α_x   −h g'_y(ȳ) ;  w g'_x(x̄)   −α_y )        (41)

are given by the solutions of

λ² + (α_x + α_y) λ + (h w g'_x(x̄) g'_y(ȳ) + α_x α_y) = 0        (42)

which is just like equation (30), with its two solutions expressed in equation (31), one for each of the two eigenvectors. Taking α_x = α_y = α, the eigenvector for one of the eigenvalues, λ = −α − i √(h w g'_x(x̄) g'_y(ȳ)), is

(x; y) ∝ (1;  i √(w g'_x(x̄)/[h g'_y(ȳ)]))        (43)

which is the same phase and amplitude relationship between x and y as in equations (38) and (39). The other eigenvalue and eigenvector are simply the complex conjugates of this one. So we arrive at the same solutions.
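The matrix route of equations (40)-(43) is easy to reproduce numerically: build the 2×2 matrix M at the fixed point and let a linear-algebra routine return the eigenvalues and eigenvectors, then compare with the closed forms (31) and (43). The gains at the fixed point and the other values below are assumed for illustration.

```python
# Eigenvalues/eigenvectors of the linearized E-I pair, equations (40)-(43).
# Illustrative parameters; gains evaluated at an assumed fixed point.
import numpy as np

alpha_x = alpha_y = 0.2
h, w = 1.0, 4.0
gx_prime, gy_prime = 0.9, 0.9        # g'_x(xbar), g'_y(ybar) at an assumed fixed point

M = np.array([[-alpha_x, -h * gy_prime],
              [w * gx_prime, -alpha_y]])
lam, vecs = np.linalg.eig(M)

# Closed-form eigenvalues of equation (31)
omega_bar = np.sqrt(h * w * gx_prime * gy_prime - (alpha_x - alpha_y) ** 2 / 4.0)
lam_analytic = -(alpha_x + alpha_y) / 2.0 + 1j * omega_bar

print("numerical eigenvalues:", lam)
print("analytic eigenvalues :", lam_analytic, np.conj(lam_analytic))
for k in range(2):
    print(f"eigenvector {k}: y/x ratio =", vecs[1, k] / vecs[0, k])
print("predicted ratio (eq. 43): +/- i *", np.sqrt(w * gx_prime / (h * gy_prime)))
```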

1.4 A Lyapunov function for the E-I pair

When the oscillation amplitude is too large, the linear approximation is no longer accurate, and thus the sinusoidal oscillation (in the ᾱ = 0 condition) becomes non-sinusoidal. However, it can be shown that there is a Lyapunov function. Using the original coordinates (x, y), rather than the deviations from the fixed point,

L(x, y) = w ∫_{x̄}^{x} du [g_x(u) − g_x(x̄)] + h ∫_{ȳ}^{y} du [g_y(u) − g_y(ȳ)]        (44)

Note that u − x̄ or u − ȳ always has the same sign as [g_x(u) − g_x(x̄)] or [g_y(u) − g_y(ȳ)], respectively. Each integral in L(x, y) is thus non-negative, increases with |x − x̄| and |y − ȳ| respectively, and becomes zero only when x = x̄ or y = ȳ, respectively.

Figure 5: The non-damped oscillations around the fixed point of Fig. 4 when the decay terms are turned off, i.e., α = 0. The external inputs (I, I_c) are re-adjusted to maintain the same fixed point under the α = 0 condition; all other parameters are as in Fig. 4. Three different initial conditions (x(0), y(0)) give three oscillation traces, plotted as temporal traces of x(t) (solid) and y(t) (dashed) in A (note the three different vertical scales on the three plots) and as trajectories in x-y phase space in B, where each trajectory traces a contour of constant L(x, y), with a larger value of L(x, y) for larger oscillations. In A, the oscillation traces are less sinusoidal for larger-amplitude oscillations; this is manifested in less ellipse-like trajectories in B. Also plotted, in the upper right and lower left of B, are the g_x(x) and g_y(y) function curves, aligned with the plot to indicate the operating range of the various motion trajectories relative to the linear and nonlinear regions of the gain functions.

L(x, y) evolves in time as

dL(x, y)/dt = ẋ ∂L/∂x + ẏ ∂L/∂y
 = w[g_x(x) − g_x(x̄)][−α_x(x − x̄) − h(g_y(y) − g_y(ȳ))] + h[g_y(y) − g_y(ȳ)][−α_y(y − ȳ) + w(g_x(x) − g_x(x̄))]
 = −α {w(x − x̄)[g_x(x) − g_x(x̄)] + h(y − ȳ)[g_y(y) − g_y(ȳ)]} ≤ 0        (45)

The last inequality follows by noting that x − x̄ and y − ȳ always have the same signs as [g_x(x) − g_x(x̄)] and [g_y(y) − g_y(ȳ)] respectively. When α > 0, dL/dt < 0 away from the fixed point, and the decreasing L(x, y) shrinks the magnitudes |x − x̄| and |y − ȳ|, which conceptually represent the oscillation amplitude around the fixed point (x̄, ȳ). When α = 0, dL/dt = 0 even when (x − x̄, y − ȳ) ≠ 0 while (ẋ, ẏ) ≠ 0. This means that the motion trajectory is a closed curve surrounding (x̄, ȳ) in (x, y) space, on which L(x, y) is constant; the dynamics is then a perpetual cycle. The closed curve is an ellipse, and the oscillation cycle is sinusoidal, when both g_x(·) and g_y(·) are linear functions. In Fig. 5 we can see the positions of the motion trajectories relative to the linear and nonlinear regions of the g_x(x) and g_y(y) functions: where a trajectory spans a more (or less) linear region, it is more (or less) ellipsoidal.

The two terms in L(x, y) above can be intuitively viewed as the kinetic and potential energies of the oscillation; the energies decay with the friction in the system. To see this more clearly, take the case when both g_x(x) = a(x − x_0) and g_y(y) = b(y − y_0) are linear, with constants a, b, x_0, and y_0. Then

L(x, y) = (wa/2)(x − x̄)² + (hb/2)(y − ȳ)²        (46)

which resembles the familiar kinetic and potential energy terms of a harmonic oscillator when x − x̄ and y − ȳ are viewed as the displacement and velocity of the oscillator. In the case of no damping, ẋ = −hb(y − ȳ), and the form

L(x, y) = (wa/2)(x − x̄)² + ẋ²/(2hb)        (47)

is more obviously the sum of the potential and kinetic energies of an oscillator, whose frequency is √(hwab) = √(h w g'_x(x̄) g'_y(ȳ)).
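The Lyapunov property is easy to verify numerically: integrate the pair with and without damping and evaluate L(x, y) of equation (44) along the trajectory, with the integrals done by numerical quadrature. As in Fig. 5, the inputs are re-adjusted so that the chosen fixed point is the same for both damping values; the fixed point, gain function, and parameters below are illustrative assumptions.

```python
# Evaluate the Lyapunov function of equation (44) along simulated trajectories.
# With damping (alpha > 0) L decreases; with alpha = 0 it stays (nearly) constant.
# Parameters, gains and the chosen fixed point are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp, quad

h, w = 1.0, 4.0

def g(u):                                          # assumed sigmoid gain
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def lyapunov_along_trajectory(alpha, t_end=30.0):
    xb, yb = 0.5, 0.5                              # chosen fixed point (xbar, ybar)
    I  = alpha * xb + h * g(yb)                    # inputs re-adjusted so that
    Ic = alpha * yb - w * g(xb)                    # (xbar, ybar) solves eqs. (3)-(4)
    rhs = lambda t, s: [-alpha * s[0] - h * g(s[1]) + I,
                        -alpha * s[1] + w * g(s[0]) + Ic]
    sol = solve_ivp(rhs, (0.0, t_end), [xb + 0.3, yb], max_step=0.01)

    def L(x, y):                                   # equation (44)
        ex = quad(lambda u: g(u) - g(xb), xb, x)[0]
        iy = quad(lambda u: g(u) - g(yb), yb, y)[0]
        return w * ex + h * iy

    return [L(x, y) for x, y in zip(sol.y[0][::600], sol.y[1][::600])]

print("alpha = 0.2:", np.round(lyapunov_along_trajectory(0.2), 4))
print("alpha = 0.0:", np.round(lyapunov_along_trajectory(0.0), 4))
```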

1.5 Oscillation frequency

It is apparent from

ω̄ = √(h w g'_x(x̄) g'_y(ȳ))        (48)

that the oscillation frequency ω̄ can be tuned by changing the synaptic connection strengths h and w, or by changing (x̄, ȳ) via the external input (I, I_c). To obtain larger oscillation frequencies, it helps to place (x̄, ȳ) at locations of higher gain (g'_x(x̄), g'_y(ȳ)) on the respective gain functions. This is demonstrated in Fig. 6, where three different input conditions give three different fixed points (x̄, ȳ) and thus three different oscillation frequencies.

Figure 6: Input control of the oscillation frequency by shifting the fixed point (x̄, ȳ). h = 3, w = 8, with both g_x and g_y of the sigmoid form g(u) = 1/(1 + e^{−4(u−0.5)}). Three different fixed points, (x̄, ȳ) = (−0.5, −0.5), (0, 0), and (0.5, 0.5), are created by three different inputs (I, I_c). All three conditions start from the same initial condition (x(0), y(0)). A: the x(t) (solid) and y(t) (dashed) traces for the three inputs; the oscillation for the second condition is less sinusoidal than for the third, whose fixed point lies in a more linear region of g_x and g_y. B: the three spiralling trajectories (dot-dashed, dashed, and solid curves for the three fixed-point conditions respectively) in the x-y coordinates. Note the increase in oscillation frequency as the fixed point is raised toward higher gains g'_x(x̄) g'_y(ȳ).

In all conditions, the same initial state (x(0), y(0)) lies below the respective fixed point. Within the same time duration, each motion trajectory approaches its respective fixed point, with more spirals, i.e., faster oscillations or larger frequencies, around the largest fixed point. Note that when the oscillation frequency is too small, as in the example (x̄, ȳ) = (−0.5, −0.5), the oscillation becomes unapparent, because the amplitude decays too fast compared with the oscillation itself. Specifically, let the deviation from (x̄, ȳ) be approximately (x, y) ∝ e^{−αt − iω̄t}. The amplitude decays exponentially with a time constant τ = 1/α, while the oscillation has a period T = 2π/ω̄. When τ ≲ T, the oscillation is negligible.

The membrane time constant of a neuron is 1/α, which in many neurons is of the order of ten milliseconds. Since the unit of time used in the figures is this time constant, the periods of the oscillations in Figs. 4, 5, and 6 are also on the order of ten milliseconds, which means an oscillation frequency on the order of 100 Hz. This frequency is comparable to, or higher than, most neural oscillations seen in the brain; e.g., gamma and theta oscillations are on the order of 40 Hz and 4-8 Hz respectively, i.e., oscillation periods of roughly 25 ms and 100-250 ms. We used a small enough oscillation period T in our figures in order to demonstrate the oscillation behaviour of our simple neural oscillator clearly.
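To connect the dimensionless frequency ω̄ = √(h w g'_x(x̄) g'_y(ȳ)) with physical units, recall that time is measured here in units of the membrane time constant 1/α. The following small sketch does the conversion with an assumed 10 ms time constant and assumed weights and gains.

```python
# Convert the dimensionless oscillation frequency to Hz, assuming a 10 ms
# membrane time constant; weights and gains are illustrative.
import numpy as np

tau_ms = 10.0                         # assumed membrane time constant, in ms
h, w = 1.0, 4.0
gx_prime, gy_prime = 0.9, 0.9         # gains at an assumed fixed point

omega = np.sqrt(h * w * gx_prime * gy_prime)   # radians per time unit (one time constant)
period_units = 2.0 * np.pi / omega             # period in units of the time constant
period_ms = period_units * tau_ms
print(f"period = {period_units:.2f} time constants = {period_ms:.1f} ms"
      f" -> {1000.0 / period_ms:.0f} Hz")
```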

Fig. 7 shows different oscillation frequencies obtained by using different synaptic weights h and w while keeping (x̄, ȳ) fixed.

Figure 7: Higher (A) and lower (B) oscillation frequencies are obtained by tuning the synaptic connection weights h and w (the product hw is larger in A than in B), with both g_x and g_y of the sigmoid form g(u) = 1/(1 + e^{−4(u−0.5)}). The fixed point (x̄, ȳ) is the same in both plots, achieved by different values of I with the same I_c, and both plots start from the same initial condition (x(0), y(0)). Note that the oscillation traces are less obvious when the oscillation period is comparable to, or longer than, the time constant of the decay of the oscillation amplitude.

Of course, when the oscillation amplitude is too large, the trajectory covers regions of the phase space where the gains g'_x(x) g'_y(y) are not constant. This also makes the oscillation frequency amplitude dependent, even when the fixed point (x̄, ȳ) is unchanged. This is apparent in Fig. 6 within a single oscillation trace. For instance, in the oscillation trace with (x̄, ȳ) = (0.5, 0.5), the oscillation period is larger for larger-amplitude oscillations, so the oscillation frequency increases while the amplitude decays to zero. This is because the fixed point (x̄, ȳ) = (0.5, 0.5) is exactly the location where both g'_x(x̄) and g'_y(ȳ) are maximal, and either gain decreases with increasing distance |x − x̄| or |y − ȳ| respectively. Hence, larger-amplitude oscillations lead to a smaller effective g'_x(x̄) g'_y(ȳ) and a lower oscillation frequency.

1.6 Response to non-stationary or oscillatory inputs: resonance

Consider an E-I pair in the situation where I_c is static while I is a function of time, I(t). It can always be decomposed into a static component and a sum of sinusoidal components,

I(t) = Ī + Σ_ω I(ω) e^{−iωt}        (49)

where Ī does not change with time and I(ω) represents the amplitude and phase of the sinusoidal component with frequency ω. Let (x̄, ȳ) be the fixed point under the static input (Ī, I_c).

Following the procedure from equation (18) to equation (23), one sees that small deviations (x, y) from (x̄, ȳ) approximately follow

ẋ = −α x − h g'_y(ȳ) y + Σ_ω I(ω) e^{−iωt}        (50)
ẏ = −α y + w g'_x(x̄) x        (51)

Also, analogous to equation (28), we have

ẍ + 2α ẋ + (h w g'_x(x̄) g'_y(ȳ) + α²) x = Σ_ω (α − iω) I(ω) e^{−iωt}        (52)

For simplicity, we consider a single sinusoidal input component, such that I(t) = Ī + I e^{−iωt}, where, for simplicity of notation, I now represents the amplitude and phase of the sinusoidal drive. Intuitively, the external drive should make the neuron respond at the driving frequency. Substituting

x(t) = x̂ e^{−iωt}        (53)

into equation (52) under the single sinusoidal drive, we have

x̂ = (α − iω) I / [−ω² − 2iωα + (h w g'_x(x̄) g'_y(ȳ) + α²)] = I (α − iω)/[ω_o² − (ω + iα)²]        (54)

where ω_o ≡ √(h w g'_x(x̄) g'_y(ȳ)) is the frequency of the neural oscillator's intrinsic dynamics e^{−αt − iω_o t} in the absence of the oscillatory drive. This means that, ignoring the initial transient, the response follows the external oscillatory drive at the same frequency, with an oscillation amplitude

|x̂| = |I| √(α² + ω²) / √((ω² + α² − ω_o²)² + 4 ω_o² α²)        (55)

and a phase lead relative to the drive of

φ = arctan(ω/α) − arctan[2ωα/(ω_o² + α² − ω²)]   if ω_o² + α² − ω² > 0
φ = arctan(ω/α) − π + arctan[2ωα/(ω² − ω_o² − α²)]   otherwise        (56)

We define

Q(ω) ≡ √(α² + ω²) / √((ω² + α² − ω_o²)² + 4 ω_o² α²)        (57)

as the sensitivity to the external drive. Then

x̂ = I Q(ω) e^{−iφ}        (58)

Note that the sensitivity depends on ω; hence the response is tuned to the driving frequency. The optimal frequency ω̂ to which the neural oscillator is most sensitive is easily seen from equation (57) to satisfy

ω̂² ≈ ω_o² − α²        (59)

which minimizes the denominator in Q(ω). So for small damping, such that α ≪ ω_o, the optimal driving frequency is close to the intrinsic frequency ω_o. The tuning width of Q(ω) is roughly, again from the denominator in equation (57), given by

|ω² − ω_o²| ≲ 2 ω_o α,   which means   |ω − ω_o| ≲ 2 ω_o α/(ω + ω_o) ≈ α.        (60)

Hence, the tuning width increases with increasing damping α. In other words, small damping enables a neural oscillator to be very selective to the driving frequency. A high selectivity leads to a resonance phenomenon, meaning that the response is insignificant unless the driving frequency is close to the optimal frequency, ω ≈ ω̂.

From equation (56), we see that the driven oscillator x slightly phase leads the driving force for small ω but lags the driving force for larger ω. Near resonance, when ω² = ω_o² − α², x and I are exactly in phase. It is not intuitive that the driven oscillator x(t) should phase lead the input I(t). In a physical situation where an oscillator is driven by an external oscillatory force, in the sense that the force contributes to the acceleration of the oscillator as in Newton's law, the driven oscillator normally phase lags the driving force. This physical situation is described by

ẍ + 2α ẋ + (ω_o² + α²) x = F(t) ≡ F e^{−iωt}        (61)

where F is the amplitude and phase of the driving force. Comparing this equation with equation (52), we see that F corresponds to (α − iω) I(ω). We can then write x̂ = F Q(ω) e^{−iφ}, with the phase lag

φ = arctan[2ωα/(ω_o² + α² − ω²)]   if ω_o² + α² − ω² > 0
φ = π − arctan[2ωα/(ω² − ω_o² − α²)]   otherwise        (62)

Hence, the oscillator x(t) always phase lags the driving force F(t). The sensitivity is also slightly changed,

Q(ω) = 1/√((ω² + α² − ω_o²)² + 4 ω_o² α²)        (63)

which still displays the resonance, or frequency-tuning, phenomenon with roughly the same resonance frequency and tuning width. Note that when the oscillatory drive comes from the input I_c, such that I_c(t) = Ī_c + I_c e^{−iωt}, then F(t) = −h g'_y(ȳ) I_c e^{−iωt}. Hence, in general, let

Σ_{ω>0} I(ω) e^{−iωt}   and   Σ_{ω>0} I_c(ω) e^{−iωt}        (64)

be the AC components of the inputs to the excitatory and inhibitory cells respectively; then the AC component of the drive to the oscillator is

F(t) = Σ_{ω>0} F(ω) e^{−iωt} = Σ_{ω>0} [(α − iω) I(ω) − h g'_y(ȳ) I_c(ω)] e^{−iωt}.        (65)

Oscillatory inputs to neural oscillators can be seen in the example of the olfactory cortex, which is intrinsically, but not spontaneously, oscillatory, and which receives oscillatory inputs from the olfactory bulb. Both the intrinsic oscillation frequency of the olfactory cortex and the inputs from the bulb are in the gamma frequency range.

Figure 8: The oscillator of Fig. 4 driven additionally by an oscillating input I e^{−iωt}. A: the drive (scaled down) and x(t) for three different driving frequencies ω. While the input amplitude is the same, the output response amplitude changes with ω; the resonance frequency is close to ω = 6.5. Note the initial transient at the intrinsic frequency of the neural oscillator before the asymptotic oscillation at the frequency of the input. B: the Q(ω) function of equation (57) with ω_o = 6.5 and the value of α used in A. C: the corresponding cycle lead φ(ω)/(2π) with the same ω_o and α.
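The frequency-tuning curve and the phase of the response are straightforward to tabulate from the complex response of equation (54), with Q(ω) = |x̂/I| and the cycle lead read off from its argument. The sketch below uses ω_o = 6.5 as in Fig. 8 and an assumed damping α.

```python
# Frequency tuning (resonance) of the driven E-I pair, equation (54).
# omega_o = 6.5 as in Fig. 8; the damping alpha is an assumed value.
import numpy as np

omega_o = 6.5          # intrinsic frequency sqrt(h*w*g'_x*g'_y)
alpha = 1.0            # assumed damping (decay constant)

def response(omega):
    """Complex amplitude of x relative to the drive I, equation (54)."""
    return (alpha - 1j * omega) / (omega_o ** 2 - (omega + 1j * alpha) ** 2)

for omega in [1.0, 3.0, 5.0, 6.0, 6.4, 7.0, 9.0, 12.0]:
    r = response(omega)
    Q = abs(r)                      # sensitivity Q(omega), equation (57)
    lead = -np.angle(r)             # phase lead of x(t) relative to the drive
    print(f"omega={omega:5.1f}  Q={Q:6.3f}  lead={lead / (2 * np.pi):+6.3f} cycles")

# The peak of Q sits near omega_hat = sqrt(omega_o**2 - alpha**2), equation (59)
print("predicted resonance near", np.sqrt(omega_o ** 2 - alpha ** 2))
```

The printed table shows the small phase lead at low ω, the in-phase response near resonance, and the lag approaching a quarter cycle at high ω, as discussed above.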

2 A self-excitatory neural oscillator

The damped oscillator of the last section can become sustained if the excitatory cell is self-excitatory, as in

ẋ = −α_x x + J g_x(x) − h g_y(y) + I        (66)
ẏ = −α_y y + w g_x(x) + I_c        (67)

This is the same as equations (1) and (2) except that an additional term, J g_x(x), is added in the equation for ẋ. Following the same procedure as before, the deviation of x from the fixed point x̄ can be shown to follow, in the linear approximation,

ẍ + (α_x + α_y − J g'_x(x̄)) ẋ + (h w g'_x(x̄) g'_y(ȳ) + (α_x − J g'_x(x̄)) α_y) x = 0        (68)

This is the same as equation (28) except that α_x in equation (28) is now α_x − J g'_x(x̄). Intuitively, this means that the damping of the oscillator stemming from α_x is reduced by the amount J g'_x(x̄), as can also be seen in equation (66). Hence, the effects of reduced damping and of self-excitation are qualitatively equivalent. However, if J is sufficiently large, the effective damping α_x − J g'_x(x̄) can be negative, and when it is sufficiently negative the oscillator has sustained rather than damped oscillations. The linear solution is x(t) = x(0) e^{λt}, and, as before, λ is the eigenvalue of the system, now satisfying (compare with equation (30))

λ² + (α_x + α_y − J g'_x(x̄)) λ + (h w g'_x(x̄) g'_y(ȳ) + (α_x − J g'_x(x̄)) α_y) = 0        (69)

Hence

λ = −(α_x + α_y − J g'_x(x̄))/2 ± i √(h w g'_x(x̄) g'_y(ȳ) − (α_x − J g'_x(x̄) − α_y)²/4)        (70, 71)
  = −α + J g'_x(x̄)/2 ± i √(h w g'_x(x̄) g'_y(ȳ) − (J g'_x(x̄))²/4)   when α_x = α_y = α        (72, 73)

x(t) = x(0) e^{−(α − J g'_x(x̄)/2) t − i ω̄ t}        (74)
where ω̄ ≡ √(h w g'_x(x̄) g'_y(ȳ) − (J g'_x(x̄))²/4)        (75)

This oscillator can sustain its oscillation when there is sufficient self-excitation, such that J g'_x(x̄)/2 > α. At the same time, the oscillation frequency is reduced from the value ω̄ = √(h w g'_x(x̄) g'_y(ȳ)), whether or not the self-excitation is sufficient to make the oscillation self-sustaining.

The size of the negative damping contribution, J g'_x(x̄), depends on the fixed point x̄, which is controlled by the external input (I, I_c). We saw in section 1.5 that the oscillation frequency can be modulated by inputs; here, whether the oscillator sustains its oscillation can also be modulated by the external input. For example, given a fixed I_c to the inhibitory cell, increasing the input I to the excitatory cell raises x̄. Hence, when I is so weak that x̄ is below threshold and g_x(x̄) = 0, the slope, or gain, g'_x(x̄) is zero and the oscillator is damped. As I increases, x̄ passes the threshold and achieves a larger gain g'_x(x̄), and spontaneous oscillation becomes possible when this gain is sufficient. In Fig. 9A, three traces of x(t) are plotted for input values (I, I_c) that correspond to increasing levels of the fixed point (x̄, ȳ). Sustained oscillation is achieved for the intermediate fixed point, which corresponds to the highest gain g'_x(x̄) possible.

Figure 9: Input control of the emergence of neural oscillation by shifting the fixed point (x̄, ȳ). J = 3, h = 4, w = 4; both g_x and g_y take the sigmoid form g(u) = 1/(1 + e^{−4(u−0.5)}), and the dynamics follows equations (66) and (67). A: x(t) (solid) and x̄ (dashed) for three different fixed points, a low one, an intermediate one at (x̄, ȳ) = (0.5, 0.5), and a high one at (0.85, 0.85), created by three different inputs (I, I_c). All three conditions start from the same initial condition (x(0), y(0)); note the sustained oscillation for the intermediate condition. B: the g(u) function and the locations of the three fixed points x̄ corresponding to the motion traces in A. C: the trajectory x(t) (top) when I(t) (bottom) changes slowly in time as a smooth, slow sinusoidal variation of period T plus a very small noise term that is refreshed at intervals much shorter than T; the noise amplitude is almost imperceptible at the plot resolution. The dotted curve in the upper plot traces out the (approximate) x̄, which evolves on the same time scale as I(t). D: the self-excitatory E-I pair, as in Fig. 1A but with the self-excitatory connection J added to the excitatory cell.

When the fixed point x̄ is too high, it reaches toward the saturating region of the function g_x(x), and thus the gain g'_x(x̄) is lowered, making the oscillator damped again.

If we again examine the quantity defined as the Lyapunov function in equation (44),

L(x, y) = w ∫_{x̄}^{x} du [g_x(u) − g_x(x̄)] + h ∫_{ȳ}^{y} du [g_y(u) − g_y(ȳ)]        (76)

it now evolves in time as

dL(x, y)/dt = ẋ ∂L/∂x + ẏ ∂L/∂y
 = w[g_x(x) − g_x(x̄)][−α_x(x − x̄) − h(g_y(y) − g_y(ȳ))] + w[g_x(x) − g_x(x̄)][J(g_x(x) − g_x(x̄))] + h[g_y(y) − g_y(ȳ)][−α_y(y − ȳ) + w(g_x(x) − g_x(x̄))]
 = −α {w(x − x̄)[g_x(x) − g_x(x̄)] + h(y − ȳ)[g_y(y) − g_y(ȳ)]} + wJ[g_x(x) − g_x(x̄)]²        (77)

We can see that the first term, −α{...}, is non-positive, since (x − x̄)[g_x(x) − g_x(x̄)] ≥ 0 and (y − ȳ)[g_y(y) − g_y(ȳ)] ≥ 0 for all (x, y), but the second term, wJ[g_x(x) − g_x(x̄)]², is always non-negative. Hence L(x, y) is not guaranteed to decrease along the dynamics; it is no longer a Lyapunov function when J is sufficiently large. When (x, y) is sufficiently close to (x̄, ȳ), one can approximate g_x(x) − g_x(x̄) ≈ g'_x(x̄) Δx and g_y(y) − g_y(ȳ) ≈ g'_y(ȳ) Δy, where (Δx, Δy) ≡ (x − x̄, y − ȳ). Then

dL(x, y)/dt ≈ w (J g'_x(x̄) − α) g'_x(x̄) Δx² − α h g'_y(ȳ) Δy²        (78)

When J g'_x(x̄) > α, dL(x, y)/dt can be positive for some values of (Δx, Δy), though the value of L(x, y) along a motion trajectory can still rise and fall.

Self-excitation of the excitatory cell has been used in many models of neural oscillators in order to produce the neural oscillations observed in physiological experiments, as it is the easiest way to generate oscillations without invoking network effects (between different neural oscillators) or cellular-level mechanisms (not modelled here) that could also produce an oscillatory tendency.
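A short sketch of equations (66)-(67): the same self-excitation J gives either damped or sustained oscillation depending on where the input places the fixed point, i.e., on the gain g'_x(x̄), which mirrors the message of Fig. 9. The parameter values and the chosen fixed points are illustrative assumptions and differ from those of the figure.

```python
# Self-excitatory E-I pair, equations (66)-(67).  The same self-excitation J gives
# damped or sustained oscillation depending on where the input places the fixed
# point, i.e. on the gain g'_x(xbar).  Illustrative parameters and gains.
import numpy as np
from scipy.integrate import solve_ivp

alpha, J, h, w = 0.5, 2.0, 4.0, 4.0

def g(u):
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def gprime(u):
    s = g(u)
    return 4.0 * s * (1.0 - s)

def late_amplitude(xbar, ybar, t_end=300.0):
    # choose (I, Ic) so that (xbar, ybar) is the fixed point of eqs. (66)-(67)
    I  = alpha * xbar - J * g(xbar) + h * g(ybar)
    Ic = alpha * ybar - w * g(xbar)
    rhs = lambda t, s: [-alpha * s[0] + J * g(s[0]) - h * g(s[1]) + I,
                        -alpha * s[1] + w * g(s[0]) + Ic]
    sol = solve_ivp(rhs, (0.0, t_end), [xbar + 0.05, ybar], max_step=0.05)
    x_late = sol.y[0][sol.t > t_end / 2]          # discard the transient
    return x_late.max() - x_late.min()            # peak-to-peak oscillation of x(t)

for xbar in [0.5, 2.0]:                           # high-gain and low-gain fixed points
    damping = alpha - J * gprime(xbar) / 2.0      # effective damping of eq. (68)
    print(f"xbar={xbar}: effective damping {damping:+.2f}, "
          f"late peak-to-peak x amplitude {late_amplitude(xbar, xbar):.3f}")
```

With the high-gain fixed point the effective damping is negative and the late-time amplitude stays finite (a sustained oscillation); with the low-gain fixed point the effective damping is positive and the perturbation decays away.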

arxiv:physics/ v1 [physics.bio-ph] 19 Feb 1999

arxiv:physics/ v1 [physics.bio-ph] 19 Feb 1999 Odor recognition and segmentation by coupled olfactory bulb and cortical networks arxiv:physics/9902052v1 [physics.bioph] 19 Feb 1999 Abstract Zhaoping Li a,1 John Hertz b a CBCL, MIT, Cambridge MA 02139

More information

4. Complex Oscillations

4. Complex Oscillations 4. Complex Oscillations The most common use of complex numbers in physics is for analyzing oscillations and waves. We will illustrate this with a simple but crucially important model, the damped harmonic

More information

Consider the following spike trains from two different neurons N1 and N2:

Consider the following spike trains from two different neurons N1 and N2: About synchrony and oscillations So far, our discussions have assumed that we are either observing a single neuron at a, or that neurons fire independent of each other. This assumption may be correct in

More information

Linear and Nonlinear Oscillators (Lecture 2)

Linear and Nonlinear Oscillators (Lecture 2) Linear and Nonlinear Oscillators (Lecture 2) January 25, 2016 7/441 Lecture outline A simple model of a linear oscillator lies in the foundation of many physical phenomena in accelerator dynamics. A typical

More information

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops Math. Model. Nat. Phenom. Vol. 5, No. 2, 2010, pp. 67-99 DOI: 10.1051/mmnp/20105203 Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops J. Ma 1 and J. Wu 2 1 Department of

More information

Lecture 7. Please note. Additional tutorial. Please note that there is no lecture on Tuesday, 15 November 2011.

Lecture 7. Please note. Additional tutorial. Please note that there is no lecture on Tuesday, 15 November 2011. Lecture 7 3 Ordinary differential equations (ODEs) (continued) 6 Linear equations of second order 7 Systems of differential equations Please note Please note that there is no lecture on Tuesday, 15 November

More information

1 Simple Harmonic Oscillator

1 Simple Harmonic Oscillator Physics 1a Waves Lecture 3 Caltech, 10/09/18 1 Simple Harmonic Oscillator 1.4 General properties of Simple Harmonic Oscillator 1.4.4 Superposition of two independent SHO Suppose we have two SHOs described

More information

1 Pushing your Friend on a Swing

1 Pushing your Friend on a Swing Massachusetts Institute of Technology MITES 017 Physics III Lecture 05: Driven Oscillations In these notes, we derive the properties of both an undamped and damped harmonic oscillator under the influence

More information

Exercises. Chapter 1. of τ approx that produces the most accurate estimate for this firing pattern.

Exercises. Chapter 1. of τ approx that produces the most accurate estimate for this firing pattern. 1 Exercises Chapter 1 1. Generate spike sequences with a constant firing rate r 0 using a Poisson spike generator. Then, add a refractory period to the model by allowing the firing rate r(t) to depend

More information

2.3 Damping, phases and all that

2.3 Damping, phases and all that 2.3. DAMPING, PHASES AND ALL THAT 107 2.3 Damping, phases and all that If we imagine taking our idealized mass on a spring and dunking it in water or, more dramatically, in molasses), then there will be

More information

Forced Oscillation and Resonance

Forced Oscillation and Resonance Chapter Forced Oscillation and Resonance The forced oscillation problem will be crucial to our understanding of wave phenomena Complex exponentials are even more useful for the discussion of damping and

More information

Lab 1: Damped, Driven Harmonic Oscillator

Lab 1: Damped, Driven Harmonic Oscillator 1 Introduction Lab 1: Damped, Driven Harmonic Oscillator The purpose of this experiment is to study the resonant properties of a driven, damped harmonic oscillator. This type of motion is characteristic

More information

Lab 1: damped, driven harmonic oscillator

Lab 1: damped, driven harmonic oscillator Lab 1: damped, driven harmonic oscillator 1 Introduction The purpose of this experiment is to study the resonant properties of a driven, damped harmonic oscillator. This type of motion is characteristic

More information

Damped harmonic motion

Damped harmonic motion Damped harmonic motion March 3, 016 Harmonic motion is studied in the presence of a damping force proportional to the velocity. The complex method is introduced, and the different cases of under-damping,

More information

Complex Numbers. The set of complex numbers can be defined as the set of pairs of real numbers, {(x, y)}, with two operations: (i) addition,

Complex Numbers. The set of complex numbers can be defined as the set of pairs of real numbers, {(x, y)}, with two operations: (i) addition, Complex Numbers Complex Algebra The set of complex numbers can be defined as the set of pairs of real numbers, {(x, y)}, with two operations: (i) addition, and (ii) complex multiplication, (x 1, y 1 )

More information

Damped Harmonic Oscillator

Damped Harmonic Oscillator Damped Harmonic Oscillator Wednesday, 23 October 213 A simple harmonic oscillator subject to linear damping may oscillate with exponential decay, or it may decay biexponentially without oscillating, or

More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Systems of Differential Equations Phase Plane Analysis Hanz Richter Mechanical Engineering Department Cleveland State University Systems of Nonlinear

More information

Chapter 6 Nonlinear Systems and Phenomena. Friday, November 2, 12

Chapter 6 Nonlinear Systems and Phenomena. Friday, November 2, 12 Chapter 6 Nonlinear Systems and Phenomena 6.1 Stability and the Phase Plane We now move to nonlinear systems Begin with the first-order system for x(t) d dt x = f(x,t), x(0) = x 0 In particular, consider

More information

The Harmonic Oscillator

The Harmonic Oscillator The Harmonic Oscillator Math 4: Ordinary Differential Equations Chris Meyer May 3, 008 Introduction The harmonic oscillator is a common model used in physics because of the wide range of problems it can

More information

Chapter 14 Oscillations

Chapter 14 Oscillations Chapter 14 Oscillations Chapter Goal: To understand systems that oscillate with simple harmonic motion. Slide 14-2 Chapter 14 Preview Slide 14-3 Chapter 14 Preview Slide 14-4 Chapter 14 Preview Slide 14-5

More information

Simple Harmonic Motion

Simple Harmonic Motion Simple Harmonic Motion (FIZ 101E - Summer 2018) July 29, 2018 Contents 1 Introduction 2 2 The Spring-Mass System 2 3 The Energy in SHM 5 4 The Simple Pendulum 6 5 The Physical Pendulum 8 6 The Damped Oscillations

More information

A plane autonomous system is a pair of simultaneous first-order differential equations,

A plane autonomous system is a pair of simultaneous first-order differential equations, Chapter 11 Phase-Plane Techniques 11.1 Plane Autonomous Systems A plane autonomous system is a pair of simultaneous first-order differential equations, ẋ = f(x, y), ẏ = g(x, y). This system has an equilibrium

More information

18.03SC Practice Problems 14

18.03SC Practice Problems 14 1.03SC Practice Problems 1 Frequency response Solution suggestions In this problem session we will work with a second order mass-spring-dashpot system driven by a force F ext acting directly on the mass:

More information

Tuning tuning curves. So far: Receptive fields Representation of stimuli Population vectors. Today: Contrast enhancment, cortical processing

Tuning tuning curves. So far: Receptive fields Representation of stimuli Population vectors. Today: Contrast enhancment, cortical processing Tuning tuning curves So far: Receptive fields Representation of stimuli Population vectors Today: Contrast enhancment, cortical processing Firing frequency N 3 s max (N 1 ) = 40 o N4 N 1 N N 5 2 s max

More information

Nonlinear Control Lecture 2:Phase Plane Analysis

Nonlinear Control Lecture 2:Phase Plane Analysis Nonlinear Control Lecture 2:Phase Plane Analysis Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2010 r. Farzaneh Abdollahi Nonlinear Control Lecture 2 1/53

More information

Introductory Physics. Week 2015/05/29

Introductory Physics. Week 2015/05/29 2015/05/29 Part I Summary of week 6 Summary of week 6 We studied the motion of a projectile under uniform gravity, and constrained rectilinear motion, introducing the concept of constraint force. Then

More information

Modal Decomposition and the Time-Domain Response of Linear Systems 1

Modal Decomposition and the Time-Domain Response of Linear Systems 1 MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING.151 Advanced System Dynamics and Control Modal Decomposition and the Time-Domain Response of Linear Systems 1 In a previous handout

More information

ME 563 HOMEWORK # 7 SOLUTIONS Fall 2010

ME 563 HOMEWORK # 7 SOLUTIONS Fall 2010 ME 563 HOMEWORK # 7 SOLUTIONS Fall 2010 PROBLEM 1: Given the mass matrix and two undamped natural frequencies for a general two degree-of-freedom system with a symmetric stiffness matrix, find the stiffness

More information

Complex Dynamic Systems: Qualitative vs Quantitative analysis

Complex Dynamic Systems: Qualitative vs Quantitative analysis Complex Dynamic Systems: Qualitative vs Quantitative analysis Complex Dynamic Systems Chiara Mocenni Department of Information Engineering and Mathematics University of Siena (mocenni@diism.unisi.it) Dynamic

More information

P441 Analytical Mechanics - I. RLC Circuits. c Alex R. Dzierba. In this note we discuss electrical oscillating circuits: undamped, damped and driven.

P441 Analytical Mechanics - I. RLC Circuits. c Alex R. Dzierba. In this note we discuss electrical oscillating circuits: undamped, damped and driven. Lecture 10 Monday - September 19, 005 Written or last updated: September 19, 005 P441 Analytical Mechanics - I RLC Circuits c Alex R. Dzierba Introduction In this note we discuss electrical oscillating

More information

Displacement at very low frequencies produces very low accelerations since:

Displacement at very low frequencies produces very low accelerations since: SEISMOLOGY The ability to do earthquake location and calculate magnitude immediately brings us into two basic requirement of instrumentation: Keeping accurate time and determining the frequency dependent

More information

Exploring Nonlinear Oscillator Models for the Auditory Periphery

Exploring Nonlinear Oscillator Models for the Auditory Periphery Exploring Nonlinear Oscillator Models for the Auditory Periphery Andrew Binder Dr. Christopher Bergevin, Supervisor October 31, 2008 1 Introduction Sound-induced vibrations are converted into electrical

More information

221B Lecture Notes on Resonances in Classical Mechanics

221B Lecture Notes on Resonances in Classical Mechanics 1B Lecture Notes on Resonances in Classical Mechanics 1 Harmonic Oscillators Harmonic oscillators appear in many different contexts in classical mechanics. Examples include: spring, pendulum (with a small

More information

Emergence of resonances in neural systems: the interplay between adaptive threshold and short-term synaptic plasticity

Emergence of resonances in neural systems: the interplay between adaptive threshold and short-term synaptic plasticity Emergence of resonances in neural systems: the interplay between adaptive threshold and short-term synaptic plasticity Jorge F. Mejias 1,2 and Joaquín J. Torres 2 1 Department of Physics and Center for

More information

Daba Meshesha Gusu and O.Chandra Sekhara Reddy 1

Daba Meshesha Gusu and O.Chandra Sekhara Reddy 1 International Journal of Basic and Applied Sciences Vol. 4. No. 1 2015. Pp.22-27 Copyright by CRDEEP. All Rights Reserved. Full Length Research Paper Solutions of Non Linear Ordinary Differential Equations

More information

Section 3.4. Second Order Nonhomogeneous. The corresponding homogeneous equation

Section 3.4. Second Order Nonhomogeneous. The corresponding homogeneous equation Section 3.4. Second Order Nonhomogeneous Equations y + p(x)y + q(x)y = f(x) (N) The corresponding homogeneous equation y + p(x)y + q(x)y = 0 (H) is called the reduced equation of (N). 1 General Results

More information

An algorithm for detecting oscillatory behavior in discretized data: the damped-oscillator oscillator detector

An algorithm for detecting oscillatory behavior in discretized data: the damped-oscillator oscillator detector An algorithm for detecting oscillatory behavior in discretized data: the damped-oscillator oscillator detector David Hsu, Murielle Hsu, He Huang and Erwin B. Montgomery, Jr Department of Neurology University

More information

Selected Topics in Physics a lecture course for 1st year students by W.B. von Schlippe Spring Semester 2007

Selected Topics in Physics a lecture course for 1st year students by W.B. von Schlippe Spring Semester 2007 Selected Topics in Physics a lecture course for st year students by W.B. von Schlippe Spring Semester 7 Lecture : Oscillations simple harmonic oscillations; coupled oscillations; beats; damped oscillations;

More information

Resonance and response

Resonance and response Chapter 2 Resonance and response Last updated September 20, 2008 In this section of the course we begin with a very simple system a mass hanging from a spring and see how some remarkable ideas emerge.

More information

Copyright (c) 2006 Warren Weckesser

Copyright (c) 2006 Warren Weckesser 2.2. PLANAR LINEAR SYSTEMS 3 2.2. Planar Linear Systems We consider the linear system of two first order differential equations or equivalently, = ax + by (2.7) dy = cx + dy [ d x x = A x, where x =, and

More information

Math 216 Final Exam 24 April, 2017

Math 216 Final Exam 24 April, 2017 Math 216 Final Exam 24 April, 2017 This sample exam is provided to serve as one component of your studying for this exam in this course. Please note that it is not guaranteed to cover the material that

More information

Math 312 Lecture Notes Linear Two-dimensional Systems of Differential Equations

Math 312 Lecture Notes Linear Two-dimensional Systems of Differential Equations Math 2 Lecture Notes Linear Two-dimensional Systems of Differential Equations Warren Weckesser Department of Mathematics Colgate University February 2005 In these notes, we consider the linear system of

More information

Poincaré Map, Floquet Theory, and Stability of Periodic Orbits

Poincaré Map, Floquet Theory, and Stability of Periodic Orbits Poincaré Map, Floquet Theory, and Stability of Periodic Orbits CDS140A Lecturer: W.S. Koon Fall, 2006 1 Poincaré Maps Definition (Poincaré Map): Consider ẋ = f(x) with periodic solution x(t). Construct

More information

NORMAL MODES, WAVE MOTION AND THE WAVE EQUATION. Professor G.G.Ross. Oxford University Hilary Term 2009

NORMAL MODES, WAVE MOTION AND THE WAVE EQUATION. Professor G.G.Ross. Oxford University Hilary Term 2009 NORMAL MODES, WAVE MOTION AND THE WAVE EQUATION Professor G.G.Ross Oxford University Hilary Term 009 This course of twelve lectures covers material for the paper CP4: Differential Equations, Waves and

More information

PHYSICS. Chapter 15 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT Pearson Education, Inc.

PHYSICS. Chapter 15 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT Pearson Education, Inc. PHYSICS FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E Chapter 15 Lecture RANDALL D. KNIGHT Chapter 15 Oscillations IN THIS CHAPTER, you will learn about systems that oscillate in simple harmonic

More information

Fast neural network simulations with population density methods

Fast neural network simulations with population density methods Fast neural network simulations with population density methods Duane Q. Nykamp a,1 Daniel Tranchina b,a,c,2 a Courant Institute of Mathematical Science b Department of Biology c Center for Neural Science

More information

ENGI 9420 Lecture Notes 4 - Stability Analysis Page Stability Analysis for Non-linear Ordinary Differential Equations

ENGI 9420 Lecture Notes 4 - Stability Analysis Page Stability Analysis for Non-linear Ordinary Differential Equations ENGI 940 Lecture Notes 4 - Stability Analysis Page 4.01 4. Stability Analysis for Non-linear Ordinary Differential Equations A pair of simultaneous first order homogeneous linear ordinary differential

More information

Hierarchy. Will Penny. 24th March Hierarchy. Will Penny. Linear Models. Convergence. Nonlinear Models. References

Hierarchy. Will Penny. 24th March Hierarchy. Will Penny. Linear Models. Convergence. Nonlinear Models. References 24th March 2011 Update Hierarchical Model Rao and Ballard (1999) presented a hierarchical model of visual cortex to show how classical and extra-classical Receptive Field (RF) effects could be explained

More information

Theoretical physics. Deterministic chaos in classical physics. Martin Scholtz

Theoretical physics. Deterministic chaos in classical physics. Martin Scholtz Theoretical physics Deterministic chaos in classical physics Martin Scholtz scholtzzz@gmail.com Fundamental physical theories and role of classical mechanics. Intuitive characteristics of chaos. Newton

More information

Differential Equations Grinshpan Two-Dimensional Homogeneous Linear Systems with Constant Coefficients. Purely Imaginary Eigenvalues. Recall the equation mẍ+k = of a simple harmonic oscillator with frequency

More information

Forced Oscillations in a Linear System Problems

Forced Oscillations in a Linear System Problems Forced Oscillations in a Linear System Problems Summary of the Principal Formulas The differential equation of forced oscillations for the kinematic excitation: ϕ + 2γ ϕ + ω 2 0ϕ = ω 2 0φ 0 sin ωt. Steady-state

More information

1. (10 points) Find the general solution to the following second-order differential equation:

1. (10 points) Find the general solution to the following second-order differential equation: Math 307A, Winter 014 Midterm Solutions Page 1 of 8 1. (10 points) Find the general solution to the following second-order differential equation: 4y 1y + 9y = 9t. To find the general solution to this nonhomogeneous

More information

Linearization of Differential Equation Models

Linearization of Differential Equation Models Linearization of Differential Equation Models 1 Motivation We cannot solve most nonlinear models, so we often instead try to get an overall feel for the way the model behaves: we sometimes talk about looking

More information

11/17/10. Chapter 14. Oscillations. Chapter 14. Oscillations Topics: Simple Harmonic Motion. Simple Harmonic Motion

11/17/10. Chapter 14. Oscillations. Chapter 14. Oscillations Topics: Simple Harmonic Motion. Simple Harmonic Motion 11/17/10 Chapter 14. Oscillations This striking computergenerated image demonstrates an important type of motion: oscillatory motion. Examples of oscillatory motion include a car bouncing up and down,

More information

EN Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015

EN Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015 EN530.678 Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015 Prof: Marin Kobilarov 0.1 Model prerequisites Consider ẋ = f(t, x). We will make the following basic assumptions

More information

Supporting Online Material for

Supporting Online Material for www.sciencemag.org/cgi/content/full/319/5869/1543/dc1 Supporting Online Material for Synaptic Theory of Working Memory Gianluigi Mongillo, Omri Barak, Misha Tsodyks* *To whom correspondence should be addressed.

More information

Fundamentals of Dynamical Systems / Discrete-Time Models. Dr. Dylan McNamara people.uncw.edu/ mcnamarad

Fundamentals of Dynamical Systems / Discrete-Time Models. Dr. Dylan McNamara people.uncw.edu/ mcnamarad Fundamentals of Dynamical Systems / Discrete-Time Models Dr. Dylan McNamara people.uncw.edu/ mcnamarad Dynamical systems theory Considers how systems autonomously change along time Ranges from Newtonian

More information

Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized Critical Model Based on Neural Networks

Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized Critical Model Based on Neural Networks Commun. Theor. Phys. (Beijing, China) 42 (2004) pp. 121 125 c International Academic Publishers Vol. 42, No. 1, July 15, 2004 Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized

More information

1. < 0: the eigenvalues are real and have opposite signs; the fixed point is a saddle point

1. < 0: the eigenvalues are real and have opposite signs; the fixed point is a saddle point Solving a Linear System τ = trace(a) = a + d = λ 1 + λ 2 λ 1,2 = τ± = det(a) = ad bc = λ 1 λ 2 Classification of Fixed Points τ 2 4 1. < 0: the eigenvalues are real and have opposite signs; the fixed point

More information

Computational Physics (6810): Session 8

Computational Physics (6810): Session 8 Computational Physics (6810): Session 8 Dick Furnstahl Nuclear Theory Group OSU Physics Department February 24, 2014 Differential equation solving Session 7 Preview Session 8 Stuff Solving differential

More information

Lecture XXVI. Morris Swartz Dept. of Physics and Astronomy Johns Hopkins University November 5, 2003

Lecture XXVI. Morris Swartz Dept. of Physics and Astronomy Johns Hopkins University November 5, 2003 Lecture XXVI Morris Swartz Dept. of Physics and Astronomy Johns Hopins University morris@jhu.edu November 5, 2003 Lecture XXVI: Oscillations Oscillations are periodic motions. There are many examples of

More information

Experiment 5. Simple Harmonic Motion

Experiment 5. Simple Harmonic Motion Reading and Problems: Chapters 7,8 Problems 7., 8. Experiment 5 Simple Harmonic Motion Goals. To understand the properties of an oscillating system governed by Hooke s Law.. To study the effects of friction

More information

Exercises Lecture 15

Exercises Lecture 15 AM1 Mathematical Analysis 1 Oct. 011 Feb. 01 Date: January 7 Exercises Lecture 15 Harmonic Oscillators In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium

More information

Chapter 2: Complex numbers

Chapter 2: Complex numbers Chapter 2: Complex numbers Complex numbers are commonplace in physics and engineering. In particular, complex numbers enable us to simplify equations and/or more easily find solutions to equations. We

More information

2.152 Course Notes Contraction Analysis MIT, 2005

2.152 Course Notes Contraction Analysis MIT, 2005 2.152 Course Notes Contraction Analysis MIT, 2005 Jean-Jacques Slotine Contraction Theory ẋ = f(x, t) If Θ(x, t) such that, uniformly x, t 0, F = ( Θ + Θ f x )Θ 1 < 0 Θ(x, t) T Θ(x, t) > 0 then all solutions

More information

Classical Mechanics Comprehensive Exam Solution

Classical Mechanics Comprehensive Exam Solution Classical Mechanics Comprehensive Exam Solution January 31, 011, 1:00 pm 5:pm Solve the following six problems. In the following problems, e x, e y, and e z are unit vectors in the x, y, and z directions,

More information

8 Example 1: The van der Pol oscillator (Strogatz Chapter 7)

8 Example 1: The van der Pol oscillator (Strogatz Chapter 7) 8 Example 1: The van der Pol oscillator (Strogatz Chapter 7) So far we have seen some different possibilities of what can happen in two-dimensional systems (local and global attractors and bifurcations)

More information

Lab 11 - Free, Damped, and Forced Oscillations

Lab 11 - Free, Damped, and Forced Oscillations Lab 11 Free, Damped, and Forced Oscillations L11-1 Name Date Partners Lab 11 - Free, Damped, and Forced Oscillations OBJECTIVES To understand the free oscillations of a mass and spring. To understand how

More information

Lecture 2 Notes, Electromagnetic Theory II Dr. Christopher S. Baird, faculty.uml.edu/cbaird University of Massachusetts Lowell

Lecture 2 Notes, Electromagnetic Theory II Dr. Christopher S. Baird, faculty.uml.edu/cbaird University of Massachusetts Lowell Lecture Notes, Electromagnetic Theory II Dr. Christopher S. Baird, faculty.uml.edu/cbaird University of Massachusetts Lowell 1. Dispersion Introduction - An electromagnetic wave with an arbitrary wave-shape

More information

Principles of DCM. Will Penny. 26th May Principles of DCM. Will Penny. Introduction. Differential Equations. Bayesian Estimation.

Principles of DCM. Will Penny. 26th May Principles of DCM. Will Penny. Introduction. Differential Equations. Bayesian Estimation. 26th May 2011 Dynamic Causal Modelling Dynamic Causal Modelling is a framework studying large scale brain connectivity by fitting differential equation models to brain imaging data. DCMs differ in their

More information

TOPIC E: OSCILLATIONS SPRING 2019

TOPIC E: OSCILLATIONS SPRING 2019 TOPIC E: OSCILLATIONS SPRING 2019 1. Introduction 1.1 Overview 1.2 Degrees of freedom 1.3 Simple harmonic motion 2. Undamped free oscillation 2.1 Generalised mass-spring system: simple harmonic motion

More information

Keble College - Hilary 2014 CP3&4: Mathematical methods I&II Tutorial 5 - Waves and normal modes II

Keble College - Hilary 2014 CP3&4: Mathematical methods I&II Tutorial 5 - Waves and normal modes II Tomi Johnson 1 Keble College - Hilary 2014 CP3&4: Mathematical methods I&II Tutorial 5 - Waves and normal modes II Prepare full solutions to the problems with a self assessment of your progress on a cover

More information

Oscillatory Motion SHM

Oscillatory Motion SHM Chapter 15 Oscillatory Motion SHM Dr. Armen Kocharian Periodic Motion Periodic motion is motion of an object that regularly repeats The object returns to a given position after a fixed time interval A

More information

1 Review of simple harmonic oscillator

1 Review of simple harmonic oscillator MATHEMATICS 7302 (Analytical Dynamics YEAR 2017 2018, TERM 2 HANDOUT #8: COUPLED OSCILLATIONS AND NORMAL MODES 1 Review of simple harmonic oscillator In MATH 1301/1302 you studied the simple harmonic oscillator:

More information

BIBO STABILITY AND ASYMPTOTIC STABILITY

BIBO STABILITY AND ASYMPTOTIC STABILITY BIBO STABILITY AND ASYMPTOTIC STABILITY FRANCESCO NORI Abstract. In this report with discuss the concepts of bounded-input boundedoutput stability (BIBO) and of Lyapunov stability. Examples are given to

More information

High-conductance states in a mean-eld cortical network model

High-conductance states in a mean-eld cortical network model Neurocomputing 58 60 (2004) 935 940 www.elsevier.com/locate/neucom High-conductance states in a mean-eld cortical network model Alexander Lerchner a;, Mandana Ahmadi b, John Hertz b a Oersted-DTU, Technical

More information

2 1. Introduction. Neuronal networks often exhibit a rich variety of oscillatory behavior. The dynamics of even a single cell may be quite complicated

2 1. Introduction. Neuronal networks often exhibit a rich variety of oscillatory behavior. The dynamics of even a single cell may be quite complicated GEOMETRIC ANALYSIS OF POPULATION RHYTHMS IN SYNAPTICALLY COUPLED NEURONAL NETWORKS J. Rubin and D. Terman Dept. of Mathematics; Ohio State University; Columbus, Ohio 43210 Abstract We develop geometric

More information

Effects of Interactive Function Forms in a Self-Organized Critical Model Based on Neural Networks

Effects of Interactive Function Forms in a Self-Organized Critical Model Based on Neural Networks Commun. Theor. Phys. (Beijing, China) 40 (2003) pp. 607 613 c International Academic Publishers Vol. 40, No. 5, November 15, 2003 Effects of Interactive Function Forms in a Self-Organized Critical Model

More information

Physics 106a, Caltech 4 December, Lecture 18: Examples on Rigid Body Dynamics. Rotating rectangle. Heavy symmetric top

Physics 106a, Caltech 4 December, Lecture 18: Examples on Rigid Body Dynamics. Rotating rectangle. Heavy symmetric top Physics 106a, Caltech 4 December, 2018 Lecture 18: Examples on Rigid Body Dynamics I go through a number of examples illustrating the methods of solving rigid body dynamics. In most cases, the problem

More information

ODEs. September 7, Consider the following system of two coupled first-order ordinary differential equations (ODEs): A =

ODEs. September 7, Consider the following system of two coupled first-order ordinary differential equations (ODEs): A = ODEs September 7, 2017 In [1]: using Interact, PyPlot 1 Exponential growth and decay Consider the following system of two coupled first-order ordinary differential equations (ODEs): d x/dt = A x for the

More information

MAS212 Assignment #2: The damped driven pendulum

MAS212 Assignment #2: The damped driven pendulum MAS Assignment #: The damped driven pendulum Sam Dolan (January 8 Introduction In this assignment we study the motion of a rigid pendulum of length l and mass m, shown in Fig., using both analytical and

More information

1 (2n)! (-1)n (θ) 2n

1 (2n)! (-1)n (θ) 2n Complex Numbers and Algebra The real numbers are complete for the operations addition, subtraction, multiplication, and division, or more suggestively, for the operations of addition and multiplication

More information

Math 215/255 Final Exam (Dec 2005)

Math 215/255 Final Exam (Dec 2005) Exam (Dec 2005) Last Student #: First name: Signature: Circle your section #: Burggraf=0, Peterson=02, Khadra=03, Burghelea=04, Li=05 I have read and understood the instructions below: Please sign: Instructions:.

More information

Nonlinear Autonomous Systems of Differential

Nonlinear Autonomous Systems of Differential Chapter 4 Nonlinear Autonomous Systems of Differential Equations 4.0 The Phase Plane: Linear Systems 4.0.1 Introduction Consider a system of the form x = A(x), (4.0.1) where A is independent of t. Such

More information

Torsion Spring Oscillator with Dry Friction

Torsion Spring Oscillator with Dry Friction Torsion Spring Oscillator with Dry Friction Manual Eugene Butikov Annotation. The manual includes a description of the simulated physical system and a summary of the relevant theoretical material for students

More information

Prof. Krstic Nonlinear Systems MAE281A Homework set 1 Linearization & phase portrait

Prof. Krstic Nonlinear Systems MAE281A Homework set 1 Linearization & phase portrait Prof. Krstic Nonlinear Systems MAE28A Homework set Linearization & phase portrait. For each of the following systems, find all equilibrium points and determine the type of each isolated equilibrium. Use

More information

Phase Response Properties and Phase-Locking in Neural Systems with Delayed Negative-Feedback. Carter L. Johnson

Phase Response Properties and Phase-Locking in Neural Systems with Delayed Negative-Feedback. Carter L. Johnson Phase Response Properties and Phase-Locking in Neural Systems with Delayed Negative-Feedback Carter L. Johnson Faculty Mentor: Professor Timothy J. Lewis University of California, Davis Abstract Oscillatory

More information

4 Second-Order Systems

4 Second-Order Systems 4 Second-Order Systems Second-order autonomous systems occupy an important place in the study of nonlinear systems because solution trajectories can be represented in the plane. This allows for easy visualization

More information

Oscillations Simple Harmonic Motion

Oscillations Simple Harmonic Motion Oscillations Simple Harmonic Motion Lana Sheridan De Anza College Dec 1, 2017 Overview oscillations simple harmonic motion (SHM) spring systems energy in SHM pendula damped oscillations Oscillations and

More information

Synaptic dynamics. John D. Murray. Synaptic currents. Simple model of the synaptic gating variable. First-order kinetics

Synaptic dynamics. John D. Murray. Synaptic currents. Simple model of the synaptic gating variable. First-order kinetics Synaptic dynamics John D. Murray A dynamical model for synaptic gating variables is presented. We use this to study the saturation of synaptic gating at high firing rate. Shunting inhibition and the voltage

More information

Vibrations and waves: revision. Martin Dove Queen Mary University of London

Vibrations and waves: revision. Martin Dove Queen Mary University of London Vibrations and waves: revision Martin Dove Queen Mary University of London Form of the examination Part A = 50%, 10 short questions, no options Part B = 50%, Answer questions from a choice of 4 Total exam

More information

arxiv: v3 [q-bio.nc] 1 Sep 2016

arxiv: v3 [q-bio.nc] 1 Sep 2016 Interference of Neural Waves in Distributed Inhibition-stabilized Networks arxiv:141.4237v3 [q-bio.nc] 1 Sep 216 Sergey Savel ev 1, Sergei Gepshtein 2,* 1 Department of Physics, Loughborough University

More information

Lecture Notes for PHY 405 Classical Mechanics

Lecture Notes for PHY 405 Classical Mechanics Lecture Notes for PHY 405 Classical Mechanics From Thorton & Marion s Classical Mechanics Prepared by Dr. Joseph M. Hahn Saint Mary s University Department of Astronomy & Physics September 1, 2005 Chapter

More information

Basic Theory of Dynamical Systems

Basic Theory of Dynamical Systems 1 Basic Theory of Dynamical Systems Page 1 1.1 Introduction and Basic Examples Dynamical systems is concerned with both quantitative and qualitative properties of evolution equations, which are often ordinary

More information

L = 1 2 a(q) q2 V (q).

L = 1 2 a(q) q2 V (q). Physics 3550, Fall 2011 Motion near equilibrium - Small Oscillations Relevant Sections in Text: 5.1 5.6 Motion near equilibrium 1 degree of freedom One of the most important situations in physics is motion

More information

Physics 235 Chapter 4. Chapter 4 Non-Linear Oscillations and Chaos

Physics 235 Chapter 4. Chapter 4 Non-Linear Oscillations and Chaos Chapter 4 Non-Linear Oscillations and Chaos Non-Linear Differential Equations Up to now we have considered differential equations with terms that are proportional to the acceleration, the velocity, and

More information

Chapter 15 - Oscillations

Chapter 15 - Oscillations The pendulum of the mind oscillates between sense and nonsense, not between right and wrong. -Carl Gustav Jung David J. Starling Penn State Hazleton PHYS 211 Oscillatory motion is motion that is periodic

More information

Problem set 7 Math 207A, Fall 2011 Solutions

Problem set 7 Math 207A, Fall 2011 Solutions Problem set 7 Math 207A, Fall 2011 s 1. Classify the equilibrium (x, y) = (0, 0) of the system x t = x, y t = y + x 2. Is the equilibrium hyperbolic? Find an equation for the trajectories in (x, y)- phase

More information

Driven Harmonic Oscillator

Driven Harmonic Oscillator Driven Harmonic Oscillator Physics 6B Lab Experiment 1 APPARATUS Computer and interface Mechanical vibrator and spring holder Stands, etc. to hold vibrator Motion sensor C-209 spring Weight holder and

More information

154 Chapter 9 Hints, Answers, and Solutions The particular trajectories are highlighted in the phase portraits below.

154 Chapter 9 Hints, Answers, and Solutions The particular trajectories are highlighted in the phase portraits below. 54 Chapter 9 Hints, Answers, and Solutions 9. The Phase Plane 9.. 4. The particular trajectories are highlighted in the phase portraits below... 3. 4. 9..5. Shown below is one possibility with x(t) and

More information