
Physics Letters A 360 (2006)

Stability of coupled excitatory-inhibitory neural populations and application to control of multi-stable systems

Roman Ilin, Robert Kozma

Department of Computer Science, The University of Memphis, Memphis, TN 38152, USA

Received 10 January 2006; received in revised form 17 July 2006; accepted 18 July 2006; available online 4 August 2006. Communicated by C.R. Doering

Abstract

The hierarchy of models of interacting neural populations with excitatory and inhibitory connections is described using second-order nonlinear ordinary differential equations. Systematic analytical and numerical studies are aimed at determining stability conditions of the coupled system. Generalized stability conditions are derived for a class of excitatory-inhibitory neural population models. Equilibrium continuation analysis is applied to interpret the obtained stability conditions. The results are discussed in the context of chaotic brain dynamics theory and chaotic itinerancy. Attractor switching in biologically relevant multi-stable systems is demonstrated. © 2006 Elsevier B.V. All rights reserved.

Keywords: Neural populations; Bifurcation; K models; Chaos; Chaotic itinerancy

1. Introduction

In the past decades, various feed-forward and recurrent neural networks have been introduced which utilize steady outputs corresponding to the equilibrium state of the system; see, e.g., [1,2]. In recent years there has been increased interest in more biologically plausible dynamic neural networks. In one approach, brains are viewed as essentially nonequilibrium systems which do not reach a steady state; rather, they generate dynamical patterns of activation exhibiting phase transitions [3]. There are a number of dynamical neural network models with encoding in limit cycles [4-7]. Experimental studies on brain waves at the level of neural populations using EEG techniques gave rise to new theories [8-10] and [11].
Multiple electrode recordings in the olfactory bulb indicated that odors are encoded as complex spatial and temporal patterns in the bulb [10]. Based on these observations, a chaos theory of sensory perception has been proposed [10,11]. In this approach, the state variables of the brain in general, and the olfactory bulb in particular, traverse complex chaotic trajectories which constitute a strange attractor with multiple wings. External stimuli constrain the trajectories to one of the attractor wings, which are identified as stimulus-specific patterns. Once the stimulus disappears, the dynamics returns to the unconstrained state until the next stimulus arrives. Chaotic brain theory offers a plausible explanation for the complex dynamic behaviors observed in brains. According to this theory, pattern recognition in brains is fast because chaotic dynamics is sensitive to small perturbations, allowing easy switching between attractor wings. The dynamics is robust because the system involves the whole brain and its functioning is not limited to individual neurons [12,13]. The idea of a complex chaotic state finds its counterparts in the notions of itinerant chaos [14], winnerless competition [15], and frustrated chaos [16]. Chaotic oscillations in neural networks have been studied extensively in [5,6,17-19]. Population-level brain dynamics is described, for example, in [20,21]. The present work is related to K sets, named after Aaron

Katchalsky, a pioneer of neurodynamic research. K sets embody the idea of using models of populations of neurons as basic processing elements of neural networks [22,23]. K models are complex dynamical systems with many parameters which need to be optimized to properly describe the dynamical behavior of neural populations in brains [24,25]. K sets can be used as dynamic memories by applying various learning methods, including Hebbian correlational learning, habituation, and reinforcement learning. There are numerous examples of successful implementations of K sets as memories in speech recognition [26], classification [13,27], detection of chemicals [28], robot navigation [29-31], and image recognition [32]. These results show that K sets are especially advantageous as associative memories in difficult recognition problems, when only relatively few data points are available for the analysis and the signals contain a high level of noise and clutter. The potential of using K sets as robust dynamic memories is very promising, but the theory is still in its infancy. There are some initial results toward establishing a comprehensive theory of the underlying dynamics, e.g., [33,34]. The aim of the present Letter is to study the dynamics of K sets and analyze stability issues which are relevant when using K sets in various application areas. The Letter is organized as follows. Section 2 defines basic K models at the K0, KI, KII, and KIII levels. Section 3 introduces analytical results on the stability of a specific class of KII sets. Section 4 is devoted to general stability issues of KII sets based on numerical studies. Section 5 discusses the obtained results in the context of multistable chaotic neural networks. In particular, we design a simple KIII system and demonstrate the operation of the chaotic KIII model. Section 6 gives general conclusions and directions of future research.
2. Overview of the K set approach to neural modeling

2.1. General description of K0 equations

The approach represented by K sets is motivated by the idea of modeling populations of neurons. Anatomically, the existence of neural populations is manifested in the micro-column structure of cortical tissues. The basic unit of neural populations corresponding to this granulation level is called the K0 set. The number of neurons in the mammalian brain ranges from about 10^9 in small animals to approximately 10^11 in humans. The granulation level of K0 sets can include many thousands of neurons; i.e., it represents an intermediate level between microscopic neurons and the whole brain. This intermediate level is often referred to as the mesoscopic level, and the corresponding processes manifest mesoscopic brain dynamics [35]. Before elaborating the mathematical model of K0 sets, we give an overview of the prevailing methods used in the neuroscience literature for the description of neurons. During the last decades, extensive research in neuroscience has resulted in a thorough understanding of neural processes [36,37]. Individual neurons are typically modeled using Hodgkin-Huxley equations. The Hodgkin-Huxley and related equations employ a system of first-order ordinary differential equations (ODEs) to describe the physical properties of the cell membrane and the concentrations of different ions in the proximity of the membrane. In these models, the state variables are the membrane potentials of the individual neurons, and the action potentials (also called pulses or spikes) are determined by solving first-order differential equations with respect to the membrane potentials; see, for example, [38,39]. The first-order approximation used in these ODEs provides a very efficient way of describing the dynamics of neural pulses using exponential decay in time.
In some cases, however, a higher-order approximation may be beneficial, as it allows a dynamical characterization which is more refined than a sequence of pure exponential decays. In physics, the use of second-order equations is common practice, starting from Newton's basic equation of motion. It was Freeman's remarkable observation, over 40 years ago, that the description of the dynamics of large masses of neurons can benefit from higher-order ODE models [40,41]. His original idea has led to the hierarchy of K models studied in this Letter. In the next section we introduce the K0 set, which is the starting point of the neuron population model. The state variables describing neural populations are the averaged wave and pulse densities. They are directly related to the average local field potentials, which can be measured by intracranial EEG electrodes. The wave density relates to the membrane potentials of the dendrites, and the pulse density relates to the action potentials on the axons. As the neurons constantly engage in wave-to-pulse transformation, it is sufficient to use either of them as the state variable, while the other is inferred using the corresponding transformation from wave to pulse or vice versa. We use the average pulse density of a neural population as the state variable. Let y_i(t) denote the normalized pulse density of the i-th neural population, i = 1, …, N, where N is the number of populations. The normalization is done by making the steady-state pulse density equal to zero. Accordingly, y_i can assume positive and negative values as time evolves. The mathematical model of a neural population was originally developed based on intracranial EEG measurements in animals. The results introduced below have been obtained in cats with electrodes implanted in the prepyriform cortex and injected with a surgically anesthetizing dose of pentobarbital [40].
In this way the parameters of the open-loop transfer function of the prepyriform cortex have been estimated. The results give a second-order system with the following components: a simple exponential decay term and a term describing the delay effect due to distributed lag in the cortical tissue. The experiments indicated the presence of a third exponential term as well, which is neglected for the sake of simplicity. This leads to the following second-order ODE for the normalized pulse

density y(t) of a neural population, which is called the K0 equation:

(1/(ab)) (ÿ(t) + (a + b) ẏ(t) + ab y(t)) = P(t). (1)

The time constants a and b have been determined experimentally as a = 0.22 ms⁻¹ and b = 0.72 ms⁻¹ [40], while P(t) is the external input at time t. Freeman points out the close parallel between the population response described by the K0 equation and the postsynaptic potentials generated by single neurons in response to impulses. The population time constants must be consistent with the cumulative effects of synaptic delays, dispersion, passive dendritic conduction, passive decay of membrane transients, dispersal of transmitter substances, etc., but they cannot be identified with unique processes at the cellular level [4]. The conversion among interacting K0 sets is modeled by the transfer function Q(v), an asymmetric nonlinear function describing the transformation between the average wave density (v) and the average pulse density (p) in neural masses [8]:

p = Q(v) = q_m (1 − exp(−(e^v − 1)/q_m)), (2)

where q_m is a constant. Following [8], we assume that the pulse-to-wave transformation is linear. Therefore, the pulse-density-to-pulse-density transformation can be described by Q(v) modified by a constant weight factor, as we elaborate in the next section. The value of the constant q_m varies between 1 and 14 for different types of neural populations and for different states of the animal, asleep or awake and motivated. In this work we use the value q_m = 5, which is a typical value for waking animals [9].

Fig. 1. The nonlinear transfer function Q(v) given by Eq. (2) (solid line) and its derivative (dashed line). The maximum of the derivative is shifted to a positive value of the average wave amplitude; q_m = 5.

Fig. 2. Simple excitatory KIe set with two K0 units. KIe has a single parameter, w_ee, which represents the level of mutual excitation within the neural population.
The transfer function is depicted in Fig. 1. Unlike the sigmoid curves commonly used in neural network research, Q(v) is asymmetric, with the level of asymmetry dictated by the constant q_m, and its maximum gain is shifted to positive values of the wave amplitude.

2.2. Dynamics of KI populations

The interaction of mutually excitatory neural populations is modeled by the excitatory KI set (KIe). Similarly, the interaction of inhibitory neural populations is modeled by the inhibitory KI set (KIi). A simple case of two interacting excitatory K0 sets is shown in Fig. 2. Note that the neural mass itself does not have to consist of distinct sub-populations. It is the presence of interactions within the mass that causes the model to consist of two coupled K0 sets. We assume that the pulse density of the first K0 set is transformed into the wave density by the nonlinear function Q(v) (Eq. (2)) and back into the pulse density by a linear function, represented by the weight w_ee. The same holds for the second K0 set. The dynamics is given by the following two second-order ODEs:

ÿ_1 + αẏ_1 + βy_1 = βw_ee Q(y_2),
ÿ_2 + αẏ_2 + βy_2 = βw_ee Q(y_1). (3)

Here α = a + b and β = ab. The equilibria can be found by setting the derivatives to zero, which gives the following equations for the equilibrium values y_1* and y_2*:

y_1* = w_ee Q(y_2*),
y_2* = w_ee Q(y_1*). (4)

Here the consequences of the asymmetry of the transfer function Q(v) are summarized; see Eq. (2). Q(v) is a monotonically increasing function of v with a unique inflexion point. The corresponding maximum derivative is located at positive v values in the biologically plausible parameter range q_m > 1.
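As a quick numerical illustration (our sketch, not part of the original Letter; the helper names are ours), the transfer function of Eq. (2) and the symmetric KIe equilibrium of Eq. (4) can be checked in a few lines of Python:

```python
import math

Q_M = 5.0  # typical value for waking animals

def Q(v, q_m=Q_M):
    """Freeman's asymmetric sigmoid, Eq. (2): q_m*(1 - exp(-(e^v - 1)/q_m))."""
    return q_m * (1.0 - math.exp(-(math.exp(v) - 1.0) / q_m))

def kie_equilibrium(w_ee, y0=5.0, tol=1e-12):
    """Symmetric equilibrium y* = w_ee * Q(y*) of the KIe set, Eq. (4),
    found by simple fixed-point iteration from the starting guess y0."""
    y = y0
    for _ in range(200):
        y_new = w_ee * Q(y)
        if abs(y_new - y) < tol:
            break
        y = y_new
    return y

print(Q(0.0))                       # 0.0, so y = 0 is always an equilibrium
print((Q(1e-6) - Q(-1e-6)) / 2e-6)  # close to 1: Q'(0) = 1, hence w_BP = 1
print(kie_equilibrium(1.5))         # close to 7.5: nonzero equilibrium above w_BP
```

For w_ee = 1.5 > w_BP the iteration converges to the upper stable branch near w_ee·q_m = 7.5, while for w_ee below the limit point it collapses back to zero, matching the bifurcation picture of Fig. 3.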

Fig. 3. Bifurcation diagram of the KIe set with asymmetric transfer function Q(v) with parameter q_m = 5. Notations: solid and dashed lines denote stable and unstable fixed points, respectively; BP: bifurcation point; LP: limit point (saddle-node bifurcation).

Fig. 4. Graphical illustration of the solutions of the equilibrium equation; the curves correspond to the two conditions in Eq. (4); w_ee = 0.6.

We characterize the equilibria of Eq. (4) as a function of the gain parameter w_ee; see Fig. 3. Solid and dotted lines show stable and unstable equilibria, respectively. The stability of the equilibrium points has been evaluated based on the eigenvalues of the Jacobian matrix. For small values of w_ee, there is a unique stable zero equilibrium. As we increase w_ee, we reach a value w_LP at which a limit point (LP) appears. Increasing w_ee further, there are 3 equilibria, two of them stable and one unstable. The zero equilibrium is stable until a threshold value w_ee = w_BP, which is a bifurcation point (transcritical bifurcation). Above w_BP, the zero equilibrium becomes unstable, while there is one positive and one negative stable equilibrium. After some algebra, the following condition can be derived for w_LP:

w_LP = 1 / max_{v>0} { Q(v)/v }. (5)

The maximum in the above equation can be determined as the solution of the following transcendental equation:

ln(1 + v e^v / q_m) = (e^v − 1)/q_m. (6)

For example, in the case of the typical value q_m = 5, the maximum takes place at v = 2.20, which gives w_LP ≈ 0.55; see Fig. 3. A condition for w_BP is obtained by evaluating the derivative of Q at zero:

w_BP = 1 / Q′(v)|_{v=0}. (7)

The transfer function Q(v) given in Eq. (2) has derivative equal to 1 at v = 0 for any q_m. Therefore, in our models w_BP = 1. Fig. 4 illustrates the equilibria for w_ee = 0.6.
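The limit-point value quoted above can be reproduced numerically. The sketch below (our code, not the authors') solves the transcendental equation (6) by bisection and then evaluates w_LP = v*/Q(v*):

```python
import math

def Q(v, q_m=5.0):
    """Freeman's asymmetric sigmoid, Eq. (2)."""
    return q_m * (1.0 - math.exp(-(math.exp(v) - 1.0) / q_m))

def f(v, q_m=5.0):
    """Residual of the transcendental equation (6):
    ln(1 + v*e^v/q_m) - (e^v - 1)/q_m."""
    return math.log(1.0 + v * math.exp(v) / q_m) - (math.exp(v) - 1.0) / q_m

# Bisection on [1, 3]; f changes sign there (f(1) > 0, f(3) < 0).
lo, hi = 1.0, 3.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
v_star = 0.5 * (lo + hi)
w_lp = v_star / Q(v_star)
print(v_star)  # about 2.20, as stated in the text
print(w_lp)    # about 0.55
```

The recovered v* ≈ 2.20 matches the value given in the text for q_m = 5, and w_LP ≈ 0.55 is the saddle-node threshold visible in Fig. 3.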
Note that a system with a symmetric transfer function exhibits a classical pitchfork bifurcation and there is no limit point. For illustration, consider the same network with the symmetric transfer function tanh. The equations do not change, but tanh is used as the nonlinearity. At the branching point BP, the curve splits into three parts which belong to two stable and one unstable equilibria; see Fig. 5. The symmetry of the diagram follows from the symmetry of the system.

2.3. Basics of KII sets

A KII set is formed by feedback between KIe (excitatory) and KIi (inhibitory) sets. In its simplest manifestation, a KII set consists of two excitatory and two inhibitory nodes. In a KII set, four types of internal connections are identified, namely w_ee, w_ei, w_ie, and w_ii. The weights correspond to interacting excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory, and inhibitory-inhibitory populations, respectively. An example of a basic KII set is given in Fig. 6. P(t) denotes the location of an external input signal. As there are 4 types of interactions in excitatory-inhibitory populations, having 4 parameters as shown in Fig. 6 is the simplest possible system. Clearly, one can design much more complex KII sets with hundreds of excitatory and inhibitory nodes and a wide range of connections among them. Distributed KII sets were introduced in [8] and have been widely studied since then. Arrays of KII sets suffice to represent the main dynamics of the olfactory system, namely the olfactory bulb, the anterior olfactory nucleus, and the prepyriform cortex [9]. Complex architectures with distributed KII sets having adaptive lateral connections between excitatory nodes across a KII layer have been used as associative memories, by adjusting the energy landscape of the dynamical system in response to new input patterns; see, e.g., [11]. This is an interesting and complicated issue; however, it is not the topic of the present Letter.

Fig. 5. Bifurcation diagram of the KIe set with tanh transfer function.

Fig. 6. Example of a simple KII set; nodes 1 and 2 are excitatory, nodes 3 and 4 are inhibitory. This KII set has 4 weights: w_ee, w_ei, w_ie, and w_ii. The location of an input signal is marked as P(t).

Here we introduce a parsimonious set of models which allows for systematic analytical and numerical studies of the attractor dynamics, while at the same time having the capacity to produce complex oscillatory behaviors consistent with neurophysiological findings. In the next section, principles of perception and cognition are summarized using nonconvergent dynamical principles, which will be interpreted with the help of K sets.

2.4. Hierarchical construction of KIII sets

Simultaneous recordings of the activity of the olfactory bulb reveal that odors are encoded in amplitude-modulated spatial patterns of pulse densities. The pulse density oscillates with a frequency in the gamma band, around 40 Hz. The observed spatial patterns refer to the pulse density amplitudes averaged over the time period corresponding to the later stage of inhalation and the beginning of exhalation in the breathing cycle. In the second part of exhalation, the pulse density falls into the complex basal state associated with chaotic oscillations. This state changes into another spatial pattern with the next breathing cycle [10,11,35]. KIII sets have been introduced to model such physiological findings. As the complex sensory dynamics was first observed in the olfactory bulb, early KIII sets mimic the architecture of the olfactory system [13,27]. An example of the KIII architecture inspired by the olfactory system is given in Fig. 7. Each layer is a distributed KII set, which consists of interacting neural populations.
The simplified KIII set shown here consists of 3 layers of distributed KII sets corresponding to different anatomical parts of the brain: olfactory bulb (OB), anterior olfactory nucleus (AON), and prepyriform cortex (PC). Input signals enter OB through the glomeruli layer (top layer in Fig. 7). Excitatory elements of the OB layer correspond to the populations of the secondary dendrites of the mitral cells. The inhibitory populations are the granule cells. We model mutual excitation between the mitral cells and mutual inhibition between the granule cells [23,42]. AON consists of excitatory pyramidal and inhibitory stellate cells. OB sends projections to AON and PC. They, in turn, send feedback projections, as shown by the dashed lines in Fig. 7. The delays in the feedback tract are greater than in other parts of the olfactory system, due to the length of the tract. Therefore, feedback links are modeled by delayed connections [24,43]. KIII is described by the following set of second-order ODEs:

(1/β) (ÿ_i(t) + α ẏ_i(t) + β y_i(t)) = Σ_{j=1}^{N} w_ij Q(y_j(t)) + Σ_{j=1}^{N} Σ_{τ=1}^{T} k_ijτ Q(y_j(t − τ)) + P_i(t),   i = 1, …, N. (8)

Here N is the total number of K0 sets in the system, and α and β are the time constants defined earlier. w_ij and k_ijτ are the coupling weights, τ is the time delay associated with feedback, and T is the maximum delay. We model the delays in discrete time steps to simplify computations. It is assumed that the interactions within the KII layers occur without delay; the interactions between the layers and along the feedback tract are delayed. The first term on the right-hand side of Eq. (8) contains the intra-layer coupling, and the second term contains the coupling between the layers. Introducing the first derivatives of y_i with respect to time as additional state variables, each K0 equation can be transformed into two simultaneous equations with respect to y_i and z_i = ẏ_i.
In vector notation, the state of the KIII system is given by the following 2N-dimensional vector:

x = (y_1, z_1, y_2, z_2, …, y_N, z_N)^T. (9)
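The y_i, z_i substitution can be made concrete for a single K0 set. The sketch below (our illustration; only the time constants of Eq. (1) are taken from the text) writes the K0 equation in first-order form, checks that the resulting 2×2 companion block has eigenvalues −a and −b, and integrates the impulse response with a classical Runge-Kutta step:

```python
import numpy as np

a, b = 0.22, 0.72          # experimental time constants, ms^-1 (Eq. (1))
alpha, beta = a + b, a * b

# Companion block for one K0 set: d/dt (y, z) = A2 (y, z), with z = dy/dt.
A2 = np.array([[0.0, 1.0],
               [-beta, -alpha]])
print(sorted(np.linalg.eigvals(A2).real))  # [-0.72, -0.22]

def rk4_step(x, dt):
    """One classical Runge-Kutta step for dx/dt = A2 x (zero input)."""
    k1 = A2 @ x
    k2 = A2 @ (x + 0.5 * dt * k1)
    k3 = A2 @ (x + 0.5 * dt * k2)
    k4 = A2 @ (x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Impulse response: y(0) = 0, dy/dt(0) = 1; analytically
# y(t) = (exp(-a t) - exp(-b t)) / (b - a), peaking near t = 2.37 ms.
x, dt, ys = np.array([0.0, 1.0]), 0.01, []
for _ in range(6000):          # integrate 60 ms
    x = rk4_step(x, dt)
    ys.append(x[0])
print(max(ys))      # about 0.824, the peak of the pulse
print(abs(ys[-1]))  # essentially zero: the pulse has decayed
```

The two eigenvalues −a and −b are exactly the exponential decay rates discussed in Section 2.1, and the same companion block repeated N times is the matrix A of Eq. (11) below.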

Fig. 7. Simplified KIII system and its relationship with the anatomy of the olfactory system. Top layer: inputs from the receptors via the glomeruli. Additional layers: olfactory bulb (OB), anterior olfactory nucleus (AON), prepyriform cortex (PC). Dashed lines are delayed feedback connections. Black and white circles: inhibitory and excitatory populations, respectively.

The evolution of the system is described by the following matrix equation:

ẋ(t) = A x(t) + β W_KIII Q(x(t)) + β Σ_{τ=1}^{T} V_KIII^τ Q(x(t − τ)), (10)

where the matrices W_KIII and V_KIII^τ contain the coupling weights. A is a constant block-diagonal matrix built from N identical 2×2 companion blocks:

A = diag( [0 1; −β −α], …, [0 1; −β −α] ). (11)

Eq. (10) is the general formulation of the KIII model, which can exhibit very complex dynamics, including chaotic behavior; see [25,30,34]. Due to the complexity of the model, it is generally hopeless to solve Eq. (10) in quadratures. Efficient numerical integration methods based on Runge-Kutta algorithms have been successfully employed to solve this equation under various conditions [13,27]. In this Letter we introduce results using the tools of dynamical systems theory in order to improve our understanding of the underlying physical phenomena. First, we analyze simplified forms of KII sets to obtain analytical results concerning system stability. Then we examine how the basic properties are preserved in more complex models which do not allow analytical solution. Finally, we introduce simple examples of building multi-stable KIII models with biologically feasible dynamical behaviors.

3. Analytical results on the stability of intermediate KII sets

3.1. Overview of earlier results on reduced KII models

Xu and Principe [33] considered the reduced KII set shown in Fig. 8. This is a 4th-order system described by two second-order ODEs, and it depends on two weight parameters, w_ei and w_ie.
Note that the reduced KII set is not a KII set according to the basic notion introduced in Section 2.3, as it does not incorporate excitatory-excitatory and inhibitory-inhibitory interactions. Nevertheless, this simplification can serve as a useful starting point toward more complex and more biologically plausible systems.

Fig. 8. The reduced KII set is the simplest model of a mixed excitatory-inhibitory population. It has two parameters, w_ei and w_ie.

Fig. 9. The intermediate KII set is obtained from the full KII set by turning off the excitatory-excitatory and inhibitory-inhibitory connections. The side connection weights w_ei^l and w_ie^l can be different from the central connection weights w_ei and w_ie.

The main results of [33] concern the existence of a unique equilibrium and the identification of the two parameter regions where the equilibrium is stable and unstable. The condition of stability has been obtained using a generalization of the Poincaré-Bendixson theorem. The stability condition, in the absence of external input, is given by

w_ei w_ie < α²/β, (12)

where α and β are the known time constants of the basic K0 set. Next, we generalize these results to more realistic KII sets.

3.2. Stability conditions for the intermediate KII set

We introduce the intermediate KII set as shown in Fig. 9. The intermediate KII set has two excitatory and two inhibitory K0 units. However, it has only excitatory-inhibitory and inhibitory-excitatory connections, while the inhibitory-inhibitory and excitatory-excitatory interactions are missing. For the sake of the detailed study in this section, we introduce the lateral interaction weights w_ei^l and w_ie^l, which can be different from the central weights w_ei and w_ie, respectively. The system is described by the following 4 differential equations:

ÿ_1 + αẏ_1 + βy_1 = β(−w_ie Q(y_3) − w_ie^l Q(y_4) + P(t)),
ÿ_2 + αẏ_2 + βy_2 = β(−w_ie^l Q(y_3)),
ÿ_3 + αẏ_3 + βy_3 = β(w_ei Q(y_1) + w_ei^l Q(y_2)),
ÿ_4 + αẏ_4 + βy_4 = β(w_ei^l Q(y_1)). (13)

First we find the equilibria of this system by setting the derivatives to zero and solving the resulting system of algebraic equations.
Expressing the equilibrium values y_2* and y_4* in terms of the other two variables, we obtain the following two simultaneous equations:

y_1* = −w_ie Q(y_3*) − w_ie^l Q(w_ei^l Q(y_1*)) + P(t),
y_3* = w_ei Q(y_1*) + w_ei^l Q(−w_ie^l Q(y_3*)). (14)

We can show that, in the absence of external input, this system has a unique zero solution; for details see Appendix A. The stability of the zero equilibrium can be analyzed by considering the eigenvalues of the Jacobian of the system. We follow the approach used in [33] and find the condition under which the maximum real part of the eight eigenvalues equals zero. This gives the boundary between the regions of stable and unstable equilibria. The Jacobian Df of the intermediate KII set is given by Eq. (15). Note

that Q′_{y_i} stands for dQ/dy evaluated at the equilibrium point y_i = y_i*. Ordering the state vector as (y_1, z_1, y_2, z_2, y_3, z_3, y_4, z_4), the odd rows of Df contain a single 1 in the corresponding z-column, and the even (velocity) rows are

row 2: (−β, −α, 0, 0, −βw_ie Q′_{y_3}, 0, −βw_ie^l Q′_{y_4}, 0),
row 4: (0, 0, −β, −α, −βw_ie^l Q′_{y_3}, 0, 0, 0),
row 6: (βw_ei Q′_{y_1}, 0, βw_ei^l Q′_{y_2}, 0, −β, −α, 0, 0),
row 8: (βw_ei^l Q′_{y_1}, 0, 0, 0, 0, 0, −β, −α). (15)

The eigenvalues of the Jacobian are determined by solving the characteristic equation det(Df − λI) = 0, where I is the 8×8 identity matrix. The stability condition is given by the solution of an 8th-order polynomial. There is no general solution in quadratures for an 8th-order polynomial, so it seems hopeless to pursue the solution of this problem analytically. Surprisingly, however, we have succeeded in deriving an exact analytical expression for the eigenvalues of the matrix in Eq. (15) using a computer algebra system. The expression for the 8 eigenvalues is given by

λ_{1-8} = −α/2 ± (1/2) √[ α² − 4β ± 2β √( ±2√(w_ie² w_ei² + 4 w_ie w_ei w_ei^l w_ie^l) − (2 w_ie w_ei + 4 w_ei^l w_ie^l) ) ]. (16)

In order to derive a stability condition, we need to determine when the largest real part of the eigenvalues becomes zero. We proceed by transforming Eq. (16) into polar form. Next, we take the square root of the corresponding quantities and separate the real and imaginary parts. This is a straightforward but cumbersome procedure, involving various special cases regarding the signs of the quantities under the square root. Finally, the following compact condition has been derived for the stability of the zero equilibrium (for details, see Appendix B):

w_ei w_ie < γ α²/β, (17)

where γ = 2/(1 + 2k_1 k_2 + √(1 + 4 k_1 k_2)), k_1 = w_ei^l / w_ei, and k_2 = w_ie^l / w_ie. Eq. (17) is the generalization of Eq. (12) through the correction factor γ. It contains the stability condition for the reduced KII set as the special case γ = 1, obtained for k_1 = k_2 = 0. Another special case is the intermediate KII set with w_ie^l = w_ie and w_ei^l = w_ei. This yields k_1 = k_2 = 1 and γ = 2/(3 + √5) ≈ 0.382. The stability boundaries determined explicitly for various KII systems are shown in Fig. 10.
Fig. 10. Stability boundaries for the intermediate KII set. The equilibrium is unstable above each curve and stable below it. Notations: the dotted curve is for the reduced KII set, Eq. (12); the solid curve corresponds to w_ei^l = w_ei and w_ie^l = w_ie; the dashed curve is for w_ei^l = 0.5 w_ei and w_ie^l = 0.5 w_ie.
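The analytical boundary (17) can be cross-checked numerically. The sketch below (our code; the weight values are chosen only for illustration) assembles the Jacobian of Eq. (15) at the zero equilibrium, where Q′ = 1, and confirms that the largest real part of the eigenvalues changes sign at w* = √(γ α²/β) for the symmetric case k_1 = k_2 = 1, and at √(α²/β) for the reduced KII set (k_1 = k_2 = 0):

```python
import numpy as np

a, b = 0.22, 0.72
alpha, beta = a + b, a * b

def jacobian_at_zero(K):
    """2N x 2N Jacobian (Eq. (15)) at the zero equilibrium, where Q'(0) = 1.
    K[i, j] is the signed coupling weight from unit j to unit i."""
    N = K.shape[0]
    J = np.zeros((2 * N, 2 * N))
    for i in range(N):
        J[2 * i, 2 * i + 1] = 1.0          # dy_i/dt = z_i
        J[2 * i + 1, 2 * i] -= beta        # -beta * y_i
        J[2 * i + 1, 2 * i + 1] = -alpha   # -alpha * z_i
        J[2 * i + 1, 0::2] += beta * K[i]  # beta * w_ij acting on y_j
    return J

def max_re(w, k=1.0):
    """Largest eigenvalue real part for the intermediate KII set with
    central weights w_ei = w_ie = w and lateral weights k * w."""
    wl = k * w
    K = np.array([[0.0, 0.0, -w, -wl],
                  [0.0, 0.0, -wl, 0.0],
                  [w,  wl,  0.0, 0.0],
                  [wl, 0.0, 0.0, 0.0]])
    return np.linalg.eigvals(jacobian_at_zero(K)).real.max()

gamma = 2.0 / (3.0 + np.sqrt(5.0))        # Eq. (17) with k1 = k2 = 1
w_star = np.sqrt(gamma * alpha**2 / beta)
print(round(w_star, 3))                                  # about 1.46
print(max_re(w_star - 0.1) < 0 < max_re(w_star + 0.1))   # True
```

The sign change of the leading eigenvalue straddles the predicted w*, and setting k = 0 reproduces the reduced-KII boundary of Eq. (12), consistent with the curves of Fig. 10.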

Our results show that the qualitative behavior of the stability boundary remains the same even in the case of the intermediate KII set with lateral weights w_ei^l and w_ie^l. Moreover, it appears that the lateral weights provide a means to control the boundary between the stable and the unstable states. As the strengths of the lateral weights w_ei^l and w_ie^l increase, the stability boundary shifts away from the reduced KII case, more and more toward the origin. At the same time, the region below the curve, corresponding to stable equilibria, shrinks. The absence of stable equilibria in a KII component is an important condition for complex chaotic behavior of interacting KII sets at the KIII level. Therefore, the results in this section on controlling the stability boundary have important consequences for the design of KIII sets, as described later in this work.

4. Stability properties of the full KII set

The equations for a simple, full KII set with 4 nodes and 4 weights (see Fig. 6) are given as follows:

ÿ_1 + αẏ_1 + βy_1 = β(w_ee Q(y_2) − w_ie Q(y_3) − w_ie Q(y_4) + P(t)),
ÿ_2 + αẏ_2 + βy_2 = β(w_ee Q(y_1) − w_ie Q(y_3)),
ÿ_3 + αẏ_3 + βy_3 = β(w_ei Q(y_1) + w_ei Q(y_2) − w_ii Q(y_4)),
ÿ_4 + αẏ_4 + βy_4 = β(w_ei Q(y_1) − w_ii Q(y_3)). (18)

This 8th-order system does not allow analytical solutions. Nevertheless, some important qualitative features of the stability boundary obtained in simpler models will be beneficial for the understanding of the more complex system. In the absence of analytical solutions, the results of numerical studies are presented here. First we study the KII set without inhibitory-inhibitory connections, i.e., w_ii = 0. This system has 3 parameters: w_ee, w_ei, and w_ie. We explore the 3-dimensional parameter space by changing each weight in the interval [0, 2] with step size 0.1. For each triplet of weights, the equilibria are found using Newton's method, and their stability is determined by evaluating the eigenvalues of the Jacobian matrix.
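The Hopf bifurcation of the zero equilibrium reported below at w_ei = 1.48 (the cross-section of Fig. 12(a), with w_ie = 0.5, w_ee = 1.6, w_ii = 0) can be reproduced from Eq. (18). The sketch below (our code, not the authors' tool) linearizes at zero, where Q′ = 1, and locates the sign change of the leading eigenvalue by bisection:

```python
import numpy as np

a, b = 0.22, 0.72
alpha, beta = a + b, a * b

def max_re_full_kii(w_ei, w_ie=0.5, w_ee=1.6, w_ii=0.0):
    """Largest eigenvalue real part of the Jacobian of the full KII set,
    Eq. (18), at the zero equilibrium (where Q'(0) = 1)."""
    K = np.array([[0.0,  w_ee, -w_ie, -w_ie],
                  [w_ee, 0.0,  -w_ie,  0.0],
                  [w_ei, w_ei,  0.0,  -w_ii],
                  [w_ei, 0.0,  -w_ii,  0.0]])
    N = 4
    J = np.zeros((2 * N, 2 * N))
    for i in range(N):
        J[2 * i, 2 * i + 1] = 1.0
        J[2 * i + 1, 2 * i] -= beta
        J[2 * i + 1, 2 * i + 1] = -alpha
        J[2 * i + 1, 0::2] += beta * K[i]
    return np.linalg.eigvals(J).real.max()

# Zero equilibrium: unstable at w_ei = 2, stable at w_ei = 1.3.
print(max_re_full_kii(2.0) > 0, max_re_full_kii(1.3) < 0)  # True True

# Locate the stability loss between the two values by bisection.
lo, hi = 1.3, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if max_re_full_kii(mid) > 0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 2))  # close to 1.48, the Hopf point of Fig. 12(a)
```

The numerically located crossing agrees with the continuation result obtained with Content, which supports the sign conventions used in Eq. (18).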
Fig. 11 shows a two-dimensional cut through the parameter space. The figure can be divided into regions based on the number of total/stable equilibria. The main difference from the intermediate KII set is the appearance of multiple equilibrium states; compare to Fig. 10. It is remarkable that the region without stable equilibria, marked 0/1 and extending from the upper left corner of Fig. 10, is preserved. Its boundary, however, is shifted toward lower values of w_ei and w_ie, as compared to Fig. 10. In order to better understand Fig. 11, we employ the bifurcation analysis tool called Content [44]. Starting from a known equilibrium point, this tool performs continuation of the equilibrium curve in the parameter space. Structural changes are detected by considering the eigenvalues of the Jacobian matrix of the system. The appearance of one eigenvalue with zero real part indicates a

Fig. 11. Regions of structural stability with regard to equilibria. Each area is labeled by the number of total/stable equilibria; w_ee = 1.6, w_ii = 0. The arrows refer to the cross-sections of the phase space displayed on separate diagrams.

limit point (LP) or a branching point bifurcation (BP). The appearance of a pair of eigenvalues with zero real parts indicates a Hopf bifurcation (H).

Fig. 12. Continuation of equilibria for full KII sets with w ee = 1.6 and w ii = 0. Notations: H - Hopf bifurcation, LP - limit point, BP - branching point. The four diagrams correspond to the four cross-sections indicated in Fig. 11: (a) w ie = 0.5; (b) w ie = 1.3; (c) w ei = 0.5; (d) w ei = 1.3.

Consider Fig. 12(a). The location of this cross-section is indicated by the arrows marked diagram 1 in Fig. 11. We know from Fig. 11 that at w ei = 2 the system has 1 stable and 2 unstable equilibria. The top curve on diagram 1 is the continuation of the stable equilibrium; no special points are encountered along it. The zero equilibrium, which exists for all parameter values, is unstable at w ei = 2. As we decrease w ei, we detect a Hopf bifurcation at w ei = 1.48, and the zero equilibrium becomes stable. Decreasing w ei even more, we detect a branching point (BP) where two equilibrium curves intersect. The curve starting upward from the BP has several interesting points. The points marked H indicate the appearance of a pair of real eigenvalues (zero imaginary parts) whose real parts sum to zero. Such points are called neutral saddles. We use the same letter H for neutral saddles and Hopf bifurcations because the test condition is the same for both: a pair of eigenvalues that sum to zero. The other two points are limit points (LP). The second vertical cross-section (Fig. 12(b)) is similar to the first one. The difference is that for higher values of w ei there is only one equilibrium, which is unstable. The absence of a stable equilibrium is typically an indication of a limit cycle; its existence is confirmed by numerical simulations for the given parameters. By cutting Fig. 11 in the horizontal direction, we obtain the diagrams shown in Fig. 12(c) and (d).
The zero equilibrium is unstable at w ie = 2, but it becomes stable again through a Hopf bifurcation as w ie decreases. Further decreasing w ie, a branching point appears. There are several limit points and neutral saddles in the proximity of the branching point. Finally, we introduce the KII set with all 4 types of connections. Examples of the results are shown in Fig. 13. The regions of stability become more complicated, and we can observe up to 9 equilibria and 13 stability regions of 6 different types. Multiple continuations of equilibria have shown the same basic bifurcation mechanisms in this case. We can conjecture that those mechanisms are important generic properties of KII sets, which can be utilized in designing KIII sets by combining KII sets with specific stability properties. An important observation concerns the behavior of the unstable equilibrium. The introduction of inhibitory-inhibitory links stabilizes the unique unstable equilibrium for most values of the parameters w ei and w ie; see the region marked 1/1 for large w ei and w ie values. However, some traces of the region without stability still persist; see the regions around the values w ei ≈ w ie ≈ 1.1 and w ei ≈ w ie ≈ 2. This observation is utilized in the design of multi-stable KIII sets in the next section.
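A minimal version of the eigenvalue test behind these continuation diagrams can be sketched by linearizing the full KII set at the origin and sweeping w ei. Because the sigmoid slope is assumed to be Q'(0) = 1 here, w ee is set to 0.8 (below the excitatory instability threshold for unit slope), and α and β are also assumed values, so the crossing found will not coincide with the paper's Hopf point at w ei = 1.48; only the mechanism is illustrated.

```python
# Sweep w_ei downward and watch the leading real part of the linearized
# spectrum of the zero equilibrium cross zero.
import numpy as np

alpha, beta = 0.22, 0.72           # assumed rate constants
w_ee, w_ie, w_ii = 0.8, 1.3, 0.0   # w_ee chosen stable for unit sigmoid slope

def leading_real(w_ei):
    # Connection matrix of the full KII set linearized at the origin,
    # assuming Q'(0) = 1.
    W = np.array([[0.0,  w_ee, -w_ie, -w_ie],
                  [w_ee, 0.0,  -w_ie,  0.0],
                  [w_ei, w_ei,  0.0,  -w_ii],
                  [w_ei, 0.0,  -w_ii,  0.0]])
    # Each eigenvalue mu of W contributes the two roots of
    # lambda^2 + alpha*lambda + beta*(1 - mu) = 0 to the 8-dim spectrum.
    lams = []
    for mu in np.linalg.eigvals(W):
        lams.extend(np.roots([1.0, alpha, beta * (1.0 - mu)]))
    return max(l.real for l in lams)

# Record where the leading real part changes sign on a downward sweep;
# a sign change through a complex pair corresponds to a Hopf bifurcation.
grid = np.arange(2.0, 0.0, -0.01)
signs = [leading_real(w) > 0 for w in grid]
crossings = [grid[i] for i in range(1, len(grid)) if signs[i] != signs[i - 1]]
```

Continuation packages such as Content automate the same test along an equilibrium curve, distinguishing Hopf points from limit and branching points by how the critical eigenvalues cross zero.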

Fig. 13. Regions of structural stability for the full KII set. The numbers at each evaluation point are the number of total/stable equilibria; w ee = 1.6 and w ii =

5. K sets with multi-stable chaotic attractor

5.1. Principles of switching attractor basins in KIII sets

According to dynamic brain theories [3,45], the operation of brains relies on the existence of complex energy landscapes with multiple equilibria. Stable equilibria are the valleys and the unstable ones are the peaks. Energy is constantly pumped into the system, preventing it from coming to rest. The system continuously moves along a complex trajectory across the attractor landscape. K sets are well suited to implement this paradigm. The bifurcation diagrams obtained in the previous sections allow us to hypothesize the mechanisms by which the chaotic basal state is maintained in KIII sets. The coexistence of limit cycle oscillations and multiple equilibria hints at the mechanism by which jumping from one valley to another takes place. In the following example we design a KIII system that demonstrates autonomous quasi-periodic switching between two attractors. We use the architecture with three basic layers, which was originally suggested based on anatomical considerations concerning the olfactory cortex. The parameters of each layer are selected to create the required dynamics.

5.2. Sample KIII design

We build a system that oscillates between two dynamic attractors in its basal state, where KII sets are used as building blocks and the blocks are connected with direct or time-delayed links. In this section we show a system with chaotic switching between two attractors. This example is simple but illustrative. Clearly, even the relatively simple KII models studied in this Letter have the potential of generating multi-stable systems with a large number of states (at least 9 states have been identified for single KIIs).
Moreover, by linking hundreds of KII units into distributed arrays, one has the potential for producing very complex dynamics. This has been demonstrated in various application areas, but it has not been the topic of the present study. Our goal now is to show that using a few simple KII units, each having just 4 adjustable gains, we are able to produce the desired dynamics in the style of brains. The design of the selected KIII system is shown in Fig. 14. The first layer corresponds to external inputs. The second and fourth layers are KII sets with one unstable equilibrium, while the third layer is a KII set with two stable equilibria. The weights of each KII set are given in Table 1. We used phase diagrams similar to the one shown in Fig. 13 to select our parameter values. The parameters of the fourth layer generate a limit cycle which disappears under input, as suggested in [33]. The links between the KII sets are immediate, except for the feedback link from the fourth layer to the third. The feedback and feed-forward connections are selected to support constant switching between the attractors. By properly selecting the connection strengths between the layers, we can generate various basal states. Three parameter sets are given in Table 2. Systems 2 and 3 oscillate around one of the stable equilibrium states. System 1 switches between the two attractors.
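The wiring just described (feed-forward drive, an immediate feedback loop, an inhibitory feedback link, and one delayed link) can be captured in a small configuration sketch. The weight values below are placeholders, not the entries of Tables 1 and 2, and the field names are illustrative.

```python
# A minimal data-structure sketch of the KIII wiring of Fig. 14: four layers,
# feed-forward links, one immediate feedback link, one inhibitory feedback
# link, and one time-delayed feedback link from Layer 4 to Layer 3.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Connection:
    src: int                                 # source layer
    dst: int                                 # destination layer
    weight: float                            # inter-layer gain (placeholder)
    delay_ms: Optional[Tuple[int, int]] = None  # averaging window, if delayed

kiii = [
    Connection(1, 2, 0.5),                       # external input -> oscillator
    Connection(2, 3, 0.5),                       # oscillator drives bistable layer
    Connection(2, 4, 0.5),                       # oscillator drives Layer 4
    Connection(4, 2, 0.5),                       # immediate feedback
    Connection(3, 4, -0.5),                      # feedback that inhibits Layer 4
    Connection(4, 3, 0.5, delay_ms=(400, 500)),  # delayed kick back to Layer 3
]
delayed = [c for c in kiii if c.delay_ms]
```

Varying the `weight` entries plays the role of Table 2: strengthening or weakening individual links selects between the switching and the degenerate basal states.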

Fig. 14. Design of the KIII system with oscillations of Layer 3 alternating between two periodic attractors. This is a case of the system in Fig. 7 with each layer containing only one KII unit; the dashed line denotes the delayed feedback connection.

Table 1
Intra-layer parameters of KII sets
Parameter / Layer 1 / Layer 2 / Layer 3 / Layer 4
w ee
w ei
w ie
w ii
w l ei
w l ie

Table 2
Inter-layer parameters for various systems
Inter-layer weights / System 1 / System 2 / System 3
w
w
w
w42 d
w42 d

5.3. Operation of bistable KIII systems

Consider the operation of System 1. The oscillator in the second layer is randomly initialized. The third layer is initialized at one of the equilibrium points. Layer 2 forces Layer 3 to oscillate around this equilibrium. At the same time, the oscillations of Layer 2 induce oscillations of Layer 4. The immediate feedback from Layer 4 to Layer 2 increases the amplitude of Layer 2's oscillations to the point where Layer 3 jumps over the energy barrier and lands in the vicinity of the second equilibrium. At this time Layer 4 is turned off by the feedback link from Layer 3 to Layer 4, which inhibits the latter. Finally, the delayed signal from Layer 4 reaches Layer 3. The delay line shown in the figure is a very simple time-distributed delay: it produces a signal determined by averaging the activations of its source population over the interval from 400 ms to 500 ms in its past. This delayed input kicks Layer 3 out of the second equilibrium back into the first one. The loop is closed and the process repeats over and over. System 1 has well-balanced connection weights to achieve switching between the two states. The time series are given in Fig. 17. Even though there is a certain periodicity in the process described above, the nonlinearity of all building blocks makes the transitions not fully predictable. The return plot of the time series for Layer 4 of System 1, shown in Fig. 18, indicates the presence of a strange attractor resembling the Rössler system [46].
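The time-distributed delay line described above can be sketched as a ring buffer that averages the 400-500 ms old slice of its input history. The 1 ms step size and the buffer handling are assumptions for illustration.

```python
# Minimal sketch of the time-distributed delay: the output at time t is the
# average of the source activation over the window [t - 500 ms, t - 400 ms].
from collections import deque

class DistributedDelay:
    def __init__(self, dt_ms, d_min=400.0, d_max=500.0):
        self.n_min = int(d_min / dt_ms)   # samples back to the window's near edge
        self.n_max = int(d_max / dt_ms)   # samples back to the window's far edge
        # Ring buffer holding the last n_max + 1 samples, oldest first.
        self.buf = deque([0.0] * (self.n_max + 1), maxlen=self.n_max + 1)

    def step(self, x):
        # Push the newest sample, then average the 400-500 ms old slice.
        self.buf.append(x)
        window = list(self.buf)[: self.n_max - self.n_min + 1]
        return sum(window) / len(window)
```

Feeding the line a constant input shows the expected behavior: the output stays at zero until the window fills, then settles to the input value.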

Fig. 15. Time series for the KIII system confined to its first periodic attractor; degenerate System 2, Table 2.

Fig. 16. Time series for the KIII system confined to its second periodic attractor; degenerate System 3, Table 2.

If the delayed feedback connection strengths are decreased, as in System 2, the dynamics degenerates and the system oscillates around one of its equilibria. The time series corresponding to System 2 are shown in Fig. 15. Analogously, System 3 has a stronger connection weight between Layer 3 and Layer 2. That is why Layer 3 oscillates around the other equilibrium, corresponding to a higher driving amplitude. The time series for System 3 are shown in Fig. 16.

5.4. Discussion

The operation of the KIII model described in the previous section can be explained using the analogy of a forced pendulum with two equilibrium states. Examples of such systems can be found in [46,47]. A KII set with two stable equilibria is analogous to those systems with two energy wells. Moreover, another KII set with parameters corresponding to limit cycle oscillations can play the role of an external oscillator when coupled with the first KII set. The idea that some areas of the brain generate periodic oscillations is commonly accepted, as in the case of coupled oscillating neuron models [5,6]. The observed behavior of the KIII model introduced in this Letter can be interpreted based on the stability properties of the KII components. We hypothesize that some brain areas can be modeled by KII sets having an unstable zero equilibrium. Unlike the

classical case, the depths of our energy wells are different, since the system is asymmetric. The way to create switching between the energy wells is to use two forcing oscillators. One of these oscillators works constantly. The other kicks in only when the system is in the deeper region of the potential valley. As a result, the coupled KIII model with KII sets selected according to the above specifications can exhibit complex quasi-periodic and chaotic oscillations if the interactions between the KII components are properly specified. Indeed, in the case of a well-balanced KIII set (System 1), the switching between the two states is apparent; see Fig. 17. If the balance among the KII sets is broken, as for Systems 2 and 3, the dynamics of the KIII set degenerates and its behavior is dominated by one of its components; see Figs. 15 and 16. As a result, no switching is observed. In the context of biological significance, System 1 corresponds to high-dimensional complex cortical oscillations resembling healthy brains. On the other hand, the broken balance in Systems 2 and 3 signifies pathological mental behaviors with narrow-band oscillations, as in seizure.

Fig. 17. Time series for the KIII system switching between two periodic attractors; balanced System 1, Table 2.

Fig. 18. The return plot of the top node activation of Layer 4 of System 1.

In order to obtain quantitative estimates of the dynamics of KIII, we evaluate the largest Liapunov exponent of each of the Systems 1, 2, and 3. We employ Wolf's method [48] to determine the Liapunov exponent from experimental time series. The results

are shown in Table 3. Leading Liapunov exponents are averaged over 40 experiments performed with random initial conditions. The Liapunov exponent for the switching time series of System 1 is a positive number that is about an order of magnitude larger than the corresponding exponents for Systems 2 and 3.

Table 3
Largest Liapunov exponent for various systems
System / Largest Liapunov exponent
System 1
System 2
System 3

KIII is an example of a dynamical system with switching between high- and low-dimensional states. The newly developing field of chaotic itinerancy studies dynamical systems with frequent switching between various attractors [14]. In the characteristic picture of chaotic itinerancy, the trajectory of the dynamical system visits the known attractors/attractor ruins again and again, like a restless traveler. Chaotic itinerancy indicates that low-dimensional and high-dimensional chaotic states may coexist, and together they give rise to very complex behaviors. Earlier studies indicated that KIII models with multiple, spatially distributed KII layers exhibit behaviors which are consistent with the description of chaotic itinerancy [34]. Extending the KIII stability studies to spatially extended systems can help explain KIII dynamics using chaotic itinerancy theory. Remarkably, complex dynamical behavior has been obtained using simple basic oscillatory components in the form of KII sets with interacting excitatory and inhibitory units. KII sets are biologically motivated, and they correspond to micro-columns, which are basic anatomical and functional units of the cortex. The KII set includes 2nd-order dynamics of its nodes and an asymmetric transfer function Q, which is a function of a single parameter q m. In its simplest form, the full KII set is characterized by just 4 weight parameters describing the 4 types of possible interactions among excitatory and inhibitory populations.
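Wolf's method follows a fiducial trajectory and repeatedly replaces its nearest neighbor as the pair separates. As a simpler, self-contained stand-in, the sketch below estimates the largest exponent from a scalar time series by nearest-neighbor divergence in the spirit of Rosenstein et al., tested on the logistic map, whose exponent is known to be ln 2 ≈ 0.693 per iteration. The embedding parameters are illustrative choices.

```python
# Nearest-neighbor divergence estimate of the largest Liapunov exponent from
# a scalar time series: embed, find each point's nearest neighbor outside a
# Theiler window, and fit the slope of the mean log-separation curve.
import numpy as np

def largest_lyapunov(series, dim=2, tau=1, t_max=5, theiler=10):
    n = len(series) - (dim - 1) * tau
    emb = np.array([series[i : i + n] for i in range(0, dim * tau, tau)]).T
    usable = n - t_max
    logs = np.zeros(t_max)
    for i in range(usable):
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - theiler) : i + theiler + 1] = np.inf  # exclude temporal neighbors
        j = int(np.argmin(d))
        for k in range(1, t_max + 1):
            sep = np.linalg.norm(emb[i + k] - emb[j + k])
            logs[k - 1] += np.log(max(sep, 1e-12))
    logs /= usable
    # The slope of the mean log-divergence curve approximates the exponent.
    return np.polyfit(np.arange(1, t_max + 1), logs, 1)[0]

# Logistic map at r = 4: the known exponent is ln 2 ≈ 0.693 per iteration.
x = np.empty(3000); x[0] = 0.3
for i in range(1, 3000):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
lam = largest_lyapunov(x)
```

A positive slope well above zero signals chaos, as for System 1; near-zero values indicate the narrow-band periodic regimes of Systems 2 and 3.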
By changing the values of the 4 weight parameters, very different types of dynamical behavior can be achieved. It is remarkable that the same KII basic building blocks are repeated again and again in various parts of the brain, giving uniformity across the cortical sheet. This uniformity is deemed crucial for the efficient operation of brains and for high-level cognitive functions. KIII models build on the uniformity of KII sets, which can nevertheless produce very complex dynamic behavior if the intra-layer and inter-layer connections are selected properly. The present work is a step toward exploring the richness of behaviors represented by K sets.

6. Conclusions

The background activity manifested in the EEG is of fundamental importance to the theory of brain chaos. The brain's basal state is a high-dimensional attractor with a complex energy landscape. External input constrains the system's dynamics to a low-dimensional attractor that is associated with that particular input. K sets represent a biologically inspired, hierarchical neural network model which is capable of producing behavior mimicking physiological observations. The present work has been focused on studying the properties of coupled excitatory-inhibitory neural populations using K sets. We improved on previous results by deriving generalized stability conditions for intermediate KII sets. Analytical and numerical studies of KII sets indicate the presence of certain parameter regions with unstable equilibrium, while other regions exhibit multiple stable equilibria. It is important to point out the sequence of stability results obtained with models of increasing complexity, from reduced KII, through intermediate KII, to full KII sets, and their relevance to the design and operation of the KIII model. A previous result in the literature gives an analytic expression for the hyperbolic boundary of stability for the reduced KII set.
This result has been generalized in this Letter for the intermediate KII set in a mathematically rigorous way, by deriving a formula to control the stability boundary for extended systems. This analytical result is nontrivial; it is based on the solution of the 8th-order characteristic polynomial of a system of four 2nd-order ODEs. In the case of the full KII set, numerical simulations show that the hyperbolic stability boundary corresponding to the presence of an unstable zero equilibrium largely remains intact for weak inhibitory-inhibitory interactions. For the general case of KII with strong inhibitory-inhibitory connections, the region of instability defined by the hyperbolic stability condition collapses. However, traces of the region corresponding to unstable equilibrium still remain. Finally, for a KIII set, we designed a chaotic switching mechanism which is based on using two KII sets with unstable zero equilibrium and another, bi-stable KII set. The presence of an unstable equilibrium has been used to achieve chaotic oscillations in the model. This KIII design demonstrated oscillations consisting of quasi-periodic switching between stable attractors. In this Letter, an extremely simplified KIII set is used with only one column, i.e., there is just one KII set in each layer. Data from a range of studies of spatially distributed KII sets show the presence of an ever increasing number of equilibria as the number of KII units increases. This is consistent with the phenomenon of attractor crowding described in [13]. Therefore, using layers with more stable


Reducing neuronal networks to discrete dynamics Physica D 237 (2008) 324 338 www.elsevier.com/locate/physd Reducing neuronal networks to discrete dynamics David Terman a,b,, Sungwoo Ahn a, Xueying Wang a, Winfried Just c a Department of Mathematics,

More information

CISC 3250 Systems Neuroscience

CISC 3250 Systems Neuroscience CISC 3250 Systems Neuroscience Systems Neuroscience How the nervous system performs computations How groups of neurons work together to achieve intelligence Professor Daniel Leeds dleeds@fordham.edu JMH

More information

A plane autonomous system is a pair of simultaneous first-order differential equations,

A plane autonomous system is a pair of simultaneous first-order differential equations, Chapter 11 Phase-Plane Techniques 11.1 Plane Autonomous Systems A plane autonomous system is a pair of simultaneous first-order differential equations, ẋ = f(x, y), ẏ = g(x, y). This system has an equilibrium

More information

HSND-2015, IPR. Department of Physics, University of Burdwan, Burdwan, West Bengal.

HSND-2015, IPR. Department of Physics, University of Burdwan, Burdwan, West Bengal. New kind of deaths: Oscillation Death and Chimera Death HSND-2015, IPR Dr. Tanmoy Banerjee Department of Physics, University of Burdwan, Burdwan, West Bengal. Points to be discussed Oscillation suppression

More information

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann (Feed-Forward) Neural Networks 2016-12-06 Dr. Hajira Jabeen, Prof. Jens Lehmann Outline In the previous lectures we have learned about tensors and factorization methods. RESCAL is a bilinear model for

More information

CHALMERS, GÖTEBORGS UNIVERSITET. EXAM for DYNAMICAL SYSTEMS. COURSE CODES: TIF 155, FIM770GU, PhD

CHALMERS, GÖTEBORGS UNIVERSITET. EXAM for DYNAMICAL SYSTEMS. COURSE CODES: TIF 155, FIM770GU, PhD CHALMERS, GÖTEBORGS UNIVERSITET EXAM for DYNAMICAL SYSTEMS COURSE CODES: TIF 155, FIM770GU, PhD Time: Place: Teachers: Allowed material: Not allowed: April 06, 2018, at 14 00 18 00 Johanneberg Kristian

More information

DEVS Simulation of Spiking Neural Networks

DEVS Simulation of Spiking Neural Networks DEVS Simulation of Spiking Neural Networks Rene Mayrhofer, Michael Affenzeller, Herbert Prähofer, Gerhard Höfer, Alexander Fried Institute of Systems Science Systems Theory and Information Technology Johannes

More information

Dynamic Modeling of Brain Activity

Dynamic Modeling of Brain Activity 0a Dynamic Modeling of Brain Activity EIN IIN PC Thomas R. Knösche, Leipzig Generative Models for M/EEG 4a Generative Models for M/EEG states x (e.g. dipole strengths) parameters parameters (source positions,

More information

ABOUT UNIVERSAL BASINS OF ATTRACTION IN HIGH-DIMENSIONAL SYSTEMS

ABOUT UNIVERSAL BASINS OF ATTRACTION IN HIGH-DIMENSIONAL SYSTEMS International Journal of Bifurcation and Chaos, Vol. 23, No. 12 (2013) 1350197 (7 pages) c World Scientific Publishing Company DOI: 10.1142/S0218127413501976 ABOUT UNIVERSAL BASINS OF ATTRACTION IN HIGH-DIMENSIONAL

More information

Dynamic Causal Modelling for EEG/MEG: principles J. Daunizeau

Dynamic Causal Modelling for EEG/MEG: principles J. Daunizeau Dynamic Causal Modelling for EEG/MEG: principles J. Daunizeau Motivation, Brain and Behaviour group, ICM, Paris, France Overview 1 DCM: introduction 2 Dynamical systems theory 3 Neural states dynamics

More information

Lesson 4: Non-fading Memory Nonlinearities

Lesson 4: Non-fading Memory Nonlinearities Lesson 4: Non-fading Memory Nonlinearities Nonlinear Signal Processing SS 2017 Christian Knoll Signal Processing and Speech Communication Laboratory Graz University of Technology June 22, 2017 NLSP SS

More information

EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, Sasidharan Sreedharan

EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, Sasidharan Sreedharan EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, 2012 Sasidharan Sreedharan www.sasidharan.webs.com 3/1/2012 1 Syllabus Artificial Intelligence Systems- Neural Networks, fuzzy logic,

More information

Simplest Chaotic Flows with Involutional Symmetries

Simplest Chaotic Flows with Involutional Symmetries International Journal of Bifurcation and Chaos, Vol. 24, No. 1 (2014) 1450009 (9 pages) c World Scientific Publishing Company DOI: 10.1142/S0218127414500096 Simplest Chaotic Flows with Involutional Symmetries

More information

3.5 Competition Models: Principle of Competitive Exclusion

3.5 Competition Models: Principle of Competitive Exclusion 94 3. Models for Interacting Populations different dimensional parameter changes. For example, doubling the carrying capacity K is exactly equivalent to halving the predator response parameter D. The dimensionless

More information

Clearly the passage of an eigenvalue through to the positive real half plane leads to a qualitative change in the phase portrait, i.e.

Clearly the passage of an eigenvalue through to the positive real half plane leads to a qualitative change in the phase portrait, i.e. Bifurcations We have already seen how the loss of stiffness in a linear oscillator leads to instability. In a practical situation the stiffness may not degrade in a linear fashion, and instability may

More information

STUDY OF SYNCHRONIZED MOTIONS IN A ONE-DIMENSIONAL ARRAY OF COUPLED CHAOTIC CIRCUITS

STUDY OF SYNCHRONIZED MOTIONS IN A ONE-DIMENSIONAL ARRAY OF COUPLED CHAOTIC CIRCUITS International Journal of Bifurcation and Chaos, Vol 9, No 11 (1999) 19 4 c World Scientific Publishing Company STUDY OF SYNCHRONIZED MOTIONS IN A ONE-DIMENSIONAL ARRAY OF COUPLED CHAOTIC CIRCUITS ZBIGNIEW

More information

Announcements: Test4: Wednesday on: week4 material CH5 CH6 & NIA CAPE Evaluations please do them for me!! ask questions...discuss listen learn.

Announcements: Test4: Wednesday on: week4 material CH5 CH6 & NIA CAPE Evaluations please do them for me!! ask questions...discuss listen learn. Announcements: Test4: Wednesday on: week4 material CH5 CH6 & NIA CAPE Evaluations please do them for me!! ask questions...discuss listen learn. The Chemical Senses: Olfaction Mary ET Boyle, Ph.D. Department

More information

Dynamical Systems and Chaos Part I: Theoretical Techniques. Lecture 4: Discrete systems + Chaos. Ilya Potapov Mathematics Department, TUT Room TD325

Dynamical Systems and Chaos Part I: Theoretical Techniques. Lecture 4: Discrete systems + Chaos. Ilya Potapov Mathematics Department, TUT Room TD325 Dynamical Systems and Chaos Part I: Theoretical Techniques Lecture 4: Discrete systems + Chaos Ilya Potapov Mathematics Department, TUT Room TD325 Discrete maps x n+1 = f(x n ) Discrete time steps. x 0

More information

Example of a Blue Sky Catastrophe

Example of a Blue Sky Catastrophe PUB:[SXG.TEMP]TRANS2913EL.PS 16-OCT-2001 11:08:53.21 SXG Page: 99 (1) Amer. Math. Soc. Transl. (2) Vol. 200, 2000 Example of a Blue Sky Catastrophe Nikolaĭ Gavrilov and Andrey Shilnikov To the memory of

More information

Identification of Odors by the Spatiotemporal Dynamics of the Olfactory Bulb. Outline

Identification of Odors by the Spatiotemporal Dynamics of the Olfactory Bulb. Outline Identification of Odors by the Spatiotemporal Dynamics of the Olfactory Bulb Henry Greenside Department of Physics Duke University Outline Why think about olfaction? Crash course on neurobiology. Some

More information

IN THIS turorial paper we exploit the relationship between

IN THIS turorial paper we exploit the relationship between 508 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 3, MAY 1999 Weakly Pulse-Coupled Oscillators, FM Interactions, Synchronization, Oscillatory Associative Memory Eugene M. Izhikevich Abstract We study

More information

A Novel Chaotic Neural Network Architecture

A Novel Chaotic Neural Network Architecture ESANN' proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), - April, D-Facto public., ISBN ---, pp. - A Novel Neural Network Architecture Nigel Crook and Tjeerd olde Scheper

More information

Dr. Harry Erwin, School of Computing and Technology, University of Sunderland, Sunderland, UK

Dr. Harry Erwin, School of Computing and Technology, University of Sunderland, Sunderland, UK Freeman K-set Recommend this on Google Dr. Walter J. Freeman, University of California, Berkeley, California Dr. Harry Erwin, School of Computing and Technology, University of Sunderland, Sunderland, UK

More information

80% of all excitatory synapses - at the dendritic spines.

80% of all excitatory synapses - at the dendritic spines. Dendritic Modelling Dendrites (from Greek dendron, tree ) are the branched projections of a neuron that act to conduct the electrical stimulation received from other cells to and from the cell body, or

More information

A Three-dimensional Physiologically Realistic Model of the Retina

A Three-dimensional Physiologically Realistic Model of the Retina A Three-dimensional Physiologically Realistic Model of the Retina Michael Tadross, Cameron Whitehouse, Melissa Hornstein, Vicky Eng and Evangelia Micheli-Tzanakou Department of Biomedical Engineering 617

More information

Dynamical systems in neuroscience. Pacific Northwest Computational Neuroscience Connection October 1-2, 2010

Dynamical systems in neuroscience. Pacific Northwest Computational Neuroscience Connection October 1-2, 2010 Dynamical systems in neuroscience Pacific Northwest Computational Neuroscience Connection October 1-2, 2010 What do I mean by a dynamical system? Set of state variables Law that governs evolution of state

More information

Solutions of a PT-symmetric Dimer with Constant Gain-loss

Solutions of a PT-symmetric Dimer with Constant Gain-loss Solutions of a PT-symmetric Dimer with Constant Gain-loss G14DIS Mathematics 4th Year Dissertation Spring 2012/2013 School of Mathematical Sciences University of Nottingham John Pickton Supervisor: Dr

More information

Dynamical Constraints on Computing with Spike Timing in the Cortex

Dynamical Constraints on Computing with Spike Timing in the Cortex Appears in Advances in Neural Information Processing Systems, 15 (NIPS 00) Dynamical Constraints on Computing with Spike Timing in the Cortex Arunava Banerjee and Alexandre Pouget Department of Brain and

More information

Exploring a Simple Discrete Model of Neuronal Networks

Exploring a Simple Discrete Model of Neuronal Networks Exploring a Simple Discrete Model of Neuronal Networks Winfried Just Ohio University Joint work with David Terman, Sungwoo Ahn,and Xueying Wang August 6, 2010 An ODE Model of Neuronal Networks by Terman

More information

ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD

ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD WHAT IS A NEURAL NETWORK? The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided

More information

Stochastic resonance in the absence and presence of external signals for a chemical reaction

Stochastic resonance in the absence and presence of external signals for a chemical reaction JOURNAL OF CHEMICAL PHYSICS VOLUME 110, NUMBER 7 15 FEBRUARY 1999 Stochastic resonance in the absence and presence of external signals for a chemical reaction Lingfa Yang, Zhonghuai Hou, and Houwen Xin

More information

Causality and communities in neural networks

Causality and communities in neural networks Causality and communities in neural networks Leonardo Angelini, Daniele Marinazzo, Mario Pellicoro, Sebastiano Stramaglia TIRES-Center for Signal Detection and Processing - Università di Bari, Bari, Italy

More information

Introduction Knot Theory Nonlinear Dynamics Topology in Chaos Open Questions Summary. Topology in Chaos

Introduction Knot Theory Nonlinear Dynamics Topology in Chaos Open Questions Summary. Topology in Chaos Introduction Knot Theory Nonlinear Dynamics Open Questions Summary A tangled tale about knot, link, template, and strange attractor Centre for Chaos & Complex Networks City University of Hong Kong Email:

More information

Strange dynamics of bilinear oscillator close to grazing

Strange dynamics of bilinear oscillator close to grazing Strange dynamics of bilinear oscillator close to grazing Ekaterina Pavlovskaia, James Ing, Soumitro Banerjee and Marian Wiercigroch Centre for Applied Dynamics Research, School of Engineering, King s College,

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction 1.1 What is Phase-Locked Loop? The phase-locked loop (PLL) is an electronic system which has numerous important applications. It consists of three elements forming a feedback loop:

More information

Discrete and Indiscrete Models of Biological Networks

Discrete and Indiscrete Models of Biological Networks Discrete and Indiscrete Models of Biological Networks Winfried Just Ohio University November 17, 2010 Who are we? What are we doing here? Who are we? What are we doing here? A population of interacting

More information

A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term

A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term ETASR - Engineering, Technology & Applied Science Research Vol., o.,, 9-5 9 A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term Fei Yu College of Information Science

More information

Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting

Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting Eugene M. Izhikevich The MIT Press Cambridge, Massachusetts London, England Contents Preface xv 1 Introduction 1 1.1 Neurons

More information

Phase Response Properties and Phase-Locking in Neural Systems with Delayed Negative-Feedback. Carter L. Johnson

Phase Response Properties and Phase-Locking in Neural Systems with Delayed Negative-Feedback. Carter L. Johnson Phase Response Properties and Phase-Locking in Neural Systems with Delayed Negative-Feedback Carter L. Johnson Faculty Mentor: Professor Timothy J. Lewis University of California, Davis Abstract Oscillatory

More information

Neural Networks 1 Synchronization in Spiking Neural Networks

Neural Networks 1 Synchronization in Spiking Neural Networks CS 790R Seminar Modeling & Simulation Neural Networks 1 Synchronization in Spiking Neural Networks René Doursat Department of Computer Science & Engineering University of Nevada, Reno Spring 2006 Synchronization

More information

Overview Organization: Central Nervous System (CNS) Peripheral Nervous System (PNS) innervate Divisions: a. Afferent

Overview Organization: Central Nervous System (CNS) Peripheral Nervous System (PNS) innervate Divisions: a. Afferent Overview Organization: Central Nervous System (CNS) Brain and spinal cord receives and processes information. Peripheral Nervous System (PNS) Nerve cells that link CNS with organs throughout the body.

More information

2D-Volterra-Lotka Modeling For 2 Species

2D-Volterra-Lotka Modeling For 2 Species Majalat Al-Ulum Al-Insaniya wat - Tatbiqiya 2D-Volterra-Lotka Modeling For 2 Species Alhashmi Darah 1 University of Almergeb Department of Mathematics Faculty of Science Zliten Libya. Abstract The purpose

More information

Difference Resonances in a controlled van der Pol-Duffing oscillator involving time. delay

Difference Resonances in a controlled van der Pol-Duffing oscillator involving time. delay Difference Resonances in a controlled van der Pol-Duffing oscillator involving time delay This paper was published in the journal Chaos, Solitions & Fractals, vol.4, no., pp.975-98, Oct 9 J.C. Ji, N. Zhang,

More information

Ângelo Cardoso 27 May, Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico

Ângelo Cardoso 27 May, Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico BIOLOGICALLY INSPIRED COMPUTER MODELS FOR VISUAL RECOGNITION Ângelo Cardoso 27 May, 2010 Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico Index Human Vision Retinal Ganglion Cells Simple

More information

Analysis of an Attractor Neural Network s Response to Conflicting External Inputs

Analysis of an Attractor Neural Network s Response to Conflicting External Inputs Journal of Mathematical Neuroscience (2018) 8:6 https://doi.org/10.1186/s13408-018-0061-0 RESEARCH OpenAccess Analysis of an Attractor Neural Network s Response to Conflicting External Inputs Kathryn Hedrick

More information

Modeling of Retinal Ganglion Cell Responses to Electrical Stimulation with Multiple Electrodes L.A. Hruby Salk Institute for Biological Studies

Modeling of Retinal Ganglion Cell Responses to Electrical Stimulation with Multiple Electrodes L.A. Hruby Salk Institute for Biological Studies Modeling of Retinal Ganglion Cell Responses to Electrical Stimulation with Multiple Electrodes L.A. Hruby Salk Institute for Biological Studies Introduction Since work on epiretinal electrical stimulation

More information

Layer 3 patchy recurrent excitatory connections may determine the spatial organization of sustained activity in the primate prefrontal cortex

Layer 3 patchy recurrent excitatory connections may determine the spatial organization of sustained activity in the primate prefrontal cortex Neurocomputing 32}33 (2000) 391}400 Layer 3 patchy recurrent excitatory connections may determine the spatial organization of sustained activity in the primate prefrontal cortex Boris S. Gutkin *, G. Bard

More information

MATH 415, WEEKS 7 & 8: Conservative and Hamiltonian Systems, Non-linear Pendulum

MATH 415, WEEKS 7 & 8: Conservative and Hamiltonian Systems, Non-linear Pendulum MATH 415, WEEKS 7 & 8: Conservative and Hamiltonian Systems, Non-linear Pendulum Reconsider the following example from last week: dx dt = x y dy dt = x2 y. We were able to determine many qualitative features

More information

Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests

Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests J. Benda, M. Bethge, M. Hennig, K. Pawelzik & A.V.M. Herz February, 7 Abstract Spike-frequency adaptation is a common feature of

More information

Abstract: Complex responses observed in an experimental, nonlinear, moored structural

Abstract: Complex responses observed in an experimental, nonlinear, moored structural AN INDEPENDENT-FLOW-FIELD MODEL FOR A SDOF NONLINEAR STRUCTURAL SYSTEM, PART II: ANALYSIS OF COMPLEX RESPONSES Huan Lin e-mail: linh@engr.orst.edu Solomon C.S. Yim e-mail: solomon.yim@oregonstate.edu Ocean

More information

Lecture 4: Importance of Noise and Fluctuations

Lecture 4: Importance of Noise and Fluctuations Lecture 4: Importance of Noise and Fluctuations Jordi Soriano Fradera Dept. Física de la Matèria Condensada, Universitat de Barcelona UB Institute of Complex Systems September 2016 1. Noise in biological

More information

Research Article Hidden Periodicity and Chaos in the Sequence of Prime Numbers

Research Article Hidden Periodicity and Chaos in the Sequence of Prime Numbers Advances in Mathematical Physics Volume 2, Article ID 5978, 8 pages doi:.55/2/5978 Research Article Hidden Periodicity and Chaos in the Sequence of Prime Numbers A. Bershadskii Physics Department, ICAR,

More information

Nonlinear Dynamics of Neural Firing

Nonlinear Dynamics of Neural Firing Nonlinear Dynamics of Neural Firing BENG/BGGN 260 Neurodynamics University of California, San Diego Week 3 BENG/BGGN 260 Neurodynamics (UCSD) Nonlinear Dynamics of Neural Firing Week 3 1 / 16 Reading Materials

More information

Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations

Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations Haili Wang, Yuanhua Qiao, Lijuan Duan, Faming Fang, Jun Miao 3, and Bingpeng Ma 3 College of Applied Science, Beijing University

More information

Fundamentals of Dynamical Systems / Discrete-Time Models. Dr. Dylan McNamara people.uncw.edu/ mcnamarad

Fundamentals of Dynamical Systems / Discrete-Time Models. Dr. Dylan McNamara people.uncw.edu/ mcnamarad Fundamentals of Dynamical Systems / Discrete-Time Models Dr. Dylan McNamara people.uncw.edu/ mcnamarad Dynamical systems theory Considers how systems autonomously change along time Ranges from Newtonian

More information