Evolution and Analysis of Model CPGs for Walking: II. General Principles and Individual Variability


Journal of Computational Neuroscience 7, (1999). © 1999 Kluwer Academic Publishers. Manufactured in The Netherlands.

RANDALL D. BEER
Department of Computer Engineering and Science and Department of Biology, Case Western Reserve University, Cleveland, OH
beer@alpha.ces.cwru.edu

HILLEL J. CHIEL
Department of Biology and Department of Neuroscience, Case Western Reserve University, Cleveland, OH
hjc@po.cwru.edu

JOHN C. GALLAGHER
Department of Computer Science, SUNY Institute of Technology at Utica/Rome, Utica, NY
johng@fang.cs.sunyit.edu

Received July 31, 1998; Revised February 19, 1999; Accepted March 5, 1999

Action Editor: Thelma Williams

Abstract. Are there general principles for pattern generation? We examined this question by analyzing the operation of large populations of evolved model central pattern generators (CPGs) for walking. Three populations of model CPGs were evolved, containing three, four, or five neurons. We identified six general principles. First, locomotion performance increased with the number of interneurons. Second, the top 10 three-, four-, and five-neuron CPGs could be decomposed into dynamical modules, an abstract description developed in a companion article. Third, these dynamical modules were multistable: they could be switched between multiple stable output configurations. Fourth, the rhythmic pattern generated by a CPG could be understood as a closed chain of successive destabilizations of one dynamical module by another. A combinatorial analysis enumerated the possible dynamical modular structures. Fifth, one-dimensional modules were frequently observed and, in some cases, could be assigned specific functional roles. Finally, dynamic dynamical modules, in which the modular structure itself changed over one cycle, were frequently observed.
The existence of these general principles despite significant variability in both patterns of connectivity and neural parameters was explained by degeneracy in the maps from neural parameters to neural dynamics to behavior to fitness. An analysis of the biomechanical properties of the model body was essential for relating neural activity to behavior. Our studies of evolved model circuits suggest that, in the absence of other constraints, there is no compelling reason to expect neural circuits to be functionally decomposable as the number of interneurons increases. Analyzing idealized model pattern generators may be an effective methodology for gaining insights into the operation of biological pattern generators.

Keywords: central pattern generators, dynamical modules, computational neuroethology, walking, biomechanics, evolution, dynamical systems theory

Introduction

Are there general principles for pattern generation? Over the last thirty years, researchers have analyzed a very wide variety of pattern generators in many different species of animals (Marder and Calabrese, 1996). Pattern generators have been studied for swimming in Tritonia, Clione, frog, leech, and lamprey (Getting and Dekin, 1985; Arshavsky et al., 1993; Friesen, 1989; Grillner et al., 1995), for crawling and heartbeat in leech (Kristan et al., in press; Calabrese et al., 1995), for feeding in Lymnaea, Helisoma, and Aplysia (Elliot and Benjamin, 1985a, 1985b; Kater, 1974; Hurwitz et al., 1996; Hurwitz and Susswein, 1996; Hurwitz et al., 1997), for digestion in crustacea (Harris-Warrick et al., 1992), for scratching in turtles (Robertson et al., 1985), and for locomotion in cockroaches and cats (Pearson, 1993), among others. These studies were undertaken with the expectation that they would lead to an understanding of general principles for the operation of pattern generators (Getting, 1989). However, what has emerged most clearly from these studies is the remarkable variability of the neurons and neural architectures that subserve pattern generation. One possible reason that so much variability has been observed is that researchers have used a wide variety of different species, each of which has a different periphery and needs to function in a different environmental niche. These variations in biomechanics and environment create sources of variability that could obscure general principles of organization of pattern generators. For this reason, some investigators have chosen to focus on variations within the nervous system, rather than in peripheral biomechanics and environment, by looking at the neural basis of a behavior performed by similar species.
For example, Wright and his colleagues have studied homologous circuitry for the gill withdrawal response in Aplysia and several closely related species and demonstrated that several different neural mechanisms can be combined to generate behaviorally similar sensitization responses (Wright et al., 1996). The attempt to uncover general principles by studying homologous circuitry subserving similar behavior in similar species must still overcome the technical difficulty of analyzing large numbers of neural circuits in extensive detail. For example, insights into general properties of the stomatogastric nervous system in decapod crustaceans have emerged only from comparative studies made by a large number of investigators working over many years (Katz and Tazaki, 1992). Even within a given species, another potential source of variability is the need to average partial measurements of neuronal and synaptic properties in many animals to construct a composite understanding of circuit function, rather than completely characterizing the circuitry of a single individual. As Gardner has demonstrated by analyzing the connections between the identified neurons B4/B5 in the buccal ganglion of Aplysia and their synaptic followers, there may be significant variability from animal to animal in the strength of a synapse or the input resistance of a follower cell, but the properties of pre- and postsynaptic neurons in any given individual are matched to guarantee that the overall output is not very different (Gardner, 1993). Model central pattern generators (CPGs) described in a companion article (Chiel et al., 1999) can be used to explore the relationship between general principles and individual variability.
As demonstrated in that article, the use of idealized models offers several advantages, including a complete set of data for all parameters, the ability to manipulate all of the neurons and their parameters, and the ability to mathematically quantify the properties of their components. A major additional advantage of these model CPGs, which will be exploited in this article, is the ability to generate and analyze large populations of model circuits. A further advantage is that it is possible to fully characterize the biomechanical properties of the body they control, which is extremely difficult to do in a biological system. In the companion article, these advantages made it possible to develop abstract descriptions of a three-neuron CPG's function in terms of dynamical modules. In turn, the concept of dynamical modules made it possible to quantitatively describe constraints on the circuit's neural architecture, to account for the timing of its output patterns, and to predict the effects of parameter changes. We also found that coordinated changes in the parameters of a dynamical module can preserve its dynamical behavior. How general and broadly applicable is the notion of a dynamical module? In this article, we applied the concept of dynamical modules to the analysis of large populations of three-, four-, and five-neuron CPGs created using an evolutionary process. First, we examined the extent to which the abstract description developed for one three-neuron CPG in the companion article generalizes to the top 10 three-neuron CPGs despite their significant variability (Fig. 1 in companion article). Second, we examined the applicability of this approach

to four- and five-neuron CPGs containing interneurons, in which not all pattern-generating elements are directly connected to the periphery. Although motor neurons can serve as pattern-generating elements (e.g., in the stomatogastric nervous system (Harris-Warrick et al., 1992) and in the feeding pattern generator in Aplysia californica (Hurwitz et al., 1996)), pattern-generating elements are more typically interneurons. We report that the concept of dynamical modules generalizes extremely well to a much larger set of three-neuron CPGs and can be used for the analysis of CPGs with interneurons, although identifying discrete modules becomes more difficult as the number of interneurons increases. The mechanical properties of the body and the generic dynamical properties of the model neurons are crucial for the existence of dynamical modules. Furthermore, much of the variability that is observed in the structure of the different CPGs can be accounted for by analyzing the degeneracies in the dynamics of the neural circuitry, the biomechanics of the periphery, and the evaluation of fitness.

Figure 1. Motion of the body over one optimal walking cycle. The outline of the body is shown in gray, while the position of the leg is indicated by a black line. A foot that is down is indicated by a black square. As the body moves forward, the changing position of the center of mass of the body is indicated by a dashed line. Each phase of the walking cycle is labeled (see text for further details).
Methods

Body Model

The experiments described in this and the companion article utilize a simple insectlike body possessing a single leg with a nominal length of 15 (Fig. 1) (Beer and Gallagher, 1992). The leg is controlled by three effectors: one specifies the state of the foot, and the other two specify clockwise and counterclockwise torques about the leg's single joint with the body. The intent of the opposing torques was to model the antagonistic muscles that are common in animal limbs. When the foot is up, these clockwise and counterclockwise torques have a maximum value of 1/40, and they sum to produce a resultant torque τ that causes the leg to swing relative to the body according to the equation θ̈ = τ (here and throughout the article, the notation ẋ represents the time derivative of x (i.e., dx/dt) and ẍ represents the second derivative of x with respect to time (i.e., d²x/dt²)). When the foot is down, these clockwise and counterclockwise torques produce translational forces on the body (maximum value 1/20), which sum to produce a resultant force F that causes the body to translate according to the equation ẍ = F. The angular motion of the leg is limited to the range ±π/6. A supporting leg may stretch outside of this range, but it can apply forces only within these limits. A stretched leg immediately snaps back to its mechanical limit and nominal length as soon as the foot is lifted. The body loses support if the foot is lifted or if the vertical distance between the foot and the center of mass of the body exceeds the stability limit of 20. Whenever the body loses support, its translational velocity falls immediately to 0.

Neural Model

The dynamical equations of continuous-time recurrent neural networks (CTRNNs), as well as several analytical results concerning their dynamics (e.g., borders of bistability and transition durations), are fully described in the companion article. Model CPGs are fully interconnected CTRNNs containing three motor neurons (FT: Foot, BS: Backward Swing, FS: Forward Swing) and zero, one, or two interneurons. The foot is down when the output of FT is above 0.5 and up otherwise. When the foot is up, the outputs of BS and FS scale the maximum leg torques. When the foot is down, these outputs scale the maximum leg force applied to the body.

Evolutionary Algorithm

All neural circuits described in this article were produced using a simple model of evolution known as a genetic algorithm (Goldberg, 1989; Mitchell, 1996). The parameters to be searched are encoded in some fashion and concatenated to form a genetic string. A population of such strings is maintained. Initially, the strings in this population are randomly generated. In each generation, the fitness of each individual in the population is evaluated. A new generation of individuals is then produced by applying a set of genetic operators to selected

individuals from the previous generation. Individuals are selected for reproduction with a probability proportional to their fitness. The standard genetic operators are mutation (in which a portion of a genetic string is randomly modified) and crossover (in which portions of two genetic strings are exchanged). Once a new population has been constructed, the fitness of each new individual is evaluated, and the entire process repeats. While genetic algorithms are obviously highly simplified compared to biological evolution, they do capture the two key features of Darwinian evolution, namely heritable variation and differential reproduction (natural selection). All experiments described in this article were performed using a public domain genetic algorithm software package known as GAucsd (version 1.4, ftp://ftp.aic.nrl.navy.mil/pub/galist/src/gaucsd14.sh.z).

In an N-neuron circuit, there are N time constants, N biases, and N² connection weights, for a total of N² + 2N real parameters to be searched. Each parameter was encoded as a four-bit binary number, with time constants in the range [0.5, 10] and biases and weights in the range ±16. GAucsd uses a dynamic parameter encoding scheme in which parameters can be remapped to smaller ranges as the search progresses to increase precision. Parameters were encoded in the following order: time constants, biases, and connection weights. Each individual was evaluated by decoding its genetic string into a CTRNN and then numerically integrating the coupled body and neural circuit for 1,000 time units using the forward Euler integration method with a step size of 0.1. The fitness was then calculated as (100 − distance covered)², where 100 is larger than the maximum possible distance that can be covered in 1,000 time units. Neural states were initialized to values uniformly distributed over the range ±0.1, while the leg angle was initialized to a value uniformly distributed over the range ±π/6.
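The shape of this evaluation loop can be sketched as follows. This is a minimal illustration, not the paper's code: the neural update uses the standard CTRNN equations from the companion article, the score is a (100 − distance)² quantity of the kind minimized by GAucsd-style genetic algorithms, and `body_step` is a hypothetical stand-in for the leg and body mechanics described under Methods.

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def evaluate(taus, biases, weights, body_step, dt=0.1, steps=10000):
    """Integrate an N-neuron CTRNN by forward Euler for steps*dt time
    units, feed its outputs to a body model each step, and score the
    distance covered.  body_step(outputs) -> distance increment is a
    stand-in for the one-legged body mechanics."""
    n = len(taus)
    y = [random.uniform(-0.1, 0.1) for _ in range(n)]  # neural state init
    distance = 0.0
    for _ in range(steps):
        out = [sigmoid(y[j] + biases[j]) for j in range(n)]
        distance += body_step(out)
        # standard CTRNN equation: tau_i * dy_i/dt = -y_i + sum_j w_ji * out_j
        y = [y[i] + (dt / taus[i]) * (-y[i] + sum(weights[j][i] * out[j]
                                                  for j in range(n)))
             for i in range(n)]
    return (100.0 - distance) ** 2  # GAucsd-style score to be minimized
```

With a stub `body_step` that returns no displacement, the distance is 0 and the score is simply 100² = 10,000; a `body_step` implementing the one-legged body would reproduce the evaluation used during evolution.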
GAucsd parameters were set as follows for all experiments: Total Trials = (220 generations); Population Size = 500; Crossover Rate = 0.6; Mutation Rate = ; Generation Gap = 1; Scaling Window = 1; Structures Saved = 5; Max Gens w/o Eval = 2; Options = Aclue; Maximum Bias = 0.99; Max Convergence = 160; Conv Threshold = 0.99; DPE Time constant = 50; Sigma Scaling = 2.

Results

Body Biomechanics

The biomechanical properties of the model body are critical for understanding many features of the evolved CPGs. One of the advantages of such a simple body model is that it is straightforward to define an optimal walking controller for this body and to analyze its properties. The motion of the body during one cycle of this optimal pattern generator is shown in Fig. 1. The cycle begins when the foot goes down at the forward mechanical limit of the leg, beginning a stance phase. During this power phase of stance, the leg swings backward with maximum force, propelling the body forward until the leg reaches the rear mechanical limit of its motion. Since the leg cannot apply a force to the body beyond this point, the outputs of the forward and backward swing effectors become irrelevant, and the body simply coasts until the vertical distance between the foot and the center of mass of the body reaches the stability limit. Because the body loses support beyond this point, the foot lifts, causing the leg to snap back to its rear mechanical limit and beginning a swing phase. The leg then swings forward with maximum torque until it reaches its forward mechanical limit and the cycle repeats. The properties of this optimal pattern generator are analyzed in detail in Appendix A. Some of the key predictions of this analysis are summarized numerically in Table 1. The biomechanical properties of the body give rise to a biomechanical degeneracy because different motor patterns can give rise to the same motion. This degeneracy arises for three reasons.
First, the only characteristic of the output of the foot motor neuron that affects behavior is whether it is above or below the threshold of 0.5. Second, during the stance coast phase, the outputs

Table 1. Average behavioral performance of the top 10 evolved model CPGs as compared to optimal (mean ± s.d.).

                     Stance duration   Swing duration   Period   Average velocity
Three-Neuron CPGs          ±                 ±             ±            ±
Four-Neuron CPGs           ±                 ±             ±            ±
Five-Neuron CPGs           ±                 ±             ±            ±
Optimal CPG

of the BS and FS neurons are irrelevant to behavior. Finally, since interneurons have no direct access to the body, their outputs do not directly affect behavior.

Performance Statistics

To identify general principles of operation of model pattern generators, a large number of three-neuron (N = 114), four-neuron (N = 101), and five-neuron (N = 110) single-leg CPGs were evolved. Histograms of walking performance (i.e., average velocity) for each of these three sets of experiments are shown in Fig. 2. These three populations have common features but also exhibit distinctive trends. First, all three histograms exhibit a maximum average velocity that falls just below the average velocity of the optimal controller (dashed line). Second, in all three sets of experiments, the performance histograms are strongly skewed, peaking at higher average velocities. This feature is easily explained by the fitness measure used during evolution, since maximizing the forward distance traveled in a fixed amount of time is equivalent to maximizing average forward velocity. Note that the leg can be moved in many different ways and still generate the same average velocity, leading to a fitness degeneracy in the performance evaluation. Third, both the skew and the peakedness increase with the number of neurons. The histograms of the CPGs with the largest numbers of neurons exhibit the most power at higher average velocities (skewness is 0.43, 1.26, and 1.82 and kurtosis, i.e., peakedness, is 1.60, 3.55, and 5.58 for the three-, four-, and five-neuron CPGs, respectively). This trend suggests that CPGs with larger numbers of neurons are better able to utilize the body for efficient locomotion. How statistically significant is this trend toward higher average velocities with larger numbers of neurons?
We focused on those CPGs whose average velocity exceeded 0.5 (40% of three-neuron CPGs, 66% of four-neuron CPGs, and 75% of five-neuron CPGs). The average performances of these subpopulations of the three-, four-, and five-neuron CPGs are ± 0.003, ± 0.003, ± (mean ± s.d.). The average performance of this subpopulation of five-neuron CPGs is significantly higher than that of the comparable four-neuron CPGs (p < 0.04), which in turn is significantly higher than that of the comparable three-neuron CPGs (p < 0.01). What general trends are observed in the performance of the best CPGs? To answer this question, we focused on the top 10 three-neuron CPGs, the top 10 four-neuron CPGs, and the top 10 five-neuron CPGs. The overall performance of the five-neuron CPGs was better than that of the four-neuron CPGs, although the difference was not significant (Table 1). The average performance of the four-neuron CPGs was significantly better than that of the three-neuron CPGs (p < 0.001). The very best three-, four-, and five-neuron CPGs achieve a performance that is 93.78%, 99.13%, and 98.82% of optimal, respectively.

Figure 2. Performance histograms for (A) three-neuron, (B) four-neuron, and (C) five-neuron CPGs. Relative frequency of binned average velocity as a percentage of total trials is plotted. The maximum theoretically possible average velocity is indicated with a dashed line.

Behavior

Are there consistent patterns of locomotion in the best evolved CPGs? The motion of the body can be characterized by plotting the leg angle and angular velocity. The body state trajectories for the top 10 three-, four-, and five-neuron CPGs are shown in Figs. 3A1, 3A2, and 3A3, respectively.

Figure 3. (A) Behavioral and (B) neural trajectories of the top 10 three-neuron, four-neuron, and five-neuron CPGs over one cycle. The cycle begins when the leg passes through an angle of 0 (i.e., perpendicular to the long axis of the body) during stance. Each of the major phases of the cycle is labeled in (1). For reference, the motor outputs and body state trajectories of the optimal pattern generator over one cycle are shown in gray. Abbreviations: θ is leg angle; θ̇ is leg angular velocity; FT is foot motor neuron output; BS is backward swing motor neuron output; FS is forward swing motor neuron output; INT, INT1, INT2 are interneuron outputs.

The behavioral plots of all 30 CPGs exhibit a similar shape, which closely approximates the optimal body state trajectory (Fig. 3A, thick gray line). Despite this consistency in shape, there is some variability. First, there are two different options for the end of stance. In most cases, the leg is lifted before it reaches the stability limit and immediately snaps back to its mechanical limit (i.e., the diagonal line beneath the Foot Up label). In a few cases, however, the leg reaches the stability limit, the velocity goes to zero, and then the leg is lifted and snaps back to its mechanical limit (i.e., the vertical and horizontal lines below the Foot Up label). Second, there is some variability in the angle at which the foot is put down at the onset of the stance phase. The largest variability in the behavioral plots is in the peak angular velocity reached by the leg during swing. The peak velocity during swing of all of the evolved CPGs is lower than that of the optimal controller, but that of the three-neuron CPGs is especially low. This swing deficit is primarily responsible for the longer swing phases, longer periods, and lower performances of the three-neuron CPGs. The average peak angular velocity of the three-neuron CPGs (0.140 ± 0.013, mean ± s.d.) is significantly lower than that of the four-neuron CPGs (0.180 ± 0.018, p < 0.001), which in turn is slightly lower than that of the five-neuron CPGs (0.183 ± 0.014).

Motor Patterns

How do the motor patterns generated by the evolved CPGs compare to that of the optimal controller? The neural outputs of the top 10 three-, four-, and five-neuron CPGs are shown in Fig. 3B. Overall, the motor outputs of all of the CPGs (top three traces in each plot) conform reasonably well to the optimal motor pattern (shown in gray).
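The comparisons that follow rest on the times at which each neuron's output crosses the 0.5 threshold. A minimal sketch of how such transition times can be extracted from a sampled output trace (the function and its linear-interpolation scheme are our illustration, not the paper's method):

```python
def transition_times(trace, dt=0.1, threshold=0.5):
    """Return (time, direction) pairs for each crossing of `threshold`
    by a sampled output trace; direction is +1 for a neuron turning on
    (upward crossing) and -1 for turning off.  Linear interpolation
    locates the crossing between samples."""
    events = []
    for i in range(1, len(trace)):
        lo, hi = trace[i - 1], trace[i]
        if (lo < threshold) != (hi < threshold):
            frac = (threshold - lo) / (hi - lo)  # position within the step
            t = (i - 1 + frac) * dt
            events.append((t, 1 if hi > lo else -1))
    return events
```

Applied to each motor neuron trace of each evolved CPG, such a function yields the on/off times whose means and standard deviations are compared against the optimal pattern in Fig. 4.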
Figure 4. Variability of neuron transition times. A: The time (mean ± s.d.) at which the output of each neuron of the three-neuron (filled box), four-neuron (diamond), and five-neuron (point) CPGs crosses 0.5. The optimal motor pattern is shown in gray. B: The times at which the outputs of the BS and FS neurons in three-neuron CPGs cross 0.5, divided into BS-switch (filled box) and FS-switch (unfilled box) subsets. Note the reduction in variability as compared to A.

The evolved motor patterns are tightly clustered where the optimal motor pattern is highly constrained, and they are more variable where the optimal motor pattern is less constrained (Fig. 4A). For example, during the coast phase of stance, when neither BS nor FS has any direct effect on the motor output, there is considerable dispersion in the motor neuron activity patterns. In contrast, at the swing-to-stance transition, when any deviation from the optimal pattern has consequences for performance, the transitions of the activity patterns are more clustered. Note that, as observed earlier, both the duration of swing and the overall duration of the pattern are considerably longer in the three-neuron CPGs than in the four- and five-neuron CPGs. In addition, there appears to be a general trend for the transitions of the four- and five-neuron CPGs to cluster more tightly around the optimal values than those of the three-neuron CPGs. A closer examination of the mean values of the transition times reveals two interesting discrepancies from the optimal pattern. First, while the time at which the FT neuron turns off has converged on the optimal value in the four- and five-neuron CPGs, the time at which the FT neuron turns on appears to have converged at a value slightly later than optimal. Second, while the time at which the BS neuron turns on has converged on a near-optimal value, the time at which the FS neuron turns off has converged on a value that is slightly earlier than optimal.
These deviations are due to a subtle property of the body. If the foot goes down while the leg is still swinging forward, it may push the body backward and stretch the leg beyond its forward mechanical limit. Since the leg cannot exert forces on the body once past

its mechanical limit, the body can become frozen in this configuration. To preclude this possibility, a CPG must ensure that the leg is beginning to move backward before the foot goes down. In turn, this implies that BS's output must exceed FS's output before the foot goes down, which explains the deviation from the optimal controller that is observed. This constraint, as well as the membrane time constants of the model neurons, may account for the inability of the evolved CPGs to achieve the optimal leg velocity during swing (Fig. 3A).

Analysis of Three-Neuron CPGs

In the companion article, we used the concept of a dynamical module to analyze the best evolved three-neuron CPG. A dynamical module is a collection of one or more neurons that makes a transition between one quasistable output configuration and another while the outputs of the remaining neurons are effectively constant. In that article, we suggested that this approach might be more generally applicable to other three-neuron CPGs. In this article, we test this hypothesis. Specifically, we examine the modular structure of all top 10 three-neuron CPGs and use this structure to account for the parameter variability observed in these circuits.

FS-Switch versus BS-Switch CPGs. In the companion article, we decomposed the best three-neuron CPG into two dynamical modules: a BS module and an FS/FT module (Fig. 4 in the companion article). Both modules were bistable, and each acted to switch the other between two different output configurations. The motor pattern could be understood as a closed chain of successive destabilizations of each module by the other. When the BS module turned off, it switched the state of the FS/FT module into a swing configuration. In turn, this new configuration of the FS/FT module switched the BS module on. In this state, the BS module switched the FS/FT module into a stance configuration.
This switch once again destabilized BS, causing it to turn back off, and the cycle repeated. Another important feature of this CPG was that BS's steady-state input/output (SSIO) curve was strongly folded, and a slow passage near the left fold was responsible for much of the duration of the stance phase. We term CPGs that exhibit this modular architecture BS-switch CPGs. How common is this dynamical structure within the top 10 three-neuron CPGs? Six of these CPGs, including the best, are BS-switch CPGs. The other four CPGs are FS-switch CPGs, in which FS rather than BS has a strongly folded SSIO curve and a near-fold transition in FS is responsible for much of the duration of the stance phase. Thus, all of the top 10 three-neuron CPGs use a very similar dynamical modular structure. Interestingly, FT is never used for this role, probably because the output of FT is at least partially constrained at all times. In contrast, because the outputs of BS and FS are completely unconstrained during the stance coast phase, a slow transition during this time will have minimal impact on the body's performance. The operation of the best FS-switch three-neuron CPG is illustrated in Fig. 5. The motor output pattern (Fig. 5A) shows that FS slowly turns on during stance. This contrasts with a BS-switch CPG, in which BS slowly turns off during stance (see Fig. 4C in the companion article). Also note that BS and FT change their outputs together. The transition structure of the circuit is seen more clearly in a plot of transition rates: FS turns on, BS and FT turn off together, FS turns off, and BS and FT turn on together (Fig. 5B). Note that each of these transitions occurs essentially in isolation. The neural architecture of the circuit provides a qualitative explanation for its operation (Fig. 5C). In the absence of synaptic input, FS's intrinsic bias will turn it off. When FS is off (1), both BS and FT are on and mutually excite one another.
Although BS inhibits FS, the excitation from FT to FS is much stronger, which will tend to turn FS on. Once FS turns on (2), it very strongly inhibits both BS and FT, turning them off. However, once they have turned off (3), FS's intrinsic bias will cause it to turn off. After FS's inhibition has been removed from BS and FT (4), their intrinsic biases will tend to turn them both on, and the cycle repeats. An analysis of the dynamical modular structure of this circuit provides both qualitative and quantitative insight into its operation. Both the transition rate plot and the architecture diagrams indicate that the circuit is composed of two dynamical modules: a one-dimensional module consisting of FS (Fig. 5D) and a two-dimensional module consisting of BS and FT (Fig. 5E). Qualitatively, the operation of the circuit can be understood as a reciprocal interaction of these two modules. Each module is bistable and acts to switch the other module between its two stable output configurations. Specifically, when the FS module turns on (Fig. 5D, 1 to 2), it switches the state of the BS/FT module into a swing configuration (Fig. 5E, 2 to 3). Because FS passes near its right-hand fold, it turns on slowly. The new configuration of the BS/FT module

Figure 5. The best FS-switch three-neuron CPG. A: The motor pattern, with the four quasistable states of this CPG labeled with dashed lines and numbered 1 to 4. For reference, the optimal motor pattern is shown in gray. B: The time derivatives of the neuron outputs shown in Part A. Here an upward peak corresponds to a neuron turning on and a downward peak corresponds to a neuron turning off. C: Circuit architecture and circuit-based analysis. Excitatory connections are indicated by short bars, and inhibitory connections are indicated by filled circles. The thickness of a neuron's border represents its output state, with a thin border indicating a neuron is off and a thick border indicating a neuron is on. Neurons that are about to change state are filled with gray. An asterisk above a neuron's name indicates that it is bistable. The sign below a neuron's name indicates whether it is intrinsically on (+) or off (−) in the absence of synaptic input. D: Phase plots of total synaptic input versus module output over one walking cycle for the FS module. Black dots indicate the input/output state at time intervals of 0.1, so that the space between dots indicates the speed with which this state is changing. For reference, the steady-state input/output curve is shown in gray, with stable regions denoted by a solid line and unstable regions denoted by a dashed line. E: Phase plots of synaptic input versus module output and steady-state input/output curve of the BS/FT module, with the same labeling conventions as in D.

then switches the FS module off (Fig. 5D, 3 to 4). Finally, the new state of the FS module switches the BS/FT module into a stance configuration (Fig. 5E, 4 to 1), and the cycle repeats. Quantitatively, a dynamical analysis of these modules allows us to understand the constraints on circuit architecture, the duration and timing of the transitions of each module, the sensitivity and robustness of the circuit to parameter changes, and the effects of coordinated changes. Such analyses were demonstrated in the companion article for the best BS-switch CPG.

The BS-switch or FS-switch modular structure of the top 10 three-neuron CPGs makes it possible to explain the swing deficit that was noted earlier in these CPGs (Fig. 3A1). In BS-switch CPGs, BS goes on early in swing, prematurely slowing the forward motion of the leg. Similarly, in FS-switch CPGs, FS goes off early in swing, which also prematurely slows the forward motion of the leg. This swing deficit occurs because, while the optimal CPG can make a swing-to-stance transition instantaneously, this transition takes time in the evolved CPGs due to their membrane time constants. No such deficit occurs at the stance-to-swing transition because this transition occurs during the period of stance coast, where the leg has stretched outside of its mechanical limit and BS and FS have no effect. In fact, if the BS and FS transition times for the top 10 three-neuron CPGs are divided into BS-switch and FS-switch subsets, the variability of their transition times is greatly reduced (Fig. 4B). Furthermore, it is clear that the transition of one neuron precedes that of the others at the swing/stance border. The early transition of one neuron appears to be necessary to set up the transition of the other two neurons.

Accounting for Parameter Variability. There is considerable variability in both the circuit architectures (Fig. 1 in the companion article) and the neural parameters (Fig.
6A) of the top 10 three-neuron CPGs, even though their performance varies by less than 2%. The BS-switch/FS-switch distinction, along with the abstract description and supporting theory developed in the companion article, allows us to account for much of the observed variability, at least in the one-dimensional modules. This variability can be accounted for in three steps. The first step is to separate the ten CPGs into the BS-switch and FS-switch subsets. If the parameters of all 10 CPGs are averaged, the resulting parameter values specify a CPG that is unable to oscillate. However, if one averages separately the parameter values of the BS-switch or the FS-switch subpopulations identified above, the resulting average CPGs are not only capable of oscillation, but their locomotion performance is not far from that of the best three-neuron CPG. The performance of the BS-switch average CPG is 92.8% of optimal, and the performance of the FS-switch average CPG is 92.5% of optimal. The observed parameter variability can be greatly reduced by considering the intrinsic parameters (w, τ, θ) of the BS-switch and FS-switch CPGs separately. In the BS-switch subset, BS is the one-dimensional module, its intrinsic parameters are w_BS, τ_BS, and θ_BS, and its two synaptic inputs are w_FT→BS and w_FS→BS. For this subset, averaged values of w_BS and τ_BS show much less variability (Fig. 6B1). The average w_BS value (15.3) is near the maximum value possible (16), which creates a strong fold in the SSIO curve of the neuron, a critical feature for the near-fold transition that is primarily responsible for the duration of stance. The average time constant τ_BS (8.6) tends toward the maximum possible value (10), which also contributes to maintaining the duration of stance. The remaining parameters still show considerable variability.
In the FS-switch subset, FS is the one-dimensional module, its intrinsic parameters are w_FS, τ_FS, and θ_FS, and its two synaptic inputs are w_FT→FS and w_BS→FS. For this subset, averaged values of w_FS, τ_FS, and θ_FS show much less variability (Fig. 6B2). Again, the self-connection is large, as is the time constant. Interestingly, the variability of θ_FS is also very small (see below). The two synaptic inputs still show considerable variability.

The second step in explaining the observed parameter variability is to take into account an example of the neural degeneracy of our model neurons: only the net input matters to their behavior, not the individual synaptic weights and biases (see Eq. (2) in the companion article). In a BS-switch CPG, the off-to-on transition of BS is governed by the sum w_FS→BS + θ_BS, while the on-to-off transition is governed by w_FT→BS + θ_BS (see Fig. 4C in the companion article). As can be clearly seen in Fig. 6C1, these sums show considerably less variability than do the individual parameter values θ_BS, w_FS→BS, and w_FT→BS. Likewise, in an FS-switch CPG, the off-to-on transition is governed by w_FT→FS + w_BS→FS + θ_FS and the on-to-off transition is governed by θ_FS alone, since both BS and FT are off at this point in the walking pattern (Fig. 5D). These quantities are also considerably less variable than

Figure 6. Accounting for parameter variability among the top 10 three-neuron CPGs. A: Means and standard deviations of all neural parameters. Time constants are indicated in gray because their allowable range [0.5, 10] is smaller than the allowable range for the other parameters [−16, 16]. B: Means and standard deviations of both the intrinsic and synaptic input parameters for the BS module of the BS-switch CPGs (1) and for the FS module of the FS-switch CPGs (2). C: Means and standard deviations of the data in B, with the bias and synaptic input parameters replaced by net input. D: Predicted (gray points) and actual (black points) values of the most variable of the net inputs (w_FS→BS + θ_BS) for each of the six BS-switch CPGs.
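The net-input degeneracy itself is easy to demonstrate in simulation: a single model neuron behaves identically under any split of a fixed net input between bias and synaptic drive. A sketch under the same CTRNN assumptions, with illustrative parameter values:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(theta, I, w_self=5.0, tau=1.0, dt=0.01, steps=2000):
    """Euler simulation of tau*dy/dt = -y + w_self*sigma(y + theta) + I.
    Returns the output trace sigma(y + theta). Starting at y = -theta makes
    the initial output 0.5 regardless of how the net input is split."""
    y = -theta
    out = np.empty(steps)
    for k in range(steps):
        y += (dt / tau) * (-y + w_self * sigma(y + theta) + I)
        out[k] = sigma(y + theta)
    return out

# Two different (bias, input) splits with the same net input theta + I = 2.0
a = neuron_output(theta=3.0, I=-1.0)
b = neuron_output(theta=0.5, I=1.5)
print(np.max(np.abs(a - b)))  # ~0 (floating-point noise only)
```

Substituting z = y + θ shows why: the dynamics become τ ż = −z + w_self σ(z) + (θ + I), so the bias and external input enter only through their sum.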

the individual synaptic weights w_FT→FS and w_BS→FS (Fig. 6C2). It is interesting to note that the quantities governing the near-fold transitions (w_FT→BS + θ_BS in BS-switch CPGs and θ_FS in FS-switch CPGs) exhibit less variability than do the quantities governing the other switch. This is presumably because the transition time is much more sensitive to small changes in input near a fold (see Fig. 6A in the companion article).

The final step in understanding the observed parameter variability is to account for the differing roles that each one-dimensional module plays in the particular CPG in which it is embedded, since the timing of a one-dimensional module clearly depends on the timing of the other neurons in the circuit. As an example, we focus on the off-to-on transition in BS-switch CPGs, since the sum that governs this transition (w_FS→BS + θ_BS) exhibits the largest remaining amount of variability (Fig. 6C1). For each of the six BS-switch CPGs, we measured the duration D of the transition of its BS module from an output of 0.05 to an output of 0.95. We then inserted its particular parameter values into the equation τ_BS · T(w_BS, w_FS→BS + θ_BS) = D and numerically solved for w_FS→BS + θ_BS. In all six cases, these theoretical predictions of w_FS→BS + θ_BS (gray points; Fig. 6D) closely matched the actual values (black points). Because the outputs of the other neurons are not completely constant during BS's transition, the assumption that BS functions as an independent module is not entirely accurate, and this leads to the remaining small differences that are observed between the predicted and actual parameter values.

Overall Modular Structure. In three-neuron CPGs, there are a total of six possible transitions as the three motor neurons turn on and off. The BS-switch/FS-switch distinction focused on only two of these transitions.
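The numerical solve for the net input described above can be sketched without reproducing the transition-time function T from the companion article, by recomputing durations through direct integration of a single CTRNN neuron and bisecting. The output thresholds (0.05 and 0.95), parameter values, and the target net input are illustrative assumptions, not values from the evolved circuits.

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def transition_duration(net, w_self=15.3, tau=1.0, dt=0.005, max_t=100.0):
    """Time for the output sigma(z) to rise from 0.05 to 0.95 under
    tau*dz/dt = -z + w_self*sigma(z) + net (the bias is absorbed into net)."""
    z = np.log(0.05 / 0.95)      # state at which the output is exactly 0.05
    z_hi = np.log(0.95 / 0.05)   # state at which the output is exactly 0.95
    t = 0.0
    while z < z_hi and t < max_t:
        z += (dt / tau) * (-z + w_self * sigma(z) + net)
        t += dt
    return t

def infer_net_input(duration, lo=-3.6, hi=0.0, iters=40):
    """Bisect for the net input producing a given transition duration
    (duration decreases monotonically as the net input grows)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if transition_duration(mid) > duration:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

D = transition_duration(-3.0)  # stand-in for a measured duration
print(infer_net_input(D))      # recovers a net input of about -3.0
```

Because the transition slows sharply as the net input approaches the fold, the inverse problem is well conditioned: small errors in D map to even smaller errors in the recovered net input.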
What are the patterns of modular structure in the top 10 three-neuron CPGs when all six transitions are taken into account? Both transition rate plots and limit cycle plots provide insight into the overall modular structure of a CPG. In the transition rate plots, an isolated peak indicates that only one neuron is changing its output, whereas coincident peaks indicate that two or more neurons are changing simultaneously. In the limit cycle plots, a trajectory that follows an edge (i.e., is one-dimensional) indicates that only one neuron is making a transition, a trajectory that follows a face (i.e., is two-dimensional) indicates that two neurons are making transitions together, and a trajectory that occupies the volume (i.e., is three-dimensional) indicates that all three neurons are simultaneously making transitions. Note that the limit cycles of the top 10 three-neuron CPGs exhibit considerable variability (Fig. 7A). When all six transitions are considered, there appear to be six different classes of dynamical modular structure. The first two classes are characterized by having a one-dimensional module and a two-dimensional module. In the first class, BS is the one-dimensional module, and FS/FT constitute the two-dimensional module. There are three examples of this structure among the top ten CPGs (Fig. 7B), one of which has already been extensively analyzed (see the companion article). Note that two transitions occur along the BS edges of the cube of possible output values and that two occur along the FS/FT faces of the cube. In the second class, FS is the one-dimensional module, and BS/FT constitute the two-dimensional module. There are two examples of this structure among the top 10 CPGs (Fig. 7C), the best of which has already been discussed in detail above (Fig. 5). Note that two transitions occur along the FS edges of the cube, and that two occur along the BS/FT faces of the cube.
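Reading module dimensionality off a transition rate plot amounts to grouping coincident derivative peaks. A sketch of this bookkeeping on synthetic traces (the rate threshold and the traces themselves are illustrative, not data from the evolved CPGs):

```python
import numpy as np

def transition_groups(outputs, dt, thresh=0.5):
    """Group time steps where |d(output)/dt| exceeds `thresh` into
    transition events and report which neurons co-participate in each,
    mimicking how coincident peaks are read off a transition rate plot."""
    rates = np.abs(np.gradient(outputs, dt, axis=1))
    active = rates > thresh                    # shape: (neurons, time)
    groups, current = [], set()
    for t in range(outputs.shape[1]):
        hot = np.where(active[:, t])[0]
        if hot.size:
            current |= set(hot.tolist())
        elif current:                          # event just ended: flush it
            groups.append(sorted(current))
            current = set()
    if current:
        groups.append(sorted(current))
    return groups

# Synthetic traces: neuron 0 switches alone (a one-dimensional module),
# while neurons 1 and 2 switch simultaneously (a two-dimensional module).
t = np.arange(0.0, 10.0, 0.01)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))
outputs = np.vstack([sig(4 * (t - 2.0)),         # switches on near t = 2
                     sig(4 * (t - 6.0)),         # switches on near t = 6...
                     1.0 - sig(4 * (t - 6.0))])  # ...as this one switches off
print(transition_groups(outputs, dt=0.01))  # [[0], [1, 2]]
```

Applied to a real limit cycle, the same grouping directly yields the edge/face/volume classification described above: singleton groups are edges, pairs are faces, and triples fill the volume.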
A dynamical analysis of the two-dimensional modules demonstrates that all of them exhibit regions of multistability through which they are switched by their one-dimensional modules. The third class necessitates a generalization of the concept of a dynamical module to focus on individual sets of transitions rather than groups of neurons. An example of this class is shown in Fig. 7D. Here, the FS neuron constitutes a one-dimensional module. Note the transitions along the FS edges in the limit cycle (Fig. 7D1) and the isolated FS peaks in the transition rate plot (Fig. 7D2). However, while BS and FT make transitions simultaneously at the end of swing (overlapping BS and FT peaks in transition rate plot, Fig. 7D2; transition along the bottom face, Fig. 7D1), they make transitions independently at the end of stance (nonoverlapping BS and FT peaks in transition rate plot, Fig. 7D2; transitions along the top FT and BS edges, Fig. 7D1). Thus the modular structure itself changes through one cycle of the CPG. We will term such structures dynamic dynamical modules. Note that, because the dynamical modular analysis we have done focuses on individual transitions, such dynamic structures in no way undermine the utility of our approach. Two CPGs with this structure are found among the top 10, one of which is shown in Fig. 7D. Single examples of the remaining three classes were seen among the top 10 CPGs. In the fourth class, all

Figure 7. Modular structure of the top 10 three-neuron CPGs. A: Plots of the limit cycles of the top 10 three-neuron CPGs. B–G: Limit cycles (D1, E1, F1, G1) and transition rate plots (D2, E2, F2, G2) for the six classes of CPGs. Most of these classes correspond to idealized modular structures shown in Table 2: 7B corresponds to the 4th row in column 1; 7C and 7F to the 6th row in column 1; 7D to the 5th row in column 2; 7E to the 2nd row in column 2.

transitions are one-dimensional (Fig. 7E). In the limit cycle, all transitions occur along the edges of the output cube (Fig. 7E1) and exhibit nonoverlapping peaks in the transition rate plots (Fig. 7E2). In the fifth class, all transitions are somewhat three-dimensional. In the limit cycle, transitions occur within the volume of the output cube and have rounded edges (Fig. 7F1), and exhibit overlapping peaks in the transition rate plots (Fig. 7F2). The transitions are not fully three-dimensional, however, and strongly resemble those of class two (Fig. 7C). In the sixth class, it appears that all transitions are two-dimensional (Fig. 7G). In the limit cycle, all transitions occur along the faces of the output cube (Fig. 7G1) and exhibit peaks that overlap in pairs in the transition rate plots (Fig. 7G2). An unusual feature of this circuit is that FS has double peaks in the transition rate plot, which correspond to its switching partially on or off and then, after a delay, switching fully on or off.

A combinatorial analysis of idealized transitions (see Appendix B) shows that 15 distinct dynamical modular structures are possible for a three-neuron CPG. Classes one through five correspond closely to four of the legal idealized modular structures shown in Table 2. Class six does not appear in our catalog because FS makes more than two transitions within a cycle, thus violating one of the assumptions underlying the catalog. All 15 of the idealized modular structures contain at least one one-dimensional module, while eight of these 15 contain dynamic dynamical modules. Thus, it is hardly surprising that one-dimensional modules are a common feature of the top 10 three-neuron CPGs and that dynamic dynamical modules are observed.
An interesting feature observed in three of the top 10 three-neuron CPGs was that one neuron had a much smaller time constant than the others, causing it to rapidly equilibrate to its SSIO curve rather than exhibiting hysteresis as in the bistable case (Fig. 5D). The output of such a neuron is almost completely determined by the synaptic input it receives from the other neurons, effectively causing it to drop out of the dynamics of the circuit. This slaving of one neuron to the others can simplify the analysis of a CPG because it reduces the dimensionality of the dynamics. In addition, this feature explains the unusual FS double peaks observed in the transition rate plots of the circuit shown in Fig. 7G2. Because this neuron is slaved to BS and FT, each time either of these neurons makes a transition, a peak is observed in FS as well. This is why this circuit does not appear in our catalog of legal three-neuron modular structures (Table 2).

Analysis of Four-Neuron CPGs

Can the lessons learned from the analysis of the three-neuron CPGs be applied to the four-neuron CPGs, which have an additional internal degree of freedom due to their interneuron? In particular, how do the performances, neural patterns, modular analysis, pattern of dynamical modular structures, and parameter variability compare to those of the three-neuron CPGs? We note that reset experiments demonstrated that both motor neurons and interneurons were essential components of all four-neuron CPGs. The performances of all of the top 10 four-neuron CPGs are higher than the performance of the very best three-neuron CPG. In fact, the best four-neuron CPG has the highest performance of all 325 evolved CPGs (99.13% of optimal). The primary reason for the consistently higher performance of the four-neuron CPGs is that they have no swing deficit, because the interneuron can be used instead of one of the motor neurons to switch between swing and stance configurations.
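The slaving of a fast neuron noted above can be illustrated with a single neuron driven by a slowly varying input. This sketch omits the self-connection and uses an arbitrary sinusoidal drive for illustration: with a small time constant the output stays pinned to its steady-state response σ(I(t)), while a large time constant lags far behind it.

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def tracking_error(tau, dt=0.001, T=40.0):
    """Maximum gap between a neuron's output and its steady-state response
    sigma(I(t)) when driven by a slowly varying input I(t), under
    tau*dy/dt = -y + I(t) with output sigma(y)."""
    steps = int(T / dt)
    y, worst = 0.0, 0.0
    for k in range(steps):
        I = 3.0 * np.sin(0.5 * k * dt)
        y += (dt / tau) * (-y + I)
        worst = max(worst, abs(sigma(y) - sigma(I)))
    return worst

print(tracking_error(0.05))  # small: the fast neuron is slaved to its input
print(tracking_error(5.0))   # large: a slow neuron lags well behind
```

A slaved neuron contributes a derivative peak whenever its drivers switch, which is consistent with the doubled FS peaks described for the circuit in Fig. 7G2.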
The neural output pattern of the best four-neuron CPG is shown in Fig. 8A, and it is clear that the motor neurons are better able to approximate the optimal pattern than in the three-neuron CPGs. Furthermore, it exhibits five quasistable states. The circuit architecture and a circuit-based explanation of the CPG's function are shown in Fig. 8C. When INT is off (1), the excitation it receives from FT is stronger than the inhibition from BS and its own bias, and so it turns on. Once it turns on (2), its inhibition turns off BS. Once BS turns off (3), its excitation to FT and inhibition to FS are removed, and they turn off and on, respectively. Once they have changed state (4), the inhibition from FS to INT turns INT off. Once INT is off (5), its inhibition to FT and to BS is removed, as well as its excitation to FS. FT and BS turn on, FS turns off, and the cycle repeats. What is the dynamical modular structure of this CPG? This question can be answered by examining the clustering of peaks in the transition rate plot (Fig. 8B). While INT functions as a one-dimensional module, making both of its transitions in isolation, the motor neurons function as a dynamic dynamical module. They form a three-dimensional module at the transition from swing to stance but split into one-dimensional (BS) and two-dimensional (FS/FT) modules during the transition from stance to swing. This can also be seen in the circuit-based explanation (Fig. 8C), in which the motor neurons form changing coalitions through the cycle.

Figure 8. Motor pattern (A), transition rate plots (B), and circuit architecture and circuit-based explanation (C) of the best four-neuron CPG. Labeling conventions are the same as in Fig. 5. Note that if a neuron's bistable region crosses 0, then both output states are intrinsically stable. In this case, no intrinsic state is indicated. Throughout the figure, the numbers 1 to 5 refer to the five quasistable states indicated in A.

Like the three-neuron CPGs, the operation of this four-neuron CPG can be understood in terms of its dynamical modules (Fig. 9). All modules are multistable, and each acts to switch the other between its two stable output configurations. Thus, once again, this motor pattern can be understood as a closed chain of successive destabilizations of each module by the other. Unlike the BS-switch or FS-switch three-neuron CPGs that were previously analyzed, some of the modules of this CPG are dynamic; that is, they exist only for part of the cycle. Nevertheless, each individual transition can still be analyzed as described in the companion article. The CPG exhibits the following sequence of transitions. At (1), the INT module is unstable and makes a transition from off to on (Fig. 9A). This destabilizes the BS dynamic dynamical module (2), which then makes a transition from on to off (Fig. 9B). In


More information

Analysis of coupled van der Pol oscillators and implementation to a myriapod robot

Analysis of coupled van der Pol oscillators and implementation to a myriapod robot Proceedings of the 17th World Congress The International Federation of Automatic Control Analysis of coupled van der Pol oscillators and implementation to a myriapod robot Naoki KUWATA Yoshikatsu HOSHI

More information

Integer weight training by differential evolution algorithms

Integer weight training by differential evolution algorithms Integer weight training by differential evolution algorithms V.P. Plagianakos, D.G. Sotiropoulos, and M.N. Vrahatis University of Patras, Department of Mathematics, GR-265 00, Patras, Greece. e-mail: vpp

More information

Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized Critical Model Based on Neural Networks

Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized Critical Model Based on Neural Networks Commun. Theor. Phys. (Beijing, China) 42 (2004) pp. 121 125 c International Academic Publishers Vol. 42, No. 1, July 15, 2004 Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized

More information

CHAPTER 3. Pattern Association. Neural Networks

CHAPTER 3. Pattern Association. Neural Networks CHAPTER 3 Pattern Association Neural Networks Pattern Association learning is the process of forming associations between related patterns. The patterns we associate together may be of the same type or

More information

Electrophysiology of the neuron

Electrophysiology of the neuron School of Mathematical Sciences G4TNS Theoretical Neuroscience Electrophysiology of the neuron Electrophysiology is the study of ionic currents and electrical activity in cells and tissues. The work of

More information

A SIMPLE MODEL OF A CENTRAL PATTERN GENERATOR FOR QUADRUPED GAITS

A SIMPLE MODEL OF A CENTRAL PATTERN GENERATOR FOR QUADRUPED GAITS A SIMPLE MODEL OF A CENTRAL PATTERN GENERATOR FOR QUADRUPED GAITS JEFFREY MARSH Humankind s long association with four-legged animals, wild as well as domesticated, has produced a rich vocabulary of words

More information

k k k 1 Lecture 9: Applying Backpropagation Lecture 9: Applying Backpropagation 3 Lecture 9: Applying Backpropagation

k k k 1 Lecture 9: Applying Backpropagation Lecture 9: Applying Backpropagation 3 Lecture 9: Applying Backpropagation K-Class Classification Problem Let us denote the -th class by C, with n exemplars or training samples, forming the sets T for = 1,, K: {( x, ) p = 1 n } T = d,..., p p The complete training set is T =

More information

Robust Microcircuit Synchronization by Inhibitory Connections

Robust Microcircuit Synchronization by Inhibitory Connections Article Robust Microcircuit Synchronization by Inhibitory Connections Attila Szücs, 1,2, * Ramon Huerta, 1 Mikhail I. Rabinovich, 1 and Allen I. Selverston 1 1 Institute for Nonlinear Science, University

More information

Executable Symbolic Modeling of Neural Processes

Executable Symbolic Modeling of Neural Processes Executable Symbolic Modeling of Neural Processes M Sriram Iyengar 1, Carolyn Talcott 2, Riccardo Mozzachiodi 3, Douglas Baxter 3 1 School of Health Information Sciences, Univ. of Texas Health Science Center

More information

Phase Response. 1 of of 11. Synaptic input advances (excitatory) or delays (inhibitory) spiking

Phase Response. 1 of of 11. Synaptic input advances (excitatory) or delays (inhibitory) spiking Printed from the Mathematica Help Browser 1 1 of 11 Phase Response Inward current-pulses decrease a cortical neuron's period (Cat, Layer V). [Fetz93] Synaptic input advances (excitatory) or delays (inhibitory)

More information

Data Mining Part 5. Prediction

Data Mining Part 5. Prediction Data Mining Part 5. Prediction 5.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline How the Brain Works Artificial Neural Networks Simple Computing Elements Feed-Forward Networks Perceptrons (Single-layer,

More information

Computational Explorations in Cognitive Neuroscience Chapter 2

Computational Explorations in Cognitive Neuroscience Chapter 2 Computational Explorations in Cognitive Neuroscience Chapter 2 2.4 The Electrophysiology of the Neuron Some basic principles of electricity are useful for understanding the function of neurons. This is

More information

Exam 1--PHYS 151--Chapter 1

Exam 1--PHYS 151--Chapter 1 ame: Class: Date: Exam 1--PHYS 151--Chapter 1 True/False Indicate whether the statement is true or false. Select A for True and B for False. 1. The force is a measure of an object s inertia. 2. Newton

More information

CSC 4510 Machine Learning

CSC 4510 Machine Learning 10: Gene(c Algorithms CSC 4510 Machine Learning Dr. Mary Angela Papalaskari Department of CompuBng Sciences Villanova University Course website: www.csc.villanova.edu/~map/4510/ Slides of this presenta(on

More information

Rotational Equilibrium

Rotational Equilibrium Rotational Equilibrium 6-1 Rotational Equilibrium INTRODUCTION Have you ever tried to pull a stubborn nail out of a board or develop your forearm muscles by lifting weights? Both these activities involve

More information

Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995)

Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995) Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten Lecture 2a The Neuron - overview of structure From Anderson (1995) 2 Lect_2a_Mathematica.nb Basic Structure Information flow:

More information

6. APPLICATION TO THE TRAVELING SALESMAN PROBLEM

6. APPLICATION TO THE TRAVELING SALESMAN PROBLEM 6. Application to the Traveling Salesman Problem 92 6. APPLICATION TO THE TRAVELING SALESMAN PROBLEM The properties that have the most significant influence on the maps constructed by Kohonen s algorithm

More information

Haploid & diploid recombination and their evolutionary impact

Haploid & diploid recombination and their evolutionary impact Haploid & diploid recombination and their evolutionary impact W. Garrett Mitchener College of Charleston Mathematics Department MitchenerG@cofc.edu http://mitchenerg.people.cofc.edu Introduction The basis

More information

Asimple spring-loaded toy that jumps up off

Asimple spring-loaded toy that jumps up off Springbok: The Physics of Jumping Robert J. Dufresne, William J. Gerace, and William J. Leonard Asimple spring-loaded toy that jumps up off the table when compressed and released offers the physics teacher

More information

Analysis of Ultrastability in Small Dynamical Recurrent Neural Networks

Analysis of Ultrastability in Small Dynamical Recurrent Neural Networks Analysis of Ultrastability in Small Dynamical Recurrent Neural Networks Eduardo J. Izquierdo, Miguel Aguilera, Randall D. Beer Indiana University Bloomington, IN, U.S. Universidad de Zaragoza, Spain edizquie@indiana.edu

More information

When do Correlations Increase with Firing Rates? Abstract. Author Summary. Andrea K. Barreiro 1* and Cheng Ly 2

When do Correlations Increase with Firing Rates? Abstract. Author Summary. Andrea K. Barreiro 1* and Cheng Ly 2 When do Correlations Increase with Firing Rates? Andrea K. Barreiro 1* and Cheng Ly 2 1 Department of Mathematics, Southern Methodist University, Dallas, TX 75275 U.S.A. 2 Department of Statistical Sciences

More information

Introduction to Artificial Neural Networks

Introduction to Artificial Neural Networks Facultés Universitaires Notre-Dame de la Paix 27 March 2007 Outline 1 Introduction 2 Fundamentals Biological neuron Artificial neuron Artificial Neural Network Outline 3 Single-layer ANN Perceptron Adaline

More information

Can a Magnetic Field Produce a Current?

Can a Magnetic Field Produce a Current? Can a Magnetic Field Produce a Current? In our study of magnetism we learned that an electric current through a wire, or moving electrically charged objects, produces a magnetic field. Could the reverse

More information

Overview Organization: Central Nervous System (CNS) Peripheral Nervous System (PNS) innervate Divisions: a. Afferent

Overview Organization: Central Nervous System (CNS) Peripheral Nervous System (PNS) innervate Divisions: a. Afferent Overview Organization: Central Nervous System (CNS) Brain and spinal cord receives and processes information. Peripheral Nervous System (PNS) Nerve cells that link CNS with organs throughout the body.

More information

Active Guidance for a Finless Rocket using Neuroevolution

Active Guidance for a Finless Rocket using Neuroevolution Active Guidance for a Finless Rocket using Neuroevolution Gomez, F.J. & Miikulainen, R. (2003). Genetic and Evolutionary Computation Gecco, 2724, 2084 2095. Introduction Sounding rockets are used for making

More information

Dynamic Models for Passive Components

Dynamic Models for Passive Components PCB Design 007 QuietPower columns Dynamic Models for Passive Components Istvan Novak, Oracle, February 2016 A year ago the QuietPower column [1] described the possible large loss of capacitance in Multi-Layer

More information

Homeostatic plasticity improves signal propagation in. continuous-time recurrent neural networks

Homeostatic plasticity improves signal propagation in. continuous-time recurrent neural networks Homeostatic plasticity improves signal propagation in continuous-time recurrent neural networks Hywel Williams and Jason Noble {hywelw jasonn}@comp.leeds.ac.uk School of Computing, University of Leeds,

More information

Forecasting & Futurism

Forecasting & Futurism Article from: Forecasting & Futurism December 2013 Issue 8 A NEAT Approach to Neural Network Structure By Jeff Heaton Jeff Heaton Neural networks are a mainstay of artificial intelligence. These machine-learning

More information

Learning Cellular Automaton Dynamics with Neural Networks

Learning Cellular Automaton Dynamics with Neural Networks Learning Cellular Automaton Dynamics with Neural Networks N H Wulff* and J A Hertz t CONNECT, the Niels Bohr Institute and Nordita Blegdamsvej 17, DK-2100 Copenhagen 0, Denmark Abstract We have trained

More information

Computational statistics

Computational statistics Computational statistics Combinatorial optimization Thierry Denœux February 2017 Thierry Denœux Computational statistics February 2017 1 / 37 Combinatorial optimization Assume we seek the maximum of f

More information

Self-organized Criticality and Synchronization in a Pulse-coupled Integrate-and-Fire Neuron Model Based on Small World Networks

Self-organized Criticality and Synchronization in a Pulse-coupled Integrate-and-Fire Neuron Model Based on Small World Networks Commun. Theor. Phys. (Beijing, China) 43 (2005) pp. 466 470 c International Academic Publishers Vol. 43, No. 3, March 15, 2005 Self-organized Criticality and Synchronization in a Pulse-coupled Integrate-and-Fire

More information

Evolution of Genotype-Phenotype mapping in a von Neumann Self-reproduction within the Platform of Tierra

Evolution of Genotype-Phenotype mapping in a von Neumann Self-reproduction within the Platform of Tierra Evolution of Genotype-Phenotype mapping in a von Neumann Self-reproduction within the Platform of Tierra Declan Baugh and Barry Mc Mullin The Rince Institute, Dublin City University, Ireland declan.baugh2@mail.dcu.ie,

More information

Nature Neuroscience: doi: /nn.2283

Nature Neuroscience: doi: /nn.2283 Supplemental Material for NN-A2678-T Phase-to-rate transformations encode touch in cortical neurons of a scanning sensorimotor system by John Curtis and David Kleinfeld Figure S. Overall distribution of

More information

6.3.4 Action potential

6.3.4 Action potential I ion C m C m dφ dt Figure 6.8: Electrical circuit model of the cell membrane. Normally, cells are net negative inside the cell which results in a non-zero resting membrane potential. The membrane potential

More information

An Efficient Method for Computing Synaptic Conductances Based on a Kinetic Model of Receptor Binding

An Efficient Method for Computing Synaptic Conductances Based on a Kinetic Model of Receptor Binding NOTE Communicated by Michael Hines An Efficient Method for Computing Synaptic Conductances Based on a Kinetic Model of Receptor Binding A. Destexhe Z. F. Mainen T. J. Sejnowski The Howard Hughes Medical

More information

6 Evolution of Networks

6 Evolution of Networks last revised: March 2008 WARNING for Soc 376 students: This draft adopts the demography convention for transition matrices (i.e., transitions from column to row). 6 Evolution of Networks 6. Strategic network

More information

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops Math. Model. Nat. Phenom. Vol. 5, No. 2, 2010, pp. 67-99 DOI: 10.1051/mmnp/20105203 Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops J. Ma 1 and J. Wu 2 1 Department of

More information

Statistics Random Variables

Statistics Random Variables 1 Statistics Statistics are used in a variety of ways in neuroscience. Perhaps the most familiar example is trying to decide whether some experimental results are reliable, using tests such as the t-test.

More information

arxiv: v1 [q-bio.nc] 1 Jun 2014

arxiv: v1 [q-bio.nc] 1 Jun 2014 1 arxiv:1406.0139v1 [q-bio.nc] 1 Jun 2014 Distribution of Orientation Selectivity in Recurrent Networks of Spiking Neurons with Different Random Topologies Sadra Sadeh 1, Stefan Rotter 1, 1 Bernstein Center

More information

CHAPTER 20 Magnetism

CHAPTER 20 Magnetism CHAPTER 20 Magnetism Units Magnets and Magnetic Fields Electric Currents Produce Magnetic Fields Force on an Electric Current in a Magnetic Field; Definition of B Force on Electric Charge Moving in a Magnetic

More information

Localized Excitations in Networks of Spiking Neurons

Localized Excitations in Networks of Spiking Neurons Localized Excitations in Networks of Spiking Neurons Hecke Schrobsdorff Bernstein Center for Computational Neuroscience Göttingen Max Planck Institute for Dynamics and Self-Organization Seminar: Irreversible

More information

Neurons, Synapses, and Signaling

Neurons, Synapses, and Signaling LECTURE PRESENTATIONS For CAMPBELL BIOLOGY, NINTH EDITION Jane B. Reece, Lisa A. Urry, Michael L. Cain, Steven A. Wasserman, Peter V. Minorsky, Robert B. Jackson Chapter 48 Neurons, Synapses, and Signaling

More information

Computing with Inter-spike Interval Codes in Networks of Integrate and Fire Neurons

Computing with Inter-spike Interval Codes in Networks of Integrate and Fire Neurons Computing with Inter-spike Interval Codes in Networks of Integrate and Fire Neurons Dileep George a,b Friedrich T. Sommer b a Dept. of Electrical Engineering, Stanford University 350 Serra Mall, Stanford,

More information

+ + ( + ) = Linear recurrent networks. Simpler, much more amenable to analytic treatment E.g. by choosing

+ + ( + ) = Linear recurrent networks. Simpler, much more amenable to analytic treatment E.g. by choosing Linear recurrent networks Simpler, much more amenable to analytic treatment E.g. by choosing + ( + ) = Firing rates can be negative Approximates dynamics around fixed point Approximation often reasonable

More information

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden 1 Selecting Efficient Correlated Equilibria Through Distributed Learning Jason R. Marden Abstract A learning rule is completely uncoupled if each player s behavior is conditioned only on his own realized

More information

Chapter 12. Magnetism and Electromagnetism

Chapter 12. Magnetism and Electromagnetism Chapter 12 Magnetism and Electromagnetism 167 168 AP Physics Multiple Choice Practice Magnetism and Electromagnetism SECTION A Magnetostatics 1. Four infinitely long wires are arranged as shown in the

More information

Period Differences Between Segmental Oscillators Produce Intersegmental Phase Differences in the Leech Heartbeat Timing Network

Period Differences Between Segmental Oscillators Produce Intersegmental Phase Differences in the Leech Heartbeat Timing Network J Neurophysiol 87: 1603 1615, 2002; 10.1152/jn.00338.2001. Period Differences Between Segmental Oscillators Produce Intersegmental Phase Differences in the Leech Heartbeat Timing Network MARK A. MASINO

More information

Mining Classification Knowledge

Mining Classification Knowledge Mining Classification Knowledge Remarks on NonSymbolic Methods JERZY STEFANOWSKI Institute of Computing Sciences, Poznań University of Technology SE lecture revision 2013 Outline 1. Bayesian classification

More information

Control of multistability in ring circuits of oscillators

Control of multistability in ring circuits of oscillators Biol. Cybern. 80, 87±102 (1999) Control of multistability in ring circuits of oscillators C.C. Canavier 1, D.A. Baxter 2, J.W. Clark 3, J.H. Byrne 2 1 Department of Psychology, University of New Orleans,

More information

Physics 212 Question Bank III 2010

Physics 212 Question Bank III 2010 A negative charge moves south through a magnetic field directed north. The particle will be deflected (A) North. () Up. (C) Down. (D) East. (E) not at all.. A positive charge moves West through a magnetic

More information

When Do Microcircuits Produce Beyond-Pairwise Correlations?

When Do Microcircuits Produce Beyond-Pairwise Correlations? When Do Microcircuits Produce Beyond-Pairwise Correlations? Andrea K. Barreiro,4,, Julijana Gjorgjieva 3,5, Fred Rieke 2, and Eric Shea-Brown Department of Applied Mathematics, University of Washington

More information

Generation and Preservation of the Slow Underlying Membrane Potential Oscillation in Model Bursting Neurons

Generation and Preservation of the Slow Underlying Membrane Potential Oscillation in Model Bursting Neurons J Neurophysiol 04: 589 602, 200. First published June 30, 200; doi:0.52/jn.00444.200. Generation and Preservation of the Slow Underlying Membrane Potential Oscillation in Model Bursting Neurons Clarence

More information

Ch. 5. Membrane Potentials and Action Potentials

Ch. 5. Membrane Potentials and Action Potentials Ch. 5. Membrane Potentials and Action Potentials Basic Physics of Membrane Potentials Nerve and muscle cells: Excitable Capable of generating rapidly changing electrochemical impulses at their membranes

More information

Kinesiology 201 Solutions Fluid and Sports Biomechanics

Kinesiology 201 Solutions Fluid and Sports Biomechanics Kinesiology 201 Solutions Fluid and Sports Biomechanics Tony Leyland School of Kinesiology Simon Fraser University Fluid Biomechanics 1. Lift force is a force due to fluid flow around a body that acts

More information

A learning model for oscillatory networks

A learning model for oscillatory networks Pergamon Neural Networks Neural Networks 11 (1998) 249 257 Contributed article A learning model for oscillatory networks Jun Nishii* Laboratory for Neural Modeling, The Institute of Physical and Chemical

More information

Work, Power, and Energy Lecture 8

Work, Power, and Energy Lecture 8 Work, Power, and Energy Lecture 8 ˆ Back to Earth... ˆ We return to a topic touched on previously: the mechanical advantage of simple machines. In this way we will motivate the definitions of work, power,

More information

Internally generated preactivation of single neurons in human medial frontal cortex predicts volition

Internally generated preactivation of single neurons in human medial frontal cortex predicts volition Internally generated preactivation of single neurons in human medial frontal cortex predicts volition Itzhak Fried, Roy Mukamel, Gabriel Kreiman List of supplementary material Supplementary Tables (2)

More information