Spiking Neuron Model Approximation using GEP


2013 IEEE Congress on Evolutionary Computation, June 20-23, 2013, Cancún, México

Josafath I. Espinosa-Ramos and Nareli Cruz Cortés
Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City

Roberto A. Vázquez
Grupo de Sistemas Inteligentes, Facultad de Ingeniería, Universidad La Salle, Mexico City

Abstract—Spiking neuron models can accurately predict the spike trains produced by cortical neurons in response to somatically injected electric currents. Since the specific model characteristics depend on the neuron, a computational method is required to fit models to electrophysiological recordings. However, models only work within defined limits, and it is possible that they apply only to the example presented. Moreover, there is no established methodology for fitting the models; in fact, the fitting procedure can be very time consuming, both in terms of computer simulations and of code writing. In this paper a first effort is presented not to fit models, but to create a methodology that generates neuron models automatically. We propose to use Gene Expression Programming to create mathematical expressions that replicate the behavior of a state-of-the-art neuron model. We show how this strategy is feasible for solving more complex problems and provides the basis for finding new models which could be applied in a wide range of areas, from computational neuroscience (e.g., pyramidal neuron spike train prediction) to artificial intelligence (e.g., pattern recognition problems).

I. INTRODUCTION

Spiking neuron models have been applied in a wide range of areas in the field of computational neuroscience, such as brain region modeling, auditory processing, visual processing, robotics, and pattern recognition. Many spiking neuron models have been proposed, but choosing one of them is a difficult question. The answer depends on the type of problem: electrophysiologists generally prefer biophysical models, being familiar with the notion of ion channels that open and close (and hence alter neuronal activity) depending on environmental conditions. Theoreticians, by contrast, typically prefer simple neuron models with few parameters that are amenable to mathematical analysis. The model proposed by Izhikevich [5] was developed to understand the fine temporal structure of cortical spike trains, and to use spike timing as an additional variable in understanding how the mammalian neocortex processes information. This model can exhibit the 20 neurocomputational properties of biological neurons summarized in [6]. There, Izhikevich also discusses the biological plausibility and computational efficiency of some of the most useful spiking and bursting neuron models, and compares their applicability to large-scale simulations of cortical neural networks. Recently, several research groups have approached this question by assessing the quality of neuron models with respect to spike-timing prediction or characteristic features of voltage traces. In 2009, following previous attempts at model comparison on a smaller scale, the International Neuroinformatics Coordinating Facility (INCF) launched an international competition [1] permitting a quantitative comparison of neuron models. The idea behind the INCF competition is that a good model can predict neuronal activity based on data (electrophysiological recordings) that were not used for parameter tuning.
In [9], the authors use spiking models that can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific model characteristics depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The problem with many spiking neuron models is that they only work within the limits defined, and it is possible that they apply only to the example presented. The ideal, then, would not be to fit existing models, but to construct a model for each kind of neuron. However, this is almost impossible, at least analytically. This paper is a first effort to find and implement a methodology to create spiking neuron models automatically using a state-of-the-art evolutionary computation strategy. This methodology will help us to find a mathematical equation that describes the behavior of biological neurons, such as pyramidal neurons (located in the cortex of the brain), which are involved in cognitive functions, or mathematical models with spiking neuron behavior that can be used to solve pattern recognition problems. Since the Izhikevich model exhibits the 20 most prominent neurocomputational properties of biological neurons [6], we adopted it as a reference model for heuristically creating mathematical expressions for new spiking neuron models. It is worth mentioning that we do not intend to find an equation that exactly reproduces the signal produced by the Izhikevich model after being stimulated, but to create mathematical models that generate spikes at almost the same firing times. We can therefore address the problem as symbolic regression, for which some evolutionary computation strategies have been successful. Among the most popular is Genetic Programming (GP). However, one of the main problems of this technique is the difficulty of combining simplicity and expressive power in the individual representation [2]. If the form of representation is simple to manipulate genetically, it loses functional complexity and is not suitable for solving certain problems. If the representation allows great functional complexity, it is difficult to make it evolve toward accurate solutions. To solve this dilemma, several alternatives have been proposed, among which Gene Expression Programming (GEP) [2] stands out, as it offers a representation that attempts to combine simplicity with expressive power: a model that is simple to evolve, but which can represent complex structures.

The remainder of the article is organized as follows: Section II introduces background research, Section III presents the proposed methodology, Section IV describes implementation details, Section V shows some results, and finally conclusions and future work are presented.

II. BACKGROUND

A. Spiking Neurons

Biological neurons communicate by generating action potentials that are transmitted to other neurons in the network. Action potentials are generated in response to transmembrane currents elicited by presynaptic activation of various receptor types. When these currents drive the membrane voltage across a threshold value, the neuron spikes and goes into a refractory state, showing the typical forms of excitatory and inhibitory postsynaptic potentials over time. Spiking neuron models try to simulate the behavior of a biological neuron when it is stimulated with an electric current through a synaptic channel and a spike train is generated. This allows spatio-temporal information to be incorporated into communication and computation, as real neurons do. Instead of using rate coding (the output value) as classical artificial neural networks do, these neuron models use pulse coding (the number of spikes): mechanisms where neurons receive and send out individual pulses, allowing multiplexing of information such as the frequency and amplitude of sound [13]. These models constitute the computational unit of the third generation of artificial neural networks (see [8] for the classification). It has been proved that spiking neurons can solve linear and non-linear pattern recognition problems [10] [11]. That is, given a set of input patterns belonging to k classes, each input pattern is transformed into an input signal. Then the spiking neuron is stimulated during T ms and a spike train is generated. It is expected that input patterns belonging to the same class generate almost the same firing rates, and that input patterns belonging to different classes generate firing rates different enough to discriminate among the classes.

B. Gene Expression Programming

Gene Expression Programming (GEP) is an evolutionary algorithm that automatically creates computer programs. These computer programs can take many forms: conventional mathematical models, neural networks, decision trees, sophisticated nonlinear models, logistic nonlinear regressors, nonlinear classifiers, complex polynomial structures, logic circuits and expressions, and so on. Irrespective of their complexity, all GEP programs are encoded in very simple linear structures called chromosomes [2]. As in other evolutionary computation techniques, GEP is a population-based algorithm, where a set of chromosomes (also known as individuals or candidate solutions) is reproduced using a selection criterion; the offspring are then mutated, and a new population is created. Finally, a selection method is used to choose the best individuals, which continue in the evolutionary process until a stop criterion is met. The evolutionary process searches for better and better solutions as it tries to solve a particular problem. GEP is distinguished from other evolutionary strategies by the representation of individuals and the way that they are reproduced.
GEP individuals have a dual codification: their genotype is organized as a string, and their phenotype as an expression tree. The items that appear in these individuals are:

- a function set, made up of functions that receive some parameters and can only appear in non-terminal nodes of the syntax tree. The number of parameters each function receives determines the arity of the corresponding node.

- a terminal set, containing the elements that can only appear on the leaves of the tree. This set contains both constant values and the input parameters received by the tree.

Each gene constituting the genotype is divided into two parts: head and tail. The gene's head size is chosen a priori for the problem, but the tail size is determined by the following expression:

t = h(a − 1) + 1    (1)

where t is the tail size, h the head size, and a the maximum arity present in non-terminal nodes. The head can contain both functions and terminal-set elements, but the tail can only contain terminal-set elements. The purpose of these limitations is to allow any gene to be transformed into a valid syntax tree. The format in which a syntax tree is stored in the genotype string is called a K-expression, and it determines the phenotype generated from the genotype. The way to build a valid tree from a K-expression is to fill the tree level by level. Consider, for example, the algebraic expression √((a+b)(c−d)), which can be represented as the diagram in Figure 1.

Fig. 1. Syntax tree of a simple expression.

This kind of diagram is in fact the phenotype of GEP individuals, the genotype being easily inferred from the phenotype as follows: Q*+-abcd, which is the straightforward reading of the diagram from left to right and from top to bottom. Concerning the reproduction of individuals, GEP considers replication, mutation, inversion, transposition of insertion-sequence elements, and recombination strategies (see [3] for details). Many of these reproduction operators can be executed during a single generation, e.g., gene recombination, transposition of insertion-sequence elements, and mutation.
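To make this genotype-to-phenotype mapping concrete, the sketch below decodes a K-expression level by level into a tree and then evaluates it. This is our illustration, not code from the paper; the arity table and function names are assumptions.

```python
# Minimal sketch (illustrative, not from the paper): breadth-first decoding of
# a GEP K-expression into an expression tree, followed by recursive evaluation.
import math

ARITY = {'Q': 1, '*': 2, '+': 2, '-': 2, '/': 2}  # Q denotes the square root

def decode(kexpr):
    """Build the tree level by level, exactly as a K-expression is read."""
    nodes = [{'op': s, 'children': []} for s in kexpr]
    i = 1  # index of the next unused symbol
    frontier = [nodes[0]]
    while frontier and i < len(nodes):
        nxt = []
        for node in frontier:
            for _ in range(ARITY.get(node['op'], 0)):
                node['children'].append(nodes[i])
                nxt.append(nodes[i])
                i += 1
        frontier = nxt
    return nodes[0]

def evaluate(node, env):
    op, ch = node['op'], node['children']
    if op == 'Q':
        return math.sqrt(evaluate(ch[0], env))
    if op == '*':
        return evaluate(ch[0], env) * evaluate(ch[1], env)
    if op == '+':
        return evaluate(ch[0], env) + evaluate(ch[1], env)
    if op == '-':
        return evaluate(ch[0], env) - evaluate(ch[1], env)
    if op == '/':
        return evaluate(ch[0], env) / evaluate(ch[1], env)
    return env[op]  # terminal symbol

# The paper's example: Q*+-abcd encodes sqrt((a+b)*(c-d)).
tree = decode('Q*+-abcd')
print(evaluate(tree, {'a': 3, 'b': 1, 'c': 7, 'd': 3}))  # sqrt(4*4) = 4.0
```

Reading the example Q*+-abcd this way rebuilds the tree of Figure 1 and evaluates √((a+b)(c−d)).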

III. METHODOLOGY

As mentioned before, we adopted the Izhikevich model in order to reproduce different behaviors while it is stimulated with a constant current.

A. Problem representation

The first goal is to find a polynomial equation in terms of the voltage v which can generate spikes at the same firing rates as one of the Izhikevich model behaviors when stimulated by a constant current. The Izhikevich model is represented by a system of two differential equations:

v' = 0.04v² + 5v + 140 − u + I    (2)
u' = a(bv − u)    (3)
if v ≥ v_peak, then { v ← c, u ← u + d }    (4)

Here, the variable v represents the membrane potential (in mV) and u represents a membrane recovery variable which provides negative feedback to v; v_peak is the neuron's maximum potential value. The model can exhibit the firing patterns of all known types of cortical neurons with the choice of the parameters a, b, c and d; various choices of the parameters result in various intrinsic firing patterns [5]. As a first approach, we chose the class 2 excitable model [6] as the reference model, since its number of spikes depends on the injected current value I, a higher value generating a higher number of spikes. Apparently, this model fires spikes periodically when it is stimulated with a constant current, as shown in Figure 2. This specific behavior might be the basis for pattern recognition problems using spiking neurons, as described in [4] [10] [12].

Fig. 2. Class 2 excitable Izhikevich model spike train.

In order to replicate this spike-train response, the model should be stimulated with a constant current I during the whole simulation time, using the following parameter values: a = 0.2, b = 0.26, c = −65.0 and d = 0.

To construct the model, we propose that the GEP individual representation be based on the polynomial term of the first differential equation of the Izhikevich model, 0.04v² + 5v + 140. This is a classical second-order polynomial with three terms in which the voltage (variable v) is involved. The individuals must therefore consider at least one variable that represents the voltage v. Constants may or may not be included, but in this work we include an array of constants generated randomly at the beginning of the GEP algorithm. As mentioned in Section II-B, each gene of an individual must contain two parts: head and tail. The head may contain both functions and terminal-set elements, but the tail contains only terminal-set elements. For simplicity, we define the function set as F = {+, −, ×, /, sq}, sq being the square operator, and the terminal set as T = {v, u, ?}, where v represents the voltage, u the recovery variable, and ? a variable that is mapped to the constants array when the K-expression is built. A good solution depends on the size and number of genes that constitute the individual, since these features limit the search space. The individuals should therefore have either one large gene, or two or more small genes (a multigenic chromosome) joined by a linking function. For simplicity, we join these genes with the addition function. A candidate-solution representation is shown in Fig. 3.

Fig. 3. Example of a candidate solution. Three genes are joined by the addition function (grey).
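For reference, Eqs. 2-4 can be integrated with the Euler method in a few lines. The sketch below is ours, not the authors' code; the time step dt, the current I, and the v_peak = 30 mV cutoff are illustrative assumptions.

```python
# Minimal sketch (illustrative): Euler integration of the Izhikevich model,
# Eqs. 2-4, with the class 2 excitable parameters used as the reference.
a, b, c, d = 0.2, 0.26, -65.0, 0.0   # class 2 excitable parameters
v_peak = 30.0                        # assumed spike cutoff, in mV
dt, T, I = 0.1, 1000.0, 0.5          # ms, ms, constant current (assumed values)

v, u = c, b * c
firing_times = []
t = 0.0
while t < T:
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)  # Eq. 2
    u += dt * a * (b * v - u)                          # Eq. 3
    if v >= v_peak:                                    # Eq. 4: reset
        firing_times.append(t)
        v, u = c, u + d
    t += dt
print(len(firing_times), "spikes; first firing times:", firing_times[:5])
```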
B. Fitness Function

Fitting a spiking neuron model to electrophysiological data is performed by maximizing a fitness function measuring the adequacy of the model to the data [7]. In this research we apply the gamma factor, not to predict neural activity, but to replicate the output of a spiking neuron model over a period of time. The gamma factor is based on the number of coincidences between the model spikes and the experimentally recorded spikes, defined as the number of spikes in the experimental train such that there is at least one spike in the model train within ±δ, where δ is the temporal window size (typically a few milliseconds). The gamma factor is defined by equation 5:

Γ = (2 / (1 − 2δ r_exp)) · ((N_coinc − 2δ N_exp r_exp) / (N_exp + N_model))    (5)

where N_coinc is the number of coincidences, N_exp and N_model are the numbers of spikes in the experimental and model spike trains, respectively, and r_exp is the average firing rate of the experimental train. The term 2δ N_exp r_exp is the expected number of coincidences with a Poisson process with the same rate as the experimental spike train, so that Γ = 0 means that the model performs no better than chance. The normalization factor is chosen such that Γ ≤ 1, and Γ = 1 corresponds to a perfect match. The gamma factor depends on the temporal window size parameter δ (it increases with it). We choose δ = ±2 ms to pursue the closest approximation to the real behavior of the neuron, because it is of the same order as the synaptic rise times that can be measured in the soma of cortical pyramidal neurons [7].
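Read directly, Eq. 5 takes only a few lines to compute. The sketch below is our interpretation; the function name, the duration argument, and the example train are assumptions, not the authors' code.

```python
# Minimal sketch (our reading of Eq. 5): gamma coincidence factor between an
# experimental (reference) spike train and a model spike train.
def gamma_factor(exp_times, model_times, delta=2.0, duration=1000.0):
    """Spike times and duration in ms; delta is the +/- coincidence window."""
    n_exp, n_model = len(exp_times), len(model_times)
    # N_coinc: experimental spikes with at least one model spike within +/- delta.
    n_coinc = sum(1 for te in exp_times
                  if any(abs(te - tm) <= delta for tm in model_times))
    r_exp = n_exp / duration                 # average firing rate (spikes/ms)
    expected = 2.0 * delta * n_exp * r_exp   # chance coincidences (Poisson term)
    return (2.0 / (1.0 - 2.0 * delta * r_exp)) * \
           (n_coinc - expected) / (n_exp + n_model)

# Two identical trains give gamma = 1, the perfect-match value.
train = [10.0, 35.0, 60.0, 85.0]
print(gamma_factor(train, train))
```

Algebraically, identical trains give N_coinc = N_exp = N_model, and the normalization cancels so that Γ = 1 exactly, as stated above.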

IV. EXPERIMENTAL FRAMEWORK

The reader should remember that we try to find a model which can replicate both the spike rates and the firing times. That is why Eq. 5 is adopted as a similarity (or dissimilarity) measure between two spike trains, assessing the replication quality of new models. This factor compares the spike trains produced by the new model to the spike trains generated by the reference model. The experimental work is divided into three types of experiments. The first is aimed at finding a polynomial term which replaces only the polynomial term of the first differential equation of the Izhikevich model, Eq. 2. This means that we generate the spikes by computing v using the new polynomial term, minus the variable u, plus the injected current I:

v' = p − u + I    (6)
u' = a(bv − u)    (7)
if v ≥ v_peak, then { v ← c, u ← u + d }    (8)

where p is the polynomial term created by GEP. The second experiments do not include the second differential equation of the Izhikevich model. The spikes are generated by computing only one differential equation in which the variable v is involved; the model is then defined as follows:

v' = p + I    (9)
if v ≥ v_peak, then v ← c    (10)

where p is the polynomial term created by GEP. The last experiments are formed by generating two algebraic expressions: the first replaces the polynomial term of Eq. 2, as in the first experiments, and the second substitutes the second differential equation of the Izhikevich model, given by Eq. 3:

v' = p − u + I    (11)
u' = v − q    (12)
if v ≥ v_peak, then { v ← c, u ← u + d }    (13)

where p and q are algebraic expressions created by GEP. We expect that these three experiments will help us to ensure that GEP can produce spiking neuron models which are adaptable to solving specific computational problems (e.g., pattern recognition problems). The first experiments mainly give us a check experiment, but especially allow building different models with almost exactly the same behavior. The second experiments are intended to find out whether it is possible to substitute a system of two differential equations with a single differential equation and obtain almost the same behavior. Finally, the third experiments will help us to determine whether GEP can construct more complex differential equation systems. In the following lines we describe the scenarios and the initial GEP algorithm parameters; subsequently, the data sets and some implementation details are presented.
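Before moving to the scenarios, it may help to see how a candidate term p slots into the second experimental setting; the sketch below (assumed interface and parameter values, not the authors' code) integrates Eqs. 9-10 for a callable p(v).

```python
# Minimal sketch (assumed interface): a GEP candidate p evaluated inside the
# single-equation model of Eqs. 9-10, v' = p(v) + I with a reset at v_peak.
def simulate(p, I=20.0, dt=0.1, T=1000.0, v_peak=30.0, c=-65.0):
    """p is the evolved term compiled to a callable p(v); times are in ms."""
    v, times, t = c, [], 0.0
    while t < T:
        v += dt * (p(v) + I)   # Eq. 9
        if v >= v_peak:        # Eq. 10: reset only; there is no recovery variable
            times.append(t)
            v = c
        t += dt
    return times

# Hypothetical candidate shaped like the Izhikevich polynomial term. I = 20 is
# an assumed current, large enough that v' = p(v) + I has no fixed point.
print(len(simulate(lambda v: 0.04 * v**2 + 5.0 * v + 140.0)))
```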
A. GEP implementation

As a first step, we set up scenarios according to the number of genes, the head size of the genes, and the mutation probability. For the first five scenarios, we configure the individuals with one large gene; this creates a mathematical expression with one to five terms. For the last five scenarios, we configure the individuals with three short genes. Here, we want to find a mathematical expression similar to the polynomial term of the first differential equation of the Izhikevich model. We expect that the number of genes does not drastically affect the ability of the algorithm to find good solutions, since the size of the individuals is almost the same. As described in Section II-B, the GEP reproduction process can apply many variation operators to each individual during the same generation; this would considerably increase the number of scenarios needed to find the best settings for GEP. Because of this, we adopted the variation-operator values used in some examples of GEP problem solving documented in [2]. We only vary the mutation rates, in order to show the variation in GEP behavior. Table I summarizes the proposed scenarios.

TABLE I. GEP SCENARIOS (columns: Scenario, Genes, Head Size, Mutation).

There are also other parameters of the GEP algorithm that are kept fixed in all experiments. These parameters are shown in Table II.

TABLE II. GEP PARAMETERS
Number of runs: 30 for each scenario
Population size: 50
One-point recombination rate: 0.8
Gene recombination rate: 0.1
IS transposition rate: 0.1
IS elements length: 1
RIS transposition rate: 0.1
RIS elements length: 1-3
Gene transposition rate: 0.1
Dc-specific IS transposition rate: 0.1
Selection for reproduction: tournament (replacement based)

Finally, in the first experiments the constant values for each chromosome gene are the Izhikevich model constants, namely 0.04, 5.0 and 140. This greatly helps the algorithm to find a curve very similar to the original model. For the remaining experiments, the constants are random values between −1.0 and 1.0. It is noteworthy that we performed 30 tests for each scenario in order to achieve statistically significant results.
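Putting the pieces together, scoring one candidate might look as follows. This is hypothetical glue code reusing the simulate() and gamma_factor() sketches above, with a stand-in reference train rather than the actual class 2 Izhikevich output.

```python
# Usage sketch (hypothetical): gamma fitness of one candidate against a
# stand-in reference train; both trains come from the simulate() sketch above.
reference = simulate(lambda v: 0.04 * v**2 + 5.0 * v + 140.0)
candidate = simulate(lambda v: 0.04 * v**2 + 5.0 * v + 140.0 - 0.04)
print("gamma =", gamma_factor(reference, candidate))
```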

B. Data Sets

The useful information is the number of spikes generated by the reference model and the times at which they are fired, as seen in Sections II-A and III-B. The data we use is thus the spike train generated after stimulating the class 2 Izhikevich model with a constant current I (in nA). The Euler method is used to solve the differential equations of both the reference model and the models proposed by GEP, with dt = 0.1; this corresponds to a time step of 0.1 ms and a sampling frequency of 10 kHz. The spike train is shown in Figure 2. The Izhikevich model itself signals the time at which a spike occurs (when v reaches v_peak), so no extra process needs to be run to identify the spikes. The data set used is therefore a vector in which only the firing times are stored. Thus, evaluating the fitness criterion for any candidate solution involves few operations.

V. RESULTS AND DISCUSSION

This section is divided into three subsections. In the first we show the results using the model proposed for the first experiments, where one term of one of the two differential equations is replaced. In the second, we present a model with one differential equation, in which one term of this equation is replaced. Finally, in the third we describe the results obtained when we replace one term of each equation of the model with two differential equations. For each of the three types of experiments, 30 runs were performed per scenario. The results are presented as follows: at the beginning of every subsection, a table summarizes the fitness of the proposed model, presenting the worst, the best and the average values found in the 30 tests performed for each scenario. A fourth column shows the effectiveness of the algorithm; i.e., if the percentage for a scenario is 33%, this means that GEP found about 10 solutions with the maximum possible fitness. Subsequently, other tables contain the features of two of the best individuals (the best solutions achieved), such as the genotype, the fitness, the constant values used for each gene, and the algebraic expression created. After every solution, a figure with two plots is shown: the plot at the top shows the Izhikevich model spike train (blue solid curve) and the new model created by GEP (red dotted curve) after being stimulated with a constant current I; the bottom plot shows the GEP evolutionary behavior, where the red and blue curves show the best and worst individual fitness, respectively, during the evolutionary process.

A. First experiments

The following results correspond to the model in which Eqs. 6, 7 and 8 are involved. First, the summary of the best individuals of each scenario is presented in Table III.

TABLE III. FIRST EXPERIMENTS FITNESS SUMMARY (columns: Scenario, Worst, Best, Average, Effectiveness).

It was found that the number of genes in an individual does not considerably affect the finding of good solutions. However, as in other evolutionary techniques, the mutation rate affects the performance of the algorithm. For this particular work, a lower value does not produce good results in most cases, but only occasionally. We also observe that the best results belong to the 10th scenario, with an average fitness value of 0.835 and good solutions produced with a probability of 40%. We now describe two of the best solutions obtained with the 10th scenario. Table IV contains the first best solution.

TABLE IV. 1ST GEP BEST SOLUTION (1ST EXPERIMENTS)
Gene 1: ?v v v vv?v??v32332
Gene 2: /?v??v?????32333
Gene 3: +??vvvvvv3223
Constants, gene 1: [0.04, 5.0, 140]
Constants, gene 2: [0.04, 5.0, 140]
Constants, gene 3: [0.04, 5.0, 140]
Arithmetic expression: 0.04v² + 5v + 140 − 0.04
Note that the algebraic expression created is very similar to the polynomial term of the first differential equation of the Izhikevich model. This equation generates the spike train shown in Figure 4 when the model is stimulated with a constant current.

Fig. 4. Spike train of the 1st GEP best solution (1st experiments).

The number of spikes is the same for both models, and the firings are generated at almost the same times, with a minimal variation given by subtracting 0.04 from the quadratic term of the polynomial. This result is achieved after evolving about 30 generations. Table V presents another best solution found with GEP.

TABLE V. 2ND GEP BEST SOLUTION (1ST EXPERIMENTS)
Gene 1: /v? +vv?v???2223
Gene 2: /?/???vv??v?33333
Gene 3: / +??v?????22223
Constants, gene 1: [0.04, 5.0, 140]
Constants, gene 2: [0.04, 5.0, 140]
Constants, gene 3: [0.04, 5.0, 140]
Arithmetic expression: v/0.2 + (v/5)²

Here, a different mathematical expression of the polynomial term is created. However, it is an equivalent form of the polynomial term of Eq. 2, since (v/5)² = 0.04v² and v/0.2 = 5v.
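The equivalence is easy to confirm numerically; the following quick check (ours, purely illustrative) verifies both identities over a few voltages.

```python
# Quick numeric check of the equivalence noted above:
# (v/5)^2 = 0.04*v^2 and v/0.2 = 5*v, so the two forms match term by term.
for v in (-80.0, -65.0, -40.0, 0.0, 25.0):
    assert abs((v / 5.0) ** 2 - 0.04 * v**2) < 1e-9
    assert abs(v / 0.2 - 5.0 * v) < 1e-9
print("both identities hold on all test points")
```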

The plots in Figure 5 show the spike trains of the models and the evolutionary behavior of the proposed one.

Fig. 5. 2nd GEP best solution (1st experiments).

Once again, the GEP algorithm produced a good solution, with the highest possible fitness value, in fewer than 50 generations. The number of spikes and the firing times are almost the same. These two results show that GEP can replicate the equation and the Izhikevich model curve with considerable accuracy and in relatively few generations. The algorithm was greatly helped, since we defined the constant values and only substituted the polynomial term of Eq. 2. The following section presents results obtained without any intervention other than the number of constants and the settings shown in Tables I and II.

B. Second experiments

These results belong to the experiments performed with the second proposed model, in which the neuron membrane potential is given by Eqs. 9 and 10, without the recovery variable u. Before presenting these results, it is important to remember that the similarity between the behaviors of two spiking neuron models is measured by the spike rates and the firing times. This criterion is adopted since the information is stored in the spike trains, as mentioned in Section II-A. As in the previous experiments, many tests were performed; therefore, only two of the best solutions are presented. First, Table VI summarizes the fitness values achieved in all scenarios.

TABLE VI. SECOND EXPERIMENTS FITNESS SUMMARY (columns: Scenario, Worst, Best, Average, Effectiveness).

It is evident that the number of genes and the mutation rate have a greater impact than in the first proposed model. After the 30 tests performed, the 10th scenario found 16 good solutions, which represents about 53% effectiveness. The features of one of these solutions are presented in Table VII.

TABLE VII. 1ST GEP BEST SOLUTION (2ND EXPERIMENTS)
Gene 1: ? /v?v??vv
Gene 2: ///vvv??v?v?222
Gene 3: +?v v????v?2222
Constants, gene 1: [−0.5794, ]
Constants, gene 2: [−0.427, 0.5935, ]
Constants, gene 3: [−0.3472, 0.252, ]
Algebraic expression: v + 0.26v

The algebraic expression created is completely different from the polynomial term of Eq. 2. As shown, the constants of each gene are random values between −1.0 and 1.0. The plot produced by solving the differential equation with the Euler method is compared with the Izhikevich model in Figure 6.

Fig. 6. 1st GEP best solution (2nd experiments).

It is observed that the plot generated by GEP is different from the one generated by the Izhikevich model. However, what matters is that the number of spikes is the same in both models, and they fire at almost the same times, within a difference of δ = 2 ms. With regard to the GEP behavior, the solution was found at approximately generation 30, similar to the first solution presented in the previous section. A second solution is shown in Table VIII.

TABLE VIII. 2ND GEP BEST SOLUTION (2ND EXPERIMENTS)
Gene 1: //?//v??v?vv22
Gene 2: / v/vv??v?vv2
Gene 3: +?? + v?v???v?222
Constants, gene 1: [−0.253, −0.966, 0.3532]
Constants, gene 2: [−0.6388, −0.225, 0.92]
Constants, gene 3: [0.945, 0.888, −0.555]
Algebraic expression: v ( ) v 0.5263

A different individual representation and a new algebraic expression are created. The second-order polynomial is similar to the polynomial term in Eq. 2. The model and the behavior of the GEP algorithm are shown in Figure 7. Here, a different plot is generated, but the solution meets the criteria already mentioned. This time, the algorithm takes more generations to reach the best solution, although fewer than 250.

Fig. 7. 2nd GEP best solution (2nd experiments).

As we can see, none of the plots showing the spike trains of the new models has the same shape as the Izhikevich model. We presume that this is because the proposed model does not contain a differential equation that provides negative feedback to v, as u does in Eq. 2. As has been mentioned, the most important point is that the spike rates and the firing times are generated very close to those of the reference model.

C. Third experiments

In experiments one and two, we substituted only one of the equations of the Izhikevich model. In this third type of experiment, we replace both differential equations of the model, making GEP find the expression values for p and q as shown in Eqs. 11 and 12. Because one more equation is added, the test scenario is modified. Here, an 11th scenario is proposed, in which the individual is formed of 5 genes: three to model the first expression and two for the second. We also add two more constants for each gene, for a total of 5 constants per gene, with values between −1.0 and 1.0. In the previous experiments, we observed that the 10th scenario achieved the best fitness values and was the most efficient; therefore, genes with a head size of six and a mutation rate of 2% are used. Table IX shows the fitness summary.

TABLE IX. THIRD EXPERIMENTS FITNESS SUMMARY (columns: Scenario, Worst, Best, Average, Effectiveness).

We observe that the highest fitness value is achieved and, although the average is a little lower than in the second experiments, more good solutions were achieved. This means about 57% effectiveness, 4% higher than in the second experiments. In Table X we show the features of one of the best individuals achieved. Algebraic expressions 1 and 2 correspond to the variables p and q, respectively, of the proposed model. When we substitute these expressions into Eqs. 11 and 12 and solve with Euler, we get the plot shown in Figure 8.

TABLE X. 1ST GEP BEST SOLUTION (3RD EXPERIMENTS)
Gene 1: / uvu?vvvu3244
Gene 2: uv?uvuvuvu?43244
Gene 3: ? + / +?uv?v??43442
Gene 4: u /u/?u????u33
Gene 5: v uu v??uvv?223
Constants, gene 1: [0.7, 0.29, 0.388, −0.598, 0.4]
Constants, gene 2: [ , −0.46, 0.5, −0.37]
Constants, gene 3: [−0.993, −0.263, −0.348, 0.958, 0.23]
Constants, gene 4: [−0.879, −0.85, 0.283, −0.69, −0.38]
Constants, gene 5: [0.43, 0.837, 0.669, −0.2, 0.73]
Algebraic expression 1: uv + (0.66v)
Algebraic expression 2: u + v

Fig. 8. 1st GEP best solution (3rd experiments).

We consider this result a good solution, since the number of spikes is the same and the firings are generated at almost the same times in both the Izhikevich model and the proposed model. The maximum fitness value was found before 30 generations, similar to the previous experiments. Next, a second solution is presented in Table XI. There, two new, different expressions are created, which generate the plot in Figure 9.

TABLE XI. 2ND GEP BEST SOLUTION (3RD EXPERIMENTS)
Gene 1: +?vvuuvu?
Gene 2: +? + uu/v?vv?vv3342
Gene 3: /? + / uv?u?vv?34432
Gene 4: u u????uu???u3342
Gene 5: +?? /vu?u?uv22424
Constants, gene 1: [0.598, −0.249, −0.273, −0.2, −0.274]
Constants, gene 2: [0.8, −0.47, 0.276, 0.35, −0.849]
Constants, gene 3: [0.58, −0.855, 0.6, 0.9, 0.23]
Constants, gene 4: [0.545, 0.23, −0.479, 0.522, 0.565]
Constants, gene 5: [0.597, 0.779, −0.54, 0.39, 0.246]
Algebraic expression 1: 0.747(v² + (uv)) + 0.233( u) + u + (0.97 u)v
Algebraic expression 2: u 0.297

Similar to the previous result, the number of spikes and the firing times met the similarity criterion given by the gamma factor. GEP again found the maximum fitness value before 30 generations.
These two experiments did not replicate the Izhikevich model signal, since GEP could not find an adequate recovery variable providing negative feedback for v. This means that the negative feedback u is closely linked to the variables a and b in Eq. 3.
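To illustrate how tightly the feedback term is tied to a and b, the sketch below (ours, with stand-in expressions rather than the evolved ones) integrates the third-experiment structure of Eqs. 11-13. The stand-in q is chosen so that u' = v − q collapses exactly to Eq. 3 with a = 0.2 and b = 0.26, showing that this form can in principle express the required negative feedback.

```python
# Minimal sketch (illustrative stand-ins): the third-experiment structure,
# v' = p - u + I (Eq. 11), u' = v - q (Eq. 12), with the reset of Eq. 13.
def simulate_pair(p, q, I=10.0, dt=0.1, T=1000.0,
                  v_peak=30.0, c=-65.0, d=0.0):
    v, u, times, t = c, 0.0, [], 0.0
    while t < T:
        v += dt * (p(v, u) - u + I)   # Eq. 11
        u += dt * (v - q(v, u))       # Eq. 12
        if v >= v_peak:               # Eq. 13: reset both variables
            times.append(t)
            v, u = c, u + d
        t += dt
    return times

# Stand-ins (assumed): p is the Izhikevich polynomial; q makes u' = v - q
# reduce to u' = 0.2*(0.26*v - u), i.e. Eq. 3 with a = 0.2 and b = 0.26.
spikes = simulate_pair(lambda v, u: 0.04 * v**2 + 5.0 * v + 140.0,
                       lambda v, u: v - 0.2 * (0.26 * v - u))
print(len(spikes), "spikes")
```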

Fig. 9. 2nd GEP best solution (3rd experiments).

To end this results section, we can say that the GEP algorithm is able to produce good solutions according to the similarity criterion given by the gamma function with an error window δ = 2 ms. GEP can also create new models with the maximum possible fitness value in fewer than 300 generations, which we consider acceptable for an evolutionary algorithm.

VI. CONCLUSIONS AND FUTURE WORK

The proposed methodology has been demonstrated to be an alternative tool for creating mathematical models that reproduce behaviors similar to those of one of the most versatile spiking neuron models. This offers the possibility of solving more complex problems in neuroscience, such as pyramidal-neuron spike train prediction. GEP chromosomes can be easily modified in every generation, so the success rate depends on the evolutionary time, and this time is affected by the mutation rate. For this particular study, a higher mutation rate produces more efficient solutions than a lower rate. In fact, the best solutions were achieved using double the rate used in other evolutionary computation techniques such as genetic algorithms, where a mutation rate between 0.001 and 0.01 is commonly used. We observed that single and more complex differential-equation systems give similar results: both types of system can generate spikes at the same firing times as the reference model. In future work we will therefore focus on generating only single systems and on adapting the GEP algorithm to solve specific computational problems such as pattern recognition. Some spiking neuron models have been proved to solve different linear and non-linear pattern recognition problems [4] [10] [12]. This methodology could create spiking neuron models adaptable to a specific pattern recognition problem; in other words, the fitness function of the methodology may be substituted by one that meets the criteria of this type of problem. At present, we are working on developing a quantitative neuron model with the goal of predicting the timing of output spikes using electrophysiological recordings of pyramidal neurons with the suggested methodology. This work could help neuroscientists to study with more realism the behavior of particular types of neurons, since new models would be created.

ACKNOWLEDGMENTS

The authors thank CONACYT and CONACYT-INEGI for support through project codes 3273 and , respectively, and Universidad La Salle for the economic support under grant I-6/2.

REFERENCES

[1] International Neuroinformatics Coordinating Facility (INCF). The quantitative single-neuron modeling competition, 2009.
[2] C. Ferreira. Gene expression programming: a new adaptive algorithm for solving problems. Complex Systems, 13(2):87-129, 2001.
[3] C. Ferreira. Gene Expression Programming: Mathematical Modeling by an Artificial Intelligence. Springer, 2nd edition, May 2006.
[4] A. C. Guillén. Ajuste de Modelos Neuronales de Tercera Generación para el Reconocimiento de Patrones: Análisis de Rendimiento y Comparativa. Master's thesis, Universidad La Salle, México D.F., 2012.
[5] E. M. Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569-1572, Nov. 2003.
[6] E. M. Izhikevich. Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5):1063-1070, Sept. 2004.
[7] R. Jolivet, F. Schürmann, T. K. Berger, R. Naud, W. Gerstner, and A. Roth. The quantitative single-neuron modeling competition. Biological Cybernetics, 99(4-5):417-426, Nov. 2008.
[8] W. Maass. Networks of spiking neurons: the third generation of neural network models. Transactions of the Society for Computer Simulation International, 14(4):1659-1671, Dec. 1997.
[9] C. Rossant, D. F. Goodman, J. Platkiewicz, and R. Brette. Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics, 4, 2010.
[10] R. Vázquez. Izhikevich neuron model and its application in pattern recognition. Australian Journal of Intelligent Information Processing Systems, 11(1), 2010.
[11] R. Vázquez. Pattern recognition using spiking neurons and firing rates. In A. Kuri-Morales and G. Simari, editors, Advances in Artificial Intelligence - IBERAMIA 2010, volume 6433 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2010.
[12] R. A. Vázquez and A. Cachon. Integrate and fire neurons and their application in pattern recognition. In CCE. IEEE, 2010.
[13] J. Vreeken. Spiking Neural Networks, an Introduction. Technical report, Utrecht University, 2003.


More information

Using Variable Threshold to Increase Capacity in a Feedback Neural Network

Using Variable Threshold to Increase Capacity in a Feedback Neural Network Using Variable Threshold to Increase Capacity in a Feedback Neural Network Praveen Kuruvada Abstract: The article presents new results on the use of variable thresholds to increase the capacity of a feedback

More information

Sorting Network Development Using Cellular Automata

Sorting Network Development Using Cellular Automata Sorting Network Development Using Cellular Automata Michal Bidlo, Zdenek Vasicek, and Karel Slany Brno University of Technology, Faculty of Information Technology Božetěchova 2, 61266 Brno, Czech republic

More information

Fast neural network simulations with population density methods

Fast neural network simulations with population density methods Fast neural network simulations with population density methods Duane Q. Nykamp a,1 Daniel Tranchina b,a,c,2 a Courant Institute of Mathematical Science b Department of Biology c Center for Neural Science

More information

INTRODUCTION TO NEURAL NETWORKS

INTRODUCTION TO NEURAL NETWORKS INTRODUCTION TO NEURAL NETWORKS R. Beale & T.Jackson: Neural Computing, an Introduction. Adam Hilger Ed., Bristol, Philadelphia and New York, 990. THE STRUCTURE OF THE BRAIN The brain consists of about

More information

Artificial Neural Networks The Introduction

Artificial Neural Networks The Introduction Artificial Neural Networks The Introduction 01001110 01100101 01110101 01110010 01101111 01101110 01101111 01110110 01100001 00100000 01110011 01101011 01110101 01110000 01101001 01101110 01100001 00100000

More information

I N N O V A T I O N L E C T U R E S (I N N O l E C) Petr Kuzmič, Ph.D. BioKin, Ltd. WATERTOWN, MASSACHUSETTS, U.S.A.

I N N O V A T I O N L E C T U R E S (I N N O l E C) Petr Kuzmič, Ph.D. BioKin, Ltd. WATERTOWN, MASSACHUSETTS, U.S.A. I N N O V A T I O N L E C T U R E S (I N N O l E C) Binding and Kinetics for Experimental Biologists Lecture 2 Evolutionary Computing: Initial Estimate Problem Petr Kuzmič, Ph.D. BioKin, Ltd. WATERTOWN,

More information

Comparing integrate-and-fire models estimated using intracellular and extracellular data 1

Comparing integrate-and-fire models estimated using intracellular and extracellular data 1 Comparing integrate-and-fire models estimated using intracellular and extracellular data 1 Liam Paninski a,b,2 Jonathan Pillow b Eero Simoncelli b a Gatsby Computational Neuroscience Unit, University College

More information

A gradient descent rule for spiking neurons emitting multiple spikes

A gradient descent rule for spiking neurons emitting multiple spikes A gradient descent rule for spiking neurons emitting multiple spikes Olaf Booij a, Hieu tat Nguyen a a Intelligent Sensory Information Systems, University of Amsterdam, Faculty of Science, Kruislaan 403,

More information

Artificial Neural Networks Examination, June 2005

Artificial Neural Networks Examination, June 2005 Artificial Neural Networks Examination, June 2005 Instructions There are SIXTY questions. (The pass mark is 30 out of 60). For each question, please select a maximum of ONE of the given answers (either

More information

Bursting and Chaotic Activities in the Nonlinear Dynamics of FitzHugh-Rinzel Neuron Model

Bursting and Chaotic Activities in the Nonlinear Dynamics of FitzHugh-Rinzel Neuron Model Bursting and Chaotic Activities in the Nonlinear Dynamics of FitzHugh-Rinzel Neuron Model Abhishek Yadav *#, Anurag Kumar Swami *, Ajay Srivastava * * Department of Electrical Engineering, College of Technology,

More information

All-or-None Principle and Weakness of Hodgkin-Huxley Mathematical Model

All-or-None Principle and Weakness of Hodgkin-Huxley Mathematical Model All-or-None Principle and Weakness of Hodgkin-Huxley Mathematical Model S. A. Sadegh Zadeh, C. Kambhampati International Science Index, Mathematical and Computational Sciences waset.org/publication/10008281

More information

Evolutionary computation

Evolutionary computation Evolutionary computation Andrea Roli andrea.roli@unibo.it Dept. of Computer Science and Engineering (DISI) Campus of Cesena Alma Mater Studiorum Università di Bologna Outline 1 Basic principles 2 Genetic

More information

A FINITE STATE AUTOMATON MODEL FOR MULTI-NEURON SIMULATIONS

A FINITE STATE AUTOMATON MODEL FOR MULTI-NEURON SIMULATIONS A FINITE STATE AUTOMATON MODEL FOR MULTI-NEURON SIMULATIONS Maria Schilstra, Alistair Rust, Rod Adams and Hamid Bolouri Science and Technology Research Centre, University of Hertfordshire, UK Department

More information

NUMERICAL SOLUTION FOR FREDHOLM FIRST KIND INTEGRAL EQUATIONS OCCURRING IN SYNTHESIS OF ELECTROMAGNETIC FIELDS

NUMERICAL SOLUTION FOR FREDHOLM FIRST KIND INTEGRAL EQUATIONS OCCURRING IN SYNTHESIS OF ELECTROMAGNETIC FIELDS GENERAL PHYSICS EM FIELDS NUMERICAL SOLUTION FOR FREDHOLM FIRST KIND INTEGRAL EQUATIONS OCCURRING IN SYNTHESIS OF ELECTROMAGNETIC FIELDS ELENA BÃUTU, ELENA PELICAN Ovidius University, Constanta, 9527,

More information

[Read Chapter 9] [Exercises 9.1, 9.2, 9.3, 9.4]

[Read Chapter 9] [Exercises 9.1, 9.2, 9.3, 9.4] 1 EVOLUTIONARY ALGORITHMS [Read Chapter 9] [Exercises 9.1, 9.2, 9.3, 9.4] Evolutionary computation Prototypical GA An example: GABIL Schema theorem Genetic Programming Individual learning and population

More information

Supporting Online Material for

Supporting Online Material for www.sciencemag.org/cgi/content/full/319/5869/1543/dc1 Supporting Online Material for Synaptic Theory of Working Memory Gianluigi Mongillo, Omri Barak, Misha Tsodyks* *To whom correspondence should be addressed.

More information

Instituto Tecnológico y de Estudios Superiores de Occidente Departamento de Electrónica, Sistemas e Informática. Introductory Notes on Neural Networks

Instituto Tecnológico y de Estudios Superiores de Occidente Departamento de Electrónica, Sistemas e Informática. Introductory Notes on Neural Networks Introductory Notes on Neural Networs Dr. José Ernesto Rayas Sánche April Introductory Notes on Neural Networs Dr. José Ernesto Rayas Sánche BIOLOGICAL NEURAL NETWORKS The brain can be seen as a highly

More information

Overview Organization: Central Nervous System (CNS) Peripheral Nervous System (PNS) innervate Divisions: a. Afferent

Overview Organization: Central Nervous System (CNS) Peripheral Nervous System (PNS) innervate Divisions: a. Afferent Overview Organization: Central Nervous System (CNS) Brain and spinal cord receives and processes information. Peripheral Nervous System (PNS) Nerve cells that link CNS with organs throughout the body.

More information

Ch. 5. Membrane Potentials and Action Potentials

Ch. 5. Membrane Potentials and Action Potentials Ch. 5. Membrane Potentials and Action Potentials Basic Physics of Membrane Potentials Nerve and muscle cells: Excitable Capable of generating rapidly changing electrochemical impulses at their membranes

More information

Lecture 4: Feed Forward Neural Networks

Lecture 4: Feed Forward Neural Networks Lecture 4: Feed Forward Neural Networks Dr. Roman V Belavkin Middlesex University BIS4435 Biological neurons and the brain A Model of A Single Neuron Neurons as data-driven models Neural Networks Training

More information

Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data

Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data Juan Torres 1, Ashraf Saad 2, Elliot Moore 1 1 School of Electrical and Computer

More information

Neuron. Detector Model. Understanding Neural Components in Detector Model. Detector vs. Computer. Detector. Neuron. output. axon

Neuron. Detector Model. Understanding Neural Components in Detector Model. Detector vs. Computer. Detector. Neuron. output. axon Neuron Detector Model 1 The detector model. 2 Biological properties of the neuron. 3 The computational unit. Each neuron is detecting some set of conditions (e.g., smoke detector). Representation is what

More information

Application of density estimation methods to quantal analysis

Application of density estimation methods to quantal analysis Application of density estimation methods to quantal analysis Koichi Yoshioka Tokyo Medical and Dental University Summary There has been controversy for the quantal nature of neurotransmission of mammalian

More information

NE 204 mini-syllabus (weeks 4 8)

NE 204 mini-syllabus (weeks 4 8) NE 24 mini-syllabus (weeks 4 8) Instructor: John Burke O ce: MCS 238 e-mail: jb@math.bu.edu o ce hours: by appointment Overview: For the next few weeks, we will focus on mathematical models of single neurons.

More information

biologically-inspired computing lecture 18

biologically-inspired computing lecture 18 Informatics -inspired lecture 18 Sections I485/H400 course outlook Assignments: 35% Students will complete 4/5 assignments based on algorithms presented in class Lab meets in I1 (West) 109 on Lab Wednesdays

More information

Estimating the Selectivity of tf-idf based Cosine Similarity Predicates

Estimating the Selectivity of tf-idf based Cosine Similarity Predicates Estimating the Selectivity of tf-idf based Cosine Similarity Predicates Sandeep Tata Jignesh M. Patel Department of Electrical Engineering and Computer Science University of Michigan 22 Hayward Street,

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

Compartmental Modelling

Compartmental Modelling Modelling Neurons Computing and the Brain Compartmental Modelling Spring 2010 2 1 Equivalent Electrical Circuit A patch of membrane is equivalent to an electrical circuit This circuit can be described

More information

Computational Explorations in Cognitive Neuroscience Chapter 2

Computational Explorations in Cognitive Neuroscience Chapter 2 Computational Explorations in Cognitive Neuroscience Chapter 2 2.4 The Electrophysiology of the Neuron Some basic principles of electricity are useful for understanding the function of neurons. This is

More information

Bayesian Modeling and Classification of Neural Signals

Bayesian Modeling and Classification of Neural Signals Bayesian Modeling and Classification of Neural Signals Michael S. Lewicki Computation and Neural Systems Program California Institute of Technology 216-76 Pasadena, CA 91125 lewickiocns.caltech.edu Abstract

More information

Neuronal Dynamics: Computational Neuroscience of Single Neurons

Neuronal Dynamics: Computational Neuroscience of Single Neurons Week 4 part 5: Nonlinear Integrate-and-Fire Model 4.1 From Hodgkin-Huxley to 2D Neuronal Dynamics: Computational Neuroscience of Single Neurons Week 4 Recing detail: Two-dimensional neuron models Wulfram

More information

CMSC 421: Neural Computation. Applications of Neural Networks

CMSC 421: Neural Computation. Applications of Neural Networks CMSC 42: Neural Computation definition synonyms neural networks artificial neural networks neural modeling connectionist models parallel distributed processing AI perspective Applications of Neural Networks

More information

Evolutionary Computation

Evolutionary Computation Evolutionary Computation - Computational procedures patterned after biological evolution. - Search procedure that probabilistically applies search operators to set of points in the search space. - Lamarck

More information

A Three-dimensional Physiologically Realistic Model of the Retina

A Three-dimensional Physiologically Realistic Model of the Retina A Three-dimensional Physiologically Realistic Model of the Retina Michael Tadross, Cameron Whitehouse, Melissa Hornstein, Vicky Eng and Evangelia Micheli-Tzanakou Department of Biomedical Engineering 617

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

MEMBRANE POTENTIALS AND ACTION POTENTIALS:

MEMBRANE POTENTIALS AND ACTION POTENTIALS: University of Jordan Faculty of Medicine Department of Physiology & Biochemistry Medical students, 2017/2018 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Review: Membrane physiology

More information

CSC 4510 Machine Learning

CSC 4510 Machine Learning 10: Gene(c Algorithms CSC 4510 Machine Learning Dr. Mary Angela Papalaskari Department of CompuBng Sciences Villanova University Course website: www.csc.villanova.edu/~map/4510/ Slides of this presenta(on

More information