Spiking Neuron Model Approximation using GEP


2013 IEEE Congress on Evolutionary Computation, June 20-23, 2013, Cancún, México

Spiking Neuron Model Approximation using GEP

Josafath I. Espinosa-Ramos and Nareli Cruz Cortés
Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City
Email: vjier@prodigy.net.mx, nareli@cic.ipn.mx

Roberto A. Vázquez
Grupo de Sistemas Inteligentes, Facultad de Ingeniería, Universidad La Salle, Mexico City
Email: ravem@lasallistas.org.mx

Abstract—Spiking neuron models can accurately predict the spike trains produced by cortical neurons in response to somatically injected electric currents. Since the specific model characteristics depend on the neuron, a computational method is required to fit models to electrophysiological recordings. However, models only work within defined limits, and it is possible that they apply only to the example presented. Moreover, there is no established methodology for fitting the models; in fact, the fitting procedure can be very time consuming, both in terms of computer simulations and of code writing. In this paper a first effort is presented not to fit models, but to create a methodology that generates neuron models automatically. We propose to use Gene Expression Programming to create mathematical expressions that replicate the behavior of a state-of-the-art neuron model. We show how this strategy is feasible for solving more complex problems and provides the basis for finding new models, which could be applied in a wide range of areas, from computational neuroscience (e.g., pyramidal neuron spike train prediction) to artificial intelligence (e.g., pattern recognition problems).

I. INTRODUCTION

Spiking neuron models have been applied in a wide range of areas in the field of computational neuroscience, such as brain region modeling, auditory processing, visual processing, robotics, pattern recognition and so on. Many spiking neuron models have been proposed, but choosing one of them is a difficult question. The answer depends on the type of problem: electrophysiologists generally prefer biophysical models, being familiar with the notion of ion channels that open and close (and hence alter neuronal activity) depending on environmental conditions. Theoreticians, by contrast, typically prefer simple neuron models with few parameters that are amenable to mathematical analysis.

The model proposed by Izhikevich [5] was developed to understand the fine temporal structure of cortical spike trains, and to use spike timing as an additional variable for understanding how the mammalian neocortex processes information. This model can exhibit the 20 neurocomputational properties of biological neurons summarized in [6]. There, Izhikevich also discusses the biological plausibility and computational efficiency of some of the most useful spiking and bursting neuron models, and compares their applicability to large-scale simulations of cortical neural networks.

Recently, several research groups have approached this question by assessing the quality of neuron models with respect to spike timing prediction or characteristic features of the voltage trace. In 2009, following previous attempts at model comparison on a smaller scale, the International Neuroinformatics Coordinating Facility (INCF) launched an international competition [1] permitting a quantitative comparison of neuron models. The idea behind the INCF competition is that a good model can predict neuronal activity based on data (electrophysiological recordings) that were not used for parameter tuning.
In [9], the authors use spiking models that can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific model characteristics depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The problem with many spiking neuron models is that they only work within the limits defined, and it is possible that they apply only to the example presented. The ideal, then, would not be to fit existing models, but to construct a model for each kind of neuron. This, however, is almost impossible, at least in an analytic way.

This paper is a first effort to find and implement a methodology for automatically creating spiking neuron models using a state-of-the-art evolutionary computation strategy. This methodology will help us find mathematical equations that describe the behavior of biological neurons, such as pyramidal neurons (located in the cortex of the brain), which are involved in cognitive functions, or mathematical models with spiking behavior that can be used to solve pattern recognition problems.

Since the Izhikevich model exhibits the 20 most prominent neurocomputational properties of biological neurons [6], we adopted it as a reference model for heuristically creating mathematical expressions for new spiking neuron models. It is worth mentioning that we do not intend to find an equation that exactly reproduces the signal produced by the Izhikevich model after being stimulated, but to create mathematical models that generate spikes at almost the same firing times. We can therefore address the problem as a symbolic regression, for which several evolutionary computation strategies have been successful. Among the most popular is Genetic Programming (GP). However, one of the main problems of this technique is the difficulty of combining simplicity and expressive power in the individual representation [2]. If the representation is simple to manipulate genetically, it loses functional complexity and is not suitable for solving certain problems. If the representation allows great functional complexity, it is difficult to make it evolve toward accurate solutions. To solve this dilemma, several alternatives have been proposed, among which Gene Expression Programming (GEP) [2] stands out, as it proposes a representation that attempts to combine simplicity with expressive power: a model that is simple to evolve, but which can represent complex structures.

The remainder of the article is organized as follows: Section 2 introduces background research and Section 3 presents the proposed methodology. Section 4 describes implementation details. In Section 5 some results are shown. Finally, conclusions and future work are presented.

II. BACKGROUND

A. Spiking Neurons

Biological neurons communicate by generating action potentials that are transmitted to other neurons in the network. Action potentials are generated in response to transmembrane currents elicited by presynaptic activation of various receptor types. These action potentials alter the membrane voltage so that it crosses a threshold value; the neuron spikes, goes into a refractory state, and shows the typical forms of excitatory and inhibitory postsynaptic potentials over time.

Spiking neuron models try to simulate the behavior of a biological neuron when it is stimulated with an electric current through a synaptic channel and a spike train is generated. This allows spatio-temporal information to be incorporated in communication and computation, as real neurons do. Instead of using rate coding (the output value) as classical artificial neural networks do, these neuron models use pulse coding (the number of spikes): mechanisms whereby neurons receive and send out individual pulses, allowing information such as the frequency and amplitude of sound to be multiplexed [3]. These models constitute the computational unit of the third generation of artificial neural networks (see [8] for a classification). It has been proved that spiking neurons can solve linear and non-linear pattern recognition problems [11], [12]. That is, given a set of input patterns belonging to k classes, each input pattern is transformed into an input signal. The spiking neuron is then stimulated during T ms and a spike train is generated. It is expected that input patterns belonging to the same class generate almost the same firing rates, and that input patterns belonging to different classes generate firing rates different enough to discriminate among the classes.

B. Gene Expression Programming

Gene Expression Programming (GEP) is an evolutionary algorithm that automatically creates computer programs. These computer programs can take many forms: conventional mathematical models, neural networks, decision trees, sophisticated nonlinear models, logistic nonlinear regressors, nonlinear classifiers, complex polynomial structures, logic circuits and expressions, and so on. Irrespective of their complexity, all GEP programs are encoded in very simple linear structures called chromosomes [2].

As in other evolutionary computation techniques, GEP is a population-based algorithm in which a set of chromosomes (also known as individuals or candidate solutions) is reproduced using a selection criterion; the offspring is then mutated and a new population is created. Finally, a selection method is used to choose the best individuals, which continue in the evolutionary process until a stop criterion is reached. The evolutionary process searches for better and better solutions as it tries to solve a particular problem. GEP is distinguished from other evolutionary strategies by the representation of individuals and the way they are reproduced.
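The population loop just described can be summarized as in the following sketch. It is only illustrative and is not tied to the authors' implementation; random_chromosome, fitness, select, recombine and mutate are hypothetical callables standing in for the GEP operators, and the default parameter values are arbitrary.

```python
# Generic GEP-style population loop (illustrative sketch, not the authors' code).
def evolve(random_chromosome, fitness, select, recombine, mutate,
           pop_size=50, generations=300, mutation_rate=0.2):
    population = [random_chromosome() for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        parents = select(population, fitness)              # e.g. tournament selection
        offspring = [mutate(c, mutation_rate) for c in recombine(parents)]
        population = offspring + [best]                    # keep the best-so-far (elitism)
        best = max(population, key=fitness)
        if fitness(best) >= 1.0:                           # e.g. gamma = 1 is a perfect match
            break
    return best
```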
GEP individuals have a dual codification: the genotype is organized as a string and the phenotype as an expression tree. The items that appear in these individuals are:

- a function set, made up of functions that receive some parameters and can appear only in non-terminal nodes of the syntax tree. The number of parameters each function receives determines the arity of the corresponding node.
- a terminal set, the set of elements that can appear only on the leaves of the tree. This set contains both constant values and the input parameters received by the tree.

Each gene that constitutes the genotype is divided into two parts: head and tail. The head size of the genes is chosen a priori for the problem, but the tail size is determined by the following expression:

t = h(a − 1) + 1    (1)

where t is the tail size, h the head size and a the maximum arity present in non-terminal nodes. The head can contain both functions and terminal-set elements, but the tail can contain only terminal-set elements. The purpose of these limitations is to allow any gene to be transformed into a valid syntax tree. The format in which a syntax tree is written in the genotype string is called a K-expression, and it determines the phenotype generated from the genotype. The way to build a valid tree from a K-expression is to fill the tree level by level.

Consider, for example, the algebraic expression √((a + b)(c − d)), which can be represented as the diagram in Figure 1.

Fig. 1. Syntax tree of a simple expression.

This kind of diagram is in fact the phenotype of GEP individuals, the genotype being easily inferred from the phenotype as follows:

0 1 2 3 4 5 6 7
Q * + - a b c d

which is the straightforward reading of the diagram from left to right and from top to bottom (Q denotes the square root).

Concerning the reproduction of individuals, GEP considers replication, mutation, inversion, transposition of insertion sequence elements, and recombination strategies (see [3] for details). Many of these reproduction strategies can be executed during a single generation, e.g. gene recombination, transposition of insertion sequence elements and mutation.

III. METHODOLOGY

As mentioned before, we adopted the Izhikevich model in order to reproduce different behaviors while it is stimulated with a constant current.

A. Problem representation

The first goal is to find a polynomial equation in terms of the voltage v which can generate spikes at the same firing rates as one of the behaviors of the Izhikevich model when it is stimulated by a constant current. The Izhikevich model is represented by a system of two differential equations:

v' = 0.04v² + 5v + 140 − u + I    (2)
u' = a(bv − u)    (3)
if v ≥ v_peak then { v ← c, u ← u + d }    (4)

Here, the variable v represents the membrane potential (in mV) and u represents a membrane recovery variable which provides negative feedback to v; v_peak is the maximum potential value of the neuron. The model can exhibit the firing patterns of all known types of cortical neurons with the choice of the parameters a, b, c and d; various choices of the parameters result in various intrinsic firing patterns [5].

As a first approach, we chose the class 2 excitable model [6] as the reference model, since the number of spikes depends on the value of the injected current I, a higher value generating a higher number of spikes. This model fires spikes periodically when it is stimulated with a constant current, as shown in Figure 2. This specific behavior might be the basis for pattern recognition problems using spiking neurons, as described in [4], [11], [12].

Fig. 2. Spike train of the class 2 excitable Izhikevich model.

In order to replicate this spike train response, the model is stimulated with a constant current I, using the following parameter values: a = 0.2, b = 0.26, c = −65.0 and d = 0.

To construct the model, we propose that the representation of GEP individuals be based on the polynomial term of the first differential equation of the Izhikevich model, 0.04v² + 5v + 140. This is a classical second-order polynomial with three terms in which the voltage (variable v) is involved. The individuals must therefore contain at least one variable representing the voltage v. Constants may or may not be included, but in this work we include an array of random constants generated at the beginning of the GEP algorithm. As mentioned in Section II-B, each gene of an individual must contain two parts: head and tail. The head may contain both functions and terminal-set elements, but the tail contains only terminal-set elements. For simplicity, we define the function set as F = {+, −, ∗, /, sq}, sq being the square operator, and the terminal set as T = {v, u, ?}, where v represents the voltage, u the recovery variable and ? a variable that is mapped to the constants array in order to be included in the K-expression.

A good solution depends on the size and number of genes that constitute the individual, since these features limit the search space. The individuals should therefore have either one large gene, or two or more small genes (a multigenic chromosome) joined by a linking function. For simplicity, we join these genes with the add function. In Fig. 3 a candidate solution representation is shown.

Fig. 3. Example of a candidate solution: three genes joined by the add function.
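To make the representation concrete, the following sketch (an illustrative reconstruction, not the authors' implementation) decodes a K-expression over the function set F = {+, −, ∗, /, sq} and the terminal set T = {v, u, ?} into an expression tree filled level by level, and evaluates it; the mapping of each ? to the gene's constants array is simplified here to consumption in order of appearance.

```python
# Illustrative decoding and evaluation of a GEP K-expression (not the authors' code).
# 's' stands for the square operator and 'Q' for the square root used in Fig. 1;
# '?' is mapped here, for simplicity, to the gene constants in order of appearance.
from itertools import cycle

ARITY = {'+': 2, '-': 2, '*': 2, '/': 2, 's': 1, 'Q': 1}

def tail_size(head_size, max_arity=2):
    """Eq. (1): t = h(a - 1) + 1, so any head content yields a valid tree."""
    return head_size * (max_arity - 1) + 1

def eval_kexpression(kexpr, v, u, constants):
    """Build the expression tree level by level and evaluate it at (v, u)."""
    consts = cycle(constants)
    nodes = [{'sym': s, 'children': []} for s in kexpr]
    queue, next_pos = [nodes[0]], 1          # the root is the first symbol
    while queue:                             # breadth-first filling of the tree
        node = queue.pop(0)
        for _ in range(ARITY.get(node['sym'], 0)):
            child = nodes[next_pos]
            next_pos += 1
            node['children'].append(child)
            queue.append(child)

    def value(node):
        s = node['sym']
        if s == 'v':
            return v
        if s == 'u':
            return u
        if s == '?':
            return next(consts)
        a = value(node['children'][0])
        if s == 's':
            return a * a
        if s == 'Q':
            return abs(a) ** 0.5
        b = value(node['children'][1])
        return {'+': a + b, '-': a - b, '*': a * b,
                '/': a / b if b != 0 else 1.0}[s]

    return value(nodes[0])
```

For instance, with a head size of 6 and a maximum arity of 2, tail_size(6) returns 7, so each gene spans 13 symbols before any constants domain is appended; symbols beyond the decoded tree are simply ignored, which is what makes every gene a valid expression.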
B. Fitness Function

Creating a spiking neuron model from electrophysiological data is performed by maximizing a fitness function measuring the adequacy of the model to the data [7]. In this research we apply the gamma factor not to predict neural activity, but to replicate the output of a spiking neuron model over a period of time. The gamma factor is based on the number of coincidences between the model spikes and the experimentally recorded spikes, defined as the number of spikes in the experimental train for which there is at least one spike in the model train within ±δ, where δ is the temporal window size (typically a few ms). The gamma factor is defined by equation 5:

Γ = ( 2 / (1 − 2δ r_exp) ) · ( (N_coinc − 2δ N_exp r_exp) / (N_exp + N_model) )    (5)

where N_coinc is the number of coincidences, N_exp and N_model are the numbers of spikes in the experimental and model spike trains, respectively, and r_exp is the average firing rate of the experimental train. The term 2δ N_exp r_exp is the expected number of coincidences with a Poisson process with the same rate as the experimental spike train, so that Γ = 0 means that the model performs no better than chance. The normalization factor is chosen such that Γ ≤ 1, and Γ = 1 corresponds to a perfect match. The gamma factor depends on the temporal window size parameter δ (it increases with it). We choose δ = ±2 ms to pursue the closest approximation to the real behavior of the neuron, because it is of the same order as the synaptic rise times that can be measured in the soma of cortical pyramidal neurons [7].
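As a concrete illustration, the sketch below (our own reconstruction under stated assumptions, not the authors' code) computes the gamma factor of Eq. 5 from two lists of spike times; the recording duration used to estimate r_exp is passed explicitly and defaults to an assumed 1000 ms.

```python
# Coincidence-based gamma factor of Eq. (5); spike times are given in milliseconds.
import numpy as np

def gamma_factor(model_spikes, exp_spikes, delta=2.0, duration=1000.0):
    """Return the gamma factor between a model and an experimental spike train."""
    model_spikes = np.asarray(model_spikes, dtype=float)
    exp_spikes = np.asarray(exp_spikes, dtype=float)
    n_exp, n_model = len(exp_spikes), len(model_spikes)
    if n_exp == 0 or n_model == 0:
        return 0.0

    # Experimental spikes with at least one model spike within +/- delta
    n_coinc = sum(np.any(np.abs(model_spikes - t) <= delta) for t in exp_spikes)

    r_exp = n_exp / duration                   # average firing rate (spikes per ms)
    expected = 2.0 * delta * n_exp * r_exp     # chance coincidences (Poisson process)
    norm = 1.0 - 2.0 * delta * r_exp           # normalization so that gamma <= 1

    return (2.0 / norm) * (n_coinc - expected) / (n_exp + n_model)
```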

IV. EXPERIMENTAL FRAMEWORK

The reader should keep in mind that we try to find a model which can replicate the spike rates and the firing times. That is why Eq. 5 is adopted as a similarity (or dissimilarity) measure between two spike trains and is used to assess the replication quality of the new models. This factor compares the spike trains produced by the new model with the spike trains generated by the reference model.

The experimental work is divided into three types of experiments. The first is addressed at finding a polynomial term which replaces only the polynomial term of the first differential equation of the Izhikevich model, Eq. 2. This means that we generate the spikes by computing v' using the new polynomial term minus the variable u plus the injected current I:

v' = p − u + I    (6)
u' = a(bv − u)    (7)
if v ≥ v_peak then { v ← c, u ← u + d }    (8)

where p is the polynomial term created by GEP.

The second experiments do not include the second differential equation of the Izhikevich model. The spikes are generated by computing only one differential equation in which the variable v is involved; the model is then defined as follows:

v' = p + I    (9)
if v ≥ v_peak then v ← c    (10)

where p is the polynomial term created by GEP.

The last experiment is formed by generating two algebraic expressions: the first replaces the polynomial term of Eq. 2, as in the first experiments, and the second substitutes the second differential equation of the Izhikevich model given by Eq. 3:

v' = p − u + I    (11)
u' = v − q    (12)
if v ≥ v_peak then { v ← c, u ← u + d }    (13)

where p and q are algebraic expressions created by GEP.

We expect these three experiments to help us ensure that GEP can produce spiking neuron models which are adaptable to solving specific computational problems (e.g. pattern recognition problems). The first experiments greatly help GEP by providing a check experiment, but especially by allowing different models with almost exactly the same behavior to be built. The second experiments are intended to find whether it is possible to substitute a system of two differential equations with a system of one differential equation and obtain almost the same behavior. Finally, the third experiments will help us determine whether GEP can construct more complex differential equation systems. In the following lines we describe the scenarios and the initial GEP algorithm parameters; subsequently, the data sets and some implementation details are presented.

A. GEP implementation

As the first step, we define scenarios according to the number of genes, the gene head size and the mutation probability. For the first five scenarios, we configure the individuals with one large gene; this creates a mathematical expression with up to 15 terms. For the last five scenarios, we configure the individuals with three short genes. Here, we want to find a mathematical expression similar to the polynomial term of the first differential equation of the Izhikevich model. We expect that the number of genes does not drastically affect the ability of the algorithm to find good solutions, since the individual sizes are almost the same.

As described in Section II-B, the GEP reproduction process can execute many variation operators during the same generation for each individual. This would considerably increase the number of scenarios needed to find the best settings for GEP. Because of this, we adopted the variation operator values considering some examples of GEP in problem solving documented in [2]. We vary only the mutation rates, in order to show a variation in the GEP behavior. Table I summarizes the proposed scenarios.

TABLE I. GEP SCENARIOS

Scenario   Genes   Head size   Mutation rate
1          1       15          0.01
2          1       15          0.03
3          1       15          0.05
4          1       15          0.1
5          1       15          0.2
6          3       6           0.01
7          3       6           0.03
8          3       6           0.05
9          3       6           0.1
10         3       6           0.2

Other parameters of the GEP algorithm are kept fixed in all experiments. These parameters are shown in Table II.

TABLE II. GEP PARAMETERS

Parameter                            Value
Number of runs for each scenario     30
Population size                      50
One-point recombination rate         0.8
Gene recombination rate              0.1
IS transposition rate                0.1
IS elements length                   1-3
RIS transposition rate               0.1
RIS elements length                  1-3
Gene transposition rate              0.1
Dc-specific IS transposition rate    0.1
Selection for reproduction           Tournament with replacement

Finally, in the first experiments the constant values for each chromosome gene are the Izhikevich model constants, namely 0.04, 5.0 and 140. This greatly helps the algorithm to find a curve very similar to that of the original model. For the remaining experiments, the constants are random values between −1.0 and 1.0. It is noteworthy that we performed 30 tests for each scenario in order to achieve statistically significant results.
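The following sketch (our own illustration, not the authors' code) shows how a candidate of the first experiments (Eqs. 6-8) can be integrated with the Euler method and reduced to a list of firing times; the injected current I, the duration T, the step dt and v_peak are assumed values chosen for illustration.

```python
# Euler integration of v' = p(v, u) - u + I, u' = a(b*v - u), with spike reset
# (Eqs. 6-8). Illustrative sketch only; I, T, dt and v_peak are assumed values.
def simulate(p, a=0.2, b=0.26, c=-65.0, d=0.0, I=0.5,
             v_peak=30.0, T=1000.0, dt=0.1):
    """Return the firing times (ms) of the model defined by the GEP term p(v, u)."""
    v, u = c, b * c                      # start at rest
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (p(v, u) - u + I)
        u += dt * a * (b * v - u)
        if v >= v_peak:                  # spike: record the time and reset (Eq. 8)
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

# The reference train is obtained with the original polynomial term of Eq. 2:
reference = simulate(lambda v, u: 0.04 * v**2 + 5 * v + 140)
# Any GEP-evolved term can be passed in the same way, e.g. a hypothetical candidate:
candidate = simulate(lambda v, u: 140 + 5 * v + 0.04 * v**2 - 0.4)
```

The two resulting lists of firing times can then be scored against each other with the gamma-factor sketch given after Eq. 5.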

B. Data Sets

The useful information is the number of spikes generated by the reference model and the times at which they are fired, as described in Sections II-A and III-B. The data we use is therefore the spike train generated after stimulating the class 2 Izhikevich model with a constant current I. The Euler method is used to solve the differential equations of both the reference model and the models proposed by GEP, with a time step of 0.1 ms, which corresponds to a sampling frequency of 10 kHz. The resulting spike train is shown in Figure 2. The Izhikevich model detects the time when a spike occurs through the value v_peak, so no additional process has to be run to identify the spikes. The data set used is thus a vector in which only the firing times are stored, and evaluating the fitness criterion for any candidate solution involves few operations.

V. RESULTS AND DISCUSSION

This section is divided into three subsections. In the first we show the results using the model proposed for the first experiments, where one term of one of the two differential equations is replaced. In the second, we present a model with one differential equation in which one term of this equation is replaced. Finally, in the third we describe the results obtained when we replace one term of each equation of the two-differential-equation model. Due to the large number of runs performed for the three types of experiments, the results are presented as follows. At the beginning of every subsection, a table summarizes the fitness of the proposed model, giving the worst, the best and the average found in the 30 tests performed for each scenario. A fourth column shows the effectiveness of the algorithm: if the percentage for a scenario is 33%, it means that GEP found about 10 solutions (out of 30) with the maximum possible fitness. Subsequently, other tables contain the features of two of the best individuals (best solutions achieved), such as the genotype, the fitness, the constant values used for each gene and the algebraic expression created. After every solution, a figure with two plots is shown: the plot at the top shows the spike train of the Izhikevich model (blue solid curve) and of the new model created by GEP (red dotted curve) after being stimulated with a constant current I; the bottom plot shows the GEP evolutionary behavior, the red and blue curves showing the best and worst individual fitness, respectively, during the evolutionary process.

A. First experiments

The following results correspond to the model involving Eqs. 6, 7 and 8. First, the summary of the best individuals of each scenario is presented in Table III.

TABLE III. FIRST EXPERIMENTS FITNESS SUMMARY

Scenario   Worst    Best    Average   Effectiveness
1          .35      .577    .4368     0%
2          .467     1.0     .5339     3%
3          .366     1.0     .677      10%
4          .467     1.0     .629      7%
5          .5945    1.0     .878      33%
6          .2324    1.0     .5238     7%
7          .2324    1.0     .5883     3%
8          .2324    1.0     .748      27%
9          .4745    1.0     .782      3%
10         .474     1.0     .835      40%

It was found that the number of genes per individual does not considerably affect the finding of good solutions. However, as in other evolutionary techniques, the mutation rate affects the algorithm performance: for this particular work, a lower value produces good results only occasionally. We also observe that the best results belong to the 10th scenario, with an average fitness value of .835 and good solutions produced with a probability of 40%. We now describe two of the best solutions obtained with the 10th scenario. Table IV contains the first best solution.

TABLE IV. 1ST GEP BEST SOLUTION (1ST EXPERIMENTS)

Fitness                 .9835847457627
Gene 1                  ?v v v vv?v??v32332
Gene 2                  /?v??v?????32333
Gene 3                  +??vvvvvv3223
Cons. Gene 1            [0.04, 5.0, 140]
Cons. Gene 2            [0.04, 5.0, 140]
Cons. Gene 3            [0.04, 5.0, 140]
Arithmetic expression   140 + 5.0v + (0.04v² − 0.4)

It can be noted that the algebraic expression created is very similar to the polynomial term of the first differential equation of the Izhikevich model. This equation generates the spike train shown in Figure 4 when the model is stimulated with a constant current.

Fig. 4. 1st GEP best solution (1st experiments): spike trains (top) and GEP fitness evolution (bottom).

The number of spikes is the same for both models and the firings are generated at almost the same times, with a minimal variation given by the constant 0.4 subtracted from the quadratic term of the polynomial. This result is achieved after evolving about 300 generations. Table V presents another best solution found with GEP.

TABLE V. 2ND GEP BEST SOLUTION (1ST EXPERIMENTS)

Fitness                 .9835847457627
Gene 1                  /v? +vv?v???2223
Gene 2                  /?/???vv??v?33333
Gene 3                  / +??v?????22223
Cons. Gene 1            [0.04, 5.0, 140]
Cons. Gene 2            [0.04, 5.0, 140]
Cons. Gene 3            [0.04, 5.0, 140]
Arithmetic expression   v/0.2 + 140 + (v/5)²

Here, a different mathematical expression of the polynomial term is created. However, it is equivalent to the polynomial term of Eq. 2, since (v/5)² = 0.04v² and v/0.2 = 5v.
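As a quick numerical illustration (our own check, not part of the original paper), the expressions of Tables IV and V can be compared with the original polynomial term of Eq. 2 over a typical membrane-potential range; the voltage range chosen here is an arbitrary assumption.

```python
import numpy as np

v = np.linspace(-80.0, 30.0, 1101)                 # membrane potential range (mV)
izh = 0.04 * v**2 + 5 * v + 140                    # polynomial term of Eq. 2
expr_iv = 140 + 5.0 * v + (0.04 * v**2 - 0.4)      # Table IV: constant offset of -0.4
expr_v = v / 0.2 + 140 + (v / 5)**2                # Table V: exact algebraic equivalent

print(np.max(np.abs(izh - expr_iv)))               # 0.4 everywhere
print(np.max(np.abs(izh - expr_v)))                # ~0 (floating-point error only)
```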

The plots in Figure 5 show the spike trains of both models and the evolutionary behavior of the proposed one.

Fig. 5. 2nd GEP best solution (1st experiments): spike trains (top) and GEP fitness evolution (bottom).

Once again, the GEP algorithm produced a good solution with the highest possible fitness value within a few hundred generations. The number of spikes and the firing times are almost the same. These two results show that GEP can replicate the equation and the Izhikevich model curve with considerable accuracy and within relatively few generations. The algorithm was greatly helped, since we defined the constant values and only substituted the polynomial term of Eq. 2. The following section presents the results without any intervention other than the number of constants and the settings shown in Tables I and II.

B. Second experiments

These results belong to the experiments performed with the second model proposed, where the neuron membrane potential is given by Eqs. 9 and 10, without the recovery variable u. Before presenting these results, it is important to remember that the similarity between the behaviors of two spiking neuron models is measured by the spike rates and the firing times. This criterion is adopted since the information is stored in the spike trains, as mentioned in Section II-A. As in the previous experiments, the number of tests is large; therefore, only two of the best solutions are presented. First, Table VI summarizes the fitness values achieved in all scenarios.

TABLE VI. SECOND EXPERIMENTS FITNESS SUMMARY

Scenario   Worst    Best    Average   Effectiveness
1          .355     .644    .2358     0%
2          .355     1.0     .394      3%
3          .355     1.0     .4656     7%
4          .355     1.0     .5924     3%
5          .35      1.0     .774      7%
6          .355     1.0     .326      3%
7          .355     1.0     .4556     3%
8          .355     1.0     .727      37%
9          .355     1.0     .774      33%
10         .35      1.0     .883      53%

It is evident that the number of genes and the mutation rate have a greater impact than in the first model proposed. Out of the 30 tests performed, the 10th scenario found 16 good solutions, which represents about 53% effectiveness. The features of one of these solutions are presented in Table VII.

TABLE VII. 1ST GEP BEST SOLUTION (2ND EXPERIMENTS)

Fitness                .9835847457627
Gene 1                 ? /v?v??vv222222
Gene 2                 ///vvv??v?v?222
Gene 3                 +?v v????v?2222
Cons. Gene 1           [-.5794, -.5, -.7494]
Cons. Gene 2           [-.427, .5935, -.7887]
Cons. Gene 3           [-.3472, .252, -.4763]
Algebraic expression   .722 + .3288/v + .26v

The algebraic expression created is completely different from the polynomial term of Eq. 2. As shown, the constants of each gene are random values between −1.0 and 1.0. The trace produced by solving the differential equation with the Euler method is compared with the Izhikevich model in Figure 6.

Fig. 6. 1st GEP best solution (2nd experiments): spike trains (top) and GEP fitness evolution (bottom).

It is observed that the trace generated by the GEP model is different from the one generated by the Izhikevich model. However, what matters is that the number of spikes is the same in both models and that they fire at almost the same times, within a difference of δ = 2 ms. With regard to the GEP behavior, the solution was found at approximately generation 300, similar to the first solution presented in the previous section. A second solution is shown in Table VIII.

TABLE VIII. 2ND GEP BEST SOLUTION (2ND EXPERIMENTS)

Fitness                .9835847457627
Gene 1                 //?//v??v?vv22
Gene 2                 / v/vv??v?vv2
Gene 3                 +?? + v?v???v?222
Cons. Gene 1           [-.253, -.966, .3532]
Cons. Gene 2           [-.6388, -.225, .92]
Cons. Gene 3           [.945, .888, -.555]
Algebraic expression   (v/28.29)² + .945v − .5263

A different individual representation and a new algebraic expression are created. The second-order polynomial is similar to the polynomial term in Eq. 2. The behavior of the model and of the GEP algorithm is shown in Figure 7. Here, a different trace is generated, but the solution meets the criteria already mentioned. This time, the algorithm takes more generations to obtain the best solution, but it is found before reaching 250 generations.

Fig. 7. 2nd GEP best solution (2nd experiments): spike trains (top) and GEP fitness evolution (bottom).

As can be seen, none of the plots showing the spike trains of the new models has the same shape as that of the Izhikevich model. We presume that this is because the proposed model does not contain a differential equation providing negative feedback to v, as u does in Eq. 2. As has been mentioned, the most important point is that the spike rates and the firing times are generated very close to those of the reference model.
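For reference, the following is a minimal sketch (our own illustration, not the authors' code) of the single-equation model of Eqs. 9-10 used in these experiments; I, T, dt and v_peak are assumed values.

```python
# Euler integration of v' = p(v) + I with reset v <- c when v >= v_peak (Eqs. 9-10).
def simulate_single(p, I=0.5, c=-65.0, v_peak=30.0, T=1000.0, dt=0.1):
    """Return the firing times (ms); p is a GEP-evolved term in v only."""
    v = c
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (p(v) + I)
        if v >= v_peak:                  # spike: record the time and reset (Eq. 10)
            spike_times.append(step * dt)
            v = c
    return spike_times
```

Any evolved expression in v alone, such as those reported in Tables VII and VIII, can be passed as the callable p, and the resulting firing times compared with the reference train through the gamma factor.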

C. Third experiments

In experiments one and two we substituted only one of the equations of the Izhikevich model. In this third type of experiment we replace both differential equations, making GEP find the expression values for p and q as shown in Eqs. 11 and 12. Since one more equation is added, the test scenario is modified: an 11th scenario is proposed, in which the individual is formed by 5 genes, three to model the first expression and two for the second. We also add two more constants for each gene, giving a total of 5 constants per gene, with values between −1.0 and 1.0. In the previous experiments we observed that the best fitness values and the highest efficiency were obtained with the 10th scenario; therefore, genes with head size six and a mutation rate of 20% are used. Table IX shows the fitness summary.

TABLE IX. THIRD EXPERIMENTS FITNESS SUMMARY

Scenario   Worst    Best   Average   Effectiveness
11         .4482    1.0    .847      57%

We observe that the highest fitness value is achieved and, although the average is a little lower than in the second experiments, more good solutions were obtained. This means about 57% effectiveness, 4% higher than in the second experiments. In Table X we show the features of one of the best individuals achieved. Algebraic expressions 1 and 2 correspond to the variables p and q, respectively, of the proposed model.

TABLE X. 1ST GEP BEST SOLUTION (3RD EXPERIMENTS)

Fitness                  .965572437933
Gene 1                   / uvu?vvvu3244
Gene 2                   uv?uvuvuvu?43244
Gene 3                   ? + / +?uv?v??43442
Gene 4                   u /u/?u????u33
Gene 5                   v uu v??uvv?223
Cons. Gene 1             [.7, .29, .388, -.598, .4]
Cons. Gene 2             [-.6, .343, -.46, .5, -.37]
Cons. Gene 3             [-.993, -.263, -.348, .958, .23]
Cons. Gene 4             [-.879, -.85, .283, -.69, -.38]
Cons. Gene 5             [.43, .837, .669, -.2, .73]
Algebraic expression 1   uv + (.66v)² + .239
Algebraic expression 2   .455u + v/.2

When we substitute these expressions in Eqs. 11 and 12 and solve with the Euler method, we get the plot shown in Figure 8.

Fig. 8. 1st GEP best solution (3rd experiments): spike trains (top) and GEP fitness evolution (bottom).

We consider this result a good solution, since the number of spikes is the same and the firings are generated at almost the same times in the Izhikevich model and in the proposed model. The maximum fitness value was found before 300 generations, similar to the previous experiments. A second solution is presented in Table XI. There, two new and different expressions are created, which generate the plot in Figure 9.

TABLE XI. 2ND GEP BEST SOLUTION (3RD EXPERIMENTS)

Fitness                  .965572437933
Gene 1                   +?vvuuvu?234422
Gene 2                   +? + uu/v?vv?vv3342
Gene 3                   /? + / uv?u?vv?34432
Gene 4                   u u????uu???u3342
Gene 5                   +?? /vu?u?uv22424
Cons. Gene 1             [.598, -.249, -.273, -.2, -.274]
Cons. Gene 2             [.8, -.47, .276, .35, -.849]
Cons. Gene 3             [.58, -.855, .6, .9, .23]
Cons. Gene 4             [.545, .23, -.479, .522, .565]
Cons. Gene 5             [.597, .779, -.54, .39, .246]
Algebraic expression 1   .747(v² + uv) + .233(.359 + 2u) + u + (.97 − u)v
Algebraic expression 2   u − .297

Fig. 9. 2nd GEP best solution (3rd experiments): spike trains (top) and GEP fitness evolution (bottom).

Similar to the previous result, the number of spikes and the firing times meet the similarity criterion given by the gamma factor, and GEP again found the maximum fitness value before 300 generations. These two experiments did not replicate the Izhikevich model signal, since GEP could not find an adequate recovery variable providing negative feedback for v. This means that the negative feedback u is closely linked to the variables a and b in Eq. 3.
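The two-expression system of the third experiments can be simulated analogously. The sketch below is our own illustration of Eqs. 11-13, assuming the reconstructed recovery equation u' = v − q(v, u) and the same illustrative values of I, T, dt and v_peak as before.

```python
# Euler integration of v' = p(v, u) - u + I and u' = v - q(v, u), with reset (Eqs. 11-13).
def simulate_two_terms(p, q, I=0.5, c=-65.0, d=0.0, v_peak=30.0, T=1000.0, dt=0.1):
    """Return the firing times (ms) of the two-expression model of the third experiments."""
    v, u = c, 0.0
    spike_times = []
    for step in range(int(T / dt)):
        dv = p(v, u) - u + I
        du = v - q(v, u)
        v += dt * dv
        u += dt * du
        if v >= v_peak:                  # spike: record the time and reset (Eq. 13)
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times
```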

To end this results section, we can say that the GEP algorithm is able to produce good solutions according to the similarity criterion given by the gamma factor with an error window δ = 2 ms. GEP can also create new models with the maximum possible fitness value in fewer than 300 generations, which we consider acceptable for an evolutionary algorithm.

VI. CONCLUSIONS AND FUTURE WORK

The proposed methodology has been demonstrated to be an alternative tool for creating mathematical models that reproduce behaviors similar to those of one of the most versatile spiking neuron models. This offers the possibility of solving more complex problems in neuroscience, such as pyramidal neuron spike train prediction.

GEP chromosomes can be easily modified in every generation, so the success rate depends on the evolutionary time, and this time is affected by the mutation rate. For this particular study, a higher mutation rate produces more efficient solutions than a lower rate. In fact, the best solutions were achieved using double the rate used in other evolutionary computation techniques such as genetic algorithms, where a mutation rate between 0.01 and 0.1 is commonly used.

We observed that single and more complex differential equation systems give similar results: both types of systems can generate spikes at the same firing times as the reference model. In future work we will therefore focus on generating only single-equation systems and on fitting the GEP algorithm to solve specific computational problems such as pattern recognition. Some spiking neuron models have been proved to solve different linear and non-linear pattern recognition problems [4], [11], [12]. This methodology could create spiking neuron models adaptable to a specific pattern recognition problem; in other words, the fitness function of the methodology may be substituted by one that meets the criteria of this type of problem. At present, we are working on developing a quantitative neuron model with the goal of predicting the timing of output spikes using electrophysiological recordings of pyramidal neurons with the suggested methodology. This work could help neuroscientists study the behavior of particular types of neurons with more realism, since new models would be created.

ACKNOWLEDGMENTS

The authors thank CONACYT and CONACYT-INEGI through project codes 3273 and 87637, respectively, and Universidad La Salle for the economic support under grant I-6/2.

REFERENCES

[1] I. N. C. Facility. 2009 INCF competition [Online], 2009.
[2] C. Ferreira. Gene expression programming: a new adaptive algorithm for solving problems. Complex Systems, 13(2):87-129, 2001.
[3] C. Ferreira. Gene Expression Programming: Mathematical Modeling by an Artificial Intelligence. Springer, 2nd edition, May 2006.
[4] A. C. Guillén. Ajuste de Modelos Neuronales de Tercera Generación para el Reconocimiento de Patrones: Análisis de Rendimiento y Comparativa. Master's thesis, Universidad La Salle, México D.F., 2012.
[5] E. M. Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569-1572, Nov. 2003.
[6] E. M. Izhikevich. Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5):1063-1070, Sept. 2004.
[7] R. Jolivet, F. Schürmann, T. K. Berger, R. Naud, W. Gerstner, and A. Roth. The quantitative single-neuron modeling competition. Biol. Cybern., 99(4-5):417-426, Nov. 2008.
[8] W. Maass. Networks of spiking neurons: the third generation of neural network models. Trans. Soc. Comput. Simul. Int., 14(4):1659-1671, Dec. 1997.
[9] C. Rossant, D. F. Goodman, J. Platkiewicz, and R. Brette. Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics, 4, 2010.
[10] R. Vázquez. Izhikevich neuron model and its application in pattern recognition. Australian Journal of Intelligent Information Processing Systems, 2010.
[11] R. Vázquez. Pattern recognition using spiking neurons and firing rates. In A. Kuri-Morales and G. Simari, editors, Advances in Artificial Intelligence - IBERAMIA 2010, volume 6433 of Lecture Notes in Computer Science, pages 423-432. Springer Berlin/Heidelberg, 2010.
[12] R. A. Vázquez and A. Cachon. Integrate and fire neurons and their application in pattern recognition. In CCE, pages 424-428. IEEE, 2010.
[13] J. Vreeken. Spiking Neural Networks, an Introduction. 2003.