Structure-Dynamics Relationships in Bursting Neuronal Networks Revealed Using a Prediction Framework

Structure-Dynamics Relationships in Bursting Neuronal Networks Revealed Using a Prediction Framework

Tuomo Mäki-Marttunen 1,2, Jugoslava Aćimović 1, Keijo Ruohonen 2, Marja-Leena Linne 1
1 Department of Signal Processing, Tampere University of Technology, Tampere, Finland, 2 Department of Mathematics, Tampere University of Technology, Tampere, Finland

Abstract

The question of how the structure of a neuronal network affects its functionality has gained a lot of attention in neuroscience. However, the vast majority of studies on structure-dynamics relationships consider only a few types of network structures and assess a limited number of structural measures. In this in silico study, we employ a wide diversity of network topologies and search, among many possibilities, for the aspects of structure that have the greatest effect on network excitability. The network activity is simulated using two point-neuron models, where the neurons are activated by noisy fluctuation of the membrane potential and their connections are described by chemical synapse models, and statistics on the number and quality of the emergent network bursts are collected for each network type. We apply a prediction framework to the obtained data in order to find the most relevant aspects of network structure. In this framework, predictors that use different sets of graph-theoretic measures are trained to estimate the activity properties, such as burst count or burst length, of the networks. The performances of these predictors are compared with each other. We show that the best performance in predicting activity properties of networks with a sharp in-degree distribution is obtained when the prediction is based on the clustering coefficient. By contrast, for networks with a broad in-degree distribution, the maximum eigenvalue of the connectivity graph gives the most accurate prediction. The results shown for small (N = 100) networks hold, with few exceptions, when different neuron models, different choices of neuron population and different average degrees are applied. We confirm our conclusions using larger (N = 900) networks as well. Our findings reveal the relevance of different aspects of network structure from the viewpoint of network excitability, and our integrative method could serve as a general framework for structure-dynamics studies in biosciences.

Citation: Mäki-Marttunen T, Aćimović J, Ruohonen K, Linne M-L (2013) Structure-Dynamics Relationships in Bursting Neuronal Networks Revealed Using a Prediction Framework. PLoS ONE 8(7): e69373. doi:10.1371/journal.pone.0069373
Editor: Gennady Cymbalyuk, Georgia State University, United States of America
Received February 1, 2013; Accepted June 7, 2013; Published July 25, 2013
Copyright: © 2013 Mäki-Marttunen et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the TISE graduate school, an Academy of Finland project, the Foundation of Tampere University of Technology, and the KAUTE foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
E-mail: tuomo.maki-marttunen@tut.fi

Introduction

There is great interest in understanding the structure of neuronal networks, and ultimately, the full connectome [1,2].
The network structure lays the foundation for all collective activity observed in the system, and understanding this relationship is relevant both in vivo and in vitro. Promising experimental attempts have been made at controlling the growth of neurons to produce a pre-designed network structure [3,4]. If successful, such experiments would inform us about how the collective dynamics of the neurons is influenced by their patterns of synaptic connectivity. However, such information is extremely challenging to obtain using state-of-the-art equipment due to the complexity of the processes involved in neuronal growth. Furthermore, the connectivity patterns obtained in experimental setups are always subject to the physical constraints posed by the growth platform of the neurons. For these reasons, most present-day studies on the structure-function relationship in neuronal networks are likely to be conducted in silico, where the connectivity can easily be modified and the effect on the network dynamics instantaneously screened. In the past few decades many theoretical and computational studies on the function of neuronal networks have been carried out in order to examine the behavior of the network under various circumstances and various stimuli. However, in most studies the structure of the network is at least in part based on purely random networks, i.e., the widely studied Erdős-Rényi networks. These networks are statistically described by a single parameter, namely the connection probability p, and lack any spatial organization. Several studies have revealed the contribution of the connection probability to various aspects of neuronal network dynamics, e.g., the emergence of large-scale network synchronization [5,6], the amplitude of fast network oscillations [7], and the emergence of spontaneous network-wide bursts [8]. Despite their widespread usage, random networks have been found to be an insufficient model for the synaptic connectivity in the brain [9-12]. Recently, steps toward a deeper understanding of the details of the structure and their effects on the dynamics have been made, as shown by a recent special issue of Frontiers in Computational Neuroscience devoted particularly to this topic [13]. The framework of small-world networks [14], which allows varying the proportion q of long-range connections in addition to the connection probability p, has hitherto been the most studied alternative to Erdős-Rényi networks in models of neuronal networks. Analyses of the effects of the long-range connections on, e.g., oscillation coherency [15], modes of synchrony in models

of epilepsy [16,17], and self-sustained activity [18] have been carried out. However, a range of other extensions to random networks exists as well. The scale-free networks [19] possess a structure that is hierarchical over different scales, and are characterized by power-law distributed degrees. These networks have been applied in a range of neuronal modeling studies due to their resemblance to the hierarchical connectivity of the brain [20]. Nevertheless, the preferential attachment algorithm in [19] (and in most generalizations for directed graphs, e.g. [21]) for generating scale-free topology only uses first-order connectivity statistics, i.e., the number of contacts of the nodes, as the criterion for creating a link. In [22] the effect of second-order connectivity statistics, which can roughly be captured by the widths and correlation of the degree distributions, was studied. Similarly, [23] studied the effect of degree distribution widths through a framework where both degree distributions can be arbitrarily predefined, and the networks are created through random couplings. Both [22] and [23] agree on the significance of the in-degree over the out-degree in influencing the mode of synchrony in the network. A frequent shortcoming in structure-dynamics studies is to overlook the joint effect of structural measures. The changes in activity are monitored with respect to one graph measure, ignoring the possible mutual changes in other structural measures [24]. In this work we approach this problem by measuring a set of graph properties simultaneously. In addition, we apply multiple network generation algorithms in order to avoid excessively strong correlations between particular graph measures. As an example, studying only such networks as are described in [14] would bring about a large correlation between geodesic path length and clustering coefficient, which would make it difficult to tell which properties of the dynamics are due to the high path length and which are due to the clustering. The focus of this work is on the excitability of spontaneously bursting networks, i.e., on how frequently network bursts occur and of what magnitude they are. Note that we adopt the term burst from the literature on neuronal networks cultured on microelectrode arrays, where the term is widely used for a short period of high spiking activity (alternative names are many, e.g., network spike, population spike, and synchronized spike) [25,26]. By contrast, when we refer to a burst of a single neuron, we use the term intrinsic burst or single-cell burst to make a clear distinction. We apply two point-neuron models, one of which is based on the integrate-and-fire formalism and the other on the Hodgkin-Huxley formalism. In both models, the neurons are connected by chemical synapses expressing short-term plasticity. The synaptic currents (or conductances in the Hodgkin-Huxley type of model) are instantaneous and decay exponentially after a presynaptic action potential. In the case of strong enough recurrent excitation, both models produce network bursts. Our focus is on the regime of spontaneous bursting activity, where the bursting frequency lies between 0 and 60 bursts/min. This is a typical range of bursting in, e.g., cortical cultures [25]. In the present study, we apply a prediction framework to determine the importance of different graph-theoretic measures. Simulations of network activity are run on a large set of different network structures, and measures of both structure and activity are calculated.
For each measure of structure we estimate its capability to predict the outcome of the activity properties, and to an extent, its capability to co-predict the activity when used together with the other graph measures. We show that the prediction of activity properties in networks with a sharp in-degree distribution (binomial) is best when the clustering coefficient is used, whereas in networks with a broad in-degree distribution (power-law) the predictions based on the maximum eigenvalue of the connectivity matrix are the most accurate. Our results could serve as a general guideline for designing experiments in which several but not all aspects of structure are measured. With novel experimental techniques and tools for data analysis [12,27], graph-theoretic measures of the local connectivity could be estimated without unraveling the whole connectivity matrix, and our results may help to choose those measured aspects.

Materials and Methods

We restrict our study to networks in which the structure can be fully represented by a directed, unweighted graph. We use the notation G = (V, E), where G is the graph, V = {v_1, ..., v_N} is the set of nodes, and E ⊆ {(x, y) | x ∈ V, y ∈ V} is the set of edges between the nodes. The connectivity matrix M ∈ {0,1}^(N×N) of a graph G is a binary matrix, where each element M_ij denotes the existence (1) or nonexistence (0) of an edge from node v_i to node v_j. Self-connections are excluded in this work. We call a pair of nodes neighbors if there is at least a unidirectional edge between them. When there is no risk of confusion, we use the terms node v_i and node i interchangeably.

Network structure

We assess network structure using the following graph-theoretic measures.

- Clustering coefficient (CC). The local clustering coefficient CC_i of a node v_i describes the density of local connections in the neighborhood of node v_i. We say that the nodes v_i, v_j and v_k form a triangle if there is at least a unidirectional edge between v_i and v_j, between v_i and v_k, and between v_j and v_k. The local clustering coefficient of node v_i is the number of triangles that include the node divided by the maximum number of such triangles if all neighbors of the node were connected [14,28]. The directions of the edges are respected, hence changing a unidirectional edge into a bidirectional edge doubles the counted triangles that include the considered edge. In mathematical terms, we can write

  CC_i = \frac{1}{8\binom{n_i}{2}} \sum_{\substack{j=1 \\ j \neq i}}^{N} \sum_{\substack{k=1 \\ k \neq i}}^{j-1} (M_{ij}+M_{ji})(M_{ik}+M_{ki})(M_{jk}+M_{kj}),

  where n_i is the number of neighbors of node v_i. The clustering coefficient of the whole network is calculated as the average over the local clustering coefficients of the nodes; only those nodes that have more than one neighbor are taken into account.

- Harmonic path length (PL). A geodesic path from one node to another means the shortest traversable path between the two nodes. To calculate the harmonic path length, the geodesic path length PL_ij between each pair of nodes (v_i, v_j), i ≠ j, is first calculated, where PL_ij = ∞ represents the case where no path exists from v_i to v_j. The harmonic path length of the network represents the average distance between two nodes of the network, and is computed as the harmonic mean of the geodesic path lengths [28,29]:

  PL = \left( \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \frac{1}{PL_{ij}} \right)^{-1}

- Node-betweenness (NB). The local node-betweenness NB_i is a measure of the centrality of the node v_i. It is calculated as the number of shortest paths that the considered node lies on [28]. If the node lies on a number s^(i)_jk out of the s^(tot)_jk equally long geodesic paths between nodes v_j and v_k, then the increment for this pair of nodes is the fraction of the two quantities. Thus, we can write

  NB_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} \sum_{\substack{k=1 \\ k \neq i,j \\ PL_{jk} < \infty}}^{N} \frac{s^{(i)}_{jk}}{s^{(tot)}_{jk}}.

  The node-betweenness of the network is the average of the local betweennesses NB_i.

- Out-degree deviation (ODD). The sample standard deviation of the realized out-degrees of the nodes.

- Degree correlation (DC). The sample correlation coefficient between the realized in- and out-degrees of the nodes.

- Length-to-self (LtS). The mean geodesic length to self, LtS = (1/N) Σ_{i=1}^{N} PL_ii.

- Maximum eigenvalue (MaxEV). The largest eigenvalue of the connectivity matrix M. This is always real-valued, as the connectivity matrix is non-negative [30].

- Motif count (MotN, N = 1, ..., 13). The (absolute) number of different connectivity patterns of triples of nodes [31] (see Fig. 1).

Ideally, to study how measures of structure are linked to measures of dynamics, one would have a direct (possibly stochastic) function from the measures of structure to the measures of dynamics. However, to obtain the measures of dynamics or their distributions, a network activity model has to be applied using a certain connectivity graph. Hence, this sort of mapping is not possible unless the measures of structure uniquely determine the underlying graph. To circumvent this problem, we generate networks with very different structural properties and simulate the neuronal activity in them. We concentrate on a few carefully selected random graph classes that we consider to span a wide enough diversity of network types relevant in neuroscience: Watts-Strogatz-type networks (WS), networks with high local feed-forward structure (FF), and networks with a high number of loops of a certain length (L2, L3, L4, L6). Let us motivate the choice of these classes. WS networks were first introduced in [14] as a class of networks expressing the small-world phenomenon, and have been extensively used ever since. In neuroscientific studies, WS networks between ordered and random topologies have been proposed as a model for, e.g., optimal signal propagation [15], maximal dynamical complexity [32], and optimal pattern restoration [33]. As for the FF networks, the feed-forward loop is a triple of nodes, v_i, v_j and v_k, where there is a direct connection from v_i to v_k, and a secured disynaptic connection from v_i through v_j to v_k. The feed-forward loops have been found to be more abundant in the C. elegans neuronal network than in random networks [31], and their contribution to neural processing has been much studied [34,35]. We include these networks in the present study as an alternative to WS networks that should show a great number of feed-forward loops and yet lack the spatial structure typical of WS networks. Finally, the loopy networks (L2, L3, L4 and L6) represent a network structure where the connections are organized such that feedback loops of a certain length and direction are promoted. Synaptic feedback projections in general have been suggested as a mechanism for working memory [36,37].
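To make the graph measures above concrete, the following Python sketch (not the authors' original implementation) computes three of them for a binary connectivity matrix M, using the convention M[i, j] = 1 for an edge from node i to node j: the directed clustering coefficient as reconstructed above, the harmonic path length, and the maximum eigenvalue.

    import numpy as np
    from scipy.sparse.csgraph import shortest_path

    def clustering_coefficient(M):
        """Directed clustering coefficient, averaged over nodes with more than one neighbor."""
        N = M.shape[0]
        S = M + M.T                                # symmetrized edge counts
        A = (S > 0).astype(int)                    # neighbor relation (at least one direction)
        n = A.sum(axis=1)                          # number of neighbors of each node
        cc = []
        for i in range(N):
            if n[i] < 2:
                continue
            nbrs = np.nonzero(A[i])[0]
            tri = 0.0
            for a in range(len(nbrs)):
                for b in range(a):
                    j, k = nbrs[a], nbrs[b]        # unordered pair of neighbors of node i
                    tri += S[i, j] * S[i, k] * S[j, k]
            cc.append(tri / (8.0 * n[i] * (n[i] - 1) / 2.0))
        return float(np.mean(cc))

    def harmonic_path_length(M):
        """Harmonic mean of the pairwise geodesic path lengths (PL_ij = inf if unreachable)."""
        N = M.shape[0]
        D = shortest_path(M, unweighted=True)      # geodesic distances; directed by default
        off_diagonal = ~np.eye(N, dtype=bool)
        return 1.0 / np.mean(1.0 / D[off_diagonal])   # 1/inf = 0 for unreachable pairs

    def max_eigenvalue(M):
        """Largest eigenvalue of the connectivity matrix (real, since M is non-negative)."""
        return float(np.max(np.linalg.eigvals(M).real))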
Several papers discuss the existence of directed loops in the brain: [38] and [39] show that such loops could be produced by the rules of spike-timing-dependent plasticity (STDP) in order to promote stability in the network, contradicting the no-strong-loops hypothesis [40]. The reason to include loopy networks in this study is to address the question of whether, and to what extent, such loops contribute to the dynamics in recurrent neuronal networks. One of the statistically most dominant properties of recurrent neuronal networks is the connection probability of the neurons. Increasing or decreasing the connection probability usually has major effects on the neuronal activity, which has been discussed in several computational studies, including [7], [41] and [8]. In addition to this, not only the average number but also the variance in the number of inputs to the neurons plays a significant role in the synchronization properties of the network [22,23]. In light of these facts, we keep the in-degree distribution strictly constrained while studying the other aspects of the network structure. To do this we propose to use the following random graph algorithms, in which the in-degree distribution f_ID can be explicitly set.

Watts-Strogatz [14] algorithm for bidirectional graphs. Initially, the nodes are placed in a metric space of choice. The number of inputs is drawn from f_ID for each node, and that number of spatially nearest nodes are chosen as inputs. Finally, all existing edges are rewired with probability q such that the postsynaptic node is held fixed but the presynaptic node is picked at random. We call these networks WS1 and WS2 networks, where the number 1 or 2 indicates the dimensionality of the manifold on which the nodes lie. In WS1 networks the nodes are placed on the perimeter of a ring, while in WS2 networks the nodes are placed on the surface of a torus. To be more specific, in the ring topology the nodes are placed onto a ring in the 2D plane as (x, y) = (sin φ, cos φ), where φ ∈ {2π/N, 4π/N, ..., 2π}. Similarly, in the torus topology the 2D grid is nested into 4D space as (x, y, z, w) = (sin φ_1, cos φ_1, sin φ_2, cos φ_2), where φ_1, φ_2 ∈ {2π/√N, 4π/√N, ..., 2π}, given that √N is an integer.

Figure 1. The 13 network motifs of three connected nodes. See [31] for reference. doi:10.1371/journal.pone.0069373.g001

In both topologies the Euclidean distance is used as the metric. We refer to the limit topologies of Watts-Strogatz networks with zero rewiring (q = 0) as locally connected networks (LCN1 and LCN2).

Scheme for generating graphs with high local feed-forward occurrence. For each node the number of inputs is drawn from f_ID. The inputs are selected sequentially for each node. For the first node, the inputs are selected at random. For the next ones, the inputs are selected in such a way that the emergence of feed-forward motifs is pronounced. This is done by giving higher weights to the nodes that project disynaptically to the considered node than to the others. A detailed scheme for generating these networks is given in Algorithm S1 in File S1. We refer to these networks by the acronym FF. Note that this is not to be mistaken for the general term feed-forward networks in the sense of the opposite of recurrent networks. In this work all considered networks are recurrent.

Scheme for generating graphs with a high occurrence of loops of length L. For each node the number of inputs is drawn from f_ID. The edges are set one by one until each node has all its inputs selected. In the selection of the presynaptic nodes, the emergence of loops of length L is promoted, while the addition of edges that shorten these loops is penalized. This is done by giving different weights to the nodes depending on the shortest path from the considered node to the candidate nodes. See Algorithm S2 in File S1 for the detailed algorithm. The resulting networks are rich in recurrent synfire chains of length L. This is, however, conditional on the choice of the in-degree distribution: if the number of connections is too great, the excess edges have to create shortcuts into the loops. In this work we refer to these networks with the acronyms L2, L3, L4 or L6, depending on the promoted length of the loops. MATLAB functions to generate these networks are given in our ModelDB entry.

Each of these algorithms can be used to generate networks where the defining property of the respective network class is very pronounced, networks where the strength of that property is zero (random networks), and networks that lie between these extremes on a continuous scale. We denote this strength parameter by W ∈ [0, ∞]. In Watts-Strogatz networks, we draw the relation between the rewiring probability q and the strength parameter as q = exp(-W/2). Hence, in all network classes W = 0 produces strictly random networks (RN) and W = ∞ produces the other extreme of networks. In addition to these networks, we consider biologically realistic 2-dimensional neuronal networks. To generate these, we use the NETMORPH simulator [42] with the model parameters taken from [43]. NETMORPH simulates the growth of dendrites and axons in a population of neurons and outputs the sites of potential synapses. The potential synapses are formed when an axon and a dendrite of distinct neurons come close enough to each other. To remove the effect of boundaries, we place the somas randomly inside a square-shaped box, and the neurites that grow outside the box are considered to appear on the opposite side of the box. For each simulation, we form the connectivity graph from the simulation result once the required number of connections has been reached. We omit the question of the degree to which the potential synapses become functional synapses and consider every potential synapse as an edge. Multiple synapses with the same pre- and postsynaptic neurons are considered as one edge. The in-degree distribution of these NETMORPH networks cannot be explicitly set, but it is fairly well approximated by binomial distributions (see Fig. S1 in the supporting information). In the forthcoming sections, we abbreviate the networks obtained with the NETMORPH simulator as NM.
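As an illustration of how the in-degree distribution can be held fixed while the wiring rule varies, the following Python sketch generates a WS1-type network: the in-degree of each node is drawn from a prescribed distribution, the spatially nearest nodes on the ring are chosen as inputs, and each edge is then rewired with probability q = exp(-W/2) by redrawing the presynaptic node at random. This is a simplified reading of the scheme described above, not the reference MATLAB implementation; the sampling function f_ID stands for any chosen in-degree distribution.

    import numpy as np

    def generate_ws1(N, f_ID, W, rng=None):
        """WS1 network: ring topology, prescribed in-degrees, rewiring probability q = exp(-W/2)."""
        rng = rng or np.random.default_rng()
        q = np.exp(-W / 2.0)                        # W = 0 -> q = 1 (RN), W = inf -> q = 0 (LCN1)
        M = np.zeros((N, N), dtype=int)
        idx = np.arange(N)
        for post in range(N):
            k = min(f_ID(rng), N - 1)               # number of inputs of this node
            d = np.abs(idx - post)
            d = np.minimum(d, N - d)                # distance along the ring
            d[post] = N                             # exclude self-connections
            nearest = np.argsort(d)[:k]             # the k spatially nearest nodes become inputs
            M[nearest, post] = 1
        # Rewire each existing edge with probability q: the postsynaptic node is kept,
        # the presynaptic node is redrawn at random (avoiding self- and duplicate edges).
        for pre, post in zip(*np.nonzero(M)):
            if rng.random() < q:
                M[pre, post] = 0
                candidates = np.nonzero((M[:, post] == 0) & (idx != post))[0]
                M[rng.choice(candidates), post] = 1
        return M

    # Example: binomial in-degrees with p = 0.2 in a network of N = 100 nodes, strength W = 3
    # M = generate_ws1(100, lambda rng: rng.binomial(99, 0.2), W=3.0)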
The different network classes are illustrated in Fig. 2. In addition, iterations of the generation of extreme FF and L4 networks are shown. Furthermore, a set of graph measures in extreme FF, L2, L3, L4 and L6 networks is shown. These statistics, compared to the corresponding statistics in random and locally connected networks, reveal that the algorithms indeed produce networks with the desired properties. Further properties of the networks are shown in Figs. S2, S3 and S4, and discussed in Section S2 in File S1.

Neuronal dynamics

We apply two neuron models with rather different intrinsic dynamics. The first one is a leaky integrate-and-fire model with short-term plasticity [44], and the second one is a Hodgkin-Huxley type of model with four ionic and three synaptic currents [45]. In the latter we import a model of synaptic short-term plasticity from [46]. In both models we inject a stochastic white-noise term into the membrane potential of the neurons to make them spontaneously active. The models are described in detail in Section S1.3 in File S1. We refer to the first model as the LIF model and to the latter as the HH model throughout this work, although they are extensions of the ordinary leaky integrate-and-fire and Hodgkin-Huxley models. These two models were chosen to represent both a simple model that can easily be extended to larger networks, and a more biophysically detailed model that can be extended to study the effect of, e.g., various neurotransmitters and modulators on network activity. The latter was introduced as a model for studying synchronization in low extracellular magnesium concentration, but it allows the use of higher concentrations as well. Here, we use the value [Mg2+]_o = 0.7 mM, which is in the range of magnesium concentrations normally used in studies of neuronal cultures (see, e.g., [25]). Network bursts could be produced with simpler models that do not consider short-term plasticity, e.g., by using widely applied models of balanced excitation/inhibition [47] or Markov binary neurons [48]. The ending of the bursts in these models is dependent on the activation of the inhibitory population, which returns the elevated firing activity to a baseline level. By contrast, applying short-term depression to the excitatory synaptic currents allows the emergence of network bursts in both excitatory-only (E) and excitatory-inhibitory (EI) networks [44]. This is favorable, as experiments carried out on neuronal cultures show that network bursts cease even in the pathological case of blocked inhibition (see, e.g., [49] and [26] for spinal cord cultures and [50] and [51] for cortical cultures). In this work, we study the bursting dynamics of both E and EI networks, and hence, we employ the short-term depressing synapses in both cases. In the EI networks, the structure is first generated using one of the network generation schemes, and then 20% of the neurons, randomly picked, are assigned as the inhibitory population. The network size is N = 100 unless otherwise stated. As a major simplification of reality, we consider the synaptic transmission to be instantaneous. The transmission delays and their effect on neuronal network dynamics have been under wide examination (see, e.g., [52]) and have been shown to play an important role in various contexts.
Their inclusion can, however, be carried out in multiple ways. For instance, in WS1, WS2, and NM networks the long-range connections should have longer delay parameters than the local connections (see, e.g., [53]), whereas for other network types such a distance-delay relationship cannot be straightforwardly defined, and hence, different approaches would have to be tested. In this work we restrict our study to non-delayed networks in order to avoid excessive simulations.
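The actual simulations were run in PyNEST (LIF model) and MATLAB (HH model) with the model parameters given in File S1, which are not reproduced here. Purely as an orientation to the kind of dynamics involved, the following is a minimal, self-contained Python sketch of a noise-driven LIF network with instantaneous, exponentially decaying synaptic currents and short-term synaptic depression; all parameter values in it are placeholders, not the values used in the study.

    import numpy as np

    def simulate_lif_network(M, T=60.0, dt=1e-4, rng=None):
        """Noise-driven LIF network with depressing, exponentially decaying synapses.

        M is the binary connectivity matrix (M[i, j] = 1: edge from neuron i to neuron j).
        All parameters below are placeholders; the study's values are listed in File S1.
        """
        rng = rng or np.random.default_rng()
        N = M.shape[0]
        tau_m, v_rest, v_thr, v_reset = 20e-3, -70e-3, -54e-3, -70e-3   # membrane constants (s, V)
        tau_syn, tau_rec, U = 5e-3, 500e-3, 0.5                         # synaptic decay, recovery, release fraction
        g, sigma = 2e-3, 3e-3                                           # synaptic weight (V), noise amplitude
        v = np.full(N, v_rest)
        I_syn = np.zeros(N)               # summed synaptic drive of each neuron (in V)
        x = np.ones(N)                    # fraction of available synaptic resources per neuron
        spikes = []                       # recorded (time, neuron index) pairs
        for step in range(int(T / dt)):
            t = step * dt
            # leaky integration with synaptic input and white-noise fluctuation
            v += dt / tau_m * (v_rest - v + I_syn) + sigma * np.sqrt(dt) * rng.standard_normal(N)
            I_syn -= dt / tau_syn * I_syn           # exponential decay of synaptic currents
            x += dt * (1.0 - x) / tau_rec           # recovery from short-term depression
            fired = np.nonzero(v >= v_thr)[0]
            for i in fired:
                spikes.append((t, i))
                I_syn += g * U * x[i] * M[i, :]     # instantaneous transmission to the targets of i
                x[i] *= 1.0 - U                     # depression of the outgoing synapses
            v[fired] = v_reset
        return spikes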

Figure 2. Illustration of network classes. A: Examples of the extreme network types used in the present work. Network size N = 60, except in LCN2, where N = 36, and in NM, where N = 30. The red arrows highlight the definitive properties of the networks. In NM the connections whose postsynaptic node lies across the box boundaries are replaced by a link to a copy of the postsynaptic node (plotted in gray at a corresponding location outside the box). B, C: Illustration of the generation of FF (B) and L4 (C) networks. The red dots show the node whose inputs have most recently been added, and these inputs are in turn highlighted by circles. The number at the upper-left corner of each graph shows the iteration number. D: Mean and standard deviation of the number of motifs 5 (left) and 6 (right) in different extreme network types (RN, FF, and LCN1). The FF networks possess the greatest number of both of these motifs. The low number of these motifs in LCN1 networks is explained by the fact that they contain many more highly connected motifs (motifs 12 and 13) due to their locally coupled design. E: Mean and standard deviation of the length-to-self measure in different extreme network types (RN, L2, L3, L4, L6, and LCN1). The loopy networks L2, L3, L4, and L6 express a value of LtS near the corresponding length of the promoted loop. In both D and E, all networks are of size N = 100 and their in-degree distribution is binomial with p = 0.16. Statistics are computed from 15 independent samples. doi:10.1371/journal.pone.0069373.g002

The networks are set into a regime of spontaneous network bursting. This is done by tuning the synaptic weight g (see Section S1.3 of File S1) so that the moderately connected networks (RN, p = 0.2, binomial in-degree) show a bursting frequency of 10 bursts/min. These values are in the range of connectivity and bursting activity in a typical cortical culture [25]. For the applied proportions of excitatory and inhibitory neurons and model parameters, we found that the mean bursting frequency is a monotonically increasing function of the synaptic weight in the regime of interest (0-60 bursts/min), and hence we use the bisection method to find the proper synaptic weight. For each network simulation the spiking activity is solved for a one-minute period (in fact for 61 s, but the first second is discarded to account for a possible initial transient). The model parameters and initial conditions for both models are described in Section S1.3 of File S1. The code files to carry out the simulations in PyNEST [54] (LIF model) and MATLAB (HH model) are given in our ModelDB entry. Fig. 3 illustrates the typical dynamics for a single neuron and for a network of neurons. Activity in a bursting network can be characterized by the quantity and quality of the network bursts. We employ the burst detection scheme applied in, e.g., [55] and [56]. The spikes are first divided into separate network bursts using a maximal inter-spike interval of 25 ms. This means that two consecutive spikes belong to the same network burst if and only if their distance is 25 ms or less. Those bursts which consist of fewer than 1.5·N_E (with the HH model) or 0.4·N_E (with the LIF model) spikes, where N_E denotes the size of the excitatory population, or in which fewer than 0.3·N_E individual neurons contributed to the burst, are disregarded.
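In code, this detection step amounts to splitting the population spike train at inter-spike gaps longer than 25 ms and discarding candidate bursts that are too small. A minimal Python sketch (the thresholds are passed as arguments, since they differ between the HH and LIF models):

    import numpy as np

    def detect_bursts(spike_times, spike_neurons, max_isi=0.025, min_spikes=40, min_neurons=30):
        """Split a population spike train into network bursts using a maximal ISI criterion.

        spike_times   : sorted spike times (s) of the whole excitatory population
        spike_neurons : index of the neuron that fired each spike
        max_isi       : two consecutive spikes belong to the same burst iff their distance is at most max_isi
        min_spikes, min_neurons : bursts with fewer spikes, or with fewer participating neurons, are
                        disregarded (e.g. 0.4*N_E and 0.3*N_E for the LIF model with N_E = 100)
        """
        spike_times = np.asarray(spike_times)
        spike_neurons = np.asarray(spike_neurons)
        # positions where the gap between consecutive spikes exceeds max_isi
        cut_points = np.nonzero(np.diff(spike_times) > max_isi)[0] + 1
        bursts = []
        for segment in np.split(np.arange(len(spike_times)), cut_points):
            if len(segment) >= min_spikes and len(np.unique(spike_neurons[segment])) >= min_neurons:
                bursts.append((spike_times[segment], spike_neurons[segment]))
        return bursts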
Further, a burst profile is created by convolving the population spike train, in the range from the first to the last spike of the burst, with a Gaussian kernel with a standard deviation of 2.5 ms. The lengths of the rising slope and the falling slope, i.e., the half-widths of the burst profile, are calculated with a resolution of 0.25 ms. These measures are illustrated in Fig. 4.

Figure 3. Illustration of the HH (upper panels) and the LIF (lower panels) model dynamics. Left: Single-cell membrane potential with the spike magnified in the inset. The membrane potential at the time of the spike in the LIF model is explicitly set to 30 mV for the sake of illustration. Middle: Network spike train in an excitatory-inhibitory RN with p = 0.2 connectivity and binomial in-degree distribution. The uppermost 20 neurons represent the inhibitory population. The red spike corresponds to the (first) spike shown in the left panel, and the burst with the red borders corresponds to the burst shown in the right panel. Right: The selected burst highlighted. doi:10.1371/journal.pone.0069373.g003

We consider the summed value of these two measures to be the length of the burst. This measure is more robust to the addition of a single spike to the burst than the absolute duration of the burst, which is calculated as the time from the first spike to the last spike of the burst. To further characterize the burst, we consider the number of spikes in a burst, which we refer to as the burst size. To average the network activity over a one-minute simulation, we use the median burst length and the median burst size. An important characteristic of the network activity is also the burst count, i.e. the number of bursts during the time of simulation, which has been shown to vary substantially in spontaneously active networks with different structures [57]. In addition, we consider the total spike count of the network during the one-minute simulation as an indicator of the overall amount of activity. All of the above activity measures are calculated from the population spike train of the network. In the LIF model the spike trains are given explicitly by the model, but in the HH model they have to be extracted from the time series of the membrane potential. In this work, we consider any local maximum of the membrane potential above a threshold of -30 mV a spike. It should be noted that, due to the Brownian noise injected into the membrane potential, we only consider local maxima at the resolution of 10·dt, where dt is the simulation time step. This means that the time instant t is considered a local maximum if and only if V(t - 10·dt) < V(t) and V(t) ≥ V(t + 10·dt). Given the simulation time step dt = 0.25 ms, this resolution was found to be coarse enough to prevent the noisy fluctuation of the membrane potential from being registered as spikes, but on the other hand fine enough to correctly detect the spikes in an intrinsic (single-cell) burst. The chosen threshold potential, -30 mV, is robust: in a RN with binomial in-degree distribution (p = 0.3), a change of ±2.5 mV in the threshold potential had no effect on the detected spikes, and a change of ±10 mV changed the total number of detected spikes by less than 5%.

Structure-dynamics analysis

Using the above methods, a realization of the activity properties can be obtained for any given connectivity graph by simulating one of the two neuron models and performing the burst detection. In purely excitatory networks the graph properties are extracted using the entire network, while in EI networks only the excitatory-excitatory part is considered. The activity properties are likewise calculated from the excitatory population only. Throughout this work, we divide the data into 24 simulation settings, as listed in Table 1.
Figure 4. Illustration of the burst profile attributes. The shaded dots represent the spikes of the excitatory neurons. The thick blue curve represents the burst profile, i.e., the smoothed firing rate curve. The time instants when the burst profile crosses, for the first and for the last time, the value of half of the maximal value (shown with the horizontal dashed line) are identified. The distances of these time instants from the time instant of the maximal firing rate (vertical line) are the lengths of the rising (Rs) and falling (Fs) slopes. The burst length (BL) is the sum of these two attributes. The network activity in this figure is simulated with the HH model, and the structure of the underlying network is a RN with binomial in-degree distribution, p = 0.2. Scale bar (black) 1 ms. doi:10.1371/journal.pone.0069373.g004
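For a single detected burst, the profile attributes of Fig. 4 can be computed roughly as follows (a sketch; the Gaussian deviation of 2.5 ms and the 0.25 ms resolution are those stated above):

    import numpy as np

    def burst_profile_attributes(burst_spike_times, sigma=0.0025, dt=0.00025):
        """Rising slope (Rs), falling slope (Fs) and burst length (BL = Rs + Fs) of one burst."""
        burst_spike_times = np.asarray(burst_spike_times)
        t0, t1 = burst_spike_times.min(), burst_spike_times.max()
        grid = np.arange(t0, t1 + dt, dt)                       # from the first to the last spike
        # burst profile: the population spike train convolved with a Gaussian kernel
        profile = np.exp(-(grid[:, None] - burst_spike_times[None, :]) ** 2
                         / (2.0 * sigma ** 2)).sum(axis=1)
        i_max = int(np.argmax(profile))
        above = profile >= 0.5 * profile[i_max]
        first = int(np.argmax(above))                           # first crossing of the half-maximum
        last = len(above) - 1 - int(np.argmax(above[::-1]))     # last time at or above the half-maximum
        rising = grid[i_max] - grid[first]                      # Rs
        falling = grid[last] - grid[i_max]                      # Fs
        return rising, falling, rising + falling                # Rs, Fs, BL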

Table 1. The list of the 24 simulation settings.

    Model:       HH                         LIF
    Population:  E            EI            E            EI
    In-degree:   BIN / POW    BIN / POW     BIN / POW    BIN / POW
    p:           0.16, 0.2, or 0.3 for each combination

The first row denotes the model of dynamics, and the second row shows the choices of population. For each combination of these, one may freely choose the shape (third row) and the connection probability (fourth row) of the in-degree distribution. doi:10.1371/journal.pone.0069373.t001

The networks in each simulation setting have a fixed average connection probability p (0.16, 0.2, or 0.3), a fixed shape of the in-degree distribution (BIN for binomial or POW for power-law), a fixed choice of population (E or EI), and a fixed choice of the model of dynamics (HH or LIF). Hence, all variation in the activity properties between networks that belong to the same simulation setting is an effect of the network structure only. For each setting we generate a series of network structure realizations, and for each of these we simulate a one-minute sample of activity. The chosen network types are FF, WS1 and WS2 networks with W = 1, 3, 6, ∞, and L2, L3, L4 and L6 networks with W = 3, 6, 12, ∞. In addition, RNs are included, and NM networks are considered in the settings with binomial in-degree distribution, which makes the total number of essentially different types of network structure N_nt = 29 (power-law) or 30 (binomial). We use two methods for the data analysis, namely, a correlation analysis and a prediction framework. We use the first to restrict the number of analyses to be done with the latter. The correlation coefficient between an activity property and a graph property is calculated for each simulation setting separately as

  r = \frac{\sum_{G \in \mathcal{G}} (x(G) - m_x)(y(G) - m_y)}{\sqrt{\sum_{G \in \mathcal{G}} (x(G) - m_x)^2} \sqrt{\sum_{G \in \mathcal{G}} (y(G) - m_y)^2}}.    (1)

In this notation, \mathcal{G} is the set of networks (we use the terms network and network realization interchangeably here) belonging to the said simulation setting. The term x(G) is a graph property of network G, while the term y(G) is an activity property obtained from a neuronal simulation done on network G, and m_x and m_y are the corresponding average values. The correlation analysis is useful as a first approximation of the relationship between the graph measures and the activity measures, but it only sheds light on the linear pair-wise dependence between the measures. We apply a prediction framework to answer the question: which graph measures are the most important when aiming to predict the activity in the network? To do this, we divide the data into a teaching data set and a target data set. The teaching data set consists of N_te = 35 networks for each of the N_nt = 30 (29) network types, while the target data set contains only N_ta = 5 repetitions. An affine predictor

  y = a_0 + \sum_{i=1}^{K} a_i x_i    (2)

is built using the considered activity properties Y ∈ R^(N_te·N_nt) and the K chosen structural properties X ∈ R^(N_te·N_nt × K) that are extracted from the teaching data. We include the realized average degree in the structural measures in order to compensate for the variability caused by the in-degree variance, and hence we always have K ≥ 1. Least squares is used to solve the predictor coefficients, i.e.,

  [a_0 a_1 ... a_K]^T = ([1 X]^T [1 X])^{-1} [1 X]^T Y,

where 1 ∈ R^(N_te·N_nt) is a vector consisting of ones.
The activity properties of the target data set can be predicted using Eqn. 2 for each of the N_nt·N_ta networks, and the prediction error can be calculated as the average absolute difference between the predicted and the actual value of the activity property. The prediction is then repeated a number of times: on each repetition the target data are regenerated, while the teaching data are resampled from a larger pool of samples of each network type. The error distribution of a given predictor, i.e., a predictor that uses a chosen set of structural measures, is compared to the error distributions of the other predictors. This is done using the Mann-Whitney U-test, which tests the null hypothesis that the medians of the distributions are equal. It should be noted that we do not use the term predict in the meaning of forecasting the future based on the past. Instead, the task of the predictor is to estimate the outcome of an activity property in a separate, unknown network when only some aspects of the network structure are known to the predictor. This is closely related to classification tasks, but as the outcome of the predictor is a continuous value instead of a discrete one, it is best described by the term prediction task [58].
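The predictor of Eqn. 2 and its evaluation reduce to an ordinary least-squares fit and an absolute-error comparison. A compact Python sketch of one prediction round (here X_te, y_te are the structural measures and the activity property of the teaching networks, and X_ta, y_ta those of the target networks; the realized average degree is assumed to be included as a column of X):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def fit_affine_predictor(X_te, y_te):
        """Least-squares solution of y = a_0 + sum_i a_i x_i (Eqn. 2)."""
        A = np.column_stack([np.ones(len(X_te)), X_te])
        coef, *_ = np.linalg.lstsq(A, y_te, rcond=None)
        return coef

    def prediction_error(coef, X_ta, y_ta):
        """Average absolute difference between the predicted and the actual activity property."""
        A = np.column_stack([np.ones(len(X_ta)), X_ta])
        return float(np.mean(np.abs(A @ coef - y_ta)))

    # Comparing two predictors: errors_a and errors_b collect the prediction errors over the
    # repetitions; the U-test addresses the null hypothesis that the medians are equal.
    # stat, p_value = mannwhitneyu(errors_a, errors_b)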

Results

As a first step toward understanding the structure-dynamics relationships in bursting neuronal networks, we estimated the correlations between the graph-theoretic measures and the activity properties. Fig. 5 shows the correlation coefficients between the considered graph measures and the measures of activity. We first calculated the correlation coefficient between all pairs of measures in each simulation setting using Eqn. 1. We then computed the mean and standard deviation of the obtained correlations, taken over the twelve simulation settings with the same shape (binomial or power-law) of the in-degree distribution. We focus our analysis on those graph measures that, at least for some activity property, gave an absolute mean correlation greater than 0.25 in both the binomial and the power-law settings. Namely, these were CC, PL, NB, ODD, DC, LtS, MaxEV, and Mot13. However, CC and Mot13 were very strongly correlated with each other (the correlation coefficient between these measures ranges from 0.85 to 0.99 in the 24 simulation settings, mean 0.94). This was the case also between PL and NB (0.91 to 0.99, mean 0.95), which is backed by the analytical derivations shown in Section S2.4 in File S1. Hence, we disregarded PL and Mot13 whenever NB and CC were considered. Other pairs of measures were considerably less correlated: the strongest correlation among the remaining measures ranged from 0.59 to 0.87, mean 0.77. It should be noted that MaxEV was to some extent correlated with the average degree of the network (correlation coefficient in the range from 0.63 to 0.89, mean 0.79), as predicted by the mean-field approximations in [59]. In our framework the mean degrees E[d] = (N-1)p, where d represents the average degree of the nodes, were held fixed between the compared networks. However, drawing from the in-degree distribution resulted in some variance in the network structure. In the case of binomial in-degree this variance was negligible (Var[d] = Var[(1/N) Σ_{i=1}^{N} d_i] = ((N-1)/N)·p(1-p) ≈ 0.158 for p = 0.2, where d_i represents the in-degree of a single node), but in networks with power-law distributed in-degree (with p = 0.2) it was empirically found to be as large as Var[d] ≈ 2.96. This variance had to be taken into account explicitly in the analyses of the following sections. Similarly to the structural measures, there was redundancy in the activity measures. Naturally, the total spike count was largely dictated by the product of the burst count and the median burst size: the correlation coefficient between these measures ranged from 0.866 to 0.999 with mean 0.978. In most of the following analyses we disregarded one of these measures, namely the burst size, due to its small coefficient of variation (mean CV 0.16, whereas those of the spike count and the burst count were 0.46 and 0.62, respectively). The low variance in the burst size was also reflected in a high correlation between the spike count and the burst count (correlation coefficient ranged from 0.532 to 0.998, mean 0.918). Between other pairs of activity measures, the correlation coefficient ranged from negative to positive values. Hence, we also neglected the spike count in most of the forthcoming results and considered it to behave to a great degree similarly to the burst count.

Clustering coefficient regulates the bursting properties in networks with binomial in-degree distribution

To further analyze the dependency between the activity and graph properties, we applied the prediction framework to different activity properties in different simulation settings. Fig. 6 shows the prediction errors of the burst count in the simulation settings with excitatory-only networks, binomial in-degree distribution, and the HH model. The error distribution (mean, std) is plotted for different predictors. One finds that predictors using CC are significantly better than the null predictors (the predictors where K = 1, i.e., only the realized degree is used in the prediction). In the dense connectivity simulations (p = 0.3) MaxEV performs approximately equally well, but at the other connectivities the effect of MaxEV is insignificant. The distribution of the values of the burst count with respect to the values of CC is illustrated for the p = 0.2 case. The dominance of CC in the prediction of activity properties can be observed for all simulation settings with binomial in-degree distribution. This is confirmed in Fig. 7, where the best predictor is named for the prediction of each activity property in each of the twelve simulation settings. Furthermore, Fig. 8 shows the averaged improvements that were obtained by using the said graph measures in the prediction of the burst count and the burst length. One can observe that the predictions were best improved, both over the null predictor and over a predictor using an arbitrary other graph measure, by including CC in the predicting graph measures.
The improvements obtained by adding the other graph measures were clearly smaller. The improvement in the prediction was most substantial in the case of the burst count: by using only one predicting graph measure (CC) the error was reduced by up to 35% on average, while the corresponding prediction error reduction for the burst length was on average 26%. The predictor using all available structural measures reached corresponding percentages of 49% for the burst count and 45% for the burst length (data not shown).

Maximum eigenvalue is the best predictor of activity when in-degree is power-law distributed

We repeated the analyses carried out in the previous section, now using networks with power-law distributed in-degree. The results were substantially different: changing between excitatory-only and excitatory-inhibitory networks, between different activity models, or even between different connection probabilities did not affect the overall significance of the graph measures in the prediction of the activity measures as much as the choice of the in-degree distribution did. Fig. 9 shows the statistics corresponding to those shown in Figs. 6, 7 and 8. One observes a great improvement in the prediction by the inclusion of MaxEV in Fig. 9. This effect was most evident in the networks with the lowest connection probability (p = 0.16, Fig. 9A), where the bursts were most rare (see Fig. S5). Fig. 9C shows the dominance of MaxEV across the activity properties and all simulation settings with power-law distributed in-degree. The prediction errors of the burst count and the burst length were decreased from the null predictions on average by 28% and 13%, respectively, by the inclusion of MaxEV (Fig. 9D). The corresponding percentages for the predictor using all structural data were 41% and 34% (data not shown), which suggests that it is useful to employ more than one structural measure, especially in the prediction of the burst length. In these analyses, the realized degree was included in all the predictions in order to cancel the effect of the correlation between MaxEV and the average degree. If the degree was neglected, the effect of MaxEV was even more pronounced. By contrast, the exclusion of the degree from the predictions of the activity measures in networks with binomial in-degree had no notable effect, due to the low intrinsic variance in the degree. Furthermore, the results stayed the same when a neural network predictor (the default feed-forward backpropagation network in MATLAB) was used instead of the linear predictor. If a diagonally quadratic predictor ([1 X] replaced by [1 X X^(2)], where X^(2) is the element-wise second power) was used, the improvements by the addition of some of the other measures were slightly increased, while CC and MaxEV retained their statistical dominance in the prediction of all activity properties (data not shown).
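The diagonally quadratic predictor mentioned above only changes the design matrix of the least-squares fit: the element-wise squares of the structural measures are appended as additional columns. As a sketch, in the notation of the earlier example:

    import numpy as np

    def fit_quadratic_predictor(X_te, y_te):
        """Diagonally quadratic predictor: design matrix [1, X, X**2] (element-wise square)."""
        A = np.column_stack([np.ones(len(X_te)), X_te, X_te ** 2])
        coef, *_ = np.linalg.lstsq(A, y_te, rcond=None)
        return coef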
We also carried out corresponding simulations with larger networks, N = 900. We used the LIF model and excitatory-only networks, and varied the in-degree distribution. Fig. S6 shows representative data on the large-network activity and the predictor performances. Our conclusions hold for the large networks as well: the activity properties in networks with binomially distributed in-degree can be best predicted with CC, whereas the activity in networks with power-law distributed in-degree can be best predicted using MaxEV. In addition, we ran longer, 5-minute simulations using the LIF model networks with the normal network size N = 100 (data not shown). Our results remained qualitatively the same and confirmed that the shorter (1-minute) simulations give statistically significant results in spite of the large variability in the activity properties.

Discussion

In this work we studied the graph-theoretic properties of several types of networks, and searched for the most relevant aspects of network structure from the viewpoint of the bursting properties of the network.

Figure 5. The mean and standard deviation of the correlations between the graph measures (see legend) and the activity measures (spike count, burst count, burst length, and burst size), shown separately for binomial (left panels) and power-law (right panels) in-degree distributions. Eqn. 1 is used for calculating the correlation coefficients for each simulation setting separately. The set of networks \mathcal{G} consists of 15 repetitions of each of the N_nt = 30 (29) network types. In the panels on the left, the mean correlation is taken over the correlation coefficients in the twelve simulation settings that use a binomial in-degree distribution, while in the panels on the right the twelve simulation settings with a power-law distribution are used. The faded bars represent pairs of measures with absolute mean correlations smaller than 0.25. The graph measures that were finally chosen for the structure-dynamics study are bolded in the legend. doi:10.1371/journal.pone.0069373.g005

Our framework for network generation allows the use of an arbitrary in-degree distribution. This allows a fair comparison between the dynamics of different network types, given that the distribution of in-degree plays a crucial role in determining the network dynamics [22]. The relevance of the graph-theoretic properties of the network is assessed in a prediction framework. We calculated how much the prediction of an activity property, such as the burst count or the average length of a burst, is improved when the prediction is based on a given graph property. We found that in the networks with a sharp (binomial) in-degree distribution CC plays the most crucial role (Figs. 7 and 8), whereas in networks with a wide (power-law) in-degree distribution MaxEV is the most relevant graph property (Fig. 9C-D). These results are consistent, with few exceptions, across the twelve combinations of the two neuron models (HH and LIF), the two choices of neuron population (excitatory-only and excitatory-inhibitory), and the three connection probabilities (p = 0.16, 0.2, and 0.3). The simulations were run using small (N = 100) networks due to the high computational load needed for the generation and analysis of a large enough data set, but we confirmed our main findings using a small subset of simulations with larger (N = 900) networks (Fig. S6). Our framework, which combines the use of multiple different types of networks, allows the concurrent study of the importance of the different graph measures, namely CC, PL, NB, ODD, DC, LtS, MaxEV, and the motif counts.


More information

The Bayesian Brain. Robert Jacobs Department of Brain & Cognitive Sciences University of Rochester. May 11, 2017

The Bayesian Brain. Robert Jacobs Department of Brain & Cognitive Sciences University of Rochester. May 11, 2017 The Bayesian Brain Robert Jacobs Department of Brain & Cognitive Sciences University of Rochester May 11, 2017 Bayesian Brain How do neurons represent the states of the world? How do neurons represent

More information

Synchrony in Neural Systems: a very brief, biased, basic view

Synchrony in Neural Systems: a very brief, biased, basic view Synchrony in Neural Systems: a very brief, biased, basic view Tim Lewis UC Davis NIMBIOS Workshop on Synchrony April 11, 2011 components of neuronal networks neurons synapses connectivity cell type - intrinsic

More information

A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback Robert Legenstein, Dejan Pecevski, Wolfgang Maass Institute for Theoretical Computer Science Graz

More information

Neural networks. Chapter 20. Chapter 20 1

Neural networks. Chapter 20. Chapter 20 1 Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms

More information

Activity Driven Adaptive Stochastic. Resonance. Gregor Wenning and Klaus Obermayer. Technical University of Berlin.

Activity Driven Adaptive Stochastic. Resonance. Gregor Wenning and Klaus Obermayer. Technical University of Berlin. Activity Driven Adaptive Stochastic Resonance Gregor Wenning and Klaus Obermayer Department of Electrical Engineering and Computer Science Technical University of Berlin Franklinstr. 8/9, 187 Berlin fgrewe,obyg@cs.tu-berlin.de

More information

CSE/NB 528 Final Lecture: All Good Things Must. CSE/NB 528: Final Lecture

CSE/NB 528 Final Lecture: All Good Things Must. CSE/NB 528: Final Lecture CSE/NB 528 Final Lecture: All Good Things Must 1 Course Summary Where have we been? Course Highlights Where do we go from here? Challenges and Open Problems Further Reading 2 What is the neural code? What

More information

Self-organized Criticality and Synchronization in a Pulse-coupled Integrate-and-Fire Neuron Model Based on Small World Networks

Self-organized Criticality and Synchronization in a Pulse-coupled Integrate-and-Fire Neuron Model Based on Small World Networks Commun. Theor. Phys. (Beijing, China) 43 (2005) pp. 466 470 c International Academic Publishers Vol. 43, No. 3, March 15, 2005 Self-organized Criticality and Synchronization in a Pulse-coupled Integrate-and-Fire

More information

Fast and exact simulation methods applied on a broad range of neuron models

Fast and exact simulation methods applied on a broad range of neuron models Fast and exact simulation methods applied on a broad range of neuron models Michiel D Haene michiel.dhaene@ugent.be Benjamin Schrauwen benjamin.schrauwen@ugent.be Ghent University, Electronics and Information

More information

Causality and communities in neural networks

Causality and communities in neural networks Causality and communities in neural networks Leonardo Angelini, Daniele Marinazzo, Mario Pellicoro, Sebastiano Stramaglia TIRES-Center for Signal Detection and Processing - Università di Bari, Bari, Italy

More information

How to do backpropagation in a brain

How to do backpropagation in a brain How to do backpropagation in a brain Geoffrey Hinton Canadian Institute for Advanced Research & University of Toronto & Google Inc. Prelude I will start with three slides explaining a popular type of deep

More information

Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests

Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests Spike-Frequency Adaptation: Phenomenological Model and Experimental Tests J. Benda, M. Bethge, M. Hennig, K. Pawelzik & A.V.M. Herz February, 7 Abstract Spike-frequency adaptation is a common feature of

More information

An analysis of how coupling parameters influence nonlinear oscillator synchronization

An analysis of how coupling parameters influence nonlinear oscillator synchronization An analysis of how coupling parameters influence nonlinear oscillator synchronization Morris Huang, 1 Ben McInroe, 2 Mark Kingsbury, 2 and Will Wagstaff 3 1) School of Mechanical Engineering, Georgia Institute

More information

7 Recurrent Networks of Threshold (Binary) Neurons: Basis for Associative Memory

7 Recurrent Networks of Threshold (Binary) Neurons: Basis for Associative Memory Physics 178/278 - David Kleinfeld - Winter 2019 7 Recurrent etworks of Threshold (Binary) eurons: Basis for Associative Memory 7.1 The network The basic challenge in associative networks, also referred

More information

Analysis of Interest Rate Curves Clustering Using Self-Organising Maps

Analysis of Interest Rate Curves Clustering Using Self-Organising Maps Analysis of Interest Rate Curves Clustering Using Self-Organising Maps M. Kanevski (1), V. Timonin (1), A. Pozdnoukhov(1), M. Maignan (1,2) (1) Institute of Geomatics and Analysis of Risk (IGAR), University

More information

CS:4420 Artificial Intelligence

CS:4420 Artificial Intelligence CS:4420 Artificial Intelligence Spring 2018 Neural Networks Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart

More information

Introduction Biologically Motivated Crude Model Backpropagation

Introduction Biologically Motivated Crude Model Backpropagation Introduction Biologically Motivated Crude Model Backpropagation 1 McCulloch-Pitts Neurons In 1943 Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, published A logical calculus of the

More information

Emergent bursting and synchrony in computer simulations of neuronal cultures

Emergent bursting and synchrony in computer simulations of neuronal cultures Emergent bursting and synchrony in computer simulations of neuronal cultures Niru Maheswaranathan 1, Silvia Ferrari 2, Antonius M.J. VanDongen 3 and Craig S. Henriquez 1 Abstract Experimental studies of

More information

The homogeneous Poisson process

The homogeneous Poisson process The homogeneous Poisson process during very short time interval Δt there is a fixed probability of an event (spike) occurring independent of what happened previously if r is the rate of the Poisson process,

More information

Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995)

Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995) Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten Lecture 2a The Neuron - overview of structure From Anderson (1995) 2 Lect_2a_Mathematica.nb Basic Structure Information flow:

More information

DISCRETE EVENT SIMULATION IN THE NEURON ENVIRONMENT

DISCRETE EVENT SIMULATION IN THE NEURON ENVIRONMENT Hines and Carnevale: Discrete event simulation in the NEURON environment Page 1 Preprint of a manuscript that will be published in Neurocomputing. DISCRETE EVENT SIMULATION IN THE NEURON ENVIRONMENT Abstract

More information

TIME-SEQUENTIAL SELF-ORGANIZATION OF HIERARCHICAL NEURAL NETWORKS. Ronald H. Silverman Cornell University Medical College, New York, NY 10021

TIME-SEQUENTIAL SELF-ORGANIZATION OF HIERARCHICAL NEURAL NETWORKS. Ronald H. Silverman Cornell University Medical College, New York, NY 10021 709 TIME-SEQUENTIAL SELF-ORGANIZATION OF HIERARCHICAL NEURAL NETWORKS Ronald H. Silverman Cornell University Medical College, New York, NY 10021 Andrew S. Noetzel polytechnic University, Brooklyn, NY 11201

More information

2 1. Introduction. Neuronal networks often exhibit a rich variety of oscillatory behavior. The dynamics of even a single cell may be quite complicated

2 1. Introduction. Neuronal networks often exhibit a rich variety of oscillatory behavior. The dynamics of even a single cell may be quite complicated GEOMETRIC ANALYSIS OF POPULATION RHYTHMS IN SYNAPTICALLY COUPLED NEURONAL NETWORKS J. Rubin and D. Terman Dept. of Mathematics; Ohio State University; Columbus, Ohio 43210 Abstract We develop geometric

More information

Hidden Markov Models Part 1: Introduction

Hidden Markov Models Part 1: Introduction Hidden Markov Models Part 1: Introduction CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 Modeling Sequential Data Suppose that

More information

Exercises. Chapter 1. of τ approx that produces the most accurate estimate for this firing pattern.

Exercises. Chapter 1. of τ approx that produces the most accurate estimate for this firing pattern. 1 Exercises Chapter 1 1. Generate spike sequences with a constant firing rate r 0 using a Poisson spike generator. Then, add a refractory period to the model by allowing the firing rate r(t) to depend

More information

Neural networks. Chapter 19, Sections 1 5 1

Neural networks. Chapter 19, Sections 1 5 1 Neural networks Chapter 19, Sections 1 5 Chapter 19, Sections 1 5 1 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural networks Chapter 19, Sections 1 5 2 Brains 10

More information

+ + ( + ) = Linear recurrent networks. Simpler, much more amenable to analytic treatment E.g. by choosing

+ + ( + ) = Linear recurrent networks. Simpler, much more amenable to analytic treatment E.g. by choosing Linear recurrent networks Simpler, much more amenable to analytic treatment E.g. by choosing + ( + ) = Firing rates can be negative Approximates dynamics around fixed point Approximation often reasonable

More information

3 Detector vs. Computer

3 Detector vs. Computer 1 Neurons 1. The detector model. Also keep in mind this material gets elaborated w/the simulations, and the earliest material is often hardest for those w/primarily psych background. 2. Biological properties

More information

1. Introductory Examples

1. Introductory Examples 1. Introductory Examples We introduce the concept of the deterministic and stochastic simulation methods. Two problems are provided to explain the methods: the percolation problem, providing an example

More information

Discrete and Indiscrete Models of Biological Networks

Discrete and Indiscrete Models of Biological Networks Discrete and Indiscrete Models of Biological Networks Winfried Just Ohio University November 17, 2010 Who are we? What are we doing here? Who are we? What are we doing here? A population of interacting

More information

AT2 Neuromodeling: Problem set #3 SPIKE TRAINS

AT2 Neuromodeling: Problem set #3 SPIKE TRAINS AT2 Neuromodeling: Problem set #3 SPIKE TRAINS Younesse Kaddar PROBLEM 1: Poisson spike trains Link of the ipython notebook for the code Brain neuron emit spikes seemingly randomly: we will aim to model

More information

CISC 3250 Systems Neuroscience

CISC 3250 Systems Neuroscience CISC 3250 Systems Neuroscience Systems Neuroscience How the nervous system performs computations How groups of neurons work together to achieve intelligence Professor Daniel Leeds dleeds@fordham.edu JMH

More information

In biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required.

In biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required. In biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required. In humans, association is known to be a prominent feature of memory.

More information

Introduction to Neural Networks

Introduction to Neural Networks Introduction to Neural Networks What are (Artificial) Neural Networks? Models of the brain and nervous system Highly parallel Process information much more like the brain than a serial computer Learning

More information

Algorithm-Independent Learning Issues

Algorithm-Independent Learning Issues Algorithm-Independent Learning Issues Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2007 c 2007, Selim Aksoy Introduction We have seen many learning

More information

Delayed and Higher-Order Transfer Entropy

Delayed and Higher-Order Transfer Entropy Delayed and Higher-Order Transfer Entropy Michael Hansen (April 23, 2011) Background Transfer entropy (TE) is an information-theoretic measure of directed information flow introduced by Thomas Schreiber

More information

Neurophysiology of a VLSI spiking neural network: LANN21

Neurophysiology of a VLSI spiking neural network: LANN21 Neurophysiology of a VLSI spiking neural network: LANN21 Stefano Fusi INFN, Sezione Roma I Università di Roma La Sapienza Pza Aldo Moro 2, I-185, Roma fusi@jupiter.roma1.infn.it Paolo Del Giudice Physics

More information

ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD

ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD WHAT IS A NEURAL NETWORK? The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided

More information

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops

Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops Math. Model. Nat. Phenom. Vol. 5, No. 2, 2010, pp. 67-99 DOI: 10.1051/mmnp/20105203 Patterns, Memory and Periodicity in Two-Neuron Delayed Recurrent Inhibitory Loops J. Ma 1 and J. Wu 2 1 Department of

More information

CSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18

CSE 417T: Introduction to Machine Learning. Final Review. Henry Chai 12/4/18 CSE 417T: Introduction to Machine Learning Final Review Henry Chai 12/4/18 Overfitting Overfitting is fitting the training data more than is warranted Fitting noise rather than signal 2 Estimating! "#$

More information

ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92

ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000

More information

Deep Feedforward Networks. Sargur N. Srihari

Deep Feedforward Networks. Sargur N. Srihari Deep Feedforward Networks Sargur N. srihari@cedar.buffalo.edu 1 Topics Overview 1. Example: Learning XOR 2. Gradient-Based Learning 3. Hidden Units 4. Architecture Design 5. Backpropagation and Other Differentiation

More information

Neural Networks and Ensemble Methods for Classification

Neural Networks and Ensemble Methods for Classification Neural Networks and Ensemble Methods for Classification NEURAL NETWORKS 2 Neural Networks A neural network is a set of connected input/output units (neurons) where each connection has a weight associated

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks 鮑興國 Ph.D. National Taiwan University of Science and Technology Outline Perceptrons Gradient descent Multi-layer networks Backpropagation Hidden layer representations Examples

More information

Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals

Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals Olav Stetter 1,2,3, Demian Battaglia 1,3 *, Jordi Soriano 4, Theo Geisel 1,2,3 1 Max Planck Institute for Dynamics

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Pattern Recognition Feature Extraction Hamid R. Rabiee Jafar Muhammadi, Alireza Ghasemi, Payam Siyari Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Dimensionality Reduction

More information

IN THIS turorial paper we exploit the relationship between

IN THIS turorial paper we exploit the relationship between 508 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 3, MAY 1999 Weakly Pulse-Coupled Oscillators, FM Interactions, Synchronization, Oscillatory Associative Memory Eugene M. Izhikevich Abstract We study

More information

Neural Excitability in a Subcritical Hopf Oscillator with a Nonlinear Feedback

Neural Excitability in a Subcritical Hopf Oscillator with a Nonlinear Feedback Neural Excitability in a Subcritical Hopf Oscillator with a Nonlinear Feedback Gautam C Sethia and Abhijit Sen Institute for Plasma Research, Bhat, Gandhinagar 382 428, INDIA Motivation Neural Excitability

More information

Ranking Neurons for Mining Structure-Activity Relations in Biological Neural Networks: NeuronRank

Ranking Neurons for Mining Structure-Activity Relations in Biological Neural Networks: NeuronRank Ranking Neurons for Mining Structure-Activity Relations in Biological Neural Networks: NeuronRank Tayfun Gürel a,b,1, Luc De Raedt a,b, Stefan Rotter a,c a Bernstein Center for Computational Neuroscience,

More information

THE TRANSFER AND PROPAGATION OF CORRELATED NEURONAL ACTIVITY

THE TRANSFER AND PROPAGATION OF CORRELATED NEURONAL ACTIVITY THE TRANSFER AND PROPAGATION OF CORRELATED NEURONAL ACTIVITY A Dissertation Presented to the Faculty of the Department of Mathematics University of Houston In Partial Fulfillment of the Requirements for

More information

arxiv:cond-mat/ v1 [cond-mat.dis-nn] 4 May 2000

arxiv:cond-mat/ v1 [cond-mat.dis-nn] 4 May 2000 Topology of evolving networks: local events and universality arxiv:cond-mat/0005085v1 [cond-mat.dis-nn] 4 May 2000 Réka Albert and Albert-László Barabási Department of Physics, University of Notre-Dame,

More information

How to read a burst duration code

How to read a burst duration code Neurocomputing 58 60 (2004) 1 6 www.elsevier.com/locate/neucom How to read a burst duration code Adam Kepecs a;, John Lisman b a Cold Spring Harbor Laboratory, Marks Building, 1 Bungtown Road, Cold Spring

More information

High-conductance states in a mean-eld cortical network model

High-conductance states in a mean-eld cortical network model Neurocomputing 58 60 (2004) 935 940 www.elsevier.com/locate/neucom High-conductance states in a mean-eld cortical network model Alexander Lerchner a;, Mandana Ahmadi b, John Hertz b a Oersted-DTU, Technical

More information

Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics Processing of Time Series by Neural Circuits with iologically Realistic Synaptic Dynamics Thomas Natschläger & Wolfgang Maass Institute for Theoretical Computer Science Technische Universität Graz, ustria

More information

Neural Networks 1 Synchronization in Spiking Neural Networks

Neural Networks 1 Synchronization in Spiking Neural Networks CS 790R Seminar Modeling & Simulation Neural Networks 1 Synchronization in Spiking Neural Networks René Doursat Department of Computer Science & Engineering University of Nevada, Reno Spring 2006 Synchronization

More information

Chapter 14 Semiconductor Laser Networks: Synchrony, Consistency, and Analogy of Synaptic Neurons

Chapter 14 Semiconductor Laser Networks: Synchrony, Consistency, and Analogy of Synaptic Neurons Chapter 4 Semiconductor Laser Networks: Synchrony, Consistency, and Analogy of Synaptic Neurons Abstract Synchronization among coupled elements is universally observed in nonlinear systems, such as in

More information

Balance of Electric and Diffusion Forces

Balance of Electric and Diffusion Forces Balance of Electric and Diffusion Forces Ions flow into and out of the neuron under the forces of electricity and concentration gradients (diffusion). The net result is a electric potential difference

More information

Neural networks. Chapter 20, Section 5 1

Neural networks. Chapter 20, Section 5 1 Neural networks Chapter 20, Section 5 Chapter 20, Section 5 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural networks Chapter 20, Section 5 2 Brains 0 neurons of

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others)

Machine Learning. Neural Networks. (slides from Domingos, Pardo, others) Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward

More information

Revision: Neural Network

Revision: Neural Network Revision: Neural Network Exercise 1 Tell whether each of the following statements is true or false by checking the appropriate box. Statement True False a) A perceptron is guaranteed to perfectly learn

More information

Artificial Neural Networks Examination, March 2004

Artificial Neural Networks Examination, March 2004 Artificial Neural Networks Examination, March 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

9 Generation of Action Potential Hodgkin-Huxley Model

9 Generation of Action Potential Hodgkin-Huxley Model 9 Generation of Action Potential Hodgkin-Huxley Model (based on chapter 12, W.W. Lytton, Hodgkin-Huxley Model) 9.1 Passive and active membrane models In the previous lecture we have considered a passive

More information

Temporal whitening by power-law adaptation in neocortical neurons

Temporal whitening by power-law adaptation in neocortical neurons Temporal whitening by power-law adaptation in neocortical neurons Christian Pozzorini, Richard Naud, Skander Mensi and Wulfram Gerstner School of Computer and Communication Sciences and School of Life

More information

Lecture 2: Connectionist approach Elements and feedforward networks

Lecture 2: Connectionist approach Elements and feedforward networks Short Course: Computation of Olfaction Lecture 2 Lecture 2: Connectionist approach Elements and feedforward networks Dr. Thomas Nowotny University of Sussex Connectionist approach The connectionist approach

More information

Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons

Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons PHYSICAL REVIEW E 69, 051918 (2004) Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons Magnus J. E. Richardson* Laboratory of Computational Neuroscience, Brain

More information

Topics in Neurophysics

Topics in Neurophysics Topics in Neurophysics Alex Loebel, Martin Stemmler and Anderas Herz Exercise 2 Solution (1) The Hodgkin Huxley Model The goal of this exercise is to simulate the action potential according to the model

More information

Information Theory and Neuroscience II

Information Theory and Neuroscience II John Z. Sun and Da Wang Massachusetts Institute of Technology October 14, 2009 Outline System Model & Problem Formulation Information Rate Analysis Recap 2 / 23 Neurons Neuron (denoted by j) I/O: via synapses

More information

/639 Final Solutions, Part a) Equating the electrochemical potentials of H + and X on outside and inside: = RT ln H in

/639 Final Solutions, Part a) Equating the electrochemical potentials of H + and X on outside and inside: = RT ln H in 580.439/639 Final Solutions, 2014 Question 1 Part a) Equating the electrochemical potentials of H + and X on outside and inside: RT ln H out + zf 0 + RT ln X out = RT ln H in F 60 + RT ln X in 60 mv =

More information

STUDENT PAPER. Santiago Santana University of Illinois, Urbana-Champaign Blue Waters Education Program 736 S. Lombard Oak Park IL, 60304

STUDENT PAPER. Santiago Santana University of Illinois, Urbana-Champaign Blue Waters Education Program 736 S. Lombard Oak Park IL, 60304 STUDENT PAPER Differences between Stochastic and Deterministic Modeling in Real World Systems using the Action Potential of Nerves. Santiago Santana University of Illinois, Urbana-Champaign Blue Waters

More information

Temporal Pattern Analysis

Temporal Pattern Analysis LIACS Leiden Institute of Advanced Computer Science Master s Thesis June 17, 29 Temporal Pattern Analysis Using Reservoir Computing Author: Ron Vink Supervisor: Dr. Walter Kosters 1 Contents 1 Introduction

More information

Using a Hopfield Network: A Nuts and Bolts Approach

Using a Hopfield Network: A Nuts and Bolts Approach Using a Hopfield Network: A Nuts and Bolts Approach November 4, 2013 Gershon Wolfe, Ph.D. Hopfield Model as Applied to Classification Hopfield network Training the network Updating nodes Sequencing of

More information

Feedforward Neural Nets and Backpropagation

Feedforward Neural Nets and Backpropagation Feedforward Neural Nets and Backpropagation Julie Nutini University of British Columbia MLRG September 28 th, 2016 1 / 23 Supervised Learning Roadmap Supervised Learning: Assume that we are given the features

More information

Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity Bernhard Nessler 1 *, Michael Pfeiffer 1,2, Lars Buesing 1, Wolfgang Maass 1 1 Institute for Theoretical

More information