Partially structured synaptic transitions

F.P. Battaglia
Istituto di Fisica, Università di Roma La Sapienza, P.le Aldo Moro, Roma
(Present address: S.I.S.S.A., Via Beirut 2-4, Trieste)

S. Fusi
INFN, Sezione dell'Istituto Superiore di Sanità, Viale Regina Elena 299, Roma

September 7, 1995

Abstract

We show that stochastic learning of attractors can take place in a situation in which either only the potentiation or only the depression of synaptic efficacies is caused in a structured, Hebbian way. In each case the transitions in the opposite direction take place at random, but occur only upon presentation of a stimulus. The outcome is an associative memory with the palimpsest property. It is shown that structured potentiation produces more effective learning than structured depression, i.e. it creates a network with a much higher number of retrievable memories.

Introduction

Unsupervised learning of uncorrelated stimuli in attractor networks was recently described [1] as a stochastic process on the distribution of synaptic values characterizing the network. In this approach synapses have a finite number of stable states (efficacies), and learning is schematized as a random walk among them. The probability of each step is determined by the activities of the two neurons connected by the synapse. Neuronal states are in turn driven by external stimuli, represented as a stream of words, or patterns, each consisting of N 0's and 1's. Patterns are uncorrelated with each other and have a given coding level f, which is the average fraction of neurons driven by a stimulus. Stimuli drive synaptic transitions by means of the activity correlations of the two neurons connected by each synapse: if both neurons on a synapse have high activity there is a non-zero probability for the synapse to make a transition into a higher-efficacy state; anticorrelated activities produce transitions depressing the synaptic efficacy; two inactive neurons leave the synapse unchanged.

Synaptic modifications induced by more recently presented patterns stack upon those produced by older patterns. As more patterns are presented, traces of older ones gradually fade away. After the learning of a given pattern, the network can learn other patterns without destroying the retrieval of the first one. Thus, the network reaches, and remains in, a steady state, with new patterns learned starting from an asymptotic distribution of synaptic weights. The same number of patterns is always stored in memory, ready for retrieval, and the retrievable patterns are the most recently learned ones.

If we consider other learning schemes, in which we start from a distribution of connection weights different from the asymptotic one, we obtain other memory behaviours: in some situations the network stores the first patterns presented with more strength than the following ones, and the memory of the former lasts longer than the memory of the latter.

With certain parameter choices, the network eventually becomes incapable of learning (and forgetting) new patterns, so that only the earliest patterns presented remain stored (the primacy phenomenon [2]). Here we concentrate on the steady-state memory, in which only a constant number of the newest patterns is stored in the network (the recency phenomenon [2]). The maximum number of such patterns represents the storage capacity of the network: it is the number of the most recent stimuli that can be recalled. In [1] this limit is investigated by checking the bitwise stability of a recalled pattern through a signal-to-noise computation. In other words, after the learning of p uncorrelated patterns in the stream, the first pattern is presented to the network for retrieval, and the synaptic input generated by this pattern, given the resulting learned synaptic matrix, is checked to see whether it reproduces the originally learned pattern. It turns out that, with fixed learning parameters and a stream of uncorrelated patterns, a network cannot store any pattern without errors. Allowing the coding level f and the transition probabilities to vary appropriately with the number of neurons N as N → ∞, one can achieve optimal storage and retrieve as many as (N/log N)^2 patterns.

Weaker learning requirements

In [1] the structure of the correlations in a learned pattern determines the probability of both synaptic potentiation and depression. A synapse is potentiated (makes a transition to a stable synaptic state of higher efficacy) with probability q_+ if the neurons connected by the synapse have correlated activities. If the two neurons have anticorrelated activities, depression takes place with probability q_-. Here we report that learning and recall of patterns are possible even when the requirements on the learning process are weaker. Such weaker conditions could be that only the synaptic transitions in one of the two possible directions (potentiation or depression) depend on the correlations of the neural activities in the stimuli, while the transitions in the other direction are random. In both cases the qualitative behaviour of the network in learning and retrieval is maintained. Two learning scenarios with examples of weaker requirements are:

1. A synapse J_ij is potentiated with probability q_+ if ξ_i = ξ_j = 1 (both neurons connected by the synapse are active). Depression (the transition to a stable synaptic state of lower efficacy) takes place at random, with probability z, independently of the activity values, during the presentation of a pattern.

2. A synapse J_ij is depressed (or pruned) with probability q_- if the two neurons connected by it have anticorrelated activities. Potentiation takes place at random, independently of the activity values, with probability r, during the presentation of a pattern.

It should be underlined that in both options the random transitions have been coupled to the presentation of the patterns, even though they are partially independent of their structure. This restriction is introduced to avoid the appearance of an additional time scale. If any type of transition became independent of the presentation of stimuli, the synaptic states could no longer be considered stable, and their transition probability would determine a time scale on which memory is destroyed spontaneously: following a long enough interval in which no pattern is presented to the network, the synaptic structure would become completely dominated by the random transitions. Possible biological underpinnings of the modeled possibilities will be discussed at the end.

In what follows the scope is limited to synapses with two stable values 0, 1 and neurons with 0, 1 activities.
As in ref. [1], most effects of interest appear already in this reduced context. It is found that in both cases the network is able to learn, in the sense that patterns that have been learned in the past can be associatively recalled, and the learning process produces a palimpsest [2, 3, 4, 5].
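As an aside (not part of the original paper), the following Python sketch shows one way the two partially structured update rules described above could be simulated at the level of individual binary synapses; the function name, parameter values and matrix layout are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def present_pattern(J, xi, scenario, q=0.5, p_rand=0.01):
    """Apply one stimulus presentation to a matrix of binary (0/1) synapses.

    J        : (N, N) array of efficacies; J[i, j] connects presynaptic j to postsynaptic i
    xi       : (N,) array of 0/1 activities imposed by the stimulus
    scenario : 1 -> potentiation with prob. q when both neurons are active,
                    depression at random with prob. p_rand (only during a presentation)
               2 -> depression with prob. q when the two activities are anticorrelated,
                    potentiation at random with prob. p_rand (only during a presentation)
    """
    pre = xi[np.newaxis, :]
    post = xi[:, np.newaxis]
    coin = rng.random(J.shape)
    if scenario == 1:
        up = (pre == 1) & (post == 1) & (coin < q)   # structured, Hebbian up transitions
        down = rng.random(J.shape) < p_rand          # unstructured down transitions
    else:
        up = rng.random(J.shape) < p_rand            # unstructured up transitions
        down = (pre != post) & (coin < q)            # structured pruning transitions
    J_new = J.copy()
    J_new[(J == 0) & up] = 1                         # only a 0-synapse can be potentiated
    J_new[(J == 1) & down] = 0                       # only a 1-synapse can be depressed
    return J_new

# Example: a short stream of sparse random patterns learned under scenario 1.
N, f = 200, 0.05
J = np.zeros((N, N), dtype=int)
for _ in range(50):
    xi = (rng.random(N) < f).astype(int)
    J = present_pattern(J, xi, scenario=1)
print("fraction of potentiated synapses:", J.mean())
```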

If the learning parameters are kept fixed, no pattern can be stored, due to the correlations in the synaptic structure. By scaling the learning parameters, i.e. allowing f, q_+ (q_-) and z (r) to decrease with increasing N, one can achieve a network which learns continuously and can maintain a much higher number of distinct patterns. But here the two scenarios part ways: in scenario 1, when potentiation is structured, the capacity can become as large as (N/log N)^2 patterns, while for scenario 2 no manipulation of the parameters can make the capacity larger than N/(log N)^2.

The theoretical framework

The framework is the same as that of ref. [1]: the network is presented a stream of p uncorrelated stimuli. Since the synapses are assumed to have a finite number of states, after the removal of each stimulus the synaptic efficacies converge rapidly to one of the stable states. So, on long time scales, the learning process can be seen as a walk among the stable states. This walk is random for two reasons:

1. the stimuli are assumed uncorrelated, so each synapse sees a random sequence of pairs of activities (ξ_i, ξ_j) of the two neurons connected by it [1, 6];

2. the transition from one stable synaptic state to another may itself be stochastic [1, 2]. There are at least two possible sources of synaptic stochasticity: the dynamics of the synaptic modification and the random fluctuations in the spike rates of the two neurons connected by the synapse.

This stochastic process is described by a conditional probability distribution ρ_p^J(ξ, ξ̃) for the synaptic values, giving the probability of finding the synaptic value J following the presentation of p uncorrelated patterns, conditional on the appearance of the neural activities ξ, ξ̃ in the presentation of the first of the p patterns. Since the synapses are assumed to have a finite number of states, the stochastic process is generically ergodic and leads to an asymptotic distribution after a large enough number of presentations. This distribution is taken to be the initial distribution upon which learning takes place. The conditional distribution ρ_p^J(ξ, ξ̃) is also driven towards the asymptotic form, as p → ∞, by the presentation of the p uncorrelated patterns following the first one (the one that produced the conditioning). The dynamics of the conditional distribution is

    \rho^J_p(\xi, \tilde\xi) = \sum_{K=0,1} \rho^K_1(\xi, \tilde\xi)\, \big(M^{p-1}\big)_{KJ},    (1)

where M is the stochastic transition matrix of the process: M_{KJ} is the probability of going from state K to state J. For instance, consider the model of [1], in which the network is presented sparse patterns (f is the mean fraction of active neurons), the synapse has two stable states, and both depression and potentiation are structured:

    M = \begin{pmatrix} 1 - f(1-f)\, q_- & f(1-f)\, q_- \\ f^2 q_+ & 1 - f^2 q_+ \end{pmatrix}    (2)

Here f^2 q_+ is the probability of potentiation (the requirement of two active neurons produces the f^2 factor), and f(1-f) q_- is the probability of depression.
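To make the Markov-chain picture concrete, here is a small Python sketch (our own illustration; the parameter values are arbitrary) that builds the transition matrix of Eq. (2), extracts the asymptotic palimpsest distribution, and applies Eq. (1) to follow how the memory of the first pattern is overwritten by subsequent presentations.

```python
import numpy as np

f, q_plus, q_minus = 0.05, 0.5, 0.5   # illustrative values only

# Transition matrix of Eq. (2); index 0 <-> synaptic value 1, index 1 <-> value 0,
# rows labelling the initial state.
M = np.array([
    [1 - f * (1 - f) * q_minus, f * (1 - f) * q_minus],  # from J = 1: stay or depress
    [f**2 * q_plus,             1 - f**2 * q_plus],      # from J = 0: potentiate or stay
])

# Asymptotic (palimpsest) distribution: left eigenvector of M with eigenvalue 1.
w, v = np.linalg.eig(M.T)
rho_inf = np.real(v[:, np.argmax(np.real(w))])
rho_inf /= rho_inf.sum()

# 'One-shot' learning of pattern 1 for a synapse whose two neurons are both active in
# that pattern: a 0-synapse is potentiated with probability q_plus, a 1-synapse stays put.
M_first = np.array([[1.0, 0.0],
                    [q_plus, 1.0 - q_plus]])
rho = rho_inf @ M_first

# Eq. (1): each of the following uncorrelated patterns multiplies the distribution by M,
# so the trace of pattern 1 relaxes back towards rho_inf.
for p in range(2, 202):
    rho = rho @ M
    if p % 50 == 0:
        print(f"p = {p:4d}   P(J=1 | both active in pattern 1) = {rho[0]:.4f}"
              f"   asymptotic p_+ = {rho_inf[0]:.4f}")
```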

Given ρ_p^J(ξ, ξ̃), one computes the statistics of the synaptic inputs to the neurons upon the presentation, for retrieval, of the first of the p patterns. The mean synaptic input to a neuron which had activity ξ during the presentation of ξ^1 is

    \langle h_i \rangle_\xi = \Big\langle \frac{1}{N} \sum_{j=1}^{N} J_{ij}(p)\, \xi^1_j \Big\rangle = \sum_{\tilde\xi} \sum_{J=0,1} \gamma_{\tilde\xi}\, J\, \rho^J_p(\xi, \tilde\xi),    (3)

where γ_ξ̃ is the probability that stimulus number 1 induced the state ξ̃ on a neuron. For uncorrelated and sparse patterns we have γ_1 = f, γ_0 = 1 - f (see ref. [1]).

Retrieval is possible if the distribution of h_i is such that there exists a threshold separating the inputs to the neurons which had been active in the learned pattern (⟨h_i⟩_1) from those to the neurons which had been quiescent (⟨h_i⟩_0). So we define the signal of pattern 1 as the distance between the averages of the two distributions:

    S = \langle h_i \rangle_1 - \langle h_i \rangle_0 = \sum_{\tilde\xi} \sum_{J=0,1} \gamma_{\tilde\xi}\, J\, \big[ \rho^J_p(1, \tilde\xi) - \rho^J_p(0, \tilde\xi) \big].    (4)

The fluctuations of the two neural inputs are estimated by R^2(ξ) = ⟨(h_i - ⟨h_i⟩_ξ)^2⟩. If the random variables h_i are Gaussian, the total noise is R^2 = ½ [R^2(1) + R^2(0)], and each term is given by [1]:

    \big\langle (h_i - \langle h_i \rangle_\xi)^2 \big\rangle
      = \Big\langle \frac{1}{N^2} \sum_{j \neq i} \sum_{k \neq i} J_{ij} J_{ik}\, \xi^1_j \xi^1_k \Big\rangle - \langle h_i \rangle_\xi^2
      = \frac{N-1}{N^2} \sum_{\tilde\xi} \sum_{J} \gamma_{\tilde\xi}\, J^2 \rho^J_p(\xi, \tilde\xi)
        + \frac{(N-1)(N-2)}{N^2} \sum_{\tilde\xi, \tilde\xi'} \sum_{J, J'} \gamma_{\tilde\xi}\, \gamma_{\tilde\xi'}\, J J'\, \rho^{JJ'}_p(\xi; \tilde\xi, \tilde\xi')
        - \langle h_i \rangle_\xi^2    (5)

The first term is computed as if the synaptic values were uncorrelated. The second term is due to the correlations between J_ij and J_ik, which have a neuron in common; ρ_p^{JJ'}(ξ; ξ̃, ξ̃') is the joint distribution of J_ij and J_ik conditional on the appearance of the values ξ, ξ̃, ξ̃' on neurons i, j, k in the first pattern.

The evolution of the conditional distribution following the presentation of ξ^1 can be fully described in terms of the eigenvalues and eigenvectors of M. We have (see e.g. [7])

    \rho^J_p = \rho^J_\infty + \lambda^{p-1} \sum_{K} \rho^K_1(\xi, \tilde\xi)\, u_K v_J,    (6)

where ρ_∞^J is the asymptotic distribution (the term corresponding to the unit eigenvalue), which is the palimpsest on top of which the new pattern ξ^1 is learned; λ is the subleading eigenvalue of M; ρ_1^K(ξ, ξ̃) is the conditional distribution produced on top of the palimpsest immediately following the presentation of ξ^1, and represents the extent of the 'one-shot' learning; u_K and v_J are, respectively, the right and left eigenvectors of M corresponding to λ.
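The geometric decay in Eq. (6) is easy to check numerically. The short Python sketch below (an illustration of ours, with arbitrary parameter values) extracts the subleading eigenvalue λ of the matrix in Eq. (2) and verifies that the distance of the conditional distribution from the asymptotic one shrinks by exactly a factor λ at every presentation.

```python
import numpy as np

f, q_plus, q_minus = 0.05, 0.5, 0.5          # arbitrary illustrative values

M = np.array([
    [1 - f * (1 - f) * q_minus, f * (1 - f) * q_minus],
    [f**2 * q_plus,             1 - f**2 * q_plus],
])

lam = float(np.min(np.real(np.linalg.eigvals(M))))   # subleading eigenvalue (the other one is 1)

# Asymptotic distribution of this two-state chain (index 0 <-> J = 1, index 1 <-> J = 0).
a, b = f**2 * q_plus, f * (1 - f) * q_minus
rho_inf = np.array([a / (a + b), b / (a + b)])

# Conditional distribution right after pattern 1, for a synapse between two active neurons.
rho = np.array([rho_inf[0] + q_plus * rho_inf[1],
                (1 - q_plus) * rho_inf[1]])

d1 = np.abs(rho - rho_inf).sum()
for p in range(2, 7):
    rho = rho @ M
    # The distance after p - 1 further presentations should equal lam**(p-1) * d1, per Eq. (6).
    print(p, np.abs(rho - rho_inf).sum(), lam**(p - 1) * d1)
```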

Memory maintenance is tested by evaluating the signal-to-noise ratio upon presentation, as a stimulus for retrieval, of one of the patterns learned in the past. Assuming a Gaussian distribution for the inputs, one uses the criterion that if the squared signal-to-noise ratio satisfies

    \frac{S^2}{R^2} > \log N    (7)

as N becomes large, retrieval is assured; see e.g. refs. [1, 8].

Structured potentiation and random depression

In this scenario (no. 1) the transition matrix is

    M = \begin{pmatrix} 1 - z & z \\ f^2 q_+ & 1 - f^2 q_+ \end{pmatrix}    (8)

in which the first row (column) corresponds to the initial (final) synaptic value 1. z is the probability of leaving the upper state (depression) following the presentation of a stimulus. f^2 q_+ is the probability of potentiation: it occurs with probability q_+ when both neurons on the synapse have high activity (the f^2 factor) during the presentation of the stimulus. Note that the probabilities of the unstructured transitions, z and 1 - z, intervene together with the structured ones; this is a consequence of our assumption that unstructured depressions occur only during the presentation of a stimulus. This matrix has the eigenvalues

    \lambda_1 = 1, \qquad \lambda = 1 - z - f^2 q_+ < 1.    (9)

The asymptotic synaptic distribution is

    \rho_\infty = (p_+, p_-) = \left( \frac{f^2 q_+}{z + f^2 q_+},\ \frac{z}{z + f^2 q_+} \right)    (10)

where p_+ and p_- are the probabilities of finding a 1 or a 0 synapse, respectively, in the asymptotic palimpsest, which carries no memory. The expression for the signal is (Eq. 4 and [1])

    S = \lambda^{p-1}\, q_+ p_- f.    (11)

The uncorrelated part of the noise is

    R^2 = \frac{f}{N}\, \big[ p_+ - p_+^2 f + O(f \lambda^{p-1}) \big].    (12)

It is evident from Eq. (5) that the correlation term is O(1) in N when N is large; therefore it dominates in the limit N → ∞. With fixed learning parameters one then has, for the signal-to-noise ratio,

    \frac{S^2}{R^2} \sim \lambda^{2(p-1)} \cdot \mathrm{const},    (13)

and condition (7) is never fulfilled, not even for a single stored pattern. Actually the Weisbuch and Fogelman-Soulié condition (7) is a very strict constraint: it implies that a pattern can be retrieved with no errors at all, with probability 1, in the limit N → ∞ [8]. The stochastic nature of the learning dynamics is such that this constraint cannot be satisfied for any number of stored patterns, mainly because of the correlations induced in the synaptic structure. If f tends to 0 in an appropriate fashion as N → ∞, the correlations become negligible, and the signal-to-noise computation can proceed with the uncorrelated noise term alone; see e.g. [1].
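As a numerical companion to Eqs. (8)-(12), the sketch below (our own; the constants are chosen only so that at least one pattern satisfies the retrieval criterion (7)) evaluates the asymptotic occupancies, the signal of Eq. (11) and the uncorrelated noise of Eq. (12) for scenario 1, and prints the resulting signal-to-noise ratio as a function of the age p of the retrieved pattern; a pattern counts as retrievable while (S/R)^2 stays above log N.

```python
import numpy as np

N = 10_000
f = np.log(N) / N                 # sparse coding level, f ~ log(N)/N
q_plus = 1.0
z = 2 * f**2                      # random depression of the same order as f^2 q_+

lam = 1 - z - f**2 * q_plus                      # subleading eigenvalue, Eq. (9)
p_plus = f**2 * q_plus / (z + f**2 * q_plus)     # asymptotic fraction of 1-synapses, Eq. (10)
p_minus = z / (z + f**2 * q_plus)

for p in [1, 10**3, 10**4, 10**5]:
    S = lam**(p - 1) * q_plus * p_minus * f      # signal, Eq. (11)
    R2 = (f / N) * (p_plus - p_plus**2 * f)      # uncorrelated noise, Eq. (12), leading terms
    snr2 = S**2 / R2
    ok = "retrievable" if snr2 > np.log(N) else "lost"
    print(f"p = {p:7d}   (S/R)^2 = {snr2:8.2f}   log N = {np.log(N):5.2f}   -> {ok}")
```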

The condition for the uncorrelated part of the noise to dominate is

    \frac{f}{N} \gg p f^5.    (14)

We note that this condition is looser than the corresponding one in [1] (f/N ≫ p f^3), where both potentiation and depression are structured. This is due to the fact that here only the upward transitions generate correlations in the synaptic structure, and each of them requires two neurons to be active, implying a factor f^2. For example, if we choose f ∼ log N/N and p ∼ N^3/(log N)^4, the condition is satisfied and we can consider the uncorrelated term only. To leading order in f, the signal-to-noise ratio is

    \frac{S^2}{R^2} = \lambda^{2(p-1)}\, \frac{N f\, (q_+ p_-)^2}{p_+}.    (15)

The condition (S/R)^2 > log N then reads

    p_c < \frac{1}{-2 \log \lambda}\, \log \frac{N f\, (q_+ p_-)^2}{(\log N)\, p_+},    (16)

provided

    \frac{N f\, (q_+ p_-)^2}{(\log N)\, p_+} > 1.    (17)

This last condition must be satisfied in order to allow storage and retrieval of at least one pattern: it is equivalent to imposing condition (7) on the most recently learned pattern (see Eq. (15) with p = 1), and it represents a constraint on the learning parameters.

Now, if we put z ∼ f^2 ∼ (log N/N)^2 and keep q_+ fixed, we have λ = 1 - x with x = O((log N/N)^2). Expanding the denominator of Eq. (16) we find

    p_c \sim \left( \frac{N}{\log N} \right)^2,    (18)

provided, of course, that condition (17) is still satisfied. In fact, for N large, the N dependence on the left-hand side of Eq. (17) cancels out, and the condition can be satisfied by choosing appropriately the values of the finite constants q_+, fN/log N and z/f^2.

With parameter values yielding the optimal storage level, the average numbers of synaptic up and down transitions are of the same order. Hence, in this scenario, as in [1], the best storage is of order (N/log N)^2: the weaker conditions on the learning transitions do not change the dependence of the capacity on N for large N, and at worst lower the capacity by a finite multiplicative factor.
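Equation (16) can be evaluated directly under the scaling just described. The sketch below (ours; the constants c_f and c_z are hypothetical choices made only so that condition (17) holds) prints the resulting p_c for a few network sizes next to (N/log N)^2, making the scaling of Eq. (18) visible.

```python
import numpy as np

def capacity_scenario1(N, c_f=1.0, q_plus=1.0, c_z=2.0):
    """Critical capacity of Eq. (16) with f = c_f*log(N)/N and z = c_z*f**2."""
    f = c_f * np.log(N) / N
    z = c_z * f**2
    lam = 1 - z - f**2 * q_plus                      # Eq. (9)
    p_plus = f**2 * q_plus / (z + f**2 * q_plus)     # Eq. (10)
    p_minus = z / (z + f**2 * q_plus)
    arg = N * f * (q_plus * p_minus)**2 / (np.log(N) * p_plus)
    if arg <= 1.0:                                   # condition (17) violated: nothing is stored
        return 0.0
    return np.log(arg) / (-2.0 * np.log(lam))        # Eq. (16)

for N in [10**4, 10**5, 10**6]:
    pc = capacity_scenario1(N)
    print(f"N = {N:8d}   p_c = {pc:12.3e}   (N/log N)^2 = {(N / np.log(N))**2:12.3e}")
```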

Structured pruning and random synapse generation

This is scenario no. 2. In this case the transition matrix reads

    M = \begin{pmatrix} 1 - f(1-f)\, q_- & f(1-f)\, q_- \\ r & 1 - r \end{pmatrix}

where r is the probability of the random potentiation and f(1-f) q_- is the probability of depression; the latter takes place only when the two neurons connected by the synapse have anticorrelated activities. The eigenvalues are

    \lambda_1 = 1, \qquad \lambda = 1 - r - f(1-f)\, q_- < 1,

and the asymptotic distribution is

    \rho_\infty = (p_+, p_-) = \left( \frac{r}{r + f(1-f)\, q_-},\ \frac{f(1-f)\, q_-}{r + f(1-f)\, q_-} \right).

Also in this case the correlations give a noise term that is O(1) in N with fixed parameters, so condition (7) can never be satisfied; the correlations can be made negligible by adjusting the coding rate f. The condition for the uncorrelated part of the noise to dominate is

    \frac{f}{N} \gg p f^3.    (19)

The difference with respect to condition (14) is due to the fact that in this case the correlations between synapses are caused by the structured pruning transitions. These occur with a probability O(f) and are therefore more frequent than the structured transitions (potentiations) of the previous scenario, which explains the stricter condition on p obtained here. With f ∼ log N/N (the fastest possible decay, see e.g. Eq. (17), if at least one pattern is to be learned), it is satisfied only if

    p \ll \frac{N}{(\log N)^2}.    (20)

If this is the case, we can continue the calculation considering the uncorrelated noise only. The signal, Eq. (4), is

    S = \lambda^{p-1}\, q_- p_+ f,    (21)

and the uncorrelated noise is

    R^2 = \frac{f}{N}\, \big[ p_+ - p_+^2 f + O(f \lambda^{p-1}) \big],    (22)

as in the previous case. The signal-to-noise ratio is then

    \frac{S^2}{R^2} = \lambda^{2(p-1)}\, N f q_-^2 p_+,    (23)

with a critical capacity

    p_c < \frac{1}{-2 \log \lambda}\, \log \frac{N f q_-^2 p_+}{\log N}.    (24)

Again, if

    \frac{N f q_-^2 p_+}{\log N} > 1,    (25)

one is able to learn with this dynamics, and pattern retrieval is possible for as many as p_c uncorrelated patterns. The storage capacity is again O(log N) if f, q_- and r are kept fixed.
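For comparison with scenario 1, the following sketch (ours; c_f and c_r are hypothetical constants chosen so that condition (25) holds) evaluates both the eigenvalue bound of Eq. (24) and the correlation bound of Eq. (20) for scenario 2; the usable capacity is the smaller of the two, and either way it stays far below the (N/log N)^2 reached with structured potentiation.

```python
import numpy as np

def capacity_scenario2(N, c_f=4.0, q_minus=1.0, c_r=1.0):
    """Scenario 2 bounds with f = c_f*log(N)/N and random potentiation r = c_r*f.

    Returns the eigenvalue-based bound of Eq. (24) and the correlation bound of Eq. (20);
    the capacity is limited by the smaller of the two.
    """
    f = c_f * np.log(N) / N
    r = c_r * f
    b = f * (1 - f) * q_minus                        # structured pruning probability
    lam = 1 - r - b                                  # subleading eigenvalue
    p_plus = r / (r + b)                             # asymptotic fraction of 1-synapses
    arg = N * f * q_minus**2 * p_plus / np.log(N)    # argument of Eq. (24); condition (25) is arg > 1
    pc_eigen = np.log(arg) / (-2.0 * np.log(lam)) if arg > 1.0 else 0.0
    pc_corr = N / np.log(N)**2                       # Eq. (20)
    return pc_eigen, pc_corr

for N in [10**4, 10**5, 10**6]:
    pc_eigen, pc_corr = capacity_scenario2(N)
    pc_scen1 = (N / np.log(N))**2                    # scenario 1 scaling, Eq. (18), for comparison
    print(f"N = {N:8d}   scenario 2: min(Eq.24, Eq.20) = {min(pc_eigen, pc_corr):10.3e}"
          f"   scenario 1 ~ {pc_scen1:10.3e}")
```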

Letting the parameters scale with N is, in this case, less effective, due to the constraint Eq. (25). The best one can do is to take f ∼ log N/N and r ∼ f. Then, writing λ = 1 - x with x = O(log N/N), one obtains the constraint on the critical capacity

    p_c < \frac{N}{\log N}.    (26)

The condition (20) is stricter and represents the actual storage limit for this network: the capacity is therefore N/(log N)^2.

With structured depression, the total number of structured transitions per presentation is ∼ f(1-f) q_-. It cannot be made smaller than O(f) in the limit f → 0, since constraint (25) implies that q_- must be kept finite; otherwise the network loses the capability to store even a single pattern (this is the meaning of condition (25)). In the previous scenario the number of transitions could instead be reduced to O(f^2) as f → 0, while still assuring safe pattern storage and retrieval. As more transitions are required to learn each pattern, the traces of older patterns are rubbed out more rapidly. In fact, in both cases a term of the order of the mean number of transitions per presentation appears in the expression for λ, as one would easily argue. How fast the network tends to the asymptotic distribution, forgetting previously learnt information, depends on the eigenvalue λ: the closer λ is to 1, the larger the storage capacity.

Conclusions

In the Hebbian paradigm, learning depends on synaptic modifications driven by neural activities. Understanding the mechanisms of synaptic formation and plasticity is therefore one of the main problems of neurobiology. Although much effort has been devoted to this question in the last decades, a complete picture is still lacking. Very important progress has been made in the comprehension of LTP, which seems to be the result of many effects (see e.g. [10] or [11] for a review). Other similar processes are known, such as the selective stabilization of synapses induced by the differential, activity-regulated release of Trophic Factor by the post-synaptic neuron (see e.g. [12]).

The search for mechanisms of long-term, activity-correlated synaptic depression has been another intensively studied subject for experimentalists. Nevertheless, the situation is far less clear than for LTP: Stanton and Sejnowski [13] claimed the finding of a Hebbian LTD, induced by anticorrelated pre- and post-synaptic activities, but despite several efforts no other research group has been able to replicate the result, possibly because it is conditioned on a very delicate parameter choice [14]. Many other possibilities resembling the Hebbian postulates have been proposed (see e.g. [15] for a review), such as homosynaptic LTD, where the decrease of synaptic efficacy is produced by an activation of the pre-synaptic neuron that is not followed by the activation of the post-synaptic one (the former is unable to excite the latter), and heterosynaptic LTD, which is caused by a silent pre-synaptic neuron together with the activation of the post-synaptic one induced by another input pathway.

In theoretical models of learning in attractor networks, the necessity of LTD for information storage has often been invoked. Here we wanted to show that an activity-dependent, "Hebbian" LTD is not strictly necessary in order to store patterns. What is important, for this memory scheme ("palimpsest") to work, is the presence, along with an information-carrying potentiation process, of mechanisms keeping the mean connectivity constant, thus preventing the saturation of the network.
In order to obtain a real palimpsest scheme (older memories are removed only when new items are stored), a completely random depression mechanism is fully equivalent, as long as such transitions take place only upon the presentation of a stimulus and not while the network is inactive.

This could be the case if we consider the selective stabilization of synapses induced by Trophic Factor: activity triggers the elimination of connections through a reduction of Trophic Factor release [12]. The proposal of random transitions reflects the lack of knowledge about LTD: we tried to figure out what the minimal requirements on synaptic plasticity are for this learning paradigm to work. Of course, an activity-correlated LTD increases the signal, but only up to a multiplicative constant, leaving the scaling law for the storage capacity unchanged (the capacity is O(N^2/(log N)^2)).

We also showed another possibility for learning, with depression transitions driven by the stimuli and random potentiation. Learning is again possible, but the first scheme is more efficient in storing memories (it can store and recall O(N^2/(log N)^2) patterns) than the second one (whose capacity is O(N/(log N)^2)). This second scheme probably has no direct analog in biological synapses, and it is suggested as a theoretical possibility, mainly for comparison with the former scheme.

We know from conditions (17) and (25) that, in the cases studied, the probabilities of the structured transitions must be O(1) in f if at least one pattern is to be stored. This implies that the total number of transitions per presentation is O(f^2) in the first case and O(f) in the second. So, with structured potentiation, fewer transitions are needed to store the same amount of information and to obtain the same signal magnitude. Fewer transitions imply a slower interference with the synaptic structure and therefore a larger storage capacity. In some sense, LTP appears to be a more efficient storage mechanism than LTD, and it could be that it is actually the crucial one.

Acknowledgements

We are grateful to Prof. Daniel Amit for encouragement and criticism during the performance and the writing of this study. This study was done in the context of a project to develop an output network which can learn to read the attractors of an upstream attractor network.

References

[1] Amit D.J., Fusi S., Learning in neural networks with material synapses, Neural Computation, 6, 957-982
[2] Wong K.Y.M., Kahn P.E., Sherrington D., A neural network model of working memory exhibiting primacy and recency, J. Phys. A, 24, 1119-1135
[3] Nadal J.P., Toulouse G., Changeux J.P., Dehaene S., Networks of formal neurons and memory palimpsests, Europhysics Letters, 1, 532-542
[4] Parisi G., A memory which forgets, J. Phys. A, 19, L617
[5] Burgess N., Shapiro J.L., Moore M.A., Neural network models of list learning, Network, 2, 399-422
[6] Heskes T.M., Kappen B., Phys. Rev. A, 44, 2718
[7] Cox D.R., Miller H.D., Theory of Stochastic Processes (Methuen & Co Ltd, London)
[8] Weisbuch G., Fogelman-Soulié F., Scaling laws for the attractors of Hopfield model, J. Physique Lett., 2, 337
[9] Changeux J.P., Danchin A., Selective stabilisation of developing synapses as a mechanism for the specification of neuronal networks, Nature, 264

[10] Larson J., Lynch G., Induction of synaptic potentiation in hippocampus by patterned stimulation involves two events, Science, 232, 985
[11] Brown T.H., Kairiss E.W., Keenan C.L., Hebbian synapses: biophysical mechanisms and algorithms, Ann. Rev. Neurosci., 13
[12] Henderson C.E., 1986, Activity and the regulation of Neuronal Growth Factor metabolism, in Changeux J.P. and Konishi M. (eds.), The Neural and Molecular Bases of Learning, Springer-Verlag
[13] Stanton P.K., Sejnowski T.J., Associative long-term depression in the hippocampus induced by Hebbian covariance, Nature
[14] Paulsen O., Li Y.-G., Hvalby O., Andersen P., Bliss T.V.P., 1993, Failure to induce long-term depression by an anti-correlation procedure in area CA1 of the rat hippocampal slice, Eur. J. Neurosci., 5
[15] Christie B.R., Kerr D.S., Abraham W.C., Flip side of synaptic plasticity: long-term depression mechanisms in the hippocampus, Hippocampus, 4


1 Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns Kenneth D. Miller 1 Published in: Models of Neura 1 Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns Kenneth D. Miller 1 Published in: Models of Neural Networks III, E. Domany,J.L. vanhemmen, and K. Schulten,

More information

Plasticity and learning in a network of coupled phase oscillators

Plasticity and learning in a network of coupled phase oscillators PHYSICAL REVIEW E, VOLUME 65, 041906 Plasticity and learning in a network of coupled phase oscillators Philip Seliger, Stephen C. Young, and Lev S. Tsimring Institute for onlinear Science, University of

More information

The Mixed States of Associative Memories Realize Unimodal Distribution of Dominance Durations in Multistable Perception

The Mixed States of Associative Memories Realize Unimodal Distribution of Dominance Durations in Multistable Perception The Mixed States of Associative Memories Realize Unimodal Distribution of Dominance Durations in Multistable Perception Takashi Kanamaru Department of Mechanical Science and ngineering, School of Advanced

More information

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics

More information

Information storage capacity of incompletely connected associative memories

Information storage capacity of incompletely connected associative memories Information storage capacity of incompletely connected associative memories Holger Bosch a, Franz J. Kurfess b, * a Department of Computer Science, University of Geneva, Geneva, Switzerland b Department

More information

Growth with Memory. Institut de Physique Teorique, Universite de Fribourg, Perolles, Fribourg, CH-1700

Growth with Memory. Institut de Physique Teorique, Universite de Fribourg, Perolles, Fribourg, CH-1700 Growth with Memory Matteo Marsili 1 and Michele Vendruscolo 2 1 Institut de Physique Teorique, Universite de Fribourg, Perolles, Fribourg, CH-1700 2 International School for Advanced Studies (SISSA) Via

More information

A key idea in the latter paper is that using dierent correlations depending upon the position and orientation of a pair of antithetic walkers can, in

A key idea in the latter paper is that using dierent correlations depending upon the position and orientation of a pair of antithetic walkers can, in Model fermion Monte Carlo with correlated pairs II M.H. Kalos Center for Theory and Simulation in Science and Engineering Laboratory of Atomic and Solid State Physics Cornell University Ithaca, New York

More information

An Introductory Course in Computational Neuroscience

An Introductory Course in Computational Neuroscience An Introductory Course in Computational Neuroscience Contents Series Foreword Acknowledgments Preface 1 Preliminary Material 1.1. Introduction 1.1.1 The Cell, the Circuit, and the Brain 1.1.2 Physics of

More information

Analysis of an Attractor Neural Network s Response to Conflicting External Inputs

Analysis of an Attractor Neural Network s Response to Conflicting External Inputs Journal of Mathematical Neuroscience (2018) 8:6 https://doi.org/10.1186/s13408-018-0061-0 RESEARCH OpenAccess Analysis of an Attractor Neural Network s Response to Conflicting External Inputs Kathryn Hedrick

More information

Associative Memories (I) Hopfield Networks

Associative Memories (I) Hopfield Networks Associative Memories (I) Davide Bacciu Dipartimento di Informatica Università di Pisa bacciu@di.unipi.it Applied Brain Science - Computational Neuroscience (CNS) A Pun Associative Memories Introduction

More information

!-mesons produced in, p reactions. de Physique Nucleaire et de l'instrumentation Associee. Service de Physique Nucleaire

!-mesons produced in, p reactions. de Physique Nucleaire et de l'instrumentation Associee. Service de Physique Nucleaire Quantum interference in the e + e, decays of - and!-mesons produced in, p reactions Madeleine Soyeur 1, Matthias Lutz 2 and Bengt Friman 2;3 1. Departement d'astrophysique, de Physique des Particules,

More information