J. Physique 48 (1987) 2053-2058    DÉCEMBRE 1987

Classification Physics Abstracts: 75.10H

Memory capacity of neural networks learning within bounds

Mirta B. Gordon

Centre d'Etudes Nucléaires de Grenoble, Département de Recherche Fondamentale / Service de Physique, Groupe Magnétisme et Diffraction Neutronique (*), 85 X, Grenoble Cedex, France

(Received 7 July 1987, accepted 12 August 1987)

Abstract. We present a model of long term memory: learning within irreversible bounds. The best bound values and the memory capacity are determined numerically. We show that it is possible, in general, to calculate the memory capacity analytically by solving the random walk problem associated with a given learning rule. Our estimates for several learning rules are in excellent agreement with numerical and analytical statistical mechanics results.

In the last few years, a great amount of work has been done on the properties of networks of formal neurons, proposed by Hopfield [1] as models of associative memories. In these models each neuron i is represented by a spin variable \sigma_i which can take only two values, \sigma_i = +1 or \sigma_i = -1. Any state of the system is defined by the values \{\sigma_1, \sigma_2, ..., \sigma_N\} taken by each one of the N spins, or neurons. Pairs of neurons i, j interact with strengths C_{ij}, the synaptic efficacies, which are modified by learning. As usual, we denote by \xi^\nu (\nu = 1, 2, ...) the learnt states, or patterns. Retrieval of a pattern is a dynamic process in which each spin takes the sign of the local field

    h_i = \sum_j' C_{ij} \sigma_j    (1)

acting on it; the primed sum means that the term j = i is omitted. A learnt state \xi^\nu is said to be memorized, or retrieved, if, starting with the network in state \xi^\nu, it relaxes towards a final state close to \xi^\nu. In general the final state can be very different from \xi^\nu, and will be denoted \tilde\xi^\nu. The overlap between both,

    q = (1/N) \sum_i \xi_i^\nu \tilde\xi_i^\nu,    (2)

gives a measure of the retrieval quality. The simplest local learning prescription [2] for p learnt patterns is Hebb's rule:

    C_{ij} = (1/N) \sum_{\nu=1}^{p} \xi_i^\nu \xi_j^\nu.    (3)

Assuming that the values of \xi_i^\nu are random and uncorrelated, it has been shown [13] that the maximum number of patterns p that can be memorized with Hebb's learning rule is proportional to the number of neurons, p = \alpha N with \alpha \approx 0.14. If more than \alpha N patterns are learnt, memory breaks down and none of the learnt patterns is retrieved. In order to avoid this catastrophic effect, different modifications of Hebb's rule were proposed [4-6]. The simplest one is the so-called learning within bounds [5]: the synaptic efficacies are modified by learning in the same way as with Hebb's rule, but their values are constrained to remain within some chosen range. In the version proposed by Parisi [4] the bounds are reversible: once a C_{ij} reaches a barrier, it stays at that value until a pattern is learnt that brings it back inside the allowed range. This is a model of short term memory: only the last learnt patterns are retrieved, old memories being gradually erased by learning. With this learning rule no catastrophic deterioration occurs, but the storage capacity is smaller than with Hebb's rule.
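To make the retrieval dynamics concrete, here is a minimal Python sketch of Hebbian storage (Eq. (3)) and of the zero-temperature sequential dynamics and overlap test just described. It is our own illustration rather than code from the paper; the function names and the values of N and p are arbitrary.

```python
import numpy as np

def hebb_couplings(patterns):
    """Hebb's rule, Eq. (3): C_ij = (1/N) sum_nu xi_i^nu xi_j^nu, with C_ii = 0."""
    _, N = patterns.shape
    C = patterns.T @ patterns / N
    np.fill_diagonal(C, 0.0)
    return C

def retrieve(C, initial_state, sweeps=20, rng=None):
    """Zero-temperature sequential dynamics: each spin takes the sign of its local field, Eq. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = initial_state.copy()
    N = sigma.size
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(N):
            h_i = C[i] @ sigma                  # local field acting on neuron i
            new = 1 if h_i >= 0 else -1
            if new != sigma[i]:
                sigma[i], changed = new, True
        if not changed:                         # fixed point reached
            break
    return sigma

# Example: store p random patterns and measure the overlap q of Eq. (2).
rng = np.random.default_rng(0)
N, p = 400, 40
patterns = rng.choice([-1, 1], size=(p, N))
C = hebb_couplings(patterns)
final = retrieve(C, patterns[0], rng=rng)
q = patterns[0] @ final / N
print(f"overlap with learnt pattern 1: q = {q:.3f}")
```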

In the first part of this paper we present numerical simulations of a model of long term memory, which is an irreversible version of learning within bounds: those synaptic efficacies that reach a bound remain at that value for ever [6]. The best bounds and the storage capacity are similar to those found with reversible bounds, but now the first, and not the last, learnt patterns are memorized. In the second part of the paper we show that a quantitative analysis of the random walk associated with each learning rule gives a very good estimate of the network's memory capacity. We present results for the standard Hebb's rule and for different variants of learning within bounds. Generalization to other learning rules is straightforward, and is presented in section 3.

1. Learning within irreversible bounds. Numerical simulations.

The learning rule with irreversible bounds, or barriers, is

    C_{ij}(\nu) = C_{ij}(\nu - 1) + (1/N)\,\xi_i^\nu \xi_j^\nu    for \nu \le s_{ij},
    C_{ij}(\nu) = C_{ij}(s_{ij}) = \pm m/N                        for \nu > s_{ij},    (4)

with C_{ij}(0) = 0, where s_{ij} is the number of the pattern for which C_{ij} first reaches one of the bounds \pm m/N; the patterns after s_{ij} are not learnt and the synaptic efficacy stays saturated. For m \to \infty the standard Hebb's rule is recovered. But, unlike in Hebbian learning, with rule (4) the number \nu, the « time » at which \xi^\nu is learnt, is relevant.

In our numerical simulations, random patterns were learnt following (4). Each time a new pattern was added, the retrieval quality of all the previously stored patterns was tested: starting with the network in a learnt state, spins are allowed to flip with Monte Carlo sequential dynamics until relaxation to a state in which each spin takes the sign of the field (1) acting on it. A learnt pattern is considered well memorized if its overlap q with the relaxed state is q > 0.97. Any other threshold would give nearly the same results, because patterns are either retrieved almost without error (q \approx 1) or with q \ll 1. The bound value giving the maximal number of well retrieved patterns, m_{opt}, was determined for networks with N = 100, 150, 200 and 400 neurons by testing different values of m.

Figure 1 shows the retrieval quality (2) as a function of the pattern number, for N = 400. With the best bounds (m = m_{opt}), the overlap jumps abruptly from \approx 1 to a small value, showing that only the first learnt patterns are memorized.

Fig. 1. Overlap between the learnt pattern and the retrieved state vs. \nu, the number of the learnt pattern, once p patterns were learnt with the best bound value m = m_{opt}.

Figure 2 is a plot of the number of well retrieved patterns versus the number of learnt patterns. For m < m_{opt} a smaller number of patterns is retrieved in the asymptotic regime (large p), and for m > m_{opt} the number of retrieved patterns vanishes for large p, as it should, because in the large-m limit the standard Hebb's rule, with its memory deterioration, is recovered.

Fig. 2. Number of well retrieved patterns (q > 0.97) vs. number of learnt patterns.

The optimal bound values increase with the network size, but we do not have enough accuracy to establish the law m_{opt}(N) numerically. In the next section it is shown that m_{opt} \approx 0.3 \sqrt{N}, and the numerical data are consistent with this prediction. With the optimal bounds, we find a storage capacity proportional to N.
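A minimal sketch of acquisition rule (4), under the reading given above (Hebbian increments of \pm 1/N, an efficacy frozen for ever once it reaches \pm m/N); the function name, the floating-point tolerance and the clamping details are ours.

```python
import numpy as np

def learn_within_irreversible_bounds(patterns, m):
    """Rule (4): Hebbian increments of +-1/N, but an efficacy that reaches +-m/N is frozen for ever."""
    _, N = patterns.shape
    C = np.zeros((N, N))
    frozen = np.zeros((N, N), dtype=bool)          # True once C_ij has hit a bound
    bound = m / N
    tol = 0.5 / N                                  # half a step, to avoid floating-point misses
    for xi in patterns:                            # patterns are presented sequentially
        increment = np.outer(xi, xi) / N
        C = np.where(frozen, C, C + increment)     # saturated efficacies no longer learn
        hit = (np.abs(C) >= bound - tol) & ~frozen
        C = np.where(hit, np.sign(C) * bound, C)   # clamp exactly at +-m/N
        frozen |= hit
    np.fill_diagonal(C, 0.0)
    return C
```

Scanning m and counting the patterns retrieved with q > 0.97, with a retrieval routine such as the one sketched earlier, reproduces the kind of data shown in figures 1 and 2.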

These results show that learning within irreversible bounds is a model of long term memory, in the sense that only the old learnt patterns are remembered. The catastrophic deterioration of Hebb's rule is avoided by stopping the acquisition of new patterns once the memory is saturated. The capacity and the « best » bound values are similar to those of the reversible scheme [4] (« a memory which forgets »).

2. Random walk analysis.

For uncorrelated random learnt patterns, the synaptic efficacies C_{ij} perform random walks of steps \pm 1/N. In this section we show how a probabilistic analysis gives the maximum memory capacity of the network under a given learning rule. It is based on the following fact, observed in our numerical simulations: when the initial state of the network is a learnt state, then either it remains in this state upon relaxation (retrieval is then perfect, q = 1) or it moves away, and this from the very first Monte Carlo step, to a distant state (small q). This suggests that an analysis based on the first Monte Carlo step should be able to predict the memory capacity of a network with a given learning rule. That this is the case is shown in this and the following sections. We first present the method on Hebb's rule, for which analytic results and very accurate numerical simulations exist, to show how it works on a simple model, before applying it to learning within bounds.

2.1 HOPFIELD MODEL. The learning rule is given by (3). When the network is in the learnt state \xi^\nu, the field acting on neuron i, averaged over all the learnt patterns (assumed random and uncorrelated), is, neglecting terms of order 1/N,

    \bar h_i = \xi_i^\nu.    (5)

Therefore, when the network is allowed to relax, the spins should remain, on the average, in state \xi^\nu. Note that if the initial state is not a learnt state, then \bar h_i = 0. The second moment of the field distribution for p learnt patterns is

    \overline{h_i^2} = \sum_j' C_{ij}^2 + \sum_{j \neq k}' C_{ij} C_{ik} \xi_j^\nu \xi_k^\nu.    (6)

The first contribution to \overline{h_i^2} comes from the terms j = k; it exists also if the network is not in a learnt state [1, 7]. The second contribution comes from the terms j \neq k and equals \bar h_i^2, neglecting terms of order 1/N. The variance of the field acting on a given neuron is then

    \Delta = \overline{h_i^2} - \bar h_i^2 = p/N.    (7)

Therefore, even if the initial state is a learnt state, say \xi^\nu, when p/N is large enough there is some probability that the sign of the field acting on a neuron i is opposite to \xi_i^\nu. This probability (we drop the subscript i, all neurons being equivalent) is a function of x = \Delta / \bar h^2:

    P(x) = (1/2)\,\mathrm{erfc}(1/\sqrt{2x}).

For small x the function P(x) vanishes like \exp(-1/2x), and it is linear in x in the neighbourhood of x^* = 1/3, the inflexion point. It can be approximated (Fig. 3) by a straight line passing through x^*, of slope dP/dx|_{x^*} = (1/\sqrt{\pi})(3/2)^{3/2} e^{-3/2}, which crosses the x axis at x_0. For x \ll x_0, P(x) \approx 0; beyond the crossover point at x_0 \approx 0.153, errors in retrieval are expected. From (5) and (7), \Delta / \bar h^2 = p/N; the maximum number of patterns that can be learnt before errors in retrieval become important is therefore p \approx 0.15 N, in excellent agreement with theoretical [3] and numerical [12] results. The prescription for maximum storage capacity is then

    \Delta / \bar h^2 \le x_0 \approx 0.153.    (8)

Fig. 3. Probability that h_i \xi_i^\nu < 0, as a function of x = \Delta / \bar h^2.
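The tangent construction that defines the crossover x_0 can be checked in a few lines, assuming the Gaussian form P(x) = (1/2) erfc(1/\sqrt{2x}) used above; the variable names are ours.

```python
import math

def P(x):
    """Probability that the field has the wrong sign, for Gaussian noise: P = (1/2) erfc(1/sqrt(2x))."""
    return 0.5 * math.erfc(1.0 / math.sqrt(2.0 * x))

x_star = 1.0 / 3.0                                                    # inflexion point of P(x)
slope = (1.0 / math.sqrt(math.pi)) * (1.5 ** 1.5) * math.exp(-1.5)    # dP/dx at x_star
x0 = x_star - P(x_star) / slope                                       # where the tangent crosses the x axis
print(f"x0 = {x0:.3f}")                                               # ~0.153

# With Hebb's rule Delta / h^2 = p / N, so the capacity estimate is p_max ~ x0 * N.
N = 400
print(f"estimated capacity for N = {N}: p_max ~ {x0 * N:.0f} patterns")
```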

In what follows, the same argument is applied to other learning rules.

2.2 LEARNING WITHIN IRREVERSIBLE BOUNDS. When the network is in state \xi^\nu, the average field acting on a neuron i is, to lowest order in 1/N,

    \bar h_i = \xi_i^\nu \, P(s > \nu),    (9)

where P(s > \nu) is the probability of performing a random walk of more than \nu steps between absorbing barriers at +m and -m without absorption. For large \nu (see Appendix Aa),

    P(s > \nu) \simeq (4/\pi) \exp[-\nu \pi^2 / (8 m^2)].    (10)

The variance of the field is easily seen to be

    \Delta = \bar s / N, \qquad \bar s = \sum_s s \, P(s),    (11)

where P(s) is the probability that absorption takes place in s steps, so that \bar s is the mean number of patterns learnt by a bond before its strength C_{ij} sticks to the bounds. From the random walk problem (Appendix Aa),

    \bar s = m^2.    (12)

Unlike in the Hebbian scheme of learning, in the present case the dispersion of the field values is kept constant by the bounds; the storage capacity is now limited because the average field, which is constant with Hebb's rule, decreases with the pattern number. Therefore, only the first learnt patterns have a field on each neuron large enough to ensure good retrieval. Introducing (9) to (12) into (8) gives the maximum number \nu of patterns expected to be memorized, for a given m. After maximization of \nu with respect to m, we find m_{opt} \approx 0.3 \sqrt{N} and a capacity \nu(m_{opt}) proportional to N, in very good agreement with our numerical simulations.

2.3 LEARNING WITHIN REVERSIBLE BOUNDS. With this learning scheme [4] the synaptic efficacies show reversible saturation effects: they stick to the bounds and do not learn those patterns that would make them take values beyond the allowed range,

    C_{ij}(\nu) = C_{ij}(\nu - 1) + (1/N)\,\xi_i^\nu \xi_j^\nu    if the new value stays within [-m/N, +m/N],
    C_{ij}(\nu) = C_{ij}(\nu - 1)                                  otherwise.    (14)

Let s_{ij} be the pattern that produced the last saturation effect on bond ij. The values taken by C_{ij} on learning the patterns that follow pattern s_{ij} are all within the allowed range, as if the barriers did not exist. After learning a large number of patterns, the random walk between reversible barriers reaches an equilibrium distribution: C_{ij}(p) takes any of the allowed values n/N (n = -m, -m+1, ..., m) with probability 1/(2m+1). When the network is in state \xi^\nu, the field averaged over all the learnt patterns, and its variance, are given by (see Appendix Ab)

    \bar h_i = \xi_i^\nu \, P(t > \eta), \qquad \Delta = m(m+1)/(3N),    (15)

where \eta = p - \nu is the pattern number counted from the last learnt one, and P(t > \eta) is the probability that the synaptic random walk performs more than \eta steps without sticking to the barriers. For \eta \gg 1 we get (see Appendix Ab), up to a prefactor of order unity,

    P(t > \eta) \simeq \exp[-\eta \pi^2 / (8(m+1)^2)].    (16)

The field is now a decreasing function of \eta: the effect of learning new patterns is to lower the local fields acting on older patterns, while the variance of the field distribution remains constant. Introducing (15) and (16) into (8) and maximizing \eta with respect to m gives the optimal bound value and capacity (17), in good agreement with the numerical results [4]: m_{opt} \approx 0.35 \sqrt{N}, \eta(m_{opt}) \approx 0.04 N.

It is interesting to apply this analysis to learning without synaptic sign changes [5]. The learning rule is the same as (14), but half of the synaptic efficacies are constrained between -m/N and 0, the others between 0 and m/N. From the corresponding random walk, the field decreases faster with \eta than (16) for a given m, because the allowed range of the C_{ij} is half as wide as before, so that saturation effects appear after fewer steps; the variance, however, is of the same order of magnitude. Therefore, the memory capacity will be smaller than when synaptic sign changes are allowed. Indeed, one finds the same value (Eq. (17a)) for m_{opt} as before (this value depends only on \Delta), but \eta is 4 times smaller, \eta(m_{opt}) \approx 0.01 N, in fairly good agreement with the numerical results [5] and with our own simulations.
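The two random-walk ingredients of this analysis, the mean number of patterns learnt before saturation (Eq. (12)) and the survival probability (Eq. (10)), are easy to check by direct simulation; the sketch below, with arbitrary example values of m and \nu, is ours.

```python
import numpy as np

def absorbing_walk_stats(m, nu, n_walks=20000, seed=0):
    """Symmetric +-1 random walk started at 0 with absorbing barriers at +-m.
    Monte Carlo estimate of the mean time to absorption (Eq. (12): m^2)
    and of the survival probability P(s > nu) of Eq. (10)."""
    rng = np.random.default_rng(seed)
    absorption_times = np.empty(n_walks)
    survived_nu = 0
    for w in range(n_walks):
        x, t = 0, 0
        while abs(x) < m:                       # walk until it sticks to a barrier
            x += 1 if rng.random() < 0.5 else -1
            t += 1
        absorption_times[w] = t
        survived_nu += t > nu                   # still unabsorbed after nu steps
    return absorption_times.mean(), survived_nu / n_walks

m, nu = 6, 40
mean_t, p_surv = absorbing_walk_stats(m, nu)
print(f"mean absorption time ~ {mean_t:.1f}   (Eq. (12): m^2 = {m * m})")
print(f"P(s > {nu}) ~ {p_surv:.3f}   (Eq. (10): ~ {4 / np.pi * np.exp(-nu * np.pi**2 / (8 * m * m)):.3f})")
```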

3. Generalization to other learning rules.

The results of section 2 are easily generalized to learning rules with variable acquisition intensities,

    C_{ij} = (1/N) \sum_{\mu=1}^{p} \lambda_\mu \, \xi_i^\mu \xi_j^\mu.    (18)

The average field on a neuron, when the network is in the learnt state \xi^\nu, is

    \bar h_i = \lambda_\nu \, \xi_i^\nu,    (19)

and the dispersion is given by

    \Delta = (1/N) \sum_{\mu=1}^{p} \lambda_\mu^2.    (20)

With Hebb's rule, \lambda_\mu = 1 and the results of section 2.1 are recovered. Here, the condition for pattern \nu to be well retrieved is

    \lambda_\nu^2 \ge (1/x_0) (1/N) \sum_{\mu=1}^{p} \lambda_\mu^2.    (21)

An example of such a rule is marginalist learning [5, 9], in which the acquisition intensities increase exponentially in order to ensure good retrieval of the last learnt pattern. Introducing \lambda_\mu = \exp(\varepsilon^2 \mu / 2N) into (19) and (20) shows that within this scheme both the average field and its dispersion increase with learning. If good retrieval of only the last learnt pattern is imposed, then \nu = p in (21), and the value of \varepsilon^2 that ensures this must satisfy

    \varepsilon^2 \ge 1/x_0.    (22)

That is, \varepsilon \approx 2.56, which is the value estimated numerically in [5], and is in very good agreement with \varepsilon = 2.465, the replica-symmetric solution of this model [9]. But it is possible to do better, and ask that the last \eta learnt patterns be retrieved. Introducing \nu = p - \eta into (21), we find

    \eta \le (N/\varepsilon^2) \ln(x_0 \varepsilon^2).    (23)

Maximizing \eta with respect to \varepsilon^2 gives the « best » value \varepsilon^2_{opt} and the corresponding number of well retrieved states, again in excellent agreement with the theoretical predictions of [9] (\varepsilon_{opt} = 4.108).

Result (21) shows that the normalization of the C_{ij} obtained by dividing them by (\sum_\mu \lambda_\mu^2)^{1/2} does not affect the memory capacity, and it also suggests how other selective learning rules can be devised. It is possible, for example, to give stronger weights to the most « important » patterns, in order to keep them in memory even when other patterns are forgotten, or to reinforce [9] the memorization of a given pattern \nu when it is at the limit of being erased (when (21) becomes an equality), by learning it again.
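Under the exponential-intensity reading of the marginalist rule used above (\lambda_\mu = \exp(\varepsilon^2\mu/2N)) and the crossover value x_0 \approx 0.153, the capacity estimate (23) and its maximization over \varepsilon^2 take a few lines; the grid and the names are ours, and the printed numbers are the estimates of this signal-to-noise argument, not the replica values of [9].

```python
import numpy as np

x0 = 0.153                                    # crossover of Eq. (8)

def eta_max(eps2, N):
    """Number of most recent patterns satisfying the retrieval condition (21),
    for marginalist intensities lambda_mu = exp(eps^2 * mu / 2N), i.e. Eq. (23)."""
    return (N / eps2) * np.log(x0 * eps2)

N = 1000
eps2 = np.linspace(1.0 / x0, 40.0, 2000)      # eps^2 >= 1/x0 is needed to retrieve even the last pattern
eta = eta_max(eps2, N)
best = eps2[np.argmax(eta)]
print(f"smallest admissible eps = {np.sqrt(1 / x0):.2f}")           # ~2.56
print(f"optimal eps = {np.sqrt(best):.2f}, capacity ~ {eta.max() / N:.3f} N")
```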
Conclusion.

We have analysed different schemes for learning sequences of uncorrelated patterns. When the network is in a learnt state, the average value \bar h of the field acting on a given neuron, produced by all the others, has the same sign as the neuron's spin, so that the network should remain in the learnt state. The probability of a field of opposite sign is vanishingly small for a small number of stored patterns, but the crossover to a regime where this probability increases almost linearly sets an upper limit to the storage capacity. The maximum storage capacity is attained when \Delta / \bar h^2 \approx 0.153, where \Delta is the mean square width of the field distribution. We tested this prescription on several models of learning within bounds, proposed as models of short and long term memory. The estimated storage capacities and the best bound values are in excellent agreement with the numerical results. With Hebb's rule, \bar h = 1 and remains constant with pattern acquisition, while \Delta increases; at the crossover, because \Delta and \bar h are the same for all learnt patterns, all of them are « forgotten » together. In learning within bounds, \Delta is constant and \bar h decreases with the pattern number: memory is lost only for those patterns that have small values of \bar h. Generalization to other learning schemes is straightforward; the storage capacity with a given rule can be estimated once \bar h and \Delta are known.

The fact that our predictions, based on a first Monte Carlo step, are so successful suggests that the size of the basins of attraction at maximum storage capacity is about N / [2 (maximum storage capacity)]. The factor 2 is there because the patterns \xi and -\xi cannot be distinguished in Hopfield's networks.

This extends to other learning rules a result that is exact for Hebb's rule [10]. Finally, several authors [1, 3, 5, 7, 8] have already pointed out that memory deterioration is due to the increasing noise on the synaptic efficacies produced by the acquisition of new patterns. Our approach gives a quantitative estimate of the storage capacity, until now only available from numerical simulations or, in some special cases, from statistical mechanics calculations.

Acknowledgments.

Useful discussions with Pierre Peretto, who suggested the model of learning within irreversible bounds, are gratefully acknowledged.

Appendix A.

The solution of the random walk between barriers and some intermediate results leading to formulae (10), (12) and (16) are summarized in this appendix.

A(a) ABSORBING BARRIERS. For a random walk [11] between absorbing barriers at +m and -m, the probability of performing a walk of n steps from state i to state j is

    P_n(i \to j) = \sum_k \lambda_k^n \, v_i(k) \, v_j(k),    (A.1)

where \lambda_k = \cos(k\pi/2m) are the eigenvalues of the transition probability matrix, and v_j(k) = \sin[(j+m)k\pi/2m]/\sqrt{m} (j = -m+1, ..., m-1) are the corresponding eigenvector components. The probability of a random walk of more than \nu steps without absorption, starting from i = 0, is then

    P(s > \nu) = \sum_j \sum_k \lambda_k^\nu \, v_0(k) \, v_j(k).    (A.2)

The dominant contribution to this sum is the term k = 1, which gives equation (10). The mean number of patterns learnt by a given bond C_{ij} before saturation is the mean time to absorption \bar s in the random walk problem. It is the derivative of the generating function of the probability of absorption [11],

    f(x) = \sum_s [f_{0,m}(s) + f_{0,-m}(s)] \, x^s = 2 / [\lambda_+^m(x) + \lambda_-^m(x)],    (A.3)

where f_{0,m}(s) is the probability of first passage from state 0 to state m in s steps, and \lambda_\pm(x) = (1 \pm \sqrt{1 - x^2})/x. It is then easy to check that \bar s = \lim_{x \to 1} df/dx = m^2, which gives equation (12).

A(b) NON-ABSORBING BARRIERS. The stationary probability distribution is given by the eigenvector of eigenvalue 1 of the transition probability matrix; it gives the same probability to all the 2m+1 allowed states, namely 1/(2m+1). We are interested in the walks of more than \eta steps that do not stick to the barriers, the starting point being drawn from this stationary distribution. Their probability P(t > \eta) can be deduced from the random walk between absorbing barriers at +(m+1) and -(m+1) as the sum of the following terms: 1) the walks starting at +m, making a first step of -1 and then \eta - 1 steps without absorption; 2) those starting at -m, making a first step of +1 and then \eta - 1 steps without absorption; 3) those starting at n (-m+1 \le n \le m-1) and performing \eta steps without absorption. Each of these terms enters the sum multiplied by the probability 1/(2m+1) of starting the random walk at the corresponding point. The problem is therefore reduced to calculating sums of terms of the form (A.1), with m+1 instead of m. The dominant term of the sum gives equation (16).

References

[1] HOPFIELD, J. J., Proc. Natl. Acad. Sci. USA 79 (1982) 2554.
[2] PERETTO, P., On learning rules and memory storage abilities of neural networks, preprint (1987).
[3] CRISANTI, A., AMIT, D. J. and GUTFREUND, H., Europhys. Lett. 2 (1986) 337.
[4] PARISI, G., J. Phys. A 19 (1986) L617.
[5] NADAL, J. P., TOULOUSE, G., CHANGEUX, J. P. and DEHAENE, S., Europhys. Lett. 1 (1986) 535.
[6] This model has been suggested by P. Peretto.
[7] PERETTO, P. and NIEZ, J. J., Biol. Cybern. 54 (1986) 1.
[8] WEISBUCH, G. and FOGELMAN-SOULIÉ, F., J. Physique Lett. 46 (1985) L-623.
[9] MÉZARD, M., NADAL, J. P. and TOULOUSE, G., J. Physique 47 (1986) 1457.
[10] COTTRELL, M., preprint.
[11] COX, D. R. and MILLER, H. D., The Theory of Stochastic Processes (Chapman and Hall, London) 1977.
