THIRD-ORDER HOPFIELD NETWORKS: EXTENSIVE CALCULATIONS AND SIMULATIONS

Philips J. Res. 44, 1990                                                    R 1223

by J.A. SIRAT and D. JORAND
Laboratoires d'Electronique Philips, 3 avenue Descartes, B.P. 15, Limeil-Brévannes Cédex, France

Abstract

Third-order Hopfield nets have been studied extensively. Very good agreement was obtained between numerical simulations and theoretical calculations based on a random walk analysis. Properties similar to those of second-order nets were observed, including wide basins of attraction and resistance to synapse dilution and quantization. However, third-order nets do not seem to exhibit a catastrophic deterioration of the memory. The principal difference between third- and second-order nets is quantitative: the storage capacity scales with the number of synaptic coefficients. Two examples of rotationally and translationally invariant character recognition on small networks are given. For this task, we derived from analytical arguments on Hebb's law a simple learning rule which is suitable for biased patterns.

Keywords: associative memory, higher-order networks, Hopfield networks, neural networks, rotational invariance, translational invariance

1. Introduction

Recently, Hopfield demonstrated that formal neurons can perform associative memory tasks 1). Their storage capacity was calculated using the replica method developed in statistical physics 2). A simpler formalism, the random walk analysis of the first iteration, can nevertheless predict the shape of the basins of attraction in second-order nets 3) and the storage capacity for different learning rules 4). This capacity rises when the interaction order is increased, as shown by theoretical calculations (spin-glass or random walk approaches) and by numerical simulations 5-10). Higher-order nets have also been proposed as associative memories that are invariant with respect to any set of transformations 11).

We present an extensive random walk analysis and numerical simulation of third-order Hopfield networks with the Hebbian learning rule. In the first part, the storage capacity under various conditions is studied, and the information contents of different Hopfield nets are compared. In the second part, analytical considerations allow a learning rule suited to biased patterns to be designed. An application to translationally and rotationally invariant character recognition is given as an illustration. Finally, the invariance property of the memory is discussed in terms of memory capacity.

2. General properties of third-order Hopfield networks

2.1. Network and learning rule

As in the second-order model of associative memory proposed by Hopfield, we consider a network of N bipolar neurons. The state of neuron i is represented by \sigma_i = \pm 1. Triplets of neurons (i, j and k) interact through the synaptic coefficient C_{ijk}, which is computed in the learning stage. When the latter is achieved, the network can be used as a content-addressable memory. The capacity is defined as the maximal number of patterns that can be stored. Retrieval requires a sequence of steps (iterations) which may converge towards the desired state of the network, generally one of the prescribed patterns. In retrieval step t, neuron states are updated according to the sign of the local field h_i:

    \sigma_i(t+1) = \mathrm{sign}[h_i(t)]                                                    (1)

    h_i(t) = {\sum_{j,k=1}^{N}}' C_{ijk}\, \sigma_j(t)\, \sigma_k(t)                         (2)

where \sum' denotes the sum over j and k excluding j = i, k = i or j = k. In this paper we use only synchronous dynamics, i.e. the N neurons are updated simultaneously in one step (or iteration). To memorize a given set of P patterns (\xi_i^\mu)_{\mu=1,P}, where \mu denotes the pattern index and i the neuron, we use the generalized Hebb rule:

    C_{ijk} = \frac{1}{N^2} \sum_{\mu=1}^{P} \xi_i^\mu \xi_j^\mu \xi_k^\mu                   (3)
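This model is easy to simulate directly. The following sketch is ours, not the authors' code; it assumes NumPy and uses an illustrative network size. It builds the coefficient tensor of eq. (3), with the terms excluded from the sum in (2) zeroed out, and runs the synchronous dynamics (1)-(2):

import numpy as np

rng = np.random.default_rng(0)

def train_third_order(patterns):
    """Generalized Hebb rule (3): C_ijk = (1/N^2) sum_mu xi_i^mu xi_j^mu xi_k^mu,
    with the terms excluded from the sum in (2) (j = k, j = i, k = i) set to zero."""
    P, N = patterns.shape
    C = np.einsum('ui,uj,uk->ijk', patterns, patterns, patterns) / N**2
    i, j, k = np.indices((N, N, N))
    C[(j == k) | (j == i) | (k == i)] = 0.0
    return C

def retrieve(C, sigma, max_iter=20):
    """Synchronous dynamics (1)-(2): all neurons are updated at once in each step."""
    for _ in range(max_iter):
        h = np.einsum('ijk,j,k->i', C, sigma, sigma)   # local fields (2)
        new = np.where(h >= 0, 1, -1)                  # sign rule (1)
        if np.array_equal(new, sigma):
            break
        sigma = new
    return sigma

# Example: store P = alpha * N^2 random patterns and test one-step stability.
N, alpha = 30, 0.03
P = int(alpha * N**2)
xi = rng.choice([-1, 1], size=(P, N))
C = train_third_order(xi)
errors = sum(np.sum(retrieve(C, p.copy(), max_iter=1) != p) for p in xi)
print("wrong neurons over all stored patterns:", errors)

The single-iteration mode used here corresponds to the first running mode of the simulations described below; allowing max_iter iterations gives the convergence mode.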

2.2. Random walk analysis of the first iteration

Under the assumption that the \xi_i^\mu are PN independent random variables with zero mean, the probability that a given pattern is stable after one iteration can easily be computed 4). Let us first determine the probability distribution of the local field h_i obtained when presenting a memorized pattern \xi^\nu:

    h_i = \xi_i^\nu + \frac{1}{N^2} \sum_{\mu \neq \nu} \xi_i^\mu {\sum_{j,k=1}^{N}}' \xi_j^\mu \xi_k^\mu\, \xi_j^\nu \xi_k^\nu          (4)

The second term represents the zero-mean noise due to overlapping with the other patterns; h_i is hence a Gaussian variable of mean

    \bar{h}_i = \xi_i^\nu                                                                     (5)

The variance \Delta is obtained by averaging the square of the noise term over the possible sets of patterns \xi^\mu:

    \Delta = \frac{1}{N^4} \Big\langle \sum_{\mu \neq \nu} \sum_{\mu' \neq \nu} \xi_i^\mu \xi_i^{\mu'} {\sum_{j,k=1}^{N}}' {\sum_{j',k'=1}^{N}}' \xi_j^\mu \xi_k^\mu\, \xi_{j'}^{\mu'} \xi_{k'}^{\mu'}\, \xi_j^\nu \xi_k^\nu\, \xi_{j'}^\nu \xi_{k'}^\nu \Big\rangle          (6)

Only the terms with \mu = \mu' and ((j,k) = (j',k') or (j,k) = (k',j')) do not vanish, and

    \Delta = \frac{2P}{N^2}                                                                   (7)

These formulae and those following are first-order approximations in 1/N and 1/P. The factor 2 comes from the j-k permutation symmetry. The probability P_i that the state of neuron i is not equal to \xi_i^\nu after one iteration is equal to the probability that h_i \xi_i^\nu < 0:

    P_i = \frac{1}{\sqrt{2\pi\Delta}} \int_{-\infty}^{0} \exp\Big[-\frac{(z - \bar{h}_i \xi_i^\nu)^2}{2\Delta}\Big]\, dz
        = \frac{1}{\sqrt{2\pi}} \int_{\sqrt{\bar{h}_i^2/\Delta}}^{\infty} \exp\Big(-\frac{z^2}{2}\Big)\, dz                              (8)

With the approximation that the P_i are independent for the different i, the condition for retrieval with fewer than \epsilon N neurons in the wrong state is

    \sum_{i=1}^{N} P_i \leq \epsilon N                                                        (9)

For perfect retrieval, we take \epsilon N = 1.
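To make eqs (7)-(9) concrete, the following small sketch (our illustration, not taken from the paper) evaluates the one-step error probability P_i for a Gaussian field of mean 1 and variance Delta = 2P/N^2, together with the expected number of wrong neurons N * P_i that enters condition (9):

from math import erfc, sqrt

def p_wrong(alpha):
    """Eq. (8) with hbar_i = 1 and Delta = 2P/N^2 = 2*alpha:
    P_i = Phi(-1/sqrt(Delta)) = 0.5 * erfc(1 / sqrt(2*Delta))."""
    delta = 2.0 * alpha
    return 0.5 * erfc(1.0 / sqrt(2.0 * delta))

N = 40
for alpha in (0.02, 0.04, 0.08, 0.12):
    Pi = p_wrong(alpha)
    # Condition (9) with eps*N = 1 asks for N * P_i <= 1 (perfect retrieval).
    print(f"alpha={alpha:.2f}  P_i={Pi:.2e}  expected wrong neurons N*P_i={N*Pi:.3f}")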

TABLE I
Distribution parameters of h_i in the different cases.

    Exact capacity:                                         \bar{h}_i \xi_i^\nu = 1;   \Delta = 2P/N^2;   \Delta/\bar{h}_i^2 = 2P/N^2
    Noisy exact capacity (noise r):                         \bar{h}_i \xi_i^\nu = (1-2r)^2;   \Delta = 2P/N^2;   \Delta/\bar{h}_i^2 = 2P/[N^2 (1-2r)^4]
    Exact capacity in a diluted network (dilution \tau):    \bar{h}_i \xi_i^\nu = 1-\tau;   \Delta = (1-\tau)(2-\tau) P/N^2;   \Delta/\bar{h}_i^2 = (2-\tau) P/[(1-\tau) N^2]
    Exact capacity with clipped synapses:                   \bar{h}_i \xi_i^\nu = \sqrt{2/(\pi P)};   \Delta = 2/N^2;   \Delta/\bar{h}_i^2 = \pi P/N^2

When the retrieval conditions or the network parameters are modified, the mean value and variance of h_i after one iteration can also be computed (see Table I):

Exact capacity: each learnt pattern is stable without error;
Noisy exact capacity: retrieval without errors of each pattern presented with noise;
\epsilon capacity: a learnt pattern yields a state at a Hamming distance of less than \epsilon N from the original pattern 3,12);
Exact capacity in diluted networks: synaptic coefficients are randomly suppressed with probability \tau;
Exact capacity with clipped synapses: synaptic coefficients take only one of the two values +1 or -1, depending on sign(C_{ijk}).

The distribution parameters of the local field h_i are easily calculated for the different cases. They are displayed in Table I.
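Since a fixed retrieval criterion fixes the admissible value of \Delta/\bar{h}_i^2, the entries of Table I translate directly into relative capacities. The following small sketch is ours and uses the table entries as reconstructed above, so the bracketed factors (in particular the exponent in the noisy case) should be treated as assumptions:

from math import pi

def capacity_factor(case, r=0.0, tau=0.0):
    """Ratio alpha_c(case) / alpha_c(exact) implied by Table I: alpha_c scales as the
    inverse of the factor multiplying P/N^2 in the Delta / hbar_i^2 column."""
    if case == "exact":
        return 1.0
    if case == "noisy":        # input neurons flipped with probability r
        return (1 - 2 * r) ** 4
    if case == "diluted":      # coefficients suppressed with probability tau
        return 2 * (1 - tau) / (2 - tau)
    if case == "clipped":      # C_ijk replaced by sign(C_ijk)
        return 2 / pi
    raise ValueError(case)

print(capacity_factor("noisy", r=0.10))     # ~0.41
print(capacity_factor("diluted", tau=0.5))  # ~0.67
print(capacity_factor("clipped"))           # ~0.64, i.e. the pi/2 reduction discussed below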

2.3. Numerical simulations and results

We have performed extensive numerical simulations on an IBM 3090 computer. Calculations were carried out with integer-type synaptic coefficients (except in the last section). Results were averaged over 20 runs each time. Two running modes were studied: in the first, a single iteration was performed; in the second, successive iterations were performed until convergence (with a maximum of 20 iterations).

Fig. 1. Exact capacity \alpha = P/N^2 vs the network size N (theory and simulations).

Exact capacity

This takes the form P = \alpha N^2 with \alpha of order 0.04 (for N ranging from 10 to 50 only a slight dependence on N was observed). The agreement with the statistical physics result 5,7,13)

    P = \frac{N^{d-1}}{2\, d!\, \ln N}

where d is the network order, is consistent with the fact that no significant iteration effect was observed, as already noted for second-order nets 4). More precisely, a single iteration allowed convergence towards the stable patterns in almost all the runs (fig. 1).
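For comparison, the statistical-physics estimate can be evaluated numerically for the simulated sizes (this is our own back-of-the-envelope calculation, not a figure from the paper); the resulting \alpha = P/N^2 depends only weakly on N:

from math import log, factorial

def capacity_stat_phys(N, d=3):
    """P = N^(d-1) / (2 d! ln N), the statistical-physics result quoted above."""
    return N ** (d - 1) / (2 * factorial(d) * log(N))

for N in (10, 20, 30, 40, 50):
    P = capacity_stat_phys(N)
    print(f"N={N:2d}  P={P:5.1f}  alpha=P/N^2={P / N**2:.3f}")   # alpha of order 0.02-0.04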

\epsilon capacity

When the state after one iteration and the learnt pattern are allowed to differ by less than \epsilon N bits, \alpha increases slightly (\Delta\alpha/\alpha = 0.5 for \epsilon = 0.1 with N = 40). When further iterations are allowed, errors are amplified and the capacity increase is partially suppressed. This means that the extra learnt patterns do not yield supplementary stable states of the network dynamics (fig. 2).

Fig. 2. \epsilon capacity \alpha = P/N^2 vs \epsilon, the maximum authorized fraction of wrong neurons in the output patterns.

Non-catastrophic deterioration of the memory?

When \alpha = P/N^2 rises above \alpha_c, the mean proportion of inverted neurons \langle\delta\rangle (averaged over the learnt patterns) grows linearly with \alpha. The slope is small and varies weakly with N. This result has been confirmed by a study of the basins of attraction, i.e. final Hamming distance vs initial Hamming distance, for \alpha > \alpha_c (fig. 3). This contrasts with second-order networks: similar curves are obtained there, but the slope diverges as N goes to infinity (see Appendix 1), leading to the catastrophic deterioration of the memory 2).

Basins of attraction

When a noisy pattern (neurons of the original pattern are inverted with probability r) is presented, the exact retrieval capacity is not significantly altered for r < 20% (fig. 4a)). In this case, running a few iterations (<5) yields a greater capacity, which means that the prescribed patterns are stable states of the dynamics (fig. 4b)).

Fig. 3. Non-catastrophic deterioration of the capacity in third-order nets. The mean Hamming distance \langle\delta\rangle is plotted vs \alpha = P/N^2.

Dilution

Dilution weakly reduces the storage capacity for perfect retrieval (fig. 5a)). To evaluate the storage efficiency, we plot the stored information divided by the number of synaptic coefficients available vs the dilution \tau (fig. 5b)). This ratio increases by a factor of order 4 at strong dilutions. A similar effect was observed in second-order asymmetrically diluted networks in the limit of infinite dilution 14). We confirmed that symmetric dilution, i.e. cancelling C_{ijk} and C_{ikj} simultaneously, does not modify the result qualitatively.

Synapse clipping

When the synaptic coefficients are restricted to one of the two values +1 and -1, the theoretical storage capacity is only reduced by a factor of \pi/2 (see Appendix 2 for a detailed calculation). Numerical simulations agree perfectly with this estimate. This result is of great interest for hardware implementations of such memories: they only require 1-bit precision for efficient storage (fig. 6).

2.4. Discussion

Comparing theoretical and numerical results: weak effect of the iterations

As for second-order networks 4), a single iteration after initialization with a given pattern allows the stable state to be reached in most cases. Therefore, for many properties we can model a Hopfield network as a set of single-layer perceptrons of the corresponding order. This makes the random walk analysis a powerful (and simple) tool for predicting the different properties of Hopfield nets of any order, as shown by the very good agreement between numerical simulations and theoretical results.

Fig. 4. Exact capacity \alpha = P/N^2 for noisy input patterns vs r, the probability of flipping one neuron: a) comparison of theory and simulation for one iteration, and b) the iteration effect in the simulations.

The slight discrepancy (by a factor of order 1.5) is probably due to the approximation that the fields h_i are independent variables. Some properties, however, cannot be described using this approach: the \epsilon capacity, the precise shape of the basins of attraction, and the deterioration of the memory beyond the storage capacity. For those aspects, the iteration effect is predominant and other theoretical tools would be required 13).

Fig. 5. a) Exact capacity \alpha = P/N^2 in a diluted network vs the dilution \tau, the proportion of cancelled synaptic coefficients, and b) \alpha* = P/[(1-\tau)N^2] = \alpha/(1-\tau) vs \tau.

Comparing second- and third-order networks

As already noted by several authors on partial aspects (see the references on higher-order nets mentioned in Sec. 1), the results of this section show that the properties of second- and third-order nets are qualitatively identical (for second-order nets see for example ref. 3). However, the catastrophic deterioration of the memory seems to disappear at orders higher than 2. To the best of our knowledge, this remains an open problem.

Fig. 6. Exact capacity \alpha = P/N^2 vs the network size N for quantized synaptic coefficients C_ijk = \pm 1.

Fig. 7. Information stored (PN bits) vs the number of synaptic coefficients for second- and third-order Hopfield nets.

The quantitative difference arises from the greater number of synaptic coefficients, N^3. This has already been noted in theoretical calculations 5) and is shown by the numerical curves for d = 2 and d = 3 in diluted or intact nets (fig. 7): the stored information (PN bits) is proportional to the memory size used for the synaptic coefficients (N^d/d!).

Their ratio q gives the yield of the Hopfield memory. This can be precisely evaluated in the case of clipped synapses, and when taking into account the redundancy of the synaptic coefficients:

    q = \frac{P N}{N^d / d!} = d!\, \frac{P}{N^{d-1}}                                        (10)

In our case q = 0.14 x 2 \approx 0.18 \pi/2 for second order, and q = 0.04 x 6 \approx 0.15 \pi/2 for third order.

Improving the storage capacity

Two ways are proposed to enhance the storage capacity. In still higher-order networks the number of stored patterns grows as N^{d-1} (d is the order of the network) at the cost of a greater number of synaptic coefficients (N^d). Moreover, we expect from simple considerations that the quality of retrieval will improve when d grows to infinity. The field value obtained when presenting a pattern \sigma at the input is

    h_i = \sum_{\mu=1}^{P} \xi_i^\mu\, (\xi^\mu \cdot \sigma)^{d-1}                          (11)

where

    \xi^\mu \cdot \sigma = \frac{1}{N} \sum_{j=1}^{N} \xi_j^\mu \sigma_j                     (12)

When d goes to infinity, we can approximate h_i by the leading term in the sum:

    h_i \approx h_i^{nn} = \xi_i^\nu\, (\xi^\nu \cdot \sigma)^{d-1}                          (13)

where the pattern \nu maximizes the quantity |\xi^\nu \cdot \sigma|. h_i^{nn} may be viewed as a 'nearest neighbour' field.
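The dominance of the largest overlap for large d, eqs (11)-(13), can be illustrated numerically; the following sketch is ours and assumes the normalized overlap of eq. (12):

import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 20
xi = rng.choice([-1, 1], size=(P, N))

sigma = xi[0].copy()                      # input: pattern 0 with 15% of the neurons flipped
sigma[rng.random(N) < 0.15] *= -1

m = xi @ sigma / N                        # overlaps of eq. (12)
nn = np.argmax(np.abs(m))                 # 'nearest neighbour' pattern of eq. (13)
for d in (2, 3, 5, 9):
    h = (xi.T * m ** (d - 1)).sum(axis=1)          # eq. (11)
    h_nn = xi[nn] * m[nn] ** (d - 1)               # eq. (13)
    frac_correct = np.mean(np.sign(h) == xi[0])
    frac_match = np.mean(np.sign(h) == np.sign(h_nn))
    print(f"d={d}  one-step overlap with pattern 0: {frac_correct:.2f}  "
          f"agreement with nearest-neighbour field: {frac_match:.2f}")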

Iterative algorithms already developed for second-order nets 15,16) should give optimal performances, as shown by preliminary numerical simulations (the capacity is multiplied by a factor of order 10).

3. Application to invariant pattern recognition

3.1. Collapse of Hebb's rule for biased patterns

In the first part, the Hebbian rule was used for the efficient storage of zero-mean patterns. However, in many cases of interest the patterns are biased, and this leads to memory failure. We consider patterns (\xi_i^\mu)_{i=1,N}, where the \xi_i^\mu are PN independent random variables with a mean value for each neuron given by

    \langle \xi_i^\mu \rangle_\mu = \bar{\xi}_i                                              (14)

With the standard Hebb rule, the mean field value is modified as given by eqs (15) and (16), where \langle\,\cdot\,\rangle_\mu indicates averaging over the pattern index \mu. The restitution of the 'minority' neurons is therefore hindered by this modification of h_i, and the network converges towards the mean pattern \bar{\xi} with high probability. Moreover, the variance is enhanced, as expressed by eq. (17). When the mean pattern \bar{\xi} is not equal to zero, this dramatic noise leads to memory breakdown.

3.2. Modified Hebb rule for biased patterns

To overcome this problem, we propose a simple generalization of the Hebbian rule which is naturally derived from the random walk analysis.

First, we use the centred variables

    \tilde{\xi}_i^\mu = \xi_i^\mu - \bar{\xi}_i                                              (18)

The synaptic coefficients become

    C_{ijk} = \frac{1}{N^2} \sum_{\mu=1}^{P} \tilde{\xi}_i^\mu \tilde{\xi}_j^\mu \tilde{\xi}_k^\mu     (19)

and we add the thresholds

    \theta_i = \bar{\xi}_i\, \frac{1}{P} \sum_{\mu=1}^{P} \Big[ \frac{1}{N} \sum_{j=1}^{N} (\tilde{\xi}_j^\mu)^2 \Big]^2     (20)

to correct the local fields. Neuron updating is performed as follows:

    \sigma_i(t+1) = \mathrm{sign}[h_i(t) + \theta_i]                                         (21)

Keeping in mind that the \tilde{\xi}_i^\mu are independent random variables with

    \langle (\tilde{\xi}_i^\mu)^2 \rangle = 1 - \bar{\xi}_i^2                                (22)

we obtain the signal-to-noise ratio (23), which is proportional to \alpha = P/N^2 as in the uncorrelated case; it also depends on the mean contrast of neuron i (first bracket) and on the relative total contrast of pattern \mu (last bracket).
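A sketch of the modified rule follows (ours, not the authors' code); the threshold expression implements our reading of eq. (20) above and should be taken as an assumption rather than the exact published formula:

import numpy as np

def train_biased(patterns):
    """Modified Hebb rule for biased patterns: centre the variables (18), build the
    third-order coefficients from them (19), and add a bias-compensating threshold,
    eq. (20) as reconstructed here."""
    P, N = patterns.shape
    bias = patterns.mean(axis=0)                      # xi_bar_i, estimated from the data
    x = patterns - bias                               # centred variables (18)
    C = np.einsum('ui,uj,uk->ijk', x, x, x) / N**2    # eq. (19)
    contrast = (x ** 2).mean(axis=1)                  # (1/N) sum_j (x_j^mu)^2, per pattern
    theta = bias * np.mean(contrast ** 2)             # eq. (20), our reconstruction
    return C, theta

def update(C, theta, sigma):
    """One synchronous step, eq. (21): sigma_i <- sign(h_i + theta_i).
    For brevity the terms excluded from the sum in eq. (2) are not removed here."""
    h = np.einsum('ijk,j,k->i', C, sigma, sigma) + theta
    return np.where(h >= 0, 1, -1)

For unbiased patterns the estimated bias and the threshold vanish on average, and the rule reduces to that of Sec. 2.1.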

3.3. Application to invariant character recognition

Higher-order nets were suggested for the performance of invariant pattern retrieval by systematically learning the transformed patterns 11). The required capacity is then PT, where P is the number of basic patterns and T the number of transformations. T is often of order N, so a higher-order network is needed. Moreover, in the case of structured patterns (images, characters, ...), strong biases may appear; the standard Hebb rule is then inefficient. We used the modified rule (defined earlier) in a heteroassociative version: when a transformed pattern is presented to the network, it has to yield the original (non-transformed) pattern. As the output patterns are associated with themselves, the network can run many iterations to improve convergence (e.g. in the case of noisy patterns). For more details on the network parameters, see Appendix 3.

In the first illustration (fig. 8), we take 5 x 5 characters (X, O and R) on a 10 x 5 white ground. The network learns the patterns with ten horizontal translations (periodic boundary conditions). All the learnt characters are correctly retrieved, even with a little noise (10% wrong neurons) and clipped synapses (C_{ijk} = \pm 1, \theta_i is normalized by \langle|C_{ijk}|\rangle_{ijk}). The information storage efficiency is evaluated by the ratio R of the stored information (3 patterns x 50 neurons x 10 translations) to the number of synaptic coefficients (50^3): R = 1/80. This corresponds to \alpha = 30/50^2 = 0.012, below \alpha_c \approx 0.04.

Fig. 8. Examples of translationally invariant character recognition (three characters, ten translations with periodic boundary conditions). The original non-translated patterns are obtained after a few iterations (of order 3).
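To make the set-up of fig. 8 concrete, here is a small sketch (ours, with a made-up character standing in for X, O and R) that builds the ten horizontal translations, with periodic boundary conditions, and pairs each of them heteroassociatively with the untranslated original:

import numpy as np

def translations(char):
    """char: 5 x 10 array of +-1 (a 5x5 character on the 10x5 ground, stored as 5 rows
    of 10 pixels). Returns the 10 horizontal cyclic shifts as N = 50 neuron vectors."""
    return np.array([np.roll(char, t, axis=1).ravel() for t in range(10)])

# hypothetical character: a 5x5 block of +1 on a -1 background (stand-in for X, O or R)
char = -np.ones((5, 10), dtype=int)
char[:, :5] = 1

X = translations(char)               # inputs: the 10 translated versions (10 x 50)
Y = np.tile(char.ravel(), (10, 1))   # targets: always the original, untranslated pattern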

In the second illustration (fig. 9), two 5 x 5 characters (T, R) could be translated and rotated on a 6 x 6 white ground (4 x 4 transformations). Retrieval was as satisfactory as in the first case. In both cases, we confirmed that the simple Hebbian rule does not work (the mean state values \langle\bar{\xi}_i\rangle_i were respectively 0.5 and 0.2).

Fig. 9. Examples of translationally and rotationally invariant character recognition (2 characters, 4 translations x 4 rotations). The original non-transformed patterns are obtained after a few iterations (of order 3).

3.4. Comments

This limited application shows that the principal advantage provided by the third-order net is a larger storage capacity: P ~ N^2 instead of P ~ N. The space complexity, i.e. the memory size needed, can be reduced by taking into account the intrinsic symmetry of the synaptic coefficients. However, the time complexity, i.e. the number of arithmetic operations performed in the retrieval stage, grows as the total number of synapses. Therefore the computational time for invariant recognition by a Hopfield network is comparable with the time needed to test recognition successively on the different transforms of the pattern. To improve the storage capacity, still higher-order nets would be needed. However, generalization of iterative algorithms already tested on second-order nets 15,16) should give optimal performances, as shown by preliminary numerical simulations (the capacity is multiplied by a factor of order 10).

Acknowledgements

D.J. stayed at the Laboratoires d'Electronique et de Physique Appliquée during his 'stage d'option' of the Ecole Polytechnique, Palaiseau, France, which was supervised by P. Weinfeld from PNHE, Ecole Polytechnique. We are grateful to P. Peretto and M. Gordon from CEN, Grenoble, France, to B. Derrida from CEN, Saclay, France, and to P. Weisbuch from ENS, Paris, for stimulating discussions. We particularly thank S. Makram from LEP for attentive re-reading of the manuscript.

Appendix 1: catastrophic deterioration of the memory in finite-size second-order networks

We have performed numerical simulations on second-order Hopfield nets for different sizes: N = 100, 200, 300, 400. Figure 10 shows the normalized mean error \langle\delta\rangle after convergence (averaged over the learnt patterns) vs the number of learnt patterns P. For small N, \langle\delta\rangle varies linearly with P (finite-size effect).

Fig. 10. a) Normalized mean error on output patterns \langle\delta\rangle vs \alpha = P/N in second-order nets for different sizes N, with iterations continued until convergence; the theory curve is the N -> infinity replica result of Amit et al. b) The same plot for a single iteration (theory of this work and simulations).

The slope grows to infinity with N and the curve approaches the replica calculation result 2): this is the catastrophic deterioration of the memory. It has a second origin: iterations propagate and amplify errors, as seen on the single-iteration curve, which does not vary with size (theory and simulations). This last phenomenon had already been noted by several authors 13).

Appendix 2: local field mean value and variance for clipped synapses

When presenting the pattern \xi^\nu at the input, the field value is

    h_i = {\sum_{j,k=1}^{N}}' C_{ijk}\, \xi_j^\nu \xi_k^\nu = \frac{\xi_i^\nu}{N^2} {\sum_{j,k=1}^{N}}' q_{jk}          (24)

with q_{jk} = N^2 C_{ijk}\, \xi_i^\nu \xi_j^\nu \xi_k^\nu = \sum_{\mu=1}^{P} \xi_i^\mu \xi_i^\nu\, \xi_j^\mu \xi_j^\nu\, \xi_k^\mu \xi_k^\nu. When clipping the synaptic coefficients, i.e. when replacing C_{ijk} by sign(C_{ijk}), the q_{jk} are replaced by sign(q_{jk}). As the q_{jk} for j < k are independent random Gaussian variables of mean 1 and variance P, and q_{jk} = q_{kj}, we can compute the distribution of the clipped q_{jk}. The mean value is

    \overline{\mathrm{sign}(q_{jk})} = -\frac{1}{\sqrt{2\pi P}} \int_{-\infty}^{0} \exp\Big[-\frac{(q-1)^2}{2P}\Big] dq + \frac{1}{\sqrt{2\pi P}} \int_{0}^{\infty} \exp\Big[-\frac{(q-1)^2}{2P}\Big] dq          (25)

or

    \overline{\mathrm{sign}(q_{jk})} = \frac{2}{\sqrt{2\pi P}} \int_{0}^{1} \exp\Big[-\frac{(q-1)^2}{2P}\Big] dq \approx \sqrt{\frac{2}{\pi P}}          (26)

As

    \overline{\mathrm{sign}(q_{jk})^2} - \big(\overline{\mathrm{sign}(q_{jk})}\big)^2 = 1 - \frac{2}{\pi P} \approx 1          (27)

the 'clipped field' h_i is a Gaussian variable of mean value \sqrt{2/(\pi P)} and variance 2/N^2 (the factor 2 arises from the j-k permutation symmetry). The effective \alpha for clipped synapses is

    \alpha_c^{clipped} = \frac{2}{\pi}\, \alpha_c          (28)

The loss of capacity by a factor \pi/2 is identical to that noted for second-order networks 17).
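The mean and variance of the clipped q_{jk}, eqs (26)-(27), are easy to check numerically (our sketch, not part of the paper): clip Gaussian variables of mean 1 and variance P and compare with \sqrt{2/(\pi P)} and 1 - 2/(\pi P):

import numpy as np
from math import pi, sqrt

rng = np.random.default_rng(2)
for P in (10, 50, 200):
    q = rng.normal(loc=1.0, scale=sqrt(P), size=1_000_000)
    s = np.sign(q)
    print(f"P={P:3d}  mean={s.mean():.4f}  theory={sqrt(2/(pi*P)):.4f}  "
          f"var={s.var():.4f}  theory={1 - 2/(pi*P):.4f}")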

Appendix 3: heteroassociative version of the modified Hebb rule

Several authors 18,19) have extended the associative Hopfield memory to heteroassociation in second-order networks: the input pattern X^\mu is associated with the output pattern Y^\mu. Input and output pattern sizes can be different. In the case of the standard Hebb rule in third-order nets we take

    C_{ijk} = \frac{1}{N^2} \sum_{\mu=1}^{P} Y_i^\mu X_j^\mu X_k^\mu          (29)

We generalize this for biased patterns:

    C_{ijk} = \frac{1}{N^2} \sum_{\mu=1}^{P} \tilde{y}_i^\mu \tilde{x}_j^\mu \tilde{x}_k^\mu          (30)

    \theta_i = \bar{Y}_i\, \frac{1}{P} \sum_{\mu=1}^{P} \Big[ \frac{1}{N} \sum_{j=1}^{N} (\tilde{x}_j^\mu)^2 \Big]^2          (31)

When a pattern \sigma is presented, the field is

    h_i = {\sum_{j,k=1}^{N}}' C_{ijk}\, (\sigma_j - \bar{X}_j)(\sigma_k - \bar{X}_k) + \theta_i          (32)

where \tilde{y}_i^\mu and \tilde{x}_j^\mu are the reduced variables Y_i^\mu - \bar{Y}_i and X_j^\mu - \bar{X}_j. As shown for second-order nets, properties such as memory capacity and fault tolerance are similar for auto- and heteroassociation. The formalism that we have used in this paper works as well for heteroassociation.
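A sketch of the heteroassociative version, eqs (29)-(32), follows (ours, with the same caveat that the threshold of eq. (31) is our reconstruction); X holds the transformed input patterns and Y the associated outputs, for example the arrays built after fig. 8:

import numpy as np

def train_hetero(X, Y):
    """Heteroassociative modified Hebb rule: centred inputs and outputs (30) plus a
    threshold (31) compensating the output bias (our reconstruction of the formula)."""
    P, N = X.shape
    xbar, ybar = X.mean(axis=0), Y.mean(axis=0)
    x, y = X - xbar, Y - ybar                          # reduced variables
    C = np.einsum('ui,uj,uk->ijk', y, x, x) / N**2     # eq. (30)
    theta = ybar * np.mean(((x ** 2).mean(axis=1)) ** 2)   # eq. (31), as reconstructed
    return C, theta, xbar

def recall(C, theta, xbar, sigma):
    """Eq. (32): h_i = sum'_{j,k} C_ijk (sigma_j - Xbar_j)(sigma_k - Xbar_k) + theta_i.
    The terms excluded from the primed sum are not removed in this sketch."""
    s = sigma - xbar
    h = np.einsum('ijk,j,k->i', C, s, s) + theta
    return np.where(h >= 0, 1, -1)

# Usage with the translated-character set built after fig. 8:
# C, theta, xbar = train_hetero(X, Y); out = recall(C, theta, xbar, X[3])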

REFERENCES

1) J.J. Hopfield, Proc. Natl. Acad. Sci. USA, 79, 2554 (1982).
2) D.J. Amit, H. Gutfreund and H. Sompolinsky, Phys. Rev. Lett., 55 (14), 1530 (1985).
3) R.J. McEliece, E.C. Posner, E.R. Rodemich and S.S. Venkatesh, IEEE Trans. Inf. Theory, 33 (4), 461 (1987).
4) M. Gordon, J. Phys. (Paris), 48, 2053 (1987).
5) P. Peretto and J.J. Niez, Biol. Cybern., 54 (1), 53 (1986).
6) A.D. Bruce, A. Canning, B. Forrest, E. Gardner and D.J. Wallace, Proc. of the AIP Conf., Snowbird, UT, 1986.
7) P. Baldi and S.S. Venkatesh, Phys. Rev. Lett., 58 (9), 913 (1987).
8) L. Personnaz, I. Guyon and G. Dreyfus, Europhys. Lett., 4 (8), 863 (1987).
9) D. Horn and M. Usher, J. Phys. (Paris), 49 (3), 389 (1988).
10) D. Psaltis, C.H. Park and J. Hong, Neural Netw., 1, 149 (1988).
11) C.L. Giles and T. Maxwell, Appl. Opt., 26 (23), 4972 (1987).
12) S.S. Venkatesh, Proc. of the AIP Conf., Snowbird, UT, 1986.
13) E. Gardner, J. Phys. A, 20 (11), 3453 (1987).
14) B. Derrida, E. Gardner and A. Zippelius, Europhys. Lett., 4 (2), 167 (1987).
15) W. Krauth and M. Mézard, J. Phys. A, 20, L745 (1987).
16) G. Pöppel and U. Krey, Europhys. Lett., 4 (9), 979 (1987).
17) H. Sompolinsky, Phys. Rev. A, 34 (3), 2571 (1986).
18) E. Domany, R. Meir and W. Kinzel, Europhys. Lett., 2 (3), 175 (1986).
19) L. Personnaz, I. Guyon and G. Dreyfus, Phys. Rev. A, 34 (5), 4217 (1986).

Authors

J.A. Sirat: Ing. degree, Ecole Polytechnique, Palaiseau, France, 1983; Ph.D. (Nouvelle Thèse), Université Scientifique et Médicale de Grenoble, France, 1986; Laboratoires d'Electronique Philips. His main topics of research have been successively low-temperature solid-state physics (thesis) and neural networks.

D. Jorand: Ing. degree, Ecole Polytechnique, Palaiseau, France, 1988; Laboratoires d'Electronique Philips. He is currently studying computer science at the Ecole Nationale Supérieure des Télécommunications, Paris, France, with financial support from the Laboratoires d'Electronique Philips. His main topics of interest are neural networks, parallel computation and high-level languages.
