
Information Storage Capacity of Incompletely Connected Associative Memories

Holger Bosch
Departement de Mathematiques et d'Informatique, Ecole Normale Superieure de Lyon, Lyon, France

Franz Kurfess
Department of Computer Science, New Jersey Institute of Technology, Newark, New Jersey

Submitted for publication to "Neural Networks", Oct.

Abstract

The memory capacities for auto- and hetero-associative incompletely connected memories are calculated. First, the capacity is computed for fixed parameters of the system. Optimization yields a maximum capacity between 0.53 and 0.69 for hetero-association and half of it for auto-association, improving previous results. The maximum capacity requires sparse input and output patterns and grows with increasing connectivity of the memory. Furthermore, parameters can be chosen in such a way that the information content per pattern asymptotically approaches 1.

1 Introduction

Tasks like voice or face recognition are quite difficult to realize with conventional computer systems, even for the most powerful of them. In contrast, these tasks are easily performed by the human brain despite its much lower computation speed. This indicates that the difficulty is rather a conceptual problem of the conventional computer model, and one may ask for alternatives. An important difference between the conventional computer and the human brain is that some of the brain's functionality is based on an associative memory. The main advantages of this kind of memory are its noise tolerance and its nearly constant retrieval time. It has therefore been studied intensively over the last decades (Steinbuch 1961, Kohonen 1979, Palm 1980, Willshaw et al 1993), and some interesting applications and theoretical results have been obtained. Most of the theoretical models are fully connected memories, which can be applied to investigate the actual digital implementations of artificial associative memories. However, some future realizations like optical or analog associative memories may not possess fully connected layers, due to physical properties of the implementations. Hence incompletely connected models should also be considered to supply a theoretical foundation for these memories. Another reason to study incompletely connected associative memories is that the corresponding parts of the brain are not at all fully connected. For example, the neurons of some parts of the hippocampus are connected to not more than 5% of the neurons in their neighborhood (Amaral et al 1990). The results obtained from such investigations could improve the understanding of the brain, even if the considered models are far from neurobiological reality.

A naturally arising question about a memory concerns its storage capacity. For fully connected memories, Palm (1980) calculated it for a simple model and found a maximum capacity of log(2) ≈ 0.69 for the hetero-associative memory and log(2)/2 ≈ 0.34 for the auto-associative memory. Furthermore, he mentioned a capacity of 0.26 for the incompletely connected hetero-associative memory. In contrast to the fully connected memory, there is no natural retrieval strategy for the incompletely connected case. Buckingham and Willshaw (1993) investigated several possibilities and analyzed their performance. Our calculations are based on one of their retrieval strategies, since the capacity of a memory depends on the retrieval technique used.

We will recalculate the capacity for both the hetero- and the auto-associative memory, and improve the value of 0.26 given in (Palm 1980). For the maximum capacity C of hetero-association we obtain an increasing function between 0.53 for low connectivity and 0.69 for full connectivity, confirming Palm's result of C = log(2). Auto-association achieves roughly half of these values, with maximum capacities between 0.27 and 0.35.

2 Description of the Model

This section contains a description of the associative memories on which the following calculations are based. The hetero-associative case is described in detail; the auto-associative memory discussed in the second subsection differs only slightly from the hetero-associative memory.

2.1 Hetero-association

The memory considered here is similar to the completely connected model used in (Palm 1980). The input patterns X^s and the output patterns Y^s are binary vectors of finite dimension, i.e. X^s ∈ {0,1}^M and Y^s ∈ {0,1}^N. A total number of R associations between input and output patterns should be stored in the memory, i.e. s = 1..R. The neural network used for the memory task consists of one input layer of M neurons and one output layer of N neurons, binary weights w_{ij} ∈ {0,1} for the existing synapses between the two layers, and threshold functions \Theta_i for the output units which are set individually during the retrieval phase. The two layers are incompletely connected and therefore the net can be represented as a partially filled matrix W ∈ {0,1,b}^{N×M} in which the blank b stands for a missing matrix element. The memory is trained with a one-step Hebbian learning rule, and the values for the existing weights w_{ij} are calculated by:

    w_{ij} = \bigvee_{s=1..R} \left( Y_i^s X_j^s \right)
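As an illustration of this storage step, the following minimal NumPy sketch (not code from the paper) builds the partially filled weight matrix from R pattern pairs; the Boolean connectivity mask plays the role of the blanks b, and X, Y hold the input and output patterns row-wise:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_connectivity(N, M, Z):
        # each of the N*M potential synapses exists independently with probability Z,
        # giving about Z*M*N uniformly distributed connections
        return rng.random((N, M)) < Z

    def store(X, Y, conn):
        # one-step Hebbian (clipped) rule: w_ij = OR over s of (Y_i^s * X_j^s),
        # kept only where a synapse exists; absent entries stay blank (masked by conn)
        W = (Y.T.astype(int) @ X.astype(int)) > 0
        return W & conn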

To retrieve the response for an input pattern X, first the vector Y' = W X is calculated, where the sums in the matrix product run only over the existing weights w_{ij}. To obtain the final response Y, the threshold functions are determined and applied to Y':

    Y_i = \Theta_i \left( \sum_{j=1}^{M} w_{ij} X_j \right)

The calculation of the memory capacity requires some further specifications of the model. The number K of "ones" in the output patterns Y^s is fixed for all patterns in order to simplify the correction of the obtained response. To obtain a consistent model, the number L of "ones" in the input patterns X^s is also fixed. This restriction can be replaced by a probability for the occurrence of a "one" in an input unit, which gives roughly the same results. Furthermore, the net contains a total of ZMN uniformly distributed connections between the input and the output layer, where Z, 0 < Z ≤ 1, is the connectivity of the network. The threshold functions are set by using the activities of the input pattern X (see Buckingham and Willshaw 1993). The individual activity A_i for an output unit Y_i is defined as

    A_i = \sum_j X_j

where the sum is over the existing synapses w_{ij} between output neuron i and input neuron j. Note that the activities A_i can range from 0 to L, depending on the distribution of the existing connections. The threshold for the output unit Y_i is set to \Theta_i = A_i, which means that an output neuron fires if its dendritic sum equals the input activity A_i. Hence, if a learned input pattern X^s is applied to the net, the answer contains all K "ones" of the associated output Y^s but may contain O additional wrong "ones". In this respect the net has the same behavior as Palm's model.

The described retrieval strategy can be efficiently implemented by switching to the complement matrix \bar{W} of W, i.e. \bar{w}_{ij} = 1 - w_{ij}. An output unit returns "one" if its dendritic sum equals its input activity, i.e. if all existing connections between the high input units and the output unit are high. In terms of the complement matrix this is equivalent to the corresponding connections being low, i.e. the complementary dendritic sum equals zero:

    Y_i = 1 - \bigvee_j \bar{w}_{ij} X_j

Therefore no additional calculations are required to retrieve the output vector Y.
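The activity-based thresholding can be sketched in the same illustrative style, reusing make_connectivity and store from the sketch above (note the zero-activity behavior remarked on in the Discussion):

    def retrieve(x, W, conn):
        # x: binary input vector of length M; W, conn: N x M Boolean arrays
        dendritic = (W & conn).astype(int) @ x   # dendritic sums over existing, stored synapses
        activity = conn.astype(int) @ x          # individual activities A_i
        # a unit fires iff its dendritic sum reaches its threshold theta_i = A_i;
        # a unit with A_i = 0 therefore fires as well (0 == 0)
        return (dendritic == activity).astype(int)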

2.2 Auto-association

The same model as for hetero-association is used, except for the following natural modifications: the dimensions of the input and output vectors are the same, i.e. N = M, and both vectors contain the same number of "ones", i.e. K = L. Note that the obtained matrix is symmetrical and therefore only roughly half of it is needed, which explains the lower capacity for auto-association. Having specified the associative memories as targets of our investigations, we can now proceed to calculate the memory capacities.

3 Capacity of the Hetero-associative Memory

First, the capacity is calculated for a given memory. In a next step, this capacity is optimized with respect to the connectivity Z. The obtained values are then numerically verified.

3.1 Theoretical Results for Fixed Parameters

The following calculations are partly inspired by the calculations for the completely connected model in (Palm 1980). First, the expectation value of the information stored in the memory is determined. The definition of information is based on the corresponding expression of Shannon's information theory. The total information I is the sum of the individual informations stored per output pattern I_s:

    I = \sum_{s=1}^{R} I_s

I_s is the difference between the information contained in the output pattern Y^s and the information needed to determine this pattern given the memory's response to X^s:

    I_s = \mathrm{ld}\binom{N}{K} - \mathrm{ld}\binom{O+K}{K}

where ld denotes the logarithm to base 2. Therefore we obtain for the expectation value of the total information

    E(I) = R \, E\left[ \mathrm{ld}\binom{N}{K} - \mathrm{ld}\binom{O+K}{K} \right]
         = R \, E\left[ \sum_{i=0}^{K-1} \mathrm{ld}\,\frac{N-i}{K+O-i} \right]

Using the general inequality E(ld A) ≤ ld E(A), which derives from the inequality between the arithmetic and the geometric mean, a lower bound can be derived:

    E(I) \ge R \sum_{i=0}^{K-1} \mathrm{ld}\,\frac{N-i}{K+E(O)-i}    (1)

The expectation number of wrong "ones" E(O) in the response needs to be determined next. E(O) depends strongly on the distribution of "ones" in the connection matrix. Unfortunately the occurrences of "ones" in the same row or column are not independent. But for sparse input patterns (sparse meaning here that the input vector contains only a few "ones") and a large number of stored patterns, the dependence is very weak. Since the maximum capacity is reached for this choice of parameters, the independence of the occurrences is assumed for this calculation. For a more detailed discussion see (Palm 1980). The expectation number E(O) of wrong "ones" is given by

    E(O) = (N-K) \, \Pr\left( (W X)_i = A_i \mid Y_i = 0 \right)

(W X)_i = A_i holds if and only if all existing connections between the output unit and the high input units have high weights. Since independence of the weights is assumed, we obtain:

    E(O) \approx (N-K)\left(1 - Z \Pr(w_{ij} = 0)\right)^L
         = (N-K)\left(1 - Z \left(1 - \frac{LK}{MN}\right)^{R-1}\right)^L    (2)

which completes the calculation of the stored information.
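Formulas (1) and (2) are easy to evaluate numerically. The following helpers (an illustrative sketch of my own, with ld taken as the base-2 logarithm as in the text) can be compared directly with the numerical verification discussed in Section 3.3:

    import numpy as np

    def expected_spurious_ones(N, M, K, L, R, Z):
        # formula (2): E(O) ~ (N-K) * (1 - Z*(1 - LK/(MN))**(R-1))**L
        p_zero = (1.0 - L * K / (M * N)) ** (R - 1)   # Pr(w_ij = 0) for an existing synapse
        return (N - K) * (1.0 - Z * p_zero) ** L

    def information_lower_bound(N, M, K, L, R, Z):
        # bound (1): E(I) >= R * sum_{i<K} ld((N - i) / (K + E(O) - i))
        EO = expected_spurious_ones(N, M, K, L, R, Z)
        i = np.arange(K)
        return R * np.sum(np.log2((N - i) / (K + EO - i)))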

The expectation value for the memory capacity E(C) is then given by

    E(C) = \frac{E(I)}{ZMN}

since the memory contains a total of ZMN storage units.

3.2 Maximum Capacity

In order to maximize the capacity, some approximations are required. Since (A-i)/(B-i) ≥ A/B for A ≥ B, and N ≥ K + E(O), we obtain from (1)

    E(I) \ge R K \, \mathrm{ld}\,\frac{N}{K + E(O)}

For sparse output patterns, i.e. K << N, this gives a tight approximation. The substitution of E(O) by (2) yields:

    E(I) \ge R K \, \mathrm{ld}\,\frac{N}{K + (N-K)\left(1 - Z\left(1 - \frac{LK}{MN}\right)^{R-1}\right)^L}
         \approx R K \, \mathrm{ld}\,\frac{N}{K + (N-K)\left(1 - Z e^{-R\frac{LK}{MN}}\right)^L}
         \approx R K \, \mathrm{ld}\,\frac{N}{N \left(1 - Z e^{-R\frac{LK}{MN}}\right)^L}
         = R K L \, \mathrm{ld}\,\frac{1}{1 - Z e^{-R\frac{LK}{MN}}}    (3)

The approximation in (3) overestimates the memory capacity, but the real value will be close to the estimate if K << E(O). This requires sparse input patterns and a certain loss of information, i.e. there are many more wrong than genuine "ones" in the answer. But if furthermore E(O) << N, even a high percentage of the information can be stored (see Section 5). Now, writing R = p(Z) MN/(LK) for the number of stored patterns, the expectation value of the total capacity is:

    E(C) \approx \frac{p(Z)}{Z} \, \mathrm{ld}\,\frac{1}{1 - Z e^{-p(Z)}}    (4)
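The optimization of (4) reported below can be reproduced with a short script (an illustrative sketch; the search bracket for p is my own choice):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def capacity(p, Z):
        # formula (4): E(C) ~ (p/Z) * ld(1 / (1 - Z*exp(-p)))
        return (p / Z) * np.log2(1.0 / (1.0 - Z * np.exp(-p)))

    def optimal_p(Z):
        # maximize (4) over p for a given connectivity Z
        res = minimize_scalar(lambda p: -capacity(p, Z), bounds=(0.01, 3.0), method="bounded")
        return res.x, capacity(res.x, Z)

For Z = 1 this yields p ≈ log(2) and a capacity of about 0.69; for low connectivity the capacity approaches 0.53, in line with Figures 1 and 2.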

Figure 1: Optimal values of p(Z), where R = p(Z) MN/(LK) is the number of stored patterns.

Numerical optimization of E(C) in (4) yields values for p(Z) between 0.7 and 1.0 (Figure 1). With these values we obtain a theoretical maximum capacity between 0.53 and 0.69 which depends only on the connectivity Z (Figure 2). This is much higher than the previously estimated 0.26 (Palm 1980) and brings the capacities of incompletely connected networks close to those of completely connected memories.

3.3 Numerical Verification

To verify the obtained result, the capacity has been optimized for M = N = 100 (Figure 3) and M = N = 1000 (Figure 4), testing a wide range of parameters. Both figures contain the theoretical capacity from Figure 2, the effective capacity from the optimization, and the capacity for K = L = 2 and R = p(Z) MN/(LK), which is called the capacity for estimated parameters. For M = N = 100, the theoretical and the real capacities are indistinguishable whereas the capacity for estimated parameters is not yet optimal. For M = N = 1000 all three capacities are indistinguishable, confirming the theoretical result. For the verification, formulas (1) and (2) were used even though they don't give the exact capacity, since the exact formula requires very time-intensive calculations.
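A direct Monte Carlo check of the model itself is also straightforward. The sketch below (mine, reusing the np import, rng and the store/retrieve helpers from the earlier sketches, with an illustrative value p = 0.8) measures the average number of spurious "ones" per retrieved pattern, which can be compared against formula (2):

    def simulate_spurious_ones(N=100, M=100, K=2, L=2, Z=0.5, p=0.8):
        R = int(p * M * N / (L * K))                      # R = p(Z) * MN / (LK)
        X = np.zeros((R, M), dtype=int)
        Y = np.zeros((R, N), dtype=int)
        for s in range(R):
            X[s, rng.choice(M, L, replace=False)] = 1     # exactly L "ones" per input pattern
            Y[s, rng.choice(N, K, replace=False)] = 1     # exactly K "ones" per output pattern
        conn = make_connectivity(N, M, Z)
        W = store(X, Y, conn)
        wrong = [int(np.sum(retrieve(X[s], W, conn) & (1 - Y[s]))) for s in range(R)]
        return np.mean(wrong)                             # compare with expected_spurious_ones(...)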

Figure 2: Maximum capacity of the incompletely connected memory. The capacity C increases with the connectivity Z.

Figure 3: For 100 neurons the theoretical and the real capacity are indistinguishable (upper curve) whereas the capacity for estimated parameters is not yet optimal (lower curve).

Figure 4: For 1000 neurons the theoretical capacity, the real capacity and the capacity for estimated parameters are indistinguishable.

The obtained results were confirmed by the exact formula for the optimal parameters, showing in these cases only insignificantly different capacities. In this section the capacity for hetero-associative memories has been calculated and then optimized for fixed connectivity. These theoretical values have been confirmed by numerical evaluations of the exact formula.

4 Capacity of the Auto-associative Memory

The corresponding calculations are now performed for the auto-associative memory. The capacity is optimized in the same manner as for hetero-association and the derivation is therefore only briefly outlined. Since the result is quite similar to hetero-association, no numerical verifications were performed.

The amount of information stored in the auto-associative memory is defined in the same way as for the hetero-associative memory. Clearly the correction information per pattern must be defined differently, since the input pattern equals the output pattern. To retrieve a pattern, the high units are determined successively, one at each step, starting with an "empty" vector.

A step consists of applying the current pattern to the memory and determining a further genuine "one" among the obtained high output units. The sum of the information of these determinations equals the correction information. Hence the information stored in the memory is given by (see (Palm 1980) for the derivation of the formula)

    E(I) \ge R \sum_{i=0}^{K-1} \mathrm{ld}\,\frac{N-i}{K + E(O_i) - i}

where E(O_i) is the expectation number of spurious "ones" at the i-th step. Note that the applied input pattern at the i-th step contains exactly i "ones". Similar to the hetero-associative case,

    E(O_i) \approx (N-K)\left(1 - Z \Pr(w_{ij} = 0)\right)^i
          = (N-K)\left(1 - Z \left(1 - \frac{K^2}{N^2}\right)^{R-1}\right)^i

and comparable approximations yield

    E(I) \gtrsim R \sum_{i=0}^{K-1} i \, \mathrm{ld}\,\frac{1}{1 - Z e^{-R\frac{K^2}{N^2}}} \approx R \, \frac{K^2}{2} \, \mathrm{ld}\,\frac{1}{1 - Z e^{-R\frac{K^2}{N^2}}}

and finally, with R = p(Z) N^2/K^2, we obtain

    E(C) \approx \frac{1}{2} \, \frac{p(Z)}{Z} \, \mathrm{ld}\,\frac{1}{1 - Z e^{-p(Z)}}

Considering that only half of the symmetrical matrix is necessary, the result is the same as in the case of hetero-association, and the same optimization yields values between 0.26 and 0.34 for the maximum capacity.
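Numerically, the auto-associative maximum follows directly from the hetero-associative optimization (a small illustrative helper, reusing optimal_p from the sketch above):

    def auto_capacity(Z):
        # half of the hetero-associative maximum, since only half of the
        # symmetric matrix is needed; values range between about 0.26 and 0.34
        _, c_hetero = optimal_p(Z)
        return 0.5 * c_hetero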

5 Information Content per Pattern

Besides the maximum capacity of an associative memory, the information stored per pattern is also of interest. The corresponding value will be calculated and then optimized. Again, numerical verifications confirm the theoretical results.

Figure 5: c(Z) log N "ones" in the input pattern are required to store almost the entire information of the pattern.

The information contained in a pattern is

    I_p = \mathrm{ld}\binom{N}{K} \approx K \, \mathrm{ld}\, N

in the case of sparse output patterns. The condition R K ld N = E(I) guarantees that roughly all information of the patterns is stored, since R K ld N is the total information of the patterns. For the optimal information E(I) = R K L ld(1/(1 - Z e^{-p(Z)})), this provides the following condition for L:

    L \approx \frac{\log N}{\log\frac{1}{1 - Z e^{-p(Z)}}} = c(Z) \log N    (5)

where c(Z) is the function shown in Figure 5. Low connectivity requires a larger number of input "ones" than high connectivity, for a certain level of activity must be present to obtain a high percentage of information storage. Based on the above value of L, the average activity A = ZL is a decreasing function of the connectivity, resulting in values between 2.7 log N and 1.4 log N.
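Condition (5) can be evaluated for concrete N and Z as follows (an illustrative sketch; the ratio is independent of the logarithm's base, and optimal_p is the helper defined earlier):

    import numpy as np

    def optimal_L(N, Z):
        # condition (5): L ~ log N / log(1/(1 - Z*exp(-p(Z)))) = c(Z) * log N
        p, _ = optimal_p(Z)
        return np.log(N) / np.log(1.0 / (1.0 - Z * np.exp(-p)))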

Remember the necessary condition K << E(O) of Section 3.2 for obtaining a high capacity. It can be modified to:

    K << (N-K)\left(1 - Z e^{-p(Z)}\right)^L \approx N \left(1 - Z e^{-p(Z)}\right)^L

Applying the logarithm yields log K < log N + L log(1 - Z e^{-p(Z)}), which gives another condition for L:

    L < \frac{\log N - \log K}{\log\frac{1}{1 - Z e^{-p(Z)}}} = c(Z)(\log N - \log K)    (6)

Thus, comparing conditions (5) and (6), high capacity and a high percentage of stored information are achieved if log K << log N and L = c(Z) log N << N. In particular for small connectivity this is reached only for very large N. It is striking that, neglecting log K in (6), the same function, i.e. c(Z) log N, is obtained by two totally different approaches.

Figure 6 shows the maximum capacity and the corresponding fraction of stored information for a very large memory. The number of "ones" is chosen between c(Z) log N and c(Z)(log N - log K), which results in high capacities and high information contents. For L = c(Z)(log N - log K), maximum capacity is reached, whereas L = c(Z) log N leads to approximately complete information storage. The dependence of the capacity on the number of "ones" in the input pattern is illustrated in Figure 7 for connectivity Z = 0.5. The number of patterns has been optimized to obtain maximum capacity; the increasing curve represents the information content per pattern. Figure 8 shows the capacity and the information content for an increasing number of patterns R and connectivity Z = 0.5. The capacity first increases with the number of patterns before it begins to decrease when too many "ones" are stored in the matrix. The information content is a steadily decreasing function, as expected.

The previous results show that a high information content per pattern is possible and can be realized together with a high capacity for appropriate choices of parameters.

Figure 6: Maximum capacity (upper curve) and information content for N = M = 10^20, K = log N, L = c(Z)(log N - (1/2) log K) and R = p(Z) MN/(LK).

Figure 7: Maximum capacity (decreasing curve) and information content as a function of L. The fixed parameters are N = M = 1000, K = 10 and Z = 0.5, whereas the number of stored patterns is optimized. The capacity decreases with L whereas the information content increases with the number of "ones".

Figure 8: Capacity and information content (decreasing function) for increasing numbers of patterns R. The fixed parameters are N = M = 1000, Z = 0.5, K = 10 (approximately log N) and L = 30 (approximately c(0.5) log N).

6 Discussion

The memory capacities of incompletely connected associative memories have been investigated. The maximum capacities range from 0.53 to 0.69, depending on the connectivity of the network, and can be obtained for sparse input and output patterns and suitable numbers of stored associations. Additionally, more restricted choices of parameters guarantee a high information storage per pattern which asymptotically approaches 1. The memory loses its stored information if too many weights in the matrix are set to "one". Sparse input patterns therefore achieve better storage capacities, since the information content does not increase with the number of "ones" in the input patterns, whereas the number of "ones" in the matrix does. On the other hand, a large number of "ones" in the input pattern leads to a high information content. Memories with low connectivity require more "ones" in the input patterns in order to achieve a high information content, since a minimum activity is needed. The advantage of our model is that only spurious "ones" (and no spurious "zeros") occur in the answer and that the number of output "ones" K is fixed.

Therefore no information is required to determine the number of wrong "ones" O. These specifications result in a high capacity, but they restrict the biological validity of the model. Furthermore, our retrieval strategy implies that an output unit is set to "high" if it has no connection to any of the high input units, i.e. if its activity is zero. This does not seem biologically plausible and shows the limits of our model. Further work should investigate the memory capacity for other retrieval strategies, modified learning rules or different architectures of incompletely connected associative memories. The present work can be seen as a basis for such investigations.

7 List of Symbols Used

A_i : individual activity
C : capacity of the memory
c(Z) : constant for the optimal number of input "ones"
I : stored information
I_s : stored information per pattern
K : number of "ones" in the output pattern
L : number of "ones" in the input pattern
M : length of the input pattern
N : length of the output pattern
O : number of wrong "ones"
p(Z) : constant for the optimal number of patterns
R : number of stored patterns
W : weight matrix
X : input pattern
Y : output pattern
Z : connectivity of the network
\Theta_i : threshold function

References

[1] Amaral, D. G., Ishizuka, N., & Claiborne, B. (1990). Neurons, numbers and the hippocampal network. Progress in Brain Research, 83.
[2] Buckingham, J., & Willshaw, D. (1993). On setting unit thresholds in an incompletely connected associative net. Network, 4.
[3] Hogan, J., & Diederich, J. (1995). Random neural networks of biologically plausible connectivity. Technical Report, Queensland University of Technology, Australia.
[4] Kohonen, T. (1979). Content-Addressable Memories. New York: Springer Verlag.
[5] Palm, G. (1980). On associative memory. Biological Cybernetics, 36.
[6] Palm, G., Kurfess, F., Schwenker, F., & Strey, A. (1993). Neural associative memories. Technical Report, Universitat Ulm, Germany.
[7] Steinbuch, K. (1961). Die Lernmatrix. Kybernetik, 1, 36.
[8] Vogel, D., & Boos, W. (1993). Minimally connective, auto-associative, neural networks. Connection Science, 6(4).
