Hopfield and Potts-Hopfield Networks

Andy Somogyi, Dec 10

1 Introduction

Many early models of neural networks lacked mathematical rigor or analysis. They were, by and large, ad hoc models. John Hopfield was among the first to apply the formalism of statistical mechanics to artificial neural networks, developing a model that was mathematically tractable and that exhibited many of the key attributes found in biological neural networks. In this article, I follow the standard derivation of the free energy for such a network. Next, I introduce my extension, the Potts-Hopfield network, which I argue behaves more like a biological system than the original Hopfield network. I then discuss the details of the program I wrote to test and evaluate both networks, mine and Hopfield's, trained on and matching a variety of patterns. Finally, I display and discuss the results of various experiments with both networks.

2 Biological Plausibility

Nature is fundamentally stochastic. All continuum descriptions of nature are approximations at best. Suppose that we could know the exact position AND momentum of every particle in the universe. Only then could we have a truly deterministic universe. The problem is that particles do not have exact positions and momenta: a little thing called the Heisenberg uncertainty principle prevents that. We cannot even in principle hope to measure both the exact position AND momentum. Thus the exact state of any real system cannot be specified, since its initial conditions cannot be exactly specified.

Continuum approximations work well when describing man-made objects: objects that are smooth, straight, homogeneous, artificial. Nature rarely presents us with such objects. Objects in nature, and in biology in particular, are rough, bumpy, and inhomogeneous. Biological systems are thermally driven; at the molecular level, all reactions are thermally activated. It simply does not make sense to model biological processes as continuum processes. It is not that mathematics cannot be applied to biology; it is just that classical, smooth mathematics does not make much sense here. Instead, stochastic dynamics is what must be used when modeling biological systems.

Biological systems at the cellular level predominantly interact via local connections. There are obvious exceptions, such as the multitude of long-range axonal projections in the cortex. These long-range connections account for a significant portion of the cortical volume; however, they make up only a relatively small percentage of the total synaptic connection count. Consider that among the many constraints on cortical configuration, one of the most significant is the volume constraint (i.e., the entire cortical assembly must fit inside the skull), and long-range axonal projections occupy a significant proportion of that volume.

The Hopfield network, if evolved stochastically, does exhibit the first key trait of biological plausibility. This type of network, however, stores all information in a potentially all-to-all connected network. Even if a fraction of the connections were long range, that would make sense biologically; but there is simply no possible way that anything but the most trivially simple biological neural network could be all-to-all connected.

3 The Hopfield Network

We can say with certainty that the nervous system of every known animal consists of neurons and synapses. Out of the millions of life forms that exist on this planet, all of them have control systems composed of neurons and synapses. While it may be plausible to have intelligences built on some other substrate, there is no way to prove a negative; we can only say that we have not seen any other type of intelligent system. If we accept that we, as humans, are intelligent, if we grant that other animals have varying degrees of intelligence (the ability to learn and solve problems, for example), and given that all animals have a neural system composed of neurons and synapses, then intelligence must somehow emerge out of incredibly complex interactions of neurons and synapses. This is the system found in nature: why not use it as our model?

Early researchers in connectionism, such as McCulloch and Pitts, were no doubt inspired by this fact, and that led them to create the first mathematical model of a neuron in 1943. This model combined neurophysiology and mathematical logic. Even in 1943, neurons were known to behave in a binary way: they either fire or not, based on some threshold. Although researchers of that day did not fully understand the mechanics of real neurons (most would contend that we still do not), it made sense to treat them as logical constructs. The McCulloch-Pitts neuron operates on a discrete time scale: at each time step, a value based on the inputs is computed, and the single output is set to either high or low for the next time step. If this time scale were converted into real units, it would be on the order of a millisecond, as that is the typical period seen in spike trains recorded from actual neurons.

Research in connectionism thrived through the 1960s with contributions from scientists such as Hebb, Ashby, Rosenblatt, et al. However, early research focused on a very simple model of a neuron called the perceptron. Perceptrons are relatively limited in what they can do, and in 1969 Minsky and Papert demonstrated the limits on the sorts of functions that perceptrons can compute, showing that even simple functions like the exclusive or could not be handled. As a result of Minsky and Papert's work, very little research was done in biologically inspired computing until the early 1980s. In the intervening years, the AI community shifted its efforts toward symbol processing rather than connectionism.
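As a brief aside before continuing the history: the McCulloch-Pitts unit described above amounts to only a few lines of code. The sketch below is purely illustrative; the name thresholdUnit, the 0/1 encoding, and the real-valued weights are my own choices, not part of the 1943 model's original formulation.

```cpp
// Sketch of a McCulloch-Pitts style threshold unit: at each discrete time
// step the weighted sum of binary inputs is compared against a threshold.
// Illustrative only; not taken from any historical implementation.
#include <cstddef>
#include <vector>

int thresholdUnit(const std::vector<int>& inputs,      // each 0 or 1
                  const std::vector<double>& weights,  // one weight per input
                  double threshold) {
    double sum = 0.0;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += weights[i] * inputs[i];
    return (sum >= threshold) ? 1 : 0;  // fire (1) or stay quiescent (0)
}
```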
In 1982, John Hopfield, likely inspired by the similarities between biological neural networks and the popular Ising model, devised a neural network based on the Sherrington-Kirkpatrick spin glass [6] (a generalization of the Ising model) that could learn and recognize information [3]. The introduction of the Hopfield network, along with a new method for training neural networks called backpropagation, discovered by Paul Werbos and popularized by David Rumelhart, caused renewed interest in the field of connectionism.

3.1 The Hopfield Neuron

Biological neurons, in the absolute simplest abstraction, can be described as a two-state system: +1 if excited, -1 if quiescent. The synaptic efficacy from neuron j to neuron i will be denoted by J_ij. The sum of signals arriving at the i-th neuron is then

h_i = \sum_j J_{ij}\,(S_j + 1).   (1)

3.2 Deterministic Time Evolution

Neuron i becomes excited if the input signal exceeds a threshold \theta_i at time t, and is not excited otherwise:

S_i(t + \Delta t) = \mathrm{sgn}\Big( \sum_j J_{ij}\,(S_j(t) + 1) - \theta_i \Big).   (2)

The simplest case is the one in which the threshold \theta_i equals the constant sum \sum_j J_{ij}, so that no constant term remains:

S_i(t + \Delta t) = \mathrm{sgn}\Big( \sum_j J_{ij}\,S_j(t) \Big).   (3)

A pattern of excitation is denoted by \{\xi^\mu_i\}, where i = 1, ..., N is the neuron index, \mu = 1, ..., p is the excitation pattern index, and \xi^\mu_i is an Ising variable (\pm 1). If the \mu-th pattern has the i-th neuron in the excited state, then \xi^\mu_i = 1. The \mu-th excitation pattern can be written as \{\xi^\mu_1, \xi^\mu_2, ..., \xi^\mu_N\}, and p such patterns are assumed to exist (\mu = 1, 2, ..., p). If the system is evolved and there is no further change in state, the system has reached a stable fixed point (i.e., a stable basin of attraction) such that

S_i(t) = \xi^\mu_i \;\Rightarrow\; S_i(t + \Delta t) = \xi^\mu_i.   (4)

The connectivity matrix J is constructed via the Hebb rule such that

J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi^\mu_i\,\xi^\mu_j.   (5)

Here each stored pattern \xi^\mu is one of the stable fixed-point patterns of eqn. 4. The connectivity matrix J can be substituted into eqn. 3; if we assume approximate orthogonality amongst the patterns,

\frac{1}{N} \sum_j \xi^\mu_j\,\xi^\nu_j = \delta_{\nu\mu} + O\!\Big(\frac{1}{\sqrt{N}}\Big),   (6)

then eqn. 3 can be expanded and simplified as

\mathrm{sgn}\Big( \sum_j J_{ij}\,\xi^\mu_j \Big) = \mathrm{sgn}\Big( \frac{1}{N} \sum_j \sum_\nu \xi^\nu_i\,\xi^\nu_j\,\xi^\mu_j \Big) = \mathrm{sgn}\Big( \sum_\nu \xi^\nu_i\,\delta_{\nu\mu} \Big) = \mathrm{sgn}\big( \xi^\mu_i \big).   (7)

Assuming that the patterns \xi^\mu are approximately orthogonal, random, and uncorrelated, the maximum number of patterns p that can be stored with the Hebbian rule [5] is proportional to the size of the network: p = \alpha N, with \alpha \approx 0.14 [1, 3]. If more than \alpha N patterns are stored, the attractor states break down, forming a chimera: a state in which more than one pattern merges together.

3.3 Statistical Mechanics

The behavior of such a spin system is entirely described by a Hamiltonian and a temperature. J is an interconnection matrix organized according to the Hebb rule, into which p patterns are stored. The diagonal elements of the connectivity matrix are assumed to be zero, i.e., self-connections are forbidden. The traditional approach to such a system is that all spins are assumed to be free, their dynamics defined only by the action of a local field along which they orient. The deterministic time evolution above is equivalent to the zero-temperature dynamics of the Ising model with the Hamiltonian

H = -\frac{1}{2} \sum_{i,j} J_{ij}\,S_i S_j = -\frac{1}{2} \sum_i S_i \sum_j J_{ij}\,S_j,   (8)

where h_i = \sum_j J_{ij} S_j is the local field acting on the spin S_i, which aligns the spin (neuron state) S_i with the direction of the local field at the next time step.

This section follows a fairly standard treatment [5, 2] of the derivation of the free energy from the Hamiltonian 8, with the addition of some clarifying steps. All life on Earth (and presumably elsewhere) exists because of the transduction of free energy, and knowing the form of the free energy of a system provides great insight into its inner workings. The free energy is a measure of how much useful energy a system has: it is the difference between the total energy of the system, E, and the product of its temperature and entropy, TS. We can see that the higher the temperature or the level of disorder of a system, the less useful energy the system has available to perform work. Formally, the free energy is defined as

F = E - TS.   (9)

The free energy can be calculated from the partition function,

Z = \sum_s e^{-\beta E_s},   (10)

which is the sum over all possible energy configurations weighted by the Boltzmann factor. Given the partition function, the free energy is calculated as

F = -kT \ln Z.   (11)
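As a quick consistency check (a step not spelled out above, using the standard identities E = -\partial \ln Z / \partial \beta and S = -\partial F / \partial T), eqn. 11 reproduces the thermodynamic definition of eqn. 9:

```latex
% Assumes F = -kT \ln Z and \beta = 1/(kT).
S = -\frac{\partial F}{\partial T}
  = k\ln Z + kT\,\frac{\partial \ln Z}{\partial T}
  = k\ln Z - \frac{1}{T}\,\frac{\partial \ln Z}{\partial \beta}
  = -\frac{F}{T} + \frac{E}{T}
\qquad\Longrightarrow\qquad F = E - TS.
```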

Once the free energy is obtained, any macroscopic quantity of interest can be derived directly from it. The partition function for the Hamiltonian 8 with no external field is

Z = \mathrm{Tr}\, \exp\!\Big( \frac{\beta}{2N} \sum_\mu \Big( \sum_i S_i\,\xi^\mu_i \Big)^2 \Big).   (12)

This equation may seem daunting at first: the squared term expands into potentially N + \binom{N}{2} distinct terms. It can, however, be readily handled via the Hubbard-Stratonovich transformation, which rests on a Gaussian-integral identity. In the present case it takes the form

\exp\!\Big( \frac{\beta J}{2N} S^2 \Big) = \sqrt{\frac{N\beta J}{2\pi}} \int \exp\!\Big( -\frac{N\beta J}{2} m^2 + \beta J m S \Big)\, dm,   (13)

which can be shown by completing the square:

-\frac{N\beta J}{2} m^2 + \beta J m S = -\frac{N\beta J}{2} \Big( m - \frac{S}{N} \Big)^2 + \frac{\beta J S^2}{2N}.   (14)

Using the integration variables m_\mu from 13, we can linearize the square in the exponent, so that (up to constant prefactors)

Z = \mathrm{Tr} \int \prod_{\mu=1}^{p} dm_\mu\, \exp\!\Big( -\frac{1}{2} N\beta \sum_\mu m_\mu^2 + \beta \sum_\mu \sum_i m_\mu S_i\,\xi^\mu_i \Big) = \int \prod_\mu dm_\mu\, \exp\!\Big( -\frac{1}{2} N\beta\,\mathbf{m}^2 + \sum_i \log\big( 2\cosh(\beta\,\mathbf{m}\cdot\boldsymbol{\xi}_i) \big) \Big).   (15)

In the limit of large N, the integral in 15 can be evaluated by steepest descent. The free energy per spin is thus

f = \frac{1}{2}\,\mathbf{m}^2 - \frac{T}{N} \sum_i \log\big( 2\cosh(\beta\,\mathbf{m}\cdot\boldsymbol{\xi}_i) \big).   (16)

The equation of state for this system is obtained, as usual, by minimizing the free energy 16, arriving at

\mathbf{m} = \frac{1}{N} \sum_i \boldsymbol{\xi}_i \tanh(\beta\,\mathbf{m}\cdot\boldsymbol{\xi}_i).   (17)

In the large-N limit, the sum in 17 approaches the average over infinitely many random patterns \boldsymbol{\xi}_i, and the equation of state becomes

\mathbf{m} = \langle\langle\, \boldsymbol{\xi} \tanh(\beta\,\mathbf{m}\cdot\boldsymbol{\xi}) \,\rangle\rangle,   (18)

where \langle\langle \cdot \rangle\rangle denotes the configuration average, i.e., the average over all possible patterns.
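Before moving on to the Potts-Hopfield extension, it may help to see how eqns. 3 and 5 look in code. The following is a minimal sketch written for this presentation, not the article's actual program; the function names (hebbMatrix, updateOnce) and the use of a dense double-precision matrix are my own choices.

```cpp
// Sketch: Hebbian storage (eqn. 5) and one synchronous zero-threshold update
// (eqn. 3) for patterns and states stored as +1/-1 integers.
// Illustrative only; not the article's implementation.
#include <cstddef>
#include <vector>

// Build J_ij = (1/N) * sum_mu xi^mu_i xi^mu_j, with zero diagonal.
std::vector<std::vector<double>>
hebbMatrix(const std::vector<std::vector<int>>& patterns) {
    const std::size_t N = patterns[0].size();
    std::vector<std::vector<double>> J(N, std::vector<double>(N, 0.0));
    for (const auto& xi : patterns)
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j)
                if (i != j) J[i][j] += static_cast<double>(xi[i] * xi[j]) / N;
    return J;
}

// One synchronous update: S_i <- sgn(sum_j J_ij S_j), with sgn(0) taken as +1.
void updateOnce(const std::vector<std::vector<double>>& J, std::vector<int>& S) {
    std::vector<int> next(S.size());
    for (std::size_t i = 0; i < S.size(); ++i) {
        double h = 0.0;
        for (std::size_t j = 0; j < S.size(); ++j) h += J[i][j] * S[j];
        next[i] = (h >= 0.0) ? 1 : -1;
    }
    S = next;
}
```

Iterating updateOnce until the state stops changing implements the deterministic retrieval dynamics whose fixed points are described by eqn. 4.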

4 Potts-Hopfield Model

The traditional Hopfield model does incorporate many biologically plausible features. In spirit, it is closer to how we currently understand the cortex (an assemblage of interconnected neurons) than to a symbol-processing system of the kind advocated by the AI community in the 1960s and '70s. The Hopfield model is initialized in a certain state (the desired pattern to match) and is evolved without any further input until a steady state, a stable basin of attraction, is reached. Regardless of whether the Hopfield model is evolved via synchronous, asynchronous, or Boltzmann updates, it functions in a similar manner, much like an ancient calculating device such as a Babbage difference engine: the device is given some input, the crank is turned, and one waits for the output to converge.

Biological systems do not function as such isolated calculating devices; rather, they are constantly fed input. Thus, I believe that a more biologically plausible model is one where the pattern to be matched is always present, motivating and guiding the dynamics of the pattern matching system. Instead of the pattern being given to the pattern matching system as an initial condition, it is presented as a boundary condition, one that is always present during the dynamic progression of the system.

Instead of treating the Hopfield network as a system that is started in a certain state and evolves deterministically towards one attractor, what if we treat the Hopfield system as a gas and treat the patterns as a surface onto which the Hopfield gas can adsorb? This is a fairly common system to analyze: Irving Langmuir first analyzed the adsorption of gases onto surfaces with a finite number of binding sites, and the derivation of the Langmuir equation is a common exercise in introductory statistical mechanics courses. In essence, what results is a system in which each Hopfield particle can be either adsorbed or free. If the particle is adsorbed onto the surface, there is an additional spin-spin interaction: if the particle spin matches the surface spin, the configuration is energetically favorable; if it does not match, it is unfavorable. Thus we move from a two-state system to a four-state system. Such a system can be described by the following Hamiltonian:

H = -\frac{1}{2} \sum_i S_i \Big( \sum_j J_{ij}\,S_j + \epsilon\,\alpha_i A_i \Big),   (19)

where the spin variables S_i, S_j are the Hopfield spin states, \epsilon is the binding energy of the surface, and \alpha_i is a Boolean variable indicating whether or not the i-th Hopfield particle is bound to the i-th pattern adsorption site A_i. It is not immediately apparent how one would analytically calculate the free energy and equation of state from a Hamiltonian of the form 19; it is not as amenable to the linearization method used for 8.

5 Implementation

The time evolution algorithm of the Hopfield and Potts-Hopfield networks is as follows. The initial spin directions (neuron states) are oriented according to the components of the input vector. The local field h_i = -\partial E / \partial S_i, which is produced by all the remaining connected spins of the network and acts on the i-th spin at time t, is calculated via eqn. 1. If the spin direction is parallel with the direction of the local field, its position is energetically favorable and remains unchanged. If, however, the spin is anti-parallel, its position is energetically unfavorable, and the local field acts as a torque that flips the spin state, S_i(t+1) = -S_i(t), with a probability proportional to the Boltzmann factor. At zero temperature, the total energy is reduced any time there is a spin flip. When there are no further spin flips, the network is in a stable state. The time evolution of the Potts-Hopfield network is nearly identical, except that there are a total of four rather than two possible states.

Both Hopfield and Potts-Hopfield networks are extremely computationally intensive because they have all-to-all connectivity; that is, an energy calculation at a single node requires querying the state of all other nodes. In effect, this is O(N) operations per node update, or O(N^2) per sweep of the network. Thus a highly efficient programming language is required. All of the computational routines were written in C++ and the user interface was written in Objective-C. Sadly, there is no extant programming language that is well suited for both numerics and user interface development.

Each pattern is stored on disk as a .png file, which is read and converted to a matrix. Each time a pattern is learned, it is added to the connectivity matrix J via the Hebbian rule, eqn. 5. For the Hopfield network, the matching pattern is set as the initial state of the network; for the Potts-Hopfield network, it is stored as the adsorbing surface.

The evolution of the simulation is performed via the Metropolis algorithm [4], probably the most widely used algorithm in Ising/Potts simulations. The Metropolis algorithm is designed to sample the most likely states for configurations obeying the Boltzmann distribution; essentially, any system in or near thermal equilibrium has configurations which obey the Boltzmann distribution. The algorithm works by considering two configurations A and B, each of which occurs with probability proportional to the Boltzmann factor. The ratio of probabilities for two energy configurations E_A and E_B is

P(A) / P(B) = e^{-E_A/T} / e^{-E_B/T} = e^{-(E_A - E_B)/T}.   (20)

In order to find the most likely energy configuration:

1. Starting from a configuration A with known energy E_A, make a change in the configuration to obtain a new configuration B.
2. Compute E_B.
3. If E_B < E_A, accept the new configuration automatically, since it has lower energy (a desirable thing, according to the Boltzmann factor).
4. If E_B > E_A, accept the new (higher-energy) configuration with probability exp(-(E_B - E_A)/T).

This means that when the temperature is high, we do not mind taking uphill steps, but as the temperature is lowered, we are forced to settle into the lowest configuration we can find. The Metropolis algorithm is most commonly used to simulate the Ising model, but it is easily extended to the Potts model, which is a generalization of the Ising system (the q = 2 Potts model is equivalent to the Ising model). To use the Metropolis algorithm on a q-state Potts model, the energy E_A is that of the current state; to determine E_B, simply pick at random any state other than the current one for the B configuration.
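To make the update concrete, here is a minimal sketch of one Metropolis sweep for the Hopfield Hamiltonian of eqn. 8. It is an illustration written for this description rather than the program's actual source; the function and variable names are invented, units are chosen so that the Boltzmann constant is 1, and a strictly positive temperature is assumed for the exponential.

```cpp
// Sketch: one asynchronous Metropolis sweep over a Hopfield network with
// H = -1/2 * sum_ij J_ij S_i S_j (symmetric J, zero diagonal), so flipping
// spin i changes the energy by dE = 2 * S_i * h_i with h_i = sum_j J_ij S_j.
// Illustrative only; names and types are not from the article's program.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

void metropolisSweep(const std::vector<std::vector<double>>& J,
                     std::vector<int>& S,   // spins, each +1 or -1
                     double T,              // temperature in units where k_B = 1
                     std::mt19937& rng) {
    const std::size_t N = S.size();
    std::uniform_int_distribution<std::size_t> pickSite(0, N - 1);
    std::uniform_real_distribution<double> uniform01(0.0, 1.0);

    for (std::size_t attempt = 0; attempt < N; ++attempt) {
        const std::size_t i = pickSite(rng);

        // Local field at site i: O(N) work, hence O(N^2) per full sweep.
        double h = 0.0;
        for (std::size_t j = 0; j < N; ++j) h += J[i][j] * S[j];

        const double dE = 2.0 * S[i] * h;  // energy change for S_i -> -S_i

        // Accept downhill moves always; uphill moves with Boltzmann probability.
        if (dE <= 0.0 || (T > 0.0 && uniform01(rng) < std::exp(-dE / T)))
            S[i] = -S[i];
    }
}
```

For the Potts-Hopfield case the same accept/reject step applies, except that the proposed move picks one of the other three of the four states at random and the energy difference also includes the binding term of eqn. 19.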

6 Results

For the purposes of this article, three tests were run on each network type. The time evolution of the state and the time evolution of the total network energy are displayed. The size of the network was held at 10,000 nodes, which allows a bit-mapped image to be used as a pattern. An image of the state was captured every 5000 Monte Carlo (MC) steps, and the energy was recorded every 2500 MC steps.

6.1 Hopfield Network

The time evolution of the Hopfield network is shown in Figures 1 and 3. In these figures, blue pixels represent spin down (the lower state) and yellow represents spin up. The Hopfield network tests were run for 100,000 Monte Carlo steps. When the system has two stored patterns A and B and is in either one of these states, the two energies are identical; the total energy in the chalky A state is higher than this pure-state value. When the network was held at 300K, started from the chalky A state, run for approximately 150,000 Monte Carlo steps, and then cooled to 0K, it settled into either the A or the B state with equal probability. The implication is that when the system is run with some level of disorder, the initial state configuration is eventually lost. As we will see, the Potts-Hopfield network does not exhibit this behavior.

For the first two tests, two patterns A and B were stored in the networks, and the matching pattern was a chalky A. The first test was run with the system held at 300K, and the second test with the system held at 0K. One should not place too much significance on the physical meaning of these temperatures: the value of the Boltzmann constant was chosen so that a temperature of 300K (room temperature) corresponds roughly to a system that exhibits some thermal fluctuations (i.e., is partially disordered), while a value of 1000K corresponds to a fully disordered system. The third test was run at 0K with the patterns A, B, and C stored in the network.

When the system has three stored patterns A, B, and C and the system is in one of these pure states, the total energy is -1.648, -1.695, or -1.708, respectively. When the system has these three stored patterns and is in the chalky A state, the total energy is higher: when the system is in a pure state, the total energy is lower than when it is set to a nonmatching pattern. Regardless of initial conditions, when the system has three stored patterns it will always eventually end up at a chimera such as the last frame in Figure 5. This chimera state corresponds to a hypothesized global minimum of the energy. Even if the system is initialized to a pure state and evolved at 0K, it will tend towards the chimera state; this implies that the pure states are not local minima. The energy-over-time profile of all tests for the Hopfield network was identical: a monotonic decrease of energy with no evidence of local minima.

Figure 1. Time evolution of a Hopfield network with two stored patterns A and B, held at 300K, started from the chalky A state.

Figure 2. Total energy of the Hopfield network displayed in Figure 1.

Figure 3. Time evolution of a Hopfield network with two stored patterns A and B, held at 0K, started from the chalky A state.

Figure 4. Total energy of the Hopfield network displayed in Figure 3.

Figure 5. Time evolution of a Hopfield network with three stored patterns A, B, and C, held at 0K, started from the chalky A state.

Figure 6. Total energy of the Hopfield network displayed in Figure 5.

6.2 Potts-Hopfield Network

The time evolution of the Potts-Hopfield network is shown in Figures 7 and 9. In these figures, black pixels represent bound spin down, blue pixels represent unbound spin down (identical to Hopfield spin down), yellow represents unbound spin up (identical to Hopfield spin up), and red represents bound spin up (the highest energy level). The Potts-Hopfield network tests were run for 150,000 Monte Carlo steps; this system has a longer convergence time than the previous one simply because there are more possible states for the system to explore. All tests were performed with a binding energy of \epsilon = 100. All energies measured when the system was set to a pure state are identical to the previous values, because in a pure state there are no additional binding energies. When the network was held at 300K, started from the chalky A state, run for approximately 150,000 Monte Carlo steps, and then cooled to 0K, it always converged to the correct A state.

Figure 7. Time evolution of a Potts-Hopfield network with two stored patterns A and B, held at 300K, started from the chalky A state.

When the system has three stored patterns A, B, and C and the system is in one of these pure states, the energies are again identical to those of the Hopfield network. Again, regardless of initial conditions, when the system has three stored patterns it will always eventually end up at a chimera such as the last frame in Figure 11. The chimera behavior is nearly identical to that of the Hopfield network. Thus we can conclude that the addition of the binding energy, an omnipresent input pattern that continuously influences the evolution of the system, has no effect on the total information storage capacity of a Hopfield-like network.

Figure 8. Total energy of the Potts-Hopfield network displayed in Figure 7.

Figure 9. Time evolution of a Potts-Hopfield network with two stored patterns A and B, held at 0K, started from the chalky A state.

Figure 10. Total energy of the Potts-Hopfield network displayed in Figure 9.

Figure 11. Time evolution of a Potts-Hopfield network with three stored patterns A, B, and C, held at 0K, started from the chalky A state.

Figure 12. Total energy of the Potts-Hopfield network displayed in Figure 11.

7 Conclusions

At this point it is still unclear why the storage capacity of the Hopfield network as presently implemented is so much lower than the generally accepted theoretical value. It is possible that the Hopfield network was not implemented properly. The theoretical value was previously calculated for both synchronous and asynchronous updates, so the fact that we used asynchronous updates is an unlikely cause. Because of the inherent computational inefficiencies, neither the Hopfield nor the Potts-Hopfield network appears to have many practical prospects in either pattern matching or distributed information storage. The Potts-Hopfield extension did allow patterns to be matched after being randomized, but it did not increase the accuracy or the information storage capacity of the Hopfield network. Despite the drawbacks of these networks, I firmly believe that biologically inspired designs can still be incredibly useful for distributed, fault-tolerant information storage and retrieval.

References

[1] D. Amit and H. Gutfreund. Saturation level of the Hopfield model for neural network. EPL (Europhysics Letters).

[2] Viktor Dotsenko. An Introduction to the Theory of Spin Glasses and Neural Networks (World Scientific Lecture Notes in Physics). World Scientific Publishing Company.

[3] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79(8):2554, 1982.

[4] D. P. Landau and K. Binder. A Guide to Monte Carlo Simulations in Statistical Physics, 2nd edition. Cambridge University Press.

[5] H. Nishimori. Statistical Physics of Spin Glasses and Information Processing: An Introduction. Oxford University Press, USA.

[6] D. Sherrington and S. Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 1975.
