Convolutional Associative Memory: FIR Filter Model of Synapse
Rama Murthy Garimella 1, Sai Dileep Munugoti 2, Anil Rayala 1

1 International Institute of Information Technology, Hyderabad, India. rammurthy@iiit.ac.in, anil.rayala@students.iiit.ac.in
2 Indian Institute of Technology, Guwahati, India. d.munugoti@iitg.ernet.in

Abstract. In this research paper, a novel Convolutional Associative Memory is proposed. In the proposed model, the synapse of each neuron is modeled as a linear FIR filter. The dynamics of the Convolutional Associative Memory are discussed, a new method called sub-sampling is introduced, and a convergence theorem is proved, with an example depicting convergence. Special cases of the proposed Convolutional Associative Memory are discussed, a new Vector Hopfield Associative Memory is proposed, and some potential applications of the proposed model are outlined.

Keywords: Convolutional Associative Memory, FIR filter, Sub-sampling matrix, Hankel matrix, Vector Hopfield Network.

1 Introduction

Artificial Neural Networks (ANNs) such as the Multi-Layer Perceptron (MLP) and the Hopfield Neural Network (HNN) are based on synaptic weights that are scalars. Researchers have also conceived of ANNs based on a dynamic synapse, i.e. a synapse modeled as a linear filter, e.g. a Finite Impulse Response (FIR) filter [1,2,3]. One motivation for such a model of the synapse is the ability to filter noise corrupting the signals. In recent years, ANNs based on the deep learning paradigm have provided excellent results in many applications, with the ability to learn features from the data automatically. Specifically, Convolutional Neural Networks (CNNs) pioneered handwriting recognition systems and other related Artificial Intelligence (AI) systems.

One of the main goals of research in AI is to build a system with multi-modal intelligence, i.e. a system combining and reasoning with different inputs (e.g. vision, sound, smell and touch), which is similar to how the human brain works. It is biologically more appealing for such a system to have a memory similar to human memory. Human memory is based on associations with the memories it contains: for example, a part of a well-known tune is enough to bring the whole song back to mind. Associative memories can be used for tasks such as:
- completing information if some part is missing,
- denoising if a noisy input is given,
- guessing information: if a pattern is presented, the most similar stored pattern is determined, etc.

The Hopfield Neural Network is widely used as an associative memory. In this research paper, we propose and study a novel associative memory based on an FIR filter model of the synapse. Other researchers have proposed certain types of Convolutional Associative Memories [5], but the sub-sampling approach in our method is very different from any other contribution. We expect our Convolutional Associative Memory to find many applications.

This research paper is organized as follows. In Section 2, the known literature on the Hopfield network is reviewed. In Section 3, the dynamics of the proposed novel Convolutional Associative Memory are studied. In Section 4, special cases of the proposed model are discussed. In Section 5, examples of the proposed Convolutional Associative Memory are given. In Section 6, some applications are discussed, and Section 7 reports an implementation for speech emotion classification. The paper concludes in Section 8.

2 Review of Literature

In 1982, John Hopfield proposed a form of neural network which can act as an associative memory, called the Hopfield Neural Network. A Hopfield neural network is a form of recurrent neural network: a non-linear dynamical system based on a weighted, undirected graph. In a Hopfield network, each node of the graph represents an artificial neuron which assumes a binary value (+1 or -1), and the edge weights correspond to synaptic weights. The order of the network corresponds to the number of neurons. If all the synaptic weights of the network are collected in a synaptic weight matrix M, a Hopfield neural network can be represented by M and a threshold vector T. Since each neuron has a state of +1 or -1, the state space of an N-neuron Hopfield network is the N-dimensional unit hypercube. Let the state of the non-linear dynamical system be represented by the N x 1 vector S(t), and let $S_i(t) \in \{+1, -1\}$ represent the state of the i-th neuron at time instant t. The state of the i-th neuron is updated by

$$S_i(t+1) = \text{sign}\Big\{ \sum_{j=1}^{N} M_{ij} S_j(t) - T_i \Big\}$$

Depending on the number of neurons whose state is updated at a given time instant, the network operation can be classified. The two main types of network operation are:

1) Serial mode updating: at any time instant, the state of only one node of the network is updated.
2) Fully parallel mode updating: at every time instant, the states of all nodes are updated.
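As a concrete illustration of this update rule, the following is a minimal sketch in Python/NumPy (ours, not from the paper); the weight matrix M, threshold vector T and the tie-breaking convention for a zero activation are assumptions:

```python
import numpy as np

def hopfield_serial(S, M, T, max_sweeps=100):
    """Serial-mode Hopfield dynamics: S_i <- sign(sum_j M_ij S_j - T_i),
    one neuron at a time, until no state changes (a stable state)."""
    S = S.astype(float).copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(S)):
            a = M[i] @ S - T[i]
            new = 1.0 if a >= 0 else -1.0  # ties resolved to +1 (a convention)
            if new != S[i]:
                S[i], changed = new, True
        if not changed:                    # S(t+1) == S(t): converged
            break
    return S
```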
The network is said to be stable (converged) if and only if

$$S(t+1) = S(t) = \text{sign}\{ M S(t) - T \}$$

Thus, once the network has reached a stable state, it remains in that state forever, irrespective of the mode of operation. The following convergence theorem summarizes the dynamics of the Hopfield network.

Theorem 1. Let the pair N = (M, T) specify a Hopfield neural network. Then the following hold true:

1) If the network is operating in serial mode and the diagonal elements of M are non-negative, the network always converges to a stable state (i.e. there are no cycles in the state space).
2) If the network is operating in fully parallel mode, the network always converges to a stable state or to a cycle of length 2 (i.e. the cycles in the state space are of length at most 2).

The proof of the above theorem can be found in [4].

3 Dynamics of Convolutional Associative Memory

Before we discuss the dynamics of the Convolutional Associative Memory, some important innovative concepts are introduced.

3.1 State as a Sequence

In the Convolutional Associative Memory, each neuron has a state which is a sequence of binary values {+1, -1} of length L, rather than a single value (+1 or -1) as in the Hopfield network. All neurons are interconnected and it is assumed that there are no self-connections. For practical considerations the sequence is taken to start at time n = 0.

Notation. S represents a matrix of size L x N, where N is the number of neurons and S_i (the i-th column of S) is the state of the i-th neuron.

3.2 Synapse as a Linear FIR Filter: Sub-Sampling

Based on biological motivation, the first author originally proposed the idea of modeling the synapse as a linear filter in [1]. The synapse of each connection is taken to be a linear discrete-time Finite Impulse Response filter of length F, rather than a single constant as in the Hopfield network. From practical and theoretical considerations, this type of model is more realistic. It is assumed that the sequence starts at time n = 0. Since the output of each synapse is the convolution of a neuron's state sequence with a filter, the output of the synapse is a sequence of length L+F-1; since this sequence is used to update the state of a neuron, it must be compressed to a sequence of length L. This compression is called sub-sampling. The compression can be done in infinitely many ways; it can be observed that the compression is a linear operator and can be realized by a matrix of size L x (L+F-1).
Let K be the sub-sampling matrix which performs this compression. In digital signal processing it is well known that windows such as the Hamming, Hanning and Blackman windows are useful; we invoke such results in the design of the synaptic FIR filter.

3.3 Convolution as Matrix Multiplication

In the Convolutional Associative Memory, the convolution of the synapse filter sequence (between neurons i and j) with a state sequence can be realized by multiplying a convolution matrix $H_{i,j}$ of size (L+F-1) x L with the state sequence of length L. The matrix $H_{i,j}$ is a Toeplitz matrix: its first column is a vector of length L+F-1 in which only the first F elements are non-zero, and its first row is a vector of length L with only the first element non-zero. For L = 4, F = 3, the $H_{i,j}$ matrix of a synapse is

$$H_{i,j} = \begin{bmatrix} h(0) & 0 & 0 & 0 \\ h(1) & h(0) & 0 & 0 \\ h(2) & h(1) & h(0) & 0 \\ 0 & h(2) & h(1) & h(0) \\ 0 & 0 & h(2) & h(1) \\ 0 & 0 & 0 & h(2) \end{bmatrix}$$

It can be seen that multiplying $H_{i,j}$ with a vector of length L is the same as convolving a sequence of length L (the state sequence of a neuron) with the impulse response of length F, {h(0), h(1), h(2)}.

Notation. Let H denote an N x N cell with each element a matrix; $H_{i,j}$ is the synaptic filter convolution matrix between neurons i and j. It is assumed that $H_{i,j} = H_{j,i}$ for all i and j, and since there are no self-connections (Section 3.1), $H_{i,i}$ is a null matrix for all i.

3.4 Serial Mode Updating

In serial mode updating, the state sequence of the i-th neuron is updated by

$$S_i(t+1) = \text{sign}\Big( K \sum_j H_{i,j} S_j(t) \Big) \quad (1)$$

where $S_i(t+1)$ is the updated state sequence (of length L) of the i-th neuron, $S_j(t)$ is the state sequence (of length L) of the j-th neuron, and $H_{i,j}$ is the convolution matrix of the synapse between neurons i and j. (Throughout, "a matrix of size M x N" means M rows and N columns.)
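As a sketch of this construction (Python/NumPy; the helper names and data layout are ours), the following builds $H_{i,j}$ from a filter h, checks it against direct convolution, and applies the serial update (1):

```python
import numpy as np

def conv_matrix(h, L):
    """(L+F-1) x L Toeplitz matrix with H @ s == np.convolve(h, s)."""
    F = len(h)
    H = np.zeros((L + F - 1, L))
    for j in range(L):
        H[j:j + F, j] = h               # column j is h shifted down by j
    return H

def serial_update(S, Hcell, K, i):
    """Update (1): S_i <- sign(K @ sum_{j != i} H_ij @ S_j). S is L x N;
    Hcell[i][j] holds the convolution matrix H_ij; K is L x (L+F-1)."""
    L, N = S.shape
    total = sum(Hcell[i][j] @ S[:, j] for j in range(N) if j != i)
    return np.where(K @ total >= 0, 1.0, -1.0)   # zeros pushed to +1

# Sanity check: matrix multiplication equals convolution (L = 4, F = 3).
h = np.array([1.0, 2.0, 3.0])
s = np.array([1.0, -1.0, 1.0, 1.0])
assert np.allclose(conv_matrix(h, 4) @ s, np.convolve(h, s))
```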
3.5 Energy of the Network

The energy function associated with the Convolutional Associative Memory is defined as

$$E = - \sum_{i,j} S_i(t)^T K H_{i,j} S_j(t) \quad (2)$$

where $S_i(t)^T$ denotes the transpose of $S_i(t)$. It can be observed that for an N-neuron network and given $H_{i,j}$, this energy function is bounded below.

3.6 Proof of Convergence of the Energy in Serial Mode

If the state of the i-th neuron changes from $S_i(t)$ to $S_i(t+1)$, the change in energy is

$$E(S_i(t+1)) - E(S_i(t)) = -\Big\{ \sum_j (S_i(t+1) - S_i(t))^T K H_{i,j} S_j(t) + \sum_j S_j(t)^T K H_{j,i} (S_i(t+1) - S_i(t)) \Big\}$$

Note. For the energy of the network to converge, the change in energy must be negative or zero at all time instants.

Since $S_i(t+1) = \text{sign}\{ \sum_j K H_{i,j} S_j(t) \}$, the vectors $S_i(t+1) - S_i(t)$ and $\sum_j K H_{i,j} S_j(t)$ agree in sign element-wise, so

$$\sum_j (S_i(t+1) - S_i(t))^T K H_{i,j} S_j(t) \geq 0$$

Hence the first term's contribution to the change in energy is always negative or zero. If the second term equals the first term, then the total change in energy of the network is always negative or zero. If $K H_{j,i}$ is a symmetric matrix, then $\sum_j S_j(t)^T K H_{j,i} (S_i(t+1) - S_i(t))$ and $\sum_j (S_i(t+1) - S_i(t))^T K H_{i,j} S_j(t)$ are equal, which can easily be seen by taking the transpose of either term and using the fact that $H_{i,j} = H_{j,i}$ (Section 3.3).

Therefore $E(S_i(t+1)) - E(S_i(t))$ is always negative or zero if $K H_{j,i}$ is a symmetric matrix. Since the energy is bounded below and either decreases or remains constant, the energy of the system converges. Finally, since every $H_{j,i}$ has the form given in Section 3.3, if K is a Hankel matrix then $K H_{j,i}$ is a symmetric matrix.
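A direct transcription of (2) as a sketch, under the same assumed data layout as the earlier sketch (S is L x N, Hcell[i][j] holds $H_{i,j}$, self-connections absent):

```python
import numpy as np

def energy(S, Hcell, K):
    """Energy (2): E = -sum_{i,j} S_i^T K H_ij S_j, skipping self-connections."""
    L, N = S.shape
    E = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:
                E -= S[:, i] @ (K @ (Hcell[i][j] @ S[:, j]))
    return E
```

Monitoring this quantity across serial updates should produce a non-increasing sequence whenever $K H_{i,j}$ is symmetric, which is exactly the content of the proof above.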
3.7 Proposed Form of K

K must be a Hankel matrix. For L = 4, F = 3:

$$K = \begin{bmatrix} b_1 & b_2 & b_3 & b_4 & b_5 & b_6 \\ b_2 & b_3 & b_4 & b_5 & b_6 & b_7 \\ b_3 & b_4 & b_5 & b_6 & b_7 & b_8 \\ b_4 & b_5 & b_6 & b_7 & b_8 & b_9 \end{bmatrix}$$

The above form is one of many forms of sub-sampling for which the network's energy converges; it should not be mistaken for the only form.

3.8 Convergence of the Network to a Stable State

If the energy of the system has converged, there are two possible cases:

1) $S_i(t+1) - S_i(t) = 0$ and the change in energy is zero.
2) Not all elements of $S_i(t+1) - S_i(t)$ are zero, but the corresponding elements of $\sum_j K H_{i,j} S_j(t)$ are zero, making the change in energy zero. Such a change is only possible when one or more elements of $S_i$ change from -1 to +1.

Hence, once the energy of the network has converged, it is clear from the foregoing facts that the network reaches a stable state after at most L·N² time instants. It is well known that the convergence proof in parallel mode can be reduced to the serial-mode case [6]. In view of the above proof, we have the following convergence theorem.

Theorem 2. Let the pair R = (S, H) represent the Convolutional Associative Memory, with S in the form discussed in Section 3.1 and H in the form discussed in Section 3.3, and let K be a Hankel matrix. Then the following hold true:

1) If the network is operating in serial mode, the network always converges to a stable state (i.e. there are no cycles in the state space).
2) If the network is operating in fully parallel mode, the network always converges to a stable state or to a cycle of length at most 2.

The proof for parallel mode is obtained by converting parallel mode into serial mode as proposed in [6].
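The symmetry claim behind Theorem 2 (a Hankel K makes $K H_{i,j}$ symmetric) is easy to check numerically; a sketch, where the $b$ values and filter coefficients are arbitrary test data of our choosing:

```python
import numpy as np
from scipy.linalg import hankel

def conv_matrix(h, L):
    """Same Toeplitz construction as in the earlier sketch."""
    F = len(h)
    H = np.zeros((L + F - 1, L))
    for j in range(L):
        H[j:j + F, j] = h
    return H

L, F = 4, 3
b = np.arange(1.0, 2 * L + F - 1)     # b1..b9, arbitrary test values
K = hankel(b[:L], b[L - 1:])          # L x (L+F-1) Hankel: K[i, j] = b[i + j]
H = conv_matrix(np.array([2.0, -1.0, 3.0]), L)
assert np.allclose(K @ H, (K @ H).T)  # Hankel K => K @ H is symmetric
```

The check succeeds because $(KH)_{ij} = \sum_m b_{i+j+m}\, h(m)$ depends only on i+j.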
4 Special Cases

4.1 K as H^T

If the sub-sampling matrix K associated with every synapse is the transpose of the corresponding convolution matrix, i.e. $K = (H_{i,j})^T$, then $K H_{i,j}$ is a symmetric matrix, which leads to convergence of the energy and hence of the state matrix. Some interesting results can be observed in this case.

Gram matrix. The Gram matrix of a matrix A is $A^T A$.

Note. A Gram matrix is always symmetric and positive semi-definite; this can be shown by a simple proof.

The energy of the system as given in Section 3.5 is $E = -\sum_{i,j} S_i(t)^T K H_{i,j} S_j(t)$. If $K = (H_{i,j})^T$, then $E = -\sum_{i,j} S_i(t)^T G_{i,j} S_j(t)$, where $G_{i,j}$ is the Gram matrix of $H_{i,j}$. Since $G_{i,j}$ is positive semi-definite, if $S_i(t) = S_j(t)$ for all i, j (i.e. all neurons have the same state), the energy obtained is minimal. Hence a state matrix with all columns equal behaves as a local energy minimum.

Result. If we take $K = (H_{i,j})^T$, there is a very good chance that the network converges to a state in which all neurons have the same state.

4.2 Synapse as a Linear Time-Variant System

In the Convolutional Associative Memory proposed above, the synapse is modeled as a linear time-invariant system. If we instead take the synapse to be a non-causal linear time-variant system, the matrix $H_{i,j}$ is no longer Toeplitz but assumes an arbitrary matrix form, since the system is still linear. For such a system to converge, $K H_{i,j}$ should be a symmetric matrix. Since for any given matrix $H_{i,j}$ we can always find a K such that $K H_{i,j}$ is symmetric, the network converges even when the synapse is modeled as a linear time-variant system.

4.3 Vector Hopfield Network

From the above discussion it is clear that if $K H_{i,j}$ is symmetric and $H_{i,j} = H_{j,i}$, the network converges both when $H_{i,j}$ is a Toeplitz matrix and when $H_{i,j}$ is an arbitrary matrix. So taking a symmetric matrix $W_{i,j} = K H_{i,j}$ for each synapse is sufficient to represent a Vector Hopfield Network. The Vector Hopfield Network should not be mistaken for a normal Hopfield network operating in parallel mode that is merely arranged in layers. The difference is that in a Vector Hopfield Network, the state of any element of the i-th neuron is not influenced by the states of the other elements of the i-th neuron, whereas in a Hopfield network operating in parallel mode, the state of every neuron is influenced by all other neurons, including those being updated in parallel.

For implementation purposes it is advisable to use a Vector Hopfield Network in place of a normal Hopfield network, since implementing a Hopfield network on real data requires converting the data to binary form (+1s and -1s). If a value of 126 is obtained for a feature, storing it in a normal Hopfield network requires 7 neurons and the connections between them, whereas in a Vector Hopfield Network it can be stored in a single vector, with no connections needed among its elements. Hence, to store the same patterns as a Hopfield network, a Vector Hopfield Network requires less memory.
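A sketch of the Vector Hopfield update under these definitions (Python/NumPy; W[i][j] is a symmetric matrix playing the role of $K H_{i,j}$, and the data layout is our assumption):

```python
import numpy as np

def vector_hopfield_update(S, W, i):
    """Serial update of neuron i in a Vector Hopfield Network.

    S is L x N (column i is the vector state of neuron i); W[i][j] is a
    symmetric L x L matrix. Note that element k of the new S_i depends only
    on the other neurons' states, never on other elements of S_i itself.
    """
    L, N = S.shape
    total = sum(W[i][j] @ S[:, j] for j in range(N) if j != i)
    return np.where(total >= 0, 1.0, -1.0)
```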
4.4 Information Storage Algorithm for the Vector Hopfield Network

If S is the state matrix to be stored, update the weight matrices W of the Vector Hopfield Network by

$$W_{i,j} = W_{i,j} + f(S_i S_j^T)$$

where $S_i$ and $S_j$ are the i-th and j-th columns of S, and f is defined as $f(A) = \eta (A + A^T)$ for an arbitrary matrix A, with $\eta$ the learning rate. If every synapse in the network has W in the above form, then the required state matrix S acts as a local minimum and can therefore be retrieved. The proof is similar to that for the Hopfield network.
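A sketch of this storage rule (assuming, as reconstructed above, $f(A) = \eta(A + A^T)$ with learning rate $\eta$; the function name and default value are ours):

```python
import numpy as np

def store_pattern(W, S, eta=0.1):
    """Store state matrix S (L x N): W_ij += eta * (A + A^T), A = S_i S_j^T.

    The symmetrization A + A^T keeps every W[i][j] symmetric, which is the
    condition the convergence proof requires. eta is the learning rate.
    """
    L, N = S.shape
    for i in range(N):
        for j in range(N):
            if i != j:
                A = np.outer(S[:, i], S[:, j])
                W[i][j] = W[i][j] + eta * (A + A.T)
    return W
```

Note that the same rule applied to the (j, i) pair produces $A^T$ in place of A, so $W_{i,j} = W_{j,i}$ is preserved automatically.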
5 Examples

5.1 Synapse as an LTI System

For such a system, H is in the form given in Section 3.3. Consider a network with number of neurons N = 4, pattern length L = 3 and filter length F = 2, an initial state matrix in which each column denotes the state of a neuron, and a sub-sampling matrix K in the required Hankel form. Let H{i,j} denote the filter coefficients of the synapse between the i-th and j-th neurons, and take

H{1,2} = H{2,1} = {4, 1}
H{1,3} = H{3,1} = {-2, 0}
H{1,4} = H{4,1} = {1, 3}
H{2,3} = H{3,2} = {-5, 0}
H{2,4} = H{4,2} = {-1, 1}
H{3,4} = H{4,3} = {0, -3}

Starting from the initial energy E_in, we update the neurons serially. Updating the first neuron changes the state matrix and the updated energy becomes E_upd = 0. Updating the second and then the third neuron changes the state matrix further, and after updating the fourth neuron the energy is E_upd = -432. Updating the states of all the neurons again does not alter them, which implies the system has converged. Hence the example we took converged.
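This example can be reproduced in code; the sketch below uses the filter coefficients above, but since the paper's initial state matrix and K values are not reproduced here, a random ±1 initial state and an arbitrary Hankel K are assumed (Theorem 2 guarantees convergence regardless of those choices):

```python
import numpy as np
from scipy.linalg import hankel

def conv_matrix(h, L):
    F = len(h)
    H = np.zeros((L + F - 1, L))
    for j in range(L):
        H[j:j + F, j] = h
    return H

N, L, F = 4, 3, 2
coeffs = {(0, 1): [4, 1], (0, 2): [-2, 0], (0, 3): [1, 3],
          (1, 2): [-5, 0], (1, 3): [-1, 1], (2, 3): [0, -3]}
H = [[None] * N for _ in range(N)]
for (i, j), h in coeffs.items():
    H[i][j] = H[j][i] = conv_matrix(np.array(h, float), L)

b = np.arange(1.0, 2 * L + F - 1)          # assumed Hankel entries b1..b6
K = hankel(b[:L], b[L - 1:])               # 3 x 4 sub-sampling matrix

rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(L, N))   # assumed initial state matrix
for sweep in range(50):                    # Theorem 2 bounds this at L*N^2 steps
    changed = False
    for i in range(N):
        total = sum(H[i][j] @ S[:, j] for j in range(N) if j != i)
        new = np.where(K @ total >= 0, 1.0, -1.0)
        changed |= not np.array_equal(new, S[:, i])
        S[:, i] = new
    if not changed:
        print("stable after sweep", sweep)
        break
```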
5.2 Synapse as a Binary Filter

Now consider an example in which the filter coefficients take only the binary values +1 and -1. Consider N = 4, L = 3, F = 3, an initial state matrix in which the i-th column denotes the state of the i-th neuron, and

H(1,2) = H(2,1) = {-1, -1, -1}
H(1,3) = H(3,1) = {-1, 1, 1}
H(1,4) = H(4,1) = {1, -1, 1}
H(2,3) = H(3,2) = {1, -1, 1}
H(2,4) = H(4,2) = {-1, -1, -1}
H(3,4) = H(4,3) = {-1, -1, -1}

with a sub-sampling matrix K which is a Hankel matrix. If we update the network in serial mode, the state matrix changes after the first loop; after the second loop the state matrix is unchanged, which indicates convergence.
5.3 Synapse as a Non-Causal LTV System

Now consider a case in which the synapse between the neurons acts as an LTV (linear time-variant) filter instead of an LTI filter. Consider N = 4, L = 4, and each $H_{i,j}$ an arbitrary matrix of size 7 x 4, with H(1,2) = H(2,1), H(1,3) = H(3,1), H(1,4) = H(4,1), H(2,3) = H(3,2), H(2,4) = H(4,2) and H(3,4) = H(4,3). Take the sub-sampling matrix associated with the synapse between the i-th and j-th neurons to be $K_{i,j} = (H(i,j))^T$, so that the product $K_{i,j} H(i,j)$ is a symmetric matrix and convergence is guaranteed. Starting from an initial state matrix and updating the first, second, third and fourth neurons one after another, the state matrix changes after the first loop; after the second loop the state matrix is unchanged, which indicates convergence.
6 Applications: Multi-Modal Intelligence

1) One of the major challenges in Artificial Intelligence research has been multi-modal intelligence, i.e. a system combining and reasoning with different inputs, e.g. vision, sound, smell and touch. This is akin to how humans perceive the world around us: our sight works in conjunction with our senses of hearing, touch, etc. AI technologies that leverage multi-modal data are likely to be more accurate. The problem of multi-modal intelligence can be addressed by using a Convolutional Associative Memory: since each neuron has a state which is a vector of length L, different elements of the vector can correspond to different inputs, e.g. the last element of the vector at each neuron can correspond to speech, the one above it to visual data, and so on.

Examples:
I) A video camera might more accurately "recognize" a human if it also recognizes human voices and touch.
II) Many security systems take only one input from a user to decide whether the user is authorized to enter. Using the Convolutional Associative Memory proposed in this paper, many inputs can be taken from the user (voice, fingerprint, visual, etc.) to build a better security system.

2) The proposed Convolutional Associative Memory can be used in robust speech recognition: since each synapse is modeled as a linear FIR filter, such a system is more likely to filter out noise and produce more accurate results than existing systems.

7 Implementation

The Convolutional Associative Memory discussed above was used to classify emotions in speech. Other researchers have implemented Hopfield networks using LPC features for emotion recognition, with an observed efficiency of 46%. For emotion classification, the emotions were grouped into two sets (anger and happiness) and the MFCCs of each set were extracted. The average of all the MFCCs of anger was computed and the network was trained to store this average; similarly, the average of all the MFCCs of happiness was computed and stored. A small part of the training data was used for validation. Tested on data not included in training, the observed efficiency was 61%, which is higher than that of a normal Hopfield network. The observed error can be justified: associative memories are meant to store a finite set of patterns to which noisy inputs should converge, and our training and test data included speech signals of different speakers while we stored only the averages of all the anger or happiness signals. The Convolutional Associative Memory proposed in this paper outperformed the normal Hopfield memory in most cases and should be preferred to the Hopfield network.
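For concreteness, here is a sketch of the feature pipeline described above (using librosa for MFCC extraction; the file lists, n_mfcc, time-averaging and median binarization are our assumptions, as the paper only states that per-class MFCC averages are stored):

```python
import numpy as np
import librosa  # assumed available; any MFCC extractor would do

def class_average_mfcc(paths, n_mfcc=13):
    """Average MFCC vector over all recordings of one emotion class."""
    feats = []
    for p in paths:
        y, sr = librosa.load(p, sr=None)
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # n_mfcc x frames
        feats.append(m.mean(axis=1))                         # average over time
    return np.mean(feats, axis=0)

def binarize(v):
    """Map a real feature vector to a +/-1 pattern (median threshold)."""
    return np.where(v >= np.median(v), 1.0, -1.0)

# anger_pattern = binarize(class_average_mfcc(anger_files))      # hypothetical lists
# happy_pattern = binarize(class_average_mfcc(happiness_files))
# The two patterns are stored in the network; a test utterance is classified
# by whichever stored pattern the network converges to.
```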
8 Conclusion

In this research paper, the synapse is modeled as a discrete-time linear FIR filter of length F, and the state of each neuron is modeled as a sequence of length L. It is proved that such a system converges in serial-mode operation for the proposed type of sub-sampling matrix. Some special cases of the proposed Convolutional Associative Memory are discussed, and a novel Vector Hopfield Network is proposed. It is expected that such an artificial neural network will find many applications.

References

1. G. Rama Murthy, "Some Novel Real/Complex-Valued Neural Network Models," Advances in Soft Computing, Springer Series, Computational Intelligence, Theory and Applications, Proceedings of 9th Fuzzy Days (International Conference on Computational Intelligence), Dortmund, Germany, September 18-20.
2. G. Rama Murthy, "Finite Impulse Response (FIR) Filter Model of Synapses: Associated Neural Networks," International Conference on Natural Computation.
3. G. Rama Murthy, Multi-Dimensional Neural Networks: Unified Theory, New Age International Publishers.
4. J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences, USA, Vol. 79.
5. Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, "Convolutional Neural Associative Memories: Massive Capacity with Noise Tolerance," arXiv.org.
6. Jehoshua Bruck and Joseph W. Goodman, "A Generalized Convergence Theorem for Neural Networks," IEEE Transactions on Information Theory, Vol. 34, No. 5, September 1988.