Neural Networks. Associative memory. 12/30/2015

Associative memories

Massively parallel models of associative (content-addressable) memory have been developed. Some of these models are the Kohonen, Grossberg, Hamming and the widely known Hopfield model. The most interesting aspect of most of these models is that they specify a learning rule which can be used to train the network to associate input and output patterns.

The associative network considered here is a computational model emphasizing local and synchronous or asynchronous control, high parallelism, and redundancy. Such a network is a connectionist architecture and shares some common features with Rosenblatt's Perceptron; however, it is much more powerful and flexible than the Perceptron.

The model has its origin in both the Hamming and the Grossberg models. The network is composed of three layers or slabs: an input layer, an intermediate layer, and an output layer. The intermediate layer is a modified, totally interconnected, memoryless Grossberg slab with recurrent shunting on-center off-surround subnets, whose purpose is to achieve a majority vote so that only one neuron from this level, the one with the highest input value, sends its output to the next layer.

The similarity to Grossberg's model lies in the interconnections between the input layer and the intermediate layer; the similarity to Hamming's model lies in the feedback interconnections within the intermediate layer. The connections between the input layer and the intermediate layer contain all the information about one stored vector. The network implements the nearest-neighbor algorithm.
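As a first orientation, the sketch below (my own illustration, not code from the lecture) shows the nearest-neighbor recall that such a content-addressable memory performs: the stored input most similar to the probe selects the associated output. Normalizing the overlap by the number of non-zero elements anticipates the 1/b_j weight rule introduced later; the function and variable names are assumptions.

```python
import numpy as np

def nearest_neighbor_recall(probe, stored_inputs, stored_outputs):
    """Return the output associated with the stored input most similar to `probe`."""
    P = np.asarray(stored_inputs, dtype=float)
    f = np.asarray(probe, dtype=float)
    overlap = P @ f / P.sum(axis=1)      # overlap normalized by pattern size (cf. the 1/b_j rule)
    winner = int(np.argmax(overlap))     # the "winner-takes-all" choice
    return stored_outputs[winner]

stored_in  = [[1, 1, 0, 0, 1],
              [0, 1, 1, 1, 0]]
stored_out = [[1, 0], [0, 1]]
print(nearest_neighbor_recall([1, 1, 0, 0, 0], stored_in, stored_out))   # -> [1, 0]
```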

The number of elements in the intermediate layer defines the number of stored patterns. All feedback connections within the intermediate layer are based on the rule of lateral inhibition. The network performs a winner-takes-all operation.

The elements of the input signals (and of the stored vectors) are the binary values 0 and 1:

X = [x_1, x_2, x_3, ..., x_n],  x_i ∈ {0, 1}

The input and output elements (neurons) are only nodes whose purpose is to connect the inputs and outputs, respectively, to the intermediate slab. The network can be programmed to function as an autoassociative content-addressable memory or as a symbolic substitution system which yields an arbitrarily defined output for any input; which of the two it is depends on the connections between the intermediate slab and the output layer.

[Figure: network structure, left to right: input layer, intermediate layer (positive weights, negative weights, threshold), output layer.]

Programming the network

The interconnections (weights) between the input elements and each intermediate neuron are independent of one another. Each intermediate element has its weights programmed for one input signal, and these connections are left unchanged while the other neurons are programmed. Adding or removing a new pattern does not influence the existing network structure and weights.

The connection weights between the elements of the input layer and the j-th element of the intermediate slab are:

w_ij = 0      if the i-th element of the j-th input vector is equal to zero
w_ij = 1/b_j  if the i-th element of the j-th input vector is equal to one

where b_j is the number of non-zero elements in the j-th input vector to be stored. This procedure normalizes the total input to each element of the intermediate slab to the interval <0; 1>, and takes into account the relative number of stored elements equal to the input elements rather than the absolute number. This makes it possible to distinguish between signals when one is included in another.
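A minimal sketch of the weight-programming rule above, assuming binary 0/1 patterns, one intermediate neuron per stored pattern; the names program_input_weights and W1 are mine, not from the lecture.

```python
import numpy as np

def program_input_weights(stored_patterns):
    """stored_patterns: (p, n) array of 0/1 vectors, one per intermediate neuron.
    Returns W1 of shape (n, p): column j holds the input weights of neuron j."""
    P = np.asarray(stored_patterns, dtype=float)
    b = P.sum(axis=1)                       # b_j: number of non-zero elements of pattern j
    return (P / b[:, None]).T               # w_ij = 1/b_j where pattern j has a 1, else 0

patterns = [[1, 1, 1, 1, 0, 0],
            [0, 0, 1, 1, 1, 1]]
W1 = program_input_weights(patterns)
x = np.array([1, 1, 1, 1, 0, 0], dtype=float)
print(x @ W1)    # neuron 1 gets 1.0 (exact match), neuron 2 gets 0.5 (partial overlap)
```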

Example: two stored patterns, pattern 1 and pattern 2, where pattern 1 is included in pattern 2; the weights of the intermediate-slab element where each pattern is recorded follow the rule w_ij = 1/b_j. If the input signal is pattern 2, the output from both elements is equal to one. If the input signal is pattern 1, the output from element 1 is equal to one while the output from element 2 is equal to b_1/b_2 = 0.8. The ambiguous output in the first case can be resolved by a proper network structure.

This learning procedure is repeated for each input vector, each time with a new intermediate neuron. The total number of different vectors that can be stored with this prescription in a net with n elements in the input layer is of the order of 2^n.

Each neuron in the intermediate slab is connected to all other neurons of this slab. The weight on the self-feedback loop is equal to one, and all the other values depend on the correlation between the stored vectors. The weight between the output of the k-th neuron and the input of the j-th neuron is given by

w(k, j) = -(1 + cor(k, j)) · w_j / (2 (M - 1))

where cor(k, j) is the correlation (inner product) between the k-th and j-th stored vectors, w_j is one of the identical positive weights from the input slab to the j-th neuron (i.e., w_j = 1/b_j), and M is the number of neurons in the intermediate slab with non-zero inputs.

The denominator ensures that the total lateral inhibition for the element with the greatest value is smaller than its input. This procedure realizes the winner-takes-all rule: the intermediate slab selects the maximum input and drives all the other intermediate neurons to zero. If more than one intermediate neuron has the same maximum value, the slab selects the one that is less correlated with the remaining stored vectors. The structure of connections in the intermediate slab is not symmetrical, w(k, j) ≠ w(j, k), hence this tie-breaking is possible. If two or more neurons have the same input signal and their outputs cannot be discriminated by this criterion, the slab will be unable to distinguish between them and the outputs will be driven to zero or will be a superposition of the two or more outputs.
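The slide's lateral-inhibition formula is only partly legible, so the sketch below reconstructs it under stated assumptions: w2(k, j) = -(1 + cor(k, j)) · w_j / (2(M - 1)) for k ≠ j, a unit self-feedback weight, cor() taken as the overlap normalized by the larger pattern size, and outputs clipped at zero during the iteration. It reproduces the inclusion example above: presenting the larger pattern initially excites both neurons equally, and the asymmetric inhibition lets the true match win.

```python
import numpy as np

patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],     # pattern A (4 ones)
                     [1, 1, 1, 1, 1, 0, 0, 0]],    # pattern B (5 ones), contains A
                    dtype=float)
b = patterns.sum(axis=1)
w_in = 1.0 / b                                      # identical input weight per neuron (1/b_j)
probe = patterns[1]                                 # present pattern B

g = patterns @ probe * w_in                         # both neurons respond with 1.0 (ambiguous)
M = int((g > 0).sum())                              # neurons with non-zero input
cor = (patterns @ patterns.T) / np.maximum.outer(b, b)   # assumed normalized correlation
W2 = -(1.0 + cor) * w_in[:, None] / (2.0 * (M - 1))      # row j = weights into neuron j
np.fill_diagonal(W2, 1.0)                                 # self-feedback weight equals one

for _ in range(20):                                 # winner-takes-all iteration
    g = np.maximum(0.0, W2 @ g)
print(g)   # neuron A is driven to zero, neuron B (the true match) stays positive
```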

Retrieval of stored vectors

At the input layer an unknown signal is applied and the network has to recognize it. If the stored vectors are orthogonal, any full or partial input corresponding to one stored vector causes only one neuron in the intermediate slab to have a non-zero output in the first iteration. When the stored vectors are not orthogonal, a certain number of neurons will be excited.

Let f be the unknown input signal. The elements of the vector X define the total input to the elements of the intermediate layer:

X = f · W1

where W1 is the matrix of connections between the input layer and the intermediate layer (its columns are equal to the input weights of each stored vector).

The output of the first iteration is equal to

G(1) = W2 · X

where W2 is the square matrix of connections between the elements of the intermediate slab, and the iterative formula is

G(t + 1) = W2 · G(t) = (W2)^t · (f · W1)^T

The output values are calculated by the formula

Y = W3 · G

where W3 = [w3(i, j)] is the matrix of connections between the intermediate slab and the output layer; for the associative memory it holds the stored output vectors.

Example: for two stored binary vectors V1 and V2, the input weights are w_i1 = 1/2 = 0.5 at the non-zero positions of V1 (V1 has two non-zero elements) and 0 elsewhere, and w_i2 = 1/3 at the non-zero positions of V2 (V2 has three non-zero elements) and 0 elsewhere.
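A compact end-to-end retrieval sketch under the reconstruction above: X = f · W1 excites the intermediate slab, the slab iterates the winner-takes-all competition, and Y = W3 · G maps the winner to its stored output. The naming W1/W2/W3, the assumption that W3 simply holds the stored output vectors as columns, the clipping at zero, and the guard M ≥ 2 are mine.

```python
import numpy as np

def build_network(stored_inputs, stored_outputs):
    P = np.asarray(stored_inputs, dtype=float)
    b = P.sum(axis=1)
    W1 = (P / b[:, None]).T                          # input -> intermediate, w_ij = 1/b_j
    cor = (P @ P.T) / np.maximum.outer(b, b)         # assumed normalized correlation
    W3 = np.asarray(stored_outputs, dtype=float).T   # intermediate -> output (assumption)
    return W1, cor, 1.0 / b, W3

def retrieve(f, W1, cor, w_in, W3, steps=30):
    g = np.maximum(0.0, np.asarray(f, dtype=float) @ W1)   # first excitation X = f @ W1
    M = max(int((g > 0).sum()), 2)                          # guard against M = 1
    W2 = -(1.0 + cor) * w_in[:, None] / (2.0 * (M - 1))    # lateral inhibition
    np.fill_diagonal(W2, 1.0)                               # unit self-feedback
    for _ in range(steps):                                  # G(t+1) = W2 @ G(t), clipped at 0
        g = np.maximum(0.0, W2 @ g)
    return W3 @ (g > 0).astype(float)                       # output of the winning neuron

inputs  = [[1, 1, 1, 1, 0, 0, 0, 0],
           [0, 0, 0, 1, 1, 1, 1, 1]]
outputs = [[1, 0], [0, 1]]
net = build_network(inputs, outputs)
print(retrieve([1, 1, 1, 0, 0, 0, 0, 0], *net))   # partial cue of pattern 1 -> [1. 0.]
```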

A third stored vector V3 is treated in the same way: w_i3 = 1/b_3 at its non-zero positions and 0 elsewhere. [The slides list the resulting weight matrix W1 between the input layer and the intermediate slab, and the lateral-connection matrix W2 within the intermediate slab computed from the correlations between the stored vectors using the formula above; the numerical entries are not recoverable from this transcript.]

[Figures: 2D patterns stored in the network (9x6); the input signal after the first iterations; the output signal after 4 iterations.]