Hopfield Networks and Recurrent Networks

Laboratório de Automação e Robótica - A. Bauchspiess, Soft Computing - Neural Networks and Fuzzy Logic


Hopfield Network (Recurrent Networks)

Auto-associative memory: given an initial n-bit pattern, the network returns the closest stored (associated) pattern. No P.E. (processing element) self-feedback.

Dynamics:
s_j(k) = \sum_{i=1}^{n} w_{ij} \, y_i(k)
y_j(k+1) = f(s_j(k))

Network initialization: y(0) = x (the presented pattern); output vector y(k) = [y_1(k) ... y_n(k)].

Binary activation function: f(s) = 1 if s > L, 0 if s < L, hold the previous value if s = L.

Figure: Hopfield network with n processing elements (weights w_ij, outputs y_1(k) ... y_n(k)).
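A minimal MATLAB sketch of the recall dynamics above, assuming the threshold L = 0 (the function name hopfield_recall and the sweep limit are illustrative choices, not from the slides):

    % Sequential Hopfield recall. W: n-by-n symmetric weights with zero diagonal,
    % x: initial 0/1 pattern, maxSweeps: safety limit on full passes over the P.E.s.
    function y = hopfield_recall(W, x, maxSweeps)
        y = x(:)';                        % network state, y(0) = x
        for sweep = 1:maxSweeps
            yOld = y;
            for j = 1:numel(y)            % fire the processing elements one at a time
                s = W(:,j)' * y(:);       % s_j = sum_i w_ij * y_i
                if s > 0
                    y(j) = 1;
                elseif s < 0
                    y(j) = 0;
                end                       % s == 0: hold the previous value
            end
            if isequal(y, yOld), break; end   % stable state (end value) reached
        end
    end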

Hopfield Network (continued)

- Fast training and fast data recovery
- IIR system with no input (only initial conditions)
- Guaranteed stability
- Good for VLSI implementation

Operating forms (firing order): asynchronous, synchronous, sequential (a synchronous variant is sketched below).

Figure: Hopfield network with n processing elements; the 8 possible states of a 3-processing-element network, illustrating a typical recovery trajectory from the initial condition to the stable end value.
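For contrast with the sequential recall sketch above, a synchronous step updates every processing element at once from the same state y(k) (again an illustrative sketch assuming threshold 0):

    % Synchronous firing: all P.E.s are updated simultaneously from y(k).
    function y = hopfield_sync_step(W, y)
        y = y(:);
        s = W * y;                 % all sums computed from the same state
        y(s > 0) = 1;
        y(s < 0) = 0;              % s == 0: hold the previous value
        y = y';
    end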

Hopfield Network - Learning

The patterns to be stored in the associative memory are chosen a priori: m distinct patterns, each of the form
A^p = [a_1^p a_2^p ... a_n^p], with a_i^p = 0 or 1 (p = 1, ..., m).

Usually:
w_{ij} = \sum_{p=1}^{m} (2a_i^p - 1)(2a_j^p - 1)

Obs.: (2a - 1) converts 0/1 to -1/+1, so w_{ij} is incremented by 1 if a_i^p = a_j^p and decremented otherwise. The procedure is repeated for each pair (i, j) and for every pattern A^p. Learning is analogous to reinforcement learning.
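A sketch of this storage rule: the 0/1 patterns are converted to -1/+1 with (2a - 1), the outer products are summed, and the diagonal is zeroed because there is no P.E. self-feedback (names are illustrative):

    % A: m-by-n matrix with one stored 0/1 pattern per row.
    function W = hopfield_store(A)
        B = 2*A - 1;                       % convert 0/1 components to -1/+1
        W = B' * B;                        % w_ij = sum_p (2a_i^p - 1)(2a_j^p - 1)
        W(logical(eye(size(W)))) = 0;      % no self-feedback: zero the diagonal
    end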

Hopfield Network - Example

Two symbols are stored, each as a 3x3 binary matrix (components a_1 ... a_9, read row by row):

Symbol L: training vector A^1
Symbol +: training vector A^2

Weight matrix: W is built from w_{ij} = \sum_p (2a_i^p - 1)(2a_j^p - 1).

Hopfield Network - Example (continued)

A new pattern is presented to the trained network: y(0) = x.

Sequential operation of the network: the processing elements are fired one at a time; for each fired P.E. the table on the slide lists the weighted sum s_j = \sum_i w_{ij} y_i, the resulting P.E. output, and the new output vector.

Remember the binary activation function: f(s) = 1 if s > L, 0 if s < L, hold the previous value if s = L.

The state converges to the stored 'L' pattern (see the worked sketch below).
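A worked version of this example using the sketches above. The exact training vectors are assumed here to be the usual 3x3 renderings of 'L' and '+', and the corrupted key is an illustrative choice:

    % Assumed 3x3 patterns, read row by row (components a_1 ... a_9):
    L    = [1 0 0  1 0 0  1 1 1];      % 'L' (assumed rendering)
    plus = [0 1 0  1 1 1  0 1 0];      % '+' (assumed rendering)
    W = hopfield_store([L; plus]);     % 9-by-9 weight matrix

    key = [1 0 0  1 0 0  1 0 1];       % 'L' with one corrupted bit as the initial pattern
    y = hopfield_recall(W, key, 10);
    disp(reshape(y, 3, 3)')            % converges to the stored 'L' pattern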

Hopfield Network - Java demos

Demonstrations are available on the WWW.

Hopfield Network - final considerations

Stability proof: Cohen and Grossberg, 1983. With W symmetric and with zero diagonal, the energy function always decreases:
E = -\frac{1}{2} \sum_i \sum_j w_{ij} \, y_i \, y_j

Figure: typical energy landscape of a Hopfield network (energy of the network vs. states), marking the initial state, a spurious pattern, the recovered pattern and the stored patterns.

Hopfield network limitations:
- Not necessarily the closest pattern is returned; it depends on the differences between the patterns, and not all patterns have equal emphasis (size of the attraction basins).
- Spurious patterns, i.e., patterns are evoked that are not part of the stored set.
- The maximum number of stored patterns is limited: m ≈ 0.5 n / log n (m patterns, n-bit network).
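A one-line helper for the energy above; evaluating it before and after each fired processing element in the recall sketch shows a non-increasing sequence, which is the stability argument in miniature (the function name is illustrative):

    % E = -1/2 * sum_ij w_ij * y_i * y_j for a state vector y.
    function E = hopfield_energy(W, y)
        y = y(:);
        E = -0.5 * (y' * W * y);
    end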

Radial Basis Functions

- Moody & Darken, 1989, ...
- Function approximators
- Inspiration: overlapping sensory receptive fields in the cortex
- Localized activity of the processing elements (Gaussian, with mean and variance):
  a_i = \exp\left(-\frac{\|x - \mu_i\|^2}{\sigma_i^2}\right)
- The network output is a weighted sum of the radial basis activations plus a bias:
  a = \sum_i w_i a_i + b   (w: weight, b: bias)

Figure: weighted sum of radial basis transfer functions (output a vs. input p).
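A small sketch of the localized activations and the weighted-sum output for scalar inputs; the centres, widths, weights and bias below are illustrative values, not taken from the slides:

    % Gaussian RBF layer followed by a linear output unit.
    mu    = [-2 0 2];                  % centres (illustrative)
    sigma = [1 1 1];                   % widths (illustrative)
    w     = [0.5 -1.0 0.8];            % output weights (illustrative)
    b     = 0.1;                       % output bias

    x = linspace(-4, 4, 200)';
    A = zeros(numel(x), numel(mu));
    for i = 1:numel(mu)
        A(:,i) = exp(-(x - mu(i)).^2 / sigma(i)^2);   % a_i = exp(-(x - mu_i)^2 / sigma_i^2)
    end
    y = A * w(:) + b;                  % weighted sum of radial basis transfer functions
    plot(x, y);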

Radial Basis Functions (continued)

a = radbas(dist(w, p) * b)

MATLAB implementation:
[net, tr] = newrb(P, T, GOAL, SPREAD, MN)
- P: RxQ matrix, Q input vectors ("pattern")
- T: SxQ matrix, Q objective vectors ("target")
- GOAL: desired mean squared error, default 0.0
- SPREAD: radial basis function spread, default 1.0
- MN: maximum number of neurons, default Q

Figure: the radial basis transfer function a = radbas(dist(w,p)*b) plotted against ||w - p||, and the weighted sum of radial basis transfer functions (output a vs. input p).
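A usage sketch for the newrb call above, fitting a simple 1-D function (requires the Neural Network Toolbox; the sample data and the GOAL/SPREAD/MN arguments are illustrative):

    P = -3:0.2:3;                       % 1xQ input vectors ("pattern")
    T = sin(P) .* exp(-0.2*P.^2);       % 1xQ target vectors
    net = newrb(P, T, 0.001, 1.0, 25);  % goal MSE, spread, maximum number of neurons
    Y = sim(net, P);                    % simulate the trained network on the inputs
    plot(P, T, 'o', P, Y, '-');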

Radial Basis Functions - Learning

Neurons are added incrementally. Where? Where the largest quadratic error reduction is obtained at each step: min \sum e^2.

- Spread too low: good fitting (at the training points), but bad interpolation.
- Spread too high: good fitting at low frequencies, good interpolation in some ranges.

Figure: fits obtained with too low and too high spread values.

Radial Basis Functions - Heuristics

x_{i+1} - x_i < SPREAD < x_max - x_min   (a small sketch of this rule is given after the conclusions below)

Figure: 26 samples of the target function and the RBF approximation with a well-chosen spread: good fitting, good interpolation, but bad extrapolation (extrapolation is a very difficult task).

Conclusions:
- Training is faster than for an MLP, but uses more neurons.
- Incremental training: new points can be learned without losing prior knowledge.
- A priori knowledge can be used to locate neurons (which is not possible in an MLP).
- Fixed spread with incremental training gives a suboptimal solution!
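One illustrative way to pick SPREAD inside the bounds above, assuming a 1-D training input vector P as in the newrb sketch earlier (the geometric-mean compromise is an assumption, not from the slides):

    dx = diff(sort(P));                    % gaps between neighbouring training inputs
    spreadLow  = max(dx);                  % SPREAD should exceed the largest gap
    spreadHigh = max(P) - min(P);          % ... and stay below the total input range
    SPREAD = sqrt(spreadLow * spreadHigh); % illustrative compromise between the bounds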

Comparison RBF vs. MLP

MSE measured on a test grid t_i for a target function of the form y(t) = e^{-c_1 t} \sin(c_2 t^2), comparing RBF and MLP networks with 5, 10, 15 and 20 hidden units.

- RBF: more neurons give better fitting; the best solution is newrbe (exact fitting!).
- MLP: too many neurons give worse fitting (bad interpolation).

Figures: RBF and MLP fits to the sampled function, with the corresponding MSE tables.

Unsupervised Learning - Competitive Layer

Find code vectors that describe the data distribution. There are no known desired code vectors; they should reflect the inner statistical distribution of the process.

Used in data compression. Example: code symbols that will be transmitted over a communication channel. For the compression, the variability of the signal is considered as noise and is therefore discarded. (A winner-take-all sketch is given below.)
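A minimal winner-take-all sketch of how such code vectors can be adapted to reflect the data distribution (the data, the number of code vectors and the learning rate are illustrative; the toolbox's own competitive-layer functions are not used):

    % X: N-by-2 data points; k code vectors adapted by winner-take-all updates.
    X   = randn(500, 2);
    k   = 8;
    C   = X(randperm(size(X,1), k), :);          % initialise code vectors on data points
    eta = 0.05;                                  % learning rate

    for epoch = 1:20
        for t = 1:size(X, 1)
            d = sum((C - repmat(X(t,:), k, 1)).^2, 2);        % squared distances
            [~, win] = min(d);                                % winning (closest) code vector
            C(win,:) = C(win,:) + eta * (X(t,:) - C(win,:));  % move winner toward the sample
        end
    end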

Competitive Layer

Figure: trained code vectors (o) over the training vectors (+) and test vectors (x).

Borders (Voronoi Diagram)

Figure: decision borders (Voronoi diagram) around the code vectors (o), with training vectors (+) and test vectors (x).

Ex.: Classification Borders

Figure: classification borders; code vectors (o), training vectors (+), test vectors (x).

Bias

Bias adjustment to help weak neurons.

Figure: code vectors obtained with bias adjustment vs. with only the Euclidean distance.

Learning Vector Quantization

- No known desired code vectors.
- Data points belong to classes.
- Code vectors should reflect the inner statistical distribution of the process.

Figure: input vectors of Class A, Class B and Class C, the current input, and the code vectors; the winning code vector Wc satisfies Class(X) = Class(Wc). (A sketch of the basic update follows below.)

Enhanced algorithms: LVQ1, LVQ2.1, LVQ3, OLVQ1
- Dead neurons
- Neighborhood definition
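A sketch of the basic LVQ1 update implied by Class(X) = Class(Wc): the nearest code vector is attracted to the input when the classes match and repelled otherwise (names and the learning rate are illustrative):

    % One LVQ1 step. C: k-by-d code vectors, Ccls: k-by-1 class labels of the code vectors,
    % x: 1-by-d input vector, xcls: its class, eta: learning rate.
    function C = lvq1_step(C, Ccls, x, xcls, eta)
        d = sum((C - repmat(x, size(C,1), 1)).^2, 2);
        [~, c] = min(d);                             % winning code vector Wc
        if Ccls(c) == xcls
            C(c,:) = C(c,:) + eta * (x - C(c,:));    % Class(X) = Class(Wc): attract
        else
            C(c,:) = C(c,:) - eta * (x - C(c,:));    % classes differ: repel
        end
    end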

Self-Organizing Maps

- Kohonen, 1982
- Unsupervised learning
- One active layer with neighborhood constraints
- Code vectors should reflect the inner statistical distribution of the process

Figure: weight vectors W(i,1) vs. W(i,2) of a SOM map trained on a triangular input distribution; a second panel shows a weird, unsuccessful training run. (A compact SOM update sketch follows below.)
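A compact sketch of a Kohonen SOM update on a 1-D chain of units with a shrinking Gaussian neighbourhood (the map size, rates and schedules are illustrative choices):

    % Self-organizing map: a 1-D chain of m units mapping 2-D data.
    X   = rand(1000, 2) - 0.5;              % training data (illustrative distribution)
    m   = 20;
    W   = rand(m, 2) - 0.5;                 % weight (code) vectors
    pos = (1:m)';                           % positions of the units along the chain
    T   = 5000;                             % number of training steps

    for t = 1:T
        x = X(randi(size(X,1)), :);
        d = sum((W - repmat(x, m, 1)).^2, 2);
        [~, c] = min(d);                    % best-matching unit
        eta    = 0.5 * (1 - t/T);           % decaying learning rate
        radius = 1 + (m/2) * (1 - t/T);     % shrinking neighbourhood radius
        h = exp(-((pos - c).^2) / (2*radius^2));                  % neighbourhood weights
        W = W + eta * repmat(h, 1, 2) .* (repmat(x, m, 1) - W);   % pull neighbours too
    end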

ANN General Characteristics

Positive:
- Learning
- Parallelism
- Distributed knowledge
- Fault tolerant
- Associative memory
- Robust against noise
- No exhaustive modelling

To obtain a successful ANN, good process knowledge is recommended in order to design experiments that produce useful data sets!

Negative:
- Knowledge acquisition only by learning (e.g., which topology is best suited?)
- Introspection is not possible (what is the contribution of this neuron?)
- Logical inference is hard to obtain (why this output for this situation?)
- Learning is slow
- Very sensitive to initial conditions
- There is no free lunch!