Neural Nets in PR. Pattern Recognition XII. Michal Haindl.


Neural Nets in PR
Pattern Recognition XII

Michal Haindl
Faculty of Information Technology, KTI, Czech Technical University in Prague
Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague, Czech Republic

European Social Fund. Prague & EU: We invest in your future.
MI-ROZ 2011-2012/Z, January 16, 2012

Outline

1. Neural Nets in PR
   - Neuron Models
   - Neural Nets Properties
   - Feedback

Motivation: human brain study, complex cognitive tasks.

Neural Nets in PR 2

- studied since the late Middle Ages
- neural structure discovered by Santiago Ramón y Cajal (1888), who introduced the idea of the neuron
- chemical transmission of nerve signals: Sir Henry Dale (1936 Nobel Prize in Medicine)
- electrical signal transmission in the nervous system: Sir John Eccles, A. L. Hodgkin, A. Huxley (1963 Nobel Prize in Medicine)
- W. McCulloch, W. Pitts (1943)
- Hebb's learning of the McCulloch-Pitts neural network: E. Caianiello (1961)

- perceptron type (feed-forward) nets of F. Rosenblatt (1960)
- analogy between McCulloch-Pitts nets and the Ising model: W. Little, J. Hopfield (1978, 1982)
- open question: does the artificial neural net computing paradigm model biology or not?

Machine System

- machines rely on the speed and accuracy of executing vast amounts of instructions
- they are easily overwhelmed by tasks of exponential or greater complexity (most directly approached PR tasks)
- they search the whole system space for an acceptable solution
- limited progress so far (little previously unknown knowledge gained)
- few neural-net-based algorithms are superior to those previously known
- sleep (REM) eliminates undesirable memory: Crick, Mitchison (1983)

Neuron Models

[Figure: inputs $x_1, \dots, x_l$, weighted by $a_{i,1}, \dots, a_{i,l}$, are summed into the activity $v_i$, which passes through the activation $f$ to the output $y_i$.]

$$v_i = \sum_{j=1}^{l} a_{i,j} x_j, \qquad y_i = f(v_i + \vartheta_i)$$

$\vartheta_i$ ... bias, or threshold (an affine transformation on the output); $x_j$ ... inputs.

Neuron Models 2

$$y_i = f(v_i - \vartheta_i) \;\; (\vartheta_i \text{ a threshold}), \qquad y_i = f(v_i + \vartheta_i) \;\; (\vartheta_i \text{ a bias})$$

Neural Nets

- motivation: biological neurones
- also called neurocomputers, connectionist networks, parallel distributed processors, intelligent computing
- nodes = neurons, links = synapses; input nodes, output nodes, hidden (interlayer) nodes
- net properties: activation function $f$, weights $a_j$, net architecture

Neural nets:
- Recurrent: feedback, fixed weights; the input pattern X serves as an initial condition, and the net converges to the closest template stored in its memory.
- Feed-forward: synaptic connections start with random weights that change during iterative learning; in the classification stage X is the input and ω the output.
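As an illustration of the neuron model above, here is a minimal Python sketch (not from the lecture; the variable values and the hard-threshold choice are assumptions for the example):

```python
import numpy as np

def neuron_output(x, a, theta, f):
    """Single neuron: v_i = sum_j a_ij x_j, y_i = f(v_i + theta_i)."""
    v = np.dot(a, x)        # weighted sum of inputs (internal activity)
    return f(v + theta)     # activation applied to activity plus bias

# Example with a hard-threshold activation f(v) = 1 for v >= 0, else 0:
heaviside = lambda v: 1.0 if v >= 0 else 0.0
x = np.array([0.5, -1.0, 2.0])   # inputs x_1 .. x_l
a = np.array([0.4, 0.3, 0.1])    # synaptic weights a_i,1 .. a_i,l
print(neuron_output(x, a, theta=-0.2, f=heaviside))  # -> 0.0
```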

Neuron Models 3

Piecewise-linear activation:

$$f(v) = \begin{cases} 1 & v \ge \frac{1}{2} \\ v & \frac{1}{2} > v > -\frac{1}{2} \\ 0 & v \le -\frac{1}{2} \end{cases}$$

Equivalent model: the bias is absorbed as an extra constant input $x_0 = 1$ with weight $a_{i,0} = \vartheta_i$:

$$v_i = \sum_{j=0}^{l} a_{i,j} x_j, \qquad y_i = f(v_i)$$

[Figure: the neuron diagram extended with the input $x_0 = 1$ and weight $a_{i,0} = \vartheta_i$.]

Sigmoid activation:

$$f(v) = \frac{1}{1 + \exp\{-av\}}$$

Threshold activation:

$$f(v) = \begin{cases} 1 & v \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
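The equivalence of the two neuron forms is easy to verify numerically; a minimal sketch, assuming a sigmoid activation and illustrative values (none of these names come from the slides):

```python
import numpy as np

sigmoid = lambda v, a=1.0: 1.0 / (1.0 + np.exp(-a * v))

x = np.array([0.5, -1.0, 2.0])
a = np.array([0.4, 0.3, 0.1])
theta = -0.2

# Original form: y_i = f(v_i + theta_i)
y1 = sigmoid(np.dot(a, x) + theta)

# Equivalent form: prepend x_0 = 1 with weight a_i0 = theta_i, then y_i = f(v_i)
x_aug = np.concatenate(([1.0], x))
a_aug = np.concatenate(([theta], a))
y2 = sigmoid(np.dot(a_aug, x_aug))

assert np.isclose(y1, y2)  # both forms yield the same output
```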

Neural Nets Properties

1. Each neuron is represented by a set of linear synaptic links, an externally applied threshold $\vartheta_i$, and a nonlinear activation link $f(\cdot)$.
2. The synaptic links of a neuron weight their respective input signals.
3. The weighted sum of the input signals defines the total internal activity of the neuron.
4. The activation link transforms the internal activity into an output (the state variable of the neuron).

Signum activation:

$$f(v) = \begin{cases} 1 & v > 0 \\ 0 & v = 0 \\ -1 & v < 0 \end{cases}$$

Neural Nets Properties 2

- massive parallel distributed structure
- supervised learning
- nonlinearity (the neuron is a nonlinear device)
- input-output mapping
- contextual behaviour of neurons
- fault tolerance (a faulty neuron only degrades the output quality)
- uniformity of analysis and design: neurons are common to all nets, learning can be shared across different applications, seamless integration of models

Hyperbolic tangent activation:

$$f(v) = \tanh\left(\frac{v}{2}\right) = \frac{1 - \exp\{-v\}}{1 + \exp\{-v\}}$$
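The activation functions introduced so far can be collected in one place; a minimal sketch implementing the slide formulas literally (the function names are mine, not the lecture's):

```python
import numpy as np

def piecewise_linear(v):
    """f(v) = 1 for v >= 1/2, v for -1/2 < v < 1/2, 0 for v <= -1/2."""
    v = np.asarray(v, dtype=float)
    return np.where(v >= 0.5, 1.0, np.where(v <= -0.5, 0.0, v))

def sigmoid(v, a=1.0):
    """f(v) = 1 / (1 + exp(-a v)); a controls the slope."""
    return 1.0 / (1.0 + np.exp(-a * np.asarray(v, dtype=float)))

def threshold(v):
    """f(v) = 1 for v >= 0, 0 otherwise."""
    return np.where(np.asarray(v, dtype=float) >= 0, 1.0, 0.0)

def signum(v):
    """f(v) = 1 for v > 0, 0 for v = 0, -1 for v < 0."""
    return np.sign(np.asarray(v, dtype=float))

def tanh_act(v):
    """f(v) = tanh(v/2) = (1 - exp(-v)) / (1 + exp(-v))."""
    return np.tanh(np.asarray(v, dtype=float) / 2.0)

print(piecewise_linear([-1.0, 0.2, 1.0]))  # [0.  0.2 1. ]
print(signum([-0.3, 0.0, 0.7]))            # [-1.  0.  1.]
```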

Signal-Flow Graph

Rule 1: a signal flows along a link only in the direction defined by the link.
- synaptic (linear) link: $x_j \to y_k = a_{k,j} x_j$
- activation (nonlinear) link: $x_j \to y_k = f(x_j)$

Rule 2: a node signal equals the sum of the signals on its incoming links: $y_k = y_i + y_j$ (synaptic convergence, fan-in).

Rule 3: a node signal is transmitted to each outgoing link independently of the transfer functions $f_i(\cdot)$ of the target nodes (synaptic divergence, fan-out).

Signal-Flow Graph 2

A neural network model is defined as a directed graph:
- A real state variable (input signal) $x_i$ is associated with each node $i$.
- A real-valued (synaptic) weight $a_{ik}$ is associated with each link $i \to k$.
- A real-valued bias $\vartheta_i$ (threshold) is associated with each node $i$.
- A transfer function $f_i(x_k, a_{ik}, \vartheta_i, (k \ne i))$ is defined for each node $i$; it determines the node's state as a function of its bias, the weights of its incoming links, and the states of the connected nodes.
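The directed-graph definition maps directly onto simple data structures; a minimal sketch, with illustrative dictionaries standing in for nodes, links, and biases (nothing here is the lecture's notation):

```python
import numpy as np

# Links (i, k) carry synaptic weights a_ik; nodes carry biases theta_i.
weights = {(0, 2): 0.4, (1, 2): 0.3}   # directed links i -> k
bias = {2: -0.2}
state = {0: 0.5, 1: -1.0}              # x_i of the input nodes

def update_node(k, f):
    """Rule 2 (fan-in): sum the weighted signals on incoming links,
    then apply the node's activation. By rule 3 (fan-out) the result
    is broadcast unchanged on every outgoing link."""
    v = sum(w * state[i] for (i, j), w in weights.items() if j == k)
    return f(v + bias.get(k, 0.0))

state[2] = update_node(2, f=np.tanh)
print(state[2])  # state of node 2 after one update
```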

Feedback

Output fed back to the input (a recurrent net); single-loop feedback, with $z^{-1}$ denoting the unit delay:

[Figure: $x_j(t)$ enters a summing node producing $x'_j(t)$; the forward operator $A$ yields $y_i(t)$; the feedback operator $B$ returns $y_i(t)$ to the summing node.]

$$y_i(t) = A[x'_j(t)], \qquad x'_j(t) = x_j(t) + B[y_i(t)]$$

$$\Rightarrow \quad y_i(t) = \frac{A}{1 - AB}[x_j(t)]$$
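A scalar sketch of this closed loop, assuming constant gains A and B with the unit delay $z^{-1}$ in the feedback path (an assumption; the slide leaves the operators abstract). For $|AB| < 1$ the response converges to the closed-loop gain $A/(1 - AB)$:

```python
# Single-loop feedback: y(t) = A * x'(t), x'(t) = x(t) + B * y(t-1),
# i.e. the feedback path contains the unit delay z^{-1}.
A, B = 0.8, 0.5
T = 50
x = [1.0] * T          # unit-step input
y_prev = 0.0           # delayed output y(t-1), initially zero
for t in range(T):
    y = A * (x[t] + B * y_prev)
    y_prev = y

# Expanding the recursion gives y(t) = A * sum_k (A*B)**k * x(t-k),
# the series form of the closed-loop operator A / (1 - A*B).
print(y, A / (1 - A * B))  # both ~ 1.3333 for |A*B| < 1
```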