Neural Networks. Fundamentals of Neural Networks: Architectures, Algorithms and Applications. L. Fausett, 1994


Neural Networks

Neural Networks Fundamentals of Neural Networks: Architectures, Algorithms and Applications. L. Fausett, 1994. An Introduction to Neural Networks (2nd Ed). Morton, IM, 1995

Neural Networks McCulloch & Pitts (1943) are generally recognised as the designers of the first neural network. Many of their ideas are still used today (e.g. many simple units combining to give increased computational power, and the idea of a threshold).

Neural Networks Hebb (1949) developed the first learning rule (on the premise that if two neurons are active at the same time, the strength of the connection between them should be increased).

Neural Networks During the 1950s and 1960s many researchers worked on the perceptron amidst great excitement. 1969 saw the death of neural network research for about 15 years (Minsky & Papert). Only in the mid-1980s (Parker and LeCun) was interest revived (in fact Werbos had discovered the algorithm in 1974).

Neural Networks

Neural Networks We are born with about 100 billion neurons. A neuron may connect to as many as 100,000 other neurons.

Neural Networks Signals are transmitted between neurons electrochemically. The synapses release a chemical transmitter; when the sum of these reaches a threshold, the neuron fires. Synapses can be inhibitory or excitatory.

The First Neural Networks McCulloch and Pitts produced the first neural network in 1943. Many of its principles can still be seen in the neural networks of today.

The First Neural Networks [Figure: a McCulloch-Pitts unit Y with inputs X1 and X2 (weight 2 each) and X3 (weight -1)] The activation of a neuron is binary. That is, the neuron either fires (activation of one) or does not fire (activation of zero).

The First Neural Networks For the network shown in the figure, the activation function for unit Y is f(y_in) = 1 if y_in >= θ, else 0, where y_in is the total input signal received and θ is the threshold for Y.
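A minimal Python sketch of such a unit (the function name is an illustrative assumption, and the threshold of 4 is assumed so that the inhibitory input always blocks firing, as described later):

    def mp_fire(inputs, weights, theta):
        # Binary activation: fire (1) iff the total input signal y_in
        # reaches the threshold theta
        y_in = sum(x * w for x, w in zip(inputs, weights))
        return 1 if y_in >= theta else 0

    # Unit Y from the figure: X1, X2 excitatory (weight 2 each),
    # X3 inhibitory (weight -1)
    print(mp_fire([1, 1, 0], [2, 2, -1], theta=4))  # -> 1 (fires)
    print(mp_fire([1, 1, 1], [2, 2, -1], theta=4))  # -> 0 (inhibited)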

The First Neural Networks Neurons in a McCulloch-Pitts network are connected by directed, weighted paths.

The First Neural Networks If the weight on a path is positive, the path is excitatory; otherwise it is inhibitory.

The First Neural Networks All excitatory connections into a particular neuron have the same weight, although differently weighted connections can be input to different neurons.

The First Neural Networks Each neuron has a fixed threshold. If the net input into the neuron is greater than or equal to the threshold, the neuron fires.

The First Neural Networks The threshold is set such that any non-zero inhibitory input will prevent the neuron from firing.

The First Neural Networks It takes one time step for a signal to pass over one connection.

The First Neural Networks AND Function [Figure: X1 and X2 each connected to Y with weight 1]

AND
X1  X2  Y
1   1   1
1   0   0
0   1   0
0   0   0

Threshold(Y) = 2

The First Neural Networks OR Function [Figure: X1 and X2 each connected to Y with weight 2]

OR
X1  X2  Y
1   1   1
1   0   1
0   1   1
0   0   0

Threshold(Y) = 2

The First Neural Networks AND NOT Function [Figure: X1 connected to Y with weight 2, X2 with weight -1]

AND NOT
X1  X2  Y
1   1   0
1   0   1
0   1   0
0   0   0

Threshold(Y) = 2

The First Neural Networks XOR Function [Figure: X1 and X2 feed hidden units Z1 and Z2 (weight 2 on the direct path, -1 on the crossed path), and Z1, Z2 feed Y with weight 2 each]

XOR
X1  X2  Y
1   1   0
1   0   1
0   1   1
0   0   0

X1 XOR X2 = (X1 AND NOT X2) OR (X2 AND NOT X1)
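The construction can be checked directly. A Python sketch, reusing the threshold-2 units from the preceding slides (the helper function is an assumption; the unit names follow the figure):

    def mp_fire(inputs, weights, theta=2):
        # McCulloch-Pitts unit with threshold 2
        return 1 if sum(x * w for x, w in zip(inputs, weights)) >= theta else 0

    def xor(x1, x2):
        z1 = mp_fire([x1, x2], [2, -1])   # Z1 = X1 AND NOT X2
        z2 = mp_fire([x2, x1], [2, -1])   # Z2 = X2 AND NOT X1
        return mp_fire([z1, z2], [2, 2])  # Y  = Z1 OR Z2

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, '->', xor(x1, x2))  # reproduces the XOR table

Note that the signal takes two time steps to reach Y, one per connection, as stated earlier.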

Modelling a Neuron

in_i = Σ_j W_{j,i} a_j

a_j : activation value of unit j
W_{j,i} : weight on the link from unit j to unit i
in_i : weighted sum of inputs to unit i
a_i : activation value of unit i, a_i = g(in_i)
g : activation function

Activation Functions

Step_t(x) = 1 if x >= t, else 0
Sign(x) = +1 if x >= 0, else -1
Sigmoid(x) = 1 / (1 + e^-x)
Identity(x) = x
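These can be written out directly; a short Python sketch (function names are illustrative):

    import math

    def step(x, t=0.0):
        return 1 if x >= t else 0           # Step_t

    def sign(x):
        return 1 if x >= 0 else -1          # Sign

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))   # Sigmoid

    def identity(x):
        return x                            # Identity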

Simple Networks

          AND           OR            NOT
Input 1   0  0  1  1    0  0  1  1    0  1
Input 2   0  1  0  1    0  1  0  1
Output    0  0  0  1    0  1  1  1    1  0

Simple Networks [Figure: a network computing AND: bias input -1 with weight W = 1.5, inputs x and y each with weight W = 1, threshold t = 0.0]
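Reading the figure as a threshold unit, the output is Step_0(-1*1.5 + x*1 + y*1), which is 1 exactly when x = y = 1, i.e. AND. A quick check in Python (assuming weight 1 on both x and y):

    def net(x, y):
        # bias input fixed at -1 with weight 1.5; threshold t = 0.0
        s = (-1 * 1.5) + (x * 1) + (y * 1)
        return 1 if s >= 0.0 else 0

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, '->', net(x, y))  # prints the AND truth table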

Perceptron Synonym for single-layer, feed-forward network. First studied in the 1950s. Other networks were known about, but the perceptron was the only one capable of learning, and thus all research was concentrated in this area.

Perceptron A single weight only affects one output, so we can restrict our investigation to a model with a single output unit. Notation can then be simpler, i.e. O = Step_0(Σ_j W_j I_j)

What can perceptrons represent? [Figure: the four input points (0,0), (0,1), (1,0), (1,1) plotted for AND and for XOR; a single straight line separates the 1-outputs from the 0-outputs for AND, but no such line exists for XOR] Functions which can be separated in this way are called linearly separable. Only linearly separable functions can be represented by a perceptron.

Training a perceptron Aim: the AND function.

AND
Input 1   0  0  1  1
Input 2   0  1  0  1
Output    0  0  0  1

Training a perceptron [Figure: initial network: bias input -1 with W = 0.3, input x with W = 0.5, input y with W = -0.4, threshold t = 0.0]

I1   I2   I3   Summation                               Output
-1    0    0   (-1*0.3) + (0*0.5) + (0*-0.4) = -0.3    0
-1    0    1   (-1*0.3) + (0*0.5) + (1*-0.4) = -0.7    0
-1    1    0   (-1*0.3) + (1*0.5) + (0*-0.4) =  0.2    1
-1    1    1   (-1*0.3) + (1*0.5) + (1*-0.4) = -0.2    0
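The summation column can be reproduced mechanically (a sketch; the weight list mirrors the figure):

    weights = [0.3, 0.5, -0.4]          # bias, x, y weights from the figure
    for i2, i3 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        inputs = [-1, i2, i3]           # bias input fixed at -1
        s = sum(i * w for i, w in zip(inputs, weights))
        print(inputs, round(s, 1), 1 if s >= 0.0 else 0)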

Learning

While epoch produces an error
    Present network with next inputs from epoch
    Err = T - O
    If Err <> 0 Then
        W_j = W_j + LR * I_j * Err
    End If
End While

Learning Epoch: presentation of the entire training set to the neural network. In the case of the AND function, an epoch consists of four sets of inputs being presented to the network (i.e. [0,0], [0,1], [1,0], [1,1]).

Learning Training value, T: when we are training a network we present it not only with the input but also with a value that we require the network to produce. For example, if we present the network with [1,1] for the AND function, the training value will be 1.

Learning Error, Err: the amount by which the value output by the network differs from the training value. For example, if we required the network to output 0 and it output a 1, then Err = -1.

Learning Output from neuron, O: the output value from the neuron. I_j: inputs being presented to the neuron. W_j: weight from input neuron (I_j) to the output neuron. LR: the learning rate. This dictates how quickly the network converges. It is set by experimentation; it is typically 0.1.
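Putting the pieces together, here is a runnable Python version of the learning loop, trained on the AND function with the initial weights and learning rate from these slides (function and variable names are assumptions):

    def train_perceptron(examples, lr=0.1, w=(0.3, 0.5, -0.4)):
        w = list(w)
        while True:                           # loop until an error-free epoch
            error_in_epoch = False
            for inputs, t in examples:        # one epoch
                o = 1 if sum(i * wi for i, wi in zip(inputs, w)) >= 0.0 else 0
                err = t - o
                if err != 0:
                    error_in_epoch = True
                    # W_j = W_j + LR * I_j * Err
                    w = [wi + lr * i * err for i, wi in zip(inputs, w)]
            if not error_in_epoch:
                return w

    # AND, with the bias input -1 prepended to each input pair
    AND = [([-1, 0, 0], 0), ([-1, 0, 1], 0), ([-1, 1, 0], 0), ([-1, 1, 1], 1)]
    print(train_perceptron(AND))  # converged weights implementing AND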

Learning [Figure: two plots of the four input points in the (I1, I2) plane, one after the first epoch and one at convergence; at convergence the decision boundary separates (1,1) from the other three points]