Networks of McCulloch-Pitts Neurons

Lecture 4: Networks of McCulloch-Pitts Neurons

The McCulloch and Pitts (M-P) Neuron

The M-P neuron receives inputs x_1, ..., x_n, weights them by the synaptic weights w_1, ..., w_n, and produces the output

    y = sgn( Σ_{i=1}^{n} w_i x_i − θ )

where θ is the neuron's threshold and sgn(·) returns 1 for a non-negative argument and 0 otherwise.
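As a concrete illustration (not from the original slides), here is a minimal Python sketch of the M-P neuron, assuming the convention above that sgn returns 1 for a non-negative argument and 0 otherwise:

```python
import numpy as np

def mp_neuron(x, w, theta):
    """McCulloch-Pitts neuron: output 1 when the weighted sum of the
    inputs reaches the threshold theta, and 0 otherwise."""
    return 1 if np.dot(w, x) - theta >= 0 else 0

# A 2-input neuron with unit weights and threshold 1.5 fires
# only when both inputs are 1 (i.e., it computes AND).
print(mp_neuron([1, 1], [1.0, 1.0], 1.5))  # -> 1
print(mp_neuron([1, 0], [1.0, 1.0], 1.5))  # -> 0
```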

Networks of M-P Neurons

One neuron can't do much on its own, but a network of these neurons can. When the i-th neuron feeds the j-th neuron, the i-j synapse carries the signal x_ij = w_ij y_i, where the output of the i-th neuron is

    y_i = sgn( Σ_k w_ki x_ki − θ_i )

[Figure: the i-th neuron connected to the j-th neuron through the i-j synapse.]

Networks of M-P Neurons

We can connect several McCulloch-Pitts neurons together, as follows:

[Figure: one input layer of neurons feeding forward to one output layer.]

An arrangement of one input layer of McCulloch-Pitts neurons feeding forward to one output layer of McCulloch-Pitts neurons, as above, is known as a Perceptron.
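A one-layer feed-forward arrangement like this can be evaluated as a single matrix operation. The following fragment is an illustration, not code from the slides; one row of W and one entry of theta belong to each output neuron:

```python
import numpy as np

def mp_layer(x, W, theta):
    """One feed-forward layer of M-P neurons: row i of W holds the
    synaptic weights of output neuron i, theta[i] its threshold."""
    return (W @ x - theta >= 0).astype(int)

# Two output neurons reading the same two inputs:
W = np.array([[1.0, 1.0],   # with theta 1.5 this neuron computes AND
              [1.0, 1.0]])  # with theta 0.5 this neuron computes OR
theta = np.array([1.5, 0.5])
print(mp_layer(np.array([1, 0]), W, theta))  # -> [0 1]
```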

Implementing Logic Gates with M-P Neurons

According to the properties of the McCulloch-Pitts neuron, we can use it to implement the basic logic gates:

NOT
  in | out
   0 |  1
   1 |  0

AND
  in1 in2 | out
   0   0  |  0
   0   1  |  0
   1   0  |  0
   1   1  |  1

OR
  in1 in2 | out
   0   0  |  0
   0   1  |  1
   1   0  |  1
   1   1  |  1

What should we do to implement or realize a logic gate (NOT/AND/OR) by a neural network?

Implementing Logic Gates with M-P Neurons

All we need to do is find the appropriate synaptic (connection) weights and neuron thresholds to produce the right output for each set of inputs. Two solutions can be introduced for this problem:

1. Analytical approach
2. Learning algorithms

Find Weights Analytically for the NOT Gate

y = sgn( w x − θ ). From the truth table:

    in = 0:  sgn( −θ )    = 1
    in = 1:  sgn( w − θ ) = 0

So, for example: w = −1, θ = −0.5.

Find Weights Analytically for the AND Gate

y = sgn( w_1 x_1 + w_2 x_2 − θ ). From the truth table:

    (0,0):  sgn( −θ )            = 0
    (0,1):  sgn( w_2 − θ )       = 0
    (1,0):  sgn( w_1 − θ )       = 0
    (1,1):  sgn( w_1 + w_2 − θ ) = 1

So, for example: w_1 = w_2 = 1, θ = 1.5.
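These choices are easy to check in code. The sketch below (using the weight and threshold values assumed above) verifies both truth tables:

```python
def sgn(s):
    """Threshold activation: 1 for s >= 0, else 0."""
    return 1 if s >= 0 else 0

# NOT gate with w = -1, theta = -0.5
for x in (0, 1):
    print("NOT", x, "->", sgn(-1 * x - (-0.5)))    # 0 -> 1, 1 -> 0

# AND gate with w1 = w2 = 1, theta = 1.5
for x1, x2 in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print("AND", x1, x2, "->", sgn(x1 + x2 - 1.5))  # only (1,1) -> 1
```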

Find Weights Analytically for the XOR Gate

y = sgn( w_1 x_1 + w_2 x_2 − θ ). From the truth table:

XOR
  in1 in2 | out
   0   0  |  0
   0   1  |  1
   1   0  |  1
   1   1  |  0

    (0,0):  sgn( −θ )            = 0
    (0,1):  sgn( w_2 − θ )       = 1
    (1,0):  sgn( w_1 − θ )       = 1
    (1,1):  sgn( w_1 + w_2 − θ ) = 0

But these equations are not compatible with each other: the first three require θ > 0, w_1 ≥ θ and w_2 ≥ θ, which forces w_1 + w_2 ≥ 2θ > θ, contradicting the last one.

Find Weights Analytically for the XOR Gate

What is the solution?

[Figure: the XOR patterns in the x_1-x_2 plane; no single line separates the two classes.]

New questions: How can we compute the weights and thresholds? Is the analytical solution reasonable and practical, or not?
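The inequalities above already prove the point. As a purely illustrative sanity check (not from the slides), a coarse grid search over w_1, w_2 and θ also fails to find any single neuron that realizes XOR:

```python
import itertools
import numpy as np

def sgn(s):
    return 1 if s >= 0 else 0

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = np.linspace(-2.0, 2.0, 41)   # coarse grid over w1, w2, theta

found = any(
    all(sgn(w1 * x1 + w2 * x2 - th) == out for (x1, x2), out in xor.items())
    for w1, w2, th in itertools.product(grid, grid, grid)
)
print(found)  # -> False: no combination on this grid realizes XOR
```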

A New Idea: Learning Algorithm

Linearly separable problems:

[Figure: decision lines separating the 0 and 1 patterns for the NOT and AND gates.]

A New Idea: Learning Algorithm

Why are single-layer neural networks capable of solving linearly separable problems? Because the neuron

    y = sign( w_1 x_1 + w_2 x_2 − θ )

implements a linear decision boundary: patterns with Σ_i w_i x_i − θ ≥ 0 lie on one side of the line Σ_i w_i x_i − θ = 0 and are mapped to y = 1, while patterns with Σ_i w_i x_i − θ < 0 lie on the other side and are mapped to y = 0.

Learning Algorithm

What is the goal of the learning algorithm? We need a learning algorithm which updates the weights w_i(t) so that, at the end of the learning process, the input patterns of the two classes lie on opposite sides of the line decided by the Perceptron.

[Figure: Step 1, Step 2, Step 3 — the decision line moving toward a separating position as learning proceeds.]

Learning Algorithm

Perceptron Learning Rule:

    w(t+1) = w(t) + η(t) [ d(t) − sign( w^T(t) x(t) ) ] x(t)

Desired output:

    d(t) = 1 if x(t) is in class 1
    d(t) = 0 if x(t) is in class 2

η(t) > 0: learning rate.
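One application of this rule is a one-liner. The sketch below (assuming the 0/1 sign convention used throughout these slides) performs a single weight update:

```python
import numpy as np

def perceptron_update(w, x, d, eta=0.1):
    """One step of the Perceptron learning rule:
    w(t+1) = w(t) + eta(t) * [d(t) - sign(w(t)^T x(t))] * x(t)."""
    y = 1 if w @ x >= 0 else 0      # actual response of the neuron
    return w + eta * (d - y) * x    # no change when y == d

w = np.array([0.2, -0.4])
print(perceptron_update(w, np.array([1.0, 1.0]), d=1))  # -> [0.3 -0.3]
```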

Preparing the Perceptron for Learning

Absorb the bias into the weight vector by augmenting the input with a constant component:

    x(t) = [ x_1(t), x_2(t), ..., x_n(t), 1 ]^T
    w(t) = [ w_1(t), w_2(t), ..., w_n(t), b(t) ]^T

b(t): bias; y(t): actual response of the N.N.

Preparing the Perceptron for Learning

Training data:

    { x(1), d(1) }, { x(2), d(2) }, ..., { x(p), d(p) }

and the update rule, as before:

    w(t+1) = w(t) + η(t) [ d(t) − sign( w^T(t) x(t) ) ] x(t)
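In code, the augmentation is just an appended constant. A small illustrative helper (the name is my own):

```python
import numpy as np

def augment(x):
    """Append a constant 1 so that the bias b rides along as the
    last component of the weight vector."""
    return np.append(x, 1.0)

x = np.array([0.0, 1.0])
print(augment(x))   # -> [0. 1. 1.]; used with w = [w1, w2, b]
```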

Learning Algorithm

1. Initialization: set w(0) = rand. Then perform the following computations for time steps t = 1, 2, ...
2. Activation: at time step t, activate the Perceptron by applying the input vector x(t) and the desired response d(t).
3. Computation of the actual response of the N.N.: compute the actual response of the Perceptron, y(t) = sign( w^T(t) x(t) ).
4. Adaptation of the weight vector: update the weight vector of the Perceptron, w(t+1) = w(t) + η(t) [ d(t) − y(t) ] x(t).
5. Continuation: increment t and return to step 2.

[Figure: block diagram of the Perceptron with input x(t), bias b, actual response y(t), and desired response d(t).]

Learning Algorithm

Where or when to stop? There are two approaches to stopping the learning process:

1. When the generalized error converges to a zero (or constant) value.
2. After repeating the learning process for a predefined number of iterations.

Given the training data { x(1), d(1) }, ..., { x(p), d(p) }, the generalized error is

    G.E. = Σ_{t=1}^{p} [ d(t) − sign( w^T x(t) ) ]^2
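Putting steps 1-5 and the stopping criterion together, here is a minimal sequential-mode training loop in Python (an illustrative sketch; the function name and defaults are my own), using augmented patterns with the bias as the last weight:

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, max_epochs=100, seed=0):
    """Sequential Perceptron training. X holds one augmented pattern
    per row (last column = 1 for the bias); d holds the 0/1 targets."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, X.shape[1])        # 1. initialization
    for _ in range(max_epochs):
        for x, target in zip(X, d):               # 2. activation
            y = 1 if w @ x >= 0 else 0            # 3. actual response
            w = w + eta * (target - y) * x        # 4. adaptation
        ge = sum((t - (1 if w @ x >= 0 else 0)) ** 2 for x, t in zip(X, d))
        if ge == 0:                               # generalized error = 0
            break                                 # 5. stop, else continue
    return w

# Learn the AND gate (a linearly separable task):
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
d = np.array([0, 0, 0, 1])
print(train_perceptron(X, d))
```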

Training Types

Two types of network training:

- Sequential mode (on-line, stochastic, or per-pattern): weights are updated after each pattern is presented (the Perceptron is in this class).
- Batch mode (off-line or per-epoch): weights are updated after all the patterns in an epoch have been presented (a batch-mode sketch follows the project list below).

1st Mini Project

1. Using the Perceptron learning rule, generate a N.N. to represent a NOT gate.
2. Using the Perceptron learning rule, generate a N.N. to represent an AND gate (sequential and batch mode).
3. Using the Perceptron learning rule, generate a N.N. to represent an XOR gate (sequential and batch mode).
4. Show that the generalized error converges to a constant value over the course of learning.
5. Test the above N.N.s on testing data.
6. Check the above N.N.s on data to which noise has been added.
7. Repeat the learning process for the above N.N.s both with and without bias (parts 1 & 2).
8. Plot the updated weights.
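For comparison with the sequential loop above, this is a minimal batch-mode variant (again an illustrative sketch, not code from the slides): the corrections from all patterns are accumulated and applied once per epoch.

```python
import numpy as np

def train_perceptron_batch(X, d, eta=0.1, max_epochs=100, seed=0):
    """Batch-mode Perceptron training: one weight update per epoch,
    built from the responses to all patterns in that epoch."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, X.shape[1])
    for _ in range(max_epochs):
        y = (X @ w >= 0).astype(float)   # responses to every pattern
        if np.all(y == d):               # zero generalized error: stop
            break
        w = w + eta * (d - y) @ X        # accumulated correction
    return w
```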