CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning


Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains a classification (class) attribute. We also say that the class label is known for all data. DATA is divided, as in any classification problem, into TRAINING and TEST data sets.

Neural Networks Classifier ALL DATA must be normalized, i.e. all values of attributes in the dataset have to be changed to contain values in the interval [0,1] or [-1,1]. TWO BASIC normalization techniques: Max-Min normalization and Decimal Scaling normalization.

Data Normalization Max-Min Normalization performs a linear transformation on the original data. Given an attribute A, we denote by min_A, max_A the minimum and maximum values of the attribute A. Max-Min normalization maps a value v of A to v' in the range [new_min_A, new_max_A] as follows.

Data Normalization Max-Min normalization formula is as follows: v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A Example: we want to normalize data to the range of the interval [-1,1]. We put: new_max_A = 1, new_min_A = -1. In general, to normalize within the interval [a,b], we put: new_max_A = b, new_min_A = a.

Decimal Scaling Normalization Normalization by decimal scaling normalizes by moving the decimal point of values of attribute A. A value v of A is normalized to v' by computing v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1.

Neural Network Neural Network is a set of connected INPUT/OUTPUT UNITS, where each connection has a WEIGHT associated with it. Neural Network learning is also called CONNECTIONIST learning due to the connections between units. It is a case of SUPERVISED, INDUCTIVE or CLASSIFICATION learning.

Neural Network Neural Network learns by adjusting the weights so as to be able to correctly classify the training data and hence, after the testing phase, to classify unknown data. Neural Network needs a long time for training. Neural Network has a high tolerance to noisy and incomplete data.

Neural Network Learning Learning is performed by a backpropagation algorithm. The inputs are fed simultaneously into the input layer. The weighted outputs of these units are, in turn, fed simultaneously into a layer of neuron-like units, known as a hidden layer. The hidden layer's weighted outputs can be input to another hidden layer, and so on. The number of hidden layers is arbitrary, but in practice usually one or two are used. The weighted outputs of the last hidden layer are inputs to units making up the output layer.

A Multilayer Feed-Forward (MLFF) Neural Network [Diagram: the input vector (record x_i) feeds the input nodes; weighted links w_ij connect them to the hidden nodes O_j, and weighted links w_jk connect the hidden nodes to the output nodes O_k, which produce the output vector of classes. The network is fully connected.]

MLFF Neural Network The units in the hidden layers and the output layer are sometimes referred to as neurodes, due to their symbolic biological basis, or as output units. The multilayer neural network shown on the previous slide has two layers of output units. Therefore, we say that it is a two-layer neural network.

MLFF Neural Network A network containing two hidden layers is called a three-layer neural network, and so on. The network is feed-forward in that none of the weights cycles back to an input unit or to an output unit of a previous layer.

MLFF Neural Network [The MLFF network diagram from the earlier slide is shown again: input vector (record x_i), input nodes, hidden nodes O_j, output nodes O_k, output vector of classes; the network is fully connected.]

MLFF Network INPUT: records without the class attribute, with normalized attribute values. We call it an input vector. INPUT VECTOR: X = {x1, x2, ..., xn}, where n is the number of (non-class) attributes. INPUT LAYER: there are as many nodes as non-class attributes, i.e. as the length of the input vector. HIDDEN LAYER: the hidden nodes are O_j, j = 1, 2, ..., #hidden nodes; the number of nodes in the hidden layer and the number of hidden layers depend on the implementation.

A Multilayered Feed Forward Network OUTPUT LAYER: corresponds to the class attribute. There are as many nodes as classes (values of the class attribute): O_k, k = 1, 2, ..., #classes. The network is fully connected, i.e. each unit provides input to each unit in the next forward layer.

Classification by Backpropagation Backpropagation is a neural network learning algorithm. It learns by iteratively processing a set of training data (samples), comparing the network's classification of each record (sample) with the actual known class label (classification). For each training sample, the weights are modified so as to minimize the mean squared error between the network's classification (prediction) and the actual classification (class).

Steps in Backpropagation Algorithm STEP ONE: initialize the weights and biases. The weights in the network are initialized to small random numbers ranging, for example, from -1.0 to 1.0, or -0.5 to 0.5. Each unit has a BIAS associated with it (see next slide). The biases are similarly initialized to small random numbers. STEP TWO: feed the training sample. STEP THREE: propagate the inputs forward; we compute the net input and output of each unit in the hidden and output layers.

Output Unit [Diagram: inputs x_0, x_1, ..., x_n with weights w_0, w_1, ..., w_n feed a weighted sum, to which the bias Θ is added; the activation function f produces the output y.] The inputs to the unit are outputs from the previous layer. These are multiplied by their corresponding weights in order to form a weighted sum, which is added to the bias associated with the unit.

Step Three: propagate the inputs forward For a unit in the input layer, its output is equal to its input, that is, O_j = I_j for input unit j. The net input to each unit in the hidden and output layers is computed as follows. Given a unit j in a hidden or output layer, the net input is I_j = Σ_i w_ij O_i + θ_j, where w_ij is the weight of the connection from unit i in the previous layer to unit j; O_i is the output of unit i from the previous layer; and θ_j is the bias of unit j.

Step Three: propagate the inputs forward Each unit in the hidden and output layers takes its net input and then applies an activation function. The function symbolizes the activation of the neuron represented by the unit. It is also called a logistic, sigmoid, or squashing function. Given the net input I_j to unit j, then O_j = f(I_j), the output of unit j, is computed as O_j = 1 / (1 + e^{-I_j}).
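
A minimal Python sketch of this forward step (the notes contain no code; function and variable names here are illustrative, not from the lecture):

```python
import math

def sigmoid(I):
    """Logistic (sigmoid) activation: O = 1 / (1 + e^(-I))."""
    return 1.0 / (1.0 + math.exp(-I))

def forward_layer(outputs_prev, weights, biases):
    """Propagate the previous layer's outputs through one layer.

    weights[i][j] is w_ij, the weight from unit i (previous layer) to unit j;
    biases[j] is theta_j, the bias of unit j.
    Returns the list of outputs O_j = sigmoid(I_j) for the layer.
    """
    layer_outputs = []
    for j in range(len(biases)):
        # Net input: I_j = sum_i w_ij * O_i + theta_j
        I_j = sum(outputs_prev[i] * weights[i][j] for i in range(len(outputs_prev))) + biases[j]
        layer_outputs.append(sigmoid(I_j))
    return layer_outputs

# Example: one hidden unit taking two inputs
print(forward_layer([0.3, 0.8], weights=[[0.5], [0.5]], biases=[0.0]))  # [sigmoid(0.55)] ≈ [0.634]
```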

Backpropagation Formulas [Diagram summarizing the network together with its formulas:]
Net input: I_j = Σ_i w_ij O_i + θ_j
Output: O_j = 1 / (1 + e^{-I_j})
Error at an output unit: Err_k = O_k (1 - O_k)(T_k - O_k)
Error at a hidden unit: Err_j = O_j (1 - O_j) Σ_k Err_k w_jk
Weight update: w_ij = w_ij + (l) Err_j O_i
Bias update: θ_j = θ_j + (l) Err_j

References
Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques (Chapter 7.5), Morgan Kaufmann Publishers, 2002.
www.cs.vu.nl/~elena/slides03/nn_1light.ppt
Xin Yao, Evolving Artificial Neural Networks, http://www.cs.bham.ac.uk/~xin/papers/published_iproc_sep99.pdf
informatics.indiana.edu/larryy/talks/s4.matti.eann.ppt
www.cs.appstate.edu/~can/classes/5100/Presentations/DataMining1.ppt
www.comp.nus.edu.sg/~cs6211/slides/blondie24.ppt
www.public.asu.edu/~svadrevu/umd/thesistalk.ppt
www.ctrl.cinvestav.mx/~yuw/file/afnn1_nnintro.ppt

Overview: Basics of Neural Network; Advanced Features of Neural Network; Applications; Summary

Basics of Neural Network: What is a Neural Network; Neural Network Classifier; Data Normalization; Neuron and bias of a neuron; Single Layer Feed Forward; Limitation; Multi Layer Feed Forward; Back propagation

Neural Networks What is a Neural Network? A biologically motivated approach to machine learning, with similarity to biological networks. The fundamental processing element of a neural network is a neuron: 1. Receives inputs from other sources 2. Combines them in some way 3. Performs a generally nonlinear operation on the result 4. Outputs the final result

Similarity with Biological Network The fundamental processing element of a neural network is a neuron. A human brain has 100 billion neurons; an ant brain has 250,000 neurons.

Synapses: the basis of learning and memory

Neural Network Neural Network is a set of connected INPUT/OUTPUT UNITS, where each connection has a WEIGHT associated with it. Neural Network learning is also called CONNECTIONIST learning due to the connections between units. It is a case of SUPERVISED or INDUCTIVE CLASSIFICATION learning.

Neural Network Neural Network learns by adjusting the weights so as to be able to correctly classify the training data and hence, after the testing phase, to classify unknown data. Neural Network needs a long time for training. Neural Network has a high tolerance to noisy and incomplete data.

Neural Network Classifier Input: classification data; it contains a classification attribute. Data is divided, as in any classification problem, into Training data and Testing data. All data must be normalized (i.e. all values of attributes in the database are changed to contain values in the interval [0,1] or [-1,1]). Neural Network can work with data in the range (0,1) or (-1,1). Two basic normalization techniques: [1] Max-Min normalization [2] Decimal Scaling normalization

Data Normalization [1] Max-Min normalization formula is as follows: v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A [min_A, max_A are the minimum and maximum values of the attribute A; Max-Min normalization maps a value v of A to v' in the range [new_min_A, new_max_A]]

Example of Max-Min Normalization Max-Min normalization formula: v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A Example: We want to normalize data to the range of the interval [0,1]. We put: new_max_A = 1, new_min_A = 0. Say max_A was 100 and min_A was 20 (that means the maximum and minimum values for the attribute). Now, if v = 40 (if for this particular pattern the attribute value is 40), v' will be calculated as v' = (40 - 20) x (1 - 0) / (100 - 20) + 0 => v' = 20 x 1/80 => v' = 0.25

Decimal Scaling Normalization [2] Decimal Scaling Normalization Normalization by decimal scaling normalizes by moving the decimal point of values of attribute A: v' = v / 10^j Here j is the smallest integer such that max(|v'|) < 1. Example: A's values range from -986 to 917. Max |v| = 986. v = -986 normalizes to v' = -986/1000 = -0.986
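
A small Python sketch of both normalization techniques (function names are illustrative, not from the notes); it reproduces the two worked examples above:

```python
def max_min_normalize(v, min_a, max_a, new_min_a=0.0, new_max_a=1.0):
    """Max-Min normalization: linearly map v from [min_a, max_a] to [new_min_a, new_max_a]."""
    return (v - min_a) / (max_a - min_a) * (new_max_a - new_min_a) + new_min_a

def decimal_scaling_normalize(values):
    """Decimal scaling: divide by 10^j, where j is the smallest integer with max(|v'|) < 1."""
    j = 0
    while max(abs(v) for v in values) / (10 ** j) >= 1:
        j += 1
    return [v / (10 ** j) for v in values]

# Worked examples from the slides:
print(max_min_normalize(40, min_a=20, max_a=100))   # 0.25
print(decimal_scaling_normalize([-986, 917]))       # [-0.986, 0.917]
```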

One Neuron as a Network Here x1 and x2 are normalized attribute values of the data. y is the output of the neuron, i.e. the class label. The x1 and x2 values, multiplied by the weight values w1 and w2, are the input to the neuron x. The value of x1 is multiplied by the weight w1, and the value of x2 is multiplied by the weight w2. Given that w1 = 0.5 and w2 = 0.5, say the value of x1 is 0.3 and the value of x2 is 0.8. The weighted sum is: sum = w1 x x1 + w2 x x2 = 0.5 x 0.3 + 0.5 x 0.8 = 0.55

One Neuron as a Network The neuron receives the weighted sum as input and calculates the output as a function of the input as follows: y = f(x), where f(x) is defined as f(x) = 0 when x < 0.5 and f(x) = 1 when x >= 0.5. For our example, x (the weighted sum) is 0.55, so y = 1. That means the corresponding input attribute values are classified in class 1. If for another input x = 0.45, then f(x) = 0, so we could conclude that those input values are classified to class 0.
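
The single threshold neuron above can be written as a few lines of Python (an illustrative sketch, not part of the notes), reproducing the 0.55 example:

```python
def threshold_neuron(x1, x2, w1=0.5, w2=0.5, threshold=0.5):
    """Single neuron: weighted sum followed by a step (threshold) activation."""
    weighted_sum = w1 * x1 + w2 * x2
    return 1 if weighted_sum >= threshold else 0

print(threshold_neuron(0.3, 0.8))  # weighted sum 0.55 -> class 1
print(threshold_neuron(0.3, 0.6))  # weighted sum 0.45 -> class 0
```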

Bias of a Neuron We need the bias value to be added to the weighted sum Σ_i w_i x_i so that we can transform it from the origin: v = Σ_i w_i x_i + b, where b is the bias. [Plot: the shifted decision lines x1 - x2 = -1, x1 - x2 = 0, and x1 - x2 = 1 in the (x1, x2) plane.]

Bias as extra input [Diagram: the bias can be treated as an extra input x_0 = +1 with weight w_0 = b. The input attribute values x_1, ..., x_m with weights w_1, ..., w_m feed a summing function v = Σ_{j=0}^{m} w_j x_j, followed by an activation function φ(·) that produces the output class y.]

Neuron with Activation The neuron is the basic information processing unit of a NN. It consists of: 1. A set of links, describing the neuron inputs, with weights W_1, W_2, ..., W_m 2. An adder function (linear combiner) for computing the weighted sum of the inputs (real numbers): u = Σ_{j=1}^{m} w_j x_j 3. An activation function for limiting the amplitude of the neuron output: y = φ(u + b)

Why We Need a Multi Layer Model? [Diagram: a linearly separable problem (a single line separates the two classes in the (x, y) plane) versus a linearly inseparable problem such as XOR, where no single line works; the solution is a multilayer network.]
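
To make the point concrete, here is an illustrative hand-built two-layer network of threshold units that computes XOR (the particular weights and thresholds are my own example, not from the slides); no single-layer unit can do this because XOR is not linearly separable:

```python
def step(v, threshold=0.5):
    """Step activation: fires (1) when the net input reaches the threshold."""
    return 1 if v >= threshold else 0

def xor_two_layer(x1, x2):
    """XOR via two hidden threshold units: XOR = OR and not AND."""
    h_or = step(x1 + x2, threshold=0.5)    # fires for (0,1), (1,0), (1,1)
    h_and = step(x1 + x2, threshold=1.5)   # fires only for (1,1)
    return step(h_or - h_and, threshold=0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_two_layer(a, b))   # prints 0, 1, 1, 0
```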

A Multilayer Feed-Forward Neural Network [Diagram: the input record x_i feeds the input nodes (as many as attributes); weighted links w_ij connect them to the hidden nodes O_j, and weighted links w_jk connect the hidden nodes to the output nodes O_k, which produce the output class. The network is fully connected.]

Neural Network Learning The inputs are fed simultaneously into the input layer. The weighted outputs of these units are fed into the hidden layer. The weighted outputs of the last hidden layer are inputs to units making up the output layer.

A Multilayer Feed Forward Network The units in the hidden layers and output layer are sometimes referred to as neurodes, due to their symbolic biological basis, or as output units. A network containing two hidden layers is called a three-layer neural network, and so on. The network is feed-forward in that none of the weights cycles back to an input unit or to an output unit of a previous layer.

A Multilayered Feed Forward Network INPUT: records without the class attribute, with normalized attribute values. INPUT VECTOR (record): X = (x1, x2, ..., xn), where n is the number of (non-class) attributes. INPUT LAYER: there are as many nodes as non-class attributes, i.e. as the length of the input vector. HIDDEN LAYER: the number of nodes in the hidden layer and the number of hidden layers depend on the implementation.

A Multilayered Feed Forward Network OUTPUT LAYER: corresponds to the class attribute. There are as many nodes as classes (values of the class attribute): O_k, k = 1, 2, ..., #classes. The network is fully connected, i.e. each unit provides input to each unit in the next forward layer.

Classification by Back Propagation Back Propagation learns by iteratively processing a set of training data (samples). For each sample, weights are modified to minimize the error between the network's classification and the actual classification.

Steps in Back propagation Algorithm STEP ONE: initialize the weights and biases. The weights in the network are initialized to random numbers from the interval [-1,1]. Each unit has a BIAS associated with it. The biases are similarly initialized to random numbers from the interval [-1,1]. STEP TWO: feed the training sample.

Steps in Back propagation Algorithm STEP THREE: Propagate the inputs forward; we compute the net input and output of each unit in the hidden and output layers. STEP FOUR: back propagate the error. STEP FIVE: update weights and biases to reflect the propagated errors. STEP SIX: terminating conditions.

Propagation through Hidden Layer [Diagram: inputs x_0, x_1, ..., x_n with weights w_0, w_1, ..., w_n feed a weighted sum, to which the bias Θ is added; the activation function f produces the output y.] The inputs to the unit are outputs from the previous layer. These are multiplied by their corresponding weights in order to form a weighted sum, which is added to the bias associated with the unit. A nonlinear activation function f is applied to the net input.

Propagate the inputs forward For a unit in the input layer, its output is equal to its input, that is, O_j = I_j for input unit j. The net input to each unit in the hidden and output layers is computed as follows. Given a unit j in a hidden or output layer, the net input is I_j = Σ_i w_ij O_i + θ_j, where w_ij is the weight of the connection from unit i in the previous layer to unit j; O_i is the output of unit i from the previous layer; and θ_j is the bias of unit j.

Propagate the inputs forward Each unit in the hidden and output layers takes its net input and then applies an activation function. The function symbolizes the activation of the neuron represented by the unit. It is also called a logistic, sigmoid, or squashing function. Given the net input I_j to unit j, then O_j = f(I_j), the output of unit j, is computed as O_j = 1 / (1 + e^{-I_j}).

Back propagate the error When reaching the output layer, the error is computed and propagated backwards. For a unit k in the output layer the error is computed by the formula: Err_k = O_k (1 - O_k)(T_k - O_k), where O_k is the actual output of unit k (computed by the activation function O_k = 1 / (1 + e^{-I_k})), T_k is the true output based on the known class label (the classification of the training sample), and O_k(1 - O_k) is the derivative (rate of change) of the activation function.

Back propagate the error The error is propagated backwards by updating weights and biases to reflect the error of the network's classification. For a unit j in the hidden layer the error is computed by the formula: Err_j = O_j (1 - O_j) Σ_k Err_k w_jk, where w_jk is the weight of the connection from unit j to unit k in the next higher layer, and Err_k is the error of unit k.

Update weights and biases Weights are updated by the following equations, where l is a constant between 0.0 and 1.0 reflecting the learning rate; this learning rate is fixed for the implementation: Δw_ij = (l) Err_j O_i, w_ij = w_ij + Δw_ij. Biases are updated by the following equations: Δθ_j = (l) Err_j, θ_j = θ_j + Δθ_j.
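
A minimal Python sketch of the error back-propagation and the weight/bias updates on this and the previous two slides (names and the learning rate value are illustrative; l = 0.9 matches the later worked example):

```python
def output_errors(outputs, targets):
    """Err_k = O_k (1 - O_k)(T_k - O_k) for each output unit k."""
    return [o * (1 - o) * (t - o) for o, t in zip(outputs, targets)]

def hidden_errors(hidden_outputs, weights_next, errors_next):
    """Err_j = O_j (1 - O_j) * sum_k Err_k w_jk for each hidden unit j.

    weights_next[j][k] is w_jk, the weight from hidden unit j to unit k
    in the next higher layer.
    """
    errs = []
    for j, o in enumerate(hidden_outputs):
        backprop_sum = sum(errors_next[k] * weights_next[j][k] for k in range(len(errors_next)))
        errs.append(o * (1 - o) * backprop_sum)
    return errs

def update_layer(weights, biases, outputs_prev, errors, l=0.9):
    """Case updating: w_ij += l * Err_j * O_i and theta_j += l * Err_j."""
    for i in range(len(weights)):
        for j in range(len(errors)):
            weights[i][j] += l * errors[j] * outputs_prev[i]
    for j in range(len(errors)):
        biases[j] += l * errors[j]

# Example: one output unit with output 0.475 and target 1
print(output_errors([0.475], [1]))  # [0.475 * 0.525 * 0.525] ≈ [0.1309]
```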

Update weights and biases We are updating weights and biases after the presentation of each sample (record). This is called case updating. Epoch: one iteration through the training set is called an epoch. Epoch updating: alternatively, the weight and bias increments could be accumulated in variables, and the weights and biases updated after all of the samples of the training set have been presented. Case updating is more accurate.

Terminating Conditions Training stops when: all Δw_ij in the previous epoch are below some threshold, or the percentage of samples misclassified in the previous epoch is below some threshold, or a pre-specified number of epochs has expired. In practice, several hundreds of thousands of epochs may be required before the weights converge.
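
The three terminating conditions can be sketched as an outer training loop in Python (illustrative only; train_one_epoch below is a toy stand-in, since the per-epoch weight updates are defined on the earlier slides):

```python
def train(train_one_epoch, max_epochs=100000, weight_delta_threshold=1e-5, misclassified_threshold=0.01):
    """Run epochs until one of the three terminating conditions holds.

    train_one_epoch() is assumed to run one pass over the training set and
    return (largest absolute weight change, fraction of samples misclassified).
    """
    for epoch in range(1, max_epochs + 1):
        max_delta_w, misclassified = train_one_epoch()
        if max_delta_w < weight_delta_threshold:      # all weight changes below threshold
            return epoch
        if misclassified < misclassified_threshold:   # few enough misclassifications
            return epoch
    return max_epochs                                 # pre-specified number of epochs expired

# Toy stand-in: pretend the weight changes shrink each epoch.
deltas = iter([0.1, 0.01, 0.001, 0.000001])
print(train(lambda: (next(deltas), 0.5)))  # stops after 4 epochs
```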

Back Propagation Formulas [Diagram summarizing the network together with its formulas:]
Net input: I_j = Σ_i w_ij O_i + θ_j
Output: O_j = 1 / (1 + e^{-I_j})
Error at an output unit: Err_k = O_k (1 - O_k)(T_k - O_k)
Error at a hidden unit: Err_j = O_j (1 - O_j) Σ_k Err_k w_jk
Weight update: w_ij = w_ij + (l) Err_j O_i
Bias update: θ_j = θ_j + (l) Err_j

Example of Back Propagation Input = 3, Hidden Neurons = 2, Output = 1. Initialize weights: random numbers from -1.0 to 1.0. Initial input and weights:
x1 = 1, x2 = 0, x3 = 1
w14 = 0.2, w15 = -0.3, w24 = 0.4, w25 = 0.1, w34 = -0.5, w35 = 0.2, w46 = -0.3, w56 = -0.2

Example of Back Propagation Bias added to Hidden + Output nodes. Initialize bias: random values from -1.0 to 1.0. Bias (random): θ4 = -0.4, θ5 = 0.2, θ6 = 0.1

Net Input and Output Calculation
Unit 4: net input I_4 = 0.2 + 0 - 0.5 - 0.4 = -0.7; output O_4 = 1 / (1 + e^{0.7}) = 0.332
Unit 5: net input I_5 = -0.3 + 0 + 0.2 + 0.2 = 0.1; output O_5 = 1 / (1 + e^{-0.1}) = 0.525
Unit 6: net input I_6 = (-0.3)(0.332) - (0.2)(0.525) + 0.1 = -0.105; output O_6 = 1 / (1 + e^{0.105}) = 0.475

Calculation of Error at Each Node
Unit 6: Err_6 = 0.475(1 - 0.475)(1 - 0.475) = 0.1311 (we assume T_6 = 1)
Unit 5: Err_5 = 0.525 x (1 - 0.525) x 0.1311 x (-0.2) = -0.0065
Unit 4: Err_4 = 0.332 x (1 - 0.332) x 0.1311 x (-0.3) = -0.0087

Calculation of Weights and Bias Weight updating with learning rate l = 0.9. New values:
w46: -0.3 + 0.9(0.1311)(0.332) = -0.261
w56: -0.2 + (0.9)(0.1311)(0.525) = -0.138
w14: 0.2 + 0.9(-0.0087)(1) = 0.192
w15: -0.3 + (0.9)(-0.0065)(1) = -0.306
... and similarly for the remaining weights.
θ6: 0.1 + (0.9)(0.1311) = 0.218
... and similarly for the remaining biases.
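
The whole worked example can be checked directly in Python; this is just a transcription of the slides' numbers into code (the slides round intermediate values, so the last digits can differ slightly, e.g. O_6 comes out as 0.474 rather than 0.475):

```python
import math

def sigmoid(I):
    """Logistic activation O = 1 / (1 + e^(-I))."""
    return 1.0 / (1.0 + math.exp(-I))

# Inputs and initial weights/biases from the example slides
x1, x2, x3 = 1, 0, 1
w14, w15, w24, w25, w34, w35 = 0.2, -0.3, 0.4, 0.1, -0.5, 0.2
w46, w56 = -0.3, -0.2
theta4, theta5, theta6 = -0.4, 0.2, 0.1
l = 0.9    # learning rate
T6 = 1     # assumed true class label of this training sample

# Forward pass
I4 = w14 * x1 + w24 * x2 + w34 * x3 + theta4   # -0.7
I5 = w15 * x1 + w25 * x2 + w35 * x3 + theta5   #  0.1
O4, O5 = sigmoid(I4), sigmoid(I5)              #  0.332, 0.525
I6 = w46 * O4 + w56 * O5 + theta6              # -0.105
O6 = sigmoid(I6)                               #  ≈ 0.474 (0.475 on the slide)

# Backward pass: errors
Err6 = O6 * (1 - O6) * (T6 - O6)               #  ≈ 0.131
Err5 = O5 * (1 - O5) * Err6 * w56              # -0.0065
Err4 = O4 * (1 - O4) * Err6 * w46              # -0.0087

# Weight and bias updates (case updating)
w46 += l * Err6 * O4                           # -0.261
w56 += l * Err6 * O5                           # -0.138
w14 += l * Err4 * x1                           #  0.192
w15 += l * Err5 * x1                           # -0.306
theta6 += l * Err6                             #  0.218

print(round(O4, 3), round(O5, 3), round(O6, 3))
print(round(w46, 3), round(w56, 3), round(w14, 3), round(w15, 3), round(theta6, 3))
```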

Advanced Features of Neural Network: Training with Subsets; Modular Neural Network; Evolution of Neural Network

Variants of Neural Networks Learning: Supervised learning/classification; Control; Function approximation; Associative memory; Unsupervised learning or Clustering

Training with Subsets Select subsets of data. Build a new classifier on each subset. Aggregate with previous classifiers. Compare the error after adding a classifier. Repeat as long as the error decreases.

Training with subsets [Diagram: the whole dataset is split into subsets that can fit into memory (Subset 1, Subset 2, Subset 3, ..., Subset n); a neural network (NN 1, NN 2, NN 3, ..., NN n) is trained on each subset, and the results are combined into a single neural network model.]
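
A rough Python sketch of the subset-training structure above (illustrative only: the per-subset "network" is replaced by a trivial stand-in classifier so the sketch runs, and majority voting is just one possible way to aggregate; the slides do not specify the aggregation rule):

```python
import random

def train_subset_classifier(subset):
    """Stand-in for training one neural network on one subset of the data.

    Here it is a toy classifier (predicts the majority label of its subset)
    so the overall structure can run without a full network implementation.
    """
    mean_label = sum(label for _, label in subset) / len(subset)
    majority = 1 if mean_label >= 0.5 else 0
    return lambda x: majority

def train_with_subsets(dataset, n_subsets=3):
    """Split the dataset into subsets that fit in memory, train one classifier
    per subset, and aggregate their predictions by majority vote."""
    random.shuffle(dataset)
    size = len(dataset) // n_subsets
    subsets = [dataset[i * size:(i + 1) * size] for i in range(n_subsets)]
    classifiers = [train_subset_classifier(s) for s in subsets]

    def ensemble(x):
        votes = sum(c(x) for c in classifiers)
        return 1 if votes > len(classifiers) / 2 else 0

    return ensemble

data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0),
        ((0.7, 0.9), 1), ((0.2, 0.1), 0), ((0.8, 0.7), 1)]
model = train_with_subsets(data)
print(model((0.5, 0.5)))
```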

Modular Neural Network A modular neural network is made up of a combination of several neural networks. The idea is to reduce the load for each neural network, as opposed to trying to solve the problem on a single neural network.

Evolving Network Architectures Small networks without a hidden layer cannot solve problems such as XOR, which are not linearly separable. Large networks can easily overfit to the training data, limiting their ability to generalize.

Constructive vs Destructive Algorithm Constructive algorithms take a minimal network and build up new layers, nodes, and connections during training. Destructive algorithms take a maximal network and prune unnecessary layers, nodes, and connections during training.

Faster Convergence Back propagation requires many epochs to converge. Some ideas to overcome this are: Stochastic learning (update weights after each training example); Momentum (add a fraction of the previous update to the current update) for faster convergence.
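
A small sketch of the momentum idea in this document's notation (the momentum coefficient alpha and the example numbers are illustrative choices, not from the slides):

```python
def momentum_weight_update(w_ij, err_j, o_i, prev_delta, l=0.9, alpha=0.5):
    """Weight update with momentum: delta = l*Err_j*O_i + alpha*(previous delta).

    Adding a fraction (alpha) of the previous update to the current one smooths
    the steps and can speed convergence. Returns the new weight and the delta.
    """
    delta = l * err_j * o_i + alpha * prev_delta
    return w_ij + delta, delta

# Example: repeated identical gradients accelerate under momentum.
w, prev = 0.0, 0.0
for _ in range(3):
    w, prev = momentum_weight_update(w, err_j=0.1, o_i=1.0, prev_delta=prev)
print(round(w, 4))  # 0.3825 (= 0.09 + 0.135 + 0.1575)
```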

Applications Handwritten Digit Recognition Face recognition Time series prediction Process identification Process control Optical character recognition

Applications Forecasting/Market Prediction: finance and banking Manufacturing: quality control, fault diagnosis Medicine: analysis of electrocardiogram data, RNA & DNA sequencing, drug development without animal testing Control: process, robotics

Summary We presented mainly the following: The basic building block of an Artificial Neural Network. Construction, working, and limitations of a single layer neural network. The back propagation algorithm for a multi layer feed forward NN. Some advanced features, such as training with subsets, quicker convergence, modular neural networks, and evolution of NNs.

Facts to be Remembered NNs perform well, generally better with a larger number of hidden units. More hidden units generally produce lower error. Determining the network topology is difficult. Choosing a single learning rate is impossible. It is difficult to reduce training time by altering the network topology or learning parameters. NNs with subset learning often produce better results.