Learning strategies for neural nets - the backpropagation algorithm


In contrast to the NNs with threshold units treated so far, we now consider NNs with non-linear activation functions f(x). The most common choice is the sigmoid function ψ(x):

f(x) = ψ(x) = 1 / (1 + e^{-ax})

With increasing a it approximates the step function σ(x).

[Figure: sigmoid ψ(ax) for increasing a, plotted for -6 ≤ x ≤ 6]

The function ψ(x) is closely related to the tanh function. For a = 1 we have:

tanh(x) = 2 / (1 + e^{-2x}) - 1 = 2ψ(2x) - 1

If the two parameters a and b are added, a more general form results:

ψ_g(x) = 1 / (1 + e^{-a(x-b)})

with a controlling the steepness and b controlling the position on the x-axis. The non-linearity of the activation function is the core reason for the existence of the multi-layer perceptron; without it, the multi-layer perceptron would collapse into a trivial linear one-layer network. The differentiability of f(x) allows us to apply the necessary optimality conditions (∂J/∂w_{i,j} = 0) for the optimization of the weight coefficients w_{i,j}. The first derivative of the sigmoid function can be expressed in terms of the function itself:

dψ(x)/dx = e^{-x} / (1 + e^{-x})² = ψ(x)(1 - ψ(x))
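As a quick numerical sanity check (a sketch, not part of the lecture material), the tanh relation and the derivative identity above can be verified directly; the helper name psi and the choice a = 1 are assumptions for illustration.

```python
import numpy as np

# Sketch: numerically check the sigmoid identities stated above.
def psi(x, a=1.0):
    """Sigmoid psi(x) = 1 / (1 + exp(-a*x))."""
    return 1.0 / (1.0 + np.exp(-a * x))

x = np.linspace(-6.0, 6.0, 7)

# tanh(x) = 2*psi(2x) - 1   (for a = 1)
assert np.allclose(np.tanh(x), 2.0 * psi(2.0 * x) - 1.0)

# d psi/dx = psi(x) * (1 - psi(x)), checked against a central finite difference
eps = 1e-6
numeric = (psi(x + eps) - psi(x - eps)) / (2.0 * eps)
analytic = psi(x) * (1.0 - psi(x))
assert np.allclose(numeric, analytic, atol=1e-8)
```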

Figure of the extended weight matrix:

W̃ = [b, W] =
| w_{1,0} = b_1   w_{1,1}   w_{1,2}   ...   w_{1,N} |
| w_{2,0} = b_2   w_{2,1}   w_{2,2}   ...   w_{2,N} |
| w_{3,0} = b_3   w_{3,1}   w_{3,2}   ...   w_{3,N} |
| ...                                               |
| w_{M,0} = b_M   w_{M,1}   w_{M,2}   ...   w_{M,N} |

The neural net with H layers

The multi-layer NN with H layers has its own weight matrix W^i in every layer, but identical sigmoid functions:

x → [W^1] → y^1 → [W^2] → y^2 → ... → y^{H-1} → [W^H] → y^H

y^H = ψ(W^H ψ(W^{H-1} ... ψ(W^1 x) ...))

Learning is based on adjusting the weight matrices so as to minimize a squared error criterion between the target value y and the approximation ŷ produced by the net (supervised learning):

J = min over W^1, ..., W^H of E{ ||y - ŷ||² }

The expected value has to be formed over the available training ensemble {y_j, x_j}.

Note: the one-layer NN and the linear polynomial classifier are identical if the activation function ψ is replaced by the identity.
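A minimal sketch of this forward map, assuming the bias is folded into each W^r via an extended input [1; y^{r-1}] as in the extended weight matrix above; the shapes and names are illustrative, not the lecture's code.

```python
import numpy as np

def psi(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(weight_matrices, x):
    """Forward pass y^H = psi(W^H psi(... psi(W^1 x) ...)).

    weight_matrices: list [W1, ..., WH], each of shape (N_h, N_{h-1} + 1),
    with the bias in column 0 (extended weight matrix convention)."""
    y = x
    for W in weight_matrices:
        y_ext = np.concatenate(([1.0], y))   # prepend constant 1 for the bias column
        y = psi(W @ y_ext)                   # s^h = W~^h [1; y^{h-1}],  y^h = psi(s^h)
    return y

# example: N = 3 inputs, one hidden layer with 4 neurons, K = 2 outputs
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3 + 1))
W2 = rng.normal(size=(2, 4 + 1))
print(forward([W1, W2], rng.normal(size=3)))
```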

Relations to the concept of function approximation by a linear combination of base functions

The first layer creates the base functions in the form of a hidden vector y^1, and the second layer forms a linear combination of these base functions. Therefore the coefficient matrix W^1 of the first layer controls the shape of the base functions, while the weight matrix W^2 of the second layer contains the coefficients of the linear combination. In addition, the result of this linear combination is again passed through the activation function.

Reaction of a neuron with two inputs and a threshold function σ

Reaction of a neuron with two inputs and a sigmoid function ψ

Superposition of two neurons with two inputs each and a sigmoid function

Reaction of a neuron in the second layer to two neurons of the first layer after weighting with the sigmoid function

The backpropagation learning algorithm

The class map is realized by a multi-layer perceptron which produces a 1 at the output in the regions of x that are populated by the samples of the corresponding class, and produces a 0 in the regions occupied by the other classes. In the regions in between and outside, an interpolation resp. extrapolation is performed. The network is trained by optimizing the squared quality criterion:

J = E{ ||ŷ(w_{ij}, x) - y||² }

This term is non-linear both in the elements of the input vector x and in the weight coefficients {w_{ij}^h}.

Inserting the function map of the multi-layer perceptron yields:

J = E{ ||ψ(W^H ψ(W^{H-1} ... ψ(W^1 x) ...)) - y||² }

Sought is the global minimum. It is not clear whether such a minimum exists or how many local minima there are. The backpropagation algorithm solves the problem iteratively with a gradient algorithm. One iterates according to:

W ← W - α ∇J    with W = {W^1, W^2, ..., W^H}

and the gradient:

∇J = ∂J/∂W = [ ∂J/∂w_{nm}^h ]

The iteration terminates when the gradient vanishes at a (local or even global) minimum. A proper choice of α is difficult: small values increase the number of required iterations; large values reduce the probability of getting stuck in a local minimum, but risk that the method diverges and misses the minimum (or gives rise to oscillations).

The iteration has only linear convergence (gradient algorithm).

Calculating the gradient: to differentiate the composed function

ŷ(x) = ψ(W^H ψ(W^{H-1} ... ψ(W^1 x) ...))

the chain rule is needed. The error caused by a single sample is:

J_j = ½ ||ŷ(x_j) - y_j||² = ½ Σ_{k=1}^{N_H} (ŷ_k(j) - y_k(j))²

The expected value E{...} of the gradient has to be approximated by the mean value over all sample gradients:

∇J = (1/n) Σ_{j=1}^{n} ∇J_j

We have to distinguish between individual learning, based on the last sample (x_j, y_j):

∇J ≈ ∇J_j

and cumulative learning (batch learning), based on:

∇J = (1/n) Σ_{j=1}^{n} ∇J_j

Partial derivatives for one layer: for the r-th layer we have

y_n^r = ψ(s_n^r) = ψ( Σ_{m=0}^{N_{r-1}} w_{nm}^r x_m^r )

with x_m^r the m-th input of layer r (x_0^r = 1 for the bias) and y_n^r the n-th output of layer r.

Definition of the variables of two layers for the backpropagation algorithm

[Figure: units of layer r-1 with weighted sums s_j^{r-1}, sensitivities δ_j^{r-1} and outputs y_j^{r-1} = f(s_j^{r-1}), connected via the weights w_{kj}^r to the units of layer r with weighted sums s_k^r, sensitivities δ_k^r and outputs y_k^r = f(s_k^r)]

First we calculate the effect of the last hidden layer on the output layer r = H. Using partial derivatives of the quality function J (actually J_j, but j is omitted for simplicity), applying the chain rule, and introducing the sensitivity of cell n

δ_n^H = -∂J/∂s_n^H

results in:

δ_n^H = -∂J/∂s_n^H = -(∂J/∂ŷ_n)(∂ŷ_n/∂s_n^H) = -∂/∂ŷ_n ( ½ Σ_{k=1}^{N_H} (ŷ_k - y_k)² ) · f'(s_n^H) = (y_n - ŷ_n) f'(s_n^H)

and for the derivative with respect to a weight:

∂J/∂w_{nm}^H = (∂J/∂s_n^H)(∂s_n^H/∂w_{nm}^H) = -δ_n^H y_m^{H-1}

and thus for the update of the weights:

w_new = w_old + Δw    with Δw = -α ∂J/∂w

Δw_{nm}^H = -α ∂J/∂w_{nm}^H = α δ_n^H y_m^{H-1} = α (y_n - ŷ_n) f'(s_n^H) y_m^{H-1}

For all other hidden layers r-1 < H the reasoning is a little more involved. Due to the mutual dependencies, the values s_j^{r-1} have an effect on all elements s_k^r of the following layer. Using the chain rule:

-δ_j^{r-1} = ∂J/∂s_j^{r-1} = Σ_k (∂J/∂s_k^r)(∂s_k^r/∂s_j^{r-1}) = Σ_k (-δ_k^r)(∂s_k^r/∂s_j^{r-1})

With

∂s_k^r/∂s_j^{r-1} = ∂/∂s_j^{r-1} ( Σ_m w_{km}^r f(s_m^{r-1}) ) = w_{kj}^r f'(s_j^{r-1})

this results in:

δ_j^{r-1} = f'(s_j^{r-1}) Σ_k w_{kj}^r δ_k^r

i.e. the errors propagate backwards from the output layer to the lower layers!
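A small sketch of this backward step in element-wise form (assumed names; the bias weights are excluded, since the constant input 1 does not receive an error):

```python
import numpy as np

# Sketch: one backward step of the sensitivities, element-wise,
# following delta_j^{r-1} = f'(s_j^{r-1}) * sum_k w_kj^r * delta_k^r.
def f_prime(s):
    y = 1.0 / (1.0 + np.exp(-s))
    return y * (1.0 - y)                      # psi'(s) = psi(s) * (1 - psi(s))

def backward_step(W_r, delta_r, s_prev):
    """W_r[k, j]: weight from unit j of layer r-1 to unit k of layer r (no bias column)."""
    delta_prev = np.empty_like(s_prev)
    for j in range(s_prev.size):
        delta_prev[j] = f_prime(s_prev[j]) * np.sum(W_r[:, j] * delta_r)
    return delta_prev
```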

For batch learning, all learning samples are passed through without any change of the weight coefficients, and the gradient ∇J results by forming the mean value of the ∇J_j. Both values can be used for the update of the parameter matrix W of the perceptron:

W_{k+1} = W_k - α ∇J_j    individual learning
W_{k+1} = W_k - α ∇J      batch learning

The gradients ∇J and ∇J_j differ: the former is the mean value of the latter (∇J = E{∇J_j}), or, put differently, ∇J_j is a random realization of ∇J.

The whole individual learning procedure consists of these steps:
- Choose initial values for the coefficient matrices W^1, W^2, ..., W^H of all layers.
- Take a new observation [x_j, y_j].
- Calculate the discriminant function ŷ_j from the given observation x_j and the current weight matrices (forward calculation).
- Calculate the error between the estimate ŷ and the target vector y: Δy = ŷ - y.
- Calculate the gradient ∇J with respect to all perceptron weights (error backpropagation). To do so, first calculate δ_n^H of the output layer according to:

  δ_n^H = -∂J/∂s_n^H = -(∂J/∂ŷ_n)(∂ŷ_n/∂s_n^H) = (y_n - ŷ_n) f'(s_n^H)

  and from this calculate backwards all values of the lower layers according to:

  δ_j^{r-1} = f'(s_j^{r-1}) Σ_k w_{kj}^r δ_k^r    for r = H, H-1, ..., 2

  taking into account the first derivative of the sigmoid function f(s) = ψ(s):

  f'(s) = dψ(s)/ds = ψ(s)(1 - ψ(s)) = y(1 - y)

- (In parallel) correct all perceptron weights according to:

  w_{nm}^r ← w_{nm}^r - α ∂J/∂w_{nm}^r = w_{nm}^r + α δ_n^r y_m^{r-1}    for r = 1, 2, ..., H; n = 1, ..., N; m = 1, ..., M

For individual learning the weights {w_{nm}^r} are corrected with ∇J_j for every sample, while for batch learning all learning samples have to be considered in order to determine the averaged gradient ∇J from the sequence {∇J_j}, before the weights are corrected according to:

w_{nm}^r ← w_{nm}^r + α Σ_i δ_n^r(i) y_m^{r-1}(i)    for r = 1, 2, ..., H; n = 1, ..., N; m = 1, ..., M

Backpropagation algorithm for matrices
- Choose initial values for the coefficient matrices W^1, W^2, ..., W^H of all layers.
- Take a new observation [x_j, y_j].
- Calculate the discriminant function ŷ_j from the given observation x_j and the current weight matrices (forward calculation). Store all intermediate values y^r and s^r of all layers r = 1, 2, ..., H.
- Calculate the error between the estimate ŷ and the target vector y at the output: Δy = ŷ - y.
- Calculate the gradient ∇J with respect to all perceptron weights (error backpropagation). To do so, first calculate δ^H of the output layer according to:

  δ^H = -∂J/∂s^H = (y - ŷ) ⊙ f'(s^H)

  and from this calculate backwards all values of the lower layers according to:

  δ^{r-1} = f'(s^{r-1}) ⊙ ((W^r)^T δ^r)    for r = H, H-1, ..., 2

  (⊙ denotes the element-wise product)

  taking into account the first derivative of the sigmoid function f(s) = ψ(s):

  f'(s) = dψ(s)/ds = ψ(s)(1 - ψ(s)) = y(1 - y)

Individual learning: (in parallel) correct all perceptron weight matrices with ∇J_j for every sample according to:

W^r ← W^r - α ∂J/∂W^r = W^r + α δ^r (y^{r-1})^T    for r = 1, 2, ..., H

Cumulative learning: all samples have to be considered in order to determine the averaged value ∇J from the sequence {∇J_j}, before the weights are corrected according to:

W^r ← W^r + α Σ_j δ_j^r (y_j^{r-1})^T    for r = 1, 2, ..., H
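The matrix form above can be sketched as one individual-learning update roughly as follows; the layout (bias in column 0 of each W^r, sigmoid in every layer) is an assumption consistent with the extended weight matrix, not a definitive implementation.

```python
import numpy as np

def psi(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(Ws, x, y_target, alpha=0.5):
    """One individual-learning update; Ws[r] has shape (N_{r+1}, N_r + 1)."""
    # forward calculation, storing y^r of all layers (y^0 = x)
    ys = [x]
    for W in Ws:
        s = W @ np.concatenate(([1.0], ys[-1]))
        ys.append(psi(s))
    y_hat = ys[-1]

    # output layer: delta^H = (y - y_hat) * f'(s^H), with f'(s) = y(1 - y)
    delta = (y_target - y_hat) * y_hat * (1.0 - y_hat)

    # backward sweep and weight correction W^r <- W^r + alpha * delta^r [1; y^{r-1}]^T
    for r in range(len(Ws) - 1, -1, -1):
        y_prev = ys[r]
        grad = np.outer(delta, np.concatenate(([1.0], y_prev)))
        if r > 0:
            # delta^{r-1} = f'(s^{r-1}) (.) (W^r)^T delta^r, bias column excluded
            delta = y_prev * (1.0 - y_prev) * (Ws[r][:, 1:].T @ delta)
        Ws[r] = Ws[r] + alpha * grad
    return y_hat
```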

Properties: The backpropagation algorithm is simple to implement, but computationally very expensive, especially for large coefficient matrices, which in turn require a large number of samples. The dependence of the method on the starting values of the weights, on the correction factor α and on the order in which the samples are processed also has an adverse effect. Linear dependencies remain unconsidered, just as for the recursive training of the polynomial classifier. The gradient algorithm has the advantage that it can be used for very large problems. The dimension of the coefficient matrix W results from the dimensions of the input vector, the hidden layers and the output vector (N, N_1, ..., N_{H-1}, K) as:

T = dim(W) = Σ_{h=1}^{H} (N_{h-1} + 1) N_h    with N_0 = N = dim(x) (dimension of the feature space) and N_H = K = dim(ŷ) (number of classes)
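For example, the parameter count T can be evaluated with a one-liner (the 35-10-26 layout of the character-recognition net further below is used purely as an illustration):

```python
# Sketch: T = sum_h (N_{h-1} + 1) * N_h for layer widths (N_0, N_1, ..., N_H).
def num_parameters(widths):
    return sum((widths[h - 1] + 1) * widths[h] for h in range(1, len(widths)))

print(num_parameters([35, 10, 26]))   # (35+1)*10 + (10+1)*26 = 646
```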

Dimensioning the net

The considerations made for designing the multi-layer perceptron with threshold function (σ resp. sign) give quite a good idea of how many hidden layers and how many neurons should be used for an MLP trained with backpropagation (assuming one has a certain idea of the distribution of the clusters). The net should be chosen as simple as possible: the higher its dimension, the higher the risk of overfitting, accompanied by a loss of the ability to generalize. A higher dimension also produces many local minima, which can cause the algorithm to get stuck! For a given number of samples, the learning phase should be stopped at a certain point. If it is not stopped, the net is overfitted to the existing data and loses its ability to generalize.

[Figure: error over training epochs for the training and the test set; the training error keeps decreasing while the test error starts to rise again]

Complexity of the backpropagation algorithm

For T = dim(W) weights and bias terms, the linear complexity in the number of weights is easily understood: O(T) computation steps are needed for the forward simulation, O(T) steps for the backpropagation of the error, and O(T) operations for the correction of the weights; altogether O(T).

If the gradients were instead determined experimentally by finite differences (in order to do so, one would have to increment each weight and determine the effect on the quality criterion by a forward calculation), i.e. by calculating a difference quotient

∂J/∂w_{ji}^r = ( J(w_{ji}^r + ε) - J(w_{ji}^r) ) / ε + O(ε)

instead of evaluating analytically, T forward calculations of complexity O(T) each would be required, and therefore an overall complexity of O(T²).
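A sketch of such a finite-difference gradient (hypothetical helper names): every weight requires one extra forward evaluation of J, which is what makes the overall cost O(T²) compared with O(T) for analytical backpropagation.

```python
import numpy as np

def numerical_gradient(J, w, eps=1e-6):
    """Approximate dJ/dw by (J(w + eps) - J(w)) / eps, one forward evaluation per weight."""
    grad = np.zeros_like(w)
    J0 = J(w)
    for i in range(w.size):
        w_plus = w.copy()
        w_plus.flat[i] += eps
        grad.flat[i] = (J(w_plus) - J0) / eps
    return grad
```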

Backpropagation for training a two-layer network for function approximation (Matlab demo: Backpropagation Calculation)

[Figure: network with one input x; two hidden neurons with weights w_{1,1}^1, w_{2,1}^1, biases b_1^1, b_2^1, sums s_1^1, s_2^1, sensitivities δ_1^1, δ_2^1 and outputs y_1^1, y_2^1; one linear output neuron with weights w_{1,1}^2, w_{1,2}^2, bias b^2, sum s^2, sensitivity δ^2 and output y^2 = ŷ]

y^1 = ψ(W^1 x + b^1) = ψ(W̃^1 x̃) = ψ(s^1)    with W^1 = [w_{1,1}^1; w_{2,1}^1] and b^1 = [b_1^1; b_2^1]

ŷ = y^2 = W^2 y^1 + b^2    with W^2 = [w_{1,1}^2, w_{1,2}^2]

Sought is an approximation of the function y(x) = 1 + sin(πx/4) for -2 ≤ x ≤ 2.

Backpropagation for training a two-layer network for function approximation

Backpropagation: for the output layer (second layer) with linear activation function, f' = 1 gives:

δ^2 = y - ŷ

Backpropagation: for the first layer with sigmoid function:

δ_j^1 = f'(s_j^1) w_{1,j}^2 δ^2 = y_j^1 (1 - y_j^1) w_{1,j}^2 δ^2    for j = 1, 2

Correction of the weights in the output layer:

Δw_{1,j}^2 = α δ^2 y_j^1  and  Δb^2 = α δ^2    for j = 1, 2

Correction of the weights in the input layer:

Δw_{j,1}^1 = α δ_j^1 x  and  Δb_j^1 = α δ_j^1    for j = 1, 2
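A compact sketch of this demo setup under assumed hyperparameters (α, the initialization and the number of epochs are guesses, and the Matlab demo may differ): a 1-2-1 network with sigmoid hidden layer and linear output, trained with exactly the four update formulas above.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=2); b1 = rng.normal(scale=0.5, size=2)
W2 = rng.normal(scale=0.5, size=2); b2 = rng.normal(scale=0.5)
alpha = 0.1

def psi(s):
    return 1.0 / (1.0 + np.exp(-s))

xs = np.linspace(-2.0, 2.0, 21)
for epoch in range(2000):
    for x in rng.permutation(xs):
        y = 1.0 + np.sin(np.pi * x / 4.0)        # target
        y1 = psi(W1 * x + b1)                    # hidden layer
        y_hat = W2 @ y1 + b2                     # linear output layer
        d2 = y - y_hat                           # delta^2 = y - y_hat (f' = 1)
        d1 = y1 * (1.0 - y1) * W2 * d2           # delta_j^1 = y_j(1-y_j) w_{1,j}^2 delta^2
        W2 += alpha * d2 * y1; b2 += alpha * d2  # output layer correction
        W1 += alpha * d1 * x;  b1 += alpha * d1  # input layer correction

# maximum approximation error on the training grid
print(np.max(np.abs([1.0 + np.sin(np.pi * x / 4.0) - (W2 @ psi(W1 * x + b1) + b2) for x in xs])))
```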

Starting Matlab demo matlab-bpc.bat

Backpropagation for training a two-layer network for function approximation (Matlab demo: Function Approximation)

Sought is an approximation of the function y(x) = 1 + sin(i·πx/4) for -2 ≤ x ≤ 2.

With increasing value of i (difficulty index) the requirements on the MLP network increase: the approximation based on the sigmoid function needs a larger and larger network in order to represent several periods of the sine function.

Problem: convergence to local minima, even when the network is big enough to represent the function.
- The network is too small, so that the approximation is poor, i.e. the global minimum can be reached, but the approximation quality is poor (i = 8 and network 1-3-1).
- The network is big enough, so that a good approximation is possible, but the iteration converges towards a local minimum (i = 4 and network 1-3-1).

Starting Matlab demo matlab-fa.bat

Model big enough, but convergence only to a local minimum

Generalization ability of the network

Assume the network is trained with the samples {y_1, x_1}, {y_2, x_2}, ..., {y_n, x_n}. How well does the network approximate the function for samples it has not learned, depending on the complexity of the network?

Starting Matlab demo (Hagan -2) matlab-general.bat

Rule: the network should have fewer parameters than there are available input/output pairs.

Improving the generalization ability of a net by adding noise

If only few samples are available, the generalization ability of a NN can be improved by enlarging the set of samples, e.g. with normally distributed noise, i.e. by adding further samples which would be found in the nearest neighbourhood with high probability. In doing so, the intra-class areas are broadened and the borders between the classes are sharpened.

[Figure: samples of class 1 and class 2 together with noisy copies scattered around the original samples]
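A possible sketch of this augmentation (the noise level sigma and the number of copies are assumptions):

```python
import numpy as np

def augment_with_noise(X, labels, copies=5, sigma=0.05, seed=0):
    """X: (n, d) samples; returns the original samples plus noisy copies with the same labels."""
    rng = np.random.default_rng(seed)
    X_noisy = np.concatenate([X] + [X + rng.normal(scale=sigma, size=X.shape)
                                    for _ in range(copies)])
    labels_noisy = np.concatenate([labels] * (copies + 1))
    return X_noisy, labels_noisy
```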

Interpolation by overlapping of normal distributions

[Figure: weighted sum of radial basis transfer functions, plotted for -3 ≤ x ≤ 3]

y(x) = Σ_{i=1}^{I} α_i f_i(x - t_i)
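A small sketch of such a weighted sum with Gaussian basis functions (the centres t_i, the common width and the weights α_i are illustrative values):

```python
import numpy as np

def rbf_sum(x, alphas, centres, width=0.5):
    """y(x) = sum_i alpha_i * f_i(x - t_i) with Gaussian basis functions f_i."""
    return sum(a * np.exp(-((x - t) ** 2) / (2.0 * width ** 2))
               for a, t in zip(alphas, centres))

x = np.linspace(-3.0, 3.0, 121)
y = rbf_sum(x, alphas=[0.6, 1.0, 0.8], centres=[-1.5, 0.0, 1.5])
```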

Interpolation by overlapping of normal distributions

Character recognition task for 26 letters of size 7×5 pixels with gradually increasing additive noise:

Two-layer neural net for symbol recognition with 35 input values (pixels), 10 neurons in the hidden layer and 26 neurons in the output layer

[Figure: first layer with weight matrix W^1 (S^1 × N = 10 × 35), bias b^1, sum s^1 and sigmoid ψ; second layer with weight matrix W^2 (S^2 × S^1 = 26 × 10), bias b^2, sum s^2 and sigmoid ψ; N = 35, S^1 = 10, S^2 = 26]

y^1 = f^1(W^1 x + b^1)
y^2 = f^2(W^2 y^1 + b^2)
y^2 = ψ(W^2 ψ(W^1 x + b^1) + b^2)

Demo with MATLAB

Open the Matlab demo in the Toolbox Neural Networks: Character recognition (command line). Two networks are trained:
- network 1 without noise
- network 2 with noise
Network 2 gives better results than network 1 on an independent set of test data.

Starting Matlab demo matlab-cr.bat

Possibilities to accelerate the adaptation

Adaptation of a NN is simply a general parameter optimization problem. Accordingly, a variety of other optimization methods from numerical mathematics can in principle be applied. This can lead to significant improvements (convergence speed, convergence area, stability, complexity). General statements cannot be made easily, since the individual cases differ a lot and compromises have to be made!

Heuristic improvements of the gradient algorithm (step-size control):
- gradient algorithm (steepest descent) with momentum term (the update of the weights depends not only on the gradient but also on the previous update)
- using an adaptive learning factor

Conjugate gradient algorithm

Newton and quasi-Newton algorithms (using the second derivatives of the error function, e.g. in the form of the Hessian matrix or an estimate of it):
- Quickprop
- Levenberg-Marquardt algorithm

Taylor expansion of the quality function at w:

J(w + Δw) = J(w) + ∇J^T Δw + ½ Δw^T H Δw + terms of higher order

with the gradient vector ∇J = ∂J/∂w and the Hessian matrix H = [ ∂²J/(∂w_i ∂w_j) ].

Pruning techniques: starting with a sufficiently big network, neurons that have no or little effect on the quality function are removed from the net, which can reduce overfitting of the data.
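Two of these acceleration ideas can be sketched in a few lines; the parameter names (alpha, mu, damping, residual_fn, jacobian_fn) are illustrative assumptions rather than the lecture's notation, and the second function uses the Gauss-Newton product Jrᵀ·Jr as the Hessian estimate in the Taylor expansion above.

```python
import numpy as np

def momentum_step(W, grad_J, prev_update, alpha=0.1, mu=0.9):
    """Steepest descent with momentum: the new update mixes the current gradient
    with the previous update."""
    update = -alpha * grad_J + mu * prev_update
    return W + update, update

def levenberg_marquardt_step(w, residual_fn, jacobian_fn, damping=1e-2):
    """Damped Gauss-Newton step for J(w) = 1/2 ||r(w)||^2."""
    r = residual_fn(w)                    # residual vector, shape (m,)
    Jr = jacobian_fn(w)                   # Jacobian dr/dw, shape (m, n)
    grad = Jr.T @ r                       # gradient of J
    H_approx = Jr.T @ Jr                  # Gauss-Newton approximation of the Hessian
    dw = np.linalg.solve(H_approx + damping * np.eye(w.size), -grad)
    return w + dw
```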

Demo with MATLAB (Demo of Theodoridis)

C:\...\matlab\PR\startdemo
- Example 2 (XOR) with (2-2-1 and 2-5-5-1)
- Example 3 with three-layer network [5,5] (3000 epochs, learning rate 0.7, momentum 0.5)

Two solutions of the XOR problem:
- Convex area implemented with a two-layer net (2-2-1). Possible misclassification for variance σ_n > 0.5/4.
- Union of 2 convex areas implemented with a three-layer net (2-5-5-1). Possible misclassification for variance σ_n > 0.5/√2 ≈ 0.35.

Starting the Matlab demo matlab-theodoridis.bat

2-2-2-1 does not converge!

XOR with high noise in order to activate the second solution! With two lines an error-free (exact) separation cannot be achieved, and the gradient is still different from 0!

Demo with MATLAB (Klassifikationgui.m)

Open Matlab: first invoke setmypath.m, then C:\Home\ppt\Lehre\ME_2002\matlab\KlassifikatorEntwurf-WinXX\Klassifikationgui
- set of data: load Samples/xor2.mat
- settings: 500 epochs, learning rate 0.7
- load weight matrices: models/xor-trivial.mat
- load weight matrices: models/xor-3.mat

Two-class problem with banana shape:
- load set of data Samples/banana_test_samples.mat
- load presetting from: models/mlp_7_3_2_banana.mat

Starting Matlab demo matlab-klassifikation_gui.bat

Solution of the XOR-problem with two-layer nn

Solution of the XOR-problem with two-layer nn

Solution of the XOR-problem with three-layer nn

Solution of the XOR-problem with three-layer nn

Convergence behaviour of the backpropagation algorithm

Backpropagation with common gradient descent (steepest descent): Starting Matlab demo matlab-bp-gradient.bat

Backpropagation with conjugate gradient descent (conjugate gradient), significantly better convergence: Starting Matlab demo matlab-bp-cggradient.bat

NN and their properties

Quick and dirty: an NN design is simple to implement and yields solutions that are by all means usable (a suboptimal solution is better than no solution at all). Even very large problems can be solved. However, all strategies use only local optimization and in general do not reach the global optimum, which is a major drawback of neural nets.

Using a NN for iterative calculation of the Principal Component Analysis (KLT)

Instead of solving the KLT explicitly via an eigenvalue decomposition, a NN can be used for an iterative calculation. A two-layer perceptron is used; the output w of the hidden layer is the feature vector sought.

[Figure: x (dimension N) → A^T → w (dimension M) → A → x̂ (dimension N)]

The first layer calculates the feature vector as:

w = A^T x

Suppose a linear activation function is used. The second layer implements the reconstruction of x with the transposed weight matrix of the first layer:

x̂ = A w

The target of the optimization is to minimize:

J = E{ ||x̂ - x||² } = E{ ||A w - x||² } = E{ ||A A^T x - x||² }

The following learning rule (an approximate gradient step A ← A - α ∇J):

A ← A - α (A w - x) w^T

leads to a coefficient matrix A which can be written as a product of two matrices (without proof):

A = B T

where B is an N×M matrix of the M dominant eigenvectors of E{x x^T} and T an orthonormal M×M matrix, which causes a rotation of the coordinate system within the space spanned by the M dominant eigenvectors of E{x x^T}.
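A sketch of this learning rule on synthetic data (the step size α, the dimensions and the data are illustrative; note that no mean is subtracted, so the rule indeed works on E{x xᵀ} as discussed next):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, alpha = 5, 2, 0.01
C = rng.normal(size=(N, N)) / np.sqrt(N)          # mixing matrix -> correlated samples
data = rng.normal(size=(10000, N)) @ C
A = rng.normal(scale=0.1, size=(N, M))

for x in data:
    w = A.T @ x                                   # first layer: feature vector w = A^T x
    A -= alpha * np.outer(A @ w - x, w)           # reconstruction error drives the update

# the columns of A now approximately span the subspace of the M dominant
# eigenvectors of E{x x^T} (A = B*T up to a rotation T, as stated above)
```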

The learning strategy does not include a translation, i.e. it calculates the eigenvectors of the moment matrix E{x x^T} and not those of the covariance matrix K = E{(x - µ_x)(x - µ_x)^T}. This can be remedied by subtracting a recursively estimated expected value µ_x = E{x} before starting the recursive estimation of the feature vector. The same effect can be obtained by using an extended observation vector x̃ = [1, x^T]^T instead of x.