Project 1: A comparison of time delay neural networks (TDNN) trained with mean squared error (MSE) and error entropy criterion (EEC)


Stefan Craciun

Abstract — The goal is to implement a TDNN (time delay neural network) trained with both MSE (steepest descent or LMS) and EEC (steepest descent / MEE-RIP or MEE-SIG) to predict the next sample of the sun spot time series data. The TDNN is a nonlinear system, so you will be able to quantify its advantage in predicting this difficult time series. Use an input layer with 5 delays as before. The TDNN will have the topology 5-X-1, where the output processing element should be linear (with a bias). You have to select the number of nonlinear (tanh) hidden processing elements X: start with 3, do not go beyond 10, and choose the one that produces the best results on a cross-validation set. If you have used downsampling of the sun spot time series before, keep it the same here so that we can compare the prediction results across linear, nonlinear, MSE and EEC.

The nonlinear function used for the PEs is the hyperbolic tangent,

    f(n) = 2 / (1 + e^(-2αn)) - 1 = tanh(αn),

where α gives the steepness of the nonlinearity. The figure below shows how the slope of the hyperbolic tangent changes as a function of α. For this TDNN, α is chosen to be 1.8.

[Figure: Hyperbolic tangent function]

First the TDNN is built using an input tap delay line with four delays. The network has an input layer of five PEs, a hidden layer of nonlinear PEs whose size will be varied, and a linear output layer of one PE. The hidden layer will vary from 3 PEs to 10 PEs. The last PE is a linear PE with a bias so that the output of the TDNN can take on a larger set of values, not just ±1. Below is a figure showing the topology of the TDNN.

[Figure: TDNN topology with one hidden layer]

The sun spot data is divided into a training set and a test set, using 500 data points to train and 200 points to test. The back-propagation algorithm is implemented in Matlab using incremental (on-line) training: for each input vector the error is calculated and back-propagated to update the weights. The back-propagation algorithm is very sensitive to the learning rate parameter as well as to the initial weight vector. It is possible to choose a learning rate for which the back-propagation algorithm will not converge to a stable weight value, and it is also observed that for very small step sizes the larger networks take very long to train (the error decreases very slowly). Since plain gradient descent is very slow, the weight update is modified to speed up training: the weights are updated using gradient descent with momentum,

    Δw_k = (1 - α)(-η ∂J/∂w) + α Δw_(k-1),

where the momentum constant is α = 0.9 and the step size is η = 0.0001.
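For reference, a minimal Matlab sketch of this momentum update applied to a toy quadratic cost (the cost function, the weight dimension and the variable names are illustrative assumptions; only the update rule and the constants α = 0.9 and η = 0.0001 come from the report):

    % Gradient descent with momentum on the toy cost J(w) = 0.5*w*w'
    alpha = 0.9;                        % momentum constant (value from the report)
    eta   = 0.0001;                     % step size (value from the report)

    w            = randn(1,5);          % illustrative weight vector
    delta_w_prev = zeros(size(w));      % previous update, initially zero

    for k = 1:3000
        grad_J  = w;                                               % gradient of the toy cost
        delta_w = (1 - alpha)*(-eta*grad_J) + alpha*delta_w_prev;  % momentum update
        w       = w + delta_w;                                     % apply the update
        delta_w_prev = delta_w;                                    % remember it for the next step
    end

The momentum term smooths successive updates, which is why both cost functions below use it with the same constants.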

Both the MSE cost function and the MEE-RIP cost function update the weights using the steepest descent method with momentum. In order to compare the MSE cost function and the MEE cost function, the same training and testing data sets are used, as well as the same number of iterations.

The graphs below show the learning curves for two different topologies. The errors decrease for both MSE and MEE as the network is trained to predict the time series. It is immediately observed that a nonlinear predictor such as this TDNN is more accurate at predicting the sun spot time series than a linear system such as the Wiener FIR filter: here the errors decrease below 0.1 after 3000 iterations, whereas for the LMS or Wiener linear filter the errors were on the order of 0.3 after 3000 iterations. The figures below show the MSE decreasing as a function of time (iterations).

[Figure: 1 hidden layer with 8 PEs using the MSE cost function (large step size)]
[Figure: 1 hidden layer with 8 PEs using the MEE-RIP cost function (large step size)]
[Figure: 1 hidden layer with 3 PEs using the MSE cost function (large step size)]
[Figure: 1 hidden layer with 3 PEs using the MEE-RIP cost function (large step size)]

In order to analyze the performance of each TDNN with the MEE-RIP cost function and with the MSE cost function, we quantify the errors by looking at both the entropy and the MSE values. The table below summarizes the normalized MSE for both cost functions:

Number of hidden PEs | MSE (normalized MSE) | MEE-RIP (normalized MSE)
3                    | 0.334                | 0.451
4                    | 0.229                | 0.306
5                    | 0.288                | 0.379
6                    | 0.497                | 0.325
7                    | 0.415                | 0.579
8                    | 0.691                | 0.661
9                    | 0.607                | 0.511
10                   | 0.629                | 0.712

As expected, the MSE criterion minimizes the overall MSE, so it outperforms the MEE-RIP cost function if normalized MSE is used as the criterion of comparison. However, if we look at the histograms of the errors for a network with 3 hidden PEs there is a significant difference. The MEE cost function trains the network better in the sense that it produces very few large errors and a majority of small errors, whereas the MSE cost function trains the TDNN to approximate the sun spot time series with many more large errors. This can be observed in the figures below: notice how the errors of the MSE-trained network have a larger variance, while the PDF of the errors of the MEE-trained network has a higher peak around zero.
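As an aside, a minimal Matlab sketch of how such error distributions can be compared, written as a hypothetical helper function (the function name, the bin count and the use of a normalized histogram as the density estimate are assumptions; the report does not state how its PDF figures were produced):

    function compare_error_pdfs(e_mse, e_mee)
    % Plot normalized-histogram estimates of the error PDFs of two networks,
    % e.g. the test-set errors of the MSE-trained and MEE-trained TDNNs.
    nbins = 50;
    edges = linspace(min([e_mse(:); e_mee(:)]), max([e_mse(:); e_mee(:)]), nbins+1);
    width = edges(2) - edges(1);

    pdf_mse = histc(e_mse(:), edges) / (numel(e_mse)*width);   % area approximately 1
    pdf_mee = histc(e_mee(:), edges) / (numel(e_mee)*width);

    plot(edges, pdf_mse, 'b', edges, pdf_mee, 'r');
    legend('trained with MSE', 'trained with MEE-RIP');
    xlabel('prediction error'); ylabel('estimated PDF');
    end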

[Figure: PDF of errors when TDNN is trained with MEE-RIP]
[Figure: PDF of errors when TDNN is trained with MSE]

Also, by looking at the Information Potential as a function of time, we notice that the IP does not reach the value of 1 but rather oscillates around 0.7. The figure below shows the normalized IP when training with MEE-RIP.

[Figure: Normalized IP trained with MEE-RIP]

We now proceed to test the performance of the two TDNNs, one trained with MSE and one trained with MEE, by building the topology shown in the figure below.

[Figure: autonomous TDNN]

The starting five inputs of the TDNN, labeled x1 through x5, are the first five samples of the time series. Instead of using the following samples of the sun spot time series (x6, x7, ..., x100) for the tap delay inputs, the output of the network is fed back as the next input. This is called autonomous prediction. Both TDNNs are expected to slowly diverge from the time series. When the normalized error becomes greater than 0.8 the test is stopped. The winning TDNN is the one that can predict the sun spot time series for the longest time. The results of the autonomous prediction are shown below.
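Before the result figures, a minimal sketch of this autonomous prediction test, written around a hypothetical helper tdnn_predict that maps a 1x5 input vector to the trained network's one-step prediction (the helper, the function name autonomous_test and the normalization of the squared error by the signal variance are assumptions; only the seeding with the first five samples and the 0.8 stopping threshold come from the report):

    function n_steps = autonomous_test(tdnn_predict, sun_spots, err_threshold)
    % Count how many samples the trained TDNN can predict autonomously before
    % its normalized error exceeds err_threshold (0.8 in the report).
    x = sun_spots(1:5);
    x = x(:).';                          % seed the tap delay line with the first 5 samples (row vector)
    norm_const = var(sun_spots);         % normalization of the squared error (an assumption)
    n_steps = 0;

    while 5 + n_steps + 1 <= numel(sun_spots)
        y = tdnn_predict(x);                            % predict the next sample
        target = sun_spots(5 + n_steps + 1);            % true next sample of the series
        if (y - target)^2 / norm_const > err_threshold
            break;                                      % the network has diverged
        end
        n_steps = n_steps + 1;
        x = [x(2:end), y];               % feed the prediction back as the newest input
    end
    end

Calling this once with the MSE-trained network and once with the MEE-trained network gives the number of samples each can track before diverging, which is the comparison made below.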

[Figure: TDNN trained with back-propagation using MEE-RIP]
[Figure: TDNN trained with back-propagation using MSE]

It is obvious just by looking at the figures above that the MEE is a slightly better cost function. The TDNN trained with MEE can approximate the sun spot time series for a longer time, while the TDNN trained with MSE diverges much faster from the time series. It seems that once a sample with a large error is predicted, the next approximation will be even worse and the predictor will diverge from its goal. The MSE criterion thus diverges faster because the probability of predicting an erroneous value far from the expected time series is higher: the MSE-trained TDNN is more likely to stumble on a large error and diverge than the MEE-trained TDNN. Comparing the normalized error confirms this: the TDNN trained with MSE reaches a normalized MSE greater than 0.8 after 15 samples, whereas the TDNN trained with MEE reaches a normalized MSE greater than 0.8 only after 23 samples.

Matlab Code: MEE Cost Function

clear all;
load sun_spots.mat;

% Linear 5-tap predictor trained with the MEE (MEE-RIP) criterion.
w  = zeros(1,5);        % initialize the weights (filter order = 5)
mu = 0.000001;          % step size

% form the input matrix (100 training patterns, 5 taps each)
for row = 1:100
    for col = 1:5
        in(row,col) = sun_spots(row+col-1);
    end
end

% form the desired vector (the next sample of the time series)
for i = 1:100
    d(i) = sun_spots(i+5);
end

for l = 1:200                       % training iterations
    % form the error vector
    for i = 1:100
        e(i) = d(i) - in(i,:)*w';
    end

    % form the matrix of error differences e(i) - e(j)
    for i = 1:100
        for j = 1:100
            E(i,j) = e(i) - e(j);
        end
    end

    % form the input-difference matrix for every tap: xk(n-i) - xk(n-j)
    A = [];
    for k = 1:5
        for i = 1:100
            for j = 1:100
                X(j,i) = in(j,k) - in(i,k);
            end
        end
        A = [A; X];
    end

    % kernel size from Silverman's rule
    sig = std(e);
    sig_opt = 2*sig*((1/size(e,2))*1.333)^(1/5);

    % Gaussian kernel evaluated at each error difference
    for i = 1:100
        for j = 1:100
            G(i,j) = 1/(sqrt(2*pi)*sig_opt)*exp(-E(i,j)^2/(2*sig_opt^2));
        end
    end

    % gradient of the information potential dV/dw for each tap
    dv = zeros(1,5);
    for k = 1:5
        for i = 1:100
            for j = 1:100
                dv(k) = dv(k) + G(i,j)*E(i,j)*A(i+k*100-100,j);
            end
        end
        dv(k) = -2*dv(k)/100^2;
    end

    % update the weights
    for k = 1:5
        w(k) = w(k) + mu*dv(k);
    end
end

Backpropagation:

% Note: input, desired, weights1 (4x3), weights2 (1x5), learning_rate, and the
% activation function sig() with derivative der() are assumed to be defined
% before this loop; that part of the code is not included in the excerpt.
% input is assumed to be 3 x num_patterns with its third row equal to 1, so
% that column 3 of weights1 acts as the bias of each hidden PE.

for k = 0:1000                                   % training epochs
    num_patterns = size(input, 2);               % 8 patterns in the original code

    % calculate output (activation) at the first hidden layer
    sum_hidden_out = weights1*input;
    for i = 1:4
        for j = 1:num_patterns
            out_hidden(i,j) = sig(sum_hidden_out(i,j));   % sigmoid of W*X
        end
    end
    out_hidden(5,1:num_patterns) = 1;            % constant bias input to the output PE (assumed)

    % calculate final output
    sum_out = weights2*out_hidden;
    for i = 1:num_patterns
        out(1,i) = sig(sum_out(1,i));            % sigmoid of W*Y1
    end

    % calculate final error due to each input pattern
    error = desired - out;

    % backpropagate the error to the first hidden layer
    error_hidden = weights2'*error;

    % update weights2 (w51..w54 from the hidden PEs, w55 is the output bias)
    for m = 1:5
        d_weights2(1,m) = 0;
        for i = 1:num_patterns
            d_weights2(1,m) = d_weights2(1,m) + ...
                learning_rate*error(1,i)*der(out(1,i))*out_hidden(m,i);
        end
        weights2(1,m) = weights2(1,m) + d_weights2(1,m);
    end

    % update weights1 (w11..w43; column 3 is the bias of each hidden PE)
    for n = 1:4
        for m = 1:3
            d_weights1(n,m) = 0;
            for i = 1:num_patterns
                d_weights1(n,m) = d_weights1(n,m) + ...
                    learning_rate*error_hidden(n,i)*der(out_hidden(n,i))*input(m,i);
            end
            weights1(n,m) = weights1(n,m) + d_weights1(n,m);
        end
    end
end