Deep Neural Networks (1) Hidden layers; Back-propagation


Steve Renals, Machine Learning Practical. MLP Lecture 3, 4 October 2017 / 9 October 2017.

Recap: Softmax single-layer network. [Diagram: three softmax output units (class 1, class 2, class 3), each connected to all of the inputs.]

Single-layer network: 1 output, 2 inputs. [Diagram: a single output unit connected to inputs $x_1$ and $x_2$.]

Geometric interpretation. Single-layer network, 1 output, 2 inputs. [Diagram: in the $(x_1, x_2)$ plane the decision boundary $y(w; x) = 0$ is a straight line, with its orientation given by the weight vector $w$ and its offset determined by the bias $b$.] (Bishop, sec 3.1)

Single-layer network: 3 outputs, 2 inputs. [Diagram: three output units, each connected to inputs $x_1$ and $x_2$.]

Example data (three classes). [Scatter plot of two-dimensional data points drawn from three classes.]

Classification regions with a single-layer network. [Plot of the decision regions learned for the three-class data.] Single-layer networks are limited to linear classification boundaries.

Single-layer network trained on MNIST digits: 784 inputs (a 28x28 pixel image), a 784x10 weight matrix, and 10 outputs (digits 0-9). The output weights define a template for each class.

Hinton diagrams: visualise the weights for each class; here the input is 400-dimensional (20x20).

Hinton diagram for a single-layer network trained on MNIST: the weights for each class act as a discriminative template. The inner product of the class weights and the input measures closeness to each template; classify to the closest template (maximum-value output). [Hinton diagrams shown for digit classes 0-3.]
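As a concrete illustration of this "closest template" view, here is a minimal NumPy sketch, assuming a trained weight matrix W of shape (10, 784) and bias vector b; the names, shapes, and random stand-in values are illustrative, not taken from the course code.

```python
import numpy as np

def classify_by_template(x, W, b):
    """Single-layer classification: inner product of the input with each
    class's weight vector (template), plus a bias; predict the largest."""
    scores = W @ x + b             # shape (10,): one score per class
    return int(np.argmax(scores))  # index of the closest template

# Usage with random stand-in values (a real run would use trained W, b):
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))     # 10 class templates over 784 inputs
b = np.zeros(10)
x = rng.normal(size=784)           # one flattened 28x28 image
print(classify_by_template(x, W, b))
```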

Multi-Layer Networks

From templates to features. Good classification needs to cope with the variability of real data: scale, skew, rotation, translation, ... Very difficult to do with a single template per class. Could have multiple templates per task ... this will work, but we can do better: use features rather than templates. (Images from: Nielsen, chapter 1)

Incorporating features in the neural network architecture. Layered processing: inputs - features - classification. [Diagram: the inputs feed a layer of feature units, which feed the 10 digit outputs 0-9.] How to obtain features? Learning!


Multi-layer network. [Diagram: inputs $x_i$ feed a sigmoid hidden layer (pre-activations $a^{(1)}$, weights $w^{(1)}$, biases $b^{(1)}$, outputs $h^{(1)}$), which feeds a softmax output layer (pre-activations $a^{(2)}$, weights $w^{(2)}$, biases $b^{(2)}$, outputs $y$).]

$y_c = \mathrm{softmax}\Big(\sum_{r=1}^{H} w^{(2)}_{cr}\, h^{(1)}_r + b^{(2)}_c\Big) \qquad h^{(1)}_k = \mathrm{sigmoid}\Big(\sum_{s=1}^{d} w^{(1)}_{ks}\, x_s + b^{(1)}_k\Big)$
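Written as a minimal NumPy sketch for a single input vector (the names W1, W2, b1, b2 and the shape conventions are illustrative assumptions, not the course's framework), the forward pass might look like:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network for a single input x.
    W1: (H, d) input-to-hidden weights, b1: (H,) hidden biases.
    W2: (C, H) hidden-to-output weights, b2: (C,) output biases."""
    a1 = W1 @ x + b1                # a^(1)_k = sum_s w^(1)_ks x_s + b^(1)_k
    h1 = 1.0 / (1.0 + np.exp(-a1))  # h^(1)_k = sigmoid(a^(1)_k)
    a2 = W2 @ h1 + b2               # a^(2)_c = sum_r w^(2)_cr h^(1)_r + b^(2)_c
    y = np.exp(a2 - a2.max())
    y /= y.sum()                    # y_c = softmax(a^(2))_c
    return h1, y

# Usage with arbitrary sizes (d=784 inputs, H=100 hidden units, C=10 classes):
rng = np.random.default_rng(0)
d, H, C = 784, 100, 10
W1, b1 = 0.01 * rng.normal(size=(H, d)), np.zeros(H)
W2, b2 = 0.01 * rng.normal(size=(C, H)), np.zeros(C)
h1, y = forward(rng.normal(size=d), W1, b1, W2, b2)
print(y.sum())  # softmax outputs sum to 1
```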

Multi-layer network for MNIST. (Image from: Michael Nielsen, Neural Networks and Deep Learning, http://neuralnetworksanddeeplearning.com/chap1.html)

Training multi-layer networks: Credit assignment. Hidden units make training the weights more complicated, since the hidden units affect the error function indirectly via all the outputs. The credit assignment problem:
- what is the error of a hidden unit?
- how important is input-to-hidden weight $w^{(1)}_{ki}$ to a given output unit?
- what is the gradient of the error with respect to each weight?
Solution: back-propagation of error (backprop). Backprop enables the gradients to be computed; these gradients are used by gradient descent to train the weights.

Training output weights. [Diagram: outputs $y_1, \dots, y_C$ with error signals $g^{(2)}_1, \dots, g^{(2)}_C$; hidden unit $h^{(1)}_k$ connects to each output $c$ with weight $w^{(2)}_{ck}$, and input $x_i$ connects to the hidden unit with weight $w^{(1)}_{ki}$.]

$\dfrac{\partial E^n}{\partial w^{(2)}_{ck}} = g^{(2)}_c\, h^{(1)}_k$

Training MLPs: Error function and required gradients.

Cross-entropy error function: $E^n = -\sum_{c=1}^{C} t^n_c \ln y^n_c$

Required gradients: $\dfrac{\partial E^n}{\partial w^{(2)}_{ck}}$, $\dfrac{\partial E^n}{\partial w^{(1)}_{ki}}$, $\dfrac{\partial E^n}{\partial b^{(2)}_c}$, $\dfrac{\partial E^n}{\partial b^{(1)}_k}$

The gradient for the hidden-to-output weights is similar to the single-layer network:

$\dfrac{\partial E^n}{\partial w^{(2)}_{ck}} = \dfrac{\partial E^n}{\partial a^{(2)}_c}\,\dfrac{\partial a^{(2)}_c}{\partial w^{(2)}_{ck}} = \underbrace{(y_c - t_c)}_{g^{(2)}_c}\, h^{(1)}_k$, where $\dfrac{\partial E^n}{\partial a^{(2)}_c} = \sum_{c'=1}^{C} \dfrac{\partial E^n}{\partial y_{c'}}\,\dfrac{\partial y_{c'}}{\partial a^{(2)}_c}$ since the softmax couples all the outputs.
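The claim that the combined softmax-plus-cross-entropy gradient with respect to the output pre-activations is simply $y - t$ can be checked numerically with finite differences; the following small sketch uses arbitrary test values and an arbitrary tolerance, purely for illustration.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def cross_entropy(a, t):
    return -np.sum(t * np.log(softmax(a)))   # E^n = -sum_c t_c ln y_c

a = np.array([1.0, -0.5, 2.0])               # output pre-activations a^(2)
t = np.array([0.0, 1.0, 0.0])                # one-hot target

analytic = softmax(a) - t                    # claimed gradient g^(2) = y - t
numeric = np.zeros_like(a)
eps = 1e-6
for c in range(len(a)):
    ap, am = a.copy(), a.copy()
    ap[c] += eps
    am[c] -= eps
    numeric[c] = (cross_entropy(ap, t) - cross_entropy(am, t)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))   # True
```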

Back-propagation of error: hidden unit error signal. [Diagram: the output error signals $g^{(2)}_1, \dots, g^{(2)}_C$ are propagated back to hidden unit $k$ through the weights $w^{(2)}_{1k}, \dots, w^{(2)}_{Ck}$.]

$g^{(1)}_k = \Big(\sum_{c} g^{(2)}_c\, w^{(2)}_{ck}\Big)\, h^{(1)}_k\,(1 - h^{(1)}_k) \qquad \dfrac{\partial E^n}{\partial w^{(1)}_{ki}} = g^{(1)}_k\, x_i$

Training MLPs: Input-to-hidden weights.

$\dfrac{\partial E^n}{\partial w^{(1)}_{ki}} = \underbrace{\dfrac{\partial E^n}{\partial a^{(1)}_k}}_{g^{(1)}_k}\,\underbrace{\dfrac{\partial a^{(1)}_k}{\partial w^{(1)}_{ki}}}_{x_i}$

To compute $g^{(1)}_k = \partial E^n / \partial a^{(1)}_k$, the error signal for hidden unit $k$, we must sum over all the output units' contributions to $g^{(1)}_k$:

$g^{(1)}_k = \sum_{c=1}^{C} \dfrac{\partial E^n}{\partial a^{(2)}_c}\,\dfrac{\partial a^{(2)}_c}{\partial a^{(1)}_k} = \sum_{c=1}^{C} g^{(2)}_c\,\dfrac{\partial a^{(2)}_c}{\partial h^{(1)}_k}\,\dfrac{\partial h^{(1)}_k}{\partial a^{(1)}_k} = \Big(\sum_{c=1}^{C} g^{(2)}_c\, w^{(2)}_{ck}\Big)\, h^{(1)}_k\,(1 - h^{(1)}_k)$

Training MLPs: Gradients.

$\dfrac{\partial E^n}{\partial w^{(2)}_{ck}} = \underbrace{(y_c - t_c)}_{g^{(2)}_c}\, h^{(1)}_k \qquad \dfrac{\partial E^n}{\partial w^{(1)}_{ki}} = \underbrace{\Big(\sum_{c=1}^{C} g^{(2)}_c\, w^{(2)}_{ck}\Big)\, h^{(1)}_k\,(1 - h^{(1)}_k)}_{g^{(1)}_k}\; x_i$

Exercise: write down expressions for the gradients w.r.t. the biases, $\partial E^n/\partial b^{(2)}_c$ and $\partial E^n/\partial b^{(1)}_k$.
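These formulas translate almost line for line into NumPy. Below is a minimal sketch for a single training example, assuming the same illustrative one-hidden-layer setup and names (W1, b1, W2, b2) as in the earlier forward-pass sketch; it repeats the forward pass so the stored activations line up with the backward pass, and it also computes the bias gradients, which follow the same pattern as the weight gradients.

```python
import numpy as np

def forward_backward(x, t, W1, b1, W2, b2):
    """One forward and backward pass for a single example.
    x: (d,) input; t: (C,) one-hot target.
    W1: (H, d), b1: (H,); W2: (C, H), b2: (C,)."""
    # Forward pass
    a1 = W1 @ x + b1                         # hidden pre-activations a^(1)
    h1 = 1.0 / (1.0 + np.exp(-a1))           # sigmoid hidden units h^(1)
    a2 = W2 @ h1 + b2                        # output pre-activations a^(2)
    y = np.exp(a2 - a2.max())
    y /= y.sum()                             # softmax outputs y

    # Backward pass (error signals)
    g2 = y - t                               # g^(2)_c = y_c - t_c
    g1 = (W2.T @ g2) * h1 * (1.0 - h1)       # g^(1)_k = (sum_c g^(2)_c w^(2)_ck) h(1-h)

    # Gradients
    dW2 = np.outer(g2, h1)                   # dE/dw^(2)_ck = g^(2)_c h^(1)_k
    db2 = g2                                 # dE/db^(2)_c  = g^(2)_c
    dW1 = np.outer(g1, x)                    # dE/dw^(1)_ki = g^(1)_k x_i
    db1 = g1                                 # dE/db^(1)_k  = g^(1)_k
    return y, (dW1, db1, dW2, db2)
```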

Back-propagation of error: hidden unit error signal. [The hidden-unit error-signal slide above is shown again here as a recap.]

Back-propagation of error. The back-propagation of error algorithm is summarised as follows:
1. Apply an input vector from the training set, $x$, to the network and forward propagate to obtain the output vector $y$
2. Using the target vector $t$, compute the error $E^n$
3. Evaluate the error gradients $g^{(2)}$ for each output unit
4. Evaluate the error gradients $g^{(1)}$ for each hidden unit using back-propagation of error
5. Evaluate the derivatives for each training pattern
Back-propagation can be extended to multiple hidden layers, in each case computing the $g^{(l)}$'s for the current layer as a weighted sum of the $g^{(l+1)}$'s of the next layer.
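Putting the five steps together with a gradient-descent parameter update, one training step might be sketched as follows; this reuses the hypothetical forward_backward function from the previous sketch, and the learning rate value is arbitrary, not taken from the lecture.

```python
def sgd_step(x, t, params, lr=0.1):
    """One back-propagation training step for a single (x, t) pair:
    forward propagate, compute the error signals and gradients,
    then take a small step downhill on every parameter."""
    W1, b1, W2, b2 = params
    _, (dW1, db1, dW2, db2) = forward_backward(x, t, W1, b1, W2, b2)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2
    return W1, b1, W2, b2
```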

Training with multiple hidden layers. [Diagram: inputs $x_i$ feed hidden layer 1 (outputs $h^{(1)}$, error signals $g^{(1)}$), which feeds hidden layer 2 (outputs $h^{(2)}_1, \dots, h^{(2)}_H$, error signals $g^{(2)}$), which feeds the outputs $y_1, \dots, y_C$ (error signals $g^{(3)}$).]

$g^{(2)}_k = \Big(\sum_{m} g^{(3)}_m\, w^{(3)}_{mk}\Big)\, h^{(2)}_k\,(1 - h^{(2)}_k) \qquad \dfrac{\partial E^n}{\partial w^{(2)}_{kr}} = g^{(2)}_k\, h^{(1)}_r$
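This layer-by-layer recursion can be written as a single loop that walks down the stack; here is a minimal NumPy sketch assuming sigmoid hidden layers, with Ws a list of weight matrices and hs a list of layer outputs (both names are illustrative assumptions).

```python
import numpy as np

def backprop_error_signals(g_out, Ws, hs):
    """Propagate error signals from the top layer downwards.
    Ws = [W1, ..., WL], where Wl maps layer l-1 to layer l.
    hs = [x, h1, ..., h_{L-1}, y] are the layer outputs (hs[0] is the input).
    g_out = g^(L) is the output-layer error signal (e.g. y - t).
    Returns [g^(1), ..., g^(L)] for sigmoid hidden layers."""
    L = len(Ws)
    gs = [None] * L
    gs[L - 1] = g_out
    for l in range(L - 1, 0, -1):          # layers L-1, ..., 1
        h = hs[l]                          # sigmoid outputs of layer l
        # g^(l) is a weighted sum of the g^(l+1)'s of the layer above,
        # times the sigmoid derivative h(1 - h) of layer l
        gs[l - 1] = (Ws[l].T @ gs[l]) * h * (1.0 - h)
    return gs
```

The per-layer weight gradients then follow as before, e.g. $\partial E^n / \partial W^{(l)} = g^{(l)} \, (h^{(l-1)})^{\top}$, an outer product of a layer's error signal with the previous layer's output.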

Are there alternatives to Sigmoid Hidden Units?

Sigmoid function. [Plot of the logistic sigmoid activation function $g(a) = 1/(1 + \exp(-a))$ over $a \in [-5, 5]$, rising smoothly from 0 to 1.]

Sigmoid hidden units:
- Compress unbounded inputs to (0, 1), saturating high magnitudes to 1
- Interpretable as the probability of a feature defined by their weight vector
- Interpretable as the (normalised) firing rate of a neuron
However...
- Saturation causes gradients to approach 0: if the output of a sigmoid unit is $h$, then the gradient is $h(1 - h)$, which approaches 0 as $h$ saturates to 0 or 1, and hence the gradients it multiplies into approach 0. Very small gradients result in very small parameter changes, so learning becomes very slow.
- Outputs are not centred at 0: the output of a sigmoid layer will have mean > 0, which is numerically undesirable.

tanh.

$\tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \qquad \mathrm{sigmoid}(x) = \dfrac{1 + \tanh(x/2)}{2}$

Derivative: $\dfrac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$
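The relation between the logistic sigmoid and tanh is easy to check numerically; a quick sketch (the test values are arbitrary):

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 9)
sigmoid = 1.0 / (1.0 + np.exp(-x))
via_tanh = (1.0 + np.tanh(x / 2.0)) / 2.0   # sigmoid(x) = (1 + tanh(x/2)) / 2
print(np.allclose(sigmoid, via_tanh))        # True
```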

tanh hidden units:
- tanh has the same shape as sigmoid but has output range ±1
- Results about the approximation capability of sigmoid networks also apply to tanh networks
- Possible reason to prefer tanh over sigmoid: allowing units to be positive or negative allows the gradient for weights into a hidden unit to have a different sign
- Saturation is still a problem
[Diagram: two stacked tanh hidden layers, $h^{(1)}_1, \dots, h^{(1)}_H$ and $h^{(2)}_1, \dots, h^{(2)}_H$.]

Rectified Linear Unit (ReLU).

$\mathrm{relu}(x) = \max(0, x)$

Derivative: $\dfrac{d}{dx}\mathrm{relu}(x) = \begin{cases} 0 & \text{if } x \le 0 \\ 1 & \text{if } x > 0 \end{cases}$
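For a side-by-side comparison, here is a minimal NumPy sketch of the sigmoid, tanh and ReLU derivatives used during back-propagation; the printed values for an arbitrary test vector illustrate how the sigmoid and tanh derivatives vanish at large $|a|$ while the ReLU derivative stays at 1 for positive inputs. The function names are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def d_sigmoid(a):
    h = sigmoid(a)
    return h * (1.0 - h)           # h(1 - h): vanishes as h -> 0 or 1

def d_tanh(a):
    return 1.0 - np.tanh(a) ** 2   # 1 - tanh^2(a): vanishes at large |a|

def relu(a):
    return np.maximum(0.0, a)

def d_relu(a):
    return (a > 0).astype(float)   # 1 for a > 0, 0 otherwise

a = np.array([-5.0, 0.5, 5.0])
print(d_sigmoid(a))   # ~[0.0066, 0.2350, 0.0066]  (saturates)
print(d_tanh(a))      # ~[0.0002, 0.7864, 0.0002]  (saturates)
print(d_relu(a))      # [0. 1. 1.]                 (no positive saturation)
```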

ReLU hidden units:
- Similar approximation results to tanh and sigmoid hidden units
- Empirical results for speech and vision show consistent improvements using ReLU over sigmoid or tanh
- Unlike tanh or sigmoid, there is no positive saturation (saturation results in very small derivatives, and hence slower learning)
- Negative input to ReLU results in zero gradient (and hence no learning)
- ReLU is computationally efficient: max(0, x)
- ReLU units can die (i.e. respond with 0 to everything)
- ReLU units can be very sensitive to the learning rate

Summary:
- Understanding what single-layer networks compute
- How multi-layer networks allow feature computation
- Training multi-layer networks using back-propagation of error
- Tanh and ReLU activation functions
- Multi-layer networks are also referred to as deep neural networks or multi-layer perceptrons
Reading: Nielsen, chapter 2; Goodfellow, sections 6.3, 6.4, 6.5; Bishop, sections 3.1, 3.2, and chapter 4.