Deep Neural Networks (1) Hidden layers; Back-propagation
1 Deep Neural Networks (1): Hidden layers; Back-propagation. Steve Renals, Machine Learning Practical, MLP Lecture 3, 2 October 2018
2 Recap: Softmax single-layer network (diagram: inputs fully connected to a softmax output layer with units for class 1, class 2, class 3)
3 What Do Neural Networks Do?
4 What Do Single Layer Neural Networks Do?
5 Single-layer network: 1 output, 2 inputs (diagram: inputs $x_1$, $x_2$ weighted and summed into a single output unit)
6 Geometric interpretation. Single-layer network, 1 output, 2 inputs: the decision boundary $y(\mathbf{x}; \mathbf{w}) = 0$ is a line in the $(x_1, x_2)$ plane, with the weight vector $\mathbf{w}$ normal to the boundary and the bias $b$ setting its offset (Bishop, sec 3.1)
7 Single-layer network: 3 outputs, 2 inputs (diagram: inputs $x_1$, $x_2$ fully connected to three output units)
8 Example data (three classes): scatter plot of the data
9 Classification regions with a single-layer network (plot of decision regions). Single-layer networks are limited to linear classification boundaries
10 Single-layer network trained on MNIST digits: 10 outputs, a 784x10 weight matrix, 28x28 inputs. The weights of each output unit define a template for the class
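To make the template view concrete, here is a minimal NumPy sketch of a single-layer softmax classifier on flattened 28x28 inputs; the random weights and input are placeholders, not a trained model:

```python
import numpy as np

def softmax(a):
    # shift by the max for numerical stability before exponentiating
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(784, 10))   # one 784-dim template per class
b = np.zeros(10)

x = rng.random(784)        # stands in for a flattened 28x28 MNIST image
y = softmax(x @ W + b)     # inner product with each class template, then softmax
print(int(y.argmax()))     # classify to the closest template (maximum output)
```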
11 Hinton Diagrams: visualise the weights for class $k$ (20x20 inputs)
12 Hinton diagram for single-layer network trained on MNIST. Weights for each class act as a discriminative template. Inner product of class weights and input measures closeness to each template. Classify to the closest template (maximum value output). What are the pros and cons of using a single-layer network for MNIST classification?
15 Multi-Layer Networks
16 From templates to features. Good classification needs to cope with the variability of real data: scale, skew, rotation, translation, ... Very difficult to do with a single template per class. Could have multiple templates per task... this will work, but we can do better: use features rather than templates (images from: Nielsen, chapter 1)
18 Incorporating features in neural network architecture. Layered processing: inputs - features - classification. How to obtain features? - learning!
20 Incorporating features in neural network architecture (diagram: input layer, sigmoid hidden layer, softmax output layer)
23 Multi-layer network (diagram: inputs $x_i$ pass through an affine layer with weights $w^{(1)}$, biases $b^{(1)}$ and pre-activations $a^{(1)}$ (AffineLayer.fprop), then a sigmoid hidden layer $h^{(1)}$ (SigmoidLayer.fprop); the hidden layer feeds a second affine layer with $w^{(2)}$, $b^{(2)}$, $a^{(2)}$ (AffineLayer.fprop), then softmax outputs $y$ (SoftmaxLayer.fprop)):
$$y_k = \mathrm{softmax}\Big(\sum_{r=1}^{H} w^{(2)}_{kr} h^{(1)}_r + b^{(2)}_k\Big) \qquad h^{(1)}_j = \mathrm{sigmoid}\Big(\sum_{s=1}^{d} w^{(1)}_{js} x_s + b^{(1)}_j\Big)$$
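A minimal sketch of this forward pass for a single input vector (NumPy; weight shapes d x H and H x K are assumed, so the affine steps are written as x @ W rather than W x):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def fprop(x, W1, b1, W2, b2):
    a1 = x @ W1 + b1       # AffineLayer.fprop: a^(1) = W^(1) x + b^(1)
    h1 = sigmoid(a1)       # SigmoidLayer.fprop: h^(1) = sigmoid(a^(1))
    a2 = h1 @ W2 + b2      # AffineLayer.fprop: a^(2) = W^(2) h^(1) + b^(2)
    y = softmax(a2)        # SoftmaxLayer.fprop: y = softmax(a^(2))
    return h1, y
```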
24 Multi-layer network for MNIST (image from: Michael Nielsen, Neural Networks and Deep Learning)
25 Training multi-layer networks: Credit assignment. Hidden units make training the weights more complicated, since the hidden units affect the error function indirectly via all the outputs. The credit assignment problem: what is the error of a hidden unit? how important is input-hidden weight $w^{(1)}_{ji}$ to output unit $k$? What is the gradient of the error with respect to each weight? (How to compute grads wrt params?) Solution: back-propagation of error (backprop). Backprop enables the gradients to be computed; these gradients are used by gradient descent to train the weights
28 Training output weights (diagram: outputs $y_1, \ldots, y_k, \ldots, y_K$ with error signals $g^{(2)}_1, \ldots, g^{(2)}_K$ (error.grad); hidden units $h^{(1)}$ connected to the outputs by weights $w^{(2)}$; inputs $x_i$ connected to the hidden units by weights $w^{(1)}_{ji}$). The gradient for a hidden-to-output weight is the product of the output's error signal and the hidden activation (grads_wrt_params):
$$\frac{\partial E^n}{\partial w^{(2)}_{kj}} = g^{(2)}_k h^{(1)}_j$$
30 Training MLPs: Error function and required gradients. Cross-entropy error function:
$$E^n = -\sum_{k=1}^{C} t^n_k \ln y^n_k$$
Required gradients (grads wrt params): $\partial E^n/\partial w^{(2)}_{kj}$, $\partial E^n/\partial w^{(1)}_{ji}$, $\partial E^n/\partial b^{(2)}_k$, $\partial E^n/\partial b^{(1)}_j$. Gradient for hidden-to-output weights is similar to the single-layer network:
$$\underbrace{\frac{\partial E^n}{\partial w^{(2)}_{kj}}}_{\text{grads wrt params}} = \Big(\sum_{c=1}^{C}\frac{\partial E^n}{\partial y_c}\frac{\partial y_c}{\partial a^{(2)}_k}\Big)\frac{\partial a^{(2)}_k}{\partial w^{(2)}_{kj}} = \underbrace{(y_k - t_k)}_{\text{error.grad}}\, h^{(1)}_j$$
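As a sketch (one-hot target vector t assumed, names matching the earlier forward-pass sketch), the hidden-to-output gradients reduce to an outer product of the output error signal with the hidden activations:

```python
import numpy as np

def output_grads(y, t, h1):
    # g^(2) = y - t: error.grad for softmax outputs with cross-entropy error
    g2 = y - t
    grad_W2 = np.outer(h1, g2)   # dE/dW2[j, k] = h1[j] * g2[k]
    grad_b2 = g2                 # dE/db2[k] = g2[k]
    return g2, grad_W2, grad_b2
```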
32 Back-propagation of error: hidden unit error signal (diagram: outputs $y_1, \ldots, y_K$ with error signals $g^{(2)}_1, \ldots, g^{(2)}_K$ (error.grad) feed back through weights $w^{(2)}_{1j}, \ldots, w^{(2)}_{Kj}$ to hidden unit $h_j$; AffineLayer.bprop then SigmoidLayer.bprop give):
$$g^{(1)}_j = \Big(\sum_{l=1}^{K} g^{(2)}_l w^{(2)}_{lj}\Big) h^{(1)}_j (1 - h^{(1)}_j) \qquad \underbrace{\frac{\partial E^n}{\partial w^{(1)}_{ji}} = g^{(1)}_j x_i}_{\text{grads\_wrt\_params}}$$
33 Training MLPs: Input-to-hidden weights
$$\underbrace{\frac{\partial E^n}{\partial w^{(1)}_{ji}}}_{\text{grads wrt params}} = \underbrace{\frac{\partial E^n}{\partial a^{(1)}_j}}_{g^{(1)}_j}\ \underbrace{\frac{\partial a^{(1)}_j}{\partial w^{(1)}_{ji}}}_{x_i}$$
To compute $g^{(1)}_j = \partial E^n / \partial a^{(1)}_j$, the error signal for hidden unit $j$, sum the contributions to $g^{(1)}_j$ over all $K$ output units:
$$g^{(1)}_j = \sum_{c=1}^{K}\frac{\partial E^n}{\partial a^{(2)}_c}\frac{\partial a^{(2)}_c}{\partial a^{(1)}_j} = \sum_{c=1}^{K} g^{(2)}_c \frac{\partial a^{(2)}_c}{\partial h^{(1)}_j}\frac{\partial h^{(1)}_j}{\partial a^{(1)}_j} = \underbrace{\Big(\sum_{c=1}^{K} g^{(2)}_c w^{(2)}_{cj}\Big)}_{\text{AffineLayer.bprop}}\ \underbrace{h^{(1)}_j (1 - h^{(1)}_j)}_{\text{SigmoidLayer.bprop}}$$
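A sketch of the hidden-unit error signal and the resulting input-to-hidden gradients, continuing the earlier sketches (same shape conventions, W2 of shape H x K):

```python
import numpy as np

def hidden_grads(g2, W2, h1, x):
    # AffineLayer.bprop: back-propagate g^(2) through the weights W^(2),
    # then SigmoidLayer.bprop: multiply by the sigmoid derivative h(1 - h)
    g1 = (W2 @ g2) * h1 * (1.0 - h1)
    grad_W1 = np.outer(x, g1)    # dE/dW1[i, j] = x[i] * g1[j]
    grad_b1 = g1                 # dE/db1[j] = g1[j]
    return g1, grad_W1, grad_b1
```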
35 Training MLPs: Gradients. grads wrt params:
$$\frac{\partial E^n}{\partial w^{(2)}_{kj}} = \underbrace{(y_k - t_k)}_{\text{CrossEntropySoftmaxError.grad}}\, h^{(1)}_j$$
$$\frac{\partial E^n}{\partial w^{(1)}_{ji}} = \underbrace{\Big(\sum_{c=1}^{K} g^{(2)}_c w^{(2)}_{cj}\Big)}_{\text{AffineLayer.bprop}}\ \underbrace{h^{(1)}_j (1 - h^{(1)}_j)}_{\text{SigmoidLayer.bprop}}\, x_i$$
Exercise: write down expressions for the gradients w.r.t. the biases, $\partial E^n/\partial b^{(2)}_k$ and $\partial E^n/\partial b^{(1)}_j$
38 Back-propagation of error. The back-propagation of error algorithm is summarised as follows (a minimal end-to-end sketch follows after this slide): 1. Apply an input vector from the training set, x, to the network and forward propagate to obtain the output vector y. 2. Using the target vector t, compute the error $E^n$. 3. Evaluate the error gradients $g^{(2)}$ for each output unit using error.grad. 4. Evaluate the error gradients $g^{(1)}$ for each hidden unit using AffineLayer.bprop and SigmoidLayer.bprop. 5. Evaluate the derivatives (grads wrt params) for each training pattern. Back-propagation can be extended to multiple hidden layers, in each case computing the $g^{(l)}$s for the current layer as a weighted sum of the $g^{(l+1)}$s of the next layer
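Putting the five steps together, a minimal per-pattern gradient-descent step, reusing fprop, output_grads and hidden_grads from the sketches above (illustrative only, not the course framework's training loop):

```python
import numpy as np

def train_step(x, t, params, lr=0.1):
    W1, b1, W2, b2 = params
    h1, y = fprop(x, W1, b1, W2, b2)        # step 1: forward propagate
    E_n = -np.sum(t * np.log(y))            # step 2: cross-entropy error E^n
    g2, grad_W2, grad_b2 = output_grads(y, t, h1)       # step 3: g^(2) via error.grad
    g1, grad_W1, grad_b1 = hidden_grads(g2, W2, h1, x)  # step 4: g^(1) via bprop
    # step 5: the grads wrt params drive a gradient descent update
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    return E_n, (W1, b1, W2, b2)
```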
39 Training with multiple hidden layers (diagram: inputs $x_i$ pass through two sigmoid hidden layers $h^{(1)}_1, \ldots, h^{(1)}_H$ and $h^{(2)}_1, \ldots, h^{(2)}_H$ to softmax outputs $y_1, \ldots, y_K$; forward pass via AffineLayer.fprop, SigmoidLayer.fprop and SoftmaxLayer.fprop, backward pass via CrossEntropySoftmaxError.grad, AffineLayer.bprop and SigmoidLayer.bprop, producing error signals $g^{(3)}$, $g^{(2)}$, $g^{(1)}$):
$$g^{(2)}_k = \Big(\sum_m g^{(3)}_m w^{(3)}_{mk}\Big) h^{(2)}_k (1 - h^{(2)}_k) \qquad \frac{\partial E^n}{\partial w^{(2)}_{kj}} = g^{(2)}_k h^{(1)}_j$$
42 Are there alternatives to Sigmoid Hidden Units?
43 Sigmoid function. Logistic sigmoid activation function (plot of $g(a)$ against $a$):
$$g(a) = \frac{1}{1 + \exp(-a)}$$
44 Sigmoid Hidden Units (SigmoidLayer). Compress unbounded inputs to (0,1), saturating high magnitudes to 1. Interpretable as the probability of a feature defined by the unit's weight vector. Interpretable as the (normalised) firing rate of a neuron. However... Saturation causes gradients to approach 0: if the output of a sigmoid unit is $h$, then the gradient is $h(1-h)$, which approaches 0 as $h$ saturates to 0 or 1, hence the gradients it multiplies into approach 0. Small gradients result in small parameter changes, so learning becomes slow. Outputs are not centred at 0: the output of a sigmoid layer will have mean > 0, which is numerically undesirable
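A small numerical sketch of the saturation problem:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# the gradient factor h(1 - h) peaks at 0.25 (a = 0) and collapses as the
# unit saturates, so weights feeding a saturated unit barely change
for a in [0.0, 2.0, 5.0, 10.0]:
    h = sigmoid(a)
    print(f"a = {a:5.1f}   h = {h:.6f}   h(1-h) = {h * (1.0 - h):.2e}")
```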
46 tanh
$$\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \qquad \mathrm{sigmoid}(x) = \frac{1 + \tanh(x/2)}{2}$$
Derivative: $\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$
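A quick numerical sketch checking the two identities above:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 7)
sig = 1.0 / (1.0 + np.exp(-x))

# sigmoid(x) = (1 + tanh(x/2)) / 2
assert np.allclose(sig, (1.0 + np.tanh(x / 2.0)) / 2.0)

# d/dx tanh(x) = 1 - tanh(x)^2: the factor a tanh layer's bprop multiplies in;
# like the sigmoid's h(1 - h), it also approaches 0 as |x| grows
h = np.tanh(x)
print(1.0 - h ** 2)
```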
47 tanh hidden units (TanhLayer). tanh has the same shape as sigmoid but has output range (-1, 1). Results about approximation capability using sigmoid layers also apply to tanh layers. Possible reason to prefer tanh over sigmoid: allowing hidden units to be positive or negative allows the gradients for the weights into a hidden unit to have different signs (diagram: two hidden layers $h^{(1)}$, $h^{(2)}$ with error signal $g^{(2)}$). Saturation is still a problem
51 Rectified Linear Unit (ReLU)
$$\mathrm{relu}(x) = \max(0, x) \qquad \frac{d}{dx}\mathrm{relu}(x) = \begin{cases} 0 & \text{if } x \le 0 \\ 1 & \text{if } x > 0 \end{cases}$$
52 ReLU hidden units (ReluLayer). Similar approximation results to tanh and sigmoid hidden units. Empirical results for speech and vision show consistent improvements using relu over sigmoid or tanh. Unlike tanh or sigmoid there is no positive saturation, and saturation results in very small derivatives (and hence slower learning). However: negative input to relu results in zero gradient (and hence no learning). Relu is computationally efficient: max(0, x). Relu units can die (i.e. respond with 0 to everything). Relu units can be very sensitive to the learning rate
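A sketch of a relu fprop/bprop pair in the same style as the earlier sketches (function names are illustrative, not the framework's):

```python
import numpy as np

def relu_fprop(a):
    return np.maximum(0.0, a)

def relu_bprop(a, grads_wrt_outputs):
    # derivative is 1 where a > 0 and 0 elsewhere, so negative inputs
    # pass back zero gradient: this is how relu units can "die"
    return (a > 0).astype(a.dtype) * grads_wrt_outputs

a = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu_fprop(a))                    # [0.  0.  0.  0.5 2. ]
print(relu_bprop(a, np.ones_like(a)))   # [0. 0. 0. 1. 1.]
```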
54 Lab 3: 03 Multiple layer models. Lab 3 covers how multiple layer networks are modelled in the mlp framework (a toy sketch of the layer idea follows after this slide): Representing activation functions as layers, applied after a linear (affine) layer. Stacking layers together to build multi-layer networks with non-linear activation functions (e.g. sigmoid) for the hidden layers. Using Layer.fprop methods to implement the forward propagation and Layer.bprop methods to implement back-propagation to compute the gradients. Training softmax models on MNIST. Training deeper multi-layer networks on MNIST. Using various non-linear activation functions for the hidden layer (sigmoid, tanh, relu)
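Purely to illustrate the fprop/bprop layer idea before the lab, a toy sketch (the interface loosely mirrors the mlp framework's, but the details here are assumptions; see the lab notebooks for the real classes):

```python
import numpy as np

class AffineLayer:
    """Toy affine layer: outputs = inputs . W + b."""
    def __init__(self, W, b):
        self.W, self.b = W, b
    def fprop(self, inputs):
        return inputs @ self.W + self.b
    def bprop(self, inputs, outputs, grads_wrt_outputs):
        # propagate gradients back to this layer's inputs
        return grads_wrt_outputs @ self.W.T

class SigmoidLayer:
    """Toy elementwise sigmoid layer, applied after an affine layer."""
    def fprop(self, inputs):
        return 1.0 / (1.0 + np.exp(-inputs))
    def bprop(self, inputs, outputs, grads_wrt_outputs):
        # multiply incoming gradients by the sigmoid derivative h(1 - h)
        return grads_wrt_outputs * outputs * (1.0 - outputs)
```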
55 Summary. Understanding what single-layer networks compute. How multi-layer networks allow feature computation. Training multi-layer networks using back-propagation of error. Tanh and ReLU activation functions. Multi-layer networks are also referred to as deep neural networks or multi-layer perceptrons. Reading: Nielsen, chapter 2; Goodfellow, sections 6.3, 6.4, 6.5; Bishop, sections 3.1, 3.2, and chapter 4