CSC 411 Lecture 10: Neural Networks

CSC 411 Lecture 10: Neural Networks. Roger Grosse, Amir-massoud Farahmand, and Juan Carrasquilla. University of Toronto.

Inspiration: The Brain. Our brain has about 10^11 neurons, each of which communicates with (is connected to) about 10^4 other neurons. Figure: The basic computational unit of the brain: the neuron. [Pic credit: http://cs231n.github.io/neural-networks-1/]

Inspiration: The Brain. For neural nets, we use a much simpler model neuron, or unit. Compare with logistic regression: y = σ(w^T x + b). By throwing together lots of these incredibly simplistic neuron-like processing units, we can do some powerful computations!
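As a concrete sketch (the weights, bias, and input below are made-up numbers purely for illustration), a single unit is just a weighted sum passed through a nonlinearity, the same form as logistic regression:

```python
import numpy as np

def sigmoid(z):
    """Logistic nonlinearity."""
    return 1.0 / (1.0 + np.exp(-z))

# A single model neuron (unit): weighted sum of inputs, plus a bias, through a nonlinearity.
w = np.array([0.5, -1.2, 0.3])   # one weight per input (made-up values)
b = 0.1                          # bias
x = np.array([1.0, 0.0, 2.0])    # input vector

y = sigmoid(w @ x + b)           # y = sigma(w^T x + b)
print(y)
```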

Multilayer Perceptrons. We can connect lots of units together into a directed acyclic graph. This gives a feed-forward neural network. That's in contrast to recurrent neural networks, which can have cycles. Typically, units are grouped together into layers.

Multilayer Perceptrons. Each layer connects N input units to M output units. In the simplest case, all input units are connected to all output units. We call this a fully connected layer. We'll consider other layer types later. Note: the inputs and outputs for a layer are distinct from the inputs and outputs to the network. Recall from softmax regression: this means we need an M × N weight matrix. The output units are a function of the input units: y = f(x) = φ(Wx + b). A multilayer network consisting of fully connected layers is called a multilayer perceptron. Despite the name, it has nothing to do with perceptrons!
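A minimal sketch of one fully connected layer (the sizes, the random parameters, and the choice of tanh as φ are my own assumptions for illustration):

```python
import numpy as np

def fully_connected(x, W, b, phi):
    """One fully connected layer: y = phi(W x + b), where W has shape (M, N)."""
    return phi(W @ x + b)

rng = np.random.default_rng(0)
N, M = 4, 3
W = rng.normal(size=(M, N))   # M x N weight matrix, as in softmax regression
b = np.zeros(M)
x = rng.normal(size=N)        # this layer's input (not necessarily the network's input)

y = fully_connected(x, W, b, np.tanh)
print(y.shape)                # (3,)
```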

Multilayer Perceptrons. Some activation functions: Linear: y = z. Rectified Linear Unit (ReLU): y = max(0, z). Soft ReLU: y = log(1 + e^z).

Multilayer Perceptrons. Some activation functions: Hard Threshold: y = 1 if z > 0, y = 0 if z ≤ 0. Logistic: y = 1 / (1 + e^(−z)). Hyperbolic Tangent (tanh): y = (e^z − e^(−z)) / (e^z + e^(−z)).
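A quick sketch of these activation functions in NumPy, evaluated on a few arbitrary pre-activations z:

```python
import numpy as np

def linear(z):         return z
def relu(z):           return np.maximum(0.0, z)            # max(0, z)
def soft_relu(z):      return np.log1p(np.exp(z))           # log(1 + e^z)
def hard_threshold(z): return (z > 0).astype(float)         # 1 if z > 0 else 0
def logistic(z):       return 1.0 / (1.0 + np.exp(-z))
def tanh(z):           return np.tanh(z)                    # (e^z - e^-z) / (e^z + e^-z)

z = np.linspace(-3, 3, 7)
for f in (linear, relu, soft_relu, hard_threshold, logistic, tanh):
    print(f.__name__, np.round(f(z), 3))
```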

Multilayer Perceptrons Designing a network to compute XOR: Assume hard threshold activation function

Multilayer Perceptrons. h_1 computes x_1 OR x_2; h_2 computes x_1 AND x_2; y computes h_1 AND (NOT h_2).
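One concrete set of weights and biases that implements this construction with hard-threshold units (the particular numbers are my own choice for illustration, not necessarily the ones on the slide):

```python
import numpy as np

def hard_threshold(z):
    return (z > 0).astype(float)

def xor_net(x1, x2):
    """XOR built from hard-threshold units: (x1 OR x2) AND NOT (x1 AND x2)."""
    x = np.array([x1, x2], dtype=float)
    h1 = hard_threshold(np.array([1.0, 1.0]) @ x - 0.5)   # h1 = x1 OR x2
    h2 = hard_threshold(np.array([1.0, 1.0]) @ x - 1.5)   # h2 = x1 AND x2
    y  = hard_threshold(1.0 * h1 - 1.0 * h2 - 0.5)        # y = h1 AND (NOT h2)
    return int(y)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # last column: 0, 1, 1, 0
```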

Multilayer Perceptrons. Each layer computes a function, so the network computes a composition of functions: h^(1) = f^(1)(x), h^(2) = f^(2)(h^(1)), ..., y = f^(L)(h^(L−1)). Or more simply: y = f^(L) ∘ ··· ∘ f^(1)(x). Neural nets provide modularity: we can implement each layer's computations as a black box.

Feature Learning. Neural nets can be viewed as a way of learning features: [figure]. The goal: [figure].

Feature Learning. Suppose we're trying to classify images of handwritten digits. Each image is represented as a vector of 28 × 28 = 784 pixel values. Each first-layer hidden unit computes σ(w_i^T x). It acts as a feature detector. We can visualize w by reshaping it into an image. Here's an example that responds to a diagonal stroke.
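A minimal sketch of the visualization step (the weight vector here is random, just to show the reshaping; a trained w would show a stroke-like pattern):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for a first-layer weight vector over a 28 x 28 input.
w = np.random.default_rng(0).normal(size=784)

plt.imshow(w.reshape(28, 28), cmap="gray")
plt.title("First-layer weights reshaped to 28 x 28")
plt.colorbar()
plt.show()
```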

Feature Learning Here are some of the features learned by the first hidden layer of a handwritten digit classifier:

Expressive Power. We've seen that there are some functions that linear classifiers can't represent. Are deep networks any better? Any sequence of linear layers can be equivalently represented with a single linear layer: y = W^(3) W^(2) W^(1) x = W′ x, where W′ = W^(3) W^(2) W^(1). Deep linear networks are no more expressive than linear regression! Linear layers do have their uses; stay tuned!
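A quick numerical check of this collapse (the layer sizes and random matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(6, 5))
W3 = rng.normal(size=(2, 6))
x  = rng.normal(size=4)

deep   = W3 @ (W2 @ (W1 @ x))   # three linear layers applied in sequence
single = (W3 @ W2 @ W1) @ x     # one equivalent linear layer W' = W3 W2 W1

print(np.allclose(deep, single))  # True: a stack of linear layers is just one linear layer
```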

Expressive Power. Multilayer feed-forward neural nets with nonlinear activation functions are universal function approximators: they can approximate any function arbitrarily well. This has been shown for various activation functions (thresholds, logistic, ReLU, etc.). Even though ReLU is almost linear, it's nonlinear enough!

Expressive Power. Universality for binary inputs and targets: hard threshold hidden units, linear output. Strategy: 2^D hidden units, each of which responds to one particular input configuration. Only requires one hidden layer, though it needs to be extremely wide!
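Here is a sketch of that strategy as code: one hard-threshold hidden unit per input configuration, plus a linear output layer. The specific weights and biases are one possible choice of mine, not something taken from the slides:

```python
import numpy as np
from itertools import product

def hard_threshold(z):
    return (z > 0).astype(float)

def build_lookup_net(target):
    """Build a one-hidden-layer net matching `target`, a dict mapping each
    binary input configuration (a tuple of 0/1) to a 0/1 output."""
    D = len(next(iter(target)))
    configs = list(product([0, 1], repeat=D))              # all 2^D configurations
    W = np.array([[2 * c - 1 for c in cfg] for cfg in configs], dtype=float)
    b = np.array([0.5 - sum(cfg) for cfg in configs])      # unit i fires iff x == configs[i]
    v = np.array([float(target[cfg]) for cfg in configs])  # linear output weights

    def net(x):
        h = hard_threshold(W @ np.asarray(x, dtype=float) + b)
        return float(v @ h)
    return net

# Example: hard-code XOR as a lookup table over all 2^2 configurations.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
net = build_lookup_net(xor)
print([net(x) for x in xor])   # [0.0, 1.0, 1.0, 0.0]
```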

Expressive Power. What about the logistic activation function? You can approximate a hard threshold by scaling up the weights and biases: compare y = σ(x) with y = σ(5x) (figure). This is good: logistic units are differentiable, so we can train them with gradient descent. (Stay tuned!)
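A small numerical illustration of the scaling argument (the scale factors and evaluation points are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-1.0, -0.1, 0.1, 1.0])
for scale in (1, 5, 50):
    print(scale, np.round(sigmoid(scale * z), 3))
# As the scale grows, sigmoid(scale * z) approaches the hard threshold: 0 for z < 0, 1 for z > 0.
```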

Expressive Power. Limits of universality: You may need to represent an exponentially large network. If you can learn any function, you'll just overfit. Really, we desire a compact representation! We've derived units which compute the functions AND, OR, and NOT. Therefore, any Boolean circuit can be translated into a feed-forward neural net. This suggests you might be able to learn compact representations of some complicated functions.

Training neural networks with backpropagation

Recap: Gradient Descent. Recall: gradient descent moves opposite the gradient (the direction of steepest descent). Weight space for a multilayer neural net: one coordinate for each weight or bias of the network, in all the layers. Conceptually, this is no different from what we've seen so far, just higher dimensional and harder to visualize! We want to compute the cost gradient dJ/dw, which is the vector of partial derivatives. This is the average of dL/dw over all the training examples, so in this lecture we focus on computing dL/dw.

Univariate Chain Rule. We've already been using the univariate Chain Rule. Recall: if f(x) and x(t) are univariate functions, then d/dt f(x(t)) = (df/dx)(dx/dt).

Univariate Chain Rule. Recall the univariate logistic least squares model: z = wx + b, y = σ(z), L = (1/2)(y − t)^2. Let's compute the loss derivatives.

Univariate Chain Rule. How you would have done it in calculus class:

L = (1/2)(σ(wx + b) − t)^2

∂L/∂w = ∂/∂w [(1/2)(σ(wx + b) − t)^2]
      = (σ(wx + b) − t) ∂/∂w (σ(wx + b) − t)
      = (σ(wx + b) − t) σ′(wx + b) ∂/∂w (wx + b)
      = (σ(wx + b) − t) σ′(wx + b) x

∂L/∂b = ∂/∂b [(1/2)(σ(wx + b) − t)^2]
      = (σ(wx + b) − t) ∂/∂b (σ(wx + b) − t)
      = (σ(wx + b) − t) σ′(wx + b) ∂/∂b (wx + b)
      = (σ(wx + b) − t) σ′(wx + b)

What are the disadvantages of this approach?

Univariate Chain Rule. A more structured way to do it. Computing the loss: z = wx + b, y = σ(z), L = (1/2)(y − t)^2. Computing the derivatives: dL/dy = y − t, dL/dz = (dL/dy) σ′(z), ∂L/∂w = (dL/dz) x, ∂L/∂b = dL/dz. Remember, the goal isn't to obtain closed-form solutions, but to be able to write a program that efficiently computes the derivatives.
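A sketch of such a program for this model, with a finite-difference check on made-up numbers (the specific values of w, b, x, t are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_derivatives(w, b, x, t):
    """Structured derivative computation for univariate logistic least squares."""
    # Forward pass
    z = w * x + b
    y = sigmoid(z)
    L = 0.5 * (y - t) ** 2
    # Backward pass, one chain-rule step at a time
    dL_dy = y - t
    dL_dz = dL_dy * y * (1 - y)   # sigma'(z) = sigma(z) * (1 - sigma(z))
    dL_dw = dL_dz * x
    dL_db = dL_dz
    return L, dL_dw, dL_db

w, b, x, t = 0.7, -0.3, 2.0, 1.0
L, dL_dw, dL_db = loss_and_derivatives(w, b, x, t)
eps = 1e-6
numeric_dw = (loss_and_derivatives(w + eps, b, x, t)[0] - L) / eps
print(round(dL_dw, 5), round(numeric_dw, 5))   # should agree closely
```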

Univariate Chain Rule We can diagram out the computations using a computation graph. The nodes represent all the inputs and computed quantities, and the edges represent which nodes are computed directly as a function of which other nodes.

Univariate Chain Rule. A slightly more convenient notation: use ȳ to denote the derivative dL/dy, sometimes called the error signal. This emphasizes that the error signals are just values our program is computing (rather than a mathematical operation). This is not a standard notation, but I couldn't find another one that I liked. Computing the loss: z = wx + b, y = σ(z), L = (1/2)(y − t)^2. Computing the derivatives: ȳ = y − t, z̄ = ȳ σ′(z), w̄ = z̄ x, b̄ = z̄.

Multivariate Chain Rule. Problem: what if the computation graph has fan-out > 1? This requires the multivariate Chain Rule! L2-regularized regression: z = wx + b, y = σ(z), L = (1/2)(y − t)^2, R = (1/2)w^2, L_reg = L + λR. Softmax regression: z_l = Σ_j w_lj x_j + b_l, y_k = e^(z_k) / Σ_l e^(z_l), L = −Σ_k t_k log y_k.

Multivariate Chain Rule. Suppose we have a function f(x, y) and functions x(t) and y(t). (All the variables here are scalar-valued.) Then d/dt f(x(t), y(t)) = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt). Example: f(x, y) = y + e^(xy), x(t) = cos t, y(t) = t^2. Plug in to the Chain Rule: df/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) = (y e^(xy))(−sin t) + (1 + x e^(xy))(2t).
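The worked example above, checked numerically at an arbitrarily chosen value of t:

```python
import numpy as np

def f(x, y):  return y + np.exp(x * y)
def x_of(t):  return np.cos(t)
def y_of(t):  return t ** 2

def df_dt(t):
    """Multivariate chain rule: (df/dx)(dx/dt) + (df/dy)(dy/dt)."""
    x, y = x_of(t), y_of(t)
    return (y * np.exp(x * y)) * (-np.sin(t)) + (1 + x * np.exp(x * y)) * (2 * t)

t, eps = 1.3, 1e-6
numeric = (f(x_of(t + eps), y_of(t + eps)) - f(x_of(t), y_of(t))) / eps
print(round(df_dt(t), 6), round(numeric, 6))   # the two values should agree closely
```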

Multivariate Chain Rule. In the context of backpropagation, in our notation: t̄ = x̄ (dx/dt) + ȳ (dy/dt).

Backpropagation. Full backpropagation algorithm: Let v_1, ..., v_N be a topological ordering of the computation graph (i.e. parents come before children). v_N denotes the variable we're trying to compute derivatives of (e.g. the loss).
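A runnable sketch of this algorithm over a tiny computation graph. The Node interface (fn, parents, partials) and the example graph are my own illustration, not the exact pseudocode from the slide:

```python
class Node:
    """Minimal computation-graph node for illustrating backprop."""
    def __init__(self, parents, fn, partials):
        self.parents = parents     # list of parent Nodes
        self.fn = fn               # computes this node's value from its parents' values
        self.partials = partials   # functions giving d(this node)/d(parent_i)
        self.value = None
        self.bar = 0.0             # error signal, written v-bar on the slides

def backprop(nodes):
    """`nodes` must be in topological order; the last node is the loss."""
    for v in nodes:                                   # forward pass
        v.value = v.fn(*[p.value for p in v.parents])
    for v in nodes:
        v.bar = 0.0
    nodes[-1].bar = 1.0                               # d(loss)/d(loss) = 1
    for v in reversed(nodes):                         # backward pass
        for p, partial in zip(v.parents, v.partials):
            # Multivariate chain rule: accumulate each child's contribution.
            p.bar += v.bar * partial(*[q.value for q in v.parents])

# Example: L = (w*x - t)^2 with x = 2, t = 1, evaluated at w = 3.
w = Node([], lambda: 3.0, [])
z = Node([w], lambda w_: 2.0 * w_, [lambda w_: 2.0])                      # z = w * x
L = Node([z], lambda z_: (z_ - 1.0) ** 2, [lambda z_: 2.0 * (z_ - 1.0)])  # L = (z - t)^2
backprop([w, z, L])
print(w.bar)   # dL/dw = 2 * (w*x - t) * x = 20
```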

Backpropagation. Example: univariate logistic least squares regression.
Forward pass: z = wx + b, y = σ(z), L = (1/2)(y − t)^2, R = (1/2)w^2, L_reg = L + λR.
Backward pass:
L̄_reg = 1
R̄ = L̄_reg dL_reg/dR = L̄_reg λ
L̄ = L̄_reg dL_reg/dL = L̄_reg
ȳ = L̄ dL/dy = L̄ (y − t)
z̄ = ȳ dy/dz = ȳ σ′(z)
w̄ = z̄ ∂z/∂w + R̄ dR/dw = z̄ x + R̄ w
b̄ = z̄ ∂z/∂b = z̄

Backpropagation. Multilayer Perceptron (multiple outputs).
Forward pass:
z_i = Σ_j w^(1)_ij x_j + b^(1)_i
h_i = σ(z_i)
y_k = Σ_i w^(2)_ki h_i + b^(2)_k
L = (1/2) Σ_k (y_k − t_k)^2
Backward pass:
L̄ = 1
ȳ_k = L̄ (y_k − t_k)
w̄^(2)_ki = ȳ_k h_i
b̄^(2)_k = ȳ_k
h̄_i = Σ_k ȳ_k w^(2)_ki
z̄_i = h̄_i σ′(z_i)
w̄^(1)_ij = z̄_i x_j
b̄^(1)_i = z̄_i

Backpropagation. In vectorized form.
Forward pass:
z = W^(1) x + b^(1)
h = σ(z)
y = W^(2) h + b^(2)
L = (1/2) ||t − y||^2
Backward pass:
L̄ = 1
ȳ = L̄ (y − t)
W̄^(2) = ȳ h^T
b̄^(2) = ȳ
h̄ = (W^(2))^T ȳ
z̄ = h̄ ∘ σ′(z) (elementwise)
W̄^(1) = z̄ x^T
b̄^(1) = z̄
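A NumPy sketch of this vectorized forward and backward pass (the shapes and random values below are assumptions just to exercise the code; logistic hidden units and a squared-error loss are used, as on the slide):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward_backward(x, t, W1, b1, W2, b2):
    """Forward and backward pass for the two-layer MLP above; *_bar are error signals."""
    # Forward pass
    z = W1 @ x + b1
    h = sigmoid(z)
    y = W2 @ h + b2
    L = 0.5 * np.sum((t - y) ** 2)
    # Backward pass
    y_bar  = y - t                    # L_bar = 1 folded in
    W2_bar = np.outer(y_bar, h)       # y_bar h^T
    b2_bar = y_bar
    h_bar  = W2.T @ y_bar
    z_bar  = h_bar * h * (1 - h)      # h_bar * sigma'(z), elementwise
    W1_bar = np.outer(z_bar, x)
    b1_bar = z_bar
    return L, (W1_bar, b1_bar, W2_bar, b2_bar)

rng = np.random.default_rng(0)
x, t = rng.normal(size=4), rng.normal(size=2)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)
L, grads = mlp_forward_backward(x, t, W1, b1, W2, b2)
print(round(L, 4), [g.shape for g in grads])
```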

Computational Cost. Computational cost of the forward pass: one add-multiply operation per weight: z_i = Σ_j w^(1)_ij x_j + b^(1)_i. Computational cost of the backward pass: two add-multiply operations per weight: w̄^(2)_ki = ȳ_k h_i and h̄_i = Σ_k ȳ_k w^(2)_ki. Rule of thumb: the backward pass is about as expensive as two forward passes. For a multilayer perceptron, this means the cost is linear in the number of layers and quadratic in the number of units per layer.

Backpropagation. Backprop is used to train the overwhelming majority of neural nets today. Even optimization algorithms much fancier than gradient descent (e.g. second-order methods) use backprop to compute the gradients. Despite its practical success, backprop is believed to be neurally implausible: there is no evidence for biological signals analogous to error derivatives, and all the biologically plausible alternatives we know about learn much more slowly (on computers). So how on earth does the brain learn?