Perceptron & Neural Networks


School of Computer Science 10-701 Introduction to Machine Learning Perceptron & Neural Networks Readings: Bishop Ch. 4.1.7, Ch. 5 Murphy Ch. 16.5, Ch. 28 Mitchell Ch. 4 Matt Gormley Lecture 12 October 17, 2016 1

Reminders Homework 3: due 10/24/16 2

Outline Discriminative vs. Generative Perceptron Neural Networks Backpropagation 3

DISCRIMINATIVE AND GENERATIVE CLASSIFIERS 4

Generative vs. Discriminative Generative Classifiers: Example: Naïve Bayes. Define a joint model of the observations x and the labels y: $p(x, y)$. Learning maximizes the (joint) likelihood. Use Bayes' Rule to classify based on the posterior: $p(y|x) = p(x|y)p(y)/p(x)$. Discriminative Classifiers: Example: Logistic Regression. Directly model the conditional: $p(y|x)$. Learning maximizes the conditional likelihood. 5

Generative vs. Discriminative Finite Sample Analysis (Ng & Jordan, 2002) [Assume that we are learning from a finite training dataset] If model assumptions are correct: Naïve Bayes is a more efficient learner (requires fewer samples) than Logistic Regression. If model assumptions are incorrect: Logistic Regression has lower asymptotic error, and does better than Naïve Bayes. 6

solid: NB dashed: LR Slide courtesy of William Cohen 7

solid: NB dashed: LR Naïve Bayes makes stronger assumptions about the data but needs fewer examples to estimate the parameters. "On Discriminative vs. Generative Classifiers", Andrew Ng and Michael Jordan, NIPS 2001. Slide courtesy of William Cohen 8

Generative vs. Discriminative Learning (Parameter Estimation) Naïve Bayes: Parameters are decoupled → closed-form solution for MLE. Logistic Regression: Parameters are coupled → no closed-form solution; must use iterative optimization techniques instead. 9

Naïve Bayes vs. Logistic Reg. Learning (MAP Estimation of Parameters) Bernoulli Naïve Bayes: Parameters are probabilities → a Beta prior (usually) pushes probabilities away from the zero/one extremes. Logistic Regression: Parameters are not probabilities → a Gaussian prior encourages parameters to be close to zero (effectively pushes the probabilities away from the zero/one extremes). 10

Features Naïve Bayes vs. Logistic Reg. Naïve Bayes: Features x are assumed to be conditionally independent given y. (i.e. Naïve Bayes Assumption) Logistic Regression: No assumptions are made about the form of the features x. They can be dependent and correlated in any fashion. 11

THE PERCEPTRON ALGORITHM 12

Recall Background: Hyperplanes Why don't we drop the generative model and try to learn this hyperplane directly?

Background: Hyperplanes Recall
Hyperplane (Definition 1): $H = \{x : w^T x = b\}$
Hyperplane (Definition 2): $H = \{x : w^T x = 0 \text{ and } x_1 = 1\}$
Half-spaces: $H^+ = \{x : w^T x > 0 \text{ and } x_1 = 1\}$, $H^- = \{x : w^T x < 0 \text{ and } x_1 = 1\}$

Background: Hyperplanes Recall Directly modeling the hyperplane would use a decision function $h(x) = \text{sign}(\theta^T x)$ for $y \in \{-1, +1\}$. Why don't we drop the generative model and try to learn this hyperplane directly?

Online Learning Model Setup: We receive an example (x, y) Make a prediction h(x) Check for correctness h(x) = y? Goal: Minimize the number of mistakes 16

Margins Recall Definition: The margin of example $x$ w.r.t. a linear separator $w$ is the distance from $x$ to the plane $w \cdot x = 0$ (or the negative if on the wrong side). Definition: The margin $\gamma_w$ of a set of examples $S$ w.r.t. a linear separator $w$ is the smallest margin over points $x \in S$. Definition: The margin $\gamma$ of a set of examples $S$ is the maximum $\gamma_w$ over all linear separators $w$. Slide from Nina Balcan 17

Perceptron Algorithm Data: Inputs are continuous vectors of length K. Outputs are discrete. $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$ where $x \in \mathbb{R}^K$ and $y \in \{+1, -1\}$. Prediction: Output determined by hyperplane: $\hat{y} = h_\theta(x) = \text{sign}(\theta^T x)$, where $\text{sign}(a) = +1$ if $a \geq 0$, $-1$ otherwise. Learning: Iterative procedure: while not converged, receive next example $(x, y)$ and predict $y' = h(x)$; if positive mistake: add $x$ to parameters; if negative mistake: subtract $x$ from parameters. 18

Perceptron Algorithm Learning:
Algorithm 1 Perceptron Learning Algorithm (Batch)
1: procedure Perceptron(D = {(x^(1), y^(1)), ..., (x^(N), y^(N))})
2: θ ← 0 (Initialize parameters)
3: while not converged do
4: for i ∈ {1, 2, ..., N} do (For each example)
5: ŷ ← sign(θ^T x^(i)) (Predict)
6: if ŷ ≠ y^(i) then (If mistake)
7: θ ← θ + y^(i) x^(i) (Update parameters)
8: return θ 19
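Below is a minimal Python sketch of this batch perceptron learner, assuming NumPy; the names (perceptron_train, max_epochs, the toy dataset) are illustrative and not from the lecture.

```python
# A minimal sketch of the batch perceptron learner above (assumed interface).
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """X: (N, K) real-valued inputs; y: (N,) labels in {+1, -1}."""
    N, K = X.shape
    theta = np.zeros(K)                          # initialize parameters to zero
    for _ in range(max_epochs):                  # "while not converged" (capped here)
        mistakes = 0
        for i in range(N):                       # for each example
            y_hat = 1 if theta @ X[i] >= 0 else -1   # predict sign(theta^T x)
            if y_hat != y[i]:                    # on a mistake...
                theta += y[i] * X[i]             # ...add or subtract the example
                mistakes += 1
        if mistakes == 0:                        # converged on the training data
            break
    return theta

# Toy usage on linearly separable data (x_0 = 1 acts as a bias feature).
X = np.array([[1., 2., 1.], [1., 1., 2.], [1., -1., -2.], [1., -2., -1.]])
y = np.array([+1, +1, -1, -1])
theta = perceptron_train(X, y)
print(theta, np.sign(X @ theta))
```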

Analysis: Perceptron Perceptron Mistake Bound Guarantee: If the data has margin $\gamma$ and all points lie inside a ball of radius $R$, then Perceptron makes at most $(R/\gamma)^2$ mistakes. (Normalized margin: multiplying all points by 100, or dividing all points by 100, doesn't change the number of mistakes; the algorithm is invariant to scaling.) Slide from Nina Balcan 20

Analysis: Perceptron Perceptron Mistake Bound
Theorem 0.1 (Block (1962), Novikoff (1962)). Given dataset: $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$.
Suppose: 1. Finite size inputs: $\|x^{(i)}\| \leq R$. 2. Linearly separable data: $\exists \theta^*$ s.t. $\|\theta^*\| = 1$ and $y^{(i)}(\theta^* \cdot x^{(i)}) \geq \gamma, \forall i$.
Then: The number of mistakes made by the Perceptron algorithm on this dataset is $k \leq (R/\gamma)^2$. Figure from Nina Balcan 21

Analysis: Perceptron Proof of Perceptron Mistake Bound: We will show that there exist constants A and B s.t. $Ak \leq \|\theta^{(k+1)}\| \leq B\sqrt{k}$. 22

Analysis: Perceptron
Theorem 0.1 (Block (1962), Novikoff (1962)). Given dataset: $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$.
Suppose: 1. Finite size inputs: $\|x^{(i)}\| \leq R$. 2. Linearly separable data: $\exists \theta^*$ s.t. $\|\theta^*\| = 1$ and $y^{(i)}(\theta^* \cdot x^{(i)}) \geq \gamma, \forall i$.
Then: The number of mistakes made by the Perceptron algorithm on this dataset is $k \leq (R/\gamma)^2$.
Algorithm 1 Perceptron Learning Algorithm (Online)
1: procedure Perceptron(D = {(x^(1), y^(1)), (x^(2), y^(2)), ...})
2: θ ← 0, k = 1 (Initialize parameters)
3: for i ∈ {1, 2, ...} do (For each example)
4: if y^(i) (θ^(k) · x^(i)) ≤ 0 then (If mistake)
5: θ^(k+1) ← θ^(k) + y^(i) x^(i) (Update parameters)
6: k ← k + 1
7: return θ 23

Analysis: Perceptron Whiteboard: Proof of Perceptron Mistake Bound 24

Analysis: Perceptron Proof of Perceptron Mistake Bound:
Part 1: for some A, $Ak \leq \|\theta^{(k+1)}\|$
$\theta^{(k+1)} \cdot \theta^* = (\theta^{(k)} + y^{(i)} x^{(i)}) \cdot \theta^*$ (by Perceptron algorithm update)
$= \theta^{(k)} \cdot \theta^* + y^{(i)} (\theta^* \cdot x^{(i)})$
$\geq \theta^{(k)} \cdot \theta^* + \gamma$ (by assumption)
$\Rightarrow \theta^{(k+1)} \cdot \theta^* \geq k\gamma$ (by induction on k, since $\theta^{(1)} = 0$)
$\Rightarrow \|\theta^{(k+1)}\| \geq k\gamma$ (since $\|\theta^*\| = 1$, by the Cauchy–Schwarz inequality) 25

Analysis: Perceptron Proof of Perceptron Mistake Bound:
Part 2: for some B, $\|\theta^{(k+1)}\| \leq B\sqrt{k}$
$\|\theta^{(k+1)}\|^2 = \|\theta^{(k)} + y^{(i)} x^{(i)}\|^2$ (by Perceptron algorithm update)
$= \|\theta^{(k)}\|^2 + (y^{(i)})^2 \|x^{(i)}\|^2 + 2 y^{(i)} (\theta^{(k)} \cdot x^{(i)})$
$\leq \|\theta^{(k)}\|^2 + (y^{(i)})^2 \|x^{(i)}\|^2$ (since the kth mistake implies $y^{(i)} (\theta^{(k)} \cdot x^{(i)}) \leq 0$)
$\leq \|\theta^{(k)}\|^2 + R^2$ (since $(y^{(i)})^2 \|x^{(i)}\|^2 = \|x^{(i)}\|^2 \leq R^2$ and $(y^{(i)})^2 = 1$)
$\Rightarrow \|\theta^{(k+1)}\|^2 \leq k R^2$ (by induction on k, since $\|\theta^{(1)}\|^2 = 0$)
$\Rightarrow \|\theta^{(k+1)}\| \leq \sqrt{k}\, R$ 26

Analysis: Perceptron Proof of Perceptron Mistake Bound:
Part 3: Combining the bounds finishes the proof: $k\gamma \leq \|\theta^{(k+1)}\| \leq \sqrt{k}\, R \Rightarrow k \leq (R/\gamma)^2$. The total number of mistakes must be less than this. 27

Extensions of Perceptron Kernel Perceptron: Choose a kernel $K(x, x')$ and apply the kernel trick to Perceptron. The resulting algorithm is still very simple. Structured Perceptron: The basic idea can also be applied when y ranges over an exponentially large set. The mistake bound does not depend on the size of that set. 28

Summary: Perceptron Perceptron is a simple linear classifier Simple learning algorithm: when a mistake is made, add / subtract the features For linearly separable and inseparable data, we can bound the number of mistakes (geometric argument) Extensions support nonlinear separators and structured prediction 29

RECALL: LOGISTIC REGRESSION 30

Using gradient ascent for linear classifiers Recall Key idea behind today's lecture: 1. Define a linear classifier (logistic regression) 2. Define an objective function (likelihood) 3. Optimize it with gradient descent to learn parameters 4. Predict the class with highest probability under the model 31

Using gradient ascent for linear classifiers Recall This decision function isn't differentiable: $h(x) = \text{sign}(\theta^T x)$. Use a differentiable function instead: $p_\theta(y = 1 | x) = \frac{1}{1 + \exp(-\theta^T x)}$, where $\text{logistic}(u) = \frac{1}{1 + e^{-u}}$ 32

Using gradient ascent for linear classifiers Recall This decision function isn't differentiable: $h(x) = \text{sign}(\theta^T x)$. Use a differentiable function instead: $p_\theta(y = 1 | x) = \frac{1}{1 + \exp(-\theta^T x)}$, where $\text{logistic}(u) = \frac{1}{1 + e^{-u}}$ 33

Logistic Regression Recall Data: Inputs are continuous vectors of length K. Outputs are discrete. $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$ where $x \in \mathbb{R}^K$ and $y \in \{0, 1\}$. Model: Logistic function applied to the dot product of parameters with the input vector: $p_\theta(y = 1 | x) = \frac{1}{1 + \exp(-\theta^T x)}$. Learning: finds the parameters that minimize some objective function: $\theta^* = \arg\min_\theta J(\theta)$. Prediction: Output is the most probable class: $\hat{y} = \arg\max_{y \in \{0,1\}} p_\theta(y | x)$. 34
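A minimal Python sketch of this model, assuming NumPy and taking the objective J to be the negative conditional log-likelihood; the function and variable names (predict_proba, nll_gradient, the toy data) are illustrative.

```python
# Logistic regression: probability model plus one gradient step (sketch).
import numpy as np

def predict_proba(theta, x):
    """p_theta(y = 1 | x) = 1 / (1 + exp(-theta^T x))."""
    return 1.0 / (1.0 + np.exp(-theta @ x))

def nll_gradient(theta, X, y):
    """Gradient of J(theta) = -sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)]."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return X.T @ (p - y)          # standard closed form of the gradient

# One step of gradient descent on J (equivalently, gradient ascent on the
# conditional log-likelihood).
X = np.array([[1., 0.5], [1., -1.0], [1., 2.0]])
y = np.array([1., 0., 1.])
theta = np.zeros(2)
theta -= 0.1 * nll_gradient(theta, X, y)
print(predict_proba(theta, np.array([1., 1.0])))
```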

NEURAL NETWORKS 35

Learning highly non-linear functions f: X → Y, where f might be a non-linear function, X is a (vector of) continuous and/or discrete vars, and Y is a (vector of) continuous and/or discrete vars. Examples: the XOR gate, speech recognition. Eric Xing @ CMU, 2006-2011 36

Perceptron and Neural Nets From biological neuron to artificial neuron (perceptron): dendrites, soma, axon, and synapses correspond to inputs, a linear combiner, an activation function (hard limiter with threshold θ), and an output. Linear combiner: $X = \sum_{i=0}^{n} x_i w_i$; hard limiter: $Y = +1$ if $X \geq \theta$, $Y = -1$ if $X < \theta$. Artificial neuron networks: supervised learning, gradient descent; input signals flow from the input layer through a middle (hidden) layer to the output layer. Eric Xing @ CMU, 2006-2011 37

Connectionist Models Consider humans: Neuron switching time ~ 0.001 second; Number of neurons ~ 10^10; Connections per neuron ~ 10^4-5; Scene recognition time ~ 0.1 second; 100 inference steps doesn't seem like enough → much parallel computation. Properties of artificial neural nets (ANN): Many neuron-like threshold switching units; Many weighted interconnections among units; Highly parallel, distributed processes. Eric Xing @ CMU, 2006-2011 38

Motivation Why is everyone talking about Deep Learning? Because a lot of money is invested in it DeepMind: Acquired by Google for $400 million DNNResearch: Three person startup (including Geoff Hinton) acquired by Google for unknown price tag Enlitic, Ersatz, MetaMind, Nervana, Skylab: Deep Learning startups commanding millions of VC dollars Because it made the front page of the New York Times 39

Motivation Why is everyone talking about Deep Learning? (Timeline: 1960s, 1980s, 1990s, 2006, 2016.) Deep learning: Has won numerous pattern recognition competitions; Does so with minimal feature engineering. This wasn't always the case! Since the 1980s: the form of the models hasn't changed much, but there are lots of new tricks: More hidden units; Better (online) optimization; New nonlinear functions (ReLUs); Faster computers (CPUs and GPUs). 40

Background A Recipe for Machine Learning 1. Given training data: (e.g., images labeled Face, Face, Not a face) 2. Choose each of these: Decision function (Examples: Linear regression, Logistic regression, Neural Network); Loss function (Examples: Mean-squared error, Cross Entropy) 41

Background A Recipe for Machine Learning 1. Given training data. 2. Choose each of these: Decision function; Loss function. 3. Define goal. 4. Train with SGD: (take small steps opposite the gradient) 42
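A minimal sketch of step 4 of this recipe (SGD), assuming a per-example gradient function grad_J(theta, example) is available; that name, the learning rate, and the toy objective are all illustrative, not from the lecture.

```python
# Stochastic gradient descent: take small steps opposite the gradient (sketch).
import random

def sgd(theta, data, grad_J, lr=0.1, epochs=10):
    for _ in range(epochs):
        random.shuffle(data)
        for example in data:
            g = grad_J(theta, example)                          # gradient on one example
            theta = [t - lr * gi for t, gi in zip(theta, g)]    # step opposite the gradient
    return theta

# Toy usage: minimize J(theta) = 0.5 * ||theta - x||^2 averaged over examples x,
# whose per-example gradient is simply (theta - x).
data = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]
grad_J = lambda theta, x: [t - xi for t, xi in zip(theta, x)]
print(sgd([0.0, 0.0], data, grad_J))
```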

Background A Recipe for Machine Learning 1. Given training data. 2. Choose each of these: Decision function; Loss function. 3. Define goal. 4. Train with SGD: (take small steps opposite the gradient). Gradients: Backpropagation can compute this gradient! And it's a special case of a more general algorithm called reverse-mode automatic differentiation that can compute the gradient of any differentiable function efficiently! 43

Background A Recipe for Machine Learning Goals for Today's Lecture: 1. Explore a new class of decision functions (Neural Networks) 2. Consider variants of this recipe for training. Recipe: 1. Given training data. 2. Choose each of these: Decision function; Loss function. 3. Define goal. 4. Train with SGD: (take small steps opposite the gradient) 44

Decision Functions Linear Regression $y = h_\theta(x) = \sigma(\theta^T x)$ where $\sigma(a) = a$. (Network diagram: Output; weights $\theta_1, \theta_2, \theta_3, \ldots, \theta_M$; Input.) 45

Decision Functions Logistic Regression $y = h_\theta(x) = \sigma(\theta^T x)$ where $\sigma(a) = \frac{1}{1 + \exp(-a)}$. (Network diagram: Output; weights $\theta_1, \theta_2, \theta_3, \ldots, \theta_M$; Input.) 46

Decision Functions Logistic Regression $y = h_\theta(x) = \sigma(\theta^T x)$ where $\sigma(a) = \frac{1}{1 + \exp(-a)}$. (Network diagram: Output; weights $\theta_1, \theta_2, \theta_3, \ldots, \theta_M$; Input; example images labeled Face, Face, Not a face.) 47

Decision Functions Logistic Regression $y = h_\theta(x) = \sigma(\theta^T x)$ where $\sigma(a) = \frac{1}{1 + \exp(-a)}$. (Plot: $y$ between 0 and 1 as a function of $x_1, x_2$; network diagram with weights $\theta_1, \theta_2, \theta_3, \ldots, \theta_M$; Input.) 48

Decision Functions Logistic Regression $y = h_\theta(x) = \sigma(\theta^T x)$ where $\sigma(a) = \frac{1}{1 + \exp(-a)}$. (Network diagram: Output; weights $\theta_1, \theta_2, \theta_3, \ldots, \theta_M$; Input.) 49

Neural Network Model (Figure: Inputs Age = 34, Gender = 2, Stage = 4 feed through weighted connections into a hidden layer of Σ units and then an output Σ unit; Output 0.6 = probability of being alive.) Independent variables → Weights → Hidden Layer → Weights → Dependent variable (Prediction). Eric Xing @ CMU, 2006-2011 50

Combined logistic models (Figure: Inputs Age = 34, Gender = 2, Stage = 4; one path of weights through the hidden layer to the output Σ unit; Output 0.6 = probability of being alive.) Independent variables → Weights → Hidden Layer → Weights → Dependent variable (Prediction). Eric Xing @ CMU, 2006-2011 51

(Figure: Inputs Age = 34, Gender = 2, Stage = 4; another path of weights through the hidden layer to the output Σ unit; Output 0.6 = probability of being alive.) Independent variables → Weights → Hidden Layer → Weights → Dependent variable (Prediction). Eric Xing @ CMU, 2006-2011 52

(Figure: Inputs Age = 34, Gender = 1, Stage = 4; all weights through the hidden layer to the output Σ unit; Output 0.6 = probability of being alive.) Independent variables → Weights → Hidden Layer → Weights → Dependent variable (Prediction). Eric Xing @ CMU, 2006-2011 53

Not really, no target for hidden units... (Figure: Inputs Age = 34, Gender = 2, Stage = 4; hidden layer of Σ units; Output 0.6 = probability of being alive.) Independent variables → Weights → Hidden Layer → Weights → Dependent variable (Prediction). Eric Xing @ CMU, 2006-2011 54

Jargon Pseudo-Correspondence Independent variable = input variable; Dependent variable = output variable; Coefficients = weights; Estimates = targets. Logistic Regression Model (the sigmoid unit): (Figure: Inputs Age = 34, Gender = 1, Stage = 4 feed with coefficients into a Σ unit; Output 0.6 = probability of being alive.) Independent variables $x_1, x_2, x_3$; Coefficients $a, b, c$; Dependent variable $p$ (Prediction). Eric Xing @ CMU, 2006-2011 55

Decision Functions Neural Network Output Hidden Layer Input 56

Decision Functions Neural Network
(E) Output (sigmoid): $y = \frac{1}{1 + \exp(-b)}$
(D) Output (linear): $b = \sum_{j=0}^{D} \beta_j z_j$
(C) Hidden (sigmoid): $z_j = \frac{1}{1 + \exp(-a_j)}, \forall j$
(B) Hidden (linear): $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i, \forall j$
(A) Input: Given $x_i, \forall i$ 57
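A minimal Python sketch of the forward pass (A)-(E) for this one-hidden-layer sigmoid network, assuming NumPy; alpha (hidden weights) and beta (output weights) follow the slide's notation, the shapes are illustrative, and the bias terms (the $j=0$, $i=0$ entries) are omitted for brevity.

```python
# Forward pass of a one-hidden-layer sigmoid network (sketch).
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, alpha, beta):
    """x: (M,) input; alpha: (D, M) hidden weights; beta: (D,) output weights."""
    a = alpha @ x          # (B) hidden, linear:   a_j = sum_i alpha_ji x_i
    z = sigmoid(a)         # (C) hidden, sigmoid:  z_j = 1 / (1 + exp(-a_j))
    b = beta @ z           # (D) output, linear:   b   = sum_j beta_j z_j
    y = sigmoid(b)         # (E) output, sigmoid
    return y

x = np.array([1.0, 0.5, -0.2])          # (A) given input
alpha = np.random.randn(4, 3) * 0.1
beta = np.random.randn(4) * 0.1
print(forward(x, alpha, beta))
```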

Building a Neural Net Output Features 58

Building a Neural Net Output Hidden Layer D = M 1 1 1 Input 59

Building a Neural Net Output Hidden Layer D = M Input 60

Building a Neural Net Output Hidden Layer D = M Input 61

Building a Neural Net Output Hidden Layer D < M Input 62

Decision Boundary 0 hidden layers: linear classifier Hyperplanes y x1 x2 Example from Eric Postma via Jason Eisner 63

Decision Boundary 1 hidden layer y Boundary of convex region (open or closed) x1 x2 Example from Eric Postma via Jason Eisner 64

Decision Boundary y 2 hidden layers Combinations of convex regions x1 x2 Example from Eric Postma via Jason Eisner 65

Decision Functions Multi-Class Output Output Hidden Layer Input 66

Decision Functions Deeper Networks Next lecture: Output Hidden Layer 1 Input 67

Decision Functions Deeper Networks Next lecture: Output Hidden Layer 2 Hidden Layer 1 Input 68

Decision Functions Deeper Networks Next lecture: Making the neural networks deeper Output Hidden Layer 3 Hidden Layer 2 Hidden Layer 1 Input 69

Decision Functions Different Levels of Abstraction We don't know the right levels of abstraction So let the model figure it out! Example from Honglak Lee (NIPS 2010) 70

Decision Functions Different Levels of Abstraction Face Recognition: Deep Network can build up increasingly higher levels of abstraction Lines, parts, regions Example from Honglak Lee (NIPS 2010) 71

Decision Functions Different Levels of Abstraction Output Hidden Layer 3 Hidden Layer 2 Hidden Layer 1 Input Example from Honglak Lee (NIPS 2010) 72

ARCHITECTURES 73

Neural Network Architectures Even for a basic Neural Network, there are many design decisions to make: 1. # of hidden layers (depth) 2. # of units per hidden layer (width) 3. Type of activation function (nonlinearity) 4. Form of objective function 74

Activation Functions Neural Network with sigmoid activation functions
(F) Loss: $J = \frac{1}{2}(y - y^*)^2$
(E) Output (sigmoid): $y = \frac{1}{1 + \exp(-b)}$
(D) Output (linear): $b = \sum_{j=0}^{D} \beta_j z_j$
(C) Hidden (sigmoid): $z_j = \frac{1}{1 + \exp(-a_j)}, \forall j$
(B) Hidden (linear): $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i, \forall j$
(A) Input: Given $x_i, \forall i$ 75

Activation Functions Neural Network with arbitrary nonlinear activation functions
(F) Loss: $J = \frac{1}{2}(y - y^*)^2$
(E) Output (nonlinear): $y = \sigma(b)$
(D) Output (linear): $b = \sum_{j=0}^{D} \beta_j z_j$
(C) Hidden (nonlinear): $z_j = \sigma(a_j), \forall j$
(B) Hidden (linear): $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i, \forall j$
(A) Input: Given $x_i, \forall i$ 76

Activation Functions Sigmoid / Logistic Function $\text{logistic}(u) = \frac{1}{1 + e^{-u}}$ So far, we've assumed that the activation function (nonlinearity) is always the sigmoid function 77

Activation Functions A new change: modifying the nonlinearity The logistic is not widely used in modern ANNs Alternate 1: tanh Like the logistic function but shifted to range [-1, +1] Slide from William Cohen

(Figure: sigmoid vs. tanh, depth-4 networks, AISTATS 2010.) Figure from Glorot & Bengio (2010)

Activation Functions A new change: modifying the nonlinearity relu often used in vision tasks Alternate 2: rectified linear unit Linear with a cutoff at zero (Implementation: clip the gradient when you pass zero) Slide from William Cohen

Activation Functions A new change: modifying the nonlinearity relu often used in vision tasks Alternate 2: rectified linear unit Soft version: log(exp(x)+1) Doesn't saturate (at one end) Sparsifies outputs Helps with vanishing gradient Slide from William Cohen
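A small Python sketch of the activation functions discussed above (logistic, tanh, ReLU, and the "soft" ReLU, i.e. softplus), assuming NumPy; the function names are illustrative.

```python
# Common activation functions (sketch).
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))      # output in (0, 1)

def tanh(u):
    return np.tanh(u)                    # like logistic but in (-1, +1)

def relu(u):
    return np.maximum(0.0, u)            # linear with a cutoff at zero

def softplus(u):
    return np.log(np.exp(u) + 1.0)       # smooth version: log(exp(u) + 1)

u = np.linspace(-3, 3, 7)
print(logistic(u), tanh(u), relu(u), softplus(u))
```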

Objective Functions for NNs Regression: Use the same objective as Linear Regression: Quadratic loss (i.e. mean squared error). Classification: Use the same objective as Logistic Regression: Cross-entropy (i.e. negative log likelihood). This requires probabilities, so we add an additional softmax layer at the end of our network.
Quadratic: Forward $J = \frac{1}{2}(y - y^*)^2$; Backward $\frac{dJ}{dy} = y - y^*$
Cross Entropy: Forward $J = y^* \log(y) + (1 - y^*) \log(1 - y)$; Backward $\frac{dJ}{dy} = y^* \frac{1}{y} + (1 - y^*) \frac{1}{y - 1}$ 82
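A minimal Python sketch of the two objectives in the table above and their derivatives with respect to the prediction, assuming a scalar prediction y in (0, 1) and target y_star; the code mirrors the slide's formulas, and the names are illustrative.

```python
# Quadratic and cross-entropy objectives with their derivatives (sketch).
import numpy as np

def quadratic(y, y_star):
    J = 0.5 * (y - y_star) ** 2
    dJ_dy = y - y_star
    return J, dJ_dy

def cross_entropy(y, y_star):
    # Slide's form: J = y* log(y) + (1 - y*) log(1 - y), with derivative
    # dJ/dy = y*/y + (1 - y*)/(y - 1). Minimizing -J gives the usual
    # negative log-likelihood objective.
    J = y_star * np.log(y) + (1 - y_star) * np.log(1 - y)
    dJ_dy = y_star / y + (1 - y_star) / (y - 1)
    return J, dJ_dy

print(quadratic(0.8, 1.0), cross_entropy(0.8, 1.0))
```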

Multi-Class Output Output Hidden Layer Input 83

Multi-Class Output Softmax: $y_k = \frac{\exp(b_k)}{\sum_{l=1}^{K} \exp(b_l)}$
(F) Loss: $J = \sum_{k=1}^{K} y_k^* \log(y_k)$
(E) Output (softmax): $y_k = \frac{\exp(b_k)}{\sum_{l=1}^{K} \exp(b_l)}$
(D) Output (linear): $b_k = \sum_{j=0}^{D} \beta_{kj} z_j, \forall k$
(C) Hidden (nonlinear): $z_j = \sigma(a_j), \forall j$
(B) Hidden (linear): $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i, \forall j$
(A) Input: Given $x_i, \forall i$ 84
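A minimal Python sketch of the softmax output layer and multi-class cross-entropy defined above, assuming NumPy; b is the vector of linear outputs $b_k$, y_star is a one-hot target vector, and the loss is written as the negative of the slide's $\sum_k y_k^* \log(y_k)$ so that it is minimized (names are illustrative).

```python
# Softmax output layer with multi-class cross-entropy loss (sketch).
import numpy as np

def softmax(b):
    e = np.exp(b - np.max(b))            # shift by max(b) for numerical stability
    return e / np.sum(e)                 # y_k = exp(b_k) / sum_l exp(b_l)

def cross_entropy_loss(y, y_star):
    return -np.sum(y_star * np.log(y))   # negative of sum_k y*_k log(y_k)

b = np.array([2.0, 0.5, -1.0])
y = softmax(b)
y_star = np.array([1.0, 0.0, 0.0])
print(y, cross_entropy_loss(y, y_star))
```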

Cross-entropy vs. Quadratic loss Figure from Glorot & Bengio (2010)

Background A Recipe for Machine Learning 1. Given training data. 2. Choose each of these: Decision function; Loss function. 3. Define goal. 4. Train with SGD: (take small steps opposite the gradient) 86

Objective Functions Matching Quiz: Suppose you are given a neural net with a single output, y, and one hidden layer. Minimizing 1) sum of squared errors, 2) sum of squared errors plus squared Euclidean norm of weights, 3) cross-entropy, 4) hinge loss gives... 5) MLE estimates of weights assuming the target follows a Bernoulli with parameter given by the output value, 6) MAP estimates of weights assuming weight priors are zero mean Gaussian, 7) estimates with a large margin on the training data, 8) MLE estimates of weights assuming zero mean Gaussian noise on the output value. A. 1=5, 2=7, 3=6, 4=8 B. 1=5, 2=7, 3=8, 4=6 C. 1=7, 2=5, 3=5, 4=7 D. 1=7, 2=5, 3=6, 4=8 E. 1=8, 2=6, 3=5, 4=7 F. 1=8, 2=6, 3=8, 4=6 87

BACKPROPAGATION 88

Background A Recipe for Machine Learning 1. Given training data. 2. Choose each of these: Decision function; Loss function. 3. Define goal. 4. Train with SGD: (take small steps opposite the gradient) 89

Training Backpropagation Question 1: When can we compute the gradients of the parameters of an arbitrary neural network? Question 2: When can we make the gradient computation efficient? 90

Training Chain Rule Given: $y = g(u)$ and $u = h(x)$. Chain Rule: $\frac{dy_i}{dx_k} = \sum_{j=1}^{J} \frac{dy_i}{du_j} \frac{du_j}{dx_k}, \forall i, k$ 91

Training Chain Rule Given: $y = g(u)$ and $u = h(x)$. Chain Rule: $\frac{dy_i}{dx_k} = \sum_{j=1}^{J} \frac{dy_i}{du_j} \frac{du_j}{dx_k}, \forall i, k$. Backpropagation is just repeated application of the chain rule from Calculus 101. 92

Training Chain Rule Given: $y = g(u)$ and $u = h(x)$. Chain Rule: $\frac{dy_i}{dx_k} = \sum_{j=1}^{J} \frac{dy_i}{du_j} \frac{du_j}{dx_k}, \forall i, k$.
Backpropagation: 1. Instantiate the computation as a directed acyclic graph, where each intermediate quantity is a node. 2. At each node, store (a) the quantity computed in the forward pass and (b) the partial derivative of the goal with respect to that node's intermediate quantity. 3. Initialize all partial derivatives to 0. 4. Visit each node in reverse topological order. At each node, add its contribution to the partial derivatives of its parents. This algorithm is also called automatic differentiation in the reverse mode. 93

Training Backpropagation Simple Example: The goal is to compute $J = \cos(\sin(x^2) + 3x^2)$ on the forward pass and the derivative $\frac{dJ}{dx}$ on the backward pass.
Forward: $J = \cos(u)$, $u = u_1 + u_2$, $u_1 = \sin(t)$, $u_2 = 3t$, $t = x^2$ 94

Training Backpropagation Simple Example: The goal is to compute $J = \cos(\sin(x^2) + 3x^2)$ on the forward pass and the derivative $\frac{dJ}{dx}$ on the backward pass.
Forward: $J = \cos(u)$, $u = u_1 + u_2$, $u_1 = \sin(t)$, $u_2 = 3t$, $t = x^2$
Backward:
$\frac{dJ}{du} = -\sin(u)$
$\frac{dJ}{du_1} = \frac{dJ}{du} \frac{du}{du_1}$, $\frac{du}{du_1} = 1$
$\frac{dJ}{du_2} = \frac{dJ}{du} \frac{du}{du_2}$, $\frac{du}{du_2} = 1$
$\frac{dJ}{dt} = \frac{dJ}{du_1} \frac{du_1}{dt} + \frac{dJ}{du_2} \frac{du_2}{dt}$, $\frac{du_1}{dt} = \cos(t)$, $\frac{du_2}{dt} = 3$
$\frac{dJ}{dx} = \frac{dJ}{dt} \frac{dt}{dx}$, $\frac{dt}{dx} = 2x$ 95
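A minimal Python sketch of this simple example: the forward pass for $J = \cos(\sin(x^2) + 3x^2)$ and the backward pass for $\frac{dJ}{dx}$ via the chain rule, checked against the directly computed derivative; the function name is illustrative.

```python
# Forward and backward pass for J = cos(sin(x^2) + 3x^2) (sketch).
import math

def forward_backward(x):
    # Forward pass
    t = x ** 2
    u1 = math.sin(t)
    u2 = 3 * t
    u = u1 + u2
    J = math.cos(u)
    # Backward pass (visit nodes in reverse topological order)
    dJ_du = -math.sin(u)
    dJ_du1 = dJ_du * 1.0                              # du/du1 = 1
    dJ_du2 = dJ_du * 1.0                              # du/du2 = 1
    dJ_dt = dJ_du1 * math.cos(t) + dJ_du2 * 3.0       # du1/dt = cos(t), du2/dt = 3
    dJ_dx = dJ_dt * 2 * x                             # dt/dx = 2x
    return J, dJ_dx

x = 1.3
J, g = forward_backward(x)
# Direct derivative for comparison: dJ/dx = -sin(sin(x^2) + 3x^2) * (cos(x^2) + 3) * 2x
print(J, g, -math.sin(math.sin(x**2) + 3*x**2) * (math.cos(x**2) + 3) * 2 * x)
```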

Training Backpropagation Case 1: Logistic Regression
Forward: $J = y^* \log y + (1 - y^*) \log(1 - y)$, $y = \frac{1}{1 + \exp(-a)}$, $a = \sum_{j=0}^{D} \theta_j x_j$
Backward:
$\frac{dJ}{dy} = \frac{y^*}{y} + \frac{1 - y^*}{y - 1}$
$\frac{dJ}{da} = \frac{dJ}{dy} \frac{dy}{da}$, $\frac{dy}{da} = \frac{\exp(-a)}{(\exp(-a) + 1)^2}$
$\frac{dJ}{d\theta_j} = \frac{dJ}{da} \frac{da}{d\theta_j}$, $\frac{da}{d\theta_j} = x_j$
$\frac{dJ}{dx_j} = \frac{dJ}{da} \frac{da}{dx_j}$, $\frac{da}{dx_j} = \theta_j$ 96

Training Backpropagation
(E) Output (sigmoid): $y = \frac{1}{1 + \exp(-b)}$
(D) Output (linear): $b = \sum_{j=0}^{D} \beta_j z_j$
(C) Hidden (sigmoid): $z_j = \frac{1}{1 + \exp(-a_j)}, \forall j$
(B) Hidden (linear): $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i, \forall j$
(A) Input: Given $x_i, \forall i$ 97

Training Backpropagation
(F) Loss: $J = \frac{1}{2}(y - y^*)^2$
(E) Output (sigmoid): $y = \frac{1}{1 + \exp(-b)}$
(D) Output (linear): $b = \sum_{j=0}^{D} \beta_j z_j$
(C) Hidden (sigmoid): $z_j = \frac{1}{1 + \exp(-a_j)}, \forall j$
(B) Hidden (linear): $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i, \forall j$
(A) Input: Given $x_i, \forall i$ 98

Training Backpropagation Case 2: Neural Network
Forward: $J = y^* \log y + (1 - y^*) \log(1 - y)$, $y = \frac{1}{1 + \exp(-b)}$, $b = \sum_{j=0}^{D} \beta_j z_j$, $z_j = \frac{1}{1 + \exp(-a_j)}$, $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i$
Backward:
$\frac{dJ}{dy} = \frac{y^*}{y} + \frac{1 - y^*}{y - 1}$
$\frac{dJ}{db} = \frac{dJ}{dy} \frac{dy}{db}$, $\frac{dy}{db} = \frac{\exp(-b)}{(\exp(-b) + 1)^2}$
$\frac{dJ}{d\beta_j} = \frac{dJ}{db} \frac{db}{d\beta_j}$, $\frac{db}{d\beta_j} = z_j$
$\frac{dJ}{dz_j} = \frac{dJ}{db} \frac{db}{dz_j}$, $\frac{db}{dz_j} = \beta_j$
$\frac{dJ}{da_j} = \frac{dJ}{dz_j} \frac{dz_j}{da_j}$, $\frac{dz_j}{da_j} = \frac{\exp(-a_j)}{(\exp(-a_j) + 1)^2}$
$\frac{dJ}{d\alpha_{ji}} = \frac{dJ}{da_j} \frac{da_j}{d\alpha_{ji}}$, $\frac{da_j}{d\alpha_{ji}} = x_i$
$\frac{dJ}{dx_i} = \sum_{j=0}^{D} \frac{dJ}{da_j} \frac{da_j}{dx_i}$, $\frac{da_j}{dx_i} = \alpha_{ji}$ 99
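A minimal Python sketch of Case 2: the forward and backward passes for the one-hidden-layer sigmoid network with the cross-entropy objective above, assuming NumPy. alpha (hidden weights) and beta (output weights) follow the slide's notation, bias terms are omitted for brevity, and the sigmoid derivative $\frac{\exp(-b)}{(\exp(-b)+1)^2}$ is written in the equivalent form $\sigma(b)(1 - \sigma(b))$.

```python
# Backpropagation through a one-hidden-layer sigmoid network (sketch).
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward_backward(x, y_star, alpha, beta):
    # Forward pass
    a = alpha @ x                      # a_j = sum_i alpha_ji x_i
    z = sigmoid(a)                     # z_j = sigma(a_j)
    b = beta @ z                       # b = sum_j beta_j z_j
    y = sigmoid(b)
    J = y_star * np.log(y) + (1 - y_star) * np.log(1 - y)
    # Backward pass (chain rule, reverse order)
    dJ_dy = y_star / y + (1 - y_star) / (y - 1)
    dJ_db = dJ_dy * y * (1 - y)        # dy/db = sigma(b)(1 - sigma(b))
    dJ_dbeta = dJ_db * z               # db/dbeta_j = z_j
    dJ_dz = dJ_db * beta               # db/dz_j = beta_j
    dJ_da = dJ_dz * z * (1 - z)        # dz_j/da_j = sigma(a_j)(1 - sigma(a_j))
    dJ_dalpha = np.outer(dJ_da, x)     # da_j/dalpha_ji = x_i
    dJ_dx = alpha.T @ dJ_da            # dJ/dx_i = sum_j (dJ/da_j) alpha_ji
    return J, dJ_dbeta, dJ_dalpha, dJ_dx

x = np.array([1.0, 0.5, -0.2])
alpha = np.random.randn(4, 3) * 0.1
beta = np.random.randn(4) * 0.1
print(forward_backward(x, y_star=1.0, alpha=alpha, beta=beta))
```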

Training Chain Rule Given: $y = g(u)$ and $u = h(x)$. Chain Rule: $\frac{dy_i}{dx_k} = \sum_{j=1}^{J} \frac{dy_i}{du_j} \frac{du_j}{dx_k}, \forall i, k$.
Backpropagation: 1. Instantiate the computation as a directed acyclic graph, where each intermediate quantity is a node. 2. At each node, store (a) the quantity computed in the forward pass and (b) the partial derivative of the goal with respect to that node's intermediate quantity. 3. Initialize all partial derivatives to 0. 4. Visit each node in reverse topological order. At each node, add its contribution to the partial derivatives of its parents. This algorithm is also called automatic differentiation in the reverse mode. 100

Training Chain Rule Given: $y = g(u)$ and $u = h(x)$. Chain Rule: $\frac{dy_i}{dx_k} = \sum_{j=1}^{J} \frac{dy_i}{du_j} \frac{du_j}{dx_k}, \forall i, k$.
Backpropagation: 1. Instantiate the computation as a directed acyclic graph, where each node represents a Tensor. 2. At each node, store (a) the quantity computed in the forward pass and (b) the partial derivatives of the goal with respect to that node's Tensor. 3. Initialize all partial derivatives to 0. 4. Visit each node in reverse topological order. At each node, add its contribution to the partial derivatives of its parents. This algorithm is also called automatic differentiation in the reverse mode. 101

Training Backpropagation Case 2: Neural Network (decomposed into Module 1 through Module 5)
Forward: $J = y^* \log y + (1 - y^*) \log(1 - y)$, $y = \frac{1}{1 + \exp(-b)}$, $b = \sum_{j=0}^{D} \beta_j z_j$, $z_j = \frac{1}{1 + \exp(-a_j)}$, $a_j = \sum_{i=0}^{M} \alpha_{ji} x_i$
Backward:
$\frac{dJ}{dy} = \frac{y^*}{y} + \frac{1 - y^*}{y - 1}$
$\frac{dJ}{db} = \frac{dJ}{dy} \frac{dy}{db}$, $\frac{dy}{db} = \frac{\exp(-b)}{(\exp(-b) + 1)^2}$
$\frac{dJ}{d\beta_j} = \frac{dJ}{db} \frac{db}{d\beta_j}$, $\frac{db}{d\beta_j} = z_j$
$\frac{dJ}{dz_j} = \frac{dJ}{db} \frac{db}{dz_j}$, $\frac{db}{dz_j} = \beta_j$
$\frac{dJ}{da_j} = \frac{dJ}{dz_j} \frac{dz_j}{da_j}$, $\frac{dz_j}{da_j} = \frac{\exp(-a_j)}{(\exp(-a_j) + 1)^2}$
$\frac{dJ}{d\alpha_{ji}} = \frac{dJ}{da_j} \frac{da_j}{d\alpha_{ji}}$, $\frac{da_j}{d\alpha_{ji}} = x_i$
$\frac{dJ}{dx_i} = \sum_{j=0}^{D} \frac{dJ}{da_j} \frac{da_j}{dx_i}$, $\frac{da_j}{dx_i} = \alpha_{ji}$ 102

Background A Recipe for Machine Learning 1. Given training data. 2. Choose each of these: Decision function; Loss function. 3. Define goal. 4. Train with SGD: (take small steps opposite the gradient). Gradients: Backpropagation can compute this gradient! And it's a special case of a more general algorithm called reverse-mode automatic differentiation that can compute the gradient of any differentiable function efficiently! 103

Summary 1. Neural Networks: provide a way of learning features; are highly nonlinear prediction functions; (can be) a highly parallel network of logistic regression classifiers; discover useful hidden representations of the input. 2. Backpropagation: provides an efficient way to compute gradients; is a special case of reverse-mode automatic differentiation. 104