Machine Learning

Machine Learning 10-601, Maria Florina Balcan, Machine Learning Department, Carnegie Mellon University, 02/10/2016. Today: artificial neural networks; backpropagation. Reading: Mitchell, Chapter 4; Bishop, Chapter 5. Slides courtesy of Tom Mitchell.

Artificial Neural Network (ANN). Biological systems are built of very complex webs of interconnected neurons. Each neuron is highly connected to other neurons and performs computations by combining signals from them; the outputs of these computations may be transmitted to one or more other neurons. Artificial neural networks are built out of a densely interconnected set of simple units (e.g., sigmoid units). Each unit takes real-valued inputs (possibly the outputs of other units) and produces a real-valued output (which may become the input to many other units).

ALVINN [Pomerleau 1993]

Artificial Neural Networks to learn $f: X \to Y$. $f_w$ is typically a non-linear function, $f_w: X \to Y$. $X$ is the feature space: a (vector of) continuous and/or discrete variables. $Y$ is the output space: a (vector of) continuous and/or discrete variables. $f_w$ is a network of basic units. Learning algorithm: given $\{\langle x_d, t_d \rangle\}_{d \in D}$, train the weights $w$ of all units to minimize the sum of squared errors of the predicted network outputs, i.e., find parameters $w$ to minimize $\sum_{d \in D} (f_w(x_d) - t_d)^2$. Use gradient descent!

What type of units should we use? The classifier is a multilayer network of units. Each unit takes some inputs and produces one output; the output of one unit can be the input of another. [Figure: a small network with an input layer, one hidden layer, and an output layer.]

Multilayer network of linear units? Advantage: we know how to do gradient descent on linear units. [Figure: a network with inputs $x_0 = 1, x_1, x_2$, hidden units $v_1 = w_1 \cdot x$ and $v_2 = w_2 \cdot x$, and output $z = w_3 \cdot v$.] Problem: linear of linear is just linear: $z = w_{3,1}(w_1 \cdot x) + w_{3,2}(w_2 \cdot x) = (w_{3,1} w_1 + w_{3,2} w_2) \cdot x$, which is still a linear function of $x$.
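A quick way to see this numerically (a minimal sketch, assuming NumPy; the weight matrices and inputs below are arbitrary illustrative choices, not from the slides): composing two linear layers gives exactly the same outputs as a single linear layer whose weight matrix is the product of the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a 2-input -> 2-hidden -> 1-output network of *linear* units.
W1 = rng.standard_normal((2, 2))   # hidden-layer weights (maps x -> v)
W3 = rng.standard_normal((1, 2))   # output-layer weights (maps v -> z)

x = rng.standard_normal((2, 5))    # five random 2-dimensional inputs (as columns)

# Two linear layers applied in sequence...
z_two_layers = W3 @ (W1 @ x)

# ...equal one linear layer with the collapsed weight matrix W3 @ W1.
z_one_layer = (W3 @ W1) @ x

print(np.allclose(z_two_layers, z_one_layer))  # True: "linear of linear is just linear"
```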

Multilayer network of perceptron units? Advantage: can produce highly non-linear decision boundaries! [Figure: a network with inputs $x_0 = 1, x_1, x_2$, hidden units $v_1 = \text{step}(w_1 \cdot x)$ and $v_2 = \text{step}(w_2 \cdot x)$, and output $z = \text{step}(w_3 \cdot v)$.] Threshold (step) function: $\text{step}(x) = 1$ if $x$ is positive, $0$ if $x$ is negative. Problem: the discontinuous threshold is not differentiable, so we can't do gradient descent.

Multilayer network of sigmoid units. Advantages: can produce highly non-linear decision boundaries, and the sigmoid is differentiable, so we can use gradient descent. [Figure: a network with inputs $x_0 = 1, x_1, x_2$, hidden units $v_1 = \sigma(w_1 \cdot x)$ and $v_2 = \sigma(w_2 \cdot x)$, and output $z = \sigma(w_3 \cdot v)$.] Here $\sigma(x) = \frac{1}{1 + e^{-x}}$. Very useful in practice!

The Sigmoid Unit. $\sigma$ is the sigmoid function: $\sigma(x) = \frac{1}{1 + e^{-x}}$. Nice property: $\frac{d\sigma(x)}{dx} = \sigma(x)\,(1 - \sigma(x))$. We can derive gradient descent rules to train one sigmoid unit, and multilayer networks of sigmoid units (backpropagation).
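A minimal sketch of the sigmoid and its derivative (assuming NumPy; the finite-difference check is an illustration added here, not part of the slides):

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    """Uses the identity d sigma(x)/dx = sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Sanity check against a centered finite difference at a few points.
xs = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
eps = 1e-6
numeric = (sigmoid(xs + eps) - sigmoid(xs - eps)) / (2 * eps)
print(np.allclose(sigmoid_derivative(xs), numeric, atol=1e-8))  # True
```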

Gradient Descent to Minimize Squared Error. Goal: given $\{\langle x_d, t_d \rangle\}_{d \in D}$, find $w$ to minimize $E_D[w] = \frac{1}{2} \sum_{d \in D} (f_w(x_d) - t_d)^2$, where the gradient is $\nabla E[w] = \left[ \frac{\partial E}{\partial w_0}, \frac{\partial E}{\partial w_1}, \ldots, \frac{\partial E}{\partial w_n} \right]$.
Batch-mode gradient descent: do until satisfied: 1. compute the gradient $\nabla E_D[w]$; 2. $w \leftarrow w - \eta\, \nabla E_D[w]$.
Incremental (stochastic) gradient descent: do until satisfied: for each training example $d$ in $D$: 1. compute the gradient $\nabla E_d[w]$; 2. $w \leftarrow w - \eta\, \nabla E_d[w]$, where $E_d[w] = \frac{1}{2}(t_d - o_d)^2$.
Note: incremental gradient descent can approximate batch gradient descent arbitrarily closely if $\eta$ is made small enough.
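A minimal sketch contrasting the two update schemes on a linear model $f_w(x) = w \cdot x$ with squared error (assuming NumPy; the synthetic data, learning rate, and iteration counts are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))                  # 100 examples, 3 features
w_true = np.array([1.5, -2.0, 0.5])
t = X @ w_true + 0.1 * rng.standard_normal(100)    # targets with a little noise

eta = 0.01

# Batch gradient descent: one update per pass, using the gradient of E_D[w].
w_batch = np.zeros(3)
for _ in range(200):
    grad = -(t - X @ w_batch) @ X                  # gradient of 1/2 * sum_d (t_d - w.x_d)^2
    w_batch -= eta * grad

# Incremental (stochastic) gradient descent: one update per training example.
w_sgd = np.zeros(3)
for _ in range(200):
    for x_d, t_d in zip(X, t):
        grad_d = -(t_d - w_sgd @ x_d) * x_d        # gradient of 1/2 * (t_d - w.x_d)^2
        w_sgd -= eta * grad_d

print(w_batch, w_sgd)                              # both approach w_true
```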

Gradient Descent for the Sigmoid Unit. Given $\{\langle x_d, t_d \rangle\}_{d \in D}$, find $w$ to minimize $\sum_{d \in D} (o_d - t_d)^2$, where $o_d$ is the observed unit output for $x_d$: $o_d = \sigma(net_d)$ with $net_d = \sum_i w_i x_{i,d}$.
$\frac{\partial E}{\partial w_i} = \frac{\partial}{\partial w_i} \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2 = \frac{1}{2} \sum_{d \in D} 2 (t_d - o_d) \frac{\partial}{\partial w_i} (t_d - o_d) = \sum_{d \in D} (t_d - o_d) \left( -\frac{\partial o_d}{\partial w_i} \right) = -\sum_{d \in D} (t_d - o_d) \frac{\partial o_d}{\partial net_d} \frac{\partial net_d}{\partial w_i}$
But we know: $\frac{\partial o_d}{\partial net_d} = \frac{\partial \sigma(net_d)}{\partial net_d} = o_d (1 - o_d)$ and $\frac{\partial net_d}{\partial w_i} = \frac{\partial (w \cdot x_d)}{\partial w_i} = x_{i,d}$.
So: $\frac{\partial E}{\partial w_i} = -\sum_{d \in D} (t_d - o_d)\, o_d (1 - o_d)\, x_{i,d}$

Gradient Descent for the Sigmoid Unit. Given $\{\langle x_d, t_d \rangle\}_{d \in D}$, find $w$ to minimize $\sum_{d \in D} (o_d - t_d)^2$, where $o_d = \sigma(net_d)$ is the observed unit output for $x_d$ and $net_d = \sum_i w_i x_{i,d}$.
$\frac{\partial E}{\partial w_i} = -\sum_{d \in D} (t_d - o_d)\, o_d (1 - o_d)\, x_{i,d} = -\sum_{d \in D} \delta_d\, x_{i,d}$, where $\delta_d = (t_d - o_d)\, o_d (1 - o_d)$: the error term $(t_d - o_d)$ multiplied by the factor $o_d (1 - o_d)$ that comes from the derivative of the sigmoid function.
Update rule: $w \leftarrow w - \eta\, \nabla E[w]$
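A minimal sketch of these update equations for a single sigmoid unit (assuming NumPy; the toy OR dataset, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: learn OR of two binary inputs. x_0 = 1 is the bias input.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 1.0])

w = np.zeros(3)
eta = 0.5

for _ in range(5000):
    o = sigmoid(X @ w)                     # o_d = sigma(net_d), net_d = w . x_d
    delta = (t - o) * o * (1 - o)          # delta_d = (t_d - o_d) * o_d * (1 - o_d)
    grad = -(delta @ X)                    # dE/dw_i = -sum_d delta_d * x_{i,d}
    w -= eta * grad                        # w <- w - eta * grad E[w]

print(np.round(sigmoid(X @ w)))            # approximately [0, 1, 1, 1]
```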

Gradient Descent for Multilayer Networks. Given $\{\langle x_d, t_d \rangle\}_{d \in D}$, find $w$ to minimize $\frac{1}{2} \sum_{d \in D} \sum_{k \in Outputs} (o_{k,d} - t_{k,d})^2$.

Backpropagation Algorithm (incremental/stochastic gradient descent). Notation: $o$ = observed unit output, $t$ = target output, $x$ = input, $x_{i,j}$ = the $i$th input to unit $j$, $w_{i,j}$ = the weight from unit $i$ to unit $j$. Initialize all weights to small random numbers. Until satisfied, do: for each training example $(x, t)$ do:
1. Input the training example to the network and compute the network outputs.
2. For each output unit $k$: $\delta_k \leftarrow o_k (1 - o_k)(t_k - o_k)$
3. For each hidden unit $h$: $\delta_h \leftarrow o_h (1 - o_h) \sum_{k \in outputs} w_{h,k}\, \delta_k$
4. Update each network weight: $w_{i,j} \leftarrow w_{i,j} + \Delta w_{i,j}$, where $\Delta w_{i,j} = \eta\, \delta_j\, x_{i,j}$.
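A minimal sketch of these four steps for a network with one hidden layer of sigmoid units (assuming NumPy; the XOR toy data, layer sizes, learning rate, and epoch count are illustrative assumptions, not part of the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy task: XOR, which a single unit cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

n_in, n_hidden, n_out = 2, 4, 1
# Initialize all weights (and biases) to small random numbers.
W1 = 0.5 * rng.standard_normal((n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, n_out)); b2 = np.zeros(n_out)
eta = 0.5

for _ in range(20000):
    for x, t in zip(X, T):                       # incremental/stochastic updates
        # 1. Forward pass: compute the network outputs.
        h = sigmoid(x @ W1 + b1)                 # hidden-unit outputs o_h
        o = sigmoid(h @ W2 + b2)                 # output-unit outputs o_k
        # 2. For each output unit k: delta_k = o_k (1 - o_k)(t_k - o_k).
        delta_o = o * (1 - o) * (t - o)
        # 3. For each hidden unit h: delta_h = o_h (1 - o_h) * sum_k w_{h,k} delta_k.
        delta_h = h * (1 - h) * (W2 @ delta_o)
        # 4. Update each weight: w_{i,j} += eta * delta_j * x_{i,j}.
        W2 += eta * np.outer(h, delta_o); b2 += eta * delta_o
        W1 += eta * np.outer(x, delta_h); b1 += eta * delta_h

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)))
# typically close to [[0],[1],[1],[0]]; an unlucky initialization can land in a local minimum
```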

Validation/generalization error first decreases, then increases, as the weights become tuned to fit idiosyncrasies of the training examples that are not representative of the general distribution. Stop training at the point of lowest error over the validation set. It is not always obvious when the lowest error over the validation set has been reached.

Dealing with Overfitting. Our learning algorithm involves a parameter n = number of gradient descent iterations. How do we choose n to optimize future error? Separate the available data into a training set and a validation set; use the training set to perform gradient descent, and set n to the number of iterations that optimizes validation-set error.
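A minimal sketch of this recipe on synthetic linear-regression data (assuming NumPy; the data, model, learning rate, and iteration budget are illustrative assumptions standing in for the network training above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data, split into a training set and a validation set.
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.5 * rng.standard_normal(200)
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

w = np.zeros(10)
eta = 0.001
best_n, best_val_err = 0, np.inf

for n in range(1, 2001):
    # One gradient descent iteration on the *training* set only.
    grad = -(y_train - X_train @ w) @ X_train
    w -= eta * grad
    # Track the iteration count n that optimizes validation-set error.
    val_err = 0.5 * np.sum((y_val - X_val @ w) ** 2)
    if val_err < best_val_err:
        best_n, best_val_err = n, val_err

print(best_n, best_val_err)   # chosen number of iterations and its validation error
```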

K-Fold Cross Validation. Idea: train multiple times, leaving out a disjoint subset of the data each time for testing; average the test-set accuracies.
Partition the data into K disjoint subsets
For k = 1 to K
    testdata = kth subset
    h = classifier trained* on all data except for testdata
    accuracy(k) = accuracy of h on testdata
end
FinalAccuracy = mean of the K recorded test-set accuracies
* might withhold some of this data to choose the number of gradient descent steps

Leave-One-Out Cross Validation. This is just K-fold cross validation leaving out one example each iteration.
Partition the data into K disjoint subsets, each containing one example
For k = 1 to K
    testdata = kth subset
    h = classifier trained* on all data except for testdata
    accuracy(k) = accuracy of h on testdata
end
FinalAccuracy = mean of the K recorded test-set accuracies
* might withhold some of this data to choose the number of gradient descent steps
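A minimal sketch of this procedure (assuming NumPy; the synthetic data and the simple nearest-centroid classifier are illustrative stand-ins for the neural network above, and setting K equal to the number of examples gives leave-one-out):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 centered at (-1, -1), class 1 at (+1, +1).
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def train_nearest_centroid(X, y):
    """Trivial stand-in classifier: remember the mean of each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = np.array(sorted(model))
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
    return classes[np.argmin(dists, axis=1)]

def k_fold_accuracy(X, y, K):
    folds = np.array_split(rng.permutation(len(y)), K)   # K disjoint subsets
    accs = []
    for k in range(K):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        h = train_nearest_centroid(X[train_idx], y[train_idx])
        accs.append(np.mean(predict(h, X[test_idx]) == y[test_idx]))
    return np.mean(accs)    # FinalAccuracy = mean of the K test-set accuracies

print(k_fold_accuracy(X, y, K=10))          # 10-fold cross validation
print(k_fold_accuracy(X, y, K=len(y)))      # leave-one-out (K = number of examples)
```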

Expressive Capabilities of ANNs. Boolean functions: every Boolean function can be represented by a network with a single hidden layer, but this might require a number of hidden units that is exponential in the number of inputs. Continuous functions: every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer [Cybenko 1989; Hornik et al. 1989]. Any function can be approximated to arbitrary accuracy by a network with two hidden layers [Cybenko 1988].

Representing Simple Boolean Functions. Inputs $x_i \in \{0, 1\}$; each function is computed by a single threshold unit with bias input $x_0 = 1$.
OR function $x_{i_1} \vee x_{i_2} \vee \ldots \vee x_{i_k}$: set $w_i = 1$ if $i$ is one of the $i_j$, $w_i = 0$ otherwise, with threshold $0.5$.
AND function $x_{i_1} \wedge x_{i_2} \wedge \ldots \wedge x_{i_k}$: set $w_i = 1$ if $i$ is one of the $i_j$, $w_i = 0$ otherwise, with threshold $k - 0.5$.
AND with negations, e.g. $x_{i_1} \wedge \neg x_{i_2} \wedge \ldots \wedge x_{i_k}$: set $w_i = 1$ if $x_{i_j}$ appears un-negated, $w_i = -1$ if $x_{i_j}$ appears negated, $w_i = 0$ otherwise, with threshold $t - 0.5$, where $t$ = the number of un-negated literals.
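A minimal sketch checking these three constructions by brute force over all inputs (assuming NumPy; the specific literals chosen below are an arbitrary illustration):

```python
import itertools
import numpy as np

def threshold_unit(w, threshold, x):
    """Single threshold unit: output 1 iff w . x >= threshold."""
    return int(np.dot(w, x) >= threshold)

n = 4                                  # four Boolean inputs x_1..x_4
inputs = list(itertools.product([0, 1], repeat=n))

# OR of x_1, x_3: w_i = 1 on the chosen literals, threshold 0.5.
w_or = np.array([1, 0, 1, 0])
# AND of x_1, x_3: same weights, threshold k - 0.5 with k = 2 literals.
w_and = np.array([1, 0, 1, 0])
# AND with negations, x_1 AND (NOT x_2): +1 for un-negated, -1 for negated,
# threshold t - 0.5 with t = 1 un-negated literal.
w_neg = np.array([1, -1, 0, 0])

for x in inputs:
    x = np.array(x)
    assert threshold_unit(w_or, 0.5, x) == (x[0] or x[2])
    assert threshold_unit(w_and, 2 - 0.5, x) == (x[0] and x[2])
    assert threshold_unit(w_neg, 1 - 0.5, x) == (x[0] and not x[1])
print("all three constructions verified on all", len(inputs), "inputs")
```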

General Boolean Functions. Every Boolean function can be represented by a network with a single hidden layer, though this might require an exponential number of hidden units. Any Boolean function can be written as a truth table; for example, over $(x_1, x_2, x_3)$: 000 +, 001 −, 010 −, 011 +, 100 −, 101 −, 110 +, 111 +. View it as an OR of ANDs, with one AND for each positive entry: $(\neg x_1 \wedge \neg x_2 \wedge \neg x_3) \vee (\neg x_1 \wedge x_2 \wedge x_3) \vee (x_1 \wedge x_2 \wedge \neg x_3) \vee (x_1 \wedge x_2 \wedge x_3)$. Then combine the AND and OR networks into a 2-layer network.
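A minimal sketch of this construction for the example truth table above (assuming NumPy; it builds one AND threshold unit per positive entry, using the weights from the previous slide, and ORs their outputs):

```python
import itertools
import numpy as np

# Positive entries of the example truth table over (x1, x2, x3).
positives = [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 1, 1)]

def boolean_net(x, positives):
    """Two-layer network: one AND unit per positive entry, then an OR unit."""
    hidden = []
    for entry in positives:
        # AND-with-negations unit: +1 where the entry has a 1, -1 where it has a 0,
        # threshold = (number of un-negated literals) - 0.5.
        w = np.array([1 if b == 1 else -1 for b in entry])
        t = sum(entry) - 0.5
        hidden.append(int(np.dot(w, x) >= t))
    # OR unit over the hidden layer: all weights 1, threshold 0.5.
    return int(np.dot(np.ones(len(hidden)), hidden) >= 0.5)

for x in itertools.product([0, 1], repeat=3):
    print(x, boolean_net(np.array(x), positives))
# Prints 1 exactly for 000, 011, 110, 111, matching the positive entries of the table.
```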

Artificial Neural Networks: Summary. Highly non-linear regression/classification. Vector-valued inputs and outputs. Potentially millions of parameters to estimate. Actively used to model distributed computation in the brain. Hidden layers learn intermediate representations. Stochastic gradient descent, local minima problems. Overfitting and how to deal with it.