
Introduction to Deep Learning
A. G. Schwing & S. Fidler, University of Toronto, 2014
CSC420: Intro to Image Understanding

Outline
1. Universality of Neural Networks
2. Learning Neural Networks
3. Deep Learning
4. Applications
5. References

What are neural networks? Let's ask...
[Fig.: a biological neuron next to its computational counterpart: a network with an input layer (Input #1 to #4), a hidden layer, and an output layer producing a single output.]

What are neural networks? "...neural networks (NNs) are computational models inspired by biological neural networks [...] and are used to estimate or approximate functions..." [Wikipedia]

What are neural networks? The people behind them.
Origins: traced back to threshold logic [W. McCulloch and W. Pitts, 1943] and the perceptron [F. Rosenblatt, 1958].
More recently: G. Hinton (UofT), Y. LeCun (NYU), A. Ng (Stanford), Y. Bengio (University of Montreal), J. Schmidhuber (IDSIA, Switzerland), R. Salakhutdinov (UofT), H. Lee (University of Michigan), R. Fergus (NYU), and many more who have made significant contributions (apologies for not being able to mention everyone, including all the students behind this work).

What are neural networks? Use cases:
- Classification
- Playing video games
- Captcha
- Neural Turing Machine (e.g., learning how to sort) [Alex Graves; http://www.technologyreview.com/view/532156/googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/]

What are neural networks? Example: input $x \in \mathbb{R}$, parameters $w_1, w_2$ and bias $b \in \mathbb{R}$; a single hidden unit $h_1$ feeds the output $f$.

How to compute the function? Forward propagation/pass, inference, prediction:
- Given input $x$ and parameters $w$, $b$
- Compute latent variables/intermediate results in a feed-forward manner
- Until we obtain the output function $f$

How to compute the function? Example: input $x$, parameters $w_1, w_2, b$:
$$h_1 = \sigma(w_1 x + b), \qquad f = w_2 h_1, \qquad \text{with the sigmoid } \sigma(z) = 1/(1 + \exp(-z)).$$
For $x = \ln 2$, $b = \ln 3$, $w_1 = 2$, $w_2 = 2$: $h_1 = {?}$, $f = {?}$
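Working the numbers as printed (note the transcription may have dropped a minus sign on $b$, which would change the result):
$$h_1 = \sigma(2\ln 2 + \ln 3) = \sigma(\ln 12) = \frac{1}{1 + \tfrac{1}{12}} = \frac{12}{13} \approx 0.92, \qquad f = w_2 h_1 = \frac{24}{13} \approx 1.85.$$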

How to compute the function? Given the parameters, what is $f$ for $x = 0, 1, 2, \ldots$?
$$f = w_2\,\sigma(w_1 x + b)$$
[Fig.: plot of $f$ over $x \in [-5, 5]$, a scaled sigmoid rising from 0 toward 2.]
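A minimal sketch of this forward pass in Python (NumPy only; the parameter values are the ones from the example above, taken as printed):

```python
import numpy as np

def sigmoid(z):
    # logistic function: sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, w2, b):
    # single hidden unit: h1 = sigma(w1*x + b), output f = w2*h1
    h1 = sigmoid(w1 * x + b)
    return w2 * h1

# evaluate f for a few inputs, using the parameters from the example
w1, w2, b = 2.0, 2.0, np.log(3)
for x in [0.0, 1.0, 2.0, np.log(2)]:
    print(f"x = {x:.3f} -> f = {forward(x, w1, w2, b):.3f}")
```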

Let's mess with the parameters:
$$h_1 = \sigma(w_1 x + b), \qquad f = w_2 h_1, \qquad \sigma(z) = 1/(1 + \exp(-z))$$
[Fig.: two plots of $f$ over $x \in [-5, 5]$. Left: $w_1 = 1.0$ fixed while $b \in \{-2, 0, 2\}$ shifts the sigmoid left and right. Right: $b = 0$ fixed while $w_1 \in \{0.5, 1.0, 100\}$ changes its steepness.]
Keep in mind the step function: for large $w_1$ the sigmoid approaches a step.

How to use neural networks for binary classification? Feature/measurement: $x$. Output: how likely is the input to be a cat?
[Fig.: a sigmoid over $x \in [-5, 5]$ mapping the feature value to a probability-like output between 0 and 1.]

How to use neural networks for binary classification? Shifted feature/measurement: $x$. Output: how likely is the input to be a cat?
[Fig.: two plots, the sigmoid fit to the previous features and the same data shifted, requiring different parameters.]
Learning/training means finding the right parameters.

So far we are able to scale and translate sigmoids. How well can we approximate an arbitrary function? With this simple model we are obviously not going very far.
[Fig.: when features are good, a simple classifier suffices; when features are noisy, a more complex classifier is needed.]
How can we generalize?

Let's use more hidden variables:
$$h_1 = \sigma(w_1 x + b_1), \qquad h_2 = \sigma(w_3 x + b_2), \qquad f = w_2 h_1 + w_4 h_2$$
Combining two step functions gives a bump.
[Fig.: plot of the resulting bump for $w_1 = 100$, $b_1 = -40$, $w_3 = 100$, $b_2 = -60$, $w_2 = 1$, $w_4 = -1$.]
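Reading the parameters with the minus signs the transcription appears to have dropped ($b_1 = -40$, $b_2 = -60$, $w_4 = -1$, the choice that actually yields a bump), the difference of the two near-step sigmoids is close to 1 on $[0.4, 0.6]$ and close to 0 elsewhere:
$$f(0.5) = \sigma(10) - \sigma(-10) \approx 0.9999, \qquad f(0) = \sigma(-40) - \sigma(-60) \approx 0, \qquad f(1) = \sigma(60) - \sigma(40) \approx 0.$$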

So let's simplify: we collapse a pair of hidden nodes into a bump function $\mathrm{Bump}(x_1, x_2, h)$ that
- starts at $x_1$,
- ends at $x_2$,
- has height $h$.

Now we can represent bumps very well. How can we generalize?
$$f = \mathrm{Bump}(0.0, 0.2, h_1) + \mathrm{Bump}(0.2, 0.4, h_2) + \mathrm{Bump}(0.4, 0.6, h_3) + \mathrm{Bump}(0.6, 0.8, h_4) + \mathrm{Bump}(0.8, 1.0, h_5)$$
[Fig.: a target function and its piecewise-bump approximation on $[0, 1]$.]
More bumps give a more accurate approximation. This corresponds to a single-layer network.
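A sketch of this construction (assumed details: the target function below is arbitrary, and the steepness 100 mimics the near-step sigmoids from the previous slide):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, x1, x2, h, steep=100.0):
    # difference of two steep sigmoids: ~h on [x1, x2], ~0 elsewhere
    return h * (sigmoid(steep * (x - x1)) - sigmoid(steep * (x - x2)))

target = lambda x: np.sin(2 * np.pi * x)   # arbitrary function to approximate
edges = np.linspace(0.0, 1.0, 6)           # five bumps covering [0, 1]
x = np.linspace(0.0, 1.0, 200)

# set each bump's height to the target value at its interval's midpoint
f = sum(bump(x, a, b, target((a + b) / 2)) for a, b in zip(edges[:-1], edges[1:]))
print("max abs error:", np.abs(f - target(x)).max())
```

Increasing the number of edges shrinks the error, which is the universality argument in miniature.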

Universality: theoretically we can approximate an arbitrary function. So we can learn a really complex cat classifier. Where is the catch?
- Complexity: we might need quite a few hidden units.
- Overfitting: we might just memorize the training data.

Generalizations are possible to
- include more input dimensions,
- capture more output dimensions,
- employ multiple layers for more efficient representations.
See http://neuralnetworksanddeeplearning.com/chap4.html for a great read!

How do we find the parameters that give a good approximation? And how do we tell a computer to do that? Intuitive explanation:
- Compute the approximation error at the output.
- Propagate the error back by computing each parameter's individual contribution to it.
[Fig. from H. Lee]

Intuitive example:
- Target function: $5x^2$
- Approximation: $f_w(x)$
- Domain of interest: $x \in [0, 1]$
- Error: $e(w) = \int_0^1 \bigl(5x^2 - f_w(x)\bigr)^2\,dx$
- Program of interest: $\min_w e(w) = \min_w \int_0^1 \bigl(5x^2 - f_w(x)\bigr)^2\,dx$
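One way to make this concrete (a sketch under my own assumptions, not the slides' method): approximate the integral by sampling $x$ on a grid and minimize the squared error by gradient descent on the one-hidden-unit model from earlier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(0.0, 1.0, 100)   # grid approximating the integral over [0, 1]
y = 5 * x**2                      # target function 5x^2

w1, w2, b = 0.1, 0.1, 0.0         # arbitrary initialization
lr = 0.05
for step in range(5000):
    h = sigmoid(w1 * x + b)       # forward pass
    f = w2 * h
    r = f - y                     # residuals
    # gradients of the mean squared error, via the chain rule
    dw2 = 2 * np.mean(r * h)
    dz = 2 * r * w2 * h * (1 - h) # sigma'(z) = sigma(z) * (1 - sigma(z))
    dw1 = np.mean(dz * x)
    db = np.mean(dz)
    w1, w2, b = w1 - lr * dw1, w2 - lr * dw2, b - lr * db

f = w2 * sigmoid(w1 * x + b)
print("final error e(w) ~", np.mean((y - f) ** 2))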

The chain rule is important. For $w_1, w_2, x \in \mathbb{R}$, assume $e(w_1, w_2) = f\bigl(w_2, h(w_1, x)\bigr)$. The derivatives are:
$$\frac{\partial e}{\partial w_2} = \frac{\partial f(w_2, h(w_1, x))}{\partial w_2}, \qquad \frac{\partial e}{\partial w_1} = \frac{\partial f(w_2, h(w_1, x))}{\partial w_1} = \frac{\partial f}{\partial h}\,\frac{\partial h}{\partial w_1} \quad \text{(chain rule)}.$$
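For the concrete one-hidden-unit network above, with $h = \sigma(w_1 x + b)$ and $f = w_2 h$, the chain rule works out to:
$$\frac{\partial f}{\partial w_2} = h, \qquad \frac{\partial f}{\partial w_1} = \frac{\partial f}{\partial h}\,\frac{\partial h}{\partial w_1} = w_2\,\sigma'(w_1 x + b)\,x, \qquad \text{where } \sigma'(z) = \sigma(z)\bigl(1 - \sigma(z)\bigr).$$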

Back-propagation does not work well for deep networks:
- Diffusion of the gradient signal (multiplication of many small numbers)
- Attraction to many poor local minima (random initialization is very far from good points)
- Requires a lot of training samples
- Needs significant computational power
Solution: a 2-step approach
1. Greedy layer-wise pre-training
2. Full fine-tuning at the end

Why go deep?
- Representation efficiency (fewer computational units for the same function)
- Hierarchical representation (non-local generalization)
- Combinatorial sharing (re-use of earlier computation)
- It works very well.
[Fig. from H. Lee]

To obtain more flexibility/non-linearity we use additional function prototypes:
- Sigmoid
- Rectified linear unit (ReLU)
- Pooling
- Dropout
- Convolutions

Convolutions. What do the numbers mean? See Sanja's lecture 14 for the answers... [Fig. adapted from A. Krizhevsky]

Max pooling. What is happening here? [Fig. adapted from A. Krizhevsky]
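As an illustration (a generic non-overlapping 2x2 max pooling, not necessarily the exact variant shown in the figure): the window slides over the feature map and only the maximum in each window is kept.

```python
import numpy as np

def max_pool_2x2(a):
    # non-overlapping 2x2 max pooling on a 2D feature map
    # (dimensions assumed even for simplicity)
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.arange(16).reshape(4, 4)
print(max_pool_2x2(a))   # -> [[ 5  7]
                         #     [13 15]]
```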

Rectified linear unit (ReLU): drops information that is smaller than zero; fixes the problem of vanishing gradients to some degree.
Dropout: drops information at random; a kind of regularization, enforcing redundancy.
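A minimal sketch of both operations (the "inverted dropout" scaling below is a common later convention I am assuming, not necessarily what the 2014 slides describe):

```python
import numpy as np

def relu(z):
    # keep positive values, zero out the rest
    return np.maximum(0.0, z)

def dropout(a, p, rng, train=True):
    # randomly zero a fraction p of activations during training;
    # dividing by (1 - p) keeps the expected activation unchanged
    if not train:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

rng = np.random.default_rng(0)
a = relu(np.array([-2.0, -0.5, 0.3, 1.7]))
print(dropout(a, p=0.5, rng=rng))
```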

A famous deep learning network: AlexNet, which won the ImageNet competition in 2012. How many parameters? Given an image, what is happening?
Inference time: about 2 ms per image when processing many images in parallel.
Training time: forever (maybe 2-3 weeks).
[Fig. adapted from A. Krizhevsky]
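For reference, figures not on the slide but standard for AlexNet: roughly 60 million parameters, most of them in the fully connected layers:
$$6 \times 6 \times 256 \times 4096 \approx 37.7\text{M (fc6)}, \quad 4096 \times 4096 \approx 16.8\text{M (fc7)}, \quad 4096 \times 1000 \approx 4.1\text{M (fc8)}, \quad \text{total} \approx 60\text{M}.$$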

Demo

Neural networks have been used for many applications:
- Classification and recognition in computer vision
- Text parsing in natural language processing
- Playing video games
- Stock market prediction
- Captcha
Demos: Russ's website, Antonio's Places website

Classification in computer vision: the ImageNet Challenge. http://deeplearning.cs.toronto.edu/ Since it's the end of the semester, let's find the beach...

Classification in computer vision: the ImageNet Challenge. http://deeplearning.cs.toronto.edu/ A place to maybe prepare for exams...

Links:
- Tutorials: http://deeplearning.net/tutorial/deeplearning.pdf
- Toronto demo by Russ and students: http://deeplearning.cs.toronto.edu/
- MIT demo by Antonio and students: http://places.csail.mit.edu/demo.html
- Honglak Lee: http://deeplearningworkshopnips2010.files.wordpress.com/2010/09/n workshop-tutorial-final.pdf
- Yann LeCun: http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf
- Richard Socher: http://lxmls.it.pt/2014/socher-lxmls.pdf

Videos:
- Video games: https://www.youtube.com/watch?v=mart-xpable
- Captcha: http://singularityhub.com/2013/10/29/tiny-ai-startupvicarious-says-its-solved-captcha/ and https://www.youtube.com/watch?v=lge-dl2juam#t=27
- Stock exchange: http://cs.stanford.edu/people/eroberts/courses/soco/projects/neuralnetworks/applications/stocks.html