Feature Design & Deep Learning


Artificial Intelligence and its applications
Lecture 9: Feature Design & Deep Learning
Professor Daniel Yeung (danyeung@ieee.org), Dr. Patrick Chan (patrickchan@ieee.org)
South China University of Technology, China

Appropriately designed features are the critical factor in the success of machine learning. Example: for mad cow disease detection, Temperature is a better feature than the Gender of a cow. Wrong features lead to bad performance no matter how advanced the classification technique is.

How are features designed? By domain experts, from experience, or ad hoc. Manually designed features may be unrelated to the task, insufficient to achieve good performance, or redundant with other features, so the feature set should be refined iteratively.

Feature Refinement
- Feature Selection: select a proper subset; each retained feature stays meaningful.
  Filter method: based on dataset properties, e.g. correlation, mutual information.
  Wrapper method: based on a classifier, e.g. classification error.
- Feature Extraction: build new dimensions, e.g. (height + blood pressure) / 5; the new dimensions may not be meaningful. E.g. LDA, PCA.

Feature Selection: Filter
No classifier is required; the method is based only on characteristics of the dataset. It measures the relation between a feature (X) and the class label (Y), e.g. by correlation or mutual information.

Example: COR(f1, Y) = 0.87, COR(f2, Y) = 0.29

  Sample   f1   f2   Y
  s1        4    2   1
  s2        2    8   1
  s3        8    4   2
  s4       10   10   2
  s5        6    6   2

f1 is more useful than f2 in classifying classes 1 and 2, since the correlation value (COR) of f1 shows a stronger linear relationship between f1 and Y.

Feature Selection: Filter
- Advantage: independent of the classification algorithm; scales easily to high-dimensional datasets; computationally simple and fast.
- Disadvantage: ignores the interaction with the classifier; ignores feature dependency.

Feature Selection: Wrapper
Evaluates a feature set according to the generalization ability of a trained classifier.

Forward Selection
Start from the empty set and add one feature in each iteration. Assume four features f1, f2, f3, f4, where c(S) denotes classifier accuracy on feature set S:
- c(f1) = 92%, c(f2) = 80%, c(f3) = 60%, c(f4) = 86%  -> select f1
- c(f1, f2) = 93%, c(f1, f3) = 95%, c(f1, f4) = 91%  -> select f3
- c(f1, f3, f2) = 95%, c(f1, f3, f4) = 94%  -> select f2
Selection order: f1 > f3 > f2 > f4
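To make the two ideas concrete, here is a minimal Python sketch (not from the lecture): it scores the two toy features from the table above by absolute Pearson correlation, then runs greedy forward selection. The logistic-regression classifier and 2-fold cross-validation stand in for the accuracy measure c(.) and are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data from the slides: five samples, two features, two classes.
X = np.array([[4, 2], [2, 8], [8, 4], [10, 10], [6, 6]], dtype=float)
y = np.array([1, 1, 2, 2, 2])

# Filter method: rank each feature by |Pearson correlation| with the label.
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
print("filter scores:", np.round(scores, 2))   # roughly [0.87, 0.29], as on the slide

# Wrapper method: greedy forward selection driven by cross-validated accuracy.
def forward_selection(X, y, n_select):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        best_j, best_acc = None, -1.0
        for j in remaining:
            cols = selected + [j]
            acc = cross_val_score(LogisticRegression(), X[:, cols], y, cv=2).mean()
            if acc > best_acc:
                best_j, best_acc = j, acc
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

print("wrapper picks features:", forward_selection(X, y, n_select=2))
```

Swapping in mutual information, or a different classifier in the wrapper loop, only changes the scoring line.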

Feature Selection: Wrapper
Backward Elimination
Start from the full set and remove one feature in each iteration. Assume four features f1, f2, f3, f4:
- c(f2, f3, f4) = 90%, c(f1, f3, f4) = 93%, c(f1, f2, f4) = 91%, c(f1, f2, f3) = 80%  -> remove f2
- c(f3, f4) = 89%, c(f1, f4) = 90%, c(f1, f3) = 75%  -> remove f3
- c(f4) = 85%, c(f1) = 77%  -> remove f1
Ranking: f4 > f1 > f3 > f2

Feature Selection: Wrapper
The time complexity of backward elimination is larger than that of forward selection, but it usually gives a better result: in its first round backward elimination trains n classifiers that each use n-1 features, whereas forward selection trains n classifiers that each use only 1 feature.
- Advantage: interaction between feature selection and model selection; takes feature dependencies into account.
- Disadvantage: the learning system is a black box; high risk of overfitting; computationally intensive.

Feature Extraction
Feature selection may not always work. For example, when neither feature 1 nor feature 2 alone can separate the two classes, rotating and transforming the data can yield a new feature 2 that is much more useful for classification.

Feature Extraction
The two most common methods:
- Principal Component Analysis (PCA): uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. Each direction maximizes the variance it captures: $w^* = \arg\max_{\|w\|=1} w^\top S\, w$, where $S$ is the data covariance matrix.
- Linear Discriminant Analysis (LDA): finds a linear combination of features that characterizes or separates two or more classes: $w^* = \arg\max_{w} \frac{w^\top S_b\, w}{w^\top S_w\, w}$, where $S_b$ ($S_w$) is the between-class (within-class) scatter. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification.

http://perclass.com/doc/guide/dimensionality_reduction.html
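Both projections are available off the shelf in scikit-learn. The sketch below only illustrates the two objectives above on a small synthetic two-class dataset; the blob construction is an assumption, not from the slides.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two elongated point clouds: neither original axis separates them well on its own.
class0 = rng.normal(loc=[0, 0], scale=[3.0, 0.5], size=(100, 2)) @ np.array([[1, 1], [-1, 1]])
class1 = class0 + np.array([2.0, 2.0])
X = np.vstack([class0, class1])
y = np.array([0] * 100 + [1] * 100)

# PCA: directions of maximal variance (ignores the class labels).
pca = PCA(n_components=2).fit(X)
print("PCA components:\n", pca.components_)

# LDA: direction maximizing between-class vs. within-class scatter (uses the labels).
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
print("LDA direction:", lda.scalings_.ravel())
```

On data like this, PCA's first component follows the elongated (high-variance) direction, while LDA points along the class separation, which is exactly the difference between the two objectives.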

Feature Extraction
- Advantage: projects the original feature space to a new low-dimensional space.
- Disadvantage: alters the original representation of the features; the learning model becomes hard to interpret.

First Impression
AlphaGo, implemented by DeepMind. March 2016: beat Lee Sedol. Late 2016 to early 2017: played online under the name "Master" and went 60 wins, 0 losses, including games against the world #1 player Ke Jie.

Trend of Machine Learning

What is Deep Learning?
Deep learning has recently drawn much public attention due to its strong performance in many difficult and complicated applications: computer vision (e.g. image classification, segmentation), speech recognition, and natural language processing (e.g. sentiment analysis, translation). It is a branch of machine learning; today the term most commonly refers to a neural network with multiple layers (a deep architecture), and training such a network involves both unsupervised and supervised learning.

Why Deep Architecture?
Our brain is a deep architecture. A deep architecture can represent a more complicated function than a shallow one: many functions can be represented efficiently with a deep architecture but not with a shallow one (deeper: more complicated function; shallower: less complicated function).
Yoshua Bengio, Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, 2(1), 2009.

What's New?
Multilayer neural networks have been studied for more than 30 years. What is actually new? Old wine in a new bottle?

What's New?
There was no satisfying training method for deep architectures before 2006. The backpropagation algorithm loses its power when training a deep-architecture NN, because the backpropagated error signals decay exponentially with the number of layers, so a deep-architecture NN ends up less accurate than a shallow one.
1. Hertz, J., Krogh, A., & Palmer, R. (1991). Introduction to the Theory of Neural Computation. Redwood City: Addison-Wesley.
2. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.

What's New?
Breakthrough in 2006: three papers proposed new learning algorithms, all containing:
- Feature learning by unsupervised learning: train one hidden layer at a time.
- Prediction by supervised learning: train the last layer, then fine-tune all the layers.
1. Hinton, G. E., Osindero, S., and Teh, Y. A fast learning algorithm for deep belief nets. Neural Computation 18:1527-1554, 2006.
2. Yoshua Bengio, Pascal Lamblin, Dan Popovici and Hugo Larochelle. Greedy Layer-Wise Training of Deep Networks. In J. Platt et al. (Eds), Advances in Neural Information Processing Systems 19 (NIPS 2006), pp. 153-160, MIT Press, 2007.
3. Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra and Yann LeCun. Efficient Learning of Sparse Representations with an Energy-Based Model. In J. Platt et al. (Eds), Advances in Neural Information Processing Systems (NIPS 2006), MIT Press, 2007.

Types of Deep Learning
The common types of deep learning: Stacked Autoencoder, Convolutional Neural Network, Deep Belief Network, Recurrent Neural Network. The Stacked Autoencoder and the Convolutional Neural Network will be discussed in this lecture.

Brief History of Machine Learning
Interest in deep architectures had declined due to the success of SVM, until the breakthrough around 2006.
http://www.erogol.com/brief-history-machine-learning/

Traditional Learning Algorithm
How is a traditional neural network trained? (Figure: a small feed-forward network with weights w11, w12, ..., fed by the toy training set below.)

  Sample   x1     x2    x3    y
  s1       0.2    0.1   0.5   +1
  s2       0.1    0.5   0.3   +1
  s3      -0.2    0.2   0.2   -1

First, initialize the weights.

Traditional Learning Algorithm
How is a traditional neural network trained? The samples are presented one at a time (figure: the same small network, with the toy training set above):
1. Present s1: compute the error = f(s1) - y1, then update the weights.
2. Present s2: compute the error = f(s2) - y2, then update the weights.
3. Present s3: compute the error = f(s3) - y3, then update the weights.
Finally (hopefully), after many such passes the weights converge.
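The per-sample update loop on these slides can be written out directly. The sketch below is a minimal illustration, not the lecture's code: the hidden-layer size, tanh activations, squared-error loss, and learning rate are all assumptions.

```python
import numpy as np

# Toy training set from the slides: three samples, three inputs, labels +1 / -1.
X = np.array([[0.2, 0.1, 0.5], [0.1, 0.5, 0.3], [-0.2, 0.2, 0.2]])
y = np.array([+1.0, +1.0, -1.0])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights ("initialize the weights")
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
lr = 0.1

for epoch in range(200):
    for x, t in zip(X, y):                # present the samples one at a time
        h = np.tanh(x @ W1)               # hidden activations
        out = np.tanh(h @ W2).item()      # network output f(s)
        err = out - t                     # Error = f(s) - y
        # Backpropagate the error and take a small gradient step on both weight matrices.
        d_out = err * (1 - out ** 2)
        d_hid = d_out * W2.ravel() * (1 - h ** 2)
        W2 -= lr * np.outer(h, d_out)
        W1 -= lr * np.outer(x, d_hid)

print([round(np.tanh(np.tanh(x @ W1) @ W2).item(), 2) for x in X])  # should move toward +1, +1, -1
```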

Traditional Learning Algorithm
Gradient descent updates the weights by making many tiny adjustments. Each adjustment makes the network better on the recent samples but may make it worse on others. It easily gets stuck in local minima, the search space is too complex, and the result relies heavily on the weight initialization.

DL Algorithm
Weights are trained layer by layer: train this layer, then the next, then the next, and finally the last one. The intermediate layers are trained as auto-encoders (in this illustration), and the final layer is trained to predict the class.

DL Algorithm
Auto-encoder: the layer is forced to learn good features that describe the previous layer. We train the weights by minimizing the reconstruction error between a and a', so that the code b can represent a (figure: a1..a4 -> Encode -> b1, b2 -> Decode -> a'1..a'4). Because the next layer has fewer units than the first one, its units are forced to become good features of the first layer.

DL Algorithm
The intermediate layers are trained as auto-encoders (no label is needed): unsupervised learning. The final layer is trained to predict the class, and fine-tuning of all layers can then be applied: supervised learning.
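A compact numpy sketch of this greedy layer-wise recipe follows. It is only an illustration under assumed settings (layer sizes 8 -> 5 -> 3, sigmoid units, a least-squares output layer); a real pipeline would finish by fine-tuning all layers with backpropagation, as the slide notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(A, n_hidden, lr=0.1, epochs=200):
    """Train one sigmoid auto-encoder layer: minimize ||A - A'||^2, return the encoder weights."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    W_enc = rng.normal(scale=0.1, size=(A.shape[1], n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, A.shape[1]))
    for _ in range(epochs):
        B = sig(A @ W_enc)                 # encode: b
        A_rec = B @ W_dec                  # decode: a'
        err = A_rec - A                    # reconstruction error (a' - a)
        grad_dec = B.T @ err / len(A)
        grad_enc = A.T @ ((err @ W_dec.T) * B * (1 - B)) / len(A)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

# Toy unlabeled data: 8-dimensional inputs; labels are used only for the final layer.
X = rng.random((200, 8))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# Greedy layer-wise unsupervised pretraining: each layer reconstructs the one below it.
W1 = train_autoencoder(X, n_hidden=5)
H1 = 1.0 / (1.0 + np.exp(-(X @ W1)))
W2 = train_autoencoder(H1, n_hidden=3)
H2 = 1.0 / (1.0 + np.exp(-(H1 @ W2)))

# Supervised step: train only the final (output) layer on the learned features.
# (A full pipeline would then fine-tune all layers with backpropagation.)
w_out = np.linalg.lstsq(np.c_[H2, np.ones(len(H2))], y, rcond=None)[0]
pred = (np.c_[H2, np.ones(len(H2))] @ w_out > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```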

Feature Learning in DL
Hierarchies of features are learnt as nonlinear transformations of the data: pixels -> edges -> object parts (edge combinations) -> object models, from the 1st layer to the 3rd layer. The need for manual feature design is reduced.
1. Dumitru Erhan, Aaron Courville, Yoshua Bengio, Pascal Vincent. Why Does Unsupervised Pre-training Help Deep Learning? Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.

Deep Learning vs. Traditional NN
(Figure: a stacked autoencoder with layer-wise feature learning compared with a traditional network trained purely by backpropagation.)

Convolutional Neural Network
In a stacked autoencoder, all inputs connect to all neurons in the first hidden layer. The performance of such a network may not be good for high-dimensional problems (a lot of features), because every input yields many connections. For example, an image with 100x100 pixels has 10,000 dimensions.

Convolutional Neural Network
Instead, each input is connected to only a small number of neurons; for example, a neuron only considers adjacent pixels in an image. This is called a locally connected network (local receptive field network). A quick parameter count is sketched below.
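As a back-of-the-envelope illustration of this point (the hidden-layer size and patch size below are assumptions, not from the slides):

```python
# Rough connection counts for a 100x100-pixel image (10,000 inputs), showing why a
# fully connected first layer is expensive and why local connectivity and weight
# sharing help. The layer sizes here are made up for illustration.
n_inputs = 100 * 100          # 10,000 pixel inputs
n_hidden = 100                # assumed number of first-layer neurons
patch = 5 * 5                 # assumed local receptive field of 5x5 pixels

fully_connected   = n_inputs * n_hidden   # every input connects to every neuron
locally_connected = patch * n_hidden      # each neuron sees only a 5x5 patch
weight_shared     = patch                 # one shared 5x5 filter reused at every position

print(fully_connected, locally_connected, weight_shared)   # 1,000,000 vs 2,500 vs 25
```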

Convolutional Neural Network
Weight sharing is used for a further reduction: some parameters are forced to be equal, e.g. w1 = w4 = w7, w2 = w5 = w8, w3 = w6 = w9, so the same small set of weights slides across the input. This operation is called convolution.

Convolutional Neural Network
Another feature is the max-pooling layer (also called a subsampling layer). It reduces the size significantly and gives translational invariance: the output is invariant to small shifts of the inputs. A network built this way is called a convolutional neural network.

Convolutional Neural Network
Other features (optional): Local Contrast Normalization (LCN) subtracts the mean and divides by the standard deviation of the incoming neurons, giving brightness invariance.

Convolutional Neural Network
Typical pipeline (figure): Convolution + ReLU -> Max Pooling -> Convolution + ReLU -> Max Pooling -> Fully Connected. The convolutional part performs feature learning and the fully connected part acts as the classifier, producing class scores such as Dog (0.01), Cat (0.04), Boat (0.94), Bird (0.02). The weight-sharing and pooling steps are illustrated in the sketch below.
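A small numpy sketch of weight sharing and pooling; the input values and kernel below are made-up numbers for illustration.

```python
import numpy as np

# Weight sharing as on the slide: w1 = w4 = w7, w2 = w5 = w8, w3 = w6 = w9, i.e. the
# nine connections reuse one length-3 kernel [w1, w2, w3] slid across the input.
x = np.array([0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0])   # 1-D input (e.g. a row of pixels)
kernel = np.array([0.5, 1.0, -0.5])                  # the shared weights w1, w2, w3

# Convolution: each output unit applies the same kernel to a window of adjacent inputs.
feature_map = np.array([kernel @ x[i:i + 3] for i in range(len(x) - 2)])

# ReLU introduces the nonlinearity, then max pooling (window 2) subsamples the map,
# shrinking it and making it insensitive to small shifts of the input.
relu = np.maximum(feature_map, 0.0)
pooled = np.array([relu[i:i + 2].max() for i in range(0, len(relu) - 1, 2)])

print("feature map:", feature_map)
print("after ReLU + max pooling:", pooled)
```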

Convolution Operation
(Figures: a filter sliding over the input image to produce a feature map.)

Pooling + ReLU
ReLU introduces the nonlinearity of the convnet, since convolution itself is linear; pooling then subsamples the rectified feature map.
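The same idea in 2-D, producing the feature map the figures refer to; the toy image and edge filter below are assumptions for illustration only.

```python
import numpy as np

# Convolving a tiny made-up 6x6 "image" with a 3x3 vertical-edge filter gives a
# feature map; ReLU and 2x2 max pooling follow, as in the slide's pipeline.
img = np.zeros((6, 6))
img[:, 3:] = 1.0                                   # left half dark, right half bright
k = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)  # assumed edge filter

fmap = np.array([[np.sum(img[i:i+3, j:j+3] * k) for j in range(4)] for i in range(4)])
rect = np.maximum(fmap, 0.0)                       # ReLU: rectified feature map
pool = rect.reshape(2, 2, 2, 2).max(axis=(1, 3))   # 2x2 max pooling

print(pool)   # the response to the edge between the two halves survives pooling
```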

Deep Learning Feature Learning
(Figure: feature hierarchies learned for faces, cars, and elephants; from "Unsupervised Learning of Hierarchical Representations with Convolutional Deep Belief Networks".)

Deep Learning Characteristics
- Learns from raw data (feature learning)
- Very deep architecture (10-100 layers)
- Huge training set (big data era)
- Huge computational complexity (distributed systems, GPUs)

Image Recognition
ImageNet: more than 1 million colour images, 1000 classes, different sizes (482x415 on average). (Figure: traditional methods vs. deep learning.)

Recognition of Simple Drawings
https://quickdraw.withgoogle.com/

Object Recognition

Scene Parsing Auto Pilot Car 53 54 Automatic Colorization Automatic Machine Translation 55 56

Create Images from Sketches Game Playing Reinforcement Learning Breakout Mario 57 58 Concluding Remarks Deep Learning Advantage Automatic feature extraction Independent of initial weight assignment Dimensionality reduction Disadvantage Can t explain why it works well or not well Time complexity Big data requirement 59