Artificial Intelligence and its Applications
Lecture 9: Feature Design & Deep Learning
Professor Daniel Yeung (danyeung@ieee.org), Dr. Patrick Chan (patrickchan@ieee.org)
South China University of Technology, China

Appropriately designed features are the critical factor in the success of machine learning.
Example: mad cow disease detection. Temperature is a better feature than the gender of a cow.
Wrong features lead to bad performance no matter how advanced the classification technique is.

How are features designed?
Usually by a domain expert, from experience, or ad hoc.
Manually designed features may be unrelated to the task, not enough to achieve good performance, or redundant with other features.
The feature set should therefore be refined iteratively.

Feature Refinement
Feature Selection: select a proper subset of the original features; each retained feature is meaningful.
Filter method: based on properties of the dataset, e.g. correlation, mutual information.
Wrapper method: based on a classifier, e.g. classification error.
Feature Extraction: build new dimensions, e.g. a combination such as (height + blood pressure) / 5.
The new dimensions may not be meaningful. Examples: LDA, PCA.
Feature Selection: Filter
No classifier is required; the method is based only on characteristics of the dataset.
It measures the relation between a feature (X) and the class label (Y), e.g. by correlation or mutual information.

Example with five samples:
      f1   f2   Y
s1     4    2   1
s2     2    8   1
s3     8    4   2
s4    10   10   2
s5     6    6   2
COR(f1, Y) = 0.87, COR(f2, Y) = 0.29.
f1 is more useful than f2 for separating classes 1 and 2, since its correlation value (COR) shows a stronger linear relationship with Y.

Filter advantages: independent of the classification algorithm; scales easily to high-dimensional datasets; computationally simple and fast.
Filter disadvantages: ignores the interaction with the classifier; ignores feature dependencies.

Feature Selection: Wrapper
Evaluates a feature set according to the generalization ability of a trained classifier.

Forward Selection
Initial set: empty set; add one feature in each iteration.
Assume four features f1, f2, f3, f4, and let c(.) denote the accuracy of a classifier trained on the given subset:
c(f1) = 92%, c(f2) = 80%, c(f3) = 60%, c(f4) = 86%   ->  select f1
c(f1, f2) = 93%, c(f1, f3) = 95%, c(f1, f4) = 91%    ->  select f3
c(f1, f3, f2) = 95%, c(f1, f3, f4) = 94%             ->  select f2
Selection order: f1 > f3 > f2 > f4.
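To make both selection styles concrete, here is a minimal Python sketch (NumPy and scikit-learn are assumed to be available; the iris dataset and the logistic-regression classifier are illustrative choices, not part of the lecture). The filter part reproduces the correlation values 0.87 and 0.29 from the five-sample example above; the wrapper part runs a greedy forward selection driven by cross-validated classifier accuracy.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Filter method: rank features by their correlation with the class label Y.
# The five samples (columns f1, f2 and label Y) are taken from the example above.
X_toy = np.array([[4, 2], [2, 8], [8, 4], [10, 10], [6, 6]], dtype=float)
Y_toy = np.array([1, 1, 2, 2, 2], dtype=float)
for j in range(X_toy.shape[1]):
    cor = np.corrcoef(X_toy[:, j], Y_toy)[0, 1]
    print(f"COR(f{j + 1}, Y) = {cor:.2f}")   # prints 0.87 and 0.29

# Wrapper method: greedy forward selection, evaluating each candidate subset
# by the cross-validated accuracy of a trained classifier.
X, y = load_iris(return_X_y=True)            # illustrative dataset
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),       # the wrapped classifier
    n_features_to_select=2,
    direction="forward",                     # "backward" gives backward elimination
    cv=5,
)
selector.fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```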
Feature Selection: Wrapper - Backward Elimination
Initial set: full set; remove one feature in each iteration.
Assume four features f1, f2, f3, f4:
c(f2, f3, f4) = 90%, c(f1, f3, f4) = 93%, c(f1, f2, f4) = 91%, c(f1, f2, f3) = 80%   ->  remove f2
c(f3, f4) = 89%, c(f1, f4) = 90%, c(f1, f3) = 75%                                    ->  remove f3
c(f4) = 85%, c(f1) = 77%                                                             ->  remove f1
Ranking: f4 > f1 > f3 > f2.

The time complexity of backward elimination is larger than that of forward selection, but it usually gives a better result. For example, in its first pass backward elimination trains n classifiers each using n-1 features, whereas forward selection trains n classifiers each using only 1 feature.

Wrapper advantages: interaction between feature selection and model selection; feature dependencies are taken into account.
Wrapper disadvantages: the learning system is a black box; high risk of overfitting; computationally intensive.

Feature Extraction
Feature selection may not always work. For example, with two features, using only feature 1 or only feature 2 cannot classify the two classes accurately; but after rotating and transforming the data, the new feature 2 becomes much more useful for classification.

The two most common methods:
Principal Component Analysis (PCA): uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. Formally, W* = argmax_W Var(W^T X) subject to W^T W = I.
Linear Discriminant Analysis (LDA): finds a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification. Formally, W* = argmax_W (W^T S_b W) / (W^T S_w W), where S_b is the between-class (inter-class) scatter and S_w is the within-class (intra-class) scatter.
http://perclass.com/doc/guide/dimensionality_reduction.html
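As a hedged sketch of the two extraction methods above, the snippet below applies scikit-learn's PCA and LinearDiscriminantAnalysis (assumed available) to synthetic two-class data; the data and parameter choices are illustrative only, not the example from the slides.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic two-class data in 2-D (illustrative only).
rng = np.random.default_rng(0)
class1 = rng.normal(loc=[0, 0], scale=1.0, size=(50, 2))
class2 = rng.normal(loc=[3, 3], scale=1.0, size=(50, 2))
X = np.vstack([class1, class2])
y = np.array([0] * 50 + [1] * 50)

# PCA: unsupervised, finds orthogonal directions of maximum variance.
pca = PCA(n_components=1)
X_pca = pca.fit_transform(X)

# LDA: supervised, maximizes the between-class scatter S_b relative to
# the within-class scatter S_w.
lda = LinearDiscriminantAnalysis(n_components=1)
X_lda = lda.fit_transform(X, y)

print("PCA direction:", pca.components_[0])
print("variance explained:", pca.explained_variance_ratio_[0])
print("first 3 LDA projections:", X_lda[:3].ravel())
```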
Feature Extraction
Advantage: projects the original feature space onto a new, lower-dimensional space.
Disadvantages: alters the original representation of the features; the learning model becomes hard to interpret.

First Impression
AlphaGo, implemented by DeepMind.
March 2016: beat Lee Sedol.
Late 2016 to early 2017: played online as "Master" and finished with 60 wins and 0 losses, including games against the world #1 player Ke Jie.

Trend of Machine Learning / What is Deep Learning?
Deep learning has drawn much public attention recently due to its strong performance in many difficult and complicated applications:
Computer vision (e.g. image classification, segmentation)
Speech recognition
Natural language processing (e.g. sentiment analysis, translation)
Deep learning is a branch of machine learning. Today it most commonly refers to a neural network with multiple layers (a deep architecture), and its training involves both unsupervised and supervised learning.
Why Deep Architecture?
Our brain is a deep architecture.
A deep architecture can represent more complicated functions than a shallow one: deeper means a more complicated function, shallower a less complicated one. Many functions can be represented efficiently with a deep architecture but not with a shallow one.
(Yoshua Bengio, Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, 2(1), 2009)

What's New?
Multilayer neural networks have been studied for more than 30 years. What is actually new? Old wine in a new bottle?
Before 2006 there was no satisfying training method for deep architectures. The backpropagation algorithm loses its power when training a deep-architecture NN because the backpropagated error signals decay exponentially with respect to the number of layers, so a deep-architecture NN was often less accurate than a shallow one.
1. Hertz, J., Krogh, A. and Palmer, R. (1991). Introduction to the Theory of Neural Computation. Redwood City: Addison-Wesley.
2. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.

What's New? Breakthrough in 2006
Three papers proposed new learning algorithms, all containing:
Feature learning by unsupervised learning: train one hidden layer at a time.
Prediction by supervised learning: train the last layer, then fine-tune all the layers.
1. Hinton, G. E., Osindero, S. and Teh, Y., A fast learning algorithm for deep belief nets, Neural Computation 18:1527-1554, 2006.
2. Yoshua Bengio, Pascal Lamblin, Dan Popovici and Hugo Larochelle, Greedy Layer-Wise Training of Deep Networks, in J. Platt et al. (Eds), Advances in Neural Information Processing Systems 19 (NIPS 2006), pp. 153-160, MIT Press, 2007.
3. Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra and Yann LeCun, Efficient Learning of Sparse Representations with an Energy-Based Model, in J. Platt et al. (Eds), Advances in Neural Information Processing Systems (NIPS 2006), MIT Press, 2007.
Types of Deep Learning
The common types of deep learning: Stacked Autoencoder, Convolutional Neural Network, Deep Belief Network, Recurrent Neural Network.
The Stacked Autoencoder and the Convolutional Neural Network will be discussed in this lecture.

Brief History of Machine Learning
Before 2006, deep architectures were largely set aside because of the success of SVM.
http://www.erogol.com/brief-history-machine-learning/

Traditional Learning Algorithm
How is a traditional neural network trained?
(Diagram: a small feed-forward network whose inputs are fully connected to the next layer through weights w11, w12, ..., w3m.)
Training data used in the illustration:
      x1    x2    x3    y
s1    0.2   0.1   0.5   +1
s2    0.1   0.5   0.3   +1
s3   -0.2   0.2   0.2   -1
Step 1: initialize the weights.
Traditional Learning Algorithm
The samples are then presented one at a time. For each training sample s_i:
1. Compute the error = f(s_i) - y, where f(s_i) is the network's output for s_i and y is its label.
2. Update the weights to reduce this error.
This is repeated for s1, s2, s3, ... over many passes through the data until, finally (hopefully), the network fits the training set. A minimal sketch of this loop is given below.
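The loop below is a minimal sketch of this procedure on the three samples from the table, drastically simplified to a single tanh neuron trained by per-sample gradient descent on a squared error; the learning rate, activation and number of passes are illustrative assumptions, not the network from the slides.

```python
import numpy as np

# The three training samples from the slide: inputs and target labels.
X = np.array([[ 0.2, 0.1, 0.5],
              [ 0.1, 0.5, 0.3],
              [-0.2, 0.2, 0.2]])
y = np.array([+1.0, +1.0, -1.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)   # 1. initialize the weights (small random values)
b = 0.0
lr = 0.1                            # learning rate: size of each tiny adjustment

for epoch in range(200):            # many passes over the training set
    for xi, yi in zip(X, y):        # present the samples one at a time
        out = np.tanh(w @ xi + b)   # f(s_i): the network's output for this sample
        err = out - yi              # 2. error = f(s_i) - y
        grad = err * (1 - out**2)   # backpropagate the error through tanh
        w -= lr * grad * xi         # 3. update the weights by gradient descent
        b -= lr * grad

print("predictions:", np.sign(X @ w + b))   # hopefully [+1, +1, -1]
```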
Traditional Learning Algorithm: Problems
Gradient descent updates the weights by making many tiny adjustments. Each adjustment makes the network better on the recent samples but may make it worse on others. It easily gets stuck in local minima because the search space is too complex, and the result relies heavily on the weight initialization.

DL Algorithm
In deep learning, the weights are trained layer by layer: train the first layer, then the next layer, and so on, and finally the output layer.
The intermediate layers are trained as auto-encoders (used in this illustration); the final layer is trained to predict the class.

DL Algorithm: Auto-encoder
An auto-encoder is forced to learn good features that describe the previous layer.
The input a = (a1, a2, a3, a4) is encoded into a code b = (b1, b2) and then decoded back into a reconstruction a' = (a'1, a'2, a'3, a'4). The weights are trained by minimizing the reconstruction error between a and a', so that b can represent a.
Because the next layer typically has fewer units than the previous one, its units are forced to become good features of the previous layer.
The intermediate layers are trained as auto-encoders (no labels are needed): unsupervised learning.
The final layer is trained to predict the class, and fine-tuning of all layers can then be applied: supervised learning.
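The following is a minimal NumPy sketch of one such auto-encoder layer (4 inputs encoded into 2 hidden units and decoded back to 4 outputs); the layer sizes, sigmoid encoder, linear decoder and plain gradient descent are illustrative assumptions rather than the exact setup on the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 4))            # unlabeled training data: rows a = (a1, ..., a4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights: a -> b (2 hidden units)
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights: b -> a'
lr = 0.5

for epoch in range(2000):
    B = sigmoid(A @ W_enc)          # encode: b = sigmoid(a W_enc)
    A_rec = B @ W_dec               # decode: a' = b W_dec
    err = A_rec - A                 # reconstruction error a' - a

    # Gradient descent on the mean squared reconstruction error.
    grad_dec = B.T @ err / len(A)
    grad_enc = A.T @ ((err @ W_dec.T) * B * (1 - B)) / len(A)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("mean reconstruction error:", np.mean((A_rec - A) ** 2))
# The 2-unit code b is the learned feature; the next auto-encoder in the stack
# would be trained on b in the same way, and the last layer on the class labels.
```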
Feature Learning in DL
Hierarchies of features are learnt as nonlinear transformations of the data, so the need for manual feature design is reduced.
Starting from raw pixels, the 1st layer learns edges, the 2nd layer learns object parts (combinations of edges), and the 3rd layer learns object models.
(Diagram: deep learning with a stacked autoencoder vs. a traditional NN trained with backpropagation.)
1. Dumitru Erhan, Aaron Courville, Yoshua Bengio, Pascal Vincent, Why Does Unsupervised Pre-training Help Deep Learning?, Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.

Convolutional Neural Network
In a stacked autoencoder, all inputs connect to all neurons in the first hidden layer. The performance of such a network may not be good for high-dimensional problems (many features), because every input creates many connections. For example, an image with 100x100 pixels has 10,000 dimensions.
Instead, each neuron can be connected to only a subset of the inputs; for example, a neuron may only consider adjacent pixels in an image. This is called a local network, a locally connected network, or a local receptive field network. A rough comparison of the parameter counts is sketched below.
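For a rough sense of why local connectivity helps, the arithmetic below compares parameter counts for the 100x100-pixel example; the number of hidden neurons and the 5x5 receptive-field size are made-up illustrative values, not figures from the lecture.

```python
# Rough parameter counts for the 100x100-pixel example above.
inputs = 100 * 100                      # 10,000 input dimensions
hidden = 100                            # assumed number of neurons in the first layer

fully_connected = inputs * hidden       # every input connects to every neuron
locally_connected = (5 * 5) * hidden    # each neuron only sees a 5x5 patch of pixels
shared_filter = 5 * 5                   # one 5x5 filter shared across all positions
                                        # (the convolutional case on the next slides)

print(fully_connected)                  # 1000000 weights
print(locally_connected)                # 2500 weights
print(shared_filter)                    # 25 weights
```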
Convolutional Neural Network
Weight sharing is used for a further reduction in parameters: some parameters are forced to be equal, e.g. w1 = w4 = w7, w2 = w5 = w8, w3 = w6 = w9, so the same small filter is applied at every position of the input. This operation is called convolution, and such a network is called a convolutional neural network.

Another feature is the max-pooling layer, also called a subsampling layer. It reduces the size of the representation significantly and provides translational invariance: the output is largely invariant to shifts of the input, illustrated on the slide with the shifted input patterns 0 1 0 0 0 0 0 and 0 0 0 1 0 0 0.

Other (optional) features include Local Contrast Normalization (LCN): subtract the mean and divide by the standard deviation of the incoming neurons, which gives brightness invariance.

A typical architecture: feature learning by [Convolution + ReLU -> Max Pooling -> Convolution + ReLU -> Max Pooling], followed by a fully connected classifier that outputs class scores such as Dog (0.01), Cat (0.04), Boat (0.94), Bird (0.02).
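Below is a small sketch of weight sharing and max-pooling in one dimension, reusing the shifted input patterns from the slide; the filter values are arbitrary illustrative numbers. Sliding the same 3-tap filter over the input is the convolution, and taking the maximum of the resulting feature map shows the translational invariance.

```python
import numpy as np

w = np.array([0.2, 0.5, 0.3])     # one 3-tap filter shared at every position
                                  # (w1 = w4 = w7, w2 = w5 = w8, w3 = w6 = w9)

def conv1d(x, w):
    """Slide the shared filter over the input (valid positions only)."""
    return np.array([x[i:i + len(w)] @ w for i in range(len(x) - len(w) + 1)])

x1 = np.array([0, 1, 0, 0, 0, 0, 0], dtype=float)   # input pattern from the slide
x2 = np.array([0, 0, 0, 1, 0, 0, 0], dtype=float)   # the same pattern, shifted

f1, f2 = conv1d(x1, w), conv1d(x2, w)
print(f1)   # [0.5 0.2 0.  0.  0. ]  strong response where the pattern is
print(f2)   # [0.  0.3 0.5 0.2 0. ]  the response has simply moved along
print(f1.max(), f2.max())   # 0.5 0.5 -> max-pooling keeps the same value,
                            # i.e. the pooled feature is invariant to the shift
```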
Convolution Operation
(Figures: the convolution operation slides a filter over the input to produce a feature map.)
Pooling + ReLU: ReLU introduces nonlinearity into the convnet, since the convolution itself is linear.
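To show what "pooling + ReLU" does to a feature map, here is a tiny NumPy sketch; the 4x4 feature-map values are invented for illustration.

```python
import numpy as np

# A feature map produced by a convolution can contain negative values;
# ReLU zeroes them out, and 2x2 max-pooling then shrinks the map.
feature_map = np.array([[ 1.0, -0.5,  0.2,  0.8],
                        [-1.2,  3.0, -0.1,  0.4],
                        [ 0.6, -2.0,  1.5, -0.3],
                        [-0.7,  0.9, -0.4,  2.2]])

rectified = np.maximum(feature_map, 0)          # ReLU: max(0, x) adds nonlinearity

# 2x2 max-pooling with stride 2: keep the largest value in each 2x2 block.
pooled = rectified.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(rectified)
print(pooled)       # a 2x2 map: [[3.0, 0.8], [0.9, 2.2]]
```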
Feature Learning in a CNN
(Figure: hierarchical features learned for faces, cars and elephants, from "Unsupervised Learning of Hierarchical Representations with Convolutional Deep Belief Networks".)

Deep Learning Characteristics
Learns from raw data (feature learning).
Very deep architecture (10-100 layers).
Huge training sets (the big data era), e.g. ImageNet: more than 1 million colour images in 1000 classes, of different sizes (482x415 on average).
Huge computational complexity (distributed systems, GPUs).
(Chart: image recognition performance, traditional methods vs. deep learning.)
Example applications of deep learning:
Recognition of simple drawings: https://quickdraw.withgoogle.com/
Object recognition
Image caption generation
Scene parsing
Auto-pilot (self-driving) cars
Automatic colorization
Automatic machine translation
Creating images from sketches
Game playing with reinforcement learning (e.g. Breakout, Mario)

Concluding Remarks
Deep learning advantages: automatic feature extraction; independence from the initial weight assignment; dimensionality reduction.
Disadvantages: it cannot explain why it works well or badly; high time complexity; big data requirement.