CSC321 Lecture 9: Generalization


CSC321 Lecture 9: Generalization Roger Grosse

Overview We've focused so far on how to optimize neural nets, i.e. how to get them to make good predictions on the training set. How do we make sure they generalize to data they haven't seen before? Even though the topic is well studied, it's still poorly understood.

Generalization Recall: overfitting and underfitting. [Figure: fits of degree M = 1, M = 3, and M = 9 to the same dataset, plotting t against x.] We'd like to minimize the generalization error, i.e. the error on novel examples.

Generalization Training and test error as a function of the amount of data and of model capacity. Note: capacity isn't a formal term, or a quantity we can measure.

Bias/Variance Decomposition There's an interesting decomposition of the generalization error in the particular case of squared error loss. It's often convenient to suppose our training and test data are sampled from a data generating distribution p_D(x, t). Suppose we're given an input x. We'd like to minimize the expected loss E_{p_D}[(y - t)^2 | x]. The best possible prediction we can make is the conditional expectation y_* = E_{p_D}[t | x]. Proof:
E[(y - t)^2 | x] = E[y^2 - 2yt + t^2 | x]
                 = y^2 - 2y E[t | x] + E[t^2 | x]
                 = y^2 - 2y E[t | x] + E[t | x]^2 + Var[t | x]
                 = (y - y_*)^2 + Var[t | x]
The term Var[t | x], called the Bayes error, is the best risk we can hope to achieve.

Bias/Variance Decomposition Now suppose we sample a training set from the data generating distribution, train a model on it, and use that model to make a prediction y on test example x. Here, y is a random variable, and we get a new value each time we sample a new training set. We'd like to minimize the risk, or expected loss E[(y - t)^2]. We can decompose this into bias, variance, and Bayes error. (We suppress the conditioning on x for clarity.)
E[(y - t)^2] = E[(y - y_*)^2] + Var(t)
             = E[y_*^2 - 2 y_* y + y^2] + Var(t)
             = y_*^2 - 2 y_* E[y] + E[y^2] + Var(t)
             = y_*^2 - 2 y_* E[y] + E[y]^2 + Var(y) + Var(t)
             = (y_* - E[y])^2 + Var(y) + Var(t)
The three terms are the bias (y_* - E[y])^2, the variance Var(y), and the Bayes error Var(t).
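As a concrete illustration (not from the lecture), here is a minimal NumPy sketch that estimates the bias and variance terms by repeatedly sampling training sets and fitting polynomials of different degrees; the data generating process (a sine curve plus Gaussian noise) and all names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_set(n=20, noise_std=0.3):
    """Sample (x, t) pairs from an assumed data generating distribution."""
    x = rng.uniform(0, 1, size=n)
    t = np.sin(2 * np.pi * x) + rng.normal(0, noise_std, size=n)
    return x, t

x_test = 0.5                         # a single test input x
y_star = np.sin(2 * np.pi * x_test)  # E[t | x], the best possible prediction
bayes_error = 0.3 ** 2               # Var[t | x]

for degree in [1, 3, 9]:
    preds = []
    for _ in range(500):             # resample training sets, retrain, predict on x_test
        x, t = sample_training_set()
        coeffs = np.polyfit(x, t, deg=degree)
        preds.append(np.polyval(coeffs, x_test))
    preds = np.array(preds)
    bias2 = (y_star - preds.mean()) ** 2   # (y_* - E[y])^2
    variance = preds.var()                 # Var(y)
    print(f"degree {degree}: bias^2={bias2:.3f}  variance={variance:.3f}  "
          f"bayes={bayes_error:.3f}  sum={bias2 + variance + bayes_error:.3f}")
```

Low-degree fits show larger bias; high-degree fits show larger variance, matching the decomposition above.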

Bias/Variance Decomposition We can visualize this decomposition in prediction space, where the axes correspond to predictions on the test examples. If we have an overly simple model, it might have high bias (because it's too simplistic to capture the structure in the data) and low variance (because there's enough data to estimate the parameters, so you get the same results for any sample of the training data).

Bias/Variance Decomposition If you have an overly complex model, it might have low bias (since it learns all the relevant structure) and high variance (it fits the idiosyncrasies of the data you happened to sample). The bias/variance decomposition holds only for squared error, but it provides a useful intuition even for other loss functions.

Our Bag of Tricks How can we train a model that's complex enough to model the structure in the data, but prevent it from overfitting? I.e., how do we achieve low bias and low variance? Our bag of tricks: data augmentation, reducing the capacity (e.g. the number of parameters), weight decay, early stopping, ensembles (combining the predictions of different models), and stochastic regularization (e.g. dropout, batch normalization). The best-performing models on most benchmarks use some or all of these tricks.

Data Augmentation The best way to improve generalization is to collect more data! Suppose we already have all the data we're willing to collect. We can augment the training data by transforming the examples; this is called data augmentation. Examples (for visual recognition): translation, horizontal or vertical flip, rotation, smooth warping, noise (e.g. flipping random pixels). Only warp the training examples, not the test examples. The choice of transformations depends on the task (e.g. horizontal flips make sense for object recognition, but not for handwritten digit recognition).
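As an illustration, here is a minimal NumPy sketch of some of the transformations listed above applied to a batch of images; the function name and the specific parameter ranges are assumptions, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, max_shift=2, flip_prob=0.5, pixel_flip_prob=0.01):
    """Randomly transform a batch of images of shape (N, H, W), values in [0, 1].

    Applies a horizontal flip, a small translation, and random pixel flips.
    Apply only to training examples, never to test examples.
    """
    out = images.copy()
    for i in range(len(out)):
        if rng.random() < flip_prob:                          # horizontal flip
            out[i] = out[i][:, ::-1]
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[i] = np.roll(out[i], shift=(dy, dx), axis=(0, 1))  # translation
        mask = rng.random(out[i].shape) < pixel_flip_prob      # flip random pixels
        out[i][mask] = 1.0 - out[i][mask]
    return out

# Usage: augmented = augment(train_images)   # train_images: float array, shape (N, H, W)
```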

Reducing the Capacity Roughly speaking, models with more parameters have more capacity. Therefore, we can try to reduce the number of parameters. One approach: reduce the number of layers or the number of parameters per layer. Adding a linear bottleneck layer is another way to reduce the capacity. [Figure: two networks, one without and one with a linear bottleneck layer.] The first network is strictly more expressive than the second (i.e. it can represent a strictly larger class of functions). (Why?) Remember how linear layers don't make a network more expressive? They might still improve generalization.
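To make the capacity reduction concrete, here is a small sketch (the layer sizes and bottleneck width are chosen arbitrarily for illustration) comparing the number of weight-matrix parameters with and without a linear bottleneck between two layers.

```python
# Weight-matrix parameter counts for a 512-unit layer feeding a 512-unit layer.
d_in, d_out, k = 512, 512, 32     # k is the assumed bottleneck width

direct = d_in * d_out             # one 512 x 512 weight matrix
bottleneck = d_in * k + k * d_out # 512 x 32 followed by 32 x 512

print(direct)      # 262144
print(bottleneck)  # 32768, an 8x reduction in parameters
```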

Weight Decay We've already seen that we can regularize a network by penalizing large weight values, thereby encouraging the weights to be small in magnitude:
E_reg = E + λR = E + (λ/2) Σ_j w_j^2
We saw that the gradient descent update can be interpreted as weight decay:
w ← w - α (∂E/∂w + λ ∂R/∂w)
  = w - α (∂E/∂w + λw)
  = (1 - αλ) w - α ∂E/∂w
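As an illustration of the update above, here is a minimal NumPy sketch of one gradient step with an L2 penalty, written both ways to show that they coincide; the names and numbers are assumptions made for the example.

```python
import numpy as np

alpha, lam = 0.1, 0.01                # learning rate and weight cost
w = np.array([0.5, -2.0, 1.5])        # current weights
grad_E = np.array([0.2, 0.1, -0.3])   # dE/dw from backprop (made up for the example)

# Gradient step on the regularized cost E_reg = E + (lam/2) * sum(w**2)
w_reg_step = w - alpha * (grad_E + lam * w)

# Equivalent "weight decay" form: shrink the weights, then take an ordinary gradient step
w_decay_step = (1 - alpha * lam) * w - alpha * grad_E

print(np.allclose(w_reg_step, w_decay_step))  # True
```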

Weight Decay Why we want weights to be small:
y = 0.1x^5 + 0.2x^4 + 0.75x^3 - x^2 - 2x + 2
y = 7.2x^5 + 10.4x^4 + 24.5x^3 - 37.9x^2 - 3.6x + 12
The red polynomial overfits. Notice it has really large coefficients.

Weight Decay Why we want weights to be small: Suppose inputs x_1 and x_2 are nearly identical. The following two networks make nearly the same predictions: [Figure: two networks with different weights on x_1 and x_2 that compute nearly the same function.] But the second network might make weird predictions if the test distribution is slightly different (e.g. x_1 and x_2 match less closely).

Weight Decay The geometric picture:

Weight Decay There are other kinds of regularizers which encourage weights to be small, e.g. the sum of the absolute values. These alternative penalties are commonly used in other areas of machine learning, but less commonly for neural nets. Regularizers differ in how strongly they prioritize making weights exactly zero vs. merely not very large. (Figures: Hinton, Coursera lectures; Bishop, Pattern Recognition and Machine Learning)

Early Stopping We don't always want to find a global (or even local) optimum of our cost function. It may be advantageous to stop training early. Roughly speaking, training for longer increases the capacity of the model, which might make us overfit. Early stopping: monitor performance on a validation set, and stop training when the validation error starts going up.

Early Stopping A slight catch: the validation error fluctuates because of stochasticity in the updates. Determining when the validation error has actually leveled off can be tricky.
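Here is a minimal sketch of an early stopping loop with a patience counter, one common way of handling the fluctuations just mentioned; `train_one_epoch` and `validation_error` are hypothetical helpers assumed for the example, not part of the lecture.

```python
import copy

def train_with_early_stopping(model, train_data, val_data, max_epochs=200, patience=10):
    """Stop training once validation error has not improved for `patience` epochs."""
    best_err = float("inf")
    best_model = copy.deepcopy(model)
    epochs_since_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model, train_data)           # assumed: one pass of SGD updates
        val_err = validation_error(model, val_data)  # assumed: error on the validation set

        if val_err < best_err:
            best_err = val_err
            best_model = copy.deepcopy(model)         # keep the best weights seen so far
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break                                 # validation error has leveled off

    return best_model
```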

Early Stopping Why does early stopping work? Weights start out small, so it takes time for them to grow large. Therefore, it has a similar effect to weight decay. If you are using sigmoidal units, and the weights start out small, then the inputs to the activation functions take only a small range of values. Therefore, the network starts out approximately linear, and gradually becomes more nonlinear (and hence more powerful).

Ensembles If a loss function is convex (with respect to the predictions), you have a bunch of predictions, and you don't know which one is best, you are always better off averaging them:
L(λ_1 y_1 + ... + λ_N y_N, t) ≤ λ_1 L(y_1, t) + ... + λ_N L(y_N, t)   for λ_i ≥ 0, Σ_i λ_i = 1.
This is true no matter where the predictions came from (a trained neural net, random guessing, etc.). Note that only the loss function needs to be convex, not the optimization problem. Examples: squared error, cross-entropy, hinge loss. If you have multiple candidate models and don't know which one is the best, maybe you should just average their predictions on the test data. The set of models is called an ensemble. Averaging often helps even when the loss is nonconvex (e.g. 0-1 loss).
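As a quick numeric check of the inequality (an illustration, not from the lecture), the following sketch compares the squared-error loss of an averaged prediction against the average of the individual losses; the predictions are drawn at random for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

t = 1.0                                            # the target
preds = rng.normal(loc=1.0, scale=0.5, size=5)     # predictions from 5 hypothetical models
weights = np.ones(5) / 5                           # lambda_i >= 0, summing to 1

def squared_error(y, t):
    return (y - t) ** 2

loss_of_average = squared_error(np.dot(weights, preds), t)
average_of_losses = np.dot(weights, squared_error(preds, t))

print(loss_of_average <= average_of_losses)  # True, by convexity (Jensen's inequality)
```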

Ensembles Some examples of ensembles: Train networks starting from different random initializations (but this might not give enough diversity to be useful). Train networks on different subsets of the training data; this is called bagging. Train networks with different architectures or hyperparameters, or even use other algorithms which aren't neural nets. Ensembles can improve generalization quite a bit, and the winning systems for most machine learning benchmarks are ensembles. But they are expensive, and the predictions can be hard to interpret.

Stochastic Regularization For a network to overfit, its computations need to be really precise. This suggests regularizing them by injecting noise into the computations, a strategy known as stochastic regularization. Dropout is a stochastic regularizer which randomly deactivates a subset of the units (i.e. sets their activations to zero):
h_i = φ(z_i) with probability 1 - ρ, and h_i = 0 with probability ρ,
where ρ is a hyperparameter. Equivalently, h_i = m_i φ(z_i), where m_i is a Bernoulli random variable, independent for each hidden unit. Backprop rule: z̄_i = h̄_i m_i φ'(z_i), where the bars denote error signals.
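Here is a minimal NumPy sketch of the dropout forward pass and the corresponding backprop rule for one hidden layer, following the equations above; the choice of ReLU for φ and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(z, rho=0.5):
    """Apply phi (here ReLU) and randomly zero units with probability rho."""
    m = (rng.random(z.shape) >= rho).astype(z.dtype)   # Bernoulli mask, P(m_i = 1) = 1 - rho
    h = m * np.maximum(z, 0.0)                         # h_i = m_i * phi(z_i)
    return h, m

def dropout_backward(h_bar, z, m):
    """Backprop rule: z_bar_i = h_bar_i * m_i * phi'(z_i)."""
    phi_prime = (z > 0).astype(z.dtype)                # derivative of ReLU
    return h_bar * m * phi_prime

# Usage on a batch of pre-activations
z = rng.normal(size=(4, 10))
h, m = dropout_forward(z, rho=0.5)
z_bar = dropout_backward(np.ones_like(h), z, m)
```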

Stochastic Regularization Dropout can be seen as training an ensemble of 2^D different architectures with shared weights (where D is the number of units). (Figure: Goodfellow et al., Deep Learning)

Stochastic Regularization Dropout can help performance quite a bit, even if you're already using weight decay. Lots of other stochastic regularizers have been proposed: DropConnect drops connections instead of activations. Batch normalization (mentioned last week for its optimization benefits) also introduces stochasticity, thereby acting as a regularizer. The stochasticity in SGD updates has been observed to act as a regularizer, helping generalization; increasing the mini-batch size may improve training error at the expense of test error!

Our Bag of Tricks Techniques we just covered: data augmentation, reducing the capacity (e.g. the number of parameters), weight decay, early stopping, ensembles (combining the predictions of different models), and stochastic regularization (e.g. dropout, batch normalization). The best-performing models on most benchmarks use some or all of these tricks. Many of these techniques involve hyperparameters. How do we choose them?

Choosing Hyperparameters Many hyperparameters are relevant to generalization: the number of layers, the number of units per layer, the L2 weight cost, the dropout probability. Ideally, we want to choose the hyperparameters which perform best on a validation set. Ways to approximate this: manual search ("graduate student descent"), grid search, random search, Bayesian optimization (covered later in the course), or using Autograd to differentiate through the whole training procedure and do gradient descent on the hyperparameters (only practical for very small problems).

Choosing Hyperparameters Random search can be more efficient than grid search when some of the hyperparameters are unimportant. But grid search can be more reproducible.
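As an illustration of random search (a sketch under assumed ranges, with a hypothetical `validation_error` helper standing in for training a network with a given configuration), each trial samples every hyperparameter independently rather than stepping through a fixed grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hyperparameters():
    """Draw one random configuration; the ranges here are assumptions for the example."""
    return {
        "num_layers": int(rng.integers(1, 5)),
        "units_per_layer": int(rng.integers(32, 513)),
        "weight_cost": 10 ** rng.uniform(-6, -2),   # log-uniform over the L2 penalty
        "dropout_prob": rng.uniform(0.0, 0.7),
    }

best_config, best_err = None, float("inf")
for trial in range(50):
    config = sample_hyperparameters()
    err = validation_error(config)   # hypothetical: train with `config`, return validation error
    if err < best_err:
        best_config, best_err = config, err

print(best_config, best_err)
```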