VBM683 Machine Learning

VBM683 Machine Learning. Pinar Duygulu. Slides are adapted from Dhruv Batra.

Bias is the algorithm's tendency to consistently learn the wrong thing by not taking into account all the information in the data (underfitting). Variance is the algorithm's tendency to learn random things irrespective of the real signal by fitting highly flexible models that follow the error/noise in the data too closely (overfitting). (C) Dhruv Batra 2

We see that the linear (degree = 1) fit is an under-fit: 1) It does not take into account all the information in the data (high bias), but 2) It will not change much in the face of a new set of points from the same source (low variance). The high degree polynomial (degree = 20) fit, on the other hand, is an over-fit: 1) The curve fits the given data points very well (low bias), but 2) It will collapse in the face of subsets or new sets of points from the same source because it intimately takes all the data into account, thus losing generality (high variance). (C) Dhruv Batra 3

Bias-Variance Tradeoff. The choice of hypothesis class introduces a learning bias: a more complex class means less bias, but also more variance. (C) Dhruv Batra Slide Credit: Carlos Guestrin 4

Fighting the bias-variance tradeoff. Simple (a.k.a. weak) learners, e.g., naïve Bayes, logistic regression, decision stumps (or shallow decision trees). Good: low variance, don't usually overfit. Bad: high bias, can't solve hard learning problems. Sophisticated learners: kernel SVMs, deep neural nets, deep decision trees. Good: low bias, have the potential to learn with big data. Bad: high variance, difficult to generalize. Can we combine these properties? In general, no! But often, yes. (C) Dhruv Batra Slide Credit: Carlos Guestrin 5

Ensemble Methods Core Intuition: A combination of multiple classifiers will perform better than a single classifier. Bagging Boosting (C) Stefan Lee 6

Ensemble Methods. Instead of learning a single predictor, learn many predictors. Output class: a (weighted) combination of each predictor. With sophisticated learners, uncorrelated errors mean the expected error goes down, so on average you do better than a single classifier (Bagging). With weak learners, each one good at different parts of the input space, on average you also do better than a single classifier (Boosting). (C) Dhruv Batra 7

Ensemble Methods. Synonyms: learning a mixture of experts/committees. Boosting types: AdaBoost, L2Boost, LogitBoost, <Your-Favorite-keyword>Boost. (C) Dhruv Batra 8

Bagging (Bootstrap Aggregating / Bootstrap Averaging) Core Idea: Average multiple strong learners trained from resamples of your data to reduce variance and overfitting! (C) Stefan Lee 9

Bagging. Given: a dataset of N training examples (x_i, y_i). Sample N training points with replacement and train a predictor; repeat M times: Sample 1 gives h_1(x), ..., Sample M gives h_M(x). At test time, output the (weighted) average output of these predictors. (C) Stefan Lee 10
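
A minimal sketch of this bagging recipe, assuming NumPy arrays and scikit-learn-style learners; the helper names bag_predictors and bag_predict are illustrative, not from the slides.

```python
# Minimal bagging sketch (not the course's code): train M predictors, each on a
# bootstrap resample of the N training points, and average their outputs at test time.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

def bag_predictors(X, y, base=None, M=10, seed=0):
    """Fit M copies of `base` (default: a deep regression tree) on bootstrap samples."""
    rng = np.random.default_rng(seed)
    base = base if base is not None else DecisionTreeRegressor()
    N = len(X)
    models = []
    for _ in range(M):
        idx = rng.integers(0, N, size=N)          # sample N points with replacement
        models.append(clone(base).fit(X[idx], y[idx]))
    return models

def bag_predict(models, X):
    """Average the predictions of the bagged models (regression case)."""
    return np.mean([m.predict(X) for m in models], axis=0)
```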

Why Use Bagging. Let e_m be the error of the m-th predictor trained through bagging and e_avg be the error of the ensemble. If E[e_m] = 0 (unbiased) and E[e_m e_k] = E[e_m] E[e_k] for m ≠ k (uncorrelated), then E[e_avg^2] = (1/M) * ((1/M) Σ_m E[e_m^2]). The expected error of the average is a fraction (1/M) of the average expected error of the predictors! (C) Stefan Lee 11
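
The 1/M factor follows from a two-line calculation; a sketch of the algebra under the stated assumptions:

```latex
% e_avg is the ensemble (average) error, e_m the error of the m-th predictor
\[
e_{\mathrm{avg}} = \frac{1}{M}\sum_{m=1}^{M} e_m
\quad\Longrightarrow\quad
E\!\left[e_{\mathrm{avg}}^{2}\right]
  = \frac{1}{M^{2}}\sum_{m=1}^{M}\sum_{k=1}^{M} E[e_m e_k]
  = \frac{1}{M^{2}}\sum_{m=1}^{M} E[e_m^{2}]
  = \frac{1}{M}\left(\frac{1}{M}\sum_{m=1}^{M} E[e_m^{2}]\right),
\]
% where the cross terms vanish because E[e_m e_k] = E[e_m]\,E[e_k] = 0 for m \neq k.
```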

When To Use Bagging. In practice, completely uncorrelated predictors don't really happen, but there also won't likely be perfect correlation either, so bagging may still help! Use bagging when: you have overfit, sophisticated learners (averaging lowers variance); you have a somewhat reasonably sized dataset; and you want an extra bit of performance from your models. (C) Stefan Lee 12

Example: Decision Forests. We've seen that single decision trees can easily overfit! Train M trees on different samples of the data and call it a forest. Uncorrelated errors result in better ensemble performance; can we force this? We could assign trees random max depths, or give each tree only a random subset of the candidate splits. Some work even optimizes for low correlation as part of the objective! (C) Stefan Lee
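
For reference, scikit-learn's RandomForestClassifier exposes the two decorrelation knobs just mentioned; the synthetic dataset and parameter values below are illustrative, not from the slides.

```python
# Illustrative random forest: each tree is trained on a bootstrap sample and, at
# every split, considers only a random subset of the features (decorrelating trees).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # toy data
forest = RandomForestClassifier(
    n_estimators=100,     # M trees, each on its own bootstrap resample
    max_depth=8,          # cap tree depth so individual trees overfit less
    max_features="sqrt",  # random subset of features considered at each split
    random_state=0,
).fit(X, y)
print(forest.score(X, y))  # accuracy of the averaged (majority-vote) ensemble
```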

A Note From Statistics. Bagging is a general method to reduce/estimate the variance of an estimator. Looking at the distribution of an estimator across multiple resamples of the data can give confidence intervals and bounds on that estimator. Typically just called bootstrapping in this context. (C) Stefan Lee

Boosting Core Idea: Combine multiple weak learners to reduce error/bias by reweighting hard examples! (C) Stefan Lee 25

Some Intuition About Boosting. Consider a weak learner h(x), for example a decision stump that thresholds a single feature x_j at t: h(x) = a if x_j < t, and h(x) = b if x_j >= t. Example for binary classification: h(x) = -1 on one side of the threshold and h(x) = +1 on the other. Example for regression: h(x) = w_L^T x for x_j < t and h(x) = w_R^T x for x_j >= t. (C) Stefan Lee 26
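
As a concrete illustration of such a weak learner, here is a brute-force decision stump for labels in {-1, +1}; the helper fit_stump and its exhaustive search are my sketch, not the course's code.

```python
# Brute-force decision stump for labels y in {-1, +1}: search every feature j,
# threshold t, and orientation for the split with the lowest weighted error.
import numpy as np

def fit_stump(X, y, w):
    best = None  # (weighted error, feature index, threshold, orientation)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = np.where(X[:, j] < t, -sign, sign)  # -sign left of t, +sign right
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best
```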

Some Intuition About Boosting. A single weak learner like this will make mistakes often, but what if we combine multiple learners to combat these errors, so that our final predictor is f(x) = α_1 h_1(x) + α_2 h_2(x) + ... + α_{M-1} h_{M-1}(x) + α_M h_M(x)? This is a big optimization problem now: min over α_1, ..., α_M and h_i ∈ H of (1/N) Σ_i L(y_i, α_1 h_1(x_i) + α_2 h_2(x_i) + ... + α_M h_M(x_i)). Boosting will do this greedily, training one classifier at a time to correct the errors of the existing ensemble. (C) Stefan Lee 27

Boosting Algorithm [Schapire, 1989]. Pick a class of weak learners. You have a black box that picks the best weak learner from the (unweighted or weighted) training data. On each iteration t: compute the error based on the current ensemble; update the weight of each training example based on its error; learn a predictor h_t and a strength α_t for this predictor; update the ensemble: H_t(x) = H_{t-1}(x) + α_t h_t(x). 28

Demo: Boosting demo (Matlab) by Antonio Torralba, http://people.csail.mit.edu/torralba/shortcourserloc/boosting/boosting.html (C) Dhruv Batra 29

Boosting: Weak to Strong. [Plot: training error vs. size of the boosted ensemble (M).] As we add more boosted learners to our ensemble, the training error approaches zero (in the limit). We need to decide when to stop based on a validation set. Don't use this on already overfit strong learners; they will just become worse. (C) Stefan Lee 52
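
One common way to pick the stopping point is to track validation error over boosting rounds; a sketch using scikit-learn's AdaBoostClassifier and its staged_predict, on an illustrative synthetic dataset:

```python
# Choose the ensemble size M by tracking validation error over boosting rounds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

boost = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
val_err = [np.mean(pred != y_val) for pred in boost.staged_predict(X_val)]  # one entry per round
best_M = int(np.argmin(val_err)) + 1   # keep this many weak learners
print(best_M, val_err[best_M - 1])
```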

Boosting Algorithm [Schapire, 1989] (recap). Pick a class of weak learners. You have a black box that picks the best weak learner from the (unweighted or weighted) training data. On each iteration t: compute the error based on the current ensemble; update the weight of each training example based on its error; learn a predictor h_t and a strength α_t for this predictor; update the ensemble: H_t(x) = H_{t-1}(x) + α_t h_t(x). 53

Boosting Algorithm [Schapire, 1989]. We've assumed we have some tools to find optimal learners: either, given {(x_i, y_i)}_{i=1}^N, solve h* = argmin_h (1/N) Σ_i L(y_i, h(x_i)), or, given weighted data {(x_i, y_i, w_i)}_{i=1}^N, solve h* = argmin_h (1/N) Σ_i w_i L(y_i, h(x_i)). To train the t-th predictor, our job is to express the optimization for the new predictor, min over α_t and h_t ∈ H of (1/N) Σ_i L(y_i, f_{t-1}(x_i) + α_t h_t(x_i)), in one of these two forms. This is typically done by either changing y_i or changing w_i, depending on L. (C) Stefan Lee 54

Types of Boosting
Loss name                           Loss formula             Boosting name
Regression: squared loss            (y - f(x))^2             L2Boosting
Regression: absolute loss           |y - f(x)|               Gradient Boosting
Classification: exponential loss    exp(-y f(x))             AdaBoost
Classification: log/logistic loss   log(1 + exp(-y f(x)))    LogitBoost
(C) Dhruv Batra 55

L2 Boosting. Regression with the squared loss L(y, f(x)) = (y - f(x))^2 (L2Boosting). Algorithm on board. (C) Dhruv Batra 56
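
Since the algorithm itself is worked out on the board, here is only a minimal sketch of how L2Boosting is usually presented: each round fits a weak regressor to the current residuals and adds it with a fixed step size alpha (the fixed step is my simplification, not the board derivation).

```python
# Minimal L2Boosting sketch: with squared loss, each greedy step reduces to fitting
# the next weak learner (a regression stump here) to the residuals y - f_{t-1}(x).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def l2_boost(X, y, M=50, alpha=0.1):
    f = np.zeros(len(y))                     # current ensemble prediction f_{t-1}(x_i)
    ensemble = []
    for _ in range(M):
        residual = y - f                     # what the next weak learner should explain
        h = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        f = f + alpha * h.predict(X)         # f_t = f_{t-1} + alpha * h_t
        ensemble.append(h)
    return ensemble

def l2_predict(ensemble, X, alpha=0.1):
    return alpha * sum(h.predict(X) for h in ensemble)
```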

AdaBoost. Classification with the exponential loss L(y, f(x)) = exp(-y f(x)) (AdaBoost). Algorithm: you will derive it in HW4! (C) Dhruv Batra 57
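
For reference while working through HW4, a compact sketch of the standard AdaBoost updates (Schapire's algorithm, labels in {-1, +1}), using depth-1 trees from scikit-learn as the weak learners; the function and variable names are mine.

```python
# Standard AdaBoost sketch for labels y in {-1, +1}, using depth-1 trees (stumps)
# as weak learners. Weights on hard examples grow each round.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, M=50):
    N = len(y)
    w = np.full(N, 1.0 / N)                        # uniform example weights to start
    learners, alphas = [], []
    for _ in range(M):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-12, 1 - 1e-12)
        if err >= 0.5:                             # weak learner no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)      # strength of this learner
        w = w * np.exp(-alpha * y * pred)          # upweight mistakes, downweight correct
        w = w / np.sum(w)
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    return np.sign(sum(a * h.predict(X) for a, h in zip(alphas, learners)))
```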

What you should know. Voting/ensemble methods. Bagging: how to sample, and under what conditions error is reduced. Boosting: the general algorithm, the L2Boosting derivation, and the AdaBoost derivation (from HW4). (C) Dhruv Batra 58