STK-IN4300 Statistical Learning Methods in Data Science
Lecture 10: Boosting
Riccardo De Bin

Outline of the lecture:
- AdaBoost: introduction; algorithm.
- Statistical Boosting: boosting as forward stagewise additive modelling; why exponential loss?; gradient boosting.

AdaBoost: introduction

L. Breiman: "[Boosting is] the best off-the-shelf classifier in the world."

Boosting was:
- originally developed for classification, as a pure machine-learning black box;
- translated into the statistical world (Friedman et al., 2000);
- extended to virtually every statistical problem (Mayr et al., 2014): regression, survival analysis, ...;
- made interpretable, thanks to the statistical view;
- extended to work in high-dimensional settings (Bühlmann, 2006).

Starting challenge: "Can [a committee of blockheads] somehow arrive at highly reasoned decisions, despite the weak judgement of the individual members?" (Schapire & Freund, 2014)

Goal: create a good classifier by combining several weak classifiers; in classification, a weak classifier is a classifier able to produce results only slightly better than a random guess.

Idea: apply a weak classifier repeatedly (iteratively) to modifications of the data, at each iteration giving more weight to the misclassified observations.
AdaBoost: algorithm

Consider a two-class classification problem, with $y_i \in \{-1, 1\}$ and $x_i \in \mathbb{R}^p$.

AdaBoost algorithm:
1. initialize the weights, $w^{[0]} = (1/N, \dots, 1/N)$;
2. for $m$ from 1 to $m_{stop}$:
   (a) fit the weak estimator $G(\cdot)$ to the weighted data;
   (b) compute the weighted in-sample misclassification rate, $err^{[m]} = \sum_{i=1}^{N} w_i^{[m-1]} \mathbb{1}\big(y_i \neq \hat{G}^{[m]}(x_i)\big)$;
   (c) compute the voting weights, $\alpha^{[m]} = \log\big((1 - err^{[m]}) / err^{[m]}\big)$;
   (d) update the weights, $w_i^{[m]} = w_i^{[m-1]} \exp\big\{\alpha^{[m]} \mathbb{1}\big(y_i \neq \hat{G}^{[m]}(x_i)\big)\big\}$, and normalize them, $w_i \leftarrow w_i / \sum_{i=1}^{N} w_i$;
3. compute the final result,
   $\hat{G}_{AdaBoost}(x) = \mathrm{sign}\Big( \sum_{m=1}^{m_{stop}} \alpha^{[m]} \hat{G}^{[m]}(x) \Big)$.

(In the worked example below the reported voting weights are the halved version, $0.5 \log\big((1 - err^{[m]})/err^{[m]}\big)$, consistent with the forward stagewise view later, where $\beta_m = \alpha_m / 2$; after normalization the two conventions produce identical weight updates.)

First iteration: apply the classifier $G(\cdot)$ to the observations with equal weights $w_i$ [weights shown in a figure]:
- observations 1, 2 and 3 are misclassified $\Rightarrow err^{[1]} = 0.3$;
- compute $\alpha^{[1]} = 0.5 \log\big((1 - err^{[1]})/err^{[1]}\big) \approx 0.42$;
- set $w_i \leftarrow w_i \exp\big\{\alpha^{[1]} \mathbb{1}\big(y_i \neq \hat{G}^{[1]}(x_i)\big)\big\}$ [updated weights shown in a figure].
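As a quick numeric check of this first iteration, here is a sketch in Python, assuming the standard ten-observation toy example (the individual weights themselves were shown only in the slide figures):

```python
import numpy as np

# First iteration of the toy example: ten observations with equal weights,
# three of them (obs. 1, 2, 3) misclassified by the first weak classifier.
w = np.full(10, 1 / 10)
miss = np.array([True] * 3 + [False] * 7)   # 1(y_i != G-hat^[1](x_i))

err1 = w[miss].sum()                        # step 2(b): weighted error = 0.3
alpha1 = np.log((1 - err1) / err1)          # step 2(c): voting weight
print(round(0.5 * alpha1, 2))               # 0.42, the halved value on the slide

w = w * np.exp(alpha1 * miss)               # step 2(d): up-weight the mistakes
w = w / w.sum()                             # ... and normalize
print(w.round(4))                           # misclassified: 1/6, others: 1/14
# If three of the correctly classified points are missed next, the weighted
# error becomes 3/14, roughly 0.21, matching the second iteration below.
```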
Second iteration: apply the classifier $G(\cdot)$ to the re-weighted observations ($w_i / \sum_i w_i$) [weights shown in a figure]:
- observations 6, 7 and 9 are misclassified $\Rightarrow err^{[2]} \approx 0.21$;
- compute $\alpha^{[2]} = 0.5 \log\big((1 - err^{[2]})/err^{[2]}\big) \approx 0.65$;
- set $w_i \leftarrow w_i \exp\big\{\alpha^{[2]} \mathbb{1}\big(y_i \neq \hat{G}^{[2]}(x_i)\big)\big\}$ [updated weights shown in a figure].

Third iteration: apply the classifier $G(\cdot)$ to the re-weighted observations ($w_i / \sum_i w_i$) [weights shown in a figure]:
- observations 4, 5 and 8 are misclassified $\Rightarrow err^{[3]} \approx 0.14$;
- compute $\alpha^{[3]} = 0.5 \log\big((1 - err^{[3]})/err^{[3]}\big) \approx 0.92$;
- set $w_i \leftarrow w_i \exp\big\{\alpha^{[3]} \mathbb{1}\big(y_i \neq \hat{G}^{[3]}(x_i)\big)\big\}$ [updated weights shown in a figure].
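Putting the three iterations together, here is a minimal NumPy sketch of the whole algorithm; the decision-stump weak learner, the toy data and all names are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: threshold a single feature, predict +/-1."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, (j, thr, sign))
    return best[1]

def stump_predict(stump, X):
    j, thr, sign = stump
    return np.where(X[:, j] <= thr, sign, -sign)

def adaboost(X, y, m_stop=20):
    w = np.full(len(y), 1 / len(y))                  # 1. equal starting weights
    ensemble = []
    for _ in range(m_stop):                          # 2. boosting iterations
        stump = fit_stump(X, y, w)                   # (a) weak learner on weighted data
        miss = stump_predict(stump, X) != y
        err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)   # (b) weighted error
        alpha = np.log((1 - err) / err)              # (c) voting weight
        w = w * np.exp(alpha * miss)                 # (d) up-weight mistakes ...
        w = w / w.sum()                              #     ... and normalize
        ensemble.append((alpha, stump))
    return ensemble

def adaboost_predict(ensemble, X):                   # 3. sign of the weighted vote
    votes = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(votes)

rng = np.random.default_rng(0)                       # toy data with y_i in {-1, 1}
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
ensemble = adaboost(X, y)
print((adaboost_predict(ensemble, X) == y).mean())   # in-sample accuracy
```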
Statistical Boosting: boosting as a forward stagewise additive modelling

The statistical view of boosting is based on the concept of forward stagewise additive modelling, which:
- minimizes a loss function $L(y_i, f(x_i))$;
- uses an additive model, $f(x) = \sum_{m=1}^{M} \beta_m b(x; \gamma_m)$ (see notes), where $b(x; \gamma_m)$ is the basis, or weak learner;
- at each step solves $(\beta_m, \gamma_m) = \arg\min_{\beta, \gamma} \sum_{i=1}^{N} L\big(y_i, f_{m-1}(x_i) + \beta b(x_i; \gamma)\big)$;
- updates the estimate as $f_m(x) = f_{m-1}(x) + \beta_m b(x; \gamma_m)$.

E.g., in AdaBoost, $\beta_m = \alpha_m / 2$ and $b(x; \gamma_m) = G(x)$. For squared-error loss, the stagewise step has a particularly simple form, sketched below.
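As a concrete special case (a standard result, stated here for intuition rather than taken from the slides): with squared-error loss, the stagewise minimization reduces to fitting the weak learner to the current residuals,

$$
L\big(y_i,\, f_{m-1}(x_i) + \beta\, b(x_i;\gamma)\big)
= \big(\underbrace{y_i - f_{m-1}(x_i)}_{r_{im}} \;-\; \beta\, b(x_i;\gamma)\big)^2 ,
$$

so each iteration simply regresses the residuals $r_{im}$ on the weak learner; this is the mechanism that gradient boosting (below) generalizes to arbitrary loss functions.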
Statistical Boosting: why exponential loss?

The statistical view of boosting allows one to interpret the results by studying the properties of the exponential loss. It is easy to show (a derivation is sketched at the end of this section) that

$f^*(x) = \arg\min_{f(x)} E_{Y \mid X = x}\big[e^{-Y f(x)}\big] = \frac{1}{2} \log \frac{\Pr(Y = 1 \mid x)}{\Pr(Y = -1 \mid x)}$,

i.e., $\Pr(Y = 1 \mid x) = \frac{1}{1 + e^{-2 f^*(x)}}$; therefore AdaBoost estimates one half of the log-odds of $\Pr(Y = 1 \mid x)$.

Note: the exponential loss is not the only possible loss function. An alternative is the deviance (cross-entropy), i.e. the binomial negative log-likelihood $-\ell(\pi_x)$, with $\ell(\pi_x) = y' \log(\pi_x) + (1 - y') \log(1 - \pi_x)$, where:
- $y' = (y + 1)/2$, i.e., $y' \in \{0, 1\}$;
- $\pi_x = \Pr(Y = 1 \mid X = x) = \frac{e^{f(x)}}{e^{-f(x)} + e^{f(x)}} = \frac{1}{1 + e^{-2 f(x)}}$;
- equivalently, $-\ell(\pi_x) = \log\big(1 + e^{-2 y f(x)}\big)$.

$E[-\ell(\pi_x)]$ and $E[e^{-Y f(x)}]$ have the same population minimizer.

Statistical Boosting: gradient boosting

We saw that AdaBoost iteratively minimizes a loss function. In general, consider
$L(f) = \sum_{i=1}^{N} L(y_i, f(x_i))$ and $\hat{f} = \arg\min_f L(f)$;
the minimization problem can be solved by considering
$f_{m_{stop}} = \sum_{m=0}^{m_{stop}} h_m$,
where:
- $f_0 = h_0$ is the initial guess;
- each $f_m$ improves the previous $f_{m-1}$ through $h_m$;
- $h_m$ is called the step.
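For completeness, here is the short derivation behind the log-odds claim in "why exponential loss?" above (a sketch of the standard argument, conditioning on $X = x$ and setting the derivative to zero):

$$
\begin{aligned}
E_{Y \mid X=x}\!\left[e^{-Y f(x)}\right]
&= \Pr(Y=1 \mid x)\, e^{-f(x)} + \Pr(Y=-1 \mid x)\, e^{f(x)};\\[4pt]
\frac{\partial}{\partial f(x)}\, E_{Y \mid X=x}\!\left[e^{-Y f(x)}\right]
&= -\Pr(Y=1 \mid x)\, e^{-f(x)} + \Pr(Y=-1 \mid x)\, e^{f(x)} = 0\\[4pt]
\Longrightarrow\; e^{2 f^*(x)} = \frac{\Pr(Y=1 \mid x)}{\Pr(Y=-1 \mid x)}
&\;\Longrightarrow\;
f^*(x) = \frac{1}{2} \log \frac{\Pr(Y=1 \mid x)}{\Pr(Y=-1 \mid x)}.
\end{aligned}
$$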
Statistical Boosting: steepest descent

The steepest descent method chooses
$h_m = -\rho_m g_m$,
where:
- $g_m \in \mathbb{R}^N$ is the gradient of $L(f)$ evaluated at $f = f_{m-1}$, and represents the direction for the minimization,
  $g_{im} = \frac{\partial L(y_i, f(x_i))}{\partial f(x_i)} \Big|_{f(x_i) = f_{m-1}(x_i)}$;
- $\rho_m$ is a scalar which tells how far to move along that direction,
  $\rho_m = \arg\min_\rho L(f_{m-1} - \rho g_m)$.

Statistical Boosting: example

Consider the linear Gaussian regression case:
- $L(y, f(X)) = \frac{1}{2} \sum_{i=1}^{N} (y_i - f(x_i))^2$;
- $f(X) = X^T \beta$;
- initial guess: $\beta = 0$.

Therefore:
- $g = \frac{\partial}{\partial f(x_i)} \frac{1}{2} \sum_{i=1}^{N} (y_i - f(x_i))^2 = -(y - X^T \beta)$;
- $-g_1 = (y - X^T \beta)\big|_{\beta = 0} = y$;
- $\rho_1 = \arg\min_\rho \frac{1}{2} (y - \rho y)^2$: since $f$ is constrained to the linear form, the best least-squares step is the projection $X (X^T X)^{-1} X^T y$, i.e. the full OLS fit is reached in a single iteration.

Note: overfitting!

Statistical Boosting: shrinkage

To regularize the procedure, a shrinkage factor is introduced,
$f_m(x) = f_{m-1}(x) + \nu h_m$, with $0 < \nu < 1$.

Moreover, $h_m$ can be a general weak learner (base learner):
- stump;
- spline;
- ...

The idea is to fit the base learner to the negative gradient, to iteratively minimize the loss function.

Statistical Boosting: gradient boosting

Gradient boosting algorithm (a runnable sketch follows this page):
1. initialize the estimate, e.g., $f_0(x) = 0$;
2. for $m = 1, \dots, m_{stop}$:
   2.1 compute the negative gradient vector, $u_m = -\frac{\partial L(y, f(x))}{\partial f(x)} \Big|_{f(x) = \hat{f}_{m-1}(x)}$;
   2.2 fit the base learner to the negative gradient vector, $h_m(u_m, x)$;
   2.3 update the estimate, $f_m(x) = f_{m-1}(x) + \nu h_m(u_m, x)$;
3. final estimate, $\hat{f}_{m_{stop}}(x) = \sum_{m=1}^{m_{stop}} \nu h_m(u_m, x)$.

Note: $u_m = -g_m$; $\hat{f}_{m_{stop}}(x)$ is a GAM.
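Below is the runnable sketch referenced above, for squared-error loss (so the negative gradient $u_m$ is simply the residual vector $y - \hat{f}_{m-1}$) with a one-covariate regression stump as base learner; the data, function names and stump fitter are illustrative assumptions, not from the slides:

```python
import numpy as np

def fit_stump(x, u):
    """Least-squares regression stump: piecewise-constant split on x."""
    order = np.argsort(x)
    xs, us = x[order], u[order]
    best = (np.inf, None)
    for k in range(1, len(xs)):
        left, right = us[:k].mean(), us[k:].mean()
        rss = ((us[:k] - left) ** 2).sum() + ((us[k:] - right) ** 2).sum()
        if rss < best[0]:
            best = (rss, (xs[k - 1], left, right))
    return best[1]

def gradient_boost(x, y, m_stop=100, nu=0.1):
    f = np.zeros_like(y)                       # 1. initial estimate f_0 = 0
    stumps = []
    for _ in range(m_stop):                    # 2. boosting iterations
        u = y - f                              # 2.1 negative gradient (residuals)
        split, left, right = fit_stump(x, u)   # 2.2 fit base learner to u_m
        h = np.where(x <= split, left, right)
        f = f + nu * h                         # 2.3 shrunken update
        stumps.append((split, left, right))
    return f, stumps                           # 3. f is the final estimate

rng = np.random.default_rng(1)                 # toy nonlinear regression data
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=200)
f_hat, _ = gradient_boost(x, y)
print(np.mean((y - f_hat) ** 2))               # training loss shrinks with m_stop
```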
Statistical Boosting: example

Consider again the linear Gaussian regression case:
- $L(y, f(X)) = \frac{1}{2} \sum_{i=1}^{N} (y_i - f(x_i, \beta))^2$, with $f(x_i, \beta) = x_i \beta$;
- base learner $h(y, X) = X (X^T X)^{-1} X^T y$.

Therefore:
- initialize the estimate, e.g., $\hat{f}_0(X, \beta) = 0$;
- $m = 1$:
  - $u_1 = -\frac{\partial L(y, f(X, \beta))}{\partial f(X, \beta)} \big|_{f(X,\beta) = \hat{f}_0(X,\beta)} = (y - 0) = y$;
  - $h_1(u_1, X) = X (X^T X)^{-1} X^T y$;
  - $\hat{f}_1(X) = 0 + \nu X (X^T X)^{-1} X^T y$;
- $m = 2$:
  - $u_2 = -\frac{\partial L(y, f(X, \beta))}{\partial f(X, \beta)} \big|_{f(X,\beta) = \hat{f}_1(X,\beta)} = \big(y - X^T (\nu \hat{\beta})\big)$;
  - $h_2(u_2, X) = X (X^T X)^{-1} X^T \big(y - X^T (\nu \hat{\beta})\big)$;
  - update the estimate, $\hat{f}_2(X, \beta) = \nu X (X^T X)^{-1} X^T y + \nu X (X^T X)^{-1} X^T \big(y - X^T (\nu \hat{\beta})\big)$.

Statistical Boosting: remarks

Note that:
- we do not need linear effects: $h(y, X)$ can be, e.g., a spline;
- using $f(X, \beta) = X^T \beta$, it makes more sense to work directly with $\beta$:
  1. initialize the estimate, e.g., $\hat{\beta}_0 = 0$;
  2. for $m = 1, \dots, m_{stop}$:
     2.1 compute the negative gradient vector, $u_m = -\frac{\partial L(y, f(X, \beta))}{\partial f(X, \beta)} \big|_{\beta = \hat{\beta}_{m-1}}$;
     2.2 fit the base learner to the negative gradient vector, $b_m(u_m, X) = (X^T X)^{-1} X^T u_m$;
     2.3 update the estimate, $\hat{\beta}_m = \hat{\beta}_{m-1} + \nu b_m(u_m, X)$;
  3. final estimate, $\hat{f}_{m_{stop}}(X) = X^T \hat{\beta}_{m_{stop}}$.

Further remarks:
- for $m_{stop} \to \infty$, $\hat{\beta}_{m_{stop}} \to \hat{\beta}_{OLS}$ (a numeric illustration follows the references);
- the shrinkage is controlled by both $m_{stop}$ and $\nu$;
- usually $\nu$ is fixed, e.g. $\nu = 0.1$, and $m_{stop}$ is computed by cross-validation: it controls the model complexity, and we need an early stop to avoid overfitting; if $m_{stop}$ is too small → too much bias; if it is too large → too much variance;
- the predictors must be centred, $E[X_j] = 0$.

References

Bühlmann, P. (2006). Boosting for high-dimensional linear models. The Annals of Statistics 34.
Friedman, J., Hastie, T. & Tibshirani, R. (2000). Additive logistic regression: a statistical view of boosting. The Annals of Statistics 28, 337-407.
Mayr, A., Binder, H., Gefeller, O. & Schmid, M. (2014). Extending statistical boosting. Methods of Information in Medicine 53.
Schapire, R. E. & Freund, Y. (2014). Boosting: Foundations and Algorithms. MIT Press, Cambridge.
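As a closing numeric illustration of the first remark above, that $\hat{\beta}_{m_{stop}} \to \hat{\beta}_{OLS}$, here is a small sketch using the standard $n \times p$ design-matrix convention; the toy data and all names are illustrative assumptions:

```python
import numpy as np

# L2-boosting on beta with the full least-squares base learner
# b_m = (X^T X)^{-1} X^T u_m; with nu < 1 and m_stop large, beta_m
# converges to the OLS estimate.
rng = np.random.default_rng(2)
n, p = 100, 3
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)                      # centred predictors, E[X_j] = 0
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # the OLS solution

nu, m_stop = 0.1, 200
beta = np.zeros(p)                          # 1. initialize beta_0 = 0
for _ in range(m_stop):                     # 2. boosting iterations
    u = y - X @ beta                        # 2.1 negative gradient (residuals)
    b = np.linalg.solve(X.T @ X, X.T @ u)   # 2.2 base learner b_m(u_m, X)
    beta = beta + nu * b                    # 2.3 shrunken update
print(np.abs(beta - beta_ols).max())        # ~0: beta_m has reached OLS
```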