Lecture 13: Ensemble Methods

Applied Multivariate Analysis, Math 570, Fall 2014
Xingye Qiao, Department of Mathematical Sciences, Binghamton University
E-mail: qiao@math.binghamton.edu

Outline
1 Bootstrap and bagging
2 Boosting
3 Random Forest

The next section is: 1 Bootstrap and bagging (Bootstrap; Subsampling; Bagging)

The next subsection is: Bootstrap

Figure: Schematic of the bootstrap: from an initial sample with N elements, N_b bootstrap samples are drawn from the complete element space. (Courtesy of www.texample.net)

Theorem (B. Efron, Ann. Statist. 1979). As N tends to infinity, the distribution of average values computed from bootstrap samples equals the distribution of average values obtained from ALL samples with N elements that can be constructed from the complete space. Thus the width of this distribution gives an evaluation of the sample quality.

Bootstrap sample. Given data $X = \{X_1, X_2, \ldots, X_n\}$, a bootstrap sample is obtained by sampling n observations with replacement from $\{X_1, X_2, \ldots, X_n\}$. Each observation in the original data may be drawn multiple times or not at all. Often one needs to know the sampling distribution of a statistic; it could be estimated if we had many samples X from the true underlying distribution. Since we do not have the true underlying distribution, we instead generate many bootstrap samples, which are draws from the so-called empirical distribution. Ideally, for large n, the empirical distribution is close to the true distribution. Next, we review some basics of the theoretical background of the bootstrap.
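To make the resampling mechanics concrete, here is a minimal sketch in Python/NumPy (my own illustration, not part of the original slides; the data array and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)            # arbitrary seed for reproducibility
x = rng.normal(size=50)                   # pretend this is the observed data X_1, ..., X_n

# One bootstrap sample: draw n indices with replacement, then index into the data.
idx = rng.integers(0, len(x), size=len(x))
x_star = x[idx]

# Some original observations appear several times, others not at all.
print("distinct observations used:", len(np.unique(idx)), "out of", len(x))
```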

$X_1, \ldots, X_n$ follow distribution F. Plug-in principle. We define the empirical distribution function $F_n$ as the one that assigns probability mass $1/n$ to each observed value $x_i$ ($i = 1, \ldots, n$) in the current data. Many parameters $\theta$ of the distribution F can be written as a function of F; hence we write $\theta = t(F)$. For example, if $\theta$ is the mean of the distribution, then $\theta = \int x f(x)\,dx = \int x\,dF(x)$. In a much broader sense, the sampling distribution of a statistic is also a parameter. The plug-in principle says we can estimate $\theta = t(F)$ by $\hat\theta = t(F_n)$. In the example above, $t(F_n) = \int x\,dF_n(x) = \sum_{i=1}^n \frac1n x_i = \frac1n \sum_{i=1}^n x_i$; the sample mean of the data is actually a plug-in estimator.

Another example is the variance of the distribution. We write $\sigma^2 = t(F) = \int x^2\,dF(x) - \left(\int x\,dF(x)\right)^2$. Replacing F by $F_n$, we have
$t(F_n) = \int x^2\,dF_n(x) - \left(\int x\,dF_n(x)\right)^2 = \frac1n\sum_{i=1}^n x_i^2 - \left(\frac1n\sum_{i=1}^n x_i\right)^2 = \frac1n\sum_{i=1}^n x_i^2 - \bar x^2 = \frac1n\sum_{i=1}^n (x_i - \bar x)^2.$
But this is the common sample variance estimator (up to a multiplicative constant $\frac{n}{n-1}$) found in any undergraduate statistics textbook.

Bootstrap estimate for the standard error of an estimator. For a general estimator $\hat\theta$ that is calculated based on the sample X, we write $\hat\theta = s(X)$. Note that it could be a plug-in estimator, but it does not have to be. We seek to estimate a parameter of the distribution of $\hat\theta = s(X)$, in particular the standard error (square root of the variance) of $\hat\theta$, i.e., $se(\hat\theta)$. Note that although $\hat\theta = s(X)$ has its own distribution function, it is still a function of X; hence $se(\hat\theta)$ is still a function of F. To stress this, we denote it $se_F(\hat\theta) = t(F)$.

Bootstrap estimate for the standard error of an estimator. A counterpart of $\hat\theta$ is $\hat\theta^*$, defined as $\hat\theta^* = s(X^*)$, where $X^*$ is a bootstrap sample of size n. Note that $X_i \sim F$ while $X_i^* \sim F_n$. According to the previous pages, the plug-in estimator for $se_F(\hat\theta) = t(F)$ is $se_{F_n}(\hat\theta^*) := t(F_n)$; that is, we replace the true unknown distribution F (where X comes from) by the empirical distribution $F_n$ (where $X^*$ comes from). To implement this, we can hold X fixed, generate a bootstrap sample $X^*$ from X, and obtain the point estimator $\hat\theta^*$. Claim: the standard deviation of $\hat\theta^*$ (conditioning on X) is the same as the plug-in $se_{F_n}(\hat\theta^*) = t(F_n)$ above. This is perhaps less obvious; see the next page.

Side note: more explanation of the last claim. The true variance of $s(X)$ is
$\int \big(s(x) - E_F[s(X)]\big)^2\, dF(x_1)\cdots dF(x_n).$
The plug-in estimator of the true variance of $s(X)$ is
$\sum_{(x_1^*, x_2^*, \ldots, x_n^*)} \left(\tfrac1n\right)^n \big[s(x_1^*, x_2^*, \ldots, x_n^*) - E_{F_n} s(X^*)\big]^2 = \sum_{(x_1^*, x_2^*, \ldots, x_n^*)} \left(\tfrac1n\right)^n \big[\hat\theta^* - E_{F_n}\hat\theta^*\big]^2.$
Note that the probability mass at each $(x_1^*, x_2^*, \ldots, x_n^*)$ is $(1/n)^n$. The last line above is precisely the variance of $\hat\theta^*$ (conditioning on X). Note that the randomness of $\hat\theta^*$ comes only from how the bootstrap sample is drawn, since we have held X, and hence $F_n$, fixed.

Unfortunately, except for simple cases such as $\hat\theta = \bar X$, the explicit formula for $t(\cdot)$ is difficult to identify. Fortunately, we can use the computer to approximate $se_{F_n}(\hat\theta^*)$.
Bootstrap Estimate for Standard Error
1 Draw B bootstrap samples $X^{*1}, \ldots, X^{*B}$, where $X^{*b}_i \sim F_n$
2 Calculate $\hat\theta^{*b} = s(X^{*b})$ for each bootstrap sample
3 Approximate $se_{F_n}(\hat\theta^*) = \sqrt{\mathrm{Var}_{F_n}(\hat\theta^*)}$ by the sample standard deviation $\widehat{se}_B$ of $\{\hat\theta^{*b},\, b = 1, \ldots, B\}$:
$\widehat{se}_B := \sqrt{\frac{1}{B-1}\sum_{b=1}^B \big(\hat\theta^{*b} - \bar\theta^*\big)^2}$, where $\bar\theta^* = \frac1B\sum_{b=1}^B \hat\theta^{*b}$.
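A minimal sketch of the B-replicate approximation above (my own code, not from the course; the helper name bootstrap_se and the choice of the median as the statistic are illustrative):

```python
import numpy as np

def bootstrap_se(x, stat, B=200, seed=0):
    """Approximate se_{F_n}(theta*) by the sample SD of B bootstrap replicates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    thetas = np.empty(B)
    for b in range(B):
        x_star = x[rng.integers(0, n, size=n)]   # bootstrap sample X*^b
        thetas[b] = stat(x_star)                 # theta*^b = s(X*^b)
    return thetas.std(ddof=1)                    # sample SD with divisor B - 1

rng = np.random.default_rng(1)
x = rng.exponential(size=100)                    # simulated data for illustration
print("bootstrap SE of the median:", bootstrap_se(x, np.median, B=200))
```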

$se_{F_n}(\hat\theta^*)$ is called the ideal bootstrap estimate of $se_F(\hat\theta)$; $\widehat{se}_B$ is the actual bootstrap estimate that we use to approximate $se_{F_n}(\hat\theta^*)$. There are two levels of approximation going on:
1 $se_{F_n}(\hat\theta^*)$ is a plug-in estimator for $se_F(\hat\theta)$.
2 $\widehat{se}_B$ is a sample standard deviation approximating the population standard deviation of $\hat\theta^*$ (that is, $se_{F_n}(\hat\theta^*)$).
Here the population is the population of $\hat\theta^* = s(X^*)$: note that even when X is given (i.e., given $F_n$), $\hat\theta^*$ is still random, with up to $\binom{2n-1}{n}$ possible outcomes. Each bootstrap point estimate $\hat\theta^{*b} = s(X^{*b})$, based on the bootstrap sample $X^{*b}$, is an observation of $\hat\theta^*$. The first approximation error vanishes as $n \to \infty$; the second vanishes as $B \to \infty$. Because we can easily draw many bootstrap samples, we can usually ignore the second approximation error.

There are $\binom{2n-1}{n}$ distinct bootstrap samples, corresponding to up to $\binom{2n-1}{n}$ distinct possible values of $\hat\theta^{*b} = s(X^{*b})$. Technically, one can calculate $E_{F_n}(s(X^*)) = \sum_l w_l\, s(X^{*l})$ and
$se_{F_n}(\hat\theta^*) = se_{F_n}(s(X^*)) = \sqrt{\sum_l w_l \big[s(X^{*l}) - E_{F_n}(s(X^*))\big]^2},$
where $w_l$ is the probability of the lth distinct bootstrap sample. But such a calculation is not realistic due to the very large number $\binom{2n-1}{n}$. Incidentally, $w_l$ is the probability mass function of a multinomial distribution.

Example: $\hat\theta = \bar X$. $se_F(\bar X) = \sqrt{\sigma^2(X_i)/n}$. By plug-in estimation of $\sigma^2(X_i)$, which is $\sigma^2(\{X_i\})$,
$se_{F_n}(\bar X^*) = \sqrt{\sigma^2(\{X_i\})/n}. \quad (*)$
If we do generate a new bootstrap sample $X^*$ and recalculate $\bar X^*$ based on $X^*$, then the standard deviation of $\bar X^*$ (under $F_n$, i.e., given X) would really be $\sqrt{\sigma^2(\{X_i\})/n}$ (see next page). This is an example where we can directly calculate the plug-in estimate for $se_F(\bar X)$. Otherwise, neat forms like (*) do not exist; in those cases, we could generate many $X^{*b}$'s and approximate the standard deviation of $\bar X^*$ by the sample standard deviation of the $\bar X^{*b}$'s.

Side note. Is $\mathrm{Var}(\bar X^* \mid X = (x_i)) = \frac{\sigma^2(\{x_i\})}{n}$? Write $\bar X^* := \frac1n (x_1 Z_1 + x_2 Z_2 + \cdots + x_n Z_n)$, where $(Z_1, \ldots, Z_n)$ follows a multinomial distribution with parameters n and $(\frac1n, \frac1n, \ldots, \frac1n)$. Note that $\mathrm{Var}(Z_j) = n \cdot \frac1n (1 - \frac1n) = 1 - \frac1n$ and $\mathrm{Cov}(Z_i, Z_j) = -n \cdot \frac1n \cdot \frac1n = -\frac1n$ for $i \neq j$. Thus, treating $(x_i)$ as fixed, we have
$\mathrm{Var}(\bar X^*) = \mathrm{Var}\Big(\frac1n \sum_j x_j Z_j\Big) = \sum_{i=1}^n \sum_{j=1}^n \frac{x_i x_j}{n^2}\,\mathrm{Cov}(Z_i, Z_j) = \sum_{i=1}^n \frac{x_i^2}{n^2} - \frac1n \sum_{i=1}^n \sum_{j=1}^n \frac{x_i x_j}{n^2} = \frac1n\Big[\frac{\sum_{i=1}^n x_i^2}{n} - \bar x^2\Big] = \frac{\sigma^2(\{x_i\})}{n}.$
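The identity above is easy to check numerically; the following small simulation (my own sketch, with arbitrary simulated data) compares the Monte Carlo variance of the bootstrap mean with $\sigma^2(\{x_i\})/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=40)
n = len(x)

# Many bootstrap means, holding the data x fixed (i.e., conditioning on F_n).
boot_means = np.array([x[rng.integers(0, n, size=n)].mean() for _ in range(20000)])

plug_in = x.var(ddof=0) / n      # sigma^2({x_i}) / n, with divisor n as on the slide
print("Monte Carlo Var(Xbar*):", boot_means.var(ddof=1))
print("plug-in sigma^2/n     :", plug_in)
```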

How large should B be? The approach of approximating $se_{F_n}(\hat\theta^*)$ by the sample standard deviation $\widehat{se}_B$ is essentially calculating a complicated expectation by averaging Monte Carlo simulations. A canonical example is approximating $EX = \int x f(x)\,dx$ by $\frac1n \sum_{i=1}^n X_i$, where $X_i \sim f(x)$. This is sometimes called Monte Carlo integration. It is very convenient when the integral involved is complicated but we have the capacity to generate many observations from the underlying distribution. The foundation of Monte Carlo integration is the law of large numbers. According to Efron and Tibshirani, B = 25 is sometimes enough to have a good estimator for $se_{F_n}(\hat\theta^*)$, and rarely is B > 200 needed.

Beyond $\bar X$ and beyond the standard error. For a $\hat\theta$ more complicated than $\bar X$, estimating the standard error of $\hat\theta$ boils down to calculating the sample standard deviation of the $\hat\theta^{*b}$'s based on B bootstrap samples. It is more or less a black box; one need not know what is going on inside: bootstrap sampling, calculate $\hat\theta^{*b}$, calculate the sample standard deviation. Done. Core idea: the variance of $\hat\theta$ and the variance of $\hat\theta^*$ ought to be close (and the larger n is, the closer they are), supported by the closeness of $F_n$ and F. We can do more than just estimate the standard error.

Confidence interval. Traditional method: use $\frac{\hat\theta - \theta}{\widehat{se}(\hat\theta)}$ as a pivotal quantity. If it has a normal or t distribution, then with probability $1-\alpha$,
$t_{\alpha/2} \le \frac{\hat\theta - \theta}{\widehat{se}(\hat\theta)} \le t_{1-\alpha/2}.$
Inverting the pivot, we have, with probability $1-\alpha$,
$\hat\theta - t_{1-\alpha/2}\,\widehat{se}(\hat\theta) \le \theta \le \hat\theta - t_{\alpha/2}\,\widehat{se}(\hat\theta).$
Note that $t_{\alpha/2}$ is negative.

Bootstrap confidence intervals. It is possible that the normal (or t) assumptions are not true. In these cases, some model-free methods are desired:
Bootstrap-t interval
Percentile interval
Basic bootstrap interval
Following Efron and Tibshirani (1993), An Introduction to the Bootstrap.

Percentile-t (bootstrap-t) interval. Idea: $\frac{\hat\theta - \theta}{\widehat{se}(\hat\theta)}$ under F and $\frac{\hat\theta^* - \hat\theta}{\widehat{se}(\hat\theta^*)}$ under $F_n$ should have very close distributions. Hence, use the quantiles of the latter and invert the former. Let $Z^{*b} = \frac{\hat\theta^{*b} - \hat\theta}{\widehat{se}(\hat\theta^{*b})}$ and find the $\alpha/2$ and $1-\alpha/2$ sample percentiles of $\{Z^{*b},\, b = 1, \ldots, B\}$, namely $z^*_{\alpha/2}$ and $z^*_{1-\alpha/2}$. The confidence interval is
$\hat\theta - z^*_{1-\alpha/2}\,\widehat{se}(\hat\theta) \le \theta \le \hat\theta - z^*_{\alpha/2}\,\widehat{se}(\hat\theta).$

A simpler interval (the basic bootstrap interval). Idea: $\hat\theta - \theta$ under F and $\hat\theta^* - \hat\theta$ under $F_n$ should have the same (or close) distributions. This simplifies the percentile-t method by letting $\widehat{se}(\hat\theta)$ be independent of the data, so it drops out. Instead of first finding the percentiles of the former and inverting it, we find the sample percentiles of the latter and use them to invert the former. With probability $1-\alpha$,
$\hat\theta^*_{\alpha/2} - \hat\theta \le \hat\theta - \theta \le \hat\theta^*_{1-\alpha/2} - \hat\theta.$
After inverting, we have
$2\hat\theta - \hat\theta^*_{1-\alpha/2} \le \theta \le 2\hat\theta - \hat\theta^*_{\alpha/2}.$

Percentile interval. If the normal assumption is true, then $\big(\hat\theta - z_{1-\alpha/2}\,\widehat{se}(\hat\theta),\ \hat\theta - z_{\alpha/2}\,\widehat{se}(\hat\theta)\big)$ is the confidence interval for $\theta$. If we also have $\hat\theta^* \sim N\big(\hat\theta, \widehat{se}(\hat\theta)^2\big)$, then $\hat\theta - z_{1-\alpha/2}\,\widehat{se}(\hat\theta)$ and $\hat\theta - z_{\alpha/2}\,\widehat{se}(\hat\theta)$ are also the $\alpha/2$ and $1-\alpha/2$ percentiles of $\hat\theta^*$. Hence we avoid working hard to get $\widehat{se}(\hat\theta)$ and all that, and directly estimate the interval endpoints by the sample percentiles of $\{\hat\theta^{*b},\, b = 1, \ldots, B\}$.
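A sketch of the percentile interval (my own illustration; the skewed lognormal sample is just an example where normal theory is dubious): the endpoints are simply empirical quantiles of the bootstrap replicates.

```python
import numpy as np

def percentile_interval(x, stat, alpha=0.05, B=2000, seed=0):
    """Bootstrap percentile interval: the alpha/2 and 1-alpha/2 quantiles of theta*^b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    thetas = np.array([stat(x[rng.integers(0, n, size=n)]) for _ in range(B)])
    return np.quantile(thetas, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(3)
x = rng.lognormal(size=80)
print("95% percentile interval for the mean:", percentile_interval(x, np.mean))
```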

Remarks on the bootstrap. Poor man's (but rich enough to afford a computer) statistical inference: very little effort is needed, and people who know little about statistics can use it. It can fail sometimes:
1 When $t(F_n)$ is not a good estimate of $t(F)$.
2 When the dimension is high.
3 For time-series data (not i.i.d.).
A paradox: the motivation for the bootstrap is to deal with a complex t, but if t is too complex then the bootstrap may fail as well.

The next subsection is: Subsampling

Subsampling. Subsampling can be viewed as a variant of the bootstrap. Instead of sampling with replacement, we sample without replacement to obtain a subset of b observations from the original data. Usually $b \ll n$, such that $b/n \to 0$ and $b \to \infty$ as $n \to \infty$. Subsampling can provide confidence intervals that work (i.e., do not fail) under much weaker conditions than the bootstrap. The proof of consistency for subsampling involves some concentration inequalities, whereas that for the bootstrap involves very high-tech empirical process theory.

Remark. There are also some catches for subsampling: one needs to choose b wisely; the theoretical guarantee is only asymptotic; and performance in finite samples is rather sensitive to b. For both the bootstrap and subsampling: they can be too computationally intensive.

The next subsection is: Bagging

Bagging. In addition to statistical inference, bootstrap samples can also be used to improve existing classification or regression methods. Bagging = Bootstrap aggregating. Proposed by Leo Breiman in 1994 to improve classification by combining the classification results of many randomly generated training sets.

Bagging classifier. Suppose a method provides a prediction $\hat f(x)$ at input x based on the sample X. The bagging prediction is
$\hat f_B(x) = \frac1B \sum_{b=1}^B \hat f^{*b}(x),$
where $\hat f^{*b}$ is the prediction based on the bootstrap sample $X^{*b}$.
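A minimal sketch of the bagging prediction $\hat f_B(x)$, using regression trees as the base learner (my own choice for illustration; the slide does not fix a particular base method):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_predict(X, y, X_test, B=100, seed=0):
    """Average the predictions of B trees, each fit to a bootstrap sample."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = np.zeros((B, len(X_test)))
    for b in range(B):
        idx = rng.integers(0, n, size=n)             # bootstrap sample X*^b
        tree = DecisionTreeRegressor().fit(X[idx], y[idx])
        preds[b] = tree.predict(X_test)
    return preds.mean(axis=0)                        # f_B(x): average of the f*^b(x)

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(bagged_predict(X, y, X_test))
```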

If we view $\hat f(x)$ as an estimator, then we would be interested in $E_F(\hat f(x))$, which is a function of the distribution F (like $t(F)$ in the previous section). In this case, the bagging classifier $\hat f_B(x)$ is nothing but an approximation to the ideal bootstrap estimate $E_{F_n}(\hat f(x))$. Hence the goal of bagging can be viewed as using a plug-in estimate to approximate $E_F(\hat f(x))$. Neither $E_F(\hat f(x))$ nor $E_{F_n}(\hat f(x))$ can be directly calculated. As before, we have $\hat f_B(x) \to^p E_{F_n}(\hat f(x))$ as $B \to \infty$, given the data (i.e., given $F_n$), and $E_{F_n}(\hat f(x))$ is supposed to be close to the true mean of $\hat f(x)$.

Why does bagging work? $\hat f_B(x) \approx E_{F_n}[\hat f(x)] \approx E_F[\hat f(x)]$. A very well known result regarding mean squared error in estimation is that MSE = Variance + Bias². In the current setting, this is
$E_F[\hat f(x) - y]^2 = E_F\big[\hat f(x) - E_F(\hat f(x))\big]^2 + \big(E_F(\hat f(x)) - y\big)^2.$
By using the bagging estimate $\hat f_B(x)$ in lieu of $\hat f(x)$, we hope to make the variance part $E_F\big[\hat f(x) - E_F(\hat f(x))\big]^2$ almost zero. Hence bagging improves the MSE by reducing the variance.

Bagging for classification. Bagging was initially invented to solve classification problems. Two approaches. If the basic classifier is itself a plug-in classifier (not the plug-in principle that we introduced for the bootstrap), which works by first estimating $\eta_j(x)$ by $\hat\eta_j(x)$ and then using the classification rule $\hat\phi(x) := \arg\max_{j=1,\ldots,K} \hat\eta_j(x)$, then we can bag the bootstrap estimates $\hat\eta_j^{*b}(x)$ to obtain
$\hat\eta_{j,B}(x) = \frac1B \sum_{b=1}^B \hat\eta_j^{*b}(x),$
and then use $\hat\phi_B(x) = \arg\max_j \hat\eta_{j,B}(x)$.

However, except for a few classifiers such as kNN, most approaches do not work like that: often a classifier reports a prediction for x without reporting an estimate of $\eta(x)$ (for example, SVM, CART, etc.). In this case, use a majority voting scheme, as sketched below. Let
$V_{j,B}(x) := \frac1B \sum_{b=1}^B 1\{\hat\phi^{*b}(x) = j\},$
and then use $\hat\phi_B(x) := \arg\max_j V_{j,B}(x)$. In other words, we let the B classifiers cast votes and choose the class with the most votes.
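A sketch of the voting scheme (again my own illustration, with CART classifiers as the base learner and simulated data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagged_vote(X, y, X_test, B=100, seed=0):
    """Majority vote over B classifiers, each trained on a bootstrap sample."""
    rng = np.random.default_rng(seed)
    n = len(y)
    classes = np.unique(y)
    votes = np.zeros((len(X_test), len(classes)))    # V_{j,B}(x), up to the 1/B factor
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        clf = DecisionTreeClassifier().fit(X[idx], y[idx])
        pred = clf.predict(X_test)
        for j, c in enumerate(classes):
            votes[:, j] += (pred == c)
    return classes[np.argmax(votes, axis=1)]         # argmax_j V_{j,B}(x)

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print("voted:", bagged_vote(X, y, X[:5]), " true:", y[:5])
```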


Figure: Consensus (majority voting) versus probability (averaging $\hat\eta(x)$) bagging.

The MSE argument for why bagging works does not apply to classification: MSE uses the squared loss, whereas classification uses the 0-1 loss, and in that case the decomposition into variance and bias² does not exist. Wisdom-of-the-crowd argument: in binary classification, if there are B independent voters and each classifies Y = 1 correctly with probability $1 - e$, with misclassification rate $e < 0.5$, then $Z = \sum_{b=1}^B \hat\phi^{*b}(x) \sim \mathrm{Binomial}(B, 1-e)$ and $\frac1B \sum_{b=1}^B \hat\phi^{*b}(x) \to^p 1 - e > 0.5$. Hence the misclassification rate of the bagged classifier $P\big(\frac1B \sum_{b=1}^B \hat\phi^{*b}(x) < 0.5\big) \to 0$. A catch: the bootstrap classifiers are not independent at all. Random forest, however, adds additional randomness to the bootstrap classifiers, which makes them less dependent on one another.
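To see the wisdom-of-the-crowd effect numerically, here is a small computation (my own illustration, assuming truly independent voters, which bootstrap classifiers are not): the probability that the majority of B voters with individual error e = 0.4 is wrong shrinks quickly in B.

```python
from scipy.stats import binom

e = 0.4                                   # individual error rate, < 0.5
for B in [1, 11, 51, 101, 501]:
    # The majority is wrong when at most B//2 voters are correct; #correct ~ Binomial(B, 1 - e).
    p_wrong = binom.cdf(B // 2, B, 1 - e)
    print(f"B = {B:4d}: P(majority wrong) = {p_wrong:.4f}")
```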

Remarks. Bagging does not improve an estimator that is only a linear function of the data; it has an effect only when the estimator is a nonlinear function of the data. Bagging does not improve an estimator/classifier that is already very stable, and may even worsen it. Bagging does not improve a classifier that is a bad classifier (generalization error > 0.5 in the binary case, i.e., worse than random guessing). Bagging in some sense expands the basis of the model space, but examples show that it does not do enough. In contrast, boosting can do a good job of expanding the model space.

Each basic classifier is just a stump (a tree with only one split). Such a basic classifier is too weak, and the model space needs to be expanded much more aggressively. The next two sections introduce Boosting and Random Forest, respectively.

The next section is: 2 Boosting (Boosting algorithm; Additive logistic model and exponential loss)

The next subsection is: Boosting algorithm

Boosting. The aim is to enhance the accuracy of weak learner(s): create a strong learner by substantially boosting the performance of each single weak learner. We focus on binary classification; here each weak learner must be slightly better than random guessing (error rate < 0.5). The first really useful boosting algorithms: Schapire (1990), Freund (1995), and Freund and Schapire (1997). AdaBoost = Adaptive Boosting.

AdaBoost.M1. We introduce AdaBoost.M1 in the next slide, following the introduction in ESL (Chapter 10); the version in Izenman can be specialized to the case we introduce here. Input: $D := \{(x_i, y_i),\, i = 1, \ldots, n\}$, where $y_i \in \{+1, -1\}$. Core idea: a weak learner is applied sequentially to an adaptively weighted training sample. All resulting models are weighted in the end to form a final model. Note that there are two types of weightings: one for the observations, the other for the learners.

AdaBoost.M1
1 Initialize the weights $w_i = 1/n$ for $i = 1, \ldots, n$
2 For $t = 1, \ldots, T$:
(a) Fit a classifier $\hat\phi_t(x)$ to the data with weight vector $\{w_i\}$
(b) Compute the weighted training prediction error: $E_t := \sum_{i=1}^n w_i\, 1\{y_i \neq \hat\phi_t(x_i)\}$
(c) Compute $\alpha_t := \log\big(\frac{1 - E_t}{E_t}\big)$
(d) Update the weights: $w_i \leftarrow \mathrm{normalize}\big\{w_i\, e^{\alpha_t 1\{y_i \neq \hat\phi_t(x_i)\}}\big\}$
3 Final model: $\hat\phi_{\mathrm{AdaBoost}}(x) := \mathrm{sign}\big\{\sum_{t=1}^T \alpha_t\, \hat\phi_t(x)\big\}$
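A compact sketch of AdaBoost.M1 with decision stumps as the weak learner (my own illustration; the stump, the simulated data, and the use of scikit-learn's sample_weight argument in step (a) are my choices, not part of the slides):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_m1(X, y, T=100):
    """y must be coded in {-1, +1}. Returns a list of (alpha_t, stump_t) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # step 1: uniform weights
    model = []
    for t in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)         # (a) weak learner on weighted data
        miss = stump.predict(X) != y
        E_t = np.sum(w * miss)                   # (b) weighted training error
        alpha_t = np.log((1 - E_t) / E_t)        # (c)
        w = w * np.exp(alpha_t * miss)           # (d) up-weight misclassified points ...
        w = w / w.sum()                          #     ... and normalize
        model.append((alpha_t, stump))
    return model

def adaboost_predict(model, X):
    agg = sum(alpha * stump.predict(X) for alpha, stump in model)
    return np.sign(agg)                          # final model: sign of the weighted sum

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.2, 1, -1)
model = adaboost_m1(X, y, T=100)
print("training error:", np.mean(adaboost_predict(model, X) != y))
```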

The same learner is applied repeatedly to adaptively changing training data (an early version of AdaBoost allowed multiple learning algorithms). $\alpha_t := \log\big(\frac{1 - E_t}{E_t}\big)$ can be viewed as a measure of fit of the model $\hat\phi_t(\cdot)$ to the current weighted data: for perfectly fit data, $\alpha_t \to \infty$. This justifies using $\alpha_t$ as the weight of model $\hat\phi_t(\cdot)$ within the final model. Misclassified observations are weighted more heavily [by a multiplicative factor of $e^{\alpha_t} = \frac{1 - E_t}{E_t}$], i.e., more emphasis is placed on these data in the next round. If $E_t \approx 1/2$, then the magnitude of the change is milder. The $\mathrm{normalize}\{c_i\}$ step makes the weights sum to 1. Although the weights of the correctly classified observations appear unchanged at first glance, they become smaller due to the normalization.


Figure: After 100 iterations, the test set misclassification rates are stabilized.

Empirical evidence has shown that boosting seems to be resistant to overfitting: the test set misclassification rate tends to stabilize and not go up as the number of iterations grows. Some other empirical experiments have shown that overfitting may appear if the iterations go on for a very long time. Leo Breiman once speculated that the generalization error of boosting may eventually go to the Bayes error (the theoretically best one can do). Many attempts have been made to explain the mysteriously good performance of boosting. In the next subsection, we provide a statistical interpretation of boosting; see ESL 10.2-10.5 and Izenman 14.3.6 for more details. This view was initially given by Friedman, Hastie and Tibshirani (2000), Additive logistic regression: a statistical view of boosting, The Annals of Statistics.

The next subsection is: Additive logistic model and exponential loss

We first introduce the additive model and the exponential loss; we then show that AdaBoost is equivalent to fitting an additive model under the exponential loss. An additive model takes the form
$f(x) = \sum_{m=1}^M \beta_m\, g(x; \gamma_m),$
where $\beta_m$ is the weight of the mth model and $\gamma_m$ collects the parameters of the mth model. One needs to fit the $\beta_m$ and $\gamma_m$ simultaneously: the goal is to minimize $\sum_{i=1}^n L\big(y_i, \sum_{m=1}^M \beta_m g(x_i; \gamma_m)\big)$ for an appropriate loss function. Such an optimization can be quite complicated, even if the loss function has a simple form. One possible way to solve it is to estimate $(\beta_m, \gamma_m)$ sequentially for $m = 1, \ldots, M$, as follows.

Forward stagewise additive modeling
1 Initialize $f_0(x) = 0$ (no model)
2 For m = 1 to M:
(a) Estimate $(\beta_m, \gamma_m) := \arg\min_{\beta, \gamma} \sum_{i=1}^n L\big[y_i,\, f_{m-1}(x_i) + \beta g(x_i; \gamma)\big]$
(b) Set $f_m(x) = f_{m-1}(x) + \beta_m g(x; \gamma_m)$
After each iteration the model is enhanced. At each iteration, only an optimization involving a single pair $(\beta_m, \gamma_m)$ is performed.
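For intuition, here is a forward stagewise sketch under the squared-error loss with stumps as the basis g (my own illustration, not from the slides): with squared error, step (a) reduces to fitting the current residuals and solving a one-dimensional least-squares problem for beta.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def forward_stagewise(X, y, M=100):
    """Squared-error forward stagewise: f_m = f_{m-1} + beta_m * g_m, with stump g_m."""
    f = np.zeros(len(y))                              # f_0(x) = 0
    components = []
    for m in range(M):
        r = y - f                                     # residuals: minimize sum (r - beta*g)^2
        g = DecisionTreeRegressor(max_depth=1).fit(X, r)
        gx = g.predict(X)
        beta = np.dot(r, gx) / max(np.dot(gx, gx), 1e-12)   # least-squares beta for this basis
        f += beta * gx                                # step (b): update the fit
        components.append((beta, g))
    return components, f

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.2, size=300)
_, fit = forward_stagewise(X, y, M=100)
print("training MSE:", np.mean((y - fit) ** 2))
```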

Exponential loss. For binary classification problems, we call $u := y f(x)$ a functional margin. The larger the margin, the more confidence we have that the data point (x, y) is correctly classified. We consider the exponential loss
$L(y, f(x)) := e^{-y f(x)}.$
The risk function is $E\big(e^{-Y f(X)}\big)$. In order to find an f that minimizes $E\big(e^{-Y f(X)}\big)$, we only need to minimize $E\big(e^{-Y f(X)} \mid X = x\big)$ for every x. Write
$\ell(f(x)) := E\big(e^{-Y f(X)} \mid X = x\big) = e^{-f(x)} P(Y = 1 \mid X = x) + e^{f(x)} P(Y = -1 \mid X = x) = e^{-f(x)} \eta(x) + e^{f(x)} [1 - \eta(x)].$

Taking the derivative of $\ell(f(x))$ with respect to $f(x)$ and setting it to 0, we have
$0 = \ell'(f(x)) = -e^{-f(x)} \eta(x) + e^{f(x)} [1 - \eta(x)]$
$\Rightarrow e^{-f(x)} \eta(x) = e^{f(x)} [1 - \eta(x)]$
$\Rightarrow e^{2f(x)} = \frac{\eta(x)}{1 - \eta(x)}$
$\Rightarrow 2 f(x) = \log\Big(\frac{\eta(x)}{1 - \eta(x)}\Big)$
$\Rightarrow f(x) = \frac12 \log\Big(\frac{\eta(x)}{1 - \eta(x)}\Big).$
Recall that this is (half of) the log-odds modeled by logistic regression (when f is a linear function of x). Incidentally, this minimizer of the population risk under the exponential loss coincides, up to the factor 1/2, with the minimizer under the logistic loss $L(y, f(x)) := \log(1 + e^{-y f(x)})$; in particular, the two induce the same classification rule.

AdaBoost = forward stagewise additive modeling + exponential loss. We combine forward stagewise additive modeling with the exponential loss. Take the basis $g(\cdot)$ in the additive model to be a single weak learner $\phi(\cdot)$; what this weak learner really is does not matter. In this setting, at each iteration we seek to find
$\arg\min_{\beta, \gamma} \sum_{i=1}^n e^{-y_i [f_{m-1}(x_i) + \beta \phi(x_i; \gamma)]} = \arg\min_{\beta, \gamma} \sum_{i=1}^n e^{-y_i f_{m-1}(x_i)}\, e^{-y_i \beta \phi(x_i; \gamma)} = \arg\min_{\beta, \gamma} \sum_{i=1}^n w_i^{(m)} e^{-y_i \beta \phi(x_i; \gamma)},$
where $w_i^{(m)} := e^{-y_i f_{m-1}(x_i)}$.

When minimizing $\sum_{i=1}^n w_i^{(m)} e^{-y_i \beta \phi(x_i; \gamma)}$, we first fix $\beta$:
$\sum_{i=1}^n w_i^{(m)} e^{-\beta y_i \phi(x_i; \gamma)} = e^{-\beta} \sum_{y_i \phi(x_i; \gamma) = 1} w_i^{(m)} + e^{\beta} \sum_{y_i \phi(x_i; \gamma) = -1} w_i^{(m)}$
$= e^{-\beta} \sum_i w_i^{(m)} 1\{y_i \phi(x_i; \gamma) = 1\} + e^{\beta} \sum_i w_i^{(m)} 1\{y_i \phi(x_i; \gamma) = -1\}$
$= \big(e^{\beta} - e^{-\beta}\big) \sum_i w_i^{(m)} 1\{y_i \neq \phi(x_i; \gamma)\} + e^{-\beta} \sum_i w_i^{(m)}.$
Hence $\hat\phi_m(x)$ minimizes $E_m := \frac{\sum_i w_i^{(m)} 1\{y_i \neq \hat\phi_m(x_i)\}}{\sum_i w_i^{(m)}}$, i.e., the weighted 0-1 loss.

Once we have $\hat\phi_m(\cdot)$, we find the minimizing $\beta$:
$\ell(\beta) := \big(e^{\beta} - e^{-\beta}\big) E_m + e^{-\beta},$
$\ell'(\beta) = \big(e^{\beta} + e^{-\beta}\big) E_m - e^{-\beta} = 0,$
which is minimized at
$\beta_m = \frac12 \log\Big(\frac{1 - E_m}{E_m}\Big).$
The additive model continues with $f_m(x) = f_{m-1}(x) + \beta_m \hat\phi_m(x)$. This means that the weight for the next iteration is
$w_i^{(m+1)} = e^{-y_i f_m(x_i)} = e^{-y_i [f_{m-1}(x_i) + \beta_m \hat\phi_m(x_i)]} = w_i^{(m)} e^{-y_i \beta_m \hat\phi_m(x_i)}.$

The multiplicative factor in the weight update is $e^{-\beta_m y_i \hat\phi_m(x_i)}$. Recall that $y_i, \hat\phi_m(x_i) \in \{+1, -1\}$, so $-y_i \hat\phi_m(x_i) = 2 \cdot 1\{y_i \neq \hat\phi_m(x_i)\} - 1$. Hence
$e^{-\beta_m y_i \hat\phi_m(x_i)} = e^{\beta_m (2 \cdot 1\{y_i \neq \hat\phi_m(x_i)\} - 1)} = e^{2 \beta_m 1\{y_i \neq \hat\phi_m(x_i)\}}\, e^{-\beta_m} = e^{\alpha_m 1\{y_i \neq \hat\phi_m(x_i)\}}\, e^{-\beta_m},$
where $\alpha_m = \log\big(\frac{1 - E_m}{E_m}\big)$ was defined in the AdaBoost.M1 algorithm. Note that the common factor $e^{-\beta_m}$ vanishes due to the normalization.

Short summary. The final model is the weighted sum of the $\hat\phi_m(x)$ with $\beta_m = \frac12 \alpha_m$ as weights (only half of the weights in AdaBoost.M1). The observation weights are updated by $w_i^{(m+1)} = w_i^{(m)} e^{\alpha_m 1\{y_i \neq \hat\phi_m(x_i)\}}$ (up to normalization), the same as in AdaBoost.M1. In AdaBoost.M1 the weak learner might have its own loss function, but in principle it should try to minimize the weighted 0-1 loss. These observations suggest that AdaBoost.M1 is approximately solving an additive model with weak learners as the basis, with the forward stagewise algorithm as the optimization approach and the exponential loss as the loss function. The reason it is approximately solving, not exactly solving, is that the weak learner may not directly minimize the weighted 0-1 loss.

The next section is: 3 Random Forest

Bagging in a different view. In bagging, we generate bootstrap samples, redo the classification on each, and aggregate the predictions by majority voting. The bootstrap classifiers are not independent, so the variance-reducing effect of averaging may not be clear. Random Forest: an application of bagging that aggregates bootstrap classification trees, but with some additional randomness due to a random subset of variables being considered in the tree growing. Since trees are known to be noisy (large variance), bagging should bring a lot of benefit to trees. But these trees are identically distributed, not independent.

Random Forest
1 For b = 1 to B:
(a) Draw a bootstrap sample of size N from X, namely $X^{*b}$
(b) Grow a tree $T_b$ based on $X^{*b}$ in the common way, except that at each node the best split is found among a randomly selected subset of m of all p variables
(c) Stop growing when a node has reached a pre-set minimal node size
2 Report the forest $\{T_b\}_1^B$:
Regression: $\hat f(x) = \frac1B \sum_{b=1}^B T_b(x)$
Classification: $\hat\phi(x) = \mathrm{majority\ vote}\{T_b(x)\}$
Note that the set of m randomly selected variables may differ between nodes, even within the same tree.
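In practice one rarely codes this loop by hand. A sketch with scikit-learn (the slides point to the R package randomForest instead; the simulated data are my own), where max_features plays the role of m and min_samples_leaf the role of the minimal node size:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

# n_estimators = B trees; max_features = m variables tried at each split;
# min_samples_leaf = minimal node size.
rf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                            min_samples_leaf=1, random_state=0)
rf.fit(X, y)
print("training accuracy:", rf.score(X, y))
```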


Out-of-bag sample. For each bootstrap sample, there must be a set of observations not selected into it; this set of observations is called the out-of-bag (OOB) sample. Conversely, for each observation, there must be a set of trees whose bootstrap samples do not include this observation.

OOB errors. For the whole forest, the OOB error is defined as the average misclassification over all n observations, where each observation is classified not by the whole forest, but by the sub-forest of those trees whose bootstrap samples do not include this observation. This is similar to leave-one-out cross validation: two observations do not share a classifier (compared with 5-fold CV), but the classifier for each observation is itself a random forest. Unlike cross validation, there is no additional computational burden: the OOB error is obtained along the way as the random forest is generated. We can also calculate the OOB error for a single (bootstrap) tree: the misclassification rate of that tree when it is applied to its OOB sample.
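scikit-learn reports this quantity when oob_score=True is set; a small sketch (my own simulated data; oob_score_ is the OOB accuracy, so the OOB error is one minus it):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB error:", 1 - rf.oob_score_)   # each point is scored only by trees that did not see it
```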


Variable importance. Goal: provide a ranked list of variables representing their relative importance in the classification. Two versions: permutation based, and Gini index (or impurity) based.

Permutation-based variable importance
1 The OOB error for the bth tree in the forest is $E_b(\mathrm{OOB})$
2 For each $j = 1, \ldots, p$, the jth variable in the OOB sample can be randomized (randomly permute the values of the jth variable). Call this permuted OOB sample $\mathrm{OOB}^j$. A new OOB error for $\mathrm{OOB}^j$ can be calculated for the bth tree, denoted $E_b(\mathrm{OOB}^j)$
3 The difference is aggregated over all B trees for each j:
$VI(j) := \sum_{b=1}^B \big[E_b(\mathrm{OOB}^j) - E_b(\mathrm{OOB})\big]$
The larger VI(j) is, the more important the variable.
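A sketch of the permutation idea (my own simplified version: it permutes each variable on a held-out test set and records the increase in error, rather than looping tree by tree over OOB samples as the slide does):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # only the first two variables matter
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
base_err = np.mean(rf.predict(X_te) != y_te)

for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # void the effect of variable j; the forest stays fixed
    vi = np.mean(rf.predict(X_perm) != y_te) - base_err
    print(f"variable {j}: importance = {vi:.3f}")
```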

The permutation of variable j works by voiding the effect of variable j, making it useless to the trees. Note that this is a little different from removing variable j from the data and regrowing the trees! Note also that the trees are static; we only try different test data. Hence there is not much computational burden. If a variable j is important, then $E_b(\mathrm{OOB}^j) \gg E_b(\mathrm{OOB})$; otherwise, $E_b(\mathrm{OOB}^j) \approx E_b(\mathrm{OOB})$.

Gini index (or impurity) based variable importance. Recall that when growing a tree, a split on a variable reduces the Gini index at the mother node to the weighted sum of the Gini indices at the two daughter nodes. We aggregate this reduction for each variable j over all B trees. Large VI = large reduction overall = the variable helped many trees reduce impurity = important. It is possible that a variable j does not appear in a single given tree; but over the whole forest with many trees, the chance that it is never selected by any tree is very small.
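For the impurity-based version, scikit-learn exposes the aggregated (and normalized) impurity reduction as feature_importances_; a one-line check on the same kind of simulated data as above (my own example):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(11)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("Gini-based importances:", np.round(rf.feature_importances_, 3))
```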

Figure: Left: Gini-based VI. Right: Randomization VI.

Remarks. Choice of m: classification $\sqrt{p}$; regression $p/3$. Minimal node size: classification 1; regression 5. Random forest is an improved version of bagging trees. Friedman and Hall showed that subsampling (without replacement) with N/2 is approximately equivalent to the bootstrap, while a smaller fraction of N may reduce the variance even further. In R, see the package randomForest.