UVA CS 4501: Machine Learning

Lecture 21: Decision Tree / Random Forest / Ensemble
Dr. Yanjun Qi, University of Virginia, Department of Computer Science

Where are we? Five major sections of this course:
- Regression (supervised)
- Classification (supervised)
- Unsupervised models
- Learning theory
- Graphical models

Today: Decision Tree (DT)
- Tree representation
- Brief information theory
- Learning decision trees
- Bagging
- Random forests: ensemble of DTs
- More about ensembles

A study comparing classifiers, from the Proceedings of the 23rd International Conference on Machine Learning (ICML '06).

The top 8 models from the study comparing classifiers, evaluated on 11 binary classification problems with 8 metrics.

Readability hierarchy:
- Decision trees: classify based on a series of one-variable decisions.
- Linear classifier: the weight vector w tells us how important each variable is for classification and in which direction it points.
- Quadratic classifier: linear weights work as in the linear classifier, with additional information coming from all products of variables.
- k-nearest neighbors: classifies using the complete training set; gives no information about the nature of the class difference.

Where are we? Three major types of classification approaches. We can divide the large variety of classification approaches into roughly three major types:
1. Discriminative: directly estimate a decision rule/boundary, e.g., logistic regression, support vector machine, decision tree
2. Generative: build a generative statistical model, e.g., naïve Bayes classifier, Bayesian networks
3. Instance-based classifiers: use the observations directly (no models), e.g., k-nearest neighbors

A dataset for classification. The output is a discrete class label C_1, C_2, ..., C_L.
- Data/points/instances/examples/samples/records: the rows
- Features/attributes/dimensions/independent variables/covariates/predictors/regressors: the columns, except the last
- Target/outcome/response/label/dependent variable: the special column to be predicted (the last column)

Example: the Play Tennis dataset.

Anatomy of a decision tree. Each internal node is a test on one feature/attribute, the branches are the possible attribute values of that node, and the leaves are the decisions.
Outlook?
- sunny -> Humidity? (high -> No, normal -> Yes)
- overcast -> Yes
- rain -> Windy? (false -> Yes, true -> No)


Anatomy of a decision tree (continued): as examples are passed down the splits, the sample size at each node gets smaller.

Apply the model to test data: to play tennis or not. A new test example: (Outlook == rain) and (Windy == false). Passing it down the tree (rain branch, then Windy == false), the decision is Yes.

Apply the model to test data: the three cases that lead to Yes:
(Outlook == overcast) -> Yes
(Outlook == rain) and (Windy == false) -> Yes
(Outlook == sunny) and (Humidity == normal) -> Yes

Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances:
(Outlook == overcast)
OR ((Outlook == rain) AND (Windy == false))
OR ((Outlook == sunny) AND (Humidity == normal))
=> yes, play tennis
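Read as code, the tree above is exactly this rule set. Here is a minimal Python sketch that encodes it directly (attribute values are written as plain strings, matching the slide; the function name is illustrative):

```python
def play_tennis(outlook, humidity, windy):
    """The Play Tennis decision tree from the slides as nested if/else."""
    if outlook == "overcast":
        return "yes"
    if outlook == "rain":
        return "yes" if windy == "false" else "no"
    if outlook == "sunny":
        return "yes" if humidity == "normal" else "no"
    raise ValueError("unknown outlook value")

# The test example from the previous slide: (Outlook == rain) and (Windy == false)
print(play_tennis(outlook="rain", humidity="high", windy="false"))  # -> yes
```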

Representation. The Boolean function Y = (A AND B) OR ((NOT A) AND C) as a tree:
A?
- A = 1 -> B? (B = 1 -> true, B = 0 -> false)
- A = 0 -> C? (C = 1 -> true, C = 0 -> false)

Same concept, different representation: the same function Y = (A AND B) OR ((NOT A) AND C) can also be written as a tree rooted at C instead of A, with tests on A and B below it. The concept is the same; only the tree structure differs.

Which attribute to select for splitting? The numbers at each node show the distribution of each class (not of the attribute): the root has 16 positive and 16 negative examples, the split shown sends 8+/8- to each child, then 4+/4- to each grandchild, then 2+/2- to each leaf. This is bad splitting: every node stays perfectly mixed.

How do we choose which attribute to split on? Which attribute should be tested first? Intuitively, you would prefer the one that separates the training examples as much as possible.

Today: Decision Tree (DT)
- Tree representation
- Brief information theory
- Learning decision trees
- Bagging
- Random forests: ensemble of DTs
- More about ensembles

Information gain is one criterion for deciding which attribute to split on. Imagine:
1. Someone is about to tell you your own name
2. You are about to observe the outcome of a die roll
3. You are about to observe the outcome of a coin flip
4. You are about to observe the outcome of a biased coin flip
Each situation has a different amount of uncertainty about the outcome you will observe.

Information: the reduction in uncertainty (the amount of surprise in the outcome):
I(x) = log2(1 / p(x)) = -log2 p(x)
If the probability of an event is small and it happens anyway, the information is large.
- Observing that a fair coin flip comes up heads: I = -log2(1/2) = 1 bit
- Observing that a die roll is 6: I = -log2(1/6) ≈ 2.58 bits

Entropy: the expected amount of information when observing the output of a random variable X:
H(X) = E(I(X)) = Σ_i p(x_i) I(x_i) = -Σ_i p(x_i) log2 p(x_i)
If X has 8 equally likely outcomes, H(X) = -Σ_i (1/8) log2(1/8) = 3 bits.
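To make these quantities concrete, here is a minimal Python sketch (standard library only) that reproduces the self-information and entropy values quoted above:

```python
import math

def information(p):
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(probs):
    """Entropy H(X) = -sum_i p_i * log2(p_i), in bits (terms with p = 0 contribute 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(information(1 / 2))    # fair coin comes up heads -> 1.0 bit
print(information(1 / 6))    # fair die shows 6 -> ~2.58 bits
print(entropy([1 / 8] * 8))  # 8 equally likely outcomes -> 3.0 bits
```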

Entropy (continued). If there are k possible outcomes, then H(X) ≤ log2 k, with equality when all outcomes are equally likely. The more the probability distribution deviates from uniformity, the lower the entropy; e.g., for a random binary variable the entropy peaks at 1 bit when p = 0.5 and falls to 0 when p = 0 or p = 1.


Entropy measures purity: the lower the entropy, the better the purity. Compare a node with 4+/4- to a node with 8+/0-: the second distribution is less uniform, its entropy is lower, and the node is purer.

Information gain: IG(X, Y) = H(Y) - H(Y|X), the reduction in uncertainty about Y from knowing a feature variable X.
Information gain = (information before split) - (information after split) = entropy(parent) - [average entropy(children)].
H(Y) is fixed, so the lower H(Y|X), the better (the children nodes are purer); equivalently, for IG, the higher, the better.

Conditional entropy:
H(Y) = -Σ_i p(y_i) log2 p(y_i)
H(Y|X) = Σ_j p(x_j) H(Y|X = x_j) = -Σ_j p(x_j) Σ_i p(y_i|x_j) log2 p(y_i|x_j)
H(Y|X = x_j) = -Σ_i p(y_i|x_j) log2 p(y_i|x_j)

Example. Attributes X1, X2 and label Y, with counts:
X1  X2  Y  Count
T   T   +  2
T   F   +  2
F   T   -  5
F   F   +  1
Which one do we choose, X1 or X2?
IG(X1, Y) = H(Y) - H(Y|X1)
H(Y) = -(5/10) log2(5/10) - (5/10) log2(5/10) = 1
H(Y|X1) = P(X1=T) H(Y|X1=T) + P(X1=F) H(Y|X1=F)
        = 4/10 · [-(1 log2 1 + 0 log2 0)] + 6/10 · [-(5/6 log2 5/6 + 1/6 log2 1/6)] ≈ 0.39
IG(X1, Y) = 1 - 0.39 = 0.61
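The same calculation can be reproduced with a short Python sketch (standard library only); for the table as given it yields IG(X1, Y) ≈ 0.61 and IG(X2, Y) ≈ 0.4:

```python
from collections import Counter
from math import log2

# Rows (X1, X2, Y) expanded from the count table above
rows = [("T", "T", "+")] * 2 + [("T", "F", "+")] * 2 \
     + [("F", "T", "-")] * 5 + [("F", "F", "+")] * 1

def entropy(labels):
    """Entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, feature_idx):
    """IG(X, Y) = H(Y) - H(Y | X) for the feature in column feature_idx."""
    labels = [r[-1] for r in rows]
    gain = entropy(labels)
    for value in set(r[feature_idx] for r in rows):
        subset = [r[-1] for r in rows if r[feature_idx] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

print(round(info_gain(rows, 0), 2))  # IG(X1, Y) -> 0.61
print(round(info_gain(rows, 1), 2))  # IG(X2, Y) -> ~0.4
```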



Which one do we choose? For the table above, IG(X1, Y) = 0.61 and IG(X2, Y) ≈ 0.4. Pick the variable that provides the most information gain about Y: pick X1, then recursively choose the next Xi on each branch.

Picking X1 splits the table into two branches: the X1 = T rows (4 examples, all positive) go down one branch, and the X1 = F rows (6 examples: 1 positive, 5 negative) go down the other. Tree growing then continues recursively within each branch.


Intuitively, you would prefer the attribute that separates the training examples as much as possible.

Decision tree caveats: the number of possible values of an attribute influences its information gain. The more possible values, the higher the gain (it is more likely to form small but pure partitions).
Other purity (diversity) measures besides information gain:
- Gini index (population diversity): Gini_m = 1 - Σ_k p_mk², where p_mk is the proportion of class k at node m
- Chi-square test
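The Gini index is cheap to compute; a minimal sketch (standard library only) comparing it with entropy on a node's class proportions (the example proportions are illustrative):

```python
from math import log2

def gini(proportions):
    """Gini impurity: 1 - sum_k p_mk^2 (0 for a pure node, largest for a uniform node)."""
    return 1.0 - sum(p * p for p in proportions)

def entropy(proportions):
    """Entropy of the same class proportions, in bits."""
    return -sum(p * log2(p) for p in proportions if p > 0)

print(gini([0.5, 0.5]), entropy([0.5, 0.5]))  # maximally impure node: 0.5, 1.0
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))  # pure node:             0.0, 0.0
```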

Overfitting: you can perfectly fit a DT to any training data.
Instability of trees: high variance (small changes in the training set result in changes to the tree model); because of the hierarchical structure, an error in the top split propagates down.
Two approaches:
1. Stop growing the tree when further splitting the data does not yield an improvement.
2. Grow a full tree, then prune it by eliminating nodes.

From the ESL book, Ch. 9, Classification and Regression Trees (CART): partition the feature space into a set of rectangles and fit a simple model in each partition.

Summary: decision trees
- Non-linear classifier / regression
- Easy to use
- Easy to interpret
- Susceptible to overfitting, but this can be avoided

Decision Tree / Random Forest, summarized:
- Task: classification
- Representation: partition of the feature space into a set of rectangles; local smoothness
- Score function: split with a purity measure, e.g., IG / cross-entropy / Gini
- Search/Optimization: greedy search to find the partitions
- Models, Parameters: tree model(s), i.e., the space partition

Today: Decision Tree (DT)
- Tree representation
- Brief information theory
- Learning decision trees
- Bagging
- Random forests: ensemble of DTs
- More about ensembles

Bagging, or bootstrap aggregation, is a technique for reducing the variance of an estimated prediction function. For classification, for instance, it builds a committee of trees, and each tree casts a vote for the predicted class.

Bootstrap: the basic idea is to randomly draw datasets with replacement (i.e., duplicates are allowed) from the training data, each sample the same size as the original training set.


With vs. without replacement: bootstrap with replacement keeps the sample size the same as the original size for every repeated sampling, and the sampled data groups are independent of each other. Bootstrap without replacement cannot keep the sample size the same as the original for every repeated sampling, and the sampled data groups are dependent on each other.
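Below is a minimal NumPy sketch of bootstrap sampling with replacement (the toy data and variable names are illustrative): each replicate has the same size N as the original training set, with duplicates allowed and some rows left out.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))       # toy training set: N = 30 examples, M = 5 features
y = rng.integers(0, 2, size=30)    # toy binary labels

B = 5                              # number of bootstrap replicates
for b in range(B):
    idx = rng.integers(0, len(X), size=len(X))  # draw N row indices with replacement
    X_boot, y_boot = X[idx], y[idx]             # duplicates allowed, some rows left out
    print(f"replicate {b}: {len(set(idx))} unique rows out of {len(X)}")
```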

Bagging: create bootstrap samples from the training data (N examples, M features each).

Bagging of DT classifiers: fit one tree to each bootstrap sample (N examples, M features) and take the majority vote; i.e., refit the model to each bootstrap dataset and then examine the behavior over the B replications.
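To make the procedure concrete, here is a hand-rolled sketch of bagging decision trees with a majority vote, assuming scikit-learn is available for the base trees and using a synthetic dataset (scikit-learn's BaggingClassifier packages the same idea):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

B = 25
trees = []
for _ in range(B):
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap sample of the rows
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Majority vote of the B trees for each point (labels here are 0/1)
votes = np.stack([t.predict(X) for t in trees])       # shape (B, n_samples)
bagged_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy of the bagged committee:", (bagged_pred == y).mean())
```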

Bagging for classification with 0-1 loss: bagging a good classifier can make it better, while bagging a bad classifier can make it worse. The bagging effect can be understood in terms of a consensus of independent weak learners, the "wisdom of crowds".

Peculiarities of bagging: model instability is good when bagging. The more variable (unstable) the basic model is, the more improvement can potentially be obtained. Low-variability methods (e.g., LDA) improve less than high-variability methods (e.g., decision trees).

Bagging, an example with simulated data (ESL book, Example 8.7.1): N = 30 training samples, two classes, and p = 5 features; each feature has an N(0, 1) distribution with pairwise correlation 0.95; the response Y is generated from the features according to the rule in the ESL example; the test sample size is 2000; classification trees are fit to the training set and to B = 200 bootstrap samples.

The five features are highly correlated with each other, so there is no clear preference for which feature to split on; notice that the bootstrap trees are different from the original tree. Small changes in the training set result in different trees, but these trees are actually quite similar for classification. (ESL book, Example 8.7.1)

For B > 30, more trees do not improve the bagging results, since the trees correlate highly with each other and give similar classifications. Two ways to combine the B trees: consensus (majority vote) or probability (average the class distributions at the terminal nodes). (ESL book, Example 8.7.1)

Bagging:
- Slightly increases the model space
- Cannot help where a greater enlargement of the space is needed
- Bagged trees are correlated
- Use random forests to reduce the correlation between trees

Today: Decision Tree (DT)
- Tree representation
- Brief information theory
- Learning decision trees
- Bagging
- Random forests: a special ensemble of DTs
- More about ensembles

Random forest classifier: an extension of bagging that uses de-correlated trees.

Random forest classifier, step 1: create bootstrap samples from the training data (N examples, M features).

Random forest classifier, step 2: at each node, when choosing the split feature, choose only among m < M randomly selected features.

Random forest classifier, step 3: create a decision tree from each bootstrap sample.

Random forest classifier, step 4: take the majority vote over the trees.

Random forests:
1. For each of our B bootstrap samples:
   a. Form a tree in the following manner:
      i. Given p dimensions, pick m of them.
      ii. Split only according to these m dimensions (we will NOT consider the other p - m dimensions).
      iii. Repeat steps i and ii for each split. Note: we pick a different set of m dimensions for each split of a single tree.

See pages 598-599 in the ESL book.

Random forests can be viewed as a refinement of bagging with a tweak to de-correlate the trees: at each tree split, a random subset of m features out of all p features is drawn and considered for splitting.
Some guidelines are provided by Breiman, but be careful to choose m based on the specific problem:
- m = p amounts to bagging
- m = p/3 or log2(p) for regression
- m = sqrt(p) for classification
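In scikit-learn this per-split feature subsampling is controlled by max_features; a minimal sketch (the dataset and parameter values are illustrative, not from the lecture), with max_features="sqrt" matching the m = sqrt(p) guideline for classification:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

rf = RandomForestClassifier(
    n_estimators=200,      # B bootstrap samples / trees
    max_features="sqrt",   # m = sqrt(p) features considered at each split
    random_state=0,
)
print(cross_val_score(rf, X, y, cv=5).mean())  # 5-fold cross-validated accuracy
```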

Why are correlated trees not ideal? Random forests try to reduce the correlation between the trees. Why does that matter?

Assume each tree has variance σ². If the trees were independently and identically distributed, the variance of their average would be σ²/B.

If the trees are only identically distributed (not independent), with pairwise correlation ρ > 0, the variance of their average is ρσ² + ((1 - ρ)/B) σ². As B → ∞, the second term goes to 0, but the first does not: the pairwise correlation puts a floor on the achievable variance.


How do we deal with this? If we reduce m (the number of dimensions we actually consider at each split), we reduce the pairwise tree correlation ρ, and thus the variance of the ensemble is reduced.
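To see the variance formula in action, here is a small NumPy simulation (a sketch under the identically distributed assumption above; all numbers are illustrative). It builds B variables with pairwise correlation ρ as a shared component plus an independent component and checks the variance of their average against ρσ² + ((1 - ρ)/B) σ²:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma2, B, n_trials = 0.5, 1.0, 200, 100_000

# Each variable = shared part (variance rho*sigma2) + independent part (variance (1-rho)*sigma2),
# so every variable has variance sigma2 and every pair has correlation rho.
shared = rng.normal(scale=np.sqrt(rho * sigma2), size=(n_trials, 1))
indep = rng.normal(scale=np.sqrt((1 - rho) * sigma2), size=(n_trials, B))
averages = (shared + indep).mean(axis=1)

print("empirical variance of the average:", averages.var())
print("theory rho*sigma^2 + (1-rho)*sigma^2/B:", rho * sigma2 + (1 - rho) * sigma2 / B)
```

Even with B = 200 trees, the variance stays near ρσ² = 0.5 rather than shrinking toward 0, which is exactly why random forests work to lower ρ.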

Today: Decision Tree (DT)
- Tree representation
- Brief information theory
- Learning decision trees
- Bagging
- Random forests: ensemble of DTs
- More about ensembles

Ensembles in practice, e.g., the Netflix Prize (Oct 2006-2009). Each rating/sample is a tuple <user, movie, date of grade, grade>; the training set contains 100,480,507 ratings, and the qualifying set (2,817,131 ratings) determines the winner.

Ensembles in practice: team BellKor's Pragmatic Chaos defeated the team "The Ensemble" by submitting just 20 minutes earlier, winning the $1 million prize. The "Ensemble" team was itself a blend of multiple different methods.

References
- Prof. Tan, Steinbach, and Kumar's Introduction to Data Mining slides
- ESL book: Hastie, Trevor, et al. The Elements of Statistical Learning. 2nd ed. New York: Springer, 2009.
- Dr. Oznur Tastan's slides about RF and DT