Machine Learning 2D1431
Lecture 4: Decision Tree Learning

Question of the Day
How can you make the following equation true by drawing only one straight line? 5 + 5 + 5 = 550

Outline
- Decision tree for PlayTennis
- Decision trees
- ID3 learning algorithm
- Entropy, information gain
- Generalization and overfitting

Decision Tree for PlayTennis

Outlook:
- Sunny -> Humidity: High -> No, Normal -> Yes
- Overcast -> Yes
- Rain -> Wind: Strong -> No, Weak -> Yes

Each internal node tests an attribute.
Each branch corresponds to an attribute value.
Each leaf node assigns a classification.

Classifying an instance such as (Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Strong): follow the Sunny branch to the Humidity test and the High branch to the leaf, so PlayTennis=No.

Decision Tree for Conjunction: Outlook=Sunny ∧ Wind=Weak
Decision Tree for Disjunction: Outlook=Sunny ∨ Wind=Weak
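The PlayTennis tree above can be written down directly as a small data structure and used to classify instances like the one just shown. A minimal sketch, assuming a nested-dict representation in Python; the representation and the function name are illustrative, not from the slides:

```python
# Internal nodes: {"attribute": name, "branches": {value: subtree}}; leaves: class labels.
play_tennis_tree = {
    "attribute": "Outlook",
    "branches": {
        "Sunny": {"attribute": "Humidity",
                  "branches": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain": {"attribute": "Wind",
                 "branches": {"Strong": "No", "Weak": "Yes"}},
    },
}

def classify(tree, instance):
    """Walk from the root, following the branch for the instance's attribute value."""
    while isinstance(tree, dict):                     # internal node: test an attribute
        tree = tree["branches"][instance[tree["attribute"]]]
    return tree                                       # leaf node: class label

# The instance (Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Strong) ends in a No leaf.
print(classify(play_tennis_tree, {"Outlook": "Sunny", "Temperature": "Hot",
                                  "Humidity": "High", "Wind": "Strong"}))  # -> "No"
```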

Decision Tree for XOR: Outlook=Sunny XOR Wind=Weak

Decision Tree
Decision trees represent disjunctions of conjunctions:
(Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak)

When to Consider Decision Trees
- Instances describable by attribute-value pairs
- Target function is discrete valued
- Disjunctive hypothesis may be required
- Possibly noisy training data
- Missing attribute values
Examples: medical diagnosis, credit risk analysis, object classification for a robot manipulator (Tan 1993).

Top-Down Induction of Decision Trees (ID3)
1. A ← the best decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of the branch
5. If all training examples are perfectly classified (same value of the target attribute), stop; else iterate over the new leaf nodes.
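A minimal sketch of this loop in Python, selecting attributes by information gain as defined on the following slides. Examples are assumed to be (attribute-dict, label) pairs and the nested-dict tree representation from the earlier sketch is reused; the helper names are my own:

```python
import math
from collections import Counter

def entropy(examples):
    """Entropy(S) = -sum over classes c of p_c log2 p_c."""
    counts, total = Counter(label for _, label in examples), len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute):
    """Gain(S,A) = Entropy(S) - sum over values v of |S_v|/|S| * Entropy(S_v)."""
    total, partitions = len(examples), {}
    for attrs, label in examples:
        partitions.setdefault(attrs[attribute], []).append((attrs, label))
    return entropy(examples) - sum(len(s) / total * entropy(s) for s in partitions.values())

def id3(examples, attributes):
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:                 # all examples perfectly classified -> leaf
        return labels[0]
    if not attributes:                        # no attributes left -> majority-class leaf
        return Counter(labels).most_common(1)[0][0]
    # Steps 1-2: choose the best decision attribute A for this node
    best = max(attributes, key=lambda a: information_gain(examples, a))
    tree = {"attribute": best, "branches": {}}
    # Steps 3-4: create a descendant for each value of A and sort the examples to it
    partitions = {}
    for attrs, label in examples:
        partitions.setdefault(attrs[best], []).append((attrs, label))
    remaining = [a for a in attributes if a != best]
    for value, subset in partitions.items():
        tree["branches"][value] = id3(subset, remaining)   # step 5: iterate on new leaves
    return tree
```

Run on the 14 PlayTennis training examples listed on the following slides, this sketch should reproduce the Outlook/Humidity/Wind tree shown above.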

Which Attribute is Best?

Two candidate splits of S = [29+,35-]:
A1=? : True -> [21+,5-], False -> [8+,30-]
A2=? : True -> [18+,33-], False -> [11+,2-]

Entropy
S is a sample of training examples; p+ is the proportion of positive examples and p- the proportion of negative examples. Entropy measures the impurity of S:
Entropy(S) = -p+ log2 p+ - p- log2 p-

Entropy(S) is the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code).
Why? Information theory: an optimal-length code assigns -log2 p bits to a message having probability p. So the expected number of bits to encode (+ or -) of a random member of S is -p+ log2 p+ - p- log2 p-.

Information Gain
Gain(S,A): the expected reduction in entropy due to sorting S on attribute A.
Gain(S,A) = Entropy(S) - Σ_{v ∈ values(A)} (|S_v|/|S|) Entropy(S_v)

Entropy([29+,35-]) = -(29/64) log2(29/64) - (35/64) log2(35/64) = 0.99
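These values can be checked numerically. A minimal sketch (the helper names are my own, for illustration); the two gains anticipate the A1/A2 comparison worked out on the next slide:

```python
import math

def entropy(pos, neg):
    """Entropy([pos+, neg-]) = -p+ log2 p+ - p- log2 p-."""
    total = pos + neg
    return sum(-c / total * math.log2(c / total) for c in (pos, neg) if c > 0)

def gain(parent, splits):
    """Gain(S,A) = Entropy(S) - sum over splits of |S_v|/|S| * Entropy(S_v)."""
    total = sum(parent)
    return entropy(*parent) - sum((p + n) / total * entropy(p, n) for p, n in splits)

print(round(entropy(29, 35), 2))                        # Entropy([29+,35-]) = 0.99
print(round(gain((29, 35), [(21, 5), (8, 30)]), 2))     # Gain(S,A1) ≈ 0.27
print(round(gain((29, 35), [(18, 33), (11, 2)]), 2))    # Gain(S,A2) ≈ 0.12
```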

Information Gain

Entropy([21+,5-]) = 0.71, Entropy([8+,30-]) = 0.74
Gain(S,A1) = Entropy(S) - (26/64)*Entropy([21+,5-]) - (38/64)*Entropy([8+,30-]) = 0.27

Entropy([18+,33-]) = 0.94, Entropy([11+,2-]) = 0.62
Gain(S,A2) = Entropy(S) - (51/64)*Entropy([18+,33-]) - (13/64)*Entropy([11+,2-]) = 0.12

Training Examples

Day  Outlook   Temp. Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No

Selecting the Next Attribute

S = [9+,5-], E = 0.940

Humidity: High -> [3+,4-] (E=0.985), Normal -> [6+,1-] (E=0.592)
Gain(S,Humidity) = 0.940 - (7/14)*0.985 - (7/14)*0.592 = 0.151

Wind: Weak -> [6+,2-] (E=0.811), Strong -> [3+,3-] (E=1.0)
Gain(S,Wind) = 0.940 - (8/14)*0.811 - (6/14)*1.0 = 0.048

Outlook: Sunny -> [2+,3-] (E=0.971), Overcast -> [4+,0-] (E=0.0), Rain -> [3+,2-] (E=0.971)
Gain(S,Outlook) = 0.940 - (5/14)*0.971 - (4/14)*0.0 - (5/14)*0.971 = 0.247
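The gains above can be reproduced directly from the training table. A minimal sketch; the attribute-index mapping and helper names are my own, for illustration:

```python
import math
from collections import Counter

# The 14 training examples from the table above: (Outlook, Temp., Humidity, Wind, PlayTennis).
D = [("Sunny","Hot","High","Weak","No"), ("Sunny","Hot","High","Strong","No"),
     ("Overcast","Hot","High","Weak","Yes"), ("Rain","Mild","High","Weak","Yes"),
     ("Rain","Cool","Normal","Weak","Yes"), ("Rain","Cool","Normal","Strong","No"),
     ("Overcast","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
     ("Sunny","Cool","Normal","Weak","Yes"), ("Rain","Mild","Normal","Weak","Yes"),
     ("Sunny","Mild","Normal","Strong","Yes"), ("Overcast","Mild","High","Strong","Yes"),
     ("Overcast","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No")]
ATTRS = {"Outlook": 0, "Temp": 1, "Humidity": 2, "Wind": 3}

def entropy(examples):
    counts, n = Counter(e[-1] for e in examples), len(examples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain(examples, attribute):
    """Gain(S,A) = Entropy(S) - sum over values v of |S_v|/|S| * Entropy(S_v)."""
    i, n, subsets = ATTRS[attribute], len(examples), {}
    for e in examples:
        subsets.setdefault(e[i], []).append(e)
    return entropy(examples) - sum(len(s) / n * entropy(s) for s in subsets.values())

for a in ATTRS:
    print(a, round(gain(D, a), 3))
# Outlook 0.247, Temp 0.029, Humidity 0.152, Wind 0.048 -> Outlook is selected as the root
```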

ID3 Algorithm

S = [D1,...,D14], [9+,5-]; split on Outlook:
- Sunny: S_sunny = [D1,D2,D8,D9,D11], [2+,3-]
- Overcast: [D3,D7,D12,D13], [4+,0-] -> Yes
- Rain: [D4,D5,D6,D10,D14], [3+,2-]

Which attribute should be tested at the Sunny branch?
Gain(S_sunny, Humidity) = 0.970 - (3/5)*0.0 - (2/5)*0.0 = 0.970
Gain(S_sunny, Temp.) = 0.970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = 0.570
Gain(S_sunny, Wind) = 0.970 - (2/5)*1.0 - (3/5)*0.918 = 0.019

Humidity is chosen for the Sunny branch: High -> [D1,D2,D8] (No), Normal -> [D9,D11] (Yes). Likewise, Wind is chosen for the Rain branch: Strong -> [D6,D14] (No), Weak -> [D4,D5,D10] (Yes).

Hypothesis Space Search in ID3
(figure: ID3 searches the space of decision trees from the empty tree toward increasingly complex trees, guided by information gain)
- Hypothesis space is complete! Target function is surely in there
- Outputs a single hypothesis
- No backtracking on selected attributes (greedy search)
- Local minima (suboptimal splits)
- Statistically-based search choices
- Robust to noisy data
- Inductive bias (search bias): prefer shorter trees over longer ones; place high information gain attributes close to the root

Inductive Bias in ID3

H is the power set of instances X. Unbiased? Not quite:
- Preference for short trees, and for those with high information gain attributes near the root
- Bias is a preference for some hypotheses, rather than a restriction of the hypothesis space H
- Occam's razor: prefer the shortest (simplest) hypothesis that fits the data

Occam's Razor
Why prefer short hypotheses?
Argument in favor:
- There are fewer short hypotheses than long hypotheses
- A short hypothesis that fits the data is unlikely to be a coincidence
- A long hypothesis that fits the data might be a coincidence
Argument against:
- There are many ways to define small sets of hypotheses, e.g. all trees with a prime number of nodes that use attributes beginning with Z
- What is so special about small sets based on the size of the hypothesis?

Occam's Razor (grue/bleen example)
Definition A: green, blue. Hypothesis A: objects do not instantaneously change their color.
Definition B: grue, bleen. Hypothesis B: on 1.1.2000 0:00, objects that were grue turned instantaneously bleen and objects that were bleen turned instantaneously grue.

Overfitting
Consider the error of hypothesis h over
- the training data: error_train(h)
- the entire distribution D of the data: error_D(h)
Hypothesis h ∈ H overfits the training data if there is an alternative hypothesis h' ∈ H such that
error_train(h) < error_train(h') and error_D(h) > error_D(h')

Overfitting in Decision Trees / Overfitting in Function Approximation
(figure: error versus number of tree nodes; training-set error keeps decreasing while validation-set error reaches a minimum at the optimal number of nodes and then increases)

Generalization
Generalization means the ability of a learning algorithm to generalize from training examples to previously unseen examples and to predict their target value correctly. The training error is no indicator of how well the learner is able to predict future examples. Therefore, partition the data set into a training and a validation set. Train the learning algorithm on the training data only and use the validation set to estimate the accuracy/error of the learner on future unseen instances.

Cross-Validation
- Estimate the accuracy of a hypothesis induced by a supervised learning algorithm
- Predict the accuracy of a hypothesis over future unseen instances
- Select the optimal hypothesis from a given set of alternative hypotheses: pruning decision trees, model selection, feature selection, combining multiple classifiers (boosting)

Holdout Method
Partition the data set D = {(v_1,y_1),...,(v_n,y_n)} into a training set D_t and a validation (holdout) set D_h = D\D_t.
acc_h = (1/|D_h|) Σ_{(v_i,y_i) ∈ D_h} δ(I(D_t,v_i), y_i)
where I(D_t,v_i) is the output, for instance v_i, of the hypothesis induced by learner I trained on data D_t, and δ(i,j) = 1 if i=j and 0 otherwise.
Problems: makes insufficient use of the data; training and validation set are correlated.

Cross-Validation
k-fold cross-validation splits the data set D into k mutually exclusive subsets D_1,D_2,...,D_k (e.g. D_1, D_2, D_3, D_4 for k=4). Train and test the learning algorithm k times; each time it is trained on D\D_i and tested on D_i.
acc_cv = (1/n) Σ_{(v_i,y_i) ∈ D} δ(I(D\D_i,v_i), y_i)

Cross-Validation (continued)
- Uses all the data for training and testing
- Complete k-fold cross-validation splits the data set of size m in all (m choose m/k) possible ways (choosing m/k instances out of m)
- Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training (leave-one-out is equivalent to n-fold cross-validation)
- In stratified cross-validation, the folds are stratified so that they contain approximately the same proportion of labels as the original data set

Bootstrap
- Sample n instances uniformly from the data set with replacement
- The probability that any given instance is not chosen after n samples is (1-1/n)^n ≈ e^(-1) ≈ 0.368, so a bootstrap sample contains about 63.2% of the distinct instances
- The bootstrap sample is used for training; the remaining instances are used for testing
acc_boot = (1/b) Σ_{i=1..b} (0.632·acc_b_i + 0.368·acc_t_i)
where acc_b_i is the accuracy on the test data of the i-th bootstrap sample, acc_t_i is the accuracy estimate on the training set, and b is the number of bootstrap samples.
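The k-fold estimate acc_cv above can be written as a short routine. A minimal sketch, assuming generic train/predict callables standing in for the inducer I; the function names are illustrative:

```python
import random

def k_fold_cross_validation(data, k, train, predict, seed=0):
    """Estimate acc_cv by k-fold cross-validation.

    data    : list of (attributes, label) pairs
    train   : callable taking a list of examples and returning a hypothesis
    predict : callable taking (hypothesis, attributes) and returning a label
    """
    examples = list(data)
    random.Random(seed).shuffle(examples)
    # Split D into k mutually exclusive subsets D_1 ... D_k
    folds = [examples[i::k] for i in range(k)]
    correct = 0
    for i in range(k):
        held_out = folds[i]                                                  # D_i, for testing
        training = [e for j, f in enumerate(folds) if j != i for e in f]     # D \ D_i
        h = train(training)
        correct += sum(predict(h, x) == y for x, y in held_out)
    return correct / len(examples)   # (1/n) * sum of delta(I(D\D_i, v_i), y_i)
```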

Avoid Overfitting
How can we avoid overfitting?
- Stop growing when the data split is not statistically significant
- Grow the full tree, then post-prune
- Minimum description length (MDL): minimize size(tree) + size(misclassifications(tree))

Reduced-Error Pruning
Split the data into a training and a validation set.
Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (plus those below it)
2. Greedily remove the one that most improves validation set accuracy
Produces the smallest version of the most accurate subtree.

Rule Post-Pruning
1. Convert the tree to an equivalent set of rules
2. Prune each rule independently of the others
3. Sort the final rules into a desired sequence for use
Method used in C4.5.

Converting a Tree to Rules
R1: If (Outlook=Sunny) ∧ (Humidity=High) Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast) Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong) Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak) Then PlayTennis=Yes
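The reduced-error pruning loop above can be sketched as follows, reusing the nested-dict tree representation from the earlier sketches. This is a simplified illustration: pruned nodes are replaced by the validation-set majority label, whereas a fuller implementation would use the majority class of the training examples reaching each node; all names are my own:

```python
from copy import deepcopy
from collections import Counter

def classify(tree, instance):
    while isinstance(tree, dict):
        tree = tree["branches"][instance[tree["attribute"]]]
    return tree

def accuracy(tree, examples):
    return sum(classify(tree, x) == y for x, y in examples) / len(examples)

def internal_nodes(tree, path=()):
    """Yield the path (sequence of branch values) to every internal node."""
    if isinstance(tree, dict):
        yield path
        for value, child in tree["branches"].items():
            yield from internal_nodes(child, path + (value,))

def prune_at(tree, path, leaf):
    """Return a copy of tree with the node at `path` replaced by `leaf`."""
    if not path:
        return leaf
    tree = deepcopy(tree)
    node = tree
    for value in path[:-1]:
        node = node["branches"][value]
    node["branches"][path[-1]] = leaf
    return tree

def reduced_error_prune(tree, validation):
    """Greedily prune nodes as long as validation accuracy does not decrease."""
    default = Counter(y for _, y in validation).most_common(1)[0][0]
    while True:
        best_acc, best_tree = accuracy(tree, validation), None
        for path in internal_nodes(tree):
            candidate = prune_at(tree, path, default)   # prune: subtree -> single leaf
            acc = accuracy(candidate, validation)
            if acc >= best_acc:                         # track the best non-harmful prune
                best_acc, best_tree = acc, candidate
        if best_tree is None:                           # further pruning would be harmful
            return tree
        tree = best_tree
```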

Continuous Valued Attributes
Create a discrete attribute to test the continuous one:
Temperature = 24.5°C
(Temperature > 20.0°C) ∈ {true, false}
Where to set the threshold? Sort the examples by temperature (15°C, 18°C, 19°C, 22°C, 24°C, 27°C) together with their PlayTennis labels, consider candidate thresholds midway between adjacent examples whose labels differ, and pick the one with the highest information gain.

Attributes with Many Values
Problem: if an attribute has many values, maximizing InformationGain will select it. E.g., using Date=12.7.1996 as an attribute perfectly splits the data into subsets of size 1.
Use GainRatio instead of information gain as the criterion:
GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)
SplitInformation(S,A) = -Σ_{i=1..c} (|S_i|/|S|) log2(|S_i|/|S|)
where S_i is the subset for which attribute A has the value v_i (see the paper by [Fayyad, Irani 1993]).

Attributes with Cost
Consider: medical diagnosis, where a blood test costs 1000 SEK; robotics, where width_from_one_feet has a cost of 23 secs.
How to learn a consistent tree with low expected cost? Replace Gain by:
Gain^2(S,A) / Cost(A)   [Tan, Schlimmer 1990]
(2^Gain(S,A) - 1) / (Cost(A)+1)^w,  w ∈ [0,1]   [Nunez 1988]

Unknown Attribute Values
What if some examples are missing values of A? Use the training example anyway and sort it through the tree. If node n tests A:
- Assign the most common value of A among the other examples sorted to node n
- Assign the most common value of A among the other examples with the same target value
- Assign probability p_i to each possible value v_i of A and assign fraction p_i of the example to each descendant in the tree
Classify new examples in the same fashion.
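The GainRatio criterion above is a small computation on top of the subset sizes. A minimal sketch; the helper names and the example numbers (reusing Gain(S,Outlook)=0.247 from the earlier slide) are for illustration:

```python
import math

def split_information(subset_sizes):
    """SplitInformation(S,A) = -sum |S_i|/|S| log2 |S_i|/|S| over the values of A."""
    total = sum(subset_sizes)
    return -sum((s / total) * math.log2(s / total) for s in subset_sizes if s > 0)

def gain_ratio(gain, subset_sizes):
    """GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)."""
    return gain / split_information(subset_sizes)

# Outlook splits the 14 PlayTennis examples into subsets of sizes 5, 4, 5:
print(round(split_information([5, 4, 5]), 3))    # ≈ 1.577
print(round(gain_ratio(0.247, [5, 4, 5]), 3))    # GainRatio(S,Outlook) ≈ 0.157
# A Date-like attribute with 14 singleton subsets has SplitInformation = log2(14) ≈ 3.81,
# so its GainRatio is heavily penalized even though its information gain is maximal.
print(round(split_information([1] * 14), 2))     # ≈ 3.81
```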

Wrapper Model
(figure: input features -> feature subset search -> feature subset evaluation, with the induction algorithm inside the evaluation loop)
Evaluate the accuracy of the inducer for a given subset of features by means of n-fold cross-validation: the training data is split into n folds, and the induction algorithm is run n times. The accuracy results are averaged to produce the estimated accuracy.
Forward selection: starts with the empty set of features and greedily adds the feature that improves the estimated accuracy the most.
Backward elimination: starts with the set of all features and greedily removes the worst feature.
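A minimal sketch of wrapper-based forward selection, assuming a cross_validated_accuracy(features, data) helper such as the k-fold routine sketched earlier (and assuming it handles the empty feature set, e.g. as a majority-class baseline); the names and the greedy loop are illustrative:

```python
def forward_selection(all_features, data, cross_validated_accuracy):
    """Greedily add the feature that most improves the cross-validated accuracy estimate."""
    selected = []
    best_acc = cross_validated_accuracy(selected, data)
    while True:
        candidates = [f for f in all_features if f not in selected]
        if not candidates:
            return selected
        # Evaluate each candidate subset by running the inducer via cross-validation
        scored = [(cross_validated_accuracy(selected + [f], data), f) for f in candidates]
        acc, best_feature = max(scored)
        if acc <= best_acc:              # no candidate improves the estimate -> stop
            return selected
        selected.append(best_feature)
        best_acc = acc
```

Backward elimination follows the same pattern, starting from all_features and removing the feature whose removal yields the best estimated accuracy.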