Decision Tree Learning (Mitchell, Chapter 3). CptS 570 Machine Learning, School of EECS, Washington State University


Outline: decision tree representation, the ID3 learning algorithm, entropy and information gain, overfitting, and enhancements.

Decision Tree for PlayTennis

Decision Trees. Decision tree representation: each internal node tests an attribute, each branch corresponds to an attribute value, and each leaf node assigns a classification. How would we represent: ∧, ∨, XOR, (A ∧ B) ∨ (C ∧ ¬D ∧ E), M of N?

When to Consider Decision Trees: instances describable by attribute-value pairs, target function is discrete valued, disjunctive hypotheses may be required, possibly noisy training data. Examples: equipment or medical diagnosis, credit risk analysis, modeling calendar scheduling preferences.

Top-Down Induction of Decision Trees. Main loop (ID3, Table 3.1): A ← the best decision attribute for the next node; assign A as the decision attribute for node; for each value of A, create a new descendant of node; sort the training examples to the leaf nodes; if the training examples are perfectly classified then STOP, else iterate over the new leaf nodes.
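A minimal sketch of this main loop in Python, assuming each training example is a dict of attribute values plus a "label" key and that a gain(examples, attribute) scoring function is supplied (the information-gain sketch later in these notes would do); the names id3 and best are illustrative, not from the slides.

    from collections import Counter

    def id3(examples, attributes, gain):
        # Sketch of the ID3 main loop (Mitchell, Table 3.1).
        labels = [ex["label"] for ex in examples]
        if len(set(labels)) == 1:          # training examples perfectly classified: STOP
            return labels[0]
        if not attributes:                 # no attributes left: fall back to the majority label
            return Counter(labels).most_common(1)[0][0]
        # A <- the best decision attribute for the next node
        best = max(attributes, key=lambda a: gain(examples, a))
        tree = {best: {}}
        # For each value of A, create a new descendant and sort the examples to it
        for value in set(ex[best] for ex in examples):
            subset = [ex for ex in examples if ex[best] == value]
            remaining = [a for a in attributes if a != best]
            tree[best][value] = id3(subset, remaining, gain)
        return tree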

Which Attribute is Best?

Entropy. S is a sample of training examples, p⊕ is the proportion of positive examples in S, and p⊖ is the proportion of negative examples in S. Entropy measures the impurity of S: Entropy(S) ≡ −p⊕ log2 p⊕ − p⊖ log2 p⊖.

Entropy. Entropy(S) = expected number of bits needed to encode the class (⊕ or ⊖) of a randomly drawn member of S (under the optimal, shortest-length code). Why? Information theory: an optimal-length code assigns (−log2 p) bits to a message having probability p. So the expected number of bits to encode ⊕ or ⊖ of a random member of S is p⊕(−log2 p⊕) + p⊖(−log2 p⊖), i.e. Entropy(S) ≡ −p⊕ log2 p⊕ − p⊖ log2 p⊖.
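A small Python sketch of this computation; representing S as a plain list of class labels and the name entropy are choices made for this example.

    import math

    def entropy(labels):
        # Entropy(S) = -p+ log2 p+ - p- log2 p- (generalizes to more than two classes)
        total = len(labels)
        result = 0.0
        for cls in set(labels):
            p = sum(1 for label in labels if label == cls) / total
            result -= p * math.log2(p)
        return result

    # 9 positive and 5 negative examples (the PlayTennis data) give
    # Entropy(S) = -(9/14) log2(9/14) - (5/14) log2(5/14), approximately 0.940
    print(entropy(["Yes"] * 9 + ["No"] * 5))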

Information Gain. Gain(S, A) = expected reduction in entropy due to sorting on attribute A: Gain(S, A) ≡ Entropy(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) Entropy(S_v).
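A sketch of Gain(S, A) in the same style, reusing the entropy function above; storing each example as a dict keyed by attribute name is an assumption of the example, not of the slides.

    def information_gain(examples, attribute):
        # Gain(S, A) = Entropy(S) - sum over v of (|S_v| / |S|) * Entropy(S_v)
        labels = [ex["label"] for ex in examples]
        total = len(examples)
        remainder = 0.0
        for value in set(ex[attribute] for ex in examples):
            subset_labels = [ex["label"] for ex in examples if ex[attribute] == value]
            remainder += len(subset_labels) / total * entropy(subset_labels)
        return entropy(labels) - remainder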

Training Examples: PlayTennis

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No
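Encoding the table above as a list of dicts (a representation chosen for this sketch) lets the information_gain function from the previous sketch rank the four attributes; the gains quoted in the comment are the standard values from Mitchell, and Outlook's highest gain is why it becomes the root of the tree.

    playtennis = [
        dict(zip(["Outlook", "Temperature", "Humidity", "Wind", "label"], row))
        for row in [
            ("Sunny", "Hot", "High", "Weak", "No"),          ("Sunny", "Hot", "High", "Strong", "No"),
            ("Overcast", "Hot", "High", "Weak", "Yes"),      ("Rain", "Mild", "High", "Weak", "Yes"),
            ("Rain", "Cool", "Normal", "Weak", "Yes"),       ("Rain", "Cool", "Normal", "Strong", "No"),
            ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
            ("Sunny", "Cool", "Normal", "Weak", "Yes"),      ("Rain", "Mild", "Normal", "Weak", "Yes"),
            ("Sunny", "Mild", "Normal", "Strong", "Yes"),    ("Overcast", "Mild", "High", "Strong", "Yes"),
            ("Overcast", "Hot", "Normal", "Weak", "Yes"),    ("Rain", "Mild", "High", "Strong", "No"),
        ]
    ]

    for attribute in ["Outlook", "Temperature", "Humidity", "Wind"]:
        print(attribute, round(information_gain(playtennis, attribute), 3))
    # Approximately: Outlook 0.246, Humidity 0.151, Wind 0.048, Temperature 0.029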

Selecting the Next Attribute

Hypothesis Space Search by ID3

Hypothesis Space Search by ID3. The hypothesis space is complete: the target function is surely in there. Outputs a single hypothesis (which one?) rather than keeping all consistent hypotheses. No backtracking, so local minima are possible. Statistically-based search choices make it robust to noisy data. Inductive bias: prefer the shortest tree.

Inductive Bias in ID3. Note that H is the power set of instances X. Unbiased? Not really: there is a preference for short trees, with high-information-gain attributes near the root. The bias is a preference for some hypotheses, rather than a restriction on the hypothesis space H. Occam's razor: prefer the shortest hypothesis that fits the data.

Occam's Razor. Why prefer short hypotheses? Argument in favor: there are fewer short hypotheses than long hypotheses, so a short hypothesis that fits the data is unlikely to be a coincidence, while a long hypothesis that fits the data might be a coincidence. Argument opposed: there are many ways to define small sets of hypotheses (e.g., all trees with a prime number of nodes that use attributes beginning with Z), so what's so special about small sets based on the size of the hypothesis?

Overfitting in Decision Trees Consider adding noisy training example #15: (<Sunny, Hot, Normal, Strong>, PlayTennis = No) What effect on earlier tree?

Overfitting. Consider the error of hypothesis h over the training data, error_train(h), and over the entire distribution D of data, error_D(h). Hypothesis h ∈ H overfits the training data if there is an alternative hypothesis h′ ∈ H such that error_train(h) < error_train(h′) and error_D(h) > error_D(h′).

Overfitting in Decision Tree Learning

Avoiding Overfitting. How can we avoid overfitting? Either stop growing when a data split is not statistically significant, or grow the full tree and then post-prune. How to select the best tree: measure performance over the training data, measure performance over a separate validation data set, or use MDL: minimize size(tree) + size(misclassifications(tree)).

Reduced-Error Pruning. Split the data into a training and a validation set. Do until further pruning is harmful: evaluate the impact on the validation set of pruning each possible node (plus those below it), and greedily remove the one that most improves validation set accuracy. This produces the smallest version of the most accurate subtree. What if data is limited?
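A bottom-up sketch of this idea in Python, operating on the nested-dict trees produced by the id3 sketch earlier; for simplicity it prunes a node whenever accuracy on the validation examples reaching that node does not drop, rather than greedily picking the single best prune over the whole tree as the slide describes, so treat it as an approximation with illustrative names.

    from collections import Counter

    def classify(tree, example, default="Yes"):
        # Walk the nested-dict tree; unseen attribute values fall back to a default label.
        while isinstance(tree, dict):
            attribute = next(iter(tree))
            tree = tree[attribute].get(example[attribute], default)
        return tree

    def accuracy(tree, examples):
        return sum(classify(tree, ex) == ex["label"] for ex in examples) / len(examples)

    def reduced_error_prune(tree, train, validation):
        if not isinstance(tree, dict) or not validation:
            return tree
        attribute = next(iter(tree))
        # Prune the subtrees first (bottom-up).
        for value in list(tree[attribute]):
            sub_train = [ex for ex in train if ex[attribute] == value]
            sub_val = [ex for ex in validation if ex[attribute] == value]
            if sub_train:
                tree[attribute][value] = reduced_error_prune(tree[attribute][value], sub_train, sub_val)
        # Try replacing this node with the majority training label reaching it.
        leaf = Counter(ex["label"] for ex in train).most_common(1)[0][0]
        if accuracy(leaf, validation) >= accuracy(tree, validation):
            return leaf
        return tree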

Effect of Reduced-Error Pruning

Rule Post-Pruning. Generate the decision tree, then convert the tree to an equivalent set of rules, prune each rule independently of the others, and sort the final rules into the desired sequence for use. Perhaps the most frequently used method (e.g., C4.5).

Converting a Tree to Rules. If (Outlook = Sunny) ∧ (Humidity = High) then PlayTennis = No. If (Outlook = Sunny) ∧ (Humidity = Normal) then PlayTennis = Yes.

Continuous-Valued Attributes. Create a discrete attribute to test continuous values, e.g., Temperature = 82.5 becomes (Temperature > 72.3) = true or false.

Temperature:  40  48  60  72  80  90
PlayTennis:   No  No  Yes Yes Yes No
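The usual way to pick such a threshold is to consider boundaries where the sorted class labels change and keep the candidate with the highest gain; a small sketch under that assumption, using the Temperature row above (candidate_thresholds is an illustrative name).

    def candidate_thresholds(values, labels):
        # Midpoints between adjacent sorted values whose class labels differ.
        pairs = sorted(zip(values, labels))
        return [
            (pairs[i][0] + pairs[i + 1][0]) / 2
            for i in range(len(pairs) - 1)
            if pairs[i][1] != pairs[i + 1][1]
        ]

    temperature = [40, 48, 60, 72, 80, 90]
    play = ["No", "No", "Yes", "Yes", "Yes", "No"]
    print(candidate_thresholds(temperature, play))   # [54.0, 85.0]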

Attributes with Many Values. Problem: if an attribute has many values, Gain will select it (imagine using Date = Jun_3_1996 as an attribute). One approach: use GainRatio instead: GainRatio(S, A) ≡ Gain(S, A) / SplitInformation(S, A), where SplitInformation(S, A) ≡ −Σ_{i=1}^{|Values(A)|} (|S_i| / |S|) log2 (|S_i| / |S|) and S_i is the subset of S for which A has value v_i.
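A sketch of both quantities in the same dict-of-examples style as the earlier gain code (information_gain from that sketch is reused); the function names are illustrative.

    import math
    from collections import Counter

    def split_information(examples, attribute):
        # SplitInformation(S, A) = -sum_i (|S_i| / |S|) log2 (|S_i| / |S|)
        total = len(examples)
        counts = Counter(ex[attribute] for ex in examples)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def gain_ratio(examples, attribute):
        # Penalizes attributes (like Date) that split S into many tiny subsets.
        si = split_information(examples, attribute)
        return information_gain(examples, attribute) / si if si > 0 else 0.0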

Attributes with Costs. Consider: in medical diagnosis, BloodTest has cost $150; in robotics, Width_from_1ft has cost 23 sec. How do we learn a consistent tree with low expected cost? One approach: replace gain. Tan and Schlimmer (1990): Gain^2(S, A) / Cost(A). Nunez (1988): (2^Gain(S, A) − 1) / (Cost(A) + 1)^w, where w ∈ [0, 1] determines the importance of cost.
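The two cost-sensitive selection measures written out in Python; the argument names and the default for w are illustrative choices, not from the slides.

    def tan_schlimmer(gain, cost):
        # Tan and Schlimmer (1990): Gain(S, A)^2 / Cost(A)
        return gain ** 2 / cost

    def nunez(gain, cost, w=0.5):
        # Nunez (1988): (2^Gain(S, A) - 1) / (Cost(A) + 1)^w, with w in [0, 1]
        return (2 ** gain - 1) / (cost + 1) ** w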

Unknown Attribute Values. What if some examples are missing values of A? Use the training example anyway and sort it through the tree. If node n tests A: assign the most common value of A among the other examples sorted to node n; or assign the most common value of A among the other examples with the same target value; or assign probability p_i to each possible value v_i of A and assign fraction p_i of the example to each descendant in the tree. Classify new examples in the same fashion.

Summary: Decision-Tree Learning. The most popular symbolic learning method for learning discrete-valued functions; uses an information-theoretic heuristic; handles noisy data; decision trees are completely expressive, but the search is biased towards simpler trees. Lineage: ID3 → C4.5 → C5.0 (www.rulequest.com); J48 (WEKA) is an implementation of C4.5.