Decision Tree Learning


Sattiraju Prabhakar
CS898O: DTL, Wichita State University
2/6/2006, ML_S2006_DecisionTreeLearning

Topics
- What are decision trees? How do we use them?
- New learning task
- ID3 algorithm (Weka demo)
- C4.5 algorithm (Weka demo)
- Implementation

What are Decision Trees?
[Figure: elements of a decision tree. Each internal node tests an attribute, each branch carries an attribute value, and the leaves hold the Yes/No class labels.]

Example: PlayTennis
[Figure: example decision tree for the PlayTennis data.]

Example: Contact Lens data
[Figure: decision tree for the contact lens data.]

Example: Labour Negotiations
[Figure: panels (a) and (b), decision trees for the labour negotiations data.]

Exercise: Animal
Decision tree for a simple disjunction.

Name        | Vertebral Column | Make_Sound      | Legs | Take_Food         | Animal?
Man         | Yes              | Talking         | 2    | Cultivation       | Yes
Mango Tree  | No               | Branch Movement | 1    | Using_Chlorophyll | No
Lizard      | Yes              | Using Tongue    | 4    | Catches Flies     | Yes
Paramecium  | No               | No Sound        | 0    | Absorbs Cells     | Yes
Flytrap     | No               | No Sound        | 0    | Catches Flies     | No

Structure of a Decision Tree
Structure: a decision tree is a disjunction of conjunctions over the attribute values of instances.
- Each path from the tree root to a leaf corresponds to a conjunction of attribute tests.
- The tree itself is a disjunction of these conjunctions.
Example: the PlayTennis decision tree corresponds to
(Outlook = Sunny ∧ Humidity = Normal) ∨ (Outlook = Overcast) ∨ (Outlook = Rain ∧ Wind = Weak)

New Learning Task

Learner + Performance Element
[Figure: training examples feed the Learner, which outputs the learnt model (a decision tree); the Performance Element applies that model to test examples and outputs a Yes/No decision.]

Using the Decision Tree by the Performance Element
[Figure: a new situation or example is given to the decision tree classifier, which returns a Yes/No decision.]

Characterizing the Learning Task
DTL is appropriate when:
- Instances can be represented by attribute-value pairs.
- The target function has discrete values.
- The target function can be represented as a disjunctive expression.

Strengths of DTL
- The training data may contain errors.
- Instances can still be used even when some attribute values are missing from their descriptions.
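The slides do not prescribe a data structure for the learnt model. As a minimal sketch, the decision tree handed to the performance element can be a nested dict: each internal node maps an attribute name to a dict of value-to-subtree entries, and each leaf is the Yes/No class label. The tree below is the PlayTennis tree from the slides.

```python
# Learnt model: internal node = {attribute: {value: subtree}}, leaf = class label.
play_tennis_tree = {
    "Outlook": {
        "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
    }
}

def classify(tree, example):
    """Performance element: walk the tree until a class assignment is reached."""
    while isinstance(tree, dict):
        attr = next(iter(tree))          # attribute tested at this node
        tree = tree[attr][example[attr]] # follow the branch for the example's value
    return tree

print(classify(play_tennis_tree, {"Outlook": "Sunny", "Humidity": "Normal"}))  # Yes
print(classify(play_tennis_tree, {"Outlook": "Rain", "Wind": "Strong"}))       # No
```

Only the attributes on the followed path are ever consulted, so the example need not specify every attribute.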

Decision Tree Learning Algorithm
Topics:
- Different ways of partitioning the instance space
- Entropy and Information Gain
- ID3 algorithm

Different ways of partitioning the instance space
[Figure: panels (a)-(d).]

Decision tree stumps for the weather data
[Figure: panels (a)-(d).]

Expanded tree stumps for the weather data
[Figure: panels (a)-(c).]

Decision tree for the weather data
[Figure.]

Operation of a covering algorithm; decision tree for the same problem
[Figure: panels (a) and (b), an instance space of a's and b's partitioned by tests on x and y.]

Example of building a partial tree
[Figure: panels (a)-(c).]

Decision Tree Learning Algorithm
Topics:
- Different ways of partitioning the instance space
- Attribute selection: Entropy and Information Gain
- ID3 algorithm

Attribute Selection
To build a decision tree, we need to select an attribute. Selecting different attributes at different points leads to different decision trees. Two measures are important for the selection of attributes:
- Entropy
- Information Gain

Entropy Definition (1)
Definition: entropy is a measure of the (im)purity of a collection of examples.
Formalization: let S be a collection of instances, and let a target concept T divide S into positive and negative examples. Then the entropy of S with respect to T is:

Entropy(S, T) = -p⊕ log2 p⊕ - p⊖ log2 p⊖

Entropy Definition (2)
In the entropy formula:
- p⊕ is the proportion of positive examples in S
- p⊖ is the proportion of negative examples in S
- We define 0 log2 0 to be 0.

Example of Entropy
S is a set of 14 examples: 9 positive, 5 negative.
Case 1, pure samples:
Entropy([14+, 0-]) = -1 log2 1 - 0 log2 0 = 0
Entropy([0+, 14-]) = -0 log2 0 - 1 log2 1 = 0
Case 2, most impure samples:
Entropy([7+, 7-]) = -(7/14) log2 (7/14) - (7/14) log2 (7/14) = 1
Case 3, other samples:
Entropy([9+, 5-]) = -(9/14) log2 (9/14) - (5/14) log2 (5/14) = 0.940
Conclusion: entropy is larger for larger impurity.

Entropy: Variation with Positive Example Ratio
[Figure: entropy as a function of the proportion of positive examples, 0 at p⊕ = 0 and p⊕ = 1, peaking at 1 when p⊕ = 0.5.]

Interpretation of Entropy (1)
Interpretation: given an instance from S, entropy measures the (additional) information needed to tell the classification of that instance.
- If p⊕ is 1: choose an example arbitrarily. No additional information is needed to classify it; it can be classified perfectly. The entropy of S is 0.
- If p⊕ is 0.5: S has equal numbers of positive and negative examples. For an example picked arbitrarily from S, we cannot say for sure whether it is positive or negative; to be sure, we would need S to contain only positive or only negative examples.

Interpretation of Entropy (2)
- If p⊕ is 0.8: we may not be able to classify an instance correctly, but we know it is more likely to be positive than negative. We need less (additional) information to show that the instance is positive, and more (additional) information to show that it is negative.
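The three cases above can be checked with a few lines of Python. This is a sketch; the 0 log2 0 := 0 convention is handled by skipping zero counts.

```python
import math

def entropy(pos, neg):
    """Entropy(S) = -p log2 p - p_neg log2 p_neg, with 0 log2 0 defined as 0."""
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:                      # skip zero counts: 0 log2 0 := 0
            p = count / total
            e -= p * math.log2(p)
    return e

print(entropy(14, 0))            # pure samples: 0.0
print(entropy(7, 7))             # most impure samples: 1.0
print(round(entropy(9, 5), 3))   # the [9+, 5-] case: 0.94
```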

Generalizing Entropy

Entropy(S) = - Σ (i = 1 to c) p_i log2 p_i

where c is the number of values the attribute can take.

Criterion for Attribute Selection
[Figure.]

Information as a measure of purity of a subset
Information represents the expected amount of information needed to classify a new instance as yes or no. Examples (class counts per branch):
Info([2,3]) = 0.971 bits
Info([4,0]) = 0.0 bits
Info([3,2]) = 0.971 bits

Expected Amount of Information
The expected amount of information required to classify a new instance correctly at this node is the average over all branches:
Info([2,3], [4,0], [3,2]) = (5/14) x Info([2,3]) + (4/14) x Info([4,0]) + (5/14) x Info([3,2])
                          = (5/14) x 0.971 + (4/14) x 0 + (5/14) x 0.971 = 0.693 bits
This is the amount of information required to specify the class of a new instance.
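The weighted average above can be verified directly. A sketch, with the branch class counts taken from the slide:

```python
import math

def info(counts):
    """Information (entropy) of a node with the given class counts, in bits."""
    total = sum(counts)
    return 0.0 - sum(c / total * math.log2(c / total) for c in counts if c)

branches = [[2, 3], [4, 0], [3, 2]]        # class counts down each branch
n = sum(sum(b) for b in branches)          # 14 instances in total
expected = sum(sum(b) / n * info(b) for b in branches)

print(round(info([2, 3]), 3))   # 0.971
print(expected)                 # ≈ 0.693 bits
```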

Gain as Expected Reduction in Entropy
Gain is the expected reduction in entropy caused by partitioning the examples according to an attribute:

G(S, A) = Entropy(S) - Σ (v ∈ Values(A)) (|S_v| / |S|) Entropy(S_v)

Attribute Selection based on Gain
The larger the gain, the better the attribute is for classification. In the following example, Humidity is a better classifier than Wind. Reason: a larger gain implies a greater reduction in entropy as one goes down the tree, and smaller entropy means less information is required to correctly classify an instance.

Which attribute is best for the classifier?
[Figure: comparison of candidate attributes.]

Decision Tree Learning Algorithm
Topics:
- Different ways of partitioning the instance space
- Entropy and Information Gain
- ID3 algorithm
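G(S, A) can be computed directly from this definition. A minimal sketch on a hypothetical four-example set (the attribute names and values are chosen here for illustration): an attribute that separates the classes perfectly gets the larger gain.

```python
import math

def entropy(labels):
    """Entropy of a list of class labels, in bits."""
    n = len(labels)
    return 0.0 - sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                     for c in set(labels))

def gain(examples, attr):
    """G(S, A) = Entropy(S) - sum over v in Values(A) of |S_v|/|S| * Entropy(S_v)."""
    g = entropy([y for _, y in examples])
    for v in set(x[attr] for x, _ in examples):
        sv = [y for x, y in examples if x[attr] == v]   # labels of S_v
        g -= len(sv) / len(examples) * entropy(sv)
    return g

# Hypothetical toy data: "Humidity" separates the classes perfectly, "Wind" not at all.
S = [({"Humidity": "High",   "Wind": "Weak"},   "No"),
     ({"Humidity": "High",   "Wind": "Strong"}, "No"),
     ({"Humidity": "Normal", "Wind": "Weak"},   "Yes"),
     ({"Humidity": "Normal", "Wind": "Strong"}, "Yes")]

print(gain(S, "Humidity"))  # 1.0
print(gain(S, "Wind"))      # 0.0
```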

The Problem in Learning a Decision Tree
- Find the simplest hypothesis.
- Derive a decision tree that classifies all examples correctly.
- Describe a large number of examples in a concise way.

Ockham's Razor: the most likely hypothesis is the simplest one that is consistent with all observations (examples). Simple hypotheses have simple structure: a decision tree with the smallest number of levels. The simplest hypothesis has the smallest decision tree. In general, finding the smallest decision tree is an intractable problem, so we use heuristics: test the most important attribute first, where the most important attribute is the one that best separates the examples.

Algorithm: Behavior
Step 1: the root node is created. Which attribute should be tested first in the tree? ID3 determines the information gain for each candidate attribute and selects the one with the highest information gain:
Gain(S, Outlook) = 0.246
Gain(S, Humidity) = 0.151
Gain(S, Wind) = 0.048
Gain(S, Temperature) = 0.029

Partially Learnt Decision Tree
[Figure: tree with Outlook at the root; the Sunny, Overcast, and Rain branches remain to be expanded.]

ID3 Functional Algorithm

ID3(Examples, Target_attribute, Attributes):
  Create a Root node for the tree.
  If all Examples are positive, return the single-node tree Root with label = +.
  If all Examples are negative, return the single-node tree Root with label = -.
  If Attributes is empty, return the single-node tree Root with label = the most
    common value of Target_attribute in Examples.
  Otherwise:
    A <- Best_Attribute(Attributes, Examples)   (highest information gain)
    Root <- A
    For each value v_i of A:
      Add a new tree branch below Root for the test A = v_i.
      Let Examples_vi be the subset of Examples having value v_i for A.
      If Examples_vi is empty:
        Add a leaf node with label = the most common value of Target_attribute in Examples.
      Else:
        Add the subtree ID3(Examples_vi, Target_attribute, Attributes - {A}).
  Return Root.

Computing Information Gain
[Figure.]
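The pseudocode above translates almost line for line into Python. A minimal sketch: the training set is the standard 14-example PlayTennis table (Mitchell, Table 3.2), and the dict-of-dicts tree representation is an assumption made here for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels, in bits."""
    n = len(labels)
    return 0.0 - sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(examples, attr, target):
    """G(S, A): expected reduction in entropy from splitting on attr."""
    g = entropy([e[target] for e in examples])
    for v in set(e[attr] for e in examples):
        sv = [e[target] for e in examples if e[attr] == v]
        g -= len(sv) / len(examples) * entropy(sv)
    return g

def id3(examples, attributes, target):
    """Recursive ID3: returns a class label or {attribute: {value: subtree}}."""
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:                 # all positive or all negative
        return labels[0]
    if not attributes:                        # no attributes left: majority label
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: info_gain(examples, a, target))
    tree = {best: {}}
    for v in set(e[best] for e in examples):  # one branch per observed value
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = id3(subset, [a for a in attributes if a != best], target)
    return tree

# The standard 14-example PlayTennis training set (Mitchell, Table 3.2).
ATTRS = ["Outlook", "Temperature", "Humidity", "Wind"]
ROWS = [
    ("Sunny", "Hot", "High", "Weak", "No"),          ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),      ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),       ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),      ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),    ("Rain", "Mild", "High", "Strong", "No"),
]
data = [dict(zip(ATTRS + ["Play"], row)) for row in ROWS]

print(info_gain(data, "Outlook", "Play"))  # highest gain, ~0.246
tree = id3(data, ATTRS, "Play")
print(tree)  # Outlook at the root, as on the slides
```

Running this reproduces the tree from the slides: Outlook at the root, the Sunny branch split on Humidity, and the Rain branch split on Wind.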

Continuing to Split
[Figure: the Sunny and Rain branches are expanded further.]

Final Decision Tree
[Figure: the complete PlayTennis decision tree.]

Analysis Points on Information Gain
- A single example cannot provide information on information gain.
- Information gain is a mapping from a distribution of examples to a property of classified groups: homogeneity.
- Information gain is a function whose domain is the distribution of examples and whose range is the homogeneity property.

A Measure for Homogeneity: Ratio
A simple measure is the ratio of positive examples: maximum = 1, minimum = 0, most impure = 0.5.
[Figure: homogeneity as a function of the ratio of positive examples.]
Disadvantages:
1. It does not tell us directly about the examples.
2. It does not give us the impurity as information.

Information Gain in Information Theory
[Figure: a sender transmits information about the examples to a receiver.]

Information Examples
[Figure: a set of only positive examples has entropy 0 and needs no code; a set of only negative examples likewise has entropy 0; an evenly mixed set has entropy 1, the maximum information (1 bit) needed to represent the space.]

Colored Balls Example

Entropy(S) = -p_blue log2 p_blue - p_red log2 p_red - p_green log2 p_green - p_yellow log2 p_yellow

The maximum value of entropy is log2 c, where c is the number of classes.
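A quick check of the multi-class formula, as a sketch: with c equally likely classes the entropy reaches its maximum log2 c, and with a single class it drops to 0.

```python
import math

def entropy(probs):
    """Multi-class entropy from a list of class proportions, in bits."""
    return 0.0 - sum(p * math.log2(p) for p in probs if p)

# Four colors, equally likely: entropy is at its maximum, log2(4) = 2 bits.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# One color only: entropy is 0, no code is required.
print(entropy([1.0, 0, 0, 0]))            # 0.0
```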

Explanation
- Values(A) = the set of all values of A
- S_v = the subset of S in which attribute A has value v
- Entropy(S) = the entropy of the original collection
- The second term = the expected value of the entropy after S is partitioned using attribute A

Example
A = Wind, Values(A) = {Weak, Strong}, S_Weak = [6+, 2-], S_Strong = [3+, 3-].
To compute Gain(S, Wind):
Entropy(S) = 0.940
Gain(S, Wind) = 0.940 - (8/14) Entropy(S_Weak) - (6/14) Entropy(S_Strong) = 0.048

DTL as Search
Main issue: select the best attribute to classify the examples; each possible attribute points to a search state.
Search:
- Top-down: start with a concept that represents all examples.
- Greedy: select the attribute that classifies the maximum number of examples.
- The algorithm never backtracks to reconsider earlier choices.
Systems: ID3 and C4.5.
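The Wind arithmetic can be checked directly. A sketch, where H(p) is the binary entropy of the positive-example ratio p:

```python
import math

def H(p):
    """Binary entropy of the positive-example ratio p, in bits."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

e_S = H(9 / 14)       # Entropy(S) with [9+, 5-]          -> 0.940
e_weak = H(6 / 8)     # Entropy(S_Weak) with [6+, 2-]     -> 0.811
e_strong = H(3 / 6)   # Entropy(S_Strong) with [3+, 3-]   -> 1.0

gain_wind = e_S - (8 / 14) * e_weak - (6 / 14) * e_strong
print(round(gain_wind, 3))  # 0.048
```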

Algorithm: Functional Modules
- If positive and negative examples are present: choose the best attribute to split them.
- If all remaining examples are positive (or negative): say "yes" (or "no").
- If no examples are left: return a default value (the majority classification).
- If no attributes are left but there are unclassified examples: there is noise in the data.

Decision Tree Learning Algorithm: Exercise
See figure R18.7. Example: see figure R18.8. Learn the restaurant concept from the examples; try various best attributes.