Data Mining Classification: Basic Concepts and Techniques. Lecture Notes for Chapter 3. Introduction to Data Mining, 2nd Edition


Data Mining Classification: Basic Concepts and Techniques Lecture Notes for Chapter 3 by Tan, Steinbach, Karpatne, Kumar 1

Classification: Definition Given a collection of records (training set), each record is characterized by a tuple (x, y), where x is the attribute set and y is the class label. x: attribute, predictor, independent variable, input. y: class, response, dependent variable, output. Task: Learn a model that maps each attribute set x into one of the predefined class labels y. 2

Examples of Classification Task
Task: Categorizing email messages. Attribute set x: features extracted from the email message header and content. Class label y: spam or non-spam.
Task: Identifying tumor cells. Attribute set x: features extracted from MRI scans. Class label y: malignant or benign cells.
Task: Cataloging galaxies. Attribute set x: features extracted from telescope images. Class label y: elliptical, spiral, or irregular-shaped galaxies. 3

General Approach for Building Classification Model 4
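The figure for this slide is not reproduced in the transcription. As a rough sketch of the induction/deduction workflow it refers to (learn a model from a training set, then apply it to unseen records), here is a minimal example using scikit-learn's DecisionTreeClassifier; the tiny dataset is hypothetical, loosely modeled on the loan-default example used later in these slides.

```python
# Minimal sketch of the "build model, then apply model" workflow.
# All attribute values and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Attribute set x: [home_owner (0/1), married (0/1), annual_income_in_K]
X_train = [[1, 0, 125], [0, 1, 100], [0, 0, 70], [1, 1, 120], [0, 0, 95]]
y_train = ["No", "No", "No", "No", "Yes"]          # class label y: Defaulted?

model = DecisionTreeClassifier(criterion="gini")   # induction: learn the model
model.fit(X_train, y_train)

X_test = [[0, 1, 80]]                              # deduction: apply the model
print(model.predict(X_test))                       # e.g. ['No']
```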

Classification Techniques Base Classifiers Decision Tree based Methods Rule-based Methods Nearest-neighbor Neural Networks Deep Learning Naïve Bayes and Bayesian Belief Networks Support Vector Machines Ensemble Classifiers Boosting, Bagging, Random Forests 5

Example of a Decision Tree Training data: records with attributes Home Owner (categorical), Marital Status (categorical), Annual Income (continuous), and the class label Defaulted. Model: decision tree with splitting attributes Home Owner (Yes -> leaf NO; No -> MarSt), MarSt (Single, Divorced -> Income; Married -> leaf NO), Income (< 80K -> leaf NO; > 80K -> leaf YES). 6

Another Example of Decision Tree Same training data, different model: MarSt (Married -> leaf NO; Single, Divorced -> Home Owner), Home Owner (Yes -> leaf NO; No -> Income), Income (< 80K -> leaf NO; > 80K -> leaf YES). There could be more than one tree that fits the same data! 7

Apply Model to Test Data Start from the root of the tree and route the test record down the splits: Home Owner (Yes -> NO; No -> MarSt), MarSt (Married -> NO; Single, Divorced -> Income), Income (< 80K -> NO; > 80K -> YES). 8

Apply Model to Test Data The test record reaches the Married leaf: Assign Defaulted to No. 13
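The traversal that these slides animate can be written as a short hand-coded function. The tree is the one from the earlier example slide; the test record below is hypothetical, since the test-data table itself is not reproduced in this transcription, chosen to follow the Married path that ends in "Assign Defaulted to No".

```python
# Hand-coded version of the example decision tree, traversed for one record.
def classify(record):
    if record["home_owner"] == "Yes":
        return "No"                               # leaf: Defaulted = No
    if record["marital_status"] == "Married":
        return "No"                               # leaf: Defaulted = No
    return "No" if record["income"] < 80 else "Yes"   # income split (in K)

# Hypothetical test record: non-home-owner, married, income 80K.
test_record = {"home_owner": "No", "marital_status": "Married", "income": 80}
print(classify(test_record))                      # -> No ("Assign Defaulted to No")
```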

Decision Tree Classification Task (Figure: the general learning/application framework instantiated with a decision tree as the model.) 14

Decision Tree Induction Many algorithms: Hunt's Algorithm (one of the earliest), CART, ID3, C4.5, SLIQ, SPRINT. 15

General Structure of Hunt's Algorithm Let Dt be the set of training records that reach a node t. General procedure: If Dt contains records that belong to the same class yt, then t is a leaf node labeled as yt. If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset. 16
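A minimal recursive sketch of the general procedure above. `best_split` is a hypothetical helper that returns the chosen attribute test and the resulting partition (or None if no useful split exists); how that test is chosen is the subject of the following slides.

```python
# Skeleton of Hunt's algorithm as described above.
from collections import Counter

def hunt(records, labels, best_split):
    # If D_t contains only records of a single class y_t, t is a leaf labeled y_t.
    if len(set(labels)) == 1:
        return {"leaf": labels[0]}
    split = best_split(records, labels)
    # If no further split is possible (e.g. identical attribute values),
    # label the leaf with the majority class.
    if split is None:
        return {"leaf": Counter(labels).most_common(1)[0][0]}
    test, partition = split   # e.g. ("Home Owner", {outcome: (sub_records, sub_labels)})
    # Otherwise split D_t into smaller subsets and apply the procedure recursively.
    return {"test": test,
            "children": {outcome: hunt(sub_r, sub_l, best_split)
                         for outcome, (sub_r, sub_l) in partition.items()}}
```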

Hunt's Algorithm Slides 17-20 step through growing the tree on this training data (figures not reproduced). The pairs are class counts at the nodes: the root starts impure at (7,3); the first split produces (3,0) and (4,3); splitting the impure branch again produces (3,0) and (1,3); a final split yields (1,0) and (0,3), at which point every leaf is pure. 20

Design Issues of Decision Tree Induction How should training records be split? Method for specifying test condition depending on attribute types Measure for evaluating the goodness of a test condition How should the splitting procedure stop? Stop splitting if all the records belong to the same class or have identical attribute values Early termination 21

Methods for Expressing Test Conditions Depends on attribute types Binary Nominal Ordinal Continuous Depends on number of ways to split 2-way split Multi-way split 22

Test Condition for Nominal Attributes Multi-way split: Use as many partitions as distinct values. Binary split: Divides values into two subsets 23

Test Condition for Ordinal Attributes Multi-way split: Use as many partitions as distinct values. Binary split: Divides values into two subsets; the grouping should preserve the order property among attribute values. (One of the groupings shown in the figure violates the order property.) 24

Test Condition for Continuous Attributes 25

Splitting Based on Continuous Attributes Different ways of handling: Discretization to form an ordinal categorical attribute. Ranges can be found by equal interval bucketing, equal frequency bucketing (percentiles), or clustering. Static: discretize once at the beginning. Dynamic: repeat at each node. Binary decision: (A < v) or (A ≥ v). Considers all possible splits and finds the best cut; can be more compute intensive. 26
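A small sketch of the discretization option, assuming NumPy is available: a hypothetical continuous income attribute is turned into an ordinal one by equal-interval and by equal-frequency (percentile) bucketing.

```python
# Discretize a continuous attribute into ordinal buckets.
import numpy as np

income = np.array([125, 100, 70, 120, 95, 60, 220, 85, 75, 90])  # hypothetical values (in K)

# Equal-interval bucketing: 4 ranges of equal width between min and max.
equal_width_edges = np.linspace(income.min(), income.max(), num=5)
equal_width_bins = np.digitize(income, equal_width_edges[1:-1])

# Equal-frequency bucketing: edges at the 25th, 50th, and 75th percentiles.
equal_freq_edges = np.percentile(income, [25, 50, 75])
equal_freq_bins = np.digitize(income, equal_freq_edges)

print(equal_width_bins)   # ordinal bucket index (0-3) per record
print(equal_freq_bins)
```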

How to determine the Best Split Before Splitting: 10 records of class 0, 10 records of class 1 Which test condition is the best? 27

How to determine the Best Split Greedy approach: Nodes with purer class distribution are preferred Need a measure of node impurity: High degree of impurity Low degree of impurity 28

Measures of Node Impurity Gini index: $GINI(t) = 1 - \sum_j [p(j \mid t)]^2$. Entropy: $Entropy(t) = -\sum_j p(j \mid t) \log_2 p(j \mid t)$. Misclassification error: $Error(t) = 1 - \max_i P(i \mid t)$. 29
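Written out as plain functions of the class distribution p(j|t) at a node, the three measures look like this (a minimal sketch, not tied to any particular library):

```python
# The three impurity measures above, as functions of the list of class
# probabilities p(j|t) at a node t.
import math

def gini(p):
    return 1.0 - sum(pj ** 2 for pj in p)

def entropy(p):
    return -sum(pj * math.log2(pj) for pj in p if pj > 0)

def misclassification_error(p):
    return 1.0 - max(p)

print(gini([0.5, 0.5]), entropy([0.5, 0.5]), misclassification_error([0.5, 0.5]))
# -> 0.5 1.0 0.5  (maximum impurity for a 2-class node)
```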

Finding the Best Split 1. Compute the impurity measure (P) before splitting. 2. Compute the impurity measure (M) after splitting: compute the impurity measure of each child node; M is the weighted impurity of the children. 3. Choose the attribute test condition that produces the highest gain, Gain = P - M, or equivalently, the lowest impurity measure after splitting (M). 30

Finding the Best Split Before splitting: impurity P. Candidate split A? produces child nodes N1 and N2 with impurities M11 and M12, which combine into the weighted impurity M1; candidate split B? produces child nodes N3 and N4 with impurities M21 and M22, which combine into M2. Compare Gain = P - M1 vs P - M2. 31

Measure of Impurity: GINI Gini index for a given node t: $GINI(t) = 1 - \sum_j [p(j \mid t)]^2$, where $p(j \mid t)$ is the relative frequency of class j at node t. Maximum ($1 - 1/n_c$, with $n_c$ the number of classes) when records are equally distributed among all classes, implying least interesting information. Minimum (0.0) when all records belong to one class, implying most interesting information. 32

Measure of Impurity: GINI Gini index for a given node t: $GINI(t) = 1 - \sum_j [p(j \mid t)]^2$, where $p(j \mid t)$ is the relative frequency of class j at node t. For a 2-class problem with class probabilities $(p, 1-p)$: $GINI = 1 - p^2 - (1-p)^2 = 2p(1-p)$. Examples: C1 = 0, C2 = 6: Gini = 0.000. C1 = 1, C2 = 5: Gini = 0.278. C1 = 2, C2 = 4: Gini = 0.444. C1 = 3, C2 = 3: Gini = 0.500. 33

Computing Gini Index of a Single Node $GINI(t) = 1 - \sum_j [p(j \mid t)]^2$. C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1, Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0. C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6, Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278. C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6, Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444. 34

Computing Gini Index for a Collection of Nodes When a node p is split into k partitions (children): $GINI_{split} = \sum_{i=1}^{k} \frac{n_i}{n} GINI(i)$, where $n_i$ is the number of records at child i and n is the number of records at parent node p. Choose the attribute that minimizes the weighted average Gini index of the children. The Gini index is used in decision tree algorithms such as CART, SLIQ, and SPRINT. 35
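A short sketch of the weighted-average Gini of a split, using as an example the class counts (3, 0) and (4, 3) that appear again in the misclassification-error comparison later in these slides:

```python
# GINI_split = sum_i (n_i / n) * GINI(child i), computed from class-count tuples.
def gini_from_counts(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children_counts):
    n = sum(sum(child) for child in children_counts)
    return sum(sum(child) / n * gini_from_counts(child) for child in children_counts)

# Parent with 10 records split into children with (C1, C2) = (3, 0) and (4, 3).
print(round(gini_split([(3, 0), (4, 3)]), 3))
# -> ~0.343 (the slide later quotes 0.342, rounding Gini(N2) to 0.489 first)
```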

Continuous Attributes: Computing Gini Index... For efficient computation, for each attribute: sort the attribute values; linearly scan these values, each time updating the count matrix and computing the Gini index; choose the split position that has the least Gini index. (Table not reproduced: sorted values and candidate split positions with their class counts.) 36
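A sketch of the sort-then-scan procedure: sort the records by the attribute once, then sweep the candidate cut points (midpoints between consecutive distinct values) while updating the left/right class counts incrementally. The income values and labels below are hypothetical.

```python
# One sort plus one linear scan to find the best Gini split on a continuous attribute.
from collections import Counter

def best_gini_split(values, labels):
    def gini(counts, n):
        return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left, right = Counter(), Counter(label for _, label in pairs)
    best = (float("inf"), None)
    for i in range(n - 1):
        v, label = pairs[i]
        left[label] += 1          # move record i from the right side to the left side
        right[label] -= 1
        if pairs[i + 1][0] == v:  # no valid cut between equal attribute values
            continue
        cut = (v + pairs[i + 1][0]) / 2
        w = ((i + 1) / n) * gini(left, i + 1) + ((n - i - 1) / n) * gini(right, n - i - 1)
        if w < best[0]:
            best = (w, cut)
    return best                   # (weighted Gini, split threshold)

# Hypothetical annual incomes (in K) with Defaulted labels:
incomes = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
labels  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_gini_split(incomes, labels))   # -> (0.3, 97.5) for these values
```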

Measure of Impurity: Entropy Entropy at a given node t: $Entropy(t) = -\sum_j p(j \mid t) \log_2 p(j \mid t)$, where $p(j \mid t)$ is the relative frequency of class j at node t. Maximum ($\log_2 n_c$) when records are equally distributed among all classes, implying least information. Minimum (0.0) when all records belong to one class, implying most information. Entropy-based computations are quite similar to the GINI index computations. 37

Computing Entropy of a Single Node $Entropy(t) = -\sum_j p(j \mid t) \log_2 p(j \mid t)$. C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1, Entropy = -0 log 0 - 1 log 1 = 0 - 0 = 0. C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6, Entropy = -(1/6) log2(1/6) - (5/6) log2(5/6) = 0.65. C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6, Entropy = -(2/6) log2(2/6) - (4/6) log2(4/6) = 0.92. 38

Computing Information Gain After Splitting Information gain: $GAIN_{split} = Entropy(p) - \sum_{i=1}^{k} \frac{n_i}{n} Entropy(i)$, where parent node p is split into k partitions and $n_i$ is the number of records in partition i. Choose the split that achieves the most reduction (maximizes GAIN). Used in the ID3 and C4.5 decision tree algorithms. 39
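The same computation as a small self-contained function, evaluated on a parent with class counts (7, 3) split into children (3, 0) and (4, 3):

```python
# GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(child i).
import math

def entropy_from_counts(counts):
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def information_gain(parent_counts, children_counts):
    n = sum(parent_counts)
    weighted = sum(sum(child) / n * entropy_from_counts(child) for child in children_counts)
    return entropy_from_counts(parent_counts) - weighted

print(round(information_gain((7, 3), [(3, 0), (4, 3)]), 3))   # -> 0.192
```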

Gain Ratio Gain ratio: $GainRATIO_{split} = \frac{GAIN_{split}}{SplitINFO}$, where $SplitINFO = -\sum_{i=1}^{k} \frac{n_i}{n} \log_2 \frac{n_i}{n}$; parent node p is split into k partitions and $n_i$ is the number of records in partition i. Adjusts information gain by the entropy of the partitioning (SplitINFO); higher-entropy partitioning (a large number of small partitions) is penalized. Used in the C4.5 algorithm; designed to overcome the disadvantage of information gain. 40

Gain Ratio Gain ratio: $GainRATIO_{split} = \frac{GAIN_{split}}{SplitINFO}$, with $SplitINFO = -\sum_{i=1}^{k} \frac{n_i}{n} \log_2 \frac{n_i}{n}$; parent node p is split into k partitions and $n_i$ is the number of records in partition i. For the three example partitionings in the figure: SplitINFO = 1.52, SplitINFO = 0.72, SplitINFO = 0.97. 41
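A sketch of SplitINFO and the gain ratio. The partition sizes below are invented to show how the same gain is penalized more for a many-way split into small partitions than for a balanced 2-way split; they are not the partitionings behind the SplitINFO values quoted above.

```python
# SplitINFO = -sum_i (n_i / n) * log2(n_i / n); GainRATIO = GAIN_split / SplitINFO.
import math

def split_info(children_sizes):
    n = sum(children_sizes)
    return -sum((ni / n) * math.log2(ni / n) for ni in children_sizes if ni > 0)

def gain_ratio(gain, children_sizes):
    si = split_info(children_sizes)
    return gain / si if si > 0 else 0.0

print(round(split_info([5, 5]), 2))           # -> 1.0  (balanced 2-way split)
print(round(split_info([4, 2, 2, 2]), 2))     # -> 1.92 (many small partitions)
# The same hypothetical gain of 0.3 yields a lower gain ratio for the 4-way split:
print(round(gain_ratio(0.3, [5, 5]), 2), round(gain_ratio(0.3, [4, 2, 2, 2]), 2))
```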

Measure of Impurity: Classification Error Classification error at a node t: $Error(t) = 1 - \max_i P(i \mid t)$. Maximum ($1 - 1/n_c$) when records are equally distributed among all classes, implying least interesting information. Minimum (0) when all records belong to one class, implying most interesting information. 42

Computing Error of a Single Node $Error(t) = 1 - \max_i P(i \mid t)$. C1 = 0, C2 = 6: P(C1) = 0, P(C2) = 1, Error = 1 - max(0, 1) = 1 - 1 = 0. C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6, Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6. C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6, Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3. 43

Comparison among Impurity Measures For a 2-class problem: (Figure not reproduced: entropy, Gini index, and misclassification error plotted as functions of the class probability p.) 44

Misclassification Error vs Gini Index Parent: C1 = 7, C2 = 3, Gini = 0.42. Split A? produces node N1 (C1 = 3, C2 = 0) and node N2 (C1 = 4, C2 = 3). $Gini(N1) = 1 - (3/3)^2 - (0/3)^2 = 0$. $Gini(N2) = 1 - (4/7)^2 - (3/7)^2 = 0.489$. $Gini(Children) = 3/10 \times 0 + 7/10 \times 0.489 = 0.342$. Gini improves but error remains the same!! 45

Misclassification Error vs Gini Index Parent: C1 = 7, C2 = 3, Gini = 0.42; split A? into nodes N1 and N2 as on the previous slide. Misclassification error for all three cases = 0.3! 46
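A quick numeric check of the claim on these two slides, using the class counts shown: parent (7, 3), children (3, 0) and (4, 3). (The figure's other candidate splits are not reproduced in this transcription.)

```python
# For parent (7, 3) split into N1 = (3, 0) and N2 = (4, 3):
# the weighted Gini drops, but the weighted misclassification error does not.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def error(counts):
    return 1.0 - max(counts) / sum(counts)

parent, children = (7, 3), [(3, 0), (4, 3)]
n = sum(parent)

def weighted(measure):
    return sum(sum(child) / n * measure(child) for child in children)

print(round(gini(parent), 2), round(weighted(gini), 3))    # 0.42 -> 0.343: Gini improves
print(round(error(parent), 2), round(weighted(error), 2))  # 0.3  -> 0.3:   error unchanged
```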

Decision Tree Based Classification Advantages: Inexpensive to construct Extremely fast at classifying unknown records Easy to interpret for small-sized trees Robust to noise (especially when methods to avoid overfitting are employed) Can easily handle redundant or irrelevant attributes (unless the attributes are interacting) Disadvantages: Space of possible decision trees is exponentially large. Greedy approaches are often unable to find the best tree. Does not take into account interactions between attributes Each decision boundary involves only a single attribute 47

Handling interactions +: 1000 instances, o: 1000 instances. Entropy(X): 0.99, Entropy(Y): 0.99. (Figure not reproduced: scatter plot of the two classes over attributes X and Y.) 48

Handling interactions +: 1000 instances, o: 1000 instances. Entropy(X): 0.99, Entropy(Y): 0.99, Entropy(Z): 0.98. Adding Z as a noisy attribute generated from a uniform distribution: attribute Z will be chosen for splitting! (Figures not reproduced: scatter plots of the classes over X-Y and over Z versus X and Y.) 49
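A rough numeric illustration of this effect on synthetic data (the slide's exact data-generation setup is not reproduced here, so the magnitudes are only indicative): the class depends on X and Y jointly, so a median split on X or Y alone gives almost no information gain, while a weakly class-correlated attribute Z looks better to a greedy, single-attribute criterion.

```python
# Synthetic XOR-style data: the class is '+' when X and Y fall in the same half.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0, 20, n)
Y = rng.uniform(0, 20, n)
label = (X > 10) == (Y > 10)              # class depends on X and Y jointly
Z = rng.uniform(0, 20, n) + 3.0 * label   # hypothetical weakly class-correlated attribute

def entropy(y):
    p = np.bincount(y.astype(int), minlength=2) / len(y)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def median_split_gain(feature, y):
    t = np.median(feature)
    left, right = y[feature <= t], y[feature > t]
    return entropy(y) - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)

for name, f in [("X", X), ("Y", Y), ("Z", Z)]:
    print(name, round(median_split_gain(f, label), 4))
# X and Y give ~zero gain on their own; Z gives a small but larger gain,
# so a greedy, single-attribute split would pick Z first.
```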

Limitations of single attribute-based decision boundaries Both positive (+) and negative (o) classes generated from skewed Gaussians with centers at (8,8) and (12,12) respectively. 50