Typical Supervised Learning Problem Setting


Typical Supervised Learning Problem Setting

Given a set (database) of observations
Each observation is (x1, ..., xn, y): the Xi are input variables, Y is a particular output
Build a model to predict y = f(x1, ..., xn):
  First define a criterion to measure model quality
  Split the dataset into training and test sets
  Build the model using the training set
  Validate the model using the test set

A Database (Example)

ID     X1        X2        X3        X4     X5      X6     Y     f(x1,...,x6)
O5001   876.029  -193.660   -98.179  1.067  54.598  1.076  GOOD  GOOD
O5002  1110.880  -423.190  -119.300  1.119  58.228  1.112  GOOD  GOOD
O5003   980.132    79.722  -122.600  1.063  62.537  1.062  GOOD  GOOD
O5004   974.139   217.073  -100.520  1.015  64.428  1.010  GOOD  GOOD
O5005   927.198  -618.470  -100.000  1.020  42.557  1.017  GOOD  GOOD
O5006  1192.590   617.266  -103.460  1.073  51.230  1.065  GOOD  GOOD
O5007  1069.120     7.137  -109.460  1.016  66.446  1.010  BAD   BAD
O5008  1189.200   905.121   -87.433  1.056  66.052  1.071  GOOD  GOOD
O5009   999.084   685.442  -107.870  1.109  59.726  1.110  GOOD  GOOD
O5010  1241.880  -442.250  -105.680  1.078  58.917  1.079  GOOD  BAD
O5011   845.574   816.962  -106.180  1.103  57.922  1.098  GOOD  GOOD
O5012  1151.250    10.003   -86.426  1.108  64.069  1.113  GOOD  GOOD
O5013   963.600  -312.430   -96.890  0.988  61.517  0.986  BAD   BAD
O5014   721.150   155.468   -95.262  1.050  63.566  1.027  GOOD  GOOD
O5015  1135.190   320.912  -117.480  1.049  56.051  1.041  GOOD  GOOD
O5016   939.189   234.557  -109.450  1.100  49.913  1.110  BAD   GOOD
O5017   923.754   294.472  -100.300  1.106  74.579  1.120  GOOD  GOOD
O5018   886.942   446.574   -87.805  0.950  57.044  0.950  GOOD  GOOD
O5019  1201.850  -415.250  -108.290  1.116  52.578  1.123  BAD   BAD
O5020   909.856  -424.200   -94.270  1.086  53.906  1.089  GOOD  GOOD
O5021  1065.490   -21.762  -102.260  0.953  58.404  0.984  BAD   BAD
O5022  1007.220  -450.000   -91.618  1.055  56.688  1.047  GOOD  GOOD
O5023   859.116  -631.820   -99.983  1.010  59.397  1.012  GOOD  GOOD
O5024   846.327   -17.598  -104.480  1.006  72.391  1.012  GOOD  GOOD
O5025   786.674  -314.950  -110.000  1.056  63.269  1.062  GOOD  GOOD
O5026   759.662   761.961   -98.087  1.124  63.185  1.113  GOOD  GOOD

Main Steps

Select a subset of relevant input variables
Build a model using these variables:
  Generate a sequence of models
  Identify one (or several) as being good models
  Use only the training set
Validate the selected models:
  Quantitatively, using the test set
  Qualitatively, using expert knowledge

Main Classes of Methods

Supervised learning (= input/output models): decision/regression trees, neural networks
Unsupervised learning (= p(x1, ..., xn) models): Bayesian networks, clustering

Inductive Learning

Learning from examples
The general problem of inductive inference
Inductive bias
Examples

Training Examples for Concept EnjoySport

Concept: days on which my friend Aldo enjoys his favourite water sports
Task: predict the value of EnjoySport for an arbitrary day based on the values of the other attributes

Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport
Sunny  Warm     Normal    Strong  Warm   Same      Yes
Sunny  Warm     High      Strong  Warm   Same      Yes
Rainy  Cold     High      Strong  Warm   Change    No
Sunny  Warm     High      Strong  Cool   Change    Yes

Inductive Learning Hypothesis

Any hypothesis found to approximate the target function well over the training examples will also approximate the target function well over the unobserved examples.

Futility of Bias-Free Learning

A learner that makes no prior assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. No Free Lunch!

Decision Trees

Decision tree representation
ID3 learning algorithm
Entropy, information gain
Overfitting

What Is a Decision Tree?

Value of X1?
  Small -> Value of X2?
    < 0.34 -> Y is very big
    > 0.34 -> Y is small
  Medium or Large -> Y is big
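Read as code, this toy tree is just nested conditionals. A minimal sketch (the function name and string encodings are mine; the attribute tests come from the slide):

```python
def predict_y(x1: str, x2: float) -> str:
    """Classify Y with the toy tree above: first test X1, then X2."""
    if x1 == "Small":
        # the Small branch tests the second variable
        if x2 < 0.34:
            return "very big"
        return "small"
    # X1 is Medium or Large
    return "big"

print(predict_y("Small", 0.2))   # very big
print(predict_y("Large", 0.2))   # big
```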

Training Examples

Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No

Decision Tree for PlayTennis

Outlook?
  Sunny -> Humidity?
    High -> No
    Normal -> Yes
  Overcast -> Yes
  Rain -> Wind?
    Strong -> No
    Weak -> Yes

Decision Tree for PlayTennis

Each internal node tests an attribute
Each branch corresponds to an attribute value
Each leaf node assigns a classification

Outlook?
  Sunny -> Humidity?
    High -> No
    Normal -> Yes
  Overcast -> Yes
  Rain -> Wind?
    Strong -> No
    Weak -> Yes

Classifying an instance (Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Strong): follow the Sunny branch, test Humidity=High, and the leaf assigns PlayTennis=No.

Decision Tree for Conjunction: Outlook=Sunny ∧ Wind=Weak

Outlook?
  Sunny -> Wind?
    Weak -> Yes
    Strong -> No
  Overcast -> No
  Rain -> No

Decision Tree for Disjunction: Outlook=Sunny ∨ Wind=Weak

Outlook?
  Sunny -> Yes
  Overcast -> Wind?
    Weak -> Yes
    Strong -> No
  Rain -> Wind?
    Weak -> Yes
    Strong -> No

Decision Tree for XOR: Outlook=Sunny XOR Wind=Weak

Outlook?
  Sunny -> Wind?
    Weak -> No
    Strong -> Yes
  Overcast -> Wind?
    Weak -> Yes
    Strong -> No
  Rain -> Wind?
    Weak -> Yes
    Strong -> No

Decision Tree

Decision trees represent disjunctions of conjunctions:

Outlook?
  Sunny -> Humidity?
    High -> No
    Normal -> Yes
  Overcast -> Yes
  Rain -> Wind?
    Strong -> No
    Weak -> Yes

(Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak)

When to Consider Decision Trees

Instances describable by attribute-value pairs
Target function is discrete valued
Disjunctive hypothesis may be required
Possibly noisy training data
Missing attribute values
Examples: medical diagnosis, credit risk analysis, object classification for a robot manipulator (Tan 1993)

Growing and Pruning Pictorially

[Figure: data mis-fit versus tree complexity. Growing the tree reduces underfitting; past some complexity the tree overfits, and pruning brings it back toward the final tree.]

An Application in Bioinformatics

Genetics of complex traits
Database composed of observations on 1086 animals
Inputs: 20x2 genetic markers
Outputs: phenotypic measurements (numbers)
Goal: identify the location of the involved chromosomal regions
Results: unpruned and pruned trees

Another Application in Bioinformatics

Identification of protein origin
Database composed of the frequency of amino acids in different families
Inputs: 20 frequencies
Outputs: class of protein
Objective: identify the family of the protein

Yet Another Application in Bioinformatics

Identification of regulatory mechanisms between yeast genes
Data from microarray experiments [Spellman et al. (1998). Comprehensive Identification of Cell Cycle-regulated Genes of the Yeast Saccharomyces cerevisiae by Microarray Hybridization. Molecular Biology of the Cell 9, 3273-3297]
Want to predict which genes activate: CLN1, CLN2, CLN3, SW14

Decision tree for CLN1 activation

[Tree: splits on the expression of YPL256C (CLN2) at -0.375, YPL120C (CLB5) at -0.285 and YDR328C (SKP1) at 0.695; leaves predict CLN1 active / not active.]

Confusion matrix:
       -1      1
 -1   78%    22%
  1    0%   100%

Decision tree for CLN2 activation

[Tree: splits on the expression of CLB5 at 0, CLN3 at -0.455 and CDH1 at -0.475; leaves predict CLN2 active / not active.]

Confusion matrix:
        -1       1
 -1   66.6%   33.3%
  1     20%     80%

Decision tree for CLN3 activation

[Tree: splits on the expression of YGL003 (CDH1) at -0.14, SKP1 at 0.3 and CDC53 at 0.025; leaves predict CLN3 active / not active.]

Confusion matrix:
        -1       1
 -1   83.3%   16.6%
  1   14.2%   85.7%

Decision tree for SW14 activation

[Tree: splits on the expression of MBP1 at -0.005, MCM1 at -0.28, CLB1 at 1.24, SIC1 and CLN2 at 0.1 and 0.025; leaves predict SW14 active / not active.]

Confusion matrix:
       -1     1
 -1   70%    9%
  1   37%   90%

Top-Down Induction of Decision Trees (ID3)

1. A <- the best decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of the branch
5. If all training examples are perfectly classified (same value of the target attribute), stop; else iterate over the new leaf nodes
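The five ID3 steps above can be sketched as a short recursive procedure. A minimal Python sketch for discrete attributes, assuming examples are dicts and the tree is nested dicts (all names here are mine, not from the slides):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(examples, attr, target):
    """Expected entropy reduction from splitting on attr."""
    total = entropy([e[target] for e in examples])
    n = len(examples)
    for v in {e[attr] for e in examples}:
        subset = [e[target] for e in examples if e[attr] == v]
        total -= len(subset) / n * entropy(subset)
    return total

def id3(examples, attrs, target):
    """Grow a tree: leaves are labels, internal nodes are dicts."""
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:          # step 5: perfectly classified
        return labels[0]
    if not attrs:                      # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain(examples, a, target))  # step 1
    tree = {best: {}}                  # step 2
    for v in {e[best] for e in examples}:        # steps 3 and 4
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = id3(subset, [a for a in attrs if a != best], target)
    return tree
```

On the 14 PlayTennis examples, with attributes Outlook, Temp, Humidity and Wind, this reproduces the Outlook/Humidity/Wind tree shown earlier.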

Which Attribute Is Best?

A1 splits [29+,35-] into: True -> [21+,5-], False -> [8+,30-]
A2 splits [29+,35-] into: True -> [18+,33-], False -> [11+,2-]

Entropy

S is a sample of training examples
p+ is the proportion of positive examples
p- is the proportion of negative examples
Entropy measures the impurity of S:
Entropy(S) = -p+ log2 p+ - p- log2 p-

Entropy

Entropy(S) = expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code)
Why? Information theory: an optimal-length code assigns -log2 p bits to a message having probability p. So the expected number of bits to encode (+ or -) of a random member of S is:
-p+ log2 p+ - p- log2 p-

Information Gain

Gain(S,A): expected reduction in entropy due to sorting S on attribute A
Gain(S,A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v|/|S|) Entropy(S_v)
Entropy([29+,35-]) = -29/64 log2(29/64) - 35/64 log2(35/64) = 0.99
A1 splits [29+,35-] into: True -> [21+,5-], False -> [8+,30-]
A2 splits [29+,35-] into: True -> [18+,33-], False -> [11+,2-]

Information Gain

Entropy([21+,5-]) = 0.71
Entropy([8+,30-]) = 0.74
Gain(S,A1) = Entropy(S) - (26/64)*Entropy([21+,5-]) - (38/64)*Entropy([8+,30-]) = 0.27

Entropy([18+,33-]) = 0.94
Entropy([11+,2-]) = 0.62
Gain(S,A2) = Entropy(S) - (51/64)*Entropy([18+,33-]) - (13/64)*Entropy([11+,2-]) = 0.12

A1 gives the larger gain, so it is the better split.

Training Examples

Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No
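The arithmetic above is easy to check in a few lines; a quick sketch with a two-class entropy helper (function names are mine):

```python
from math import log2

def entropy(p, n):
    """Two-class entropy of a sample with p positives and n negatives."""
    total = p + n
    h = 0.0
    for c in (p, n):
        if c:  # 0 log 0 is taken as 0
            h -= (c / total) * log2(c / total)
    return h

def gain(parent, splits):
    """Entropy of parent minus the size-weighted entropy of the splits."""
    total = sum(p + n for p, n in splits)
    return entropy(*parent) - sum(
        (p + n) / total * entropy(p, n) for p, n in splits
    )

# A1 splits [29+,35-] into [21+,5-] and [8+,30-]
print(round(gain((29, 35), [(21, 5), (8, 30)]), 2))   # 0.27
# A2 splits [29+,35-] into [18+,33-] and [11+,2-]
print(round(gain((29, 35), [(18, 33), (11, 2)]), 2))  # 0.12
```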

Selecting the Next Attribute

S = [9+,5-], E = 0.940

Humidity: High -> [3+,4-] (E = 0.985), Normal -> [6+,1-] (E = 0.592)
Gain(S,Humidity) = 0.940 - (7/14)*0.985 - (7/14)*0.592 = 0.151

Wind: Weak -> [6+,2-] (E = 0.811), Strong -> [3+,3-] (E = 1.0)
Gain(S,Wind) = 0.940 - (8/14)*0.811 - (6/14)*1.0 = 0.048

Outlook: Sunny -> [2+,3-] (E = 0.971), Overcast -> [4+,0-] (E = 0.0), Rain -> [3+,2-] (E = 0.971)
Gain(S,Outlook) = 0.940 - (5/14)*0.971 - (4/14)*0.0 - (5/14)*0.971 = 0.247
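These numbers can be reproduced directly from the 14 training examples; a sketch that computes each attribute's gain from the table (the data layout and helper names are my choices):

```python
from math import log2

# (Outlook, Temp, Humidity, Wind, PlayTennis) for D1..D14
DATA = [
    ("Sunny", "Hot", "High", "Weak", "No"),
    ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),
    ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"),
    ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),
    ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),
    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),
    ("Rain", "Mild", "High", "Strong", "No"),
]
ATTRS = {"Outlook": 0, "Temp": 1, "Humidity": 2, "Wind": 3}

def entropy(rows):
    """Entropy of the PlayTennis labels in `rows`."""
    n = len(rows)
    h = 0.0
    for label in ("Yes", "No"):
        p = sum(r[-1] == label for r in rows) / n
        if p:
            h -= p * log2(p)
    return h

def gain(rows, attr):
    """Information gain of splitting `rows` on `attr`."""
    col = ATTRS[attr]
    g = entropy(rows)
    for v in {r[col] for r in rows}:
        subset = [r for r in rows if r[col] == v]
        g -= len(subset) / len(rows) * entropy(subset)
    return g

for a in ATTRS:
    print(a, round(gain(DATA, a), 3))
```

This prints gains of about 0.247 (Outlook), 0.029 (Temp), 0.152 (Humidity; 0.151 on the slide after rounding) and 0.048 (Wind), so Outlook is selected as the root.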

ID3 Algorithm

S = [D1,...,D14], [9+,5-]; Outlook gives the best gain, so it becomes the root:

Outlook?
  Sunny -> S_sunny = [D1,D2,D8,D9,D11], [2+,3-]: ?
  Overcast -> [D3,D7,D12,D13], [4+,0-]: Yes
  Rain -> [D4,D5,D6,D10,D14], [3+,2-]: ?

Gain(S_sunny, Humidity) = 0.970 - (3/5)*0.0 - (2/5)*0.0 = 0.970
Gain(S_sunny, Temp) = 0.970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = 0.570
Gain(S_sunny, Wind) = 0.970 - (2/5)*1.0 - (3/5)*0.918 = 0.019

The completed tree:

Outlook?
  Sunny -> Humidity?
    High -> No [D1,D2,D8]
    Normal -> Yes [D9,D11]
  Overcast -> Yes [D3,D7,D12,D13]
  Rain -> Wind?
    Strong -> No [D6,D14]
    Weak -> Yes [D4,D5,D10]

Hypothesis Space Search by ID3

[Figure: ID3 searches the space of decision trees by greedily growing a single tree, adding one attribute test (A1, A2, A3, A4, ...) at a time.]

Hypothesis space is complete! The target function is surely in there
Outputs a single hypothesis
No backtracking on selected attributes (greedy search): can get stuck in local minima (suboptimal splits)
Statistically-based search choices: robust to noisy data
Inductive bias (search bias): prefer shorter trees over longer ones; place high information gain attributes close to the root

Inductive Bias in ID3

H is the power set of instances X
Unbiased? No:
Preference for short trees, and for those with high information gain attributes near the root
Bias is a preference for some hypotheses, rather than a restriction of the hypothesis space H
Occam's razor: prefer the shortest (simplest) hypothesis that fits the data

Occam's Razor

Why prefer short hypotheses?
Argument in favor:
  There are fewer short hypotheses than long hypotheses
  A short hypothesis that fits the data is unlikely to be a coincidence
  A long hypothesis that fits the data might be a coincidence
Argument opposed:
  There are many ways to define small sets of hypotheses
  E.g. all trees with a prime number of nodes that use attributes beginning with Z
  What is so special about small sets based on the size of the hypothesis?

Overfitting in Decision Tree Learning

[Figure: accuracy on the training data keeps rising as the tree grows, while accuracy on independent test data peaks and then falls.]

Avoiding Overfitting

How can we avoid overfitting?
Stop growing when the data split is not statistically significant
Grow the full tree, then post-prune
Minimum description length (MDL): minimize size(tree) + size(misclassifications(tree))

Reduced-Error Pruning

Split data into training and validation sets
Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (plus those below it)
2. Greedily remove the one that most improves validation set accuracy
Produces the smallest version of the most accurate subtree

Effect of Reduced-Error Pruning

[Figure: accuracy versus tree size, before and after pruning.]
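The loop above can be sketched over a tiny tree encoding; a minimal sketch where an internal node is a tuple (attribute, branches, majority label) and a leaf is just a label string (this encoding and all names are mine, and the majority label is assumed to have been stored on each node during training):

```python
def predict(tree, x):
    """Walk the tree until a leaf (a plain label string) is reached."""
    while not isinstance(tree, str):
        attr, branches, majority = tree
        tree = branches.get(x[attr], majority)
    return tree

def accuracy(tree, data, target):
    """Fraction of examples in `data` the tree classifies correctly."""
    return sum(predict(tree, x) == x[target] for x in data) / len(data)

def internal_nodes(tree, path=()):
    """Yield the branch-value path to every internal node."""
    if not isinstance(tree, str):
        yield path
        _, branches, _ = tree
        for value, child in branches.items():
            yield from internal_nodes(child, path + (value,))

def prune_at(tree, path):
    """Copy of `tree` with the node at `path` replaced by its majority leaf."""
    if not path:
        return tree[2]
    attr, branches, majority = tree
    branches = dict(branches)
    branches[path[0]] = prune_at(branches[path[0]], path[1:])
    return (attr, branches, majority)

def reduced_error_prune(tree, validation, target):
    """Greedily prune while validation accuracy does not get worse."""
    while not isinstance(tree, str):
        base = accuracy(tree, validation, target)
        score, path = max((accuracy(prune_at(tree, p), validation, target), p)
                          for p in internal_nodes(tree))
        if score < base:        # every possible prune hurts: stop
            return tree
        tree = prune_at(tree, path)
    return tree
```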

Rule Post-Pruning

1. Convert the tree to an equivalent set of rules
2. Prune each rule independently of the others
3. Sort the final rules into a desired sequence for use
Method used in C4.5

Converting a Tree to Rules

Outlook?
  Sunny -> Humidity?
    High -> No
    Normal -> Yes
  Overcast -> Yes
  Rain -> Wind?
    Strong -> No
    Weak -> Yes

R1: If (Outlook=Sunny) ∧ (Humidity=High) Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast) Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong) Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak) Then PlayTennis=Yes

Continuous-Valued Attributes

Create a discrete attribute to test a continuous one:
Temperature = 24.5 °C
(Temperature > 20.0 °C) = {true, false}
Where to set the threshold?

Temperature  15 °C  18 °C  19 °C  22 °C  24 °C  27 °C
PlayTennis   No     No     Yes    Yes    Yes    No

(See the paper by [Fayyad, Irani 1993].)

Attributes with Many Values

Problem: if an attribute has many values, maximizing InformationGain will select it.
E.g. using Date=12.7.1996 as an attribute perfectly splits the data into subsets of size 1.
Use GainRatio instead of information gain as the criterion:
GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)
SplitInformation(S,A) = -Σ_{i=1..c} (|S_i|/|S|) log2(|S_i|/|S|)
where S_i is the subset of S for which attribute A has the value v_i
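SplitInformation and GainRatio are a few lines each; a sketch showing how a Date-like attribute is penalized (the 5/4/5 split is Outlook on the 14 PlayTennis examples; function names are mine):

```python
from math import log2

def split_information(sizes):
    """SplitInformation(S,A) = -sum |Si|/|S| * log2(|Si|/|S|)."""
    total = sum(sizes)
    return -sum(s / total * log2(s / total) for s in sizes if s)

def gain_ratio(gain, sizes):
    """GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)."""
    return gain / split_information(sizes)

# Outlook splits the 14 PlayTennis examples 5/4/5
print(round(split_information([5, 4, 5]), 3))   # 1.577
# a Date attribute splits them into 14 singletons
print(round(split_information([1] * 14), 2))    # 3.81
# so Date's large gain gets divided by a much larger SplitInformation
```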

Attributes with Cost

Consider:
Medical diagnosis: a blood test costs 1000 SEK
Robotics: width_from_one_feet has cost 23 secs
How to learn a consistent tree with low expected cost?
Replace Gain by:
Gain^2(S,A) / Cost(A)   [Tan, Schlimmer 1990]
(2^Gain(S,A) - 1) / (Cost(A)+1)^w, where w ∈ [0,1]   [Nunez 1988]

Unknown Attribute Values

What if some examples are missing values of attribute A? Use the training example anyway and sort it through the tree:
If node n tests A, assign the most common value of A among the other examples sorted to node n
Or assign the most common value of A among the other examples with the same target value
Or assign probability p_i to each possible value v_i of A, and assign fraction p_i of the example to each descendant in the tree
Classify new examples in the same fashion
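The two cost-sensitive criteria are one-liners; a sketch comparing a high-gain but expensive attribute with a cheaper one (the example gains and costs here are made up for illustration):

```python
def tan_schlimmer(gain, cost):
    """Gain^2(S,A) / Cost(A)  [Tan, Schlimmer 1990]."""
    return gain ** 2 / cost

def nunez(gain, cost, w):
    """(2^Gain(S,A) - 1) / (Cost(A) + 1)^w, with w in [0, 1]  [Nunez 1988]."""
    return (2 ** gain - 1) / (cost + 1) ** w

# hypothetical: blood_test has gain 0.25 but cost 10; temperature 0.15, cost 1
print(tan_schlimmer(0.25, 10) < tan_schlimmer(0.15, 1))  # True: cheap attribute wins
print(nunez(0.25, 10, 0) > nunez(0.15, 1, 0))            # True: w=0 ignores cost
```

The weight w lets Nunez's criterion interpolate between ignoring cost (w=0) and penalizing it fully (w=1).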

Cross-Validation

Estimate the accuracy of a hypothesis induced by a supervised learning algorithm
Predict the accuracy of a hypothesis over future unseen instances
Select the optimal hypothesis from a given set of alternative hypotheses:
  Pruning decision trees
  Model selection
  Feature selection
Combining multiple classifiers (boosting)

Holdout Method

Partition the data set D = {(v_1,y_1), ..., (v_n,y_n)} into a training set D_t and a validation (holdout) set D_h = D \ D_t
acc_h = (1/h) Σ_{(v_i,y_i) ∈ D_h} δ(I(D_t,v_i), y_i)
I(D_t,v_i): output of the hypothesis induced by learner I trained on data D_t, for instance v_i
δ(i,j) = 1 if i=j, and 0 otherwise
Problems:
  Makes insufficient use of the data
  Training and validation set are correlated
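The holdout estimate is a shuffle, a cut and a count; a minimal sketch (the learner interface, a function from training data to a predictor, is my assumption):

```python
import random

def holdout_split(data, frac=0.7, seed=0):
    """Shuffle a copy of `data` and partition it into D_t and D_h."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac)
    return shuffled[:cut], shuffled[cut:]

def holdout_accuracy(learner, data, target, frac=0.7):
    """acc_h: fraction of holdout examples the induced model gets right."""
    train, holdout = holdout_split(data, frac)
    model = learner(train)           # I(D_t, .)
    return sum(model(x) == x[target] for x in holdout) / len(holdout)
```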

Cross-Validation

k-fold cross-validation splits the data set D into k mutually exclusive subsets D_1, D_2, ..., D_k
Train and test the learning algorithm k times; each time it is trained on D \ D_i and tested on D_i

[Figure: with k = 4, the folds D_1 D_2 D_3 D_4 are used in turn as the test set while the rest form the training set.]

acc_cv = (1/n) Σ_{(v_i,y_i) ∈ D} δ(I(D \ D_(i),v_i), y_i), where D_(i) is the fold containing (v_i,y_i)

Cross-validation uses all the data for training and testing
Complete k-fold cross-validation splits the dataset of size m in all C(m, m/k) possible ways (choosing m/k instances out of m)
Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training (on a dataset of size m, leave-one-out is equivalent to m-fold cross-validation)
In stratified cross-validation, the folds are stratified so that they contain approximately the same proportion of labels as the original data set
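The k-fold estimate can be sketched with strided index folds (the fold construction is my choice, and stratification is not implemented here):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k disjoint, roughly equal folds."""
    return [list(range(i, n, k)) for i in range(k)]

def cross_val_accuracy(learner, data, target, k):
    """acc_cv: each example is tested by the model trained without its fold."""
    correct = 0
    for fold in kfold_indices(len(data), k):
        held_out = set(fold)
        train = [x for i, x in enumerate(data) if i not in held_out]
        model = learner(train)                 # I(D \ D_i, .)
        correct += sum(model(data[i]) == data[i][target] for i in fold)
    return correct / len(data)
```

With k equal to the number of examples, this is leave-one-out cross-validation.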