Outline. Training Examples for EnjoySport.
Lecture slides for textbook Machine Learning, © Tom M. Mitchell, McGraw Hill, 1997


Outline [read Chapter 2] [suggested exercises 2.2, 2.3, 2.4, 2.6]

- Learning from examples
- General-to-specific ordering over hypotheses
- Version spaces and the candidate elimination algorithm
- Picking new examples
- The need for inductive bias

Training Examples for EnjoySport

  Sky    Temp  Humid   Wind    Water  Forecst | EnjoySpt
  Sunny  Warm  Normal  Strong  Warm   Same    | Yes
  Sunny  Warm  High    Strong  Warm   Same    | Yes
  Rainy  Cold  High    Strong  Warm   Change  | No
  Sunny  Warm  High    Strong  Cool   Change  | Yes

What is the general concept? Note: this simple approach assumes no noise; it illustrates the key concepts.

Representing Hypotheses

Many possible representations. Here, a hypothesis h is a conjunction of constraints on attributes. Each constraint can be:

- a specific value (e.g., Water = Warm)
- don't care (e.g., Water = ?)
- no value allowed (e.g., Water = Ø)

For example:

  Sky     AirTemp  Humid  Wind    Water  Forecst
  <Sunny, ?,       ?,     Strong, ?,     Same>

The hypothesis <Ø, Ø, Ø, Ø, Ø, Ø> classifies everything negative; <?, ?, ?, ?, ?, ?> classifies everything positive.

Prototypical Concept Learning Task

Given:
- Instances X: possible days, each described by the attributes Sky, AirTemp, Humidity, Wind, Water, Forecast
- Target function c: EnjoySport : X -> {0, 1}
- Hypotheses H: conjunctions of literals, e.g. <?, Cold, High, ?, ?, ?>
- Training examples D: positive and negative examples of the target function, <x_1, c(x_1)>, ..., <x_m, c(x_m)>

Determine: a hypothesis h in H such that h(x) = c(x) for all x in D.

The Inductive Learning Hypothesis

Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples.

Instances, Hypotheses, and More-General-Than

Hypotheses are partially ordered from specific to general. For instance:

  x_1 = <Sunny, Warm, High, Strong, Cool, Same>
  x_2 = <Sunny, Warm, High, Light, Warm, Same>

  h_1 = <Sunny, ?, ?, Strong, ?, ?>
  h_2 = <Sunny, ?, ?, ?, ?, ?>
  h_3 = <Sunny, ?, ?, ?, Cool, ?>

h_2 is more general than both h_1 and h_3: every instance that satisfies h_1 (or h_3) also satisfies h_2.
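The more-general-than-or-equal relation can be made concrete in a few lines. A minimal Python sketch — the tuple encoding ('?' for don't-care, '0' for no-value-allowed) and the helper names are my own, not from the slides:

```python
# Hypotheses are tuples of attribute constraints: a specific value,
# '?' (don't care), or '0' (no value allowed).
def constraint_geq(c1, c2):
    """Is constraint c1 at least as general as constraint c2?"""
    return c1 == '?' or c1 == c2 or c2 == '0'

def more_general_or_equal(h1, h2):
    """h1 >=_g h2: every instance satisfying h2 also satisfies h1."""
    return all(constraint_geq(c1, c2) for c1, c2 in zip(h1, h2))

h1 = ('Sunny', '?', '?', 'Strong', '?', '?')
h2 = ('Sunny', '?', '?', '?', '?', '?')
# h2 is strictly more general than h1: it relaxes the Wind constraint.
```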

Find-S Algorithm

1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x:
   For each attribute constraint a_i in h:
     If the constraint a_i is satisfied by x, do nothing;
     else replace a_i in h by the next more general constraint that is satisfied by x
3. Output hypothesis h

Hypothesis Space Search by Find-S

  x_1 = <Sunny, Warm, Normal, Strong, Warm, Same>, +
  x_2 = <Sunny, Warm, High, Strong, Warm, Same>, +
  x_3 = <Rainy, Cold, High, Strong, Warm, Change>, -
  x_4 = <Sunny, Warm, High, Strong, Cool, Change>, +

  h_0 = <Ø, Ø, Ø, Ø, Ø, Ø>
  h_1 = <Sunny, Warm, Normal, Strong, Warm, Same>
  h_2 = h_3 = <Sunny, Warm, ?, Strong, Warm, Same>
  h_4 = <Sunny, Warm, ?, Strong, ?, ?>

Find-S generalizes only as far as necessary! If the hypothesis is a conjunction of attribute constraints, Find-S is guaranteed to output the most specific hypothesis in H consistent with the positive examples; that hypothesis is also consistent with the negative examples, provided the target concept is in H and the training examples are correct.
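The numbered steps above translate almost directly into code. A minimal sketch of Find-S over the conjunctive space ('0' for no-value, '?' for don't-care are my encoding, not the slides'):

```python
def find_s(examples):
    """examples: list of (instance_tuple, is_positive).
    Start from the most specific hypothesis and minimally generalize
    on each positive example; negative examples are ignored."""
    n = len(examples[0][0])
    h = ['0'] * n                      # most specific hypothesis
    for x, positive in examples:
        if not positive:
            continue
        for i, value in enumerate(x):
            if h[i] == '0':            # first positive: adopt its value
                h[i] = value
            elif h[i] != value:        # conflict: generalize to don't-care
                h[i] = '?'
    return tuple(h)

EXAMPLES = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]
# Reproduces the trace: h_4 = <Sunny, Warm, ?, Strong, ?, ?>
```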

Complaints about Find-S

- Can't tell whether it has learned the concept
- Can't tell when the training data are inconsistent
- Picks a maximally specific h (why?)
- Depending on H, there might be several!

Version Spaces

A hypothesis h is consistent with a set of training examples D of target concept c if and only if h(x) = c(x) for each training example <x, c(x)> in D:

  Consistent(h, D) ≡ (∀<x, c(x)> ∈ D) h(x) = c(x)

The version space VS_{H,D}, with respect to hypothesis space H and training examples D, is the subset of hypotheses from H consistent with all training examples in D:

  VS_{H,D} ≡ {h ∈ H | Consistent(h, D)}

The List-Then-Eliminate Algorithm

1. VersionSpace <- a list containing every hypothesis in H
2. For each training example <x, c(x)>, remove from VersionSpace any hypothesis h for which h(x) ≠ c(x)
3. Output the list of hypotheses in VersionSpace

Example Version Space

For the four EnjoySport training examples:

  S: { <Sunny, Warm, ?, Strong, ?, ?> }

     <Sunny, ?, ?, Strong, ?, ?>   <Sunny, Warm, ?, ?, ?, ?>   <?, Warm, ?, Strong, ?, ?>

  G: { <Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?> }
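Because the conjunctive EnjoySport space is tiny, List-Then-Eliminate is actually feasible here. A self-contained sketch (the attribute domains follow the EnjoySport task; the helper names are illustrative):

```python
from itertools import product

DOMAINS = [('Sunny', 'Cloudy', 'Rainy'), ('Warm', 'Cold'), ('Normal', 'High'),
           ('Strong', 'Weak'), ('Warm', 'Cool'), ('Same', 'Change')]

def classify(h, x):
    """Conjunctive hypothesis h accepts x iff every constraint is met.
    A '0' (no-value) constraint never matches, so such h rejects everything."""
    return all(c == '?' or c == v for c, v in zip(h, x))

def all_hypotheses():
    yield tuple(['0'] * len(DOMAINS))          # the single empty hypothesis
    for h in product(*[vals + ('?',) for vals in DOMAINS]):
        yield h

def list_then_eliminate(examples):
    """Keep every hypothesis consistent with all training examples."""
    return [h for h in all_hypotheses()
            if all(classify(h, x) == label for x, label in examples)]

EXAMPLES = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]
```

Running this recovers exactly the six-hypothesis version space drawn on the slide, with <Sunny, Warm, ?, Strong, ?, ?> as its specific boundary.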

Concept Learning As Search

For the EnjoySport task:

- # instances: 3 · 2 · 2 · 2 · 2 · 2 = 96
- # classifications: 2^96
- # syntactically distinct hypotheses: 5 · 4 · 4 · 4 · 4 · 4 = 5120
- # semantically distinct hypotheses: 1 + 4 · 3 · 3 · 3 · 3 · 3 = 973 (any hypothesis containing a Ø constraint, e.g. <?, ?, Ø, ?, ?, ?>, represents no instance, so all such hypotheses collapse to one)

When the hypothesis space H is large (possibly infinite), efficient search strategies are required to find the hypothesis that best fits the data.

Representing Version Spaces

- The general boundary G of version space VS_{H,D} is the set of its maximally general members
- The specific boundary S of version space VS_{H,D} is the set of its maximally specific members
- Every member of the version space lies between these boundaries:

  VS_{H,D} = {h ∈ H | (∃s ∈ S)(∃g ∈ G)(g ≥_g h ≥_g s)}

where x ≥_g y means x is more general than or equal to y.
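The three counts above can be checked directly from the attribute domain sizes. A quick sketch (domain sizes 3, 2, 2, 2, 2, 2 as in EnjoySport):

```python
from math import prod

sizes = [3, 2, 2, 2, 2, 2]             # values per attribute

instances = prod(sizes)                # 3*2*2*2*2*2 = 96 possible days
syntactic = prod(s + 2 for s in sizes) # each slot: a value, '?', or 'Ø'
semantic = 1 + prod(s + 1 for s in sizes)
# Any hypothesis with a 'Ø' matches nothing, so all of them are one concept;
# the remaining hypotheses use a value or '?' per slot.
```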

Candidate Elimination Algorithm

G <- maximally general hypotheses in H: {<?, ?, ?, ?, ?, ?>}
S <- maximally specific hypotheses in H: {<Ø, Ø, Ø, Ø, Ø, Ø>}

For each training example d, do:

- If d is a positive example:
  - Remove from G any hypothesis inconsistent with d
  - For each hypothesis s in S that is not consistent with d:
    - Remove s from S
    - Add to S all minimal generalizations h of s such that (1) h is consistent with d, and (2) some member of G is more general than h
    - Remove from S any hypothesis that is more general than another hypothesis in S
- If d is a negative example:
  - Remove from S any hypothesis inconsistent with d
  - For each hypothesis g in G that is not consistent with d:
    - Remove g from G
    - Add to G all minimal specializations h of g such that (1) h is consistent with d, and (2) some member of S is more specific than h
    - Remove from G any hypothesis that is less general than another hypothesis in G

Example Trace

  S_0: {<Ø, Ø, Ø, Ø, Ø, Ø>}
  G_0: {<?, ?, ?, ?, ?, ?>}

What Next Training Example?

Given the version space above (S = {<Sunny, Warm, ?, Strong, ?, ?>}, G = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}), what query is best, i.e., most informative? Ideally, a query satisfied by half of the hypotheses in the version space and not satisfied by the other half, for example <Sunny, Warm, Normal, Light, Warm, Same>. Note that the answer comes from nature or a teacher. With such queries, the number of experiments needed is log2 |VS|.

How Should These Be Classified?

Given the same version space, consider:

  <Sunny, Warm, Normal, Strong, Cool, Change>
  <Rainy, Cool, Normal, Light, Warm, Same>
  <Sunny, Warm, Normal, Light, Warm, Same>
  <Sunny, Cold, Normal, Strong, Warm, Same>

A Biased Hypothesis Space

  Example  Sky     Temp  Humid   Wind    Water  Forecst | EnjoySpt
  1        Sunny   Warm  Normal  Strong  Cool   Change  | Yes
  2        Cloudy  Warm  Normal  Strong  Cool   Change  | Yes
  3        Rainy   Warm  Normal  Strong  Cool   Change  | No

After the two positive examples, S_2: <?, Warm, Normal, Strong, Cool, Change>. But this hypothesis also covers the third, negative example: the conjunctive hypothesis space is too biased to represent this target concept.

What Justifies this Inductive Leap?

  + <Sunny, Warm, Normal, Strong, Cool, Change>
  + <Sunny, Warm, Normal, Light, Warm, Same>

  S: <Sunny, Warm, Normal, ?, ?, ?>

Why believe we can classify the unseen <Sunny, Warm, Normal, Strong, Warm, Same>?

An UNBiased Learner

Idea: choose H that expresses every teachable concept (i.e., H is the power set of X). Consider H' = disjunctions, conjunctions, and negations over the previous H, e.g.:

  <Sunny, Warm, Normal, ?, ?, ?> ∨ ¬<?, ?, ?, ?, ?, Change>

What are S and G in this case? S becomes just the disjunction of the observed positive examples and G the negation of the disjunction of the negatives, so the learner can never generalize beyond the training data.

Inductive Bias

Consider a concept learning algorithm L, instances X, a target concept c, and training examples D_c = {<x, c(x)>}. Let L(x_i, D_c) denote the classification assigned to instance x_i by L after training on data D_c.

Definition: the inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training examples D_c,

  (∀x_i ∈ X) [(B ∧ D_c ∧ x_i) ⊢ L(x_i, D_c)]

where A ⊢ B means A logically entails B.

Inductive Systems and Equivalent Deductive Systems

Inductive system: training examples and a new instance go into the candidate elimination algorithm (using hypothesis space H), which outputs a classification of the new instance, or "don't know".

Equivalent deductive system: the same training examples and new instance go into a theorem prover, together with the assertion "H contains the target concept". The inductive bias is thereby made explicit.

Three Learners with Different Biases

1. Rote learner: store examples; classify x if and only if it matches a previously observed example.
2. Version space candidate elimination algorithm
3. Find-S

Summary Points

1. Concept learning as search through H
2. General-to-specific ordering over H
3. Version space candidate elimination algorithm
4. S and G boundaries characterize the learner's uncertainty
5. The learner can generate useful queries
6. Inductive leaps are possible only if the learner is biased
7. Inductive learners can be modelled by equivalent deductive systems

Decision Tree Learning [read Chapter 3] [recommended exercises 3.1, 3.4]

- Decision tree representation
- ID3 learning algorithm
- Entropy, information gain
- Overfitting

Decision Tree for PlayTennis

  Outlook?
  ├─ Sunny    -> Humidity?
  │               ├─ High   -> No
  │               └─ Normal -> Yes
  ├─ Overcast -> Yes
  └─ Rain     -> Wind?
                  ├─ Strong -> No
                  └─ Weak   -> Yes

The instance <Outlook = Sunny, Temperature = Hot, Humidity = High, Wind = Strong> is classified No.

A Tree to Predict C-Section Risk

Learned from medical records of 1000 women; negative examples are C-sections.

  [833+,167-] .83+ .17-
  Fetal_Presentation = 1: [822+,116-] .88+ .12-
    Previous_Csection = 0: [767+,81-] .90+ .10-
      Primiparous = 0: [399+,13-] .97+ .03-
      Primiparous = 1: [368+,68-] .84+ .16-
        Fetal_Distress = 0: [334+,47-] .88+ .12-
          Birth_Weight < 3349:  [201+,10.6-] .95+ .05-
          Birth_Weight >= 3349: [133+,36.4-] .78+ .22-
        Fetal_Distress = 1: [34+,21-] .62+ .38-
    Previous_Csection = 1: [55+,35-] .61+ .39-
  Fetal_Presentation = 2: [3+,29-] .11+ .89-
  Fetal_Presentation = 3: [8+,22-] .27+ .73-
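Classifying an instance means sorting it down the tree from the root to a leaf. A small sketch: the PlayTennis tree as a nested structure (this dict encoding is my own, not from the slides):

```python
# Internal nodes are (attribute, {value: subtree}); leaves are class labels.
TREE = ('Outlook', {
    'Sunny':    ('Humidity', {'High': 'No', 'Normal': 'Yes'}),
    'Overcast': 'Yes',
    'Rain':     ('Wind', {'Strong': 'No', 'Weak': 'Yes'}),
})

def classify(tree, instance):
    """Descend from the root, following the branch for each tested
    attribute's value, until a leaf label is reached."""
    while not isinstance(tree, str):
        attribute, branches = tree
        tree = branches[instance[attribute]]
    return tree

# The slide's instance sorts down Outlook=Sunny, then Humidity=High -> No.
instance = {'Outlook': 'Sunny', 'Temperature': 'Hot',
            'Humidity': 'High', 'Wind': 'Strong'}
```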

Decision Trees

Decision tree representation:
- Each internal node tests an attribute
- Each branch corresponds to an attribute value
- Each leaf node assigns a classification

How would we represent ∧, ∨, XOR? (A ∧ B) ∨ (C ∧ ¬D ∧ E)? M of N?

When to Consider Decision Trees

- Instances describable by attribute-value pairs
- Target function is discrete valued
- Disjunctive hypothesis may be required
- Possibly noisy training data

Examples: equipment or medical diagnosis, credit risk analysis, modeling calendar scheduling preferences.

Top-Down Induction of Decision Trees

Main loop:
1. A <- the "best" decision attribute for the next node
2. Assign A as the decision attribute for node
3. For each value of A, create a new descendant of node
4. Sort training examples to leaf nodes
5. If training examples are perfectly classified, then STOP; else iterate over the new leaf nodes

Which attribute is best?

  A1=? over [29+,35-]:  t -> [21+,5-],  f -> [8+,30-]
  A2=? over [29+,35-]:  t -> [18+,33-], f -> [11+,2-]

Entropy

S is a sample of training examples; p_+ is the proportion of positive examples in S, and p_- is the proportion of negative examples in S. Entropy measures the impurity of S:

  Entropy(S) ≡ -p_+ log2 p_+ - p_- log2 p_-

(Plot: Entropy(S) versus p_+; it is 0 at p_+ = 0 and p_+ = 1, and reaches its maximum of 1.0 at p_+ = 0.5.)
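The entropy formula for a boolean classification can be sketched directly; the endpoint handling reflects the convention that 0·log2 0 = 0:

```python
from math import log2

def entropy(p_pos):
    """Entropy of a boolean sample with proportion p_pos of positives."""
    if p_pos in (0.0, 1.0):
        return 0.0                 # by convention, 0 * log2(0) = 0
    p_neg = 1.0 - p_pos
    return -p_pos * log2(p_pos) - p_neg * log2(p_neg)

# entropy(0.5) is 1.0 (maximal impurity); entropy(9/14) is the slides'
# E = 0.940 for the [9+,5-] PlayTennis sample.
```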

Entropy

Entropy(S) = the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code).

Why? Information theory: an optimal-length code assigns -log2 p bits to a message having probability p. So the expected number of bits to encode + or - of a random member of S is

  p_+ (-log2 p_+) + p_- (-log2 p_-)

  Entropy(S) ≡ -p_+ log2 p_+ - p_- log2 p_-

With c-wise classification, we get

  Entropy(S) = -Σ_{i=1}^{c} p_i log2 p_i

Information Gain

Gain(S, A) = the expected reduction in entropy due to sorting on A:

  Gain(S, A) ≡ Entropy(S) - Σ_{v ∈ Values(A)} (|S_v|/|S|) Entropy(S_v)

  A1=? over [29+,35-]:  t -> [21+,5-],  f -> [8+,30-]
  A2=? over [29+,35-]:  t -> [18+,33-], f -> [11+,2-]
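The gain formula follows the entropy definition term by term: partition S by the attribute's values, weight each subset's entropy by its size, and subtract. A self-contained sketch (the dict-based example format is my own):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy of a list of class labels (handles c-wise classification)."""
    total = len(labels)
    return -sum((n / total) * log2(n / total)
                for n in Counter(labels).values())

def gain(examples, attribute, target):
    """Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v).
    examples: list of dicts; attribute and target are keys."""
    by_value = {}
    for ex in examples:
        by_value.setdefault(ex[attribute], []).append(ex[target])
    total = len(examples)
    remainder = sum(len(lbls) / total * entropy(lbls)
                    for lbls in by_value.values())
    return entropy([ex[target] for ex in examples]) - remainder

# The slide's A1 example: [29+,35-] split into t:[21+,5-], f:[8+,30-].
a1 = ([{'A1': 't', 'y': '+'}] * 21 + [{'A1': 't', 'y': '-'}] * 5 +
      [{'A1': 'f', 'y': '+'}] * 8 + [{'A1': 'f', 'y': '-'}] * 30)
```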

Training Examples

  Day  Outlook   Temperature  Humidity  Wind    PlayTennis
  D1   Sunny     Hot          High      Weak    No
  D2   Sunny     Hot          High      Strong  No
  D3   Overcast  Hot          High      Weak    Yes
  D4   Rain      Mild         High      Weak    Yes
  D5   Rain      Cool         Normal    Weak    Yes
  D6   Rain      Cool         Normal    Strong  No
  D7   Overcast  Cool         Normal    Strong  Yes
  D8   Sunny     Mild         High      Weak    No
  D9   Sunny     Cool         Normal    Weak    Yes
  D10  Rain      Mild         Normal    Weak    Yes
  D11  Sunny     Mild         Normal    Strong  Yes
  D12  Overcast  Mild         High      Strong  Yes
  D13  Overcast  Hot          Normal    Weak    Yes
  D14  Rain      Mild         High      Strong  No

Selecting the Next Attribute

Which attribute is the best classifier? With S: [9+,5-], E = 0.940:

  Humidity:  High -> [3+,4-], E = 0.985;  Normal -> [6+,1-], E = 0.592
  Gain(S, Humidity) = .940 - (7/14).985 - (7/14).592 = .151

  Wind:  Weak -> [6+,2-], E = 0.811;  Strong -> [3+,3-], E = 1.00
  Gain(S, Wind) = .940 - (8/14).811 - (6/14)1.0 = .048

  Gain(S, Outlook) = 0.246
  Gain(S, Temperature) = 0.029
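The four gains can be recomputed from the table. A self-contained check (the tuple layout and helper names are my own):

```python
from math import log2

# The 14 PlayTennis examples as (Outlook, Temperature, Humidity, Wind, PlayTennis).
DATA = [
    ('Sunny', 'Hot', 'High', 'Weak', 'No'),
    ('Sunny', 'Hot', 'High', 'Strong', 'No'),
    ('Overcast', 'Hot', 'High', 'Weak', 'Yes'),
    ('Rain', 'Mild', 'High', 'Weak', 'Yes'),
    ('Rain', 'Cool', 'Normal', 'Weak', 'Yes'),
    ('Rain', 'Cool', 'Normal', 'Strong', 'No'),
    ('Overcast', 'Cool', 'Normal', 'Strong', 'Yes'),
    ('Sunny', 'Mild', 'High', 'Weak', 'No'),
    ('Sunny', 'Cool', 'Normal', 'Weak', 'Yes'),
    ('Rain', 'Mild', 'Normal', 'Weak', 'Yes'),
    ('Sunny', 'Mild', 'Normal', 'Strong', 'Yes'),
    ('Overcast', 'Mild', 'High', 'Strong', 'Yes'),
    ('Overcast', 'Hot', 'Normal', 'Weak', 'Yes'),
    ('Rain', 'Mild', 'High', 'Strong', 'No'),
]
COLS = {'Outlook': 0, 'Temperature': 1, 'Humidity': 2, 'Wind': 3}

def entropy(rows):
    pos = sum(r[-1] == 'Yes' for r in rows) / len(rows)
    return 0.0 if pos in (0, 1) else -pos * log2(pos) - (1 - pos) * log2(1 - pos)

def gain(rows, attr):
    i = COLS[attr]
    remainder = sum(len(sub) / len(rows) * entropy(sub)
                    for v in {r[i] for r in rows}
                    for sub in [[r for r in rows if r[i] == v]])
    return entropy(rows) - remainder
```

This reproduces the slide's ranking: Outlook (0.246) > Humidity (0.151) > Wind (0.048) > Temperature (0.029), so ID3 picks Outlook as the root.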

After choosing Outlook as the root:

  {D1, D2, ..., D14} [9+,5-]
  Outlook
  ├─ Sunny    -> {D1, D2, D8, D9, D11}  [2+,3-] -> ?
  ├─ Overcast -> {D3, D7, D12, D13}     [4+,0-] -> Yes
  └─ Rain     -> {D4, D5, D6, D10, D14} [3+,2-] -> ?

Which attribute should be tested at the Sunny branch? With S_sunny = {D1, D2, D8, D9, D11}:

  Gain(S_sunny, Humidity)    = .970 - (3/5) 0.0 - (2/5) 0.0 = .970
  Gain(S_sunny, Temperature) = .970 - (2/5) 0.0 - (2/5) 1.0 - (1/5) 0.0 = .570
  Gain(S_sunny, Wind)        = .970 - (2/5) 1.0 - (3/5) .918 = .019

Hypothesis Space Search by ID3

ID3 performs a simple-to-complex hill-climbing search through the space of decision trees, using information gain as the evaluation function.

Hypothesis Space Search by ID3

- Hypothesis space is complete! The target function is surely in there...
- Outputs a single hypothesis (which one?); can't play 20 questions...
- No backtracking: local minima...
- Statistically-based search choices: robust to noisy data...
- Inductive bias: approximately "prefer the shortest tree"

Inductive Bias in ID3

Note H is the power set of instances X. Unbiased? Not really...

- Preference for short trees, and for those with high-information-gain attributes near the root
- Bias is a preference for some hypotheses, rather than a restriction of the hypothesis space H
- Occam's razor: prefer the shortest hypothesis that fits the data

Occam's Razor

Why prefer short hypotheses?

Argument in favor: there are fewer short hypotheses than long ones, so a short hypothesis that fits the data is unlikely to be a coincidence, while a long hypothesis that fits the data might be a coincidence.

Argument opposed: there are many ways to define small sets of hypotheses, e.g., all trees with a prime number of nodes that use attributes beginning with "Z". What's so special about small sets based on the size of the hypothesis?

Overfitting in Decision Trees

Consider adding noisy training example #15: <Sunny, Hot, Normal, Strong>, PlayTennis = No. What effect would it have on the earlier tree?

  Outlook?
  ├─ Sunny    -> Humidity? (High -> No, Normal -> Yes)
  ├─ Overcast -> Yes
  └─ Rain     -> Wind? (Strong -> No, Weak -> Yes)

Overfitting

Consider the error of hypothesis h over

- the training data: error_train(h)
- the entire distribution D of data: error_D(h)

Hypothesis h ∈ H overfits the training data if there is an alternative hypothesis h' ∈ H such that

  error_train(h) < error_train(h')   and   error_D(h) > error_D(h')

Overfitting in Decision Tree Learning

(Plot: accuracy versus tree size, 0 to 100 nodes. Accuracy on the training data keeps rising toward about 0.9 as the tree grows, while accuracy on the test data peaks and then declines — the classic overfitting curve.)

Avoiding Overfitting

How can we avoid overfitting?
- Stop growing when a data split is not statistically significant
- Grow the full tree, then post-prune

How to select the "best" tree:
- Measure performance over the training data
- Measure performance over a separate validation data set
- MDL: minimize size(tree) + size(misclassifications(tree))

Reduced-Error Pruning

Split the data into a training set and a validation set. Do until further pruning is harmful:

1. Evaluate the impact on the validation set of pruning each possible node (plus those below it)
2. Greedily remove the one that most improves validation set accuracy

This produces the smallest version of the most accurate subtree. What if data is limited?

Effect of Reduced-Error Pruning

(Plot: the same accuracy-versus-tree-size curves as before, plus a third curve, "on test data (during pruning)", which recovers and stays high as nodes are pruned away instead of declining like the unpruned test curve.)

Rule Post-Pruning

1. Convert the tree to an equivalent set of rules
2. Prune each rule independently of the others
3. Sort the final rules into the desired sequence for use

Perhaps the most frequently used method (e.g., C4.5).

Converting A Tree to Rules

From the PlayTennis tree:

  IF   (Outlook = Sunny) ∧ (Humidity = High)
  THEN PlayTennis = No

  IF   (Outlook = Sunny) ∧ (Humidity = Normal)
  THEN PlayTennis = Yes
  ...

Continuous Valued Attributes

Create a discrete attribute to test a continuous one: Temperature = 82.5 becomes, e.g., (Temperature > 72.3) = t, f.

  Temperature: 40  48  60  72  80  90
  PlayTennis:  No  No  Yes Yes Yes No
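A standard way to pick the threshold (the one Mitchell describes in the chapter text): sort the examples by the continuous value and consider candidate thresholds at midpoints where the class changes, then take the one with the highest information gain. A sketch over the slide's Temperature data:

```python
from math import log2

values = [40, 48, 60, 72, 80, 90]
labels = ['No', 'No', 'Yes', 'Yes', 'Yes', 'No']

def entropy(lbls):
    p = lbls.count('Yes') / len(lbls)
    return 0.0 if p in (0, 1) else -p * log2(p) - (1 - p) * log2(1 - p)

def best_threshold(vals, lbls):
    """Candidate thresholds sit at midpoints between adjacent examples
    whose class labels differ; pick the candidate maximizing gain."""
    pairs = sorted(zip(vals, lbls))
    candidates = [(pairs[i][0] + pairs[i + 1][0]) / 2
                  for i in range(len(pairs) - 1)
                  if pairs[i][1] != pairs[i + 1][1]]
    def gain(t):
        lo = [l for v, l in pairs if v < t]
        hi = [l for v, l in pairs if v >= t]
        rem = (len(lo) * entropy(lo) + len(hi) * entropy(hi)) / len(pairs)
        return entropy([l for _, l in pairs]) - rem
    return max(candidates, key=gain)
```

For this data the class changes at 48/60 and 80/90, giving candidates 54 and 85; the threshold 54 has the higher gain.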

Attributes with Many Values

Problem: if an attribute has many values, Gain will select it. Imagine using Date = Jun_3_1996 as an attribute.

One approach: use GainRatio instead:

  GainRatio(S, A) ≡ Gain(S, A) / SplitInformation(S, A)

  SplitInformation(S, A) ≡ -Σ_{i=1}^{c} (|S_i|/|S|) log2 (|S_i|/|S|)

where S_i is the subset of S for which A has value v_i.

Attributes with Costs

Consider medical diagnosis, where BloodTest has cost $150, or robotics, where Width_from_1ft has cost 23 sec. How to learn a consistent tree with low expected cost? One approach: replace Gain by

  Gain²(S, A) / Cost(A)                      [Tan and Schlimmer, 1990]

  (2^Gain(S, A) - 1) / (Cost(A) + 1)^w       [Nunez, 1988]

where w ∈ [0, 1] determines the importance of cost.
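SplitInformation is just the entropy of S with respect to the values of A, so it penalizes many-valued attributes. A small sketch showing why a Date-like attribute is penalized (the example sizes are illustrative):

```python
from math import log2

def split_information(subset_sizes):
    """SplitInformation(S, A): entropy of S w.r.t. the values of A,
    given the sizes |S_i| of the subsets induced by each value."""
    total = sum(subset_sizes)
    return -sum(n / total * log2(n / total) for n in subset_sizes if n)

def gain_ratio(gain, subset_sizes):
    return gain / split_information(subset_sizes)

# A Date attribute splitting 14 examples into 14 singletons pays
# log2(14) ~ 3.81 bits of SplitInformation; an even binary split
# (e.g., Humidity's 7/7) pays only 1 bit, so its gain survives intact.
```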

Unknown Attribute Values

What if some examples are missing values of attribute A? Use the training example anyway, and sort it through the tree. If node n tests A:

- assign the most common value of A among the other examples sorted to node n, or
- assign the most common value of A among the other examples with the same target value, or
- assign probability p_i to each possible value v_i of A, and assign fraction p_i of the example to each descendant in the tree

Classify new examples in the same fashion.
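The third strategy (fractional examples) can be sketched in a few lines: estimate p_i from the examples at node n with a known value of A, then send the incomplete example down every branch with the corresponding weight. The data below is illustrative, not from the slides:

```python
from collections import Counter

def value_distribution(known_values):
    """Estimate p_i for each observed value v_i of attribute A
    among the examples sorted to the current node."""
    counts = Counter(known_values)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

# Suppose six examples at the node have a known Wind value; a seventh
# example missing Wind is split fractionally across both branches:
dist = value_distribution(['Weak', 'Weak', 'Strong', 'Weak', 'Strong', 'Weak'])
# dist gives weight 4/6 to the Weak branch and 2/6 to the Strong branch.
```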