Machine Learning 2D1431, Lecture 3: Concept Learning

Machine Learning 2D1431
Lecture 3: Concept Learning

Question of the Day?
What row of numbers comes next in this series?
1  11  21  1211  111221  312211  13112221

Outline
- Learning from examples
- General-to-specific ordering of hypotheses
- Version spaces and the candidate elimination algorithm
- Inductive bias

Training Examples for Concept EnjoySport
Concept: days on which my friend Aldo enjoys his favourite water sports.
Task: predict the value of EnjoySport for an arbitrary day based on the values of the other attributes.

Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport
Sunny  Warm     Normal    Strong  Warm   Same      Yes
Sunny  Warm     High      Strong  Warm   Same      Yes
Rainy  Cold     High      Strong  Warm   Change    No
Sunny  Warm     High      Strong  Cool   Change    Yes
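To make the task concrete, the training examples above can be encoded as plain data, for example as attribute tuples with a boolean label. This is only a minimal sketch; the attribute order and the names ATTRIBUTES and TRAINING_EXAMPLES are my own choices, not part of the lecture.

```python
# EnjoySport training data from the table above, encoded as
# (Sky, AirTemp, Humidity, Wind, Water, Forecast) plus a label.
ATTRIBUTES = ("Sky", "AirTemp", "Humidity", "Wind", "Water", "Forecast")

TRAINING_EXAMPLES = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]

if __name__ == "__main__":
    for x, enjoy in TRAINING_EXAMPLES:
        print(dict(zip(ATTRIBUTES, x)), "->", "Yes" if enjoy else "No")
```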

Representing Hypotheses
A hypothesis h is a conjunction of constraints on attributes. Each constraint can be:
- a specific value, e.g. Water = Warm
- a "don't care" value, e.g. Water = ?
- no value allowed (null constraint), e.g. Water = Ø
Example hypothesis h:
Sky    AirTemp  Humidity  Wind    Water  Forecast
<Sunny,   ?,       ?,     Strong,   ?,    Same>

Prototypical Concept Learning Task
Given:
- Instances X: possible days described by the attributes Sky, AirTemp, Humidity, Wind, Water, Forecast
- Target function c: EnjoySport: X → {0,1}
- Hypotheses H: conjunctions of literals, e.g. <Sunny, ?, ?, Strong, ?, Same>
- Training examples D: positive and negative examples of the target function: <x1, c(x1)>, ..., <xn, c(xn)>
Determine: a hypothesis h in H such that h(x) = c(x) for all x in D.

Inductive Learning Hypothesis
Any hypothesis found to approximate the target function well over the training examples will also approximate the target function well over the unobserved examples.

Number of Instances, Concepts, Hypotheses
Sky: Sunny, Cloudy, Rainy
AirTemp: Warm, Cold
Humidity: Normal, High
Wind: Strong, Weak
Water: Warm, Cool
Forecast: Same, Change
#distinct instances: 3*2*2*2*2*2 = 96
#distinct concepts: 2^96
#syntactically distinct hypotheses: 5*4*4*4*4*4 = 5120
#semantically distinct hypotheses: 1 + 4*3*3*3*3*3 = 973
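In this representation a hypothesis is just a tuple of six constraints, each either a specific value, "?" (don't care) or "0" (the null constraint). A minimal sketch of a matching predicate and of the counting argument above; the identifier names (matches, domain_sizes) are illustrative, not from the slides.

```python
# A hypothesis is a 6-tuple of constraints: a specific value, "?" (any value)
# or "0" (the null constraint that rejects every instance).
def matches(h, x):
    """Return True if hypothesis h classifies instance x as positive."""
    return all(c != "0" and (c == "?" or c == v) for c, v in zip(h, x))

h = ("Sunny", "?", "?", "Strong", "?", "Same")
print(matches(h, ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")))  # True
print(matches(h, ("Rainy", "Cold", "High", "Strong", "Warm", "Change")))  # False

# Counting instances and hypotheses for the EnjoySport attributes:
domain_sizes = [3, 2, 2, 2, 2, 2]
instances = 1
for d in domain_sizes:
    instances *= d
print(instances)                  # 96 distinct instances
print(2 ** instances)             # 2^96 distinct concepts
syntactic = 1
semantic = 1
for d in domain_sizes:
    syntactic *= d + 2            # values + "?" + "0"
    semantic *= d + 1             # values + "?"
print(syntactic)                  # 5*4*4*4*4*4 = 5120
print(1 + semantic)               # 1 + 4*3*3*3*3*3 = 973 (all "0"-hypotheses collapse into one)
```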

General-to-Specific Order
Consider two hypotheses:
h1 = <Sunny, ?, ?, Strong, ?, ?>
h2 = <Sunny, ?, ?, ?, ?, ?>
Considering the sets of instances covered by h1 and h2: h2 imposes fewer constraints than h1 and therefore classifies more instances x as positive, h(x) = 1.
Definition: let h_j and h_k be boolean-valued functions defined over X. Then h_j is more general than or equal to h_k (written h_j ≥_g h_k) if and only if
∀x ∈ X: [(h_k(x) = 1) → (h_j(x) = 1)]
The ≥_g relation imposes a partial order over the hypothesis space H that is utilized by many concept learning methods.

Instances, Hypotheses, and the More-General-Than Relation
Instances:
x1 = <Sunny, Warm, High, Strong, Cool, Same>
x2 = <Sunny, Warm, High, Light, Warm, Same>
Hypotheses (arranged from specific to general):
h1 = <Sunny, ?, ?, Strong, ?, ?>
h2 = <Sunny, ?, ?, ?, ?, ?>
h3 = <Sunny, ?, ?, ?, Cool, ?>
h2 is more general than both h1 and h3.

Find-S Algorithm
1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x
   For each attribute constraint a_i in h
      If the constraint a_i in h is satisfied by x
      then do nothing
      else replace a_i in h by the next more general constraint that is satisfied by x
3. Output hypothesis h

Hypothesis Space Search by Find-S (from specific to general)
h0 = <Ø, Ø, Ø, Ø, Ø, Ø>
x1 = <Sunny, Warm, Normal, Strong, Warm, Same> +   ⇒  h1   = <Sunny, Warm, Normal, Strong, Warm, Same>
x2 = <Sunny, Warm, High, Strong, Warm, Same> +     ⇒  h2,3 = <Sunny, Warm, ?, Strong, Warm, Same>
x3 = <Rainy, Cold, High, Strong, Warm, Change> -   ⇒  (ignored, no change)
x4 = <Sunny, Warm, High, Strong, Cool, Change> +   ⇒  h4   = <Sunny, Warm, ?, Strong, ?, ?>
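The Find-S pseudocode above translates almost line for line into code. A minimal sketch, run on the EnjoySport examples; the function and variable names are mine, not from the slides.

```python
def find_s(training_examples):
    """Find-S: return the most specific hypothesis consistent with the positive examples."""
    n = len(training_examples[0][0])
    h = ["0"] * n                      # most specific hypothesis: rejects everything
    for x, positive in training_examples:
        if not positive:               # Find-S ignores negative examples
            continue
        for i, value in enumerate(x):
            if h[i] == "0":
                h[i] = value           # first positive example: adopt its attribute values
            elif h[i] != value:
                h[i] = "?"             # constraint not satisfied: generalize to "?"
    return tuple(h)

TRAINING_EXAMPLES = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]
print(find_s(TRAINING_EXAMPLES))       # ('Sunny', 'Warm', '?', 'Strong', '?', '?')
```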

Properties of Find-S
- Hypothesis space described by conjunctions of attribute constraints.
- Find-S outputs the most specific hypothesis within H that is consistent with the positive training examples.
- The output hypothesis will also be consistent with the negative examples, provided the target concept is contained in H.

Complaints about Find-S
- Can't tell whether the learner has converged to the target concept, in the sense that it is unable to determine whether it has found the only hypothesis consistent with the training examples.
- Can't tell when the training data is inconsistent, as it ignores negative training examples.
- Why prefer the most specific hypothesis?
- What if there are multiple maximally specific hypotheses?

Version Spaces
A hypothesis h is consistent with a set of training examples D of target concept c if and only if h(x) = c(x) for each training example <x, c(x)> in D:
Consistent(h, D) := ∀<x, c(x)> ∈ D: h(x) = c(x)
The version space VS_H,D, with respect to hypothesis space H and training set D, is the subset of hypotheses from H consistent with all training examples:
VS_H,D = {h ∈ H | Consistent(h, D)}

List-Then-Eliminate Algorithm
1. VersionSpace ← a list containing every hypothesis in H
2. For each training example <x, c(x)>
   remove from VersionSpace any hypothesis that is inconsistent with the training example, i.e. h(x) ≠ c(x)
3. Output the list of hypotheses in VersionSpace
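Because the EnjoySport hypothesis space is tiny, List-Then-Eliminate can actually be run by brute force: enumerate every hypothesis and keep the consistent ones. A sketch under the assumption that hypotheses with the null constraint are skipped (they cannot be consistent with any positive example); the names DOMAINS, consistent, etc. are mine.

```python
from itertools import product

DOMAINS = [
    ["Sunny", "Cloudy", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
    ["Strong", "Weak"], ["Warm", "Cool"], ["Same", "Change"],
]

def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

def consistent(h, examples):
    return all(matches(h, x) == label for x, label in examples)

D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]

# List-Then-Eliminate: start from all hypotheses, drop the inconsistent ones.
all_hypotheses = product(*[values + ["?"] for values in DOMAINS])
version_space = [h for h in all_hypotheses if consistent(h, D)]
for h in version_space:
    print(h)                       # the 6 surviving hypotheses, from S to G
```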

Example Version Space
S: {<Sunny, Warm, ?, Strong, ?, ?>}
<Sunny, ?, ?, Strong, ?, ?>   <Sunny, Warm, ?, ?, ?, ?>   <?, Warm, ?, Strong, ?, ?>
G: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
obtained from the training examples:
x1 = <Sunny, Warm, Normal, Strong, Warm, Same> +
x2 = <Sunny, Warm, High, Strong, Warm, Same> +
x3 = <Rainy, Cold, High, Strong, Warm, Change> -
x4 = <Sunny, Warm, High, Strong, Cool, Change> +

Representing Version Spaces
The general boundary G of version space VS_H,D is the set of its maximally general members.
The specific boundary S of version space VS_H,D is the set of its maximally specific members.
Every member of the version space lies between these boundaries:
VS_H,D = {h ∈ H | (∃s ∈ S)(∃g ∈ G): g ≥_g h ≥_g s}
where x ≥_g y means x is more general than or equal to y.

Candidate Elimination Algorithm
G ← maximally general hypotheses in H
S ← maximally specific hypotheses in H
For each training example d = <x, c(x)>:
  If d is a positive example
    - Remove from G any hypothesis that is inconsistent with d
    - For each hypothesis s in S that is not consistent with d
        remove s from S
        add to S all minimal generalizations h of s such that
          h is consistent with d, and
          some member of G is more general than h
        remove from S any hypothesis that is more general than another hypothesis in S
  If d is a negative example
    - Remove from S any hypothesis that is inconsistent with d
    - For each hypothesis g in G that is not consistent with d
        remove g from G
        add to G all minimal specializations h of g such that
          h is consistent with d, and
          some member of S is more specific than h
        remove from G any hypothesis that is less general than another hypothesis in G
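The pseudocode above can be turned into a compact implementation for this conjunctive hypothesis language, where a hypothesis has exactly one minimal generalization towards a positive example and its minimal specializations replace one "?" by an attribute value. A sketch under those assumptions; all identifiers are my own.

```python
DOMAINS = [
    ["Sunny", "Cloudy", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
    ["Strong", "Weak"], ["Warm", "Cool"], ["Same", "Change"],
]

def matches(h, x):
    return all(c != "0" and (c == "?" or c == v) for c, v in zip(h, x))

def more_general_or_equal(h1, h2):
    """True if every instance matched by h2 is also matched by h1."""
    if "0" in h2:
        return True                     # h2 matches nothing
    return all(a == "?" or a == b for a, b in zip(h1, h2))

def min_generalization(s, x):
    """The single minimal generalization of s that covers the positive instance x."""
    return tuple(v if c in ("0", v) else "?" for c, v in zip(s, x))

def min_specializations(g, x):
    """Minimal specializations of g that exclude the negative instance x."""
    for i, c in enumerate(g):
        if c == "?":
            for value in DOMAINS[i]:
                if value != x[i]:
                    yield g[:i] + (value,) + g[i + 1:]

def candidate_elimination(examples):
    S = {("0",) * 6}                    # maximally specific boundary
    G = {("?",) * 6}                    # maximally general boundary
    for x, positive in examples:
        if positive:
            G = {g for g in G if matches(g, x)}
            S_new = set()
            for s in S:
                if matches(s, x):
                    S_new.add(s)
                else:
                    h = min_generalization(s, x)
                    if any(more_general_or_equal(g, h) for g in G):
                        S_new.add(h)
            # keep only the maximally specific members
            S = {h for h in S_new
                 if not any(h != o and more_general_or_equal(h, o) for o in S_new)}
        else:
            S = {s for s in S if not matches(s, x)}
            G_new = set()
            for g in G:
                if not matches(g, x):
                    G_new.add(g)
                else:
                    for h in min_specializations(g, x):
                        if any(more_general_or_equal(h, s) for s in S):
                            G_new.add(h)
            # keep only the maximally general members
            G = {h for h in G_new
                 if not any(h != o and more_general_or_equal(o, h) for o in G_new)}
    return S, G

D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]
S, G = candidate_elimination(D)
print(S)   # {('Sunny', 'Warm', '?', 'Strong', '?', '?')}
print(G)   # the two G hypotheses: <Sunny,?,?,?,?,?> and <?,Warm,?,?,?,?>
```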

Example Candidate Elimination
S0: {<Ø, Ø, Ø, Ø, Ø, Ø>}                          G0: {<?, ?, ?, ?, ?, ?>}
x1 = <Sunny, Warm, Normal, Strong, Warm, Same> +
S1: {<Sunny, Warm, Normal, Strong, Warm, Same>}    G1: {<?, ?, ?, ?, ?, ?>}
x2 = <Sunny, Warm, High, Strong, Warm, Same> +
S2: {<Sunny, Warm, ?, Strong, Warm, Same>}         G2: {<?, ?, ?, ?, ?, ?>}
x3 = <Rainy, Cold, High, Strong, Warm, Change> -
S3: {<Sunny, Warm, ?, Strong, Warm, Same>}
G3: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
x4 = <Sunny, Warm, High, Strong, Cool, Change> +
S4: {<Sunny, Warm, ?, Strong, ?, ?>}
G4: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}

Example Candidate Elimination: Rectangles
Instance space: integer points in the x,y plane with 0 ≤ x, y ≤ 10.
Hypothesis space: rectangles, that means hypotheses are of the form a ≤ x ≤ b, c ≤ y ≤ d.
Assume examples = {}: G = {(0, 10, 0, 10)}, S = {Ø}
[Figure: the general boundary G is the rectangle covering the whole instance space; a candidate rectangle is bounded by a, b on the x-axis and c, d on the y-axis.]
Lab: Assignment 1
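The same S/G bookkeeping works for the rectangle hypothesis space sketched above. A small illustration of the first step, with rectangles encoded as (xmin, xmax, ymin, ymax) tuples; that encoding and the helper names are my own choices.

```python
# Rectangle hypotheses over integer points with 0 <= x, y <= 10,
# encoded as (xmin, xmax, ymin, ymax); None stands for the empty hypothesis.
def contains(rect, point):
    if rect is None:
        return False
    xmin, xmax, ymin, ymax = rect
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def min_generalization(rect, point):
    """Smallest rectangle that covers both rect and the new positive point."""
    x, y = point
    if rect is None:
        return (x, x, y, y)
    xmin, xmax, ymin, ymax = rect
    return (min(xmin, x), max(xmax, x), min(ymin, y), max(ymax, y))

G = (0, 10, 0, 10)        # most general rectangle: the whole instance space
S = None                  # most specific hypothesis: covers nothing

S = min_generalization(S, (3, 4))   # first positive example (3, 4)
print(S)                            # (3, 3, 4, 4), as in the slide
print(contains(G, (3, 4)))          # True: G needs no change for a positive example
```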

Example Candidate Elimination: Rectangles (continued)
examples = {((3,4), +)}: G = {(0, 10, 0, 10)}, S = {(3, 3, 4, 4)}
[Figure: the positive example (3,4) lies inside the specific boundary S, which in turn lies inside the general boundary G.]

Classification of Unseen Data
S: {<Sunny, Warm, ?, Strong, ?, ?>}
<Sunny, ?, ?, Strong, ?, ?>   <Sunny, Warm, ?, ?, ?, ?>   <?, Warm, ?, Strong, ?, ?>
G: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
x5 = <Sunny, Warm, Normal, Strong, Cool, Change>   +   6/0
x6 = <Rainy, Cold, Normal, Light, Warm, Same>      -   0/6
x7 = <Sunny, Warm, Normal, Light, Warm, Same>      ?   3/3
x8 = <Sunny, Cold, Normal, Strong, Warm, Same>     ?   2/4

Inductive Leap
+ <Sunny, Warm, Normal, Strong, Cool, Change>
+ <Sunny, Warm, Normal, Light, Warm, Same>
S: <Sunny, Warm, Normal, ?, ?, ?>
New example: <Sunny, Warm, Normal, Strong, Warm, Same>
How can we justify classifying the new example as positive?
Since S is the specific boundary, all other hypotheses in the version space are more general. So if the example satisfies S it will also satisfy every other hypothesis in VS.
Inductive bias: we assume that the hypothesis space H contains the target concept c. In other words, that c can be described by a conjunction of literals.

What Example to Query Next?
S: {<Sunny, Warm, ?, Strong, ?, ?>}
<Sunny, ?, ?, Strong, ?, ?>   <Sunny, Warm, ?, ?, ?, ?>   <?, Warm, ?, Strong, ?, ?>
G: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
What would be a good query for the learner to pose at this point?
Choose an instance that is classified positive by some of the hypotheses and negative by the others, e.g. <Sunny, Warm, Normal, Light, Warm, Same>.
If the example is positive S can be generalized, if it is negative G can be specialized.
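The 6/0, 0/6, 3/3 and 2/4 vote counts above can be checked mechanically by letting every member of the version space classify the new instances. A small verification sketch, with the six hypotheses written out explicitly; the variable names are mine.

```python
def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

# The six members of the EnjoySport version space, from S to G.
VERSION_SPACE = [
    ("Sunny", "Warm", "?", "Strong", "?", "?"),   # S
    ("Sunny", "?", "?", "Strong", "?", "?"),
    ("Sunny", "Warm", "?", "?", "?", "?"),
    ("?", "Warm", "?", "Strong", "?", "?"),
    ("Sunny", "?", "?", "?", "?", "?"),           # G member
    ("?", "Warm", "?", "?", "?", "?"),            # G member
]

new_instances = {
    "x5": ("Sunny", "Warm", "Normal", "Strong", "Cool", "Change"),
    "x6": ("Rainy", "Cold", "Normal", "Light", "Warm", "Same"),
    "x7": ("Sunny", "Warm", "Normal", "Light", "Warm", "Same"),
    "x8": ("Sunny", "Cold", "Normal", "Strong", "Warm", "Same"),
}

for name, x in new_instances.items():
    pos = sum(matches(h, x) for h in VERSION_SPACE)
    print(name, f"{pos}/{len(VERSION_SPACE) - pos}")   # x5 6/0, x6 0/6, x7 3/3, x8 2/4
```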

Biased Hypothesis Space
Our hypothesis space is unable to represent a simple disjunctive target concept: (Sky=Sunny) ∨ (Sky=Cloudy)
x1 = <Sunny, Warm, Normal, Strong, Cool, Change> +    S1: {<Sunny, Warm, Normal, Strong, Cool, Change>}
x2 = <Cloudy, Warm, Normal, Strong, Cool, Change> +   S2: {<?, Warm, Normal, Strong, Cool, Change>}
x3 = <Rainy, Warm, Normal, Strong, Cool, Change> -    S3: {}
The third example x3 contradicts the already overly general specific boundary S2.

Unbiased Learner
Idea: choose H so that it expresses every teachable concept, that means H is the set of all possible subsets of X, called the power set P(X).
|X| = 96, |P(X)| = 2^96 ≈ 10^28 distinct concepts
H = disjunctions, conjunctions, negations, e.g. <Sunny, Warm, Normal, ?, ?, ?> ∨ <?, ?, ?, ?, ?, Change>
H surely contains the target concept.
What are S and G in this case?
Assume positive examples (x1, x2, x3) and negative examples (x4, x5):
S: {(x1 ∨ x2 ∨ x3)}     G: {¬(x4 ∨ x5)}
The only examples that are classified are the training examples themselves. In other words, in order to learn the target concept one would have to present every single instance in X as a training example.

Futility of Bias-Free Learning
Each unobserved instance will be classified positive by precisely half the hypotheses in VS and negative by the other half.
A learner that makes no prior assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances.
No Free Lunch!
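The "half positive, half negative" claim is easy to see on a toy instance space: take every subset of the instances as a hypothesis, keep the subsets consistent with the training labels, and count the votes on an unseen instance. A minimal demonstration; the tiny instance space and labels are my own example, not from the slides.

```python
from itertools import combinations

# Toy instance space and an unbiased hypothesis space: all subsets of X.
X = ["a", "b", "c", "d"]
power_set = [set(s) for r in range(len(X) + 1) for s in combinations(X, r)]

# Training data: "a" is positive, "b" is negative.
def consistent(h):
    return "a" in h and "b" not in h

version_space = [h for h in power_set if consistent(h)]
votes_positive = sum("c" in h for h in version_space)
print(len(version_space))                                    # 4 consistent hypotheses
print(votes_positive, len(version_space) - votes_positive)   # 2 2: an exact split on the unseen "c"
```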

Inductive Bias
Consider:
- a concept learning algorithm L
- instances X, target concept c
- training examples D_c = {<x, c(x)>}
- let L(x_i, D_c) denote the classification assigned to instance x_i by L after training on D_c
Definition: the inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training data D_c
(∀x_i ∈ X) [(B ∧ D_c ∧ x_i) ⊢ L(x_i, D_c)]
where A ⊢ B means that A logically entails B.

Inductive Systems and Equivalent Deductive Systems
Inductive system: training examples + new instance → candidate elimination algorithm using hypothesis space H → classification of new instance, or "don't know".
Equivalent deductive system: training examples + new instance + the assertion "H contains the target concept" → theorem prover → classification of new instance, or "don't know".

Three Learners with Different Biases
- Rote learner: store examples, classify x if and only if it matches a previously observed example. No inductive bias.
- Version space candidate elimination algorithm. Bias: the hypothesis space contains the target concept.
- Find-S. Bias: the hypothesis space contains the target concept, and all instances are negative instances unless the opposite is entailed by its other knowledge.

No Free Lunch Theorem
If we can make no prior assumptions about the nature of the learning problem, there is no reason to prefer one learning algorithm over another.
If one algorithm seems to outperform another in a particular situation, it is a consequence of its fit to the particular problem, not of the general superiority of the algorithm.
For any two learning algorithms, the expected off-training-set classification error averaged over all possible target functions is the same.
No matter how clever we are at choosing a "good" learning algorithm and a "bad" algorithm, if all target functions are equally likely, then the good algorithm will not outperform the bad one.
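The first of the three learners, the rote learner, is trivial to write down, and doing so makes its lack of inductive bias explicit: it answers only for instances it has literally seen before. A minimal sketch; the class and method names are mine.

```python
class RoteLearner:
    """Stores the training examples and classifies only exact matches."""

    def __init__(self):
        self.memory = {}

    def train(self, examples):
        for x, label in examples:
            self.memory[x] = label

    def classify(self, x):
        # No inductive bias: unseen instances get no classification at all.
        return self.memory.get(x, "don't know")

learner = RoteLearner()
learner.train([
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),
])
print(learner.classify(("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")))  # Yes
print(learner.classify(("Sunny", "Warm", "High",   "Strong", "Warm", "Same")))  # don't know
```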

No Free Lunch Theorem for Binary Data

x     F    h1   h2
000   1    1    1    (training data D)
001  -1   -1   -1    (training data D)
010   1    1    1    (training data D)
011  -1    1   -1
100   1    1   -1
101  -1    1   -1
110   1    1   -1
111   1    1   -1

For the target function F(x), clearly algorithm 1 is superior to algorithm 2: ε1(E|F,D) = 0.4, ε2(E|F,D) = 0.6.
However, for each of the 2^5 possible distinct target functions consistent with D, there is exactly one other target function whose output is inverted for each of the patterns outside the training set D, so that the contributions to ε1(E|F,D) and ε2(E|F,D) cancel.

Question of the Day?
What row of numbers comes next in this series?
1  11  21  1211  111221  312211  13112221
Each row describes the row above it: the first row has one 1 (giving 11), the second two 1s (giving 21), the third row one 2 and one 1 (giving 1211), the fourth row one 1, one 2 and two 1s (giving 111221), and so on. So the answer is: 1113213211
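The "read the previous row aloud" rule in the answer is easy to turn into a few lines of code that reproduce the series and the answer. A small sketch; the function name next_row is my own.

```python
def next_row(row):
    """Read the previous row aloud: count runs of equal digits."""
    out, i = "", 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1
        out += str(j - i) + row[i]
        i = j
    return out

row = "1"
for _ in range(8):
    print(row)
    row = next_row(row)
# prints 1, 11, 21, 1211, 111221, 312211, 13112221 and finally the answer 1113213211
```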