UVA CS 4501: Machine Learning


1 UVA CS 4501: Machine Learning Lecture 21: Decision Tree / Random Forest / Ensemble Dr. Yanjun Qi University of Virginia Department of Computer Science

2 Where are we? → Five major sections of this course: regression (supervised), classification (supervised), unsupervised models, learning theory, graphical models.

3 Today: Decision tree (DT): tree representation, brief information theory, learning decision trees; Bagging; Random forests: ensemble of DTs; More about ensembles.

4 A study comparing classifiers. Proceedings of the 23rd International Conference on Machine Learning (ICML '06).

5 A study comparing classifiers, top 8 models → 11 binary classification problems / 8 metrics.

6 Readability hierarchy. Decision trees: classify based on a series of one-variable decisions. Linear classifier: the weight vector w tells us how important each variable is for classification and in which direction it points. Quadratic classifier: linear weights work as in the linear classifier, with additional information coming from all products of variables. k nearest neighbors: classifies using the complete training set, with no information about the nature of the class difference.

7 Where are we? → Three major types of classification approaches. We can divide the large variety of classification approaches into roughly three major types: 1. Discriminative: directly estimate a decision rule/boundary, e.g., logistic regression, support vector machine, decision tree. 2. Generative: build a generative statistical model, e.g., naïve Bayes classifier, Bayesian networks. 3. Instance-based classifiers: use the observations directly (no model), e.g., k nearest neighbors.

8 A dataset for classification. Output is a discrete class label C_1, C_2, ..., C_L. Data/points/instances/examples/samples/records: the rows. Features/attributes/dimensions/independent variables/covariates/predictors/regressors: the columns, except the last. Target/outcome/response/label/dependent variable: the special column to be predicted (the last column).

9 Example: Play Tennis.

10 Anatomy of a decision tree. Each internal node is a test on one feature/attribute; the branches out of a node are the possible attribute values; the leaves are the decisions. Example tree: Outlook (sunny → Humidity: high → No, normal → Yes; overcast → Yes; rain → Windy: false → Yes, true → No).

12 Anatomy of a decision tree (continued). Each node is a test on one attribute, the branches are the possible attribute values, and the leaves are the decisions. Note the sample size: your data gets smaller as you move down the tree, since each split passes only a subset of the examples to each child.

13 Apply the model to test data: to play tennis or not. A new test example: (Outlook==rain) and (Windy==false). Pass it down the tree → the decision is Yes.

14 Apply the model to test data: to play tennis or not. Three cases of Yes: (Outlook==overcast) → yes; (Outlook==rain) and (Windy==false) → yes; (Outlook==sunny) and (Humidity==normal) → yes.

15 Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances: (Outlook==overcast) OR ((Outlook==rain) AND (Windy==false)) OR ((Outlook==sunny) AND (Humidity==normal)) => yes, play tennis.
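
As a concrete illustration, the play-tennis tree above can be written directly as nested conditionals. A minimal Python sketch (the attribute names and the example dictionary are illustrative only):

    def play_tennis(example):
        # Each root-to-leaf path is one conjunction of attribute tests.
        if example["Outlook"] == "overcast":
            return "yes"
        if example["Outlook"] == "rain":
            return "yes" if example["Windy"] == "false" else "no"
        if example["Outlook"] == "sunny":
            return "yes" if example["Humidity"] == "normal" else "no"

    # A new test example: (Outlook==rain) and (Windy==false) -> yes
    print(play_tennis({"Outlook": "rain", "Windy": "false", "Humidity": "high"}))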

16 Representation: Y = ((A and B) or ((not A) and C)). As a tree: test A at the root; on the A=1 branch test B (B=1 → true, B=0 → false); on the A=0 branch test C (C=1 → true, C=0 → false).

17 Same concept, different representation: the same function Y = ((A and B) or ((not A) and C)) can also be written as a tree that tests C at the root, with A and B tested further down, i.e., the same concept can have several different tree representations.

18 Which attribute to select for splitting? Look at the distribution of each class (not of the attribute) in the resulting children; the split shown here leaves the classes mixed, so it is a bad split.

19 How do we choose which attribute to split on? Which attribute should be tested first? Intuitively, you would prefer the one that separates the training examples as much as possible.

20 Today: Decision tree (DT): tree representation, brief information theory, learning decision trees; Bagging; Random forests: ensemble of DTs; More about ensembles.

21 Information gain is one criterion for deciding which attribute to split on. Imagine: 1. someone is about to tell you your own name; 2. you are about to observe the outcome of a die roll; 3. you are about to observe the outcome of a coin flip; 4. you are about to observe the outcome of a biased coin flip. Each situation has a different amount of uncertainty as to which outcome you will observe.

22 Information → reduction in uncertainty (the amount of surprise in the outcome): I(x) = log2(1/p(x)) = -log2 p(x). If the probability of an event is small and it happens anyway, the information is large. Observing that a coin flip comes up heads: I = -log2(1/2) = 1 bit. Observing that a die roll comes up 6: I = -log2(1/6) ≈ 2.58 bits.

23 Entropy: the expected amount of information when observing the output of a random variable X, H(X) = E[I(X)] = Σ_i p(x_i) I(x_i) = -Σ_i p(x_i) log2 p(x_i). If X can have 8 outcomes and all are equally likely, H(X) = -Σ_i (1/8) log2(1/8) = 3.
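
A quick sanity check of this formula, as a small Python sketch (standard library only; the probability lists are illustrative):

    from math import log2

    def entropy(probs):
        # H(X) = -sum_i p(x_i) * log2 p(x_i); terms with p = 0 contribute 0.
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
    print(entropy([1/6] * 6))    # fair die: ~2.585 bits
    print(entropy([1/8] * 8))    # 8 equally likely outcomes: 3.0 bits
    print(entropy([0.9, 0.1]))   # biased coin: lower entropy (~0.469)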

24 Entropy: if there are k possible outcomes, H(X) ≤ log2 k, with equality when all outcomes are equally likely. The more the probability distribution deviates from uniformity, the lower the entropy. H(X) = E[I(X)] = Σ_i p(x_i) I(x_i) = -Σ_i p(x_i) log2 p(x_i); e.g., for a random binary variable.

26 Entropy: lower → better purity. Entropy measures the purity of a node: the less uniform the class distribution, the lower the entropy and the purer the node.

27 Information gain: IG(X,Y) = H(Y) - H(Y|X), the reduction in uncertainty about Y from knowing the feature variable X. Information gain = (information before split) - (information after split) = entropy(parent) - [weighted average entropy(children)]. The parent entropy is fixed, so the lower the children's entropy (the purer the children), the better; equivalently, for IG, the higher, the better.

28 Conditional entropy: H(Y) = -Σ_i p(y_i) log2 p(y_i); H(Y|X) = Σ_j p(x_j) H(Y|X=x_j) = -Σ_j p(x_j) Σ_i p(y_i|x_j) log2 p(y_i|x_j); H(Y|X=x_j) = -Σ_i p(y_i|x_j) log2 p(y_i|x_j).

29 Example. Attributes X1, X2; label Y. Counts: (X1=T, X2=T, Y=+): 2; (X1=T, X2=F, Y=+): 2; (X1=F, X2=T, Y=-): 5; (X1=F, X2=F, Y=+): 1. Which one do we choose, X1 or X2? IG(X1,Y) = H(Y) - H(Y|X1). H(Y) = -(5/10) log2(5/10) - (5/10) log2(5/10) = 1. H(Y|X1) = P(X1=T) H(Y|X1=T) + P(X1=F) H(Y|X1=F) = 4/10 × (-1 log2 1 - 0 log2 0) + 6/10 × (-5/6 log2(5/6) - 1/6 log2(1/6)) ≈ 0.39. Information gain IG(X1,Y) = 1 - 0.39 = 0.61.
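
The IG(X1,Y) computation above can be checked with a few lines of Python (a minimal sketch; the class counts are read off the toy table on this slide):

    from math import log2

    def entropy(counts):
        total = sum(counts)
        return -sum(c / total * log2(c / total) for c in counts if c > 0)

    # H(Y): 5 positive and 5 negative examples
    h_y = entropy([5, 5])                                            # = 1.0

    # H(Y | X1): X1=T covers 4 examples (all +), X1=F covers 6 (1 +, 5 -)
    h_y_given_x1 = 4/10 * entropy([4, 0]) + 6/10 * entropy([1, 5])   # ~ 0.39

    print(h_y - h_y_given_x1)                                        # IG(X1, Y) ~ 0.61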

32 Which one do we choose? (Same table as above.) Information gain IG(X1,Y) = 0.61; information gain IG(X2,Y) = 0.12. Pick the variable which provides the most information gain about Y: pick X1 → then recursively choose the next Xi on each branch.

33 Which one do we choose? Splitting on X1 sends the data down two branches (one subset for X1=T, the other for X1=F), and the same information-gain comparison is then repeated on each branch. Information gain IG(X1,Y) = 0.61; information gain IG(X2,Y) = 0.12. Pick the variable which provides the most information gain about Y: pick X1 → then recursively choose the next Xi on the branches, as in the sketch below.
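
The recursion described here (pick the best attribute, split, repeat on each branch) is the core of ID3-style tree learning. A minimal self-contained sketch, assuming categorical attributes stored as a list of dicts (all names here are illustrative, not from the slides):

    from math import log2
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum(c / total * log2(c / total) for c in Counter(labels).values())

    def info_gain(rows, attr, target="Y"):
        base = entropy([r[target] for r in rows])
        remainder = 0.0
        for v in set(r[attr] for r in rows):
            subset = [r for r in rows if r[attr] == v]
            remainder += len(subset) / len(rows) * entropy([r[target] for r in subset])
        return base - remainder

    def build_tree(rows, attrs, target="Y"):
        labels = [r[target] for r in rows]
        if len(set(labels)) == 1 or not attrs:           # pure node, or no attributes left
            return Counter(labels).most_common(1)[0][0]  # leaf: majority label
        best = max(attrs, key=lambda a: info_gain(rows, a, target))
        tree = {"attr": best, "branches": {}}
        for v in set(r[best] for r in rows):
            subset = [r for r in rows if r[best] == v]
            tree["branches"][v] = build_tree(subset, [a for a in attrs if a != best], target)
        return tree

    # The toy table above, expanded row by row.
    data = ([{"X1": "T", "X2": "T", "Y": "+"}] * 2 + [{"X1": "T", "X2": "F", "Y": "+"}] * 2
            + [{"X1": "F", "X2": "T", "Y": "-"}] * 5 + [{"X1": "F", "X2": "F", "Y": "+"}] * 1)
    print(build_tree(data, ["X1", "X2"]))   # splits on X1 first, as argued on the slide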

35 Intuitively, you would prefer the attribute that separates the training examples as much as possible.

36 Decision tree caveats: the number of possible values of an attribute influences the information gain; the more possible values, the higher the gain (the more likely it is to form small but pure partitions). Other purity (diversity) measures: information gain; the Gini index (population diversity), Gini_m = Σ_k p_mk (1 - p_mk) = 1 - Σ_k p_mk^2, where p_mk is the proportion of class k at node m; the chi-square test.
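
Both entropy and Gini behave the same way qualitatively: zero for a pure node and maximal for an even class mix. A tiny comparison sketch on a binary node (the probability values are arbitrary illustrations):

    from math import log2

    def entropy(probs):
        # H = -sum_k p_k log2 p_k (zero for a pure node, 1 bit at a 50/50 split)
        return -sum(p * log2(p) for p in probs if p > 0)

    def gini(probs):
        # Gini impurity: sum_k p_k (1 - p_k) = 1 - sum_k p_k^2 (zero for a pure node, 0.5 at 50/50)
        return 1.0 - sum(p * p for p in probs)

    for p in (0.5, 0.3, 0.1):
        print(f"p={p:.1f}  entropy={entropy([p, 1 - p]):.3f}  gini={gini([p, 1 - p]):.3f}")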

37 Overfitting: you can perfectly fit a DT to any training data. Instability of trees: high variance (small changes in the training set result in changes to the tree model), and the hierarchical structure means an error in the top split propagates down. Two approaches: 1. stop growing the tree when further splitting the data does not yield an improvement; 2. grow a full tree, then prune it by eliminating nodes.
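
Both approaches map onto common library knobs. A hedged sketch with scikit-learn, assuming it is installed (the parameter values are arbitrary illustrations, not recommendations):

    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, n_features=5, random_state=0)

    # Approach 1: stop growing early (pre-pruning) via depth / leaf-size limits.
    early_stop = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5).fit(X, y)

    # Approach 2: grow the tree, then apply cost-complexity pruning.
    pruned = DecisionTreeClassifier(ccp_alpha=0.01).fit(X, y)

    print(early_stop.get_depth(), pruned.get_depth())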

38 From the ESL book, Ch. 9: Classification and Regression Trees (CART) partition the feature space into a set of rectangles and fit a simple model in each partition.

39 Summary: decision trees are a non-linear classifier / regression method; easy to use; easy to interpret; susceptible to overfitting, but this can be avoided (by early stopping or pruning).

40 Decision tree / random forest, in the course framework. Task: classification. Representation: partition of the feature space into a set of rectangles, local smoothness. Score function: split purity measure, e.g., IG / cross-entropy / Gini. Search/optimization: greedy search for partitions. Models, parameters: the tree model(s), i.e., the space partition.

41 Today: Decision tree (DT): tree representation, brief information theory, learning decision trees; Bagging; Random forests: ensemble of DTs; More about ensembles.

42 Bagging (bootstrap aggregation) is a technique for reducing the variance of an estimated prediction function. For classification, for instance, we build a committee of trees and each tree casts a vote for the predicted class.

43 Bootstrap. The basic idea: randomly draw datasets with replacement (i.e., allowing duplicates) from the training data, each sample the same size as the original training set.
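
A minimal numpy sketch of drawing B bootstrap replicates (assuming numpy is available; the array and variable names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.arange(10)   # stand-in for a training set of N = 10 examples
    B = 3

    for b in range(B):
        # Sample N indices WITH replacement: duplicates are allowed, and each
        # replicate has the same size as the original training set.
        idx = rng.choice(len(X), size=len(X), replace=True)
        print(f"bootstrap sample {b}: {X[idx]}")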

45 With vs. without replacement. Bootstrap sampling with replacement keeps the sample size the same as the original size for every repeated sampling, and the sampled data groups are independent of each other. Sampling without replacement cannot keep the sample size the same as the original size for every repeated sampling, and the sampled data groups are dependent on each other.

46 Bagging: create bootstrap samples from the training data (N examples, M features).

47 Bagging of DT classifiers: refit the model to each bootstrap dataset (each with N examples and M features), combine the trees by taking the majority vote, and examine the behavior over the B replications.
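
Putting the two previous slides together, a hedged sketch of bagged decision trees with a majority vote (assuming numpy and scikit-learn; B, the data generator, and the "test" points are arbitrary illustrations):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    X_test = X[:20]   # illustrative query points
    rng = np.random.default_rng(0)
    B = 25

    votes = np.zeros((B, len(X_test)), dtype=int)
    for b in range(B):
        idx = rng.choice(len(X), size=len(X), replace=True)   # bootstrap sample
        tree = DecisionTreeClassifier().fit(X[idx], y[idx])   # refit to each replicate
        votes[b] = tree.predict(X_test)

    # Majority vote over the B trees (binary labels 0/1 here).
    y_hat = (votes.mean(axis=0) > 0.5).astype(int)
    print(y_hat)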

48 Bagging for classification with 0-1 loss: bagging a good classifier can make it better, but bagging a bad classifier can make it worse. The bagging effect can be understood in terms of a consensus of independent weak learners, the "wisdom of crowds".

49 Peculiarities of bagging: model instability is good when bagging. The more variable (unstable) the base model is, the more improvement can potentially be obtained. Low-variability methods (e.g., LDA) improve less than high-variability methods (e.g., decision trees).

50 Bagging: an example with simulated data (ESL book, Example 8.7.1). N = 30 training samples, two classes, p = 5 features; each feature has an N(0, 1) distribution with pairwise correlation 0.95 between features. The response Y is generated from the features as specified in ESL Example 8.7.1; the test sample size is 2000. Fit classification trees to the training set and to B = 200 bootstrap samples.

51 The five features are highly correlated with each other; notice the bootstrap trees differ from the original tree (ESL book, Example 8.7.1). → There is no clear winner among the features to split on → small changes in the training set result in different trees → but these trees are actually quite similar for classification.

52 → For B > 30, more trees do not improve the bagging results, since the trees are highly correlated with each other and give similar classifications. Two ways to combine the B trees: consensus (majority vote) or probability (average the class distributions at the terminal nodes). (ESL book, Example 8.7.1)

53 Bagging only slightly increases the model space; it cannot help where a greater enlargement of the space is needed. Bagged trees are correlated; use a random forest to reduce the correlation between trees.

54 Today: Decision tree (DT): tree representation, brief information theory, learning decision trees; Bagging; Random forests: a special ensemble of DTs; More about ensembles.

55 Random forest classifier: an extension of bagging which uses de-correlated trees.

56 Random forest classifier: create bootstrap samples from the training data (N examples, M features).

57 Random forest classifier: at each node, when choosing the split feature, choose only among m < M features.

58 Random forest classifier: create a decision tree from each bootstrap sample.

59 Random forest classifier: take the majority vote over the trees.

60 Random forests: for each of our B bootstrap samples, form a tree in the following manner: i. given p dimensions, pick m of them; ii. split only according to these m dimensions (we do NOT consider the other p - m dimensions); iii. repeat steps i and ii for each split (note: we pick a different set of m dimensions for each split in a single tree).

61 (See the corresponding page in the ESL book.)

62 Random forests can be viewed as a refinement of bagging with a tweak that decorrelates the trees: at each tree split, a random subset of m features out of all p features is drawn and considered for splitting. Some guidelines were provided by Breiman, but be careful to choose m based on the specific problem: m = p amounts to bagging; m = p/3 or log2(p) for regression; m = sqrt(p) for classification. In scikit-learn these correspond to the max_features setting, as in the sketch below.
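
A hedged scikit-learn sketch of these choices (assuming scikit-learn is installed; the dataset and n_estimators are arbitrary illustrations, not tuned values):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=300, n_features=16, random_state=0)

    # m = sqrt(p): the usual classification guideline.
    rf_sqrt = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                     random_state=0).fit(X, y)

    # m = p: every feature considered at every split, i.e., plain bagging of trees.
    rf_bag = RandomForestClassifier(n_estimators=200, max_features=None,
                                    random_state=0).fit(X, y)

    print(rf_sqrt.score(X, y), rf_bag.score(X, y))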

63 Why are correlated trees not ideal? Random forests try to reduce the correlation between the trees. Why?

64 Why are correlated trees not ideal? Assume each tree has variance σ². If the trees are independently and identically distributed, then the variance of their average is σ²/B.

65 Why are correlated trees not ideal? Assume each tree has variance σ². If the trees are merely identically distributed (not independent), with pairwise correlation ρ > 0, then the variance of their average is ρσ² + ((1 - ρ)/B)σ². As B → ∞, the second term → 0 but the first remains; thus the pairwise correlation always limits how much the variance can be reduced.
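
A quick Monte Carlo check of this variance formula (a numpy sketch; σ, ρ, B, and the shared-component construction are illustrative assumptions, not part of the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, rho, B = 1.0, 0.6, 100
    n_trials = 50_000

    # Build B identically distributed variables with pairwise correlation rho
    # by mixing one shared component with independent noise.
    shared = rng.normal(size=(n_trials, 1))
    noise = rng.normal(size=(n_trials, B))
    trees = sigma * (np.sqrt(rho) * shared + np.sqrt(1 - rho) * noise)

    avg = trees.mean(axis=1)
    print(avg.var())                                     # empirical variance of the average
    print(rho * sigma**2 + (1 - rho) * sigma**2 / B)     # rho*sigma^2 + ((1-rho)/B)*sigma^2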

67 Why are correlated trees not ideal? How do we deal with it? If we reduce m (the number of dimensions we actually consider at each split), we reduce the pairwise correlation between trees, and thus the variance of the ensemble is reduced.

68 Today: Decision tree (DT): tree representation, brief information theory, learning decision trees; Bagging; Random forests: ensemble of DTs; More about ensembles.

69 Example: ensembles in practice. Each rating/sample: <user, movie, date of grade, grade>. Training set: 100,480,507 ratings; qualifying set: 2,817,131 ratings → used to determine the winner.

70 Ensembles in practice: Team BellKor's Pragmatic Chaos defeated the ensemble team by submitting just 20 minutes earlier → winning the 1 million dollar prize. The ensemble team was itself a blend of multiple different methods.

71 References: Prof. Tan, Steinbach, and Kumar's Introduction to Data Mining slides; ESL book: Hastie, Trevor, et al. The Elements of Statistical Learning. Vol. 2. No. 1. New York: Springer; Dr. Oznur Tastan's slides about RF and DT.
