Machine Learning in Bioinformatics
Slide 1: Machine Learning in Bioinformatics
Arlindo Oliveira
Reference: Data Mining: Concepts and Techniques
Topics: data mining concepts, learning from examples, decision trees, neural networks
Slide 2: Typical Supervised Learning Problem Setting
Given a set (database) of observations, each of the form (x1, ..., xn, y), where the xi are input variables and y is a particular output. Build a model to predict y = f(x1, ..., xn):
- First define a criterion to measure model quality
- Split the dataset into training and test sets
- Build the model using the training set
- Validate the model using the test set
A Database (Example)
[Table: observations with inputs X1-X6, actual output Y, and prediction f(x1, ..., x6); the output values are GOOD/BAD, and the prediction disagrees with Y on a few rows.]
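The workflow on this slide can be sketched in a few lines of Python. This is an illustrative example only: the toy data, the scikit-learn calls, and the 70/30 split are my own choices, not from the slides.

```python
# Minimal sketch of the supervised-learning workflow above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))             # inputs x1..x6
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # output y = f(x1,...,xn), here a toy rule

# Split into training and test sets, build on train, validate on test.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```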
Slide 3: Main Steps
- Select a subset of relevant input variables
- Build a model using these variables: generate a sequence of models and identify one (or several) as good models, using only the training set
- Validate the selected models: quantitatively, using the test set; qualitatively, using expert knowledge

Main Classes of Methods
- Supervised learning (= input/output models): decision/regression trees, neural networks
- Unsupervised learning (= p(x1, ..., xn) models): Bayesian networks, clustering
Slide 4: Inductive Learning
- Learning from examples
- The general problem of inductive inference
- Inductive bias

Training Examples for Concept EnjoySport
Concept: days on which my friend Aldo enjoys his favourite water sports.
Task: predict the value of EnjoySport for an arbitrary day based on the values of the other attributes.

Sky    Temp  Humid   Wind    Water  Forecast  EnjoySport
Sunny  Warm  Normal  Strong  Warm   Same      Yes
Sunny  Warm  High    Strong  Warm   Same      Yes
Rainy  Cold  High    Strong  Warm   Change    No
Sunny  Warm  High    Strong  Cool   Change    Yes
Slide 5: Inductive Learning Hypothesis
Any hypothesis found to approximate the target function well over the training examples will also approximate the target function well over unobserved examples.

Futility of Bias-Free Learning
A learner that makes no prior assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. No free lunch!
Slide 6: Decision Trees
- Decision tree representation
- ID3 learning algorithm
- Entropy, information gain
- Overfitting

What Is a Decision Tree?
[Figure: an example tree. The root tests the value of X1; if Small, the tree predicts "Y is big"; if Medium or Large, a second node tests X2 against the threshold 0.34, leading to the leaves "Y is very big" and "Y is small".]
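A decision tree is just nested attribute tests. The sketch below hand-codes one plausible reading of the figure; the association of leaves to branches is my interpretation, since the figure itself is only partially legible.

```python
# Hand-coded version of the slide's example tree (leaf/branch pairing assumed).
def predict(x1: str, x2: float) -> str:
    if x1 == "Small":                    # root node: test X1
        return "Y is big"
    # X1 is Medium or Large: test X2 against the threshold 0.34
    return "Y is very big" if x2 < 0.34 else "Y is small"

print(predict("Small", 0.5))   # -> Y is big
print(predict("Large", 0.2))   # -> Y is very big
```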
Slide 7: Training Examples

Day  Outlook   Temp.  Humidity  Wind    PlayTennis
D1   Sunny     Hot    High      Weak    No
D2   Sunny     Hot    High      Strong  No
D3   Overcast  Hot    High      Weak    Yes
D4   Rain      Mild   High      Weak    Yes
D5   Rain      Cool   Normal    Weak    Yes
D6   Rain      Cool   Normal    Strong  No
D7   Overcast  Cool   Normal    Weak    Yes
D8   Sunny     Mild   High      Weak    No
D9   Sunny     Cool   Normal    Weak    Yes
D10  Rain      Mild   Normal    Strong  Yes
D11  Sunny     Mild   Normal    Strong  Yes
D12  Overcast  Mild   High      Strong  Yes
D13  Overcast  Hot    Normal    Weak    Yes
D14  Rain      Mild   High      Strong  No

Decision Tree for PlayTennis
Outlook = Sunny    -> Humidity = High -> No; Humidity = Normal -> Yes
Outlook = Overcast -> Yes
Outlook = Rain     -> Wind = Strong -> No; Wind = Weak -> Yes
Slide 8: Decision Tree for PlayTennis
- Each internal node tests an attribute
- Each branch corresponds to an attribute value
- Each leaf node assigns a classification
Classifying an instance: (Outlook = Sunny, Temperature = Hot, Humidity = High, Wind = Weak) is sorted down the branches Outlook -> Humidity and classified as No.
Slide 9: Decision Tree for Conjunction
Outlook=Sunny ∧ Wind=Weak:
Outlook = Sunny    -> Wind = Strong -> No; Wind = Weak -> Yes
Outlook = Overcast -> No
Outlook = Rain     -> No

Decision Tree for Disjunction
Outlook=Sunny ∨ Wind=Weak:
Outlook = Sunny    -> Yes
Outlook = Overcast -> Wind = Strong -> No; Wind = Weak -> Yes
Outlook = Rain     -> Wind = Strong -> No; Wind = Weak -> Yes
Slide 10: Decision Tree for XOR
Outlook=Sunny XOR Wind=Weak:
Outlook = Sunny    -> Wind = Strong -> Yes; Wind = Weak -> No
Outlook = Overcast -> Wind = Strong -> No; Wind = Weak -> Yes
Outlook = Rain     -> Wind = Strong -> No; Wind = Weak -> Yes

Decision Tree
Decision trees represent disjunctions of conjunctions. The PlayTennis tree above corresponds to:
(Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak)
Slide 11: When to Consider Decision Trees
- Instances describable by attribute-value pairs
- Target function is discrete valued
- Disjunctive hypothesis may be required
- Possibly noisy training data
- Missing attribute values
Examples: medical diagnosis, credit risk analysis, object classification for a robot manipulator (Tan 1993)

Growing and Pruning, Pictorially
[Figure: data misfit plotted against tree complexity. Growing the tree moves from underfitting toward overfitting; pruning moves back, and the final tree sits near the minimum.]
Slide 12: An Application in Bioinformatics
Genetics of complex traits. Database composed of observations on 1086 animals; inputs: 20x2 genetic markers; outputs: phenotypic measurements (numbers). Goal: identify the location of the involved chromosomal regions. Results: unpruned and pruned trees.

Another Application in Bioinformatics
Identification of protein origin. Database composed of the frequencies of amino acids in different families; inputs: 20 frequencies; outputs: class of protein. Objective: identify the family of the protein.
Slide 13: Yet Another Application in Bioinformatics
Identification of regulatory mechanisms between yeast genes, using data from microarray experiments [Spellman et al. (1998). Comprehensive Identification of Cell Cycle-regulated Genes of the Yeast Saccharomyces cerevisiae by Microarray Hybridization. Molecular Biology of the Cell 9]. We want to predict which genes activate: CLN1, CLN2, CLN3, SWI4.

Decision tree for CLN1 activation
[Tree: splits on the expression of YPL256C (CLN2) at -0.375, YPL120C (CLB5) at -0.285, and YDR328C (SKP1) at 0.695, with leaves "CLN1 active" / "CLN1 not active". Confusion matrix shown; the recoverable entries are 22% and 0%/100%.]
Slide 14: Decision tree for CLN2 activation
[Tree: splits on CLB5 at 0, CLN3 at -0.455, and CDH1 at -0.475, with leaves "CLN2 active" / "CLN2 not active". Confusion matrix: recoverable entries 66.6%/33.3% and 20%/80%.]

Decision tree for CLN3 activation
[Tree: splits on YGL003 (CDH1), SKP1, and CDC53 at 0.025, with leaves "CLN3 active" / "CLN3 not active". Confusion matrix: recoverable entries 83.3%/16.6% and 14.2%/85.7%.]
Slide 15: Decision tree for SWI4 activation
[Tree: splits on MBP1 at -0.28, MCM1, CLB1, SIC1 at 1.24, and CLN2 at 0.025, with leaves "SWI4 active" / "SWI4 not active". Confusion matrix: recoverable entries 37% and 9%/90%.]

Top-Down Induction of Decision Trees (ID3)
1. A <- the best decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of the branch
5. If all training examples are perfectly classified (same value of the target attribute), stop; else iterate over the new leaf nodes.
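The five ID3 steps translate into a short recursive procedure. Below is a hedged sketch in pure Python; the dict-based example/attribute representation is my own choice, not from the slides.

```python
# Compact recursive ID3 sketch following the five steps above.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def id3(examples, attrs):
    labels = [y for _, y in examples]
    if len(set(labels)) == 1:                        # step 5: perfectly classified
        return labels[0]
    if not attrs:                                    # no attributes left: majority leaf
        return Counter(labels).most_common(1)[0][0]
    def gain(a):                                     # step 1: pick the best attribute
        g = entropy(labels)
        for v in {x[a] for x, _ in examples}:
            sub = [y for x, y in examples if x[a] == v]
            g -= len(sub) / len(examples) * entropy(sub)
        return g
    best = max(attrs, key=gain)                      # step 2
    tree = {}
    for v in {x[best] for x, _ in examples}:         # steps 3-4: branch, sort examples
        sub = [(x, y) for x, y in examples if x[best] == v]
        tree[v] = id3(sub, [a for a in attrs if a != best])
    return (best, tree)

# A fragment of the PlayTennis data from Slide 7, encoded as dicts:
examples = [({"Outlook": "Sunny", "Humidity": "High"}, "No"),
            ({"Outlook": "Sunny", "Humidity": "Normal"}, "Yes"),
            ({"Outlook": "Overcast", "Humidity": "High"}, "Yes")]
print(id3(examples, ["Outlook", "Humidity"]))
```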
Slide 16: Which Attribute Is Best?
[Figure: the sample [29+,35-] split by A1 into True = [21+,5-] and False = [8+,30-], and by A2 into True = [18+,33-] and False = [11+,2-].]

Entropy
S is a sample of training examples; p+ is the proportion of positive examples and p- the proportion of negative examples. Entropy measures the impurity of S:
Entropy(S) = -p+ log2 p+ - p- log2 p-
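The entropy formula is a one-liner in code. A minimal sketch (the helper name is mine):

```python
# Entropy(S) = -p+ log2 p+ - p- log2 p-
import math

def entropy(pos: int, neg: int) -> float:
    """Entropy of a sample with `pos` positive and `neg` negative examples."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        if count:                       # 0 * log2(0) is taken as 0
            p = count / total
            h -= p * math.log2(p)
    return h

print(round(entropy(29, 35), 2))  # the slide's [29+,35-] sample: about 0.99
```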
Slide 17: Entropy
Entropy(S) is the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code). Why? Information theory: the optimal length code assigns -log2 p bits to a message having probability p. So the expected number of bits to encode (+ or -) of a random member of S is -p+ log2 p+ - p- log2 p-.

Information Gain
Gain(S, A) is the expected reduction in entropy due to sorting S on attribute A:
Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) Entropy(S_v)
For the example of Slide 16: Entropy([29+,35-]) = -(29/64) log2(29/64) - (35/64) log2(35/64) = 0.99
Slide 18: Information Gain (worked example)
Entropy([21+,5-]) = 0.71, Entropy([8+,30-]) = 0.74
Gain(S, A1) = Entropy(S) - (26/64) Entropy([21+,5-]) - (38/64) Entropy([8+,30-]) = 0.27
Entropy([18+,33-]) = 0.94, Entropy([11+,2-]) = 0.62
Gain(S, A2) = Entropy(S) - (51/64) Entropy([18+,33-]) - (13/64) Entropy([11+,2-]) = 0.12
[Figure: the same [29+,35-] splits on A1 and A2 as on Slide 16.]
Training Examples: the PlayTennis table from Slide 7 is repeated here.
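The gain computation can be checked numerically. A self-contained sketch reproducing the slide's numbers (function names are my own):

```python
import math

def entropy(pos, neg):
    total = pos + neg
    return -sum(c / total * math.log2(c / total) for c in (pos, neg) if c)

def gain(parent, splits):
    """Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v)."""
    total = sum(parent)
    return entropy(*parent) - sum((p + n) / total * entropy(p, n) for p, n in splits)

# S = [29+,35-]; A1 splits it into [21+,5-] and [8+,30-], A2 into [18+,33-] and [11+,2-].
print(round(gain((29, 35), [(21, 5), (8, 30)]), 2))   # 0.27
print(round(gain((29, 35), [(18, 33), (11, 2)]), 2))  # 0.12
```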
Slide 19: Selecting the Next Attribute
S = [9+,5-], E = 0.940
Humidity: High -> [3+,4-], E = 0.985; Normal -> [6+,1-], E = 0.592
Gain(S, Humidity) = 0.940 - (7/14)(0.985) - (7/14)(0.592) = 0.151
Wind: Weak -> [6+,2-], E = 0.811; Strong -> [3+,3-], E = 1.0
Gain(S, Wind) = 0.940 - (8/14)(0.811) - (6/14)(1.0) = 0.048
Outlook: Sunny -> [2+,3-], E = 0.971; Overcast -> [4+,0-], E = 0.0; Rain -> [3+,2-], E = 0.971
Gain(S, Outlook) = 0.940 - (5/14)(0.971) - (4/14)(0.0) - (5/14)(0.971) = 0.247
Slide 20: ID3 Algorithm
Splitting [D1, ..., D14] = [9+,5-] on Outlook:
S_sunny = [D1,D2,D8,D9,D11] = [2+,3-]; Overcast = [D3,D7,D12,D13] = [4+,0-] (pure); Rain = [D4,D5,D6,D10,D14] = [3+,2-]
Gain(S_sunny, Humidity) = 0.970 - (3/5)(0.0) - (2/5)(0.0) = 0.970
Gain(S_sunny, Temp.) = 0.970 - (2/5)(0.0) - (2/5)(1.0) - (1/5)(0.0) = 0.570
Gain(S_sunny, Wind) = 0.970 - (2/5)(1.0) - (3/5)(0.918) = 0.019
The resulting tree:
Outlook = Sunny    -> Humidity = High -> No [D1,D2,D8]; Humidity = Normal -> Yes [D9,D11]
Outlook = Overcast -> Yes [D3,D7,D12,D13]
Outlook = Rain     -> Wind = Strong -> No [D6,D14]; Wind = Weak -> Yes [D4,D5,D10]
Slide 21: Overfitting in Decision Tree Learning
[Figure: accuracy on the training and validation data as the tree grows.]

Avoid Overfitting
How can we avoid overfitting?
- Stop growing when the data split is not statistically significant
- Grow the full tree, then post-prune
Minimum description length (MDL): minimize size(tree) + size(misclassifications(tree))
Slide 22: Reduced-Error Pruning
Split the data into a training and a validation set. Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (plus those below it)
2. Greedily remove the one that most improves validation set accuracy
This produces the smallest version of the most accurate subtree.
Effect of Reduced-Error Pruning
[Figure: validation accuracy of the pruned tree versus tree size.]
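The two-step loop above maps directly onto code. A hedged sketch over a small dict-based tree; the node representation (leaf = {"label": L}, internal node = {"attr", "branches", "majority"}) is my own for illustration, not from the slides.

```python
# Reduced-error pruning sketch: trial-prune each node, greedily keep the best.
def classify(node, x):
    while "label" not in node:
        node = node["branches"].get(x[node["attr"]], {"label": node["majority"]})
    return node["label"]

def accuracy(tree, data):
    return sum(classify(tree, x) == y for x, y in data) / len(data)

def internal_nodes(node):
    if "label" not in node:
        yield node
        for child in node["branches"].values():
            yield from internal_nodes(child)

def make_leaf(node):
    label = node["majority"]
    node.clear()
    node["label"] = label              # the node now acts as a majority-class leaf

def reduced_error_prune(tree, val_data):
    """Until further pruning is harmful: prune the node whose removal most
    improves validation-set accuracy."""
    while True:
        best, best_acc = None, accuracy(tree, val_data)
        for node in list(internal_nodes(tree)):
            saved = dict(node)
            make_leaf(node)                       # trial prune
            acc = accuracy(tree, val_data)
            node.clear(); node.update(saved)      # undo the trial
            if acc > best_acc:
                best, best_acc = node, acc
        if best is None:
            return tree
        make_leaf(best)

tree = {"attr": "Outlook", "majority": "Yes", "branches": {
    "Sunny": {"attr": "Humidity", "majority": "No",
              "branches": {"High": {"label": "No"}, "Normal": {"label": "Yes"}}},
    "Overcast": {"label": "Yes"}}}
val = [({"Outlook": "Sunny", "Humidity": "Normal"}, "Yes"),
       ({"Outlook": "Sunny", "Humidity": "High"}, "Yes")]
print(reduced_error_prune(tree, val))   # on this validation set, collapses to a "Yes" leaf
```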
Slide 23: Continuous-Valued Attributes
Create a discrete attribute to test a continuous one: from a continuous Temperature, define a boolean attribute (Temperature > c) ∈ {true, false}. Where to set the threshold c?

Temperature: 15°C  18°C  19°C  22°C  24°C  27°C
PlayTennis:  No    No    Yes   Yes   Yes   No

(See the paper by [Fayyad, Irani 1993].)

Attributes with Many Values
Problem: if an attribute has many values, maximizing information gain will select it. E.g., using Date as an attribute perfectly splits the data into subsets of size 1. Use GainRatio instead of information gain as the criterion:
GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)
SplitInformation(S, A) = -Σ_{i=1..c} (|S_i| / |S|) log2(|S_i| / |S|)
where S_i is the subset of S for which attribute A has the value v_i.
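SplitInformation is what penalizes many-valued attributes such as Date. A minimal sketch of the two formulas (function names are my own):

```python
import math

def split_information(sizes):
    """-sum_i |S_i|/|S| log2(|S_i|/|S|) over the subsets induced by attribute A."""
    total = sum(sizes)
    return -sum(s / total * math.log2(s / total) for s in sizes if s)

def gain_ratio(gain_value, sizes):
    return gain_value / split_information(sizes)

# Splitting 14 examples on Wind (8 Weak, 6 Strong) vs. on a Date-like attribute
# that isolates every example (14 singleton subsets):
print(round(split_information([8, 6]), 3))     # modest denominator
print(round(split_information([1] * 14), 3))   # large penalty: log2(14) ~ 3.807
```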
Slide 24: Attributes with Cost
Consider: in medical diagnosis, a blood test costs 1000 SEK; in robotics, width_from_one_feet has a cost of 23 secs. How can we learn a consistent tree with low expected cost? Replace Gain by:
Gain²(S, A) / Cost(A)   [Tan, Schlimmer 1990]
(2^Gain(S, A) - 1) / (Cost(A) + 1)^w, with w ∈ [0,1]   [Nunez 1988]

Unknown Attribute Values
What if some examples are missing values of A? Use the training example anyway, and sort it through the tree. If node n tests A:
- Assign the most common value of A among the other examples sorted to node n, or
- Assign the most common value of A among the other examples with the same target value, or
- Assign probability p_i to each possible value v_i of A, and assign fraction p_i of the example to each descendant in the tree
Classify new examples in the same fashion.
Slide 25: Cross-Validation
Uses:
- Estimate the accuracy of a hypothesis induced by a supervised learning algorithm
- Predict the accuracy of a hypothesis over future unseen instances
- Select the optimal hypothesis from a given set of alternatives: pruning decision trees, model selection, feature selection
- Combining multiple classifiers (boosting)

Holdout Method
Partition the data set D = {(v_1,y_1), ..., (v_n,y_n)} into a training set D_t and a validation (holdout) set D_h = D \ D_t:
acc_h = (1/h) Σ_{(v_i,y_i) ∈ D_h} δ(I(D_t, v_i), y_i)
where I(D_t, v_i) is the output of the hypothesis induced by learner I trained on data D_t for instance v_i, and δ(i,j) = 1 if i = j and 0 otherwise.
Problems: makes insufficient use of the data; training and validation set are correlated.
Slide 26: Cross-Validation
k-fold cross-validation splits the data set D into k mutually exclusive subsets D_1, D_2, ..., D_k. Train and test the learning algorithm k times; each time it is trained on D \ D_i and tested on D_i:
acc_cv = (1/n) Σ_{(v_i,y_i) ∈ D} δ(I(D \ D_i, v_i), y_i)
Cross-validation uses all the data for training and testing.
- Complete k-fold cross-validation splits the dataset of size m in all (m choose m/k) possible ways (choosing m/k instances out of m)
- Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training (leave-one-out is equivalent to n-fold cross-validation)
- In stratified cross-validation, the folds are stratified so that they contain approximately the same proportion of labels as the original data set
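A minimal k-fold sketch, mirroring the formula above: train on D \ D_i, test on D_i, and pool the correct predictions over all folds. The toy data, k = 4, and the use of a decision tree as the learner are my own illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def kfold_accuracy(X, y, k=4, seed=0):
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)               # D_1, ..., D_k, mutually exclusive
    correct = 0
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = DecisionTreeClassifier().fit(X[train], y[train])  # train on D \ D_i
        correct += (model.predict(X[test]) == y[test]).sum()      # test on D_i
    return correct / len(X)                      # acc_cv = 1/n * total correct

X = np.random.default_rng(1).normal(size=(120, 5))
y = (X[:, 0] > 0).astype(int)
print(kfold_accuracy(X, y))
```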
Slide 27: Neural Networks
- Perceptrons
- Gradient descent
- Multi-layer networks
- Backpropagation

Biological Neural Systems
- Neuron switching time: ~10^-3 secs
- Number of neurons in the human brain: ~10^10
- Connections (synapses) per neuron: ~10^4-10^5
- Face recognition: ~0.1 secs
- High degree of parallel computation
- Distributed representations
Slide 28: Properties of Artificial Neural Nets (ANNs)
- Many simple neuron-like threshold switching units
- Many weighted interconnections among units
- Highly parallel, distributed processing
- Learning by tuning the connection weights

Appropriate Problem Domains for Neural Network Learning
- Input is high-dimensional, discrete or real-valued (e.g. raw sensor input)
- Output is discrete or real-valued, possibly a vector of values
- Form of the target function is unknown
- Humans do not need to interpret the results (black-box model)
Slide 29: Perceptron
Linear threshold unit (LTU): with inputs x_1, ..., x_n, weights w_1, ..., w_n, and a bias input x_0 = 1 with weight w_0, the unit computes Σ_{i=0..n} w_i x_i and outputs
o(x_1, ..., x_n) = 1 if Σ_{i=0..n} w_i x_i > 0, and -1 otherwise.

Decision Surface of a Perceptron
[Figure: in the (x_1, x_2) plane, a line separates the + examples from the - examples; a second panel shows the XOR configuration, which no single line separates.]
A perceptron is able to represent some useful functions: for And(x_1, x_2), choose weights w_0 = -1.5, w_1 = 1, w_2 = 1. But functions that are not linearly separable (e.g. Xor) are not representable.
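A quick check of the LTU and the AND weights given on the slide (the function name is mine):

```python
# o(x) = 1 if sum_i w_i x_i > 0 else -1, with the bias input x0 = 1 prepended.
def ltu(weights, x):
    s = sum(w * xi for w, xi in zip(weights, [1.0] + list(x)))
    return 1 if s > 0 else -1

w_and = [-1.5, 1.0, 1.0]                 # w0, w1, w2 from the slide
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, ltu(w_and, x))              # fires (+1) only for (1, 1): And(x1, x2)
```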
Slide 30: Perceptron Learning Rule
w_i <- w_i + Δw_i, with Δw_i = η (t - o) x_i
where t = c(x) is the target value, o is the perceptron output, and η is a small constant (e.g. 0.1) called the learning rate. If the output is correct (t = o), the weights w_i are not changed; if the output is incorrect (t ≠ o), the weights are changed so that the output of the perceptron for the new weights is closer to t. The algorithm converges to the correct classification if the training data is linearly separable and η is sufficiently small.
[Worked example: successive applications of the rule to the points ([-1,-1], 1), ([2,1], -1), and ([1,1], 1), showing the weight vector and the output o = sgn(Σ w_i x_i) after each update.]
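The rule becomes a short training loop. A minimal sketch; the AND data with ±1 targets, η = 0.1, and the epoch count are illustrative choices of mine, not the slide's worked example.

```python
def train_perceptron(data, eta=0.1, epochs=50):
    n = len(data[0][0])
    w = [0.0] * (n + 1)                        # w[0] is the bias weight (x0 = 1)
    for _ in range(epochs):
        for x, t in data:
            o = 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0 else -1
            if o != t:                         # w_i <- w_i + eta (t - o) x_i
                w[0] += eta * (t - o)
                for i, xi in enumerate(x):
                    w[i + 1] += eta * (t - o) * xi
    return w

data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]  # AND with +-1 targets
print(train_perceptron(data))   # converges: the data is linearly separable
```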
Slide 31: Gradient Descent Learning Rule
Consider a linear unit without a threshold, with continuous output o (not just ±1):
o = w_0 + w_1 x_1 + ... + w_n x_n
Train the w_i so that they minimize the squared error
E[w_1, ..., w_n] = (1/2) Σ_{d ∈ D} (t_d - o_d)²
where D is the set of training examples.

Gradient Descent
Example: D = {<(1,1),1>, <(-1,-1),1>, <(1,-1),-1>, <(-1,1),-1>}
Gradient: ∇E[w] = [∂E/∂w_0, ..., ∂E/∂w_n]; the update Δw = -η ∇E[w] moves the weights from (w_1, w_2) to (w_1 + Δw_1, w_2 + Δw_2). Component-wise:
Δw_i = -η ∂E/∂w_i
∂E/∂w_i = ∂/∂w_i (1/2) Σ_d (t_d - o_d)² = ∂/∂w_i (1/2) Σ_d (t_d - Σ_i w_i x_{i,d})² = Σ_d (t_d - o_d)(-x_{i,d})
Slide 32: Gradient Descent Algorithm
Gradient-Descent(training_examples, η): each training example is a pair <(x_1, ..., x_n), t>, where (x_1, ..., x_n) is the vector of input values, t is the target output value, and η is the learning rate (e.g. 0.1).
- Initialize each w_i to some small random value
- Until the termination condition is met, do:
  - Initialize each Δw_i to zero
  - For each <(x_1, ..., x_n), t> in training_examples: input the instance (x_1, ..., x_n) to the linear unit and compute the output o; for each weight, Δw_i <- Δw_i + η (t - o) x_i
  - For each weight: w_i <- w_i + Δw_i
(A Python sketch of this batch procedure follows below.)

Incremental (Stochastic) Gradient Descent
- Batch mode: gradient descent w <- w - η ∇E_D[w] over the entire data D, with E_D[w] = (1/2) Σ_d (t_d - o_d)²
- Incremental mode: gradient descent w <- w - η ∇E_d[w] over individual training examples d, with E_d[w] = (1/2)(t_d - o_d)²
Incremental gradient descent can approximate batch gradient descent arbitrarily closely if η is small enough.
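A sketch of the batch procedure for the linear unit o = w_0 + w_1 x_1 + ... + w_n x_n, run on the example set D from Slide 31; η and the epoch count are arbitrary choices of mine.

```python
import random

def gradient_descent(examples, eta=0.05, epochs=200):
    n = len(examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]  # small random init
    for _ in range(epochs):
        dw = [0.0] * (n + 1)                      # initialize each Delta w_i to zero
        for x, t in examples:
            o = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            dw[0] += eta * (t - o)                # Delta w_i += eta (t - o) x_i
            for i, xi in enumerate(x):
                dw[i + 1] += eta * (t - o) * xi
        w = [wi + d for wi, d in zip(w, dw)]      # w_i <- w_i + Delta w_i
    return w

# Slide 31's D; it is not linearly separable, so gradient descent settles at the
# minimum-squared-error weights (here all approximately 0).
examples = [((1, 1), 1), ((-1, -1), 1), ((1, -1), -1), ((-1, 1), -1)]
print([round(v, 2) for v in gradient_descent(examples)])
```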
Slide 33: Comparison of Perceptron and Gradient Descent Rules
- The perceptron learning rule is guaranteed to succeed if the training examples are linearly separable and the learning rate η is sufficiently small
- The linear unit training rule uses gradient descent: guaranteed to converge to the hypothesis with minimum squared error, given a sufficiently small learning rate η, even when the training data contains noise and even when the training data is not separable by H

Multi-Layer Networks
[Figure: a feed-forward network with an input layer, a hidden layer, and an output layer.]
Slide 34: Sigmoid Unit
net = Σ_{i=0..n} w_i x_i,  o = σ(net) = 1 / (1 + e^{-net})
σ(x) = 1 / (1 + e^{-x}) is the sigmoid function; its derivative is dσ(x)/dx = σ(x)(1 - σ(x)).
We can derive gradient descent rules to train one sigmoid unit:
∂E/∂w_i = -Σ_d (t_d - o_d) o_d (1 - o_d) x_{i,d}
and multilayer networks of sigmoid units, via backpropagation.

Backpropagation Algorithm
- Initialize each w_{i,j} to some small random value
- Until the termination condition is met, do: for each training example <(x_1, ..., x_n), t>:
  - Input the instance (x_1, ..., x_n) to the network and compute the network outputs o_k
  - For each output unit k: δ_k = o_k (1 - o_k)(t_k - o_k)
  - For each hidden unit h: δ_h = o_h (1 - o_h) Σ_k w_{h,k} δ_k
  - For each network weight w_{i,j}: w_{i,j} <- w_{i,j} + Δw_{i,j}, where Δw_{i,j} = η δ_j x_{i,j}
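The algorithm maps almost line-for-line onto code. Below is a hedged NumPy sketch for one hidden layer of sigmoid units; the XOR data, the 2-4-1 architecture, η = 0.5, and the epoch count are illustrative choices of mine, not from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # XOR: needs a hidden layer
T = np.array([[0], [1], [1], [0]], float)

W1 = rng.uniform(-0.5, 0.5, (2, 4))   # input -> hidden weights (small random init)
b1 = np.zeros(4)
W2 = rng.uniform(-0.5, 0.5, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5

for _ in range(10000):
    for x, t in zip(X, T):
        h = sigmoid(x @ W1 + b1)                 # forward pass
        o = sigmoid(h @ W2 + b2)
        delta_o = o * (1 - o) * (t - o)          # delta_k = o_k (1 - o_k)(t_k - o_k)
        delta_h = h * (1 - h) * (W2 @ delta_o)   # delta_h = o_h (1 - o_h) sum_k w_hk delta_k
        W2 += eta * np.outer(h, delta_o)         # w_ij <- w_ij + eta delta_j x_ij
        b2 += eta * delta_o
        W1 += eta * np.outer(x, delta_h)
        b1 += eta * delta_h

# Typically converges to outputs close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```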
Slide 35: Backpropagation
- Gradient descent over the entire network weight vector; easily generalized to arbitrary directed graphs
- Will find a local, not necessarily global, error minimum; in practice it often works well (can be invoked multiple times with different initial weights)
- Often includes a weight momentum term: Δw_{i,j}(n) = η δ_j x_{i,j} + α Δw_{i,j}(n-1)
- Minimizes error over the training examples; will it generalize well to unseen instances (overfitting)?
- Training can be slow, often requiring many iterations (Levenberg-Marquardt can be used instead of gradient descent); using the network after training is fast

Binary Encoder-Decoder
A network with 8 inputs, 3 hidden units, and 8 outputs learns the identity mapping over one-hot inputs; the 3 hidden units learn a compact, roughly binary code for the 8 input patterns.
[Table: the learned hidden-unit values for each of the 8 inputs.]
Slide 36:
[Plots: the sum of squared errors for the output units over training iterations, and the hidden unit encoding for one input over training iterations.]
Slide 37: Convergence of Backprop
Gradient descent converges to some local minimum, perhaps not the global minimum. Remedies: add momentum; use stochastic gradient descent; train multiple nets with different initial weights.
Nature of convergence: initialize the weights near zero, so the initial network is near-linear; increasingly non-linear functions become possible as training progresses.

Expressive Capabilities of ANNs
- Boolean functions: every boolean function can be represented by a network with a single hidden layer, but this might require a number of hidden units exponential in the number of inputs
- Continuous functions: every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer [Cybenko 1989; Hornik 1989]; any function can be approximated to arbitrary accuracy by a network with two hidden layers [Cybenko 1988]