Mining Classification Knowledge


Mining Classification Knowledge: Remarks on Non-Symbolic Methods. JERZY STEFANOWSKI, Institute of Computing Sciences, Poznań University of Technology. COST Doctoral School, Troina 2008

Outline 1. Bayesian classification 2. K nearest neighbors 3. Linear discrimination 4. Artificial neural networks 5. Support vector machines

Bayesian Classification: Why? Probabilistic learning: calculate explicit probabilities for hypotheses (decisions); among the most practical approaches to certain types of learning problems. Probabilistic prediction: predict multiple hypotheses, weighted by their probabilities. Standard: even in cases where Bayesian methods prove computationally intractable, they provide a standard of optimal decision making against which other methods can be measured.

Bayes Theorem. Given training data D, the posterior probability of a hypothesis h, P(h|D), follows from Bayes' theorem: P(h|D) = P(D|h) P(h) / P(D). MAP (maximum a posteriori) hypothesis: h_MAP = argmax_{h in H} P(h|D) = argmax_{h in H} P(D|h) P(h). Practical difficulty: requires initial knowledge of many probabilities and carries significant computational cost.
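As a tiny illustration of the MAP rule (an added sketch, not part of the original slides; the priors and likelihoods are made-up numbers), the hypothesis maximizing P(D|h) P(h) is selected, and P(D) can be dropped because it is common to all hypotheses:

```python
# Hypothetical priors P(h) and likelihoods P(D|h) for three candidate hypotheses.
priors = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
likelihoods = {"h1": 0.02, "h2": 0.10, "h3": 0.05}

# MAP hypothesis: argmax_h P(D|h) * P(h); P(D) is a common factor and can be ignored.
h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])
print(h_map)  # -> "h2" (0.3 * 0.10 = 0.03 beats 0.5 * 0.02 and 0.2 * 0.05)
```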

Naïve Bayes Classifier (I). A simplifying assumption: attributes are conditionally independent given the class: P(C_j | V) ∝ P(C_j) · Π_{i=1..n} P(v_i | C_j). This greatly reduces the computation cost: only the class distribution and per-attribute value counts are needed.

Probabilities for weather data
The weather dataset (14 instances) from which the counts on the next slide are derived:
Outlook   Temp  Humidity  Windy  Play
Sunny     Hot   High      False  No
Sunny     Hot   High      True   No
Overcast  Hot   High      False  Yes
Rainy     Mild  High      False  Yes
Rainy     Cool  Normal    False  Yes
Rainy     Cool  Normal    True   No
Overcast  Cool  Normal    True   Yes
Sunny     Mild  High      False  No
Sunny     Cool  Normal    False  Yes
Rainy     Mild  Normal    False  Yes
Sunny     Mild  Normal    True   Yes
Overcast  Mild  High      True   Yes
Overcast  Hot   Normal    False  Yes
Rainy     Mild  High      True   No
witten&eibe

Probabilities for weather data
Counts and relative frequencies of each attribute value per class (Play = Yes / No):
Attribute    Value     Yes  No   P(value|yes)  P(value|no)
Outlook      Sunny      2    3      2/9           3/5
Outlook      Overcast   4    0      4/9           0/5
Outlook      Rainy      3    2      3/9           2/5
Temperature  Hot        2    2      2/9           2/5
Temperature  Mild       4    2      4/9           2/5
Temperature  Cool       3    1      3/9           1/5
Humidity     High       3    4      3/9           4/5
Humidity     Normal     6    1      6/9           1/5
Windy        False      6    2      6/9           2/5
Windy        True       3    3      3/9           3/5
Play                    9    5      9/14          5/14
A new day: Outlook = Sunny, Temp. = Cool, Humidity = High, Windy = True, Play = ?
Likelihood of the two classes:
For yes = 2/9 × 3/9 × 3/9 × 3/9 × 9/14 = 0.0053
For no = 3/5 × 1/5 × 4/5 × 3/5 × 5/14 = 0.0206
Conversion into probabilities by normalization:
P(yes) = 0.0053 / (0.0053 + 0.0206) = 0.205
P(no) = 0.0206 / (0.0053 + 0.0206) = 0.795
witten&eibe
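The same computation can be written as a short program. The sketch below (not from the original slides; the data encoding and the helper name naive_bayes_classify are assumptions for illustration) recomputes the counts from the 14 weather instances and classifies the new day:

```python
from collections import Counter, defaultdict

# The 14 weather instances (Outlook, Temp, Humidity, Windy) with class Play.
data = [
    (("Sunny", "Hot", "High", "False"), "No"),
    (("Sunny", "Hot", "High", "True"), "No"),
    (("Overcast", "Hot", "High", "False"), "Yes"),
    (("Rainy", "Mild", "High", "False"), "Yes"),
    (("Rainy", "Cool", "Normal", "False"), "Yes"),
    (("Rainy", "Cool", "Normal", "True"), "No"),
    (("Overcast", "Cool", "Normal", "True"), "Yes"),
    (("Sunny", "Mild", "High", "False"), "No"),
    (("Sunny", "Cool", "Normal", "False"), "Yes"),
    (("Rainy", "Mild", "Normal", "False"), "Yes"),
    (("Sunny", "Mild", "Normal", "True"), "Yes"),
    (("Overcast", "Mild", "High", "True"), "Yes"),
    (("Overcast", "Hot", "Normal", "False"), "Yes"),
    (("Rainy", "Mild", "High", "True"), "No"),
]

class_counts = Counter(label for _, label in data)   # numerators of the priors P(C_j)
value_counts = defaultdict(Counter)                  # numerators of P(v_i | C_j)
for attrs, label in data:
    for i, value in enumerate(attrs):
        value_counts[(i, label)][value] += 1

def naive_bayes_classify(attrs):
    """Return class scores P(C_j) * prod_i P(v_i | C_j); None marks a missing value."""
    scores = {}
    for label, n_label in class_counts.items():
        score = n_label / len(data)                             # prior P(C_j)
        for i, value in enumerate(attrs):
            if value is None:                                   # missing attribute: omit its factor
                continue
            score *= value_counts[(i, label)][value] / n_label  # P(v_i | C_j)
        scores[label] = score
    return scores

scores = naive_bayes_classify(("Sunny", "Cool", "High", "True"))
total = sum(scores.values())
print({label: round(s / total, 3) for label, s in scores.items()})  # {'Yes': 0.205, 'No': 0.795}
```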

Missing values
Training: the instance is not included in the frequency count for the attribute value–class combination.
Classification: the attribute is omitted from the calculation.
Example: Outlook = ?, Temp. = Cool, Humidity = High, Windy = True, Play = ?
Likelihood of yes = 3/9 × 3/9 × 3/9 × 9/14 = 0.0238
Likelihood of no = 1/5 × 4/5 × 3/5 × 5/14 = 0.0343
P(yes) = 0.0238 / (0.0238 + 0.0343) = 41%
P(no) = 0.0343 / (0.0238 + 0.0343) = 59%
witten&eibe
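Continuing the hypothetical naive_bayes_classify sketch from the previous slide's example, a missing attribute value can simply be passed as None so that its factor is omitted from the product, reproducing the 41% / 59% result:

```python
scores = naive_bayes_classify((None, "Cool", "High", "True"))  # Outlook is missing
total = sum(scores.values())
print({label: round(s / total, 2) for label, s in scores.items()})  # {'Yes': 0.41, 'No': 0.59}
```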

Naïve Bayes: discussion. Naïve Bayes works surprisingly well (even if the independence assumption is clearly violated). Why? Because classification doesn't require accurate probability estimates as long as the maximum probability is assigned to the correct class. However: adding too many redundant attributes will cause problems (e.g. identical attributes). Note also: many numeric attributes are not normally distributed (use kernel density estimators instead).

Instance-Based Methods. Instance-based learning: store the training examples and delay the processing ("lazy evaluation") until a new instance must be classified. Typical approaches: k-nearest neighbor, where instances are represented as points in a Euclidean space; locally weighted regression, which constructs a local approximation.

k-Nearest Neighbor Algorithm (the case of a discrete set of classes). 1. Take the instance x to be classified. 2. Find the k nearest neighbors of x in the training data. 3. Determine the class c of the majority of the instances among the k nearest neighbors. 4. Return the class c as the classification of x. The distance function is composed from per-attribute difference metrics d_a defined between any two instances x_i and x_j.
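A minimal code sketch of these four steps (not from the original slides), assuming numeric attributes, Euclidean distance, and an illustrative helper named knn_classify:

```python
import math
from collections import Counter

def knn_classify(x, training_data, k=3):
    """Classify x by majority vote among its k nearest neighbors.

    training_data is a list of (attribute_vector, class_label) pairs.
    """
    # Step 2: sort training instances by Euclidean distance to x and keep the k closest.
    neighbors = sorted(training_data, key=lambda item: math.dist(x, item[0]))[:k]
    # Step 3: majority class among the k nearest neighbors.
    votes = Counter(label for _, label in neighbors)
    # Step 4: return the majority class.
    return votes.most_common(1)[0][0]

# Usage with a tiny made-up 2D dataset.
train = [((1.0, 1.0), "pos"), ((1.2, 0.8), "pos"), ((4.0, 4.2), "neg"), ((3.8, 4.0), "neg")]
print(knn_classify((1.1, 0.9), train, k=3))  # -> "pos"
```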

Classification & Decision Boundaries (figure): for a query point q1 near example e1, 1-NN classifies q1 as positive, while 5-NN classifies q1 as negative.

Discussion on the k-NN Algorithm. The k-NN algorithm for continuous-valued target functions: calculate the mean value of the k nearest neighbors. Distance-weighted nearest neighbor algorithm: weight the contribution of each of the k neighbors according to its distance to the query point x_q, giving greater weight to closer neighbors: w_i = 1 / d(x_q, x_i)^2. Similarly, we can distance-weight the instances for real-valued target functions. Robust to noisy data by averaging the k nearest neighbors. Curse of dimensionality: the distance between neighbors can be dominated by irrelevant attributes. To overcome it, stretch the axes or eliminate the least relevant attributes.
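The distance-weighted variant can be sketched in the same style (again an illustrative assumption, not the authors' code), with each neighbor voting with weight w_i = 1 / d(x_q, x_i)^2:

```python
import math
from collections import defaultdict

def weighted_knn_classify(x, training_data, k=3, eps=1e-12):
    """Vote with weights 1 / d(x_q, x_i)^2; eps avoids division by zero for exact matches."""
    neighbors = sorted(training_data, key=lambda item: math.dist(x, item[0]))[:k]
    votes = defaultdict(float)
    for attrs, label in neighbors:
        votes[label] += 1.0 / (math.dist(x, attrs) ** 2 + eps)
    return max(votes, key=votes.get)
```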

Disadvantages of the NN Algorithm. The NN algorithm has large storage requirements because it has to store all the data; it is slow during instance classification because all the training instances have to be visited; its accuracy degrades as noise in the training data increases; its accuracy degrades as the number of irrelevant attributes increases.

Condensed NN Algorithm. The Condensed NN algorithm was introduced to reduce the storage requirements of the NN algorithm. The algorithm finds a subset S of the training data D such that each instance in D can be correctly classified by the NN algorithm applied on the subset S. The average reduction achieved by the algorithm varies between 60% and 80%. (Figure: S shown as a subset of D.)
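One common formulation of the condensing step, sketched here as an assumption rather than the exact procedure from the slides: instances that the current subset S already classifies correctly are skipped, and misclassified ones are added to S, until a full pass over D adds nothing. The helper name nearest_label is made up for illustration.

```python
import math

def nearest_label(x, subset):
    """1-NN: label of the training instance in subset closest to x."""
    return min(subset, key=lambda item: math.dist(x, item[0]))[1]

def condensed_nn(training_data):
    """Return a subset S of training_data such that 1-NN on S classifies all of training_data correctly."""
    subset = [training_data[0]]                      # seed S with a single instance
    changed = True
    while changed:                                   # repeat until a full pass adds nothing
        changed = False
        for attrs, label in training_data:
            if nearest_label(attrs, subset) != label:
                subset.append((attrs, label))        # keep only instances S currently gets wrong
                changed = True
    return subset
```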

Remarks on Lazy vs. Eager Learning. Instance-based learning: lazy evaluation. Decision-tree and Bayesian classification: eager evaluation. Key differences: a lazy method may consider the query instance x_q when deciding how to generalize beyond the training data D; an eager method cannot, since it has already chosen its global approximation before seeing the query. Efficiency: lazy methods spend less time on training but more time on predicting. Accuracy: a lazy method effectively uses a richer hypothesis space, since it uses many local linear functions to form its implicit global approximation to the target function; an eager method must commit to a single hypothesis that covers the entire instance space.

Linear Classification. Binary classification problem (figure: points of class x above a separating line and points of class o below it). The data above the red line belongs to class x; the data below the red line belongs to class o. Examples: SVM, Perceptron, probabilistic classifiers.

Discriminative Classifiers. Advantages: prediction accuracy is generally high (compared to Bayesian methods in general); robust, works when training examples contain errors; fast evaluation of the learned target function. Criticism: long training time; difficult to understand the learned function (weights); not easy to incorporate domain knowledge (which is easy for Bayesian methods in the form of priors on the data or distributions).

Neural Networks. Analogy to biological systems (indeed a great example of a good learning system). Massive parallelism allows for computational efficiency. The first learning algorithm came in 1959 (Rosenblatt), who suggested that if a target output value is provided for a single neuron with fixed inputs, one can incrementally change the weights to learn to produce these outputs using the perceptron learning rule.

A Neuron (figure: inputs x_0, x_1, ..., x_n with weights w_0, w_1, ..., w_n feed a weighted sum with threshold µ_k and activation function f, producing output y). Input vector x, weight vector w, weighted sum, activation function. The n-dimensional input vector x is mapped into the variable y by means of the scalar product and a nonlinear function mapping.

A Neuron (figure, continued: the same diagram with the mapping made explicit). For example, y = sign( Σ_{i=0..n} w_i x_i − µ_k ), i.e. the weighted sum passed through a sign activation function.
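A minimal sketch of this neuron together with the perceptron learning rule mentioned two slides earlier (illustrative code and made-up data, not from the slides):

```python
import numpy as np

def neuron_output(x, w, mu):
    """y = sign(sum_i w_i * x_i - mu), the thresholded weighted sum."""
    return 1 if np.dot(w, x) - mu >= 0 else -1

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge the weights only when an example is misclassified."""
    w = np.zeros(X.shape[1])
    mu = 0.0
    for _ in range(epochs):
        for x_i, t in zip(X, y):
            if neuron_output(x_i, w, mu) != t:   # update only on mistakes
                w += lr * t * x_i
                mu -= lr * t
    return w, mu

# Toy linearly separable data: class +1 in the upper right, -1 in the lower left.
X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -1.5], [-1.0, -2.5]])
y = np.array([1, 1, -1, -1])
w, mu = train_perceptron(X, y)
print([neuron_output(x_i, w, mu) for x_i in X])  # -> [1, 1, -1, -1]
```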

Multi-Layer Perceptron (figure: an input vector x_i feeds input nodes, which feed hidden nodes, which feed output nodes producing the output vector). For a unit j with inputs O_i and weights w_ij: I_j = Σ_i w_ij O_i + θ_j, O_j = 1 / (1 + e^(−I_j)). For an output unit j: Err_j = O_j (1 − O_j)(T_j − O_j). For a hidden unit j: Err_j = O_j (1 − O_j) Σ_k Err_k w_jk. Updates with learning rate (l): w_ij = w_ij + (l) Err_j O_i, θ_j = θ_j + (l) Err_j.

Network Training. The ultimate objective of training: obtain a set of weights that makes almost all the examples in the training data classified correctly. Steps: initial weights are set randomly; input examples are fed into the network one by one; activation values for the hidden nodes are computed; the output vector can be computed after the activation values of all hidden nodes are available; weights are adjusted using the error (desired output − actual output).
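These steps, combined with the update formulas from the multi-layer perceptron slide, can be sketched as a small NumPy program; the single hidden layer, sigmoid units, learning rate, and XOR data below are assumptions chosen for illustration, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input examples
T = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Initial weights are set randomly (input->hidden and hidden->output, with biases).
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
lr = 0.5

for epoch in range(5000):
    for x, t in zip(X, T):                     # examples are fed in one by one
        h = sigmoid(x @ W1 + b1)               # hidden activations
        o = sigmoid(h @ W2 + b2)               # output vector
        err_o = o * (1 - o) * (t - o)          # Err_j for output units
        err_h = h * (1 - h) * (W2 @ err_o)     # Err_j for hidden units
        W2 += lr * np.outer(h, err_o)          # w_jk += (l) Err_k O_j
        b2 += lr * err_o
        W1 += lr * np.outer(x, err_h)          # w_ij += (l) Err_j O_i
        b1 += lr * err_h

# Outputs typically approach [0, 1, 1, 0] after training (convergence is not guaranteed).
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```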

Neural Networks. Advantages: prediction accuracy is generally high; robust, works when training examples contain errors; the output may be discrete, real-valued, or a vector of several discrete or real-valued attributes; fast evaluation of the learned target function. Criticism: long training time; difficult to understand the learned function (weights); not easy to incorporate domain knowledge.

SVM – Support Vector Machines (figure: two separating hyperplanes, one with a small margin and one with a large margin; the instances closest to the hyperplane are the support vectors).

SVM Cont. Linear Support Vector Machine. Given a set of points x_i ∈ R^n with labels y_i ∈ {−1, 1}, the SVM finds a hyperplane defined by the pair (w, b) (where w is the normal to the plane and b is the distance from the origin) such that y_i (x_i · w + b) ≥ 1 for i = 1, ..., N. Here x_i is the feature vector, b the bias, y_i the class label, and w determines the margin.
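A small numeric check of the constraint y_i (x_i · w + b) ≥ 1; the points, w, and b below are made-up values picked by hand (a trained SVM would instead return the maximal-margin hyperplane):

```python
import numpy as np

# Toy linearly separable points with labels in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])

# A hand-picked separating hyperplane (w, b) satisfying the margin constraints.
w = np.array([0.5, 0.5])
b = 0.0

margins = y * (X @ w + b)          # y_i (x_i . w + b) for each point
print(margins)                     # -> [2. 3. 2. 2.]
print(bool(np.all(margins >= 1)))  # True: all constraints satisfied
```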

SVM Cont. What if the data is not linearly separable? Project the data to a high-dimensional space where it is linearly separable and then use a linear SVM (using kernels). (Figure: a small example with points at (0,0), (0,1) and (1,0).)

Non-Linear SVM. Classification using an SVM (w, b): decide the class by checking whether x_i · w + b > 0. In the non-linear case we can see this as checking whether K(x_i, w) + b > 0. The kernel K can be thought of as computing a dot product in some high-dimensional space.
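A minimal sketch of the kernel idea with an assumed degree-2 polynomial kernel (illustrative, not from the slides): the kernel value equals a dot product in a higher-dimensional feature space without ever constructing that space explicitly.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel (x in R^2 -> R^3)."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(a, b):
    """K(a, b) = (a . b)^2, which equals phi(a) . phi(b)."""
    return float(np.dot(a, b)) ** 2

a, b = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(poly_kernel(a, b))              # 25.0
print(float(np.dot(phi(a), phi(b))))  # 25.0: the dot product in the high-dimensional space
```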

Changing Attribute Space

SVM vs. Neural Network. SVM: relatively new concept; nice generalization properties; hard to learn – learned in batch mode using quadratic programming techniques; using kernels, it can learn quite complex functions. Neural Network: quite old; generalizes well but doesn't have such a strong mathematical foundation; can easily be learned in an incremental fashion; to learn complex functions, use a multilayer perceptron (not that trivial).