Advanced classification methods

Instance-based classification. Bayesian classification.

Instance-Based Classifiers. Store the training records; use the training records to predict the class label of unseen cases. (Figure: a set of stored cases with attributes Atr1 ... AtrN and a class label A, B, or C, plus an unseen case with the same attributes to be classified.)

Instance-Based Classifiers. Examples: Rote-learner: memorizes the entire training data and performs classification only if the attributes of a record match one of the training examples exactly. Nearest neighbor: uses the k closest points (nearest neighbors) to perform classification.

Nearest Neighbor Classifiers. Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck. (Figure: compute the distance from the test record to the training records and choose the k nearest of them.)

Nearest-Neighbor Classifiers. Requires three things: the set of stored records, a distance metric to compute the distance between records, and the value of k, the number of nearest neighbors to retrieve. To classify an unknown record: compute its distance to the training records, identify the k nearest neighbors, and use the class labels of those neighbors to determine the class label of the unknown record (e.g., by taking a majority vote).

Definition of Nearest Neighbor. The k-nearest neighbors of a record x are the data points that have the k smallest distances to x. (Figure: panels (a) 1-nearest neighbor, (b) 2-nearest neighbor, (c) 3-nearest neighbor around a test point x.)

1-Nearest Neighbor: Voronoi Diagram

Nearest Neighbor Classification. Compute the distance between two points, e.g., the Euclidean distance d(p, q) = sqrt( Σ_i (p_i − q_i)² ). Determine the class from the nearest-neighbor list: take the majority vote of the class labels among the k nearest neighbors. The vote can also be weighed according to distance, e.g., with weight factor w = 1/d².
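As a concrete illustration (a minimal sketch, not code from the lecture), the following Python snippet computes Euclidean distances, picks the k nearest training records, and takes a majority vote that can optionally be weighted by 1/d².

```python
# Minimal k-NN sketch: Euclidean distance plus (optionally distance-weighted)
# majority vote. The training data below is purely illustrative.
import math
from collections import defaultdict

def euclidean(p, q):
    # d(p, q) = sqrt(sum_i (p_i - q_i)^2)
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def knn_predict(train, test_point, k=3, weighted=False):
    # train: list of (attribute_vector, class_label) pairs
    neighbors = sorted(train, key=lambda rec: euclidean(rec[0], test_point))[:k]
    votes = defaultdict(float)
    for x, label in neighbors:
        d = euclidean(x, test_point)
        votes[label] += 1.0 / (d ** 2) if weighted and d > 0 else 1.0
    return max(votes, key=votes.get)

train = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((5.0, 5.2), "B"), ((4.8, 5.1), "B")]
print(knn_predict(train, (1.1, 1.0), k=3))   # -> "A"
```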

Nearest Neighbor Classification. Choosing the value of k: if k is too small, the classifier is sensitive to noise points; if k is too large, the neighborhood may include points from other classes.

Nearest Neighbor Classification. Scaling issues: attributes may have to be scaled to prevent distance measures from being dominated by one of the attributes. Example: the height of a person may vary from 1.5 m to 1.8 m, the weight of a person from 90 lb to 300 lb, and the income of a person from $10K to $1M.
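A sketch of one common remedy, assuming purely numeric attributes (the scaling function and data below are illustrative, not from the slides): min-max scale each attribute to [0, 1] before computing distances, so income no longer dominates height and weight.

```python
# Min-max scale each attribute column to [0, 1] so that a wide-ranged
# attribute (income) does not dominate the Euclidean distance.
def min_max_scale(records):
    # records: list of numeric attribute vectors
    cols = list(zip(*records))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        tuple((v - lo) / (hi - lo) if hi > lo else 0.0
              for v, lo, hi in zip(rec, mins, maxs))
        for rec in records
    ]

# height (m), weight (lb), income ($)
people = [(1.5, 90, 10_000), (1.8, 300, 1_000_000), (1.7, 150, 50_000)]
print(min_max_scale(people))
```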

Nearest Neighbor Classification. Problems with the Euclidean measure: for high-dimensional data it suffers from the curse of dimensionality, and it can produce counter-intuitive results. Example with 12-dimensional binary vectors:
1 1 1 1 1 1 1 1 1 1 1 0  vs  0 1 1 1 1 1 1 1 1 1 1 1  →  d = 1.4142
1 0 0 0 0 0 0 0 0 0 0 0  vs  0 0 0 0 0 0 0 0 0 0 0 1  →  d = 1.4142
Both pairs are at the same distance, even though the first pair agrees on ten attributes and the second pair agrees on none. Solution: normalize the vectors to unit length.
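The following small check (an assumed reconstruction of the slide's example, not the lecture's own code) verifies that both pairs are √2 apart under plain Euclidean distance, while normalizing to unit length separates the two cases.

```python
# Compare Euclidean distances before and after unit-length normalization.
import math

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def unit(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

a = [1] * 11 + [0];  b = [0] + [1] * 11      # agree on ten positions
c = [1] + [0] * 11;  d = [0] * 11 + [1]      # agree on none

print(euclidean(a, b), euclidean(c, d))      # 1.4142..., 1.4142...
print(euclidean(unit(a), unit(b)), euclidean(unit(c), unit(d)))
# ~0.4264 vs 1.4142: after normalization the similar pair is much closer
```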

Nearest Neighbor Classification. k-NN classifiers are lazy learners: they do not build models explicitly, unlike eager learners such as decision tree induction and rule-based systems. Classifying unknown records is therefore relatively expensive.

Example: PEBLS. PEBLS (Parallel Exemplar-Based Learning System, Cost & Salzberg) works with both continuous and nominal features. For nominal features, the distance between two nominal values is computed using the modified value difference metric (MVDM). Each record is assigned a weight factor. The number of nearest neighbors is k = 1.

Example: PEBLS. Training data (Tid, Refund, Marital Status, Taxable Income, Cheat):
1 Yes Single 125K No; 2 No Married 100K No; 3 No Single 70K No; 4 Yes Married 120K No; 5 No Divorced 95K Yes; 6 No Married 60K No; 7 Yes Divorced 220K No; 8 No Single 85K Yes; 9 No Married 75K No; 10 No Single 90K Yes.
Class counts per nominal value:
Marital Status: Single (Yes 2, No 2), Married (Yes 0, No 4), Divorced (Yes 1, No 1)
Refund: Yes (Yes 0, No 3), No (Yes 3, No 4)
Distance between nominal attribute values: d(V1, V2) = Σ_i | n_1i/n_1 − n_2i/n_2 |
d(Single, Married) = |2/4 − 0/4| + |2/4 − 4/4| = 1
d(Single, Divorced) = |2/4 − 1/2| + |2/4 − 1/2| = 0
d(Married, Divorced) = |0/4 − 1/2| + |4/4 − 1/2| = 1
d(Refund=Yes, Refund=No) = |0/3 − 3/7| + |3/3 − 4/7| = 6/7
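A short sketch of the MVDM computation that reproduces the distances above; the dictionaries of class counts are an assumed layout for illustration, not PEBLS's actual data structures.

```python
# Modified value difference metric (MVDM) over class-count tables.
from fractions import Fraction

# counts[value][class] = number of training records with that value and class
marital = {"Single":   {"Yes": 2, "No": 2},
           "Married":  {"Yes": 0, "No": 4},
           "Divorced": {"Yes": 1, "No": 1}}
refund  = {"Yes": {"Yes": 0, "No": 3},
           "No":  {"Yes": 3, "No": 4}}

def mvdm(counts, v1, v2):
    # d(V1, V2) = sum over classes i of | n_1i/n_1 - n_2i/n_2 |
    n1, n2 = sum(counts[v1].values()), sum(counts[v2].values())
    return sum(abs(Fraction(counts[v1][c], n1) - Fraction(counts[v2][c], n2))
               for c in counts[v1])

print(mvdm(marital, "Single", "Married"))    # 1
print(mvdm(marital, "Single", "Divorced"))   # 0
print(mvdm(refund, "Yes", "No"))             # 6/7
```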

Example: PEBLS. Two records: X = (Refund=Yes, Marital Status=Single, Taxable Income=125K, Cheat=No) and Y = (Refund=No, Marital Status=Married, Taxable Income=100K, Cheat=No). Distance between record X and record Y:
Δ(X, Y) = w_X · w_Y · Σ_{i=1..d} d(X_i, Y_i)²
where w_X = (number of times X is used for prediction) / (number of times X predicts correctly).
w_X ≈ 1 if X makes accurate predictions most of the time; w_X > 1 if X is not reliable for making predictions.

Bayes Classifier. A probabilistic framework for solving classification problems.
Conditional probability: P(C | A) = P(A, C) / P(A) and P(A | C) = P(A, C) / P(C).
Bayes theorem: P(C | A) = P(A | C) P(C) / P(A)

Example of Bayes Theorem. Given: a doctor knows that meningitis causes a stiff neck 50% of the time; the prior probability of any patient having meningitis is 1/50,000; the prior probability of any patient having a stiff neck is 1/20. If a patient has a stiff neck, what is the probability that he/she has meningitis?
P(M | S) = P(S | M) P(M) / P(S) = (0.5 × 1/50000) / (1/20) = 0.0002

Bayesian Classifiers. Consider each attribute and the class label as random variables. Given a record with attributes (A1, A2, ..., An), the goal is to predict class C; specifically, we want to find the value of C that maximizes P(C | A1, A2, ..., An). Can we estimate P(C | A1, A2, ..., An) directly from the data?

Bayesian Classifiers. Approach: compute the posterior probability P(C | A1, A2, ..., An) for all values of C using Bayes theorem:
P(C | A1, A2, ..., An) = P(A1, A2, ..., An | C) P(C) / P(A1, A2, ..., An)
Choose the value of C that maximizes P(C | A1, A2, ..., An). Since the denominator does not depend on C, this is equivalent to choosing the value of C that maximizes P(A1, A2, ..., An | C) P(C). How do we estimate P(A1, A2, ..., An | C)?

Naïve Bayes Classifier. Assume independence among the attributes Ai when the class is given:
P(A1, A2, ..., An | Cj) = P(A1 | Cj) P(A2 | Cj) ... P(An | Cj)
We can estimate P(Ai | Cj) for all Ai and Cj. A new point is classified as Cj if P(Cj) Π_i P(Ai | Cj) is maximal.

How to Estimate Probabilities from Data? Training data (Tid, Refund, Marital Status, Taxable Income, Evade):
1 Yes Single 125K No; 2 No Married 100K No; 3 No Single 70K No; 4 Yes Married 120K No; 5 No Divorced 95K Yes; 6 No Married 60K No; 7 Yes Divorced 220K No; 8 No Single 85K Yes; 9 No Married 75K No; 10 No Single 90K Yes.
Class priors: P(C) = N_c / N, e.g., P(No) = 7/10, P(Yes) = 3/10.
For discrete attributes: P(Ai | Ck) = |A_ik| / N_ck, where |A_ik| is the number of instances having attribute value Ai and belonging to class Ck.
Examples: P(Status=Married | No) = 4/7, P(Refund=Yes | Yes) = 0.
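The counting-based estimates can be reproduced in a few lines of Python; this is an illustrative sketch over the 10-record table above, not library code.

```python
# Estimate class priors P(C) and discrete conditionals P(A_i | C_k) by counting.
from fractions import Fraction
from collections import Counter

# (Refund, Marital Status, Evade) for the 10 training records
records = [("Yes", "Single", "No"),   ("No", "Married", "No"),
           ("No", "Single", "No"),    ("Yes", "Married", "No"),
           ("No", "Divorced", "Yes"), ("No", "Married", "No"),
           ("Yes", "Divorced", "No"), ("No", "Single", "Yes"),
           ("No", "Married", "No"),   ("No", "Single", "Yes")]

class_counts = Counter(r[2] for r in records)

def prior(c):
    return Fraction(class_counts[c], len(records))

def cond(attr_index, value, c):
    # P(A_i = value | C = c) = (#records with that value and class c) / N_c
    match = sum(1 for r in records if r[attr_index] == value and r[2] == c)
    return Fraction(match, class_counts[c])

print(prior("No"), prior("Yes"))     # 7/10, 3/10
print(cond(1, "Married", "No"))      # P(Status=Married | No) = 4/7
print(cond(0, "Yes", "Yes"))         # P(Refund=Yes | Yes) = 0
```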

How to Estimate Probabilities from Data? For continuous attributes: Discretize the range into bins (one ordinal attribute per bin; this violates the independence assumption). Two-way split: (A < v) or (A > v); choose only one of the two splits as the new attribute. Probability density estimation: assume the attribute follows a normal distribution, use the data to estimate the parameters of the distribution (e.g., mean and standard deviation); once the probability distribution is known, it can be used to estimate the conditional probability P(Ai | ck).

How to Estimate Probabilities from Data? (Same training data as above.) For a continuous attribute, assume a normal distribution, one for each (Ai, cj) pair:
P(Ai | cj) = (1 / sqrt(2π σ_ij²)) · exp( −(Ai − μ_ij)² / (2 σ_ij²) )
For (Income, Class=No): the sample mean is 110 and the sample variance is 2975, so
P(Income=120 | No) = (1 / (sqrt(2π) · 54.54)) · exp( −(120 − 110)² / (2 × 2975) ) = 0.0072
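A quick numerical check of the slide's Gaussian estimate, assuming the stated sample mean 110 and sample variance 2975 for Income given class No:

```python
# Normal (Gaussian) density used as the conditional probability estimate
# for a continuous attribute.
import math

def normal_density(x, mean, variance):
    return (1.0 / math.sqrt(2 * math.pi * variance)) * \
           math.exp(-(x - mean) ** 2 / (2 * variance))

print(normal_density(120, mean=110, variance=2975))   # ~0.0072
```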

Example of Naïve Bayes Classifier. Given a test record X = (Refund=No, Marital Status=Married, Income=120K).
Naïve Bayes classifier estimates:
P(Refund=Yes | No) = 3/7, P(Refund=No | No) = 4/7, P(Refund=Yes | Yes) = 0, P(Refund=No | Yes) = 1
P(Marital Status=Single | No) = 2/7, P(Marital Status=Divorced | No) = 1/7, P(Marital Status=Married | No) = 4/7
P(Marital Status=Single | Yes) = 2/3, P(Marital Status=Divorced | Yes) = 1/3, P(Marital Status=Married | Yes) = 0
For taxable income: if class=No, sample mean = 110 and sample variance = 2975; if class=Yes, sample mean = 90 and sample variance = 25.
P(X | Class=No) = P(Refund=No | No) × P(Married | No) × P(Income=120K | No) = 4/7 × 4/7 × 0.0072 = 0.0024
P(X | Class=Yes) = P(Refund=No | Yes) × P(Married | Yes) × P(Income=120K | Yes) = 1 × 0 × 1.2×10⁻⁹ = 0
Since P(X | No) P(No) > P(X | Yes) P(Yes), we have P(No | X) > P(Yes | X), so the predicted class is No.
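Putting the pieces together, a minimal sketch of the final decision for this test record; the Gaussian densities 0.0072 and 1.2e-9 are taken from the example above, and everything else is illustrative.

```python
# Pick the class that maximizes the prior times the product of conditionals.
priors = {"No": 7 / 10, "Yes": 3 / 10}
likelihood = {
    "No":  (4 / 7) * (4 / 7) * 0.0072,   # Refund=No, Married, Income=120K
    "Yes": 1.0 * 0.0 * 1.2e-9,
}
scores = {c: likelihood[c] * priors[c] for c in priors}
print(max(scores, key=scores.get))        # -> "No"
```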

Naïve Bayes Classifier. If one of the conditional probabilities is zero, the entire expression becomes zero. Probability estimation:
Original: P(Ai | C) = N_ic / N_c
Laplace: P(Ai | C) = (N_ic + 1) / (N_c + c)
m-estimate: P(Ai | C) = (N_ic + m·p) / (N_c + m)
where c is the number of classes, p is a prior probability, and m is a parameter.
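A sketch of the three estimators applied to the zero count P(Refund=Yes | Yes) = 0/3; the m-estimate's values m = 3 and p = 1/2 are chosen purely for illustration.

```python
# Original, Laplace, and m-estimate probability estimates for a zero count.
from fractions import Fraction

def original(n_ic, n_c):
    return Fraction(n_ic, n_c)

def laplace(n_ic, n_c, c):
    # (N_ic + 1) / (N_c + c), c = number of classes
    return Fraction(n_ic + 1, n_c + c)

def m_estimate(n_ic, n_c, m, p):
    # (N_ic + m*p) / (N_c + m), p = prior, m = parameter
    return (Fraction(n_ic) + m * p) / (n_c + m)

# Smoothing the zero count P(Refund=Yes | Yes): N_ic = 0, N_c = 3
print(original(0, 3))                            # 0
print(laplace(0, 3, c=2))                        # 1/5
print(m_estimate(0, 3, m=3, p=Fraction(1, 2)))   # 1/4
```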

Example of Naïve Bayes Classifier. Training data (Name, Give Birth, Can Fly, Live in Water, Have Legs, Class):
human yes no no yes mammals; python no no no no non-mammals; salmon no no yes no non-mammals; whale yes no yes no mammals; frog no no sometimes yes non-mammals; komodo no no no yes non-mammals; bat yes yes no yes mammals; pigeon no yes no yes non-mammals; cat yes no no yes mammals; leopard shark yes no yes no non-mammals; turtle no no sometimes yes non-mammals; penguin no no sometimes yes non-mammals; porcupine yes no no yes mammals; eel no no yes no non-mammals; salamander no no sometimes yes non-mammals; gila monster no no no yes non-mammals; platypus no no no yes mammals; owl no yes no yes non-mammals; dolphin yes no yes no mammals; eagle no yes no yes non-mammals.
Test record: Give Birth = yes, Can Fly = no, Live in Water = yes, Have Legs = no. A denotes the attributes, M mammals, N non-mammals.
P(A | M) = 6/7 × 6/7 × 2/7 × 2/7 = 0.06
P(A | N) = 1/13 × 10/13 × 3/13 × 4/13 = 0.0042
P(A | M) P(M) = 0.06 × 7/20 = 0.021
P(A | N) P(N) = 0.0042 × 13/20 = 0.0027
Since P(A | M) P(M) > P(A | N) P(N), the record is classified as a mammal.

Naïve Bayes (Summary). Robust to isolated noise points. Handles missing values by ignoring the instance during probability estimate calculations. Robust to irrelevant attributes. The independence assumption may not hold for some attributes; in that case, use other techniques such as Bayesian Belief Networks (BBN).