Classifier performance evaluation

Classifier performance evaluation
Václav Hlaváč
Czech Technical University in Prague, Czech Institute of Informatics, Robotics and Cybernetics, 166 36 Prague 6, Jugoslávských partyzánu 1580/3, Czech Republic
http://people.ciirc.cvut.cz/hlavac, vaclav.hlavac@cvut.cz
also Center for Machine Perception, http://cmp.felk.cvut.cz

Lecture plan:
Classifier performance as statistical hypothesis testing.
Criterion functions.
Confusion matrix and its derived characteristics.
Receiver operating characteristic (ROC) curve.

Application domains:
Classifiers (our main concern in this lecture): 1940s radars, 1970s medical diagnostics, 1990s data mining.
Regression.
Estimation of probability densities.

Classifier experimental evaluation 2/44
To what degree should we believe the learned classifier?
Classifiers (both supervised and unsupervised) are learned (trained) on a finite training multiset (called simply a training set in the sequel for simplicity).
A learned classifier has to be tested experimentally on a different test set.
In the run mode, the classifier performs on data different from those on which it has learned. The experimental performance on the test data is a proxy for the performance on unseen data; it checks the classifier's generalization ability.
There is a need for a criterion function assessing the classifier performance experimentally, e.g., its error rate, accuracy, or expected Bayesian risk (to be discussed later).
There is also a need for comparing classifiers experimentally.

Evaluation is an instance of hypothesis testing 3/44
Evaluation has to be treated as hypothesis testing in statistics: the value of the population parameter has to be statistically inferred from the sample statistic (i.e., computed on a training set in pattern recognition).
[Diagram: statistical sampling maps the population (parameter, e.g., accuracy = 95%) to a sample (statistic, e.g., accuracy = 97%); statistical inference goes from the sample back to the population.]

Danger of overfitting 4/44
Learning the training data too precisely usually leads to poor classification results on new data.
The classifier has to have the ability to generalize.
[Figure: the same data fitted by an underfit, a well-fit, and an overfit decision boundary.]

Training vs. test data 5/44
Problem: only a finite set of data is available, and it has to be used both for training and testing.
More training data gives better generalization. More test data gives a better estimate of the classification error probability.
Never evaluate performance on training data: the conclusion would be optimistically biased.
Partitioning of the available finite set of data into training/test sets:
Hold out.
Cross validation.
Bootstrap.
Once the evaluation is finished, all the available data can be used to train the final classifier.

Hold out method 6/44
The given data is randomly partitioned into two independent sets:
A training multiset (e.g., 2/3 of the data) for the construction of the statistical model, i.e., learning the classifier.
A test set (e.g., 1/3 of the data) that is held out for estimating the accuracy of the classifier.
Random sampling is a variation of the hold out method: repeat the hold out k times and estimate the accuracy as the average of the accuracies obtained.
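To make the recipe concrete, here is a minimal hold-out sketch in Python. The synthetic data and the scikit-learn logistic-regression classifier are illustrative assumptions, not part of the lecture; only the 2/3 : 1/3 split and the k-times repetition follow the slide.

```python
# Minimal hold-out sketch; the data and the classifier are toy assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(300, 5)                                  # toy features (assumption)
y = (X[:, 0] + 0.5 * rng.randn(300) > 0).astype(int)   # toy labels (assumption)

# Single hold-out: 2/3 of the data for training, 1/3 held out for testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))

# Random sampling variant: repeat the hold-out k times and average the accuracies.
k, accs = 10, []
for seed in range(k):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=seed)
    accs.append(LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))
print("repeated hold-out accuracy:", np.mean(accs))
```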

K-fold cross validation 7/44
The training set is randomly divided into K disjoint sets of equal size, where each part has roughly the same class distribution.
The classifier is trained K times, each time with a different set held out as a test set.
The estimated error is the mean of these K errors.
[Figure: example of the K-fold partitioning, courtesy of TU Graz.]
Ron Kohavi: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, IJCAI 1995.
Courtesy: the presentation of Jin Tian, Iowa State Univ.
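A minimal K-fold cross-validation sketch, again on assumed toy data; StratifiedKFold from scikit-learn is used here so that each fold keeps roughly the same class distribution, as the slide requires.

```python
# Minimal K-fold cross-validation sketch; data and classifier are toy assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] > 0).astype(int)

K, errors = 10, []
for train_idx, test_idx in StratifiedKFold(n_splits=K, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])   # train on K-1 folds
    errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))     # error on the held-out fold
print("estimated error (mean of the K fold errors):", np.mean(errors))
```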

Leave-one-out 8/44
A special case of K-fold cross validation with K = n, where n is the total number of samples in the training multiset.
n experiments are performed, each using n − 1 samples for training and the remaining sample for testing.
It is rather computationally expensive.
Leave-one-out cross-validation does not guarantee the same class distribution in training and test data!
The extreme case: 50% class A, 50% class B; predict the majority class label in the training data. The true error is 50%, but the leave-one-out error estimate is 100%!
Courtesy: the presentation of Jin Tian, Iowa State Univ.
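The extreme case from the slide can be checked numerically. The 50/50 label vector and the majority-class predictor below simply write out the slide's construction as a small sketch.

```python
# Leave-one-out on a 50/50 dataset with a majority-class predictor:
# the estimate is 100% error, although the true error of the majority rule is 50%.
import numpy as np
from collections import Counter

y = np.array([0] * 10 + [1] * 10)        # 50% class A, 50% class B (assumed toy labels)
loo_errors = []
for i in range(len(y)):
    train = np.delete(y, i)                           # leave sample i out
    majority = Counter(train).most_common(1)[0][0]    # majority label of the remaining n-1
    loo_errors.append(int(majority != y[i]))          # the held-out sample is always misclassified
print("leave-one-out error estimate:", np.mean(loo_errors))   # -> 1.0, i.e., 100%
```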

Bootstrap aggregating, also called bagging 9/44
The bootstrap uses sampling with replacement to form the training set.
Given: the training set T consisting of n entries.
The bootstrap generates m new datasets T_i, each of size n′ ≤ n, by sampling T uniformly with replacement. The consequence is that some entries can be repeated in T_i.
In the special case (called the .632 bootstrap) when n′ = n, for large n, T_i is expected to contain 1 − 1/e ≈ 63.2% unique samples; the rest are duplicates.
The m statistical models (e.g., classifiers, regressors) are learned using the above m bootstrap samples.
The statistical models are combined, e.g., by averaging the outputs (for regression) or by voting (for classification).
Proposed in: Breiman, Leo (1996). Bagging predictors. Machine Learning 24 (2): 123-140.
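A minimal bagging sketch under assumed toy data and decision-tree base learners (the choice of base learner is an assumption, not the slide's); it also checks empirically that a bootstrap sample of size n contains about 1 − 1/e ≈ 63.2% unique entries.

```python
# Minimal bagging sketch; data and base learner are toy assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
n, m = len(X), 25                              # n training entries, m bootstrap models

models, unique_fractions = [], []
for _ in range(m):
    idx = rng.randint(0, n, size=n)            # sampling with replacement (n' = n)
    unique_fractions.append(len(np.unique(idx)) / n)
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

print("mean fraction of unique samples:", np.mean(unique_fractions))  # close to 0.632

# Combine the m models by voting (classification case).
votes = np.mean([mdl.predict(X) for mdl in models], axis=0)
y_bagged = (votes >= 0.5).astype(int)
print("training accuracy of the bagged vote:", np.mean(y_bagged == y))
```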

Recommended experimental validation procedure 10/44
Use K-fold cross-validation (K = 5 or K = 10) for estimating performance (accuracy, etc.).
Compute the mean value of the performance estimate, its standard deviation, and confidence intervals.
Report the mean values of the performance estimates and their standard deviations or 95% confidence intervals around the mean.
Courtesy: the presentation of Jin Tian, Iowa State Univ.
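A minimal sketch of the recommended reporting. The ten fold accuracies are hypothetical numbers, and the 95% confidence interval uses the usual normal approximation 1.96 · s/√K.

```python
# Report mean, standard deviation, and an approximate 95% CI of K-fold accuracies.
import numpy as np

fold_acc = np.array([0.91, 0.89, 0.93, 0.90, 0.88,
                     0.92, 0.90, 0.94, 0.89, 0.91])   # hypothetical 10-fold accuracies
K = len(fold_acc)
mean, std = fold_acc.mean(), fold_acc.std(ddof=1)
half_width = 1.96 * std / np.sqrt(K)                  # normal-approximation 95% CI
print(f"accuracy = {mean:.3f} +- {std:.3f} "
      f"(95% CI: {mean - half_width:.3f} .. {mean + half_width:.3f})")
```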

Criterion function to assess classifier performance 11/44
Accuracy, error rate:
Accuracy is the percentage of correct classifications.
Error rate is the percentage of incorrect classifications.
Accuracy = 1 − error rate.
Problems with the accuracy:
It assumes equal costs of misclassification.
It assumes a relatively uniform class distribution (cf. 0.5% of patients with a certain disease in the population).
Other characteristics are derived from the confusion matrix (to be explained later).
Expected Bayesian risk (to be explained later).

Confusion matrix, two classes only 12/44
The confusion matrix is also called the contingency table.

                      actual positive          actual negative
predicted negative    c: FN (false negative)   a: TN (true negative)
predicted positive    d: TP (true positive)    b: FP (false positive)

FN: miss, type II error, overlooked danger.  FP: false alarm, type I error.
TP: hit.  TN: correct rejection.

R. Kohavi, F. Provost: Glossary of terms, Machine Learning, Vol. 30, No. 2/3, 1998, pp. 271-274.
Performance measures calculated from the confusion matrix entries:
Accuracy = (a + d)/(a + b + c + d) = (TN + TP)/total.
True positive rate, recall, sensitivity = d/(c + d) = TP/actual positives.
Specificity, true negative rate = a/(a + b) = TN/actual negatives.
Precision, positive predictive value = d/(b + d) = TP/predicted positives.
False positive rate, false alarm rate = b/(a + b) = FP/actual negatives = 1 − specificity.
False negative rate = c/(c + d) = FN/actual positives.
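The measures above are direct ratios of the four counts; the sketch below evaluates them for hypothetical counts a, b, c, d.

```python
# Confusion-matrix measures from the four counts (the counts are hypothetical).
a, b, c, d = 50, 10, 5, 35          # TN, FP, FN, TP

accuracy    = (a + d) / (a + b + c + d)
recall      = d / (c + d)           # true positive rate, sensitivity
specificity = a / (a + b)           # true negative rate
precision   = d / (b + d)           # positive predictive value
fpr         = b / (a + b)           # false alarm rate = 1 - specificity
fnr         = c / (c + d)           # false negative rate

print(accuracy, recall, specificity, precision, fpr, fnr)
```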

Confusion matrix, # of classes > 2 13/44
The toy example (courtesy Stockman) shows predicted and true class labels of optical character recognition for the numerals 0-9. There were 100 examples of each number class available for the evaluation. The empirical performance is given in percent. The classifier allows the reject option, class label R. Notice, e.g., the unavoidable confusion between 4 and 9.

true class i \ predicted class j:    0    1    2    3    4    5    6    7    8    9    R
0                                   97    0    0    0    0    0    1    0    0    1    1
1                                    0   98    0    0    1    0    0    1    0    0    0
2                                    0    0   96    1    0    1    0    1    0    0    1
3                                    0    0    2   95    0    1    0    0    1    0    1
4                                    0    0    0    0   98    0    0    0    0    2    0
5                                    0    0    0    1    0   97    0    0    0    0    2
6                                    1    0    0    0    0    1   98    0    0    0    0
7                                    0    0    1    0    0    0    0   98    0    0    1
8                                    0    0    0    1    0    0    1    0   96    1    1
9                                    1    0    0    0    3    1    0    0    0   95    0

Unequal costs of decisions 14/44
Examples:
Medical diagnosis: the cost of falsely indicating breast cancer in population screening is smaller than the cost of missing a true disease.
Defense against ballistic missiles: the cost of missing a real attack is much higher than the cost of a false alarm.
Bayesian risk is able to represent unequal costs of decisions.
We will show that there is a tradeoff between the a priori probability of the class and the induced cost.

Criterion, Bayesian risk 15/44
For a set of observations X, a set of hidden states Y and a set of decisions D, a statistical model given by the joint probability $p_{XY}\colon X \times Y \to \mathbb{R}$, a penalty function $W\colon Y \times D \to \mathbb{R}$, and a decision strategy $Q\colon X \to D$, the Bayesian risk is given as
$$R(Q) = \sum_{x \in X} \sum_{y \in Y} p_{XY}(x, y)\, W(y, Q(x)).$$
It is difficult in many practical tasks to fulfil the assumption that the statistical model $p_{XY}$ is known.
If the Bayesian risk were calculated on the finite (training) set, it would be too optimistic.
Two substitutes for the Bayesian risk are used in practical tasks:
Expected risk.
Structural risk.
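For a small discrete problem, the Bayesian risk can be evaluated directly from its definition. The joint probabilities p_XY, the penalty matrix W, and the strategy Q below are hypothetical toy values chosen only to make the sum concrete.

```python
# Bayesian risk R(Q) = sum_x sum_y p_XY(x, y) * W(y, Q(x)) for a toy discrete problem.
import numpy as np

p_xy = np.array([[0.30, 0.05],      # joint probability p_XY; rows: observations x,
                 [0.10, 0.15],      # columns: hidden states y; entries sum to 1
                 [0.05, 0.35]])
W = np.array([[0.0, 1.0],           # penalty W(y, d); rows: true state y, columns: decision d
              [5.0, 0.0]])          # missing y = 1 costs 5, a false alarm costs 1
Q = np.array([0, 1, 1])             # a decision strategy Q(x) for the three observations

risk = sum(p_xy[x, y] * W[y, Q[x]] for x in range(p_xy.shape[0]) for y in range(p_xy.shape[1]))
print("Bayesian risk R(Q) =", risk)   # 0.05*5 + 0.10*1 + 0.05*1 = 0.40
```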

Two paradigms for learning classifiers 16/44
Choose a class Q of decision functions (classifiers) $q\colon X \to Y$.
Find $q^* \in Q$ by minimizing some criterion function on the training set that approximates the risk R(q) (which cannot be computed).
The learning paradigm is defined by the criterion function:
1. Expected (empirical) risk minimization, in which the true risk is approximated by the error rate on the training set,
$$R_{\mathrm{emp}}(q(x, \Theta)) = \frac{1}{L} \sum_{i=1}^{L} W(y_i, q(x_i, \Theta)), \qquad \Theta^* = \operatorname*{argmin}_{\Theta} R_{\mathrm{emp}}(q(x, \Theta)).$$
Examples: Perceptron, neural nets (back-propagation), etc.
2. Structural risk minimization, which introduces a guaranteed risk J(Q) with R(Q) < J(Q).
Example: SVM (Support Vector Machines).
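A minimal sketch of the first paradigm: a one-parameter threshold classifier q(x, Θ) = [x > Θ] is assumed, the 0/1 penalty stands in for W, and Θ* is chosen by minimizing R_emp over a grid. The 1-D Gaussian toy data are an assumption, not the lecture's example.

```python
# Empirical risk minimization for a toy threshold classifier q(x, Theta) = [x > Theta].
import numpy as np

rng = np.random.RandomState(0)
x = np.r_[rng.normal(4.0, 1.0, 100), rng.normal(6.0, 1.0, 100)]   # toy 1-D data (assumption)
y = np.r_[np.zeros(100), np.ones(100)]

def emp_risk(theta):
    predictions = (x > theta).astype(int)
    return np.mean(predictions != y)          # R_emp with 0/1 penalty W

thetas = np.linspace(x.min(), x.max(), 500)   # grid search over the single parameter Theta
theta_star = thetas[np.argmin([emp_risk(t) for t in thetas])]
print("Theta* =", theta_star, " R_emp(Theta*) =", emp_risk(theta_star))
```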

Problem of unknown class distribution and costs 17/44
We already know: the class distribution and the costs of each error determine the goodness of classifiers.
An additional problem: in many circumstances, we do not know the class distribution until application time and/or it is difficult to estimate the cost matrix, e.g., for an email spam filter. Statistical models have to be learned beforehand.
A possible solution: incremental learning.

Unbalanced problems and data 18/44
Classes often have unequal frequencies:
Medical diagnosis: 95% healthy, 5% diseased.
e-commerce: 99% do not buy, 1% buy.
Security: 99.999% of citizens are not terrorists.
A similar situation arises for multiclass classifiers.
A majority-class classifier can be 99% correct but useless.
Example: an OCR system that is 99% correct still makes an error on every second line; this is why such OCR is not widely used.
How should we train classifiers and evaluate them for unbalanced problems?

Balancing unbalanced data 19/44
Two-class problems:
Build a balanced training set and use it for classifier training: randomly select the desired number of minority-class instances and add an equal number of randomly selected majority-class instances.
Build a balanced test set (different from the training set, of course) and test the classifier using it.
Multiclass problems:
Generalize balancing to multiple classes.
Ensure that each class is represented with approximately equal proportions in the training and test datasets.
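A minimal two-class undersampling sketch following the recipe above; the imbalanced toy labels and the 3-dimensional features are assumptions.

```python
# Build a balanced two-class training set by random undersampling of the majority class.
import numpy as np

rng = np.random.RandomState(0)
y = np.array([0] * 950 + [1] * 50)            # 95% majority, 5% minority (toy assumption)
X = rng.randn(len(y), 3)

minority_idx = np.where(y == 1)[0]
majority_idx = np.where(y == 0)[0]
n_min = len(minority_idx)

# Take the desired number of minority instances (here: all of them)
# and an equal number of randomly selected majority instances.
chosen_majority = rng.choice(majority_idx, size=n_min, replace=False)
balanced_idx = rng.permutation(np.r_[minority_idx, chosen_majority])

X_bal, y_bal = X[balanced_idx], y[balanced_idx]
print("class counts in the balanced set:", np.bincount(y_bal))   # -> [50 50]
```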

Thoughts about balancing 20/44
A natural hesitation: balancing the training set changes the underlying statistical problem. Can we change it?
The hesitation is justified; the statistical problem is indeed changed.
Good news: balancing can be seen as a change of the penalty function. In the balanced training set, the majority-class patterns occur less often, which means that the penalty assigned to them is lowered proportionally to their relative frequency:
$$R(q^*) = \min_{q \in D} \sum_{x \in X} \sum_{y \in Y} p_{XY}(x, y)\, W(y, q(x))$$
$$R(q^*) = \min_{q(x) \in D} \sum_{x \in X} p(x) \sum_{y \in Y} p_{Y|X}(y \mid x)\, W(y, q(x))$$
This modified problem is what the end user usually wants, as she/he is interested in a good performance for the minority class.

Scalar characteristics are not good for evaluating performance 21/44
Scalar characteristics such as the accuracy, the expected cost, or the area under the ROC curve (AUC, to be explained soon) do not provide enough information.
We are interested in:
How are the errors distributed across the classes?
How will each classifier perform under different testing conditions (costs or class ratios other than those measured in the experiment)?
Two numbers, the true positive rate and the false positive rate, are much more informative than a single number.
These two numbers are better visualized by a curve, e.g., by a Receiver Operating Characteristic (ROC), which informs about:
Performance for all possible misclassification costs.
Performance for all possible class ratios.
Under what conditions classifier c1 outperforms classifier c2.

ROC - Receiver Operating Characteristic 22/44
Also often called the ROC curve.
Originates in WWII processing of radar signals.
Useful for evaluating the performance of dichotomic classifiers.
Characterizes the degree of overlap of classes for a single feature.
The decision is based on a single threshold Θ (also called the operating point).
Generally, false alarms go up with attempts to detect higher percentages of true objects.
A graphical plot showing (hit rate, false alarm rate) pairs: the true positive rate on the vertical axis against the false positive rate on the horizontal axis, both from 0 to 100%.
Different ROC curves correspond to different classifiers; a single curve is the result of changing the threshold Θ.

A model problem - receiver of weak radar signals 23/44
Suppose a receiver detecting a single weak pulse (e.g., a radar reflection from a plane, a dim flash of light).
A dichotomic decision between two hidden states y1, y2:
y1 - a plane is not present (true negative), or
y2 - a plane is present (true positive).
Assume a simple statistical model - two Gaussians.
The internal signal of the receiver is a voltage x with mean µ1 when the plane (external signal) is not present and µ2 when the plane is present. These are random variables due to random noise in the receiver and outside of it:
$$p(x \mid y_i) = N(\mu_i, \sigma_i^2), \quad i = 1, 2.$$

Four probabilities involved 24/44
Let us suppose that the involved probability distributions are Gaussians and that the correct decisions of the receiver are known. The mean values µ1, µ2, the standard deviations σ1, σ2, and the threshold Θ are not known.
There are four conditional probabilities involved:
Hit (true positive): $p(x > \Theta \mid y_2)$.
False alarm, type I error (false positive): $p(x > \Theta \mid y_1)$.
Miss, overlooked danger, type II error (false negative): $p(x < \Theta \mid y_2)$.
Correct rejection (true negative): $p(x < \Theta \mid y_1)$.
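With Gaussian class-conditional densities, the four probabilities are tail and head areas of the two Gaussians. The sketch below evaluates them with scipy.stats.norm for assumed parameter values (µ1 = 4, µ2 = 6, σ1 = σ2 = 1, Θ = 5); the threshold is an arbitrary illustrative choice.

```python
# The four probabilities of the Gaussian receiver model for an assumed threshold.
from scipy.stats import norm

mu1, sigma1 = 4.0, 1.0     # plane not present (noise only)
mu2, sigma2 = 6.0, 1.0     # plane present (signal)
theta = 5.0                # decision threshold on the voltage x (assumption)

hit               = norm.sf(theta, mu2, sigma2)    # p(x > Theta | y2), true positive
false_alarm       = norm.sf(theta, mu1, sigma1)    # p(x > Theta | y1), false positive
miss              = norm.cdf(theta, mu2, sigma2)   # p(x < Theta | y2), false negative
correct_rejection = norm.cdf(theta, mu1, sigma1)   # p(x < Theta | y1), true negative

print(hit, false_alarm, miss, correct_rejection)
```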

Radar receiver example, graphically 25/44
[Figure: the two class-conditional densities p(x|y1) and p(x|y2) plotted over the voltage x, with the decision threshold Θ marked.]
Any decision threshold Θ on the voltage x (decide "plane present" if x > Θ) determines:
Hits - their probability is the area under the curve p(x|y2) to the right of Θ.
False alarms - their probability is the area under the curve p(x|y1) to the right of Θ.
Courtesy to Silvio Maffei for pointing out the mistake in the picture.

ROC Example A - two Gaussians with equal variances 26/44
Two Gaussians, µ1 = 4.0, µ2 = 6.0, σ1 = σ2 = 1.0.
Less overlap, better discriminability.
In this special case, the ROC is convex.
[Figure: the two class-conditional densities p(x|y) and the resulting ROC curve, true positive rate vs. false positive rate, both axes from 0 to 1.]
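The ROC curve of Example A can be traced by sweeping the threshold Θ and evaluating the two Gaussian tail probabilities; the threshold grid below is an assumption, while the parameters are those of the example.

```python
# Trace the ROC curve of Example A by sweeping the decision threshold Theta.
import numpy as np
from scipy.stats import norm

mu1, mu2, sigma = 4.0, 6.0, 1.0
thetas = np.linspace(0.0, 10.0, 200)
fpr = norm.sf(thetas, mu1, sigma)     # false positive rate p(x > Theta | y1) at each threshold
tpr = norm.sf(thetas, mu2, sigma)     # true positive rate p(x > Theta | y2) at each threshold

# The curve runs from (1, 1) at low thresholds to (0, 0) at high thresholds.
for f, t in zip(fpr[::40], tpr[::40]):
    print(f"FPR = {f:.3f}  TPR = {t:.3f}")
```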

ROC Example B - two Gaussians with equal variances 27/44
Two Gaussians, µ1 = 4.0, µ2 = 5.0, σ1 = σ2 = 1.0.
More overlap, worse discriminability.
In this special case, the ROC is convex.
[Figure: the two class-conditional densities and the resulting ROC curve.]

ROC Example C - two Gaussians with equal variances 28/44
Two Gaussians, µ1 = 4.0, µ2 = 4.1, σ1 = σ2 = 1.0.
Almost total overlap, almost no discriminability.
In this special case, the ROC is convex.
[Figure: the two almost completely overlapping class-conditional densities p(x|y).]

ROC and the likelihood ratio 29/44
Under the assumption of a Gaussian signal corrupted by Gaussian noise, the slope of the ROC curve equals the likelihood ratio
$$L(x) = \frac{p(x \mid \text{signal})}{p(x \mid \text{noise})}.$$
In the even more special case when the standard deviations are equal, σ1 = σ2, L(x) increases monotonically with x. Consequently, the ROC becomes a convex curve.
The optimal threshold on the likelihood ratio becomes
$$\Theta^* = \frac{p(\text{noise})}{p(\text{signal})}.$$

ROC Example D - two Gaussians with different variances 30/44
Two Gaussians, µ1 = 4.0, µ2 = 6.0, σ1 = 1.0, σ2 = 2.0.
Less overlap, better discriminability.
In general, the ROC is not convex.
[Figure: the two class-conditional densities and the resulting ROC curve.]

ROC Example E - two Gaussians with different variances 31/44
Two Gaussians, µ1 = 4.0, µ2 = 4.5, σ1 = 1.0, σ2 = 2.0.
More overlap, worse discriminability.
In general, the ROC is not convex.
[Figure: the two class-conditional densities and the resulting ROC curve.]

ROC Example F - two Gaussians with different variances 32/44
Two Gaussians, µ1 = 4.0, µ2 = 4.0, σ1 = 1.0, σ2 = 2.0.
Maximal overlap, the worst discriminability.
In general, the ROC is not convex.
[Figure: the two class-conditional densities p(x|y).]

Properties of the ROC space 33/44
[Figure: landmarks of the ROC space, true positive rate vs. false positive rate:
the point (0, 1) is the ideal case;
(1, 1) corresponds to the "always positive" classifier, (0, 0) to the "always negative" classifier;
(1, 0) is the worst case;
the diagonal corresponds to the random classifier, p = 0.5;
classifiers above the diagonal are better than random; classifiers below the diagonal are worse than random but can be improved by inverting their predictions.]

Continuity of the ROC 34/44
Given two classifiers c1 and c2, they can be linearly weighted (combined at random in a chosen proportion) to create intermediate classifiers.
In this way, a continuum of classifiers can be imagined, lying on the line segment connecting c1 and c2 in the ROC space (the red line in the figure).
[Figure: ROC space with classifiers c1 and c2 connected by a line segment.]

More classifiers, ROC construction 35/44
The convex hull covering the classifiers in the ROC space is constructed.
Classifiers on the convex hull always achieve the best performance for some class probability distribution (i.e., some ratio of positive and negative examples).
Classifiers inside the convex hull perform worse and can be discarded.
No penalties have been involved so far. To come...
[Figure: classifiers c1, ..., c8 plotted in ROC space together with their convex hull.]
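The ROC convex hull of a finite set of classifiers can be computed with a standard upper-hull routine; the (FPR, TPR) points below, standing in for c1, ..., c8 plus the two trivial classifiers, are hypothetical values chosen only for illustration.

```python
# ROC convex hull (upper hull from (0, 0) to (1, 1)) via Andrew's monotone chain.
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def roc_convex_hull(points):
    """Return the upper hull of the (FPR, TPR) points, left to right."""
    pts = sorted(set(points))                 # sort by FPR, then TPR
    upper = []
    for p in reversed(pts):                   # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()                       # drop points on or below the hull
        upper.append(p)
    return upper[::-1]                        # from (0, 0) to (1, 1)

classifiers = [(0.0, 0.0), (0.1, 0.5), (0.2, 0.7), (0.3, 0.6),
               (0.45, 0.85), (0.6, 0.9), (0.8, 0.92), (1.0, 1.0)]   # hypothetical points
print(roc_convex_hull(classifiers))
# -> [(0.0, 0.0), (0.1, 0.5), (0.2, 0.7), (0.45, 0.85), (0.6, 0.9), (1.0, 1.0)]
# The points not returned, (0.3, 0.6) and (0.8, 0.92), lie inside the hull and can be discarded.
```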

ROC convex hull and iso-accuracy 36/44
Each line segment on the convex hull is an iso-accuracy line for a particular class distribution: under that distribution, the two classifiers at its end-points achieve the same accuracy.
For distributions skewed towards negatives (slope steeper than 45°), the left classifier is better.
For distributions skewed towards positives (slope flatter than 45°), the right classifier is better.
Each classifier on the convex hull is optimal for a specific range of class distributions.

Accuracy expressed in the ROC space 37/44
Accuracy a:
$$a = pos \cdot TPR + neg \cdot (1 - FPR),$$
where pos and neg are the proportions of positive and negative examples in the test set.
Expressing the accuracy in the ROC space as TPR = f(FPR) gives the iso-accuracy lines
$$TPR = \frac{a - neg}{pos} + \frac{neg}{pos}\, FPR.$$
Iso-accuracy lines are straight lines with slope neg/pos, i.e., determined by the degree of balance in the test set; for a balanced set the slope is 45°.
On the diagonal, TPR = FPR = a.
[Figure: iso-accuracy lines for a = 0.125, 0.25, ..., 0.875 in the ROC space of a balanced test set; the blue line, a = 0.73, is the iso-accuracy line through a general classifier c2.]
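A tiny numeric check of the two formulas; the class proportions and the classifier's operating point below are hypothetical.

```python
# Accuracy in ROC space: a = pos * TPR + neg * (1 - FPR), and the iso-accuracy line.
pos, neg = 0.5, 0.5          # balanced test set (assumption)
TPR, FPR = 0.78, 0.25        # hypothetical operating point of a classifier

a = pos * TPR + neg * (1.0 - FPR)
print("accuracy a =", a)                                               # 0.765

# The same point lies on the iso-accuracy line TPR = (a - neg)/pos + (neg/pos) * FPR.
print("TPR from the iso-accuracy line:", (a - neg) / pos + (neg / pos) * FPR)   # 0.78
```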

Selecting the optimal classifier (1) 38/44
For a balanced training set (i.e., as many positives as negatives), see the solid blue iso-accuracy line in the figure.
Classifier c1 achieves a 78% true positive rate.
[Figure: classifiers c1, c2, c4, c5, c7 on the ROC convex hull, with the iso-accuracy line touching c1 at a true positive rate of 0.78.]

Selecting the optimal classifier (2) 39/44
For a training set with half as many positives as negatives, see the solid blue iso-accuracy line in the figure.
Classifier c1 achieves an 82% true positive rate.
[Figure: the same classifiers, with the iso-accuracy line touching the convex hull at a true positive rate of 0.82.]

Selecting the optimal classifier (3) 40/44
For less than 11% positives, the "always negative" classifier is the best.
[Figure: the corresponding iso-accuracy line touching the ROC convex hull at the "always negative" corner (0, 0).]

Selecting the optimal classifier (4) 41/44
For more than 80% positives, the "always positive" classifier is the best.
[Figure: the corresponding iso-accuracy line touching the ROC convex hull at the "always positive" corner (1, 1).]

Benefit of the ROC approach 42/44
It is possible to distinguish operationally between:
Discriminability - an inherent property of the detection system.
Decision bias - implied by the loss function, changeable by the user.
The discriminability can be determined from the ROC curve.
If the Gaussian assumption holds, the Bayesian error rate can be calculated.

ROC generalization to arbitrary multidimensional distributions 43/44
The approach can be used for any two multidimensional distributions p(x|y1), p(x|y2), provided they overlap and thus have a nonzero Bayes classification error.
Unlike in the one-dimensional case, there may be many decision boundaries corresponding to a particular true positive rate, each with a different false positive rate.
Consequently, we cannot determine the discriminability from the true positive and false positive rates without knowing more about the underlying decision rule.

Optimal true/false positive rates? Unfortunately not 44/44
This is a rarely attainable ideal in the multidimensional case.
We would have to find, among all the decision rules giving the measured true positive rate, the rule with the minimum false positive rate.
This would need huge computational resources (e.g., Monte Carlo search).
In practice, we forgo optimality.