# Day 6: Classification and Machine Learning


1 Day 6: Classification and Machine Learning Kenneth Benoit Essex Summer School 2014 July 30, 2013

2 Today's Road Map

- The Naive Bayes Classifier
- The k-nearest Neighbour Classifier
- Support Vector Machines (SVMs)
- Assessing the reliability of a training set
- Evaluating classification: Precision and recall
- Lab session: Classifying Text Using quanteda

3 THE NAIVE BAYES CLASSIFIER

4 Prior probabilities and updating

A test is devised to automatically flag racist news stories.

- 1% of news stories in general have racist messages
- 80% of racist news stories will be flagged by the test
- 10% of non-racist stories will also be flagged

We run the test on a new news story, and it is flagged as racist. Question: What is the probability that the story is actually racist? Any guesses?

5 Prior probabilities and updating

What about without the test? Imagine we run 1,000 news stories through the test.

- We expect that 10 will be racist
- With the test, we expect:
  - Of the 10 racist stories, 8 should be flagged as racist
  - Of the 990 non-racist stories, 99 will be wrongly flagged as racist
- That's a total of 107 stories flagged as racist

So: the updated probability of a story being racist, conditional on being flagged as racist, is 8/107 ≈ 0.075. The prior probability of 0.01 is updated to only 0.075 by the positive test result.

6 This is an example of Bayes' Rule:

$$P(R=1 \mid T=1) = \frac{P(T=1 \mid R=1)\,P(R=1)}{P(T=1)}$$
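As a check on the arithmetic, the update can be computed directly; a minimal sketch in Python using the numbers from the slides:

```python
# Bayes update for the flagged-story example (numbers from the slides).
p_racist = 0.01               # prior P(R = 1)
p_flag_given_racist = 0.80    # sensitivity P(T = 1 | R = 1)
p_flag_given_clean = 0.10     # false-positive rate P(T = 1 | R = 0)

# Total probability of a flag: P(T=1) = P(T=1|R=1)P(R=1) + P(T=1|R=0)P(R=0)
p_flag = p_flag_given_racist * p_racist + p_flag_given_clean * (1 - p_racist)

# Posterior P(R = 1 | T = 1) via Bayes' rule
posterior = p_flag_given_racist * p_racist / p_flag
print(round(posterior, 3))  # 0.075: the prior 0.01 is updated to about 7.5%
```

This is exactly the 8/107 from the 1,000-story thought experiment on the previous slide.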

7 Multinomial Bayes model of Class given a Word

Consider J word types distributed across I documents, each assigned one of K classes. At the word level, Bayes' Theorem tells us that:

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j)}$$

For two classes, this can be expressed as

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j \mid c_k)\,P(c_k) + P(w_j \mid \bar{c}_k)\,P(\bar{c}_k)} \quad (1)$$

8 Classification as a goal

- Machine learning focuses on identifying classes (classification), while social science is typically interested in locating things on latent traits (scaling)
- One of the simplest and most robust classification methods is the Naive Bayes (NB) classifier, built on a Bayesian probability model
- The class predictions for a collection of words from NB are great for classification, but useless for scaling
- But intermediate steps from NB turn out to be excellent for scaling purposes, and identical to Laver, Benoit and Garry's Wordscores
- Applying lessons from machine learning to supervised scaling, we can:
  - apply classification methods to scaling
  - improve it using lessons from machine learning

9 Supervised v. unsupervised methods compared

- The goal (in text analysis) is to differentiate documents from one another, treating them as bags of words
- Different approaches:
  - Supervised methods require a training set that exemplifies contrasting classes, identified by the researcher
  - Unsupervised methods scale documents based on patterns of similarity from the term-document matrix, without requiring a training step
- Relative advantage of supervised methods: you already know the dimension being scaled, because you set it in the training stage
- Relative disadvantage of supervised methods: you must already know the dimension being scaled, because you have to feed it good sample documents in the training stage

10 Supervised v. unsupervised methods: Examples

General examples:

- Supervised: Naive Bayes, k-nearest Neighbor, Support Vector Machines (SVM)
- Unsupervised: correspondence analysis, IRT models, factor analytic approaches

Political science applications:

- Supervised: Wordscores (LBG 2003); SVMs (Yu, Kaufman and Diermeier 2008); Naive Bayes (Evans et al 2007)
- Unsupervised: Wordfish (Slapin and Proksch 2008); correspondence analysis (Schonhardt-Bailey 2008); two-dimensional IRT (Monroe and Maeda 2004)

11 Multinomial Bayes model of Class given a Word

Consider J word types distributed across I documents, each assigned one of K classes. At the word level, Bayes' Theorem tells us that:

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j)}$$

For two classes, this can be expressed as

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j \mid c_k)\,P(c_k) + P(w_j \mid \bar{c}_k)\,P(\bar{c}_k)} \quad (2)$$

12 Multinomial Bayes model of Class given a Word

Class-conditional word likelihoods:

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j \mid c_k)\,P(c_k) + P(w_j \mid \bar{c}_k)\,P(\bar{c}_k)}$$

- $P(w_j \mid c_k)$ is the word likelihood within class
- The maximum likelihood estimate is simply the proportion of times that word j occurs in class k, but it is more common to use Laplace smoothing by adding 1 to each observed count within class
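A minimal sketch of the add-one (Laplace) smoothed word likelihood; the function name and toy documents are illustrative, not from the slides:

```python
from collections import Counter

def word_likelihoods(docs_in_class, vocab):
    """Estimate P(w | c) from the word tokens of all training docs in one class.
    Laplace smoothing: add 1 to each observed count and the vocabulary size to
    the denominator, so words unseen in this class still get nonzero mass."""
    counts = Counter(tok for doc in docs_in_class for tok in doc)
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

# Hypothetical toy class: two short documents, vocabulary of four types
docs = [["tax", "cut", "tax"], ["tax", "spend"]]
vocab = {"tax", "cut", "spend", "growth"}
probs = word_likelihoods(docs, vocab)
print(probs["tax"], probs["growth"])  # (3+1)/(5+4) = 4/9 and (0+1)/(5+4) = 1/9
```

Note that "growth" never appears in the class but still receives probability 1/9 rather than zero, which is the whole point of the smoothing.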

13 Multinomial Bayes model of Class given a Word

Word probabilities:

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j)}$$

- $P(w_j)$ represents the word probability from the training corpus
- Usually uninteresting, since it is constant for the training data, but needed to compute posteriors on a probability scale

14 Multinomial Bayes model of Class given a Word

Class prior probabilities:

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j \mid c_k)\,P(c_k) + P(w_j \mid \bar{c}_k)\,P(\bar{c}_k)}$$

- $P(c_k)$ represents the class prior probability
- Machine learning typically takes this as the document frequency in the training set
- This approach is flawed for scaling, however, since we are scaling the latent class-ness of an unknown document, not predicting class: uniform priors are more appropriate

15 Multinomial Bayes model of Class given a Word

Class posterior probabilities:

$$P(c_k \mid w_j) = \frac{P(w_j \mid c_k)\,P(c_k)}{P(w_j \mid c_k)\,P(c_k) + P(w_j \mid \bar{c}_k)\,P(\bar{c}_k)}$$

- $P(c_k \mid w_j)$ represents the posterior probability of membership in class k for word j
- Under certain conditions, this is identical to what LBG (2003) called $P_{wr}$
- Under those conditions, the LBG wordscore is the linear difference between $P(c_k \mid w_j)$ and $P(\bar{c}_k \mid w_j)$

16 Moving to the document level

The Naive Bayes model of a joint document-level class posterior assumes conditional independence, multiplying the word likelihoods from a test document to produce:

$$P(c \mid d) = P(c) \prod_j \frac{P(w_j \mid c)}{P(w_j)}$$

This is why we call it "naive": because it (wrongly) assumes:

- conditional independence of word counts
- positional independence of word counts

17 Naive Bayes Classification Example

(From Manning, Raghavan and Schütze, *Introduction to Information Retrieval*, Section 13.2, Table 13.1: data for parameter estimation.)

|              | docid | words in document                   | in c = China? |
|--------------|-------|-------------------------------------|---------------|
| training set | 1     | Chinese Beijing Chinese             | yes           |
|              | 2     | Chinese Chinese Shanghai            | yes           |
|              | 3     | Chinese Macao                       | yes           |
|              | 4     | Tokyo Japan Chinese                 | no            |
| test set     | 5     | Chinese Chinese Chinese Tokyo Japan | ?             |

18 Naive Bayes Classification Example

For the example in Table 13.1, the multinomial parameters we need to classify the test document are the priors $\hat{P}(c) = 3/4$ and $\hat{P}(\bar{c}) = 1/4$ and the following conditional probabilities:

$$\hat{P}(\text{Chinese} \mid c) = (5+1)/(8+6) = 6/14 = 3/7$$
$$\hat{P}(\text{Tokyo} \mid c) = \hat{P}(\text{Japan} \mid c) = (0+1)/(8+6) = 1/14$$
$$\hat{P}(\text{Chinese} \mid \bar{c}) = (1+1)/(3+6) = 2/9$$
$$\hat{P}(\text{Tokyo} \mid \bar{c}) = \hat{P}(\text{Japan} \mid \bar{c}) = (1+1)/(3+6) = 2/9$$

The denominators are (8+6) and (3+6) because the lengths of the concatenated texts of $c$ and $\bar{c}$ are 8 and 3, respectively, and because the smoothing constant $B$ in Equation (13.7) of IIR is 6, as the vocabulary consists of six terms. We then get:

$$\hat{P}(c \mid d_5) \propto 3/4 \cdot (3/7)^3 \cdot 1/14 \cdot 1/14 \approx 0.0003$$
$$\hat{P}(\bar{c} \mid d_5) \propto 1/4 \cdot (2/9)^3 \cdot 2/9 \cdot 2/9 \approx 0.0001$$

Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator Chinese in $d_5$ outweigh the occurrences of the two negative indicators Japan and Tokyo.
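The worked example can be reproduced in a few lines; a sketch assuming the Table 13.1 data and add-one smoothing (the helper names are illustrative):

```python
from math import prod  # Python 3.8+

# Training data from Table 13.1: (tokens, label), where "yes" means c = China
train = [
    (["Chinese", "Beijing", "Chinese"], "yes"),
    (["Chinese", "Chinese", "Shanghai"], "yes"),
    (["Chinese", "Macao"], "yes"),
    (["Tokyo", "Japan", "Chinese"], "no"),
]
test_doc = ["Chinese", "Chinese", "Chinese", "Tokyo", "Japan"]

vocab = {tok for doc, _ in train for tok in doc}  # six terms

def class_score(label):
    """Unnormalized posterior: prior times Laplace-smoothed likelihoods."""
    docs = [doc for doc, y in train if y == label]
    prior = len(docs) / len(train)
    tokens = [tok for doc in docs for tok in doc]
    def p(w):  # add-one smoothed P(w | class)
        return (tokens.count(w) + 1) / (len(tokens) + len(vocab))
    return prior * prod(p(w) for w in test_doc)

scores = {label: class_score(label) for label in ("yes", "no")}
print(max(scores, key=scores.get))  # "yes": the three "Chinese" tokens dominate
```

Running this recovers the slide's numbers: the "yes" score is roughly 0.0003 and the "no" score roughly 0.0001.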

19 THE k-nn CLASSIFIER

20 Other classification methods: k-nearest neighbour

- A non-parametric method for classifying objects based on the training examples that are closest in the feature space
- A type of instance-based learning, or "lazy learning", where the function is only approximated locally and all computation is deferred until classification
- An object is classified by a majority vote of its neighbours, with the object being assigned to the class most common amongst its k nearest neighbours (where k is a positive integer, usually small)
- Extremely simple: the only parameter that adjusts is k (the number of neighbours to be used); increasing k smooths the decision boundary
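The voting rule above fits in a few lines; a minimal sketch with hypothetical 2D points (the helper name and data are illustrative):

```python
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (point, label) pairs. All work happens here, at
    classification time: the 'lazy learning' described on the slide."""
    neighbours = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two loose clusters of hypothetical 2D points
train = [((0, 0), "red"), ((1, 0), "red"), ((0, 1), "red"),
         ((5, 5), "blue"), ((6, 5), "blue"), ((5, 6), "blue")]
print(knn_predict(train, (0.5, 0.5), k=3))  # red
print(knn_predict(train, (5.5, 5.5), k=3))  # blue
```

Changing `k` is the only tuning knob: with larger `k`, more distant points get a vote and the decision boundary smooths out, as the following figure slides illustrate.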

21 k-nn Example: Red or Blue?

22 k = 1

23 k = 7

24 k = 15

25 k-nearest neighbour issues: Dimensionality

- Distance usually relates to all the attributes and assumes all of them have the same effect on distance
- Misclassification may result from attributes not conforming to this assumption (sometimes called the "curse of dimensionality"); one solution is to reduce the dimensions
- There are (many!) different metrics of distance

26 SUPPORT VECTOR MACHINES

27 (Very) General overview to SVMs

- The SVM is a supervised learning algorithm that infers, from a set of labeled examples, a function that takes new examples as input and produces predicted labels as output: a function defined on the space the examples come from, taking one of two values corresponding to the two class labels in binary classification
- The key underlying idea is in fact extremely simple: the standard derivation of the SVM algorithm begins with possibly the simplest class of decision functions, linear ones
- A linear decision function has a decision boundary that is a hyperplane (a line in 2D, a plane in 3D, etc.) separating the two regions of the space; such a classifier can be written as $g(\vec{x}) = \mathrm{sign}(f(\vec{x}))$, whose value (+1 or -1) is the predicted label for $\vec{x}$
- The SVM is a generalization of the maximal margin classifier (MMC): the idea is to find the classification boundary that maximizes the distance to the marginal points
- Unfortunately the MMC does not apply to cases with non-linear decision boundaries

(Figure: a simple 2D classification task, separating black dots from circles. Three feasible but different linear decision functions are depicted; the classifier predicts that new samples in the gray region are black dots, and those in the white region are circles. Which is the best decision function, and why?)

28 No solution to this using the support vector classifier

(Figure 9.8 from *An Introduction to Statistical Learning*, Section 9.3. Left: the observations fall into two classes, with a non-linear boundary between them. Right: the support vector classifier seeks a linear boundary, and consequently performs very poorly.)

29 Case 3: Not linearly separable data; the kernel trick

SVMs represent the data in a higher-dimensional projection using a kernel, and bisect this using a hyperplane.

(Figure: tumor v. normal samples plotted on two genes. The data are not linearly separable in the input space, but are linearly separable in the feature space obtained by a kernel $\Phi : \mathbb{R}^N \to H$.)
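The trick can be made concrete with a degree-2 polynomial kernel: it computes an inner product in a 6-dimensional feature space without ever constructing that space. A sketch; the explicit feature map shown is one standard correspondence for 2D inputs, written out here only to verify the equivalence:

```python
from math import sqrt

def poly_kernel(x, z):
    """Degree-2 polynomial kernel on 2D inputs: an inner product in a
    6-dimensional feature space, computed directly in the input space."""
    return (x[0] * z[0] + x[1] * z[1] + 1) ** 2

def phi(x):
    """The explicit 6-D feature map this kernel corresponds to."""
    x1, x2 = x
    return (1, sqrt(2) * x1, sqrt(2) * x2, x1 * x1, x2 * x2, sqrt(2) * x1 * x2)

x, z = (1.0, 2.0), (3.0, -1.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
print(poly_kernel(x, z), explicit)  # both approximately 4.0
```

A linear hyperplane in the 6-D feature space corresponds to a curved (quadratic) boundary back in the input space, which is how data like the tumor/normal example become separable.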

30 Need for model selection for SVMs

A non-linear kernel is only needed when no linear separating plane exists, so it is not needed in the second of these cases.

(Figure, left: it is impossible to find a linear SVM classifier that separates tumors from normals; a non-linear SVM classifier, e.g. an SVM with a polynomial kernel of degree 2, solves this problem without errors. Right: we should not apply a non-linear SVM classifier when we can perfectly solve the problem using a linear SVM classifier.)

31 Different kernels can represent different decision boundaries

- This has to do with different projections of the data into higher-dimensional space
- The mathematics of this are complicated but solvable as forms of optimization problems; the kernel choice, however, is a user decision

(Figure from *An Introduction to Statistical Learning*, Section 9.3.)

32 EVALUATING CLASSIFIER PERFORMANCE

33 Basic principles of machine learning: Generalization and overfitting

- Generalization: a classifier or regression algorithm learns to correctly predict output from given inputs, not only in previously seen samples but also in previously unseen samples
- Overfitting: a classifier or regression algorithm learns to correctly predict output from given inputs in previously seen samples, but fails to do so in previously unseen samples; this causes poor prediction/generalization
- The goal is to push out the frontier of precisely identifying the true condition (precision) while also recalling it accurately (recall)

34 Precision and recall

Same intuition as specificity and sensitivity earlier in the course.

|                      | True condition: Positive       | True condition: Negative      |
|----------------------|--------------------------------|-------------------------------|
| Prediction: Positive | True Positive                  | False Positive (Type I error) |
| Prediction: Negative | False Negative (Type II error) | True Negative                 |

35 Precision and recall and related statistics

$$\text{Precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}}$$

$$\text{Recall} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}$$

$$\text{Accuracy} = \frac{\text{correctly classified}}{\text{total number of cases}}$$

$$F_1 = 2\,\frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad \text{(the harmonic mean of precision and recall)}$$

36 Example: Computing precision/recall

Assume:

- We have a corpus where 80 documents are really positive (as opposed to negative, as in sentiment)
- Our method declares that 60 are positive
- Of the 60 declared positive, 45 are actually positive

Solution:

- Precision = 45/(45 + 15) = 45/60 = 0.75
- Recall = 45/(45 + 35) = 45/80 = 0.56
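The same computation as a short sketch (the helper name is illustrative):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix cell counts."""
    return tp / (tp + fp), tp / (tp + fn)

# The slide's numbers: 80 truly positive, 60 declared positive, 45 correct
tp = 45
fp = 60 - 45   # declared positive but actually negative
fn = 80 - 45   # actually positive but missed
p, r = precision_recall(tp, fp, fn)
print(round(p, 2), round(r, 2))  # 0.75 0.56
```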

37 Accuracy?

|                      | True: Positive | True: Negative |
|----------------------|----------------|----------------|
| Prediction: Positive |                |                |
| Prediction: Negative |                |                |
| (column total)       | 80             |                |

38 add in the cells we can compute:

|                      | True: Positive | True: Negative |
|----------------------|----------------|----------------|
| Prediction: Positive | 45             | 15             |
| Prediction: Negative | 35             |                |
| (column total)       | 80             |                |

39 but need True Negatives and N to compute accuracy:

|                      | True: Positive | True: Negative |
|----------------------|----------------|----------------|
| Prediction: Positive | 45             | 15             |
| Prediction: Negative | 35             | ???            |
| (column total)       | 80             |                |

40 assume 10 True Negatives:

|                      | True: Positive | True: Negative |
|----------------------|----------------|----------------|
| Prediction: Positive | 45             | 15             |
| Prediction: Negative | 35             | 10             |
| (column total)       | 80             | 25             |

- Accuracy = (45 + 10)/105 = 0.52
- F1 = 2(0.75 × 0.5625)/(0.75 + 0.5625) = 0.64

41 now assume 100 True Negatives:

|                      | True: Positive | True: Negative |
|----------------------|----------------|----------------|
| Prediction: Positive | 45             | 15             |
| Prediction: Negative | 35             | 100            |
| (column total)       | 80             | 115            |

- Accuracy = (45 + 100)/195 = 0.74
- F1 = 2(0.75 × 0.5625)/(0.75 + 0.5625) = 0.64
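The pattern in the two scenarios (accuracy moves with the true-negative count, while F1 does not) can be verified directly; a minimal sketch:

```python
def accuracy_and_f1(tp, fp, fn, tn):
    """Accuracy and F1 from the four confusion-matrix cells."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Same cells as the tables above; only the true-negative count changes
for tn in (10, 100):
    acc, f1 = accuracy_and_f1(tp=45, fp=15, fn=35, tn=tn)
    print(tn, round(acc, 2), round(f1, 2))  # 10 0.52 0.64, then 100 0.74 0.64
```

F1 ignores true negatives entirely, which is why it is often preferred to accuracy when negatives are plentiful and uninteresting.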

42 RELIABILITY TESTING FOR THE TRAINING SET

43 How do we get true condition?

- In some domains: through more expensive or extensive tests
- In social sciences: typically by expert annotation or coding
- A scheme should be tested and reported for its reliability

44 Inter-rater reliability

Different types are distinguished by the way the reliability data is obtained.

| Type            | Test Design   | Causes of Disagreements                                                                   | Strength  |
|-----------------|---------------|-------------------------------------------------------------------------------------------|-----------|
| Stability       | test-retest   | intraobserver inconsistencies                                                              | weakest   |
| Reproducibility | test-test     | intraobserver inconsistencies + interobserver disagreements                                | medium    |
| Accuracy        | test-standard | intraobserver inconsistencies + interobserver disagreements + deviations from a standard   | strongest |

45 Measures of agreement

- Percent agreement
  - Very simple: (number of agreeing ratings) / (total ratings) × 100%
- Correlation
  - (usually) Pearson's r, a.k.a. product-moment correlation
  - Formula: $r_{AB} = \frac{1}{n-1} \sum_{i=1}^{n} \left(\frac{A_i - \bar{A}}{s_A}\right)\left(\frac{B_i - \bar{B}}{s_B}\right)$
  - May also be ordinal, such as Spearman's rho or Kendall's tau-b
  - Range is [-1, 1]
- Agreement measures
  - Take into account not only observed agreement, but also agreement that would have occurred by chance
  - Cohen's κ is most common
  - Krippendorff's α is a generalization of Cohen's κ
  - Both range from [0, 1]
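A minimal sketch of Cohen's κ on hypothetical binary codings; the data and function name are invented for illustration:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters over the same items (binary labels here):
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Chance agreement: product of each rater's marginal rate, summed over labels
    chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - chance) / (1 - chance)

# Hypothetical codings of 10 articles by two coders (1 = category present)
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.58: 80% raw agreement,
                                                 # 52% expected by chance
```

Raw percent agreement here is 0.80, but κ discounts the agreement the coders' marginal rates would produce by chance, which is the point of chance-corrected measures.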

46 Reliability data matrices

- The example here used binary data (from Krippendorff), with each article coded by Coder A and Coder B
- A and B agree on 60% of the articles: 60% agreement
- Correlation is (approximately) 0.10
- Observed disagreement: 4
- Expected disagreement (by chance):
- Krippendorff's α = 1 − D_o/D_e
- Cohen's κ is (nearly) identical


### 11 More Regression; Newton s Method; ROC Curves

More Regression; Newton s Method; ROC Curves 59 11 More Regression; Newton s Method; ROC Curves LEAST-SQUARES POLYNOMIAL REGRESSION Replace each X i with feature vector e.g. (X i ) = [X 2 i1 X i1 X i2

### Machine Learning. Lecture 4: Regularization and Bayesian Statistics. Feng Li. https://funglee.github.io

Machine Learning Lecture 4: Regularization and Bayesian Statistics Feng Li fli@sdu.edu.cn https://funglee.github.io School of Computer Science and Technology Shandong University Fall 207 Overfitting Problem

### Logistic Regression. Will Monroe CS 109. Lecture Notes #22 August 14, 2017

1 Will Monroe CS 109 Logistic Regression Lecture Notes #22 August 14, 2017 Based on a chapter by Chris Piech Logistic regression is a classification algorithm1 that works by trying to learn a function

### Latent Dirichlet Allocation

Latent Dirichlet Allocation 1 Directed Graphical Models William W. Cohen Machine Learning 10-601 2 DGMs: The Burglar Alarm example Node ~ random variable Burglar Earthquake Arcs define form of probability

### Naïve Bayes. Vibhav Gogate The University of Texas at Dallas

Naïve Bayes Vibhav Gogate The University of Texas at Dallas Supervised Learning of Classifiers Find f Given: Training set {(x i, y i ) i = 1 n} Find: A good approximation to f : X Y Examples: what are

### Lecture 3: Pattern Classification. Pattern classification

EE E68: Speech & Audio Processing & Recognition Lecture 3: Pattern Classification 3 4 5 The problem of classification Linear and nonlinear classifiers Probabilistic classification Gaussians, mitures and

### Support Vector Machines

Wien, June, 2010 Paul Hofmarcher, Stefan Theussl, WU Wien Hofmarcher/Theussl SVM 1/21 Linear Separable Separating Hyperplanes Non-Linear Separable Soft-Margin Hyperplanes Hofmarcher/Theussl SVM 2/21 (SVM)

### Regularization. CSCE 970 Lecture 3: Regularization. Stephen Scott and Vinod Variyam. Introduction. Outline

Other Measures 1 / 52 sscott@cse.unl.edu learning can generally be distilled to an optimization problem Choose a classifier (function, hypothesis) from a set of functions that minimizes an objective function

### Naïve Bayes classification

Naïve Bayes classification 1 Probability theory Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. Examples: A person s height, the outcome of a coin toss

### ECE662: Pattern Recognition and Decision Making Processes: HW TWO

ECE662: Pattern Recognition and Decision Making Processes: HW TWO Purdue University Department of Electrical and Computer Engineering West Lafayette, INDIANA, USA Abstract. In this report experiments are

### The Naïve Bayes Classifier. Machine Learning Fall 2017

The Naïve Bayes Classifier Machine Learning Fall 2017 1 Today s lecture The naïve Bayes Classifier Learning the naïve Bayes Classifier Practical concerns 2 Today s lecture The naïve Bayes Classifier Learning

### Classification of Ordinal Data Using Neural Networks

Classification of Ordinal Data Using Neural Networks Joaquim Pinto da Costa and Jaime S. Cardoso 2 Faculdade Ciências Universidade Porto, Porto, Portugal jpcosta@fc.up.pt 2 Faculdade Engenharia Universidade

### GI01/M055: Supervised Learning

GI01/M055: Supervised Learning 1. Introduction to Supervised Learning October 5, 2009 John Shawe-Taylor 1 Course information 1. When: Mondays, 14:00 17:00 Where: Room 1.20, Engineering Building, Malet

### Clustering. Professor Ameet Talwalkar. Professor Ameet Talwalkar CS260 Machine Learning Algorithms March 8, / 26

Clustering Professor Ameet Talwalkar Professor Ameet Talwalkar CS26 Machine Learning Algorithms March 8, 217 1 / 26 Outline 1 Administration 2 Review of last lecture 3 Clustering Professor Ameet Talwalkar

### Study Notes on the Latent Dirichlet Allocation

Study Notes on the Latent Dirichlet Allocation Xugang Ye 1. Model Framework A word is an element of dictionary {1,,}. A document is represented by a sequence of words: =(,, ), {1,,}. A corpus is a collection

### Machine Learning Linear Classification. Prof. Matteo Matteucci

Machine Learning Linear Classification Prof. Matteo Matteucci Recall from the first lecture 2 X R p Regression Y R Continuous Output X R p Y {Ω 0, Ω 1,, Ω K } Classification Discrete Output X R p Y (X)

### Statistical Machine Learning from Data

Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Ensembles Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique Fédérale de Lausanne

### Perceptron Revisited: Linear Separators. Support Vector Machines

Support Vector Machines Perceptron Revisited: Linear Separators Binary classification can be viewed as the task of separating classes in feature space: w T x + b > 0 w T x + b = 0 w T x + b < 0 Department

Advanced classifica-on methods Instance-based classifica-on Bayesian classifica-on Instance-Based Classifiers Set of Stored Cases Atr1... AtrN Class A B B C A C B Store the training records Use training

### CS Lecture 18. Topic Models and LDA

CS 6347 Lecture 18 Topic Models and LDA (some slides by David Blei) Generative vs. Discriminative Models Recall that, in Bayesian networks, there could be many different, but equivalent models of the same

### Kernel Methods. Barnabás Póczos

Kernel Methods Barnabás Póczos Outline Quick Introduction Feature space Perceptron in the feature space Kernels Mercer s theorem Finite domain Arbitrary domain Kernel families Constructing new kernels

### Lecture 2. Judging the Performance of Classifiers. Nitin R. Patel

Lecture 2 Judging the Performance of Classifiers Nitin R. Patel 1 In this note we will examine the question of how to udge the usefulness of a classifier and how to compare different classifiers. Not only

### Generative Classifiers: Part 1. CSC411/2515: Machine Learning and Data Mining, Winter 2018 Michael Guerzhoy and Lisa Zhang

Generative Classifiers: Part 1 CSC411/2515: Machine Learning and Data Mining, Winter 2018 Michael Guerzhoy and Lisa Zhang 1 This Week Discriminative vs Generative Models Simple Model: Does the patient

### Ranked Retrieval (2)

Text Technologies for Data Science INFR11145 Ranked Retrieval (2) Instructor: Walid Magdy 31-Oct-2017 Lecture Objectives Learn about Probabilistic models BM25 Learn about LM for IR 2 1 Recall: VSM & TFIDF

### Introduction: MLE, MAP, Bayesian reasoning (28/8/13)

STA561: Probabilistic machine learning Introduction: MLE, MAP, Bayesian reasoning (28/8/13) Lecturer: Barbara Engelhardt Scribes: K. Ulrich, J. Subramanian, N. Raval, J. O Hollaren 1 Classifiers In this

### Naïve Bayes classification. p ij 11/15/16. Probability theory. Probability theory. Probability theory. X P (X = x i )=1 i. Marginal Probability

Probability theory Naïve Bayes classification Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. s: A person s height, the outcome of a coin toss Distinguish

### Part I. Linear regression & LASSO. Linear Regression. Linear Regression. Week 10 Based in part on slides from textbook, slides of Susan Holmes

Week 10 Based in part on slides from textbook, slides of Susan Holmes Part I Linear regression & December 5, 2012 1 / 1 2 / 1 We ve talked mostly about classification, where the outcome categorical. If

### Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions

Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite

### Measuring Topic Quality in Latent Dirichlet Allocation

Measuring Topic Quality in Sergei Koltsov Olessia Koltsova Steklov Institute of Mathematics at St. Petersburg Laboratory for Internet Studies, National Research University Higher School of Economics, St.

### Bayesian Classification Methods

Bayesian Classification Methods Suchit Mehrotra North Carolina State University smehrot@ncsu.edu October 24, 2014 Suchit Mehrotra (NCSU) Bayesian Classification October 24, 2014 1 / 33 How do you define

### Modeling Data with Linear Combinations of Basis Functions. Read Chapter 3 in the text by Bishop

Modeling Data with Linear Combinations of Basis Functions Read Chapter 3 in the text by Bishop A Type of Supervised Learning Problem We want to model data (x 1, t 1 ),..., (x N, t N ), where x i is a vector