CS 188: Artificial Intelligence, Fall 2008. Lecture 23: Perceptrons (11/20/2008). Dan Klein, UC Berkeley.



CS 188: Artificial Intelligence, Fall 2008. Lecture 23: Perceptrons. 11/20/2008. Dan Klein, UC Berkeley.

General Naïve Bayes

- A general naive Bayes model has a class variable C with feature (evidence) variables E_1, E_2, ..., E_n as its children:
      P(C, E_1, ..., E_n) = P(C) * product over i of P(E_i | C)
- We only specify how each feature depends on the class.
- The total number of parameters is linear in n.

Example: OCR

- Per-digit parameter tables: a uniform prior over the ten digit classes and conditional probabilities for two of the pixel features (labeled f_1 and f_2 here for readability):

      digit:            1     2     3     4     5     6     7     8     9     0
      P(Y):            0.1   0.1   0.1   0.1   0.1   0.1   0.1   0.1   0.1   0.1
      P(on | Y), f_1:  0.01  0.05  0.05  0.30  0.80  0.90  0.05  0.60  0.50  0.80
      P(on | Y), f_2:  0.05  0.01  0.90  0.80  0.90  0.90  0.25  0.85  0.60  0.80

Example: Overfitting

- The slide's posterior computation ends with "2 wins!!": an overconfident, overfit parameter estimate can swing the decision.

Example: Spam Filtering

- Model: a naive Bayes model over the words of the email, with class prior P(ham) = 0.66, P(spam) = 0.33.
- Parameters: per-word conditional probabilities estimated from counts, e.g.
      P(word | spam):  to: 0.015,  and: 0.012,  free: 0.005,  click: 0.004,  screens: 0.001,  minute: 0.001
      P(word | ham):   to: 0.013,  and: 0.011,  free: 0.001,  click: 0.001,  screens: 0.000,  minute: 0.000

Example: Spam Filtering (continued)

- Raw probabilities alone don't affect the posteriors; relative probabilities (odds ratios) do.
- With raw relative-frequency estimates, the most extreme odds ratios are all infinite:
      words seen only in ham:   south-west, nation, morally, nicely, extent, seriously
      words seen only in spam:  screens, minute, guaranteed, $205.00, delivery, signature
- What went wrong here?
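To make the structure concrete, here is a minimal sketch of naive Bayes inference in the spirit of the spam example: one class prior plus one conditional table per feature, so the parameter count grows linearly with the number of features. The specific probabilities and helper names below are illustrative assumptions, not values or code from the lecture.

```python
import math

# Hypothetical parameters in the spirit of the spam example (values are made up).
prior = {"ham": 0.66, "spam": 0.33}
cond = {  # P(word appears | class): one small table per feature, so linear in n
    "free":  {"ham": 0.001, "spam": 0.005},
    "click": {"ham": 0.001, "spam": 0.004},
    "to":    {"ham": 0.013, "spam": 0.015},
}

def log_posterior(words, label):
    """Unnormalized log P(label, words) = log P(label) + sum_i log P(w_i | label)."""
    score = math.log(prior[label])
    for w in words:
        if w in cond:                      # ignore words the model has no table for
            score += math.log(cond[w][label])
    return score

def classify(words):
    """Return the label with the highest (unnormalized) posterior."""
    return max(prior, key=lambda label: log_posterior(words, label))

print(classify(["click", "to", "free"]))   # 'spam' under these toy numbers
```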

Generalization and Overfitting

- Relative-frequency parameters will overfit the training data!
  - It is unlikely that every occurrence of "minute" is 100% spam.
  - It is unlikely that every occurrence of "seriously" is 100% ham.
  - What about all the words that don't occur in the training set at all?
  - In general, we can't go around giving unseen events zero probability.
- As an extreme case, imagine using the entire email as the only feature.
  - That would get the training data perfect (if the labeling is deterministic).
  - It wouldn't generalize at all.
- Just making the bag-of-words assumption gives us some generalization, but it isn't enough.

Estimation: Smoothing

- Problems with maximum likelihood estimates:
  - If I flip a coin once and it's heads, what's the estimate for P(heads)?
  - What if I flip 10 times and get 8 heads?
  - What if I flip 10M times and get 8M heads?
- Basic idea: we have some prior expectation about the parameters (here, the probability of heads).
  - Given little evidence, we should skew towards our prior.
  - Given a lot of evidence, we should listen to the data.
- To generalize better, we need to smooth (regularize) the estimates.
- Relative frequencies are the maximum likelihood estimates:
      P_ML(x) = count(x) / N
- In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution.

Estimation: Laplace Smoothing

- Laplace's estimate: pretend you saw every outcome once more than you actually did.
      P_LAP(x) = (count(x) + 1) / (N + |X|)
  For the coin observations H, H, T this gives P_LAP(heads) = 3/5 instead of the ML estimate 2/3.
- This can be derived as a MAP estimate with Dirichlet priors (see cs281a).
- Laplace's estimate (extended): pretend you saw every outcome k extra times.
      P_LAP,k(x) = (count(x) + k) / (N + k|X|)
  - What's Laplace with k = 0?
  - k is the strength of the prior.
- Laplace for conditionals: smooth each condition independently.
      P_LAP,k(x | y) = (count(x, y) + k) / (count(y) + k|X|)

Estimation: Linear Interpolation

- In practice, Laplace often performs poorly for P(X | Y):
  - when X is very large
  - when Y is very large
- Another option: linear interpolation.
  - Also get P(X) from the data.
  - Make sure the estimate of P(X | Y) isn't too different from P(X):
        P_LIN(x | y) = α · P_ML(x | y) + (1 - α) · P_ML(x)
  - What if α is 0? What if it is 1?
- For even better ways to estimate parameters, as well as details of the math, see cs281a and cs294.
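As a concrete sketch of these estimators (my own illustration, not course code), the following functions compute the maximum likelihood, Laplace-smoothed, and linearly interpolated estimates from raw counts:

```python
from collections import Counter

def ml_estimate(counts: Counter, x):
    """Relative frequency: P_ML(x) = count(x) / N."""
    n = sum(counts.values())
    return counts[x] / n

def laplace_estimate(counts: Counter, x, domain_size, k=1.0):
    """Laplace smoothing: pretend every outcome was seen k extra times.
    P_LAP,k(x) = (count(x) + k) / (N + k * |X|); k = 0 recovers the ML estimate."""
    n = sum(counts.values())
    return (counts[x] + k) / (n + k * domain_size)

def interpolated_estimate(joint: Counter, marginal: Counter, x, y, alpha):
    """Linear interpolation: alpha * P_ML(x | y) + (1 - alpha) * P_ML(x).
    Assumes y actually occurs in the joint counts."""
    n_y = sum(c for (xi, yi), c in joint.items() if yi == y)
    p_x_given_y = joint[(x, y)] / n_y
    return alpha * p_x_given_y + (1 - alpha) * ml_estimate(marginal, x)

# Coin example from the slide: observations H, H, T.
flips = Counter({"H": 2, "T": 1})
print(ml_estimate(flips, "H"))                      # 2/3 ~ 0.667
print(laplace_estimate(flips, "H", domain_size=2))  # 3/5 = 0.6
```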

Real NB: Smoothing

- For real classification problems, smoothing is critical.
- New odds ratios after smoothing:
      toward ham:   helvetica: 11.4,  seems: 10.8,  group: 10.2,  ago: 8.4,  areas: 8.3
      toward spam:  verdana: 28.8,  Credit: 28.4,  ORDER: 27.2,  <FONT>: 26.9,  money: 26.5
- Do these make more sense?

Tuning on Held-Out Data

- Now we've got two kinds of unknowns:
  - Parameters: the probabilities P(X | Y) and P(Y).
  - Hyperparameters, like the amount of smoothing to do: k, α.
- Where to learn them?
  - Learn the parameters from the training data.
  - You must tune the hyperparameters on different data. Why?
  - For each value of the hyperparameters, train on the training data and evaluate on the held-out data.
  - Choose the best value and do a final test on the test data.

Baselines

- First task: get a baseline.
  - Baselines are very simple "straw man" procedures.
  - They help determine how hard the task is and what a good accuracy is.
- Weak baseline: the most-frequent-label classifier.
  - It gives all test instances whatever label was most common in the training set.
  - E.g., for spam filtering, it might label everything as ham.
  - Its accuracy might be very high if the problem is skewed.
- For real research, you usually use previous work as a (strong) baseline.

Confidences from a Classifier

- The confidence of a probabilistic classifier is the posterior probability of the top label.
  - It represents how sure the classifier is of the classification.
  - Any probabilistic model will have confidences.
  - There is no guarantee the confidence is correct.
- Calibration:
  - Weak calibration: higher confidences mean higher accuracy.
  - Strong calibration: the confidence predicts the accuracy rate.
  - What's the use of calibration?

Generative vs. Discriminative

- Generative classifiers (e.g., naïve Bayes):
  - We build a causal model of the variables.
  - We then query that model for causes, given evidence.
- Discriminative classifiers (e.g., the perceptron, next):
  - No causal model, no Bayes rule, often no probabilities.
  - They try to predict the output directly.
  - Loosely: mistake driven rather than model driven.

Errors, and What to Do

Examples of errors:

  "Dear GlobalSCAPE Customer, GlobalSCAPE has partnered with ScanSoft to offer you the latest version of OmniPage Pro, for just $99.99* - the regular list price is $499! The most common question we've received about this offer is - Is this genuine? We would like to assure you that this offer is authorized by ScanSoft, is genuine and valid. You can get the ..."

  "... To receive your $30 Amazon.com promotional certificate, click through to http://www.amazon.com/apparel and see the prominent link for the $30 offer. All details are there. We hope you enjoyed receiving this message. However, if you'd rather not receive future e-mails announcing new store launches, please click ..."
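A minimal sketch of the held-out tuning loop described above. The trainer and evaluator are passed in as callables because the lecture does not define them; any concrete naive Bayes implementation with a Laplace k would fit.

```python
def tune_smoothing(train_fn, eval_fn, train_data, held_out_data,
                   candidate_ks=(0.001, 0.01, 0.1, 1, 10)):
    """Grid-search the smoothing strength k on held-out data.

    train_fn(train_data, k) -> classifier; eval_fn(classifier, data) -> accuracy.
    Parameters are fit on the training data; only the hyperparameter k is chosen
    on the held-out set; the test set is reserved for one final evaluation."""
    best_k, best_acc = None, -1.0
    for k in candidate_ks:
        clf = train_fn(train_data, k)          # fit P(Y) and P(X | Y) with Laplace k
        acc = eval_fn(clf, held_out_data)      # measure generalization, not training fit
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k
```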

What to Do About Errors?

- We need more features: words aren't enough!
  - Have you emailed the sender before?
  - Have 1K other people just gotten the same email?
  - Is the sending information consistent?
  - Is the email in ALL CAPS?
  - Do inline URLs point where they say they point?
  - Does the email address you by (your) name?
- Naïve Bayes models can incorporate a variety of features, but they tend to do best in homogeneous cases (e.g. all features are word occurrences).

Features

- A feature is a function which signals a property of the input. Examples:
  - ALL_CAPS: value is 1 iff the email is in all caps
  - HAS_URL: value is 1 iff the email has a URL
  - NUM_URLS: number of URLs in the email
  - VERY_LONG: 1 iff the email is longer than 1K
  - SUSPICIOUS_SENDER: 1 iff the reply-to domain doesn't match the originating server
- Features are anything you can think of code to evaluate on an input.
  - Some are cheap, some very, very expensive to calculate.
  - A feature can even be the output of another classifier.
  - Domain knowledge goes here!
- In naïve Bayes, how did we encode features?

Feature Extractors

- A feature extractor maps inputs to feature vectors. For example, the email
  "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. ..."
  might map to
      W=dear: 1,  W=sir: 1,  W=this: 2,  W=wish: 0,  MISSPELLED: 2,  NAMELESS: 1,  ALL_CAPS: 0,  NUM_URLS: 0
- Many classifiers take feature vectors as inputs.
- Feature vectors are usually very sparse, so use sparse encodings (i.e. only represent the non-zero keys).

Some (Vague) Biology

- Very loose inspiration for the perceptron: human neurons.

The Binary Perceptron

- Inputs are feature values f_1, f_2, f_3, ...
- Each feature has a weight w_1, w_2, w_3, ...
- The sum is the activation:
      activation_w(x) = sum_i w_i * f_i(x)
- If the activation is positive, output 1; if negative, output 0.

Example: Spam

- Imagine 4 features:
  - free (number of occurrences of "free")
  - money (number of occurrences of "money")
  - BIAS (always has value 1)
- For the input "free money", the features are BIAS: 1, free: 1, money: 1; with weights BIAS: -3, free: 4, money: 2, the activation is (-3)(1) + (4)(1) + (2)(1) = 3 > 0, so the output is 1 (spam).
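The following is a small illustrative sketch (mine, not from the slides) of a feature extractor plus the binary perceptron decision rule, using the made-up weights from the spam example above:

```python
from collections import Counter

def extract_features(email_text: str) -> Counter:
    """Map an input to a sparse feature vector (only non-zero keys are stored)."""
    words = email_text.lower().split()
    feats = Counter({"BIAS": 1})           # bias feature always fires with value 1
    feats["free"] = words.count("free")
    feats["money"] = words.count("money")
    return feats

def activation(weights: Counter, feats: Counter) -> float:
    """Dot product of the weight vector and the feature vector."""
    return sum(weights[f] * v for f, v in feats.items())

# Weights from the slide's example (treated here as given, not learned).
weights = Counter({"BIAS": -3, "free": 4, "money": 2})

feats = extract_features("free money")
print(activation(weights, feats))                  # (-3)(1) + (4)(1) + (2)(1) = 3
print(1 if activation(weights, feats) > 0 else 0)  # positive activation -> class 1 (spam)
```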

Binary Decision Rule

- In the space of feature vectors, any weight vector defines a hyperplane.
  - One side will be class +1, the other class -1.
  - For the weights BIAS: -3, free: 4, money: 2, the boundary is the line where -3 + 4·free + 2·money = 0, with the +1 side labeled SPAM and the -1 side labeled HAM.
  [Figure: the (free, money) plane with this boundary drawn and the two regions labeled.]

Multiclass Decision Rule

- If we have more than two classes:
  - Have a weight vector for each class.
  - Calculate an activation for each class.
  - The highest activation wins.

Example

- The input "win the vote" has feature vector BIAS: 1, win: 1, game: 0, vote: 1, the: 1.
- The class weight vectors (one per class) are:
      BIAS: -2, win: 4, game: 4, vote: 0
      BIAS:  1, win: 2, game: 0, vote: 4
      BIAS:  2, win: 0, game: 2, vote: 0
- The class whose weight vector gives the highest activation on this feature vector is predicted.

The Perceptron Update Rule

- Start with zero weights.
- Pick up training instances one by one and try to classify each one.
- If correct, no change!
- If wrong: lower the score of the wrong answer, raise the score of the right answer.

Example

- Training sentences: "win the vote", "win the election", "win the game", each with a feature vector over BIAS, win, game, vote, the. (The slide fills in the weight tables step by step; the values did not survive in the transcription.)

Examples: Perceptron (Separable Case)

[Figure only: the perceptron's decision boundary on separable data.]
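Here is a brief sketch (my own, not course code) of the multiclass decision rule and the perceptron update rule described above; the class names are hypothetical.

```python
from collections import Counter

def score(weights: Counter, feats: Counter) -> float:
    """Activation for one class: dot product of its weight vector with the features."""
    return sum(weights[f] * v for f, v in feats.items())

def predict(weight_vectors: dict, feats: Counter) -> str:
    """Multiclass decision rule: the class with the highest activation wins."""
    return max(weight_vectors, key=lambda y: score(weight_vectors[y], feats))

def perceptron_update(weight_vectors: dict, feats: Counter, true_label: str) -> None:
    """If the guess is wrong, lower the wrong class's score and raise the right one's."""
    guess = predict(weight_vectors, feats)
    if guess != true_label:
        for f, v in feats.items():
            weight_vectors[guess][f] -= v       # lower the score of the wrong answer
            weight_vectors[true_label][f] += v  # raise the score of the right answer

# Toy usage with hypothetical class names and the slide's "win the vote" features.
weights = {"class_A": Counter(), "class_B": Counter()}   # start with zero weights
feats = Counter({"BIAS": 1, "win": 1, "vote": 1, "the": 1})
perceptron_update(weights, feats, true_label="class_B")  # tie-broken guess is wrong, so an update fires
print(weights["class_B"])  # the feature vector was added to the correct class's weights
```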

Examples: Perceptron (Separable Case)

[Figure only: perceptron training on separable data.]

Mistake-Driven Classification

- In naïve Bayes, the parameters:
  - come from data statistics,
  - have a causal interpretation,
  - and are set in one pass through the data.
- For the perceptron, the parameters:
  - come from reactions to mistakes,
  - have a discriminative interpretation,
  - and are set by going through the data until held-out accuracy maxes out.
- Data splits: Training Data / Held-Out Data / Test Data.

Properties of Perceptrons

- Separability: some parameters get the training set perfectly correct.
- Convergence: if the training data is separable, the perceptron will eventually converge (binary case).
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability.
  [Figures: a separable and a non-separable point set.]

Examples: Perceptron (Non-Separable Case)

[Figure only: perceptron training on non-separable data.]

Issues with Perceptrons

- Overtraining: test / held-out accuracy usually rises, then falls.
  - Overtraining isn't quite as bad as overfitting, but it is similar.
- Regularization: if the data isn't separable, the weights might thrash around.
  - Averaging the weight vectors over time can help (averaged perceptron).
- Mediocre generalization: the perceptron finds a barely separating solution.
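The averaged perceptron mentioned above can be sketched as follows (an illustration under my own assumptions, not course code): run the ordinary mistake-driven updates, keep a running sum of the weight vector after every example, and return the average.

```python
from collections import Counter

def train_averaged_perceptron(train_data, passes=5):
    """Binary averaged perceptron: labels are +1 / -1, examples are sparse Counters.
    Averaging the weights over all steps damps the thrashing on non-separable data."""
    w = Counter()          # current weights
    w_sum = Counter()      # running sum of weight snapshots
    steps = 0
    for _ in range(passes):                  # in practice, stop when held-out accuracy peaks
        for feats, label in train_data:
            act = sum(w[f] * v for f, v in feats.items())
            if label * act <= 0:             # mistake (or tie): nudge weights toward the label
                for f, v in feats.items():
                    w[f] += label * v
            w_sum.update(w)                  # accumulate this step's snapshot
            steps += 1
    return {f: v / steps for f, v in w_sum.items()}   # averaged weight vector

# Tiny made-up dataset: emails containing "free" tend to be spam (+1).
data = [(Counter({"BIAS": 1, "free": 2}), +1),
        (Counter({"BIAS": 1, "hi": 1}), -1)]
print(train_averaged_perceptron(data))
```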

Fixing the Perceptron

- Main problem with the perceptron: the update size τ is uncontrolled.
  - Sometimes it updates way too much, sometimes way too little.
- Solution: choose an update size which fixes the current mistake (by a margin of 1), but choose the minimum such change.

Minimum Correcting Update

- Minimize the change to the weights subject to the constraint that the correct class now outscores the wrong guess by at least 1.
- The minimum is not at τ = 0 (otherwise we would not have made an error), so the minimum will be where the constraint holds with equality.

MIRA

- In practice, it's bad to make updates that are too large:
  - the example may be labeled incorrectly.
- Solution: cap the maximum possible value of τ.
- This gives an algorithm called MIRA.
  - It usually converges faster than the perceptron.
  - It usually performs better, especially on noisy data.

Linear Separators

- Which of these linear separators is optimal?
  [Figure only: several separating lines through the same separable data.]

Support Vector Machines

- Maximizing the margin: good according to intuition and theory.
- Only the support vectors matter; the other training examples are ignorable.
- Support vector machines (SVMs) find the separator with the maximum margin.
- Basically, SVMs are MIRA where you optimize over all examples at once.

Summary

- Naïve Bayes:
  - Build classifiers using a model of the training data.
  - Smoothing the estimates is important in real systems.
  - Classifier confidences are useful, when you can get them.
- Perceptrons / MIRA:
  - Make fewer assumptions about the data.
  - Mistake-driven learning.
  - Multiple passes through the data.
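A sketch of the capped minimum-correcting update. The closed form for τ below is my reconstruction (solve for the smallest step that makes the true class outscore the guess by 1, then cap it), not a formula copied from the slides; the cap C and class names are illustrative.

```python
from collections import Counter

def mira_update(weights: dict, feats: Counter, true_label: str, guess: str,
                C: float = 0.01) -> None:
    """Capped minimum-correcting (MIRA-style) update for one mistake.

    tau is the smallest step that would make the true class outscore the wrong
    guess by 1, capped at C so a single (possibly mislabeled) example cannot
    move the weights too far."""
    if guess == true_label:
        return                                        # correct: no change
    f_dot_f = sum(v * v for v in feats.values())
    score_diff = sum((weights[guess][f] - weights[true_label][f]) * v
                     for f, v in feats.items())
    tau = min(C, (score_diff + 1.0) / (2.0 * f_dot_f))
    for f, v in feats.items():
        weights[true_label][f] += tau * v             # raise the right answer
        weights[guess][f] -= tau * v                  # lower the wrong answer

# Toy usage with hypothetical classes.
weights = {"spam": Counter(), "ham": Counter()}
feats = Counter({"BIAS": 1, "free": 1, "money": 1})
mira_update(weights, feats, true_label="spam", guess="ham")
print(weights["spam"], weights["ham"])
```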

Similarity Functions

- Similarity functions are very important in machine learning.
- Topic for next class: kernels, which are similarity functions with special properties, and the basis for a lot of advanced machine learning (e.g. SVMs).
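As a small preview (my own illustration, not from the lecture), two standard similarity functions are the dot product and the Gaussian (RBF) kernel:

```python
import math

def dot_product_similarity(x, y):
    """Linear kernel: similarity is the dot product of the two feature vectors."""
    return sum(a * b for a, b in zip(x, y))

def rbf_similarity(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: nearby points score close to 1, distant points close to 0."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(dot_product_similarity([1, 0, 2], [3, 1, 1]))   # 5
print(rbf_similarity([1, 0], [1, 0]))                  # 1.0 (identical points)
print(rbf_similarity([1, 0], [4, 4]))                  # close to 0 (distant points)
```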