CS 343: Artificial Intelligence


CS 343: Artificial Intelligence Perceptrons Prof. Scott Niekum The University of Texas at Austin [These slides are based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Error-Driven Classification

Errors, and What to Do Examples of errors Dear GlobalSCAPE Customer, GlobalSCAPE has partnered with ScanSoft to offer you the latest version of OmniPage Pro, for just $99.99* - the regular list price is $499! The most common question we've received about this offer is - Is this genuine? We would like to assure you that this offer is authorized by ScanSoft, is genuine and valid. You can get the...... To receive your $30 Amazon.com promotional certificate, click through to http://www.amazon.com/apparel and see the prominent link for the $30 offer. All details are there. We hope you enjoyed receiving this message. However, if you'd rather not receive future e-mails announcing new store launches, please click...

What to Do About Errors Problem: there's still spam in your inbox Need more features (words aren't enough!) Have you emailed the sender before? Have 1M other people just gotten the same email? Is the sending information consistent? Is the email in ALL CAPS? Do inline URLs point where they say they point? Does the email address you by (your) name? Naïve Bayes models can incorporate a variety of features, but tend to do best when the features are homogeneous (e.g. all features are word occurrences) and/or roughly independent

Linear Classifiers

Feature Vectors Example (spam filtering): the email "Hello, Do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just..." maps to the feature vector # free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ... with label SPAM (or +). Example (digit recognition): an image maps to the feature vector PIXEL-7,12 : 1, PIXEL-7,13 : 0, ..., NUM_LOOPS : 1, ... with label "2"

Some (Simplified) Biology Very loose inspiration: human neurons

Linear Classifiers Inputs are feature values Each feature has a weight Sum is the activation: activation_w(x) = Σ_i w_i · f_i(x) = w · f(x) If the activation is: Positive, output +1 Negative, output -1 [Diagram: features f_1, f_2, f_3 feed through weights w_1, w_2, w_3 into a sum Σ and a threshold >0?]

Weights Binary case: compare features to a weight vector Learning: figure out the weight vector from examples Weight vector w: # free : 4, YOUR_NAME : -1, MISSPELLED : 1, FROM_FRIEND : -3, ... Example feature vectors: # free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ... and # free : 0, YOUR_NAME : 1, MISSPELLED : 1, FROM_FRIEND : 1, ... Dot product positive means the positive class
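
A minimal sketch (not from the slides) of this dot-product decision rule in Python, using the example weight and feature values above; representing sparse vectors as dicts is an illustrative choice, not part of the original material:

```python
# Binary linear classifier: activation is the dot product w . f(x),
# and the sign of the activation gives the predicted class.

def dot(weights, features):
    """Dot product of two sparse vectors represented as dicts."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def classify(weights, features):
    """Return +1 (e.g. SPAM) if the activation is positive, else -1 (HAM)."""
    return +1 if dot(weights, features) > 0 else -1

weights  = {"free": 4, "YOUR_NAME": -1, "MISSPELLED": 1, "FROM_FRIEND": -3}
features = {"free": 2, "YOUR_NAME": 0, "MISSPELLED": 2, "FROM_FRIEND": 0}

print(dot(weights, features))       # 10 -> positive activation
print(classify(weights, features))  # +1, i.e. SPAM
```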

Decision Rules

Binary Decision Rule In the space of feature vectors Examples are points Any weight vector is a hyperplane One side corresponds to Y = +1 Other corresponds to Y = -1 [Figure: plot with axes "free" and "money", split by the hyperplane for the weight vector BIAS : -3, free : 4, money : 2, ... into a +1 = SPAM region and a -1 = HAM region]

Weight Updates

Learning: Binary Perceptron Start with weights = 0 For each training instance: Classify with current weights If correct (i.e., y=y*), no change! If wrong: adjust the weight vector

Learning: Binary Perceptron Start with weights = 0 For each training instance: Classify with current weights If correct (i.e., y = y*), no change! If wrong: adjust the weight vector by adding or subtracting the feature vector: w = w + y* · f(x), i.e., subtract f(x) if y* is -1
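
A minimal sketch of this binary perceptron training loop, under the assumption that the data is a list of (feature dict, label) pairs with labels +1/-1; the number of passes is an illustrative parameter:

```python
# Binary perceptron: weights start at zero; on a mistake, add the
# feature vector if y* = +1, subtract it if y* = -1.

def train_binary_perceptron(data, passes=10):
    w = {}
    for _ in range(passes):
        for features, y_star in data:
            activation = sum(w.get(name, 0.0) * v for name, v in features.items())
            y = +1 if activation > 0 else -1
            if y != y_star:  # mistake: move w toward (y* = +1) or away from (y* = -1) this example
                for name, v in features.items():
                    w[name] = w.get(name, 0.0) + y_star * v
    return w
```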

Separable Case Examples: Perceptron

Multiclass Decision Rule If we have multiple classes: A weight vector for each class: w_y Score (activation) of a class y: w_y · f(x) Prediction: the highest score wins, y = argmax_y w_y · f(x) Binary = multiclass where the negative class has weight zero

Learning: Multiclass Perceptron Start with all weights = 0 Pick up training examples one by one Predict with current weights: y = argmax_y w_y · f(x) If correct, no change! If wrong: lower the score of the wrong answer, raise the score of the right answer: w_y = w_y - f(x) and w_y* = w_y* + f(x)
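
A sketch of the multiclass perceptron under the same assumptions as the binary sketch above (feature dicts, a list of possible class labels); again the data format and pass count are illustrative choices:

```python
# Multiclass perceptron: one weight vector per class, predict the
# highest-scoring class, and on a mistake lower the wrong class's
# weights and raise the correct class's weights by the feature vector.

from collections import defaultdict

def train_multiclass_perceptron(data, labels, passes=10):
    # data: list of (feature_dict, true_label); labels: list of possible classes
    w = {y: defaultdict(float) for y in labels}
    for _ in range(passes):
        for features, y_star in data:
            score = lambda y: sum(w[y][name] * v for name, v in features.items())
            y_hat = max(labels, key=score)
            if y_hat != y_star:
                for name, v in features.items():
                    w[y_hat][name] -= v    # lower score of wrong answer
                    w[y_star][name] += v   # raise score of right answer
    return w
```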

Properties of Perceptrons Separability: true if some parameters classify the training set perfectly Convergence: if the training data is separable, the perceptron will eventually converge (binary case) Mistake Bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability [Figures: a separable and a non-separable set of points]

Non-Separable Case Examples: Perceptron

Improving the Perceptron

Problems with the Perceptron Noise: if the data isn't separable, weights will thrash Averaging weight vectors over time can help (averaged perceptron) Mediocre generalization: finds a barely separating solution Overtraining: test / held-out accuracy usually rises, then falls Overtraining is a kind of overfitting

Fixing the Perceptron Idea: adjust the weight update to mitigate these effects MIRA*: choose an update size τ that fixes the current mistake but minimizes the change to w: τ = ((w_y - w_y*) · f(x) + 1) / (2 f(x) · f(x)), where y is the wrongly predicted class and y* the correct one The +1 helps to generalize * Margin Infused Relaxed Algorithm

Minimum Correcting Update The minimizing update is not τ = 0, or we would not have made an error, so the minimum will be where the margin constraint holds with equality

Maximum Step Size In practice, it's also bad to make updates that are too large The example may be labeled incorrectly You may not have enough features Solution: cap the maximum possible value of τ with some constant C Corresponds to an optimization that assumes non-separable data Usually converges faster than the perceptron Usually better, especially on noisy data
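
A hedged sketch of a single MIRA-style update for the multiclass case, following the τ formula and the cap C described above; the weight structure (dict of dicts) and the default value of C are illustrative assumptions, not part of the slides:

```python
# One MIRA update after a mistake (y_hat != y_star): choose tau just large
# enough to fix the mistake with a +1 margin, then cap it at C.

def mira_update(w, features, y_star, y_hat, C=0.01):
    """Update weight vectors w (dict of dicts) after a mistake."""
    f_dot_f = sum(v * v for v in features.values())
    if f_dot_f == 0:
        return
    score = lambda y: sum(w[y].get(name, 0.0) * v for name, v in features.items())
    tau = (score(y_hat) - score(y_star) + 1.0) / (2.0 * f_dot_f)
    tau = min(tau, C)  # cap the maximum step size
    for name, v in features.items():
        w[y_hat][name] = w[y_hat].get(name, 0.0) - tau * v
        w[y_star][name] = w[y_star].get(name, 0.0) + tau * v
```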

Linear Separators Which of these linear separators is optimal?

Support Vector Machines Maximizing the margin: good according to intuition, theory, practice Only support vectors matter; other training examples are ignorable Support vector machines (SVMs) find the separator with max margin Basically, SVMs are MIRA where you optimize over all examples at once [Figure: the MIRA and SVM objectives shown side by side]

Classification: Comparison Naïve Bayes: Builds a model of the training data Gives prediction probabilities Strong assumptions about feature independence One pass through the data (counting) Perceptrons / MIRA: Makes fewer assumptions about the data Mistake-driven learning Multiple passes through the data (prediction) Often more accurate

Apprenticeship

Pacman Apprenticeship! Examples are states s Candidates are pairs (s, a) Correct actions: those taken by the expert Features defined over (s, a) pairs: f(s, a) Score of a q-state (s, a) given by: w · f(s, a); the expert's action a* should score highest How is this VERY different from reinforcement learning?
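
A hedged sketch of the apprenticeship idea above: candidate actions in a state are scored by w · f(s, a), and when the expert's action a* does not score highest we apply the same update as the multiclass perceptron. The helpers f(s, a) and legal_actions(s) are assumed for illustration and are not part of the original slides:

```python
# Apprenticeship update: treat actions as "classes" and the expert's
# action as the correct label for the multiclass perceptron update.

def apprenticeship_update(w, s, a_star, legal_actions, f):
    score = lambda a: sum(w.get(name, 0.0) * v for name, v in f(s, a).items())
    a_hat = max(legal_actions(s), key=score)
    if a_hat != a_star:  # expert's action did not score highest
        for name, v in f(s, a_hat).items():
            w[name] = w.get(name, 0.0) - v   # lower score of the chosen action
        for name, v in f(s, a_star).items():
            w[name] = w.get(name, 0.0) + v   # raise score of the expert's action
    return w
```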

Video of Pacman Apprentice

Web Search

Extension: Web Search Information retrieval: given information needs, produce information Includes, e.g., web search, question answering, and classic IR Example query: x = "Apple Computers" Web search: not exactly classification, but rather ranking

Feature-Based Ranking A query x (e.g., "Apple Computer") is paired with candidate results; each query-result pair gets its own feature vector f(x, y) [Figure: candidate results for the query, each with its feature vector]

Perceptron for Ranking Inputs: queries x Candidates: results y Many feature vectors: f(x, y), one per candidate One weight vector: w Prediction: y = argmax_y w · f(x, y) Update (if wrong): w = w + f(x, y*) - f(x, y)
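
A hedged sketch of this ranking perceptron update: one weight vector, one feature vector per candidate, predict the top-scoring candidate, and on a mistake move the weights toward the correct candidate's features and away from the predicted one's. The helper f(y) (the query x is folded into it) is an illustrative assumption:

```python
# Ranking perceptron update for one query: w = w + f(x, y*) - f(x, y_hat).

def rank_update(w, candidates, y_star, f):
    # candidates: list of candidate results for the query; y_star: correct result
    score = lambda y: sum(w.get(name, 0.0) * v for name, v in f(y).items())
    y_hat = max(candidates, key=score)
    if y_hat != y_star:
        for name, v in f(y_star).items():
            w[name] = w.get(name, 0.0) + v   # move toward the correct candidate
        for name, v in f(y_hat).items():
            w[name] = w.get(name, 0.0) - v   # move away from the predicted one
    return w
```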