Support Vector Machines


Support Vector Machines. Jordan Boyd-Graber, University of Colorado Boulder. Lecture 7. Slides adapted from Tom Mitchell, Eric Xing, and Lauren Hannah.

Roadmap. Classification: machines labeling data for us. Previously: naïve Bayes and logistic regression. This time: SVMs, (another) example of a linear classifier; state-of-the-art classification; good theoretical properties.

Thinking geometrically. Suppose you have two classes, vacations and sports, and four documents. Sports: Doc 1: {ball, ball, ball, travel}; Doc 2: {ball, ball}. Vacations: Doc 3: {travel, ball, travel}; Doc 4: {travel}. What does this look like in vector space?

Put the documents in vector space. [Figure: the four documents plotted as points, with counts of "ball" on the x-axis and "travel" on the y-axis, both running from 0 to 3.]
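
To make the picture concrete, here is a minimal sketch (not from the slides) that builds the term-count vectors for the four example documents over the two axes, ball and travel:

```python
from collections import Counter

docs = {
    "Doc 1 (sports)":    ["ball", "ball", "ball", "travel"],
    "Doc 2 (sports)":    ["ball", "ball"],
    "Doc 3 (vacations)": ["travel", "ball", "travel"],
    "Doc 4 (vacations)": ["travel"],
}

vocab = ["ball", "travel"]  # the two axes of the toy vector space

for name, tokens in docs.items():
    counts = Counter(tokens)
    vector = [counts[term] for term in vocab]  # term-count vector (ball, travel)
    print(name, vector)
```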

Vector space representation of documents. Each document is a vector, with one component for each term; terms are the axes. High dimensionality: tens of thousands of dimensions and more. How can we do classification in this space?

Vector space classification. As before, the training set is a set of documents, each labeled with its class. In vector space classification, this set corresponds to a labeled set of points or vectors in the vector space. Premise 1: documents in the same class form a contiguous region. Premise 2: documents from different classes don't overlap. We define lines, surfaces, and hypersurfaces to divide the regions.

Classes in the vector space. Example: three classes, UK, China, and Kenya, shown as regions of points in the vector space, together with a new unlabeled document. Should the document be assigned to China, UK, or Kenya? Find separators between the classes; based on these separators, the document should be assigned to China. How do we find separators that do a good job at classifying new documents like this one? That is the main topic of today.

Plan (now: Linear Classifiers): Linear Classifiers, Support Vector Machines, Formulation, Theoretical Guarantees, Recap.

Linear classifiers. Definition: a linear classifier computes a linear combination (weighted sum) $\sum_i \beta_i x_i$ of the feature values. Classification decision: $\sum_i \beta_i x_i > \beta_0$?, where $\beta_0$ (the threshold, or bias) is a parameter. We call this the separator or decision boundary, and we find it based on the training set. Methods for finding the separator: logistic regression, naïve Bayes, linear SVM. Assumption: the classes are linearly separable. Before, we just talked about equations; what is the geometric intuition?
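
As a quick illustration of the decision rule above, here is a minimal sketch; the weights beta and threshold beta0 are made-up values, not learned from data:

```python
import numpy as np

def linear_classify(x, beta, beta0):
    """Return True if the weighted sum of features exceeds the threshold beta0."""
    return np.dot(beta, x) > beta0

# Toy weights for the (ball, travel) features: positive score => "sports".
beta = np.array([1.0, -1.0])   # illustration values, not learned
beta0 = 0.0

print(linear_classify(np.array([3, 1]), beta, beta0))  # Doc 1 -> True (sports)
print(linear_classify(np.array([0, 1]), beta, beta0))  # Doc 4 -> False (vacations)
```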

A linear classifier in 1D. A linear classifier in 1D is a point $x$ described by the equation $\beta_1 x_1 = \beta_0$, i.e. $x = \beta_0 / \beta_1$. Points $(x_1)$ with $\beta_1 x_1 \geq \beta_0$ are in the class $c$; points $(x_1)$ with $\beta_1 x_1 < \beta_0$ are in the complement class $\bar{c}$.

A linear classifier in 2D. A linear classifier in 2D is a line described by the equation $\beta_1 x_1 + \beta_2 x_2 = \beta_0$. Points $(x_1, x_2)$ with $\beta_1 x_1 + \beta_2 x_2 \geq \beta_0$ are in the class $c$; points $(x_1, x_2)$ with $\beta_1 x_1 + \beta_2 x_2 < \beta_0$ are in the complement class $\bar{c}$.

A linear classifier in 3D. A linear classifier in 3D is a plane described by the equation $\beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 = \beta_0$. Points $(x_1, x_2, x_3)$ with $\beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 \geq \beta_0$ are in the class $c$; points $(x_1, x_2, x_3)$ with $\beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 < \beta_0$ are in the complement class $\bar{c}$.

Naive Bayes and logistic regression as linear classifiers. Multinomial Naive Bayes is a linear classifier (in log space) defined by
$$\sum_{i=1}^{M} \beta_i x_i = \beta_0$$
where $\beta_i = \log\left[\hat{P}(t_i \mid c) / \hat{P}(t_i \mid \bar{c})\right]$, $x_i$ is the number of occurrences of $t_i$ in $d$, and $\beta_0 = \log\left[\hat{P}(c) / \hat{P}(\bar{c})\right]$. Here the index $i$, $1 \leq i \leq M$, refers to terms of the vocabulary. Logistic regression is the same; we only pass the sum through the logistic function to turn it into a probability. Takeaway: naïve Bayes, logistic regression, and SVMs are all linear methods. They choose their hyperplanes based on different objectives: joint likelihood (NB), conditional likelihood (LR), and the margin (SVM).
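
As an illustration of why Naive Bayes is linear, here is a minimal sketch with made-up per-class counts and add-one smoothing; the counts and the two-term vocabulary are assumptions for the example, not from the slides:

```python
import math

# Made-up per-class term counts for a two-term vocabulary (illustration only).
counts_c    = {"ball": 5, "travel": 1}   # class c (sports)
counts_cbar = {"ball": 1, "travel": 4}   # complement class (vacations)

def smoothed_prob(counts, term, vocab_size, alpha=1.0):
    """Add-one smoothed estimate of P(term | class)."""
    total = sum(counts.values())
    return (counts.get(term, 0) + alpha) / (total + alpha * vocab_size)

vocab = ["ball", "travel"]
beta = {t: math.log(smoothed_prob(counts_c, t, len(vocab)) /
                    smoothed_prob(counts_cbar, t, len(vocab)))
        for t in vocab}

# Score a document with counts {ball: 2, travel: 0}: a weighted sum of term counts.
x = {"ball": 2, "travel": 0}
score = sum(beta[t] * x[t] for t in vocab)
print(beta, score)  # positive score favors class c
```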

Which hyperplane? For linearly separable training sets there are infinitely many separating hyperplanes. They all separate the training set perfectly... but they behave differently on test data: error rates on new data are low for some, high for others. How do we find a low-error separator?

Plan (now: Support Vector Machines): Linear Classifiers, Support Vector Machines, Formulation, Theoretical Guarantees, Recap.

Support vector machines. Machine-learning research in the last two decades has improved classifier effectiveness. A new generation of state-of-the-art classifiers: support vector machines (SVMs), boosted decision trees, regularized logistic regression, neural networks, and random forests, with applications to IR problems, particularly text classification. SVMs are a kind of large-margin classifier: a vector-space-based machine-learning method aiming to find a decision boundary between two classes that is maximally far from any point in the training data (possibly discounting some points as outliers or noise).

Support vector machines. 2-class training data; decision boundary: a linear separator; criterion: being maximally far away from any data point determines the classifier margin; the linear separator's position is defined by the support vectors. [Figure: maximum-margin decision hyperplane, support vectors, and the maximized margin.]

Why maximize the margin? Points near the decision surface represent uncertain classification decisions (50% either way). A classifier with a large margin makes no low-certainty classification decisions, i.e. it is always confident, and it gives a classification safety margin with respect to slight errors in measurement or document variation.

Why maximize the margin? The SVM classifier puts a large margin around the decision boundary. Compared to a bare decision hyperplane, it places a fat separator between the classes. This gives a unique solution, decreased memory capacity, and increased ability to correctly generalize to test data.

Plan (now: Formulation): Linear Classifiers, Support Vector Machines, Formulation, Theoretical Guarantees, Recap.

Equations. Equation of a hyperplane:
$$\mathbf{w} \cdot \mathbf{x}_i + b = 0 \quad (1)$$
Distance of a point to the hyperplane:
$$\frac{|\mathbf{w} \cdot \mathbf{x}_i + b|}{\|\mathbf{w}\|} \quad (2)$$
The margin $\rho$ is given by
$$\rho = \min_{(\mathbf{x}, y) \in S} \frac{|\mathbf{w} \cdot \mathbf{x} + b|}{\|\mathbf{w}\|} = \frac{1}{\|\mathbf{w}\|} \quad (3)$$
This is because for any point on the marginal hyperplane, $\mathbf{w} \cdot \mathbf{x} + b = \pm 1$.
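
Here is a minimal numeric sketch of equations (2) and (3); the weight vector, bias, and sample points are made up for illustration:

```python
import numpy as np

w = np.array([2.0, 1.0])   # made-up weight vector
b = -1.0                   # made-up bias

def distance_to_hyperplane(x, w, b):
    """Distance of point x to the hyperplane w.x + b = 0 (equation 2)."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# Margin: minimum distance over a (made-up) sample S (equation 3).
S = [np.array([1.0, 1.0]), np.array([0.0, 0.0]), np.array([2.0, -1.0])]
rho = min(distance_to_hyperplane(x, w, b) for x in S)
print(rho, 1.0 / np.linalg.norm(w))  # equal here: the closest point has |w.x + b| = 1
```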

Optimization problem. We want to find a weight vector $\mathbf{w}$ and bias $b$ that optimize
$$\min_{\mathbf{w}, b} \ \frac{1}{2}\|\mathbf{w}\|^2 \quad (4)$$
subject to $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1$ for all $i \in [1, m]$. Next week: the algorithm.
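
The slides defer the training algorithm to next week; as a placeholder sketch, this fits a linear SVM on made-up toy data with scikit-learn, using a large C so the soft-margin solver approximates the hard-margin problem in equation (4):

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data (made up for illustration).
X = np.array([[3.0, 1.0], [2.0, 0.0], [1.0, 2.0], [0.0, 1.0]])
y = np.array([1, 1, -1, -1])

# Large C approximates the hard-margin objective min (1/2)||w||^2
# subject to y_i (w.x_i + b) >= 1.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("margin =", 1.0 / np.linalg.norm(w))
print("support vectors:", clf.support_vectors_)
```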

Plan (now: Theoretical Guarantees): Linear Classifiers, Support Vector Machines, Formulation, Theoretical Guarantees, Recap.

Three proofs that suggest SVMs will work: leave-one-out error, VC dimension, and margin analysis.

Leave-one-out error (sketch). The leave-one-out error is the error from using one point as your test set, averaged over all such points:
$$\hat{R}_{LOO} = \frac{1}{m} \sum_{i=1}^{m} 1\left[ h_{S \setminus \{x_i\}}(x_i) \neq y_i \right] \quad (5)$$
This serves as an unbiased estimate of the generalization error for samples of size $m-1$:
$$E_{S \sim D^m}\left[\hat{R}_{LOO}\right] = E_{S' \sim D^{m-1}}\left[R(h_{S'})\right] \quad (6)$$
Let $h_S$ be the hypothesis returned by SVMs for a separable sample $S$, and let $N_{SV}(S)$ be the number of support vectors that define $h_S$. Then
$$E_{S \sim D^m}\left[R(h_S)\right] \leq E_{S \sim D^{m+1}}\left[\frac{N_{SV}(S)}{m+1}\right] \quad (7)$$
Consider the held-out error for $x_i$: if $x_i$ was not a support vector, the answer doesn't change; if $x_i$ was a support vector, it could change the answer, and this is when we can have an error. There are $N_{SV}(S)$ support vectors and thus $N_{SV}(S)$ possible errors.
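
A minimal sketch (on made-up toy data) of the leave-one-out estimate in equation (5), compared against the support-vector ratio that appears in the bound (7):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
m = len(y)

errors = 0
for i in range(m):
    keep = np.arange(m) != i          # leave point i out
    clf = SVC(kernel="linear", C=1e6).fit(X[keep], y[keep])
    errors += clf.predict(X[i:i + 1])[0] != y[i]

full = SVC(kernel="linear", C=1e6).fit(X, y)
print("LOO error :", errors / m)
print("N_SV / m  :", len(full.support_) / m)   # the ratio appearing in bound (7)
```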

VC dimension argument. Remember the discussion of the VC dimension of $d$-dimensional hyperplanes? That applies here:
$$R(h) \leq \hat{R}(h) + \sqrt{\frac{2(d+1)\log\frac{em}{d+1}}{m}} + \sqrt{\frac{\log\frac{1}{\delta}}{2m}} \quad (8)$$
But this is useless when $d$ is large (e.g. for text).

Margin theory.

Margin loss function. To see where SVMs really shine, consider the margin loss for margin $\rho$:
$$\Phi_\rho(x) = \begin{cases} 0 & \text{if } \rho \leq x \\ 1 - x/\rho & \text{if } 0 \leq x \leq \rho \\ 1 & \text{if } x \leq 0 \end{cases} \quad (9)$$
The empirical margin loss of a hypothesis $h$ is
$$\hat{R}_\rho(h) = \frac{1}{m} \sum_{i=1}^{m} \Phi_\rho\big(y_i h(x_i)\big) \leq \frac{1}{m} \sum_{i=1}^{m} 1\left[ y_i h(x_i) \leq \rho \right] \quad (10)$$
This is the fraction of the points in the training sample $S$ that have been misclassified or classified with confidence less than $\rho$.
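
A minimal sketch of the margin loss in equation (9) and the empirical margin loss in equation (10); the margins y_i h(x_i) are made-up values:

```python
def margin_loss(x, rho):
    """Phi_rho from equation (9): 0 above the margin, linear ramp inside, 1 for errors."""
    if x >= rho:
        return 0.0
    if x <= 0:
        return 1.0
    return 1.0 - x / rho

def empirical_margin_loss(margins, rho):
    """Equation (10): average margin loss over the sample's margins y_i * h(x_i)."""
    return sum(margin_loss(m, rho) for m in margins) / len(margins)

margins = [2.0, 0.5, -0.3, 1.5]          # made-up values of y_i h(x_i)
print(empirical_margin_loss(margins, rho=1.0))
# Upper bound from (10): fraction of points with y_i h(x_i) <= rho
print(sum(m <= 1.0 for m in margins) / len(margins))
```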

Generalization. For linear classifiers $H = \{x \mapsto \mathbf{w} \cdot \mathbf{x} : \|\mathbf{w}\| \leq \Lambda\}$ and data $X \subseteq \{x : \|x\| \leq r\}$, fix $\rho > 0$. Then with probability at least $1 - \delta$, for any $h \in H$,
$$R(h) \leq \hat{R}_\rho(h) + 2\sqrt{\frac{r^2 \Lambda^2}{\rho^2 m}} + \sqrt{\frac{\log\frac{1}{\delta}}{2m}} \quad (11)$$
The bound is data-dependent: the data must be separable with a margin. Fortunately, many data sets do have good margin properties, and SVMs can find good classifiers in those instances.
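
A minimal numeric sketch of the slack terms in the bound (11), with made-up values of r, Λ, ρ, m, and δ, showing how the bound tightens as the margin ρ grows:

```python
import math

def margin_bound_slack(r, Lam, rho, m, delta):
    """The two terms added to the empirical margin loss in equation (11)."""
    margin_term = 2.0 * math.sqrt((r**2 * Lam**2) / (rho**2 * m))
    confidence_term = math.sqrt(math.log(1.0 / delta) / (2.0 * m))
    return margin_term + confidence_term

# Made-up values: unit-norm data, bounded weights, 10,000 training points.
for rho in [0.1, 0.5, 1.0]:
    print(rho, margin_bound_slack(r=1.0, Lam=10.0, rho=rho, m=10_000, delta=0.05))
```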

Plan (now: Recap): Linear Classifiers, Support Vector Machines, Formulation, Theoretical Guarantees, Recap.

Choosing what kind of classifier to use. When building a text classifier, the first question: how much training data is currently available? None? Hand-write rules or use active learning. Very little? Naïve Bayes. A fair amount? SVM. A huge amount? Doesn't matter; use whatever works.

SVM extensions: what's next. Finding solutions; slack variables (when no line separates the data perfectly); kernels (different geometries). [Figure: a dataset on the unit square that is not linearly separable.]
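
Slack variables and kernels are next lecture's topic; as a forward-looking sketch (not the lecture's method), this contrasts a soft-margin linear SVM with an RBF-kernel SVM on made-up data that no single line separates:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data on the unit square: XOR-like class structure,
# so no single line separates it (made up for illustration).
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))
y = np.where((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5), 1, -1)

linear = SVC(kernel="linear", C=1.0).fit(X, y)        # slack variables allow errors
rbf = SVC(kernel="rbf", C=1.0, gamma=2.0).fit(X, y)   # kernel changes the geometry

print("linear training accuracy:", linear.score(X, y))
print("rbf training accuracy   :", rbf.score(X, y))
```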