Machine Learning: Chenhao Tan University of Colorado Boulder LECTURE 9


Machine Learning: Chenhao Tan University of Colorado Boulder LECTURE 9 Slides adapted from Jordan Boyd-Graber Machine Learning: Chenhao Tan Boulder 1 of 39

Recap Supervised learning Previously: KNN, naïve Bayes, logistic regression and neural networks Machine Learning: Chenhao Tan Boulder 2 of 39

Recap Supervised learning Previously: KNN, naïve Bayes, logistic regression and neural networks This time: SVMs Motivated by geometric properties (another) example of linear classifier Good theoretical guarantee Machine Learning: Chenhao Tan Boulder 2 of 39

Take a step back Problem Model Algorithm/optimization Evaluation Machine Learning: Chenhao Tan Boulder 3 of 39

Take a step back Problem Model Algorithm/optimization Evaluation https://blog.twitter.com/official/en_us/topics/product/ 2017/Giving-you-more-characters-to-express-yourself.html Machine Learning: Chenhao Tan Boulder 3 of 39

Overview Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 4 of 39

Intuition Outline Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 5 of 39

Intuition Thinking Geometrically Suppose you have two classes: vacations and sports. Suppose you have four documents. Sports: Doc 1: {ball, ball, ball, travel}, Doc 2: {ball, ball}. Vacations: Doc 3: {travel, ball, travel}, Doc 4: {travel}. What does this look like in vector space? Machine Learning: Chenhao Tan Boulder 6 of 39

Intuition Put the documents in vector space [Figure: the four documents plotted with Ball counts on the x-axis (0 to 3) and Travel counts on the y-axis (0 to 3)] Sports: Doc 1: {ball, ball, ball, travel}, Doc 2: {ball, ball}. Vacations: Doc 3: {travel, ball, travel}, Doc 4: {travel}. Machine Learning: Chenhao Tan Boulder 7 of 39
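To make the vector-space picture concrete, here is a minimal sketch (plain Python; the representation with only the two terms ball and travel as axes follows the slide, everything else is illustrative) that turns the four documents into count vectors:

```python
from collections import Counter

# The four documents from the slide, as bags of words.
docs = {
    "Doc 1 (Sports)":    ["ball", "ball", "ball", "travel"],
    "Doc 2 (Sports)":    ["ball", "ball"],
    "Doc 3 (Vacations)": ["travel", "ball", "travel"],
    "Doc 4 (Vacations)": ["travel"],
}

terms = ["ball", "travel"]  # the two axes of the vector space

for name, words in docs.items():
    counts = Counter(words)
    vector = [counts[t] for t in terms]  # one component per term
    print(name, vector)
# Doc 1 -> [3, 1], Doc 2 -> [2, 0], Doc 3 -> [1, 2], Doc 4 -> [0, 1]
```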

Intuition Vector space representation of documents Each document is a vector, one component for each term. Terms are axes. High dimensionality: 10,000s of dimensions and more. How can we do classification in this space? Machine Learning: Chenhao Tan Boulder 8 of 39

Intuition Vector space classification As before, the training set is a set of documents, each labeled with its class. In vector space classification, this set corresponds to a labeled set of points or vectors in the vector space. Premise 1: Documents in the same class form a contiguous region. Premise 2: Documents from different classes don't overlap. We define lines, surfaces, hypersurfaces to divide regions. Machine Learning: Chenhao Tan Boulder 9 of 39

Intuition Classes in the vector space [Figure: documents from three classes, UK, China, and Kenya, plotted as points in the vector space, together with a new, unlabeled document] Should the new document be assigned to China, UK, or Kenya? Find separators between the classes. Based on these separators: the new document should be assigned to China. How do we find separators that do a good job at classifying new documents like this one? Main topic of today. Machine Learning: Chenhao Tan Boulder 10 of 39

Linear Classifiers Outline Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 11 of 39

Linear Classifiers Linear classifiers Definition: A linear classifier computes a linear combination or weighted sum Σ_j β_j x_j of the feature values. Classification decision: Σ_j β_j x_j > β_0?, where β_0 (the threshold, our bias) is a parameter. We call this the separator or decision boundary. We find the separator based on the training set. Methods for finding the separator: logistic regression, naïve Bayes, linear SVM. Assumption: The classes are linearly separable. Before, we just talked about equations. What's the geometric intuition? Machine Learning: Chenhao Tan Boulder 12 of 39
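A minimal sketch of the decision rule above (plain Python; the function name `predict` and the toy weights are illustrative, not from the slides): compute the weighted sum Σ_j β_j x_j and compare it to the threshold β_0.

```python
def predict(beta, beta0, x):
    """Linear classifier: class c if sum_j beta_j * x_j > beta_0, else the complement class."""
    score = sum(b * xj for b, xj in zip(beta, x))
    return "c" if score > beta0 else "not c"

# Toy example: weights over the two features (ball, travel), threshold 0.
print(predict(beta=[1.0, -1.0], beta0=0.0, x=[3, 1]))  # ball-heavy doc -> "c"
print(predict(beta=[1.0, -1.0], beta0=0.0, x=[0, 1]))  # travel-heavy doc -> "not c"
```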

Linear Classifiers A linear classifier in 1D A linear classifier in 1D is a point x described by the equation β_1 x_1 = β_0, i.e., x = β_0 / β_1. Points (x_1) with β_1 x_1 ≥ β_0 are in the class c. Points (x_1) with β_1 x_1 < β_0 are in the complement class c̄. Machine Learning: Chenhao Tan Boulder 13 of 39

Linear Classifiers A linear classifier in 2D A linear classifier in 2D is a line described by the equation β_1 x_1 + β_2 x_2 = β_0. Example for a 2D linear classifier [figure]. Points (x_1, x_2) with β_1 x_1 + β_2 x_2 ≥ β_0 are in the class c. Points (x_1, x_2) with β_1 x_1 + β_2 x_2 < β_0 are in the complement class c̄. Machine Learning: Chenhao Tan Boulder 14 of 39

Linear Classifiers A linear classifier in 3D A linear classifier in 3D is a plane described by the equation β_1 x_1 + β_2 x_2 + β_3 x_3 = β_0. Example for a 3D linear classifier [figure]. Points (x_1, x_2, x_3) with β_1 x_1 + β_2 x_2 + β_3 x_3 ≥ β_0 are in the class c. Points (x_1, x_2, x_3) with β_1 x_1 + β_2 x_2 + β_3 x_3 < β_0 are in the complement class c̄. Machine Learning: Chenhao Tan Boulder 15 of 39

Linear Classifiers Naive Bayes and Logistic Regression as linear classifiers Takeaway Naïve Bayes, logistic regression and SVM are all linear methods. They choose their hyperplanes based on different objectives: joint likelihood (NB), conditional likelihood (LR), and the margin (SVM). Machine Learning: Chenhao Tan Boulder 16 of 39

Linear Classifiers Which hyperplane? For linearly separable training sets: there are infinitely many separating hyperplanes. They all separate the training set perfectly... but they behave differently on test data. Error rates on new data are low for some, high for others. How do we find a low-error separator? Machine Learning: Chenhao Tan Boulder 17 of 39

Support Vector Machines Outline Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 18 of 39

Support Vector Machines Support vector machines Machine-learning research in the last two decades has improved classifier effectiveness. New generation of state-of-the-art classifiers: support vector machines (SVMs), boosted decision trees, regularized logistic regression, neural networks, and random forests. Applications to IR problems, particularly text classification. SVMs: a kind of large-margin classifier; a vector-space-based machine-learning method aiming to find a decision boundary between two classes that is maximally far from any point in the training data (possibly discounting some points as outliers or noise). Machine Learning: Chenhao Tan Boulder 19 of 39

Support Vector Machines Support Vector Machines 2-class training data. Decision boundary: a linear separator. Criterion: being maximally far away from any data point; this distance determines the classifier margin. The linear separator's position is defined by the support vectors. [Figure: maximum margin decision hyperplane, support vectors, margin is maximized] Machine Learning: Chenhao Tan Boulder 20 of 39

Support Vector Machines Why maximize the margin? Points near the decision surface represent uncertain classification decisions (50% either way). A classifier with a large margin makes no low-certainty classification decisions; it is always confident. It gives a classification safety margin with respect to slight errors in measurement or document variation. [Figure: maximum margin decision hyperplane, support vectors, margin is maximized] Machine Learning: Chenhao Tan Boulder 21 of 39

Support Vector Machines Why maximize the margin? SVM classifier: large margin around the decision boundary. Compared to a thin decision hyperplane, we place a fat separator between the classes: unique solution, decreased memory capacity, increased ability to correctly generalize to test data. Machine Learning: Chenhao Tan Boulder 22 of 39

Formulation Outline Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 23 of 39

Formulation Equation Equation of a hyperplane: w · x + b = 0 (1) Distance of a point to the hyperplane: |w · x + b| / ||w|| (2) The margin ρ is given by ρ = min_{(x,y) ∈ S} |w · x + b| / ||w|| = 1 / ||w|| (3) This is because for any point on the marginal hyperplane, w · x + b = ±1, and we would like to maximize the margin ρ. Machine Learning: Chenhao Tan Boulder 24 of 39
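A small numeric sketch of Equations (1)–(3) using NumPy (the weight vector below matches the later worked example, but the set of training points is made up for illustration): the distance of a point to the hyperplane is |w · x + b| / ||w||, and the margin is the smallest such distance over the training set.

```python
import numpy as np

w = np.array([0.4, 0.8])   # weight vector (illustrative values)
b = -2.2                   # bias

def distance(x):
    """Distance of point x to the hyperplane w . x + b = 0."""
    return abs(w @ x + b) / np.linalg.norm(w)

# Margin = minimum distance over the training points (hypothetical sample).
points = np.array([[1.0, 1.0], [2.0, 3.0], [3.0, 3.0]])
rho = min(distance(x) for x in points)
print(rho, 1.0 / np.linalg.norm(w))  # for canonical w, b these agree: sqrt(5)/2 ≈ 1.118
```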

Formulation Optimization Problem We want to find a weight vector w and bias b that optimize min_{w,b} (1/2) ||w||² subject to y_i (w · x_i + b) ≥ 1, for all i ∈ [1, m]. (4) Next week: algorithm Machine Learning: Chenhao Tan Boulder 25 of 39
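The slides defer the optimization algorithm to next week; as a sketch only (assuming scikit-learn is available, and that a very large penalty C approximates the hard-margin objective), an off-the-shelf solver can handle the quadratic program in (4):

```python
import numpy as np
from sklearn.svm import SVC

# Linearly separable toy data (illustrative, not the slide's figure).
X = np.array([[1.0, 1.0], [0.5, 0.5], [2.0, 3.0], [3.0, 3.0]])
y = np.array([-1, -1, +1, +1])

# Very large C approximates the hard-margin objective: minimize (1/2)||w||^2
# subject to y_i (w . x_i + b) >= 1 for every training point.
clf = SVC(kernel="linear", C=1e10).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("constraints:", y * (X @ w + b))  # all values should be >= 1 (up to tolerance)
```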

Formulation Find the maximum margin hyperplane [Figure: labeled points plotted on axes running from 0 to 3] Which are the support vectors? Machine Learning: Chenhao Tan Boulder 26 of 39

Formulation Walkthrough example: building an SVM over the data shown Working geometrically: if you got 0 = .5x + y − 2.75, close! Set up a system of equations: w_1 + w_2 + b = −1 (5), (3/2) w_1 + 2 w_2 + b = 0 (6), 2 w_1 + 3 w_2 + b = +1 (7). The SVM decision boundary is: 0 = (2/5) x + (4/5) y − 11/5. Machine Learning: Chenhao Tan Boulder 27 of 39
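One detail worth noting: equations (5)–(7) by themselves are underdetermined (subtracting (5) from (6) and (6) from (7) gives the same constraint), and the SVM objective of minimizing ||w|| is what picks out w = (2/5, 4/5), b = −11/5 from that family. A sketch that checks the result numerically (assuming scikit-learn, and reading the two support vectors off the worked example as (1, 1) with label −1 and (2, 3) with label +1):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 3.0]])  # the two support vectors from the walkthrough
y = np.array([-1, +1])

clf = SVC(kernel="linear", C=1e10).fit(X, y)  # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)  # expected: w ≈ [0.4, 0.8], b ≈ -2.2, i.e. 0 = (2/5)x + (4/5)y - 11/5
```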

Formulation Canonical Form [Figure: the data with the region w_1 x_1 + w_2 x_2 + b ≥ 0, here .4 x_1 + .8 x_2 − 2.2 ≥ 0] Checking the canonical form on the three key points: .4(1) + .8(1) − 2.2 = −1, .4(3/2) + .8(2) − 2.2 = 0, .4(2) + .8(3) − 2.2 = +1. Machine Learning: Chenhao Tan Boulder 28 of 39

Formulation What's the margin? Distance to the closest point: √((3/2 − 1)² + (2 − 1)²) = √5 / 2 (8) Equivalently, via the weight vector: 1 / ||w|| = 1 / √((2/5)² + (4/5)²) = 1 / √(20/25) = 5 / (2√5) = √5 / 2 (9) Machine Learning: Chenhao Tan Boulder 29 of 39
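Both ways of computing the margin can be checked in a couple of lines (NumPy; the numbers are taken from the worked example above):

```python
import numpy as np

# Distance from the support vector (1, 1) to the boundary point (3/2, 2), Eq. (8).
margin_geometric = np.sqrt((3/2 - 1)**2 + (2 - 1)**2)

# 1 / ||w|| for the canonical weight vector w = (2/5, 4/5), Eq. (9).
margin_from_w = 1.0 / np.linalg.norm([2/5, 4/5])

print(margin_geometric, margin_from_w)  # both are sqrt(5)/2 ≈ 1.118
```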

Theoretical Guarantees Outline Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 30 of 39

Theoretical Guarantees Three Proofs that Suggest SVMs will Work Leave-one-out error Margin analysis VC Dimension (omitted for now) Machine Learning: Chenhao Tan Boulder 31 of 39

Theoretical Guarantees Leave One Out Error (sketch) Leave-one-out error is the error obtained by using one point as your test set (averaged over all such points): R̂_LOO = (1/m) Σ_{i=1}^m 1[ h_{S − {x_i}}(x_i) ≠ y_i ] (10) This serves as an unbiased estimate of the generalization error for samples of size m − 1. Machine Learning: Chenhao Tan Boulder 32 of 39

Theoretical Guarantees Leave One Out Error (sketch) Leave-one-out error is bounded by the number of support vectors: E_{S ∼ D^{m−1}}[ R(h_S) ] ≤ E_{S ∼ D^m}[ N_SV(S) / m ] (11) Consider the held-out error for x_i. If x_i was not a support vector, the answer doesn't change. If x_i was a support vector, it could change the answer; this is when we can have an error. There are N_SV support vectors and thus N_SV possible errors. Machine Learning: Chenhao Tan Boulder 33 of 39
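A sketch of what (11) says in practice (assuming scikit-learn; the dataset and model settings are illustrative): compute the leave-one-out error of a linear SVM and compare it with N_SV / m. The bound holds in expectation over samples (and the argument above is for the separable case), so on a single dataset the comparison is only illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
m = len(y)

clf = SVC(kernel="linear", C=1.0)

# Leave-one-out error: average error using each point once as the test set.
loo_error = 1.0 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# Number of support vectors of the model trained on the full sample.
n_sv = clf.fit(X, y).support_vectors_.shape[0]

print(f"LOO error = {loo_error:.3f},  N_SV / m = {n_sv / m:.3f}")
```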

Theoretical Guarantees Margin Theory Machine Learning: Chenhao Tan Boulder 34 of 39

Theoretical Guarantees Margin Loss Function To see where SVMs really shine, consider the margin loss with margin ρ: Φ_ρ(x) = 0 if x ≥ ρ; 1 − x/ρ if 0 ≤ x ≤ ρ; 1 if x ≤ 0 (12) The empirical margin loss of a hypothesis h is R̂_ρ(h) = (1/m) Σ_{i=1}^m Φ_ρ(y_i h(x_i)) ≤ (1/m) Σ_{i=1}^m 1[ y_i h(x_i) ≤ ρ ] (13) This is the fraction of the points in the training sample S that have been misclassified or classified with confidence less than ρ. Machine Learning: Chenhao Tan Boulder 35 of 39
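A direct transcription of Equations (12)–(13) as a sketch (plain Python; the hypothesis h and the sample below are placeholders chosen to match the earlier worked example):

```python
def margin_loss(x, rho):
    """Phi_rho(x): 0 if x >= rho, 1 - x/rho if 0 <= x <= rho, 1 if x <= 0."""
    if x >= rho:
        return 0.0
    if x <= 0:
        return 1.0
    return 1.0 - x / rho

def empirical_margin_loss(h, X, y, rho):
    """R_hat_rho(h) = (1/m) * sum_i Phi_rho(y_i * h(x_i))."""
    return sum(margin_loss(yi * h(xi), rho) for xi, yi in zip(X, y)) / len(y)

# Toy usage: the fixed linear hypothesis h(x) = 0.4*x[0] + 0.8*x[1] - 2.2.
h = lambda x: 0.4 * x[0] + 0.8 * x[1] - 2.2
X = [(1, 1), (2, 3), (3, 3)]
y = [-1, +1, +1]
print(empirical_margin_loss(h, X, y, rho=1.0))  # 0.0: every point has margin >= 1
```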

Theoretical Guarantees Generalization For linear classifiers H = {x ↦ w · x : ||w|| ≤ Λ} and data X ⊆ {x : ||x|| ≤ r}, fix ρ > 0; then with probability at least 1 − δ, for any h ∈ H, R(h) ≤ R̂_ρ(h) + 2 √( r² Λ² / (ρ² m) ) + √( log(1/δ) / (2m) ) (14) Data-dependent: the data must be separable with a margin. Fortunately, many datasets do have good margin properties, and SVMs can find good classifiers in those instances. Machine Learning: Chenhao Tan Boulder 36 of 39
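The bound in (14) is easy to evaluate for concrete numbers. A sketch (all numeric values below are made up for illustration, not from the slides):

```python
import math

def margin_bound(emp_margin_loss, r, Lam, rho, m, delta):
    """R(h) <= R_hat_rho(h) + 2*sqrt(r^2 * Lam^2 / (rho^2 * m)) + sqrt(log(1/delta) / (2*m))."""
    return (emp_margin_loss
            + 2.0 * math.sqrt(r**2 * Lam**2 / (rho**2 * m))
            + math.sqrt(math.log(1.0 / delta) / (2.0 * m)))

# Illustrative numbers: zero empirical margin loss, radius r = 1, norm bound Lambda = 10,
# margin rho = 1, m = 10,000 training points, confidence 1 - delta = 0.95.
print(margin_bound(0.0, r=1.0, Lam=10.0, rho=1.0, m=10_000, delta=0.05))  # ≈ 0.21
```

Note how the bound shrinks as the margin ρ grows or the sample size m increases, which is the sense in which large margins help generalization.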

Recap Outline Intuition Linear Classifiers Support Vector Machines Formulation Theoretical Guarantees Recap Machine Learning: Chenhao Tan Boulder 37 of 39

Recap Choosing what kind of classifier to use When building a text classifier, first question: how much training data is there currently available? Before neural networks: None? Very little? A fair amount? A huge amount Machine Learning: Chenhao Tan Boulder 38 of 39

Recap Choosing what kind of classifier to use When building a text classifier, first question: how much training data is there currently available? Before neural networks: None? Hand write rules or collect more data (possibly with active learning) Very little? A fair amount? A huge amount Machine Learning: Chenhao Tan Boulder 38 of 39

Recap Choosing what kind of classifier to use When building a text classifier, first question: how much training data is there currently available? Before neural networks: None? Hand write rules or collect more data (possibly with active learning) Very little? Naïve Bayes A fair amount? A huge amount Machine Learning: Chenhao Tan Boulder 38 of 39

Recap Choosing what kind of classifier to use When building a text classifier, first question: how much training data is there currently available? Before neural networks: None? Hand write rules or collect more data (possibly with active learning) Very little? Naïve Bayes A fair amount? SVM A huge amount Machine Learning: Chenhao Tan Boulder 38 of 39

Recap Choosing what kind of classifier to use When building a text classifier, first question: how much training data is there currently available? Before neural networks: None? Hand write rules or collect more data (possibly with active learning) Very little? Naïve Bayes A fair amount? SVM A huge amount Doesn t matter, use whatever works Machine Learning: Chenhao Tan Boulder 38 of 39

Recap Choosing what kind of classifier to use When building a text classifier, first question: how much training data is there currently available? With neural networks: None? Hand write rules or collect more data (possibly with active learning) Very little? Naïve Bayes, consider using pre-trained representation/model with neural networks A fair amount? SVM, consider using pre-trained representation/model with neural networks A huge amount Probably neural networks Trade-off between interpretability and model performance Machine Learning: Chenhao Tan Boulder 38 of 39

Recap SVM extensions: What's next Finding solutions. Slack variables: when the data cannot be separated perfectly by a line. Kernels: different geometries. Machine Learning: Chenhao Tan Boulder 39 of 39
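As a preview of those extensions (a sketch assuming scikit-learn, not part of these slides): in `sklearn.svm.SVC`, the penalty `C` controls the slack (soft margin) and the `kernel`/`gamma` options change the geometry of the decision boundary.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # not linearly separable

soft_linear = SVC(kernel="linear", C=0.1).fit(X, y)  # slack: allows margin violations
rbf = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X, y)  # kernel: nonlinear decision boundary

print("linear train accuracy:", soft_linear.score(X, y))
print("rbf train accuracy:   ", rbf.score(X, y))
```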