Support vector machines

In a nutshell
- Linear classifiers that select the hyperplane maximizing the separation margin between classes (large margin classifiers)
- The solution depends only on a small subset of training examples (support vectors)
- Sound generalization theory (bounds on error based on the margin)
- Easily extended to nonlinear separation (kernel machines)

Maximum margin classifier

Classifier margin
Given a training set D, the confidence margin of a classifier is:
\rho = \min_{(x,y) \in D} y f(x)
It is the minimal confidence margin (for predicting the true label) among training examples.

The geometric margin of a classifier is:
\frac{\rho}{\|w\|} = \min_{(x,y) \in D} \frac{y f(x)}{\|w\|}

Canonical hyperplane
There are infinitely many equivalent formulations of the same hyperplane:
w^T x + w_0 = 0
\alpha (w^T x + w_0) = 0 \quad \forall \alpha \neq 0
The canonical hyperplane is the hyperplane having confidence margin equal to 1:
\rho = \min_{(x,y) \in D} y f(x) = 1
Its geometric margin is:
\frac{\rho}{\|w\|} = \frac{1}{\|w\|}
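A minimal sketch (not from the slides) of how the confidence and geometric margins can be computed for a fixed linear classifier; it assumes numpy is available, and the data and the candidate (w, w_0) are chosen purely for illustration.

    # Confidence margin rho = min_(x,y) y f(x) and geometric margin rho / ||w||
    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w = np.array([1.0, 1.0])   # a candidate weight vector (illustrative)
    w0 = 0.0                   # bias

    f = X @ w + w0                                            # f(x) = w^T x + w0
    confidence_margin = np.min(y * f)                         # rho
    geometric_margin = confidence_margin / np.linalg.norm(w)  # rho / ||w||
    print(confidence_margin, geometric_margin)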

Theorem 1 (Margin Error Bound). Consider the set of decision functions f(x) = sign(w^T x) with \|w\| \leq \Lambda and \|x\| \leq R, for some R, \Lambda > 0. Moreover, let \rho > 0 and let \nu denote the fraction of training examples with margin smaller than \rho / \|w\|, referred to as the margin error. For all distributions P generating the data, with probability at least 1 - \delta over the drawing of the m training patterns, and for any \rho > 0 and \delta \in (0, 1), the probability that a test pattern drawn from P will be misclassified is

bounded from above by
\nu + \sqrt{\frac{c}{m} \left( \frac{R^2 \Lambda^2}{\rho^2} \ln^2 m + \ln(1/\delta) \right)}
Here, c is a universal constant.

Margin Error Bound: interpretation
\nu + \sqrt{\frac{c}{m} \left( \frac{R^2 \Lambda^2}{\rho^2} \ln^2 m + \ln(1/\delta) \right)}
The probability of test error depends on (among other components):
- the number of margin errors \nu (examples with margin smaller than \rho / \|w\|)
- the number of training examples (the error depends on \frac{\ln^2 m}{m})
- the size of the margin (the error depends on 1/\rho^2)

Note: if \rho is fixed to 1 (canonical hyperplane), maximizing the margin corresponds to minimizing \|w\|.

Learning problem
\min_{w, w_0} \frac{1}{2} \|w\|^2
subject to:
y_i (w^T x_i + w_0) \geq 1 \quad \forall (x_i, y_i) \in D

Note
- the constraints guarantee that all points are correctly classified (plus canonical form)
- the minimization corresponds to maximizing the (squared) margin
- quadratic optimization problem (the objective is quadratic, and the points satisfying the constraints form a convex set)
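A minimal sketch (not from the slides) of this hard-margin learning problem solved directly as a quadratic program; it assumes the cvxpy library is installed, and the toy dataset is chosen to be linearly separable.

    # Hard-margin primal: minimize 1/2 ||w||^2 s.t. y_i (w^T x_i + w0) >= 1
    import cvxpy as cp
    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    m, d = X.shape

    w = cp.Variable(d)
    w0 = cp.Variable()
    objective = cp.Minimize(0.5 * cp.sum_squares(w))
    constraints = [cp.multiply(y, X @ w + w0) >= 1]
    cp.Problem(objective, constraints).solve()
    print(w.value, w0.value)   # maximum margin hyperplane for the toy data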

Digression: constrained optimization

Karush-Kuhn-Tucker (KKT) approach
A constrained optimization problem can be addressed by converting it into an unconstrained problem with the same solution. Consider a constrained optimization problem:
\min_z f(z)
subject to:
g_i(z) \geq 0 \quad \forall i
Let us introduce a non-negative variable \alpha_i \geq 0 (called a Lagrange multiplier) for each constraint and rewrite the optimization problem as (Lagrangian):
\min_z \max_{\alpha \geq 0} f(z) - \sum_i \alpha_i g_i(z)

The optimal solutions z^* of this problem are the same as the optimal solutions of the original (constrained) problem:
- If for a given z' at least one constraint is not satisfied, i.e. g_i(z') < 0 for some i, maximizing over \alpha_i leads to an infinite value, so z' cannot be a minimum (unless the problem has no finite minimum).
- If all constraints are satisfied (i.e. g_i(z') \geq 0 for all i), maximization over \alpha sets all terms of the summation to zero, so that z^* is a solution of \min_z f(z).

Karush-Kuhn-Tucker (KKT) approach applied to the SVM learning problem:
\min_{w, w_0} \frac{1}{2} \|w\|^2
subject to:
y_i (w^T x_i + w_0) \geq 1 \quad \forall (x_i, y_i) \in D
The constraints can be included in the minimization using Lagrange multipliers \alpha_i \geq 0 (with m = |D|):
L(w, w_0, \alpha) = \frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 \right)
The Lagrangian is minimized with respect to w, w_0 and maximized with respect to \alpha_i (the solution is a saddle point).
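Before moving to the dual, a small numerical illustration (not from the slides) of the Lagrangian min-max idea above on a one-dimensional toy problem: minimize f(z) = z^2 subject to g(z) = z - 1 >= 0. Stationarity (2z - alpha = 0) and complementarity (alpha (z - 1) = 0) give z* = 1, alpha* = 2; the sketch checks this against a brute-force search and assumes only numpy.

    import numpy as np

    f = lambda z: z ** 2
    g = lambda z: z - 1.0

    zs = np.linspace(-3.0, 3.0, 100001)
    feasible = zs[g(zs) >= 0]                  # points satisfying the constraint
    z_star = feasible[np.argmin(f(feasible))]  # constrained minimizer by brute force

    alpha_star = 2.0 * z_star                  # from stationarity: alpha = 2 z
    print(z_star, alpha_star)                  # approximately 1.0 and 2.0
    print(alpha_star * g(z_star))              # complementarity: approximately 0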

Dual formulation
L(w, w_0, \alpha) = \frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 \right)
Setting the derivatives with respect to the primal variables to zero we get:
\frac{\partial}{\partial w_0} L(w, w_0, \alpha) = 0 \;\Rightarrow\; \sum_{i=1}^m \alpha_i y_i = 0
\nabla_w L(w, w_0, \alpha) = 0 \;\Rightarrow\; w = \sum_{i=1}^m \alpha_i y_i x_i

Substituting into the Lagrangian we get:
\frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 \right)
= \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j - \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j - w_0 \underbrace{\sum_{i=1}^m \alpha_i y_i}_{=0} + \sum_{i=1}^m \alpha_i
= \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j = L(\alpha)
which is to be maximized with respect to the dual variables \alpha.

Dual formulation
\max_{\alpha \in \mathbb{R}^m} \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j
subject to:
\alpha_i \geq 0 \quad i = 1, \dots, m
\sum_{i=1}^m \alpha_i y_i = 0
The resulting maximization problem incorporates the constraints and is still a quadratic optimization problem.
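A minimal sketch (not from the slides) of the dual problem solved numerically; it assumes cvxpy is installed, uses the same toy dataset as before, and adds a tiny ridge to the Gram matrix only so the solver recognizes it as positive semidefinite.

    # Dual: maximize sum(alpha) - 1/2 (alpha*y)^T K (alpha*y), alpha >= 0, sum alpha_i y_i = 0
    import cvxpy as cp
    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    m = X.shape[0]

    K = X @ X.T + 1e-8 * np.eye(m)   # Gram matrix of dot products (tiny ridge for numerics)
    alpha = cp.Variable(m)
    objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.quad_form(cp.multiply(alpha, y), K))
    constraints = [alpha >= 0, cp.sum(cp.multiply(alpha, y)) == 0]
    cp.Problem(objective, constraints).solve()

    w = X.T @ (alpha.value * y)      # recover w = sum_i alpha_i y_i x_i
    print(alpha.value, w)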

Note
- The dual formulation has simpler constraints (box constraints), and is easier to solve
- The primal formulation has d + 1 variables (number of features + 1):
  \min_{w, w_0} \frac{1}{2} \|w\|^2
- The dual formulation has m variables (number of training examples):
  \max_{\alpha \in \mathbb{R}^m} \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j
- One can choose the primal formulation if it has far fewer variables (problem dependent)

Decision function
Substituting w = \sum_{i=1}^m \alpha_i y_i x_i in the decision function we get:
f(x) = w^T x + w_0 = \sum_{i=1}^m \alpha_i y_i x_i^T x + w_0
- The decision function is a linear combination of dot products between the training points and the test point
- The dot product is a kind of similarity between points
- The weights of the combination are \alpha_i y_i: a large \alpha_i implies a large contribution towards class y_i (times the similarity)

Karush-Kuhn-Tucker conditions (KKT)
L(w, w_0, \alpha) = \frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 \right)
At the saddle point it holds for all i that:
\alpha_i \left( y_i (w^T x_i + w_0) - 1 \right) = 0
Thus, either the example does not contribute to the final f(x):
\alpha_i = 0
or the example stays on the minimal confidence hyperplane from the decision one:
y_i (w^T x_i + w_0) = 1
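A minimal sketch (not from the slides) of the decision function written as a weighted sum of dot products with the training points; it assumes numpy, and the alpha values and bias below are arbitrary placeholders rather than an actual SVM solution.

    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    alpha = np.array([0.3, 0.1, 0.25, 0.15])   # placeholder dual variables (illustrative)
    w0 = -0.2                                  # placeholder bias (illustrative)

    w = X.T @ (alpha * y)                      # w = sum_i alpha_i y_i x_i
    x_test = np.array([1.5, 0.5])

    f_primal = w @ x_test + w0                      # f(x) = w^T x + w0
    f_dual = np.sum(alpha * y * (X @ x_test)) + w0  # f(x) = sum_i alpha_i y_i x_i^T x + w0
    print(f_primal, f_dual)                         # identical up to floating point error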

Support vectors
- Points staying on the minimal confidence hyperplanes are called support vectors
- All other points do not contribute to the final decision function (i.e. they could be removed from the training set)
- SVMs are sparse, i.e. they typically have few support vectors

Decision function bias
The bias w_0 can be computed from the KKT conditions. Given an arbitrary support vector x_i (with \alpha_i > 0), the KKT conditions imply:
y_i (w^T x_i + w_0) = 1
y_i w^T x_i + y_i w_0 = 1
w_0 = \frac{1 - y_i w^T x_i}{y_i}
For robustness, the bias is usually averaged over all support vectors.
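A minimal sketch (not from the slides) of recovering the bias from the KKT conditions and averaging it over the support vectors; it uses scikit-learn's SVC (mentioned in the software section below) with a linear kernel and a large C as an approximation of the hard-margin SVM.

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])

    clf = SVC(kernel="linear", C=1e6).fit(X, y)

    w = clf.coef_.ravel()                  # w = sum_i alpha_i y_i x_i
    sv = clf.support_                      # indices of the support vectors (alpha_i > 0)
    # KKT: for each support vector, w0 = (1 - y_i w^T x_i) / y_i; average for robustness
    w0_estimates = (1.0 - y[sv] * (X[sv] @ w)) / y[sv]
    print(w0_estimates.mean(), clf.intercept_[0])   # the two should closely agree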

Soft margin SVM

Slack variables

\min_{w \in X, w_0 \in \mathbb{R}, \xi \in \mathbb{R}^m} \frac{1}{2} \|w\|^2 + C \sum_{i=1}^m \xi_i
subject to:
y_i (w^T x_i + w_0) \geq 1 - \xi_i \quad i = 1, \dots, m
\xi_i \geq 0 \quad i = 1, \dots, m
- A slack variable \xi_i represents the penalty for example x_i not satisfying the margin constraint
- The sum of the slacks is minimized together with the inverse margin
- The regularization parameter C \geq 0 trades off data fitting and size of the margin

Regularization theory
Regularized loss minimization problem:
\min_{w \in X, w_0 \in \mathbb{R}, \xi \in \mathbb{R}^m} \frac{1}{2} \|w\|^2 + C \sum_{i=1}^m \ell(y_i, f(x_i))
- The loss term \ell(y_i, f(x_i)) accounts for error minimization
- The margin maximization term accounts for regularization, i.e. solutions with larger margin are preferred

Note
- Regularization is a standard approach to prevent overfitting
- It corresponds to a prior for simpler (more regular, smoother) solutions
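A minimal sketch (not from the slides) of the soft-margin primal problem with explicit slack variables; it assumes cvxpy is installed, and the last example is deliberately mislabeled so that some slack is required.

    import cvxpy as cp
    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0], [1.5, 1.5]])
    y = np.array([1.0, 1.0, -1.0, -1.0, -1.0])   # last example is an outlier
    m, d = X.shape
    C = 1.0

    w = cp.Variable(d)
    w0 = cp.Variable()
    xi = cp.Variable(m)
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    constraints = [cp.multiply(y, X @ w + w0) >= 1 - xi, xi >= 0]
    cp.Problem(objective, constraints).solve()
    print(w.value, w0.value, xi.value)   # the outlier gets a positive slack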

Hinge loss
\ell(y_i, f(x_i)) = |1 - y_i f(x_i)|_+ = |1 - y_i (w^T x_i + w_0)|_+
where |z|_+ = z if z > 0 and 0 otherwise (positive part)
- It corresponds to the slack variable \xi_i (violation of the margin constraint)
- All examples not violating the margin constraint have zero loss (sparse set of support vectors)

Lagrangian
L = C \sum_{i=1}^m \xi_i + \frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 + \xi_i \right) - \sum_{i=1}^m \beta_i \xi_i
where \alpha_i \geq 0 and \beta_i \geq 0. Setting the derivatives with respect to the primal variables to zero we get:
\frac{\partial L}{\partial w_0} = 0 \;\Rightarrow\; \sum_{i=1}^m \alpha_i y_i = 0
\nabla_w L = 0 \;\Rightarrow\; w = \sum_{i=1}^m \alpha_i y_i x_i
\frac{\partial L}{\partial \xi_i} = 0 \;\Rightarrow\; C - \alpha_i - \beta_i = 0
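A minimal sketch (not from the slides) of the hinge loss computed with numpy for a fixed classifier, showing that examples satisfying the margin constraint incur zero loss; the data and (w, w_0) are illustrative.

    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0], [0.2, -0.1]])
    y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
    w = np.array([1.0, 1.0])     # a fixed linear classifier (illustrative)
    w0 = 0.0

    f = X @ w + w0
    hinge = np.maximum(0.0, 1.0 - y * f)   # |1 - y f(x)|_+ , i.e. the optimal slack xi_i
    print(hinge)   # zero for examples outside the margin, positive for violations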

Soft margin SVM: dual formulation
Substituting into the Lagrangian we get:
C \sum_{i=1}^m \xi_i + \frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 + \xi_i \right) - \sum_{i=1}^m \beta_i \xi_i
= \underbrace{\sum_{i=1}^m \xi_i (C - \alpha_i - \beta_i)}_{=0} + \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j - \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j - w_0 \underbrace{\sum_{i=1}^m \alpha_i y_i}_{=0} + \sum_{i=1}^m \alpha_i
= \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j = L(\alpha)

Dual formulation
\max_{\alpha \in \mathbb{R}^m} \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j
subject to:
0 \leq \alpha_i \leq C \quad i = 1, \dots, m
\sum_{i=1}^m \alpha_i y_i = 0
The box constraint on \alpha_i comes from C - \alpha_i - \beta_i = 0 (and the fact that both \alpha_i \geq 0 and \beta_i \geq 0).

Karush-Kuhn-Tucker conditions (KKT)
L = C \sum_{i=1}^m \xi_i + \frac{1}{2} \|w\|^2 - \sum_{i=1}^m \alpha_i \left( y_i (w^T x_i + w_0) - 1 + \xi_i \right) - \sum_{i=1}^m \beta_i \xi_i
At the saddle point it holds for all i that:
\alpha_i \left( y_i (w^T x_i + w_0) - 1 + \xi_i \right) = 0
\beta_i \xi_i = 0
Thus, support vectors (\alpha_i > 0) are examples for which y_i (w^T x_i + w_0) \leq 1.
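A minimal sketch (not from the slides) of the soft-margin dual with the box constraints 0 <= alpha_i <= C; it assumes cvxpy is installed and reuses the non-separable toy dataset from the primal sketch above.

    import cvxpy as cp
    import numpy as np

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0], [1.5, 1.5]])
    y = np.array([1.0, 1.0, -1.0, -1.0, -1.0])   # last example is an outlier
    m = X.shape[0]
    C = 1.0

    K = X @ X.T + 1e-8 * np.eye(m)   # Gram matrix (tiny ridge for numerics)
    alpha = cp.Variable(m)
    objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.quad_form(cp.multiply(alpha, y), K))
    constraints = [alpha >= 0, alpha <= C, cp.sum(cp.multiply(alpha, y)) == 0]
    cp.Problem(objective, constraints).solve()
    print(alpha.value)   # bound support vectors hit the box limit alpha_i = C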

Support vectors
\alpha_i \left( y_i (w^T x_i + w_0) - 1 + \xi_i \right) = 0
\beta_i \xi_i = 0
- If \alpha_i < C, then C - \alpha_i - \beta_i = 0 and \beta_i \xi_i = 0 imply that \xi_i = 0. These are called unbound support vectors: y_i (w^T x_i + w_0) = 1, i.e. they stay on the confidence-one hyperplane.
- If \alpha_i = C (bound support vectors), then \xi_i can be greater than zero, in which case the support vectors are margin errors.
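A minimal sketch (not from the slides) of distinguishing bound from unbound support vectors with scikit-learn: SVC.dual_coef_ stores y_i * alpha_i for each support vector, so |dual_coef_| equal to C identifies the bound ones. The data and tolerance are illustrative.

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0], [1.5, 1.5]])
    y = np.array([1, 1, -1, -1, -1])   # last example is an outlier
    C = 1.0

    clf = SVC(kernel="linear", C=C).fit(X, y)
    abs_alpha = np.abs(clf.dual_coef_.ravel())        # alpha_i for each support vector

    bound = clf.support_[np.isclose(abs_alpha, C)]    # alpha_i = C: possible margin errors
    unbound = clf.support_[abs_alpha < C - 1e-8]      # 0 < alpha_i < C: on the margin
    print("bound SVs:", bound, "unbound SVs:", unbound)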

References

Bibliography
C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, 2(2):121-167, 1998.

Software
svm module in scikit-learn: http://scikit-learn.org/stable/index.html

libsvm: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
svmlight: http://svmlight.joachims.org/
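A minimal usage sketch (not from the slides) of the scikit-learn svm module listed above, training a linear soft-margin SVM on a toy dataset and inspecting the learned hyperplane and support vectors.

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0], [1.5, 1.5]])
    y = np.array([1, 1, -1, -1, -1])

    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    print(clf.coef_, clf.intercept_)        # w and w0 of the learned hyperplane
    print(clf.support_vectors_)             # the support vectors
    print(clf.predict([[0.5, 2.5], [-1.0, -1.0]]))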