Introduction to Support Vector Machines
Shivani Agarwal

Support Vector Machines (SVMs)
- Algorithm for learning linear classifiers
- Motivated by the idea of maximizing the margin
- Efficient extension to non-linear SVMs through the use of kernels

Problem Setting
- Instances: $x \in X$
- Class labels: $y \in Y = \{+1, -1\}$
- Target function: $g : X \to Y$
- Unknown distribution: $D$ (on $X$)
- Training data: $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, with $y_i = g(x_i)$
- Objective: given a new $x$, predict $y$ so that the probability of error is minimal

Problem Setting (continued)
- Hypothesis space: $H = \{h : X \to Y\}$
- Error of $h$ on the training set $S$: $\mathrm{err}_S(h) = \frac{1}{m} \sum_{i=1}^m I\{h(x_i) \neq y_i\}$
- Probability of error on a new $x$: $\mathrm{err}(h) = E_D\left[I\{h(x) \neq g(x)\}\right]$
- More precise objective: find $h \in H$ such that $\mathrm{err}(h)$ is minimal

Linear Classifiers
- Instance space: $X = \mathbb{R}^n$
- Set of class labels: $Y = \{+1, -1\}$
- Training data: $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$
- Hypothesis space: $H_{\mathrm{lin}(n)} = \{h : \mathbb{R}^n \to Y \mid h(x) = \mathrm{sign}(w \cdot x + b),\ w \in \mathbb{R}^n,\ b \in \mathbb{R}\}$, where $\mathrm{sign}(w \cdot x + b) = +1$ if $w \cdot x + b > 0$ and $-1$ otherwise.
Thus the goal of a learning algorithm that learns a linear classifier is to find a hypothesis $h \in H_{\mathrm{lin}(n)}$ with minimal $\mathrm{err}(h)$.
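
As a minimal illustration of this hypothesis class, the sketch below implements $h(x) = \mathrm{sign}(w \cdot x + b)$ in Python; the particular values of $w$ and $b$ are made up for the example.

```python
import numpy as np

def linear_classifier(w, b):
    """Return h(x) = sign(w . x + b), applied row-wise to an array of instances."""
    def h(X):
        scores = X @ w + b
        # map score > 0 to +1 and everything else to -1, matching the definition above
        return np.where(scores > 0, 1, -1)
    return h

# Tiny usage example with made-up parameters
h = linear_classifier(w=np.array([1.0, -2.0]), b=0.5)
print(h(np.array([[3.0, 1.0], [0.0, 1.0]])))  # -> [ 1 -1]
```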

Example 1
$X = \mathbb{R}^2$, $Y = \{+1, -1\}$ (figure: two-dimensional training data, positive and negative points plotted with different markers, separated by two candidate linear classifiers)
Which classifier would you expect to generalize better? Remember: we want to minimize the probability of error on future instances!

Intuitive Justification
- We expect the error on the training set, $\mathrm{err}_S(h)$, to give some indication of the probability of error on a new instance, $\mathrm{err}(h)$.
- We expect simple hypotheses to generalize better than more complex hypotheses.
- Can we quantify the above ideas?

Complexity of a Hypothesis Space
A hypothesis space $H$ over instances in $X$ is said to shatter a set $A \subseteq X$ if all possible dichotomies of $A$ (all +/- labelings of the elements of $A$) can be represented by some hypothesis in $H$.
Example: $X = \mathbb{R}^2$, $H = H_{\mathrm{lin}(2)}$ (figure: set $A$ is shattered by $H$, but set $B$ is not).

Complexity of a Hypothesis Space (continued)
- The VC dimension of a hypothesis space $H$, denoted $VC(H)$, is defined as the size of the largest subset of $X$ that can be shattered by $H$.
- We saw that there exists a set of 3 points in $\mathbb{R}^2$ that can be shattered by $H_{\mathrm{lin}(2)}$. No set of 4 points in $\mathbb{R}^2$ can be shattered by $H_{\mathrm{lin}(2)}$. Thus $VC(H_{\mathrm{lin}(2)}) = 3$.
- In general, $VC(H_{\mathrm{lin}(n)}) = n + 1$.
- The VC dimension of $H$ is a measure of the complexity of $H$.

Generalization Bound from Learning Theory
Let $0 < \delta < 1$. Given a training set $S$ of size $m$ and a classifier $h \in H$, with probability $1 - \delta$ (over the choice of $S$) the following holds:
$$\mathrm{err}(h) \leq \mathrm{err}_S(h) + \sqrt{\frac{VC(H)\left(\log\frac{2m}{VC(H)} + 1\right) + \log\frac{4}{\delta}}{m}}$$
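
To make the bound concrete, the snippet below plugs illustrative numbers into the formula; the values of $m$, $VC(H)$, and $\delta$ are made up for the example.

```python
import math

def vc_bound(err_train, m, vc, delta):
    """err(h) <= err_S(h) + sqrt( (VC(H)(log(2m/VC(H)) + 1) + log(4/delta)) / m )."""
    complexity = vc * (math.log(2 * m / vc) + 1) + math.log(4 / delta)
    return err_train + math.sqrt(complexity / m)

# e.g. 1000 training points, linear classifiers in R^2 (VC = 3), zero training error, delta = 0.05
print(round(vc_bound(err_train=0.0, m=1000, vc=3, delta=0.05), 3))
```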

Example 2
$X = \mathbb{R}^2$, $Y = \{+1, -1\}$ (figure: two linear classifiers, $h_1$ and $h_2$, both separating the training data)
Which of these classifiers would be likely to generalize better?

Example 2 (continued)
Recall the VC-based generalization bound:
$$\mathrm{err}(h) \leq \mathrm{err}_S(h) + \sqrt{\frac{VC(H)\left(\log\frac{2m}{VC(H)} + 1\right) + \log\frac{4}{\delta}}{m}}$$
In this case, we get the same bound for both classifiers:
- $\mathrm{err}_S(h_1) = \mathrm{err}_S(h_2) = 0$
- $h_1, h_2 \in H_{\mathrm{lin}(2)}$, and $VC(H_{\mathrm{lin}(2)}) = 3$
How, then, can we explain our intuition that $h_2$ should give better generalization than $h_1$?

Example 2 (continued)
Although both classifiers separate the data, the distance with which the separation is achieved is different (figure: the training data with the margins of $h_1$ and $h_2$ indicated).

Concept of Margin
The margin $\gamma_i$ of a point $x_i \in \mathbb{R}^n$ with respect to a linear classifier $h(x) = \mathrm{sign}(w \cdot x + b)$ is defined as the distance of $x_i$ from the hyperplane $w \cdot x + b = 0$:
$$\gamma_i = \frac{|w \cdot x_i + b|}{\|w\|}$$
The margin of a set of points $\{x_1, \ldots, x_m\}$ is defined as the margin of the point closest to the hyperplane:
$$\gamma = \min_{1 \leq i \leq m} \gamma_i = \min_{1 \leq i \leq m} \frac{|w \cdot x_i + b|}{\|w\|}$$
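
The two definitions translate directly into code; below is a small sketch (the hyperplane and points are made up for illustration).

```python
import numpy as np

def point_margins(X, w, b):
    """Distance of each row of X from the hyperplane w . x + b = 0."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

def set_margin(X, w, b):
    """Margin of a point set: the distance of the point closest to the hyperplane."""
    return point_margins(X, w, b).min()

# Made-up data and hyperplane, purely for illustration
X = np.array([[2.0, 1.0], [0.5, -1.0], [3.0, 3.0]])
w, b = np.array([1.0, 1.0]), -1.0
print(point_margins(X, w, b), set_margin(X, w, b))
```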

Margin-Based Generalization Bound
If $H$ is the space of all linear classifiers in $\mathbb{R}^n$ that separate the training data with margin at least $\gamma$, then
$$VC(H) \leq \min\left(\frac{R^2}{\gamma^2},\ n\right) + 1,$$
where $R$ is the radius of the smallest sphere (in $\mathbb{R}^n$) that contains the data. Thus for such classifiers, we get a bound of the form
$$\mathrm{err}(h) \leq \mathrm{err}_S(h) + O\left(\sqrt{\frac{R^2/\gamma^2 + \log(4/\delta)}{m}}\right)$$

The SVM Algorithm (Linearly Separable Case)
Given training data $S = \{(x_i, y_i)\}_{i=1}^m$, where $x_i \in \mathbb{R}^n$, the SVM algorithm finds a linear classifier that separates the data with maximal margin.
Without loss of generality, we can represent any linear classifier in $\mathbb{R}^n$ by some $w \in \mathbb{R}^n$, $b \in \mathbb{R}$ such that
$$\min_{1 \leq i \leq m} |w \cdot x_i + b| = 1. \quad (1)$$
The margin of the data with respect to the classifier is then given by
$$\gamma = \min_{1 \leq i \leq m} \frac{|w \cdot x_i + b|}{\|w\|} = \frac{1}{\|w\|}.$$
Maximizing the margin is therefore equivalent to minimizing the norm $\|w\|$ of the classifier, subject to the constraint in Eq. (1) above.

Optimization Problem
Minimize $f(w, b) \equiv \frac{1}{2}\|w\|^2$ subject to $y_i(w \cdot x_i + b) \geq 1$, $i = 1, \ldots, m$.
This is an optimization problem in $(n + 1)$ variables, with $m$ linear inequality constraints. Introducing Lagrange multipliers $\alpha_i$, $i = 1, \ldots, m$, for the inequality constraints above gives the primal Lagrangian:
Minimize
$$L_P(w, b, \alpha) \equiv \frac{1}{2}\|w\|^2 - \sum_{i=1}^m \alpha_i\left[y_i(w \cdot x_i + b) - 1\right]$$
subject to $\alpha_i \geq 0$, $i = 1, \ldots, m$.

Optimization Problem (continued)
Setting the gradients of $L_P$ with respect to $w, b$ equal to zero gives:
$$\frac{\partial L_P}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^m \alpha_i y_i x_i, \qquad \frac{\partial L_P}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^m \alpha_i y_i = 0$$
Substituting the above into the primal gives the following dual problem:
Maximize
$$L_D(\alpha) \equiv \sum_{i=1}^m \alpha_i - \frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j (x_i \cdot x_j)$$
subject to
$$\sum_{i=1}^m \alpha_i y_i = 0; \qquad \alpha_i \geq 0, \ i = 1, \ldots, m.$$
This is a convex quadratic programming problem in $\alpha$.
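
Because the dual is a standard convex QP, it can be handed to any off-the-shelf QP solver. The sketch below (our own, not part of the original slides) uses cvxopt and assumes the data is linearly separable; otherwise the hard-margin dual has no bounded solution.

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_hard_margin(X, y):
    """Maximize L_D(alpha) subject to sum_i alpha_i y_i = 0 and alpha_i >= 0.

    cvxopt minimizes (1/2) a^T P a + q^T a  s.t.  G a <= h,  A a = b,
    so we pass P_ij = y_i y_j (x_i . x_j), q = -1, G = -I, h = 0, A = y^T, b = 0.
    """
    m = X.shape[0]
    Yx = (y[:, None] * X).astype(float)
    P = matrix(Yx @ Yx.T)
    q = matrix(-np.ones(m))
    G = matrix(-np.eye(m))                       # encodes -alpha_i <= 0
    h = matrix(np.zeros(m))
    A = matrix(y.reshape(1, -1).astype(float))   # encodes sum_i alpha_i y_i = 0
    b = matrix(0.0)
    solvers.options["show_progress"] = False
    sol = solvers.qp(P, q, G, h, A, b)
    return np.ravel(sol["x"])                    # alpha_1, ..., alpha_m
```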

Solution
The parameters $w, b$ of the maximal margin classifier are determined by the solution $\alpha$ to the dual problem:
$$w = \sum_{i=1}^m \alpha_i y_i x_i, \qquad b = -\frac{1}{2}\left(\min_{y_i = +1} (w \cdot x_i) + \max_{y_i = -1} (w \cdot x_i)\right)$$

Support Vectors
Due to certain properties of the solution (known as the Karush-Kuhn-Tucker conditions), the solution $\alpha$ must satisfy
$$\alpha_i \left[y_i(w \cdot x_i + b) - 1\right] = 0, \quad i = 1, \ldots, m.$$
Thus $\alpha_i > 0$ only for those points $x_i$ that are closest to the classifying hyperplane. These points are called the support vectors.
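
Continuing the hedged sketch above, $w$, $b$, and the support vectors can be recovered from the dual solution; the tolerance used to decide $\alpha_i > 0$ numerically is an arbitrary choice of ours.

```python
import numpy as np

def recover_classifier(X, y, alpha, tol=1e-6):
    """Recover w, b, and the support vector indices from a hard-margin dual solution."""
    w = (alpha * y) @ X                              # w = sum_i alpha_i y_i x_i
    scores = X @ w
    # b = -1/2 ( min_{y_i=+1} w.x_i + max_{y_i=-1} w.x_i ), as on the Solution slide
    b = -0.5 * (scores[y == 1].min() + scores[y == -1].max())
    support = np.flatnonzero(alpha > tol)            # points with alpha_i > 0
    return w, b, support
```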

Non-Separable Case
We want to relax the constraints $y_i(w \cdot x_i + b) \geq 1$. We can introduce slack variables $\xi_i$:
$$y_i(w \cdot x_i + b) \geq 1 - \xi_i, \qquad \xi_i \geq 0 \ \forall i.$$
An error occurs when $\xi_i > 1$. Thus we can assign an extra cost for errors as follows:
Minimize
$$f(w, b, \xi) \equiv \frac{1}{2}\|w\|^2 + C\sum_{i=1}^m \xi_i$$
subject to
$$y_i(w \cdot x_i + b) \geq 1 - \xi_i; \qquad \xi_i \geq 0, \ i = 1, \ldots, m.$$

Non-Separable Case (continued)
Dual problem: Maximize
$$L_D(\alpha) \equiv \sum_{i=1}^m \alpha_i - \frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j (x_i \cdot x_j)$$
subject to
$$\sum_{i=1}^m \alpha_i y_i = 0; \qquad 0 \leq \alpha_i \leq C, \ i = 1, \ldots, m.$$
Solution: The solution for $w$ is again given by $w = \sum_{i=1}^m \alpha_i y_i x_i$. The solution for $b$ is similar to that in the linearly separable case.
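
In practice the soft-margin QP is rarely solved by hand; the small example below uses scikit-learn's SVC with a linear kernel on made-up synthetic data, just to show how $C$ trades margin width against slack.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-1.0, size=(20, 2)), rng.normal(loc=1.0, size=(20, 2))])
y = np.array([-1] * 20 + [1] * 20)

for C in (0.1, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # larger C penalizes slack more heavily; smaller C allows a wider, softer margin
    print(C, clf.n_support_, clf.coef_, clf.intercept_)
```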

Visualizing the Solution in the Non-Separable Case
(figure showing the three kinds of points relative to the margin)
1. Margin support vectors: $\xi_i = 0$, classified correctly (on the margin)
2. Non-margin support vectors: $0 < \xi_i < 1$, classified correctly but inside the margin
3. Non-margin support vectors: $\xi_i > 1$, misclassified (error)

Non-Linear SVMs
Basic idea: map the given data to some (high-dimensional) feature space $\mathbb{R}^D$ using a non-linear mapping $\psi : \mathbb{R}^n \to \mathbb{R}^D$, and learn a linear classifier in the new space.

Dot Products and Kernels
Training phase: Maximize
$$L_D(\alpha) \equiv \sum_{i=1}^m \alpha_i - \frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j \left(\psi(x_i) \cdot \psi(x_j)\right)$$
subject to
$$\sum_{i=1}^m \alpha_i y_i = 0; \qquad 0 \leq \alpha_i \leq C, \ i = 1, \ldots, m.$$
Test phase: $h(x) = \mathrm{sign}(w \cdot \psi(x) + b)$, with $w \in \mathbb{R}^D$, $b \in \mathbb{R}$.

Dot Products and Kernels (continued)
Recall the form of the solution:
$$w = \sum_{i \in SV} \alpha_i y_i \psi(x_i).$$
Therefore, the test phase can be written as
$$h(x) = \mathrm{sign}(w \cdot \psi(x) + b) = \mathrm{sign}\left(\sum_{i \in SV} \alpha_i y_i \left(\psi(x_i) \cdot \psi(x)\right) + b\right).$$
Thus both the training and test phases use only dot products between images $\psi(x)$ of points $x$ in $\mathbb{R}^n$.

Dot Products and Kernels (continued)
A kernel function is a symmetric function $K : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$. A Mercer kernel, in addition, computes a dot product in some (high-dimensional) space:
$$K(x, z) = \psi(x) \cdot \psi(z)$$
for some $\psi$ and $D$ such that $\psi : \mathbb{R}^n \to \mathbb{R}^D$. Thus if we can find a Mercer kernel that computes a dot product in the feature space we are interested in, we can use the kernel to replace the dot products in the SVM.

SVMs with Kernels
Training phase: Maximize
$$L_D(\alpha) \equiv \sum_{i=1}^m \alpha_i - \frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j K(x_i, x_j)$$
subject to
$$\sum_{i=1}^m \alpha_i y_i = 0; \qquad 0 \leq \alpha_i \leq C, \ i = 1, \ldots, m.$$
Test phase:
$$h(x) = \mathrm{sign}\left(\sum_{i \in SV} \alpha_i y_i K(x_i, x) + b\right).$$
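
The test phase above depends on the support vectors only through kernel evaluations; a minimal sketch follows (the parameter names are ours and are assumed to come from an already-trained model).

```python
import numpy as np

def kernel_predict(x, sv_X, sv_y, sv_alpha, b, K):
    """h(x) = sign( sum_{i in SV} alpha_i y_i K(x_i, x) + b )."""
    score = np.sum(sv_alpha * sv_y * np.array([K(xi, x) for xi in sv_X])) + b
    return 1 if score > 0 else -1
```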

Example: Quadratic Kernel
Let $X = \mathbb{R}^2$, and consider learning a quadratic classifier in this space:
$$h(x) = \mathrm{sign}(w_1 x_1^2 + w_2 x_2^2 + w_3 x_1 x_2 + w_4 x_1 + w_5 x_2 + b).$$
This is equivalent to learning a linear classifier in the feature space $\psi(\mathbb{R}^2)$, where $\psi : \mathbb{R}^2 \to \mathbb{R}^5$,
$$\psi\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right) = \begin{bmatrix} x_1^2 \\ x_2^2 \\ x_1 x_2 \\ x_1 \\ x_2 \end{bmatrix}.$$
Without the use of a kernel, learning such a classifier using an SVM would require computing dot products in $\mathbb{R}^5$.

Example: Quadratic Kernel (continued)
Consider the kernel
$$K : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}, \qquad K(x, z) = (x \cdot z + 1)^2.$$
It can be verified that $K(x, z) = \psi'(x) \cdot \psi'(z)$, where $\psi' : \mathbb{R}^2 \to \mathbb{R}^6$,
$$\psi'\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right) = \begin{bmatrix} x_1^2 \\ x_2^2 \\ \sqrt{2}\, x_1 x_2 \\ \sqrt{2}\, x_1 \\ \sqrt{2}\, x_2 \\ 1 \end{bmatrix}.$$
Thus, an SVM with the above kernel can be used to learn a quadratic classifier in $\mathbb{R}^2$ using only dot products in $\mathbb{R}^2$.
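
The identity $K(x, z) = \psi'(x) \cdot \psi'(z)$ is easy to check numerically; the two points below are arbitrary.

```python
import numpy as np

def psi_prime(x):
    """Explicit feature map for K(x, z) = (x . z + 1)^2 on R^2."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2)*x1*x2, np.sqrt(2)*x1, np.sqrt(2)*x2, 1.0])

x, z = np.array([1.5, -2.0]), np.array([0.3, 4.0])
print((x @ z + 1) ** 2, psi_prime(x) @ psi_prime(z))  # the two values should agree
```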

Some Commonly Used Kernels
For $x, z \in X = \mathbb{R}^n$:
- Polynomial kernels of degree $d$: $K(x, z) = (x \cdot z + 1)^d$
- Gaussian kernels: $K(x, z) = \exp\left(-\frac{\|x - z\|^2}{2\sigma^2}\right)$
Such kernels are used to efficiently learn a linear classifier in a high-dimensional space using computations in only the original, lower-dimensional space. The Gaussian kernel corresponds to a dot product in an infinite-dimensional space.
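
Both kernels are one-liners in code; the sketch below is a direct transcription of the formulas above.

```python
import numpy as np

def polynomial_kernel(x, z, d=2):
    """K(x, z) = (x . z + 1)^d"""
    return (np.dot(x, z) + 1) ** d

def gaussian_kernel(x, z, sigma=1.0):
    """K(x, z) = exp( -||x - z||^2 / (2 sigma^2) )"""
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(z)) ** 2 / (2 * sigma ** 2))
```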

Kernels over Structured Data
Kernels can also be defined for non-vectorial data, i.e. where $X$ is a space other than $\mathbb{R}^n$: $K : X \times X \to \mathbb{R}$. A kernel $K(x, z)$ over $\mathbb{R}^n$ that computes dot products in some space can be viewed as computing some sort of similarity measure between data elements $x, z \in \mathbb{R}^n$. Therefore an appropriate similarity measure between elements in any space $X$ can be used to define a kernel over $X$. Such kernels have been defined, for example, for data represented as trees or strings; SVMs can therefore be used to learn classifiers for such types of data.
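
As one concrete illustration (not one of the specific kernels referenced on the slide), the p-spectrum kernel counts length-$p$ substrings shared by two strings; it is a valid Mercer kernel because it is the dot product of substring-count feature vectors.

```python
from collections import Counter

def spectrum_kernel(s, t, p=3):
    """p-spectrum string kernel: dot product of length-p substring count vectors."""
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[sub] * ct[sub] for sub in cs if sub in ct)

print(spectrum_kernel("machine learning", "kernel machines", p=3))
```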

Summary
- The SVM algorithm learns a linear classifier that maximizes the margin of the training data.
- Training an SVM consists of solving a quadratic programming problem in $m$ variables, where $m$ is the size of the training set.
- An SVM can learn a non-linear classifier in the original space through the use of a kernel function, which simulates dot products in a (high-dimensional) feature space.

Current Research
- Much work on efficient methods for finding approximate solutions to the quadratic programming problem, especially for large datasets.
- A multitude of new kernels for different types of structured data.
- Work on trying to optimize the margin distribution over all training points, rather than optimizing the margin of only the points closest to the separating hyperplane.