(Kernels +) Support Vector Machines


(Kernels +) Support Vector Machines Machine Learning Torsten Möller

Reading: Chapter 5 of Machine Learning: An Algorithmic Perspective by Marsland; Chapters 6 and 7 of Pattern Recognition and Machine Learning by Bishop; Chapter 12 of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman.

Today: Motivation; Kernels; Support Vectors: the idea!; Lagrange multipliers; Min-max vs. max-min; Solving the constrained problem; Non-separable data.

Generalized linear model: y(x) = f(w^T x + w_0). This is called a generalized linear model. f(·) is a fixed non-linear function, e.g. f(u) = 1 if u ≥ 0, 0 otherwise. The decision boundary between classes will be a linear function of x. We can also apply a non-linearity to x.

Perceptron learning illustration. (Figure: a sequence of 2D plots, axes from −1 to 1, showing the perceptron decision boundary over successive updates.)

Limitations of Perceptrons: Perceptrons can only solve linearly separable problems in feature space (the same is true of the other models in this chapter). The canonical example of a non-separable problem is XOR (real datasets can look like this too). (Figure: the four XOR points plotted against inputs I_1 and I_2.)

Non-linear decision boundaries: y(x) = f(w^T φ(x) + b). It's not linear in x anymore; separation may be easier in the higher-dimensional space.

Today: Motivation; Kernels; Support Vectors: the idea!; Lagrange multipliers; Min-max vs. max-min; Solving the constrained problem; Non-separable data.

Non-linear mappings: Last week, for logistic regression (classification), we looked at models of the form w^T φ(x). The feature space φ(x) could be high-dimensional. This is good because if the data aren't separable in the original input space x, they may be separable in the feature space φ(x).

Non-linear mappings: We'd like to avoid computing the high-dimensional φ(x). We'd also like to work with inputs x that don't have a natural vector-space representation, e.g. graphs, sets, strings.

Kernel trick: Before, we would explicitly compute φ(x_i) for each datapoint and run the algorithm in feature space. For some feature spaces, we can compute the dot product φ(x_i)^T φ(x_j) efficiently. The efficient method is computation of a kernel function k(x_i, x_j) = φ(x_i)^T φ(x_j). The kernel trick is to rewrite an algorithm so that x enters only in the form of dot products. The menu: kernel trick examples, kernel functions.

A kernel trick: Let's look at the nearest-neighbour classification algorithm. For input point x_i, find the point x_j with smallest distance:
‖x_i − x_j‖² = (x_i − x_j)^T (x_i − x_j) = x_i^T x_i − 2 x_i^T x_j + x_j^T x_j
If we used a non-linear feature space φ(·):
‖φ(x_i) − φ(x_j)‖² = φ(x_i)^T φ(x_i) − 2 φ(x_i)^T φ(x_j) + φ(x_j)^T φ(x_j) = k(x_i, x_i) − 2 k(x_i, x_j) + k(x_j, x_j)
So nearest-neighbour can be done in a high-dimensional feature space without actually moving to it.
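
As a concrete illustration, here is a minimal sketch of a kernelized nearest-neighbour classifier in Python/NumPy. It is not from the slides; the function names and the choice of a Gaussian kernel are our own assumptions.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # k(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def kernel_nn_predict(x, X_train, t_train, kernel=gaussian_kernel):
    # Squared distance in feature space: k(x,x) - 2 k(x,x_n) + k(x_n,x_n),
    # computed without ever forming phi(x) explicitly.
    d2 = [kernel(x, x) - 2.0 * kernel(x, x_n) + kernel(x_n, x_n)
          for x_n in X_train]
    return t_train[int(np.argmin(d2))]

# Tiny usage example on XOR-like data
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0, 1, 1, 0])
print(kernel_nn_predict(np.array([0.9, 0.1]), X, t))  # prints 1
```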

A Kernel Function: Consider the following kernel function (x and z are 2D vectors):
k(x, z) = (1 + x^T z)² = (1 + x_1 z_1 + x_2 z_2)²
= 1 + 2 x_1 z_1 + 2 x_2 z_2 + x_1² z_1² + 2 x_1 z_1 x_2 z_2 + x_2² z_2²
= (1, √2 x_1, √2 x_2, x_1², √2 x_1 x_2, x_2²) (1, √2 z_1, √2 z_2, z_1², √2 z_1 z_2, z_2²)^T
= φ(x)^T φ(z)
So this particular kernel function does correspond to a dot product in a feature space (it is valid). Computing k(x, z) is faster than explicitly computing φ(x)^T φ(z); in higher dimensions and with larger exponents, it is much faster.
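
A quick numerical check of this identity (a sketch we added, not part of the slides):

```python
import numpy as np

def poly_kernel(x, z):
    return (1.0 + x @ z) ** 2

def phi(x):
    # Explicit feature map corresponding to the degree-2 polynomial kernel in 2D
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

x = np.array([0.3, -1.2])
z = np.array([2.0, 0.5])
print(poly_kernel(x, z), phi(x) @ phi(z))  # the two values agree
```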

Why kernels? Why bother with kernels? It is often easier to specify how similar two things are (a dot product) than to construct an explicit feature space φ. There are high-dimensional (even infinite-dimensional) spaces that have efficient-to-compute kernels, and they can make the data separable. So you want to use kernels: you need to know when a kernel function is valid, so that the kernel trick can be applied.

Valid kernels: Given some arbitrary function k(x_i, x_j), how do we know if it corresponds to a dot product in some space? k(·,·) is a valid kernel if it satisfies: symmetry, k(x_i, x_j) = k(x_j, x_i); and positive semi-definiteness, i.e. for any x_1, ..., x_N the Gram matrix K (with entries K_nm = k(x_n, x_m)) must be positive semi-definite, meaning x^T K x ≥ 0 for all x. Then k(·,·) corresponds to a dot product in some feature space φ. Such a kernel is a.k.a. a Mercer kernel, admissible kernel, or reproducing kernel.
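
As an illustrative check (our own sketch, assuming a Gaussian kernel), you can confirm numerically that a Gram matrix is positive semi-definite by inspecting its eigenvalues:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                       # 20 arbitrary points in 3D
K = np.array([[gaussian_kernel(a, b) for b in X] for a in X])

eigvals = np.linalg.eigvalsh(K)                    # K is symmetric
print(eigvals.min() >= -1e-10)                     # True: all eigenvalues >= 0 (up to rounding)
```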

Examples of kernels: Linear kernel k(x_1, x_2) = x_1^T x_2, with φ(x) = x. Polynomial kernel k(x_1, x_2) = (1 + x_1^T x_2)^d, which contains all polynomial terms up to degree d. Gaussian (radial) kernel k(x_1, x_2) = exp(−‖x_1 − x_2‖² / 2σ²), whose feature space is infinite-dimensional.
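
These three kernels are one-liners in code; a minimal NumPy sketch (ours, not from the slides):

```python
import numpy as np

def linear_kernel(x1, x2):
    return x1 @ x2

def polynomial_kernel(x1, x2, d=3):
    return (1.0 + x1 @ x2) ** d

def gaussian_kernel(x1, x2, sigma=1.0):
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))
```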

Constructing kernels: We can build new valid kernels from existing valid ones: k(x_1, x_2) = c·k_1(x_1, x_2) with c > 0; k(x_1, x_2) = k_1(x_1, x_2) + k_2(x_1, x_2); k(x_1, x_2) = k_1(x_1, x_2)·k_2(x_1, x_2); k(x_1, x_2) = exp(k_1(x_1, x_2)). The table on p. 296 of Bishop gives many such rules.

More kernels: Stationary kernels are a function only of the difference between their arguments, k(x_1, x_2) = k(x_1 − x_2), and are therefore translation invariant in input space: k(x_1, x_2) = k(x_1 + c, x_2 + c). Homogeneous kernels, a.k.a. radial basis functions, are a function only of the magnitude of the difference: k(x_1, x_2) = k(‖x_1 − x_2‖). Kernels on sets: k(A_1, A_2) = 2^|A_1 ∩ A_2|, where |A| denotes the number of elements in A. Domain-specific kernels: think hard about your problem, figure out what it means for two inputs to be similar, define that as k(·,·), and prove it is positive semi-definite (the Feynman algorithm).
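
The set kernel above, for instance, is a one-liner (a sketch we added):

```python
def set_kernel(A1, A2):
    # k(A1, A2) = 2 ** |A1 intersect A2|, a valid kernel on sets
    return 2 ** len(set(A1) & set(A2))

print(set_kernel({"a", "b", "c"}, {"b", "c", "d"}))  # 2**2 = 4
```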

Today: Motivation; Kernels; Support Vectors: the idea!; Lagrange multipliers; Min-max vs. max-min; Solving the constrained problem; Non-separable data.

Non-linear decision boundaries: y(x) = f(w^T φ(x) + b). Consider two-class classification, and let's assume we have moved the training data into a high-dimensional feature space where the data are (indeed) linearly separable. We could now apply a (simple, linear) classifier, BUT...

... there are many decision boundaries! Which one to pick?

Maximum margin: We can define the margin of a classifier as the minimum distance to any example. In support vector machines, the decision boundary which maximizes the margin is chosen. (Figure: decision boundary y = 0 with margin boundaries y = −1 and y = 1.)

Marginal geometry: Recall from lecture 3 that for y(x) = w^T x + b, we have y(x) = 0 when w^T x = −b, i.e. w^T x / ‖w‖ = −b / ‖w‖. More generally, y(x) / ‖w‖ is the signed distance of x to the decision boundary. (Figure: regions y > 0, y = 0, y < 0 in R², with a point x decomposed into its projection x_⊥ onto the boundary plus y(x)/‖w‖ along the unit vector w/‖w‖; the boundary lies at distance −b/‖w‖ from the origin.)

Support Vectors: Assuming the data are separated by the hyperplane, the distance to the decision boundary is t_n y(x_n) / ‖w‖. The maximum margin criterion chooses w, b by: arg max_{w,b} { (1/‖w‖) min_n [ t_n (w^T φ(x_n) + b) ] }. Points attaining this minimum value are known as support vectors. (Figure: margin boundaries y = −1, y = 0, y = 1.)

Canonical representation: This optimization problem is complex: arg max_{w,b} { (1/‖w‖) min_n [ t_n (w^T φ(x_n) + b) ] }. Note that rescaling w → κw and b → κb does not change the distance t_n y(x_n) / ‖w‖ (there are many equivalent answers). So for the point x* closest to the surface we can set t* (w^T φ(x*) + b) = 1. All other points are at least this far away: ∀n, t_n (w^T φ(x_n) + b) ≥ 1.

Canonical representation: Under these constraints, the optimization arg max_{w,b} { (1/‖w‖) min_n [ t_n (w^T φ(x_n) + b) ] } becomes arg max_{w,b} 1/‖w‖ = arg min_{w,b} (1/2)‖w‖², which can be formulated as a constrained optimization problem.

Canonical representation: So the optimization problem is now a constrained optimization problem: arg min_{w,b} (1/2)‖w‖², s.t. ∀n, t_n (w^T φ(x_n) + b) ≥ 1. To solve this, we need to take a detour into Lagrange multipliers.

Today: Motivation; Kernels; Support Vectors: the idea!; Lagrange multipliers; Min-max vs. max-min; Solving the constrained problem; Non-separable data.

Lagrange Multipliers: Consider the problem: max_x f(x) s.t. g(x) = 0. (Figure: the constraint surface g(x) = 0 with ∇f(x) and ∇g(x) drawn at a point x_A.) Points on g(x) = 0 must have ∇g(x) normal to the surface. A stationary point must have no change in f in the direction of the surface, so ∇f(x) must also be in this same direction. So there must be some λ such that ∇f(x) + λ∇g(x) = 0.

Lagrange Multipliers: Consider the problem: max_x f(x) s.t. g(x) = 0. There must be some λ such that ∇f(x) + λ∇g(x) = 0. Define the Lagrangian: L(x, λ) = f(x) + λ g(x). Stationary points of L(x, λ) have ∇_x L(x, λ) = ∇f(x) + λ∇g(x) = 0 and ∂L/∂λ = g(x) = 0, so they are stationary points of the constrained problem!

Lagrange Multipliers Example: Consider the problem max f(x_1, x_2) = 1 − x_1² − x_2² s.t. g(x_1, x_2) = x_1 + x_2 − 1 = 0. Lagrangian: L(x, λ) = 1 − x_1² − x_2² + λ(x_1 + x_2 − 1). Stationary points require: ∂L/∂x_1 = −2x_1 + λ = 0, ∂L/∂x_2 = −2x_2 + λ = 0, ∂L/∂λ = x_1 + x_2 − 1 = 0. So the stationary point is (x_1*, x_2*) = (1/2, 1/2), with λ = 1. (Figure: contours of f with the constraint line g(x_1, x_2) = 0 and the optimum (x_1*, x_2*).)
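
To double-check this worked example numerically, here is a small sketch (ours, not from the slides) using scipy's constrained optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Maximize f(x) = 1 - x1^2 - x2^2 subject to x1 + x2 - 1 = 0
# (equivalently, minimize -f under the equality constraint)
res = minimize(lambda x: -(1.0 - x[0] ** 2 - x[1] ** 2),
               x0=np.zeros(2),
               constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}])
print(res.x)  # approximately [0.5, 0.5], matching the Lagrange-multiplier solution
```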

Lagrange Multipliers - Inequality Constraints: Consider the problem: max_x f(x) s.t. g(x) ≥ 0. (Figure: the region g(x) > 0 with boundary g(x) = 0; an interior stationary point x_A and a boundary solution x_B.) This is optimization over a region: solutions are either at stationary points (zero gradient) inside the region or on the boundary. With L(x, λ) = f(x) + λ g(x), solutions have either ∇f(x) = 0 and λ = 0 (inside the region), or ∇f(x) = −λ∇g(x) with λ > 0 (on the boundary; λ > 0 when maximizing f). In both cases λ g(x) = 0. So solutions satisfy g(x) ≥ 0, λ ≥ 0, λ g(x) = 0.

Lagrange Multipliers - Inequality Constraints: Consider the problem: max_x f(x) s.t. g(x) ≥ 0, with L(x, λ) = f(x) + λ g(x). Exactly how does the Lagrangian relate to the optimization problem in this case? It turns out that the solution to the optimization problem is: max_x min_{λ ≥ 0} L(x, λ).

Max-min Lagrangian: L(x, λ) = f(x) + λ g(x). Consider min_{λ ≥ 0} L(x, λ). If the constraint g(x) ≥ 0 is not satisfied, i.e. g(x) < 0, then λ can be made arbitrarily large and min_{λ ≥ 0} L(x, λ) = −∞. Otherwise min_{λ ≥ 0} L(x, λ) = f(x) (attained with λ = 0). So min_{λ ≥ 0} L(x, λ) = −∞ if the constraint is not satisfied, and f(x) otherwise.

Min-max (Dual form): So the solution to the optimization problem is given by the primal problem: L_P = max_x min_{λ ≥ 0} L(x, λ). The dual problem is obtained by switching the order of the max and min: L_D = min_{λ ≥ 0} max_x L(x, λ).

Min-max (Dual form): L_P = max_x min_{λ ≥ 0} L(x, λ) and L_D = min_{λ ≥ 0} max_x L(x, λ) are not the same, but the dual is always a bound for the primal (in the SVM case, where the primal is a minimization, L_D(λ) ≤ L_P(x)). Slater's theorem gives conditions for the two problems to be equivalent, with L_D(λ) = L_P(x). Slater's theorem applies to the SVM optimization problem, and solving the dual leads to kernelization and can be easier than solving the primal.

Today: Motivation; Kernels; Support Vectors: the idea!; Lagrange multipliers; Min-max vs. max-min; Solving the constrained problem; Non-separable data.

Now where were we? So the optimization problem is now a constrained optimization problem: arg min_{w,b} (1/2)‖w‖², s.t. ∀n, t_n (w^T φ(x_n) + b) ≥ 1. For this problem, the Lagrangian (with N multipliers a_n) is: L(w, b, a) = ‖w‖²/2 − Σ_{n=1}^N a_n [ t_n (w^T φ(x_n) + b) − 1 ].

Now where were we? The Lagrangian (with N multipliers a_n) is: L(w, b, a) = ‖w‖²/2 − Σ_{n=1}^N a_n [ t_n (w^T φ(x_n) + b) − 1 ]. We can find the derivatives of L with respect to w and b and set them to 0: w = Σ_{n=1}^N a_n t_n φ(x_n), 0 = Σ_{n=1}^N a_n t_n.

Dual form: Plugging those equations into L removes w and b and results in a version of L where ∇_{w,b} L = 0: L(a) = Σ_{n=1}^N a_n − (1/2) Σ_{n=1}^N Σ_{m=1}^N a_n a_m t_n t_m φ(x_n)^T φ(x_m). This new L is the dual representation of the problem (maximized subject to constraints). Note that it is kernelized. It is a quadratic, concave objective in a (a convex optimization problem), bounded above since K is positive semi-definite, so the optimal a can be found. With large datasets, descent strategies are employed.
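
For small datasets the dual can be solved directly with a generic constrained optimizer. A rough sketch (ours, not the lecture's implementation), assuming a Gaussian kernel and the hard-margin constraints a_n ≥ 0 and Σ_n a_n t_n = 0:

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(x1, x2, sigma=1.0):
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))

def fit_dual_svm(X, t, kernel=gaussian_kernel):
    # Hard-margin kernel SVM: maximize the dual L(a) by minimizing its negative.
    N = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    TKT = np.outer(t, t) * K                           # t_n t_m k(x_n, x_m)

    def neg_dual(a):                                   # -L(a)
        return 0.5 * a @ TKT @ a - a.sum()

    res = minimize(neg_dual, x0=np.zeros(N), method="SLSQP",
                   bounds=[(0.0, None)] * N,           # a_n >= 0
                   constraints=[{"type": "eq", "fun": lambda a: a @ t}])  # sum a_n t_n = 0
    return res.x                                       # support vectors have a_n > 0

# Usage: a = fit_dual_svm(X_train, t_train) with labels t_train in {-1, +1}.
```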

Examples: An SVM trained using a Gaussian kernel; support vectors are circled. Note the non-linear decision boundary in x space. (Figure.)

Examples, from Burges, A Tutorial on Support Vector Machines for Pattern Recognition (1998): SVMs trained using a cubic polynomial kernel, k(x_1, x_2) = (x_1^T x_2 + 1)³. Left: the data are linearly separable; note the decision boundary is almost linear, even using the cubic polynomial kernel. Right: the data are not linearly separable, but they are separable using the polynomial kernel.
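
In practice one rarely codes the QP by hand; a library implementation reproduces these kinds of examples in a few lines. A sketch using scikit-learn (our choice of library and toy data, not the lecture's), with an RBF and a cubic polynomial kernel:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
t = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # ring-shaped, not linearly separable

rbf_svm = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, t)
poly_svm = SVC(kernel="poly", degree=3, coef0=1.0, C=10.0).fit(X, t)

print(len(rbf_svm.support_))                          # number of support vectors
print(rbf_svm.score(X, t), poly_svm.score(X, t))      # training accuracies
```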

Today: Motivation; Kernels; Support Vectors: the idea!; Lagrange multipliers; Min-max vs. max-min; Solving the constrained problem; Non-separable data.

Non-separable data (soft-margin classifier): For most problems, the data will not be linearly separable (even in feature space φ). We can relax the constraints from t_n y(x_n) ≥ 1 to t_n y(x_n) ≥ 1 − ξ_n. The ξ_n ≥ 0 are called slack variables: ξ_n = 0 satisfies the original constraint, so x_n is on the margin or on the correct side of it; 0 < ξ_n < 1 means x_n is inside the margin but still correctly classified; ξ_n > 1 means x_n is mis-classified. (Figure: margin boundaries y = −1, y = 0, y = 1 with points labelled ξ = 0, ξ < 1, ξ > 1.)

Loss function for non-separable data: Non-zero slack variables are bad, so penalize them while maximizing the margin: min C Σ_{n=1}^N ξ_n + (1/2)‖w‖². The constant C > 0 controls the importance of a large margin versus incorrect (non-zero slack) points, and is set using cross-validation. The optimization is the same quadratic problem with different constraints, and is still convex. (Figure as on the previous slide.)
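
Setting C by cross-validation might look like the following in scikit-learn (a sketch under our own assumptions about the parameter grid and the toy data):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
t = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

param_grid = {"C": [0.01, 0.1, 1, 10, 100],           # margin vs. slack trade-off
              "gamma": [0.01, 0.1, 1, 10]}            # RBF kernel width
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, t)
print(search.best_params_, search.best_score_)
```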

SVM Loss function: The SVM for the separable case solved the problem: arg min_w (1/2)‖w‖², s.t. ∀n, t_n y_n ≥ 1. We can write this as: arg min_w Σ_{n=1}^N E_∞(t_n y_n − 1) + ‖w‖², where E_∞(z) = 0 if z ≥ 0, and ∞ otherwise.

SVM Loss function: The SVM for the separable case solved: arg min_w (1/2)‖w‖², s.t. ∀n, t_n y_n ≥ 1. The non-separable case relaxes this to: arg min_w Σ_{n=1}^N E_SV(t_n y_n − 1) + ‖w‖², where E_SV(t_n y_n − 1) = [1 − t_n y_n]_+ is the hinge loss, and [u]_+ = u if u ≥ 0, 0 otherwise.

Loss functions: For linear classifiers, compare the loss functions E(z) used for learning (black is the misclassification error). Simple linear classifier, squared error: (y_n − t_n)². Logistic regression, cross-entropy error: −t_n ln y_n. SVM, hinge loss: ξ_n = [1 − t_n y_n]_+. (Figure: the loss functions plotted against z, from −2 to 2.)
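
A small sketch (ours) that evaluates these losses, taking z = t_n y_n as the horizontal axis, which could be used to reproduce such a plot:

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 9)                    # z = t_n * y_n

misclassification = (z < 0).astype(float)        # 0/1 error
hinge = np.maximum(0.0, 1.0 - z)                 # SVM: [1 - z]_+
logistic = np.log1p(np.exp(-z)) / np.log(2.0)    # cross-entropy, rescaled by 1/ln 2 so it passes through (0, 1)
squared = (1.0 - z) ** 2                         # squared error (y_n - t_n)^2 with t_n in {-1, +1}

for name, loss in [("0/1", misclassification), ("hinge", hinge),
                   ("logistic", logistic), ("squared", squared)]:
    print(f"{name:9s}", np.round(loss, 2))
```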

Summary: Kernels: high-dimensional spaces are good for separation, bad for computation! Many algorithms can be re-written with only dot products of features: NN, perceptron, regression, PCA, SVMs. SVMs: the maximum margin criterion for deciding on the decision boundary, for linearly separable data; relaxed with slack variables for the non-separable case. Global optimization is possible: a convex problem (no local optima).