CS798: Selected topics in Machine Learning


CS798: Selected topics in Machine Learning
Support Vector Machine
Jakramate Bootkrajang
Department of Computer Science, Chiang Mai University

Introduction
- One of the most widely used out-of-the-box discriminative classifiers
- Gives state-of-the-art classification performance
- Extends naturally to support non-linear classification tasks

Linear vs Non-linear classifier
- A linear classifier has the form $w^T x + b$
  - In words, the output is a linear combination of the features of x, weighted by w
  - b is the bias term
  - Can be thought of as a line separating the classes
- A non-linear classifier can be thought of as a curve separating the classes
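To make this concrete, below is a minimal sketch of such a linear decision rule (assuming NumPy; the weight vector, bias, and test points are made-up values, not taken from the lecture):

    import numpy as np

    # A hypothetical linear classifier: w and b are made-up values
    w = np.array([2.0, 2.0])
    b = -1.0

    def predict(x):
        # Linear decision rule: the sign of w^T x + b
        return np.sign(w @ x + b)

    print(predict(np.array([1.0, 1.0])))    # +1: positive side of the line
    print(predict(np.array([-1.0, -1.0])))  # -1: negative side of the line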

Linear vs Non-linear classifier (figure)

The problem with the linear machine
- It cannot solve linearly non-separable classification tasks such as the XOR problem

But wait!
- See what happens if we embed the four points of the XOR problem in a higher-dimensional space (i.e. 3D space)
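As a concrete illustration of this embedding, here is a minimal sketch (assuming NumPy; the particular 3D feature map and the hand-picked separating hyperplane are illustrative choices, not taken from the slides):

    import numpy as np

    # The four XOR points with labels; not linearly separable in the 2D input space
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, +1, +1, -1])

    # One possible embedding into 3D: phi(x) = (x1, x2, x1*x2)
    Z = np.column_stack([X, X[:, 0] * X[:, 1]])

    # A separating hyperplane in the 3D feature space, found by hand
    w = np.array([2.0, 2.0, -4.0])
    b = -1.0

    pred = np.sign(Z @ w + b)
    print(pred)                 # [-1.  1.  1. -1.]
    print(np.all(pred == y))    # True: XOR is linearly separable after the embedding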

Support Vector Machines (SVM)
Two key ideas:
- Assuming linearly separable classes, learn a separating hyperplane with maximum margin
- Expand the input into a high-dimensional space to deal with linearly non-separable cases (such as the XOR)

Visualising the decision hyperplane
- Recall a linear classifier $w^T x + b$, or more compactly $[w;\,1]^T [x;\,b]$
- For classification purposes, we want
  - $w^T x < 0$ for the negative class
  - $w^T x > 0$ for the positive class
- For example, with $w = [2\ \,2]$ we get $w^T x_{\text{violet}} > 0$ and $w^T x_{\text{orange}} < 0$ (points shown in the figure)

Visualising the decision hyperplane
- Moreover, observe that $w^T x_{\text{green}} = 0$
- The set of points where $w^T x = 0$ defines the decision boundary
- Geometrically, they are the points (vectors) that are perpendicular to w (the dot product is zero)

Distance of a point x from the decision hyperplane
- Write $x = x_p + r \frac{w}{\|w\|}$, where $x_p$ is the projection of x onto the hyperplane and $\frac{w}{\|w\|}$ is the unit vector along w
- Since $w^T x_p = 0$, we then have $w^T x = w^T x_p + r \frac{w^T w}{\|w\|} = r\|w\|$
- In other words, $r = \frac{w^T x}{\|w\|}$ (note that r is invariant to scaling of w)
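A quick numerical check of this formula (a sketch assuming NumPy; the hyperplane and the point are arbitrary examples):

    import numpy as np

    # Hyperplane through the origin defined by w (no separate bias term, as on the slide)
    w = np.array([2.0, 2.0])
    x = np.array([3.0, 1.0])

    # Signed distance of x from the hyperplane w^T x = 0
    r = (w @ x) / np.linalg.norm(w)
    print(r)                                        # about 2.83

    # r is invariant to rescaling of w
    print((10 * w) @ x / np.linalg.norm(10 * w))    # same value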

Many choices of separating hyperplane (figure)

SVM's goal = maximum margin
- According to a theorem from learning theory, from all possible linear decision functions the one that maximises the margin of the training set will minimise the generalisation error

Two types of margins
- Functional margin: $w^T x$
  - Can be increased without bound by multiplying w by a constant
- Geometric margin: $r = \frac{w^T x}{\|w\|}$
  - The one that we want to maximise
  - Subject to the constraint that training examples are classified correctly

Maximum margin (1/2)
- Since we can scale the functional margin, we can demand the functional margin of the nearest points to be +1 and -1 on the two sides of the decision boundary
- Denoting a nearest positive example by $x_+$ and a nearest negative example by $x_-$, we have
  $$w^T x_+ = +1, \qquad w^T x_- = -1$$

Maximum margin (2/2)
- We then compute the margin
  $$\text{margin} = \frac{1}{2}\left(\frac{w^T x_+}{\|w\|} - \frac{w^T x_-}{\|w\|}\right) = \frac{1}{2\|w\|}\left(w^T x_+ - w^T x_-\right) = \frac{1}{\|w\|}$$

Maximum margin: summing up
- Given a linearly separable training set $S = \{x_i, y_i\}_{i=1}^m$
- We need to find the w which maximises $\frac{1}{\|w\|}$
- Our quadratic programming problem with linear inequality constraints is then the following:
  minimise: $\|w\|^2$
  subject to: $w^T x_i \geq +1$ for $y_i = +1$ and $w^T x_i \leq -1$ for $y_i = -1$,
  or equivalently $y_i w^T x_i \geq 1$ for all i
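As an illustration, here is a minimal sketch of this primal QP solved with a general-purpose solver (assuming SciPy's SLSQP; the toy data, and absorbing the bias by appending a constant 1 to each input, are illustrative choices):

    import numpy as np
    from scipy.optimize import minimize

    # Toy linearly separable data, augmented with a constant 1 so the bias is part of w
    X = np.array([[2.0, 2.0, 1.0], [3.0, 3.0, 1.0], [-2.0, -1.0, 1.0], [-3.0, -2.0, 1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])

    # minimise ||w||^2  subject to  y_i w^T x_i >= 1 for all i
    objective = lambda w: w @ w
    constraints = [{"type": "ineq", "fun": lambda w, i=i: y[i] * (X[i] @ w) - 1.0}
                   for i in range(len(y))]

    res = minimize(objective, x0=np.zeros(X.shape[1]), constraints=constraints, method="SLSQP")
    w = res.x
    print(np.round(w, 3))               # the maximum-margin weight vector
    print(np.round(y * (X @ w), 3))     # all >= 1; equal to 1 for the support vectors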

Support vectors
- The training points that are nearest to the decision boundary are called support vectors
- Quiz: what is the output of our decision function for these points?

Solving the quadratic programming
- Construct and minimise the Lagrangian
  $$L(w, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^m \alpha_i \left(y_i w^T x_i - 1\right) \qquad (1)$$
  s.t. $\alpha_i \geq 0$ for all i
- The optimal w is found by taking the derivative of L w.r.t. w and setting it to zero:
  $$\frac{\partial L(w, \alpha)}{\partial w} = w - \sum_{i=1}^m \alpha_i y_i x_i = 0 \qquad (2)$$
- Note that w is expressed as a linear combination of the training points

Solving the quadratic programming
- The Karush-Kuhn-Tucker (KKT) condition for optimality requires that $\alpha_i \left(y_i w^T x_i - 1\right) = 0$
- The Lagrange multipliers $\alpha_i$ are called dual variables
- Each training point has an associated dual variable
- The condition implies that only support vectors will have non-zero $\alpha_i$, as their functional output is required to be exactly +1 or -1

Solving the quadratic programming
- It is possible to find the dual of the objective function (eq. 1)
- Dual problem: the new objective having the dual variables as its parameters
- Plugging eq. 2 into eq. 1, we obtain
  $$D(\alpha) = -\frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^m \alpha_i$$
  s.t. $\alpha_i \geq 0$ for all i
- Note that the data enters only in the form of dot products

Solving the quadratic programming
- We can use an optimisation package to solve the above dual problem for $\alpha$
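For illustration, a minimal sketch of solving this dual numerically (assuming SciPy; the toy data and the bias-absorbed-into-w convention are the same illustrative choices as in the earlier primal sketch):

    import numpy as np
    from scipy.optimize import minimize

    # Toy linearly separable data, augmented with a constant 1 so the bias is part of w
    X = np.array([[2.0, 2.0, 1.0], [3.0, 3.0, 1.0], [-2.0, -1.0, 1.0], [-3.0, -2.0, 1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    m = len(y)

    # Q_ij = y_i y_j x_i^T x_j, the matrix appearing in the dual objective
    Q = (y[:, None] * X) @ (y[:, None] * X).T

    # Maximise D(alpha) by minimising its negative, subject to alpha_i >= 0
    neg_dual = lambda a: 0.5 * a @ Q @ a - a.sum()
    neg_dual_grad = lambda a: Q @ a - np.ones(m)

    res = minimize(neg_dual, x0=np.zeros(m), jac=neg_dual_grad,
                   bounds=[(0, None)] * m, method="L-BFGS-B")
    alpha = res.x

    # Recover w from eq. (2): w = sum_i alpha_i y_i x_i
    w = (alpha * y) @ X
    print(np.round(alpha, 3))           # non-zero only for the support vectors
    print(np.round(y * (X @ w), 3))     # about 1 for the support vectors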

Important properties of SVMs
- Sparse: only support vectors are important
- Data enters in the form of dot products: can readily be combined with kernel methods
- Dual objective is convex: making life easier

The learned SVM (figure)

Classifying new data points
- Once the parameter $\alpha$ is found by solving the quadratic optimisation on the training set of points, we can use it to classify new, unseen points
- Given a new point $x_q$, its class membership is
  $$\text{sign}(w^T x_q) = \text{sign}\left(\sum_{i=1}^m \alpha_i^* y_i x_i^T x_q\right) = \text{sign}\left(\sum_{i \in SV} \alpha_i^* y_i x_i^T x_q\right)$$
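A small sketch of this decision rule (assuming NumPy, and that alpha, y, X come from a dual solution such as the one sketched above; the tolerance is an arbitrary choice):

    import numpy as np

    def svm_predict(x_q, alpha, y, X, tol=1e-8):
        # Only support vectors (non-zero alpha) contribute to the decision
        sv = alpha > tol
        return np.sign(np.sum(alpha[sv] * y[sv] * (X[sv] @ x_q)))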

Non-linear classifiers
- What to do when the data is not linearly separable?
- First approach
  - Use a non-linear model, e.g. a neural network or NDA with full covariance
  - (problems: many parameters, local minima, experience needed to train)
- Second approach
  - Transform the data into a richer feature space (including high-dimensional/non-linear features), then use a linear classifier

Learning in the feature space
- Map the data into a feature space where it is linearly separable

Non-linear SVMs
- Recall that the linear SVM depends only on the dot products $x_i^T x_j$:
  $$D(\alpha) = -\frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^m \alpha_i, \quad \text{s.t. } \alpha_i \geq 0 \text{ for all } i$$
- After the mapping, the non-linear algorithm will depend only on $\phi(x_i)^T \phi(x_j)$:
  $$D(\alpha) = -\frac{1}{2}\sum_{i,j=1}^m \alpha_i \alpha_j y_i y_j \phi(x_i)^T \phi(x_j) + \sum_{i=1}^m \alpha_i, \quad \text{s.t. } \alpha_i \geq 0 \text{ for all } i$$
- The dot product $\phi(x_i)^T \phi(x_j)$ is known as the kernel function

Kernels
- A kernel is a function that gives the dot product between vectors in the feature space induced by the mapping $\phi$:
  $$K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$$
- In matrix form, over all the data, the matrix of kernel values is also called the Gram matrix:
  $$K = \begin{bmatrix}
  K(x_1, x_1) & \cdots & K(x_1, x_j) & \cdots & K(x_1, x_m) \\
  \vdots      &        & \vdots      &        & \vdots      \\
  K(x_i, x_1) & \cdots & K(x_i, x_j) & \cdots & K(x_i, x_m) \\
  \vdots      &        & \vdots      &        & \vdots      \\
  K(x_m, x_1) & \cdots & K(x_m, x_j) & \cdots & K(x_m, x_m)
  \end{bmatrix}$$
- The Gram matrix K is positive semi-definite, i.e. $\alpha^T K \alpha \geq 0$ for all $\alpha \in \mathbb{R}^m$
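A quick numerical check of this property (a sketch assuming NumPy; the data and the choice of a linear kernel are arbitrary):

    import numpy as np

    # A Gram matrix built from a valid kernel (here the linear kernel, K = X X^T)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    K = X @ X.T

    # Positive semi-definiteness: all eigenvalues >= 0 (up to numerical tolerance)
    eigvals = np.linalg.eigvalsh(K)
    print(np.all(eigvals >= -1e-10))        # True

    # Equivalently, alpha^T K alpha >= 0 for any alpha
    alpha = rng.normal(size=5)
    print(alpha @ K @ alpha >= -1e-10)      # True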

Kernels: Polynomial kernel
- Example: mapping points x, y in the 2D input space to a 3D feature space
  $$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$$
- The corresponding mapping $\phi$ is
  $$\phi(x) = \begin{bmatrix} x_1^2 \\ \sqrt{2}\, x_1 x_2 \\ x_2^2 \end{bmatrix}, \qquad \phi(y) = \begin{bmatrix} y_1^2 \\ \sqrt{2}\, y_1 y_2 \\ y_2^2 \end{bmatrix}$$
- So we get a kernel $K(x, y) = \phi(x)^T \phi(y) = (x^T y)^2$, a polynomial kernel of degree 2
- Using the kernel trick, we may not even need to know the mapping $\phi$
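A tiny numerical check that the explicit mapping and the kernel agree (assuming NumPy; the test points are arbitrary):

    import numpy as np

    def phi(v):
        # Explicit 3D feature map from the slide: (v1^2, sqrt(2)*v1*v2, v2^2)
        return np.array([v[0]**2, np.sqrt(2) * v[0] * v[1], v[1]**2])

    x = np.array([1.0, 2.0])
    y = np.array([3.0, -1.0])

    print(phi(x) @ phi(y))    # dot product in the feature space
    print((x @ y) ** 2)       # degree-2 polynomial kernel in the input space; same value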

Kernels: Gaussian kernel
- Defined as
  $$K(x, y) = e^{-\|x - y\|^2 / \sigma}$$
- This comes from the Taylor expansion
  $$e^a = 1 + a + \cdots + \frac{1}{k!} a^k + \cdots$$
- Letting $a = x^T y$, one can see that $e^{x^T y}$ is a kernel with an infinite-dimensional feature space
- Normalising $e^{x^T y}$ with $\sigma$ and dividing by $e^{\|x\|^2}$ and $e^{\|y\|^2}$ gives the Gaussian kernel

Making kernels
- New kernels can be made from valid kernels as long as the resulting Gram matrix is positive semi-definite
- The following operations are allowed:
  - $K(x, y) = K_1(x, y) + K_2(x, y)$ (addition)
  - $K(x, y) = \lambda K_1(x, y)$ (scaling, $\lambda \geq 0$)
  - $K(x, y) = K_1(x, y)\, K_2(x, y)$ (multiplication)
- There is a theorem called Mercer's theorem that characterises valid kernels
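A quick sanity check of these closure rules on Gram matrices (a sketch assuming NumPy; the data and the two base kernels are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(6, 2))

    # Two valid Gram matrices on the same data: linear and degree-2 polynomial kernels
    K1 = X @ X.T
    K2 = (X @ X.T) ** 2

    def is_psd(K, tol=1e-10):
        return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

    print(is_psd(K1 + K2))     # addition
    print(is_psd(0.5 * K1))    # scaling by a non-negative constant
    print(is_psd(K1 * K2))     # element-wise multiplication (Schur product)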

Many other kernels
- Linear kernel: $K(x, y) = x^T y + c$
- Exponential kernel: $K(x, y) = \exp\left(-\frac{\|x - y\|}{2\sigma^2}\right)$
- Sigmoid kernel: $K(x, y) = \tanh(\alpha x^T y + c)$
- Histogram intersection kernel: $K(x, y) = \sum_{i=1}^n \min(x_i, y_i)$
- Cauchy kernel: $K(x, y) = \frac{1}{1 + \|x - y\|^2 / \sigma^2}$
- Discrete structure kernels: string kernel, tree kernel, graph kernel
- And many more
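For reference, minimal sketches of a few of the listed kernels as plain functions (assuming NumPy; the default parameter values are arbitrary):

    import numpy as np

    def linear_kernel(x, y, c=0.0):
        return x @ y + c

    def sigmoid_kernel(x, y, alpha=1.0, c=0.0):
        return np.tanh(alpha * (x @ y) + c)

    def histogram_intersection_kernel(x, y):
        return np.sum(np.minimum(x, y))

    def cauchy_kernel(x, y, sigma=1.0):
        return 1.0 / (1.0 + np.sum((x - y) ** 2) / sigma**2)

    x, y = np.array([1.0, 2.0, 0.5]), np.array([0.0, 1.5, 1.0])
    print(linear_kernel(x, y), sigmoid_kernel(x, y),
          histogram_intersection_kernel(x, y), cauchy_kernel(x, y))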

Bad kernel
- The Gram matrix encodes similarities between input data points
- A bad kernel would be a kernel function which gives a nearly diagonal Gram matrix, e.g.
  $$\begin{bmatrix}
  1 & 0 & \cdots & 0 \\
  0 & 1 & \cdots & 0 \\
  \vdots & & \ddots & \vdots \\
  0 & 0 & \cdots & 1
  \end{bmatrix}$$
- No clusters, no structure

SVMs and Kernels
- Prepare the data matrix
- Select the kernel function to use and compute the Gram matrix
- Execute the training algorithm, using a QP solver, to obtain the $\alpha_i$ values
- Unseen data can be classified using the $\alpha_i$ values and the support vectors:
  $$f(x) = \text{sign}\left(\sum_{i=1}^{|SV|} \alpha_i y_i K(x_i, x)\right)$$
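A minimal end-to-end sketch of this recipe (assuming scikit-learn is available; SVC with a precomputed Gram matrix and the rbf_kernel helper are used here, and the toy data and parameter values are made up):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    # 1. Prepare the data matrix (toy two-class 2D data)
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(loc=-2, size=(20, 2)), rng.normal(loc=+2, size=(20, 2))])
    y_train = np.array([-1] * 20 + [+1] * 20)
    X_test = np.array([[-2.0, -1.5], [2.5, 1.0]])

    # 2. Select a kernel and compute the Gram matrix
    K_train = rbf_kernel(X_train, X_train, gamma=0.5)

    # 3. Train with a QP-based solver on the precomputed Gram matrix
    clf = SVC(kernel="precomputed", C=10.0)
    clf.fit(K_train, y_train)

    # 4. Classify unseen points via their kernel values against the training points
    K_test = rbf_kernel(X_test, X_train, gamma=0.5)
    print(clf.predict(K_test))      # expected: [-1  1]
    print(len(clf.support_))        # number of support vectors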

Applications
- Handwritten digit recognition
  - Dataset: US Postal Service
  - 4% error was obtained; only about 4% of the training data were SVs
- Text categorisation
- Face detection
- DNA analysis
- And many more

Summary
- SVMs learn linear decision boundaries (discriminative approach)
- They pick the hyperplane that maximises the (geometric) margin
- The optimal hyperplane turns out to be a linear combination of the support vectors
- Non-linear problems are transformed into a higher-dimensional space using kernel functions, in the hope that the classes become linearly separable in the transformed space