Perceptron Revisited: Linear Separators. Support Vector Machines

Support Vector Machines

Perceptron Revisited: Linear Separators
Binary classification can be viewed as the task of separating classes in feature space:
    w^T x + b > 0
    w^T x + b = 0
    w^T x + b < 0
The classifier is f(x) = sign(w^T x + b).

Linear Separators
Which of the linear separators is optimal?

Classification Margin
The distance from an example x_i to the separator is r = (w^T x_i + b) / ||w||. The examples closest to the hyperplane are the support vectors. The margin ρ of the separator is the distance between the support vectors of the two classes.
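As a quick illustration of these formulas, here is a minimal NumPy sketch (the weight vector, bias, and points are made up for the example) that computes the signed distance r = (w^T x + b)/||w|| and the predicted label sign(w^T x + b) for a few points.

```python
import numpy as np

# Minimal sketch: signed distance from points to the hyperplane w^T x + b = 0.
# The weight vector, bias, and points below are made up for illustration.
w = np.array([2.0, -1.0])
b = 0.5
X = np.array([[1.0, 1.0],
              [0.0, 3.0],
              [-1.0, 0.0]])

r = (X @ w + b) / np.linalg.norm(w)   # r_i = (w^T x_i + b) / ||w||
labels = np.sign(X @ w + b)           # f(x_i) = sign(w^T x_i + b)

print("signed distances:", r)
print("predicted labels:", labels)
```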

Maximum Margin Classification
Maximizing the margin is good according to intuition and PAC theory. It implies that only the support vectors matter; the other training examples are ignorable.

Linear SVM Mathematically
Let the training set {(x_i, y_i)}, i = 1..n, with x_i ∈ R^d and y_i ∈ {-1, 1}, be separated by a hyperplane with margin ρ. Then for each training example (x_i, y_i):
    w^T x_i + b ≤ -ρ/2   if y_i = -1
    w^T x_i + b ≥  ρ/2   if y_i =  1
which is equivalent to y_i(w^T x_i + b) ≥ ρ/2.
For every support vector x_s the above inequality is an equality. After rescaling w and b by ρ/2 in the equality, we obtain that the distance between each x_s and the hyperplane is
    r = y_s(w^T x_s + b) / ||w|| = 1 / ||w||.
Then the margin can be expressed through the (rescaled) w and b as:
    ρ = 2r = 2 / ||w||.

Linear SVMs Mathematically (cont.)
Then we can formulate the quadratic optimization problem:
    ρ = 2/||w|| is maximized and for all (x_i, y_i), i = 1..n: y_i(w^T x_i + b) ≥ 1
which can be reformulated as:
    Φ(w) = ||w||² = w^T w is minimized and for all (x_i, y_i), i = 1..n: y_i(w^T x_i + b) ≥ 1

Solving the Optimization Problem
    Φ(w) = w^T w is minimized and for all (x_i, y_i), i = 1..n: y_i(w^T x_i + b) ≥ 1
We need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems for which several (non-trivial) algorithms exist. The solution involves constructing a dual problem in which a Lagrange multiplier α_i is associated with every inequality constraint in the primal (original) problem:
    Find α_1 … α_n such that
    Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized and
    (1) Σ α_i y_i = 0
    (2) α_i ≥ 0 for all α_i
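To make the dual concrete, here is a hedged sketch that solves this maximization for a small made-up dataset with a generic constrained optimizer (SciPy's SLSQP). Real SVM solvers use specialized algorithms such as SMO; this only illustrates the form of the problem.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: solve the hard-margin dual max Q(alpha) for a tiny made-up dataset
# with a generic constrained optimizer (SLSQP); production solvers use e.g. SMO.
X = np.array([[2.0, 2.0], [2.0, 3.0], [0.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)

G = (y[:, None] * X) @ (y[:, None] * X).T        # G_ij = y_i y_j x_i^T x_j

def neg_Q(alpha):                                # SLSQP minimizes, so negate Q(alpha)
    return -(alpha.sum() - 0.5 * alpha @ G @ alpha)

res = minimize(neg_Q, np.zeros(n),
               constraints=[{"type": "eq", "fun": lambda a: a @ y}],  # (1) sum alpha_i y_i = 0
               bounds=[(0.0, None)] * n)                              # (2) alpha_i >= 0

alpha = res.x
print("alpha:", alpha.round(4))                  # non-zero entries mark support vectors
print("constraint sum(alpha_i y_i):", (alpha @ y).round(6))
```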

The Optimization Problem Solution
Given a solution α_1 … α_n to the dual problem, the solution to the primal is:
    w = Σ α_i y_i x_i
    b = y_k - Σ α_i y_i x_i^T x_k   for any α_k > 0
Each non-zero α_i indicates that the corresponding x_i is a support vector. Then the classifying function is (note that we don't need w explicitly):
    f(x) = Σ α_i y_i x_i^T x + b
Notice that it relies on an inner product between the test point x and the support vectors x_i; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products x_i^T x_j between all training points.

Soft Margin Classification
What if the training set is not linearly separable? Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples; the resulting margin is called soft.

Soft Margin Classification Mathematically
The old formulation:
    Φ(w) = w^T w is minimized and for all (x_i, y_i), i = 1..n: y_i(w^T x_i + b) ≥ 1
The modified formulation incorporates the slack variables:
    Φ(w) = w^T w + C Σ ξ_i is minimized and for all (x_i, y_i), i = 1..n: y_i(w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0
The parameter C can be viewed as a way to control overfitting: it trades off the relative importance of maximizing the margin and fitting the training data.

Soft Margin Classification Solution
The dual problem is identical to the separable case (it would not be identical if the 2-norm penalty for the slack variables, C Σ ξ_i², were used in the primal objective; then we would need additional Lagrange multipliers for the slack variables):
    Find α_1 … α_N such that
    Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized and
    (1) Σ α_i y_i = 0
    (2) 0 ≤ α_i ≤ C for all α_i
Again, the x_i with non-zero α_i will be support vectors. The solution to the dual problem is:
    w = Σ α_i y_i x_i
    b = y_k(1 - ξ_k) - Σ α_i y_i x_i^T x_k   for any k s.t. α_k > 0
Again, we don't need to compute w explicitly for classification:
    f(x) = Σ α_i y_i x_i^T x + b
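In practice the soft-margin problem is solved with an off-the-shelf library rather than by hand. The sketch below uses scikit-learn's SVC (one common implementation, not code from these slides) on a made-up dataset with an arbitrary choice of C, and prints the quantities named above: the support vectors, the dual coefficients α_i y_i, and the recovered w and b.

```python
import numpy as np
from sklearn.svm import SVC

# A short sketch using scikit-learn's SVC (one common SVM implementation, not the
# lecture's own code) to illustrate the soft-margin formulation. The toy data and
# the value C=1.0 are arbitrary choices for illustration.
X = np.array([[2.0, 2.0], [2.0, 3.0], [3.0, 3.0], [0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)      # minimizes w^T w + C * sum(xi_i)

print("support vectors:", clf.support_vectors_)  # the x_i with non-zero alpha_i
print("alpha_i * y_i  :", clf.dual_coef_)        # dual coefficients of the support vectors
print("w =", clf.coef_, " b =", clf.intercept_)  # w = sum_i alpha_i y_i x_i, and b
print("prediction for [1.5, 1.5]:", clf.predict([[1.5, 1.5]]))
```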

Theoretical Justification for Maximum Margins
Vapnik has proved the following: the class of optimal linear separators has VC dimension h bounded from above as
    h ≤ min( ⌈D²/ρ²⌉, m_0 ) + 1
where ρ is the margin, D is the diameter of the smallest sphere that can enclose all of the training examples, and m_0 is the dimensionality. Intuitively, this implies that regardless of the dimensionality m_0 we can minimize the VC dimension by maximizing the margin ρ. Thus, the complexity of the classifier is kept small regardless of dimensionality.

Linear SVMs: Overview
The classifier is a separating hyperplane. The most important training points are the support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points x_i are support vectors, i.e. have non-zero Lagrange multipliers α_i. Both in the dual formulation of the problem and in the solution, the training points appear only inside inner products:
    Find α_1 … α_N such that
    Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized and
    (1) Σ α_i y_i = 0
    (2) 0 ≤ α_i ≤ C for all α_i
    f(x) = Σ α_i y_i x_i^T x + b

Non-linear SVMs
Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space, e.g. x → (x, x²)?

Non-linear SVMs: Feature Spaces
General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable:
    Φ: x → φ(x)
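A hedged sketch of this idea (the 1-D dataset and the quadratic mapping φ(x) = (x, x²) are illustrative assumptions, not data from the slides): the negative class surrounds the positive class on the line, so no single threshold on x separates them, but after mapping to (x, x²) a linear SVM separates them perfectly.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the feature-space idea: a 1-D dataset where the negative class surrounds
# the positive class is not linearly separable in x, but becomes separable after the
# (assumed, illustrative) mapping phi(x) = (x, x^2).
x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([-1, -1, 1, 1, 1, -1, -1])

phi = np.column_stack([x, x**2])                 # map each point to (x, x^2)

clf = SVC(kernel="linear", C=1e6).fit(phi, y)    # large C approximates a hard margin
print("training accuracy in feature space:", clf.score(phi, y))  # 1.0: now separable
```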

The Kernel Trick
The linear classifier relies on an inner product between vectors, K(x_i, x_j) = x_i^T x_j. If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes K(x_i, x_j) = φ(x_i)^T φ(x_j). A kernel function is a function that is equivalent to an inner product in some feature space.
Example: 2-dimensional vectors x = [x_1, x_2]; let K(x_i, x_j) = (1 + x_i^T x_j)². We need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j):
    K(x_i, x_j) = (1 + x_i^T x_j)²
                = 1 + x_i1² x_j1² + 2 x_i1 x_j1 x_i2 x_j2 + x_i2² x_j2² + 2 x_i1 x_j1 + 2 x_i2 x_j2
                = [1, x_i1², √2 x_i1 x_i2, x_i2², √2 x_i1, √2 x_i2]^T [1, x_j1², √2 x_j1 x_j2, x_j2², √2 x_j1, √2 x_j2]
                = φ(x_i)^T φ(x_j),   where φ(x) = [1, x_1², √2 x_1 x_2, x_2², √2 x_1, √2 x_2].
Thus, a kernel function implicitly maps data to a high-dimensional space (without the need to compute each φ(x) explicitly).

What Functions are Kernels?
For some functions K(x_i, x_j), checking that K(x_i, x_j) = φ(x_i)^T φ(x_j) can be cumbersome.
Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:
    K = [ K(x_1,x_1)  K(x_1,x_2)  K(x_1,x_3)  …  K(x_1,x_n)
          K(x_2,x_1)  K(x_2,x_2)  K(x_2,x_3)  …  K(x_2,x_n)
          …
          K(x_n,x_1)  K(x_n,x_2)  K(x_n,x_3)  …  K(x_n,x_n) ]

Examples of Kernel Functions
Linear: K(x_i, x_j) = x_i^T x_j. The mapping Φ: x → φ(x) is x itself.
Polynomial of power p: K(x_i, x_j) = (1 + x_i^T x_j)^p. The mapping Φ: x → φ(x), where φ(x) has (d+p choose p) dimensions.
Gaussian (radial-basis function): K(x_i, x_j) = exp(-||x_i - x_j||² / (2σ²)). The mapping Φ: x → φ(x), where φ(x) is infinite-dimensional: every point is mapped to a function (a Gaussian); a combination of functions for the support vectors is the separator.
The higher-dimensional space still has intrinsic dimensionality d (the mapping is not onto), but linear separators in it correspond to non-linear separators in the original space.

Non-linear SVMs Mathematically
Dual problem formulation:
    Find α_1 … α_n such that
    Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j K(x_i, x_j) is maximized and
    (1) Σ α_i y_i = 0
    (2) α_i ≥ 0 for all α_i
The solution is:
    f(x) = Σ α_i y_i K(x_i, x) + b
The optimization techniques for finding the α_i remain the same!
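As a quick sanity check on the worked example above, the short sketch below (with arbitrary test vectors) verifies numerically that the polynomial kernel (1 + x_i^T x_j)² equals the inner product of the 6-dimensional mappings φ(x_i) and φ(x_j).

```python
import numpy as np

# Numerical check of the worked example: for 2-D vectors, the polynomial kernel
# K(a, b) = (1 + a^T b)^2 equals phi(a)^T phi(b) with the 6-dimensional mapping
# phi(x) = [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2].
def K(a, b):
    return (1.0 + a @ b) ** 2

def phi(x):
    x1, x2 = x
    return np.array([1.0, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

a = np.array([0.7, -1.2])   # arbitrary test vectors
b = np.array([2.0, 0.5])

print(K(a, b), phi(a) @ phi(b))            # the two values agree
assert np.isclose(K(a, b), phi(a) @ phi(b))
```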

SVM applications
SVMs were originally proposed by Boser, Guyon and Vapnik in 1992 and gained increasing popularity in the late 1990s. SVMs are currently among the best performers for a number of classification tasks ranging from text to genomic data. SVMs can be applied to complex data types beyond feature vectors (e.g. graphs, sequences, relational data) by designing kernel functions for such data. SVM techniques have been extended to a number of tasks such as regression [Vapnik et al. 97], principal component analysis [Schölkopf et al. 99], etc. The most popular optimization algorithms for SVMs use decomposition to hill-climb over a subset of the α_i at a time, e.g. SMO [Platt 99] and SVMlight [Joachims 99]. Tuning SVMs remains a black art: selecting a specific kernel and its parameters is usually done by trial and error.