
Linear Classification and SVM Dr. Xin Zhang Email: eexinzhang@scut.edu.cn

What is linear classification? Classification is intrinsically non-linear: it puts non-identical things in the same class, so a difference in the input vector sometimes causes zero change in the answer. Linear classification means that the part that adapts is linear. The adaptive part is followed by a fixed non-linearity, and it may also be preceded by a fixed non-linearity (e.g. non-linear basis functions): y(x) = wᵀx + w₀ (adaptive linear function), Decision = f(y(x)) (fixed non-linear function).
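To make the two stages concrete, here is a minimal NumPy sketch of an adaptive linear score followed by a fixed threshold non-linearity; the weights and inputs are made up purely for illustration.

```python
import numpy as np

def linear_score(x, w, w0):
    """Adaptive linear part: y(x) = w^T x + w0."""
    return w @ x + w0

def decide(x, w, w0):
    """Fixed non-linearity: threshold the linear score at zero."""
    return 1 if linear_score(x, w, w0) >= 0 else -1

# hypothetical weights and inputs, just to show the two stages
w, w0 = np.array([2.0, -1.0]), 0.5
print(decide(np.array([1.0, 0.5]), w, w0))   # +1
print(decide(np.array([-1.0, 3.0]), w, w0))  # -1
```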

Representing the target values for classification. If there are only two classes, we typically use a single real-valued output that has target values of 1 for the positive class and 0 (or sometimes -1) for the other class. For probabilistic class labels the target value can then be the probability of the positive class, and the output of the model can also represent the probability the model gives to the positive class. If there are N classes we often use a vector of N target values containing a single 1 for the correct class and zeros elsewhere. For probabilistic labels we can then use a vector of class probabilities as the target vector.

Three approaches to classification. (1) Use discriminant functions directly, without probabilities: convert the input vector into one or more real values so that a simple operation (like thresholding) can be applied to get the class. The real values should be chosen to maximize the usable information about the class label that is in the real value. (2) Infer conditional class probabilities p(class = Cₖ | x): compute the conditional probability of each class, then make a decision that minimizes some loss function. (3) Compare the probability of the input under separate, class-specific generative models, e.g. fit a multivariate Gaussian to the input vectors of each class and see which Gaussian makes a test data vector most probable.

The planar decision surface in data-space for the simple linear discriminant function: the plane where wᵀx + w₀ = 0.

Discriminant functions for N>2 classes. One possibility is to use N two-way discriminant functions; each function discriminates one class from the rest. Another possibility is to use N(N-1)/2 two-way discriminant functions; each function discriminates between two particular classes. Both these methods have problems.

Problems with multi-class discriminant functions: some regions of input space get more than one good answer, and two-way preferences need not be transitive!

A simple solution: use N discriminant functions y_i(x), y_j(x), y_k(x), ... and pick the max. This is guaranteed to give consistent and convex decision regions if y is linear: y_k(x_A) > y_j(x_A) and y_k(x_B) > y_j(x_B) imply, for 0 ≤ α ≤ 1, that y_k(αx_A + (1−α)x_B) > y_j(αx_A + (1−α)x_B).
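A brief sketch of this "pick the max" rule over N linear discriminants; the weight matrix and bias here are arbitrary values chosen only to illustrate the mechanism.

```python
import numpy as np

def classify_max(x, W, b):
    """Evaluate N linear discriminants y_k(x) = w_k^T x + b_k and pick the max."""
    scores = W @ x + b        # shape (N,)
    return int(np.argmax(scores))

# three classes in 2-D, arbitrary parameters
W = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.zeros(3)
print(classify_max(np.array([2.0, 0.5]), W, b))  # class 0 wins for this input
```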

Using least squares for classification. This is not the right thing to do and it doesn't work as well as better methods, but it is easy: it reduces classification to least squares regression. We already know how to do regression, and we can just solve for the optimal weights with some matrix algebra. We use targets that are equal to the conditional probability of the class given the input. When there are more than two classes, we treat each class as a separate problem (we cannot get away with this if we use the "max" decision function).
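As a sketch of this (not recommended) approach: treat each class's one-of-N target as a regression problem and solve the normal equations for all classes at once. The data below is synthetic, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# two synthetic 2-D classes
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([4, 4], 1, (50, 2))])
T = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])  # one-hot targets

Xb = np.hstack([np.ones((100, 1)), X])          # add a bias column
W, *_ = np.linalg.lstsq(Xb, T, rcond=None)      # least-squares weights, one column per class

def predict(x):
    scores = np.concatenate([[1.0], x]) @ W
    return int(np.argmax(scores))

print(predict(np.array([0.5, 0.2])), predict(np.array([3.8, 4.1])))  # 0 1
```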

Problems with using least squares for classification (figure comparing logistic regression with least squares regression): if the right answer is 1 and the model says 1.5, it loses, so it changes the boundary to avoid being "too correct".

Another example where least squares regression gives poor decision surfaces

Fisher's linear discriminant. A simple linear discriminant function is a projection of the data down to 1-D, so choose the projection that gives the best separation of the classes. What do we mean by best separation? An obvious direction to choose is the direction of the line joining the class means. But if the main direction of variance in each class is not orthogonal to this line, this will not give good separation (see the next figure). Fisher's method chooses the direction that maximizes the ratio of between-class variance to within-class variance. This is the direction in which the projected points contain the most information about class membership (under Gaussian assumptions).

A picture showing the advantage of Fisher s linear discriminant. When projected onto the line joining the class means, the classes are not well separated. Fisher chooses a direction that makes the projected classes much tighter, even though their projected means are less far apart.

Math of Fisher's linear discriminants. What linear transformation is best for discrimination? The projection onto the vector separating the class means seems sensible: y = wᵀx with w ∝ m₂ − m₁. But we also want small variance within each class: s₁² = Σ_{n∈C₁}(yₙ − m₁)², s₂² = Σ_{n∈C₂}(yₙ − m₂)², where mₖ and sₖ² are the mean and variance of the projected points of class k. Fisher's objective function is the ratio of between-class to within-class variance: J(w) = (m₂ − m₁)² / (s₁² + s₂²).

More math of Fisher's linear discriminants. In terms of the original input vectors, J(w) = (wᵀS_B w) / (wᵀS_W w), where the between-class scatter matrix is S_B = (m₂ − m₁)(m₂ − m₁)ᵀ and the within-class scatter matrix is S_W = Σ_{n∈C₁}(xₙ − m₁)(xₙ − m₁)ᵀ + Σ_{n∈C₂}(xₙ − m₂)(xₙ − m₂)ᵀ. Optimal solution: w ∝ S_W⁻¹(m₂ − m₁).
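A short NumPy sketch of this closed-form solution on synthetic two-class data; the means and covariances are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.multivariate_normal([0, 0], [[3, 2], [2, 2]], 100)   # class 1
X2 = rng.multivariate_normal([3, 1], [[3, 2], [2, 2]], 100)   # class 2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# within-class scatter matrix S_W
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
# Fisher direction: w proportional to S_W^{-1} (m2 - m1)
w = np.linalg.solve(S_W, m2 - m1)
w /= np.linalg.norm(w)
print("Fisher direction:", w)
```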

Perceptrons. "Perceptron" describes a whole family of learning machines, but the standard type consisted of a layer of fixed non-linear basis functions followed by a simple linear discriminant function. They were introduced in the late 1950s and they had a simple online learning procedure. Grand claims were made about their abilities, which led to lots of controversy. Researchers in symbolic AI emphasized their limitations (as part of an ideological campaign against real numbers, probabilities, and learning). Support Vector Machines are just perceptrons with a clever way of choosing the non-adaptive, non-linear basis functions and a better learning procedure. They have all the same limitations as perceptrons in what types of function they can learn, but people seem to have forgotten this.

Linear Classifiers (input x, classifier f, estimated label y_est; points are labelled +1 or -1): f(x, w, b) = sign(w·x + b). The line w·x + b = 0 separates the region where w·x + b > 0 from the region where w·x + b < 0. How would you classify this data?

A series of slides shows several different separating lines drawn through the same data, each asking the same question: how would you classify this data? Any of these would be fine... but which is best? One poorly placed line even leaves points misclassified into the +1 class.

Classifier Margin. With f(x, w, b) = sign(w·x + b), define the margin of a linear classifier as the width by which the boundary could be increased before hitting a datapoint.

Maximum Margin. The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM). Support vectors are the datapoints that the margin pushes up against. 1. Maximizing the margin is good according to intuition and PAC theory. 2. It implies that only the support vectors are important; other training examples are ignorable. 3. Empirically it works very, very well.

Linear SVM Mathematically. The plane w·x + b = +1 borders the "Predict Class = +1" zone and the plane w·x + b = -1 borders the "Predict Class = -1" zone, with the decision boundary w·x + b = 0 in between; M is the margin width. What we know: w·x⁺ + b = +1 and w·x⁻ + b = -1, so w·(x⁺ − x⁻) = 2, and the margin width is M = (x⁺ − x⁻)·w/‖w‖ = 2/‖w‖.

Linear SVM Mathematically. Goal: 1) correctly classify all training data: w·xᵢ + b ≥ +1 if yᵢ = +1 and w·xᵢ + b ≤ -1 if yᵢ = -1, i.e. yᵢ(w·xᵢ + b) ≥ 1 for all i; 2) maximize the margin M = 2/‖w‖, which is the same as minimizing ½wᵀw. We can formulate a quadratic optimization problem and solve for w and b: minimize Φ(w) = ½wᵀw subject to yᵢ(w·xᵢ + b) ≥ 1 for all i.
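This primal problem can be handed directly to a generic QP solver. Below is a minimal sketch using the cvxpy modelling library (not part of the slides) on a tiny, made-up linearly separable toy set, just to show that the formulation above is literally what gets solved.

```python
import numpy as np
import cvxpy as cp

# tiny linearly separable toy data (hypothetical)
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.5],
              [-1.0, -1.0], [-2.0, -1.0], [-1.5, -2.5]])
y = np.array([1, 1, 1, -1, -1, -1])

w = cp.Variable(2)
b = cp.Variable()
# minimize (1/2) w^T w  subject to  y_i (w.x_i + b) >= 1
objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + b) >= 1]
cp.Problem(objective, constraints).solve()

print("w =", w.value, "b =", b.value, "margin =", 2 / np.linalg.norm(w.value))
```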

Solving the Optimization Problem. Find w and b such that Φ(w) = ½wᵀw is minimized and, for all {(xᵢ, yᵢ)}, yᵢ(wᵀxᵢ + b) ≥ 1. We need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them. The solution involves constructing a dual problem in which a Lagrange multiplier αᵢ is associated with every constraint in the primal problem: find α₁ … α_N such that Q(α) = Σαᵢ − ½ΣΣ αᵢαⱼyᵢyⱼxᵢᵀxⱼ is maximized and (1) Σαᵢyᵢ = 0, (2) αᵢ ≥ 0 for all αᵢ.

The Optimization Problem Solution. The solution has the form w = Σαᵢyᵢxᵢ and b = yₖ − wᵀxₖ for any xₖ such that αₖ ≠ 0. Each non-zero αᵢ indicates that the corresponding xᵢ is a support vector. The classifying function then has the form f(x) = Σαᵢyᵢxᵢᵀx + b. Notice that it relies on an inner product between the test point x and the support vectors xᵢ; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products xᵢᵀxⱼ between all pairs of training points.
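In practice the dual is rarely solved by hand. A sketch with scikit-learn's SVC (again, a tool not mentioned in the slides) shows how a fitted model exposes exactly the quantities above: the support vectors, the products αᵢyᵢ, and b.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.5],
              [-1.0, -1.0], [-2.0, -1.0], [-1.5, -2.5]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)   # a very large C approximates the hard margin
clf.fit(X, y)

alphas_times_y = clf.dual_coef_[0]  # holds alpha_i * y_i for the support vectors
sv = clf.support_vectors_
w = alphas_times_y @ sv             # w = sum_i alpha_i y_i x_i
b = clf.intercept_[0]
print("support vectors:\n", sv)
print("w =", w, "b =", b)
print("f(x) at [1, 1]:", w @ np.array([1.0, 1.0]) + b)
```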

Dataset with noise. Hard margin: so far we require all data points to be classified correctly (no training error). What if the training set is noisy? Solution 1: use very powerful kernels, which leads to OVERFITTING!

Soft Margin Classification. Slack variables ξᵢ can be added to allow misclassification of difficult or noisy examples. What should our quadratic optimization criterion be? Minimize ½ w·w + C Σₖ ξₖ, where the sum of slacks runs over the R training points.

Hard Margin vs. Soft Margin. The old formulation: find w and b such that Φ(w) = ½wᵀw is minimized and, for all {(xᵢ, yᵢ)}, yᵢ(wᵀxᵢ + b) ≥ 1. The new formulation incorporating slack variables: find w and b such that Φ(w) = ½wᵀw + CΣξᵢ is minimized and, for all {(xᵢ, yᵢ)}, yᵢ(wᵀxᵢ + b) ≥ 1 − ξᵢ and ξᵢ ≥ 0 for all i. The parameter C can be viewed as a way to control overfitting.

Linear SVMs: Overview. The classifier is a separating hyperplane. The most important training points are the support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points xᵢ are support vectors with non-zero Lagrange multipliers αᵢ. Both in the dual formulation of the problem and in the solution, training points appear only inside dot products: find α₁ … α_N such that Q(α) = Σαᵢ − ½ΣΣ αᵢαⱼyᵢyⱼxᵢᵀxⱼ is maximized and (1) Σαᵢyᵢ = 0, (2) 0 ≤ αᵢ ≤ C for all αᵢ; the classifier is f(x) = Σαᵢyᵢxᵢᵀx + b.

Non-linear SVMs. Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space, e.g. from x to (x, x²)?

Non-linear SVMs: Feature spaces. General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x).

The Kernel Trick. The linear classifier relies on the dot product between vectors, K(xᵢ, xⱼ) = xᵢᵀxⱼ. If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ). A kernel function is a function that corresponds to an inner product in some expanded feature space. Example: for 2-dimensional vectors x = [x₁ x₂], let K(xᵢ, xⱼ) = (1 + xᵢᵀxⱼ)². We need to show that K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ): K(xᵢ, xⱼ) = (1 + xᵢᵀxⱼ)² = 1 + xᵢ₁²xⱼ₁² + 2xᵢ₁xⱼ₁xᵢ₂xⱼ₂ + xᵢ₂²xⱼ₂² + 2xᵢ₁xⱼ₁ + 2xᵢ₂xⱼ₂ = [1, xᵢ₁², √2 xᵢ₁xᵢ₂, xᵢ₂², √2 xᵢ₁, √2 xᵢ₂]ᵀ[1, xⱼ₁², √2 xⱼ₁xⱼ₂, xⱼ₂², √2 xⱼ₁, √2 xⱼ₂] = φ(xᵢ)ᵀφ(xⱼ), where φ(x) = [1, x₁², √2 x₁x₂, x₂², √2 x₁, √2 x₂].
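A quick numerical check of this identity, with two arbitrary 2-D vectors chosen just for the test:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the kernel (1 + x.z)^2 in 2-D."""
    x1, x2 = x
    return np.array([1, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

def K(x, z):
    return (1 + x @ z) ** 2

x, z = np.array([0.7, -1.2]), np.array([2.0, 0.5])
print(K(x, z), phi(x) @ phi(z))   # both print 3.24: the kernel equals the explicit inner product
```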

What Functions are Kernels? For some functions K(xᵢ, xⱼ), checking that K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ) can be cumbersome. Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix K, the N×N matrix whose (i, j) entry is K(xᵢ, xⱼ) for i, j = 1 … N.
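One practical sanity check on a sample of data is to build the Gram matrix and look at its eigenvalues, which should all be non-negative up to round-off. A sketch on random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))

# Gram matrix for the Gaussian kernel on this sample
sigma = 1.0
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / (2 * sigma**2))

eigvals = np.linalg.eigvalsh(K)   # symmetric matrix, real eigenvalues
print("smallest eigenvalue:", eigvals.min())  # >= 0 up to numerical error
```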

Examples of Kernel Functions. Linear: K(xᵢ, xⱼ) = xᵢᵀxⱼ. Polynomial of power p: K(xᵢ, xⱼ) = (1 + xᵢᵀxⱼ)ᵖ. Gaussian (radial-basis function network): K(xᵢ, xⱼ) = exp(−‖xᵢ − xⱼ‖² / (2σ²)). Sigmoid: K(xᵢ, xⱼ) = tanh(β₀ xᵢᵀxⱼ + β₁).
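Each of these is a one-line function; here is a sketch of all four, with arbitrary parameter values for illustration.

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def poly_kernel(x, z, p=3):
    return (1 + x @ z) ** p

def rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma**2))

def sigmoid_kernel(x, z, beta0=0.1, beta1=-1.0):
    return np.tanh(beta0 * (x @ z) + beta1)

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(x, z), poly_kernel(x, z), rbf_kernel(x, z), sigmoid_kernel(x, z))
```

Note that the sigmoid kernel satisfies Mercer's condition only for some choices of β₀ and β₁.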

Non-linear SVMs Mathematically. Dual problem formulation: find α₁ … α_N such that Q(α) = Σαᵢ − ½ΣΣ αᵢαⱼyᵢyⱼK(xᵢ, xⱼ) is maximized and (1) Σαᵢyᵢ = 0, (2) αᵢ ≥ 0 for all αᵢ. The solution is f(x) = Σαᵢyᵢ K(xᵢ, x) + b. The optimization techniques for finding the αᵢ's remain the same!

Nonlinear SVM: Overview. The SVM locates a separating hyperplane in the feature space and classifies points in that space. It does not need to represent the space explicitly; it simply defines a kernel function, which plays the role of the dot product in the feature space.

Properties of SVM. Flexibility in choosing a similarity function. Sparseness of the solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane. Ability to handle large feature spaces: the complexity does not depend on the dimensionality of the feature space. Overfitting can be controlled by the soft margin approach. Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution. Feature selection.

SVM Applications. SVM has been used successfully in many real-world problems: text (and hypertext) categorization, image classification, bioinformatics (protein classification, cancer classification), and hand-written character recognition.

Application 1: Cancer Classification. High dimensional (p > 1000 genes) with few samples (n < 100 patients); imbalanced (fewer positive samples); the kernel matrix K[x, x′] = k(x, x′) is N × N, with N the number of patients; many irrelevant features; noisy. The data form a genes-by-patients table (genes g-1 … g-p by patients p-1 … p-n). FEATURE SELECTION: in the linear case, wᵢ² gives the ranking of dimension i. SVM is sensitive to noisy (mis-labeled) data.
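A sketch of the wᵢ² ranking with a linear SVM from scikit-learn, on random stand-in data (real microarray data would have thousands of genes and far fewer labelled patients):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, p = 60, 200                       # few samples, many features
X = rng.normal(size=(n, p))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # only the first two "genes" matter here

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_[0]                     # one weight per feature in the linear case
ranking = np.argsort(w**2)[::-1]     # rank dimensions by w_i^2
print("top 5 features by w_i^2:", ranking[:5])
```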

Weakness of SVM. It is sensitive to noise: a relatively small number of mislabeled examples can dramatically decrease the performance. It only considers two classes, so how do we do multi-class classification with SVM? Answer: 1) with output arity m, learn m SVMs: SVM 1 learns "Output==1" vs "Output!=1", SVM 2 learns "Output==2" vs "Output!=2", ..., SVM m learns "Output==m" vs "Output!=m". 2) To predict the output for a new input, just predict with each SVM and find out which one puts the prediction the furthest into the positive region.
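A sketch of this one-vs-rest recipe with scikit-learn's SVC, using the iris data as a stand-in multi-class problem (scikit-learn also offers this directly via OneVsRestClassifier):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)    # 3 classes
classes = np.unique(y)

# train one binary SVM per class: "class k" vs "not class k"
machines = [SVC(kernel="linear").fit(X, np.where(y == k, 1, -1)) for k in classes]

def predict(x):
    # pick the SVM whose decision value is furthest into the positive region
    scores = [clf.decision_function(x.reshape(1, -1))[0] for clf in machines]
    return classes[int(np.argmax(scores))]

print(predict(X[0]), "true label:", y[0])
```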

Application 2: Text Categorization. Task: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content, e.g. email filtering, web searching, sorting documents by topic, etc. A document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category.

Representation of Text. IR's vector space model (aka bag-of-words representation): a document is represented by a vector indexed by a pre-fixed set or dictionary of terms, and the value of an entry can be binary or a weight. Preprocessing includes normalization, stop words, and word stems. Doc x ⇒ φ(x).
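A sketch of the Doc x ⇒ φ(x) mapping using scikit-learn's CountVectorizer (an assumed tool, not named in the slides); stop-word removal is shown, while stemming would need an extra library.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "support vector machines classify documents",
    "documents are represented as sparse word count vectors",
]

vectorizer = CountVectorizer(stop_words="english")  # builds the term dictionary
Phi = vectorizer.fit_transform(docs)                # sparse document-by-term count matrix

print(vectorizer.get_feature_names_out())
print(Phi.toarray())
```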

Text Categorization using SVM. The similarity between two documents is measured by the inner product φ(x)·φ(z); K(x, z) = φ(x)·φ(z) is a valid kernel, so SVM can be used with K(x, z) for discrimination. Why SVM? High-dimensional input space; few irrelevant features (dense concept); sparse document vectors (sparse instances); text categorization problems are linearly separable.

Some Issues. Choice of kernel: the Gaussian or polynomial kernel is the default; if it is ineffective, more elaborate kernels are needed; domain experts can give assistance in formulating appropriate similarity measures. Choice of kernel parameters: e.g. σ in the Gaussian kernel, roughly the distance between the closest points with different classifications; in the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters. Optimization criterion, hard margin vs. soft margin: often a lengthy series of experiments in which various parameters are tested.
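A sketch of the cross-validation approach using scikit-learn's GridSearchCV over C and the Gaussian kernel width (parameterised there as gamma = 1/(2σ²)); the dataset and grid values are illustrative choices.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "C": [0.1, 1, 10, 100],             # soft-margin trade-off
    "gamma": [1e-4, 1e-3, 1e-2, 1e-1],  # Gaussian kernel width, gamma = 1/(2*sigma^2)
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```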

Additional Resources. An excellent tutorial on VC-dimension and Support Vector Machines: C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, 2(2):121-167, 1998. The VC/SRM/SVM bible: Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience, 1998. http://www.kernel-machines.org/

References. Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Sugnet, Manuel Ares, Jr., and David Haussler, "Support Vector Machine Classification of Microarray Gene Expression Data." T. Joachims, "Text categorization with Support Vector Machines: learning with many relevant features," ECML-98. www.cs.utexas.edu/users/mooney/cs391l/svm.ppt