
A GENERAL FORMULATION FOR SUPPORT VECTOR MACHINES

Wei Chu, S. Sathiya Keerthi, Chong Jin Ong
Control Division, Department of Mechanical Engineering, National University of Singapore,
10 Kent Ridge Crescent, Singapore 119260
engp9354@nus.edu.sg, mpessk@nus.edu.sg, mpeongcj@nus.edu.sg

(Wei Chu gratefully acknowledges the financial support provided by the National University of Singapore through a Research Scholarship.)

ABSTRACT

In this paper, we derive a general formulation of support vector machines for classification and regression respectively. The $L_e$ loss function is proposed as a patch of the $L_1$ and $L_2$ soft margin loss functions for classification, while the soft insensitive loss function is introduced as a generalization of popular loss functions for regression. The introduction of these two loss functions results in a general formulation for support vector machines.

1. INTRODUCTION

As computationally powerful tools for supervised learning, support vector machines (SVMs) are widely used in classification and regression problems [10]. Let us suppose that a data set $\mathcal{D} = \{(x_i, y_i)\ |\ i = 1, \ldots, n\}$ is given for training, where the input vector $x_i \in \mathbb{R}^d$ and $y_i$ is the target value. The idea of SVMs is to map these input vectors into a high dimensional reproducing kernel Hilbert space (RKHS), where a linear machine is constructed by minimizing a regularized functional. The linear machine takes the form $f(x) = w \cdot \phi(x) + b$, where $\phi(\cdot)$ is the mapping function, $b$ is known as the bias, and the dot product $\phi(x) \cdot \phi(x')$ is also the reproducing kernel $K(x, x')$ in the RKHS. The regularized functional is usually defined as

$$R(w, b) = C \sum_{i=1}^{n} l\big(y_i, f(x_i)\big) + \frac{1}{2}\|w\|^2 \qquad (1)$$

where the regularization parameter $C > 0$, the norm of $w$ in the RKHS is the stabilizer, and $\sum_{i=1}^{n} l(y_i, f(x_i))$ is the empirical loss term. In standard SVMs, the regularized functional (1) can be minimized by solving a convex quadratic programming optimization problem that guarantees a unique global minimum solution. Various loss functions can be used in SVMs that result in quadratic programming problems. In SVMs for classification [1], the hard margin, $L_1$ and $L_2$ loss functions are widely used. For regression, a number of common loss functions have been discussed in [9], such as the Laplacian, Huber's, $\epsilon$-insensitive and Gaussian loss functions. In this paper, we generalize these popular loss functions and put forward new loss functions for classification and regression respectively. The two new loss functions are $C^1$ smooth. By introducing these loss functions into the regularized functional of classical SVMs in place of the popular loss functions, we derive a general formulation for SVMs.

2. SUPPORT VECTOR CLASSIFIER

In classical SVMs for binary classification (SVC), in which the target values $y_i \in \{-1, +1\}$, the hard margin loss function is defined as

$$l_h(y_x f_x) = \begin{cases} 0 & \text{if } y_x f_x \ge +1; \\ +\infty & \text{otherwise.} \end{cases} \qquad (2)$$

The hard margin loss function is suitable for noise-free data sets. For other general cases, the following soft margin loss function is popularly used in classical SVC:

$$l_\rho(y_x f_x) = \begin{cases} 0 & \text{if } y_x f_x \ge +1; \\ (1 - y_x f_x)^\rho & \text{otherwise,} \end{cases} \qquad (3)$$

where $\rho$ is a positive integer. The minimization of the regularized functional (1) with (3) as the loss function leads to a convex programming problem for any positive integer $\rho$; for $L_1$ ($\rho = 1$) or $L_2$ ($\rho = 2$), it is also a convex quadratic programming problem.

2.1. L_e Loss Function

We generalize the $L_1$ and $L_2$ loss functions as the $L_e$ loss function, which is defined as

$$l_e(y_x f_x) = \begin{cases} 0 & \text{if } y_x f_x > 1; \\ \dfrac{(1 - y_x f_x)^2}{4\epsilon} & \text{if } 1 \ge y_x f_x \ge 1 - 2\epsilon; \\ (1 - y_x f_x) - \epsilon & \text{otherwise,} \end{cases} \qquad (4)$$

where the parameter $\epsilon > 0$.

[Figure 1: Graphs of the $L_1$, $L_2$ and $L_e$ soft margin loss functions in SVC.]

From their definitions and Figure 1, we find that $L_e$ approaches $L_1$ as the parameter $\epsilon \to 0$. If $\epsilon$ is fixed at some large value, $L_e$ approaches $L_2$ for all practical purposes.
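To make (4) and these limiting cases concrete, the piecewise definition can be coded directly. The sketch below is our illustration only (NumPy is assumed; the function name l_e is ours, not from the paper's released code):

```python
import numpy as np

def l_e(u, eps):
    """L_e soft margin loss of the margin value u = y_x * f_x, per Eq. (4)."""
    u = np.asarray(u, dtype=float)
    return np.where(
        u > 1.0,
        0.0,                                    # correctly classified with margin
        np.where(
            u >= 1.0 - 2.0 * eps,
            (1.0 - u) ** 2 / (4.0 * eps),       # quadratic zone of width 2*eps
            (1.0 - u) - eps,                    # linear zone, offset by eps
        ),
    )

u = np.linspace(-2.0, 2.0, 9)
print(np.round(l_e(u, eps=1e-6), 3))   # ~ L1 hinge loss max(0, 1 - u)
print(np.round(l_e(u, eps=10.0), 4))   # ~ L2-type loss max(0, 1 - u)^2 / (4*eps)
```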

2.2. Optimization Problem

The minimization problem in SVC, i.e. (1) with the $L_e$ loss function, can be rewritten as the following equivalent optimization problem, which we refer to as the primal problem, by introducing slack variables $\xi_i \ge 1 - y_i\big(w \cdot \phi(x_i) + b\big)$:

$$\min_{w, b, \xi}\ R(w, b, \xi) = C \sum_{i=1}^{n} \psi_e(\xi_i) + \frac{1}{2}\|w\|^2 \qquad (5)$$

subject to

$$y_i\big(w \cdot \phi(x_i) + b\big) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad \forall i, \qquad (6)$$

where

$$\psi_e(\xi) = \begin{cases} \dfrac{\xi^2}{4\epsilon} & \text{if } \xi \in [0, 2\epsilon]; \\ \xi - \epsilon & \text{if } \xi \in (2\epsilon, +\infty). \end{cases} \qquad (7)$$

Standard Lagrangian techniques [4] are used to derive the dual problem. Let $\alpha_i \ge 0$ and $\gamma_i \ge 0$ be the Lagrange multipliers corresponding to the inequalities in the primal problem (6). The Lagrangian for the primal problem is then

$$L(w, b, \xi) = C \sum_{i=1}^{n} \psi_e(\xi_i) + \frac{1}{2}\|w\|^2 - \sum_{i=1}^{n} \gamma_i \xi_i - \sum_{i=1}^{n} \alpha_i \big( y_i(w \cdot \phi(x_i) + b) - 1 + \xi_i \big). \qquad (8)$$

The KKT conditions for the primal problem require

$$w = \sum_{i=1}^{n} y_i \alpha_i \phi(x_i) \qquad (9)$$

$$\sum_{i=1}^{n} y_i \alpha_i = 0 \qquad (10)$$

$$C \frac{\partial \psi_e(\xi_i)}{\partial \xi_i} = \alpha_i + \gamma_i, \quad \forall i. \qquad (11)$$

Based on the definition of $\psi_e(\cdot)$ given in (7) and the condition (11), an equality constraint on the Lagrange multipliers can be written explicitly as

$$\frac{C}{2\epsilon}\,\xi_i = \alpha_i + \gamma_i \ \text{ if } 0 \le \xi_i \le 2\epsilon; \qquad C = \alpha_i + \gamma_i \ \text{ if } \xi_i > 2\epsilon, \quad \forall i. \qquad (12)$$

If we collect all terms involving $\xi_i$ in the Lagrangian (8), we get $T_i = C\psi_e(\xi_i) - (\alpha_i + \gamma_i)\xi_i$. Using (7) and (12) we can rewrite $T_i$ as

$$T_i = \begin{cases} -\dfrac{\epsilon}{C}(\alpha_i + \gamma_i)^2 & \text{if } \xi_i \in [0, 2\epsilon]; \\ -C\epsilon & \text{if } \xi_i \in (2\epsilon, +\infty). \end{cases} \qquad (13)$$

Thus the $\xi_i$ can be eliminated if we set $T_i = -\frac{\epsilon}{C}(\alpha_i + \gamma_i)^2$ and introduce the additional constraints $0 \le \alpha_i + \gamma_i \le C$. The dual problem can then be stated as a maximization problem in terms of the nonnegative dual variables $\alpha_i$ and $\gamma_i$:

$$\max_{\alpha, \gamma}\ R(\alpha, \gamma) = \sum_{i=1}^{n} \alpha_i - \frac{\epsilon}{C}\sum_{i=1}^{n} (\alpha_i + \gamma_i)^2 - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i y_i \alpha_j y_j\, \phi(x_i) \cdot \phi(x_j) \qquad (14)$$

subject to

$$\alpha_i \ge 0, \quad \gamma_i \ge 0, \quad 0 \le \alpha_i + \gamma_i \le C, \quad \forall i, \quad \text{and} \quad \sum_{i=1}^{n} \alpha_i y_i = 0. \qquad (15)$$

Note that $R(\alpha, \gamma) \le R(\alpha, 0)$ for any $\alpha$ and $\gamma$ satisfying (15). Hence the maximum of (14) over $(\alpha, \gamma)$ can be found by maximizing $R(\alpha, 0)$ over $0 \le \alpha_i \le C$ and $\sum_{i=1}^{n} \alpha_i y_i = 0$. Therefore, the dual problem can finally be simplified as

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i y_i \alpha_j y_j K(x_i, x_j) - \sum_{i=1}^{n} \alpha_i + \frac{\epsilon}{C}\sum_{i=1}^{n} \alpha_i^2 \qquad (16)$$

subject to $0 \le \alpha_i \le C$, $\forall i$, and $\sum_{i=1}^{n} \alpha_i y_i = 0$.
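A quick numerical check of the dual (16) is possible with a generic constrained solver. The sketch below is our illustration only: it hands (16) to SciPy's SLSQP routine instead of the matrix-based or SMO-type methods discussed later in the paper, and the RBF kernel and toy data are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2); an illustrative kernel choice.
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def svc_dual_le(K, y, C=1.0, eps=0.1):
    """Solve (16): min 0.5*sum_ij a_i y_i a_j y_j K_ij - sum_i a_i + (eps/C)*sum_i a_i^2,
    s.t. 0 <= a_i <= C and sum_i a_i y_i = 0.  Generic solver, not the paper's algorithm."""
    n = len(y)
    H = (y[:, None] * y[None, :]) * K + (2.0 * eps / C) * np.eye(n)
    fun = lambda a: 0.5 * a @ H @ a - a.sum()
    jac = lambda a: H @ a - 1.0
    cons = [{"type": "eq", "fun": lambda a: a @ y, "jac": lambda a: y}]
    res = minimize(fun, np.zeros(n), jac=jac, bounds=[(0.0, C)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

# Tiny made-up toy problem: two Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.3, (10, 2)), rng.normal(1.0, 0.3, (10, 2))])
y = np.array([-1.0] * 10 + [1.0] * 10)
alpha = svc_dual_le(rbf_kernel(X), y, C=10.0, eps=0.1)
print("support vector indices:", np.flatnonzero(alpha > 1e-6))
```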

With the equality (9), the linear classifier can be obtained from the solution of (16) as $f(x) = \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b$, where $b$ can easily be obtained as a byproduct of the solution. In most cases, only some of the Lagrange multipliers $\alpha_i$ differ from zero at the optimal solution; they define the support vectors (SVs) of the problem. More exactly, the training samples $(x_i, y_i)$ associated with $\alpha_i$ satisfying $0 < \alpha_i < C$ are called off-bound SVs, the samples with $\alpha_i = C$ are called on-bound SVs, and the samples with $\alpha_i = 0$ are called non-SVs.

2.3. Discussion

The above formulation (16) is a general framework for classical SVC. There are three special cases of the formulation:

1. $L_1$: if we set $\epsilon = 0$, the formulation is just the SVC using the $L_1$ soft margin loss function.

2. Hard margin: when we set $\epsilon = 0$ and keep $C$ large enough to prevent any $\alpha_i$ from reaching the upper bound $C$, the solution of this formulation is identical to that of the standard SVC with the hard margin loss function.

3. $L_2$: when we set $C/(4\epsilon)$ equal to the regularization parameter in the SVC with $L_2$ loss and keep $C$ large enough to prevent any $\alpha_i$ from reaching the upper bound at the optimal solution, the solution will be the same as that of the standard SVC with the $L_2$ soft margin loss function.

In practice, for example on unbalanced data sets, we would like to use different regularization parameters $C_+$ and $C_-$ for the samples with positive and negative labels separately. (The ratio between $C_+$ and $C_-$ is usually fixed at $C_+/C_- = N_-/N_+$, where $N_+$ is the number of samples with positive label and $N_-$ is the number of samples with negative label.) As an extremely general case, we can use a different regularization parameter $C_i$ for every sample $(x_i, y_i)$. It is straightforward to obtain the general dual problem as

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i y_i \alpha_j y_j K(x_i, x_j) - \sum_{i=1}^{n} \alpha_i + \sum_{i=1}^{n} \frac{\epsilon}{C_i}\, \alpha_i^2 \qquad (17)$$

subject to $0 \le \alpha_i \le C_i$, $\forall i$, and $\sum_{i=1}^{n} \alpha_i y_i = 0$.

Obviously, the dual problem (17) is a constrained convex quadratic programming problem. Denoting $\hat{\alpha} = [y_1\alpha_1, y_2\alpha_2, \ldots, y_n\alpha_n]^T$, $P = [-y_1, -y_2, \ldots, -y_n]^T$ and $Q = K + \Lambda$, where $\Lambda$ is an $n \times n$ diagonal matrix with $\Lambda_{ii} = 2\epsilon/C_i$ and $K$ is the kernel matrix with $K_{ij} = K(x_i, x_j)$, (17) can be written in the general form

$$\min_{\hat{\alpha}}\ \frac{1}{2}\hat{\alpha}^T Q \hat{\alpha} + P^T \hat{\alpha} \qquad (18)$$

subject to $l_i \le \hat{\alpha}_i \le u_i$, $\forall i$, and $\sum_{i=1}^{n} \hat{\alpha}_i = 0$, where $l_i = 0$, $u_i = C_i$ when $y_i = +1$ and $l_i = -C_i$, $u_i = 0$ when $y_i = -1$. As to the algorithm design for the solution, matrix-based quadratic programming techniques that use the chunking idea can be employed here, and popular SMO algorithms [7, 5] can easily be adapted for the solution. For a program design and source code, refer to [2].
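The matrix form (18) is what an implementation actually assembles. The following sketch is ours (independent of the program design and source code in [2]); it builds Q, P and the box bounds from a kernel matrix and per-sample C_i chosen by the C+/C- = N-/N+ convention noted above, and leaves the choice of QP solver open:

```python
import numpy as np

def svc_general_dual_matrices(K, y, C_pos, C_neg, eps):
    """Assemble Q, P and bounds of Eq. (18) for the per-sample-C dual (17).
    Variables are a_hat_i = y_i * alpha_i; this is a sketch, not the paper's code."""
    y = np.asarray(y, dtype=float)
    C = np.where(y > 0, C_pos, C_neg)       # per-sample regularization C_i
    Q = K + np.diag(2.0 * eps / C)          # Q = K + Lambda, Lambda_ii = 2*eps/C_i
    P = -y                                  # linear term P_i = -y_i
    lower = np.where(y > 0, 0.0, -C)        # l_i = 0 if y_i = +1, -C_i if y_i = -1
    upper = np.where(y > 0, C, 0.0)         # u_i = C_i if y_i = +1, 0 if y_i = -1
    # Minimize 0.5 * a^T Q a + P^T a  s.t.  lower <= a <= upper and sum(a) = 0.
    return Q, P, lower, upper

# Class-weighted C for an unbalanced toy label vector (made up).
y = np.array([1.0] * 5 + [-1.0] * 20)
N_pos, N_neg = int((y > 0).sum()), int((y < 0).sum())
C_neg = 1.0
C_pos = C_neg * N_neg / N_pos               # C+/C- = N-/N+
K = np.eye(len(y))                          # placeholder kernel matrix
Q, P, lo, hi = svc_general_dual_matrices(K, y, C_pos, C_neg, eps=0.1)
print(Q.shape, P[:3], lo[:3], hi[:3])
```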
3. SUPPORT VECTOR REGRESSION

A number of loss functions for support vector regression (SVR) have been discussed in [9], leading to a general optimization problem. Four loss functions are widely used for regression problems:

1. the Laplacian loss function: $l_l(\delta) = |\delta|$;

2. Huber's loss function:
$$l_h(\delta) = \begin{cases} \dfrac{\delta^2}{4\epsilon} & \text{if } |\delta| \le 2\epsilon \\ |\delta| - \epsilon & \text{otherwise;} \end{cases}$$

3. the $\epsilon$-insensitive loss function:
$$l_\epsilon(\delta) = \begin{cases} 0 & \text{if } |\delta| \le \epsilon \\ |\delta| - \epsilon & \text{otherwise;} \end{cases}$$

4. the Gaussian loss function: $l_g(\delta) = \frac{1}{2}\delta^2$.

We introduce another loss function as the generalization of these popular loss functions.

3.1. Soft Insensitive Loss Function

The loss function known as the soft insensitive loss function (SILF) is defined as

$$l_{\epsilon,\beta}(\delta) = \begin{cases} |\delta| - \epsilon & \text{if } |\delta| > (1+\beta)\epsilon \\ \dfrac{\big(|\delta| - (1-\beta)\epsilon\big)^2}{4\beta\epsilon} & \text{if } (1+\beta)\epsilon \ge |\delta| \ge (1-\beta)\epsilon \\ 0 & \text{if } |\delta| < (1-\beta)\epsilon \end{cases} \qquad (19)$$

where $0 < \beta \le 1$ and $\epsilon > 0$. A profile of SILF is shown in Figure 2.

[Figure 2: Graph of the soft insensitive loss function, where $\epsilon = 0.5$ and $\beta = 0.5$.]

The properties of SILF are entirely controlled by the two parameters $\beta$ and $\epsilon$. For a fixed $\epsilon$, SILF approaches the $\epsilon$-insensitive loss function as $\beta \to 0$; on the other hand, as $\beta \to 1$, it approaches Huber's loss function. In addition, SILF becomes the Laplacian loss function as $\epsilon \to 0$. If $\epsilon$ is held fixed at some large value and $\beta \to 1$, SILF approaches the quadratic loss function for all practical purposes. The application of SILF in Bayesian SVR has been discussed in [3].
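These limiting cases are easy to verify numerically. The sketch below is our illustration of (19), not code from the paper (NumPy only; the function name silf is ours):

```python
import numpy as np

def silf(delta, eps, beta):
    """Soft insensitive loss function, Eq. (19), with 0 < beta <= 1 and eps > 0."""
    a = np.abs(np.asarray(delta, dtype=float))
    return np.where(
        a > (1.0 + beta) * eps,
        a - eps,                                                  # linear zone
        np.where(
            a >= (1.0 - beta) * eps,
            (a - (1.0 - beta) * eps) ** 2 / (4.0 * beta * eps),   # quadratic zone
            0.0,                                                  # insensitive zone
        ),
    )

d = np.linspace(-2.0, 2.0, 9)
print(np.round(silf(d, eps=0.5, beta=1e-6), 3))  # ~ eps-insensitive loss
print(np.round(silf(d, eps=0.5, beta=1.0), 3))   # ~ Huber's loss with the same eps
print(np.round(silf(d, eps=1e-9, beta=0.5), 3))  # ~ Laplacian loss |delta|
```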

3.2. Optimization Problem

We introduce SILF into the regularized functional (1), which leads to a quadratic programming problem that can serve as a general framework. As usual, two slack variables $\xi_i$ and $\xi_i^*$ are introduced as $\xi_i \ge y_i - w \cdot \phi(x_i) - b - (1-\beta)\epsilon$ and $\xi_i^* \ge w \cdot \phi(x_i) + b - y_i - (1-\beta)\epsilon$, $\forall i$. The minimization of the regularized functional (1) with SILF as the loss function can then be rewritten as the following equivalent optimization problem, usually called the primal problem:

$$\min_{w, b, \xi, \xi^*}\ R(w, b, \xi, \xi^*) = C \sum_{i=1}^{n} \big( \psi_s(\xi_i) + \psi_s(\xi_i^*) \big) + \frac{1}{2}\|w\|^2 \qquad (20)$$

subject to

$$y_i - w \cdot \phi(x_i) - b \le (1-\beta)\epsilon + \xi_i; \quad w \cdot \phi(x_i) + b - y_i \le (1-\beta)\epsilon + \xi_i^*; \quad \xi_i \ge 0, \ \xi_i^* \ge 0, \ \forall i, \qquad (21)$$

where

$$\psi_s(\xi) = \begin{cases} \dfrac{\xi^2}{4\beta\epsilon} & \text{if } \xi \in [0, 2\beta\epsilon]; \\ \xi - \beta\epsilon & \text{if } \xi \in (2\beta\epsilon, +\infty). \end{cases} \qquad (22)$$

Let $\alpha_i \ge 0$, $\alpha_i^* \ge 0$, $\gamma_i \ge 0$ and $\gamma_i^* \ge 0$, $\forall i$, be the Lagrange multipliers corresponding to the inequalities in (21). The KKT conditions for the primal problem require

$$w = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\, \phi(x_i) \qquad (23)$$

$$\sum_{i=1}^{n} (\alpha_i - \alpha_i^*) = 0 \qquad (24)$$

$$C \frac{\partial \psi_s(\xi_i)}{\partial \xi_i} = \alpha_i + \gamma_i, \quad \forall i \qquad (25)$$

$$C \frac{\partial \psi_s(\xi_i^*)}{\partial \xi_i^*} = \alpha_i^* + \gamma_i^*, \quad \forall i. \qquad (26)$$

Based on the definition of $\psi_s$ given by (22), the conditions (25) and (26) can be written explicitly as equality constraints:

$$\frac{C}{2\beta\epsilon}\,\xi_i = \alpha_i + \gamma_i \ \text{ if } 0 \le \xi_i \le 2\beta\epsilon; \qquad C = \alpha_i + \gamma_i \ \text{ if } \xi_i > 2\beta\epsilon, \qquad (27)$$

$$\frac{C}{2\beta\epsilon}\,\xi_i^* = \alpha_i^* + \gamma_i^* \ \text{ if } 0 \le \xi_i^* \le 2\beta\epsilon; \qquad C = \alpha_i^* + \gamma_i^* \ \text{ if } \xi_i^* > 2\beta\epsilon. \qquad (28)$$

Following arguments analogous to those used for SVC in (13), we can also eliminate $\xi_i$ and $\xi_i^*$ here. This yields a maximization problem in terms of the nonnegative dual variables $\alpha_i$, $\alpha_i^*$, $\gamma_i$ and $\gamma_i^*$:

$$\max_{\alpha, \alpha^*, \gamma, \gamma^*}\ -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)\, \phi(x_i) \cdot \phi(x_j) + \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) y_i - \sum_{i=1}^{n} (\alpha_i + \alpha_i^*)(1-\beta)\epsilon - \frac{\beta\epsilon}{C}\sum_{i=1}^{n} \big( (\alpha_i + \gamma_i)^2 + (\alpha_i^* + \gamma_i^*)^2 \big) \qquad (29)$$

subject to $\alpha_i \ge 0$, $\alpha_i^* \ge 0$, $\gamma_i \ge 0$, $\gamma_i^* \ge 0$, $0 \le \alpha_i + \gamma_i \le C$, $0 \le \alpha_i^* + \gamma_i^* \le C$, $\forall i$, and $\sum_{i=1}^{n} (\alpha_i - \alpha_i^*) = 0$.

As $\gamma_i$ and $\gamma_i^*$ only appear in the last term of (29), the functional (29) is maximal when $\gamma_i = 0$ and $\gamma_i^* = 0$, $\forall i$. Thus, the dual problem can be simplified as

$$\min_{\alpha, \alpha^*}\ R(\alpha, \alpha^*) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*) K(x_i, x_j) - \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) y_i + \sum_{i=1}^{n} (\alpha_i + \alpha_i^*)(1-\beta)\epsilon + \frac{\beta\epsilon}{C}\sum_{i=1}^{n} \big(\alpha_i^2 + \alpha_i^{*2}\big) \qquad (30)$$

subject to $0 \le \alpha_i \le C$, $0 \le \alpha_i^* \le C$, $\forall i$, and $\sum_{i=1}^{n} (\alpha_i - \alpha_i^*) = 0$. With the equality (23), the dual form of the regression function can be written as $f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) K(x_i, x) + b$.
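As a sanity check on (30), a tiny instance can be solved with a generic constrained solver. This sketch is our illustration only (SciPy's SLSQP, a linear kernel and made-up data are assumptions), not the SMO-style approach referred to in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def svr_dual_silf(K, y, C=1.0, eps=0.1, beta=0.3):
    """Solve the dual (30) numerically; variables are stacked as
    z = [alpha_1..alpha_n, alpha*_1..alpha*_n].  A sketch, not the paper's solver."""
    n = len(y)

    def objective(z):
        a, a_star = z[:n], z[n:]
        d = a - a_star
        return (0.5 * d @ K @ d - d @ y
                + (1.0 - beta) * eps * (a + a_star).sum()
                + (beta * eps / C) * (a @ a + a_star @ a_star))

    cons = [{"type": "eq", "fun": lambda z: z[:n].sum() - z[n:].sum()}]
    res = minimize(objective, np.zeros(2 * n), bounds=[(0.0, C)] * (2 * n),
                   constraints=cons, method="SLSQP")
    a, a_star = res.x[:n], res.x[n:]
    return a - a_star   # coefficients of f(x) = sum_i (a_i - a*_i) K(x_i, x) + b

# Made-up 1-D regression data with a linear kernel K_ij = x_i . x_j.
x = np.linspace(-1.0, 1.0, 15)[:, None]
y = 0.7 * x.ravel()
coef = svr_dual_silf(x @ x.T, y, C=10.0, eps=0.05, beta=0.3)
print("non-zero coefficients:", np.flatnonzero(np.abs(coef) > 1e-6))
```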

3.3. Discussion

Like SILF, the dual problem (30) is a generalization of several SVR formulations. More exactly:

1. when we set $\beta = 0$, (30) becomes the classical SVR using the $\epsilon$-insensitive loss function;

2. when $\beta = 1$, (30) becomes the formulation obtained when Huber's loss function is used;

3. when $\beta = 0$ and $\epsilon = 0$, (30) becomes the formulation for the case of the Laplacian loss function;

4. the optimization problem (1) using the Gaussian loss function with variance $\sigma^2$ is equivalent to the general SVR (30) with $\beta = 1$ and $2\epsilon/C = \sigma^2$, provided that the upper bound $C$ is kept large enough to prevent any $\alpha_i$ and $\alpha_i^*$ from reaching the upper bound at the optimal solution. This is also the case of least-squares support vector machines; for an SMO algorithm design, refer to [6].

The dual problem (30) is also a constrained convex quadratic programming problem. Let us denote $\hat{\alpha} = [\alpha_1, \ldots, \alpha_n, -\alpha_1^*, \ldots, -\alpha_n^*]^T$,

$$P = \big[-y_1 + (1-\beta)\epsilon, \ldots, -y_n + (1-\beta)\epsilon,\ -y_1 - (1-\beta)\epsilon, \ldots, -y_n - (1-\beta)\epsilon\big]^T$$

and

$$Q = \begin{bmatrix} K + \frac{2\beta\epsilon}{C} I & K \\ K & K + \frac{2\beta\epsilon}{C} I \end{bmatrix}$$

where $K$ is the kernel matrix with $ij$-entry $K(x_i, x_j)$. Then (30) can be rewritten as

$$\min_{\hat{\alpha}}\ \frac{1}{2}\hat{\alpha}^T Q \hat{\alpha} + P^T \hat{\alpha} \qquad (31)$$

subject to $l_i \le \hat{\alpha}_i \le u_i$, $\forall i$, and $\sum_{i=1}^{2n} \hat{\alpha}_i = 0$, where $l_i = 0$, $u_i = C$ for $1 \le i \le n$ and $l_i = -C$, $u_i = 0$ for $n+1 \le i \le 2n$.

4. CONCLUSION

In this paper, we introduce two loss functions as generalizations of popular loss functions for SVC and SVR respectively. The dual problems of SVMs arising from the introduction of these two loss functions can be used as a general formulation for SVMs. The optimization problems in SVMs with popular loss functions can be obtained as special cases of this general formulation. As a byproduct, the general formulation also provides a framework that facilitates algorithm implementation.

5. REFERENCES

[1] Burges, C. J. C., "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, 2(2), pp. 121-167, 1998.

[2] Chu, W., Keerthi, S. S. and Ong, C. J., "Extended SVMs - theory and implementation," Technical Report CD-0-4, Department of Mechanical Engineering, National University of Singapore. http://guppy.mpe.nus.edu.sg/~chuwei/svm/esvm.pdf

[3] Chu, W., Keerthi, S. S. and Ong, C. J., "A unified loss function in Bayesian framework for support vector regression," in Proceedings of the 18th International Conference on Machine Learning, 2001. http://guppy.mpe.nus.edu.sg/~mpessk/svm/icml.pdf

[4] Fletcher, R., Practical Methods of Optimization, John Wiley, 1987.

[5] Keerthi, S. S., Shevade, S. K., Bhattacharyya, C. and Murthy, K. R. K., "Improvements to Platt's SMO algorithm for SVM classifier design," Neural Computation, 13, pp. 637-649, 2001.

[6] Keerthi, S. S. and Shevade, S. K., "SMO algorithm for least squares SVM formulations," Technical Report CD-0-08, Department of Mechanical Engineering, National University of Singapore. http://guppy.mpe.nus.edu.sg/~mpessk/papers/lssvm_smo.ps.gz

[7] Platt, J. C., "Fast training of support vector machines using sequential minimal optimization," in Advances in Kernel Methods - Support Vector Learning, pp. 185-208, 1998.

[8] Shevade, S. K., Keerthi, S. S., Bhattacharyya, C. and Murthy, K. R. K., "Improvements to the SMO algorithm for SVM regression," IEEE Transactions on Neural Networks, pp. 1188-1194, 2000.

[9] Smola, A. J. and Schölkopf, B., "A tutorial on support vector regression," Technical Report NC2-TR-1998-030, GMD First, October 1998.

[10] Vapnik, V. N., The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995.