
LIBSVM: a Library for Support Vector Machines (Version 2.32)

Chih-Chung Chang and Chih-Jen Lin*
Last updated: October 6, 2001

* Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan (http://www.csie.ntu.edu.tw/~cjlin).

Abstract

LIBSVM is a library for support vector machines (SVM). Its goal is to help users easily use SVM as a tool. In this document, we present all its implementation details. For the use of LIBSVM, the README file included in the package provides the information.

1 Introduction

LIBSVM is a library for support vector classification (SVM) and regression. Its goal is to let users easily use SVM as a tool. In this document, we present all its implementation details. For the use of LIBSVM, the README file included in the package provides the information.

In Section 2, we show the formulations used in LIBSVM: C-support vector classification (C-SVC), ν-support vector classification (ν-SVC), distribution estimation (one-class SVM), ε-support vector regression (ε-SVR), and ν-support vector regression (ν-SVR). We discuss the implementation of solving the quadratic problems in Section 3. Section 4 describes two implementation techniques: shrinking and caching. Then in Section 5 we discuss the implementation of multi-class classification. We now also support different penalty parameters for unbalanced data; details are in Section 6.

2 Formulations

2.1 C-Support Vector Classification (Binary Case)

Given training vectors x_i ∈ R^n, i = 1, ..., l, in two classes, and a vector y ∈ R^l such that y_i ∈ {1, −1}, C-SVC (Cortes and Vapnik, 1995; Vapnik, 1998) solves the following primal problem:

    min_{w,b,ξ}   (1/2) w^T w + C Σ_{i=1}^{l} ξ_i                                      (2.1)
    subject to    y_i (w^T φ(x_i) + b) ≥ 1 − ξ_i,
                  ξ_i ≥ 0,  i = 1, ..., l.

Its dual is

    min_α    (1/2) α^T Q α − e^T α
    subject to    0 ≤ α_i ≤ C,  i = 1, ..., l,                                         (2.2)
                  y^T α = 0,

where e is the vector of all ones, C is the upper bound, Q is an l by l positive semidefinite matrix, Q_ij ≡ y_i y_j K(x_i, x_j), and K(x_i, x_j) ≡ φ(x_i)^T φ(x_j) is the kernel. Here training vectors x_i are mapped into a higher (maybe infinite) dimensional space by the function φ. The decision function is

    f(x) = sign( Σ_{i=1}^{l} y_i α_i K(x_i, x) + b ).

2.2 ν-Support Vector Classification (Binary Case)

The ν-support vector classification (Schölkopf et al., 2000) uses a new parameter ν which lets one control the number of support vectors and errors. The parameter ν ∈ (0, 1] is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. Details of the algorithm implemented in LIBSVM can be found in (Chang and Lin, 2001a).

Given training vectors x_i ∈ R^n, i = 1, ..., l, in two classes, and a vector y ∈ R^l such that y_i ∈ {1, −1}, the primal form considered is:

    min_{w,b,ξ,ρ}   (1/2) w^T w − νρ + (1/l) Σ_{i=1}^{l} ξ_i
    subject to      y_i (w^T φ(x_i) + b) ≥ ρ − ξ_i,
                    ξ_i ≥ 0,  i = 1, ..., l,  ρ ≥ 0.

The dual is:

    min_α    (1/2) α^T Q α
    subject to    0 ≤ α_i ≤ 1/l,  i = 1, ..., l,                                       (2.3)
                  e^T α ≥ ν,
                  y^T α = 0,

where Q_ij ≡ y_i y_j K(x_i, x_j). The decision function is:

    f(x) = sign( Σ_{i=1}^{l} y_i α_i (K(x_i, x) + b) ).
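To make the formulas above concrete, the following is a minimal sketch of evaluating a decision function of the C-SVC form with an RBF kernel. It is not LIBSVM's API: the names SimpleModel, rbf_kernel, and predict, and the choice of storing the products y_i α_i as a single coefficient array, are assumptions made only for this illustration.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // K(a, b) = exp(-gamma * ||a - b||^2), one common choice of kernel
    static double rbf_kernel(const std::vector<double>& a,
                             const std::vector<double>& b, double gamma)
    {
        double d = 0.0;
        for (std::size_t k = 0; k < a.size(); ++k)
            d += (a[k] - b[k]) * (a[k] - b[k]);
        return std::exp(-gamma * d);
    }

    struct SimpleModel {
        std::vector<std::vector<double> > sv;  // support vectors x_i
        std::vector<double> y_alpha;           // coefficients y_i * alpha_i
        double b;                              // bias term
        double gamma;                          // RBF kernel parameter
    };

    // f(x) = sign( sum_i y_i alpha_i K(x_i, x) + b )
    static int predict(const SimpleModel& m, const std::vector<double>& x)
    {
        double f = m.b;
        for (std::size_t i = 0; i < m.sv.size(); ++i)
            f += m.y_alpha[i] * rbf_kernel(m.sv[i], x, m.gamma);
        return f >= 0 ? +1 : -1;
    }

Only the x_i with nonzero α_i (the support vectors) need to be stored, which is why the sum runs over the support vectors rather than over all l training points.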

In (Crisp and Burges, 1999; Chang and Lin, 2001a), it has been shown that e^T α ≥ ν can be replaced by e^T α = ν. With this property, in LIBSVM, we solve a scaled version of (2.3):

    min_α    (1/2) α^T Q α
    subject to    0 ≤ α_i ≤ 1,  i = 1, ..., l,
                  e^T α = νl,
                  y^T α = 0.

We output α/ρ so the computed decision function is:

    f(x) = sign( Σ_{i=1}^{l} y_i (α_i/ρ) (K(x_i, x) + b) )

and then the two margins are

    y_i (w^T φ(x_i) + b) = ±1,

which are the same as those of C-SVC.

2.3 Distribution Estimation (One-class SVM)

One-class SVM was proposed by (Schölkopf et al., 2001) for estimating the support of a high-dimensional distribution. Given training vectors x_i ∈ R^n, i = 1, ..., l, without any class information, the primal form in (Schölkopf et al., 2001) is:

    min_{w,ξ,ρ}   (1/2) w^T w − ρ + (1/(νl)) Σ_{i=1}^{l} ξ_i
    subject to    w^T φ(x_i) ≥ ρ − ξ_i,
                  ξ_i ≥ 0,  i = 1, ..., l.

The dual is:

    min_α    (1/2) α^T Q α
    subject to    0 ≤ α_i ≤ 1/(νl),  i = 1, ..., l,                                    (2.4)
                  e^T α = 1,

where Q_ij = K(x_i, x_j) ≡ φ(x_i)^T φ(x_j).

In LIBSVM we solve a scaled version of (2.4):

    min_α    (1/2) α^T Q α
    subject to    0 ≤ α_i ≤ 1,  i = 1, ..., l,
                  e^T α = νl.

The decision function is

    f(x) = sign( Σ_{i=1}^{l} α_i K(x_i, x) − ρ ).

2.4 ε-Support Vector Regression (ε-SVR)

Given a set of data points {(x_1, z_1), ..., (x_l, z_l)}, such that x_i ∈ R^n is an input and z_i ∈ R^1 is a target output, the standard form of support vector regression (Vapnik, 1998) is:

    min_{w,b,ξ,ξ*}   (1/2) w^T w + C Σ_{i=1}^{l} ξ_i + C Σ_{i=1}^{l} ξ_i*
    subject to       z_i − w^T φ(x_i) − b ≤ ε + ξ_i,
                     w^T φ(x_i) + b − z_i ≤ ε + ξ_i*,
                     ξ_i, ξ_i* ≥ 0,  i = 1, ..., l.

The dual is:

    min_{α,α*}   (1/2) (α − α*)^T Q (α − α*) + ε Σ_{i=1}^{l} (α_i + α_i*) + Σ_{i=1}^{l} z_i (α_i − α_i*)
    subject to   Σ_{i=1}^{l} (α_i − α_i*) = 0,
                 0 ≤ α_i, α_i* ≤ C,  i = 1, ..., l,                                    (2.5)

where Q_ij = K(x_i, x_j) ≡ φ(x_i)^T φ(x_j). The approximate function is:

    f(x) = Σ_{i=1}^{l} (−α_i + α_i*) K(x_i, x) + b.

2.5 ν-Support Vector Regression (ν-SVR)

Similar to ν-SVC, for regression (Schölkopf et al., 2000) use a parameter ν to control the number of support vectors. However, unlike ν-SVC where C is replaced by ν, here ν replaces the parameter ε of ε-SVR.

The primal form is

    min_{w,b,ξ,ξ*,ε}   (1/2) w^T w + C ( νε + (1/l) Σ_{i=1}^{l} (ξ_i + ξ_i*) )         (2.6)
    subject to         (w^T φ(x_i) + b) − z_i ≤ ε + ξ_i,
                       z_i − (w^T φ(x_i) + b) ≤ ε + ξ_i*,
                       ξ_i, ξ_i* ≥ 0,  i = 1, ..., l,  ε ≥ 0,

and the dual is

    min_{α,α*}   (1/2) (α − α*)^T Q (α − α*) + z^T (α − α*)
    subject to   e^T (α − α*) = 0,  e^T (α + α*) ≤ Cν,
                 0 ≤ α_i, α_i* ≤ C/l,  i = 1, ..., l.                                  (2.7)

Similarly, the inequality e^T (α + α*) ≤ Cν can be replaced by an equality. In LIBSVM, we consider C ← C/l so the dual problem solved is:

    min_{α,α*}   (1/2) (α − α*)^T Q (α − α*) + z^T (α − α*)
    subject to   e^T (α − α*) = 0,  e^T (α + α*) = Clν,
                 0 ≤ α_i, α_i* ≤ C,  i = 1, ..., l.                                    (2.8)

Then the decision function is

    f(x) = Σ_{i=1}^{l} (−α_i + α_i*) K(x_i, x) + b,

the same as that of ε-SVR.

3 Solving the Quadratic Problems

3.1 The Decomposition Method for C-SVC, ε-SVR, and One-class SVM

We consider the following general form of C-SVC, ε-SVR, and one-class SVM:

    min_α    (1/2) α^T Q α + p^T α
    subject to    y^T α = Δ,                                                            (3.1)
                  0 ≤ α_t ≤ C,  t = 1, ..., l,

where y_t = ±1, t = 1, ..., l. It can be clearly seen that C-SVC and one-class SVM are already in the form of (3.1). For ε-SVR, we consider the following reformulation of (2.5):

    min_{α,α*}   (1/2) [α^T, (α*)^T] [ Q  −Q ; −Q  Q ] [α ; α*] + [ε e^T + z^T, ε e^T − z^T] [α ; α*]
    subject to   y^T [α ; α*] = 0,  0 ≤ α_t, α_t* ≤ C,  t = 1, ..., l,                 (3.2)

where y is a 2l by 1 vector with y_t = 1, t = 1, ..., l, and y_t = −1, t = l+1, ..., 2l.

The difficulty of solving (3.1) is the density of Q because Q_ij is in general not zero. In LIBSVM, we consider the decomposition method to conquer this difficulty. Some works on this method are, for example, Osuna et al. (1997b), Joachims (1998), Platt (1998), and Saunders et al. (1998).

Algorithm 3.1 (Decomposition method)

1. Given a number q ≤ l as the size of the working set. Find α^1 as the initial solution. Set k = 1.

2. If α^k is an optimal solution of (2.2), stop. Otherwise, find a working set B ⊂ {1, ..., l} whose size is q. Define N ≡ {1, ..., l}\B and α_B^k and α_N^k to be the sub-vectors of α^k corresponding to B and N, respectively.

3. Solve the following sub-problem with the variable α_B:

       min_{α_B}    (1/2) α_B^T Q_BB α_B + (p_B + Q_BN α_N^k)^T α_B
       subject to   0 ≤ (α_B)_t ≤ C,  t = 1, ..., q,                                   (3.3)
                    y_B^T α_B = Δ − y_N^T α_N^k,

   where [ Q_BB  Q_BN ; Q_NB  Q_NN ] is a permutation of the matrix Q.

4. Set α_B^{k+1} to be the optimal solution of (3.3) and α_N^{k+1} ≡ α_N^k. Set k ← k + 1 and go to Step 2.

The basic idea of the decomposition method is that in each iteration, the indices {1, ..., l} of the training set are separated into two sets B and N, where B is the working set and N = {1, ..., l}\B. The vector α_N is fixed, so the objective value becomes (1/2) α_B^T Q_BB α_B + (p_B + Q_BN α_N)^T α_B + (1/2) α_N^T Q_NN α_N + p_N^T α_N. Then a sub-problem with the variable α_B, i.e. (3.3), is solved. Note that B is updated in each iteration. To simplify the notation, we simply use B instead of B^k. The strict decrease of the objective function holds and the theoretical convergence will be discussed in Section 3.3.
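The structure of Algorithm 3.1 can be summarized by the following outline. This is only a sketch under stated assumptions, not LIBSVM's implementation: the two callbacks stand for the working set selection of Section 3.2 and the analytic sub-problem solution of Section 3.5, and their signatures are hypothetical.

    #include <functional>
    #include <vector>

    // Outline of Algorithm 3.1.  select_working_set returns true when the
    // current alpha is optimal (Step 2); otherwise it fills B with the chosen
    // indices.  solve_subproblem solves (3.3) over B and writes alpha_B back,
    // leaving alpha_N unchanged (Steps 3-4).
    struct DecompositionOutline {
        std::function<bool(const std::vector<double>&, std::vector<int>&)> select_working_set;
        std::function<void(const std::vector<int>&, std::vector<double>&)> solve_subproblem;

        void solve(std::vector<double>& alpha) const {
            std::vector<int> B;
            while (!select_working_set(alpha, B))   // stop as soon as alpha is optimal
                solve_subproblem(B, alpha);         // update only the working-set variables
        }
    };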

3.2 Working Set Selection and Stopping Criteria for C-SVC, ε-SVR, and One-class SVM

An important issue of the decomposition method is the selection of the working set B. The Karush-Kuhn-Tucker (KKT) condition of (3.1) shows that there are a scalar b and two nonnegative vectors λ and µ such that

    Qα + p + by = λ − µ,
    λ_i α_i = 0,  µ_i (C − α)_i = 0,  λ_i ≥ 0,  µ_i ≥ 0,  i = 1, ..., l.

Then the above can be rewritten as

    (Qα + p + by)_i  ≥ 0  if α_i = 0,
                     = 0  if 0 < α_i < C,
                     ≤ 0  if α_i = C.

By using y_i = ±1, i = 1, ..., l, this further implies that

    y_t = 1,  α_t < C:  (Qα + p)_t + b ≥ 0  ⟹  b ≥ −(Qα + p)_t = −∇f(α)_t,
    y_t = −1, α_t > 0:  (Qα + p)_t − b ≤ 0  ⟹  b ≥ (Qα + p)_t = ∇f(α)_t,
    y_t = −1, α_t < C:  (Qα + p)_t − b ≥ 0  ⟹  b ≤ (Qα + p)_t = ∇f(α)_t,
    y_t = 1,  α_t > 0:  (Qα + p)_t + b ≤ 0  ⟹  b ≤ −(Qα + p)_t = −∇f(α)_t,            (3.4)

where f(α) ≡ (1/2) α^T Q α + p^T α and ∇f(α) is the gradient of f(α) at α. We consider

    i ≡ argmax( {−∇f(α)_t | y_t = 1, α_t < C}, {∇f(α)_t | y_t = −1, α_t > 0} ),        (3.5)
    j ≡ argmin( {∇f(α)_t | y_t = −1, α_t < C}, {−∇f(α)_t | y_t = 1, α_t > 0} ).        (3.6)

We then use B = {i, j} as the working set for the sub-problem (3.3) of the decomposition method. Here i and j are the two elements which violate the KKT condition the most. The idea of using only two elements for the working set is from the Sequential Minimal Optimization (SMO) by Platt (1998). The main advantage is that an analytic solution of (3.3) can be obtained, so there is no need to use optimization software.

Note that (3.5) and (3.6) are a special case of the working set selection of the software SVM^light by Joachims (1998). To be more precise, in SVM^light, if α is the current solution, the following problem is solved:

    min_d    ∇f(α)^T d
    subject to    y^T d = 0,  −1 ≤ d ≤ 1,                                              (3.7)
                  d_t ≥ 0, if α_t = 0,    d_t ≤ 0, if α_t = C,
                  |{d_t | d_t ≠ 0}| = q.                                                (3.8)

Note that |{d_t | d_t ≠ 0}| means the number of components of d which are not zero. The constraint (3.8) implies that a descent direction involving only q variables is obtained. Then the components of α with non-zero d_t are included in the working set B which is used to construct the sub-problem (3.3). Note that d is only used for identifying B but not as a search direction.

It can be clearly seen that if q = 2, the solution of (3.7) is

    i = argmin_t { ∇f(α)_t d_t | y_t d_t = 1; d_t ≥ 0, if α_t = 0; d_t ≤ 0, if α_t = C },
    j = argmin_t { ∇f(α)_t d_t | y_t d_t = −1; d_t ≥ 0, if α_t = 0; d_t ≤ 0, if α_t = C },

which is the same as (3.5) and (3.6). We also notice that this is also the modification 2 of the algorithms in (Keerthi et al., 2001).

We then define

    g_i ≡  −∇f(α)_i  if y_i = 1,  α_i < C,
            ∇f(α)_i  if y_i = −1, α_i > 0,                                              (3.9)

and

    g_j ≡  −∇f(α)_j  if y_j = −1, α_j < C,
            ∇f(α)_j  if y_j = 1,  α_j > 0.                                              (3.10)

From (3.4), we know that

    g_i ≤ −g_j                                                                          (3.11)

implies that α is an optimal solution of (2.2). Practically, the stopping criterion can be written and implemented as:

    g_i ≤ −g_j + ε,                                                                     (3.12)

where ε is a small positive number.
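A minimal sketch of the selection rule (3.5)-(3.6) together with the stopping test (3.12) is given below. G holds the gradient ∇f(α); instead of the signed quantities g_i and g_j, the code tracks the maximal lower bound and the minimal upper bound on b from (3.4), which gives an equivalent test. The function name and the flat arrays are assumptions for this sketch; LIBSVM additionally interleaves this loop with caching and shrinking.

    #include <limits>
    #include <vector>

    // Maximal violating pair: i attains the largest lower bound on b in (3.4),
    // j attains the smallest upper bound.  Returns false when the stopping
    // criterion (3.12) holds, so the decomposition method can stop.
    static bool select_violating_pair(const std::vector<double>& G,
                                      const std::vector<int>& y,
                                      const std::vector<double>& alpha,
                                      double C, double eps, int& i, int& j)
    {
        double Gmax = -std::numeric_limits<double>::infinity();  // max lower bound on b
        double Gmin =  std::numeric_limits<double>::infinity();  // min upper bound on b
        i = j = -1;
        for (int t = 0; t < (int)G.size(); ++t) {
            double val = -y[t] * G[t];   // -y_t * grad f(alpha)_t, the bound on b in (3.4)
            // candidates of (3.5): y_t = 1, alpha_t < C  or  y_t = -1, alpha_t > 0
            if (((y[t] == +1 && alpha[t] < C) || (y[t] == -1 && alpha[t] > 0)) && val > Gmax) {
                Gmax = val;
                i = t;
            }
            // candidates of (3.6): y_t = -1, alpha_t < C  or  y_t = 1, alpha_t > 0
            if (((y[t] == -1 && alpha[t] < C) || (y[t] == +1 && alpha[t] > 0)) && val < Gmin) {
                Gmin = val;
                j = t;
            }
        }
        return Gmax > Gmin + eps;   // keep iterating only while the pair violates (3.12)
    }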

3.3 Convergence of the Decomposition Method

The convergence of decomposition methods was first studied in Chang et al. (2000), but the algorithms discussed there do not coincide with existing implementations. In this section we discuss only convergence results related to the specific decomposition method in Section 3.2. From Keerthi and Gilbert (2002) we have

Theorem 3.2 Given any ε > 0, after a finite number of iterations (3.12) will be satisfied.

This theorem establishes the so-called finite termination property, so we are sure that after finitely many steps the algorithm will stop. For asymptotic convergence, from Lin (2001a) we have

Theorem 3.3 If {α^k} is the sequence generated by the decomposition method in Section 3.2, the limit of any of its convergent subsequences is an optimal solution of (3.1).

Note that Theorem 3.2 does not imply Theorem 3.3: if we consider g_i and g_j in (3.12) as functions of α, they are not continuous. Hence we cannot take limits on both sides of (3.12) and claim that any convergent point already satisfies the KKT condition. Theorem 3.3 was first proved as a special case of general results in Lin (2001b), where some assumptions are needed. Now the proof in Lin (2001a) does not require any assumption.

3.4 The Decomposition Method for ν-SVC and ν-SVR

Both ν-SVC and ν-SVR can be considered as the following general form:

    min_α    (1/2) α^T Q α + p^T α
    subject to    y^T α = Δ_1,                                                          (3.13)
                  e^T α = Δ_2,
                  0 ≤ α_t ≤ C,  t = 1, ..., l.

The decomposition method is the same as Algorithm 3.1 but the sub-problem is different:

    min_{α_B}    (1/2) α_B^T Q_BB α_B + (p_B + Q_BN α_N^k)^T α_B
    subject to   y_B^T α_B = Δ_1 − y_N^T α_N^k,                                        (3.14)
                 e_B^T α_B = Δ_2 − e_N^T α_N^k,
                 0 ≤ (α_B)_t ≤ C,  t = 1, ..., q.

Now if only two elements i and j are selected but y_i ≠ y_j, then y_B^T α_B = Δ_1 − y_N^T α_N^k and e_B^T α_B = Δ_2 − e_N^T α_N^k imply that there are two equations with two variables, so (3.14) has only one feasible point. Therefore, from α^k, the solution cannot be moved any more. On the other hand, if y_i = y_j, then y_B^T α_B = Δ_1 − y_N^T α_N^k and e_B^T α_B = Δ_2 − e_N^T α_N^k become the same equality, so there are multiple feasible solutions. Therefore, we have to keep y_i = y_j while selecting the working set.

The KKT condition of (3.13) shows

    ∇f(α)_i − ρ + b y_i   = 0  if 0 < α_i < C,
                          ≥ 0  if α_i = 0,
                          ≤ 0  if α_i = C.

Define r_1 ≡ ρ − b and r_2 ≡ ρ + b. If y_i = 1, the KKT condition becomes

    ∇f(α)_i − r_1  ≥ 0  if α_i < C,                                                    (3.15)
                   ≤ 0  if α_i > 0.

On the other hand, if y_i = −1, it is

    ∇f(α)_i − r_2  ≥ 0  if α_i < C,                                                    (3.16)
                   ≤ 0  if α_i > 0.

Hence, indices i and j are selected from either

    i = argmin_t { ∇f(α)_t | y_t = 1, α_t < C },   j = argmax_t { ∇f(α)_t | y_t = 1, α_t > 0 },    (3.17)

or

    i = argmin_t { ∇f(α)_t | y_t = −1, α_t < C },  j = argmax_t { ∇f(α)_t | y_t = −1, α_t > 0 },   (3.18)

depending on which one gives a smaller ∇f(α)_i − ∇f(α)_j (i.e. a larger KKT violation). This was first proposed in (Keerthi and Gilbert, 2002). Some details for classification and regression can be seen in (Chang and Lin, 2001a, Section 4) and (Chang and Lin, 2001b), respectively. The stopping criterion will be described in Section 3.6.
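The two candidate pairs (3.17) and (3.18) can be compared as in the following sketch, which keeps the working set within one class as required above. The function name and arrays are again only illustrative.

    #include <limits>
    #include <vector>

    // Working set selection for the nu-formulations: find the most violating
    // pair within y_t = +1 (3.17) and within y_t = -1 (3.18), then keep the
    // pair with the larger violation grad f_j - grad f_i.
    static void select_pair_nu(const std::vector<double>& G, const std::vector<int>& y,
                               const std::vector<double>& alpha, double C,
                               int& i, int& j)
    {
        const double INF = std::numeric_limits<double>::infinity();
        int ip = -1, jp = -1, in = -1, jn = -1;
        double minp = INF, maxp = -INF, minn = INF, maxn = -INF;
        for (int t = 0; t < (int)G.size(); ++t) {
            if (y[t] == +1) {
                if (alpha[t] < C && G[t] < minp) { minp = G[t]; ip = t; }   // i of (3.17)
                if (alpha[t] > 0 && G[t] > maxp) { maxp = G[t]; jp = t; }   // j of (3.17)
            } else {
                if (alpha[t] < C && G[t] < minn) { minn = G[t]; in = t; }   // i of (3.18)
                if (alpha[t] > 0 && G[t] > maxn) { maxn = G[t]; jn = t; }   // j of (3.18)
            }
        }
        if (maxp - minp >= maxn - minn) { i = ip; j = jp; }   // the larger violation wins
        else                            { i = in; j = jn; }
    }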

3.5 Analytical Solutions

Now (3.3) is a simple problem with only two variables:

    min_{α_i,α_j}   (1/2) [α_i, α_j] [ Q_ii  Q_ij ; Q_ji  Q_jj ] [α_i ; α_j] + (Q_{i,N} α_N) α_i + (Q_{j,N} α_N) α_j
    subject to      y_i α_i + y_j α_j = Δ − y_N^T α_N^k,                               (3.19)
                    0 ≤ α_i, α_j ≤ C.

Platt (1998) substituted α_i = y_i (Δ − y_N^T α_N − y_j α_j) into the objective function of (3.3) and solved an unconstrained minimization on α_j. The following solution is obtained:

    α_j^new =  α_j + (−G_i − G_j) / (Q_ii + Q_jj + 2 Q_ij)   if y_i ≠ y_j,
               α_j + (G_i − G_j) / (Q_ii + Q_jj − 2 Q_ij)    if y_i = y_j,             (3.20)

where G_i ≡ ∇f(α)_i and G_j ≡ ∇f(α)_j. If this value is outside the possible region of α_j (that is, it exceeds the feasible region of (3.3)), the value of (3.20) is clipped into the feasible region and is assigned as the new α_j. For example, if y_i = y_j and C ≤ α_i + α_j ≤ 2C, then α_j^new must satisfy

    L ≡ α_i + α_j − C ≤ α_j^new ≤ C ≡ H,

as the largest value α_i^new and α_j^new can take is C. Hence if

    α_j + (G_i − G_j) / (Q_ii + Q_jj − 2 Q_ij) ≤ L,

we define α_j^new ≡ L and then

    α_i^new = α_i + α_j − α_j^new = C.                                                  (3.21)

This can be illustrated by Figure 3.1, where it is like we are optimizing a quadratic function over a line segment. The line segment is the intersection between the linear constraint y_i α_i + y_j α_j = Δ − y_N^T α_N^k and the bounded constraints 0 ≤ α_i, α_j ≤ C.

[Figure 3.1: Analytically solving a two-variable problem. The line y_i α_i + y_j α_j = Δ − y_N^T α_N^k is shown inside the box 0 ≤ α_i, α_j ≤ C, with α_i on the horizontal axis and α_j on the vertical axis.]
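For reference, the update (3.20) together with the clipping illustrated in (3.21) and Figure 3.1 can be written as the following sketch. It assumes the common upper bound C of (3.3); Section 6 shows the code LIBSVM actually uses when the two variables have different upper bounds.

    #include <algorithm>

    // Two-variable analytic solution (3.20) with clipping to the feasible
    // segment of Figure 3.1.  G_i, G_j are gradient entries, Qii, Qjj, Qij the
    // corresponding entries of Q; alpha_i and alpha_j are updated in place.
    static void solve_two_variables(double& alpha_i, double& alpha_j,
                                    int y_i, int y_j,
                                    double G_i, double G_j,
                                    double Qii, double Qjj, double Qij, double C)
    {
        if (y_i != y_j) {
            // alpha_i - alpha_j stays constant along the linear constraint
            double delta = (-G_i - G_j) / (Qii + Qjj + 2.0 * Qij);
            double diff  = alpha_i - alpha_j;
            double L = std::max(0.0, -diff), H = std::min(C, C - diff);
            alpha_j = std::min(H, std::max(L, alpha_j + delta));
            alpha_i = diff + alpha_j;
        } else {
            // alpha_i + alpha_j stays constant along the linear constraint
            double delta = (G_i - G_j) / (Qii + Qjj - 2.0 * Qij);
            double sum   = alpha_i + alpha_j;
            double L = std::max(0.0, sum - C), H = std::min(C, sum);   // as in (3.21)
            alpha_j = std::min(H, std::max(L, alpha_j + delta));
            alpha_i = sum - alpha_j;
        }
    }

A zero denominator makes delta infinite; as discussed below, the clipping step still produces a well-defined value in that case.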

However, numerically the last equality of (3.21) may not hold. The floating-point operations will cause

    α_i + α_j − (α_i + α_j − C) ≠ C.

Therefore, in most SVM software, a small tolerance ε_a is specified and all α_i ≥ C − ε_a are considered to be at the upper bound, while all α_i ≤ ε_a are considered to be zero. This is necessary, as otherwise some data will be wrongly considered as support vectors. In addition, the calculation of the bias term b also needs correct identification of those α_i which are free (i.e. 0 < α_i < C).

In (Hsu and Lin, 2002b), it has been pointed out that if all bounded α_i obtain their values using direct assignments, there is no need of using an ε_a. To be more precise, for floating-point computation, if α_i ← C is assigned somewhere, a future floating-point comparison between α_i and C returns true as they both have the same internal representation. More details about its implementation will be presented in Section 6. Note that though this involves a few more operations, as solving the analytic solution of (3.19) takes only a small portion of the total computational time, the difference is negligible.
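The direct-assignment observation can be seen in a small self-contained example. This is only an illustration of IEEE-754 assignment behaviour, not LIBSVM code.

    #include <cassert>

    int main()
    {
        double C = 0.1;       // not exactly representable in binary floating point
        double alpha = C;     // direct assignment, as in the update code of Section 6
        assert(alpha == C);   // exact: both variables hold the same bit pattern
        // By contrast, a value reached through arithmetic, as in the example
        // alpha_i + alpha_j - (alpha_i + alpha_j - C) above, need not equal C.
        return 0;
    }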

Another minor problem is that the denominator in (3.20) is sometimes zero. When this happens, Q_ij = ±(Q_ii + Q_jj)/2, so

    Q_ii Q_jj − Q_ij^2 = Q_ii Q_jj − (Q_ii + Q_jj)^2/4 = −(Q_ii − Q_jj)^2/4 ≤ 0.

Therefore, we know that if Q_BB is positive definite, the zero denominator in (3.20) never happens. Hence this problem happens only if Q_BB is a 2 by 2 singular matrix. We discuss some situations where Q_BB may be singular.

1. The function φ does not map data to independent vectors in a higher-dimensional space, so Q is only positive semidefinite. For example, this is the case when using the linear or low-degree polynomial kernels. Then it is possible that a singular Q_BB is picked.

2. Some kernels have the nice property that φ(x_i), i = 1, ..., l, are independent if x_i ≠ x_j. Thus Q as well as all possible Q_BB are positive definite. An example is the RBF kernel (Micchelli, 1986). However, for many practical data we have encountered, some of the x_i, i = 1, ..., l, are the same. Therefore, several rows (columns) of Q are exactly the same, so Q_BB may be singular.

However, even if the denominator of (3.20) is zero, there are no numerical problems: from (3.12), we note that g_i + g_j > ε during the iterative process. Since g_i + g_j = ±(−G_i − G_j) if y_i ≠ y_j, and g_i + g_j = ±(G_i − G_j) if y_i = y_j, the situation 0/0, which is defined as NaN by the IEEE standard, does not appear. Therefore, (3.20) returns ±∞ if the denominator is zero, which can be detected as a special quantity of the IEEE standard and clipped to a regular floating-point number.

3.6 The Calculation of b or ρ

After the solution α of the dual optimization problem is obtained, the variables b or ρ must be calculated as they are used in the decision function. Here we simply describe the case of ν-SVC and ν-SVR where b and ρ both appear. Other formulations are simplified cases of them.

The KKT condition of (3.13) has been shown in (3.15) and (3.16). Now we consider the case y_i = 1. If there are α_i which satisfy 0 < α_i < C, then r_1 = ∇f(α)_i. Practically, to avoid numerical errors, we average them:

    r_1 = ( Σ_{0<α_i<C, y_i=1} ∇f(α)_i ) / ( Σ_{0<α_i<C, y_i=1} 1 ).

On the other hand, if there is no such α_i, as r_1 must satisfy

    max_{α_i=C, y_i=1} ∇f(α)_i ≤ r_1 ≤ min_{α_i=0, y_i=1} ∇f(α)_i,

we take r_1 to be the midpoint of the range.

For y_i = −1, we can calculate r_2 in a similar way. After r_1 and r_2 are obtained,

    ρ = (r_1 + r_2)/2   and   −b = (r_1 − r_2)/2.
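A sketch of this computation for the ν-formulations is given below; the function name and the flat arrays are illustrative only, and G again denotes the gradient ∇f(α).

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Estimate r1 (from y_i = +1) and r2 (from y_i = -1) by averaging G over
    // free variables, or by taking the midpoint of the feasible range from
    // (3.15)-(3.16) when no free variable exists; then recover rho and b.
    static void compute_rho_b(const std::vector<double>& G, const std::vector<int>& y,
                              const std::vector<double>& alpha, double C,
                              double& rho, double& b)
    {
        const double INF = std::numeric_limits<double>::infinity();
        double r[2];
        for (int cls = 0; cls < 2; ++cls) {
            const int yv = (cls == 0) ? +1 : -1;
            double sum = 0.0, lower = -INF, upper = INF;
            int nfree = 0;
            for (std::size_t t = 0; t < G.size(); ++t) {
                if (y[t] != yv) continue;
                if (alpha[t] > 0 && alpha[t] < C) { sum += G[t]; ++nfree; }
                else if (alpha[t] >= C)  lower = std::max(lower, G[t]);  // r >= G_t at the upper bound
                else                     upper = std::min(upper, G[t]);  // r <= G_t at alpha_t = 0
            }
            r[cls] = (nfree > 0) ? sum / nfree : (lower + upper) / 2.0;
        }
        rho = (r[0] + r[1]) / 2.0;       // rho = (r1 + r2) / 2
        b   = -(r[0] - r[1]) / 2.0;      // -b  = (r1 - r2) / 2
    }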

Note that the KKT condition can be written as

    max_{α_i>0, y_i=1} ∇f(α)_i ≤ min_{α_i<C, y_i=1} ∇f(α)_i

and

    max_{α_i>0, y_i=−1} ∇f(α)_i ≤ min_{α_i<C, y_i=−1} ∇f(α)_i.

Hence practically we can use the following stopping criterion: the decomposition method stops if the iterate α satisfies the condition

    max( −min_{α_i<C, y_i=1} ∇f(α)_i + max_{α_i>0, y_i=1} ∇f(α)_i,                     (3.22)
         −min_{α_i<C, y_i=−1} ∇f(α)_i + max_{α_i>0, y_i=−1} ∇f(α)_i ) < ε,             (3.23)

where ε > 0 is a chosen stopping tolerance.

4 Shrinking and Caching

Since for many problems the number of free support vectors (i.e. 0 < α_i < C) is small, the shrinking technique reduces the size of the working problem without considering some bounded variables (Joachims, 1998). Near the end of the iterative process, the decomposition method identifies a possible set A where all final free α_i may reside. Indeed we have the following theorem, which shows that at the final iterations of the decomposition method proposed in Section 3.2 only variables corresponding to a small set are still allowed to move (Lin, 2001c, Theorem II.3):

Theorem 4.1 If lim_{k→∞} α^k = ᾱ, then from Theorem 3.3, ᾱ is an optimal solution. Furthermore, after k is large enough, only elements in

    { t | −y_t ∇f(ᾱ)_t = max( max_{ᾱ_t<C, y_t=1} −∇f(ᾱ)_t,  max_{ᾱ_t>0, y_t=−1} ∇f(ᾱ)_t )
                       = min( min_{ᾱ_t<C, y_t=−1} ∇f(ᾱ)_t,  min_{ᾱ_t>0, y_t=1} −∇f(ᾱ)_t ) }        (4.1)

can still be possibly modified.

Therefore, we tend to guess that if a variable α_i is C for several iterations, then at the final solution it is still at the upper bound. Hence instead of solving the whole problem (2.2), the decomposition method works on a smaller problem:

    min_{α_A}    (1/2) α_A^T Q_AA α_A + (p_A + Q_AN α_N^k)^T α_A
    subject to   0 ≤ (α_A)_t ≤ C,  t = 1, ..., q,                                      (4.2)
                 y_A^T α_A = Δ − y_N^T α_N^k,

where N = {1, ..., l}\A.

Of course this heuristic may fail if the optimal solution of (4.2) is not the corresponding part of that of (2.2). When that happens, the whole problem (2.2) is reoptimized starting from a point α where α_A is an optimal solution of (4.2) and α_N are bounded variables identified before the shrinking process.

Note that while solving the shrinked problem (4.2), we only know the gradient Q_AA α_A + Q_AN α_N + p_A of (4.2). Hence when problem (2.2) is reoptimized, we also have to reconstruct the whole gradient ∇f(α), which is quite expensive.

Many implementations begin the shrinking procedure near the end of the iterative process; in LIBSVM, however, we start the shrinking process from the beginning. The procedure is as follows:

1. After every min(l, 1000) iterations, we try to shrink some variables. Note that during the iterative process

       min( {∇f(α^k)_t | y_t = −1, α_t < C}, {−∇f(α^k)_t | y_t = 1, α_t > 0} ) = −g_j
         < g_i = max( {−∇f(α^k)_t | y_t = 1, α_t < C}, {∇f(α^k)_t | y_t = −1, α_t > 0} ),          (4.3)

   as (3.11) is not satisfied yet. We conjecture that for those

       g_t ≡  −∇f(α)_t  if y_t = 1,  α_t < C,
               ∇f(α)_t  if y_t = −1, α_t > 0,                                           (4.4)

   if

       g_t ≤ −g_j,                                                                      (4.5)

   and α_t resides at a bound, then the value of α_t may not change any more. Hence we inactivate this variable. Similarly, for those

       g_t ≡  −∇f(α)_t  if y_t = −1, α_t < C,
               ∇f(α)_t  if y_t = 1,  α_t > 0,                                           (4.6)

   if

       g_t ≤ −g_i,                                                                      (4.7)

   and α_t is at a bound, it is inactivated. Thus the set A of activated variables is dynamically reduced every min(l, 1000) iterations (a sketch of this check appears after this list).

2. Of course the above shrinking strategy may be too aggressive. Since the decomposition method has a very slow convergence and a large portion of iterations are spent on achieving the final digit of the required accuracy, we would not like those iterations to be wasted because of a wrongly shrinked problem (4.2). Hence when the decomposition method first achieves the tolerance

       g_i ≤ −g_j + 10ε,

   where ε is the specified stopping tolerance, we reconstruct the whole gradient. Then, based on the correct information, we use criteria like (4.4) and (4.6) to inactivate some variables and the decomposition method continues.

Therefore, in LIBSVM, the size of the set A of (4.2) is dynamically reduced.
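The bound checks of step 1 can be sketched as follows. Gmax and Gmin denote the extreme values attained at i and j in (4.3) (the maximal lower bound and minimal upper bound on b); the function and its arguments are illustrative only, and LIBSVM interleaves this check with its gradient bookkeeping and the min(l, 1000)-iteration schedule described above.

    #include <cstddef>
    #include <vector>

    // A variable at a bound is removed from the active set A when it cannot be
    // part of a violating pair any more, following the spirit of (4.4)-(4.7).
    static void shrink_active_set(const std::vector<double>& G, const std::vector<int>& y,
                                  const std::vector<double>& alpha, double C,
                                  double Gmax, double Gmin, std::vector<bool>& active)
    {
        for (std::size_t t = 0; t < G.size(); ++t) {
            if (!active[t]) continue;
            if (alpha[t] <= 0.0) {          // at the lower bound: alpha_t can only increase
                if (y[t] == +1 ? (-G[t] <= Gmin) : (G[t] >= Gmax)) active[t] = false;
            } else if (alpha[t] >= C) {     // at the upper bound: alpha_t can only decrease
                if (y[t] == +1 ? (-G[t] >= Gmax) : (G[t] <= Gmin)) active[t] = false;
            }
            // free variables (0 < alpha_t < C) always stay in A
        }
    }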

To decrease the cost of reconstructing the gradient ∇f(α), during the iterations we always keep

    Ḡ_i = Σ_{α_j = C} Q_ij,  i = 1, ..., l.

Then for the gradient ∇f(α)_i, i ∉ A, we have

    ∇f(α)_i = Σ_{j=1}^{l} Q_ij α_j = C Ḡ_i + Σ_{0<α_j<C} Q_ij α_j.

For ν-SVC and ν-SVR, as the stopping condition (3.23) is different from (3.12), the shrinking strategies (4.5) and (4.7) must be modified. From (4.3), now we have to separate two cases: y_t = 1 and y_t = −1. For y_t = 1, (4.3) becomes

    min{ −∇f(α^k)_t | y_t = 1, α_t > 0 } = −g_j < g_i = max{ −∇f(α^k)_t | y_t = 1, α_t < C },

so we inactivate those α_t with ∇f(α)_t ≥ g_j if y_t = 1 and α_t < C, and those with ∇f(α)_t ≤ −g_i if y_t = 1 and α_t > 0. The case y_t = −1 is similar.

Another technique for reducing the computational time is caching. Since Q is fully dense and may not be stored in the computer memory, elements Q_ij are calculated as needed. Usually a special storage using the idea of a cache is used to store recently used Q_ij (Joachims, 1998). Hence the computational cost of later iterations can be reduced. In LIBSVM, we implement a simple least-recently-used strategy for the cache: we dynamically cache only recently used columns of Q_AA of (4.2).
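The cache can be organized as in the following sketch of a least-recently-used column cache. The class below is illustrative only (LIBSVM's internal cache is written differently); compute_column stands for whatever routine evaluates a full kernel column Q(:, j).

    #include <functional>
    #include <list>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Least-recently-used cache of kernel columns: a column is computed only
    // on a miss, and the oldest column is evicted when the capacity is reached.
    class KernelColumnCache {
    public:
        KernelColumnCache(std::size_t max_columns,
                          std::function<std::vector<double>(int)> compute_column)
            : capacity_(max_columns), compute_(std::move(compute_column)) {}

        const std::vector<double>& get_column(int j) {
            auto it = index_.find(j);
            if (it != index_.end()) {                    // hit: mark as most recently used
                order_.splice(order_.begin(), order_, it->second);
                return it->second->second;
            }
            if (order_.size() >= capacity_) {            // miss: evict the LRU column
                index_.erase(order_.back().first);
                order_.pop_back();
            }
            order_.emplace_front(j, compute_(j));        // compute and store Q(:, j)
            index_[j] = order_.begin();
            return order_.front().second;
        }

    private:
        typedef std::list<std::pair<int, std::vector<double> > > List;
        std::size_t capacity_;
        std::function<std::vector<double>(int)> compute_;
        List order_;                                     // most recently used at the front
        std::unordered_map<int, List::iterator> index_;
    };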

5 Multi-class classification

We use the one-against-one approach (Knerr et al., 1990) in which k(k−1)/2 classifiers are constructed and each one trains data from two different classes. The first use of this strategy on SVM was in (Friedman, 1996; Kreßel, 1999). For training data from the ith and the jth classes, we solve the following binary classification problem:

    min_{w^{ij}, b^{ij}, ξ^{ij}}   (1/2) (w^{ij})^T w^{ij} + C Σ_t (ξ^{ij})_t
    subject to    (w^{ij})^T φ(x_t) + b^{ij} ≥ 1 − ξ_t^{ij},   if x_t is in the ith class,
                  (w^{ij})^T φ(x_t) + b^{ij} ≤ −1 + ξ_t^{ij},  if x_t is in the jth class,
                  ξ_t^{ij} ≥ 0.

In classification we use a voting strategy: each binary classification is considered to be a voting where votes can be cast for all data points x; in the end a point is designated to be in the class with the maximum number of votes. In case two classes have identical votes, though it may not be a good strategy, we simply select the one with the smaller index.

Another method for multi-class classification is the one-against-all approach, in which k SVM models are constructed and the ith SVM is trained with all of the examples in the ith class with positive labels and all other examples with negative labels. We did not consider it as some research works (e.g. (Weston and Watkins, 1998; Platt et al., 2000)) have shown that it does not perform as well as one-against-one. In addition, though we have to train as many as k(k−1)/2 classifiers, as each problem is smaller (only data from two classes), the total training time may not be more than that of the one-against-all method. This is one of the reasons why we choose the one-against-one method. Some detailed comparisons are in (Hsu and Lin, 2002a).
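The voting step itself is simple; a sketch is given below. decide(i, j, x) stands for evaluating the decision function of the classifier trained on classes i and j and returning the winning class; it, like the other names here, is an assumption of this illustration rather than LIBSVM's interface.

    #include <algorithm>
    #include <functional>
    #include <vector>

    // One-against-one prediction: each of the k(k-1)/2 binary classifiers casts
    // one vote; ties are broken by the smaller class index, as described above.
    static int vote_predict(int k, const std::vector<double>& x,
                            const std::function<int(int, int, const std::vector<double>&)>& decide)
    {
        std::vector<int> votes(k, 0);
        for (int i = 0; i < k; ++i)
            for (int j = i + 1; j < k; ++j)
                ++votes[decide(i, j, x)];
        // std::max_element returns the first maximum, i.e. the smaller index on ties
        return (int)(std::max_element(votes.begin(), votes.end()) - votes.begin());
    }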

6 Unbalanced Data

For some classification problems, the numbers of data in different classes are unbalanced. Hence some researchers (e.g. (Osuna et al., 1997a)) have proposed to use different penalty parameters in the SVM formulation. For example, C-SVM becomes

    min_{w,b,ξ}   (1/2) w^T w + C_+ Σ_{y_i=1} ξ_i + C_− Σ_{y_i=−1} ξ_i
    subject to    y_i (w^T φ(x_i) + b) ≥ 1 − ξ_i,
                  ξ_i ≥ 0,  i = 1, ..., l.

Its dual is

    min_α    (1/2) α^T Q α − e^T α
    subject to    0 ≤ α_i ≤ C_+,  if y_i = 1,                                          (6.1)
                  0 ≤ α_i ≤ C_−,  if y_i = −1,                                         (6.2)
                  y^T α = 0.

Note that by replacing C with different C_i, i = 1, ..., l, most of the earlier analysis is still correct. Now using C_+ and C_− is just a special case of it. Therefore, the implementation is almost the same. A main difference is in the solution of the sub-problem (3.19). Now it becomes:

    min_{α_i,α_j}   (1/2) [α_i, α_j] [ Q_ii  Q_ij ; Q_ji  Q_jj ] [α_i ; α_j] + (Q_{i,N} α_N) α_i + (Q_{j,N} α_N) α_j
    subject to      y_i α_i + y_j α_j = Δ − y_N^T α_N^k,                               (6.3)
                    0 ≤ α_i ≤ C_i,  0 ≤ α_j ≤ C_j,

where C_i and C_j can be C_+ or C_− depending on y_i and y_j.

Following the concept of assigning all bounded values explicitly, if y_i ≠ y_j, we use the following segment of code:

    if(y[i]!=y[j])
    {
        double delta = (-G[i]-G[j])/(Q_i[i]+Q_j[j]+2*Q_i[j]);
        double diff = alpha[i] - alpha[j];
        alpha[i] += delta;
        alpha[j] += delta;
        if(diff > 0)
        {
            if(alpha[j] < 0)
            {
                alpha[j] = 0;
                alpha[i] = diff;
            }
        }
        else
        {
            if(alpha[i] < 0)
            {
                alpha[i] = 0;
                alpha[j] = -diff;
            }
        }
        if(diff > C_i - C_j)
        {
            if(alpha[i] > C_i)
            {
                alpha[i] = C_i;
                alpha[j] = C_i - diff;
            }
        }
        else
        {
            if(alpha[j] > C_j)
            {
                alpha[j] = C_j;
                alpha[i] = C_j + diff;
            }
        }
    }

In the above code we basically think of the feasible region as a rectangle with length C_i and width C_j. We then consider the situations where the line segment α_i − α_j = y_i (Δ − y_N^T α_N^k) goes outside of the rectangular region.

Acknowledgments

This work was supported in part by the National Science Council of Taiwan via the grants NSC 89-223-E-002-03 and NSC 89-223-E-002-06. The authors thank Chih-Wei Hsu and Jen-Hao Lee for many helpful discussions and comments. We also thank Ryszard Czerminski and Lily Tian for some useful comments.

References

Chang, C.-C., C.-W. Hsu, and C.-J. Lin (2000). The analysis of decomposition methods for support vector machines. IEEE Transactions on Neural Networks 11(4), 1003-1008.

Chang, C.-C. and C.-J. Lin (2001a). Training ν-support vector classifiers: Theory and algorithms. Neural Computation 13(9), 2119-2147.

Chang, C.-C. and C.-J. Lin (2001b). Training ν-support vector regression: Theory and algorithms. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan.

Cortes, C. and V. Vapnik (1995). Support-vector networks. Machine Learning 20, 273-297.

Crisp, D. J. and C. J. C. Burges (1999). A geometric interpretation of ν-SVM classifiers. In Proceedings of Neural Information Processing, Volume 12, Cambridge, MA. MIT Press.

Friedman, J. (1996). Another approach to polychotomous classification. Technical report, Department of Statistics, Stanford University. Available at http://www-stat.stanford.edu/reports/friedman/poly.ps.z.

Hsu, C.-W. and C.-J. Lin (2002a). A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks. To appear.

Hsu, C.-W. and C.-J. Lin (2002b). A simple decomposition method for support vector machines. Machine Learning 46, 291-314.

Joachims, T. (1998). Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, Cambridge, MA. MIT Press.

Keerthi, S. S. and E. G. Gilbert (2002). Convergence of a generalized SMO algorithm for SVM classifier design. Machine Learning 46, 351-360.

Keerthi, S. S., S. Shevade, C. Bhattacharyya, and K. Murthy (2001). Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation 13, 637-649.

Knerr, S., L. Personnaz, and G. Dreyfus (1990). Single-layer learning revisited: a stepwise procedure for building and training a neural network. In J. Fogelman (Ed.), Neurocomputing: Algorithms, Architectures and Applications. Springer-Verlag.

Kreßel, U. (1999). Pairwise classification and support vector machines. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, Cambridge, MA, pp. 255-268. MIT Press.

Lin, C.-J. (2001a). Asymptotic convergence of an SMO algorithm without any assumptions. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan.

Lin, C.-J. (2001b). On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks 12. To appear.

Lin, C.-J. (2001c). Stopping criteria of decomposition methods for support vector machines: a theoretical justification. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan.

Micchelli, C. A. (1986). Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation 2, 11-22.

Osuna, E., R. Freund, and F. Girosi (1997a). Support vector machines: Training and applications. AI Memo 1602, Massachusetts Institute of Technology.

Osuna, E., R. Freund, and F. Girosi (1997b). Training support vector machines: An application to face detection. In Proceedings of CVPR'97.

Platt, J. C. (1998). Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, Cambridge, MA. MIT Press.

Platt, J. C., N. Cristianini, and J. Shawe-Taylor (2000). Large margin DAGs for multiclass classification. In Advances in Neural Information Processing Systems, Volume 12, pp. 547-553. MIT Press.

Saunders, C., M. O. Stitson, J. Weston, L. Bottou, B. Schölkopf, and A. Smola (1998). Support vector machine reference manual. Technical Report CSD-TR-98-03, Royal Holloway, University of London, Egham, UK.

Schölkopf, B., J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson (2001). Estimating the support of a high-dimensional distribution. Neural Computation 13(7), 1443-1471.

Schölkopf, B., A. Smola, R. C. Williamson, and P. L. Bartlett (2000). New support vector algorithms. Neural Computation 12, 1207-1245.

Vapnik, V. (1998). Statistical Learning Theory. New York, NY: Wiley.

Weston, J. and C. Watkins (1998). Multi-class support vector machines. Technical Report CSD-TR-98-04, Royal Holloway.