Evaluation of simple performance measures for tuning SVM hyperparameters


Kaibo Duan, S. Sathiya Keerthi, Aun Neow Poo
Department of Mechanical Engineering, National University of Singapore, 10 Kent Ridge Crescent, 119260, Singapore

Abstract

Choosing optimal hyperparameters for support vector machines is an important step in SVM design. This is usually done by minimizing either an estimate of generalization error or some other related performance measure. In this paper, we empirically study the usefulness of several simple performance measures that are very inexpensive to compute. The results point out which of these performance measures are adequate functionals for tuning SVM hyperparameters. For SVMs with the L1 soft-margin formulation, none of the simple measures yields a performance as good as k-fold cross-validation.

Keywords: Support vector machine; Model selection; Generalization error estimate; Performance measure; Hyperparameter tuning

1 Introduction

Support vector machines (SVMs) [12] are extensively used as a classification tool in a variety of areas. They map the input (x) into a high-dimensional feature space (z = φ(x)) and construct an optimal hyperplane defined by w·z − b = 0 to separate examples from the two classes. For SVMs with the L1 soft-margin formulation, this is done by solving the primal problem:

  min  (1/2)‖w‖² + C Σᵢ ξᵢ          (P)
  s.t. yᵢ(w·zᵢ − b) ≥ 1 − ξᵢ,  ξᵢ ≥ 0

where xᵢ is the i-th example and yᵢ is the class label value, which is either +1 or −1. (Throughout the paper, l will denote the number of examples.) This problem is computationally solved using the solution of its dual form:

  min  f(α) = (1/2) Σᵢ Σⱼ αᵢ αⱼ yᵢ yⱼ k(xᵢ, xⱼ) − Σᵢ αᵢ          (D)
  s.t. 0 ≤ αᵢ ≤ C for all i;  Σᵢ αᵢ yᵢ = 0

where k(xᵢ, xⱼ) = φ(xᵢ)·φ(xⱼ) is the kernel function that performs the nonlinear mapping. Popular kernel functions are:

  Gaussian kernel:   k(xᵢ, xⱼ) = exp(−‖xᵢ − xⱼ‖² / σ²)
  Polynomial kernel: k(xᵢ, xⱼ) = (1 + xᵢ·xⱼ)^d
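As a concrete illustration of the pieces just introduced (the primal problem (P), its dual (D) and the Gaussian kernel), the following minimal sketch trains a Gaussian-kernel SVM for chosen values of C and σ². The paper does not prescribe any software; scikit-learn, the toy dataset and the particular values below are assumptions made only for illustration, with the library's gamma parameter playing the role of 1/σ².

```python
# Minimal sketch (assumptions: scikit-learn, synthetic data) of an L1 soft-margin
# SVM with the Gaussian kernel k(x, x') = exp(-||x - x'||^2 / sigma^2).
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Two-class toy data standing in for one of the benchmark sets.
X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

C = 10.0          # regularization parameter of the primal problem (P)
sigma_sq = 4.0    # squared width sigma^2 of the Gaussian kernel
# scikit-learn's rbf kernel is exp(-gamma * ||x - x'||^2), so gamma = 1 / sigma^2.
clf = SVC(C=C, kernel="rbf", gamma=1.0 / sigma_sq)
clf.fit(X, y)
print("training error:", 1.0 - clf.score(X, y))
```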

To obtain good performance, some parameters in SVMs have to be chosen carefully. These parameters include: the regularization parameter C, which determines the tradeoff between minimizing the training error and minimizing model complexity; and the parameter (σ or d) of the kernel function, which implicitly defines the nonlinear mapping from the input space to some high-dimensional feature space. (In this paper we particularly focus on the Gaussian kernel.) These higher-level parameters are usually referred to as hyperparameters.

Tuning these hyperparameters is usually done by minimizing an estimated generalization error such as the k-fold cross-validation error or the leave-one-out (LOO) error. While the k-fold cross-validation error requires the solution of several SVMs, the LOO error requires the solution of many (on the order of the number of examples) SVMs. For efficiency, it is useful to have simpler estimates that, though crude, are very inexpensive to compute. During the past few years, several such simple estimates have been proposed. The main aim of this paper is to empirically study the usefulness of these simple estimates as measures for tuning the SVM hyperparameters.

The rest of the paper is organized as follows. A brief review of the performance measures is given in section 2. The settings of the computational experiments are described in section 3. The experimental results are analyzed and discussed in section 4. Finally, some concluding remarks are made in section 5.

2 Performance Measures

In this section, we briefly review the estimates (performance measures) mentioned above.

2.1 K-fold Cross-Validation and LOO

Cross-validation is a popular technique for estimating generalization error and there are several versions. In k-fold cross-validation, the training data is randomly split into k mutually exclusive subsets (the folds) of approximately equal size. The SVM decision rule is obtained using k − 1 of the subsets and then tested on the subset left out. This procedure is repeated k times, and in this fashion each subset is used for testing once. Averaging the test error over the k trials gives an estimate of the expected generalization error. LOO can be viewed as an extreme form of k-fold cross-validation in which k is equal to the number of examples. In LOO, one example is left out for testing each time, and so the training and testing are repeated l times. It is known [9] that the LOO procedure gives an almost unbiased estimate of the expected generalization error. K-fold cross-validation and LOO are applicable to arbitrary learning algorithms. In the case of SVMs, it is not necessary to run the LOO procedure on all l examples, and strategies are available in the literature to speed up the procedure. In spite of that, for tuning SVM hyperparameters, LOO is still very expensive.
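The k-fold cross-validation functional described above can be sketched as follows. The helper names, the grid and the use of scikit-learn are illustrative assumptions, not part of the paper; the idea is simply to evaluate the cross-validation error rate at each (C, σ²) pair and keep the pair with the smallest value.

```python
# Hedged sketch of 5-fold cross-validation as a tuning functional.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def five_fold_cv_error(X, y, C, sigma_sq, k=5):
    """k-fold cross-validation error rate of a Gaussian-kernel SVM at (C, sigma^2)."""
    clf = SVC(C=C, kernel="rbf", gamma=1.0 / sigma_sq)
    return 1.0 - cross_val_score(clf, X, y, cv=k).mean()

def tune_by_cv(X, y, C_grid, sigma_sq_grid):
    """Return the (C, sigma^2) pair with the smallest cross-validation error."""
    scores = {(C, s): five_fold_cv_error(X, y, C, s)
              for C in C_grid for s in sigma_sq_grid}
    return min(scores, key=scores.get)

# Example usage (hypothetical grid):
# best_C, best_sigma_sq = tune_by_cv(X, y, np.logspace(-3, 3, 7), np.logspace(-3, 3, 7))
```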

2.2 X-Alpha Bound

In [7], Joachims developed the following estimate, which is an upper bound on the error rate of the leave-one-out procedure. This estimate can be computed using the αᵢ from the solution of the SVM dual problem (D) and the ξᵢ from the solution of the SVM primal problem (P):

  Err_ξα = (1/l) card{ i : (2 αᵢ R² + ξᵢ) ≥ 1 }          (1)

Here card denotes cardinality and R² is an upper bound on the spread of the kernel values: c ≤ k(x, x′) ≤ c + R² for all x, x′ and some constant c. We refer to the estimate in (1) as the X-Alpha bound.

2.3 Approximate Span Bound

Vapnik et al. [13] introduced a new concept called the span of support vectors. Based on this new concept, they developed a new technique called the span-rule (specially for SVMs) to approximate the LOO estimate. The span-rule not only provides a good functional for SVM hyperparameter selection, but also better reflects the actual error rate. The following upper bound on the LOO error was also proposed in [13]:

  N̂_LOO / l = [ S · max(D, 1/√C) · Σ_{i=1..n*} αᵢ + m ] / l          (2)

where: N̂_LOO is the number of errors in the LOO procedure; Σ_{i=1..n*} αᵢ is the summation of the Lagrange multipliers αᵢ taken over the support vectors of the first category (those for which 0 < αᵢ < C); m is the number of support vectors of the second category (those for which αᵢ = C); S is the span of the support vectors (see [13] for the definition of S); D is the diameter of the smallest sphere containing the training points in the feature space; and the Lagrange multipliers αᵢ are obtained from the training of the SVM on the whole training data of size l.

Although the right-hand side bound in (2) has a simple form, it is expensive to compute the span S. The bound can be further simplified by replacing S with D_SV, the diameter of the smallest sphere in the feature space containing the support vectors of the first category. It was proved in [13] that S ≤ D_SV. Thus, we get

  N̂_LOO / l = [ D_SV · max(D, 1/√C) · Σ_{i=1..n*} αᵢ + m ] / l          (3)

The right-hand side of (2) is referred to as the span bound. Since the bound in (3) is looser than the span bound, we refer to it as the approximate span bound.
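For the X-Alpha bound (1), everything needed is available after a single SVM training run: the αᵢ from the dual solution and the slacks ξᵢ = max(0, 1 − yᵢ f(xᵢ)). A hedged sketch using scikit-learn (an assumption, not the authors' code) is given below; for the Gaussian kernel, 0 < k(x, x′) ≤ 1, so c = 0 and R² = 1 is one valid choice of the constants in (1).

```python
# Sketch of Joachims' Xi-Alpha estimate (1): (1/l) * |{ i : 2*alpha_i*R^2 + xi_i >= 1 }|.
import numpy as np
from sklearn.svm import SVC

def xi_alpha_bound(clf, X, y, R_sq=1.0):
    y_pm = np.where(y == clf.classes_[1], 1.0, -1.0)   # labels as +1 / -1
    f = clf.decision_function(X)                        # f(x_i) = w.z_i - b
    xi = np.maximum(0.0, 1.0 - y_pm * f)                # slack variables xi_i
    alpha = np.zeros(len(X))
    alpha[clf.support_] = np.abs(clf.dual_coef_[0])     # alpha_i (zero for non-SVs)
    return np.mean(2.0 * alpha * R_sq + xi >= 1.0)

# Example usage:
# clf = SVC(C=10.0, kernel="rbf", gamma=1.0 / 4.0).fit(X, y)
# print("Xi-Alpha estimate of the LOO error:", xi_alpha_bound(clf, X, y))
```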

2.4 VC Bound

SVMs are based on the idea of structural risk minimization introduced by statistical learning theory [12]. For the two-class classification problem, the learning machine is actually defined by a set of functions f(x, α), which perform a mapping from an input pattern x to a class label y ∈ {−1, +1}. A particular choice of the adjustable parameter α gives a trained machine. Suppose a set of training examples (x₁, y₁), …, (x_l, y_l) is drawn from some unknown probability distribution P(x, y). Then the expected test error for a trained machine is:

  R(α) = ∫ (1/2) |y − f(x, α)| dP(x, y)

The quantity R(α) is called the expected risk. The empirical risk is defined as the measured mean error rate on the training set:

  R_emp(α) = (1/l) Σ_{i=1..l} (1/2) |yᵢ − f(xᵢ, α)|

For a particular choice of α, with probability 1 − η (0 ≤ η ≤ 1), the following bound holds [12]:

  R(α) ≤ R_emp(α) + sqrt( [ h (log(2l/h) + 1) − log(η/4) ] / l )          (4)

where h is the VC-dimension of the set of functions f(x, α); it describes the capacity of the set of functions. The right-hand side of (4) is referred to as the risk bound. The second term of the risk bound is usually referred to as the VC confidence. For a given learning task, the Structural Risk Minimization Principle [12] chooses the parameter α so that the risk bound is minimal. The main difficulty in applying the risk bound is that it is difficult to determine the VC-dimension of the set of functions. For SVMs, a VC bound was proposed in [2] by approximating the VC-dimension in (4) by a loose bound on it:

  h ≤ D²‖w‖² + 1          (5)

The right-hand side of (5) is a loose bound on the VC-dimension and, if we use this bound to approximate h, we may sometimes get into a situation where 2l/h is so small that the term inside the square root in (4) becomes negative. To avoid this problem, we do the following. Since h is also bounded by l + 1, we simply set h to l + 1 whenever D²‖w‖² + 1 exceeds l + 1.

2.5 Radius-Margin Bound

For SVMs with the hard-margin formulation, it was shown by Vapnik et al. [13] that the following bound holds:

  LOO Err ≤ D²‖w‖² / (4l)          (6)

where w is the weight vector computed by SVM training and D is the diameter of the smallest sphere that contains all the training examples in the feature space. The right-hand side of (6) is usually referred to as the radius-margin bound. The SVM problem with the L2 soft-margin formulation can be converted to the hard-margin SVM problem with a slightly modified kernel function [4]. Chapelle et al. [3] explored the computation of the gradients of D² and ‖w‖², and their results make these gradient computations very easy. In their experiments, they minimize the radius-margin bound using a gradient descent technique, and the results showed that the radius-margin bound can act as a good functional to tune the degree of the polynomial kernel. In this paper, we will study the usefulness of D²‖w‖² as a functional to tune the hyperparameters of SVMs with the Gaussian kernel (both the L1 soft-margin formulation and the L2 soft-margin formulation).
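The quantities entering (5) and (6) can be obtained from a single training run: ‖w‖² = Σᵢ Σⱼ αᵢ αⱼ yᵢ yⱼ k(xᵢ, xⱼ) from the dual solution, and D² from the smallest enclosing sphere of the training points in feature space. The sketch below (scikit-learn and the helper names are assumptions) approximates D² by the largest pairwise feature-space distance instead of solving the enclosing-sphere QP, so it is a rough stand-in for the exact radius-margin computation rather than the authors' implementation.

```python
# Hedged sketch of D^2*||w||^2 and the radius-margin bound (6) for a Gaussian kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def w_norm_sq(clf, X, gamma):
    """||w||^2 = sum_ij alpha_i alpha_j y_i y_j k(x_i, x_j), from the fitted dual."""
    sv = X[clf.support_]
    coef = clf.dual_coef_[0]              # coef_i = y_i * alpha_i for each support vector
    K = rbf_kernel(sv, sv, gamma=gamma)   # gamma = 1 / sigma^2
    return float(coef @ K @ coef)

def diameter_sq_approx(X, gamma):
    """Crude proxy for D^2: largest pairwise feature-space distance,
    ||phi(x_i) - phi(x_j)||^2 = 2 - 2*k(x_i, x_j) for the Gaussian kernel."""
    K = rbf_kernel(X, X, gamma=gamma)
    return float(np.max(2.0 - 2.0 * K))

def radius_margin_bound(clf, X, gamma):
    """Right-hand side of (6): D^2 * ||w||^2 / (4 l)."""
    return diameter_sq_approx(X, gamma) * w_norm_sq(clf, X, gamma) / (4.0 * len(X))

# Example usage:
# clf = SVC(C=10.0, kernel="rbf", gamma=0.25).fit(X, y)
# print("radius-margin bound on the LOO error:", radius_margin_bound(clf, X, gamma=0.25))
```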

3 Computational Experiments

The purpose of our experiments is to see how good the various estimates (bounds) are for tuning the hyperparameters of SVMs. In this paper, we mainly focus on SVMs with the Gaussian kernel. For a given estimator, goodness is evaluated by comparing the true minimum of the test error with the test error at the optimal hyperparameter set found by minimizing the estimate. We did the simulations on five benchmark datasets: Banana, Image, Splice, Waveform and Tree. General information about the datasets is given in Table 1. Detailed information on the first four datasets can be found in [10]. The Tree dataset was originally used by Bailey et al. [1] and was formed from geological remote sensing data; it has two classes: one consists of patterns of trees, and the other consists of non-tree patterns. Note that each of the datasets has a large number of test examples, so that performance on the test set, the test error, can be taken as an accurate reflection of generalization performance.

Table 1. General information about the datasets

  Dataset     Number of input variables   Number of training examples   Number of test examples
  Banana      2                           400                           4900
  Image       18                          1300                          1010
  Splice      60                          1000                          2175
  Waveform    21                          400                           4600
  Tree        18                          700                           11692

One experiment was set up for the SVM with L1 soft-margin formulation. The simple performance measures we tested in this experiment are: 5-fold cross-validation error, X-Alpha bound, VC bound, approximate span bound and D²‖w‖². As we mentioned in section 2, the SVM problem with L2 soft-margin formulation can be converted to the hard-margin SVM problem with a slightly modified kernel function. For the SVM hard-margin formulation, the radius-margin bound can be applied. So, we set up an experiment to see how good the radius-margin bound (D²‖w‖²) is for the L2 soft-margin formulation, particularly with the Gaussian kernel.

In the above two experiments, first we fix the regularization parameter C at some value and vary the width parameter σ² of the Gaussian kernel over a large range, and then we fix the value of σ² and vary the value of C. The fixed values of C and σ² are chosen so that the combination achieves a test error close to the smallest test error rate. (A sketch of this tuning protocol is given below.) Tables 2-5 describe the performance of the various estimates. Both the test error rates and the hyperparameter values at the minima of the different estimates are shown there. However, we must point out that we only searched a finite range of the hyperparameter space, and hence the minima are confined to this finite range. Due to lack of space, we give detailed plots of the estimates as functions of C and σ² only for the Image dataset (Figures 1-4). The plots for the other datasets show similar variations with respect to the two hyperparameters. We make the plots of the other datasets available at: http://guppy.mpe.nus.edu/~mpessk/ncfigures.pdf.
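The sweep just described can be sketched as follows; the grid, the base-2 logarithms and the helper names are assumptions used only to make the protocol concrete: fix C, sweep log σ², record both the estimate and the test error at each grid point, and compare the test error at the estimate's minimum with the true minimum of the test error.

```python
# Hedged sketch of the evaluation protocol: sweep sigma^2 at fixed C.
import numpy as np
from sklearn.svm import SVC

def test_error(C, sigma_sq, X_tr, y_tr, X_te, y_te):
    clf = SVC(C=C, kernel="rbf", gamma=1.0 / sigma_sq).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_te, y_te)

def sweep_sigma(estimate_fn, C, log2_sigma_sq_grid, X_tr, y_tr, X_te, y_te):
    """estimate_fn(C, sigma_sq, X_tr, y_tr) returns the value of a tuning functional.
    Returns (test error at the estimate's minimum, true minimum of the test error)."""
    sigma_sqs = 2.0 ** np.asarray(list(log2_sigma_sq_grid), dtype=float)
    est = [estimate_fn(C, s, X_tr, y_tr) for s in sigma_sqs]
    err = [test_error(C, s, X_tr, y_tr, X_te, y_te) for s in sigma_sqs]
    picked = int(np.argmin(est))          # hyperparameter value chosen by the estimate
    return err[picked], min(err)

# Example wiring with the 5-fold CV functional sketched earlier (hypothetical names):
# cv_fn = lambda C, s, X, y: five_fold_cv_error(X, y, C, s)
# err_at_pick, err_best = sweep_sigma(cv_fn, C=16.0, log2_sigma_sq_grid=range(-10, 11, 2),
#                                     X_tr=X_tr, y_tr=y_tr, X_te=X_te, y_te=y_te)
```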

In order to show the variations of the different estimates in one figure, normalization was done on the estimates when necessary. Since what we really care about is how the variation of the estimate relates to the variation of the test error, rather than how their values are related, this normalization does no harm.

Another experiment was set up to see how the size of the training set affects the performance of the different estimates. The Waveform dataset was used in this experiment. We vary the number of training examples from 200 to 1000. For comparison purposes, for each training set of different size, we use the same test set, which has 4000 examples. As in the other experiments, the performance of each estimate is evaluated by comparing the test error rates at the optimal hyperparameter set found by minimizing the estimate. Figure 5 shows the performance of the various measures as a function of training set size.

4 Analysis and Discussion

Let us analyze the performance of the various estimates, one by one.

K-fold Cross-Validation: On each dataset, 5-fold cross-validation produced a curve that not only has a minimum very close to that of the test error curve, but also has a shape very similar to the curve of the test error. Of all the estimates, 5-fold cross-validation yielded the best performance. Even for a small training set with 200 examples, 5-fold cross-validation gave a quite good estimate of the generalization error (see Figure 5). Recently, a lot of research work has been devoted to speeding up the LOO procedure so that it can be used to tune the hyperparameters of SVMs. Some of those speed-up strategies, such as alpha seeding [6] and loose tolerance [8], can be easily carried over from LOO to k-fold cross-validation. Thus, k-fold cross-validation is also an efficient technique for tuning SVM hyperparameters.

X-Alpha Bound: The X-Alpha bound is a very simple bound, which can be computed without any extra work after the SVM is trained on the whole training data. Although it produced a curve that has a shape slightly different from that of the test error, in most of the cases the predicted hyperparameters gave performance reasonably close to the best one in terms of test error. We also notice that, at low C values, the X-Alpha bound gives an estimate that is very close to the test error. This is because, at low C values, the αᵢ are small and hence the X-Alpha estimate in (1) is very close to the LOO estimate. Another nice property of the X-Alpha bound is that, irrespective of the size of the training set, it always gives an estimate reasonably close to the true minimum in terms of test error (see Figure 5).

To see the correlation of the above two estimates (the k-fold cross-validation estimate and the X-Alpha bound) with the test error, we tried combinations of C and σ² over a very large range and generated a plot that takes the test error as one coordinate and the estimate as the other coordinate.

Each point on the plot corresponds to one combination of C and σ². The plot is shown in Figure 6. Since we are especially interested in points at which the estimate and the test error take small values, the figure is magnified to focus only on this particular area. This plot shows that the 5-fold cross-validation estimate has a much better correlation with the test error.

Approximate Span Bound: In [13], Vapnik et al. effectively used the span-based idea for tuning SVM hyperparameters. In the approximate span bound, S is replaced by D_SV. The poor behavior of this bound is probably due to the fact that D_SV is a poor approximation of S.

VC Bound: The experiments show that the VC bound is not good for tuning SVM hyperparameters, at least for the datasets used by us. However, for another dataset, Burges [2] found this bound to be useful for determining a good value for σ². Therefore, it is not clear how useful this bound is. It is quite possible that the goodness of the VC bound depends on how well D²‖w‖² + 1 approximates the VC-dimension h.

D²‖w‖² for the L1 Soft-Margin Formulation: Let us now consider D²‖w‖² for the L1 soft-margin formulation. Figures 1 and 2 clearly show the inadequacy of this measure for tuning hyperparameters. The plots for the other datasets are also very similar. The inadequacy can be easily explained. We can prove that, for an SVM with the Gaussian kernel, D²‖w‖² goes to zero as C goes to zero or as σ² goes to infinity. First, let us fix σ² and consider the variation of D²‖w‖² as C goes to zero. We have

  ‖w‖² = Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ yᵢ yⱼ k(xᵢ, xⱼ) ≤ Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ k(xᵢ, xⱼ) ≤ Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ ≤ l² C²

(using 0 < k(xᵢ, xⱼ) ≤ 1 and 0 ≤ αᵢ ≤ C). Since D² is independent of C and upper-bounded by 4, it easily follows that, as C goes to zero, ‖w‖² goes to zero and so does D²‖w‖². Now let us fix C at a finite value and consider the variation of D²‖w‖² as σ² goes to infinity. We have

  D²‖w‖² = D² Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ yᵢ yⱼ k(xᵢ, xⱼ) ≤ 4 Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ yᵢ yⱼ k(xᵢ, xⱼ)

As σ² goes to infinity, k(xᵢ, xⱼ) goes to 1 and, since the alpha variables are bounded by C, we have, in the limit,

  Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ yᵢ yⱼ k(xᵢ, xⱼ) → Σ_{i=1..l} Σ_{j=1..l} αᵢ αⱼ yᵢ yⱼ = ( Σ_{i=1..l} αᵢ yᵢ )² = 0

where the last equality follows from the dual constraint Σᵢ αᵢ yᵢ = 0. Thus, as σ² goes to infinity, D²‖w‖² goes to zero.

Cristianini et al. [5] showed that D²‖w‖² is good for tuning the width of the Gaussian kernel for the hard-margin SVM. The asymptotic movement of D²‖w‖² to zero as σ² goes to infinity that we established above holds only when C is fixed at a finite value. When C is infinite (the hard-margin case), the alpha variables are unbounded and hence our proof does not hold. Thus, what we have shown is not in any way inconsistent with the results of Cristianini et al. Schölkopf et al. [11] showed that D²‖w‖² is good for tuning the degree of the polynomial kernel for SVMs with the L1 soft-margin formulation. Our experiments and analysis on D²‖w‖² are limited to SVMs with the Gaussian kernel. Although D²‖w‖² is inadequate for tuning hyperparameters for the SVM with Gaussian kernel, possibly it can still be used to tune the degree of the polynomial kernel, as Schölkopf et al. did.

D²‖w‖² for the L2 Soft-Margin Formulation: Earlier, we pointed out that D²‖w‖² is inadequate for tuning hyperparameters for the SVM L1 soft-margin formulation with the Gaussian kernel. However, for SVMs with the L2 soft-margin formulation, which can be converted to an SVM hard-margin problem, our experiments show that the radius-margin bound gives a very good estimate of the optimal hyperparameters. This agrees with the results of Chapelle et al. [3], where the radius-margin bound is chosen as the functional that is minimized using gradient descent. However, we notice that the radius-margin bound may have more than one minimum (see Figure 3). Typically, there is one local minimum whose value of the radius-margin bound is higher than the least radius-margin bound value. This local minimum is usually located at a very large σ² value. Thus, minimizing the radius-margin bound using a gradient descent technique, as Chapelle et al. did, can get stuck at a local minimum of the radius-margin bound. So, choosing a proper starting point for the gradient descent search is important.
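Returning to the limiting behaviour of D²‖w‖² for the L1 formulation established above, the following small numerical check (scikit-learn and the synthetic data are assumptions, not the paper's experiments) shows the product shrinking toward zero as C decreases with σ² held fixed, which is exactly why minimizing it cannot select C in this setting.

```python
# Numerical illustration: D^2*||w||^2 -> 0 as C -> 0 for a Gaussian-kernel L1 SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
gamma = 1.0 / 4.0                                    # sigma^2 = 4 held fixed

K_full = rbf_kernel(X, X, gamma=gamma)
D_sq = float(np.max(2.0 - 2.0 * K_full))             # crude proxy for the diameter squared

for C in [10.0, 1.0, 0.1, 0.01, 0.001]:
    clf = SVC(C=C, kernel="rbf", gamma=gamma).fit(X, y)
    coef = clf.dual_coef_[0]                          # y_i * alpha_i on the support vectors
    K_sv = rbf_kernel(X[clf.support_], X[clf.support_], gamma=gamma)
    w_sq = float(coef @ K_sv @ coef)                  # ||w||^2 from the dual solution
    print(f"C = {C:7.3f}   D^2*||w||^2 = {D_sq * w_sq:.6f}")   # decreases toward 0
```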

[Figure 1: two panels for the Image dataset at log C = 4.0; horizontal axis: log σ²; panel (a): X-Alpha Bound and 5-fold CV Err; panel (b): VC Bound, Approx Span Bound and D²‖w‖².]

Figure 1: Variation of X-Alpha Bound, 5-fold CV Err, Test Err, VC Bound, Approximate Span Bound, and D²‖w‖² with respect to σ² for a fixed C value, for the SVM L1 soft-margin formulation. In (b), the vertical axis is normalized differently for X-Alpha Bound, Approximate Span Bound and D²‖w‖². For each curve, a marker denotes the minimum point.

[Figure 2: two panels for the Image dataset at a fixed σ²; horizontal axis: log C; panel (a): X-Alpha Bound and 5-fold CV Err; panel (b): VC Bound, Approx Span Bound and D²‖w‖².]

Figure 2: Variation of X-Alpha Bound, 5-fold CV Err, Test Err, VC Bound, Approximate Span Bound, and D²‖w‖² with respect to C for a fixed σ² value, for the SVM L1 soft-margin formulation. In (b), the vertical axis is normalized differently for X-Alpha Bound, Approximate Span Bound and D²‖w‖². For each curve, a marker denotes the minimum point.

[Figure 3: one panel for the Image dataset at log C = 0.44; horizontal axis: log σ²; curves: Test Err and D²‖w‖².]

Figure 3: Variation of Test Err and D²‖w‖² with respect to σ² for a fixed C value, for the SVM L2 soft-margin formulation. The vertical axis for D²‖w‖² is normalized. For each curve, a marker denotes the minimum point.

[Figure 4: one panel for the Image dataset at log σ² = -0.9; horizontal axis: log C; curves: Test Err and D²‖w‖².]

Figure 4: Variation of Test Err and D²‖w‖² with respect to C for a fixed σ² value, for the SVM L2 soft-margin formulation. The vertical axis for D²‖w‖² is normalized. For each curve, a marker denotes the minimum point.

Table 2: The value of Test Err at the minima of the different criteria for fixed C values, for the SVM L1 soft-margin formulation. The values in parentheses are the corresponding logarithms of σ² at the minima.

  Criterion           Banana          Image           Splice          Waveform        Tree
                      log C = 5.0     log C = 4.0     log C = 0.40    log C = .40     log C = 8.60
  Test Err            043 (0.60)      0.088 (.00)     0.0947 (3.40)   0 (3.0)         089 (3.80)
  5-fold CV Err       76 (.30)        0.098 (.0)      0.0975 (3.0)    59 (4.40)       44 (5.0)
  X-Alpha Bound       453 (-.0)       0.057 (.00)     0.0979 (3.80)   035 (3.0)       55 (.0)
  VC Bound            0.4094 (8.90)   0.564 (0.0)     766 (8.40)      93 (0.0)        0.609 (-0.0)
  Approx Span Bound   943 (6.60)      436 (6.50)      407 (5.60)      43 (5.0)        356 (9.80)
  D²‖w‖²              0.5594 (0.0)    0.564 (0.0)     0.4800 (0.0)    93 (0.0)        67 (-.40)

Table 3: The value of Test Err at the minima of the different criteria for fixed σ² values, for the SVM L1 soft-margin formulation. The values in parentheses are the corresponding logarithms of C at the minima.

  Criterion           Banana           Image            Splice          Waveform        Tree
                      log σ² = 0.60    log σ² = .0      log σ² = 3.40   log σ² = 3.0    log σ² = 3.80
  Test Err            045 (5.0)        0.078 (4.30)     0.0947 (0.40)   0 (.40)         089 (8.60)
  5-fold CV Err       78 (9.00)        0.098 (6.0)      0.0947 (0.50)   0 (0.0)         8 (4.80)
  X-Alpha Bound       86 (9.30)        0.098 (6.70)     398 (-.70)      487 (-.80)      60 (9.60)
  VC Bound            987 (-3.0)       584 (-3.6)       0.4800 (-0.0)   93 (-0.0)       0.609 (-0.0)
  Approx Span Bound   5 (.80)          0.0535 (-0.60)   36 (-0.90)      0 (0.0)         363 (.0)
  D²‖w‖²              0.5594 (-0.0)    0.564 (-0.0)     0.4800 (-0.0)   93 (-0.0)       0.609 (-0.0)

Table 4: The value of Test Err at the minima of the different criteria for fixed C values, for the SVM L2 soft-margin formulation. The values in parentheses are the corresponding logarithms of σ² at the minima.

  Criterion   Banana           Image           Splice          Waveform       Tree
              log C = -0.90    log C = 0.44    log C = 6.9     log C = 0      log C = 9.80
  Test Err    8 (-.40)         0.038 (0.50)    0.0947 (3.30)   0.099 (.80)    049 (4.60)
  D²‖w‖²      4 (-.60)         0.097 (-0)      00 (3.0)        0 (.0)         67 (-.40)

Table 5: The value of Test Err at the minima of the different criteria for fixed σ² values, for the SVM L2 soft-margin formulation. The values in parentheses are the corresponding logarithms of C at the minima.

  Criterion   Banana          Image           Splice          Waveform        Tree
              log σ² = -.39   log σ² = -0.9   log σ² = 3.07   log σ² = .80    log σ² = 4.60
  Test Err    8 (0.0)         0.08 (.40)      007 (.0)        0.099 (0.0)     049 (9.80)
  D²‖w‖²      7 (-0.90)       0.097 (0.40)    06 (9.0)        007 (-0.60)     43 (-.40)

[Figure 5: two panels for the Waveform dataset; horizontal axis: Num of Training Examples; vertical axis: Test Err at the minima of the various measures; panel (a): X-Alpha Bound and 5-fold CV Err; panel (b): VC Bound, Approx Span Bound and D²‖w‖².]

Figure 5: Performance of the various measures for different training set sizes. The Waveform dataset has been used in this experiment. The following values were tried for the number of training examples: 200, 400, 600, 800, and 1000. The number of test examples is 4000.

[Figure 6: two scatter plots for the Waveform dataset; panel (a): 5-fold CV Err versus test error; panel (b): X-Alpha Bound versus test error.]

Figure 6: Correlation of the 5-fold cross-validation estimate and the X-Alpha bound with the test error. Each point corresponds to one combination of C and σ². Each figure has been magnified to show only points where the test error and the estimate take small values. The points with the least value of the estimate are marked by +.

5 Conclusions

We have tested several easy-to-compute performance measures for SVMs with the L1 soft-margin formulation and SVMs with the L2 soft-margin formulation. The conclusions are:

- 5-fold cross-validation gives an excellent estimate of the generalization error. For the L1 soft-margin SVM formulation, none of the other measures yields a performance as good as 5-fold cross-validation. It even gives a good estimate on small training sets. The 5-fold cross-validation estimate also has a very good correlation with the test error.

- The X-Alpha bound can find a reasonably good hyperparameter set for the SVM, at which the test error is close to the true minimum of the test error. But the hyperparameters sometimes may not be close to the optimal ones. A nice property of this estimate is that it performs well over a range of training set sizes.

- The approximate span bound and the VC bound cannot give a useful prediction of the optimal hyperparameters. This is probably because the approximations introduced into these bounds are too loose.

- For the SVM L1 soft-margin formulation, D²‖w‖² is inadequate for tuning the hyperparameters.

- The radius-margin bound gives a very good prediction of the optimal hyperparameters for the SVM L2 soft-margin formulation. However, the possibility of local minima should be taken into consideration when this bound is minimized using a gradient descent method.

References

[1] R.R. Bailey, E.J. Pettit, R.T. Borochoff, M.T. Manry, and X. Jiang, Automatic Recognition of USGS Land Use/Cover Categories Using Statistical and Neural Networks Classifiers, in: Proceedings of SPIE OE/Aerospace and Remote Sensing, SPIE, 1993.

[2] C.J.C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, Vol. 2, No. 2 (1998) 121-167.

[3] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee, Choosing Kernel Parameters for Support Vector Machines, submitted to Machine Learning, 2000. Available: http://www.ens-lyon.fr/~ochapell/kernel_params.ps.gz

[4] C. Cortes and V. Vapnik, Support Vector Networks, Machine Learning 20 (1995) 273-297.

[5] N. Cristianini, C. Campbell and J. Shawe-Taylor, Dynamically Adapting Kernels in Support Vector Machines, in: M. Kearns, S. Solla and D. Cohn, Eds., Advances in Neural Information Processing Systems, Vol. 11 (MIT Press, 1999) 204-210.

[6] D. DeCoste and K. Wagstaff, Alpha Seeding for Support Vector Machines, in: Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD-2000).

[7] T. Joachims, The Maximum-Margin Approach to Learning Text Classifiers: Method, Theory and Algorithms, Ph.D. Thesis, Department of Computer Science, University of Dortmund, 2000.

[8] J.H. Lee and C.J. Lin, Automatic Model Selection for Support Vector Machines, Technical Report, Department of Computer Science and Information Engineering, National Taiwan University, 2000.

[9] Luntz and V. Brailovsky, On Estimation of Characters Obtained in Statistical Procedure of Recognition, Technicheskaya Kibernetica, 3 (1969) (in Russian).

[10] G. Rätsch, Benchmark Datasets, 1999. Available: http://ida.first.gmd.de/~raetsch/data/benchmarks.htm

[11] B. Schölkopf, C. Burges, and V. Vapnik, Extracting Support Data for a Given Task, in: U.M. Fayyad and R. Uthurusamy, Eds., Proceedings of the First International Conference on Knowledge Discovery & Data Mining (AAAI Press, Menlo Park, 1995).

[12] V. Vapnik, Statistical Learning Theory (John Wiley & Sons, 1998).

[13] V. Vapnik and O. Chapelle, Bounds on Error Expectation for Support Vector Machines, in: Smola, Bartlett, Schölkopf and Schuurmans, Eds., Advances in Large Margin Classifiers (MIT Press, 1999).