Support Vector Machines


Chih-Jen Lin
Department of Computer Science
National Taiwan University
Talk at Machine Learning Summer School 2006, Taipei

Outline
- Basic concepts
- SVM primal/dual problems
- Training linear and nonlinear SVMs
- Parameter/kernel selection and practical issues
- Multi-class classification
- Discussion and conclusions

Basic concepts

Why SVM and Kernel Methods
- SVM: in many cases competitive with existing classification methods
- Relatively easy to use
- Kernel techniques: many extensions (regression, density estimation, kernel PCA, etc.)

Support Vector Classification
Training vectors: x_i, i = 1, ..., l. These are feature vectors; for example, a patient = [height, weight, ...].
Consider a simple case with two classes. Define an indicator vector y:
y_i = +1 if x_i is in class 1, y_i = -1 if x_i is in class 2.
We look for a hyperplane which separates all data.

A separating hyperplane: w^T x + b = 0
w^T x_i + b > 0 if y_i = +1
w^T x_i + b < 0 if y_i = -1
(Figure: hyperplanes w^T x + b = +1, 0, -1.)
Decision function: f(x) = sgn(w^T x + b), where x is a test point.
There are many possible choices of w and b.

Maximal Margin
Distance between w^T x + b = 1 and w^T x + b = -1: 2/||w|| = 2/sqrt(w^T w).
Maximizing it is a quadratic programming problem [Boser et al., 1992]:
min_{w,b} (1/2) w^T w
subject to y_i (w^T x_i + b) >= 1, i = 1, ..., l.

Data May Not Be Linearly Separable
(Figure: a non-separable example.)
Two remedies:
- Allow training errors
- Map to a higher dimensional (maybe infinite) feature space: φ(x) = (φ_1(x), φ_2(x), ...)

Standard SVM [Cortes and Vapnik, 1995]
min_{w,b,ξ} (1/2) w^T w + C Σ_{i=1}^{l} ξ_i
subject to y_i (w^T φ(x_i) + b) >= 1 - ξ_i, ξ_i >= 0, i = 1, ..., l.
Example: x ∈ R^3, φ(x) ∈ R^10:
φ(x) = (1, √2 x_1, √2 x_2, √2 x_3, x_1^2, x_2^2, x_3^2, √2 x_1 x_2, √2 x_1 x_3, √2 x_2 x_3)

Finding the Decision Function
w: maybe infinite variables. The dual problem:
min_α (1/2) α^T Q α - e^T α
subject to 0 <= α_i <= C, i = 1, ..., l, and y^T α = 0,
where Q_ij = y_i y_j φ(x_i)^T φ(x_j) and e = [1, ..., 1]^T.
At the optimum, w = Σ_{i=1}^{l} α_i y_i φ(x_i).
This is a finite problem: #variables = #training data.
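As an illustration of this dual, here is a minimal sketch that solves it with a generic QP solver (cvxopt) on a toy linear-kernel problem. This is for exposition only and is not how LIBSVM works; LIBSVM uses the decomposition methods described later.

import numpy as np
from cvxopt import matrix, solvers

X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
y = np.array([-1., -1., 1., 1.])            # class by the second coordinate
C, l = 1.0, len(y)

K = X @ X.T                                 # linear kernel: K_ij = x_i^T x_j
Q = np.outer(y, y) * K + 1e-8 * np.eye(l)   # tiny ridge for numerical stability

# min (1/2) a^T Q a - e^T a  s.t.  0 <= a_i <= C,  y^T a = 0
solvers.options['show_progress'] = False
sol = solvers.qp(matrix(Q), matrix(-np.ones(l)),
                 matrix(np.vstack([-np.eye(l), np.eye(l)])),
                 matrix(np.hstack([np.zeros(l), C * np.ones(l)])),
                 matrix(y.reshape(1, -1)), matrix(0.0))
alpha = np.array(sol['x']).ravel()
w = (alpha * y) @ X                         # w = sum_i alpha_i y_i x_i
print(alpha.round(3), w.round(3))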

Kernel Tricks
Q_ij = y_i y_j φ(x_i)^T φ(x_j) needs a closed form.
Example: x_i ∈ R^3, φ(x_i) ∈ R^10:
φ(x_i) = (1, √2 (x_i)_1, √2 (x_i)_2, √2 (x_i)_3, (x_i)_1^2, (x_i)_2^2, (x_i)_3^2, √2 (x_i)_1 (x_i)_2, √2 (x_i)_1 (x_i)_3, √2 (x_i)_2 (x_i)_3).
Then φ(x_i)^T φ(x_j) = (1 + x_i^T x_j)^2.
Kernel: K(x, y) = φ(x)^T φ(y). Common kernels:
e^{-γ ||x_i - x_j||^2} (radial basis function)
(x_i^T x_j / a + b)^d (polynomial)
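A quick numerical check (a sketch, not part of the talk) that the explicit degree-2 map above matches the closed form (1 + x^T y)^2:

import numpy as np

def phi(x):
    s = np.sqrt(2.0)
    return np.array([1, s*x[0], s*x[1], s*x[2],
                     x[0]**2, x[1]**2, x[2]**2,
                     s*x[0]*x[1], s*x[0]*x[2], s*x[1]*x[2]])

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])
print(phi(x) @ phi(y))     # 30.25
print((1 + x @ y) ** 2)    # 30.25, the same value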

A kernel can be an inner product in an infinite dimensional space. Assume x ∈ R^1 and γ > 0:
e^{-γ ||x_i - x_j||^2} = e^{-γ (x_i - x_j)^2} = e^{-γ x_i^2 + 2γ x_i x_j - γ x_j^2}
= e^{-γ x_i^2 - γ x_j^2} ( 1 + (2γ x_i x_j)/1! + (2γ x_i x_j)^2/2! + (2γ x_i x_j)^3/3! + ... )
= φ(x_i)^T φ(x_j),
where
φ(x) = e^{-γ x^2} [ 1, sqrt(2γ/1!) x, sqrt((2γ)^2/2!) x^2, sqrt((2γ)^3/3!) x^3, ... ]^T.
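The expansion is easy to verify numerically; here is a short sketch with illustrative values of γ, x_i, x_j, truncating the series after 30 terms:

from math import exp, factorial

gamma, xi, xj, terms = 0.5, 0.7, -0.3, 30
series = sum((2 * gamma * xi * xj) ** k / factorial(k) for k in range(terms))
approx = exp(-gamma * xi**2) * exp(-gamma * xj**2) * series
exact = exp(-gamma * (xi - xj) ** 2)
print(approx, exact)   # both are about 0.6065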

More about Kernels
How do we know kernels help to separate data? In R^l, any l independent vectors are linearly separable: solve
[ (x^1)^T ; ... ; (x^l)^T ] w = [ +e ; -e ].
If K is positive definite ⇒ the data are linearly separable: with K = LL^T, the training points are transformed to independent vectors in R^l.

So what kind of kernel should I use? What kind of functions are valid kernels? How to decide kernel parameters? These will be discussed later.

Decision Function
At the optimum, w = Σ_{i=1}^{l} α_i y_i φ(x_i), so the decision function is
w^T φ(x) + b = Σ_{i=1}^{l} α_i y_i φ(x_i)^T φ(x) + b = Σ_{i=1}^{l} α_i y_i K(x_i, x) + b.
Only the φ(x_i) with α_i > 0 are used ⇒ support vectors.
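A sketch of this prediction rule. The support vectors, coefficients α_i, and b below are illustrative values, not the output of an actual training run:

import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def decision(x, X_sv, y_sv, alpha_sv, b, gamma=1.0):
    # f(x) = sgn( sum_i alpha_i y_i K(x_i, x) + b ) over support vectors only
    s = sum(a * yi * rbf(xi, x, gamma) for a, yi, xi in zip(alpha_sv, y_sv, X_sv))
    return np.sign(s + b)

X_sv = np.array([[0., 0.], [1., 1.]])
y_sv = np.array([-1., 1.])
alpha_sv = np.array([0.5, 0.5])
print(decision(np.array([2., 2.]), X_sv, y_sv, alpha_sv, b=0.0))   # 1.0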

Support Vectors: More Important Data

So we have roughly shown the basic ideas of SVM.
A 3-D demonstration: cjlin/libsvmtools/svmtoy3d
Further references: for example, [Cristianini and Shawe-Taylor, 2000, Schölkopf and Smola, 2002].
Also see the discussion on the kernel machines blackboard.

SVM primal/dual problems

Deriving the Dual
Consider the problem without ξ_i:
min_{w,b} (1/2) w^T w
subject to y_i (w^T φ(x_i) + b) >= 1, i = 1, ..., l.
Its dual:
min_α (1/2) α^T Q α - e^T α
subject to 0 <= α_i, i = 1, ..., l, and y^T α = 0.

Lagrangian Dual
max_{α >= 0} ( min_{w,b} L(w, b, α) ),
where
L(w, b, α) = (1/2) ||w||^2 - Σ_{i=1}^{l} α_i ( y_i (w^T φ(x_i) + b) - 1 ).
Strong duality (be careful about this):
min Primal = max_{α >= 0} ( min_{w,b} L(w, b, α) ).

Simplify the dual. When α is fixed,
min_{w,b} L(w, b, α) =
  -∞  if Σ_{i=1}^{l} α_i y_i ≠ 0,
  min_w (1/2) w^T w - Σ_{i=1}^{l} α_i ( y_i w^T φ(x_i) - 1 )  if Σ_{i=1}^{l} α_i y_i = 0.
If Σ_{i=1}^{l} α_i y_i ≠ 0, we can decrease the term -b Σ_{i=1}^{l} α_i y_i in L(w, b, α) to -∞.

If Σ_{i=1}^{l} α_i y_i = 0, the optimum of the strictly convex function
(1/2) w^T w - Σ_{i=1}^{l} α_i ( y_i w^T φ(x_i) - 1 )
happens when ∂L(w, b, α)/∂w = 0. Thus,
w = Σ_{i=1}^{l} α_i y_i φ(x_i).

Note that
w^T w = ( Σ_{i=1}^{l} α_i y_i φ(x_i) )^T ( Σ_{j=1}^{l} α_j y_j φ(x_j) ) = Σ_{i,j} α_i α_j y_i y_j φ(x_i)^T φ(x_j).
The dual is
max_{α >= 0}
  Σ_{i=1}^{l} α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j φ(x_i)^T φ(x_j)  if Σ_{i=1}^{l} α_i y_i = 0,
  -∞  if Σ_{i=1}^{l} α_i y_i ≠ 0.

Lagrangian dual: max_{α >= 0} ( min_{w,b} L(w, b, α) ).
-∞ is definitely not the maximum of the dual, so a dual optimal solution does not happen when Σ_{i=1}^{l} α_i y_i ≠ 0. The dual is simplified to
max_{α ∈ R^l} Σ_{i=1}^{l} α_i - (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j φ(x_i)^T φ(x_j)
subject to y^T α = 0, α_i >= 0, i = 1, ..., l.

More about Dual Problems
After SVM became popular, quite a few people think that for any optimization problem, the Lagrangian dual exists and strong duality holds. Wrong! We usually need
- convex programming, and
- a constraint qualification.
We have both: the SVM primal is convex and has linear constraints.

Our problems may be infinite dimensional. We can still use Lagrangian duality; see a rigorous discussion in [Lin, 2001].

Training linear and nonlinear SVMs

Training Nonlinear SVMs
If using kernels, we solve the dual
min_α (1/2) α^T Q α - e^T α
subject to 0 <= α_i <= C, i = 1, ..., l, and y^T α = 0.
This is a large, dense quadratic program: in general Q_ij ≠ 0, so Q is an l by l fully dense matrix. 30,000 training points means 30,000 variables, and storing Q takes 30,000^2 × 8 / 2 bytes ≈ 3GB of RAM. Traditional methods (Newton, quasi-Newton) cannot be directly applied.

Decomposition Methods
Work on some variables each time (e.g., [Osuna et al., 1997, Joachims, 1998, Platt, 1998]); similar to coordinate-wise minimization. Working set B; N = {1, ..., l} \ B is fixed. Sub-problem at each iteration:
min_{α_B} (1/2) [α_B^T (α_N^k)^T] [Q_BB Q_BN; Q_NB Q_NN] [α_B; α_N^k] - [e_B^T e_N^T] [α_B; α_N^k]
subject to 0 <= α_t <= C, t ∈ B, and y_B^T α_B = -y_N^T α_N^k.

Avoid Memory Problems
The new objective function is
(1/2) α_B^T Q_BB α_B + (-e_B + Q_BN α_N^k)^T α_B + constant.
Only |B| columns of Q are needed, and they are calculated when used: trade time for space.
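A sketch of "calculated when used": only the |B| kernel columns required by the sub-problem are formed, with an LRU cache standing in for LIBSVM's kernel cache (the -m option). The data and working set below are illustrative:

import numpy as np
from functools import lru_cache

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = np.sign(rng.randn(1000))

@lru_cache(maxsize=200)        # recently used columns stay in memory
def q_column(j):
    # Q[:, j] = y * y_j * K(:, j), here with an RBF kernel
    k = np.exp(-np.sum((X - X[j]) ** 2, axis=1))
    return y * y[j] * k

B = [3, 17, 42]                # current working set (illustrative)
Q_B = np.column_stack([q_column(j) for j in B])
print(Q_B.shape)               # (1000, 3): only |B| columns computed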

Does it Really Work?
Compared to Newton or quasi-Newton methods: slow convergence. However, there is no need to have a very accurate α: the prediction
sgn( Σ_{i=1}^{l} α_i y_i K(x_i, x) + b )
is not affected much. In some situations, # support vectors ≪ # training points. With the initial α^1 = 0, some elements are never used. Machine learning knowledge affects optimization.

An example of training 50,000 instances using LIBSVM:
$ ./svm-train -m 200 -c 16 -g 4 22features
optimization finished, #iter = ...
Total nSV = 3370
time 5m1.456s
On a Pentium M 1.4 GHz laptop. Calculating Q may have taken more than 5 minutes. #SVs = 3,370 ≪ 50,000: a good case where some α remain at zero all the time.

Issues of Decomposition Methods
- Working set size/selection
- Asymptotic convergence
- Finite termination & stopping conditions
- Convergence rate
- Numerical issues
Optimization researchers are now also interested in these issues. If interested in them, check my talk to optimization researchers in Rome last year: cjlin/talks/rome.pdf

Caching and Shrinking
These speed up decomposition methods.
Caching [Joachims, 1998]: store recently used kernel columns in computer memory.
100K cache:
$ time ./libsvm-2.81/svm-train -m 0.01 a4a
... s
40M cache:
$ time ./libsvm-2.81/svm-train -m 40 a4a
7.817s

Shrinking [Joachims, 1998]: some bounded elements remain at their bounds until the end, so the problem is heuristically resized to a smaller one. After certain iterations, most bounded elements are identified and do not change anymore [Lin, 2002]. So caching and shrinking are useful.

Caching: Issues
A simple way: store recently used columns. What if, in working set selection, we deliberately select some indices already in the cache? The goal is to minimize the total number of columns calculated, but it is difficult to connect the algorithm to this goal.

SVM doesn't Scale Up
Yes, if you use kernels: training millions of data points is time consuming. But other nonlinear methods face the same problem, e.g., kernel logistic regression. Two possibilities:
1. Linear SVMs: in some situations, can solve much larger problems
2. Approximation

Training Linear SVMs
Linear kernel:
min_{w,b,ξ} (1/2) w^T w + C Σ_{i=1}^{l} ξ_i
subject to y_i (w^T x_i + b) >= 1 - ξ_i, ξ_i >= 0.
At the optimum, ξ_i = max(0, 1 - y_i (w^T x_i + b)).

Substituting this in, the remaining variables are w and b:
min_{w,b} (1/2) w^T w + C Σ_{i=1}^{l} max(0, 1 - y_i (w^T x_i + b)).
#variables = #features + 1. If #features is small, this is easier to solve.
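Since this reformulation is unconstrained in (w, b), one simple way to illustrate it (a hypothetical sketch, not a method from the talk) is subgradient descent on the objective; the step size and iteration count are arbitrary choices:

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.1)
C, eta = 1.0, 0.001
w, b = np.zeros(3), 0.0

for t in range(2000):
    viol = y * (X @ w + b) < 1                  # points with nonzero hinge loss
    grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
    grad_b = -C * y[viol].sum()
    w, b = w - eta * grad_w, b - eta * grad_b

print(np.mean(np.sign(X @ w + b) == y))         # training accuracy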

Traditional optimization methods can then be applied; the training time is similar to methods such as logistic regression. What if #features and #instances are both large? Very challenging; some language/document problems are of this type.

Decomposition Methods for Linear SVMs
Could we still solve the dual by decomposition methods, even if #features is small? Convergence is slow when C is large:
$ bsvm-train -b 500 -c 500 -t 0 australian_scale
optimization finished, #iter = ...
obj = ..., rho = ...
K_ij = x_i^T x_j has rank <= #features, so K is positive semi-definite only. Understanding this is still a research topic.

Decomposition Methods for Linear SVMs (Cont'd)
But there is no need to use a large C: once C is large enough, w stays the same [Keerthi and Lin, 2003], so the decision function stays the same. Remember
w = Σ_{i=1}^{l} α_i y_i x_i ∈ R^n, b ∈ R^1,
so # of {i : 0 < α_i < C} <= n + 1. As C changes, the optimal α share many elements at 0 and C.

Decomposition Methods for Linear SVMs (Cont'd)
Warm start is very effective [Kao et al., 2004]: starting from small C, faster convergence. Using C = 1, 2, 4, 8, ...:
$ bsvm-train -c 500 -t 0 australian_scale
optimization finished, #iter = ...
So decomposition methods can still handle large linear SVMs.

Approximations
When #instances is large and nonlinear kernels are used, the dual is difficult to solve. Subsampling is simple and often effective; from this come many more advanced techniques, e.g., stratified subsampling.

Approximations (Cont'd)
Incremental approaches (e.g., [Syed et al., 1999]): split the data into 10 parts; train on the 1st part ⇒ SVs; train on SVs + 2nd part, ...
Select good points first, by KNN or heuristics, e.g., [Bakır et al., 2005].
Hierarchical settings (e.g., [Yu et al., 2003]): cluster the training data into several groups and build an SVM model for each group.

Approximations (Cont'd)
Use only a subset B to construct w:
w = Σ_{i ∈ B} α_i y_i φ(x_i).
Put this into the primal:
min_{α_B, b, ξ} (1/2) α_B^T Q_BB α_B + C Σ_{i=1}^{l} ξ_i
subject to Q_{:,B} α_B + b y >= e - ξ, ξ >= 0.
Without considering ξ, #variables = |B| + 1.

Approximations (Cont'd)
Selecting B: random [Lee and Mangasarian, 2001], incremental [Keerthi et al., 2006], and many other ways.

All these approaches range from simple to sophisticated. In machine learning, there is very often a balance between simplification and performance.

Parameter/kernel selection and practical issues

Let's Try a Practical Example
A problem from astroparticle physics, with instances in LIBSVM format, e.g.:
1 1:2.6173e+01 2:...e+01 3:...e-01 4:...e+02
Training and testing sets available: 3,089 and 4,000 instances.

The Story Behind this Data Set
User: I am using libsvm in an astroparticle physics application... First, let me congratulate you on a really easy to use and nice package. Unfortunately, it gives me astonishingly bad results...
OK. Please send us your data.
I am able to get 97% test accuracy. Is that good enough for you?
User: You earned a copy of my PhD thesis.

Training and Testing
Training:
$ ./svm-train train.1
optimization finished, #iter = 6131
nSV = 3053, nBSV = 724
Total nSV = 3053
Testing:
$ ./svm-predict test.1 train.1.model test.1.out
Accuracy = 66.925% (2677/4000)
nSV and nBSV: the numbers of SVs and bounded SVs (α_i = C).

Why this Fails
After training, nearly 100% of the data are support vectors, and training and testing accuracy differ greatly:
$ ./svm-predict train.1 train.1.model o
Accuracy = 99.7734% (3082/3089)
Most kernel elements are
K_ij = e^{-||x_i - x_j||^2 / 4} ≈ 1 if i = j, ≈ 0 if i ≠ j,
because some features are in rather large ranges.

Data Scaling
Without scaling, attributes in greater numeric ranges may dominate. Example:
        height   gender
x_1     ...      F
x_2     ...      M
x_3     ...      M
with y_1 = 0, y_2 = 1, y_3 = 1.

The separating hyperplane then becomes almost vertical: it strongly depends on the first attribute, although the second may also be important. Linearly scale the first attribute to [0, 1], e.g., by (1st attribute - 150) / ...
Scaling generally helps, but not always.

Other ways of scaling exist. Scaling is also needed for k-nearest-neighbor and neural network methods, unless the method is scale-invariant.

Data Scaling: Same Factors
A common mistake:
$ ./svm-scale -l -1 -u 1 train.1 > train.1.scale
$ ./svm-scale -l -1 -u 1 test.1 > test.1.scale
Use the same factors on training and testing data:
$ ./svm-scale -s range1 train.1 > train.1.scale
$ ./svm-scale -r range1 test.1 > test.1.scale
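A sketch of the same-factors rule in Python; fit_scaler is a hypothetical helper that mirrors the -s/-r behavior above by computing per-feature min/max on the training set only:

import numpy as np

def fit_scaler(X, lo=-1.0, hi=1.0):
    mn, mx = X.min(axis=0), X.max(axis=0)
    return lambda Z: lo + (hi - lo) * (Z - mn) / (mx - mn)

X_train = np.array([[150., 2.], [180., 4.], [185., 6.]])
X_test = np.array([[170., 3.]])

scale = fit_scaler(X_train)    # factors come from the training data only
print(scale(X_train))
print(scale(X_test))           # the same factors applied to the test data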

After Data Scaling
Train on the scaled data, then predict:
$ ./svm-train train.1.scale
$ ./svm-predict test.1.scale train.1.scale.model test.1.predict
Accuracy = 96.15%
The training accuracy is now
$ ./svm-predict train.1.scale train.1.scale.model
Accuracy = 96.439% (2979/3089)
Default parameters: C = 1, γ = 0.25.

Different Parameters
If we use C = 20, γ = 400:
$ ./svm-train -c 20 -g 400 train.1.scale
$ ./svm-predict train.1.scale train.1.scale.model ...
Accuracy = 100% (3089/3089)
100% training accuracy, but
$ ./svm-predict test.1.scale train.1.scale.model ...
Accuracy = 82.7% (3308/4000)
Very bad test accuracy: overfitting happens.

Overfitting
In theory, you can easily achieve 100% training accuracy. This is useless. When training and predicting on data, we should
- avoid underfitting: small training error
- avoid overfitting: small testing error

(Figure: training and testing points, two symbol types each, illustrating under- and overfitting.)

Parameter Selection
This is important. The parameters are C and the kernel parameters, for example:
γ of e^{-γ ||x_i - x_j||^2}
a, b, d of (x_i^T x_j / a + b)^d
How do we select them so that performance is better?

Parameter Selection (Cont'd)
Also, how to select kernels? E.g., RBF or polynomial? Moreover, how to select methods? E.g., SVM or decision trees?

Performance Evaluation
Given l training data, x_i ∈ R^n, y_i ∈ {+1, -1}, i = 1, ..., l, a learning machine is a map x → f(x, α), f(x, α) = 1 or -1. Different α give different machines. The expected test error (generalization error) is
R(α) = ∫ (1/2) |y - f(x, α)| dP(x, y),
where y is the class of x (i.e., 1 or -1).

P(x, y) is unknown, so we use the empirical risk (training error)
R_emp(α) = (1/(2l)) Σ_{i=1}^{l} |y_i - f(x_i, α)|.
(1/2)|y_i - f(x_i, α)| is the loss. Training errors are not important; only test errors count. Choosing 0 <= η <= 1, with probability at least 1 - η,
R(α) <= R_emp(α) + another term.
A good classification method minimizes both terms at the same time.

But pushing R_emp(α) → 0 makes the other term large. In the SVM
min_{w,b,ξ} (1/2) w^T w + C Σ_{i=1}^{l} ξ_i
subject to y_i (w^T φ(x_i) + b) >= 1 - ξ_i, ξ_i >= 0, i = 1, ..., l,
Σ_{i=1}^{l} ξ_i is related to the training error, and w^T w / 2 is related to the other term, called the regularization term; C balances the two.

Performance Evaluation (Cont'd)
In practice, split the available data into a training and a validation set: train on the training set and test on the validation set. k-fold cross validation: the data are randomly separated into k groups; each time, k - 1 groups are used for training and one for testing.
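A sketch of the k-fold split; the actual training step is left as a comment:

import numpy as np

def kfold_indices(l, k, seed=0):
    idx = np.random.RandomState(seed).permutation(l)
    return np.array_split(idx, k)

folds = kfold_indices(l=20, k=5)
for i, val in enumerate(folds):
    train = np.hstack([f for j, f in enumerate(folds) if j != i])
    # train a model on `train`, evaluate on `val`, then average the k scores
    print(i, len(train), len(val))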

Use CV on the training + validation data, then predict the testing set with the best parameters from CV.

CV and Test Accuracy
If we select parameters so that CV accuracy is the highest, does CV represent future test accuracy? They are slightly different. If we have enough parameters, we can achieve 100% CV as well, e.g., with more parameters than # of training data. So split the available labeled data into training, validation, and testing sets.

Selecting Kernels
RBF, polynomial, or others? Or even combinations? Two situations:
- Too many kernels complicate the selection
- Kernels can be designed to suit target applications

Selecting Kernels (Cont'd)
These are contradicting, but practically it is OK:
- We have few general kernels; RBF, polynomial, etc. are somewhat related, so beginners don't have many choices.
- On the other hand, researchers design many special ones, e.g., string kernels.

Selecting Kernels (Cont'd)
For beginners, use RBF first.
The linear kernel is a special case of RBF: the performance of linear is the same as RBF under certain parameters [Keerthi and Lin, 2003].
Polynomial kernels have numerical difficulties ((<1)^d → 0, (>1)^d → ∞) and more parameters than RBF.

A Simple Procedure
1. Conduct simple scaling on the data
2. Consider the RBF kernel K(x, y) = e^{-γ ||x-y||^2}
3. Use cross-validation to find the best parameters C and γ
4. Use the best C and γ to train the whole training set
5. Test
This is for beginners only; you can do a lot more.
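A sketch of this procedure using scikit-learn's grid search (an assumption for illustration; the talk itself uses LIBSVM's command-line tools). The exponential grids for C and γ are common choices, not values prescribed here:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
grid = {'C': 2.0 ** np.arange(-5, 16, 2),       # steps 2-4 of the procedure
        'gamma': 2.0 ** np.arange(-15, 4, 2)}
search = GridSearchCV(SVC(kernel='rbf'), grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)  # then test with this model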

Contour of Parameter Selection
(Figure: contours of CV accuracy over lg(C) and lg(γ).)

The good region of parameters is quite large: SVM is sensitive to parameters, but not that sensitive. Sometimes default parameters work, but it is good to select them if time allows.

Efficient Parameter Selection
CV on grid points may be time consuming: fine for one or two parameters, but what if there are more? E.g., with feature-wise scaling
K(x, y) = e^{-Σ_{i=1}^{n} γ_i (x_i - y_i)^2},
some features are more important than others. This is still a challenging research issue.

Remember that given parameters C and γ, we solve the SVM to obtain the optimal w or α. Model a function of the parameters and minimize it:
min_{C, γ_1,...,γ_n} f(α(C, γ_1,...,γ_n), C, γ_1,...,γ_n).
But f is usually non-convex. Such functions come from Bayesian frameworks (e.g., [Chu et al., 2003]) or from smoothing a CV bound:
CV(C, γ_1,...,γ_n) <= f(α(C, γ_1,...,γ_n), C, γ_1,...,γ_n).

The minimization: gradient-type methods or global optimization (e.g., genetic algorithms). The difficulty: certainly more effort than a single γ, but the performance may be just similar.

Kernel Combination
How about using
t_1 K_1 + t_2 K_2 + ... + t_r K_r, with t_1 + ... + t_r = 1,
as the kernel? This is related to parameter selection:
t_1 e^{-γ_1 ||x-y||^2} + ... + t_r e^{-γ_r ||x-y||^2}.
If γ_1 is good ⇒ t_1 should be close to 1 and the others close to 0.

[Lanckriet et al., 2004] form a convex f(α(t_1,...,t_r), t_1,...,t_r) when C is fixed: a semi-definite programming problem. But the computational cost is also high; more empirical studies are needed.

Design Kernels
Still a research issue; e.g., in bioinformatics and vision, many new kernels are proposed. But one should be careful that the function is a valid one, i.e.,
K(x, y) = φ(x)^T φ(y).
For example, for any two strings s_1, s_2 we can define
e^{-γ edit(s_1, s_2)}
from the edit distance, but it is not a valid kernel [Cortes et al., 2003].

Mercer Condition
What kind of K_ij can be represented as φ(x_i)^T φ(x_j)?
K(x, y) = φ(x)^T φ(y) if and only if, for all g such that ∫ g(x)^2 dx is finite,
∫∫ K(x, y) g(x) g(y) dx dy >= 0.
A condition developed early last century; however, it is still not easy to check.

Multi-class classification

Multi-class Classification
k classes. One-against-the-rest: train k binary SVMs:
1st class vs. the (2, ..., k)th classes
2nd class vs. the (1, 3, ..., k)th classes
...
This gives k decision functions:
(w^1)^T φ(x) + b_1
...
(w^k)^T φ(x) + b_k

Prediction: arg max_j (w^j)^T φ(x) + b_j.
Reason: if x is in the 1st class, then we should have
(w^1)^T φ(x) + b_1 >= +1
(w^2)^T φ(x) + b_2 <= -1
...
(w^k)^T φ(x) + b_k <= -1
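A sketch of this arg-max rule; the (w^j, b_j) pairs below are illustrative values rather than trained machines:

import numpy as np

def ovr_predict(x, models):
    # models: list of (w_j, b_j), one binary SVM per class
    return int(np.argmax([w @ x + b for (w, b) in models]))

models = [(np.array([1., 0.]), 0.0),
          (np.array([-1., 0.]), 0.0),
          (np.array([0., 1.]), -0.5)]
print(ovr_predict(np.array([0.2, 0.9]), models))   # class 2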

Multi-class Classification (Cont'd)
One-against-one: train k(k-1)/2 binary SVMs, one for each pair (1,2), (1,3), ..., (1,k), (2,3), (2,4), ..., (k-1,k). With 4 classes, 6 binary SVMs:
y_i = 1     y_i = -1    decision function
class 1     class 2     f_12(x) = (w^12)^T x + b_12
class 1     class 3     f_13(x) = (w^13)^T x + b_13
class 1     class 4     f_14(x) = (w^14)^T x + b_14
class 2     class 3     f_23(x) = (w^23)^T x + b_23
class 2     class 4     f_24(x) = (w^24)^T x + b_24
class 3     class 4     f_34(x) = (w^34)^T x + b_34

For a testing point, evaluate all binary SVMs; each predicts a winning class. Select the class with the largest number of votes. Decision values may be used as well.
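A sketch of the voting rule; the pairwise decision function here is a stand-in (nearest class center), not a trained SVM:

import numpy as np
from itertools import combinations

centers = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])

def decision(i, j, x):
    # > 0 means "vote for class i"; a real system would use f_ij(x)
    return np.linalg.norm(x - centers[j]) - np.linalg.norm(x - centers[i])

def ovo_predict(x, k):
    votes = np.zeros(k, dtype=int)
    for i, j in combinations(range(k), 2):
        votes[i if decision(i, j, x) > 0 else j] += 1
    return int(np.argmax(votes))

print(ovo_predict(np.array([0.9, 0.1]), k=4))   # class 1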

More Complicated Forms
For example, [Vapnik, 1998, Weston and Watkins, 1999]:
min_{w,b,ξ} (1/2) Σ_{m=1}^{k} w_m^T w_m + C Σ_{i=1}^{l} Σ_{m ≠ y_i} ξ_i^m
subject to w_{y_i}^T φ(x_i) + b_{y_i} >= w_m^T φ(x_i) + b_m + 2 - ξ_i^m,
ξ_i^m >= 0, i = 1, ..., l, m ∈ {1, ..., k} \ y_i,
where y_i is the class of x_i. There are kl constraints, and the dual has kl variables: very large.

There are many other methods; a comparison is given in [Hsu and Lin, 2002]. Accuracy is similar for many problems, but 1-against-1 is the fastest for training.

Why 1vs1 Faster in Training
1 vs. 1: k(k-1)/2 problems, each with 2l/k data on average.
1 vs. all: k problems, each with l data.
If solving one optimization problem is polynomial in its size with degree d, the complexities are
(k(k-1)/2) O((2l/k)^d) vs. k O(l^d).

Discussion and conclusions

Future Directions
I mentioned quite a few; here are others:
- Better ways to handle unbalanced data, i.e., some classes with few data, some with a lot
- Multi-label classification: an instance associated with >= 2 labels, e.g., a document in both politics and sports
- Structured data sets: an instance may not be a vector, e.g., a tree from a sentence

Conclusions
Dealing with data is interesting, especially if you get good accuracy. Some basic understanding is essential when applying classification methods. SVM is a rather mature topic, but there are still quite a few interesting research issues.

References
Bakır, G. H., Bottou, L., and Weston, J. (2005). Breaking SVM complexity with cross-training. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA.
Boser, B., Guyon, I., and Vapnik, V. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory. ACM Press.
Chu, W., Keerthi, S., and Ong, C. (2003). Bayesian trigonometric support vector classifier. Neural Computation, 15(9).
Cortes, C., Haffner, P., and Mohri, M. (2003). Positive definite rational kernels. In Proceedings of the 16th Annual Conference on Learning Theory.
Cortes, C. and Vapnik, V. (1995). Support-vector network. Machine Learning, 20.
Cristianini, N. and Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, UK.
Hsu, C.-W. and Lin, C.-J. (2002). A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13(2).
Joachims, T. (1998). Making large-scale SVM learning practical. In Schölkopf, B., Burges, C. J. C., and Smola, A. J., editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA.
Kao, W.-C., Chung, K.-M., Sun, C.-L., and Lin, C.-J. (2004). Decomposition methods for linear support vector machines. Neural Computation, 16(8).
Keerthi, S. S., Chapelle, O., and DeCoste, D. (2006). Building support vector machines with reduced classifier complexity. Journal of Machine Learning Research, 7.
Keerthi, S. S. and Lin, C.-J. (2003). Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15(7).
Lanckriet, G., Cristianini, N., Bartlett, P., El Ghaoui, L., and Jordan, M. (2004). Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5.
Lee, Y.-J. and Mangasarian, O. L. (2001). RSVM: Reduced support vector machines. In Proceedings of the First SIAM International Conference on Data Mining.
Lin, C.-J. (2001). Formulations of support vector machines: a note from an optimization point of view. Neural Computation, 13(2).
Lin, C.-J. (2002). A formal analysis of stopping criteria of decomposition methods for support vector machines. IEEE Transactions on Neural Networks, 13(5).
Osuna, E., Freund, R., and Girosi, F. (1997). Training support vector machines: an application to face detection. In Proceedings of CVPR'97, New York, NY. IEEE.
Platt, J. C. (1998). Fast training of support vector machines using sequential minimal optimization. In Schölkopf, B., Burges, C. J. C., and Smola, A. J., editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA.
Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels. MIT Press.
Syed, N. A., Liu, H., and Sung, K. K. (1999). Incremental learning with support vector machines. In Workshop on Support Vector Machines, IJCAI 99.
Vapnik, V. (1998). Statistical Learning Theory. Wiley, New York, NY.
Weston, J. and Watkins, C. (1999). Multi-class support vector machines. In Verleysen, M., editor, Proceedings of ESANN99, Brussels. D. Facto Press.
Yu, H., Yang, J., and Han, J. (2003). Classifying large data sets using SVMs with hierarchical clusters. In KDD '03: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA. ACM Press.


More information

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function.

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Bayesian learning: Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Let y be the true label and y be the predicted

More information

Support Vector Machine

Support Vector Machine Support Vector Machine Fabrice Rossi SAMM Université Paris 1 Panthéon Sorbonne 2018 Outline Linear Support Vector Machine Kernelized SVM Kernels 2 From ERM to RLM Empirical Risk Minimization in the binary

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Reading: Ben-Hur & Weston, A User s Guide to Support Vector Machines (linked from class web page) Notation Assume a binary classification problem. Instances are represented by vector

More information

Introduction to Machine Learning Lecture 13. Mehryar Mohri Courant Institute and Google Research

Introduction to Machine Learning Lecture 13. Mehryar Mohri Courant Institute and Google Research Introduction to Machine Learning Lecture 13 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Multi-Class Classification Mehryar Mohri - Introduction to Machine Learning page 2 Motivation

More information

Support Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington

Support Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington Support Vector Machines CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 A Linearly Separable Problem Consider the binary classification

More information

Support Vector Machines: Maximum Margin Classifiers

Support Vector Machines: Maximum Margin Classifiers Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind

More information

NONLINEAR CLASSIFICATION AND REGRESSION. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition

NONLINEAR CLASSIFICATION AND REGRESSION. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition NONLINEAR CLASSIFICATION AND REGRESSION Nonlinear Classification and Regression: Outline 2 Multi-Layer Perceptrons The Back-Propagation Learning Algorithm Generalized Linear Models Radial Basis Function

More information

Bayesian Support Vector Machines for Feature Ranking and Selection

Bayesian Support Vector Machines for Feature Ranking and Selection Bayesian Support Vector Machines for Feature Ranking and Selection written by Chu, Keerthi, Ong, Ghahramani Patrick Pletscher pat@student.ethz.ch ETH Zurich, Switzerland 12th January 2006 Overview 1 Introduction

More information

Lecture Support Vector Machine (SVM) Classifiers

Lecture Support Vector Machine (SVM) Classifiers Introduction to Machine Learning Lecturer: Amir Globerson Lecture 6 Fall Semester Scribe: Yishay Mansour 6.1 Support Vector Machine (SVM) Classifiers Classification is one of the most important tasks in

More information

Classifier Complexity and Support Vector Classifiers

Classifier Complexity and Support Vector Classifiers Classifier Complexity and Support Vector Classifiers Feature 2 6 4 2 0 2 4 6 8 RBF kernel 10 10 8 6 4 2 0 2 4 6 Feature 1 David M.J. Tax Pattern Recognition Laboratory Delft University of Technology D.M.J.Tax@tudelft.nl

More information

Learning with Rigorous Support Vector Machines

Learning with Rigorous Support Vector Machines Learning with Rigorous Support Vector Machines Jinbo Bi 1 and Vladimir N. Vapnik 2 1 Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy NY 12180, USA 2 NEC Labs America, Inc. Princeton

More information

Support Vector Machines

Support Vector Machines Two SVM tutorials linked in class website (please, read both): High-level presentation with applications (Hearst 1998) Detailed tutorial (Burges 1998) Support Vector Machines Machine Learning 10701/15781

More information

PAC-learning, VC Dimension and Margin-based Bounds

PAC-learning, VC Dimension and Margin-based Bounds More details: General: http://www.learning-with-kernels.org/ Example of more complex bounds: http://www.research.ibm.com/people/t/tzhang/papers/jmlr02_cover.ps.gz PAC-learning, VC Dimension and Margin-based

More information

STATE GENERALIZATION WITH SUPPORT VECTOR MACHINES IN REINFORCEMENT LEARNING. Ryo Goto, Toshihiro Matsui and Hiroshi Matsuo

STATE GENERALIZATION WITH SUPPORT VECTOR MACHINES IN REINFORCEMENT LEARNING. Ryo Goto, Toshihiro Matsui and Hiroshi Matsuo STATE GENERALIZATION WITH SUPPORT VECTOR MACHINES IN REINFORCEMENT LEARNING Ryo Goto, Toshihiro Matsui and Hiroshi Matsuo Department of Electrical and Computer Engineering, Nagoya Institute of Technology

More information

Introduction to Support Vector Machine

Introduction to Support Vector Machine Introduction to Support Vector Machine Yuh-Jye Lee National Taiwan University of Science and Technology September 23, 2009 1 / 76 Binary Classification Problem 2 / 76 Binary Classification Problem (A Fundamental

More information

Deviations from linear separability. Kernel methods. Basis expansion for quadratic boundaries. Adding new features Systematic deviation

Deviations from linear separability. Kernel methods. Basis expansion for quadratic boundaries. Adding new features Systematic deviation Deviations from linear separability Kernel methods CSE 250B Noise Find a separator that minimizes a convex loss function related to the number of mistakes. e.g. SVM, logistic regression. Systematic deviation

More information

Analysis of Multiclass Support Vector Machines

Analysis of Multiclass Support Vector Machines Analysis of Multiclass Support Vector Machines Shigeo Abe Graduate School of Science and Technology Kobe University Kobe, Japan abe@eedept.kobe-u.ac.jp Abstract Since support vector machines for pattern

More information

Support Vector Machines and Kernel Methods

Support Vector Machines and Kernel Methods Support Vector Machines and Kernel Methods Geoff Gordon ggordon@cs.cmu.edu July 10, 2003 Overview Why do people care about SVMs? Classification problems SVMs often produce good results over a wide range

More information

18.9 SUPPORT VECTOR MACHINES

18.9 SUPPORT VECTOR MACHINES 744 Chapter 8. Learning from Examples is the fact that each regression problem will be easier to solve, because it involves only the examples with nonzero weight the examples whose kernels overlap the

More information

(Kernels +) Support Vector Machines

(Kernels +) Support Vector Machines (Kernels +) Support Vector Machines Machine Learning Torsten Möller Reading Chapter 5 of Machine Learning An Algorithmic Perspective by Marsland Chapter 6+7 of Pattern Recognition and Machine Learning

More information

ECS289: Scalable Machine Learning

ECS289: Scalable Machine Learning ECS289: Scalable Machine Learning Cho-Jui Hsieh UC Davis Oct 18, 2016 Outline One versus all/one versus one Ranking loss for multiclass/multilabel classification Scaling to millions of labels Multiclass

More information

Kernel methods CSE 250B

Kernel methods CSE 250B Kernel methods CSE 250B Deviations from linear separability Noise Find a separator that minimizes a convex loss function related to the number of mistakes. e.g. SVM, logistic regression. Deviations from

More information

Lecture 10: A brief introduction to Support Vector Machine

Lecture 10: A brief introduction to Support Vector Machine Lecture 10: A brief introduction to Support Vector Machine Advanced Applied Multivariate Analysis STAT 2221, Fall 2013 Sungkyu Jung Department of Statistics, University of Pittsburgh Xingye Qiao Department

More information