Support Vector Machines. Jie Tang, Knowledge Engineering Group, Department of Computer Science and Technology, Tsinghua University, 2012


Outline: What is a Support Vector Machine? Solving SVMs. Kernel Tricks.

What is a Support Vector Machine? SVM is related to statistical learning theory [3]. SVM was first introduced in 1992 [1]. SVM became popular because of its success in handwritten digit recognition: a 1.1% test error rate for SVM, the same as the error rate of a carefully constructed neural network, LeNet-4 (see Section 5.11 in [2] or the discussion in [3] for details). SVM is now regarded as an important example of kernel methods, one of the key areas in machine learning. [1] B. E. Boser et al. A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144-152, Pittsburgh, 1992. [2] L. Bottou et al. Comparison of classifier methods: a case study in handwritten digit recognition. Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, pp. 77-82, 1994. [3] V. Vapnik. The Nature of Statistical Learning Theory. 2nd edition, Springer, 1999.

Classification Problem. Given a training set S = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, with x_i ∈ X = R^m, i = 1, 2, ..., N, we want to learn a function g(x) such that the decision function f(x) = sgn(g(x)) can classify a new input x. This is a supervised batch learning method. Linear classifier: g(x) = w^T x + b, with sgn(g(x)) = +1 if g(x) ≥ 0 and -1 if g(x) < 0, and f(x) = sgn(g(x)).
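As a small illustration of this linear decision rule, a minimal Python/NumPy sketch; the weight vector w and bias b below are arbitrary assumed values, not learned parameters:

    import numpy as np

    def g(X, w, b):
        # linear discriminant g(x) = w^T x + b, applied row-wise to X
        return X @ w + b

    def f(X, w, b):
        # decision function f(x) = sgn(g(x)): +1 if g(x) >= 0, else -1
        return np.where(g(X, w, b) >= 0, 1, -1)

    w = np.array([1.0, -2.0])                      # assumed weights
    b = 0.5                                        # assumed bias
    X = np.array([[3.0, 1.0], [0.0, 2.0]])         # two new inputs
    print(f(X, w, b))                              # -> [ 1 -1]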

What is a good Decision Boundary? Consider a two-class, linearly separable classification problem. There are many possible decision boundaries! The Perceptron algorithm can be used to find such a boundary, and different algorithms have been proposed. Are all decision boundaries equally good? (Figure: linearly separable points of Class +1 and Class -1.)

Geometric Interpretation

Affine Set. Line through x_1 and x_2: all points x = θ x_1 + (1 - θ) x_2, θ ∈ R. An affine set contains the line through any two distinct points in the set. Affine function: f: R^n -> R^m is affine if f(x) = Ax + b with A ∈ R^{m×n}, b ∈ R^m. (A number of the following slides are from Boyd's slides.)

Convex Set. Line segment between x_1 and x_2: all points x = θ x_1 + (1 - θ) x_2, 0 ≤ θ ≤ 1. A convex set contains the line segment between any two points in the set: x_1, x_2 ∈ C, 0 ≤ θ ≤ 1 => θ x_1 + (1 - θ) x_2 ∈ C. (Figure: examples, one convex set and two nonconvex sets.)

Hyperplanes and Halfspaces. Hyperplane: a set of the form {x | a^T x = b} (a ≠ 0). Halfspace: a set of the form {x | a^T x ≤ b} (a ≠ 0). Here a is the normal vector. Hyperplanes are affine and convex; halfspaces are convex.

Bisector-based Decision Boundary. The convex hull of S is conv(S) = { x = Σ_{j=1}^k λ_j x_j | Σ_{j=1}^k λ_j = 1, λ_j ≥ 0, j = 1, ..., k }. (Figure: the closest points c and d of the two class hulls, Class +1 and Class -1, and the bisecting hyperplane with margin m.)

Formalization. min_β (1/2)||c - d||^2 = min_β (1/2)|| Σ_{i: y_i = 1} β_i x_i - Σ_{j: y_j = -1} β_j x_j ||^2, s.t. Σ_{i: y_i = 1} β_i = 1, Σ_{j: y_j = -1} β_j = 1, 0 ≤ β_i ≤ 1, i ∈ [1, m]. The objective is to solve for all the β_i. Then we obtain the two closest points of the convex hulls by c = Σ_{i: y_i = 1} β_i x_i and d = Σ_{j: y_j = -1} β_j x_j. Next we compute the hyperplane w^T x + b = 0 by w = c - d = Σ_i β_i y_i x_i and b = -(1/2)(c - d)·(c + d). Finally, we make the prediction by f(x) = sgn(w^T x + b).
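One way to realize this formulation numerically is as a small quadratic program over β. The sketch below uses scipy.optimize.minimize (SLSQP) on a made-up toy dataset; the data, the solver choice and the tolerances are assumptions for illustration, not part of the slides:

    import numpy as np
    from scipy.optimize import minimize

    # toy separable data: first three points are class +1, last three class -1 (assumed)
    X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.5],
                  [-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])
    y = np.array([1, 1, 1, -1, -1, -1])
    pos, neg = (y == 1), (y == -1)

    def closest_points(beta):
        c = (beta[pos][:, None] * X[pos]).sum(axis=0)   # point in conv(class +1)
        d = (beta[neg][:, None] * X[neg]).sum(axis=0)   # point in conv(class -1)
        return c, d

    def objective(beta):
        c, d = closest_points(beta)
        return 0.5 * np.sum((c - d) ** 2)               # (1/2)||c - d||^2

    cons = [{"type": "eq", "fun": lambda beta: beta[pos].sum() - 1.0},
            {"type": "eq", "fun": lambda beta: beta[neg].sum() - 1.0}]
    beta0 = np.where(pos, 1.0 / pos.sum(), 1.0 / neg.sum())
    res = minimize(objective, beta0, bounds=[(0.0, 1.0)] * len(X),
                   constraints=cons, method="SLSQP")

    c, d = closest_points(res.x)
    w = c - d
    b = -0.5 * np.dot(c - d, c + d)
    print("w =", w, "b =", b)
    # np.sign(X @ w + b) should then reproduce y on this toy set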

Maximal Margin

Large-margin Decision Boundary. The decision boundary should be as far away from the data of both classes as possible, i.e., we should maximize the margin m. The distance between the origin and the line w^T x = -b is |b|/||w||. (Figure: Class +1 and Class -1 with the margin m = 2γ/||w|| between the lines w^T x + b = γ and w^T x + b = -γ.)

Formalization. max_{γ,w,b} 2γ/||w||, which is equal to min_{γ,w,b} ||w||/(2γ). Note that we have the constraints w^T x^(i) + b ≥ γ for 1 ≤ i ≤ k (y_i = 1) and w^T x^(j) + b ≤ -γ for k < j ≤ N (y_j = -1), which are equal to y^(i)(w^T x^(i) + b) ≥ γ, 1 ≤ i ≤ N. Since we can arbitrarily scale w and b without changing anything, we introduce the scaling constraint γ = 1: min_{w,b} ||w||. Changing to the squared 2-norm loss function, we get min_{w,b} ||w||^2 / 2, s.t. y^(i)(w^T x^(i) + b) ≥ 1, 1 ≤ i ≤ N. This is a constrained optimization problem.
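To make the quantity being maximized concrete, a short sketch that computes the geometric margin min_i y^(i)(w^T x^(i) + b)/||w|| of a given separator; the toy data and the candidate (w, b) are assumed values for illustration:

    import numpy as np

    X = np.array([[2.0, 2.0], [3.0, 1.0], [-1.0, -1.0], [-2.0, 1.0]])
    y = np.array([1, 1, -1, -1])
    w, b = np.array([1.0, 1.0]), -1.0               # an assumed candidate separator

    functional = y * (X @ w + b)                    # y^(i) (w^T x^(i) + b)
    geometric_margin = functional.min() / np.linalg.norm(w)
    print(geometric_margin)                         # the quantity the SVM maximizes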

Loss Function. Then we arrive at the hard-margin formulation: min_{w,b} (1/2)||w||^2, s.t. y^(i)(b + w^T x^(i)) ≥ 1. Another popular loss function is the hinge loss plus a penalty: min_{w,b} Σ_{i=1}^N [1 - y^(i)(b + w^T x^(i))]_+ + (λ/2)||w||^2.

Loss Function (cont.) Empirical loss function: min_{w,b} Σ_{i=1}^N [1 - y^(i)(b + w^T x^(i))]_+. Structural loss function: min_{w,b} Σ_{i=1}^N [1 - y^(i)(b + w^T x^(i))]_+ + (λ/2)||w||^2, where ||w||^2 indicates the complexity of the model; it is also called the penalty term. There are many kinds of formulations of the loss function.
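The empirical and structural losses above are easy to compare numerically; a minimal NumPy sketch in which the data, w, b and λ are assumed values:

    import numpy as np

    def hinge_losses(w, b, X, y):
        # per-example hinge loss [1 - y^(i)(b + w^T x^(i))]_+
        return np.maximum(0.0, 1.0 - y * (X @ w + b))

    X = np.array([[2.0, 2.0], [0.2, 0.2], [-1.0, -1.0]])
    y = np.array([1, 1, -1])
    w, b, lam = np.array([1.0, 1.0]), 0.0, 0.1

    empirical = hinge_losses(w, b, X, y).sum()
    structural = empirical + 0.5 * lam * np.dot(w, w)   # add the penalty (lambda/2)||w||^2
    print(empirical, structural)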

Optimal Margin Classifiers. For the problem min_{w,b} ||w||^2 / 2, s.t. y^(i)(w^T x^(i) + b) ≥ 1, 1 ≤ i ≤ N, we can write the Lagrangian form L(w, b, α) = ||w||^2 / 2 - Σ_{i=1}^N α_i [y^(i)(w^T x^(i) + b) - 1], s.t. α_i ≥ 0, 1 ≤ i ≤ N. Why? Let us review the generalized Lagrangian.

Review: Convex Optimization and Lagrange Duality

Convex Function. f: R^n -> R is convex if dom(f) is a convex set and f(θx + (1 - θ)y) ≤ θ f(x) + (1 - θ) f(y) for all x, y ∈ dom(f), 0 ≤ θ ≤ 1. f is concave if -f is convex. f is strictly convex if dom(f) is convex and f(θx + (1 - θ)y) < θ f(x) + (1 - θ) f(y) for all x, y ∈ dom(f), x ≠ y, 0 < θ < 1.
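A quick numerical illustration of this definition, checking f(θx + (1 - θ)y) ≤ θ f(x) + (1 - θ) f(y) at random points; f(x) = ||x||^2 is an assumed example of a convex function:

    import numpy as np

    f = lambda x: np.sum(x ** 2)              # a convex function
    rng = np.random.default_rng(0)
    for _ in range(5):
        x, y = rng.normal(size=3), rng.normal(size=3)
        theta = rng.uniform()
        lhs = f(theta * x + (1 - theta) * y)
        rhs = theta * f(x) + (1 - theta) * f(y)
        print(lhs <= rhs + 1e-12)             # True for every sample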

First-order Condition. f is differentiable if dom(f) is open and the gradient ∇f(x) = (∂f(x)/∂x_1, ∂f(x)/∂x_2, ..., ∂f(x)/∂x_n) exists at each x ∈ dom(f). First-order condition: a differentiable f with convex domain is convex iff f(y) ≥ f(x) + ∇f(x)^T (y - x) for all x, y ∈ dom(f).

Second-order Condition. f is twice differentiable if dom(f) is open and the Hessian ∇²f(x), with ∇²f(x)_{ij} = ∂²f(x)/∂x_i ∂x_j, i, j = 1, ..., n, exists at each x ∈ dom(f). Second-order condition: a twice differentiable f with convex domain is convex iff ∇²f(x) ⪰ 0 for all x ∈ dom(f). If ∇²f(x) ≻ 0 for all x ∈ dom(f), then f is strictly convex.
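The second-order condition can be checked directly for a quadratic f(x) = (1/2) x^T Q x + c^T x, whose Hessian is Q everywhere; Q and c below are assumed example values:

    import numpy as np

    Q = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric Hessian of f
    c = np.array([1.0, -1.0])
    eigvals = np.linalg.eigvalsh(Q)
    print("convex:", bool(np.all(eigvals >= 0)))           # PSD Hessian -> convex
    print("strictly convex:", bool(np.all(eigvals > 0)))   # PD Hessian -> strictly convex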

Convex Optimization Problem. Standard form convex optimization problem: min f_0(x), s.t. f_i(x) ≤ 0, i = 1, ..., k, and a_i^T x - b_i = 0, i = 1, ..., l, where f_0, f_1, ..., f_k are convex and the equality constraints are affine. Important property: the feasible set of a convex optimization problem is convex. Example: min f_0(x) = x_1^2 + x_2^2, s.t. f_1(x) = x_1 / (1 + x_2^2) ≤ 0, h_1(x) = (x_1 + x_2)^2 = 0. Here f_0 is convex and the feasible set {(x_1, x_2) | x_1 = -x_2 ≤ 0} is convex, yet this is not a convex problem in standard form, since f_1 is not convex and h_1 is not affine.

Lagrange Duality. When solving optimization problems with constraints, Lagrange duality is often used to obtain the solution of the primal problem by solving the dual problem. Primal optimization problem: if f(x), g_i(x), h_j(x) are continuously differentiable functions defined on R^n, then the following optimization problem is called the primal optimization problem: min_{x ∈ R^n} f(x), s.t. g_i(x) ≥ 0, i = 1, ..., k, and h_j(x) = 0, j = 1, ..., l.

Primal Optimization Problem. To solve the primal optimization problem, we define the generalized Lagrangian L(x, α, β) = f(x) - Σ_{i=1}^k α_i g_i(x) - Σ_{j=1}^l β_j h_j(x), s.t. α_i ≥ 0, where the α_i and β_j are Lagrange multipliers. Consider the function θ_P(x) = max_{α,β: α_i ≥ 0} L(x, α, β) = max_{α,β: α_i ≥ 0} [f(x) - Σ_{i=1}^k α_i g_i(x) - Σ_{j=1}^l β_j h_j(x)]. Assume some x violates one of the primal constraints (i.e., either g_i(x) < 0 or h_j(x) ≠ 0 for some i, j); then we can verify that θ_P(x) = +∞: if g_i(x) < 0 for some i, we can drive α_i to +∞; if h_j(x) ≠ 0 for some j, we can choose β_j so that -β_j h_j(x) goes to +∞, setting the other α_i and β_j to 0. In contrast, if the constraints are indeed satisfied for a particular value of x, then θ_P(x) = f(x). Therefore θ_P(x) = f(x) if x satisfies the primal constraints, and +∞ otherwise (here P stands for primal). If we now consider the minimization problem min_x θ_P(x) = min_x max_{α,β: α_i ≥ 0} L(x, α, β), we see that the primal problem (min f(x), s.t. g_i(x) ≥ 0, i = 1, ..., k, h_j(x) = 0, j = 1, ..., l) is represented by the min-max problem of the generalized Lagrangian, i.e., p* = min_x θ_P(x).

Dual Optimization Problem. The dual optimization problem is max_{α,β: α_i ≥ 0} θ_D(α, β) = max_{α,β: α_i ≥ 0} min_x L(x, α, β), where θ_D(α, β) = min_x L(x, α, β). This is exactly the same as the primal problem, except that the order of the max and the min is exchanged. We also define the optimal value of the dual problem's objective to be d* = max_{α,β: α_i ≥ 0} θ_D(α, β). How are the primal and the dual problems related? It can easily be shown that d* = max_{α,β: α_i ≥ 0} min_x L(x, α, β) ≤ min_x max_{α,β: α_i ≥ 0} L(x, α, β) = p*. Proof: θ_D(α, β) = min_x L(x, α, β) ≤ L(x, α, β) ≤ max_{α,β: α_i ≥ 0} L(x, α, β) = θ_P(x), so θ_D(α, β) ≤ θ_P(x) for all feasible α, β and all x. Because both sides attain their optimal values, max_{α,β: α_i ≥ 0} θ_D(α, β) ≤ min_x θ_P(x), i.e., d* = max_{α,β: α_i ≥ 0} min_x L(x, α, β) ≤ min_x max_{α,β: α_i ≥ 0} L(x, α, β) = p*.
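Weak duality (d* ≤ p*) can be seen on a one-variable example: min x^2 subject to g(x) = x - 1 ≥ 0, with Lagrangian L(x, α) = x^2 - α(x - 1). The grid search below is only an illustrative sketch (the grids are assumed); it compares max-min and min-max of L over the grid:

    import numpy as np

    xs = np.linspace(-3, 3, 2001)
    alphas = np.linspace(0, 5, 501)
    Xg, Ag = np.meshgrid(xs, alphas)          # rows: alpha, columns: x
    L = Xg ** 2 - Ag * (Xg - 1)               # L(x, alpha) = x^2 - alpha*(x - 1)

    d_star = L.min(axis=1).max()              # max_alpha min_x L  (dual)
    p_star = L.max(axis=0).min()              # min_x max_alpha L  (primal, on the grid)
    print(d_star, "<=", p_star)               # both come out close to 1 here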

KKT Conditions. Under certain conditions we will have d* = p*, so that we can solve the dual problem in lieu of the primal problem. What are these conditions? Suppose (1) f and the g_i are convex and the h_i are affine, and (2) the constraints g_i are (strictly) feasible, meaning there exists some x such that g_i(x) > 0 for all i. Under these assumptions there must exist x*, α*, β* such that x* is the solution of the primal problem, α*, β* are the solution of the dual problem, and p* = d* = L(x*, α*, β*). The necessary and sufficient conditions are the KKT (Karush-Kuhn-Tucker) conditions: ∂L(x*, α*, β*)/∂x_i = 0, i ∈ [1, n]; ∂L(x*, α*, β*)/∂α_i = 0, i ∈ [1, k]; ∂L(x*, α*, β*)/∂β_i = 0, i ∈ [1, l]; α_i* g_i(x*) = 0, i ∈ [1, k]; g_i(x*) ≥ 0, i ∈ [1, k]; α_i* ≥ 0, i ∈ [1, k]. The condition α_i* g_i(x*) = 0 is the KKT dual complementarity condition: if α_i* > 0, then g_i(x*) = 0.

Back to Our Optimal Margin Classifiers

Optimal Margin Classifiers. For the problem min_{w,b} ||w||^2 / 2, s.t. y^(i)(w^T x^(i) + b) ≥ 1, 1 ≤ i ≤ N, we can write the Lagrangian form L(w, b, α) = ||w||^2 / 2 - Σ_{i=1}^N α_i [y^(i)(w^T x^(i) + b) - 1], s.t. α_i ≥ 0, 1 ≤ i ≤ N. Then our problem becomes min_{w,b} max_α L(w, b, α). If certain conditions are satisfied, this equals the dual problem max_α min_{w,b} L(w, b, α).

Solve the Dual Problem. max_α min_{w,b} L(w, b, α) = ||w||^2 / 2 - Σ_{i=1}^N α_i [y^(i)(w^T x^(i) + b) - 1], s.t. α_i ≥ 0, 1 ≤ i ≤ N. Let us first solve the inner minimization problem by setting the gradient of L(w, b, α) w.r.t. w and b to zero: ∂L(w, b, α)/∂w = w - Σ_{i=1}^N α_i y^(i) x^(i) = 0, so w = Σ_{i=1}^N α_i y^(i) x^(i); ∂L(w, b, α)/∂b = -Σ_{i=1}^N α_i y^(i) = 0. Then let us substitute these two equations into L(w, b, α) to solve the outer maximization problem.

Solve the Dual Problem (cont.) Now we have w = Σ_{i=1}^N α_i y^(i) x^(i) and Σ_{i=1}^N α_i y^(i) = 0. Substituting w = Σ_i α_i y^(i) x^(i) back into L(w, b, α): L(b, α) = (1/2) Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j) - Σ_i α_i [y^(i)(Σ_j α_j y^(j) x^(j)·x^(i) + b) - 1] = (1/2) Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j) - Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j) - b Σ_i α_i y^(i) + Σ_i α_i = -(1/2) Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j) - b Σ_i α_i y^(i) + Σ_i α_i. Because Σ_i α_i y^(i) = 0, we obtain L(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j). The new objective function is a function of α only. It is known as the dual problem: if we know w, we know all α; and vice versa.

The Dual Problem (cont.) The original problem, also known as the primal problem: min_{w,b} ||w||^2 / 2, s.t. y^(i)(w^T x^(i) + b) ≥ 1, 1 ≤ i ≤ N, i.e., min_{w,b} max_α L(w, b, α) with α_i ≥ 0, 1 ≤ i ≤ N. The dual problem: max_α min_{w,b} L(w, b, α), i.e., max_α Σ_{i=1}^N α_i - (1/2) Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j), s.t. α_i ≥ 0, 1 ≤ i ≤ N (the property of α introduced with the Lagrange multipliers), and Σ_{i=1}^N α_i y^(i) = 0 (the result of differentiating the original Lagrangian w.r.t. b).

Relationship between Primal and Dual Problems. d* = max_{α,β: α_i ≥ 0} min_x L(x, α, β) ≤ min_x max_{α,β: α_i ≥ 0} L(x, α, β) = p*. Note: if, under some conditions, d* = p*, we can solve the dual problem in lieu of the primal problem. What are the conditions? The famous KKT (Karush-Kuhn-Tucker) conditions. For the Lagrangian L(x, α, β) = f(x) - Σ_{i=1}^k α_i g_i(x) - Σ_{j=1}^l β_j h_j(x), s.t. α_i ≥ 0, the KKT conditions are: ∂L(x*, α*, β*)/∂x_i = 0, i ∈ [1, n]; ∂L(x*, α*, β*)/∂α_i = 0, i ∈ [1, k]; ∂L(x*, α*, β*)/∂β_i = 0, i ∈ [1, l]; α_i* g_i(x*) = 0, i ∈ [1, k]; g_i(x*) ≥ 0, i ∈ [1, k]; α_i* ≥ 0, i ∈ [1, k]. In our case: (1) ∂L(w*, b*, α*)/∂w = w* - Σ_{i=1}^N α_i* y^(i) x^(i) = 0, i.e., w* = Σ_i α_i* y^(i) x^(i); (2) ∂L(w*, b*, α*)/∂b = -Σ_{i=1}^N α_i* y^(i) = 0; (3) α_i* (y^(i)(w*·x^(i) + b*) - 1) = 0, y^(i)(w*·x^(i) + b*) - 1 ≥ 0, α_i* ≥ 0, i ∈ [1, N].

Now We Have. What remains is the maximization problem with respect to α: max_α Σ_{i=1}^N α_i - (1/2) Σ_i Σ_j α_i α_j y^(i) y^(j) x^(i)·x^(j), s.t. α_i ≥ 0, 1 ≤ i ≤ N, and Σ_{i=1}^N α_i y^(i) = 0. This is a quadratic programming (QP) problem, so a global maximum over α can always be found. Then we solve for w by w = Σ_{i=1}^N α_i y^(i) x^(i). Finally we solve for b: there is at least one α_j* > 0 (if all α_j* = 0, then from equation (1) we would have w* = 0, but w* = 0 is not the optimal solution), so from equation (3) we know y^(j)(w*·x^(j) + b*) - 1 = 0 for that j. Because y^(j) y^(j) = 1, b* = y^(j) - w*·x^(j) = y^(j) - Σ_{i=1}^N α_i* y^(i) x^(i)·x^(j). Characteristics of the solution: many of the α_i are zero - w is a linear combination of a small number of data points - this sparse representation can be viewed as data compression, as in the construction of a kNN classifier. The x_i with non-zero α_i are called support vectors (SVs) - the decision boundary is determined only by the SVs.
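To tie the pieces together, here is a sketch that solves the dual QP numerically on a toy separable dataset with SciPy's SLSQP solver, then recovers w, b and the support vectors as described above. The data and the generic solver are assumptions made for illustration only; they are not the training procedure discussed later (SMO):

    import numpy as np
    from scipy.optimize import minimize

    # toy linearly separable data (assumed)
    X = np.array([[2.0, 2.0], [2.5, 1.0], [3.0, 3.0],
                  [-1.0, -1.0], [0.0, -2.0], [-2.0, 0.5]])
    y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
    N = len(y)
    Q = (X @ X.T) * np.outer(y, y)             # Q_ij = y^(i) y^(j) <x^(i), x^(j)>

    def neg_dual(a):
        # minimize the negative of L(alpha) = sum_i a_i - 0.5 a^T Q a
        return 0.5 * a @ Q @ a - a.sum()

    cons = [{"type": "eq", "fun": lambda a: a @ y}]    # sum_i alpha_i y^(i) = 0
    res = minimize(neg_dual, np.zeros(N), bounds=[(0.0, None)] * N,
                   constraints=cons, method="SLSQP")
    alpha = res.x

    w = (alpha * y) @ X                        # w = sum_i alpha_i y^(i) x^(i)
    sv = alpha > 1e-5                          # non-zero alphas: support vectors
    j = int(np.argmax(alpha))                  # index of one support vector
    b = y[j] - X[j] @ w                        # b = y^(j) - w^T x^(j)
    print("support vectors:", np.where(sv)[0])
    print("w =", w, "b =", b)
    print("train predictions:", np.sign(X @ w + b))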

A Geometrical Interpretation. (Figure: only a few of the α_i are non-zero, e.g. α_1 = 0.8, α_6 = 1.4, α_8 = 0.6, while the remaining α_i = 0; the points with non-zero α_i lie on the margin and are the support vectors.)

How to Predict. For a new sample x, we can predict its label via w^T x + b = (Σ_{i=1}^N α_i y^(i) x^(i))^T x + b = Σ_{i=1}^N α_i y^(i) ⟨x^(i), x⟩ + b, classifying x as class +1 if the sum is positive, and class -1 otherwise. Note: w need not be formed explicitly.

Non-separable Case. What is the non-separable case? We allow an error ξ_i in classification, based on the output of the discriminant function w^T x + b; the sum of the ξ_i approximates the number of misclassified samples. (Figure: Class +1 and Class -1 points that cannot be separated by a single line.)

Non-linear Cases. What is the non-linear case? (Figure: data that is not linearly separable in the input space.)

Non-separable Case. The formalization of the optimization problem becomes: min_{w,b,ξ} ||w||^2 / 2 + C Σ_{i=1}^N ξ_i, s.t. y^(i)(w^T x^(i) + b) ≥ 1 - ξ_i, 1 ≤ i ≤ N, and ξ_i ≥ 0, 1 ≤ i ≤ N. Thus, examples are now permitted to have margin less than 1, and if an example has functional margin 1 - ξ_i (with ξ_i > 0), we pay a cost: the objective function is increased by C ξ_i. The parameter C controls the relative weighting between the twin goals of making ||w||^2 small and of ensuring that most examples have functional margin at least 1.

Lagrangian Solution. Again, we have the Lagrangian form L(w, b, ξ, α, σ) = ||w||^2 / 2 + C Σ_{i=1}^N ξ_i - Σ_{i=1}^N α_i [y^(i)(w^T x^(i) + b) - 1 + ξ_i] - Σ_{i=1}^N σ_i ξ_i, s.t. σ_i ≥ 0, α_i ≥ 0. The resulting dual is max_α L(α) = Σ_{i=1}^N α_i - (1/2) Σ_{i,j=1}^N y^(i) y^(j) α_i α_j ⟨x^(i), x^(j)⟩, s.t. 0 ≤ α_i ≤ C, i ∈ [1, N], and Σ_{i=1}^N α_i y^(i) = 0. What is the difference from the separable form? Only that the constraint α_i ≥ 0 becomes the box constraint 0 ≤ α_i ≤ C. The KKT conditions are: ∂L(w, b, α)/∂w = 0, i ∈ [1, N]; ∂L(w, b, α)/∂ξ_i = 0, i ∈ [1, N]; ∂L(w, b, α)/∂b = 0; α_i (y^(i)(w^T x^(i) + b) - 1 + ξ_i) = 0, y^(i)(w^T x^(i) + b) - 1 + ξ_i ≥ 0, α_i ≥ 0, σ_i ≥ 0, i ∈ [1, N]. They imply: α_i = 0 => y^(i)(w^T x^(i) + b) ≥ 1; α_i = C => y^(i)(w^T x^(i) + b) ≤ 1; 0 < α_i < C => y^(i)(w^T x^(i) + b) = 1.

How to Train an SVM. Solving the quadratic programming optimization problem directly to train the SVM is very slow when the training data grows large. The sequential minimal optimization (SMO) algorithm is due to John Platt. First, let us introduce the coordinate ascent algorithm (a sketch follows below):
Loop until convergence: {
  For i = 1, ..., N {
    α_i := argmax_{α_i} L(α_1, ..., α_{i-1}, α_i, α_{i+1}, ..., α_N)
  }
}
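Before the constraint issue raised on the next slide, here is a generic (unconstrained) coordinate ascent sketch on a simple concave function, just to illustrate the loop above; the objective is an assumed toy example, not the SVM dual:

    import numpy as np

    # maximize L(a) = -(a1 - 1)^2 - 2*(a2 + 0.5)^2 + a1*a2 (a concave quadratic)
    def L(a):
        return -(a[0] - 1) ** 2 - 2 * (a[1] + 0.5) ** 2 + a[0] * a[1]

    a = np.zeros(2)
    for _ in range(50):                        # "loop until convergence"
        # argmax over a1 with a2 fixed: dL/da1 = -2(a1 - 1) + a2 = 0
        a[0] = 1 + a[1] / 2
        # argmax over a2 with a1 fixed: dL/da2 = -4(a2 + 0.5) + a1 = 0
        a[1] = (a[0] - 2) / 4
    print(a, L(a))                             # converges to the joint maximizer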

Is Coordinate Ascent OK? Is it sufficient? No: because of the constraint Σ_{i=1}^N α_i y^(i) = 0, we have α_1 y^(1) = -Σ_{i=2}^N α_i y^(i), i.e., α_1 = -y^(1) Σ_{i=2}^N α_i y^(i), so α_1 is completely determined by the other α_i and cannot be updated on its own while the rest are held fixed.

SMO. Change the algorithm as follows; this is just SMO.
Repeat until convergence {
  1. Select some pair α_i and α_j to update next (using a heuristic that tries to pick the two that will allow the biggest progress towards the global maximum).
  2. Re-optimize L(α) with respect to α_i and α_j, while holding all the other α_k fixed.
}
Many solver packages have also been proposed, e.g., LOQO, CPLEX, etc. (see http://www.numerical.rl.ac.uk/qp/qp.html). With the pair (α_1, α_2), the constraint gives α_1 y^(1) + α_2 y^(2) = -Σ_{i=3}^N α_i y^(i) = ς, so α_1 = (ς - α_2 y^(2)) y^(1) and L(α) = L((ς - α_2 y^(2)) y^(1), α_2, ..., α_N).

SMO (2). L(α) = L((ς - α_2 y^(2)) y^(1), α_2, ..., α_N). This is a quadratic function in α_2, i.e., it can be written as a α_2^2 + b α_2 + c.

Solving for α_2. For the quadratic function a α_2^2 + b α_2 + c, we can simply solve by setting its derivative to zero; call the resulting value α_2^{new,unclipped}. The constrained optimum is obtained by clipping to the feasible interval [L, H]: α_2^{new} = H if α_2^{new,unclipped} > H; α_2^{new,unclipped} if L ≤ α_2^{new,unclipped} ≤ H; L if α_2^{new,unclipped} < L. Having found α_2, we can go back and find the optimal α_1. Please read Platt's paper if you want more details.
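The update on this slide is easy to express as a small helper: take the unconstrained maximizer of the quadratic a α_2^2 + b α_2 + c (with a < 0, so -b/(2a) is a maximum) and clip it to [L, H]. The coefficient values in the example call are assumed for illustration:

    def smo_alpha2_update(a, b, L, H):
        # unconstrained maximizer of a*alpha2^2 + b*alpha2 + c, with a < 0
        alpha2_unclipped = -b / (2.0 * a)
        # clip to the feasible segment [L, H]
        return min(H, max(L, alpha2_unclipped))

    print(smo_alpha2_update(a=-2.0, b=3.0, L=0.0, H=0.5))   # unclipped 0.75 -> clipped to 0.5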

Thanks! HP: http://keg.cs.tsinghua.edu.cn/jietang/ Email: jietang@tsinghua.edu.cn