Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2015 Soleymani
Outline Margin concept; Hard-Margin SVM; Soft-Margin SVM; Dual problems of Hard-Margin SVM and Soft-Margin SVM; Nonlinear SVM; Kernel trick; Kernel methods
Margin Which line is better to select as the boundary to provide more generalization capability? A larger margin provides better generalization to unseen data. The margin of a hyperplane that separates samples of two linearly separable classes is the smallest distance between the decision boundary and any of the training samples.
Maximum margin SVM finds the solution with maximum margin, i.e., a hyperplane that is farthest from all training samples. The hyperplane with the largest margin is equidistant from the nearest samples of the two classes.
Hard-margin SVM: Optimization problem
$\max_{M, w, w_0} \frac{2M}{\|w\|}$
s.t. $w^T x^{(i)} + w_0 \ge M$ for $x^{(i)} \in C_1$ ($y^{(i)} = 1$), and $w^T x^{(i)} + w_0 \le -M$ for $x^{(i)} \in C_2$ ($y^{(i)} = -1$).
Decision boundary: $w^T x + w_0 = 0$; margin hyperplanes: $w^T x + w_0 = M$ and $w^T x + w_0 = -M$; margin: $\frac{2M}{\|w\|}$.
Hard-margin SVM: Optimization problem
$\max_{M, w, w_0} \frac{2M}{\|w\|}$
s.t. $y^{(i)}(w^T x^{(i)} + w_0) \ge M$, $i = 1, \dots, N$
where $M = \min_{i=1,\dots,N} y^{(i)}(w^T x^{(i)} + w_0)$.
Hard-margin SVM: Optimization problem We can rescale $w \leftarrow w/M$, $w_0 \leftarrow w_0/M$:
$\max_{w, w_0} \frac{2}{\|w\|}$
s.t. $y^{(i)}(w^T x^{(i)} + w_0) \ge 1$, $i = 1, \dots, N$
The location of the boundary ($w^T x + w_0 = 0$) and of the margin hyperplanes ($w^T x + w_0 = \pm 1$) does not change.
Hard-margin SVM: Optimization problem We can equivalently optimize:
$\min_{w, w_0} \frac{1}{2}\|w\|^2$
s.t. $y^{(i)}(w^T x^{(i)} + w_0) \ge 1$, $i = 1, \dots, N$
It is a convex Quadratic Programming (QP) problem: there are computationally efficient packages to solve it, and it has a global minimum (if any). When the training samples are not linearly separable, it has no solution. How can we extend it to find a solution even when the classes are not linearly separable? A small sketch of this QP appears below.
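The following is a minimal sketch (not from the slides) of solving this primal QP with the cvxpy library; the toy data X, y are illustrative and assumed to be linearly separable.

```python
# Hard-margin SVM primal QP: min (1/2)||w||^2 s.t. y_i (w^T x_i + w0) >= 1
import cvxpy as cp
import numpy as np

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -2.5]])  # toy data
y = np.array([1.0, 1.0, -1.0, -1.0])
N, d = X.shape

w = cp.Variable(d)
w0 = cp.Variable()

objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + w0) >= 1]   # one margin constraint per sample
cp.Problem(objective, constraints).solve()

print("w =", w.value, "w0 =", w0.value)
```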
Beyond linear separability How can the hard-margin SVM be extended to allow classification errors? Two situations: overlapping classes that can be approximately separated by a linear boundary, and noise in otherwise linearly separable classes.
Beyond linear separability: Soft-margin SVM Minimizing the number of misclassified points?! NP-complete. Soft margin: maximize the margin while trying to minimize the distance between misclassified points and their correct margin hyperplane.
Soft-margin SVM SVM with slack variables: allows samples to fall within the margin, but penalizes them.
$\min_{w, w_0, \{\xi_i\}_{i=1}^N} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^N \xi_i$
s.t. $y^{(i)}(w^T x^{(i)} + w_0) \ge 1 - \xi_i$, $\xi_i \ge 0$, $i = 1, \dots, N$
$\xi_i$: slack variables. $\xi_i > 1$: $x^{(i)}$ is misclassified. $0 < \xi_i < 1$: $x^{(i)}$ is correctly classified but lies inside the margin.
Soft-margin SVM A linear penalty (hinge loss) is paid for a sample if it is misclassified or lies inside the margin. The objective tries to keep the $\xi_i$ small while maximizing the margin. It always finds a solution (as opposed to hard-margin SVM) and is more robust to outliers. The soft-margin problem is still a convex QP.
Soft-margin SVM: Parameter C C is a tradeoff parameter: a small C allows the margin constraints to be easily ignored (large margin), while a large C makes the constraints hard to ignore (narrow margin); $C \to \infty$ enforces all constraints (hard margin). C can be determined using a technique like cross-validation, as in the sketch below.
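A small sketch (not from the slides) of selecting C by cross-validation with scikit-learn; the synthetic data and the candidate grid of C values are illustrative assumptions.

```python
# Choose the soft-margin tradeoff parameter C by 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_informative=2, random_state=0)

# Larger C -> constraints harder to violate (narrower margin);
# smaller C -> more slack tolerated (wider margin).
param_grid = {"C": [0.01, 0.1, 1, 10, 100]}
search = GridSearchCV(SVC(kernel="linear"), param_grid, cv=5)
search.fit(X, y)
print("best C:", search.best_params_["C"])
```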
Soft-margin SVM: Cost function
$\min_{w, w_0, \{\xi_i\}} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^N \xi_i$
s.t. $y^{(i)}(w^T x^{(i)} + w_0) \ge 1 - \xi_i$, $\xi_i \ge 0$, $i = 1, \dots, N$
It is equivalent to the unconstrained optimization problem:
$\min_{w, w_0} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^N \max\left(0, 1 - y^{(i)}(w^T x^{(i)} + w_0)\right)$
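A minimal numpy sketch (not from the slides) of this unconstrained hinge-loss objective and one subgradient descent step on it; the names X, y, C, lr are illustrative.

```python
import numpy as np

def soft_margin_objective(w, w0, X, y, C):
    # (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w^T x_i + w0))
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + w0))
    return 0.5 * w @ w + C * hinge.sum()

def subgradient_step(w, w0, X, y, C, lr=0.01):
    # one subgradient descent step on the unconstrained objective
    active = y * (X @ w + w0) < 1.0            # samples with nonzero hinge loss
    grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
    grad_w0 = -C * y[active].sum()
    return w - lr * grad_w, w0 - lr * grad_w0
```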
SVM loss function Hinge loss $\max(0, 1 - y(w^T x + w_0))$ vs. 0-1 loss (figure: both losses plotted as a function of $w^T x + w_0$ for $y = 1$).
Dual formulation of the SVM We are going to introduce the dual SVM problem, which is equivalent to the original primal problem. The dual problem is often easier to solve, enables us to exploit the kernel trick, and gives us further insights into the optimal hyperplane.
Optimization: Lagrange multipliers
$p^* = \min_x f(x)$
s.t. $g_i(x) \le 0$, $i = 1, \dots, m$; $h_i(x) = 0$, $i = 1, \dots, p$
Lagrangian, with multipliers $\alpha = [\alpha_1, \dots, \alpha_m]$ and $\lambda = [\lambda_1, \dots, \lambda_p]$:
$L(x, \alpha, \lambda) = f(x) + \sum_{i=1}^m \alpha_i g_i(x) + \sum_{i=1}^p \lambda_i h_i(x)$
$\max_{\alpha_i \ge 0, \lambda_i} L(x, \alpha, \lambda) = \infty$ if any $g_i(x) > 0$ or any $h_i(x) \ne 0$, and $= f(x)$ otherwise.
Hence $p^* = \min_x \max_{\alpha_i \ge 0, \lambda_i} L(x, \alpha, \lambda)$.
Optimization: Dual problem In general, we have $\max_y \min_x f(x, y) \le \min_x \max_y f(x, y)$.
Primal problem: $p^* = \min_x \max_{\alpha_i \ge 0, \lambda_i} L(x, \alpha, \lambda)$
Dual problem: $d^* = \max_{\alpha_i \ge 0, \lambda_i} \min_x L(x, \alpha, \lambda)$, obtained by swapping the order of min and max.
Weak duality: $d^* \le p^*$. When the original problem is convex ($f$ and $g_i$ are convex functions and $h_i$ are affine), we have strong duality: $d^* = p^*$.
Hard-margin SVM: Dual problem
$\min_{w, w_0} \frac{1}{2}\|w\|^2$ s.t. $y^{(i)}(w^T x^{(i)} + w_0) \ge 1$, $i = 1, \dots, N$
By incorporating the constraints through Lagrange multipliers, we will have:
$\min_{w, w_0} \max_{\{\alpha_i \ge 0\}} \frac{1}{2}\|w\|^2 + \sum_{i=1}^N \alpha_i \left(1 - y^{(i)}(w^T x^{(i)} + w_0)\right)$
Dual problem (changing the order of min and max in the above problem):
$\max_{\{\alpha_i \ge 0\}} \min_{w, w_0} \frac{1}{2}\|w\|^2 + \sum_{i=1}^N \alpha_i \left(1 - y^{(i)}(w^T x^{(i)} + w_0)\right)$
Hard-margin SVM: Dual problem
$\max_{\{\alpha_i \ge 0\}} \min_{w, w_0} L(w, w_0, \alpha)$, where $L(w, w_0, \alpha) = \frac{1}{2}\|w\|^2 + \sum_{i=1}^N \alpha_i \left(1 - y^{(i)}(w^T x^{(i)} + w_0)\right)$
$\nabla_w L(w, w_0, \alpha) = 0 \;\Rightarrow\; w = \sum_{i=1}^N \alpha_i y^{(i)} x^{(i)}$
$\frac{\partial L(w, w_0, \alpha)}{\partial w_0} = 0 \;\Rightarrow\; \sum_{i=1}^N \alpha_i y^{(i)} = 0$
$w_0$ does not appear in the resulting dual; instead, a global constraint on $\alpha$ is created.
Hard-margin SVM: Dual problem
$\max_\alpha \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y^{(i)} y^{(j)} x^{(i)T} x^{(j)}$
subject to $\sum_{i=1}^N \alpha_i y^{(i)} = 0$ and $\alpha_i \ge 0$, $i = 1, \dots, N$
It is a convex QP. By solving the above problem we first find $\alpha$ and then $w = \sum_{i=1}^N \alpha_i y^{(i)} x^{(i)}$. $w_0$ can be set by making the margin equidistant to the classes (we discuss it more after defining support vectors). A sketch of solving this dual QP is shown below.
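Below is a minimal sketch (not from the slides) of solving the hard-margin dual QP with cvxpy on illustrative toy data; rewriting the quadratic term as a squared norm is a numerical convenience, not part of the slides.

```python
# Hard-margin dual: max sum(alpha) - 1/2 * alpha^T Q alpha, Q_ij = y_i y_j x_i^T x_j
import cvxpy as cp
import numpy as np

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -2.5]])  # toy data
y = np.array([1.0, 1.0, -1.0, -1.0])
N = len(y)

A = y[:, None] * X                      # rows are y_i * x_i, so alpha^T Q alpha = ||A^T alpha||^2
alpha = cp.Variable(N)

objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.sum_squares(A.T @ alpha))
constraints = [alpha >= 0, y @ alpha == 0]
cp.Problem(objective, constraints).solve()

w = A.T @ alpha.value                   # w = sum_i alpha_i y_i x_i
print("alpha =", np.round(alpha.value, 3), "w =", w)
```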
Karush-Kuhn-Tucker (KKT) conditions Necessary conditions for the solution $(w^*, w_0^*, \alpha^*)$:
$\nabla_w L(w, w_0, \alpha)\big|_{w^*, w_0^*, \alpha^*} = 0$
$\frac{\partial L(w, w_0, \alpha)}{\partial w_0}\big|_{w^*, w_0^*, \alpha^*} = 0$
$\alpha_i^* \ge 0$, $i = 1, \dots, N$
$y^{(i)}(w^{*T} x^{(i)} + w_0^*) \ge 1$, $i = 1, \dots, N$
$\alpha_i^* \left(1 - y^{(i)}(w^{*T} x^{(i)} + w_0^*)\right) = 0$, $i = 1, \dots, N$
In general, for a problem with inequality constraints, the optimal $(x^*, \alpha^*)$ satisfies the KKT conditions: $\nabla_x L(x, \alpha)\big|_{x^*, \alpha^*} = 0$; $\alpha_i^* \ge 0$, $g_i(x^*) \le 0$, and $\alpha_i^* g_i(x^*) = 0$ for $i = 1, \dots, m$.
Hard-margin SVM: Support vectors Inactive constraint: $y^{(i)}(w^T x^{(i)} + w_0) > 1 \Rightarrow \alpha_i = 0$, and thus $x^{(i)}$ is not a support vector. Active constraint: $y^{(i)}(w^T x^{(i)} + w_0) = 1 \Rightarrow \alpha_i$ can be greater than 0, and thus $x^{(i)}$ can be a support vector. A sample with $\alpha_i = 0$ can also lie on one of the margin hyperplanes.
Hard-margin SVM: Support vectors Support vectors (SVs) $= \{x^{(i)} \mid \alpha_i > 0\}$. The direction of the hyperplane can be found based only on the support vectors: $w = \sum_{\alpha_i > 0} \alpha_i y^{(i)} x^{(i)}$.
Hard-margin SVM: Dual problem Classifying new samples using only SVs. Classification of a new sample $x$:
$y = \text{sign}(w_0 + w^T x) = \text{sign}\left(w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} x^{(i)T} x\right)$
Support vectors are sufficient to predict the labels of new samples. The classifier is based on an expansion in terms of dot products of $x$ with the support vectors.
How to find $w_0$ At least one of the $\alpha$'s is strictly positive ($\alpha_s > 0$ for a support vector). The KKT complementary slackness condition tells us that the constraint corresponding to this nonzero $\alpha_s$ holds with equality, so $w_0$ is found such that $y^{(s)}(w_0 + w^T x^{(s)}) = 1$ is satisfied. Using one of the support vectors:
$w_0 = y^{(s)} - w^T x^{(s)} = y^{(s)} - \sum_{\alpha_n > 0} \alpha_n y^{(n)} x^{(n)T} x^{(s)}$
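A minimal sketch (not from the slides) of recovering $w_0$ from a support vector and classifying a new sample given a dual solution; it continues the illustrative variables alpha, X, y, w from the dual QP sketch above.

```python
import numpy as np

def intercept_from_support_vector(alpha, X, y, w, tol=1e-6):
    # pick a sample with alpha_s > 0; its margin constraint is tight, so
    # y_s (w^T x_s + w0) = 1  =>  w0 = y_s - w^T x_s
    s = int(np.argmax(alpha > tol))
    return y[s] - w @ X[s]

def predict(x_new, alpha, X, y, w0, tol=1e-6):
    # sign(w0 + sum_{alpha_i > 0} alpha_i y_i x_i^T x_new)
    sv = alpha > tol
    return np.sign(w0 + np.sum(alpha[sv] * y[sv] * (X[sv] @ x_new)))
```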
Hard-margin SVM: Dual problem
$\max_\alpha \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y^{(i)} y^{(j)} x^{(i)T} x^{(j)}$
subject to $\sum_{i=1}^N \alpha_i y^{(i)} = 0$ and $\alpha_i \ge 0$, $i = 1, \dots, N$
Only the dot products of pairs of training samples appear in the optimization problem. This is an important property that helps extend SVM to the nonlinear case (the cost function does not depend explicitly on the dimensionality of the feature space).
Soft-margin SVM: Dual problem
$\max_\alpha \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y^{(i)} y^{(j)} x^{(i)T} x^{(j)}$
subject to $\sum_{i=1}^N \alpha_i y^{(i)} = 0$ and $0 \le \alpha_i \le C$, $i = 1, \dots, N$
By solving the above quadratic problem we first find $\alpha$, then $w = \sum_{i=1}^N \alpha_i y^{(i)} x^{(i)}$, and $w_0$ is computed from the SVs. For a test sample $x$ (as before):
$y = \text{sign}(w_0 + w^T x) = \text{sign}\left(w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} x^{(i)T} x\right)$
Soft-margin SVM: Support vectors Support vectors: $\alpha_i > 0$. If $0 < \alpha_i < C$: SV on the margin, $\xi_i = 0$. If $\alpha_i = C$: SV on or over the margin.
Primal vs. dual hard-margin SVM problem Primal problem of hard-margin SVM: N inequality constraints, d + 1 variables. Dual problem of hard-margin SVM: one equality constraint, N positivity constraints, N variables (the Lagrange multipliers), and a more complicated objective function. The dual problem is helpful and instrumental for using the kernel trick.
Primal vs. dual soft-margin SVM problem Primal problem of soft-margin SVM: N inequality constraints, N positivity constraints, N + d + 1 variables. Dual problem of soft-margin SVM: one equality constraint, 2N inequality constraints ($0 \le \alpha_i \le C$), N variables (the Lagrange multipliers), and a more complicated objective function. The dual problem is helpful and instrumental for using the kernel trick.
Not linearly separable data Noisy data or overlapping classes (already discussed: soft margin); near linearly separable data. Non-linear decision surface: transform the data to a new feature space.
Nonlinear SVM Assume a transformation $\phi: \mathbb{R}^d \to \mathbb{R}^m$, $x \mapsto \phi(x)$, to a new feature space. Find a hyperplane in the transformed feature space: $w^T \phi(x) + w_0 = 0$.
$\phi(x) = [\phi_1(x), \dots, \phi_m(x)]$, where $\{\phi_1(x), \dots, \phi_m(x)\}$ is a set of basis functions (or features), $\phi_i(x): \mathbb{R}^d \to \mathbb{R}$.
Soft-margin SVM in a transformed space: Primal problem Primal problem:
$\min_{w, w_0} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^N \xi_i$
s.t. $y^{(i)}(w^T \phi(x^{(i)}) + w_0) \ge 1 - \xi_i$, $\xi_i \ge 0$, $i = 1, \dots, N$
$w \in \mathbb{R}^m$: the weights that must be found. If $m \gg d$ (a very high-dimensional feature space), there are many more parameters to learn.
Soft-margin SVM in a transformed space: Dual problem Optimization problem:
$\max_\alpha \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y^{(i)} y^{(j)} \phi(x^{(i)})^T \phi(x^{(j)})$
subject to $\sum_{i=1}^N \alpha_i y^{(i)} = 0$ and $0 \le \alpha_i \le C$, $i = 1, \dots, N$
If we have the inner products $\phi(x^{(i)})^T \phi(x^{(j)})$, only $\alpha = [\alpha_1, \dots, \alpha_N]$ needs to be learnt; it is not necessary to learn $m$ parameters as in the primal problem.
Kernelized soft-margin SVM Optimization problem, with kernel $k(x, x') = \phi(x)^T \phi(x')$:
$\max_\alpha \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y^{(i)} y^{(j)} k(x^{(i)}, x^{(j)})$
subject to $\sum_{i=1}^N \alpha_i y^{(i)} = 0$ and $0 \le \alpha_i \le C$, $i = 1, \dots, N$
Classifying a new sample:
$y = \text{sign}(w_0 + w^T \phi(x)) = \text{sign}\left(w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} \phi(x^{(i)})^T \phi(x)\right) = \text{sign}\left(w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} k(x^{(i)}, x)\right)$
Kernel trick Kernel: the dot product in the mapped feature space $F$: $k(x, x') = \phi(x)^T \phi(x')$ with $\phi(x) = [\phi_1(x), \dots, \phi_m(x)]^T$. $k(x, x')$ shows the similarity of $x$ and $x'$; distances in the feature space can be found from it accordingly. Kernel trick: extension of many well-known algorithms to kernel-based ones by substituting the dot product with the kernel function.
Kernel trick: Idea Idea: when the input vectors appear only in the form of dot products, we can use other choices of kernel function instead of the inner product, solving the problem without explicitly mapping the data (explicit mapping is expensive if $\phi(x)$ is very high dimensional). Instead of using a mapping $\phi: X \to F$ to represent $x \in X$ by $\phi(x) \in F$, a similarity function $k: X \times X \to \mathbb{R}$ is used, and the data set can be represented by an $N \times N$ matrix $K$. We specify only an inner-product function between points in the transformed space (not their coordinates). In many cases, the inner product in the embedding space can be computed efficiently.
Why kernel? Kernel functions can indeed be efficiently computed, with a cost proportional to $d$ instead of $m$. Example: consider the second-order polynomial transform
$\phi(x) = [1, x_1, \dots, x_d, x_1^2, x_1 x_2, \dots, x_d x_d]^T$, so $m = 1 + d + d^2$.
Computed explicitly: $\phi(x)^T \phi(x') = 1 + \sum_{i=1}^d x_i x'_i + \sum_{i=1}^d \sum_{j=1}^d x_i x_j x'_i x'_j$, which costs $O(m)$.
Computed as a kernel: $\phi(x)^T \phi(x') = 1 + x^T x' + (x^T x')^2$, which costs $O(d)$. A small numerical check appears below.
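A short sketch (not from the slides) verifying numerically that the $O(d)$ kernel shortcut $1 + x^T x' + (x^T x')^2$ equals the explicit $O(d^2)$ feature-map inner product; the data are illustrative random vectors.

```python
import numpy as np

def phi(x):
    # [1, x_1..x_d, all products x_i * x_j]  -> dimension 1 + d + d^2
    return np.concatenate(([1.0], x, np.outer(x, x).ravel()))

rng = np.random.default_rng(0)
x, z = rng.normal(size=5), rng.normal(size=5)

explicit = phi(x) @ phi(z)                # O(m) = O(d^2) work
shortcut = 1.0 + x @ z + (x @ z) ** 2     # O(d) work
print(np.isclose(explicit, shortcut))     # True
```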
Polynomial kernel: Efficient kernel function An efficient kernel function relies on a carefully constructed transform that allows fast computation. Example: for polynomial kernels, certain combinations of coefficients still keep $k$ easy to compute ($\gamma > 0$ and $c > 0$): $\phi(x) = [c, \sqrt{2c\gamma}\, x_1, \dots, \sqrt{2c\gamma}\, x_d, \gamma x_1^2, \gamma x_1 x_2, \dots, \gamma x_d x_d]^T$, which corresponds to $k(x, x') = (\gamma x^T x' + c)^2$.
Polynomial kernel: Degree two We instead use $k(x, x') = (x^T x' + 1)^2$, which, for $x = [x_1, \dots, x_d]^T$ (a $d$-dimensional input space), corresponds to:
$\phi(x) = [1, \sqrt{2}x_1, \dots, \sqrt{2}x_d, x_1^2, \dots, x_d^2, \sqrt{2}x_1 x_2, \dots, \sqrt{2}x_1 x_d, \sqrt{2}x_2 x_3, \dots, \sqrt{2}x_{d-1} x_d]^T$
or, equivalently, listing each cross term twice:
$\phi(x) = [1, \sqrt{2}x_1, \dots, \sqrt{2}x_d, x_1^2, \dots, x_d^2, x_1 x_2, \dots, x_1 x_d, x_2 x_1, \dots, x_{d-1} x_d]^T$
Polynomial kernel In general, $k(x, z) = (x^T z)^M$ contains all monomials of order $M$. This can similarly be generalized to include all terms up to degree $M$: $k(x, z) = (x^T z + c)^M$ ($c > 0$). Example: SVM boundary for a polynomial kernel:
$w_0 + w^T \phi(x) = 0 \;\Rightarrow\; w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} \phi(x^{(i)})^T \phi(x) = 0 \;\Rightarrow\; w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} k(x^{(i)}, x) = 0 \;\Rightarrow\; w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} (x^{(i)T} x + c)^M = 0$
The boundary is a polynomial of order $M$.
Some common kernel functions The kernel is a mathematical and computational shortcut that allows us to combine the transform and the inner product into a single, more efficient function. Linear: $k(x, x') = x^T x'$. Polynomial: $k(x, x') = (x^T x' + c)^M$, $c \ge 0$. Gaussian: $k(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right)$. Sigmoid: $k(x, x') = \tanh(a x^T x' + b)$.
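Small numpy sketches (not from the slides) of the kernels listed above; the default parameter values are illustrative.

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, c=1.0, M=2):
    return (x @ z + c) ** M

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, z, a=1.0, b=0.0):
    # note: not positive semi-definite for all choices of a, b
    return np.tanh(a * (x @ z) + b)
```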
Gaussian kernel Gaussian: $k(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right)$, which corresponds to an infinite-dimensional feature space. Example: the SVM boundary for a Gaussian kernel considers a Gaussian function around each support vector:
$w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} \exp\left(-\frac{\|x - x^{(i)}\|^2}{2\sigma^2}\right) = 0$
SVM with a Gaussian kernel can classify any arbitrary training set: the training error is zero when $\sigma \to 0$, and all samples become support vectors (likely overfitting).
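An illustrative sketch (not from the slides) using scikit-learn's RBF-kernel SVC; note that scikit-learn's gamma corresponds to $1/(2\sigma^2)$, so a very small $\sigma$ means a very large gamma, and nearly every sample tends to become a support vector.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

for gamma in [0.5, 100.0]:            # moderate vs. very narrow Gaussians
    clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)
    print(f"gamma={gamma}: train acc={clf.score(X, y):.2f}, "
          f"#SVs={clf.n_support_.sum()}")
```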
SVM: Linear and Gaussian kernels example (figures). Linear kernel: C = 0.1. Gaussian kernel: C = 1, σ = 0.25; some SVs appear to be far from the boundary? [Zisserman's slides]
Hard margin example (figure): for a narrow Gaussian (small σ), even the protection of a large margin cannot suppress overfitting. [Y. S. Abu-Mostafa et al., Learning From Data, 2012]
SVM Gaussian kernel: Example (figures)
$f(x) = w_0 + \sum_{\alpha_i > 0} \alpha_i y^{(i)} \exp\left(-\frac{\|x - x^{(i)}\|^2}{2\sigma^2}\right)$
[Source: Zisserman's slides]
Constructing kernels Construct kernel functions directly, ensuring that each is a valid kernel, i.e., that it corresponds to an inner product in some feature space. Example: $k(x, x') = (x^T x')^2$ with corresponding mapping $\phi(x) = [x_1^2, \sqrt{2}x_1 x_2, x_2^2]^T$ for $x = [x_1, x_2]^T$. We need a way to test whether a kernel is valid without having to construct $\phi(x)$.
Valid kernel: Necessary & sufficient conditions Gram matrix $K_{N \times N}$: $K_{ij} = k(x^{(i)}, x^{(j)})$, i.e., the kernel function restricted to a set of points $\{x^{(1)}, x^{(2)}, \dots, x^{(N)}\}$ [Shawe-Taylor & Cristianini 2004]:
$K = \begin{bmatrix} k(x^{(1)}, x^{(1)}) & \cdots & k(x^{(1)}, x^{(N)}) \\ \vdots & \ddots & \vdots \\ k(x^{(N)}, x^{(1)}) & \cdots & k(x^{(N)}, x^{(N)}) \end{bmatrix}$
Mercer theorem: the kernel is valid if and only if the kernel matrix is symmetric positive semi-definite (for any choice of data points). Conversely, any symmetric positive semi-definite matrix can be regarded as a kernel matrix, that is, as an inner-product matrix in some space. A numerical check is sketched below.
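A small sketch (not from the slides) that numerically checks Mercer's condition on a finite sample by testing whether the Gram matrix is symmetric positive semi-definite; the sample data and tolerance are illustrative.

```python
import numpy as np

def gram_matrix(kernel, X):
    return np.array([[kernel(a, b) for b in X] for a in X])

def is_valid_on_sample(kernel, X, tol=1e-10):
    K = gram_matrix(kernel, X)
    symmetric = np.allclose(K, K.T)
    psd = np.all(np.linalg.eigvalsh(K) >= -tol)   # all eigenvalues >= 0 (up to tolerance)
    return symmetric and psd

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2) / 2.0)
print(is_valid_on_sample(rbf, X))   # True: the Gaussian kernel is valid
```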
Construct valid kernels New valid kernels can be built from existing ones using standard closure rules, given: $c > 0$; $k_1$, $k_2$ valid kernels; $f(\cdot)$ any function; $q(\cdot)$ a polynomial with nonnegative coefficients; $\phi(x)$ a function from $x$ to $\mathbb{R}^M$; $k_3(\cdot, \cdot)$ a valid kernel on $\mathbb{R}^M$; $A$ a symmetric positive semi-definite matrix; $x_a$ and $x_b$ variables (not necessarily disjoint) with $x = (x_a, x_b)$, and $k_a$, $k_b$ valid kernels over their respective spaces. For instance, $c\,k_1(x, x')$, $f(x)\,k_1(x, x')\,f(x')$, $q(k_1(x, x'))$, $\exp(k_1(x, x'))$, $k_1 + k_2$, $k_1 k_2$, $k_3(\phi(x), \phi(x'))$, $x^T A x'$, $k_a(x_a, x'_a) + k_b(x_b, x'_b)$, and $k_a(x_a, x'_a)\,k_b(x_b, x'_b)$ are all valid kernels. [Bishop]
Extending linear methods to kernelized ones Linear methods are popular: they have unique optimal solutions, faster learning algorithms, and better analysis. However, we often require nonlinear methods in real-world problems, so we can use kernel-based versions of these linear algorithms. If a problem can be formulated such that feature vectors appear only in inner products, we can extend it to a kernelized one; we then only need to know the dot products of all pairs of data (the geometry of the data in the feature space). By replacing inner products with kernels in linear algorithms, we obtain very flexible methods: we can operate in the mapped space without ever computing the coordinates of the data in that space.
Which information can be obtained from the kernel? Example: we know all pairwise distances in the feature space:
$d(\phi(x), \phi(z))^2 = \|\phi(x) - \phi(z)\|^2 = k(x, x) + k(z, z) - 2k(x, z)$
Therefore, we also know the distance of points from the center of mass of a set. Many dimensionality reduction, clustering, and classification methods can be described in terms of pairwise distances, which allows us to introduce kernelized versions of them. A small sketch follows.
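A minimal sketch (not from the slides) computing all pairwise feature-space distances from a Gram matrix alone, using the identity above; the Gaussian-kernel Gram matrix used as an example is illustrative.

```python
import numpy as np

def kernel_distances(K):
    # K is an N x N Gram matrix; returns the N x N matrix of feature-space distances
    diag = np.diag(K)
    sq = diag[:, None] + diag[None, :] - 2.0 * K   # k(x,x) + k(z,z) - 2 k(x,z)
    return np.sqrt(np.maximum(sq, 0.0))            # clip tiny negatives from round-off

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)  # Gaussian Gram matrix
print(kernel_distances(K))
```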
Kernels for structured data Kernels can also be defined on general types of data: kernel functions do not need to be defined over vectors; we just need a symmetric positive semi-definite kernel matrix. Thus, many algorithms can work with general (non-vectorial) data. Kernels exist to embed strings, trees, graphs, etc. This may be even more important than nonlinearity: we can use kernel-based versions of the classical learning algorithms for recognition of structured data.
Kernel function for objects Sets: an example of a kernel function for sets is $k(A, B) = 2^{|A \cap B|}$ (see the sketch below). Strings: the inner product of the feature vectors for two strings can be defined, e.g., as a sum over all common subsequences, weighted according to their frequency of occurrence and lengths.
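A tiny sketch (not from the slides) of the set kernel mentioned above; the example sets are illustrative.

```python
# Set kernel: k(A, B) = 2^{|A ∩ B|}
def set_kernel(A: set, B: set) -> float:
    return float(2 ** len(A & B))

print(set_kernel({"a", "b", "c"}, {"b", "c", "d"}))   # |A ∩ B| = 2, so 2^2 = 4.0
```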
Kernel trick advantages: summary We operate in the mapped space without ever computing the coordinates of the data in that space. Besides vectors, we can introduce kernel functions for structured data (graphs, strings, etc.). Much of the geometry of the data in the embedding space is contained in the pairwise dot products. In many cases, the inner product in the embedding space can be computed efficiently.
SVM: Summary Hard margin: maximizing the margin. Soft margin: handling noisy data and overlapping classes via slack variables in the problem. The dual problems of hard-margin and soft-margin SVM lead easily to the nonlinear SVM method by kernel substitution, and express the classifier decision in terms of support vectors. Kernel SVMs learn a linear decision boundary in a high-dimensional feature space without explicitly working on the mapped data.