NEW ROBUST UNSUPERVISED SUPPORT VECTOR MACHINES


J Syst Sci Complex (2011) 24: 466–476

Kun ZHAO · Mingyu ZHANG · Naiyang DENG

Received: March 8 / Revised: February 5
© The Editorial Office of JSSC & Springer-Verlag Berlin Heidelberg

Abstract  This paper proposes a robust unsupervised classification algorithm based on a modified robust version of the primal problem of standard SVMs, which is relaxed directly, with the label variables, to a semi-definite program. Numerical results confirm the robustness of the proposed method.

Key words  Robustness, semi-definite programming, support vector machines, unsupervised learning.

1 Introduction

As an important branch of unsupervised learning, clustering analysis aims at partitioning a collection of objects into groups or clusters so that members within each cluster are more closely related to one another than objects assigned to different clusters [1]. Clustering algorithms provide automated tools that help identify structure in an unlabeled set, in a variety of areas including bio-informatics, computer vision, information retrieval and data mining. There is a rich resource of prior work on this subject.

Efficient convex optimization techniques have had a profound impact on the field of machine learning. Most of this work applies quadratic programming techniques to support vector machines (SVMs) and kernel machine training [2]. Semi-definite programming (SDP) extends the toolbox of optimization methods used in machine learning beyond the current unconstrained, linear and quadratic programming techniques. One of its main attractions is that it has proven successful in constructing relaxations of NP-hard problems.

Data uncertainty is present in many real-world optimization problems. For example, in supply chain optimization, the actual demand for products, financial returns, actual material requirements and other resources are not precisely known when critical decisions need to be made. In engineering and science, data are subject to measurement errors, which also constitute sources of data uncertainty in the optimization model [3].

Kun ZHAO, Logistics School, Beijing Wuzi University, Beijing 101149, China.
Mingyu ZHANG, School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China.
Naiyang DENG (Corresponding author), College of Science, China Agricultural University, Beijing 100083, China.
This research is supported by the Key Project of the National Natural Science Foundation of China under Grant No.
This paper was recommended for publication by Editor Xiaoguang YANG.

In mathematical optimization models, we commonly assume that the data inputs are precisely known and ignore the influence of parameter uncertainties on the optimality and feasibility of the models. It is therefore conceivable that, as the data differ from the assumed nominal values, the generated optimal solution may violate critical constraints and perform poorly from an objective function point of view [3]. This observation raises the natural question of designing solution approaches that are immune to data uncertainty, that is, robust [4]. Robust optimization addresses the issue of data uncertainties from the perspective of computational tractability. The first step in this direction was taken by Soyster [5]. A significant step forward in the development of robust optimization was taken independently by Ben-Tal and Nemirovski [6–8], El-Ghaoui and Lebret [9], and El-Ghaoui, et al. [10]. To overcome the issue of over-conservatism, these papers proposed less conservative models by considering uncertain linear problems with ellipsoidal uncertainties, which solve the robust counterpart of the nominal problem in the form of conic quadratic problems. Sim [3] proposed a new robust counterpart, which inherits the character of the nominal problem.

We briefly outline the contents of the paper. We review standard SVMs, SDP and robust linear optimization in Section 2. Section 3 formulates the robust unsupervised classification algorithm, which is based on the primal problem of standard SVMs. Numerical results are shown in Section 4, and Section 5 concludes.

A word about our notation. All vectors are column vectors unless transposed to a row vector by a superscript T. The scalar product of two vectors x and y in the n-dimensional real space R^n is denoted by x^T y. For an l×d matrix A, A_i denotes the ith row of A. The identity matrix in a real space of arbitrary dimension is denoted by I, and a column vector of ones of arbitrary dimension by e.

2 Preliminaries

2.1 Support Vector Machines

Consider the supervised classification problem with training set T = {(x_1, y_1), (x_2, y_2), ..., (x_l, y_l)}, where x_i ∈ R^n and y_i ∈ {−1, +1} is the output corresponding to the input x_i. The goal of SVMs is to find the linear classifier f(x) = w^T x + b that maximizes the minimum misclassification margin:

min_{w∈R^n, b∈R, ξ∈R^l}  (1/2)‖w‖² + C Σ_{i=1}^l ξ_i,   (1)
s.t.  y_i(w^T x_i + b) ≥ 1 − ξ_i,  i = 1, 2, ..., l,   (2)
      ξ_i ≥ 0,  i = 1, 2, ..., l.   (3)

Problem (1)–(3) is the primal problem of standard support vector machines (C-SVMs) [11]. The consistency of support vector machines is well understood [12–13].
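For concreteness, problem (1)–(3) can be written down almost verbatim with a convex-optimization modeling tool. The following minimal sketch uses Python with cvxpy (a choice made here for illustration only; the paper's experiments use SeDuMi), and the data X, y are synthetic placeholders:

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for the training set T = {(x_i, y_i)}: two Gaussian blobs.
rng = np.random.default_rng(0)
l, n, C = 40, 2, 1.0
X = np.vstack([rng.normal(-1.0, 0.6, (l // 2, n)),
               rng.normal(+1.0, 0.6, (l // 2, n))])
y = np.hstack([-np.ones(l // 2), np.ones(l // 2)])

w = cp.Variable(n)
b = cp.Variable()
xi = cp.Variable(l, nonneg=True)                             # constraint (3)
obj = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))  # objective (1)
cons = [cp.multiply(y, X @ w + b) >= 1 - xi]                 # constraint (2)
cp.Problem(obj, cons).solve()

print("training errors:", int(np.sum(y * (X @ w.value + b.value) < 0)))
```

Any QP-capable solver shipped with cvxpy suffices for this small instance.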

2.2 Semi-Definite Programming

A recent development in convex optimization theory is semi-definite programming, a branch of the field aimed at optimizing over the cone of positive semi-definite matrices. In semi-definite programming one minimizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semi-definite. Such a constraint is nonlinear and nonsmooth, but convex, so semi-definite programs are convex optimization problems. Semi-definite programming can be regarded as an extension of linear programming, where the componentwise inequalities between vectors are replaced by matrix inequalities, or, equivalently, the first orthant is replaced by the cone of positive semi-definite matrices. Semi-definite programming unifies several standard problems (e.g., linear and quadratic programming). Although semi-definite programs are much more general than linear programs, they are not much harder to solve: most interior-point methods for linear programming have been generalized to semi-definite programs.

Given C ∈ M^n, A_i ∈ M^n, i = 1, 2, ..., m, and b = (b_1, b_2, ..., b_m)^T ∈ R^m, where M^n is the set of symmetric n×n matrices, the standard SDP problem is to find a matrix X ∈ M^n for the optimization problem

(SDP)  min  C • X
       s.t.  A_i • X = b_i,  i = 1, 2, ..., m,
             X ⪰ 0,

where the operation • is the matrix inner product, i.e., A • B = tr A^T B, and the notation X ⪰ 0 means that X is a positive semi-definite matrix. The dual problem of SDP can be written as

(SDD)  max  b^T λ
       s.t.  C − Σ_{i=1}^m λ_i A_i ⪰ 0,

where λ ∈ R^m. Interior point methods work well for SDP, and several software packages exist, such as SeDuMi [14] and SDPT3 [15].
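As a quick illustration of the (SDP)/(SDD) pair above, here is a tiny instance in the same sketch style; the matrices C, A_i and vector b are arbitrary assumptions chosen only so the problem is bounded and feasible:

```python
import cvxpy as cp
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(1)
C = rng.normal(size=(n, n)); C = (C + C.T) / 2   # symmetric cost matrix
A = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 1.0])]
b = [1.0, 2.0]

X = cp.Variable((n, n), symmetric=True)
eq = [cp.trace(A[i] @ X) == b[i] for i in range(m)]   # A_i . X = b_i
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), eq + [X >> 0])
prob.solve(solver=cp.SCS)                             # any SDP-capable solver

print("primal optimal value:", prob.value)            # equals b^T lambda here
print("dual variables lambda:", [float(c.dual_value) for c in eq])
```

The dual multipliers of the equality constraints are exactly the λ of (SDD), and strong duality makes the two printed values agree for this instance.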

2.3 Robust Linear Optimization

The general optimization problem under parameter uncertainty is as follows:

max  f_0(x, D_0)   (4)
s.t.  f_i(x, D_i) ≥ 0,  i ∈ I,   (5)
      x ∈ X,   (6)

where f_i(x, D_i), i ∈ {0} ∪ I, are given functions, X is a given set, and D_i, i ∈ {0} ∪ I, are the vectors of uncertain coefficients. When the uncertain coefficients D_i take values equal to their expected values D̄_i, we obtain the nominal problem of problem (4)–(6). In order to address the parameter uncertainty in problem (4)–(6), Ben-Tal and Nemirovski [6–7] and, independently, El-Ghaoui, et al. [9–10] proposed to solve the following robust optimization problem:

max  min_{D_0 ∈ U_0} f_0(x, D_0)   (7)
s.t.  f_i(x, D_i) ≥ 0,  ∀ D_i ∈ U_i,  i ∈ I,   (8)
      x ∈ X,   (9)

where U_i, i ∈ {0} ∪ I, are given uncertainty sets. In the robust optimization framework (7)–(9), Sim considered the uncertainty set U as follows:

U = { D | ∃ u ∈ R^N : D = D̄ + Σ_{j∈N} ΔD_j u_j, ‖u‖ ≤ Ω },

where D̄ is the nominal value of the data, ΔD_j (j ∈ N) is a direction of data perturbation, and Ω is a parameter controlling the tradeoff between robustness and optimality (robustness increases as Ω increases). The vector norm is restricted to satisfy ‖u‖ = ‖|u|‖, such as the l_1 and l_2 norms. When we select the l_1 norm, the corresponding perturbation region is a polyhedron and the robust counterpart is again a linear program (LP); the detailed robust counterpart can be found in [3].

3 Robust Unsupervised Classification Algorithm

Semi-definite programming has shown its utility in machine learning. Lanckriet, et al. [16] showed how the kernel matrix can be learned from data via SDP techniques. They presented new methods for learning a kernel matrix from a labeled data set and from a transductive data set; both methods relax the problem to an SDP. In the transductive setting, using the labeled data one can learn a good embedding (kernel matrix), which can then be applied to the unlabeled part of the data. De Bie and Cristianini [17] relaxed the two-class transduction problem to an SDP based on transductive SVMs. Xu, et al. [18] developed methods for two-class unsupervised and semi-supervised classification problems based on bounded C-SVMs by relaxation to SDP, building on [16–17]. The purpose there is to find a labeling with maximum margin, not to find a large-margin classifier: the data are clustered into two classes such that an SVM run subsequently on the result obtains the maximum margin over all possible labelings. A class-balance constraint −ε ≤ Σ_{i=1}^l y_i ≤ ε must be added (ε is an integer); otherwise one could simply assign all the data to the same class and obtain an unbounded margin, and the constraint also limits the influence of noisy data in some sense. Zhao, et al. [19–20] presented other versions based on bounded ν-SVMs and Lagrangian SVMs, respectively. The virtues of the first algorithm are the ease of parameter selection and better classification results, while the advantage of the second is that it consumes far fewer CPU seconds than the other algorithms [18–19] on the same data set.

All of the unsupervised classification algorithms mentioned above involve a complicated procedure for relaxing the NP-hard problem to a semi-definite program, because they all need to take the dual problem twice. To avoid this complication, Zhao, et al. [21] directly relaxed a modified version of the primal problem of SVMs, with label variables, to a semi-definite program. In all of the methods mentioned above, the training data in the optimization problems are implicitly assumed to be known exactly. In practice, however, the training data are perturbed, since they are usually corrupted by measurement noise. Zhao, et al. [22] proposed robust unsupervised and semi-supervised classification algorithms based on bounded C-SVMs, which again involve the complicated procedure of taking the dual problem twice. To avoid this, we propose a robust unsupervised classification algorithm with polyhedral perturbations, based on the primal problem of standard SVMs.

To model the measurement noise, we assume each training point x_i ∈ R^n, i = 1, 2, ..., l, is perturbed to x̃_i; concretely,

x̃_ij = x_ij + Δx_ij z_ij,  i = 1, 2, ..., l,  j = 1, 2, ..., n,  ‖z_i‖_p ≤ Ω.

Here z_i is a random variable; when we select the l_1 norm, the condition ‖z_i‖_1 ≤ Ω is equivalent to Σ_{j=1}^n |z_ij| ≤ Ω, i = 1, 2, ..., l.
Since x̃_ij = x_ij + Δx_ij z_ij, this is in turn equivalent to

Σ_{j=1}^n |x̃_ij − x_ij| / |Δx_ij| ≤ Ω,  i = 1, 2, ..., l,

so the perturbation region of each x̃_i is a polyhedron.
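The polyhedral region matters because, for a constraint linear in the perturbation, the worst case over the l_1 ball is attained at one of its vertices ±Ω e_j; this is exactly what the robust counterpart below encodes with the auxiliary variables t_i. A small numpy check of this fact, with all data made up for the example:

```python
import numpy as np

# For ||z_i||_1 <= Omega, the worst case of y_i * (w . x_tilde_i + b) is
#   y_i * (w . x_i + b) - Omega * max_j |Delta_x_ij * w_j|,
# attained at a vertex +/- Omega * e_j of the l1 ball.
rng = np.random.default_rng(2)
n, Omega = 4, 1.5
w, b, y_i = rng.normal(size=n), 0.3, 1.0
x_i = rng.normal(size=n)                 # nominal point
dx_i = np.abs(rng.normal(size=n))        # perturbation directions Delta_x_ij

closed_form = y_i * (w @ x_i + b) - Omega * np.max(np.abs(dx_i * w))

worst = np.inf
for j in range(n):                       # enumerate the 2n vertices of the ball
    for s in (-Omega, +Omega):
        z = np.zeros(n); z[j] = s
        worst = min(worst, y_i * (w @ (x_i + dx_i * z) + b))

print(np.isclose(worst, closed_form))    # True
```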

Consider the primal problem of standard SVMs when the training data are perturbed as above. We get the optimization problem

min_{w,b,ξ}  (1/2)w^T w + C Σ_{i=1}^l ξ_i,   (10)
s.t.  y_i((w · x̃_i) + b) ≥ 1 − ξ_i,   (11)
      ξ_i ≥ 0,  i = 1, 2, ..., l.   (12)

Constraint (11) must hold for every realization of x̃_i, so there are infinitely many constraints and problem (10)–(12) is a semi-infinite optimization problem; there seems to be no good method for solving it directly. Following robust linear optimization, we instead form its robust counterpart. In Sim's robust framework [3], constraint (11) is equivalent to

y_i((w · x_i) + b) ≥ 1 − ξ_i + Ω t_i,  t_i ≥ 0,   (13)
Δx_ij w_j y_i ≤ t_i,  j = 1, 2, ..., n,   (14)
−Δx_ij w_j y_i ≤ t_i,  j = 1, 2, ..., n.   (15)

Then we get the robust primal problem of standard SVMs:

min_{w,b,ξ,t}  (1/2)w^T w + C Σ_{i=1}^l ξ_i,   (16)
s.t.  y_i((w · x_i) + b) ≥ 1 − ξ_i + Ω t_i,   (17)
      t_i ≥ 0,   (18)
      Δx_ij w_j y_i ≤ t_i,  j = 1, 2, ..., n,   (19)
      −Δx_ij w_j y_i ≤ t_i,  j = 1, 2, ..., n,   (20)
      ξ_i ≥ 0,  i = 1, 2, ..., l.   (21)

When the labels y_i, i = 1, 2, ..., l, are unknown, an NP-hard optimization problem for the unsupervised classification problem is formulated as follows:

min_{y∈{−1,1}^l, w,b,ξ,t}  (1/2)w^T w + C Σ_{i=1}^l ξ_i,   (22)
s.t.  y_i((w · x_i) + b) ≥ 1 − ξ_i + Ω t_i,   (23)
      t_i ≥ 0,  ξ_i ≥ 0,  i = 1, 2, ..., l,   (24)
      Δx_ij w_j y_i ≤ t_i,  j = 1, 2, ..., n,   (25)
      −Δx_ij w_j y_i ≤ t_i,  j = 1, 2, ..., n,   (26)
      −ε ≤ Σ_{i=1}^l y_i ≤ ε.   (27)
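Before the labels are relaxed, note that the supervised robust counterpart (16)–(21) above is an ordinary convex program once y is fixed. A hedged cvxpy sketch, with made-up data and a perturbation matrix dX standing in for the directions Δx_ij:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
l, n, C, Omega = 30, 2, 1.0, 0.5
X = np.vstack([rng.normal(-1.0, 0.5, (l // 2, n)),
               rng.normal(+1.0, 0.5, (l // 2, n))])
y = np.hstack([-np.ones(l // 2), np.ones(l // 2)])
dX = 0.1 * np.abs(rng.normal(size=(l, n)))   # perturbation directions Delta_x_ij

w, b = cp.Variable(n), cp.Variable()
xi = cp.Variable(l, nonneg=True)             # (21): xi_i >= 0
t = cp.Variable(l, nonneg=True)              # (18): t_i >= 0
cons = [cp.multiply(y, X @ w + b) >= 1 - xi + Omega * t]   # (17)
for j in range(n):                           # (19)-(20): |Delta_x_ij w_j y_i| <= t_i
    cons.append(w[j] * (dX[:, j] * y) <= t)
    cons.append(-w[j] * (dX[:, j] * y) <= t)
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), cons)
prob.solve()
print("robust objective:", prob.value)
```

Setting Omega = 0 recovers the nominal problem (1)–(3); increasing Omega trades margin for immunity to the perturbations.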

Now we modify the above formulation so that the unsupervised classification problem can be handled appropriately. When the variables ξ_i and t_i are replaced by ξ_i² and t_i², respectively, the constraints (24) can be dropped. So we get the robust modified primal problem of SVMs for the unsupervised classification problem:

min_{y∈{−1,1}^l, w,b,ξ,t}  (1/2)‖w‖² + C Σ_{i=1}^l ξ_i²,   (28)
s.t.  y_i((w · x_i) + b) ≥ 1 − ξ_i² + Ω t_i²,   (29)
      Δx_ij w_j y_i ≤ t_i²,  j = 1, 2, ..., n,   (30)
      −Δx_ij w_j y_i ≤ t_i²,  j = 1, 2, ..., n,   (31)
      −ε ≤ Σ_{i=1}^l y_i ≤ ε.   (32)

Setting the variable λ = (y^T, w^T, b, ξ^T, t^T)^T, we get

min_λ  (1/2)λ^T A_0 λ,   (33)
s.t.  λ^T A_i λ ≥ 1,  i = 1, 2, ..., l,   (34)
      λ^T B_ij λ ≥ 0,  i = 1, 2, ..., l,  j = 1, 2, ..., n,   (35)
      λ^T D_ij λ ≥ 0,  i = 1, 2, ..., l,  j = 1, 2, ..., n,   (36)
      −ε ≤ (e_l^T, 0_{n+2l+1}^T)λ ≤ ε,   (37)

where

A_0 = Diag(0_{l×l}, I_{n×n}, 0, 2C I_{l×l}, 0_{l×l}),   (38)

A_i = [ 0_{l×l}     A_{i1}            0_{l×l}      0_{l×l}
        A_{i1}^T    0_{(n+1)×(n+1)}   0_{(n+1)×l}  0_{(n+1)×l}
        0_{l×l}     0_{l×(n+1)}       A_{i2}       0_{l×l}
        0_{l×l}     0_{l×(n+1)}       0_{l×l}      A_{i3} ],   (39)

B_ij = [ 0_{l×l}     B_{ij1}           0_{l×l}      0_{l×l}
         B_{ij1}^T   0_{(n+1)×(n+1)}   0_{(n+1)×l}  0_{(n+1)×l}
         0_{l×l}     0_{l×(n+1)}       0_{l×l}      0_{l×l}
         0_{l×l}     0_{l×(n+1)}       0_{l×l}      A_{i2} ],   (40)

D_ij = [ 0_{l×l}     −B_{ij1}          0_{l×l}      0_{l×l}
         −B_{ij1}^T  0_{(n+1)×(n+1)}   0_{(n+1)×l}  0_{(n+1)×l}
         0_{l×l}     0_{l×(n+1)}       0_{l×l}      0_{l×l}
         0_{l×l}     0_{l×(n+1)}       0_{l×l}      A_{i2} ].   (41)

Here A_{i1} ∈ R^{l×(n+1)} (i = 1, 2, ..., l) is the matrix whose ith row is (1/2)(x_i^T, 1) and whose remaining elements are all zeros; A_{i2} ∈ R^{l×l} is the matrix whose element in the ith row and ith column is 1 and whose remaining elements are all zeros; A_{i3} ∈ R^{l×l} is the matrix whose element in the ith row and ith column is −Ω and whose remaining elements are all zeros; and B_{ij1} ∈ R^{l×(n+1)} is the matrix whose element in the ith row and jth column is −(1/2)Δx_ij and whose remaining elements are all zeros.

Let λλ^T = M and relax the constraint λλ^T = M to M ⪰ 0 and diag(M)_l = e_l. Then we get the semi-definite programming problem

min_M  (1/2)Tr(M A_0),   (42)
s.t.  Tr(M A_i) ≥ 1,  i = 1, 2, ..., l,   (43)
      Tr(M B_ij) ≥ 0,  i = 1, 2, ..., l,  j = 1, 2, ..., n,   (44)
      Tr(M D_ij) ≥ 0,  i = 1, 2, ..., l,  j = 1, 2, ..., n,   (45)
      −εe ≤ M_l (e_l^T, 0_{n+2l+1}^T)^T ≤ εe,   (46)
      M ⪰ 0,  diag(M)_l = e_l,   (47)

where diag(M)_l denotes the first l diagonal elements of the matrix M, and M_l denotes the first l rows of M. After obtaining the optimal solution M* of problem (42)–(47), the labels y^T of the training data are obtained by the following rounding method.

Rounding method
1) Take the first l elements t* = (η_1, η_2, ..., η_l)^T of the eigenvector corresponding to the largest eigenvalue of the matrix M*. Construct the vector y* = (y*_1, y*_2, ..., y*_l)^T = (sgn(η_1), sgn(η_2), ..., sgn(η_l))^T. If y* satisfies the constraint −ε ≤ Σ_{i=1}^l y*_i ≤ ε, set y = y*, which is the final labeling of the data, and the two classes of data are clustered.
2) If y* does not satisfy the constraint −ε ≤ Σ_{i=1}^l y*_i ≤ ε, let δ = |y*^T e| − ε; the labels y are obtained from y* as follows: select the δ smallest absolute values |η_i| within the majority class, and flip the corresponding labels in y*.
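A minimal numpy sketch of this rounding step, assuming an optimal matrix M* is already available; the flip count below accounts for each flip moving the label sum by 2, which is how δ acts in step 2:

```python
import numpy as np

def round_labels(M_star, l, eps):
    """Extract cluster labels from the SDP solution M* of (42)-(47)."""
    vals, vecs = np.linalg.eigh(M_star)       # eigh returns ascending eigenvalues
    eta = vecs[:l, -1]                        # first l entries of top eigenvector
    y = np.sign(eta)
    y[y == 0] = 1.0                           # break exact ties arbitrarily
    excess = abs(y.sum()) - eps               # violation of -eps <= sum(y) <= eps
    if excess > 0:
        maj = np.sign(y.sum())
        k = int(np.ceil(excess / 2))          # each flip changes the sum by 2
        majority = [i for i in range(l) if y[i] == maj]
        majority.sort(key=lambda i: abs(eta[i]))   # least confident entries first
        for i in majority[:k]:
            y[i] = -maj
    return y

# Hypothetical use, once (42)-(47) has been solved for M_opt:
#   y = round_labels(M_opt, l=20, eps=2)
```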

4 Numerical Results

In this section we test our algorithm (RPSDP) against PRC-SDP [22] on various data sets, using the SeDuMi library, with different values of the robust tradeoff parameter Ω. We select the synthetic data sets AI and Two-circles, which have 190 points and 100 points in R², respectively. We select the parameter ε = 1, fix the penalty parameter C, and produce the directions of data perturbations randomly. The results are shown in Tables 1 and 2; each entry is the misclassification rate.

Table 1  Results for different parameters Ω on AI with RPSDP and PRC-SDP

Ω        0.5     1       1.5     2.5     5
RPSDP    2/190   /190    5/190   6/190   5/190
PRC-SDP  /190    2/190   3/190   5/190   3/190

Table 2  Results for different parameters Ω on Two-circles with RPSDP and PRC-SDP

Ω        0.5    1      1.5    2      2.5    3      3.5    4      4.5    5
RPSDP    /100   /100   2/100  2/100  2/100  2/100  2/100  2/100  2/100  2/100
PRC-SDP  5/100  6/100  6/100  6/100  9/100  8/100  7/100  7/100  7/100  7/100

Figure 1  Results by RPSDP on AI with different parameter Ω

Figure 2  Results by RPSDP on Two-circles with different parameter Ω

We also ran our algorithm on the Digits data sets, which can be obtained from www.cs.toronto.edu/~roweis/data.html. We select Numbers 3 and 2, Numbers 7 and 1, Numbers 9 and 0, and Numbers 5 and 6 as the data sets, respectively. Every number has five samples, each of dimension 256. RPSDP has (n + 3l + 1)² variables (l and n are the number and the dimension of the training data, respectively), and SeDuMi has difficulty handling that many variables when used to solve the SDP; it therefore seems better to reduce the dimension of the training data. Using principal component analysis, the dimension is reduced from 256 to 9. As with the synthetic data sets, in order to evaluate the influence of the robust tradeoff parameter Ω, we set Ω from 0.5 to 5, and the directions of data perturbations are produced randomly. To evaluate robust classification performance, a labeled data set was taken and the labels removed; we then ran the robust unsupervised classification algorithms, labeled each resulting class with the majority class according to the original training labels, and measured the number of misclassifications. The results are shown in Table 3; each entry is the misclassification rate.

Table 3  Results by RPSDP on digits data sets with different parameter Ω

Ω      Digit23   Digit71   Digit90   Digit56
0.5    2/10      2/10      3/10      2/10
1      2/10      2/10      2/10      2/10
1.5    2/10      /10       2/10      2/10
2.5    2/10      /10       2/10      2/10
5      4/10      2/10      3/10      2/10

5 Conclusions

In this paper we have established a robust unsupervised classification algorithm based on the primal problem of standard SVMs. Principal component analysis has been utilized to reduce the dimension of the training data, because the conic convex optimization solver SeDuMi used in our numerical experiments can only solve problems on small data sets. From the numerical results in Section 4 we can see that RPSDP is less affected by the robust parameter Ω than PRC-SDP. In the future, we will continue to estimate the approximation quality of the SDP relaxation and to derive an approximation ratio for the worst case.

References

[1] J. A. Hartigan, Clustering Algorithms, John Wiley & Sons, New York, NY, 1975.
[2] B. Schoelkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA, 2002.
[3] M. Sim, Robust Optimization, PhD thesis, Massachusetts Institute of Technology, 2004.
[4] D. Bertsimas and M. Sim, The price of robustness, Operations Research, 2004, 52(1): 35–53.
[5] A. L. Soyster, Convex programming with set-inclusive constraints and applications to inexact linear programming, Oper. Res., 1973, 21: 1154–1157.
[6] A. Ben-Tal and A. Nemirovski, Robust convex optimization, Math. Oper. Res., 1998, 23: 769–805.
[7] A. Ben-Tal and A. Nemirovski, Robust solutions to uncertain programs, Oper. Res. Letters, 1999, 25: 1–13.
[8] A. Ben-Tal and A. Nemirovski, Robust solutions of linear programming problems contaminated with uncertain data, Math. Program., 2000, 88: 411–424.
[9] L. El-Ghaoui and H. Lebret, Robust solutions to least-squares problems with uncertain data, SIAM J. Matrix Anal. Appl., 1997, 18: 1035–1064.

[10] L. El-Ghaoui, F. Oustry, and H. Lebret, Robust solutions to uncertain semidefinite programs, SIAM J. Optim., 1998, 9(1): 33–52.
[11] N. Y. Deng and Y. J. Tian, A New Method of Data Mining: Support Vector Machines, Science Press, Beijing, 2004.
[12] D. R. Chen, et al., Support vector machine soft margin classifiers: Error analysis, Journal of Machine Learning Research, 2004, 5: 1143–1175.
[13] S. Smale and D. X. Zhou, Learning theory estimates via integral operators and their approximations, Constr. Approx., 2007, 26: 153–172.
[14] J. F. Sturm, Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones, Optimization Methods and Software, 1999, 11(1–2): 625–653.
[15] K. C. Toh, M. J. Todd, and R. H. Tütüncü, SDPT3: A Matlab package for semidefinite programming, Technical Report, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY, USA, 1996.
[16] G. Lanckriet, et al., Learning the kernel matrix with semidefinite programming, Journal of Machine Learning Research, 2004, 5: 27–72.
[17] T. De Bie and N. Cristianini, Convex methods for transduction, Advances in Neural Information Processing Systems 16 (NIPS 2003), 2003.
[18] L. Xu, et al., Maximum margin clustering, Advances in Neural Information Processing Systems 17 (NIPS 2004), 2004.
[19] K. Zhao, Y. J. Tian, and N. Y. Deng, Unsupervised and semi-supervised two-class support vector machines, Sixth IEEE International Conference on Data Mining Workshops, 2006.
[20] K. Zhao, Y. J. Tian, and N. Y. Deng, Unsupervised and semi-supervised Lagrangian support vector machines, International Conference on Computational Science Workshops, Lecture Notes in Computer Science, 2007, 4489.
[21] K. Zhao, Y. J. Tian, and N. Y. Deng, New unsupervised support vector machines, Proceedings of MCDM, CCIS, 2009, 35.
[22] K. Zhao, Y. J. Tian, and N. Y. Deng, Robust unsupervised and semi-supervised bounded C-support vector machines, Proceedings of the Seventh IEEE International Conference on Data Mining Workshops, Omaha, NE, USA, October 2007.
