Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine
Commun. Theor. Phys. (Beijing, China) 43 (2005). © International Academic Publishers. Vol. 43, No. 6, June 15, 2005

XU Rui-Rui, BIAN Guo-Xing, GAO Chen-Feng, and CHEN Tian-Lun
Department of Physics, Nankai University, Tianjin, China

(Received September 16, 2004; Revised October 19, 2004)

The project was supported by the National Natural Science Foundation of China and by the Doctoral Foundation of the Ministry of Education of China.

Abstract: The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capability of the LS-SVM network are discussed. Then a clustering method is employed in the model to prune the number of support values. Both the learning rate and the noise-filtering capability of the LS-SVM are thereby greatly improved.

Key words: least squares support vector machine, nonlinear time series, prediction, clustering

1 Introduction

Nonlinear time series prediction arises broadly in many fields, such as natural science, social science, economics, and national defense. [1] Analyzing nonlinear time series and mining the information in the data has long been an attractive problem. The support vector machine (SVM), based on statistical learning theory and structural risk minimization, was first introduced by Vapnik et al. [2-4] It avoids many problems often encountered in applications of artificial neural networks, including over-fitting, the curse of dimensionality, and local minima of the energy function. Experimental results show that the SVM is an effective model for classifying patterns and estimating functions. However, the amount of computation grows and the learning rate drops sharply as the amount of training data increases, mainly because SVM training requires the solution of a quadratic programming (QP) problem. Suykens therefore proposed a modified version of the SVM, the least squares support vector machine (LS-SVM), [5] in which the solution is given by a linear system instead of a QP problem. The computational complexity thus decreases, and the learning rate rises at the same time.

In this work, the prediction of a time series produced from the Mackey-Glass equation is studied using LS-SVM. This paper is organized as follows. In the second part we describe the basic ideas of LS-SVM for prediction. In the third part, LS-SVM is first tested on the Mackey-Glass equation; then the parameter γ and the multi-step prediction capability of LS-SVM are discussed, and finally a clustering method is applied to prune the support value spectrum. In the last part, the conclusion of this model is presented.

2 Least Squares Support Vector Machine

An LS-SVM model can be designed as in Fig. 1.

Fig. 1 Scheme of support vector machine.

Suppose that we have a training set \{x_i, y_i\}_{i=1}^{N} with input pattern x_i \in R^n for the i-th example and y_i \in R the corresponding desired output, where N is the number of vectors for learning. In the feature space, the LS-SVM takes the form

y(x) = w^T \varphi(x) + b ,   (1)

where the nonlinear mapping \varphi(x) maps the input data into a higher-dimensional feature space. In LS-SVM the objective function is

\min_{w,e} J(w, e) = \frac{1}{2} w^T w + \frac{1}{2} \gamma \sum_{i=1}^{N} e_i^2 ,   (2)

subject to the equality constraints

y_i = w^T \varphi(x_i) + b + e_i ,  i = 1, \dots, N .   (3)
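The objective (2) under constraint (3) amounts to ridge regression in the feature space. As a reading aid only (not the authors' code), the following minimal Python sketch evaluates J(w, e) for a candidate (w, b); the explicit feature map φ is an illustrative assumption, since the paper keeps φ implicit behind a kernel.

```python
import numpy as np

def phi(x):
    """Illustrative explicit feature map (an assumption; the paper's phi is implicit)."""
    return np.concatenate([x, x ** 2])

def primal_objective(w, b, X, y, gamma):
    """J(w, e) of Eq. (2); constraint (3) fixes e_i = y_i - (w^T phi(x_i) + b)."""
    Phi = np.array([phi(xi) for xi in X])   # one feature row per training pattern
    e = y - (Phi @ w + b)                   # equality-constraint errors
    return 0.5 * w @ w + 0.5 * gamma * np.sum(e ** 2)
```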
Equality constraints are used in LS-SVM instead of the inequality constraints of SVM. Furthermore, the error e_i is replaced by e_i^2 in the objective function. [3] Both changes simplify the solution of the problem. The Lagrangian can be defined as

L(w, b, e; \alpha) = J(w, e) - \sum_{i=1}^{N} \alpha_i \{ w^T \varphi(x_i) + b + e_i - y_i \} ,   (4)

where the \alpha_i denote Lagrange multipliers. According to the Karush-Kuhn-Tucker (KKT) conditions, we set the partial derivatives of L to zero and obtain

\partial L / \partial w = 0 \;\Rightarrow\; w = \sum_{i=1}^{N} \alpha_i \varphi(x_i) ,
\partial L / \partial b = 0 \;\Rightarrow\; \sum_{i=1}^{N} \alpha_i = 0 ,
\partial L / \partial e_i = 0 \;\Rightarrow\; \alpha_i = \gamma e_i ,  i = 1, \dots, N ,
\partial L / \partial \alpha_i = 0 \;\Rightarrow\; w^T \varphi(x_i) + b + e_i - y_i = 0 ,  i = 1, \dots, N .   (5)

By eliminating e_i and w, we obtain the linear system

\begin{bmatrix} 0 & \mathbf{1}_\nu^T \\ \mathbf{1}_\nu & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} ,   (6)

where y = [y_1; \dots; y_N], \mathbf{1}_\nu = [1; \dots; 1], \alpha = [\alpha_1; \dots; \alpha_N], and \Omega_{ij} = \varphi(x_i)^T \varphi(x_j) = K(x_i, x_j), i, j = 1, \dots, N. The inner-product kernel K(x_i, x_j) must satisfy Mercer's condition. After application of the Mercer condition, we finally obtain the regressive LS-SVM model

y(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b ,   (7)

where \alpha_i and b are the solutions of the linear system (6). The kernel function K(\cdot, \cdot) has several choices, such as polynomial functions, radial-basis functions, and multilayer perceptrons. In all simulations we employ the radial-basis kernel

K(x_i, x_j) = \exp\left( - \| x_i - x_j \|^2 / (2\sigma^2) \right) ,

so that two parameters (\gamma, \sigma) are to be chosen for the LS-SVM.

3 Simulation Results

The Mackey-Glass equation is a time-delayed equation that was first proposed as a model of white blood cell production. [6] In discretized form,

x_{t+1} = \frac{\alpha x_{t-s}}{1 + (x_{t-s})^{10}} + (1 - \beta) x_t ,   (8)

where \alpha = 0.2, \beta = 0.1, and s is an adjustable delay. For s \geq 17, Eq. (8) yields chaotic behavior with a fractal dimension; in this work we choose s = 17. Nonlinear time series based on the Mackey-Glass equation are regarded as a standard benchmark for comparing the ability of different prediction methods. The Mackey-Glass data are first normalized according to

x_t' = \frac{x_t - \min(x)}{\max(x) - \min(x)} ,  t = 1, 2, 3, \dots   (9)

(i) Prediction Using LS-SVM

We generate 1500 data points from the Mackey-Glass equation. The initial 200 points are used to train the LS-SVM network, which then predicts the following 1300 points. The parameters of the LS-SVM are taken as \gamma = 10^7 and \sigma = 1.7, and the prediction error is measured with the root-mean-square error (RMSE),

\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_i^* - x_i)^2 } ,

where x_i^* stands for the predicted value, x_i is the desired value, and N denotes the number of predictions. Figure 2 illustrates the prediction results (the dotted line shows the predicted values and the solid line the desired ones).

Fig. 2 Predicting results using LS-SVM.
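For concreteness, here is a minimal, self-contained Python sketch (not the authors' code) of the whole experiment: generate and normalize a Mackey-Glass series per Eqs. (8) and (9), embed it in a 7-dimensional phase space (the dimension quoted later in Sec. 3), solve the linear system (6), and predict one step ahead with Eq. (7). The values γ = 10^7 and σ = 1.7 follow the text; the initial condition and helper names are assumptions.

```python
import numpy as np

def mackey_glass(n, s=17, alpha=0.2, beta=0.1, x0=1.2):
    """Iterate the discretized Mackey-Glass map of Eq. (8), then normalize per Eq. (9)."""
    x = np.full(n + s, x0)                     # constant history (an assumption)
    for t in range(s, n + s - 1):
        x[t + 1] = alpha * x[t - s] / (1.0 + x[t - s] ** 10) + (1.0 - beta) * x[t]
    x = x[s:]
    return (x - x.min()) / (x.max() - x.min())

def rbf_kernel(A, B, sigma=1.7):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def embed(x, dim=7):
    """Phase-space reconstruction: input (x_t, ..., x_{t+dim-1}), target x_{t+dim}."""
    X = np.array([x[t:t + dim] for t in range(len(x) - dim)])
    return X, x[dim:]

def lssvm_train(X, y, gamma=1e7, sigma=1.7):
    """Solve the linear system of Eq. (6) for (b, alpha)."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                             # top row [0, 1_nu^T]
    A[1:, 0] = 1.0                             # left column 1_nu
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(N) / gamma   # Omega + I/gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # b, alpha

def lssvm_predict(Xtest, Xtrain, alpha, b, sigma=1.7):
    """y(x) = sum_i alpha_i K(x, x_i) + b, Eq. (7)."""
    return rbf_kernel(Xtest, Xtrain, sigma) @ alpha + b

x = mackey_glass(1500)
X, y = embed(x)
Xtr, ytr = X[:200], y[:200]                    # first 200 patterns for training
b, alpha = lssvm_train(Xtr, ytr)
pred = lssvm_predict(X[200:], Xtr, alpha, b)   # one-step-ahead on the rest
print("RMSE:", np.sqrt(np.mean((pred - y[200:]) ** 2)))
```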
(ii) Parameter γ

Equation (5) shows that \gamma affects the precision of the prediction, since \alpha_i = \gamma e_i. We therefore investigate how the error varies with \gamma.

Table 1 Variation of the prediction error with \gamma (columns: \gamma, RMSE).

Table 1 demonstrates that the error first decreases with increasing \gamma but increases again once \gamma exceeds a certain limit. To obtain the best prediction we choose \gamma = 10^7, which gives the smallest error for this problem. \gamma is also an important parameter for controlling the complexity of the network and the balance with inseparable data. [3] In general, the computational complexity increases with higher \gamma, and the prediction result improves at the same time; however, a higher \gamma also admits more inseparable data, which leads to over-fitting and in turn makes the error larger. An optimal value of \gamma is therefore needed for the simulation experiments.

(iii) Multi-step Prediction

Multi-step prediction plays a significant role in testing prediction capability. We apply the LS-SVM network and a neural network with a chaotic learning mechanism [7] to multi-step prediction of the nonlinear time series based on the Mackey-Glass equation, and compare the corresponding results for N = 200.

Table 2 Results of multi-step prediction with steps from 1 to 4 (columns: number of steps, RMSE of LS-SVM, RMSE of the chaotic algorithm [11]).

Table 2 and Fig. 3 show that, as the number of prediction steps increases, LS-SVM gives better results than the neural network.

Fig. 3 Comparison between the errors of LS-SVM and those of the chaotic algorithm. The dotted line shows the errors of LS-SVM, and the dashed line the errors of the neural network with the chaotic algorithm.

(iv) Comparison of Two Approaches to Pruning the Support Value Spectrum

LS-SVM is a modified version of SVM in a least-squares sense, whose solution is given by a linear system instead of a QP problem. Even so, the computational cost grows with the amount of training data, which lowers the learning rate of the network. Table 3 displays the effect of the quantity of training data on the prediction time.

Table 3 Variation of the error and the computation time with the number N of training data for LS-SVM (columns: N, RMSE, time in seconds).

To address this problem, Suykens [5] proposed an approach that removes the least important data from the training set:
(i) Train the LS-SVM on the initial N support values.
(ii) Remove a small fraction of the points (e.g. 5% of the set) with the smallest values in the sorted \alpha_i spectrum.
(iii) Retrain the LS-SVM on the reduced training set.
(iv) Go to (ii) unless the user-defined performance index degrades.

Although this method speeds up prediction, it must still solve a large linear system several times before the final support values are decided, which again limits the speed. In this work we instead apply k-means clustering to LS-SVM to prune the support values (CLS-SVM): [8,9]
(i) Take the initial N training data as support values; they also serve as the initial cluster centers C.
(ii) Compute the distances between each of the next N training data and the current cluster centers. If \| X_i - c_j \| is the minimum, assign X_i to the center c_j and update c_j \leftarrow (X_i + c_j)/2. (Note that X_i is the datum after its dimension has been raised.)
(iii) Sweep through all the training data again and compare the new centers with those obtained in step (ii); if there is no difference, stop; otherwise return to step (ii).

We test the two pruning approaches in the following experiments; hedged sketches of both procedures are given below.
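The sketches below (reusing lssvm_train from the earlier sketch) follow the two procedures just listed. The 5% removal fraction follows the text; stopping at a fixed target size (rather than the paper's performance-index test) and clustering the (input, target) pairs in the embedded input space (rather than the dimension-raised data) are simplifying assumptions.

```python
import numpy as np

def prune_smallest_alpha(X, y, keep=200, frac=0.05, gamma=1e7, sigma=1.7):
    """Suykens-style pruning: repeatedly retrain and drop the points with the
    smallest |alpha_i| until only `keep` support values remain."""
    while len(y) > keep:
        _, alpha = lssvm_train(X, y, gamma, sigma)
        n_drop = max(1, min(int(frac * len(y)), len(y) - keep))
        idx = np.argsort(np.abs(alpha))[n_drop:]        # keep all but the smallest 5%
        X, y = X[idx], y[idx]
    b, alpha = lssvm_train(X, y, gamma, sigma)
    return b, alpha, X, y

def cluster_support_values(X, y, n_centers=200, max_pass=100):
    """CLS-SVM-style pruning, steps (i)-(iii): merge each later point into its
    nearest center and repeat until the centers stop moving."""
    Z = np.column_stack([X, y])        # pair inputs with targets (an assumption)
    C = Z[:n_centers].copy()           # step (i): first points seed the centers
    for _ in range(max_pass):
        C_old = C.copy()
        for z in Z[n_centers:]:
            j = np.argmin(((C - z) ** 2).sum(axis=1))   # nearest center
            C[j] = (z + C[j]) / 2.0                     # step (ii): merge rule
        if np.allclose(C, C_old):                       # step (iii): no change, stop
            break
    return C[:, :-1], C[:, -1]         # pruned inputs and their targets
```

For the 400 → 200 setup of the next subsection, one would call, e.g., `Xc, yc = cluster_support_values(X[:400], y[:400])` and then retrain with `lssvm_train(Xc, yc)`.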
(i) Noise-free Nonlinear Time Series Prediction

The compared prediction results are shown in Table 4 and Fig. 4. In this experiment the length of the nonlinear time series is 1500, of which 400 data points are used to train the network, and the final pruned number of support values is 200. Table 4 shows clearly that both pruning methods save prediction time while keeping comparable errors. The prediction result using CLS-SVM is illustrated in Fig. 4. CLS-SVM achieves the larger reduction in time, while the other method is more effective at decreasing the error.

Fig. 4 Adjustment of support values and prediction of the time series based on the Mackey-Glass equation using CLS-SVM. The dotted line shows the predicted values and the solid line the desired ones.

Table 4 Comparison of three prediction models (columns: LS-SVM, pruning of the least important data, CLS-SVM; rows: RMSE, time in seconds).

(ii) Noisy Nonlinear Time Series Prediction

The capability of filtering noise is important for predicting practical data, so we investigate it for both methods. The noisy time series can be expressed as [7,10]

x_t' = x_t + \nu_t ,   (10)

where \nu_t = \beta k \mu; here \beta denotes an adjustable parameter that controls the degree of noise, \mu is uniformly distributed in [-1, 1], and k stands for the signal-to-noise ratio, namely the standard deviation of the Mackey-Glass time series divided by the standard deviation of the noise component. We examine how the noise-filtering capability of the two approaches varies with the degree of noise, which is significant for practical data, as displayed in Table 5.

Table 5 Comparison of the noise-filtering capabilities of the two methods at different levels of noise (columns: noise degree, pruning of the least important data, CLS-SVM).

One finds that the noise-filtering capability of CLS-SVM is better than that of the other method, mainly because clustering analyzes the structure of the data and extracts its internal information; this is the principal difference between the two approaches. Data that have been clustered contain more information about the whole system than data that have not. We employ an RBF kernel with \sigma = 1.7 and \gamma = 10^7 in all simulations, and the dimension of the phase-space reconstruction is taken as 7.
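A brief sketch of the noise model of Eq. (10). Computing k as the ratio of the two standard deviations follows the definition in the text; the seed and function name are illustrative.

```python
import numpy as np

def add_noise(x, beta, seed=0):
    """Noisy series per Eq. (10): x'_t = x_t + nu_t with nu_t = beta * k * mu_t."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-1.0, 1.0, size=len(x))
    k = x.std() / mu.std()   # signal-to-noise ratio as defined in the text
    return x + beta * k * mu
```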
4 Conclusion

In this work, we apply LS-SVM to nonlinear time series prediction. The precision is improved by taking the effect of different values of the parameter \gamma into account. Multi-step prediction using LS-SVM yields better results than an artificial neural network with a chaotic learning mechanism. In particular, we introduce clustering into LS-SVM to prune the support values. This new approach reduces the number of support values while keeping errors comparable to those of LS-SVM, and it greatly improves the capability of filtering noise.

Our future work will focus on improving the clustering method used in LS-SVM so as to obtain better prediction results in less prediction time. LS-SVM certainly has broad prospects in the prediction of practical data.

References

[1] A.N. Refenes, Y. Abu-Mostafa, J. Moody, and A. Weigend (eds.), Neural Networks in Financial Engineering, World Scientific, Singapore (1996).
[2] C. Cortes and V. Vapnik, Machine Learning 20 (1995) 273.
[3] S. Haykin, Neural Networks, Tsinghua University Press, Beijing (2001).
[4] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York (2000).
[5] J.A.K. Suykens, L. Lukas, and J. Vandewalle, "Sparse Least Squares Support Vector Machine Classifiers", in Proceedings of the European Symposium on Artificial Neural Networks, Belgium (2000).
[6] M. Mackey and L. Glass, Science 197 (1977) 287.
[7] Li Ke-Ping, Chen Tian-Lun, and Gao Zi-You, Commun. Theor. Phys. (Beijing, China) 40 (2003) 311.
[8] Zhang Xue-Gong, Acta Automatica Sin. 26 (2000) 32.
[9] Zheng Xin and Chen Tian-Lun, Commun. Theor. Phys. (Beijing, China) 40 (2003) 165.
[10] Li Ke-Ping and Chen Tian-Lun, Commun. Theor. Phys. (Beijing, China) 35 (2001) 759.
[11] Li Ke-Ping, Chaotic Neural Networks and Nonlinear Time Series Prediction, Ph.D. thesis, Nankai University, p. 30.