Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization


RAGHAV PANT, THEODORE B. TRAFALIS, KASH BARKER
School of Industrial Engineering
University of Oklahoma
202 W. Boyd Street, Room 124, Norman, Oklahoma
UNITED STATES
rpant@ou.edu, ttrafalis@ou.edu, kashbarker@ou.edu

Abstract: - In this paper we develop a robust Support Vector Machine (SVM) scheme for classifying imbalanced and noisy data using the principles of Robust Optimization. Uncertainty is prevalent in almost all datasets and has not been addressed efficiently by most data mining techniques, as these are based on deterministic mathematical tools. Imbalanced datasets arise in the analysis of rare events, for which the elements of the minority class become critical. Our method addresses both issues, which traditional SVM classification leaves open. At present we provide solutions for the linear classification of data with bounded uncertainties; the approach can be extended to non-linear classification schemes for any convex uncertainty. Our results in predicting the minority class are better than those of traditional soft-margin SVM classification. Preliminary computational results are presented.

Key-Words: - Support Vector Machines, Robust Classification, Imbalance, Uncertainty, Noise

1 Introduction
Data classification is an important problem in the field of data mining. Classification refers to dividing data into classes, where each class signifies certain properties common to a set of data points. The simplest classification is the perfect linear separation of the data into two classes. In practical problems the data are not perfectly separable, so classification errors arise. The presence of uncertain and imbalanced datasets complicates the analysis further. Uncertainty is prevalent in almost all datasets and is not addressed efficiently by most data mining techniques, as these are based on deterministic mathematical tools [8]. Imbalanced datasets arise in the analysis of rare events, for which the elements of the minority class become critical, and most data mining techniques perform poorly in predicting the minority class of imbalanced data [3].

The solution of classification problems using Support Vector Machines (SVMs) [9,10] is widely prevalent in data mining. Soft-margin SVM classification methods can provide efficient solutions for non-separable data, but under uncertainty the traditional soft-margin classification might not be completely effective in providing the optimal classification. In SVM classification the term "noisy or uncertain data" generally denotes examples that do not lie on the intended side of the separating margin. In this paper we extend the term to include the uncertainty that manifests itself in every data point. Hence, our interpretation of noisy data comprises a measurement error in each data point that has to be considered during classification, in addition to the data points that will not be classified correctly. As stated earlier, traditional SVM methods adjust for error relative to the maximum margin of classification but do not consider individual data uncertainties. Bhattacharyya et al. [2] addressed such issues by developing Second Order Cone Programming formulations for Gaussian uncertainty in the data, which resemble the Total SVM methods of Bi and Zhang [4] that provide formulations for bounded uncertainties.
While these methods were developed separately, they fall under the scheme of Robust SVM approaches detailed in the works of Trafalis et al. [5,6,7,8], which use concepts developed in the Robust Optimization (RO) literature [1]. RO schemes have been applied to SVMs to explore both data uncertainties and classification errors, and the robust SVMs are found to perform better than the traditional SVM methods. The RO techniques for SVMs can be extended to the study of imbalanced datasets. Examples of such datasets are tornado datasets, where out of a very large database only a few events are catastrophic and hence important for decision-making. RO methods can be used to control the perturbations in the data points, which in turn controls the misclassification errors that affect the ability to successfully predict the minority class. Such methods are explored in this study. We look at linear separation SVM classification for uncertain and imbalanced datasets. Using RO, we propose an optimization problem that provides a better solution for imbalanced, noisy data classification than the classical soft-margin SVM classification. The main contribution of this work is the formulation of a classification problem that solves for imbalanced and noisy data. We present the formulation and results for datasets with convex bounded uncertainty.

This paper is organized as follows. Section 2 states the problem of classifying uncertain data and presents the soft-margin SVM classification problem. Section 3 develops the robust SVM classification problem for noisy data, wherein the robust counterpart of the classical SVM problem is derived and the final optimization problem for noisy data is presented. In Section 4 we extend the robust formulation to include imbalanced data analysis and present the final optimization problem for data with Euclidean norm-bounded uncertainty. In Section 5 we perform preliminary numerical analyses on a few toy datasets and compare the performance of our method with the classical SVM-based svmtrain function of MATLAB. Section 6 presents conclusions and the future development of this work.

2 Problem Formulation
For SVM classification we are given an input matrix X \in R^{m \times n} of training samples. Each data point x_i \in X is an n-element row vector, and there are m such data points. Further, each data point belongs to one of two classes, indicated by y_i \in \{+1, -1\}. The pair \{X, y\} is referred to as the training dataset. Uncertainty is incorporated into the analysis by assuming that the input matrix is given in terms of a nominal value and a perturbation, that is,

X = \bar{X} + \Delta, \quad \Delta = [\delta_1; \delta_2; \dots; \delta_m],   (1)

where \bar{X} is the nominal value of the data, free from uncertainties, and \Delta is the uncertainty set defining the perturbations in the data, in which \delta_i is the n-element row vector of uncertainty associated with data point x_i. Suitable conditions on \Delta have to be imposed to obtain feasible solutions to the classification problems. As a rule, \Delta is assumed to belong to a convex set that gives computationally tractable solutions.

The problem we aim to solve is the two-class classification problem in which the amount of data belonging to one class, y_i = +1, is very small compared to the other class, y_i = -1. Further, the data carry uncertainties, due to which there will be errors in classification. In the subsections below we state our classification problem and suggest the formulation for handling uncertainties. We then develop the classification scheme further to address both data imbalance and noise.

2.1 Soft-margin SVM Classification for Non-separable Data
For perfect linear separation of the data, the classification rule is defined in terms of a separating hyperplane and is given as

y_i(\langle w, x_i \rangle + b) \ge 1, \quad i = 1, 2, \dots, m,   (2)

where w \in R^n is a weight vector perpendicular to the hyperplane and b is a scalar determining the offset of the hyperplane from the origin. The term \langle w, x_i \rangle denotes the dot product between w and x_i.

The aim of SVM classification is to find the maximum margin of separation between the data points, which is stated through the optimization problem

\min_{w,b} \; \tfrac{1}{2}\|w\|_2^2 \quad \text{s.t. } y_i(\langle w, x_i \rangle + b) \ge 1, \; i = 1, 2, \dots, m.   (3)

The classification of the data is generally not exact because in actual situations the data are too complex to be perfectly linearly separable. Hence, a term for the error in classification has to be incorporated into the analysis, and the traditional hard-margin classification constraints are modified as

y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \; i = 1, 2, \dots, m,   (4)

where \xi_i is the scalar error in classification of the i-th data point x_i. The aim of an efficient classifier is to minimize the errors in classification. This is accomplished by minimizing the sum of all the classification errors, which is referred to as the realized hinge loss function [1]. Hence, the optimization problem to be solved for controlling the errors becomes

\min_{w,b,\xi} \; \sum_{i=1}^{m} \xi_i \quad \text{s.t. } y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \; \xi_i \ge 0, \; i = 1, 2, \dots, m.   (5)

The classical soft-margin SVM classification problem combines the objectives of the optimization problems (3) and (5). Hence, the classical SVM problem formulation becomes

\min_{w,b,\xi} \; \lambda \|w\|_2^2 + \sum_{i=1}^{m} \xi_i \quad \text{s.t. } y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \; \xi_i \ge 0, \; i = 1, 2, \dots, m,   (6)

where \lambda is the regularization parameter.
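
For concreteness, problem (6) can be prototyped directly with an off-the-shelf convex solver. The following minimal sketch uses Python with the cvxpy modeling library rather than the paper's own MATLAB code; the function name and the default value of lam are illustrative choices, not part of the original method.

# Minimal sketch of the classical soft-margin SVM (6) using cvxpy.
# X is an m-by-n numpy array of data points, y a numpy vector of +1/-1 labels.
import cvxpy as cp
import numpy as np

def soft_margin_svm(X, y, lam=0.1):
    m, n = X.shape
    w = cp.Variable(n)                  # weight vector of the hyperplane
    b = cp.Variable()                   # offset of the hyperplane
    xi = cp.Variable(m, nonneg=True)    # classification errors xi_i >= 0
    # y_i(<w, x_i> + b) >= 1 - xi_i for every training point
    cons = [cp.multiply(y, X @ w + b) >= 1 - xi]
    cp.Problem(cp.Minimize(lam * cp.sum_squares(w) + cp.sum(xi)), cons).solve()
    return w.value, b.value

A generic solver like this is adequate for the small instances considered in this paper; dedicated SVM solvers scale better to large datasets.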

3 Robust Formulation of the Soft-margin SVM Classification for Noisy Data
From formulation (6) it can be seen that the maximal margin of separation is influenced by the realizations of the data points. In particular, a few data points, called the support vectors, determine the separation margin. If the data are uncertain then the region of influence of the support vectors varies, and we obtain multiple solutions for the maximal margin. Hence, it is intuitive to look at the worst-case realizations of the data points, as these give the extreme separation margin. RO methods are therefore useful tools for finding the best separation margin under uncertainty.

Solving the optimization problem (6) can become computationally intensive, especially if each data point has its own unique uncertainty. Moreover, it is not possible to obtain computationally tractable solutions unless certain rules for the uncertainty set are specified. The inequalities of (6) allow us to rewrite the hinge loss function in terms of the sample data, labels, weight vector and offset as

\xi_i = [1 - y_i(\langle w, x_i \rangle + b)]_+,   (7)

where [1 - y_i(\langle w, x_i \rangle + b)]_+ = \max\{0, 1 - y_i(\langle w, x_i \rangle + b)\}. This formulation is similar to an indicator function and has a convex upper bound. Hence, we can restate the soft-margin classification problem (6) as the unconstrained optimization problem

\min_{w,b} \; \lambda \|w\|_2^2 + \sum_{i=1}^{m} [1 - y_i(\langle w, x_i \rangle + b)]_+.   (8)

This optimization problem motivates the formulations that lead to a robust analysis of data with noise. From (8) we can see that the new soft-margin classification formulation is easier to solve for noisy data, as we obtain a computationally tractable formulation. For a robust analysis, we minimize the worst-case hinge loss function due to the uncertain data. The robust counterpart of (8) becomes

\min_{w,b} \; \lambda \|w\|_2^2 + \max_{x_i \in \bar{X} + \Delta} \sum_{i=1}^{m} [1 - y_i(\langle w, x_i \rangle + b)]_+.   (9)

The significance of robust optimization principles in solving classification problems lies in the fact that they solve for the extreme case of the data uncertainty. The geometrical representation of data points with spherical uncertainty is shown in Fig. 1, where for each data point the centre of the sphere represents its nominal value and the radius of the sphere represents the uncertainty. In the classical SVM formulation the support vectors correspond to the centres of the data points, which does not provide much scope for change. As seen in Fig. 1, using the robust optimization methods the support vectors are tangential to the spherical boundary of the perturbed data. The solutions thus become sensitive to the radius of each sphere, and more points can become support vectors. For imbalanced datasets this is important, as it allows more points of the minority class to become support vectors, including some points that would otherwise be treated as outliers.

Figure 1. Comparison of the classical SVM with the robust SVM
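
The unconstrained form (8) is equally easy to express in code, since the positive part [.]_+ is available directly as a convex atom. This is a sketch under the same assumptions as the previous listing (Python/cvxpy as an illustrative stand-in for the paper's MATLAB implementation).

# Sketch of the unconstrained hinge-loss form (8): the constraints of (6)
# are folded into the objective through the positive-part atom cp.pos.
import cvxpy as cp

def hinge_loss_svm(X, y, lam=0.1):
    m, n = X.shape
    w, b = cp.Variable(n), cp.Variable()
    hinge = cp.pos(1 - cp.multiply(y, X @ w + b))   # [1 - y_i(<w,x_i>+b)]_+
    cp.Problem(cp.Minimize(lam * cp.sum_squares(w) + cp.sum(hinge))).solve()
    return w.value, b.value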

3.1 Incorporating Uncertainties in the Soft-margin SVM Classification
We revisit the formulation of the realized hinge loss function (7) to incorporate the uncertainties. Splitting each x_i into its nominal value \bar{x}_i and its uncertainty \delta_i, formulation (7) becomes

\xi_i = [1 - y_i(\langle w, \bar{x}_i \rangle + b) - y_i \langle w, \delta_i \rangle]_+.   (10)

Hence, in the robust counterpart of the realized hinge loss function we are concerned with finding the worst-case realization of the uncertainty, which does not involve the nominal data. In our robust formulation (9) the worst-case hinge loss function is therefore dependent upon the worst-case realizations of the data perturbations, which means the second term of (9) can be expressed as

\max_{\delta_i \in \Delta} \sum_{i=1}^{m} [1 - y_i(\langle w, \bar{x}_i \rangle + b) - y_i \langle w, \delta_i \rangle]_+ = \sum_{i=1}^{m} \max_{\delta_i \in \Delta} [1 - y_i(\langle w, \bar{x}_i \rangle + b) - y_i \langle w, \delta_i \rangle]_+.   (11)

In (11) we can take the maximum inside the summation because the hinge loss function is convex and each \delta_i affects only its own term of the sum. One way of specifying the worst-case realization of the uncertainty is through the Cauchy-Schwarz inequality (in its general Hölder form), which provides norm bounds on the data perturbations. For most data, assuming norm upper bounds on the perturbations is a justifiable assumption, and it leads to convex formulations. The Cauchy-Schwarz bounds for y_i \langle w, \delta_i \rangle in (11) are given as

|y_i \langle w, \delta_i \rangle| \le \|\delta_i\|_p \|w\|_q, \quad \tfrac{1}{p} + \tfrac{1}{q} = 1.   (12)

Hence, from (12) we obtain the condition

-\|\delta_i\|_p \|w\|_q \le y_i \langle w, \delta_i \rangle \le \|\delta_i\|_p \|w\|_q,   (13)

which leads to the following worst-case robust formulation for (11):

\max_{\delta_i \in \Delta} \sum_{i=1}^{m} [1 - y_i(\langle w, \bar{x}_i \rangle + b) - y_i \langle w, \delta_i \rangle]_+ = \sum_{i=1}^{m} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \|\delta_i\|_p \|w\|_q]_+.   (14)

Combining (9) and (14) gives us the robust SVM for solving the classification problem when we have data uncertainty or noise. We restate our final robust problem, which we will develop to handle imbalanced data:

\min_{w,b} \; \lambda \|w\|_2^2 + \sum_{i=1}^{m} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \|\delta_i\|_p \|w\|_q]_+.   (15)

4 Handling Imbalanced Data using Robust SVM Classification
For handling data imbalance, the training data sample can be partitioned into the examples with positive and negative labels, respectively. We are interested in solving the following robust optimization problem:

\min_{w,b} \; \lambda \|w\|_2^2 + \sum_{y_i=+1} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \|\delta_i\|_p \|w\|_q]_+ + \sum_{y_i=-1} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \|\delta_i\|_p \|w\|_q]_+.   (16)

Separating the data into positive and negative samples helps us control the perturbations on the samples, which can be critical for including the important minority-class samples in the classification. The unconstrained optimization problem can be converted into a constrained one by assuming that the hinge loss functions are bounded above by some maximum values. Mathematically this is expressed as

\min_{w,b,\tau} \; \lambda \|w\|_2^2 + \tau_{+1} + \tau_{-1}   (17)
\text{s.t. } \sum_{y_i=+1} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \|\delta_i\|_p \|w\|_q]_+ \le \tau_{+1},
\sum_{y_i=-1} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \|\delta_i\|_p \|w\|_q]_+ \le \tau_{-1},

where \tau_{+1} and \tau_{-1} are respectively the values bounding the sums of the errors in the positive and negative samples. A common form of uncertainty bound used for data perturbations is the 2-norm, or Euclidean, uncertainty bound. For most kinds of data it is assumed that either the entire uncertainty in the dataset has a Euclidean bound (\|\Delta\|_2 \le r, r \ge 0), or each data point is contained in an uncertainty sphere of fixed radius \rho (\|\delta_i\|_2 \le \rho). Depending upon the data imbalance we can impose different bounds on the positive and negative samples, respectively, in order to improve our classification. The robust formulation, when the uncertainty exists in each data point, becomes a conic programming problem, which is stated as

\min_{w,b,\tau} \; \lambda \|w\|_2^2 + \tau_{+1} + \tau_{-1}   (18)
\text{s.t. } \sum_{y_i=+1} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \rho_{+1} \|w\|_2]_+ \le \tau_{+1},
\sum_{y_i=-1} [1 - y_i(\langle w, \bar{x}_i \rangle + b) + \rho_{-1} \|w\|_2]_+ \le \tau_{-1},

where \rho_{+1} and \rho_{-1} are the uncertainty radii for the positive and negative samples, respectively.
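
A sketch of the conic problem (18), again in Python/cvxpy purely for illustration: the arguments rho_pos and rho_neg stand in for \rho_{+1} and \rho_{-1}, and the worst-case hinge terms follow (14) with p = q = 2. The function name and defaults are assumptions of this sketch.

# Sketch of the robust imbalanced SVM (18) under per-point Euclidean
# uncertainty, with separate radii for the positive and negative classes.
import cvxpy as cp
import numpy as np

def robust_imbalanced_svm(Xbar, y, lam=0.1, rho_pos=0.1, rho_neg=0.1):
    m, n = Xbar.shape
    w, b = cp.Variable(n), cp.Variable()
    tau_pos, tau_neg = cp.Variable(), cp.Variable()
    margins = cp.multiply(y, Xbar @ w + b)          # y_i(<w, xbar_i> + b)
    pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
    cons = [
        # worst-case hinge losses (14), summed per class and bounded by tau
        cp.sum(cp.pos(1 - margins[pos] + rho_pos * cp.norm(w, 2))) <= tau_pos,
        cp.sum(cp.pos(1 - margins[neg] + rho_neg * cp.norm(w, 2))) <= tau_neg,
    ]
    obj = cp.Minimize(lam * cp.sum_squares(w) + tau_pos + tau_neg)
    cp.Problem(obj, cons).solve()
    return w.value, b.value

Choosing rho_pos larger than rho_neg enlarges the protected region around the minority points, which is the mechanism behind the class-specific bounds discussed above.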

5 Data Analysis
To check the effectiveness of the robust analysis scheme developed in this study, we generate some imbalanced data and compare the performance of our formulation with the soft-margin SVM classification of MATLAB. We generate a 400x2 matrix of random numbers, where each entry is x_ij = 0.5 + rand(0,1). Each data point in the analysis is the 2x1 vector x_i = [x_i1; x_i2]. The Euclidean norm of each data point is calculated; if it is greater than one, the label y_i = -1 is assigned to it, otherwise the point belongs to the class y_i = +1. This creates an imbalanced dataset in which the positive labels form the minority class. Next we add an uncertainty \delta_i to each data point such that the Euclidean norm of that uncertainty is less than a fixed radius \rho. For the resulting data, we perform SVM analysis using the svmtrain function of MATLAB and the code for our robust formulation, also written in MATLAB. By changing the separation criterion for the norm to y_i = +1: \|x_i\|_2 < 1 + \alpha \cdot rand(0,1), we can generate differently imbalanced datasets. These are shown in Figs. 2 to 4.

Figure 2. Very high imbalance in the data, with 10% of the data in the minority class
Figure 3. High imbalance in the data, with 25% of the data in the minority class

It can be observed from the generated data that as we decrease the imbalance, more data points are misclassified on either side of the separation margin, which decreases the overall prediction accuracy. Fixing \lambda = 0.1, we use 50% of the data for training, 20% for tuning and the remaining 30% for testing the classification results of the two schemes. Tables 1, 2 and 3 show the confusion matrices of the classifications for the different degrees of imbalance. It can be seen that for each case the minority class is predicted with greater accuracy using the robust scheme. Hence, the results show that the methods developed can improve the performance in predicting the minority class.

Table 1. Comparison of % accuracy between the MATLAB SVM and the robust SVM in predicting class for very highly imbalanced data

Method        True Label   Predicted +1   Predicted -1
MATLAB SVM    +1           -              18.2%
MATLAB SVM    -1           0%             81.2%
Robust SVM    +1           -              9.1%
Robust SVM    -1           -              90.9%

Table 2. Comparison of % accuracy between the MATLAB SVM and the robust SVM in predicting class for highly imbalanced data

Method        True Label   Predicted +1   Predicted -1
MATLAB SVM    +1           -              25.8%
MATLAB SVM    -1           -              74.2%
Robust SVM    +1           -              22.6%
Robust SVM    -1           -              77.4%

Table 3. Comparison of % accuracy between the MATLAB SVM and the robust SVM in predicting class for moderately imbalanced data

Method        True Label   Predicted +1   Predicted -1
MATLAB SVM    +1           -              35.7%
MATLAB SVM    -1           -              64.3%
Robust SVM    +1           -              30.1%
Robust SVM    -1           -              69.6%

Figure 4. Moderate imbalance in the data, with 40% of the data in the minority class
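
The synthetic data described above can be reproduced along the following lines. This is a sketch of the data generation only, in Python/numpy rather than the paper's MATLAB: the random seed, the value of alpha, and the uncertainty radius rho are placeholders (the value of rho does not survive in the source), and rand(0,1) is read as a uniform sample on (0,1).

# Sketch of the synthetic imbalanced dataset of Section 5: 400 points in
# the unit box shifted by 0.5, labeled by a (randomly inflated) norm
# threshold, then perturbed inside spheres of radius rho.
import numpy as np

rng = np.random.default_rng(0)        # seed is an arbitrary choice
m, alpha, rho = 400, 0.0, 0.1         # alpha tunes imbalance; rho is a placeholder

X = 0.5 + rng.random((m, 2))                    # x_ij = 0.5 + rand(0,1)
thresh = 1 + alpha * rng.random(m)              # separation criterion
y = np.where(np.linalg.norm(X, axis=1) < thresh, 1, -1)   # +1 is the minority class

direction = rng.normal(size=(m, 2))             # random perturbation directions
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
delta = direction * rho * rng.random((m, 1))    # so that ||delta_i||_2 < rho
X_noisy = X + delta

idx = rng.permutation(m)                        # 50/20/30 train/tune/test split
train, tune, test = idx[:200], idx[200:280], idx[280:]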

6 Conclusion
In this paper we have developed an RO-based scheme for robust SVM classification of imbalanced and noisy data. The methods have been developed for the linear classification of data into two classes, and they work on the assumption that the uncertainties are convex and bounded. This is a reasonable assumption, as it applies to most practical data. Our method is shown to perform better than the classical soft-margin SVM classification in predicting the minority class. The method has to be developed further to improve its overall accuracy. It can also be extended to include any type of uncertainty measure that is convex, and a non-linear robust classification procedure can be developed using the same principles presented in this work. Further development of these methods would broaden their application in the classification and prediction of various types of datasets.

References:
[1] A. Ben-Tal, L. El Ghaoui and A. Nemirovski, Robust Optimization, Princeton, NJ: Princeton University Press, 2009.
[2] C. Bhattacharyya, K.S. Pannagadatta and A.J. Smola, A second order cone programming formulation for classifying missing data, in Advances in Neural Information Processing Systems, Vol. 17, 2005.
[3] G.M. Weiss and F. Provost, Learning when training data are costly: the effect of class distribution on tree induction, Journal of Artificial Intelligence Research, Vol. 19, No. 1, 2003.
[4] J. Bi and T. Zhang, Support vector classification with input data uncertainty, in Advances in Neural Information Processing Systems, Vol. 17, 2005.
[5] T.B. Trafalis and R.C. Gilbert, Maximum margin classifiers with noisy data: a robust optimization approach, in Proceedings of the International Joint Conference on Neural Networks (IJCNN) 2005, Piscataway, NJ, USA, 2005.
[6] T.B. Trafalis and R.C. Gilbert, Robust classification and regression using support vector machines, European Journal of Operational Research, Vol. 173, No. 3, 2006.
[7] T.B. Trafalis and R.C. Gilbert, Robust support vector machines for classification and computational issues, Optimization Methods and Software, Vol. 22, No. 1, 2007.
[8] T.B. Trafalis and S.A. Alwazzi, Support vector regression with noisy data: a second order cone programming approach, International Journal of General Systems, Vol. 36, No. 2, 2007.
[9] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, 1995.
[10] V.N. Vapnik, Statistical Learning Theory, Wiley, 1998.
