Kernel based collaborative filtering for very large scale top-N item recommendation


ESANN 2016 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges (Belgium), 27-29 April 2016, i6doc.com publ.

Kernel based collaborative filtering for very large scale top-N item recommendation

Mirko Polato and Fabio Aiolli
University of Padova - Department of Mathematics
Via Trieste 63, 35121 Padova - Italy

Abstract. The increasing availability of implicit feedback datasets has raised interest in developing effective collaborative filtering techniques able to deal asymmetrically with unambiguous positive feedback and ambiguous negative feedback. In this paper, we propose a principled kernel-based collaborative filtering method for top-N item recommendation with implicit feedback. We present an efficient implementation using the linear kernel, and we show how to generalize it to other kernels while preserving efficiency. We compare our method with the state-of-the-art algorithm on the Million Songs Dataset, achieving an execution about 5 times faster while having comparable effectiveness.

1 Introduction

Collaborative filtering (CF) techniques make recommendations to a user by exploiting information provided by similar users. The typical CF setting consists of a set U of n users, a set I of m items, and the so-called rating matrix R = {r_ui} ∈ R^{n×m}. In this paper we focus on implicit feedback, and so we assume binary ratings, r_ui ∈ {0, 1}, where r_ui = 1 means that user u interacted with item i (unambiguous feedback) and r_ui = 0 means there is no evidence that user u interacted with item i (ambiguous feedback). Unlike traditional CF algorithms for explicit feedback, where one wants to accurately predict ratings for each unseen user-item pair, the goal in the implicit feedback domain is to generate a top-N ranking of items.

Top-N recommendation with implicit feedback was the subject of a recent remarkable challenge organized by Kaggle, the Million Songs Dataset challenge [1], which was defined on a very large dataset with roughly 1.1M users and 380K items (i.e., songs) for a total of about 50M ratings. The winning solution, described in [2] (here called MSDW), is an extension of the well known item-based nearest-neighbors (NN) algorithm [3] that uses an asymmetric similarity measure called asymmetric cosine. Besides its outstanding performance in terms of mAP@500, the MSD winning solution is also easily scalable to very large datasets. However, one drawback of this solution is that it is not theoretically well founded. More recently, a principled algorithm for CF (CF-OMD), which explicitly optimizes the AUC, has been proposed with very good performance on the MovieLens dataset [4]. Unfortunately, this last algorithm cannot be promptly applied to large datasets, as it requires the optimization of n quadratic problems, each one defined on m variables.

This work was supported by the University of Padova under the strategic project BIOINFOGEN.
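To make the setting concrete, here is a minimal sketch, on hypothetical toy data, of the binary rating matrix R and its column-normalized version X (the normalization x_i = r_i/‖r_i‖ is the one used in Section 2 below). This is a SciPy-based illustration, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical implicit-feedback data: (user, item) interaction pairs.
interactions = [(0, 0), (0, 2), (1, 2), (2, 1), (2, 3), (3, 0)]
n_users, n_items = 4, 5

rows, cols = zip(*interactions)
# Binary rating matrix R in R^{n x m}: R[u, i] = 1 iff user u interacted with item i.
R = sp.csc_matrix((np.ones(len(interactions)), (rows, cols)),
                  shape=(n_users, n_items))

# Column-normalize: x_i = r_i / ||r_i||  (all-zero columns are left untouched).
norms = np.sqrt(R.multiply(R).sum(axis=0)).A.ravel()
norms[norms == 0] = 1.0
X = R @ sp.diags(1.0 / norms)
```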

Here, we propose a variant of the CF-OMD algorithm that makes it applicable to very large datasets, achieving an execution time about 5 times faster than the MSDW algorithm on the MSD dataset. Secondly, we present strategies that allow the same algorithm to be applied with quite general kernels without loss in efficiency.

2 CF-OMD (Optimization of the Margin Distribution)

In this section we present a CF algorithm, called CF-OMD [4], for top-N recommendation inspired by preference learning and designed to explicitly maximize the AUC (Area Under the ROC Curve). Consider the normalized rating matrix X ∈ R^{n×m}, with columns x_i = r_i/‖r_i‖, and let I_u be the set of items rated by the user u. Let us also define the set of probability distributions over the positive and negative items for u as Γ_u = {α ∈ R_+^m | Σ_{i∈I_u} α_i = 1, Σ_{i∉I_u} α_i = 1}. Then, for each test user u, the following convex optimization problem has to be solved:

α*_u = argmin_{α∈Γ_u} α^T (Y_u X^T X Y_u + Λ_u) α,    (1)

where Y_u = diag(y_u) is a diagonal matrix such that y_ui = +1 if i ∈ I_u and y_ui = −1 otherwise, and Λ_u is a diagonal matrix such that (Λ_u)_ii = λ_p if i ∈ I_u and (Λ_u)_ii = λ_n otherwise, with λ_p and λ_n regularization parameters. These parameters balance the contribution of the unambiguous ratings (λ_p) and the ambiguous ones (λ_n). Once the optimization problem is solved, the scores of the user u are calculated as r̂_u = X^T X Y_u α*_u, and the recommendation is made accordingly.

Although this algorithm has shown state-of-the-art results in terms of AUC, it is not suitable for large datasets. In fact, assume that each optimization problem can be solved by an algorithm with complexity quadratic in the number of parameters. Then the global complexity would be O(n_ts m²), where n_ts is the number of users in the test set; for the MSD it would be about O(10^19).

3 Efficient CF-OMD

Analyzing the results reported in [4], the authors noticed that high values of λ_n did not particularly affect the results, because a high λ_n tends to flatten the contribution of the ambiguous negative feedback toward the average, mitigating the relevance of noisy information. In CF contexts the data sparsity is particularly high; this means that, on average, the number of ambiguous negative feedbacks is orders of magnitude greater than the number of positive ones. Formally, given a user u, let m_u^+ = |I_u| and m_u^- = |I \ I_u|; then m = m_u^+ + m_u^-, where m_u^+ ≪ m_u^-, and generally O(m) = O(m_u^-). On the basis of this observation, we can simplify the optimization problem (1) by fixing λ_n = +∞, which forces α_i = 1/m_u^- for all i ∉ I_u:

α*_u = argmin_{α∈Γ_u^+} α^T X_+^T X_+ α + λ_p ‖α‖² − 2 α^T X_+^T µ_u^-,    (2)

where α ∈ Γ_u^+ are the probabilities associated with the positive items (Γ_u^+ is the simplex over I_u), X_+ is the sub-matrix of X containing only the columns corresponding to the positive items, and µ_u^- = (1/m_u^-) Σ_{i∉I_u} x_i is the centroid of the convex hull spanned by the negative items. The number of parameters in (2) is m_u^+, and hence the complexity drops from O(n_ts m²) to O(n_ts m̂_+²), where m̂_+ = E[|I_u|]. In MSD this leads to a complexity of roughly O(10^8).
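Problem (2) is a standard quadratic program over the simplex. A footnote in the experimental section notes that the optimization is solved with the CVXOPT package; the sketch below is a reconstruction of how (2) maps onto cvxopt.solvers.qp, not the authors' code. The function name ecf_omd_user and the default λ_p value are hypothetical, and dense NumPy inputs are assumed for simplicity.

```python
import numpy as np
from cvxopt import matrix, solvers

def ecf_omd_user(X_pos, mu_neg, lambda_p=0.1):
    """Solve problem (2) for one user.
    X_pos:  n x m_u+ matrix whose columns are the user's positive items.
    mu_neg: length-n centroid of the negative items.
    Returns the optimal probability vector alpha over the positive items."""
    m_pos = X_pos.shape[1]
    K_pos = X_pos.T @ X_pos                    # linear kernel on positives
    # cvxopt solves: min 1/2 a'Pa + q'a  s.t.  Ga <= h, Aa = b.
    P = matrix(2.0 * (K_pos + lambda_p * np.eye(m_pos)))
    q = matrix(-2.0 * (X_pos.T @ mu_neg))
    G = matrix(-np.eye(m_pos))                 # alpha_i >= 0
    h = matrix(np.zeros(m_pos))
    A = matrix(np.ones((1, m_pos)))            # sum_i alpha_i = 1
    b = matrix(1.0)
    solvers.options['show_progress'] = False
    return np.array(solvers.qp(P, q, G, h, A, b)['x']).ravel()
```

The item scores then follow from (1) with the uniform negatives folded in: r̂_u = X^T (X_+ α*_u − µ_u^-).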

3.1 Implementation trick

Notwithstanding the huge improvement in terms of complexity, a naïve implementation would have an additional cost due to the calculation of µ_u^-: over all users in the test set this cost would be O(n_ts n m̂_-), where m̂_- = E[|I \ I_u|], which can be approximated with O(n_ts n m). To overcome this bottleneck, we propose an efficient incremental way of calculating µ_u^-. Consider the mean over all items, µ = (1/m) Σ_{i∈I} x_i; then, for a given user u, we can express µ_u^- = (1/m_u^-) (m µ − Σ_{i∈I_u} x_i). From a computational point of view, it is sufficient to compute the sum Σ_{i∈I} x_i once (i.e., m µ) and then, for every µ_u^-, subtract the sum of the positive items of u. Using this simple trick, the overall complexity drops to O(nm) + O(n_ts m̂_+). In the experimental section we successfully apply this algorithm to the MSD, achieving competitive results against the state-of-the-art method but with higher efficiency.
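A minimal sketch of the incremental computation of µ_u^- (again a reconstruction, not the authors' code; X is assumed to be the column-normalized matrix from the first sketch, dense or SciPy sparse):

```python
import numpy as np

# Computed once for the whole test set: m * mu, the sum of all item columns.
item_sum = np.asarray(X.sum(axis=1)).ravel()

def mu_neg_for_user(X, item_sum, pos_items):
    """Negative-item centroid for one user via the incremental trick:
    subtract the user's positive columns from the precomputed global sum."""
    m = X.shape[1]
    m_neg = m - len(pos_items)
    pos_sum = np.asarray(X[:, pos_items].sum(axis=1)).ravel()
    return (item_sum - pos_sum) / m_neg
```

Each call now costs only the sum of m_u^+ columns instead of m_u^- ≈ m columns.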

4 Kernelized CF-OMD

The method proposed in Section 3 can be seen as a particular case of a kernel method. In fact, X^T X is a kernel matrix, let us call it K, with the corresponding (linear) kernel function K : R^n × R^n → R. Given K we can reformulate (2) as:

α*_u = argmin_{α∈Γ_u^+} α^T K_+ α + λ_p ‖α‖² − 2 α^T q_+,    (3)

where q_+ is the vector with entries q_i = (1/m_u^-) Σ_{j∉I_u} K(x_i, x_j). Actually, inside the optimization problem (3) we can plug any kernel function. We will refer to this method as CF-KOMD.

Generally speaking, the application of kernel methods to huge datasets has an intractable computational complexity. Without any shrewdness the proposed method would not be applicable, because of the computational cost of building the kernel matrix and q_+. An important observation is that this complexity is strictly connected with the sparsity of the kernel matrix, which is, unfortunately, commonly dense. However, we can leverage an important result to keep the kernel as sparse as possible without changing the solution of CF-KOMD. In [5] Kar and Karnick observed that if a function f : R → R admits a Maclaurin expansion with only nonnegative coefficients, i.e., f(x) = Σ_{n≥0} a_n x^n with a_n ≥ 0, then it defines a positive definite kernel K : (x, y) ↦ f(⟨x, y⟩). As emphasized in [5], many kernels used in practice [6] satisfy this condition.

Consider the application of this result to the polynomial kernel K_p : (x, y) ↦ (⟨x, y⟩ + c)^d, where c ∈ R and d ∈ N. K_p can be expanded as:

K_p(x, y) = Σ_{i=0}^{d} (d choose i) c^{d−i} ⟨x, y⟩^i.    (4)

When the polynomial is not homogeneous (i.e., c ≠ 0), the kernel matrix induced by K_p is dense, due to the zero degree term (i.e., c^d) which is added to all entries. Since adding a constant to the whole matrix is a space translation, it can be shown that this operation does not affect the margin in CF-KOMD. For this reason we can sparsify the kernel by removing the term c^d, obtaining a kernel matrix whose sparsity depends on the distribution of the input data.

Let K = X^T X be a kernel matrix and let P(K_ij ≠ 0) be the probability that the entry K_ij is not zero. Given the a-priori probabilities P(x_ih ≠ 0) and P(x_jh ≠ 0), we have P(K_ij ≠ 0) = 1 − (1 − P(x_ih ≠ 0) P(x_jh ≠ 0))^n. Any time both x_i and x_j are popular items, i.e., P(x_ih ≠ 0) and P(x_jh ≠ 0) are high, P(K_ij ≠ 0) tends to be high as well. On the contrary, when one of the two vectors represents an unpopular item, the probability P(K_ij ≠ 0) goes to zero. In CF contexts this situation is pushed towards the limit, since the popularity distribution generally follows a power law, and this often guarantees the sparsity of the resulting kernel.

Using the sparsified kernel, we can further optimize the complexity by providing a good approximation of q_+ that can be computed only once, instead of n_ts times. The idea consists in replacing every q_i with an estimate of E[K(x_i, x)]. Formally, consider, without loss of generality, a normalized kernel function K̄ and let the approximation of q_+ be q̃ such that q̃_i = (1/m) Σ_{j∈I} K̄(x_i, x_j). At each component of q_+, the approximation error is bounded by 2 m_u^+/m (see Appendix A.2), which is linear in the sparsity of the dataset.
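The two ingredients above lend themselves to a compact implementation: the c^d-free polynomial kernel can be applied entrywise to the nonzeros of the linear gram matrix S = X^T X, preserving its sparsity pattern, and q̃ is a single row-mean of the normalized kernel. The sketch below is an illustration under these assumptions; the function names are mine, and zero columns are assumed to have been filtered out before normalization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.special import comb

def sparse_poly_kernel(X, degree=2, c=1.0):
    """Polynomial kernel (<x,y> + c)^degree with the zero-degree term c^degree
    removed, so the result keeps the sparsity pattern of S = X^T X."""
    S = sp.csr_matrix(X.T @ X)
    K = S.copy()
    # Apply f(s) = sum_{i=1..d} C(d,i) c^(d-i) s^i entrywise on nonzeros only.
    K.data = sum(comb(degree, i) * c**(degree - i) * S.data**i
                 for i in range(1, degree + 1))
    return K

def normalize_kernel(K):
    """Kbar_ij = K_ij / sqrt(K_ii * K_jj), preserving sparsity."""
    d = 1.0 / np.sqrt(K.diagonal())
    D = sp.diags(d)
    return D @ K @ D

def q_tilde(K_norm):
    """Approximate q: q~_i = (1/m) sum_{j in I} Kbar(x_i, x_j), computed once."""
    m = K_norm.shape[1]
    return np.asarray(K_norm.sum(axis=1)).ravel() / m
```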

5 Experiments and Results

Experiments have been performed comparing the proposed methods against the state-of-the-art method on MSD (MSDW)¹ with respect to ranking quality and computational performance. We used two datasets: MSD, described in Section 1, and MovieLens, which consists of 3850 users and 73 items for a total of 35K ratings. Methods have been compared using the mAP [1] and AUC measures. All methods have been implemented in Python². In the following we refer to the efficient CF-OMD of Section 3 as ECF-OMD and to the kernelized CF-OMD as CF-K.

5.1 MovieLens dataset

The MovieLens dataset has been randomly divided into a training set of roughly 50K ratings and a test set of 60K ratings. Since this dataset contains ratings in the form of a 5-star preference, we converted them into binary ones, where all values greater than 0 are treated as 1. This test aims to show the accuracy and the computational performance of the proposed methods on a medium-size dataset. Table 1 summarizes the results.

Table 1: Ranking accuracy on the MovieLens dataset in terms of AUC and mAP@100, for MSDW (varying α), ECF-OMD (varying λ_p) and CF-K (varying λ_p).

We tested MSDW fixing the locality parameter q [2] and varying the asymmetric cosine weight α. For ECF-OMD we tried different values of λ_p, but their effect on the final ranking is minimal; for this reason we kept λ_p fixed during the CF-K experiment. In this experiment we used the polynomial kernel of degree 2 with c = 1. Results show that both proposed methods achieve higher AUC and mAP@100, with a slightly better performance for CF-K. On this dataset all methods terminate in a few seconds.

5.2 MSD

We used MSD as described in the Kaggle challenge³: the training set is composed of 1M users (plus 10K users as validation set) with all their listening history; for the remaining 100K users only the first half of the history is provided, while the other half constitutes the test set. In these experiments we fixed the λ_p parameter to the best performing one on the MovieLens dataset. Results are presented in Table 2. In this case MSDW maintains its record performance in terms of mAP@500, while in terms of AUC all methods achieve very good results. This underlines the fact that both ECF-OMD and CF-K optimize the AUC rather than the mAP.

Table 2: Ranking accuracy on MSD in terms of AUC and mAP@500, for MSDW (α, q), ECF-OMD (λ_p) and CF-K (λ_p).

¹The MSDW implementation is available at http://www.math.unipd.it/~aiolli/CODE/MSD/.
²We used the CVXOPT package to solve the optimization problems.
³https://www.kaggle.com/c/msdchallenge
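For completeness, a small sketch of the mAP@k measure used in the tables (a reconstruction; the challenge's exact truncation and normalization conventions may differ in minor details):

```python
import numpy as np

def average_precision_at_k(ranked_items, relevant, k=500):
    """AP@k for one user: ranked_items is the recommended list,
    relevant the set of held-out (test) items."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank          # precision at this hit position
    return score / min(len(relevant), k)

def map_at_k(all_rankings, all_relevant, k=500):
    """Mean over users of AP@k."""
    return float(np.mean([average_precision_at_k(r, rel, k)
                          for r, rel in zip(all_rankings, all_relevant)]))
```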

The computational costs on this dataset are reported in Figure 1. The results are the average computing time over 1K test users. All methods ran on a machine with 50GB of RAM and 2x Eight-Core Intel(R) Xeon(R) CPU E5-2680; the times in Figure 1 include a constant overhead due to read operations. Results show that ECF-OMD and CF-K are almost 5 times faster than MSDW, even though they require more RAM to store the kernel matrix. It is worth noticing that CF-K has a computational time very close to ECF-OMD, and this highlights the positive effects of the complexity optimizations presented in this paper.

Fig. 1: Average computational time in hours for 1K users (MSDW: 7.5 hrs; ECF-OMD: 1.55 hrs; CF-K: 0.75 hrs).

A Appendix

A.1 Optimization problem simplification

Let µ_u^- be defined as in Sec. 3 and let X_+, X_- be the sub-matrices of X containing only the columns corresponding, respectively, to the positive and negative items for u. Then, by fixing λ_n = +∞, we can simplify (1) as:

α*_u = argmin_{α∈Γ_u^+} ‖X_+ α − µ_u^-‖² + λ_p ‖α‖²
     = argmin_{α∈Γ_u^+} ‖X_+ α‖² + ‖µ_u^-‖² − 2 α^T X_+^T µ_u^- + λ_p ‖α‖²
     = argmin_{α∈Γ_u^+} α^T X_+^T X_+ α + λ_p ‖α‖² − 2 α^T X_+^T µ_u^-,    (2)

where the constant term ‖µ_u^-‖² has been dropped.

A.2 Approximation error

Let K̄_ij = K̄(x_i, x_j), with 0 ≤ K̄_ij ≤ 1 since the kernel is normalized. Then:

|q̃_i − q_i| = | (1/m) Σ_{j∈I} K̄_ij − (1/m_u^-) Σ_{j∉I_u} K̄_ij |
            = | (1/m) Σ_{j∈I_u} K̄_ij + (1/m − 1/m_u^-) Σ_{j∉I_u} K̄_ij |
            ≤ (1/m) Σ_{j∈I_u} K̄_ij + (m_u^+/(m m_u^-)) Σ_{j∉I_u} K̄_ij
            ≤ m_u^+/m + m_u^+/m = 2 m_u^+/m,

where we used 1/m − 1/m_u^- = −m_u^+/(m m_u^-).

References

[1] Brian McFee, Thierry Bertin-Mahieux, Daniel P.W. Ellis, and Gert R.G. Lanckriet. The million song dataset challenge. In Proceedings of the 21st International Conference Companion on World Wide Web (WWW Companion), New York, NY, USA, 2012. ACM.

[2] Fabio Aiolli. Efficient top-N recommendation for very large scale binary rated datasets. In ACM Recommender Systems Conference, pages 273-280, Hong Kong, China, 2013.

[3] Mukund Deshpande and George Karypis. Item-based top-N recommendation algorithms. ACM Trans. Inf. Syst., 22(1):143-177, 2004.

[4] Fabio Aiolli. Convex AUC optimization for top-N recommendation with implicit feedback. In ACM Recommender Systems Conference, pages 293-296, New York, USA, 2014.

[5] Purushottam Kar and Harish Karnick. Random feature maps for dot product kernels. In Neil D. Lawrence and Mark A. Girolami, editors, Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 22, 2012.

[6] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2002.
