Statistical Learning Theory: A Primer


International Journal of Computer Vision 38(1), 9–13, 2000.
© 2000 Kluwer Academic Publishers. Manufactured in The Netherlands.

Statistical Learning Theory: A Primer

THEODOROS EVGENIOU, MASSIMILIANO PONTIL AND TOMASO POGGIO
Center for Biological and Computational Learning, Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
theos@ai.mit.edu, pontil@ai.mit.edu, tp@ai.mit.edu

Abstract. In this paper we first overview the main concepts of Statistical Learning Theory, a framework in which learning from examples can be studied in a principled way. We then briefly discuss well-known as well as emerging learning techniques such as Regularization Networks and Support Vector Machines, which can be justified in terms of the same induction principle.

Keywords: VC-dimension, structural risk minimization, regularization networks, support vector machines

1. Introduction

The goal of this paper is to provide a short introduction to Statistical Learning Theory (SLT), which studies problems and techniques of supervised learning. For a more detailed review of SLT see Evgeniou et al. (1999). In supervised learning, or learning from examples, a machine is trained, instead of programmed, to perform a given task on a number of input-output pairs. According to this paradigm, training means choosing a function which best describes the relation between the inputs and the outputs. The central question of SLT is how well the chosen function generalizes, that is, how well it estimates the output for previously unseen inputs. We will consider techniques which lead to solutions of the form

f(x) = Σ_{i=1}^ℓ c_i K(x, x_i),    (1)

where the x_i, i = 1, ..., ℓ, are the input examples, K a certain symmetric positive definite function called a kernel, and c_i a set of parameters to be determined from the examples.
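In plain code, a solution of the kernel-expansion form above is evaluated by summing kernel values against the training inputs. The sketch below is purely illustrative: the Gaussian kernel, the value of β, and the coefficients c_i are arbitrary choices for the example, not values prescribed by the theory.

```python
import numpy as np

def gaussian_kernel(x, xi, beta=1.0):
    """K(x, x_i) = exp(-beta * ||x - x_i||^2), one admissible kernel."""
    return np.exp(-beta * np.sum((x - xi) ** 2))

def f(x, X_train, c, beta=1.0):
    """Evaluate f(x) = sum_i c_i K(x, x_i)."""
    return sum(ci * gaussian_kernel(x, xi, beta) for ci, xi in zip(c, X_train))

# Toy example: three 1-D training inputs and arbitrary coefficients.
X_train = np.array([[0.0], [1.0], [2.0]])
c = np.array([0.5, -1.0, 0.25])
print(f(np.array([1.0]), X_train, c))
```

How the coefficients c_i are actually determined from the data is the subject of the rest of the paper.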
This function is found by minimizing functionals of the type

H[f] = Σ_{i=1}^ℓ V(y_i, f(x_i)) + λ ||f||²_K,

where V is a loss function which measures the goodness of the predicted output f(x_i) with respect to the given output y_i, ||f||²_K is a smoothness term which can be thought of as a norm in the Reproducing Kernel Hilbert Space defined by the kernel K, and λ is a positive parameter which controls the relative weight between the data term and the smoothness term. The choice of the loss function determines different learning techniques, each leading to a different learning algorithm for computing the coefficients c_i.

The rest of the paper is organized as follows. Section 2 presents the main ideas and concepts of the theory. Section 3 discusses Regularization Networks and Support Vector Machines, two important techniques which produce outputs of the form of Eq. (1).

2. Statistical Learning Theory

We consider two sets of random variables x ∈ X ⊆ R^d and y ∈ Y ⊆ R related by a probabilistic relationship.

The relationship is probabilistic because generally an element of X does not determine uniquely an element of Y, but rather a probability distribution on Y. This can be formalized by assuming that an unknown probability distribution P(x, y) is defined over the set X × Y. We are provided with examples of this probabilistic relationship, that is, with a data set D_ℓ ≡ {(x_i, y_i) ∈ X × Y}_{i=1}^ℓ, called the training set, obtained by sampling ℓ times the set X × Y according to P(x, y). The problem of learning consists in, given the data set D_ℓ, providing an estimator, that is, a function f: X → Y, that can be used, given any value of x ∈ X, to predict a value y. For example, X could be the set of all possible images, Y the set {−1, 1}, and f(x) an indicator function which specifies whether image x contains a certain object (y = 1) or not (y = −1) (see for example Papageorgiou et al. (1998)). Another example is the case where x is a set of parameters, such as pose or facial expressions, y is a motion field relative to a particular reference image of a face, and f(x) is a regression function which maps parameters to motion (see for example Ezzat and Poggio (1996)).

In SLT, the standard way to solve the learning problem consists in defining a risk functional, which measures the average amount of error (or risk) associated with an estimator, and then looking for the estimator with the lowest risk. If V(y, f(x)) is the loss function measuring the error we make when we predict y by f(x), then the average error, the so-called expected risk, is:

I[f] ≡ ∫_{X×Y} V(y, f(x)) P(x, y) dx dy.

We assume that the expected risk is defined on a large class of functions F and we denote by f_0 the function which minimizes the expected risk in F. The function f_0 is our ideal estimator, and it is often called the target function. This function cannot be found in practice, because the probability distribution P(x, y) that defines the expected risk is unknown, and only a sample of it, the data set D_ℓ, is available.
To overcome this shortcoming we need an induction principle that we can use to learn from the limited number of training data we have. SLT, as developed by Vapnik (Vapnik, 1998), builds on the so-called empirical risk minimization (ERM) induction principle. The ERM method consists in using the data set D_ℓ to build a stochastic approximation of the expected risk, which is usually called the empirical risk and is defined as:

I_emp[f; ℓ] = (1/ℓ) Σ_{i=1}^ℓ V(y_i, f(x_i)).

Straight minimization of the empirical risk in F can be problematic. First, it is usually an ill-posed problem (Tikhonov and Arsenin, 1977), in the sense that there might be many, possibly infinitely many, functions minimizing the empirical risk. Second, it can lead to overfitting, meaning that although the minimum of the empirical risk can be very close to zero, the expected risk, which is what we are really interested in, can be very large. SLT provides probabilistic bounds on the distance between the empirical and expected risk of any function (therefore including the minimizer of the empirical risk) in a function space that can be used to control overfitting. The bounds involve the number of examples ℓ and the capacity h of the function space, a quantity measuring the complexity of the space. Appropriate capacity quantities are defined in the theory, the most popular one being the VC-dimension (Vapnik and Chervonenkis, 1971) or scale-sensitive versions of it (Kearns and Schapire, 1994; Alon et al., 1993). The bounds have the following general form: with probability at least 1 − η,

I[f] < I_emp[f] + Φ(h/ℓ, η),    (2)

where h is the capacity and Φ is an increasing function of h/ℓ and η. For more information and for the exact forms of the function Φ we refer the reader to (Vapnik and Chervonenkis, 1971; Vapnik, 1998; Alon et al., 1993). Intuitively, if the capacity of the function space in which we perform empirical risk minimization is very large and the number of examples is small, then the distance between the empirical and expected risk can be large and overfitting is very likely to occur.
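The empirical risk is just an average of per-example losses, and the overfitting phenomenon described above is easy to reproduce numerically. In the hypothetical sketch below (target function, noise level, and degree are all arbitrary choices), a high-degree polynomial fitted to few noisy points drives the empirical risk with square loss essentially to zero, while the risk on fresh samples from the same distribution, a Monte Carlo stand-in for the expected risk, stays far larger.

```python
import numpy as np

def empirical_risk(f, X, y):
    """I_emp[f] = (1/l) * sum_i V(y_i, f(x_i)), here with the square loss."""
    return np.mean((y - f(X)) ** 2)

rng = np.random.default_rng(0)
target = lambda x: np.sin(2 * np.pi * x)   # stands in for the unknown relationship
X_train = rng.uniform(0, 1, 10)
y_train = target(X_train) + 0.1 * rng.normal(size=10)

# High-capacity hypothesis space: degree-9 polynomials interpolate 10 points.
coeffs = np.polyfit(X_train, y_train, 9)
f_hat = lambda x: np.polyval(coeffs, x)

# A large fresh sample approximates the expected risk of the fitted function.
X_test = rng.uniform(0, 1, 10000)
y_test = target(X_test) + 0.1 * rng.normal(size=10000)

print("empirical risk:", empirical_risk(f_hat, X_train, y_train))
print("estimated expected risk:", empirical_risk(f_hat, X_test, y_test))
```

The gap between the two printed numbers is exactly the quantity that the bound (2) controls.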
Since the space F is usually very large (e.g., F could be the space of square integrable functions), one typically considers smaller hypothesis spaces H. Moreover, inequality (2) suggests an alternative method for achieving good generalization: instead of minimizing the empirical risk alone, find the best trade-off between the empirical risk and the complexity of the hypothesis space, measured by the second term on the right hand side of inequality (2). This observation leads to the method of Structural Risk Minimization (SRM).
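A toy version of this trade-off can be sketched with nested spaces of degree-m polynomials: minimize the empirical risk in each space, then select the degree with the smallest penalized risk. Note that the additive penalty used below is a crude illustrative surrogate for the capacity term of inequality (2), not its exact form.

```python
import numpy as np

rng = np.random.default_rng(1)
target = lambda x: np.sin(2 * np.pi * x)
X = rng.uniform(0, 1, 30)
y = target(X) + 0.1 * rng.normal(size=30)
l = len(X)

def emp_risk(deg):
    """Minimize the empirical risk (square loss) over degree-deg polynomials."""
    c = np.polyfit(X, y, deg)
    return np.mean((y - np.polyval(c, X)) ** 2)

# Nested spaces H_1 ⊂ H_2 ⊂ ...: higher degree means higher capacity.
# The penalty (m + 1) / l is an illustrative stand-in for the capacity term.
scores = {m: emp_risk(m) + (m + 1) / l for m in range(1, 10)}
best = min(scores, key=scores.get)
print("selected degree:", best)
```

Minimizing the empirical risk alone would always favor the largest degree; the penalty is what makes an intermediate-capacity space win.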

The idea of SRM is to define a nested sequence of hypothesis spaces H_1 ⊂ H_2 ⊂ ... ⊂ H_M, where each hypothesis space H_m has finite capacity h_m larger than that of all the previous sets, that is: h_1 ≤ h_2 ≤ ... ≤ h_M. For example, H_m could be the set of polynomials of degree m, or a set of splines with m nodes, or some more complicated nonlinear parameterization. Using such a nested sequence of more and more complex hypothesis spaces, SRM consists of choosing the minimizer of the empirical risk in the space H_m for which the bound on the structural risk, as measured by the right hand side of inequality (2), is minimized. Further information about the statistical properties of SRM can be found in Devroye et al. (1996) and Vapnik (1998).

To summarize, in SLT the problem of learning from examples is solved in three steps: (a) we define a loss function V(y, f(x)) measuring the error of predicting the output of input x with f(x) when the actual output is y; (b) we define a nested sequence of hypothesis spaces H_m, m = 1, ..., M, whose capacity is an increasing function of m; (c) we minimize the empirical risk in each of the H_m and choose, among the solutions found, the one with the best trade-off between the empirical risk and the capacity, as given by the right hand side of inequality (2).

3. Learning Machines

3.1. Learning as Functional Minimization

We now consider hypothesis spaces which are subsets of a Reproducing Kernel Hilbert Space (RKHS) (Wahba, 1990). An RKHS is a Hilbert space of functions f of the form f(x) = Σ_{n=1}^N a_n φ_n(x), where {φ_n(x)}_{n=1}^N is a set of given, linearly independent basis functions and N can possibly be infinite. An RKHS is equipped with a norm, defined as:

||f||²_K = Σ_{n=1}^N a_n²/λ_n,

where {λ_n}_{n=1}^N is a decreasing, positive sequence of real values whose sum is finite.
The constants λ_n and the basis functions {φ_n}_{n=1}^N define the symmetric positive definite kernel function:

K(x, y) = Σ_{n=1}^N λ_n φ_n(x) φ_n(y).

A nested sequence of spaces of functions in the RKHS can be constructed by bounding the RKHS norm of the functions in the space. This can be done by defining a set of constants A_1 < A_2 < ... < A_M and considering spaces of the form:

H_m = {f ∈ RKHS : ||f||_K ≤ A_m}.

It can be shown that the capacity of the hypothesis spaces H_m is an increasing function of A_m (see for example Evgeniou et al. (1999)). According to the scheme given at the end of Section 2, the solution of the learning problem is found by solving, for each A_m, the following optimization problem:

min_f Σ_{i=1}^ℓ V(y_i, f(x_i))   subject to   ||f||_K ≤ A_m,

and choosing, among the solutions found for each A_m, the one with the best trade-off between empirical risk and capacity, i.e. the one which minimizes the bound on the structural risk as given by inequality (2).

The implementation of the SRM method described above is not practical because it requires solving a large number of constrained optimization problems. This difficulty is overcome by searching for the minimum of:

H[f] = Σ_{i=1}^ℓ V(y_i, f(x_i)) + λ ||f||²_K.    (3)

The functional H[f] contains both the empirical risk and the norm (complexity or smoothness) of f in the RKHS, similarly to functionals considered in regularization theory (Tikhonov and Arsenin, 1977). The regularization parameter λ penalizes functions with high capacity: the larger λ, the smaller the RKHS norm of the solution will be. When implementing SRM, the key issue is the choice of the hypothesis space, i.e. of the space H_m in which the structural risk is minimized. In the case of the functional of Eq. (3), the key issue becomes the choice of the regularization parameter λ. These two problems, as discussed in Evgeniou et al. (1999), are related, and the SRM method can in principle be used to choose λ (Vapnik, 1998).
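For a function of the kernel-expansion form f(x) = Σ_i c_i K(x, x_i), the reproducing property gives the RKHS norm explicitly as ||f||²_K = Σ_{i,j} c_i c_j K(x_i, x_j) = cᵀGc, with G the Gram matrix of the kernel on the data, so the regularized functional above can be evaluated directly. A minimal sketch follows; the Gaussian kernel and all parameter values are arbitrary illustrations.

```python
import numpy as np

def gram(X, beta=1.0):
    """Gram matrix G_ij = K(x_i, x_j) for the Gaussian kernel."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-beta * d2)

def H(c, X, y, lam, beta=1.0):
    """H[f] = sum_i (y_i - f(x_i))^2 + lam * c^T G c, using the square loss."""
    G = gram(X, beta)
    f_at_train = G @ c          # f(x_i) = sum_j c_j K(x_i, x_j)
    data_term = np.sum((y - f_at_train) ** 2)
    smoothness = c @ G @ c      # ||f||_K^2 via the reproducing property
    return data_term + lam * smoothness

X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.0])
c = np.array([0.1, 0.8, 0.1])
print(H(c, X, y, lam=0.1))
```

Since G is positive semidefinite, the smoothness term is nonnegative, and increasing λ can only increase H[f] for a fixed f.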
In practice, instead of SRM, other methods are used, such as cross-validation (Wahba, 1990), Generalized Cross Validation, Finite Prediction Error and the MDL criteria (see Vapnik (1998) for a review and comparison). An important feature of the minimizer of H[f] is that, independently of the loss function V, the

minimizer has the same general form (Wahba, 1990):

f(x) = Σ_{i=1}^ℓ c_i K(x, x_i).    (4)

Notice that Eq. (4) establishes a representation of the function f as a linear combination of kernels centered at each data point. Using different kernels we get functions such as Gaussian radial basis functions (K(x, y) = exp(−β ||x − y||²)) or polynomials of degree d (K(x, y) = (1 + x · y)^d) (Girosi et al., 1995; Vapnik, 1998).

We now turn to discuss a few learning techniques based on the minimization of functionals of the form (3), obtained by specifying the loss function V. In particular, we will consider Regularization Networks and Support Vector Machines (SVM), a learning technique which has recently been proposed for both classification and regression problems (see Vapnik (1998) and references therein):

Regularization Networks:
V(y_i, f(x_i)) = (y_i − f(x_i))²,    (5)

SVM Classification:
V(y_i, f(x_i)) = |1 − y_i f(x_i)|_+,    (6)

where |x|_+ = x if x > 0 and zero otherwise.

SVM Regression:
V(y_i, f(x_i)) = |y_i − f(x_i)|_ε,    (7)

where the function |·|_ε, called the ε-insensitive loss, is defined as:

|x|_ε = 0 if |x| < ε, and |x| − ε otherwise.    (8)

We now briefly discuss each of these three techniques.

3.2. Regularization Networks

The approximation scheme that arises from the minimization of the quadratic functional

Σ_{i=1}^ℓ (y_i − f(x_i))² + λ ||f||²_K    (9)

for a fixed λ is a special form of regularization. It is possible to show (see for example Girosi et al. (1995)) that the coefficients c_i of the minimizer of (9) in Eq. (4) satisfy the following linear system of equations:

(G + λI) c = y,    (10)

where I is the identity matrix and we have defined (y)_i = y_i, (c)_i = c_i, (G)_{ij} = K(x_i, x_j). Since the coefficients c_i satisfy a linear system, Eq. (4) can be rewritten as:

f(x) = Σ_{i=1}^ℓ y_i b_i(x),    (11)

with b_i(x) = Σ_{j=1}^ℓ (G + λI)⁻¹_{ij} K(x_j, x). Equation (11) gives the dual representation of RN.
Notice the difference between Eqs. (4) and (11): in the first one the coefficients c_i are learned from the data, while in the second one the basis functions b_i are learned, the coefficients of the expansion being equal to the outputs of the examples. We refer to (Girosi et al., 1995) for more information on the dual representation.

3.3. Support Vector Machines

We now discuss Support Vector Machines (SVM) (Cortes and Vapnik, 1995; Vapnik, 1998). We distinguish between real output (regression) and binary output (classification) problems. The method of SVM regression corresponds to the following minimization:

min_f Σ_{i=1}^ℓ |y_i − f(x_i)|_ε + λ ||f||²_K,    (12)

while the method of SVM classification corresponds to:

min_f Σ_{i=1}^ℓ |1 − y_i f(x_i)|_+ + λ ||f||²_K.    (13)

It turns out that for both problems (12) and (13) the coefficients c_i in Eq. (4) can be found by solving a Quadratic Programming (QP) problem with linear constraints. The regularization parameter λ appears only in the linear constraints: the absolute values of the coefficients c_i are bounded by 1/(2λ).

The QP problem is non-trivial since the size of the matrix of the quadratic form is ℓ × ℓ and the matrix is dense. A number of algorithms for training SVMs have been proposed: some are based on a decomposition approach, where the QP problem is attacked by solving a sequence of smaller QP problems (Osuna et al., 1997), others on sequential updates of the solution (Platt, 1998).

A remarkable property of SVMs is that the loss functions (6) and (7) lead to sparse solutions. This means that, unlike in the case of Regularization Networks, typically only a small fraction of the coefficients c_i in Eq. (4) are nonzero. The data points x_i associated with the nonzero c_i are called support vectors. If all data points which are not support vectors were to be discarded from the training set, the same solution would be found. In this context, an interesting perspective on SVM is to consider its information compression properties. The support vectors represent the most informative data points and compress the information contained in the training set: for the purpose of, say, classification, only the support vectors need to be stored, while all other training examples can be discarded. This, along with some geometric properties of SVMs such as the interpretation of the RKHS norm of their solution as the inverse of the margin (Vapnik, 1998), is a key property of SVMs and might explain why this technique works well in many practical applications.

3.4. Kernels and Data Representations

We conclude this short review with a discussion of kernels and data representations. A key issue when using the learning techniques discussed above is the choice of the kernel K in Eq. (4). The kernel K(x_i, x_j) defines a dot product between the projections of the two inputs x_i and x_j in the feature space (the features being {φ_1(x), φ_2(x), ..., φ_N(x)}, with N the dimensionality of the RKHS). Therefore its choice is closely related to the choice of the effective representation of the data, e.g. the image representation in a vision application.
The problem of choosing the kernel for the machines discussed here, and more generally the issue of finding appropriate data representations for learning, is an important and open one. The theory does not provide a general method for finding good data representations, but suggests representations that lead to simple solutions. Although there is no general solution to this problem, a number of recent experimental and theoretical works provide insights for specific applications (Evgeniou et al., 2000; Jaakkola and Haussler, 1998; Mohan, 1999; Vapnik, 1998).

References

Alon, N., Ben-David, S., Cesa-Bianchi, N., and Haussler, D. 1993. Scale-sensitive dimensions, uniform convergence, and learnability. In Symposium on Foundations of Computer Science.
Cortes, C. and Vapnik, V. 1995. Support vector networks. Machine Learning, 20.
Devroye, L., Györfi, L., and Lugosi, G. 1996. A Probabilistic Theory of Pattern Recognition, No. 31 in Applications of Mathematics. Springer: New York.
Evgeniou, T., Pontil, M., Papageorgiou, C., and Poggio, T. 2000. Image representations for object detection using kernel classifiers. In Proceedings ACCV, Taiwan. To appear.
Evgeniou, T., Pontil, M., and Poggio, T. 1999. A unified framework for Regularization Networks and Support Vector Machines. A.I. Memo No. 1654, Artificial Intelligence Laboratory, Massachusetts Institute of Technology.
Ezzat, T. and Poggio, T. 1996. Facial analysis and synthesis using image-based models. In Face and Gesture Recognition.
Girosi, F., Jones, M., and Poggio, T. 1995. Regularization theory and neural networks architectures. Neural Computation, 7.
Jaakkola, T. and Haussler, D. 1998. Probabilistic kernel regression models. In Proc. of Neural Information Processing Conference.
Kearns, M. and Schapire, R. 1994. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and Systems Sciences, 48(3).
Mohan, A. 1999. Robust object detection in images by components. Master's Thesis, Massachusetts Institute of Technology.
Osuna, E., Freund, R., and Girosi, F. 1997. An improved training algorithm for support vector machines.
In IEEE Workshop on Neural Networks and Signal Processing, Amelia Island, FL.
Papageorgiou, C., Oren, M., and Poggio, T. 1998. A general framework for object detection. In Proceedings of the International Conference on Computer Vision, Bombay, India.
Platt, J.C. 1998. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research.
Tikhonov, A.N. and Arsenin, V.Y. 1977. Solutions of Ill-posed Problems. W.H. Winston: Washington, D.C.
Vapnik, V.N. 1998. Statistical Learning Theory. Wiley: New York.
Vapnik, V.N. and Chervonenkis, A.Y. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 17(2).
Wahba, G. 1990. Spline Models for Observational Data. Vol. 59, Series in Applied Mathematics: Philadelphia.


More information

AI Memo AIM Permutation Tests for Classification

AI Memo AIM Permutation Tests for Classification AI Memo AIM-2003-09 Permutation Tests for Cassification Sayan Mukherjee Whitehead/MIT Center for Genome Research Center for Bioogica and Computationa Learning Massachusetts Institute of Technoogy Cambridge,

More information

Some Properties of Regularized Kernel Methods

Some Properties of Regularized Kernel Methods Journa of Machine Learning Research 5 (2004) 1363 1390 Submitted 12/03; Revised 7/04; Pubished 10/04 Some Properties of Reguarized Kerne Methods Ernesto De Vito Dipartimento di Matematica Università di

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Schoo of Computer Science Probabiistic Graphica Modes Gaussian graphica modes and Ising modes: modeing networks Eric Xing Lecture 0, February 0, 07 Reading: See cass website Eric Xing @ CMU, 005-07 Network

More information

Bourgain s Theorem. Computational and Metric Geometry. Instructor: Yury Makarychev. d(s 1, s 2 ).

Bourgain s Theorem. Computational and Metric Geometry. Instructor: Yury Makarychev. d(s 1, s 2 ). Bourgain s Theorem Computationa and Metric Geometry Instructor: Yury Makarychev 1 Notation Given a metric space (X, d) and S X, the distance from x X to S equas d(x, S) = inf d(x, s). s S The distance

More information

arxiv: v1 [cs.lg] 31 Oct 2017

arxiv: v1 [cs.lg] 31 Oct 2017 ACCELERATED SPARSE SUBSPACE CLUSTERING Abofaz Hashemi and Haris Vikao Department of Eectrica and Computer Engineering, University of Texas at Austin, Austin, TX, USA arxiv:7.26v [cs.lg] 3 Oct 27 ABSTRACT

More information

Distributed average consensus: Beyond the realm of linearity

Distributed average consensus: Beyond the realm of linearity Distributed average consensus: Beyond the ream of inearity Usman A. Khan, Soummya Kar, and José M. F. Moura Department of Eectrica and Computer Engineering Carnegie Meon University 5 Forbes Ave, Pittsburgh,

More information

International Journal "Information Technologies & Knowledge" Vol.5, Number 1,

International Journal Information Technologies & Knowledge Vol.5, Number 1, Internationa Journa "Information Tecnoogies & Knowedge" Vo.5, Number, 0 5 EVOLVING CASCADE NEURAL NETWORK BASED ON MULTIDIMESNIONAL EPANECHNIKOV S KERNELS AND ITS LEARNING ALGORITHM Yevgeniy Bodyanskiy,

More information

Separation of Variables and a Spherical Shell with Surface Charge

Separation of Variables and a Spherical Shell with Surface Charge Separation of Variabes and a Spherica She with Surface Charge In cass we worked out the eectrostatic potentia due to a spherica she of radius R with a surface charge density σθ = σ cos θ. This cacuation

More information

Theory of Generalized k-difference Operator and Its Application in Number Theory

Theory of Generalized k-difference Operator and Its Application in Number Theory Internationa Journa of Mathematica Anaysis Vo. 9, 2015, no. 19, 955-964 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/10.12988/ijma.2015.5389 Theory of Generaized -Difference Operator and Its Appication

More information

CONVERGENCE RATES OF COMPACTLY SUPPORTED RADIAL BASIS FUNCTION REGULARIZATION

CONVERGENCE RATES OF COMPACTLY SUPPORTED RADIAL BASIS FUNCTION REGULARIZATION Statistica Sinica 16(2006), 425-439 CONVERGENCE RATES OF COMPACTLY SUPPORTED RADIAL BASIS FUNCTION REGULARIZATION Yi Lin and Ming Yuan University of Wisconsin-Madison and Georgia Institute of Technoogy

More information

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Brandon Maone Department of Computer Science University of Hesini February 18, 2014 Abstract This document derives, in excrutiating

More information

C. Fourier Sine Series Overview

C. Fourier Sine Series Overview 12 PHILIP D. LOEWEN C. Fourier Sine Series Overview Let some constant > be given. The symboic form of the FSS Eigenvaue probem combines an ordinary differentia equation (ODE) on the interva (, ) with a

More information

221B Lecture Notes Notes on Spherical Bessel Functions

221B Lecture Notes Notes on Spherical Bessel Functions Definitions B Lecture Notes Notes on Spherica Besse Functions We woud ike to sove the free Schrödinger equation [ h d r R(r) = h k R(r). () m r dr r m R(r) is the radia wave function ψ( x) = R(r)Y m (θ,

More information

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value ISSN 1746-7659, Engand, UK Journa of Information and Computing Science Vo. 1, No. 4, 017, pp.91-303 Converting Z-number to Fuzzy Number using Fuzzy Expected Vaue Mahdieh Akhbari * Department of Industria

More information

Research Article Numerical Range of Two Operators in Semi-Inner Product Spaces

Research Article Numerical Range of Two Operators in Semi-Inner Product Spaces Abstract and Appied Anaysis Voume 01, Artice ID 846396, 13 pages doi:10.1155/01/846396 Research Artice Numerica Range of Two Operators in Semi-Inner Product Spaces N. K. Sahu, 1 C. Nahak, 1 and S. Nanda

More information

PREDICTION OF DEFORMED AND ANNEALED MICROSTRUCTURES USING BAYESIAN NEURAL NETWORKS AND GAUSSIAN PROCESSES

PREDICTION OF DEFORMED AND ANNEALED MICROSTRUCTURES USING BAYESIAN NEURAL NETWORKS AND GAUSSIAN PROCESSES PREDICTION OF DEFORMED AND ANNEALED MICROSTRUCTURES USING BAYESIAN NEURAL NETWORKS AND GAUSSIAN PROCESSES C.A.L. Baier-Jones, T.J. Sabin, D.J.C. MacKay, P.J. Withers Department of Materias Science and

More information

VALIDATED CONTINUATION FOR EQUILIBRIA OF PDES

VALIDATED CONTINUATION FOR EQUILIBRIA OF PDES SIAM J. NUMER. ANAL. Vo. 0, No. 0, pp. 000 000 c 200X Society for Industria and Appied Mathematics VALIDATED CONTINUATION FOR EQUILIBRIA OF PDES SARAH DAY, JEAN-PHILIPPE LESSARD, AND KONSTANTIN MISCHAIKOW

More information

arxiv: v1 [math.ca] 6 Mar 2017

arxiv: v1 [math.ca] 6 Mar 2017 Indefinite Integras of Spherica Besse Functions MIT-CTP/487 arxiv:703.0648v [math.ca] 6 Mar 07 Joyon K. Boomfied,, Stephen H. P. Face,, and Zander Moss, Center for Theoretica Physics, Laboratory for Nucear

More information

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION Hsiao-Chang Chen Dept. of Systems Engineering University of Pennsyvania Phiadephia, PA 904-635, U.S.A. Chun-Hung Chen

More information

Melodic contour estimation with B-spline models using a MDL criterion

Melodic contour estimation with B-spline models using a MDL criterion Meodic contour estimation with B-spine modes using a MDL criterion Damien Loive, Ney Barbot, Oivier Boeffard IRISA / University of Rennes 1 - ENSSAT 6 rue de Kerampont, B.P. 80518, F-305 Lannion Cedex

More information

4 1-D Boundary Value Problems Heat Equation

4 1-D Boundary Value Problems Heat Equation 4 -D Boundary Vaue Probems Heat Equation The main purpose of this chapter is to study boundary vaue probems for the heat equation on a finite rod a x b. u t (x, t = ku xx (x, t, a < x < b, t > u(x, = ϕ(x

More information

On the Goal Value of a Boolean Function

On the Goal Value of a Boolean Function On the Goa Vaue of a Booean Function Eric Bach Dept. of CS University of Wisconsin 1210 W. Dayton St. Madison, WI 53706 Lisa Heerstein Dept of CSE NYU Schoo of Engineering 2 Metrotech Center, 10th Foor

More information

Data Mining Technology for Failure Prognostic of Avionics

Data Mining Technology for Failure Prognostic of Avionics IEEE Transactions on Aerospace and Eectronic Systems. Voume 38, #, pp.388-403, 00. Data Mining Technoogy for Faiure Prognostic of Avionics V.A. Skormin, Binghamton University, Binghamton, NY, 1390, USA

More information

arxiv: v1 [cs.db] 1 Aug 2012

arxiv: v1 [cs.db] 1 Aug 2012 Functiona Mechanism: Regression Anaysis under Differentia Privacy arxiv:208.029v [cs.db] Aug 202 Jun Zhang Zhenjie Zhang 2 Xiaokui Xiao Yin Yang 2 Marianne Winsett 2,3 ABSTRACT Schoo of Computer Engineering

More information

MINIMAX PROBABILITY MACHINE (MPM) is a

MINIMAX PROBABILITY MACHINE (MPM) is a Efficient Minimax Custering Probabiity Machine by Generaized Probabiity Product Kerne Haiqin Yang, Kaizhu Huang, Irwin King and Michae R. Lyu Abstract Minimax Probabiity Machine (MPM), earning a decision

More information

Trainable fusion rules. I. Large sample size case

Trainable fusion rules. I. Large sample size case Neura Networks 19 (2006) 1506 1516 www.esevier.com/ocate/neunet Trainabe fusion rues. I. Large sampe size case Šarūnas Raudys Institute of Mathematics and Informatics, Akademijos 4, Vinius 08633, Lithuania

More information

Combining reaction kinetics to the multi-phase Gibbs energy calculation

Combining reaction kinetics to the multi-phase Gibbs energy calculation 7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation

More information

Fast Blind Recognition of Channel Codes

Fast Blind Recognition of Channel Codes Fast Bind Recognition of Channe Codes Reza Moosavi and Erik G. Larsson Linköping University Post Print N.B.: When citing this work, cite the origina artice. 213 IEEE. Persona use of this materia is permitted.

More information

Testing for the Existence of Clusters

Testing for the Existence of Clusters Testing for the Existence of Custers Caudio Fuentes and George Casea University of Forida November 13, 2008 Abstract The detection and determination of custers has been of specia interest, among researchers

More information

Uniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete

Uniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete Uniprocessor Feasibiity of Sporadic Tasks with Constrained Deadines is Strongy conp-compete Pontus Ekberg and Wang Yi Uppsaa University, Sweden Emai: {pontus.ekberg yi}@it.uu.se Abstract Deciding the feasibiity

More information

Partial permutation decoding for MacDonald codes

Partial permutation decoding for MacDonald codes Partia permutation decoding for MacDonad codes J.D. Key Department of Mathematics and Appied Mathematics University of the Western Cape 7535 Bevie, South Africa P. Seneviratne Department of Mathematics

More information

Sparse Semi-supervised Learning Using Conjugate Functions

Sparse Semi-supervised Learning Using Conjugate Functions Journa of Machine Learning Research (200) 2423-2455 Submitted 2/09; Pubished 9/0 Sparse Semi-supervised Learning Using Conjugate Functions Shiiang Sun Department of Computer Science and Technoogy East

More information

STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION

STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION Journa of Sound and Vibration (996) 98(5), 643 65 STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM G. ERDOS AND T. SINGH Department of Mechanica and Aerospace Engineering, SUNY at Buffao,

More information

Algorithms to solve massively under-defined systems of multivariate quadratic equations

Algorithms to solve massively under-defined systems of multivariate quadratic equations Agorithms to sove massivey under-defined systems of mutivariate quadratic equations Yasufumi Hashimoto Abstract It is we known that the probem to sove a set of randomy chosen mutivariate quadratic equations

More information

Nonlinear Gaussian Filtering via Radial Basis Function Approximation

Nonlinear Gaussian Filtering via Radial Basis Function Approximation 51st IEEE Conference on Decision and Contro December 10-13 01 Maui Hawaii USA Noninear Gaussian Fitering via Radia Basis Function Approximation Huazhen Fang Jia Wang and Raymond A de Caafon Abstract This

More information

Learning Fully Observed Undirected Graphical Models

Learning Fully Observed Undirected Graphical Models Learning Fuy Observed Undirected Graphica Modes Sides Credit: Matt Gormey (2016) Kayhan Batmangheich 1 Machine Learning The data inspires the structures we want to predict Inference finds {best structure,

More information

A Simple and Efficient Algorithm of 3-D Single-Source Localization with Uniform Cross Array Bing Xue 1 2 a) * Guangyou Fang 1 2 b and Yicai Ji 1 2 c)

A Simple and Efficient Algorithm of 3-D Single-Source Localization with Uniform Cross Array Bing Xue 1 2 a) * Guangyou Fang 1 2 b and Yicai Ji 1 2 c) A Simpe Efficient Agorithm of 3-D Singe-Source Locaization with Uniform Cross Array Bing Xue a * Guangyou Fang b Yicai Ji c Key Laboratory of Eectromagnetic Radiation Sensing Technoogy, Institute of Eectronics,

More information

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel Sequentia Decoding of Poar Codes with Arbitrary Binary Kerne Vera Miosavskaya, Peter Trifonov Saint-Petersburg State Poytechnic University Emai: veram,petert}@dcn.icc.spbstu.ru Abstract The probem of efficient

More information

Physics 235 Chapter 8. Chapter 8 Central-Force Motion

Physics 235 Chapter 8. Chapter 8 Central-Force Motion Physics 35 Chapter 8 Chapter 8 Centra-Force Motion In this Chapter we wi use the theory we have discussed in Chapter 6 and 7 and appy it to very important probems in physics, in which we study the motion

More information

Symbolic models for nonlinear control systems using approximate bisimulation

Symbolic models for nonlinear control systems using approximate bisimulation Symboic modes for noninear contro systems using approximate bisimuation Giordano Poa, Antoine Girard and Pauo Tabuada Abstract Contro systems are usuay modeed by differentia equations describing how physica

More information

Convolutional Networks 2: Training, deep convolutional networks

Convolutional Networks 2: Training, deep convolutional networks Convoutiona Networks 2: Training, deep convoutiona networks Hakan Bien Machine Learning Practica MLP Lecture 8 30 October / 6 November 2018 MLP Lecture 8 / 30 October / 6 November 2018 Convoutiona Networks

More information

A Novel Learning Method for Elman Neural Network Using Local Search

A Novel Learning Method for Elman Neural Network Using Local Search Neura Information Processing Letters and Reviews Vo. 11, No. 8, August 2007 LETTER A Nove Learning Method for Eman Neura Networ Using Loca Search Facuty of Engineering, Toyama University, Gofuu 3190 Toyama

More information

A note on the generalization performance of kernel classifiers with margin. Theodoros Evgeniou and Massimiliano Pontil

A note on the generalization performance of kernel classifiers with margin. Theodoros Evgeniou and Massimiliano Pontil MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES A.I. Memo No. 68 November 999 C.B.C.L

More information

Paragraph Topic Classification

Paragraph Topic Classification Paragraph Topic Cassification Eugene Nho Graduate Schoo of Business Stanford University Stanford, CA 94305 enho@stanford.edu Edward Ng Department of Eectrica Engineering Stanford University Stanford, CA

More information

(This is a sample cover image for this issue. The actual cover is not yet available at this time.)

(This is a sample cover image for this issue. The actual cover is not yet available at this time.) (This is a sampe cover image for this issue The actua cover is not yet avaiabe at this time) This artice appeared in a journa pubished by Esevier The attached copy is furnished to the author for interna

More information

Adaptive Regularization for Transductive Support Vector Machine

Adaptive Regularization for Transductive Support Vector Machine Adaptive Reguarization for Transductive Support Vector Machine Zengin Xu Custer MMCI Saarand Univ. & MPI INF Saarbrucken, Germany zxu@mpi-inf.mpg.de Rong Jin Computer Sci. & Eng. Michigan State Univ. East

More information

A Comparison Study of the Test for Right Censored and Grouped Data

A Comparison Study of the Test for Right Censored and Grouped Data Communications for Statistica Appications and Methods 2015, Vo. 22, No. 4, 313 320 DOI: http://dx.doi.org/10.5351/csam.2015.22.4.313 Print ISSN 2287-7843 / Onine ISSN 2383-4757 A Comparison Study of the

More information

Global sensitivity analysis using low-rank tensor approximations

Global sensitivity analysis using low-rank tensor approximations Goba sensitivity anaysis using ow-rank tensor approximations K. Konaki 1 and B. Sudret 1 1 Chair of Risk, Safety and Uncertainty Quantification, arxiv:1605.09009v1 [stat.co] 29 May 2016 ETH Zurich, Stefano-Franscini-Patz

More information

Control Chart For Monitoring Nonparametric Profiles With Arbitrary Design

Control Chart For Monitoring Nonparametric Profiles With Arbitrary Design Contro Chart For Monitoring Nonparametric Profies With Arbitrary Design Peihua Qiu 1 and Changiang Zou 2 1 Schoo of Statistics, University of Minnesota, USA 2 LPMC and Department of Statistics, Nankai

More information

Chapter 7 PRODUCTION FUNCTIONS. Copyright 2005 by South-Western, a division of Thomson Learning. All rights reserved.

Chapter 7 PRODUCTION FUNCTIONS. Copyright 2005 by South-Western, a division of Thomson Learning. All rights reserved. Chapter 7 PRODUCTION FUNCTIONS Copyright 2005 by South-Western, a division of Thomson Learning. A rights reserved. 1 Production Function The firm s production function for a particuar good (q) shows the

More information

A Statistical Framework for Real-time Event Detection in Power Systems

A Statistical Framework for Real-time Event Detection in Power Systems 1 A Statistica Framework for Rea-time Event Detection in Power Systems Noan Uhrich, Tim Christman, Phiip Swisher, and Xichen Jiang Abstract A quickest change detection (QCD) agorithm is appied to the probem

More information

Active Learning & Experimental Design

Active Learning & Experimental Design Active Learning & Experimenta Design Danie Ting Heaviy modified, of course, by Lye Ungar Origina Sides by Barbara Engehardt and Aex Shyr Lye Ungar, University of Pennsyvania Motivation u Data coection

More information

Approach to Identifying Raindrop Vibration Signal Detected by Optical Fiber

Approach to Identifying Raindrop Vibration Signal Detected by Optical Fiber Sensors & Transducers, o. 6, Issue, December 3, pp. 85-9 Sensors & Transducers 3 by IFSA http://www.sensorsporta.com Approach to Identifying Raindrop ibration Signa Detected by Optica Fiber ongquan QU,

More information

Maximum likelihood decoding of trellis codes in fading channels with no receiver CSI is a polynomial-complexity problem

Maximum likelihood decoding of trellis codes in fading channels with no receiver CSI is a polynomial-complexity problem 1 Maximum ikeihood decoding of treis codes in fading channes with no receiver CSI is a poynomia-compexity probem Chun-Hao Hsu and Achieas Anastasopouos Eectrica Engineering and Computer Science Department

More information

arxiv: v1 [cs.lg] 23 Aug 2018

arxiv: v1 [cs.lg] 23 Aug 2018 Muticass Universum SVM Sauptik Dhar 1 Vadimir Cherkassky 2 Mohak Shah 1 3 arxiv:1808.08111v1 [cs.lg] 23 Aug 2018 Abstract We introduce Universum earning for muticass probems and propose a nove formuation

More information

Integrating Factor Methods as Exponential Integrators

Integrating Factor Methods as Exponential Integrators Integrating Factor Methods as Exponentia Integrators Borisav V. Minchev Department of Mathematica Science, NTNU, 7491 Trondheim, Norway Borko.Minchev@ii.uib.no Abstract. Recenty a ot of effort has been

More information