MAXIMUM A POSTERIORI TRANSDUCTION
LI-WEI WANG, JU-FU FENG

School of Mathematical Sciences, Peking University, Beijing, 100871, China
Center for Information Sciences, Peking University, Beijing, 100871, China
E-MAIL: {wanglw, fjf}@cis.pku.edu.cn

Abstract:

Transduction deals with the problem of estimating the values of a function at given points (called working samples) from a set of training samples. This paper proposes a maximum a posteriori (MAP) scheme for the transduction. The probability measure defined for the estimation is induced by the code length of the prediction error and of the model with respect to some coding system. The ideal MAP transduction essentially minimizes the so-called stochastic complexity. Approximations to the ideal MAP transduction are also addressed, in which one or multiple models of the function are estimated as well as the values at the working samples. This work investigates, for both pattern classification and regression, under what conditions the approximated MAP transduction is better than the traditional induction, which learns models from the training samples and then computes the values at the given points. An analysis of whether the working samples compress the description length of the model is also presented: for some coding systems they do, for others they do not. For fairness a universal coding system should be adopted, but then the probabilities involved are not recursively computable.

Keywords: Transduction; maximum a posteriori; minimum description length; stochastic complexity.

1. Introduction

Transduction [1] deals with the problem of estimating the values of a functional dependency at given points $x_{n+1}, x_{n+2}, \ldots, x_{n+k}$ (called the working sample) from a set of training samples $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$. In the traditional induction framework, one learns models from the training sample and then computes the values at the working sample, but in some cases this is not the best approach. The concept of transduction was introduced by Vapnik [1]. The background philosophy is that the solution of a relatively simple problem (estimating values at given points) should not depend on the solution of a substantially more complex problem (learning a model). Transduction has been applied to text classification with SVMs [2].

This paper proposes a maximum a posteriori (MAP) framework for the transduction. The probability measure we define for the estimation is induced by the code length of the prediction error and of the model with respect to some coding system. The ideal MAP transduction essentially selects $y_{n+1}, y_{n+2}, \ldots, y_{n+k}$ so as to minimize the stochastic complexity [3]. In some applications one wants to estimate not only the values at the given points but also an approximate model, or multiple models, of the underlying functional dependency. Here we use the word "model" instead of "function" to a) distinguish it from the real functional dependency between x and y, and b) emphasize that the model has to be chosen from some model class; but it should be clear that when we say a model we mean a function, so if M is a model, M(x) is the value of M at x. The MAP approach can be modified to solve such problems. We show that the induction may be equivalent to the MAP transduction if just one model is considered, but that it can hardly achieve the maximal posterior probability when multiple models are estimated simultaneously. Both classification (indicator functions) and regression (real-valued functions) are addressed; it turns out that they are inherently different in the MAP transduction.

It is widely believed that the improved performance over the induction is due to the information contained in the working samples [4], [5]. We investigate whether the working sample can compress the description length of the model.
For pattern classification the answer is positive with respect to some coding systems and negative for others. The positive answer cannot be extended to regression problems directly, because $(y_{n+1}, \ldots, y_{n+k})$ takes continuous values. It should be mentioned that Bayesian transduction has been suggested in [6], but we differ from it not only in the probability measure but also in what the probabilities are defined over.
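Before the formal development, a minimal sketch may help fix the two estimation routes. The threshold model class, the data, and every name below are hypothetical illustrations introduced here for concreteness, not constructs from the paper:

```python
import numpy as np
from itertools import product

# Training sample (x_1, y_1), ..., (x_n, y_n) and working sample x_{n+1}, ..., x_{n+k}.
X_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([-1, -1, +1, +1])
X_work = np.array([1.4, 2.6])

# A one-parameter model class: threshold classifiers M_t(x) = sign(x - t).
THRESHOLDS = np.linspace(-0.5, 3.5, 41)

def errors(t, X, y):
    """Number of errors of M_t on the sample (X, y)."""
    return int(np.sum(np.where(X - t > 0, 1, -1) != y))

def induce_then_predict(X, y, X_new):
    """Induction: fit a model on the training sample alone, then evaluate it."""
    t_hat = min(THRESHOLDS, key=lambda t: errors(t, X, y))
    return np.where(X_new - t_hat > 0, 1, -1)

def transduce(X, y, X_new):
    """Transduction: search directly over labelings of the working sample,
    scoring each labeling by the best joint fit any single model admits."""
    def joint_cost(labels):
        return min(errors(t, X, y) + errors(t, X_new, np.array(labels))
                   for t in THRESHOLDS)
    return np.array(min(product([-1, +1], repeat=len(X_new)), key=joint_cost))

print(induce_then_predict(X_train, y_train, X_work))  # labels for X_work
print(transduce(X_train, y_train, X_work))            # labels for X_work
```

The inductive route commits to one model before looking at the working inputs; the transductive route scores labelings of the working sample directly, which is the object the MAP scheme below formalizes.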
2. Code Length and the Probability Measure

When we deal with data in an application, a probability distribution is usually assumed for them. The probability measures defined in this paper have no relation to this natural randomness: all the data are seen as numbers (or vectors), and so are the models.

Let M be a model. We shall first define the probability $P(y \mid x, M)$. Assume $L(y, M(x))$ is a preassigned loss function that measures the distance between y and M(x). Some examples are:

$$L(y, M(x)) = (y - M(x))^2, \qquad (1)$$

$$L(y, M(x)) = \begin{cases} 0, & y = M(x), \\ 1, & \text{otherwise}, \end{cases} \qquad (2)$$

$$L(y, M(x)) = \max(0,\, 1 - yM(x)), \qquad (3)$$

where (3) is equivalent to

$$L(y, M(x)) = \xi \quad \text{s.t.} \quad yM(x) \ge 1 - \xi,\ \xi \ge 0. \qquad (4)$$

Define

$$P(y \mid x, M) = K_M\, e^{-L(y, M(x))}, \qquad (5)$$

where $K_M$ is a normalization coefficient. We point out again that (5) does not attribute any randomness to x, y or M. To explain what this probability measure is, note that by the Kraft inequality [3] there exists a prefix code encoding $L(y, M(x))$ that satisfies

$$P(y \mid x, M) = 2^{-c(L(y, M(x)))}, \qquad (6)$$

where $c(L(y, M(x)))$ is the code length of $L(y, M(x))$. If $y - M(x)$ takes continuous values, the probability so defined is a density function whenever $L(y, M(x))$ is a distance measure, and we can still encode it after some appropriate quantization. So $P(y \mid x, M)$ represents the description length of y given x and M, with respect to the loss function. This result extends naturally to a set of independent pairs $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$. In the rest of the paper we shall always let X represent a set of x's, and similarly Y.

Next we define $P(M \mid X)$ in the following way. Suppose we have agreed on some coding system (a decoding function). Given X, the model M (possibly after some quantization) can be decoded from X together with a string s. We denote the decoding function by

$$M = d(X, s). \qquad (7)$$

Define

$$P(M \mid X) = K \cdot 2^{-c(s^*)}, \qquad (8)$$

where $s^*$ is the shortest string such that $M = d(X, s^*)$ and K is a normalization coefficient. $P(M \mid X)$ reduces to $P(M)$ when no X is given. It should be pointed out that we have to impose some restriction on M, namely that M is chosen from some class of models C, since otherwise

$$\sum_{M} 2^{-c(s^*_M)} = \infty, \qquad (9)$$

and no non-zero K can normalize (8). $P(M \mid X)$ represents how many bits are needed to describe M given X with respect to the coding system.

$P(Y \mid X, M)$ and $P(M)$ are often used, explicitly or implicitly, in many applications: maximization of $P(Y \mid X, M)$ is minimization of the empirical risk, and $P(M)$ may be determined by the number of free parameters in M. These probability measures, properly employed, can give satisfactory estimations whatever the underlying distribution of (x, y) is; a well-known result due to Vapnik [1] is the consistency of Structural Risk Minimization (SRM).

It can easily be checked that the following two probability measures are well defined:

$$P(Y, M \mid X) = P(Y \mid X, M)\, P(M \mid X), \qquad (10)$$

$$P(Y \mid X) = \sum_{M} P(Y, M \mid X). \qquad (11)$$

$P(Y, M \mid X)$ represents the code length needed to describe Y and M given X. This is a two-part code, the first part for the prediction error and the second for the model, but it is not the best coding scheme, since there is redundancy in coding the model. It is $P(Y \mid X)$ that reveals the bits necessary to describe Y given X; the code length is, by information theory, $-\log P(Y \mid X)$. This is essentially the stochastic complexity introduced by Rissanen [3]. Indeed, to define these probability measures the data cannot be considered as random variables with a joint probability P(X, Y); otherwise $P(Y \mid X, M)$ would not depend on M at all. This is where we differ from other work on Bayesian transduction [6].
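To make these definitions concrete, here is a minimal Python sketch of the two-part code length (10) and the stochastic complexity (11) on a toy finite model class; the class, the uniform stand-in for $P(M \mid X)$, and all names are assumptions made for illustration only:

```python
import numpy as np

# A toy finite model class C: three threshold classifiers M_t(x) = sign(x - t).
MODELS = [0.5, 1.5, 2.5]

def loss(y, m_x):
    """The 0-1 loss (2): 0 if y = M(x), 1 otherwise."""
    return 0.0 if y == m_x else 1.0

def p_y(y, x, t):
    """Eq. (5): P(y|x,M) = K_M exp(-L(y, M(x))), with K_M normalizing
    over the two possible labels y in {-1, +1}."""
    m_x = 1 if x - t > 0 else -1
    z = sum(np.exp(-loss(v, m_x)) for v in (-1, +1))
    return np.exp(-loss(y, m_x)) / z

def p_Y(Y, X, t):
    """Independent pairs: P(Y|X,M) is a product over the sample."""
    return float(np.prod([p_y(y, x, t) for x, y in zip(X, Y)]))

X = [0.0, 1.0, 2.0, 3.0]
Y = [-1, -1, +1, +1]
p_m = 1.0 / len(MODELS)   # uniform stand-in for the code-length prior P(M|X)

# Two-part code length (10): -log2 P(Y|X,M) - log2 P(M|X), for each model.
two_part = {t: -np.log2(p_Y(Y, X, t)) - np.log2(p_m) for t in MODELS}

# Stochastic complexity (11): -log2 sum_M P(Y|X,M) P(M|X).
sc = -np.log2(sum(p_Y(Y, X, t) * p_m for t in MODELS))

print(two_part)                    # bits of the two-part code per model
print(sc, min(two_part.values()))  # (11) never exceeds the best two-part code
```

Since the sum in (11) is at least as large as its largest term, the stochastic complexity never exceeds the best two-part code length; this is the redundancy remark above in code form.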
3. MAP Estimation

Let $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ be the training sample, $x_{n+1}, x_{n+2}, \ldots, x_{n+k}$ the working sample, and $y_{n+1}, y_{n+2}, \ldots, y_{n+k}$ the corresponding values. To simplify the notation, denote

$$X = (x_1, \ldots, x_n), \quad Y = (y_1, \ldots, y_n), \quad X^* = (x_{n+1}, \ldots, x_{n+k}), \quad Y^* = (y_{n+1}, \ldots, y_{n+k}). \qquad (12)$$

The MAP estimation of $Y^*$ is:

$$\hat{Y}^* = \arg\max_{Y^*} P(Y^*, Y \mid X, X^*) = \arg\max_{Y^*} \sum_{M \in C} P(Y^*, Y \mid X, X^*, M)\, P(M \mid X, X^*). \qquad (13)$$

The ideal MAP transduction gives no model of the underlying functional dependency. What is more, it is intractable if the class C contains infinitely many models. We next look at an approximation of (13). In some applications one cares about the underlying functional dependency as well as $Y^*$; that is, one selects a model M from a class C and estimates $Y^*$ simultaneously. The corresponding MAP estimator is:

$$(\hat{Y}^*, \hat{M}) = \arg\max_{Y^*,\, M \in C} P(Y^*, Y, M \mid X, X^*) = \arg\max_{Y^*,\, M \in C} P(Y \mid X, M)\, P(Y^* \mid X^*, M)\, P(M \mid X, X^*). \qquad (14)$$

Applying the MAP to the induction, with $\hat{Y}^*$ and $\hat{M}$ denoting the estimators, we have:

$$\hat{M} = \arg\max_{M} P(Y \mid X, M)\, P(M \mid X), \qquad \hat{Y}^* = \arg\max_{Y^*} P(Y^* \mid X^*, \hat{M}). \qquad (15)$$

In fact, most applications do not employ $P(M \mid X)$ but $P(M)$ instead; we will discuss the difference in the next section.

We are interested in whether the MAP transduction is always better than the induction. From (14) and (15), if

$$P(M \mid X, X^*) = P(M \mid X) \qquad (16)$$

and

$$\max_{Y^*} P(Y^* \mid X^*, M) = f(X^*), \qquad (17)$$

where $f(X^*)$ is a function independent of M, then the two estimations are identical. Condition (16) means that $X^*$ does not compress the code length of M given X; we investigate it in the next section. For most regression problems each component of $Y^*$ can take an arbitrary value and the loss can be minimized to zero, so $\max_{Y^*} P(Y^* \mid X^*, M)$ does not depend on M (e.g., least squares regression as in (1)). The situation is different for pattern classification. There, M is a classifier, such as $M(x) = w^T x + b$, which can take continuous values, but $y \in \{-1, +1\}$, so $\max_{Y^*} P(Y^* \mid X^*, M)$ depends on M (as with (3)). In this case the MAP transduction may outperform the induction; the advantage is due to the type mismatch between M(x) and y. This is not true, however, for classifiers with $M(x) \in \{-1, +1\}$.

The MAP estimation with one model as described above essentially finds a minimal-length two-part code. There is another approximation to the ideal MAP transduction: estimating multiple independent models $M_1, \ldots, M_s$ from classes $C_1, \ldots, C_s$ simultaneously, as well as $Y^*$:

$$(\hat{Y}^*, \hat{M}_1, \ldots, \hat{M}_s) = \arg\max_{Y^*,\, M_i \in C_i} P(Y^*, Y, M_1, \ldots, M_s \mid X, X^*) = \arg\max_{Y^*,\, M_i \in C_i} P(Y^*, Y \mid X, X^*, M_1, \ldots, M_s)\, P(M_1, \ldots, M_s \mid X, X^*). \qquad (18)$$

Applying the MAP to the induction:

$$\hat{M}_i = \arg\max_{M_i} P(Y \mid X, M_i)\, P(M_i \mid X), \qquad \hat{Y}^* = \phi(\hat{M}_1(X^*), \ldots, \hat{M}_s(X^*)), \qquad (19)$$

where $\phi$ is a function mixing $\hat{M}_1(X^*), \ldots, \hat{M}_s(X^*)$ (e.g., voting). Again we analyze whether the MAP transduction and the induction are equivalent. Setting the difference between $P(M \mid X, X^*)$ and $P(M \mid X)$ aside: in (19) the $\hat{M}_1, \ldots, \hat{M}_s$ are estimated separately, so they are independent of one another, while in (18) they are closely related, since $Y^*$ is involved. This argument suggests that the MAP transduction with multiple models may always be better than the induction, for both classification and regression.
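A brute-force sketch of the single-model estimators (14) and (15) for binary classification follows; the finite threshold class, the hinge loss (3) as the training criterion, and the uniform $P(M \mid X, X^*)$ are hypothetical simplifications, not choices prescribed by the paper:

```python
import numpy as np
from itertools import product

MODELS = np.linspace(-1.0, 4.0, 21)        # finite model class C (hypothetical)

def log_p(Y, X, t):
    """log P(Y|X,M) up to the normalizer: minus the summed hinge losses (3),
    with the real-valued classifier M(x) = x - t."""
    return -sum(max(0.0, 1.0 - y * (x - t)) for x, y in zip(X, Y))

X, Y = [0.0, 1.0, 2.0, 3.0], [-1, -1, +1, +1]
X_star = [1.4, 2.6]                        # working sample X*

# Induction (15): choose M from the training sample only, then predict.
t_ind = max(MODELS, key=lambda t: log_p(Y, X, t))
Y_ind = [1 if x - t_ind > 0 else -1 for x in X_star]

# MAP transduction (14): maximize P(Y|X,M) P(Y*|X*,M) jointly over the
# labeling Y* and the model M; a uniform P(M|X,X*) drops out of the argmax.
Y_map, t_map = max(
    ((Ys, t) for Ys in product([-1, +1], repeat=len(X_star)) for t in MODELS),
    key=lambda p: log_p(Y, X, p[1]) + log_p(list(p[0]), X_star, p[1]),
)

print(Y_ind, float(t_ind))
print(list(Y_map), float(t_map))
```

Because the term $\max_{Y^*} P(Y^* \mid X^*, M)$ under the hinge loss depends on M, the joint maximization in (14) can settle on a different model than (15); under the squared loss (1) the $Y^*$ term could always be driven to zero and the two estimators would coincide, as argued above.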
4. Compression of the Model by the Working Sample

In this section we study whether X provides any help in compressing the model, that is, whether

$$P(M \mid X) = P(M) \qquad (20)$$

and

$$P(M \mid X^*, X) = P(M \mid X) \qquad (21)$$

hold. We analyze (20) only; we think there is little difficulty in extending the following argument to (21). From (14) and (15) we see that if (20) and (21) hold, all the difference between transduction and induction is caused by the type mismatch or by the multiple models involved. For simplicity, we use $X = (x_1, \ldots, x_n)$ instead of $X^*$ in this section.

We have defined $P(M \mid X)$ as proportional to $2^{-c(s^*)}$: M can be decoded from X together with the shortest string $s^*$. For the pattern classification problem, consider the following coding system (a toy sketch of it is given before the references). For an arbitrary M, let the string s be as follows: the first n bits are assigned to $y_1, y_2, \ldots, y_n$, each +1 or -1, and the rest of s is denoted $s_M$. To decode M, one runs some training algorithm, such as an SVM, on $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$. Here the classifier has to be uniquely determined by the training algorithm, because the output of a decoder must be unique; classifiers like perceptrons are therefore not appropriate. At this stage we have the training result, a classifier $M'$; we then use $s_M$ to adjust $M'$ and obtain M.

From the above description, $P(M \mid X)$ depends heavily on X. For appropriate X, at most n extra bits are needed to decode M, which yields $P(M \mid X) \ge K \cdot 2^{-n}$, while without X it is impossible for every such M to have so large a probability. Note that this argument is with respect to a special coding system. For another coding system, e.g. a trivial decoder that does not consider X at all, (20) certainly holds (this coding system is often used implicitly in real applications). For fairness it is necessary to consider a universal coding system, i.e. a universal Turing machine; unfortunately, the two probabilities are then not recursively computable, so nothing can be concluded there.

It should be noted that even in the first coding system, the compression gained is due to the efficiency of encoding $y_1, y_2, \ldots, y_n$ in the task of classification. The argument is no longer valid for regression, since y takes continuous values: the bits needed to encode $y_1, y_2, \ldots, y_n$ tend to infinity as more quantization levels are set on y. Whether there is any coding system that makes X contribute to M is not clear. We strongly suspect that, for least squares regression without any constraint on y, X does not contribute to M with respect to the universal coding system.

5. Conclusion

We propose a MAP scheme for transduction and analyze the ideal MAP transduction as well as two approximations. The ideal MAP transduction is essentially a minimization of the stochastic complexity. The two approximations estimate one and multiple models, respectively, as well as $Y^*$. Transduction for both pattern classification and regression is addressed. For classification, the MAP transduction may outperform the induction due to a) the type mismatch between y and M(x), and b) the fact that for some coding systems $P(M \mid X^*, X) \ne P(M \mid X)$ (though this is not guaranteed with respect to a universal coding system). For regression, if we estimate with respect to a coding system for which $P(M \mid X^*, X) = P(M \mid X)$, the MAP transduction is equivalent to the induction when only one model is estimated, but hardly so when multiple models are estimated simultaneously.

There are two problems with the MAP transduction. a) Computation: even for the simplest case, classification with one model, the MAP estimation is not easy to implement, and approximation methods have to be considered; furthermore, $P(M \mid X^*, X)$ cannot be computed directly, and how to bring this information into the transduction is a problem. b) Optimality: MAP is sometimes not robust to noisy data, and how to improve the MAP transduction in noisy environments remains open.

Acknowledgements

This work is supported by the National Natural Science Foundation of China.
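As an illustration of the coding argument in Section 4, the following sketch assumes a deterministic stand-in for the training algorithm; the routine and all names are hypothetical, standing in for the unique-output learner (e.g., an SVM) that the argument requires:

```python
import numpy as np
from itertools import product

def train_deterministic(X, labels):
    """A deterministic stand-in for a training algorithm: it returns the
    first threshold t on a fixed grid that minimizes training error.
    Determinism is what makes (label bits -> classifier) a valid decoder
    M = d(X, s) in the sense of eq. (7)."""
    grid = np.linspace(min(X), max(X), 101)
    errs = [sum((1 if x - t > 0 else -1) != y for x, y in zip(X, labels))
            for t in grid]
    return float(grid[int(np.argmin(errs))])

# Given the inputs X, any classifier the trainer can output is described
# by the n label bits alone: the code string s is just (y_1, ..., y_n).
X = [0.0, 0.7, 1.9, 3.0]
n = len(X)
reachable = {train_deterministic(X, labels)
             for labels in product([-1, +1], repeat=n)}

# At most 2^n classifiers are decodable from n bits, so each reachable M
# gets P(M|X) >= K * 2^(-n); without X, no decoder could give every such
# M so short a description. This is the compression claimed in Section 4.
print(len(reachable), "classifiers reachable from", n, "label bits")
```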
References

[1] Vapnik, V.: Statistical Learning Theory. Wiley-Interscience (1998).
[2] Joachims, T.: Transductive Inference for Text Classification Using Support Vector Machines. In: International Conference on Machine Learning (ICML) (1999).
[3] Rissanen, J.: Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore (1989).
[4] Blum, A., Mitchell, T.: Combining Labeled and Unlabeled Data with Co-Training. In: Proceedings of the 1998 Conference on Computational Learning Theory (COLT) (1998).
[5] Nigam, K., McCallum, A., Thrun, S., Mitchell, T.: Text Classification from Labeled and Unlabeled Documents Using EM. Machine Learning, Vol. 39 (2000) 103-134.
[6] Graepel, T., Herbrich, R., Obermayer, K.: Bayesian Transduction. In: Advances in Neural Information Processing Systems (NIPS) (2000).