Linear Approximation with Regularization and Moving Least Squares


Linear Approximation with Regularization and Moving Least Squares

Igor Grešovnik

May 2007

Revision 4.6 (Revision: March 2004)
Contents:

1. Linear Fitting
   1.1 Weighted Least Squares in Function Approximation
       1.1.1 Solution of overdetermined system of equations by QR decomposition
       1.1.2 Statistical background
   1.2 Weighted Least Squares Approximation of Function Values and Gradients
   1.3 Regularization of the Problem
       1.3.1 Addition of fictitious points
       1.3.2 Addition of minimizing conditions with respect to coefficient size
       1.3.3 Regularization by Adding the Minimizing Conditions with Respect to the Difference in Coefficients Obtained by Related Approximations
   1.4 Comments on Choice of Weights
   1.5 General Weighted Least Squares
2. Low Order Polynomial Approximations
   2.1 Constant and Linear Basis
   2.2 Quadratic Basis
       2.2.1 One dimension
       2.2.2 Two dimensions
       2.2.3 Three dimensions
       2.2.4 Old (alternative) mapping of coefficients
   2.3 Cubic Basis
   2.4 Linear Fitting with Gradient data
   2.5 Quadratic Fitting with Gradient data
3. Moving least squares (MLS) approximation
   3.1 Spatial derivatives of the MLS approximation
   3.2 Approximation with gradient information
4. Normal system of equations
   4.1 Implementation remarks
   4.2 Second order derivatives (normal system)
   4.3 Approximation with values and gradients (normal system)
   4.4 Overdetermined system of equations
5. Appendix
   5.1 Quick reminder
       5.1.1 WLS approximation
       5.1.2 MLS approximation
       5.1.3 Implementation remarks
       5.1.4 Formulas for function gradients
   5.2 Sandbox
Change of notation:

- $N_v$ (formerly $m$): number of points where the approximated function is evaluated
- $N_g$: number of points where the gradient of the approximated function is evaluated
- $N_b$ (formerly $n$): number of basis functions

Use of indices:

- $i$: index of sampling points
- $j$, $k$: indices of components of approximation coefficients, of components of the right-hand side vector, and of components of the system matrix in systems of equations
- $l$, $m$: components of coordinate derivatives
- $t$: components of gradients of the sampled (approximated) function

1. LINEAR FITTING

1.1 Weighted Least Squares in Function Approximation

We have values of some function $f(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^N$, in $N_v$ points:

$$f(\mathbf{x}_i) = y_i, \quad i = 1, \ldots, N_v. \qquad (1)$$

We would like to evaluate the coefficients of a linear combination of $N_b$ functions $f_1(\mathbf{x}), \ldots, f_{N_b}(\mathbf{x})$ such that

$$\tilde f(\mathbf{x}; \mathbf{a}) = a_1 f_1(\mathbf{x}) + a_2 f_2(\mathbf{x}) + \ldots + a_{N_b} f_{N_b}(\mathbf{x}) = \sum_{n=1}^{N_b} a_n f_n(\mathbf{x}), \qquad (2)$$

$$\tilde f(\mathbf{x}_i) \approx f(\mathbf{x}_i) = y_i, \quad i = 1, \ldots, N_v, \qquad (3)$$

i.e. we want the linear approximation to agree as much as possible with the values of $f(\mathbf{x})$ in all points $\mathbf{x}_i$. We look for the best agreement in the weighted least squares sense, i.e. we minimize the function

$$\chi^2(\mathbf{a}) = \phi(\mathbf{a}) = \sum_{i=1}^{N_v} w_i \left( \tilde y(\mathbf{x}_i) - y_i \right)^2 = \sum_{i=1}^{N_v} w_i \left( \sum_j a_j f_j(\mathbf{x}_i) - y_i \right)^2 \qquad (4)$$

with respect to the parameters of approximation $\mathbf{a}$. $\mathbf{w}$ is the $N_v$-dimensional vector of weights, which weight the significance of the points $\mathbf{x}_i$. The minimum is the stationary point of $\phi$ where
$$\frac{d\phi}{da_j} = 0, \quad j = 1, \ldots, N_b. \qquad (5)$$

The derivatives of $\phi$ are

$$\frac{d\phi}{da_j} = 2 \sum_{i=1}^{N_v} w_i \left( \sum_k a_k f_k(\mathbf{x}_i) - y_i \right) f_j(\mathbf{x}_i). \qquad (6)$$

Equation (5) therefore gives the following system of equations for the unknown coefficients $a_k$:

$$\sum_k a_k \sum_{i=1}^{N_v} w_i f_k(\mathbf{x}_i) f_j(\mathbf{x}_i) = \sum_{i=1}^{N_v} w_i\, y_i f_j(\mathbf{x}_i), \quad j = 1, \ldots, N_b. \qquad (7)$$

The coefficients $a_k$ can therefore be obtained by solving the linear system of equations

$$\mathbf{C}\,\mathbf{a} = \mathbf{d}, \qquad (8)$$

where

$$C_{jk} = \sum_{i=1}^{N_v} w_i f_j(\mathbf{x}_i) f_k(\mathbf{x}_i) \qquad (9)$$

and

$$d_j = \sum_{i=1}^{N_v} w_i f_j(\mathbf{x}_i)\, y_i. \qquad (10)$$

The system of equations (8) for the calculation of the approximation coefficients is called the normal system of equations. It can be shown that $\mathbf{C}$ is positive semidefinite. If $\mathbf{C}$ has full rank $n$ then it is positive definite, and the system can be solved by the Cholesky factorization

$$\mathbf{C} = \mathbf{V}^T \mathbf{V} \qquad (11)$$

(where $\mathbf{V}$ is upper triangular), followed by the solution of a lower triangular system,

$$\mathbf{V}^T \mathbf{y} = \mathbf{d}, \qquad (12)$$

and an upper triangular system,

$$\mathbf{V}\,\mathbf{a} = \mathbf{y}. \qquad (13)$$
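As a minimal illustration (not part of the original text: the quadratic sample data, the monomial basis, and all variable names are made up), the normal-equations route (8)-(13) can be sketched in numpy, together with the equivalent solution via QR factorization of the weighted design matrix (introduced in Section 1.1.1 below) for comparison:

```python
import numpy as np

# Hypothetical data: samples of an exactly quadratic function, unit weights,
# basis f_1(x)=1, f_2(x)=x, f_3(x)=x^2.
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x - 0.5 * x**2
w = np.ones_like(x)                          # weights w_i

F = np.vstack([np.ones_like(x), x, x**2]).T  # F[i, j] = f_j(x_i)

# Normal system (8): C a = d with (9), (10), solved by Cholesky (11)-(13).
C = F.T @ (w[:, None] * F)                   # C_jk = sum_i w_i f_j(x_i) f_k(x_i)
d = F.T @ (w * y)                            # d_j  = sum_i w_i f_j(x_i) y_i
V = np.linalg.cholesky(C).T                  # C = V^T V, V upper triangular
z = np.linalg.solve(V.T, d)                  # lower triangular solve V^T z = d
a = np.linalg.solve(V, z)                    # upper triangular solve V a = z

# Equivalent QR route: factor the weighted design matrix instead of forming C,
# which avoids squaring the condition number.
A = np.sqrt(w)[:, None] * F                  # A_ij = sqrt(w_i) f_j(x_i)
b = np.sqrt(w) * y                           # b_i  = sqrt(w_i) y_i
Q, R = np.linalg.qr(A)                       # reduced QR; R plays the role of V (up to signs)
a_qr = np.linalg.solve(R, Q.T @ b)

print(a, a_qr)                               # both ~ [1.0, 2.0, -0.5]
```

In library code one would typically use `scipy.linalg.cho_factor`/`cho_solve` or `numpy.linalg.lstsq` instead of the explicit triangular solves; the explicit form above keeps the correspondence with (12)-(13) visible.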
1.1.1 Solution of the overdetermined system of equations by QR decomposition

Here we point at the relation between the least squares formulation (4), (8) and the direct solution of the overdetermined system of equations (3). We introduce a matrix $\mathbf{A}$ and a vector $\mathbf{b}$ such that

$$A_{ij} = \sqrt{w_i}\, f_j(\mathbf{x}_i) \qquad (14)$$

and

$$b_i = \sqrt{w_i}\, y_i. \qquad (15)$$

Then the equation

$$\mathbf{A}_{(m \times n)}\, \tilde{\mathbf{a}}_{(n \times 1)} = \mathbf{b}_{(m \times 1)} \qquad (16)$$

reads componentwise as

$$\sqrt{w_i} \sum_j \tilde a_j f_j(\mathbf{x}_i) = \sqrt{w_i}\, y_i, \qquad (17)$$

which is exactly (3), if we take into account (2) and denote the coefficients by $\tilde{\mathbf{a}}$ instead of $\mathbf{a}$. Equation (16) (or (17) in componentwise notation) is an overdetermined system; therefore we cannot divide both sides of each equation by $\sqrt{w_i}$, because this would affect the significance of the individual equations and therefore the solution (the system in this case does not have an exact solution, and therefore the relative significance of the equations is important). Now we can show that the system (16) is in some sense equivalent to the system (8). This is seen by observing that

$$\mathbf{C} = \mathbf{A}^T \mathbf{A}, \qquad \mathbf{d} = \mathbf{A}^T \mathbf{b}, \qquad (18)$$

i.e. the least squares system of equations (8) (also referred to as the normal system of equations) is obtained by left-multiplying the overdetermined system (16) by $\mathbf{A}^T$. It can be shown that we can obtain the solution of the normal system (8) by performing the QR decomposition of the matrix $\mathbf{A}$ from the system (16) []:
$$\mathbf{A}_{(N_v \times n)} = \mathbf{Q}_{(N_v \times N_v)}\, \mathbf{U}_{(N_v \times n)}. \qquad (19)$$

We denote by $\mathbf{z}$ the solution of the orthogonal system $\mathbf{Q}\,\mathbf{z} = \mathbf{b}$, i.e.

$$\mathbf{z} = \mathbf{Q}^T \mathbf{b}. \qquad (20)$$

The matrix $\mathbf{U}$ is upper trapezoidal (by the QR decomposition). We write $\mathbf{U}$ and $\mathbf{z}$ in block form,

$$\mathbf{U}_{(N_v \times n)} = \begin{bmatrix} \mathbf{V}_{(n \times n)} \\ \mathbf{0}_{((N_v - n) \times n)} \end{bmatrix}, \qquad (21)$$

$$\mathbf{z}_{(N_v)} = \begin{bmatrix} \mathbf{y}_{(n)} \\ \mathbf{w}_{(N_v - n)} \end{bmatrix}. \qquad (22)$$

Now it follows from $\mathbf{C} = \mathbf{A}^T \mathbf{A}$ (taking into account the decomposition and the block form) that $\mathbf{C} = \mathbf{A}^T \mathbf{A} = \mathbf{V}^T \mathbf{V}$, i.e. $\mathbf{V}$ is a Cholesky factor of the normal matrix $\mathbf{C} = \mathbf{A}^T \mathbf{A}$. With the QR factorization we have avoided the calculation of $\mathbf{C}$ and its Cholesky factorization. We can further verify that

$$\mathbf{d} = \mathbf{A}^T \mathbf{b} = \mathbf{V}^T \mathbf{y}. \qquad (23)$$

This means that $\mathbf{y}$, which is the upper part of the transformed $\mathbf{z}$, is the solution of the lower triangular system (12). The least squares solution is therefore obtained (according to (13)) by solving the least squares system

$$\mathbf{V}\,\tilde{\mathbf{a}} = \mathbf{y}. \qquad (24)$$

The advantage of using the QR factorization is that the matrix $\mathbf{A}$ is better conditioned than $\mathbf{A}^T \mathbf{A}$. If the spectral condition number of the matrix $\mathbf{A}$ equals $\kappa$, then the spectral condition number of $\mathbf{A}^T \mathbf{A} = \mathbf{B}$ is $\kappa^2 = \lambda_1 / \lambda_n$, where $\lambda_1$ is the largest and $\lambda_n$ the smallest eigenvalue of $\mathbf{B}$.

From (2) we can see that

$$y(\mathbf{x}, \mathbf{a}) = \sum_j a_j f_j(\mathbf{x}), \qquad (25)$$
and therefore

$$\frac{\partial y(\mathbf{x}_i, \mathbf{a})}{\partial a_j} = f_j(\mathbf{x}_i) = \frac{A_{ij}}{\sqrt{w_i}}. \qquad (26)$$

We see that

$$\sqrt{w_i}\, \frac{\partial y(\mathbf{x}_i, \mathbf{a})}{\partial a_j} = A_{ij}. \qquad (27)$$

Sometimes we define a matrix $\mathbf{X}$ so that

$$X_{ij} = \frac{\partial y(\mathbf{x}_i, \mathbf{a})}{\partial a_j} = f_j(\mathbf{x}_i) = \frac{A_{ij}}{\sqrt{w_i}}. \qquad (28)$$

1.1.2 Statistical background

We have a model that predicts a set of measurements (observations) $y_i$, which depends on a set of unknown parameters $\mathbf{a}$:

$$y_i(\mathbf{a}). \qquad (29)$$

In function approximation, we have a model for a function of one or a set of independent variables,

$$y_i(\mathbf{a}) = y(\mathbf{x}_i; \mathbf{a}). \qquad (30)$$

From the point of view of parameter estimation, this is the same as (29), because the independent variables $\mathbf{x}_i$ are used just to distinguish between distinct measurements (to index the measurements, the same as the index $i$ in (29)), and the actual functional relations are not actually used. In the least squares formulation, the parameters $\mathbf{a}$ are estimated by minimizing the sum of squares,

$$\min_{\mathbf{a}} \chi^2(\mathbf{a}) = \sum_{i=1}^{N_v} \left( y_i - y_i(\mathbf{a}) \right)^2. \qquad (31)$$

Note that for linear models, or in function approximation,

$$y_i(\mathbf{a}) = \sum_j a_j f_{ij} \qquad (32)$$

or
$$y_i = y(\mathbf{x}_i; \mathbf{a}) = \sum_j a_j f_j(\mathbf{x}_i). \qquad (33)$$

Both forms are equivalent, which can easily be seen if we write the second form (33) for $\mathbf{x} = \mathbf{x}_i$.

1.1.2.1 Statistical background

The statistical background described here applies to general least squares fitting (also nonlinear). To give the least squares procedure a foundation, we must assume that the measurement errors are independently random and normally distributed:

$$y_i \sim \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left( -\frac{(y_i - \mu_i)^2}{2 \sigma_i^2} \right). \qquad (34)$$

When fitting the parameters, we would like to find the parameters that are most likely to be correct. It is not meaningful to ask e.g. "What is the probability that the given parameters $\mathbf{a}$ are correct?" However, intuition tells us that parameters for which the model data don't look like the measured data are unlikely. We can ask the question "Given the particular set of parameters, what is the probability that this specific data set could have occurred?" If the $y_i$ take continuous values, then we must speak of the probability that $y_i \pm \Delta y$ occur. If this probability is very small, then we conclude that the parameters under consideration are unlikely to be right. Conversely, intuition tells us that the data should not be too improbable for the correct model parameters. In other words, we intuitively identify the probability of the data given the parameters as the likelihood of the parameters given the data. This is based on intuition and has no mathematical background! We look for the parameters that maximize the likelihood defined in the above way, and this form of estimation is maximum likelihood estimation. According to assumption (34), the probability of the data set is the product of the probabilities of the individual data points:

$$P = \prod_{i=1}^{m} \exp\left( -\frac{\left( y_i - y(\mathbf{x}_i; \mathbf{a}) \right)^2}{2 \sigma_i^2} \right) \Delta y. \qquad (35)$$

Maximizing this probability is equivalent to minimizing the negative of its logarithm,

$$\sum_{i=1}^{m} \frac{\left( y_i - y(\mathbf{x}_i; \mathbf{a}) \right)^2}{2 \sigma_i^2} - m \ln \Delta y.$$

The point is that there is just one model, the correct one, and there is a statistical universe of data sets that are drawn from that model.
Since the last term is constant, minimizing this expression is equivalent to minimizing (31).

Remark: The discussion is limited to statistical errors, which we can average away (to a desired extent) if we take enough data. Measurements are also susceptible to systematic errors, which cannot be annihilated by any amount of averaging.

In equations (8)-(10) and (14)-(16), regarding the statistical argumentation, we must set the weights to

$$w_i = \frac{1}{\sigma_i^2}. \qquad (36)$$

Let us now estimate the uncertainties of the estimated parameters. The variance associated with $a_j$ can be found from

$$\sigma^2(a_j) = \sum_{i=1}^{m} \sigma_i^2 \left( \frac{\partial a_j}{\partial y_i} \right)^2. \qquad (37)$$

From (8) we have

$$a_j = \sum_{k=1}^{n} C^{-1}_{jk} d_k = \sum_{k=1}^{n} C^{-1}_{jk} \sum_{i=1}^{m} \frac{y_i f_k(\mathbf{x}_i)}{\sigma_i^2}. \qquad (38)$$

Since $\mathbf{C}$ is independent of $y_i$,

$$\frac{\partial a_j}{\partial y_i} = \sum_{k} C^{-1}_{jk} \frac{f_k(\mathbf{x}_i)}{\sigma_i^2}. \qquad (39)$$

We write $\mathbf{C}^{-1} = \mathbf{W}$. Consequently,

$$\sigma^2(a_j) = \sum_{k=1}^{n} \sum_{l=1}^{n} W_{jk} W_{jl} \left( \sum_{i=1}^{m} \frac{f_k(\mathbf{x}_i) f_l(\mathbf{x}_i)}{\sigma_i^2} \right). \qquad (40)$$

The final term in brackets in the above equation is just $C_{kl}$. Since $\mathbf{C}$ is the inverse of $\mathbf{W}$, the equation reduces to $W_{jj}$, i.e.

$$\sigma^2(a_j) = C^{-1}_{jj}. \qquad (41)$$

The off-diagonal elements of $\mathbf{C}^{-1}$ are the covariances between $a_j$ and $a_k$:

$$\mathrm{Cov}(a_j, a_k) = C^{-1}_{jk}. \qquad (42)$$

Footnote: E.g. the calibration of a measurement equipment can depend on the temperature, and if we perform all measurements at a wrong temperature then averaging will not reduce the systematic error.
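As a numerical sketch of (36)-(42) (not from the original text: the straight-line model, the noise level, and all variable names are hypothetical), the weights are set to $1/\sigma_i^2$ and the inverse normal matrix is read off as the parameter covariance matrix:

```python
import numpy as np

# Hypothetical straight-line fit with known per-point measurement errors
# sigma_i: weights w_i = 1/sigma_i^2 and parameter covariance C^{-1}.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 4.0, 200)
sigma = np.full_like(x, 0.1)                 # known standard deviations sigma_i
y = 0.5 + 1.5 * x + rng.normal(0.0, sigma)   # noisy samples of 0.5 + 1.5 x

F = np.vstack([np.ones_like(x), x]).T        # basis: 1, x
w = 1.0 / sigma**2                           # w_i = 1 / sigma_i^2        -- (36)

C = F.T @ (w[:, None] * F)
d = F.T @ (w * y)
a = np.linalg.solve(C, d)                    # fitted parameters

cov = np.linalg.inv(C)                       # Cov(a_j, a_k) = C^-1_jk    -- (42)
std = np.sqrt(np.diag(cov))                  # sigma(a_j) = sqrt(C^-1_jj) -- (41)
print(a, std)
```

With 200 points and sigma = 0.1 the recovered intercept and slope land close to the true 0.5 and 1.5, with standard errors on the order of a percent.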
1.1.2.2 Non-normal distribution of errors

In the case of non-normal errors, we often do the following things, which are derived from the assumption that the error distribution is normal:

- Fit parameters by minimizing $\chi^2$.
- Use contours of constant $\chi^2$ as the boundary of the confidence region.
- Use Monte Carlo simulations or analytical calculations to determine which contour of $\chi^2$ is the correct one for the desired confidence level.
- Give the covariance matrix $\mathbf{C}^{-1}$ as the formal covariance matrix of the fit, on the assumption of normally distributed errors.
- Interpret the diagonal elements of $\mathbf{C}^{-1}$ as the actual squared standard errors of the parameter estimation.

1.2 Weighted Least Squares Approximation of Function Values and Gradients

Sometimes we have gradient information besides the values of a function in a given set of points, and we want to construct an approximation that best fits the specified values and gradients. In this section equations are derived for approximations that consider both value and gradient data. We have the values of some function $f(\mathbf{x})$ and its gradients in a number of points:

$$f(\mathbf{x}_i) = y_i, \quad i = 1, \ldots, N_v; \qquad \nabla f(\mathbf{x}_{g\,i}) = \mathbf{g}_i, \quad i = 1, \ldots, N_g. \qquad (43)$$

We would like to evaluate the coefficients of a linear combination of $N_b$ functions $f_1(\mathbf{x}), \ldots, f_{N_b}(\mathbf{x})$ such that

$$\tilde f(\mathbf{x}) = a_1 f_1(\mathbf{x}) + a_2 f_2(\mathbf{x}) + \ldots + a_{N_b} f_{N_b}(\mathbf{x}) = \sum_{n=1}^{N_b} a_n f_n(\mathbf{x}), \qquad (44)$$

$$\tilde f(\mathbf{x}_i) \approx f(\mathbf{x}_i) = y_i, \quad i = 1, \ldots, N_v; \qquad \nabla \tilde f(\mathbf{x}_{g\,i}) \approx \nabla f(\mathbf{x}_{g\,i}) = \mathbf{g}_i, \quad i = 1, \ldots, N_g. \qquad (45)$$

In order to keep the generality of the derivation, we will allow throughout the text that the values and the gradients of the approximated function $f(\mathbf{x})$ are evaluated in different sets of points (which may, however, partially or fully coincide). We will denote the $t$-th component of $\mathbf{g}_i$ by $g_{it}$. The gradient of the approximation is simply

$$\nabla \tilde f(\mathbf{x}) = \sum_{n=1}^{N_b} a_n \nabla f_n(\mathbf{x}). \qquad (46)$$
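As a small sketch of (44) and (46) (the two-dimensional quadratic basis, the coefficients, and the evaluation point below are made up for illustration), the approximation and its gradient are both linear in the same coefficients $a_n$:

```python
import numpy as np

# Basis 1, x1, x2, x1^2, x1*x2, x2^2 and its gradients: the approximation
# is sum_n a_n f_n(x) and its gradient is sum_n a_n grad f_n(x).
def basis(p):
    x1, x2 = p
    return np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

def basis_grad(p):
    x1, x2 = p
    return np.array([
        [0.0, 0.0],            # grad of 1
        [1.0, 0.0],            # grad of x1
        [0.0, 1.0],            # grad of x2
        [2.0 * x1, 0.0],       # grad of x1^2
        [x2, x1],              # grad of x1*x2
        [0.0, 2.0 * x2],       # grad of x2^2
    ])

a = np.array([1.0, -2.0, 0.5, 3.0, 1.0, -1.0])   # coefficients a_n
p = np.array([0.3, 0.7])                          # evaluation point

f_tilde = a @ basis(p)             # (44)
grad_f_tilde = a @ basis_grad(p)   # (46): a vector of shape (2,)
print(f_tilde, grad_f_tilde)       # ~ 0.74 and [0.5, -0.6]
```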
We want the linear approximation to agree as much as possible with the values of $f(\mathbf{x})$ and its gradient to agree as much as possible with the gradients of $f(\mathbf{x})$ in all points $\mathbf{x}_i$. We look for the best agreement in the weighted least squares sense, i.e. we minimize the function

$$\phi = \sum_{i=1}^{N_v} w_i \left( \tilde y(\mathbf{x}_i) - y_i \right)^2 + \sum_{i=1}^{N_g} \sum_{t=1}^{N} w_{g\,it} \left( \frac{\partial \tilde y}{\partial x_t}(\mathbf{x}_{g\,i}) - g_{it} \right)^2 = \sum_{i=1}^{N_v} w_i \left( \sum_j a_j f_j(\mathbf{x}_i) - y_i \right)^2 + \sum_{i=1}^{N_g} \sum_{t=1}^{N} w_{g\,it} \left( \sum_j a_j \frac{\partial f_j}{\partial x_t}(\mathbf{x}_{g\,i}) - g_{it} \right)^2 \qquad (47)$$

with respect to the parameters of approximation $\mathbf{a}$. $w_i$ are $N_v$ weights that weigh the significance of the values in the points $\mathbf{x}_i$, and $w_{g\,it}$ are $N_g \times N$ weights that weigh the significance of the individual gradient components in $\mathbf{x}_{g\,i}$. The minimum is the stationary point of $\phi$ where

$$\frac{d\phi}{da_j} = 0, \quad j = 1, \ldots, N_b. \qquad (48)$$

The derivatives of $\phi$ are

$$\frac{d\phi}{da_j} = 2 \sum_{i=1}^{N_v} w_i \left( \sum_k a_k f_k(\mathbf{x}_i) - y_i \right) f_j(\mathbf{x}_i) + 2 \sum_{i=1}^{N_g} \sum_{t=1}^{N} w_{g\,it} \left( \sum_k a_k \frac{\partial f_k}{\partial x_t}(\mathbf{x}_{g\,i}) - g_{it} \right) \frac{\partial f_j}{\partial x_t}(\mathbf{x}_{g\,i}). \qquad (49)$$

Equation (48) therefore gives the following system of equations for the unknown coefficients $a_k$:
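The objective (47) can also be minimized directly as one stacked overdetermined system with square-root weighted rows. A minimal one-dimensional sketch (not from the original text: the sampled quadratic, the point sets, and all names are hypothetical):

```python
import numpy as np

# Basis 1, x, x^2 (derivatives 0, 1, 2x) fitted to sampled values and
# sampled derivatives; np.linalg.lstsq minimizes the stacked form of (47).
x_v = np.array([0.0, 0.5, 1.0])              # points with function values
x_g = np.array([0.25, 0.75])                 # points with derivative values
y = 1.0 - x_v + 3.0 * x_v**2                 # y_i sampled from 1 - x + 3 x^2
g = -1.0 + 6.0 * x_g                         # g_i sampled derivatives

w_v = np.ones_like(x_v)                      # value weights w_i
w_g = np.ones_like(x_g)                      # gradient weights w_gi

F = np.vstack([np.ones_like(x_v), x_v, x_v**2]).T                    # f_n(x_i)
G = np.vstack([np.zeros_like(x_g), np.ones_like(x_g), 2.0 * x_g]).T  # f_n'(x_gi)

# Stack value rows and gradient rows, each scaled by sqrt of its weight.
A = np.vstack([np.sqrt(w_v)[:, None] * F, np.sqrt(w_g)[:, None] * G])
b = np.concatenate([np.sqrt(w_v) * y, np.sqrt(w_g) * g])
a, *_ = np.linalg.lstsq(A, b, rcond=None)
print(a)                                     # recovers [1, -1, 3]
```

Because the sample data are generated exactly from a member of the basis span, the stacked system is consistent and the fit reproduces the generating coefficients.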
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationDeriving the XZ Identity from Auxiliary Space Method
Dervng the XZ Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More informationRockefeller College University at Albany
Rockefeller College Unverst at Alban PAD 705 Handout: Maxmum Lkelhood Estmaton Orgnal b Davd A. Wse John F. Kenned School of Government, Harvard Unverst Modfcatons b R. Karl Rethemeer Up to ths pont n
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 LeastSquares 2.1 Smple leastsquares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons Â and ˆb avalable. Then the best thng we can do s to solve Âˆx ˆb exactly whch
More informationACTM State Calculus Competition Saturday April 30, 2011
ACTM State Calculus Competton Saturday Aprl 30, 2011 ACTM State Calculus Competton Sprng 2011 Page 1 Instructons: For questons 1 through 25, mark the best answer choce on the answer sheet provde Afterward
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationChapter 9: Statistical Inference and the Relationship between Two Variables
Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,
More informationLecture 2: Prelude to the big shrink
Lecture 2: Prelude to the bg shrnk Last tme A slght detour wth vsualzaton tools (hey, t was the frst day... why not start out wth somethng pretty to look at?) Then, we consdered a smple 120astyle regresson
More informationPolynomial Regression Models
LINEAR REGRESSION ANALYSIS MODULE XII Lecture  6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance
More informationInterval Estimation in the Classical Normal Linear Regression Model. 1. Introduction
ECONOMICS 35*  NOTE 7 ECON 35*  NOTE 7 Interval Estmaton n the Classcal Normal Lnear Regresson Model Ths note outlnes the basc elements of nterval estmaton n the Classcal Normal Lnear Regresson Model
More information4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA
4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth oneway ANOVA If the populatons ncluded n the study are selected
More informationReview: Fit a line to N data points
Revew: Ft a lne to data ponts Correlated parameters: L y = a x + b Orthogonal parameters: J y = a (x ˆ x + b For ntercept b, set a=0 and fnd b by optmal average: ˆ b = y, Var[ b ˆ ] = For slope a, set
More information1 Matrix representations of canonical matrices
1 Matrx representatons of canoncal matrces 2d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3d rotaton around the xaxs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3d rotaton around the yaxs:
More informationAn (almost) unbiased estimator for the SGini index
An (almost unbased estmator for the SGn ndex Thomas Demuynck February 25, 2009 Abstract Ths note provdes an unbased estmator for the absolute SGn and an almost unbased estmator for the relatve SGn for
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationSTAT 511 FINAL EXAM NAME Spring 2001
STAT 5 FINAL EXAM NAME Sprng Instructons: Ths s a closed book exam. No notes or books are allowed. ou may use a calculator but you are not allowed to store notes or formulas n the calculator. Please wrte
More informationComparison of Regression Lines
STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence
More informationLecture 7: Boltzmann distribution & Thermodynamics of mixing
Prof. Tbbtt Lecture 7 etworks & Gels Lecture 7: Boltzmann dstrbuton & Thermodynamcs of mxng 1 Suggested readng Prof. Mark W. Tbbtt ETH Zürch 13 März 018 Molecular Drvng Forces Dll and Bromberg: Chapters
More informationPHYS 450 Spring semester Lecture 02: Dealing with Experimental Uncertainties. Ron Reifenberger Birck Nanotechnology Center Purdue University
PHYS 45 Sprng semester 7 Lecture : Dealng wth Expermental Uncertantes Ron Refenberger Brck anotechnology Center Purdue Unversty Lecture Introductory Comments Expermental errors (really expermental uncertantes)
More informationWhy Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)
Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch
More informationSimulated Power of the Discrete Cramérvon Mises GoodnessofFit Tests
Smulated of the Cramérvon Mses GoodnessofFt Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth
More information