CLRM estimation Pietro Coretto Econometrics
Slide Set 4: CLRM estimation
Pietro Coretto, pcoretto@unisa.it
Econometrics, Master in Economics and Finance (MEF)
Università degli Studi di Napoli Federico II
Version: Thursday 24th January, 2019 (h08:41)
P. Coretto MEF CLRM estimation 1 / 22

Least Squares Method (LS)

Given an additive regression model

    y = f(X; β) + ε

note that ε is not observed, but it is a function of the observables and the unknown parameter:

    ε = y - f(X; β)

LS method:
- assume the signal f(X; β) is much stronger than the error ε
- look for a β such that the size of ε is as small as possible
- the size of ε is measured by some norm ‖ε‖
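As a numerical sketch of the LS idea (not from the slides; the data, seed, and variable names are made up for illustration), when the signal is much stronger than the error, the objective ‖ε‖² is far smaller near the true parameter than at a distant one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)           # strong signal, small error

def S(beta):
    """LS objective: squared L2 norm of the implied error y - X beta."""
    eps = y - X @ beta
    return eps @ eps

s_true = S(beta_true)                          # small: only the noise remains
s_far = S(beta_true + np.array([0.5, -0.5]))   # large: systematic misfit added
```

Any norm of ε could be used in principle; OLS, introduced next, corresponds to the ℓ₂ norm above.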
Ordinary Least Squares estimator (OLS)

OLS = LS with the ℓ₂ norm. Therefore the OLS objective function is

    S(β) = ‖ε‖₂² = ε'ε = (y - f(X; β))'(y - f(X; β)),

and the OLS estimator b is defined as the optimal solution

    b = arg min_{β ∈ R^K} S(β)

For the linear model

    S(β) = ‖ε‖₂² = ε'ε = (y - Xβ)'(y - Xβ) = Σ_{i=1}^n ε_i² = Σ_{i=1}^n (y_i - x_i'β)²

S(β) is nicely convex!

Proposition: OLS estimator
The unique OLS estimator is

    b = (X'X)^{-1} X'y

To see this, first we introduce two simple matrix derivative rules:
1. Let a, b ∈ R^p; then ∂(a'b)/∂b = ∂(b'a)/∂b = a
2. Let b ∈ R^p, and let A ∈ R^{p×p} be symmetric; then ∂(b'Ab)/∂b = 2Ab
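A minimal sketch of computing b = (X'X)^{-1}X'y on simulated data (illustrative names and seed; `np.linalg.solve` is used instead of forming the inverse explicitly, which is numerically safer but algebraically equivalent), cross-checked against NumPy's own least-squares routine:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(size=n)

# closed-form OLS: solve the system (X'X) b = X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

# independent cross-check via NumPy's least-squares solver
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```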
Proof. Rewrite the LS objective function:

    S(β) = (y - Xβ)'(y - Xβ) = y'y - β'X'y - y'Xβ + β'X'Xβ

Note that the transpose of a scalar is the scalar itself, so that we can write y'Xβ = (y'Xβ)' = β'X'y, and therefore

    S(β) = y'y - 2β'(X'y) + β'(X'X)β    (4.1)

Since S(·) is convex, there exists a minimum b which satisfies the first order conditions

    ∂S(β)/∂β |_{β=b} = 0

Applying the previous derivative rules (1) and (2) to the 2nd and 3rd terms of (4.1):

    ∂S(b)/∂b = -2(X'y) + 2(X'X)b = 0

which leads to the so-called normal equations

    (X'X)b = X'y

The matrix X'X is square and symmetric (see homeworks). Based on A3, with probability 1 X'X is non-singular, so (X'X)^{-1} exists, and the normal equations can be written as

    (X'X)^{-1}(X'X)b = (X'X)^{-1}X'y  ⟹  b = (X'X)^{-1}X'y

which proves the desired result.
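The first-order condition can be verified numerically: at the OLS solution the analytic gradient -2X'y + 2(X'X)b should vanish, and a central finite-difference gradient of S(·) gives an independent check. A sketch on simulated data (names and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 80, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)
b = np.linalg.solve(X.T @ X, X.T @ y)

def S(beta):
    eps = y - X @ beta
    return eps @ eps

# analytic gradient from derivative rules (1) and (2)
grad_at_b = -2 * (X.T @ y) + 2 * (X.T @ X @ b)

# central finite differences as an independent check (S is quadratic,
# so the central difference is exact up to rounding)
h = 1e-6
fd = np.array([(S(b + h * e) - S(b - h * e)) / (2 * h) for e in np.eye(K)])
```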
Formulation in terms of sample averages

It can be shown (see homeworks) that

    X'X = Σ_{i=1}^n x_i x_i'   and   X'y = Σ_{i=1}^n x_i y_i

Define

    S_xx = (1/n) X'X = (1/n) Σ_{i=1}^n x_i x_i'   and   s_xy = (1/n) X'y = (1/n) Σ_{i=1}^n x_i y_i

Therefore b = (X'X)^{-1} X'y can be written as

    b = ((1/n) X'X)^{-1} ((1/n) X'y) = ((1/n) Σ_{i=1}^n x_i x_i')^{-1} ((1/n) Σ_{i=1}^n x_i y_i) = S_xx^{-1} s_xy

Once β is estimated via b, the estimated error, also called the residual, is obtained as

    e = y - Xb

Fitted values, also called predicted values, are ŷ = Xb, so that e = y - ŷ.

Note that ŷ_i = b₁ + b₂ x_{i2} + ... + b_K x_{iK} for all i = 1, 2, ..., n.

What is ŷ_i? It is the estimated conditional expectation of Y when X₁ = 1, X₂ = x_{i2}, ..., X_K = x_{iK}.
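A sketch checking that the sample-average formulation S_xx^{-1} s_xy reproduces the matrix formula exactly, and computing fitted values and residuals (illustrative data and names):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

Sxx = (X.T @ X) / n          # sample average of x_i x_i'
sxy = (X.T @ y) / n          # sample average of x_i y_i
b_avg = np.linalg.solve(Sxx, sxy)

b = np.linalg.solve(X.T @ X, X.T @ y)   # matrix formula for comparison
yhat = X @ b                  # fitted values
e = y - yhat                  # residuals
```

The 1/n factors cancel algebraically, which is why both routes give the same b.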
Algebraic/Geometric properties of the OLS

Proposition (orthogonality of residuals)
The column space of X is orthogonal to the residual vector.

Proof. Write the normal equations:

    X'Xb - X'y = 0  ⟹  X'(y - Xb) = 0  ⟹  X'e = 0

Therefore for every column X_k (observed regressor) it holds true that the inner product X_k'e = 0.

Proposition (residuals sum to zero)
If the linear model includes the constant term, then

    Σ_{i=1}^n e_i = Σ_{i=1}^n (y_i - x_i'b) = 0

Proof. By assumption we have a linear model with a constant/intercept term, that is

    y_i = β₁ + β₂ x_{i2} + ... + β_K x_{iK} + ε_i

Therefore X₁ = 1 = (1, 1, ..., 1)'. Applying the previous property to the 1st column of X:

    X₁'e = 1'e = Σ_{i=1}^n e_i = 0

and this proves the property.
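Both propositions are easy to confirm numerically: X'e should be the zero vector, and its first entry (the column of ones times e) is the sum of residuals. A sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # first column is the constant
y = rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

Xte = X.T @ e        # orthogonality: should be numerically zero
e_sum = e.sum()      # residuals sum to zero because the model has an intercept
```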
Proposition (fitted vector is a projection)
ŷ is the projection of y onto the space spanned by the columns of X (the regressors).

Proof.

    ŷ = Xb = X(X'X)^{-1}X'y = Py

It suffices to show that P = X(X'X)^{-1}X' is symmetric and idempotent.

    P' = (X(X'X)^{-1}X')' = X((X'X)^{-1})'X' = X(X'X)^{-1}X' = P

Therefore P is symmetric.

    PP = (X(X'X)^{-1}X')(X(X'X)^{-1}X') = X(X'X)^{-1}(X'X)(X'X)^{-1}X' = X(X'X)^{-1}X' = P

which shows that P is also idempotent, and this completes the proof.

P is called the influence matrix, because it measures the impact of the observed y's on each predicted ŷ_i. The elements of the diagonal of P are called leverages, because they are the influence of y_i on the corresponding ŷ_i.
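A sketch that builds P and checks symmetry, idempotency, and the leverages on simulated data (illustrative names; the check that trace(P) = K is a standard property of projection matrices, not stated on the slides):

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])

# P = X (X'X)^{-1} X', built without explicitly inverting X'X
P = X @ np.linalg.solve(X.T @ X, X.T)
leverages = np.diag(P)

sym_err = np.abs(P - P.T).max()     # symmetry: P' = P
idem_err = np.abs(P @ P - P).max()  # idempotency: PP = P
trace_P = np.trace(P)               # equals K for a rank-K projection
```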
Proposition (orthogonal decomposition)
The OLS fitting decomposes the observed vector y into the sum of two orthogonal components:

    y = ŷ + e = Py + My

Remark: orthogonality implies that the individual contributions of each term of the decomposition of y are well identified.

Proof. First notice that

    e = y - ŷ = y - Py = (I - P)y = My

where M = (I - P). Therefore y = ŷ + e = Py + My. It remains to show that ŷ = Py and e = My are orthogonal vectors.

First note that MP = PM = 0; in fact

    (I - P)P = P - PP = P - P = 0

Moreover

    ⟨Py, My⟩ = (Py)'(My) = y'P'My = y'PMy = y'0y = 0

and this completes the proof.

M = I - P is called the residual maker matrix because it maps y into e. It allows us to write e in terms of the observables y and X. Properties:
- M is idempotent and symmetric (show it)
- MX = 0; in fact MX = (I - P)X = X - PX = X - X = 0

Remark: it can be shown that this decomposition is also unique (a consequence of the Hilbert projection theorem).
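The decomposition and the properties of M can be verified directly on simulated data (a sketch with illustrative names):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)
M = np.eye(n) - P                 # residual maker

yhat = P @ y                      # projection onto the column space of X
e = M @ y                         # residual component

inner = yhat @ e                  # orthogonality: <Py, My> = 0
MX_err = np.abs(M @ X).max()      # MX = 0
decomp_err = np.abs(yhat + e - y).max()   # y = Py + My
```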
OLS Projection

[Figure: geometric illustration of the OLS projection. Source: Greene, W. H. (2011), Econometric Analysis, 7th Edition]

Estimate of the variance of the error term

The minimum of the LS objective function is

    S(b) = (y - Xb)'(y - Xb) = e'e

This is called the residual sum of squares:

    RSS = Σ_{i=1}^n e_i² = e'e

Note that

    e = My = M(Xβ + ε) = Mε

and

    RSS = e'e = (Mε)'(Mε) = ε'M'Mε = ε'Mε
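In a simulation we know the true ε, so the identities e = Mε and RSS = ε'Mε can be checked exactly. A sketch (illustrative data and names):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([2.0, -1.0])
eps = rng.normal(size=n)
y = X @ beta + eps                # simulated model, so eps is known here

P = X @ np.linalg.solve(X.T @ X, X.T)
M = np.eye(n) - P
e = M @ y                         # residuals

RSS = e @ e
RSS_eps = eps @ (M @ eps)         # RSS = eps' M eps, since M X beta = 0
e_from_eps = M @ eps              # e = M eps
```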
Unbiased estimation of the error variance

    s² = (1/(n - K)) Σ_{i=1}^n e_i² = e'e/(n - K) = RSS/(n - K)

SER = standard error of the regression = s

Estimation error decomposition

The sampling estimation error is given by b - β. Now

    b - β = (X'X)^{-1}X'y - β
          = (X'X)^{-1}X'(Xβ + ε) - β
          = (X'X)^{-1}(X'X)β + (X'X)^{-1}X'ε - β
          = β + (X'X)^{-1}X'ε - β
          = (X'X)^{-1}X'ε

The bias is the expected estimation error: Bias(b) = E[b - β].
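A sketch computing s² and verifying the estimation-error identity b - β = (X'X)^{-1}X'ε in a simulation where β and ε are known (illustrative names; here the true error variance is 1, so s² should land near 1):

```python
import numpy as np

rng = np.random.default_rng(8)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
eps = rng.normal(size=n)          # true errors, variance 1
y = X @ beta + eps

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = (e @ e) / (n - K)            # unbiased estimate of the error variance
SER = np.sqrt(s2)                 # standard error of the regression

# sampling error identity: b - beta = (X'X)^{-1} X' eps
err = b - beta
err_formula = np.linalg.solve(X.T @ X, X.T @ eps)
```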
TSS = total sum of squares

Let ȳ be the sample average of the observed y₁, y₂, ..., y_n:

    ȳ = (1/n) Σ_{i=1}^n y_i

and let ȳ = (ȳ, ȳ, ..., ȳ)' denote the n-vector with all entries equal to ȳ; we can also write it as ȳ1 (ȳ repeated n times).

TSS = the deviance (variability) observed in the dependent variable y:

    TSS = Σ_{i=1}^n (y_i - ȳ)² = (y - ȳ)'(y - ȳ)

This is a variability measure, because it computes the squared deviations of y from its observed unconditional mean.

ESS = explained sum of squares

ESS = the overall deviance of the predicted values of y with respect to the unconditional mean of y:

    ESS = Σ_{i=1}^n (ŷ_i - ȳ)² = (ŷ - ȳ)'(ŷ - ȳ)

At first look this is not exactly a measure of variability (why?). But it turns out that another property of the OLS is that

    (1/n) Σ_{i=1}^n ŷ_i = (1/n) Σ_{i=1}^n y_i
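A sketch computing TSS and ESS and confirming that, with an intercept in the model, the fitted values have the same sample mean as y (illustrative data and names):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
yhat = X @ b
ybar = y.mean()

TSS = np.sum((y - ybar) ** 2)        # total deviance of y
ESS = np.sum((yhat - ybar) ** 2)     # deviance of the fitted values
mean_gap = yhat.mean() - y.mean()    # zero when the model has an intercept
```

The mean equality is why ESS is in fact a deviance of ŷ about its own mean, not just about ȳ.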
TSS decomposition and goodness of fit

It can be shown (we don't do this here) that

    TSS = ESS + RSS

From the previous decomposition we get a famous (and misused) goodness-of-fit statistic:

    R² = ESS/TSS = 1 - RSS/TSS

R² is the portion of the deviance observed in y that is explained by the linear model. It is also called the coefficient of determination.

Problems with R²
- It increases when more regressors are added. For this reason it is better to look at the so-called adjusted R² (adjusted for the degrees of freedom), computed as

      R̄² = 1 - (RSS/(n - K)) / (TSS/(n - 1))

- R² ∈ [0, 1] only if the constant term is included in the model. So when you estimate without an intercept, don't be scared if you get R² < 0.
- An extremely large R² is pathological: guess why!
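A sketch computing R² and the adjusted R̄², and demonstrating that dropping the intercept can make 1 - RSS/TSS negative (illustrative data and names; the intercept of 5 makes the no-intercept fit badly centered on purpose):

```python
import numpy as np

rng = np.random.default_rng(10)
n, K = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([5.0, 1.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
RSS = e @ e
TSS = np.sum((y - y.mean()) ** 2)

R2 = 1 - RSS / TSS
R2_adj = 1 - (RSS / (n - K)) / (TSS / (n - 1))   # adjusted R-squared

# without the constant column, 1 - RSS/TSS is no longer bounded below by 0
Xni = X[:, 1:]
bni = np.linalg.solve(Xni.T @ Xni, Xni.T @ y)
eni = y - Xni @ bni
R2_no_intercept = 1 - (eni @ eni) / TSS
```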
REGRESSION 1 Outlie Liear regressio Regularizatio fuctios Polyomial curve fittig Stochastic gradiet descet for regressio MLE for regressio Step-wise forward regressio Regressio methods Statistical techiques
More informationLecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting
Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would
More informationThe central limit theorem for Student s distribution. Problem Karim M. Abadir and Jan R. Magnus. Econometric Theory, 19, 1195 (2003)
The cetral limit theorem for Studet s distributio Problem 03.6.1 Karim M. Abadir ad Ja R. Magus Ecoometric Theory, 19, 1195 (003) Z Ecoometric Theory, 19, 003, 1195 1198+ Prited i the Uited States of America+
More informationMATHEMATICAL SCIENCES PAPER-II
MATHEMATICAL SCIENCES PAPER-II. Let {x } ad {y } be two sequeces of real umbers. Prove or disprove each of the statemets :. If {x y } coverges, ad if {y } is coverget, the {x } is coverget.. {x + y } coverges
More informationLecture 16: UMVUE: conditioning on sufficient and complete statistics
Lecture 16: UMVUE: coditioig o sufficiet ad complete statistics The 2d method of derivig a UMVUE whe a sufficiet ad complete statistic is available Fid a ubiased estimator of ϑ, say U(X. Coditioig o a
More informationRegression and generalization
Regressio ad geeralizatio CE-717: Machie Learig Sharif Uiversity of Techology M. Soleymai Fall 2016 Curve fittig: probabilistic perspective Describig ucertaity over value of target variable as a probability
More informationMachine Learning Theory Tübingen University, WS 2016/2017 Lecture 11
Machie Learig Theory Tübige Uiversity, WS 06/07 Lecture Tolstikhi Ilya Abstract We will itroduce the otio of reproducig kerels ad associated Reproducig Kerel Hilbert Spaces (RKHS). We will cosider couple
More information