General Linear Model Continued
- Blaze Lambert
We have the model

y = Xβ + u,  X non-random,  u ~ N(0, σ²I),

and we know

b = (X'X)⁻¹X'y,  E(b) = β,  Var(b) = σ²(X'X)⁻¹.

We saw that b − β = (X'X)⁻¹X'u, so b is a linear function of the normally distributed random vector u and is therefore itself normal:

b ~ N(β, σ²(X'X)⁻¹).

From before, σ̂² = û'û/(n − k), and û'û = u'Nu where N = I − X(X'X)⁻¹X'. Now we can show that for any symmetric idempotent matrix Q, with ε ~ N(0, I),

ε'Qε ~ χ²(trace(Q)).

Since N is symmetric and idempotent with trace(N) = n − k,

u'Nu/σ² = (n − k)σ̂²/σ² ~ χ²(n − k).

Also b − β = (X'X)⁻¹X'u, and M = X(X'X)⁻¹X' is idempotent, so

X(b − β) = X(X'X)⁻¹X'u = Mu
⟹ (b − β)'X'X(b − β) = u'M'Mu = u'Mu,

and hence, since trace(M) = k,

(b − β)'X'X(b − β)/σ² = u'Mu/σ² ~ χ²(k).
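The two results above, E(b) = β and Var(b) = σ²(X'X)⁻¹, can be checked by simulation. This is a minimal sketch, not part of the notes; the sample size, coefficients, and error variance are all illustrative choices.

```python
import numpy as np

# Simulation check: b = (X'X)^{-1}X'y is unbiased for beta and has
# variance sigma^2 (X'X)^{-1} when u ~ N(0, sigma^2 I).
# All numbers here (n, k, beta, sigma) are illustrative, not from the notes.
rng = np.random.default_rng(0)
n, k, sigma = 200, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # fixed regressors
beta = np.array([1.0, 0.5, -0.3])

draws = []
for _ in range(2000):
    u = rng.normal(0.0, sigma, size=n)        # u ~ N(0, sigma^2 I)
    y = X @ beta + u
    b = np.linalg.solve(X.T @ X, X.T @ y)     # b = (X'X)^{-1} X'y
    draws.append(b)
draws = np.array(draws)

print(draws.mean(axis=0))                      # close to beta
print(draws.var(axis=0))                       # close to diag of sigma^2 (X'X)^{-1}
print(sigma**2 * np.linalg.inv(X.T @ X).diagonal())
```

Across the 2000 replications the sample mean and variance of b line up with the formulas above to within Monte Carlo noise.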
As MN = 0, these two quadratic forms are independent. That is,

[u'Mu/σ²]/k ÷ [u'Nu/σ²]/(n − k) ~ [χ²(k)/k] ÷ [χ²(n − k)/(n − k)] ~ F(k, n − k),

so

[(b − β)'X'X(b − β)/k] / [(y − Xb)'(y − Xb)/(n − k)] ~ F(k, n − k),

which can be used to construct significance tests on β: replace β by the value given by the null and compute the ratio. More usually we do not test hypotheses on the whole parameter space. Consider a linear restriction. We want to test

H0: Rβ = d against H1: Rβ ≠ d,

where R is an r×k matrix of constants with rank(R) = r and d is an r×1 vector. Many economic hypotheses can be put in this form. In the model

y_i = β0 + β1 x_1i + β2 x_2i + β3 x_3i + u_i

we might test

β1 = 0:   R = (0 1 0 0),  d = 0
β2 = 1:   R = (0 0 1 0),  d = 1
β2 = β3:  R = (0 0 1 −1), d = 0.

In each case there is one restriction and rank(R) = 1. For the joint hypothesis β1 = 0, β2 = 1, β2 = β3, R is a 3×4 matrix, d = (0, 1, 0)', and rank(R) = 3.

In all of these cases the restricted model can be estimated by imposing the restrictions on the data used to estimate the model:

y_i = β0 + β2 x_2i + β3 x_3i + u_i          when β1 = 0
y_i − x_2i = β0 + β1 x_1i + β3 x_3i + u_i   when β2 = 1
y_i = β0 + β1 x_1i + β2(x_2i + x_3i) + u_i  when β2 = β3.
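The restriction matrices above can be written down and stacked directly. A small sketch, with an illustrative trial β chosen to satisfy all three restrictions:

```python
import numpy as np

# The three single restrictions on beta = (b0, b1, b2, b3), each as R beta = d.
# The trial beta below is an illustrative value, not an estimate.
R1, d1 = np.array([[0.0, 1.0, 0.0, 0.0]]), np.array([0.0])    # beta1 = 0
R2, d2 = np.array([[0.0, 0.0, 1.0, 0.0]]), np.array([1.0])    # beta2 = 1
R3, d3 = np.array([[0.0, 0.0, 1.0, -1.0]]), np.array([0.0])   # beta2 = beta3

beta = np.array([2.0, 0.0, 1.0, 1.0])   # satisfies all three restrictions
for R, d in [(R1, d1), (R2, d2), (R3, d3)]:
    assert np.allclose(R @ beta, d)

# Stacking them gives the joint hypothesis: r = 3 restrictions, rank(R) = 3.
R = np.vstack([R1, R2, R3])
d = np.concatenate([d1, d2, d3])
print(np.linalg.matrix_rank(R))
```

The rank condition rank(R) = r is what guarantees the restrictions are not redundant.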
Note that in each case, after you impose r restrictions you only estimate k − r parameters. So each restriction reduces the number of free parameters by one. What we are doing is minimising û'û subject to the restrictions Rβ = d. As a Lagrangian,

L = (y − Xβ)'(y − Xβ) − 2λ'(Rβ − d).

The restricted estimator can be written

β* = b − (X'X)⁻¹R'[R(X'X)⁻¹R']⁻¹(Rb − d).

So define the unrestricted and restricted residual sums of squares

URSS = (y − Xb)'(y − Xb)
RRSS = (y − Xβ*)'(y − Xβ*).

Then, if the null is true,

[(RRSS − URSS)/r] / [URSS/(n − k)] ~ F(r, n − k),

which is the familiar F test. This measures whether the increase in the residual sum of squares per degree of freedom from imposing the restrictions is large relative to the variance. If it is large, reject the restriction; if it is small, accept the restriction. What counts as "large" depends on the significance level chosen. Note that the statistic is distributed F only if the null is true.

If Rb ≠ d then β* will be a long way from b. So an alternative way of looking at this is that if the restrictions are not true, Rb − d will be far from zero. To measure how far, we can standardise by the estimated variance of Rb − d, namely

V̂(Rb − d) = σ̂²R(X'X)⁻¹R'.

Then

[(Rb − d)'[V̂(Rb − d)]⁻¹(Rb − d)]/r ~ F(r, n − k).

An important special case is where the hypothesis to be tested is that a particular β_i equals a particular value β_i⁰. Then R = [0 ... 0 1 0 ... 0], d = β_i⁰, and Rb − d = b_i − β_i⁰, with V̂(Rb − d) the estimated variance of b_i. So

(b_i − β_i⁰)² / V̂ar(b_i) ~ F(1, n − k).
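The two routes to the F statistic above, the (RRSS − URSS) form and the standardised-distance (Wald) form, are algebraically identical in the linear model. A minimal numerical check, with illustrative data and the single restriction β1 = 0 imposed by dropping a column:

```python
import numpy as np

# Check that ((RRSS - URSS)/r) / (URSS/(n-k)) equals
# (Rb - d)'[s^2 R(X'X)^{-1}R']^{-1}(Rb - d)/r.  Data are illustrative.
rng = np.random.default_rng(1)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.0, 0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
URSS = (y - X @ b) @ (y - X @ b)

R, d, r = np.array([[0.0, 1.0, 0.0]]), np.array([0.0]), 1
Xr = X[:, [0, 2]]                       # restricted fit: imposes beta1 = 0
br = np.linalg.solve(Xr.T @ Xr, Xr.T @ y)
RRSS = (y - Xr @ br) @ (y - Xr @ br)

F_rss = ((RRSS - URSS) / r) / (URSS / (n - k))

s2 = URSS / (n - k)
V = s2 * R @ np.linalg.inv(X.T @ X) @ R.T        # estimated Var(Rb - d)
F_wald = ((R @ b - d) @ np.linalg.solve(V, R @ b - d)) / r
print(F_rss, F_wald)   # the two forms agree
```

This identity is why the "restricted vs unrestricted regression" recipe and the Wald formula give the same test.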
Now if w ~ F(1, n − k) then √w ~ t(n − k), so

(b_i − β_i⁰)/se(b_i) ~ t(n − k),

which is the commonly used t test.

These are small-sample tests and require that the model be linear and the errors normal. If these conditions do not hold, then we have to use asymptotic tests. The three widely used asymptotic tests are:

- the likelihood ratio (LR) test, which uses both the restricted and the unrestricted estimates;
- the Wald test, which uses only the unrestricted estimates;
- the Lagrange multiplier (LM) test, which uses only the restricted estimates.

These are asymptotically equivalent and are each distributed χ²(r) (where r is the number of restrictions) if the null hypothesis is true. All can be written as measures of distance standardised by a variance-covariance matrix, each differing in which distance is measured.

The ML estimates θ̂ are those which maximise LL(θ), i.e. S(θ̂) = 0, where S(θ̂) is the score vector, the derivatives of the log-likelihood with respect to each of the k elements of the vector θ, evaluated at the values θ̂ which make S(θ) = 0. We will call these the unrestricted estimates, and the value of the log-likelihood at θ̂ is LL(θ̂).

Suppose theory suggests m ≤ k prior restrictions of the form R(θ) = 0. If m = k, theory specifies all the parameters and there are none to estimate. The restricted estimates maximise

L = LL(θ) − λ'R(θ),

where λ is an m×1 vector of Lagrange multipliers. Then ∂L/∂θ = 0 gives the restricted estimator θ̃; we can write this as

S(θ̃) − F(θ̃)λ = 0,

where S(θ̃) is the k×1 score vector evaluated at the restricted estimates and F(θ̃) is the k×m matrix of derivatives of the restrictions with respect to the parameters, evaluated at the restricted estimates. Notice that at θ̃ the derivative of the log-likelihood with respect to the parameters is not equal to zero but to F(θ̃)λ. The value of the log-likelihood at θ̃ is LL(θ̃), which is less than or equal to LL(θ̂).

If the hypotheses (restrictions) are true:
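The link between the t and F tests above, t² ~ F(1, n − k), can be verified numerically: for a single restriction the squared t ratio equals the F statistic exactly. Data below are illustrative.

```python
import numpy as np

# For one restriction (here beta1 = 1), the squared t ratio equals the
# F(1, n-k) statistic.  The data-generating process is illustrative.
rng = np.random.default_rng(2)
n, k = 60, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
s2 = (y - X @ b) @ (y - X @ b) / (n - k)            # sigma-hat squared
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())

t1 = (b[1] - 1.0) / se[1]                           # t test of beta1 = 1
F1 = (b[1] - 1.0) ** 2 / (s2 * np.linalg.inv(X.T @ X)[1, 1])
print(t1**2, F1)                                    # identical
```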
1. the two log-likelihoods should be similar, i.e. LL(θ̂) − LL(θ̃) should be close to zero;
2. the unrestricted estimates should come close to satisfying the restrictions, i.e. R(θ̂) should be close to zero (note R(θ̃) is exactly zero by construction);
3. the restricted score S(θ̃) should be close to zero (note S(θ̂) is exactly zero by construction), or equivalently the Lagrange multipliers λ should be close to zero: the restrictions should not be binding.

ILLUSTRATE WITH DIAGRAM FROM KENNEDY

These implications are used as the basis for three types of test procedure. The issue is how to judge "close to zero". To judge this we use the asymptotic equivalents of the exact distributional results used above in the discussion of the properties of the linear regression model. Asymptotically the ML estimator is normal,

θ̂ ~ N(θ, I(θ)⁻¹),

asymptotically the quadratic form is chi-squared,

(θ̂ − θ)'I(θ)(θ̂ − θ) ~ χ²(k),

and asymptotically R(θ̂) is also normal,

R(θ̂) ~ N(R(θ), F(θ)'I(θ)⁻¹F(θ)).

This gives us three procedures for generating asymptotic test statistics for the m restrictions H0: R(θ) = 0, each of which is distributed χ²(m) when the null hypothesis is true:

1. Likelihood ratio tests: LR = 2(LL(θ̂) − LL(θ̃)) ~ χ²(m).
2. Wald tests: W = R(θ̂)'[F(θ̂)'I(θ̂)⁻¹F(θ̂)]⁻¹R(θ̂) ~ χ²(m).
3. Lagrange multiplier (or efficient score) tests: LM = S(θ̃)'I(θ̃)⁻¹S(θ̃) ~ χ²(m).

The likelihood ratio test is straightforward to calculate when both the restricted and unrestricted models have been estimated. The Wald test only requires the unrestricted estimates.
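For the normal linear regression model these three statistics have well-known closed forms in terms of the restricted and unrestricted residual sums of squares (after concentrating out σ²): LR = n·log(RRSS/URSS), W = n·(RRSS − URSS)/URSS, LM = n·(RRSS − URSS)/RRSS. A minimal sketch with illustrative data, imposing the (false) restriction β1 = 0:

```python
import numpy as np

# The LR/W/LM trinity for the normal linear model, in their RSS forms.
# Data and the tested restriction (beta1 = 0) are illustrative.
rng = np.random.default_rng(3)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.2, -0.4]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
URSS = (y - X @ b) @ (y - X @ b)

Xr = X[:, [0, 2]]                              # impose beta1 = 0
br = np.linalg.solve(Xr.T @ Xr, Xr.T @ y)
RRSS = (y - Xr @ br) @ (y - Xr @ br)

LR = n * np.log(RRSS / URSS)
W  = n * (RRSS - URSS) / URSS
LM = n * (RRSS - URSS) / RRSS
print(W, LR, LM)   # ordering W >= LR >= LM holds for the LRM
```

The ordering W ≥ LR ≥ LM follows from RRSS/URSS ≥ 1 and the inequality a − 1 ≥ log a ≥ 1 − 1/a for a ≥ 1.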
The Lagrange multiplier test only requires the restricted estimates.

For the linear regression model, the inequality W ≥ LR ≥ LM holds, so you are more likely to reject using W. In the LRM, the LM test is usually calculated using regression residuals.

The Wald test is not invariant to how you write non-linear restrictions. Suppose m = 1 and the restriction is θ1θ3 = 1. This could be written R(θ) = θ1θ3 − 1 = 0 or R(θ) = θ1 − 1/θ3 = 0, and these would give different values of the test statistic. The former form, using multiplication rather than division, is usually better.

These formulas are not always used to compute the tests directly. The LR test is usually computed as

LR = 2[LL(θ̂; y) − LL(θ̃; y)] ~ χ²(r),

while the LM and W tests are normally calculated from tests on the regression residuals.

Instrumental Variables

We know that if the regressors are not independent of the disturbances, the OLS estimates are biased and inconsistent. Assume that

y = Xβ + u ⟹ b = β + (X'X)⁻¹X'u ⟹ plim b = β + [plim(X'X/n)]⁻¹ plim(X'u/n).

If

plim(X'X/n) = Σ_XX, a positive definite matrix of full rank,

and

plim(X'u/n) = Σ_Xu ≠ 0,

then

plim b = β + Σ_XX⁻¹ Σ_Xu ≠ β.

So correlation of the disturbance term with one or more of the regressors will make OLS inconsistent. Such correlations can be caused by measurement error in one or more regressors, but there are other possibilities:
- lagged dependent variables
- autoregressive disturbances
- simultaneity

We can get a consistent estimator by instrumental variables. Consider

y = Xβ + u with var(u) = σ²I but plim(X'u/n) ≠ 0.

Suppose we can find a data matrix Z of order n×l where l ≥ k. The variables in Z are correlated with the variables in X,

plim(Z'X/n) = Σ_ZX, a finite matrix of full rank,

and the variables in Z are, in the limit, uncorrelated with u,

plim(Z'u/n) = 0.

Then premultiply both sides of y = Xβ + u by Z':

Z'y = Z'Xβ + Z'u,  var(Z'u) = σ²Z'Z.

If we apply GLS to this transformed equation,

b_IV = [X'Z(Z'Z)⁻¹Z'X]⁻¹ X'Z(Z'Z)⁻¹Z'y = (X'PX)⁻¹X'Py, with P = Z(Z'Z)⁻¹Z'.

Then

V̂ar(b_IV) = σ̂²(X'PX)⁻¹,  σ̂² = (y − Xb_IV)'(y − Xb_IV)/n
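A simulation sketch of the point above: with a regressor correlated with the disturbance, OLS converges to β + Σ_XX⁻¹Σ_Xu rather than β, while IV with a valid instrument stays close to β. The data-generating process below is illustrative.

```python
import numpy as np

# Endogenous regressor: x = z + 0.8u + e, so E(xu) = 0.8 != 0, while the
# instrument z is uncorrelated with u.  OLS is inconsistent; IV is not.
# All numbers are illustrative.
rng = np.random.default_rng(4)
n, beta = 20000, 1.0
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # disturbance
x = z + 0.8 * u + rng.normal(size=n)      # endogenous regressor
y = beta * x + u

X = x[:, None]
Z = z[:, None]
b_ols = np.linalg.solve(X.T @ X, X.T @ y).item()
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y).item()   # l = k case: (Z'X)^{-1}Z'y
print(b_ols, b_iv)   # OLS drifts above beta = 1; IV stays close
```

Here the OLS probability limit is 1 + 0.8/2.64 ≈ 1.30, matching the Σ_XX⁻¹Σ_Xu bias term, while IV recovers β = 1 up to sampling noise.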
(using n or n − k does not matter asymptotically). Now

b_IV = β + (X'PX/n)⁻¹(X'Pu/n),

where

X'PX/n = (X'Z/n)(Z'Z/n)⁻¹(Z'X/n).

Assume the middle term has plim Σ_ZZ⁻¹, so

plim(X'PX/n) = Σ_XZ Σ_ZZ⁻¹ Σ_ZX,

which will be a finite non-singular matrix. Similarly

plim(X'Pu/n) = Σ_XZ Σ_ZZ⁻¹ Σ_Zu = 0,

as the instruments z are assumed uncorrelated with u in the limit. So the IV estimator is consistent.

If l = k, so that Z has the same number of columns as X, then X'Z is k×k and non-singular. Then it all simplifies to

b_IV = (Z'X)⁻¹Z'y,  V̂ar(b_IV) = σ̂²(Z'X)⁻¹(Z'Z)(X'Z)⁻¹.

If l < k then X'PX is singular and the estimator does not exist.

Two Stage Least Squares

A form of IV can be seen as the result of a double application of OLS:

1. Regress each of the variables in the X matrix on Z to give X̂ = Z(Z'Z)⁻¹Z'X = PX.
2. Regress y on X̂ to obtain b_2SLS = (X̂'X̂)⁻¹X̂'y = (X'PX)⁻¹X'Py = b_IV.

As before,

V̂ar(b_IV) = σ̂²(X'PX)⁻¹,  σ̂² = (y − Xb_IV)'(y − Xb_IV)/n.
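The equivalence of the "double OLS" recipe and the one-shot IV formula can be checked numerically: because P is idempotent, X̂'X̂ = X'PX and X̂'y = X'Py. A sketch with illustrative data and l = 3 instruments for k = 2 regressors:

```python
import numpy as np

# Numerical check that 2SLS (regress X on Z, then y on the fitted X-hat)
# reproduces b_IV = (X'PX)^{-1}X'Py.  Data are illustrative.
rng = np.random.default_rng(5)
n = 500
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # l = 3 instruments
u = rng.normal(size=n)
x = Z @ np.array([0.0, 1.0, 0.5]) + 0.6 * u + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])                          # k = 2 regressors
y = X @ np.array([1.0, 2.0]) + u

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # P = Z(Z'Z)^{-1}Z'
b_iv = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)

Xhat = P @ X                                    # stage 1: regress X on Z
b_2sls = np.linalg.solve(Xhat.T @ Xhat, Xhat.T @ y)   # stage 2: y on X-hat
print(b_iv, b_2sls)                             # identical
```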
Choice of instruments. We can use

- variables from X,
- any variable thought exogenous and independent of the disturbances,
- lagged values.

When some of the X variables are used as instruments, we need to partition:

X = [X1 X2] and Z = [X1 Z2],

where X1 is n×r, X2 is n×(k − r) with r < k, and Z2 is n×(p − r). One can show that

X̂ = [X1 X̂2], where X̂2 = Z(Z'Z)⁻¹Z'X2.

So the variables in X1 serve as instruments for themselves, and the remaining second-stage regressors are the fitted values of X2 obtained from regressing X2 on the full set of instruments Z.

The minimum number of instruments is k, including any variables that serve as their own instruments. Asymptotic efficiency increases with the number of instruments, but so does finite-sample bias; if you select n instruments then P = I and you get the biased and inconsistent OLS estimates.

Multicollinearity

We have assumed so far that the explanatory variables are linearly independent. This means that (X'X)⁻¹ exists. Perfect multicollinearity implies that X'X will be a singular matrix with rank less than k, which implies that we do not have unique solutions to the normal equations. As b = (X'X)⁻¹X'y, if X'X is singular then b cannot be estimated. In fact it means that not all regression parameters (the whole vector of parameters) are estimable, only certain linear functions of the β_i.
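A small sketch of the perfect-multicollinearity point: when one column of X is an exact linear combination of the others, X'X is rank-deficient and the normal equations have no unique solution. The regressor matrix below is illustrative.

```python
import numpy as np

# Perfect multicollinearity: the third column is 2*x1 + 3*(constant), an
# exact linear combination of the first two columns, so X'X is singular.
rng = np.random.default_rng(6)
n = 50
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, 2.0 * x1 + 3.0])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))   # rank 2 < k = 3
# np.linalg.solve(XtX, X.T @ y) would fail here: no unique b exists, and
# only certain linear functions of the coefficients are estimable.
```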
More informationChapter Vectors
Chapter 4. Vectors fter readig this chapter you should be able to:. defie a vector. add ad subtract vectors. fid liear combiatios of vectors ad their relatioship to a set of equatios 4. explai what it
More informationEconomics 102C: Advanced Topics in Econometrics 4 - Asymptotics & Large Sample Properties of OLS
Ecoomics 102C: Advaced Topics i Ecoometrics 4 - Asymptotics & Large Sample Properties of OLS Michael Best Sprig 2015 Asymptotics So far we have looked at the fiite sample properties of OLS Relied heavily
More informationStudy the bias (due to the nite dimensional approximation) and variance of the estimators
2 Series Methods 2. Geeral Approach A model has parameters (; ) where is ite-dimesioal ad is oparametric. (Sometimes, there is o :) We will focus o regressio. The fuctio is approximated by a series a ite
More informationApply change-of-basis formula to rewrite x as a linear combination of eigenvectors v j.
Eigevalue-Eigevector Istructor: Nam Su Wag eigemcd Ay vector i real Euclidea space of dimesio ca be uiquely epressed as a liear combiatio of liearly idepedet vectors (ie, basis) g j, j,,, α g α g α g α
More informationSTAT 371 Final Exam Summary
STAT 371 Fial Exam Summary Statistics for Fiace I 1 OLS ad Rβ Log-log model: l Y t = β 1 + β l X t, semi-log model: l Y t = β 1 + β X t, liear model: Y t = β 1 + β X t β, ˆβ is k 1, Y, Ŷ is k, X is k,
More informationLecture 5: Linear Regressions
Lecture 5: Liear Regressios I lecture 2, we itroduced statioary liear time series models. I that lecture, we discussed the data geeratig processes ad their characteristics, assumig that we kow all parameters
More informationDepartment of Mathematics
Departmet of Mathematics Ma 3/103 KC Border Itroductio to Probability ad Statistics Witer 2017 Lecture 19: Estimatio II Relevat textbook passages: Larse Marx [1]: Sectios 5.2 5.7 19.1 The method of momets
More information4. Partial Sums and the Central Limit Theorem
1 of 10 7/16/2009 6:05 AM Virtual Laboratories > 6. Radom Samples > 1 2 3 4 5 6 7 4. Partial Sums ad the Cetral Limit Theorem The cetral limit theorem ad the law of large umbers are the two fudametal theorems
More informationLarge Sample Theory. Convergence. Central Limit Theorems Asymptotic Distribution Delta Method. Convergence in Probability Convergence in Distribution
Large Sample Theory Covergece Covergece i Probability Covergece i Distributio Cetral Limit Theorems Asymptotic Distributio Delta Method Covergece i Probability A sequece of radom scalars {z } = (z 1,z,
More informationDefinitions and Theorems. where x are the decision variables. c, b, and a are constant coefficients.
Defiitios ad Theorems Remember the scalar form of the liear programmig problem, Miimize, Subject to, f(x) = c i x i a 1i x i = b 1 a mi x i = b m x i 0 i = 1,2,, where x are the decisio variables. c, b,
More informationIntroduction to Econometrics (3 rd Updated Edition) Solutions to Odd- Numbered End- of- Chapter Exercises: Chapter 3
Itroductio to Ecoometrics (3 rd Updated Editio) by James H. Stock ad Mark W. Watso Solutios to Odd- Numbered Ed- of- Chapter Exercises: Chapter 3 (This versio August 17, 014) 015 Pearso Educatio, Ic. Stock/Watso
More informationLinear Regression Models, OLS, Assumptions and Properties
Chapter 2 Liear Regressio Models, OLS, Assumptios ad Properties 2.1 The Liear Regressio Model The liear regressio model is the sigle most useful tool i the ecoometricia s kit. The multiple regressio model
More information1 Covariance Estimation
Eco 75 Lecture 5 Covariace Estimatio ad Optimal Weightig Matrices I this lecture, we cosider estimatio of the asymptotic covariace matrix B B of the extremum estimator b : Covariace Estimatio Lemma 4.
More informationThe Basic Space Model
The Basic Space Model Let x i be the ith idividual s (i=,, ) reported positio o the th issue ( =,, m) ad let X 0 be the by m matrix of observed data here the 0 subscript idicates that elemets are missig
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 9 Multicolliearity Dr Shalabh Departmet of Mathematics ad Statistics Idia Istitute of Techology Kapur Multicolliearity diagostics A importat questio that
More informationLecture 19: Convergence
Lecture 19: Covergece Asymptotic approach I statistical aalysis or iferece, a key to the success of fidig a good procedure is beig able to fid some momets ad/or distributios of various statistics. I may
More informationLecture 2: Monte Carlo Simulation
STAT/Q SCI 43: Itroductio to Resamplig ethods Sprig 27 Istructor: Ye-Chi Che Lecture 2: ote Carlo Simulatio 2 ote Carlo Itegratio Assume we wat to evaluate the followig itegratio: e x3 dx What ca we do?
More information
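The restricted-versus-unrestricted F test derived above, (RRSS − URSS)/r divided by URSS/(n − k) ~ F(r, n − k) under the null, can be illustrated numerically. This is a minimal sketch with simulated data; the variable names and the simulated design are mine, not from the notes, and the single restriction tested is H0: the third coefficient equals zero (so the restricted model just drops that regressor).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y = X*beta + u with n observations and k regressors.
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, 0.0])  # third coefficient truly zero
y = X @ beta_true + rng.normal(size=n)

# Unrestricted OLS: b = (X'X)^{-1} X'y, and its residual sum of squares.
b = np.linalg.solve(X.T @ X, X.T @ y)
urss = float((y - X @ b) @ (y - X @ b))

# One restriction (r = 1): R*beta = d with R = (0 0 1), d = 0.
# Imposing it on the data means estimating the model without column 3.
Xr = X[:, :2]
br = np.linalg.solve(Xr.T @ Xr, Xr.T @ y)
rrss = float((y - Xr @ br) @ (y - Xr @ br))

# F statistic: increase in RSS per restriction, relative to the
# unrestricted residual variance estimate.
r = 1
F = ((rrss - urss) / r) / (urss / (n - k))
print(F)
```

Because the restricted model is nested in the unrestricted one, RRSS ≥ URSS always holds, so the statistic is non-negative; it is large when imposing the restriction costs a lot of fit per degree of freedom, matching the interpretation in the notes.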