Linear Model Under General Variance

We have a sample of T random variables y_1, y_2, ..., y_T satisfying the linear model

Y = X β + e,

where Y = (y_1, ..., y_T)' is a (T × 1) vector of random variables, X is a (T × K) matrix of explanatory variables, β is a (K × 1) vector of parameters, and e = (e_1, ..., e_T)' is a (T × 1) error term vector. Under the classical linear model we have assumed:

Assumption A3: the e_t's are independently distributed.
Assumption A4: V(e) = σ² I_T, implying that V(e_t) = σ², t = 1, ..., T (homoscedasticity).

Let's relax assumptions A3 and A4 and consider the more general case where

V(e) = σ² ψ,

where σ² is a positive scalar and ψ is a (T × T) symmetric, positive-definite matrix. Under this structure:

- The variance of e is proportional to the matrix ψ.
- The variance of e_t is allowed to vary across observations.
- The covariance between e_t and e_t*, t ≠ t*, is allowed to be non-zero.
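
To make the structure concrete, here is a minimal numpy sketch (an illustration, not part of the notes) of two familiar special cases of ψ: a diagonal ψ, which lets the variance of e_t vary across observations (heteroskedasticity), and an AR(1)-type ψ, which keeps the variances equal but makes errors in nearby periods correlated (autocorrelation). The weights w and the correlation ρ are arbitrary illustrative values.

```python
import numpy as np

T = 5

# Heteroskedasticity: a diagonal psi makes V(e_t) = sigma^2 * w_t vary
# across observations while all covariances stay zero.
w = np.array([1.0, 2.0, 0.5, 4.0, 1.5])   # illustrative weights
psi_hetero = np.diag(w)

# Autocorrelation: an AR(1)-type psi with psi[t, s] = rho**|t - s| gives
# every e_t the same variance but non-zero covariance across periods.
rho = 0.6
idx = np.arange(T)
psi_ar1 = rho ** np.abs(idx[:, None] - idx[None, :])

# Both matrices are symmetric positive definite, as the model requires.
for psi in (psi_hetero, psi_ar1):
    assert np.all(np.linalg.eigvalsh(psi) > 0)
```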

A Reformulation of the Standard Model

Consider the model Y = X β + e, where V(e) = σ² ψ (model M). The (T × T) matrix ψ being symmetric and positive definite, it is non-singular and its inverse can be written as

ψ⁻¹ = P' P,

where P is a (T × T) non-singular matrix. This is called the Cholesky decomposition of ψ⁻¹. It follows that ψ = (P' P)⁻¹ = P⁻¹ (P')⁻¹, or P ψ P' = I_T.

Let X* = P X, Y* = P Y, and e* = P e. Premultiplying model M by P gives P Y = P X β + P e, or

Y* = X* β + e* (model M*).

With P being a non-singular matrix, models M and M* are informationally equivalent. While V(e) = σ² ψ does not satisfy A3-A4 when ψ ≠ I_T, we have

V(e*) = V(P e) = P V(e) P' = P (σ² ψ) P' = σ² P ψ P' = σ² P P⁻¹ (P')⁻¹ P' = σ² I_T, since ψ = P⁻¹ (P')⁻¹,

which satisfies assumptions A3-A4. While model M does not satisfy conditions A3-A4, model M* does. This implies that all the results obtained for the classical linear model apply to model M*.
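
The transformation is straightforward to carry out numerically. A hedged sketch, assuming numpy is available: numpy.linalg.cholesky factors ψ⁻¹ as L L' with L lower triangular, so P = L' satisfies ψ⁻¹ = P' P. The data and the helper name whiten are illustrative choices of mine.

```python
import numpy as np

def whiten(Y, X, psi):
    """Transform model M into model M* using a factor P with P'P = psi^{-1}."""
    L = np.linalg.cholesky(np.linalg.inv(psi))  # psi^{-1} = L L'
    P = L.T                                     # hence P'P = L L' = psi^{-1}
    return P @ Y, P @ X, P

# Illustrative check that P psi P' = I_T, using an AR(1)-type psi.
T = 5
idx = np.arange(T)
psi = 0.6 ** np.abs(idx[:, None] - idx[None, :])
rng = np.random.default_rng(0)
Y = rng.standard_normal(T)
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
Y_star, X_star, P = whiten(Y, X, psi)
assert np.allclose(P @ psi @ P.T, np.eye(T))
```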

Estimation of β Under the General Variance Structure

Consider the error sum of squares for model M*:

S = e*' e* = (P e)' (P e) = e' P' P e = e' ψ⁻¹ e, since P' P = ψ⁻¹.

The term S = e' ψ⁻¹ e is called the weighted error sum of squares, where the weights involve the inverse of the ψ matrix (which is proportional to the variance of e). The value of β that minimizes S is β_g, the weighted least squares or generalized least squares estimator of β. β_g is simply the least squares estimator under model M*:

β_g = (X*' X*)⁻¹ X*' Y*, since β_g is the least squares estimator of β in M*,
= [(P X)' P X]⁻¹ (P X)' P Y
= [X' P' P X]⁻¹ X' P' P Y
= [X' ψ⁻¹ X]⁻¹ X' ψ⁻¹ Y, since P' P = ψ⁻¹.

The generalized least squares estimator of β is β_g = [X' ψ⁻¹ X]⁻¹ X' ψ⁻¹ Y.

Properties of β_g

Since β_g is simply the least squares estimator of β in model M*, and model M* satisfies all assumptions of the classical linear model, all of the results obtained for the classical linear model apply to model M*:

- β_g is also the maximum likelihood estimator of β in model M* when e is normally distributed.
- β_g is an unbiased estimator of β: E(β_g) = β.
- V(β_g) = σ² [X*' X*]⁻¹ = σ² [(P X)' (P X)]⁻¹ = σ² [X' P' P X]⁻¹ or, given ψ⁻¹ = P' P, V(β_g) = σ² [X' ψ⁻¹ X]⁻¹.
- β_g is the best linear unbiased estimator (BLUE) of β, implying that it is efficient in finite samples (its variance is smallest among all linear unbiased estimators).
- In large samples, β_g is a consistent estimator of β, an asymptotically efficient estimator of β, and asymptotically normal, with T^(1/2) (β_g − β) →d N(0, T σ² (X' ψ⁻¹ X)⁻¹) (convergence in distribution), or β_g ≈ N(β, σ² (X' ψ⁻¹ X)⁻¹) as T → ∞.

A Comparison of β_s and β_g

We have β_s = (X' X)⁻¹ X' Y as the least squares estimator of β, and β_g = [X' ψ⁻¹ X]⁻¹ X' ψ⁻¹ Y as the generalized least squares estimator of β. In general, β_s ≠ β_g whenever the matrix ψ is not proportional to the identity matrix I_T. If ψ ≠ I_T, on what basis can we choose between these two estimators of β in model M?
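
The two routes to β_g (the direct formula, and ordinary least squares applied to the transformed model M*) can be checked against each other numerically. A minimal sketch, with an illustrative AR(1)-type ψ and simulated data of my own choosing (σ² = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
idx = np.arange(T)
psi = 0.6 ** np.abs(idx[:, None] - idx[None, :])        # illustrative psi
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
e = np.linalg.cholesky(psi) @ rng.standard_normal(T)    # so that V(e) = psi
Y = X @ np.array([1.0, 2.0]) + e

# Direct formula: beta_g = (X' psi^{-1} X)^{-1} X' psi^{-1} Y.
psi_inv = np.linalg.inv(psi)
beta_g = np.linalg.solve(X.T @ psi_inv @ X, X.T @ psi_inv @ Y)

# Same estimate via OLS on the transformed model M*.
P = np.linalg.cholesky(psi_inv).T                       # P'P = psi^{-1}
beta_star = np.linalg.lstsq(P @ X, P @ Y, rcond=None)[0]
assert np.allclose(beta_g, beta_star)
```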

Bias of β_s Under the General Variance Structure

We have

E(β_s) = E[(X' X)⁻¹ X' Y] = (X' X)⁻¹ X' E(Y) = (X' X)⁻¹ X' E[X β + e] = (X' X)⁻¹ X' X β + (X' X)⁻¹ X' E(e) = β, since E(e) = 0.

Thus, the least squares estimator β_s is an unbiased estimator of β under model M.

Variance of β_s Under the General Variance Structure

Given E(β_s) = β under the general variance model M, V(β_s) = E[(β_s − β)(β_s − β)']. Since

β_s − β = (X' X)⁻¹ X' Y − β = (X' X)⁻¹ X' (X β + e) − β = (X' X)⁻¹ X' e,

it follows that V(β_s) = E[(X' X)⁻¹ X' e e' X (X' X)⁻¹] = (X' X)⁻¹ X' E(e e') X (X' X)⁻¹. Given E(e e') = V(e) = σ² ψ,

V(β_s) = σ² (X' X)⁻¹ X' ψ X (X' X)⁻¹.

In summary:

- When ψ = I_T, then β_g = β_s = (X' X)⁻¹ X' Y, and V(β_g) = V(β_s) = σ² (X' ψ⁻¹ X)⁻¹ = σ² (X' X)⁻¹ X' ψ X (X' X)⁻¹ = σ² (X' X)⁻¹.
- When ψ ≠ I_T under model M, we have β_g = (X' ψ⁻¹ X)⁻¹ X' ψ⁻¹ Y ≠ β_s in general, and V(β_g) = σ² (X' ψ⁻¹ X)⁻¹ ≤ V(β_s), where "≤" is in the matrix sense: V(β_s) − V(β_g) is positive semi-definite.

Efficiency of β_s Under the General Variance Structure

Applying the Gauss-Markov theorem to model M* implies:

- β_g = (X' ψ⁻¹ X)⁻¹ X' ψ⁻¹ Y is the best linear unbiased estimator (BLUE) of β in M.
- β_s = (X' X)⁻¹ X' Y is another linear unbiased estimator, so V(β_g) = σ² (X' ψ⁻¹ X)⁻¹ ≤ V(β_s).

In general, β_s is an inefficient estimator of β in model M (its variance is large compared to the variance of β_g).
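
The Gauss-Markov ranking can be verified numerically. A sketch (ψ, X, and σ² are illustrative values of mine) that builds both covariance matrices and checks that their difference is positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50
idx = np.arange(T)
psi = 0.6 ** np.abs(idx[:, None] - idx[None, :])      # illustrative psi
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
sigma2 = 2.0

psi_inv = np.linalg.inv(psi)
XtX_inv = np.linalg.inv(X.T @ X)
V_s = sigma2 * XtX_inv @ X.T @ psi @ X @ XtX_inv      # sandwich form for OLS
V_g = sigma2 * np.linalg.inv(X.T @ psi_inv @ X)       # GLS variance

# Gauss-Markov: V(beta_s) - V(beta_g) must be positive semi-definite.
assert np.all(np.linalg.eigvalsh(V_s - V_g) >= -1e-10)
print(np.diag(V_s), np.diag(V_g))                     # per-coefficient variances
```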

Consistency of β_s Under the General Variance Structure

Since β_s is an unbiased estimator of β, it is also asymptotically unbiased. Assume that, as T → ∞, (X' X/T) and (X' ψ X/T) each converge to a finite, non-singular matrix. This implies that

V(β_s) = σ² (X' X)⁻¹ X' ψ X (X' X)⁻¹ = (1/T) σ² (X' X/T)⁻¹ (X' ψ X/T) (X' X/T)⁻¹ → 0 as T → ∞.

Together with asymptotic unbiasedness, this implies that β_s is a consistent estimator of β in model M.

Estimation of σ² Under the General Variance Structure

When ψ ≠ I_T, model M does not satisfy the conditions of the classical linear model. This implies that

σ_l² = (Y − X β_s)' (Y − X β_s)/T, and
σ_u² = (Y − X β_s)' (Y − X β_s)/(T − K)

are in general biased and inconsistent estimators of σ². With models M and M* being informationally equivalent and model M* satisfying the conditions of the classical linear model, we can apply the results obtained for the classical linear model to M*:

σ_gl² = (Y* − X* β_g)' (Y* − X* β_g)/T is a biased but consistent estimator of σ², where β_g = (X*' X*)⁻¹ X*' Y* = (X' ψ⁻¹ X)⁻¹ X' ψ⁻¹ Y. This implies

σ_gl² = (P Y − P X β_g)' (P Y − P X β_g)/T = (Y − X β_g)' P' P (Y − X β_g)/T,

and, since P' P = ψ⁻¹,

σ_gl² = (Y − X β_g)' ψ⁻¹ (Y − X β_g)/T,

a biased but consistent estimator of σ². The results of the classical linear model applied to M* likewise give

σ_gu² = (Y* − X* β_g)' (Y* − X* β_g)/(T − K)

as an unbiased and consistent estimator of σ², with σ_gu² = (P Y − P X β_g)' (P Y − P X β_g)/(T − K) = (Y − X β_g)' P' P (Y − X β_g)/(T − K). Since P' P = ψ⁻¹,

σ_gu² = (Y − X β_g)' ψ⁻¹ (Y − X β_g)/(T − K)

is an unbiased and consistent estimator of σ². When ψ ≠ I_T, in general σ_gl² ≠ σ_l² and σ_gu² ≠ σ_u², with only the generalized least squares estimators being consistent estimators of σ².
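
A short sketch of the unbiased estimator σ_gu² in code, on simulated data with a known σ² of my choosing so the output can be sanity-checked:

```python
import numpy as np

rng = np.random.default_rng(2)
T, K = 200, 2
idx = np.arange(T)
psi = 0.6 ** np.abs(idx[:, None] - idx[None, :])       # illustrative psi
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
sigma2 = 2.0
e = np.sqrt(sigma2) * np.linalg.cholesky(psi) @ rng.standard_normal(T)
Y = X @ np.array([1.0, 2.0]) + e                       # V(e) = sigma2 * psi

psi_inv = np.linalg.inv(psi)
beta_g = np.linalg.solve(X.T @ psi_inv @ X, X.T @ psi_inv @ Y)
u = Y - X @ beta_g
sigma2_gu = u @ psi_inv @ u / (T - K)   # unbiased, consistent for sigma^2
print(sigma2_gu)                        # close to 2.0 for a typical draw
```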

Prediction Under the General Variance Structure

Let the sample information based on T observations be Y = X β + e, where Y is (T × 1), X is (T × K), e is (T × 1), and e ~ (0, σ² ψ). Consider a prediction scenario where the intent is to anticipate a new and unknown Y₀ given known explanatory variables X₀, where Y₀ is generated by

Y₀ = X₀ β + e₀,

with Y₀ being (T₀ × 1), X₀ being (T₀ × K), and e₀ = Y₀ − X₀ β being (T₀ × 1), where e₀ ~ (0, σ² ψ₀) and Cov(e, e₀) = σ² C, with C a (T × T₀) matrix. Note that when C ≠ 0, this allows for non-zero covariance between the error term of the sample and the error term of the prediction. The variance of (e, e₀) is

V(e, e₀) = σ² Ψ, where Ψ = [ψ, C; C', ψ₀]

(written in blocks, with ψ in the upper-left and ψ₀ in the lower-right) is a symmetric, positive-definite (and thus non-singular) matrix.

An Alternative Formulation

Consider the Cholesky-type decomposition P of Ψ⁻¹, P' P = Ψ⁻¹, where

P = [P₁, 0; P₂, P₃]

is a ((T + T₀) × (T + T₀)) non-singular, block lower-triangular matrix. This implies Ψ = P⁻¹ (P')⁻¹, or P Ψ P' = I_(T+T₀):

[P₁, 0; P₂, P₃] [ψ, C; C', ψ₀] [P₁', P₂'; 0, P₃'] = [I_T, 0; 0, I_T₀].

The lower-left block gives (P₂ ψ + P₃ C') P₁' = 0, or P₂ ψ = −P₃ C', or −P₃⁻¹ P₂ = C' ψ⁻¹.

Consider model Q:

[Y; Y₀] = [X; X₀] β + [e; e₀].
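
Such a block factor can be built explicitly from the Schur complement ψ₀ − C' ψ⁻¹ C. A numerical sketch (assuming scipy is available; the joint matrix Ψ is an arbitrary positive-definite example) that constructs P and verifies P Ψ P' = I:

```python
import numpy as np
from scipy.linalg import cholesky

rng = np.random.default_rng(3)
T, T0 = 5, 2
A = rng.standard_normal((T + T0, T + T0))
Psi = A @ A.T + (T + T0) * np.eye(T + T0)   # illustrative SPD joint matrix
psi, C, psi0 = Psi[:T, :T], Psi[:T, T:], Psi[T:, T:]

psi_inv = np.linalg.inv(psi)
P1 = cholesky(psi_inv)                  # upper factor: P1' P1 = psi^{-1}
S = psi0 - C.T @ psi_inv @ C            # Schur complement psi0 - C' psi^{-1} C
P3 = cholesky(np.linalg.inv(S))         # P3' P3 = S^{-1}
P2 = -P3 @ C.T @ psi_inv                # enforces -P3^{-1} P2 = C' psi^{-1}
P = np.block([[P1, np.zeros((T, T0))], [P2, P3]])

assert np.allclose(P @ Psi @ P.T, np.eye(T + T0))   # P Psi P' = I
assert np.allclose(P.T @ P, np.linalg.inv(Psi))     # P' P = Psi^{-1}
```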

Premultiplying model Q by P results in model Q*:

P [Y; Y₀] = P [X; X₀] β + P [e; e₀], or [Y*; Y₀*] = [X*; X₀*] β + [e*; e₀*],

where Y* = P₁ Y, Y₀* = P₂ Y + P₃ Y₀, X* = P₁ X, X₀* = P₂ X + P₃ X₀, e* = P₁ e, and e₀* = P₂ e + P₃ e₀. Note that, since the matrix P is non-singular, models Q and Q* are informationally equivalent.

Predicting Y₀ Under the General Variance Structure

V(e*, e₀*) = P V(e, e₀) P' = σ² P Ψ P' = σ² I_(T+T₀). Thus Q* satisfies all the assumptions of the traditional linear regression model. It follows that X₀* β_g is the best linear unbiased predictor of Y₀*, where E(Y₀* − X₀* β_g) = 0 and X₀* β_g has the smallest variance among linear unbiased predictors of Y₀*. With Y₀* = P₂ Y + P₃ Y₀ and X₀* = P₂ X + P₃ X₀, we have Y₀ = P₃⁻¹ Y₀* − P₃⁻¹ P₂ Y, and the corresponding predictor of Y₀ is

P₃⁻¹ [X₀* β_g] − P₃⁻¹ P₂ Y = P₃⁻¹ [(P₂ X + P₃ X₀) β_g] − P₃⁻¹ P₂ Y = X₀ β_g − P₃⁻¹ P₂ [Y − X β_g].

Since −P₃⁻¹ P₂ = C' ψ⁻¹, this implies that the best linear unbiased predictor of Y₀ is

X₀ β_g + C' ψ⁻¹ [Y − X β_g].

This predictor is unbiased in the sense that E[Y₀ − (X₀ β_g + C' ψ⁻¹ [Y − X β_g])] = 0. And it is best in the sense that it has the smallest possible variance among all linear unbiased predictors of Y₀. The predictor of Y₀ reduces to X₀ β_g if C = 0, but differs from X₀ β_g if C ≠ 0.
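
The predictor itself needs neither P nor the transformed model: it uses only β_g, C, and ψ. A hedged sketch on simulated data (ψ known and σ² = 1; predicting the next observation of an AR(1)-type error process is a setting of my choosing):

```python
import numpy as np

rng = np.random.default_rng(4)
T, T0 = 100, 1
idx = np.arange(T + T0)
Psi = 0.6 ** np.abs(idx[:, None] - idx[None, :])   # illustrative joint Psi
psi, C = Psi[:T, :T], Psi[:T, T:]                  # sample block and Cov(e, e0)

X_all = np.column_stack([np.ones(T + T0), rng.standard_normal(T + T0)])
X, X0 = X_all[:T], X_all[T:]
e_all = np.linalg.cholesky(Psi) @ rng.standard_normal(T + T0)
Y_all = X_all @ np.array([1.0, 2.0]) + e_all
Y, Y0 = Y_all[:T], Y_all[T:]                       # Y0 is the target

psi_inv = np.linalg.inv(psi)
beta_g = np.linalg.solve(X.T @ psi_inv @ X, X.T @ psi_inv @ Y)

# BLUP: X0 beta_g + C' psi^{-1} (Y - X beta_g). The second term exploits the
# correlation between the sample errors and the prediction-period error.
Y0_hat = X0 @ beta_g + C.T @ psi_inv @ (Y - X @ beta_g)
print(Y0_hat, Y0)
```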

The prediction error is ε = Y₀ − (X₀ β_g + C' ψ⁻¹ [Y − X β_g]). The variance of the prediction error is

V(ε) = V(Y₀ − X₀ β_g − C' ψ⁻¹ [Y − X β_g])
= V[P₃⁻¹ (Y₀* − X₀* β_g)], since −P₃⁻¹ P₂ = C' ψ⁻¹,
= P₃⁻¹ V(Y₀* − X₀* β_g) (P₃')⁻¹
= σ² P₃⁻¹ [I_T₀ + X₀* (X*' X*)⁻¹ X₀*'] (P₃')⁻¹, using results from the classical linear model applied to model Q*,
= σ² [P₃⁻¹ (P₃')⁻¹ + P₃⁻¹ X₀* (X*' X*)⁻¹ X₀*' (P₃')⁻¹]
= σ² [ψ₀ − C' ψ⁻¹ C + (P₃⁻¹ P₂ X + X₀)(X*' X*)⁻¹ (X' P₂' (P₃')⁻¹ + X₀')] (proving this step is a little tedious)
= σ² [ψ₀ − C' ψ⁻¹ C + (X₀ − C' ψ⁻¹ X)(X' ψ⁻¹ X)⁻¹ (X₀' − X' ψ⁻¹ C)].

Note that the variance of the prediction error satisfies V(ε) = σ² [ψ₀ + X₀ (X' ψ⁻¹ X)⁻¹ X₀'] if C = 0, but V(ε) ≠ σ² [ψ₀ + X₀ (X' ψ⁻¹ X)⁻¹ X₀'] if C ≠ 0.

Hypothesis Testing Under the General Variance Structure

Assume we have the model Y = X β + e, where e ~ (0, σ² ψ). Consider a hypothesis consisting of J linear restrictions on β:

Null hypothesis H₀: R β = r
Alternative hypothesis H₁: R β ≠ r,

where R is a known (J × K) matrix of rank J and r is a known (J × 1) vector. With the (T × T) matrix ψ known, the unrestricted generalized least squares estimator of β is β_g = [X' ψ⁻¹ X]⁻¹ X' ψ⁻¹ Y. We have shown that β_g is an unbiased, consistent, and efficient estimator of β. An unbiased estimator of σ² is σ_gu² = (Y − X β_g)' ψ⁻¹ (Y − X β_g)/(T − K), and an unbiased estimator of the variance of β_g is σ_gu² [X' ψ⁻¹ X]⁻¹.
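
The "tedious" step can be checked numerically: the block-matrix expression σ² P₃⁻¹ [I + X₀* (X*' X*)⁻¹ X₀*'] (P₃')⁻¹ and the closed form in ψ, ψ₀, and C should coincide. A sketch (scipy assumed; all matrices are illustrative):

```python
import numpy as np
from scipy.linalg import cholesky

rng = np.random.default_rng(5)
T, T0 = 8, 2
A = rng.standard_normal((T + T0, T + T0))
Psi = A @ A.T + (T + T0) * np.eye(T + T0)     # illustrative SPD joint Psi
psi, C, psi0 = Psi[:T, :T], Psi[:T, T:], Psi[T:, T:]
X, X0 = rng.standard_normal((T, 2)), rng.standard_normal((T0, 2))
sigma2 = 1.5

psi_inv = np.linalg.inv(psi)
XpX_inv = np.linalg.inv(X.T @ psi_inv @ X)

# Closed form: sigma^2 [psi0 - C'psi^{-1}C + D (X'psi^{-1}X)^{-1} D'],
# with D = X0 - C' psi^{-1} X.
D = X0 - C.T @ psi_inv @ X
V_closed = sigma2 * (psi0 - C.T @ psi_inv @ C + D @ XpX_inv @ D.T)

# Same quantity via the transformed model Q*.
P1 = cholesky(psi_inv)                        # P1' P1 = psi^{-1}
P3 = cholesky(np.linalg.inv(psi0 - C.T @ psi_inv @ C))
P2 = -P3 @ C.T @ psi_inv
X_star, X0_star = P1 @ X, P2 @ X + P3 @ X0
P3_inv = np.linalg.inv(P3)
M = np.eye(T0) + X0_star @ np.linalg.inv(X_star.T @ X_star) @ X0_star.T
V_blocks = sigma2 * P3_inv @ M @ P3_inv.T
assert np.allclose(V_closed, V_blocks)
```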

Under the null hypothesis H₀, the restricted generalized least squares estimator of β is

β_gr = β_g + C R' [R C R']⁻¹ [r − R β_g], where C = [X' ψ⁻¹ X]⁻¹ (here C denotes this (K × K) matrix, not the covariance matrix of the prediction section).

Applying the results of the classical linear model to model M* gives the test statistic

λ = (WSSE_R − WSSE_U)/(J σ_gu²)
= (β_g − β_gr)' X' ψ⁻¹ X (β_g − β_gr)/(J σ_gu²)
= (R β_g − r)' [R (X' ψ⁻¹ X)⁻¹ R']⁻¹ (R β_g − r)/(J σ_gu²),

where WSSE_R = (Y − X β_gr)' ψ⁻¹ (Y − X β_gr) is the weighted restricted error sum of squares and WSSE_U = (Y − X β_g)' ψ⁻¹ (Y − X β_g) is the weighted unrestricted error sum of squares. Under H₀, and assuming that e ~ N(0, σ² ψ), the test statistic λ is distributed as F(J, T−K). With normality, the following test procedure can be undertaken:

- Choose the significance level α = P(type-I error).
- Find λ_c satisfying α = P(F(J, T−K) ≥ λ_c).
- Reject H₀ if λ > λ_c; accept H₀ if λ ≤ λ_c.

If J = 1, consider using λ^(1/2):

t = (R β_g − r)/[σ_gu (R (X' ψ⁻¹ X)⁻¹ R')^(1/2)], with t ~ t(T−K) under H₀,

which implies the following equivalent test procedure:

- Choose the significance level α = P(type-I error).
- Find t_c satisfying α/2 = P(t(T−K) ≥ t_c).
- Reject H₀ if |t| > t_c; accept H₀ if |t| ≤ t_c.

Estimation of β, σ², and ψ When ψ Is Not Known

We have discussed the estimation of β and σ², assuming the (T × T) matrix ψ is known. We proposed the unbiased estimators β_g = [X' ψ⁻¹ X]⁻¹ X' ψ⁻¹ Y and σ_gu² = (Y − X β_g)' ψ⁻¹ (Y − X β_g)/(T − K). Note that both estimators depend on ψ. This is fine if ψ is known. However, it creates a problem if ψ is not known to the investigator. In that case, our proposed estimators are not empirically tractable (since they depend on the unknown ψ).
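
Here is a hedged sketch of the F test (scipy assumed for the F distribution; the data-generating process and the restriction tested are illustrative, with H₀ true by construction, so the p-value should be roughly uniform across seeds):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
T, K, J = 100, 2, 1
idx = np.arange(T)
psi = 0.6 ** np.abs(idx[:, None] - idx[None, :])       # illustrative known psi
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
e = np.linalg.cholesky(psi) @ rng.standard_normal(T)
Y = X @ np.array([1.0, 2.0]) + e

R, r = np.array([[0.0, 1.0]]), np.array([2.0])         # H0: second coefficient = 2

psi_inv = np.linalg.inv(psi)
XpX_inv = np.linalg.inv(X.T @ psi_inv @ X)
beta_g = XpX_inv @ X.T @ psi_inv @ Y
u = Y - X @ beta_g
sigma2_gu = u @ psi_inv @ u / (T - K)

d = R @ beta_g - r
lam = d @ np.linalg.solve(R @ XpX_inv @ R.T, d) / (J * sigma2_gu)
p_value = stats.f.sf(lam, J, T - K)                    # reject H0 if p_value < alpha
print(lam, p_value)
```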

Let's consider the case where the (T × T) matrix ψ is unknown and needs to be estimated. Thus, given a sample Y, we look for estimators βᵉ of β, (σ²)ᵉ of σ², and ψᵉ of ψ. A simple and intuitive way to proceed is first to choose some estimator ψᵉ of ψ, and then to substitute it into our proposed estimators to obtain

βᵉ = [X' (ψᵉ)⁻¹ X]⁻¹ X' (ψᵉ)⁻¹ Y as an estimator of β, and
(σ²)ᵉ = (Y − X βᵉ)' (ψᵉ)⁻¹ (Y − X βᵉ)/(T − K) as an estimator of σ².

This is the essence of the estimation method discussed below. This simple approach raises some difficult questions in evaluating the statistical properties of the estimators. The reason is that, since βᵉ now depends explicitly on ψᵉ, the estimators βᵉ and ψᵉ are necessarily correlated random variables. Note that this differs significantly from the classical linear model, where β_s and σ_u² conveniently happened to be uncorrelated. This means that the small-sample properties of the estimator can be complex and difficult to establish. However, large-sample properties of the estimator remain available. Being easier to evaluate, such asymptotic properties are what we will rely on extensively.

Some Key Asymptotic Results Under the General Variance Structure

Assume that plim[(X' ψ⁻¹ X)/T] is a (K × K) finite, non-singular matrix. Let ψᵉ be a consistent estimator of ψ. Then

plim[(X' (ψᵉ)⁻¹ X)/T] = plim[(X' ψ⁻¹ X)/T], and
plim[(X' (ψᵉ)⁻¹ e)/T^(1/2)] = plim[(X' ψ⁻¹ e)/T^(1/2)].

The estimator β_fg = [X' (ψᵉ)⁻¹ X]⁻¹ X' (ψᵉ)⁻¹ Y is called the feasible generalized least squares estimator of β. When ψᵉ is a consistent estimator of ψ, it can be shown that β_fg has the same asymptotic distribution as β_g = [X' ψ⁻¹ X]⁻¹ X' ψ⁻¹ Y.
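
In code, feasible GLS is just the GLS formula with an estimate ψᵉ plugged in. A minimal sketch (the function name and return values are my own packaging); any consistent ψᵉ can be passed in, and the three-step procedure below shows one way to construct it:

```python
import numpy as np

def fgls(Y, X, psi_e):
    """Feasible GLS: plug a consistent estimate psi_e of psi into
    beta_fg = [X'(psi_e)^{-1}X]^{-1} X'(psi_e)^{-1} Y, and return the
    implied estimates of sigma^2 and of V(beta_fg)."""
    T, K = X.shape
    psi_inv = np.linalg.inv(psi_e)
    XpX_inv = np.linalg.inv(X.T @ psi_inv @ X)
    beta_fg = XpX_inv @ X.T @ psi_inv @ Y
    u = Y - X @ beta_fg
    sigma2_e = u @ psi_inv @ u / (T - K)
    return beta_fg, sigma2_e, sigma2_e * XpX_inv   # last term estimates V(beta_fg)
```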

This is an important result, since we already know the asymptotic properties of β_g. It implies the following asymptotic properties for the feasible generalized least squares estimator β_fg. When ψᵉ is a consistent estimator of ψ, the estimator β_fg of β is:

- asymptotically unbiased,
- consistent,
- asymptotically efficient,
- asymptotically normal, with T^(1/2) (β_fg − β) →d N(0, σ² [(X' ψ⁻¹ X)/T]⁻¹), or β_fg ≈ N(β, σ² [X' ψ⁻¹ X]⁻¹) as T → ∞.

A Proposed Estimation Procedure With a General Variance Structure

We propose the following three-step estimation procedure (one illustrative implementation is sketched below):

1. Obtain the least squares estimator β_s = (X' X)⁻¹ X' Y, a consistent estimator of β. From these estimates, generate e_s = Y − X β_s as a consistent estimator of e.
2. Use e_s to obtain consistent estimators ψᵉ of ψ and (σ²)ᵉ of σ².
3. Obtain the feasible generalized least squares estimator β_fg = [X' (ψᵉ)⁻¹ X]⁻¹ X' (ψᵉ)⁻¹ Y.

This estimator β_fg of β is consistent, asymptotically efficient, and satisfies β_fg ≈ N(β, σ² [X' ψ⁻¹ X]⁻¹) as T → ∞. It follows that (σ²)ᵉ [X' (ψᵉ)⁻¹ X]⁻¹ is a consistent estimator of V(β_fg), which can be used to conduct asymptotic tests about β (e.g., using a Wald test).

The above procedure is written in a very general form. How it gets implemented typically depends on the model specification for ψ. For that reason, we proceed with an analysis of more specific models.
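
Since implementation hinges on a specification for ψ, the following sketch commits to one purely for illustration: a multiplicative-heteroskedasticity model V(e_t) = σ² exp(γ z_t) with an observed variable z_t, estimated in step 2 by a Harvey-style regression of log squared residuals on z_t. This specification is my choice, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500
z = rng.uniform(0.0, 2.0, T)                       # observed variance driver
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
beta_true, sigma2_true, gamma_true = np.array([1.0, 2.0]), 0.5, 1.0
e = rng.standard_normal(T) * np.sqrt(sigma2_true * np.exp(gamma_true * z))
Y = X @ beta_true + e

# Step 1: OLS, then residuals e_s as a consistent estimator of e.
beta_s = np.linalg.lstsq(X, Y, rcond=None)[0]
e_s = Y - X @ beta_s

# Step 2: regress log(e_s^2) on (1, z); the slope consistently estimates
# gamma, which pins down psi_e up to the scale factor sigma^2.
Z = np.column_stack([np.ones(T), z])
slope = np.linalg.lstsq(Z, np.log(e_s**2), rcond=None)[0][1]
psi_e = np.diag(np.exp(slope * z))                 # psi needed only up to scale

# Step 3: feasible GLS with psi_e plugged in.
psi_inv = np.linalg.inv(psi_e)
XpX_inv = np.linalg.inv(X.T @ psi_inv @ X)
beta_fg = XpX_inv @ X.T @ psi_inv @ Y
u = Y - X @ beta_fg
sigma2_e = u @ psi_inv @ u / (T - X.shape[1])
V_beta_fg = sigma2_e * XpX_inv                     # basis for Wald-type tests
print(beta_fg, np.sqrt(np.diag(V_beta_fg)))
```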
