PANEL DATA: RANDOM AND FIXED EFFECTS MODEL. Professor Menelaos Karanasos. December 2011.
1 PANEL DATA: RANDOM AND FIXED EFFECTS MODEL
Professor Menelaos Karanasos, December 2011
2 PANEL DATA

NOTATION. $y_{it}$ is the value of the dependent variable for cross-section unit $i$ at time $t$, where $i = 1, \ldots, n$ and $t = 1, \ldots, T$. $X^j_{it}$ is the value of the $j$th explanatory variable for unit $i$ at time $t$, where $j = 1, \ldots, K$; that is, there are $K$ explanatory variables, and the superscript $j$ denotes an explanatory variable.

BALANCED PANELS. We restrict our discussion to estimation with balanced panels. That is, we have the same number of observations on each cross-section unit, so that the total number of observations is $nT$. When $n = 1$ and $T$ is large we have the familiar time-series case. Likewise, when $T = 1$ and $n$ is large we have cross-section data. Panel data estimation refers to cases where $n > 1$ and $T > 1$.
3 POOLED ESTIMATOR

The most common way of organizing the data is by decision units. Thus, let

$$\underset{T\times 1}{y_i} = \begin{bmatrix} y_{i1} \\ y_{i2} \\ \vdots \\ y_{iT} \end{bmatrix}, \quad \underset{T\times 1}{\varepsilon_i} = \begin{bmatrix} \varepsilon_{i1} \\ \varepsilon_{i2} \\ \vdots \\ \varepsilon_{iT} \end{bmatrix}, \quad \underset{T\times K}{X_i} = \begin{bmatrix} X^1_{i1} & X^2_{i1} & \cdots & X^K_{i1} \\ X^1_{i2} & X^2_{i2} & \cdots & X^K_{i2} \\ \vdots & & & \vdots \\ X^1_{iT} & X^2_{iT} & \cdots & X^K_{iT} \end{bmatrix}.$$

That is, $X_i$ is a $T \times K$ matrix.
4 Often the data are stacked to form

$$\underset{nT\times 1}{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad \underset{nT\times K}{X} = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}, \quad \underset{nT\times 1}{\varepsilon} = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix},$$

where $y$ and $\varepsilon$ are $nT \times 1$ vectors, and $X$ is an $nT \times K$ matrix.
5 MATRIX FORM

The standard model can be expressed as

$$\underset{nT\times 1}{y} = \underset{nT\times K}{X}\,\underset{K\times 1}{b} + \underset{nT\times 1}{\varepsilon}, \qquad (1)$$

where $b = [\,b_1\;\; b_2\;\; \cdots\;\; b_K\,]'$ is $K \times 1$. The models we discuss next are all variants of the standard linear model given by equation (1). The simplest estimation method is to assume that $\varepsilon_{it} \sim iid(0, \sigma^2)$ for all $i$ and $t$. Estimation of this model is straightforward using OLS. By assuming each observation is iid, however, we have essentially ignored the panel structure of the data.
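As a quick illustration, pooled OLS on the stacked system (1) is a single matrix computation on the $nT \times K$ regressor matrix. A minimal sketch on simulated data (all numbers here are hypothetical, not from the notes):

```python
import numpy as np

# Pooled OLS treats the stacked panel as one big cross-section of nT rows.
rng = np.random.default_rng(0)
n, T, K = 50, 5, 2
X = rng.normal(size=(n * T, K))          # stacked nT x K regressor matrix
b_true = np.array([1.0, -0.5])
y = X @ b_true + rng.normal(size=n * T)  # iid errors, so pooled OLS is appropriate

# b = (X'X)^{-1} X'y, computed via a linear solve rather than an explicit inverse
b_pooled = np.linalg.solve(X.T @ X, X.T @ y)
```

Solving the normal equations directly is fine at this scale; with ill-conditioned regressors a QR-based least-squares routine is the safer choice.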
6 TWO EXTENSIONS

Our starting point is the following model:

$$y_{it} = \underset{1\times K}{x_{it}'}\,\underset{K\times 1}{b} + \varepsilon_{it}.$$

That is, $x_{it}'$ is the $t$th row of the $X_i$ matrix, or the $(i,t)$th row of the stacked $X$ matrix. We go one step further and specify the following error structure for the disturbance term:

$$\varepsilon_{it} = a_i + \eta_{it}, \qquad (2)$$

where we assume that $\eta_{it}$ is uncorrelated with $x_{it}$. The first term of the decomposition, $a_i$, is called the individual effect.
7 ERROR STRUCTURE

In this formulation our ignorance has two parts. The first part varies across individuals (cross-section units) but is constant across time; this part may or may not be correlated with the explanatory variables. The second part varies unsystematically (i.e., independently) across time and individuals.

RANDOM AND FIXED EFFECTS. This formulation is the simplest way of capturing the notion that two observations from the same individual (unit) will be more like each other than observations from two different individuals. A large proportion of empirical applications involve one of the following assumptions about the individual effect:
1. Random effects model: $a_i$ is uncorrelated with $x_{it}$.
2. Fixed effects model: $a_i$ is correlated with $x_{it}$.
8 THE RANDOM EFFECTS MODEL

It is important to stress that the substantive assumption distinguishing this model from the fixed effects model is that the time-invariant person-specific effect $a_i$ is uncorrelated with $x_{it}$. This orthogonality condition, along with our assumption about $\eta_{it}$, is sufficient for OLS to be asymptotically unbiased. However, there are two problems:
1. OLS will produce consistent estimates of $b$, but the standard errors will be understated.
2. OLS is not efficient compared to a feasible GLS procedure.
9 THE ERROR

It will be helpful to be a bit more explicit about the precise nature of the error. Recall that

$$\underset{T\times 1}{\varepsilon_i} = \begin{bmatrix} \varepsilon_{i1} \\ \varepsilon_{i2} \\ \vdots \\ \varepsilon_{iT} \end{bmatrix}, \quad \underset{nT\times 1}{\varepsilon} = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}, \quad \text{and} \quad \varepsilon_{it} = a_i + \eta_{it}.$$
10 Thus, we can write

$$\underset{T\times 1}{\eta_i} = \begin{bmatrix} \eta_{i1} \\ \eta_{i2} \\ \vdots \\ \eta_{iT} \end{bmatrix}, \quad \underset{nT\times 1}{\eta} = \begin{bmatrix} \eta_1 \\ \eta_2 \\ \vdots \\ \eta_n \end{bmatrix}.$$

Moreover, we have

$$E(\eta_i) = 0, \quad E(\eta_i \eta_i') = \sigma^2_\eta I_T, \quad E(\eta) = 0, \quad E(\eta\eta') = \sigma^2_\eta I_{nT}.$$
11 THE $a_i$

Further,

$$\underset{T\times 1}{a_i\, i} = \begin{bmatrix} a_i \\ \vdots \\ a_i \end{bmatrix}, \quad \underset{nT\times 1}{a} = \begin{bmatrix} a_1\, i \\ a_2\, i \\ \vdots \\ a_n\, i \end{bmatrix},$$

where $i$ is a $T \times 1$ vector of ones, and

$$E(a_i) = 0, \quad E(a_i a_j) = 0 \text{ for } i \neq j, \quad E(a_i^2) = \sigma^2_a, \quad E(a_i \eta_{jt}) = 0.$$
12 THE Σ MATRIX

Given these assumptions we can write the covariance matrix of the disturbance term for each individual cross-section unit:

$$\underset{T\times T}{\Sigma} = E(\varepsilon_i \varepsilon_i') = E(a_i^2\, ii') + E(\eta_i \eta_i') \qquad (3)$$

$$= \sigma^2_a\, ii' + \sigma^2_\eta I_T = \begin{bmatrix} \sigma^2_a + \sigma^2_\eta & \sigma^2_a & \cdots & \sigma^2_a \\ \sigma^2_a & \sigma^2_a + \sigma^2_\eta & \cdots & \sigma^2_a \\ \vdots & & \ddots & \vdots \\ \sigma^2_a & \sigma^2_a & \cdots & \sigma^2_a + \sigma^2_\eta \end{bmatrix}.$$
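The equicorrelated structure of $\Sigma$ is easy to verify numerically. A small sketch with made-up variance values:

```python
import numpy as np

# Build Sigma = sigma_a^2 * ii' + sigma_eta^2 * I_T for illustrative values
T, s2_a, s2_eta = 4, 2.0, 1.0
ones = np.ones((T, 1))                       # the T x 1 unit vector i
Sigma = s2_a * (ones @ ones.T) + s2_eta * np.eye(T)

# Diagonal entries are sigma_a^2 + sigma_eta^2; off-diagonals are sigma_a^2
```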
13 THE Ω MATRIX

When the data are organized as in equation (1), the covariance matrix of the error term for all the observations in the stacked model can be written as

$$\underset{nT\times nT}{\Omega} = E(\varepsilon\varepsilon') = I_n \otimes \Sigma = \begin{bmatrix} \Sigma & 0 & \cdots & 0 \\ 0 & \Sigma & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \Sigma \end{bmatrix}.$$
14 The block diagonality of Ω makes finding an inverse simpler, and we can focus on finding the inverse of Σ. It is straightforward but tedious to show that

$$\Sigma^{-1/2} = \frac{1}{\sigma_\eta}\left[\, \underset{T\times T}{I_T} - \frac{(1-\theta)}{T}\, ii' \,\right], \qquad (4)$$

where

$$\theta = \sqrt{\frac{\sigma^2_\eta}{T\sigma^2_a + \sigma^2_\eta}}$$

is the unknown quantity that must be estimated. Feasible GLS estimators require that we get estimates of the variances $\sigma^2_a$ and $\sigma^2_\eta$ in $\theta$.
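The claimed form of $\Sigma^{-1/2}$ can be checked numerically: pre- and post-multiplying $\Sigma$ by it should give the identity. A sketch with arbitrary (hypothetical) variance values:

```python
import numpy as np

T, s2_a, s2_eta = 4, 2.0, 1.0
J = np.ones((T, T))                          # ii'
Sigma = s2_a * J + s2_eta * np.eye(T)

theta = np.sqrt(s2_eta / (T * s2_a + s2_eta))
Sigma_half_inv = (1 / np.sqrt(s2_eta)) * (np.eye(T) - (1 - theta) * J / T)

# Sigma^{-1/2} Sigma Sigma^{-1/2} should equal the identity matrix
check = Sigma_half_inv @ Sigma @ Sigma_half_inv
```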
15 RANDOM EFFECTS AS A COMBINATION OF WITHIN AND BETWEEN ESTIMATORS

We consider two estimators that are consistent but not efficient relative to GLS. The first one is quite intuitive: convert all the data into individual-specific averages and perform OLS on this collapsed data set. In particular,

$$\underset{T\times 1}{y_i} = \begin{bmatrix} y_{i1} \\ y_{i2} \\ \vdots \\ y_{iT} \end{bmatrix} \;\Rightarrow\; \underset{1\times 1}{\bar{y}_i} = \frac{1}{T}\sum_{t=1}^{T} y_{it},$$

so that we have one observation for each section $i$. Notice that

$$\bar{y}_i\, i = \begin{bmatrix} \bar{y}_i \\ \vdots \\ \bar{y}_i \end{bmatrix}_{T\times 1}.$$
16 Moreover,

$$\underset{T\times K}{X_i} = \begin{bmatrix} X^1_{i1} & X^2_{i1} & \cdots & X^K_{i1} \\ X^1_{i2} & X^2_{i2} & \cdots & X^K_{i2} \\ \vdots & & & \vdots \\ X^1_{iT} & X^2_{iT} & \cdots & X^K_{iT} \end{bmatrix} \;\Rightarrow\; \underset{1\times K}{\bar{x}_i'} = [\, \bar{X}^1_i \;\; \bar{X}^2_i \;\; \cdots \;\; \bar{X}^K_i \,],$$

where $\bar{X}^j_i = \sum_{t=1}^{T} X^j_{it}/T$. Recall that we also denote the $t$th row of the $X_i$ matrix by $x_{it}' = [\, X^1_{it} \;\; X^2_{it} \;\; \cdots \;\; X^K_{it} \,]$. Specifically, we perform OLS on the following equation:

$$\underset{1\times 1}{\bar{y}_i} = \underset{1\times K}{\bar{x}_i'}\,\underset{K\times 1}{b} + \text{error}, \quad i = 1, \ldots, n. \qquad (5)$$
17 Thus, the between-group estimator can be expressed as

$$\beta_B = \left[ \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})' \right]^{-1} \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{y}_i - \bar{y}),$$

where $\bar{x} = \frac{1}{nT}\sum_{i=1}^{n}\sum_{t=1}^{T} x_{it}$ and $\bar{y} = \frac{1}{nT}\sum_{i=1}^{n}\sum_{t=1}^{T} y_{it}$. The estimator $\beta_B$ is called the between-group estimator because it ignores variation within the group.
18 DUMMY VARIABLES

To put this expression in matrix terms, stack the data as before and define a new $nT \times n$ matrix $D$, which is merely the matrix of $n$ dummy variables corresponding to each cross-section unit. That is,

$$\underset{nT\times n}{D} = [\, \underset{nT\times 1}{D_1} \;\; \underset{nT\times 1}{D_2} \;\; \cdots \;\; \underset{nT\times 1}{D_n} \,],$$

where $D_i = 1$ for observations $(i-1)T + 1, \ldots, iT$, and zero otherwise. Next define the $nT \times nT$ matrix

$$P_D = D(D'D)^{-1}D',$$

a symmetric and idempotent matrix, that is, $P_D' = P_D$ and $P_D^2 = P_D$. Premultiplying by this matrix transforms the data into the means described in equation (5).
19 PREDICTED VALUES (MEANS)

This is because the predicted value of $y$ from a regression on nothing but the individual dummies is merely

$$P_D\, \underset{nT\times 1}{y} = \begin{bmatrix} \bar{y}_1\, i \\ \vdots \\ \bar{y}_n\, i \end{bmatrix} = \hat{y},$$

where $i$ is the $T \times 1$ unit vector (see Matrices Appendix C). In other words, we run the following OLS regression:

$$P_D\, y = P_D X b + \text{error}.$$

The $\beta$ estimated this way is called the between estimator and is given by

$$\beta_B = (X'P_D'P_D X)^{-1} X'P_D'P_D y = (X'P_D^2 X)^{-1} X'P_D^2 y = (X'P_D X)^{-1} X'P_D y,$$

since $P_D' = P_D$ and $P_D^2 = P_D$.
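Numerically, $(X'P_DX)^{-1}X'P_Dy$ coincides with OLS run on one group-mean observation per unit, as equation (5) describes. A sketch on simulated (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, K = 20, 5, 2
X = rng.normal(size=(n * T, K))
y = rng.normal(size=n * T)

# D: nT x n dummy matrix; P_D projects any vector onto unit-specific means
D = np.kron(np.eye(n), np.ones((T, 1)))
P_D = D @ np.linalg.solve(D.T @ D, D.T)
beta_B = np.linalg.solve(X.T @ P_D @ X, X.T @ P_D @ y)

# Equivalent: OLS on the n collapsed group-mean observations
X_bar = X.reshape(n, T, K).mean(axis=1)
y_bar = y.reshape(n, T).mean(axis=1)
beta_B_means = np.linalg.solve(X_bar.T @ X_bar, X_bar.T @ y_bar)
```

The two computations agree because $X'P_DX = T\,\bar{X}'\bar{X}$ and $X'P_Dy = T\,\bar{X}'\bar{y}$, so the factor $T$ cancels.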
20 2SLS ESTIMATION

The between estimator is consistent (though not efficient) when OLS on the pooled sample is consistent. This estimator corresponds to 2SLS, using the person dummies as instruments. That is, regress $X$ on $D$ and get the predicted values (see Matrices Appendix C):

$$\underset{nT\times K}{\hat{X}} = \underset{nT\times n}{D}\,\hat{\gamma} = D(D'D)^{-1}D'X = P_D X.$$

Then regress $y$ on $\hat{X}$:

$$\beta_B = (\hat{X}'\hat{X})^{-1}\hat{X}'y = [(P_D X)'P_D X]^{-1}(P_D X)'y = (X'P_D'P_D X)^{-1}X'P_D'y = (X'P_D X)^{-1}X'P_D y.$$
21 WITHIN ESTIMATOR

We can also use the information thrown away by the between estimator. Define

$$M_D = I_{nT} - \underbrace{D(D'D)^{-1}D'}_{P_D},$$

which is also a symmetric idempotent matrix. If we premultiply the data by $M_D$ and compute OLS on the transformed data, we can derive the following within estimator:

$$\beta_W = [(M_D X)'M_D X]^{-1}(M_D X)'M_D y = (X'M_D'M_D X)^{-1}X'M_D'M_D y = (X'M_D X)^{-1}X'M_D y. \qquad (6)$$

A. The matrix $M_D$ can be interpreted as a residual-maker matrix. Premultiplying by this matrix transforms the data into residuals from auxiliary regressions of all the variables on a complete set of individual-specific constants (see also Matrices Appendix B):
22 RESIDUALS (DEVIATIONS FROM PERSON-SPECIFIC MEANS)

i) We regress $y$ and $X$ on $D$ and save the residuals $\tilde{y} = M_D y$ and $\tilde{X} = M_D X$:

$$\tilde{y} = y - \hat{y} = y - D\hat{\gamma} = y - D(D'D)^{-1}D'y = [\,I_{nT} - \underbrace{D(D'D)^{-1}D'}_{P_D}\,]\,y = M_D y.$$

ii) We regress $\tilde{y} = M_D y$ on $\tilde{X} = M_D X$. Since the predicted value from regressing $y$ on $D$, that is $\hat{y} = D\hat{\gamma} = D(D'D)^{-1}D'y$, is merely the individual-specific mean (see the analysis above), the residuals $\tilde{y} = y - \hat{y}$ are merely deviations from person-specific means.
23 Similarly, the residuals

$$X - \hat{X} = X - P_D X = [\,I_{nT} - \underbrace{D(D'D)^{-1}D'}_{P_D}\,]\,X = \tilde{X} = M_D X$$

are merely deviations from person-specific means.

B. Specifically, equation (6) is equivalent to performing OLS on the following equation:

$$y_{it} - \bar{y}_i = \underset{1\times K}{(x_{it} - \bar{x}_i)'}\,\underset{K\times 1}{b} + \text{error}, \quad i = 1, \ldots, n, \quad t = 1, \ldots, T.$$

If the assumptions underlying the random effects model are correct, the within estimator is also a consistent estimator, but it is not efficient. This defect is clear since we have, in effect, included $n$ unnecessary extra variables.
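The equivalence between the $M_D$ formula in (6) and OLS on deviations from unit-specific means can be verified directly; a sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, K = 20, 5, 2
X = rng.normal(size=(n * T, K))
y = rng.normal(size=n * T)

# Within transformation: deviations from unit-specific means
Xg = X.reshape(n, T, K)
yg = y.reshape(n, T)
X_dev = (Xg - Xg.mean(axis=1, keepdims=True)).reshape(n * T, K)
y_dev = (yg - yg.mean(axis=1, keepdims=True)).ravel()
beta_W = np.linalg.solve(X_dev.T @ X_dev, X_dev.T @ y_dev)

# Same estimator via the residual-maker M_D = I - D(D'D)^{-1}D'
D = np.kron(np.eye(n), np.ones((T, 1)))
M_D = np.eye(n * T) - D @ np.linalg.solve(D.T @ D, D.T)
beta_W_MD = np.linalg.solve(X.T @ M_D @ X, X.T @ M_D @ y)
```

In practice one always demeans directly; materializing the $nT \times nT$ matrix $M_D$ is only useful for checking the algebra.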
24 Since $\bar{y}_i = \bar{x}_i'b + a_i$, we can estimate $a_i$ as

$$\hat{a}_i = \bar{y}_i - \bar{x}_i'\beta_W.$$
25 C. The within estimator can also be expressed as

$$\beta_W = \left[ \sum_{i=1}^{n}\sum_{t=1}^{T} (x_{it} - \bar{x}_i)(x_{it} - \bar{x}_i)' \right]^{-1} \sum_{i=1}^{n}\sum_{t=1}^{T} (x_{it} - \bar{x}_i)(y_{it} - \bar{y}_i).$$

Its variance-covariance matrix is given by

$$V(\beta_W) = \sigma^2_\eta \left[ \sum_{i=1}^{n}\sum_{t=1}^{T} (x_{it} - \bar{x}_i)(x_{it} - \bar{x}_i)' \right]^{-1}.$$
26 D. The within estimator is merely the estimator that would result from running OLS on the data including a set of dummy variables, that is, on

$$y = Xb + Da + \eta, \quad \text{where } a' = [\,a_1\;\; a_2\;\; \cdots\;\; a_n\,].$$

Then

$$\beta_W = (X'M_D X)^{-1}X'M_D y$$

(see Matrices Appendix B). It is called the within estimator because it uses only the variation within each cross-section unit. Its variance-covariance matrix is given by

$$V(\beta_W) = \sigma^2_\eta (X'M_D X)^{-1}.$$
27 E. Alternatively, we can write

$$\underset{T\times 1}{y_i} = \underset{T\times K}{X_i}\,b + a_i\,\underset{T\times 1}{i} + \eta_i, \quad i = 1, \ldots, n.$$

We premultiply the above $i$th equation by the $T \times T$ idempotent transformation matrix

$$Q = I_T - \frac{1}{T}\,ii'$$

to sweep out the individual effect $a_i$, so that individual observations are measured as deviations from individual means (over time):

$$Qy_i = QX_i b + Qi\,a_i + Q\eta_i = QX_i b + Q\eta_i.$$
28 Applying the OLS procedure to the above equation, we have

$$\beta_W = \left[ \sum_{i=1}^{n} X_i'QX_i \right]^{-1} \sum_{i=1}^{n} X_i'Qy_i.$$

Its variance-covariance matrix is given by

$$V(\beta_W) = \sigma^2_\eta \left[ \sum_{i=1}^{n} X_i'QX_i \right]^{-1}.$$
29 Notice that the pooled OLS estimator, $\beta = (X'X)^{-1}X'y$, is just a weighted sum of the between and within estimators. Since

$$M_D = I_{nT} - D(D'D)^{-1}D' = I_{nT} - P_D \;\Rightarrow\; I_{nT} = M_D + P_D,$$

we have

$$\beta = (X'X)^{-1}X'\underbrace{(M_D + P_D)}_{I_{nT}}\,y = (X'X)^{-1}\left( X'M_D y + X'P_D y \right).$$
30 Next, premultiplying $X'M_D y$ by $I_K = (X'M_D X)(X'M_D X)^{-1}$ and $X'P_D y$ by $I_K = (X'P_D X)(X'P_D X)^{-1}$ gives

$$\beta = (X'X)^{-1}\big\{ (X'M_D X)\underbrace{(X'M_D X)^{-1}X'M_D y}_{\beta_W} + (X'P_D X)\underbrace{(X'P_D X)^{-1}X'P_D y}_{\beta_B} \big\}$$

$$= (X'X)^{-1}\big\{ (X'M_D X)\beta_W + (X'P_D X)\beta_B \big\}.$$

Notice that the sum of the two weights is equal to $I_K$:

$$(X'X)^{-1}\big\{ (X'M_D X) + (X'P_D X) \big\} = (X'X)^{-1}\big\{ X'(M_D + P_D)X \big\} = (X'X)^{-1}X'X = I_K.$$
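This decomposition holds exactly in finite samples and is easy to confirm; a sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, K = 20, 5, 2
X = rng.normal(size=(n * T, K))
y = rng.normal(size=n * T)

D = np.kron(np.eye(n), np.ones((T, 1)))
P_D = D @ np.linalg.solve(D.T @ D, D.T)
M_D = np.eye(n * T) - P_D

beta_W = np.linalg.solve(X.T @ M_D @ X, X.T @ M_D @ y)
beta_B = np.linalg.solve(X.T @ P_D @ X, X.T @ P_D @ y)

# Pooled OLS as the matrix-weighted sum of the within and between estimators
weight_W = X.T @ M_D @ X
weight_B = X.T @ P_D @ X
beta_pooled = np.linalg.solve(X.T @ X, weight_W @ beta_W + weight_B @ beta_B)
```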
31 In the random effects case, OLS on the pooled data fails to use the information about the correlation structure of the errors that results from using repeated observations of the same cross-section units. The problem with the pooled estimator is that it weights all observations equally. This treatment is not generally optimal, because an additional observation on a person already in the data set is unlikely to add as much information as an additional observation from a new (independent) individual.
32 Next we have to compute the necessary quantities for a feasible GLS. Standard ANOVA suggests the following estimators:

$$\hat{\sigma}^2_\eta = \frac{\hat{\varepsilon}_W'\hat{\varepsilon}_W}{nT - n - K}, \quad \hat{\sigma}^2_B = \frac{\hat{\varepsilon}_B'\hat{\varepsilon}_B}{n - K}, \quad \hat{\sigma}^2_a = \hat{\sigma}^2_B - \frac{\hat{\sigma}^2_\eta}{T},$$

where $\hat{\varepsilon}_W$ are the residuals from the within regression and $\hat{\varepsilon}_B$ are the residuals from the between regression. These can then be used to construct $\hat{\theta}$. These estimators are asymptotically unbiased estimates of the relevant variances.
33 THE RANDOM EFFECTS ESTIMATOR

A simple procedure to compute the random effects estimator is as follows:
1. Compute the between and within estimators.
2. Use the residuals to calculate the appropriate variance terms.
3. Calculate $\hat{\theta}$.
4. Run OLS on the transformed variables $y^*$ and $x^*$:

$$y^*_{it} = y_{it} - (1 - \hat{\theta})\bar{y}_i, \qquad (7)$$
$$x^*_{it} = x_{it} - (1 - \hat{\theta})\bar{x}_i. \qquad (8)$$

The transformation is intuitively appealing. When there is no person-specific component of variance ($\sigma^2_a = 0$), $\theta = 1$ and the random effects estimator reduces to the pooled OLS estimator. When $\hat{\theta} = 0$, the random effects estimator reduces to the within estimator.
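The four steps above can be sketched end to end on simulated data (all parameter values are hypothetical; the variance estimators are the ANOVA ones from the previous slide, with $\hat{\sigma}^2_a$ floored at zero as is common in practice):

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, K = 100, 5, 2
b_true = np.array([1.0, -0.5])
a = 1.5 * rng.normal(size=(n, 1))              # individual effects, uncorrelated with X
X = rng.normal(size=(n, T, K))
y = X @ b_true + a + rng.normal(size=(n, T))

X_bar = X.mean(axis=1, keepdims=True)          # unit means, shape (n, 1, K)
y_bar = y.mean(axis=1, keepdims=True)          # unit means, shape (n, 1)

# Step 1a: within estimator and its residual variance
X_dev = (X - X_bar).reshape(n * T, K)
y_dev = (y - y_bar).ravel()
b_W = np.linalg.solve(X_dev.T @ X_dev, X_dev.T @ y_dev)
e_W = y_dev - X_dev @ b_W
s2_eta = e_W @ e_W / (n * T - n - K)

# Step 1b/2: between estimator and the implied variance of the individual effect
Xb, yb = X_bar[:, 0, :], y_bar[:, 0]
b_B = np.linalg.solve(Xb.T @ Xb, Xb.T @ yb)
e_B = yb - Xb @ b_B
s2_B = e_B @ e_B / (n - K)
s2_a = max(s2_B - s2_eta / T, 0.0)             # floor at zero

# Steps 3-4: theta, quasi-demeaning as in (7)-(8), and OLS on the transformed data
theta = np.sqrt(s2_eta / (T * s2_a + s2_eta))
X_star = (X - (1 - theta) * X_bar).reshape(n * T, K)
y_star = (y - (1 - theta) * y_bar).ravel()
b_RE = np.linalg.solve(X_star.T @ X_star, X_star.T @ y_star)
```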
34 To get efficient estimates of the parameters we have to use the GLS method:

$$\beta_{GLS} = \left[ \sum_{i=1}^{n} X_i'\Sigma^{-1}X_i \right]^{-1} \sum_{i=1}^{n} X_i'\Sigma^{-1}y_i,$$

where $\Sigma$ is given by (3).
35 We can also obtain the following:

$$\beta_{GLS} = \left[ \frac{1}{T}\sum_{i=1}^{n} X_i'QX_i + \theta^2 \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})' \right]^{-1} \left[ \frac{1}{T}\sum_{i=1}^{n} X_i'Qy_i + \theta^2 \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{y}_i - \bar{y}) \right]$$

$$= \Delta\,\beta_B + (I_K - \Delta)\,\beta_W, \qquad (9)$$

where $\Delta$ is the weight matrix defined on the next slide.
36 Here $\theta^2$ has been defined in (4), and the weight matrix is

$$\Delta = \theta^2 T \left[ \sum_{i=1}^{n} X_i'QX_i + \theta^2 T \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})' \right]^{-1} \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})'.$$

The GLS estimator (9) is a weighted average of the between-group and the within-group estimators. If $\theta \to 0$, then $\Delta \to 0$ and GLS becomes the within estimator. If $\theta \to 1$, then GLS becomes the pooled OLS estimator.
37 Moreover, the variance-covariance matrix of $\beta_{GLS}$ is

$$V(\beta_{GLS}) = \sigma^2_\eta \left[ \sum_{i=1}^{n} X_i'QX_i + T\theta^2 \sum_{i=1}^{n} (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})' \right]^{-1}.$$
38 THE FIXED EFFECTS MODEL

The fixed effects model starts with the presumption that $Cov(x_{it}, a_i) \neq 0$. Thus we must estimate the model conditionally on the presence of the fixed effects. That is, we must rewrite the model as

$$y_{it} = x_{it}'b + a_i + \eta_{it}, \quad i = 1, \ldots, n, \qquad (10)$$

where the $a_i$ are treated as unknown parameters to be estimated. In the typical case, $T$ is small and $n$ is large.
39 Although we cannot estimate the $a_i$ consistently, we can estimate the remaining parameters consistently. To do so we need only run the regression

$$y = Xb + Da + \eta, \qquad (11)$$

where $a' = [\,a_1\;\; a_2\;\; \cdots\;\; a_n\,]$ and $D = I_n \otimes i_T$, as before, is a set of $n$ dummy variables.
40 The above equation is just the same as running a regression of each of our variables $y$ and $X$ on this set of dummies and then running the regression of the $y$ residuals on the $X$ residuals. The matrix that produces such residuals is the familiar $M_D = I_{nT} - D(D'D)^{-1}D' = I_{nT} - P_D$. We can run OLS on the transformed variables $\tilde{y} = M_D y$ and $\tilde{X} = M_D X$ to get

$$\beta_W = (X'M_D X)^{-1}X'M_D y.$$

This is merely the within estimator we derived before. The within estimator is only one possible fixed effects estimator: any transformation that rids us of the fixed effect will produce a fixed effects estimator.
41 Taking deviations from means purges the data of the fixed effects by removing the means of these variables across individual cross-section units. That is, the group mean of $y$ for group $i$ is

$$\bar{y}_i = \bar{x}_i'b + a_i + \bar{\eta}_i. \qquad (12)$$

We can difference Eqs. (10) and (12) to yield

$$y_{it} - \bar{y}_i = (x_{it} - \bar{x}_i)'b + (\eta_{it} - \bar{\eta}_i).$$

In many applications, the easiest way to implement a fixed effects estimator with conventional software is to include a different dummy variable for each individual unit. This method is called the least-squares dummy variable (LSDV) method, as in Eq. (11).
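The equivalence between LSDV and the demeaning approach is a direct consequence of the partitioned-regression results in Matrices Appendix A; a sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, T, K = 10, 4, 2
X = rng.normal(size=(n * T, K))
y = rng.normal(size=n * T)
D = np.kron(np.eye(n), np.ones((T, 1)))

# LSDV: regress y on [X D] jointly and keep the slope block
Z = np.hstack([X, D])
coef = np.linalg.lstsq(Z, y, rcond=None)[0]
b_lsdv = coef[:K]

# Within: demean by unit and run OLS on the deviations
Xd = (X.reshape(n, T, K) - X.reshape(n, T, K).mean(1, keepdims=True)).reshape(n * T, K)
yd = (y.reshape(n, T) - y.reshape(n, T).mean(1, keepdims=True)).ravel()
b_within = np.linalg.solve(Xd.T @ Xd, Xd.T @ yd)
```

The slope coefficients agree to machine precision; only the demeaning route scales when $n$ is very large.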
42 If $n$ is very large, however, it may be computationally prohibitive to compute coefficients for each cross-section unit. In that case another way to implement a fixed effects estimator is as follows:
1. Transform all the variables by subtracting person-specific means.
2. Run OLS on the transformed variables.

Moreover, the standard errors need to be corrected. The correct standard errors are

$$V(\beta_W) = \sigma^2_\eta (X'M_D X)^{-1}.$$

This result is almost exactly the same output one would get from the two-step procedure defined earlier.
43 The correct way to estimate $\sigma^2_\eta$ is

$$\hat{\sigma}^2_\eta = \frac{\hat{\varepsilon}_W'\hat{\varepsilon}_W}{nT - n - K}.$$

The fact that the fixed effects estimator can be interpreted as a simple OLS regression on variables in deviations from means explains why this estimator is often called a within-group estimator: it uses only the variation within an individual's set of observations.
44 A WU-HAUSMAN TEST

The random and fixed effects estimators have different properties depending on the correlation between $a_i$ and the regressors. Specifically:
1. If the effects are uncorrelated with the explanatory variables, the random effects estimator is consistent and efficient; the fixed effects estimator is consistent but not efficient.
2. If the effects are correlated with the explanatory variables, the fixed effects estimator is consistent and efficient, but the random effects estimator is now inconsistent.

One can perform a Hausman test, defined as

$$H = (\beta_{RE} - \beta_{FE})'[\,V(\beta_{FE}) - V(\beta_{RE})\,]^{-1}(\beta_{RE} - \beta_{FE}),$$

where the variance difference is taken in this order so that the matrix is positive semi-definite. The Hausman test statistic is distributed asymptotically as $\chi^2$ with $K$ degrees of freedom under the null hypothesis that the random effects estimator is correct.
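Computing the statistic is a one-line quadratic form once the two estimates and their covariance matrices are in hand. A sketch with entirely made-up numbers for $K = 2$ regressors:

```python
import numpy as np

# Hypothetical estimates and covariance matrices, purely to illustrate the formula
b_RE = np.array([1.02, -0.48])
b_FE = np.array([1.10, -0.55])
V_RE = np.array([[0.010, 0.001], [0.001, 0.012]])
V_FE = np.array([[0.015, 0.002], [0.002, 0.018]])

d = b_RE - b_FE
H = d @ np.linalg.solve(V_FE - V_RE, d)   # compare with the chi-squared(2) critical value
```

With these numbers $H$ is well below the 5% critical value of $\chi^2(2)$ (about 5.99), so the random effects specification would not be rejected.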
45 An alternative method is to perform a simple auxiliary regression. Let $y^*_{it}$ and $x^*_{it}$ be the data transformed for the random effects model as in Eqs. (7) and (8), and recall that the $\tilde{x}$ variables are the data transformed for the fixed effects (within) regression. The Hausman test can be computed by means of a simple F test on $d$ in the following auxiliary regression:

$$y^* = X^*b + \tilde{X}d + \text{error}.$$

The hypothesis being tested is whether the omission of fixed effects in the random effects model has any effect on the consistency of the random effects estimates.
46 MATRICES APPENDIX A

A matrix may be partitioned into a set of submatrices by indicating subgroups of rows and/or columns. For example,

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.$$

The inverse of a partitioned matrix may also be expressed in partitioned form:

$$A^{-1} = \begin{bmatrix} B_{11} & -B_{11}A_{12}A_{22}^{-1} \\ -A_{22}^{-1}A_{21}B_{11} & A_{22}^{-1} + A_{22}^{-1}A_{21}B_{11}A_{12}A_{22}^{-1} \end{bmatrix}, \qquad (13)$$

where $B_{11} = (A_{11} - A_{12}A_{22}^{-1}A_{21})^{-1}$, or alternatively

$$A^{-1} = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1}A_{12}B_{22}A_{21}A_{11}^{-1} & -A_{11}^{-1}A_{12}B_{22} \\ -B_{22}A_{21}A_{11}^{-1} & B_{22} \end{bmatrix},$$

where $B_{22} = (A_{22} - A_{21}A_{11}^{-1}A_{12})^{-1}$.
47 Consider partitioning the matrix $X$ as $X = [\,X_1\;\; X_2\,]$. Then

$$X'X = \begin{bmatrix} X_1' \\ X_2' \end{bmatrix}[\,X_1\;\; X_2\,] = \begin{bmatrix} \underbrace{X_1'X_1}_{A_{11}} & \underbrace{X_1'X_2}_{A_{12}} \\ \underbrace{X_2'X_1}_{A_{21}} & \underbrace{X_2'X_2}_{A_{22}} \end{bmatrix}.$$

The inverse of the above matrix is given by equation (13), where

$$B_{11} = (X_1'X_1 - X_1'X_2(X_2'X_2)^{-1}X_2'X_1)^{-1} = (X_1'[\,I - X_2(X_2'X_2)^{-1}X_2'\,]X_1)^{-1} = (X_1'M_2X_1)^{-1},$$

with $M_2 = I - X_2(X_2'X_2)^{-1}X_2'$ a symmetric idempotent matrix. Similarly, $B_{22} = (X_2'M_1X_2)^{-1}$, where $M_1 = I - X_1(X_1'X_1)^{-1}X_1'$.
48 Moreover, $B_{11}A_{12}A_{22}^{-1}$ is given by $B_{11}X_1'X_2(X_2'X_2)^{-1}$. In addition, $X'y$ in partitioned form is given by

$$X'y = \begin{bmatrix} X_1' \\ X_2' \end{bmatrix}y = \begin{bmatrix} X_1'y \\ X_2'y \end{bmatrix}.$$
49 Thus the OLS estimates from the regression of $y$ on $X$ in partitioned form are given by

$$\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix} = \begin{bmatrix} \underbrace{X_1'X_1}_{A_{11}} & \underbrace{X_1'X_2}_{A_{12}} \\ X_2'X_1 & \underbrace{X_2'X_2}_{A_{22}} \end{bmatrix}^{-1}\begin{bmatrix} X_1'y \\ X_2'y \end{bmatrix}.$$

Taking the first row we have

$$\beta_1 = B_{11}X_1'y - \underbrace{B_{11}A_{12}A_{22}^{-1}}_{B_{11}X_1'X_2(X_2'X_2)^{-1}}X_2'y = \underbrace{B_{11}}_{(X_1'M_2X_1)^{-1}}X_1'\underbrace{(I - X_2(X_2'X_2)^{-1}X_2')}_{M_2}\,y = (X_1'M_2X_1)^{-1}X_1'M_2y.$$

Similarly, we can show that $\beta_2 = (X_2'M_1X_2)^{-1}X_2'M_1y$.
50 Rewrite $\beta_1 = (X_1'M_2X_1)^{-1}X_1'M_2y$. Regressing $y$ and $X_1$ on $X_2$ yields a vector of residuals, $M_2y$, and a matrix of residuals, $M_2X_1$. Regressing the former on the latter gives the $\beta_1$ coefficient vector.
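This partial-regression result (the Frisch-Waugh-Lovell theorem) holds exactly and can be checked numerically; a sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)
nobs = 200
X1 = rng.normal(size=(nobs, 2))
X2 = rng.normal(size=(nobs, 2))
X = np.hstack([X1, X2])
y = rng.normal(size=nobs)

# Full OLS on [X1 X2]
b_full = np.linalg.solve(X.T @ X, X.T @ y)

# Partial out X2: M2 = I - X2 (X2'X2)^{-1} X2', then regress residuals on residuals
M2 = np.eye(nobs) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
X1r, yr = M2 @ X1, M2 @ y
b1 = np.linalg.solve(X1r.T @ X1r, X1r.T @ yr)
```

The partialled-out coefficients `b1` match the first block of `b_full` exactly, which is precisely what the within estimator exploits with $X_2 = D$.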
51 MATRICES APPENDIX B

Let $M_i = I - X_i(X_i'X_i)^{-1}X_i'$, $i = 1, 2$. Premultiplication of any vector by $M_i$ gives the residuals from the regression of that vector on $X_i$. Consider the following regression:

$$y = X_i b + \varepsilon.$$

The estimator of $b$ is given by $\beta = (X_i'X_i)^{-1}X_i'y$. The predicted value of $y$ is given by

$$\hat{y} = X_i\beta = X_i(X_i'X_i)^{-1}X_i'y = P_iy.$$

The vector of residuals is given by

$$\hat{\varepsilon} = y - \hat{y} = y - X_i(X_i'X_i)^{-1}X_i'y = [\,I - X_i(X_i'X_i)^{-1}X_i'\,]y = [\,I - P_i\,]y = M_iy.$$
52 MATRICES APPENDIX C

Assume for simplicity, and without loss of generality, that we have only two sections; that is, $n = 2$. Define the $2T \times 2$ matrix $D$, which is merely the matrix of 2 dummy variables corresponding to each cross-section unit. In partitioned form we have

$$\underset{2T\times 2}{D} = \begin{bmatrix} \underset{T\times 1}{i} & \underset{T\times 1}{0} \\ \underset{T\times 1}{0} & \underset{T\times 1}{i} \end{bmatrix}, \quad \underset{2\times 2T}{D'} = \begin{bmatrix} \underset{1\times T}{i'} & \underset{1\times T}{0'} \\ \underset{1\times T}{0'} & \underset{1\times T}{i'} \end{bmatrix},$$

where $i$ is a $T \times 1$ unit vector.
53 Then

$$\underset{2\times 2}{D'D} = \begin{bmatrix} i'i & 0 \\ 0 & i'i \end{bmatrix} = \begin{bmatrix} T & 0 \\ 0 & T \end{bmatrix} = TI \;\Rightarrow\; (D'D)^{-1} = (1/T)I.$$

Moreover,

$$\underset{2T\times 2T}{DD'} = \begin{bmatrix} \underset{T\times T}{ii'} & \underset{T\times T}{0} \\ 0 & ii' \end{bmatrix},$$

where $ii'$ is a $T \times T$ matrix with unit elements.
54 Thus

$$D(D'D)^{-1}D'y = \frac{1}{T}DD'y = \frac{1}{T}\begin{bmatrix} ii' & 0 \\ 0 & ii' \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \frac{1}{T}\begin{bmatrix} \sum_{t=1}^{T} y_{1t}\, i \\ \sum_{t=1}^{T} y_{2t}\, i \end{bmatrix} = \begin{bmatrix} \bar{y}_1\, i \\ \bar{y}_2\, i \end{bmatrix}.$$
55 In general, if we have $n$ cross-sections, then $D$ is an $nT \times n$ matrix:

$$\underset{nT\times n}{D} = \begin{bmatrix} \underset{T\times 1}{i} & 0 & \cdots & 0 \\ 0 & i & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \underset{T\times 1}{i} \end{bmatrix}.$$

It can be shown that

$$\underset{nT\times n}{D}(D'D)^{-1}D'\underset{nT\times 1}{y} = \begin{bmatrix} \bar{y}_1\, i \\ \bar{y}_2\, i \\ \vdots \\ \bar{y}_n\, i \end{bmatrix} = \hat{y}.$$
56 Similarly,

$$\underset{nT\times n}{D}(D'D)^{-1}D'\underset{nT\times K}{X} = \begin{bmatrix} \hat{X}_1 \\ \hat{X}_2 \\ \vdots \\ \hat{X}_n \end{bmatrix} = \hat{X},$$

where $\hat{X}_i = [\, \bar{X}^1_i\, i \;\; \bar{X}^2_i\, i \;\; \cdots \;\; \bar{X}^K_i\, i \,]$, with $\bar{X}^j_i = \sum_{t=1}^{T} X^j_{it}/T$.
More informationPeter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8
Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall
More informationAdvanced Econometrics
Based on the textbook by Verbeek: A Guide to Modern Econometrics Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna May 16, 2013 Outline Univariate
More informationReference: Davidson and MacKinnon Ch 2. In particular page
RNy, econ460 autumn 03 Lecture note Reference: Davidson and MacKinnon Ch. In particular page 57-8. Projection matrices The matrix M I X(X X) X () is often called the residual maker. That nickname is easy
More informationECONOMETRICS FIELD EXAM Michigan State University May 9, 2008
ECONOMETRICS FIELD EXAM Michigan State University May 9, 2008 Instructions: Answer all four (4) questions. Point totals for each question are given in parenthesis; there are 00 points possible. Within
More informationNon-Spherical Errors
Non-Spherical Errors Krishna Pendakur February 15, 2016 1 Efficient OLS 1. Consider the model Y = Xβ + ε E [X ε = 0 K E [εε = Ω = σ 2 I N. 2. Consider the estimated OLS parameter vector ˆβ OLS = (X X)
More information18.S096 Problem Set 3 Fall 2013 Regression Analysis Due Date: 10/8/2013
18.S096 Problem Set 3 Fall 013 Regression Analysis Due Date: 10/8/013 he Projection( Hat ) Matrix and Case Influence/Leverage Recall the setup for a linear regression model y = Xβ + ɛ where y and ɛ are
More informationRegression and Statistical Inference
Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF
More informationInference about Clustering and Parametric. Assumptions in Covariance Matrix Estimation
Inference about Clustering and Parametric Assumptions in Covariance Matrix Estimation Mikko Packalen y Tony Wirjanto z 26 November 2010 Abstract Selecting an estimator for the variance covariance matrix
More informationTest of hypotheses with panel data
Stochastic modeling in economics and finance November 4, 2015 Contents 1 Test for poolability of the data 2 Test for individual and time effects 3 Hausman s specification test 4 Case study Contents Test
More informationGeneralized Method of Moments: I. Chapter 9, R. Davidson and J.G. MacKinnon, Econometric Theory and Methods, 2004, Oxford.
Generalized Method of Moments: I References Chapter 9, R. Davidson and J.G. MacKinnon, Econometric heory and Methods, 2004, Oxford. Chapter 5, B. E. Hansen, Econometrics, 2006. http://www.ssc.wisc.edu/~bhansen/notes/notes.htm
More information10 Panel Data. Andrius Buteikis,
10 Panel Data Andrius Buteikis, andrius.buteikis@mif.vu.lt http://web.vu.lt/mif/a.buteikis/ Introduction Panel data combines cross-sectional and time series data: the same individuals (persons, firms,
More informationLinear Model Under General Variance
Linear Model Under General Variance We have a sample of T random variables y 1, y 2,, y T, satisfying the linear model Y = X β + e, where Y = (y 1,, y T )' is a (T 1) vector of random variables, X = (T
More informationCLUSTER EFFECTS AND SIMULTANEITY IN MULTILEVEL MODELS
HEALTH ECONOMICS, VOL. 6: 439 443 (1997) HEALTH ECONOMICS LETTERS CLUSTER EFFECTS AND SIMULTANEITY IN MULTILEVEL MODELS RICHARD BLUNDELL 1 AND FRANK WINDMEIJER 2 * 1 Department of Economics, University
More informationREVIEW (MULTIVARIATE LINEAR REGRESSION) Explain/Obtain the LS estimator () of the vector of coe cients (b)
REVIEW (MULTIVARIATE LINEAR REGRESSION) Explain/Obtain the LS estimator () of the vector of coe cients (b) Explain/Obtain the variance-covariance matrix of Both in the bivariate case (two regressors) and
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationThe Standard Linear Model: Hypothesis Testing
Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 25: The Standard Linear Model: Hypothesis Testing Relevant textbook passages: Larsen Marx [4]:
More informationIntroduction to Estimation Methods for Time Series models. Lecture 1
Introduction to Estimation Methods for Time Series models Lecture 1 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 1 SNS Pisa 1 / 19 Estimation
More informationHOW IS GENERALIZED LEAST SQUARES RELATED TO WITHIN AND BETWEEN ESTIMATORS IN UNBALANCED PANEL DATA?
HOW IS GENERALIZED LEAST SQUARES RELATED TO WITHIN AND BETWEEN ESTIMATORS IN UNBALANCED PANEL DATA? ERIK BIØRN Department of Economics University of Oslo P.O. Box 1095 Blindern 0317 Oslo Norway E-mail:
More informationQuantitative Techniques - Lecture 8: Estimation
Quantitative Techniques - Lecture 8: Estimation Key words: Estimation, hypothesis testing, bias, e ciency, least squares Hypothesis testing when the population variance is not known roperties of estimates
More informationMULTIVARIATE POPULATIONS
CHAPTER 5 MULTIVARIATE POPULATIONS 5. INTRODUCTION In the following chapters we will be dealing with a variety of problems concerning multivariate populations. The purpose of this chapter is to provide
More informationEconometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018
Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate
More informationThe Case Against JIVE
The Case Against JIVE Related literature, Two comments and One reply PhD. student Freddy Rojas Cama Econometrics Theory II Rutgers University November 14th, 2011 Literature 1 Literature 2 Key de nitions
More informationTesting Weak Convergence Based on HAR Covariance Matrix Estimators
Testing Weak Convergence Based on HAR Covariance atrix Estimators Jianning Kong y, Peter C. B. Phillips z, Donggyu Sul x August 4, 207 Abstract The weak convergence tests based on heteroskedasticity autocorrelation
More informationFinansiell Statistik, GN, 15 hp, VT2008 Lecture 15: Multiple Linear Regression & Correlation
Finansiell Statistik, GN, 5 hp, VT28 Lecture 5: Multiple Linear Regression & Correlation Gebrenegus Ghilagaber, PhD, ssociate Professor May 5, 28 Introduction In the simple linear regression Y i = + X
More informationLecture 4: Heteroskedasticity
Lecture 4: Heteroskedasticity Econometric Methods Warsaw School of Economics (4) Heteroskedasticity 1 / 24 Outline 1 What is heteroskedasticity? 2 Testing for heteroskedasticity White Goldfeld-Quandt Breusch-Pagan
More informationMissing dependent variables in panel data models
Missing dependent variables in panel data models Jason Abrevaya Abstract This paper considers estimation of a fixed-effects model in which the dependent variable may be missing. For cross-sectional units
More informationECONOMETRICS II (ECO 2401) Victor Aguirregabiria (March 2017) TOPIC 1: LINEAR PANEL DATA MODELS
ECONOMETRICS II (ECO 2401) Victor Aguirregabiria (March 2017) TOPIC 1: LINEAR PANEL DATA MODELS 1. Panel Data 2. Static Panel Data Models 2.1. Assumptions 2.2. "Fixed e ects" and "Random e ects" 2.3. Estimation
More informationSTAT5044: Regression and Anova. Inyoung Kim
STAT5044: Regression and Anova Inyoung Kim 2 / 51 Outline 1 Matrix Expression 2 Linear and quadratic forms 3 Properties of quadratic form 4 Properties of estimates 5 Distributional properties 3 / 51 Matrix
More informationApplied Economics. Panel Data. Department of Economics Universidad Carlos III de Madrid
Applied Economics Panel Data Department of Economics Universidad Carlos III de Madrid See also Wooldridge (chapter 13), and Stock and Watson (chapter 10) 1 / 38 Panel Data vs Repeated Cross-sections In
More informationVariable Selection and Model Building
LINEAR REGRESSION ANALYSIS MODULE XIII Lecture - 37 Variable Selection and Model Building Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur The complete regression
More informationSpatial panels: random components vs. xed e ects
Spatial panels: random components vs. xed e ects Lung-fei Lee Department of Economics Ohio State University l eeecon.ohio-state.edu Jihai Yu Department of Economics University of Kentucky jihai.yuuky.edu
More informationthe error term could vary over the observations, in ways that are related
Heteroskedasticity We now consider the implications of relaxing the assumption that the conditional variance Var(u i x i ) = σ 2 is common to all observations i = 1,..., n In many applications, we may
More informationMultivariate Regression
Multivariate Regression The so-called supervised learning problem is the following: we want to approximate the random variable Y with an appropriate function of the random variables X 1,..., X p with the
More informationInstrumental Variables, Simultaneous and Systems of Equations
Chapter 6 Instrumental Variables, Simultaneous and Systems of Equations 61 Instrumental variables In the linear regression model y i = x iβ + ε i (61) we have been assuming that bf x i and ε i are uncorrelated
More informationGMM-based inference in the AR(1) panel data model for parameter values where local identi cation fails
GMM-based inference in the AR() panel data model for parameter values where local identi cation fails Edith Madsen entre for Applied Microeconometrics (AM) Department of Economics, University of openhagen,
More informationTime Series Analysis for Macroeconomics and Finance
Time Series Analysis for Macroeconomics and Finance Bernd Süssmuth IEW Institute for Empirical Research in Economics University of Leipzig December 12, 2011 Bernd Süssmuth (University of Leipzig) Time
More informationECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria
ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria SOLUTION TO FINAL EXAM Friday, April 12, 2013. From 9:00-12:00 (3 hours) INSTRUCTIONS:
More informationIntroduction to Econometrics Final Examination Fall 2006 Answer Sheet
Introduction to Econometrics Final Examination Fall 2006 Answer Sheet Please answer all of the questions and show your work. If you think a question is ambiguous, clearly state how you interpret it before
More informationMS&E 226: Small Data
MS&E 226: Small Data Lecture 6: Bias and variance (v5) Ramesh Johari ramesh.johari@stanford.edu 1 / 49 Our plan today We saw in last lecture that model scoring methods seem to be trading off two different
More informationRegression Analysis. y t = β 1 x t1 + β 2 x t2 + β k x tk + ϵ t, t = 1,..., T,
Regression Analysis The multiple linear regression model with k explanatory variables assumes that the tth observation of the dependent or endogenous variable y t is described by the linear relationship
More informationGARCH Models Estimation and Inference
GARCH Models Estimation and Inference Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 1 Likelihood function The procedure most often used in estimating θ 0 in
More informationApplied Microeconometrics (L5): Panel Data-Basics
Applied Microeconometrics (L5): Panel Data-Basics Nicholas Giannakopoulos University of Patras Department of Economics ngias@upatras.gr November 10, 2015 Nicholas Giannakopoulos (UPatras) MSc Applied Economics
More informationSpecification testing in panel data models estimated by fixed effects with instrumental variables
Specification testing in panel data models estimated by fixed effects wh instrumental variables Carrie Falls Department of Economics Michigan State Universy Abstract I show that a handful of the regressions
More informationINTRODUCTION TO BASIC LINEAR REGRESSION MODEL
INTRODUCTION TO BASIC LINEAR REGRESSION MODEL 13 September 2011 Yogyakarta, Indonesia Cosimo Beverelli (World Trade Organization) 1 LINEAR REGRESSION MODEL In general, regression models estimate the effect
More informationSo far our focus has been on estimation of the parameter vector β in the. y = Xβ + u
Interval estimation and hypothesis tests So far our focus has been on estimation of the parameter vector β in the linear model y i = β 1 x 1i + β 2 x 2i +... + β K x Ki + u i = x iβ + u i for i = 1, 2,...,
More informationEcon 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )
Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=
More informationECON 4551 Econometrics II Memorial University of Newfoundland. Panel Data Models. Adapted from Vera Tabakova s notes
ECON 4551 Econometrics II Memorial University of Newfoundland Panel Data Models Adapted from Vera Tabakova s notes 15.1 Grunfeld s Investment Data 15.2 Sets of Regression Equations 15.3 Seemingly Unrelated
More informationThe regression model with one stochastic regressor (part II)
The regression model with one stochastic regressor (part II) 3150/4150 Lecture 7 Ragnar Nymoen 6 Feb 2012 We will finish Lecture topic 4: The regression model with stochastic regressor We will first look
More informationLecture Notes Part 7: Systems of Equations
17.874 Lecture Notes Part 7: Systems of Equations 7. Systems of Equations Many important social science problems are more structured than a single relationship or function. Markets, game theoretic models,
More informationMa 3/103: Lecture 24 Linear Regression I: Estimation
Ma 3/103: Lecture 24 Linear Regression I: Estimation March 3, 2017 KC Border Linear Regression I March 3, 2017 1 / 32 Regression analysis Regression analysis Estimate and test E(Y X) = f (X). f is the
More information1 Estimation of Persistent Dynamic Panel Data. Motivation
1 Estimation of Persistent Dynamic Panel Data. Motivation Consider the following Dynamic Panel Data (DPD) model y it = y it 1 ρ + x it β + µ i + v it (1.1) with i = {1, 2,..., N} denoting the individual
More informationGMM estimation of spatial panels
MRA Munich ersonal ReEc Archive GMM estimation of spatial panels Francesco Moscone and Elisa Tosetti Brunel University 7. April 009 Online at http://mpra.ub.uni-muenchen.de/637/ MRA aper No. 637, posted
More informationWooldridge, Introductory Econometrics, 4th ed. Chapter 15: Instrumental variables and two stage least squares
Wooldridge, Introductory Econometrics, 4th ed. Chapter 15: Instrumental variables and two stage least squares Many economic models involve endogeneity: that is, a theoretical relationship does not fit
More information