Econometrics. Andrés M. Alonso. Unit 1: Introduction: The regression model. Unit 2: Estimation principles. Unit 3: Hypothesis testing principles.
Andrés M. Alonso

Unit 1: Introduction: The regression model.
Unit 2: Estimation principles.
Unit 3: Hypothesis testing principles.
Unit 4: Heteroscedasticity in the regression model.
Unit 5: Endogeneity of regressors.
Bibliography

Creel, M. (2006), Econometrics. (Available online.)
Hayashi, F. (2000), Econometrics, Princeton University Press. (Library code: D HAY.)
Wooldridge, J.M. (2000), Introductory Econometrics: A Modern Approach, South Western College Publishing. (Library code: D WOO.)

Software: I recommend using MATLAB (or Octave, its free counterpart) or S-Plus (or R, its free counterpart).

Causin, M.C. (2005), MATLAB Tutorials. (Available online.)
LeSage, J.P. (1999), Applied Econometrics using MATLAB. (Available online.)
The outline for today

Unit 1. Introduction: The regression model.
  1.1 Conditional expectation.
  1.2 Assumptions.
  1.3 Interpretation of the model.
  Addendum. Resampling methods in i.i.d. data. The Jackknife. The Bootstrap.
Unit 2. Estimation principles.
Unit 3. Hypothesis testing principles.
Unit 4. Heteroscedasticity in the regression model.
Unit 5. Endogeneity of regressors.
Introduction

Econometrics is a branch of economics concerned with the empirical study of relations among economic variables. Econometrics uses:

An econometric model: based on economic theory.
Data: facts.
Statistical techniques: inference.

Recommended readings: Chapters 2 and 3 of Creel (2006) and Chapters 1 and 6 of Greene (2000).
Introduction

Objectives of an econometric study:

Structural analysis: the use of models to measure specific economic relations. This is an important objective, since it lets us compare different theories against the factual (data) information.

Prediction: the use of models to predict future values of economic variables using the factual (data) information.

Policy evaluation: the use of models to select among alternative policies.

These three objectives are interrelated.
Introduction

Economic and econometric models. Example # 1: Economic theory tells us that demand functions are something like:

x_i = x_i(p_i, m_i, z_i)

x_i is a G × 1 vector of quantities demanded.
p_i is a G × 1 vector of prices.
m_i is income.
z_i is a vector of individual characteristics related to preferences.

Suppose we have a sample consisting of one observation on n individuals' demands at time period t (this is a cross section, where i = 1, 2, ..., n indexes the individuals in the sample).
Introduction

Example # 1 (cont.): This is a nice economic model, but it is not estimable as it stands, since:

The form of the demand function is different for all i.
Some components of z_i may not be observable to an outside modeler. For example, people don't eat the same lunch every day, and you can't tell what they will order just by looking at them.

Suppose we can break z_i into the observable components w_i and a single unobservable component ε_i. What can we do?
Introduction

Example # 1 (cont.): An estimable (i.e., econometric) model is

x_i = β_0 + p_i'β_p + m_i β_m + w_i'β_w + ε_i

Here, we impose a number of restrictions on the theoretical model:

The functions x_i(·), which may differ for all i, have been restricted to all belong to the same parametric family.
Of all parametric families of functions, we have restricted the model to the class of functions linear in the variables.
There is a single unobservable component, and we assume it is additive.
Introduction

Example # 1 (cont.): These are very strong restrictions compared to the theoretical model. Furthermore, these restrictions have no theoretical basis. For this reason, specification testing will be needed to check that the model seems reasonable.

First conclusion: Only when we are convinced that the model is at least approximately correct should we use it for economic analysis.
Introduction

The classical linear model is based upon several assumptions:

1. Linearity: the model is a linear function of the parameter vector β_0:

y_t = β_01 x_{t1} + β_02 x_{t2} + ... + β_0k x_{tk} + ε_t,   (1)

or, in vector form, y_t = x_t'β_0 + ε_t, where y_t is the dependent variable, x_t = (x_{t1}, x_{t2}, ..., x_{tk})' is a k × 1 vector of explanatory variables, and β_0 and ε_t are conformable. The subscript 0 in β_0 means this is the true value of the unknown parameter.
Introduction

Suppose that we have n observations; then model (1) can be written as:

y = Xβ + ε,   (2)

where y is an n × 1 vector and X is an n × k matrix.

2. IID mean-zero errors:

E(ε|X) = 0
Var(ε|X) = E(εε'|X) = σ² I_n.

What are the implications of this second assumption?
Introduction

3. Nonstochastic, linearly independent regressors:

(a) X (n × k) has rank k, its number of columns.
(b) X is nonstochastic.
(c) lim_{n→∞} (1/n) X'X = Q_X, a finite positive definite matrix.

What are the implications of this third assumption?

4. Normality (optional): ε|X is normally distributed.

For economic data, the assumption of nonstochastic regressors is obviously unrealistic. Likewise, normality of errors will often be unrealistic.
Introduction

Linear models are more general than they might first appear, since one can employ nonlinear transformations of the variables:

φ_0(z_t) = [φ_1(w_t)  φ_2(w_t)  ...  φ_p(w_t)] β_0 + ε_t,

where the φ_i(·) are known functions. Defining y_t = φ_0(z_t), x_{t1} = φ_1(w_t), etc., leads to a model in the form of equation (1). For example, the Cobb-Douglas model

z = A w_2^{β_2} w_3^{β_3} exp(ε)

can be transformed logarithmically to obtain

ln z = ln A + β_2 ln w_2 + β_3 ln w_3 + ε.
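To make the log-linearization concrete, here is an illustrative Python/NumPy sketch (the course recommends MATLAB or S-Plus; Python is used here only for illustration). It simulates a Cobb-Douglas relation with invented values A = 2, β_2 = 0.4, β_3 = 0.6, takes logs, and recovers the coefficients by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical inputs and parameters (A = 2, beta2 = 0.4, beta3 = 0.6 are illustrative).
w2 = rng.uniform(1.0, 10.0, n)
w3 = rng.uniform(1.0, 10.0, n)
eps = rng.normal(0.0, 0.1, n)
z = 2.0 * w2**0.4 * w3**0.6 * np.exp(eps)

# The log transform turns the model into one that is linear in the parameters.
y = np.log(z)
X = np.column_stack([np.ones(n), np.log(w2), np.log(w3)])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # approximately [ln 2, 0.4, 0.6]
```

The intercept estimates ln A, so A itself is recovered by exponentiating.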
Introduction

Example: The Nerlove model. Theoretical background: For a firm that takes input prices w and the output level q as given, the cost minimization problem is to choose the quantities of inputs x to solve

min_x w'x   subject to   f(x) = q.

The solution is the vector of factor demands x(w, q). The cost function is obtained by substituting the factor demands into the criterion function:

C(w, q) = w'x(w, q).
Introduction

Example: The Nerlove model.

Monotonicity: Increasing factor prices cannot decrease cost, so ∂C(w, q)/∂w ≥ 0.

Homogeneity: Because the cost of production is a linear function of w, it has the property of homogeneity of degree 1 in input prices: C(tw, q) = t C(w, q), where t is a scalar constant.

Returns to scale: The returns-to-scale parameter γ is defined as the inverse of the elasticity of cost with respect to output:

γ = ( (∂C(w, q)/∂q) · (q / C(w, q)) )^{-1}.

Constant returns to scale is the case where increasing production q implies that cost increases in the proportion 1:1. If this is the case, then γ = 1.
Introduction

Example: The Nerlove model. Cobb-Douglas approximating model: The Cobb-Douglas functional form is linear in the logarithms of the regressors and the dependent variable. For a cost function, if there are g factors, the Cobb-Douglas cost function has the form:

C = A w_1^{β_1} ··· w_g^{β_g} q^{β_q} e^ε.

After a logarithmic transformation we obtain:

ln C = α + β_1 ln w_1 + ... + β_g ln w_g + β_q ln q + ε,

where α = ln A.
Introduction

Example: The Nerlove model. [Figure: company cost against output, in levels and in logs.]
Introduction

Example: The Nerlove model. Using the Cobb-Douglas functional form we can:

Verify the property of HOD1, since it implies that Σ_{i=1}^g β_i = 1.
Verify the hypothesis that the technology exhibits CRTS, since it implies that γ = 1/β_q = 1, so β_q = 1.
Verify the hypothesis of monotonicity, since it implies that the coefficients β_i ≥ 0, i = 1, ..., g.

But the β's are unknown, so we must estimate them.
Estimation principles

Estimation methods. Ordinary least squares method: The ordinary least squares (OLS) estimator is defined as the value that minimizes the sum of the squared errors:

β̂ = arg min s(β), where

s(β) = Σ_{t=1}^n (y_t − x_t'β)²
     = (y − Xβ)'(y − Xβ)
     = y'y − 2y'Xβ + β'X'Xβ
     = ||y − Xβ||².

Using this distance, the OLS line is defined. [Figure: OLS fit of company cost (logs) against output (logs).]
Estimation principles

Ordinary least squares method (2): To minimize the criterion s(β), find the derivative with respect to β and set it to zero:

D_β s(β̂) = −2X'y + 2X'X β̂ = 0,

so β̂ = (X'X)^{-1} X'y.

To verify that this is a minimum, check the second-order condition:

D²_β s(β̂) = 2X'X.

Since the rank of X equals k, this matrix is positive definite: z'X'Xz = ||Xz||² > 0 for any z ≠ 0 (a quadratic form in a p.d. matrix, the identity matrix of order n), so β̂ is in fact a minimizer.
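The closed-form estimator can be sketched in a few lines of Python/NumPy (illustrative only; the design matrix, true β and seed are invented for the example). It computes β̂ = (X'X)^{-1}X'y via a linear solve, which is numerically preferable to forming the inverse explicitly, and checks the second-order condition:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta0 = np.array([1.0, 2.0, -0.5])      # illustrative true parameter
y = X @ beta0 + rng.normal(size=n)

# OLS: beta_hat = (X'X)^{-1} X'y, computed by solving the normal equations
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Second-order condition: 2 X'X must be positive definite (all eigenvalues > 0)
assert np.all(np.linalg.eigvalsh(2 * X.T @ X) > 0)
print(beta_hat)   # close to beta0
```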
Estimation principles

Ordinary least squares method (3): The fitted values are in the vector ŷ = Xβ̂. The residuals are in the vector ε̂ = y − Xβ̂. Note that

y = Xβ + ε = Xβ̂ + ε̂.

The first-order conditions can be written as:

−X'y + X'X β̂ = 0
X'(y − Xβ̂) = 0
X'ε̂ = 0,

so the OLS residuals are orthogonal to X.
Estimation principles

Ordinary least squares method (4): Xβ̂ is the projection of y on the span of X:

Xβ̂ = X(X'X)^{-1}X'y.

Therefore, the matrix that projects y onto the span of X is

P_X = X(X'X)^{-1}X',

since Xβ̂ = P_X y.

ε̂ is the projection of y onto the (n − k)-dimensional space orthogonal to the span of X. We have that

ε̂ = y − Xβ̂ = y − X(X'X)^{-1}X'y = [I_n − X(X'X)^{-1}X'] y.
Estimation principles

So the matrix that projects y onto the space orthogonal to the span of X is

M_X = I_n − X(X'X)^{-1}X' = I_n − P_X,

and ε̂ = M_X y. Therefore

y = P_X y + M_X y = Xβ̂ + ε̂.

Note that both P_X and M_X are symmetric and idempotent. A symmetric matrix A is one such that A = A'. An idempotent matrix A is one such that A = AA. The only nonsingular idempotent matrix is the identity matrix.
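The algebraic properties of P_X and M_X are easy to verify numerically. The following Python/NumPy sketch (with an arbitrary simulated X and y, purely illustrative) checks symmetry, idempotency, the decomposition y = P_X y + M_X y, the orthogonality of the two pieces, and the traces:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 4
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)   # P_X = X (X'X)^{-1} X'
M = np.eye(n) - P                       # M_X = I_n - P_X

# Symmetric and idempotent
assert np.allclose(P, P.T) and np.allclose(P, P @ P)
assert np.allclose(M, M.T) and np.allclose(M, M @ M)
# Decomposition y = P_X y + M_X y, with the two pieces orthogonal
assert np.allclose(y, P @ y + M @ y)
assert np.isclose((P @ y) @ (M @ y), 0.0)
# trace(P_X) = k and trace(M_X) = n - k
assert np.isclose(np.trace(P), k) and np.isclose(np.trace(M), n - k)
```

The trace identities are used later, both for average leverage (k/n) and for the unbiasedness of σ̂².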
Estimation principles

Example: The Nerlove model (estimation). The file <nerlovedata.m> contains data on 145 electric utility companies' cost of production, output and input prices. The data are for the U.S., and were collected by M. Nerlove. The observations are by row, and the columns are COMPANY CODE, COST (C), OUTPUT (Q), PRICE OF LABOR (P_L), PRICE OF FUEL (P_F) and PRICE OF CAPITAL (P_K).

Data example: Let's put MATLAB to work...
Estimation principles

We will estimate the Cobb-Douglas model

ln C = β_1 + β_2 ln Q + β_3 ln P_L + β_4 ln P_F + β_5 ln P_K + ε   (3)

using OLS. The <prg1sl1.m> results are the following:

% OLS estimation (using the formula).
B =
Estimation principles

[Figure: fitted values against company cost (logs), and residuals against fitted values.]

Do the theoretical restrictions hold? Does the model fit well? What do you think about CRTS? Influential observations or outliers?
Estimation principles

Influential observations and outliers: The OLS estimator of the i-th element of the vector β is simply

β̂_i = [(X'X)^{-1}X']_i· y = c_i'y,

where c_i' is the i-th row of (X'X)^{-1}X'. Then β̂_i is a linear estimator, i.e., it is a linear function of the dependent variable.

Since it is a linear combination of the observations on the dependent variable, where the weights are determined by the observations on the regressors, some observations may have more influence than others.
Estimation principles

Influential observations and outliers (2): Define

h_t = (P_X)_{tt} = e_t'P_X e_t = ||P_X e_t||² ≤ ||e_t||² = 1,

where h_t is the t-th element on the main diagonal of P_X and e_t is the t-th unit vector. So 0 ≤ h_t ≤ 1, and

Trace(P_X) = k  ⟹  h̄ = k/n.

So, on average, the weight on the y_t's is k/n. If the weight is much higher, then the observation is potentially influential. However, an observation may also be influential due to the value of y_t, rather than the weight it is multiplied by, which only depends on the x_t's.
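A small Python/NumPy sketch (simulated data with one deliberately extreme regressor value; everything here is invented for illustration) shows how the leverages h_t are read off the diagonal of P_X and how they behave:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
X[0, 1] = 10.0                  # one far-out regressor value -> high leverage

P = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(P)                  # leverages h_t = (P_X)_{tt}

assert np.all(h >= 0) and np.all(h <= 1)
assert np.isclose(h.sum(), k)   # so the average leverage is k/n
print(h[0], h.mean())           # h[0] is far above the k/n average
```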
Estimation principles

Influential observations and outliers (3): Consider the estimation of β without using the t-th observation (denote this estimator by β̂_(t)). Then

β̂_(t) = β̂ − (1/(1 − h_t)) (X'X)^{-1} x_t ε̂_t,

so the change in the t-th observation's fitted value is

x_t'β̂ − x_t'β̂_(t) = (h_t/(1 − h_t)) ε̂_t.

While an observation may be influential if it doesn't affect its own fitted value, it certainly is influential if it does. A fast means of identifying influential observations is to plot (h_t/(1 − h_t)) ε̂_t as a function of t.
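The leave-one-out formula can be checked against a direct refit. In the Python/NumPy sketch below (invented simulated data), β̂_(t) from the closed form agrees with OLS on the sample with the t-th observation deleted, and the change in the t-th fitted value equals (h_t/(1 − h_t))ε̂_t:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages x_t'(X'X)^{-1}x_t
e = y - X @ beta                              # residuals

t = 5   # drop the t-th observation
# Closed form: beta_hat - (1/(1-h_t)) (X'X)^{-1} x_t e_t
beta_loo = beta - XtX_inv @ X[t] * e[t] / (1 - h[t])
# Direct refit without observation t agrees
Xd, yd = np.delete(X, t, axis=0), np.delete(y, t)
beta_direct = np.linalg.solve(Xd.T @ Xd, Xd.T @ yd)
assert np.allclose(beta_loo, beta_direct)

# Change in the t-th fitted value equals (h_t/(1-h_t)) * e_t
assert np.isclose(X[t] @ beta - X[t] @ beta_loo, h[t] / (1 - h[t]) * e[t])
```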
Estimation principles

Influential observations and outliers (4): The following example is based on Table 13.8, page 423, of Peña, D. (1995), Estadística. Modelos y Métodos. It consists of three data sets that differ in only one observation (see file <penadata.m>). For the original data and for sets A, B and C, the table reports β̂_1, β̂_2, β̂_3, ŝ_ε, R², h_t, ε̂_t and (h_t/(1 − h_t)) ε̂_t.

A is an outlier but is not an influential observation.
B is influential but is not an outlier.
C is both an outlier and an influential observation.
Estimation principles

Example: The Nerlove model (influential observations). [Figure: fitted values against company cost (logs); leverage and influence by observation.]
Estimation principles

Example: The Nerlove model (outliers). [Figure: residuals and standardized residuals against fitted values.]

Standardized residuals: r_t = ε̂_t / (ŝ_ε √(1 − h_t)) ∼ t_{n−k−2}.
Estimation principles

Example: The Nerlove model (outliers). Warning: multiple testing. [Figure: residuals and standardized residuals against fitted values, with critical bands.]

Instead of using the α critical value, we will use the α/n critical value.
Estimation principles

Properties of the OLS estimator. Small-sample properties of the least squares estimator.

Unbiasedness: For β̂ we have

β̂ = (X'X)^{-1}X'(Xβ + ε) = β + (X'X)^{-1}X'ε;

then, applying the strong exogeneity assumption and the law of iterated expectations,

E[(X'X)^{-1}X'ε] = E[ E[(X'X)^{-1}X'ε | X] ] = E[(X'X)^{-1}X' E[ε|X]] = 0.

So the OLS estimator is unbiased.
Estimation principles

Small-sample properties of the least squares estimator (2):

Normality: β̂|X ∼ N(β, σ²(X'X)^{-1}), where σ² = E[ε_t²].

Proof: Note that β̂ = β + (X'X)^{-1}X'ε, so β̂ is a linear function of ε. It only remains to obtain the conditional variance:

Var(β̂|X) = E[(β̂ − β)(β̂ − β)'|X]
          = E[(X'X)^{-1}X'εε'X(X'X)^{-1}|X]
          = (X'X)^{-1}X' E[εε'|X] X(X'X)^{-1}
          = σ²(X'X)^{-1}.

Notice that the variance result is true regardless of the distribution of ε (normality of β̂ itself requires the normality assumption on ε).
Estimation principles

Small-sample properties of the least squares estimator (3):

Efficiency (Gauss-Markov theorem): In the classical linear regression model, the OLS estimator β̂ is the minimum variance linear unbiased estimator of β. Moreover, for any vector of constants w, the minimum variance linear unbiased estimator of w'β is w'β̂.

Proof: The OLS estimator is a linear estimator, i.e., it is a linear function of the dependent variable. It is also unbiased, as we proved two slides ago. Now, consider a new linear estimator β̃ = Cy. If this estimator is unbiased, then we must have CX = I, since

E[Cy] = E[CXβ + Cε] = CXβ.

Therefore, CX = I.
Estimation principles

The variance of an unbiased β̃ is Var(β̃|X) = CC'σ². Define

D = C − (X'X)^{-1}X'.

Since CX = I, we have DX = 0, so

Var(β̃|X) = (D + (X'X)^{-1}X')(D + (X'X)^{-1}X')' σ²
          = (DD' + (X'X)^{-1}) σ²
          = DD'σ² + Var(β̂|X).

Finally, Var(β̃|X) ≥ Var(β̂|X), since DD' is positive semidefinite.

Notice that the above result is true regardless of the distribution of ε.
Estimation principles

Small-sample properties of the least squares estimator (4):

Unbiasedness of σ̂² = (1/(n − k)) ε̂'ε̂ = (1/(n − k)) ε'M_X ε, where M_X is the idempotent matrix that projects onto the space orthogonal to the span of X and ε̂ = M_X y.

Proof:

E[σ̂²] = (1/(n − k)) E[Trace(ε'M_X ε)]
       = (1/(n − k)) E[Trace(M_X εε')]
       = (1/(n − k)) Trace(M_X E[εε'])
       = (1/(n − k)) σ² Trace(M_X)
       = σ²,

since Trace(M_X) = Trace(I_n) − Trace(P_X) = n − k.
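Both unbiasedness results can be illustrated by Monte Carlo. The following Python/NumPy sketch (fixed regressors, with true β and σ² chosen arbitrarily for the example) averages β̂ and σ̂² = ε̂'ε̂/(n − k) over repeated samples:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, sigma2 = 25, 3, 4.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])   # fixed regressors
beta0 = np.array([1.0, -2.0, 0.5])                               # illustrative truth

R = 5000
betas = np.empty((R, k))
s2 = np.empty(R)
for r in range(R):
    eps = rng.normal(0.0, np.sqrt(sigma2), n)
    y = X @ beta0 + eps
    b = np.linalg.solve(X.T @ X, X.T @ y)
    betas[r] = b
    s2[r] = (y - X @ b) @ (y - X @ b) / (n - k)   # divide by n - k, not n

print(betas.mean(axis=0))   # close to beta0
print(s2.mean())            # close to sigma2 = 4
```

Dividing by n instead of n − k would bias σ̂² downward by the factor (n − k)/n, which the same simulation makes visible.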
Resampling methods in i.i.d. data

Resampling methods: introduction. Problem: Let X = (X_1, ..., X_N) be observations generated by a model P, and let T(X) be the statistic of interest that estimates the parameter θ. We want to know:

Bias: b_T = E[T(X)] − θ,
Variance: v_T,
Distribution: L(T, P).
Resampling methods in i.i.d. data

Jackknife: Let X = (X_1, X_2, ..., X_N) be a sample of size N and let T_N = T_N(X) be an estimator of θ. In the jackknife samples, one observation of X is excluded each time, i.e.

X_(i) = (X_1, X_2, ..., X_{i−1}, X_{i+1}, ..., X_N) for i = 1, 2, ..., N,

and we calculate the i-th jackknife statistic T_{N−1,i} = T_{N−1}(X_(i)).

Jackknife bias estimator: b_Jack = (N − 1)(T̄_N − T_N).

Jackknife variance estimator: v_Jack = ((N − 1)/N) Σ_{i=1}^N (T_{N−1,i} − T̄_N)²,

where T̄_N = N^{-1} Σ_{i=1}^N T_{N−1,i}.
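The jackknife recipe translates directly into code. Below is an illustrative Python/NumPy sketch (not the course's S-Plus `jackknife` function) applied to the sample mean of N = 100 standard normal draws:

```python
import numpy as np

def jackknife(x, stat):
    """Jackknife bias and variance estimates for statistic `stat` on sample x."""
    n = len(x)
    t_full = stat(x)
    # Leave-one-out replications T_{N-1,i}
    t_loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    t_bar = t_loo.mean()
    bias = (n - 1) * (t_bar - t_full)
    var = (n - 1) / n * np.sum((t_loo - t_bar) ** 2)
    return bias, var

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, 100)   # N = 100, mu = 0, sigma^2 = 1
bias, var = jackknife(x, np.mean)
print(bias, var)                # bias ~ 0; var equals s^2/N for the mean
```

For the sample mean the jackknife variance reduces algebraically to s²/N, the usual estimate of Var(X̄).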
Resampling methods in i.i.d. data

Jackknife. Outline of the resampling procedure:

(X_1, ..., X_N)
X\X_1 → T_{N−1,1}
X\X_2 → T_{N−1,2}
...
X\X_N → T_{N−1,N}

b_Jack = (N − 1)(T̄_N − T_N)
v_Jack = ((N − 1)/N) Σ_{i=1}^N (T_{N−1,i} − T̄_N)²

Possible uses:

Bias reduction: T_Jack = T_N − b_Jack.
If √N(T_N − θ) is asymptotically normally distributed, then we can obtain an estimate of the asymptotic variance by v_Jack.
Resampling methods in i.i.d. data

Example # 1: Let X_1, ..., X_N be i.i.d. N(μ, σ²) observations and T_N = X̄ be the statistic of interest. In this case, we know that:

b_T = 0,
v_T = σ²/N,
L(T) = N(μ, σ²/N).

Jackknife results (N = 100, μ = 0 and σ² = 1):

jackknife(data = x, statistic = mean)
Number of Replications: 100
Summary Statistics:
     Observed  Bias  Mean  SE
mean

b_jack = 1.374e and v_jack = 0, =
Resampling methods in i.i.d. data

d-Jackknife: Let S_{N,r} be the subsets of {1, 2, ..., N} of size r = N − d. For any s = {i_1, i_2, ..., i_r} ∈ S_{N,r} we obtain the d-jackknife replica T_{r,s} = T_r(X_{i_1}, X_{i_2}, ..., X_{i_r}).

d-jackknife variance estimator:

v_Jack-d = (r/(dC)) Σ_{s ∈ S_{N,r}} ( T_{r,s} − C^{-1} Σ_{s ∈ S_{N,r}} T_{r,s} )²,

where C = (N choose d).

d-jackknife as estimator of the distribution of T_N: Let H_N(x) = Pr{√N(T_N − θ) ≤ x} be the distribution we want to estimate. We define the jackknife histogram by:

H_Jack(x) = (1/C) Σ_{s ∈ S_{N,r}} I( √(Nr/d) (T_{r,s} − T_N) ≤ x ).

To learn more: Shao and Tu (1995).
Resampling methods in i.i.d. data

Example # 2: We consider the following statistics: T_N = X̄²_N and the sampling median T_N = F_N^{-1}(1/2) = Q̂_2, comparing:

RB = Ê[(v_Jack − v_T)/v_T]  and  RM = Ê[(v_Jack − v_T)²].
Resampling methods in i.i.d. data

Bootstrap: Let X = (X_1, ..., X_N) be observations generated by model P and let T(X) be the statistic whose distribution L(T, P) we want to estimate. The bootstrap proposes the distribution L*(T*; P̂_N) of T* = T(X*) as estimator of L(T, P), where X* are observations generated by the estimated model P̂_N.

Possible estimators of P in the i.i.d. case:

Standard bootstrap: F_N(x) = N^{-1} Σ_{i=1}^N I(X_i ≤ x).
Parametric bootstrap: F_ϑ̂ (it is assumed that P = F_ϑ).
Smoothed bootstrap: F_{N,h} is a kernel estimator of F.

To learn more: Efron and Tibshirani (1993), Shao and Tu (1995), and Davison and Hinkley (1997).
Resampling methods in i.i.d. data

Standard bootstrap. Outline of the resampling procedure:

(X_1, ..., X_N) → F_N
(X*_1^(1), ..., X*_N^(1)) → T*_N^(1)
(X*_1^(2), ..., X*_N^(2)) → T*_N^(2)
...
(X*_1^(B), ..., X*_N^(B)) → T*_N^(B)

b_Boot = E*[T*_N] − T_N
v_Boot = E*[(T*_N − E*[T*_N])²]
L*_T(x) = Pr*{(T*_N − T_N) ≤ x},

where E* and Pr* are the bootstrap expectation and the bootstrap probability.

Remark: The X*_i are i.i.d. F_N observations; therefore, the resampling procedure can be interpreted as sampling with replacement from the original sample (X_1, ..., X_N).
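The outline above can be sketched as follows in Python/NumPy (an illustration, not the S-Plus `bootstrap` function shown in the slides); it bootstraps the sample mean and reports the bias, variance and percentile estimates:

```python
import numpy as np

def bootstrap(x, stat, B=1000, rng=None):
    """Standard bootstrap: resample with replacement from the empirical F_N."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    reps = np.array([stat(x[rng.integers(0, n, n)]) for _ in range(B)])
    bias = reps.mean() - stat(x)      # b_Boot = E*[T*_N] - T_N
    var = reps.var()                  # v_Boot = E*[(T*_N - E*[T*_N])^2]
    return bias, var, reps

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 100)
bias, var, reps = bootstrap(x, np.mean, B=1000, rng=rng)
print(bias, var)                                  # bias near 0; var near 1/N = 0.01
print(np.percentile(reps, [2.5, 5, 95, 97.5]))    # empirical percentiles
```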
Resampling methods in i.i.d. data

Example # 1 (cont.): T_N = X̄ and B = 1000 resamples of size N.

bootstrap(data = x.f, statistic = mean)
Number of Replications: 1000
Summary Statistics:
     Observed  Bias  Mean  SE
mean
Empirical Percentiles:
     2.5%  5%  95%  97.5%
mean

b_boot = and v_boot = 0, =
Resampling methods in i.i.d. data

Example # 1 (cont.): [Figure: density of the bootstrap replicates of the mean, and their normal Q-Q plot.]
Resampling methods in i.i.d. data

Example # 3: Let X_1, ..., X_N be i.i.d. N(μ = 0, σ² = 1) observations and T_N = Q̂_2. In this case, we know that the asymptotic distribution of √N(Q̂_2 − Q_2) is N(0, 1/(4φ(0)²)) and v_T ≈ 1/(4φ(0)²N).

Jackknife results:

jackknife(data = x.f, statistic = median)
Number of Replications: 100
Summary Statistics:
       Observed  Bias  Mean  SE
median
Empirical Percentiles:
       2.5%  5%  95%  97.5%
median
Resampling methods in i.i.d. data

Example # 3 (cont.): Bootstrap results:

bootstrap(data = x.f, statistic = median)
Number of Replications: 1000
Summary Statistics:
       Observed  Bias  Mean  SE
median
Empirical Percentiles:
       2.5%  5%  95%  97.5%
median

v_jack = and v_boot =
Resampling methods in i.i.d. data

Example # 3 (cont.): [Figure: densities of the replicates of the median.]
Resampling methods in i.i.d. data

Bootstrap always? One example where the bootstrap fails: Let X_1, ..., X_N be i.i.d. U(0, θ). We know that the m.l.e. of θ is T_N = θ̂ = max_{1≤i≤N} X_i and that the density function of T_N is:

f_θ̂(x) = n x^{n−1}/θ^n if 0 ≤ x ≤ θ, and 0 otherwise.

Bootstrap results:

bootstrap(data = x.f2, statistic = max)
Summary Statistics:
     Observed  Bias  Mean  SE
max
Empirical Percentiles:
     2.5%  5%  95%  97.5%
max
Resampling methods in i.i.d. data

Bootstrap always? One example where the bootstrap fails (cont.): [Figure: density of max(X_i) against the histogram of its bootstrap replicates.]

Pr{θ̂ = θ} = 0, but Pr*{θ̂* = θ̂} = 1 − (1 − 1/n)^n → 1 − e^{-1}.
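The degeneracy can be reproduced in a few lines. This illustrative Python/NumPy sketch draws a U(0, 1) sample (so θ = 1), bootstraps the maximum, and measures how often the bootstrap replicate equals θ̂ exactly:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
x = rng.uniform(0.0, 1.0, n)          # theta = 1, illustrative sample
theta_hat = x.max()                   # m.l.e. of theta

B = 2000
reps = np.array([x[rng.integers(0, n, n)].max() for _ in range(B)])

# Under the true model Pr{theta_hat = theta} = 0, but the bootstrap replicate
# equals theta_hat whenever the resample contains the sample maximum:
frac_at_max = np.mean(reps == theta_hat)
print(frac_at_max, 1 - (1 - 1 / n) ** n)   # both close to 1 - 1/e ~ 0.632
```

The bootstrap distribution piles a point mass of about 1 − 1/e on θ̂ itself, so it cannot mimic the continuous sampling distribution of the maximum.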
Resampling methods in i.i.d. data

Bootstrap always? One example where the bootstrap fails (cont.): A bootstrap solution is the parametric bootstrap. [Figure: density of max(X_i) against the histogram of its parametric bootstrap replicates.]
Resampling methods in i.i.d. data

Bootstrap in linear regression: Let ((Y_1, X_{1,1}, ..., X_{1,k}), (Y_2, X_{2,1}, ..., X_{2,k}), ..., (Y_N, X_{N,1}, ..., X_{N,k})) be observations generated by the model:

Y_i = β_0 + β_1 X_{i,1} + ... + β_k X_{i,k} + ε_i,

where the ε_i are i.i.d. with E[ε_i] = 0 and E[ε_i²] = σ².

Resampling of residuals (based on the model):

1. We generate resamples ε*_1, ε*_2, ..., ε*_N i.i.d. from F_{N,ε̂}.
2. We construct bootstrap replicas by Y*_i = β̂_0 + β̂_1 X_{i,1} + ... + β̂_k X_{i,k} + ε*_i, where the β̂_i are estimates of the parameters β_i.
3. Finally, we compute the statistic of interest in the bootstrap replicas.
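The three steps can be sketched as follows in Python/NumPy (illustrative simulated data, not the stack-loss data of Example # 4; the coefficients are invented, though N = 21 matches that example's sample size):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 21
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta0 = np.array([1.0, 2.0, -1.0])     # illustrative true coefficients
y = X @ beta0 + rng.normal(0.0, 1.0, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
resid = resid - resid.mean()           # recenter residuals before resampling

B = 1000
reps = np.empty((B, 3))
for b in range(B):
    # Step 1: resample residuals i.i.d. from their empirical distribution
    e_star = resid[rng.integers(0, n, n)]
    # Step 2: rebuild the responses from the fitted model
    y_star = X @ beta_hat + e_star
    # Step 3: recompute the statistic of interest (here, the OLS coefficients)
    reps[b] = np.linalg.solve(X.T @ X, X.T @ y_star)

print(reps.std(axis=0))   # bootstrap standard errors of the coefficients
```

Because the design matrix X is held fixed and only residuals are resampled, this scheme trusts the model; the pairs bootstrap described next does not.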
Resampling methods in i.i.d. data

Bootstrap in linear regression. Resampling of pairs (not based on the model):

1. We generate bootstrap replicas ((Y*_1, X*_{1,1}, ..., X*_{1,k}), ..., (Y*_N, X*_{N,1}, ..., X*_{N,k})) i.i.d. from F_{N,(Y,X_1,...,X_k)}.
2. We calculate the statistic of interest in the bootstrap replicas.

Remark: The bootstrap observations in the resampling of pairs can be interpreted as sampling with replacement from the N ((k + 1) × 1) vectors in the original data.

Remark: What happens if ε_i is not an i.i.d. sequence? For example, if ε_i is an AR(1) process, or a heteroscedastic sequence?
Resampling methods in i.i.d. data

Example # 4: The data are taken from the operation of a plant for the oxidation of ammonia to nitric acid, measured on 21 consecutive days (Chapter 6 of Draper and Smith (1966)):

loss        percent of ammonia lost
Air.Flow    air flow to the plant
Water.Temp  cooling water inlet temperature
Acid.Conc.  acid concentration as a percentage

Bootstrap results:

bootstrap(data = stack, statistic = coef(lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc., stack)), B = 1000, seed = 0, trace = F)

Summary Statistics:
            Observed  Bias  Mean  SE
(Intercept)
Air.Flow
Water.Temp
Acid.Conc.
Resampling methods in i.i.d. data

Example # 4 (cont.):

Summary Statistics:
            Observed  Bias  Mean  SE
(Intercept)
Air.Flow
Water.Temp
Acid.Conc.

Empirical Percentiles:
            2.5%  5%  95%  97.5%
(Intercept)
Air.Flow
Water.Temp
Acid.Conc.
Resampling methods in i.i.d. data

Example # 4 (cont.): [Figure: densities of the bootstrap replicates of the intercept and the Air.Flow, Water.Temp and Acid.Conc. coefficients.]
More informationRecent Advances in the Field of Trade Theory and Policy Analysis Using Micro-Level Data
Recent Advances in the Field of Trade Theory and Policy Analysis Using Micro-Level Data July 2012 Bangkok, Thailand Cosimo Beverelli (World Trade Organization) 1 Content a) Classical regression model b)
More informationMultiple Regression Model: I
Multiple Regression Model: I Suppose the data are generated according to y i 1 x i1 2 x i2 K x ik u i i 1...n Define y 1 x 11 x 1K 1 u 1 y y n X x n1 x nk K u u n So y n, X nxk, K, u n Rks: In many applications,
More information1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation
1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations
More informationEstimation of the Response Mean. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 27
Estimation of the Response Mean Copyright c 202 Dan Nettleton (Iowa State University) Statistics 5 / 27 The Gauss-Markov Linear Model y = Xβ + ɛ y is an n random vector of responses. X is an n p matrix
More informationEconometrics of Panel Data
Econometrics of Panel Data Jakub Mućk Meeting # 2 Jakub Mućk Econometrics of Panel Data Meeting # 2 1 / 26 Outline 1 Fixed effects model The Least Squares Dummy Variable Estimator The Fixed Effect (Within
More informationECON The Simple Regression Model
ECON 351 - The Simple Regression Model Maggie Jones 1 / 41 The Simple Regression Model Our starting point will be the simple regression model where we look at the relationship between two variables In
More informationReview of Econometrics
Review of Econometrics Zheng Tian June 5th, 2017 1 The Essence of the OLS Estimation Multiple regression model involves the models as follows Y i = β 0 + β 1 X 1i + β 2 X 2i + + β k X ki + u i, i = 1,...,
More informationXβ is a linear combination of the columns of X: Copyright c 2010 Dan Nettleton (Iowa State University) Statistics / 25 X =
The Gauss-Markov Linear Model y Xβ + ɛ y is an n random vector of responses X is an n p matrix of constants with columns corresponding to explanatory variables X is sometimes referred to as the design
More informationLinear Regression. Junhui Qian. October 27, 2014
Linear Regression Junhui Qian October 27, 2014 Outline The Model Estimation Ordinary Least Square Method of Moments Maximum Likelihood Estimation Properties of OLS Estimator Unbiasedness Consistency Efficiency
More informationEstadística II Chapter 4: Simple linear regression
Estadística II Chapter 4: Simple linear regression Chapter 4. Simple linear regression Contents Objectives of the analysis. Model specification. Least Square Estimators (LSE): construction and properties
More informationThe Standard Linear Model: Hypothesis Testing
Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 25: The Standard Linear Model: Hypothesis Testing Relevant textbook passages: Larsen Marx [4]:
More informationEconometrics I KS. Module 1: Bivariate Linear Regression. Alexander Ahammer. This version: March 12, 2018
Econometrics I KS Module 1: Bivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: March 12, 2018 Alexander Ahammer (JKU) Module 1: Bivariate
More informationQuick Review on Linear Multiple Regression
Quick Review on Linear Multiple Regression Mei-Yuan Chen Department of Finance National Chung Hsing University March 6, 2007 Introduction for Conditional Mean Modeling Suppose random variables Y, X 1,
More informationThe Multiple Regression Model Estimation
Lesson 5 The Multiple Regression Model Estimation Pilar González and Susan Orbe Dpt Applied Econometrics III (Econometrics and Statistics) Pilar González and Susan Orbe OCW 2014 Lesson 5 Regression model:
More informationEcon 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )
Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=
More informationRegression Models - Introduction
Regression Models - Introduction In regression models there are two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent
More informationGeneralized Method of Moments: I. Chapter 9, R. Davidson and J.G. MacKinnon, Econometric Theory and Methods, 2004, Oxford.
Generalized Method of Moments: I References Chapter 9, R. Davidson and J.G. MacKinnon, Econometric heory and Methods, 2004, Oxford. Chapter 5, B. E. Hansen, Econometrics, 2006. http://www.ssc.wisc.edu/~bhansen/notes/notes.htm
More informationUNIVERSITÄT POTSDAM Institut für Mathematik
UNIVERSITÄT POTSDAM Institut für Mathematik Testing the Acceleration Function in Life Time Models Hannelore Liero Matthias Liero Mathematische Statistik und Wahrscheinlichkeitstheorie Universität Potsdam
More informationGraduate Econometrics Lecture 4: Heteroskedasticity
Graduate Econometrics Lecture 4: Heteroskedasticity Department of Economics University of Gothenburg November 30, 2014 1/43 and Autocorrelation Consequences for OLS Estimator Begin from the linear model
More informationTESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST
Econometrics Working Paper EWP0402 ISSN 1485-6441 Department of Economics TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST Lauren Bin Dong & David E. A. Giles Department
More informationMIT Spring 2015
Regression Analysis MIT 18.472 Dr. Kempthorne Spring 2015 1 Outline Regression Analysis 1 Regression Analysis 2 Multiple Linear Regression: Setup Data Set n cases i = 1, 2,..., n 1 Response (dependent)
More informationRegression #3: Properties of OLS Estimator
Regression #3: Properties of OLS Estimator Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #3 1 / 20 Introduction In this lecture, we establish some desirable properties associated with
More informationBootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator
Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator by Emmanuel Flachaire Eurequa, University Paris I Panthéon-Sorbonne December 2001 Abstract Recent results of Cribari-Neto and Zarkos
More information1 Least Squares Estimation - multiple regression.
Introduction to multiple regression. Fall 2010 1 Least Squares Estimation - multiple regression. Let y = {y 1,, y n } be a n 1 vector of dependent variable observations. Let β = {β 0, β 1 } be the 2 1
More informationWeighted Least Squares
Weighted Least Squares The standard linear model assumes that Var(ε i ) = σ 2 for i = 1,..., n. As we have seen, however, there are instances where Var(Y X = x i ) = Var(ε i ) = σ2 w i. Here w 1,..., w
More informationstatistical sense, from the distributions of the xs. The model may now be generalized to the case of k regressors:
Wooldridge, Introductory Econometrics, d ed. Chapter 3: Multiple regression analysis: Estimation In multiple regression analysis, we extend the simple (two-variable) regression model to consider the possibility
More informationEconometrics. 7) Endogeneity
30C00200 Econometrics 7) Endogeneity Timo Kuosmanen Professor, Ph.D. http://nomepre.net/index.php/timokuosmanen Today s topics Common types of endogeneity Simultaneity Omitted variables Measurement errors
More informationTopic 4: Model Specifications
Topic 4: Model Specifications Advanced Econometrics (I) Dong Chen School of Economics, Peking University 1 Functional Forms 1.1 Redefining Variables Change the unit of measurement of the variables will
More informationSTAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method.
STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. Rebecca Barter May 5, 2015 Linear Regression Review Linear Regression Review
More informationLeast Squares Estimation-Finite-Sample Properties
Least Squares Estimation-Finite-Sample Properties Ping Yu School of Economics and Finance The University of Hong Kong Ping Yu (HKU) Finite-Sample 1 / 29 Terminology and Assumptions 1 Terminology and Assumptions
More informationInstrumental Variables
Università di Pavia 2010 Instrumental Variables Eduardo Rossi Exogeneity Exogeneity Assumption: the explanatory variables which form the columns of X are exogenous. It implies that any randomness in the
More informationMultiple Linear Regression
Multiple Linear Regression Simple linear regression tries to fit a simple line between two variables Y and X. If X is linearly related to Y this explains some of the variability in Y. In most cases, there
More information11. Bootstrap Methods
11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods
More informationEconomics 536 Lecture 7. Introduction to Specification Testing in Dynamic Econometric Models
University of Illinois Fall 2016 Department of Economics Roger Koenker Economics 536 Lecture 7 Introduction to Specification Testing in Dynamic Econometric Models In this lecture I want to briefly describe
More informationStatistics - Lecture One. Outline. Charlotte Wickham 1. Basic ideas about estimation
Statistics - Lecture One Charlotte Wickham wickham@stat.berkeley.edu http://www.stat.berkeley.edu/~wickham/ Outline 1. Basic ideas about estimation 2. Method of Moments 3. Maximum Likelihood 4. Confidence
More informationThe regression model with one fixed regressor cont d
The regression model with one fixed regressor cont d 3150/4150 Lecture 4 Ragnar Nymoen 27 January 2012 The model with transformed variables Regression with transformed variables I References HGL Ch 2.8
More informationMultivariate Regression Analysis
Matrices and vectors The model from the sample is: Y = Xβ +u with n individuals, l response variable, k regressors Y is a n 1 vector or a n l matrix with the notation Y T = (y 1,y 2,...,y n ) 1 x 11 x
More informationStatistical View of Least Squares
Basic Ideas Some Examples Least Squares May 22, 2007 Basic Ideas Simple Linear Regression Basic Ideas Some Examples Least Squares Suppose we have two variables x and y Basic Ideas Simple Linear Regression
More informationPreliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference
1 / 171 Bootstrap inference Francisco Cribari-Neto Departamento de Estatística Universidade Federal de Pernambuco Recife / PE, Brazil email: cribari@gmail.com October 2013 2 / 171 Unpaid advertisement
More informationThis model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that
Linear Regression For (X, Y ) a pair of random variables with values in R p R we assume that E(Y X) = β 0 + with β R p+1. p X j β j = (1, X T )β j=1 This model of the conditional expectation is linear
More informationPeter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8
Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall
More informationBootstrap, Jackknife and other resampling methods
Bootstrap, Jackknife and other resampling methods Part VI: Cross-validation Rozenn Dahyot Room 128, Department of Statistics Trinity College Dublin, Ireland dahyot@mee.tcd.ie 2005 R. Dahyot (TCD) 453 Modern
More informationEconometrics. 9) Heteroscedasticity and autocorrelation
30C00200 Econometrics 9) Heteroscedasticity and autocorrelation Timo Kuosmanen Professor, Ph.D. http://nomepre.net/index.php/timokuosmanen Today s topics Heteroscedasticity Possible causes Testing for
More informationIntroductory Econometrics
Based on the textbook by Wooldridge: : A Modern Approach Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna November 23, 2013 Outline Introduction
More informationRestricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model
Restricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model Xiuming Zhang zhangxiuming@u.nus.edu A*STAR-NUS Clinical Imaging Research Center October, 015 Summary This report derives
More informationØkonomisk Kandidateksamen 2004 (I) Econometrics 2. Rettevejledning
Økonomisk Kandidateksamen 2004 (I) Econometrics 2 Rettevejledning This is a closed-book exam (uden hjælpemidler). Answer all questions! The group of questions 1 to 4 have equal weight. Within each group,
More informationPreliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference
1 / 172 Bootstrap inference Francisco Cribari-Neto Departamento de Estatística Universidade Federal de Pernambuco Recife / PE, Brazil email: cribari@gmail.com October 2014 2 / 172 Unpaid advertisement
More informationLecture 1: OLS derivations and inference
Lecture 1: OLS derivations and inference Econometric Methods Warsaw School of Economics (1) OLS 1 / 43 Outline 1 Introduction Course information Econometrics: a reminder Preliminary data exploration 2
More informationReliability of inference (1 of 2 lectures)
Reliability of inference (1 of 2 lectures) Ragnar Nymoen University of Oslo 5 March 2013 1 / 19 This lecture (#13 and 14): I The optimality of the OLS estimators and tests depend on the assumptions of
More informationTopic 7: HETEROSKEDASTICITY
Universidad Carlos III de Madrid César Alonso ECONOMETRICS Topic 7: HETEROSKEDASTICITY Contents 1 Introduction 1 1.1 Examples............................. 1 2 The linear regression model with heteroskedasticity
More informationGreene, Econometric Analysis (7th ed, 2012)
EC771: Econometrics, Spring 2012 Greene, Econometric Analysis (7th ed, 2012) Chapters 2 3: Classical Linear Regression The classical linear regression model is the single most useful tool in econometrics.
More informationApplied Econometrics (QEM)
Applied Econometrics (QEM) based on Prinicples of Econometrics Jakub Mućk Department of Quantitative Economics Jakub Mućk Applied Econometrics (QEM) Meeting #3 1 / 42 Outline 1 2 3 t-test P-value Linear
More informationLinear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,
Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,
More informationIntroduction to Econometrics
Introduction to Econometrics Lecture 3 : Regression: CEF and Simple OLS Zhaopeng Qu Business School,Nanjing University Oct 9th, 2017 Zhaopeng Qu (Nanjing University) Introduction to Econometrics Oct 9th,
More informationDealing With Endogeneity
Dealing With Endogeneity Junhui Qian December 22, 2014 Outline Introduction Instrumental Variable Instrumental Variable Estimation Two-Stage Least Square Estimation Panel Data Endogeneity in Econometrics
More informationSimple Linear Regression Model & Introduction to. OLS Estimation
Inside ECOOMICS Introduction to Econometrics Simple Linear Regression Model & Introduction to Introduction OLS Estimation We are interested in a model that explains a variable y in terms of other variables
More informationthe error term could vary over the observations, in ways that are related
Heteroskedasticity We now consider the implications of relaxing the assumption that the conditional variance Var(u i x i ) = σ 2 is common to all observations i = 1,..., n In many applications, we may
More informationECNS 561 Multiple Regression Analysis
ECNS 561 Multiple Regression Analysis Model with Two Independent Variables Consider the following model Crime i = β 0 + β 1 Educ i + β 2 [what else would we like to control for?] + ε i Here, we are taking
More informationNon-linear panel data modeling
Non-linear panel data modeling Laura Magazzini University of Verona laura.magazzini@univr.it http://dse.univr.it/magazzini May 2010 Laura Magazzini (@univr.it) Non-linear panel data modeling May 2010 1
More information