Quick Review on Linear Multiple Regression
1 Quick Review on Linear Multiple Regression Mei-Yuan Chen Department of Finance National Chung Hsing University March 6, 2007
2 Introduction for Conditional Mean Modeling Suppose random variables Y, X_1, X_2, ..., X_k are considered and the conditional mean of Y given X_1, X_2, ..., X_k, E(Y | X_1, X_2, ..., X_k), is of interest. Knowing E(Y | X_1, X_2, ..., X_k), we observe the average behavior of Y conditional on specific realizations of X_1, X_2, ..., X_k. Moreover, the response of the average behavior of Y when one set of realizations changes to another can be analyzed. For example, the change from E(Y | X_1 = x_1, X_2 = x_2, ..., X_k = x_k) to E(Y | X_1 = x_1, X_2 = x_2 + Δ, ..., X_k = x_k) can be treated as the pure effect on the average value of Y of changing X_2 from x_2 to x_2 + Δ.
3 Denote m(x_1, x_2, ..., x_k) = E(Y | X_1 = x_1, X_2 = x_2, ..., X_k = x_k). For simplicity, some functional form is assumed for m(x_1, x_2, ..., x_k), say a linear or nonlinear parametric form. Of course, m(x_1, x_2, ..., x_k) can also be treated nonparametrically. The goal of econometric analysis is to estimate and make inferences about m(x_1, x_2, ..., x_k) using a collection of sample observations {y_t, x_t1, x_t2, ..., x_tk, t = 1, ..., T}, where T is the total number of sample observations.
4 Linear Multiple Regression Suppose m(x_1, x_2, ..., x_k) = β_10 x_1 + β_20 x_2 + ... + β_k0 x_k is assumed. Any realization (y, x_1, x_2, ..., x_k) can then be represented as y = E(Y | X_1 = x_1, X_2 = x_2, ..., X_k = x_k) + e = β_10 x_1 + β_20 x_2 + ... + β_k0 x_k + e, where e is the difference between y and the conditional mean. Therefore, given a collection of sample observations {y_t, x_t1, x_t2, ..., x_tk, t = 1, ..., T}, the linear regression model is formulated as y_t = β_10 x_t1 + β_20 x_t2 + ... + β_k0 x_tk + e_t, t = 1, ..., T. (1) The term e_t is called the regression error.
5 OLS Estimator The linear regression model in (1) can be written in matrix notation as y (T×1) = X (T×k) β_0 (k×1) + e (T×1), where k ≤ T. We want to find a k-dimensional regression hyperplane that best fits the data (y, X). Different estimators are obtained according to different definitions of "best". The least squares estimator defines the best fit as the one minimizing the squared deviations of the observed y_t from the fitted values ŷ_t. The maximum likelihood estimator takes the best fit as the one maximizing the likelihood value.
6 Least Squares Estimator Denote the averaged squared deviation of the observed y_t from candidate fitted values ŷ_t as Q(β) = (y − Xβ)'(y − Xβ)/T. The least squares estimator of β_0 is obtained by min_{β ∈ R^k} Q(β) := (y − Xβ)'(y − Xβ)/T. The first-order conditions (FOCs), also called the normal equations, are ∇_β Q(β) = ∇_β (y'y − 2y'Xβ + β'X'Xβ)/T = −2X'(y − Xβ)/T = 0, and the resulting OLS estimator is β̂_T = (X'X)^{-1} X'y.
7 The second-order condition is satisfied because X'X is positive definite. The vector of fitted values is ŷ = Xβ̂_T = Py, where P = X(X'X)^{-1}X', and the vector of regression residuals is ê = y − ŷ = (I_T − P)y. By the normal equations, X'ê = 0, so that ŷ'ê = 0. When X contains a constant term, we also have Σ_{t=1}^T ê_t = 0 and Σ_{t=1}^T y_t = Σ_{t=1}^T ŷ_t.
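The OLS formulas and residual properties above can be sketched numerically. The data below are simulated; the seed, sample size, and coefficients are illustrative assumptions, not from the text.

```python
import numpy as np

# Simulated illustrative data: a constant plus two regressors.
rng = np.random.default_rng(0)
T, k = 200, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # (X'X)^{-1} X'y
P = X @ np.linalg.inv(X.T @ X) @ X.T              # orthogonal projection onto span(X)
y_fit = P @ y                                     # same as X @ beta_hat
e_hat = y - y_fit                                 # (I - P) y

assert np.allclose(P @ P, P)                      # P is idempotent
assert np.allclose(X.T @ e_hat, 0.0, atol=1e-8)   # normal equations X'e_hat = 0
assert np.allclose(y_fit @ e_hat, 0.0, atol=1e-8) # fitted values orthogonal to residuals
assert abs(e_hat.sum()) < 1e-8                    # constant term => residuals sum to zero
```

In practice one solves the normal equations rather than inverting X'X explicitly, which is both faster and numerically more stable.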
8 Geometrically, ŷ = Py is the orthogonal projection of y onto the k-dimensional space span(X), the space spanned by the column vectors of X, and ê = (I_T − P)y is the orthogonal projection of y onto span(X)^⊥, the orthogonal complement of span(X). Consequently, ŷ is the best approximation of y in span(X) in the sense that ||y − ŷ|| ≤ ||y − z|| for all z ∈ span(X).
9 Properties of OLS Estimator under Classical Assumptions We first make the following classical assumptions on (y, X) and e: [A1] y = Xβ_0 + e, with β_0 finite, is the correct model. [A2] X is a T×k nonstochastic and finite matrix. [A3] X'X is nonsingular for all T ≥ k. (X is of full column rank.) [A4] e is a random vector such that E(e) = 0. [A4'] e is a random vector such that E(e) = 0 and E(ee') = σ²_0 I_T, where σ²_0 < ∞. [A5] e ~ N(0, σ²_0 I_T), where σ²_0 < ∞.
10 (1) Given assumptions [A1]-[A3], β̂_T and σ̂²_T exist and are unique. (2) Given assumptions [A1]-[A4], β̂_T is unbiased. (3) Given assumptions [A1]-[A3] and [A4'], var(β̂_T) = E[(β̂_T − Eβ̂_T)(β̂_T − Eβ̂_T)'] = E[(X'X)^{-1}X'ee'X(X'X)^{-1}] = σ²_0 (X'X)^{-1}.
11 (4) Gauss-Markov Result: Given assumptions [A1]-[A3] and [A4'], β̂_T is the best linear unbiased estimator (BLUE) of β_0. (5) Given assumptions [A1]-[A3] and [A4'], σ̂²_T = ê'ê/(T − k) is an unbiased estimator of σ²_0. (6) If we assume [A5] instead of [A4'], β̂_T is the maximum likelihood estimator (MLE). But the MLE of σ²_0, σ̃²_T = ê'ê/T, is a biased estimator. (7) Given assumptions [A1]-[A3] and [A5], β̂_T and σ̂²_T are the minimum variance unbiased estimators (MVUE).
12 Goodness of Fit A natural measure is the regression variance σ̂²_T = ê'ê/(T − k). Some relative measures: (1) the coefficient of determination, non-centered R²; (2) the coefficient of determination, centered R²; (3) the adjusted R², R̄²: R̄² = 1 − [ê'ê/(T − k)] / [(y'y − T ȳ²_T)/(T − 1)] = 1 − [(T − 1)/(T − k)](1 − R²) = R² − [(k − 1)/(T − k)](1 − R²).
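A quick numerical check of the goodness-of-fit measures above, on simulated data (the design and seed are illustrative assumptions). It verifies that the two closed forms for the adjusted R² on this slide agree.

```python
import numpy as np

# Illustrative simulated regression with a constant and three regressors.
rng = np.random.default_rng(1)
T, k = 120, 4
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
y = X @ np.array([0.5, 1.0, -1.0, 0.3]) + rng.normal(size=T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat
ybar = y.mean()

R2_centered = 1 - (e_hat @ e_hat) / ((y - ybar) @ (y - ybar))
R2_adj = 1 - (e_hat @ e_hat / (T - k)) / ((y - ybar) @ (y - ybar) / (T - 1))

# The two alternative expressions for adjusted R^2 agree with the ratio form:
assert np.isclose(R2_adj, 1 - (T - 1) / (T - k) * (1 - R2_centered))
assert np.isclose(R2_adj, R2_centered - (k - 1) / (T - k) * (1 - R2_centered))
```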
13 Three alternatives that have been proposed for comparing models are: 1. R̃² = 1 − [(T + k)/(T − k)](1 − R²), which minimizes Amemiya's prediction criterion, PC = [ê'ê/(T − k)](1 + k/T) = σ̂²_T (1 + k/T). 2. Akaike's information criterion (AIC): AIC = ln(ê'ê/T) + 2k/T = ln σ̃²_T + 2k/T. 3. Schwarz's information criterion (SIC): SIC = ln σ̃²_T + (k ln T)/T.
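The AIC and SIC above can be sketched on two nested models. The data-generating process, seed, and sample size below are illustrative assumptions; the check at the end is the exact identity that the SIC penalty exceeds the AIC penalty by (ln T − 2)/T per extra parameter.

```python
import numpy as np

# Illustrative simulated data: x2 is an irrelevant regressor.
rng = np.random.default_rng(2)
T = 150
x1, x2 = rng.normal(size=T), rng.normal(size=T)
y = 1.0 + 2.0 * x1 + rng.normal(size=T)

def criteria(*cols):
    X = np.column_stack(cols)
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    s2, k = (e @ e) / T, X.shape[1]          # MLE variance and parameter count
    return np.log(s2) + 2 * k / T, np.log(s2) + k * np.log(T) / T  # (AIC, SIC)

results = {2: criteria(np.ones(T), x1),
           3: criteria(np.ones(T), x1, x2)}  # keyed by k

# SIC penalizes the extra parameter by (ln T)/T, AIC by 2/T:
gap = (results[3][1] - results[2][1]) - (results[3][0] - results[2][0])
assert np.isclose(gap, (np.log(T) - 2) / T)
```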
14 Sampling Distribution of OLS Estimator under Classical Assumptions Given [A5]: e ~ N(0, σ²_0 I_T), the following distributions are immediate: y | X ~ N(Xβ_0, σ²_0 I_T); β̂_T | X ~ N(β_0, σ²_0 (X'X)^{-1}); ê | X = (I_T − P)e ~ N(0, σ²_0 (I_T − P)). As (T − k)σ̂²_T/σ²_0 = ê'ê/σ²_0, we have (T − k)σ̂²_T/σ²_0 ~ χ²(T − k), with mean (T − k) and variance 2(T − k). Hence, σ̂²_T has mean σ²_0 and variance 2σ⁴_0/(T − k).
15 Testing Linear Hypotheses H_0: Rβ_0 = r, where R is a q×k nonstochastic matrix with rank q, and r is a vector of pre-specified real values. Under H_0, [R(X'X)^{-1}R']^{-1/2}(Rβ̂_T − r)/σ_0 ~ N(0, I_q), so (Rβ̂_T − r)'[R(X'X)^{-1}R']^{-1}(Rβ̂_T − r)/σ²_0 ~ χ²(q). Recall that (T − k)σ̂²_T/σ²_0 ~ χ²(T − k). Hence φ = [(Rβ̂_T − r)'[R(X'X)^{-1}R']^{-1}(Rβ̂_T − r)/σ²_0]/q ÷ [(T − k)σ̂²_T/σ²_0]/(T − k) = [χ²(q)/q] / [χ²(T − k)/(T − k)] = (Rβ̂_T − r)'[R(X'X)^{-1}R']^{-1}(Rβ̂_T − r) / (q σ̂²_T) ~ F(q, T − k).
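The F statistic above can be computed directly. The design, restriction matrix R, and r below are illustrative assumptions, with the null hypothesis (both slopes equal zero) true by construction.

```python
import numpy as np

# Illustrative simulated data under H0: both slope coefficients are zero.
rng = np.random.default_rng(3)
T, k, q = 100, 3, 2
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([1.0, 0.0, 0.0]) + rng.normal(size=T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat
s2 = (e_hat @ e_hat) / (T - k)                 # unbiased sigma^2 estimate

R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])                # restrict the two slopes
r = np.zeros(q)
d = R @ beta_hat - r
middle = np.linalg.inv(R @ np.linalg.inv(X.T @ X) @ R.T)
F = (d @ middle @ d) / (q * s2)                # ~ F(q, T-k) under H0
```

Under H0 and the classical assumptions, F would be compared with an F(2, 97) critical value here.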
16 An Alternative Approach Given the constraint Rβ_0 = r, the constrained OLS estimator can be obtained by minimizing the Lagrangian: min_β (y − Xβ)'(y − Xβ)/T + (Rβ − r)'λ, where λ is the vector of Lagrange multipliers. The FOCs are −2X'(y − Xβ)/T + R'λ = 0 and Rβ − r = 0. The FOCs can be written as
[ 2X'X/T  R' ] [ β ]   [ 2X'y/T ]
[   R     0  ] [ λ ] = [   r    ].
17 Solving, we obtain λ̈_T = 2[R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r) and β̈_T = β̂_T − (2X'X/T)^{-1}R'λ̈_T. β̈_T is called the constrained OLS estimator of β_0. Note that the vector of constrained OLS residuals is ë = y − Xβ̈_T = y − Xβ̂_T + X(β̂_T − β̈_T) = ê + X(β̂_T − β̈_T);
18 hence ë'ë = ê'ê + (β̂_T − β̈_T)'X'X(β̂_T − β̈_T) = ê'ê + (Rβ̂_T − r)'[R(X'X)^{-1}R']^{-1}(Rβ̂_T − r), since β̂_T − β̈_T = (X'X/T)^{-1}R'[R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r). Thus ë'ë − ê'ê = (Rβ̂_T − r)'[R(X'X)^{-1}R']^{-1}(Rβ̂_T − r) is the numerator term in the F test, φ.
19 φ = (ë'ë − ê'ê)/(q σ̂²_T) = (ESS_c − ESS_u)/(q σ̂²_T) = [(ESS_c − ESS_u)/q] / [ESS_u/(T − k)] = [(R²_u − R²_c)/q] / [(1 − R²_u)/(T − k)],
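The identity above, that the increase in the sum of squared residuals from imposing Rβ = r equals the quadratic-form numerator of the F test, can be verified numerically. The data and the single restriction below are illustrative assumptions.

```python
import numpy as np

# Illustrative simulated data and one linear restriction (q = 1).
rng = np.random.default_rng(4)
T, k = 80, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=T)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e_hat = y - X @ beta_hat

R = np.array([[0.0, 1.0, 1.0]])                # restriction: beta_2 + beta_3 = 0.5
r = np.array([0.5])
mid = np.linalg.inv(R @ XtX_inv @ R.T)
# Constrained OLS estimator via the formula for beta_hat - beta_con:
beta_con = beta_hat - XtX_inv @ R.T @ mid @ (R @ beta_hat - r)
e_con = y - X @ beta_con

lhs = e_con @ e_con - e_hat @ e_hat            # SSR increase from the constraint
rhs = (R @ beta_hat - r) @ mid @ (R @ beta_hat - r)
assert np.isclose(lhs, rhs)
assert np.allclose(R @ beta_con, r)            # the constraint holds exactly
```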
20 where the subscripts c and u signify the constrained and unconstrained models, respectively. In other words, the F test can be interpreted as a test of the loss of fit, because it compares the performance of the constrained and unconstrained models. In particular, if we want to test whether all the coefficients (except the constant term) equal zero, then R²_c = 0, so that φ = [R²_u/(k − 1)] / [(1 − R²_u)/(T − k)] ~ F(k − 1, T − k).
21 Asymptotic Properties of the OLS Estimator β̂_T = (T^{-1} Σ_{t=1}^T x_t x_t')^{-1} (T^{-1} Σ_{t=1}^T x_t y_t) = β_0 + (T^{-1} Σ_{t=1}^T x_t x_t')^{-1} (T^{-1} Σ_{t=1}^T x_t e_t).
22 Asymptotic Normality of OLS Estimator: IID Observations
23 Kolmogorov's Theorem Let {Z_t} be a sequence of i.i.d. random variables and Z̄_T ≡ T^{-1} Σ_{t=1}^T Z_t. Then Z̄_T → µ a.s. if and only if E|Z_t| < ∞ and E(Z_t) = µ.
24 Lindeberg-Lévy Central Limit Theorem Let {Z_t} be a sequence of i.i.d. random scalars. If var(Z_t) ≡ σ² < ∞ and σ² ≠ 0, then √T(Z̄_T − µ̄_T)/σ̄_T = √T(Z̄_T − µ)/σ →_A N(0, 1).
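A small simulation illustrating the Lindeberg-Lévy CLT above: standardized means of i.i.d. (here uniform) draws behave like a standard normal. The distribution, sample size, and replication count are illustrative assumptions.

```python
import numpy as np

# Uniform(0, 1) draws: mu = 1/2, sigma^2 = 1/12.
rng = np.random.default_rng(5)
T, reps = 500, 2000
Z = rng.uniform(0, 1, size=(reps, T))
stats = np.sqrt(T) * (Z.mean(axis=1) - 0.5) / np.sqrt(1 / 12)

# Mean close to 0, variance close to 1, and roughly 95% of draws within +/- 1.96:
assert abs(stats.mean()) < 0.1
assert abs(stats.var() - 1.0) < 0.1
assert 0.92 < np.mean(np.abs(stats) < 1.96) < 0.98
```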
25 Asymptotic Normality under the IID Case Given B1: y = Xβ_0 + e; B2: {(x_t', e_t)'} is an i.i.d. sequence; B3: (i) E(x_t e_t) = 0, (ii) E|x_ti e_t|² < ∞, i = 1, ..., k, (iii) V_T ≡ var(T^{-1/2} X'e) = V is positive definite; B4: (i) E|x_ti|² < ∞, i = 1, ..., k, (ii) M ≡ E(x_t x_t') is positive definite. Then D^{-1/2} √T(β̂_T − β_0) →_A N(0, I), where D ≡ M^{-1} V M^{-1}.
26 Suppose in addition that B5: there exists V̂_T, symmetric and positive semidefinite, such that V̂_T − V →_P 0. Then D̂_T − D →_P 0, where D̂_T = (X'X/T)^{-1} V̂_T (X'X/T)^{-1}.
27 Asymptotic Normality of OLS: Independent Heterogeneous Observations
28 Markov's SLLN Let {Z_t} be a sequence of independent random variables with E(Z_t) = µ_t < ∞. If for some δ > 0, Σ_{t=1}^∞ E|Z_t − µ_t|^{1+δ}/t^{1+δ} < ∞, then Z̄_T − µ̄_T → 0 a.s.
29 Lindeberg-Feller CLT Let {Z_t} be a sequence of independent random scalars with E(Z_t) = µ_t, var(Z_t) = σ²_t < ∞, σ²_t ≠ 0, and distribution functions F_t(z). Then √T(Z̄_T − µ̄_T)/σ̄_T →_A N(0, 1) if, for every ε > 0, lim_{T→∞} σ̄_T^{-2} T^{-1} Σ_{t=1}^T ∫_{(z − µ_t)² > ε T σ̄²_T} (z − µ_t)² dF_t(z) = 0. This last condition is called the Lindeberg condition.
30 Liapounov's CLT Let {Z_t} be a sequence of independent random scalars with E(Z_t) = µ_t, var(Z_t) = σ²_t, σ²_t ≠ 0, and E|Z_t − µ_t|^{2+δ} < ∞ for some δ > 0 and all t. If σ̄²_T > δ' > 0 for all T sufficiently large, then √T(Z̄_T − µ̄_T)/σ̄_T →_A N(0, 1).
31 Asymptotic Normality: Independent Heterogeneous Observations Suppose that the following conditions hold: B1: y_t = x_t'β_0 + e_t, t = 1, ..., T; B2: {(x_t', e_t)'} is an independent sequence; B3: (i) E(x_t e_t) = 0 for all t, (ii) E|x_ti e_t|^{2+δ} < ∞ for some δ > 0 and all i = 1, ..., k and all t, (iii) V_T ≡ var(X'e/T^{1/2}) is uniformly positive definite; B4: (i) E|x²_ti|^{1+δ} < ∞ for some δ > 0 and all i = 1, ..., k and all t, (ii) M_T ≡ E(X'X/T) is uniformly positive definite. Then D_T^{-1/2} √T(β̂_T − β_0) →_A N(0, I), where D_T = M_T^{-1} V_T M_T^{-1}.
32 Further suppose: B5: there exists V̂_T, positive semidefinite and symmetric, such that V̂_T − V_T →_P 0. Then, with D̂_T = (X'X/T)^{-1} V̂_T (X'X/T)^{-1}, we have D̂_T − D_T →_P 0.
33 Large Sample Tests I We consider various large sample tests for the linear hypothesis Rβ_0 = r, where R is a q×k nonstochastic matrix with rank q ≤ k.
34 Wald Test Let Γ_T = R D_T R' = R M_T^{-1} V_T M_T^{-1} R'. Then under the null hypothesis, Γ_T^{-1/2} √T(Rβ̂_T − r) →_A N(0, I), and the Wald statistic is W_T = T(Rβ̂_T − r)' Γ̂_T^{-1} (Rβ̂_T − r) →_A χ²(q), where Γ̂_T = R D̂_T R' = R(X'X/T)^{-1} V̂_T (X'X/T)^{-1} R'.
35 Lagrange Multiplier Test Given the constraint Rβ = r, the constrained OLS estimator is obtained by minimizing the Lagrangian (y − Xβ)'(y − Xβ)/T + (Rβ − r)'λ, where λ is the Lagrange multiplier. Intuitively, when the null hypothesis is true (i.e., the constraint is valid), the shadow price λ of this constraint should be low. Hence, whether the shadow price is close to zero is evidence for or against the hypothesis. The Lagrange multiplier (LM) test can be interpreted as a test of λ = 0.
36 λ̈_T = 2[R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r), β̈_T = β̂_T − (X'X/T)^{-1}R'λ̈_T/2, where β̈_T is the constrained OLS estimator and λ̈_T is the basis of the LM test.
37 Suppose that the asymptotic normality of β̂_T holds. Then Λ_T^{-1/2} √T λ̈_T →_A N(0, I), where Λ_T = 4(R M_T^{-1} R')^{-1} Γ_T (R M_T^{-1} R')^{-1}. The LM statistic is LM_T = T λ̈_T' Λ̈_T^{-1} λ̈_T →_A χ²(q), where Λ̈_T = 4[R(X'X/T)^{-1}R']^{-1}[R(X'X/T)^{-1} V̈_T (X'X/T)^{-1}R'][R(X'X/T)^{-1}R']^{-1}, and V̈_T is an estimator of V_T obtained from the constrained regression such that V̈_T − V_T →_P 0 under the null.
38 If V̂_T replaces V̈_T in Λ̈_T, then LM_T = 4T(Rβ̂_T − r)'[R(X'X/T)^{-1}R']^{-1} Λ̂_T^{-1} [R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r) = T(Rβ̂_T − r)' Γ̂_T^{-1} (Rβ̂_T − r) = W_T. This shows that the two tests are asymptotically equivalent under the null hypothesis, i.e., W_T − LM_T →_P 0.
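The exact algebraic equivalence above, that the LM statistic collapses to the Wald statistic when the same V̂_T is used, can be checked numerically. The simulated data, hypothesis, and robust V̂_T below are illustrative assumptions.

```python
import numpy as np

# Illustrative simulated data; H0 (both slopes zero) is true by construction.
rng = np.random.default_rng(7)
T = 150
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([1.0, 0.0, 0.0]) + rng.normal(size=T)

Mx_inv = np.linalg.inv(X.T @ X / T)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat
V_hat = (X * e_hat[:, None] ** 2).T @ X / T        # heteroskedasticity-robust V_hat

R = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
d = R @ beta_hat                                   # R beta_hat - r with r = 0
Gamma_hat = R @ Mx_inv @ V_hat @ Mx_inv @ R.T
W = T * d @ np.linalg.inv(Gamma_hat) @ d           # Wald statistic

A = R @ Mx_inv @ R.T
lam = 2 * np.linalg.inv(A) @ d                     # Lagrange multiplier estimate
Lambda_hat = 4 * np.linalg.inv(A) @ Gamma_hat @ np.linalg.inv(A)
LM = T * lam @ np.linalg.inv(Lambda_hat) @ lam

assert np.isclose(W, LM)                           # identical up to rounding
```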
39 Test of s coefficients being zero: [0 I_s]β_0 = 0. Accordingly, the original model can be written as y = X_1 b_10 + X_2 b_20 + e, where X_1 and X_2 are T×(k − s) and T×s matrices, respectively. Clearly, the constrained model is y = X_1 b_10 + e, so that the constrained OLS estimator is β̈_T = (b̈_1T', 0')', where b̈_1T = (X_1'X_1)^{-1}X_1'y, and the constrained OLS residual is ë = y − X_1 b̈_1T.
40 Writing P_1 = X_1(X_1'X_1)^{-1}X_1', it is easily verified by the matrix inversion formula that R(X'X)^{-1} = [−[X_2'(I − P_1)X_2]^{-1} X_2'X_1(X_1'X_1)^{-1}, [X_2'(I − P_1)X_2]^{-1}], R(X'X)^{-1}R' = [X_2'(I − P_1)X_2]^{-1}, and R(X'X)^{-1}X' = [X_2'(I − P_1)X_2]^{-1} X_2'(I − P_1). Hence λ̈_T = 2X_2'(I − P_1)ë/T = 2X_2'ë/T, and Λ̈_T = 4[R(X'X/T)^{-1}R']^{-1}[R(X'X/T)^{-1} V̈_T (X'X/T)^{-1}R'][R(X'X/T)^{-1}R']^{-1} = 4[−X_2'X_1(X_1'X_1)^{-1}, I_s] V̈_T [−X_2'X_1(X_1'X_1)^{-1}, I_s]'.
41 The LM statistic is thus LM_T = (T/4) λ̈_T' {[−X_2'X_1(X_1'X_1)^{-1}, I_s] V̈_T [−X_2'X_1(X_1'X_1)^{-1}, I_s]'}^{-1} λ̈_T. When V̈_T = σ̈²_T(X'X/T) is consistent for V_T, where σ̈²_T = Σ_{t=1}^T ë²_t/T, the LM statistic can be further simplified as LM_T = [ë'X(X'X)^{-1}X'ë]/(ë'ë/T) = T·R², where R² is the (non-centered) R² from regressing ë on X.
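The LM = T·R² simplification above can be verified numerically by comparing the quadratic-form version of the statistic with T times the uncentered R² of the auxiliary regression. The data below are simulated, with H0 (coefficient on X_2 is zero) true by construction.

```python
import numpy as np

# Illustrative simulated data: constrained model drops one regressor (s = 1).
rng = np.random.default_rng(8)
T = 200
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = rng.normal(size=(T, 1))
X = np.column_stack([X1, X2])
y = X1 @ np.array([1.0, 0.5]) + rng.normal(size=T)

b1 = np.linalg.solve(X1.T @ X1, X1.T @ y)
e_con = y - X1 @ b1                                # constrained residuals
s2_con = (e_con @ e_con) / T                       # constrained MLE variance

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
R = np.array([[0.0, 0.0, 1.0]])
d = R @ beta_hat
LM_quad = (d @ np.linalg.inv(R @ np.linalg.inv(X.T @ X) @ R.T) @ d) / s2_con

P = X @ np.linalg.inv(X.T @ X) @ X.T               # projection onto span(X)
R2_uncentered = (e_con @ P @ e_con) / (e_con @ e_con)
assert np.isclose(LM_quad, T * R2_uncentered)      # LM = T * R^2
```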
42 Likelihood Ratio Test When the e_t are i.i.d. N(0, σ²_0), we have learned that the OLS estimator is also the MLE, maximizing L_T(β, σ²) = −(T/2) log(2π) − (T/2) log(σ²) − [1/(2σ²)] Σ_{t=1}^T (y_t − x_t'β)². Let β̂_T (β̈_T) be the unconstrained (constrained) MLE of β_0, and let σ̈²_T = (1/T) Σ_{t=1}^T ë²_t and σ̂²_T = (1/T) Σ_{t=1}^T ê²_t.
43 The likelihood ratio (LR) test is based on the log likelihood ratio: LR_T = −2[L_T(β̈_T, σ̈²_T) − L_T(β̂_T, σ̂²_T)] = T log(σ̈²_T/σ̂²_T). If the null hypothesis is true, the likelihood ratio is close to one, so that LR_T is close to zero; otherwise, LR_T is positive.
44 As σ̈²_T = σ̂²_T + (β̂_T − β̈_T)'(X'X/T)(β̂_T − β̈_T) = σ̂²_T + (Rβ̂_T − r)'[R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r), we have LR_T = T log(1 + z_T), where z_T := (Rβ̂_T − r)'[R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r)/σ̂²_T.
45 Noting that the mean value expansion of log(1 + z) about z = 0 is (1 + z*)^{-1} z, where z* lies between z and 0, we can write LR_T = T(1 + z*_T)^{-1} z_T = T(Rβ̂_T − r)'[R(X'X/T)^{-1}R']^{-1}(Rβ̂_T − r)/σ̂²_T + o_P(1), where the leading term is nothing but the Wald statistic with V̂_T = σ̂²_T(X'X/T). We immediately have the following result.
46 Suppose that σ̂²_T(X'X/T) is consistent for V_T. Then under the null hypothesis, LR_T →_A χ²(q). Therefore, the Wald, LR, and LM tests are asymptotically equivalent provided that σ̂²_T(X'X/T) is consistent for V_T. If σ̂²_T(X'X/T) is not consistent for V_T, LR_T need not have a limiting χ² distribution. Thus, the LR test is not robust to heteroskedasticity and serial correlation, whereas the Wald and LM tests are robust if V_T is estimated properly.
47 Conflict Among Tests If σ²_0 is known, it can be seen that LR_T = Σ_{t=1}^T (ë²_t − ê²_t)/σ²_0 = W_T. We have also learned that the Wald and LM tests differ only by the asymptotic covariance matrix estimator used in the statistics. It follows that when σ²_0 is known, LM_T = W_T = LR_T. When σ²_0 is unknown, W_T = LR_T(σ̂²_T) and LM_T = LR_T(σ̈²_T).
48 Observe that LR_T − LM_T = LR_T − LR_T(σ̈²_T) = 2[L_T(β̂_T, σ̂²_T) − L_T(β^u_T, σ̈²_T)] ≥ 0, where β^u_T maximizes L_T(β, σ̈²_T), and that W_T − LR_T = LR_T(σ̂²_T) − LR_T = 2[L_T(β̈_T, σ̈²_T) − L_T(β^r_T, σ̂²_T)] ≥ 0, where β^r_T maximizes L_T(β, σ̂²_T) subject to the constraint Rβ = r. We have thus established an inequality in finite samples: W_T ≥ LR_T ≥ LM_T.
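The finite-sample ordering W_T ≥ LR_T ≥ LM_T above can be illustrated numerically using the closed forms under normal-likelihood variance estimates: with r̈ = σ̈²_T/σ̂²_T, W_T = T(r̈ − 1), LR_T = T log r̈, and LM_T = T(1 − 1/r̈). The simulated data and restriction below are illustrative assumptions.

```python
import numpy as np

# Illustrative simulated data; the restriction holds in the population.
rng = np.random.default_rng(9)
T = 60
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([1.0, 0.3, 0.0]) + rng.normal(size=T)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e_hat = y - X @ beta_hat
s2_u = (e_hat @ e_hat) / T                     # unconstrained MLE variance

R = np.array([[0.0, 1.0, 1.0]])
r = np.array([0.3])
d = R @ beta_hat - r
beta_con = beta_hat - XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T) @ d
e_con = y - X @ beta_con
s2_c = (e_con @ e_con) / T                     # constrained MLE variance (>= s2_u)

W = T * (s2_c - s2_u) / s2_u
LR = T * np.log(s2_c / s2_u)
LM = T * (s2_c - s2_u) / s2_c
assert W >= LR >= LM >= 0                      # the finite-sample ordering
```

The ordering follows from r − 1 ≥ log r ≥ 1 − 1/r for r ≥ 1, so it holds for any sample, not just this one.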
49 Estimation of the Asymptotic Covariance Matrix In its most general form, V_T can be written as var(T^{-1/2} Σ_{t=1}^T x_t e_t) = (1/T) Σ_{t=1}^T var(x_t e_t) + (1/T) Σ_{τ=1}^{T−1} Σ_{t=τ+1}^T [E(x_{t−τ} e_{t−τ} e_t x_t') + E(x_t e_t e_{t−τ} x_{t−τ}')]. We have learned that the limiting distributions of the large sample tests discussed in the preceding subsections depend crucially on consistent estimation of V_T.
50 The Case of No Serial Correlation We have learned that when {(x_t', e_t)'} is an independent sequence, var(T^{-1/2} Σ_{t=1}^T x_t e_t) = (1/T) Σ_{t=1}^T var(x_t e_t). Let V̂_T = Σ_{t=1}^T ê²_t x_t x_t'/T. Since ê_t = e_t − x_t'(β̂_T − β_0), it can be seen that when β̂_T is consistent for β_0, (1/T) Σ_{t=1}^T ê²_t x_t x_t' − (1/T) Σ_{t=1}^T E(e²_t x_t x_t') = (1/T) Σ_{t=1}^T [e²_t x_t x_t' − E(e²_t x_t x_t')] − (2/T) Σ_{t=1}^T [e_t x_t'(β̂_T − β_0)] x_t x_t' + (1/T) Σ_{t=1}^T [(β̂_T − β_0)'x_t x_t'(β̂_T − β_0)] x_t x_t' →_P 0.
51 Thus, V̂_T is consistent for V_T, and D̂_T = (Σ_{t=1}^T x_t x_t'/T)^{-1} (Σ_{t=1}^T ê²_t x_t x_t'/T) (Σ_{t=1}^T x_t x_t'/T)^{-1} is consistent for D_T, the asymptotic covariance matrix of √T(β̂_T − β_0).
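The sandwich estimator D̂_T above is straightforward to compute. The heteroskedastic design below is simulated and purely illustrative; the resulting robust standard errors are √(D̂_T,ii/T).

```python
import numpy as np

# Illustrative simulated regression whose error variance depends on x.
rng = np.random.default_rng(6)
T = 300
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
e = rng.normal(size=T) * (0.5 + np.abs(x))     # heteroskedastic errors
y = X @ np.array([1.0, 2.0]) + e

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat

M_inv = np.linalg.inv(X.T @ X / T)
V_hat = (X * e_hat[:, None] ** 2).T @ X / T    # sum of e_hat_t^2 x_t x_t' / T
D_hat = M_inv @ V_hat @ M_inv                  # asy. cov. of sqrt(T)(beta_hat - beta0)

robust_se = np.sqrt(np.diag(D_hat) / T)        # robust standard errors for beta_hat
```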
52 More generally, if E(e_t | F_{t−1}) = 0, where F_{t−1} = σ({(e_{i−1}, x_i')'; i ≤ t}) contains the information up to time t − 1, then for τ < t, E(x_t e_t e_τ x_τ') = E(x_t E(e_t | F_{t−1}) e_τ x_τ') = 0, so that V_T = Σ_{t=1}^T var(x_t e_t)/T. Consequently, the D̂_T above is still consistent for D_T.
53 General Case In the time series context, it is possible that the x_t e_t exhibit serial correlation. If the x_t e_t are asymptotically uncorrelated, in the sense that E(x_t e_t e_{t−τ} x_{t−τ}') → 0 at a suitable rate as τ → ∞, then for τ large, Σ_{t=τ+1}^T E(x_t e_t e_{t−τ} x_{t−τ}')/T should be very small. This suggests that V_T may be well approximated by V*_T = (1/T) Σ_{t=1}^T var(x_t e_t) + (1/T) Σ_{τ=1}^{m(T)} Σ_{t=τ+1}^T [E(x_{t−τ} e_{t−τ} e_t x_t') + E(x_t e_t e_{t−τ} x_{t−τ}')], for some truncation lag m(T), where m(T) should grow with T to maintain the approximation property.
54 In particular, m(T) is required to be o(T^{1/4}); that is, m(T) tends to infinity, but at a rate much slower than T. The following estimator is a heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimator: V̂_T = (1/T) Σ_{t=1}^T ê²_t x_t x_t' + (1/T) Σ_{τ=1}^{m(T)} Σ_{t=τ+1}^T (x_{t−τ} ê_{t−τ} ê_t x_t' + x_t ê_t ê_{t−τ} x_{t−τ}').
55 The major problem is that this V̂_T need not be positive semidefinite. Newey and West (1987) propose a simple modification: V̌_T = (1/T) Σ_{t=1}^T ê²_t x_t x_t' + (1/T) Σ_{τ=1}^{m(T)} w_m(τ) Σ_{t=τ+1}^T (x_{t−τ} ê_{t−τ} ê_t x_t' + x_t ê_t ê_{t−τ} x_{t−τ}'), where w_m(τ) = 1 − τ/(m + 1) is a weight function. Note that w_m(τ) is decreasing in τ; hence, the larger the τ, the smaller the associated weight. Also note that for fixed τ, w_m(τ) → 1 as m → ∞.
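The Newey-West estimator above, with Bartlett weights w_m(τ) = 1 − τ/(m + 1), can be sketched as follows. The AR(1) error process, sample size, and lag truncation m are illustrative assumptions.

```python
import numpy as np

# Illustrative simulated regression with serially correlated (AR(1)) errors.
rng = np.random.default_rng(10)
T, m = 400, 4
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.5 * e[t - 1] + rng.normal()
X = np.column_stack([np.ones(T), x])
y = X @ np.array([1.0, 2.0]) + e

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
u = y - X @ beta_hat                           # OLS residuals e_hat
S = X * u[:, None]                             # row t holds x_t' * e_hat_t

V = S.T @ S / T                                # tau = 0 term
for tau in range(1, m + 1):
    w = 1 - tau / (m + 1)                      # Bartlett weight
    Gamma = S[tau:].T @ S[:-tau] / T           # sum of x_t e_t e_{t-tau} x_{t-tau}' / T
    V += w * (Gamma + Gamma.T)

assert np.allclose(V, V.T)                     # symmetric by construction
assert np.linalg.eigvalsh(V).min() > -1e-10    # p.s.d., as Newey & West (1987) show
```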
56 Testing the Efficient Market Hypothesis (EMH) EMH: E(p_t | Ω_{t−1}) = p_{t−1}, where Ω_{t−1} is the information set at t − 1. Under the EMH, the relevant information in Ω_{t−1} is p_{t−1}; that is, E(p_t | Ω_{t−1}) = E(p_t | p_{t−1}) = p_{t−1}. Given a linear model for the conditional mean, E(p_t | p_{t−1}) = α_0 + β_0 p_{t−1}, a linear regression model for observations t = 1, ..., T is set up as p_t = α_0 + β_0 p_{t−1} + e_t, t = 1, ..., T. Testing the EMH is then equivalent to testing the null hypothesis H_0: β_0 = 1.
57 Assumptions to be checked: [A1]: True model? Yes! [A2]: Is p_{t−1} nonstochastic? No! Non-classical regression analysis! [B2]: Does p²_{t−1} obey a WLLN? No! Since p_{t−1} is not stationary, a spurious regression may arise. This is diagnosed by a data plot or by unit root tests.
58 What are stationarity and nonstationarity? 1. Strong Stationarity: A time series {y_t} is strongly stationary if its distributions and joint distributions are time invariant. 2. Weak Stationarity: A time series {y_t} is weakly stationary if it has a constant mean, a constant variance, and a covariance between y_t and y_{t+s} that depends only on s, not on t.
59 It is clear that ln p_t is nonstationary when p_t is nonstationary. However, the first difference of ln p_t, Δln p_t = ln p_t − ln p_{t−1} = r_t, which is defined as the return, is stationary. Observe that, for the model in logs, ln p_t = α_0 + β_0 ln p_{t−1} and ln p_{t−1} = α_0 + β_0 ln p_{t−2}, so that subtracting gives ln p_t − ln p_{t−1} = β_0 (ln p_{t−1} − ln p_{t−2}), i.e., Δln p_t = β_0 Δln p_{t−1}, or r_t = β_0 r_{t−1}. Therefore, the linear regression model we consider becomes r_t = α_0 + β_0 r_{t−1} + e_t, t = 1, ..., T.
60 Question again: how do we make reliable statistical inferences for the null hypothesis? [A1]: True model? Yes! [A2]: Is r_{t−1} nonstochastic? No! Non-classical regression analysis! [B2]: Does {r²_{t−1}} obey a WLLN? Yes! since {r_{t−1}} is stationary. [A5]: Is r_t normally distributed? No! Check by EViews. [B3]: Does {r_{t−1} e_t} obey a CLT? Yes! Therefore, regression analysis is implementable, and the large sample tests are applicable.
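As an illustrative sketch of this final step (not from the text), one can estimate the return regression r_t = α_0 + β_0 r_{t−1} + e_t on simulated returns and form a heteroskedasticity-robust large-sample t statistic for the null hypothesis H_0: β_0 = 1 carried over from the levels model. The fat-tailed return distribution, seed, and sample size are assumptions.

```python
import numpy as np

# Illustrative fat-tailed, serially uncorrelated returns (Student-t innovations).
rng = np.random.default_rng(11)
ret = rng.standard_t(df=5, size=1000) * 0.01

y, x = ret[1:], ret[:-1]                       # r_t on r_{t-1}
T = y.size
X = np.column_stack([np.ones(T), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat

M_inv = np.linalg.inv(X.T @ X / T)
V_hat = (X * e_hat[:, None] ** 2).T @ X / T    # robust V_hat (no serial correlation)
D_hat = M_inv @ V_hat @ M_inv
se = np.sqrt(np.diag(D_hat) / T)               # large-sample standard errors

t_stat = (beta_hat[1] - 1.0) / se[1]           # t statistic for H0: beta_0 = 1
# Under H0, t_stat is asymptotically N(0, 1); |t_stat| > 1.96 rejects at the 5% level.
```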
More informationIntroductory Econometrics
Based on the textbook by Wooldridge: : A Modern Approach Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna November 23, 2013 Outline Introduction
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationMIT Spring 2015
Regression Analysis MIT 18.472 Dr. Kempthorne Spring 2015 1 Outline Regression Analysis 1 Regression Analysis 2 Multiple Linear Regression: Setup Data Set n cases i = 1, 2,..., n 1 Response (dependent)
More information13. Time Series Analysis: Asymptotics Weakly Dependent and Random Walk Process. Strict Exogeneity
Outline: Further Issues in Using OLS with Time Series Data 13. Time Series Analysis: Asymptotics Weakly Dependent and Random Walk Process I. Stationary and Weakly Dependent Time Series III. Highly Persistent
More informationLesson 4: Stationary stochastic processes
Dipartimento di Ingegneria e Scienze dell Informazione e Matematica Università dell Aquila, umberto.triacca@univaq.it Stationary stochastic processes Stationarity is a rather intuitive concept, it means
More informationSummer School in Statistics for Astronomers V June 1 - June 6, Regression. Mosuk Chow Statistics Department Penn State University.
Summer School in Statistics for Astronomers V June 1 - June 6, 2009 Regression Mosuk Chow Statistics Department Penn State University. Adapted from notes prepared by RL Karandikar Mean and variance Recall
More informationIntermediate Econometrics
Intermediate Econometrics Heteroskedasticity Text: Wooldridge, 8 July 17, 2011 Heteroskedasticity Assumption of homoskedasticity, Var(u i x i1,..., x ik ) = E(u 2 i x i1,..., x ik ) = σ 2. That is, the
More information1 Appendix A: Matrix Algebra
Appendix A: Matrix Algebra. Definitions Matrix A =[ ]=[A] Symmetric matrix: = for all and Diagonal matrix: 6=0if = but =0if 6= Scalar matrix: the diagonal matrix of = Identity matrix: the scalar matrix
More informationEconomic modelling and forecasting
Economic modelling and forecasting 2-6 February 2015 Bank of England he generalised method of moments Ole Rummel Adviser, CCBS at the Bank of England ole.rummel@bankofengland.co.uk Outline Classical estimation
More informationEmpirical Market Microstructure Analysis (EMMA)
Empirical Market Microstructure Analysis (EMMA) Lecture 3: Statistical Building Blocks and Econometric Basics Prof. Dr. Michael Stein michael.stein@vwl.uni-freiburg.de Albert-Ludwigs-University of Freiburg
More informationAn estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic
Chapter 6 ESTIMATION OF THE LONG-RUN COVARIANCE MATRIX An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic standard errors for the OLS and linear IV estimators presented
More informationLinear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,
Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,
More informationBootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator
Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator by Emmanuel Flachaire Eurequa, University Paris I Panthéon-Sorbonne December 2001 Abstract Recent results of Cribari-Neto and Zarkos
More informationEconomics 582 Random Effects Estimation
Economics 582 Random Effects Estimation Eric Zivot May 29, 2013 Random Effects Model Hence, the model can be re-written as = x 0 β + + [x ] = 0 (no endogeneity) [ x ] = = + x 0 β + + [x ] = 0 [ x ] = 0
More informationMultivariate Regression
Multivariate Regression The so-called supervised learning problem is the following: we want to approximate the random variable Y with an appropriate function of the random variables X 1,..., X p with the
More information3. Linear Regression With a Single Regressor
3. Linear Regression With a Single Regressor Econometrics: (I) Application of statistical methods in empirical research Testing economic theory with real-world data (data analysis) 56 Econometrics: (II)
More informationSensitivity of GLS estimators in random effects models
of GLS estimators in random effects models Andrey L. Vasnev (University of Sydney) Tokyo, August 4, 2009 1 / 19 Plan Plan Simulation studies and estimators 2 / 19 Simulation studies Plan Simulation studies
More informationReliability of inference (1 of 2 lectures)
Reliability of inference (1 of 2 lectures) Ragnar Nymoen University of Oslo 5 March 2013 1 / 19 This lecture (#13 and 14): I The optimality of the OLS estimators and tests depend on the assumptions of
More informationAnalysis of Cross-Sectional Data
Analysis of Cross-Sectional Data Kevin Sheppard http://www.kevinsheppard.com Oxford MFE This version: October 30, 2017 November 6, 2017 Outline Econometric models Specification that can be analyzed with
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationThe Statistical Property of Ordinary Least Squares
The Statistical Property of Ordinary Least Squares The linear equation, on which we apply the OLS is y t = X t β + u t Then, as we have derived, the OLS estimator is ˆβ = [ X T X] 1 X T y Then, substituting
More information11. Further Issues in Using OLS with TS Data
11. Further Issues in Using OLS with TS Data With TS, including lags of the dependent variable often allow us to fit much better the variation in y Exact distribution theory is rarely available in TS applications,
More informationXβ is a linear combination of the columns of X: Copyright c 2010 Dan Nettleton (Iowa State University) Statistics / 25 X =
The Gauss-Markov Linear Model y Xβ + ɛ y is an n random vector of responses X is an n p matrix of constants with columns corresponding to explanatory variables X is sometimes referred to as the design
More information13.2 Example: W, LM and LR Tests
13.2 Example: W, LM and LR Tests Date file = cons99.txt (same data as before) Each column denotes year, nominal household expenditures ( 10 billion yen), household disposable income ( 10 billion yen) and
More informationPart IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015
Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)
More information(θ θ ), θ θ = 2 L(θ ) θ θ θ θ θ (θ )= H θθ (θ ) 1 d θ (θ )
Setting RHS to be zero, 0= (θ )+ 2 L(θ ) (θ θ ), θ θ = 2 L(θ ) 1 (θ )= H θθ (θ ) 1 d θ (θ ) O =0 θ 1 θ 3 θ 2 θ Figure 1: The Newton-Raphson Algorithm where H is the Hessian matrix, d θ is the derivative
More information3. For a given dataset and linear model, what do you think is true about least squares estimates? Is Ŷ always unique? Yes. Is ˆβ always unique? No.
7. LEAST SQUARES ESTIMATION 1 EXERCISE: Least-Squares Estimation and Uniqueness of Estimates 1. For n real numbers a 1,...,a n, what value of a minimizes the sum of squared distances from a to each of
More information8. Hypothesis Testing
FE661 - Statistical Methods for Financial Engineering 8. Hypothesis Testing Jitkomut Songsiri introduction Wald test likelihood-based tests significance test for linear regression 8-1 Introduction elements
More informationSTAT Financial Time Series
STAT 6104 - Financial Time Series Chapter 4 - Estimation in the time Domain Chun Yip Yau (CUHK) STAT 6104:Financial Time Series 1 / 46 Agenda 1 Introduction 2 Moment Estimates 3 Autoregressive Models (AR
More informationMS&E 226: Small Data. Lecture 11: Maximum likelihood (v2) Ramesh Johari
MS&E 226: Small Data Lecture 11: Maximum likelihood (v2) Ramesh Johari ramesh.johari@stanford.edu 1 / 18 The likelihood function 2 / 18 Estimating the parameter This lecture develops the methodology behind
More informationBusiness Statistics. Tommaso Proietti. Linear Regression. DEF - Università di Roma 'Tor Vergata'
Business Statistics Tommaso Proietti DEF - Università di Roma 'Tor Vergata' Linear Regression Specication Let Y be a univariate quantitative response variable. We model Y as follows: Y = f(x) + ε where
More informationEconometrics II - EXAM Answer each question in separate sheets in three hours
Econometrics II - EXAM Answer each question in separate sheets in three hours. Let u and u be jointly Gaussian and independent of z in all the equations. a Investigate the identification of the following
More informationMaximum Likelihood (ML) Estimation
Econometrics 2 Fall 2004 Maximum Likelihood (ML) Estimation Heino Bohn Nielsen 1of32 Outline of the Lecture (1) Introduction. (2) ML estimation defined. (3) ExampleI:Binomialtrials. (4) Example II: Linear
More informationØkonomisk Kandidateksamen 2004 (I) Econometrics 2. Rettevejledning
Økonomisk Kandidateksamen 2004 (I) Econometrics 2 Rettevejledning This is a closed-book exam (uden hjælpemidler). Answer all questions! The group of questions 1 to 4 have equal weight. Within each group,
More informationEconometrics of Panel Data
Econometrics of Panel Data Jakub Mućk Meeting # 6 Jakub Mućk Econometrics of Panel Data Meeting # 6 1 / 36 Outline 1 The First-Difference (FD) estimator 2 Dynamic panel data models 3 The Anderson and Hsiao
More informationLecture 11: Regression Methods I (Linear Regression)
Lecture 11: Regression Methods I (Linear Regression) Fall, 2017 1 / 40 Outline Linear Model Introduction 1 Regression: Supervised Learning with Continuous Responses 2 Linear Models and Multiple Linear
More informationRestricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model
Restricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model Xiuming Zhang zhangxiuming@u.nus.edu A*STAR-NUS Clinical Imaging Research Center October, 015 Summary This report derives
More informationAnalysis of Cross-Sectional Data
Analysis of Cross-Sectional Data Kevin Sheppard http://www.kevinsheppard.com Oxford MFE This version: November 8, 2017 November 13 14, 2017 Outline Econometric models Specification that can be analyzed
More informationThe Multiple Regression Model Estimation
Lesson 5 The Multiple Regression Model Estimation Pilar González and Susan Orbe Dpt Applied Econometrics III (Econometrics and Statistics) Pilar González and Susan Orbe OCW 2014 Lesson 5 Regression model:
More informationEconomics 536 Lecture 7. Introduction to Specification Testing in Dynamic Econometric Models
University of Illinois Fall 2016 Department of Economics Roger Koenker Economics 536 Lecture 7 Introduction to Specification Testing in Dynamic Econometric Models In this lecture I want to briefly describe
More informationEstimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators
Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let
More informationSimple and Multiple Linear Regression
Sta. 113 Chapter 12 and 13 of Devore March 12, 2010 Table of contents 1 Simple Linear Regression 2 Model Simple Linear Regression A simple linear regression model is given by Y = β 0 + β 1 x + ɛ where
More information1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation
1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations
More informationThis model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that
Linear Regression For (X, Y ) a pair of random variables with values in R p R we assume that E(Y X) = β 0 + with β R p+1. p X j β j = (1, X T )β j=1 This model of the conditional expectation is linear
More informationCh 3: Multiple Linear Regression
Ch 3: Multiple Linear Regression 1. Multiple Linear Regression Model Multiple regression model has more than one regressor. For example, we have one response variable and two regressor variables: 1. delivery
More informationGeneralized Linear Models
Generalized Linear Models Lecture 3. Hypothesis testing. Goodness of Fit. Model diagnostics GLM (Spring, 2018) Lecture 3 1 / 34 Models Let M(X r ) be a model with design matrix X r (with r columns) r n
More informationMultivariate Time Series: VAR(p) Processes and Models
Multivariate Time Series: VAR(p) Processes and Models A VAR(p) model, for p > 0 is X t = φ 0 + Φ 1 X t 1 + + Φ p X t p + A t, where X t, φ 0, and X t i are k-vectors, Φ 1,..., Φ p are k k matrices, with
More informationEcon 510 B. Brown Spring 2014 Final Exam Answers
Econ 510 B. Brown Spring 2014 Final Exam Answers Answer five of the following questions. You must answer question 7. The question are weighted equally. You have 2.5 hours. You may use a calculator. Brevity
More information