Interpreting Regression Results


1 Interpreting Regression Results
Carlo Favero

2 Interpreting Regression Results
Interpreting regression results is not a simple exercise. We propose to split the procedure into three steps. First, understand the relevance of the regression independently of inference on the parameters. There is an easy way to do this: suppose all parameters in the model are known and identical to the estimated values, and learn how to read them. Second, introduce a measure of sampling variability and re-evaluate what you know, taking into account that parameters are estimated and there is uncertainty surrounding your point estimates. Third, remember that each regression is run after a reduction process has been implemented, explicitly or implicitly. The relevant question is: what happens if something went wrong in the reduction process? What are the consequences of omitting relevant information, or of including irrelevant information, in your specification?

3 Relevance of a regression is different from statistical significance of the estimated parameters. In fact, confusing the statistical significance of an estimated parameter describing the effect of a regressor on the dependent variable with the practical relevance of that effect is a rather common mistake in the use of the linear model. Statistical inference is a tool for estimating parameters in a probability model and assessing the amount of sampling variability. Statistics tells us what we can say about the values of the parameters in the model on the basis of our sample. The relevance of a regression is instead determined by the share of the unconditional variance of y that is explained by the variance of E(y | X). Measuring how large this share is constitutes the fundamental role of the R².

4 The R-squared as a measure of relevance of a regression
To illustrate the point, let us consider two specific applications of the CAPM:

$$\left(r^i_t - r^{rf}_t\right) = 0.8\,\sigma_m u_{m,t} + \sigma_i u_{i,t}, \qquad i = 1, 2,$$

$$\left(r^m_t - r^{rf}_t\right) = \mu_m + \sigma_m u_{m,t},$$

$$\begin{pmatrix} u_{i,t} \\ u_{m,t} \end{pmatrix} \sim n.i.d.\left[\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right],$$

with µ_m = , σ_m = 0.054, σ_1 = 0.09, σ_2 = . We simulate an artificial sample of 1056 observations (the same length as the sample July 1926 to June 2014) for each process. µ_m and σ_m are calibrated to match the first two moments of the market portfolio excess returns over the sample 1926:7-2014:7, while the standard errors of the two idiosyncratic components are calibrated to deliver an R² in the CAPM regression of about .22 and .98 respectively.
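The design above can be sketched numerically. The values of µ_m and σ_2 are not given in the text, so `mu_m = 0.006` and `sigma_2 = 0.006` below are hypothetical choices that reproduce the qualitative pattern: two portfolios with the same beta but a low and a very high R².

```python
# Sketch of the simulated CAPM exercise; mu_m and sigma_2 are assumed values.
import numpy as np

rng = np.random.default_rng(0)
T = 1056                        # monthly observations, July 1926 - June 2014
mu_m, sigma_m = 0.006, 0.054    # market moments (mu_m is an assumption here)
sigma_1, sigma_2 = 0.09, 0.006  # idiosyncratic vols (sigma_2 is an assumption)

u_m = rng.standard_normal(T)
mkt = mu_m + sigma_m * u_m                                 # market excess returns
r1 = 0.8 * sigma_m * u_m + sigma_1 * rng.standard_normal(T)  # noisy portfolio
r2 = 0.8 * sigma_m * u_m + sigma_2 * rng.standard_normal(T)  # clean portfolio

def ols_beta_r2(y, x):
    """OLS of y on a constant and x; return slope and R-squared."""
    X = np.column_stack([np.ones(len(x)), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return b[1], 1 - e @ e / ((y - y.mean()) @ (y - y.mean()))

b1, r2_1 = ols_beta_r2(r1, mkt)
b2, r2_2 = ols_beta_r2(r2, mkt)
# Both slope estimates are close to the true 0.8, but the two R-squared
# differ dramatically: relevance and significance are distinct notions.
```

Running the sketch shows both betas near 0.8 while the first regression explains only a small share of the variance of its regressand and the second explains almost all of it.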

5 The R-squared as a measure of relevance of a regression
By running the two CAPM regressions on the artificial sample:

TABLE 3.1: The estimation of the CAPM on artificial data

Dependent Variable: (r¹_t − r^rf_t)
  Regressor (r^m_t − r^rf_t): Coefficient / Std. Error / t-ratio / Prob.
  R², S.E. of regression

Dependent Variable: (r²_t − r^rf_t)
  Regressor (r^m_t − r^rf_t): Coefficient / Std. Error / t-ratio / Prob.
  R², S.E. of regression

In both cases the estimated betas are statistically significant and very close to their true value of 0.8.

6 The R-squared as a measure of relevance of a regression
Simulate again the processes, but introduce at some point a temporary shift of two per cent in the excess returns on the market portfolio.

[Figures: simulated market portfolio excess returns, simulated Portfolio 1 excess returns, and simulated Portfolio 2 excess returns, each under the baseline and the alternative scenario.]

In both experiments the conditional expectation changes by the same amount, but the share of the unconditional variance of y explained by the regression is very different in the two cases.

7 Inference in the Linear Regression Model
Users of econometric models in finance attribute high priority to the concept of "statistical significance" of their estimates. In the standard statistical jargon, an estimate of a parameter is statistically significant if its estimated value, compared with its sampling standard deviation, makes it unlikely that in other samples the estimate would change sign. In the linear regression model the statistical index most commonly used is the t-ratio, and the significance of an estimated parameter is usually measured in terms of its P-value: the probability, if the true coefficient were zero, of observing an estimate at least as far from zero as the one obtained. In the previous section we discussed the common confusion between statistical significance and relevance. In this section we illustrate the basic principles that allow us to evaluate statistical significance and to perform tests of relevant hypotheses on the estimated coefficients in a linear model.

8 Elements of Distribution Theory
We consider the distribution of a generic n-dimensional vector z, together with the derived distribution of the vector x = g(z), which admits the inverse z = h(x), with h = g⁻¹. If

$$\operatorname{prob}(z_1 < z < z_2) = \int_{z_1}^{z_2} f(z)\,dz, \qquad \operatorname{prob}(x_1 < x < x_2) = \int_{x_1}^{x_2} f(x)\,dx,$$

then:

$$f(x) = f(h(x))\,|J|, \qquad J = \begin{vmatrix} \dfrac{\partial h_1}{\partial x_1} & \cdots & \dfrac{\partial h_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial h_n}{\partial x_1} & \cdots & \dfrac{\partial h_n}{\partial x_n} \end{vmatrix} = \left|\dfrac{\partial h}{\partial x}\right|.$$
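The change-of-variable formula can be checked numerically in the scalar case. For x = g(z) = exp(z) with z standard normal, h(x) = log(x) and |J| = |dh/dx| = 1/x, so f(x) = φ(log x)/x, which is the log-normal density:

```python
# Numerical check of f_x(x) = f_z(h(x)) * |J| for x = exp(z), z ~ N(0, 1).
import numpy as np
from scipy import stats

x = np.linspace(0.1, 5.0, 50)
phi = stats.norm.pdf                  # standard normal density f_z
f_x = phi(np.log(x)) / x              # f_z(h(x)) * |dh/dx|, with h(x) = log(x)

# This coincides with the standard log-normal density.
assert np.allclose(f_x, stats.lognorm(s=1).pdf(x))
```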

9 The normal distribution
The standardized univariate normal has the following distribution:

$$f(z) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2} z^2\right), \qquad E(z) = 0, \quad var(z) = 1.$$

By considering the transformation x = σz + µ, we derive the distribution of the univariate normal as:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad E(x) = \mu, \quad var(x) = \sigma^2.$$

10 The normal multivariate distribution
Consider now the vector z = (z₁, z₂, ..., z_n)', such that

$$f(z) = \prod_{i=1}^{n} f(z_i) = (2\pi)^{-\frac{n}{2}} \exp\left(-\frac{1}{2} z'z\right).$$

z is, by construction, a vector of independent normal variables with zero mean and identity variance-covariance matrix. The conventional notation is z ∼ N(0, I_n).

11 The normal multivariate distribution
Consider now the linear transformation x = Az + µ, where A is an (n × n) invertible matrix. The inverse transformation is z = A⁻¹(x − µ), with Jacobian |J| = |A⁻¹| = 1/|A|. By applying the formula for the transformation of variables, we have:

$$f(x) = (2\pi)^{-\frac{n}{2}} \left|A^{-1}\right| \exp\left(-\frac{1}{2}(x-\mu)' A^{-1\prime} A^{-1} (x-\mu)\right),$$

which, by defining the positive definite matrix Σ = AA', equals

$$f(x) = (2\pi)^{-\frac{n}{2}} |\Sigma|^{-\frac{1}{2}} \exp\left(-\frac{1}{2}(x-\mu)' \Sigma^{-1} (x-\mu)\right).$$

The conventional notation for the multivariate normal is x ∼ N(µ, Σ).
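A quick Monte Carlo check of the transformation x = Az + µ: the simulated draws should have mean µ and variance-covariance matrix AA'. The particular matrix A below is an arbitrary invertible choice for illustration.

```python
# Verify that x = A z + mu has mean mu and covariance AA' when z ~ N(0, I).
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.0],
              [1.0, 0.5]])            # arbitrary invertible matrix (assumption)
mu = np.array([1.0, -1.0])
Sigma = A @ A.T                       # implied variance-covariance matrix

z = rng.standard_normal((500_000, 2))
x = z @ A.T + mu                      # one draw of x = A z + mu per row

assert np.allclose(x.mean(axis=0), mu, atol=0.02)
assert np.allclose(np.cov(x.T), Sigma, atol=0.1)
```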

12 The transformation of the normal multivariate
The formula for the transformation of variables allows us to better understand the theorem introduced in a previous section of this chapter.

Theorem. For any x ∼ N(µ, Σ), any (m × n) matrix B and any (m × 1) vector d, if y = Bx + d, then y ∼ N(Bµ + d, BΣB').

Consider a partitioning of an n-variate normal vector into two sub-vectors of dimensions n₁ and n − n₁:

$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim N\left[\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\right].$$

By applying the formula for the transformation of variables, we obtain two results:

1. x₁ ∼ N(µ₁, Σ₁₁), which follows from applying the general formula with d = 0, B = (I_{n₁} 0);
2. (x₁ | x₂) ∼ N(µ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − µ₂), Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁), which is the conditional distribution of x₁ given x₂.

13 Distributions derived from the normal
Consider z ∼ N(0, I_n), an n-variate standard normal. The distribution of ω = z'z is defined as a χ²(n) distribution, with n degrees of freedom. Consider two vectors z₁ and z₂, of dimensions n₁ and n₂ respectively, with the following distribution:

$$\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \sim N\left[\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} I_{n_1} & 0 \\ 0 & I_{n_2} \end{pmatrix}\right].$$

We have ω₁ = z₁'z₁ ∼ χ²(n₁), ω₂ = z₂'z₂ ∼ χ²(n₂), and ω₁ + ω₂ = z₁'z₁ + z₂'z₂ ∼ χ²(n₁ + n₂). In general, the sum of two independent χ² random variables is itself distributed as a χ², with degrees of freedom equal to the sum of the degrees of freedom of the two χ².

14 Distributions derived from the normal
Our discussion of the multivariate normal implies that if x ∼ N(µ, Σ), then (x − µ)'Σ⁻¹(x − µ) ∼ χ²(n). A related result establishes that if z ∼ N(0, I_n) and M is a symmetric idempotent (n × n) matrix of rank r, then z'Mz ∼ χ²(r). Another distribution related to the normal is the F-distribution, obtained as the ratio of two independent χ² variables, each divided by its degrees of freedom. Given independent ω₁ ∼ χ²(n₁) and ω₂ ∼ χ²(n₂), we have:

$$\frac{\omega_1 / n_1}{\omega_2 / n_2} \sim F(n_1, n_2).$$
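The result on quadratic forms can be illustrated by simulation: take M as the projection matrix onto the column span of a random T × r matrix (symmetric, idempotent, rank r) and check that z'Mz has the first two moments of a χ²(r), namely mean r and variance 2r.

```python
# Monte Carlo check that z'Mz behaves like chi-squared(r) for idempotent M.
import numpy as np

rng = np.random.default_rng(2)
T, r = 20, 3
X = rng.standard_normal((T, r))
M = X @ np.linalg.inv(X.T @ X) @ X.T   # symmetric, idempotent, rank r
assert np.allclose(M @ M, M)           # idempotency

draws = rng.standard_normal((100_000, T))
q = np.einsum('ij,jk,ik->i', draws, M, draws)   # z'Mz for each draw

# chi-squared(r) has mean r and variance 2r.
assert abs(q.mean() - r) < 0.1
assert abs(q.var() - 2 * r) < 0.5
```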

15 Distributions derived from the normal
The Student's t distribution is then defined by the relation t²(n) = F(1, n): the square of a Student's t with n degrees of freedom is distributed as F(1, n). Another useful result establishes that two quadratic forms in the standard multivariate normal, z'Mz and z'Qz, are independent if MQ = 0. We can finally state the following theorem, which is fundamental to statistical inference in the linear model:

Theorem. If z ∼ N(0, I_n), and M and Q are symmetric and idempotent matrices of ranks r and s respectively with MQ = 0, then

$$\frac{z'Qz / s}{z'Mz / r} \sim F(s, r).$$

16 The conditional distribution of y | X
To perform inference in the linear regression model, we need a further hypothesis specifying the distribution of y conditional upon X:

$$(y \mid X) \sim N(X\beta, \sigma^2 I), \qquad (1)$$

or, equivalently,

$$(\epsilon \mid X) \sim N(0, \sigma^2 I). \qquad (2)$$

Given (1), we can immediately derive the distribution of (β̂ | X), which, being a linear combination of a normal vector, is also normal:

$$(\hat{\beta} \mid X) \sim N\left(\beta, \sigma^2 (X'X)^{-1}\right). \qquad (3)$$

17 The conditional distribution of y | X
Equation (3) constitutes the basis for constructing confidence intervals and performing hypothesis tests in the linear regression model. Consider the following expression:

$$\frac{(\hat{\beta} - \beta)' X'X (\hat{\beta} - \beta)}{\sigma^2} = \frac{\epsilon' X (X'X)^{-1} X'X (X'X)^{-1} X' \epsilon}{\sigma^2} = \frac{\epsilon' Q \epsilon}{\sigma^2}, \qquad Q = X(X'X)^{-1}X',$$

and, applying the results derived in the previous section, we know that

$$\left.\frac{\epsilon' Q \epsilon}{\sigma^2}\,\right|\, X \sim \chi^2(k). \qquad (4)$$

18 The conditional distribution of y | X
Equation (4) is not useful in practice, as we do not know σ². However, we know that

$$\left.\frac{S(\hat{\beta})}{\sigma^2} = \frac{\epsilon' M \epsilon}{\sigma^2}\,\right|\, X \sim \chi^2(T-k), \qquad (5)$$

$$M = I - Q. \qquad (6)$$

Since MQ = 0, we know the distribution of the ratio of (4) to (5); moreover, by taking the ratio we get rid of the unknown term σ²:

$$\frac{(\hat{\beta} - \beta)' X'X (\hat{\beta} - \beta)/\sigma^2}{s^2/\sigma^2} = \frac{\epsilon' Q \epsilon}{\epsilon' M \epsilon}\,(T-k) \sim k\,F(k, T-k). \qquad (7)$$
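Result (7) can be illustrated by simulation: with normal errors, the statistic (β̂ − β)'X'X(β̂ − β)/(k s²) should follow F(k, T − k), whose mean is (T − k)/(T − k − 2). The design matrix and parameter values below are arbitrary illustrative choices.

```python
# Monte Carlo illustration of result (7): the pivotal statistic follows F(k, T-k).
import numpy as np

rng = np.random.default_rng(3)
T, k = 40, 3
X = np.column_stack([np.ones(T), rng.standard_normal((T, k - 1))])
beta, sigma = np.array([1.0, 0.5, -0.2]), 2.0     # illustrative true values
XtX = X.T @ X
XtX_inv = np.linalg.inv(XtX)

stats_f = []
for _ in range(50_000):
    y = X @ beta + sigma * rng.standard_normal(T)
    bhat = XtX_inv @ X.T @ y
    e = y - X @ bhat
    s2 = e @ e / (T - k)                           # unbiased variance estimator
    d = bhat - beta
    stats_f.append(d @ XtX @ d / (k * s2))

# The mean of F(k, T-k) is (T-k)/(T-k-2).
assert abs(np.mean(stats_f) - (T - k) / (T - k - 2)) < 0.03
```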

19 Clicker 6
Insert Clicker 6 here

20 Confidence Intervals for β
We use result (7) to obtain from the tables of the F-distribution the critical value F_α(k, T − k) such that

$$\operatorname{prob}\left[F(k, T-k) > F_\alpha(k, T-k)\right] = \alpha, \qquad 0 < \alpha < 1.$$

For different values of α we are then in a position to evaluate exactly an inequality of the following form:

$$\operatorname{prob}\left\{(\hat{\beta} - \beta)' X'X (\hat{\beta} - \beta) \le k\, s^2\, F_\alpha(k, T-k)\right\} = 1 - \alpha,$$

which defines confidence intervals for β centred upon β̂.

21 Hypothesis Testing
Hypothesis testing is strictly linked to the derivation of confidence intervals. When testing a hypothesis, we aim at rejecting, on the basis of the sample evidence, the validity of restrictions imposed on the model. Within this framework, (1)-(3) are the maintained hypotheses and the restricted version of the model is identified with the null hypothesis H₀. Following the Neyman-Pearson approach to hypothesis testing, one derives a statistic with known distribution under the null; the probability of a type I error (rejecting H₀ when it is true) is then fixed at α. For example, a test at level α of the null hypothesis β = β₀ based on the F-statistic does not reject H₀ if β₀ lies within the confidence interval associated with probability 1 − α. In practice, however, this is not the most useful way of proceeding, as the economic hypotheses of interest rarely involve a number of restrictions equal to the number of estimated parameters.

22 Hypothesis Testing
The general case of interest is therefore the one in which we have r restrictions on the vector of parameters, with r < k. If we limit our interest to the class of linear restrictions, we can express them as H₀ : Rβ = r, where R is an (r × k) matrix of parameters with rank r and r is an (r × 1) vector of parameters. To illustrate how R and r are constructed, consider the baseline CAPM model; we want to impose the restriction β_{0,i} = 0 on the following specification:

$$\left(r^i_t - r^{rf}_t\right) = \beta_{0,i} + \beta_{1,i}\left(r^m_t - r^{rf}_t\right) + u_{i,t}, \qquad (8)$$

$$R\beta = r: \qquad \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \beta_{0,i} \\ \beta_{1,i} \end{pmatrix} = (0).$$

The distribution of a known statistic under the null is derived by applying known results.

23 Hypothesis Testing
If (β̂ | X) ∼ N(β, σ²(X'X)⁻¹), then:

$$\left(R\hat{\beta} - r \mid X\right) \sim N\left(R\beta - r,\; \sigma^2 R (X'X)^{-1} R'\right). \qquad (9)$$

The test is constructed by deriving the distribution of (9) under the null Rβ − r = 0. Given that

$$\left(R\hat{\beta} - r \mid X\right) = R\beta - r + R(X'X)^{-1}X'\epsilon,$$

under H₀ we have:

$$\left(R\hat{\beta} - r\right)'\left(R(X'X)^{-1}R'\right)^{-1}\left(R\hat{\beta} - r\right) = \epsilon' X (X'X)^{-1} R' \left(R(X'X)^{-1}R'\right)^{-1} R (X'X)^{-1} X' \epsilon = \epsilon' P \epsilon,$$

where P is a symmetric idempotent matrix of rank r, orthogonal to M.

24 Hypothesis Testing
Then

$$\frac{\left(R\hat{\beta} - r\right)'\left(R(X'X)^{-1}R'\right)^{-1}\left(R\hat{\beta} - r\right)}{r\, s^2} \sim F(r, T-k) \quad \text{under } H_0,$$

which can be used to test the relevant hypothesis.
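A minimal sketch of this F test for the CAPM example (8), with R = (1 0) and r = 0, i.e. a zero intercept. The data are simulated under the null, and the calibration (market moments, idiosyncratic volatility) is an illustrative assumption.

```python
# F test of H0: R beta = r with a single linear restriction (zero intercept).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T = 200
mkt = 0.006 + 0.054 * rng.standard_normal(T)    # market excess returns (assumed)
y = 0.8 * mkt + 0.03 * rng.standard_normal(T)   # generated under H0: intercept 0

X = np.column_stack([np.ones(T), mkt])
k, nrest = X.shape[1], 1
R, r = np.array([[1.0, 0.0]]), np.array([0.0])  # restriction: beta_0 = 0

XtX_inv = np.linalg.inv(X.T @ X)
bhat = XtX_inv @ X.T @ y
e = y - X @ bhat
s2 = e @ e / (T - k)

d = R @ bhat - r
F = d @ np.linalg.inv(R @ XtX_inv @ R.T) @ d / (nrest * s2)
pval = stats.f(nrest, T - k).sf(F)              # typically not rejected under H0
```

Since the data satisfy the null, the p-value should usually be large, and the estimated slope should be close to the true 0.8.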

25 Clicker 7
Insert Clicker 7 here

26 The Partitioned Regression Model
Given the linear model y = Xβ + ε, partition X into two blocks of dimensions (T × r) and (T × (k − r)), and partition β correspondingly into [β₁' β₂']'. The partitioned regression model can then be written as:

$$y = X_1 \beta_1 + X_2 \beta_2 + \epsilon.$$

27 The Partitioned Regression Model
It is useful to derive the formula for the OLS estimator in the partitioned regression model. To obtain this result, we partition the normal equations X'Xβ̂ = X'y as:

$$\begin{pmatrix} X_1' \\ X_2' \end{pmatrix} \begin{pmatrix} X_1 & X_2 \end{pmatrix} \begin{pmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{pmatrix} = \begin{pmatrix} X_1' \\ X_2' \end{pmatrix} y,$$

or, equivalently,

$$\begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix} \begin{pmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{pmatrix} = \begin{pmatrix} X_1'y \\ X_2'y \end{pmatrix}. \qquad (10)$$

28 The Partitioned Regression Model
System (10) can be solved in two stages, by first deriving an expression for β̂₂:

$$\hat{\beta}_2 = \left(X_2'X_2\right)^{-1}\left(X_2'y - X_2'X_1\hat{\beta}_1\right),$$

and then substituting it into the first equation of (10) to obtain

$$X_1'X_1\hat{\beta}_1 + X_1'X_2\left(X_2'X_2\right)^{-1}\left(X_2'y - X_2'X_1\hat{\beta}_1\right) = X_1'y,$$

from which:

$$\hat{\beta}_1 = \left(X_1'M_2X_1\right)^{-1}X_1'M_2y, \qquad M_2 = I - X_2\left(X_2'X_2\right)^{-1}X_2'.$$

29 The Partitioned Regression Model
Note that, as M₂ is idempotent, we can also write:

$$\hat{\beta}_1 = \left(X_1'M_2'M_2X_1\right)^{-1}X_1'M_2'M_2y,$$

so β̂₁ can be interpreted as the vector of OLS coefficients of the regression of y on the matrix of residuals from the regression of X₁ on X₂. Thus, an OLS regression on two blocks of regressors is equivalent to two OLS regressions, each on a single block of regressors (the Frisch-Waugh theorem).
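The Frisch-Waugh result can be verified numerically: the coefficients on X₁ from the full regression coincide with those from regressing y on M₂X₁, the residuals of X₁ after projecting out X₂. The data-generating values below are arbitrary.

```python
# Numerical check of the Frisch-Waugh theorem.
import numpy as np

rng = np.random.default_rng(5)
T = 100
X1 = rng.standard_normal((T, 2))
X2 = np.column_stack([np.ones(T), rng.standard_normal(T)])
y = X1 @ np.array([1.0, -0.5]) + X2 @ np.array([0.3, 2.0]) \
    + rng.standard_normal(T)

# Full regression on [X1 X2]: first two coefficients are beta_1 hat.
X = np.hstack([X1, X2])
b_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Two-step route: residuals of X1 on X2, then regress y on those residuals.
M2 = np.eye(T) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
X1_res = M2 @ X1                       # part of X1 orthogonal to X2
b1_fw = np.linalg.lstsq(X1_res, y, rcond=None)[0]

assert np.allclose(b_full[:2], b1_fw)  # the two routes agree exactly
```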

30 The Partitioned Regression Model
Finally, consider the residuals of the partitioned model:

$$\hat{\epsilon} = y - X_1\hat{\beta}_1 - X_2\hat{\beta}_2$$
$$\hat{\epsilon} = y - X_1\hat{\beta}_1 - X_2\left(X_2'X_2\right)^{-1}\left(X_2'y - X_2'X_1\hat{\beta}_1\right)$$
$$\hat{\epsilon} = M_2 y - M_2 X_1 \hat{\beta}_1 = M_2 y - M_2 X_1 \left(X_1'M_2X_1\right)^{-1}X_1'M_2 y = \left(M_2 - M_2X_1\left(X_1'M_2X_1\right)^{-1}X_1'M_2\right) y.$$

However, we already know that ε̂ = My; therefore,

$$M = M_2 - M_2X_1\left(X_1'M_2X_1\right)^{-1}X_1'M_2. \qquad (11)$$

31 Testing restrictions on a subset of coefficients
In the general framework to test linear restrictions, we set r = 0, R = [I_r 0], and partition β correspondingly into [β₁' β₂']'. In this case the restriction Rβ − r = 0 is equivalent to β₁ = 0 in the partitioned regression model. Under H₀, X₁ has no additional explanatory power for y with respect to X₂; therefore:

$$H_0: \; y = X_2\beta_2 + \epsilon, \qquad (\epsilon \mid X_1, X_2) \sim N(0, \sigma^2 I).$$

Note that the statement

$$y = X_2\gamma_2 + \epsilon, \qquad (\epsilon \mid X_2) \sim N(0, \sigma^2 I),$$

is always true under our maintained hypotheses. However, in general γ₂ ≠ β₂.

32 Testing restrictions on a subset of coefficients
To derive a statistic to test H₀, remember that the matrix R(X'X)⁻¹R' is the upper-left block of (X'X)⁻¹, which we can write as (X₁'M₂X₁)⁻¹. The statistic then takes the form

$$\frac{\hat{\beta}_1'\left(X_1'M_2X_1\right)\hat{\beta}_1}{r\, s^2} = \frac{y'M_2X_1\left(X_1'M_2X_1\right)^{-1}X_1'M_2 y}{y'My} \cdot \frac{T-k}{r} \sim F(r, T-k).$$

Given (11), this can be rewritten as:

$$\frac{y'M_2 y - y'My}{y'My} \cdot \frac{T-k}{r} \sim F(r, T-k), \qquad (12)$$

where the denominator is the sum of squared residuals in the unconstrained model, while the numerator is the difference between the sum of squared residuals in the constrained model and the sum of squared residuals in the unconstrained model.

33 Testing restrictions on a subset of coefficients
Consider the limit case r = 1, where β₁ is a scalar. The F-statistic takes the form

$$\frac{\hat{\beta}_1^2 \left(X_1'M_2X_1\right)}{s^2} \sim F(1, T-k) \quad \text{under } H_0,$$

where (X₁'M₂X₁)⁻¹ is element (1, 1) of the matrix (X'X)⁻¹. Using the result on the relation between the F and the Student's t distributions:

$$\frac{\hat{\beta}_1 \left(X_1'M_2X_1\right)^{1/2}}{s} \sim t(T-k) \quad \text{under } H_0.$$

Therefore, an immediate test of significance of a coefficient can be performed by taking the ratio of each estimated coefficient to its associated standard error.
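The equivalence for a single restriction can be checked numerically: the squared t-ratio on one coefficient equals the F statistic computed from restricted versus unrestricted residual sums of squares, as in (12). The simulated data below use arbitrary illustrative values.

```python
# Check that t^2 on one coefficient equals the RSS-based F statistic of (12).
import numpy as np

rng = np.random.default_rng(6)
T = 80
x1 = rng.standard_normal(T)
x2 = rng.standard_normal(T)
y = 0.4 * x1 + 1.0 * x2 + rng.standard_normal(T)

# Unrestricted model: constant, x1, x2.
X = np.column_stack([np.ones(T), x1, x2])
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
bhat = XtX_inv @ X.T @ y
e = y - X @ bhat
s2 = e @ e / (T - k)
t1 = bhat[1] / np.sqrt(s2 * XtX_inv[1, 1])        # t-ratio on x1

# Restricted model: drop x1 (impose its coefficient equal to zero).
X_r = np.column_stack([np.ones(T), x2])
e_r = y - X_r @ np.linalg.lstsq(X_r, y, rcond=None)[0]
F = (e_r @ e_r - e @ e) / (e @ e / (T - k))       # (RSS_r - RSS_u)/1 over s^2

assert np.isclose(t1**2, F)                       # exact algebraic identity
```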

34 The partial regression theorem
The Frisch-Waugh theorem described above is worth further consideration. The theorem tells us that any given regression coefficient in the model E(y | X) = Xβ can be computed in two different but exactly equivalent ways: 1) by regressing y on all the columns of X; 2) by first regressing the j-th column of X on all the other columns of X, computing the residuals of this regression, and then regressing y on these residuals. This result is relevant in that it clarifies that the relationships pinned down by the estimated parameters in a linear model do not describe the connection between the regressand and each regressor, but the connection between the regressand and the part of each regressor that is not explained by the other ones.

35 What if analysis
The relevant question in this case becomes: how much will y change if I change X_i? The estimation of a single-equation linear model does not allow us to answer that question, for a number of reasons. First, estimated parameters in a linear model can only answer the question: how much will E(y | X) change if I change X? We have seen that the two questions are very different if the R² of the regression is low; in that case a change in E(y | X) may not produce any visible and relevant effect on y. Second, a regression model is a conditional expectation GIVEN X. In this sense there is no room for changing the value of any element of X.

36 What if analysis
Any statement involving such a change requires some assumption on how the conditional expectation of y changes if X changes, and a correct analysis requires an assumption on the joint distribution of y and X. Simulation might require the use of the joint multivariate model even when valid estimation can be performed by concentrating only on the conditional model. Strong exogeneity, which such simulations require, is a stronger condition than the weak exogeneity that suffices for the estimation of the parameters of interest.

37 What if analysis
Think of a linear model with known parameters:

$$y = \beta_1 x_1 + \beta_2 x_2$$

What, in this model, is the effect on y of changing x₁ by one unit while keeping x₂ constant? Easy: β₁. Now think of the estimated linear model:

$$y = \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \hat{u}$$

Now y is different from E(y | X), and the question "what, in this model, is the effect on E(y | X) of changing x₁ by one unit while keeping x₂ constant?" does not in general make sense.

38 Clicker 8
Insert Clicker 8 here

39 What if analysis
Changing x₁ while keeping x₂ unaltered implies that there is zero correlation between these variables. But the estimates β̂₁ and β̂₂ are obtained using data in which, in general, there is some correlation between x₁ and x₂. Data in which fluctuations in x₁ did not have any effect on x₂ would most likely have generated estimates different from those obtained in the estimation sample. The only valid question that can be answered using the coefficients of a linear regression is: "What is the effect on E(y | X) of changing the part of each regressor that is orthogonal to the other ones?" "What if" analysis requires simulation and, in most cases, a lower level of reduction than that used for regression analysis.

40 The semi-partial R-squared
When the columns of X are orthogonal to each other, the total R² can be exactly decomposed into the sum of the partial R² due to each regressor x_i (the partial R² of a regressor i is defined as the R² of the regression of y on x_i). This is in general not the case in applications with non-experimental data: the columns of X are correlated, and an (often large) part of the overall R² depends on the joint behaviour of the columns of X. However, it is always possible to compute the marginal contribution of each regressor x_i to the overall R², defined as the difference between the overall R² and the R² of the regression that includes all the columns of X except x_i. This is called the semi-partial R².

41 The semi-partial R-squared
Interestingly, the semi-partial R² is a simple transformation of the t-ratio:

$$spR^2_i = t^2_{\beta_i}\,\frac{\left(1 - R^2\right)}{(T-k)}$$

This result has two interesting implications. First, a quantity we considered as just a measure of statistical reliability can lead to a measure of relevance when combined with the overall R² of the regression. Second, we can re-iterate the difference between statistical significance and relevance. Suppose you have a very large sample, 10 columns in X, and a t-ratio on a coefficient β_i of about 4, with an associated P-value of the order of .01: very statistically significant! The derivation of the semi-partial R² tells us that the contribution of this variable to the overall R² is at most approximately 16/(T − k), which for the sample size considered is less than two thousandths.
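The identity between the semi-partial R² and the transformed t-ratio can be verified numerically: the drop in R² from deleting a regressor equals t²(1 − R²)/(T − k). The simulated design, with deliberately correlated regressors, uses arbitrary illustrative values.

```python
# Check of the identity spR2_i = t_i^2 (1 - R^2) / (T - k).
import numpy as np

rng = np.random.default_rng(7)
T = 120
x1 = rng.standard_normal(T)
x2 = 0.5 * x1 + rng.standard_normal(T)   # correlated regressors on purpose
y = 0.6 * x1 + 0.3 * x2 + rng.standard_normal(T)

def fit(X, y):
    """OLS fit; return coefficients, residuals and R-squared."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return b, e, 1 - e @ e / ((y - y.mean()) @ (y - y.mean()))

# Full model: constant, x1, x2; t-ratio on x2.
X = np.column_stack([np.ones(T), x1, x2])
k = X.shape[1]
b, e, R2 = fit(X, y)
s2 = e @ e / (T - k)
t_x2 = b[2] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])

# Semi-partial R^2 of x2: overall R^2 minus R^2 of the model without x2.
_, _, R2_without = fit(np.column_stack([np.ones(T), x1]), y)
spr2 = R2 - R2_without

assert np.isclose(spr2, t_x2**2 * (1 - R2) / (T - k))   # exact identity
```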

42 Clicker 9
Insert Clicker 9 here


1. The Multivariate Classical Linear Regression Model Business School, Brunel University MSc. EC550/5509 Modelling Financial Decisions and Markets/Introduction to Quantitative Methods Prof. Menelaos Karanasos (Room SS69, Tel. 08956584) Lecture Notes 5. The

More information

Math 423/533: The Main Theoretical Topics

Math 423/533: The Main Theoretical Topics Math 423/533: The Main Theoretical Topics Notation sample size n, data index i number of predictors, p (p = 2 for simple linear regression) y i : response for individual i x i = (x i1,..., x ip ) (1 p)

More information

In the bivariate regression model, the original parameterization is. Y i = β 1 + β 2 X2 + β 2 X2. + β 2 (X 2i X 2 ) + ε i (2)

In the bivariate regression model, the original parameterization is. Y i = β 1 + β 2 X2 + β 2 X2. + β 2 (X 2i X 2 ) + ε i (2) RNy, econ460 autumn 04 Lecture note Orthogonalization and re-parameterization 5..3 and 7.. in HN Orthogonalization of variables, for example X i and X means that variables that are correlated are made

More information

Topic 3: Inference and Prediction

Topic 3: Inference and Prediction Topic 3: Inference and Prediction We ll be concerned here with testing more general hypotheses than those seen to date. Also concerned with constructing interval predictions from our regression model.

More information

Reference: Davidson and MacKinnon Ch 2. In particular page

Reference: Davidson and MacKinnon Ch 2. In particular page RNy, econ460 autumn 03 Lecture note Reference: Davidson and MacKinnon Ch. In particular page 57-8. Projection matrices The matrix M I X(X X) X () is often called the residual maker. That nickname is easy

More information

Lecture 4: Heteroskedasticity

Lecture 4: Heteroskedasticity Lecture 4: Heteroskedasticity Econometric Methods Warsaw School of Economics (4) Heteroskedasticity 1 / 24 Outline 1 What is heteroskedasticity? 2 Testing for heteroskedasticity White Goldfeld-Quandt Breusch-Pagan

More information

Topic 3: Inference and Prediction

Topic 3: Inference and Prediction Topic 3: Inference and Prediction We ll be concerned here with testing more general hypotheses than those seen to date. Also concerned with constructing interval predictions from our regression model.

More information

Unless provided with information to the contrary, assume for each question below that the Classical Linear Model assumptions hold.

Unless provided with information to the contrary, assume for each question below that the Classical Linear Model assumptions hold. Economics 345: Applied Econometrics Section A01 University of Victoria Midterm Examination #2 Version 1 SOLUTIONS Spring 2015 Instructor: Martin Farnham Unless provided with information to the contrary,

More information

LECTURE 5. Introduction to Econometrics. Hypothesis testing

LECTURE 5. Introduction to Econometrics. Hypothesis testing LECTURE 5 Introduction to Econometrics Hypothesis testing October 18, 2016 1 / 26 ON TODAY S LECTURE We are going to discuss how hypotheses about coefficients can be tested in regression models We will

More information

Ma 3/103: Lecture 25 Linear Regression II: Hypothesis Testing and ANOVA

Ma 3/103: Lecture 25 Linear Regression II: Hypothesis Testing and ANOVA Ma 3/103: Lecture 25 Linear Regression II: Hypothesis Testing and ANOVA March 6, 2017 KC Border Linear Regression II March 6, 2017 1 / 44 1 OLS estimator 2 Restricted regression 3 Errors in variables 4

More information

Ma 3/103: Lecture 24 Linear Regression I: Estimation

Ma 3/103: Lecture 24 Linear Regression I: Estimation Ma 3/103: Lecture 24 Linear Regression I: Estimation March 3, 2017 KC Border Linear Regression I March 3, 2017 1 / 32 Regression analysis Regression analysis Estimate and test E(Y X) = f (X). f is the

More information

ECON 5350 Class Notes Functional Form and Structural Change

ECON 5350 Class Notes Functional Form and Structural Change ECON 5350 Class Notes Functional Form and Structural Change 1 Introduction Although OLS is considered a linear estimator, it does not mean that the relationship between Y and X needs to be linear. In this

More information

The multiple regression model; Indicator variables as regressors

The multiple regression model; Indicator variables as regressors The multiple regression model; Indicator variables as regressors Ragnar Nymoen University of Oslo 28 February 2013 1 / 21 This lecture (#12): Based on the econometric model specification from Lecture 9

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Birkbeck Working Papers in Economics & Finance

Birkbeck Working Papers in Economics & Finance ISSN 1745-8587 Birkbeck Working Papers in Economics & Finance Department of Economics, Mathematics and Statistics BWPEF 1809 A Note on Specification Testing in Some Structural Regression Models Walter

More information

Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals

Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals (SW Chapter 5) Outline. The standard error of ˆ. Hypothesis tests concerning β 3. Confidence intervals for β 4. Regression

More information

Introduction to Econometrics

Introduction to Econometrics Introduction to Econometrics T H I R D E D I T I O N Global Edition James H. Stock Harvard University Mark W. Watson Princeton University Boston Columbus Indianapolis New York San Francisco Upper Saddle

More information

Part 6: Multivariate Normal and Linear Models

Part 6: Multivariate Normal and Linear Models Part 6: Multivariate Normal and Linear Models 1 Multiple measurements Up until now all of our statistical models have been univariate models models for a single measurement on each member of a sample of

More information

Introduction to Estimation Methods for Time Series models. Lecture 1

Introduction to Estimation Methods for Time Series models. Lecture 1 Introduction to Estimation Methods for Time Series models Lecture 1 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 1 SNS Pisa 1 / 19 Estimation

More information

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata Maura Department of Economics and Finance Università Tor Vergata Hypothesis Testing Outline It is a mistake to confound strangeness with mystery Sherlock Holmes A Study in Scarlet Outline 1 The Power Function

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline. Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,

More information

New Developments in Econometrics Lecture 16: Quantile Estimation

New Developments in Econometrics Lecture 16: Quantile Estimation New Developments in Econometrics Lecture 16: Quantile Estimation Jeff Wooldridge Cemmap Lectures, UCL, June 2009 1. Review of Means, Medians, and Quantiles 2. Some Useful Asymptotic Results 3. Quantile

More information

Regression #5: Confidence Intervals and Hypothesis Testing (Part 1)

Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #5 1 / 24 Introduction What is a confidence interval? To fix ideas, suppose

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD

Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Introduction When you perform statistical inference, you are primarily doing one of two things: Estimating the boundaries

More information

Least Squares Estimation-Finite-Sample Properties

Least Squares Estimation-Finite-Sample Properties Least Squares Estimation-Finite-Sample Properties Ping Yu School of Economics and Finance The University of Hong Kong Ping Yu (HKU) Finite-Sample 1 / 29 Terminology and Assumptions 1 Terminology and Assumptions

More information

Regression Analysis. y t = β 1 x t1 + β 2 x t2 + β k x tk + ϵ t, t = 1,..., T,

Regression Analysis. y t = β 1 x t1 + β 2 x t2 + β k x tk + ϵ t, t = 1,..., T, Regression Analysis The multiple linear regression model with k explanatory variables assumes that the tth observation of the dependent or endogenous variable y t is described by the linear relationship

More information

Homework Set 2, ECO 311, Fall 2014

Homework Set 2, ECO 311, Fall 2014 Homework Set 2, ECO 311, Fall 2014 Due Date: At the beginning of class on October 21, 2014 Instruction: There are twelve questions. Each question is worth 2 points. You need to submit the answers of only

More information

ECON3150/4150 Spring 2015

ECON3150/4150 Spring 2015 ECON3150/4150 Spring 2015 Lecture 3&4 - The linear regression model Siv-Elisabeth Skjelbred University of Oslo January 29, 2015 1 / 67 Chapter 4 in S&W Section 17.1 in S&W (extended OLS assumptions) 2

More information

1 Correlation and Inference from Regression

1 Correlation and Inference from Regression 1 Correlation and Inference from Regression Reading: Kennedy (1998) A Guide to Econometrics, Chapters 4 and 6 Maddala, G.S. (1992) Introduction to Econometrics p. 170-177 Moore and McCabe, chapter 12 is

More information

Ch 3: Multiple Linear Regression

Ch 3: Multiple Linear Regression Ch 3: Multiple Linear Regression 1. Multiple Linear Regression Model Multiple regression model has more than one regressor. For example, we have one response variable and two regressor variables: 1. delivery

More information

Statistics and econometrics

Statistics and econometrics 1 / 36 Slides for the course Statistics and econometrics Part 10: Asymptotic hypothesis testing European University Institute Andrea Ichino September 8, 2014 2 / 36 Outline Why do we need large sample

More information

Brief Suggested Solutions

Brief Suggested Solutions DEPARTMENT OF ECONOMICS UNIVERSITY OF VICTORIA ECONOMICS 366: ECONOMETRICS II SPRING TERM 5: ASSIGNMENT TWO Brief Suggested Solutions Question One: Consider the classical T-observation, K-regressor linear

More information

A Bootstrap Test for Causality with Endogenous Lag Length Choice. - theory and application in finance

A Bootstrap Test for Causality with Endogenous Lag Length Choice. - theory and application in finance CESIS Electronic Working Paper Series Paper No. 223 A Bootstrap Test for Causality with Endogenous Lag Length Choice - theory and application in finance R. Scott Hacker and Abdulnasser Hatemi-J April 200

More information

Instrumental Variables

Instrumental Variables Università di Pavia 2010 Instrumental Variables Eduardo Rossi Exogeneity Exogeneity Assumption: the explanatory variables which form the columns of X are exogenous. It implies that any randomness in the

More information

CHAPTER 6: SPECIFICATION VARIABLES

CHAPTER 6: SPECIFICATION VARIABLES Recall, we had the following six assumptions required for the Gauss-Markov Theorem: 1. The regression model is linear, correctly specified, and has an additive error term. 2. The error term has a zero

More information

Fundamental Probability and Statistics

Fundamental Probability and Statistics Fundamental Probability and Statistics "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are

More information

Linear Regression. Junhui Qian. October 27, 2014

Linear Regression. Junhui Qian. October 27, 2014 Linear Regression Junhui Qian October 27, 2014 Outline The Model Estimation Ordinary Least Square Method of Moments Maximum Likelihood Estimation Properties of OLS Estimator Unbiasedness Consistency Efficiency

More information

Lecture 6: Geometry of OLS Estimation of Linear Regession

Lecture 6: Geometry of OLS Estimation of Linear Regession Lecture 6: Geometry of OLS Estimation of Linear Regession Xuexin Wang WISE Oct 2013 1 / 22 Matrix Algebra An n m matrix A is a rectangular array that consists of nm elements arranged in n rows and m columns

More information

MAXIMUM LIKELIHOOD, SET ESTIMATION, MODEL CRITICISM

MAXIMUM LIKELIHOOD, SET ESTIMATION, MODEL CRITICISM Eco517 Fall 2004 C. Sims MAXIMUM LIKELIHOOD, SET ESTIMATION, MODEL CRITICISM 1. SOMETHING WE SHOULD ALREADY HAVE MENTIONED A t n (µ, Σ) distribution converges, as n, to a N(µ, Σ). Consider the univariate

More information

Lab 07 Introduction to Econometrics

Lab 07 Introduction to Econometrics Lab 07 Introduction to Econometrics Learning outcomes for this lab: Introduce the different typologies of data and the econometric models that can be used Understand the rationale behind econometrics Understand

More information

CHAPTER 21: TIME SERIES ECONOMETRICS: SOME BASIC CONCEPTS

CHAPTER 21: TIME SERIES ECONOMETRICS: SOME BASIC CONCEPTS CHAPTER 21: TIME SERIES ECONOMETRICS: SOME BASIC CONCEPTS 21.1 A stochastic process is said to be weakly stationary if its mean and variance are constant over time and if the value of the covariance between

More information

Answers to Problem Set #4

Answers to Problem Set #4 Answers to Problem Set #4 Problems. Suppose that, from a sample of 63 observations, the least squares estimates and the corresponding estimated variance covariance matrix are given by: bβ bβ 2 bβ 3 = 2

More information

R = µ + Bf Arbitrage Pricing Model, APM

R = µ + Bf Arbitrage Pricing Model, APM 4.2 Arbitrage Pricing Model, APM Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.

More information

Ch 2: Simple Linear Regression

Ch 2: Simple Linear Regression Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component

More information

Multiple Linear Regression CIVL 7012/8012

Multiple Linear Regression CIVL 7012/8012 Multiple Linear Regression CIVL 7012/8012 2 Multiple Regression Analysis (MLR) Allows us to explicitly control for many factors those simultaneously affect the dependent variable This is important for

More information

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire

More information

Multiple Regression Analysis

Multiple Regression Analysis Multiple Regression Analysis y = β 0 + β 1 x 1 + β 2 x 2 +... β k x k + u 2. Inference 0 Assumptions of the Classical Linear Model (CLM)! So far, we know: 1. The mean and variance of the OLS estimators

More information

Randomized Complete Block Designs

Randomized Complete Block Designs Randomized Complete Block Designs David Allen University of Kentucky February 23, 2016 1 Randomized Complete Block Design There are many situations where it is impossible to use a completely randomized

More information

Wiley. Methods and Applications of Linear Models. Regression and the Analysis. of Variance. Third Edition. Ishpeming, Michigan RONALD R.

Wiley. Methods and Applications of Linear Models. Regression and the Analysis. of Variance. Third Edition. Ishpeming, Michigan RONALD R. Methods and Applications of Linear Models Regression and the Analysis of Variance Third Edition RONALD R. HOCKING PenHock Statistical Consultants Ishpeming, Michigan Wiley Contents Preface to the Third

More information

Eco517 Fall 2004 C. Sims MIDTERM EXAM

Eco517 Fall 2004 C. Sims MIDTERM EXAM Eco517 Fall 2004 C. Sims MIDTERM EXAM Answer all four questions. Each is worth 23 points. Do not devote disproportionate time to any one question unless you have answered all the others. (1) We are considering

More information

Econ 836 Final Exam. 2 w N 2 u N 2. 2 v N

Econ 836 Final Exam. 2 w N 2 u N 2. 2 v N 1) [4 points] Let Econ 836 Final Exam Y Xβ+ ε, X w+ u, w N w~ N(, σi ), u N u~ N(, σi ), ε N ε~ Nu ( γσ, I ), where X is a just one column. Let denote the OLS estimator, and define residuals e as e Y X.

More information

IV Estimation and its Limitations: Weak Instruments and Weakly Endogeneous Regressors

IV Estimation and its Limitations: Weak Instruments and Weakly Endogeneous Regressors IV Estimation and its Limitations: Weak Instruments and Weakly Endogeneous Regressors Laura Mayoral IAE, Barcelona GSE and University of Gothenburg Gothenburg, May 2015 Roadmap Deviations from the standard

More information

Intermediate Econometrics

Intermediate Econometrics Intermediate Econometrics Heteroskedasticity Text: Wooldridge, 8 July 17, 2011 Heteroskedasticity Assumption of homoskedasticity, Var(u i x i1,..., x ik ) = E(u 2 i x i1,..., x ik ) = σ 2. That is, the

More information

Cointegration Lecture I: Introduction

Cointegration Lecture I: Introduction 1 Cointegration Lecture I: Introduction Julia Giese Nuffield College julia.giese@economics.ox.ac.uk Hilary Term 2008 2 Outline Introduction Estimation of unrestricted VAR Non-stationarity Deterministic

More information

Review of Econometrics

Review of Econometrics Review of Econometrics Zheng Tian June 5th, 2017 1 The Essence of the OLS Estimation Multiple regression model involves the models as follows Y i = β 0 + β 1 X 1i + β 2 X 2i + + β k X ki + u i, i = 1,...,

More information

EC4051 Project and Introductory Econometrics

EC4051 Project and Introductory Econometrics EC4051 Project and Introductory Econometrics Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Intro to Econometrics 1 / 23 Project Guidelines Each student is required to undertake

More information

Multiple Regression Analysis. Basic Estimation Techniques. Multiple Regression Analysis. Multiple Regression Analysis

Multiple Regression Analysis. Basic Estimation Techniques. Multiple Regression Analysis. Multiple Regression Analysis Multiple Regression Analysis Basic Estimation Techniques Herbert Stocker herbert.stocker@uibk.ac.at University of Innsbruck & IIS, University of Ramkhamhaeng Regression Analysis: Statistical procedure

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Probability. Table of contents

Probability. Table of contents Probability Table of contents 1. Important definitions 2. Distributions 3. Discrete distributions 4. Continuous distributions 5. The Normal distribution 6. Multivariate random variables 7. Other continuous

More information

STOCKHOLM UNIVERSITY Department of Economics Course name: Empirical Methods Course code: EC40 Examiner: Lena Nekby Number of credits: 7,5 credits Date of exam: Saturday, May 9, 008 Examination time: 3

More information

Multivariate Tests of the CAPM under Normality

Multivariate Tests of the CAPM under Normality Multivariate Tests of the CAPM under Normality Bernt Arne Ødegaard 6 June 018 Contents 1 Multivariate Tests of the CAPM 1 The Gibbons (198) paper, how to formulate the multivariate model 1 3 Multivariate

More information

Unit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2

Unit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2 Unit roots in vector time series A. Vector autoregressions with unit roots Scalar autoregression True model: y t y t y t p y tp t Estimated model: y t c y t y t y t p y tp t Results: T j j is asymptotically

More information

A Likelihood Ratio Test

A Likelihood Ratio Test A Likelihood Ratio Test David Allen University of Kentucky February 23, 2012 1 Introduction Earlier presentations gave a procedure for finding an estimate and its standard error of a single linear combination

More information

Econometrics I. Ricardo Mora

Econometrics I. Ricardo Mora Econometrics I Department of Economics Universidad Carlos III de Madrid Master in Industrial Economics and Markets Outline Motivation 1 Motivation 2 3 4 Motivation The Analogy Principle The () is a framework

More information

8. Hypothesis Testing

8. Hypothesis Testing FE661 - Statistical Methods for Financial Engineering 8. Hypothesis Testing Jitkomut Songsiri introduction Wald test likelihood-based tests significance test for linear regression 8-1 Introduction elements

More information