Econometric Theory

Assessment is by exam only: three questions from a choice of five in two hours. Past papers and the distributed material can be obtained from the class web page, which is located at

Tutorials

The four tutorials for this class are compulsory and attendance will be recorded.

1

Some Statistics

(a) If R is a random variable then

    var(R) = E[(R − E(R))²] = E(R²) − [E(R)]².

Thus var(R) = E(R²) if E(R) = 0.

(b) If R₁ and R₂ are random variables then

    cov(R₁, R₂) = E[(R₁ − E(R₁))(R₂ − E(R₂))] = E(R₁R₂) − E(R₁)E(R₂).

Thus cov(R₁, R₂) = E(R₁R₂) if E(R₁) = 0 or E(R₂) = 0 (or both).

(c) If R₁ and R₂ are independent random variables then cov(R₁, R₂) = 0 and E(R₁R₂) = E(R₁)E(R₂).

(d) If R is a random vector and A is a non-random matrix then E(AR) = AE(R) and var(AR) = A var(R) A′. If R = [R₁ R₂ R₃]′ then

    E(R) = [E(R₁) E(R₂) E(R₃)]′

and

    var(R) = [ var(R₁)     cov(R₁,R₂)  cov(R₁,R₃) ]
             [ cov(R₁,R₂)  var(R₂)     cov(R₂,R₃) ]
             [ cov(R₁,R₃)  cov(R₂,R₃)  var(R₃)    ]

2
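These rules are easy to check by simulation. The following numpy sketch (the values of mu, V, and A are illustrative, not taken from the notes) draws a random 3-vector R many times and compares the sample mean and variance matrix of AR with A E(R) and A var(R) A′:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean vector, variance matrix, and non-random A.
mu = np.array([1.0, 2.0, 3.0])
V = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.5]])
A = np.array([[1.0, -1.0, 0.0],
              [0.5,  0.5, 1.0]])

R = rng.multivariate_normal(mu, V, size=200_000)  # rows are draws of R′
AR = R @ A.T

print(AR.mean(axis=0))            # close to A @ mu, i.e. A E(R)
print(A @ mu)
print(np.cov(AR, rowvar=False))   # close to A @ V @ A.T, i.e. A var(R) A′
print(A @ V @ A.T)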

Regression in Matrix Notation

We can write

    Y_t = β₁X_{1t} + β₂X_{2t} + ⋯ + β_kX_{kt} + u_t,   t = 1, …, n    (1)

as

    Y_t = X_t′β + u_t = β′X_t + u_t,   t = 1, …, n

if we define the vectors

    X_t = [X_{1t} X_{2t} … X_{kt}]′  and  β = [β₁ β₂ … β_k]′,    (2)

but the most compact notation follows from stacking over t = 1, 2, …, n: the definitions

    Y = [Y₁]    X = [X_{11} X_{21} … X_{k1}]    u = [u₁]
        [Y₂]        [X_{12} X_{22} … X_{k2}]        [u₂]
        [ ⋮]        [  ⋮      ⋮        ⋮   ]        [ ⋮]    (3)
        [Y_n]       [X_{1n} X_{2n} … X_{kn}]        [u_n]

imply we can write Y = Xβ + u instead of (1).

In matrix notation the least squares estimator of the β vector is derived in the following way: let β̂ be any estimate of β in Y = Xβ + u. The implied error vector is defined by e = Y − Xβ̂ and

    e′e = Σ_{t=1}^{n} e_t² = (Y − Xβ̂)′(Y − Xβ̂)

is the implied error sum of squares. The ols estimator of β is the β̂ that minimises e′e and is given by b = (X′X)⁻¹X′Y. This follows from firstly writing

    e′e = (Y − Xβ̂)′(Y − Xβ̂)
        = (Y′ − β̂′X′)(Y − Xβ̂)
        = Y′Y − Y′Xβ̂ − β̂′X′Y + β̂′X′Xβ̂

and then using

    d(a′x)/dx = d(x′a)/dx = a   and   d(x′Ax)/dx = (A + A′)x

(where a and x are column vectors, and A is a square matrix) to obtain

    d(e′e)/dβ̂ = −(Y′X)′ − X′Y + (X′X + (X′X)′)β̂
               = −X′Y − X′Y + (X′X + X′X)β̂
               = −2X′Y + 2X′Xβ̂

which is 0 at β̂ = b = (X′X)⁻¹X′Y. This argument only deals with the first order conditions for a minimum so we do not really know that b = (X′X)⁻¹X′Y defines a minimum of e′e, but it does.

3
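The estimator can be computed directly. A minimal numpy sketch (the design, β, and the error scale below are all illustrative) solves the normal equations X′Xb = X′Y and confirms the answer agrees with a library least squares routine:

import numpy as np

rng = np.random.default_rng(1)

# n observations on k = 3 regressors (first column a constant)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -0.5])
u = rng.normal(scale=0.7, size=n)
Y = X @ beta + u

# b = (X'X)^{-1} X'Y, computed by solving the system (X'X) b = X'Y
b = np.linalg.solve(X.T @ X, X.T @ Y)

# The same minimiser of e'e via a least squares routine
b_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(b)         # both close to beta
print(b_lstsq)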

Given the definitions of X_t in (2) and of X in (3) it follows that the individual rows of the X matrix are the X_t vectors written as row vectors (ie transposed). This implies that the X_t vectors define the columns of X′ and that

    X′X = Σ_{t=1}^{n} X_tX_t′    (4)

where, using the definitions of Y and u in (3), we also have

    X′Y = Σ_{t=1}^{n} X_tY_t   and   X′u = Σ_{t=1}^{n} X_tu_t.    (5)

As an application of b = (X′X)⁻¹X′Y consider the regression corresponding to Y_t = β₁ + β₂X_t + u_t. In this case (4) gives

    X′X = Σ [1 X_t]′[1 X_t] = [ n     ΣX_t  ]
                              [ ΣX_t  ΣX_t² ]

and (5) gives

    X′Y = Σ [1 X_t]′Y_t = [ ΣY_t    ]
                          [ ΣX_tY_t ]

so that the least squares estimator is defined by

    b = [b₁] = [ n     ΣX_t  ]⁻¹ [ ΣY_t    ]
        [b₂]   [ ΣX_t  ΣX_t² ]   [ ΣX_tY_t ]

which is

    [ n     ΣX_t  ] [b₁] = [ ΣY_t    ].
    [ ΣX_t  ΣX_t² ] [b₂]   [ ΣX_tY_t ]

4

Statistical Results

The expectation of b is obtained in the following way: first write

    b = (X′X)⁻¹X′Y = (X′X)⁻¹X′(Xβ + u) = β + (X′X)⁻¹X′u

and then

    E(b) = E(β + (X′X)⁻¹X′u) = E(β) + E((X′X)⁻¹X′u) = β + E((X′X)⁻¹X′u)

as E(β) = β. This gives

    E(b) = β + (X′X)⁻¹X′E(u) = β

using E(AR) = AE(R) for non-random A and then E(u) = 0. The conclusion is that b is an unbiased estimator of β.

The variance matrix of b is defined as var(b) = E[(b − E(b))(b − E(b))′] and if we make the assumptions required to show E(b) = β then we have var(b) = E[(b − β)(b − β)′] and we can use b = β + (X′X)⁻¹X′u to write

    var(b) = E[(b − β)(b − β)′]
           = E[(X′X)⁻¹X′u((X′X)⁻¹X′u)′]
           = E[(X′X)⁻¹X′uu′X(X′X)⁻¹]
           = (X′X)⁻¹X′E(uu′)X(X′X)⁻¹

using E(ARB) = AE(R)B for non-random A and B. We obtain

    var(b) = (X′X)⁻¹X′(σ²I_n)X(X′X)⁻¹ = σ²(X′X)⁻¹X′X(X′X)⁻¹ = σ²(X′X)⁻¹

if we assume E(uu′) = σ²I_n.

The ols residual vector is defined as û = Y − Xb, which is

    û_t = Y_t − b₁X_{1t} − b₂X_{2t} − ⋯ − b_kX_{kt},   t = 1, 2, …, n

in longhand. The residual sum of squares is then defined by

    RSS = û′û = Σ_{t=1}^{n} û_t²

5
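Both results lend themselves to a Monte Carlo check in which X is held fixed across replications, as the non-random X assumption requires; all numerical values in the sketch below are illustrative:

import numpy as np

rng = np.random.default_rng(2)

n, k, sigma = 100, 2, 1.5
X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
beta = np.array([2.0, 0.5])
XtX_inv = np.linalg.inv(X.T @ X)

draws = np.empty((20_000, k))
for r in range(draws.shape[0]):
    Y = X @ beta + rng.normal(scale=sigma, size=n)   # only u is redrawn
    draws[r] = XtX_inv @ X.T @ Y

print(draws.mean(axis=0))             # close to beta (unbiasedness)
print(np.cov(draws, rowvar=False))    # close to sigma^2 (X'X)^{-1}
print(sigma**2 * XtX_inv)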

and

    E[RSS/(n − k)] = σ²

means

    s² = RSS/(n − k)

is an unbiased estimator of σ². The proof follows from E(û) = 0 and var(û) = σ²(I_n − P), where P = X(X′X)⁻¹X′. We have

    E(RSS) = E(Σû_t²) = ΣE(û_t²) = Σ(var(û_t) + (E(û_t))²) = Σσ²(1 − P_tt) = σ²(n − ΣP_tt) = σ²(n − tr(P))

where

    tr(P) = tr(X(X′X)⁻¹X′) = tr((X′X)⁻¹X′X) = tr(I_k) = k

gives the result.¹

¹ Remember that the trace of a square matrix is the sum of the elements on the main diagonal so that if M is p × p then tr(M) = M₁₁ + M₂₂ + ⋯ + M_pp. The result used in the manipulation of tr(P) is: if AB and BA are both square matrices then tr(AB) = tr(BA).

6
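As a numerical check of both tr(P) = k and the unbiasedness of s² (the design and σ below are illustrative):

import numpy as np

rng = np.random.default_rng(3)

n, k, sigma = 60, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, -1.0, 0.5])

# tr(P) = k for the projection matrix P = X (X'X)^{-1} X'
P = X @ np.linalg.inv(X.T @ X) @ X.T
print(np.trace(P))    # equals k = 3 up to rounding

# Monte Carlo average of s^2 = RSS/(n-k) is close to sigma^2
s2 = []
for _ in range(20_000):
    Y = X @ beta + rng.normal(scale=sigma, size=n)
    resid = Y - P @ Y            # uhat = (I - P) Y, since X b = P Y
    s2.append(resid @ resid / (n - k))
print(np.mean(s2), sigma**2)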

Problem Set One

Q1) (page 3) Obtain explicit expressions for the X′X matrix and the X′Y vector corresponding to

    Y_t = β₁ + β₂X_t + u_t,   t = 1, …, n

and hence write out explicitly the normal equations X′Xb = X′Y. Deduce that the elements of b = [b₁ b₂]′ can be expressed as

    b₂ = Σ(X_t − X̄)(Y_t − Ȳ) / Σ(X_t − X̄)²   with   b₁ = Ȳ − b₂X̄.

Q2) Obtain explicit expressions for the X′X matrix and the X′Y vector corresponding to

    Y_t = β₁ + β₂X_{2t} + β₃X_{3t} + u_t,   t = 1, …, n

and hence write out explicitly the normal equations X′Xb = X′Y.

Q3) Obtain an analytical expression for σ²(X′X)⁻¹ where X′X is as in Q1. Hence find var(b₁), var(b₂), and cov(b₁, b₂), where b₁ and b₂ are the ols estimators of β₁ and β₂ in Y_t = β₁ + β₂X_t + u_t, t = 1, …, n.

Q4) Write each of the following null hypotheses in the form Rβ = r where R is a matrix, β is a vector, and r is a vector.

(a) H₀: β₂ + β₃ = 1 in Y_t = β₁ + β₂X_{2t} + β₃X_{3t} + u_t.

(b) H₀: β₂ = β₃ = β₄ = 0 in Y_t = β₁ + β₂X_{2t} + β₃X_{3t} + β₄X_{4t} + u_t.

(c) H₀: β₂ + 5 = β₃ and β₄ = 10 in Y_t = β₁ + β₂X_{2t} + β₃X_{3t} + β₄X_{4t} + u_t.

In each case find a regression from which RSS_r, the restricted residual sum of squares, can be obtained.

7

Q5) It is possible to show that the ols estimator b = (X′X)⁻¹X′Y, corresponding to the equation Y = Xβ + u and a regression of Y on X, is the β̂ that minimises the error sum of squares defined by S(β̂) = (Y − Xβ̂)′(Y − Xβ̂), where β̂ is any estimate of β, Y − Xβ̂ is the implied error vector and S(β̂) is the sum of the squared elements of Y − Xβ̂. An advantage of this result is that it makes it clear how a restricted ols estimator should be defined. Thus suppose Rβ = r defines a set of linear restrictions on the elements of β. If R is q × k and of rank q then there are q linearly independent restrictions in Rβ = r and the restricted ols estimator, b_r say, can be defined as the β̂ that minimises S(β̂) subject to the restrictions Rβ̂ = r being satisfied. It is possible to show that the required estimator is

    b_r = b − (X′X)⁻¹R′(R(X′X)⁻¹R′)⁻¹(Rb − r).

(a) Show that Rb_r = r.

(b) In the terminology of F tests (Y − Xb)′(Y − Xb) will be RSS_u, the unrestricted residual sum of squares, and (Y − Xb_r)′(Y − Xb_r) will be RSS_r, the restricted residual sum of squares. Given this, show that

    RSS_r − RSS_u = (Rb − r)′(R(X′X)⁻¹R′)⁻¹(Rb − r).

hint: the expression for b_r will allow you to write Y − Xb_r = (Y − Xb) + Δ for some Δ. But this implies RSS_r = RSS_u + Δ′Δ + 2(Y − Xb)′Δ and you should find that (Y − Xb)′Δ begins with (Y − Xb)′X and be able to show that this is a zero vector (using the definition of b) so that (Y − Xb)′Δ = 0 follows. This means RSS_r = RSS_u + Δ′Δ, and the result you want should follow if you work on Δ′Δ.

note: the basis of F testing is

    [(RSS_r − RSS_u)/RSS_u][(n − k)/q] ∼ F_{q,n−k}    (6)

if the null hypothesis is true. The result derived in part (b) can be used to deduce

    (RSS_r − RSS_u)/σ² ∼ χ²_q

if the null hypothesis is true. The other distributional result required is

    (n − k)s²/σ² ∼ χ²_{n−k}

8
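Before attempting the algebra it can be reassuring to verify the formula for b_r numerically. In the sketch below the data are simulated and the single restriction imposed is of the β₂ + β₃ = 1 type; everything (design, coefficients, seed) is illustrative:

import numpy as np

rng = np.random.default_rng(4)

n, k = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
Y = X @ np.array([1.0, 0.5, 0.5, -1.0]) + rng.normal(size=n)

# One restriction: beta_2 + beta_3 = 1, i.e. R beta = r
R = np.array([[0.0, 1.0, 1.0, 0.0]])
r = np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y
M = R @ XtX_inv @ R.T
b_r = b - XtX_inv @ R.T @ np.linalg.solve(M, R @ b - r)

print(R @ b_r)    # equals r: the restriction holds exactly, as in (a)

RSS_u = (Y - X @ b) @ (Y - X @ b)
RSS_r = (Y - X @ b_r) @ (Y - X @ b_r)
lhs = RSS_r - RSS_u
rhs = (R @ b - r) @ np.linalg.solve(M, R @ b - r)
print(lhs, rhs)   # agree, as in (b)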

where s² is the estimator of σ² presented on page 5 of the lecture notes. As the above χ²'s can be shown to be independent

    (χ²_q/q) / (χ²_{n−k}/(n − k))

will be F_{q,n−k}. It is precisely the F_{q,n−k} in (6). The distributional result in (6) requires the X matrix to be non-random and the u_t to be normal as well as iid(0, σ²).

Q6) Occasionally a version of (6) based on comparing the R² in restricted and unrestricted regressions appears. The idea is to rearrange

    R²_u = 1 − RSS_u/TSS   and   R²_r = 1 − RSS_r/TSS

to obtain expressions for RSS_u and RSS_r, substitute into (6), and tidy up the result. Show that the result is

    [(R²_u − R²_r)/(1 − R²_u)][(n − k)/q].

Note: this argument requires that TSS is the same in the restricted and unrestricted regressions. The implication is that the dependent variable must be the same in the restricted and unrestricted regressions. This is often not the case so that the expression in (6) is the more general.

Q7) Show that each of the matrices

    X′X,   (X′X)⁻¹,   R(X′X)⁻¹R′,   (R(X′X)⁻¹R′)⁻¹,

and then

    (X′X)⁻¹R′(R(X′X)⁻¹R′)⁻¹R(X′X)⁻¹

are symmetric. Note: recall that M is symmetric means M′ = M. The following might be useful: (M′)′ = M, (M′)⁻¹ = (M⁻¹)′, and (M₁M₂)′ = M₂′M₁′.

9

Consistent parameter estimation

If R_n, n = 1, 2, 3, …, is a sequence of random variables then R_n is said to converge in probability to the constant c if and only if

    Pr(|R_n − c| < ε) = Pr(c − ε < R_n < c + ε) → 1 as n → ∞

for all ε > 0, in which case we write plim(R_n) = c or R_n →p c. Sufficient conditions for plim(R_n) = c are

    E(R_n) → c as n → ∞   and   var(R_n) → 0 as n → ∞.

The application of convergence in probability when parameter estimation is discussed lies in the definition: if θ̂_n is an estimator of θ based on n observations then θ̂_n is a consistent estimator of θ if and only if plim(θ̂_n) = θ.

Now consider the least squares estimation of β in

    Y_t = βX_t + u_t,   u_t iid(0, σ²).    (7)

The least squares estimator of β is

    b = ΣX_tY_t / ΣX_t² = β + ΣX_tu_t / ΣX_t²,

and if X_t is non-random then we have E(b) = β and var(b) = σ²/ΣX_t². If we assume that ΣX_t² increases without bound as n increases it is easy to show that b is a consistent estimator of β. An assumption that makes this work is that

    lim_{n→∞} (n⁻¹ΣX_t²) = q,   0 < q < ∞.    (8)

When this assumption is made it is also possible to show that both

    s² = Σ(Y_t − bX_t)²/(n − 1)   and   σ̂² = Σ(Y_t − bX_t)²/n    (9)

are consistent estimators of σ². The result for σ̂² can be established using

    σ̂² = (β − b)²(ΣX_t²/n) + Σu_t²/n + 2(β − b)(ΣX_tu_t/n)    (10)

which means plim(σ̂²) is

    plim((β − b)²)plim(ΣX_t²/n) + plim(Σu_t²/n) + 2 plim(β − b)plim(ΣX_tu_t/n),

which is σ². When there are several explanatory variables and we have Y = Xβ + u with X assumed to be non-random we can use E(b) = β and var(b) = σ²(X′X)⁻¹. If we further assume

    lim_{n→∞} (n⁻¹X′X)    (11)

10

exists as a proper matrix with an inverse then we can deduce that b is a consistent estimator of β by noting

    var(b) = σ²(X′X)⁻¹ = σ²n⁻¹(n⁻¹X′X)⁻¹

will tend to zero times (lim_{n→∞}(n⁻¹X′X))⁻¹, which is a zero matrix, as n goes to infinity. This, together with the unbiasedness of b as an estimator of β, will allow us to deduce consistency using sufficient conditions. Further, both

    s² = RSS/(n − k)   and   σ̂² = RSS/n,    (12)

where RSS = (Y − Xb)′(Y − Xb), are consistent estimators of σ² given the current assumptions. One way of showing this is to write

    Y − Xb = Xβ + u − Xb = X(β − b) + u    (13)

and deduce that RSS can be written as

    (β − b)′X′X(β − b) + u′u + 2(β − b)′X′u.    (14)

Part of the rest of the argument will involve showing that plim(n⁻¹X′u) = 0. This result can be used in presenting plim(b) = β in a different way than above. This presentation starts with b = β + (X′X)⁻¹X′u and then gives

    plim(b) = plim(β + (X′X)⁻¹X′u)
            = plim(β) + plim((X′X)⁻¹X′u)
            = β + plim((n⁻¹X′X)⁻¹n⁻¹X′u)
            = β + plim((n⁻¹X′X)⁻¹)plim(n⁻¹X′u)
            = β + (plim(n⁻¹X′X))⁻¹plim(n⁻¹X′u)
            = β + (plim(n⁻¹X′X))⁻¹0 = β.

An advantage of this presentation is that it suggests a classic derivation in econometrics. The derivation is of plim(b) without an assumption that X is non-random, so that E(b) and var(b) are not available. We have

    plim(b) = plim(β + (X′X)⁻¹X′u)
            = plim(β) + plim((X′X)⁻¹X′u)
            = β + plim((n⁻¹X′X)⁻¹n⁻¹X′u)
            = β + plim((n⁻¹X′X)⁻¹)plim(n⁻¹X′u)
            = β + (plim(n⁻¹X′X))⁻¹plim(n⁻¹X′u)
            = β + Q⁻¹η

if we assume that Q = plim(n⁻¹X′X) exists and is non-singular, and that η = plim(n⁻¹X′u) exists. plim(b) = β then requires η = 0. If any element of η

11
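A short simulation with an illustrative non-random design (chosen so that assumption (8) holds) shows b settling down at β as n grows, exactly as the variance argument predicts:

import numpy as np

rng = np.random.default_rng(5)

beta, sigma = 0.7, 1.0
for n in [50, 500, 5_000, 50_000]:
    X = np.linspace(1, 2, n)           # n^{-1} sum X_t^2 converges to a finite q
    u = rng.normal(scale=sigma, size=n)
    Y = beta * X + u
    b = (X @ Y) / (X @ X)
    print(n, b)                        # b approaches beta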

is non-zero then, in general, each element of plim(b) will differ from the corresponding element of β, so that no elements of β will be estimated consistently. Of course the derivation of plim(b) = β + Q⁻¹η requires Q and η to exist and it is worth asking whether more fundamental assumptions, which imply the existence of Q and η, can be made. Further, whether the existence of η is assumed or established, we will need to be able to determine whether or not it is a zero vector if we want to know whether least squares consistently estimates β. To continue the discussion here it might be worth dealing with the time-series and cross-sectional cases separately.

12

Convergence in Probability

R_n converges in probability to the constant c if and only if

    Pr(|R_n − c| < ε) = Pr(c − ε < R_n < c + ε) → 1 as n → ∞

for all ε > 0.

(e) R_n converges in probability to c is written as plim(R_n) = c or as R_n →p c.

(f) Sufficient conditions for plim(R_n) = c are E(R_n) → c as n → ∞ and var(R_n) → 0 as n → ∞.

(g) (Khinchine's theorem) If R_t, t = 1, 2, 3, …, are iid random variables with expectation µ then plim(n⁻¹Σ_{t=1}^{n} R_t) = µ.

(h) We have plim(R_{1n} + R_{2n}) = plim(R_{1n}) + plim(R_{2n}). The plim of a sum is the sum of the individual plims, as long as the plims exist: plim(R_{1n} + R_{2n}) may exist with plim(R_{1n}) and plim(R_{2n}) not existing. We have plim(R_{1n}R_{2n}) = plim(R_{1n})plim(R_{2n}). The plim of a product is the product of the individual plims, as long as the plims exist: plim(R_{1n}R_{2n}) may exist with plim(R_{1n}) and plim(R_{2n}) not existing. We have plim(g(R_n)) = g(plim(R_n)) as long as g is a continuous function.

(i) The definition given for scalar random variables can be extended: the plim of a vector is the vector of individual plims and the plim of a matrix is the matrix of individual plims. The various rules in (h) continue to hold so that plim(M_{1n} + M_{2n}) = plim(M_{1n}) + plim(M_{2n}), as long as M_{1n} and M_{2n} are matrices of the same size and the plims exist. We have plim(M_{1n}M_{2n}) = plim(M_{1n})plim(M_{2n}), as long as the product M_{1n}M_{2n} exists, and the plims exist. We have plim((M_n)⁻¹) = (plim(M_n))⁻¹, so that the plim of an inverse is the inverse of the plim.

13

Problem Set Two

Q10) Assume that Y = Xβ + u with E(u) = 0, E(uu′) = σ²I_n, and X non-random. If A is a non-random matrix then AY is a linear estimator of β. Since AY = A(Xβ + u) = AXβ + Au it follows that E(AY) is AXβ and AX = I_k is therefore required if AY is to be unbiased for β. Whether this is the case or not the variance matrix of AY is given by σ²AA′. We therefore have

    var(AY) − var(b) = σ²AA′ − σ²(X′X)⁻¹ = σ²(AA′ − (X′X)⁻¹).

Show that this matrix is psd for all A which ensure AY is unbiased for β.² Hint: write

    AA′ − (X′X)⁻¹ = AA′ − AX(X′X)⁻¹X′A′ = A(I − P)A′,

where P = X(X′X)⁻¹X′, and show that I − P = (I − P)(I − P). The result ought then to be straightforward.

Q11) (Page 8) Consider the model Y_t = βX_t + u_t, t = 1, 2, …, n, where u_t is iid(0, σ²), and where X_t is non-random with

    lim_{n→∞} [n⁻¹ΣX_t²] = q,   0 < q < ∞.

Define b and σ̂² by

    b = ΣX_tY_t / ΣX_t²   and   σ̂² = Σ(Y_t − bX_t)²/n.

(a) Show that E(b) = β and var(b) = σ²/ΣX_t² from first principles.

(b) Show that Y_t − bX_t = (β − b)X_t + u_t and that σ̂² can therefore be written as

    (β − b)²(ΣX_t²/n) + Σu_t²/n + 2(β − b)(ΣX_tu_t/n).

² A matrix M is psd (positive semi-definite) when a′Ma ≥ 0 for all a vectors. The implication of the required result is that var(a′b) ≤ var(a′AY) for all a vectors. This means that a′AY is never a better unbiased estimator of a′β than a′b. The result is the Gauss-Markov theorem and implies a′b is the best (minimum variance) linear unbiased estimator of a′β for all a vectors.

14

(c) Show that

    plim[n⁻¹ΣX_tu_t] = 0.

Hint: find the expectation and variance and use the sufficient conditions of the lectures. Note: this result together with the arguments of the lectures means plim(σ̂²) = σ² is established.

(d) The result in (b) means that the residual sum of squares, RSS = nσ̂², can be written as

    (β − b)²ΣX_t² + Σu_t² + 2(β − b)ΣX_tu_t.

Use the definition of b and Y_t = βX_t + u_t to show that this is

    Σu_t² − (b − β)²ΣX_t²

and deduce that E(RSS) = (n − 1)σ². hint: recall var(b).

Q12) (page 10) In Y = Xβ + u show that Y − Xb, the residual vector, can be written as X(β − b) + u and that (Y − Xb)′(Y − Xb), the residual sum of squares (RSS), can be written as

    (β − b)′X′X(β − b) + u′u + 2(β − b)′X′u.

Deduce that σ̂² = n⁻¹RSS and s² = (n − k)⁻¹RSS are consistent estimators of σ² when the following assumptions are made: X is non-random, Q = lim_{n→∞}(n⁻¹X′X) exists and is non-singular, and u_t, t = 1, 2, …, n, the elements of the u vector, are iid(0, σ²). Note: we showed plim(b) = β given these assumptions in the lectures. u′u is Σu_t² so plim(n⁻¹u′u) = σ² will follow using previous arguments.

Q13) Consider the equation

    Y_t = β₁ + β₂t + u_t,   t = 1, 2, …, n

where u_t iid(0, σ²).

(a) Write down the X′X matrix for this case. Does n⁻¹X′X have a finite limit as n goes to infinity? Does n⁻³X′X have a finite limit as n goes to infinity? If so is the limiting matrix non-singular?

(b) Define S by

    S = [ n^{-1/2}  0        ] = diag(n^{-1/2}, n^{-3/2})
        [ 0         n^{-3/2} ]

and show that SX′XS has a finite and non-singular limit as n goes to infinity.

(c) Show that plim(b) = β. hint: write var(b) = σ²(X′X)⁻¹ = σ²S(SX′XS)⁻¹S and let n go to infinity. Also remember that E(b) = β.

15

Q14) (page 10) Use the expression for RSS and the definition of σ̂² in Q12 to show that

    plim(σ̂²) = σ² − η′Q⁻¹η

where Q = plim(n⁻¹X′X) is non-singular, η = plim(n⁻¹X′u), and the u_t are iid(0, σ²). Comment on this expression. Hint for the derivation: Q and Q⁻¹ will be symmetric matrices. The result plim(b) = β + Q⁻¹η from the lectures will be useful. Hint for the comment: Q and Q⁻¹ will be positive-definite matrices.³

³ As in: M is positive definite if and only if (the quadratic form) x′Mx is positive for all non-zero x vectors.

16

The consistency of the least squares estimator when the observations are independent

When the data set being used in estimation is cross-sectional the different observations can typically be assumed to be independent. Further, the idea of a random sample is taken to imply independent drawings from a population distribution so that we end up making an iid assumption in the case of random sampling. When we consider the least squares estimation of β in

    Y_i = βX_i + u_i    (15)

we might assume [X_i u_i]′ is iid across i. In this case it is quite straightforward to obtain

    plim(n⁻¹ΣX_i²) = E(X²),   plim(n⁻¹ΣX_iu_i) = E(Xu)

and

    plim(b) = β + E(Xu)/E(X²),

so that E(Xu) = 0 is required for plim(b) = β. If E(u) = 0 then E(Xu) = cov(X, u) and plim(b) = β requires X and u to be uncorrelated.

This condition will hold if (15) is interpreted as a conditional expectations function. In this case the distribution of [Y_i X_i]′ is assumed to be iid across i and E(Y_i|X_i) is assumed to be βX_i. Given this the disturbance term is defined by writing

    u_i = Y_i − E(Y_i|X_i) = Y_i − βX_i    (16)

and we arrive at (15). The construction of u_i in (16) implies E(u_i|X_i) = 0 and it follows that E(u_i) = 0 as the law of iterated expectations (lie) tells us E(u_i) = E(E(u_i|X_i)). The same result means we can write

    E(X_iu_i) = E(E(X_iu_i|X_i)) = E(X_iE(u_i|X_i))

which is 0 if E(u_i|X_i) = 0.

When the least squares estimator is consistent we obtain a limiting distribution from writing

    √n(b − β) = (n^{-1/2}ΣX_iu_i)/(n⁻¹ΣX_i²)

and noting that

    n^{-1/2}ΣX_iu_i →d N(0, E((Xu)²))

follows from a central limit theorem. The implication is that we have

    √n(b − β) →d N(0, E((Xu)²)/(E(X²))²).    (17)

17
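The role of E(Xu) can be seen in a simulation. Below, X_i and u_i are built from a shared component w_i (an illustrative construction chosen so that E(Xu) = 1 and E(X²) = 2), and b settles at β + 1/2 rather than at β:

import numpy as np

rng = np.random.default_rng(11)

# When E(Xu) != 0 the plim of b is beta + E(Xu)/E(X^2), not beta.
beta, n = 1.0, 1_000_000
w = rng.normal(size=n)
X = w + rng.normal(size=n)     # E(X^2) = 2
u = w + rng.normal(size=n)     # E(Xu) = var(w) = 1
Y = beta * X + u

b = (X @ Y) / (X @ X)
print(b, beta + 1.0 / 2.0)     # b is near 1.5, not 1.0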

If we assume that E(u²|X) = σ² (conditional homoscedasticity) we can move from

    E((Xu)²) = E(X²u²) = E(E(X²u²|X)) = E(X²E(u²|X))

to

    E(X²σ²) = σ²E(X²).

Using this result means we can write

    √n(b − β) →d N(0, σ²/E(X²))    (18)

in place of (17). We also have

    (b − β)/SE(b) →d N(0, 1)

where SE(b) = s/√(ΣX_i²). If we have (17) and no assumption of homoscedasticity then

    (b − β)/SE_w(b) →d N(0, 1)

is required. Here

    SE_w(b) = √(ΣX_i²û_i²)/(ΣX_i²)

is a White standard error.

The limiting distribution in (18) can be deduced from a standard textbook presentation of a limiting distribution for the least squares estimator. Assuming that b is a consistent estimator of β, the equation

    √n(b − β) = [n⁻¹X′X]⁻¹ n^{-1/2}X′u

is combined with

    n⁻¹X′X →p Q   and   n^{-1/2}X′u →d N(0, σ²Q)

to obtain

    √n(b − β) →d N(0, σ²Q⁻¹).    (19)

In the special case defined by (15) Q = plim(n⁻¹X′X) is plim(n⁻¹ΣX_i²) = E(X²) and we obtain (18) from (19).

18
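The two standard errors can be compared on simulated heteroscedastic data; the design below, with E(u²|X) = X², is illustrative:

import numpy as np

rng = np.random.default_rng(6)

# Heteroscedastic design: the conventional SE and the White SE differ,
# and only the latter is justified here.
n, beta = 2_000, 1.0
X = rng.normal(size=n)
u = rng.normal(size=n) * np.abs(X)       # E(u^2|X) = X^2, not constant
Y = beta * X + u

b = (X @ Y) / (X @ X)
uhat = Y - b * X

s2 = (uhat @ uhat) / (n - 1)
se_conv = np.sqrt(s2 / (X @ X))                    # s / sqrt(sum X_i^2)
se_white = np.sqrt(X**2 @ uhat**2) / (X @ X)       # White standard error

print(se_conv, se_white)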

Convergence in Distribution

If R_n, n = 1, 2, 3, …, is a sequence of random variables then R_n converges in distribution to R, R_n →d R, if and only if

    lim_{n→∞} Pr(R_n ≤ a) = Pr(R ≤ a)

for all a values at which Pr(R ≤ a), the cumulative distribution function of R, is continuous. Pr(R ≤ a) will have discontinuities if R is a discrete random variable. For a continuous random variable there are no points of discontinuity and the condition "for all a values at which Pr(R ≤ a) is continuous" is not required. In practice we often have limiting (asymptotic) normality, as in R_n →d N(0, τ²), which means

    lim_{n→∞} Pr(R_n ≤ a) = Pr(N(0, τ²) ≤ a) = Φ(a/τ)

for all a.

(j) If R_n →d R and g is continuous then g(R_n) →d g(R). For example if R_n →d N(0,1) then R_n² →d χ²(1) as (N(0,1))² is χ²(1).

(k) Convergence in probability and convergence in distribution results combine to give convergence in distribution results, with one complication. For example if plim(a_n) = a and b_n →d N(0, τ²) then

    a_n + b_n →d a + N(0, τ²)

and

    a_nb_n →d aN(0, τ²).

When a = 0 the limiting random variable is N(0,0) and a convergence in probability result applies. Thus the one complication is: if a_n →p 0 and b_n has a limiting distribution then a_nb_n →p 0.

(l) Let A_n be a matrix and b_n be a vector and assume plim(A_n) = A and b_n →d N(0, Σ). Then A_nb_n →d N(0, AΣA′).

(m) If R →d N_r(µ, Σ) then (R − µ)′Σ⁻¹(R − µ) →d χ²(r).

19

Problem Set Three

Assume E(u²|X) = σ², conditional homoscedasticity, in Q20) through Q22).

Q20) In the model of pages 17 and 18 show that σ̂² is a consistent estimator of σ². Hint: start with

    σ̂² = (β − b)²(ΣX_i²/n) + Σu_i²/n + 2(β − b)(ΣX_iu_i/n),

from (b) of Q11, but bear in mind that X_i is random in the model of pages 17 and 18.

Q21) In the model of pages 17 and 18 it is straightforward to show that √n(σ̂² − σ²) can be written as

    √n[(Σu_i²/n) − σ²] + √n(β − b)(β − b)(ΣX_i²/n) + 2√n(β − b)(ΣX_iu_i/n).

Use this result to find a limiting distribution for √n(σ̂² − σ²). Hint: the complication in (k) of the notes on convergence in distribution (page 17) will allow you to argue that two of the three terms make no contribution. You also need the simple central limit theorem of the lectures: if R₁, …, R_n are iid(µ, τ²) then

    √n(n⁻¹Σ_{i=1}^{n} R_i − µ)/τ →d N(0, 1).

Q22) In the model of pages 17 and 18 suppose that X_i is regressed on Y_i, resulting in the calculation of γ̂ = ΣX_iY_i/ΣY_i², where

    plim(γ̂) = E(XY)/E(Y²)

follows.

(a) Find expressions for E(XY), E(Y²), plim(γ̂) and for plim(γ̂) − 1/β. Is γ̂ a consistent estimator of 1/β? If not is it possible to determine the direction of the inconsistency?

(b) Is 1/b a consistent estimator of 1/β? Find the limiting distribution of

    √n[1/b − 1/β].

20

Q23) If

    [X₁] ∼ N( [µ₁] , [Σ₁₁ Σ₁₂] )
    [X₂]      [µ₂]   [Σ₂₁ Σ₂₂]

then

    E(X₁|X₂) = µ₁ + Σ₁₂(Σ₂₂)⁻¹(X₂ − µ₂)    (20)

is the generalisation of the rule quoted in the lectures. Here X₁ and X₂ are vectors so that µ₁ and µ₂ are vectors and Σ₁₁, Σ₁₂ = Σ₂₁′, and Σ₂₂ are matrices.

(a) Find E(E(X₁|X₂)).

(b) Use (20) to obtain E(X₁|X₂, X₃) where [X₁ X₂ X₃]′ is normal with mean µ and variance matrix Σ, as in

    µ = [µ₁]    Σ = [σ₁₁ σ₁₂ σ₁₃]
        [µ₂]        [σ₂₁ σ₂₂ σ₂₃]
        [µ₃]        [σ₃₁ σ₃₂ σ₃₃]

(c) Obtain the expectation of E(X₁|X₂, X₃), as in (b), conditional on X₂, which is E(E(X₁|X₂, X₃)|X₂) in symbols. Hint: you will need to find E(X₃|X₂). This will follow from (20) if you can find the joint distribution of [X₂ X₃]′. You ought to be able to avoid a matrix inversion with a bit of effort.

(d) Show that the expression obtained in (c) is E(X₁|X₂). Hint: the joint distribution of [X₁ X₂]′ is required.

Q24) If E(Y_i|X_{1i}, X_{2i}) = β₁X_{1i} + β₂X_{2i} and u_i = Y_i − β₁X_{1i} − β₂X_{2i} show that

(a) E(u_i) = 0.

(b) E(u_iX_{1i}) = 0 and E(u_iX_{2i}) = 0.

(c) E(u_i|X_{1i}) = 0 and E(u_i|X_{2i}) = 0.

21

hint: the lie and the result illustrated in (b) and (c) of Q23,

    E(E(X₁|X₂, X₃)|X₂) = E(X₁|X₂),

will be useful.

Q25) The stationary point implied by

    Y_i = β₁ + β₂X_i + β₃X_i² + u_i

is at −β₂/(2β₃), and is estimated by −b₂/(2b₃). Show that this is a consistent estimator and find the limiting distribution of

    √n[b₂/(2b₃) − β₂/(2β₃)].

22

Autoregressive models

The AR(1) process X_t = ρX_{t−1} + u_t can be written as (1 − ρL)X_t = u_t using the lag operator L since LX_t = X_{t−1}. We have

    X_t = (1 − ρL)⁻¹u_t

and

    (1 − ρL)⁻¹ = 1 + ρL + ρ²L² + ρ³L³ + ⋯    (21)

is true as long as |ρ| < 1. Therefore, as long as |ρ| < 1,

    X_t = (1 + ρL + ρ²L² + ρ³L³ + ⋯)u_t = u_t + ρu_{t−1} + ρ²u_{t−2} + ρ³u_{t−3} + ⋯    (22)

A justification for the result in (21) holding if |ρ| < 1 is that it gives an expression, in (22), which also emerges from backwards substitution. We write

    X_t = ρX_{t−1} + u_t
        = ρ(ρX_{t−2} + u_{t−1}) + u_t
        = ρ²X_{t−2} + u_t + ρu_{t−1}
        = ρ²(ρX_{t−3} + u_{t−2}) + u_t + ρu_{t−1}
        = ρ³X_{t−3} + u_t + ρu_{t−1} + ρ²u_{t−2}
        = ⋯
        = ρᵈX_{t−d} + u_t + ρu_{t−1} + ⋯ + ρ^{d−1}u_{t−d+1}

which becomes (22) as d increases if the first term fades away, which is the reason why it is necessary to assume |ρ| < 1. If the u_t are iid(0, σ²) then it is possible to deduce

    E(X_t) = 0,   var(X_t) = σ²/(1 − ρ²),   cov(X_t, X_{t−k}) = ρᵏσ²/(1 − ρ²)    (23)

from (22). The fact that these expressions do not depend upon t means we can describe the AR(1) with |ρ| < 1 as stationary. The requirement that |ρ| < 1 is called the stationarity condition.⁴

The only consequence of starting with

    X_t = α + ρX_{t−1} + u_t    (24)

is for E(X_t). Writing

    X_t − α/(1 − ρ) = ρ[X_{t−1} − α/(1 − ρ)] + u_t

⁴ As 1 − ρz is zero at z = 1/ρ, stationarity is often described as requiring the root of 1 − ρz to exceed one in absolute value (or lie "outside the unit circle").

23

means that (22) is replaced by

    X_t = α/(1 − ρ) + u_t + ρu_{t−1} + ρ²u_{t−2} + ρ³u_{t−3} + ⋯    (25)

and that E(X_t) = α/(1 − ρ). The same conclusion follows from writing

    E(X_t) = E(α + ρX_{t−1} + u_t) = α + ρE(X_{t−1}) + E(u_t) = α + ρE(X_{t−1})

and then using the fact that stationarity requires E(X_t) = E(X_{t−1}). In a similar way we can write

    var(X_t) = var(α + ρX_{t−1} + u_t) = var(ρX_{t−1} + u_t) = ρ²var(X_{t−1}) + var(u_t) + 2ρ cov(X_{t−1}, u_t)

and deduce var(X_t) by using (25) to conclude that cov(X_{t−1}, u_t) = 0 and using the fact that stationarity requires var(X_t) = var(X_{t−1}). We arrive at var(X_t) = σ²/(1 − ρ²).

Obtaining (25) from (24) using the lag operator involves writing

    X_t = (1 − ρL)⁻¹(α + u_t)
        = (1 − ρL)⁻¹α + (1 − ρL)⁻¹u_t
        = (1 − ρ)⁻¹α + u_t + ρu_{t−1} + ρ²u_{t−2} + ρ³u_{t−3} + ⋯

since f(L)c = f(1)c if c is a constant and f(L) = f₀ + f₁L + f₂L² + ⋯ is a polynomial in L.

For the AR(2) we write

    X_t = α + ρ₁X_{t−1} + ρ₂X_{t−2} + u_t
    (1 − ρ₁L − ρ₂L²)X_t = α + u_t

and stationarity requires the two roots of

    1 − ρ₁z − ρ₂z²    (26)

to both exceed one in absolute value. Assuming stationarity we have

    E(X_t) = E(α + ρ₁X_{t−1} + ρ₂X_{t−2} + u_t)
           = α + ρ₁E(X_{t−1}) + ρ₂E(X_{t−2}) + E(u_t)
           = α + ρ₁E(X_{t−1}) + ρ₂E(X_{t−2})
           = α + ρ₁E(X_t) + ρ₂E(X_t)

and then E(X_t) = (1 − ρ₁ − ρ₂)⁻¹α. This can also be deduced from the MA representation which follows from writing

    1 − ρ₁L − ρ₂L² = (1 − i₁L)(1 − i₂L)    (27)

24
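The moments in (23) can be checked against a long simulated AR(1) path; ρ, σ, and the path length below are illustrative:

import numpy as np

rng = np.random.default_rng(7)

rho, sigma, n = 0.8, 1.0, 200_000
u = rng.normal(scale=sigma, size=n)
X = np.empty(n)
X[0] = u[0]
for t in range(1, n):
    X[t] = rho * X[t - 1] + u[t]

print(X.var(), sigma**2 / (1 - rho**2))   # var(X_t) of (23)
for lag in [1, 2, 3]:
    c = np.cov(X[lag:], X[:-lag])[0, 1]
    print(lag, c, rho**lag * sigma**2 / (1 - rho**2))   # cov(X_t, X_{t-k}) of (23)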

where i₁ and i₂ are the inverse roots of the quadratic in (26) and |i₁| < 1 and |i₂| < 1 are the stationarity conditions. We have

    X_t = (1 − ρ₁L − ρ₂L²)⁻¹α + (1 − ρ₁L − ρ₂L²)⁻¹u_t

where

    (1 − ρ₁L − ρ₂L²)⁻¹α = (1 − ρ₁ − ρ₂)⁻¹α

and

    (1 − ρ₁L − ρ₂L²)⁻¹u_t = (1 − i₁L)⁻¹(1 − i₂L)⁻¹u_t
                          = (1 + i₁L + i₁²L² + …)(1 + i₂L + i₂²L² + …)u_t
                          = u_t + (i₁ + i₂)u_{t−1} + (i₁² + i₁i₂ + i₂²)u_{t−2} + …
                          = u_t + λ₁u_{t−1} + λ₂u_{t−2} + …

It follows that we can write

    X_t = α/(1 − ρ₁ − ρ₂) + u_t + λ₁u_{t−1} + λ₂u_{t−2} + …,

which makes it clear that X_t is independent of u_{t+1}, u_{t+2}, …

The AR(p) process

    X_t = α + ρ₁X_{t−1} + ρ₂X_{t−2} + ⋯ + ρ_pX_{t−p} + u_t

is stationary if all of the roots of

    1 − ρ₁z − ρ₂z² − ⋯ − ρ_pz^p

exceed one in absolute value. As with the AR(1) and AR(2) the point of the conditions is that they allow a moving average representation to be obtained.

Stationarity means random fluctuation about a constant expectation. This seems inappropriate if the data under consideration appear to be trended. One response to trends is to introduce t as a variable. The equation

    X_t = α + ρX_{t−1} + βt + u_t    (28)

with |ρ| < 1 can be studied by finding the π₁ and π₂ that give⁵

    X_t − π₁ − π₂t = ρ[X_{t−1} − π₁ − π₂(t − 1)] + u_t.

The phrase "trend stationary" is associated with the model in (28) when |ρ| < 1. Another case to consider is ρ = 1 and β = 0 giving

    X_t = α + X_{t−1} + u_t    (29)

which is also ΔX_t = α + u_t given ΔX_t denotes the first difference X_t − X_{t−1}. Now we have

    E(X_t) = α + E(X_{t−1})

⁵ The alternative is to try to deal with (1 − ρL)⁻¹t.

25

and E(X_t) will exceed E(X_{t−1}) if α is positive. (29) follows from (24) if ρ = 1 and is often described as the unit root case. The test statistic for H₀: ρ = 1 in (24) is

    (ρ̂ − 1)/SE(ρ̂)    (30)

and is often described as a unit root test statistic. It turns out that the limiting distribution of this quantity under the null hypothesis depends on whether α is zero or not.⁶ The same is not true for the unit root test statistic in the regression corresponding to (28) even though β = 0 is assumed in obtaining the limiting null distribution. This distribution is the "intercept and trend" Dickey-Fuller distribution. By contrast in the trend stationary case with |ρ| < 1 conventional asymptotics based on limiting normal distributions apply when the regression corresponding to (28) is examined.

⁶ Hence the description of α as being a nuisance parameter. Setting ρ = 1 in (24) gives (29) and backwards substitution gives X_t = X₀ + tα + Σ_{j=1}^{t} u_j. The dominant term is t if it is present due to α ≠ 0. Otherwise the behaviour of X_t is determined by Σu_j. When α ≠ 0 a limiting N(0,1) is obtained for (30). When α = 0 the limiting distribution of (30) is the "intercept only" Dickey-Fuller distribution.

26
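The non-normality of (30) when α = 0 shows up clearly in simulation. The sketch below (sample size and replication count are illustrative) regresses X_t on an intercept and X_{t−1} when the truth is a driftless random walk, and records the test statistic:

import numpy as np

rng = np.random.default_rng(8)

n, reps = 200, 5_000
stats = np.empty(reps)
for r in range(reps):
    X = np.cumsum(rng.normal(size=n))           # random walk, X_0 = 0
    Z = np.column_stack([np.ones(n - 1), X[:-1]])
    y = X[1:]
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    s2 = resid @ resid / (n - 1 - 2)
    var_rho = s2 * np.linalg.inv(Z.T @ Z)[1, 1]
    stats[r] = (coef[1] - 1.0) / np.sqrt(var_rho)

# Markedly left-skewed relative to N(0,1); the lower quantiles sit near
# the "intercept only" Dickey-Fuller values rather than normal ones.
print(np.percentile(stats, [1, 5, 10, 50]))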

The consistency of the least squares estimator in the stationary AR(1)

The least squares estimator of β in the regression corresponding to

    X_t = βX_{t−1} + u_t

is

    b = ΣX_tX_{t−1}/ΣX_{t−1}² = β + Σu_tX_{t−1}/ΣX_{t−1}² = β + (n⁻¹Σu_tX_{t−1})/(n⁻¹ΣX_{t−1}²)

where we can show

    plim(n⁻¹Σu_tX_{t−1}) = 0   and   plim(n⁻¹ΣX_{t−1}²) = σ²/(1 − β²)    (31)

when u_t iid(0, σ²) and |β| < 1. Obtaining the first probability limit follows from writing

    var(n⁻¹Σu_tX_{t−1}) = n⁻²var(Σu_tX_{t−1})

and then

    var(Σu_tX_{t−1}) = E[(Σu_tX_{t−1})²] = E[Σ_t u_tX_{t−1} Σ_s u_sX_{s−1}],

which is

    Σ_tΣ_s E(u_tu_sX_{t−1}X_{s−1})

where E(u_tu_sX_{t−1}X_{s−1}) = 0 whenever t ≠ s. When t = s we have

    E(u_tu_sX_{t−1}X_{s−1}) = E(u_t²X_{t−1}²) = E(u_t²)E(X_{t−1}²)

which is

    (σ²)²/(1 − β²).

Having got this far it is pretty straightforward to use sufficient conditions and obtain plim(n⁻¹Σu_tX_{t−1}) = 0.

The second result in (31) can be established using the fact that

    X_t² = (βX_{t−1} + u_t)² = β²X_{t−1}² + u_t² + 2βX_{t−1}u_t

can be used to give n⁻¹ΣX_{t−1}² as

    [1/(1 − β²)][(X₀ − X_n)(X₀ + X_n)/n + Σu_t²/n + 2βΣX_{t−1}u_t/n]

and from this point on it is not too demanding an argument. The only work involves getting a probability limit of zero for the first term in brackets.

A limiting distribution is obtained by combining

    √n(b − β) = (n^{-1/2}Σu_tX_{t−1})/(n⁻¹ΣX_{t−1}²)

27

with

    n⁻¹ΣX_{t−1}² →p σ²/(1 − β²)   and   n^{-1/2}Σu_tX_{t−1} →d N(0, E((u_tX_{t−1})²))

to obtain

    √n(b − β) →d N(0, 1 − β²).

This result follows from the general result in (19) by noting that in the stationary AR(1)

    Q = plim(n⁻¹X′X) = plim(n⁻¹ΣX_{t−1}²) = σ²/(1 − β²).

28
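A simulation (illustrative β, σ, n) shows the variance of √n(b − β) close to 1 − β², and in particular free of σ²:

import numpy as np

rng = np.random.default_rng(9)

beta, sigma, n, reps = 0.5, 2.0, 1_000, 4_000
z = np.empty(reps)
for r in range(reps):
    u = rng.normal(scale=sigma, size=n)
    X = np.empty(n)
    X[0] = u[0]
    for t in range(1, n):
        X[t] = beta * X[t - 1] + u[t]
    b = (X[1:] @ X[:-1]) / (X[:-1] @ X[:-1])
    z[r] = np.sqrt(n) * (b - beta)

print(z.var(), 1 - beta**2)   # both close to 0.75, even though sigma = 2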

Problem Set Four

Q30) Let p(z) = 1 + p₁z + p₂z², with p₂ ≠ 0, have roots z₁ and z₂.

(a) Can z = 0 be a root of p(z)?

(b) Let i₁ = 1/z₁ and i₂ = 1/z₂. Show that p(z) can be written in the form p(z) = (1 − i₁z)(1 − i₂z).

Q31) Use X_t = u_t + ρu_{t−1} + ρ²u_{t−2} + ⋯, where the u_t are iid(0, σ²) and |ρ| < 1, to obtain the E(X_t), var(X_t), and cov(X_t, X_{t−k}) of (23). note:

    Σ_{j=0}^{∞} xʲ = 1 + x + x² + ⋯

is 1/(1 − x) if |x| < 1.

Q32) Show that

    (1 − ρL)⁻¹ − (1 − ρ)⁻¹ = [ρ/(ρ − 1)](1 − ρL)⁻¹(1 − L)

and that

    (1 − ρL)⁻¹L − (1 − ρ)⁻¹ = [1/(ρ − 1)](1 − ρL)⁻¹(1 − L).

Q33) Show that

    Σ_{t=1}^{n} X_t² = Σ_{t=1}^{n} X_{t−1}² − X₀² + X_n²    (32)

is always true. Now assume that X_t = αX_{t−1} + u_t holds, implying

    X_t² = α²X_{t−1}² + u_t² + 2αX_{t−1}u_t.    (33)

(a) Sum from t = 1 to t = n on both sides of (33) and use the result in (32) to show that

    (1 − α²)Σ_{t=1}^{n} X_{t−1}² = X₀² − X_n² + 2αΣ_{t=1}^{n} X_{t−1}u_t + Σ_{t=1}^{n} u_t².

(b) Use the result of part (a) to show that

    Σ_{t=1}^{n} X_{t−1}u_t = (1/2)[X_n² − X₀² − Σ_{t=1}^{n} u_t²]

29

holds if α = 1, the unit root case. Find the limiting distribution of n⁻¹ΣX_{t−1}u_t assuming X₀ = 0 and that the u_t are iid(0, σ²). hint: n⁻¹Σu_t² will, as usual, have a plim. Backwards substitution in X_t = X_{t−1} + u_t (which follows from X_t = αX_{t−1} + u_t and α = 1) gives X_n = u_n + u_{n−1} + ⋯ + u₁ if X₀ = 0. The most basic central limit theorem will give a limiting distribution for n^{-1/2}X_n to get you started. Note: the assumption that X₀ = 0 just makes the manipulations a bit easier.

(c) Use the result in (a) to show that

    n⁻¹Σ_{t=1}^{n} X_{t−1}² = [1/(1 − α²)][(X₀² − X_n²)/n + 2α n⁻¹Σ_{t=1}^{n} X_{t−1}u_t + n⁻¹Σ_{t=1}^{n} u_t²]

holds if α ≠ 1. If |α| < 1 and u_t is iid(0, σ²) it is possible to show that both plim(n⁻¹(X₀² − X_n²)) and plim(n⁻¹ΣX_{t−1}u_t) are equal to zero in the above expression. What can you conclude regarding n⁻¹ΣX_{t−1}²? Note: we are dealing with page 27 of the notes although there is some change in the notation.

Q34) Suppose y_t = αy_{t−1} + u_t with |α| < 1 and u_t = ρu_{t−1} + e_t with |ρ| < 1 and with e_t iid(0, σ_e²). Assume that

    plim(a) = α + E(y_{t−1}u_t)/E(y_{t−1}²)

gives the probability limit of a, the least squares estimator of α.

(a) Use

    y_{t−1} = u_{t−1} + αu_{t−2} + α²u_{t−3} + ⋯

and (23) to show that

    E(y_{t−1}u_t) = ρσ_e²/[(1 − ρ²)(1 − αρ)].

(b) Use

    y_t² = α²y_{t−1}² + u_t² + 2αy_{t−1}u_t    (34)

as a starting point in deriving

    E(y_{t−1}²) = σ_e²(1 + αρ)/[(1 − α²)(1 − ρ²)(1 − αρ)].

Hint: take expectations on both sides of (34) and use E(y_t²) = E(y_{t−1}²).

30

(c) Show that

    plim(a) = α + ρ(1 − α²)/(1 + αρ).

Comment.

Note: It is part of the folklore of econometrics that lagged dependent variables and autocorrelated disturbances mean least squares is inconsistent. This question provides a demonstration.

Q35) Suppose y_t = ρy_{t−1} + u_t with |ρ| < 1 and u_t iid(0, σ²). We have, using (23),

    var[n⁻¹Σ_{t=1}^{n} y_t] = n⁻²[σ²/(1 − ρ²)]S_n

where S_n is the sum of the elements of the matrix

    [ 1        ρ        ρ²       …  ρ^{n−2}  ρ^{n−1} ]
    [ ρ        1        ρ        …  ρ^{n−3}  ρ^{n−2} ]
    [ ρ²       ρ        1        …  ρ^{n−4}  ρ^{n−3} ]
    [ ⋮                                          ⋮   ]
    [ ρ^{n−2}  ρ^{n−3}  ρ^{n−4}  …  1        ρ       ]
    [ ρ^{n−1}  ρ^{n−2}  ρ^{n−3}  …  ρ        1       ]

(a) Show that

    S_n = [n − 2ρ − nρ² + 2ρ^{n+1}]/(1 − ρ)².

Suggestion: induction seems a possibility.

(b) Show that

    plim[n⁻¹Σ_{t=1}^{n} y_t] = 0.

31

Vector Autoregressions

The VAR(p)

    X_t = c + A₁X_{t−1} + ⋯ + A_pX_{t−p} + u_t

can be written as A(L)X_t = c + u_t where

    A(L) = I_k − A₁L − ⋯ − A_pL^p

if k is the dimension of X_t. The stationarity condition is that the roots of

    det(I_k − A₁z − ⋯ − A_pz^p)

all exceed one in absolute value. This condition is required to ensure that X_t = (A(L))⁻¹(c + u_t) produces a vector moving average.

The VAR(1) with k = 2 is

    [X_{1t}] = [c₁] + [a₁₁ a₁₂] [X_{1,t−1}] + [u_{1t}]
    [X_{2t}]   [c₂]   [a₂₁ a₂₂] [X_{2,t−1}]   [u_{2t}]

which is

    X_{1t} = c₁ + a₁₁X_{1,t−1} + a₁₂X_{2,t−1} + u_{1t}
    X_{2t} = c₂ + a₂₁X_{1,t−1} + a₂₂X_{2,t−1} + u_{2t}

in longhand. We have

    I − Az = [1 0] − [a₁₁ a₁₂] z = [1 − a₁₁z   −a₁₂z  ]
             [0 1]   [a₂₁ a₂₂]     [−a₂₁z     1 − a₂₂z]

which has determinant

    1 − (a₁₁ + a₂₂)z + (a₁₁a₂₂ − a₁₂a₂₁)z².

The two roots of this expression are both required to exceed one in absolute value for stationarity and given this (27) and (21) can be used in

    (I − AL)⁻¹ = [1 − a₂₂L   a₁₂L    ] / det(I − AL)
                 [a₂₁L       1 − a₁₁L]

to obtain a vector moving average of the form

    X_t = e + u_t + M₁u_{t−1} + M₂u_{t−2} + ⋯

implying E(X_t) = e and var(X_t) = Ω + M₁ΩM₁′ + M₂ΩM₂′ + ⋯, where Ω = var(u_t). The expectation vector e can be found from

    E(X_t) = E(c + AX_{t−1} + u_t) = c + AE(X_{t−1}) + E(u_t) = c + AE(X_{t−1})

and then E(X_t) = E(X_{t−1}).

The VAR(2)

    X_t = c + A₁X_{t−1} + A₂X_{t−2} + u_t

32
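The stationarity condition can be checked numerically: the roots of det(I_k − A₁z − ⋯ − A_pz^p) are the reciprocals of the eigenvalues of the companion matrix, so all eigenvalues must lie inside the unit circle. The A₁ and A₂ below are illustrative:

import numpy as np

# Companion matrix for a VAR(2): F = [[A1, A2], [I, 0]].
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
A2 = np.array([[0.1, 0.0],
               [0.0, 0.1]])
k = A1.shape[0]

companion = np.block([[A1, A2],
                      [np.eye(k), np.zeros((k, k))]])
eigs = np.linalg.eigvals(companion)
print(np.abs(eigs))      # all < 1 here: stationary
print(np.abs(1 / eigs))  # the roots of det(I - A1 z - A2 z^2), all > 1 in modulus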

is

    X_t − X_{t−1} = c + (A₁ − I_k)X_{t−1} + A₂X_{t−2} + u_t

which is

    X_t − X_{t−1} = c + (A₁ + A₂ − I_k)X_{t−1} − A₂(X_{t−1} − X_{t−2}) + u_t

which is

    ΔX_t = c − ΠX_{t−1} + Γ₁ΔX_{t−1} + u_t

where Π = I_k − A₁ − A₂ and Γ₁ = −A₂. This matrix will be singular if z = 1 is a solution to det(I_k − A₁z − A₂z²) = 0. A k × k matrix with rank r where 0 < r < k can be written in the form αβ′ where α and β are k × r and full column rank. We have

    ΔX_t = c − αβ′X_{t−1} + Γ₁ΔX_{t−1} + u_t    (35)

which has the appearance of a vector ecm.⁷ For illustration take k = 3 and r = 2. We have

    α = [α₁₁ α₁₂]    β = [β₁₁ β₁₂]
        [α₂₁ α₂₂]        [β₂₁ β₂₂]
        [α₃₁ α₃₂]        [β₃₁ β₃₂]

and then

    β′X_{t−1} = [β₁₁X_{1,t−1} + β₂₁X_{2,t−1} + β₃₁X_{3,t−1}] = [ecm_{1,t−1}]
                [β₁₂X_{1,t−1} + β₂₂X_{2,t−1} + β₃₂X_{3,t−1}]   [ecm_{2,t−1}]

with

    αβ′X_{t−1} = [α₁₁ecm_{1,t−1} + α₁₂ecm_{2,t−1}]
                 [α₂₁ecm_{1,t−1} + α₂₂ecm_{2,t−1}]
                 [α₃₁ecm_{1,t−1} + α₃₂ecm_{2,t−1}]

We find that (35) is

    ΔX_{1t} = c₁ − α₁₁ecm_{1,t−1} − α₁₂ecm_{2,t−1} + γ₁₁ΔX_{1,t−1} + γ₁₂ΔX_{2,t−1} + γ₁₃ΔX_{3,t−1} + u_{1t}
    ΔX_{2t} = c₂ − α₂₁ecm_{1,t−1} − α₂₂ecm_{2,t−1} + γ₂₁ΔX_{1,t−1} + γ₂₂ΔX_{2,t−1} + γ₂₃ΔX_{3,t−1} + u_{2t}
    ΔX_{3t} = c₃ − α₃₁ecm_{1,t−1} − α₃₂ecm_{2,t−1} + γ₃₁ΔX_{1,t−1} + γ₃₂ΔX_{2,t−1} + γ₃₃ΔX_{3,t−1} + u_{3t}.

The final reduced rank case is rank(Π) = 0, giving Π = 0_{k×k}, and

    ΔX_t = c + Γ₁ΔX_{t−1} + u_t.

⁷ Further discussion requires finding conditions under which X_t is I(1) with ΔX_t and β′X_t being I(0). If this is the case then β′X_t represents cointegration. Pages 146 to 152 of Cointegration, Error-Correction, and the Econometric Analysis of Non-Stationary Data by Banerjee, Dolado, Galbraith, and Hendry is a possible starting point.

33
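The k = 3, r = 2 illustration can be made concrete with numerical α and β (the values below are arbitrary, chosen only to show the construction):

import numpy as np

# Build Pi = alpha beta' from illustrative k x r factors and confirm
# that its rank is r = 2.
alpha = np.array([[ 0.3,  0.0],
                  [-0.2,  0.4],
                  [ 0.1, -0.5]])
beta = np.array([[ 1.0,  0.0],
                 [-1.0,  1.0],
                 [ 0.0, -1.0]])

Pi = alpha @ beta.T
print(np.linalg.matrix_rank(Pi))   # 2

# beta' X_{t-1} stacks the two error-correction terms
X_lag = np.array([1.0, 2.0, 3.0])
print(beta.T @ X_lag)              # [ecm_1, ecm_2]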

A vector autoregression

Consider the system defined by

    Y_t = θ₁X_t + θ₂X_{t−1} + θ₃Y_{t−1} + u_t   and   X_t = αX_{t−1} + v_t

where

    [u_t]  is iid ( [0] , [σ_u²  σ_uv] ).    (36)
    [v_t]           [0]   [σ_uv  σ_v²]

The implied VAR will be stationary when

    |θ₃| < 1 and |α| < 1

and it follows that E(X_t), E(Y_t), var(X_t), var(Y_t), cov(X_t, Y_t), … will not depend on t in this case. The consistency of least squares estimators can be investigated using the plim(b) = β + Q⁻¹η result. The least squares estimator of α will be consistent. The consistency of the least squares estimator of [θ₁ θ₂ θ₃]′ requires σ_uv = 0.

When |θ₃| < 1 and α = 1 there is a unit root and the implied VAR is nonstationary. Progress requires considering the value of θ₁ + θ₂. If θ₁ + θ₂ = 0 then Y_t is a stationary AR(1) so that Y_t is I(0) and X_t is I(1). When θ₁ + θ₂ ≠ 0 the variable Y_t is I(1), the variable X_t is I(1), and it is possible to show that Y_t and X_t are cointegrated. This follows from showing that the linear combination Y_t − θX_t, with

    θ = (θ₁ + θ₂)/(1 − θ₃),

is I(0). This follows from

    Y_t − θX_t = θ₃(Y_{t−1} − θX_{t−1}) + u_t + (θ₁ − θ)v_t,

which defines a stationary AR(1) for Y_t − θX_t. If ε_t = u_t + (θ₁ − θ)v_t we have

    Y_t − θX_t = θ₃(Y_{t−1} − θX_{t−1}) + ε_t,

which is

    (1 − θ₃L)(Y_t − θX_t) = ε_t,

which is

    Y_t − θX_t = (1 − θ₃L)⁻¹ε_t,

so that

    Y_t = θX_t + e_t,    (37)

where e_t is AR(1). The regression corresponding to (37) is the cointegrating regression. A limiting distribution follows from the joint limiting distribution that can be shown to exist for n⁻¹ΣX_te_t and n⁻²ΣX_t² and from

    n(θ̂ − θ) = (n⁻¹ΣX_te_t)/(n⁻²ΣX_t²).

The other non-stationary cases which it is worth considering are (i) θ₃ = 1 and |α| < 1, and (ii) θ₃ = 1 and α = 1.

34
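The cointegration argument can be simulated (all parameter values below are illustrative, and σ_uv = 0 is used for simplicity): the regression of Y_t on X_t settles on θ at the fast rate suggested by n(θ̂ − θ), and Y_t − θX_t behaves like an AR(1) with coefficient θ₃:

import numpy as np

rng = np.random.default_rng(10)

theta1, theta2, theta3, n = 0.6, 0.2, 0.5, 20_000
theta = (theta1 + theta2) / (1 - theta3)   # = 1.6 here

v = rng.normal(size=n)
u = rng.normal(size=n)
X = np.cumsum(v)                           # alpha = 1: X_t = X_{t-1} + v_t
Y = np.zeros(n)
for t in range(1, n):
    Y[t] = theta1 * X[t] + theta2 * X[t - 1] + theta3 * Y[t - 1] + u[t]

theta_hat = (X @ Y) / (X @ X)              # the cointegrating regression
print(theta_hat, theta)

e = Y - theta * X
print(np.corrcoef(e[1:], e[:-1])[0, 1], theta3)   # e_t is AR(1) with coefficient theta3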

Problem Set Five

Q40) Write the two equations in (2.13) from Hendry's book Dynamic Econometrics as a VAR in [c_t i_t]′.

(a) Show that the stability conditions are on the roots of the quadratic

    1 + (α₁ − β₁ − 2)z + (1 + β₁ − α₁)z²,

and that the roots are z = 1 and z = 1/(1 + β₁ − α₁).

(b) Show that

    c_t − i_t = α₀ − β₀ + (1 + β₁ − α₁)(c_{t−1} − i_{t−1}) + ε_t − v_t.

What condition is required to deduce that c_t − i_t is I(0)? What does finding that c_t − i_t is I(0) imply? hint: subtract the i_t equation from the c_t equation and rearrange using Δc_t − Δi_t = Δ(c_t − i_t) to obtain the required equation.

Q41) (a) With reference to the [u_t v_t]′ process of (36) find the γ which gives cov(e_t, v_t) = 0 where e_t is defined by e_t = u_t − γv_t.

(b) Use the result of part (a) to find an expression for the probability limit of the least squares estimator of [θ₁ θ₂ θ₃]′ in the regression corresponding to

    Y_t = θ₁X_t + θ₂X_{t−1} + θ₃Y_{t−1} + u_t

when the VAR of "A vector autoregression" is assumed to be stationary. Hint: substitute

    u_t = γv_t + e_t = γ(X_t − αX_{t−1}) + e_t,

rearrange, and consider the consistency of least squares in the implied regression.

Q42) Assume |θ₃| < 1, α = 1, and θ₁ + θ₂ ≠ 0 in the "A vector autoregression" notes.

(a) Show that the equation for Y_t can be rearranged to take the form

    ΔY_t = θ₁ΔX_t + (θ₃ − 1)(Y_{t−1} − θX_{t−1}) + u_t.    (38)

What would you call this expression?

(b) Assuming θ is a known parameter:

(i) Use plim(b) = β + Q⁻¹η to decide whether least squares will consistently estimate θ₁ and θ₃ − 1 in the regression corresponding to (38).

35

(ii) Show that a regression of ΔY_t on Y_{t−1} − θX_{t−1} will consistently estimate θ₃ − 1.

(iii) Find the limiting distribution of the estimator of (ii) above. Hint: apply the √n(b − β) →d N(0, σ²Q⁻¹) of (19) by finding Q.

Q43) Consider the VAR(1), X_t = c + AX_{t−1} + u_t, with

    A = [0.8  0.2]
        [0.6  0.4]

Find the roots of the VAR. Show that the matrix Π = A − I, as in ΔX_t = c + ΠX_{t−1} + u_t, can be written as αβ′ with

    α = [−0.2]    β = [ 1]
        [ 0.6]        [−1]

Show that β′X_t = X_{1t} − X_{2t} is a stationary AR(1). Show that

    ΔX_{1t} = −0.2(X_{1,t−1} − X_{2,t−1}) + c₁ + u_{1t}

and that

    ΔX_{2t} = 0.6(X_{1,t−1} − X_{2,t−1}) + c₂ + u_{2t}.

Q44) Consider the VAR(1), X_t = c + AX_{t−1} + u_t, with

    A = [0.8  −0.4]
        [0.1   1.2]

Find the roots of the VAR. Show that the matrix Π = A − I, as in ΔX_t = c + ΠX_{t−1} + u_t, can be written as αβ′ with

    α = [−0.2]    β = [1]
        [ 0.1]        [2]

Show that β′X_t = X_{1t} + 2X_{2t} is a random walk process. Show that

    ΔX_{1t} = c₁ − 0.2(X_{1,t−1} + 2X_{2,t−1}) + u_{1t}

and that

    ΔX_{2t} = c₂ + 0.1(X_{1,t−1} + 2X_{2,t−1}) + u_{2t}.

Comment.

36


More information

Advanced Quantitative Methods: ordinary least squares

Advanced Quantitative Methods: ordinary least squares Advanced Quantitative Methods: Ordinary Least Squares University College Dublin 31 January 2012 1 2 3 4 5 Terminology y is the dependent variable referred to also (by Greene) as a regressand X are the

More information

Multivariate Time Series

Multivariate Time Series Multivariate Time Series Notation: I do not use boldface (or anything else) to distinguish vectors from scalars. Tsay (and many other writers) do. I denote a multivariate stochastic process in the form

More information

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56 Cointegrated VAR s Eduardo Rossi University of Pavia November 2013 Rossi Cointegrated VAR s Financial Econometrics - 2013 1 / 56 VAR y t = (y 1t,..., y nt ) is (n 1) vector. y t VAR(p): Φ(L)y t = ɛ t The

More information

1. The Multivariate Classical Linear Regression Model

1. The Multivariate Classical Linear Regression Model Business School, Brunel University MSc. EC550/5509 Modelling Financial Decisions and Markets/Introduction to Quantitative Methods Prof. Menelaos Karanasos (Room SS69, Tel. 08956584) Lecture Notes 5. The

More information

General Linear Model: Statistical Inference

General Linear Model: Statistical Inference Chapter 6 General Linear Model: Statistical Inference 6.1 Introduction So far we have discussed formulation of linear models (Chapter 1), estimability of parameters in a linear model (Chapter 4), least

More information

Linear Regression with Time Series Data

Linear Regression with Time Series Data u n i v e r s i t y o f c o p e n h a g e n d e p a r t m e n t o f e c o n o m i c s Econometrics II Linear Regression with Time Series Data Morten Nyboe Tabor u n i v e r s i t y o f c o p e n h a g

More information

Answers to Problem Set #4

Answers to Problem Set #4 Answers to Problem Set #4 Problems. Suppose that, from a sample of 63 observations, the least squares estimates and the corresponding estimated variance covariance matrix are given by: bβ bβ 2 bβ 3 = 2

More information

Advanced Econometrics

Advanced Econometrics Based on the textbook by Verbeek: A Guide to Modern Econometrics Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna May 2, 2013 Outline Univariate

More information

So far our focus has been on estimation of the parameter vector β in the. y = Xβ + u

So far our focus has been on estimation of the parameter vector β in the. y = Xβ + u Interval estimation and hypothesis tests So far our focus has been on estimation of the parameter vector β in the linear model y i = β 1 x 1i + β 2 x 2i +... + β K x Ki + u i = x iβ + u i for i = 1, 2,...,

More information

Econometrics of Panel Data

Econometrics of Panel Data Econometrics of Panel Data Jakub Mućk Meeting # 6 Jakub Mućk Econometrics of Panel Data Meeting # 6 1 / 36 Outline 1 The First-Difference (FD) estimator 2 Dynamic panel data models 3 The Anderson and Hsiao

More information

ECON The Simple Regression Model

ECON The Simple Regression Model ECON 351 - The Simple Regression Model Maggie Jones 1 / 41 The Simple Regression Model Our starting point will be the simple regression model where we look at the relationship between two variables In

More information

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation

More information

Econ 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )

Econ 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A ) Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=

More information

Outline. Nature of the Problem. Nature of the Problem. Basic Econometrics in Transportation. Autocorrelation

Outline. Nature of the Problem. Nature of the Problem. Basic Econometrics in Transportation. Autocorrelation 1/30 Outline Basic Econometrics in Transportation Autocorrelation Amir Samimi What is the nature of autocorrelation? What are the theoretical and practical consequences of autocorrelation? Since the assumption

More information

Christopher Dougherty London School of Economics and Political Science

Christopher Dougherty London School of Economics and Political Science Introduction to Econometrics FIFTH EDITION Christopher Dougherty London School of Economics and Political Science OXFORD UNIVERSITY PRESS Contents INTRODU CTION 1 Why study econometrics? 1 Aim of this

More information

Heteroskedasticity and Autocorrelation

Heteroskedasticity and Autocorrelation Lesson 7 Heteroskedasticity and Autocorrelation Pilar González and Susan Orbe Dpt. Applied Economics III (Econometrics and Statistics) Pilar González and Susan Orbe OCW 2014 Lesson 7. Heteroskedasticity

More information

Linear Regression. y» F; Ey = + x Vary = ¾ 2. ) y = + x + u. Eu = 0 Varu = ¾ 2 Exu = 0:

Linear Regression. y» F; Ey = + x Vary = ¾ 2. ) y = + x + u. Eu = 0 Varu = ¾ 2 Exu = 0: Linear Regression 1 Single Explanatory Variable Assume (y is not necessarily normal) where Examples: y» F; Ey = + x Vary = ¾ 2 ) y = + x + u Eu = 0 Varu = ¾ 2 Exu = 0: 1. School performance as a function

More information

Instrumental Variables

Instrumental Variables Università di Pavia 2010 Instrumental Variables Eduardo Rossi Exogeneity Exogeneity Assumption: the explanatory variables which form the columns of X are exogenous. It implies that any randomness in the

More information

FENG CHIA UNIVERSITY ECONOMETRICS I: HOMEWORK 4. Prof. Mei-Yuan Chen Spring 2008

FENG CHIA UNIVERSITY ECONOMETRICS I: HOMEWORK 4. Prof. Mei-Yuan Chen Spring 2008 FENG CHIA UNIVERSITY ECONOMETRICS I: HOMEWORK 4 Prof. Mei-Yuan Chen Spring 008. Partition and rearrange the matrix X as [x i X i ]. That is, X i is the matrix X excluding the column x i. Let u i denote

More information

Chapter 5 Matrix Approach to Simple Linear Regression

Chapter 5 Matrix Approach to Simple Linear Regression STAT 525 SPRING 2018 Chapter 5 Matrix Approach to Simple Linear Regression Professor Min Zhang Matrix Collection of elements arranged in rows and columns Elements will be numbers or symbols For example:

More information

VAR Models and Cointegration 1

VAR Models and Cointegration 1 VAR Models and Cointegration 1 Sebastian Fossati University of Alberta 1 These slides are based on Eric Zivot s time series notes available at: http://faculty.washington.edu/ezivot The Cointegrated VAR

More information

Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance:

Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance: 8. PROPERTIES OF LEAST SQUARES ESTIMATES 1 Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = 0. 2. The errors are uncorrelated with common variance: These assumptions

More information

Two-Variable Regression Model: The Problem of Estimation

Two-Variable Regression Model: The Problem of Estimation Two-Variable Regression Model: The Problem of Estimation Introducing the Ordinary Least Squares Estimator Jamie Monogan University of Georgia Intermediate Political Methodology Jamie Monogan (UGA) Two-Variable

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Econometrics. Week 11. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague

Econometrics. Week 11. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Econometrics Week 11 Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Fall 2012 1 / 30 Recommended Reading For the today Advanced Time Series Topics Selected topics

More information

5: MULTIVARATE STATIONARY PROCESSES

5: MULTIVARATE STATIONARY PROCESSES 5: MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalarvalued random variables on the same probability

More information

SOME BASICS OF TIME-SERIES ANALYSIS

SOME BASICS OF TIME-SERIES ANALYSIS SOME BASICS OF TIME-SERIES ANALYSIS John E. Floyd University of Toronto December 8, 26 An excellent place to learn about time series analysis is from Walter Enders textbook. For a basic understanding of

More information

Business Economics BUSINESS ECONOMICS. PAPER No. : 8, FUNDAMENTALS OF ECONOMETRICS MODULE No. : 3, GAUSS MARKOV THEOREM

Business Economics BUSINESS ECONOMICS. PAPER No. : 8, FUNDAMENTALS OF ECONOMETRICS MODULE No. : 3, GAUSS MARKOV THEOREM Subject Business Economics Paper No and Title Module No and Title Module Tag 8, Fundamentals of Econometrics 3, The gauss Markov theorem BSE_P8_M3 1 TABLE OF CONTENTS 1. INTRODUCTION 2. ASSUMPTIONS OF

More information

Spatial Regression. 3. Review - OLS and 2SLS. Luc Anselin. Copyright 2017 by Luc Anselin, All Rights Reserved

Spatial Regression. 3. Review - OLS and 2SLS. Luc Anselin.   Copyright 2017 by Luc Anselin, All Rights Reserved Spatial Regression 3. Review - OLS and 2SLS Luc Anselin http://spatial.uchicago.edu OLS estimation (recap) non-spatial regression diagnostics endogeneity - IV and 2SLS OLS Estimation (recap) Linear Regression

More information

EC3062 ECONOMETRICS. THE MULTIPLE REGRESSION MODEL Consider T realisations of the regression equation. (1) y = β 0 + β 1 x β k x k + ε,

EC3062 ECONOMETRICS. THE MULTIPLE REGRESSION MODEL Consider T realisations of the regression equation. (1) y = β 0 + β 1 x β k x k + ε, THE MULTIPLE REGRESSION MODEL Consider T realisations of the regression equation (1) y = β 0 + β 1 x 1 + + β k x k + ε, which can be written in the following form: (2) y 1 y 2.. y T = 1 x 11... x 1k 1

More information

1 Introduction to Generalized Least Squares

1 Introduction to Generalized Least Squares ECONOMICS 7344, Spring 2017 Bent E. Sørensen April 12, 2017 1 Introduction to Generalized Least Squares Consider the model Y = Xβ + ɛ, where the N K matrix of regressors X is fixed, independent of the

More information

ECON 3150/4150, Spring term Lecture 7

ECON 3150/4150, Spring term Lecture 7 ECON 3150/4150, Spring term 2014. Lecture 7 The multivariate regression model (I) Ragnar Nymoen University of Oslo 4 February 2014 1 / 23 References to Lecture 7 and 8 SW Ch. 6 BN Kap 7.1-7.8 2 / 23 Omitted

More information

Econometrics. Week 4. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague

Econometrics. Week 4. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Econometrics Week 4 Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Fall 2012 1 / 23 Recommended Reading For the today Serial correlation and heteroskedasticity in

More information

Problem Set #6: OLS. Economics 835: Econometrics. Fall 2012

Problem Set #6: OLS. Economics 835: Econometrics. Fall 2012 Problem Set #6: OLS Economics 835: Econometrics Fall 202 A preliminary result Suppose we have a random sample of size n on the scalar random variables (x, y) with finite means, variances, and covariance.

More information

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t,

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

Regression with time series

Regression with time series Regression with time series Class Notes Manuel Arellano February 22, 2018 1 Classical regression model with time series Model and assumptions The basic assumption is E y t x 1,, x T = E y t x t = x tβ

More information

Math Camp II. Calculus. Yiqing Xu. August 27, 2014 MIT

Math Camp II. Calculus. Yiqing Xu. August 27, 2014 MIT Math Camp II Calculus Yiqing Xu MIT August 27, 2014 1 Sequence and Limit 2 Derivatives 3 OLS Asymptotics 4 Integrals Sequence Definition A sequence {y n } = {y 1, y 2, y 3,..., y n } is an ordered set

More information

Econometrics A. Simple linear model (2) Keio University, Faculty of Economics. Simon Clinet (Keio University) Econometrics A October 16, / 11

Econometrics A. Simple linear model (2) Keio University, Faculty of Economics. Simon Clinet (Keio University) Econometrics A October 16, / 11 Econometrics A Keio University, Faculty of Economics Simple linear model (2) Simon Clinet (Keio University) Econometrics A October 16, 2018 1 / 11 Estimation of the noise variance σ 2 In practice σ 2 too

More information

Econ 583 Final Exam Fall 2008

Econ 583 Final Exam Fall 2008 Econ 583 Final Exam Fall 2008 Eric Zivot December 11, 2008 Exam is due at 9:00 am in my office on Friday, December 12. 1 Maximum Likelihood Estimation and Asymptotic Theory Let X 1,...,X n be iid random

More information

Ch.10 Autocorrelated Disturbances (June 15, 2016)

Ch.10 Autocorrelated Disturbances (June 15, 2016) Ch10 Autocorrelated Disturbances (June 15, 2016) In a time-series linear regression model setting, Y t = x tβ + u t, t = 1, 2,, T, (10-1) a common problem is autocorrelation, or serial correlation of the

More information

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS Exam: ECON3150/ECON4150 Introductory Econometrics Date of exam: Wednesday, May 15, 013 Grades are given: June 6, 013 Time for exam: :30 p.m. 5:30 p.m. The problem

More information

Vector Auto-Regressive Models

Vector Auto-Regressive Models Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information