Economics 102C: Advanced Topics in Econometrics 4 - Asymptotics & Large Sample Properties of OLS


1 Economics 102C: Advanced Topics in Econometrics
4 - Asymptotics & Large Sample Properties of OLS
Michael Best, Spring 2015

2 Asymptotics
So far we have looked at the finite sample properties of OLS. These relied heavily on the assumption that ε_i | X ~ N[0, σ²]. Without this assumption, the t statistic doesn't have a t distribution, and F statistics don't have F distributions. Luckily, if we are unwilling to assume normality, we can still use asymptotic (large sample) properties of estimators: we look at what happens to estimators as the sample size n → ∞.
Asymptotics | OLS Asymptotics | OLS Asymptotic Inference | Measurement Error | Omitted Variable Bias | Bad Control

3 Asymptotics
Most estimators are functions of sample means, so we focus on what happens to these means in large samples. It turns out that things like the t and F statistics will have approximately t and F distributions in large samples. Most advanced methods are justified exclusively on asymptotic grounds; their finite-sample properties are often unknown.

4 Convergence in Probability
We define asymptotic arguments with respect to the sample size. Let x_n be a sequence of random variables (a set of random variables {x_1, x_2, ..., x_n}) indexed by the sample size n.
Definition (Convergence in Probability): The random variable x_n converges in probability to a constant c if
lim_{n→∞} Pr(|x_n − c| > ε) = 0 for any ε > 0.
The probability that x_n takes values far from c disappears as n → ∞. If x_n converges in probability to c, we say that plim x_n = c.

5 Convergence in Mean Square
It is often hard to verify convergence in probability directly, so we usually use a special case.
Definition (Convergence in Mean Square): If x_n has mean µ_n and variance σ_n² with limits c and 0, respectively, then x_n converges in mean square to c, and plim x_n = c.
It is easier to check that the mean and variance have limits. Note that mean square convergence implies convergence in probability, but not vice versa.
Definition: An estimator θ̂ of a parameter θ is a consistent estimator of θ if and only if plim θ̂ = θ.

6 Consistency: Practice
Turn to your neighbor: let's take the example of the sample mean in an i.i.d. random sample, x̄_n = (1/n) Σᵢ x_i:
1. Find E[x̄_n] and Var[x̄_n].
2. What happens to E[x̄_n] and Var[x̄_n] as n → ∞?
3. Using the definition of convergence in mean square, what is plim x̄_n?

7 Consistency: Practice
Turn to your neighbor: let's take the example of the sample mean in an i.i.d. random sample, x̄_n = (1/n) Σᵢ x_i:
1. Find E[x̄_n] and Var[x̄_n].
E[x̄_n] = E[(1/n) Σᵢ x_i] = (1/n) Σᵢ E[x_i] = µ
Var[x̄_n] = Var[(1/n) Σᵢ x_i] = (1/n²) Σᵢ Var[x_i] = (1/n²) · n Var[x] = σ²/n
2. What happens to E[x̄_n] and Var[x̄_n] as n → ∞?
E[x̄_n] → µ and Var[x̄_n] → 0
3. Using the definition of convergence in mean square, what is plim x̄_n?
x̄_n converges in mean square to µ, so plim x̄_n = µ.
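The mean-square argument above is easy to check numerically. A short simulation sketch (the normal distribution and the parameter values µ = 2, σ = 3 are illustrative choices, not from the slides) draws the sample mean many times and shows its spread shrinking like σ/√n:

```python
import random, statistics

random.seed(0)
mu, sigma = 2.0, 3.0

def xbar(n):
    # sample mean of n i.i.d. N(mu, sigma^2) draws
    return statistics.fmean(random.gauss(mu, sigma) for _ in range(n))

# Var[xbar_n] = sigma^2 / n: the spread of the sample mean shrinks with n
spread_small = statistics.pstdev([xbar(10) for _ in range(2000)])
spread_big = statistics.pstdev([xbar(1000) for _ in range(2000)])
print(spread_small, spread_big)
```

With σ = 3 the two spreads should sit near 3/√10 ≈ 0.95 and 3/√1000 ≈ 0.09, and the sample mean itself stabilizes at µ.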

8 Consistency
In fact the example of the sample mean generalizes: for any function g(x), if E[g(x)] and Var[g(x)] are finite constants, then
plim (1/n) Σᵢ g(x_i) = E[g(x)].

9 The Law of Large Numbers and Slutsky's Theorem
Theorem (Law of Large Numbers): If x_i, i = 1, ..., n is a random (i.i.d.) sample from a distribution with finite mean E[x_i] = µ < ∞, then plim x̄_n = µ.
Theorem (Slutsky's Theorem): For a continuous function g(x_n) that is not a function of n, plim g(x_n) = g(plim x_n).
The proofs are hard and not required for 102C. These results are very powerful and allow us to show that estimators (functions of the data x_n) are consistent.
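Slutsky's theorem can be seen in a quick simulation (exponential draws with mean 4 and g = √ are illustrative choices): since plim x̄_n = µ, a continuous function of x̄_n converges to the function of µ.

```python
import math, random, statistics

random.seed(1)
mu = 4.0

def xbar(n):
    # sample mean of n i.i.d. exponential draws with mean mu
    return statistics.fmean(random.expovariate(1 / mu) for _ in range(n))

# LLN: plim xbar_n = mu; Slutsky: plim sqrt(xbar_n) = sqrt(mu) = 2
print(math.sqrt(xbar(100)), math.sqrt(xbar(400000)))
```

The second printed value should be very close to √µ = 2, even though E[√x̄_n] ≠ √µ in any finite sample.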

10 Properties of plims
Probability limits have a number of useful properties. If x_n and y_n are random variables with plim x_n = c and plim y_n = d, then
1. plim (x_n + y_n) = c + d
2. plim x_n y_n = cd
3. plim x_n / y_n = c/d as long as d ≠ 0
If W_n is a matrix whose elements are random variables, and if plim W_n = Ω, then plim W_n⁻¹ = Ω⁻¹.
If X_n and Y_n are random matrices with plim X_n = A and plim Y_n = B, then plim X_n Y_n = AB.

11 Convergence in Distribution
We use the plim to analyze whether estimators are consistent. In order to do inference (e.g., is a coefficient = 0?), we need to know the distribution of the estimator.
Definition (Convergence in Distribution): x_n converges in distribution to a random variable x with CDF F(x) if
lim_{n→∞} |F_n(x) − F(x)| = 0 at all continuity points of F(x).
We denote this as x_n →d x.

12 Rules for Convergence in Distribution
Analogously to the rules for plims, if x_n →d x and plim y_n = c, then
x_n y_n →d cx
x_n + y_n →d x + c
x_n / y_n →d x/c if c ≠ 0
If x_n →d x and g(·) is a continuous function, then g(x_n) →d g(x).
If y_n →d y and plim (x_n − y_n) = 0, then x_n →d y.
If x_n →d x, then for a constant c, c x_n →d c x.

13 Asymptotic Normality and the Central Limit Theorem
In principle, if plim θ̂ = θ, then θ̂ →d θ and the limiting distribution of θ̂ is a spike at θ. Of course, we don't think that in any given sample this is a reasonable thing to assume. Instead, to get a more useful description of the statistical properties of the estimator, we use a stabilizing transformation: we study √n(θ̂ − θ) rather than θ̂ itself.

14 Asymptotic Normality and the Central Limit Theorem
Theorem (Univariate Central Limit Theorem): If x_1, x_2, ..., x_n are a random sample from a distribution with mean µ < ∞ and variance σ² < ∞, and x̄_n = (1/n) Σᵢ x_i, then
√n(x̄_n − µ) →d N[0, σ²].
Theorem (Multivariate Central Limit Theorem): If x_1, x_2, ..., x_n are a random sample from a multivariate distribution with finite mean vector µ and finite covariance matrix Q, and x̄_n = (1/n) Σᵢ x_i, then
√n(x̄_n − µ) →d N[0, Q].
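The CLT can be illustrated with decidedly non-normal data. A small sketch (Uniform(0,1) draws and n = 200 are illustrative choices) builds the stabilized statistic √n(x̄_n − µ)/σ many times and checks that roughly 95% of the draws land inside ±1.96, as they would for a standard normal:

```python
import math, random, statistics

random.seed(2)
n, reps = 200, 4000
mu, sigma = 0.5, math.sqrt(1 / 12)  # mean and sd of Uniform(0, 1)

z = []
for _ in range(reps):
    xbar = statistics.fmean(random.random() for _ in range(n))
    z.append(math.sqrt(n) * (xbar - mu) / sigma)  # stabilizing transformation

# if z is approximately N(0, 1), about 95% of draws fall inside +/-1.96
share = sum(abs(v) < 1.96 for v in z) / reps
print(share)
```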

15 Outline
Asymptotics | OLS Asymptotics | OLS Asymptotic Inference | Measurement Error | Omitted Variable Bias | Bad Control

16 OLS Asymptotics: Consistency
We need to modify slightly the assumptions we made when studying OLS in finite samples. We assume:
1. (x_i, ε_i), i = 1, ..., n is a sequence of independent observations
2. plim X′X/n = Q, a non-singular matrix
Now rewrite the OLS estimate β̂ as
β̂ = β + (X′X/n)⁻¹ (X′ε/n)
so that asymptotically:
plim β̂ = β + Q⁻¹ plim (X′ε/n).

17 OLS Asymptotics: Consistency
So we need to find plim (X′ε/n). Notice we can write this as
(1/n) X′ε = (1/n) Σᵢ x_i ε_i = (1/n) Σᵢ w_i = w̄
so that plim β̂ = β + Q⁻¹ plim w̄.
To find plim w̄ we need to see what happens to E[w̄] and Var[w̄] as n → ∞:
E[w_i] = E_x[E[w_i | x_i]] = E_x[x_i E[ε_i | x_i]] = 0, since E[ε_i | x_i] = 0 by the exogeneity assumption.
Hence E[w̄] = 0 (< ∞).

18 OLS Asymptotics: Consistency
Turning to Var[w̄]:
Var[w̄] = E[Var[w̄ | X]] + Var[E[w̄ | X]], where the second term is 0 because E[ε_i | X] = 0.
Var[w̄ | X] = E[w̄ w̄′ | X] = (1/n) X′ E[εε′ | X] X (1/n) = (1/n) X′ (σ²I) X (1/n) = (σ²/n)(X′X/n)
so Var[w̄] = (σ²/n) E[X′X/n] and lim_{n→∞} Var[w̄] = 0 · Q = 0.
Therefore lim E[w̄] = 0 < ∞ and lim Var[w̄] = 0, so w̄ converges in mean square to 0 and plim w̄ = 0.
Putting this all together: plim β̂ = β + Q⁻¹ · 0 = β.
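The consistency result needs no normality at all. A simulation sketch (the bivariate model, the parameter values, and the centered-exponential errors are illustrative choices) fits the slope by least squares and shows it homing in on the truth as n grows:

```python
import random, statistics

random.seed(3)
beta0, beta1 = 1.0, 2.0

def ols_slope(n):
    x = [random.gauss(0, 1) for _ in range(n)]
    # deliberately non-normal errors (centered exponential); E[u | x] = 0 still holds
    u = [random.expovariate(1.0) - 1.0 for _ in range(n)]
    y = [beta0 + beta1 * xi + ui for xi, ui in zip(x, u)]
    xb, yb = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    sxx = sum((xi - xb) ** 2 for xi in x)
    return sxy / sxx  # OLS slope estimate

print(ols_slope(100), ols_slope(100000))
```

The n = 100,000 estimate should be much closer to β_1 = 2 than the n = 100 one.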

19 OLS Asymptotic Distribution
We want to relax the normality assumption, but we still want to know the distribution of β̂ (at least in large enough samples). To do this, we will use the central limit theorem, which requires us to assume that the observations are independent.
We will focus on
√n(β̂ − β) = (X′X/n)⁻¹ (1/√n) X′ε.
Using the rules of convergence in distribution, if this has a limiting distribution, it's the same as that of
[plim (X′X/n)⁻¹] (1/√n) X′ε = Q⁻¹ (1/√n) X′ε.

20 OLS Asymptotic Distribution
Let's try and find the limiting distribution of (1/√n) X′ε = √n w̄.
E[w_i] = 0; what about Var[w_i]? Since w_i = x_i ε_i,
Var[x_i ε_i] = E[x_i ε_i² x_i′] = σ² E[x_i x_i′] = σ² Q.
Applying the central limit theorem:
(1/√n) X′ε →d N[0, σ²Q].

21 OLS Asymptotic Distribution: Putting the Pieces Together
Q⁻¹ (1/√n) X′ε →d N[Q⁻¹ · 0, Q⁻¹(σ²Q)Q⁻¹]
√n(β̂ − β) →d N[0, σ²Q⁻¹]
Theorem (Asymptotic distribution of β̂): If the ε_i are i.i.d. with mean 0 and variance σ², then
β̂ ~a N[β, (σ²/n) Q⁻¹].
In practice, of course, we estimate (1/n) Q⁻¹ with (X′X)⁻¹ and σ² with û′û/(n − k).

22 OLS Asymptotic Distribution
So, we have an (asymptotically) normal distribution for β̂, but it's not because we assumed the ε_i are normally distributed: it's coming from the central limit theorem.

23 OLS Asymptotics: Practice
Turn to your neighbor: let's work out the asymptotics for OLS in the simplest case: y_i = β_0 + β_1 x_i1 + u_i.
1. Write β̂ = (β̂_0, β̂_1)′ = (X′X)⁻¹X′y in terms of the raw observations y_i, x_i1 and their sample averages.
1.1 What is X′X in this case?
1.2 Express (X′X)⁻¹ in terms of x̄_1, (1/n)Σᵢ(x_i1 − x̄_1)², and (1/n)Σᵢ x_i1².
1.3 Express X′y in terms of Σᵢ x_i1 y_i and ȳ.
1.4 Express β̂_1 in terms of x̄_1, ȳ, (1/n)Σᵢ(x_i1 − x̄_1)², and (1/n)Σᵢ(x_i1 − x̄_1)(y_i − ȳ).
2. Write plim β̂_1 in terms of β_1, Cov(x_1, u) and Var(x_1), and show it is = β_1.

24 OLS Asymptotics: Practice
3. We can write
√n(β̂_1 − β_1) = [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ [(1/√n)Σᵢ(x_i1 − x̄_1)(u_i − ū)].
Under what condition is the limiting distribution of √n(β̂_1 − β_1) the same as that of [Var(x_1)]⁻¹ [(1/√n)Σᵢ(x_i1 − x̄_1)(u_i − ū)]?
4. Use the Law of Iterated Expectations (LIE) and the exogeneity assumption to find E[(x_i1 − x̄_1)(u_i − ū)].
5. Use the LIE and the homoskedasticity assumption to find Var[(x_i1 − x̄_1)(u_i − ū)].
6. Using the central limit theorem, what is the limiting distribution of (1/√n)Σᵢ(x_i1 − x̄_1)(u_i − ū)?
7. What is the asymptotic distribution of β̂_1?

25 OLS Asymptotics: Practice - Solutions
1.1: To find X′X, first note that the n×2 matrix X stacks rows (1, x_i1):
X = [ 1 x_11 ; 1 x_21 ; ... ; 1 x_n1 ]
so
X′X (2×2) = [ n  Σᵢ x_i1 ; Σᵢ x_i1  Σᵢ x_i1² ] = n [ 1  x̄_1 ; x̄_1  (1/n)Σᵢ x_i1² ].

26 OLS Asymptotics: Practice - Solutions
1.2: Recall that by definition, for a 2×2 matrix,
(X′X)⁻¹ = (1/|X′X|) [ (X′X)₂₂  −(X′X)₁₂ ; −(X′X)₂₁  (X′X)₁₁ ].
To find (X′X)⁻¹, start by finding the determinant |X′X|:
|X′X| = n Σᵢ x_i1² − (Σᵢ x_i1)² = n² [(1/n)Σᵢ x_i1² − x̄_1²] = n² (1/n)Σᵢ(x_i1 − x̄_1)²
which means that we can write (X′X)⁻¹ as
(X′X)⁻¹ = (1/n) [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ [ (1/n)Σᵢ x_i1²  −x̄_1 ; −x̄_1  1 ].

27 OLS Asymptotics: Practice - Solutions
1.3: First note that y = (y_1, y_2, ..., y_n)′, so
X′y = [ Σᵢ y_i ; Σᵢ x_i1 y_i ] = n [ ȳ ; (1/n)Σᵢ x_i1 y_i ].
1.4: Putting the preceding pieces together (the factors of n cancel):
(β̂_0, β̂_1)′ = (X′X)⁻¹X′y
= [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ [ (1/n)Σᵢ x_i1²  −x̄_1 ; −x̄_1  1 ] [ ȳ ; (1/n)Σᵢ x_i1 y_i ]
= [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ [ (1/n)Σᵢ x_i1² ȳ − x̄_1 (1/n)Σᵢ x_i1 y_i ; (1/n)Σᵢ x_i1 y_i − x̄_1 ȳ ]
= [ ȳ − β̂_1 x̄_1 ; ((1/n)Σᵢ(x_i1 − x̄_1)²)⁻¹ ((1/n)Σᵢ(x_i1 − x̄_1)(y_i − ȳ)) ].

28 OLS Asymptotics: Practice - Solutions
2: Insert y_i = β_0 + β_1 x_i1 + u_i and ȳ = β_0 + β_1 x̄_1 + ū into the expression for β̂_1 from above to get
β̂_1 = [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ (1/n)Σᵢ(x_i1 − x̄_1)(β_1(x_i1 − x̄_1) + u_i − ū)
= [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ [β_1 (1/n)Σᵢ(x_i1 − x̄_1)² + (1/n)Σᵢ(x_i1 − x̄_1)(u_i − ū)]
= β_1 + [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ (1/n)Σᵢ(x_i1 − x̄_1)(u_i − ū).
Using the law of large numbers,
plim (1/n)Σᵢ(x_i1 − x̄_1)² = E[(x_i1 − E[x_1])²] = Var[x_1]
plim (1/n)Σᵢ(x_i1 − x̄_1)(u_i − ū) = E[(x_i1 − E[x_1])(u_i − E[u])] = Cov(x_1, u)
and so
plim β̂_1 = β_1 + Cov(x_1, u)/Var[x_1] = β_1,
since exogeneity implies Cov(x_1, u) = 0.

29 OLS Asymptotics: Practice - Solutions
3. First use the rule of convergence in probability that if plim x_n = c, then plim g(x_n) = g(plim x_n). From this we see that if plim (1/n)Σᵢ(x_i1 − x̄_1)² = Var(x_1), then
plim [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ = [Var(x_1)]⁻¹.
Next, we use the rule of convergence in distribution that if plim x_n = c and y_n →d y, then x_n y_n →d cy. Since plim [(1/n)Σᵢ(x_i1 − x̄_1)²]⁻¹ = [Var(x_1)]⁻¹, the limiting distribution of √n(β̂_1 − β_1) is the same as that of
[Var(x_1)]⁻¹ (1/√n)Σᵢ(x_i1 − x̄_1)(u_i − ū).

30 OLS Asymptotics: Practice - Solutions
4. By the law of iterated expectations,
E[(x_i1 − x̄_1)(u_i − ū)] = E_x1[E[(x_i1 − x̄_1)(u_i − ū) | x_1]] = E_x1[(x_i1 − x̄_1) E[(u_i − ū) | x_1]] = E_x1[(x_i1 − x̄_1) · 0] = 0,
where E[(u_i − ū) | x_1] = 0 by exogeneity.
5. Using the answer to part 4,
Var[(x_i1 − x̄_1)(u_i − ū)] = E[((x_i1 − x̄_1)(u_i − ū))²] − (E[(x_i1 − x̄_1)(u_i − ū)])² = E[((x_i1 − x̄_1)(u_i − ū))²]
and using the LIE,
E[((x_i1 − x̄_1)(u_i − ū))²] = E_x1[E[((x_i1 − x̄_1)(u_i − ū))² | x_1]] = E_x1[(x_i1 − x̄_1)² E[(u_i − ū)² | x_1]] = E_x1[(x_i1 − x̄_1)² Var(u_i | x_1)].

31 OLS Asymptotics: Practice - Solutions
By the homoskedasticity assumption, Var(u_i | x_1) = σ², so
E[((x_i1 − x̄_1)(u_i − ū))²] = E_x1[(x_i1 − x̄_1)² σ²] = σ² E_x1[(x_i1 − x̄_1)²] = σ² Var[x_1].
6. Parts 4 and 5 showed that (x_i1 − x̄_1)(u_i − ū) has a finite mean and variance, so we can apply the central limit theorem to see that
(1/√n)Σᵢ(x_i1 − x̄_1)(u_i − ū) →d N[0, σ² Var[x_1]].
7. Using the answers to parts 3-6,
√n(β̂_1 − β_1) →d N[0, σ² (Var[x_1])⁻¹]
and so
β̂_1 ~a N[β_1, (σ²/n) (1/Var(x_1))].

32 Outline
Asymptotics | OLS Asymptotics | OLS Asymptotic Inference | Measurement Error | Omitted Variable Bias | Bad Control

33 Asymptotic Inference with OLS: The t statistic
Above, we showed that β̂ ~a N[β, (σ²/n) Q⁻¹], where Q = plim (X′X/n).
Of course, we don't know σ², so we need an estimator of it. We will use
s² = û′û/(n − k)
where û = y − Xβ̂ = X(β − β̂) + u. First rewrite this as
s² = [n/(n − k)] (û′û/n).

34 Asymptotic Inference with OLS: The t statistic
Then break open û′û/n:
û′û/n = (1/n)[X(β − β̂) + u]′[X(β − β̂) + u]
= (1/n)[(β − β̂)′X′X(β − β̂) + (β − β̂)′X′u + u′X(β − β̂) + u′u].
Since plim (β − β̂) = 0, the plims of the first three terms are 0, and so plim û′û/n = plim u′u/n.
u′u/n = (1/n)Σᵢ u_i², so by the law of large numbers, plim u′u/n = E[u_i²] = Var[u_i] = σ².
Combine this with the fact that n/(n − k) → 1 as n → ∞, and we see that plim s² = σ².

35 Asymptotic Inference with OLS: The t statistic
Rewrite our familiar t statistic as
t_k = √n(β̂_k − β_k⁰) / √(s² [(X′X/n)⁻¹]_kk).
Recall that if we assume ε_i ~ N(0, σ²), then t_k ~ t[n − k]. Using the above results, the denominator has
plim s² [(X′X/n)⁻¹]_kk = σ² [Q⁻¹]_kk
and under the null hypothesis (β_k = β_k⁰), the numerator converges in distribution to N[0, σ² [Q⁻¹]_kk]. Therefore, combining these, we see that
t_k →d N[0, 1].
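This result can be checked by simulation: with deliberately non-normal errors, a test based on the ±1.96 normal critical values should still reject a true null about 5% of the time in a moderately large sample. A sketch (the design with a true slope of zero, n = 200, and centered-exponential errors are illustrative choices):

```python
import math, random, statistics

random.seed(4)
n, reps = 200, 3000

def tstat():
    # true model: y = 0 + 0*x + u, with skewed (non-normal) errors
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.expovariate(1.0) - 1.0 for _ in range(n)]
    xb, yb = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - xb) ** 2 for xi in x)
    b1 = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sxx
    b0 = yb - b1 * xb
    s2 = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b1 / math.sqrt(s2 / sxx)  # t statistic for H0: beta1 = 0

# rejection rate at the N(0,1) critical value should be near the nominal 5%
rej = sum(abs(tstat()) > 1.96 for _ in range(reps)) / reps
print(rej)
```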

36 Asymptotic Inference with OLS: The F statistic
What about testing multiple (linear) hypotheses? To test the set of J hypotheses Rβ − q = 0, we study the asymptotic distribution of the Wald statistic
W = JF = (Rβ̂ − q)′ [R s² (X′X)⁻¹ R′]⁻¹ (Rβ̂ − q).
If √n(β̂ − β) →d N[0, σ²Q⁻¹] and the null hypothesis (Rβ − q = 0) is true, then
√n R(β̂ − β) = √n (Rβ̂ − q) →d N[0, R(σ²Q⁻¹)R′].
As shorthand, let z_n = √n (Rβ̂ − q) and P = R(σ²Q⁻¹)R′, so we can restate this as z_n →d N[0, P].

37 Asymptotic Inference with OLS: The F statistic
The inverse square root of a matrix P is another matrix T such that T² = P⁻¹. We call this matrix P^(-1/2).
Since z_n →d N[0, P], we have P^(-1/2) z_n →d N[0, P^(-1/2) P P^(-1/2)] = N[0, I].
Recall that if x_n →d x, then g(x_n) →d g(x), and that if y_i ~ i.i.d. N(0, 1), then Σᵢ₌₁ᴶ y_i² ~ χ²(J). Together, these imply that
(P^(-1/2) z_n)′(P^(-1/2) z_n) = z_n′ P⁻¹ z_n →d χ²(J).

38 Asymptotic Inference with OLS: The F statistic
Putting all of this together, we have shown that
n (Rβ̂ − q)′ [R(σ²Q⁻¹)R′]⁻¹ (Rβ̂ − q) →d χ²(J).
Since plim s²(X′X/n)⁻¹ = σ²Q⁻¹, it is also the case that the Wald statistic W →d χ²(J).
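The χ²(J) limit is just the "sum of J squared standard normals" fact used above. A tiny sketch (J = 2 is an illustrative choice) simulates z′z for a standardized J-vector and checks the 95% critical value of χ²(2), which is about 5.99:

```python
import random

random.seed(5)
reps, J = 20000, 2

# z'z for z a J-vector of i.i.d. standard normals is chi-square(J)
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(J)) for _ in range(reps)]
share = sum(d > 5.99 for d in draws) / reps  # 5.99 ~ 95% critical value of chi2(2)
print(share)
```

A correctly sized Wald test compares W to exactly this kind of critical value, so the share above should be close to 0.05.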

39 Outline
Asymptotics | OLS Asymptotics | OLS Asymptotic Inference | Measurement Error | Omitted Variable Bias | Bad Control

40 Measurement error
The most controversial assumption of OLS is exogeneity: E[ε_i | X] = 0. Let's think through the consequences of measurement error for the exogeneity assumption and the OLS estimator.
There are 2 basic forms of measurement error:
1. Measurement error in the outcome variable y_i
2. Measurement error in one (or more) of the independent variables

41 Measurement error in the dependent variable
Imagine that the true model is y_i* = x_i′β + ε_i. However, we don't have data on y_i*; instead we observe y_i = y_i* + w_i, where w_i is uncorrelated with x_i and ε_i and has E[w_i] = 0. Then we can write the model as
y_i = x_i′β + ε_i + w_i = x_i′β + v_i.
In this model, if E[ε_i | X] = 0, then E[v_i | X] = 0 also: the exogeneity assumption is still satisfied.

42 Measurement error in the dependent variable
With measurement error in the dependent variable, the OLS estimator is still consistent:
plim β̂ = β + Q⁻¹ plim (X′v/n) = β + Q⁻¹ plim (X′(ε + w)/n) = β.
But the OLS estimator becomes noisier; its asymptotic variance is
Avar[β̂] = (Var[v]/n) Q⁻¹ = (Var[ε + w]/n) Q⁻¹ = ((σ_ε² + σ_w²)/n) Q⁻¹.

43 Measurement error in an independent variable
If there is measurement error in the explanatory variables, we are in much more trouble. Start with the case of a single explanatory variable. Suppose the true model is
y_i = β_0 + β_1 x_1i* + ε_i
but we only have data on x_1i = x_1i* + e_1i, so that we can rewrite the model as
y_i = β_0 + β_1 x_1i + (ε_i − β_1 e_1i) = β_0 + β_1 x_1i + v_i.
Clearly E[v_i | x_1i] ≠ 0, since both x_1i and v_i contain e_1i. So what happens to the OLS estimator?

44 Measurement error in an independent variable
The classical errors-in-variables assumption is that Cov(x_1*, e_1) = 0 and E[e_1] = 0. We can derive the inconsistency in the OLS estimate β̂_1, starting from
plim β̂_1 = β_1 + Cov(x_1, v)/Var(x_1).
The denominator:
Var(x_1) = Var(x_1* + e_1) = Var(x_1*) + Var(e_1) + 2 Cov(x_1*, e_1) = σ²_{x1*} + σ²_{e1}, since Cov(x_1*, e_1) = 0.
And the numerator:
Cov(x_1, v) = Cov(x_1* + e_1, ε − β_1 e_1) = Cov(e_1, −β_1 e_1) = −β_1 σ²_{e1} ≠ 0!

45 Measurement error in an independent variable
Putting this all together,
plim β̂_1 = β_1 − β_1 σ²_{e1} / (σ²_{x1*} + σ²_{e1}) = β_1 [1 − σ²_{e1}/(σ²_{x1*} + σ²_{e1})] = β_1 σ²_{x1*} / (σ²_{x1*} + σ²_{e1}).
Since 0 < σ²_{x1*}/(σ²_{x1*} + σ²_{e1}) < 1, β̂_1 is biased towards 0: attenuation bias.
The amount of attenuation bias depends on σ²_{e1}/σ²_{x1*}: if σ²_{x1*} is large relative to σ²_{e1}, the attenuation bias is small; if σ²_{x1*} is small relative to σ²_{e1}, the attenuation bias is large.
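The attenuation formula is easy to verify by simulation. A sketch (β_1 = 2 and equal variances σ²_{x1*} = σ²_{e1} = 1 are illustrative choices, giving an attenuation factor of 1/2):

```python
import random, statistics

random.seed(6)
n, beta1 = 200000, 2.0
sx, se = 1.0, 1.0  # sd of true x* and of the measurement error e

xstar = [random.gauss(0, sx) for _ in range(n)]
x = [xs + random.gauss(0, se) for xs in xstar]  # observed, mismeasured regressor
y = [beta1 * xs + random.gauss(0, 1) for xs in xstar]

xb, yb = statistics.fmean(x), statistics.fmean(y)
b1 = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sum((xi - xb) ** 2 for xi in x)
# plim b1 = beta1 * sx^2 / (sx^2 + se^2) = 2 * (1/2) = 1.0, not 2.0
print(b1)
```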

46 Measurement error with several independent variables
Let us generalize the above reasoning:
y = X*β + ε,  X = X* + U.
Assume that E[X*′U] = 0 and E[U] = 0 (classical measurement error). We estimate the model
y = Xβ + V,  V = ε − Uβ.
The analogs of the above are:
plim (X′X/n) = plim ((X* + U)′(X* + U)/n) = Q* + Σ_uu
where Q* = plim (X*′X*/n) and Σ_uu = plim (U′U/n).

47 Measurement error with several independent variables
And,
plim (X′V/n) = plim ((X* + U)′(ε − Uβ)/n) = −Σ_uu β.
As a result,
plim β̂ = β + [plim (X′X/n)]⁻¹ plim (X′V/n) = β − (Q* + Σ_uu)⁻¹ Σ_uu β = (Q* + Σ_uu)⁻¹ Q* β.
In general, all bets are off: all the coefficients are biased, and signing the biases is hard.

48 Measurement error with several independent variables
What if only 1 of the variables is measured with error? Is that coefficient attenuated but the rest fine? This is like saying that Σ_uu = diag(σ_u², 0, ..., 0). It can be shown that now
plim β̂_1 = β_1 / (1 + σ_u² q*¹¹)
where q*¹¹ is the (1,1) element of Q*⁻¹: there is attenuation bias in the coefficient on the mismeasured variable.
It can also be shown that for k ≠ 1,
plim β̂_k = β_k − β_1 [σ_u² q*ᵏ¹ / (1 + σ_u² q*¹¹)]
where q*ᵏ¹ is the (k,1) element of Q*⁻¹. This inconsistency has unknown sign, but is not, in general, 0.

49 Measurement Error: Summing up
In a regression in which only 1 variable is measured with error: the coefficient on that variable is attenuated (biased towards 0), and all the other coefficients are biased too, but in unknown directions.
In a regression in which more than 1 variable is measured with error: all the coefficients are biased in unknown directions. All bets are off.

50 Outline
Asymptotics | OLS Asymptotics | OLS Asymptotic Inference | Measurement Error | Omitted Variable Bias | Bad Control

51 Omitted Variable Bias again
Let's partition the X matrix into two parts X_1 and X_2: X = [X_1 X_2].
Revisiting omitted variables: if the real model is y = X_1β_1 + X_2β_2 + ε, but we omit X_2, then
β̂_1 = β_1 + (X_1′X_1)⁻¹ X_1′X_2 β_2 + (X_1′X_1)⁻¹ X_1′ε.
Asymptotically:
plim β̂_1 = β_1 + [plim (X_1′X_1/n)]⁻¹ plim (X_1′X_2/n) β_2 + [plim (X_1′X_1/n)]⁻¹ plim (X_1′ε/n)
= β_1 + Q_1⁻¹ E[x_1i x_2i′] β_2.
So even asymptotically, OLS is inconsistent unless E[x_1i x_2i′] = 0 (X_1 and X_2 uncorrelated) or β_2 = 0 (the true model doesn't include X_2).

52 Omitted Variable Bias: Practice
Turn to your neighbor: let's consider the simplest possible case. The true model is
y_i = β_0 + β_1 x_1i + β_2 x_2i + ε_i   (1)
but instead we run the OLS regression ignoring x_2i:
y_i = β_0 + β_1 x_1i + v_i   (2)
1. Recall from above that in this case we can write
β̂_1 = [(1/n)Σᵢ(x_1i − x̄_1)²]⁻¹ [(1/n)Σᵢ(x_1i − x̄_1)(y_i − ȳ)].
Substitute in (1) to express β̂_1 in terms of β_1, β_2, the x_1i, x_2i and ε_i, and x̄_1 and x̄_2.
2. Apply the law of large numbers to the result to find plim β̂_1.

53 Omitted Variable Bias: Practice
3. Recall that the plim of the OLS estimator for β_1 in (2) can be written as
plim β̂_1 = β_1 + Cov(x_1, v)/Var(x_1).
3.1. Write v_i in terms of objects in equation (1).
3.2. Substitute this into the expression for plim β̂_1 to find plim β̂_1 in terms of objects in equation (1).
4. Imagine that y is (log) earnings, x_1 is years of schooling, and x_2 is intelligence.
4.1. What is the likely sign of Cov(x_1, x_2)?
4.2. What is the likely sign of β_2?
4.3. What is the likely sign of the omitted variable bias?
5. In our previous example, y is health status, x_1 is going to hospital, and x_2 is y_0i, people's latent health status.
5.1. What is the likely sign of Cov(x_1, x_2)?
5.2. What is the likely sign of β_2?
5.3. What is the likely sign of the omitted variable bias?

54 Omitted Variable Bias: Practice
1. Let's focus first on the numerator:
(1/n)Σᵢ(x_1i − x̄_1)(y_i − ȳ) = (1/n)Σᵢ(x_1i − x̄_1)[(β_0 + β_1 x_1i + β_2 x_2i + ε_i) − (β_0 + β_1 x̄_1 + β_2 x̄_2 + ε̄)]
= (1/n)Σᵢ(x_1i − x̄_1)[β_1(x_1i − x̄_1) + β_2(x_2i − x̄_2) + (ε_i − ε̄)]
= β_1 (1/n)Σᵢ(x_1i − x̄_1)² + β_2 (1/n)Σᵢ(x_1i − x̄_1)(x_2i − x̄_2) + (1/n)Σᵢ(x_1i − x̄_1)(ε_i − ε̄).
Combining this with the denominator:
β̂_1 = β_1 + β_2 [Σᵢ(x_1i − x̄_1)(x_2i − x̄_2) / Σᵢ(x_1i − x̄_1)²] + [Σᵢ(x_1i − x̄_1)(ε_i − ε̄) / Σᵢ(x_1i − x̄_1)²].

55 Omitted Variable Bias: Practice
2. Applying the law of large numbers to each part:
plim (1/n)Σᵢ(x_1i − x̄_1)² = E[(x_1i − E[x_1])²] = Var[x_1]
plim (1/n)Σᵢ(x_1i − x̄_1)(x_2i − x̄_2) = E[(x_1i − E[x_1])(x_2i − E[x_2])] = Cov(x_1, x_2)
plim (1/n)Σᵢ(x_1i − x̄_1)(ε_i − ε̄) = E[(x_1i − E[x_1])(ε_i − E[ε])] = Cov(x_1, ε)
Combining these,
plim β̂_1 = β_1 + β_2 Cov(x_1, x_2)/Var(x_1) + Cov(x_1, ε)/Var(x_1) = β_1 + β_2 Cov(x_1, x_2)/Var(x_1),
since exogeneity implies Cov(x_1, ε) = 0.
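The omitted variable bias formula can be verified numerically. A sketch (the parameter values β_1 = 1, β_2 = 0.5 and the design with Cov(x_1, x_2) = 0.8 and Var(x_1) = 1 are illustrative choices) runs the short regression of y on x_1 alone:

```python
import random, statistics

random.seed(7)
n, beta1, beta2 = 200000, 1.0, 0.5

x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + random.gauss(0, 0.6) for a in x1]  # Cov(x1, x2) = 0.8, Var(x1) = 1
y = [beta1 * a + beta2 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

# short regression of y on x1 alone (x2 omitted)
x1b, yb = statistics.fmean(x1), statistics.fmean(y)
b1 = sum((a - x1b) * (c - yb) for a, c in zip(x1, y)) / sum((a - x1b) ** 2 for a in x1)
# plim b1 = beta1 + beta2 * Cov(x1, x2) / Var(x1) = 1 + 0.5 * 0.8 = 1.4
print(b1)
```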

56 Omitted Variable Bias: Practice
3. Recall that the plim of the OLS estimator for β_1 in (2) can be written as
plim β̂_1 = β_1 + Cov(x_1, v)/Var(x_1).
3.1. Write v_i in terms of objects in equation (1): v_i = ε_i + β_2 x_2i.
3.2. Substitute this into the expression for plim β̂_1:
plim β̂_1 = β_1 + Cov(x_1, ε + β_2 x_2)/Var(x_1) = β_1 + Cov(x_1, ε)/Var(x_1) + Cov(x_1, β_2 x_2)/Var(x_1) = β_1 + β_2 Cov(x_1, x_2)/Var(x_1).

57 Omitted Variable Bias: Practice
4. Imagine that y is (log) earnings, x_1 is years of schooling, and x_2 is intelligence.
4.1. What is the likely sign of Cov(x_1, x_2)? > 0
4.2. What is the likely sign of β_2? > 0
4.3. What is the likely sign of the omitted variable bias? > 0
5. In our previous example, y is health status, x_1 is going to hospital, and x_2 is y_0i, people's latent health status.
5.1. What is the likely sign of Cov(x_1, x_2)? < 0
5.2. What is the likely sign of β_2? > 0
5.3. What is the likely sign of the omitted variable bias? < 0

58 Outline
Asymptotics | OLS Asymptotics | OLS Asymptotic Inference | Measurement Error | Omitted Variable Bias | Bad Control

59 Bad Control
Given the difficulty of ruling out omitted variable bias, you might think the more controls the better, right? Sadly no; there are 2 kinds of ways this can go wrong:
Bad Controls
1. Putting variables that are themselves affected by the variable of interest on the right-hand side of the regression.
2. Putting bad proxies for unmeasured variables on the right-hand side of the regression.

60 Bad Control 1: Outcome Variables on the RHS
Imagine the following exercise: we are trying to estimate the effect of going to college on earnings. People can work in 2 occupations: blue collar and white collar. Clearly occupation is correlated with both college and earnings, so should it be a control?
Problem: college affects both occupational choice and earnings. Let's use our potential outcomes framework to study this:
y_i = C_i y_1i + (1 − C_i) y_0i
W_i = C_i W_1i + (1 − C_i) W_0i
where C_i = 1 if you go to college (0 otherwise), W_i = 1 if white collar (0 if blue collar), and y_1i, y_0i, W_1i, W_0i are the potential outcomes.

61 Bad Control 1: Outcome Variables on the RHS
Let's assume C_i is randomly assigned, so that it is independent of all the potential outcomes: {y_1i, y_0i, W_1i, W_0i} ⊥ C_i.
So, we can estimate the effect of college on earnings and occupational choice, no problem:
E[y_i | C_i = 1] − E[y_i | C_i = 0] = E[y_1i − y_0i]
E[W_i | C_i = 1] − E[W_i | C_i = 0] = E[W_1i − W_0i]
The problem is that the comparison of earnings conditional on W_i is not the causal effect of college conditional on occupation, because of a selection problem. College changes the composition of people in each occupation: college affects white collar earnings, but it also affects who becomes a white collar worker.

62 Bad Control 1: Outcome Variables on the RHS
Imagine regressing y_i on C_i in the subsample of white collar workers:
E[y_i | W_i = 1, C_i = 1] − E[y_i | W_i = 1, C_i = 0] = E[y_1i | W_i = 1, C_i = 1] − E[y_0i | W_i = 1, C_i = 0].
Since C_i is randomly assigned and independent of the potential outcomes,
E[y_1i | W_i = 1, C_i = 1] − E[y_0i | W_i = 1, C_i = 0] = E[y_1i | W_1i = 1] − E[y_0i | W_0i = 1]
= E[y_1i − y_0i | W_1i = 1]  (causal effect)
+ E[y_0i | W_1i = 1] − E[y_0i | W_0i = 1]  (selection bias).
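The selection bias term can be made concrete in a simulation. In the sketch below all the numbers are illustrative choices: college is randomly assigned, the true earnings effect is 1.0, and college lowers the ability threshold for white collar work. The raw college comparison recovers the causal effect, but the comparison within white collar workers does not:

```python
import random, statistics

random.seed(8)
n = 200000

y, w, c = [], [], []
for _ in range(n):
    a = random.gauss(0, 1)        # latent ability
    ci = random.random() < 0.5    # college randomly assigned
    w0, w1 = a > 1.0, a > 0.0     # college pushes more people into white collar
    y0 = a + random.gauss(0, 1)
    y1 = y0 + 1.0                 # true causal effect of college on earnings = 1
    c.append(ci)
    w.append(w1 if ci else w0)
    y.append(y1 if ci else y0)

overall = (statistics.fmean(yi for yi, ci in zip(y, c) if ci)
           - statistics.fmean(yi for yi, ci in zip(y, c) if not ci))
within_wc = (statistics.fmean(yi for yi, wi, ci in zip(y, w, c) if wi and ci)
             - statistics.fmean(yi for yi, wi, ci in zip(y, w, c) if wi and not ci))
print(overall, within_wc)
```

Here the white collar workers without college are a much more positively selected (higher ability) group than those with college, so the "controlled" comparison is badly biased even though the true effect is the same for everyone.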

63 Bad Control 2: Bad Proxies for Unobserved RHS Variables
Again, let's take a concrete example. Say you want to measure the effect of schooling S_i on earnings y_i:
y_i = α + ρS_i + γa_i + ε_i.
If we don't control at all for ability a_i, we have omitted variable bias:
plim ρ̂ = ρ + γ Cov(S, a)/Var(S).
Imagine we had the scores on an IQ test at age 14, before people make any schooling choices (assume everyone completes 8th grade): a_ei. Then controlling for a_ei fixes the problem, since E[S_i ε_i] = E[a_ei ε_i] = 0.

64 Bad Control 2: Bad Proxies for Unobserved RHS Variables
These kinds of measures are very hard to come by, though. Imagine instead that you had test scores on a test employers use to screen applicants, a_li. The problem is that this ability measure is taken after schooling choices have been made. If the measure is affected by schooling, then we have a problem:
a_li = π_0 + π_1 S_i + π_2 a_i.
Substituting out a_i, we see that
y_i = (α − γπ_0/π_2) + (ρ − γπ_1/π_2) S_i + (γ/π_2) a_li + ε_i.

65 Bad Control 2: Bad Proxies for Unobserved RHS Variables
So what can we do? In this example γ > 0, π_1 > 0, and π_2 > 0, so ρ − γπ_1/π_2 < ρ.
We can regress a_li on S_i to get a sense of how large π_1 is likely to be. If π_1 is small, maybe it's not too much of a problem.
Also, note that the regression without the ability measure overestimates ρ, while the regression controlling for a_li underestimates ρ, so we can put bounds on ρ.


More information

6.3 Testing Series With Positive Terms

6.3 Testing Series With Positive Terms 6.3. TESTING SERIES WITH POSITIVE TERMS 307 6.3 Testig Series With Positive Terms 6.3. Review of what is kow up to ow I theory, testig a series a i for covergece amouts to fidig the i= sequece of partial

More information

Lecture 2: Monte Carlo Simulation

Lecture 2: Monte Carlo Simulation STAT/Q SCI 43: Itroductio to Resamplig ethods Sprig 27 Istructor: Ye-Chi Che Lecture 2: ote Carlo Simulatio 2 ote Carlo Itegratio Assume we wat to evaluate the followig itegratio: e x3 dx What ca we do?

More information

7.1 Convergence of sequences of random variables

7.1 Convergence of sequences of random variables Chapter 7 Limit Theorems Throughout this sectio we will assume a probability space (, F, P), i which is defied a ifiite sequece of radom variables (X ) ad a radom variable X. The fact that for every ifiite

More information

Efficient GMM LECTURE 12 GMM II

Efficient GMM LECTURE 12 GMM II DECEMBER 1 010 LECTURE 1 II Efficiet The estimator depeds o the choice of the weight matrix A. The efficiet estimator is the oe that has the smallest asymptotic variace amog all estimators defied by differet

More information

Large Sample Theory. Convergence. Central Limit Theorems Asymptotic Distribution Delta Method. Convergence in Probability Convergence in Distribution

Large Sample Theory. Convergence. Central Limit Theorems Asymptotic Distribution Delta Method. Convergence in Probability Convergence in Distribution Large Sample Theory Covergece Covergece i Probability Covergece i Distributio Cetral Limit Theorems Asymptotic Distributio Delta Method Covergece i Probability A sequece of radom scalars {z } = (z 1,z,

More information

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f.

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f. Lecture 5 Let us give oe more example of MLE. Example 3. The uiform distributio U[0, ] o the iterval [0, ] has p.d.f. { 1 f(x =, 0 x, 0, otherwise The likelihood fuctio ϕ( = f(x i = 1 I(X 1,..., X [0,

More information

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4 MATH 30: Probability ad Statistics 9. Estimatio ad Testig of Parameters Estimatio ad Testig of Parameters We have bee dealig situatios i which we have full kowledge of the distributio of a radom variable.

More information

CEU Department of Economics Econometrics 1, Problem Set 1 - Solutions

CEU Department of Economics Econometrics 1, Problem Set 1 - Solutions CEU Departmet of Ecoomics Ecoometrics, Problem Set - Solutios Part A. Exogeeity - edogeeity The liear coditioal expectatio (CE) model has the followig form: We would like to estimate the effect of some

More information

Lecture 19: Convergence

Lecture 19: Convergence Lecture 19: Covergece Asymptotic approach I statistical aalysis or iferece, a key to the success of fidig a good procedure is beig able to fid some momets ad/or distributios of various statistics. I may

More information

17. Joint distributions of extreme order statistics Lehmann 5.1; Ferguson 15

17. Joint distributions of extreme order statistics Lehmann 5.1; Ferguson 15 17. Joit distributios of extreme order statistics Lehma 5.1; Ferguso 15 I Example 10., we derived the asymptotic distributio of the maximum from a radom sample from a uiform distributio. We did this usig

More information

Lecture 3. Properties of Summary Statistics: Sampling Distribution

Lecture 3. Properties of Summary Statistics: Sampling Distribution Lecture 3 Properties of Summary Statistics: Samplig Distributio Mai Theme How ca we use math to justify that our umerical summaries from the sample are good summaries of the populatio? Lecture Summary

More information

7.1 Convergence of sequences of random variables

7.1 Convergence of sequences of random variables Chapter 7 Limit theorems Throughout this sectio we will assume a probability space (Ω, F, P), i which is defied a ifiite sequece of radom variables (X ) ad a radom variable X. The fact that for every ifiite

More information

Random Variables, Sampling and Estimation

Random Variables, Sampling and Estimation Chapter 1 Radom Variables, Samplig ad Estimatio 1.1 Itroductio This chapter will cover the most importat basic statistical theory you eed i order to uderstad the ecoometric material that will be comig

More information

Lecture 22: Review for Exam 2. 1 Basic Model Assumptions (without Gaussian Noise)

Lecture 22: Review for Exam 2. 1 Basic Model Assumptions (without Gaussian Noise) Lecture 22: Review for Exam 2 Basic Model Assumptios (without Gaussia Noise) We model oe cotiuous respose variable Y, as a liear fuctio of p umerical predictors, plus oise: Y = β 0 + β X +... β p X p +

More information

Slide Set 13 Linear Model with Endogenous Regressors and the GMM estimator

Slide Set 13 Linear Model with Endogenous Regressors and the GMM estimator Slide Set 13 Liear Model with Edogeous Regressors ad the GMM estimator Pietro Coretto pcoretto@uisa.it Ecoometrics Master i Ecoomics ad Fiace (MEF) Uiversità degli Studi di Napoli Federico II Versio: Friday

More information

Properties and Hypothesis Testing

Properties and Hypothesis Testing Chapter 3 Properties ad Hypothesis Testig 3.1 Types of data The regressio techiques developed i previous chapters ca be applied to three differet kids of data. 1. Cross-sectioal data. 2. Time series data.

More information

x iu i E(x u) 0. In order to obtain a consistent estimator of β, we find the instrumental variable z which satisfies E(z u) = 0. z iu i E(z u) = 0.

x iu i E(x u) 0. In order to obtain a consistent estimator of β, we find the instrumental variable z which satisfies E(z u) = 0. z iu i E(z u) = 0. 27 However, β MM is icosistet whe E(x u) 0, i.e., β MM = (X X) X y = β + (X X) X u = β + ( X X ) ( X u ) \ β. Note as follows: X u = x iu i E(x u) 0. I order to obtai a cosistet estimator of β, we fid

More information

Statistical Properties of OLS estimators

Statistical Properties of OLS estimators 1 Statistical Properties of OLS estimators Liear Model: Y i = β 0 + β 1 X i + u i OLS estimators: β 0 = Y β 1X β 1 = Best Liear Ubiased Estimator (BLUE) Liear Estimator: β 0 ad β 1 are liear fuctio of

More information

1 General linear Model Continued..

1 General linear Model Continued.. Geeral liear Model Cotiued.. We have We kow y = X + u X o radom u v N(0; I ) b = (X 0 X) X 0 y E( b ) = V ar( b ) = (X 0 X) We saw that b = (X 0 X) X 0 u so b is a liear fuctio of a ormally distributed

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2016 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Discrete Mathematics for CS Spring 2008 David Wagner Note 22

Discrete Mathematics for CS Spring 2008 David Wagner Note 22 CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 22 I.I.D. Radom Variables Estimatig the bias of a coi Questio: We wat to estimate the proportio p of Democrats i the US populatio, by takig

More information

Last Lecture. Wald Test

Last Lecture. Wald Test Last Lecture Biostatistics 602 - Statistical Iferece Lecture 22 Hyu Mi Kag April 9th, 2013 Is the exact distributio of LRT statistic typically easy to obtai? How about its asymptotic distributio? For testig

More information

Convergence of random variables. (telegram style notes) P.J.C. Spreij

Convergence of random variables. (telegram style notes) P.J.C. Spreij Covergece of radom variables (telegram style otes).j.c. Spreij this versio: September 6, 2005 Itroductio As we kow, radom variables are by defiitio measurable fuctios o some uderlyig measurable space

More information

Geometry of LS. LECTURE 3 GEOMETRY OF LS, PROPERTIES OF σ 2, PARTITIONED REGRESSION, GOODNESS OF FIT

Geometry of LS. LECTURE 3 GEOMETRY OF LS, PROPERTIES OF σ 2, PARTITIONED REGRESSION, GOODNESS OF FIT OCTOBER 7, 2016 LECTURE 3 GEOMETRY OF LS, PROPERTIES OF σ 2, PARTITIONED REGRESSION, GOODNESS OF FIT Geometry of LS We ca thik of y ad the colums of X as members of the -dimesioal Euclidea space R Oe ca

More information

A sequence of numbers is a function whose domain is the positive integers. We can see that the sequence

A sequence of numbers is a function whose domain is the positive integers. We can see that the sequence Sequeces A sequece of umbers is a fuctio whose domai is the positive itegers. We ca see that the sequece,, 2, 2, 3, 3,... is a fuctio from the positive itegers whe we write the first sequece elemet as

More information

32 estimating the cumulative distribution function

32 estimating the cumulative distribution function 32 estimatig the cumulative distributio fuctio 4.6 types of cofidece itervals/bads Let F be a class of distributio fuctios F ad let θ be some quatity of iterest, such as the mea of F or the whole fuctio

More information

Econ 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara

Econ 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara Poit Estimator Eco 325 Notes o Poit Estimator ad Cofidece Iterval 1 By Hiro Kasahara Parameter, Estimator, ad Estimate The ormal probability desity fuctio is fully characterized by two costats: populatio

More information

Introductory statistics

Introductory statistics CM9S: Machie Learig for Bioiformatics Lecture - 03/3/06 Itroductory statistics Lecturer: Sriram Sakararama Scribe: Sriram Sakararama We will provide a overview of statistical iferece focussig o the key

More information

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample.

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample. Statistical Iferece (Chapter 10) Statistical iferece = lear about a populatio based o the iformatio provided by a sample. Populatio: The set of all values of a radom variable X of iterest. Characterized

More information

Math 152. Rumbos Fall Solutions to Review Problems for Exam #2. Number of Heads Frequency

Math 152. Rumbos Fall Solutions to Review Problems for Exam #2. Number of Heads Frequency Math 152. Rumbos Fall 2009 1 Solutios to Review Problems for Exam #2 1. I the book Experimetatio ad Measuremet, by W. J. Youde ad published by the by the Natioal Sciece Teachers Associatio i 1962, the

More information

Chapter 3. Strong convergence. 3.1 Definition of almost sure convergence

Chapter 3. Strong convergence. 3.1 Definition of almost sure convergence Chapter 3 Strog covergece As poited out i the Chapter 2, there are multiple ways to defie the otio of covergece of a sequece of radom variables. That chapter defied covergece i probability, covergece i

More information

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS Jauary 25, 207 INTRODUCTION TO MATHEMATICAL STATISTICS Abstract. A basic itroductio to statistics assumig kowledge of probability theory.. Probability I a typical udergraduate problem i probability, we

More information

An Introduction to Asymptotic Theory

An Introduction to Asymptotic Theory A Itroductio to Asymptotic Theory Pig Yu School of Ecoomics ad Fiace The Uiversity of Hog Kog Pig Yu (HKU) Asymptotic Theory 1 / 20 Five Weapos i Asymptotic Theory Five Weapos i Asymptotic Theory Pig Yu

More information

Parameter, Statistic and Random Samples

Parameter, Statistic and Random Samples Parameter, Statistic ad Radom Samples A parameter is a umber that describes the populatio. It is a fixed umber, but i practice we do ot kow its value. A statistic is a fuctio of the sample data, i.e.,

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2018 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

POLS, GLS, FGLS, GMM. Outline of Linear Systems of Equations. Common Coefficients, Panel Data Model. Preliminaries

POLS, GLS, FGLS, GMM. Outline of Linear Systems of Equations. Common Coefficients, Panel Data Model. Preliminaries Outlie of Liear Systems of Equatios POLS, GLS, FGLS, GMM Commo Coefficiets, Pael Data Model Prelimiaries he liear pael data model is a static model because all explaatory variables are dated cotemporaeously

More information

Chapter 6 Sampling Distributions

Chapter 6 Sampling Distributions Chapter 6 Samplig Distributios 1 I most experimets, we have more tha oe measuremet for ay give variable, each measuremet beig associated with oe radomly selected a member of a populatio. Hece we eed to

More information

STA6938-Logistic Regression Model

STA6938-Logistic Regression Model Dr. Yig Zhag STA6938-Logistic Regressio Model Topic -Simple (Uivariate) Logistic Regressio Model Outlies:. Itroductio. A Example-Does the liear regressio model always work? 3. Maximum Likelihood Curve

More information

Understanding Samples

Understanding Samples 1 Will Moroe CS 109 Samplig ad Bootstrappig Lecture Notes #17 August 2, 2017 Based o a hadout by Chris Piech I this chapter we are goig to talk about statistics calculated o samples from a populatio. We

More information

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d Liear regressio Daiel Hsu (COMS 477) Maximum likelihood estimatio Oe of the simplest liear regressio models is the followig: (X, Y ),..., (X, Y ), (X, Y ) are iid radom pairs takig values i R d R, ad Y

More information

Sequences A sequence of numbers is a function whose domain is the positive integers. We can see that the sequence

Sequences A sequence of numbers is a function whose domain is the positive integers. We can see that the sequence Sequeces A sequece of umbers is a fuctio whose domai is the positive itegers. We ca see that the sequece 1, 1, 2, 2, 3, 3,... is a fuctio from the positive itegers whe we write the first sequece elemet

More information

Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Note 19

Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Note 19 CS 70 Discrete Mathematics ad Probability Theory Sprig 2016 Rao ad Walrad Note 19 Some Importat Distributios Recall our basic probabilistic experimet of tossig a biased coi times. This is a very simple

More information

Sequences. Notation. Convergence of a Sequence

Sequences. Notation. Convergence of a Sequence Sequeces A sequece is essetially just a list. Defiitio (Sequece of Real Numbers). A sequece of real umbers is a fuctio Z (, ) R for some real umber. Do t let the descriptio of the domai cofuse you; it

More information

TAMS24: Notations and Formulas

TAMS24: Notations and Formulas TAMS4: Notatios ad Formulas Basic otatios ad defiitios X: radom variable stokastiska variabel Mea Vätevärde: µ = X = by Xiagfeg Yag kpx k, if X is discrete, xf Xxdx, if X is cotiuous Variace Varias: =

More information

Statistical Inference Based on Extremum Estimators

Statistical Inference Based on Extremum Estimators T. Rotheberg Fall, 2007 Statistical Iferece Based o Extremum Estimators Itroductio Suppose 0, the true value of a p-dimesioal parameter, is kow to lie i some subset S R p : Ofte we choose to estimate 0

More information

Regression with an Evaporating Logarithmic Trend

Regression with an Evaporating Logarithmic Trend Regressio with a Evaporatig Logarithmic Tred Peter C. B. Phillips Cowles Foudatio, Yale Uiversity, Uiversity of Aucklad & Uiversity of York ad Yixiao Su Departmet of Ecoomics Yale Uiversity October 5,

More information

Series: Infinite Sums

Series: Infinite Sums Series: Ifiite Sums Series are a way to mae sese of certai types of ifiitely log sums. We will eed to be able to do this if we are to attai our goal of approximatig trascedetal fuctios by usig ifiite degree

More information

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)].

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)]. Probability 2 - Notes 0 Some Useful Iequalities. Lemma. If X is a radom variable ad g(x 0 for all x i the support of f X, the P(g(X E[g(X]. Proof. (cotiuous case P(g(X Corollaries x:g(x f X (xdx x:g(x

More information

Linear Regression Demystified

Linear Regression Demystified Liear Regressio Demystified Liear regressio is a importat subject i statistics. I elemetary statistics courses, formulae related to liear regressio are ofte stated without derivatio. This ote iteds to

More information

Notes 19 : Martingale CLT

Notes 19 : Martingale CLT Notes 9 : Martigale CLT Math 733-734: Theory of Probability Lecturer: Sebastie Roch Refereces: [Bil95, Chapter 35], [Roc, Chapter 3]. Sice we have ot ecoutered weak covergece i some time, we first recall

More information

Chapter 10: Power Series

Chapter 10: Power Series Chapter : Power Series 57 Chapter Overview: Power Series The reaso series are part of a Calculus course is that there are fuctios which caot be itegrated. All power series, though, ca be itegrated because

More information

Lecture Chapter 6: Convergence of Random Sequences

Lecture Chapter 6: Convergence of Random Sequences ECE5: Aalysis of Radom Sigals Fall 6 Lecture Chapter 6: Covergece of Radom Sequeces Dr Salim El Rouayheb Scribe: Abhay Ashutosh Doel, Qibo Zhag, Peiwe Tia, Pegzhe Wag, Lu Liu Radom sequece Defiitio A ifiite

More information

11 Correlation and Regression

11 Correlation and Regression 11 Correlatio Regressio 11.1 Multivariate Data Ofte we look at data where several variables are recorded for the same idividuals or samplig uits. For example, at a coastal weather statio, we might record

More information

Once we have a sequence of numbers, the next thing to do is to sum them up. Given a sequence (a n ) n=1

Once we have a sequence of numbers, the next thing to do is to sum them up. Given a sequence (a n ) n=1 . Ifiite Series Oce we have a sequece of umbers, the ext thig to do is to sum them up. Give a sequece a be a sequece: ca we give a sesible meaig to the followig expressio? a = a a a a While summig ifiitely

More information

Algebra of Least Squares

Algebra of Least Squares October 19, 2018 Algebra of Least Squares Geometry of Least Squares Recall that out data is like a table [Y X] where Y collects observatios o the depedet variable Y ad X collects observatios o the k-dimesioal

More information

ARIMA Models. Dan Saunders. y t = φy t 1 + ɛ t

ARIMA Models. Dan Saunders. y t = φy t 1 + ɛ t ARIMA Models Da Sauders I will discuss models with a depedet variable y t, a potetially edogeous error term ɛ t, ad a exogeous error term η t, each with a subscript t deotig time. With just these three

More information

Infinite Sequences and Series

Infinite Sequences and Series Chapter 6 Ifiite Sequeces ad Series 6.1 Ifiite Sequeces 6.1.1 Elemetary Cocepts Simply speakig, a sequece is a ordered list of umbers writte: {a 1, a 2, a 3,...a, a +1,...} where the elemets a i represet

More information

ECON 3150/4150, Spring term Lecture 3

ECON 3150/4150, Spring term Lecture 3 Itroductio Fidig the best fit by regressio Residuals ad R-sq Regressio ad causality Summary ad ext step ECON 3150/4150, Sprig term 2014. Lecture 3 Ragar Nymoe Uiversity of Oslo 21 Jauary 2014 1 / 30 Itroductio

More information

Singular Continuous Measures by Michael Pejic 5/14/10

Singular Continuous Measures by Michael Pejic 5/14/10 Sigular Cotiuous Measures by Michael Peic 5/4/0 Prelimiaries Give a set X, a σ-algebra o X is a collectio of subsets of X that cotais X ad ad is closed uder complemetatio ad coutable uios hece, coutable

More information

Economics 241B Relation to Method of Moments and Maximum Likelihood OLSE as a Maximum Likelihood Estimator

Economics 241B Relation to Method of Moments and Maximum Likelihood OLSE as a Maximum Likelihood Estimator Ecoomics 24B Relatio to Method of Momets ad Maximum Likelihood OLSE as a Maximum Likelihood Estimator Uder Assumptio 5 we have speci ed the distributio of the error, so we ca estimate the model parameters

More information

CHAPTER 5. Theory and Solution Using Matrix Techniques

CHAPTER 5. Theory and Solution Using Matrix Techniques A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 3 A COLLECTION OF HANDOUTS ON SYSTEMS OF ORDINARY DIFFERENTIAL

More information

ECE 330:541, Stochastic Signals and Systems Lecture Notes on Limit Theorems from Probability Fall 2002

ECE 330:541, Stochastic Signals and Systems Lecture Notes on Limit Theorems from Probability Fall 2002 ECE 330:541, Stochastic Sigals ad Systems Lecture Notes o Limit Theorems from robability Fall 00 I practice, there are two ways we ca costruct a ew sequece of radom variables from a old sequece of radom

More information

Lecture 20: Multivariate convergence and the Central Limit Theorem

Lecture 20: Multivariate convergence and the Central Limit Theorem Lecture 20: Multivariate covergece ad the Cetral Limit Theorem Covergece i distributio for radom vectors Let Z,Z 1,Z 2,... be radom vectors o R k. If the cdf of Z is cotiuous, the we ca defie covergece

More information

Correlation Regression

Correlation Regression Correlatio Regressio While correlatio methods measure the stregth of a liear relatioship betwee two variables, we might wish to go a little further: How much does oe variable chage for a give chage i aother

More information

MAT1026 Calculus II Basic Convergence Tests for Series

MAT1026 Calculus II Basic Convergence Tests for Series MAT026 Calculus II Basic Covergece Tests for Series Egi MERMUT 202.03.08 Dokuz Eylül Uiversity Faculty of Sciece Departmet of Mathematics İzmir/TURKEY Cotets Mootoe Covergece Theorem 2 2 Series of Real

More information

LECTURE 14 NOTES. A sequence of α-level tests {ϕ n (x)} is consistent if

LECTURE 14 NOTES. A sequence of α-level tests {ϕ n (x)} is consistent if LECTURE 14 NOTES 1. Asymptotic power of tests. Defiitio 1.1. A sequece of -level tests {ϕ x)} is cosistet if β θ) := E θ [ ϕ x) ] 1 as, for ay θ Θ 1. Just like cosistecy of a sequece of estimators, Defiitio

More information

(A sequence also can be thought of as the list of function values attained for a function f :ℵ X, where f (n) = x n for n 1.) x 1 x N +k x N +4 x 3

(A sequence also can be thought of as the list of function values attained for a function f :ℵ X, where f (n) = x n for n 1.) x 1 x N +k x N +4 x 3 MATH 337 Sequeces Dr. Neal, WKU Let X be a metric space with distace fuctio d. We shall defie the geeral cocept of sequece ad limit i a metric space, the apply the results i particular to some special

More information

Lecture 8: Convergence of transformations and law of large numbers

Lecture 8: Convergence of transformations and law of large numbers Lecture 8: Covergece of trasformatios ad law of large umbers Trasformatio ad covergece Trasformatio is a importat tool i statistics. If X coverges to X i some sese, we ofte eed to check whether g(x ) coverges

More information

Econ 325/327 Notes on Sample Mean, Sample Proportion, Central Limit Theorem, Chi-square Distribution, Student s t distribution 1.

Econ 325/327 Notes on Sample Mean, Sample Proportion, Central Limit Theorem, Chi-square Distribution, Student s t distribution 1. Eco 325/327 Notes o Sample Mea, Sample Proportio, Cetral Limit Theorem, Chi-square Distributio, Studet s t distributio 1 Sample Mea By Hiro Kasahara We cosider a radom sample from a populatio. Defiitio

More information

MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED

MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED MATH 47 / SPRING 013 ASSIGNMENT : DUE FEBRUARY 4 FINALIZED Please iclude a cover sheet that provides a complete setece aswer to each the followig three questios: (a) I your opiio, what were the mai ideas

More information

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain Assigmet 9 Exercise 5.5 Let X biomial, p, where p 0, 1 is ukow. Obtai cofidece itervals for p i two differet ways: a Sice X / p d N0, p1 p], the variace of the limitig distributio depeds oly o p. Use the

More information

DS 100: Principles and Techniques of Data Science Date: April 13, Discussion #10

DS 100: Principles and Techniques of Data Science Date: April 13, Discussion #10 DS 00: Priciples ad Techiques of Data Sciece Date: April 3, 208 Name: Hypothesis Testig Discussio #0. Defie these terms below as they relate to hypothesis testig. a) Data Geeratio Model: Solutio: A set

More information

Economics 326 Methods of Empirical Research in Economics. Lecture 8: Multiple regression model

Economics 326 Methods of Empirical Research in Economics. Lecture 8: Multiple regression model Ecoomics 326 Methods of Empirical Research i Ecoomics Lecture 8: Multiple regressio model Hiro Kasahara Uiversity of British Columbia December 24, 2014 Why we eed a multiple regressio model I There are

More information

The Binomial Theorem

The Binomial Theorem The Biomial Theorem Robert Marti Itroductio The Biomial Theorem is used to expad biomials, that is, brackets cosistig of two distict terms The formula for the Biomial Theorem is as follows: (a + b ( k

More information

Frequentist Inference

Frequentist Inference Frequetist Iferece The topics of the ext three sectios are useful applicatios of the Cetral Limit Theorem. Without kowig aythig about the uderlyig distributio of a sequece of radom variables {X i }, for

More information

Probability and Statistics

Probability and Statistics ICME Refresher Course: robability ad Statistics Staford Uiversity robability ad Statistics Luyag Che September 20, 2016 1 Basic robability Theory 11 robability Spaces A probability space is a triple (Ω,

More information

Summary. Recap ... Last Lecture. Summary. Theorem

Summary. Recap ... Last Lecture. Summary. Theorem Last Lecture Biostatistics 602 - Statistical Iferece Lecture 23 Hyu Mi Kag April 11th, 2013 What is p-value? What is the advatage of p-value compared to hypothesis testig procedure with size α? How ca

More information

Goodness-of-Fit Tests and Categorical Data Analysis (Devore Chapter Fourteen)

Goodness-of-Fit Tests and Categorical Data Analysis (Devore Chapter Fourteen) Goodess-of-Fit Tests ad Categorical Data Aalysis (Devore Chapter Fourtee) MATH-252-01: Probability ad Statistics II Sprig 2019 Cotets 1 Chi-Squared Tests with Kow Probabilities 1 1.1 Chi-Squared Testig................

More information

Estimation of the Mean and the ACVF

Estimation of the Mean and the ACVF Chapter 5 Estimatio of the Mea ad the ACVF A statioary process {X t } is characterized by its mea ad its autocovariace fuctio γ ), ad so by the autocorrelatio fuctio ρ ) I this chapter we preset the estimators

More information

The variance of a sum of independent variables is the sum of their variances, since covariances are zero. Therefore. V (xi )= n n 2 σ2 = σ2.
