Solution to Chapter 3 Analytical Exercises


December 7, 2003

Hayashi, Econometrics

1. If A is symmetric and idempotent, then A' = A and AA = A. So

  x'Ax = x'AAx = x'A'Ax = z'z ≥ 0,  where z ≡ Ax.

2. (a) By assumption, {x_i, ε_i} is jointly stationary and ergodic, so by the ergodic theorem the first term of (*) converges almost surely to E(x_i² ε_i²), which exists and is finite by Assumption 3.5.

(b) z_i x_i² ε_i is the product of x_i ε_i and x_i z_i. By the Cauchy-Schwarz inequality,

  E(|x_i ε_i · x_i z_i|) ≤ √( E(x_i² ε_i²) · E(x_i² z_i²) ).

E(x_i² ε_i²) exists and is finite by Assumption 3.5, and E(x_i² z_i²) exists and is finite by Assumption 3.6. Therefore E(|x_i z_i · x_i ε_i|) is finite. Hence E(x_i z_i · x_i ε_i) exists and is finite.

(c) By ergodic stationarity, the sample average of z_i x_i² ε_i converges in probability to some finite number. Because δ̂ is consistent for δ by Proposition 3.1, δ̂ − δ converges to 0 in probability. Therefore, the second term of (*) converges to zero in probability.

(d) By ergodic stationarity and Assumption 3.6, the sample average of z_i² x_i² converges in probability to some finite number. As mentioned in (c), δ̂ − δ converges to 0 in probability. Therefore, the last term of (*) vanishes.

3. (a)

  Q ≡ Σ'_xz S⁻¹ Σ_xz − Σ'_xz W Σ_xz (Σ'_xz W S W Σ_xz)⁻¹ Σ'_xz W Σ_xz
    = Σ'_xz C'C Σ_xz − Σ'_xz W Σ_xz (Σ'_xz W C⁻¹ C'⁻¹ W Σ_xz)⁻¹ Σ'_xz W Σ_xz
    = H'H − Σ'_xz W Σ_xz (G'G)⁻¹ Σ'_xz W Σ_xz
    = H'H − H'G (G'G)⁻¹ G'H
    = H' [I_K − G (G'G)⁻¹ G'] H
    = H' M_G H.

(b) First, we show that M_G is symmetric and idempotent:

  M_G' = I_K − (G (G'G)⁻¹ G')' = I_K − G ((G'G)⁻¹)' G' = I_K − G (G'G)⁻¹ G' = M_G,

  M_G M_G = I_K − 2 G (G'G)⁻¹ G' + G (G'G)⁻¹ G' G (G'G)⁻¹ G' = I_K − G (G'G)⁻¹ G' = M_G.

Thus, M_G is symmetric and idempotent. For any L-dimensional vector x,

  x'Qx = x'H' M_G Hx = z' M_G z ≥ 0  (where z ≡ Hx),

since M_G is positive semidefinite. Therefore, Q is positive semidefinite.
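The claims in Exercises 1 and 3(b), namely that M_G ≡ I_K − G(G'G)⁻¹G' is symmetric and idempotent and hence that quadratic forms in it are nonnegative, are easy to spot-check numerically. A minimal sketch in NumPy; the dimensions K = 5, L = 2 and the Gaussian draws are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 5, 2                      # K instruments, L regressors (arbitrary sizes)
G = rng.standard_normal((K, L))  # any full-column-rank K x L matrix

# Annihilator M_G = I_K - G (G'G)^{-1} G'
M_G = np.eye(K) - G @ np.linalg.inv(G.T @ G) @ G.T

assert np.allclose(M_G, M_G.T)      # symmetric
assert np.allclose(M_G @ M_G, M_G)  # idempotent

# z' M_G z >= 0 for any z, i.e. M_G is positive semidefinite
z = rng.standard_normal(K)
assert z @ M_G @ z >= -1e-12
```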

4. (the answer on p. 254 of the book, simplified) If W is as defined in the hint, then WSW = W and Σ'_xz W Σ_xz = Σ_zz A⁻¹ Σ_zz. So the general asymptotic-variance formula of Section 3.5 reduces to the asymptotic variance of the OLS estimator. By the efficiency result of Section 3.5, it is no smaller than (Σ'_xz S⁻¹ Σ_xz)⁻¹, which is the asymptotic variance of the efficient GMM estimator.

5. (a) From the expression for δ̂(Ŝ⁻¹) (given in Section 3.5) and the expression for g_n(δ̃) (given in Section 3.4), it is easy to show that g_n(δ̂(Ŝ⁻¹)) = B s_xy. But B s_xy = B g̅, because

  B s_xy = (I_K − S_xz (S'_xz Ŝ⁻¹ S_xz)⁻¹ S'_xz Ŝ⁻¹) s_xy
         = (I_K − S_xz (S'_xz Ŝ⁻¹ S_xz)⁻¹ S'_xz Ŝ⁻¹)(S_xz δ + g̅)   (since y_i = z'_i δ + ε_i)
         = (S_xz − S_xz (S'_xz Ŝ⁻¹ S_xz)⁻¹ S'_xz Ŝ⁻¹ S_xz) δ + (I_K − S_xz (S'_xz Ŝ⁻¹ S_xz)⁻¹ S'_xz Ŝ⁻¹) g̅
         = (S_xz − S_xz) δ + B g̅
         = B g̅.

(b) Since Ŝ⁻¹ = C'C, we obtain B' Ŝ⁻¹ B = B'C'CB = (CB)'(CB). But

  CB = C (I_K − S_xz (S'_xz Ŝ⁻¹ S_xz)⁻¹ S'_xz Ŝ⁻¹)
     = C − C S_xz (S'_xz C'C S_xz)⁻¹ S'_xz C'C
     = C − A (A'A)⁻¹ A'C   (where A ≡ C S_xz)
     = [I_K − A (A'A)⁻¹ A'] C ≡ MC.

So B' Ŝ⁻¹ B = (MC)'(MC) = C'M'MC. It should be routine to show that M is symmetric and idempotent. Thus B' Ŝ⁻¹ B = C'MC. The rank of M equals its trace, which is

  trace(M) = trace(I_K − A (A'A)⁻¹ A')
           = trace(I_K) − trace(A (A'A)⁻¹ A')
           = trace(I_K) − trace(A'A (A'A)⁻¹)
           = K − trace(I_L)
           = K − L.

(c) As defined in (b), C'C = Ŝ⁻¹. Let D be such that D'D = S⁻¹. The choice of C and D is not unique, but it would be possible to choose C so that plim C = D. Now, v ≡ √n (C g̅) = C (√n g̅). By the Ergodic Stationary Martingale Differences CLT, √n g̅ →_d N(0, S). So

  v = C (√n g̅) →_d N(0, Avar(v)),

where Avar(v) = D S D' = D (D'D)⁻¹ D' = D D⁻¹ D'⁻¹ D' = I_K.
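The trace calculation in 5(b), trace(M) = K − L, can likewise be checked numerically, together with the fact that rank equals trace for a symmetric idempotent matrix. A sketch with arbitrary illustrative dimensions (K = 6, L = 2):

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 6, 2
A = rng.standard_normal((K, L))  # stands in for A = C S_xz (illustrative draw)
M = np.eye(K) - A @ np.linalg.inv(A.T @ A) @ A.T

# For a symmetric idempotent matrix, rank = trace = K - L.
assert round(np.trace(M)) == K - L
assert np.linalg.matrix_rank(M) == K - L
```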

(d)

  J(δ̂(Ŝ⁻¹), Ŝ⁻¹) = n · g_n(δ̂(Ŝ⁻¹))' Ŝ⁻¹ g_n(δ̂(Ŝ⁻¹))
                 = n (B g̅)' Ŝ⁻¹ (B g̅)   (by (a))
                 = n g̅' B' Ŝ⁻¹ B g̅
                 = n g̅' C'MC g̅   (by (b))
                 = v'Mv   (since v ≡ √n C g̅).

Since v →_d N(0, I_K) and M is idempotent, v'Mv is asymptotically chi-squared with degrees of freedom equaling the rank of M, which is K − L.

6. From Exercise 5, J = n g̅' B' Ŝ⁻¹ B g̅. Also from Exercise 5, B g̅ = B s_xy. Hence

  J = n s'_xy B' Ŝ⁻¹ B s_xy = n s'_xy C'MC s_xy.

7. For the most part, the hints are nearly the answer. Here, we provide answers to (d), (f), (g), (i), and (j).

(d) As shown in (c), J_1 = v_1' M_1 v_1. It suffices to prove that v_1 = C_1 F' C⁻¹ v:

  v_1 ≡ √n C_1 g̅_1 = √n C_1 F' g̅ = √n C_1 F' C⁻¹ C g̅ = C_1 F' C⁻¹ (√n C g̅) = C_1 F' C⁻¹ v   (since v ≡ √n C g̅).

(f) Use the hint to show that A'D = 0 if A_1' M_1 = 0. It should be easy to show that A_1' M_1 = 0 from the definition of M_1.

(g) By the definition of M in Exercise 5, MD = D − A (A'A)⁻¹ A'D. So MD = D, since A'D = 0 as shown in the previous part. Since both M and D are symmetric,

  DM = D'M' = (MD)' = D' = D.

As shown in part (e), D is idempotent. Also, M is idempotent, as shown in Exercise 5. So

  (M − D)(M − D) = M² − DM − MD + D² = M − D.

As shown in Exercise 5, the trace of M is K − L. As shown in (e), the trace of D is K_1 − L. So the trace of M − D is K − K_1. The rank of a symmetric and idempotent matrix is its trace.

(i) It has been shown in Exercise 6 that g̅' C'MC g̅ = s'_xy C'MC s_xy, since C'MC = B' Ŝ⁻¹ B. Here, we show that g̅' C'DC g̅ = s'_xy C'DC s_xy:

  g̅' C'DC g̅ = g̅' F C_1' M_1 C_1 F' g̅   (C'DC = F C_1' M_1 C_1 F' by the definition of D in (d))
            = g̅' F B_1' (Ŝ_11)⁻¹ B_1 F' g̅   (since C_1' M_1 C_1 = B_1' (Ŝ_11)⁻¹ B_1 from (a))
            = g̅_1' B_1' (Ŝ_11)⁻¹ B_1 g̅_1   (since g̅_1 = F' g̅).

From the definition of B_1 and the fact that s_x1y = S_x1z δ + g̅_1, it follows that B_1 g̅_1 = B_1 s_x1y. So

  g̅_1' B_1' (Ŝ_11)⁻¹ B_1 g̅_1 = s'_x1y B_1' (Ŝ_11)⁻¹ B_1 s_x1y
                             = s'_xy F B_1' (Ŝ_11)⁻¹ B_1 F' s_xy   (since s_x1y = F' s_xy)
                             = s'_xy F C_1' M_1 C_1 F' s_xy   (since B_1' (Ŝ_11)⁻¹ B_1 = C_1' M_1 C_1 from (a))
                             = s'_xy C'DC s_xy.
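The distributional claim in 5(d), that v'Mv is chi-squared with K − L degrees of freedom when v ~ N(0, I_K) and M is symmetric idempotent of rank K − L, can be illustrated by simulation; a chi-squared(K − L) variable has mean K − L and variance 2(K − L). A sketch; the dimensions, seed, and number of draws are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
K, L = 6, 2
A = rng.standard_normal((K, L))
M = np.eye(K) - A @ np.linalg.inv(A.T @ A) @ A.T  # symmetric idempotent, rank K - L

# Draw v ~ N(0, I_K) many times and form the quadratic form v'Mv.
draws = rng.standard_normal((50_000, K))
q = np.einsum("ni,ij,nj->n", draws, M, draws)

# Compare with the chi-squared(K - L) moments: mean K - L, variance 2(K - L).
assert abs(q.mean() - (K - L)) < 0.1
assert abs(q.var() - 2 * (K - L)) < 0.3
```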

(j) M − D is positive semidefinite because it is symmetric and idempotent.

8. (a) Solve the first-order conditions in the hint for δ to obtain

  δ = δ̂(Ŵ) − (1/2)(S'_xz Ŵ S_xz)⁻¹ R'λ.

Substitute this into the constraint Rδ = r to obtain the expression for λ in the question. Then substitute this expression for λ into the above equation to obtain the expression for δ in the question.

(b) The hint is almost the answer.

(c) What needs to be shown is that n (δ̂(Ŵ) − δ)' (S'_xz Ŵ S_xz) (δ̂(Ŵ) − δ), where δ here is the restricted estimator from (a), equals the Wald statistic. But this is immediate from substitution of the expression for δ in (a).

9. (a) By applying the sampling-error expression of Section 3.4 to each estimator, we obtain (writing two-block vectors and matrices with ";" separating the blocks)

  √n [δ̂_1 − δ ; δ̂_2 − δ] = [(S'_xz Ŵ_1 S_xz)⁻¹ S'_xz Ŵ_1 ; (S'_xz Ŵ_2 S_xz)⁻¹ S'_xz Ŵ_2] √n g̅.

By the Billingsley CLT, we have √n g̅ →_d N(0, S). Also, we have

  [(S'_xz Ŵ_1 S_xz)⁻¹ S'_xz Ŵ_1 ; (S'_xz Ŵ_2 S_xz)⁻¹ S'_xz Ŵ_2] →_p [Q_1⁻¹ Σ'_xz W_1 ; Q_2⁻¹ Σ'_xz W_2].

Therefore, by Lemma 2.4(c),

  √n [δ̂_1 − δ ; δ̂_2 − δ] →_d N(0, [Q_1⁻¹ Σ'_xz W_1 ; Q_2⁻¹ Σ'_xz W_2] S [W_1 Σ_xz Q_1⁻¹ , W_2 Σ_xz Q_2⁻¹]) = N(0, [A_11 A_12 ; A_21 A_22]).

(b) √n q can be rewritten as

  √n q = √n (δ̂_1 − δ̂_2) = √n (δ̂_1 − δ) − √n (δ̂_2 − δ) = [I , −I] √n [δ̂_1 − δ ; δ̂_2 − δ].

Therefore, we obtain √n q →_d N(0, Avar(q)), where

  Avar(q) = [I , −I] [A_11 A_12 ; A_21 A_22] [I ; −I] = A_11 − A_12 − A_21 + A_22.
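The block-matrix step behind Avar(q) in 9(b) is the identity [I, −I] V [I; −I] = A_11 − A_12 − A_21 + A_22 for any partitioned matrix V; a quick numerical check (the block size p = 3 and the random V are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 3
R = rng.standard_normal((2 * p, 2 * p))
V = R @ R.T  # an arbitrary symmetric positive semidefinite 2p x 2p matrix
A11, A12 = V[:p, :p], V[:p, p:]
A21, A22 = V[p:, :p], V[p:, p:]

sel = np.hstack([np.eye(p), -np.eye(p)])  # the selection matrix [I, -I]
assert np.allclose(sel @ V @ sel.T, A11 - A12 - A21 + A22)
```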

(c) Since W_2 = S⁻¹, the matrices Q_2, A_12, A_21, and A_22 can be rewritten as follows:

  Q_2 = Σ'_xz W_2 Σ_xz = Σ'_xz S⁻¹ Σ_xz,
  A_12 = Q_1⁻¹ Σ'_xz W_1 S S⁻¹ Σ_xz Q_2⁻¹ = Q_1⁻¹ (Σ'_xz W_1 Σ_xz) Q_2⁻¹ = Q_1⁻¹ Q_1 Q_2⁻¹ = Q_2⁻¹,
  A_21 = Q_2⁻¹ Σ'_xz S⁻¹ S W_1 Σ_xz Q_1⁻¹ = Q_2⁻¹,
  A_22 = (Σ'_xz S⁻¹ Σ_xz)⁻¹ Σ'_xz S⁻¹ S S⁻¹ Σ_xz (Σ'_xz S⁻¹ Σ_xz)⁻¹ = (Σ'_xz S⁻¹ Σ_xz)⁻¹ = Q_2⁻¹.

Substituting these into the expression for Avar(q) in (b), we obtain

  Avar(q) = A_11 − Q_2⁻¹ = A_11 − (Σ'_xz S⁻¹ Σ_xz)⁻¹ = Avar(δ̂(Ŵ_1)) − Avar(δ̂(Ŝ⁻¹)).

10. (a)

  σ_xz ≡ E(x_i z_i) = E(x_i (x_i β + v_i)) = β E(x_i²) + E(x_i v_i) = β σ_x² ≠ 0

(by assumptions (2), (3), and (4)).

(b) From the definition of δ̂,

  δ̂ − δ = ((1/n) Σ_i x_i z_i)⁻¹ (1/n) Σ_i x_i ε_i = s_xz⁻¹ · (1/n) Σ_i x_i ε_i.

We have x_i z_i = x_i (x_i β + v_i) = x_i² β + x_i v_i, which, being a function of (x_i, η_i), is ergodic stationary by assumption (1). So by the Ergodic Theorem, s_xz →_p σ_xz. Since σ_xz ≠ 0 by (a), we have s_xz⁻¹ →_p σ_xz⁻¹. By assumption (2), E(x_i ε_i) = 0. So by assumption (1), (1/n) Σ_i x_i ε_i →_p 0. Thus δ̂ − δ →_p 0.

(c)

  s_xz = (1/n) Σ_i x_i z_i = (1/n) Σ_i (x_i² β + x_i v_i) = (1/√n) · (1/n) Σ_i x_i² + (1/n) Σ_i x_i v_i   (since β = 1/√n)
       →_p 0 · E(x_i²) + E(x_i v_i) = 0.
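Parts (a) and (b) of Exercise 10 say that with a fixed β ≠ 0 the IV estimator δ̂ = ((1/n)Σ x_i z_i)⁻¹ (1/n)Σ x_i y_i is consistent. A simulation sketch; all distributional choices (standard normal x_i, v_i, ε_i, β = 0.5, δ = 1) are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, delta = 200_000, 0.5, 1.0  # illustrative values, not from the text
x = rng.standard_normal(n)          # instrument
v = rng.standard_normal(n)
eps = rng.standard_normal(n)
z = beta * x + v                    # regressor, correlated with the instrument
y = delta * z + eps

# IV estimator: delta_hat = (sum x_i z_i)^{-1} sum x_i y_i
delta_hat = (x @ y) / (x @ z)
assert abs(delta_hat - delta) < 0.05  # close to the truth at large n
```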

(d)

  √n s_xz = (1/n) Σ_i x_i² + (1/√n) Σ_i x_i v_i.

By assumption (1) and the Ergodic Theorem, the first term of the RHS converges in probability to E(x_i²) = σ_x² > 0. Assumption (2) and the Martingale Differences CLT imply that

  (1/√n) Σ_i x_i v_i →_d a ~ N(0, s_22).

Therefore, by Lemma 2.4(a), we obtain

  √n s_xz →_d σ_x² + a.

(e) δ̂ − δ can be rewritten as

  δ̂ − δ = (√n s_xz)⁻¹ √n g̅_1.

From assumption (2) and the Martingale Differences CLT, we obtain

  √n g̅_1 →_d b ~ N(0, s_11),

where s_11 is the (1,1) element of S. By using the result of (d) and Lemma 2.3(b),

  δ̂ − δ →_d (σ_x² + a)⁻¹ b.

(a, b) are jointly normal because the joint distribution is the limiting distribution of

  √n g̅ = [√n g̅_1 ; (1/√n) Σ_i x_i v_i].

(f) Because δ̂ − δ converges in distribution to (σ_x² + a)⁻¹ b, which is not zero, the answer is No.
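Part (f) concludes that with β = 1/√n the IV estimator is not consistent: δ̂ − δ converges in distribution to the non-degenerate limit (σ_x² + a)⁻¹ b rather than to 0, so its dispersion stays of order one as n grows. A simulation sketch of this; the distributional choices (standard normal x_i, v_i, ε_i, δ = 1) are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(5)
delta, reps = 1.0, 500  # illustrative values

def iv_errors(n):
    """Sampling errors delta_hat - delta across replications at sample size n."""
    errs = np.empty(reps)
    for r in range(reps):
        x = rng.standard_normal(n)
        v = rng.standard_normal(n)
        eps = rng.standard_normal(n)
        z = x / np.sqrt(n) + v      # beta = 1/sqrt(n): the instrument "evaporates"
        y = delta * z + eps
        errs[r] = (x @ y) / (x @ z) - delta
    return errs

# The typical size of the sampling error does not shrink with n.
small, large = iv_errors(500), iv_errors(50_000)
assert np.median(np.abs(small)) > 0.2
assert np.median(np.abs(large)) > 0.2
```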