LECTURE 11 LINEAR PROCESSES III: ASYMPTOTIC RESULTS


APRIL 7

(Phillips and Solo (1992) and Phillips' Lecture Notes on Stationary and Nonstationary Time Series)

In this lecture, we discuss the LLN and CLT for a linear process $\{y_t\}$ generated as

$$y_t = C(L)\varepsilon_t, \tag{1}$$

where $C(L) = \sum_{j=0}^{\infty} c_j L^j$, and $\{\varepsilon_t\}$ is a sequence of iid random variables with zero mean and finite variance. The LLN and CLT for $\{y_t\}$ rely only on the LLN and CLT for iid sequences and a certain decomposition of the lag polynomial $C(L)$. The method also works for more general sequences, with the $\varepsilon_t$'s being independent but not identically distributed (inid) or martingale difference sequences (mds).

Definition. Let $\{F_t\}$ be an increasing sequence of $\sigma$-fields. Then $\{(u_t, F_t)\}$ is a martingale if $E(u_t \mid F_{t-1}) = u_{t-1}$ with probability one. The sequence $\{(\varepsilon_t, F_t)\}$ is said to be a martingale difference sequence if $E(\varepsilon_t \mid F_{t-1}) = 0$ with probability one.

For $\{u_t\}$, the $\sigma$-field $F_t$ is often taken to be $\sigma(u_t, u_{t-1}, \ldots)$. Suppose that $\{(u_t, F_t)\}$ is a martingale. Then $\{(u_t - u_{t-1}, F_t)\}$ is an mds, since $u_t - u_{t-1} = u_t - E(u_t \mid F_{t-1})$ and, therefore, $E(u_t - u_{t-1} \mid F_{t-1}) = 0$. Note that any of the requirements for $\{\varepsilon_t\}$ (iid, inid, or mds) is stronger than just a WN (white noise) assumption.

Lemma 1 (SLLN for inid sequences, White (2001), Corollary 3.9). Let $\{\varepsilon_t\}$ be a sequence of independent random variables such that $\sup_t E|\varepsilon_t|^{1+\delta} < \infty$ for some $\delta > 0$. Then

$$n^{-1}\sum_{t=1}^{n}\varepsilon_t - n^{-1}\sum_{t=1}^{n}E\varepsilon_t \to 0 \quad a.s.$$

Lemma 2 (SLLN for mds, White (2001), Exercise 3.77). Let $\{(\varepsilon_t, F_t)\}$ be an mds such that $\sup_t E|\varepsilon_t|^{2+\delta} < \infty$ for some $\delta > 0$. Then $n^{-1}\sum_{t=1}^{n}\varepsilon_t \to 0$ a.s.

Lemma 3 (CLT for inid sequences, White (2001), Theorem 5.). Let $\{\varepsilon_t\}$ be a sequence of independent random variables such that $E\varepsilon_t = 0$ for all $t$, $\sup_t E|\varepsilon_t|^{2+\delta} < \infty$ for some $\delta > 0$, and, for all sufficiently large $n$, $n^{-1}\sum_{t=1}^{n}E\varepsilon_t^2 > \delta' > 0$. Then

$$\frac{n^{-1/2}\sum_{t=1}^{n}\varepsilon_t}{\left(n^{-1}\sum_{t=1}^{n}E\varepsilon_t^2\right)^{1/2}} \to_d N(0,1).$$

Lemma 4 (CLT for mds, White (2001), Corollary 5.6). Let $\{(\varepsilon_t, F_t)\}$ be an mds such that $\sup_t E|\varepsilon_t|^{2+\delta} < \infty$ for some $\delta > 0$. Suppose that, for all sufficiently large $n$, $n^{-1}\sum_{t=1}^{n}E\varepsilon_t^2 > \delta' > 0$, and

$$n^{-1}\sum_{t=1}^{n}\varepsilon_t^2 - n^{-1}\sum_{t=1}^{n}E\varepsilon_t^2 \to_p 0.$$

Then

$$\frac{n^{-1/2}\sum_{t=1}^{n}\varepsilon_t}{\left(n^{-1}\sum_{t=1}^{n}E\varepsilon_t^2\right)^{1/2}} \to_d N(0,1).$$

Lemma 5 (CLT for strictly stationary and ergodic mds). Let $\{(\varepsilon_t, F_t)\}$ be a strictly stationary and ergodic mds such that $E\varepsilon_t^2 < \infty$. Then

$$n^{-1/2}\sum_{t=1}^{n}\varepsilon_t \to_d N\left(0, E\varepsilon_t^2\right).$$

Beveridge-Nelson (BN) decomposition

First, we discuss an algebraic decomposition of a lag polynomial into long-run and transitory elements. The decomposition was introduced by Beveridge and Nelson (1981).

Lemma 6. Let $C(L) = \sum_{j=0}^{\infty} c_j L^j$. Then:

(a) $C(L) = C(1) - (1-L)\tilde{C}(L)$, where $\tilde{C}(L) = \sum_{j=0}^{\infty}\tilde{c}_j L^j$ with $\tilde{c}_j = \sum_{h=j+1}^{\infty} c_h$.

(b) If $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$, then $\sum_{j=0}^{\infty}\tilde{c}_j^2 < \infty$.

(c) If $\sum_{j=1}^{\infty} j|c_j| < \infty$, then $\sum_{j=0}^{\infty}|\tilde{c}_j| < \infty$.

Proof. For part (a), write

$$C(L) = \sum_{j=0}^{\infty} c_j L^j = \sum_{j=0}^{\infty} c_j - \sum_{j=1}^{\infty} c_j\left(1 - L^j\right).$$

Since $1 - L^j = (1-L)\left(1 + L + \cdots + L^{j-1}\right)$, rearranging the terms,

$$C(L) = C(1) - (1-L)\sum_{j=1}^{\infty} c_j\sum_{h=0}^{j-1} L^h = C(1) - (1-L)\sum_{h=0}^{\infty}\left(\sum_{j=h+1}^{\infty} c_j\right)L^h = C(1) - (1-L)\tilde{C}(L).$$

For part (b), by the Cauchy-Schwarz inequality,

$$\tilde{c}_j^2 = \left(\sum_{h=j+1}^{\infty} c_h\right)^2 \le \left(\sum_{h=j+1}^{\infty}|c_h|\,h^{1/4}\,h^{-1/4}\right)^2 \le \left(\sum_{h=j+1}^{\infty}|c_h|\,h^{1/2}\right)\left(\sum_{h=j+1}^{\infty}|c_h|\,h^{-1/2}\right).$$

Next, consider $\sum_{j=0}^{\infty}\sum_{h=j+1}^{\infty}|c_h|\,h^{-1/2}$. The term $|c_1|$ appears in the sum only once, when $j = 0$. The term $|c_2|$ appears in the sum twice, when $j = 0, 1$. Hence, $|c_h|$ appears when $j = 0, 1, \ldots, h-1$, in total $h$ times. Therefore,

$$\sum_{j=0}^{\infty}\sum_{h=j+1}^{\infty}|c_h|\,h^{-1/2} = \sum_{h=1}^{\infty} h\,|c_h|\,h^{-1/2} = \sum_{h=1}^{\infty} h^{1/2}|c_h|,$$

and

$$\sum_{j=0}^{\infty}\tilde{c}_j^2 \le \left(\sum_{h=1}^{\infty} h^{1/2}|c_h|\right)^2 < \infty.$$

For part (c), $|\tilde{c}_j| \le \sum_{h=j+1}^{\infty}|c_h|$, so

$$\sum_{j=0}^{\infty}|\tilde{c}_j| \le \sum_{j=0}^{\infty}\sum_{h=j+1}^{\infty}|c_h| = \sum_{h=1}^{\infty} h\,|c_h| < \infty.$$

Notice that the assumptions $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$ and $\sum_{j=1}^{\infty} j|c_j| < \infty$ are stronger than finiteness of the long-run variance, which requires only $\sum_{j=0}^{\infty}|c_j| < \infty$.

According to the BN decomposition, if $\{y_t\}$ is a linear process, then

$$y_t = C(L)\varepsilon_t = C(1)\varepsilon_t - (1-L)\tilde{C}(L)\varepsilon_t = C(1)\varepsilon_t - (\tilde{\varepsilon}_t - \tilde{\varepsilon}_{t-1}), \tag{2}$$

where e" t e C (L) " t : Furthermore, e" t has ite variace provided that P j j j <. The rst summad o the right-had side of (), C () " t ; is the log-ru compoet, ad e" t e" t is trasiet. P The similar decompositio exists i the vector case. The trasiet compoet has ite variace if j j kc j k < : C e j C h hj+ kc j k hj+ hj+ j kc j k : kc h k h 4 kc h k h 4 The coditio i part (c) of Theorem 6, becomes P j j kc jk < i the vector case. WLLN Suppose that t C (L) " t : f" t g is a sequece of iid radom variables with E j" t j < ad E" t. C (L) satis es P j j jj < : We will show that uder these coditios t t p : The key is the BN decompositio (). Notice that P t (e" t e" t ) is a so called telescopig sum: so that (e" t e" t ) (~" ~" ) + (~" ~" ) + (~" 3 ~" ) + : : : + (~" ~" ) t e" e" ; t t C () t " t (e" e" ) : Due to P j j jj <, jc ()j < : Hece, by the WLLN for iid sequeces, C () t 4 " t p :

Next, by Markov's inequality,

$$P\left(n^{-1}|\tilde{\varepsilon}_t| > \delta\right) \le \frac{E|\tilde{\varepsilon}_t|}{n\delta},$$

and

$$E|\tilde{\varepsilon}_t| = E\left|\sum_{j=0}^{\infty}\tilde{c}_j\varepsilon_{t-j}\right| \le \sum_{j=0}^{\infty}|\tilde{c}_j|\,E|\varepsilon_{t-j}| = E|\varepsilon_1|\sum_{j=0}^{\infty}|\tilde{c}_j| < \infty,$$

provided that $\sum_{j=1}^{\infty} j|c_j| < \infty$. Therefore,

$$n^{-1}\left(\tilde{\varepsilon}_n - \tilde{\varepsilon}_0\right) \to_p 0.$$

If we assume that $E\varepsilon_t^2 < \infty$, then we can replace $\sum_{j=1}^{\infty} j|c_j| < \infty$ with $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$, since $P\left(n^{-1}|\tilde{\varepsilon}_t| > \delta\right) \le E\tilde{\varepsilon}_t^2/(n^2\delta^2)$, and $E\tilde{\varepsilon}_t^2 < \infty$ provided that $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$ holds.

We can prove similar WLLNs with $\{\varepsilon_t\}$ being inid or an mds by using the corresponding LLNs for inid or mds sequences. For example, the result holds if $\{\varepsilon_t\}$ is inid, $\sup_t E|\varepsilon_t|^{1+\delta} < \infty$ for some $\delta > 0$, and $\sum_{j=1}^{\infty} j|c_j| < \infty$. If $\{(\varepsilon_t, F_t)\}$ is an mds, the result holds with $\sup_t E|\varepsilon_t|^{2+\delta} < \infty$ for some $\delta > 0$ and $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$.

So far we have assumed that $Ey_t = 0$. One can modify the first assumption so that $y_t = \mu + C(L)\varepsilon_t$, where $\mu$ is the mean of $y_t$. In this case, under the same set of conditions, we have $n^{-1}\sum_{t=1}^{n} y_t \to_p \mu$. For example, an AR(1) process with mean $\mu$ is given by $(1 - \phi L)(y_t - \mu) = \varepsilon_t$. If $|\phi| < 1$, then the sample average of $y_t$ converges in probability to $\mu$, provided that the corresponding moment restrictions hold.

CLT

Suppose that $y_t = C(L)\varepsilon_t$, where:

1. $\{\varepsilon_t\}$ is a sequence of iid random variables with $E\varepsilon_t = 0$ and $E\varepsilon_t^2 = \sigma^2 < \infty$.
2. $C(L)$ satisfies $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$.
3. $C(1) \ne 0$.

The BN decomposition allows us to write

$$n^{-1/2}\sum_{t=1}^{n} y_t = C(1)\,n^{-1/2}\sum_{t=1}^{n}\varepsilon_t - n^{-1/2}\left(\tilde{\varepsilon}_n - \tilde{\varepsilon}_0\right) = C(1)\,n^{-1/2}\sum_{t=1}^{n}\varepsilon_t + o_p(1) \to_d C(1)\,N\left(0, \sigma^2\right) = N\left(0, \sigma^2 C(1)^2\right).$$

Here, convergence in distribution is by the CLT for iid random variables. The approach illustrates why, in the serially correlated case, the asymptotic variance depends on the long-run variance of $\{y_t\}$. Again, the approach can be extended to the case where $\{\varepsilon_t\}$ is inid or mds.

In the vector case, suppose that $\{\varepsilon_t\}$ is a sequence of iid $k$-vectors with $E\varepsilon_t = 0$ and $E\varepsilon_t\varepsilon_t' = \Sigma$, a finite matrix. Let $y_t = C(L)\varepsilon_t$, $\sum_{j=1}^{\infty} j^{1/2}\|C_j\| < \infty$, and $C(1) \ne 0$. Since $n^{-1/2}\sum_{t=1}^{n}\varepsilon_t \to_d N(0, \Sigma)$, we have that

$$n^{-1/2}\sum_{t=1}^{n} y_t \to_d N\left(0, C(1)\Sigma C(1)'\right).$$

Convergence of sample variances

Estimators of coefficients in the linear regression model involve second sample moments $n^{-1}\sum_{t=1}^{n} y_t^2$. Here, we discuss convergence of sample second moments when $\{y_t\}$ is a linear process. We assume that $\{y_t\}$ is a scalar linear process satisfying the same assumptions as in the previous section. Write

$$y_t^2 = (C(L)\varepsilon_t)^2 = \sum_{j=0}^{\infty}\sum_{l=0}^{\infty} c_j c_l\,\varepsilon_{t-j}\varepsilon_{t-l} = \sum_{j=0}^{\infty} c_j^2\varepsilon_{t-j}^2 + 2\sum_{j=0}^{\infty}\sum_{l>j} c_j c_l\,\varepsilon_{t-j}\varepsilon_{t-l}$$
$$= \sum_{j=0}^{\infty} c_j^2\varepsilon_{t-j}^2 + 2\sum_{j=0}^{\infty}\sum_{h=1}^{\infty} c_j c_{j+h}\,\varepsilon_{t-j}\varepsilon_{t-j-h} \quad \text{(change of variable } l = j + h\text{, so that } h = 1, 2, \ldots\text{)}$$
$$= B_0(L)\varepsilon_t^2 + 2\sum_{h=1}^{\infty} B_h(L)\,\varepsilon_t\varepsilon_{t-h},$$

where, for $h = 0, 1, 2, \ldots$,

$$B_h(L) = \sum_{j=0}^{\infty} b_{h,j}L^j, \qquad b_{h,j} = c_j c_{j+h}.$$

Thus,

$$B_0(L) = \sum_{j=0}^{\infty} c_j^2 L^j, \qquad B_h(L) = c_0 c_h + c_1 c_{1+h}L + c_2 c_{2+h}L^2 + \cdots.$$

The BN decomposition of $B_h(L)$ is

$$B_h(L) = B_h(1) - (1-L)\tilde{B}_h(L), \tag{3}$$

where

$$\tilde{B}_h(L) = \sum_{j=0}^{\infty}\tilde{b}_{h,j}L^j, \qquad \tilde{b}_{h,j} = \sum_{l=j+1}^{\infty} b_{h,l} = \sum_{l=j+1}^{\infty} c_l c_{l+h}.$$

The BN decomposition of $B_h(L)$ is valid provided that $\sum_{j=1}^{\infty} j^{1/2}|c_j| < \infty$: by the Cauchy-Schwarz inequality,

$$\tilde{b}_{h,j}^2 = \left(\sum_{l=j+1}^{\infty} c_l c_{l+h}\right)^2 \le \left(\sum_{l=j+1}^{\infty} l^{1/4}|c_l|\cdot l^{-1/4}|c_{l+h}|\right)^2 \le \left(\sum_{l=j+1}^{\infty} l^{1/2}c_l^2\right)\left(\sum_{l=j+1}^{\infty} l^{-1/2}c_{l+h}^2\right),$$

and $\sum_{l} l^{1/2}c_l^2$ is finite provided that $\sum_{l} l^{1/2}|c_l|$ is finite:

$$\sum_{l=1}^{\infty} l^{1/2}c_l^2 \le \sup_l|c_l|\cdot\sum_{l=1}^{\infty} l^{1/2}|c_l| < \infty,$$

where $\sup_l|c_l| < \infty$ because $\sum_{l=1}^{\infty} l^{1/2}|c_l| < \infty$ and therefore $|c_l| \to 0$ as $l \to \infty$. Thus, we have

$$y_t^2 = B_0(L)\varepsilon_t^2 + 2\sum_{h=1}^{\infty} B_h(L)\,\varepsilon_t\varepsilon_{t-h} = B_0(1)\varepsilon_t^2 + 2\sum_{h=1}^{\infty} B_h(1)\,\varepsilon_t\varepsilon_{t-h} - (1-L)\tilde{B}_0(L)\varepsilon_t^2 - 2(1-L)\sum_{h=1}^{\infty}\tilde{B}_h(L)\,\varepsilon_t\varepsilon_{t-h}$$
$$= \left(\sum_{j=0}^{\infty} c_j^2\right)\varepsilon_t^2 + u_t - (1-L)\tilde{v}_t,$$

where u t " t B h () " t h ; h ev t e B (L) " t + eb h (L) " t " t h : h We have We will show ext that Let F t (" t ; " t t t c j t t " t + u t (ev ev ) : u t a:s: : ; : : :) : We have that f(u t ; F t )g is a mds. Lemma 7 (White (), Theorem 3.76) Let f(u t ; F t )g be a mds. If for some r, P t E ju tj r t +r < ; the P t u t a:s: : We will verify that fu t g satis es the coditio of the above lemma. Set r : The coditio is satis ed if sup t Eu t < ; sice P t t < : Eu t 4 E t B h () " t h h 4 4 B h () : h Next, B h () h +h h h c j c j h c j j jc j; c j+h c j+h ad, by the same argumet as o page 3 of Lecture, P j j j j < implies that P j j < as well. s before, oe ca show that (ev ev ) p ; provided that P j j j j <. 8

Lastly, by the WLLN for iid sequences,

$$n^{-1}\sum_{t=1}^{n}\varepsilon_t^2 \to_p \sigma^2.$$

Therefore,

$$n^{-1}\sum_{t=1}^{n} y_t^2 \to_p \sigma^2\sum_{j=0}^{\infty} c_j^2 = Ey_t^2.$$
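The three results of this lecture can be illustrated together by simulation. Below is a minimal sketch assuming a hypothetical AR(1) process $y_t = \phi y_{t-1} + \varepsilon_t$ with illustrative values $\phi = 0.5$ and $\sigma = 1$, for which $c_j = \phi^j$, $C(1) = 1/(1-\phi)$, and $\sum_j c_j^2 = 1/(1-\phi^2)$ (the parameter choices and variable names are assumptions of the sketch, not part of the lecture):

```python
import numpy as np

# Simulation sketch of the lecture's three results for an AR(1) example:
#   (i)   n^{-1} sum_t y_t        ->p 0                        (WLLN)
#   (ii)  Var(n^{-1/2} sum_t y_t) ->  sigma^2 C(1)^2           (long-run variance)
#   (iii) n^{-1} sum_t y_t^2      ->p sigma^2 / (1 - phi^2)    (= E y_t^2)
rng = np.random.default_rng(2)
phi, sigma, n, burn, reps = 0.5, 1.0, 1000, 200, 500

means, scaled_sums, second_moms = [], [], []
for _ in range(reps):
    eps = sigma * rng.standard_normal(n + burn)
    y = np.empty(n + burn)
    y[0] = eps[0]
    for t in range(1, n + burn):
        y[t] = phi * y[t - 1] + eps[t]   # AR(1) recursion
    y = y[burn:]                         # discard burn-in
    means.append(y.mean())
    scaled_sums.append(y.sum() / np.sqrt(n))
    second_moms.append(np.mean(y**2))

lrv = sigma**2 / (1 - phi) ** 2          # sigma^2 C(1)^2, the long-run variance
# np.mean(means) should be near 0, np.var(scaled_sums) near lrv, and
# np.mean(second_moms) near sigma^2 / (1 - phi^2) = E y_t^2.
```

The contrast between `lrv` ($= 4$ here) and $Ey_t^2$ ($\approx 1.33$ here) makes the lecture's point concrete: for serially correlated data, the variance of the scaled sum is governed by $\sigma^2 C(1)^2$, not by the marginal variance of $y_t$.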