Midterm Suggested Solutions

CUHK Dept. of Economics, ECON 4120, Spring 2011 (Sung Y. Park)

Q1

(a) In time series, autocorrelation measures the correlation between y_t and its lag y_{t-τ}. It is defined as

ρ(τ) = cov(y_t, y_{t-τ}) / sqrt( var(y_t) var(y_{t-τ}) ) = γ(τ) / γ(0).

Partial autocorrelation measures the association between y_t and y_{t-τ} after controlling for the effects of y_{t-1}, y_{t-2}, ..., y_{t-τ+1}. Thus autocorrelation captures both the direct and the indirect relationship between y_t and y_{t-τ}, whereas partial autocorrelation measures only the direct relation between y_t and y_{t-τ}. This explains why the autocorrelation at certain displacements can be positive while the partial autocorrelations at those same displacements are negative.

(b) Covariance stationarity requires 1) a time-invariant mean, 2) a time-invariant variance, and 3) an autocovariance that depends only on the time displacement. Given the autocovariance functions, we can check whether 2) and 3) are satisfied.

(i) From γ(t, τ) = α we see that γ(t, τ) is a constant independent of t, and γ(0) = α, a constant. Hence this autocovariance function is consistent with covariance stationarity.

(ii) From γ(t, τ) = e^{ατ} we see that γ(t, τ) depends only on the time displacement τ, and γ(0) = e^0 = 1, a constant. Hence this autocovariance function is consistent with covariance stationarity.

(iii) From γ(t, τ) = τα we see that γ(t, τ) depends only on the time displacement τ, and γ(0) = 0, a constant. Hence this autocovariance function is consistent with covariance stationarity.

Q2

The Wold theorem is important because any covariance stationary series can be written as a linear combination of its innovations. Since these innovations ε_t ~ WN(0, σ^2), we can easily model and forecast covariance stationary series and examine their unconditional and conditional moment structures using the properties of a white noise process. Although in the context of the Wold theorem {y_t} is a zero-mean series, this involves no loss of generality, since a zero-mean series can always be constructed as deviations from the mean, i.e. {y_t - μ}.

The Wold representation has infinitely many lags, which is not practical. However, an infinite polynomial can be approximated by a ratio of two finite-order polynomials (a rational polynomial). Hence

y_t = B(L) ε_t ≈ [Θ(L) / Φ(L)] ε_t,

where Θ(L) = Σ_{i=0}^{q} θ_i L^i and Φ(L) = Σ_{i=0}^{p} φ_i L^i.

For AR(p),

y_t = φ_1 y_{t-1} + ... + φ_p y_{t-p} + ε_t = φ_1 L y_t + φ_2 L^2 y_t + ... + φ_p L^p y_t + ε_t,   ε_t ~ WN(0, σ^2),

or

Φ(L) y_t = ε_t.

If {y_t} is covariance stationary, which requires that all roots of Φ(L) lie outside the unit circle, we can write

y_t = [1 / Φ(L)] ε_t,

which is a rational polynomial whose numerator polynomial is of degree 0. Therefore AR(p) is a special case of the Wold representation, under the condition that all roots of the lag-operator polynomial lie outside the unit circle.

For MA(q),

y_t = ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q} = Θ(L) ε_t,   ε_t ~ WN(0, σ^2),

which is a rational polynomial whose denominator polynomial is of degree 0. Therefore MA(q) is a special case of the Wold representation. No condition is needed here, since an MA(q) is always covariance stationary.

For ARMA(p,q),

y_t = φ_1 y_{t-1} + ... + φ_p y_{t-p} + ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q},   ε_t ~ WN(0, σ^2),

or

Φ(L) y_t = Θ(L) ε_t.

If {y_t} is covariance stationary, which requires that all roots of Φ(L) lie outside the unit circle, we can write

y_t = [Θ(L) / Φ(L)] ε_t,

which is exactly the rational-polynomial approximation used for the Wold representation. Therefore ARMA(p,q) is a special case of the Wold representation, under the condition that all roots of the AR lag-operator polynomial lie outside the unit circle.
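As a quick numerical illustration of the rational-polynomial idea, the following sketch (assuming the statsmodels package; the ARMA(1,1) coefficients 0.7 and 0.4 are hypothetical) expands Θ(L)/Φ(L) into its first few Wold (infinite MA) coefficients and compares them with the closed-form values b_0 = 1 and b_j = (φ + θ) φ^{j-1} for j ≥ 1:

import numpy as np
from statsmodels.tsa.arima_process import arma2ma

# Hypothetical ARMA(1,1): (1 - 0.7L) y_t = (1 + 0.4L) eps_t
ar = [1, -0.7]   # Phi(L) = 1 - 0.7L
ma = [1, 0.4]    # Theta(L) = 1 + 0.4L

# First 10 coefficients of the infinite MA polynomial B(L) = Theta(L)/Phi(L)
b = arma2ma(ar, ma, lags=10)
print(np.round(b, 4))

# Closed-form Wold coefficients for this ARMA(1,1)
print(np.round([1.0] + [(0.7 + 0.4) * 0.7 ** (j - 1) for j in range(1, 10)], 4))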

Q3

The number of observations in the sample is T = 47 (2000Q1-2011Q3), and we are asked to construct out-of-sample forecasts up to 5 steps ahead (2011Q4-2012Q4, h = 1, ..., 5).

For the period 2012Q4, TIME_{2012Q4} = T + h = 52, and the quarterly dummies are D_{1,2012Q4} = D_{2,2012Q4} = D_{3,2012Q4} = 0 and D_{4,2012Q4} = 1.

Point forecast: ŷ_{2012Q4,2011Q3} = ŷ_{52,47} = 2(52) + 0.2(0) + 0.7(0) + 0.2(0) + 0.6(1) = 104.6.

95% interval forecast: 104.6 ± 1.96 σ̂, where σ̂^2 = 16, i.e. y_{2012Q4} ∈ [96.76, 112.44].

Density forecast: y_{2012Q4,2011Q3} ~ N(104.6, 16).
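A short arithmetic check of these numbers (plain Python, using the coefficients in the solution above):

# Point forecast for 2012Q4: TIME = 52, D1 = D2 = D3 = 0, D4 = 1
point = 2 * 52 + 0.2 * 0 + 0.7 * 0 + 0.2 * 0 + 0.6 * 1
sigma = 16 ** 0.5                                  # sigma-hat, since sigma-hat^2 = 16
lower, upper = point - 1.96 * sigma, point + 1.96 * sigma
print(point, round(lower, 2), round(upper, 2))     # 104.6 96.76 112.44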

Q4

(a) A precise estimate requires an efficient estimator, and the OLS estimator is efficient if homoskedasticity and no serial correlation are satisfied. Hence, to check whether her estimates are precise, we only need to check these two conditions. To test for homoskedasticity, we can perform a joint LM test or White's test. To test for serial correlation, we can check the Durbin-Watson statistic for evidence of first-order serial correlation. More generally, we can plot the correlogram of the residuals and perform a Box-Pierce or Ljung-Box test to see whether ε_t is white noise.

(b) As mentioned in the question, ε_t has significant non-zero serial correlation, so we need to tackle the serial correlation problem. One solution is to use an ARMA model to capture the dynamics in the residuals. A simpler alternative is to add lagged dependent variables as regressors.

(c) If we model the residuals by an ARMA process (a code sketch of this procedure follows the steps below):

Step 1: Plot the correlogram of the residuals from the original model and see how many significant lags there are in the ACF and the PACF. Say we observe a significant ACF up to lag q̄ and a significant PACF up to lag p̄.

Step 2: Fit the residuals by ARMA(p, q) for p ∈ [0, p̄] and q ∈ [0, q̄], and estimate the model again. Compare the AIC and SIC of each specification and choose the one with the lowest AIC and SIC; say it is ARMA(p*, q*).

Step 3: Check the residuals of the model whose errors are modeled by ARMA(p*, q*) in the same way as in Step 1. In addition, perform a Box-Pierce or Ljung-Box test to seek further evidence of white noise residuals.

If the model passes the tests in Step 3, we can proceed with it and trust that it produces precise estimates. If it does not pass, we go back to Step 2 and choose the model with the second-lowest AIC and SIC, and repeat the entire process until we find a model with white noise residuals.
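A minimal Python sketch of Steps 2 and 3, assuming statsmodels and a vector of OLS residuals called ols_resid (a hypothetical name): it fits every ARMA(p, q) with p ≤ p̄ and q ≤ q̄, ranks the candidates by AIC and SIC (BIC), and then applies a Ljung-Box test to the residuals of the preferred specification.

from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

def rank_arma_candidates(resid, p_bar, q_bar):
    """Step 2: fit ARMA(p, q) to the residuals for all p <= p_bar, q <= q_bar,
    returning the candidates sorted by (AIC, BIC)."""
    candidates = []
    for p in range(p_bar + 1):
        for q in range(q_bar + 1):
            try:
                fit = ARIMA(resid, order=(p, 0, q), trend="n").fit()
            except Exception:
                continue  # skip specifications that fail to estimate
            candidates.append((p, q, fit.aic, fit.bic, fit))
    return sorted(candidates, key=lambda c: (c[2], c[3]))

# Step 3 (usage, with ols_resid the residuals from the original regression):
# candidates = rank_arma_candidates(ols_resid, p_bar=4, q_bar=4)
# p_star, q_star, aic, bic, best_fit = candidates[0]
# print(acorr_ljungbox(best_fit.resid, lags=[10], return_df=True))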

Q5

(a) Denote by ε̂_{T+h,T} the projection of ε_{T+h} on the time-T information set, so that ε̂_{T+2,T} = ε̂_{T+1,T} = 0, ε̂_{T,T} = ε_T and ε̂_{T-1,T} = ε_{T-1}. Then

ŷ_{T+2,T} = φ̂ ŷ_{T+1,T} + ε̂_{T+2,T} + θ̂_1 ε̂_{T+1,T} + θ̂_2 ε̂_{T,T}
          = φ̂ (φ̂ y_T + ε̂_{T+1,T} + θ̂_1 ε_T + θ̂_2 ε_{T-1}) + ε̂_{T+2,T} + θ̂_1 ε̂_{T+1,T} + θ̂_2 ε_T
          = φ̂^2 y_T + φ̂ θ̂_1 ε_T + φ̂ θ̂_2 ε_{T-1} + θ̂_2 ε_T
          = φ̂^2 y_T + (φ̂ θ̂_1 + θ̂_2) ε_T + φ̂ θ̂_2 ε_{T-1}.

(b) The 2-step-ahead forecast error is

e_{T+2,T} = y_{T+2} - ŷ_{T+2,T}
          = (φ y_{T+1} + ε_{T+2} + θ_1 ε_{T+1} + θ_2 ε_T) - (φ^2 y_T + (φ θ_1 + θ_2) ε_T + φ θ_2 ε_{T-1})
          = [φ (φ y_T + ε_{T+1} + θ_1 ε_T + θ_2 ε_{T-1}) + ε_{T+2} + θ_1 ε_{T+1} + θ_2 ε_T] - (φ^2 y_T + (φ θ_1 + θ_2) ε_T + φ θ_2 ε_{T-1})
          = (φ + θ_1) ε_{T+1} + ε_{T+2},

so

Var(e_{T+2,T}) = [(φ + θ_1)^2 + 1] σ^2.

Hence the 2-step-ahead 95% interval forecast is

φ̂^2 y_T + (φ̂ θ̂_1 + θ̂_2) ε_T + φ̂ θ̂_2 ε_{T-1} ± 1.96 sqrt((φ̂ + θ̂_1)^2 + 1) σ̂.
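These formulas are easy to verify by simulation. The sketch below (numpy only; the parameter values and the values at the forecast origin are hypothetical) draws the two future innovations many times from the same origin and compares the simulated mean and variance of y_{T+2} with the closed-form expressions above:

import numpy as np

rng = np.random.default_rng(0)
phi, th1, th2, sigma = 0.6, 0.4, 0.3, 1.0   # hypothetical ARMA(1,2) parameters
y_T, eps_T, eps_Tm1 = 1.0, 0.5, -0.2        # hypothetical values at the forecast origin

# Closed-form 2-step-ahead point forecast and forecast-error variance from Q5
point = phi**2 * y_T + (phi * th1 + th2) * eps_T + phi * th2 * eps_Tm1
var2 = ((phi + th1)**2 + 1) * sigma**2

# Draw eps_{T+1}, eps_{T+2} and roll the ARMA(1,2) recursion forward two steps
n = 200_000
e1 = rng.normal(0.0, sigma, n)
e2 = rng.normal(0.0, sigma, n)
y1 = phi * y_T + e1 + th1 * eps_T + th2 * eps_Tm1
y2 = phi * y1 + e2 + th1 * e1 + th2 * eps_T

print(point, y2.mean())   # point forecast vs simulated conditional mean
print(var2, y2.var())     # closed-form vs simulated forecast-error variance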

Q6

(a) On the one hand, the MSE is a biased estimator of the out-of-sample 1-step-ahead prediction error variance: it provides an overly optimistic assessment of the out-of-sample prediction error variance. On the other hand, the MSE does not penalize for the degrees of freedom used; it never rises when more independent variables are included in the model. Hence it may lead to incorrect model selection.

(b) The AIC and SIC penalize degrees of freedom through penalty factors that are functions of k/T, so a more parsimonious model can be chosen by the AIC and SIC. Moreover, the AIC is asymptotically efficient and the SIC is consistent. Therefore, as the sample size grows, we are more likely to select a good model that captures the true data-generating process.

(c) For the SIC to choose a simpler model than the AIC does, the SIC must penalize degrees of freedom more heavily than the AIC. Note that

AIC = e^{2k/T} σ̂^2 = (e^2)^{k/T} σ̂^2 ≈ 7.39^{k/T} σ̂^2,
SIC = T^{k/T} σ̂^2,

so SIC > AIC for all T ≥ 8. Therefore T should be at least 8 for the SIC to choose a simpler model than the AIC does.
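A quick tabulation of the two penalty factors (numpy only; k = 3 is an arbitrary illustrative choice, since the crossover at T = 8 does not depend on k):

import numpy as np

k = 3  # arbitrary number of estimated parameters; the comparison holds for any k >= 1
for T in (4, 6, 7, 8, 10, 50, 200):
    aic_factor = np.exp(2 * k / T)   # e^{2k/T} = (e^2)^{k/T}
    sic_factor = T ** (k / T)        # T^{k/T}
    print(T, round(aic_factor, 3), round(sic_factor, 3), sic_factor > aic_factor)
# The SIC factor exceeds the AIC factor exactly when T > e^2 (about 7.39), i.e. for all integer T >= 8.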

Q7

(a) Neither GNI nor EXP looks stationary. First, both series appear to have upward trends, so the constant-mean condition is violated; we should detrend them or take first differences. Second, there appears to be seasonality in both series, which violates the condition that the autocovariance depends only on the time displacement; we should deseasonalize them.

(b) Apply the Yule-Walker equations (a numerical check of the resulting system appears after part (d)). The model is

ε_t = ρ_1 ε_{t-1} + ρ_2 ε_{t-2} + η_t.

Multiply both sides by ε_{t-h} and take expectations:

E[ε_t ε_{t-h}] = ρ_1 E[ε_{t-1} ε_{t-h}] + ρ_2 E[ε_{t-2} ε_{t-h}] + E[η_t ε_{t-h}],

that is,

γ(h) = ρ_1 γ(h-1) + ρ_2 γ(h-2) + Cov(η_t, ε_{t-h}).

For h = 0,

γ(0) = ρ_1 γ(1) + ρ_2 γ(2) + σ^2,

since Cov(η_t, ε_t) = Cov(η_t, ρ_1 ε_{t-1} + ρ_2 ε_{t-2} + η_t) = Cov(η_t, η_t) = σ^2.

For h ≥ 1,

γ(h) = ρ_1 γ(h-1) + ρ_2 γ(h-2).

Taking h = 1 and h = 2 gives the system

γ̂(1) = ρ̂_1 γ̂(0) + ρ̂_2 γ̂(1),
γ̂(2) = ρ̂_1 γ̂(1) + ρ̂_2 γ̂(0).

Plugging in γ̂(0) = 0.5, γ̂(1) = 0.4 and γ̂(2) = 0.3 and solving the system, we get ρ̂_1 = 8/9 and ρ̂_2 = -1/9.

(c) Recursive residuals and the CUSUM test can be used to test parameter stability. Recursive residuals come from recursive parameter estimation. For the standard linear regression model

y_t = Σ_{i=1}^{k} β_i x_{i,t} + ε_t,   ε_t ~ iid N(0, σ^2),

instead of immediately using all the data to estimate the model, we begin the estimation with the first k observations and add one observation at a time until the sample is exhausted. At each t, t = k, ..., T-1, we compute a 1-step-ahead forecast

ŷ_{t+1,t} = Σ_{i=1}^{k} β̂_{i,t} x_{i,t+1}.

The corresponding forecast errors are the recursive residuals, ê_{t+1,t} = y_{t+1} - ŷ_{t+1,t}. Because parameter estimation error inflates the forecast-error variance,

ê_{t+1,t} ~ N(0, σ^2 r_t),

where r_t > 1 for all t and r_t is a function of the data. We can examine the plot of the recursive residuals together with the estimated two-standard-error bands (±2 σ̂ sqrt(r_t)). Under the null hypothesis of parameter stability, we expect all recursive residuals to lie inside the two-standard-error bands.

The CUSUM is the cumulative sum of the standardized recursive residuals,

CUSUM_t ≡ Σ_{τ=k}^{t} w_{τ+1,τ},   t = k, ..., T-1,   where w_{t+1,t} ≡ ê_{t+1,t} / (σ sqrt(r_t)) ~ iid N(0, 1).

We can examine the time series plot of the CUSUM and its 95% probability bounds. Under the null hypothesis of parameter stability, we expect the CUSUM not to cross the bounds at any point. From the results in Figure 2, although the recursive residuals do not reveal parameter instability, the CUSUM drops outside the lower band in 1997-1998, which raises an alert about parameter instability.

(d) Since we suspect that the true β is time-varying, we can propose a time-varying parameter model. Namely, the model can be modified to

ln(EXP_t) = α + β(t) ln(GNI_t) + ε_t,

where β(t) is a function of time t.
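Returning to part (b), the 2x2 Yule-Walker system can be solved numerically (numpy only, using the sample autocovariances given in the question):

import numpy as np

g0, g1, g2 = 0.5, 0.4, 0.3           # sample autocovariances gamma-hat(0), gamma-hat(1), gamma-hat(2)
A = np.array([[g0, g1],              # gamma(1) = rho1 * gamma(0) + rho2 * gamma(1)
              [g1, g0]])             # gamma(2) = rho1 * gamma(1) + rho2 * gamma(0)
rho1, rho2 = np.linalg.solve(A, np.array([g1, g2]))
print(rho1, rho2)                    # 0.888... = 8/9 and -0.111... = -1/9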

Bonus

1. Generate 1000 samples from the ARMA(1,1) process

y_t = 0.9 y_{t-1} + ε_t + ε_{t-1}.

The sample size is 500 for each sample.

2. Fit an ARMA(1,1) model to each sample, and save all the estimated parameters and the means of the residuals in the object estimate.
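The bonus question does not prescribe a language, so the following is only one possible sketch, using numpy and statsmodels. Note that the MA coefficient of 1, taken exactly as the process is written, sits on the invertibility boundary, so individual fits may produce convergence warnings.

import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(42)
n_rep, n_obs = 1000, 500
ar = [1, -0.9]   # Phi(L) = 1 - 0.9L, i.e. y_t = 0.9 y_{t-1} + ...
ma = [1, 1]      # Theta(L) = 1 + L, as the process is written in the question

estimate = []    # one row per replication: fitted parameters plus the mean of the residuals
for _ in range(n_rep):
    y = arma_generate_sample(ar, ma, nsample=n_obs, burnin=100)
    fit = ARIMA(y, order=(1, 0, 1), trend="n").fit()
    estimate.append(list(fit.params) + [float(np.mean(fit.resid))])

estimate = np.asarray(estimate)
print(estimate.mean(axis=0))   # average estimates across the 1000 replications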