Autoregressive and Moving-Average Models


Chapter 3
Autoregressive and Moving-Average Models

3.1 Introduction

Let y be a random variable. We consider the elements of an observed time series {y_0, y_1, y_2, ..., y_t} as realizations of this random variable. We also need the notion of a white-noise process. A sequence {ε_t} is a white-noise process if each value of the sequence has mean zero, has a constant variance, and is uncorrelated with all other realizations. Formally, {ε_t} is a white-noise process if, for each t,

E(ε_t) = E(ε_{t-1}) = ... = 0                                              (3.1)
E(ε_t^2) = E(ε_{t-1}^2) = ... = σ^2
E(ε_t ε_{t-s}) = E(ε_{t-j} ε_{t-j-s}) = 0    for all j and all s ≠ 0

For the rest of these notes, {ε_t} will always denote a white-noise process. Figure 3.1 illustrates a white-noise process generated in Stata with the following code:

clear
set obs 150
set seed 1000
gen time=_n
tsset time
gen white=invnorm(uniform())
twoway line white time, m(o) c(l) scheme(sj)  ///
    ytitle("white-noise")                     ///
    title("White-Noise Process")

Fig. 3.1 White-Noise Process, {ε_t}

3.2 Stationarity

A stochastic process is said to be covariance stationary if its mean, variance, and autocovariances are finite and do not depend on time. That is, for all t, s, and j,

E(y_t) = E(y_{t-s}) = µ                                                    (3.2)

E[(y_t - µ)^2] = E[(y_{t-s} - µ)^2] = σ_y^2                                (3.3)

E[(y_t - µ)(y_{t-s} - µ)] = E[(y_{t-j} - µ)(y_{t-j-s} - µ)] = γ_s          (3.4)

where µ, σ_y^2, and γ_s are all constants. For a covariance-stationary series, we can define the autocorrelation between y_t and y_{t-s} as

ρ_s = γ_s / γ_0                                                            (3.5)

where both γ_s and γ_0 are defined in Equation 3.4. Obviously, ρ_0 = 1.
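As a preview of tools covered in the next chapter, the sample counterpart of Equation 3.5 can be inspected directly in Stata. The sketch below assumes the white series generated in Section 3.1 is still in memory; for a white-noise process, the sample autocorrelations reported by corrgram should all be close to zero.

* sample autocorrelations of the simulated white-noise series
corrgram white, lags(10)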

3.3 The Moving-Average Processes

3.3.1 The MA(1) Process

The first-order moving-average process, or MA(1) process, is

y_t = ε_t + θ ε_{t-1} = (1 + θL) ε_t                                       (3.6)

The MA(1) process expresses an observed series as a function of the current and lagged unobserved shocks. To develop an intuition for the behavior of the MA(1) process, we show the following three simulated realizations:

y1_t = ε_t + 0.08 ε_{t-1} = (1 + 0.08L) ε_t                                (3.7)
y2_t = ε_t + 0.98 ε_{t-1} = (1 + 0.98L) ε_t
y3_t = ε_t - 0.98 ε_{t-1} = (1 - 0.98L) ε_t

In the first two variables (y1 and y2), past shocks feed positively into the current value of the series, with a small weight of θ = 0.08 in the first case and a large weight of θ = 0.98 in the second case. One might think that the second case would produce a more persistent series, but it does not. The structure of the MA(1) process, in which only the first lag of the shock appears on the right-hand side, forces it to have a very short memory, and hence weak dynamics. Figure 3.2 illustrates the generated series y1_t and y2_t. The figure shows the weak dynamics of MA(1) processes; it also shows that the y2 series is more volatile than y1. Following the previous Stata code, we can generate Figure 3.2 with:

gen Y1 = white+0.08*l.white
gen Y2 = white+0.98*l.white
twoway (line Y1 Y2 time, clcolor(blue red)), scheme(sj)  ///
    ytitle("Y1 and Y2")                                  ///
    title("Two MA(1) Processes")

Fig. 3.2 Two MA(1) Processes

It is easy to see that the unconditional mean and variance of an MA(1) process are

E(y_t) = E(ε_t) + θ E(ε_{t-1})                                             (3.8)
       = 0

and

var(y_t) = var(ε_t) + θ^2 var(ε_{t-1})                                     (3.9)
         = σ^2 + θ^2 σ^2
         = σ^2 (1 + θ^2)

Notice that for a given σ^2, as θ increases in absolute value, the unconditional variance increases as well. This explains why y2 is more volatile than y1 (a quick numerical check is sketched at the end of this subsection). The conditional mean and variance of an MA(1), where the conditioning information set is Ω_{t-1} = {ε_{t-1}, ε_{t-2}, ...}, are

E(y_t | Ω_{t-1}) = E(ε_t + θ ε_{t-1} | Ω_{t-1})                            (3.10)
                 = E(ε_t | Ω_{t-1}) + E(θ ε_{t-1} | Ω_{t-1})
                 = θ ε_{t-1}

and

var(y_t | Ω_{t-1}) = E((y_t - E(y_t | Ω_{t-1}))^2 | Ω_{t-1})               (3.11)
                   = E(ε_t^2 | Ω_{t-1})
                   = E(ε_t^2)
                   = σ^2

The conditional mean explicitly adapts to the information set, in contrast to the unconditional mean, which is constant. We will return to the y1, y2, and y3 series once we study the autocorrelation and partial autocorrelation functions.
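As that quick check, the sketch below (assuming Y1, Y2, and the unit-variance white series are still in memory) compares the sample standard deviations of the two series with the values implied by Equation 3.9: with σ = 1, sd(y1) = sqrt(1 + 0.08^2) ≈ 1.00 and sd(y2) = sqrt(1 + 0.98^2) ≈ 1.40, so in a reasonably long sample Y2 should show roughly 40% more dispersion than Y1.

display sqrt(1 + 0.08^2)   // theoretical sd of Y1 when sigma = 1
display sqrt(1 + 0.98^2)   // theoretical sd of Y2 when sigma = 1
summarize Y1 Y2            // sample counterparts from the 150 simulated observations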

3.3.2 The MA(q) Process

The general finite-order moving-average process of order q, or MA(q), is

y_t = ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q} = B(L) ε_t       (3.12)

where

B(L) = 1 + θ_1 L + θ_2 L^2 + ... + θ_q L^q

is a qth-order lag-operator polynomial. The MA(q) process is a natural generalization of the MA(1). By allowing for more lags of the shocks on the right-hand side of the equation, the MA(q) process can capture richer dynamic patterns.
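To make this concrete, the same white-noise draws used above can be fed through a higher-order moving average. The snippet below is only a sketch: the variable name Yma2 and the coefficients θ_1 = 0.5 and θ_2 = 0.3 are hypothetical, chosen to illustrate the q = 2 case of Equation 3.12 (the first two observations are lost to the lags).

* hypothetical MA(2) example: y_t = e_t + 0.5 e_{t-1} + 0.3 e_{t-2}
gen Yma2 = white + 0.5*l.white + 0.3*l2.white
twoway line Yma2 time, scheme(sj) title("An MA(2) Process")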

3.4 The Autoregressive Processes

3.4.1 The AR(1) Process

The autoregressive process has a simple motivation: it is a stochastic difference equation in which the current value of a series is linearly related to its past values, plus an additive stochastic shock. The first-order autoregressive process, AR(1), is

y_t = ϕ y_{t-1} + ε_t                                                      (3.13)

which can be written as

(1 - ϕL) y_t = ε_t                                                         (3.14)

To illustrate the dynamics of different AR(1) processes, we simulate realizations of the following four AR(1) processes:

z1_t = +0.9 z1_{t-1} + ε_t                                                 (3.15)
z2_t = +0.2 z2_{t-1} + ε_t
z3_t = -0.9 z3_{t-1} + ε_t
z4_t = -0.2 z4_{t-1} + ε_t

where we keep the innovation sequence {ε_t} the same in each case. Figure 3.3 illustrates the time-series graph of the z1_t and z2_t series, while Figure 3.4 illustrates z3_t and z4_t. These two figures were obtained using:

gen Z1 = 0
gen Z2 = 0
gen Z3 = 0
gen Z4 = 0
replace Z1 = +0.9*l.Z1 + white if time > 1
replace Z2 = +0.2*l.Z2 + white if time > 1
replace Z3 = -0.9*l.Z3 + white if time > 1
replace Z4 = -0.2*l.Z4 + white if time > 1
twoway (line Z1 Z2 time, clcolor(blue red)), scheme(sj)  ///
    ytitle("Z1 and Z2")                                  ///
    title("Two AR(1) Processes")
twoway (line Z3 Z4 time, clcolor(blue red)), scheme(sj)  ///
    ytitle("Z3 and Z4")                                  ///
    title("Two AR(1) Processes")

Fig. 3.3 Two AR(1) Processes (Z1 and Z2)
Fig. 3.4 Two AR(1) Processes (Z3 and Z4)

From the first figure we can see that the fluctuations in the AR(1) with parameter ϕ = 0.9 appear much more persistent than those of the AR(1) with parameter ϕ = 0.2. This contrasts sharply with the MA(1) process, which has a very short memory regardless of the parameter value. Hence, the AR(1) model is capable of capturing much more persistent dynamics.
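As a rough check of this difference in persistence (not a formal test), the sample autocorrelations of the two simulated series can be compared, assuming Z1 and Z2 are still in memory: for ϕ = 0.9 the autocorrelations should die out slowly, while for ϕ = 0.2 they should be close to zero after the first lag.

corrgram Z1, lags(10)   // persistent series, phi = 0.9
corrgram Z2, lags(10)   // weakly persistent series, phi = 0.2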

Figure 3.4 shows that the sign of ϕ is also critical for the dynamics of an AR(1) process. With a positive ϕ, a positive value is most likely followed by another positive value. With a negative ϕ, however, the series quickly switches from positive to negative and vice versa. Finally, when ϕ is negative, the dispersion of the series is larger the larger |ϕ| is.

Let us begin with a simple AR(1) process,

y_t = ϕ y_{t-1} + ε_t                                                      (3.16)

If we substitute backwards for the lagged y's on the right-hand side (y_{t-1} = ϕ y_{t-2} + ε_{t-1}, and so on) and use the lag operator, we can write

y_t = ε_t + ϕ ε_{t-1} + ϕ^2 ε_{t-2} + ...
y_t = [1 / (1 - ϕL)] ε_t

which is the moving-average representation of y. It is convergent if and only if |ϕ| < 1, which is the covariance-stationarity condition for the AR(1) case.

From the moving-average representation of the covariance-stationary AR(1) process, we can obtain the unconditional mean and variance,

E(y_t) = E(ε_t + ϕ ε_{t-1} + ϕ^2 ε_{t-2} + ...)                            (3.17)
       = E(ε_t) + ϕ E(ε_{t-1}) + ϕ^2 E(ε_{t-2}) + ...
       = 0

and

var(y_t) = var(ε_t + ϕ ε_{t-1} + ϕ^2 ε_{t-2} + ...)                        (3.18)
         = σ^2 + ϕ^2 σ^2 + ϕ^4 σ^2 + ...
         = σ^2 Σ_{i=0}^∞ ϕ^{2i}
         = σ^2 / (1 - ϕ^2)

The conditional moments are

E(y_t | y_{t-1}) = E(ϕ y_{t-1} + ε_t | y_{t-1})                            (3.19)
                 = ϕ E(y_{t-1} | y_{t-1}) + E(ε_t | y_{t-1})
                 = ϕ y_{t-1} + 0
                 = ϕ y_{t-1}

and

var(y_t | y_{t-1}) = var(ϕ y_{t-1} + ε_t | y_{t-1})                        (3.20)
                   = ϕ^2 var(y_{t-1} | y_{t-1}) + var(ε_t | y_{t-1})
                   = 0 + σ^2
                   = σ^2

It is important to note how the conditional mean adapts to the changing information set as the process evolves.

3.4.2 The AR(p) Process

The general pth-order autoregressive process, AR(p), is

y_t = ϕ_1 y_{t-1} + ϕ_2 y_{t-2} + ... + ϕ_p y_{t-p} + ε_t

Using the lag operator, we have

A(L) y_t = (1 - ϕ_1 L - ϕ_2 L^2 - ... - ϕ_p L^p) y_t = ε_t

The AR(p) process is covariance stationary if and only if the inverses of all roots of the autoregressive lag-operator polynomial A(L) are inside the unit circle. A necessary condition for covariance stationarity is Σ_{i=1}^p ϕ_i < 1. If this condition is satisfied, the process may or may not be stationary; if it is violated, the process cannot be stationary. In the covariance-stationary case, we can write the process in the convergent infinite moving-average form

y_t = [1 / A(L)] ε_t
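As a worked illustration of the root condition (a hypothetical example, not taken from the simulations above), consider an AR(2) with ϕ_1 = 1.1 and ϕ_2 = -0.18, so that A(L) = 1 - 1.1L + 0.18L^2. The necessary condition holds, since 1.1 - 0.18 = 0.92 < 1, and the roots can be checked numerically with Mata's polyroots() function: they are about 1.11 and 5, with inverses 0.9 and 0.2, both inside the unit circle, so this AR(2) is covariance stationary.

mata:
    // roots of A(L) = 1 - 1.1*L + 0.18*L^2, hypothetical AR(2) coefficients
    r = polyroots((1, -1.1, 0.18))
    r          // roots of A(L)
    1 :/ r     // inverse roots: 0.9 and 0.2, inside the unit circle
end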

3.5 Autoregressive Moving Average (ARMA) Models

The random shock that drives an autoregressive process may itself be a moving-average process, in which case the most appropriate model is an ARMA process. An ARMA process is simply the combination of an AR and an MA process; a process with p autoregressive and q moving-average terms is written ARMA(p,q). The simplest ARMA process is the ARMA(1,1), given by

y_t = ϕ y_{t-1} + ε_t + θ ε_{t-1}

or, in lag-operator form,

(1 - ϕL) y_t = (1 + θL) ε_t

where |ϕ| < 1 is required for stationarity and |θ| < 1 for invertibility. If the covariance-stationarity condition is satisfied, we have the moving-average representation

y_t = [(1 + θL) / (1 - ϕL)] ε_t

which is an infinite distributed lag of current and past innovations. Likewise, if the invertibility condition is satisfied, we have the autoregressive representation

[(1 - ϕL) / (1 + θL)] y_t = ε_t

The ARMA(p,q) process is given by

y_t = ϕ_1 y_{t-1} + ... + ϕ_p y_{t-p} + ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q}

or, in its lag-operator form,

A(L) y_t = B(L) ε_t

where A(L) = 1 - ϕ_1 L - ϕ_2 L^2 - ... - ϕ_p L^p and B(L) = 1 + θ_1 L + θ_2 L^2 + ... + θ_q L^q. If the inverses of all roots of A(L) are inside the unit circle, then the process is covariance stationary and has the convergent infinite moving-average representation

y_t = [B(L) / A(L)] ε_t

If the inverses of all roots of B(L) are inside the unit circle, then the process is invertible and has the convergent infinite autoregressive representation

[A(L) / B(L)] y_t = ε_t

As with autoregressions and moving averages, ARMA processes have a fixed unconditional mean but a time-varying conditional mean.
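Before turning to estimation, an ARMA(1,1) series can be simulated in the same recursive way as the AR(1) examples above, reusing the white-noise innovations already in memory. The parameters ϕ = 0.5 and θ = 0.3 and the variable name W below are hypothetical, chosen only for illustration.

* hypothetical ARMA(1,1): w_t = 0.5 w_{t-1} + e_t + 0.3 e_{t-1}
gen W = 0
replace W = 0.5*l.W + white + 0.3*l.white if time > 1
twoway line W time, scheme(sj) title("An ARMA(1,1) Process")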

3.6 ARMA Models in Stata

The mechanics of estimating ARMA models in Stata are simple. The key step, however, is model selection, which will be covered in detail in the next chapter once we introduce the autocorrelation and partial autocorrelation functions. For now, consider the following example from Enders (2004, pages 87-93), also discussed in the Stata manual. Let y denote the first difference of the logarithm of the U.S. Wholesale Price Index (WPI). We have quarterly data over the period 1960q1 through 1990q4. The Stata commands and output are:

use http://www.stata-press.com/data/r11/wpi1
arima D.ln_wpi, ar(1) ma(1)

(setting optimization to BHHH)
Iteration 0:   log likelihood =  378.88646
Iteration 4:   log likelihood =  382.41728
(switching optimization to BFGS)
Iteration 5:   log likelihood =  382.42198
Iteration 8:   log likelihood =  382.42714

ARIMA regression

Sample:  1960q2 - 1990q4                        Number of obs      =       123
                                                Wald chi2(2)       =    509.04
Log likelihood =  382.4271                      Prob > chi2        =    0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0108226   .0054612     1.98   0.048     .0001189    .0215263
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .8832466   .0428881    20.59   0.000     .7991874    .9673058
          ma |
         L1. |  -.4771587   .0920432    -5.18   0.000      -.65756   -.2967573
-------------+----------------------------------------------------------------
      /sigma |   .0107717   .0004533    23.76   0.000     .0098832    .0116601
------------------------------------------------------------------------------

Either of the following two commands yields the same output:

arima D.ln_wpi, arima(1,0,1)
arima ln_wpi, arima(1,1,1)

Moreover, an ARMA(1,4) model can be estimated using

arima D.ln_wpi, ar(1) ma(1/4)

The capital letter I in ARIMA denotes the order of integration, which will be covered in detail in Chapter 6. For now we assume that the first difference of the logarithm of the WPI is integrated of order zero, so the ARIMA model is just an ARMA. Finally, you can try estimating the MA(1) and AR(1) processes simulated in Equations 3.7 and 3.15; a sketch follows.
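As a sketch of that exercise, assuming the simulated variables Y2 (the MA(1) with θ = 0.98) and Z1 (the AR(1) with ϕ = 0.9) are still in memory, both can be fit with arima. Since the simulated processes have zero mean, the noconstant option is a natural choice; with only 150 observations the estimates will only be roughly close to the true values of 0.98 and 0.9.

arima Y2, ma(1) noconstant
arima Z1, ar(1) noconstant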