Chapter 4: Models for Stationary Time Series

Chapter 4: Models for Stationary Time Series. Now we will introduce some useful parametric models for time series that are stationary processes. We begin by defining the general linear process. Let $\{Y_t\}$ be our observed time series and let $\{e_t\}$ be a white noise process (consisting of iid zero-mean r.v.'s). $\{Y_t\}$ is a general linear process if it can be represented by
$$Y_t = e_t + \psi_1 e_{t-1} + \psi_2 e_{t-2} + \cdots$$
where $e_t, e_{t-1}, \ldots$ are white noise. So this process is a weighted linear combination of present and past white noise terms.

More on the General Linear Process. When the number of terms is actually infinite, we need some regularity condition on the coefficients, such as $\sum_{i=1}^{\infty} \psi_i^2 < \infty$. Often we assume the weights are exponentially decaying, i.e., $\psi_j = \phi^j$, where $-1 < \phi < 1$. Then
$$Y_t = e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \cdots$$

Properties of the General Linear Process. Since the white noise terms all have mean zero, clearly $E(Y_t) = 0$ for all $t$. Also,
$$\operatorname{var}(Y_t) = \operatorname{var}(e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \cdots) = \operatorname{var}(e_t) + \phi^2\operatorname{var}(e_{t-1}) + \phi^4\operatorname{var}(e_{t-2}) + \cdots = \sigma_e^2(1 + \phi^2 + \phi^4 + \cdots) = \frac{\sigma_e^2}{1-\phi^2}.$$

More Properties of the General Linear Process.
$$\operatorname{cov}(Y_t, Y_{t-1}) = \operatorname{cov}(e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \cdots,\; e_{t-1} + \phi e_{t-2} + \phi^2 e_{t-3} + \cdots) = \operatorname{cov}(\phi e_{t-1}, e_{t-1}) + \operatorname{cov}(\phi^2 e_{t-2}, \phi e_{t-2}) + \cdots = \phi\sigma_e^2 + \phi^3\sigma_e^2 + \phi^5\sigma_e^2 + \cdots = \phi\sigma_e^2(1 + \phi^2 + \phi^4 + \cdots) = \frac{\phi\sigma_e^2}{1-\phi^2}.$$
Hence $\operatorname{corr}(Y_t, Y_{t-1}) = \phi$. Similarly, $\operatorname{cov}(Y_t, Y_{t-k}) = \dfrac{\phi^k\sigma_e^2}{1-\phi^2}$ and $\operatorname{corr}(Y_t, Y_{t-k}) = \phi^k$.
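
The notes refer repeatedly to R examples, so here is a minimal R sketch (not part of the original slides) that simulates a general linear process with exponentially decaying weights by truncating the infinite sum, then checks the sample variance and lag-1 autocorrelation against the formulas just derived. The values of phi, sigma_e, the series length, and the truncation point are illustrative choices.

```r
## Simulate a (truncated) general linear process Y_t = e_t + phi*e_{t-1} + phi^2*e_{t-2} + ...
## The infinite sum is truncated at m lags, which is harmless here since phi^j decays quickly.
set.seed(123)
phi     <- 0.8    # weight decay rate, |phi| < 1 (illustrative)
sigma_e <- 1      # white noise standard deviation
n       <- 2000   # length of the simulated series
m       <- 50     # truncation point for the infinite sum
e   <- rnorm(n + m, mean = 0, sd = sigma_e)   # white noise e_t
psi <- phi^(0:m)                              # weights 1, phi, phi^2, ..., phi^m
Y   <- sapply((m + 1):(n + m), function(t) sum(psi * e[t:(t - m)]))
var(Y)              # should be near sigma_e^2 / (1 - phi^2) = 2.78
cor(Y[-1], Y[-n])   # lag-1 autocorrelation, near phi = 0.8
```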

Stationarity of the General Linear Process. Thus we see this process is stationary: the expected value is constant over time, and the covariance depends only on the lag $k$ and not on the actual time $t$. To obtain a process with some (constant) nonzero mean, we can just add a term $\mu$ to the definition. This does not affect the autocovariance structure, so such a process is still stationary.

Moving Average Processes. This is a special case of the general linear process. A moving average process of order $q$, denoted MA($q$), is defined as
$$Y_t = e_t - \theta_1 e_{t-1} - \theta_2 e_{t-2} - \cdots - \theta_q e_{t-q}.$$
The simplest moving average process is the (first-order) MA(1) process:
$$Y_t = e_t - \theta e_{t-1}.$$
Note that we do not need the subscript on $\theta$ since there is only one of them in this model.

Properties of the MA(1) Process. It is easy to see that $E(Y_t) = 0$ and $\operatorname{var}(Y_t) = \sigma_e^2(1+\theta^2)$. Furthermore,
$$\operatorname{cov}(Y_t, Y_{t-1}) = \operatorname{cov}(e_t - \theta e_{t-1},\; e_{t-1} - \theta e_{t-2}) = \operatorname{cov}(-\theta e_{t-1}, e_{t-1}) = -\theta\sigma_e^2.$$
And $\operatorname{cov}(Y_t, Y_{t-2}) = \operatorname{cov}(e_t - \theta e_{t-1},\; e_{t-2} - \theta e_{t-3}) = 0$, since we see there are no subscripts in common. Similarly, $\operatorname{cov}(Y_t, Y_{t-k}) = 0$ for any $k \ge 2$, so in the MA(1) process, observations farther apart than 1 time unit are uncorrelated. Clearly, $\operatorname{corr}(Y_t, Y_{t-1}) = \rho_1 = -\theta/(1+\theta^2)$ for the MA(1) process.

Lag-1 Autocorrelation for the MA(1) Process. We see that the value of $\operatorname{corr}(Y_t, Y_{t-1}) = \rho_1$ depends on what $\theta$ is. The largest value that $\rho_1$ can be is $0.5$ (when $\theta = -1$) and the smallest value that $\rho_1$ can be is $-0.5$ (when $\theta = 1$). Some examples: when $\theta = 0.1$, $\rho_1 = -0.099$; when $\theta = 0.5$, $\rho_1 = -0.40$; when $\theta = 0.9$, $\rho_1 = -0.497$ (see R example plots). Just reverse the signs when $\theta$ is negative: when $\theta = -0.1$, $\rho_1 = 0.099$; when $\theta = -0.5$, $\rho_1 = 0.40$; when $\theta = -0.9$, $\rho_1 = 0.497$. Note that the lag-1 autocorrelation will be the same for the reciprocal of $\theta$ as for $\theta$ itself. Typically we will restrict attention to values of $\theta$ between $-1$ and $1$ for reasons of invertibility (more later).
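
As a quick check of these numbers, the following R sketch (not from the slides) evaluates $\rho_1 = -\theta/(1+\theta^2)$ for several values of $\theta$ and compares it with ARMAacf(). Note that R's ARMAacf() and arima.sim() parameterize the MA part with plus signs, $Y_t = e_t + \theta e_{t-1}$, so the notes' $\theta$ enters with its sign flipped.

```r
## Lag-1 autocorrelation of the MA(1) process Y_t = e_t - theta*e_{t-1}.
rho1 <- function(theta) -theta / (1 + theta^2)
thetas <- c(0.1, 0.5, 0.9, 2)
round(sapply(thetas, rho1), 3)
## -0.099 -0.400 -0.497 -0.400   (theta = 2 gives the same rho1 as theta = 1/2)

## Same values from R's ARMAacf(); pass -theta because R uses the "+" convention.
round(sapply(thetas, function(th) ARMAacf(ma = -th, lag.max = 1)[2]), 3)
```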

Second-order Moving Average Process. A moving average of order 2, denoted MA(2), is defined as
$$Y_t = e_t - \theta_1 e_{t-1} - \theta_2 e_{t-2}.$$
It can be shown that, for the MA(2) process,
$$\gamma_0 = \operatorname{var}(Y_t) = (1 + \theta_1^2 + \theta_2^2)\sigma_e^2,$$
$$\gamma_1 = \operatorname{cov}(Y_t, Y_{t-1}) = (-\theta_1 + \theta_1\theta_2)\sigma_e^2,$$
$$\gamma_2 = \operatorname{cov}(Y_t, Y_{t-2}) = -\theta_2\sigma_e^2.$$

Autocorrelations for the Second-order Moving Average Process. The autocorrelation formulas can be found in the usual way from the autocovariance and variance formulas. For the specific case when $\theta_1 = 1$ and $\theta_2 = -0.6$, $\rho_1 = -0.678$ and $\rho_2 = 0.254$, and $\rho_k = 0$ for $k = 3, 4, \ldots$. The strong negative lag-1 autocorrelation, weakly positive lag-2 autocorrelation, and zero lag-3 autocorrelation can be seen in plots of $Y_t$ versus $Y_{t-1}$, $Y_t$ versus $Y_{t-2}$, etc., from a simulated MA(2) process.
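
A brief R check of these values (not in the original notes): compute the MA(2) autocovariances from the formulas above and compare the implied autocorrelations with ARMAacf(), again flipping the sign of the $\theta$'s for R's parameterization.

```r
## MA(2) with theta1 = 1, theta2 = -0.6 (the example from the notes), sigma_e^2 = 1.
theta1 <- 1; theta2 <- -0.6; sigma2 <- 1
gamma0 <- (1 + theta1^2 + theta2^2) * sigma2
gamma1 <- (-theta1 + theta1 * theta2) * sigma2
gamma2 <- -theta2 * sigma2
round(c(rho1 = gamma1 / gamma0, rho2 = gamma2 / gamma0), 3)   # -0.678  0.254
round(ARMAacf(ma = c(-theta1, -theta2), lag.max = 4), 3)      # same, and 0 beyond lag 2
```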

Extension to the MA($q$) Process. For the moving average of order $q$, denoted MA($q$),
$$Y_t = e_t - \theta_1 e_{t-1} - \theta_2 e_{t-2} - \cdots - \theta_q e_{t-q},$$
it can be shown that
$$\gamma_0 = \operatorname{var}(Y_t) = (1 + \theta_1^2 + \theta_2^2 + \cdots + \theta_q^2)\sigma_e^2.$$
The autocorrelations $\rho_k$ are zero for $k > q$ and are quite flexible, depending on $\theta_1, \theta_2, \ldots, \theta_q$, for the earlier lags $k \le q$.

Autoregressive Processes. Autoregressive (AR) processes take the form of a regression of $Y_t$ on itself, or more accurately on past values of the process:
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + e_t,$$
where $e_t$ is independent of $Y_{t-1}, Y_{t-2}, \ldots, Y_{t-p}$. So the value of the process at time $t$ is a linear combination of past values of the process, plus some independent disturbance or innovation term.

The First-order Autoregressive Process. The AR(1) process is a stationary process with
$$Y_t = \phi Y_{t-1} + e_t$$
(note we do not need a subscript on the $\phi$ here). Without loss of generality, we can assume $E(Y_t) = 0$ (if not, we could replace $Y_t$ with $Y_t - \mu$ everywhere). Note that $\gamma_0 = \operatorname{var}(Y_t) = \phi^2\operatorname{var}(Y_{t-1}) + \operatorname{var}(e_t)$, so that $\gamma_0 = \phi^2\gamma_0 + \sigma_e^2$, and we see
$$\gamma_0 = \frac{\sigma_e^2}{1-\phi^2},$$
where $\phi^2 < 1$, i.e., $|\phi| < 1$.

Autocovariances of the AR(1) Process. Multiplying the AR(1) model equation by $Y_{t-k}$ and taking expected values, we have
$$E(Y_{t-k} Y_t) = \phi E(Y_{t-k} Y_{t-1}) + E(e_t Y_{t-k}),$$
i.e., $\gamma_k = \phi\gamma_{k-1} + E(e_t Y_{t-k}) = \phi\gamma_{k-1}$, since $e_t$ and $Y_{t-k}$ are independent and (each) have mean 0. Since $\gamma_k = \phi\gamma_{k-1}$, then for $k = 1$, $\gamma_1 = \phi\gamma_0 = \phi\sigma_e^2/(1-\phi^2)$. For $k = 2$, we get $\gamma_2 = \phi\gamma_1 = \phi^2\sigma_e^2/(1-\phi^2)$. In general, $\gamma_k = \phi^k\sigma_e^2/(1-\phi^2)$.

Autocorrelations of the AR(1) Process. Since $\rho_k = \gamma_k/\gamma_0$, we see $\rho_k = \phi^k$ for $k = 1, 2, 3, \ldots$. Since $|\phi| < 1$, the autocorrelation gets closer to zero (weaker) as the number of lags increases. If $0 < \phi < 1$, all the autocorrelations are positive. Example: the correlation between $Y_t$ and $Y_{t-1}$ may be strong, but the correlation between $Y_t$ and $Y_{t-8}$ will be much weaker. So the value of the process is associated with very recent values much more than with values far in the past.
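
To see $\rho_k = \phi^k$ empirically, here is a small R sketch (not part of the slides; $\phi = 0.7$ and the series length are illustrative) that simulates an AR(1) and compares the sample ACF with the theoretical one.

```r
## Simulate an AR(1) with phi = 0.7 and compare sample vs. theoretical autocorrelations.
set.seed(42)
phi <- 0.7
y <- arima.sim(model = list(ar = phi), n = 5000)
round(acf(y, lag.max = 8, plot = FALSE)$acf[2:9], 3)   # sample ACF at lags 1..8
round(phi^(1:8), 3)                                    # theoretical rho_k = phi^k
```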

More on Autocorrelations of the AR(1) Process. If $-1 < \phi < 0$, the lag-1 autocorrelation is negative, and the signs of the autocorrelations alternate between positive and negative over the further lags. For $\phi$ near $1$, the overall graph of the process will appear smooth, while for $\phi$ near $-1$, the overall graph of the process will appear jagged. See the R plots for examples.

The AR(1) Model as a General Linear Process. Recall that the AR(1) model implies $Y_t = \phi Y_{t-1} + e_t$, and also that $Y_{t-1} = \phi Y_{t-2} + e_{t-1}$. Substituting, we have $Y_t = \phi(\phi Y_{t-2} + e_{t-1}) + e_t$, so that $Y_t = e_t + \phi e_{t-1} + \phi^2 Y_{t-2}$. Repeating this by substituting into the past infinitely often, we can represent the process as
$$Y_t = e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \phi^3 e_{t-3} + \cdots$$
This is in the form of the general linear process, with $\psi_j = \phi^j$ (we require that $|\phi| < 1$).

Stationarity. For an AR(1) process, it can be shown that the process is stationary if and only if $|\phi| < 1$. For an AR(2) process, one following $Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + e_t$, we consider the AR characteristic equation
$$1 - \phi_1 x - \phi_2 x^2 = 0.$$
The AR(2) process is stationary if and only if the solutions of the AR characteristic equation exceed 1 in absolute value, i.e., if and only if $\phi_1 + \phi_2 < 1$, $\phi_2 - \phi_1 < 1$, and $|\phi_2| < 1$.
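
In R, this condition can be checked directly with polyroot(), which returns the roots of a polynomial given its coefficients in increasing order of power. A small sketch with illustrative coefficients (not from the slides):

```r
## Stationarity check for an AR(2): roots of 1 - phi1*x - phi2*x^2 = 0 must all
## exceed 1 in modulus (equivalently, the three inequalities above must hold).
phi1 <- 0.5; phi2 <- 0.3                 # illustrative values
roots <- polyroot(c(1, -phi1, -phi2))    # coefficients of 1, x, x^2
Mod(roots)                               # about 1.17 and 2.84: both > 1, so stationary
c(phi1 + phi2 < 1, phi2 - phi1 < 1, abs(phi2) < 1)   # all TRUE, consistent
```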

Autocorrelations and Variance of the AR(2) Process. The formulas for the lag-$k$ autocorrelation $\rho_k$ and the variance $\gamma_0 = \operatorname{var}(Y_t)$ of an AR(2) process are complicated and depend on $\phi_1$ and $\phi_2$. The key things to note are: the autocorrelation $\rho_k$ dies out toward 0 as the lag $k$ increases; and the autocorrelation function can have a wide variety of shapes, depending on the values of $\phi_1$ and $\phi_2$ (see R examples).

The General Autoregressive Process. For an AR($p$) process
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + e_t,$$
the AR characteristic equation is
$$1 - \phi_1 x - \phi_2 x^2 - \cdots - \phi_p x^p = 0.$$
The AR($p$) process is stationary if and only if the solutions of the AR characteristic equation exceed 1 in absolute value.

Autocorrelations in the General AR Process. If the process is stationary, we may form what are called the Yule-Walker equations:
$$\rho_1 = \phi_1 + \phi_2\rho_1 + \phi_3\rho_2 + \cdots + \phi_p\rho_{p-1}$$
$$\rho_2 = \phi_1\rho_1 + \phi_2 + \phi_3\rho_1 + \cdots + \phi_p\rho_{p-2}$$
$$\vdots$$
$$\rho_p = \phi_1\rho_{p-1} + \phi_2\rho_{p-2} + \phi_3\rho_{p-3} + \cdots + \phi_p$$
and solve numerically for the autocorrelations $\rho_1, \rho_2, \ldots, \rho_p$.
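
For the special case $p = 2$, the Yule-Walker equations reduce to $\rho_1 = \phi_1 + \phi_2\rho_1$ and $\rho_2 = \phi_1\rho_1 + \phi_2$, a small linear system. A minimal R sketch (the coefficients are illustrative, not from the slides) solves it and compares the answer with ARMAacf():

```r
## Yule-Walker equations for an AR(2), written as a linear system in (rho1, rho2):
##   (1 - phi2)*rho1 + 0*rho2 = phi1
##       -phi1*rho1 + 1*rho2 = phi2
phi1 <- 0.5; phi2 <- 0.3
A <- matrix(c(1 - phi2, 0,
              -phi1,    1), nrow = 2, byrow = TRUE)
b <- c(phi1, phi2)
solve(A, b)                               # rho1 = 0.714, rho2 = 0.657
ARMAacf(ar = c(phi1, phi2), lag.max = 2)  # agrees at lags 1 and 2
```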

The Mixed Autoregressive Moving Average (ARMA) Model. Consider a time series that has both autoregressive and moving average components:
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + e_t - \theta_1 e_{t-1} - \theta_2 e_{t-2} - \cdots - \theta_q e_{t-q}.$$
This is called an autoregressive moving average process of order $p$ and $q$, or an ARMA($p, q$) process.

The ARMA(1, 1) Model. The simplest type of ARMA($p, q$) model is the ARMA(1, 1) model:
$$Y_t = \phi Y_{t-1} + e_t - \theta e_{t-1}.$$
The variance of a $Y_t$ that follows the ARMA(1, 1) process is
$$\gamma_0 = \frac{1 - 2\phi\theta + \theta^2}{1 - \phi^2}\,\sigma_e^2.$$
The autocorrelation function of the ARMA(1, 1) process is, for $k \ge 1$,
$$\rho_k = \frac{(1 - \theta\phi)(\phi - \theta)}{1 - 2\theta\phi + \theta^2}\,\phi^{k-1}.$$
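
As a check on these formulas, the R sketch below (not from the slides; $\phi = 0.8$ and $\theta = 0.4$ are illustrative) compares the closed-form $\rho_k$ with ARMAacf(), remembering the sign flip on $\theta$ for R's MA parameterization.

```r
## ARMA(1,1) autocorrelations: closed-form formula vs. ARMAacf().
phi <- 0.8; theta <- 0.4
rho_k <- function(k) (1 - theta * phi) * (phi - theta) /
                     (1 - 2 * theta * phi + theta^2) * phi^(k - 1)
round(sapply(1:5, rho_k), 3)                               # 0.523 0.418 0.335 ...
round(ARMAacf(ar = phi, ma = -theta, lag.max = 5)[-1], 3)  # same values, lags 1..5
```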

Autocorrelations of the ARMA(1, 1) Process. The autocorrelation function $\rho_k$ of an ARMA(1, 1) process decays toward 0 as $k$ increases, with damping factor $\phi$. Under the AR(1) process, the decay started from $\rho_0 = 1$, but for the ARMA(1, 1) process, the decay starts from $\rho_1$, which depends on $\theta$ and $\phi$. The shape of the autocorrelation function can vary, depending on the signs of $\phi$ and $\theta$.

Other Properties of the ARMA(1, 1) and ARMA($p, q$) Processes. The ARMA(1, 1) process (and the general ARMA($p, q$) process) can also be written as a general linear process. The ARMA(1, 1) process is stationary if and only if the solution to the AR characteristic equation $1 - \phi x = 0$ exceeds 1 in absolute value, i.e., if and only if $|\phi| < 1$. The ARMA($p, q$) process is stationary if and only if the solutions to its AR characteristic equation all exceed 1 in absolute value. The values of the autocorrelation function $\rho_k$ for an ARMA($p, q$) process can be found by numerically solving a series of equations that depend on $\phi_1, \ldots, \phi_p$ and $\theta_1, \ldots, \theta_q$.

Invertibility. Recall that the MA(1) process is nonunique: we get the same autocorrelation function if we replace $\theta$ by $1/\theta$. A similar nonuniqueness property holds for higher-order moving average models. We have seen that an AR process can be represented as an infinite-order MA process. Can an MA process be represented as an AR process? Note that in the MA(1) process, $Y_t = e_t - \theta e_{t-1}$. So $e_t = Y_t + \theta e_{t-1}$, and similarly $e_{t-1} = Y_{t-1} + \theta e_{t-2}$. So $e_t = Y_t + \theta(Y_{t-1} + \theta e_{t-2}) = Y_t + \theta Y_{t-1} + \theta^2 e_{t-2}$. We can continue this substitution infinitely often to obtain
$$e_t = Y_t + \theta Y_{t-1} + \theta^2 Y_{t-2} + \cdots$$

More on Invertibility. Rewriting, we get
$$Y_t = -\theta Y_{t-1} - \theta^2 Y_{t-2} - \cdots + e_t.$$
If $|\theta| < 1$, this MA(1) model has been inverted into an infinite-order AR model. So the MA(1) model is invertible if and only if $|\theta| < 1$. In general, the MA($q$) model is invertible if and only if the solutions of the MA characteristic equation
$$1 - \theta_1 x - \theta_2 x^2 - \cdots - \theta_q x^q = 0$$
all exceed 1 in absolute value. We see that invertibility of MA models is analogous to stationarity of AR models.
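
Invertibility can be checked the same way as stationarity, by finding the roots of the MA characteristic polynomial. A short R sketch (not from the slides), using the MA(2) coefficients $\theta_1 = 1$, $\theta_2 = -0.6$ from the earlier example:

```r
## Invertibility check for an MA(2): roots of 1 - theta1*x - theta2*x^2 = 0
## must all exceed 1 in modulus.
theta1 <- 1; theta2 <- -0.6
Mod(polyroot(c(1, -theta1, -theta2)))   # both moduli approx 1.29 > 1: invertible
```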

Invertibility and the Nonuniqueness Problem. We can solve the nonuniqueness problem of MA processes by restricting attention only to invertible MA models. There is only one set of coefficient parameters that yields an invertible MA process with a particular autocorrelation function. Example: both $Y_t = e_t + 2e_{t-1}$ and $Y_t = e_t + 0.5e_{t-1}$ have the same autocorrelation function, but of these two, only the second model is invertible (the solution of its MA characteristic equation is $-2$, which exceeds 1 in absolute value). For ARMA($p, q$) models, we restrict attention to those models which are both stationary and invertible.
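
A short R sketch (not from the slides) illustrates this: the two parameterizations share the same autocorrelation function, but only the second has the root of its MA characteristic equation exceeding 1 in modulus. Since these two models are written with plus signs, their coefficients can be passed to ARMAacf() directly.

```r
## Y_t = e_t + 2*e_{t-1} and Y_t = e_t + 0.5*e_{t-1}: same ACF, only one invertible.
ARMAacf(ma = 2,   lag.max = 3)   # lag-1 value 0.4, zero afterwards
ARMAacf(ma = 0.5, lag.max = 3)   # identical
Mod(polyroot(c(1, 2)))           # root modulus 0.5 < 1: not invertible
Mod(polyroot(c(1, 0.5)))         # root modulus 2   > 1: invertible
```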