Covariances of ARMA Processes

Statistics 910, #10

Overview

1. Review ARMA models: causality and invertibility
2. AR covariance functions
3. MA and ARMA covariance functions
4. Partial autocorrelation function
5. Discussion

Review of ARMA processes

ARMA process: A stationary solution {X_t} (or, if its mean is not zero, {X_t - µ}) of the linear difference equation

    X_t - φ_1 X_{t-1} - ... - φ_p X_{t-p} = w_t + θ_1 w_{t-1} + ... + θ_q w_{t-q},  i.e.,  φ(B) X_t = θ(B) w_t,    (1)

where w_t denotes white noise, w_t ~ WN(0, σ^2). Definition 3.5 adds the identifiability condition that the polynomials φ(z) and θ(z) have no zeros in common and the normalization condition that φ(0) = θ(0) = 1.

Causal process: A stationary process {X_t} is said to be causal if there exists a summable sequence {ψ_j} (some require ℓ^2, others want more and restrict these to ℓ^1) such that {X_t} has the one-sided moving average representation

    X_t = Σ_{j=0}^∞ ψ_j w_{t-j} = ψ(B) w_t.    (2)

Proposition 3.1 states that a stationary ARMA process {X_t} is causal if and only if (iff) the zeros of the autoregressive polynomial φ(z) lie outside the unit circle (i.e., φ(z) ≠ 0 for |z| ≤ 1). Since φ(0) = 1, φ(z) > 0 for 0 ≤ z ≤ 1. (The unit circle in the complex plane consists of those z for which |z| = 1; the unit disc includes the interior of the unit circle.)

If the zeros of φ(z) lie outside the unit circle, then we can invert each of the factors (1 - B/z_j) that make up φ(B) = ∏_{j=1}^p (1 - B/z_j) one at a time (as when back-substituting in the derivation of the AR(1) representation). Owing to the geometric decay in the powers of 1/z_j, the coefficients in the resulting expression are summable. Suppose, on the other hand, that the process is causal. Then, substituting X_t = ψ(B) w_t into the definition of the ARMA process, it follows that

    φ(B) ψ(B) w_t = θ(B) w_t,   and hence   φ(z) ψ(z) = θ(z).

By definition, φ(z) and θ(z) share no zeros, and |ψ(z)| < ∞ for |z| ≤ 1 (since the ψ_j are summable). The zeros of φ(z) must lie outside the unit circle, for otherwise we get a contradiction: a zero of φ(z) with |z| ≤ 1 would force θ(z) = φ(z) ψ(z) = 0 at the same point, so φ and θ would share a zero. (For |z| > 1, ψ(z) can grow arbitrarily large, balancing a zero of φ(z), so no contradiction arises there.)

Covariance generating function: The covariances of the ARMA process {X_t} are

    γ(h) = σ^2 Σ_{j=0}^∞ ψ_{j+|h|} ψ_j.    (3)

Equivalently, the covariance γ(h) is the coefficient of z^h in the polynomial

    G(z) = σ^2 ψ(z) ψ(z^{-1}) = σ^2 θ(z) θ(z^{-1}) / (φ(z) φ(z^{-1})),    (4)

which is the covariance generating function of the process.

Invertible: The ARMA process {X_t} is invertible if there exists an ℓ^2 sequence {π_j} such that

    w_t = Σ_{j=0}^∞ π_j X_{t-j}.    (5)

Proposition 3.2 states that the process {X_t} is invertible if and only if the zeros of the moving average polynomial θ(z) lie outside the unit circle.
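
A quick numerical illustration in R (a minimal sketch, not from the text; the ARMA(2,1) coefficients and the truncation point are arbitrary choices): check causality and invertibility with polyroot, then build the ψ-weights and the autocovariances (3).

    ## Minimal sketch: causality/invertibility checks and autocovariances
    ## for a hypothetical ARMA(2,1) with arbitrary coefficients.
    phi    <- c(0.5, 0.3)    # AR coefficients phi_1, phi_2
    theta  <- 0.4            # MA coefficient theta_1
    sigma2 <- 1              # white-noise variance

    ## Zeros of phi(z) = 1 - 0.5 z - 0.3 z^2 and theta(z) = 1 + 0.4 z;
    ## the process is causal/invertible iff all moduli exceed 1.
    abs(polyroot(c(1, -phi)))
    abs(polyroot(c(1, theta)))

    ## psi-weights of the one-sided representation X_t = sum_j psi_j w_{t-j}.
    psi <- c(1, ARMAtoMA(ar = phi, ma = theta, lag.max = 200))
    N   <- length(psi)

    ## gamma(h) = sigma^2 sum_j psi_{j+|h|} psi_j, truncating the infinite sum.
    gamma <- sapply(0:10, function(h) sigma2 * sum(psi[1:(N - h)] * psi[(1 + h):N]))

    ## gamma(h)/gamma(0) should agree with R's theoretical ACF.
    cbind(gamma / gamma[1], ARMAacf(ar = phi, ma = theta, lag.max = 10))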

Example 3.6 shows what happens with common zeros in φ(z) and θ(z). The process is

    X_t = 0.4 X_{t-1} + 0.45 X_{t-2} + w_t + w_{t-1} + 0.25 w_{t-2},

for which

    φ(z) = (1 + 0.5z)(1 - 0.9z),   θ(z) = (1 + 0.5z)^2.

Hence, the two share a common factor. The initial ARMA(2,2) reduces to a causal, invertible ARMA(1,1) model.

Calculations in R: R has the function polyroot for finding the zeros of polynomials. Other symbolic software (e.g., Mathematica) does this much better, giving you a formula for the roots in general (when possible).

Common assumption (invertible and causal): In general, when dealing with covariances of ARMA processes, we assume that the process is causal and invertible so that we can move between the two one-sided representations (5) and (2).

AR covariance functions

Estimation: Given the assumption of stationarity, in most cases we can easily obtain consistent estimates of the process covariances, such as

    γ̂(h) = (1/n) Σ_{t=1}^{n-h} (x_{t+h} - x̄)(x_t - x̄).    (6)

What should such a covariance function resemble? Does the shape of the covariance function or of the estimated correlation function ρ̂(h) = γ̂(h)/γ̂(0) distinguish the underlying process?

AR(1): For a first-order autoregression, we know that the covariances γ(h), h = 0, 1, 2, ..., decay geometrically:

    γ(h) = σ^2 φ^h / (1 - φ^2).

Compare this theoretical property to the properties of realizations of an AR(1). Do the estimated covariances (6) necessarily decay geometrically?
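
The following minimal sketch in R (not from the text; φ = 0.7, σ^2 = 1, and n = 200 are arbitrary choices) simulates one AR(1) realization and compares the sample autocovariances (6) with the theoretical geometric decay.

    ## Minimal sketch: theoretical vs. estimated AR(1) autocovariances.
    set.seed(910)
    phi <- 0.7; sigma2 <- 1; n <- 200
    x <- arima.sim(model = list(ar = phi), n = n, sd = sqrt(sigma2))

    ## Theoretical gamma(h) = sigma^2 phi^h / (1 - phi^2).
    h <- 0:10
    gamma.theory <- sigma2 * phi^h / (1 - phi^2)

    ## Sample autocovariances (6); acf() uses the divisor n, as in the notes.
    gamma.hat <- acf(x, lag.max = 10, type = "covariance", plot = FALSE)$acf[, 1, 1]

    round(cbind(h, gamma.theory, gamma.hat), 3)
    ## The estimates track the geometric decay only roughly; in a single
    ## realization they need not decrease monotonically.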

Yule-Walker equations: The covariances satisfy the difference equation that defines the autoregression. The one-sided moving-average representation of the process (2) implies that w_t is uncorrelated with past observations X_s, s < t. Hence, for lags k = 1, 2, ... (assuming E X_t = 0),

    E[X_{t-k} (X_t - φ_1 X_{t-1} - ... - φ_p X_{t-p})] = E[X_{t-k} w_t] = 0,

so that γ(k) - φ_1 γ(k-1) - ... - φ_p γ(k-p) = φ(B) γ(k) = 0. Rearranging the terms (and including k = 0, for which E[X_t w_t] = σ^2), we obtain the Yule-Walker equations,

    δ_k σ^2 = γ(k) - Σ_{j=1}^p γ(k-j) φ_j,   k = 0, 1, 2, ...,    (7)

where δ_k is the Kronecker delta (δ_0 = 1 and δ_k = 0 for k > 0). Define the vectors γ = (γ(1), ..., γ(p)), φ = (φ_1, ..., φ_p) and the matrix Γ = [γ(j-k)]_{j,k=1,...,p}. The matrix form of the Yule-Walker equations is γ = Γφ. (The Yule-Walker equations are analogous to the normal equations from least-squares regression, with Y = X_t and the explanatory variables X_{t-1}, ..., X_{t-p}.) The sample version of these is used in some methods for estimation.
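
For instance (a minimal sketch, not from the text; the AR(2) coefficients and the sample size are arbitrary), one can plug the sample autocovariances into γ = Γφ and solve for φ, which is essentially what the Yule-Walker estimator does.

    ## Minimal sketch: Yule-Walker estimation of an AR(2) by solving the
    ## sample version of gamma = Gamma %*% phi.
    set.seed(910)
    phi <- c(1.0, -0.5)                        # true AR coefficients (arbitrary)
    x <- arima.sim(model = list(ar = phi), n = 500)

    g <- acf(x, lag.max = 2, type = "covariance", plot = FALSE)$acf[, 1, 1]
    Gam <- matrix(c(g[1], g[2],
                    g[2], g[1]), nrow = 2)     # [gamma-hat(j - k)], j, k = 1, 2
    gam <- g[2:3]                              # (gamma-hat(1), gamma-hat(2))

    solve(Gam, gam)                            # hand-rolled Yule-Walker estimate of phi
    ar(x, order.max = 2, aic = FALSE, method = "yule-walker")$ar   # built-in analogue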

Geometric decay: In the general AR(p) case, the covariances decay as a sum of geometric series. To solve a linear difference equation such as (7), with k > 0, for the covariances, replace γ(k) by z^k, obtaining

    z^k - Σ_{j=1}^p φ_j z^{k-j} = z^k φ(1/z) = 0.

Note that any zero of the polynomial φ(1/z) yields a solution of the difference equation, and by linearity, so does any linear combination of such solutions. Thus, the eventual solution is of the form (ignoring duplicated zeros)

    γ(k) = Σ_j a_j r_j^k,   |r_j| < 1,

where the a_j are constants (determined by boundary conditions) and the r_j are the reciprocals of the zeros of the AR polynomial φ(z).

Boundary conditions: The Yule-Walker equations can be solved for γ(0), γ(1), ..., γ(p-1) given φ_1, ..., φ_p. This use (i.e., solving for the covariances from the coefficients) is the inverse of how they are used in estimation.

AR(2) example: These are interesting because they allow for complex-valued zeros in the polynomial φ(z). The presence of such complex pairs produces oscillations in the observed process. For the process to be stationary, we need the zeros of φ(z) to lie outside the unit circle. If the two zeros are z_1 and z_2, then

    φ(z) = 1 - φ_1 z - φ_2 z^2 = (1 - z/z_1)(1 - z/z_2).    (8)

Since φ_1 = 1/z_1 + 1/z_2 and φ_2 = -1/(z_1 z_2), the coefficients lie within the rectangular region

    -2 < φ_1 = 1/z_1 + 1/z_2 < +2,   -1 < φ_2 = -1/(z_1 z_2) < +1.

Since φ(z) ≠ 0 for |z| ≤ 1 and φ(0) = 1, φ(z) is positive for real z with |z| ≤ 1, and

    φ(1) = 1 - φ_1 - φ_2 > 0,   i.e.,   φ_1 + φ_2 < 1,
    φ(-1) = 1 + φ_1 - φ_2 > 0,  i.e.,   φ_2 - φ_1 < 1.

From the quadratic formula applied to (8), φ_1^2 + 4 φ_2 < 0 implies that the zeros form a complex conjugate pair,

    z_1 = r e^{iλ},  z_2 = r e^{-iλ},   so that   φ_1 = 2 cos(λ)/r,   φ_2 = -1/r^2.

Turning to the covariances of the AR(2) process, these satisfy the difference equation

    γ(h) - φ_1 γ(h-1) - φ_2 γ(h-2) = 0   for h = 1, 2, ....

We need two initial values to start these recursions. To make this chore easier, work with correlations. Dividing by γ(0) gives

    ρ(h) - φ_1 ρ(h-1) - φ_2 ρ(h-2) = 0,

and we know ρ(0) = 1. To find ρ(1), use the equation defined by γ(0) and γ(1) = γ(-1):

    ρ(1) = φ_1 ρ(0) + φ_2 ρ(-1) = φ_1 + φ_2 ρ(1),

which shows ρ(1) = φ_1/(1 - φ_2). When the zeros are a complex pair, ρ(h) = c r^{-h} cos(hλ), a damped sinusoid, and realizations exhibit quasiperiodic behavior.
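
A minimal sketch in R (not from the text; the coefficients are an arbitrary choice satisfying φ_1^2 + 4φ_2 < 0) showing the complex zeros and the resulting damped, oscillating autocorrelations:

    ## Minimal sketch: complex zeros of phi(z) and the damped-sinusoid
    ## ACF of an AR(2) with phi1^2 + 4*phi2 < 0.
    phi <- c(1.0, -0.75)            # phi1^2 + 4*phi2 = 1 - 3 = -2 < 0

    z <- polyroot(c(1, -phi))       # zeros of 1 - phi1*z - phi2*z^2
    Mod(z)                          # common modulus r; both > 1, so causal
    Arg(z)                          # +/- lambda, the oscillation frequency

    rho <- ARMAacf(ar = phi, lag.max = 20)
    round(rho, 3)                   # decays geometrically while changing sign
    ## plot(0:20, rho, type = "h")  # optional: visualize the damped sinusoid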

MA and ARMA covariance functions

Moving average case: For an MA(q) process, we have (with θ_0 = 1)

    γ(h) = σ^2 Σ_j θ_{j+|h|} θ_j,   where θ_j = 0 for j < 0 and j > q.

In contrast to the geometric decay of an autoregression, the covariances of a moving average cut off abruptly. Such covariance functions are necessary and sufficient to identify a moving average process.

Calculation of the covariances via the infinite MA representation and equation (3) proceeds by solving a system of equations defined by the relation

    ψ(z) = θ(z)/φ(z),   i.e.,   ψ(z) φ(z) = θ(z).

The idea is to match the coefficients of like powers of z in

    (1 + ψ_1 z + ψ_2 z^2 + ...)(1 - φ_1 z - φ_2 z^2 - ...) = (1 + θ_1 z + ...).

But this only leads to the collection of ψ's, not the covariances.
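
As a check on the cutoff property (a minimal sketch, not from the text; the MA(2) coefficients are arbitrary), compute γ(h) directly from the θ's and confirm that it vanishes beyond lag q:

    ## Minimal sketch: gamma(h) = sigma^2 * sum_j theta_{j+|h|} theta_j for an
    ## MA(2); the autocovariances cut off after lag q = 2.
    theta  <- c(1, 0.6, -0.3)        # (theta_0, theta_1, theta_2), theta_0 = 1
    sigma2 <- 1
    q      <- length(theta) - 1

    gamma <- sapply(0:5, function(h)
      if (h > q) 0 else sigma2 * sum(theta[1:(q + 1 - h)] * theta[(1 + h):(q + 1)]))
    gamma                            # zero for h = 3, 4, 5

    ## Same conclusion from the theoretical ACF:
    ARMAacf(ma = theta[-1], lag.max = 5)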

Mixed models: Observe that the covariances satisfy the convolution expression (multiply both sides of (1) by the lags X_{t-j}, j ≥ max(p, q+1))

    γ(j) - Σ_{k=1}^p φ_k γ(j-k) = 0,   j ≥ max(p, q+1),

which is again a homogeneous linear difference equation. Thus, for high enough lag, the covariances again decay as a sum of geometric series. The ARMA(1,1) example from the text (Example 3.11, p. 105) illustrates these calculations. For initial values, we find

    γ(j) - Σ_{k=1}^p φ_k γ(j-k) = σ^2 Σ_{j ≤ k ≤ q} θ_k ψ_{k-j},

a generalization of the Yule-Walker equations. Essentially, the initial q covariances of an ARMA(p, q) process deviate from the recursion that defines the covariances of the AR(p) component of the process.

Partial autocorrelation function

Definition: The partial autocorrelation function φ_{hh} is the partial correlation between X_{t+h} and X_t conditioning upon the intervening variables,

    φ_{hh} = Corr(X_{t+h}, X_t | X_{t+1}, ..., X_{t+h-1}).

Consequently, φ_{11} = ρ(1), the usual autocorrelation. The partial autocorrelations are often called reflection coefficients, particularly in signal processing.

Partial regression: The reasoning behind partial correlation resembles the motivation for partial regression residual plots, which show the impact of a variable in a regression. If we have the OLS fit of the two-predictor linear model

    Y = b_0 + b_1 X_1 + b_2 X_2 + residual,

and we form the two partial regressions by regressing out the effects of X_1 from X_2 and Y,

    r_2 = X_2 - a_0 - a_1 X_1,   r_y = Y - c_0 - c_1 X_1,

then the regression coefficient of the residual r_y on the other residual r_2 is b_2, the multiple regression coefficient.

Defining equation: Since the partial autocorrelation φ_{hh} is the coefficient of the last lag in the regression of X_t on X_{t-1}, X_{t-2}, ..., X_{t-h}, we obtain an equation for φ_{hh} (assuming that the mean of {X_t} is zero) by noting that the normal equations imply that

    E[X_{t-j} (X_t - φ_{h1} X_{t-1} - φ_{h2} X_{t-2} - ... - φ_{hh} X_{t-h})] = 0,   j = 1, 2, ..., h.

Key property: For an AR(p) process, φ_{hh} = 0 for h > p, so that the partial correlation function cuts off after the order p of the autoregression. Also notice that φ_{pp} = φ_p. Since an invertible moving average can be represented as an infinite autoregression, the partial autocorrelations of a moving average process decay geometrically. Hence, we have the following table of behaviors (Table 3.1):

                AR(p)              ARMA(p, q)           MA(q)
    γ(h)        geometric decay    geometric after q    cuts off at q
    φ_{hh}      cuts off at p      geometric after p    geometric decay

Once upon a time, before the introduction of model selection criteria such as AIC, this table was the key to choosing the order of an ARMA(p, q) process.
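
A minimal sketch in R illustrating the table (the model orders and coefficients below are arbitrary choices), using the theoretical ACF and PACF from ARMAacf:

    ## Minimal sketch: ACF/PACF patterns summarized in Table 3.1.
    round(ARMAacf(ar = c(0.6, 0.2), lag.max = 8, pacf = TRUE), 3)  # AR(2): PACF cuts off after lag 2
    round(ARMAacf(ma = c(0.8, 0.4), lag.max = 8, pacf = TRUE), 3)  # MA(2): PACF decays, never exactly zero
    round(ARMAacf(ma = c(0.8, 0.4), lag.max = 8), 3)               # MA(2): ACF cuts off after lag 2
    round(ARMAacf(ar = 0.6, ma = 0.8, lag.max = 8), 3)             # ARMA(1,1): ACF decays geometrically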

Estimates: Estimates of the partial autocorrelations arise from solving the Yule-Walker equations (7), using a recursive method known as the Levinson recursion. This algorithm is discussed later.

Discussion

Weak spots: We have left some problems only partially solved, such as the meaning of infinite sums of random variables. How does one manipulate these expressions? When are such manipulations valid?

Correlation everywhere: Much of time series analysis is complicated because the observable terms {X_t} are correlated. In a sense, time series analysis is a lot like regression with collinearity. Just as regression is simplified by moving to uncorrelated predictors, time series analysis benefits from using a representation in terms of uncorrelated terms that is more general than the one-sided infinite moving average form.