4.2 Autoregressive (AR)

Moving average models are causal linear processes by definition. There is another class of models, based on a recursive formulation similar to the exponentially weighted moving average.

Definition 4.11 (Autoregressive AR(p)). Suppose $\phi_1,\dots,\phi_p \in \mathbb{R}$ are constants and $(W_i) \sim \mathrm{WN}(\sigma^2)$. The AR(p) process with parameters $\sigma^2, \phi_1,\dots,\phi_p$ is defined through
\[
X_i = W_i + \sum_{j=1}^p \phi_j X_{i-j}, \tag{3}
\]
whenever such a stationary process $(X_i)$ exists.

Remark 4.12. The process in Definition 4.11 is sometimes called a stationary AR(p) process. It is possible to consider a non-stationary AR(p) process for any $\phi_1,\dots,\phi_p$ satisfying (3) for $i \geq 1$, by letting, for example, $X_i = 0$ for $i \in \{-p+1,\dots,0\}$.

Example 4.13 (Variance and autocorrelation of AR(1) process). For the AR(1) process, whenever it exists, we must have
\[
\gamma_0 = \mathrm{Var}(X_i) = \mathrm{Var}(\phi_1 X_{i-1} + W_i) = \phi_1^2 \gamma_0 + \sigma^2,
\]
which implies that we must have $|\phi_1| < 1$, and
\[
\gamma_0 = \frac{\sigma^2}{1-\phi_1^2}.
\]
We may also calculate for $j \geq 1$
\[
\gamma_j = E[X_i X_{i-j}] = E[(\phi_1 X_{i-1} + W_i) X_{i-j}] = \phi_1 E[X_{i-1} X_{i-j}] = \dots = \phi_1^j \gamma_0,
\]
which gives $\rho_j = \phi_1^j$.

Example 4.14. Simulation of an AR(1) process.

phi_1 <- 0.7
x <- arima.sim(model = list(ar = phi_1), 140)
# Equivalent explicit simulation (assuming sigma^2 = 1):
gamma_0 <- 1/(1 - phi_1^2)      # stationary variance from Example 4.13
x_0 <- rnorm(1)*sqrt(gamma_0)   # draw X_0 from the stationary distribution
x <- filter(rnorm(140), phi_1, method = "recursive", init = x_0)
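The following quick check is not part of the original notes: a minimal sketch comparing the sample ACF of a simulated AR(1) path against the theoretical autocorrelations $\rho_j = \phi_1^j$ from Example 4.13 (a long sample is used so the estimates are stable).

phi_1 <- 0.7
x <- arima.sim(model = list(ar = phi_1), n = 2000)
emp <- acf(x, lag.max = 10, plot = FALSE)$acf[, 1, 1]  # sample ACF at lags 0..10
theo <- phi_1^(0:10)                                   # theoretical rho_j = phi_1^j
round(cbind(empirical = emp, theoretical = theo), 3)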

[Figure 23: Simulation of the AR(1) process in Example 4.14: simulated values and their sample ACF.]

[Figure 24: Autocorrelations of AR(1) with different parameters (panels for phi_1 = 0.9, -0.9, 0.5 and -0.7).]

Example 4.15. Consider a stationary AR(1) process. We may write
\[
X_i = \phi_1 X_{i-1} + W_i = \dots = \phi_1^n X_{i-n} + \sum_{j=0}^{n-1} \phi_1^j W_{i-j}.
\]
Define the causal linear process $Y_i = \sum_{j=0}^\infty \phi_1^j W_{i-j}$; then we may write (detailed proof not examinable)
\[
\begin{aligned}
\big(E|X_i - Y_i|^2\big)^{1/2}
&= \Big(E\Big|\phi_1^n X_{i-n} - \sum_{j=n}^\infty \phi_1^j W_{i-j}\Big|^2\Big)^{1/2} \\
&\leq |\phi_1|^n \big(E X_{i-n}^2\big)^{1/2} + \sum_{j=n}^\infty |\phi_1|^j \big(E W_{i-j}^2\big)^{1/2}
= |\phi_1|^n \Big(\sigma_X + \frac{\sigma}{1-|\phi_1|}\Big) \xrightarrow{\,n\to\infty\,} 0,
\end{aligned}
\]
where $\sigma_X^2 = E X_1^2$. This implies $X_i = Y_i$ (almost surely).

We may also write the autoregressive process in terms of the backshift operator, as
\[
X_i - \sum_{j=1}^p \phi_j B^j X_i = W_i, \tag{4}
\]
or $\phi(B) X_i = W_i$, with the following definition.

Definition 4.16 (Characteristic polynomial of AR(p)).
\[
\phi(z) := 1 - \sum_{j=1}^p \phi_j z^j.
\]

Remark 4.17. Note the minus sign in the AR polynomial, contrary to the plus sign in the MA polynomial. In some contexts (especially signal processing), the AR coefficients are defined as $\tilde\phi_i = -\phi_i$, so that the AR polynomial looks exactly like the MA polynomial.

Theorem 4.18. The (stationary) AR(p) process exists and can be written as a causal linear process if and only if
\[
\phi(z) \neq 0 \quad \text{for all } z \in \mathbb{C} \text{ with } |z| \leq 1,
\]
that is, the roots of the complex polynomial $\phi(z)$ lie strictly outside the unit disc.

For a full proof, see for example Theorem 3.1.1 of Brockwell and Davis. However, to get the idea, we may write informally $X_i = \phi(B)^{-1} W_i$, and we may write the reciprocal of the characteristic polynomial as
\[
\frac{1}{\phi(z)} = \sum_{j=0}^\infty c_j z^j, \quad \text{for } |z| \leq 1+\epsilon.
\]
This means that we may write the AR(p) as a causal linear process
\[
X_i = \sum_{j=0}^\infty c_j W_{i-j},
\]
where the coefficients satisfy $|c_j| \leq K(1+\epsilon/2)^{-j}$ (because $c_j (1+\epsilon/2)^j \to 0$ as $j \to \infty$).
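To make this concrete, the coefficients $c_j$ can be computed with the stats function ARMAtoMA, which returns the MA($\infty$) coefficients of an ARMA model from lag 1 onwards (the leading coefficient $c_0 = 1$ is implicit). A small illustration, not from the original notes; for an AR(1) the output should match $c_j = \phi_1^j$ from Example 4.15.

ARMAtoMA(ar = 0.7, lag.max = 8)           # c_1..c_8 for the AR(1): equals 0.7^(1:8)
ARMAtoMA(ar = c(0.3, -0.4), lag.max = 8)  # c_j for the AR(2) part used in Example 4.24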

Remark 4.19. This justifies viewing AR(p) as an MA($\infty$) with coefficients $(c_j)$. This also implies that we may approximate AR(p) with arbitrary precision by MA(q) with large enough q.

4.3 Invertibility of MA

Example 4.20. Let $\theta_1 \in (0,1)$ and $\sigma^2 > 0$ be some parameters, and consider two MA(1) models,
\[
X_i = W_i + \theta_1 W_{i-1}, \qquad (W_i) \text{ i.i.d. } N(0,\sigma^2),
\]
\[
\tilde X_i = \tilde W_i + \tilde\theta_1 \tilde W_{i-1}, \qquad (\tilde W_i) \text{ i.i.d. } N(0,\tilde\sigma^2),
\]
where $\tilde\theta_1 = 1/\theta_1$ and $\tilde\sigma^2 = \sigma^2 \theta_1^2$. We have
\[
\gamma_0 = \sigma^2(1+\theta_1^2), \qquad \gamma_1 = \sigma^2 \theta_1,
\]
\[
\tilde\gamma_0 = \tilde\sigma^2(1+\tilde\theta_1^2), \qquad \tilde\gamma_1 = \tilde\sigma^2 \tilde\theta_1.
\]
What do you observe?

It turns out that the following invertibility condition resolves the MA(q) identifiability problem, and therefore it is standard to assume that the roots of the MA characteristic polynomial lie outside the unit disc.

Theorem 4.21. If the roots of the characteristic polynomial of MA(q) are strictly outside the unit circle, the MA(q) is invertible in the sense that it satisfies
\[
W_i = \sum_{j=0}^\infty \beta_j X_{i-j},
\]
where the constants satisfy $\beta_0 = 1$ and $|\beta_j| \leq K(1+\epsilon)^{-j}$ for some constants $K < \infty$ and $\epsilon > 0$.

As with Theorem 4.18, we may write symbolically, from $X_i = \theta(B) W_i$, that
\[
W_i = \theta(B)^{-1} X_i = \sum_{j=0}^\infty \beta_j X_{i-j},
\]
where the constants $\beta_j$ are uniquely determined by $1/\theta(z) = \sum_{j=0}^\infty \beta_j z^j$, as the roots of $\theta(z)$ lie outside the unit disc.
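The identifiability problem in Example 4.20 can be checked numerically. The following sketch (not from the notes) uses the stats function ARMAacf to show that $\theta_1$ and $1/\theta_1$ give exactly the same autocorrelation function, and that only the parameter in $(0,1)$ yields an invertible model.

theta_1 <- 0.5
ARMAacf(ma = theta_1, lag.max = 3)    # ACF of X_i = W_i + 0.5 W_{i-1}
ARMAacf(ma = 1/theta_1, lag.max = 3)  # identical ACF for theta = 2
# Invertibility: the root of 1 + 0.5 z is z = -2 (outside the unit disc),
# while the root of 1 + 2 z is z = -0.5 (inside), so only theta_1 = 0.5 is invertible.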

4.4 Autoregressive moving average (ARMA)

Definition 4.22 (Autoregressive moving average ARMA(p,q) process). Suppose $\phi_1,\dots,\phi_p \in \mathbb{R}$ are coefficients of a (stationary) AR(p) process, $\theta_1,\dots,\theta_q \in \mathbb{R}$, and $(W_i) \sim \mathrm{WN}(\sigma^2)$. The (stationary) ARMA(p,q) process with these parameters is a process satisfying
\[
X_i = \sum_{j=1}^p \phi_j X_{i-j} + \sum_{j=0}^q \theta_j W_{i-j}, \tag{5}
\]
with the convention $\theta_0 = 1$, and where the first sum vanishes if p = 0.

Remark 4.23. AR(p) is ARMA(p,0) and MA(q) is ARMA(0,q). We may write ARMA(p,q) briefly with the characteristic polynomials of the AR and MA parts and the backshift operator as
\[
\phi(B) X_i = \theta(B) W_i.
\]

Exact simulation of a general ARMA(p,q) model is not straightforward, but we can simulate it approximately by setting $X_{-p+1} = \dots = X_0 = 0$ (say) and then following (5). Then $X_b, X_{b+1},\dots,X_{b+n}$ is an approximate sample of a stationary ARMA(p,q) if b is large enough. This is what the R function arima.sim does; the parameter n.start is b above.

Example 4.24. Simulation of an ARMA(2,1) model with $\phi_1 = 0.3$, $\phi_2 = -0.4$, $\theta_1 = -0.8$.

x <- arima.sim(list(ma = c(-0.8), ar = c(.3, -.4)), 140, n.start = 1e5)

This is the same as

n <- 140; n.start <- 1e5
z <- filter(rnorm(n.start + n), c(1, -0.8), sides = 1)     # MA part: z_t = w_t - 0.8 w_{t-1}
z <- tail(z, n.start + n - 2)                              # drop the initial NA values
x <- tail(filter(z, c(.3, -.4), method = "recursive"), n)  # AR recursion, keep the last n

(The latter may sometimes be necessary, because arima.sim checks the stability of the AR part by calculating the roots of $\phi(z)$ numerically, which is notoriously unstable if the order of $\phi$ is large. Sometimes arima.sim refuses to simulate a stable ARMA.)

[Figure 25: Simulation of the ARMA(2,1) in Example 4.24: simulated values and their sample ACF.]

Remark 4.25. If the characteristic polynomials $\theta(z)$ and $\phi(z)$ of an ARMA(p,q) share a (complex) root, say $x_1 = y_1$, then
\[
\frac{\theta(z)}{\phi(z)} = \frac{(z-x_1)(z-x_2)\cdots(z-x_q)}{(z-y_1)(z-y_2)\cdots(z-y_p)}
= \frac{(z-x_2)\cdots(z-x_q)}{(z-y_2)\cdots(z-y_p)} = \frac{\tilde\theta(z)}{\tilde\phi(z)},
\]
where $\tilde\theta(z)$ is of order q-1 and $\tilde\phi(z)$ is of order p-1, and it turns out that
\[
\tilde\phi(B) X_i = \tilde\theta(B) W_i,
\]
which means that the model reduces to ARMA(p-1, q-1).

Condition 4.26 (Regularity conditions for ARMA). In what follows, we shall assume the following:
(a) The roots of the AR characteristic polynomial are strictly outside the unit disc (cf. Theorem 4.18).
(b) The roots of the MA characteristic polynomial are strictly outside the unit disc (cf. Theorem 4.21).
(c) The AR and MA characteristic polynomials do not have common roots (cf. Remark 4.25).
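Condition 4.26 can be verified numerically with base R's polyroot, which returns the complex roots of a polynomial given its coefficients in increasing order. A sketch (not part of the notes), applied to the ARMA(2,1) of Example 4.24:

# phi(z) = 1 - 0.3 z + 0.4 z^2 and theta(z) = 1 - 0.8 z
ar_roots <- polyroot(c(1, -0.3, 0.4))
ma_roots <- polyroot(c(1, -0.8))
Mod(ar_roots)  # both about 1.58 > 1, so (a) holds
Mod(ma_roots)  # 1.25 > 1, so (b) holds
# (c) holds too: the AR roots are complex while the MA root is real,
# so the polynomials cannot share a root.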

Theorem 4.27. A stationary ARMA(p,q) model satisfying Condition 4.26 exists, is invertible, and can be written as a causal linear process:
\[
X_i = \sum_{j=0}^\infty \xi_j W_{i-j}, \qquad W_i = \sum_{j=0}^\infty \beta_j X_{i-j},
\]
where the constants $\xi_j$ and $\beta_j$ satisfy
\[
\sum_{j=0}^\infty \xi_j z^j = \frac{\theta(z)}{\phi(z)} \qquad \text{and} \qquad \sum_{j=0}^\infty \beta_j z^j = \frac{\phi(z)}{\theta(z)}.
\]
In addition, $\beta_0 = 1$ and there exist constants $K < \infty$ and $\epsilon > 0$ such that
\[
\max\{|\xi_j|, |\beta_j|\} \leq K(1+\epsilon)^{-j} \quad \text{for all } j \geq 0.
\]

Remark 4.28. In fact, the coefficients $\xi_j$ (or $\beta_j$) of any ARMA(p,q) can easily be calculated numerically from the parameters. The autocovariance can also be calculated numerically up to any lag in a straightforward way; cf. Brockwell and Davis, pp. 91-95. In R, the autocorrelation coefficients can be calculated with ARMAacf.

4.5 Integrated models

Autoregressive moving average models are quite flexible models for stationary series. However, for many practical time series it is more useful to consider the differenced series (Definition 2.10). This brings us to the following general notion.

Definition 4.29 (Difference operator). Suppose $(X_i)$ is a stochastic process. Its d:th order difference process is $(\nabla^d X_i)$, where the d:th order difference operator may be written in terms of the backshift operator as $\nabla^d = (1-B)^d$ for $d \geq 1$.

Definition 4.30 (Autoregressive integrated moving average ARIMA(p,d,q) process). If the d:th difference $(\nabla^d X_i)$ of the process follows an ARMA(p,q), then we say $(X_i)$ is an ARIMA(p,d,q).

Remark 4.31. Suppose that $(\nabla^d X_i)$ is a stationary ARMA(p,q).
(i) The ARIMA(p,d,q) process $(X_i)$ is not unique (why?).
(ii) The ARIMA(p,d,q) process $(X_i)$ is not, in general, stationary.
The process $(X_i)$ (or the data $x_1,\dots,x_n$) is said to be difference stationary.

Example 4.32. A simple random walk is an ARIMA(0,1,0):
\[
X_i = X_{i-1} + W_i.
\]
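A quick illustration of Example 4.32, not in the original notes: a random walk can be simulated with cumsum, and differencing it with diff recovers the driving white noise, which is a stationary ARMA(0,0).

set.seed(1)
w <- rnorm(200)       # W_i, assuming sigma^2 = 1
x <- cumsum(w)        # random walk X_i = X_{i-1} + W_i, i.e. ARIMA(0,1,0)
dx <- diff(x)         # first difference, nabla X_i = X_i - X_{i-1}
all.equal(dx, w[-1])  # TRUE: differencing recovers the white noise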