Difference Equations

(Lecture notes, (c) James Davidson 2014, 30/04/2014)

Definitions: A difference equation takes the general form
$$x_t = f(x_{t-1}, x_{t-2}, \ldots),$$
defining the current value of a variable $x$ as a function of previously generated values. A finite-order ($m$th order) difference equation takes the general form
$$x_t = f(x_{t-1}, \ldots, x_{t-m}).$$
A linear difference equation takes the general form
$$x_t = \beta + \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \cdots$$
A stochastic difference equation takes the general form
$$x_t = f(x_{t-1}, x_{t-2}, \ldots, \varepsilon_t)$$
where $\varepsilon_t$ is a random sequence (often i.i.d. in applications) called the forcing process or driving process.

A linear stochastic difference equation takes the general form
$$x_t = \beta + \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \cdots + \varepsilon_t.$$
The object is to solve these equations (determine the path $x_1, x_2, x_3, \ldots$) given initial conditions $x_0, x_{-1}, \ldots$ and (if present) the sequence $\{\varepsilon_t\}$. Consider the non-stochastic case first, to establish methods and notation. The linear case is the simplest and best understood, and we focus on this.

Some useful formulae

A geometric series is
$$A = 1 + a + a^2 + a^3 + \cdots = \sum_{i=0}^{\infty} a^i.$$
Let the $t$th partial sum be denoted
$$A_t = 1 + a + a^2 + \cdots + a^{t-2} + a^{t-1} = \sum_{i=0}^{t-1} a^i.$$
Note that
$$aA_t = a + a^2 + \cdots + a^{t-1} + a^t = A_t + a^t - 1.$$
If $a \neq 1$, rearranging gives the closed-form solution
$$A_t = \frac{1 - a^t}{1 - a}.$$

Cases:
1. If $|a| < 1$, the geometric series is convergent: $a^t \to 0$ as $t \to \infty$, and $A = \lim_{t\to\infty} A_t = \dfrac{1}{1-a}$.
2. If $a > 1$, it diverges: $A = +\infty$.
3. If $a = -1$, there is no solution: $A_t$ "flip-flops" between 0 and 1.
4. If $a < -1$, $A_t$ flip-flops between $\pm\infty$ in the limit!
5. Finally, if $a = 1$, $A_t = t$ and $A = +\infty$.

Also consider
$$A_t^* = a + 2a^2 + 3a^3 + \cdots + ta^t.$$
By a similar argument,
$$(1-a)A_t^* = (a + 2a^2 + 3a^3 + \cdots + ta^t) - (a^2 + 2a^3 + \cdots + ta^{t+1}) = a + a^2 + a^3 + \cdots + a^t - ta^{t+1}.$$
Hence, with $a \neq 1$,
$$A_t^* = \frac{a}{1-a}\left(A_t - ta^t\right).$$
When $|a| < 1$, note that
$$A^* = \sum_{i=1}^{\infty} ia^i = \frac{a}{1-a}\,A = \frac{a}{(1-a)^2}.$$

First Order Linear Difference Equation

$$x_t = \beta + \alpha_1 x_{t-1}$$
Given $x_0$, the solution path is found by iteration, as
$$x_1 = \beta + \alpha_1 x_0$$
$$x_2 = \beta(1 + \alpha_1) + \alpha_1^2 x_0$$
$$\vdots$$
$$x_t = \beta(1 + \alpha_1 + \cdots + \alpha_1^{t-1}) + \alpha_1^t x_0.$$
If $|\alpha_1| < 1$ the series is summable, and as $t \to \infty$,
$$x_t \to \beta\sum_{j=0}^{\infty}\alpha_1^j = \frac{\beta}{1-\alpha_1}.$$
This is called the stable solution, and it is independent of $x_0$: $x_t$ approaches this point from any starting point. In the other cases of $\alpha_1$, there is either no stable solution, or an infinite solution.
Note: we can guess the solution by putting $x_t = x_{t-1} = \bar{x}$ (say) and so solve
$$\bar{x} = \frac{\beta}{1-\alpha_1}.$$
This must be the stable solution if one exists, but otherwise it is irrelevant.
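
As a quick numerical illustration (not part of the original notes), the following Python sketch iterates $x_t = \beta + \alpha_1 x_{t-1}$ with illustrative values $\beta = 1$, $\alpha_1 = 0.8$, and shows convergence to the fixed point $\beta/(1-\alpha_1) = 5$ from two different starting points:

```python
# Iterate x_t = beta + alpha*x_{t-1} from two starting values and
# compare with the fixed point beta/(1 - alpha). Illustrative values.
beta, alpha = 1.0, 0.8
fixed_point = beta / (1 - alpha)          # stable solution when |alpha| < 1

for x0 in (0.0, 20.0):                    # two arbitrary starting points
    x = x0
    for t in range(100):
        x = beta + alpha * x
    print(f"x0 = {x0:5.1f}  ->  x_100 = {x:.6f}  (fixed point = {fixed_point})")
```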

Second Order Linear Difference Equation

$$x_t = \beta + \alpha_1 x_{t-1} + \alpha_2 x_{t-2}$$
Our concern is to find conditions for a stable solution to this equation, as in the first-order case. If it exists, this must take the form
$$\bar{x} = \frac{\beta}{1-\alpha_1-\alpha_2}.$$
However, solution by iteration is obviously difficult:
$$x_t = \beta + \alpha_1 x_{t-1} + \alpha_2 x_{t-2} = \beta(1+\alpha_1) + (\alpha_1^2+\alpha_2)x_{t-2} + \alpha_1\alpha_2 x_{t-3} = \cdots ?$$
How do we know if the solution converges?

Consider the pair of first-order equations
$$x_t = \mu_1 x_{t-1} + y_t, \qquad y_t = \beta + \mu_2 y_{t-1}.$$
Write these in the form
$$x_t = \mu_1 x_{t-1} + \beta + \mu_2 y_{t-1} = \mu_1 x_{t-1} + \beta + \mu_2(x_{t-1} - \mu_1 x_{t-2}) = \beta + (\mu_1+\mu_2)x_{t-1} - \mu_1\mu_2 x_{t-2} = \beta + \alpha_1 x_{t-1} + \alpha_2 x_{t-2}.$$
It is intuitively clear that the stability conditions have the form $|\mu_1| < 1$, $|\mu_2| < 1$, since then
$$\bar{y} = \frac{\beta}{1-\mu_2} \quad\text{and}\quad \bar{x} = \frac{\bar{y}}{1-\mu_1} = \frac{\beta}{(1-\mu_1)(1-\mu_2)} = \frac{\beta}{1-\alpha_1-\alpha_2}.$$
To apply these restrictions, it is necessary to invert the mapping $(\mu_1, \mu_2) \mapsto (\alpha_1, \alpha_2)$.

The Lag Operator

Let the operator $L$ (alternative notation, $B$) be defined by $Lx_t = x_{t-1}$. Then, for example, $L^2 x_t = L(Lx_t) = Lx_{t-1} = x_{t-2}$. The second-order equation can now be written
$$(1-\mu_1 L)x_t = y_t, \qquad (1-\mu_2 L)y_t = \beta,$$
or equivalently,
$$(1-\mu_1 L)(1-\mu_2 L)x_t = \left(1-(\mu_1+\mu_2)L + \mu_1\mu_2 L^2\right)x_t = (1-\alpha_1 L - \alpha_2 L^2)x_t = \beta.$$
The model therefore defines a quadratic equation in the lag operator.

Reminder: Roots of a Quadratic

Consider, for $z \in \mathbb{C}$, the quadratic equation
$$z^2 - \alpha_1 z - \alpha_2 = (z-\mu_1)(z-\mu_2) = 0$$
where $\alpha_1 = \mu_1 + \mu_2$ and $\alpha_2 = -\mu_1\mu_2$. $\mu_1$ and $\mu_2$ are the roots (zeros) of this equation, and are given by
$$\mu_1 = \frac{\alpha_1 + \sqrt{\alpha_1^2 + 4\alpha_2}}{2}, \qquad \mu_2 = \frac{\alpha_1 - \sqrt{\alpha_1^2 + 4\alpha_2}}{2}.$$
When $\alpha_2 < -\alpha_1^2/4$ these solutions are complex numbers (a complex conjugate pair, since $\alpha_1$ and $\alpha_2$ are real).
Also note: the roots of the equation
$$1 - \alpha_1 z - \alpha_2 z^2 = (1-\mu_1 z)(1-\mu_2 z) = 0$$
are $1/\mu_1$ and $1/\mu_2$, similarly.
$y_t$ (above) can be complex-valued, although $x_t$ is real-valued by construction.
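
A minimal numerical sketch of the root formula (the coefficient values are illustrative, not from the notes), confirming that $\alpha_2 < -\alpha_1^2/4$ yields a conjugate pair whose modulus is $\sqrt{\mu_1\mu_2} = \sqrt{-\alpha_2}$:

```python
import numpy as np

# Roots mu_1, mu_2 of z^2 - a1*z - a2 = 0. With a2 < -a1^2/4 the
# discriminant a1^2 + 4*a2 is negative and the roots are conjugates.
a1, a2 = 1.0, -0.5                        # here a1^2 + 4*a2 = -1 < 0
disc = complex(a1**2 + 4*a2)
mu1 = (a1 + np.sqrt(disc)) / 2
mu2 = (a1 - np.sqrt(disc)) / 2
print(mu1, mu2)                           # (0.5+0.5j), (0.5-0.5j)
print(abs(mu1), np.sqrt(-a2))             # modulus r = sqrt(-a2) ~ 0.707
```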

Stability Analysis

The stable solution (finite, independent of initial conditions) evidently takes the form
$$x_t = (1-\alpha_1 L - \alpha_2 L^2)^{-1}\beta = \frac{\beta}{1-\alpha_1-\alpha_2},$$
provided the inversion of the lag polynomial is a legitimate step. Assume $|\mu_1| < 1$, $|\mu_2| < 1$ and consider, for $z \in \mathbb{C}$,
$$\frac{1}{1-\alpha_1 z - \alpha_2 z^2} = \frac{1}{(1-\mu_1 z)(1-\mu_2 z)}.$$
Note that, for $|z| \le 1$,
$$\frac{1}{1-\mu_1 z} = 1 + \mu_1 z + \mu_1^2 z^2 + \mu_1^3 z^3 + \cdots = \sum_{j=0}^{\infty}\mu_1^j z^j.$$
1. Assume $\mu_1 \neq \mu_2$ (both real). Then, for $|z| \le 1$,
$$\frac{1}{(1-\mu_1 z)(1-\mu_2 z)} = \frac{1}{\mu_1-\mu_2}\left(\frac{\mu_1}{1-\mu_1 z} - \frac{\mu_2}{1-\mu_2 z}\right) = \sum_{j=0}^{\infty}\frac{\mu_1^{j+1}-\mu_2^{j+1}}{\mu_1-\mu_2}\,z^j.$$

2. Assume $\mu_1 = \mu_2$ (real). Write $\mu_2 = \mu_1 - \delta$ in
$$\frac{\mu_1^{j+1}-\mu_2^{j+1}}{\mu_1-\mu_2} = \frac{\mu_1^{j+1}-(\mu_1-\delta)^{j+1}}{\delta}.$$
By L'Hôpital's rule*,
$$\frac{\mu_1^{j+1}-(\mu_1-\delta)^{j+1}}{\delta} \to (j+1)\mu_1^j \quad\text{as } \delta \to 0,$$
and hence
$$\frac{1}{(1-\mu_1 z)^2} = \sum_{j=0}^{\infty}(j+1)\mu_1^j z^j.$$
(Compare page 2.5 above.)
*If $f \to 0$ and $g \to 0$ as $\delta \to 0$, the limit of $f/g$ is equal to that of $f'/g'$, when the latter is defined.

Complex Roots

Let
$$\mu_1 = re^{i\theta}, \qquad \mu_2 = re^{-i\theta},$$
a complex conjugate pair in polar coordinates, where $r > 0$, $i = \sqrt{-1}$ and $0 \le \theta < 2\pi$. Their modulus (absolute value) is $|\mu_1| = |\mu_2| = (\mu_1\mu_2)^{1/2} = r$.
Using the facts
$$re^{i\theta} = r\cos\theta + ir\sin\theta \quad\text{(Euler's formula)}$$
$$\cos(-x) = \cos x, \qquad \sin(-x) = -\sin x,$$
we obtain
$$\frac{\mu_1^{j+1}-\mu_2^{j+1}}{\mu_1-\mu_2} = r^j\,\frac{e^{i\theta(j+1)}-e^{-i\theta(j+1)}}{e^{i\theta}-e^{-i\theta}} = r^j\,\frac{\sin((j+1)\theta)}{\sin\theta}, \qquad j = 1,2,3,\ldots$$

Stability depends on the $\mu_k$ lying inside the unit circle, i.e. having $r < 1$.
[Figure: the complex conjugate roots $\mu_1 = re^{i\theta}$ and $\mu_2 = re^{-i\theta}$ plotted in the complex plane at distance $r < 1$ from the origin, angles $\pm\theta$, inside the unit circle.]

Finally..., replace $z$ by the operator $L$. Since $L^j\beta = \beta$ for any $j$ (a constant does not depend on the date), we have the result
$$x_t = \beta \times \begin{cases}\displaystyle\sum_{j=0}^{\infty}\frac{\mu_1^{j+1}-\mu_2^{j+1}}{\mu_1-\mu_2} & \text{real roots, } |\mu_1|, |\mu_2| < 1 \\[2ex] \displaystyle\sum_{j=0}^{\infty}(j+1)\mu_1^j & \text{equal roots, } |\mu_1| < 1 \\[2ex] \displaystyle\sum_{j=0}^{\infty} r^j\,\frac{\sin((j+1)\theta)}{\sin\theta} & \text{complex roots, } r < 1.\end{cases}$$
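
The three-case formula can be checked numerically. The sketch below (with illustrative coefficients giving complex roots) compares the closed-form coefficients $r^j\sin((j+1)\theta)/\sin\theta$ with those obtained from the recursion implied by $(1-\alpha_1 L-\alpha_2 L^2)\sum_j \psi_j L^j = 1$:

```python
import numpy as np

# Check the complex-root formula psi_j = r^j * sin((j+1)*theta)/sin(theta)
# against the recursion psi_j = a1*psi_{j-1} + a2*psi_{j-2}.
a1, a2 = 1.0, -0.5
mu = np.roots([1.0, -a1, -a2])            # roots of z^2 - a1*z - a2
r, theta = abs(mu[0]), np.angle(mu[0])    # polar form of one conjugate root

psi = [1.0, a1]                           # psi_0 = 1, psi_1 = a1
for j in range(2, 10):
    psi.append(a1 * psi[-1] + a2 * psi[-2])

closed = [r**j * np.sin((j+1)*theta) / np.sin(theta) for j in range(10)]
print(np.allclose(psi, closed))           # True
```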

Third Order Case

Exercise: verify that if $\mu_1 \neq \mu_2$, $\mu_2 \neq \mu_3$ and $\mu_1 \neq \mu_3$, then
$$\frac{1}{(1-\mu_1 z)(1-\mu_2 z)(1-\mu_3 z)} = \frac{1}{(\mu_1-\mu_2)(\mu_1-\mu_3)(\mu_2-\mu_3)}\left(\frac{\mu_1^2(\mu_2-\mu_3)}{1-\mu_1 z} - \frac{\mu_2^2(\mu_1-\mu_3)}{1-\mu_2 z} + \frac{\mu_3^2(\mu_1-\mu_2)}{1-\mu_3 z}\right)$$
etc., etc.

The General Case: Factorising the polynomial as
$$1 - \alpha_1 z - \cdots - \alpha_p z^p = (1-\mu_1 z)(1-\mu_2 z)\cdots(1-\mu_p z),$$
the rule is that the difference equation
$$x_t = \beta + \alpha_1 x_{t-1} + \cdots + \alpha_p x_{t-p}$$
has a stable solution if and only if $|\mu_k| < 1$ for $k = 1, \ldots, p$. Since the roots of this polynomial are $1/\mu_1, \ldots, 1/\mu_p$, we express the stability condition as: the roots of the lag polynomial lie strictly outside the unit circle. Since the lag coefficients are real, the roots are either real, or in conjugate complex pairs.
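
The stability rule is easy to check numerically. A short sketch with illustrative AR(3) coefficients, locating the roots of the lag polynomial with numpy:

```python
import numpy as np

# Stability check: the roots of 1 - a1*z - ... - ap*z^p must all lie
# strictly outside the unit circle. Illustrative AR(3) coefficients.
a = [0.5, 0.3, -0.2]
poly = [1.0] + [-ai for ai in a]          # 1 - a1*z - a2*z^2 - a3*z^3
roots = np.roots(poly[::-1])              # np.roots wants highest power first
print(roots, np.all(np.abs(roots) > 1))   # stable iff all |root| > 1
```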

Stochastic Linear Difference Equations

First Order Case:
$$x_t = \beta + \alpha_1 x_{t-1} + \varepsilon_t, \qquad t = 1,2,3,\ldots$$
The iterative solution is
$$x_t = \beta(1 + \alpha_1 + \cdots + \alpha_1^{t-1}) + \varepsilon_t + \alpha_1\varepsilon_{t-1} + \cdots + \alpha_1^{t-1}\varepsilon_1 + \alpha_1^t x_0.$$
Assume that $\varepsilon_t \sim \text{iid}(0, \sigma^2)$. The solution of this equation is interpreted in terms of the properties of the random variable $x_t$ when $t$ is large.
Case $|\alpha_1| < 1$: as $t \to \infty$, dependence on the starting value becomes negligible.
$$E x_t \to \frac{\beta}{1-\alpha_1} + \sum_{j=0}^{\infty}\alpha_1^j E\varepsilon_{t-j} = \frac{\beta}{1-\alpha_1}$$
$$\text{Var}\,x_t \to E\left(\sum_{j=0}^{\infty}\alpha_1^j\varepsilon_{t-j}\right)^2 = \sum_{j=0}^{\infty}\alpha_1^{2j}E\varepsilon_{t-j}^2 + \sum_{j=0}^{\infty}\sum_{k=0,\,k\neq j}^{\infty}\alpha_1^j\alpha_1^k E\varepsilon_{t-j}\varepsilon_{t-k} = \sigma^2\sum_{j=0}^{\infty}\alpha_1^{2j} = \frac{\sigma^2}{1-\alpha_1^2}.$$

In this calculation, note that
$$\left(\sum_{j=0}^{\infty}|\alpha_1|^j\right)^2 = \frac{1}{(1-|\alpha_1|)^2}.$$
Therefore, if the $E\varepsilon_{t-j}\varepsilon_{t-k}$ were to take any finite fixed values such that
$$|E\varepsilon_{t-j}\varepsilon_{t-k}| \le B \qquad (*)$$
for all $t$, $j$ and $k$, we could say that
$$\left|\sum_{j=0}^{\infty}\sum_{k=0,\,k\neq j}^{\infty}\alpha_1^j\alpha_1^k E\varepsilon_{t-j}\varepsilon_{t-k}\right| \le \sum_{j=0}^{\infty}\sum_{k=0}^{\infty}|\alpha_1|^j|\alpha_1|^k\,|E\varepsilon_{t-j}\varepsilon_{t-k}| \le \frac{B}{(1-|\alpha_1|)^2}.$$
Hence, since $E\varepsilon_{t-j}\varepsilon_{t-k} = 0$ whenever $j \neq k$, the double sum of cross-product terms vanishes unambiguously.

Other Properties

Since the forcing process is i.i.d., and $x_t$ depends (in effect) on only a finite number of these terms, this is a stationary process when $t$ is large enough. Consider
$$x_{t+m} = \frac{\beta}{1-\alpha_1} + \sum_{j=0}^{\infty}\alpha_1^j\varepsilon_{t+m-j} = \frac{\beta}{1-\alpha_1} + \sum_{j=0}^{m-1}\alpha_1^j\varepsilon_{t+m-j} + \alpha_1^m\sum_{j=0}^{\infty}\alpha_1^j\varepsilon_{t-j},$$
and note that $E x_t\varepsilon_{t+m-j} = 0$ for $j < m$. Hence,
$$\gamma_m = \text{Cov}(x_t, x_{t+m}) = \alpha_1^m\,\text{Var}\,x_t = \frac{\alpha_1^m\sigma^2}{1-\alpha_1^2}.$$
Therefore, $x_t$ is a short memory process, since
$$\sum_{j=0}^{\infty}|\gamma_j| = \frac{\sigma^2}{(1-\alpha_1^2)(1-|\alpha_1|)} < \infty.$$
Note: we may define a stationary process with starting point $t = 0$, by letting $x_0$ be a drawing from the stationary distribution of $x_t$.

This stochastic process is called a first-order autoregression (AR(1)).
[Figure: realization of 100 observations from $x_t = 0.7x_{t-1} + \varepsilon_t$, $\varepsilon_t \sim N(0,1)$, $x_0 = 0$.]
[Figure: the corresponding i.i.d. process $\varepsilon_t$.]
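
A minimal simulation sketch reproducing a realization of this kind (the seed is arbitrary and plotting is omitted, so the exact path will differ from the figure):

```python
import numpy as np

# Simulate 100 observations from x_t = 0.7*x_{t-1} + e_t,
# e_t ~ N(0,1), x_0 = 0, as in the figure above.
rng = np.random.default_rng(0)
e = rng.standard_normal(100)
x = np.zeros(100)
for t in range(1, 100):
    x[t] = 0.7 * x[t-1] + e[t]

# The sample variance should be roughly the stationary value
# 1/(1 - 0.7^2) ~ 1.96, up to sampling error.
print(x.var(), 1 / (1 - 0.49))
```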

Second Order Case:
$$x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \varepsilon_t, \qquad t = 1,2,3,\ldots$$
From previous results, we have the stationary second order solution (MA($\infty$) representation)
$$x_t = \frac{\varepsilon_t}{1-\alpha_1 L-\alpha_2 L^2} = \begin{cases}\displaystyle\sum_{j=0}^{\infty}\frac{\mu_1^{j+1}-\mu_2^{j+1}}{\mu_1-\mu_2}\,\varepsilon_{t-j} & \text{real roots, } |\mu_1|,|\mu_2| < 1 \\[2ex] \displaystyle\sum_{j=0}^{\infty}(j+1)\mu_1^j\,\varepsilon_{t-j} & \text{equal roots, } |\mu_1| < 1 \\[2ex] \displaystyle\sum_{j=0}^{\infty}r^j\,\frac{\sin((j+1)\theta)}{\sin\theta}\,\varepsilon_{t-j} & \text{complex roots, } r < 1.\end{cases}$$
Complex roots imply sinusoidal lag distributions in the MA($\infty$) representation!

Generalization

The AR($p$) process is
$$x_t = \alpha_1 x_{t-1} + \cdots + \alpha_p x_{t-p} + \varepsilon_t.$$
Using the lag operator, this can be written in the form
$$x_t - \alpha_1 L x_t - \alpha_2 L^2 x_t - \cdots - \alpha_p L^p x_t = \varepsilon_t$$
or
$$\alpha(L)x_t = \varepsilon_t$$
where
$$\alpha(L) = 1 - \alpha_1 L - \alpha_2 L^2 - \cdots - \alpha_p L^p.$$
The stationary solution of the model, when it exists, can be written in the form
$$x_t = \frac{1}{\alpha(L)}\,\varepsilon_t$$
where $1/\alpha(L)$ is a lag polynomial of infinite order, with summable coefficients.

Autocovariances of AR processes

These are conveniently found from the Yule-Walker equations. Multiply the equation by $x_{t-j}$ for $j = 0, 1, 2, \ldots, p$ and take expected values, to yield a system of $p+1$ equations in the unknowns $\gamma_0, \ldots, \gamma_p$. Consider the AR(2):
$$\gamma_0 = \alpha_1\gamma_1 + \alpha_2\gamma_2 + \sigma^2$$
$$\gamma_1 = \alpha_1\gamma_0 + \alpha_2\gamma_1$$
$$\gamma_2 = \alpha_1\gamma_1 + \alpha_2\gamma_0$$
These equations may be solved as
$$\gamma_1 = \frac{\alpha_1\gamma_0}{1-\alpha_2}, \qquad \gamma_2 = \left(\alpha_2 + \frac{\alpha_1^2}{1-\alpha_2}\right)\gamma_0, \qquad \gamma_0 = \frac{(1-\alpha_2)\sigma^2}{(1+\alpha_2)\left((1-\alpha_2)^2 - \alpha_1^2\right)}.$$
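
As a numerical cross-check (with illustrative parameter values), the three Yule-Walker equations can be solved as a linear system and compared with the closed form for $\gamma_0$:

```python
import numpy as np

# Solve the AR(2) Yule-Walker system for (gamma_0, gamma_1, gamma_2)
# and compare with the closed-form gamma_0 above.
a1, a2, s2 = 0.5, 0.3, 1.0
A = np.array([[1.0, -a1, -a2],
              [-a1, 1.0 - a2, 0.0],
              [-a2, -a1, 1.0]])
g = np.linalg.solve(A, np.array([s2, 0.0, 0.0]))
g0_closed = (1 - a2) * s2 / ((1 + a2) * ((1 - a2)**2 - a1**2))
print(g[0], g0_closed)                    # the two agree (~2.2436)
```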

For the higher order cases ($j = 3, 4, 5, \ldots$), solve the difference equation
$$\gamma_j = \alpha_1\gamma_{j-1} + \alpha_2\gamma_{j-2}.$$
Obviously, the conditions for $\gamma_j \to 0$ are identical to the stability conditions for the process. By rearranging the Y-W equations, one can also solve for $\sigma^2, \alpha_1, \alpha_2$ from $\gamma_0, \gamma_1, \gamma_2$.

Moving Average Processes

Consider a process of the form
$$x_t = \theta(L)\varepsilon_t$$
where
$$\theta(L) = 1 + \theta_1 L + \cdots + \theta_q L^q$$
and $\varepsilon_t \sim \text{iid}(0, \sigma^2)$. This is the MA($q$) process. The autocovariances are
$$\gamma_0 = \sigma^2(1 + \theta_1^2 + \cdots + \theta_q^2)$$
$$\gamma_1 = \sigma^2(\theta_1 + \theta_2\theta_1 + \theta_3\theta_2 + \cdots + \theta_q\theta_{q-1})$$
$$\gamma_2 = \sigma^2(\theta_2 + \theta_3\theta_1 + \cdots + \theta_q\theta_{q-2})$$
$$\vdots$$
$$\gamma_q = \sigma^2\theta_q$$
with $\gamma_j = 0$ for $j > q$. Thus, the process is stationary for all choices of $\theta(L)$. If $\theta(L)$ is invertible, it can be expressed as a difference equation of infinite order,
$$\theta(L)^{-1}x_t = \varepsilon_t.$$
All MA($q$) processes can be written as AR($\infty$) except those having a root of unity (over-differenced processes).
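
In code, the autocovariances are the lagged self-products of the coefficient vector $(1, \theta_1, \ldots, \theta_q)$. A short sketch with illustrative MA(2) coefficients:

```python
import numpy as np

# MA(q) autocovariances: gamma_j = s2 * sum_i theta_i * theta_{i+j},
# with theta_0 = 1 and gamma_j = 0 for j > q. Illustrative MA(2).
theta, s2 = np.array([1.0, 0.4, -0.3]), 1.0   # (theta_0, theta_1, theta_2)
q = len(theta) - 1
gamma = [s2 * theta[i:] @ theta[:len(theta) - i] for i in range(q + 1)]
print(gamma)        # [gamma_0, gamma_1, gamma_2] = [1.25, 0.28, -0.3]
```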

Invertibility of MA Processes

Consider the MA(1) case first.
$$v_t = \varepsilon_t + \theta_1\varepsilon_{t-1}$$
is a process with the following properties:
$$E v_t = 0, \qquad \text{Var}\,v_t = \sigma^2(1+\theta_1^2), \qquad \text{Cov}(v_t, v_{t-1}) = \theta_1\sigma^2, \qquad \text{Cov}(v_t, v_{t-j}) = 0 \text{ for } j > 1.$$
But suppose $\varepsilon_t^* \sim \text{iid}(0, \sigma^{*2})$ where $\sigma^{*2} = \sigma^2\theta_1^2$. Then we can write equivalently
$$v_t^* = \varepsilon_t^* + \frac{1}{\theta_1}\,\varepsilon_{t-1}^*.$$
The processes $v_t$ and $v_t^*$ have the same autocovariances, and on this basis are observationally equivalent. They are not distributed identically, and if $\varepsilon_t$ is i.i.d. then $\varepsilon_t^*$ is not, in general. However, we cannot distinguish them on the basis of first and second moments. With this caveat, there is no loss of generality in choosing the representation with $|\theta_1| \le 1$.
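
A quick arithmetic check of the equivalence, with the illustrative (non-invertible) choice $\theta_1 = 2$:

```python
# The two MA(1) parameterizations have identical autocovariances:
#   v_t  = e_t  + theta*e_{t-1},      e_t  ~ (0, s2)
#   v*_t = e*_t + (1/theta)*e*_{t-1}, e*_t ~ (0, s2*theta^2)
theta, s2 = 2.0, 1.0                      # |theta| > 1: non-invertible
var_v  = s2 * (1 + theta**2)              # = 5.0
cov_v  = s2 * theta                       # = 2.0
s2s    = s2 * theta**2                    # variance of e*_t
var_vs = s2s * (1 + (1/theta)**2)         # = 5.0
cov_vs = s2s * (1/theta)                  # = 2.0
print(var_v == var_vs, cov_v == cov_vs)   # True True
```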

The MA($q$) case: $v_t = \theta(L)\varepsilon_t$ and $v_t^* = \theta^*(L)\varepsilon_t^*$ are observationally equivalent processes where
$$\theta(L) = (1+\eta_1 L)\cdots(1+\eta_q L)$$
$$\theta^*(L) = (1+\eta_1^{-1}L)(1+\eta_2 L)\cdots(1+\eta_q L)$$
and $E\varepsilon_t^{*2} = \eta_1^2\,E\varepsilon_t^2$. In total, there are $2^q$ equivalent representations! Conventionally, we impose invertibility to identify the model.
Further caveat: we can also write, for example with the same $v_t$,
$$\varepsilon_t = \frac{v_t}{1+\theta_1 L}, \qquad \varepsilon_t^* = \frac{v_t}{1+\theta_1^{-1}L} = \frac{1+\theta_1 L}{1+\theta_1^{-1}L}\,\varepsilon_t.$$
In this case, both $\varepsilon_t$ and $\varepsilon_t^*$ are uncorrelated processes with $E\varepsilon_t^{*2} = \theta_1^2\,E\varepsilon_t^2$, but at most one of these processes can be i.i.d., in general. The exception is the Gaussian case, where uncorrelatedness is equivalent to independence.

ARMA Processes

Combining AR and MA components yields the ARMA($p,q$) process
$$\alpha(L)x_t = \theta(L)\varepsilon_t.$$
This represents a flexible class of linear models for stationary processes. Subject to stability/invertibility, the ARMA can be viewed as a difference equation of infinite order (AR($\infty$)),
$$\frac{\alpha(L)}{\theta(L)}\,x_t = \varepsilon_t,$$
and as a moving average of infinite order (MA($\infty$)):
$$x_t = \frac{\theta(L)}{\alpha(L)}\,\varepsilon_t.$$

Solving Rational Lag Expansions

Method of undetermined coefficients: write $\theta(z) = \alpha(z)\psi(z)$ and solve for the coefficients of $\psi(z)$. For simplicity set $q = p$, so we have
$$1 + \theta_1 z + \cdots + \theta_p z^p = (1 - \alpha_1 z - \cdots - \alpha_p z^p)(1 + \psi_1 z + \psi_2 z^2 + \psi_3 z^3 + \cdots).$$
Expanding the right-hand side, equating coefficients of $z^j$ for $j = 1, 2, \ldots$ and rearranging, we obtain
$$\psi_1 = \theta_1 + \alpha_1$$
$$\psi_2 = \theta_2 + \alpha_2 + \alpha_1\psi_1$$
$$\psi_3 = \theta_3 + \alpha_3 + \alpha_2\psi_1 + \alpha_1\psi_2$$
$$\vdots$$
$$\psi_p = \theta_p + \alpha_p + \alpha_{p-1}\psi_1 + \cdots + \alpha_1\psi_{p-1},$$
and then, for $j > p$,
$$\psi_j = \alpha_p\psi_{j-p} + \alpha_{p-1}\psi_{j-p+1} + \cdots + \alpha_1\psi_{j-1}.$$
These equations can be solved as a recursion for as many steps as required. Stability of the polynomial $\alpha(z)$ ensures that $\psi_j \to 0$ as $j \to \infty$.
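
This recursion is simple to implement. A sketch for an illustrative ARMA(2,1), treating $\theta_j = 0$ for $j > q$ so that the same code also covers $q < p$:

```python
import numpy as np

# MA(infinity) weights of an ARMA(p,q) from theta(z) = alpha(z)*psi(z):
# psi_j = theta_j + sum_{k=1}^{min(j,p)} alpha_k * psi_{j-k}, psi_0 = 1.
alpha = np.array([0.5, 0.2])              # AR coefficients alpha_1, alpha_2
theta = np.array([0.4])                   # MA coefficient theta_1
n = 20
psi = np.zeros(n); psi[0] = 1.0
for j in range(1, n):
    s = theta[j-1] if j <= len(theta) else 0.0
    for k in range(1, min(j, len(alpha)) + 1):
        s += alpha[k-1] * psi[j-k]
    psi[j] = s
print(psi[:6])          # weights decay to zero since alpha(z) is stable
```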

Vector Autoregressions

Let $x_t$ be a sequence of $m$-vectors. The VAR($p$) model takes the form
$$x_t = a_0 + A_1 x_{t-1} + \cdots + A_p x_{t-p} + u_t$$
where $a_0$ is an $m$-vector of intercepts, $A_k$, $k = 1, \ldots, p$, are square $m \times m$ matrices, and $u_t$ ($m \times 1$) is a vector of disturbances with
$$E u_t = 0, \qquad E u_t u_t' = \begin{pmatrix}\sigma_{11} & \cdots & \sigma_{1m} \\ \vdots & & \vdots \\ \sigma_{m1} & \cdots & \sigma_{mm}\end{pmatrix}.$$
Also write (suppressing the intercept)
$$A(L)x_t = u_t$$
where
$$A(L) = I - A_1 L - \cdots - A_p L^p.$$

Stability of the VAR

Start with the case $p = 1$. Solve the system by repeated substitution, as
$$x_t = a_0 + Ax_{t-1} + u_t = a_0 + A(a_0 + Ax_{t-2} + u_{t-1}) + u_t = \cdots = (I + A + A^2 + \cdots)a_0 + u_t + Au_{t-1} + A^2 u_{t-2} + \cdots$$
Thus, we need to know the properties of $A^n$ as $n$ gets large.

Eigenvalues

The eigenvalues of $A$ are the solutions (assumed distinct, and possibly complex-valued) to the equation
$$|A - \lambda I| = 0.$$
A square matrix with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$ has the diagonalization $A = CMC^{-1}$, where $M = \text{diag}(\lambda_1, \ldots, \lambda_m)$ and $C$ is the matrix of eigenvectors. Note that
$$|A - \lambda I| = |CMC^{-1} - \lambda I| = |C(M - \lambda I)C^{-1}| = |C||C^{-1}||M - \lambda I| = (\lambda_1-\lambda)(\lambda_2-\lambda)\cdots(\lambda_m-\lambda).$$
Stability condition:
$$A^n = CMC^{-1}\cdot CMC^{-1}\cdots CMC^{-1} = CM^nC^{-1}.$$
The conditions for $A^n \to 0$ as $n \to \infty$ are that $|\lambda_i| < 1$, for each $i$.
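
A two-line numerical check of this condition, with an illustrative $2 \times 2$ coefficient matrix:

```python
import numpy as np

# VAR(1) stability: A^n -> 0 iff all eigenvalues of A lie inside the
# unit circle. Illustrative coefficient matrix with eigenvalues 0.7, 0.4.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
eig = np.linalg.eigvals(A)
print(eig, np.all(np.abs(eig) < 1))       # stable: True
print(np.linalg.matrix_power(A, 50))      # effectively the zero matrix
```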

The General Case

Write the VAR($p$) model in companion form:
$$\begin{pmatrix} x_t \\ x_{t-1} \\ x_{t-2} \\ \vdots \\ x_{t-p} \end{pmatrix} = \begin{pmatrix} A_1 & A_2 & \cdots & A_p & 0 \\ I & 0 & \cdots & 0 & 0 \\ 0 & I & \cdots & 0 & 0 \\ & & \ddots & & \\ 0 & 0 & \cdots & I & 0 \end{pmatrix}\begin{pmatrix} x_{t-1} \\ x_{t-2} \\ x_{t-3} \\ \vdots \\ x_{t-p-1} \end{pmatrix} + \begin{pmatrix} u_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
or
$$x_t^* = A^* x_{t-1}^* + u_t^* \qquad (m(p+1) \times 1).$$
Repeat the analysis of $p = 1$ on this model. $A^*$ ($m(p+1) \times m(p+1)$) has $m(p+1)$ eigenvalues, of which $m$ are 0. It can be shown that the remaining $mp$ eigenvalues match the inverted roots of $|A(z)| = 0$.

The Generalized Stability (Invertibility) Condition: Generalizing the AR($p$) analysis, we can show that all the roots of the equation
$$|A(z)| = |I - A_1 z - \cdots - A_p z^p| = 0$$
(a polynomial of order $mp$) must lie outside the unit circle.
Note the case $p = 1$. Observe that
$$|I - Az| = (-z)^m\,|A - \lambda I| \quad\text{where } \lambda = 1/z,$$
so the eigenvalue and polynomial root conditions are equivalent.
Note the case $m = 1$: the companion form provides an alternative way to analyse the stability of the AR($p$).
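
The match between companion eigenvalues and the roots of $|A(z)|$ can be verified numerically. A sketch for an illustrative bivariate VAR(2), using the standard $mp \times mp$ companion block (so the extra $m$ zero eigenvalues of the $m(p+1)$ form above are omitted); each nonzero eigenvalue $\lambda$ satisfies $|\lambda^2 I - \lambda A_1 - A_2| = 0$:

```python
import numpy as np

# Companion-form stability check for a VAR(2) with m = 2.
A1 = np.array([[0.4, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
m = 2
top = np.hstack([A1, A2])
bottom = np.hstack([np.eye(m), np.zeros((m, m))])
comp = np.vstack([top, bottom])           # 4x4 companion matrix
eig = np.linalg.eigvals(comp)
print(np.all(np.abs(eig) < 1))            # VAR(2) is stable: True
for lam in eig:                           # each lam is an inverted root
    print(abs(np.linalg.det(lam**2 * np.eye(m) - lam * A1 - A2)))  # ~ 0
```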

The Final Form of a VAR

To solve $A(L)x_t = u_t$, note that
$$A(L)^{-1} = \frac{1}{|A(L)|}\,\text{adj}\,A(L)$$
where $|A(L)|$ is a scalar lag polynomial of order $mp$, and the elements of $\text{adj}\,A(L)$ are lag polynomials of maximum order $p(m-1)$. The final form equations are
$$|A(L)|\,x_t = \text{adj}\,A(L)\,u_t.$$
Key facts: The vector on the right-hand side is a sum of $m$ moving average terms in the elements of $u_t$. A sum of MA($q$) terms has zero autocovariances beyond $q$ lags, hence itself has an MA representation. Hence, the final form of a VAR is a vector of univariate ARMA processes! Nominally the AR roots are the same for each equation; in practice there are frequently cancellations of common factors, element by element.