MIT OpenCourseWare
http://ocw.mit.edu

14.384 Time Series Analysis, Fall 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
14.384 Time Series Analysis, Fall 2007
Professor Anna Mikusheva
Paul Schrimpf, scribe
September 6, 2007

Lecture 1. Stationarity, Lag Operator, ARMA, and Covariance Structure

Introduction

History. Time series analysis was popular in the early 90s and is making a comeback now. The current comeback is largely due to macro applications. Time series work can be roughly divided into macro-related and finance-related topics: the macro literature mostly focuses on means, while finance focuses on higher moments. Macro work is limited by the short horizon of available data.

Outline. The course can be divided into:

1. Classics
2. DSGE

The classics cover:

                 stationary    nonstationary
Univariate       ARMA          unit root
Multivariate     VARMA         cointegration

The DSGE part covers simulated GMM, ML, and Bayesian methods.

Goals. Most of you are probably interested in empirical research, so we'll give you the tools needed to do this. However, we'll also cover theory and highlight open questions.

Problem Sets. Problem sets will have an empirical part that requires programming. Use whatever language you prefer; we recommend Matlab and discourage Stata. You need not write your programs from scratch. You can freely download programs from the web, but make sure you use them correctly and cite them. Working in groups is encouraged, but you should write your own solutions.

ARMA Processes

Stationarity

We need what we have observed to be stable, in some sense, so that we can make statements about the future.

Definition 1. White noise: a process $\{e_t\}$ such that $Ee_t = 0$, $Ee_t e_s = 0$ for $t \neq s$, and $Ee_t^2 = \sigma^2$.

Definition 2. Strict stationarity: a process $\{y_t\}$ is strictly stationary if for each $k$, the distribution of $(y_t, \ldots, y_{t+k})$ is the same for all $t$.
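To make Definition 1 concrete, here is a minimal simulation sketch (in Python with numpy; the language and all names are illustrative, not part of the notes) that draws iid Gaussian noise, one example of a white noise process, and checks the three moment conditions in sample:

```python
import numpy as np

# Illustrative sketch: iid Gaussian draws are one example of white noise.
rng = np.random.default_rng(0)
sigma, T = 2.0, 100_000
e = rng.normal(0.0, sigma, size=T)

print(e.mean())                 # sample analog of E e_t = 0
print(e.var())                  # sample analog of E e_t^2 = sigma^2 (= 4 here)
print(np.mean(e[:-1] * e[1:]))  # sample analog of E e_t e_{t+1} = 0
```

An iid sequence like this is also strictly stationary in the sense of Definition 2, since the joint distribution of any block $(e_t, \ldots, e_{t+k})$ does not depend on $t$.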
Definition 3. 2nd order (covariance) stationarity: $\{y_t\}$ is 2nd order stationary if $Ey_t$, $Ey_t^2$, and $\mathrm{cov}(y_t, y_{t+k})$ do not depend on $t$.

Examples of non-stationary processes

Example 4. Break:
$$y_t = \begin{cases} \beta + e_t & t \leq k \\ \beta + \lambda + e_t & t > k \end{cases}$$

Example 5. Random walk (also known as a unit root process): $y_t = y_{t-1} + e_t$.

Definition 6. Lag operator: denoted $L$, with $Ly_t = y_{t-1}$. The lag operator can be raised to powers, e.g. $L^2 y_t = y_{t-2}$. We can also form polynomials of it:
$$a(L) = a_0 + a_1 L + a_2 L^2 + \ldots + a_p L^p$$
$$a(L)y_t = a_0 y_t + a_1 y_{t-1} + a_2 y_{t-2} + \ldots + a_p y_{t-p}$$
Lag polynomials can be multiplied. Multiplication is commutative: $a(L)b(L) = b(L)a(L)$.

Inversion

Lag polynomials can also be inverted. Example: $(1-\rho L)(1-\rho L)^{-1} = 1$ with $(1-\rho L)^{-1} = \sum_{i=0}^{\infty} \rho^i L^i$, since
$$(1-\rho L)\sum_{i=0}^{\infty} \rho^i L^i = \sum_{i=0}^{\infty} \rho^i L^i - \sum_{i=1}^{\infty} \rho^i L^i = \rho^0 L^0 = 1$$
Of course, this only makes sense if $|\rho| < 1$, because then if $x_t$ is weakly stationary, $\left(\sum_{i=0}^{J} \rho^i L^i\right)x_t$ converges to a limit $y_t$ as $J \to \infty$.

For higher order polynomials, we can invert them by factoring, using the formula for $(1-\rho L)^{-1}$, and then rearranging. For example:
$$1 - a_1 L - a_2 L^2 = (1-\lambda_1 L)(1-\lambda_2 L), \quad |\lambda_i| < 1$$
$$(1 - a_1 L - a_2 L^2)^{-1} = (1-\lambda_1 L)^{-1}(1-\lambda_2 L)^{-1} = \left(\sum_i \lambda_1^i L^i\right)\left(\sum_i \lambda_2^i L^i\right) = \sum_{j=0}^{\infty}\left(\sum_{k=0}^{j} \lambda_1^k \lambda_2^{j-k}\right) L^j$$
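As a numerical sanity check of the inversion formula (a sketch under the assumption $|\rho| < 1$; variable names are illustrative), applying the truncated series $\sum_{i=0}^{J} \rho^i L^i$ to $w_t = (1-\rho L)x_t$ should recover $x_t$ up to an error of order $\rho^{J+1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, T, J = 0.7, 500, 50
x = rng.normal(size=T)        # an arbitrary input series

# w_t = (1 - rho L) x_t, treating pre-sample values of x as zero
w = x.copy()
w[1:] -= rho * x[:-1]

# Apply the truncated inverse: sum_{i=0}^{J} rho^i L^i applied to w
x_rec = np.zeros(T)
for i in range(J + 1):
    x_rec[i:] += rho**i * w[:T - i]

# The sums telescope, leaving an error of order rho^(J+1) -- tiny here
print(np.max(np.abs(x_rec - x)))
```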
Or, perhaps more easily, we can do a partial fraction decomposition:
$$\frac{1}{(1-\lambda_1 x)(1-\lambda_2 x)} = \frac{a}{1-\lambda_1 x} + \frac{b}{1-\lambda_2 x}, \quad a = \frac{\lambda_1}{\lambda_1 - \lambda_2}, \quad b = \frac{\lambda_2}{\lambda_2 - \lambda_1}$$
$$a^{-1}(L) = a\sum_i \lambda_1^i L^i + b\sum_i \lambda_2^i L^i$$
This trick only works when the $\lambda_i$ are unique. The formula is slightly different otherwise. Note: the $\lambda_i$ are the inverses of the roots of the lag polynomial. To invert a polynomial, we needed $|\lambda_i| < 1$, i.e., the roots of the polynomial must lie outside the unit circle.

Simple Processes

Autoregressive (AR):
AR(1): $y_t = \rho y_{t-1} + e_t$, $|\rho| < 1$, i.e. $(1-\rho L)y_t = e_t$
AR(p): $a(L)y_t = e_t$, where $a(L)$ is order $p$

Moving average (MA):
MA(1): $y_t = e_t + \theta e_{t-1}$, i.e. $y_t = (1+\theta L)e_t$
MA(q): $y_t = b(L)e_t$, where $b(L)$ is order $q$

ARMA(p, q): $a(L)y_t = b(L)e_t$, where $a(L)$ is order $p$, $b(L)$ is order $q$, and $a(L)$ and $b(L)$ are relatively prime.

An ARMA representation is not unique. For example, an AR(1) (with $|\rho| < 1$) is equal to an MA($\infty$), as we saw above. In fact, this is more generally true: any AR(p) with roots outside the unit circle has an MA($\infty$) representation.

Covariances

Definition 7. Auto-covariance: $\gamma_k = \mathrm{cov}(y_t, y_{t+k})$.

Definition 8. Auto-correlation: $\rho_k = \gamma_k / \gamma_0$.
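Sample analogs of Definitions 7 and 8 are straightforward to compute; here is a minimal sketch (function names are illustrative, not from the notes):

```python
import numpy as np

def sample_autocov(y, k):
    """Sample analog of gamma_k = cov(y_t, y_{t+k})."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    return np.mean((y[:len(y) - k] - ybar) * (y[k:] - ybar))

def sample_autocorr(y, k):
    """Sample analog of rho_k = gamma_k / gamma_0."""
    return sample_autocov(y, k) / sample_autocov(y, 0)
```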
AR(1) example. Consider $y_t = \rho y_{t-1} + e_t$ with $|\rho| < 1$. Observe that $\mathrm{Var}(y_t) = \rho^2 \mathrm{Var}(y_{t-1}) + \sigma^2$, and $\mathrm{Var}(y_t) = \mathrm{Var}(y_{t-1}) = \gamma_0$, so $\gamma_0 = \frac{\sigma^2}{1-\rho^2}$. Also, it is easy to see by induction that $\gamma_k = \frac{\rho^k \sigma^2}{1-\rho^2}$. Another way to see this is from the MA representation, $y_t = \sum_{i=0}^{\infty} \rho^i e_{t-i}$:
$$\gamma_0 = \sum_{i=0}^{\infty} \rho^{2i} \sigma^2 = \frac{\sigma^2}{1-\rho^2}$$
$$\gamma_k = \mathrm{cov}\left(\sum_{i=0}^{\infty} \rho^i e_{t-i}, \sum_{i=0}^{\infty} \rho^i e_{t+k-i}\right) = \sum_{i=k}^{\infty} \rho^i \rho^{i-k} \sigma^2 = \frac{\rho^k \sigma^2}{1-\rho^2}$$

More generally, if $y_t = \sum_{i=0}^{\infty} c_i e_{t-i}$, then
$$\mathrm{cov}(y_t, y_{t+k}) = \mathrm{cov}\left(\sum_{i=0}^{\infty} c_i e_{t-i}, \sum_{i=0}^{\infty} c_i e_{t+k-i}\right) = \sigma^2 \sum_{j=0}^{\infty} c_j c_{j+k}$$

MA representation and covariance stationarity

If $y_t = \sum_{i=0}^{\infty} c_i e_{t-i}$, then $y_t$ has finite variance, and in fact is covariance stationary, if $\sum_{j=0}^{\infty} c_j^2 < \infty$. It is often easier to prove things with the stronger assumption of absolute summability, $\sum_{j=0}^{\infty} |c_j| < \infty$ (or stronger still, $\sum_{j=0}^{\infty} j|c_j| < \infty$).

Definition 9. Covariance function: $\gamma(\xi) = \sum_{i=-\infty}^{\infty} \gamma_i \xi^i$, where $\xi$ is a complex number.

Lemma 10. (Covariance function of an MA) For an MA, $y_t = c(L)e_t$, we have $\gamma(\xi) = \sigma^2 c(\xi)c(\xi^{-1})$.

Proof.
$$c(\xi)c(\xi^{-1}) = \left(\sum_{i} c_i \xi^i\right)\left(\sum_{i} c_i \xi^{-i}\right) = \sum_{j,l=0}^{\infty} c_j c_l \xi^{j-l} = \sum_{k=-\infty}^{\infty} \xi^k \sum_{j=0}^{\infty} c_j c_{j+k}$$
Multiplying by $\sigma^2$ and using $\gamma_k = \sigma^2 \sum_{j} c_j c_{j+k}$ gives $\gamma(\xi) = \sigma^2 c(\xi)c(\xi^{-1})$.

Lemma 11. (Covariance function of an ARMA) For an ARMA, $a(L)y_t = b(L)e_t$, we have
$$\gamma(\xi) = \sigma^2 \frac{b(\xi)b(\xi^{-1})}{a(\xi)a(\xi^{-1})}$$
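As a check on the AR(1) calculations above, here is a short simulation sketch (Python with numpy; all names are illustrative) that compares sample autocovariances with the closed form $\gamma_k = \rho^k\sigma^2/(1-\rho^2)$ and with the MA-representation sum $\sigma^2\sum_j c_j c_{j+k}$, truncated at a large $J$:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sigma, T = 0.7, 1.0, 200_000

# Simulate y_t = rho * y_{t-1} + e_t (start-up effect negligible for large T)
e = rng.normal(0.0, sigma, size=T)
y = np.empty(T)
y[0] = e[0]
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t]

J = 200
c = rho ** np.arange(J)          # MA representation coefficients c_j = rho^j
ybar = y.mean()
for k in range(4):
    closed_form = sigma**2 * rho**k / (1 - rho**2)
    ma_sum = sigma**2 * np.sum(c[:J - k] * c[k:])    # sigma^2 sum_j c_j c_{j+k}
    sample = np.mean((y[:T - k] - ybar) * (y[k:] - ybar))
    print(k, closed_form, ma_sum, sample)  # the three numbers should roughly agree
```

With $\rho = 0.7$ and $\sigma^2 = 1$, for instance, $\gamma_0 = 1/0.51 \approx 1.96$ and $\gamma_1 \approx 1.37$, and all three columns should be close to these values.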