Representation of cointegrated autoregressive processes with application to fractional processes


Representation of cointegrated autoregressive processes with application to fractional processes

Søren Johansen
Institute of Mathematical Sciences and Department of Economics, University of Copenhagen

Short title: Representation of cointegrated autoregressive processes

Abstract. We analyze vector autoregressive processes using the matrix-valued characteristic polynomial. The purpose of this paper is to give a survey of the mathematical results on inversion of a matrix polynomial in case there are unstable roots, in order to study integrated and cointegrated processes. The new results are in the I(2) representation, which contains explicit formulas for the first two terms and a useful property of the third. We define a new error correction model for fractional processes and derive a representation of the solution.

Keywords: Granger representation, error correction models, integration of order 1 and 2, fractional autoregressive model. JEL Classification: C32.

I would like to thank the Danish Social Sciences Research Council for continuing support, the members of the ESF-EMM network for discussions, and Morten Ørregaard Nielsen for useful comments. The paper is partly based on a lecture presented at the Joint Statistics Meeting, San Francisco 2003, in a session in honour of T. W. Anderson.

1 Introduction

The integration and cointegration properties of autoregressive processes are studied by analyzing the inverse of the matrix polynomial generating the coefficients. This is a well-known technique for stationary processes, and for I(1) processes it was also the approach in the original proof of the Granger Representation Theorem, see Engle and Granger (1987).

The purpose of this paper is to give a survey of mathematical results on the inverse of a matrix polynomial; this is done in Section 2. In Section 3 we then discuss the application of the results to the well-known models for I(1), I(2), explosive, and seasonally integrated processes. The I(2) representation is extended to give the first two terms explicitly and a useful property of the third, which implies that the cointegrating relations are I(0), not just stationary. As an application of the representation results, we consider in Section 4 a new error correction model for fractionally cointegrated processes and derive a representation of the solution, see Granger (1986).

2 Expansion and inversion of matrix polynomials

To establish some notation, consider the p-dimensional autoregressive process X_t given by

    X_t = \sum_{i=1}^{k} \Pi_i X_{t-i} + \varepsilon_t,  t = 1, \dots, T.   (1)

The solution X_t is a function of initial values and the shocks \varepsilon_t, which we assume to be a sequence of i.i.d. variables with mean zero and variance \Omega. We define the

characteristic polynomial \Pi(z) as the p x p matrix function

    \Pi(z) = I_p - \sum_{i=1}^{k} \Pi_i z^i,

and the characteristic equation by |\Pi(z)| = 0, where |\Pi(z)| = \det(\Pi(z)). Let the roots of the characteristic equation be z_1, \dots, z_N. We call a root z_i stable if |z_i| > 1, unstable if |z_i| \le 1, and explosive if |z_i| < 1. Thus

    \rho_{stable} = \min\{|z_i| : z_i \text{ stable}\} > 1,

because it is the minimum of a finite set of numbers greater than one. Note that \Pi(0) = I_p implies z_i \ne 0. The expression

    \Pi(z)^{-1} = \frac{\mathrm{Adj}(\Pi(z))}{|\Pi(z)|},  z \notin \{z_1, \dots, z_N\},

shows that \Pi(z)^{-1} has a pole at z \in \{z_1, \dots, z_N\}, since |\Pi(z_m)| = 0. If the pole is of order one it has the form C_m (1 - z/z_m)^{-1}, so that the function H(z) defined by

    H(z) = \Pi(z)^{-1} - \frac{C_m}{1 - z/z_m},  z \notin \{z_1, \dots, z_N\},

has no pole at z_m and has a convergent power series expansion in a neighborhood of z = z_m.

We first prove two theorems on the expansion of matrix polynomials, which, translated into results about the autoregressive process (1), will give the error correction formulations of the autoregressive equations. Note that the points of expansion need not

be roots of the polynomial. The first result can be interpreted as an interpolation formula, due to Legendre, which finds a polynomial which at n + 1 points takes prescribed values.

Theorem 1 Let z_i, i = 0, 1, \dots, n, be distinct and z_0 = 0. The matrix polynomial \Pi(z) has the expansion around z_0, z_1, \dots, z_n

    \Pi(z) = I_p p(z) + \sum_{m=1}^{n} \Pi(z_m) \frac{p_m(z)}{p_m(z_m)} \frac{z}{z_m} + p(z) z \Psi(z),

where \Psi(z) is a matrix polynomial and

    p(z) = \prod_{i=1}^{n} (1 - z/z_i),  p_m(z) = \prod_{i=1, i \ne m}^{n} (1 - z/z_i).   (2)

Proof. The matrix polynomial

    R(z) = \Pi(z) - I_p p(z) - \sum_{m=1}^{n} \Pi(z_m) \frac{p_m(z)}{p_m(z_m)} \frac{z}{z_m}

is zero for z = z_j:

    R(z_j) = \Pi(z_j) - I_p p(z_j) - \sum_{m=1}^{n} \Pi(z_m) \frac{p_m(z_j)}{p_m(z_m)} \frac{z_j}{z_m} = 0,  j = 0, \dots, n,

because \Pi(0) = I_p, p(0) = 1, p(z_j) = 0, and p_m(z_j) = p_m(z_m) 1_{\{j=m\}}, j = 1, \dots, n. Hence each entry of R(z) has p(z) z as a factor, so that R(z) = p(z) z \Psi(z) for some matrix polynomial \Psi(z).
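As a practical aside (my illustration, not part of the paper): the roots z_1, \dots, z_N around which these expansions are anchored can be computed numerically as reciprocals of the nonzero eigenvalues of the companion matrix of (1). The helper name and the example VAR below are assumptions chosen for illustration.

```python
import numpy as np

def char_roots(Pis):
    """Roots of |Pi(z)| = |I_p - Pi_1 z - ... - Pi_k z^k| via the companion matrix.

    The nonzero eigenvalues of the companion matrix are the reciprocals 1/z_i
    of the characteristic roots, so an eigenvalue with modulus < 1 corresponds
    to a stable root |z_i| > 1.
    """
    k = len(Pis)
    p = Pis[0].shape[0]
    A = np.zeros((k * p, k * p))
    A[:p, :] = np.hstack(Pis)            # first block row: Pi_1 ... Pi_k
    A[p:, :-p] = np.eye((k - 1) * p)     # identity blocks on the sub-diagonal
    eigs = np.linalg.eigvals(A)
    return np.array([1 / l for l in eigs if abs(l) > 1e-12])

# X1 a random walk and X2t = X1,t-1 + eps2t: Pi_1 = [[1,0],[1,0]], one unit root
roots = char_roots([np.array([[1.0, 0.0], [1.0, 0.0]])])
print(np.sort(np.abs(roots)))   # a single unstable root at z = 1
```

The same routine classifies roots as stable (|z| > 1), unstable (|z| <= 1), or explosive (|z| < 1) in the terminology above.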

Next we give an expansion around two points, where we can prescribe the values of the function and the first derivative at one of the points. We denote by \dot\Pi(z) the first derivative and by \ddot\Pi(z) the second derivative with respect to z. A similar result can be given for more than two points and higher derivatives, Johansen (93).

Theorem 2 Let z_1 \ne z_2. The polynomial has the expansion

    \Pi(z) = \frac{(z - z_1)^2}{(z_2 - z_1)^2} \Pi(z_2) + \Big(1 - \frac{(z - z_1)^2}{(z_2 - z_1)^2}\Big) \Pi(z_1) + (z - z_1) \frac{z - z_2}{z_1 - z_2} \dot\Pi(z_1) + (z - z_1)^2 (z - z_2) \Psi(z)

for some matrix polynomial \Psi(z).

Proof. The matrix polynomial

    R(z) = \Pi(z) - \frac{(z - z_1)^2}{(z_2 - z_1)^2} \Pi(z_2) - \Big(1 - \frac{(z - z_1)^2}{(z_2 - z_1)^2}\Big) \Pi(z_1) - (z - z_1) \frac{z - z_2}{z_1 - z_2} \dot\Pi(z_1)

satisfies R(z_1) = R(z_2) = 0 and \dot R(z_1) = 0. Thus each entry of R(z) has the factor (z - z_2)(z - z_1)^2, so that R(z) = (z - z_2)(z - z_1)^2 \Psi(z), as was to be proved.

Corresponding to the two types of expansions in Theorems 1 and 2, we next give a representation of the inverse polynomial under various assumptions on the roots. We apply the notation

    \Pi(z) = \Pi_m + (z - z_m) \dot\Pi_m + \frac{1}{2} (z - z_m)^2 \ddot\Pi_m + (z - z_m)^3 \Psi(z),   (3)

and in case the root is z = 1, we leave out the subscript m.

These results have essentially been derived before, see Johansen and Schaumburg (1998) and Nielsen (2005b). We give the extra term C_1 in the expansion (7), see Rahbek and Mosconi (1999) for the I(1) model, and, as a new result, a complete expression for C_1 in (13) and a partial result on C_0, (14), for the I(2) model.

For a p x r matrix a of rank r (< p) we use the notation \bar a = a(a'a)^{-1}, and a_\perp is a p x (p - r) matrix of rank p - r for which a'a_\perp = 0. We say that the root z_m has multiplicity n if \det(\Pi(z)) = (1 - z/z_m)^n g(z), g(z_m) \ne 0.

Theorem 3 Let z_0 be a root of the characteristic polynomial of multiplicity n and let \Pi_0 = \Pi(z_0) = -\alpha\beta' of rank r. If the I(1) condition

    |\alpha_\perp' \dot\Pi_0 \beta_\perp| \ne 0   (4)

is satisfied, then the multiplicity n equals p - r and

    \Pi(z)^{-1} = \frac{C}{1 - z/z_0} + C_1 + (1 - z/z_0) H(z),  0 < |z - z_0| < \delta,   (5)

for some \delta > 0, where H(z) is regular and has no singularity at z = z_0, and

    C = -z_0 \beta_\perp (\alpha_\perp' \dot\Pi_0 \beta_\perp)^{-1} \alpha_\perp',   (6)

    C_1 = -(\bar\beta\bar\alpha' + C \dot\Pi_0 \bar\beta\bar\alpha' + \bar\beta\bar\alpha' \dot\Pi_0 C) - C (\dot\Pi_0 \bar\beta\bar\alpha' \dot\Pi_0 + \frac{1}{2} \ddot\Pi_0) C.   (7)

The proof is given in the Appendix. Note that the root z_0 need not be unstable, and that there may be other roots of the characteristic equation. This implies that we can

only expand H(z) in a neighborhood of z_0. When we apply the result to autoregressive processes we need to take into account the position of the other roots:

Corollary 4 Let \{z_m, m = 1, \dots, n\} be n roots of the characteristic polynomial of multiplicities n_m, m = 1, \dots, n, and let -\Pi(z_m) = \alpha_m \beta_m' of rank r_m. If the I(1) condition (4) is satisfied for all m, then n_m = p - r_m and

    \Pi(z)^{-1} = \sum_{m=1}^{n} \frac{C_m}{1 - z/z_m} + K(z),   (8)

where K(z) has no singularities at z = z_1, \dots, z_n, and C_m is given by (6) with (\alpha, \beta, z_0, \dot\Pi_0) replaced by (\alpha_m, \beta_m, z_m, \dot\Pi_m).

Proof. By repeated application of Theorem 3 we can remove the poles z_m, m = 1, \dots, n.

If we remove all the unstable roots, then K(z) is regular on |z| < \rho_{stable}, and we can hence expand it around z = 0 with exponentially decreasing coefficients, because \rho_{stable} > 1. These coefficients will be used to define a stationary process in the proof of Granger's representation theorem.

Finally, let us consider the case where the I(1) condition (4) fails. For notational reasons we formulate the I(2) result in Theorem 5 only for the root z = 1. We let -\Pi(1) = \alpha\beta' have rank r < p, but also assume that -\alpha_\perp' \dot\Pi \beta_\perp = \xi\eta' has reduced rank s < p - r, so that the I(1) condition (4) is not satisfied. We define the mutually

orthogonal directions

    (\alpha, \alpha_1, \alpha_2) = (\alpha, \bar\alpha_\perp \xi, \bar\alpha_\perp \xi_\perp)  and  (\beta, \beta_1, \beta_2) = (\beta, \bar\beta_\perp \eta, \bar\beta_\perp \eta_\perp).   (9)

Theorem 5 Let z = 1 be a root so that -\Pi(1) = \alpha\beta' of rank r, and assume that -\alpha_\perp' \dot\Pi \beta_\perp = \xi\eta' of rank s. Then, if the I(2) condition

    |\alpha_2' [\dot\Pi \bar\beta\bar\alpha' \dot\Pi + \frac{1}{2} \ddot\Pi] \beta_2| \ne 0   (10)

is satisfied at z = 1, it holds that

    \Pi(z)^{-1} = \frac{C_2}{(1 - z)^2} + \frac{C_1}{1 - z} + C_0 + (1 - z) H(z),  0 < |z - 1| < \delta,   (11)

where H(z) is regular for |z - 1| < \delta, and C_2 and C_1 are given by

    C_2 = \beta_2 (\alpha_2' \Theta \beta_2)^{-1} \alpha_2',   (12)

    C_1 = \bar\beta_1\bar\alpha_1' - [\bar\beta_1\bar\alpha_1' + \bar\beta\bar\alpha' \dot\Pi] C_2 - C_2 [\bar\beta_1\bar\alpha_1' + \dot\Pi \bar\beta\bar\alpha'] + C_2 [\frac{1}{2}(\dot\Pi \bar\beta\bar\alpha' \ddot\Pi + \ddot\Pi \bar\beta\bar\alpha' \dot\Pi) - \dot\Pi \bar\beta\bar\alpha' \dot\Pi \bar\beta\bar\alpha' \dot\Pi + \frac{1}{6} \dddot\Pi] C_2,   (13)

where \Theta = \dot\Pi \bar\beta\bar\alpha' \dot\Pi + \frac{1}{2} \ddot\Pi. Finally, C_0 satisfies

    \beta' C_0 \alpha = -I_r + \bar\alpha' \dot\Pi C_2 \dot\Pi \bar\beta.   (14)

Note that C_1 satisfies

    \beta' C_1 = -\bar\alpha' \dot\Pi C_2,  C_1 \alpha = -C_2 \dot\Pi \bar\beta,   (15)

    \alpha_1' C_1 = \bar\alpha_1' (I_p - \Theta C_2),  \beta_1' C_1 \alpha_1 = I_s.   (16)

The proof is given in the Appendix. Note that in (36) of the proof the relation

    |\Pi(z)| = (1 - z)^{p-r} |K(z)| |A'B|

appears. This is relation (i) (\det(G(\lambda)) = \lambda^r g(\lambda), \lambda = 1 - z) from Lemma 1 in Engle and Granger (1987)^2. The I(1) condition is equivalent to g(0) = |K||A'B| \ne 0, or to the statement that the multiplicity of the root is the dimension minus the rank of \Pi(1). Thus the idea of applying a general theorem about inversion of polynomials to discuss the solution of the error correction model is found in Engle and Granger (1987).

The approach taken here gives some information on the conditions under which the matrix function can be inverted, and leads to explicit formulae for some of the coefficients. We start with the error correction model which we usually fit to data, and then find conditions under which the solution is I(1) or I(2). The approach taken by Granger was to start with an I(1) process, which exhibits cointegration, and then derive the (infinite lag) error correction model that it satisfies. In both cases the proof of the result involves the inversion of the matrix function. For I(d) processes similar

^2 A counterexample to Lemma 1 in the paper by Engle and Granger (1987) is given by taking G(\lambda) = \mathrm{diag}(1 + \lambda, \lambda^2, \lambda^2). What is missing in Lemma 1 is a condition corresponding to the I(1) condition of cointegration (see Johansen 1996, Theorem 4.5).

results were developed by la Cour (1998). A systematic exposition of the mathematics behind the representations for I(1) and I(2) processes is given by Faliva and Zoia (2006).

Another type of proof of these results applies the Smith form of a matrix polynomial, which is a reduction of a matrix polynomial to diagonal form using matrix polynomials with determinant one, so that the question of inverting the matrix is reduced to discussing a diagonal matrix, see Engle and Yoo (1991) or Ahn and Reinsel (1994) and the survey by Haldrup and Salmon (1998). In this context, the I(1) condition is most naturally given as the assumption that the number of unit roots of \Pi(z) equals p minus the rank of \Pi(1). The equivalence of the two conditions is given in Johansen (1996, Corollary 4.3), see also Anderson (2003) for a simple proof.

Yet another way of considering these results is via the Jordan representation of the companion form, where the I(1) condition is formulated as the absence of Jordan blocks (corresponding to the unit root) of order two in the companion form of the matrix polynomial, see Archontakis (1998), Gregoire (1999), Bauer and Wagner (2005), and Hansen (2005) for further details.

3 The autoregressive model and its solution

The simplest example of the relation between the solution of the autoregressive equations and the inverse of the characteristic polynomial is given by the classical representation of a stationary autoregressive process, see for instance Anderson (1974).
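Before turning to the applications, here is a small numerical check of (5)-(6) (my own illustration; the bivariate system and all numbers are assumptions, not taken from the paper). For X_1 a random walk and X_{2,t} = X_{1,t-1} + \varepsilon_{2,t} there is one unit root, and the formula for C can be compared with the limit of (1 - z)\Pi(z)^{-1} as z approaches 1.

```python
import numpy as np

# Pi(z) = I - Pi1 z with Pi1 = [[1,0],[1,0]]: X1 a random walk, X2t = X1,t-1 + eps2t
Pi1 = np.array([[1.0, 0.0], [1.0, 0.0]])
Pi = lambda z: np.eye(2) - Pi1 * z
Pidot = -Pi1                                  # derivative of the (linear) polynomial

# -Pi(1) = alpha beta' with alpha = (0,1)', beta = (1,-1)'
alpha = np.array([[0.0], [1.0]])
beta = np.array([[1.0], [-1.0]])
a_perp = np.array([[1.0], [0.0]])             # alpha_perp
b_perp = np.array([[1.0], [1.0]])             # beta_perp

# I(1) condition: alpha_perp' Pidot beta_perp nonsingular; then (6) with z0 = 1
C = -1.0 * b_perp @ np.linalg.inv(a_perp.T @ Pidot @ b_perp) @ a_perp.T
print(np.round(C, 6))                         # the matrix [[1, 0], [1, 0]]

# compare with (1 - z) Pi(z)^{-1} near z = 1, cf. (5)
z = 1 - 1e-6
print(np.allclose((1 - z) * np.linalg.inv(Pi(z)), C, atol=1e-4))   # True

# beta' C = 0 and C alpha = 0: the cointegrating relation annihilates the common trend
print(np.allclose(beta.T @ C, 0), np.allclose(C @ alpha, 0))       # True True
```

The orthogonality properties in the last line are exactly what make \beta' X_t stationary in the Granger representation below.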

Theorem 6 If the roots of the characteristic equation are stable, then \Pi(z)^{-1} has no singularities for |z| < \rho_{stable}. The expansion

    \Pi(z)^{-1} = \sum_{i=0}^{\infty} C_i z^i,  |z| < \rho_{stable},

defines exponentially decreasing coefficients \{C_i\}_{i=0}^{\infty}. The stationary solution of the equation \Pi(L) X_t = \varepsilon_t is given by

    X_t = \sum_{i=0}^{\infty} C_i \varepsilon_{t-i}.

3.1 Error correction models

The error (or equilibrium) correction formulation is a convenient way of rewriting the autoregressive equations when one takes into account the existence of roots. We apply the results of Theorem 1 and Theorem 2 to formulate two error correction models.

Corollary 7 Let z = z_m, m = 1, \dots, n, be distinct roots of \Pi(z), and let -\Pi(z_m) = \alpha_m \beta_m'. Then the process X_t satisfies the error correction model

    p(L) X_t = \sum_{m=1}^{n} \alpha_m \beta_m' \frac{p_m(L)}{z_m p_m(z_m)} X_{t-1} + p(L) \sum_{i=1}^{k-n} \Gamma_i X_{t-i} + \varepsilon_t,   (17)

for some matrix coefficients \Gamma_i, i = 1, \dots, k - n, where p(z) and p_m(z) are given by (2).

Proof. This follows from Theorem 1 by expanding \Pi(z) around 0, z_1, \dots, z_n.
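The coefficients C_i of Theorem 6 can be generated directly by power-series inversion; the helper below is my sketch (not code from the paper), using the recursion implied by \Pi(z) \sum_i C_i z^i = I.

```python
import numpy as np

def inverse_coefficients(Pis, n):
    """Coefficients C_0,...,C_n of Pi(z)^{-1} = sum_i C_i z^i for Pi(z) = I - sum Pi_i z^i.

    Matching powers of z in Pi(z) * sum_i C_i z^i = I gives the recursion
    C_0 = I,  C_j = sum_{i=1}^{min(j,k)} Pi_i C_{j-i}.
    """
    p = Pis[0].shape[0]
    C = [np.eye(p)]
    for j in range(1, n + 1):
        C.append(sum(Pis[i - 1] @ C[j - i] for i in range(1, min(j, len(Pis)) + 1)))
    return C

# stable AR(1): X_t = 0.5 X_{t-1} + eps_t, so C_i = 0.5^i, exponentially decreasing
C = inverse_coefficients([np.array([[0.5]])], 10)
print([float(c[0, 0]) for c in C[:4]])   # [1.0, 0.5, 0.25, 0.125]
```

For a stable VAR the same recursion yields the moving-average coefficients of the stationary solution X_t = \sum_i C_i \varepsilon_{t-i}.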

Corollary 8 Let z = 1 be a root so that -\Pi(1) = \alpha\beta'. Then the process X_t satisfies the error correction model

    \Delta^2 X_t = \alpha\beta' X_{t-1} - \Gamma \Delta X_{t-1} + \sum_{i=1}^{k-2} \Gamma_i \Delta^2 X_{t-i} + \varepsilon_t,   (18)

where \Gamma = -(\dot\Pi + \alpha\beta'), for some matrix coefficients \Gamma_i, i = 1, \dots, k - 2.

Proof. This follows by choosing z_1 = 1 and z_2 = 0 in Theorem 2.

Note that the formulation (18) involves no assumption about the process being I(2), only that z = 1 is a root. We next turn to the solution of the error correction models, where we have to be more precise about the roots and the assumptions on the coefficients.

3.2 Granger's representation theorem

The next result is a representation of the solution of \Pi(L) X_t = \varepsilon_t. The equations determine the process X_t as a function of the errors \varepsilon_i (i = 1, \dots, t) and the initial values of the process, but the integration properties of X_t depend on the unstable roots.

3.2.1 I(1) processes

We first give a general result for I(1) processes and then apply it to some special cases.

Theorem 9 Let z_m, m = 1, \dots, n, be the distinct unstable roots of \Pi(z). Then, if the I(1) condition is satisfied for all the unstable roots, X_t is non-stationary, but p(L) X_t and p_m(L) \beta_m' X_t can be made stationary with mean zero by a suitable choice of initial

values. In this case, X_t has the representation

    X_t = \sum_{m=1}^{n} C_m S_{mt} + \sum_{m=1}^{n} z_m^{-t} A_m + Y_t,  t = 1, \dots, T,   (19)

where S_{mt} = z_m^{-t} \sum_{i=1}^{t} z_m^i \varepsilon_i, Y_t is stationary, C_m is given by (6), and finally the A_m depend on initial values and are chosen so that \beta_m' A_m = 0. If z_m = 1 is the only unstable root we find

    X_t = C \sum_{i=1}^{t} \varepsilon_i + C_1 \varepsilon_t + Z_t + A,   (20)

where Z_t is stationary, C and C_1 are given by (6) and (7), and \beta' A = 0.

Proof. Under the I(1) condition it follows from (8), by multiplying by p(z), that

    p(z) \Pi(z)^{-1} = \sum_{m=1}^{n} C_m p_m(z) + p(z) H(z),  |z| < \rho_{stable},

for some regular function H with no singularities for |z| < \rho_{stable}, because we have removed all the unstable roots. We then expand H(z) = \sum_{i=0}^{\infty} C_i z^i, |z| < \rho_{stable}, with exponentially decreasing coefficients, and define the stationary process Y_t = \sum_{i=0}^{\infty} C_i \varepsilon_{t-i}. We find, by multiplying \Pi(L) X_t = \varepsilon_t by p(L) \Pi(L)^{-1}, that X_t satisfies the difference equation

    p(L) X_t = \sum_{m=1}^{n} C_m p_m(L) \varepsilon_t + p(L) Y_t.   (21)

We first show that X_t^* = \sum_{m=1}^{n} C_m S_{mt} + Y_t is a solution of this equation. Because the process S_{mt} = z_m^{-t} \sum_{i=1}^{t} z_m^i \varepsilon_i is a solution of the equation (1 - L/z_m) S_{mt} = \varepsilon_t, and

p(z) = (1 - z/z_m) p_m(z), we get that

    p(L) X_t^* = \sum_{m=1}^{n} C_m p_m(L)(1 - L/z_m) S_{mt} + p(L) Y_t = \sum_{m=1}^{n} C_m p_m(L) \varepsilon_t + p(L) Y_t.

The complete solution is then given by adding the solution X_t^0 of the homogeneous equation p(L) X_t^0 = 0, which is given by X_t^0 = \sum_{m=1}^{n} z_m^{-t} A_m for arbitrary coefficient matrices A_m. Thus X_t = X_t^* + X_t^0 represents all possible solutions of (21). We see that p(L) X_t = p(L) X_t^* is stationary for any choice of coefficients A_m, and next turn to the stationarity of p_k(L) \beta_k' X_t:

    p_k(L) \beta_k' X_t = \sum_{m=1}^{n} \beta_k' C_m p_k(L) S_{mt} + \sum_{m=1}^{n} p_k(L) z_m^{-t} \beta_k' A_m + p_k(L) \beta_k' Y_t.   (22)

We use the relation p_k(z) = p_{km}(z)(1 - z/z_m), p_{km}(z) = \prod_{j \ne k, m} (1 - z/z_j), and show that the first term is stationary:

    \sum_{m=1}^{n} \beta_k' C_m p_k(L) S_{mt} = \sum_{m=1, m \ne k}^{n} \beta_k' C_m p_{km}(L)(1 - L/z_m) S_{mt} = \sum_{m=1, m \ne k}^{n} \beta_k' C_m p_{km}(L) \varepsilon_t,

because \beta_k' C_k = 0 and (1 - L/z_m) S_{mt} = \varepsilon_t. The second term of (22) is

    p_k(L) z_k^{-t} \beta_k' A_k + \sum_{m=1, m \ne k}^{n} p_{km}(L)(1 - L/z_m) z_m^{-t} \beta_k' A_m = p_k(z_k) z_k^{-t} \beta_k' A_k,

because (1 - L/z_m) z_m^{-t} = 0 and p_k(L) z_k^{-t} = p_k(z_k) z_k^{-t}. Hence, if we choose \beta_k' A_k = 0, the second term vanishes. Finally, the third term of (22) is stationary, so that p_k(L) \beta_k' X_t

is stationary with mean zero. If z_m = 1 is the only unstable root we get from (5) the representation (20) in the same way.

We next give some special cases of well-known error correction models.

The I(1) model. When z = 1 is the only unstable root we get the usual I(1) model, where the error correction form is given by (17) in Corollary 7 for z_m = 1, so that p(L) = 1 - L = \Delta, and the error correction model becomes

    \Delta X_t = \alpha\beta' X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta X_{t-i} + \varepsilon_t.   (23)

The I(1) condition (4) asserts that |\alpha_\perp' \Gamma \beta_\perp| \ne 0, where \Gamma = I_p - \sum_{i=1}^{k-1} \Gamma_i. We find the representation (20). For inference in this model, see for example Johansen (1996).

Note that from (20) it follows that \Delta X_t and \beta' X_t = \beta' C_1 \varepsilon_t + \beta' Z_t are stationary. In fact \beta' C_1 \alpha = -I_r, so that \beta' C_1 \ne 0 and \beta' X_t is I(0), and no linear combination of X_t can be I(-1). Similarly, there cannot be multicointegration in the I(1) model, in the sense that X_t cannot cointegrate in a non-trivial sense with \sum_{i=1}^{t} \beta' X_i, see Granger and Lee (1990). In case \gamma_1' \sum_{i=1}^{t} \beta' X_i + \gamma_2' X_t were stationary, we would have \gamma_1' \beta' C_1 + \gamma_2' C = 0. Multiplying by \alpha we get 0 = \gamma_1' \beta' C_1 \alpha = -\gamma_1', so that \gamma_1 = 0. For details see Engsted and Johansen (1999).

The model for seasonal roots. The I(1) model for quarterly seasonal roots is found if there are roots at z = \pm 1, \pm i, see Hylleberg, Engle, Granger and Yoo (1990), Ahn and Reinsel (1994), and Johansen and Schaumburg (1998), where inference is discussed. We illustrate here the simpler case of half-yearly data with roots at z = \pm 1,

see Lee (1992). We define -\Pi(1) = \alpha\beta' and -\Pi(-1) = \alpha_-\beta_-'. The error correction form of the equation is given by (17) in Corollary 7 with z_1 = 1, z_2 = -1. Because p(L) = (1 - L)(1 + L) = 1 - L^2, p_1(L) = 1 + L, and p_{-1}(L) = 1 - L, we find the error correction equation

    (1 - L^2) X_t = \frac{1}{2} \alpha\beta'(1 + L) X_{t-1} - \frac{1}{2} \alpha_-\beta_-'(1 - L) X_{t-1} + (1 - L^2) \sum_{i=1}^{k-2} \Gamma_i X_{t-i} + \varepsilon_t,

and the solution, if the I(1) condition (4) is satisfied at z = \pm 1, is given by (19), which in this case becomes

    X_t = C \sum_{i=1}^{t} \varepsilon_i + C_- (-1)^t \sum_{i=1}^{t} (-1)^i \varepsilon_i + A + (-1)^t A_- + Y_t,

where C and C_- are given by (6) for z_m = \pm 1, and \beta' A = 0 and \beta_-' A_- = 0. The non-stationary behavior of the process X_t is governed by the two processes

    S_t = \sum_{i=1}^{t} \varepsilon_i  and  S_t^- = (-1)^t \sum_{i=1}^{t} (-1)^i \varepsilon_i.

The first is a random walk, and \sum_{i=1}^{t} (-1)^i \varepsilon_i is also a random walk if \varepsilon_t has a symmetric

distribution, but the factor (-1)^t shows that S_t^- is not a random walk. It is a non-stationary process that satisfies the basic equation (1 + L) S_t^- = \varepsilon_t, so that yearly aggregates of S_t^- are stationary, whereas (1 - L) S_t = \varepsilon_t shows that the difference within a year of S_t is stationary.

The interpretation of cointegration in this case is not so simple. What we see is a process which is non-stationary, and even the difference X_t - X_{t-1} is non-stationary. If we take instead the yearly data X_t^{year} = X_t + X_{t-1}, then the difference

    \Delta X_t^{year} = X_t + X_{t-1} - (X_{t-1} + X_{t-2}) = X_t - X_{t-2} = (1 - L^2) X_t

is stationary. Cointegration means that even though X_t - X_{t-1} is non-stationary, there are linear combinations \beta_- for which \beta_-'(X_t - X_{t-1}) is stationary. Similarly, there are combinations \beta for which \beta'(X_t + X_{t-1}) is stationary.

Models for explosive roots. If the characteristic polynomial has one real explosive root z = \rho^{-1}, |\rho| > 1, and a root at z = 1, then the error correction model is found from (17) by choosing z_1 = 1, z_2 = \rho^{-1}, which gives

    p(L) = (1 - L)(1 - \rho L),  p_1(L) = 1 - \rho L,  p_1(1) = 1 - \rho,  p_2(L) = 1 - L,  p_2(\rho^{-1}) = 1 - \rho^{-1}.
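The driving processes of these representations, the random walk S_t, the seasonal process S_t^-, and, for an explosive root 1/\rho, the process S_t^\rho = \rho^t \sum_{i=1}^t \rho^{-i} \varepsilon_i, each satisfy a simple first-order difference equation, which can be checked by simulation. This sketch is my addition (the sample size and \rho = 1.25 are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(60)
t = np.arange(1, 61)
rho = 1.25                                             # explosive root at z = 1/rho

S = np.cumsum(eps)                                     # (1 - L) S_t = eps_t
S_neg = (-1.0) ** t * np.cumsum((-1.0) ** t * eps)     # (1 + L) S^-_t = eps_t
S_rho = rho ** t * np.cumsum(rho ** (-t) * eps)        # (1 - rho L) S^rho_t = eps_t

print(np.allclose(S[1:] - S[:-1], eps[1:]))            # True
print(np.allclose(S_neg[1:] + S_neg[:-1], eps[1:]))    # True
print(np.allclose(S_rho[1:] - rho * S_rho[:-1], eps[1:]))  # True
```

Note that S_t^\rho grows like \rho^t even though the inner sum converges, exactly as discussed for the explosive representation; the sample is kept short so the check is not swamped by rounding at large \rho^t.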

The error correction model is given by

    (1 - L)(1 - \rho L) X_t = (1 - \rho)^{-1} \alpha\beta'(1 - \rho L) X_{t-1} + \rho (1 - \rho^{-1})^{-1} \alpha_\rho\beta_\rho'(1 - L) X_{t-1} + (1 - L)(1 - \rho L) \sum_{i=1}^{k-2} \Gamma_i X_{t-i} + \varepsilon_t,

where -\Pi(1) = \alpha\beta' and -\Pi(\rho^{-1}) = \alpha_\rho\beta_\rho'. The process has the representation, see (19),

    X_t = C \sum_{i=1}^{t} \varepsilon_i + C_\rho \rho^t \sum_{i=1}^{t} \rho^{-i} \varepsilon_i + \rho^t A_\rho + A + Y_t,

provided the I(1) condition (4) is satisfied at z = 1 and z = \rho^{-1}. Note that the process is non-stationary because of the random walk, but this time there is also an explosive process \rho^t \sum_{i=1}^{t} \rho^{-i} \varepsilon_i. The sum \sum_{i=1}^{t} \rho^{-i} \varepsilon_i actually converges for t \to \infty, and the explosiveness is due to the factor \rho^t \to \infty. Note that \beta' X_t is not stationary: it is explosive, and only (1 - \rho L) \beta' X_t is stationary. Similarly, \beta_\rho' X_t is not stationary but I(1). For inference the main references are the papers by Nielsen (2002, 2005a, 2005b), see also Anderson (1959).

3.2.2 I(2) processes

We next turn to I(2) processes and assume that z = 1 is a root, so that -\Pi(1) = \alpha\beta' of rank r, but -\alpha_\perp' \dot\Pi \beta_\perp = \xi\eta' of rank s, so that the I(1) condition is not satisfied. This determines the mutually orthogonal directions (\alpha, \alpha_1, \alpha_2) and (\beta, \beta_1, \beta_2), which are needed in the formulation of the results, see (9).

Theorem 10 Let z = 1 be the only unstable root, and assume that the I(1) condition (4) does not hold. If the I(2) condition (10) is satisfied at z = 1, then X_t is non-

stationary, but \Delta^2 X_t, (\beta, \beta_1)' \Delta X_t, and \beta' X_t + \bar\alpha' \dot\Pi \Delta X_t can be made I(0) by a suitable choice of initial values, and then X_t has the representation

    X_t = C_2 \sum_{i=1}^{t} \sum_{j=1}^{i} \varepsilon_j + C_1 \sum_{i=1}^{t} \varepsilon_i + Y_t + A + Bt,   (24)

where C_2 and C_1 are given by (12) and (13), Y_t is I(0), and A and B depend on initial values and are chosen so that \beta' A + \bar\alpha' \dot\Pi B = 0 and (\beta, \beta_1)' B = 0. The linear combination \beta' X_t + \bar\alpha' \dot\Pi \Delta X_t is called the polynomially cointegrating relation.

Proof. We follow the proof of Theorem 9. From (11) we find

    (1 - z)^2 \Pi(z)^{-1} = C_2 + C_1 (1 - z) + (1 - z)^2 H(z),  |z| < \rho_{stable},

where H(z) is a regular function with no singularities for |z| < \rho_{stable}, and hence given by H(z) = \sum_{i=0}^{\infty} C_i z^i, |z| < \rho_{stable}, where the C_i are exponentially decreasing. We define the stationary process Y_t = \sum_{i=0}^{\infty} C_i \varepsilon_{t-i}, and note that H(1) = \sum_{i=0}^{\infty} C_i = C_0 \ne 0 because of (14), so that Y_t is I(0). Next apply \Delta^2 \Pi(L)^{-1} to \Pi(L) X_t = \varepsilon_t. We then find the difference equation

    \Delta^2 X_t = C_2 \varepsilon_t + C_1 \Delta \varepsilon_t + \Delta^2 Y_t,  t = 1, \dots, T.

The process

    X_t^* = C_2 \sum_{i=1}^{t} \sum_{j=1}^{i} \varepsilon_j + C_1 \sum_{i=1}^{t} \varepsilon_i + Y_t

is clearly a solution, and the complete solution is found by adding the solution of the homogeneous equation \Delta^2 X_t^0 = 0, which is given by X_t^0 = A + Bt; that is, any solution has the form X_t = X_t^* + A + Bt. It follows that \Delta^2 X_t = \Delta^2 X_t^* is stationary with mean zero for any choice of A and B. We want to choose A and B so that (\beta, \beta_1)' \Delta X_t and \beta' X_t + \bar\alpha' \dot\Pi \Delta X_t become stationary with mean zero. We find

    (\beta, \beta_1)' \Delta X_t = (\beta, \beta_1)' C_1 \varepsilon_t + (\beta, \beta_1)' \Delta Y_t + (\beta, \beta_1)' B,

which is stationary if B is chosen so that (\beta, \beta_1)' B = 0. Moreover, it is I(0) because, by (16), (\beta, \beta_1)' C_1 \alpha_1 = (0, I_s)' \ne 0. Next,

    \beta' X_t + \bar\alpha' \dot\Pi \Delta X_t = \beta' C_1 \sum_{i=1}^{t} \varepsilon_i + \beta' Y_t + \beta' A + \beta' B t + \bar\alpha' \dot\Pi (C_2 \sum_{i=1}^{t} \varepsilon_i + C_1 \varepsilon_t + \Delta Y_t + B)
                              = \beta' Y_t + \bar\alpha' \dot\Pi C_1 \varepsilon_t + \bar\alpha' \dot\Pi \Delta Y_t + \beta' A + \bar\alpha' \dot\Pi B,

because \beta' B = 0 and \beta' C_1 + \bar\alpha' \dot\Pi C_2 = 0, see (15). We next choose \beta' A + \bar\alpha' \dot\Pi B = 0, that is, \beta' A = -\bar\alpha' \dot\Pi B. This completes the proof of the representation (24).

Note that \Delta^2 X_t is I(0) because C_2 \ne 0, (\beta, \beta_1)' \Delta X_t is I(0) because (\beta, \beta_1)' C_1 \alpha_1 \ne 0, and finally \beta' X_t + \bar\alpha' \dot\Pi \Delta X_t is I(0) because the sum of its coefficients, \beta' C_0 + \bar\alpha' \dot\Pi C_1, is nonzero: (14) and (15) show that

    (\beta' C_0 + \bar\alpha' \dot\Pi C_1) \alpha = -I_r + \bar\alpha' \dot\Pi C_2 \dot\Pi \bar\beta - \bar\alpha' \dot\Pi C_2 \dot\Pi \bar\beta = -I_r.
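A quick numerical sanity check of (12) (my own illustrative system, not taken from the paper): take \Delta^2 X_{1,t} = \varepsilon_{1,t} and X_{2,t} = \Delta X_{1,t} + \varepsilon_{2,t}, which is an I(2) VAR(2) with r = 1 and s = 0, and compare the C_2 of (12) with the limit of (1 - z)^2 \Pi(z)^{-1} as z approaches 1.

```python
import numpy as np

# illustrative I(2) system: Pi1 = [[2,0],[1,0]], Pi2 = [[-1,0],[-1,0]]
Pi1 = np.array([[2.0, 0.0], [1.0, 0.0]])
Pi2 = np.array([[-1.0, 0.0], [-1.0, 0.0]])
Pi = lambda z: np.eye(2) - Pi1 * z - Pi2 * z ** 2
Pidot = -Pi1 - 2 * Pi2          # first derivative at z = 1
Piddot = -2 * Pi2               # second derivative

# -Pi(1) = alpha beta' with alpha = (0,-1)', beta = (0,1)'; here r = 1, s = 0
alpha = np.array([[0.0], [-1.0]])
beta = np.array([[0.0], [1.0]])
abar = alpha / float(alpha.T @ alpha)
bbar = beta / float(beta.T @ beta)
a2 = np.array([[1.0], [0.0]])   # alpha_2 (= alpha_perp, since s = 0)
b2 = np.array([[1.0], [0.0]])   # beta_2

# Theta and C_2 from (10) and (12)
Theta = Pidot @ bbar @ abar.T @ Pidot + 0.5 * Piddot
C2 = b2 @ np.linalg.inv(a2.T @ Theta @ b2) @ a2.T
print(np.round(C2, 6))          # the matrix [[1, 0], [0, 0]]

z = 1 - 1e-5
print(np.allclose((1 - z) ** 2 * np.linalg.inv(Pi(z)), C2, atol=1e-3))  # True
```

Only the first coordinate loads on the double sum of shocks, which matches the construction: X_1 is I(2) while X_2 = \Delta X_1 + noise is I(1).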

For I(2) processes we usually apply the error correction model (18), where \Gamma satisfies \alpha_\perp' \Gamma \beta_\perp = \xi\eta'. Inference can be found in Boswijk (2000), Johansen (1997, 2006), and Paruolo (1996, 2000).

Note that the representations (19) and (24) are chosen so that it follows directly from (19) that \Delta X_t and \beta' X_t are stationary, and from (24) it is seen that \Delta^2 X_t, (\beta, \beta_1)' \Delta X_t, and \beta' X_t + \bar\alpha' \dot\Pi \Delta X_t are stationary.

4 Fractional processes

In this section we discuss some models for cofractional processes that have been analyzed in the literature, and suggest a modification of the model by Granger (1986) which admits a feasible representation theory. We present here the first results for this new model; they will be followed up in a companion paper, Johansen (2007), with the subsequent aim of developing model-based inference for fractional and cofractional processes based upon the model.

4.1 Basic definitions

The concept of the fractional difference was introduced by Granger and Joyeux (1980) and Hosking (1981) to discuss processes with a long memory. The fractional difference operator is defined by the expansion

    \Delta^d = (1 - L)^d = \sum_{k=0}^{\infty} (-1)^k \binom{d}{k} L^k,

We shall also use the definition

$\Delta_+^d X_t = \sum_{k=0}^{t-1}(-1)^k\binom{d}{k}X_{t-k},$

so that $\Delta_+^d X_t = \Delta^d(X_t 1_{\{t \geq 1\}})$; see for instance Robinson and Iacone (2005). We denote by $I(0)_+$ the class of asymptotically stationary processes of the form

$X_t = 1_{\{t \geq 1\}}\sum_{i=0}^{t-1}C_i\varepsilon_{t-i},$

where $0 < \mathrm{tr}\{(\sum_{i=0}^{\infty}C_i)'(\sum_{i=0}^{\infty}C_i)\} < \infty$.

Definition 1 We say that $Y_t$ is fractional of order $d$, and write $Y_t \in F(d)$, if $\Delta_+^d Y_t - \mu_t \in I(0)_+$ for some function, $\mu_t$, of the initial values.

Definition 2 If $Y_t \in F(d)$ with $d > 0$ and there exists a linear combination $\beta \neq 0$ so that $\beta' Y_t \in F(d-b)$ for $0 < b \leq d$, we call $Y_t$ cofractional, $F(d, b)$, with cofractional vector $\beta$; the number of linearly independent such vectors is the cofractional rank.

4.2 Some models for cofractional processes

A parametric model for cofractional processes, which satisfactorily describes the variation of the data, should ideally have the following properties:

- The model is a dynamic model that generates $F(d, b)$ processes
- The cofractional relations as well as the adjustment towards these are modelled

- The models for different cofractional rank should be nested
- A representation theory for the solution should allow a discussion of the properties of the process, depending on the parameter values

If we can find such a model we can conduct inference by deriving estimators and tests of relevant hypotheses on long-run and short-run parameters, based upon the Gaussian likelihood function.

The first parametric model for cofractional processes $F(d, b)$ was suggested by Granger (1986) as a modification of the usual error correction model:

$A^*(L)\Delta^d X_t = (1 - \Delta^b)\Delta^{d-b}\gamma\beta' X_{t-1} + d(L)\varepsilon_t.$

Here $A^*(L)$ is a lag polynomial and $d(L)$ is a scalar function. If we set $d(L) = 1$ and change $X_{t-1}$ to $X_t$ and $\gamma\beta'$ to $\alpha\beta'$, we get the model

$A^*(L)\Delta^d X_t = (1 - \Delta^b)\Delta^{d-b}\alpha\beta' X_t + \varepsilon_t. \quad (25)$

This is a dynamic parametric model which models both short-run and long-run parameters. The model was applied by Davidson (2002), and Lasak (2005) derives a likelihood ratio test for the hypothesis $r = 0$ when $d = 1$ and finds the asymptotic distribution.
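The operator $L_b = 1 - \Delta^b$ appearing in (25) acts like a generalized lag: because the $k = 0$ coefficient of $\Delta^b$ equals 1, $L_b x_t$ is a weighted sum of $x_{t-1}, x_{t-2}, \ldots$ only, with no contemporaneous term. A sketch of the truncated version (names and example are ours):

```python
import numpy as np

def frac_lag(x, b):
    """Truncated fractional lag L_b x_t = x_t - Delta^b_+ x_t.
    The k = 0 weight of Delta^b is 1, so it cancels x_t and the result
    involves only lagged observations."""
    T = len(x)
    w = np.empty(T)
    w[0] = 1.0
    for k in range(1, T):
        w[k] = w[k - 1] * (k - 1 - b) / k     # weights of (1 - L)^b
    return np.array([x[t] - w[: t + 1] @ x[t::-1] for t in range(T)])

x = np.arange(1.0, 6.0)
print(frac_lag(x, 1.0))   # b = 1 gives the usual lag: 0, 1, 2, 3, 4
```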

Most authors consider the simpler regression type model, which explicitly models the order of cofractionality:

$X_{1t} = \gamma' X_{2t} + \Delta_+^{-(d-b)}u_{1t}, \quad (26)$
$X_{2t} = \Delta_+^{-d}u_{2t},$

where $u_t$ is a general invertible I(0) process. Here $X_1$ has dimension $r$ and $X_2$ dimension $p - r$. There is a large literature, based upon (26), aimed at developing (a semiparametric) estimation theory for fractional and cofractional processes, see for instance Robinson and Marinucci (2001), Chen and Hurvich (2003) or Nielsen and Shimotsu (2006), to mention a few. In this literature the purpose is mainly to conduct inference on the order of fractionality and to test for the occurrence of cofractionality.

The relation between (26) and (25) is best seen by defining the matrices $\alpha = (-I_r, 0)'$ and $\beta = (I_r, -\gamma')'$. Then (26) can be written in an error correction form as follows:

$\Delta^d X_t = (1 - \Delta^b)\Delta^{d-b}\alpha\beta' X_t + v_t, \quad (27)$

where $v_t = ((u_{1t} + \gamma' u_{2t})', u_{2t}')'$. For parametric inference in (26), a model for $v_t$ is needed, and Sowell (1989) suggested that $v_t$ could be modelled as an invertible and stationary ARMA process and derived a computable expression for the variance of the fractional stationary process. For this model Dueker and Startz (1998) applied Gaussian maximum likelihood estimation, using the results by Sowell. Note that for an ARMA error $A(L)v_t = B(L)\varepsilon_t$, with $\varepsilon_t$ i.i.d.$(0, \Omega)$, we get the model

$A(L)\Delta^d X_t = A(L)(1 - \Delta^b)\Delta^{d-b}\alpha\beta' X_t + B(L)\varepsilon_t,$

which is close to the error correction formulation (25). Breitung and Hassler (2002) suggested to analyze model (27), where $\gamma$ is an unknown parameter, and construct tests for the rank of $\alpha\beta'$ based on the assumption that $d$ is known and $v_t$ is an AR process ($B(L) = I_p$) with i.i.d. Gaussian errors.

The representation theory for model (26) is easy, as $X_t \in F(d, b)$ and $X_{1t} - \gamma' X_{2t} = \beta' X_t \in F(d - b)$. The adjustment $\alpha$, however, is assumed known, and so is the cofractional rank $r$, so the models for different cofractional rank are not nested. The representation theory for Granger's model (25) is not so easy due to the occurrence of both the lag operator $L$ and the fractional lag operator $L_b = 1 - \Delta^b$, which appears natural for fractional processes. We therefore propose a different model:

$\Delta^d X_t = (1 - \Delta^b)\Delta^{d-b}\alpha\beta' X_t + \sum_{i=1}^{k}\Gamma_i\Delta^d(1 - \Delta^b)^i X_t + \varepsilon_t. \quad (28)$

In this model the polynomial in the lag operator is replaced by a polynomial in the fractional lag operator. This model satisfies the requirements for a parametric model for cofractional processes, as we shall now show that there is a feasible representation theory.
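To make the lag structure of (28) concrete, the sketch below computes the model's residuals $\varepsilon_t$ for given parameter values, using truncated fractional differences throughout. It is our illustration of how the operators compose, with hypothetical parameters, not an estimation procedure from the paper.

```python
import numpy as np

def fdiff(x, d):
    """Truncated fractional difference Delta^d_+ applied to the rows of x."""
    T = x.shape[0]
    w = np.empty(T)
    w[0] = 1.0
    for k in range(1, T):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[: t + 1] @ x[t::-1] for t in range(T)])

def fcvar_residuals(X, d, b, alpha, beta, Gammas):
    """Residuals of (28):
    eps_t = Delta^d X_t - alpha beta' (1 - Delta^b) Delta^{d-b} X_t
            - sum_i Gamma_i Delta^d (1 - Delta^b)^i X_t."""
    Lb = lambda z: z - fdiff(z, b)                 # fractional lag 1 - Delta^b
    eps = fdiff(X, d) - Lb(fdiff(X, d - b)) @ beta @ alpha.T
    Z = X.copy()
    for G in Gammas:                               # successive powers (1 - Delta^b)^i
        Z = Lb(Z)
        eps -= fdiff(Z, d) @ G.T
    return eps

# With alpha = beta = 0, no Gammas and d = b = 1, the model collapses to
# Delta X_t = eps_t, so the residuals are just first differences.
X = np.cumsum(np.random.default_rng(1).standard_normal((50, 2)), axis=0)
e = fcvar_residuals(X, 1.0, 1.0, np.zeros((2, 1)), np.zeros((2, 1)), [])
assert np.allclose(e[1:], np.diff(X, axis=0))
```

The truncated operators commute, so applying $\Delta_+^{d-b}$ before or after the fractional lag gives the same residuals.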

4.3 A representation theorem for cofractional processes

We associate with (28) the (regular) characteristic function

$\Pi(z) = (1-z)^d I_p - \alpha\beta'(1-(1-z)^b)(1-z)^{d-b} - \sum_{i=1}^{k}\Gamma_i(1-(1-z)^b)^i(1-z)^d,$

but also the polynomial

$\Pi(u) = (1-u)I_p - \alpha\beta' u - \sum_{i=1}^{k}\Gamma_i u^i(1-u),$

which is related to $\Pi(z)$ via the substitution $u = 1-(1-z)^b$:

$\Pi(u) = (1-z)^{b-d}\Pi(z).$

Note that $(1-z)^b$, and hence $\Pi(z)$, has a singularity at $z = 1$ unless $d$ and $b$ are nonnegative integers. By introducing the variable $u = 1-(1-z)^b$ the function $(1-z)^{b-d}\Pi(z)$ becomes a polynomial, and the conditions on the roots are in terms of this polynomial, which is the characteristic polynomial for the usual I(1) model, see (23). We want to find the solution of equation (28) under the I(1) condition for $\Pi(u)$. A condition for the process to be fractional of order $d$ and cofractional of order $d-b$ is given in Johansen (2007). We define $\Gamma = I_p - \sum_{i=1}^{k}\Gamma_i$.

Theorem 3 If $u = 1$ is the only unstable root of $|\Pi(u)| = 0$, if $\Pi(1) = -\alpha\beta'$, and

if the I(1) condition holds,

$|\alpha_\perp'\Gamma\beta_\perp| \neq 0, \quad (29)$

then for $|z| < 1$,

$\Pi(z)^{-1} = \sum_{i=0}^{n} D_i(1-z)^{-d+bi} + (1-z)^{-d+b(n+1)}\sum_{i=0}^{\infty}\tilde D_i z^i, \quad (30)$

where $n = [(d-b)/b]$ is chosen so that $-d + nb < 0 \leq -d + (n+1)b$. The first two matrices are given by

$D_0 = \beta_\perp(\alpha_\perp'\Gamma\beta_\perp)^{-1}\alpha_\perp', \quad (31)$

$D_1 = -\bar\beta\bar\alpha' + \bar\beta\bar\alpha'\Gamma D_0 + D_0\Gamma\bar\beta\bar\alpha' + D_0\Big(\Gamma\bar\beta\bar\alpha'\Gamma + \sum_{i=1}^{k} i\Gamma_i\Big)D_0. \quad (32)$

The solution of $\Pi(L)X_t = \varepsilon_t$ has the representation

$X_t = \sum_{i=0}^{n} D_i\Delta_+^{-d+ib}\varepsilon_t + \Delta_+^{-d+(n+1)b}Y_t^+ + \mu_t, \quad t = 1, 2, \ldots \quad (33)$

where the first $n+1$ terms are fractional of order $d, d-b, \ldots, d-nb$ and $Y_t^+ = \sum_{i=0}^{t-1}\tilde D_i\varepsilon_{t-i}$. The function $\mu_t$ depends on initial values. If $Y_t \in I(0)_+$, it follows that $X_t$ is $F(d)$ and $\beta' X_t$ is $F(d-b)$.

Proof. Under the I(1) condition (29), $\Pi(u)$ can be inverted using (5) with $z_0 = 1$, $D_0 = C$, $D_1 = C_1$, so that

$(1-u)\Pi(u)^{-1} = D_0 + D_1(1-u) + (1-u)^2 H(u)$

is a regular function for $|u| < \rho$, for some $\rho > 1$ determined by the stable roots. We expand as

$\Pi(u)^{-1} = \sum_{i=0}^{n} D_i(1-u)^{i-1} + (1-u)^n\sum_{i=0}^{\infty}\tilde C_i u^i, \quad |u| < \rho,$

where $n$ is chosen so that $-d + nb < 0 \leq -d + (n+1)b$, and the coefficients $\tilde C_i$ are exponentially decreasing. For $z$ small we find that

$\Pi(z)^{-1} = (1-z)^{b-d}\Pi(u)^{-1} = \sum_{i=0}^{n} D_i(1-z)^{-d+ib} + (1-z)^{-d+(n+1)b}\sum_{i=0}^{\infty}\tilde C_i(1-(1-z)^b)^i$

$= \sum_{i=0}^{n} D_i(1-z)^{-d+ib} + (1-z)^{-d+(n+1)b}\sum_{i=0}^{\infty}\tilde D_i z^i, \quad (34)$

which proves (30). Here $D_0$ and $D_1$ are given by (6) and (7), and substituting $\dot\Pi = -\Gamma - \alpha\beta'$ and $\tfrac{1}{2}\ddot\Pi = \sum_{i=1}^{k} i\Gamma_i$ we get (31) and (32).

In order to prove (33), we write (28) as $\varepsilon_t = \Pi(L)X_t = \Pi_+(L)X_t + \Pi_-(L)X_t$ and multiply by $\Pi_+(L)^{-1}$ to find

$\Pi_+(L)^{-1}\varepsilon_t = X_t - \mu_t, \quad (35)$

where $\mu_t = -\Pi_+(L)^{-1}\Pi_-(L)X_t$ is a function of initial values of $X_t$. We let $Y_t^+ =$

$\sum_{i=0}^{t-1}\tilde D_i\varepsilon_{t-i}$ and find the representation (33):

$X_t = \mu_t + \Pi_+(L)^{-1}\varepsilon_t = \mu_t + \sum_{i=0}^{n} D_i\Delta_+^{-d+ib}\varepsilon_t + \Delta_+^{-d+(n+1)b}Y_t^+.$

It follows that

$\Delta_+^d X_t = \Delta_+^d\mu_t + D_0\varepsilon_t + D_1\Delta_+^b\varepsilon_t + D_2\Delta_+^{2b}\varepsilon_t + \cdots + D_n\Delta_+^{nb}\varepsilon_t + \Delta_+^{(n+1)b}Y_t^+.$

If $Y_t^+ \in I(0)_+$, we see that $X_t$ is fractional of order $d$ because $D_0 \neq 0$, and that $\beta' X_t$ is fractional of order $d-b$ because $\beta' D_0 = 0$ and $\beta' D_1 \neq 0$.

The representation (33) for $d = b = 1$, and hence $n = 0$, is

$X_t = D_0\sum_{i=1}^{t}\varepsilon_i + \sum_{i=0}^{t-1}\tilde D_i\varepsilon_{t-i} + \mu_t, \quad t = 1, 2, \ldots$

This should be compared with (20), where we have chosen a representation in terms of the stationary process $C_1\varepsilon_t + Z_t$ and given a more explicit discussion of the role of the initial values, rather than just the term $\mu_t$.

5 Conclusion

This paper discusses the connection between the solution of the autoregressive equations and the inverse of the generating function of the coefficients under different conditions on the coefficient matrices. We apply these results to find representations of cointegrated vector autoregressive processes which are I(1), I(2), seasonally integrated

and explosive. For cofractional processes we suggest a modification of the model suggested by Granger (1986), for which the above results can be applied to give a representation theory.

6 Appendix

Proof of Theorem 3. There is no loss of generality in assuming that $z_0 = 1$, as the polynomial $\Pi(uz_0)$ has a root at $u = 1$ and satisfies the I(1) condition at $u = 1$ if $\Pi(z)$ does at $z = z_0$. Note that the derivative for $u = 1$ is $z_0\dot\Pi(z_0)$. We therefore assume that $z_0 = 1$ and leave out the subscript 0 for notational convenience.

Let $\Pi = \alpha\beta'$ have rank $r$, and let $C$ and $C_1$ be given by (6) and (7). Note that $\Pi(z)^{-1}$ is a regular function apart from singularities at the zeros of the characteristic equation. We want to show that the function

$H(z) = \big(\Pi(z)^{-1} - C(1-z)^{-1} - C_1\big)/(z-1), \quad 0 < |z-1| < \delta,$

extended by continuity at $z = 1$, is regular for $|z-1| < \delta$. Multiplying (3) by $A = (\bar\alpha, \alpha_\perp)$ and $B = (\bar\beta, \beta_\perp)$ we obtain

$A'\Pi(z)B = \begin{pmatrix} I_r + (z-1)\dot\Pi_{00} & (z-1)\dot\Pi_{01} \\ (z-1)\dot\Pi_{10} & (z-1)\dot\Pi_{11} \end{pmatrix} + \frac{(z-1)^2}{2}\begin{pmatrix} \ddot\Pi_{00} & \ddot\Pi_{01} \\ \ddot\Pi_{10} & \ddot\Pi_{11} \end{pmatrix} + (z-1)^3 A'\tilde\Pi(z)B,$

where $\dot\Pi_{00} = \bar\alpha'\dot\Pi\bar\beta$, $\dot\Pi_{10} = \alpha_\perp'\dot\Pi\bar\beta$, etc. We can factorize $(z-1)$ from the last $p-r$ columns by multiplying from the right by

$F(z) = \begin{pmatrix} I_r & 0 \\ 0 & (z-1)^{-1}I_{p-r} \end{pmatrix},$

and we find that $K(z) = A'\Pi(z)BF(z) = K + (z-1)\dot K + (z-1)^2\tilde K(z)$, where

$K = \begin{pmatrix} I_r & \dot\Pi_{01} \\ 0 & \dot\Pi_{11} \end{pmatrix}, \qquad \dot K = \begin{pmatrix} \dot\Pi_{00} & \frac{1}{2}\ddot\Pi_{01} \\ \dot\Pi_{10} & \frac{1}{2}\ddot\Pi_{11} \end{pmatrix}.$

From

$|K(z)| = |A'B|\,|\Pi(z)|\,(z-1)^{-(p-r)}, \quad z \neq 1, \quad (36)$

it is seen that $K(z)$ has the same roots as $\Pi(z)$, except for $z = 1$, where $|K(1)| = |K| = |\dot\Pi_{11}| = |\alpha_\perp'\dot\Pi\beta_\perp| \neq 0$ if the I(1) condition (4) is satisfied for $z = 1$. It follows that the multiplicity of the root at $z = 1$ is $p-r$, and that $K(z)$ is invertible for $|z-1| < \varepsilon$ for some small $\varepsilon$:

$K(z)^{-1} = K^{-1} - (z-1)K^{-1}\dot K K^{-1} + (z-1)^2 H_1(z),$

where $H_1(z)$ is regular for $|z-1| < \varepsilon$, and

$K^{-1} = \begin{pmatrix} I_r & -\dot\Pi_{01}\dot\Pi_{11}^{-1} \\ 0 & \dot\Pi_{11}^{-1} \end{pmatrix}, \qquad K^{-1}\dot K K^{-1} = \begin{pmatrix} \dot\Pi_{00} - \dot\Pi_{01}\dot\Pi_{11}^{-1}\dot\Pi_{10} & (\omega_{01} - \dot\Pi_{01}\dot\Pi_{11}^{-1}\omega_{11})\dot\Pi_{11}^{-1} \\ \dot\Pi_{11}^{-1}\dot\Pi_{10} & \dot\Pi_{11}^{-1}\omega_{11}\dot\Pi_{11}^{-1} \end{pmatrix},$

with

$\omega_{ij} = -\dot\Pi_{i0}\dot\Pi_{0j} + \tfrac{1}{2}\ddot\Pi_{ij}, \quad i, j = 0, 1. \quad (37)$

Multiplying by $F(z)$ from the left we find

$F(z)K(z)^{-1} = (z-1)^{-1}M_1 + M_0 + (z-1)H_2(z), \quad z \neq 1,$

with

$M_1 = \begin{pmatrix} 0 & 0 \\ 0 & \dot\Pi_{11}^{-1} \end{pmatrix}, \qquad M_0 = \begin{pmatrix} I_r & -\dot\Pi_{01}\dot\Pi_{11}^{-1} \\ -\dot\Pi_{11}^{-1}\dot\Pi_{10} & -\dot\Pi_{11}^{-1}\omega_{11}\dot\Pi_{11}^{-1} \end{pmatrix},$

for a function $H_2(z)$ which is regular for $|z-1| < \varepsilon$. Therefore, multiplying by $B$ and $A'$, we find

$\Pi(z)^{-1} = BF(z)(A'\Pi(z)BF(z))^{-1}A' = BF(z)K(z)^{-1}A'$

$= (z-1)^{-1}\beta_\perp(\alpha_\perp'\dot\Pi\beta_\perp)^{-1}\alpha_\perp' + BM_0A' + (z-1)BH_2(z)A'$

$= C(1-z)^{-1} + \bar\beta\bar\alpha' + \bar\beta\bar\alpha'\dot\Pi C + C\dot\Pi\bar\beta\bar\alpha' - C(\dot\Pi\bar\beta\bar\alpha'\dot\Pi - \tfrac{1}{2}\ddot\Pi)C + (z-1)H_3(z),$

where $C$ is given in (6) and $H_3(z)$ is a regular function for $|z-1| < \delta$. Thus the pole at $z = 1$ can be removed by subtracting the term $C(1-z)^{-1}$. This proves (5), (6), and (7).

Proof of Theorem 5. We define $A = (\bar\alpha, \alpha_1, \alpha_2)$ and $B = (\bar\beta, \beta_1, \beta_2)$. Because $\Pi = \alpha\beta'$ and $\alpha_\perp'\dot\Pi\beta_\perp = \xi\eta'$, a calculation will show that

$A'\Pi(z)B = \begin{pmatrix} I_r + (z-1)\dot\Pi_{00} & (z-1)\dot\Pi_{01} & (z-1)\dot\Pi_{02} \\ (z-1)\dot\Pi_{10} & (z-1)I_s & 0 \\ (z-1)\dot\Pi_{20} & 0 & 0 \end{pmatrix} + \frac{(z-1)^2}{2}\{\ddot\Pi_{ij}\} + \frac{(z-1)^3}{6}\{\dddot\Pi_{ij}\} + (z-1)^4\tilde\Pi(z).$

Next we multiply from the right by the matrix

$F(z) = \begin{pmatrix} I_r & 0 & -(z-1)^{-1}\dot\Pi_{02} \\ 0 & (z-1)^{-1}I_s & 0 \\ 0 & 0 & (z-1)^{-2}I_{p-r-s} \end{pmatrix}$

to get

$K(z) = A'\Pi(z)BF(z) = K + (z-1)\dot K + (z-1)^2\tilde K(z),$

where

$K = \begin{pmatrix} I_r & \dot\Pi_{01} & -\dot\Pi_{00}\dot\Pi_{02} + \frac{1}{2}\ddot\Pi_{02} \\ 0 & I_s & -\dot\Pi_{10}\dot\Pi_{02} + \frac{1}{2}\ddot\Pi_{12} \\ 0 & 0 & -\dot\Pi_{20}\dot\Pi_{02} + \frac{1}{2}\ddot\Pi_{22} \end{pmatrix}, \qquad \dot K = \begin{pmatrix} \dot\Pi_{00} & \frac{1}{2}\ddot\Pi_{01} & -\frac{1}{2}\ddot\Pi_{00}\dot\Pi_{02} + \frac{1}{6}\dddot\Pi_{02} \\ \dot\Pi_{10} & \frac{1}{2}\ddot\Pi_{11} & -\frac{1}{2}\ddot\Pi_{10}\dot\Pi_{02} + \frac{1}{6}\dddot\Pi_{12} \\ \dot\Pi_{20} & \frac{1}{2}\ddot\Pi_{21} & -\frac{1}{2}\ddot\Pi_{20}\dot\Pi_{02} + \frac{1}{6}\dddot\Pi_{22} \end{pmatrix}.$

Note that $K$ has full rank because

$|-\dot\Pi_{20}\dot\Pi_{02} + \tfrac{1}{2}\ddot\Pi_{22}| = |\alpha_2'(-\dot\Pi\bar\beta\bar\alpha'\dot\Pi + \tfrac{1}{2}\ddot\Pi)\beta_2| \neq 0.$

For $0 < |z-1| < \varepsilon$,

$K(z)^{-1} = K^{-1} - (z-1)K^{-1}\dot K K^{-1} + (z-1)^2 H_1(z),$

where $H_1(z)$ is regular for $|z-1| < \varepsilon$. With $\omega_{ij}$ given in (37) we find

$K = \begin{pmatrix} I_r & \dot\Pi_{01} & \omega_{02} \\ 0 & I_s & \omega_{12} \\ 0 & 0 & \omega_{22} \end{pmatrix}, \qquad K^{-1} = \begin{pmatrix} I_r & -\dot\Pi_{01} & -(\omega_{02} - \dot\Pi_{01}\omega_{12})\omega_{22}^{-1} \\ 0 & I_s & -\omega_{12}\omega_{22}^{-1} \\ 0 & 0 & \omega_{22}^{-1} \end{pmatrix},$

so that

$F(z)K(z)^{-1} = (z-1)^{-2}M_2 + (z-1)^{-1}M_1 + M_0 + (z-1)H_2(z),$

where $H_2(z)$ is regular for $|z-1| < \varepsilon$ and

$M_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \omega_{22}^{-1} \end{pmatrix}, \qquad M_1 = \begin{pmatrix} 0 & 0 & -\dot\Pi_{02}\omega_{22}^{-1} \\ 0 & I_s & -\omega_{12}\omega_{22}^{-1} \\ -\omega_{22}^{-1}\dot\Pi_{20} & -\omega_{22}^{-1}\omega_{21} & \nu_2 \end{pmatrix},$

with

$\nu_2 = \omega_{22}^{-1}\big[\dot\Pi_{20}\dot\Pi_{00}\dot\Pi_{02} - \tfrac{1}{2}\dot\Pi_{20}\ddot\Pi_{02} - \tfrac{1}{2}\ddot\Pi_{20}\dot\Pi_{02} + \tfrac{1}{6}\dddot\Pi_{22}\big]\omega_{22}^{-1}.$

The matrix $M_0$ is rather complicated, but it suffices to find that it has the form

$M_0 = \begin{pmatrix} I_r + \dot\Pi_{02}\omega_{22}^{-1}\dot\Pi_{20} & * & * \\ * & * & * \\ * & * & * \end{pmatrix}.$

Finally we use $\Pi(z)^{-1} = BF(z)(A'\Pi(z)BF(z))^{-1}A' = BF(z)K(z)^{-1}A'$ to find

$C_2 = \beta_2\omega_{22}^{-1}\alpha_2',$

$C_1 = \beta_1\alpha_1' - (\bar\beta\dot\Pi_{02} + \beta_1\omega_{12})\omega_{22}^{-1}\alpha_2' - \beta_2\omega_{22}^{-1}(\dot\Pi_{20}\bar\alpha' + \omega_{21}\alpha_1') + \beta_2\nu_2\alpha_2',$

$\beta'C_0\alpha = I_r + \bar\alpha'\dot\Pi C_2\dot\Pi\bar\beta,$

which reduces to (2), (3), and (4). This completes the proof of Theorem 5.
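The pole structure established in the appendix can be checked numerically. The sketch below is our bivariate example with illustrative parameter values, written in the standard I(1) VAR convention $\Pi(z) = (1-z)I_p - \alpha\beta'z$ with $\Gamma = I_2$; the product $(1-z)\Pi(z)^{-1}$ converges to $C = \beta_\perp(\alpha_\perp'\Gamma\beta_\perp)^{-1}\alpha_\perp'$ as $z \to 1$.

```python
import numpy as np

# Illustrative parameters for Delta X_t = alpha beta' X_{t-1} + eps_t.
alpha = np.array([[-0.2], [0.1]])
beta = np.array([[1.0], [-1.0]])
a_perp = np.array([[1.0], [2.0]])        # alpha_perp: a_perp' alpha = 0
b_perp = np.array([[1.0], [1.0]])        # beta_perp:  b_perp' beta  = 0
Gamma = np.eye(2)                        # no short-run dynamics

C = b_perp @ np.linalg.inv(a_perp.T @ Gamma @ b_perp) @ a_perp.T

def Pi(z):
    """Characteristic function of the VAR, with its unit root at z = 1."""
    return (1 - z) * np.eye(2) - z * alpha @ beta.T

# (1 - z) Pi(z)^{-1} -> C as z -> 1: the pole is removed by subtracting C/(1-z).
for z in (0.9, 0.99, 0.999):
    print(np.round((1 - z) * np.linalg.inv(Pi(z)), 4))
```

With these values $C$ has rows $(1/3, 2/3)$, and at $z = 0.999$ the printed product agrees with $C$ to roughly two decimals.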

7 References

[1] Ahn, S. K., Reinsel, G. C. (1994). Estimation of partially non-stationary vector autoregressive models with seasonal behavior. Journal of Econometrics 62:317-350.

[2] Anderson, T. W. (1959). On asymptotic distributions of estimates of parameters of stochastic difference equations. Annals of Mathematical Statistics 30:676-687.

[3] Anderson, T. W. (1971). The Statistical Analysis of Time Series. New York: Wiley.

[4] Anderson, T. W. (2003). A simple condition for cointegration. Discussion paper, Stanford University.

[5] Archontakis, F. (1998). An alternative proof of Granger's representation theorem for I(1) systems through Jordan matrices. Journal of the Italian Statistical Society 7:111-127.

[6] Bauer, D., Wagner, M. (2005). A canonical form for unit root processes in the state space framework. Discussion paper, University of Bern.

[7] Boswijk, P. H. (2000). Mixed normality and ancillarity in I(2) systems. Econometric Theory 16:878-904.

[8] Chen, W. W., Hurvich, C. M. (2003). Semiparametric estimation of multivariate fractional cointegration. Journal of the American Statistical Association 98:629-642.

[9] Davidson, J. (2002). A model of fractional cointegration, and tests for cointegration using the bootstrap. Journal of Econometrics 110:187-212.

[10] Dittmann, I. (2004). Error correction models for fractionally cointegrated time series. Journal of Time Series Analysis 25:27-32.

[11] Dueker, M., Startz, R. (1998). Maximum likelihood estimation of fractional cointegration with an application to US and Canada bond rates. Review of Economics and Statistics 83:420-426.

[12] Engle, R. F., Granger, C. W. J. (1987). Co-integration and error correction: representation, estimation and testing. Econometrica 55:251-276.

[13] Engle, R. F., Yoo, S. B. (1991). Cointegrated economic time series: an overview with new results. In: Long-run Economic Relations: Readings in Cointegration. Oxford: Oxford University Press, pp. 237-266.

[14] Engle, R. F., Granger, C. W. J., Hylleberg, S., Lee, H. S. (1993). Seasonal cointegration: The Japanese consumption function. Journal of Econometrics 55:275-298.

[15] Engsted, T., Johansen, S. (1999). Granger's representation theorem and multicointegration. In: Cointegration, Causality and Forecasting. A Festschrift in Honour of Clive W. J. Granger. Oxford: Oxford University Press, pp. 220-2.

[16] Faliva, M., Zoia, M. G. (2006). Topics in Dynamic Model Analysis. Lecture Notes in Economics and Mathematical Systems. New York: Springer.

[17] Granger, C. W. J. (1986). Developments in the study of cointegrated economic variables. Oxford Bulletin of Economics and Statistics 48:213-228.

[18] Granger, C. W. J., Lee, T.-H. (1990). Multicointegration. In: Advances in Econometrics: Cointegration, Spurious Regressions and Unit Roots. Greenwich: JAI Press, pp. 71-84.

[19] Granger, C. W. J., Joyeux, R. (1980). An introduction to long memory time series models and fractional differencing. Journal of Time Series Analysis 1:15-29.

[20] Gregoir, S. (1999). Multivariate time series with various hidden unit roots, Part I: Integral operator algebra and representation theorem. Econometric Theory 15:435-468.

[21] Hansen, P. R. (2005). Granger's representation theorem: A closed form expression for I(1) processes. The Econometrics Journal 8:23-38.

[22] Haldrup, N., Salmon, M. (1998). Representations of I(2) cointegrated systems using the Smith-McMillan form. Journal of Econometrics 84:303-325.

[23] Hosking, J. R. M. (1981). Fractional differencing. Biometrika 68:165-176.

[24] Hylleberg, S., Engle, R. F., Granger, C. W. J., Yoo, S. B. (1990). Seasonal integration and cointegration. Journal of Econometrics 44:215-238.

[25] Johansen, P. (1931). Über osculierende Interpolation. Skandinavisk Aktuarietidsskrift, 231-237.

[26] Johansen, S. (1992). A representation of vector autoregressive processes integrated of order 2. Econometric Theory 8:188-202.

[27] Johansen, S. (1996). Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.

[28] Johansen, S. (1997). Likelihood analysis of the I(2) model. Scandinavian Journal of Statistics 24:433-462.

[29] Johansen, S. (2007). A representation theory for a class of vector autoregressive models for fractional processes. Forthcoming in Econometric Theory.

[30] Johansen, S. (2006). The statistical analysis of hypotheses on the cointegrating relations in the I(2) model. Journal of Econometrics 132:81-115.

[31] Johansen, S., Schaumburg, E. (1998). Likelihood analysis of seasonal cointegration. Journal of Econometrics 88:301-339.

[32] la Cour, L. (1998). A parametric characterization of integrated vector autoregressive (VAR) processes. Econometric Theory 14:187-199.

[33] Lee, H. S. (1992). Maximum likelihood inference on cointegration and seasonal cointegration. Journal of Econometrics 54:1-47.

[34] Nielsen, B. (2001). The asymptotic distribution of unit root tests of unstable autoregressive processes. Econometrica 69:211-219.

[35] Nielsen, B. (2005a). Strong consistency results for least squares estimators in general vector autoregressions with deterministic terms. Econometric Theory 21:534-561.

[36] Nielsen, B. (2005b). Analysis of co-explosive processes. Discussion paper W8, Nuffield College, Oxford.

[37] Nielsen, M. Ø., Shimotsu, K. (2006). Determining the cointegrating rank in nonstationary fractional systems by the exact local Whittle approach. Forthcoming in Journal of Econometrics.

[38] Paruolo, P. (1996). On the determination of integration indices in I(2) systems. Journal of Econometrics 72:313-356.

[39] Paruolo, P. (2000). Asymptotic efficiency of the two stage estimator in I(2) systems. Econometric Theory 16:524-550.

[40] Rahbek, A. C., Mosconi, R. (1999). Cointegration rank inference with stationary regressors in VAR models. The Econometrics Journal 2:76-91.

[41] Robinson, P. M., Iacone, F. (2005). Cointegration in fractional systems with deterministic trends. Journal of Econometrics 129:263-298.

[42] Robinson, P. M., Marinucci, D. (2001). Semiparametric fractional cointegration analysis. Journal of Econometrics 105:225-247.