The Generalized Cochrane-Orcutt Transformation Estimation for Spurious and Fractional Spurious Regressions

Shin-Huei Wang and Cheng Hsiao

Jan 31, 2010

Abstract

This paper proposes a highly consistent estimation procedure, the two-stage generalized Cochrane-Orcutt transformation (2SGAR) estimation, for cointegration, the spurious regression, fractional cointegration and the fractional spurious regression via AR(k) approximation, even though the error terms are unknown in practice. The convergence rates of the estimators for the spurious regression and fractional cointegration are improved by our methodology when compared to existing estimation methods. Moreover, relaxing the assumption that the regressors and error terms are uncorrelated in the above-mentioned regressions, our procedure still yields consistent estimators. We further compare our new estimator to three other estimators, OLS, GLS correction (GLSC) and feasible GLS (FGLS), through Monte Carlo studies. The simulation evidence demonstrates the power and usefulness of our estimation, which produces more accurate estimates for most of the regressions considered.

Keywords: Spurious Regression; Unit Root; Cointegration
JEL classification: C22

CORE, Universite Catholique de Louvain
Department of Economics, University of Southern California, U.S.A., University of Hong Kong
Corresponding Author, E-Mail: shin-huei.wang@uclouvain.be
1. Introduction

Long-run relations between economic time series often play an important role in macroeconomics and finance; for example, a stable long-run relationship links real money balances to real income and the interest rate. Additionally, many macroeconomic models imply that certain variables are cointegrated as defined by Engle and Granger (1987). However, tests often fail to reject the null hypothesis of no cointegration for these variables. Choi et al. (2008) pointed out that one possible explanation of these empirical results is that the error is unit-root non-stationary, due to a non-stationary measurement error in one variable or to non-stationary omitted variables. When the stochastic error of a regression is unit-root non-stationary, the regression is technically called a spurious regression. In general, spurious regression occurs in regressions involving unit-root non-stationary variables.

This paper proposes the two-stage generalized Cochrane-Orcutt transformation estimation based on an AR(k) approximation of stationary and non-stationary processes, namely 2SGAR, for a class of regression models including the spurious regression mentioned above. In the first stage, we use the GLS correction estimation proposed in Choi et al. (2007) and fit the GLS correction residuals by an AR(k) model. In the second stage, we apply the AR(k) Cochrane-Orcutt transformation to the dependent variable and the regressors and re-estimate the transformed regression by OLS.

Our estimating approach also has advantages under the following circumstances. It is well known that with substantially correlated errors, the OLS estimate is severely biased and many unit root and cointegration tests suffer serious size distortions as well. Hence, inconclusive test results leave it uncertain whether the error in the regression is stationary or unit-root non-stationary.
Moreover, while the GLS correction is the ideal cure for spurious regression, there are still situations in which it is inappropriate. If the error term is in fact stationary but substantially correlated (for example, if the AR(1) coefficient is 0.95 rather than unity), then differencing the data results in a misspecified regression. Additionally, in most empirical settings the error term may follow not only an AR(1) process but a general ARMA(p, q) process; likewise, the error term of a spurious regression could be ARIMA(p, 1, q). Therefore, even though the conventional Cochrane-Orcutt FGLS estimator performs well for regressions with an AR(1) error term, in empirical work it is hard to determine the true nature of the error term. In other words, the conventional
Cochrane-Orcutt FGLS estimator is problematic when the nature of the error term of the regression is hard to determine. To tackle this problem, we approximate the errors by an AR(k) model. Berk (1974) and Bauer and Wagner (2008) use an AR model to approximate I(0) and I(1) processes, respectively. In this way we can filter the error term well by an AR model regardless of the property of the error, even when some I(0) and I(1) omitted variables or measurement errors are included in the error term. We then use the fitted AR(k) to run the Cochrane-Orcutt procedure. In addition, relaxing the assumption that the regressors and the error term are uncorrelated, our procedure still yields a consistent estimator.

Many economic and financial processes in practice possess fractional memory parameters, a property known as long memory. Thus, a generalization of standard cointegration, so-called fractional cointegration, has been studied extensively both in theory and in applications (e.g., Chen and Hurvich, 2003; Robinson and Hualde, 2003; Christensen and Nielsen, 2006). The literature on fractional cointegration is relatively new. Consider a two-dimensional process (x_t, y_t) such that both variates are autoregressive fractionally integrated moving average processes of order (p, d, q), denoted ARFIMA(p, d, q) or I(d) processes, where the order of integration d, the differencing parameter, is a fractional number. We say that x_t and y_t are fractionally cointegrated if there exists a linear combination u_t = y_t − βx_t such that u_t is I(d_u) and x_t is I(d), with d_u < d. In other words, the fractional cointegration framework provides more information, since it allows the memory parameters to take fractional values and d − d_u to be any positive real number. On the contrary, when d_u ≥ d, the regression is called a fractional spurious regression.
For estimating the cointegration parameter β, Robinson (1994) proposed a narrow-band least squares estimator (NBLSE) of β in the frequency domain when 0 < d < 0.5. The case 0 < d < 0.5 is of particular interest if one wishes to study fractional cointegration in the volatility of financial series (see Andersen et al., 2001). Moreover, successful implementation of ARFIMA models depends on being able to compute good estimates of the fractional integration parameters. Nevertheless, it is well known that estimates of the differencing parameter of an ARFIMA process may not be very accurate in finite samples when d is close to 0.5 and the sample size is not large, such as T = 100. More importantly, many I(0)-against-I(d) tests or fractional cointegration tests leave it uncertain whether the error in the regression is a stationary short- or long-memory process, or a non-stationary long-memory process. For example, when T = 100 and the true differencing parameter d is
0.001 or 0.49, or when the difference between d and d_u is extremely small, such as d − d_u = 0.01, most current tests cannot provide conclusive results. Therefore, building on the analysis of Poskitt (2007), who establishes the AR approximation of long-memory processes, we also extend our 2SGAR methodology to the fractional spurious regression. Additionally, in our simulation studies the 2SGAR performs best compared with the three conventional estimators, OLS, GLS correction (GLSC) and feasible GLS (FGLS), considered in Choi et al. (2008). Our 2SGAR is thus very useful and a good choice for empirical work.

The rest of this paper is organized as follows. Section 2 introduces the estimation and the main results. In Section 3 we verify the theoretical findings through a Monte Carlo experiment. The last section summarizes the paper. All proofs are in the Appendix.

2. Model and Estimation

The objective of this section is to establish the 2SGAR estimation via an AR approximation for a class of regression models. We first investigate the asymptotic property of our 2SGAR estimating procedure under the following regression (1).

2.1. Spurious regressions with I(1) error

Consider the regression model

y_t = βx_t + u_t,  x_t = I(1),  (1)

where the error term u_t in (1) satisfies the following Assumption 1.

Assumption 1. u_t is generated as

φ(L)(1 − L)^d u_t = θ(L)e_t,  (2)

where (i) d = 0 or 1; (ii) φ(L) and θ(L) are finite-degree lag polynomials, and the zeroes of φ(L) and θ(L) all lie outside the unit circle; (iii) φ(L) and θ(L) have no common roots; (iv) e_t is an i.i.d. process with E(e_t) = 0, E(e_t^2) = σ_e^2, and E(e_t^4) < ∞.

When d = 0, Assumption 1 guarantees that the conditions in Theorem 2 of Berk (1974) hold, and allows us to represent the ARMA process u_t as

u_t = Σ_{j=1}^{∞} b_j u_{t−j} + e_t,  where b_j → 0 as j → ∞,  (3)
and to approximate u_t by an AR(k) as follows:

u_t = Σ_{j=1}^{k} b̂_j u_{t−j} + ê_{tk}.  (4)

Likewise, when d = 1, by Theorem 3 of Bauer and Wagner (2008) the ARIMA(p, d, q) process can also be approximated by an AR model as in (4). Furthermore, when d = 0, u_t is stationary and regression (1) is a cointegration with serially correlated errors; when d = 1, u_t is an I(1) non-stationary process and regression (1) is a spurious regression. Both models are crucial to empirical applications in macroeconomics and finance.

We are now in a position to illustrate our two-stage generalized Cochrane-Orcutt transformation estimating procedure. The two stages are detailed as follows.

Stage (I). We take the full first difference and use OLS to estimate

Δy_t = β Δx_t + Δu_t,  (5)

to obtain β̂_GLSC. This can be viewed as the GLS-corrected estimation described in Choi et al. (2007). When x_t and u_t are uncorrelated, β̂_GLSC is consistent and we can proceed to Stage (II) directly.

Stage (II). We approximate û_t = y_t − β̂_GLSC x_t by an AR(k) model, obtaining û_t = Σ_{j=1}^{k} b̂_j û_{t−j} + ê_{tk}. We then conduct the following Cochrane-Orcutt transformation of regression (1):

ŷ_t = y_t − Σ_{j=1}^{k} b̂_j y_{t−j},  x̃_t = x_t − Σ_{j=1}^{k} b̂_j x_{t−j}.  (6)

Consider OLS estimation of the regression

ŷ_t = β x̃_t + error;  (7)

the OLS estimator of β in (7) is computed as

β̂_2SGAR = [Σ_{t=1}^{T} x̃_t x̃_t]^{−1} [Σ_{t=1}^{T} x̃_t ŷ_t].  (8)

When x_t and u_t are uncorrelated, the asymptotic properties of β̂_2SGAR are presented in Theorem 1. Detailed proofs of the results in this section are given in the Appendix.
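The two-stage procedure above can be sketched numerically as follows. This is a minimal illustration assuming a scalar regressor, a fixed lag order k, and NumPy; the helper names (`ar_fit`, `two_stage_gar`) are ours, not from the paper, and lag-order selection by AIC is omitted for brevity.

```python
import numpy as np

def ar_fit(u, k):
    """OLS fit of an AR(k) to the series u; returns the k coefficients b_1..b_k."""
    T = len(u)
    Y = u[k:]
    X = np.column_stack([u[k - j:T - j] for j in range(1, k + 1)])
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return b

def two_stage_gar(y, x, k):
    """Two-stage generalized Cochrane-Orcutt (2SGAR) estimate of beta in y_t = beta*x_t + u_t."""
    # Stage (I): GLS correction -- OLS on first differences, as in equation (5)
    dy, dx = np.diff(y), np.diff(x)
    beta_glsc = (dx @ dy) / (dx @ dx)
    # Residuals in levels, approximated by an AR(k) as in equation (4)
    u_hat = y - beta_glsc * x
    b = ar_fit(u_hat, k)
    # Stage (II): AR(k) Cochrane-Orcutt filtering of y_t and x_t, equation (6)
    T = len(y)
    y_f = y[k:] - sum(b[j - 1] * y[k - j:T - j] for j in range(1, k + 1))
    x_f = x[k:] - sum(b[j - 1] * x[k - j:T - j] for j in range(1, k + 1))
    # OLS on the filtered data, equations (7)-(8)
    return (x_f @ y_f) / (x_f @ x_f)
```

As a usage sketch, for a spurious regression (x_t and u_t independent random walks, beta = 2) the stage-I differencing already gives a consistent estimate, and the stage-II filter then re-whitens the residual before the final OLS step.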
THEOREM 1. If the data generating process satisfies equation (1), x_t and u_t are uncorrelated and d = 1, then as T → ∞ and k = o((T/log T)^{1/2}), the 2SGAR estimator for the model in (1) is (log T)^{−1/2} k^{−2} T^{3/2}-consistent.

We next relax the assumption of no correlation between x_t and u_t.

THEOREM 2. If the data generating process satisfies equation (1), x_t and u_t are correlated and d = 1, then as T → ∞ and k = o((T/log T)^{1/2}), the 2SGAR estimator for the model in (1) is (log T)^{−1/2} k^{−2} T^{3/2}-consistent.

Theorems 1 and 2 indicate that our two-stage generalized Cochrane-Orcutt transformation estimation (2SGAR) for the spurious regression produces a faster, T^{3/2} k^{−2} (log T)^{−1/2}-consistent estimator, compared to the T-consistent GLS correction estimator (GLSC) and the T-consistent feasible GLS estimator (FGLS) shown in Choi et al. (2007). In other words, our 2SGAR estimation yields a more accurate estimator in the case of the spurious regression.

2.2. Regression with I(0) error

This section considers the asymptotic property of the 2SGAR estimator under the assumption of cointegration, i.e., u_t is an I(0) process.

THEOREM 3. If the data generating process satisfies equation (1), x_t and u_t are uncorrelated and d = 0, then as T → ∞ and k = o_p(T^{1/3}), the 2SGAR estimator for the model in (1) is consistent at the rate T^{1/2}.

In Theorem 4 we further relax the assumption of no correlation between x_t and u_t.

THEOREM 4. If the data generating process satisfies equation (1), x_t and u_t are correlated and d = 0, then as T → ∞ and k = o_p(T^{1/3}), the 2SGAR estimator for the model in (1) is consistent at the rate (log T)^{−1/2} T^{1/2}.

2.3. Fractional Spurious Regression

We follow the definition of fractional cointegration in Robinson and Marinucci (2003), under which a (q × 1) I(d_1, d_2, …, d_q) series x_t is cointegrated if there exists
α ≠ 0 such that α′x_t = u_t is I(d_u) with d_u < min(d_1, d_2, …, d_q), where x_t and u_t satisfy the following assumption of a stationary autoregressive fractionally integrated moving average process of order (p, d, q), denoted ARFIMA(p, d, q) or I(d), where the order of integration d, the differencing parameter, is a fractional number.

Assumption 3. x_t is generated as

φ(L)(1 − L)^d x_t = θ(L)e_t,

where (i) d ∈ (−0.5, 0.5); (ii) φ(L) and θ(L) are finite-degree polynomials, and the zeroes of φ(L) and θ(L) all lie outside the unit circle; (iii) φ(L) and θ(L) have no common zeroes; (iv) e_t is an independently and identically distributed process with E(e_t) = 0, E(e_t^2) = σ^2, and E(e_t^4) < ∞.

We further focus on the stationary, positive-memory-parameter case, where d_1, d_2, …, d_q, d_u ∈ (0, 1/2). This case was also considered by Robinson (1994) and is found in stock market volatility (see, e.g., Andersen et al., 2001a, 2001b). Gil-Alana (2003) points out that there is a cointegrating relationship if the order of integration of the residual is smaller than that of the individual series. On the contrary, if the order of integration of the residual is greater than that of the individual series, we have the so-called fractional spurious regression:

y_t = βx_t + u_t,  x_t = I(d), u_t = I(d_u),  (9)

where x_t and u_t satisfy Assumption 3 and d_u ≥ d, with d, d_u ∈ (0, 0.5). In Theorems 5 and 6 we prove that our 2SGAR is consistent when the fractional regression is spurious.

THEOREM 5. If the data generating process satisfies equation (9), x_t and u_t are uncorrelated and 0 < d, d_u < 0.5 with d_u > d, then as T → ∞ and k = o((T/log T)^{1/2−d}), d ∈ (0, 0.5), the 2SGAR estimator for the model in (9) is consistent with error of order O_p(k^{2d_u−d+0.5} (log T)^{1/2+d_u} T^{d_u−1/2}).

THEOREM 6. If the data generating process satisfies equation (9), x_t and u_t are correlated and 0 < d, d_u < 0.5 with d_u > d, then as T → ∞ and k = o((T/log T)^{1/2−d}),
d ∈ (0, 0.5), the 2SGAR estimator for the model in (9) is consistent with error of order O_p(k^{2d_u−d+0.5} (log T)^{1/2+d_u} T^{d_u−1/2}).

2.4. Fractional Cointegration

We also investigate the asymptotic property of the 2SGAR estimator under the assumption of fractional cointegration, i.e., d_u < d with d, d_u ∈ (0, 0.5) in (9).

THEOREM 7. If the data generating process satisfies equation (9), x_t and u_t are uncorrelated and 0 < d, d_u < 0.5, then as T → ∞ and k = o((T/log T)^{1/2−d}), the 2SGAR estimator for the model in (9) is consistent with error of order O_p(k^{2d_u−d+0.5} (log T)^{0.5+d_u} T^{d_u−1/2}).

THEOREM 8. If the data generating process satisfies equation (9), x_t and u_t are correlated and 0 < d, d_u < 0.5, then as T → ∞ and k = o((T/log T)^{1/2−d}), the 2SGAR estimator for the model in (9) is consistent with error of order O_p(k^{0.5+d} (log T/T)^{0.5−d}).

Additionally, we consider the fractional cointegration model used for the analysis of purchasing power parity in Cheung and Lai (1993), denoted

y_t = βx_t + u_t,  x_t = I(1), u_t = I(d_u),  (10)

where u_t satisfies Assumption 3 and d_u ∈ (0, 0.5). The asymptotic properties of the 2SGAR estimator of (10) are given in Theorems 9 and 10.

THEOREM 9. If the data generating process satisfies equation (10), x_t and u_t are uncorrelated and 0 < d_u < 0.5, then as T → ∞ and k = o((T/log T)^{1/2−d_u}), the 2SGAR estimator for the model in (10) is consistent with error of order O_p(k^{2d_u−d+0.5} (log T)^{0.5+d_u} T^{d_u−1/2}).

THEOREM 10. If the data generating process satisfies equation (10), x_t and u_t are correlated and 0 < d_u < 0.5, then as T → ∞ and k = o((T/log T)^{1/2−d_u}),
the 2SGAR estimator for the model in (10) is k^{−1.5} (log T)^{−0.5} T^{3/2}-consistent.

Theorems 7 and 8 show that our 2SGAR estimation produces a more accurate estimator for fractional cointegration than existing estimations, such as the maximum difference (MD) estimation in Tsay (2007). Moreover, the MD estimation can only be used when the regressor and the error term are uncorrelated.

3. Simulation Study

This section investigates the finite-sample performance of our 2SGAR estimator relative to three other estimators, OLS, GLS correction (GLSC) and feasible GLS (FGLS), through Monte Carlo experiments. The latter three estimators are also considered in Choi et al. (2008). The Monte Carlo experiment for each model is based on 2500 replications with sample sizes T = 100, 200 and 500. The structural parameter is set to β = 2. We generate x_t and u_t from two independent standard normal innovation sequences, with x_t = I(1) and u_t as follows.

DGP (a). u_t = ρu_{t−1} + e_t,  ρ = 0, 0.5, 0.7, 0.8, 0.95,
DGP (b). (1 − 0.95L)u_t = (1 + 0.8L)e_t,
DGP (c). (1 − L)u_t = e_t,
DGP (d). (1 + ρL)(1 − L)u_t = e_t,  ρ = 0.5, 0.8, 0.95.

Using the AIC, we choose a suitable order k for the AR(k) approximation of the series, whether the series is I(0) or I(1). Table 1 shows the root mean square error (RMSE) of all four estimators for the cases in which u_t satisfies (a)-(d). With ρ = 0 and ρ ≠ 0, the regression with error term u_t satisfying DGP (a) represents cointegration with an i.i.d. error and with a serially correlated error, respectively. The OLS estimator is the best one when ρ = 0. However, as ρ increases, the performance of the OLS estimator worsens while those of the GLS correction and our 2SGAR estimator improve. In particular, when ρ = 0.95, the MSE of OLS is about 10 times that of GLS correction and 2SGAR.
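For concreteness, the four error DGPs above can be generated as follows. This is a sketch assuming NumPy and i.i.d. standard normal innovations e_t; the helper name `simulate_dgp` is ours, and we read DGP (b) as an ARMA(1,1) with AR coefficient 0.95 and MA coefficient 0.8.

```python
import numpy as np

def simulate_dgp(label, T, rho=0.0, rng=None):
    """Draw T values of the error series u_t under DGPs (a)-(d); e_t is i.i.d. N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    e = rng.standard_normal(T)
    u = np.zeros(T)
    if label == "a":            # AR(1): u_t = rho*u_{t-1} + e_t, initialized at zero
        for t in range(1, T):
            u[t] = rho * u[t - 1] + e[t]
    elif label == "b":          # ARMA(1,1): (1 - 0.95L)u_t = (1 + 0.8L)e_t
        for t in range(1, T):
            u[t] = 0.95 * u[t - 1] + e[t] + 0.8 * e[t - 1]
    elif label == "c":          # random walk: (1 - L)u_t = e_t
        u = np.cumsum(e)
    elif label == "d":          # (1 + rho L)(1 - L)u_t = e_t: AR(1) in differences, then cumulate
        du = np.zeros(T)
        for t in range(1, T):
            du[t] = -rho * du[t - 1] + e[t]
        u = np.cumsum(du)
    return u
```

With x_t an independent random walk and y_t = 2x_t + u_t, these draws reproduce the designs behind Table 1.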
Despite the similar performance of GLS correction and 2SGAR when ρ = 0.95, for ρ = 0.5, 0.7 and 0.8 our 2SGAR estimator produces a smaller MSE. When the error term u_t satisfies DGP (c), the regression is spurious
and the GLS correction performs as well as 2SGAR. More importantly, when u_t follows DGPs (b) and (d), the 2SGAR performs the best, as expected.

We now turn to the finite-sample performance of our 2SGAR when the regressions are fractional cointegration and fractional spurious regression. We follow McLeod and Hipel (1978) and first generate T independent values from the standard normal distribution, forming a T × 1 column vector e. We then calculate the T autocovariances of the I(d) process, from which we construct the T × T variance-covariance matrix Σ and compute its Cholesky decomposition C (i.e., Σ = CC′). Finally, the vector p of the T realized values of the I(d) process is defined by p = Ce. We discard the first 200 values. With differencing parameters d = 0.1, 0.2, 0.3, 0.4, 0.49, we simulate several ARFIMA(p, d, q) processes:

DGP (a′). (1 − L)^d u_t = e_t,
DGP (b′). (1 − 0.9L)(1 − L)^d u_t = (1 + 0.8L)e_t,
DGP (c′). (1 − 0.9L)(1 − L)^d u_t = e_t.

Table 2 reports the RMSE of the four estimators when x_t and u_t are I(d) processes. We set x_t to follow DGP (a′). When d_u < d, the regression is a fractional cointegration; in this case, our 2SGAR performs the best, as expected. In particular, when u_t follows DGP (b′), the RMSEs of the structural parameter produced by the 2SGAR estimation are much smaller than those produced by the other three estimations. For the case d ≤ d_u, the so-called fractional spurious regression, the 2SGAR also performs the best, although when u_t follows DGP (c′) with d_u = 0.1 and 0.49, the 2SGAR yields the same RMSE as the GLSC. More importantly, when d = 0.3 and u_t follows DGP (b′) with d_u = 0.49, the RMSEs produced by OLS, GLSC and FGLS are around 10, 3 and 6 times that produced by the 2SGAR.
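The McLeod-Hipel device just described can be sketched as follows, assuming NumPy and the pure ARFIMA(0, d, 0) autocovariances γ(0) = Γ(1−2d)/Γ(1−d)^2 with the recursion γ(h) = γ(h−1)(h−1+d)/(h−d); the function name `simulate_fi` is ours, and the short-memory AR/MA parts of DGPs (b′) and (c′) would be applied on top of this draw.

```python
import numpy as np
from math import exp, lgamma

def simulate_fi(T, d, rng=None):
    """Exact draw of T values of an I(d) (ARFIMA(0, d, 0)) process, d in (-0.5, 0.5),
    via the Cholesky method of McLeod and Hipel (1978): p = C e with Sigma = C C'."""
    if rng is None:
        rng = np.random.default_rng()
    # Autocovariances gamma(0), ..., gamma(T-1) with unit innovation variance
    gamma = np.empty(T)
    gamma[0] = exp(lgamma(1.0 - 2.0 * d) - 2.0 * lgamma(1.0 - d))
    for h in range(1, T):
        gamma[h] = gamma[h - 1] * (h - 1.0 + d) / (h - d)
    # Toeplitz variance-covariance matrix Sigma and its Cholesky factor C
    idx = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    C = np.linalg.cholesky(gamma[idx])
    e = rng.standard_normal(T)          # the T x 1 vector e of the text
    return C @ e                        # p = C e
```

At d = 0 the recursion sets all γ(h) = 0 for h ≥ 1, so the draw reduces to white noise, a convenient sanity check on the implementation.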
Summarizing the simulation results in Tables 1 and 2, the 2SGAR significantly outperforms OLS, GLSC and FGLS in most cases and is a good choice for empirical work.
Table 1. The RMSE of Four Estimators

u_t              T     OLS     GLSC    2SGAR   FGLS
(a) ρ = 0.0      100   0.024   0.144   0.051   0.024
                 200   0.011   0.101   0.027   0.011
                 500   0.0047  0.064   0.013   0.004
    ρ = 0.5      100   0.040   0.102   0.056   0.040
                 200   0.019   0.071   0.030   0.019
                 500   0.008   0.045   0.013   0.008
    ρ = 0.7      100   0.053   0.079   0.054   0.043
                 200   0.026   0.055   0.030   0.022
                 500   0.011   0.036   0.013   0.010
    ρ = 0.8      100   0.063   0.064   0.051   0.050
                 200   0.032   0.046   0.029   0.030
                 500   0.014   0.029   0.013   0.013
    ρ = 0.95     100   0.100   0.032   0.032   0.113
                 200   0.055   0.022   0.021   0.042
                 500   0.026   0.014   0.013   0.020
(b)              100   0.341   0.132   0.087   0.132
                 200   0.102   0.073   0.053   0.062
                 500   0.070   0.057   0.033   0.046
(c)              100   0.961   0.103   0.106   0.301
                 200   0.941   0.072   0.072   0.189
                 500   0.947   0.045   0.045   0.099
(d) ρ = 0.5      100   1.654   0.104   0.084   1.112
                 200   1.625   0.072   0.058   0.898
                 500   1.638   0.044   0.035   0.492
    ρ = 0.8      100   2.814   0.104   0.051   1.348
                 200   2.797   0.071   0.035   1.002
                 500   2.830   0.044   0.021   0.717
    ρ = 0.95     100   5.211   0.101   0.025   3.867
                 200   5.564   0.071   0.017   2.100
                 500   5.791   0.044   0.010   0.100
Table 2. The RMSE of Four Estimators for Fractional Cointegration and Fractional Spurious Regression

x_t      u_t             T     OLS     GLSC    2SGAR   FGLS
d = 0.1  (a′) d_u=0.1    100   0.105   0.124   0.106   0.110
                         200   0.073   0.088   0.073   0.079
                         500   0.046   0.054   0.046   0.058
              d_u=0.4    100   0.180   0.105   0.102   0.162
                         200   0.132   0.074   0.072   0.113
                         500   0.085   0.046   0.044   0.073
         (b′) d_u=0.1    100   0.687   0.081   0.049   0.299
                         200   0.488   0.052   0.027   0.110
                         500   0.304   0.032   0.016   0.098
              d_u=0.3    100   1.434   0.073   0.036   1.101
                         200   1.067   0.050   0.023   0.885
                         500   0.680   0.031   0.014   0.402
              d_u=0.49   100   9.952   0.073   0.040   7.782
                         200   7.841   0.049   0.021   5.502
                         500   5.449   0.031   0.012   3.048
         (c′) d_u=0.1    100   0.382   0.078   0.077   0.239
                         200   0.273   0.054   0.055   0.127
                         500   0.170   0.034   0.034   0.097
              d_u=0.49   100   5.532   0.067   0.064   3.764
                         200   4.357   0.047   0.045   2.592
                         500   3.021   0.028   0.028   1.933
Table 2. Continued

x_t      u_t             T     OLS     GLSC    2SGAR   FGLS
d = 0.3  (a′) d_u=0.1    100   0.102   0.133   0.102   0.103
                         200   0.071   0.094   0.071   0.073
                         500   0.046   0.059   0.044   0.045
              d_u=0.2    100   0.125   0.126   0.107   0.118
                         200   0.089   0.089   0.074   0.088
                         500   0.059   0.056   0.047   0.052
              d_u=0.4    100   0.288   0.113   0.107   0.192
                         200   0.233   0.080   0.075   0.143
                         500   0.179   0.050   0.046   0.122
         (b′) d_u=0.1    100   1.052   0.088   0.046   0.998
                         200   0.776   0.060   0.031   0.627
                         500   0.511   0.037   0.018   0.432
              d_u=0.3    100   2.523   0.087   0.041   1.781
                         200   2.019   0.059   0.027   1.001
                         500   1.460   0.036   0.016   0.999
              d_u=0.49   100   19.65   0.088   0.046   9.982
                         200   17.36   0.060   0.025   8.801
                         500   14.66   0.037   0.014   6.701
         (c′) d_u=0.1    100   0.587   0.086   0.087   0.311
                         200   0.432   0.060   0.061   0.212
                         500   0.284   0.037   0.037   0.194
              d_u=0.49   100   10.91   0.075   0.072   7.842
                         200   9.643   0.052   0.051   6.203
                         500   8.142   0.032   0.031   5.244
Table 2. Continued

x_t      u_t             T     OLS     GLSC    2SGAR   FGLS
d = 1    (a′) d_u=0.1    100   0.033   0.137   0.056   0.065
                         200   0.016   0.096   0.030   0.039
                         500   0.007   0.061   0.014   0.015
              d_u=0.2    100   0.047   0.131   0.062   0.070
                         200   0.025   0.092   0.034   0.045
                         500   0.012   0.058   0.017   0.022
              d_u=0.3    100   0.073   0.125   0.071   0.082
                         200   0.043   0.088   0.041   0.052
                         500   0.022   0.056   0.022   0.030
              d_u=0.4    100   0.133   0.120   0.083   0.092
                         200   0.085   0.084   0.052   0.060
                         500   0.049   0.053   0.030   0.038
         (b′) d_u=0.1    100   0.494   0.136   0.072   0.238
                         200   0.271   0.095   0.047   0.462
                         500   0.131   0.059   0.028   0.101
              d_u=0.3    100   1.222   0.153   0.066   0.988
                         200   0.749   0.105   0.043   0.524
                         500   0.409   0.065   0.025   0.231
              d_u=0.49   100   9.341   0.180   0.060   6.214
                         200   6.576   0.123   0.038   3.940
                         500   4.086   0.076   0.022   2.343
         (c′) d_u=0.1    100   0.275   0.104   0.106   0.193
                         200   0.151   0.074   0.068   0.098
                         500   0.072   0.046   0.039   0.057
              d_u=0.49   100   5.188   0.117   0.104   3.259
                         200   3.652   0.081   0.071   1.962
                         500   2.268   0.050   0.043   1.204

4. Concluding remarks

In this paper we developed a new estimator, the 2SGAR, for the structural parameters of cointegration, the spurious regression, fractional cointegration and the fractional spurious regression. Asymptotic theory shows that the 2SGAR is highly consistent in all these cases. Relaxing the no-correlation assumption
of the regressor and the error term in these regressions, our procedure still yields a consistent estimator. The Monte Carlo experiments conducted in this paper confirm our theoretical predictions: the 2SGAR performs best in most cases compared to the other estimators, OLS, GLS correction and FGLS. Our 2SGAR is thus a good choice for empirical work.
REFERENCES

Bauer, D. and Wagner, M. (2008): Autoregressive Approximations of Multiple Frequency I(1) Processes, Working Paper, Economics Series, Institute for Advanced Studies, Vienna.

Berk, K. N. (1974): Consistent autoregressive spectral estimates, The Annals of Statistics, 2, 489-502.

Chen, W. W. and Hurvich, C. (2003): Estimating fractional cointegration in the presence of polynomial trends, Journal of Econometrics, 117, 95-121.

Cheung, Y.-W. and Lai, K. S. (1993): A Fractional Cointegration Analysis of Purchasing Power Parity, Journal of Business and Economic Statistics, 11, 103-112.

Choi, C. Y., Hu, L. and Ogaki, M. (2008): Structural Spurious Regressions and a Hausman-type Cointegration Test, Journal of Econometrics.

Christensen, B. J. and Nielsen, M. (2006): Asymptotic normality of narrow-band least squares in the stationary fractional cointegration model and volatility forecasting, Journal of Econometrics, 133, 343-371.

McLeod, A. I. and Hipel, K. W. (1978): Preservation of the Rescaled Adjusted Range 1: A Reassessment of the Hurst Phenomenon, Water Resources Research, 14, 491-508.

Poskitt, D. S. (2007): Autoregressive approximation in nonstandard situations: the fractionally integrated and non-invertible cases, Annals of the Institute of Statistical Mathematics, 59, 697-725.

Robinson, P. M. (1994): Semiparametric analysis of long-memory time series, Annals of Statistics, 22, 515-539.

Tsay, W. J. (2000): Estimating trending variables in the presence of fractionally integrated errors, Econometric Theory, 16, 324-346.

Tsay, W. J. (2007): Using difference-based methods for inference in regression with fractionally integrated processes, Journal of Time Series Analysis, 28, 827-843.