Simultaneous Indirect Inference, Impulse Responses, and ARMA models


Lynda Khalaf† and Beatriz Peraza López‡

August 2015

Abstract

Indirect Inference based methods are proposed to derive identification robust confidence sets. The primary focus concerns ARMA models and impulse responses, in which case simultaneous confidence bands are derived. Auxiliary estimating functions include two-sided forward and backward looking regressions with leads and lags, long autoregressions and empirical autocorrelations. Resulting objective functions are treated as test statistics that are inverted rather than optimized, using the Monte Carlo test method. Simulation studies illustrate accurate size and good power. Overall, results underscore the usefulness of the proposed methods regardless of proximity to boundaries.

Keywords: Confidence Set, Indirect Inference, ARMA, Impulse-Response, Monte Carlo test, Simultaneous Inference.

This work was supported by the Social Sciences and Humanities Research Council of Canada and the Fonds de recherche sur la société et la culture (Québec).
† Department of Economics, Carleton University. Mailing address: Department of Economics, Room C-870 Loeb Building, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6 Canada. Tel (613) ; FAX: (613) . Lynda_Khalaf@carleton.ca.
‡ Department of Economics, Carleton University. Mailing address: Department of Economics, Room C-870 Loeb Building, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6 Canada. Tel (613) ; FAX: (613) . beatrizperazalopez@cmail.carleton.ca

1 Introduction

Indirect inference refers to resampling-based statistical methods that use simulation to calculate estimating functions including likelihoods, scores or moments. The idea of replacing complicated or intractable statistical functions by computer simulations motivated initial applications and underlying theory; see Smith (1993), Gouriéroux, Monfort and Renault (1993), Gouriéroux and Monfort (1996) and Gallant and Tauchen (1996). Since then, available evidence suggests that indirect inference often works better than traditional methods in finite samples for bias or misspecification corrections.¹

This paper considers a two-stage framework that can yield identification robust confidence sets via Indirect Inference. The primary focus concerns inference in Autoregressive Moving Average (ARMA) models including impulse responses, in which case we derive simultaneous confidence bands, as in Jordá (2009); see also Jordá and Marcellino (2010) and Jordá, Knuppel and Marcellino (2013) on simultaneous path forecasts.

Despite the simplicity of ARMA models in time series, identification and boundary issues raise enduring complications for estimation and inference. Autoregressive (AR) unit roots cause the so-called non-uniform convergence problem: methods that require stationarity break down at or close to the unit boundary, whereas local-to-unity motivated methods perform poorly away from unity; see Hansen (1999), Mikusheva (2007), Andrews and Guggenberger (2010), and more recently, Gorodnichenko, Mikusheva and Ng (2012), Mikusheva (2012, 2014) and Phillips (2014). Invertibility of Moving Average (MA) components causes the so-called pile-up effect: because of identification failure, estimating functions including likelihoods can have a local maximum at the invertibility boundary even when the true process is invertible; see Sargan and Bhargava (1983), Davis and Dunsmuir (1996), Gospodinov (2002), and more recently, Davis and Song (2011), and Gospodinov and Ng (2015).

¹ See, for example, Dridi, Renault and Guay (2007), Gouriéroux, Phillips and Yu (2010), Li (2010), Dominicy and Veredas (2013), Fuleky and Zivot (2014) and Calvet and Czellar (2015).

We propose confidence sets robust to both problems via Indirect Inference and the following auxiliary estimates: (i) two-sided forward and backward looking regressions with leads and lags, (ii) long autoregressions (AR), and (iii) empirical autocorrelations. Methods with leads and lags generally aim to deal with feedback effects; see Stock and Watson (1993), Dufour and Torrès (2000). AR-based approximations have long been popular for ARMA models; see Hannan and Rissanen (1982), Galbraith and Zinde-Walsh (1994, 1997), Ghysels, Khalaf and Vodounou (2002), Guay and Scaillet (2003), Gospodinov and Ng (2015), Dufour and Pelletier (2014). Empirical autocorrelations and related measures including cumulants and local projections have recently been re-introduced from a minimum distance perspective; see e.g. Jordá and Kozicki (2011), Gorodnichenko, Mikusheva and Ng (2012), Gospodinov and Ng (2015) and the references therein. The indirect inference framework we propose unifies all of the above for confidence interval purposes.

While the ARMA case concretizes concepts, our method applies more generally, as follows. Indirect inference relies on a binding function, denoted hereafter $\beta = L(\theta)$, that links a parameter of interest $\theta$ to an auxiliary parameter $\beta$ for which a simple estimator is available. Denote the latter $\hat{\beta} = \hat{\beta}(X)$, where $X$ refers to available data and $\hat{\beta}(\cdot)$ is the estimating function. $L(\cdot)$ need not be known but admits a simulation-based approximation. Denote the latter $\bar{\beta}(\theta)$. Conformably, a minimum-distance estimator is derived that matches $\hat{\beta}$ to $\bar{\beta}(\theta)$. To set focus, consider a Wald-type distance metric of the form $S(X, \theta) = (\hat{\beta} - \bar{\beta}(\theta))'(\hat{\beta} - \bar{\beta}(\theta))$. The standard Indirect Inference estimator corresponds to $\hat{\theta} = \arg\min_{\theta} S(X, \theta)$. Associated confidence sets and hypothesis tests are typically justified using standard asymptotic arguments. For example, regularity conditions are derived to ensure consistency and e.g. asymptotic normality of $\hat{\theta}$, leading to Wald-type confidence intervals. Such intervals usually require identifying $\theta$, which may be an issue in many models that necessitate Indirect Inference.

We revisit Indirect Inference from an alternative outlook, to demonstrate its usefulness when identification of $\theta$ raises difficulties, and when further parameters of interest are $m$ non-standard transformations of $\theta$, as with impulse responses, which we denote $g_i(\theta)$, $i = 1, \dots, m$.

In contrast to the above described extremum estimation strategy, we treat $S(X, \theta)$ as a test statistic which we invert rather than minimize. Inverting a test involves collecting the parameter values that are not rejected via this test at a given level of significance to construct a confidence set. We propose to automate the latter into Indirect Inference, along these lines. Assume that $\bar{\beta}(\theta)$ can be derived by simulating the true process once $\theta$ is set to a given value, say $\theta_0$. Then a simulation-based p-value can be derived for $S(X, \theta_0) = (\hat{\beta} - \bar{\beta}(\theta_0))'(\hat{\beta} - \bar{\beta}(\theta_0))$ using this same $\theta_0$ and a supplementary simulation round conditional on $\bar{\beta}(\theta_0)$. Denote the p-value thus derived as $\hat{p}(\theta_0)$. Simulations are thus incorporated in two stages, first to calibrate $\bar{\beta}(\theta_0)$ in $S(X, \theta_0)$ and next to compute its p-value. An $\alpha$-level joint confidence set can thus be assembled by numerically collecting the $\theta_0$ values for which $\hat{p}(\theta_0) > \alpha$.

Exactness and identification robustness are achieved by computing $\hat{p}(\theta_0)$ via the MC test method of Dufour (2006). This method achieves size control when the null distribution of the statistic, even if not computable analytically, does not depend on unknown or nuisance parameters. In this regard, our work relates to Dufour and Valéry (2006, 2009) in stochastic volatility models, and Dufour and Kurz-Kim (2010) and Beaulieu, Dufour and Khalaf (2007, 2013, 2014) in models with fat-tailed fundamentals. Many applications of Indirect Inference satisfy this requirement nearly by definition, as $\bar{\beta}(\theta)$ should be computable once $\theta$ is set. In our ARMA application, a nuisance-parameter-free distribution results from scale-invariance of OLS in AR settings, and from the standardization inherent to empirical autocorrelations.²

² Scale invariance is not an unduly restrictive requirement, particularly in univariate models. For example, Gouriéroux, Phillips and Yu (2010) consider a unit variance for Indirect Inference on the lag coefficient in dynamic panels; Beaulieu, Dufour and Khalaf (2014) standardize scale exactly to one for MC tests on tail parameters of Stable distributions.

For inference on $g_i(\theta)$, $i = 1, \dots, m$, the joint confidence set is projected, which entails maximizing and minimizing each function over the non-rejected values of $\theta$. The confidence intervals obtained are thus simultaneous by construction.

We conduct a simulation study on ARMA and MA processes and compare our method with MLE. Simple objective functions as in Galbraith and Zinde-Walsh (1994, 1997) as well as over-identified criteria are considered, in addition to various lag structures. Results can be summarized as follows. In contrast to MLE, our method eradicates pile-up problems even with very small samples, with no more than 50 observations. The significance level is affected neither by the flat segments in the objective function, nor by how close the estimated parameters are to one. MLE is oversized for both individual parameter and joint tests. None of the estimating functions steadily outperforms the rest for all sample sizes and pairs tested. Leads improve power for highly persistent processes. Overidentification yields important power benefits. On balance, autocorrelation-based methods require a larger sample to catch up with AR-based counterparts. Inverting the asymptotic Wald test based on MLE estimates is almost infeasible at corner solutions with parameters close to the unit boundary, since in many instances the variance-covariance matrix is not invertible. This result is expected and provides support to our research. Finally, we show that MLE-based impulse responses can severely underestimate the effect of shocks.

The paper is organized as follows. In section 2, we introduce our framework. In section 3, we discuss inference by test inversion. The ARMA special case is presented in section 4. Simulation results are presented in section 5. We conclude in section 6.

2 General Framework

Consider the general model

$(\mathcal{X}, \{P(\vartheta) : \vartheta \in \Omega\})$   (1)

where $\mathcal{X}$ is the sample space,

$X = (X_1, \dots, X_t, \dots, X_T)'$   (2)

represents a sample of size $T$, $P(\vartheta)$ is a probability distribution over $\mathcal{X}$ indexed by

$\vartheta = (\theta', \nu')', \quad \theta \in \Theta, \quad \nu \in \Gamma,$   (3)

and $\theta$ is the $q$-dimensional parameter of interest. $\nu$ is a nuisance parameter one aims to partial out. Pseudo-data can be simulated from (1), and an auxiliary parameter denoted $\beta$, of dimension $p \geq q$, can be defined so that it is linked to $\vartheta$ via an exact or limiting binding function $\beta = L(\vartheta)$. In general, $L(\cdot)$ need not have a known analytical form. Furthermore, a simple statistic is available to estimate $\beta$ given $X$. Denote by $\hat{\beta} = \hat{\beta}(X)$ the considered data-based estimate of $\beta$, where $\hat{\beta}(\cdot)$ is the estimating function. Its simulation-based counterpart given a specific value of $\vartheta$ can be derived, for example, by: (i) creating $H$ simulated paths

$x_h(\vartheta) = (x_{h,1}(\vartheta), \dots, x_{h,t}(\vartheta), \dots, x_{h,T}(\vartheta))', \quad h = 1, \dots, H,$   (4)

each of size $T$ from (1) given $\vartheta$; (ii) applying the same estimation method that was used to derive $\hat{\beta}$ to each simulated path, leading to a series of estimators

$\hat{\beta}_h(\vartheta) = \hat{\beta}(x_h(\vartheta)), \quad h = 1, \dots, H;$   (5)

and then (iii) computing their average:

$\bar{\beta}(\vartheta) = \frac{1}{H} \sum_{h=1}^{H} \hat{\beta}_h(\vartheta).$   (6)

The distance between $\hat{\beta}$ and $\bar{\beta}(\vartheta)$ forms the basis of Indirect Inference. A common special case is the Wald-type distance measure $(\hat{\beta} - \bar{\beta}(\vartheta))' \hat{W} (\hat{\beta} - \bar{\beta}(\vartheta))$, where $\hat{W}$ is a positive definite weighting matrix. Gouriéroux et al. (1993) suggest $\hat{W} = I_p$, where $I_p$ denotes a $p$-dimensional identity matrix, because the loss in efficiency with respect to an optimal estimator can be disregarded. This practice is also suggested, more recently, by Gouriéroux, Phillips and Yu (2010).
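The following minimal sketch, in Python, illustrates steps (i)-(iii) and the identity-weighted distance. The names simulate_path and auxiliary_estimate are hypothetical placeholders for a model simulator and an auxiliary estimator; this is an illustration of the calibration idea, not the authors' code.

```python
import numpy as np

def calibrate_binding(theta, simulate_path, auxiliary_estimate, T, H, rng):
    """Simulation-based counterpart of the binding function, beta_bar(theta):
    (i) draw H pseudo-samples of size T at theta, (ii) apply the auxiliary
    estimator to each path, (iii) average the resulting estimates."""
    betas = [auxiliary_estimate(simulate_path(theta, T, rng)) for _ in range(H)]
    return np.mean(betas, axis=0)

def wald_distance(beta_hat, beta_bar):
    """Identity-weighted Wald-type distance (beta_hat - beta_bar)'(beta_hat - beta_bar)."""
    d = np.asarray(beta_hat, float) - np.asarray(beta_bar, float)
    return float(d @ d)
```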

2.1 Nuisance parameters

Focusing on $\theta$ as the parameter of interest requires a strategy to account for $\nu$. In some settings, $\nu$ can be easily estimated given $\theta$. Alternatively, an auxiliary estimate or objective function that is exactly invariant to $\nu$ can be considered. We emphasize the latter case in this paper, defined via the following assumptions that characterize the model and the considered estimating functions.

Assumption 1. In the context of model (1), an estimating function $\hat{\beta}(\cdot)$ is invariant to $\nu$ if

$\hat{\beta}(X) = \hat{\beta}(X^{\dagger})$   (7)

where $X$ and $X^{\dagger}$ are samples from (1) conformable with probability distributions $P((\theta', \nu')')$ and $P((\theta', \nu^{\dagger\prime})')$, respectively, and $\nu \neq \nu^{\dagger}$ are two different values of $\nu$.

This assumption ensures that

$\hat{\beta}(x_h((\theta', \nu')')) = \hat{\beta}(x_h((\theta', \nu^{\dagger\prime})')), \quad h = 1, \dots, H,$

which yields an invariant objective function for the transformation that maps $X$ into $X^{\dagger}$. Indeed, applying Assumption 1 to the calibration in (6), we have:

$\bar{\beta}((\theta', \nu')') = \frac{1}{H} \sum_{h=1}^{H} \hat{\beta}_h((\theta', \nu')')$   (8)

$= \frac{1}{H} \sum_{h=1}^{H} \hat{\beta}_h((\theta', \nu^{\dagger\prime})') = \bar{\beta}((\theta', \nu^{\dagger\prime})').$   (9)

It follows that $\nu$ can be set to any value in $\Gamma$ for simulation purposes with no bearing on estimation nor inference via objective functions of the form

$S(X, \vartheta) = (\hat{\beta} - \bar{\beta}(\vartheta))'(\hat{\beta} - \bar{\beta}(\vartheta)).$   (10)

In line with the above cited literature, (10) uses an identity weighting matrix. Other choices are possible as long as $S(X, \vartheta)$ remains invariant to $\nu$, or the distribution of $S(X, \vartheta)$ does not depend on $\nu$ under the null hypothesis that fixes $\theta$. The latter pivotality assumption does not require numerical invariance to $\nu$ throughout the considered parameter space. Instead, a - possibly restricted - estimate of $\nu$ may be available that evacuates the parameter only under the null. While Assumption 1 suffices for the ARMA models we consider below, we formalize the alternative pivotality assumption for completeness as follows.

Assumption 2. In the context of model (1) and the null hypothesis

$H_0(\theta_0): \theta = \theta_0, \quad \theta_0 \text{ known},$

an objective function $S(X, (\theta_0', \hat{\nu}')')$, where $\hat{\nu}$ is a conformable estimate of $\nu$, is invariant to $\nu$ if the null distribution of $S(X, (\theta_0', \hat{\nu}')')$ does not depend on $\nu$.

A common special case includes scale invariance, which implies robustness to multiplying the data by any positive scalar $a$, so Assumption 1 yields:

$x_h((\theta', a)') = a \, x_h((\theta', 1)'), \quad \hat{\beta}(x_h((\theta', 1)')) = \hat{\beta}(x_h((\theta', a)')), \quad \hat{\beta}(aX) = \hat{\beta}(X).$

Its multivariate counterpart involves multiplying the data by an invertible matrix. For illustrative finite sample cases, see Beaulieu, Dufour and Khalaf (2007, 2013) and Dufour, Khalaf and Beaulieu (2003, 2010). In these papers, the inverted tests for multivariate Student-t and mixtures of normals distributions require Assumption 2, and the estimate in question uses the sum-of-squares residuals matrix.

To simplify notation in what follows, since $\nu$ is partialled out, that is, $\nu$ will be set in simulations to any known value in its parameter space, we suppress it in the notation for $S(X, \vartheta)$, $\bar{\beta}(\vartheta)$, $\hat{\beta}_h(\vartheta)$ and $x_h(\vartheta)$, which we redefine as $S(X, \theta)$, $\bar{\beta}(\theta)$, $\hat{\beta}_h(\theta)$ and $x_h(\theta)$.
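As a numerical illustration of the scale-invariance case, the sketch below checks that OLS estimates of a long autoregression are unchanged when the sample is multiplied by a positive scalar. The helper long_ar_ols and the toy series are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def long_ar_ols(x, p):
    """OLS coefficients of an AR(p) regression of x_t on x_{t-1}, ..., x_{t-p} (no intercept)."""
    x = np.asarray(x, float)
    Y = x[p:]
    Z = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return coef

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
for t in range(1, 200):          # build a persistent toy series
    x[t] += 0.9 * x[t - 1]
a = 7.3                          # arbitrary positive scale
print(np.allclose(long_ar_ols(x, 4), long_ar_ols(a * x, 4)))   # True: beta_hat(aX) = beta_hat(X)
```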

3 Inference via test inversion

Standard applications of Indirect Inference resort to minimizing (10) and formulating adequate regularity conditions so that the resulting estimator, denoted $\hat{\theta}$, is (typically) asymptotically normal with a covariance matrix that can be estimated consistently. To compare and contrast our proposed inference method with common practices and to define test inversion within a familiar setting, let us revisit the traditional $\alpha$-level confidence interval

$CI(g(\theta); \alpha) = \{ g(\hat{\theta}) \pm z_{\alpha/2} \, \hat{\sigma}_g^{1/2} \}$

for a given scalar function $g(\theta)$, where $\hat{\sigma}_g^{1/2}$ denotes the relevant standard error estimate computed (for example, via the delta method) from the estimated asymptotic variance/covariance matrix of $\hat{\theta}$, and $z_{\alpha/2}$ is the two-tailed standard normal critical point. As is well known, $CI(g(\theta); \alpha)$ is derived by solving, over $g(\theta_0)$, the following inequality:

$|g(\hat{\theta}) - g(\theta_0)| / \hat{\sigma}_g^{1/2} \leq z_{\alpha/2}.$   (11)

Said differently, the solution of (11) inverts a t-type test of

$H_g: g(\theta) = g(\theta_0), \quad \theta_0 \text{ known},$   (12)

based on the statistic $|g(\hat{\theta}) - g(\theta)| / \hat{\sigma}_g^{1/2}$. Ideally, $P[g(\theta) \in CI(g(\theta); \alpha)] \geq 1 - \alpha$, at least as $T \to \infty$. However, identification failures and the above cited pile-up effects, root cancellations or stationarity concerns will cause intervals of this form to severely under-cover, even with large samples.

In this paper, we depart from this approach and propose a confidence set for $\theta$ based on inverting a test of

$H_0(\theta_0): \theta = \theta_0, \quad \theta_0 \text{ known},$   (13)

using (10), which we treat as a test statistic. Inverting a test of $H_0(\theta_0)$ at a given level $\alpha$ consists in collecting, numerically or analytically, the $\theta_0$ values that are not rejected using the considered test at the considered level. For example, given the right-tailed test statistic

$T(\theta_0) = S(X, \theta_0) = (\hat{\beta} - \bar{\beta}(\theta_0))'(\hat{\beta} - \bar{\beta}(\theta_0))$   (14)

and assuming (for the moment and for illustrative purposes) that an $\alpha$-level cut-off point $T_c$ is available, test inversion involves solving, over $\theta_0$, the inequality $T(\theta_0) < T_c$. The solution of this inequality is a parameter space subset, denoted $CS(\theta; \alpha)$, that satisfies

$P[\theta \in CS(\theta; \alpha)] \geq 1 - \alpha.$   (15)

Since the finite sample distribution of (10) is intractable, and since there is no reason to expect that a cut-off point that is invariant to $\theta_0$ can be found, a numerical solution is required. We propose to collect, by numerical search, the $\theta_0$ values that are not rejected using an MC p-value we will define in the next section, denoted $\hat{p}(\theta_0)$. In other words, we propose to search for the $\theta_0$ values for which $\hat{p}(\theta_0) > \alpha$. The output of such a search is a joint $\alpha$-level confidence region, which we denote $CS(\theta; \alpha)$ as above.

The set thus derived can be empty, which will provide a diagnostic test of the hypothesized model. Said differently, an empty confidence set would occur when all values of $\theta$ are rejected at the considered level: in this case, the considered specification would be incompatible with available data. Our proposed variant of Indirect Inference thus preserves its specification-robustness attributes.

If identification is weak, the set will be unbounded or diffuse, which will also provide useful information regarding model fit (or lack thereof). If a confidence interval for the individual components of $\theta$ or, more generally, for a given function $g(\theta)$ is desired, the above defined $CS(\theta; \alpha)$ can be projected. This implies minimizing and maximizing $g(\theta)$ over the values in $CS(\theta; \alpha)$. The ARMA-based impulse-response special case we study below illustrates the usefulness of such projections.

3.1 Exact p-values

We propose applying Dufour's (2006) MC method to compute the above $\hat{p}(\theta_0)$ empirical p-value. Given that the null distribution of the considered statistic is nuisance parameter free, the MC method controls type I error in finite samples, even if $S(X, \theta_0)$ is itself simulation-based. The method can be summarized as follows. We introduce a new layer of simulation: for each value of $\theta_0$, we draw $L$ replications satisfying the null hypothesis from the model, regardless of the choice of the nuisance parameter,

$x_l(\theta_0) = (x_{l,1}(\theta_0), \dots, x_{l,t}(\theta_0), \dots, x_{l,T}(\theta_0))', \quad l = 1, \dots, L.$   (16)

These replications should be generated independently from the simulations underlying $S(X, \theta_0)$. Next, we apply $\hat{\beta}(\cdot)$ to each replication $l$, leading to:

$\tilde{\beta}_l(\theta_0) = \hat{\beta}(x_l(\theta_0)), \quad l = 1, \dots, L,$   (17)

and compute a series of statistics similar to the one in (14), but now considering the vector of estimated parameters $\tilde{\beta}_l(\theta_0)$ for each replication $l$,

$S_l(x_l(\theta_0)) = (\tilde{\beta}_l(\theta_0) - \bar{\beta}(\theta_0))'(\tilde{\beta}_l(\theta_0) - \bar{\beta}(\theta_0)),$   (18)

instead of the vector obtained by applying the estimation method to the data. We are able to use the same vector of calibrated parameters $\bar{\beta}(\theta_0)$ from (14) due to the exchangeability property of the MC test [Dufour (2006)]. Lastly, the empirical p-value associated with the null hypothesis is computed as:

$\hat{p}(\theta_0) = \frac{G_L + 1}{L + 1},$   (19)

where $G_L$ denotes the number of times the statistic $S_l(x_l(\theta_0))$ is greater than or equal to the data-based $S(X, \theta_0)$.
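The sketch below combines the MC p-value in (19) with the numerical search over $\theta_0$ values that delivers $CS(\theta; \alpha)$. It reuses the generic simulate_path and auxiliary_estimate placeholders of the earlier sketch, and is an illustration of the two simulation stages rather than the authors' implementation.

```python
import numpy as np

def mc_pvalue(S_obs, theta0, beta_bar0, simulate_path, auxiliary_estimate, T, L, rng):
    """Dufour (2006) MC p-value: draw L null samples at theta0, recompute the
    statistic with the same calibrated beta_bar(theta0), and count how many
    simulated statistics are >= the observed one."""
    G = 0
    for _ in range(L):
        beta_l = auxiliary_estimate(simulate_path(theta0, T, rng))
        d = beta_l - beta_bar0
        if d @ d >= S_obs:
            G += 1
    return (G + 1) / (L + 1)

def confidence_set(data, grid, simulate_path, auxiliary_estimate, H, L, alpha, rng):
    """Collect the grid points theta0 that are not rejected at level alpha."""
    T = len(data)
    beta_hat = auxiliary_estimate(data)
    accepted = []
    for theta0 in grid:
        # first simulation stage: calibrate beta_bar(theta0) from H paths
        beta_bar0 = np.mean([auxiliary_estimate(simulate_path(theta0, T, rng))
                             for _ in range(H)], axis=0)
        d = beta_hat - beta_bar0
        # second simulation stage: MC p-value of the observed statistic
        if mc_pvalue(float(d @ d), theta0, beta_bar0, simulate_path,
                     auxiliary_estimate, T, L, rng) > alpha:
            accepted.append(theta0)
    return accepted
```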

4 ARMA special case

To set focus, we consider the zero mean Gaussian ARMA(1,1) process of size $T$ with unknown parameters $\theta$ and $\phi$,

$X_t = \phi X_{t-1} + e_t + \theta e_{t-1}, \quad t = 1, \dots, T,$   (20)

with

$e_t = \sigma u_t,$   (21)

where $|\theta| < 1$ guarantees invertibility, or an autoregressive AR($\infty$) representation, $|\phi| < 1$ ensures stationarity, and the joint distribution of $u_1, \dots, u_t, \dots, u_T$ is known. For example, $u_t$ can be assumed independent and identically distributed with a standard normal distribution:

$u_t \sim \text{i.i.d. } N(0, 1).$   (22)

We assume that $X_0 = e_0$ where $e_0$ is fixed. Student-t errors are also considered in our Monte Carlo study; the generalized lambda family of distributions as considered in Gospodinov and Ng (2015) is another interesting parametrization.

With regards to the above notation, we maintain $\sigma$ as the nuisance parameter and we focus on joint estimation of the parameter of interest $(\theta, \phi)'$. In this context, for any positive scalar $a$, we have:

$\arg\min_{\gamma} Q(X; \gamma) = \arg\min_{\gamma} Q(aX; \gamma), \quad \text{since } Q(aX; \gamma) = a^2 Q(X; \gamma),$   (23)

where

$\gamma = (\gamma_f', \gamma_b')', \quad \gamma_f = (\gamma_{f1}, \dots, \gamma_{fp})', \quad \gamma_b = (\gamma_{b1}, \dots, \gamma_{bp})',$   (24)

$Q(X; \gamma) = \sum_{t=p+1}^{T-p} [X_t - \gamma_{f1} X_{t+1} - \dots - \gamma_{fp} X_{t+p} - \gamma_{b1} X_{t-1} - \dots - \gamma_{bp} X_{t-p}]^2,$   (25)

and

$\hat{\rho}_j = \frac{\widehat{\mathrm{Cov}}(X_t, X_{t-j})}{\widehat{\mathrm{Var}}(X_t)} = \frac{\widehat{\mathrm{Cov}}(aX_t, aX_{t-j})}{\widehat{\mathrm{Var}}(aX_t)}, \quad j = 1, \dots, p.$   (26)

These invariance properties guarantee that Indirect Inference on $(\theta, \phi)$ based on $Q(\cdot)$ or on a set of empirical autocorrelations $\hat{\rho}_j$ does not require specifying $\sigma$. To simplify notation, we use the same number of leads and lags in $Q(X; \gamma)$, although clearly invariance to $\sigma$ does not require this restriction.

Conformably, we consider three choices for our estimating function $\hat{\beta}(\cdot)$: (i) two-sided OLS estimation with forward and backward-looking components; (ii) OLS estimation of a backward-looking or long AR; and (iii) empirical autocorrelations. For clarity, and given any sample vector $X = (X_1, \dots, X_T)'$ of size $T$, we will refer to these choices as

$\hat{\beta}_{fb}(X) = \arg\min_{\gamma} Q(X; \gamma)$, with $\gamma = (\gamma_f', \gamma_b')'$,

$\hat{\beta}_{b}(X) = \arg\min_{\gamma_0} Q(X; \gamma_0)$, with $\gamma_0 = (0', \gamma_b')'$,

$\hat{\beta}_{c}(X) = (\hat{\rho}_1, \dots, \hat{\rho}_p)'$,

where $Q(X; \gamma)$ corresponds to (25) and $\hat{\rho}_j$ is given by (26).
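The sketch below gives illustrative versions of the three auxiliary estimating functions, together with an ARMA(1,1) simulator with $\sigma$ set to 1. Details such as demeaning and the treatment of initial values are assumptions for the sake of a runnable example; this is not the authors' implementation.

```python
import numpy as np

def simulate_arma11(pair, T, rng):
    """Simulate X_t = phi X_{t-1} + u_t + theta u_{t-1}, with sigma = 1 and
    zero initial values; `pair` is (theta, phi), the (MA, AR) ordering of the experiments."""
    theta, phi = pair
    u = rng.standard_normal(T + 1)
    x = np.zeros(T + 1)
    for t in range(1, T + 1):
        x[t] = phi * x[t - 1] + u[t] + theta * u[t - 1]
    return x[1:]

def ols_fblc(x, p):
    """(i) two-sided OLS: regress X_t on p leads and p lags."""
    x = np.asarray(x, float)
    T = len(x)
    Y = x[p:T - p]
    leads = np.column_stack([x[p + j:T - p + j] for j in range(1, p + 1)])
    lags = np.column_stack([x[p - j:T - p - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(np.hstack([leads, lags]), Y, rcond=None)
    return coef

def ols_long_ar(x, p):
    """(ii) backward-looking long AR(p) by OLS."""
    x = np.asarray(x, float)
    Y = x[p:]
    Z = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return coef

def autocorr(x, p):
    """(iii) first p empirical autocorrelations (scale-free by construction)."""
    x = np.asarray(x, float) - np.mean(x)
    denom = x @ x
    return np.array([(x[j:] @ x[:-j]) / denom for j in range(1, p + 1)])
```

For instance, ols_fblc(x, 4) returns an eight-dimensional auxiliary vector with four leads and four lags, matching the auxiliary dimension recommended in the simulation study below.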

In each case, to compute the simulation-based counterparts for any given value of $(\theta, \phi)$, it suffices to create $H$ simulated paths $x_h(\theta, \phi) = (x_{h,1}, \dots, x_{h,t}, \dots, x_{h,T})'$, as

$x_{h,t}(\theta, \phi) = \phi \, x_{h,t-1} + u_{h,t} + \theta \, u_{h,t-1}, \quad t = 1, \dots, T, \quad h = 1, \dots, H.$   (27)

The auxiliary parameter estimator for each path using all of the above estimating functions will numerically coincide with its counterpart based on

$x_{h,t}(\theta, \phi) = \phi \, x_{h,t-1} + \sigma u_{h,t} + \sigma \theta \, u_{h,t-1}, \quad t = 1, \dots, T, \quad h = 1, \dots, H,$   (28)

for any positive $\sigma$. Population autocorrelations need not be derived by simulation; yet invariance is explicitly imposed this way for any $p$. For the MA(1) special case, we also use a simplified auxiliary parameter: we fit a long AR to observed and simulated data and retain the coefficient of the first lag $X_{t-1}$ as $\hat{\beta}$. We refer to this choice as the simplified estimating function.

4.1 Impulse-response confidence bands

Impulse-response functions are the most empirically and policy relevant transformations of ARMA parameters. If the stationarity condition is satisfied, (20) admits an MA($\infty$) representation using the lag operator $L$,

$X_t = (1 - \phi L)^{-1} (1 + \theta L) e_t = \psi(L) e_t = \sum_{s=0}^{\infty} \psi_s e_{t-s}, \quad t = 1, \dots, T,$   (29)

and the empirical impulse-response coefficients can be computed from the well-known formulas:

$\psi_0 = 1,$   (30)

$\psi_s = \phi^{s-1} (\phi + \theta) \quad \text{for } s \geq 1.$   (31)

These relations define a series of known functions of the vector $(\theta, \phi)'$, denoted $g_i(\theta, \phi)$, $i = 1, \dots, m$, that can be projected in turn. This implies minimizing and maximizing $g_i(\theta, \phi)$ over the values in $CS(\theta, \phi; \alpha)$, the joint confidence set for $(\theta, \phi)$. Confidence intervals so obtained are simultaneous, in the following sense: let $g_i[CS(\theta, \phi; \alpha)]$ denote the image of $CS(\theta, \phi; \alpha)$ by the function $g_i$. Then (15) implies that

$P\big[ g_i(\theta, \phi) \in g_i[CS(\theta, \phi; \alpha)], \ i = 1, \dots, m \big] \geq 1 - \alpha.$   (32)

In this context, our method is related to the research by Jordá (2009), Jordá and Marcellino (2010) and Jordá, Knuppel and Marcellino (2013) on simultaneous confidence bands and applications to forecasting. The simulation study reported below includes an illustrative analysis of such projections emphasizing close-to-boundary values of $\theta$ and $\phi$.
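As a sketch of the projection step, the impulse-response coefficients in (30)-(31) can be evaluated at every non-rejected pair and the pointwise minima and maxima retained as simultaneous bands; the function names below are illustrative, not the authors' code.

```python
import numpy as np

def impulse_response(pair, horizon):
    """psi_0 = 1 and psi_s = phi**(s-1) * (phi + theta) for s >= 1, ARMA(1,1)."""
    theta, phi = pair
    psi = np.empty(horizon + 1)
    psi[0] = 1.0
    for s in range(1, horizon + 1):
        psi[s] = phi ** (s - 1) * (phi + theta)
    return psi

def projected_bands(confidence_set, horizon):
    """Simultaneous bands: min and max of each psi_s over the joint confidence set."""
    irfs = np.array([impulse_response(pair, horizon) for pair in confidence_set])
    return irfs.min(axis=0), irfs.max(axis=0)
```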

5 Simulation Study

We conduct MC simulation experiments to assess the performance of our method for MA(1) and ARMA(1,1) processes. Our objective is to study power and size when the MA parameters are close to the non-invertibility region for both processes, and the AR parameter is close to the non-stationary region for the ARMA. Our experiments are designed as follows. We simulate Gaussian processes and processes where the errors are drawn from a Student-t distribution with 5 degrees of freedom. Simulated samples with number of observations T = 50, 100 and 200, used here as pseudo-data, are drawn from the ARMA model in (20), setting $\sigma = 1$ in line with Assumptions 1 and 2. These sample size choices correspond to common samples selected in MC studies as well as data used in empirical applications. Each pair of {MA, AR} parameters used to generate ARMA(1,1) processes is denoted in our experiments as $\{\theta_0, \phi_0\}$. The outcomes from all our simulation studies are available upon request. Here we focus on the following set of pairs to illustrate our results: $\{\theta_0, \phi_0\} = \{\{0, 0.6\}, \{0.6, 0\}, \{0.99, 0\}, \{0.5, 0.96\}, \{0.96, 0.5\}, \{0.99, 0.99\}\}$. In particular, we set the AR parameter $\phi = 0$ in (20) a priori to draw samples from MA(1) processes with the values $\theta_0 \in \{0.6, 0.99\}$.

To find the number of auxiliary parameter estimates providing the highest power, we studied power under various combinations of the initial settings of our model. We changed the number of parameters from 1 to 3 and 8 for the MA, and from 6 to 8 and 12 for the ARMA, taking into consideration the recommendations from Ghysels, Khalaf and Vodounou (2002) and Galbraith and Zinde-Walsh (1994) for the long-AR estimation, and Gorodnichenko et al. (2012) for the estimation with autocorrelations. Since we found that 8 provides the highest power for all the sample sizes, we suggest that for empirical practice the total number of auxiliary parameters be set to 8 for each of the choices of the estimating function defined in section 4: (i) four leads and four lags in the OLS estimation with forward and backward-looking components, (ii) an autoregression of order 8 in the OLS estimation of the long AR, and (iii) $j = 1, \dots, 8$ in the estimation with empirical autocorrelations. In the case of the simplified auxiliary parameter estimation for the MA process, the long AR is estimated as in (ii), but only the first coefficient is retained.

In all cases, we set the number of simulated paths to H = 3. We tested H = 1 and 3, as suggested by Ghysels, Khalaf and Vodounou (2002) for the MA, and also H = 5 to examine whether it could improve power for the ARMA. We then selected H = 3, keeping the number of auxiliary parameters equal to 8, as the value which provides the highest power for all the sample sizes. The number of replications in the application of Dufour's (2006) MC method to compute the p-values is set to L = 199, and N = 1000 replications are considered for the MC simulations. Lastly, an identity weighting matrix is used to preserve the invariance to scale of the statistic associated with every choice of the estimating function.

We report empirical rejections and compare our results with the MLE individual parameter tests and joint tests (Wald-type) in terms of size, considering a level of significance $\alpha = 0.05$. Our main results are summarized along the following lines. The results of the experiments conducted for MA(1) processes are presented in Tables 1 and 2, and the studies related to the ARMA(1,1) process are shown in Tables 3 to 6. We use abbreviations for each estimating function: OLS-FBLC, OLS-long AR and AutoCorr correspond to the OLS estimation with forward and backward-looking components, the OLS estimation of the long AR and the estimation with empirical autocorrelations, respectively.

Size is controlled with our method for all sample sizes, choices of the estimating function, and for both Gaussian and Student-t errors, irrespective of how close the true parameters are to the unit root. The MLE shows size problems when the sample size is small (T = 50), for both individual parameter tests and joint tests. Size seems to improve for the MLE when the sample size is increased and the parameters are far from the unit root regions. However, when the parameters approach the unit root regions, the MLE size fails as the sample size increases, which confirms its asymptotic failure. This is observed in Table 2, when $\theta_0 = 0.99$ for the MA, and in Table 6 with the almost unit root pair $\{\theta_0, \phi_0\} = \{0.99, 0.99\}$ for the ARMA.
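For completeness, the following sketch shows how an empirical rejection frequency (size when the tested pair equals the true pair, power otherwise) could be computed at level $\alpha = 0.05$, reusing the illustrative simulate_arma11, mc_pvalue and auxiliary estimators sketched above; the settings mirror the design described here, but the code is an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np

def rejection_rate(true_pair, tested_pair, auxiliary_estimate, T=50, H=3, L=199,
                   N=1000, alpha=0.05, seed=12345):
    """Fraction of N pseudo-samples generated at true_pair for which
    H0: (theta, phi) = tested_pair is rejected at level alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(N):
        data = simulate_arma11(true_pair, T, rng)
        beta_hat = auxiliary_estimate(data)
        beta_bar = np.mean([auxiliary_estimate(simulate_arma11(tested_pair, T, rng))
                            for _ in range(H)], axis=0)
        d = beta_hat - beta_bar
        p = mc_pvalue(float(d @ d), tested_pair, beta_bar, simulate_arma11,
                      auxiliary_estimate, T, L, rng)
        rejections += (p <= alpha)
    return rejections / N
```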

The results obtained in terms of power for processes with Student-t errors are in line with the ones obtained with Gaussian errors. This outcome confirms the robustness to fatter tails, since we purposely ignore the fact that the errors are drawn from the Student-t distribution when we apply our method.

From the experiments for the MA processes, in Tables 1 and 2, we can identify that the pile-up effect starts for values of the parameter higher than 0.6 in finite samples. Although our method does not produce size distortions, it cannot discriminate between tested values higher than 0.85 when the true value is between 0.85 and 1. For small values of the parameter, the method with the simplified auxiliary parameter estimation, inspired by Galbraith and Zinde-Walsh (1994, 1997), shows an increase in power with respect to the OLS estimation of the long AR. However, as the value of the parameter increases and gets closer to the unit root region, the latter method shows higher power than the former. This result indicates that overidentification is useful to increase power when we have higher persistence in MA(1) processes.

In general, the power of our method increases for the three estimating functions applied to the ARMA as the sample size increases, but none of them steadily shows the highest power for all sample sizes and pairs tested. In Table 3, the true pair $\{\theta_0, \phi_0\} = \{0, 0.6\}$ is equivalent to an AR(1) process with high persistence. In this case, the method with the first choice of estimating function, the OLS estimation with forward and backward-looking components, shows the highest power for most of the pairs tested, including the almost unit root pair $\{\theta, \phi\} = \{0.99, 0.99\}$, for small sample sizes, T = 50 and 100. As we increase the sample size to T = 200, the power obtained with our method for each of the three choices of estimating function increases and reaches similar values.

In Tables 4 and 5 we have two pairs with one parameter closer to the unit root than the other. In Table 4, for the true pair $\{\theta_0, \phi_0\} = \{0.5, 0.96\}$, using the autocorrelations as the estimating function in our method gives more power for most of the pairs tested when the sample size is T = 50 and 100. However, its power is very poor for the almost unit root pair; as a result, the method with the first choice of estimating function, the OLS estimation with forward and backward-looking components, is the one able to discriminate this pair with the highest power. As before, once we increase the sample size, power reaches similar values when our method is applied with the three estimating functions.

In Table 5, for the true pair $\{\theta_0, \phi_0\} = \{0.96, 0.5\}$, the second choice of the estimating function, the OLS estimation of the long AR, shows the highest power for most of the pairs tested when the sample size is T = 50 and 100. Still, choosing the autocorrelations as the estimating function makes it possible to discriminate between the true pair and the almost unit root pair, since power is poor when we use the other two estimating functions for this particular pair. When the sample size is T = 200, power increases for the three estimating functions, yet we need the autocorrelations to ensure we can discriminate the almost unit root pair.

Lastly, in Table 6 we examine the almost unit root pair as the true pair, $\{\theta_0, \phi_0\} = \{0.99, 0.99\}$. For most of the pairs tested, applying the method with the OLS estimation of the long AR gives the highest power when the sample size is T = 50 and 100. However, for these sample sizes, using autocorrelations gives the highest power for the last three pairs tested, for which the value of the MA parameter is closer to the boundary than the value of the AR parameter. As we increase the sample size, power reaches similar values for our method with the three estimating functions.

5.1 Robustness checks

In addition to our previous experiments, we conducted simple robustness checks for our method. Two examples are shown in Tables 7 and 8. We present the power values for two ARMA processes which are misspecified as ARMA(1,1), with a sample size T = 100. In Table 7 we have an ARMA(2,1) with $\{\theta_0, \phi_{1,0}, \phi_{2,0}\} = \{0.3, 0.6, 0.85\}$, so we have a high persistence AR(2) mixed with a low persistence MA(1). In general, the method applied with autocorrelations offers more power for rejecting the pairs where the AR parameter is closer to the boundary than the MA parameter, while with the OLS estimation with forward and backward-looking components we obtain more power for rejecting the pairs where the MA parameter is closer to the boundary than the AR parameter. The ARMA(1,2) in Table 8, with $\{\theta_{1,0}, \theta_{2,0}, \phi_0\} = \{0.3, 0.85, 0.6\}$, shows a high persistence AR(1) mixed with an MA(2) which has higher persistence on its second parameter. In this case, we find that the method applied with both the OLS estimation with forward and backward-looking components and the OLS estimation of the long AR is more powerful in rejecting the misspecification than with the autocorrelations. Note that in both examples, our three choices of estimating functions provide good power, rejecting the misspecification for all the pairs tested.

5.2 Impulse-response confidence bands

To illustrate the transformation via the computation of impulse-response confidence bands, we conduct further simulation exercises. We simulate one ARMA(1,1) sample path selecting different combinations of the parameters $\{\theta_0, \phi_0\}$. The setting of our model is the same we followed in the above mentioned experiments: the total number of auxiliary parameters is set to 8 for the three choices of the estimating function, H = 3 and L = 199.

We conduct a grid search on the four quadrants with a fixed step. For brevity, we show two simulations with the pairs $\{\theta_0, \phi_0\} = \{0.99, 0.99\}$ and $\{\theta_0, \phi_0\} = \{0.96, 0.5\}$, for samples of T = 50 observations, in Figures 1 and 2. There are four diagrams in each figure: the simulated path is plotted in the upper left section, the joint 95% confidence set appears in the upper right, the joint 95% confidence surface is in the lower left, and the impulse-response confidence bands are shown in the lower right section. Figures 1 and 2 present the outcomes of our method selecting the OLS estimation with forward and backward-looking components (FBLC) and the autocorrelations, respectively, as estimating functions.

Note that even with this small sample size, the true values of the parameters are covered by the joint 95% confidence sets, and the impulse-response corresponding to the true pair is covered by our impulse-response confidence bands. However, for the almost unit root pair in Figure 1, the impulse-response function obtained from the MLE is outside the confidence bands. As expected, inverting the asymptotic Wald test based on MLE estimates is almost infeasible at corner solutions with parameters close to the unit boundary, since in many instances the variance-covariance matrix is not invertible. Comparing our impulse-response confidence bands with the impulse-response function obtained with the MLE point estimates, we conclude that the MLE can lead to spurious results when identification problems exist.

6 Conclusions

In this paper we revisit Indirect Inference jointly with the MC test method from a confidence set, two-stage, simulation-based outlook, focusing on ARMA processes in finite samples. Despite the simplicity of ARMA models in time series, identification and boundary issues raise enduring complications for estimation and inference. We propose estimating equations via two-sided regressions with forward and backward-looking components, backward-looking autoregressions and autocorrelations. We treat the resulting objective functions as test statistics which we invert through a two-stage simulation to obtain a joint confidence set for the parameters: first, we obtain calibrated estimates applying the previously mentioned estimating functions; then we compute the p-values applying MC test methods. Invariance principles are proposed to ensure finite sample exactness in general nuisance parameter dependent settings. The proposed impulse-response confidence bands represent an important and empirically relevant transformation of the joint confidence set.

Supportive simulation results confirm the following. In contrast to MLE-based tests, our proposed methods achieve size control and good power, irrespective of how close the parameters are to the unit root boundary. The results obtained in terms of power for processes with Student-t errors are in line with the ones obtained with Gaussian errors. Our method eradicates pile-up problems even with very small samples, for example, with no more than 50 observations. The significance level is affected neither by flat segments in the objective function, nor by how close the estimated parameters are to the unit root boundary. Finally, our studies on impulse-response confidence bands show that relying on MLE when the processes are highly persistent can lead researchers to underestimate shock effects when conducting estimation and forecasting.

References

[1] Andrews D. W. K. and P. Guggenberger (2010). Asymptotic Size and a Problem with Subsampling and With the m out of n Bootstrap. Econometric Theory 26.
[2] Barnard G. A. (1963). Comment on "The Spectral Analysis of Point Processes" by M. S. Bartlett. Journal of the Royal Statistical Society, Series B 25, 294.
[3] Beaulieu M.-C., Dufour J.-M. and L. Khalaf (2007). Multivariate tests of mean-variance efficiency with possibly non-Gaussian errors: an exact simulation-based approach. Journal of Business and Economic Statistics 25.
[4] Beaulieu M.-C., Dufour J.-M. and L. Khalaf (2013). Identification-robust estimation and testing of the zero-beta CAPM. Review of Economic Studies 80.
[5] Beaulieu M.-C., Dufour J.-M. and L. Khalaf (2014). Exact Confidence Set Estimation and Goodness of Fit Test Methods for Asymmetric Heavy Tailed Stable Distributions. Journal of Econometrics 181.
[6] Bolduc D., L. Khalaf and C. Yélou (2010). Identification Robust Confidence Set Methods for Inference on Parameter Ratios with Application to Discrete Choice Models. Journal of Econometrics 157.
[7] Calvet L. E. and V. Czellar (2015). Through the looking glass: Indirect inference via simple equilibria. Journal of Econometrics, Forthcoming.
[8] Davis R. and W. Dunsmuir (1996). Maximum Likelihood Estimation for MA(1) Processes with a Root on the Unit Circle. Econometric Theory 12.
[9] Davis R. and L. Song (2011). Unit Roots in Moving Averages Beyond First Order. Annals of Statistics 39.

[10] Dominicy Y. and D. Veredas (2013). The Method of Simulated Quantiles. Journal of Econometrics 172.
[11] Dufour J.-M., L. Khalaf and M.-C. Beaulieu (2003). Exact Skewness and Kurtosis Tests for Multivariate Normality and Goodness of Fit in Multivariate Regressions with Application to Asset Pricing Models. Oxford Bulletin of Economics and Statistics 65.
[12] Dufour J.-M., L. Khalaf and M.-C. Beaulieu (2010). Finite Sample Diagnostics in Multivariate Regressions with Applications to Asset Pricing Models. Journal of Applied Econometrics 25.
[13] Dufour J.-M. and J. R. Kurz-Kim (2010). Exact inference and optimal invariant estimation for the stability parameter of symmetric α-stable distributions. Journal of Empirical Finance 17.
[14] Dufour J.-M. (2006). Monte Carlo Tests with Nuisance Parameters: A General Approach to Finite-Sample Inference and Nonstandard Asymptotics in Econometrics. Journal of Econometrics 133.
[15] Dufour J.-M. and D. Pelletier (2014). Practical Methods for Modelling Weak VARMA Processes: Identification, Estimation and Specification with a Macroeconomic Application. Working paper.
[16] Dufour J.-M. and O. Torrès (2000). Markovian Processes, Two-Sided Autoregressions and Exact Inference for Stationary and Nonstationary Autoregressive Processes. Journal of Econometrics 99.
[17] Dufour J.-M. and P. Valéry (2006). On a Simple Two-Stage Closed-Form Estimator for a Stochastic Volatility in a General Linear Regression; in Volume 20 (Part A) of Advances in Econometrics: Econometric Analysis of Economic and Financial Time Series, in honor of Clive Granger and Robert Engle, edited by Thomas B. Fomby and Dek Terrell, Elsevier Science, Oxford, UK.

[18] Dufour J.-M. and P. Valéry (2009). Exact and Asymptotic Tests for Possibly Non-Regular Hypotheses on Stochastic Volatility Models. Journal of Econometrics 150.
[19] Dridi R., Renault E. and A. Guay (2007). Indirect Inference and Calibration of Dynamic Stochastic General Equilibrium Models. Journal of Econometrics 136.
[20] Guay A. and O. Scaillet (2003). Indirect Inference, Nuisance Parameter, and Threshold Moving Average Models. Journal of Business and Economic Statistics 21.
[21] Dwass M. (1957). Modified Randomization Tests for Nonparametric Hypotheses. Annals of Mathematical Statistics 28.
[22] Galbraith J. W. and V. Zinde-Walsh (1994). A Simple, Non-Iterative Estimator for Moving Average Models. Biometrika 81.
[23] Galbraith J. W. and V. Zinde-Walsh (1997). On Simple Auto-Regressive-Based Estimation and Identification Techniques for ARMA Models. Biometrika 84.
[24] Fuleky P. and E. Zivot (2014). Indirect Inference Based on the Score. Econometrics Journal, Forthcoming.
[25] Gallant A. R. and G. Tauchen (1996). Which Moments to Match? Econometric Theory 12.
[26] Ghysels E., Khalaf L. and C. Vodounou (2003). Simulation Based Inference in Moving Average Models. Annales d'Économie et de Statistique 69.
[27] Gorodnichenko Y., Mikusheva A. and S. Ng (2012). Estimators for Persistent and Possibly Non-Stationary Data with Classical Properties. Econometric Theory 28.

[28] Gospodinov N. (2002). Bootstrap-Based Inference in Models with a Nearly Noninvertible Moving Average Component. Journal of Business and Economic Statistics 20.
[29] Gospodinov N. and S. Ng (2015). Minimum Distance Estimation of Possibly Non-Invertible Moving Average Models. Journal of Business and Economic Statistics 33.
[30] Gouriéroux C., Monfort A. and E. Renault (1993). Indirect Inference. Journal of Applied Econometrics 8, S85-S118.
[31] Gouriéroux C. and A. Monfort (1996). Simulation-Based Econometric Methods, Core Lectures, Oxford University Press.
[32] Gouriéroux C., Phillips P. C. B. and J. Yu (2010). Indirect Inference for Dynamic Panel Models. Journal of Econometrics 157.
[33] Hannan E. J. and J. R. Rissanen (1982). Recursive Estimation of Mixed Autoregressive-Moving-Average Order. Biometrika 69.
[34] Hansen B. E. (1999). The Grid Bootstrap and the Autoregressive Model. Review of Economics and Statistics 81.
[35] Jordá O. (2009). Simultaneous Confidence Regions for Impulse Responses. The Review of Economics and Statistics 91.
[36] Jordá O. and M. Marcellino (2010). Path Forecast Evaluation. Journal of Applied Econometrics 25.
[37] Jordá O. and S. Kozicki (2011). Estimation and Inference by the Method of Projection Minimum Distance: An Application to the New Keynesian Hybrid Phillips Curve. International Economic Review 52.

[38] Jordá O., Knuppel M. and M. Marcellino (2013). Empirical Simultaneous Confidence Regions for Path-Forecasts. International Journal of Forecasting 29.
[39] Li T. (2010). Indirect Inference in Structural Econometric Models. Journal of Econometrics 157.
[40] MATLAB 8.1, The MathWorks, Inc., Natick, Massachusetts, United States.
[41] Mikusheva A. (2007). Uniform Inference in Autoregressive Models. Econometrica 75.
[42] Mikusheva A. (2012). One-Dimensional Inference in Autoregressive Models with the Potential Presence of a Unit Root. Econometrica 80.
[43] Mikusheva A. (2014). Second Order Expansion of the t-statistic in AR(1) Models. Econometric Theory, forthcoming.
[44] Phillips P. C. B. (2014). On Confidence Intervals for Autoregressive Roots and Predictive Regressions. Econometrica 82.
[45] Sargan J. D. and A. Bhargava (1983). Maximum Likelihood Estimation of Regression Models with First Order Moving Average Errors When the Root Lies on the Unit Circle. Econometrica 51.
[46] Smith A. (1993). Estimating Nonlinear Time Series Models Using Simulated Vector Autoregressions. Journal of Applied Econometrics 8, S63-S84.
[47] Stock J. H. and M. W. Watson (1993). A Simple Estimator of Cointegrating Vectors in Higher Order Integrated Systems. Econometrica 61.

Table 1: Empirical Rejections: True value $\theta_0 = 0.6$. The SIZE panel reports empirical size at T = 50, 100, 200 for the OLS-long AR, Simplified and MLE procedures, under Gaussian and Student-t (df = 5) MA errors. The POWER panels (T = 50, 100, 200) report rejection frequencies for the OLS-long AR and Simplified estimating functions, under Gaussian and Student-t (df = 5) errors, at tested values {0.00}, {0.30}, {0.85}, {0.90}, {0.96}, {0.99}. [Numerical entries are not recoverable from the source.]

Table 2: Empirical Rejections: True value $\theta_0 = 0.99$. The SIZE panel reports empirical size at T = 50, 100, 200 for the OLS-long AR, Simplified and MLE procedures, under Gaussian and Student-t (df = 5) MA errors. The POWER panels (T = 50, 100, 200) report rejection frequencies for the OLS-long AR and Simplified estimating functions, under Gaussian and Student-t (df = 5) errors, at tested values {0.00}, {0.30}, {0.60}, {0.85}, {0.90}, {0.96}. [Numerical entries are not recoverable from the source.]

Table 3: Empirical Rejections: True pair $\{\theta_0, \phi_0\} = \{0, 0.6\}$. The SIZE panels report empirical size at T = 50, 100, 200 for the OLS-FBLC, OLS-long AR and AutoCorr estimating functions and for MLE, under Gaussian and Student-t (df = 5) ARMA errors. The POWER panels (T = 50, 100, 200) report rejection frequencies for OLS-FBLC, OLS-long AR and AutoCorr, under Gaussian and Student-t (df = 5) errors, at tested pairs {0.3, 0.2}, {0.3, 0.85}, {0.5, 0.5}, {0.5, 0.96}, {0.6, 0}, {0.6, 0.85}, {0.85, 0.2}, {0.85, 0.6}, {0.96, 0.5}, {0.99, 0.99}. [Numerical entries are not recoverable from the source.]

Table 4: Empirical Rejections: True pair $\{\theta_0, \phi_0\} = \{0.5, 0.96\}$. The SIZE panels report empirical size at T = 50, 100, 200 for the OLS-FBLC, OLS-long AR and AutoCorr estimating functions, under Gaussian and Student-t (df = 5) ARMA errors, together with MLE size for the MA, AR and Joint tests. The POWER panels (T = 50, 100, 200) report rejection frequencies for OLS-FBLC, OLS-long AR and AutoCorr, under Gaussian and Student-t (df = 5) errors, at tested pairs {0, 0.6}, {0.3, 0.2}, {0.3, 0.85}, {0.5, 0.5}, {0.6, 0}, {0.6, 0.85}, {0.85, 0.2}, {0.85, 0.6}, {0.96, 0.5}, {0.99, 0.99}. [Numerical entries are not recoverable from the source.]

Table 5: Empirical Rejections: True pair $\{\theta_0, \phi_0\} = \{0.96, 0.5\}$. The SIZE panels report empirical size at T = 50, 100, 200 for the OLS-FBLC, OLS-long AR and AutoCorr estimating functions, under Gaussian and Student-t (df = 5) ARMA errors, together with MLE size for the MA, AR and Joint tests. The POWER panels (T = 50, 100, 200) report rejection frequencies for OLS-FBLC, OLS-long AR and AutoCorr, under Gaussian and Student-t (df = 5) errors, at tested pairs {0, 0.6}, {0.3, 0.2}, {0.3, 0.85}, {0.5, 0.5}, {0.5, 0.96}, {0.6, 0}, {0.6, 0.85}, {0.85, 0.2}, {0.85, 0.6}, {0.99, 0.99}. [Numerical entries are not recoverable from the source.]

Table 6: Empirical Rejections: True pair $\{\theta_0, \phi_0\} = \{0.99, 0.99\}$. The SIZE panels report empirical size at T = 50, 100, 200 for the OLS-FBLC, OLS-long AR and AutoCorr estimating functions, under Gaussian and Student-t (df = 5) ARMA errors, together with MLE size for the MA, AR and Joint tests. The POWER panels (T = 50, 100, 200) report rejection frequencies for OLS-FBLC, OLS-long AR and AutoCorr, under Gaussian and Student-t (df = 5) errors, at tested pairs {0, 0.6}, {0.3, 0.2}, {0.3, 0.85}, {0.5, 0.5}, {0.5, 0.96}, {0.6, 0}, {0.6, 0.85}, {0.85, 0.2}, {0.85, 0.6}, {0.96, 0.5}. [Numerical entries are not recoverable from the source.]

Table 7: Empirical Rejections: True values $\{\theta_{1,0}, \theta_{2,0}, \phi_0\} = \{0.3, 0.85, 0.60\}$. POWER at T = 100 for a Gaussian ARMA process misspecified as ARMA(1,1): rejection frequencies for OLS-FBLC, OLS-long AR and AutoCorr at tested pairs {0, 0.6}, {0.3, 0.2}, {0.3, 0.85}, {0.5, 0.5}, {0.5, 0.96}, {0.6, 0}, {0.6, 0.85}, {0.85, 0.2}, {0.85, 0.6}, {0.96, 0.5}, {0.99, 0.99}. [Numerical entries are not recoverable from the source.]

Table 8: Empirical Rejections: True values $\{\theta_0, \phi_{1,0}, \phi_{2,0}\} = \{0.3, 0.6, 0.85\}$. POWER at T = 100 for a Gaussian ARMA process misspecified as ARMA(1,1): rejection frequencies for OLS-FBLC, OLS-long AR and AutoCorr at the same tested pairs as Table 7. [Numerical entries are not recoverable from the source.]

Figure 1: ARMA(1,1) Simulated Path, $\{\theta_0, \phi_0\} = \{0.99, 0.99\}$ (panels: simulated path; joint 95% confidence set; joint 95% confidence surface; impulse-response confidence bands).

Figure 2: ARMA(1,1) Simulated Path, $\{\theta_0, \phi_0\} = \{0.96, 0.5\}$ (panels: simulated path; joint 95% confidence set; joint 95% confidence surface; impulse-response confidence bands).


When is it really justifiable to ignore explanatory variable endogeneity in a regression model? Discussion Paper: 2015/05 When is it really justifiable to ignore explanatory variable endogeneity in a regression model? Jan F. Kiviet www.ase.uva.nl/uva-econometrics Amsterdam School of Economics Roetersstraat

More information

Modeling Expectations with Noncausal Autoregressions

Modeling Expectations with Noncausal Autoregressions Modeling Expectations with Noncausal Autoregressions Markku Lanne * Pentti Saikkonen ** Department of Economics Department of Mathematics and Statistics University of Helsinki University of Helsinki Abstract

More information

A CONDITIONAL-HETEROSKEDASTICITY-ROBUST CONFIDENCE INTERVAL FOR THE AUTOREGRESSIVE PARAMETER. Donald W.K. Andrews and Patrik Guggenberger

A CONDITIONAL-HETEROSKEDASTICITY-ROBUST CONFIDENCE INTERVAL FOR THE AUTOREGRESSIVE PARAMETER. Donald W.K. Andrews and Patrik Guggenberger A CONDITIONAL-HETEROSKEDASTICITY-ROBUST CONFIDENCE INTERVAL FOR THE AUTOREGRESSIVE PARAMETER By Donald W.K. Andrews and Patrik Guggenberger August 2011 Revised December 2012 COWLES FOUNDATION DISCUSSION

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Robert V. Breunig Centre for Economic Policy Research, Research School of Social Sciences and School of

More information

G. S. Maddala Kajal Lahiri. WILEY A John Wiley and Sons, Ltd., Publication

G. S. Maddala Kajal Lahiri. WILEY A John Wiley and Sons, Ltd., Publication G. S. Maddala Kajal Lahiri WILEY A John Wiley and Sons, Ltd., Publication TEMT Foreword Preface to the Fourth Edition xvii xix Part I Introduction and the Linear Regression Model 1 CHAPTER 1 What is Econometrics?

More information

On Standard Inference for GMM with Seeming Local Identi cation Failure

On Standard Inference for GMM with Seeming Local Identi cation Failure On Standard Inference for GMM with Seeming Local Identi cation Failure Ji Hyung Lee y Zhipeng Liao z First Version: April 4; This Version: December, 4 Abstract This paper studies the GMM estimation and

More information

Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation

Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation Xu Cheng y Department of Economics Yale University First Draft: August, 27 This Version: December 28 Abstract In this paper,

More information

LECTURE 13: TIME SERIES I

LECTURE 13: TIME SERIES I 1 LECTURE 13: TIME SERIES I AUTOCORRELATION: Consider y = X + u where y is T 1, X is T K, is K 1 and u is T 1. We are using T and not N for sample size to emphasize that this is a time series. The natural

More information

Tests for Cointegration, Cobreaking and Cotrending in a System of Trending Variables

Tests for Cointegration, Cobreaking and Cotrending in a System of Trending Variables Tests for Cointegration, Cobreaking and Cotrending in a System of Trending Variables Josep Lluís Carrion-i-Silvestre University of Barcelona Dukpa Kim y Korea University May 4, 28 Abstract We consider

More information

A Test of Cointegration Rank Based Title Component Analysis.

A Test of Cointegration Rank Based Title Component Analysis. A Test of Cointegration Rank Based Title Component Analysis Author(s) Chigira, Hiroaki Citation Issue 2006-01 Date Type Technical Report Text Version publisher URL http://hdl.handle.net/10086/13683 Right

More information

Testing for a Trend with Persistent Errors

Testing for a Trend with Persistent Errors Testing for a Trend with Persistent Errors Graham Elliott UCSD August 2017 Abstract We develop new tests for the coe cient on a time trend in a regression of a variable on a constant and time trend where

More information

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models Journal of Finance and Investment Analysis, vol.1, no.1, 2012, 55-67 ISSN: 2241-0988 (print version), 2241-0996 (online) International Scientific Press, 2012 A Non-Parametric Approach of Heteroskedasticity

More information

GMM based inference for panel data models

GMM based inference for panel data models GMM based inference for panel data models Maurice J.G. Bun and Frank Kleibergen y this version: 24 February 2010 JEL-code: C13; C23 Keywords: dynamic panel data model, Generalized Method of Moments, weak

More information

Heteroskedasticity-Robust Inference in Finite Samples

Heteroskedasticity-Robust Inference in Finite Samples Heteroskedasticity-Robust Inference in Finite Samples Jerry Hausman and Christopher Palmer Massachusetts Institute of Technology December 011 Abstract Since the advent of heteroskedasticity-robust standard

More information

Estimation and Inference with Weak, Semi-strong, and Strong Identi cation

Estimation and Inference with Weak, Semi-strong, and Strong Identi cation Estimation and Inference with Weak, Semi-strong, and Strong Identi cation Donald W. K. Andrews Cowles Foundation Yale University Xu Cheng Department of Economics University of Pennsylvania This Version:

More information

On GMM Estimation and Inference with Bootstrap Bias-Correction in Linear Panel Data Models

On GMM Estimation and Inference with Bootstrap Bias-Correction in Linear Panel Data Models On GMM Estimation and Inference with Bootstrap Bias-Correction in Linear Panel Data Models Takashi Yamagata y Department of Economics and Related Studies, University of York, Heslington, York, UK January

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY PREFACE xiii 1 Difference Equations 1.1. First-Order Difference Equations 1 1.2. pth-order Difference Equations 7

More information

Estimating the Number of Common Factors in Serially Dependent Approximate Factor Models

Estimating the Number of Common Factors in Serially Dependent Approximate Factor Models Estimating the Number of Common Factors in Serially Dependent Approximate Factor Models Ryan Greenaway-McGrevy y Bureau of Economic Analysis Chirok Han Korea University February 7, 202 Donggyu Sul University

More information

When is a copula constant? A test for changing relationships

When is a copula constant? A test for changing relationships When is a copula constant? A test for changing relationships Fabio Busetti and Andrew Harvey Bank of Italy and University of Cambridge November 2007 usetti and Harvey (Bank of Italy and University of Cambridge)

More information

Comparing Nested Predictive Regression Models with Persistent Predictors

Comparing Nested Predictive Regression Models with Persistent Predictors Comparing Nested Predictive Regression Models with Persistent Predictors Yan Ge y and ae-hwy Lee z November 29, 24 Abstract his paper is an extension of Clark and McCracken (CM 2, 25, 29) and Clark and

More information

1 Regression with Time Series Variables

1 Regression with Time Series Variables 1 Regression with Time Series Variables With time series regression, Y might not only depend on X, but also lags of Y and lags of X Autoregressive Distributed lag (or ADL(p; q)) model has these features:

More information

Bootstrapping Long Memory Tests: Some Monte Carlo Results

Bootstrapping Long Memory Tests: Some Monte Carlo Results Bootstrapping Long Memory Tests: Some Monte Carlo Results Anthony Murphy and Marwan Izzeldin University College Dublin and Cass Business School. July 2004 - Preliminary Abstract We investigate the bootstrapped

More information

Choice of Spectral Density Estimator in Ng-Perron Test: Comparative Analysis

Choice of Spectral Density Estimator in Ng-Perron Test: Comparative Analysis MPRA Munich Personal RePEc Archive Choice of Spectral Density Estimator in Ng-Perron Test: Comparative Analysis Muhammad Irfan Malik and Atiq-ur- Rehman International Institute of Islamic Economics, International

More information

GMM estimation of spatial panels

GMM estimation of spatial panels MRA Munich ersonal ReEc Archive GMM estimation of spatial panels Francesco Moscone and Elisa Tosetti Brunel University 7. April 009 Online at http://mpra.ub.uni-muenchen.de/637/ MRA aper No. 637, posted

More information

Technical Appendix-3-Regime asymmetric STAR modeling and exchange rate reversion

Technical Appendix-3-Regime asymmetric STAR modeling and exchange rate reversion Technical Appendix-3-Regime asymmetric STAR modeling and exchange rate reversion Mario Cerrato*, Hyunsok Kim* and Ronald MacDonald** 1 University of Glasgow, Department of Economics, Adam Smith building.

More information

DSGE Methods. Estimation of DSGE models: GMM and Indirect Inference. Willi Mutschler, M.Sc.

DSGE Methods. Estimation of DSGE models: GMM and Indirect Inference. Willi Mutschler, M.Sc. DSGE Methods Estimation of DSGE models: GMM and Indirect Inference Willi Mutschler, M.Sc. Institute of Econometrics and Economic Statistics University of Münster willi.mutschler@wiwi.uni-muenster.de Summer

More information

Modeling Expectations with Noncausal Autoregressions

Modeling Expectations with Noncausal Autoregressions MPRA Munich Personal RePEc Archive Modeling Expectations with Noncausal Autoregressions Markku Lanne and Pentti Saikkonen Department of Economics, University of Helsinki, Department of Mathematics and

More information

Confidence Intervals in Autoregressive Panels, Invariance, and. Indirect Inference

Confidence Intervals in Autoregressive Panels, Invariance, and. Indirect Inference Confidence Intervals in Autoregressive Panels, Invariance, and Indirect Inference Lynda Khalaf Carleton University Charles J. Saunders Carleton University December 22, 24 Abstract Confidence regions are

More information

Bootstrapping Long Memory Tests: Some Monte Carlo Results

Bootstrapping Long Memory Tests: Some Monte Carlo Results Bootstrapping Long Memory Tests: Some Monte Carlo Results Anthony Murphy and Marwan Izzeldin Nu eld College, Oxford and Lancaster University. December 2005 - Preliminary Abstract We investigate the bootstrapped

More information

GLS-based unit root tests with multiple structural breaks both under the null and the alternative hypotheses

GLS-based unit root tests with multiple structural breaks both under the null and the alternative hypotheses GLS-based unit root tests with multiple structural breaks both under the null and the alternative hypotheses Josep Lluís Carrion-i-Silvestre University of Barcelona Dukpa Kim Boston University Pierre Perron

More information

A nonparametric test for seasonal unit roots

A nonparametric test for seasonal unit roots Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna To be presented in Innsbruck November 7, 2007 Abstract We consider a nonparametric test for the

More information

Darmstadt Discussion Papers in Economics

Darmstadt Discussion Papers in Economics Darmstadt Discussion Papers in Economics The Effect of Linear Time Trends on Cointegration Testing in Single Equations Uwe Hassler Nr. 111 Arbeitspapiere des Instituts für Volkswirtschaftslehre Technische

More information

Finite Sample and Optimal Inference in Possibly Nonstationary ARCH Models with Gaussian and Heavy-Tailed Errors

Finite Sample and Optimal Inference in Possibly Nonstationary ARCH Models with Gaussian and Heavy-Tailed Errors Finite Sample and Optimal Inference in Possibly Nonstationary ARCH Models with Gaussian and Heavy-Tailed Errors J and E M. I Université de Montréal University of Alicante First version: April 27th, 2004

More information

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information

Introduction to Econometrics

Introduction to Econometrics Introduction to Econometrics T H I R D E D I T I O N Global Edition James H. Stock Harvard University Mark W. Watson Princeton University Boston Columbus Indianapolis New York San Francisco Upper Saddle

More information

(Y jz) t (XjZ) 0 t = S yx S yz S 1. S yx:z = T 1. etc. 2. Next solve the eigenvalue problem. js xx:z S xy:z S 1

(Y jz) t (XjZ) 0 t = S yx S yz S 1. S yx:z = T 1. etc. 2. Next solve the eigenvalue problem. js xx:z S xy:z S 1 Abstract Reduced Rank Regression The reduced rank regression model is a multivariate regression model with a coe cient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure,

More information

Applications of Subsampling, Hybrid, and Size-Correction Methods

Applications of Subsampling, Hybrid, and Size-Correction Methods Applications of Subsampling, Hybrid, and Size-Correction Methods Donald W. K. Andrews Cowles Foundation for Research in Economics Yale University Patrik Guggenberger Department of Economics UCLA November

More information

Instead of using all the sample observations for estimation, the suggested procedure is to divide the data set

Instead of using all the sample observations for estimation, the suggested procedure is to divide the data set Chow forecast test: Instead of using all the sample observations for estimation, the suggested procedure is to divide the data set of N sample observations into N 1 observations to be used for estimation

More information

Subsets Tests in GMM without assuming identi cation

Subsets Tests in GMM without assuming identi cation Subsets Tests in GMM without assuming identi cation Frank Kleibergen Brown University and University of Amsterdam Sophocles Mavroeidis y Brown University sophocles_mavroeidis@brown.edu Preliminary: please

More information

ARIMA Models. Richard G. Pierse

ARIMA Models. Richard G. Pierse ARIMA Models Richard G. Pierse 1 Introduction Time Series Analysis looks at the properties of time series from a purely statistical point of view. No attempt is made to relate variables using a priori

More information

Finite sample distributions of nonlinear IV panel unit root tests in the presence of cross-section dependence

Finite sample distributions of nonlinear IV panel unit root tests in the presence of cross-section dependence Finite sample distributions of nonlinear IV panel unit root tests in the presence of cross-section dependence Qi Sun Department of Economics University of Leicester November 9, 2009 Abstract his paper

More information

Inference for Parameters Defined by Moment Inequalities: A Recommended Moment Selection Procedure. Donald W.K. Andrews and Panle Jia

Inference for Parameters Defined by Moment Inequalities: A Recommended Moment Selection Procedure. Donald W.K. Andrews and Panle Jia Inference for Parameters Defined by Moment Inequalities: A Recommended Moment Selection Procedure By Donald W.K. Andrews and Panle Jia September 2008 Revised August 2011 COWLES FOUNDATION DISCUSSION PAPER

More information

1 Quantitative Techniques in Practice

1 Quantitative Techniques in Practice 1 Quantitative Techniques in Practice 1.1 Lecture 2: Stationarity, spurious regression, etc. 1.1.1 Overview In the rst part we shall look at some issues in time series economics. In the second part we

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

Trending Models in the Data

Trending Models in the Data April 13, 2009 Spurious regression I Before we proceed to test for unit root and trend-stationary models, we will examine the phenomena of spurious regression. The material in this lecture can be found

More information

9) Time series econometrics

9) Time series econometrics 30C00200 Econometrics 9) Time series econometrics Timo Kuosmanen Professor Management Science http://nomepre.net/index.php/timokuosmanen 1 Macroeconomic data: GDP Inflation rate Examples of time series

More information

ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria

ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria SOLUTION TO FINAL EXAM Friday, April 12, 2013. From 9:00-12:00 (3 hours) INSTRUCTIONS:

More information

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8]

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] 1 Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] Insights: Price movements in one market can spread easily and instantly to another market [economic globalization and internet

More information

DSGE-Models. Limited Information Estimation General Method of Moments and Indirect Inference

DSGE-Models. Limited Information Estimation General Method of Moments and Indirect Inference DSGE-Models General Method of Moments and Indirect Inference Dr. Andrea Beccarini Willi Mutschler, M.Sc. Institute of Econometrics and Economic Statistics University of Münster willi.mutschler@uni-muenster.de

More information

On the Power of Tests for Regime Switching

On the Power of Tests for Regime Switching On the Power of Tests for Regime Switching joint work with Drew Carter and Ben Hansen Douglas G. Steigerwald UC Santa Barbara May 2015 D. Steigerwald (UCSB) Regime Switching May 2015 1 / 42 Motivating

More information

An Improved Panel Unit Root Test Using GLS-Detrending

An Improved Panel Unit Root Test Using GLS-Detrending An Improved Panel Unit Root Test Using GLS-Detrending Claude Lopez 1 University of Cincinnati August 2004 This paper o ers a panel extension of the unit root test proposed by Elliott, Rothenberg and Stock

More information

A Course on Advanced Econometrics

A Course on Advanced Econometrics A Course on Advanced Econometrics Yongmiao Hong The Ernest S. Liu Professor of Economics & International Studies Cornell University Course Introduction: Modern economies are full of uncertainties and risk.

More information

1. The Multivariate Classical Linear Regression Model

1. The Multivariate Classical Linear Regression Model Business School, Brunel University MSc. EC550/5509 Modelling Financial Decisions and Markets/Introduction to Quantitative Methods Prof. Menelaos Karanasos (Room SS69, Tel. 08956584) Lecture Notes 5. The

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

Panel cointegration rank testing with cross-section dependence

Panel cointegration rank testing with cross-section dependence Panel cointegration rank testing with cross-section dependence Josep Lluís Carrion-i-Silvestre y Laura Surdeanu z September 2007 Abstract In this paper we propose a test statistic to determine the cointegration

More information

2. Multivariate ARMA

2. Multivariate ARMA 2. Multivariate ARMA JEM 140: Quantitative Multivariate Finance IES, Charles University, Prague Summer 2018 JEM 140 () 2. Multivariate ARMA Summer 2018 1 / 19 Multivariate AR I Let r t = (r 1t,..., r kt

More information

TESTS FOR STOCHASTIC SEASONALITY APPLIED TO DAILY FINANCIAL TIME SERIES*

TESTS FOR STOCHASTIC SEASONALITY APPLIED TO DAILY FINANCIAL TIME SERIES* The Manchester School Vol 67 No. 1 January 1999 1463^6786 39^59 TESTS FOR STOCHASTIC SEASONALITY APPLIED TO DAILY FINANCIAL TIME SERIES* by I. C. ANDRADE University of Southampton A. D. CLARE ISMA Centre,

More information

Testing an Autoregressive Structure in Binary Time Series Models

Testing an Autoregressive Structure in Binary Time Series Models ömmföäflsäafaäsflassflassflas ffffffffffffffffffffffffffffffffffff Discussion Papers Testing an Autoregressive Structure in Binary Time Series Models Henri Nyberg University of Helsinki and HECER Discussion

More information

FINITE SAMPLE TESTS FOR ARCH EFFECTS AND VARIANCE CHANGE-POINTS IN LINEAR REGRESSIONS

FINITE SAMPLE TESTS FOR ARCH EFFECTS AND VARIANCE CHANGE-POINTS IN LINEAR REGRESSIONS FINITE SAMPLE TESTS FOR ARCH EFFECTS AND VARIANCE CHANGE-POINTS IN LINEAR REGRESSIONS Jean-Marie Dufour a, Lynda Khalaf b, Jean-Thomas Bernard c and Ian Genest d a CIRANO, CIREQ, and Département de sciences

More information

GARCH Models Estimation and Inference

GARCH Models Estimation and Inference GARCH Models Estimation and Inference Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 1 Likelihood function The procedure most often used in estimating θ 0 in

More information

A Guide to Modern Econometric:

A Guide to Modern Econometric: A Guide to Modern Econometric: 4th edition Marno Verbeek Rotterdam School of Management, Erasmus University, Rotterdam B 379887 )WILEY A John Wiley & Sons, Ltd., Publication Contents Preface xiii 1 Introduction

More information

INDIRECT INFERENCE BASED ON THE SCORE

INDIRECT INFERENCE BASED ON THE SCORE INDIRECT INFERENCE BASED ON THE SCORE Peter Fuleky and Eric Zivot June 22, 2010 Abstract The Efficient Method of Moments (EMM) estimator popularized by Gallant and Tauchen (1996) is an indirect inference

More information

Testing for Regime Switching in Singaporean Business Cycles

Testing for Regime Switching in Singaporean Business Cycles Testing for Regime Switching in Singaporean Business Cycles Robert Breunig School of Economics Faculty of Economics and Commerce Australian National University and Alison Stegman Research School of Pacific

More information

R = µ + Bf Arbitrage Pricing Model, APM

R = µ + Bf Arbitrage Pricing Model, APM 4.2 Arbitrage Pricing Model, APM Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.

More information

Introduction to Eco n o m et rics

Introduction to Eco n o m et rics 2008 AGI-Information Management Consultants May be used for personal purporses only or by libraries associated to dandelon.com network. Introduction to Eco n o m et rics Third Edition G.S. Maddala Formerly

More information

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University The Bootstrap: Theory and Applications Biing-Shen Kuo National Chengchi University Motivation: Poor Asymptotic Approximation Most of statistical inference relies on asymptotic theory. Motivation: Poor

More information

Economics 620, Lecture 18: Nonlinear Models

Economics 620, Lecture 18: Nonlinear Models Economics 620, Lecture 18: Nonlinear Models Nicholas M. Kiefer Cornell University Professor N. M. Kiefer (Cornell University) Lecture 18: Nonlinear Models 1 / 18 The basic point is that smooth nonlinear

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Inference in VARs with Conditional Heteroskedasticity of Unknown Form

Inference in VARs with Conditional Heteroskedasticity of Unknown Form Inference in VARs with Conditional Heteroskedasticity of Unknown Form Ralf Brüggemann a Carsten Jentsch b Carsten Trenkler c University of Konstanz University of Mannheim University of Mannheim IAB Nuremberg

More information

IDENTIFICATION-ROBUST SUBVECTOR INFERENCE. Donald W. K. Andrews. September 2017 Updated September 2017 COWLES FOUNDATION DISCUSSION PAPER NO.

IDENTIFICATION-ROBUST SUBVECTOR INFERENCE. Donald W. K. Andrews. September 2017 Updated September 2017 COWLES FOUNDATION DISCUSSION PAPER NO. IDENTIFICATION-ROBUST SUBVECTOR INFERENCE By Donald W. K. Andrews September 2017 Updated September 2017 COWLES FOUNDATION DISCUSSION PAPER NO. 3005 COWLES FOUNDATION FOR RESEARCH IN ECONOMICS YALE UNIVERSITY

More information

ESTIMATION AND INFERENCE WITH WEAK, SEMI-STRONG, AND STRONG IDENTIFICATION. Donald W. K. Andrews and Xu Cheng. October 2010 Revised July 2011

ESTIMATION AND INFERENCE WITH WEAK, SEMI-STRONG, AND STRONG IDENTIFICATION. Donald W. K. Andrews and Xu Cheng. October 2010 Revised July 2011 ESTIMATION AND INFERENCE WITH WEAK, SEMI-STRONG, AND STRONG IDENTIFICATION By Donald W. K. Andrews and Xu Cheng October 1 Revised July 11 COWLES FOUNDATION DISCUSSION PAPER NO. 1773R COWLES FOUNDATION

More information