
PROBABILITY AND MATHEMATICAL STATISTICS
Vol. 29, Fasc. 1 (2009), pp. 169-180

KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE

BY ANTONY GAUTIER (LILLE)

Abstract. This paper is devoted to ARMA models with time-dependent coefficients, including the well-known periodic ARMA models. We provide state-space representations and Kalman-type recursions to derive a Wold-Cramér decomposition for the least squares residuals. This decomposition turns out to be very convenient for further developments related to parameter least squares estimation. Some examples are proposed to illustrate the main purpose of these state-space forms.

2000 AMS Mathematics Subject Classification: Primary: 62M10; Secondary: 62M05.

Key words and phrases: Kalman-type recursions; least squares procedure; state-space representations; time-varying ARMA models; Wold-Cramér decomposition.

1. INTRODUCTION

In time series analysis, mounting evidence has been put forward to emphasize that empirical models in many fields, such as economics, climatology or engineering, are characterized by non-stationarity and parameter instability. Various methods have been devised in order to take non-stationarity into account: they mainly consist of differencing or filtering the original series with the purpose of turning it into a new set of data exhibiting no apparent deviation from stationarity. However, such methods allow for very restricted types of non-stationarity only. They essentially result in seasonal autoregressive integrated moving average (SARIMA) models, which consist in fitting an autoregressive moving average (ARMA) model to the differenced series. This implies that the same ARMA model applies whatever the season under consideration. Nevertheless, when considering, for instance, economic daily variables depending on weekdays and on weekends (motorway traffic being an example), it seems natural to expect different dynamics (and thus different forecasting formulae).

(The author would like to thank Christian Francq for inspiration and the referee for a careful reading of the manuscript.)

This observation has sparked off an explosion of interest in time series models with time-varying parameters. One notable class of such models is the class of time-varying ARMA models, in which parameters can move discretely, for example between a given number of regimes. In particular, this wide class includes the well-documented periodic ARMA time series models, extensively studied in the statistical literature in the last decade (see, e.g., Adams and Goodwin [1], Anderson et al. [2], Basawa and Lund [4], Miamee and Talebi [15], Gautier [12]), whose parameters switch according to a fixed periodic calendar. More generally, Dahlhaus [8], Azrak and Mélard [3] and Bibi and Francq [5] have dealt with the problem of estimating the whole class of ARMA models with time-dependent coefficients, via quasi-likelihood or least squares techniques.

The main purpose of this paper is therefore to provide easy mathematical methods to investigate further the large sample properties of least squares estimators of general time-varying ARMA models. Since the least squares procedure consists of the minimization of weighted sums of squares of prediction errors (see Godambe and Heyde [13] for a general reference), these methods are based on a state-space representation of the model under study and on a Wold-Cramér decomposition for the residuals. One distinctive feature of this paper is to explore the state-space framework as a statistical tool to address the issue of estimating ARMA models in the presence of time-dependent coefficients. Note, however, that it is beyond the scope of this paper to study the asymptotic behaviour of the least squares estimates (for this question, see Francq and Gautier [10]).

The remainder of the paper is organized as follows. Section 2 describes the time-varying ARMA model under consideration. Section 3 provides a state-space representation and Kalman-type recursions for the model, used to obtain a pure time-varying MA representation for the residuals in closed form. In Section 4, the coefficients of the Wold-Cramér decomposition are derived in simple examples. Concluding remarks are gathered in Section 5.

The following notation will be used throughout the paper. The identity matrix of order $m$ is denoted by $I_{(m)}$. We denote by $0_{(m)}$ the null vector of size $m$. Let $\mathbb{N} = \{0, 1, 2, \dots\}$. For the sake of simplicity, we indicate the size of any multivariate element in brackets under its notation; e.g., $M_{((m \times n))}$ denotes the matrix $M$ with $m$ rows and $n$ columns. By $M'$ we mean the transpose of any matrix $M$. We finally denote by $u_i$ the $i$-th vector of the canonical basis of $\mathbb{R}^{p+q}$.

2. MODEL AND PRELIMINARIES

Consider a time series $(X_t)_{t=1,2,\dots}$ exhibiting changes in regime at known dates. We suppose that there exists a finite number $d$ of regimes which alternate either with a constant periodicity or at irregular time intervals. Denote by $s_t$ the regime corresponding to the index $t$, so that $s_t = k$ when the time series is in the $k$-th regime at time $t$, for $k \in \{1, \dots, d\}$. The sequence $(s_t)$ is assumed to be known as a real-valued sequence of observed changes in regime. The dynamics

of $X_t$ in each regime can be described by an ARMA($p$, $q$) equation,

(2.1) $X_t - m(s_t) + \sum_{i=1}^{p} a_i(s_t)\,\{X_{t-i} - m(s_{t-i})\} = \epsilon_t + \sum_{i=1}^{q} b_i(s_t)\,\epsilon_{t-i}$

for all $t \geq 1$, where $p$, $q$ are minimal orders so that (2.1) is identifiable. Note that, under the additional assumptions made below, $m(s_t)$ can be interpreted as the expectation of $X_t$. The process $(\epsilon_t)$ is a white noise, namely a sequence of uncorrelated random variables with mean $0$ and variance $1$. The parameter of interest in Model (2.1) is denoted by

$\theta = \{m(1), \dots, m(d), a_1(1), \dots, a_p(d), b_1(1), \dots, b_q(d)\}$

and belongs to an open subset $\Theta$ of $\mathbb{R}^{(p+q+1)d}$. The recursive relation (2.1) requires starting values that we specify as $X_t - m(s_t) = \epsilon_t = 0$ for $t = 1 - \max\{p, q\}, \dots, 0$.

The best one-step predictor in the mean-square sense, which we denote by $\hat{X}_t(\theta) = E_\theta(X_t \mid X_{t-1}, \dots, X_1)$, can be computed recursively by

(2.2) $\hat{X}_k(\theta) = m(s_k)$ for $k = 1 - \max(p, q), \dots, 0$, and

$\hat{X}_k(\theta) = m(s_k) - \sum_{i=1}^{p} a_i(s_k)\{X_{k-i} - m(s_{k-i})\} + \sum_{i=1}^{q} b_i(s_k)\{X_{k-i} - \hat{X}_{k-i}(\theta)\}$ for $k = 1, \dots, t$.

The true value of the parameter $\theta$ is denoted by $\theta_0$, so we write $\theta_0 = \{m_0(1), \dots, m_0(d), a_{01}(1), \dots, a_{0p}(d), b_{01}(1), \dots, b_{0q}(d)\}$. The prediction errors are finally defined by

(2.3) $e_t(\theta) = X_t - \hat{X}_t(\theta)$.

Conditions ensuring consistency and asymptotic normality of least squares and quasi-generalized least squares estimators of Model (2.1), subject to stationary and ergodic changes in regime, have been given in Francq and Gautier [10]. In the afore-mentioned paper, it has been shown that the residuals (2.3) can be expressed in the following Wold-Cramér decomposition for non-stationary processes (see Cramér [7]):

(2.4) $e_t(\theta) = \sum_{i \geq 0} \psi_{t,i}(\theta, \theta_0)\,\epsilon_{t-i} + c_t(\theta, \theta_0).$
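For readers who want to experiment, the predictor recursion (2.2) and the prediction errors (2.3) are easy to implement. The following Python sketch is our own illustration, not part of the original paper; the function name, the 0-based regime labels and the array layout are our choices:

```python
import numpy as np

def prediction_errors(X, s, m, a, b):
    """One-step predictors and prediction errors e_t(theta) for the
    time-varying ARMA(p, q) model (2.1), via the recursion (2.2)-(2.3).
    X : observations X_1, ..., X_n (0-based array)
    s : regime label s_t in {0, ..., d-1} for each t
    m : length-d array of regime means
    a, b : (d, p) and (d, q) arrays of AR and MA coefficients
    Starting values: X_t - m(s_t) = e_t = 0 for t <= 0."""
    p, q = a.shape[1], b.shape[1]
    r = max(p, q)
    T = len(X)
    Xc = np.zeros(T + r)   # centred observations X_t - m(s_t), zero-padded past
    e = np.zeros(T + r)    # prediction errors, zero-padded past
    Xhat = np.zeros(T)
    for t in range(T):
        k = s[t]
        pred = m[k]
        # - sum_i a_i(s_t) {X_{t-i} - m(s_{t-i})}
        pred -= sum(a[k, i] * Xc[t + r - 1 - i] for i in range(p))
        # + sum_i b_i(s_t) {X_{t-i} - Xhat_{t-i}} = + sum_i b_i(s_t) e_{t-i}
        pred += sum(b[k, i] * e[t + r - 1 - i] for i in range(q))
        Xhat[t] = pred
        Xc[t + r] = X[t] - m[k]
        e[t + r] = X[t] - pred
    return Xhat, e[r:]
```

Evaluated at the true parameter $\theta_0$, the recursion returns exactly the driving noise, i.e. $e_t(\theta_0) = \epsilon_t$, which is a convenient sanity check.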

For example, in the time-varying AR(1) case, we have (see Subsection 4.1)

$\psi_{t,i}(\theta, \theta_0) = (-1)^i \{a_{01}(s_t) - a_1(s_t)\} \prod_{j=0}^{i-2} a_{01}(s_{t-1-j}),$
$c_t(\theta, \theta_0) = \{m_0(s_t) - m(s_t)\} + a_1(s_t)\,\{m_0(s_{t-1}) - m(s_{t-1})\}.$

Recall that, given the observations $X_1, \dots, X_n$, a least squares estimator of the parameter $\theta$ in Model (2.1) is obtained by solving

$\hat{\theta}_n = \arg\min_{\theta \in \Theta^*} n^{-1} \sum_{t=1}^{n} e_t^2(\theta),$

where $\Theta^*$ is a compact subset of $\Theta$ which contains $\theta_0$ (see Cochrane and Orcutt [6]). Accordingly, relation (2.4) turns out to be very convenient to study the large sample properties of the least squares estimators of Model (2.1).

3. MAIN RESULTS

State-space representations and the associated Kalman recursions have had a profound impact on time series analysis and many related areas. The techniques were originally developed in connection with the control of linear systems (see Hannan and Deistler [14]). Many time series models can be formulated as special cases of the general state-space model. The Kalman recursions, which play a key role in the estimation of state-space models, provide a unified approach to prediction and estimation for all processes that can be given a state-space form. In this section, a state-space representation of Model (2.1) and Kalman-type recursions are therefore derived. Next we apply them to obtain the time-varying MA representation (2.4) in closed form.

3.1. State-space representation. Let us introduce the following vectors:

$$X_t = X_t(\theta) = \begin{pmatrix} X_t - m(s_t) \\ X_{t-1} - m(s_{t-1}) \\ \vdots \\ X_{t-p+1} - m(s_{t-p+1}) \\ \epsilon_t \\ \epsilon_{t-1} \\ \vdots \\ \epsilon_{t-q+1} \end{pmatrix}_{((p+q) \times 1)}, \qquad \varepsilon_t = \begin{pmatrix} \epsilon_t \\ 0_{(p-1)} \\ \epsilon_t \\ 0_{(q-1)} \end{pmatrix}_{((p+q) \times 1)},$$

and the following square matrix:

$$\Phi_t(\theta)_{((p+q) \times (p+q))} = \begin{pmatrix} -a_1(s_t) & -a_2(s_t) & \cdots & -a_p(s_t) & b_1(s_t) & b_2(s_t) & \cdots & b_q(s_t) \\ & I_{(p-1)} & & 0_{(p-1)} & & 0 & & \\ & 0 & & 0 & & 0 & & \\ & 0 & & 0 & & I_{(q-1)} & & 0_{(q-1)} \end{pmatrix},$$

whose sub-diagonal entries all equal one, except the $(p+1, p)$-entry, which equals zero, all remaining entries being zero. We have implicitly assumed that $p \geq 1$ and $q \geq 1$; this entails no loss of generality, because the $a_i(\cdot)$'s and $b_i(\cdot)$'s can be equal to zero. The following result is then straightforward.

PROPOSITION 3.1. Consider Model (2.1) and the above-mentioned notation $X_t$, $\varepsilon_t$ and $\Phi_t(\theta)$. We have the following state-space form:

(3.1) $X_t = \varepsilon_t + \Phi_t(\theta)\,X_{t-1}$, $\quad t = 1, 2, \dots,$

where $X_0 = 0_{(p+q)}$.

3.2. Wold-Cramér decomposition for the residuals. We now concentrate on the way to obtain the Wold-Cramér representation (2.4) for the prediction errors of least squares estimation, via Kalman-type recursions. Consider the following square matrices (rows and columns partitioned into blocks of sizes $1$, $p-1$, $1$ and $q-1$):

$$J_{((p+q) \times (p+q))} = \begin{pmatrix} 0 & 0' & 0 & 0' \\ 0 & I_{(p-1)} & 0 & 0 \\ -1 & 0' & 0 & 0' \\ 0 & 0 & 0 & I_{(q-1)} \end{pmatrix},$$

where the $-1$ value is on the $(p+1)$-st line, and

$$K_{((p+q) \times (p+q))} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 1 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix},$$

where both $1$'s stand in the first column, on the first and the $(p+1)$-st lines, respectively. The main result is now presented.

PROPOSITION 3.2. Consider Model (2.1) and let the least squares residuals be defined by (2.3). Equation (2.4) holds with $\psi_{t,0}(\theta, \theta_0) = 1$,

(3.2) $\psi_{t,i}(\theta, \theta_0) = u_{p+1}' \Big[ \sum_{k=0}^{i} \Big\{ \prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) \Big\} K \Big\{ \prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) \Big\} \Big] K u_1$

for $i = 1, \dots, t-1$, and

(3.3) $c_t(\theta, \theta_0) = u_{p+1}' \sum_{i \geq 0} \Big\{ \prod_{j=0}^{i-1} J \Phi_{t-j}(\theta) \Big\} K u_1 \, \{ m_0(s_{t-i}) - m(s_{t-i}) \}.$

P r o o f. First, one can write (2.2) as

(3.4) $\hat{X}_{t|t-1} = \Phi_t(\theta)\,\hat{X}_{t-1|t-1}$, $\quad \hat{X}_{t|t} = J \hat{X}_{t|t-1} + x_t$, $\quad t = 1, 2, \dots,$

where $\hat{X}_{0|0} = 0_{(p+q)}$,

$$\hat{X}_{t|t-1} = \begin{pmatrix} \hat{X}_t - m(s_t) \\ X_{t-1} - m(s_{t-1}) \\ \vdots \\ X_{t-p+1} - m(s_{t-p+1}) \\ 0 \\ e_{t-1}(\theta) \\ \vdots \\ e_{t-q+1}(\theta) \end{pmatrix}_{((p+q) \times 1)}, \qquad \hat{X}_{t|t} = \begin{pmatrix} X_t - m(s_t) \\ X_{t-1} - m(s_{t-1}) \\ \vdots \\ X_{t-p+1} - m(s_{t-p+1}) \\ e_t(\theta) \\ e_{t-1}(\theta) \\ \vdots \\ e_{t-q+1}(\theta) \end{pmatrix}_{((p+q) \times 1)},$$

and

$$x_t = \begin{pmatrix} X_t - m(s_t) \\ 0_{(p-1)} \\ X_t - m(s_t) \\ 0_{(q-1)} \end{pmatrix}_{((p+q) \times 1)}.$$

From (3.4), we obtain

(3.5) $\hat{X}_{t|t} = x_t + J \Phi_t(\theta)\,\hat{X}_{t-1|t-1} = x_t + \sum_{k \geq 1} \Big\{ \prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) \Big\} x_{t-k}.$

Since

$$X_t(\theta_0) = \varepsilon_t + \sum_{k \geq 1} \Big\{ \prod_{j=0}^{k-1} \Phi_{t-j}(\theta_0) \Big\} \varepsilon_{t-k}$$

and

$$x_t = K X_t(\theta) = K X_t(\theta_0) + K \{ X_t(\theta) - X_t(\theta_0) \},$$

we obtain from (3.5)

(3.6) $\hat{X}_{t|t} = \sum_{i \geq 0} \sum_{k=0}^{i} \Big\{ \prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) \Big\} K \Big\{ \prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) \Big\} \varepsilon_{t-i} + \sum_{i \geq 0} \Big\{ \prod_{j=0}^{i-1} J \Phi_{t-j}(\theta) \Big\} K \{ X_{t-i}(\theta) - X_{t-i}(\theta_0) \},$

where $\prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) = I_{(p+q)}$ when $k = 0$ and $\prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) = I_{(p+q)}$ when $k = i$. Note that

$$K \{ X_t(\theta) - X_t(\theta_0) \} = \begin{pmatrix} m_0(s_t) - m(s_t) \\ 0_{(p-1)} \\ m_0(s_t) - m(s_t) \\ 0_{(q-1)} \end{pmatrix}.$$

Finally, since $\varepsilon_{t-i} = \epsilon_{t-i}\, K u_1$, the proof is completed by taking the $(p+1)$-st component of $\hat{X}_{t|t}$ given by (3.6). ∎
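The matrix products in (3.2)-(3.3) are straightforward to implement. The following Python sketch is our own numerical illustration, not part of the paper: it builds $\Phi_t$, $J$ and $K$ for a two-regime ARMA(1,1) ($p = q = 1$, 0-based time and regime labels, arbitrary parameter values) and lets the decomposition (2.4) be checked against the prediction errors from the recursion (2.2):

```python
import numpy as np

p, q = 1, 1
n = p + q

def phi(a1, b1):
    # companion matrix Phi_t of Subsection 3.1, specialized to p = q = 1
    return np.array([[-a1, b1],
                     [0.0, 0.0]])

# the matrices J and K of Subsection 3.2, specialized to p = q = 1
J = np.array([[0.0, 0.0], [-1.0, 0.0]])
K = np.array([[1.0, 0.0], [1.0, 0.0]])
u1 = np.array([1.0, 0.0])
up1 = np.array([0.0, 1.0])   # u_{p+1}

def psi(t, i, s, a, b, a0, b0):
    """psi_{t,i}(theta, theta0) via the matrix products of (3.2)."""
    total = np.zeros((n, n))
    for k in range(i + 1):
        A = np.eye(n)                      # prod_{j=0}^{k-1} J Phi_{t-j}(theta)
        for j in range(k):
            A = A @ J @ phi(a[s[t - j]], b[s[t - j]])
        B = np.eye(n)                      # prod_{j=0}^{i-k-1} Phi_{t-k-j}(theta0)
        for j in range(i - k):
            B = B @ phi(a0[s[t - k - j]], b0[s[t - k - j]])
        total += A @ K @ B
    return up1 @ total @ K @ u1

def intercept(t, s, a, b, m, m0):
    """c_t(theta, theta0) via (3.3); the sum truncates at the time origin
    because of the zero starting values of Section 2."""
    c, A = 0.0, np.eye(n)
    for i in range(t + 1):
        c += (up1 @ A @ K @ u1) * (m0[s[t - i]] - m[s[t - i]])
        A = A @ J @ phi(a[s[t - i]], b[s[t - i]])
    return c
```

With the zero starting values of Section 2 the infinite sums in (2.4) and (3.3) truncate at lag $t$, which is what the loops above exploit.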

REMARK 3.1. It is worth mentioning that equations (3.2)-(3.3) show that the coefficient $\psi_{t,i}(\theta, \theta_0)$ only depends on a finite number of past values of the regimes, more precisely on $s_t, \dots, s_{t-i+1}$, whereas the intercept $c_t(\theta, \theta_0)$ generally depends on all the past values of the regimes. Note also that $e_t(\theta_0) = \epsilon_t$, $c_t(\theta_0, \theta_0) = 0$ and $\psi_{t,i}(\theta_0, \theta_0) = 0$ for all $i \geq 1$.

4. EXAMPLES

In this section, we show that the expressions of the $\psi_{t,i}(\theta, \theta_0)$'s and $c_t(\theta, \theta_0)$'s given by (3.2) and (3.3), respectively, can be simplified in particular cases.

4.1. AR(1) model. First consider the simple example of Model (2.1) with $(p, q) = (1, 0)$. The vectorial representations are however used with $p = q = 1$. We have (3.1) with

$$\Phi_t(\theta) = \begin{pmatrix} -a_1(s_t) & 0 \\ 0 & 0 \end{pmatrix}, \qquad X_t = \begin{pmatrix} X_t - m(s_t) \\ \epsilon_t \end{pmatrix}, \qquad \varepsilon_t = \begin{pmatrix} \epsilon_t \\ \epsilon_t \end{pmatrix},$$

and (3.4)-(3.6) with

$$x_t = \begin{pmatrix} X_t - m(s_t) \\ X_t - m(s_t) \end{pmatrix}, \qquad J = \begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}, \qquad K = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}.$$

Note that $\prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) = 0$ for $k \geq 2$. Therefore, we obtain (3.2) with

$$\psi_{t,1}(\theta, \theta_0) = u_2' \{ K \Phi_t(\theta_0) + J \Phi_t(\theta) K \} K u_1 = -a_{01}(s_t) + a_1(s_t),$$

and, with the usual conventions (empty products equal one),

$$\psi_{t,i}(\theta, \theta_0) = u_2' \Big[ \sum_{k=0}^{1} \Big\{ \prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) \Big\} K \Big\{ \prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) \Big\} \Big] K u_1 = (-1)^i \prod_{j=0}^{i-1} a_{01}(s_{t-j}) + (-1)^{i-1} a_1(s_t) \prod_{j=0}^{i-2} a_{01}(s_{t-1-j}) = (-1)^i \{ a_{01}(s_t) - a_1(s_t) \} \prod_{j=0}^{i-2} a_{01}(s_{t-1-j})$$

for $i \geq 1$. Moreover, we have the intercept

$$c_t(\theta, \theta_0) = u_2' K u_1 \{ m_0(s_t) - m(s_t) \} + u_2' J \Phi_t(\theta) K u_1 \{ m_0(s_{t-1}) - m(s_{t-1}) \} = \{ m_0(s_t) - m(s_t) \} + a_1(s_t) \{ m_0(s_{t-1}) - m(s_{t-1}) \}.$$
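As a quick numerical sanity check (our own, not part of the paper; the two-regime setup and all names are arbitrary), the closed-form AR(1) coefficients of Subsection 4.1 can be compared with the prediction errors computed directly from the recursion (2.2):

```python
import numpy as np

def ar1_psi(t, i, s, a, a0):
    """Closed-form psi_{t,i} of Subsection 4.1 (0-based time):
    psi_{t,0} = 1 and, for i >= 1,
    psi_{t,i} = (-1)^i {a01(s_t) - a1(s_t)} prod_{j=0}^{i-2} a01(s_{t-1-j})."""
    if i == 0:
        return 1.0
    prod = 1.0
    for j in range(i - 1):
        prod *= a0[s[t - 1 - j]]
    return (-1.0) ** i * (a0[s[t]] - a[s[t]]) * prod

def ar1_intercept(t, s, a, m, m0):
    """Closed-form c_t of Subsection 4.1 (0-based time; for t = 0 the
    lagged term is absent because of the zero starting values)."""
    c = m0[s[t]] - m[s[t]]
    if t > 0:
        c += a[s[t]] * (m0[s[t - 1]] - m[s[t - 1]])
    return c
```

Summing $\psi_{t,i}\,\epsilon_{t-i}$ over the available lags and adding $c_t$ reproduces $e_t(\theta)$ exactly, as the decomposition (2.4) asserts.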

4.2. MA(1) model. Now we consider the MA(1) case, i.e. Model (2.1) with $(p, q) = (0, 1)$. Using the notation of the AR(1) case, equations (3.1)-(3.6) hold with

$$\Phi_t(\theta) = \begin{pmatrix} 0 & b_1(s_t) \\ 0 & 0 \end{pmatrix}.$$

Note that $\prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) = 0$ for $k \leq i - 2$. Therefore we have (2.4) with

$$\psi_{t,i}(\theta, \theta_0) = u_2' \Big[ \Big\{ \prod_{j=0}^{i-1} J \Phi_{t-j}(\theta) \Big\} K + \Big\{ \prod_{j=0}^{i-2} J \Phi_{t-j}(\theta) \Big\} K \Phi_{t-i+1}(\theta_0) \Big] K u_1$$
$$= u_2' \Big[ (-1)^i \Big\{ \prod_{j=0}^{i-1} b_1(s_{t-j}) \Big\} K + (-1)^{i-1} \Big\{ \prod_{j=0}^{i-2} b_1(s_{t-j}) \Big\} K \Phi_{t-i+1}(\theta_0) \Big] K u_1$$
$$= (-1)^i \prod_{j=0}^{i-1} b_1(s_{t-j}) + (-1)^{i-1} \Big\{ \prod_{j=0}^{i-2} b_1(s_{t-j}) \Big\} b_{01}(s_{t-i+1}) = (-1)^i \Big\{ \prod_{j=0}^{i-2} b_1(s_{t-j}) \Big\} \{ b_1(s_{t-i+1}) - b_{01}(s_{t-i+1}) \}$$

for $i \geq 1$. The intercept is equal to

$$c_t(\theta, \theta_0) = \sum_{i \geq 0} (-1)^i u_2' \Big\{ \prod_{j=0}^{i-1} b_1(s_{t-j}) \Big\} K u_1 \{ m_0(s_{t-i}) - m(s_{t-i}) \} = \sum_{i \geq 0} (-1)^i \Big\{ \prod_{j=0}^{i-1} b_1(s_{t-j}) \Big\} \{ m_0(s_{t-i}) - m(s_{t-i}) \}.$$

4.3. ARMA(1,1) model. Now, consider an ARMA(1,1) model, i.e. Model (2.1) with $(p, q) = (1, 1)$. Maintaining our notation, equations (3.1)-(3.6) hold with

$$\Phi_t(\theta) = \begin{pmatrix} -a_1(s_t) & b_1(s_t) \\ 0 & 0 \end{pmatrix}.$$

Note that

$$\prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) = \begin{pmatrix} 0 & 0 \\ (-1)^{k-1} \big\{ \prod_{j=0}^{k-2} b_1(s_{t-j}) \big\} a_1(s_{t-k+1}) & (-1)^k \prod_{j=0}^{k-1} b_1(s_{t-j}) \end{pmatrix}$$

for $k \geq 1$, and

$$\prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) = \begin{pmatrix} (-1)^{i-k} \prod_{j=0}^{i-k-1} a_{01}(s_{t-k-j}) & (-1)^{i-k-1} \big\{ \prod_{j=0}^{i-k-2} a_{01}(s_{t-k-j}) \big\} b_{01}(s_{t-i+1}) \\ 0 & 0 \end{pmatrix}$$

for $k \leq i - 1$. Therefore, we obtain (2.4) with

$$\psi_{t,i}(\theta, \theta_0) = (-1)^{i-1} \Big\{ \prod_{j=0}^{i-2} a_{01}(s_{t-j}) \Big\} \{ b_{01}(s_{t-i+1}) - a_{01}(s_{t-i+1}) \}$$
$$+ \sum_{k=1}^{i-1} (-1)^i \Big\{ \prod_{j=0}^{k-2} b_1(s_{t-j}) \Big\} \{ a_1(s_{t-k+1}) - b_1(s_{t-k+1}) \} \Big\{ \prod_{j=0}^{i-k-2} a_{01}(s_{t-k-j}) \Big\} \{ b_{01}(s_{t-i+1}) - a_{01}(s_{t-i+1}) \}$$
$$+ (-1)^{i-1} \Big\{ \prod_{j=0}^{i-2} b_1(s_{t-j}) \Big\} \{ a_1(s_{t-i+1}) - b_1(s_{t-i+1}) \}$$
$$= \sum_{k=1}^{i-1} (-1)^i \Big\{ \prod_{j=0}^{k-2} b_1(s_{t-j}) \Big\} \{ a_1(s_{t-k+1}) - a_{01}(s_{t-k+1}) \} \Big\{ \prod_{j=0}^{i-k-2} a_{01}(s_{t-k-j}) \Big\} \{ b_{01}(s_{t-i+1}) - a_{01}(s_{t-i+1}) \}$$
$$+ (-1)^{i-1} \Big\{ \prod_{j=0}^{i-2} b_1(s_{t-j}) \Big\} \{ a_1(s_{t-i+1}) - b_1(s_{t-i+1}) + b_{01}(s_{t-i+1}) - a_{01}(s_{t-i+1}) \}$$

for $i \geq 1$. In this example, the intercept equals

$$c_t(\theta, \theta_0) = u_2' K u_1 \{ m_0(s_t) - m(s_t) \} + \sum_{i \geq 1} u_2' \Big\{ \prod_{j=0}^{i-1} J \Phi_{t-j}(\theta) \Big\} K u_1 \{ m_0(s_{t-i}) - m(s_{t-i}) \}$$
$$= m_0(s_t) - m(s_t) + \{ a_1(s_t) - b_1(s_t) \} \{ m_0(s_{t-1}) - m(s_{t-1}) \}$$
$$+ \sum_{i \geq 2} (-1)^{i-1} \Big\{ \prod_{j=0}^{i-2} b_1(s_{t-j}) \Big\} \{ a_1(s_{t-i+1}) - b_1(s_{t-i+1}) \} \{ m_0(s_{t-i}) - m(s_{t-i}) \}.$$

4.4. AR(2) model. Finally, let us have a look at Model (2.1) with $(p, q) = (2, 0)$. In this case, equations (3.1)-(3.6) hold with

$$\Phi_t(\theta) = \begin{pmatrix} -a_1(s_t) & -a_2(s_t) & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

and

$$J = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \qquad X_t = \begin{pmatrix} X_t - m(s_t) \\ X_{t-1} - m(s_{t-1}) \\ \epsilon_t \end{pmatrix}, \qquad \varepsilon_t = \begin{pmatrix} \epsilon_t \\ 0 \\ \epsilon_t \end{pmatrix}, \qquad x_t = \begin{pmatrix} X_t - m(s_t) \\ 0 \\ X_t - m(s_t) \end{pmatrix}, \qquad K = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}.$$

Now,

$$\prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) = 0 \quad \text{when } k \geq 3.$$

So we have (2.4) with, for $i \geq 2$,

$$\psi_{t,i}(\theta, \theta_0) = u_3' \Big[ \sum_{k=0}^{2} \Big\{ \prod_{j=0}^{k-1} J \Phi_{t-j}(\theta) \Big\} K \Big\{ \prod_{j=0}^{i-k-1} \Phi_{t-k-j}(\theta_0) \Big\} \Big] K u_1$$
$$= u_3' \big\{ K \Phi_t(\theta_0) \Phi_{t-1}(\theta_0) + J \Phi_t(\theta) K \Phi_{t-1}(\theta_0) + J \Phi_t(\theta) J \Phi_{t-1}(\theta) K \big\} \prod_{j=0}^{i-3} \Phi_{t-2-j}(\theta_0)\, K u_1.$$

Here, the intercept is

$$c_t(\theta, \theta_0) = u_3' K u_1 \{ m_0(s_t) - m(s_t) \} + u_3' J \Phi_t(\theta) K u_1 \{ m_0(s_{t-1}) - m(s_{t-1}) \} + u_3' J \Phi_t(\theta) J \Phi_{t-1}(\theta) K u_1 \{ m_0(s_{t-2}) - m(s_{t-2}) \}$$
$$= m_0(s_t) - m(s_t) + a_1(s_t) \{ m_0(s_{t-1}) - m(s_{t-1}) \} + a_2(s_t) \{ m_0(s_{t-2}) - m(s_{t-2}) \}.$$

5. CONCLUDING REMARKS

This paper can be regarded as a preliminary technical step towards the least squares estimation of general time-varying ARMA models. We have investigated the state-space framework of such non-stationary time series models to obtain a convenient representation for the prediction errors. We have provided a simple methodology that allows one to derive the coefficients of the Wold-Cramér decomposition for the residuals, which play a crucial role in the least squares minimization procedure. Of course, one could investigate this question using a straightforward univariate approach, but that route seems rather tedious. The method presented in this paper is instead based on multivariate techniques and represents an appealing alternative to univariate calculations.

REFERENCES

[1] G. J. Adams and G. C. Goodwin, Parameter estimation for periodic ARMA models, J. Time Ser. Anal. 16 (1995), pp. 127-146.
[2] P. L. Anderson, M. M. Meerschaert and A. V. Vecchia, Innovations algorithm for periodically stationary time series, Stochastic Process. Appl. 83 (1999), pp. 149-169.
[3] R. Azrak and G. Mélard, Asymptotic properties of quasi-maximum likelihood estimators for ARMA models with time-dependent coefficients, Stat. Inference Stoch. Process. 9 (2006), pp. 279-330.
[4] I. V. Basawa and R. B. Lund, Large sample properties of parameter estimates for periodic ARMA models, J. Time Ser. Anal. 22 (2001), pp. 651-663.
[5] A. Bibi and C. Francq, Consistent and asymptotically normal estimators for cyclically time-dependent linear models, Ann. Inst. Statist. Math. 55 (2003), pp. 41-68.
[6] D. Cochrane and G. H. Orcutt, Application of least squares regression to relationships containing auto-correlated error terms, J. Amer. Statist. Assoc. 44 (1949), pp. 32-61.
[7] H. Cramér, On some classes of nonstationary stochastic processes, Proc. Fourth Berkeley Symp. Math. Statist. Prob. 2 (1961), pp. 57-78.
[8] R. Dahlhaus, Fitting time series models to nonstationary processes, Ann. Statist. 25 (1997), pp. 1-37.
[9] P. de Jong and J. Penzer, The ARMA model in state space form, Statist. Probab. Lett. 70 (2004), pp. 119-125.
[10] C. Francq and A. Gautier, Large sample properties of parameter least squares estimates for time-varying ARMA models, J. Time Ser. Anal. 25 (2004), pp. 765-783.
[11] C. Francq and A. Gautier, Estimation of time-varying ARMA models with Markovian changes in regime, Statist. Probab. Lett. 70 (2004), pp. 243-251.
[12] A. Gautier, Asymptotic inefficiency of mean-correction on parameter estimation for a periodic first-order autoregressive model, Comm. Statist. Theory Methods 35 (2006), pp. 2083-2106.
[13] V. P. Godambe and C. C. Heyde, Quasi-likelihood and optimal estimation, Internat. Statist. Rev. 55 (1987), pp. 231-244.
[14] E. J. Hannan and M. Deistler, The Statistical Theory of Linear Systems, Wiley, New York 1988.
[15] A. G. Miamee and S. Talebi, On PC solutions of PARMA(p, q) models, Probab. Math. Statist. 25 (2005), pp. 279-288.

Université Lille 3
Laboratoire EQUIPPE
BP 60149
59653 Villeneuve d'Ascq Cedex, France
E-mail: antony.gautier@univ-lille3.fr

Received on 17.4.2008; revised version on 9.1.2008