Order Selection for Vector Autoregressive Models


IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 2, FEBRUARY 2003 Order Selection for Vector Autoregressive Models Stijn de Waele and Piet M. T. Broersen Abstract: Order-selection criteria for vector autoregressive (AR) modeling are discussed. The performance of an order-selection criterion is optimal if the model of the selected order is the most accurate model in the considered set of estimated models: here, vector AR models. Suboptimal performance can be a result of underfit or overfit. The Akaike information criterion (AIC) is an asymptotically unbiased estimator of the Kullback-Leibler discrepancy (KLD) that can be used as an order-selection criterion. AIC is known to suffer from overfit: the selected model order can be greater than the optimal model order. Two causes of overfit are finite sample effects and asymptotic effects. As a consequence of finite sample effects, AIC underestimates the KLD for higher model orders, leading to overfit. Asymptotically, overfit is the result of statistical variations in the order-selection criterion. To derive an accurate order-selection criterion, both causes of overfit have to be addressed. Moreover, the cost of underfit has to be taken into account. The combined information criterion (CIC) for vector signals is robust to finite-sample effects and has the optimal asymptotic penalty factor. This penalty factor is the result of a tradeoff of underfit and overfit. The optimal penalty factor depends on the number of estimated parameters per model order. The CIC is compared to other criteria such as the AIC, the corrected Akaike information criterion (AICc), and the consistent minimum description length (MDL). Index Terms: Multivariate time series analysis, order selection, selection bias. I. INTRODUCTION DETERMINATION of the model order is an important step in vector autoregressive (AR) modeling. In this paper, we will consider order selection for estimated vector AR( ) models from observations of vector-valued time series.
Automatic order selection using statistical order-selection criteria was first introduced by Akaike [1]. A typical application of vector AR models is clutter suppression in airborne radar signal processing. The colored clutter signal is whitened by inverse filtering with an estimated vector AR filter [2]. The performance of an order-selection criterion is optimal if the model of the selected order is the most accurate model in the considered set of estimated models. Note that this is not necessarily the true model order. If the true process is AR(10), where the last six parameters are insignificant, the estimated AR(4) model will be the most accurate. Order selection can fail in two ways: by selecting a model order that is too low or a model order that is too high. These phenomena are called underfit and overfit, respectively. The Akaike information criterion (AIC) is known to suffer from overfit [3], [4]. As a result, much attention has been given to reducing overfit in order selection. Several solutions to the overfit problem have been proposed. One solution is to choose a very conservative value for the maximum model order. This strongly reduces the finite sample overfit. For vector AR models estimated from observations of an -dimensional time series, the maximum model order should be less than ( at most) [5]. However, the optimal model order may well be greater than this maximum candidate order. As a result, this restriction can reduce the performance of AR modeling [6]. A corrected AIC (AICc) has been introduced based on a different elaboration of asymptotic results [7]. Although asymptotically equivalent to AIC, simulations have shown that more accurate order selection is achieved with this criterion [8]. Another solution is the usage of consistent criteria with a penalty factor that increases with the number of observations [4].
A consistent criterion is the minimum description length (MDL) criterion [9] or the equivalent Bayesian information criterion (BIC), where the penalty factor is set to ln N. Asymptotically, MDL works well if the true process is an AR( ) process with very significant parameters. In practice, many processes are of a more complex nature, typically AR( ). Consistent criteria do not perform very well for these complex processes [10]. As mentioned before, not only overfit but also underfit may lead to reduced accuracy of order selection. The combined information criterion (CIC) for scalar signals [11] is based on a trade-off of underfit and overfit. It is a combination of the finite sample information criterion (FSIC) and an asymptotic order-selection criterion with penalty factor 3. In this paper, we will derive the CIC criterion for vector signals and compare it with existing criteria. Throughout, we will take into account the possibility of order selection for partial prediction. Partial prediction means predicting elements ( ) of the vector from previous observations. An application of partial prediction is found in control [12]. We will discuss the case where the model order is determined based on the given data only. If reliable knowledge about the optimal order is available from previous experiments, this should be taken into account. In Section II, some definitions for vector AR time series analysis are given. In Section III, overfit in order selection is discussed. In Section IV, the combined information criterion CIC for vector signals is derived. This new criterion is compared with other criteria in a simulation experiment described in Section V. Manuscript received October 17, 2001; revised September 24. This work was supported by the Dutch Technology Foundation (STW) under Contract DTN. The associate editor coordinating the review of this paper and approving it for publication was Dr. Alexi Gorokhov.
The authors are with the Signals, Systems, and Control Group, Delft University of Technology, Delft, The Netherlands (e-mail: S.deWaele@tn.tudelft.nl). II. DEFINITIONS A discrete-time vector time series is a vector from a vector space as a function of the integer variable. The dimension of is. Components of the vector are denoted.

An AR( ) vector process is a stationary stochastic signal that is generated by the following difference equation: (1) The are the AR parameter linear mappings and are abbreviated to AR parameters. The linear mapping can be fully characterized by a matrix [13, p. 84] with matrix elements. The number of AR parameters is the AR order. The generating signal is a white noise signal with covariance matrix. The variance of is given by the trace of : tr (2) The covariance matrix of is denoted. The auto- and cross-correlations and spectra can be calculated from the AR parameters. A useful representation of the AR parameters is given by the partial correlations [14]. The requirement of stationarity of the AR model can be expressed in the partial correlations as (3): all singular values of are smaller than 1. The error of an estimated model is evaluated by using the estimated model for prediction. The one-step-ahead predictor for an estimated AR( ) model with parameters is given by (4) The prediction error signal is given by (5) The model error (ME) is a normalized version of the one-step-ahead prediction error (PE): ME tr (6) PE tr (7) Here, is assumed to be of full rank. This is a generalization of the model error for scalar signals [15]. It is the first-order Taylor approximation of the Kullback-Leibler discrepancy (KLD) for normally distributed processes [16]. An equally accurate approximation of the KLD is found using the determinant instead of the trace in the ME: ME (8) With partial prediction, we are interested in predicting elements of ( ) based on all elements of the previous observations. Without loss of generality, the discussion is restricted to prediction of the first elements of so that Order selection for partial prediction was first discussed by Akaike [17].
An application is the prediction error method of system identification, where one is interested in predicting the output of a system based on previous values of the input and the output [18]. The model error for partial prediction is defined as tr (10) with (11) Asymptotically, the expectation of the model error is equal to the number of estimated parameters for unbiased models ( ): (12) For the selection of the model order, several order-selection criteria are available. The formulation of the criteria is adapted to order-selection for partial prediction. An order-selection criterion can be based on the fit RES( ) of the estimated AR( ) model to the data plus a penalty factor for the number of estimated parameters. The residual is given by RES (13) where is the estimate of the covariance matrix of the generating white noise of the estimated AR( ) model. is related to in the same way as is related to [see (11)]. The order-selection criteria that will be discussed are the following. The generalized information criterion (GIC): GIC RES (14) The AIC [14]: AIC (15) The MDL or BIC [9]: MDL GIC (16) The AICc [7] for partial prediction: AICc RES (17) The CIC for vector signals: CIC FSIC (18) The expression for CIC is given here for the sake of completeness; it will be discussed in detail in Section IV. The selected model order is given by the model order where an order-selection criterion is minimal: (19) For standard modeling ( ), the residual can be expressed in terms of the estimated partial correlations as RES (20)
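The criteria in (14)-(19) share one structure: the logarithm of a residual fit RES(p) plus a penalty proportional to the number of estimated parameters. A minimal sketch of that shared form, assuming r parameters are added per model order and a 1/N scaling of the penalty (the paper's exact symbols were lost in transcription, so these function names and scalings are illustrative):

```python
import numpy as np

def gic(res, p, penalty, n_obs, r):
    """Generalized information criterion GIC(p, penalty) for an estimated
    AR(p) model: log residual fit plus penalty * (number of parameters) / N.
    `res` is the residual fit RES(p); `r` parameters are added per order."""
    return np.log(res) + penalty * r * p / n_obs

def select_order(res_values, penalty, n_obs, r):
    """Select the order minimizing GIC over candidate orders 0..K,
    as in (19); res_values[p] is RES(p)."""
    crits = [gic(res, p, penalty, n_obs, r) for p, res in enumerate(res_values)]
    return int(np.argmin(crits))
```

With penalty = 2 this reduces to the AIC form (15), and penalty = ln N gives the MDL/BIC form (16). A stronger penalty makes the criterion more reluctant to accept a small residual reduction, which is exactly the underfit/overfit trade-off discussed in Section IV.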

The most common estimators for vector AR models are the Yule-Walker, least squares, and Nuttall-Strand estimators [14]. The Nuttall-Strand or multivariate Burg algorithm estimates the partial correlations directly from the data. Unlike the least squares estimate, the resulting model is guaranteed to be stationary, since the estimated partial correlations satisfy the inequality (3). The estimate does not contain the triangular bias that is present in the Yule-Walker estimate [19]. Therefore, the Nuttall-Strand estimator is preferred. III. OVERFIT IN ORDER SELECTION In this section, it will be shown that overfit can be explained from the statistical properties (bias and variance) of the order-selection criterion. The optimal model order is the order for which the model error ME is lowest. Two causes of overfit can be distinguished: finite sample effects and asymptotic effects. The model orders where the number of estimated parameters is smaller than 0.1 times the total number of observations are considered as the asymptotic regime. This results in the following restriction of the model order : Asymptotic regime: (21) The solution to the problem of finite sample overfit has been discussed before [7], [11]. Finite sample overfit is briefly discussed here to provide a complete survey of the overfit problem and to show the different mechanisms governing finite sample overfit and asymptotic overfit. A. Finite Sample Overfit The AIC is an asymptotically unbiased estimator of the KLD. This asymptotic approximation is only valid as long as the number of estimated parameters is small compared with the number of observations. For high model orders, the expectation of AIC is too low. As a result, a very high model order will be selected, with a very poor accuracy [5], [7]. Finite sample overfit can be removed by using an estimator for the KLD that has a lower bias for higher model orders.
Exact calculations of finite sample effects are not viable. However, sufficiently accurate results can be obtained from simulations. Finite sample estimators of the KLD are the corrected AIC (AICc) and the FSIC [20]. The generalization of FSIC to vector signals is given by FSIC RES (22) The are the finite sample variance coefficients, which have been determined from simulation experiments. They contain the statistical finite sample behavior of a particular estimator. For the Nuttall-Strand estimator, the are given by [21] (23) The denominator is the number of degrees of freedom per element available for estimation of. The advantage of using FSIC is that the observed difference in the behavior of different estimators is reflected in the criterion. B. Asymptotic Overfit 1) Introduction: Asymptotic overfit is the effect that the selected model order is greater than the optimal model order even if the maximum number of estimated parameters is small with respect to the total number of observations. With AIC, the probability of selecting a model order greater than the true order is considerable, even if the maximum model order is within the asymptotic regime [3]. As a result, AIC does not provide a consistent estimate of the true model order [4]. In the asymptotic regime, AIC provides an unbiased estimate for the KLD. Here, overfit is a result of statistical variations in the order-selection criterion. In the next subsection, the cost of overfit as a result of these statistical variations will be calculated using asymptotic approximations. However, we will first show that this cause of overfit is indeed mainly restricted to the asymptotic regime. Only after this has been established does an asymptotic calculation become meaningful.
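The finite-sample correction can be sketched in code. The product form below follows the scalar FSIC of [11] and [20]: log residual fit plus a product over the finite sample variance coefficients, minus one. The coefficients v[i] are estimator-specific and, for the Nuttall-Strand estimator, come from the simulation-based expression (23), which did not survive transcription; the Burg-like values v_i = 1/(N + 1 - i) used here are therefore only an illustrative assumption:

```python
import numpy as np

def fsic(res_values, v):
    """Finite sample information criterion, sketched after the scalar
    form of [11], [20]: ln RES(p) + prod_{i<=p} (1 + v[i])/(1 - v[i]) - 1.
    v[i] are the finite sample variance coefficients of the estimator."""
    crits = []
    for p in range(len(res_values)):
        penalty = np.prod([(1 + v[i]) / (1 - v[i]) for i in range(p + 1)]) - 1.0
        crits.append(np.log(res_values[p]) + penalty)
    return crits

# Illustrative (assumed) Burg-like coefficients for N = 20 observations
N = 20
v = [1.0 / (N + 1 - i) for i in range(N)]
```

Because the product diverges as the model order approaches the number of observations, the penalty grows much faster at high orders than the asymptotic 2p/N of AIC, which is what suppresses finite sample overfit.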
A typical simulation example is a two-dimensional (2-D) vector moving average (MA) process of order 1, given by (24) where is white noise with covariance matrix. As is often the case for practical processes, this example process cannot be described exactly by a finite-order AR model. The number of observations is 200 per element. Overfit as a result of statistical fluctuations occurs when the increment of the order-selection criterion is comparable with its standard deviation. The increment ΔFSIC is defined as (25) The expectation of the increment of FSIC and its standard deviation as a function of the model order for the simulation example are given in Fig. 1. As can be seen in the figure, the magnitudes of ΔFSIC and its standard deviation are comparable only for low model orders. Therefore, overfit as a result of statistical fluctuations is indeed restricted to the asymptotic regime (here ). In addition, the example shows that statistical fluctuations are not very influential for the model orders where the estimated parameters are very significant ( ). The performance of an order-selection criterion should be measured by determining the error of the selected model. An important factor determining the cost of overfit is selection bias [22]. Consider selection between an estimated AR( ) model and an estimated AR( ) model from an AR( ) process. Due to the penalty for the additional parameters in AIC, the higher order is only selected if the residual reduction is greater than the average reduction. Since the true is zero, the larger parameter value results in a large model error ME. Therefore, we find that the expectation of AIC( ) given that is the selected order is smaller than the a priori expectation of AIC( ): (26)

Fig. 1. Absolute increment of the finite sample information criterion ΔFSIC and its standard deviation as a function of the model order k.
At the same time, the model error for the selected order is greater than the a priori expectation: ME ME (27) Since the model error and the KLD are asymptotically equivalent, it can be concluded that AIC does not provide an unbiased estimate for the KLD for the selected order. As a simulation example to show this effect, a sample of observations of the MA(1) process (24) with is used. The results are given in Fig. 2.
Fig. 2. A priori expectations of AIC and the model error ME compared with the expectations of AIC and ME given that the model order k is the selected model order.
2) Cost of Overfit: The calculation of the cost of overfit given here is a generalization of Shibata's calculation for scalar signals [3]. We will determine the cost of overfit as the expectation of the model error of a selected model in white noise. In the second-order Taylor approximation around , GIC can be expressed in the parameters as GIC (28) where is the Frobenius norm, which can be expressed in terms of the parameters as (29) The model error ME is given by ME (30) Note that a low value of GIC corresponds to a large value of the ME because occurs with the opposite sign in (30). This explains the opposite behavior of AIC and ME for the selected order, as expressed by (26) and (27) and found in the simulation example (see Fig. 2). Using the asymptotic theory for parameter estimation, it can be shown that the parameters are independent, normally distributed with zero mean and variance. Therefore, the sum of squares has a Chi-square distribution with degrees of freedom, which is denoted. We will determine the cost of overfit for a more general order-selection problem. Suppose models of increasing model order are estimated. The number of estimated parameters is. Therefore, is the number of additionally estimated parameters when the model order is increased by 1. The decrease of the residual is denoted : RES RES (31) has a Chi-square distribution with degrees of freedom. The GIC can be expressed in terms of the parameters as GIC (32) while the ME is given by ME (33) Standard time series analysis fits into this framework by using. The number of additionally estimated parameters per order is equal to. The cost of overfit is the ME of the selected model: ME (34) By calculating the ME of the selected model, selection bias is automatically taken into account. To evaluate this expression, the following result, derived by Spitzer as a corollary from [23, Th. 3.1], is used. Corollary (Spitzer): Given a set of stochastic variables with , where the are independent and identically distributed (i.i.d.), the expectation of the maximum of is given by (35) Since this corollary considers the maximum of a set of stochastic variables and we take the minimum of GIC, we will look at minus GIC. This can be written as a sum of i.i.d. variables : GIC (36) with. The ME can be expressed in terms of as ME (37)

Since GIC is minimal for the selected model order, the value of for the selected model order is given by. The selected order is denoted. Therefore, the expectation of the ME is given by ME (38) The ME of the selected model is zero if. Using the corollary of Spitzer, the first contribution becomes (39) Using the fact that (40) and some straightforward rearrangement, this contribution can be written as (41) As in the derivation of Shibata [3], we will now apply Spitzer's corollary to calculate the expectation of. The result is given by (42) Combining (41) and (42) and letting tend to infinity, the expectation of the ME for the selected model is given by (43)
Fig. 3. Cost of overfit C for AIC (penalty = 2) as a function of the additional number of estimated parameters per model order r.
The cost of overfit as a function of for AIC ( ) is plotted in Fig. 3. Some examples of order-selection problems and their corresponding cost of overfit for AIC are: scalar AR ( ): ; standard 2-D AR ( ):. IV. COMBINED INFORMATION CRITERION In the previous section, it was shown that finite sample overfit can be prevented by using an unbiased estimator for the KLD. However, an unbiased estimator for the KLD results in a considerable cost of asymptotic overfit. The cost of asymptotic overfit can be reduced by increasing the penalty factor. An example of such an order-selection criterion is the MDL [see (16)], which has a penalty factor of ln N. This results in accurate order-selection for AR( ) processes with very significant parameters. Otherwise, increasing the penalty factor will introduce underfit. Underfit means that the selected model order is lower than the optimal model order, thus missing significant parameters. In this section, an optimal asymptotic penalty factor is calculated by making a trade-off of underfit and overfit. Finally, the CIC for vector signals is introduced.
This order-selection criterion has the optimal asymptotic penalty factor in the asymptotic regime and takes finite sample effects into account for higher model orders. A. Cost of Underfit The cost of underfit is determined by the model error as a result of missing a critical parameter. A critical parameter for the penalty factor is defined as the parameter for which the expectation of GIC is equal for the AR( ) model that includes the critical parameter and the AR( ) model that does not include this parameter. Using the asymptotic Taylor approximation (28), the expectation GIC can be written as GIC GIC (44) The expectation of the norm of the estimated partial correlation is given by (45) Using the fact that the parameter estimates are asymptotically unbiased with variance, this becomes (46) Substituting this expression in (44) and equalizing GIC and GIC yields (47) Including the true in the AR( ) model results in a decrease of the model error with respect to the AR( ) model of for including and an increase of as a result of

inaccuracy in the estimated parameters. Therefore, the total cost of underfit is given by (48) The norm of the critical parameter decreases as. This reflects that, as the number of observations is increased, smaller details become significant because parameter estimation becomes more accurate. Not including this small parameter in the selected model therefore contributes to the cost of underfit. B. Optimal Penalty Factor Given the expressions for the cost of overfit (43) and the cost of underfit (48), we will now define an optimal penalty factor. The optimal penalty factor is defined as the penalty factor for which the maximum of underfit and overfit is minimal: (49) Since the cost of overfit is monotonically decreasing as a function of and the cost of underfit is monotonically increasing, this amounts to equating and : (50) The optimal penalty factor as a function of is plotted in Fig. 4. It is quite accurately approximated by (51) Using this approximation, the order-selection criterion for partial prediction becomes GIC (52) This can be rearranged to yield GIC AIC (53) Using this order-selection criterion, an optimal trade-off of asymptotic underfit and overfit is made. However, GIC will suffer from finite sample overfit, as it does not take into account finite sample effects. The CIC has the optimal penalty factor in the asymptotic regime, whereas an unbiased estimate for the KLD is used in the finite sample regime. The CIC is given by CIC FSIC (54) This order-selection criterion will now be compared with existing order-selection criteria. The differences are illustrated with simulations. Compared to AIC, two differences are present. First, the asymptotic penalty factor is greater than the penalty for AIC. This yields a better trade-off of asymptotic underfit and overfit. Moreover, CIC is not subject to finite sample overfit, whereas AIC is. The CIC deviates quite considerably from the MDL.
The ever-increasing penalty factor in MDL can lead to a great cost of underfit, which is given by (55) as can be found by substituting the penalty factor of MDL in (48). This is a result of the fact that small parameters that could improve the model accuracy are not included in the selected model. This explains why MDL is less accurate for complex [AR( )] processes.
Fig. 4. Optimal penalty as a function of the additional number r of estimated parameters per model order.
For the Nuttall-Strand estimator, the performance of CIC differs from AICc only in the asymptotic regime. Neither criterion is subject to finite sample overfit. The difference in the asymptotic penalty factor is largest for small. The cost of asymptotic overfit is small for large. V. SIMULATIONS Simulations illustrate the differences between the various order-selection criteria. We will discuss three different cases: the MA(1) process given by (24); observations; ; an AR(3) process with and , with (56) and ; ; an AR(3) process with and ; ;. The ME of the selected model and the asymptotic penalty factor for a number of order-selection criteria are given in Table I. The maximum model order considered for selection equals 50. The first simulation example shows that the effect of finite sample overfit in AIC leads to a poor quality of the selected model. This type of overfit does not occur in the other two examples; here, the maximum model order is relatively small with respect to the number of observations. The second example shows that the cost of underfit in MDL can be large. For , the estimated AR(3) model typically is the most accurate model. Still, MDL frequently selects the AR(0) model. In a theoretical analysis of order selection, it is often assumed that underfit does not occur. This assumption is only realistic if a large number of observations of an AR process is available, as in the third simulation example. The CIC and AICc yield accurate models in all three examples.
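The selection mechanism analyzed in Section III-B lends itself to a direct Monte Carlo check: in white noise the residual decrease per added model order is Chi-square with r degrees of freedom, GIC picks the minimizing order, and the ME of the selected model measures the cost of overfit. A sketch under exactly those assumptions (the simulation code itself is not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def overfit_cost(penalty, r, k_max=50, n_runs=20000):
    """Monte Carlo estimate of the asymptotic cost of overfit: the
    expected ME of the GIC-selected model order in white noise,
    using the Chi-square framework of Section III-B."""
    # Residual decrease per added order: delta_k ~ Chi-square(r), cf. (31)
    delta = rng.chisquare(r, size=(n_runs, k_max))
    # GIC(k) - GIC(0) accumulates (penalty * r - delta_i) per order, cf. (32)
    gic = np.concatenate(
        [np.zeros((n_runs, 1)), np.cumsum(penalty * r - delta, axis=1)], axis=1
    )
    k_sel = gic.argmin(axis=1)  # selected order per run, as in (19)
    # ME of the selected model is the sum of the included delta's, cf. (33)
    me = np.array([delta[j, : k_sel[j]].sum() for j in range(n_runs)])
    return me.mean()
```

For r = 4 and penalty 2, this lands near the value of 2.23 computed from (43) for the 2-D AR(3) example, and near 1 for the CIC penalty of 2.5; the Monte Carlo values fluctuate, so treat them as a consistency check rather than a replacement for (43).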
The model error is somewhat smaller for CIC. The third example, where underfit is absent, shows that the theoretical analysis of Section III accurately describes the asymptotic cost of overfit. The asymptotic cost of overfit calculated using

TABLE I SELECTION QUALITY OF FOUR ORDER-SELECTION CRITERIA FOR VECTOR AR MODELS. THE ACCURACY OF THE SELECTED MODEL IS EXPRESSED USING THE ME (AVERAGE OF SIMULATIONS). ALSO GIVEN IS THE ASYMPTOTIC PENALTY FACTOR. AIC AND AICC ARE EQUIVALENT IN THE ASYMPTOTIC REGIME; THE DIFFERENCE BETWEEN THESE CRITERIA OCCURS IN THE FINITE SAMPLE REGIME
(43) for the 2-D AR(3) process with is 2.23 for , as in AIC and AICc. Combining this cost of overfit with the cost of parameter estimation of 6 [ ; ; in (12)] yields a total error of 8.23. For the penalty factor of 2.5 as used in CIC, a lower cost of overfit of 1.05 is predicted, yielding a total error of 7.05. Both predictions are in agreement with the simulation results. VI. CONCLUDING REMARKS The analysis of underfit and overfit in vector time series analysis has resulted in the CIC for vector signals. It has been shown that the optimal penalty factor depends on the additional number of parameters that is estimated if the model order is increased by 1. The optimal penalty factor decreases from 3 for to the penalty factor as in AIC for large. The analysis of the underfit and overfit, leading to the order-selection criterion CIC, can easily be extended to other order-selection problems such as linear regression. The finite sample behavior of an estimator can be determined from a simple simulation experiment. The assumption of the distribution for the residuals in the calculation of the asymptotic cost of overfit is valid for a wide range of problems [24]. REFERENCES [1] H. Akaike, Fitting autoregressive models for prediction, Ann. Inst. Stat. Math., vol. 21, pp , [2] J. Li, G. Liu, and P. Stoica, Moving target feature extraction for airborne high-range resolution phased-array radar, IEEE Trans. Signal Processing, vol. 49, pp , Feb [3] R. Shibata, Approximate efficiency of a selection procedure for the number of regression variables, Biometrika, vol. 71, no.
1, pp , [4] M. B. Priestley, Spectral Analysis and Time Series. London: Academic, [5] Y. Sakamoto, M. Ishiguro, and G. Kitagawa, Akaike Information Criterion Statistics. Tokyo, Japan: KTK, [6] J. Roman, M. Rangaswamy, D. Davis, Q. Zhang, B. Himed, and J. Michels, Parametric adaptive matched filter for airborne radar applications, IEEE Trans. Aerosp. Electron. Syst., vol. 36, pp , Apr [7] C. Hurvich and C. Tsai, A corrected Akaike information criterion for vector autoregressive model selection, J. Time Series Anal., vol. 14, no. 3, pp , [8] T. Subba Rao, Developments in Time Series Analysis. London, U.K.: Chapman and Hall, 1993, ch. 5, pp [9] J. Rissanen, Modeling by the shortest data description, Automatica, vol. 14, pp , [10] D. Anderson, K. Burnham, and G. White, Comparison of AIC and CAIC for model selection and statistical inference from capture-recapture studies, J. Applied Statist., vol. 25, no. 2, pp , [11] P. M. T. Broersen, Finite sample criteria for autoregressive order-selection, IEEE Trans. Signal Processing, vol. 48, pp , Dec [12] H. Akaike and G. Kitagawa, Eds., The Practice of Time Series Analysis. New York: Springer, 1999, Statistics for Engineering and Physical Science. [13] W. Greub, Linear algebra, in Graduate Texts in Mathematics, 4th ed. New York: Springer, [14] S. L. Marple, Digital Spectral Analysis With Applications. Englewood Cliffs, NJ: Prentice-Hall, [15] P. M. T. Broersen, The quality of models for ARMA processes, IEEE Trans. Signal Processing, vol. 46, pp , June [16] S. de Waele and P. M. T. Broersen, Finite sample effects in vector autoregressive modeling, IEEE Trans. Instrum. Meas., vol. 51, Oct. 2002, to be published. [17] H. Akaike, Autoregressive model fitting for control, Ann. Inst. Stat. Math., vol. 23, pp , [18] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: Prentice Hall, [19] J. Erkelens and P. Broersen, Bias propagation in the autocorrelation method of linear prediction, IEEE Trans.
Speech Audio Processing, vol. 5, pp , Mar [20] P. Broersen and H. Wensink, Autoregressive model order-selection by a finite sample estimator for the Kullback Leibler discrepancy, IEEE Trans. Instrum. Meas., vol. 46, pp , July [21] S. de Waele and P. M. T. Broersen, Finite sample effects in multichannel autoregressive modeling, in Proc. IMTC Conf., Budapest, Hungary, 2001, pp [22] A. Miller, Subset Selection in Regression. London, U.K.: Chapman and Hall, [23] F. Spitzer, A combinatorial lemma and its application to probability theory, Trans. Amer. Math. Soc., vol. 82, pp , [24] S. Kullback, Information Theory and Statistics. London, U.K.: Wiley, Stijn de Waele was born in Eindhoven, The Netherlands, in He received the M.Sc. degree in applied physics in 1998 from Delft University of Technology, Delft, the Netherlands, where he is currently pursuing the Ph.D. degree with the Department of Applied Physics. His research interests are the development of new time series analysis algorithms and its application to radar signal processing. Piet M. T. Broersen was born in Zijdewind, the Netherlands, in He received the M.Sc. degree in applied physics in 1968 and the Ph.D. degree in 1976, both from the Delft University of Technology, Delft, the Netherlands. He is currently with the Department of Applied Physics, Delft University of Technology. His main research interest is automatic identification. He found a solution for the selection of order and type of time series models and the application to spectral analysis, model building, and feature extraction. His next subject is the automatic identification of input output relations with statistical criteria.


More information

Optimum Sampling Vectors for Wiener Filter Noise Reduction

Optimum Sampling Vectors for Wiener Filter Noise Reduction 58 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 50, NO. 1, JANUARY 2002 Optimum Sampling Vectors for Wiener Filter Noise Reduction Yukihiko Yamashita, Member, IEEE Absact Sampling is a very important and

More information

On Moving Average Parameter Estimation

On Moving Average Parameter Estimation On Moving Average Parameter Estimation Niclas Sandgren and Petre Stoica Contact information: niclas.sandgren@it.uu.se, tel: +46 8 473392 Abstract Estimation of the autoregressive moving average (ARMA)

More information

LTI Systems, Additive Noise, and Order Estimation

LTI Systems, Additive Noise, and Order Estimation LTI Systems, Additive oise, and Order Estimation Soosan Beheshti, Munther A. Dahleh Laboratory for Information and Decision Systems Department of Electrical Engineering and Computer Science Massachusetts

More information

On the Behavior of Information Theoretic Criteria for Model Order Selection

On the Behavior of Information Theoretic Criteria for Model Order Selection IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 8, AUGUST 2001 1689 On the Behavior of Information Theoretic Criteria for Model Order Selection Athanasios P. Liavas, Member, IEEE, and Phillip A. Regalia,

More information

1. Introduction Over the last three decades a number of model selection criteria have been proposed, including AIC (Akaike, 1973), AICC (Hurvich & Tsa

1. Introduction Over the last three decades a number of model selection criteria have been proposed, including AIC (Akaike, 1973), AICC (Hurvich & Tsa On the Use of Marginal Likelihood in Model Selection Peide Shi Department of Probability and Statistics Peking University, Beijing 100871 P. R. China Chih-Ling Tsai Graduate School of Management University

More information

Selecting an optimal set of parameters using an Akaike like criterion

Selecting an optimal set of parameters using an Akaike like criterion Selecting an optimal set of parameters using an Akaike like criterion R. Moddemeijer a a University of Groningen, Department of Computing Science, P.O. Box 800, L-9700 AV Groningen, The etherlands, e-mail:

More information

Bias Correction of Cross-Validation Criterion Based on Kullback-Leibler Information under a General Condition

Bias Correction of Cross-Validation Criterion Based on Kullback-Leibler Information under a General Condition Bias Correction of Cross-Validation Criterion Based on Kullback-Leibler Information under a General Condition Hirokazu Yanagihara 1, Tetsuji Tonda 2 and Chieko Matsumoto 3 1 Department of Social Systems

More information

Selection Criteria Based on Monte Carlo Simulation and Cross Validation in Mixed Models

Selection Criteria Based on Monte Carlo Simulation and Cross Validation in Mixed Models Selection Criteria Based on Monte Carlo Simulation and Cross Validation in Mixed Models Junfeng Shang Bowling Green State University, USA Abstract In the mixed modeling framework, Monte Carlo simulation

More information

KULLBACK-LEIBLER INFORMATION THEORY A BASIS FOR MODEL SELECTION AND INFERENCE

KULLBACK-LEIBLER INFORMATION THEORY A BASIS FOR MODEL SELECTION AND INFERENCE KULLBACK-LEIBLER INFORMATION THEORY A BASIS FOR MODEL SELECTION AND INFERENCE Kullback-Leibler Information or Distance f( x) gx ( ±) ) I( f, g) = ' f( x)log dx, If, ( g) is the "information" lost when

More information

ISyE 691 Data mining and analytics

ISyE 691 Data mining and analytics ISyE 691 Data mining and analytics Regression Instructor: Prof. Kaibo Liu Department of Industrial and Systems Engineering UW-Madison Email: kliu8@wisc.edu Office: Room 3017 (Mechanical Engineering Building)

More information

Asymptotic Analysis of the Generalized Coherence Estimate

Asymptotic Analysis of the Generalized Coherence Estimate IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 1, JANUARY 2001 45 Asymptotic Analysis of the Generalized Coherence Estimate Axel Clausen, Member, IEEE, and Douglas Cochran, Senior Member, IEEE Abstract

More information

Expressions for the covariance matrix of covariance data

Expressions for the covariance matrix of covariance data Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden

More information

Adaptive MV ARMA identification under the presence of noise

Adaptive MV ARMA identification under the presence of noise Chapter 5 Adaptive MV ARMA identification under the presence of noise Stylianos Sp. Pappas, Vassilios C. Moussas, Sokratis K. Katsikas 1 1 2 3 University of the Aegean, Department of Information and Communication

More information

Evolutionary ARMA Model Identification With Unknown Process Order *

Evolutionary ARMA Model Identification With Unknown Process Order * Evolutionary ARMA Model Identification With Unknown Process Order * G N BELIGIANNIS 1,2, E N DEMIRIS 1 and S D LIKOTHANASSIS 1,2,3 1 Department of Computer Engineering and Informatics University of Patras

More information

Auxiliary signal design for failure detection in uncertain systems

Auxiliary signal design for failure detection in uncertain systems Auxiliary signal design for failure detection in uncertain systems R. Nikoukhah, S. L. Campbell and F. Delebecque Abstract An auxiliary signal is an input signal that enhances the identifiability of a

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

Testing composite hypotheses applied to AR-model order estimation; the Akaike-criterion revised

Testing composite hypotheses applied to AR-model order estimation; the Akaike-criterion revised MODDEMEIJER: TESTING COMPOSITE HYPOTHESES; THE AKAIKE-CRITERION REVISED 1 Testing composite hypotheses applied to AR-model order estimation; the Akaike-criterion revised Rudy Moddemeijer Abstract Akaike

More information

Detection of Signals by Information Theoretic Criteria: General Asymptotic Performance Analysis

Detection of Signals by Information Theoretic Criteria: General Asymptotic Performance Analysis IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 50, NO. 5, MAY 2002 1027 Detection of Signals by Information Theoretic Criteria: General Asymptotic Performance Analysis Eran Fishler, Member, IEEE, Michael

More information

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection SG 21006 Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 28

More information

Publications (in chronological order)

Publications (in chronological order) Publications (in chronological order) 1. A note on the investigation of the optimal weight function in estimation of the spectral density (1963), J. Univ. Gau. 14, pages 141 149. 2. On the cross periodogram

More information

On the convergence of the iterative solution of the likelihood equations

On the convergence of the iterative solution of the likelihood equations On the convergence of the iterative solution of the likelihood equations R. Moddemeijer University of Groningen, Department of Computing Science, P.O. Box 800, NL-9700 AV Groningen, The Netherlands, e-mail:

More information

CONTENTS NOTATIONAL CONVENTIONS GLOSSARY OF KEY SYMBOLS 1 INTRODUCTION 1

CONTENTS NOTATIONAL CONVENTIONS GLOSSARY OF KEY SYMBOLS 1 INTRODUCTION 1 DIGITAL SPECTRAL ANALYSIS WITH APPLICATIONS S.LAWRENCE MARPLE, JR. SUMMARY This new book provides a broad perspective of spectral estimation techniques and their implementation. It concerned with spectral

More information

DETECTION of the number of sources measured by an

DETECTION of the number of sources measured by an 2746 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 58, NO 5, MAY 2010 Nonparametric Detection of Signals by Information Theoretic Criteria: Performance Analysis an Improved Estimator Boaz Nadler Abstract

More information

An Invariance Property of the Generalized Likelihood Ratio Test

An Invariance Property of the Generalized Likelihood Ratio Test 352 IEEE SIGNAL PROCESSING LETTERS, VOL. 10, NO. 12, DECEMBER 2003 An Invariance Property of the Generalized Likelihood Ratio Test Steven M. Kay, Fellow, IEEE, and Joseph R. Gabriel, Member, IEEE Abstract

More information

Comparison of New Approach Criteria for Estimating the Order of Autoregressive Process

Comparison of New Approach Criteria for Estimating the Order of Autoregressive Process IOSR Journal of Mathematics (IOSRJM) ISSN: 78-578 Volume 1, Issue 3 (July-Aug 1), PP 1- Comparison of New Approach Criteria for Estimating the Order of Autoregressive Process Salie Ayalew 1 M.Chitti Babu,

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA

TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA CHAPTER 6 TIME SERIES ANALYSIS AND FORECASTING USING THE STATISTICAL MODEL ARIMA 6.1. Introduction A time series is a sequence of observations ordered in time. A basic assumption in the time series analysis

More information

Performance Analysis of an Adaptive Algorithm for DOA Estimation

Performance Analysis of an Adaptive Algorithm for DOA Estimation Performance Analysis of an Adaptive Algorithm for DOA Estimation Assimakis K. Leros and Vassilios C. Moussas Abstract This paper presents an adaptive approach to the problem of estimating the direction

More information

Minimum Message Length Autoregressive Model Order Selection

Minimum Message Length Autoregressive Model Order Selection Minimum Message Length Autoregressive Model Order Selection Leigh J. Fitzgibbon School of Computer Science and Software Engineering, Monash University Clayton, Victoria 38, Australia leighf@csse.monash.edu.au

More information

Regression and Time Series Model Selection in Small Samples. Clifford M. Hurvich; Chih-Ling Tsai

Regression and Time Series Model Selection in Small Samples. Clifford M. Hurvich; Chih-Ling Tsai Regression and Time Series Model Selection in Small Samples Clifford M. Hurvich; Chih-Ling Tsai Biometrika, Vol. 76, No. 2. (Jun., 1989), pp. 297-307. Stable URL: http://links.jstor.org/sici?sici=0006-3444%28198906%2976%3a2%3c297%3aratsms%3e2.0.co%3b2-4

More information

5 Autoregressive-Moving-Average Modeling

5 Autoregressive-Moving-Average Modeling 5 Autoregressive-Moving-Average Modeling 5. Purpose. Autoregressive-moving-average (ARMA models are mathematical models of the persistence, or autocorrelation, in a time series. ARMA models are widely

More information

Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko

Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko IEEE SIGNAL PROCESSING LETTERS, VOL. 17, NO. 12, DECEMBER 2010 1005 Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko Abstract A new theorem shows that

More information

2.2 Classical Regression in the Time Series Context

2.2 Classical Regression in the Time Series Context 48 2 Time Series Regression and Exploratory Data Analysis context, and therefore we include some material on transformations and other techniques useful in exploratory data analysis. 2.2 Classical Regression

More information

Akaike criterion: Kullback-Leibler discrepancy

Akaike criterion: Kullback-Leibler discrepancy Model choice. Akaike s criterion Akaike criterion: Kullback-Leibler discrepancy Given a family of probability densities {f ( ; ), 2 }, Kullback-Leibler s index of f ( ; ) relativetof ( ; ) is Z ( ) =E

More information

Time Series: Theory and Methods

Time Series: Theory and Methods Peter J. Brockwell Richard A. Davis Time Series: Theory and Methods Second Edition With 124 Illustrations Springer Contents Preface to the Second Edition Preface to the First Edition vn ix CHAPTER 1 Stationary

More information

CONSIDER the p -th order autoregressive, AR(p ), explanation

CONSIDER the p -th order autoregressive, AR(p ), explanation Estimating the Order of an Autoregressive Model using Normalized Maximum Likelihood Daniel F. Schmidt, Member, and Enes Makalic Abstract This paper examines the estimation of the order of an autoregressive

More information

On the convergence of the iterative solution of the likelihood equations

On the convergence of the iterative solution of the likelihood equations On the convergence of the iterative solution of the likelihood equations R. Moddemeijer University of Groningen, Department of Computing Science, P.O. Box 800, NL-9700 AV Groningen, The Netherlands, e-mail:

More information

PARAMETER ESTIMATION AND ORDER SELECTION FOR LINEAR REGRESSION PROBLEMS. Yngve Selén and Erik G. Larsson

PARAMETER ESTIMATION AND ORDER SELECTION FOR LINEAR REGRESSION PROBLEMS. Yngve Selén and Erik G. Larsson PARAMETER ESTIMATION AND ORDER SELECTION FOR LINEAR REGRESSION PROBLEMS Yngve Selén and Eri G Larsson Dept of Information Technology Uppsala University, PO Box 337 SE-71 Uppsala, Sweden email: yngveselen@ituuse

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

MANY digital speech communication applications, e.g.,

MANY digital speech communication applications, e.g., 406 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 15, NO. 2, FEBRUARY 2007 An MMSE Estimator for Speech Enhancement Under a Combined Stochastic Deterministic Speech Model Richard C.

More information

Model Selection for Semiparametric Bayesian Models with Application to Overdispersion

Model Selection for Semiparametric Bayesian Models with Application to Overdispersion Proceedings 59th ISI World Statistics Congress, 25-30 August 2013, Hong Kong (Session CPS020) p.3863 Model Selection for Semiparametric Bayesian Models with Application to Overdispersion Jinfang Wang and

More information

Lecture 7: Model Building Bus 41910, Time Series Analysis, Mr. R. Tsay

Lecture 7: Model Building Bus 41910, Time Series Analysis, Mr. R. Tsay Lecture 7: Model Building Bus 41910, Time Series Analysis, Mr R Tsay An effective procedure for building empirical time series models is the Box-Jenkins approach, which consists of three stages: model

More information

COMPLEX SIGNALS are used in various areas of signal

COMPLEX SIGNALS are used in various areas of signal IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 2, FEBRUARY 1997 411 Second-Order Statistics of Complex Signals Bernard Picinbono, Fellow, IEEE, and Pascal Bondon, Member, IEEE Abstract The second-order

More information

Akaike criterion: Kullback-Leibler discrepancy

Akaike criterion: Kullback-Leibler discrepancy Model choice. Akaike s criterion Akaike criterion: Kullback-Leibler discrepancy Given a family of probability densities {f ( ; ψ), ψ Ψ}, Kullback-Leibler s index of f ( ; ψ) relative to f ( ; θ) is (ψ

More information

FAST AND EFFECTIVE MODEL ORDER SELECTION METHOD TO DETERMINE THE NUMBER OF SOURCES IN A LINEAR TRANSFORMATION MODEL

FAST AND EFFECTIVE MODEL ORDER SELECTION METHOD TO DETERMINE THE NUMBER OF SOURCES IN A LINEAR TRANSFORMATION MODEL FAST AND EFFECTIVE MODEL ORDER SELECTION METHOD TO DETERMINE THE NUMBER OF SOURCES IN A LINEAR TRANSFORMATION MODEL Fengyu Cong 1, Asoke K Nandi 1,2, Zhaoshui He 3, Andrzej Cichocki 4, Tapani Ristaniemi

More information

A NEW INFORMATION THEORETIC APPROACH TO ORDER ESTIMATION PROBLEM. Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.

A NEW INFORMATION THEORETIC APPROACH TO ORDER ESTIMATION PROBLEM. Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. A EW IFORMATIO THEORETIC APPROACH TO ORDER ESTIMATIO PROBLEM Soosan Beheshti Munther A. Dahleh Massachusetts Institute of Technology, Cambridge, MA 0239, U.S.A. Abstract: We introduce a new method of model

More information

Model selection criteria Λ

Model selection criteria Λ Model selection criteria Λ Jean-Marie Dufour y Université de Montréal First version: March 1991 Revised: July 1998 This version: April 7, 2002 Compiled: April 7, 2002, 4:10pm Λ This work was supported

More information

Design of Time Series Model for Road Accident Fatal Death in Tamilnadu

Design of Time Series Model for Road Accident Fatal Death in Tamilnadu Volume 109 No. 8 2016, 225-232 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Design of Time Series Model for Road Accident Fatal Death in Tamilnadu

More information

Title without the persistently exciting c. works must be obtained from the IEE

Title without the persistently exciting c.   works must be obtained from the IEE Title Exact convergence analysis of adapt without the persistently exciting c Author(s) Sakai, H; Yang, JM; Oka, T Citation IEEE TRANSACTIONS ON SIGNAL 55(5): 2077-2083 PROCESS Issue Date 2007-05 URL http://hdl.handle.net/2433/50544

More information

THIS paper deals with robust control in the setup associated

THIS paper deals with robust control in the setup associated IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 50, NO 10, OCTOBER 2005 1501 Control-Oriented Model Validation and Errors Quantification in the `1 Setup V F Sokolov Abstract A priori information required for

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations

Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations Farhat Iqbal Department of Statistics, University of Balochistan Quetta-Pakistan farhatiqb@gmail.com Abstract In this paper

More information

Thomas J. Fisher. Research Statement. Preliminary Results

Thomas J. Fisher. Research Statement. Preliminary Results Thomas J. Fisher Research Statement Preliminary Results Many applications of modern statistics involve a large number of measurements and can be considered in a linear algebra framework. In many of these

More information

314 IEEE TRANSACTIONS ON RELIABILITY, VOL. 55, NO. 2, JUNE 2006

314 IEEE TRANSACTIONS ON RELIABILITY, VOL. 55, NO. 2, JUNE 2006 314 IEEE TRANSACTIONS ON RELIABILITY, VOL 55, NO 2, JUNE 2006 The Mean Residual Life Function of a k-out-of-n Structure at the System Level Majid Asadi and Ismihan Bayramoglu Abstract In the study of the

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

The Behaviour of the Akaike Information Criterion when Applied to Non-nested Sequences of Models

The Behaviour of the Akaike Information Criterion when Applied to Non-nested Sequences of Models The Behaviour of the Akaike Information Criterion when Applied to Non-nested Sequences of Models Centre for Molecular, Environmental, Genetic & Analytic (MEGA) Epidemiology School of Population Health

More information

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of Index* The Statistical Analysis of Time Series by T. W. Anderson Copyright 1971 John Wiley & Sons, Inc. Aliasing, 387-388 Autoregressive {continued) Amplitude, 4, 94 case of first-order, 174 Associated

More information

GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS. Mitsuru Kawamoto 1,2 and Yujiro Inouye 1

GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS. Mitsuru Kawamoto 1,2 and Yujiro Inouye 1 GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS Mitsuru Kawamoto,2 and Yuiro Inouye. Dept. of Electronic and Control Systems Engineering, Shimane University,

More information

Bias-corrected AIC for selecting variables in Poisson regression models

Bias-corrected AIC for selecting variables in Poisson regression models Bias-corrected AIC for selecting variables in Poisson regression models Ken-ichi Kamo (a), Hirokazu Yanagihara (b) and Kenichi Satoh (c) (a) Corresponding author: Department of Liberal Arts and Sciences,

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department

More information

Mohsen Pourahmadi. 1. A sampling theorem for multivariate stationary processes. J. of Multivariate Analysis, Vol. 13, No. 1 (1983),

Mohsen Pourahmadi. 1. A sampling theorem for multivariate stationary processes. J. of Multivariate Analysis, Vol. 13, No. 1 (1983), Mohsen Pourahmadi PUBLICATIONS Books and Editorial Activities: 1. Foundations of Time Series Analysis and Prediction Theory, John Wiley, 2001. 2. Computing Science and Statistics, 31, 2000, the Proceedings

More information

The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems

The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems 668 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 49, NO 10, OCTOBER 2002 The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems Lei Zhang, Quan

More information

J. Liang School of Automation & Information Engineering Xi an University of Technology, China

J. Liang School of Automation & Information Engineering Xi an University of Technology, China Progress In Electromagnetics Research C, Vol. 18, 245 255, 211 A NOVEL DIAGONAL LOADING METHOD FOR ROBUST ADAPTIVE BEAMFORMING W. Wang and R. Wu Tianjin Key Lab for Advanced Signal Processing Civil Aviation

More information

510 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 2, FEBRUARY X/$ IEEE

510 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 2, FEBRUARY X/$ IEEE 510 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 2, FEBRUARY 2010 Noisy Data and Impulse Response Estimation Soosan Beheshti, Senior Member, IEEE, and Munther A. Dahleh, Fellow, IEEE Abstract This

More information

9. Model Selection. statistical models. overview of model selection. information criteria. goodness-of-fit measures

9. Model Selection. statistical models. overview of model selection. information criteria. goodness-of-fit measures FE661 - Statistical Methods for Financial Engineering 9. Model Selection Jitkomut Songsiri statistical models overview of model selection information criteria goodness-of-fit measures 9-1 Statistical models

More information

Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability

Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability Nuntanut Raksasri Jitkomut Songsiri Department of Electrical Engineering, Faculty of Engineering,

More information

A Modified Baum Welch Algorithm for Hidden Markov Models with Multiple Observation Spaces

A Modified Baum Welch Algorithm for Hidden Markov Models with Multiple Observation Spaces IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 9, NO. 4, MAY 2001 411 A Modified Baum Welch Algorithm for Hidden Markov Models with Multiple Observation Spaces Paul M. Baggenstoss, Member, IEEE

More information

Model Selection Tutorial 2: Problems With Using AIC to Select a Subset of Exposures in a Regression Model

Model Selection Tutorial 2: Problems With Using AIC to Select a Subset of Exposures in a Regression Model Model Selection Tutorial 2: Problems With Using AIC to Select a Subset of Exposures in a Regression Model Centre for Molecular, Environmental, Genetic & Analytic (MEGA) Epidemiology School of Population

More information

DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof

DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg

More information

Discrepancy-Based Model Selection Criteria Using Cross Validation

Discrepancy-Based Model Selection Criteria Using Cross Validation 33 Discrepancy-Based Model Selection Criteria Using Cross Validation Joseph E. Cavanaugh, Simon L. Davies, and Andrew A. Neath Department of Biostatistics, The University of Iowa Pfizer Global Research

More information

Spectral Analysis of Irregularly Sampled Data with Time Series Models

Spectral Analysis of Irregularly Sampled Data with Time Series Models The Open Signal Processing Journal, 2008, 1, 7-14 7 Open Access Spectral Analysis of Irregularly Sampled Data with Time Series Models Piet M.T. Broersen* Department of Multi Scale Physics, Delft University

More information

Autoregressive (AR) spectral estimates for Frequency- Wavenumber (F-k) analysis of strong-motion data

Autoregressive (AR) spectral estimates for Frequency- Wavenumber (F-k) analysis of strong-motion data Autoregressive (AR) spectral estimates for Frequency- Wavenumber (F-k) analysis of strong-motion data R. Rupakhety & R. Sigbörnsson Earthquake Engineering Research Center (EERC), University of Iceland

More information

An Akaike Criterion based on Kullback Symmetric Divergence in the Presence of Incomplete-Data

An Akaike Criterion based on Kullback Symmetric Divergence in the Presence of Incomplete-Data An Akaike Criterion based on Kullback Symmetric Divergence Bezza Hafidi a and Abdallah Mkhadri a a University Cadi-Ayyad, Faculty of sciences Semlalia, Department of Mathematics, PB.2390 Marrakech, Moroco

More information
