System Identification


1 System Identification

Arun K. Tangirala
Department of Chemical Engineering, IIT Madras

July 26, 2013

Module 6, Lecture 1

2 Objectives of this Module

In this module, the objectives are to learn concepts pertaining to:

Estimation of time-series models
Methods for estimating response-based (non-parametric) descriptions
Prediction-error methods for estimating parametric models

3 Lectures in this module

This module contains four (4) lectures:

Lecture 1: Estimation of time-series models
Lecture 2: Estimation of impulse / step response models
Lecture 3: Estimation of frequency response functions
Lecture 4: Estimation of parametric input-output models

4 Contents of Lecture 1

In this lecture, we shall:

Learn the different techniques for estimating AR models
Briefly discuss methods for estimating MA models
Learn how to estimate ARMA and ARIMA models

5 Background

In Module 3 we studied different time-series models for linear stationary random processes, of which the ARIMA model is a general description. In this module, we shall learn how to estimate these models using the methods presented in Module 4. The estimated model is useful in identification for (i) developing the noise (disturbance) model and (ii) estimating power spectral densities.

Auto-regressive models result in linear predictors; therefore a linear OLS method suffices. The linear nature of the AR predictor also attracts a few other specialized methods. This topic has such a long history and wide applicability that numerous texts and survey/tutorial articles (references) have been dedicated to it. We shall only discuss four popular estimators, namely:

i. Yule-Walker method
ii. LS / covariance method
iii. Modified covariance method
iv. Burg's estimator

6 Estimation of auto-regressive models

The AR estimation problem is stated as follows. Given N observations of a stationary process {v[k]}, k = 0, ..., N-1, fit an AR(P) model

$$v[k] = \sum_{j=1}^{P} (-d_j)\, v[k-j] + e[k] \qquad (1)$$

One of the first methods used to estimate AR models was the Yule-Walker method, based on the Yule-Walker equations discussed in Lecture 3.6. This method belongs to the class of MoM estimators presented in Lecture 4.4 and is also one of the simplest to use. However, under some conditions the Y-W method is known to suffer from certain shortcomings, as we shall learn shortly. Over time, more powerful and sophisticated alternatives have been developed.

7 Yule-Walker method

The Y-W method is an MoM approach as outlined in Lectures 3.6 and 4.4.

Idea: The second-order moments of the bivariate p.d.f. f(v[k], v[k-l]), i.e., the ACVFs of an AR(P) process, are related to the parameters of the model as

$$\underbrace{\begin{bmatrix} \sigma_{vv}[0] & \sigma_{vv}[1] & \cdots & \sigma_{vv}[P-1] \\ \sigma_{vv}[1] & \sigma_{vv}[0] & \cdots & \sigma_{vv}[P-2] \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{vv}[P-1] & \sigma_{vv}[P-2] & \cdots & \sigma_{vv}[0] \end{bmatrix}}_{\Sigma_P} \underbrace{\begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_P \end{bmatrix}}_{\theta_P} = -\underbrace{\begin{bmatrix} \sigma_{vv}[1] \\ \sigma_{vv}[2] \\ \vdots \\ \sigma_{vv}[P] \end{bmatrix}}_{\sigma_P}, \qquad \sigma_v^2 + \sigma_P^T \theta_P = \sigma_e^2$$

Thus, the Y-W estimates of the AR(P) model and the innovations variance $\sigma_e^2$ are

$$\hat{\theta} = -\hat{\Sigma}_P^{-1} \hat{\sigma}_P \qquad (2a)$$

$$\hat{\sigma}_e^2 = \hat{\sigma}_v^2 + \hat{\sigma}_P^T \hat{\theta} = \hat{\sigma}_v^2 - \hat{\sigma}_P^T \hat{\Sigma}_P^{-1} \hat{\sigma}_P \qquad (2b)$$

provided $\hat{\Sigma}_P$ is invertible, which is guaranteed so long as $\hat{\sigma}[0] > 0$.

8 Y-W method ... contd.

The matrix $\hat{\Sigma}_P$ is constructed using the biased estimator of the ACVF (recall L4.6)

$$\hat{\sigma}[l] = \frac{1}{N} \sum_{k=l}^{N-1} (v[k] - \bar{v})(v[k-l] - \bar{v}) \qquad (3)$$

The Y-W estimates can be shown to be the solution to the OLS minimization

$$\hat{\theta}_{YW} = \arg\min_{\theta} \sum_{k=0}^{N+P-1} \varepsilon^2[k] \qquad (4)$$

where $\varepsilon[k] = v[k] - \hat{v}[k|k-1] = v[k] - \sum_{i=1}^{P} (-d_i)\, v[k-i]$.

Remark: The summation in (4) starts from k = 0 and runs up to k = N+P-1. In order to compute the prediction errors for k = 0, ..., P-1 and k = N, ..., N+P-1, the method pads P zeros to both ends of the series. This approach is frequently referred to as pre- and post-windowing of data.
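A from-scratch MATLAB sketch helps fix the computations in (2a)-(2b); the series name v and the order P = 2 are illustrative assumptions, not part of the slide.

% Yule-Walker AR(P) estimation: a minimal sketch (v: data vector, P: order)
P = 2; v = v(:) - mean(v); N = length(v);
sig = zeros(P+1, 1);                  % biased ACVF estimates, Eq. (3)
for l = 0:P
    sig(l+1) = (v(l+1:N)'*v(1:N-l))/N;
end
SigmaP = toeplitz(sig(1:P));          % P x P autocovariance matrix
sigP   = sig(2:P+1);                  % [sigma[1]; ... ; sigma[P]]
theta  = -SigmaP\sigP;                % Eq. (2a): [d1; ...; dP]
sig2e  = sig(1) + sigP'*theta;        % Eq. (2b): innovations variance

The Signal Processing Toolbox call [a, s2e] = aryule(v, P) offers an equivalent built-in route, returning the polynomial [1 d1 ... dP] and the same innovations variance.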

9 Properties of Y-W estimator

The Y-W estimates, in general, enjoy good asymptotic properties:

1 For a model of order P, if the process {v[k]} is also AR(P), the parameter estimates asymptotically follow a multivariate Gaussian distribution

$$\sqrt{N}(\hat{\theta} - \theta_0) \xrightarrow{d} \mathcal{N}\left(0,\, \sigma_e^2 \Sigma_P^{-1}\right) \qquad (5)$$

In practice, the theoretical variance and covariance matrix are replaced by their respective estimates.

2 The approximate 95% C.I. for the individual parameter $\theta_{i0}$ is constructed from the diagonal entries of $\hat{\Sigma}_{\hat{\theta}}$:

$$\hat{\theta}_i \pm 1.96\, \frac{\hat{\sigma}_e}{\sqrt{N}} \left(\hat{\Sigma}_P^{-1}\right)_{ii}^{1/2} \qquad (6)$$
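Continuing the Y-W sketch above, the standard errors and the interval in (6) can be computed directly; the variable names carry over from the earlier snippet.

% Approximate 95% C.I.s from Eq. (6), reusing SigmaP, theta, sig2e, N above
se = sqrt(sig2e) * sqrt(diag(inv(SigmaP))) / sqrt(N);  % standard errors
CI = [theta - 1.96*se, theta + 1.96*se];               % one row per parameter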

10 Properties of Y-W estimators ... contd.

3 Using the first property, if {v[k]} is an AR(P_0) process and an AR model of order P > P_0 is fit to the series, then the coefficients in excess of the true order are distributed as

$$\sqrt{N}\, \hat{\theta}_l \sim AN(0, 1), \qquad l > P_0 \qquad (7)$$

To verify this fact, consider fitting an AR(P) model to a white-noise process, i.e., when P_0 = 0. Then $\Sigma_P = \sigma_e^2 I$.

4 Recall that the last coefficient of an AR(P) model is the PACF coefficient $\phi_{PP}$ of the series. In the present notation,

$$\phi_{ll} = -d_l = -\theta_l \qquad (8)$$

It follows from the above property that if the true process is AR(P_0), the 95% significance levels for PACF estimates at lags l > P_0 are

$$-\frac{1.96}{\sqrt{N}} \le \hat{\phi}_{ll} \le \frac{1.96}{\sqrt{N}} \qquad (9)$$

11 Properties of Y-W estimator ... contd.

5 From (5) it follows that the Y-W estimates of an AR model are consistent.

6 The Y-W estimator suffers from a drawback: it may produce poor (high-variability) estimates when the generating auto-regressive process has poles close to the unit circle (reference). The cause is the poor conditioning of the auto-covariance matrix $\hat{\Sigma}_P$ for such processes, combined with the bias in the ACVF estimator. The effects of the latter (bias) always prevail, but are magnified when $\hat{\Sigma}_P$ is poorly conditioned.

7 The Durbin-Levinson algorithm computes the parameter estimates in a recursive manner without having to explicitly invert $\hat{\Sigma}_P$.

8 The Toeplitz structure of $\hat{\Sigma}_P$ and the biased ACVF estimator guarantee that the resulting model is stable and minimum phase.
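As a side note, MATLAB's Signal Processing Toolbox exposes the Durbin-Levinson recursion as levinson; a one-line sketch, reusing the ACVF vector sig (lags 0 to P) from the Y-W snippet above:

% Durbin-Levinson recursion on the ACVF estimates at lags 0..P
[a, s2e, kappa] = levinson(sig, P);   % a = [1 d1 ... dP], s2e = innovations
                                      % variance, kappa = reflection coefficients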

12 Example: Y-W method

A series consisting of N = 500 observations of a random process is given. Fit an AR(2) model using the Y-W method.

Solution: The sample variance $\hat{\sigma}[0]$ and the ACF estimates $\hat{\rho}[1]$, $\hat{\rho}[2]$ are first computed from the data. Plugging these estimates into (2a) produces

$$\hat{\theta} = \begin{bmatrix} \hat{d}_1 \\ \hat{d}_2 \end{bmatrix} = -\hat{\Sigma}_2^{-1} \hat{\sigma}_2 \qquad (10)$$

The estimate of the innovations variance can be computed using (2b):

$$\hat{\sigma}_e^2 = \hat{\sigma}[0] + \hat{\sigma}_2^T \hat{\theta} \qquad (11)$$

13 Example ... contd.

The errors in the estimates can be computed from (5) by replacing the theoretical values with their estimated counterparts:

$$\Sigma_{\hat{\theta}} = \frac{\hat{\sigma}_e^2}{N}\, \hat{\Sigma}_2^{-1} \qquad (12)$$

Consequently, approximate 95% C.I.s for $d_1$ and $d_2$ can be constructed from (6). Comparing the estimates with the true values used for simulation,

$$d_{1,0} = -1.2; \qquad d_{2,0} = 0.32 \qquad (13)$$

we observe that the method has produced reasonably good estimates and that the C.I.s contain the true values.

Note: The Y-W estimator is generally used when the data length is large and it is known a priori that the generating process has poles well within the unit circle. In general, it is used to initialize other non-linear estimators.

14 Least squares / covariance method

The least squares method, as we learnt in L4.4, obtains the estimate as

$$\hat{\theta}_{LS} = \arg\min_{\theta} \sum_{k=P}^{N-1} \varepsilon^2[k] \qquad (14)$$

Comparing with the standard linear regression form, we have

$$\varphi[k] = \begin{bmatrix} -v[k-1] & \cdots & -v[k-P] \end{bmatrix}^T; \qquad \theta = \mathbf{d} = \begin{bmatrix} d_1 & \cdots & d_P \end{bmatrix}^T \qquad (15)$$

Using the LS solution, we have

$$\hat{\theta}_{LS} = \hat{\mathbf{d}}_{LS} = (\Phi^T \Phi)^{-1} \Phi^T \mathbf{v} = \left(\frac{1}{N-P} \Phi^T \Phi\right)^{-1} \left(\frac{1}{N-P} \Phi^T \mathbf{v}\right) \qquad (16)$$

where

$$\Phi = \begin{bmatrix} \varphi[P] & \varphi[P+1] & \cdots & \varphi[N-1] \end{bmatrix}^T, \qquad \mathbf{v} = \begin{bmatrix} v[P] & v[P+1] & \cdots & v[N-1] \end{bmatrix}^T$$
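A direct MATLAB sketch of (14)-(16), under the same illustrative assumptions (data vector v, order P = 2) as the earlier snippets:

% LS / covariance AR(P) estimate: build the regression and solve Eq. (16)
P = 2; v = v(:); N = length(v);
Phi = zeros(N-P, P);
for i = 1:P
    Phi(:,i) = -v(P+1-i : N-i);       % column i holds -v[k-i], k = P..N-1
end
theta_ls = Phi \ v(P+1:N);            % QR-based solution of Eq. (16)

The Signal Processing Toolbox function arcov(v, P) implements the same covariance-method estimate.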

15 LS / COV method ... contd.

A careful examination of (16) suggests that it can be written as an MoM estimate

$$\hat{\theta} = -\hat{\Sigma}_P^{-1} \hat{\sigma}_P \qquad (17)$$

by introducing

$$\hat{\Sigma}_P \triangleq \frac{1}{N-P} \Phi^T \Phi = \begin{bmatrix} \hat{\sigma}_{vv}[1,1] & \hat{\sigma}_{vv}[1,2] & \cdots & \hat{\sigma}_{vv}[1,P] \\ \vdots & \vdots & \ddots & \vdots \\ \hat{\sigma}_{vv}[P,1] & \hat{\sigma}_{vv}[P,2] & \cdots & \hat{\sigma}_{vv}[P,P] \end{bmatrix} \qquad (18)$$

$$\hat{\sigma}_P \triangleq -\frac{1}{N-P} \Phi^T \mathbf{v} = \begin{bmatrix} \hat{\sigma}_{vv}[1,0] & \cdots & \hat{\sigma}_{vv}[P,0] \end{bmatrix}^T \qquad (19)$$

where the estimate of the ACVF is given by

$$\hat{\sigma}_{vv}[l_1, l_2] = \frac{1}{N-P} \sum_{n=P}^{N-1} v[n-l_1]\, v[n-l_2] \qquad (20)$$

Observe that $\hat{\Sigma}_P$ is a symmetric matrix by virtue of (20). Due to the equivalence above, the method is also known as the covariance method.

16 Modified covariance method

The modified covariance (MCOV) method stems from a modification of the objective function in the LS approach. It minimizes the sum of squares of both forward and backward prediction errors, $\varepsilon_F$ and $\varepsilon_B$ respectively:

$$\hat{\theta}_{MCOV} = \arg\min_{\theta} \left( \sum_{k=P}^{N-1} \varepsilon_F^2[k] + \sum_{k=0}^{N-P-1} \varepsilon_B^2[k] \right) \qquad (21)$$

By a change of summation index, the objective function can also be written as

$$\sum_{k=P}^{N-1} \varepsilon_F^2[k] + \sum_{k=0}^{N-P-1} \varepsilon_B^2[k] = \sum_{k=P}^{N-1} \left( \varepsilon_F^2[k] + \varepsilon_B^2[k-P] \right) \qquad (22)$$

The backward prediction error is defined in a similar way as the forward version:

$$\varepsilon_B[k] = v[k] - \hat{v}[k \,|\, \{v[k+1], \ldots, v[k+P]\}] = v[k] - \sum_{i=1}^{P} (-d_i)\, v[k+i] \qquad (23)$$

17 MCOV method ... contd.

Thus, the objective in the MCOV method is to minimize

$$\sum_{k=P}^{N-1} \left[ \left( v[k] + \sum_{i=1}^{P} d_i v[k-i] \right)^2 + \left( v[k-P] + \sum_{i=1}^{P} d_i v[k-P+i] \right)^2 \right] \qquad (24)$$

The solution to this optimization problem is of the same form as that of the LS/COV method, but with the auto-covariance estimate replaced by the one given below:

$$\hat{\theta}_{MCOV} = -\hat{\Sigma}_P^{-1} \hat{\sigma}_P \qquad (25a)$$

$$\hat{\sigma}_{vv}[l_1, l_2] = \frac{1}{2(N-P)} \sum_{k=P}^{N-1} \left( v[k-l_1]\, v[k-l_2] + v[k-P+l_1]\, v[k-P+l_2] \right) \qquad (25b)$$

$$\hat{\Sigma}_{P,ij} = \hat{\sigma}[i,j]; \quad \hat{\sigma}_{P,i} = \hat{\sigma}[i,0], \qquad i = 1, \ldots, P; \; j = 1, \ldots, P \qquad (25c)$$

Note: The covariance matrix $\hat{\Sigma}_P$ is no longer Toeplitz, and therefore a recursion algorithm such as the D-L method cannot be applied.
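A sketch of the forward-backward least squares formulation behind (21)-(24); the stacked-regression route avoids forming (25b) explicitly. Names are illustrative, as before, and armcov(v, P) from the Signal Processing Toolbox is the built-in counterpart.

% MCOV / forward-backward LS AR(P) estimate: a minimal sketch
P = 2; v = v(:); N = length(v);
PhiF = zeros(N-P, P); PhiB = zeros(N-P, P);
for i = 1:P
    PhiF(:,i) = -v(P+1-i : N-i);      % forward regressors  -v[k-i], k = P..N-1
    PhiB(:,i) = -v(1+i : N-P+i);      % backward regressors -v[k+i], k = 0..N-P-1
end
theta_mcov = [PhiF; PhiB] \ [v(P+1:N); v(1:N-P)];   % joint forward-backward LS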

18 Properties of covariance estimators

1 In both the LS and MCOV methods, the regressor $\varphi[k]$ and the prediction error are constructed from k = P to k = N-1, unlike in the Y-W method. Thus, the LS and MCOV methods do not pad the data.

2 The asymptotic properties of the covariance (LS) and MCOV estimators are, however, identical to those of the Y-W estimator (reference).

3 Application of these methods to the estimation of line spectra (sinusoids embedded in noise) produces better results than the Y-W method, especially for short data records. The modified covariance estimator fares better than the OLS in this respect.

4 On the other hand, stability of the resulting models is not guaranteed with the covariance-based estimators. Moreover, the variance-covariance matrix does not possess a Toeplitz structure, which is disadvantageous from a computational viewpoint.

19 Example: Estimating AR(2) using LS and MCOV

For the series of the example illustrating the Y-W method, estimate the parameters using the LS and MCOV methods.

Solution: The LS method yields $\hat{d}_1 = -1.269$, while the MCOV method yields $\hat{d}_1 = -1.268$; the corresponding $\hat{d}_2$ estimates are likewise close. The two sets of estimates differ only slightly from each other and from the Y-W estimates. The standard errors in both estimates are identical to those computed in the Y-W case, by virtue of the properties discussed above.

20 Burg's estimator

Burg's method (reference) minimizes the same objective as the MCOV method, except that it aims to incorporate two desirable features:

i. Stability of the estimated AR model
ii. A D-L-like recursion algorithm for parameter estimation

The key idea is to employ the reflection-coefficient (negative PACF coefficient) based AR representation. Therefore, the reflection coefficients $\kappa_p$, p = 1, ..., P, are estimated instead of the model parameters. Stability of the model is guaranteed by requiring the magnitude of each estimated reflection coefficient to be less than unity.

The optimization problem remains the same as in the MCOV method:

$$\hat{\theta}_{Burg} = \arg\min_{\kappa_p} \sum_{k=P}^{N-1} \left( \varepsilon_F^2[k] + \varepsilon_B^2[k-P] \right) \qquad (26)$$

21 Burg's method ... contd.

In order to arrive at a D-L-like recursive solution, the forward and backward prediction errors associated with a model of order p are re-written as follows:

$$\varepsilon_F^{(p)}[k] = v[k] + \sum_{i=1}^{p} d_i v[k-i] = \begin{bmatrix} v[k] & \cdots & v[k-p] \end{bmatrix} \begin{bmatrix} 1 \\ \theta^{(p)} \end{bmatrix} \qquad (27)$$

$$\varepsilon_B^{(p)}[k-p] = v[k-p] + \sum_{i=1}^{p} d_i v[k-p+i] = \begin{bmatrix} v[k] & \cdots & v[k-p] \end{bmatrix} \begin{bmatrix} \tilde{\theta}^{(p)} \\ 1 \end{bmatrix} \qquad (28)$$

where $\tilde{\theta}^{(p)}$ denotes $\theta^{(p)}$ with its elements in reversed order. Then, using the order update

$$\theta^{(p)} = \begin{bmatrix} \theta^{(p-1)} \\ 0 \end{bmatrix} + \kappa_p \begin{bmatrix} \tilde{\theta}^{(p-1)} \\ 1 \end{bmatrix} \qquad (29)$$

the following recursive relations can be obtained:

$$\varepsilon_F^{(p)}[k] = \varepsilon_F^{(p-1)}[k] + \kappa_p\, \varepsilon_B^{(p-1)}[k-p] \qquad (30)$$

$$\varepsilon_B^{(p)}[k-p] = \varepsilon_B^{(p-1)}[k-p] + \kappa_p\, \varepsilon_F^{(p-1)}[k] \qquad (31)$$

22 Burg's method ... contd.

Inserting the recursive relations into the objective function and solving,

$$\hat{\kappa}_p = \frac{-2 \sum_{n=p}^{N-1} \varepsilon_F^{(p-1)}[n]\, \varepsilon_B^{(p-1)}[n-p]}{\sum_{n=p}^{N-1} \left( \left(\varepsilon_F^{(p-1)}[n]\right)^2 + \left(\varepsilon_B^{(p-1)}[n-p]\right)^2 \right)} \qquad (32)$$

Stability of the estimated model can be verified by showing that the optimal reflection coefficient in (32) satisfies $|\kappa_p| \le 1$ for all p. The estimates of the innovations variance are also recursively updated as

$$\hat{\sigma}_e^{2(p)} = \hat{\sigma}_e^{2(p-1)} \left(1 - \hat{\kappa}_p^2\right) \qquad (33)$$

Given that the reflection coefficients are always less than unity in magnitude, the innovations variance is guaranteed to decrease as the order increases.

23 Burg's estimation procedure

A basic procedure for Burg's algorithm thus follows:

Burg's method
1 Set p = 0 and $\theta^{(0)} = 0$, so that the forward and backward prediction errors are initialized to $\varepsilon_F^{(0)}[k] = v[k] = \varepsilon_B^{(0)}[k]$.
2 Increment the order p by one and compute $\kappa_{p+1}$ using (32).
3 Update the parameter vector $\theta^{(p+1)}$ using (29).
4 Update the prediction errors for the incremented order using the recursions (30) and (31).
5 Repeat steps 2-4 until the desired order p = P is reached.

It is easy to see that the optimal estimate of $\kappa_1$ with the above initialization is $\hat{\kappa}_1 \approx -\hat{\rho}_{vv}[1]$, which is also the optimal LS estimate of an AR(1) model. A computationally efficient version of the above algorithm, known as Burg's recursion, updates the denominator of (32) recursively.
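The five steps translate almost line-for-line into MATLAB; a minimal sketch under the usual assumptions (zero-mean data vector v, target order P), with arburg(v, P) as the built-in counterpart:

% Burg's recursion: a sketch of steps 1-5
P = 2; v = v(:); N = length(v);
f = v; b = v;                              % step 1: order-0 errors
theta = []; s2e = (v'*v)/N;                % initial innovations variance
for p = 1:P
    fp = f(2:end); bp = b(1:end-1);        % align eF[n] with eB[n-p]
    kp = -2*(bp'*fp)/(fp'*fp + bp'*bp);    % step 2: Eq. (32)
    theta = [theta; 0] + kp*[flipud(theta); 1];   % step 3: Eq. (29)
    fn = fp + kp*bp;                       % step 4: Eq. (30)
    b  = bp + kp*fp;                       % step 4: Eq. (31)
    f  = fn;
    s2e = s2e*(1 - kp^2);                  % Eq. (33)
end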

24 Properties of Burg's estimator

Asymptotic properties of the optimal estimates of $\kappa_p$ are not trivial to derive, particularly when the postulated model order is lower than the true order $P_0$. It is even more difficult to analyze the properties of the parameter estimates, since they are not explicitly optimized. The following is a summary of facts on Burg's estimator from extensive studies by several researchers:

1 The bias of Burg's estimates is as large as that of the LS estimates, but lower than that of the Yule-Walker estimates, especially when the underlying process is auto-regressive with roots near the unit circle.

2 The variance of $\hat{\kappa}_p$ for models with orders $p \ge P_0$ is given by

$$\mathrm{var}(\hat{\kappa}_p) = \begin{cases} \dfrac{1 - \kappa_p^2}{N}, & p = P_0 \\[4pt] \dfrac{1}{N}, & p > P_0 \end{cases} \qquad (34)$$

The case of p > P_0 is consistent with the result for the variance of the PACF coefficient estimates at lags l > P_0 given by (9) and (7).

25 Properties of Burg's estimator ... contd.

3 The innovations variance estimate is asymptotically unbiased, again when the postulated order is at least equal to the true order:

$$E(\hat{\sigma}_e^2) = \sigma_e^2 \left(1 - \frac{p}{N}\right), \; p \ge P_0 \implies \lim_{N \to \infty} E(\hat{\sigma}_e^2) = \sigma_e^2 \qquad (35)$$

4 All reflection coefficients for orders $p \ge P_0$ are independent of the lower-order estimates.

5 By the asymptotic equivalence of Burg's method with the Y-W estimator, the distribution and covariance of the resulting parameter estimates are identical to those given in (5). The difference is in the point estimate of $\theta$ and the estimate of the innovations variance.

6 Finally, a distinct property of Burg's estimator is that it guarantees stability of AR models.

26 Example: Simulated AR(2) series

For the simulated series considered in the previous examples, obtain Burg's estimates of the model parameters.

Solution: Burg's method yields $\hat{d}_1 = -1.267$; the estimates are almost identical to the MCOV estimates. Once again, given the large sample size, the asymptotic properties can be expected to be identical to those of the previous methods.

27 Estimation of MA models

The problem of estimating an MA model is more involved than that of estimating the AR parameters, primarily because the predictor is non-linear in the unknowns. With an MA(M) model, the predictor is

$$\hat{v}[k|k-1] = c_1 e[k-1] + \cdots + c_M e[k-M], \qquad k \ge M \qquad (36)$$

wherein both the parameters and the past innovations are unknown. Thus, the non-linear least squares (NLS) estimation method and the MLE are popularly used for estimating MA models. Both of these methods require a proper initialization so as not to get trapped in local minima. A few popular methods for obtaining preliminary estimates are briefly discussed next. For details, read (reference).

28 Preliminary estimates of MA models

Four popular methods are used to seed the NLS and MLE algorithms.

1 Method of moments: Same as the Y-W method, but now the equations are non-linear. For instance, to estimate an MA(1) model, we have

$$\hat{\rho}_{vv}[1] = \frac{c_1}{1 + c_1^2} \qquad (37)$$

giving rise to two solutions. Only invertible solutions are accepted.

2 Durbin's estimator: The idea underlying Durbin's estimator is to first generate the innovation sequence through a high-order AR model. Subsequently, the MA(M) model is re-written as

$$v[k] - \hat{e}[k] = \sum_{i=1}^{M} c_i\, \hat{e}[k-i] \qquad (38)$$

where $\hat{e}[k] = \hat{D}(q^{-1}) v[k]$ is the estimate obtained from the AR model, so that the $c_i$ follow from a linear regression (see the sketch below). The order of the AR model used for this purpose can be selected in different ways, e.g., using AIC or BIC. A simple guideline recommends P = 2M. For a more detailed reading, see Broersen (reference).
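A sketch of Durbin's two-step procedure; the use of arburg for the whitening fit is an illustrative choice, not prescribed by the slide.

% Durbin's preliminary MA(M) estimates: a minimal sketch (zero-mean v assumed)
M = 1; v = v(:); N = length(v);
Pa = 2*M;                              % long-AR order per the slide's guideline
dhat = arburg(v, Pa);                  % [1 d1 ... dPa]: high-order AR fit
ehat = filter(dhat, 1, v);             % residuals ehat[k] = Dhat(q^-1) v[k]
E = zeros(N-M, M);
for i = 1:M
    E(:,i) = ehat(M+1-i : N-i);        % column i holds ehat[k-i], k = M..N-1
end
chat = E \ (v(M+1:N) - ehat(M+1:N));   % linear regression of Eq. (38)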

29 Preliminary estimates of MA models ... contd.

3 Innovations algorithm: This is similar to the D-L algorithm for AR models. The key idea is to use the innovations representation of the MA model, recalling that the white-noise terms are theoretically the one-step-ahead prediction errors. Defining $c_0 \triangleq 1$,

$$v[k] = \sum_{i=0}^{M} c_i e[k-i] = \sum_{i=0}^{M} c_i \left( v[k-i] - \hat{v}[k-i \,|\, k-i-1] \right) \qquad (39)$$

A recursive algorithm can now be constructed:

i. Set m = 0 and $\hat{\sigma}_{e,0}^2 = \hat{\sigma}_v^2$.
ii. Compute

$$\hat{c}_{m,m-j} = \frac{1}{\hat{\sigma}_{e,j}^2} \left( \hat{\sigma}_{vv}[m-j] - \sum_{i=0}^{j-1} \hat{c}_{j,j-i}\, \hat{c}_{m,m-i}\, \hat{\sigma}_{e,i}^2 \right), \qquad 0 \le j < m \qquad (40)$$

iii. Update the innovations variance:

$$\hat{\sigma}_{e,m}^2 = \hat{\sigma}_v^2 - \sum_{j=0}^{m-1} \hat{c}_{m,m-j}^2\, \hat{\sigma}_{e,j}^2$$

iv. Repeat steps (ii) and (iii) until the desired order m = M is reached.
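A direct transcription of steps (i)-(iv) into MATLAB; sig is assumed to hold the biased ACVF estimates at lags 0 to M (as in the Yule-Walker sketch), and the array names are illustrative.

% Innovations algorithm for preliminary MA(M) estimates: a minimal sketch
M = 2;
th  = zeros(M, M);                     % th(m,k) stores c_hat_{m,k}
s2e = zeros(M+1, 1);
s2e(1) = sig(1);                       % step (i): sigma2_{e,0} = sigma_v^2
for m = 1:M
    for j = 0:m-1                      % step (ii): Eq. (40)
        acc = 0;
        for i = 0:j-1
            acc = acc + th(j, j-i)*th(m, m-i)*s2e(i+1);
        end
        th(m, m-j) = (sig(m-j+1) - acc)/s2e(j+1);
    end
    s2e(m+1) = sig(1) - (th(m, m:-1:1).^2)*s2e(1:m);   % step (iii)
end
chat = th(M, 1:M);                     % preliminary estimates c_1, ..., c_M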

30 Preliminary estimates of MA models ... contd.

4 Hannan-Rissanen's method: The approach is similar to Durbin's estimator in the sense that the innovations are replaced by their estimates from an AR model. The difference is that the parameters are estimated from a linear least-squares regression of v[k] on the estimated past innovations:

$$\hat{v}[k] = \sum_{i=1}^{M} c_i\, \hat{e}[k-i], \qquad k \ge M \qquad (41)$$

The past terms of $\hat{e}[k]$ are obtained as the residuals of a sufficiently high-order AR model. The parameter estimates can be further refined using an additional step, but this can usually be avoided. For additional details, refer to Brockwell [2002].

31 Estimation of ARMA models

Given a set of N observations {v[0], v[1], ..., v[N-1]} of a process, estimate the $\bar{P} = P + M$ parameters $\theta = \begin{bmatrix} d_1 & \cdots & d_P & c_1 & \cdots & c_M \end{bmatrix}^T$ of the ARMA(P, M) model

$$v[k] + \sum_{j=1}^{P} d_j v[k-j] = \sum_{i=1}^{M} c_i e[k-i] + e[k] \qquad (42)$$

and the innovations variance $\sigma_e^2$. It is assumed, without loss of generality, that the generating process is zero-mean.

Due to the presence of the MA component, the predictor is once again non-linear in the unknowns, rendering the optimization problem complicated. Standard solvers are based on either NLS or ML methods.

32 Estimation of ARMA models ... contd.

With the non-linear LS method, typically a Gauss-Newton (G-N) method is used. Analytical expressions are used to compute the gradients (of the predictor) at each iteration.

In the MLE approach, the likelihood function is set up using the prediction-error (innovations) approach, and a non-linear optimization solver such as the G-N method is used.

Any one of the four methods discussed earlier for MA models can be used to initialize the algorithms; the Y-W method is the standard choice. See Shumway and Stoffer (reference) for a theoretical discussion of the NLS and MLE algorithms, i.e., how to evaluate the gradients for the former and set up the likelihood function for the latter.

33 NLS and ML estimators of ARMA models

Deriving the asymptotic properties of the NLS and ML estimators is beyond the scope of this text; only the main result is stated (see Brockwell and Davis [1991], Brockwell [2002], Shumway and Stoffer [2006]).

The parameter estimates of an ARMA(P, M) model obtained from the unconditional and conditional least squares estimators and the ML estimator, initialized with the method of moments, are asymptotically consistent. Further,

$$\sqrt{N}(\hat{\theta} - \theta_0) \sim AN\left(0,\, \sigma_e^2 S(\theta_0)^{-1}\right) \qquad (43)$$

The (P+M) x (P+M) covariance matrix S is given by

$$S = \begin{bmatrix} E(x_P x_P^T) & E(x_P w_M^T) \\ E(w_M x_P^T) & E(w_M w_M^T) \end{bmatrix} \qquad (44)$$

where $x_P$ and $w_M$ are constructed from two auto-regressive processes:

$$x_P = \begin{bmatrix} x[k-1] & x[k-2] & \cdots & x[k-P] \end{bmatrix}^T; \qquad x[k] = \frac{1}{D(q^{-1})}\, e[k] \qquad (45)$$

$$w_M = \begin{bmatrix} w[k-1] & w[k-2] & \cdots & w[k-M] \end{bmatrix}^T; \qquad w[k] = \frac{1}{C(q^{-1})}\, e[k] \qquad (46)$$

34 Remarks

The block diagonals $S_{11}$ (P x P) and $S_{22}$ (M x M) are essentially the auto-covariance matrices of x[k] and w[k] respectively, while the off-diagonal blocks are the matrices of cross-covariance functions between x[k] and w[k]. A few special cases are discussed below.

1 AR(1): For this case, S is a scalar. Using (44),

$$S = E(x[k-1]\, x[k-1]) = \frac{\sigma_e^2}{1 - d_1^2} \implies \mathrm{var}(\hat{d}_1) = \frac{1 - d_1^2}{N} \qquad (47)$$

Thus, as the pole of the AR(1) process draws closer to the unit circle, the variance of the underlying process, $\sigma_e^2/(1 - d_1^2)$, increases drastically. This makes a case for building ARIMA models.

2 MA(1): Using (44),

$$S = E(w[k-1]\, w[k-1]) = \frac{\sigma_e^2}{1 - c_1^2} \implies \mathrm{var}(\hat{c}_1) = \frac{1 - c_1^2}{N} \qquad (48)$$

Just as with the AR case, when the zero of the MA model is close to the unit circle, the variance of the auxiliary process w[k] becomes very large.

For small samples, no expression for the variance or the distribution exists. In such cases, the bootstrapping method is an effective alternative.

35 Procedure to fit an ARMA model

Systematic procedure:

1 Carry out a visual examination of the series. Inspect the data for outliers, drifts, significantly differing variances, etc.

2 Perform the necessary pre-processing of the data (e.g., removal of trends, transformation) to obtain a stationary series.

3 If the intent is to develop a pure AR model, use the PACF to estimate the order; likewise, for a pure MA model, use the ACF. For ARMA models, a good start is an ARMA(1,1) model.

4 For AR models, use the MCOV or Burg's method with the chosen order; if the purpose is spectral estimation, prefer the MCOV method. For MA and ARMA models, generate preliminary estimates (typically using the Y-W or the H-R method) with the chosen orders, and use these preliminary estimates to initialize an MLE or NLS algorithm to obtain optimal estimates.

5 Subject the model to a quality (diagnostic) check. If the model passes all the checks, accept it; else, work towards an appropriate model order until satisfactory results are obtained.

36 Example: Estimating an ARMA model

The objective of this exercise is to build an ARMA representation for the process whose ACF and PACF plots are shown below.

[Figure: Auto-correlation function and partial auto-correlation function of the series, plotted against lags.]

Beginning with an ARMA(1,1) choice, the estimated model is

$$\hat{H}(q^{-1}) = \frac{1 + \hat{c}_1\, q^{-1}}{1 + \hat{d}_1\, q^{-1}} \qquad (49)$$

with standard errors of ±0.025 and ±0.02 reported below the numerator and denominator coefficient estimates, respectively.

37 Model Assessment

The standard errors reported below each coefficient estimate reveal that the model has good precision (low variability). Additionally, the model is both stationary and invertible. The ACF of the residuals from the estimated model is shown below. The model is thus satisfactory in both respects.

[Figure: ACF of the residuals from the ARMA(1,1) model, plotted against lags.]

38 ARMA estimation ... contd.

It is of interest to note that the true process also has an ARMA(1,1) representation:

$$H(q^{-1}) = \frac{1 + c_{1,0}\, q^{-1}}{1 + d_{1,0}\, q^{-1}}$$

It is thus a coincidence that the orders of the estimated model and the generating process agree. When the residual whiteness test indicates the need for increasing the model order, there is no definitive way of determining whether the numerator order, the denominator order, or both should be increased. The solution has to be determined by trial and error. Fortunately, we can converge to a working model within a handful of iterations, since an ARMA(2,2) representation is capable of representing a large class of stationary processes (reference).

In general, when competing models are available, the decision on the final model is based on information criteria measures (see Module 8).

39 MATLAB code for estimating the ARMA model

% Generate data (C, D: true ARMA(1,1) polynomials; N: record length)
Hq = idpoly(1, [], C, D, [], 'Noisevariance', 1);
ek = randn(N, 1);
vk = sim(Hq, ek);

% Remove mean
vkd = detrend(vk, 'constant');

% Plot ACF and PACF
acf(vkd.y, 20, 1)
pacf(vkd.y, 20, 1)

% Fit an ARMA(1,1) model
mod_arma = armax(vk, [1 1]);
present(mod_arma)

% ACF of residuals
err_arma = pe(mod_arma, vk);   % notice err_arma is not an iddata object
acf(err_arma, 20, 1);

40 Estimation of ARIMA models

In Module 3 we learnt that non-stationarities are of two types: deterministic (e.g., trend-type non-stationarity) and stochastic (e.g., mean, variance and integrating-type non-stationarity). Of particular interest are the difference-stationary processes, which are nicely represented by ARIMA models,

$$\nabla^d v[k] = \frac{C(q^{-1})}{D(q^{-1})}\, e[k], \qquad \nabla \triangleq 1 - q^{-1} \qquad (50)$$

and which are capable of handling trend-type non-stationarities as well.

41 Estimating ARIMA models ... contd.

The additional steps in ARIMA modelling are determining the degree of differencing d, the orders P and M of the ARMA component, and the parameters of the C and D polynomials.

Given that ARIMA models are primarily meant for difference-stationary processes, and that unnecessary differencing can cause more harm than good, it is important to first examine the data for the presence of non-stationarities and to determine their type before deciding to fit an ARIMA model. These are the preliminary steps in a general procedure for building ARIMA models, as outlined next.

42 Steps for building an ARIMA model

Procedure:

1 Examine/test the series for integrating-type non-stationarity using visual inspection of the series and/or the ACF plots, and unit root tests (e.g., the Dickey-Fuller and Phillips-Perron tests). If the series exhibits strong evidence of unit roots, then an ARIMA model can be fit after following steps 2 and 3 below. Conducting unit root tests can be challenging and involved; they have to be performed with care and should be corroborated with visual observations of the series as well as the ACF/PACF plots.

2 If there is (additionally) strong evidence of trend-type non-stationarities, remove them by fitting polynomial functions to the series (using the OLS method, for example) and work with the residuals of this fit. Denote these by w[k].

3 If the residuals (or the series, in the absence of trends) are additionally known to contain growth effects, then a logarithmic transformation is recommended. Call the resulting series $\tilde{w}[k]$ or $\tilde{v}[k]$, as the case may be.

4 Determine the appropriate degree of differencing d (by visual or statistical testing of the differenced series).

5 Fit an ARMA model to $\nabla^d \tilde{w}[k]$ or $\nabla^d \tilde{v}[k]$ (or to the respective untransformed series if step 3 is skipped), as sketched below.
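A minimal sketch of steps 4-5 for the common case d = 1, mirroring the armax-based workflow of the earlier MATLAB slide; the series name v and the ARMA(1,1) order choice are assumptions for illustration.

% ARIMA(1,1,1) via differencing + ARMA: a sketch
dv = diff(v(:));                  % degree-1 differencing, cf. Eq. (50)
dv = dv - mean(dv);               % remove any residual mean
mod_arima = armax(dv, [1 1]);     % fit an ARMA(1,1) to the differenced series
present(mod_arima)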

43 Summary of Lecture 1

AR models are much easier to estimate than MA models because they give rise to linear predictors.

A variety of methods are available to estimate AR models, popular ones being the Yule-Walker, LS/COV, modified covariance and Burg's methods.

Among the four methods, Y-W and Burg's method guarantee stability, but the latter is better for processes with poles close to the unit circle. The MCOV method is preferred when AR models are used in spectral estimation.

ML methods are generally not used for estimating AR models because the improvement achieved is marginal.

ARMA (and MA) models give rise to non-linear optimization problems that require preliminary estimates.

The NLS and ML estimators yield asymptotically similar ARMA model estimates.

The best ARMA model is almost always determined iteratively, but in a systematic manner.
