LARGE SAMPLE PROPERTIES OF PARAMETER ESTIMATES FOR PERIODIC ARMA MODELS

BY I. V. BASAWA AND ROBERT LUND
The University of Georgia

First version received November 1999

Abstract. This paper studies the asymptotic properties of parameter estimates for causal and invertible periodic autoregressive moving-average (PARMA) time series models. A general limit result for PARMA parameter estimates with a moving-average component is derived. The paper presents examples that explicitly identify the limiting covariance matrix for parameter estimates from a general periodic autoregression (PAR), a first-order periodic moving average (PMA(1)), and the mixed PARMA(1,1) model. Some comparisons and contrasts to univariate and vector autoregressive moving-average sequences are made.

Keywords. Least squares; maximum likelihood; periodic time series; PARMA model; ARMA model; VARMA model.

1. INTRODUCTION

Time series with periodically varying parameters are natural modelling vehicles for series with cyclic autocovariances. Such series arise in climatology (Hannan, 1955; Monin, 1963; Jones and Brelsford, 1967; Bloomfield et al., 1994), economics (Parzen and Pagano, 1979), hydrology (Vecchia, 1985a, 1985b), electrical engineering (Gardner and Franks, 1975) and many other disciplines. Analogous to autoregressive moving-average (ARMA) models for short memory stationary series, periodic autoregressive moving-average (PARMA) models are fundamental periodic time series models (Jones and Brelsford, 1967; Vecchia, 1985a, 1985b; Cipra and Tlusty, 1987; Lund and Basawa, 2000). In this paper, the asymptotic properties of parameter estimates from a general causal and invertible PARMA model are derived. The results extend those for periodic autoregressions first proven in Pagano (1978) and Troutman (1979).

The rest of this paper proceeds as follows. In Section 2, a brief overview of PARMA models and their properties is presented. Section 3 establishes consistency and asymptotic normality of the PARMA parameter estimates and identifies the asymptotic covariance matrix for these estimates. Section 4 presents three examples where the asymptotic covariance matrix of the PARMA parameter estimates is explicitly computed in terms of the PARMA model parameters.

(Journal of Time Series Analysis, Vol. 22, No. 6, 2001, pp. 651-663. © 2001 Blackwell Publishers Ltd.)

2. PARMA MODELS

Suppose that $\{X_t\}$ is a time series with finite second moments and that $E[X_t] = 0$ for all $t$. We call $\{X_t\}$ a PARMA($p$, $q$) series with period $T$ if it satisfies the periodic linear difference equation

$$X_{nT+\nu} - \sum_{k=1}^{p} \phi_k(\nu)\, X_{nT+\nu-k} = \varepsilon_{nT+\nu} + \sum_{k=1}^{q} \theta_k(\nu)\, \varepsilon_{nT+\nu-k} \qquad (2.1)$$

In (2.1), the notation $X_{nT+\nu}$ denotes the series during the $\nu$th season, $1 \le \nu \le T$, of period $n$. The autoregressive and moving-average model orders are $p$ and $q$, respectively, and $\phi_1(\nu), \ldots, \phi_p(\nu)$ and $\theta_1(\nu), \ldots, \theta_q(\nu)$ are the autoregressive and moving-average model coefficients, respectively, during season $\nu$. There is no mathematical loss of generality in taking $p$ and $q$ to be constant in the season $\nu$ (Lund and Basawa, 2000). The errors $\{\varepsilon_t\}$ are mean zero periodic white noise with $\mathrm{var}(\varepsilon_{nT+\nu}) = \sigma^2(\nu) > 0$ for all seasons $\nu$. For convenience, the non-periodic notations $\{X_t\}$, $\{\varepsilon_t\}$, $\{\phi_k(t)\}$, etc. will be used interchangeably with the periodic notations $\{X_{nT+\nu}\}$, $\{\varepsilon_{nT+\nu}\}$, $\{\phi_k(\nu)\}$, etc.

Equation (2.1) has the $T$-variate ARMA representation (Vecchia, 1985a)

$$\Phi_0 \tilde X_n - \sum_{k=1}^{p^*} \Phi_k \tilde X_{n-k} = \Theta_0 \tilde\varepsilon_n + \sum_{k=1}^{q^*} \Theta_k \tilde\varepsilon_{n-k} \qquad (2.2)$$

where $\{\tilde X_n\}$ and $\{\tilde\varepsilon_n\}$ are the $T$-variate series $\tilde X_n = (X_{nT+1}, \ldots, X_{nT+T})'$ and $\tilde\varepsilon_n = (\varepsilon_{nT+1}, \ldots, \varepsilon_{nT+T})'$. The model orders in (2.2) are $p^* = \lceil p/T \rceil$ and $q^* = \lceil q/T \rceil$, where $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$. The $T \times T$ autoregressive and moving-average coefficient matrices are computed as follows. $\Phi_0$ and $\Theta_0$ have $(i, j)$th entries

$$(\Phi_0)_{i,j} = \begin{cases} 1 & i = j \\ 0 & i < j \\ -\phi_{i-j}(i) & i > j \end{cases} \qquad (\Theta_0)_{i,j} = \begin{cases} 1 & i = j \\ 0 & i < j \\ \theta_{i-j}(i) & i > j \end{cases} \qquad (2.3)$$

and $(\Phi_m)_{i,j} = \phi_{mT+i-j}(i)$ for $1 \le m \le p^*$, and $(\Theta_m)_{i,j} = \theta_{mT+i-j}(i)$ for $1 \le m \le q^*$. Here, the conventions $\phi_k(\nu) = 0$ for $k > p$ and $\theta_k(\nu) = 0$ for $k > q$ are made.

The PARMA model in (2.1) is assumed to be causal and invertible in the sense that

$$\det\!\Big(\Phi_0 - \sum_{k=1}^{p^*} \Phi_k z^k\Big) \ne 0 \quad\text{and}\quad \det\!\Big(\Theta_0 + \sum_{k=1}^{q^*} \Theta_k z^k\Big) \ne 0 \qquad (2.4)$$

for all complex $z$ satisfying $|z| \le 1$. Bentarzi and Hallin (1993) give alternative causality and invertibility conditions that are equivalent to (2.4). It is tacitly assumed that the coefficients in (2.2) are identifiable in the sense of Reinsel (1997) or Deistler et al. (1978).
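The matrix assembly in (2.3) is mechanical but index-heavy. The following sketch (ours, not the authors'; it assumes numpy and the hypothetical helper names `parma_to_varma` and `is_causal`) builds the $T$-variate coefficient matrices and checks the autoregressive condition in (2.4) via the equivalent requirement that the companion matrix of $\{\Phi_0^{-1}\Phi_k\}$ have spectral radius below one.

```python
import numpy as np

def parma_to_varma(phi, theta):
    """Assemble the T-variate ARMA matrices of (2.2)-(2.3).

    phi[v-1, k-1]   = phi_k(v),   an array of shape (T, p)
    theta[v-1, k-1] = theta_k(v), an array of shape (T, q)
    Returns the lists [Phi_0, ..., Phi_{p*}] and [Theta_0, ..., Theta_{q*}].
    """
    T, p = phi.shape
    q = theta.shape[1]
    pstar, qstar = -(-p // T), -(-q // T)   # ceiling(p/T), ceiling(q/T)

    def phi_k(v, k):    # phi_k(v), with phi_k(v) = 0 for k outside 1..p
        return phi[v - 1, k - 1] if 1 <= k <= p else 0.0

    def theta_k(v, k):  # theta_k(v), with theta_k(v) = 0 for k outside 1..q
        return theta[v - 1, k - 1] if 1 <= k <= q else 0.0

    Phi = [np.eye(T)] + [np.zeros((T, T)) for _ in range(pstar)]
    Theta = [np.eye(T)] + [np.zeros((T, T)) for _ in range(qstar)]
    for i in range(1, T + 1):
        for j in range(1, T + 1):
            if i > j:   # strictly lower-triangular parts of Phi_0, Theta_0 in (2.3)
                Phi[0][i - 1, j - 1] = -phi_k(i, i - j)
                Theta[0][i - 1, j - 1] = theta_k(i, i - j)
            for m in range(1, pstar + 1):   # (Phi_m)_{ij} = phi_{mT+i-j}(i)
                Phi[m][i - 1, j - 1] = phi_k(i, m * T + i - j)
            for m in range(1, qstar + 1):   # (Theta_m)_{ij} = theta_{mT+i-j}(i)
                Theta[m][i - 1, j - 1] = theta_k(i, m * T + i - j)
    return Phi, Theta

def is_causal(Phi):
    """Check the AR half of (2.4): rewrite the VAR part as
    X_n = sum_k (Phi_0^{-1} Phi_k) X_{n-k} and require the companion
    matrix of these coefficients to have spectral radius < 1."""
    T = Phi[0].shape[0]
    A = [np.linalg.solve(Phi[0], P) for P in Phi[1:]]
    if not A:
        return True
    comp = np.zeros((T * len(A), T * len(A)))
    comp[:T, :] = np.hstack(A)
    comp[T:, :-T] = np.eye(T * (len(A) - 1))
    return np.max(np.abs(np.linalg.eigvals(comp))) < 1

# Example: a causal PAR(1) with period T = 4
phi = np.array([[0.5], [-0.3], [0.8], [0.2]])
Phi, _ = parma_to_varma(phi, np.zeros((4, 1)))
print(is_causal(Phi))   # True: |phi_1(1) * ... * phi_1(4)| = 0.024 < 1
```

For a PAR(1), the companion matrix has a single nonzero eigenvalue equal to the product $\phi_1(1)\cdots\phi_1(T)$, which matches the causality condition $|r_T| < 1$ used in the examples of Section 4.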

Under causality and invertibility, it is possible to relate $\{X_t\}$ and $\{\varepsilon_t\}$ through the infinite order moving-average and autoregressive expansions

$$X_{nT+\nu} = \sum_{j=0}^{\infty} \psi_j(\nu)\, \varepsilon_{nT+\nu-j} \qquad (2.5)$$

and

$$\varepsilon_{nT+\nu} = \sum_{j=0}^{\infty} \pi_j(\nu)\, X_{nT+\nu-j} \qquad (2.6)$$

In (2.5) and (2.6), the 'seasonal weights' $\psi_k(\nu)$ and $\pi_k(\nu)$ satisfy

$$\max_{1\le\nu\le T} \sum_{k=0}^{\infty} |\psi_k(\nu)| < \infty \quad\text{and}\quad \max_{1\le\nu\le T} \sum_{k=0}^{\infty} |\pi_k(\nu)| < \infty \qquad (2.7)$$

The weight sequences $\{\psi_k(\nu)\}$ and $\{\pi_k(\nu)\}$ can be computed by setting $\psi_0(\nu) = \pi_0(\nu) = 1$ for all seasons $\nu$, and calculating values recursively via

$$\psi_k(\nu) = \theta_k(\nu)\, 1_{[k \le q]} + \sum_{j=1}^{\min(k,p)} \phi_j(\nu)\, \psi_{k-j}(\nu - j) \qquad k \ge 1;\ 1 \le \nu \le T \qquad (2.8)$$

and

$$\pi_k(\nu) = -\phi_k(\nu)\, 1_{[k \le p]} - \sum_{j=1}^{\min(k,q)} \theta_j(\nu)\, \pi_{k-j}(\nu - j) \qquad k \ge 1;\ 1 \le \nu \le T \qquad (2.9)$$

The notation used in (2.8), (2.9) and elsewhere interprets $\psi_k(j)$ and $\pi_k(j)$ for each $k \ge 0$, $\phi_k(j)$ for $1 \le k \le p$, $\sigma^2(j)$, etc. periodically in $j$ with period $T$.

When causal, (2.1) has a unique (in mean square) solution $\{X_t\}$ with a periodic autocovariance structure in the sense that $\mathrm{cov}(X_t, X_s) = \mathrm{cov}(X_{t+T}, X_{s+T})$ for all integers $t$ and $s$ (Lund and Basawa, 2000). Such series are also called periodically correlated (Gladyshev, 1961; Hurd, 1989), cyclostationary (Gardner and Franks, 1975), and periodically stationary (Monin, 1963).

The autocovariance structure of the PARMA model is easily computed from its causal representation. Let $\gamma_\nu(h) = \mathrm{cov}(X_{nT+\nu}, X_{nT+\nu-h})$ be the season $\nu$ autocovariance at lag $h \ge 0$. Manipulations with (2.5) give (Lund and Basawa, 2000)

$$\gamma_\nu(h) = \sum_{k=0}^{\infty} \psi_{k+h}(\nu)\, \psi_k(\nu - h)\, \sigma^2(\nu - k - h) \qquad (2.10)$$
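As an illustration of the recursion (2.8) and the use of (2.10), here is a small numpy sketch (ours, with hypothetical function names; the infinite sum in (2.10) is truncated at $K$ terms, which is accurate because the weights are absolutely summable as in (2.7)).

```python
import numpy as np

def psi_weights(phi, theta, K):
    """Seasonal MA weights psi_k(v) for k = 0..K via the recursion (2.8).
    phi[v-1, k-1] = phi_k(v); theta[v-1, k-1] = theta_k(v)."""
    T, p = phi.shape
    q = theta.shape[1]
    psi = np.zeros((T, K + 1))
    psi[:, 0] = 1.0                      # psi_0(v) = 1 for every season
    for k in range(1, K + 1):
        for v in range(1, T + 1):
            s = theta[v - 1, k - 1] if k <= q else 0.0
            for j in range(1, min(k, p) + 1):
                # psi_{k-j}(v-j), with the season index interpreted mod T
                s += phi[v - 1, j - 1] * psi[(v - j - 1) % T, k - j]
            psi[v - 1, k] = s
    return psi

def parma_acvf(phi, theta, sigma2, h, K=200):
    """Season-v autocovariances gamma_v(h), truncating the sum (2.10) at K
    terms; sigma2[v-1] = sigma^2(v)."""
    T = phi.shape[0]
    psi = psi_weights(phi, theta, K + h)
    gam = np.zeros(T)
    for v in range(1, T + 1):
        for k in range(K):
            gam[v - 1] += (psi[v - 1, k + h]
                           * psi[(v - h - 1) % T, k]
                           * sigma2[(v - k - h - 1) % T])
    return gam

# Example: seasonal variances of a PARMA(1,1) with period T = 2
phi = np.array([[0.5], [-0.4]])
theta = np.array([[0.3], [0.6]])
print(parma_acvf(phi, theta, sigma2=np.array([1.0, 2.0]), h=0))
```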

3. ASYMPTOTIC PROPERTIES OF PARMA PARAMETER ESTIMATES

This section studies the asymptotic properties of parameter estimates from a causal and invertible PARMA model. As seen in Section 2, any PARMA model has a $T$-variate ARMA representation. Hence, in principle, the asymptotic properties of the PARMA parameter estimates can be deduced from the well-established multivariate ARMA asymptotic results in Dunsmuir and Hannan (1976), Deistler et al. (1978), Hannan and Deistler (1988), Lütkepohl (1991) and Reinsel (1997) (among others). Because of this, we will omit lengthy technical arguments in favour of an estimating equation outline.

We will work in the univariate PARMA setting rather than transform to a $T$-variate ARMA model via (2.2). There are two primary reasons for this. First, one would have to invert the $T$-variate ARMA transformation to learn about the individual PARMA model coefficients. Results developed directly in terms of the univariate PARMA model would be more readily usable. Second, (2.2) is not in standard multivariate ARMA form as $\Phi_0$ and $\Theta_0$ are not necessarily the identity matrix. Rescaling the vector noises in (2.2) via $\tilde\eta_n = \Theta_0 \tilde\varepsilon_n$ and then multiplying both sides by $\Phi_0^{-1}$ yields a standard vector ARMA model; however, the covariance matrix of $\{\tilde\eta_n\}$ and the MA parameters would then depend on both the PARMA autoregressive parameters and $\tilde\sigma^2$. For these reasons, we will work directly in the PARMA setting.

Suppose that $X_1, \ldots, X_{NT}$ is a data sample from a causal and invertible PARMA model. The sample contains $N$ full periods of data which are indexed from 0 to $N - 1$. Note that $1 \le nT + \nu \le NT$ when $0 \le n \le N - 1$ and $1 \le \nu \le T$. In this section, the large sample properties of the least squares estimates of the PARMA parameters are studied. We assume that $\{\varepsilon_t\}$ is periodic i.i.d. mean zero noise with a finite fourth moment.

For notation, let $\tilde\phi(\nu) = (\phi_1(\nu), \ldots, \phi_p(\nu))'$ and $\tilde\theta(\nu) = (\theta_1(\nu), \ldots, \theta_q(\nu))'$ denote the autoregressive and moving-average parameters during season $\nu$, respectively. The collection of all PARMA parameters will be denoted by $\tilde\alpha = (\tilde\phi(1)', \tilde\theta(1)', \tilde\phi(2)', \tilde\theta(2)', \ldots, \tilde\phi(T)', \tilde\theta(T)')'$. The dimension of $\tilde\alpha$ is $(p+q)T \times 1$. The white noise variances $\tilde\sigma^2 = (\sigma^2(1), \ldots, \sigma^2(T))'$ will be treated as nuisance parameters.

The weighted least squares estimate $\hat{\tilde\alpha}_{LS}$ of $\tilde\alpha$ is obtained by minimizing the weighted sum of squares

$$S(\tilde\alpha) = \sum_{n=0}^{N-1} \sum_{\nu=1}^{T} \sigma^{-2}(\nu)\, \varepsilon_{nT+\nu}(\tilde\alpha)^2 \qquad (3.1)$$

where $\varepsilon_t(\tilde\alpha)$ is determined recursively in $t$ via a truncated version of (2.1):

$$\varepsilon_{nT+\nu}(\tilde\alpha) = X_{nT+\nu} - \sum_{k=1}^{p} \phi_k(\nu)\, X_{nT+\nu-k} - \sum_{k=1}^{q} \theta_k(\nu)\, \varepsilon_{nT+\nu-k}(\tilde\alpha) \qquad (3.2)$$

In (3.2), it is understood that $\varepsilon_t(\tilde\alpha) = X_t = 0$ for $t \le 0$. As a matter of notation, we write $\varepsilon_t(\tilde\alpha)$ to emphasize the explicit dependence of $\varepsilon_t$ on $\tilde\alpha$ (see (2.6) and (2.9)). We will freely interchange the truncated residuals in (3.2) and the untruncated errors in (2.6) in the asymptotic arguments that follow. That this interchange does not alter any of the derived asymptotic distributions follows from straightforward modifications of the proofs in Fuller (1996), or Sections 8.11 and 10.8 of Brockwell and Davis (1991).
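The truncated recursion (3.2) translates directly into code. The sketch below (ours; the function name is hypothetical) computes the residuals $\varepsilon_t(\tilde\alpha)$ for a general PARMA($p$, $q$) parameter configuration.

```python
import numpy as np

def parma_residuals(x, phi, theta):
    """Truncated residuals eps_t(alpha) from (3.2), with the convention
    eps_t = X_t = 0 for t <= 0.  x[n*T + v - 1] holds X_{nT+v}."""
    T, p = phi.shape
    q = theta.shape[1]
    eps = np.zeros_like(x)
    for t in range(len(x)):
        v = t % T                        # 0-based season index (season v+1)
        s = x[t]
        for k in range(1, p + 1):        # AR part of (3.2)
            if t - k >= 0:
                s -= phi[v, k - 1] * x[t - k]
        for k in range(1, q + 1):        # MA part of (3.2)
            if t - k >= 0:
                s -= theta[v, k - 1] * eps[t - k]
        eps[t] = s
    return eps
```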

The estimate $\hat{\tilde\alpha}_{LS}$ is a solution to the $(p+q)T$-dimensional estimating equation

$$\sum_{n=0}^{N-1} \sum_{\nu=1}^{T} \sigma^{-2}(\nu)\, \varepsilon_{nT+\nu}(\tilde\alpha)\, \frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = \tilde 0 \qquad (3.3)$$

In (3.3), $\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha$ is numerically evaluated by taking partial derivatives in (3.2); see also (3.10) below. Equation (3.3) assumes that $\sigma^2(\nu)$ is known for each season $\nu$. When $\sigma^2(\nu)$ is unknown it can be replaced by any $\sqrt N$-consistent estimate without altering the limit distribution of $\hat{\tilde\alpha}_{LS}$. Using $E[\varepsilon_{nT+\nu}(\tilde\alpha)^4] < \infty$, it can be shown that one $\sqrt N$-consistent estimate of $\sigma^2(\nu)$ is

$$\hat\sigma^2(\nu) = N^{-1} \sum_{n=0}^{N-1} \varepsilon_{nT+\nu}(\hat{\tilde\alpha}_0)^2 \qquad (3.4)$$

where $\hat{\tilde\alpha}_0$ is the ordinary least squares estimate of $\tilde\alpha$ obtained as a solution to the estimating equation (3.3) that is not weighted for $\sigma^2(\nu)$:

$$\sum_{n=0}^{N-1} \sum_{\nu=1}^{T} \varepsilon_{nT+\nu}(\tilde\alpha)\, \frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = \tilde 0 \qquad (3.5)$$

Note that solution of (3.5) does not require a value of $\tilde\sigma^2$. Now set

$$\tilde S_N(\tilde\alpha) = \sum_{n=0}^{N-1} \tilde Z_n(\tilde\alpha) \qquad (3.6)$$

where

$$\tilde Z_n(\tilde\alpha) = \sum_{\nu=1}^{T} \sigma^{-2}(\nu)\, \varepsilon_{nT+\nu}(\tilde\alpha)\, \frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} \qquad (3.7)$$

The estimating equation in (3.3) is asymptotically equivalent to $\tilde S_N(\tilde\alpha) = \tilde 0$. The asymptotic distribution of $\hat{\tilde\alpha}_{LS}$ can be obtained from that of $N^{-1/2}\tilde S_N(\tilde\alpha)$. To quantify this, we will use the following ergodic result.
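Given residuals such as those from the sketch above, (3.1) and (3.4) are one-liners. A minimal sketch (ours), assuming the sample holds exactly $N$ full periods:

```python
import numpy as np

def seasonal_variance_estimates(eps, T):
    """Root-N consistent estimates (3.4): average the squared residuals
    eps_{nT+v}(alpha_hat_0) within each season.  Assumes len(eps) = N*T."""
    return (np.asarray(eps).reshape(-1, T) ** 2).mean(axis=0)

def weighted_sum_of_squares(eps, sigma2, T):
    """The objective S(alpha) of (3.1) for given residuals and weights."""
    return np.sum(np.asarray(eps).reshape(-1, T) ** 2 / sigma2)
```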

LEMMA 3.1. Consider a causal and invertible PARMA model with the above assumptions on $\{\varepsilon_t\}$. Then the following convergence holds as $N \to \infty$:

(i) $N^{-1}\tilde S_N(\tilde\alpha) \to_P \tilde 0$;

(ii) $N^{-1}\sum_{n=0}^{N-1} \tilde Z_n(\tilde\alpha)\tilde Z_n(\tilde\alpha)' \to_P A(\tilde\alpha, \tilde\sigma^2)$, where

$$A(\tilde\alpha, \tilde\sigma^2) = \sum_{\nu=1}^{T} \sigma^{-2}(\nu)\, \Gamma_\nu(\tilde\alpha, \tilde\sigma^2) \qquad (3.8)$$

and

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = E\!\left[\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha}\left(\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha}\right)'\right] \qquad (3.9)$$

(iii) $-N^{-1}\sum_{n=0}^{N-1} \partial\tilde Z_n(\tilde\alpha)/\partial\tilde\alpha' \to_P A(\tilde\alpha, \tilde\sigma^2)$.

PROOF. Taking partial derivatives in (2.1) gives

$$\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = -\sum_{k=1}^{p} \tilde e_{(p+q)(\nu-1)+k}\, X_{nT+\nu-k} - \sum_{k=1}^{q} \theta_k(\nu)\, \frac{\partial \varepsilon_{nT+\nu-k}(\tilde\alpha)}{\partial\tilde\alpha} - \sum_{k=1}^{q} \tilde e_{(p+q)(\nu-1)+p+k}\, \varepsilon_{nT+\nu-k}(\tilde\alpha) \qquad (3.10)$$

where $\tilde e_j$ denotes a $(p+q)T \times 1$ unit vector whose entries are all zero except for a one in the $j$th row. Stability properties of (3.10) can be used to show that $\{\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha\}$ satisfies a differentiated version of (2.6):

$$\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = \sum_{k=1}^{\infty} \frac{\partial \pi_k(\nu, \tilde\alpha)}{\partial\tilde\alpha}\, X_{nT+\nu-k} \qquad (3.11)$$

where $\max_{1\le\nu\le T} \sum_{k=1}^{\infty} |\partial\pi_k(\nu, \tilde\alpha)/\partial\tilde\alpha| < \infty$. It now follows that $\{\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha\}$ is a strictly stationary (in a periodic sense) mean zero $(p+q)T$-variate series with finite second moments. Causality implies that $\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha$ and $\varepsilon_t(\tilde\alpha)$ are independent for each fixed $t$; note that the $k = 0$ term is absent in (3.11). From (3.11), Theorem 6.21 and Proposition 6.32 in Breiman (1968), and the finiteness of second moments of $\{\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha\}$, the law of large numbers relations

$$N^{-1}\sum_{n=0}^{N-1} \varepsilon_{nT+\nu}(\tilde\alpha)\, \frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} \to_P \tilde 0 \qquad (3.12)$$

and

$$N^{-1}\sum_{n=0}^{N-1} \frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha}\left(\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha}\right)' \to_P \Gamma_\nu(\tilde\alpha, \tilde\sigma^2) \qquad (3.13)$$

as $N \to \infty$ now follow. Relation (3.12) and the independence of $\varepsilon_t(\tilde\alpha)$ and $\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha$ for each fixed $t$ establish the result in part (i).

From (3.7), (3.11), and causality, one sees that $\{\tilde Z_n(\tilde\alpha)\}$ is mean zero strictly stationary white noise with $E[\tilde Z_n(\tilde\alpha)\tilde Z_n(\tilde\alpha)'] = A(\tilde\alpha, \tilde\sigma^2)$. Hence, part (ii) is the law of large numbers for $\{\tilde Z_n(\tilde\alpha)\tilde Z_n(\tilde\alpha)'\}$ and follows in a similar manner to part (i) using (3.13).

To prove part (iii), take a partial derivative in (3.6) and use (3.13) to reduce the task to showing that

$$N^{-1}\sum_{n=0}^{N-1} \varepsilon_{nT+\nu}(\tilde\alpha)\, \frac{\partial^2 \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha\,\partial\tilde\alpha'} \to_P \tilde 0 \qquad (3.14)$$

as $N \to \infty$. Using independence of $\varepsilon_t(\tilde\alpha)$ and $\partial^2\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha\,\partial\tilde\alpha'$ for each fixed $t$, we need only show that

$$N^{-1}\sum_{n=0}^{N-1} \frac{\partial^2 \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha\,\partial\tilde\alpha'} \to_P 0 \qquad (3.15)$$

as $N \to \infty$. Taking a second derivative in (3.10) and arguing as with the first derivative shows that $\{\partial^2\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha\,\partial\tilde\alpha'\}$ is a mean zero $(p+q)T \times (p+q)T$ dimensional series satisfying the law of large numbers in (3.15). QED

THEOREM 3.1. For a causal and invertible Gaussian PARMA model with the above assumptions on $\{\varepsilon_t\}$, $\min(p, q) \ge 1$, and $\sigma^2(\nu) > 0$ for each season $\nu$,

$$N^{1/2}(\hat{\tilde\alpha}_{LS} - \tilde\alpha) \to_D N(\tilde 0, A^{-1}(\tilde\alpha, \tilde\sigma^2)) \qquad (3.16)$$

as $N \to \infty$.

PROOF. Let $\mathcal F_{n-1} = \sigma(X_{nT}, X_{nT-1}, \ldots)$ and use (3.7) to get

$$E[\tilde Z_n(\tilde\alpha) \mid \mathcal F_{n-1}] = \sum_{\nu=1}^{T} \sigma^{-2}(\nu)\, E\!\left[\varepsilon_{nT+\nu}(\tilde\alpha)\, \frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} \,\Big|\, \mathcal F_{n-1}\right] = \sum_{\nu=1}^{T} \sigma^{-2}(\nu)\, E[\varepsilon_{nT+\nu}(\tilde\alpha)]\, E\!\left[\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} \,\Big|\, \mathcal F_{n-1}\right] = \tilde 0 \qquad (3.17)$$

since $\varepsilon_{nT+\nu}(\tilde\alpha)$ is independent of $X_{nT}, X_{nT-1}, \ldots$ for each season $\nu$ and $E[\varepsilon_{nT+\nu}(\tilde\alpha)] \equiv 0$. Hence, $\{\tilde S_n(\tilde\alpha)\}$ is a mean zero martingale with respect to $\{\mathcal F_n\}$. Using part (ii) of Lemma 3.1, one can verify that the central limit theorem for martingales (Hall and Heyde, 1980, Ch. 3) applies to $\{\tilde S_n(\tilde\alpha)\}$:

$$N^{-1/2}\tilde S_N(\tilde\alpha) \to_D N(\tilde 0, A(\tilde\alpha, \tilde\sigma^2)) \qquad (3.18)$$

as $N \to \infty$.

Now consider the first-order Taylor expansion

$$\tilde S_N(\hat{\tilde\alpha}_{LS}) = \tilde S_N(\tilde\alpha) + \frac{\partial \tilde S_N(\tilde\alpha)}{\partial\tilde\alpha'}(\hat{\tilde\alpha}_{LS} - \tilde\alpha) + \tilde R_N \qquad (3.19)$$

where $\tilde R_N$ denotes the remainder. A straightforward modification of the proof in Fuller (1996), together with Lemma 3.1, shows that $\hat{\tilde\alpha}_{LS}$ is a consistent estimator of $\tilde\alpha$. It can also be shown that $\tilde R_N = O_P(1)$ as $N \to \infty$. Now use $\tilde S_N(\hat{\tilde\alpha}_{LS}) = \tilde 0$ in (3.19) to get

$$\Big[-N^{-1}\frac{\partial \tilde S_N(\tilde\alpha)}{\partial\tilde\alpha'}\Big]\, N^{1/2}(\hat{\tilde\alpha}_{LS} - \tilde\alpha) = N^{-1/2}\tilde S_N(\tilde\alpha) + o_P(1) \qquad (3.20)$$

Using part (iii) of Lemma 3.1, equation (3.18), and Slutsky's Theorem in equation (3.20) completes our work. QED

REMARK 3.1. If $\{\varepsilon_t\}$ is Gaussian, then the maximum likelihood estimate $\hat{\tilde\alpha}_{ML}$ of $\tilde\alpha$ has the same asymptotic distribution as the weighted least squares estimate $\hat{\tilde\alpha}_{LS}$; Theorem 3.1 hence establishes asymptotic normality of PARMA maximum likelihood estimates from Gaussian models. Lund and Basawa (2000) discuss efficient computation of $\hat{\tilde\alpha}_{ML}$. The maximum likelihood estimates of $\sigma^2(\nu)$, denoted by $\hat\sigma^2_{ML}(\nu)$ for $1 \le \nu \le T$, have the large sample form

$$\hat\sigma^2_{ML}(\nu) = N^{-1}\sum_{n=0}^{N-1} \varepsilon_{nT+\nu}(\hat{\tilde\alpha}_{ML})^2 \qquad (3.21)$$

REMARK 3.2. In the stationary ARMA setting ($\phi_k(\nu) \equiv \phi_k$, $\theta_k(\nu) \equiv \theta_k$ and $\sigma^2(\nu) \equiv \sigma^2$), Theorem 3.1 shows that

$$(NT)^{1/2}(\hat{\tilde\alpha}_{LS} - \tilde\alpha) \to_D N(\tilde 0, \sigma^2\, \Gamma^{-1}(\tilde\alpha, \sigma^2)) \qquad (3.22)$$

as $N \to \infty$, where

$$\Gamma(\tilde\alpha, \sigma^2) = E\!\left[\frac{\partial \varepsilon_t(\tilde\alpha)}{\partial\tilde\alpha}\left(\frac{\partial \varepsilon_t(\tilde\alpha)}{\partial\tilde\alpha}\right)'\right] \qquad (3.23)$$

and $\tilde\alpha = (\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q)'$. In the ARMA setting, the limiting covariance matrix $\sigma^2\Gamma^{-1}(\tilde\alpha, \sigma^2)$ does not depend on $\sigma^2$, as $\Gamma(\tilde\alpha, \sigma^2)$ is $\sigma^2$ times a function of the AR and MA parameters only. As the examples in the next section show, this factorization does not carry over to the PARMA setting.
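To tie Section 3 together, the following self-contained sketch (ours, illustrative only; it assumes numpy and scipy, uses a generic BFGS search rather than the efficient likelihood recursions of Lund and Basawa (2000), and ignores burn-in effects in the simulation) computes the weighted least squares estimate for a PARMA(1,1) by minimizing (3.1) directly.

```python
import numpy as np
from scipy.optimize import minimize

def parma11_wls(x, T, sigma2):
    """Weighted least squares for a PARMA(1,1): minimize (3.1) over
    alpha = (phi_1(1), theta_1(1), ..., phi_1(T), theta_1(T))."""
    def residuals(alpha):
        phi, theta = alpha[0::2], alpha[1::2]
        eps = np.zeros_like(x)
        for t in range(len(x)):          # truncated recursion (3.2)
            v = t % T
            eps[t] = x[t] - (phi[v] * x[t - 1] + theta[v] * eps[t - 1]
                             if t > 0 else 0.0)
        return eps
    def S(alpha):                        # the weighted sum of squares (3.1)
        return np.sum(residuals(alpha).reshape(-1, T) ** 2 / sigma2)
    return minimize(S, x0=np.zeros(2 * T), method="BFGS").x

# Simulate one PARMA(1,1) path (no burn-in; illustration only) and fit it
rng = np.random.default_rng(0)
T, N = 4, 500
phi0 = np.array([0.5, -0.3, 0.6, 0.2])
theta0 = np.array([0.3, 0.1, -0.4, 0.2])
sigma20 = np.array([1.0, 0.5, 2.0, 1.5])
eps = rng.normal(0.0, np.sqrt(np.tile(sigma20, N)))
x = np.zeros(N * T)
for t in range(1, N * T):
    v = t % T
    x[t] = phi0[v] * x[t - 1] + eps[t] + theta0[v] * eps[t - 1]
print(parma11_wls(x, T, sigma20).round(2))  # alternates phi_1(v), theta_1(v)
```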

4. EXAMPLES

Application of Theorem 3.1 requires computation of $A(\tilde\alpha, \tilde\sigma^2)$ (or $\Gamma_\nu(\tilde\alpha, \tilde\sigma^2)$ for each season $\nu$). The rest of this paper addresses this matter with specific PARMA examples.

EXAMPLE 4.1. Consider the causal $p$th order periodic autoregression (PAR($p$)) satisfying

$$X_{nT+\nu} - \sum_{k=1}^{p} \phi_k(\nu)\, X_{nT+\nu-k} = \varepsilon_{nT+\nu} \qquad (4.1)$$

Equation (3.10) reduces to

$$\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = -\sum_{k=1}^{p} \tilde e_{p(\nu-1)+k}\, X_{nT+\nu-k} \qquad (4.2)$$

Multiplying (4.2) by the transpose of itself and taking expectations gives

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = \sum_{k=1}^{p}\sum_{j=1}^{p} E_{p(\nu-1)+k,\, p(\nu-1)+j}\, \mathrm{cov}(X_{nT+\nu-k}, X_{nT+\nu-j}) \qquad (4.3)$$

where $E_{i,j} = \tilde e_i \tilde e_j'$ denotes a $pT \times pT$ matrix whose entries are all zero except for a one in the $i$th row and $j$th column. Using (4.3) in (3.8) reveals the block diagonal form

$$A(\tilde\alpha, \tilde\sigma^2) = \mathrm{block\ diag}(Q_1(\tilde\alpha, \tilde\sigma^2), \ldots, Q_T(\tilde\alpha, \tilde\sigma^2)) \qquad (4.4)$$

where $Q_\nu(\tilde\alpha, \tilde\sigma^2)$ is a $p \times p$ matrix for each season $\nu$ with $(i, j)$th entry

$$Q_\nu(\tilde\alpha, \tilde\sigma^2)_{i,j} = \sigma^{-2}(\nu)\, \mathrm{cov}(X_{nT+\nu-i}, X_{nT+\nu-j}) \qquad 1 \le i, j \le p \qquad (4.5)$$

This classical result was first proven in Theorem 3 of Pagano (1978) using spectral-based methods (Hannan, 1970).

REMARK 4.1. In the general PARMA setting, $A(\tilde\alpha, \tilde\sigma^2)$ can depend on the white noise variances. To see this more clearly, consider Example 4.1 with $p = 1$. Equations (4.4) and (4.5) show that $A(\tilde\alpha, \tilde\sigma^2)$ is a $T \times T$ diagonal matrix with $(\nu, \nu)$th entry

$$A(\tilde\alpha, \tilde\sigma^2)_{\nu,\nu} = \sigma^{-2}(\nu)\, \mathrm{var}(X_{nT+\nu-1}) \qquad 1 \le \nu \le T \qquad (4.6)$$

Theorem 1 of Bloomfield et al. (1994) explicitly computes the PAR(1) periodic variances as

$$\mathrm{var}(X_{nT+\nu}) = r_\nu^2\left(\sum_{k=1}^{\nu} \frac{\sigma^2(k)}{r_k^2} + \frac{r_T^2}{1 - r_T^2}\sum_{k=1}^{T} \frac{\sigma^2(k)}{r_k^2}\right) \qquad 1 \le \nu \le T \qquad (4.7)$$

where $r_\nu = \prod_{l=1}^{\nu} \phi_1(l)$ for each season $\nu$. Causality implies that $|r_T| < 1$ (Vecchia, 1985a). Combining (4.6) and (4.7) gives an example where $A(\tilde\alpha, \tilde\sigma^2)$ depends on $\sigma^2(\nu)$ for all seasons $\nu$. Specifically, one obtains

$$A(\tilde\alpha, \tilde\sigma^2)_{\nu,\nu} = \sigma^{-2}(\nu)\, r_{s(\nu-1)}^2\left(\sum_{k=1}^{s(\nu-1)} \frac{\sigma^2(k)}{r_k^2} + \frac{r_T^2}{1 - r_T^2}\sum_{k=1}^{T} \frac{\sigma^2(k)}{r_k^2}\right) \qquad 1 \le \nu \le T \qquad (4.8)$$

where $s(\nu - 1)$ is the season 'corresponding' to index $\nu - 1$:

$$s(\nu - 1) = \begin{cases} \nu - 1 & 2 \le \nu \le T \\ T & \nu = 1 \end{cases} \qquad (4.9)$$
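Equations (4.6)-(4.9) are easy to evaluate numerically. A sketch (ours, with a hypothetical function name) that returns the asymptotic variances $A^{-1}_{\nu,\nu}$ of the PAR(1) estimates $\hat\phi_1(\nu)$ (these are divided by $N$ per (3.16)):

```python
import numpy as np

def par1_asymptotic_variances(phi1, sigma2):
    """Asymptotic variances of the PAR(1) estimates phi_hat_1(v) from
    (4.6)-(4.9); divide by N per (3.16).  phi1[v-1] = phi_1(v)."""
    phi1, sigma2 = np.asarray(phi1), np.asarray(sigma2)
    r = np.cumprod(phi1)                     # r_v = phi_1(1) ... phi_1(v)
    rT2 = r[-1] ** 2
    assert rT2 < 1, "causality requires |r_T| < 1"
    s = sigma2 / r ** 2                      # sigma^2(k) / r_k^2
    var_x = r ** 2 * (np.cumsum(s) + rT2 / (1 - rT2) * s.sum())   # (4.7)
    var_lag = np.roll(var_x, 1)              # var(X_{nT+v-1}); season s(v-1)
    return sigma2 / var_lag                  # 1 / A_{v,v} from (4.6)

# Constant parameters recover the stationary AR(1) value 1 - phi^2
print(par1_asymptotic_variances([0.6] * 4, [2.0] * 4))   # all 0.64
```

Setting all seasons equal collapses (4.7) to the familiar $\sigma^2/(1 - \phi_1^2)$, so the returned variances reduce to the stationary AR(1) value $1 - \phi_1^2$.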

EXAMPLE 4.2. Consider an invertible first-order periodic moving average (PMA(1)) satisfying

$$X_{nT+\nu} = \varepsilon_{nT+\nu} + \theta_1(\nu)\, \varepsilon_{nT+\nu-1} \qquad (4.10)$$

Equation (3.10) becomes

$$\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = -\theta_1(\nu)\, \frac{\partial \varepsilon_{nT+\nu-1}(\tilde\alpha)}{\partial\tilde\alpha} - \tilde e_\nu\, \varepsilon_{nT+\nu-1}(\tilde\alpha) \qquad (4.11)$$

Multiplying both sides of (4.11) by its own transpose, taking expectations, and using independence of $\varepsilon_t(\tilde\alpha)$ and $\partial\varepsilon_t(\tilde\alpha)/\partial\tilde\alpha$ for each $t$ gives

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = \theta_1^2(\nu)\, \Gamma_{\nu-1}(\tilde\alpha, \tilde\sigma^2) + E_{\nu,\nu}\, \sigma^2(\nu - 1) \qquad 1 \le \nu \le T \qquad (4.12)$$

with the boundary condition $\Gamma_0(\tilde\alpha, \tilde\sigma^2) = \Gamma_T(\tilde\alpha, \tilde\sigma^2)$. It is a tedious but straightforward algebraic matter to use (4.12) $T$ times and apply the boundary condition to get

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = r_\nu^2\left(\sum_{l=1}^{\nu} \frac{\sigma^2(l-1)}{r_l^2}\, E_{l,l} + \frac{r_T^2}{1 - r_T^2}\sum_{l=1}^{T} \frac{\sigma^2(l-1)}{r_l^2}\, E_{l,l}\right) \qquad 1 \le \nu \le T \qquad (4.13)$$

where $r_\nu = \prod_{i=1}^{\nu} \theta_1(i)$ for each season $\nu$. Invertibility implies that $|r_T| < 1$. Equation (4.13) identifies $\Gamma_\nu(\tilde\alpha, \tilde\sigma^2)$ as a diagonal matrix with $(k, k)$th entry

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2)_{k,k} = \frac{r_\nu^2\, \sigma^2(k-1)}{r_k^2}\left(1_{[k \le \nu]} + \frac{r_T^2}{1 - r_T^2}\right) \qquad 1 \le k \le T \qquad (4.14)$$

Using (4.14) in (3.8) shows that $A(\tilde\alpha, \tilde\sigma^2)$ is a diagonal matrix with $(k, k)$th entry

$$A(\tilde\alpha, \tilde\sigma^2)_{k,k} = \frac{\sigma^2(k-1)\, r_T^2}{r_k^2 (1 - r_T^2)}\sum_{l=1}^{T} \frac{r_l^2}{\sigma^2(l)} + \frac{\sigma^2(k-1)}{r_k^2}\sum_{l=k}^{T} \frac{r_l^2}{\sigma^2(l)} \qquad 1 \le k \le T \qquad (4.15)$$
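A parallel sketch (ours, with a hypothetical function name) evaluates the PMA(1) diagonal (4.15); setting all seasons equal recovers the stationary MA(1) variance $1 - \theta_1^2$ noted in the remark that follows.

```python
import numpy as np

def pma1_information_diagonal(theta1, sigma2):
    """Diagonal entries A_{k,k} of (4.15) for a PMA(1); the asymptotic
    variances of theta_hat_1(k) are 1 / A_{k,k} (divided by N per (3.16))."""
    theta1, sigma2 = np.asarray(theta1), np.asarray(sigma2)
    r = np.cumprod(theta1)                   # r_v = theta_1(1) ... theta_1(v)
    rT2 = r[-1] ** 2
    assert rT2 < 1, "invertibility requires |r_T| < 1"
    w = r ** 2 / sigma2                      # r_l^2 / sigma^2(l)
    tail = np.cumsum(w[::-1])[::-1]          # sum over l = k..T of w_l
    sig_prev = np.roll(sigma2, 1)            # sigma^2(k-1), season T for k = 1
    return sig_prev / r ** 2 * (rT2 / (1 - rT2) * w.sum() + tail)

# Constant parameters recover the stationary MA(1) variance 1 - theta^2
print(1 / pma1_information_diagonal([0.5] * 4, [1.0] * 4))   # all 0.75
```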

Equation (4.15) explicitly identifies the asymptotic covariance matrix for the PMA(1) parameter estimates, complementing the PAR(1) result in (4.8).

REMARK 4.2. In the stationary ARMA setting, the asymptotic covariance matrix for AR and MA parameter estimates has an interchangeable structure (Brockwell and Davis, 1991, Section 8.7). For example, the asymptotic variance of the AR(1) estimate of $\phi_1$ is $n^{-1}(1 - \phi_1^2)$ and the asymptotic variance of the MA(1) estimate of $\theta_1$ is $n^{-1}(1 - \theta_1^2)$; merely interchange $\phi_1$ and $\theta_1$. Examples 4.1 and 4.2 show that this interchangeability does not hold in the PARMA setting. This can be explicitly seen in the first-order case by comparing the expressions in (4.8) and (4.15): for a PAR(1), $\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = E_{\nu,\nu}\, \mathrm{var}(X_{nT+\nu-1})$ in (4.3) has one nonzero entry whereas $\Gamma_\nu(\tilde\alpha, \tilde\sigma^2)$ for a PMA(1) in (4.14) has $T$ nonzero entries.

Our last example considers the mixed PARMA(1,1) model.

EXAMPLE 4.3. Consider a causal and invertible first-order PARMA(1,1) series satisfying

$$X_{nT+\nu} = \phi_1(\nu)\, X_{nT+\nu-1} + \varepsilon_{nT+\nu} + \theta_1(\nu)\, \varepsilon_{nT+\nu-1} \qquad (4.16)$$

Then (3.10) becomes

$$\frac{\partial \varepsilon_{nT+\nu}(\tilde\alpha)}{\partial\tilde\alpha} = -\tilde e_{2\nu-1}\, X_{nT+\nu-1} - \theta_1(\nu)\, \frac{\partial \varepsilon_{nT+\nu-1}(\tilde\alpha)}{\partial\tilde\alpha} - \tilde e_{2\nu}\, \varepsilon_{nT+\nu-1}(\tilde\alpha) \qquad (4.17)$$

Multiplying both sides of (4.17) by its own transpose, taking an expectation, and simplifying gives the matrix-valued periodic difference equation

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = \theta_1(\nu)^2\, \Gamma_{\nu-1}(\tilde\alpha, \tilde\sigma^2) + M_\nu \qquad (4.18)$$

with the boundary condition $\Gamma_0(\tilde\alpha, \tilde\sigma^2) = \Gamma_T(\tilde\alpha, \tilde\sigma^2)$. In (4.18),

$$M_\nu = \sigma^2(\nu - 1)\, [E_{2\nu,2\nu-1} + E_{2\nu-1,2\nu} + E_{2\nu,2\nu}] + \mathrm{var}(X_{nT+\nu-1})\, E_{2\nu-1,2\nu-1} \qquad (4.19)$$

In obtaining (4.18), we have used $E[\varepsilon_{nT+\nu-1}(\tilde\alpha)\, X_{nT+\nu-1}] = \sigma^2(\nu - 1)$ and

$$E\!\left[\frac{\partial \varepsilon_{nT+\nu-1}(\tilde\alpha)}{\partial\tilde\alpha}\, X_{nT+\nu-1}\right] = \tilde 0 \qquad (4.20)$$

The solution to (4.18) is

$$\Gamma_\nu(\tilde\alpha, \tilde\sigma^2) = r_{\theta,\nu}^2\left(\sum_{k=1}^{\nu} \frac{M_k}{r_{\theta,k}^2} + \frac{r_{\theta,T}^2}{1 - r_{\theta,T}^2}\sum_{k=1}^{T} \frac{M_k}{r_{\theta,k}^2}\right) \qquad (4.21)$$

where $r_{\theta,\nu} = \prod_{l=1}^{\nu} \theta_1(l)$. Invertibility of the model implies that $|r_{\theta,T}| < 1$.
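The closed form (4.21) is just the solution of the linear periodic recursion (4.18). The generic solver below (ours, with a hypothetical function name; it takes the $M_\nu$ of (4.19) as inputs) implements the closed form and numerically verifies the boundary condition.

```python
import numpy as np

def solve_periodic_recursion(theta1, M):
    """Closed form (4.21) for Gamma_v = theta_1(v)^2 Gamma_{v-1} + M_v with
    boundary Gamma_0 = Gamma_T.  M is a list of T equally-sized matrices."""
    r2 = np.cumprod(np.asarray(theta1) ** 2)          # r_{theta,v}^2
    assert r2[-1] < 1, "invertibility requires |r_{theta,T}| < 1"
    scaled = [Mk / r2k for Mk, r2k in zip(M, r2)]     # M_k / r_{theta,k}^2
    total = sum(scaled)
    Gamma, partial = [], np.zeros_like(M[0])
    for v in range(len(M)):
        partial = partial + scaled[v]                 # sum over k = 1..v
        Gamma.append(r2[v] * (partial + r2[-1] / (1 - r2[-1]) * total))
    return Gamma

# Scalar check that the closed form satisfies (4.18) and the boundary condition
theta1 = [0.3, 0.6, -0.5]
M = [np.array([[2.0]]), np.array([[1.0]]), np.array([[1.5]])]
G = solve_periodic_recursion(theta1, M)
print(np.allclose(G[0], theta1[0] ** 2 * G[-1] + M[0]))   # True
```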

The information matrix $A(\tilde\alpha, \tilde\sigma^2)$ is easily obtained by using (4.21) in (3.8); $\mathrm{var}(X_{nT+\nu})$ is computed explicitly in Lund and Basawa (2000) as

$$\mathrm{var}(X_{nT+\nu}) = r_{\phi,\nu}^2\left(\sum_{k=1}^{\nu} \frac{\delta(k)}{r_{\phi,k}^2} + \frac{r_{\phi,T}^2}{1 - r_{\phi,T}^2}\sum_{k=1}^{T} \frac{\delta(k)}{r_{\phi,k}^2}\right) \qquad (4.22)$$

where

$$\delta(\nu) = \sigma^2(\nu) + \theta_1(\nu)^2\, \sigma^2(\nu - 1) + 2\phi_1(\nu)\, \theta_1(\nu)\, \sigma^2(\nu - 1) \qquad (4.23)$$

and $r_{\phi,\nu} = \prod_{l=1}^{\nu} \phi_1(l)$. Causality of the model implies that $|r_{\phi,T}| < 1$.
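Finally, (4.22)-(4.23) give the seasonal variances needed in (4.19). A short sketch (ours, with a hypothetical function name; the $\theta_1(\nu)^2$ term in (4.23) is our reading of the formula) that reduces to the familiar ARMA(1,1) variance $\sigma^2(1 + \theta^2 + 2\phi\theta)/(1 - \phi^2)$ when the parameters are constant across seasons:

```python
import numpy as np

def parma11_variances(phi1, theta1, sigma2):
    """Seasonal variances var(X_{nT+v}) for a PARMA(1,1) via (4.22)-(4.23)."""
    phi1, theta1, sigma2 = map(np.asarray, (phi1, theta1, sigma2))
    sig_prev = np.roll(sigma2, 1)            # sigma^2(v-1), season T for v = 1
    delta = sigma2 + theta1 ** 2 * sig_prev + 2 * phi1 * theta1 * sig_prev
    r2 = np.cumprod(phi1 ** 2)               # r_{phi,v}^2
    assert r2[-1] < 1, "causality requires |r_{phi,T}| < 1"
    s = delta / r2                           # delta(k) / r_{phi,k}^2
    return r2 * (np.cumsum(s) + r2[-1] / (1 - r2[-1]) * s.sum())   # (4.22)

# Constant parameters recover sigma^2 (1 + theta^2 + 2 phi theta) / (1 - phi^2)
print(parma11_variances([0.5] * 3, [0.3] * 3, [1.0] * 3))   # all ~1.8533
```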

ACKNOWLEDGEMENTS

Robert Lund's research was supported by NSF Grant DMS. The comments of two referees greatly improved this paper.

REFERENCES

BENTARZI, M. and HALLIN, M. (1993) On the invertibility of periodic moving-average models. Journal of Time Series Analysis 15, 263-8.
BLOOMFIELD, P., HURD, H. L. and LUND, R. B. (1994) Periodic correlation in stratospheric ozone data. Journal of Time Series Analysis 15, 127-50.
BREIMAN, L. (1968) Probability. Reading, MA: Addison-Wesley.
BROCKWELL, P. J. and DAVIS, R. A. (1991) Time Series: Theory and Methods (2nd edn). New York: Springer-Verlag.
CIPRA, T. and TLUSTY, P. (1987) Estimation in multiple autoregressive-moving average models using periodicity. Journal of Time Series Analysis 8, 293-300.
DEISTLER, M., DUNSMUIR, W. and HANNAN, E. J. (1978) Vector linear time series models: corrections and extensions. Advances in Applied Probability 10, 360-72.
DUNSMUIR, W. and HANNAN, E. J. (1976) Vector linear time series models. Advances in Applied Probability 8, 339-64.
FULLER, W. A. (1996) Introduction to Statistical Time Series (2nd edn). New York: John Wiley and Sons.
GARDNER, W. and FRANKS, L. E. (1975) Characterization of cyclostationary random signal processes. IEEE Transactions on Information Theory 21, 4-14.
GLADYSHEV, E. G. (1961) Periodically correlated random sequences. Soviet Math. 2, 385-8.
HALL, P. and HEYDE, C. C. (1980) Martingale Limit Theory and its Applications. New York: Academic Press.
HANNAN, E. J. (1955) A test for singularities in Sydney rainfall. Australian Journal of Physics 8, 289-97.
HANNAN, E. J. (1970) Multiple Time Series. New York: John Wiley and Sons.
HANNAN, E. J. and DEISTLER, M. (1988) The Statistical Theory of Linear Systems. New York: Wiley.
HURD, H. L. (1989) Representation of strongly harmonizable periodically correlated processes and their covariances. Journal of Multivariate Analysis 29, 53-67.
JONES, R. H. and BRELSFORD, W. M. (1967) Time series with periodic structure. Biometrika 54, 403-8.
LUND, R. B. and BASAWA, I. V. (2000) Recursive prediction and likelihood evaluation for periodic ARMA models. Journal of Time Series Analysis 20, 75-93.
LÜTKEPOHL, H. (1991) Introduction to Multiple Time Series Analysis. Berlin: Springer-Verlag.
MONIN, A. S. (1963) Stationary and periodic time series in the general circulation of the atmosphere. In Proceedings of the Symposium on Time Series Analysis (ed. M. Rosenblatt). New York: John Wiley and Sons, 144-51.
PAGANO, M. (1978) On periodic and multiple autoregressions. The Annals of Statistics 6, 1310-17.
PARZEN, E. and PAGANO, M. (1979) An approach to modeling seasonally stationary time series. Journal of Econometrics 9, 137-53.
REINSEL, G. C. (1997) Elements of Multivariate Time Series Analysis (2nd edn). New York: Springer-Verlag.
TROUTMAN, B. M. (1979) Some results in periodic autoregressions. Biometrika 67, 365-73.
VECCHIA, A. V. (1985a) Periodic autoregressive-moving average (PARMA) modeling with applications to water resources. Water Resources Bulletin 21, 721-30.
VECCHIA, A. V. (1985b) Maximum likelihood estimation for periodic autoregressive moving average models. Technometrics 27, 375-84.
