5: MULTIVARIATE STATIONARY PROCESSES
1 Some Preliminary Definitions and Concepts

Random Vector: A vector $X = (X_1, \dots, X_n)'$ whose components are scalar-valued random variables on the same probability space.

Vector Random Process: A family of random vectors $\{X_t, t \in T\}$ defined on a probability space, where $T$ is a set of time points. Typically $T = \mathbb{R}$, $T = \mathbb{Z}$ or $T = \mathbb{N}$, the sets of real, integer and natural numbers, respectively.

Time Series Vector: A particular realization of a vector random process.
1.1 The Lag Operator

The lag operator $L$ maps a sequence $\{X_t\}$ into a sequence $\{Y_t\}$ such that $Y_t = LX_t = X_{t-1}$, for all $t$. If we apply $L$ repeatedly, for instance $L(L(LX_t))$, we use the convention $L(L(LX_t)) = L^3 X_t = X_{t-3}$. If we apply $L$ to a constant $c$, $Lc = c$.

Inversion: $L^{-1}$ is the inverse of $L$, such that $L^{-1}(L)X_t = X_t$ and $L^{-1}X_t = X_{t+1}$.

The difference operator is the filter $1 - L$. When applied to $X_t$,
$$(1 - L)X_t = \Delta X_t = X_t - X_{t-1}.$$
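As a quick illustration (my own sketch, not part of the notes), the following Python snippet implements the lag and difference operators as array shifts; `lag` is a hypothetical helper name.

```python
import numpy as np

# Lag operator as an array shift (my own helper, not from the notes).
def lag(x, k=1):
    """(L^k x)_t = x_{t-k}; the first k entries are undefined (NaN)."""
    y = np.full(x.shape, np.nan)
    y[k:] = x[:-k]
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=10).cumsum()           # a random-walk sample path
dx = x - lag(x)                            # (1 - L) x_t = x_t - x_{t-1}
print(np.allclose(dx[1:], np.diff(x)))     # True: matches numpy's difference
```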
1.2 Polynomials in the Lag Operator

We can form (univariate) polynomials: $\phi(L) = \phi_0 L^0 + \phi_1 L + \phi_2 L^2 + \dots + \phi_p L^p$ is a polynomial in the lag operator of order $p$ and is such that
$$\phi(L)X_t = \phi_0 X_t + \phi_1 X_{t-1} + \dots + \phi_p X_{t-p}.$$
Recall the AR(1) process: $(1 - \phi L)Y_t = \varepsilon_t$, with $\varepsilon_t \sim WN$.

Lag polynomials can also be inverted. For a polynomial $\phi(L)$, we look for the coefficients $\alpha_i$ of $\phi(L)^{-1} = \alpha_0 + \alpha_1 L + \alpha_2 L^2 + \dots$ such that $\phi(L)^{-1}\phi(L) = 1$.

Case 1: $p = 1$. Let $\phi(L) = (1 - \phi L)$ with $|\phi| < 1$. To find the inverse, write
$$(1 - \phi L)(\alpha_0 + \alpha_1 L + \alpha_2 L^2 + \dots) = 1$$
and note that all the coefficients of the non-zero powers of $L$ must equal zero. This gives
$$\alpha_0 = 1, \qquad -\phi + \alpha_1 = 0 \;\Rightarrow\; \alpha_1 = \phi,$$
$$-\phi\alpha_1 + \alpha_2 = 0 \;\Rightarrow\; \alpha_2 = \phi^2, \qquad -\phi\alpha_2 + \alpha_3 = 0 \;\Rightarrow\; \alpha_3 = \phi^3,$$
and so on. In general $\alpha_k = \phi^k$, so $(1 - \phi L)^{-1} = \sum_{j=0}^{\infty}\phi^j L^j$, provided that $|\phi| < 1$. It is easy to check this:
$$(1 - \phi L)(1 + \phi L + \phi^2 L^2 + \dots + \phi^k L^k) = 1 - \phi^{k+1}L^{k+1},$$
so
$$(1 + \phi L + \phi^2 L^2 + \dots + \phi^k L^k) = \frac{1 - \phi^{k+1}L^{k+1}}{1 - \phi L}$$
and, as $k \to \infty$, $\sum_{j=0}^{k}\phi^j L^j \to \frac{1}{1 - \phi L}$.
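The geometric-series inverse can be checked numerically. A minimal sketch (mine, not from the notes), assuming $|\phi| < 1$: applying the truncated series $\sum_{j \le k}\phi^j L^j$ after $(1 - \phi L)$ should reproduce $X_t$ up to an error of order $\phi^{k+1}$.

```python
import numpy as np

# Check (mine): (sum_{j<=k} phi^j L^j)(1 - phi L) x_t = x_t - phi^{k+1} x_{t-k-1},
# so for |phi| < 1 the truncated inverse reproduces x_t up to a tiny error.
phi, k = 0.8, 50
rng = np.random.default_rng(1)
x = rng.normal(size=500)

y = x.copy()
y[1:] -= phi * x[:-1]                      # y_t = (1 - phi L) x_t

z = np.zeros_like(x)
for j in range(k + 1):                     # z_t = sum_j phi^j y_{t-j}
    z[j:] += phi**j * (y[:-j] if j > 0 else y)

print(np.max(np.abs(z[k+1:] - x[k+1:])))   # ~ |phi|^{k+1}, essentially zero
```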
Case 2: $p = 2$. Let $\phi(L) = (1 - \phi_1 L - \phi_2 L^2)$. To find the inverse it is useful to factor the polynomial as
$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L),$$
where $\lambda_1, \lambda_2$ are the reciprocals of the roots of the left-hand-side polynomial or, equivalently, the eigenvalues of the matrix
$$\begin{pmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{pmatrix}.$$
Suppose $|\lambda_1|, |\lambda_2| < 1$ and $\lambda_1 \neq \lambda_2$. We have that $(1 - \phi_1 L - \phi_2 L^2)^{-1} = (1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1}$, so we can use what we have seen above for the case $p = 1$. Using a partial-fraction expansion we can write
$$(1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1} = (\lambda_1 - \lambda_2)^{-1}\left[\frac{\lambda_1}{1 - \lambda_1 L} - \frac{\lambda_2}{1 - \lambda_2 L}\right]$$
$$= \frac{\lambda_1}{\lambda_1 - \lambda_2}\left[1 + \lambda_1 L + \lambda_1^2 L^2 + \dots\right] - \frac{\lambda_2}{\lambda_1 - \lambda_2}\left[1 + \lambda_2 L + \lambda_2^2 L^2 + \dots\right]$$
$$= (c_1 + c_2) + (c_1\lambda_1 + c_2\lambda_2)L + (c_1\lambda_1^2 + c_2\lambda_2^2)L^2 + \dots$$
where $c_1 = \lambda_1/(\lambda_1 - \lambda_2)$ and $c_2 = -\lambda_2/(\lambda_1 - \lambda_2)$.
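A small numerical check (my own, with hypothetical $\phi_1, \phi_2$) that the $\lambda_i$ are both the eigenvalues of the matrix above and the reciprocals of the polynomial roots:

```python
import numpy as np

# Sketch (mine): in 1 - phi1*z - phi2*z^2 = (1 - lambda1*z)(1 - lambda2*z),
# the lambda_i are the eigenvalues of the companion matrix [[phi1, phi2], [1, 0]].
phi1, phi2 = 0.6, 0.2
lam = np.linalg.eigvals(np.array([[phi1, phi2], [1.0, 0.0]]))
roots = np.roots([-phi2, -phi1, 1.0])       # roots of 1 - phi1*z - phi2*z^2
print(np.sort(lam), np.sort(1.0 / roots))   # identical: lambdas = reciprocal roots
```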
Matrix polynomial in the lag operator: $\Phi(L)$ is a matrix polynomial in the lag operator if its elements are polynomials in the lag operator, e.g.
$$\Phi(L) = \begin{pmatrix} 1 & -0.5L \\ L & 1+L \end{pmatrix} = \Phi_0 + \Phi_1 L,$$
where
$$\Phi_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \Phi_1 = \begin{pmatrix} 0 & -0.5 \\ 1 & 1 \end{pmatrix}, \qquad \Phi_{j>1} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
When applied to a vector $X_t$ we obtain
$$\Phi(L)X_t = \begin{pmatrix} 1 & -0.5L \\ L & 1+L \end{pmatrix}\begin{pmatrix} X_{1t} \\ X_{2t} \end{pmatrix} = \begin{pmatrix} X_{1t} - 0.5X_{2,t-1} \\ X_{1,t-1} + X_{2t} + X_{2,t-1} \end{pmatrix}.$$
The inverse is a matrix polynomial such that $\Phi(L)^{-1}\Phi(L) = I$. Suppose $\Phi(L) = (I - \Phi_1 L)$. Its inverse $\Phi(L)^{-1} = A(L)$ is a matrix polynomial such that $(A_0 + A_1 L + A_2 L^2 + \dots)(I - \Phi_1 L) = I$. Matching coefficients on the powers of $L$,
$$A_0 = I$$
$$A_1 - A_0\Phi_1 = 0 \;\Rightarrow\; A_1 = \Phi_1$$
$$A_2 - A_1\Phi_1 = 0 \;\Rightarrow\; A_2 = \Phi_1^2$$
$$\vdots$$
$$A_k - A_{k-1}\Phi_1 = 0 \;\Rightarrow\; A_k = \Phi_1^k \tag{1}$$
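A minimal check (mine, with a hypothetical stable $\Phi_1$) that the sequence $A_k = \Phi_1^k$ inverts $(I - \Phi_1 L)$ term by term:

```python
import numpy as np

# Illustration (my own, hypothetical Phi1): the inverse coefficients
# A_k = Phi1^k satisfy A_0 = I and A_k - A_{k-1} Phi1 = 0 for k >= 1,
# provided the eigenvalues of Phi1 are inside the unit circle.
Phi1 = np.array([[0.5, 0.1], [0.2, 0.3]])
K = 30
A = [np.linalg.matrix_power(Phi1, k) for k in range(K + 1)]

print(np.allclose(A[0], np.eye(2)))                       # L^0 term is I
print(all(np.allclose(A[k] - A[k-1] @ Phi1, 0) for k in range(1, K + 1)))
```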
1.3 Covariance Stationarity

Let $Y_t$ be an $n$-dimensional vector of time series, $Y_t = [Y_{1t}, \dots, Y_{nt}]'$. Then $Y_t$ is covariance (weakly) stationary if $E(Y_t) = \mu$ and the autocovariance matrices
$$\Gamma_j = E[(Y_t - \mu)(Y_{t-j} - \mu)']$$
are independent of $t$ for all $j$, and both are finite.

Stationarity of each of the components of $Y_t$ does not imply stationarity of the vector $Y_t$. Stationarity in the vector case requires that the components of the vector are stationary and costationary.

Although $\gamma_j = \gamma_{-j}$ for a scalar process, the same is not true for a vector process. The correct relation is $\Gamma_j = \Gamma_{-j}'$.

Example: $n = 2$ and $\mu = 0$:
$$\Gamma_1 = \begin{pmatrix} E(Y_{1t}Y_{1,t-1}) & E(Y_{1t}Y_{2,t-1}) \\ E(Y_{2t}Y_{1,t-1}) & E(Y_{2t}Y_{2,t-1}) \end{pmatrix} = \begin{pmatrix} E(Y_{1,t+1}Y_{1t}) & E(Y_{1,t+1}Y_{2t}) \\ E(Y_{2,t+1}Y_{1t}) & E(Y_{2,t+1}Y_{2t}) \end{pmatrix} = \begin{pmatrix} E(Y_{1t}Y_{1,t+1}) & E(Y_{1t}Y_{2,t+1}) \\ E(Y_{2t}Y_{1,t+1}) & E(Y_{2t}Y_{2,t+1}) \end{pmatrix}' = \Gamma_{-1}'.$$
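The asymmetry of $\Gamma_1$ and the relation $\Gamma_1 = \Gamma_{-1}'$ can be seen in simulation. A sketch (mine), using a hypothetical bivariate VAR(1) as the data-generating process:

```python
import numpy as np

# Simulation (mine, hypothetical VAR(1) as data-generating process):
# the sample Gamma_1 is not symmetric, but it equals the transpose of
# the sample Gamma_{-1}, illustrating Gamma_j = Gamma_{-j}'.
rng = np.random.default_rng(2)
A = np.array([[0.5, 0.3], [0.0, 0.4]])
T = 100_000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = A @ Y[t-1] + rng.normal(size=2)

G1 = Y[1:].T @ Y[:-1] / (T - 1)            # sample Gamma_1  = E[Y_t Y_{t-1}']
Gm1 = Y[:-1].T @ Y[1:] / (T - 1)           # sample Gamma_{-1} = E[Y_t Y_{t+1}']
print(np.allclose(G1, G1.T, atol=1e-2))    # False: Gamma_1 is not symmetric
print(np.allclose(G1, Gm1.T))              # True: Gamma_1 = Gamma_{-1}'
```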
1.4 Convergence of Random Variables

We refresh three concepts of stochastic convergence in the univariate case and then extend them to the multivariate framework. Let $\{x_T, T = 1, 2, \dots\}$ be a sequence of random variables.

Convergence in probability: $\{x_T\}_{T=1}^{\infty}$ converges in probability to $c$ (written $x_T \xrightarrow{p} c$) if, for every $\epsilon > 0$,
$$\lim_{T\to\infty} P(|x_T - c| > \epsilon) = 0.$$
When this condition is satisfied, the number $c$ is called the probability limit, or plim, of the sequence $\{x_T\}$, written $\mathrm{plim}\, x_T = c$.

Convergence in mean square: The sequence of random variables $\{x_T\}$ converges in mean square to $c$, denoted $x_T \xrightarrow{m.s.} c$, if
$$\lim_{T\to\infty} E(x_T - c)^2 = 0.$$
Convergence in distribution: Let $\{x_T\}_{T=1}^{\infty}$ be a sequence of random variables, let $F_T$ denote the cumulative distribution function of $x_T$, and let $F$ denote the cumulative distribution function of the scalar $x$. The sequence is said to converge in distribution to $x$, written $x_T \xrightarrow{d} x$ (or $x_T \xrightarrow{L} x$), if for all real numbers $c$ at which $F$ is continuous
$$\lim_{T\to\infty} F_T(c) = F(c).$$
The concepts of stochastic convergence can be generalized to the multivariate setting. Suppose $\{X_T\}$ is a sequence of $n$-dimensional random vectors and $X$ is an $n$-dimensional random vector. Then

1. $X_T \xrightarrow{p} X$ if $X_{jT} \xrightarrow{p} X_j$ for $j = 1, \dots, n$.
2. $X_T \xrightarrow{m.s.} X$ if $\lim_{T\to\infty} E[(X_T - X)'(X_T - X)] = 0$.
3. $X_T \xrightarrow{d} X$ if $\lim_{T\to\infty} F_T(c) = F(c)$, where $F_T$ and $F$ are the joint distribution functions of $X_T$ and $X$.

Proposition C.1 (L). Suppose $\{X_T\}_{T=1}^{\infty}$ is a sequence of $n$-dimensional random vectors. Then the following relations hold:

(a) $X_T \xrightarrow{m.s.} X \;\Rightarrow\; X_T \xrightarrow{p} X \;\Rightarrow\; X_T \xrightarrow{d} X$.

(c) (Slutsky's Theorem) If $g: \mathbb{R}^n \to \mathbb{R}^m$ is a continuous function, then
$$X_T \xrightarrow{p} X \;\Rightarrow\; g(X_T) \xrightarrow{p} g(X) \quad (\text{i.e. } \mathrm{plim}\, g(X_T) = g(\mathrm{plim}\, X_T)),$$
$$X_T \xrightarrow{d} X \;\Rightarrow\; g(X_T) \xrightarrow{d} g(X).$$
Proposition C.2 (L). Suppose $\{X_T\}$ and $\{Y_T\}$ are sequences of $n \times 1$ random vectors, $\{A_T\}$ is a sequence of $n \times n$ random matrices, $X$ is an $n \times 1$ random vector, $c$ is a fixed $n \times 1$ vector, and $A$ is a fixed $n \times n$ matrix.

1. If $\mathrm{plim}\, X_T$, $\mathrm{plim}\, Y_T$ and $\mathrm{plim}\, A_T$ exist, then
(a) $\mathrm{plim}\,(X_T \pm Y_T) = \mathrm{plim}\, X_T \pm \mathrm{plim}\, Y_T$;
(b) $\mathrm{plim}\, c'X_T = c'\,\mathrm{plim}\, X_T$;
(c) $\mathrm{plim}\, X_T'Y_T = (\mathrm{plim}\, X_T)'(\mathrm{plim}\, Y_T)$;
(d) $\mathrm{plim}\, A_T X_T = (\mathrm{plim}\, A_T)(\mathrm{plim}\, X_T)$.
2. If $X_T \xrightarrow{d} X$ and $\mathrm{plim}\,(X_T - Y_T) = 0$, then $Y_T \xrightarrow{d} X$.
3. If $X_T \xrightarrow{d} X$ and $\mathrm{plim}\, Y_T = c$, then
(a) $X_T + Y_T \xrightarrow{d} X + c$;
(b) $Y_T'X_T \xrightarrow{d} c'X$.
4. If $X_T \xrightarrow{d} X$ and $\mathrm{plim}\, A_T = A$, then $A_T X_T \xrightarrow{d} AX$.
5. If $X_T \xrightarrow{d} X$ and $\mathrm{plim}\, A_T = 0$, then $\mathrm{plim}\, A_T X_T = 0$.
Example: Let $\{X_T\}$ be a sequence of $n \times 1$ random vectors with $X_T \xrightarrow{d} N(\mu, \Omega)$, and let $\{Y_T\}$ be a sequence of $n \times 1$ random vectors with $Y_T \xrightarrow{p} C$. Then $Y_T'X_T \xrightarrow{d} N(C'\mu, C'\Omega C)$.
1.5 Infinite Sums of Random Variables

An infinite sequence of real numbers $\{a_i\}$, $i = 0, \pm 1, \pm 2, \dots$ is absolutely summable if
$$\lim_{n\to\infty} \sum_{i=-n}^{n} |a_i| < \infty.$$
A sequence of $k \times k$ matrices $\{A_i\}$ is absolutely summable if each element sequence is absolutely summable, $\lim_{n\to\infty} \sum_{i=-n}^{n} |a_{mn,i}| < \infty$, where $a_{mn,i}$ is the element $(m, n)$ of $A_i$. Equivalently, it is absolutely summable if its Euclidean norm, $\|A_i\| = \left(\sum_m \sum_n a_{mn,i}^2\right)^{1/2}$, is absolutely summable.

Proposition C.9 (L). Suppose $\{A_i\}$ is an absolutely summable sequence of $k \times k$ matrices and $\{z_t\}$ is a sequence of $k$-dimensional random vectors satisfying $E(z_t'z_t) \le c$, $t = 0, \pm 1, \pm 2, \dots$, for some finite constant $c$. Then there exists a sequence of $k$-dimensional random vectors $\{y_t\}$ such that
$$\sum_{i=-n}^{n} A_i z_{t-i} \xrightarrow{m.s.} y_t.$$
1.6 Limit Theorems

The Law of Large Numbers and the Central Limit Theorem are the most important results for computing the limits of sequences of random variables. There are many versions of the LLN and CLT that differ in the assumptions about the dependence of the variables.

Proposition C.12 (L) (Weak law of large numbers).

1. (i.i.d. sequences) Let $\{y_t\}$ be an i.i.d. sequence of random variables with $E(y_t) = \mu < \infty$. Then
$$\bar{y}_T = T^{-1}\sum_{t=1}^{T} y_t \xrightarrow{p} \mu.$$

2. (independent sequences) Let $\{y_t\}$ be a sequence of independent random variables with $E(y_t) = \mu < \infty$ and $E|y_t|^{1+\epsilon} \le c < \infty$ for some $\epsilon > 0$ and a finite constant $c$. Then $T^{-1}\sum_{t=1}^{T} y_t \xrightarrow{p} \mu$.

3. (stationary processes) Let $y_t$ be a covariance stationary process with finite $E(y_t) = \mu$ and $E[(y_t - \mu)(y_{t-j} - \mu)] = \gamma_j$, with absolutely summable autocovariances $\sum_{j=0}^{\infty} |\gamma_j| < \infty$. Then $\bar{y}_T \xrightarrow{m.s.} \mu$ and hence $\bar{y}_T \xrightarrow{p} \mu$.

Weak stationarity and absolutely summable autocovariances are sufficient conditions for a law of large numbers to hold.

Proposition C.13 (L) (Central limit theorem).

1. (i.i.d. sequences) Let $\{Y_t\}$ be a sequence of $k$-dimensional i.i.d. random vectors with mean $\mu$ and covariance matrix $\Sigma$. Then
$$\sqrt{T}(\bar{Y}_T - \mu) \xrightarrow{d} N(0, \Sigma).$$

2. (stationary processes) Let $Y_t = \mu + \sum_{j=0}^{\infty}\Phi_j\varepsilon_{t-j}$ be an $n$-dimensional stationary random process with $E(Y_t) = \mu < \infty$, $\sum_{j=0}^{\infty}\|\Phi_j\| < \infty$ and $\varepsilon_t \sim$ i.i.d.$(0, \Sigma)$. Then
$$\sqrt{T}(\bar{Y}_T - \mu) \xrightarrow{d} N\Big(0, \sum_{j=-\infty}^{\infty}\Gamma_j\Big),$$
where $\Gamma_j$ is the autocovariance matrix at lag $j$.
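A quick Monte Carlo illustration of the CLT (my own sketch, not from the notes):

```python
import numpy as np

# Monte Carlo check (mine): for an i.i.d. sequence, sqrt(T)*(ybar - mu)
# has variance close to Var(y_t), as the CLT predicts.
rng = np.random.default_rng(3)
mu, sigma, T, reps = 1.0, 2.0, 1_000, 5_000
y = rng.normal(mu, sigma, size=(reps, T))
stat = np.sqrt(T) * (y.mean(axis=1) - mu)
print(stat.var())          # close to sigma^2 = 4
```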
2 Some Stationary Processes

2.1 White Noise (WN)

An $n$-dimensional vector white noise $\epsilon_t = [\epsilon_{1t}, \dots, \epsilon_{nt}]' \sim WN(0, \Omega)$ is such that $E(\epsilon_t) = 0$ and $\Gamma_k = \Omega$ ($\Omega$ a symmetric positive definite matrix) if $k = 0$, and $\Gamma_k = 0$ if $k \neq 0$. If $\epsilon_t, \epsilon_\tau$ are independent for $t \neq \tau$, the process is an independent vector white noise (i.i.d.). If in addition $\epsilon_t \sim N$, the process is a Gaussian WN.

Important: a vector whose components are white noise is not necessarily a vector white noise. Example: let $u_t$ be a scalar white noise and define $\epsilon_t = (u_t, u_{t-1})'$. Then
$$E(\epsilon_t\epsilon_t') = \begin{pmatrix} \sigma_u^2 & 0 \\ 0 & \sigma_u^2 \end{pmatrix} \quad \text{and} \quad E(\epsilon_t\epsilon_{t-1}') = \begin{pmatrix} 0 & 0 \\ \sigma_u^2 & 0 \end{pmatrix} \neq 0,$$
so $\epsilon_t$ is not a vector white noise.
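The counterexample can be verified numerically. A sketch (mine):

```python
import numpy as np

# Checks the counterexample (my own sketch): stacking a scalar white noise
# with its own lag gives a vector whose lag-1 autocovariance matrix is
# nonzero, so the vector is not a vector white noise.
rng = np.random.default_rng(4)
T = 1_000_000
u = rng.normal(size=T)
eps = np.column_stack([u[1:], u[:-1]])     # eps_t = (u_t, u_{t-1})'

Gamma1 = eps[1:].T @ eps[:-1] / (T - 2)    # E[eps_t eps_{t-1}']
print(Gamma1.round(3))                     # approx [[0, 0], [1, 0]]
```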
2.2 Vector Moving Average (VMA)

Given the $n$-dimensional vector white noise $\epsilon_t$, a vector moving average of order $q$ is defined as
$$Y_t = \mu + \epsilon_t + C_1\epsilon_{t-1} + \dots + C_q\epsilon_{t-q},$$
where the $C_j$ are $n \times n$ matrices of coefficients.

The VMA(1). Let us consider the VMA(1)
$$Y_t = \mu + \epsilon_t + C_1\epsilon_{t-1}$$
with $\epsilon_t \sim WN(0, \Omega)$; $\mu$ is the mean of $Y_t$. The variance of the process is given by
$$\Gamma_0 = E[(Y_t - \mu)(Y_t - \mu)'] = \Omega + C_1\Omega C_1',$$
with autocovariances
$$\Gamma_1 = C_1\Omega, \qquad \Gamma_{-1} = \Omega C_1', \qquad \Gamma_j = 0 \text{ for } |j| > 1.$$
The VMA(q). Let us consider the VMA(q)
$$Y_t = \mu + \epsilon_t + C_1\epsilon_{t-1} + \dots + C_q\epsilon_{t-q}$$
with $\epsilon_t \sim WN(0, \Omega)$; $\mu$ is the mean of $Y_t$. The variance of the process is given by
$$\Gamma_0 = E[(Y_t - \mu)(Y_t - \mu)'] = \Omega + C_1\Omega C_1' + C_2\Omega C_2' + \dots + C_q\Omega C_q',$$
with autocovariances
$$\Gamma_j = C_j\Omega + C_{j+1}\Omega C_1' + C_{j+2}\Omega C_2' + \dots + C_q\Omega C_{q-j}' \quad \text{for } j = 1, 2, \dots, q,$$
$$\Gamma_{-j} = \Omega C_j' + C_1\Omega C_{j+1}' + C_2\Omega C_{j+2}' + \dots + C_{q-j}\Omega C_q' \quad \text{for } j = 1, 2, \dots, q,$$
$$\Gamma_j = 0 \quad \text{for } |j| > q.$$
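These formulas are easy to code. A sketch (mine, with made-up $C_j$ and $\Omega$) computing $\Gamma_j$ for a VMA(2) and confirming the properties above:

```python
import numpy as np

# Sketch (mine, hypothetical C_j and Omega): Gamma_j = sum_k C_{j+k} Omega C_k'
# with C_0 = I, and Gamma_j = 0 beyond lag q.
C = [np.eye(2),                            # C_0 = I
     np.array([[0.5, 0.2], [0.0, 0.3]]),   # C_1 (hypothetical)
     np.array([[0.1, 0.0], [0.4, 0.2]])]   # C_2 (hypothetical)
Omega = np.array([[1.0, 0.3], [0.3, 1.0]])
q = len(C) - 1

def gamma(j):
    """Gamma_j = sum_{k=0}^{q-j} C_{j+k} Omega C_k' for 0 <= j <= q, else 0."""
    if abs(j) > q:
        return np.zeros((2, 2))
    if j < 0:
        return gamma(-j).T                 # Gamma_{-j} = Gamma_j'
    return sum(C[j + k] @ Omega @ C[k].T for k in range(q - j + 1))

print(gamma(1))
print(np.allclose(gamma(-1), gamma(1).T))  # True
print(gamma(3))                            # zero matrix: beyond lag q
```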
The VMA(∞). A useful process, as we will see, is the VMA(∞)
$$Y_t = \mu + \sum_{j=0}^{\infty} C_j\varepsilon_{t-j}, \tag{2}$$
which can be thought of as the limiting case of a VMA(q) (for $q \to \infty$). By the previous result, the process converges in mean square if $\{C_j\}$ is absolutely summable.

Proposition (10.2H). Let $Y_t$ be an $n \times 1$ vector satisfying
$$Y_t = \mu + \sum_{j=0}^{\infty} C_j\varepsilon_{t-j},$$
where $\varepsilon_t$ is a vector WN with $E(\varepsilon_t) = 0$ and $E(\varepsilon_t\varepsilon_{t-j}') = \Omega$ for $j = 0$ and zero otherwise, and $\{C_j\}_{j=0}^{\infty}$ is absolutely summable. Let $Y_{it}$ denote the $i$th element of $Y_t$ and $\mu_i$ the $i$th element of $\mu$. Then

(a) The autocovariance between the $i$th variable at time $t$ and the $j$th variable $s$ periods earlier, $E[(Y_{it} - \mu_i)(Y_{j,t-s} - \mu_j)]$, exists and is given by
the row $i$, column $j$ element of
$$\Gamma_s = \sum_{v=0}^{\infty} C_{s+v}\Omega C_v'$$
for $s = 0, 1, 2, \dots$

(b) The sequence of matrices $\{\Gamma_s\}_{s=0}^{\infty}$ is absolutely summable.

If furthermore $\{\varepsilon_t\}_{t=-\infty}^{\infty}$ is an i.i.d. sequence with $E|\varepsilon_{i_1t}\varepsilon_{i_2t}\varepsilon_{i_3t}\varepsilon_{i_4t}| < \infty$ for $i_1, i_2, i_3, i_4 = 1, 2, \dots, n$, then also

(c) $E|Y_{i_1t_1}Y_{i_2t_2}Y_{i_3t_3}Y_{i_4t_4}| < \infty$ for all $t_1, t_2, t_3, t_4$;

(d) $(1/T)\sum_{t=1}^{T} Y_{it}Y_{j,t-s} \xrightarrow{p} E(Y_{it}Y_{j,t-s})$ for $i, j = 1, 2, \dots, n$ and for all $s$.

Implications:

1. Result (a) implies that the second moments of an MA(∞) with absolutely summable coefficients can be found by taking the limit of the autocovariances of an MA(q).
2. Result (b) ensures ergodicity for the mean.
3. Result (c) says that $Y_t$ has bounded fourth moments.
4. Result (d) says that $Y_t$ is ergodic for second moments.

Note: the vector MA(∞) representation of a stationary VAR satisfies the absolute summability condition, so the assumptions of the previous proposition hold.
2.3 Invertible and Fundamental VMA

The VMA is invertible, i.e. it possesses a VAR representation, if and only if the determinant of $C(L)$ vanishes only outside the unit circle, i.e. if $\det(C(z)) \neq 0$ for all $|z| \le 1$.

Example. Consider the process
$$\begin{pmatrix} Y_{1t} \\ Y_{2t} \end{pmatrix} = \begin{pmatrix} 1 & L \\ 0 & \theta - L \end{pmatrix}\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix}, \qquad \det(C(z)) = \theta - z,$$
which is zero for $z = \theta$. The process is invertible if and only if $|\theta| > 1$.

The VMA is fundamental if and only if $\det(C(z)) \neq 0$ for all $|z| < 1$. In the previous example the process is fundamental if and only if $|\theta| \ge 1$. In the case $|\theta| = 1$ the process is fundamental but noninvertible.

Provided that $|\theta| > 1$, the MA process can be inverted and the shock can be obtained as a combination of present and past values of $Y_t$; that is, the VAR
(Vector Autoregressive) representation can be recovered. The representation will entail infinitely many lags of $Y_t$ with absolutely summable coefficients, so that the process converges in mean square. Considering the above example,
$$\begin{pmatrix} 1 & -\frac{L}{\theta}\,\frac{1}{1 - \frac{1}{\theta}L} \\ 0 & \frac{1}{\theta}\,\frac{1}{1 - \frac{1}{\theta}L} \end{pmatrix}\begin{pmatrix} Y_{1t} \\ Y_{2t} \end{pmatrix} = \begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix}$$
or
$$Y_{1t} - L\,\frac{1}{\theta}\,\frac{1}{1 - \frac{1}{\theta}L}\,Y_{2t} = \varepsilon_{1t}, \qquad \frac{1}{\theta}\,\frac{1}{1 - \frac{1}{\theta}L}\,Y_{2t} = \varepsilon_{2t}. \tag{3}$$
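A numerical check of the inversion (my own sketch, assuming $\theta = 2$): the second equation in (3) recovers $\varepsilon_{2t}$ from current and past $Y_{2t}$ via a truncated geometric filter.

```python
import numpy as np

# Check (mine, theta = 2): for |theta| > 1 the inverted filter
# eps_2t = (1/theta) * sum_j (1/theta)^j Y_{2,t-j} recovers the shock.
theta, T, K = 2.0, 5_000, 60
rng = np.random.default_rng(5)
e2 = rng.normal(size=T)
Y2 = theta * e2.copy()
Y2[1:] -= e2[:-1]                          # Y_2t = (theta - L) eps_2t

rec = np.zeros(T)
for j in range(K + 1):                     # truncated inverse filter
    rec[j:] += (1/theta)**(j + 1) * (Y2[:-j] if j > 0 else Y2)

print(np.max(np.abs(rec[K+1:] - e2[K+1:])))  # tiny truncation error
```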
2.4 Wold Decomposition

Any zero-mean stationary vector process $Y_t$ admits the following representation:
$$Y_t = C(L)\varepsilon_t + \mu_t, \tag{4}$$
where $C(L)\epsilon_t$ is the stochastic component with $C(L) = \sum_{i=0}^{\infty} C_iL^i$, and $\mu_t$ is the purely deterministic component, the one perfectly predictable using linear combinations of past $Y_t$. If $\mu_t = 0$ the process is said to be regular. Here we only consider regular processes.

(4) is the Wold representation of $Y_t$, which is unique and for which the following properties hold:

(a) $\epsilon_t$ is the innovation for $Y_t$, i.e. $\epsilon_t = Y_t - \mathrm{Proj}(Y_t \mid Y_{t-1}, Y_{t-2}, \dots)$, i.e. the shock is fundamental.

(b) $\epsilon_t$ is white noise: $E\epsilon_t = 0$, $E\epsilon_t\epsilon_\tau' = 0$ for $t \neq \tau$, $E\epsilon_t\epsilon_t' = \Omega$.

(c) The coefficients are square summable: $\sum_{j=0}^{\infty} \|C_j\|^2 < \infty$.

(d) $C_0 = I$.
The result is very powerful since it holds for any covariance stationary process. However, the theorem does not imply that (4) is the true representation of the process. For instance, the process could be stationary but nonlinear or noninvertible.
2.5 Other Fundamental MA(∞) Representations

It is easy to extend the Wold representation to the general class of invertible MA(∞) representations. For any nonsingular matrix $R$ of constants we define $u_t = R^{-1}\epsilon_t$, and we have
$$Y_t = C(L)Ru_t = D(L)u_t,$$
where $u_t \sim WN(0, R^{-1}\Omega R^{-1\prime})$. Notice that all these representations, obtained as linear combinations of the Wold representation, are fundamental. In fact, $\det(C(z)R) = \det(C(z))\det(R)$, so if $\det(C(z)) \neq 0$ for all $|z| < 1$, then also $\det(C(z)R) \neq 0$ for all $|z| < 1$.
3 VAR: Representations

Every stationary vector process $Y_t$ admits a Wold representation. If the MA matrix lag polynomial is invertible, then a unique VAR exists. We define $C(L)^{-1}$ as an $(n \times n)$ lag polynomial such that $C(L)^{-1}C(L) = I$; i.e. when these lag polynomial matrices are matrix-multiplied, all the lag terms cancel out. This operation in effect converts lags of the errors into lags of the vector of dependent variables; thus we move from MA coefficients to VAR coefficients. Define $A(L) = C(L)^{-1}$. Then, given the (invertible) MA coefficients, it is easy to map these into the VAR coefficients:
$$Y_t = C(L)\epsilon_t \;\Longleftrightarrow\; A(L)Y_t = \epsilon_t, \tag{5}$$
where $A(L) = A_0 - A_1L - A_2L^2 - \dots$ and the $A_j$, for all $j$, are $(n \times n)$ matrices of coefficients.
To show that this matrix lag polynomial exists and how it maps into the coefficients in $C(L)$, note that by assumption we have the identity
$$(A_0 - A_1L - A_2L^2 - \dots)(I + C_1L + C_2L^2 + \dots) = I.$$
After distributing, the identity implies that the coefficients on the non-zero powers of the lag operator must be zero, which implies the following recursive solution for the VAR coefficients:
$$A_0 = I$$
$$A_1 = A_0C_1$$
$$\vdots$$
$$A_k = A_0C_k - A_1C_{k-1} - \dots - A_{k-1}C_1$$
As noted, the VAR is of infinite order (i.e. an infinite number of lags is required to fully represent the joint density). In practice, the VAR is usually restricted for estimation by truncating the lag length. Recall that the AR coefficients are absolutely summable and vanish at long lags.
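The recursion is straightforward to implement. A sketch (mine): feeding in the MA(∞) coefficients of a VAR(1), $C_k = \Phi^k$ for a hypothetical $\Phi$, should return $A_1 = \Phi$ and $A_k = 0$ for $k > 1$.

```python
import numpy as np

# Sketch (mine): recover VAR coefficients from MA coefficients via
# A_0 = I, A_k = C_k - sum_{i=1}^{k-1} A_i C_{k-i}.
Phi = np.array([[0.5, 0.2], [0.1, 0.4]])   # hypothetical VAR(1) matrix
K = 10
C = [np.linalg.matrix_power(Phi, k) for k in range(K + 1)]

A = [np.eye(2)]
for k in range(1, K + 1):
    A.append(C[k] - sum(A[i] @ C[k - i] for i in range(1, k)))

print(np.allclose(A[1], Phi))                              # True
print(all(np.allclose(A[k], 0) for k in range(2, K + 1)))  # True
```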
The $p$th-order vector autoregression, denoted VAR(p), is given by
$$Y_t = A_1Y_{t-1} + A_2Y_{t-2} + \dots + A_pY_{t-p} + \epsilon_t. \tag{6}$$
Note: here we are considering zero-mean processes. In case the mean of $Y_t$ is not zero we should add a constant to the VAR equations.

VAR(1) representation. Any VAR(p) can be rewritten as a VAR(1). To form a VAR(1) from the general model we define
$$e_t = [\epsilon_t', 0, \dots, 0]', \qquad \mathbf{Y}_t = [Y_t', Y_{t-1}', \dots, Y_{t-p+1}']',$$
$$\mathbf{A} = \begin{pmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I_n & 0 & \cdots & 0 & 0 \\ 0 & I_n & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I_n & 0 \end{pmatrix}.$$
Therefore we can rewrite the VAR(p) as a VAR(1):
$$\mathbf{Y}_t = \mathbf{A}\mathbf{Y}_{t-1} + e_t.$$
This is also known as the companion form of the VAR(p).
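A small helper (my own, with hypothetical coefficient matrices) that builds the companion matrix from $[A_1, \dots, A_p]$:

```python
import numpy as np

# Helper (mine): build the companion matrix of a VAR(p) from the list of
# n x n coefficient matrices [A1, ..., Ap].
def companion(A_list):
    n, p = A_list[0].shape[0], len(A_list)
    top = np.hstack(A_list)                          # [A1 A2 ... Ap]
    bottom = np.eye(n * (p - 1), n * p)              # [I_{n(p-1)}  0]
    return np.vstack([top, bottom])

A1 = np.array([[0.5, 0.1], [0.0, 0.3]])              # hypothetical coefficients
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
A = companion([A1, A2])
print(A.shape)                                       # (4, 4)
```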
SUR representation. The VAR(p) can be stacked as
$$Y = X\Gamma + u,$$
where $Y = [Y_1, \dots, Y_T]'$, $X = [X_1, \dots, X_T]'$, $X_t = [Y_{t-1}', Y_{t-2}', \dots, Y_{t-p}']'$, $u = [\epsilon_1, \dots, \epsilon_T]'$ and $\Gamma = [A_1, \dots, A_p]'$.

Vec representation. Let vec denote the column-stacking operator, i.e. if
$$X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \\ X_{31} & X_{32} \end{pmatrix} \quad \text{then} \quad \mathrm{vec}(X) = \begin{pmatrix} X_{11} \\ X_{21} \\ X_{31} \\ X_{12} \\ X_{22} \\ X_{32} \end{pmatrix}.$$
Let $\gamma = \mathrm{vec}(\Gamma)$; then the VAR can be rewritten as
$$Y_t = (I_n \otimes X_t')\gamma + \epsilon_t.$$
4 VAR: Stationarity

4.1 Stability and Stationarity

Consider the VAR(1)
$$Y_t = \mu + AY_{t-1} + \varepsilon_t.$$
Substituting backward we obtain
$$Y_t = \mu + AY_{t-1} + \varepsilon_t = \mu + A(\mu + AY_{t-2} + \varepsilon_{t-1}) + \varepsilon_t = (I + A)\mu + A^2Y_{t-2} + A\varepsilon_{t-1} + \varepsilon_t$$
$$\vdots$$
$$Y_t = (I + A + \dots + A^{j-1})\mu + A^jY_{t-j} + \sum_{i=0}^{j-1} A^i\varepsilon_{t-i}.$$
The eigenvalues of $A$, $\lambda$, solve $\det(A - I\lambda) = 0$. If all the eigenvalues of $A$ are smaller than one in modulus, the sequence $A^i$, $i = 0, 1, \dots$ is absolutely summable. Therefore

1. the infinite sum $\sum_{i=0}^{j-1} A^i\varepsilon_{t-i}$ exists in mean square (as $j \to \infty$);
2. $(I + A + \dots + A^{j-1})\mu \to (I - A)^{-1}\mu$ and $A^jY_{t-j} \to 0$ as $j$ goes to infinity.

Therefore, if the eigenvalues are smaller than one in modulus, then $Y_t$ has the representation
$$Y_t = (I - A)^{-1}\mu + \sum_{i=0}^{\infty} A^i\varepsilon_{t-i}.$$
Note that the eigenvalues correspond to the reciprocals of the roots of the determinant of $A(z) = I - Az$. A VAR(1) is called stable if
$$\det(I - Az) \neq 0 \quad \text{for } |z| \le 1.$$
For a VAR(p) the stability condition requires that all the eigenvalues of $\mathbf{A}$ (the AR matrix of the companion form of $Y_t$) are smaller than one in modulus. Therefore a VAR(p) is called stable if
$$\det(I - A_1z - A_2z^2 - \dots - A_pz^p) \neq 0 \quad \text{for } |z| \le 1.$$
A condition for stationarity: a stable VAR process is stationary. Notice that the converse is not true: an unstable process can be stationary.
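In practice the stability condition is checked on the companion matrix. A sketch (mine, hypothetical coefficients):

```python
import numpy as np

# Sketch (mine, hypothetical coefficients): a VAR(p) is stable iff all
# eigenvalues of its companion matrix lie strictly inside the unit circle.
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
n, p = 2, 2
A = np.vstack([np.hstack([A1, A2]),        # companion form of the VAR(2)
               np.eye(n * (p - 1), n * p)])
eig = np.linalg.eigvals(A)
print(np.abs(eig).max())                   # largest modulus < 1 => stable
print(np.all(np.abs(eig) < 1))             # True for these coefficients
```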
4.2 Back to the Wold Representation

If the VAR is stationary, $Y_t$ has the Wold representation
$$Y_t = C(L)\epsilon_t = \sum_{j=0}^{\infty} C_j\epsilon_{t-j},$$
where the sequence $\{C_j\}$ is absolutely summable, $\sum_{j=0}^{\infty}\|C_j\| < \infty$. How can we find it? Let us rewrite the VAR(p) as a VAR(1). We know how to find the MA(∞) representation of a stationary AR(1), and we can proceed similarly for the VAR(1). Substituting backward in the companion form we have
$$\mathbf{Y}_t = \mathbf{A}^j\mathbf{Y}_{t-j} + \mathbf{A}^{j-1}e_{t-j+1} + \dots + \mathbf{A}e_{t-1} + e_t.$$
If the conditions for stationarity are satisfied, the series $\sum_{j=0}^{\infty}\mathbf{A}^j$ converges and $\mathbf{Y}_t$ has a VMA(∞) representation in terms of the Wold shock $e_t$ given by
$$\mathbf{Y}_t = (I - \mathbf{A}L)^{-1}e_t$$
$$= \sum_{j=0}^{\infty} \mathbf{A}^je_{t-j} = \mathbf{C}(L)e_t,$$
where $\mathbf{C}_0 = \mathbf{A}^0 = I$, $\mathbf{C}_1 = \mathbf{A}$, $\mathbf{C}_2 = \mathbf{A}^2, \dots, \mathbf{C}_k = \mathbf{A}^k$. The Wold coefficient $C_j$ of $Y_t$ will be the $n \times n$ upper-left block of $\mathbf{C}_j$.
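A sketch (mine, with the same hypothetical VAR(2) coefficients as above) extracting the Wold coefficients as upper-left blocks of companion-matrix powers and verifying the recursion $C_2 = A_1C_1 + A_2C_0$:

```python
import numpy as np

# Sketch (mine): the Wold/VMA(inf) coefficients C_j of a VAR(p) are the
# n x n upper-left blocks of the powers of the companion matrix.
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
n, p = 2, 2
A = np.vstack([np.hstack([A1, A2]), np.eye(n * (p - 1), n * p)])

C = [np.linalg.matrix_power(A, j)[:n, :n] for j in range(5)]
print(C[0])                                      # identity: C_0 = I
print(C[1])                                      # equals A1
print(np.allclose(C[2], A1 @ C[1] + A2 @ C[0]))  # True: VAR recursion holds
```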
Example: a stationary VAR(1)
$$\begin{pmatrix} Y_{1t} \\ Y_{2t} \end{pmatrix} = A\begin{pmatrix} Y_{1,t-1} \\ Y_{2,t-1} \end{pmatrix} + \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}, \qquad E(\epsilon_t\epsilon_t') = \Omega,$$
with all eigenvalues $\lambda$ of $A$ smaller than one in modulus.

Figure 1: Blue: $Y_1$; green: $Y_2$.
5 VAR: Second Moments

Second moments of a VAR(p). Let us consider the companion form of a stationary (zero mean, for simplicity) VAR(p) defined earlier:
$$\mathbf{Y}_t = \mathbf{A}\mathbf{Y}_{t-1} + e_t. \tag{7}$$
The variance of $\mathbf{Y}_t$ is given by
$$\mathbf{\Sigma} = E[\mathbf{Y}_t\mathbf{Y}_t'] = \mathbf{A}\mathbf{\Sigma}\mathbf{A}' + \mathbf{\Omega}. \tag{8}$$
A closed-form solution to (8) can be obtained in terms of the vec operator. Let $A$, $B$, $C$ be matrices such that the product $ABC$ exists. A property of the vec operator is that
$$\mathrm{vec}(ABC) = (C' \otimes A)\mathrm{vec}(B).$$
Applying the vec operator to both sides of (8) we have
$$\mathrm{vec}(\mathbf{\Sigma}) = (\mathbf{A} \otimes \mathbf{A})\mathrm{vec}(\mathbf{\Sigma}) + \mathrm{vec}(\mathbf{\Omega}).$$
If we define $\mathcal{A} = (\mathbf{A} \otimes \mathbf{A})$, then
$$\mathrm{vec}(\mathbf{\Sigma}) = (I - \mathcal{A})^{-1}\mathrm{vec}(\mathbf{\Omega}),$$
where
$$\mathbf{\Sigma} = \begin{pmatrix} \Gamma_0 & \Gamma_1 & \cdots & \Gamma_{p-1} \\ \Gamma_{-1} & \Gamma_0 & \cdots & \Gamma_{p-2} \\ \vdots & & \ddots & \vdots \\ \Gamma_{-p+1} & \Gamma_{-p+2} & \cdots & \Gamma_0 \end{pmatrix}.$$
The variance $\Sigma = \Gamma_0$ of the original series $Y_t$ is given by the first $n$ rows and columns of $\mathbf{\Sigma}$. The $j$th autocovariance of $\mathbf{Y}_t$ (denoted $\mathbf{\Gamma}_j$) can be found by post-multiplying (7) by $\mathbf{Y}_{t-j}'$ and taking expectations:
$$E(\mathbf{Y}_t\mathbf{Y}_{t-j}') = \mathbf{A}E(\mathbf{Y}_{t-1}\mathbf{Y}_{t-j}') + E(e_t\mathbf{Y}_{t-j}').$$
Thus
$$\mathbf{\Gamma}_j = \mathbf{A}\mathbf{\Gamma}_{j-1}, \qquad \text{or} \qquad \mathbf{\Gamma}_j = \mathbf{A}^j\mathbf{\Gamma}_0 = \mathbf{A}^j\mathbf{\Sigma}.$$
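A sketch (mine, hypothetical VAR(1)) solving (8) by the vec formula and computing $\Gamma_1$:

```python
import numpy as np

# Sketch (mine, hypothetical VAR(1)): solve Sigma = A Sigma A' + Omega via
# vec(Sigma) = (I - A kron A)^{-1} vec(Omega), then Gamma_j = A^j Sigma.
A = np.array([[0.5, 0.1], [0.0, 0.3]])
Omega = np.array([[1.0, 0.2], [0.2, 1.0]])
n = A.shape[0]

vecSigma = np.linalg.solve(np.eye(n**2) - np.kron(A, A), Omega.flatten('F'))
Sigma = vecSigma.reshape((n, n), order='F')         # column-major: vec convention
print(np.allclose(Sigma, A @ Sigma @ A.T + Omega))  # True: fixed point of (8)
Gamma1 = A @ Sigma                                  # Gamma_1 = A Sigma
print(Gamma1)
```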
The autocovariances $\Gamma_j$ of the original series $Y_t$ are given by the first $n$ rows and columns of $\mathbf{\Gamma}_j$, and satisfy
$$\Gamma_h = A_1\Gamma_{h-1} + A_2\Gamma_{h-2} + \dots + A_p\Gamma_{h-p},$$
known as the Yule-Walker equations.