Forecasting with ARMA

Eduardo Rossi, University of Pavia. Financial Econometrics, October 2013.

Mean Squared Error

Forecast of $Y_{t+1}$ based on a set of variables observed at date $t$, $X_t$: $Y^*_{t+1|t}$. The loss function is the mean squared error
$$MSE(Y^*_{t+1|t}) = E\left[Y_{t+1} - Y^*_{t+1|t}\right]^2.$$
The forecast with the smallest MSE is the conditional expectation
$$Y^*_{t+1|t} = E[Y_{t+1}\,|\,X_t].$$
Suppose $Y^*_{t+1|t}$ is restricted to be a linear function of $X_t$: $\hat{Y}_{t+1|t} = \alpha' X_t$. If
$$E[(Y_{t+1} - \alpha' X_t)\,X_t'] = 0$$
then $\alpha' X_t$ is the linear projection of $Y_{t+1}$ on $X_t$.

Linear Projection

The linear projection produces the smallest MSE among the class of linear forecasting rules. Writing $\hat{P}(Y_{t+1}|X_t) = \alpha' X_t$,
$$MSE[\hat{P}(Y_{t+1}|X_t)] \;\geq\; MSE[E(Y_{t+1}|X_t)].$$
The orthogonality condition pins down $\alpha$:
$$E[(Y_{t+1} - \alpha' X_t)\,X_t'] = 0 \;\Rightarrow\; E[Y_{t+1} X_t'] = \alpha' E[X_t X_t'] \;\Rightarrow\; \alpha' = E[Y_{t+1} X_t']\,E[X_t X_t']^{-1}.$$

LP and OLS

The LP is closely related to OLS regression. In the regression
$$y_{t+1} = \beta' X_t + u_t$$
the OLS estimator is built from sample moments:
$$\hat{\beta} = \left[\frac{1}{T}\sum_{t=1}^{T} X_t X_t'\right]^{-1}\left[\frac{1}{T}\sum_{t=1}^{T} X_t y_{t+1}\right].$$
$\hat{\beta}$ is constructed from the sample moments, while $\alpha$ is constructed from population moments. If $\{X_t, Y_{t+1}\}$ is covariance stationary and ergodic for second moments, the sample moments converge to the population moments as the sample size $T$ goes to infinity:
$$\frac{1}{T}\sum_{t=1}^{T} X_t X_t' \xrightarrow{\;p\;} E[X_t X_t'], \qquad \frac{1}{T}\sum_{t=1}^{T} X_t y_{t+1} \xrightarrow{\;p\;} E[X_t Y_{t+1}].$$

LP and OLS

This implies
$$\hat{\beta} \xrightarrow{\;p\;} \alpha$$
so the OLS estimator $\hat{\beta}$ is consistent for the linear projection coefficient.
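
As a numerical illustration (not part of the original slides), the following minimal Python sketch checks this convergence for a Gaussian AR(1), where the population projection of $Y_{t+1}$ on $(1, Y_t)'$ has slope $\gamma_1/\gamma_0 = \phi$; all variable names and parameter values are illustrative.

```python
# OLS from sample moments vs. the linear-projection coefficient from
# population moments, for a simulated Gaussian AR(1).
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.7, 1.0, 100_000

# simulate a stationary AR(1): Y_t = phi*Y_{t-1} + eps_t
eps = rng.normal(scale=sigma, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

# OLS built from sample moments: b = (sum X_t X_t')^{-1} (sum X_t Y_{t+1})
X = np.column_stack([np.ones(T - 1), y[:-1]])   # regressors observed at date t
Y_next = y[1:]                                  # Y_{t+1}
b = np.linalg.solve(X.T @ X, X.T @ Y_next)

print("OLS slope (sample moments):", b[1])
print("LP slope (population moments):", phi)    # gamma_1 / gamma_0 = phi
```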

Forecasting based on lagged $\epsilon$'s

Infinite moving average:
$$(Y_t - \mu) = \psi(L)\epsilon_t, \qquad \epsilon_t \sim WN(0, \sigma^2), \qquad \psi_0 = 1, \quad \sum_{j=0}^{\infty}|\psi_j| < \infty.$$
Suppose we have an infinite number of observations on $\epsilon$ through date $t$, $\{\epsilon_t, \epsilon_{t-1}, \ldots\}$, and we know the values of $\mu$ and $\{\psi_1, \psi_2, \ldots\}$. Then
$$Y_{t+s} = \mu + \epsilon_{t+s} + \psi_1\epsilon_{t+s-1} + \psi_2\epsilon_{t+s-2} + \ldots + \psi_s\epsilon_t + \psi_{s+1}\epsilon_{t-1} + \ldots$$
The optimal linear forecast is
$$\hat{E}[Y_{t+s}\,|\,\epsilon_t, \epsilon_{t-1}, \ldots] = \mu + \psi_s\epsilon_t + \psi_{s+1}\epsilon_{t-1} + \ldots$$
where $\hat{E}[Y_{t+s}|X_t] \equiv \hat{P}(Y_{t+s}\,|\,1, X_t)$. The unknown future $\epsilon$'s are set to their expected value of zero. The forecast error is
$$Y_{t+s} - \hat{E}[Y_{t+s}\,|\,\epsilon_t, \epsilon_{t-1}, \ldots] = \epsilon_{t+s} + \psi_1\epsilon_{t+s-1} + \psi_2\epsilon_{t+s-2} + \ldots + \psi_{s-1}\epsilon_{t+1}.$$

Forecasting based on lagged $\epsilon$'s

The MSE of this forecast is
$$E\left[\left(Y_{t+s} - \hat{E}[Y_{t+s}\,|\,\epsilon_t, \epsilon_{t-1}, \ldots]\right)^2\right] = (1 + \psi_1^2 + \ldots + \psi_{s-1}^2)\,\sigma^2.$$
When $s \to \infty$ the MSE converges to the unconditional variance $\sigma^2\sum_{j=0}^{\infty}\psi_j^2$.

MA(q): $\psi(L) = 1 + \theta_1 L + \ldots + \theta_q L^q$, so
$$Y_{t+s} = \mu + \epsilon_{t+s} + \theta_1\epsilon_{t+s-1} + \ldots + \theta_q\epsilon_{t+s-q}.$$
The optimal linear forecast is
$$\hat{E}[Y_{t+s}\,|\,\epsilon_t, \epsilon_{t-1}, \ldots] = \begin{cases} \mu + \theta_s\epsilon_t + \theta_{s+1}\epsilon_{t-1} + \ldots + \theta_q\epsilon_{t-q+s} & s = 1, \ldots, q \\ \mu & s = q+1, q+2, \ldots \end{cases}$$

Forecasting based on lagged $\epsilon$'s

MSE:
$$MSE = \begin{cases} \sigma^2 & s = 1 \\ (1 + \theta_1^2 + \ldots + \theta_{s-1}^2)\,\sigma^2 & s = 2, 3, \ldots, q \\ (1 + \theta_1^2 + \ldots + \theta_q^2)\,\sigma^2 & s = q+1, q+2, \ldots \end{cases}$$
The MSE increases with the forecast horizon up until $s = q$. For $s > q$ the forecast is the unconditional mean and the MSE is the unconditional variance of the series.
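
A small Python sketch (an added illustration, not from the slides) of the MA(q) forecast and its MSE, assuming $\mu$, the $\theta_j$'s, $\sigma^2$ and the history of $\epsilon$'s are known; the MSE flattens at the unconditional variance once $s > q$.

```python
# s-step-ahead forecast and MSE for an MA(q) with known shocks.
import numpy as np

def ma_forecast_and_mse(mu, theta, sigma2, eps_hist, s):
    """theta = [theta_1, ..., theta_q]; eps_hist = [..., eps_{t-1}, eps_t]."""
    q = len(theta)
    psi = np.r_[1.0, theta]                      # psi_0, ..., psi_q
    if s > q:
        forecast = mu                            # unconditional mean
    else:
        # mu + theta_s*eps_t + theta_{s+1}*eps_{t-1} + ... + theta_q*eps_{t-q+s}
        weights = theta[s - 1:]                  # theta_s, ..., theta_q
        recent = eps_hist[::-1][: len(weights)]  # eps_t, eps_{t-1}, ...
        forecast = mu + weights @ recent
    mse = sigma2 * np.sum(psi[: min(s, q + 1)] ** 2)
    return forecast, mse

# toy usage with q = 2 (illustrative values)
theta = np.array([0.5, 0.2])
eps_hist = np.array([0.3, -0.1, 0.4])            # ..., eps_{t-1}, eps_t
for s in (1, 2, 3, 4):
    print(s, ma_forecast_and_mse(0.0, theta, 1.0, eps_hist, s))
```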

Forecasting based on lagged $\epsilon$'s

Compact lag-operator notation:
$$\frac{\psi(L)}{L^s} = L^{-s} + \psi_1 L^{1-s} + \psi_2 L^{2-s} + \ldots + \psi_{s-1}L^{-1} + \psi_s L^0 + \psi_{s+1}L^1 + \psi_{s+2}L^2 + \ldots$$
The annihilation operator $[\,\cdot\,]_+$ replaces negative powers of $L$ by zero:
$$\left[\frac{\psi(L)}{L^s}\right]_+ = \psi_s L^0 + \psi_{s+1}L^1 + \psi_{s+2}L^2 + \ldots$$
so the optimal forecast can be written compactly as
$$\hat{E}[Y_{t+s}\,|\,\epsilon_t, \epsilon_{t-1}, \ldots] = \mu + \left[\frac{\psi(L)}{L^s}\right]_+ \epsilon_t.$$

Forecasting based on lagged $Y$'s

In the usual forecasting situation we have observations on lagged $Y$'s rather than on the $\epsilon$'s. Suppose the infinite MA process has an infinite AR representation
$$\eta(L)(Y_t - \mu) = \epsilon_t, \qquad \eta(L) = \sum_{j=0}^{\infty}\eta_j L^j, \quad \eta_0 = 1, \quad \sum_{j=0}^{\infty}|\eta_j| < \infty.$$
A covariance-stationary AR(p) satisfies $\eta(L) = [\psi(L)]^{-1}$:
$$(1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p)(Y_t - \mu) = \epsilon_t, \qquad \phi(L)(Y_t - \mu) = \epsilon_t$$
so that $\eta(L) = \phi(L)$ and $\psi(L) = [\phi(L)]^{-1}$.

Forecasting based on lagged $Y$'s

For an MA(q):
$$Y_t - \mu = (1 + \theta_1 L + \ldots + \theta_q L^q)\epsilon_t, \qquad Y_t - \mu = \theta(L)\epsilon_t$$
so $\psi(L) = \theta(L)$ and $\eta(L) = [\theta(L)]^{-1}$, provided this is based on an invertible representation.

Forecasting based on lagged $Y$'s

An ARMA(p,q) can be represented as an AR($\infty$) with
$$\psi(L) = \frac{\theta(L)}{\phi(L)}$$
provided that the roots of $\phi(z) = 0$ and $\theta(z) = 0$ lie outside the unit circle. When these restrictions are satisfied, observations on $\{Y_t, Y_{t-1}, Y_{t-2}, \ldots\}$ are sufficient to construct $\{\epsilon_t, \epsilon_{t-1}, \ldots\}$.

Forecasting based on lagged $Y$'s

For example, for an AR(1), $(1 - \phi L)(Y_t - \mu) = \epsilon_t$: given $\phi$, $\mu$, $Y_t$ and $Y_{t-1}$, the value of $\epsilon_t$ can be constructed from
$$\epsilon_t = (Y_t - \mu) - \phi(Y_{t-1} - \mu).$$
For an invertible MA(1), $(1 + \theta L)^{-1}(Y_t - \mu) = \epsilon_t$: given an infinite number of observations on $Y$, we can compute
$$\epsilon_t = (Y_t - \mu) - \theta(Y_{t-1} - \mu) + \theta^2(Y_{t-2} - \mu) - \theta^3(Y_{t-3} - \mu) + \ldots$$
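
These constructions are easy to implement. Below is a Python sketch (an added illustration with made-up parameter values): the AR(1) shocks are recovered directly, the MA(1) shocks recursively; the start-up value for the MA(1) recursion is an assumption whose effect dies out at rate $|\theta|^t$ under invertibility.

```python
# Reconstructing the shocks from observed Y's: AR(1) directly, MA(1) recursively.
import numpy as np

def ar1_eps(y, mu, phi):
    """eps_t = (Y_t - mu) - phi*(Y_{t-1} - mu), for t = 1, ..., T-1."""
    d = y - mu
    return d[1:] - phi * d[:-1]

def ma1_eps(y, mu, theta, eps0=0.0):
    """Recursive inversion eps_t = (Y_t - mu) - theta*eps_{t-1}, starting from eps0."""
    eps = np.empty(len(y))
    prev = eps0
    for t, yt in enumerate(y):
        prev = (yt - mu) - theta * prev
        eps[t] = prev
    return eps

# toy check: simulate an MA(1) and recover the shocks
rng = np.random.default_rng(1)
theta, mu, T = 0.6, 0.0, 500
e = rng.normal(size=T)
y = mu + e + theta * np.r_[0.0, e[:-1]]          # presample shock set to zero
# recovery is exact here because the simulated presample shock is zero;
# with a nonzero start-up the error would decay at rate |theta|^t
print(np.max(np.abs(ma1_eps(y, mu, theta) - e)))
```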

Forecasting based on lagged $Y$'s

Under these conditions,
$$\hat{E}[Y_{t+s}\,|\,Y_t, Y_{t-1}, \ldots] = \mu + \left[\frac{\psi(L)}{L^s}\right]_+ \eta(L)(Y_t - \mu)$$
gives the forecast of $Y_{t+s}$ as a function of lagged $Y$'s. Using $\eta(L) = [\psi(L)]^{-1}$,
$$\hat{E}[Y_{t+s}\,|\,Y_t, Y_{t-1}, \ldots] = \mu + \left[\frac{\psi(L)}{L^s}\right]_+ [\psi(L)]^{-1}(Y_t - \mu).$$
This is the Wiener-Kolmogorov prediction formula.

Wiener-Kolmogorov prediction formula - AR(1)

For example, for an AR(1), $(1 - \phi L)(Y_t - \mu) = \epsilon_t$ and
$$\psi(L) = \frac{1}{1 - \phi L} = 1 + \phi L + \phi^2 L^2 + \ldots + \phi^s L^s + \ldots$$
Applying the annihilation operator,
$$\left[\frac{\psi(L)}{L^s}\right]_+ = \phi^s + \phi^{s+1}L + \phi^{s+2}L^2 + \ldots = \frac{\phi^s}{1 - \phi L}$$
so that, with $\epsilon_t = (1 - \phi L)(Y_t - \mu)$,
$$\hat{E}[Y_{t+s}\,|\,Y_t, Y_{t-1}, \ldots] = \mu + \left[\frac{\psi(L)}{L^s}\right]_+ \eta(L)(Y_t - \mu) = \mu + \frac{\phi^s}{1 - \phi L}(1 - \phi L)(Y_t - \mu)$$
$$\hat{E}[Y_{t+s}\,|\,Y_t, Y_{t-1}, \ldots] = \mu + \phi^s(Y_t - \mu).$$
The forecast decays geometrically from $(Y_t - \mu)$ toward $\mu$ as $s$ increases.

Wiener-Kolmogorov prediction formula - AR(1)

Given that $\psi_j = \phi^j$, from the MSE of an MA($\infty$) the $s$-period-ahead forecast error has
$$MSE = \left[1 + \phi^2 + \ldots + \phi^{2(s-1)}\right]\sigma^2$$
and as $s \to \infty$,
$$MSE \to \frac{\sigma^2}{1 - \phi^2}.$$
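
A minimal Python sketch (illustration only, with $\phi$, $\mu$, $\sigma^2$ assumed known and illustrative values) of the AR(1) forecast $\mu + \phi^s(Y_t - \mu)$ and its MSE, which rises toward $\sigma^2/(1-\phi^2)$ as $s$ grows.

```python
# AR(1) s-step-ahead forecast and its mean squared error.
import numpy as np

def ar1_forecast(y_t, mu, phi, sigma2, s):
    forecast = mu + phi**s * (y_t - mu)
    mse = sigma2 * np.sum(phi ** (2 * np.arange(s)))   # sum_{j=0}^{s-1} phi^{2j}
    return forecast, mse

mu, phi, sigma2, y_t = 1.0, 0.8, 1.0, 3.0
for s in (1, 2, 5, 20):
    print(s, ar1_forecast(y_t, mu, phi, sigma2, s))
print("limit MSE:", sigma2 / (1 - phi**2))             # unconditional variance
```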

Wiener-Kolmogorov prediction formula - AR(p)

For a stationary AR(p) process,
$$Y_{t+s} - \mu = f^{(s)}_{11}(Y_t - \mu) + f^{(s)}_{12}(Y_{t-1} - \mu) + \ldots + f^{(s)}_{1p}(Y_{t-p+1} - \mu) + \epsilon_{t+s} + \psi_1\epsilon_{t+s-1} + \ldots + \psi_{s-1}\epsilon_{t+1}$$
so the optimal $s$-period-ahead forecast is
$$\hat{Y}_{t+s|t} = \mu + f^{(s)}_{11}(Y_t - \mu) + \ldots + f^{(s)}_{1p}(Y_{t-p+1} - \mu)$$
with forecast error
$$Y_{t+s} - \hat{Y}_{t+s|t} = \epsilon_{t+s} + \psi_1\epsilon_{t+s-1} + \ldots + \psi_{s-1}\epsilon_{t+1}$$
and $\psi_j = f^{(j)}_{11}$ (the $f^{(s)}_{1j}$ are the first-row elements of the $s$-th power of the companion matrix).

Wiener-Kolmogorov prediction formula - AR(p)

To calculate the optimal forecast we use a recursion. Start with the one-period-ahead forecast $\hat{Y}_{t+1|t}$:
$$\hat{Y}_{t+1|t} - \mu = \phi_1(Y_t - \mu) + \ldots + \phi_p(Y_{t-p+1} - \mu).$$
Similarly,
$$\hat{Y}_{t+2|t+1} - \mu = \phi_1(Y_{t+1} - \mu) + \ldots + \phi_p(Y_{t-p+2} - \mu).$$
By the Law of Iterated Projections, projecting $\hat{Y}_{t+2|t+1}$ on the date-$t$ information set gives $\hat{Y}_{t+2|t}$:
$$\hat{Y}_{t+2|t} - \mu = \phi_1(\hat{Y}_{t+1|t} - \mu) + \phi_2(Y_t - \mu) + \ldots + \phi_p(Y_{t-p+2} - \mu).$$
Substituting $\hat{Y}_{t+1|t}$,
$$\hat{Y}_{t+2|t} - \mu = \phi_1\left[\phi_1(Y_t - \mu) + \ldots + \phi_p(Y_{t-p+1} - \mu)\right] + \phi_2(Y_t - \mu) + \ldots + \phi_p(Y_{t-p+2} - \mu).$$

Wiener-Kolmogorov prediction formula - AR(p)

Collecting terms,
$$\hat{Y}_{t+2|t} - \mu = (\phi_1^2 + \phi_2)(Y_t - \mu) + (\phi_1\phi_2 + \phi_3)(Y_{t-1} - \mu) + \ldots + (\phi_1\phi_{p-1} + \phi_p)(Y_{t-p+2} - \mu) + \phi_1\phi_p(Y_{t-p+1} - \mu).$$
In general, the $s$-period-ahead forecast of an AR(p) process can be obtained by iterating on
$$\hat{Y}_{t+j|t} - \mu = \phi_1(\hat{Y}_{t+j-1|t} - \mu) + \ldots + \phi_p(\hat{Y}_{t+j-p|t} - \mu)$$
for $j = 1, 2, \ldots, s$, with $\hat{Y}_{\tau|t} = Y_\tau$ for $\tau \leq t$.
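
The recursion translates directly into code. The following Python sketch (an added illustration; parameter values are made up) iterates the AR(p) forecast equation, treating already-observed values as their own forecasts.

```python
# Iterated s-step-ahead forecasts for an AR(p) with known coefficients.
import numpy as np

def arp_forecast(y_hist, mu, phi, s):
    """y_hist ends with Y_t; phi = [phi_1, ..., phi_p]."""
    p = len(phi)
    dev = list(y_hist[-p:] - mu)                 # (Y_{t-p+1}-mu, ..., Y_t-mu)
    out = []
    for _ in range(s):
        # Yhat_{t+j|t} - mu = phi_1*(prev dev) + ... + phi_p*(p-th previous dev)
        nxt = sum(phi[i] * dev[-1 - i] for i in range(p))
        dev.append(nxt)
        out.append(mu + nxt)
    return np.array(out)                         # [Yhat_{t+1|t}, ..., Yhat_{t+s|t}]

phi = np.array([0.5, 0.3])                       # AR(2) example
y_hist = np.array([1.2, 0.7, 1.5])
print(arp_forecast(y_hist, mu=1.0, phi=phi, s=5))
```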

Wiener-Kolmogorov prediction formula - MA(1)

Invertible MA(1): $(Y_t - \mu) = (1 + \theta L)\epsilon_t$ with $|\theta| < 1$. The Wiener-Kolmogorov formula gives
$$\hat{Y}_{t+s|t} = \mu + \left[\frac{1 + \theta L}{L^s}\right]_+ (1 + \theta L)^{-1}(Y_t - \mu).$$
For $s = 1$, $\left[\frac{1 + \theta L}{L}\right]_+ = \theta$, so
$$\hat{Y}_{t+1|t} = \mu + \frac{\theta}{1 + \theta L}(Y_t - \mu) = \mu + \theta(Y_t - \mu) - \theta^2(Y_{t-1} - \mu) + \theta^3(Y_{t-2} - \mu) - \ldots$$
Alternatively, in practice the shock is computed recursively from $\hat{\epsilon}_t = (1 + \theta L)^{-1}(Y_t - \mu)$, i.e. $\hat{\epsilon}_t = (Y_t - \mu) - \theta\hat{\epsilon}_{t-1}$, so that $\hat{Y}_{t+1|t} = \mu + \theta\hat{\epsilon}_t$. For $s = 2, 3, \ldots$, $\left[\frac{1 + \theta L}{L^s}\right]_+ = 0$, so the forecast is simply $\hat{Y}_{t+s|t} = \mu$.

Wiener-Kolmogorov prediction formula - MA(q)

$(Y_t - \mu) = \theta(L)\epsilon_t$ with $\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q$:
$$\hat{Y}_{t+s|t} = \mu + \left[\frac{1 + \theta_1 L + \ldots + \theta_q L^q}{L^s}\right]_+ \frac{1}{\theta(L)}(Y_t - \mu)$$
where
$$\left[\frac{1 + \theta_1 L + \ldots + \theta_q L^q}{L^s}\right]_+ = \begin{cases} \theta_s + \theta_{s+1}L + \theta_{s+2}L^2 + \ldots + \theta_q L^{q-s} & s = 1, \ldots, q \\ 0 & s = q+1, q+2, \ldots \end{cases}$$
For $s = 1, \ldots, q$,
$$\hat{Y}_{t+s|t} = \mu + \left(\theta_s + \theta_{s+1}L + \ldots + \theta_q L^{q-s}\right)\hat{\epsilon}_t$$
with $\hat{\epsilon}_t = (Y_t - \mu) - \theta_1\hat{\epsilon}_{t-1} - \ldots - \theta_q\hat{\epsilon}_{t-q}$.

Wiener-Kolmogorov prediction formula - ARMA(1,1)

$(1 - \phi L)(Y_t - \mu) = (1 + \theta L)\epsilon_t$, with stationarity $|\phi| < 1$ and invertibility $|\theta| < 1$. The forecast is
$$\hat{Y}_{t+s|t} = \mu + \left[\frac{1 + \theta L}{(1 - \phi L)L^s}\right]_+ \frac{1 - \phi L}{1 + \theta L}(Y_t - \mu).$$
Using $\frac{1}{1 - \phi L} = 1 + \phi L + \phi^2 L^2 + \ldots$,
$$\left[\frac{1 + \theta L}{(1 - \phi L)L^s}\right]_+ = \left[\frac{1}{(1 - \phi L)L^s}\right]_+ + \theta\left[\frac{L}{(1 - \phi L)L^s}\right]_+$$
$$= \left[\frac{1 + \phi L + \phi^2 L^2 + \ldots}{L^s}\right]_+ + \theta\left[\frac{L(1 + \phi L + \phi^2 L^2 + \ldots)}{L^s}\right]_+$$
$$= (\phi^s + \phi^{s+1}L + \phi^{s+2}L^2 + \ldots) + \theta(\phi^{s-1} + \phi^s L + \phi^{s+1}L^2 + \ldots).$$

Wiener-Kolmogorov prediction formula - ARMA(1,1)

$$\left[\frac{1 + \theta L}{(1 - \phi L)L^s}\right]_+ = \phi^s(1 + \phi L + \phi^2 L^2 + \ldots) + \theta\phi^{s-1}(1 + \phi L + \phi^2 L^2 + \ldots) = (\phi^s + \theta\phi^{s-1})(1 + \phi L + \phi^2 L^2 + \ldots) = \frac{\phi^s + \theta\phi^{s-1}}{1 - \phi L}.$$

Wiener-Kolmogorov prediction formula - ARMA(1,1)

For $s = 2, 3, \ldots$ the forecast is
$$\hat{Y}_{t+s|t} = \mu + \left[\frac{1 + \theta L}{(1 - \phi L)L^s}\right]_+ \frac{1 - \phi L}{1 + \theta L}(Y_t - \mu) = \mu + \frac{\phi^s + \theta\phi^{s-1}}{1 - \phi L}\,\frac{1 - \phi L}{1 + \theta L}(Y_t - \mu) = \mu + \frac{\phi^s + \theta\phi^{s-1}}{1 + \theta L}(Y_t - \mu)$$
which satisfies
$$\hat{Y}_{t+s|t} - \mu = \phi\left(\hat{Y}_{t+s-1|t} - \mu\right)$$
so the forecast decays geometrically at the rate $\phi$ toward the unconditional mean $\mu$. The one-period-ahead forecast ($s = 1$) is given by
$$\hat{Y}_{t+1|t} = \mu + \frac{\phi + \theta}{1 + \theta L}(Y_t - \mu).$$

Wiener-Kolmogorov prediction formula - ARMA(1,1)

The one-period-ahead forecast can be rewritten as
$$\hat{Y}_{t+1|t} = \mu + \frac{\phi(1 + \theta L) + \theta(1 - \phi L)}{1 + \theta L}(Y_t - \mu) = \mu + \phi(Y_t - \mu) + \theta\,\frac{1 - \phi L}{1 + \theta L}(Y_t - \mu)$$
where
$$\hat{\epsilon}_t = \frac{1 - \phi L}{1 + \theta L}(Y_t - \mu) = (Y_t - \mu) - \phi(Y_{t-1} - \mu) - \theta\hat{\epsilon}_{t-1}, \qquad \hat{\epsilon}_t = Y_t - \hat{Y}_{t|t-1}.$$

Wiener-Kolmogorov prediction formula - ARMA(1,1)

For $s = 2$,
$$\hat{Y}_{t+2|t} = \mu + \frac{\phi^2 + \theta\phi}{1 + \theta L}(Y_t - \mu) = \mu + \phi\,\frac{\phi + \theta}{1 + \theta L}(Y_t - \mu) = \mu + \phi(\phi + \theta)(1 - \theta L + \theta^2 L^2 - \theta^3 L^3 + \ldots)(Y_t - \mu)$$
$$= \mu + \phi(\phi + \theta)(Y_t - \mu) - \phi(\phi + \theta)\theta(Y_{t-1} - \mu) + \ldots$$
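
A short Python sketch of the ARMA(1,1) forecasts (an added illustration; setting the presample shock to zero is an assumption of the sketch): the shock is reconstructed recursively, the one-step forecast adds $\theta\hat{\epsilon}_t$ to the AR part, and longer horizons decay geometrically at rate $\phi$ toward $\mu$.

```python
# ARMA(1,1) forecasts with known parameters and a zero presample shock.
import numpy as np

def arma11_forecasts(y, mu, phi, theta, s_max):
    eps = 0.0
    for t in range(len(y)):
        ar_part = phi * (y[t - 1] - mu) if t > 0 else 0.0   # Y_{-1} treated as mu
        eps = (y[t] - mu) - ar_part - theta * eps           # eps_t = Y_t - Yhat_{t|t-1}
    out = [mu + phi * (y[-1] - mu) + theta * eps]           # Yhat_{t+1|t}
    for _ in range(s_max - 1):                              # geometric decay at rate phi
        out.append(mu + phi * (out[-1] - mu))
    return np.array(out)

y = np.array([0.2, -0.4, 0.9, 0.3])
print(arma11_forecasts(y, mu=0.0, phi=0.7, theta=0.4, s_max=4))
```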

Wiener-Kolmogorov prediction formula - ARMA(p,q)

ARMA(p,q): $\phi(L)(Y_t - \mu) = \theta(L)\epsilon_t$. The one-period-ahead forecast is
$$\hat{Y}_{t+1|t} - \mu = \phi_1(Y_t - \mu) + \ldots + \phi_p(Y_{t-p+1} - \mu) + \theta_1\hat{\epsilon}_t + \ldots + \theta_q\hat{\epsilon}_{t-q+1}$$
with $\hat{\epsilon}_t = Y_t - \hat{Y}_{t|t-1}$ and $\hat{Y}_{\tau|t} = Y_\tau$ for $\tau \leq t$. For $s = 1, \ldots, q$,
$$\hat{Y}_{t+s|t} - \mu = \phi_1(\hat{Y}_{t+s-1|t} - \mu) + \ldots + \phi_p(\hat{Y}_{t+s-p|t} - \mu) + \theta_s\hat{\epsilon}_t + \ldots + \theta_q\hat{\epsilon}_{t+s-q}$$
and for $s = q+1, q+2, \ldots$,
$$\hat{Y}_{t+s|t} - \mu = \phi_1(\hat{Y}_{t+s-1|t} - \mu) + \ldots + \phi_p(\hat{Y}_{t+s-p|t} - \mu).$$
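
For completeness, a hedged Python sketch of the general ARMA(p,q) recursion above (not from the slides; it adopts the convention that presample deviations and shocks are zero, which anticipates the approximation on the next slide).

```python
# General ARMA(p,q) forecast recursion with known parameters.
import numpy as np

def arma_forecasts(y, mu, phi, theta, s_max):
    p, q = len(phi), len(theta)
    dev = list(np.concatenate([np.zeros(p), np.asarray(y) - mu]))  # presample devs = 0
    eps = list(np.zeros(q))                                        # presample shocks = 0
    # reconstruct the in-sample shocks recursively
    for t in range(len(y)):
        ar = sum(phi[i] * dev[p + t - 1 - i] for i in range(p))
        ma = sum(theta[j] * eps[-1 - j] for j in range(q))
        eps.append(dev[p + t] - ar - ma)
    # iterate the forecast recursion; unknown future shocks enter as zero
    for s in range(1, s_max + 1):
        ar = sum(phi[i] * dev[-1 - i] for i in range(p))
        ma = sum(theta[j] * eps[s - j - 2] for j in range(q) if j + 1 >= s)
        dev.append(ar + ma)
    return mu + np.array(dev[-s_max:])

# toy ARMA(1,1): after the first step the forecasts decay at rate phi
print(arma_forecasts([0.1, 0.5, -0.2], 0.0, phi=[0.6], theta=[0.3], s_max=3))
```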

Forecasts based on a finite number of observations

Suppose only $\{Y_t, Y_{t-1}, \ldots, Y_{t-m+1}\}$ are observed. One approach is to set the presample $\epsilon$'s equal to 0 and use the approximation
$$\hat{E}[Y_{t+s}\,|\,Y_t, Y_{t-1}, \ldots] \approx \hat{E}[Y_{t+s}\,|\,Y_t, \ldots, Y_{t-m+1}, \epsilon_{t-m} = 0, \epsilon_{t-m-1} = 0, \ldots].$$
For an MA(q), set $\epsilon_{t-m} = \epsilon_{t-m-1} = \ldots = \epsilon_{t-m-q+1} = 0$ and iterate
$$\hat{\epsilon}_{t-m+1} = Y_{t-m+1} - \mu$$
$$\hat{\epsilon}_{t-m+2} = Y_{t-m+2} - \mu - \theta_1\hat{\epsilon}_{t-m+1}$$
$$\hat{\epsilon}_{t-m+3} = Y_{t-m+3} - \mu - \theta_1\hat{\epsilon}_{t-m+2} - \theta_2\hat{\epsilon}_{t-m+1}$$
and so on; these values are then used in
$$\hat{Y}_{t+s|t} = \mu + \left(\theta_s + \theta_{s+1}L + \theta_{s+2}L^2 + \ldots + \theta_q L^{q-s}\right)\hat{\epsilon}_t.$$
For $s = q = 1$ this gives
$$\hat{Y}_{t+1|t} = \mu + \theta(Y_t - \mu) - \theta^2(Y_{t-1} - \mu) + \ldots + (-1)^{m-1}\theta^m(Y_{t-m+1} - \mu)$$
a truncated infinite AR. For large $m$ and $|\theta|$ small this is a good approximation; for $|\theta|$ close to 1 the approximation may be poorer.
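
The approximation with zeroed presample shocks can be sketched as follows (illustrative Python, parameters assumed known); for the MA(1) case with $s = q = 1$ it reproduces the truncated-AR formula above.

```python
# Approximate MA(q) forecast from a finite window, presample shocks set to zero.
import numpy as np

def ma_q_approx_forecast(y_window, mu, theta, s):
    """y_window = [Y_{t-m+1}, ..., Y_t]; theta = [theta_1, ..., theta_q]."""
    q = len(theta)
    eps = list(np.zeros(q))                      # presample shocks := 0
    for yt in y_window:                          # eps_tau = Y_tau - mu - theta_1*eps_{tau-1} - ...
        e = (yt - mu) - sum(theta[j] * eps[-1 - j] for j in range(q))
        eps.append(e)
    if s > q:
        return mu                                # unconditional mean beyond the MA memory
    # mu + theta_s*eps_t + theta_{s+1}*eps_{t-1} + ... + theta_q*eps_{t+s-q}
    return mu + sum(theta[j] * eps[-1 - (j + 1 - s)] for j in range(s - 1, q))

theta = np.array([0.8])                          # MA(1): the s = q = 1 case of the slide
y_window = np.array([0.4, -0.3, 0.6])
print(ma_q_approx_forecast(y_window, mu=0.0, theta=theta, s=1))
```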

Forecasts based on a finite number of observations

An alternative approach is the exact projection of $Y_{t+1}$ on its $m$ most recent values and a constant:
$$X_t = \begin{bmatrix} 1 \\ Y_t \\ \vdots \\ Y_{t-m+1} \end{bmatrix}$$
with linear forecast
$$\alpha^{(m)\prime} X_t = \alpha^{(m)}_0 + \alpha^{(m)}_1 Y_t + \ldots + \alpha^{(m)}_m Y_{t-m+1}.$$

Forecasts based on a finite number of observations

If $Y_t$ is covariance stationary, $E[Y_t Y_{t-j}] = \gamma_j + \mu^2$. With $X_t = [1, Y_t, \ldots, Y_{t-m+1}]'$, the projection coefficients are
$$\alpha^{(m)\prime} = \begin{bmatrix} \mu & (\gamma_1 + \mu^2) & \cdots & (\gamma_m + \mu^2) \end{bmatrix}
\begin{bmatrix}
1 & \mu & \cdots & \mu \\
\mu & \gamma_0 + \mu^2 & \cdots & \gamma_{m-1} + \mu^2 \\
\vdots & \vdots & \ddots & \vdots \\
\mu & \gamma_{m-1} + \mu^2 & \cdots & \gamma_0 + \mu^2
\end{bmatrix}^{-1}.$$
When a constant term is included in $X_t$, it is more convenient to express the variables in deviations from the mean.

Forecasts based on a finite number of observations

Calculate the projection of $(Y_{t+1} - \mu)$ on $(Y_t - \mu), (Y_{t-1} - \mu), \ldots, (Y_{t-m+1} - \mu)$:
$$\alpha^{(m)} = \begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{m-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{m-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{m-1} & \gamma_{m-2} & \cdots & \gamma_0 \end{bmatrix}^{-1} \begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \vdots \\ \gamma_m \end{bmatrix}.$$
The $s$-period-ahead forecast is
$$\hat{Y}_{t+s|t} = \mu + \alpha^{(m,s)}_1(Y_t - \mu) + \ldots + \alpha^{(m,s)}_m(Y_{t-m+1} - \mu)$$
with
$$\begin{bmatrix} \alpha^{(m,s)}_1 \\ \vdots \\ \alpha^{(m,s)}_m \end{bmatrix} = \begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{m-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{m-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{m-1} & \gamma_{m-2} & \cdots & \gamma_0 \end{bmatrix}^{-1} \begin{bmatrix} \gamma_s \\ \gamma_{s+1} \\ \vdots \\ \gamma_{s+m-1} \end{bmatrix}.$$
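
A small Python sketch (added illustration) of the exact finite-sample projection: the Toeplitz system in the autocovariances is solved rather than inverted explicitly. The AR(1) autocovariances in the toy check are an assumption made for the example; there the exact projection coefficient vector reduces to $(\phi^s, 0, \ldots, 0)$.

```python
# Exact finite-sample s-step forecast from m observations and known autocovariances.
import numpy as np

def exact_projection_forecast(y_window, mu, gamma, s):
    """y_window = [Y_{t-m+1}, ..., Y_t]; gamma[k] = autocovariance at lag k."""
    m = len(y_window)
    Gamma = np.array([[gamma[abs(i - j)] for j in range(m)] for i in range(m)])
    rhs = np.array([gamma[s + k] for k in range(m)])       # gamma_s, ..., gamma_{s+m-1}
    alpha = np.linalg.solve(Gamma, rhs)                    # alpha^{(m,s)}
    dev = y_window[::-1] - mu                              # Y_t - mu, Y_{t-1} - mu, ...
    return mu + alpha @ dev

# toy usage: AR(1) autocovariances gamma_k = sigma2 * phi^k / (1 - phi^2)
phi, sigma2, mu = 0.8, 1.0, 0.0
gamma = sigma2 * phi ** np.arange(10) / (1 - phi**2)
y_window = np.array([0.2, -0.5, 1.0])                      # Y_{t-2}, Y_{t-1}, Y_t
print(exact_projection_forecast(y_window, mu, gamma, s=2))  # equals phi^2 * Y_t here
print(phi**2 * y_window[-1])
```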

Forecasts based on a finite number of observations

This requires the inversion of an $(m \times m)$ matrix. Two algorithms avoid the brute-force inversion:
1. the Kalman filter, which computes the exact finite-sample forecast recursively;
2. triangular factorization of the covariance matrix.