Principles of forecasting
2.5 Forecasting
Principles of forecasting

Forecast based on conditional expectations

Suppose we are interested in forecasting the value of $y_{t+1}$ based on a set of variables $X_t$ (an $m \times 1$ vector). Let $y_{t+1|t}$ denote such a forecast. To evaluate the usefulness of the forecast we need to specify a loss function. Here we specify a quadratic loss function, which means that $y_{t+1|t}$ is chosen to minimize $E(y_{t+1} - y_{t+1|t})^2$. This quantity is known as the mean squared error associated with the forecast $y_{t+1|t}$:

$$MSE(y_{t+1|t}) = E(y_{t+1} - y_{t+1|t})^2$$
Fundamental result: the forecast with the smallest MSE is the expectation of $y_{t+1}$ conditional on $X_t$, that is,

$$y_{t+1|t} = E(y_{t+1}|X_t)$$

We now verify the claim. Let $g(X_t)$ be any other function and let $y_{t+1|t} = g(X_t)$. The associated MSE is

$$E[y_{t+1} - g(X_t)]^2 = E[y_{t+1} - E(y_{t+1}|X_t) + E(y_{t+1}|X_t) - g(X_t)]^2$$
$$= E[y_{t+1} - E(y_{t+1}|X_t)]^2 + 2E\{[y_{t+1} - E(y_{t+1}|X_t)][E(y_{t+1}|X_t) - g(X_t)]\} + E[E(y_{t+1}|X_t) - g(X_t)]^2$$

Define

$$\eta_{t+1} \equiv [y_{t+1} - E(y_{t+1}|X_t)][E(y_{t+1}|X_t) - g(X_t)] \quad (33)$$

The conditional expectation is

$$E(\eta_{t+1}|X_t) = E\{[y_{t+1} - E(y_{t+1}|X_t)][E(y_{t+1}|X_t) - g(X_t)] \mid X_t\}$$
$$= [E(y_{t+1}|X_t) - g(X_t)] \, E\{[y_{t+1} - E(y_{t+1}|X_t)] \mid X_t\}$$
$$= [E(y_{t+1}|X_t) - g(X_t)] \, [E(y_{t+1}|X_t) - E(y_{t+1}|X_t)] = 0$$
Therefore, by the law of iterated expectations,

$$E(\eta_{t+1}) = E(E(\eta_{t+1}|X_t)) = 0$$

This means that

$$E[y_{t+1} - g(X_t)]^2 = E[y_{t+1} - E(y_{t+1}|X_t)]^2 + E[E(y_{t+1}|X_t) - g(X_t)]^2$$

Therefore the function that minimizes the MSE is $g(X_t) = E(y_{t+1}|X_t)$: the conditional expectation $E(y_{t+1}|X_t)$ is the optimal forecast of $y_{t+1}$ conditional on $X_t$ under a quadratic loss function. The MSE of this optimal forecast is

$$E[y_{t+1} - g(X_t)]^2 = E[y_{t+1} - E(y_{t+1}|X_t)]^2$$
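The optimality of the conditional mean is easy to check numerically. Below is a minimal Monte Carlo sketch (not from the original slides; the data-generating process $y_{t+1} = x_t^2 + \varepsilon_t$ is an assumption chosen for illustration): the conditional mean $E(y_{t+1}|x_t) = x_t^2$ attains a smaller MSE than a competing forecast.

```python
# Monte Carlo check that the conditional mean minimizes the MSE.
# Assumed DGP: y_{t+1} = x_t^2 + eps, so E(y_{t+1}|x_t) = x_t^2.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = x**2 + rng.normal(size=n)        # eps ~ N(0,1), independent of x

mse_cond = np.mean((y - x**2) ** 2)  # optimal forecast E(y|x) = x^2
mse_alt = np.mean((y - 1.0) ** 2)    # competing forecast g(x) = E(y) = 1

print(f"MSE of E(y|x): {mse_cond:.3f}")  # about 1.0 = Var(eps)
print(f"MSE of g(x)=1: {mse_alt:.3f}")   # about 3.0, strictly larger
```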
Forecast based on linear projections

We now restrict the class of forecasts we consider to linear functions of $X_t$:

$$y_{t+1|t} = \alpha' X_t$$

Suppose $\alpha$ is such that the resulting forecast error is uncorrelated with $X_t$:

$$E[(y_{t+1} - \alpha' X_t) X_t'] = 0 \quad (34)$$

If (34) holds, then we call $\alpha' X_t$ the linear projection of $y_{t+1}$ on $X_t$. The linear projection produces the smallest mean squared forecast error among the class of linear forecasting rules. To verify this, let $g' X_t$ be any arbitrary linear forecasting rule. Then

$$E(y_{t+1} - g' X_t)^2 = E(y_{t+1} - \alpha' X_t + \alpha' X_t - g' X_t)^2$$
$$= E(y_{t+1} - \alpha' X_t)^2 + 2E[(y_{t+1} - \alpha' X_t)(\alpha' X_t - g' X_t)] + E(\alpha' X_t - g' X_t)^2$$

The middle term is

$$E[(y_{t+1} - \alpha' X_t)(\alpha' X_t - g' X_t)] = E[(y_{t+1} - \alpha' X_t) X_t'] (\alpha - g) = 0$$

by the definition of the linear projection. Thus

$$E(y_{t+1} - g' X_t)^2 = E(y_{t+1} - \alpha' X_t)^2 + E(\alpha' X_t - g' X_t)^2$$

The optimal linear forecast is the value $g' X_t = \alpha' X_t$.
We use the notation $P(y_{t+1}|X_t) = \alpha' X_t$ to indicate the linear projection of $y_{t+1}$ on $X_t$. Notice that

$$MSE[P(y_{t+1}|X_t)] \geq MSE[E(y_{t+1}|X_t)]$$

The projection coefficient $\alpha$ can be calculated in terms of the moments of $y_{t+1}$ and $X_t$:

$$E(y_{t+1} X_t') = \alpha' E(X_t X_t')$$
$$\alpha' = E(y_{t+1} X_t') [E(X_t X_t')]^{-1}$$

It is easy to generalize to the multivariate case. Let $Y_{t+1}$ be an $n \times 1$ vector, $\alpha$ an $m \times n$ matrix and $X_t$ an $m \times 1$ vector. The linear projection is defined to be the vector $\alpha' X_t$ satisfying

$$E[(Y_{t+1} - \alpha' X_t) X_t'] = 0$$
Linear projections and OLS regression

There is a close relationship between the OLS estimator and the linear projection coefficient. If $y_{t+1}$ and $X_t$ are stationary processes and also ergodic for the second moments, then

$$\frac{1}{T} \sum_{t=1}^{T} X_t X_t' \xrightarrow{p} E(X_t X_t')$$
$$\frac{1}{T} \sum_{t=1}^{T} X_t y_{t+1} \xrightarrow{p} E(X_t y_{t+1})$$

implying

$$\hat{\beta} \xrightarrow{p} \alpha$$

The OLS regression yields a consistent estimate of the linear projection coefficient.
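A minimal sketch of this equivalence (the data-generating process and coefficient values are assumptions, not from the slides): the OLS coefficient from regressing $y_{t+1}$ on $X_t$ is the sample analogue of $\alpha = [E(X_t X_t')]^{-1} E(X_t y_{t+1})$ and converges to it as $T$ grows.

```python
# OLS as the sample analogue of the linear projection coefficient.
import numpy as np

rng = np.random.default_rng(1)
T, m = 50_000, 2
X = rng.normal(size=(T, m))                    # X_t, iid for simplicity
alpha_true = np.array([0.5, -0.3])             # assumed projection coefficient
y_next = X @ alpha_true + rng.normal(size=T)   # y_{t+1} = alpha'X_t + error

# beta_hat = (sum_t X_t X_t')^{-1} (sum_t X_t y_{t+1})
beta_hat = np.linalg.solve(X.T @ X, X.T @ y_next)
print(beta_hat)  # close to alpha_true for large T (consistency)
```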
Forecasting with VAR models

Iterated forecast

Let us consider the VAR(p) in companion form

$$Y_t = A Y_{t-1} + e_t \quad (35)$$

where $e_t$ is white noise. The h-step ahead linear predictor of $Y_{t+h}$ conditional on the information available at time $t$, $Y_{t+h|t}$, is given by

$$Y_{t+h|t} = A^h Y_t = A Y_{t+h-1|t} \quad (36)$$

where the first $n$ rows of $Y_{t+h|t}$ represent the optimal forecast of $Y_{t+h}$. From (36) it is easy to compute the forecast for $Y_{t+h}$ recursively at any horizon. This predictor is optimal in the sense that, as we know, it delivers the minimum MSE among forecasts that are linear functions of $Y_t$.
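A minimal sketch of the recursion in (36), with hypothetical VAR(2) coefficient matrices (the numbers are assumptions for illustration): stack the coefficients into the companion matrix and iterate $Y_{t+h|t} = A Y_{t+h-1|t}$.

```python
# Iterated VAR forecast via the companion form.
import numpy as np

n, p = 2, 2
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])  # assumed lag-1 coefficients
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])  # assumed lag-2 coefficients

# Companion matrix A (np x np): [Phi1 Phi2; I 0]
A = np.zeros((n * p, n * p))
A[:n, :n] = Phi1
A[:n, n:] = Phi2
A[n:, :n] = np.eye(n)

Y_t = np.array([1.0, -0.5, 0.8, 0.2])      # stacked (y_t', y_{t-1}')'

forecast = Y_t.copy()
for h in range(1, 5):
    forecast = A @ forecast                # Y_{t+h|t} = A Y_{t+h-1|t}
    print(h, forecast[:n])                 # first n rows: forecast of y_{t+h}
```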
Using

$$Y_{t+h} = A^h Y_t + \sum_{i=0}^{h-1} A^i e_{t+h-i} \quad (37)$$

we get the forecast error

$$Y_{t+h} - Y_{t+h|t} = \sum_{i=0}^{h-1} A^i e_{t+h-i} \quad (38)$$

From the forecast error it is easy to obtain the mean squared error, i.e. the covariance matrix of the forecast error:

$$MSE[Y_{t+h|t}] = E[(Y_{t+h} - Y_{t+h|t})(Y_{t+h} - Y_{t+h|t})'] = \Sigma(h) = \sum_{i=0}^{h-1} A^i \Omega A^{i\prime} = \Sigma(h-1) + A^{h-1} \Omega A^{(h-1)\prime}$$

The MSE of the forecast of the original $n$ variables is the upper-left $n \times n$ block of $\Sigma(h)$. Notice that the MSE is non-decreasing in $h$ and that as $h \to \infty$ it approaches the variance of $Y_t$.
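A self-contained sketch of the recursion $\Sigma(h) = \Sigma(h-1) + A^{h-1} \Omega A^{(h-1)\prime}$, using an assumed stationary VAR(1) (so the companion matrix is the VAR matrix itself). It also checks the limiting claim: for large $h$, $\Sigma(h)$ approaches the unconditional variance of $Y_t$, which solves the discrete Lyapunov equation $V = AVA' + \Omega$.

```python
# Forecast-error covariance Sigma(h) and its limit as h grows.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.4]])      # assumed VAR(1) coefficient matrix
Omega = np.array([[1.0, 0.3], [0.3, 0.5]])  # assumed innovation covariance

Sigma_h = np.zeros_like(Omega)
A_pow = np.eye(2)                           # A^0
for h in range(1, 51):
    Sigma_h = Sigma_h + A_pow @ Omega @ A_pow.T  # add A^{h-1} Omega A^{h-1}'
    A_pow = A @ A_pow                            # A^h for the next step

print(Sigma_h)                               # Sigma(50)
print(solve_discrete_lyapunov(A, Omega))     # Var(Y_t): nearly identical
```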
Direct forecast

An alternative is to compute the direct forecast, i.e. to compute the projection of $Y_{t+h}$ on $Y_t$. To see this, consider a bivariate VAR(p) with two variables, $x_t$ and $y_t$. We want to forecast $x_{t+h}$ given the information available at time $t$. The direct forecast works as follows:

1. Estimate the projection equation

$$x_t = a + \sum_{i=0}^{p-1} \phi_i x_{t-h-i} + \sum_{i=0}^{p-1} \psi_i y_{t-h-i} + \varepsilon_t$$

2. Using the estimated coefficients, obtain the predictor $\hat{x}_{t+h|t}$ as (see the sketch below)

$$\hat{x}_{t+h|t} = \hat{a} + \sum_{i=0}^{p-1} \hat{\phi}_i x_{t-i} + \sum_{i=0}^{p-1} \hat{\psi}_i y_{t-i}$$
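A minimal sketch of the two steps (the data-generating process, lag order and horizon are assumptions for illustration): regress $x_t$ on regressors dated $t-h$ and older, then apply the fitted coefficients to the most recent observations.

```python
# Direct h-step forecast: one OLS regression per horizon.
import numpy as np

rng = np.random.default_rng(2)
T, h, p = 500, 4, 2

# Simulate a toy bivariate system (assumed DGP).
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t-1] + 0.2 * y[t-1] + rng.normal()
    y[t] = 0.3 * y[t-1] + rng.normal()

# Step 1: estimate x_t = a + sum_i phi_i x_{t-h-i} + sum_i psi_i y_{t-h-i} + e_t.
rows = list(range(h + p - 1, T))
Z = np.column_stack(
    [np.ones(len(rows))]
    + [x[[t - h - i for t in rows]] for i in range(p)]
    + [y[[t - h - i for t in rows]] for i in range(p)]
)
b = np.linalg.lstsq(Z, x[rows], rcond=None)[0]

# Step 2: apply the coefficients to the latest p observations of x and y.
z_now = np.concatenate(([1.0], x[T-1:T-1-p:-1], y[T-1:T-1-p:-1]))
print("direct forecast of x at horizon h:", z_now @ b)
```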
Forecast evaluation: pseudo out-of-sample exercises

A key issue in forecasting is how to evaluate the forecasting accuracy of a model of interest. In particular, we will often be interested in comparing the performance of competing forecasting models. How can we perform such a forecast evaluation? Answer: we can compare the mean squared errors from pseudo out-of-sample forecast exercises.
Suppose we have a sample of $T$ observations, and let $\tau = T_0 < T$. A pseudo out-of-sample exercise works as follows (see the sketch after the list):

1. We use $\tau$ observations to estimate the parameters of the model.
2. We forecast $Y_{\tau+j}$ with $\hat{Y}_{\tau+j|\tau}$, $j = 1, 2, \ldots, s$.
3. We compute the forecast errors $w_{\tau+j|\tau} = Y_{\tau+j} - \hat{Y}_{\tau+j|\tau}$, $j = 1, 2, \ldots, s$.
4. We update $\tau = \tau + 1$ and repeat steps 1-3.

We repeat steps 1-4 up to the end of the sample and then, for each variable $i = 1, \ldots, n$ and horizon $j$, compute the mean squared error

$$MSE(\hat{Y}_{i,\tau+j|\tau}) = \frac{1}{T - T_0 - j} \sum_{\tau=T_0}^{T-j} w^2_{i,\tau+j|\tau}$$

or the root mean squared error

$$RMSE(\hat{Y}_{i,\tau+j|\tau}) = \sqrt{\frac{1}{T - T_0 - j} \sum_{\tau=T_0}^{T-j} w^2_{i,\tau+j|\tau}}$$

Repeating the same exercise for various models, we can choose the one with the smallest MSE or RMSE.
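A minimal sketch of the loop for one-step-ahead forecasts ($j = 1$) from an AR(1) estimated by OLS, with recursive (expanding-window) estimation; the data-generating process is an assumption for illustration.

```python
# Pseudo out-of-sample evaluation of an AR(1), one step ahead.
import numpy as np

rng = np.random.default_rng(3)
T, T0 = 300, 150
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t-1] + rng.normal()   # assumed DGP

errors = []
for tau in range(T0, T):
    # Step 1: estimate y_t = c + phi y_{t-1} + e_t on the first tau observations.
    Z = np.column_stack([np.ones(tau - 1), y[:tau-1]])
    c, phi = np.linalg.lstsq(Z, y[1:tau], rcond=None)[0]
    # Steps 2-3: forecast the next observation and record the forecast error.
    errors.append(y[tau] - (c + phi * y[tau - 1]))

rmse = np.sqrt(np.mean(np.square(errors)))
print("one-step-ahead RMSE:", rmse)      # close to 1.0, the innovation s.d.
```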
Application: forecasting inflation and unemployment

D'Agostino, Gambetti and Giannone (JAE 2012) consider a trivariate VAR(2) model including inflation, unemployment and the short-term interest rate. The sample spans 1948:I-2007:IV, and forecasts are computed up to 12 quarters ahead. Estimation is done in real time, using both a VAR and an AR for each of the series, both recursively and with a rolling window.