Open Economy Macroeconomics: Theory, methods and applications
Lecture 4: The state space representation and the Kalman Filter
Hernán D. Seoane, UC3M, January 2016

Today's lecture: the state space representation and the Kalman Filter.

Some references:
Hamilton (1994), Time Series Analysis, Ch. 13
Bauer, Haltom and Rubio-Ramírez (2003), "Using the Kalman Filter to Smooth the Shocks of a Dynamic Stochastic General Equilibrium Model"
Bauer, Haltom and Rubio-Ramírez (2005), "Smoothing the Shocks of a Dynamic Stochastic General Equilibrium Model"
Ljungqvist and Sargent (2012), Recursive Macroeconomic Theory, Ch. 2
Kim and Nelson (1999), State-Space Models with Regime Switching, Ch. 2 and 3

State space representation

$$x_{t+1} = F x_t + v_{t+1}$$
$$y_t = H' x_t + w_t$$

where $y_t$ is the vector of variables observed at $t$ and $x_t$ is the vector of unobserved variables, the state vector; $F$ and $H$ are coefficient matrices of the required dimensions. The first equation is the state (transition) equation, and the second is the measurement (observation) equation.

State Space Representation. $v_t$ and $w_t$ are uncorrelated, normally distributed white noise vectors with $E(v_t v_t') = Q$ and $E(w_t w_t') = R$.

State Space Representation. The representation is not unique. Suppose $B$ is a square, non-singular matrix conformable with $F$. Define $x_t^* = B x_t$, $F^* = B F B^{-1}$ and $H^{*\prime} = H' B^{-1}$. Then

$$x_{t+1}^* = F^* x_t^* + B v_{t+1}$$
$$y_t = H^{*\prime} x_t^* + w_t$$
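A quick numerical check of this equivalence (the matrices below are randomly drawn purely for illustration): the transformed system produces the same observables and the same state dynamics.

```python
# Sketch: verify that x* = B x, F* = B F B^{-1}, H*' = H' B^{-1}
# yields the same observable H*' x* = H' x and the same dynamics.
import numpy as np

rng = np.random.default_rng(2)
n = 3
F = rng.standard_normal((n, n))
H = rng.standard_normal((n, 1))
B = rng.standard_normal((n, n))        # assumed non-singular
F_star = B @ F @ np.linalg.inv(B)
H_star_T = H.T @ np.linalg.inv(B)      # H*' = H' B^{-1}

x = rng.standard_normal(n)
x_star = B @ x
print(np.allclose(H.T @ x, H_star_T @ x_star))    # same observable
print(np.allclose(B @ (F @ x), F_star @ x_star))  # same dynamics
```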

State Space Representation. Example: consider an AR(2) process

$$y_t = \rho_1 y_{t-1} + \rho_2 y_{t-2} + w_t$$

For $x_t = [y_t \;\; y_{t-1}]'$, define

$$x_t = \begin{pmatrix} \rho_1 & \rho_2 \\ 1 & 0 \end{pmatrix} x_{t-1} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} w_t$$

This is the transition equation. The measurement equation is $y_t = [1 \;\; 0] x_t$.
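To make the mapping concrete, here is a minimal NumPy sketch that builds $F$, $H$, $Q$, $R$ for this AR(2) example and simulates from the transition equation; the parameter values $\rho_1 = 0.5$, $\rho_2 = 0.2$ and the seed are illustrative assumptions, not from the slides.

```python
# Minimal sketch of the AR(2) state space representation above.
import numpy as np

rho1, rho2 = 0.5, 0.2            # assumed AR coefficients (stationary)
F = np.array([[rho1, rho2],      # transition matrix
              [1.0,  0.0]])
H = np.array([[1.0], [0.0]])     # measurement loading: y_t = H' x_t
Q = np.array([[1.0, 0.0],        # E(v_t v_t'): only the first state is shocked
              [0.0, 0.0]])
R = np.zeros((1, 1))             # no measurement error in this example

# Simulate T observations from x_t = F x_{t-1} + v_t
rng = np.random.default_rng(0)
T = 200
x = np.zeros((T, 2))
for t in range(1, T):
    v = np.array([rng.standard_normal(), 0.0])
    x[t] = F @ x[t - 1] + v
y = x @ H                        # y_t = H' x_t, shape (T, 1)
```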

Some preliminary stuff. Suppose we want to forecast based on conditional expectations: forecast the value of $Y_{t+1}$ based on variables $X_t$. Suppose we want the forecast to be a linear function, $\hat{Y}_{t+1} = \alpha' X_t$, and suppose we can find $\alpha$ such that the forecast error $Y_{t+1} - \alpha' X_t$ is uncorrelated with $X_t$. Then $\alpha' X_t$ is called the linear projection of $Y_{t+1}$ on $X_t$.
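A quick numerical illustration of this orthogonality property (with an assumed data-generating process): the sample projection coefficient, the least squares analogue of $\alpha = E(X_t X_t')^{-1} E(X_t Y_{t+1})$, makes the forecast error uncorrelated with $X_t$.

```python
# Sketch: the least squares coefficient makes the forecast error
# orthogonal to the regressors -- the defining property of a
# linear projection. The DGP below is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)
T = 10_000
X = rng.standard_normal((T, 2))                          # regressors
Y = X @ np.array([0.8, -0.3]) + rng.standard_normal(T)   # assumed DGP

alpha = np.linalg.solve(X.T @ X, X.T @ Y)  # sample projection coefficient
err = Y - X @ alpha
print(err @ X / T)                         # ~ [0, 0]: error orthogonal to X
```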

Some notation. Write $y^t \equiv (y_t, y_{t-1}, \ldots, y_1)$ for the history of observables up to $t$.
Let $x_{t+1|t} = \hat E(x_{t+1} \mid y^t)$ be the linear projection of $x_{t+1}$ on $y^t$ and a constant.
Let $\hat y_{t+1|t} = \hat E(y_{t+1} \mid y^t) = H' x_{t+1|t}$ be the linear projection of $y_{t+1}$ on $y^t$ and a constant.
Let $P_{t+1|t} = E[(x_{t+1} - x_{t+1|t})(x_{t+1} - x_{t+1|t})']$ be the mean squared error of the forecast of $x_{t+1}$.
Let $\Sigma_{t+1|t} = E[(y_{t+1} - \hat y_{t+1|t})(y_{t+1} - \hat y_{t+1|t})'] = H' P_{t+1|t} H + R$ be the mean squared error of the forecast of $y_{t+1}$.

How does it work? The Kalman Filter starts from an assumed initial condition for the state. Suppose we have $x_{t|t-1}$ and $\hat y_{t|t-1}$. When we observe $y_t$, we need to update to $x_{t|t}$. With $x_{t|t}$ we can compute $x_{t+1|t} = F x_{t|t}$ and also $\hat y_{t+1|t} = H' x_{t+1|t}$. At that point a full update of the information has occurred, and we just wait for the new observation $y_{t+1}$.

Forecasting $y_t$. Suppose we have $x_{t|t-1}$ and $P_{t|t-1}$, and we observe a new realization of the data, $y_t$. We want to use this new data point to obtain $x_{t+1|t}$ and $P_{t+1|t}$. Let's first find the forecast of $y_t$: $\hat y_{t|t-1} = \hat E(y_t \mid y^{t-1})$. Note that $\hat E(y_t \mid x_t) = H' x_t$, so $\hat y_{t|t-1} = H' \hat E(x_t \mid y^{t-1}) = H' x_{t|t-1}$. Given that $x_{t|t-1}$ is known from the previous iteration, we can compute the forecast of $y_t$.

Forecasting $y_t$. The error of this forecast is

$$y_t - \hat y_{t|t-1} = H' x_t + w_t - H' x_{t|t-1} = H'(x_t - x_{t|t-1}) + w_t$$

with MSE

$$E[(y_t - \hat y_{t|t-1})(y_t - \hat y_{t|t-1})'] = H' E[(x_t - x_{t|t-1})(x_t - x_{t|t-1})'] H + E[w_t w_t'] = H' P_{t|t-1} H + R$$

Updating the inference about $x_t$. The inference about $x_t$ is updated on the basis of the evidence in $y_t$ to produce $x_{t|t} = \hat E(x_t \mid y_t, y^{t-1}) = \hat E(x_t \mid y^t)$. This comes from the formula for updating a linear projection:

$$x_{t|t} = x_{t|t-1} + E[(x_t - x_{t|t-1})(y_t - \hat y_{t|t-1})'] \left\{E[(y_t - \hat y_{t|t-1})(y_t - \hat y_{t|t-1})']\right\}^{-1} (y_t - \hat y_{t|t-1})$$

Substituting $y_t - \hat y_{t|t-1} = H'(x_t - x_{t|t-1}) + w_t$ and using that $w_t$ is uncorrelated with the state forecast error,

$$E[(x_t - x_{t|t-1})(y_t - \hat y_{t|t-1})'] = E[(x_t - x_{t|t-1})(x_t - x_{t|t-1})'] H = P_{t|t-1} H$$

so

$$x_{t|t} = x_{t|t-1} + P_{t|t-1} H \left(H' P_{t|t-1} H + R\right)^{-1} (y_t - H' x_{t|t-1})$$

The MSE associated with this update, $P_{t|t}$, is

$$P_{t|t} = P_{t|t-1} - P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1} H' P_{t|t-1}$$

Producing a forecast of $x_{t+1}$. Now we want to forecast $x_{t+1|t} = \hat E(x_{t+1} \mid y^t)$:

$$x_{t+1|t} = \hat E(x_{t+1} \mid y^t) = F \hat E(x_t \mid y^t) + \hat E(v_{t+1} \mid y^t) = F x_{t|t} + 0$$

Plugging in our previous result,

$$x_{t+1|t} = F\left(x_{t|t-1} + P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1} (y_t - H' x_{t|t-1})\right)$$
$$x_{t+1|t} = F x_{t|t-1} + F P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1} (y_t - H' x_{t|t-1})$$

Define $K_t = F P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1}$.

How does it work? Hence

$$x_{t+1|t} = F x_{t|t-1} + K_t (y_t - H' x_{t|t-1})$$

This is an updating equation after observing $y_t$. $K_t$ is the Kalman gain, and it determines how much weight is allocated to the new information; it is the gain that minimizes the mean squared forecast error. The corresponding MSE recursion is

$$P_{t+1|t} = F P_{t|t} F' + Q$$

The algorithm. Given $x_{t|t-1}$ and $P_{t|t-1}$ and the observation $y_t$, one step of the Kalman Filter algorithm is:

$$\hat y_{t|t-1} = H' x_{t|t-1}$$
$$\Sigma_{t|t-1} = H' P_{t|t-1} H + R$$
$$x_{t|t} = x_{t|t-1} + P_{t|t-1} H \,\Sigma_{t|t-1}^{-1} (y_t - \hat y_{t|t-1})$$
$$P_{t|t} = P_{t|t-1} - P_{t|t-1} H \,\Sigma_{t|t-1}^{-1} H' P_{t|t-1}$$
$$x_{t+1|t} = F x_{t|t}$$
$$P_{t+1|t} = F P_{t|t} F' + Q$$
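Collecting the six equations, here is a minimal NumPy sketch of the recursion. The function name and interface are my own assumptions for illustration; the formulas and notation follow the slides (measurement $y_t = H' x_t + w_t$, so $H$ enters transposed).

```python
# Sketch of the Kalman filter recursion from the slides.
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Run the filter over y (T x k). Returns filtered states x_{t|t},
    their MSEs P_{t|t}, and the one-step-ahead predictions x_{t|t-1}, P_{t|t-1}."""
    T = y.shape[0]
    n = F.shape[0]
    x_pred, P_pred = x0.copy(), P0.copy()      # x_{1|0}, P_{1|0}
    x_filt = np.zeros((T, n))
    P_filt = np.zeros((T, n, n))
    x_preds = np.zeros((T, n))
    P_preds = np.zeros((T, n, n))
    for t in range(T):
        x_preds[t], P_preds[t] = x_pred, P_pred
        # Forecast of y_t and its MSE
        y_hat = H.T @ x_pred
        Sigma = H.T @ P_pred @ H + R
        # Update x_{t|t}, P_{t|t} after observing y_t
        gain = P_pred @ H @ np.linalg.inv(Sigma)
        x_f = x_pred + gain @ (y[t] - y_hat)
        P_f = P_pred - gain @ H.T @ P_pred
        x_filt[t], P_filt[t] = x_f, P_f
        # Predict x_{t+1|t}, P_{t+1|t}
        x_pred = F @ x_f
        P_pred = F @ P_f @ F.T + Q
    return x_filt, P_filt, x_preds, P_preds
```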

Intuition about $K_t$. Remember $K_t = F P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1}$, which we can rewrite as $K_t = F P_{t|t-1} H \,\Sigma_{t|t-1}^{-1}$. If we made a big mistake forecasting $x_t$, so that $P_{t|t-1}$ is large, then $K_t$ is large, which means we put a lot of weight on the new information.

Note that, intuitively, we start from an initial condition and then use the observables to update our forecast of the unobserved variables. Where does the initial condition come from? Where do we start the system?

We focus on stationary processes, so we initialize the algorithm at the steady state:

$$x_{1|0} = \bar x, \qquad P_{1|0} = \bar P$$

where $\bar x = F \bar x$ and

$$\bar P = F \bar P F' + Q$$

The second expression is a (discrete) Lyapunov equation and can be solved iteratively or using Kronecker products.
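Both solution strategies are easy to sketch; the function names below are my own, and the Kronecker-product solution uses the standard identity $\mathrm{vec}(F \bar P F') = (F \otimes F)\,\mathrm{vec}(\bar P)$, so $\mathrm{vec}(\bar P) = (I - F \otimes F)^{-1} \mathrm{vec}(Q)$.

```python
# Sketch: solve P = F P F' + Q by fixed-point iteration and via
# the Kronecker-product identity vec(P) = (I - F kron F)^{-1} vec(Q).
import numpy as np

def lyapunov_iterate(F, Q, tol=1e-12, max_iter=10_000):
    P = Q.copy()
    for _ in range(max_iter):
        P_new = F @ P @ F.T + Q
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new
    return P

def lyapunov_kron(F, Q):
    n = F.shape[0]
    # order="F" gives column-stacking, the usual vec() convention
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(F, F),
                           Q.reshape(n * n, order="F"))
    return vecP.reshape(n, n, order="F")
```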

So far. Note that we now have a way of recovering filtered estimates of the unobserved components, conditional on the data up to $t$. But we can try to do better: by the time we run the Kalman Filter, we already have the whole sequence of observables up to period $T$. We can try to recover smoothed estimates of the unobserved variables $x^T = \{x_t\}_{t=1}^T$ by computing their value conditional on the whole sample, $y^T$.

So far. We are looking for $x_{t|T} = \hat E(x_t \mid y^T)$. This procedure is called smoothing, and we do it using the Kalman Smoother. The inputs to the Kalman Smoother are all obtained from the Kalman Filter.

The Kalman Smoother. Suppose we knew $x_{t+1}$. Using the formula for updating linear projections,

$$\hat E(x_t \mid x_{t+1}, y^t) = x_{t|t} + E[(x_t - x_{t|t})(x_{t+1} - x_{t+1|t})'] \, P_{t+1|t}^{-1} (x_{t+1} - x_{t+1|t})$$

Here

$$E[(x_t - x_{t|t})(x_{t+1} - x_{t+1|t})'] = E[(x_t - x_{t|t})(F x_t + v_{t+1} - F x_{t|t})']$$

Since the projection error $x_t - x_{t|t}$ is uncorrelated with $v_{t+1}$, this equals

$$E[(x_t - x_{t|t})(x_t - x_{t|t})' F'] = P_{t|t} F'$$

so

$$\hat E(x_t \mid x_{t+1}, y^t) = x_{t|t} + J_t (x_{t+1} - x_{t+1|t}) \quad \text{for } J_t = P_{t|t} F' P_{t+1|t}^{-1}$$

The Kalman Smoother. Now, this linear projection $\hat E(x_t \mid x_{t+1}, y^t)$ is the same as $\hat E(x_t \mid x_{t+1}, y^T)$. This is true because

$$y_{t+j} = H'\left( F^{j-1} x_{t+1} + F^{j-2} v_{t+2} + \ldots + v_{t+j} \right) + w_{t+j}$$

for all $j \geq 1$, and the error $x_t - \hat E(x_t \mid x_{t+1}, y^t)$ is uncorrelated with $x_{t+1}$ (by definition of a linear projection) and with $v_{t+2}, \ldots, v_{t+j}$ and $w_{t+j}$ (by our maintained assumptions). Once we know $x_{t+1}$, the additional data contain no extra information: the error $x_t - \hat E(x_t \mid x_{t+1}, y^t)$ is uncorrelated with $y_{t+j}$ for all $j > 0$. Therefore

$$\hat E(x_t \mid x_{t+1}, y^T) = \hat E(x_t \mid x_{t+1}, y^t) = x_{t|t} + J_t (x_{t+1} - x_{t+1|t})$$

The Kalman Smoother. Finally, integrating out $x_{t+1}$:

$$x_{t|T} = \hat E(x_t \mid y^T) = \hat E\!\left[ \hat E(x_t \mid x_{t+1}, y^T) \mid y^T \right] = x_{t|t} + J_t \left( \hat E(x_{t+1} \mid y^T) - x_{t+1|t} \right) = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t})$$

Algorithm. Run the Kalman Filter and keep $\{x_{t|t}\}_{t=1}^T$, $\{x_{t+1|t}\}_{t=0}^{T-1}$, $\{P_{t|t}\}_{t=1}^T$ and $\{P_{t+1|t}\}_{t=0}^{T-1}$. Note that the last entry of $\{x_{t|t}\}_{t=1}^T$ is $x_{T|T}$, which starts the backward recursion. We then have all the information needed for $J_t = P_{t|t} F' P_{t+1|t}^{-1}$, which we use in

$$x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t})$$

to obtain $x_{t|T}$, iterating backwards from $t = T-1$ to $t = 1$.
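A minimal sketch of this backward pass, consuming the arrays returned by the kalman_filter sketch above (the interface is my assumption; the recursion is the one in the slides).

```python
# Sketch of the Kalman smoother backward recursion.
import numpy as np

def kalman_smoother(F, x_filt, P_filt, x_preds, P_preds):
    """Backward pass: x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t}).
    x_preds[t], P_preds[t] hold x_{t|t-1}, P_{t|t-1} (0-indexed)."""
    T, n = x_filt.shape
    x_smooth = np.zeros((T, n))
    x_smooth[-1] = x_filt[-1]                 # x_{T|T} starts the recursion
    for t in range(T - 2, -1, -1):
        # J_t = P_{t|t} F' P_{t+1|t}^{-1}
        J = P_filt[t] @ F.T @ np.linalg.inv(P_preds[t + 1])
        x_smooth[t] = x_filt[t] + J @ (x_smooth[t + 1] - x_preds[t + 1])
    return x_smooth
```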

Implications. Given that the innovations are Gaussian and the system is linear,

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} \Bigg|\, y^{t-1} \sim N\left( \begin{bmatrix} x_{t|t-1} \\ \hat y_{t|t-1} \end{bmatrix}, \begin{bmatrix} P_{t|t-1} & P_{t|t-1} H \\ H' P_{t|t-1} & H' P_{t|t-1} H + R \end{bmatrix} \right)$$

This implies that $x_t \mid y^t \sim N(x_{t|t}, P_{t|t})$. A consequence is that $y_t \mid y^{t-1}$ is also normally distributed, with mean $\hat y_{t|t-1}$ and variance $\Sigma_{t|t-1}$.
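One practical consequence worth making explicit (the slides stop at the normality result, so the sketch below is an illustrative extension of the kalman_filter function above): since $y_t \mid y^{t-1} \sim N(\hat y_{t|t-1}, \Sigma_{t|t-1})$, the sample log likelihood can be accumulated inside the filter loop, the so-called prediction error decomposition.

```python
# Sketch: accumulate the Gaussian log likelihood of y inside the filter,
# using that y_t | y^{t-1} ~ N(y_hat_{t|t-1}, Sigma_{t|t-1}).
import numpy as np

def kalman_loglik(y, F, H, Q, R, x0, P0):
    T, k = y.shape
    x_pred, P_pred = x0.copy(), P0.copy()
    loglik = 0.0
    for t in range(T):
        y_hat = H.T @ x_pred
        Sigma = H.T @ P_pred @ H + R
        e = y[t] - y_hat                        # prediction error
        Sigma_inv = np.linalg.inv(Sigma)
        loglik += -0.5 * (k * np.log(2 * np.pi)
                          + np.log(np.linalg.det(Sigma))
                          + e @ Sigma_inv @ e)
        # Same update and prediction steps as in kalman_filter above
        gain = P_pred @ H @ Sigma_inv
        x_f = x_pred + gain @ e
        P_f = P_pred - gain @ H.T @ P_pred
        x_pred, P_pred = F @ x_f, F @ P_f @ F.T + Q
    return loglik
```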