Open Economy Macroeconomics: Theory, methods and applications

1 Open Economy Macroeconomics: Theory, methods and applications. Lecture 4: The state space representation and the Kalman Filter. Hernán D. Seoane, UC3M, January 2016

2 Today's lecture: the state space representation and the Kalman Filter.

3 Today's lecture: some references
Hamilton (2000), Ch. 13
Bauer, Haltom and Rubio-Ramirez (2003), Using the Kalman Filter to Smooth the Shocks of a Dynamic Stochastic General Equilibrium Model
Bauer, Haltom and Rubio-Ramirez (2005), Smoothing the Shocks of a Dynamic Stochastic General Equilibrium Model
Sargent and Ljungqvist, Advanced Macroeconomic Theory, Ch. 2
Kim and Nelson, State-Space Models with Regime Switching, Ch. 2 and Ch. 3

4 State space representation
$x_{t+1} = F x_t + v_{t+1}$
$y_t = H' x_t + w_t$
where $y_t$ is a vector of variables observed at $t$ and $x_t$ is the vector of unobserved variables, the state vector; $F$ and $H$ are coefficient matrices of the required dimensions. The first equation is the state equation and the second is the measurement (or observation) equation.

5 State space representation
$v_t$ and $w_t$ are uncorrelated, normally distributed white noise vectors with
$E(v_t v_t') = Q$
$E(w_t w_t') = R$

6 State space representation
The representation is not unique. Suppose $B$ is a non-singular square matrix conformable with $F$. Define $x_t^* = B x_t$, $F^* = B F B^{-1}$ and $H^{*\prime} = H' B^{-1}$. Then
$x_{t+1}^* = F^* x_t^* + B v_{t+1}$
$y_t = H^{*\prime} x_t^* + w_t$
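As a quick numerical check of this non-uniqueness, the sketch below pushes the same shocks through the original and the transformed system and verifies that the implied observables coincide. The matrices $F$, $H$ and $B$ are illustrative choices, not taken from the lecture; numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original system (illustrative matrices) and an arbitrary non-singular B
F = np.array([[0.9, 0.1],
              [0.0, 0.5]])
H = np.array([[1.0],
              [0.0]])
B = np.array([[2.0, 1.0],
              [0.0, 1.0]])

F_star = B @ F @ np.linalg.inv(B)        # F* = B F B^{-1}
H_star = np.linalg.inv(B).T @ H          # so that H*' x*_t = H' x_t

x, x_star = np.zeros(2), np.zeros(2)     # x*_0 = B x_0 = 0
for _ in range(5):
    v = rng.normal(size=2)
    x = F @ x + v                        # original state equation
    x_star = F_star @ x_star + B @ v     # transformed state equation
    # both representations deliver the same observable
    assert np.allclose(H.T @ x, H_star.T @ x_star)
```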

7 State space representation
Example: consider an AR(2) process
$y_t = \rho_1 y_{t-1} + \rho_2 y_{t-2} + w_t$
For $x_t = [y_t \;\; y_{t-1}]'$, define the transition equation
$x_t = \begin{pmatrix} \rho_1 & \rho_2 \\ 1 & 0 \end{pmatrix} x_{t-1} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} w_t$
The measurement equation is
$y_t = [1 \;\; 0] x_t$
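As an illustration, the sketch below builds the companion-form matrices for this AR(2) and simulates a short sample that can later be fed to the filter. The parameter values and variable names are hypothetical choices, not from the lecture; numpy is assumed.

```python
import numpy as np

# AR(2): y_t = rho1*y_{t-1} + rho2*y_{t-2} + w_t, written in companion form
rho1, rho2, sigma_w = 0.5, 0.3, 1.0      # illustrative values

F = np.array([[rho1, rho2],
              [1.0,  0.0]])              # transition matrix
H = np.array([[1.0],
              [0.0]])                    # measurement loading: y_t = H' x_t
Q = np.array([[sigma_w**2, 0.0],
              [0.0,        0.0]])        # the shock only enters the first state equation
R = np.zeros((1, 1))                     # no measurement error in this example

# simulate a short sample
rng = np.random.default_rng(1)
T = 200
x = np.zeros(2)
y = np.zeros((T, 1))
for t in range(T):
    x = F @ x + np.array([rng.normal(scale=sigma_w), 0.0])
    y[t] = H.T @ x
```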

8 Some preliminary stuff
Suppose we want to forecast based on conditional expectations: forecast the value of $Y_{t+1}$ based on variables $X_t$. Suppose we want the forecast to be a linear function $\alpha' X_t$, and suppose we can find $\alpha$'s such that the forecast error $Y_{t+1} - \alpha' X_t$ is uncorrelated with $X_t$. Then $\alpha' X_t$ is called the linear projection of $Y_{t+1}$ on $X_t$.

9 Some notation
Let $x_{t+1|t} = \hat{E}(x_{t+1} \mid y^t)$ be the linear projection of $x_{t+1}$ on $y^t$ (the history of observables through $t$) and a constant.
Let $y_{t+1|t} = \hat{E}(y_{t+1} \mid y^t) = H' x_{t+1|t}$ be the linear projection of $y_{t+1}$ on $y^t$ and a constant.
Let $P_{t+1|t} = E[(x_{t+1} - x_{t+1|t})(x_{t+1} - x_{t+1|t})']$ be the mean squared error of the forecast of $x_{t+1}$.
Let $\Sigma_{t+1|t} = E[(y_{t+1} - y_{t+1|t})(y_{t+1} - y_{t+1|t})'] = H' P_{t+1|t} H + R$ be the mean squared error of the forecast of $y_{t+1}$.

10 How does it work?
The Kalman Filter starts by assuming an initial state condition. Suppose we have $x_{t|t-1}$ and $y_{t|t-1}$. When we observe $y_t$ we need to update to $x_{t|t}$. With $x_{t|t}$ we can compute $x_{t+1|t} = F x_{t|t}$ and also $y_{t+1|t} = H' x_{t+1|t}$. At that point a whole updating of information has occurred and we just need to wait for the new observation $y_{t+1}$.

11 Forecasting $y_t$
Suppose we have $x_{t|t-1}$ and $P_{t|t-1}$, and we observe a new realization of the data, $y_t$. We now want to use this new data to obtain $x_{t+1|t}$ and $P_{t+1|t}$. Let's first find the forecast of $y_t$: $\hat{y}_{t|t-1} = \hat{E}(y_t \mid y^{t-1})$. Note that $\hat{E}(y_t \mid x_t) = H' x_t$, so $\hat{y}_{t|t-1} = H' \hat{E}(x_t \mid y^{t-1}) = H' x_{t|t-1}$. Given that $x_{t|t-1}$ is known from the previous iteration, we can solve for the forecast of $y_t$.

12 Forecasting $y_t$
The error of this forecast is
$y_t - \hat{y}_{t|t-1} = H' x_t + w_t - H' x_{t|t-1} = H'(x_t - x_{t|t-1}) + w_t$
with an MSE of
$E[(y_t - \hat{y}_{t|t-1})(y_t - \hat{y}_{t|t-1})'] = H' E[(x_t - x_{t|t-1})(x_t - x_{t|t-1})'] H + E[w_t w_t']$
or
$E[(y_t - \hat{y}_{t|t-1})(y_t - \hat{y}_{t|t-1})'] = H' P_{t|t-1} H + R$

13 Updating the inference about $x_t$
The inference about $x_t$ is updated on the basis of the evidence in $y_t$ to produce
$x_{t|t} = \hat{E}(x_t \mid y_t, y^{t-1}) = \hat{E}(x_t \mid y^t)$
This comes from the formula for updating a linear projection:
$x_{t|t} = x_{t|t-1} + E[(x_t - x_{t|t-1})(y_t - \hat{y}_{t|t-1})'] \, \{E[(y_t - \hat{y}_{t|t-1})(y_t - \hat{y}_{t|t-1})']\}^{-1} (y_t - \hat{y}_{t|t-1})$
Since $y_t - \hat{y}_{t|t-1} = H'(x_t - x_{t|t-1}) + w_t$ and $w_t$ is uncorrelated with the state forecast error,
$E[(x_t - x_{t|t-1})(y_t - \hat{y}_{t|t-1})'] = E[(x_t - x_{t|t-1})(x_t - x_{t|t-1})'] H = P_{t|t-1} H$
so that
$x_{t|t} = x_{t|t-1} + P_{t|t-1} H \{H' P_{t|t-1} H + R\}^{-1} (y_t - H' x_{t|t-1})$
The MSE associated with this update is
$P_{t|t} = P_{t|t-1} - P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1} H' P_{t|t-1}$
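The last two formulas are the filter's measurement update. A minimal sketch of that single step under the notation above (the function name is mine; numpy assumed):

```python
import numpy as np

def kalman_update(x_pred, P_pred, y, H, R):
    """One measurement update: from (x_{t|t-1}, P_{t|t-1}) and y_t to (x_{t|t}, P_{t|t})."""
    Sigma = H.T @ P_pred @ H + R                    # Var of the forecast error y_t - H' x_{t|t-1}
    gain = P_pred @ H @ np.linalg.inv(Sigma)        # P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1}
    x_filt = x_pred + gain @ (y - H.T @ x_pred)     # x_{t|t}
    P_filt = P_pred - gain @ H.T @ P_pred           # P_{t|t}
    return x_filt, P_filt
```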

14 Producing a forecast of $x_{t+1}$
Now we want to forecast $x_{t+1|t} = \hat{E}(x_{t+1} \mid y^t)$:
$x_{t+1|t} = \hat{E}(x_{t+1} \mid y^t) = F \hat{E}(x_t \mid y^t) + \hat{E}(v_{t+1} \mid y^t) = F x_{t|t} + 0$
Plugging in our previous findings,
$x_{t+1|t} = F \left( x_{t|t-1} + P_{t|t-1} H \{H' P_{t|t-1} H + R\}^{-1} (y_t - H' x_{t|t-1}) \right)$
$x_{t+1|t} = F x_{t|t-1} + F P_{t|t-1} H \{H' P_{t|t-1} H + R\}^{-1} (y_t - H' x_{t|t-1})$
Define $K_t = F P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1}$

15 How does it work?
Hence
$x_{t+1|t} = F x_{t|t-1} + K_t (y_t - H' x_{t|t-1})$
This is an updating equation after observing $y_t$. $K_t$ is the Kalman gain and determines how much importance is allocated to the new information. We want the $K_t$ that minimizes the mean squared forecast error
$P_{t+1|t} = F P_{t|t} F' + Q$

16 The algorithm
Given $x_{t|t-1}$ and $P_{t|t-1}$ and observation $y_t$, the Kalman Filter algorithm is as follows:
$y_{t|t-1} = H' x_{t|t-1}$
$\Sigma_{t|t-1} = H' P_{t|t-1} H + R$
$x_{t|t} = x_{t|t-1} + P_{t|t-1} H [H' P_{t|t-1} H + R]^{-1} (y_t - y_{t|t-1})$
$P_{t|t} = P_{t|t-1} - P_{t|t-1} H [H' P_{t|t-1} H + R]^{-1} H' P_{t|t-1}$
$x_{t+1|t} = F x_{t|t}$
$P_{t+1|t} = F P_{t|t} F' + Q$
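Collecting the six equations above into a single recursion, here is a minimal sketch (numpy assumed; the function name and the array layout are mine, not from the lecture). It also stores the MSE matrices that the smoother will need later.

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Run the recursion above on data y (T x k), starting from x_{1|0} = x0, P_{1|0} = P0.
    Returns x_{t|t}, x_{t+1|t}, P_{t|t}, P_{t+1|t} for t = 1, ..., T."""
    T, n = y.shape[0], F.shape[0]
    x_pred, P_pred = x0.copy(), P0.copy()
    xf, xp = np.zeros((T, n)), np.zeros((T, n))
    Pf, Pp = np.zeros((T, n, n)), np.zeros((T, n, n))
    for t in range(T):
        y_pred = H.T @ x_pred                        # y_{t|t-1}
        Sigma = H.T @ P_pred @ H + R                 # Sigma_{t|t-1}
        gain = P_pred @ H @ np.linalg.inv(Sigma)
        xf[t] = x_pred + gain @ (y[t] - y_pred)      # x_{t|t}
        Pf[t] = P_pred - gain @ H.T @ P_pred         # P_{t|t}
        x_pred = F @ xf[t]                           # x_{t+1|t}
        P_pred = F @ Pf[t] @ F.T + Q                 # P_{t+1|t}
        xp[t], Pp[t] = x_pred, P_pred
    return xf, xp, Pf, Pp
```

With the AR(2) matrices simulated earlier, a call could look like `xf, xp, Pf, Pp = kalman_filter(y, F, H, Q, R, np.zeros(2), Q.copy())`; any positive semi-definite starting $P_{1|0}$ works for illustration, and the stationary initialization is discussed below.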

17 Intuition about $K_t$
Remember
$K_t = F P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1}$
Rewrite it as
$K_t = F P_{t|t-1} H (\Sigma_{t|t-1})^{-1}$
If we made a big mistake forecasting $x_{t|t-1}$, so that $P_{t|t-1}$ is large, $K_t$ is large, which means we are going to put a lot of weight on the new information.

18 Note that, intuitively, we start from an initial condition. Then we use the observables to update our forecast of the unobserved variables. Where is the initial condition coming from? Where do we start the system?

19 We focus on stationary processes and initialize the algorithm at the steady state:
$x_{1|0} = \bar{x}$
$P_{1|0} = \bar{P}$
where
$\bar{x} = F \bar{x}$
$\bar{P} = F \bar{P} F' + Q$
The second expression is a Lyapunov equation and can be solved iteratively or using Kronecker products.
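A minimal sketch of the two solution approaches mentioned here, for the zero-mean stationary case so that $x_{1|0} = 0$ (numpy assumed; function names are mine):

```python
import numpy as np

def lyapunov_iterate(F, Q, tol=1e-12, max_iter=10_000):
    """Solve P = F P F' + Q by fixed-point iteration."""
    P = Q.copy()
    for _ in range(max_iter):
        P_new = F @ P @ F.T + Q
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new
    return P

def lyapunov_kron(F, Q):
    """Solve the same equation via vec(P) = (I - F kron F)^{-1} vec(Q)."""
    n = F.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(F, F), Q.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

# initialization of the filter for a stationary, zero-mean state:
# x_1_0 = np.zeros(F.shape[0]); P_1_0 = lyapunov_kron(F, Q)
```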

20 So far
Note that we have a way of recovering filtered estimates of the unobserved components conditional on the data up to $t$. We can try to do better than that: when running the Kalman Filter, we already have the whole sequence of observables up to period $T$. So we can try to recover smoothed estimates of the unobserved variables, $x^T = \{x_t\}_{t=1}^T$, by computing their value conditional on the whole sample, $y^T$.

21 So far
We are looking for $x_{t|T} = E(x_t \mid y^T)$. This procedure is called smoothing and we do it using the Kalman Smoother. The inputs for the Kalman Smoother are all obtained from the Kalman Filter.

22 The Kalman Smoother
Suppose we knew $x_{t+1}$. Using the formula for updating linear projections,
$E(x_t \mid x_{t+1}, y^t) = x_{t|t} + E[(x_t - x_{t|t})(x_{t+1} - x_{t+1|t})'] \, P_{t+1|t}^{-1} (x_{t+1} - x_{t+1|t})$
Here
$E[(x_t - x_{t|t})(x_{t+1} - x_{t+1|t})'] = E[(x_t - x_{t|t})(F x_t + v_{t+1} - F x_{t|t})']$
Because the projection error $x_t - x_{t|t}$ is uncorrelated with $v_{t+1}$, this equals
$E[(x_t - x_{t|t})(x_t - x_{t|t})' F'] = P_{t|t} F'$
Therefore
$E(x_t \mid x_{t+1}, y^t) = x_{t|t} + J_t (x_{t+1} - x_{t+1|t})$ for $J_t = P_{t|t} F' P_{t+1|t}^{-1}$

23 The Kalman Smoother
Now, this linear projection $E(x_t \mid x_{t+1}, y^t)$ is the same as $E(x_t \mid x_{t+1}, y^T)$. This is true because
$y_{t+j} = H' \left( F^{j-1} x_{t+1} + F^{j-2} v_{t+2} + \dots + v_{t+j} \right) + w_{t+j}$ for all $j \geq 1$
and the error $x_t - E(x_t \mid x_{t+1}, y^t)$ is uncorrelated with $x_{t+1}$ (by definition of a linear projection) and with $v_{t+2}, \dots, v_{t+j}$ and $w_{t+j}$ (because of our maintained assumptions). Once we know $x_{t+1}$, the additional data contain no information: the error $x_t - E(x_t \mid x_{t+1}, y^t)$ is uncorrelated with $y_{t+j}$ for all $j > 0$. Then
$E(x_t \mid x_{t+1}, y^t) = E(x_t \mid x_{t+1}, y^T) = x_{t|t} + J_t (x_{t+1} - x_{t+1|t})$

24 The Kalman Smoother
Finally, integrating out $x_{t+1}$,
$E(x_t \mid y^T) = E\left[ E(x_t \mid x_{t+1}, y^T) \mid y^T \right] = x_{t|t} + J_t \left( E(x_{t+1} \mid y^T) - x_{t+1|t} \right)$
that is,
$x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t})$

25 Algorithm
Run the Kalman Filter and keep $\{x_{t|t}\}_{t=1}^T$, $\{x_{t+1|t}\}_{t=0}^{T-1}$, $\{P_{t|t}\}_{t=1}^T$, $\{P_{t+1|t}\}_{t=0}^{T-1}$. Note that the last entry in $\{x_{t|t}\}_{t=1}^T$ is $x_{T|T}$. We then have all the information for $J_t = P_{t|t} F' P_{t+1|t}^{-1}$, which we use in
$x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t})$
to obtain $x_{t|T}$. Finally, we iterate backwards, starting from $t = T - 1$ since $x_{T|T}$ is already known.
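A minimal sketch of this backward pass, assuming the filtered and predicted quantities were stored with the layout of the filter sketch above (i.e. `x_pred[t]` holds $x_{t+1|t}$ and `P_pred[t]` holds $P_{t+1|t}$); numpy assumed, function name is mine.

```python
import numpy as np

def kalman_smoother(x_filt, x_pred, P_filt, P_pred, F):
    """Backward recursion: x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t})."""
    T = x_filt.shape[0]
    x_smooth = x_filt.copy()                              # last entry is already x_{T|T}
    for t in range(T - 2, -1, -1):
        J = P_filt[t] @ F.T @ np.linalg.inv(P_pred[t])    # J_t = P_{t|t} F' P_{t+1|t}^{-1}
        x_smooth[t] = x_filt[t] + J @ (x_smooth[t + 1] - x_pred[t])
    return x_smooth
```

For example, `x_smooth = kalman_smoother(xf, xp, Pf, Pp, F)` after the filter run above.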

26 Implications
Given that the innovations are Gaussian and the system is linear,
$\begin{pmatrix} x_t \\ y_t \end{pmatrix} \Big|\, y^{t-1} \sim N\left( \begin{pmatrix} x_{t|t-1} \\ y_{t|t-1} \end{pmatrix}, \begin{pmatrix} P_{t|t-1} & P_{t|t-1} H \\ H' P_{t|t-1} & H' P_{t|t-1} H + R \end{pmatrix} \right)$
This implies that $x_t \mid y^t \sim N(x_{t|t}, P_{t|t})$. A consequence of this is that $y_t \mid y^{t-1}$ is also normally distributed.
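Because $y_t \mid y^{t-1}$ is normal with mean $y_{t|t-1}$ and variance $\Sigma_{t|t-1}$, the Gaussian log-likelihood of the data can be accumulated inside the same recursion (the prediction-error decomposition). The sketch below is an illustration of that idea under the notation above, not part of the lecture slides; numpy assumed.

```python
import numpy as np

def log_likelihood(y, F, H, Q, R, x0, P0):
    """Gaussian log-likelihood via y_t | y^{t-1} ~ N(H' x_{t|t-1}, H' P_{t|t-1} H + R)."""
    T, k = y.shape
    x_pred, P_pred = x0.copy(), P0.copy()
    ll = 0.0
    for t in range(T):
        e = y[t] - H.T @ x_pred                           # forecast error y_t - y_{t|t-1}
        Sigma = H.T @ P_pred @ H + R                      # Sigma_{t|t-1}
        ll += -0.5 * (k * np.log(2 * np.pi)
                      + np.log(np.linalg.det(Sigma))
                      + e @ np.linalg.solve(Sigma, e))
        gain = P_pred @ H @ np.linalg.inv(Sigma)          # update and predict, as in the filter
        x_pred = F @ (x_pred + gain @ e)
        P_pred = F @ (P_pred - gain @ H.T @ P_pred) @ F.T + Q
    return ll
```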
