Parameterized Expectations Algorithm


Parameterized Expectations Algorithm
Wouter J. Den Haan, London School of Economics
© by Wouter J. Den Haan

Overview

- Two PEA algorithms
- Explaining the stochastic-simulation PEA
- Advantages and disadvantages
- Improvements of Maliar, Maliar & Judd
- Extensions: learning; combining with perturbation

Model

c_t^{-ν} = E_t[β c_{t+1}^{-ν} (α z_{t+1} k_{t+1}^{α-1} + 1 - δ)]
c_t + k_{t+1} = z_t k_t^α + (1 - δ) k_t
ln(z_{t+1}) = ρ ln(z_t) + ε_{t+1}, ε_{t+1} ~ N(0, σ^2)
k_1, z_1 given
k_t is the beginning-of-period t capital stock
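As a small worked example (not on the slide), the deterministic steady state follows from the Euler equation with c_{t+1} = c_t and z = 1, i.e. 1 = β(α k^{α-1} + 1 - δ); it is a convenient initial condition for the simulations below. The parameter values are illustrative assumptions, not from the slides.

    % Deterministic steady state of the model (illustrative parameter values).
    beta = 0.99; alpha = 0.36; delta = 0.025;
    kss = (alpha/(1/beta - 1 + delta))^(1/(1-alpha));  % from the Euler equation
    css = kss^alpha - delta*kss;                       % from the budget constraint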

Two types of PEA

1. Standard projection algorithm:
   1. parameterize E_t[·] with P_n(k_t, z_t; η_n)
   2. solve c_t from c_t = (P_n(k_t, z_t; η_n))^{-1/ν} and k_{t+1} from the budget constraint
2. Stochastic (simulation) PEA

Stochastic PEA based on simulations

1. Simulate {z_t}_{t=1}^T
2. Let η_n^1 be the initial guess for η_n

Stochastic PEA

3. Iterate until η_n^i converges, using the following scheme:
   1. Generate {c_t, k_{t+1}}_{t=1}^T using
      c_t^{-ν} = P_n(k_t, z_t; η_n^i)
      k_{t+1} = z_t k_t^α + (1 - δ) k_t - c_t
   2. Generate {y_{t+1}}_{t=1}^{T-1} using
      y_{t+1} = β c_{t+1}^{-ν} (α z_{t+1} k_{t+1}^{α-1} + 1 - δ)
   3. Let η̂_n^i = argmin_η Σ_{t=T_begin}^{T-1} (y_{t+1} - P_n(k_t, z_t; η))^2
   4. Update using η_n^{i+1} = ω η̂_n^i + (1 - ω) η_n^i with 0 < ω ≤ 1

Stochastic PEA

- T_begin >> 1 (say 500 or 1,000) ensures that possibly bad period-1 values do not matter
- ω < 1 improves stability
- ω is called the "dampening" parameter
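To make the scheme concrete, here is a minimal, self-contained MATLAB sketch of steps 1-3 for the model above, assuming the log-linear exponential parameterization P_n(k, z; η) = exp(η_1 + η_2 ln k + η_3 ln z). All parameter values, the initial belief, and the use of fminsearch for the nonlinear least-squares step are illustrative choices, not from the slides; a serious implementation would add safeguards against explosive simulations (e.g. the moving bounds of Maliar and Maliar).

    % Minimal sketch of the stochastic-simulation PEA (illustrative values).
    nu = 2; beta = 0.99; alpha = 0.36; delta = 0.025;  % preferences/technology
    rho = 0.95; sigma = 0.01;                          % productivity process
    T = 5000; Tbegin = 500; omega = 0.5;               % algorithm settings

    rng(1);                                % step 1: simulate {z_t}
    lnz = zeros(T,1);
    for t = 1:T-1, lnz(t+1) = rho*lnz(t) + sigma*randn; end
    z = exp(lnz);

    kss = (alpha/(1/beta - 1 + delta))^(1/(1-alpha));  % steady-state capital
    css = kss^alpha - delta*kss;                       % steady-state consumption
    eta = [-nu*log(css); 0; 0];            % step 2: initial belief (assumed)

    Pn = @(e,X) exp(X*e);                  % X = [1, ln k, ln z]
    for it = 1:200                         % step 3: iterate on the belief
        k = zeros(T+1,1); k(1) = kss; c = zeros(T,1);
        for t = 1:T                        % 3.1: simulate {c_t, k_{t+1}}
            c(t)   = Pn(eta,[1 log(k(t)) log(z(t))])^(-1/nu);
            k(t+1) = z(t)*k(t)^alpha + (1-delta)*k(t) - c(t);
        end
        % 3.2: realized right-hand side of the Euler equation; y(t) is y_{t+1}
        y = beta*c(2:T).^(-nu).*(alpha*z(2:T).*k(2:T).^(alpha-1) + 1 - delta);
        % 3.3: nonlinear least squares over t = Tbegin,...,T-1
        s   = (Tbegin:T-1)';
        X   = [ones(numel(s),1) log(k(s)) log(z(s))];
        obj = @(e) sum((y(s) - Pn(e,X)).^2);
        etahat = fminsearch(obj, eta);
        % 3.4: dampened update
        etanew = omega*etahat + (1-omega)*eta;
        if norm(etanew - eta) < 1e-6, break; end
        eta = etanew;
    end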

Stochastic PEA

Idea of the regression:
y_{t+1} = P_n(k_t, z_t; η) + u_{t+1}, where u_{t+1} is a prediction error
⟹ u_{t+1} is orthogonal to the regressors

Suppose P_n(k_t, z_t; η) = exp(a_0 + a_1 ln k_t + a_2 ln z_t). You are not allowed to run the linear regression
ln y_{t+1} = a_0 + a_1 ln k_t + a_2 ln z_t + ũ_{t+1}
Why not? The prediction error is additive in levels, so after taking logs ũ_{t+1} is no longer mean zero or orthogonal to the regressors.
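A quick Monte Carlo illustration of the point (our own construction, not from the slides): with an additive mean-zero prediction error, least squares on the logged equation is inconsistent, and y_{t+1} can even be negative, forcing observations to be dropped.

    % Demonstration: LS on the logged equation is biased when the error is
    % additive in levels (true coefficients are a0 = 1, a1 = 0.5).
    rng(2); n = 1e5;
    x  = randn(n,1);
    y  = exp(1 + 0.5*x) + 2*randn(n,1);         % additive prediction error
    ok = y > 0;                                 % logs force us to drop y <= 0
    b  = [ones(sum(ok),1) x(ok)] \ log(y(ok))   % b is far from [1; 0.5]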

PEA & RE

- Suppose η_n is the fixed point we are looking for
- So with η_n we get the best predictor of y_{t+1}
- Does this mean that the solution is a rational expectations equilibrium?

Disadvantages of the stochastic-simulation PEA

- The inverse of X'X may be hard to calculate for higher-order approximations
- Regression points are clustered ⟹ low precision
  - recall that even equidistant nodes are not enough for uniform convergence
  - "nodes" are even less spread out with the stochastic PEA

Disadvantages of the stochastic PEA

- The projection step has sampling error, which disappears at a slow rate (especially with serial correlation)

Advantages of the stochastic-simulation PEA

- Regression points are clustered ⟹ better fit where it matters IF the functional form is poor (with a good functional form it is better to spread out the points)

Advantages of the stochastic-simulation PEA

- Grid: you may include impossible points
- Simulation: the model itself tells you which nodes to include (accuracy also matters away from the fixed point: during the iterations you may still end up in weird parts of the state space)

Odd shapes: the ergodic set in a matching model

Improvements proposed by Maliar, Maliar & Judd

1. Use the flexibility given to you
2. Use Ê[y_{t+1}] instead of y_{t+1} as the regressand
   - Ê[y_{t+1}] is a numerical approximation of E_t[y_{t+1}]
   - even with a poor approximation the results improve!
3. Improve the regression step

Use flexibility

There are many E_t[·]'s one could approximate.

1. Standard approach:
   c_t^{-ν} = E_t[β c_{t+1}^{-ν} (α z_{t+1} k_{t+1}^{α-1} + 1 - δ)]
2. Alternative:
   k_{t+1} = E_t[k_{t+1} β (c_{t+1}/c_t)^{-ν} (α z_{t+1} k_{t+1}^{α-1} + 1 - δ)]

Such transformations can make computations easier, but can also affect the stability of the algorithm (for better or worse).

Ê[y] instead of y as regressand

E_t[y_{t+1}] = E_t[f(ε_{t+1})] with ε_{t+1} ~ N(0, σ^2)
⟹ Gauss-Hermite quadrature can be used
(MMJ: using Ê[y_{t+1}] calculated with just one node is better than using y_{t+1})
Key thing to remember: sampling uncertainty is hard to get rid of
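A sketch of the quadrature step, under the assumptions of the earlier sketches (the parameterization Pn, the belief eta, and the evaluation point are placeholders). Gauss-Hermite nodes and weights are computed by the Golub-Welsch eigenvalue method; with J = 1 the single node is ε = 0, so z_{t+1} = z_t^ρ, which is the one-node rule of Judd, Maliar & Maliar.

    % E_t[y_{t+1}] by J-node Gauss-Hermite quadrature (weight exp(-x^2)):
    % E[f(eps)], eps ~ N(0,sigma^2) = sum_i w_i f(sqrt(2)*sigma*x_i)/sqrt(pi).
    J = 1;                                   % J = 1: one-node quadrature
    if J == 1
        x = 0; w = sqrt(pi);
    else                                     % Golub-Welsch nodes/weights
        j = 1:J-1; A = diag(sqrt(j/2),1) + diag(sqrt(j/2),-1);
        [V,D] = eig(A); x = diag(D); w = sqrt(pi)*(V(1,:)').^2;
    end
    nu = 2; beta = 0.99; alpha = 0.36; delta = 0.025; rho = 0.95; sigma = 0.01;
    eta = [-2; 0; 0];                        % placeholder belief (assumed)
    Pn  = @(e,lnk,lnz) exp(e(1) + e(2)*lnk + e(3)*lnz);
    kp  = 38; zt = 1;                        % evaluation point (k_{t+1}, z_t)
    zp  = exp(rho*log(zt) + sqrt(2)*sigma*x);                % z_{t+1} at nodes
    cp  = Pn(eta, log(kp)*ones(size(x)), log(zp)).^(-1/nu);  % c_{t+1} from belief
    Ey  = w'*(beta*cp.^(-nu).*(alpha*zp*kp^(alpha-1) + 1 - delta))/sqrt(pi)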

Ê[y] instead of y as regressand

Suppose
y_{t+1} = exp(a_0 + a_1 ln k_t + a_2 ln z_t) + u_{t+1}, u_{t+1} = prediction error
Then you cannot estimate the coefficients using LS based on
ln(y_{t+1}) = a_0 + a_1 ln k_t + a_2 ln z_t + u_{t+1}
You have to use nonlinear least squares.

Ê[y] instead of y as regressand

Suppose
Ê[y_{t+1}] = exp(a_0 + a_1 ln k_t + a_2 ln z_t) + ū_{t+1}, ū_{t+1} = numerical error
Then you can estimate the coefficients using LS based on
ln Ê[y_{t+1}] = a_0 + a_1 ln k_t + a_2 ln z_t + ū_{t+1}
A big practical advantage.

Simple ways to improve the regression

1. Hermite polynomials and scaling
2. LS via singular value decomposition
3. Principal components

Simple ways to improve the regression

The main underlying problem is that X'X is ill conditioned, which makes it difficult to calculate (X'X)^{-1}. This problem is reduced by
1. scaling, so that each variable has zero mean and unit variance
2. Hermite polynomials
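A small illustration with simulated data (our construction): the condition number of X'X for an ordinary polynomial in a raw capital-like series versus the same polynomial after scaling to zero mean and unit variance.

    % Condition of X'X: raw vs scaled regressor (4th-order polynomial).
    rng(3); k = 35 + 5*rand(1000,1);   % capital-like series (assumed range)
    Xraw = k.^(0:4);                   % 1, k, k^2, k^3, k^4
    xs   = (k - mean(k))/std(k,1);     % zero mean, unit variance
    Xsc  = xs.^(0:4);
    cond(Xraw'*Xraw)                   % astronomically large
    cond(Xsc'*Xsc)                     % orders of magnitude smaller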

Hermite polynomials; definition

P_n(x) = Σ_{j=0}^n a_j H_j(x)
where the basis functions H_j(x) satisfy
E[H_i(x) H_j(x)] = 0 for i ≠ j if x ~ N(0, 1)

Hermite polynomials; construction

H_0(x) = 1, H_1(x) = x
H_{m+1}(x) = x H_m(x) - m H_{m-1}(x) for m ≥ 1

This gives
H_0(x) = 1
H_1(x) = x
H_2(x) = x^2 - 1
H_3(x) = x^3 - 3x
H_4(x) = x^4 - 6x^2 + 3
H_5(x) = x^5 - 10x^3 + 15x
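A minimal sketch of building the Hermite basis by this recursion (our code; the plotting range [-2, 2] matches the graphs that follow).

    % Probabilists' Hermite polynomials H_0,...,H_n via the recursion.
    n = 5;
    x = linspace(-2,2,401)';
    H = zeros(numel(x), n+1);
    H(:,1) = 1;                        % H_0(x) = 1
    H(:,2) = x;                        % H_1(x) = x
    for m = 1:n-1
        H(:,m+2) = x.*H(:,m+1) - m*H(:,m);   % H_{m+1} = x*H_m - m*H_{m-1}
    end
    plot(x, H)                         % compare with x.^(0:n) for the contrast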

One tricky aspect about scaling

Suppose one of the explanatory variables is
x_t = (k_t - M_T)/S_T
with
M_T = Σ_{t=1}^T k_t / T and S_T = (Σ_{t=1}^T (k_t - M_T)^2 / T)^{1/2}

One tricky aspect about scaling

⟹ each iteration the explanatory variables change (since M_T and S_T change)
⟹ taking a weighted average of the old and new coefficients is odd
- I found that convergence properties can be quite bad
- In principle you can avoid the problem by rewriting the polynomial, but that is tedious for higher orders
- So it is better to keep M_T and S_T fixed across iterations

Two graphs say it all; regular polynomials [figure: ordinary polynomial basis functions plotted on [-2, 2]]

Two graphs say it all; Hermite polynomials [figure: Hermite basis functions plotted on [-2, 2]]

LS-singular value decomposition

β = (X'X)^{-1} X'Y = V S^{-1} U'Y
Goal: avoid calculating X'X explicitly.

SVD of the (T × n) matrix X: X = U S V'
- U: (T × n) orthogonal matrix
- S: (n × n) diagonal matrix with singular values s_1 ≥ s_2 ≥ ...
- V: (n × n) orthogonal matrix
- s_i is the square root of the i-th eigenvalue of X'X

LS-singular value decomposition

In Matlab:
    [U,S,V] = svd(X,0);
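A minimal sketch on simulated data of the least-squares step via the economy-size SVD, never forming X'X.

    % LS via SVD: beta = V * S^{-1} * U' * y.
    rng(4);
    X = randn(100,3);
    y = X*[1;2;3] + 0.1*randn(100,1);
    [U,S,V] = svd(X,0);                % economy-size SVD
    betahat = V*(S\(U'*y))             % agrees with X\y up to rounding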

Principal components

With many explanatory variables, use principal components.
SVD: X = U S V', where X is demeaned.
Principal components: Z = XV.
Properties: Z_i has mean zero and variance s_i^2.
Idea: exclude the principal components corresponding to the smallest singular values, but check by how much R^2 drops.
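A sketch of the principal-components idea on simulated, nearly collinear data (our construction): drop the components with the smallest singular values and check how much R^2 falls.

    % Principal-components regression: keep the m largest components.
    rng(5); T = 500;
    X0 = randn(T,2);
    X  = [X0, X0*[0.7; 0.7] + 1e-4*randn(T,1)];  % third column nearly collinear
    y  = X*[1;1;1] + 0.05*randn(T,1);
    Xd = X - mean(X);  yd = y - mean(y);         % demean
    [U,S,V] = svd(Xd,0);
    Z = Xd*V;                                    % principal components
    for m = 3:-1:1
        b  = Z(:,1:m)\yd;
        R2 = 1 - sum((yd - Z(:,1:m)*b).^2)/sum(yd.^2);
        fprintf('keep %d components: R^2 = %.4f\n', m, R2);
    end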

PEA and learning

Traditional algorithm:
- simulate the economy using belief η_n^i
- formulate the new belief η_n^{i+1}
- simulate the same economy using belief η_n^{i+1}

PEA and learning

Alternative algorithm to find the fixed point:
- simulate T observations using belief η_n^{T-1}
- formulate the new belief η_n^T
- generate 1 more observation
- use the T+1 observations to formulate the new belief η_n^{T+1}
- continue

Convergence properties can be problematic.

PEA and learning

A modification of the alternative algorithm is economically interesting (see the sketch after this list):
- simulate T observations using belief η_n^{T-1}
- use the last τ observations to formulate the new belief η_n^T
- generate 1 more observation
- use the last τ observations to formulate the new belief η_n^{T+1}
- continue

Beliefs are based on a limited past ⟹ time-varying beliefs.
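A stylized sketch of the rolling-window scheme (our construction; the data below are placeholders, whereas in the PEA each new observation would be generated by simulating the model one period ahead under the current belief).

    % Time-varying beliefs: at each date t, fit the belief to the last tau obs.
    rng(6); T = 1000; tau = 200;
    X = [ones(T,1) randn(T,2)];                 % placeholder regressors
    y = X*[0.5; 1; -1] + 0.1*randn(T,1);        % placeholder regressand
    eta = NaN(3, T);
    for t = tau:T
        s = t-tau+1:t;                          % the last tau observations
        eta(:,t) = X(s,:)\y(s);                 % belief held at date t
    end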

PEA and learning

Suppose the model has different regimes, e.g. a high-productivity and a low-productivity regime, and agents do not observe the regime ⟹ it makes sense to use a limited number of past observations. With the above algorithm agents gradually learn the new law of motion.

PEA and perturbation

True in many macroeconomic models:
- perturbation generates an accurate solution of the "real side" of the economy
- perturbation does not generate an accurate solution for asset prices
- the real side depends little, or not at all, on asset prices

Then solve the real economy using perturbation and asset prices using PEA: a one-step algorithm (no iteration needed).

References

- Den Haan, W. J., and A. Marcet, 1990, Solving the stochastic growth model with parameterized expectations, Journal of Business and Economic Statistics.
- Den Haan, W. J., Parameterized expectations, lecture notes.
- Heer, B., and A. Maussner, 2009, Dynamic General Equilibrium Modeling.
- Judd, K., L. Maliar, and S. Maliar, 2011, One-node quadrature beats Monte Carlo: A generalized stochastic simulation algorithm, NBER Working Paper 16708.
- Judd, K., L. Maliar, and S. Maliar, 2010, Numerically stable stochastic simulation approaches for solving dynamic economic models, NBER Working Paper 15296.