Lecture XI. Approximating the Invariant Distribution

Gianluca Violante, New York University. Quantitative Macroeconomics.

SS Equilibrium in the Aiyagari model

Fixed-point algorithm over the interest rate (or over $K$).

1. Fix an initial guess for the interest rate $r^0 \in \left(-\delta, \tfrac{1}{\beta} - 1\right)$. $r^0$ is our first candidate for the equilibrium (the superscript denotes the iteration number).

2. Given $r^0$, solve the dynamic programming problem of the agent to obtain $a' = g(a, y; r^0)$. We described several solution methods.

3. Given the decision rule for assets next period $g(a, y; r^0)$ and the Markov transition over productivity shocks $\Gamma(y', y)$, we can construct the transition function $Q(r^0)$ and, by successive iterations, obtain the fixed-point distribution $\Lambda(r^0)$, conditional on the candidate interest rate $r^0$.

4. Compute the aggregate demand of capital $K(r^0)$ from the optimal choice of the firm, which takes $r^0$ as given, i.e.
$$K(r^0) = F_k^{-1}(r^0 + \delta)$$

5. Compute the integral
$$A(r^0) = \int_{A \times Y} g(a, y; r^0)\, d\Lambda(a, y; r^0)$$
which gives the aggregate supply of assets. This can easily be done using the invariant distribution obtained in step 3.

6. Compare $K(r^0)$ with $A(r^0)$ to verify whether the asset-market clearing condition holds.

7. Use a nonlinear equation solver in your outer loop to update from $r^0$ to $r^1$ and go back to step 2.

8. Keep iterating until you reach convergence of the interest rate, i.e., until, at iteration $n$,
$$\left| r^{n+1} - r^n \right| < \varepsilon$$
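A minimal sketch of the outer loop (steps 1-8), under stated assumptions: the helper functions solve_household, invariant_dist, and aggregate_assets are hypothetical stand-ins for steps 2, 3, and 5, and a Cobb-Douglas technology is assumed purely to invert the firm's first-order condition. The root finder brentq is a bracketing (derivative-free) solver.

```python
# Sketch of the outer fixed-point loop over r. solve_household, invariant_dist
# and aggregate_assets are hypothetical helpers; Cobb-Douglas F(K,1) = K**alpha
# is an illustrative assumption.
import numpy as np
from scipy.optimize import brentq

alpha, delta, beta = 0.36, 0.08, 0.96

def capital_demand(r):
    # Invert F_K(K, 1) = r + delta under the assumed Cobb-Douglas technology
    return (alpha / (r + delta)) ** (1.0 / (1.0 - alpha))

def excess_supply(r):
    g = solve_household(r)            # step 2: decision rule a' = g(a, y; r)
    Lam = invariant_dist(g)           # step 3: fixed-point distribution
    A = aggregate_assets(g, Lam)      # step 5: aggregate asset supply
    return A - capital_demand(r)      # step 6: market-clearing residual

# Steps 1, 7, 8: bracket the equilibrium rate inside (-delta, 1/beta - 1) and
# let a derivative-free root finder drive the residual to zero.
r_star = brentq(excess_supply, -delta + 1e-4, 1.0 / beta - 1.0 - 1e-4, xtol=1e-6)
```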

Calculating the invariant distribution

With a continuum of households, the wealth distribution is a continuous function and therefore an infinite-dimensional object in the state space. We need an approximation.

Small errors in computing the invariant distribution, particularly in models with aggregate shocks, can accumulate and lead to significant errors in the equilibrium values of aggregate variables.

Five approaches:

1. piecewise linear approximation of the invariant distribution function
2. discretization of the invariant density function
3. eigenvector method
4. simulation of an artificial panel
5. approximating the cdf by an exponential function

Preliminaries

Let $Y$ be the grid of $y$ and $\Gamma$ its $J$-state Markov chain with
$$\Gamma_{ij} = \Pr\{y' = y_j \mid y = y_i\}$$

We think of separate asset distributions conditional on each value of $y' \in Y$, i.e.
$$\Lambda(a, y') = \Pr\{\tilde{a} \le a,\ \tilde{y} = y'\}$$

The invariant distribution function satisfies the law of motion
$$\Lambda(a', y') = \sum_{y \in Y} \Gamma(y', y)\, \Lambda\!\left(g^{-1}(a', y),\, y\right) \quad \text{for all } (a', y')$$
where $g^{-1}(a', y)$ is the inverse of the saving decision rule with respect to its first argument $a$. Clearly, we are using the monotonicity of $g$.

Piecewise linear approximation of the invariant distribution

This method approximates $\Lambda$ by a weighted sum of piecewise-linear basis functions and iterates on the law of motion above.

The approximation of $\Lambda$ must be shape-preserving, since it would not make sense for $\Lambda$ to decrease over some range of $a$. This makes piecewise-linear basis functions a better choice than, say, Chebyshev polynomials or cubic splines. Other approximations that preserve shape, such as shape-preserving splines, could also be appropriate.

First we need a grid $A$ of interpolation nodes in the interval $[\underline{a}, \bar{a}]$. This grid needs to be much finer than the grid used for computing the decision rule. It does not have to be evenly spaced. Let $N$ be the size of this grid.

Choose an initial distribution $\Lambda^0$ over the grid $A \times Y$. One choice is, for example, for every pair $(a_k, y_j) \in A \times Y$,
$$\Lambda^0(a_k, y_j) = \frac{a_k - \underline{a}}{\bar{a} - \underline{a}}\, \Gamma^*_j$$
as if the two variables were independent and the distribution over assets $a$ were uniform ($\Gamma^*_j$ denotes the stationary probability of $y_j$).

Update the distribution on the grid points as follows. For every pair $(a_k, y_j) \in A \times Y$,
$$\Lambda^1(a_k, y_j) = \sum_{y_i \in Y} \Gamma_{ij}\, \Lambda^0\!\left(g^{-1}(a_k, y_i),\, y_i\right)$$

Note: (i) we need to compute $g^{-1}$, and (ii) $g^{-1}(a_k, y_i)$ is generally not on the grid $A$.

How do we solve for $a = g^{-1}(a_k, y_i)$? Recall that you also have the consumption decision rule on the original grid used for the policy functions (obtained residually from the budget constraint). Then, since $a_k = g(a, y_i)$,
$$c(a, y_i) + a_k = Ra + y_i$$
and we can solve for $a$ with a nonlinear solver, given a piecewise-linear representation of $c(\cdot, y_i)$.

Given $a = g^{-1}(a_k, y_i)$, we need a linear interpolation of $\Lambda^0$, since $\Lambda^0$ is only defined on the grid.
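A minimal sketch of this inversion, assuming a piecewise-linear consumption rule stored as values c_vals on the policy-function grid a_grid for a fixed income state $y_i$; the names a_grid, c_vals, and R are illustrative, not part of the lecture's notation.

```python
# Sketch of inverting the savings rule, a = g^{-1}(a_k, y_i), through the budget
# constraint c(a, y_i) + a_k = R*a + y_i. a_grid and c_vals are assumed inputs.
import numpy as np
from scipy.optimize import brentq

def invert_policy(a_k, y_i, R, a_grid, c_vals):
    c = lambda a: np.interp(a, a_grid, c_vals)   # piecewise-linear c(a, y_i)
    f = lambda a: R * a + y_i - c(a) - a_k       # f(a) = g(a, y_i) - a_k, increasing in a
    if f(a_grid[0]) >= 0:
        return a_grid[0]      # g already exceeds a_k at the bottom: inverse at/below grid
    if f(a_grid[-1]) <= 0:
        return a_grid[-1]     # no point on the grid saves as much as a_k: inverse above grid
    return brentq(f, a_grid[0], a_grid[-1])
```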

For an $a \in [a_n, a_{n+1}]$, we define
$$\Lambda^0(a, y_i) = \Lambda^0(a_n, y_i) + \frac{\Lambda^0(a_{n+1}, y_i) - \Lambda^0(a_n, y_i)}{a_{n+1} - a_n}\,(a - a_n)$$

Compare $\Lambda^1(a_k, y_j)$ to $\Lambda^0(a_k, y_j)$, for example with the sup norm
$$\max_{j,k}\ \left| \Lambda^1(a_k, y_j) - \Lambda^0(a_k, y_j) \right| < \varepsilon$$
and stop when your tolerance level $\varepsilon$ is reached.

How do we compute the aggregate supply of capital
$$A = \sum_{y_i \in Y} \int_A a\, d\Lambda(a, y_i)\,?$$

We assume the distribution of assets is uniform within the element $[a_n, a_{n+1}]$, and therefore
$$\int_{a_n}^{a_{n+1}} a\, d\Lambda(a, y_i) = \int_{a_n}^{a_{n+1}} a\, \frac{\Lambda(a_{n+1}, y_i) - \Lambda(a_n, y_i)}{a_{n+1} - a_n}\, da = \frac{\Lambda(a_{n+1}, y_i) - \Lambda(a_n, y_i)}{a_{n+1} - a_n}\left[\frac{a^2}{2}\right]_{a_n}^{a_{n+1}} = \frac{\Lambda(a_{n+1}, y_i) - \Lambda(a_n, y_i)}{2}\,(a_{n+1} + a_n)$$

Then:
$$A = \sum_{y_i \in Y} \left\{ \sum_{n=1}^{N-1} \frac{\Lambda(a_{n+1}, y_i) - \Lambda(a_n, y_i)}{2}\,(a_{n+1} + a_n) + \Lambda(\underline{a}, y_i)\,\underline{a} \right\}$$
where we account for the fact that there may be a mass point at the borrowing constraint.
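A minimal sketch of this aggregation step, assuming the CDF is stored as an $N \times J$ array Lambda of values $\Lambda(a_n, y_j)$ on the fine grid a_grid, with a_grid[0] the borrowing constraint; the array layout is an assumption for illustration.

```python
# Sketch of the aggregation formula above, including the possible mass point
# at the borrowing constraint. Lambda is an (N, J) array of CDF values.
import numpy as np

def aggregate_assets_from_cdf(Lambda, a_grid):
    dLam = Lambda[1:, :] - Lambda[:-1, :]            # Lambda(a_{n+1}, y) - Lambda(a_n, y)
    midpoints = 0.5 * (a_grid[1:] + a_grid[:-1])     # (a_{n+1} + a_n) / 2
    interior = (dLam * midpoints[:, None]).sum()     # sum over n and y_i
    mass_point = (Lambda[0, :] * a_grid[0]).sum()    # Lambda(a_lower, y_i) * a_lower
    return interior + mass_point
```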

Discretization of the invariant density function

A simpler approach is to find an approximation to the invariant density function $\lambda(a, y)$.

We approximate the density by a probability distribution function defined over a discretized version of the state space. Once again, the grid should be finer than the one used to compute the optimal savings rule.

Choose an initial density function $\lambda^0(a_k, y_i)$, for example
$$\lambda^0(a_k, y_i) = \frac{1}{JN}$$
i.e., mass uniformly distributed.

For every $(a_k, y_j)$ on the grid:
$$\lambda^1(a_k, y_j) = \sum_{y_i \in Y} \sum_{m \in M_i} \Gamma_{ij}\, \lambda^0(a_m, y_i)\, \frac{a_{k+1} - g(a_m, y_i)}{a_{k+1} - a_k}$$
$$\lambda^1(a_{k+1}, y_j) = \sum_{y_i \in Y} \sum_{m \in M_i} \Gamma_{ij}\, \lambda^0(a_m, y_i)\, \frac{g(a_m, y_i) - a_k}{a_{k+1} - a_k}$$
where
$$M_i = \{m = 1, \dots, N : a_k \le g(a_m, y_i) \le a_{k+1}\}$$

We can think of this way of handling the discrete approximation to the density function as forcing the agents in the economy to play a lottery.

Lottery: given that your optimal policy is to save $a' = g(a_m, y_i) \in [a_k, a_{k+1}]$, you go to $a_k$ with probability
$$\frac{a_{k+1} - g(a_m, y_i)}{a_{k+1} - a_k}$$
and to $a_{k+1}$ with probability
$$1 - \frac{a_{k+1} - g(a_m, y_i)}{a_{k+1} - a_k}$$

Then, once you have found $\lambda$, the aggregate supply of capital is computed as
$$A = \sum_{y_i \in Y} \sum_{a_k \in A} a_k\, \lambda(a_k, y_i)$$
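A minimal sketch of one update of the discretized density (the lottery step), under stated assumptions: g_pol is an $N \times J$ array with the savings rule $g(a_m, y_i)$ evaluated on the fine grid a_grid, Gamma is the $J \times J$ Markov matrix with Gamma[i, j] $= \Pr\{y' = y_j \mid y = y_i\}$, and lam0 is the current $N \times J$ density summing to one.

```python
# Sketch of one iteration of the density update with the lottery weights above.
import numpy as np

def update_density(lam0, g_pol, a_grid, Gamma):
    N, J = lam0.shape
    lam1 = np.zeros_like(lam0)
    for i in range(J):
        for m in range(N):
            a_next = g_pol[m, i]
            # locate the bracket [a_k, a_{k+1}] containing the savings choice
            k = np.searchsorted(a_grid, a_next, side="right") - 1
            k = min(max(k, 0), N - 2)
            w = (a_grid[k + 1] - a_next) / (a_grid[k + 1] - a_grid[k])
            w = min(max(w, 0.0), 1.0)         # clip choices that fall off the grid
            for j in range(J):
                mass = Gamma[i, j] * lam0[m, i]
                lam1[k, j] += mass * w        # lottery weight on a_k
                lam1[k + 1, j] += mass * (1.0 - w)  # lottery weight on a_{k+1}
    return lam1
```

Iterating this map until the density stops changing (e.g., in the sup norm) delivers the approximate invariant density.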

Eigenvector method

Let $A$ be a square matrix. If there is a vector $v \neq 0$ in $\mathbb{R}^n$ such that
$$vA = ev$$
for some scalar $e$, then $e$ is called an eigenvalue of $A$ with corresponding (left) eigenvector $v$.

Recall that the invariant pdf of a Markov transition matrix $Q$ is the $\lambda$ that satisfies
$$\lambda Q = \lambda$$
Thus an invariant distribution is a (left) eigenvector of the matrix $Q$ associated with the eigenvalue 1.

How do we guarantee that this eigenvector is unique, and how do we find it in that case?

Perron-Frobenius Theorem: Let $Q$ be the transition matrix of a homogeneous ergodic Markov chain. Then $Q$ has a unique dominant eigenvalue $e = 1$ such that:

- its associated eigenvector has all positive entries
- all other eigenvalues are smaller than $e$ in absolute value
- $Q$ has no other eigenvector with all non-negative entries

This eigenvector (renormalized so that it sums to one) is the unique invariant distribution.

How do we construct $Q$? Let $Q(a', y'; a, y)$ be the $JN \times JN$ transition matrix from state $(a, y)$ into $(a', y')$. Since the evolution of $y$ is independent of $a$:
$$Q(a', y'; a, y) = Q_a(a'; a, y)\, \Gamma(y', y)$$
where, if $a' = g(a, y)$ is such that $a_k \le a' < a_{k+1}$, we define
$$Q_a(a_k; a, y) = \frac{a_{k+1} - g(a, y)}{a_{k+1} - a_k}, \qquad Q_a(a_{k+1}; a, y) = \frac{g(a, y) - a_k}{a_{k+1} - a_k}$$
... the same lottery we used before for the density.

In practice, the matrix $Q$ is very large and very sparse, and there are many eigenvalues very close to one. The corresponding eigenvectors generally have negative components, which are inappropriate for a density function.

Idea: perturb the zero entries of $Q$ by adding a small perturbation constant $\eta$ and renormalizing the rows of $Q$. Then find the unique non-negative eigenvector associated with the unit root of the perturbed matrix. The stationary density can be obtained by normalizing this eigenvector to sum to one.

How do we select $\eta$? Authors suggest $\eta \approx \frac{\min Q}{2N}$, or even smaller.
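A minimal sketch of the eigenvector method, assuming a moderate-sized dense $JN \times JN$ transition matrix Q with rows summing to one (for large problems a sparse eigensolver such as scipy.sparse.linalg.eigs applied to the transpose would be used instead); eta is illustrative.

```python
# Sketch of the eigenvector method with the eta-perturbation described above.
import numpy as np

def invariant_from_eigenvector(Q, eta=1e-8):
    # Perturb zero entries and renormalize the rows
    Qp = np.where(Q > 0, Q, eta)
    Qp = Qp / Qp.sum(axis=1, keepdims=True)
    # Left eigenvectors of Qp are right eigenvectors of Qp.T
    eigvals, eigvecs = np.linalg.eig(Qp.T)
    idx = np.argmin(np.abs(eigvals - 1.0))   # eigenvalue closest to one
    lam = np.real(eigvecs[:, idx])
    lam = np.abs(lam)                        # dominant eigenvector has constant sign
    return lam / lam.sum()                   # normalize to sum to one
```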

Monte-Carlo simulation

Generate a large sample of households and track them over time.

Monte Carlo simulation is memory and time consuming: it is not recommended for low-dimensional problems. It is a valuable method, however, when the dimension of the problem is large, since it is not subject to the curse of dimensionality that plagues all the other methods.

Choose a sample size $I$. Typically, $I \ge 10{,}000$.

At $t = 0$, initialize the states by (i) drawing incomes from $\Gamma^*$ and (ii) assigning some common asset value $a^0_i$ to all agents, e.g., the steady-state capital of the representative agent (divided by $I$).

Update the asset holdings of each individual $i$ in the sample by using the decision rule
$$a^1_i = g\!\left(a^0_i, y^1_i\right)$$
where $y^1_i$ is drawn from $\Gamma\!\left(y^1_i, y^0_i\right)$.

Use a random number generator for the uniform distribution on $[0,1]$. Suppose the draw from the uniform is $u$. Then $y^1_i = y_j$, where $j$ is the smallest index of the gridpoints on $Y$ such that
$$u \le \sum_{l=1}^{j} \Gamma\!\left(y_l, y^0_i\right)$$

Because of randomness, the fraction of households with income value $y$ will never be exactly equal to $\Gamma^*(y)$.

Correction: you can make an adjustment where you check for which values of $y$ you have an excess relative to the stationary share $\Gamma^*(y)$, and reassign the status of individuals with other realizations of $y$ (for which you have less than the stationary share) appropriately.

At every $t$, compute a vector $M_t$ of statistics of the wealth distribution (e.g., mean, variance, IQR, etc.). Stop if $M_t$ and $M_{t-1}$ are close enough.

Then $A$ is just the mean of the wealth holdings in the final sample.
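A minimal sketch of the simulation step, under stated assumptions: g_pol is the $N \times J$ savings rule on a_grid as before, Gamma the $J \times J$ transition matrix, y_grid the income grid; income states are drawn by the inverse-CDF rule above, and the loop stops when the simulated mean and variance of wealth settle. The sample size, horizon, and tolerance are illustrative, and the simple correction described above is not implemented here.

```python
# Sketch of Monte-Carlo simulation of the invariant wealth distribution.
import numpy as np

def simulate_panel(g_pol, a_grid, y_grid, Gamma, I=10_000, T_max=2_000, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    J = len(y_grid)
    cum_Gamma = np.cumsum(Gamma, axis=1)
    y_idx = rng.integers(0, J, size=I)        # crude initialization of income states
    assets = np.full(I, a_grid.mean())        # common initial asset level (illustrative)
    stats_old = np.array([np.inf, np.inf])
    for t in range(T_max):
        # inverse-CDF draw of next period's income state, row given by current state
        u = rng.random(I)
        y_idx = (u[:, None] <= cum_Gamma[y_idx]).argmax(axis=1)
        # apply the (interpolated) savings rule state by state
        for j in range(J):
            sel = y_idx == j
            assets[sel] = np.interp(assets[sel], a_grid, g_pol[:, j])
        stats_new = np.array([assets.mean(), assets.var()])
        if np.max(np.abs(stats_new - stats_old)) < tol:
            break
        stats_old = stats_new
    return assets, y_idx
```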

Speed vs. stability of the algorithm

Two tricks make the algorithm a lot more stable, albeit slower.

1. Don't use gradient methods to solve for $r$ in the outer loop; use bisection. Given $r^0$, to obtain the new candidate $r^1$, bisect between $r^0$ and $F_K(A(r^0)) - \delta$:
$$r^1 = \frac{1}{2}\left\{ r^0 + \left[ F_K\!\left(A(r^0)\right) - \delta \right] \right\}$$
which are, by construction, on opposite sides of the steady-state interest rate $r^*$. Even better, use dampening in the update:
$$r^1 = \omega r^0 + (1 - \omega)\left[ F_K\!\left(A(r^0)\right) - \delta \right]$$
with weight $\omega = 0.8$ to $0.9$, for example (see the sketch at the end of the lecture).

2. When you re-solve the household problem for a new value of the interest rate $r^{n+1}$, do not initialize the decision rule from the one corresponding to $r^n$ obtained in the previous loop. Always start from the same initial guess, to avoid propagating errors into the wealth distribution. Slower, but safer.
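A minimal sketch of the dampened update from trick 1, reusing the hypothetical helpers solve_household, invariant_dist, and aggregate_assets from the earlier outer-loop sketch and the illustrative Cobb-Douglas marginal product; the weight, tolerance, and starting value are assumptions.

```python
# Sketch of the dampened outer-loop update r^{n+1} = w*r^n + (1-w)*[F_K(A(r^n)) - delta].
alpha, delta, omega, tol = 0.36, 0.08, 0.9, 1e-6

r_old = 0.03                                      # initial candidate r^0 (illustrative)
for it in range(500):
    g = solve_household(r_old)                    # hypothetical helper, step 2
    Lam = invariant_dist(g)                       # hypothetical helper, step 3
    A = aggregate_assets(g, Lam)                  # hypothetical helper, step 5
    r_implied = alpha * A ** (alpha - 1.0) - delta   # F_K(A(r^n), 1) - delta, Cobb-Douglas
    r_new = omega * r_old + (1.0 - omega) * r_implied
    if abs(r_new - r_old) < tol:                  # convergence of the interest rate
        break
    r_old = r_new
```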