Parameterized expectations algorithm


Lecture Notes 8

Parameterized expectations algorithm

The Parameterized Expectations Algorithm (PEA hereafter) was introduced by Marcet [1988]. As will become clear in a moment, it may be viewed as a generalized method of undetermined coefficients, in which economic agents learn the decision rule at each step of the algorithm. It therefore has a natural interpretation in terms of learning behavior. The basic idea of this method is to approximate the expectation function of the individuals, rather than attempting to recover the decision rules directly, by a smooth function, in general a polynomial function. Implicit in this approach is the fact that the space spanned by polynomials is dense in the space spanned by all functions, in the sense that

lim_{k→∞} inf_{θ ∈ R^k} sup_{x ∈ X} |F_θ(x) − F(x)| = 0

where F is the function to be approximated and F_θ is a k-th order interpolating function parameterized by θ.

8.1 Basics

The basic idea that underlies this approach is to replace expectations by an a priori given function of the state variables of the problem at hand, and then

reveal the set of parameters that insures that the residuals from the Euler equations form a martingale difference sequence (E_t ε_{t+1} = 0). Note that the main difficulty when solving the model is to deal with the integral involved by the expectation. The approach of the basic PEA algorithm is to approximate it by Monte Carlo simulations.

The PEA algorithm may be implemented to solve a large set of models that admit the following general representation

F(E_t[E(y_{t+1}, x_{t+1}, y_t, x_t)], y_t, x_t, ε_t) = 0    (8.1)

where F : R^m × R^{n_y} × R^{n_x} × R^{n_e} → R^{n_x+n_y} describes the model and E : R^{n_y} × R^{n_x} × R^{n_y} × R^{n_x} → R^m defines the transformed variables on which expectations are taken. E_t is the standard conditional expectations operator, and ε_t is the set of innovations of the structural shocks that affect the economy.

In order to fix notation, let us take the optimal growth model as an example

λ_t − β E_t[λ_{t+1}(α z_{t+1} k_{t+1}^{α−1} + 1 − δ)] = 0
c_t^{−σ} − λ_t = 0
k_{t+1} − z_t k_t^α + c_t − (1 − δ) k_t = 0
z_{t+1} − ρ z_t − ε_{t+1} = 0

In this example we have y = {c, λ}, x = {k, z} and ε = ε; the function E takes the form

E({c, λ}_{t+1}, {k, z}_{t+1}, {c, λ}_t, {k, z}_t) = λ_{t+1}(α z_{t+1} k_{t+1}^{α−1} + 1 − δ)

while F(·) is given by

         ⎡ λ_t − β E_t[E({c, λ}_{t+1}, {k, z}_{t+1}, {c, λ}_t, {k, z}_t)] ⎤
F(·) =   ⎢ c_t^{−σ} − λ_t                                                 ⎥
         ⎢ k_{t+1} − z_t k_t^α + c_t − (1 − δ) k_t                        ⎥
         ⎣ z_{t+1} − ρ z_t − ε_{t+1}                                      ⎦

The idea of the PEA algorithm is then to replace the expectation function E_t[E(y_{t+1}, x_{t+1}, y_t, x_t)] by a parametric approximation function, Φ(x_t; θ), of

the current state variables x_t and a vector of parameters θ, such that the approximated model may be restated as

F(Φ(x_t; θ), y_t, x_t, ε_t) = 0    (8.2)

The problem of the PEA algorithm is then to find a vector θ* such that

θ* ∈ Argmin_{θ ∈ Θ} ||Φ(x_t; θ) − E_t[E(y_{t+1}, x_{t+1}, y_t, x_t)]||²

that is, the solution satisfies the rational expectations hypothesis. At this point, note that we selected a quadratic norm, but one may also consider other metrics of the form

θ* ∈ Argmin_{θ ∈ Θ} R(x_t, θ)' Ω R(x_t, θ)

with R(x_t, θ) ≡ Φ(x_t; θ) − E_t[E(y_{t+1}, x_{t+1}, y_t, x_t)] and Ω a weighting matrix. This would then correspond to a GMM type of estimation. One may also consider

θ* ∈ Argmin_{θ ∈ Θ} max{|Φ(x_t; θ) − E_t[E(y_{t+1}, x_{t+1}, y_t, x_t)]|}

which would call for LAD estimation methods. However, the usual practice is to use the standard quadratic norm.

Once θ*, and therefore the approximation function Φ(x_t; θ*), has been found, equation (8.2) may be used to generate time series for the variables of the model. The algorithm may then be described as follows.

Step 1. Specify a guess for the function Φ(x_t; θ) and an initial θ_1. Choose a stopping criterion η > 0 and a sample size T that should be large enough, and draw a sequence {ε_t}_{t=0}^T that will be used during the whole algorithm.

Step 2. At iteration i, and for the given θ_i, simulate, recursively, sequences {y_t(θ_i)}_{t=0}^T and {x_t(θ_i)}_{t=0}^T.

Step 3. Find G(θ_i) that satisfies

G(θ_i) ∈ Argmin_{θ ∈ Θ} (1/T) Σ_{t=0}^T ||E(y_{t+1}(θ_i), x_{t+1}(θ_i), y_t(θ_i), x_t(θ_i)) − Φ(x_t(θ_i); θ)||²

which just amounts to performing a non-linear least squares regression taking E(y_{t+1}(θ_i), x_{t+1}(θ_i), y_t(θ_i), x_t(θ_i)) as the dependent variable, Φ(·) as the explanatory function and θ as the parameter to be estimated.

Step 4. Set θ_{i+1} to

θ_{i+1} = γ G(θ_i) + (1 − γ) θ_i    (8.3)

where γ ∈ (0, 1) is a smoothing parameter. Setting a low γ helps convergence, but at the cost of increasing the computational time. As long as good initial conditions can be found and the model is not too non-linear, setting γ close to 1 is sufficient; however, when dealing with strongly non-linear models (with binding constraints, for example), decreasing γ will generally help a lot.

Step 5. If ||θ_{i+1} − θ_i|| < η then stop, otherwise go back to step 2.

Reading this algorithm, it appears that it may easily be given a learning interpretation. Indeed, each iteration may be interpreted as a learning step, in which the individual uses a rule of thumb as a decision rule and reveals information on the kind of errors he/she makes using this rule of thumb. He/she then corrects the rule, that is, finds another θ, to be used during the next step. It should be noted, however, that nothing guarantees that the algorithm always converges, nor that, if it does, it delivers a decision rule compatible with the rational expectations hypothesis.[1]

At this point, several comments stemming from the implementation of the method are in order. First of all, we need to come up with an interpolating

[1] For a convergence proof in the case of the optimal growth model, see Marcet and Marshall [1994].

function, Φ(·). How should it be specified? In fact, we are free to choose any functional form we may think of; nevertheless, economic theory may guide us, as may some constraints imposed by the method, more particularly in step 3. A widely used interpolating function combines the non-linear aspects of the exponential function with polynomials, such that Φ_j(x, θ) may take the form (where j ∈ {1, ..., m} refers to a particular expectation)

Φ_j(x, θ) = exp(θ' P(x))

where P(x) is a multivariate polynomial.[2] One advantage of this interpolating function is obviously that it guarantees positive values for the expectations, which is mostly what is needed in economics. One potential problem with such a functional form is precisely related to the fact that it uses simple polynomials, which may then generate multicollinearity problems during step 3. As an example, let us take the simple case in which the state variable is totally exogenous and is an AR(1) process with log-normal innovations:

log(a_t) = ρ log(a_{t−1}) + ε_t

with |ρ| < 1 and ε ~ N(0, σ²). The state variable is then a_t. If we simulate the sequence {a_t}_{t=0}^T with T = 10000 and compute the correlation matrix of {a_t, a_t², a_t³, a_t⁴}, we get, for ρ = 0.95 and σ = 0.01,

1.0000  0.9998  0.9991  0.9980
0.9998  1.0000  0.9998  0.9991
0.9991  0.9998  1.0000  0.9998
0.9980  0.9991  0.9998  1.0000

revealing that multicollinearity problems are likely to occur. As an illustrative example, assume that we want to approximate the expectation function in this model. It will be a function of the capital stock, which is a particularly smooth sequence; therefore, even if there are significant differences between the sequence itself and the sequence raised to the power 2, the difference may then be small

[2] For instance, in the case n_x = 2, P(x_t) may consist of a constant term, x_{1t}, x_{2t}, x_{1t}², x_{2t}², x_{1t}x_{2t}.

between the sequence raised to the power 2 and the sequence raised to the power 4. Hence multicollinearity may occur. One way to circumvent this problem is to rely on orthogonal polynomials, rather than standard polynomials, in the interpolating function.

A second problem that arises in this approach is the selection of initial conditions for θ. Indeed, this step is crucial for at least 3 reasons: (i) the problem is fundamentally non-linear, (ii) convergence is not always guaranteed, (iii) economic theory imposes a set of restrictions, for example to insure the positivity of some variables. Therefore, much attention should be paid when imposing an initial value for θ.

A third important problem is related to the choice of γ, the smoothing parameter. A too-large value may put too much weight on new values for θ and therefore reinforce the potential forces that lead to divergence of the algorithm. On the contrary, setting γ too close to 0 may be costly in terms of computational CPU time. It must however be noted that no general rule can be given for these implementation issues, and in most cases one has to guess and try. I shall therefore now report 3 examples of implementation. The first one is the standard optimal growth model, the second one corresponds to the optimal growth model with investment irreversibility, and the last one is the problem of a household facing borrowing constraints. But before going to the examples, we shall consider a linear example that will highlight the similarity between this approach and the undetermined coefficients approach.

8.2 A linear example

Let us consider the simple model

y_t = a E_t y_{t+1} + b x_t
x_{t+1} = (1 − ρ) x̄ + ρ x_t + ε_{t+1}

where ε ~ N(0, σ²). Finding an expectation function in this model amounts to finding a function Φ(x_t, θ) for E_t(a y_{t+1} + b x_t). Let us make the following guess for the solution:

Φ(x_t, θ) = θ_0 + θ_1 x_t

In this case, solving the PEA problem amounts to solving

min_{θ_0, θ_1} (1/N) Σ_t (Φ(x_t, θ) − a y_{t+1} − b x_t)²

The first order conditions for this problem are

(1/N) Σ_t (θ_0 + θ_1 x_t − a y_{t+1} − b x_t) = 0    (8.4)
(1/N) Σ_t x_t (θ_0 + θ_1 x_t − a y_{t+1} − b x_t) = 0    (8.5)

Equation (8.4) can be rewritten as

θ_0 + θ_1 (1/N) Σ_t x_t = a (1/N) Σ_t y_{t+1} + b (1/N) Σ_t x_t

But, since Φ(x_t, θ) is an approximate solution for the expectation function, the model implies that

y_t = E_t(a y_{t+1} + b x_t) = Φ(x_t, θ)

such that the former equation rewrites

θ_0 + θ_1 (1/N) Σ_t x_t = a (1/N) Σ_t (θ_0 + θ_1 x_{t+1}) + b (1/N) Σ_t x_t

Asymptotically, we have

lim_{N→∞} (1/N) Σ_t x_t = lim_{N→∞} (1/N) Σ_t x_{t+1} = x̄

such that this first order condition converges to

θ_0 + θ_1 x̄ = a θ_0 + a θ_1 x̄ + b x̄

Therefore, rearranging terms, we have

θ_0 (1 − a) + θ_1 (1 − a) x̄ = b x̄    (8.6)

Now, let us consider equation (8.5), which can be rewritten as

θ_0 (1/N) Σ_t x_t + θ_1 (1/N) Σ_t x_t² = a (1/N) Σ_t y_{t+1} x_t + b (1/N) Σ_t x_t²

As for the first condition, we acknowledge that

y_t = E_t(a y_{t+1} + b x_t) = Φ(x_t, θ)

such that the condition rewrites

θ_0 (1/N) Σ_t x_t + θ_1 (1/N) Σ_t x_t² = a (1/N) Σ_t (θ_0 + θ_1 x_{t+1}) x_t + b (1/N) Σ_t x_t²    (8.7)

Asymptotically, we have

lim_{N→∞} (1/N) Σ_t x_t = x̄  and  lim_{N→∞} (1/N) Σ_t x_t² = E(x²) = σ_x² + x̄²

Finally, we have

lim_{N→∞} (1/N) Σ_t x_t x_{t+1} = lim_{N→∞} (1/N) Σ_t x_t ((1 − ρ) x̄ + ρ x_t + ε_{t+1})

Since ε is the innovation of the process, we have lim_{N→∞} (1/N) Σ_t x_t ε_{t+1} = 0, such that

lim_{N→∞} (1/N) Σ_t x_t x_{t+1} = (1 − ρ) x̄² + ρ E(x²) = x̄² + ρ σ_x²

Hence, (8.7) asymptotically rewrites as

x̄ (1 − a) θ_0 + [(1 − aρ) σ_x² + (1 − a) x̄²] θ_1 = b (σ_x² + x̄²)

We therefore have to solve the system

θ_0 (1 − a) + θ_1 (1 − a) x̄ = b x̄
x̄ (1 − a) θ_0 + [(1 − aρ) σ_x² + (1 − a) x̄²] θ_1 = b (σ_x² + x̄²)

Premultiplying the first equation by x̄ and plugging the result into the second equation leads to

(1 − aρ) θ_1 σ_x² = b σ_x²

such that

θ_1 = b / (1 − aρ)

Plugging this result into the first equation, we get

θ_0 = a b (1 − ρ) x̄ / ((1 − a)(1 − aρ))

Asymptotically, the solution is therefore given by

y_t = a b (1 − ρ) x̄ / ((1 − a)(1 − aρ)) + b x_t / (1 − aρ)

which corresponds exactly to the solution of the model (see Lecture notes #1). Therefore, asymptotically, the PEA algorithm is nothing else but an undetermined coefficients method.

8.3 Standard PEA solution: the Optimal Growth Model

Let us first recall the type of problem we have in hand. We are about to solve the set of equations

λ_t − β E_t[λ_{t+1}(α z_{t+1} k_{t+1}^{α−1} + 1 − δ)] = 0
c_t^{−σ} − λ_t = 0
k_{t+1} − z_t k_t^α + c_t − (1 − δ) k_t = 0
log(z_{t+1}) − ρ log(z_t) − ε_{t+1} = 0
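As a reminder (the derivation is standard and not spelled out in these notes), this system collects the first order conditions of the planner's problem

```latex
\max_{\{c_t,k_{t+1}\}_{t=0}^{\infty}} \;
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \,\frac{c_t^{1-\sigma}}{1-\sigma}
\quad \text{s.t.} \quad
k_{t+1} = z_t k_t^{\alpha} - c_t + (1-\delta) k_t,
\qquad
\log(z_{t+1}) = \rho \log(z_t) + \varepsilon_{t+1}
```

Attaching a multiplier λ_t to the resource constraint, the condition on c_t gives c_t^{−σ} = λ_t, and the condition on k_{t+1} gives the Euler equation in the first line; the last two equations simply restate the resource constraint and the shock process.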

Our problem will therefore be to get an approximation of the expectation function

β E_t[λ_{t+1}(α z_{t+1} k_{t+1}^{α−1} + 1 − δ)]

In this problem, we have 2 state variables, k_t and z_t, such that Φ(·) should be a function of both k_t and z_t. We will make the guess

Φ(k_t, z_t; θ) = exp(θ_0 + θ_1 log(k_t) + θ_2 log(z_t) + θ_3 log(k_t)² + θ_4 log(z_t)² + θ_5 log(k_t) log(z_t))

From the first equation of the above system, we have that for a given vector θ = {θ_0, θ_1, θ_2, θ_3, θ_4, θ_5}

λ_t(θ) = Φ(k_t(θ), z_t(θ); θ)

which enables us to recover

c_t(θ) = λ_t(θ)^{−1/σ}

and therefore get

k_{t+1}(θ) = z_t k_t(θ)^α − c_t(θ) + (1 − δ) k_t(θ)

We then recover whole sequences {k_t(θ)}_{t=0}^T, {z_t}_{t=0}^T, {λ_t(θ)}_{t=0}^T and {c_t(θ)}_{t=0}^T, which makes it simple to compute a sequence for

φ_{t+1}(θ) ≡ β λ_{t+1}(θ) (α z_{t+1} k_{t+1}(θ)^{α−1} + 1 − δ)

Since Φ(k_t, z_t; θ) is an exponential function of a polynomial, we may run the regression

log(φ_{t+1}(θ)) = θ_0 + θ_1 log(k_t(θ)) + θ_2 log(z_t) + θ_3 log(k_t(θ))² + θ_4 log(z_t)² + θ_5 log(k_t(θ)) log(z_t)    (8.8)

to get a new estimate of θ. We then set a new value for θ according to the updating scheme (8.3) and restart the process until convergence.

The parameterization used in the matlab code is given in table 8.1 and is totally standard. γ, the smoothing parameter, was set to 1, implying that in each iteration the new θ vector is totally passed as a new guess in the

Table 8.1: Optimal growth: Parameterization

  β      σ     α     δ     ρ     σ_e
 0.95    1    0.3   0.1   0.9   0.01

progression of the algorithm. The stopping criterion was set at η = 1e-6 and T = 20000 data points were used to compute the OLS regression.

Initial conditions were set as follows. We first solve the model relying on a log-linear approximation. We then generate a random draw of size T for ε and generate series using the log-linear approximate solution. We then build the needed series to recover a draw for {φ_{t+1}(θ)}, {k_t(θ)} and {z_t(θ)}, and run the regression (8.8) to get an initial condition for θ, reported in table 8.2. The algorithm converges after 22 iterations and delivers the final decision rule reported in table 8.2.

Table 8.2: Decision rule

          θ_0      θ_1       θ_2       θ_3     θ_4      θ_5
Initial  0.5386   -0.7367   -0.2428   0.09    0.252    -0.2934
Final    0.5489   -0.7570   -0.3337   0.9     0.580    -0.96

When γ is set at 0.75, 31 iterations are needed, 46 for γ = 0.5 and 90 for γ = 0.25. It is worth noting that the final decision rule does differ from the initial conditions, but not by as large an amount as one would have expected, meaning that in this setup, and provided the approximation is good enough,[3] certainty equivalence and non-linearities do not play such a great role. In fact, as illustrated in figure 8.1, the capital decision rule does not display that much non-linearity. Although particularly simple to implement (see the following matlab code), this method should be handled with care, as it may be difficult to obtain convergence for some models. Nevertheless it has another attractive feature: it can handle problems with

[3] Note that for the moment we have not made any evaluation of the accuracy of the decision rule. We will undertake such an evaluation in the sequel.

Figure 8.1: Capital decision rule (k_{t+1} plotted against k_t)

possibly binding constraints. We now provide two examples of such models.

Matlab Code: PEA Algorithm (OGM)

clear all

long  = 20000;          % length of the simulation
init  = 500;            % burn-in periods
slong = init+long;
T     = init+1:slong-1; % time t indices used in the regression
T1    = init+2:slong;   % time t+1 indices
tol   = 1e-6;
crit  = 1;
gam   = 1;              % smoothing parameter
sigma = 1;
delta = 0.1;
beta  = 0.95;
alpha = 0.3;
ab    = 0;
rho   = 0.9;
se    = 0.01;
param = [ab alpha beta delta rho se sigma long init];

ksy   = (alpha*beta)/(1-beta*(1-delta));
yss   = ksy^(alpha/(1-alpha));
kss   = yss^(1/alpha);

iss = delta*kss;
css = yss-iss;
csy = css/yss;
lss = css^(-sigma);

randn('state',1);
e    = se*randn(slong,1);
a    = zeros(slong,1);
a(1) = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end

b0 = peaoginit(e,param);   % Compute initial conditions

%
% Main Loop
%
iter = 1;
while crit>tol;
   %
   % Simulated path
   %
   k    = zeros(slong+1,1);
   lb   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   k(1) = kss;
   for i = 1:slong;
      X(i,:) = [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
      lb(i)  = exp(X(i,:)*b0);
      k(i+1) = exp(a(i))*k(i)^alpha+(1-delta)*k(i)-lb(i)^(-1/sigma);
   end
   y    = beta*lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta);
   bt   = X(T,:)\log(y);
   b    = gam*bt+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;
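Before trusting an implementation on the growth model, it is useful to check the algorithm on the linear example of section 8.2, where the fixed point θ_1 = b/(1 − aρ) is known in closed form. The following Python sketch (parameter values are arbitrary illustrative choices, with x̄ = 0, so θ_0 = 0) implements Steps 1 through 5 with γ = 1:

```python
import numpy as np

# Model: y_t = a*E_t[y_{t+1}] + b*x_t,  x_{t+1} = rho*x_t + eps_{t+1}
a, b, rho, sigma = 0.5, 1.0, 0.9, 0.1
T = 100_000
rng = np.random.default_rng(0)

# Step 1: draw the innovations once and simulate the exogenous state
eps = sigma * rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

theta = np.zeros(2)  # guess Phi(x; theta) = theta0 + theta1*x
for it in range(200):
    y = theta[0] + theta[1] * x                # Step 2: y_t = Phi(x_t; theta)
    dep = a * y[1:] + b * x[:-1]               # realized a*y_{t+1} + b*x_t
    X = np.column_stack([np.ones(T - 1), x[:-1]])
    theta_new, *_ = np.linalg.lstsq(X, dep, rcond=None)  # Step 3: OLS
    if np.max(np.abs(theta_new - theta)) < 1e-10:        # Step 5
        break
    theta = theta_new                          # Step 4 with gamma = 1

print(theta[1], b / (1 - a * rho))  # simulated vs closed-form slope
```

With these parameters the iteration is a contraction (the update rate is roughly aρ = 0.45 per step), and the estimated slope matches b/(1 − aρ) up to sampling error.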

8.4 PEA and binding constraints: Optimal growth with irreversible investment

We now consider a variation of the previous model, in the sense that we restrict gross investment to be positive in each and every period:

i_t ≥ 0  ⟺  k_{t+1} ≥ (1 − δ) k_t    (8.9)

This assumption amounts to assuming that there does not exist a second-hand market for capital. In such a case the problem of the central planner is to determine consumption and capital accumulation such that utility is maximal:

max_{c_t, k_{t+1}} E_0 Σ_{t=0}^∞ β^t c_t^{1−σ} / (1 − σ)

s.t.

k_{t+1} = z_t k_t^α − c_t + (1 − δ) k_t

and

k_{t+1} ≥ (1 − δ) k_t

Forming the Lagrangian associated with the previous problem, we have

L_t = E_t Σ_{τ=0}^∞ β^τ [ c_{t+τ}^{1−σ}/(1 − σ) + λ_{t+τ}(z_{t+τ} k_{t+τ}^α − c_{t+τ} + (1 − δ) k_{t+τ} − k_{t+τ+1}) + μ_{t+τ}(k_{t+τ+1} − (1 − δ) k_{t+τ}) ]

which leads to the following set of first order conditions

c_t^{−σ} = λ_t    (8.10)
λ_t − μ_t = β E_t[λ_{t+1}(α z_{t+1} k_{t+1}^{α−1} + 1 − δ) − μ_{t+1}(1 − δ)]    (8.11)
k_{t+1} = z_t k_t^α − c_t + (1 − δ) k_t    (8.12)
μ_t (k_{t+1} − (1 − δ) k_t) = 0    (8.13)

The main difference with the previous example is that now the central planner faces a constraint that may be binding in each and every period. Therefore,

this complicates the algorithm a little bit, and we have to find a rule both for the expectation function E_t[φ_{t+1}], where

φ_{t+1} ≡ β (λ_{t+1}(α z_{t+1} k_{t+1}^{α−1} + 1 − δ) − μ_{t+1}(1 − δ))

and for μ_t. We then proceed as suggested in Marcet and Lorenzoni [1999]:

1. Compute two sequences {λ_t(θ)}_{t=0}^T and {k_t(θ)}_{t=0}^T from (8.11) and (8.12) under the assumption that the constraint is not binding, that is μ_t(θ) = 0. In such a case, we just compute the sequences as in the standard optimal growth model.

2. Test whether, under this assumption, i_t(θ) ≥ 0. If it is the case, then set μ_t(θ) = 0; otherwise set k_{t+1}(θ) = (1 − δ) k_t(θ), compute c_t(θ) from the resource constraint, and recover μ_t(θ) from (8.11).

Note that, using this procedure, μ_t is just treated as an additional variable which is used to compute a sequence to solve the model. We therefore do not need to compute its interpolating function explicitly. As far as φ_{t+1} is concerned, we use the same interpolating function as in the previous example and therefore run a regression of the type

log(φ_{t+1}(θ)) = θ_0 + θ_1 log(k_t(θ)) + θ_2 log(z_t) + θ_3 log(k_t(θ))² + θ_4 log(z_t)² + θ_5 log(k_t(θ)) log(z_t)    (8.14)

to get a new estimate of θ.

Up to the shock, the parameterization used in the matlab code, reported in table 8.3, is essentially the same as the one used in the optimal growth model. The shock was artificially assigned a lower persistence and a greater volatility in order to increase the probability of the constraint binding, and therefore to illustrate the potential of this approach. γ, the smoothing parameter, was set to 1. The stopping criterion was set at η = 1e-6 and T = 20000 data points were used to compute the OLS regression.

Table 8.3: Optimal growth: Parameterization

  β      σ     α     δ     ρ     σ_e
 0.95    1    0.3   0.1   0.8   0.4

Initial conditions were set as in the standard optimal growth model: we first solve the model relying on a log-linear approximation, then generate a random draw of size T for ε and generate series using the log-linear approximate solution. We then build the needed series to recover a draw for {φ_{t+1}(θ)}, {k_t(θ)} and {z_t(θ)}, and run the regression (8.14) to get an initial condition for θ, reported in table 8.4. The algorithm converges after 5 iterations and delivers the final decision rule reported in table 8.4.

Table 8.4: Decision rule

          θ_0      θ_1       θ_2      θ_3      θ_4      θ_5
Initial  0.4620   -0.5760   -0.3909   0.0257   0.0307  -0.0524
Final    0.3558   -0.3289   -0.782   -0.20    -0.268    0.326

Contrary to the standard optimal growth model, the initial and final rules totally differ, in the sense that the coefficient in front of the capital stock in the final rule is half that of the initial rule, the coefficient in front of the shock is double, and the signs in front of all the quadratic terms are reversed. This should not be surprising, as the initial rule is computed under (i) the certainty equivalence hypothesis and (ii) the assumption that the constraint never binds, whereas the size of the shocks we introduce in the model implies that the constraint binds in 2.8% of the cases. The latter quantity may seem rather small, but it is sufficient to dramatically alter the decisions of the central planner when it acts under rational expectations. This is illustrated by figures 8.2 and 8.3, which respectively report the decision rules for investment, capital and the Lagrange multiplier, and a typical path for investment and the Lagrange multiplier. As reflected in

Figure 8.2: Decision rules (investment, distribution of investment, capital stock and Lagrange multiplier, plotted against k_t)

Figure 8.3: Typical investment path (investment and Lagrange multiplier over time)

the upper right panel of figure 8.2, which reports the simulated distribution of investment, the distribution is highly skewed and exhibits a mode at i_t = 0, revealing the fact that the constraint occasionally binds. This is also illustrated in the lower left panel, which reports the decision rule for the capital stock. As can be seen from this graph, the decision rule is bounded from below by the line (1 − δ)k_t (the grey line on the graph); such situations correspond to situations where the Lagrange multiplier is positive, as reported in the lower right panel of the figure.

Matlab Code: PEA Algorithm (Irreversible Investment)

clear all

long  = 20000;
init  = 500;
slong = init+long;
T     = init+1:slong-1;
T1    = init+2:slong;
tol   = 1e-6;
crit  = 1;
gam   = 1;
sigma = 1;
delta = 0.1;
beta  = 0.95;
alpha = 0.3;
ab    = 0;
rho   = 0.8;
se    = 0.25;

kss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
css = kss^alpha-delta*kss;
lss = css^(-sigma);
ysk = (1-beta*(1-delta))/(alpha*beta);
csy = 1-delta/ysk;

%
% Simulation of the shock
%
randn('state',1);
e    = se*randn(slong,1);
a    = zeros(slong,1);
a(1) = ab+e(1);
for i = 2:slong;
   a(i) = rho*a(i-1)+(1-rho)*ab+e(i);
end

%
% Initial guess

%
param = [ab alpha beta delta rho se sigma long init];
b0    = peaoginit(e,param);

%
% Main Loop
%
iter = 1;
while crit>tol;
   %
   % Simulated path
   %
   k    = zeros(slong+1,1);
   lb   = zeros(slong,1);
   mu   = zeros(slong,1);
   X    = zeros(slong,length(b0));
   k(1) = kss;
   for i = 1:slong;
      X(i,:) = [1 log(k(i)) a(i) log(k(i))*log(k(i)) a(i)*a(i) log(k(i))*a(i)];
      lb(i)  = exp(X(i,:)*b0);
      iv     = exp(a(i))*k(i)^alpha-lb(i)^(-1/sigma);
      if iv>0;
         k(i+1) = (1-delta)*k(i)+iv;
         mu(i)  = 0;
      else
         k(i+1) = (1-delta)*k(i);
         c      = exp(a(i))*k(i)^alpha;
         mu(i)  = c^(-sigma)-lb(i);
      end
   end
   y    = beta*(lb(T1).*(alpha*exp(a(T1)).*k(T1).^(alpha-1)+1-delta)...
          -mu(T1)*(1-delta));
   bt   = X(T,:)\log(y);
   b    = gam*bt+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;
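Steps 1 and 2 of the constraint-handling procedure reduce, inside the simulation loop, to a small amount of per-period logic. A Python sketch of that logic follows (the function and variable names are mine, for illustration; the Matlab loop above does the same thing inline):

```python
def step_constrained(k, z, lam, alpha, delta, sigma):
    """One period of the Marcet-Lorenzoni two-step: first assume the
    irreversibility constraint does not bind, then test and correct.
    lam is the approximated expectation Phi(k, z; theta), i.e. the
    candidate marginal utility lambda_t.  Returns (k_next, c, mu)."""
    c = lam ** (-1.0 / sigma)       # consumption implied by mu_t = 0
    iv = z * k ** alpha - c         # implied gross investment
    if iv > 0:                      # constraint slack: mu_t = 0
        return (1 - delta) * k + iv, c, 0.0
    # constraint binds: k_{t+1} = (1-delta)*k_t, all output is consumed,
    # and mu_t is backed out from equation (8.11)
    c = z * k ** alpha
    mu = c ** (-sigma) - lam
    return (1 - delta) * k, c, mu
```

In each simulated period, the returned μ_t feeds into the regressand φ_{t+1} exactly as mu(i) does in the Matlab loop.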

8.5 The Household's Problem With Borrowing Constraints

As a final example, we now consider a consumer who faces borrowing constraints, such that she solves the program

max_{c_t} E_t Σ_{τ=0}^∞ β^τ u(c_{t+τ})

s.t.

a_{t+1} = (1 + r) a_t + ω_t − c_t
a_{t+1} ≥ a̲
log(ω_{t+1}) = ρ log(ω_t) + (1 − ρ) log(ω̄) + ε_{t+1}

Let us first recall the first order conditions associated with this problem:

c_t^{−σ} = λ_t    (8.15)
λ_t = μ_t + β(1 + r) E_t λ_{t+1}    (8.16)
a_{t+1} = (1 + r) a_t + ω_t − c_t    (8.17)
log(ω_{t+1}) = ρ log(ω_t) + (1 − ρ) log(ω̄) + ε_{t+1}    (8.18)
μ_t (a_{t+1} − a̲) = 0    (8.19)
μ_t ≥ 0    (8.20)

In order to solve this model, we have to find a rule both for the expectation function E_t[φ_{t+1}], where

φ_{t+1} ≡ β R λ_{t+1},  with R ≡ 1 + r

and for μ_t. We propose to follow the same procedure as the previous one:

1. Compute two sequences {λ_t(θ)}_{t=0}^T and {a_t(θ)}_{t=0}^T from (8.16) and (8.17) under the assumption that the constraint is not binding, that is μ_t(θ) = 0.

2. Test whether, under this assumption, a_{t+1}(θ) ≥ a̲. If it is the case, then set μ_t(θ) = 0; otherwise set a_{t+1}(θ) = a̲, compute c_t(θ) from the resource constraint, and recover μ_t(θ) from (8.16).

Note that, using this procedure, μ_t is just treated as an additional variable which is used to compute a sequence to solve the model. We therefore do not need to compute its interpolating function explicitly. As far as φ_{t+1} is concerned, we use the same type of interpolating function as in the previous example and therefore run a regression of the type

log(φ_{t+1}(θ)) = θ_0 + θ_1 a_t(θ) + θ_2 ω_t + θ_3 a_t(θ)² + θ_4 ω_t² + θ_5 a_t(θ) ω_t    (8.21)

to get a new estimate of θ. The parameterization is reported in table 8.5.

Table 8.5: Borrowing constraint: Parameterization

  a̲     β      σ     ρ     σ_ω    R     ω̄
  0    0.95   1.5   0.7   0.1   1.04    1

γ, the smoothing parameter, was set to 1. The stopping criterion was set at η = 1e-6 and T = 20000 data points were used to compute the OLS regression.

One key issue in this particular problem relates to the initial conditions. Indeed, it is extremely difficult to find a good initial guess, as the only model for which we might get an analytical solution while remaining related to the present model is the standard permanent income model. Unfortunately, that model exhibits non-stationary behavior, in the sense that it generates an I(1) process for the level of individual wealth and consumption, and therefore for the marginal utility of wealth. We therefore have to take another route. We propose the

following procedure. For a given a_0 and a sequence {ω_t}_{t=0}^T, we generate c_0 = r̃ a_0 + ω_0 + η_0, where r̃ > r and η_0 ~ N(0, σ_η). In practice, we took r̃ = 0.2 and σ_η = 0.1, as in the matlab code below. We then compute a_1 from the law of motion of wealth. If a_1 < a̲, then a_1 is set to a̲ and c_0 = R a_0 + ω_0 − a̲; otherwise c_0 is not modified. We then proceed in exactly the same way for all t > 0. We then have in hand sequences for both a_t and c_t, and therefore for λ_t. We can then easily recover φ_{t+1} and an initial θ from the regression (8.21) (see table 8.6).

Table 8.6: Decision rule

          θ_0      θ_1      θ_2      θ_3      θ_4      θ_5
Initial  1.6740   -0.6324  -2.98    0.033    0.5438   0.297
Final    1.5046   -0.579   -2.792   0.0458   0.7020   0.359

The algorithm converges after 79 iterations and delivers the final decision rule reported in table 8.6. Note that even though the final decision rule effectively differs from the initial one, the difference is not huge, meaning that our initialization procedure is relevant. Figure 8.4 reports the decision rule for consumption in terms of cash-on-hand, that is the effective amount a household may use to purchase goods (R a_t + ω_t − a̲). Figure 8.5 reports the decision rule for wealth accumulation as well as the implied distribution of wealth, which admits a mode at a̲, revealing that the constraint effectively binds (in 3.7% of the cases).

Matlab Code: PEA Algorithm (Borrowing Constraints)

clear all

crit  = 1;
tol   = 1e-6;
gam   = 1;
long  = 20000;
init  = 500;
slong = long+init;
T     = init+1:slong-1;
T1    = init+2:slong;
rw    = 0.7;
sw    = 0.1;

[Figure 8.4: Consumption decision rule — consumption plotted against cash-on-hand R a_t + ω_t − \underline{a}.]

[Figure 8.5: Wealth accumulation — left panel: wealth decision rule; right panel: distribution of wealth.]

wb    = 0;                % mean of the log endowment
beta  = 0.95;             % discount factor
R     = 1/(beta+0.01);    % gross interest rate
sigma = 1.5;              % curvature of the utility function
ab    = 0;                % borrowing limit
randn('state',1);
e = sw*randn(slong,1);
w = zeros(slong,1);
w(1) = wb+e(1);
for i = 2:slong;
   w(i) = rw*w(i-1)+(1-rw)*wb+e(i);
end
w = exp(w);
%
% Initialization: rule-of-thumb consumption c = rt*a + w + noise
%
a  = zeros(slong,1);
c  = zeros(slong,1);
lb = zeros(slong,1);
X  = zeros(slong,6);
a(1) = 0;                 % initial wealth
rt = 0.2;
sc = 0.1;
randn('state',1234567890);
ec = sc*randn(slong,1);
for i = 1:slong;
   X(i,:) = [1 a(i) w(i) a(i)*a(i) w(i)*w(i) a(i)*w(i)];
   c(i)   = rt*a(i)+w(i)+ec(i);
   a1     = R*a(i)+w(i)-c(i);
   if a1>ab;
      a(i+1) = a1;
   else
      a(i+1) = ab;
      c(i)   = R*a(i)+w(i)-ab;
   end
end
lb = c.^(-sigma);
y  = log(beta*R*lb(T1));
b0 = X(T,:)\y;
iter = 1;
while crit>tol;
   a  = zeros(slong,1);
   c  = zeros(slong,1);
   lb = zeros(slong,1);
   X  = zeros(slong,length(b0));
   a(1) = 0;
   for i = 1:slong;
      X(i,:) = [1 a(i) w(i) a(i)*a(i) w(i)*w(i) a(i)*w(i)];

      lb(i) = exp(X(i,:)*b0);
      a1    = R*a(i)+w(i)-lb(i)^(-1/sigma);
      if a1>ab;
         a(i+1) = a1;
         c(i)   = lb(i)^(-1/sigma);
      else
         a(i+1) = ab;
         c(i)   = R*a(i)+w(i)-ab;
         lb(i)  = c(i)^(-sigma);
      end
   end
   y    = log(beta*R*lb(T1));
   b    = X(T,:)\y;
   b    = gam*b+(1-gam)*b0;
   crit = max(abs(b-b0));
   b0   = b;
   disp(sprintf('Iteration: %d\tConv. crit.: %g',iter,crit))
   iter = iter+1;
end;
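The same fixed-point iteration is easy to reproduce outside Matlab. The sketch below is a compact NumPy port of the listing above; the shorter simulation length, the seed, the overflow guard, and the iteration cap are choices of this port rather than of the original code, so the estimates will not match table 8.6 digit for digit.

```python
# NumPy port of the PEA loop for the borrowing-constraint problem.
# Simulation length, seed, overflow guard and iteration cap are
# assumptions of this sketch, not of the original Matlab listing.
import numpy as np

abar, beta, sigma, rho, sw = 0.0, 0.95, 1.5, 0.7, 0.1  # table 8.5
R = 1 / (beta + 0.01)                                  # gross interest rate
rng = np.random.default_rng(0)
burn, n = 300, 3300                                    # burn-in, total length

# AR(1) endowment in logs
logw = np.zeros(n)
eps = sw * rng.standard_normal(n)
for t in range(1, n):
    logw[t] = rho * logw[t - 1] + eps[t]
w = np.exp(logw)

def regressors(a, w):
    # second-order polynomial in (a, w), as in (8.2)
    return np.column_stack([np.ones_like(a), a, w, a * a, w * w, a * w])

def simulate(theta):
    """Simulate {a_t, c_t, lambda_t} given lambda_t = exp(X_t theta),
    imposing the borrowing constraint a_{t+1} >= abar."""
    a = np.zeros(n + 1)
    c = np.zeros(n)
    lam = np.zeros(n)
    for t in range(n):
        x = np.array([1.0, a[t], w[t], a[t] ** 2, w[t] ** 2, a[t] * w[t]])
        lam[t] = np.exp(np.clip(x @ theta, -30.0, 30.0))  # overflow guard
        a1 = R * a[t] + w[t] - lam[t] ** (-1 / sigma)
        if a1 >= abar:                     # constraint slack: mu_t = 0
            a[t + 1] = a1
            c[t] = lam[t] ** (-1 / sigma)
        else:                              # constraint binds
            a[t + 1] = abar
            c[t] = R * a[t] + w[t] - abar
            lam[t] = c[t] ** (-sigma)
    return a[:n], c, lam

# Initial guess: rule-of-thumb consumption c = 0.2 a + w + noise
a0, c0 = np.zeros(n + 1), np.zeros(n)
noise = 0.1 * rng.standard_normal(n)
for t in range(n):
    c0[t] = 0.2 * a0[t] + w[t] + noise[t]
    a1 = R * a0[t] + w[t] - c0[t]
    if a1 >= abar:
        a0[t + 1] = a1
    else:
        a0[t + 1] = abar
        c0[t] = R * a0[t] + w[t] - abar
lam = c0 ** (-sigma)
theta = np.linalg.lstsq(regressors(a0[burn:n - 1], w[burn:n - 1]),
                        np.log(beta * R * lam[burn + 1:n]), rcond=None)[0]

# Fixed-point iteration (damping gamma = 1, i.e. no smoothing)
for it in range(300):
    a, c, lam = simulate(theta)
    theta_new = np.linalg.lstsq(regressors(a[burn:n - 1], w[burn:n - 1]),
                                np.log(beta * R * lam[burn + 1:n]),
                                rcond=None)[0]
    crit = np.max(np.abs(theta_new - theta))
    theta = theta_new
    if crit < 1e-6:
        break
```

As in the Matlab listing, the borrowing constraint is enforced inside the simulation step: λ_t is only recomputed from the resource constraint on the dates where a_{t+1} would otherwise fall below \underline{a}.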



Index

Expectation function
Interpolating function
Orthogonal polynomial

Contents

8 Parameterized expectations algorithm
  8.1 Basics
  8.2 A linear example
  8.3 Standard PEA solution: the Optimal Growth Model
  8.4 PEA and binding constraints: Optimal growth with irreversible investment
  8.5 The Household's Problem With Borrowing Constraints


List of Figures

8.1 Capital decision rule
8.2 Decision rules
8.3 Typical investment path
8.4 Consumption decision rule
8.5 Wealth accumulation


List of Tables

8.1 Optimal growth: Parameterization
8.2 Decision rule
8.3 Optimal growth: Parameterization
8.4 Decision rule
8.5 Borrowing constraint: Parameterization
8.6 Decision rule