On automatic derivation of first order conditions in dynamic stochastic optimisation problems


MPRA Munich Personal RePEc Archive

On automatic derivation of first order conditions in dynamic stochastic optimisation problems

Grzegorz Klima and Kaja Retkiewicz-Wijtiwiak

Department for Strategic Analyses, Chancellery of the Prime Minister of the Republic of Poland

28 April 2014

Online at http://mpra.ub.uni-muenchen.de/55612/
MPRA Paper No. 55612, posted 29 April 2014 23:46 UTC

On automatic derivation of first order conditions in dynamic stochastic optimisation problems

Grzegorz Klima
Department for Strategic Analyses, Chancellery of the Prime Minister of the Republic of Poland

Kaja Retkiewicz-Wijtiwiak
Department for Strategic Analyses, Chancellery of the Prime Minister of the Republic of Poland

April 28, 2014

Abstract

This note presents an algorithm for automatically deriving first order conditions for the most common optimisation problems encountered in dynamic stochastic models. Given a symbolic library or a computer algebra system, one can efficiently derive first order conditions which can then be used for solving models numerically (steady state, linearisation).

Keywords: DSGE, stochastic optimisation, first order conditions, symbolic computations.
JEL classification: C61, C63, C68.

1 Introduction

Toolboxes aimed at solving dynamic stochastic general equilibrium models (e.g. Dynare [1]) require users to derive first order conditions for agents' optimisation problems manually. This is most probably caused by the lack of an algorithm for deriving them automatically, given an objective function and constraints. This note presents such an algorithm, applicable to the most common optimisation problems encountered in dynamic stochastic models. Given a symbolic library or a computer algebra system satisfying some functional requirements, one can efficiently derive the first order conditions, which can then be used for solving the model numerically (steady state, linearisation). The approach presented here is fairly general and can be extended to handle more complicated optimisation problems.

The views expressed herein are solely those of the authors and do not necessarily reflect those of the Chancellery of the Prime Minister of the Republic of Poland. E-mail: gklima@users.sourceforge.net.
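As a small illustration of the kind of primitive a symbolic library must provide (this example and all names in it are ours, not part of the paper; we use the SymPy computer algebra system), symbolic differentiation of a period utility function is a one-liner:

```python
# Illustrative only: symbolic differentiation with SymPy, the kind of
# elementary operation the derivation algorithm in this note relies on.
import sympy as sp

c, l = sp.symbols('c l', positive=True)
u = sp.log(c) - l**2 / 2       # an example period utility function
du_dc = sp.diff(u, c)          # marginal utility of consumption: 1/c
du_dl = sp.diff(u, l)          # marginal disutility of labour: -l
print(du_dc, du_dl)
```

The algorithm below reduces the derivation of first order conditions for a whole class of dynamic stochastic problems to a fixed sequence of such differentiations and substitutions.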

2 The problem

This is the standard setup presented in economic textbooks; a detailed exposition can be found, for example, in [3] or [2]. Time is discrete, infinite, and begins at t = 0. In each period t = 1, 2, ... a realisation of the stochastic event \xi_t is observed. A history of events up to time t is denoted by s^t. More formally, let (\Omega, \mathcal{F}, P) be a discrete probability space with the filtration

\{\emptyset, \Omega\} = \mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \dots \subseteq \mathcal{F}_t \subseteq \mathcal{F}_{t+1} \subseteq \dots \subseteq \mathcal{F}.

Each event at date t (\xi_t) and every history up to time t (s^t) is \mathcal{F}_t-measurable. Let \pi(s^t) denote the probability of history s^t up to time t. The conditional probability \pi(s^{t+1} | s^t) is the probability of the event \xi_{t+1} such that s^{t+1} = (s^t, \xi_{t+1}). In what follows it is assumed that any variable with the time index t is \mathcal{F}_t-measurable.

In period t = 0 an agent determines vectors of control variables x(s^t) = (x^1(s^t), \dots, x^N(s^t)) at all possible events s^t as a solution to her optimisation problem. The objective function U_0 (lifetime utility) is given recursively by the following equation:

U_t(s^t) = F\big(x_{t-1}(s^{t-1}), x_t(s^t), z_{t-1}(s^{t-1}), z_t(s^t), E_t H^1(x_{t-1}, x_t, U_{t+1}, z_{t-1}, z_t, z_{t+1}), \dots, E_t H^J(\cdot)\big),   (1)

with constraints satisfying:

G^i\big(x_{t-1}(s^{t-1}), x_t(s^t), z_{t-1}(s^{t-1}), z_t(s^t), E_t H^1(x_{t-1}, x_t, U_{t+1}, z_{t-1}, z_t, z_{t+1}), \dots, E_t H^J(\cdot)\big) = 0,   x_{-1} given,   (2)

where x_t(s^t) are decision variables, z_t(s^t) are exogenous variables, and i = 1, \dots, I indexes constraints. We shall denote the expression E_t H^j(x_{t-1}, x_t, U_{t+1}, z_{t-1}, z_t, z_{t+1}) compactly as E_t H^j_{t+1}, with j = 1, \dots, J. We have:

E_t H^j_{t+1} = \sum_{s^{t+1}} \pi(s^{t+1} | s^t) H^j\big(x_{t-1}(s^{t-1}), x_t(s^t), U_{t+1}(s^{t+1}), z_{t-1}(s^{t-1}), z_t(s^t), z_{t+1}(s^{t+1})\big).

Let us now modify the problem by substituting q^j_t(s^t) for E_t H^j_{t+1} and adding constraints of the form q^j_t(s^t) = E_t H^j_{t+1}.
We shall also use F_t(s^t) and G^i_t(s^t) to denote the expressions F\big(x_{t-1}(s^{t-1}), x_t(s^t), z_{t-1}(s^{t-1}), z_t(s^t), q^1_t(s^t), \dots, q^J_t(s^t)\big) and G^i\big(x_{t-1}(s^{t-1}), x_t(s^t), z_{t-1}(s^{t-1}), z_t(s^t), q^1_t(s^t), \dots, q^J_t(s^t)\big), respectively. Then the agent's problem may be written as:

\max_{(x_t)_{t=0}^{\infty}, (U_t)_{t=0}^{\infty}} U_0   s.t.:   (3)

U_t(s^t) = F_t(s^t),
G^i_t(s^t) = 0,
q^j_t(s^t) = E_t H^j_{t+1},
x_{-1} given.
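As a concrete special case (our illustration, not part of the original text): standard time-separable expected utility fits form (1) with a single expectational term:

```latex
U_t = u(x_t) + \beta \, E_t U_{t+1},
\qquad\text{i.e.}\qquad
J = 1,\quad H^1 = U_{t+1},\quad
F(x_{t-1}, x_t, z_{t-1}, z_t, q^1_t) = u(x_t) + \beta q^1_t,
```

so that the auxiliary variable satisfies q^1_t = E_t U_{t+1}. This is the exponential-discounting case for which, as noted in the next section, the multiplier recursion reduces to \lambda_{t+1} = \beta \lambda_t.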

3 The First Order Conditions

The Lagrangian for problem (3) may be written as follows:

\mathcal{L} = U_0 + \sum_{t=0}^{\infty} \sum_{s^t} \pi(s^t) \lambda_t(s^t) \big[ F_t(s^t) - U_t(s^t) \big]
  + \sum_{t=0}^{\infty} \sum_{s^t} \pi(s^t) \lambda_t(s^t) \sum_{i=1}^{I} \mu^i_t(s^t) G^i_t(s^t)
  + \sum_{t=0}^{\infty} \sum_{s^t} \pi(s^t) \lambda_t(s^t) \sum_{j=1}^{J} \eta^j_t(s^t) \big( E_t H^j_{t+1} - q^j_t(s^t) \big).

The first order condition for maximising the Lagrangian with respect to U_t(s^t) is:

0 = -\pi(s^t) \lambda_t(s^t) + \pi(s^{t-1}) \lambda_{t-1}(s^{t-1}) \pi(s^t | s^{t-1}) \sum_{j=1}^{J} \eta^j_{t-1}(s^{t-1}) H^j_{t,3}(s^t),

where the subscript 3 in H^j_{t,3}(s^t) stands for the partial derivative of H^j_t(s^t) with respect to its third argument, i.e. U_t(s^t) (we adopt this notation throughout this note). Using the property \pi(s^{t-1}) \pi(s^t | s^{t-1}) = \pi(s^t) and dividing by \pi(s^t) yields:

\lambda_t(s^t) = \lambda_{t-1}(s^{t-1}) \sum_{j=1}^{J} \eta^j_{t-1}(s^{t-1}) H^j_{t,3}(s^t),

which implies:

\lambda_{t+1}(s^{t+1}) = \lambda_t(s^t) \sum_{j=1}^{J} \eta^j_t(s^t) H^j_{t+1,3}(s^{t+1}).

In general, Lagrange multipliers on time aggregators, i.e. equation (1), are non-stationary. For instance, in the case of exponential discounting one will have \lambda_{t+1} = \beta \lambda_t. Dividing the equation by \lambda_t(s^t) we obtain:

\lambda_{t+1}(s^{t+1}) / \lambda_t(s^t) = \sum_{j=1}^{J} \eta^j_t(s^t) H^j_{t+1,3}(s^{t+1}).

Now let us set \lambda_t(s^t) = 1. This is equivalent to reinterpreting \lambda_{t+1}(s^{t+1}) as \lambda_{t+1}(s^{t+1}) / \lambda_t(s^t) in all equations. We have:

\lambda_{t+1}(s^{t+1}) = \sum_{j=1}^{J} \eta^j_t(s^t) H^j_{t+1,3}(s^{t+1}).   (4)

The first order condition for maximising the Lagrangian \mathcal{L} with respect to x_t(s^t) gives:

0 = \pi(s^t) \lambda_t(s^t) F_{t,2}(s^t) + \sum_{s^{t+1}} \pi(s^{t+1}) \lambda_{t+1}(s^{t+1}) F_{t+1,1}(s^{t+1})
  + \pi(s^t) \lambda_t(s^t) \sum_{i=1}^{I} \mu^i_t(s^t) G^i_{t,2}(s^t)
  + \pi(s^t) \lambda_t(s^t) \sum_{j=1}^{J} \eta^j_t(s^t) E_t H^j_{t+1,2}(s^{t+1})
  + \sum_{s^{t+1}} \pi(s^{t+1}) \lambda_{t+1}(s^{t+1}) \sum_{i=1}^{I} \mu^i_{t+1}(s^{t+1}) G^i_{t+1,1}(s^{t+1})
  + \sum_{s^{t+1}} \pi(s^{t+1}) \lambda_{t+1}(s^{t+1}) \sum_{j=1}^{J} \eta^j_{t+1}(s^{t+1}) E_{t+1} H^j_{t+2,1}(s^{t+2}).

Simplification yields:

0 = \pi(s^t) \lambda_t(s^t) \Big[ F_{t,2}(s^t) + \sum_{i=1}^{I} \mu^i_t(s^t) G^i_{t,2}(s^t) + \sum_{j=1}^{J} \eta^j_t(s^t) E_t H^j_{t+1,2}(s^{t+1}) \Big]
  + \sum_{s^{t+1}} \pi(s^{t+1}) \lambda_{t+1}(s^{t+1}) \Big[ F_{t+1,1}(s^{t+1}) + \sum_{i=1}^{I} \mu^i_{t+1}(s^{t+1}) G^i_{t+1,1}(s^{t+1}) + \sum_{j=1}^{J} \eta^j_{t+1}(s^{t+1}) E_{t+1} H^j_{t+2,1}(s^{t+2}) \Big].

Setting \lambda_t(s^t) = 1 as before, dividing the equation by \pi(s^t), and making use of the property \pi(s^{t+1}) = \pi(s^{t+1} | s^t) \pi(s^t), we obtain:

0 = F_{t,2}(s^t) + \sum_{i=1}^{I} \mu^i_t(s^t) G^i_{t,2}(s^t) + \sum_{j=1}^{J} \eta^j_t(s^t) E_t H^j_{t+1,2}(s^{t+1})
  + \sum_{s^{t+1}} \pi(s^{t+1} | s^t) \lambda_{t+1}(s^{t+1}) \Big[ F_{t+1,1}(s^{t+1}) + \sum_{i=1}^{I} \mu^i_{t+1}(s^{t+1}) G^i_{t+1,1}(s^{t+1}) + \sum_{j=1}^{J} \eta^j_{t+1}(s^{t+1}) E_{t+1} H^j_{t+2,1}(s^{t+2}) \Big].

After rearrangement we arrive at the stochastic Euler equations:

0 = F_{t,2}(s^t) + \sum_{i=1}^{I} \mu^i_t(s^t) G^i_{t,2}(s^t) + \sum_{j=1}^{J} \eta^j_t(s^t) E_t H^j_{t+1,2}(s^{t+1})
  + E_t \lambda_{t+1} \Big[ F_{t+1,1} + \sum_{i=1}^{I} \mu^i_{t+1} G^i_{t+1,1} + \sum_{j=1}^{J} \eta^j_{t+1} E_{t+1} H^j_{t+2,1} \Big].   (5)

Finally, differentiating the Lagrangian \mathcal{L} with respect to q^j_t(s^t) gives:

0 = \pi(s^t) \lambda_t(s^t) F_{t,4+j}(s^t) + \pi(s^t) \lambda_t(s^t) \sum_{i=1}^{I} \mu^i_t(s^t) G^i_{t,4+j}(s^t) - \pi(s^t) \lambda_t(s^t) \eta^j_t(s^t).

Setting \lambda_t(s^t) = 1 and dividing the equation by \pi(s^t) yields:

0 = F_{t,4+j}(s^t) + \sum_{i=1}^{I} \mu^i_t(s^t) G^i_{t,4+j}(s^t) - \eta^j_t(s^t).   (6)

There are N + 1 + J first order conditions: one w.r.t. U_t (4), N w.r.t. x^n_t (5), and J w.r.t. q^j_t (6). There are also I conditions G^i_t = 0, the equation F(x_{t-1}, x_t, z_{t-1}, z_t, q^1_t, \dots, q^J_t) = U_t, and J equations defining the q^j_t. The overall number of equations (N + I + 2J + 2) equals the number of variables: N decision variables x^n_t, the variable U_t, J variables q^j_t, the Lagrange multiplier \lambda_t, I Lagrange multipliers \mu^i_t, and J Lagrange multipliers \eta^j_t (which gives N + I + 2J + 2 variables).

4 An example

The purpose of this section is to illustrate the FOC derivation procedure presented above with an example of a typical optimisation problem encountered in numerous RBC models. Assume a representative firm in a competitive setting maximises an objective (discounted profits) at time 0 (\Pi_0), given the definition of profits

earned each period (\pi_t), the technology available (a Cobb-Douglas production function), and the law of motion for capital (K_t). The firm owns capital and employs labour L^d_t at wage W_t. It uses production factors to produce output Y_t, which it sells at price P_t. All prices are treated as given. The firm discounts its next-period profits with the growth rate of the Lagrange multiplier (\lambda^c_t) in the household's (the firm's owner's) problem corresponding to the budget constraint, i.e. \lambda^c_t is the shadow price of consumption. Therefore, the expected discounted profit is:

E_0 \Big[ \sum_{t=0}^{\infty} \beta^t \frac{\lambda^c_t}{\lambda^c_0} \pi_t \Big].

The firm's optimisation problem can be written as follows:

\max_{K_t, L^d_t, Y_t, I_t, \pi_t} \Pi_t = \pi_t + E_t \Big[ \beta \frac{\lambda^c_{t+1}}{\lambda^c_t} \Pi_{t+1} \Big]   (7)

s.t.:
\pi_t = P_t Y_t - L^d_t W_t - I_t,   (\lambda^\pi_t)
Y_t = K_{t-1}^{\alpha} (L^d_t e^{Z_t})^{1-\alpha},   (\lambda^Y_t)
K_t = I_t + K_{t-1}(1 - \delta),   (\lambda^K_t)

where Z_t is an exogenous variable determining labour productivity, and \alpha, \beta, and \delta are, respectively, the capital share, the discount factor, and the depreciation rate. \lambda^\pi_t, \lambda^Y_t, \lambda^K_t are Lagrange multipliers for the constraints.

In order to derive the FOCs for this maximisation problem using equations (4), (5), and (6), it is helpful to define the symbols used in the previous section for problem (7):

x_t \equiv [K_t, L^d_t, Y_t, I_t, \pi_t], \quad z_t \equiv [Z_t], \quad U_t \equiv \Pi_t, \quad \lambda_t \equiv \lambda^\Pi_t,

q^1_t = E_t H^1_{t+1} \equiv E_t \Big[ \beta \frac{\lambda^c_{t+1}}{\lambda^c_t} \Pi_{t+1} \Big],   (\eta^1_t)

F_t \equiv x^5_t + q^1_t,

G^1_t \equiv P_t x^3_t - x^2_t W_t - x^4_t - x^5_t,   (\mu^1_t \equiv \lambda^\pi_t)
G^2_t \equiv (x^1_{t-1})^{\alpha} (x^2_t e^{z^1_t})^{1-\alpha} - x^3_t,   (\mu^2_t \equiv \lambda^Y_t)
G^3_t \equiv x^4_t + x^1_{t-1}(1-\delta) - x^1_t.   (\mu^3_t \equiv \lambda^K_t)

Substituting these definitions into equations (4)-(6), one obtains the following set of first order conditions.

From equation (4):

-\lambda^\Pi_t + \eta^1_{t-1} \beta \frac{\lambda^c_t}{\lambda^c_{t-1}} = 0,

from equation (5):

-\lambda^K_t + E_t \Big[ \lambda^\Pi_{t+1} \Big( \alpha \lambda^Y_{t+1} K_t^{\alpha-1} (L^d_{t+1} e^{Z_{t+1}})^{1-\alpha} + \lambda^K_{t+1}(1-\delta) \Big) \Big] = 0,
-W_t \lambda^\pi_t + \lambda^Y_t e^{Z_t} (1-\alpha) K_{t-1}^{\alpha} (L^d_t e^{Z_t})^{-\alpha} = 0,
P_t \lambda^\pi_t - \lambda^Y_t = 0,
-\lambda^\pi_t + \lambda^K_t = 0,
1 - \lambda^\pi_t = 0,

from equation (6):

1 - \eta^1_t = 0.

Substituting for the Lagrange multipliers one gets the familiar results:

-1 + E_t \Big[ \beta \frac{\lambda^c_{t+1}}{\lambda^c_t} \Big( 1 - \delta + \alpha P_{t+1} K_t^{\alpha-1} (L^d_{t+1} e^{Z_{t+1}})^{1-\alpha} \Big) \Big] = 0,
-W_t + P_t e^{Z_t} (1-\alpha) K_{t-1}^{\alpha} (L^d_t e^{Z_t})^{-\alpha} = 0.

5 Algorithm

The equations derived in section 3 can be collected to yield an automated method for deriving the first order conditions. This method is presented as Algorithm 1.

Algorithm 1: Derivation of first order conditions for (3)

FOC w.r.t. U:
1: for j = 1, ..., J do
2:   Z^j <- \eta^j_t \, \partial H^j_{t+1} / \partial U_{t+1}
3: end for
4: Z <- \sum_j Z^j
5: C_0 <- Z - \lambda_{t+1}

FOCs w.r.t. x:
6: for n = 1, ..., N do
7:   for i = 1, ..., I do
8:     L^i_n <- \mu^i_t \, \partial G^i_t / \partial x^n_t
9:     P^i_n <- \mu^i_{t+1} \, \partial G^i_{t+1} / \partial x^n_t
10:  end for
11:  for j = 1, ..., J do
12:    M^j_n <- \eta^j_t \, \partial H^j_{t+1} / \partial x^n_t
13:    Q^j_n <- \eta^j_{t+1} \, \partial H^j_{t+2} / \partial x^n_t
14:  end for
15:  L_n <- \sum_i L^i_n
16:  M_n <- \sum_j M^j_n
17:  P_n <- \sum_i P^i_n
18:  Q_n <- \sum_j Q^j_n
19:  R_n <- \partial F_{t+1} / \partial x^n_t + P_n + Q_n
20:  C_n <- \partial F_t / \partial x^n_t + L_n + M_n + E_t[\lambda_{t+1} R_n]
21: end for

FOCs w.r.t. q:
22: for j = 1, ..., J do
23:   for i = 1, ..., I do
24:     S^i_j <- \mu^i_t \, \partial G^i_t / \partial q^j_t
25:   end for
26:   S_j <- \sum_i S^i_j
27:   C_{N+j} <- \partial F_t / \partial q^j_t + S_j - \eta^j_t
28: end for
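The intratemporal (labour) condition derived in the example above can be checked independently with a computer algebra system. The following sketch is ours, not the paper's implementation; it uses SymPy, with K standing for the predetermined capital stock K_{t-1}, and recovers the condition by differentiating the part of period profit that involves labour:

```python
# Verify the labour FOC of the firm example:
#   -W + P*exp(Z)*(1-alpha)*K**alpha*(L*exp(Z))**(-alpha) = 0
import sympy as sp

K, L, W, P, Z, alpha = sp.symbols('K L W P Z alpha', positive=True)
Y = K**alpha * (L * sp.exp(Z))**(1 - alpha)    # Cobb-Douglas technology
profit_L = P * Y - W * L                       # labour-dependent part of pi_t
foc = sp.diff(profit_L, L)                     # d(profit)/dL = 0 at the optimum
W_star = sp.solve(sp.Eq(foc, 0), W)[0]         # implied wage
target = P * sp.exp(Z) * (1 - alpha) * K**alpha * (L * sp.exp(Z))**(-alpha)
print(sp.simplify(W_star - target))            # 0: matches the derived FOC
```

This static shortcut works here because labour enters only the period-t constraints; the paper's algorithm obtains the same condition mechanically from equations (4)-(6).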

Note that the last set of first order conditions can always be solved for \eta_t and used to eliminate it from all the remaining equations.

In order to implement the above algorithm, one needs a symbolic library or a computer algebra system with the following functionality:

- variables and time-indexed variables,
- basic arithmetic operations (+, -, *, /) and elementary functions,
- lag and conditional expected value operators,
- substitution and differentiation operations.

6 Special case: the deterministic model

In the case of a deterministic model the objective function U_0 (lifetime utility) is given recursively by the following equation:

U_t = F(x_{t-1}, x_t, z_{t-1}, z_t, z_{t+1}, U_{t+1}),   (8)

with constraints satisfying:

G^i(x_{t-1}, x_t, z_{t-1}, z_t, z_{t+1}, U_{t+1}) = 0,   (9)

where x_t are decision variables, z_t are exogenous variables, and i = 1, \dots, I indexes constraints. As earlier, we shall use F_t and G^i_t to denote the expressions F(x_{t-1}, x_t, z_{t-1}, z_t, z_{t+1}, U_{t+1}) and G^i(x_{t-1}, x_t, z_{t-1}, z_t, z_{t+1}, U_{t+1}), respectively. Then the agent's problem may be written as:

\max_{(x_t)_{t=0}^{\infty}, (U_t)_{t=0}^{\infty}} U_0   s.t.:   (10)

U_t = F_t,
G^i_t = 0,
x_{-1} given.

The Lagrangian for problem (10) may be written as follows:

\mathcal{L} = U_0 + \sum_{t=0}^{\infty} \lambda_t (F_t - U_t) + \sum_{t=0}^{\infty} \lambda_t \sum_{i=1}^{I} \mu^i_t G^i_t.

The first order condition for maximising the Lagrangian with respect to U_t is:

0 = -\lambda_t + \lambda_{t-1} F_{t-1,6} + \lambda_{t-1} \sum_{i=1}^{I} \mu^i_{t-1} G^i_{t-1,6}.

Rearranging the formula yields:

\lambda_t = \lambda_{t-1} \Big( F_{t-1,6} + \sum_{i=1}^{I} \mu^i_{t-1} G^i_{t-1,6} \Big),

which implies:

\lambda_{t+1} = \lambda_t \Big( F_{t,6} + \sum_{i=1}^{I} \mu^i_t G^i_{t,6} \Big).

Dividing the equation by \lambda_t we obtain:

\lambda_{t+1} / \lambda_t = F_{t,6} + \sum_{i=1}^{I} \mu^i_t G^i_{t,6}.

Now let us set \lambda_t = 1. This is equivalent to reinterpreting \lambda_{t+1} as \lambda_{t+1} / \lambda_t in all equations. We have:

\lambda_{t+1} = F_{t,6} + \sum_{i=1}^{I} \mu^i_t G^i_{t,6}.   (11)

The first order condition for maximising the Lagrangian \mathcal{L} with respect to x_t gives:

0 = \lambda_t F_{t,2} + \lambda_{t+1} F_{t+1,1} + \lambda_t \sum_{i=1}^{I} \mu^i_t G^i_{t,2} + \lambda_{t+1} \sum_{i=1}^{I} \mu^i_{t+1} G^i_{t+1,1}.

After setting \lambda_t = 1 as before and rearranging, we arrive at:

0 = F_{t,2} + \sum_{i=1}^{I} \mu^i_t G^i_{t,2} + \lambda_{t+1} \Big( F_{t+1,1} + \sum_{i=1}^{I} \mu^i_{t+1} G^i_{t+1,1} \Big).   (12)

Equations (11) and (12) can be collected to yield an automated method for deriving first order conditions of a deterministic model. This method is presented as Algorithm 2.

Algorithm 2: Derivation of first order conditions, deterministic model

FOC w.r.t. U:
1: A <- \partial F_t / \partial U_{t+1}
2: for i = 1, ..., I do
3:   B^i <- \mu^i_t \, \partial G^i_t / \partial U_{t+1}
4: end for
5: B <- \sum_i B^i
6: C_0 <- A + B - \lambda_{t+1}

FOCs w.r.t. x:
7: for n = 1, ..., N do
8:   K_n <- \partial F_t / \partial x^n_t
9:   M_n <- \partial F_{t+1} / \partial x^n_t
10:  for i = 1, ..., I do
11:    L^i_n <- \mu^i_t \, \partial G^i_t / \partial x^n_t
12:    P^i_n <- \mu^i_{t+1} \, \partial G^i_{t+1} / \partial x^n_t
13:  end for
14:  L_n <- \sum_i L^i_n
15:  P_n <- \sum_i P^i_n
16:  C_n <- K_n + L_n + \lambda_{t+1} (M_n + P_n)
17: end for
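To make Algorithm 2 concrete, here is a sketch of it in SymPy for a textbook deterministic growth problem (the example problem and all names are ours, not the paper's implementation; time-indexed variables are plain symbols, and a substitution map plays the role of the lead operator):

```python
# Sketch of Algorithm 2 for U_t = log(c_t) + beta*U_{t+1}
# subject to G^1_t: k_t + c_t - k_{t-1}**alpha = 0 (full depreciation).
import sympy as sp

alpha, beta, lam = sp.symbols('alpha beta lambda_tp1', positive=True)
c_t, c_tp1, k_tm1, k_t, k_tp1 = sp.symbols('c_t c_tp1 k_tm1 k_t k_tp1',
                                           positive=True)
U_tp1, U_tp2, mu_t, mu_tp1 = sp.symbols('U_tp1 U_tp2 mu_t mu_tp1')

lead = {c_t: c_tp1, k_tm1: k_t, k_t: k_tp1, U_tp1: U_tp2}  # shift t -> t+1

F = sp.log(c_t) + beta * U_tp1                 # time aggregator F_t
G = [k_t + c_t - k_tm1**alpha]                 # constraints G^i_t
mus, mus_p = [mu_t], [mu_tp1]                  # multipliers at t and t+1
x = [c_t, k_t]                                 # decision variables

# FOC w.r.t. U: C_0 = dF_t/dU_{t+1} + sum_i mu^i_t dG^i_t/dU_{t+1} - lambda_{t+1}
C0 = (sp.diff(F, U_tp1)
      + sum(m * sp.diff(g, U_tp1) for m, g in zip(mus, G)) - lam)

# FOCs w.r.t. x, using the lead map to build F_{t+1} and G^i_{t+1}
Fp = F.subs(lead, simultaneous=True)
Gp = [g.subs(lead, simultaneous=True) for g in G]
C = []
for xn in x:
    Kn = sp.diff(F, xn)
    Ln = sum(m * sp.diff(g, xn) for m, g in zip(mus, G))
    Mn = sp.diff(Fp, xn)
    Pn = sum(mp * sp.diff(gp, xn) for mp, gp in zip(mus_p, Gp))
    C.append(Kn + Ln + lam * (Mn + Pn))

print(C0)    # setting C0 = 0 gives lambda_{t+1} = beta
print(C[0])  # setting C[0] = 0 gives mu_t = -1/c_t
print(C[1])  # with the above, the Euler equation 1/c_t = alpha*beta*k_t**(alpha-1)/c_tp1
```

Eliminating the multipliers from the three conditions yields the familiar Euler equation 1/c_t = \alpha \beta k_t^{\alpha-1} / c_{t+1}.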

7 Summary

We have derived the first order conditions for a general dynamic stochastic optimisation problem with an objective function given by a recursive forward-looking equation. We have also constructed an algorithm for deriving first order conditions automatically on a computer. This algorithm has already been implemented in the DSGE solution framework gEcon^1, which is being developed at the Chancellery of the Prime Minister of the Republic of Poland. It is hoped that the algorithm presented here will reduce the burden and risk associated with pen-and-paper derivations of first order conditions in DSGE models and allow researchers in the field to focus on the economic aspects of their models.

References

[1] Stéphane Adjemian, Houtan Bastani, Frédéric Karamé, Michel Juillard, Junior Maih, Ferhat Mihoubi, George Perendia, Marco Ratto, and Sébastien Villemot. Dynare: Reference Manual Version 4. Dynare Working Papers 1, CEPREMAP, 2013.

[2] S.F. LeRoy, J. Werner, and S.A. Ross (Foreword). Principles of Financial Economics. Cambridge University Press, 1997.

[3] L. Ljungqvist and T.J. Sargent. Recursive Macroeconomic Theory. MIT Press, 2004.

^1 http://gecon.r-forge.r-project.org.