Lecture 1: Dynamic Programming


1 Lecture 1: Dynamic Programming. Fatih Guvenen. November 2, 2016.

2 Goal

Solve

V(k, z) = max_{c, k'} { u(c) + β E[V(k', z') | z] }
s.t. c + k' = (1 + r)k + z
     z' = ρz + ε'

Questions:
1. Does a solution exist?
2. Is it unique?
3. If the answers to (1) and (2) are yes: how do we find this solution?

3 Contraction Mapping Theorem

Definition (Contraction Mapping). Let (S, d) be a metric space and T : S → S be a mapping of S into itself. T is a contraction mapping with modulus β if for some β ∈ (0, 1) we have

d(Tv₁, Tv₂) ≤ β d(v₁, v₂) for all v₁, v₂ ∈ S.

Contraction Mapping Theorem: Let (S, d) be a complete metric space and suppose that T : S → S is a contraction mapping. Then T has a unique fixed point v* ∈ S such that

Tv* = v* = lim_{N→∞} T^N v₀ for all v₀ ∈ S.

The beauty of the CMT is that it is a constructive theorem: it not only tells us the existence and uniqueness of v*, it also shows us how to find it!

4 Qualitative Properties of v*

We cannot apply the CMT in certain cases, because the particular set we are interested in is not a complete metric space. The following corollary comes in handy in those cases.

Corollary: Let (S, d) be a complete metric space and T : S → S be a contraction mapping with Tv* = v*.
a. If S′ is a closed subset of S and T(S′) ⊆ S′, then v* ∈ S′.
b. If, in addition, T(S′) ⊆ S″ ⊆ S′, then v* ∈ S″.

Example: S″ = {continuous, bounded, strictly concave functions} is not a complete metric space. S′ = {continuous, bounded, weakly concave functions} is. So we need to be able to establish that T maps elements of S′ into S″.

5 A Prototype Problem

V(k, z) = max_{c, k'} [ u(c) + β ∫ V(k', z') f(z' | z) dz' ]
s.t. c + k' = (1 + r)k + z
     z' = ρz + ε'

The CMT tells us to start with a guess V₀ and then repeatedly solve the problem on the RHS. But:
- How do we evaluate the conditional expectation? Several non-trivial issues here are easy to underestimate.
- How do we do constrained optimization (especially in multiple dimensions)?

This course will focus on methods that are especially suitable for incomplete-markets/heterogeneous-agent models.

6 Algorithm 1: Standard Value Function Iteration
1. Set n = 0. Choose an initial guess V⁰ ∈ S.
2. Obtain V^{n+1} by applying the mapping V^{n+1} = TV^n, which entails maximizing the right-hand side of the Bellman equation.
3. Stop if the convergence criterion is satisfied: ‖V^{n+1} − V^n‖ < toler. Otherwise, increase n and return to step 2.
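To make Algorithm 1 concrete, here is a minimal sketch in Python for the deterministic growth model with log utility and full depreciation (the analytical example on the next slides). The grid bounds, parameter values, and tolerance are illustrative assumptions, not part of the slides.

```python
import numpy as np

# Illustrative parameters: V(k) = max_{k'} log(A k^alpha - k') + beta V(k')
alpha, beta, A = 0.36, 0.96, 1.0
nk = 300
kgrid = np.linspace(0.05, 0.5, nk)          # capital grid (covers the steady state)

# Period utility on the (k, k') grid; infeasible choices get -inf
c = A * kgrid[:, None] ** alpha - kgrid[None, :]
u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

V = np.zeros(nk)                            # step 1: initial guess V^0
toler = 1e-8
while True:                                 # step 2: V^{n+1} = T V^n
    TV = np.max(u + beta * V[None, :], axis=1)
    if np.max(np.abs(TV - V)) < toler:      # step 3: stopping rule
        V = TV
        break
    V = TV
policy = kgrid[np.argmax(u + beta * V[None, :], axis=1)]   # decision rule k' = g(k)
```

On a discrete grid the maximization in step 2 reduces to a row-wise max, which is why the whole operator is two lines of array code.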

7 Simple Analytical Example: The Neoclassical Growth Model

Consider the special case with log utility, Cobb-Douglas production, and full depreciation:

V(k) = max_{c, k'} log c + βV(k')
s.t. c = Ak^α − k'

Rewrite the Bellman equation as:

V(k) = max_{k'} log(Ak^α − k') + βV(k')

Our goal is to find V(k) and a decision rule g such that k' = g(k).

8 I. Backward Induction (Brute Force)

If t = T < ∞, in the last period we would have V₀(k) ≡ 0 for all k. Therefore:

V₁(k) = max_{k'} [ log(Ak^α − k') + βV₀(k') ] = max_{k'} log(Ak^α − k')
⇒ k' = 0 ⇒ V₁(k) = log A + α log k

Substitute V₁ into the RHS of V₂:

V₂ = max_{k'} log(Ak^α − k') + β(log A + α log k')
FOC: 1/(Ak^α − k') = αβ/k' ⇒ k' = αβAk^α/(1 + αβ)

Substitute k' to obtain V₂. We can keep iterating to find the solution.
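A quick way to watch the backward induction converge: the coefficient on log k follows the recursion b₀ = 0, b_{n+1} = α(1 + βb_n), which the FOC above delivers at each step. A sketch, reusing alpha and beta from the earlier snippet:

```python
# Coefficient recursion from backward induction: b_0 = 0, b_{n+1} = alpha*(1 + beta*b_n).
# It converges to the fixed point b* = alpha / (1 - alpha*beta).
b = 0.0
for n in range(200):
    b = alpha * (1 + beta * b)
print(b, alpha / (1 - alpha * beta))        # the two numbers agree
```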

9 II. Guess and Verify (Value Function)

The first method suggests a more direct approach. Note that V₂ is also of the form a + b log k, as was V₁. Conjecture that the solution is V(k) = a + b log k, where a and b are coefficients that need to be determined:

a + b log k = max_{k'} log(Ak^α − k') + β(a + b log k')

FOC: 1/(Ak^α − k') = βb/k' ⇒ k' = [βb/(1 + βb)] Ak^α

10 II. Guess and Verify (Value Function)

Let LHS = a + b log k. Plug the expression for k' into the RHS:

RHS = log( Ak^α − [βb/(1 + βb)] Ak^α ) + βa + βb log( [βb/(1 + βb)] Ak^α )
    = (1 + βb) log A + log(1/(1 + βb)) + βa + βb log(βb/(1 + βb)) + α(1 + βb) log k

Imposing the condition that LHS ≡ RHS for all k, we find a and b:

a = [1/(1 − β)] [ (1/(1 − αβ)) log A + log(1 − αβ) + (αβ/(1 − αβ)) log(αβ) ]
b = α/(1 − αβ)

We have solved the model!
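As a sanity check, the closed form can be compared against the output of the VFI sketch above; the gaps shrink as the grid is refined. Variable names (V, policy, kgrid, alpha, beta, A) carry over from that sketch.

```python
# Closed-form value function and policy: V(k) = a + b log k, k' = alpha*beta*A*k^alpha
b = alpha / (1 - alpha * beta)
a = (np.log(A) / (1 - alpha * beta) + np.log(1 - alpha * beta)
     + alpha * beta / (1 - alpha * beta) * np.log(alpha * beta)) / (1 - beta)
print(np.max(np.abs(V - (a + b * np.log(kgrid)))))                 # value error
print(np.max(np.abs(policy - alpha * beta * A * kgrid ** alpha)))  # policy error
```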

11 Guess and Verify as a Numerical Tool

Although this was a very special example, the same logic can be used for numerical analysis. As long as the true value function is well-behaved (smooth, etc.), we can choose a sufficiently flexible functional form that has a finite (ideally small) number of parameters. Then we can apply the same logic as above and solve for the unknown coefficients, which gives us the complete solution. Many solution methods rely on versions of this general idea (perturbation methods, collocation methods, parameterized expectations).

12 III. Guess and Verify (Policy Functions), a.k.a. the Euler Equation Approach

Let the policy rule for savings be k' = g(k). The Euler equation is:

1/(Ak^α − g(k)) − βαA g(k)^{α−1} / ( A g(k)^α − g(g(k)) ) = 0 for all k,

which is a functional equation in g(k). Guess g(k) = sAk^α and substitute:

1/((1 − s)Ak^α) = βαA (sAk^α)^{α−1} / ( A(sAk^α)^α − sA(sAk^α)^α )

As can be seen, k cancels out, and we get s = αβ. By using a very flexible choice of g(·), this method too can be used for solving very general models.

13 Back to VFI

VFI can be very slow when β ≈ 1. Three ways to accelerate:
1. (Howard's) policy iteration algorithm (together with its modified version)
2. MacQueen-Porteus (MQP) error bounds
3. The Endogenous Grid Method (EGM)

In general, basic VFI should never be used without at least one of these add-ons. EGM is your best bet when it is applicable, but in certain cases it is not. In those cases, a combination of Howard's algorithm and MQP can be very useful.

14 Howard's Policy Iteration

Consider the neoclassical growth model:

V(k, z) = max_{c, k'} { c^{1−σ}/(1−σ) + β E[V(k', z') | z] }   (P1)
s.t. c + k' = e^z k^α + (1 − δ)k
     z' = ρz + ε',  k' ≥ k̲

In stage n of the VFI algorithm, we first maximize the RHS and solve for the policy rule:

s_n(k_i, z_j) = argmax_s { (e^{z_j} k_i^α + (1 − δ)k_i − s)^{1−σ}/(1−σ) + β E[V^n(s, z') | z_j] }.   (1)

Second:

V^{n+1} = T_{s_n} V^n.   (2)

15 Policy Iteration

The maximization step can be very time-consuming. So it seems like a waste that we use the new policy for only one period in updating to V^{n+1}.
- A simple but key insight is that (2) is also a contraction with modulus β. Therefore, if we were to repeatedly apply T_{s_n}, it would also converge to a fixed point itself at rate β.
- Of course, this fixed point would not be the solution of the original Bellman equation we would like to solve. But T_{s_n} is an operator that is much cheaper to apply (no maximization), so we may want to apply it more than once.

17 Two Properties of Howard's Algorithm

Puterman and Brumelle (1979) show that policy iteration is equivalent to the Newton-Kantorovich method applied to dynamic programming. Thus, just like Newton's method, it has two properties:
1. It is guaranteed to converge to the true solution when the initial point, V⁰, is in the domain of attraction of V*, and
2. when (1) is satisfied, it converges at a quadratic rate in the iteration index n.

Bad news: no more global convergence as under VFI (unless the state space is discrete). Good news: potentially very fast convergence.

18 Algorithm 2: VFI with Policy Iteration
1. Set n = 0. Choose an initial guess v⁰ ∈ S.
2. Obtain s_n as in (1) and take the updated value function to be v^{n+1} = lim_{m→∞} T^m_{s_n} v^n, which is the (fixed-point) value function resulting from using the policy s_n forever.
3. Stop if the convergence criterion is satisfied: ‖v^{n+1} − v^n‖ < toler. Otherwise, increase n and return to step 2.
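A sketch of step 2 for the deterministic example above: on a discrete grid, the fixed point of T_{s_n} can be computed exactly by solving the linear system v = u_s + βQ_s v, where Q_s is the transition matrix that moves each k to s_n(k). The arrays u, nk, beta, toler reuse the VFI sketch; the sparse solve is one possible implementation choice.

```python
from scipy.sparse import identity, csr_matrix
from scipy.sparse.linalg import spsolve

V = np.zeros(nk)
while True:
    s_idx = np.argmax(u + beta * V[None, :], axis=1)   # improvement step, as in (1)
    u_s = u[np.arange(nk), s_idx]                      # period utility under policy s_n
    Q = csr_matrix((np.ones(nk), (np.arange(nk), s_idx)), shape=(nk, nk))
    V_new = spsolve(identity(nk, format="csr") - beta * Q, u_s)  # exact fixed point of T_s
    if np.max(np.abs(V_new - V)) < toler:
        break
    V = V_new
```

Each pass does one (expensive) maximization and one linear solve, in place of the hundreds of Bellman updates plain VFI needs when β is close to 1.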

19 Modified Policy Iteration

Caution: quadratic convergence is a bit misleading: this is the rate in the iteration index n, and in contrast to VFI, Howard's algorithm takes a lot of time to evaluate step 2. So overall it may not be much faster when the state space is large. Second, the basin of attraction can be small. Both issues can be fixed by slightly modifying the algorithm.

20 Error Bounds: Background

In iterative numerical algorithms, we need a stopping rule. In dynamic programming, we want to know how far we are from the true solution at each iteration. The contraction mapping theorem can be used to show:

‖v* − v_k‖_∞ ≤ [1/(1 − β)] ‖v_{k+1} − v_k‖_∞.

So if we want to stop when we are ε away from the true solution, stop when

‖v_{k+1} − v_k‖_∞ < ε(1 − β).
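In code, this stopping rule is one line. A sketch using the TV and V arrays from the VFI example; eps is the desired accuracy (an illustrative value):

```python
# Stop once the contraction bound guarantees ||v* - v_k||_inf < eps
eps = 1e-4
converged = np.max(np.abs(TV - V)) < eps * (1 - beta)
```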

21 Algorithm 3: VFI with Modified Policy Improvement
Modify step 2 of Howard's algorithm: obtain s_n as in (1) and take the updated value function to be v^{n+1} = T^m_{s_n} v^n, which entails m applications of Howard's mapping to update to v^{n+1}.
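A sketch of the modified algorithm for the deterministic example: instead of the exact linear solve, apply T_{s_n} a fixed m times after each maximization. Here m = 50 is an illustrative tuning parameter; arrays again reuse the VFI sketch.

```python
m = 50                                      # number of cheap policy-evaluation steps
V = np.zeros(nk)
while True:
    obj = u + beta * V[None, :]
    s_idx = np.argmax(obj, axis=1)          # maximization step, as in (1)
    TV = obj[np.arange(nk), s_idx]
    u_s = u[np.arange(nk), s_idx]
    for _ in range(m):                      # m applications of T_{s_n}: no max inside
        TV = u_s + beta * TV[s_idx]
    if np.max(np.abs(TV - V)) < toler:
        break
    V = TV
```

Setting m = 0 recovers plain VFI and m → ∞ recovers full Howard, so m trades off the cost of maximization against the cost of evaluation.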

22 Error Bounds: Background

Remarks:
1. This bound is for the worst-case scenario (sup-norm). If v* varies over a wide range, the bound may be misleading. Consider u(c) = c^{1−σ}/(1−σ): typically v* will cover an enormous range of values, so the bound might be too pessimistic.
2. ASIDE: In general, it may be hard to judge what an ε deviation in v means in economic terms. Another approach is to define the stopping rule in the policy space: it is easier to judge what it means to consume x% less than optimal. Typically policies converge faster than values, so this may allow stopping sooner.

23 MacQueen-Porteus Bounds

Consider a different formulation of a dynamic programming problem:

V(x_i) = max_{y ∈ Γ(x_i)} [ U(x_i, y) + β Σ_{j=1}^{J} π_{ij}(y) V(x_j) ].   (3)

The state space is discrete, but choices are continuous. This allows for simple modeling of interesting problems, and it is a very common formulation in other fields using dynamic programming (see, e.g., the books by Bertsekas, with Shreve, or Ozdaglar, etc.).

24 MacQueen-Porteus Bounds

Theorem (MacQueen-Porteus bounds). Consider

V(x_i) = max_{y ∈ Γ(x_i)} [ U(x_i, y) + β Σ_{j=1}^{J} π_{ij}(y) V(x_j) ],   (4)

and define

c̲_n = [β/(1 − β)] min_x [V_n(x) − V_{n−1}(x)],  c̄_n = [β/(1 − β)] max_x [V_n(x) − V_{n−1}(x)].   (5)

Then, for all x ∈ X, we have:

T^n V₀(x) + c̲_n ≤ V*(x) ≤ T^n V₀(x) + c̄_n.   (6)

Furthermore, with each iteration, the two bounds approach the true solution monotonically.

25 MQP Bounds: Comments

MQP bounds can be quite tight. Example: suppose V_n(x) − V_{n−1}(x) = Δ for all x, with Δ = 100 (a large number). The usual bound implies:

‖V* − V_n‖_∞ ≤ [1/(1 − β)] ‖V_n − V_{n−1}‖_∞ = Δ/(1 − β),

so we would keep iterating. MQP implies c̲_n = c̄_n = βΔ/(1 − β), which then implies

V*(x) − T^n V₀(x) = βΔ/(1 − β).

We find V*(x) = V_n(x) + βΔ/(1 − β), in one step! MQP delivers both a lower and an upper bound on the signed difference V* − V_n, not just an upper bound on its sup-norm.

26 Algorithm 4: VFI with MacQueen-Porteus Error Bounds
Step 2′: Stop when c̄_n − c̲_n < toler. Then take the final estimate of V* to be either the median,

Ṽ = T^n V₀ + (c̲_n + c̄_n)/2,

or the mean (i.e., the average error bound across states):

V̂ = T^n V₀ + [β/(n(1 − β))] Σ_{i=1}^{n} [ T^n V₀(x_i) − T^{n−1} V₀(x_i) ].
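A sketch of Algorithm 4 for the deterministic example (which the discrete-state formulation in (3) nests), using the median estimate and the arrays from the VFI sketch:

```python
V = np.zeros(nk)
while True:
    TV = np.max(u + beta * V[None, :], axis=1)
    d = TV - V                               # V_n - V_{n-1}, state by state
    c_lo = beta / (1 - beta) * d.min()       # lower-bound correction in (5)
    c_hi = beta / (1 - beta) * d.max()       # upper-bound correction in (5)
    if c_hi - c_lo < toler:                  # bounds bracket V* tightly
        break
    V = TV
V_final = TV + 0.5 * (c_lo + c_hi)           # the "median" estimate from Algorithm 4
```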

27 MQP: Convergence Rate

Bertsekas (1987) derives the convergence rate of the MQP bounds algorithm. It is proportional to the subdominant eigenvalue of π_{ij}(y*) (the transition matrix evaluated at the optimal policy). VFI's rate is instead proportional to the dominant eigenvalue, which is always 1; multiplied by β, this gives VFI's convergence rate of β. The subdominant (second-largest) eigenvalue λ₂ is sometimes 1 and sometimes not:
- AR(1) process, discretized: λ₂ = ρ (the persistence parameter).
- More than one ergodic set: λ₂ = 1.

When persistence is low, this can lead to substantial improvements in speed.
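To see the eigenvalue claim concretely, take a symmetric two-state chain: its eigenvalues are 1 and p + q − 1, the chain's persistence. A sketch with illustrative numbers:

```python
P = np.array([[0.9, 0.1],                   # two-state Markov chain, persistence 0.8
              [0.1, 0.9]])
print(np.sort(np.linalg.eigvals(P).real))   # [0.8, 1.0]: subdominant eigenvalue = persistence
```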

28 Endogenous Grid Method

29 Endogenous Grid Method

Under standard VFI, the FOC is

c^{−σ} = β E[V_k(k', z') | z_j].

Substituting out consumption using the budget constraint, this can be rewritten as

( z_j k_i^α + (1 − δ)k_i − k' )^{−σ} = β E[V_k(k', z') | z_j].   (7)

In VFI, we solve (7) for k' at each grid point today, (k_i, z_j). This is slow for three reasons:
- (7) is a non-linear equation in k'.
- For every trial value of k', we need to repeatedly evaluate the conditional expectation (since k' appears inside the expectation).
- V(k_i, z_j) is stored at grid points defined over k, so for every trial value of k', we need to interpolate to obtain off-grid values V(k', z'_j) for each z'_j.

30 EGM

View the problem differently:

V(k, z_j) = max_{c, k'_i} { c^{1−σ}/(1−σ) + β E[V(k'_i, z')] }   (P3)
s.t. c + k'_i = z_j k^α + (1 − δ)k
     ln z' = ρ ln z_j + ε',  k' ≥ k̲

Now the same FOC as before:

( z_j k^α + (1 − δ)k − k'_i )^{−σ} = β E[V_k(k'_i, z') | z_j],   (8)

but solve for k as a function of k'_i and z_j:

z_j k^α + (1 − δ)k = ( β E[V_k(k'_i, z') | z_j] )^{−1/σ} + k'_i.

31 EGM

Define Y ≡ z k^α + (1 − δ)k and rewrite the Bellman equation as:

V(Y, z) = max_{k'} { (Y − k')^{1−σ}/(1−σ) + β E[V(Y', z') | z_j] }   (9)
s.t. ln z' = ρ ln z_j + ε'.

The key observation is that Y' is only a function of k'_i and z', so we can write the conditional expectation on the right-hand side as:

V̂(k'_i, z_j) ≡ β E[ V(Y'(k'_i, z'), z') | z_j ].

32 EGM

V(Y, z) = max_{k'} { (Y − k')^{1−σ}/(1−σ) + V̂(k'_i, z_j) }

Now the FOC of this new problem becomes:

c*(k'_i, z_j)^{−σ} = V̂_{k'}(k'_i, z_j).   (10)

Having obtained c*(k'_i, z_j) from this expression, we use the resource constraint to compute today's end-of-period resources,

Y*(k'_i, z_j) = c*(k'_i, z_j) + k'_i,

as well as

V( Y*(k'_i, z_j), z_j ) = c*(k'_i, z_j)^{1−σ}/(1−σ) + V̂(k'_i, z_j).

33 EGM: The Algorithm

0. Set n = 0. Construct a grid for tomorrow's capital and today's shock, (k'_i, z_j). Choose an initial guess V̂⁰(k'_i, z_j).
1. For all i, j, obtain c*(k'_i, z_j) = [ V̂ⁿ_{k'}(k'_i, z_j) ]^{−1/σ}.
2. Obtain today's end-of-period resources as a function of tomorrow's capital and today's shock, Y*(k'_i, z_j) = c*(k'_i, z_j) + k'_i, and today's updated value function,

V^{n+1}( Y*(k'_i, z_j), z_j ) = c*(k'_i, z_j)^{1−σ}/(1−σ) + V̂ⁿ(k'_i, z_j),

by plugging the consumption decision into the RHS.

34 EGM: The Algorithm (Cont'd)

3. Interpolate V^{n+1} to obtain its values on a grid of tomorrow's end-of-period resources: Y' = z'(k'_i)^α + (1 − δ)k'_i.
4. Obtain V̂^{n+1}(k'_i, z_j) = β E[ V^{n+1}( Y'(k'_i, z'), z' ) | z_j ].
5. Stop if the convergence criterion is satisfied, and obtain beginning-of-period capital, k, by solving the nonlinear equation Y*_n(i, j) = z_j k^α + (1 − δ)k for all i, j. Otherwise, go to step 1.
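Here is a minimal sketch of one pass through steps 1-4 (the core of the iteration) for the model in (P3). The grids, the Markov chain P, the use of finite differences for V̂_{k'}, and linear interpolation in step 3 are all illustrative implementation choices, not prescriptions from the slides; the sketch also assumes σ ≠ 1 and a monotone endogenous grid (so np.interp applies).

```python
import numpy as np

def egm_iteration(Vhat, kp, zgrid, P, alpha, beta, delta, sigma):
    """One EGM update. Vhat[i, j] approximates beta * E[V(Y'(kp_i, z'), z') | z_j].

    kp: grid for tomorrow's capital; zgrid: shock levels; P: transition matrix.
    """
    nz = len(zgrid)
    # Steps 1-2: consumption from FOC (10), endogenous resource grid, updated V
    dVhat = np.gradient(Vhat, kp, axis=0)           # derivative w.r.t. k' (finite diff.)
    c = dVhat ** (-1.0 / sigma)                     # c* = (Vhat_{k'})^{-1/sigma}
    Ystar = c + kp[:, None]                         # Y*(kp_i, z_j) = c* + kp_i
    Vnew = c ** (1 - sigma) / (1 - sigma) + Vhat    # V^{n+1} on the endogenous Y grid
    # Step 3: tomorrow's resources and interpolation of V^{n+1} onto them
    Yp = zgrid[None, :] * kp[:, None] ** alpha + (1 - delta) * kp[:, None]
    Vtom = np.column_stack([np.interp(Yp[:, j], Ystar[:, j], Vnew[:, j])
                            for j in range(nz)])    # V^{n+1}(Y'(kp_i, z'_j), z'_j)
    # Step 4: new discounted conditional expectation, given today's z_j
    Vhat_new = beta * Vtom @ P.T
    return Vhat_new, Ystar, c
```

Iterating this map to convergence and then running step 5 (a one-time nonlinear solve for k, outside the loop) recovers the policy on a beginning-of-period grid. Note that no root-finding and no re-computation of the expectation happen inside the iteration, which is exactly what makes EGM fast relative to the VFI approach on the previous slides.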

35 Is This Worth the Trouble?

Table: Time for convergence (seconds); dim k = 300, σ = 3.0. Methods compared: CRRA (plain VFI); CRRA + policy iteration; CRRA + PI + MQP bounds; CRRA + PI + MQP with a 100-point grid; endogenous grid (grid curvature = 2).

36 Another Benchmark

Table: MacQueen-Porteus bounds and policy iteration. Columns: MQP bounds off/on (no/yes), crossed with N_max, for σ = 1 and σ = 5.
