BU Macro Fall 2008, Lecture 4


Dynamic Programming
BU Macro 2008, Lecture 4

Outline
1. Certainty optimization problem, used to illustrate:
a. Restrictions on exogenous variables
b. Value function
c. Policy function
d. The Bellman equation and an associated Lagrangian
e. The envelope theorem
f. The Euler equation

Outline, cont'd
2. Consumption over time
3. Adding uncertainty
4. Consumption under uncertainty: setting up the problem

1. A certainty dynamic problem and the DP approach

Maximize $\sum_{t=0}^{\infty} \beta^t u(k_t, x_t, c_t)$

subject to $k_{t+1} - k_t = g(k_t, x_t, c_t)$, with $x_t = x(\varsigma_t)$ and $\varsigma_{t+1} = m(\varsigma_t)$

Notable (relative to Lecture 1)

Immediate jump to the infinite horizon problem; not essential, but it matches the presentation in LS chapter 2 (note differences in notation, though). The exogenous (x) variable(s) are now functions of a vector of exogenous state variables, which evolve according to a difference equation (perhaps nonlinear, perhaps in a vector). The latter is a key part of the vision of Richard Bellman, the inventor of DP: his experience in other areas (such as difference equations) led him to think in terms of describing dynamics in terms of state variables.

Recursive policies

Suppose controls are functions of states, $c_t = \pi(k_t, \varsigma_t)$, so that

$k_{t+1} = k_t + g(k_t, x_t, c_t) = k_t + g(k_t, x(\varsigma_t), \pi(k_t, \varsigma_t))$

Then the state vector evolves according to a recursion

$s_{t+1} = M(s_t) = \begin{bmatrix} k_{t+1} \\ \varsigma_{t+1} \end{bmatrix} = \begin{bmatrix} k_t + g(k_t, x(\varsigma_t), \pi(k_t, \varsigma_t)) \\ m(\varsigma_t) \end{bmatrix}$

that can be used to generate future states from given initial conditions.
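To make the recursion concrete, here is a minimal Python sketch of generating future states from given initial conditions under a recursive policy. The functional forms for g, x, m, and the policy pi are assumptions chosen purely for illustration, not the lecture's model:

import numpy as np

# State recursion s_{t+1} = M(s_t) under a fixed recursive policy.
# All functional forms below are illustrative assumptions.

def x(sigma):                 # exogenous variable as a function of the exogenous state
    return np.exp(sigma)

def m(sigma):                 # exogenous state recursion (certainty case)
    return 0.9 * sigma

def pi(k, sigma):             # some recursive policy (not necessarily optimal)
    return 0.5 * x(sigma) * k ** 0.3

def g(k, xv, c):              # accumulation: k_{t+1} - k_t = g(k, x, c)
    return xv * k ** 0.3 - 0.1 * k - c

def M(s):
    k, sigma = s
    return (k + g(k, x(sigma), pi(k, sigma)), m(sigma))

s = (1.0, 0.1)                # initial conditions (k_0, sigma_0)
for t in range(5):
    print(t, s)
    s = M(s)                  # each future state follows from the current one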

Evaluating the objective

Under any recursive policy, we can see that all of the terms which enter the objective are a function of the initial state ($s_0$), so that the objective is also a function of the initial state:

$\sum_{t=0}^{\infty} \beta^t u(k_t, x_t, c_t) = \sum_{t=0}^{\infty} \beta^t u(k_t, x(\varsigma_t), \pi(k_t, \varsigma_t))$

Notice the switch

Given that there is a function which describes the policy, the objective is now a function of the state vector. We have made the change: we are now thinking in terms of functions rather than sequences. But we haven't optimized yet! We could be calculating the objective with a very bad policy.

Bellman's core idea

Subdivide complicated intertemporal problems into many two-period problems, in which the trade-off is between the present (now) and later. Specifically, the idea was to find the optimal control and state now, taking as given that later behavior would itself be optimal.

The Principle of Optimality

"An optimal policy has the property that, whatever the state and optimal first decision may be, the remaining decisions constitute an optimal policy with respect to the state originating from the first decisions."
Bellman (1957, pg. 83)

Following the principle

The natural maximization problem is

$\max_{c_t, k_{t+1}} \{ u(c_t, k_t, x(\varsigma_t)) + \beta V(k_{t+1}, \varsigma_{t+1}) \}$
s.t. $k_{t+1} = k_t + g(k_t, x_t, c_t)$, $\varsigma_{t+1} = m(\varsigma_t)$

where the right-hand side is the current momentary objective (u) plus the consequences (V) for the discounted objective of behaving optimally in the future.

Noting that time does not enter in an essential way

We sometimes write this as (with ' meaning next period)

$\max_{c, k'} \{ u(c, k, x(\varsigma)) + \beta V(k', \varsigma') \}$
s.t. $k' = k + g(k, x(\varsigma), c)$, $\varsigma' = m(\varsigma)$

So then the Bellman equation is written as

$V(k, \varsigma) = \max_{c, k'} \{ u(c, k, x(\varsigma)) + \beta V(k', \varsigma') \}$
s.t. $k' = k + g(k, x(\varsigma), c)$, $\varsigma' = m(\varsigma)$

After the maximization

We know the optimal policy (which we will call π as above) and can calculate the associated value, so that there is now a Bellman equation of the form

$V(k, \varsigma) = u(\pi(k, \varsigma), k, x(\varsigma)) + \beta V(k + g(k, x(\varsigma), \pi(k, \varsigma)), \varsigma')$

A functional equation is defined, colloquially, as an equation whose unknowns are functions. In our context, the unknowns are the policy and value functions.

How to do the optimization?

You are free to choose, depending on the application. Sometimes we take the Euler route, substituting in the constraint and maximizing directly over k'. Other times we want to use a Lagrange approach, putting a multiplier λ on the constraint governing k'.

The associated Lagrangian

Takes the form

$L = \{ u(c, k, x(\varsigma)) + \beta V(k', \varsigma') \} + \lambda[k + g(k, x(\varsigma), c) - k']$

The optimal policy, state evolution, and related multiplier are obtained by maximizing with respect to c, k' and minimizing with respect to λ. Hence these are all functions of the state variables.

For an optimum (off corners)

We must have

$\frac{\partial L}{\partial c} = \frac{\partial u(c, k, x(\varsigma))}{\partial c} + \lambda \frac{\partial g(k, x(\varsigma), c)}{\partial c} = 0$

$\frac{\partial L}{\partial k'} = -\lambda + \beta \frac{\partial V(k', \varsigma')}{\partial k'} = 0$

$\frac{\partial L}{\partial \lambda} = [k + g(k, x(\varsigma), c) - k'] = 0$

And, at the values which solve these equations, V = L.

The envelope theorem (Benveniste-Scheinkman)

Question: what is the effect of an infinitesimal change in k on V?

Answer: It is given by

$\frac{\partial V}{\partial k} = \frac{\partial u(c, k, x(\varsigma))}{\partial k} + \lambda\left[1 + \frac{\partial g(k, x(\varsigma), c)}{\partial k}\right]$

when we evaluate at the optimal policy and the associated multiplier. As in LS, this may also be written in a form which does not involve the multiplier,

$\frac{\partial V}{\partial k} = \frac{\partial u(c, k, x(\varsigma))}{\partial k} + \beta \frac{\partial V(k', \varsigma')}{\partial k'}\left[1 + \frac{\partial g(k, x(\varsigma), c)}{\partial k}\right]$
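Combining the two FOCs with the envelope theorem dated one period forward eliminates both λ and the unknown V, yielding the Euler equation listed in the outline. A sketch of this reconstruction, in the slide's notation (subscripts denote partial derivatives, primes denote next-period objects):

% From the c-FOC:            \lambda  = -u_c / g_c
% From the k'-FOC:           \lambda  = \beta V_{k'}(k', \varsigma')
% ET one period forward:     V_{k'}(k', \varsigma') = u'_k + \lambda' [1 + g'_k]
% Substituting out \lambda and \lambda' gives the Euler equation:
-\frac{u_c}{g_c} = \beta \left( u'_k - \frac{u'_c}{g'_c}\left[1 + g'_k\right] \right)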

Outline of proof

Nontrivial to show differentiability of V. But if we have this (as we will frequently assume), then

$\frac{\partial V}{\partial k} = \frac{\partial L}{\partial k} = \left\{ \frac{\partial u}{\partial c}\frac{\partial c}{\partial k} + \frac{\partial u}{\partial k} \right\} + \beta \frac{\partial V(k', \varsigma')}{\partial k'}\frac{\partial k'}{\partial k} + \frac{\partial \lambda}{\partial k}[k + g(k, x(\varsigma), c) - k'] + \lambda\left[1 + \frac{\partial g(k, x(\varsigma), c)}{\partial k}\right] + \lambda\left[\frac{\partial g(k, x(\varsigma), c)}{\partial c}\frac{\partial c}{\partial k} - \frac{\partial k'}{\partial k}\right]$

While this looks ugly, all terms involving behavior are multiplied by coefficients that are set to zero by the FOCs.

Iterating on the Bellman Equation

Under specific conditions on the functions u and g, the Bellman equation has a unique, strictly concave (in k) solution. Under these conditions, it can be calculated by considering the limit of

$V_{j+1}(k, \varsigma) = \max_{c, k'} \{ u(k, x(\varsigma), c) + \beta V_j(k', \varsigma') \}$
s.t. $k' = k + g(k, x(\varsigma), c)$

These iterations are interpretable as calculating the value functions for a problem with successively longer horizons.
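A minimal sketch of these iterations on a grid. To keep it self-contained we assume a special case (log utility, $k' = k^\alpha - c$, no exogenous state) that has the known closed-form policy $k' = \alpha\beta k^\alpha$ to check against; this is our illustrative choice, not the lecture's general setup:

import numpy as np

# Value function iteration: V_{j+1} = T V_j, starting from V_0 = 0
# (the zero-horizon value), so V_j is the j-period-horizon value function.
alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)                      # grid for k and k'
V = np.zeros(len(grid))

for j in range(1000):
    C = grid[:, None] ** alpha - grid[None, :]          # c implied by (k, k')
    payoff = np.where(C > 0, np.log(np.maximum(C, 1e-12)), -np.inf)
    TV = np.max(payoff + beta * V[None, :], axis=1)     # maximize over k'
    if np.max(np.abs(TV - V)) < 1e-8:                   # sup-norm convergence
        break
    V = TV

policy = grid[np.argmax(payoff + beta * V[None, :], axis=1)]
# Compare with the known closed form k' = alpha*beta*k**alpha;
# the remaining gap reflects the grid resolution.
print(np.max(np.abs(policy - alpha * beta * grid ** alpha)))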

2. Optimal consumption over time

Simple case (no k, x in u): $\sum_{t=0}^{\infty} \beta^t u(c_t)$

Accumulation of assets: $a_{t+1} = R[a_t + y_t - c_t]$
Income: $y_{t+1} - \bar{y} = \rho(y_t - \bar{y})$

And $\beta R = 1$ (level consumption).

Bellman Equation

$V(a, y) = \max_{c, a'} \{ u(c) + \beta V(a', y') \}$
s.t. $a' = R[a + y - c]$, $y' - \bar{y} = \rho(y - \bar{y})$

Taking an Euler route

$V(a, y) = \max_{a'} \left\{ u\left(a + y - \frac{1}{R}a'\right) + \beta V(a', y') \right\}$
s.t. $y' - \bar{y} = \rho(y - \bar{y})$

EE: $0 = -\frac{1}{R} u_c\left(a + y - \frac{1}{R}a'\right) + \beta \frac{\partial V(a', y')}{\partial a'}$

ET: $\frac{\partial V(a, y)}{\partial a} = u_c\left(a + y - \frac{1}{R}a'\right)$

Learning about consumption

Update the ET one period and insert in the EE (using $\beta R = 1$) to get

$u_c\left(a + y - \frac{1}{R}a'\right) = u_c\left(a' + y' - \frac{1}{R}a''\right) \Rightarrow c = c'$

Suppose there is a linear policy function

$c = \kappa + \theta_y(y - \bar{y}) + \theta_a a$

Then

$c' = \kappa + \theta_y(y' - \bar{y}) + \theta_a a' = \kappa + \theta_y \rho(y - \bar{y}) + \theta_a R[a + y - c] = \kappa + \theta_y \rho(y - \bar{y}) + \theta_a R[a + y - \kappa - \theta_y(y - \bar{y}) - \theta_a a]$

Requiring c = c', we have equations that restrict the undetermined coefficients

$\kappa + \theta_y(y - \bar{y}) + \theta_a a = \kappa + \theta_y \rho(y - \bar{y}) + \theta_a R[a + (y - \bar{y}) + \bar{y} - \kappa - \theta_y(y - \bar{y}) - \theta_a a]$

Constants: $\kappa = \kappa + \theta_a R(\bar{y} - \kappa) \Rightarrow \kappa = \bar{y}$

Coefficients on a: $\theta_a = \theta_a R[1 - \theta_a] \Rightarrow \theta_a = \frac{R - 1}{R}$

Coefficients on $(y - \bar{y})$: $\theta_y = \theta_y \rho + \theta_a R(1 - \theta_y) \Rightarrow \theta_y = \frac{\theta_a R}{1 - \rho + \theta_a R} = \frac{\theta_a}{1 - \frac{\rho}{R}}$
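A quick numerical sanity check of these restrictions: under the implied coefficients, consumption should be literally constant along the deterministic path. The parameter values below are arbitrary assumptions for illustration:

# Check that the undetermined-coefficient solution delivers c' = c.
R, rho, ybar = 1.04, 0.8, 1.0          # illustrative values

theta_a = (R - 1) / R
theta_y = theta_a / (1 - rho / R)
kappa = ybar

def c(a, y):
    return kappa + theta_y * (y - ybar) + theta_a * a

a, y = 2.0, 1.3                        # arbitrary current state
a_next = R * (a + y - c(a, y))         # asset accumulation
y_next = ybar + rho * (y - ybar)       # income recursion
print(c(a, y), c(a_next, y_next))      # two numerically equal values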

Economic Rules
- Consume the normal level of income ($\bar{y}$)
- Consume the interest from the asset stock, leaving the asset stock unchanged period to period (except as noted next)
- Consume based on the present value of deviations from normal income, treating this as if it were another source of wealth; allow variations in asset position on this basis.

Could have gotten these rules more directly

$\sum_{j=0}^{\infty}\left(\frac{1}{R}\right)^j c = a + \sum_{j=0}^{\infty}\left(\frac{1}{R}\right)^j\left[\bar{y} + \rho^j(y - \bar{y})\right]$

$\frac{1}{1 - \frac{1}{R}}\, c = a + \frac{1}{1 - \frac{1}{R}}\,\bar{y} + \frac{1}{1 - \frac{\rho}{R}}(y - \bar{y})$

$c = \bar{y} + \left(1 - \frac{1}{R}\right)\left[a + \frac{1}{1 - \frac{\rho}{R}}(y - \bar{y})\right]$

Questions & Answers

If we could have gotten them more easily, then why do we need DP? Because there are many problems that we cannot solve so easily, and DP is a procedure for solving them.

What is the value function?

$V(a, y) = \frac{1}{1 - \beta}\, u\left(\bar{y} + \left(1 - \frac{1}{R}\right)\left[a + \frac{1}{1 - \frac{\rho}{R}}(y - \bar{y})\right]\right)$

Easy to determine in this case because c is constant over time; V inherits properties of u.

Check: take this V, insert it in the Bellman equation, show the optimal c has the specified form, show V has this form.

3. A stochastic dynamic problem and the DP approach

Maximize $E\left\{\sum_{t=0}^{\infty} \beta^t u(k_t, x_t, c_t) \,\middle|\, (k_0, \varsigma_0)\right\}$

subject to $k_{t+1} - k_t = g(k_t, x_t, c_t)$ and Markovian exogenous state variables

$x_t = x(\varsigma_t)$, $\Upsilon(\varsigma, B) = \mathrm{prob}(\varsigma_{t+1} \in B \mid \varsigma_t = \varsigma)$

Markov examples
- Markov chains (LS, Chapter 1)
- Linear state space systems
- Nonlinear difference equations with iid shocks, $\varsigma_{t+1} = m(\varsigma_t, e_{t+1})$

We won't be more explicit until necessary. Key point: states are enough to compute expectations, as the sketch below illustrates.
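A small sketch of this key point for a finite Markov chain; the transition matrix and income values are assumptions chosen for illustration:

import numpy as np

# With a Markov chain, the current state is enough to compute expectations:
# E[y' | s = i] is just row i of P times the vector of state values.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])             # P[i, j] = prob(s' = j | s = i)
y = np.array([1.0, 0.5])               # y(s) in each exogenous state

print(P @ y)                           # conditional means: [0.95, 0.65]

# Monte Carlo check from state 0:
rng = np.random.default_rng(0)
draws = rng.choice(2, size=100_000, p=P[0])
print(y[draws].mean())                 # close to 0.95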

Bellman Equation

The uncertainty case is a minor modification of the certainty case:

$V(k, \varsigma) = \max_{c, k'} \{ u(c, k, x(\varsigma)) + \beta E[V(k', \varsigma') \mid (k, \varsigma)] \}$
s.t. $k' = k + g(k, x(\varsigma), c)$

Proceeding as above

Lagrangian:

$L = \{ u(c, k, x(\varsigma)) + \beta E[V(k', \varsigma') \mid (k, \varsigma)] \} + \lambda[k + g(k, x(\varsigma), c) - k']$

FOCs:

$\frac{\partial L}{\partial c} = \frac{\partial u(c, k, x(\varsigma))}{\partial c} + \lambda \frac{\partial g(k, x(\varsigma), c)}{\partial c} = 0$

$\frac{\partial L}{\partial k'} = -\lambda + \beta E_{\varsigma'}\left[\frac{\partial V(k', \varsigma')}{\partial k'}\right] = 0$

$\frac{\partial L}{\partial \lambda} = [k + g(k, x(\varsigma), c) - k'] = 0$

The ET is unchanged.

Implications for optimal policies

Functions of states, and state evolution:

$c_t = \pi(k_t, \varsigma_t)$
$k_{t+1} - k_t = g(k_t, x(\varsigma_t), \pi(k_t, \varsigma_t))$
$\lambda_t = \lambda(k_t, \varsigma_t)$

State evolution is now a larger Markov process. For example,

$s_{t+1} = \begin{bmatrix} k_{t+1} \\ \varsigma_{t+1} \end{bmatrix} = \begin{bmatrix} k_t + g(k_t, x(\varsigma_t), \pi(k_t, \varsigma_t)) \\ m(\varsigma_t, e_{t+1}) \end{bmatrix} = M(s_t, e_{t+1})$

Value Function

Since c, k, x depend on states, the value function is also a function of the state: V(s). It is the maximized RHS of the Bellman equation.

4. Optimal consumption with fluctuating income: setting up a DP

Simple case (no k, x in u): $E\left\{\sum_{t=0}^{\infty} \beta^t u(c_t)\right\}$

Accumulation of assets (we don't necessarily restrict $\beta R = 1$): $a_{t+1} = R[a_t + y_t - c_t]$

Income process: $y_t = y(\varsigma_t)$, $\varsigma_t$: Markov

One version of the Bellman equation

$V(a, \varsigma) = \max_{c, a'} \{ u(c) + \beta E[V(a', \varsigma') \mid \varsigma] \}$
s.t. $\left[a + y(\varsigma) - c - \frac{1}{R}a'\right] = 0$

FOCs and ET

Make sure you can work these out following the recipe above:

$c: \; u_c(c) - \lambda = 0$
$a': \; -\frac{1}{R}\lambda + \beta E\left[\frac{\partial V(a', \varsigma')}{\partial a'}\right] = 0$
$\lambda: \; \left[a + y(\varsigma) - c - \frac{1}{R}a'\right] = 0$
$ET: \; \frac{\partial V(a, \varsigma)}{\partial a} = \lambda$

Implications for policies

Optimal consumption depends on (a) wealth and (b) the variables that are useful for forecasting future income: $c(a, \varsigma)$. But solving for this function is no longer easy. This rationalizes SL's discussion of numerical methods, a topic that we will consider further later; a sketch of one such method follows.
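As a preview of those numerical methods, here is a minimal value function iteration sketch for $c(a, \varsigma)$. The CRRA utility, two-state Markov income process, and parameter values are all illustrative assumptions:

import numpy as np

# VFI for V(a, s) = max_{a'} u(a + y(s) - a'/R) + beta * E[V(a', s') | s].
beta, R, gamma = 0.95, 1.03, 2.0
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])                  # Markov matrix for the income state
y = np.array([1.0, 0.5])                    # y(s)
grid = np.linspace(0.0, 20.0, 400)          # grid for a and a' (so a' >= 0)

def u(c):                                   # CRRA utility, infeasible c penalized
    cc = np.maximum(c, 1e-10)
    return np.where(c > 1e-10, cc ** (1 - gamma) / (1 - gamma), -1e10)

V = np.zeros((len(grid), 2))
pol = np.zeros((len(grid), 2), dtype=int)
for _ in range(2000):
    EV = V @ P.T                            # EV[a', s] = E[V(a', s') | s]
    TV = np.empty_like(V)
    for s in range(2):
        C = grid[:, None] + y[s] - grid[None, :] / R    # c for each (a, a')
        obj = u(C) + beta * EV[None, :, s]
        TV[:, s] = obj.max(axis=1)
        pol[:, s] = obj.argmax(axis=1)
    if np.max(np.abs(TV - V)) < 1e-7:
        break
    V = TV

c_pol = grid[:, None] + y[None, :] - grid[pol] / R      # c(a, s) on the grid
print(c_pol[::100])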

Implication for Value function

The value function is the objective evaluated at the optimal consumption policy, which is a function of a Markov process, so that

$V(a_0, \varsigma_0) = E\left\{\sum_{t=0}^{\infty} \beta^t u(\pi(a_t, \varsigma_t)) \,\middle|\, (a_0, \varsigma_0)\right\}$

The value function satisfies the Bellman functional equation:

$V(a, \varsigma) = \max_{c, a'} \{ u(c) + \beta E[V(a', \varsigma') \mid \varsigma] \}$ s.t. $\left[a + y(\varsigma) - c - \frac{1}{R}a'\right] = 0$
$= u(\pi(a, \varsigma)) + \beta E[V(R[a + y(\varsigma) - \pi(a, \varsigma)], \varsigma') \mid (a, \varsigma)]$

What we've covered in this lecture
- Introduction to DP under certainty
- Bellman Equation
- Associated Lagrangian
- FOCs and the ET
- Setting up and solving the certainty consumption problem
- DP with exogenous variables that are functions of a Markov process (exogenous state vector)
- Setting up the consumption problem with uncertain income

What's next?
- Further analysis of optimal consumption
- Theory: Levhari/Srinivasan
- Theory: Sandmo
- Theory and Empirics: Hall