Recursive Methods. Introduction to Dynamic Optimization


Recursive Methods

Outline of Today's Lecture:
- finish Euler equations and the transversality condition
- Principle of Optimality: Bellman's equation
- study of the Bellman equation with bounded F: contraction mapping and theorem of the maximum

Infinite Horizon ($T = \infty$)

$$V^*(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1}) \tag{1}$$

subject to $x_{t+1} \in \Gamma(x_t)$ for all $t$, with $x_0$ given; sup instead of max, since a maximum may not be attained.
Define $\tilde{x} = \{x_{t+1}\}_{t=0}^{\infty}$ as a plan, and define the set of feasible plans
$$\Pi(x_0) = \left\{ \{x'_{t+1}\}_{t=0}^{\infty} \; : \; x'_{t+1} \in \Gamma(x'_t) \text{ and } x'_0 = x_0 \right\}.$$

Assumptions

A1. $\Gamma(x)$ is non-empty for all $x \in X$.
A2. $\lim_{T\to\infty} \sum_{t=0}^{T} \beta^t F(x_t, x_{t+1})$ exists for all $\tilde{x} \in \Pi(x_0)$.

Under A1 and A2 the problem is well defined.

Recursive Formulation: Bellman Equation

The value function satisfies
$$\begin{aligned}
V^*(x_0) &= \max_{\{x_{t+1}\}_{t=0}^{\infty},\; x_{t+1}\in\Gamma(x_t)} \; \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1}) \\
&= \max_{x_1\in\Gamma(x_0)} \left[ F(x_0, x_1) + \max_{\{x_{t+1}\}_{t=1}^{\infty},\; x_{t+1}\in\Gamma(x_t)} \; \sum_{t=1}^{\infty} \beta^t F(x_t, x_{t+1}) \right] \\
&= \max_{x_1\in\Gamma(x_0)} \left[ F(x_0, x_1) + \beta \max_{\{x_{t+2}\}_{t=0}^{\infty},\; x_{t+2}\in\Gamma(x_{t+1})} \; \sum_{t=0}^{\infty} \beta^t F(x_{t+1}, x_{t+2}) \right] \\
&= \max_{x_1\in\Gamma(x_0)} \{ F(x_0, x_1) + \beta V^*(x_1) \}.
\end{aligned}$$

Idea: use the Bellman equation (BE) to find the value function $V$ and the policy function $g$ [Principle of Optimality].

Bellman Equation: Principle of Optimality

Idea: use the functional equation
$$V(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta V(y)\}$$
to find $V$ and $g$. Note: the nuisance subscripts $t$, $t+1$ are dropped. A solution is a function $V(\cdot)$ that is the same on both sides. IF the Bellman equation has a unique solution, then $V = V^*$; more generally, the right solution to the BE delivers $V^*$.

Recursive Methods

Outline of Today's Lecture:
- housekeeping: ps#1 and recitation day / theory general / web page
- finish Principle of Optimality: sequence problem (for values and plans) vs. solution to the Bellman equation
- begin study of the Bellman equation with bounded and continuous F
- tools: contraction mapping and theorem of the maximum

Sequence Problem vs. Functional Equation

Sequence problem:
$$V^*(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1}) \quad \text{s.t. } x_{t+1} \in \Gamma(x_t), \; x_0 \text{ given} \tag{SP}$$
or, more succinctly,
$$V^*(x_0) = \sup_{\tilde{x} \in \Pi(x_0)} u(\tilde{x}).$$
Functional equation [this particular FE is called the Bellman equation]:
$$V(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta V(y)\}. \tag{FE}$$

Principle of Optimality

IDEA: use the BE to find the value function $V^*$ and an optimal plan $\tilde{x}^*$.
- Thm 4.2. $V^*$ defined by the SP solves the FE.
- Thm 4.3. $V$ solves the FE and $\lim_{T\to\infty} \beta^T V(x_T) = 0$ $\implies$ $V = V^*$.
- Thm 4.4. $\tilde{x}^* \in \Pi(x_0)$ is optimal $\implies$ $V^*(x_t^*) = F(x_t^*, x_{t+1}^*) + \beta V^*(x_{t+1}^*)$.
- Thm 4.5. $\tilde{x}^* \in \Pi(x_0)$ satisfies $V^*(x_t^*) = F(x_t^*, x_{t+1}^*) + \beta V^*(x_{t+1}^*)$ and a limit condition $\implies$ $\tilde{x}^*$ is optimal.

Why is this Progress?
- intuition: breaks the planning horizon into two: now and then
- notation: reduces unnecessary notation (especially with uncertainty)
- analysis: prove existence, uniqueness, and properties of the optimal policy (e.g. continuity, monotonicity, etc.)
- computation: the associated numerical algorithms are powerful for many applications

Proof of Theorem 4.3 (max case)

Assume $\lim_{T\to\infty} \beta^T V(x_T) = 0$ for any $\tilde{x} \in \Pi(x_0)$. The BE implies
$$\begin{aligned}
V(x_0) &\ge F(x_0, x_1) + \beta V(x_1) \quad \text{for all } x_1 \in \Gamma(x_0) \\
&= F(x_0, x_1^*) + \beta V(x_1^*) \quad \text{for some } x_1^* \in \Gamma(x_0).
\end{aligned}$$
Substituting for $V(x_1)$:
$$\begin{aligned}
V(x_0) &\ge F(x_0, x_1) + \beta F(x_1, x_2) + \beta^2 V(x_2) \quad \text{for all } \tilde{x} \in \Pi(x_0) \\
&= F(x_0, x_1^*) + \beta F(x_1^*, x_2^*) + \beta^2 V(x_2^*) \quad \text{for some } \tilde{x}^* \in \Pi(x_0).
\end{aligned}$$

Continuing this way,
$$\begin{aligned}
V(x_0) &\ge \sum_{t=0}^{n} \beta^t F(x_t, x_{t+1}) + \beta^{n+1} V(x_{n+1}) \quad \text{for all } \tilde{x} \in \Pi(x_0) \\
&= \sum_{t=0}^{n} \beta^t F(x_t^*, x_{t+1}^*) + \beta^{n+1} V(x_{n+1}^*) \quad \text{for some } \tilde{x}^* \in \Pi(x_0).
\end{aligned}$$
Since $\beta^T V(x_T) \to 0$, taking the limit $n \to \infty$ on both sides of both expressions we conclude that
$$V(x_0) \ge u(\tilde{x}) \text{ for all } \tilde{x} \in \Pi(x_0), \qquad V(x_0) = u(\tilde{x}^*) \text{ for some } \tilde{x}^* \in \Pi(x_0).$$

Bellman Equation as a Fixed Point

Define the operator
$$T(f)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\}.$$
$V$ solves the BE $\iff$ $V$ is a fixed point of $T$ [i.e. $TV = V$].
Bounded returns: if $\|F\| < B$ and $F$ and $\Gamma$ are continuous, then $T$ maps continuous bounded functions into continuous bounded functions. With bounded returns, $T$ is a contraction mapping $\implies$ unique fixed point, and many other bonuses.
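Iterating $T$ to its fixed point can be sketched in a few lines of code. The sketch below uses an assumed log-utility growth example, $F(x, y) = \log(x^\alpha - y)$ with $\Gamma(x) = (0, x^\alpha]$, which is not from the lecture but has the known closed-form policy $g(x) = \alpha\beta x^\alpha$; the grid, parameters, and tolerances are all assumptions.

```python
import numpy as np

# Minimal value function iteration sketch (assumed toy example):
# F(x, y) = log(x**alpha - y), Gamma(x) = (0, x**alpha], on a grid.
alpha, beta = 0.3, 0.9
grid = np.linspace(1e-3, 1.0, 200)            # state space X
C = grid[:, None] ** alpha - grid[None, :]    # consumption c = x**alpha - y
F = np.where(C > 0, np.log(np.maximum(C, 1e-12)), -np.inf)

def T(v):
    """Bellman operator: (Tv)(x) = max_y { F(x, y) + beta * v(y) }."""
    return np.max(F + beta * v[None, :], axis=1)

v = np.zeros(len(grid))
for _ in range(1000):                         # iterate T to its fixed point
    v_new = T(v)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new
v = v_new

# The contraction property ||Tv - Tw|| <= beta * ||v - w|| holds for any v, w:
w = v + np.cos(grid)
assert np.max(np.abs(T(v) - T(w))) <= beta * np.max(np.abs(v - w)) + 1e-10
```

On this example the computed policy (the grid argmax) tracks the closed form $\alpha\beta x^\alpha$ up to grid error, which is the "many other bonuses" in action: the contraction delivers both the value and the policy.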

Needed Tools
- basic real analysis (section 3.1): {vector, metric, normed, complete} spaces; Cauchy sequences; closed, compact, bounded sets
- Contraction Mapping Theorem (section 3.2)
- Theorem of the Maximum: study of the RHS of the Bellman equation (equivalently, of $T$) (section 3.3)

Recursive Methods

Outline of Today's Lecture:
- study the functional equation (Bellman equation) with bounded and continuous F
- tools: contraction mapping and theorem of the maximum

Bellman Equation as a Fixed Point

Define the operator
$$T(f)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\}.$$
$V$ solves the BE $\iff$ $V$ is a fixed point of $T$ [i.e. $TV = V$].
Bounded returns: if $\|F\| < B$ and $F$ and $\Gamma$ are continuous, then $T$ maps continuous bounded functions into continuous bounded functions. With bounded returns, $T$ is a contraction mapping $\implies$ unique fixed point, and many other bonuses.

Our Favorite Metric Space

$$S = \left\{ f : X \to \mathbb{R} \; : \; f \text{ is continuous and } \|f\| \equiv \sup_{x\in X} |f(x)| < \infty \right\}$$
with
$$\rho(f, g) = \|f - g\| \equiv \sup_{x\in X} |f(x) - g(x)|.$$
Definition. A metric space $(S, \rho)$ is complete if every Cauchy sequence in it converges. For the definition of a Cauchy sequence and examples of complete metric spaces, see SLP.
Theorem. The set of bounded and continuous functions with the sup metric is complete. See SLP.

Contraction Mapping

Definition. Let $(S, \rho)$ be a metric space and let $T : S \to S$ be an operator. $T$ is a contraction with modulus $\beta \in (0, 1)$ if
$$\rho(Tx, Ty) \le \beta \rho(x, y) \quad \text{for any } x, y \in S.$$

Contraction Mapping Theorem

Theorem (CMThm). If $T$ is a contraction on $(S, \rho)$ with modulus $\beta$, then (i) there is a unique fixed point $s^* \in S$, $s^* = Ts^*$, and (ii) iterations of $T$ converge to the fixed point:
$$\rho(T^n s_0, s^*) \le \beta^n \rho(s_0, s^*) \quad \text{for any } s_0 \in S,$$
where $T^{n+1}(s) = T(T^n(s))$.

CMThm Proof of (i)

1st step: construct the fixed point $s^*$. Take any $s_0 \in S$ and define $\{s_n\}$ by $s_{n+1} = Ts_n$. Then
$$\rho(s_2, s_1) = \rho(Ts_1, Ts_0) \le \beta \rho(s_1, s_0)$$
and, generalizing, $\rho(s_{n+1}, s_n) \le \beta^n \rho(s_1, s_0)$. Then, for $m > n$,
$$\begin{aligned}
\rho(s_m, s_n) &\le \rho(s_m, s_{m-1}) + \rho(s_{m-1}, s_{m-2}) + \cdots + \rho(s_{n+1}, s_n) \\
&\le \left( \beta^{m-1} + \beta^{m-2} + \cdots + \beta^n \right) \rho(s_1, s_0) \\
&= \beta^n \left( \beta^{m-n-1} + \cdots + \beta + 1 \right) \rho(s_1, s_0) \\
&\le \frac{\beta^n}{1 - \beta}\, \rho(s_1, s_0),
\end{aligned}$$
thus $\{s_n\}$ is Cauchy; hence $s_n \to s^*$.

2nd step: show $s^* = Ts^*$:
$$\rho(Ts^*, s^*) \le \rho(Ts^*, s_n) + \rho(s^*, s_n) \le \beta \rho(s^*, s_{n-1}) + \rho(s^*, s_n) \to 0.$$
3rd step: $s^*$ is unique. Suppose $Ts_1 = s_1$ and $Ts_2 = s_2$. Then
$$0 \le a \equiv \rho(s_1, s_2) = \rho(Ts_1, Ts_2) \le \beta \rho(s_1, s_2) = \beta a,$$
which is only possible if $a = 0$, i.e. $s_1 = s_2$. Finally, as for (ii):
$$\rho(T^n s_0, s^*) = \rho(T^n s_0, Ts^*) \le \beta \rho(T^{n-1} s_0, s^*) \le \cdots \le \beta^n \rho(s_0, s^*).$$
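The geometric rate in (ii) is easy to see numerically on a scalar toy contraction (an assumed example, not from the lecture): $T(s) = \beta s + 1$ on $(\mathbb{R}, |\cdot|)$, whose fixed point is $s^* = 1/(1-\beta)$.

```python
# Numeric illustration of the CMThm bound rho(T^n s0, s*) <= beta**n * rho(s0, s*)
# for the scalar contraction T(s) = beta*s + 1 (an assumed toy example).
beta = 0.5
T = lambda s: beta * s + 1.0
s_star = 1.0 / (1.0 - beta)      # unique fixed point: s* = T(s*) = 2.0

s0 = 10.0
s = s0
for n in range(1, 31):
    s = T(s)
    # distance to the fixed point shrinks at least geometrically in n
    assert abs(s - s_star) <= beta ** n * abs(s0 - s_star) + 1e-12
```

For this linear map the bound holds with equality, which makes the modulus $\beta$ visible directly in the iterates.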

Corollary. Let $(S, \rho)$ be a complete metric space, let $S' \subseteq S$ be closed, let $T$ be a contraction on $S$, and let $s^* = Ts^*$. Assume that $T(S') \subseteq S'$, i.e. if $s' \in S'$ then $T(s') \in S'$; then $s^* \in S'$. Moreover, if $S'' \subseteq S'$ and $T(S') \subseteq S''$, i.e. if $s' \in S'$ then $T(s') \in S''$, then $s^* \in S''$.

Blackwell's Sufficient Conditions

Let $S$ be the space of bounded functions on $X$, with $\|\cdot\|$ the sup norm. Let $T : S \to S$. Assume that (i) $T$ is monotone, that is, $Tf(x) \le Tg(x)$ for any $x \in X$ and $f, g$ such that $f(x) \le g(x)$ for all $x \in X$; and (ii) $T$ discounts, that is, there is a $\beta \in (0, 1)$ such that for any constant $a \in \mathbb{R}_+$,
$$T(f + a)(x) \le Tf(x) + \beta a \quad \text{for all } x \in X.$$
Then $T$ is a contraction.

Proof. By definition $f = g + (f - g)$, and using the definition of $\|\cdot\|$,
$$f(x) \le g(x) + \|f - g\|.$$
Then by monotonicity (i),
$$Tf \le T(g + \|f - g\|),$$
and by discounting (ii), setting $a = \|f - g\|$,
$$Tf \le T(g) + \beta \|f - g\|.$$
Reversing the roles of $f$ and $g$: $Tg \le T(f) + \beta \|f - g\|$. Hence
$$\|Tf - Tg\| \le \beta \|f - g\|.$$
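The two conditions can be spot-checked numerically for a discretized Bellman-type operator. The return function below is an arbitrary bounded assumption chosen only for illustration.

```python
import numpy as np

# Spot-check Blackwell's conditions for (Tf)(x) = max_y { F(x, y) + beta*f(y) }
# on a grid, with an arbitrary bounded F (assumed toy example).
beta = 0.8
grid = np.linspace(0.0, 1.0, 50)
F = np.sin(3 * grid[:, None]) * np.cos(2 * grid[None, :])   # any bounded return

def T(f):
    return np.max(F + beta * f[None, :], axis=1)

f = np.sin(5 * grid)
g = f - np.abs(np.cos(grid))          # g <= f everywhere
a = 0.7

assert np.all(T(g) <= T(f) + 1e-12)                   # (i) monotonicity
assert np.all(T(f + a) <= T(f) + beta * a + 1e-12)    # (ii) discounting
# ...and, as the proof shows, T is therefore a contraction:
assert np.max(np.abs(T(f) - T(g))) <= beta * np.max(np.abs(f - g)) + 1e-12
```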

Bellman Equation Application

$$(Tv)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}$$
Assume that $F$ is bounded and continuous and that $\Gamma$ is continuous and has compact range.
Theorem. $T$ maps the set $S$ of continuous and bounded functions into itself. Moreover, $T$ is a contraction.

Proof. That $T$ maps the set of continuous and bounded functions into itself follows from the Theorem of the Maximum (we do this next). That $T$ is a contraction follows since $T$ satisfies Blackwell's sufficient conditions. For monotonicity, notice that for $f \ge v$, letting $g$ denote the maximizing policy for $v$:
$$\begin{aligned}
Tv(x) &= \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\} = F(x, g(x)) + \beta v(g(x)) \\
&\le F(x, g(x)) + \beta f(g(x)) \le \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\} = Tf(x).
\end{aligned}$$
A similar argument delivers discounting: for $a > 0$,
$$T(v + a)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta (v(y) + a)\} = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\} + \beta a = T(v)(x) + \beta a.$$

Theorem of the Maximum

We want $T$ to map continuous functions into continuous functions:
$$(Tv)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}$$
and we want to learn about the optimal policy of the RHS of the Bellman equation:
$$G(x) = \arg\max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}.$$
First, continuity concepts for correspondences... then, a few example maximizations... finally, the Theorem of the Maximum.

Continuity Notions for Correspondences

Assume $\Gamma$ is non-empty and compact valued (the set $\Gamma(x)$ is non-empty and compact for all $x \in X$).
Upper hemicontinuity (u.h.c.) at $x$: for any pair of sequences $\{x_n\}$ and $\{y_n\}$ with $x_n \to x$ and $y_n \in \Gamma(x_n)$, there exists a subsequence of $\{y_n\}$ that converges to a point $y \in \Gamma(x)$.
Lower hemicontinuity (l.h.c.) at $x$: for any sequence $\{x_n\}$ with $x_n \to x$ and for every $y \in \Gamma(x)$, there exists a sequence $\{y_n\}$ with $y_n \in \Gamma(x_n)$ such that $y_n \to y$.
Continuous at $x$: $\Gamma$ is both upper and lower hemicontinuous at $x$.

Max Examples

$$h(x) = \max_{y\in\Gamma(x)} f(x, y), \qquad G(x) = \arg\max_{y\in\Gamma(x)} f(x, y)$$
Example 1: $f(x, y) = xy$; $X = [-1, 1]$; $\Gamma(x) = X$.
$$G(x) = \begin{cases} \{-1\} & x < 0 \\ [-1, 1] & x = 0 \\ \{1\} & x > 0 \end{cases} \qquad h(x) = |x|$$

Example 2: $f(x, y) = xy^2$; $X = [-1, 1]$; $\Gamma(x) = X$.
$$G(x) = \begin{cases} \{0\} & x < 0 \\ [-1, 1] & x = 0 \\ \{-1, 1\} & x > 0 \end{cases} \qquad h(x) = \max\{0, x\}$$
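Both examples can be checked by maximizing over a fine grid for $y$ (a sketch; the grid sizes and tolerances are assumptions): $h$ comes out continuous in both cases even though $G$ jumps.

```python
import numpy as np

# Grid check of the two examples: maximize f(x, y) over y in [-1, 1] and
# compare with the closed forms h(x) = |x| and h(x) = max{0, x}.
ygrid = np.linspace(-1.0, 1.0, 2001)

def h(f, x):
    return np.max(f(x, ygrid))

f1 = lambda x, y: x * y          # example 1
f2 = lambda x, y: x * y ** 2     # example 2

for x in np.linspace(-1.0, 1.0, 41):
    assert abs(h(f1, x) - abs(x)) < 1e-9
    assert abs(h(f2, x) - max(0.0, x)) < 1e-9
```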

Theorem of the Maximum

Define:
$$h(x) = \max_{y\in\Gamma(x)} f(x, y), \qquad G(x) = \arg\max_{y\in\Gamma(x)} f(x, y) = \{y \in \Gamma(x) : h(x) = f(x, y)\}.$$
Theorem (Berge). Let $X \subseteq \mathbb{R}^l$ and $Y \subseteq \mathbb{R}^m$. Let $f : X \times Y \to \mathbb{R}$ be continuous and $\Gamma : X \to Y$ be compact-valued and continuous. Then $h : X \to \mathbb{R}$ is continuous and $G : X \to Y$ is non-empty, compact valued, and u.h.c.

lim max = max lim

Theorem. Suppose $\{f_n(x, y)\}$ and $f(x, y)$ are concave in $y$ and $f_n \to f$ in the sup norm (uniformly). Define
$$g_n(x) = \arg\max_{y\in\Gamma(x)} f_n(x, y), \qquad g(x) = \arg\max_{y\in\Gamma(x)} f(x, y).$$
Then $g_n(x) \to g(x)$ for all $x$ (pointwise convergence); if $X$ is compact, the convergence is uniform.

Uses of the Corollary of the CMThm

Monotonicity of $v$.
Theorem. Assume that $F(\cdot, y)$ is increasing and that $\Gamma$ is increasing, i.e. $\Gamma(x) \subseteq \Gamma(x')$ for $x \le x'$. Then the unique fixed point $v$ satisfying $v = Tv$ is increasing. If $F(\cdot, y)$ is strictly increasing, so is $v$.

Proof

By the corollary of the CMThm, it suffices to show that $Tf$ is increasing if $f$ is increasing. Let $x \le x'$:
$$\begin{aligned}
Tf(x) &= \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\} \\
&= F(x, y^*) + \beta f(y^*) \quad \text{for some } y^* \in \Gamma(x) \\
&\le F(x', y^*) + \beta f(y^*) \\
&\le \max_{y\in\Gamma(x')} \{F(x', y) + \beta f(y)\} = Tf(x'),
\end{aligned}$$
since $y^* \in \Gamma(x) \subseteq \Gamma(x')$. If $F(\cdot, y)$ is strictly increasing, then
$$F(x, y^*) + \beta f(y^*) < F(x', y^*) + \beta f(y^*).$$

Concavity

(Or strict) concavity of $v$.
Theorem. Assume that $X$ is convex and $\Gamma$ is concave, i.e. $y \in \Gamma(x)$, $y' \in \Gamma(x')$ implies that
$$y^\theta \equiv \theta y' + (1 - \theta) y \in \Gamma(\theta x' + (1 - \theta) x) \equiv \Gamma(x^\theta)$$
for any $x, x' \in X$ and $\theta \in (0, 1)$. Finally, assume that $F$ is concave in $(x, y)$. Then the fixed point $v$ satisfying $v = Tv$ is concave in $x$. Moreover, if $F(\cdot, y)$ is strictly concave, so is $v$.

Differentiability

Can't use the same strategy: the space of differentiable functions is not closed. There are many envelope theorems. Formula: if $h(x)$ is differentiable and $y^*$ is interior, then $h'(x) = f_x(x, y^*)$: the right value... but is $h$ differentiable? One answer (demand theory) relies on f.o.c. and assuming twice differentiability of $f$. This won't work for us, since $f = F(x, y) + \beta v(y)$ and we don't even know if $f$ is once differentiable! Going in circles.

Benveniste and Scheinkman

First a lemma...
Lemma. Suppose $v(x)$ is concave and that there exists $w(x)$ such that $w(x) \le v(x)$, with $v(x_0) = w(x_0)$, in some neighborhood $D$ of $x_0$, and $w$ is differentiable at $x_0$ ($w'(x_0)$ exists). Then $v$ is differentiable at $x_0$ and $v'(x_0) = w'(x_0)$.
Proof. Since $v$ is concave it has at least one subgradient $p$ at $x_0$:
$$w(x) - w(x_0) \le v(x) - v(x_0) \le p (x - x_0).$$
Thus any subgradient of $v$ is also a subgradient of $w$. But $w$ is differentiable at $x_0$, so it has a unique subgradient there, equal to $w'(x_0)$.

Benveniste and Scheinkman

Now a theorem.
Theorem. Suppose $F$ is strictly concave and $\Gamma$ is convex. If $x_0 \in \mathrm{int}(X)$ and $g(x_0) \in \mathrm{int}(\Gamma(x_0))$, then the fixed point $V$ of $T$ is differentiable at $x_0$ and
$$V'(x_0) = F_x(x_0, g(x_0)).$$
Proof. We know $V$ is concave. Since $x_0 \in \mathrm{int}(X)$ and $g(x_0) \in \mathrm{int}(\Gamma(x_0))$, we have $g(x_0) \in \mathrm{int}(\Gamma(x))$ for $x \in D$, a neighborhood of $x_0$. Then
$$W(x) = F(x, g(x_0)) + \beta V(g(x_0))$$
satisfies $W(x) \le V(x)$ and $W(x_0) = V(x_0)$, with $W'(x_0) = F_x(x_0, g(x_0))$, so the result follows from the lemma.
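The envelope formula can be verified directly on a case where everything is known in closed form (an assumed example, not from the lecture): $U = \log$, $f(x) = x^\alpha$, so $F(x, y) = \log(x^\alpha - y)$, $V(x) = a + b \log x$ with $b = \alpha/(1 - \alpha\beta)$, and $g(x) = \alpha\beta x^\alpha$.

```python
# Benveniste-Scheinkman check on an assumed closed-form example:
# F(x, y) = log(x**alpha - y), V(x) = a + b*log(x), b = alpha/(1 - alpha*beta),
# policy g(x) = alpha*beta*x**alpha.  Claim: V'(x) = F_x(x, g(x)).
alpha, beta = 0.3, 0.9
b = alpha / (1 - alpha * beta)

def F_x(x, y):                         # partial derivative of F w.r.t. x
    return alpha * x ** (alpha - 1) / (x ** alpha - y)

for x in (0.2, 0.5, 1.0, 2.0):
    g = alpha * beta * x ** alpha      # optimal policy g(x)
    assert abs(F_x(x, g) - b / x) < 1e-12   # equals V'(x) = b/x
```

Algebraically, $F_x(x, g(x)) = \alpha x^{\alpha-1}/\big(x^\alpha(1 - \alpha\beta)\big) = b/x$, exactly $V'(x)$, which is what the assertions confirm numerically.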

Recursive Methods

Outline of Today's Lecture:
- finish off: theorem of the maximum; Bellman equation with bounded and continuous F
- differentiability of the value function
- application: neoclassical growth model
- homogeneous and unbounded returns, more applications

Our Favorite Metric Space

$$S = \left\{ f : X \to \mathbb{R} \; : \; f \text{ is continuous and } \|f\| \equiv \sup_{x\in X} |f(x)| < \infty \right\}, \qquad \rho(f, g) = \|f - g\| \equiv \sup_{x\in X} |f(x) - g(x)|$$
$$(Tv)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}$$
Assume that $F$ is bounded and continuous and that $\Gamma$ is continuous and has compact range.
Theorem 4.6. $T$ maps the set $S$ of continuous and bounded functions into itself. Moreover, $T$ is a contraction.

Proof. That $T$ maps the set of continuous and bounded functions into itself follows from the Theorem of the Maximum (we do this next). That $T$ is a contraction follows from Blackwell's sufficient conditions. Monotonicity: notice that for $f \ge v$, letting $g$ denote the maximizing policy for $v$:
$$\begin{aligned}
Tv(x) &= \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\} = F(x, g(x)) + \beta v(g(x)) \\
&\le F(x, g(x)) + \beta f(g(x)) \le \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\} = Tf(x).
\end{aligned}$$
Discounting: for $a > 0$,
$$T(v + a)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta (v(y) + a)\} = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\} + \beta a = T(v)(x) + \beta a.$$

Theorem of the Maximum

We want $T$ to map continuous functions into continuous functions:
$$(Tv)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}$$
and we want to learn about the optimal policy of the RHS of the Bellman equation:
$$G(x) = \arg\max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}.$$
First, continuity concepts for correspondences... then, a few example maximizations... finally, the Theorem of the Maximum.

Continuity Notions for Correspondences

Assume $\Gamma$ is non-empty and compact valued (the set $\Gamma(x)$ is non-empty and compact for all $x \in X$).
Upper hemicontinuity (u.h.c.) at $x$: for any pair of sequences $\{x_n\}$ and $\{y_n\}$ with $x_n \to x$ and $y_n \in \Gamma(x_n)$, there exists a subsequence of $\{y_n\}$ that converges to a point $y \in \Gamma(x)$.
Lower hemicontinuity (l.h.c.) at $x$: for any sequence $\{x_n\}$ with $x_n \to x$ and for every $y \in \Gamma(x)$, there exists a sequence $\{y_n\}$ with $y_n \in \Gamma(x_n)$ such that $y_n \to y$.
Continuous at $x$: $\Gamma$ is both upper and lower hemicontinuous at $x$.

Max Examples

$$h(x) = \max_{y\in\Gamma(x)} f(x, y), \qquad G(x) = \arg\max_{y\in\Gamma(x)} f(x, y)$$
Example 1: $f(x, y) = xy$; $X = [-1, 1]$; $\Gamma(x) = X$.
$$G(x) = \begin{cases} \{-1\} & x < 0 \\ [-1, 1] & x = 0 \\ \{1\} & x > 0 \end{cases} \qquad h(x) = |x|$$

Example 2: $f(x, y) = xy^2$; $X = [-1, 1]$; $\Gamma(x) = X$.
$$G(x) = \begin{cases} \{0\} & x < 0 \\ [-1, 1] & x = 0 \\ \{-1, 1\} & x > 0 \end{cases} \qquad h(x) = \max\{0, x\}$$

Theorem of the Maximum

Define:
$$h(x) = \max_{y\in\Gamma(x)} f(x, y), \qquad G(x) = \arg\max_{y\in\Gamma(x)} f(x, y) = \{y \in \Gamma(x) : h(x) = f(x, y)\}.$$
Theorem 3.6 (Berge). Let $X \subseteq \mathbb{R}^l$ and $Y \subseteq \mathbb{R}^m$. Let $f : X \times Y \to \mathbb{R}$ be continuous and $\Gamma : X \to Y$ be compact-valued and continuous. Then $h : X \to \mathbb{R}$ is continuous and $G : X \to Y$ is non-empty, compact valued, and u.h.c.

lim max = max lim

Theorem 3.8. Suppose $\{f_n(x, y)\}$ and $f(x, y)$ are concave in $y$, $\Gamma$ is convex and compact valued, and $f_n \to f$ in the sup norm (uniformly). Define
$$g_n(x) = \arg\max_{y\in\Gamma(x)} f_n(x, y), \qquad g(x) = \arg\max_{y\in\Gamma(x)} f(x, y).$$
Then $g_n(x) \to g(x)$ for all $x$ (pointwise convergence); if $X$ is compact, the convergence is uniform.

Uses of the Corollary of the CMThm

Monotonicity of $v$.
Theorem 4.7. Assume that $F(\cdot, y)$ is increasing and that $\Gamma$ is increasing, i.e. $\Gamma(x) \subseteq \Gamma(x')$ for $x \le x'$. Then the unique fixed point $v$ satisfying $v = Tv$ is increasing. If $F(\cdot, y)$ is strictly increasing, so is $v$.

Proof

By the corollary of the CMThm, it suffices to show that $Tf$ is increasing if $f$ is increasing. Let $x \le x'$:
$$\begin{aligned}
Tf(x) &= \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\} \\
&= F(x, y^*) + \beta f(y^*) \quad \text{for some } y^* \in \Gamma(x) \\
&\le F(x', y^*) + \beta f(y^*) \\
&\le \max_{y\in\Gamma(x')} \{F(x', y) + \beta f(y)\} = Tf(x'),
\end{aligned}$$
since $y^* \in \Gamma(x) \subseteq \Gamma(x')$. If $F(\cdot, y)$ is strictly increasing, then
$$F(x, y^*) + \beta f(y^*) < F(x', y^*) + \beta f(y^*).$$

Concavity

(Or strict) concavity of $v$.
Theorem 4.8. Assume that $X$ is convex and $\Gamma$ is concave, i.e. $y \in \Gamma(x)$, $y' \in \Gamma(x')$ implies that
$$y^\theta \equiv \theta y' + (1 - \theta) y \in \Gamma(\theta x' + (1 - \theta) x) \equiv \Gamma(x^\theta)$$
for any $x, x' \in X$ and $\theta \in (0, 1)$. Finally, assume that $F$ is concave in $(x, y)$. Then the fixed point $v$ satisfying $v = Tv$ is concave in $x$. Moreover, if $F(\cdot, y)$ is strictly concave, so is $v$.

Convergence of Policy Functions

With concavity of $F$ and convexity of $\Gamma$, the optimal policy correspondence $G(x)$ is actually a continuous function $g(x)$. Since $v_n \to v$ uniformly, $g_n \to g$ (Theorem 3.8). We can use this to derive comparative statics.

Differentiability

Can't use the same strategy as with monotonicity or concavity: the space of differentiable functions is not closed. Many envelope theorems imply differentiability of
$$h(x) = \max_{y\in\Gamma(x)} f(x, y).$$
Formula: if $h(x)$ is differentiable and there exists a maximizer $y^* \in \mathrm{int}(\Gamma(x))$, then $h'(x) = f_x(x, y^*)$... but is $h$ differentiable?

One approach (e.g. demand theory) relies on smoothness of $\Gamma$ and $f$ (twice differentiability), using f.o.c. and the implicit function theorem. It won't work for us, since $f(x, y) = F(x, y) + \beta v(y)$ and we don't know if $f$ is even once differentiable yet! Going in circles...

Benveniste and Scheinkman

First a lemma...
Lemma. Suppose $v(x)$ is concave and that there exists $w(x)$ such that $w(x) \le v(x)$, with $v(x_0) = w(x_0)$, in some neighborhood $D$ of $x_0$, and $w$ is differentiable at $x_0$ ($w'(x_0)$ exists). Then $v$ is differentiable at $x_0$ and $v'(x_0) = w'(x_0)$.
Proof. Since $v$ is concave it has at least one subgradient $p$ at $x_0$:
$$w(x) - w(x_0) \le v(x) - v(x_0) \le p (x - x_0).$$
Thus any subgradient of $v$ is also a subgradient of $w$. But $w$ is differentiable at $x_0$, so it has a unique subgradient there, equal to $w'(x_0)$.

Benveniste and Scheinkman

Now a theorem.
Theorem. Suppose $F$ is strictly concave and $\Gamma$ is convex. If $x_0 \in \mathrm{int}(X)$ and $g(x_0) \in \mathrm{int}(\Gamma(x_0))$, then the fixed point $V$ of $T$ is differentiable at $x_0$ and
$$V'(x_0) = F_x(x_0, g(x_0)).$$
Proof. We know $V$ is concave. Since $x_0 \in \mathrm{int}(X)$ and $g(x_0) \in \mathrm{int}(\Gamma(x_0))$, we have $g(x_0) \in \mathrm{int}(\Gamma(x))$ for $x \in D$, a neighborhood of $x_0$. Then
$$W(x) = F(x, g(x_0)) + \beta V(g(x_0))$$
satisfies $W(x) \le V(x)$ and $W(x_0) = V(x_0)$, with $W'(x_0) = F_x(x_0, g(x_0))$, so the result follows from the lemma.

Recursive Methods

Outline of Today's Lecture:
- discuss Matlab code
- differentiability of the value function
- application: neoclassical growth model
- homogeneous and unbounded returns, more applications

Review of Bounded Returns Theorems

$$(Tv)(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta v(y)\}$$
$F$ is bounded and continuous, and $\Gamma$ is continuous and compact valued.
Theorem 4.6. $T$ is a contraction.
Theorem 4.7. $F(\cdot, y)$ and $\Gamma$ increasing $\implies$ $v$ is increasing. If $F(\cdot, y)$ is strictly increasing, $v$ is strictly increasing.
Theorem 4.8. $X$, $\Gamma$ convex and $F$ concave in $(x, y)$ $\implies$ $v$ is concave in $x$. If $F(\cdot, y)$ is strictly concave, $v$ is strictly concave and the optimal correspondence $G(x)$ is a continuous function $g(x)$.
Theorem 4.9. $g_n \to g$.

Differentiability

Can't use the same strategy as with monotonicity or concavity: the space of differentiable functions is not closed. Many envelope theorems imply differentiability of
$$h(x) = \max_{y\in\Gamma(x)} f(x, y).$$
Formula: if $h(x)$ is differentiable and there exists a maximizer $y^* \in \mathrm{int}(\Gamma(x))$, then $h'(x) = f_x(x, y^*)$... but is $h$ differentiable?

One approach (e.g. demand theory) relies on smoothness of $\Gamma$ and $f$ (twice differentiability), using f.o.c. and the implicit function theorem. It won't work for us, since $f(x, y) = F(x, y) + \beta v(y)$ and we don't know if $f$ is even once differentiable yet! Going in circles...

Benveniste and Scheinkman

First a lemma...
Lemma. Suppose $v(x)$ is concave and that there exists $w(x)$ such that $w(x) \le v(x)$, with $v(x_0) = w(x_0)$, in some neighborhood $D$ of $x_0$, and $w$ is differentiable at $x_0$ ($w'(x_0)$ exists). Then $v$ is differentiable at $x_0$ and $v'(x_0) = w'(x_0)$.
Proof. Since $v$ is concave it has at least one subgradient $p$ at $x_0$:
$$w(x) - w(x_0) \le v(x) - v(x_0) \le p (x - x_0).$$
Thus any subgradient of $v$ is also a subgradient of $w$. But $w$ is differentiable at $x_0$, so it has a unique subgradient there, equal to $w'(x_0)$.

Benveniste and Scheinkman

Now a theorem.
Theorem. Suppose $F$ is strictly concave and $\Gamma$ is convex. If $x_0 \in \mathrm{int}(X)$ and $g(x_0) \in \mathrm{int}(\Gamma(x_0))$, then the fixed point $V$ of $T$ is differentiable at $x_0$ and
$$V'(x_0) = F_x(x_0, g(x_0)).$$
Proof. We know $V$ is concave. Since $x_0 \in \mathrm{int}(X)$ and $g(x_0) \in \mathrm{int}(\Gamma(x_0))$, we have $g(x_0) \in \mathrm{int}(\Gamma(x))$ for $x \in D$, a neighborhood of $x_0$. Then
$$W(x) = F(x, g(x_0)) + \beta V(g(x_0))$$
satisfies $W(x) \le V(x)$ and $W(x_0) = V(x_0)$, with $W'(x_0) = F_x(x_0, g(x_0))$, so the result follows from the lemma.

Recursive Methods

Outline of Today's Lecture:
- neoclassical growth application: use all the theorems
- constant returns to scale; homogeneous returns
- unbounded returns

Constant Returns

$$F(\lambda x, \lambda y) = \lambda F(x, y) \quad \text{for } \lambda \ge 0$$
and
$$x \in X \implies \lambda x \in X \text{ for } \lambda \ge 0 \quad \text{(i.e. } X \text{ is a cone)}$$
$$y \in \Gamma(x) \implies \lambda y \in \Gamma(\lambda x) \text{ for } \lambda \ge 0 \quad \text{(the graph of } \Gamma \text{, denoted } A \text{, is a cone)}$$

Restrictions

Since $F$ is unbounded: is the sup finite? Is the max well defined? Can we apply the Principle of Optimality?
1. Restrict $\Gamma$: for some $\alpha$ such that $\alpha\beta < 1$, the state can't grow too fast:
$$y \in \Gamma(x) \implies \|y\| \le \alpha \|x\|.$$
2. Restrict $F$: for some $0 < B < \infty$,
$$F(x, y) \le B (\|x\| + \|y\|) \quad \text{for all } (x, y) \in A,$$
a weak boundedness condition: unboundedness is only allowed along rays.

Implications

$\|x_t\| \le \alpha^t \|x_0\|$ for $\tilde{x} \in \Pi(x_0)$, all $x_0 \in X$. Thus:
$$\begin{aligned}
u_n(\tilde{x}) - u_{n-1}(\tilde{x}) &= \beta^n F(x_n, x_{n+1}) \le \beta^n B (\|x_n\| + \|x_{n+1}\|) \\
&\le \beta^n B \left( \alpha^n \|x_0\| + \alpha^{n+1} \|x_0\| \right) = (\beta\alpha)^n B (1 + \alpha) \|x_0\| \to 0,
\end{aligned}$$
so $u_n(\tilde{x})$ is Cauchy $\implies$ $u_n(\tilde{x}) \to u(\tilde{x})$. So we have A1 and A2 $\implies$ Theorems 4.2 and 4.4 apply.

The Supremum's Properties

We established that $v : X \to \mathbb{R}$. Note that $u(\lambda \tilde{x}) = \lambda u(\tilde{x})$ and $\tilde{x} \in \Pi(x_0) \iff \lambda \tilde{x} \in \Pi(\lambda x_0)$, so $v$ must be homogeneous of degree 1:
$$v(\lambda x_0) = \sup_{\tilde{x} \in \Pi(\lambda x_0)} u(\tilde{x}) = \sup_{\tilde{x}/\lambda \in \Pi(x_0)} \lambda\, u\!\left(\frac{\tilde{x}}{\lambda}\right) = \lambda \sup_{\tilde{x} \in \Pi(x_0)} u(\tilde{x}) = \lambda v(x_0).$$

$$\begin{aligned}
u(\tilde{x}) &= \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1}) \le \sum_{t=0}^{\infty} \beta^t B \left( \alpha^t \|x_0\| + \alpha^{t+1} \|x_0\| \right) \\
&= B (1 + \alpha) \|x_0\| \sum_{t=0}^{\infty} (\beta\alpha)^t = \frac{B (1 + \alpha)}{1 - \beta\alpha} \|x_0\|
\end{aligned}$$
$\implies$ $v(x_0) \le c \|x_0\|$ for some $c \in \mathbb{R}$.

What Space to Use?

$$H(X) = \left\{ f : X \to \mathbb{R} \; : \; f \text{ is continuous and homogeneous of degree 1, and } \frac{f(x)}{\|x\|} \text{ is bounded} \right\}$$
$H(X)$ is complete with the norm
$$\|f\| = \sup_{x \in X,\, \|x\| = 1} |f(x)| = \sup_{x \in X} \frac{|f(x)|}{\|x\|}.$$
Define the operator $T : H(X) \to H(X)$:
$$Tf(x) = \max_{y\in\Gamma(x)} \{F(x, y) + \beta f(y)\}.$$

Properties

For the operator $T : H(X) \to H(X)$, note that for any $v \in H(X)$,
$$\beta^t |v(x_t)| \le \beta^t c \|x_t\| \le (\alpha\beta)^t c \|x_0\| \to 0,$$
thus $\beta^t v(x_t) \to 0$ for all feasible plans (Theorems 4.3 and 4.5 apply) $\implies$ if $T$ has a unique fixed point, it is $v^* \in H(X)$. Is $T$ a contraction?

Is T a Contraction?

Modify Blackwell's conditions (stated for bounded functions) to show that $T$ is a contraction; this is the approach in SLP. Note that
$$\frac{Tf(x)}{\|x\|} = \max_{y\in\Gamma(x)} \left\{ \frac{1}{\|x\|} F(x, y) + \beta \frac{\|y\|}{\|x\|} f\!\left(\frac{y}{\|y\|}\right) \right\} = \max_{y\in\Gamma(x)} \left\{ F\!\left(\frac{x}{\|x\|}, \frac{y}{\|x\|}\right) + \beta \frac{\|y\|}{\|x\|} f\!\left(\frac{y}{\|y\|}\right) \right\}.$$
Idea: study a related operator on the space of continuous functions defined on $\{\|x\| = 1\}$.

Related Operator

Let $\hat{X} = X \cap \{x : \|x\| = 1\}$. Define $\hat{T} : C(\hat{X}) \to C(\hat{X})$ as
$$\hat{T}f(x) = \max_{y\in\Gamma(x)} \left\{ F(x, y) + \beta \|y\|\, f\!\left(\frac{y}{\|y\|}\right) \right\}.$$
$\hat{T}$ satisfies all our assumptions about bounded returns $\implies$ $\hat{T}$ is a contraction of modulus $\alpha\beta < 1$.

Yes, T is a Contraction!

Since $\hat{T}$ is a contraction of modulus $\alpha\beta < 1$,
$$\sup_{x \in \hat{X}} |\hat{T}f - \hat{T}g| \le \alpha\beta \sup_{x \in \hat{X}} |f - g|.$$
For $f, g \in H(X)$,
$$\|Tf - Tg\| = \sup_{x \in X} \frac{|Tf(x) - Tg(x)|}{\|x\|} = \sup_{x \in \hat{X}} |\hat{T}f - \hat{T}g| \le \alpha\beta \sup_{x \in \hat{X}} |f - g| = \alpha\beta \|f - g\|,$$
so $T$ is a contraction on $H(X)$.

Renormalizing

Studying a related operator is convenient in practice: it reduces dimensionality! $\|x\| = 1$ is not necessarily the most convenient normalization. Another (much used) normalization: if $x = (x_1, x_2) \in \mathbb{R}^n$ with $x_1 \in \mathbb{R}$, then use $x_1 = 1$.

Homogeneous Returns of Degree θ

Similar tricks work (see Alvarez and Stokey, JET). Rough idea for $\theta > 0$:
$$F(\lambda x, \lambda y) = \lambda^\theta F(x, y), \qquad F(x, y) \le B (\|x\| + \|y\|)^\theta \quad \text{for all } (x, y) \in A,$$
$\Gamma$ as before, but now with $\alpha$ such that $\beta\alpha^\theta < 1$. The arguments are exactly parallel; in particular, $T$ is a contraction of modulus $\beta\alpha^\theta$. For $\theta < 0$ and $\theta = 0$ there are several complications with the origin... but they can be surmounted.

Unbounded Returns and Monotonicity

Numerically one cannot handle unbounded returns. Idea: $T$ may not be a contraction, but all is not lost: it is still monotone.

1. Start from $v_0 \ge v^*$.
2. IF $Tv_0 = v_1 \le v_0$, then define $v_n = T^n v_0$ (a decreasing sequence).
3. IF $\lim_{n\to\infty} \beta^n v_0(x_n) \le 0$ for all $\tilde{x} \in \Pi(x_0)$, all $x_0$, then clearly $v_n(x) \to v(x)$ for all $x \in X$, for some $v : X \to \mathbb{R}$.
4. IF $Tv = v$ (is this implied by $v_n \to v$?) THEN $v = v^*$.
This can be used for quadratic returns.

Unbounded Returns and Monotonicity

Squeezing argument:
1. Suppose $v_L(x) \le v^*(x) \le v_U(x)$.
2. If $T^n v_L(x) \to v$ and $T^n v_U(x) \to v$, THEN $v = v^*$.
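The monotone-iteration idea can be sketched on a toy bounded problem (an assumed example, so the fixed point is known in closed form): starting from $v_0 \ge v^*$ with $Tv_0 \le v_0$, the iterates $T^n v_0$ decrease pointwise to the fixed point.

```python
import numpy as np

# Monotone iteration sketch (assumed toy problem): F(x, y) = -(x - y)**2 - 0.1
# is bounded, and the fixed point is constant: v* = -0.1/(1 - beta) = -1.
beta = 0.9
grid = np.linspace(0.0, 1.0, 101)
F = -(grid[:, None] - grid[None, :]) ** 2 - 0.1

def T(v):
    return np.max(F + beta * v[None, :], axis=1)

v = np.zeros(len(grid))      # v0 = 0 >= v*, and T(v0) = -0.1 <= v0
prev = v
for _ in range(200):
    v = T(v)
    assert np.all(v <= prev + 1e-12)    # T^n v0 is a decreasing sequence
    prev = v

assert np.allclose(v, -1.0, atol=1e-6)  # converged to the fixed point v* = -1
```

This is exactly step 2 of the scheme above: monotonicity of $T$ propagates $Tv_0 \le v_0$ into a decreasing, convergent sequence even without a contraction argument.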

Next Class

We're done with Chapter 4. Next class: deterministic dynamics, Chapter 6, and the Boldrin–Montrucchio 1986 paper.

Recursive Methods

Outline of Today's Lecture:
- "Anything goes": Boldrin–Montrucchio
- global stability: Liapunov functions
- linear dynamics
- local stability: linear approximation of the Euler equations

Anything Goes

Treat the case $X = [0, 1] \subset \mathbb{R}$ for simplicity. Take any $g(x) : [0, 1] \to [0, 1]$ that is twice continuously differentiable on $[0, 1]$, so that $g'(x)$ and $g''(x)$ exist and are bounded. Define
$$W(x, y) = -\frac{1}{2} y^2 + y\, g(x) - \frac{L}{2} x^2.$$
Lemma: $W$ is strictly concave for large enough $L$.

Proof

$$W(x, y) = -\frac{1}{2} y^2 + y\, g(x) - \frac{L}{2} x^2$$
$$W_1 = y\, g'(x) - L x, \qquad W_2 = -y + g(x)$$
$$W_{11} = y\, g''(x) - L, \qquad W_{22} = -1, \qquad W_{12} = g'(x)$$
Thus $W_{22} < 0$, and $W_{11} < 0$ is satisfied if $L \ge \max_x |g''(x)|$. Moreover,
$$W_{11} W_{22} - W_{12} W_{21} = -\left( y\, g''(x) - L \right) - g'(x)^2 > 0 \iff L > g'(x)^2 + y\, g''(x),$$
so $L > \left[ \max_x |g'(x)| \right]^2 + \max_x |g''(x)|$ will do.
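For a concrete $g$, take the chaotic logistic map $g(x) = 4x(1-x)$ (an assumed choice for illustration): $\max|g'| = 4$ and $|g''| = 8$, so any $L > 4^2 + 8 = 24$, say $L = 25$, makes $W$ strictly concave, as a grid check of the Hessian confirms.

```python
import numpy as np

# Boldrin-Montrucchio construction for the logistic map g(x) = 4*x*(1 - x)
# (an assumed concrete choice): with L = 25 > (max|g'|)**2 + max|g''| = 16 + 8,
# W(x, y) = -y**2/2 + y*g(x) - L*x**2/2 is strictly concave on [0, 1]^2.
L = 25.0
gp = lambda x: 4.0 - 8.0 * x      # g'(x)
gpp = -8.0                        # g''(x), constant for the logistic map

xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs, indexing="ij")
W11 = Y * gpp - L                 # second derivatives of W
W22 = -1.0
W12 = gp(X)

assert np.all(W11 < 0)                       # negative diagonal entry
assert np.all(W11 * W22 - W12 ** 2 > 0)      # positive determinant
# => the Hessian of W is negative definite everywhere on the square
```

The same $g$ then reappears below as the optimal policy of the decomposed problem, which is the "anything goes" point: even chaotic policies are consistent with concave dynamic programming.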

Decomposing W (in a concave way)

Define $V(x) = W(x, g(x))$ and $F$ so that $W(x, y) = F(x, y) + \beta V(y)$, that is,
$$F(x, y) = W(x, y) - \beta V(y).$$
Lemma: $V$ is strictly concave.
Proof: immediate, since $W$ is concave and $X$ is convex. Computing the second derivative is useful anyway:
$$V''(x) = g''(x)\, g(x) + g'(x)^2 - L;$$
since $g \in [0, 1]$, our bound on $L$ clearly implies $V''(x) < 0$.

Concavity of F

Lemma: $F$ is concave for $\beta \in \left[ 0, \bar{\beta} \right]$, for some $\bar{\beta} > 0$.
$$F_{11}(x, y) = W_{11}(x, y) = y\, g''(x) - L$$
$$F_{12}(x, y) = W_{12}(x, y) = g'(x)$$
$$F_{22}(x, y) = W_{22} - \beta V'' = -1 - \beta \left[ g''(x)\, g(x) + g'(x)^2 - L \right]$$
We need
$$F_{11} F_{22} - F_{12}^2 = \left( y\, g''(x) - L \right) \left( -1 - \beta \left[ g''(x)\, g(x) + g'(x)^2 - L \right] \right) - g'(x)^2 > 0.$$

...Concavity of F

Let
$$\eta_1(\beta) = \min_{x, y} \left( -F_{22} \right), \qquad \eta_2(\beta) = \min_{x, y} \left( F_{11} F_{22} - F_{12}^2 \right), \qquad \eta(\beta) = \min \{\eta_1(\beta), \eta_2(\beta)\}.$$
For $\beta = 0$, $\eta(\beta) > 0$. $\eta$ is continuous (Theorem of the Maximum), so there exists $\bar{\beta} > 0$ such that $\eta(\beta) \ge 0$ for all $\beta \in \left[ 0, \bar{\beta} \right]$.

Monotonicity

Use
$$W(x, y) = -\frac{1}{2} y^2 + y\, g(x) - \frac{L_1}{2} x^2 + L_2 x.$$
$L_2$ does not affect the second derivatives. Claim: $F$ is monotone for large enough $L_2$.

Recursive Methods

Outline of Today's Lecture:
- linearization argument
- review of linear dynamics
- stability theorem for non-linear dynamics

Linearization Argument

Euler equation:
$$F_y(x, g(x)) + \beta F_x(g(x), g(g(x))) = 0$$
Steady state:
$$F_y(x^*, x^*) + \beta F_x(x^*, x^*) = 0$$
$g'(x^*)$ gives the dynamics of $x_t$ close to a steady state; first-order Taylor approximation:
$$x_{t+1} - x^* \approx g'(x^*)(x_t - x^*)$$
Local stability if $|g'(x^*)| < 1$.

Computing g'(x*)

Differentiating the Euler equation at the steady state:
$$0 = F_{yx}(x^*, x^*) + F_{yy}(x^*, x^*)\, g'(x^*) + \beta F_{xx}(x^*, x^*)\, g'(x^*) + \beta F_{xy}(x^*, x^*) \left[ g'(x^*) \right]^2,$$
a quadratic in $g'(x^*)$: two candidates for $g'(x^*)$. The roots come in reciprocal pairs: if $\lambda$ is a solution, so is $1/(\lambda\beta)$. Indeed,
$$0 = F_{yx}(x^*, x^*) + \left[ F_{yy}(x^*, x^*) + \beta F_{xx}(x^*, x^*) \right] \lambda + \beta F_{xy}(x^*, x^*)\, \lambda^2;$$
dividing by $\beta\lambda^2$ and using $F_{yx}(x^*, x^*) = F_{xy}(x^*, x^*)$,
$$0 = \beta F_{yx}(x^*, x^*) \left( \frac{1}{\beta\lambda} \right)^2 + \left[ F_{yy}(x^*, x^*) + \beta F_{xx}(x^*, x^*) \right] \frac{1}{\beta\lambda} + F_{xy}(x^*, x^*),$$
which is the same quadratic evaluated at $1/(\beta\lambda)$. Thus if $|\lambda_1| < 1$, then $\lambda_2 = 1/(\beta\lambda_1) > 1$.

Using g'(x*)

Take $x_0$ close to the steady state $x^*$. If the smaller root has absolute value less than one, consider the sequence $\{x_{t+1}\}$:
$$x_{t+1} = x^* + g'(x^*)(x_t - x^*) \quad \text{for } t \ge 0.$$
This sequence satisfies the (linearized) Euler equations; since $|g'(x^*)| < 1$, it converges to the steady state $x^*$, and hence it satisfies the transversality condition. If $F$ is concave, we have found a solution. If both $|\lambda_1| > 1$ and $|\lambda_2| > 1$, then we do not know which one describes $g'(x^*)$, if any, but we do know that the steady state is not locally stable.

Neoclassical Growth Model

$F(x, y) = U(f(x) - y)$, so
$$F_x(x, y) = U'(f(x) - y)\, f'(x), \qquad F_y(x, y) = -U'(f(x) - y),$$
$$F_{xx}(x, y) = U''(f(x) - y)\, f'(x)^2 + U'(f(x) - y)\, f''(x), \qquad F_{yy}(x, y) = U''(f(x) - y), \qquad F_{xy}(x, y) = -U''(f(x) - y)\, f'(x).$$
The steady state $k^*$ solves $1 = \beta f'(k^*)$. Substituting into
$$0 = F_{xy} + \left[ F_{yy} + \beta F_{xx} \right] g' + \beta F_{xy} (g')^2$$
and dividing through by $-\beta U'' f' = -U''$ (using $\beta f'(k^*) = 1$):
$$0 = \frac{1}{\beta} - \left[ 1 + \frac{1}{\beta} + \frac{f''}{f'} \frac{U'}{U''} \right] g' + (g')^2.$$

99 The quadratic function
$Q(\lambda) = 1/\beta - \left[1 + 1/\beta + \frac{f''}{f'}\frac{U'}{U''}\right]\lambda + \lambda^2.$
Notice that: $Q(0) = 1/\beta > 0$; $Q(1) = -\frac{f''}{f'}\frac{U'}{U''} < 0$; $Q(1/\beta) = -\frac{1}{\beta}\frac{f''}{f'}\frac{U'}{U''} < 0$; $Q'(\lambda^\circ) = 0$ at $\lambda^\circ = \left[1 + 1/\beta + \frac{f''}{f'}\frac{U'}{U''}\right]/2 > 1$; and $Q(\lambda) > 0$ for $\lambda$ large. Introduction to Dynamic Optimization Nr. 7

100 So $0 = Q(\lambda_1) = Q(\lambda_2)$ with $0 < \lambda_1 < 1 < 1/\beta < \lambda_2$. The smallest root is $\lambda_1 = g'(k^*)$; it changes with $\frac{f''}{f'}\frac{U'}{U''}$, which controls the speed of convergence. Introduction to Dynamic Optimization Nr. 8
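For a concrete check, take log utility and Cobb-Douglas technology with full depreciation (the Brock-Mirman case, where the exact policy $k' = \alpha\beta A k^\alpha$ is known, so $g'(k^*) = \alpha$). A sketch, with $\alpha$, $\beta$, $A$ as illustrative values:

```python
import numpy as np

alpha, beta, A = 0.3, 0.95, 1.0

kstar = (alpha * beta * A) ** (1 / (1 - alpha))      # from beta * f'(k*) = 1
cstar = A * kstar ** alpha - kstar
fp = alpha * A * kstar ** (alpha - 1)                # f'(k*) = 1/beta
fpp = alpha * (alpha - 1) * A * kstar ** (alpha - 2)
Up, Upp = 1 / cstar, -1 / cstar ** 2                 # U' and U'' for log utility

# Q(lam) = lam^2 - (1 + 1/beta + (f''/f')(U'/U'')) * lam + 1/beta
a = 1 + 1 / beta + (fpp / fp) * (Up / Upp)
lam1, lam2 = sorted(np.roots([1.0, -a, 1 / beta]))

assert 0 < lam1 < 1 < 1 / beta < lam2
assert np.isclose(lam1, alpha)    # smallest root equals the known g'(k*) = alpha
```

The last assertion is the slide's claim in closed form: for this example the stable root is exactly $\alpha$, while the explosive root is $1/(\alpha\beta)$.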

101 Stability of linear dynamic systems of higher dimensions: $y_{t+1} = A y_t$ with steady state $\bar y = 0$; assume $A$ is non-singular. Diagonalizing the matrix $A$ we obtain $A = P^{-1} \Lambda P$, where $\Lambda$ is a diagonal matrix with the eigenvalues $\lambda_i$ on its diagonal and the matrix $P$ contains the eigenvectors of $A$. Introduction to Dynamic Optimization Nr. 9

102 Write the linear system as $P y_{t+1} = \Lambda P y_t$ for $t \geq 0$, or, defining $z_t \equiv P y_t$, as $z_{t+1} = \Lambda z_t$ for $t \geq 0$. Introduction to Dynamic Optimization Nr. 9A

103 Stability Theorem. Let the $\lambda_i$ be such that $|\lambda_i| < 1$ for $i = 1, 2, \ldots, m$ and $|\lambda_i| \geq 1$ for $i = m+1, m+2, \ldots, n$. Consider the sequence $y_{t+1} = A y_t$ for $t \geq 0$ for some initial condition $y_0$. Then $\lim_{t \to \infty} y_t = 0$ if and only if the initial condition satisfies $y_0 = P^{-1} \hat z_0$, where $\hat z_0$ is a vector with its $n - m$ last coordinates equal to zero, i.e. $\hat z_{i0} = 0$ for $i = m+1, m+2, \ldots, n$. Introduction to Dynamic Optimization Nr. 10
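A quick numerical illustration of the theorem with a hypothetical 2x2 system (one stable, one unstable eigenvalue): initial conditions in the span of the stable eigenvector converge to zero, while generic initial conditions do not.

```python
import numpy as np

# Illustrative system: eigenvalues 0.5 (stable) and 1.4 (unstable)
A = np.array([[0.5, 0.3],
              [0.0, 1.4]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are right eigenvectors

# y0 on the stable eigenvector's span: y_t -> 0
y_stable = eigvecs[:, np.abs(eigvals) < 1].ravel()
for _ in range(200):
    y_stable = A @ y_stable

# Generic y0 (loads on the unstable root): y_t diverges
y_generic = np.array([1.0, 1.0])
for _ in range(50):
    y_generic = A @ y_generic

assert np.allclose(y_stable, 0)
assert np.linalg.norm(y_generic) > 1e3
```

Here $m = 1$ and $n = 2$: the admissible initial conditions form a one-dimensional subspace, exactly as the theorem's condition $\hat z_{20} = 0$ says.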

104 Non-linear version. Take $x_{t+1} = h(x_t)$ and let $A$ be the ($n \times n$) Jacobian of $h$ at the steady state. Assume $I - A$ is nonsingular and that $A$ has eigenvalues with $|\lambda_i| < 1$ for $i = 1, 2, \ldots, m$ and $|\lambda_i| \geq 1$ for $i = m+1, m+2, \ldots, n$. Then there is a neighbourhood of $x^*$, call it $U$, and a continuously differentiable function $\varphi : U \to \mathbb{R}^{n-m}$ such that $x_t$ is stable if and only if $x_0 \in U$ and $\varphi(x_0) = 0$. The Jacobian of $\varphi$ has rank $n - m$. Idea: we can solve $\varphi = 0$ for the $n - m$ last coordinates as functions of the first $m$ coordinates. Introduction to Dynamic Optimization Nr. 11

105 Second-order difference equation: $x_{t+2} = A_1 x_{t+1} + A_2 x_t$ with $x_t \in \mathbb{R}^n$ and initial conditions $x_0$ and $x_1$. Define $X_t = (x_t, x_{t-1})$; then $X_{t+1} = J X_t$, where the $2n \times 2n$ matrix $J$ has four $n \times n$ blocks:
$J = \begin{pmatrix} A_1 & A_2 \\ I & 0 \end{pmatrix}$. Introduction to Dynamic Optimization Nr. 12
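Stacking into first order is mechanical; a sketch with arbitrary illustrative blocks $A_1$, $A_2$:

```python
import numpy as np

n = 2
A1 = np.array([[0.4, 0.1],
               [0.0, 0.3]])
A2 = np.array([[0.1, 0.0],
               [0.2, 0.1]])

# J has four n x n blocks: [[A1, A2], [I, 0]]
J = np.block([[A1, A2],
              [np.eye(n), np.zeros((n, n))]])

# X_1 = (x_1, x_0); the stacked map gives X_2 = (x_2, x_1)
x0 = np.array([1.0, -1.0])
x1 = np.array([0.5, 2.0])
X2 = J @ np.concatenate([x1, x0])

assert np.allclose(X2[:n], A1 @ x1 + A2 @ x0)   # top block is x_2 from the recursion
assert np.allclose(X2[n:], x1)                  # bottom block just copies x_1
```

The eigenvalues of $J$ are then the roots governing stability of the second-order system, so the previous stability theorem applies directly to $J$.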

106 Linearized Euler equations. Idea: apply the second-order linear stability theory to the linearized Euler equation $F_y(x_t, x_{t+1}) + \beta F_x(x_{t+1}, x_{t+2}) = 0$, which implicitly defines $x_{t+2} = h(x_{t+1}, x_t)$. The stacked system is $X_{t+1} = H(X_t)$ with $X_t = (x_{t+1}, x_t)$ and $H(X_t) = (h(x_{t+1}, x_t),\, x_{t+1})$. Then compute the Jacobian of $H$ and use our non-linear theorem. Remark: the roots will come in reciprocal pairs. Introduction to Dynamic Optimization Nr. 13

107 Recursive Methods Recursive Methods Nr. 1

108 Outline Today's Lecture: Dynamic Programming under Uncertainty (notation of the sequence problem; leave the study of dynamics for next week); Dynamic Recursive Games: Abreu-Pearce-Stacchetti; Application: today's Macro seminar. Recursive Methods Nr. 2

109 Dynamic Programming with Uncertainty. A general model of uncertainty needs Measure Theory; for simplicity, take a finite state space $S$ and a Markov process for $s$ (recursive uncertainty): $\Pr(s_{t+1} \mid s^t) = p(s_{t+1} \mid s_t)$. The sequence problem is
$v(x_0, s_0) = \sup_{\{x_{t+1}(\cdot)\}} \sum_{t=0}^{\infty} \sum_{s^t} \beta^t F\big(x_t(s^{t-1}), x_{t+1}(s^t)\big) \Pr(s^t \mid s_0)$
subject to $x_{t+1}(s^t) \in \Gamma\big(x_t(s^{t-1})\big)$, with $x_0$ given. Recursive Methods Nr. 3

110 Dynamic Programming Functional Equation (Bellman Equation):
$v(x, s) = \sup_y \Big\{ F(x, y) + \beta \sum_{s'} v(y, s')\, p(s' \mid s) \Big\}$
or simply (or more generally)
$v(x, s) = \sup_y \{ F(x, y) + \beta\, E[v(y, s') \mid s] \},$
where $E[\cdot \mid s]$ is the conditional expectation operator over $s'$ given $s$. Basically the same tools apply: Principle of Optimality, Contraction Mapping (bounded case), Monotonicity [actually: differentiability is sometimes easier!]. The notational gain is huge! Recursive Methods Nr. 4
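As an illustration, value iteration on a coarse grid for a stochastic-growth-flavored version of this Bellman equation; the grid, shock values, and transition matrix below are all made up for the sketch:

```python
import numpy as np

beta, alpha = 0.9, 0.36
grid = np.linspace(0.1, 4.0, 60)                 # values of x (and of y)
shocks = np.array([0.8, 1.2])                    # finite state space S (illustrative)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                       # p(s'|s), rows sum to one

# F[i, j, k]: log consumption from choosing y = grid[j] at x = grid[i], s = shocks[k]
resources = shocks[None, None, :] * grid[:, None, None] ** alpha
c = resources - grid[None, :, None]
F = np.where(c > 1e-12, np.log(np.maximum(c, 1e-12)), -np.inf)

v = np.zeros((len(grid), len(shocks)))
for _ in range(600):
    Ev = v @ P.T                                 # Ev[j, k] = E[v(y_j, s') | s_k]
    v_new = np.max(F + beta * Ev[None, :, :], axis=1)
    if np.max(np.abs(v_new - v)) < 1e-9:
        v = v_new
        break
    v = v_new

g = grid[np.argmax(F + beta * (v @ P.T)[None, :, :], axis=1)]  # policy g(x, s)
assert np.all(np.isfinite(v))
```

The expectation is one matrix product, which is exactly the notational gain the slide refers to: the shock enters only through the row of $P$ used to average tomorrow's value.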

111 Policy Rules: more intuitive too! There is a fundamental change in the notion of a solution: an optimal policy $g(x, s)$ vs. an optimal sequence of contingent plans $\{x_{t+1}(s^t)\}_{t=0}^{\infty}$. Question: how can we use $g$ to understand the dynamics of the solution? (important for many models) Answer: next week... Recursive Methods Nr. 5

112 Abreu, Pearce and Stacchetti (APS): Dynamic Programming for Dynamic Games. Idea: subgame perfect equilibria of repeated games have a recursive structure; players care about future strategies only through their associated utility values. APS study a general $N$-person game with non-observable actions; we follow Ljungqvist-Sargent: a continuum of identical agents vs. a benevolent government, with time consistency problems (credibility through reputation). Agent $i$ has preferences $u(x_i, x, y)$, where $x$ is the average across the $x_i$'s. Recursive Methods Nr. 6

113 One-Period Competitive Equilibria:
$C = \{(x, y) : x \in \arg\max_{x_i} u(x_i, x, y)\};$
assume $x = h(y)$ for all $(x, y) \in C$.
1. Dictatorial allocation: $\max_{x, y} u(x, x, y)$ (wishful thinking!)
2. Ramsey commitment allocation: $\max_{(x, y) \in C} u(x, x, y)$ (wishful thinking?)
3. Nash equilibrium $(x^N, y^N)$ (might be a bad outcome):
$x^N \in \arg\max_x u(x, x^N, y^N)$, i.e. $(x^N, y^N) \in C$, and $y^N \in \arg\max_y u(x^N, x^N, y)$, i.e. $y^N = H(x^N)$. Recursive Methods Nr. 7

114 Kydland-Prescott / Barro-Gordon: $v(u, \pi) = -u^2 - \pi^2$ with $u = \bar u - (\pi - \pi^e)$. Agent $i$'s payoff is
$u(\pi^e_i, \pi^e, \pi) = v(\bar u - (\pi - \pi^e), \pi) - \lambda (\pi^e_i - \pi)^2,$
so in equilibrium $\pi^e_i = \pi^e = \pi$, i.e. $h(\pi) = \pi$. Taking $\lambda \to 0$, the payoff is
$-(\bar u - (\pi - \pi^e))^2 - \pi^2 = -(\bar u - \pi + \pi^e)^2 - \pi^2.$
First best, Ramsey:
$\max_\pi \{-(\bar u - \pi + h(\pi))^2 - \pi^2\} = \max_\pi \{-\bar u^2 - \pi^2\} \Rightarrow \pi^* = 0.$ Recursive Methods Nr. 8

115 Kydland-Prescott / Barro-Gordon: Nash outcome. The government's optimal reaction is
$\max_\pi \{-(\bar u - \pi + \pi^e)^2 - \pi^2\},$
giving $\pi = H(\pi^e) = (\bar u + \pi^e)/2$. The Nash equilibrium is then $\pi = H(h(\pi)) = H(\pi) = (\bar u + \pi)/2$, which implies $\pi^{eN} = \pi^N = \bar u$: unemployment stays at $\bar u$ but inflation is positive, so everyone is worse off. Andy Atkeson: adds a shock $\theta$ that is private info of the government (macro seminar). Recursive Methods Nr. 9
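These two slides are easy to reproduce numerically; a sketch with $\bar u$ set to 1 (any positive value works the same way):

```python
import numpy as np

ubar = 1.0                                    # natural rate (illustrative value)
H = lambda pie: (ubar + pie) / 2              # government's best response to expectations
payoff = lambda pie, pi: -(ubar - pi + pie) ** 2 - pi ** 2

# Ramsey (commitment, pie = pi): best inflation is 0
pis = np.linspace(-2, 2, 4001)
ramsey_pi = pis[np.argmax([payoff(p, p) for p in pis])]
assert abs(ramsey_pi) < 1e-6

# Nash: fixed point pi = H(pi), which is pi = ubar
pi = 0.0
for _ in range(100):
    pi = H(pi)
assert abs(pi - ubar) < 1e-9

# Commitment beats discretion: -ubar^2 > -2*ubar^2
assert payoff(ramsey_pi, ramsey_pi) > payoff(ubar, ubar)
```

The fixed-point iteration converges because $H$ has slope $1/2$; the payoff comparison is the slide's "worse off" claim in numbers.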

116 Infinitely Repeated Economy. Payoff for the government:
$V^g = (1 - \delta) \sum_{t=1}^{\infty} \delta^{t-1}\, r(x_t, y_t),$
where $r(x, y) = u(x, x, y)$. Strategies: $\sigma = (\sigma^h, \sigma^g)$ with $\sigma^g = \{\sigma^g_t(x^{t-1}, y^{t-1})\}_{t=0}^{\infty}$ and $\sigma^h = \{\sigma^h_t(x^{t-1}, y^{t-1})\}_{t=0}^{\infty}$; they induce $\{x_t, y_t\}$, from which we can write $V^g(\sigma)$. Continuation strategies: after a history $(x^t, y^t)$ we write $\sigma|_{(x^t, y^t)}$. Recursive Methods Nr. 10

117 Subgame Perfect Equilibrium. A strategy profile $\sigma = (\sigma^h, \sigma^g)$ is a subgame perfect equilibrium of the infinitely repeated economy if for each $t \geq 1$ and each history $(x^{t-1}, y^{t-1}) \in X^{t-1} \times Y^{t-1}$:
1. The outcome $x_t = \sigma^h_t(x^{t-1}, y^{t-1})$ is a competitive equilibrium given that $y_t = \sigma^g_t(x^{t-1}, y^{t-1})$, i.e. $(x_t, y_t) \in C$.
2. For each $\hat y \in Y$,
$(1 - \delta) r(x_t, y_t) + \delta V^g(\sigma|_{(x^t, y^t)}) \geq (1 - \delta) r(x_t, \hat y) + \delta V^g(\sigma|_{(x^t;\, y^{t-1}, \hat y)})$
(one-shot deviations are not optimal). Recursive Methods Nr. 11

118 Lemma. Take $\sigma$ and let $x$ and $y$ be the associated first-period outcome. Then $\sigma$ is subgame perfect if and only if:
1. for all $(\hat x, \hat y) \in X \times Y$, $\sigma|_{(\hat x, \hat y)}$ is a subgame perfect equilibrium;
2. $(x, y) \in C$;
3. for all $\hat y \in Y$,
$(1 - \delta) r(x, y) + \delta V^g(\sigma|_{(x, y)}) \geq (1 - \delta) r(x, \hat y) + \delta V^g(\sigma|_{(\hat x, \hat y)}).$
Note the stellar role of $V^g(\sigma|_{(x, y)})$ and $V^g(\sigma|_{(\hat x, \hat y)})$: they are all that matters for checking whether it is best to play $y$ or deviate... Idea: think about values as fundamental. Recursive Methods Nr. 12

119 Values of all SPE. The set $V$ of values:
$V = \{V^g(\sigma) : \sigma \text{ is a subgame perfect equilibrium}\}.$
Let $W \subset \mathbb{R}$. A 4-tuple $(x, y, \omega_1, \omega_2)$ is said to be admissible with respect to $W$ if $(x, y) \in C$, $\omega_1, \omega_2 \in W$, and
$(1 - \delta) r(x, y) + \delta \omega_1 \geq (1 - \delta) r(x, \hat y) + \delta \omega_2 \quad \forall \hat y \in Y.$ Recursive Methods Nr. 13

120 B(W) operator. Definition: For each set $W \subset \mathbb{R}$, let $B(W)$ be the set of possible values $\omega = (1 - \delta) r(x, y) + \delta \omega_1$ associated with admissible tuples $(x, y, \omega_1, \omega_2)$ w.r.t. $W$:
$B(W) \equiv \{\omega : \exists (x, y) \in C \text{ and } \omega_1, \omega_2 \in W \text{ s.t. } \omega = (1 - \delta) r(x, y) + \delta \omega_1 \text{ and } (1 - \delta) r(x, y) + \delta \omega_1 \geq (1 - \delta) r(x, \hat y) + \delta \omega_2 \ \forall \hat y \in Y\}.$
Note that $V$ is a fixed point: $B(V) = V$. We will see that $V$ is the biggest fixed point. Recursive Methods Nr. 14

121 Monotonicity of $B$: if $W \subset W' \subset \mathbb{R}$ then $B(W) \subset B(W')$.
Theorem (self-generation): If $W \subset \mathbb{R}$ is bounded and $W \subset B(W)$ (self-generating), then $B(W) \subset V$.
Proof sketch. Step 1: for any $\omega \in W \subset B(W)$ we can choose $x$, $y$, $\omega_1$, and $\omega_2$ with $\omega = (1 - \delta) r(x, y) + \delta \omega_1$ and
$(1 - \delta) r(x, y) + \delta \omega_1 \geq (1 - \delta) r(x, \hat y) + \delta \omega_2 \quad \forall \hat y \in Y.$
Step 2: since $\omega_1, \omega_2 \in W$, do the same thing for them as in Step 1; continue in this way... Recursive Methods Nr. 15

122 Three facts and an algorithm: $V \subset B(V)$; if $W \subset B(W)$, then $B(W) \subset V$ (by self-generation); $B$ is monotone and maps compact sets into compact sets. Algorithm: start with $W_0$ such that $V \subset B(W_0) \subset W_0$, then define $W_n = B^n(W_0)$; then $W_n \to V$. Proof: since the $W_n$ are decreasing (and compact) they must converge; the limit must be a fixed point, but $V$ is the biggest fixed point. Recursive Methods Nr. 16

123 Finding V. In this simple case we can do more: the lowest $v$ is self-enforcing, the highest $v$ is self-rewarding.
$v_{low} = \min_{(x, y) \in C,\; v \in V} \{(1 - \delta) r(x, y) + \delta v\}$
subject to
$(1 - \delta) r(x, y) + \delta v \geq (1 - \delta) r(x, \hat y) + \delta v_{low} \quad \forall \hat y \in Y.$
Thus $v_{low} = (1 - \delta) r(h(y), y) + \delta v \geq (1 - \delta) r(h(y), H(h(y))) + \delta v_{low}$; if the constraint binds and $v > v_{low}$, then minimize the RHS of the inequality:
$v_{low} = \min_y r(h(y), H(h(y))).$ Recursive Methods Nr. 17

124 Best Value. For the best, use the worst to punish and the best as reward, so solve:
$v_{high} = \max_{(x, y) \in C} \{(1 - \delta) r(x, y) + \delta v_{high}\}$
subject to
$(1 - \delta) r(x, y) + \delta v_{high} \geq (1 - \delta) r(x, \hat y) + \delta v_{low} \quad \forall \hat y \in Y.$
Then clearly $v_{high} = r(x, y)$:
$v_{high} = \max_y r(h(y), y)$ subject to $r(h(y), y) \geq (1 - \delta) r(h(y), H(h(y))) + \delta v_{low}.$
If the constraint is not binding: Ramsey (first best); otherwise the value is constrained by $v_{low}$. Recursive Methods Nr. 18
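In the Kydland-Prescott example the $v_{high}$ constraint can be checked directly: with reversion to the static Nash value as punishment, Ramsey ($\pi = 0$) is sustainable for a patient government but not for a myopic one. A sketch ($\bar u = 1$ is illustrative, and Nash reversion is used in place of the true worst punishment):

```python
import numpy as np

ubar = 1.0
H = lambda pie: (ubar + pie) / 2                        # static best response
r = lambda pie, pi: -(ubar - pi + pie) ** 2 - pi ** 2   # r(h(y), y), with h(y) = y

v_nash = r(ubar, ubar)                                  # repeated static Nash: -2*ubar^2

def sustainable(pi, delta, v_low):
    # on-path value r(pi, pi) vs one-shot deviation to H(pi) followed by v_low forever
    return r(pi, pi) >= (1 - delta) * r(pi, H(pi)) + delta * v_low

# With a patient government, Ramsey (pi = 0) is sustainable...
assert sustainable(0.0, delta=0.9, v_low=v_nash)

# ...but with delta = 0 only the static Nash inflation survives
pis = np.linspace(0, ubar, 1001)
ok = [p for p in pis if sustainable(p, delta=0.0, v_low=v_nash)]
assert np.isclose(min(ok), ubar)
```

With $\delta = 0$ the constraint reduces to $(1 - \pi)^2 \leq 0$, so only $\pi = \bar u$ passes, matching the one-shot analysis.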

125 Recursive Methods Recursive Methods Nr. 1

126 Outline Today's Lecture: continue APS: worst and best value; Application: Insurance with Limited Commitment; stochastic dynamics. Recursive Methods Nr. 2

127 B(W) operator. Definition: For each set $W \subset \mathbb{R}$, let $B(W)$ be the set of possible values $\omega = (1 - \delta) r(x, y) + \delta \omega_1$ associated with admissible tuples $(x, y, \omega_1, \omega_2)$ w.r.t. $W$:
$B(W) \equiv \{\omega : \exists (x, y) \in C \text{ and } \omega_1, \omega_2 \in W \text{ s.t. } \omega = (1 - \delta) r(x, y) + \delta \omega_1 \text{ and } (1 - \delta) r(x, y) + \delta \omega_1 \geq (1 - \delta) r(x, \hat y) + \delta \omega_2 \ \forall \hat y \in Y\}.$
Note that $V$ is a fixed point, $B(V) = V$; actually, $V$ is the biggest fixed point [the fixed point is not necessarily unique!]. Recursive Methods Nr. 3

128 Finding V. In this simple case we can do more: the lowest $v$ is self-enforcing, the highest $v$ is self-rewarding. Then
$v_{low} = \min_{(x, y) \in C,\; v \in V} \{(1 - \delta) r(x, y) + \delta v\}$
subject to
$(1 - \delta) r(x, y) + \delta v \geq (1 - \delta) r(x, \hat y) + \delta v_{low} \quad \forall \hat y \in Y;$
so $v_{low} = (1 - \delta) r(h(y), y) + \delta v \geq (1 - \delta) r(h(y), H(h(y))) + \delta v_{low}$; if the constraint binds and $v > v_{low}$, minimize the RHS of the inequality:
$v_{low} = \min_y r(h(y), H(h(y))).$ Recursive Methods Nr. 4

129 Best Value. For the best, use the worst to punish and the best as reward, so solve:
$v_{high} = \max_{(x, y) \in C} \{(1 - \delta) r(x, y) + \delta v_{high}\}$
subject to
$(1 - \delta) r(x, y) + \delta v_{high} \geq (1 - \delta) r(x, \hat y) + \delta v_{low} \quad \forall \hat y \in Y.$
Then clearly $v_{high} = r(x, y)$:
$v_{high} = \max_y r(h(y), y)$ subject to $r(h(y), y) \geq (1 - \delta) r(h(y), H(h(y))) + \delta v_{low}.$
If the constraint is not binding: Ramsey (first best); otherwise the value is constrained by $v_{low}$. Recursive Methods Nr. 5

130 Insurance with Limited Commitment. Two agents with utility $u(c^A)$ and $u(c^B)$; $y^A_t$ is i.i.d. over $[y_{low}, y_{high}]$ and $y^B_t = \bar y - y^A_t$ has the same distribution as $y^A_t$ (symmetry). Define $w_{aut} = E u(y)/(1 - \beta)$. Let $[w_l(y), w_h(y)]$ be the set of attainable levels of utility for A when A has income $y$ (by symmetry it is also that of B with income $\bar y - y$). Let $v(w, y)$ for $w \in [w_l, w_h]$ be the highest utility for B given that A is promised $w$ and has income $y$ (the Pareto frontier). Recursive Methods Nr. 6

131 Recursive Representation:
$v(w, y) = \max \{u(c^B) + \beta E v(w'(y'), y')\}$
subject to
$w = u(c^A) + \beta E w'(y'),$
$u(c^A) + \beta E w'(y') \geq u(y) + \beta w_{aut},$
$u(c^B) + \beta E v(w'(y'), y') \geq u(\bar y - y) + \beta w_{aut},$
$c^A + c^B \leq \bar y,$
$w'(y') \in [w_l(y'), w_h(y')].$
Is this a contraction? NO. Is it monotonic? YES. We should solve for $[w_l(y), w_h(y)]$ jointly: clearly $w_l(y) = u(y) + \beta w_{aut}$, and $w_h(y)$ is such that $v(w_h(y), y) = u(\bar y - y) + \beta w_{aut}$. Recursive Methods Nr. 7

132 Stochastic Dynamics. The output of stochastic dynamic programming is an optimal policy $x_{t+1} = g(x_t, z_t)$. Convergence to a steady state? On rare occasions (but not necessarily never...). Convergence to something? Recursive Methods Nr. 8

133 Notion of Convergence. Idea: start at $t = 0$ with some $x_0$ and $z_0$; compute $x_1 = g(x_0, z_0)$: $x_1$ is not uncertain from the $t = 0$ view. Then $z_1$ is realized; compute $x_2 = g(x_1, z_1)$: $x_2$ is random from the point of view of $t = 0$. Continue: $x_3, x_4, x_5, \ldots, x_t$ are random variables from the $t = 0$ perspective. Let $F_t(x_t)$ be the distribution of $x_t$ (given $x_0, z_0$); more generally, think of the joint distribution of $(x, z)$. Convergence concept: $\lim_{t \to \infty} F_t(x) = F(x)$. Recursive Methods Nr. 9

134 Examples: stochastic growth model, Brock-Mirman ($\delta = 0$): $u(c) = \log c$, $f(A, k) = A k^\alpha$, $A_t$ i.i.d.; optimal policy $k_{t+1} = s A_t k_t^\alpha$ with $s = \beta\alpha$. Recursive Methods Nr. 10
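Taking logs of the Brock-Mirman policy turns it into a stationary AR(1), $\log k_{t+1} = \log s + \alpha \log k_t + \log A_t$, so the convergence of $F_t$ can be seen directly in simulation; the shock distribution below is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.36, 0.95
s = alpha * beta
T = 200_000
logA = rng.normal(0.0, 0.1, T)        # i.i.d. shocks (distribution is illustrative)

logk = np.empty(T + 1)
logk[0] = 0.0
for t in range(T):
    logk[t + 1] = np.log(s) + alpha * logk[t] + logA[t]

# Stationary mean of the AR(1): E[log k] = (log s + E[log A]) / (1 - alpha)
mean_theory = np.log(s) / (1 - alpha)  # E[log A] = 0 here
assert abs(logk[5000:].mean() - mean_theory) < 0.01
```

After a burn-in, the empirical distribution of $\log k_t$ settles down around this mean: that limiting distribution is the $F$ of the previous slide.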

135 Examples: search model (last recitation): employment states $u$ and $e$ (also the wage if we want); the invariant distribution gives the steady-state unemployment rate; if uncertainty is idiosyncratic in a large population, $F$ can be interpreted as a cross section. Recursive Methods Nr. 11

136 Bewley / Aiyagari income fluctuations problem:
$v(a, y; R) = \max_{0 \leq a' \leq Ra + y} \{u(Ra + y - a') + \beta E[v(a', y'; R) \mid y]\}$
with solution $a' = g(a, y; R)$ and invariant distribution $F(a; R)$: the cross section of assets in a large population. How does $F$ vary with $R$? (continuously?) Once we have $F$ we can compute moments; market clearing:
$\int a\, dF(a; R) = K.$ Recursive Methods Nr. 12

137 Markov Chains: $N$ states of the world; let $\Pi_{ij}$ be the probability of $s_{t+1} = j$ conditional on $s_t = i$, and $\Pi = (\Pi_{ij})$ the transition matrix. For a distribution $p_0$ over states, $p_1 = \Pi' p_0$ (why?), ..., $p_t = (\Pi')^t p_0$. Does $\Pi^t$ converge? Recursive Methods Nr. 13
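The distribution recursion is one matrix-vector product per period; a sketch with an illustrative three-state chain:

```python
import numpy as np

# Pi[i, j] = Pr(s_{t+1} = j | s_t = i); distributions update as p' = Pi' p
Pi = np.array([[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.2, 0.8]])
p = np.array([1.0, 0.0, 0.0])           # start in state 1 for sure

for _ in range(500):
    p = Pi.T @ p                        # p_t = (Pi')^t p_0

# The limit is an invariant distribution: p* = Pi' p*
assert np.allclose(p, Pi.T @ p)
assert np.isclose(p.sum(), 1.0)
```

This chain is irreducible and aperiodic, so $p_t$ converges from any starting distribution; the next slides ask when that is guaranteed.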

138 Examples: example 1: $\Pi^t$ converges; example 2: a transient state; example 3: $\Pi^t$ does not converge but fluctuates (cycles); example 4: multiple ergodic sets. Recursive Methods Nr. 14

139 Theorem. Let $S = \{s_1, \ldots, s_l\}$ and let $\Pi$ be a transition matrix. Then: (a) $S$ can be partitioned into $M$ ergodic sets; (b) the sequence $\frac{1}{n} \sum_{k=0}^{n-1} \Pi^k \to Q$; (c) each row of $Q$ is an invariant distribution, and so are their convex combinations. Recursive Methods Nr. 15

140 Theorem. Let $S = \{s_1, \ldots, s_l\}$; then $\Pi$ has a unique ergodic set if and only if there is a state $s_j$ such that for all $i$ there exists an $n \geq 1$ such that $\pi^{(n)}_{ij} > 0$. In this case $\Pi$ has a unique invariant distribution $p^*$, and each row of $Q$ equals $p^*$.
Theorem. Let $\varepsilon^n_j = \min_i \pi^{(n)}_{ij}$ and $\varepsilon^n = \sum_j \varepsilon^n_j$. Then $S$ has a unique ergodic set with no cyclically moving subsets if and only if $\varepsilon^N > 0$ for some $N \geq 1$. In this case $\Pi^n \to Q$. Recursive Methods Nr. 16
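The distinction between the two theorems shows up already in the simplest cyclic chain: $\Pi^n$ never converges, but the Cesaro average in part (b) does.

```python
import numpy as np

# A two-state chain that cycles deterministically between the states
Pi = np.array([[0.0, 1.0],
               [1.0, 0.0]])

# Cesaro average (1/n) * sum_{k=0}^{n-1} Pi^k
n = 1000
Q = np.zeros_like(Pi)
Pk = np.eye(2)
for _ in range(n):
    Q += Pk / n
    Pk = Pk @ Pi

# Pi^n oscillates between I and Pi, so it does not converge...
assert not np.allclose(np.linalg.matrix_power(Pi, 999),
                       np.linalg.matrix_power(Pi, 1000))

# ...yet the average converges to Q, each row an invariant distribution
assert np.allclose(Q, 0.5, atol=1e-3)
assert np.allclose(Q[0] @ Pi, Q[0], atol=1e-3)
```

Here $\varepsilon^n = 0$ for every $n$ (each column of $\Pi^n$ has a zero entry), so the second theorem correctly rules out convergence of $\Pi^n$ itself, while the unique invariant distribution $(1/2, 1/2)$ is still recovered from the average.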

141 Dynamic Optimization and Economic Applications (Recursive Methods) Economics Department Spring 2003

The unifying theme of this course is best captured by the title of our main reference book: Recursive Methods in Economic Dynamics. We start by covering deterministic and stochastic dynamic optimization using dynamic programming analysis. We then study the properties of the resulting dynamic systems. Finally, we will go over a recursive method for repeated games that has proven useful in contract theory and macroeconomics. We shall stress applications and examples of all these techniques throughout the course.

The main reference for the course is (hereafter, SLP): Stokey, Nancy L. and Robert E. Lucas, Jr., with Edward C. Prescott, 1989, Recursive Methods in Economic Dynamics, Cambridge, MA: Harvard University Press. We will follow SLP's exposition as closely as possible, departing only to add more recent developments and applications when needed.

Although some homework assignments will seek to introduce you to solving problems numerically, numerical methods are not a main part of this course. I highly recommend Kenneth Judd's Numerical Methods in Economics as a complement to the material of this course.

The course grade will be based on problem set homework (30%) and a final exam (70%). TA sessions are once a week and will be used mainly to go over problems and some additional material not covered in the lectures.
Course Outline (each topic approximately 1-1½ weeks)
Topic 1: Preliminaries; Euler Equations and Transversality Conditions; Principle of Optimality [SLP Chapters 3-4.1]
Topic 2: Bounded Returns; Differentiability of Value Function; Homogeneous and Unbounded Returns; Applications [SLP Chapters 4-5]
Topic 3: Deterministic Global and Local Dynamics [SLP Chapter 6]
Topic 4: Stochastic Dynamic Programming; Applications; Markov Chains [SLP Chapters 9-11]
Topic 5: Weak Convergence; Applications [SLP Chapters 12-13]
Topic 6: Repeated Games and Dynamic Contracts (APS)

142 Topic 7 (if time permits): Continuous-Time Dynamic Programming and Hamilton-Jacobi-Bellman PDE Equations [Fleming and Soner, Chapter 1].

References

Euler Equations and Transversality: Sufficiency and Necessity
J Weitzman, M. L. (1973) "Duality Theory for Infinite Horizon Convex Models," Management Science 19.
* Benveniste, L. M. and J. A. Scheinkman, "Duality Theory for Dynamic Optimization Models of Economics: The Continuous Time Case," Journal of Economic Theory, v. 27.
* Kamihigashi, Takashi, "Necessity of Transversality Conditions for Infinite Horizon Problems," Econometrica v69, n4 (July 2001).
MIT Kamihigashi, Takashi, "A Simple Proof of the Necessity of the Transversality Condition," Economic Theory v20, n2 (September 2002).
* Araujo, A. and J. A. Scheinkman, "Maximum Principle and Transversality Condition for Concave Infinite Horizon Economic Models," Journal of Economic Theory v30, n1 (June 1983): 1-16.
J Michel, Philippe, "On the Transversality Condition in Infinite Horizon Optimal Problems," Econometrica v50, n4 (July 1982).
J Michel, Philippe, "Some Clarifications on the Transversality Condition," Econometrica v58, n3 (May 1990).
* Leung, Siu Fai, "Transversality Condition and Optimality in a Class of Infinite Horizon Continuous Time Economic Models," Journal of Economic Theory v54, n1 (June 1991).

Differentiability of the Value Function
* Milgrom, Paul and Ilya Segal (2002) "Envelope Theorems for Arbitrary Choice Sets," Econometrica, v. 70, n. 2 (March).

Differentiability of the Policy Function
J Araujo, A., "The Once but Not Twice Differentiability of the Policy Function," Econometrica v59, n5 (September 1991).
J Araujo, A. and J. A. Scheinkman (1977) "Smoothness, Comparative Dynamics, and the Turnpike Property," Econometrica, v. 45, n. 3 (April).
J Santos, Manuel S. (1991) "Smoothness of the Policy Function in Discrete Time Economic Models," Econometrica, v. 59, n. 5.

Homogeneous Dynamic Programming
MIT Alvarez, Fernando and Nancy L. Stokey (1998) "Dynamic Programming with Homogeneous Functions," Journal of Economic Theory, 82.

Complicated Dynamics from Dynamic Optimization Problems
* Boldrin, Michele and Luigi Montrucchio (1986) "On the Indeterminacy of Capital Accumulation Paths," Journal of Economic Theory, 40.
MIT Mitra, Tapan (1998) "On the Relationship between Discounting and Complicated Behavior in Dynamic Optimization Models," Journal of Economic Behavior and Organizations, v. 33.

143 Repeated Games and Dynamic Contracts
J Abreu, Dilip, David Pearce and Ennio Stacchetti (1990) "Toward a Theory of Discounted Repeated Games with Imperfect Monitoring," Econometrica, v. 58, n. 5.
Ljungqvist, Lars and Thomas Sargent, Recursive Macroeconomic Theory, Chapter 16.
* Phelan, Christopher and Ennio Stacchetti, "Sequential Equilibria in a Ramsey Tax Model," Econometrica v69, n6 (November 2001).
* Athey, Susan, Andrew Atkeson and Patrick J. Kehoe (2001) "On the Optimality of Transparent Monetary Policy," Federal Reserve Bank of Minneapolis, Working Paper 613.
* Athey, Susan, Kyle Bagwell and Chris Sanchirico (2002) "Collusion and Price Rigidity," manuscript.
* Levin, Jonathan (2002) "Relational Incentive Contracts," manuscript, Stanford University.

Continuous-Time Dynamic Programming
Fleming, Wendell Helms and H. Mete Soner (1993) Controlled Markov Processes and Viscosity Solutions, Applications of Mathematics, vol. 25, Springer-Verlag [Chapter 1].


1 THE GAME. Two players, i=1, 2 U i : concave, strictly increasing f: concave, continuous, f(0) 0 β (0, 1): discount factor, common

1 THE GAME. Two players, i=1, 2 U i : concave, strictly increasing f: concave, continuous, f(0) 0 β (0, 1): discount factor, common 1 THE GAME Two players, i=1, 2 U i : concave, strictly increasing f: concave, continuous, f(0) 0 β (0, 1): discount factor, common With Law of motion of the state: Payoff: Histories: Strategies: k t+1

More information

Asymmetric Information in Economic Policy. Noah Williams

Asymmetric Information in Economic Policy. Noah Williams Asymmetric Information in Economic Policy Noah Williams University of Wisconsin - Madison Williams Econ 899 Asymmetric Information Risk-neutral moneylender. Borrow and lend at rate R = 1/β. Strictly risk-averse

More information

A Quick Introduction to Numerical Methods

A Quick Introduction to Numerical Methods Chapter 5 A Quick Introduction to Numerical Methods One of the main advantages of the recursive approach is that we can use the computer to solve numerically interesting models. There is a wide variety

More information

Uncertainty Per Krusell & D. Krueger Lecture Notes Chapter 6

Uncertainty Per Krusell & D. Krueger Lecture Notes Chapter 6 1 Uncertainty Per Krusell & D. Krueger Lecture Notes Chapter 6 1 A Two-Period Example Suppose the economy lasts only two periods, t =0, 1. The uncertainty arises in the income (wage) of period 1. Not that

More information

2. What is the fraction of aggregate savings due to the precautionary motive? (These two questions are analyzed in the paper by Ayiagari)

2. What is the fraction of aggregate savings due to the precautionary motive? (These two questions are analyzed in the paper by Ayiagari) University of Minnesota 8107 Macroeconomic Theory, Spring 2012, Mini 1 Fabrizio Perri Stationary equilibria in economies with Idiosyncratic Risk and Incomplete Markets We are now at the point in which

More information

ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko

ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko ECON 582: Dynamic Programming (Chapter 6, Acemoglu) Instructor: Dmytro Hryshko Indirect Utility Recall: static consumer theory; J goods, p j is the price of good j (j = 1; : : : ; J), c j is consumption

More information

The Principle of Optimality

The Principle of Optimality The Principle of Optimality Sequence Problem and Recursive Problem Sequence problem: Notation: V (x 0 ) sup {x t} β t F (x t, x t+ ) s.t. x t+ Γ (x t ) x 0 given t () Plans: x = {x t } Continuation plans

More information

Economics 2010c: Lecture 2 Iterative Methods in Dynamic Programming

Economics 2010c: Lecture 2 Iterative Methods in Dynamic Programming Economics 2010c: Lecture 2 Iterative Methods in Dynamic Programming David Laibson 9/04/2014 Outline: 1. Functional operators 2. Iterative solutions for the Bellman Equation 3. Contraction Mapping Theorem

More information

Lecture 4: The Bellman Operator Dynamic Programming

Lecture 4: The Bellman Operator Dynamic Programming Lecture 4: The Bellman Operator Dynamic Programming Jeppe Druedahl Department of Economics 15th of February 2016 Slide 1/19 Infinite horizon, t We know V 0 (M t ) = whatever { } V 1 (M t ) = max u(m t,

More information

Optimal Growth Models and the Lagrange Multiplier

Optimal Growth Models and the Lagrange Multiplier CORE DISCUSSION PAPER 2003/83 Optimal Growth Models and the Lagrange Multiplier Cuong Le Van, H. Cagri Saglam November 2003 Abstract We provide sufficient conditions on the objective functional and the

More information

Lecture 7: Linear-Quadratic Dynamic Programming Real Business Cycle Models

Lecture 7: Linear-Quadratic Dynamic Programming Real Business Cycle Models Lecture 7: Linear-Quadratic Dynamic Programming Real Business Cycle Models Shinichi Nishiyama Graduate School of Economics Kyoto University January 10, 2019 Abstract In this lecture, we solve and simulate

More information

Dynamic Macroeconomic Theory Notes. David L. Kelly. Department of Economics University of Miami Box Coral Gables, FL

Dynamic Macroeconomic Theory Notes. David L. Kelly. Department of Economics University of Miami Box Coral Gables, FL Dynamic Macroeconomic Theory Notes David L. Kelly Department of Economics University of Miami Box 248126 Coral Gables, FL 33134 dkelly@miami.edu Current Version: Fall 2013/Spring 2013 I Introduction A

More information

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming University of Warwick, EC9A0 Maths for Economists 1 of 63 University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming Peter J. Hammond Autumn 2013, revised 2014 University of

More information

ECON 2010c Solution to Problem Set 1

ECON 2010c Solution to Problem Set 1 ECON 200c Solution to Problem Set By the Teaching Fellows for ECON 200c Fall 204 Growth Model (a) Defining the constant κ as: κ = ln( αβ) + αβ αβ ln(αβ), the problem asks us to show that the following

More information

Final Exam - Math Camp August 27, 2014

Final Exam - Math Camp August 27, 2014 Final Exam - Math Camp August 27, 2014 You will have three hours to complete this exam. Please write your solution to question one in blue book 1 and your solutions to the subsequent questions in blue

More information

An Application to Growth Theory

An Application to Growth Theory An Application to Growth Theory First let s review the concepts of solution function and value function for a maximization problem. Suppose we have the problem max F (x, α) subject to G(x, β) 0, (P) x

More information

Chapter 23 Credible Government Policies, I

Chapter 23 Credible Government Policies, I Chapter 23 Credible Government Policies, I 23.1. Introduction Kydland and Prescott (1977) opened the modern discussion of time consistency in macroeconomics with some examples that show how outcomes differ

More information

Introduction to Recursive Methods

Introduction to Recursive Methods Chapter 1 Introduction to Recursive Methods These notes are targeted to advanced Master and Ph.D. students in economics. They can be of some use to researchers in macroeconomic theory. The material contained

More information

1 With state-contingent debt

1 With state-contingent debt STOCKHOLM DOCTORAL PROGRAM IN ECONOMICS Helshögskolan i Stockholm Stockholms universitet Paul Klein Email: paul.klein@iies.su.se URL: http://paulklein.se/makro2.html Macroeconomics II Spring 2010 Lecture

More information

New Notes on the Solow Growth Model

New Notes on the Solow Growth Model New Notes on the Solow Growth Model Roberto Chang September 2009 1 The Model The firstingredientofadynamicmodelisthedescriptionofthetimehorizon. In the original Solow model, time is continuous and the

More information

Necessity of the Transversality Condition for Stochastic Models with Bounded or CRRA Utility

Necessity of the Transversality Condition for Stochastic Models with Bounded or CRRA Utility Necessity of the Transversality Condition for Stochastic Models with Bounded or CRRA Utility Takashi Kamihigashi RIEB, Kobe University tkamihig@rieb.kobe-u.ac.jp March 9, 2004 Abstract This paper shows

More information

Ergodicity and Non-Ergodicity in Economics

Ergodicity and Non-Ergodicity in Economics Abstract An stochastic system is called ergodic if it tends in probability to a limiting form that is independent of the initial conditions. Breakdown of ergodicity gives rise to path dependence. We illustrate

More information

HOMEWORK #1 This homework assignment is due at 5PM on Friday, November 3 in Marnix Amand s mailbox.

HOMEWORK #1 This homework assignment is due at 5PM on Friday, November 3 in Marnix Amand s mailbox. Econ 50a (second half) Yale University Fall 2006 Prof. Tony Smith HOMEWORK # This homework assignment is due at 5PM on Friday, November 3 in Marnix Amand s mailbox.. Consider a growth model with capital

More information

ECON607 Fall 2010 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2

ECON607 Fall 2010 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2 ECON607 Fall 200 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2 The due date for this assignment is Tuesday, October 2. ( Total points = 50). (Two-sector growth model) Consider the

More information

Recursive Contracts and Endogenously Incomplete Markets

Recursive Contracts and Endogenously Incomplete Markets Recursive Contracts and Endogenously Incomplete Markets Mikhail Golosov, Aleh Tsyvinski and Nicolas Werquin January 2016 Abstract In this chapter we study dynamic incentive models in which risk sharing

More information

Dynamic stochastic game and macroeconomic equilibrium

Dynamic stochastic game and macroeconomic equilibrium Dynamic stochastic game and macroeconomic equilibrium Tianxiao Zheng SAIF 1. Introduction We have studied single agent problems. However, macro-economy consists of a large number of agents including individuals/households,

More information

1. Money in the utility function (start)

1. Money in the utility function (start) Monetary Economics: Macro Aspects, 1/3 2012 Henrik Jensen Department of Economics University of Copenhagen 1. Money in the utility function (start) a. The basic money-in-the-utility function model b. Optimal

More information

ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a

ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a 316-406 ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a Chris Edmond hcpedmond@unimelb.edu.aui Dynamic programming and the growth model Dynamic programming and closely related recursive methods provide an important

More information

Permanent Income Hypothesis Intro to the Ramsey Model

Permanent Income Hypothesis Intro to the Ramsey Model Consumption and Savings Permanent Income Hypothesis Intro to the Ramsey Model Lecture 10 Topics in Macroeconomics November 6, 2007 Lecture 10 1/18 Topics in Macroeconomics Consumption and Savings Outline

More information

Second Welfare Theorem

Second Welfare Theorem Second Welfare Theorem Econ 2100 Fall 2015 Lecture 18, November 2 Outline 1 Second Welfare Theorem From Last Class We want to state a prove a theorem that says that any Pareto optimal allocation is (part

More information

ADVANCED MACRO TECHNIQUES Midterm Solutions

ADVANCED MACRO TECHNIQUES Midterm Solutions 36-406 ADVANCED MACRO TECHNIQUES Midterm Solutions Chris Edmond hcpedmond@unimelb.edu.aui This exam lasts 90 minutes and has three questions, each of equal marks. Within each question there are a number

More information

Organizational Equilibrium with Capital

Organizational Equilibrium with Capital Organizational Equilibrium with Capital Marco Bassetto, Zhen Huo, and José-Víctor Ríos-Rull FRB of Chicago, Yale University, University of Pennsylvania, UCL, CAERP Fiscal Policy Conference Mar 20, 2018

More information

Macroeconomics II Dynamic macroeconomics Class 1: Introduction and rst models

Macroeconomics II Dynamic macroeconomics Class 1: Introduction and rst models Macroeconomics II Dynamic macroeconomics Class 1: Introduction and rst models Prof. George McCandless UCEMA Spring 2008 1 Class 1: introduction and rst models What we will do today 1. Organization of course

More information

HOMEWORK #3 This homework assignment is due at NOON on Friday, November 17 in Marnix Amand s mailbox.

HOMEWORK #3 This homework assignment is due at NOON on Friday, November 17 in Marnix Amand s mailbox. Econ 50a second half) Yale University Fall 2006 Prof. Tony Smith HOMEWORK #3 This homework assignment is due at NOON on Friday, November 7 in Marnix Amand s mailbox.. This problem introduces wealth inequality

More information

Solving Extensive Form Games

Solving Extensive Form Games Chapter 8 Solving Extensive Form Games 8.1 The Extensive Form of a Game The extensive form of a game contains the following information: (1) the set of players (2) the order of moves (that is, who moves

More information

1 The Basic RBC Model

1 The Basic RBC Model IHS 2016, Macroeconomics III Michael Reiter Ch. 1: Notes on RBC Model 1 1 The Basic RBC Model 1.1 Description of Model Variables y z k L c I w r output level of technology (exogenous) capital at end of

More information

Topic 9. Monetary policy. Notes.

Topic 9. Monetary policy. Notes. 14.452. Topic 9. Monetary policy. Notes. Olivier Blanchard May 12, 2007 Nr. 1 Look at three issues: Time consistency. The inflation bias. The trade-off between inflation and activity. Implementation and

More information

Introduction to Dynamic Programming Lecture Notes

Introduction to Dynamic Programming Lecture Notes Introduction to Dynamic Programming Lecture Notes Klaus Neusser November 30, 2017 These notes are based on the books of Sargent (1987) and Stokey and Robert E. Lucas (1989). Department of Economics, University

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Equilibrium Analysis of Dynamic Economies

Equilibrium Analysis of Dynamic Economies Equilibrium Analysis of Dynamic Economies Preliminary Version Juan Manuel Licari Department of Economics University of Pennsylvania April, 2005 Abstract We study equilibrium properties of discrete-time,

More information

"0". Doing the stuff on SVARs from the February 28 slides

0. Doing the stuff on SVARs from the February 28 slides Monetary Policy, 7/3 2018 Henrik Jensen Department of Economics University of Copenhagen "0". Doing the stuff on SVARs from the February 28 slides 1. Money in the utility function (start) a. The basic

More information

Organizational Equilibrium with Capital

Organizational Equilibrium with Capital Organizational Equilibrium with Capital Marco Bassetto, Zhen Huo, and José-Víctor Ríos-Rull FRB of Chicago, New York University, University of Pennsylvania, UCL, CAERP HKUST Summer Workshop in Macroeconomics

More information

PROFIT FUNCTIONS. 1. REPRESENTATION OF TECHNOLOGY 1.1. Technology Sets. The technology set for a given production process is defined as

PROFIT FUNCTIONS. 1. REPRESENTATION OF TECHNOLOGY 1.1. Technology Sets. The technology set for a given production process is defined as PROFIT FUNCTIONS 1. REPRESENTATION OF TECHNOLOGY 1.1. Technology Sets. The technology set for a given production process is defined as T {x, y : x ɛ R n, y ɛ R m : x can produce y} where x is a vector

More information

Optimization, Part 2 (november to december): mandatory for QEM-IMAEF, and for MMEF or MAEF who have chosen it as an optional course.

Optimization, Part 2 (november to december): mandatory for QEM-IMAEF, and for MMEF or MAEF who have chosen it as an optional course. Paris. Optimization, Part 2 (november to december): mandatory for QEM-IMAEF, and for MMEF or MAEF who have chosen it as an optional course. Philippe Bich (Paris 1 Panthéon-Sorbonne and PSE) Paris, 2016.

More information

Multiple Interior Steady States in the Ramsey Model with Elastic Labor Supply

Multiple Interior Steady States in the Ramsey Model with Elastic Labor Supply Multiple Interior Steady States in the Ramsey Model with Elastic Labor Supply Takashi Kamihigashi March 18, 2014 Abstract In this paper we show that multiple interior steady states are possible in the

More information

Real Business Cycle Model (RBC)

Real Business Cycle Model (RBC) Real Business Cycle Model (RBC) Seyed Ali Madanizadeh November 2013 RBC Model Lucas 1980: One of the functions of theoretical economics is to provide fully articulated, artificial economic systems that

More information

Implementability, Walrasian Equilibria, and Efficient Matchings

Implementability, Walrasian Equilibria, and Efficient Matchings Implementability, Walrasian Equilibria, and Efficient Matchings Piotr Dworczak and Anthony Lee Zhang Abstract In general screening problems, implementable allocation rules correspond exactly to Walrasian

More information

Graduate Microeconomics II Lecture 5: Cheap Talk. Patrick Legros

Graduate Microeconomics II Lecture 5: Cheap Talk. Patrick Legros Graduate Microeconomics II Lecture 5: Cheap Talk Patrick Legros 1 / 35 Outline Cheap talk 2 / 35 Outline Cheap talk Crawford-Sobel Welfare 3 / 35 Outline Cheap talk Crawford-Sobel Welfare Partially Verifiable

More information

1. Using the model and notations covered in class, the expected returns are:

1. Using the model and notations covered in class, the expected returns are: Econ 510a second half Yale University Fall 2006 Prof. Tony Smith HOMEWORK #5 This homework assignment is due at 5PM on Friday, December 8 in Marnix Amand s mailbox. Solution 1. a In the Mehra-Prescott

More information

Dynamic Optimization Problem. April 2, Graduate School of Economics, University of Tokyo. Math Camp Day 4. Daiki Kishishita.

Dynamic Optimization Problem. April 2, Graduate School of Economics, University of Tokyo. Math Camp Day 4. Daiki Kishishita. Discrete Math Camp Optimization Problem Graduate School of Economics, University of Tokyo April 2, 2016 Goal of day 4 Discrete We discuss methods both in discrete and continuous : Discrete : condition

More information

Economics 101 Lecture 5 - Firms and Production

Economics 101 Lecture 5 - Firms and Production Economics 101 Lecture 5 - Firms and Production 1 The Second Welfare Theorem Last week we proved the First Basic Welfare Theorem, which states that under fairly weak assumptions, a Walrasian equilibrium

More information

ECOM 009 Macroeconomics B. Lecture 2

ECOM 009 Macroeconomics B. Lecture 2 ECOM 009 Macroeconomics B Lecture 2 Giulio Fella c Giulio Fella, 2014 ECOM 009 Macroeconomics B - Lecture 2 40/197 Aim of consumption theory Consumption theory aims at explaining consumption/saving decisions

More information

ADVANCED MACROECONOMICS 2015 FINAL EXAMINATION FOR THE FIRST HALF OF SPRING SEMESTER

ADVANCED MACROECONOMICS 2015 FINAL EXAMINATION FOR THE FIRST HALF OF SPRING SEMESTER ADVANCED MACROECONOMICS 2015 FINAL EXAMINATION FOR THE FIRST HALF OF SPRING SEMESTER Hiroyuki Ozaki Keio University, Faculty of Economics June 2, 2015 Important Remarks: You must write all your answers

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

On optimal growth models when the discount factor is near 1 or equal to 1

On optimal growth models when the discount factor is near 1 or equal to 1 On optimal growth models when the discount factor is near 1 or equal to 1 Cuong Le Van, Lisa Morhaim To cite this version: Cuong Le Van, Lisa Morhaim. On optimal growth models when the discount factor

More information