University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming


Lecture Notes 10: Dynamic Programming
Peter J. Hammond
Autumn 2013, revised 2014

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

Intertemporal Utility

Consider a household which at time s is planning its intertemporal consumption stream \(c_s^T := (c_s, c_{s+1}, \ldots, c_T)\) over periods t in the set \(\{s, s+1, \ldots, T\}\). Its intertemporal utility function \(\mathbb{R}^{T-s+1} \ni c_s^T \mapsto U_s^T(c_s^T) \in \mathbb{R}\) is assumed to take the additively separable form
\[ U_s^T(c_s^T) := \sum_{t=s}^{T} u_t(c_t) \]
where the one-period felicity functions \(c \mapsto u_t(c)\) are differentiably increasing and strictly concave (DISC), i.e., \(u_t'(c) > 0\) and \(u_t''(c) < 0\) for all t and all c > 0.

As before, the household faces:
1. fixed initial wealth \(w_s\);
2. a terminal wealth constraint \(w_{T+1} \ge 0\).

Risky Wealth Accumulation

Also as before, we assume a wealth accumulation equation \(\tilde w_{t+1} = \tilde r_t (w_t - c_t)\), where \(\tilde r_t\) is the household's gross rate of return on its wealth in period t. It is assumed that:
1. the return \(\tilde r_t\) in each period t is a random variable with positive values;
2. the return distributions for different times t are stochastically independent;
3. starting with predetermined wealth \(w_s\) at time s, the household seeks to maximize the expectation \(\mathbb{E}_s[U_s^T(c_s^T)]\) of its intertemporal utility.

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

Two Period Case

We work backwards from the last period, when s = T. In this last period the household will obviously choose \(c_T = w_T\), yielding a maximized utility equal to \(V_T(w_T) = u_T(w_T)\).

Next, consider the penultimate period, when s = T - 1. The consumer will want to choose \(c_{T-1}\) in order to maximize
\[ \underbrace{u_{T-1}(c_{T-1})}_{\text{period } T-1} \;+\; \underbrace{\mathbb{E}_{T-1} V_T(\tilde w_T)}_{\text{result of an optimal policy in period } T} \]
subject to the wealth constraint
\[ \tilde w_T = \underbrace{\tilde r_{T-1}}_{\text{random gross return}} \; \underbrace{(w_{T-1} - c_{T-1})}_{\text{saving}} \]

First-Order Condition

Substituting both the function \(V_T(w_T) = u_T(w_T)\) and the wealth constraint into the objective reduces the problem to
\[ \max_{c_{T-1}} \big\{ u_{T-1}(c_{T-1}) + \mathbb{E}_{T-1}\big[ u_T\big(\tilde r_{T-1}(w_{T-1} - c_{T-1})\big) \big] \big\} \]
subject to \(0 \le c_{T-1} \le w_{T-1}\) and \(\tilde c_T := \tilde r_{T-1}(w_{T-1} - c_{T-1})\).

Assume we can differentiate under the integral sign, and that there is an interior solution with \(0 < c_{T-1} < w_{T-1}\). Then the first-order condition (FOC) is
\[ 0 = u_{T-1}'(c_{T-1}) + \mathbb{E}_{T-1}\big[ (-\tilde r_{T-1})\, u_T'\big(\tilde r_{T-1}(w_{T-1} - c_{T-1})\big) \big] \]

The Stochastic Euler Equation

Rearranging the first-order condition while recognizing that \(\tilde c_T := \tilde r_{T-1}(w_{T-1} - c_{T-1})\), one obtains
\[ u_{T-1}'(c_{T-1}) = \mathbb{E}_{T-1}\big[ \tilde r_{T-1}\, u_T'\big(\tilde r_{T-1}(w_{T-1} - c_{T-1})\big) \big] \]
Dividing by \(u_{T-1}'(c_{T-1})\) gives the stochastic Euler equation
\[ 1 = \mathbb{E}_{T-1}\left[ \tilde r_{T-1}\, \frac{u_T'(\tilde c_T)}{u_{T-1}'(c_{T-1})} \right] = \mathbb{E}_{T-1}\big[ \tilde r_{T-1}\, \mathrm{MRS}_{T-1}^T(c_{T-1}; \tilde c_T) \big] \]
involving the marginal rate of substitution function
\[ \mathrm{MRS}_{T-1}^T(c_{T-1}; \tilde c_T) := \frac{u_T'(\tilde c_T)}{u_{T-1}'(c_{T-1})} \]

The CES Case

For the marginal utility function \(c \mapsto u'(c)\), its elasticity is defined for all c > 0 by \(\eta(c) := -\,\mathrm{d}\ln u'(c)/\mathrm{d}\ln c\). Then \(\eta(c)\) is both the degree of relative risk aversion, and the degree of relative fluctuation aversion.

A constant elasticity of substitution (or CES) utility function satisfies \(\mathrm{d}\ln u'(c)/\mathrm{d}\ln c = -\epsilon < 0\) for all c > 0. The marginal rate of substitution satisfies \(u'(c)/u'(\bar c) = (c/\bar c)^{-\epsilon}\) for all \(c, \bar c > 0\).

Normalized Utility

Normalize by putting \(u'(1) = 1\), implying that \(u'(c) \equiv c^{-\epsilon}\). Then integrating gives
\[ u(c; \epsilon) = u(1) + \int_1^c x^{-\epsilon}\,\mathrm{d}x = \begin{cases} u(1) + \dfrac{c^{1-\epsilon} - 1}{1-\epsilon} & \text{if } \epsilon \ne 1 \\[1ex] u(1) + \ln c & \text{if } \epsilon = 1 \end{cases} \]
Introduce the final normalization
\[ u(1) = \begin{cases} \dfrac{1}{1-\epsilon} & \text{if } \epsilon \ne 1 \\[1ex] 0 & \text{if } \epsilon = 1 \end{cases} \]
The utility function is reduced to
\[ u(c; \epsilon) = \begin{cases} \dfrac{c^{1-\epsilon}}{1-\epsilon} & \text{if } \epsilon \ne 1 \\[1ex] \ln c & \text{if } \epsilon = 1 \end{cases} \]

The Stochastic Euler Equation in the CES Case

Consider the CES case when \(u_t'(c) \equiv \delta_t c^{-\epsilon}\), where each \(\delta_t\) is the discount factor for period t.

Definition. The one-period discount factor in period t is defined as \(\beta_t := \delta_{t+1}/\delta_t\).

Then the stochastic Euler equation takes the form
\[ 1 = \mathbb{E}_{T-1}\left[ \tilde r_{T-1}\, \beta_{T-1} \left( \frac{\tilde c_T}{c_{T-1}} \right)^{-\epsilon} \right] \]
Because \(c_{T-1}\) is being chosen at time T - 1, this implies that
\[ (c_{T-1})^{-\epsilon} = \mathbb{E}_{T-1}\big[ \tilde r_{T-1}\, \beta_{T-1}\, (\tilde c_T)^{-\epsilon} \big] \]

The Two Period Problem in the CES Case

In the two-period case, we know that \(\tilde c_T = \tilde w_T = \tilde r_{T-1}(w_{T-1} - c_{T-1})\) in the last period, so the Euler equation becomes
\[ (c_{T-1})^{-\epsilon} = \mathbb{E}_{T-1}\big[ \tilde r_{T-1}\, \beta_{T-1}\, (\tilde c_T)^{-\epsilon} \big] = \beta_{T-1} (w_{T-1} - c_{T-1})^{-\epsilon}\, \mathbb{E}_{T-1}\big[ (\tilde r_{T-1})^{1-\epsilon} \big] \]
Take the \((-1/\epsilon)\)th power of each side and define
\[ \rho_{T-1} := \Big( \beta_{T-1}\, \mathbb{E}_{T-1}\big[ (\tilde r_{T-1})^{1-\epsilon} \big] \Big)^{1/\epsilon} \]
to reduce the Euler equation to \(c_{T-1} = \rho_{T-1}(w_{T-1} - c_{T-1})\), whose solution is evidently \(c_{T-1} = \gamma_{T-1} w_{T-1}\), where \(\gamma_{T-1} := \rho_{T-1}/(1+\rho_{T-1})\) and \(1 - \gamma_{T-1} = 1/(1+\rho_{T-1})\) are respectively the optimal consumption and savings ratios. It follows that \(\rho_{T-1} = \gamma_{T-1}/(1-\gamma_{T-1})\) is the consumption/savings ratio.
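As a numerical illustration (not part of the original notes), the Python sketch below computes \(\rho_{T-1}\) and \(\gamma_{T-1}\) for an assumed lognormal gross return and then checks by grid search that \(c_{T-1} = \gamma_{T-1} w_{T-1}\) maximizes the two-period objective. All parameter values and the return distribution are illustrative assumptions.

```python
# Numerical check of the two-period CES consumption rule (illustrative, assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
eps, beta, w = 2.0, 0.95, 10.0                 # epsilon, beta_{T-1}, wealth w_{T-1} (assumptions)
r = rng.lognormal(mean=0.05, sigma=0.2, size=10_000)   # assumed lognormal gross return r_{T-1}

def u(c, eps):
    """CES / logarithmic felicity u(c; eps), normalized as in the notes."""
    return np.log(c) if eps == 1.0 else c ** (1.0 - eps) / (1.0 - eps)

# Closed form from the notes: rho = (beta * E[r^(1-eps)])^(1/eps), gamma = rho / (1 + rho).
rho = (beta * np.mean(r ** (1.0 - eps))) ** (1.0 / eps)
gamma = rho / (1.0 + rho)

# Brute-force check: maximize u(c) + beta * E[u(r * (w - c))] over a grid of c in (0, w).
c_grid = np.linspace(1e-3, w - 1e-3, 500)
expected_cont = np.mean(u(np.outer(r, w - c_grid), eps), axis=0)   # E[u(r (w - c))] for each c
objective = u(c_grid, eps) + beta * expected_cont
c_best = c_grid[np.argmax(objective)]

print(f"gamma from the formula : {gamma:.3f}")
print(f"gamma from grid search : {c_best / w:.3f}")
```

The two ratios should agree up to the coarseness of the grid and Monte Carlo error in the simulated expectation.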

Optimal Discounted Expected Utility

The optimal policy in periods T and T - 1 is \(c_t^* = \gamma_t w_t\), where \(\gamma_T = 1\) and \(\gamma_{T-1}\) has just been defined.

In this CES case, the discounted utility of consumption in period T is \(V_T(w_T) := \delta_T u(w_T; \epsilon)\). The discounted expected utility at time T - 1 of consumption in periods T and T - 1 together is
\[ V_{T-1}(w_{T-1}) = \delta_{T-1}\, u(\gamma_{T-1} w_{T-1}; \epsilon) + \delta_T\, \mathbb{E}_{T-1}[u(\tilde w_T; \epsilon)] \]
where \(\tilde w_T = \tilde r_{T-1}(1-\gamma_{T-1}) w_{T-1}\).

Discounted Expected Utility in the Logarithmic Case

In the logarithmic case when \(\epsilon = 1\), one has
\[ V_{T-1}(w_{T-1}) = \delta_{T-1} \ln(\gamma_{T-1} w_{T-1}) + \delta_T\, \mathbb{E}_{T-1}\big[ \ln\big( \tilde r_{T-1}(1-\gamma_{T-1}) w_{T-1} \big) \big] \]
It follows that
\[ V_{T-1}(w_{T-1}) = \alpha_{T-1} + (\delta_{T-1} + \delta_T)\, u(w_{T-1}; \epsilon) \]
where
\[ \alpha_{T-1} := \delta_{T-1} \ln \gamma_{T-1} + \delta_T \big\{ \ln(1-\gamma_{T-1}) + \mathbb{E}_{T-1}[\ln \tilde r_{T-1}] \big\} \]

Discounted Expected Utility in the CES Case

In the CES case when \(\epsilon \ne 1\), one has
\[ (1-\epsilon) V_{T-1}(w_{T-1}) = \delta_{T-1} (\gamma_{T-1} w_{T-1})^{1-\epsilon} + \delta_T \big[ (1-\gamma_{T-1}) w_{T-1} \big]^{1-\epsilon}\, \mathbb{E}_{T-1}\big[ (\tilde r_{T-1})^{1-\epsilon} \big] \]
so \(V_{T-1}(w_{T-1}) = v_{T-1}\, u(w_{T-1}; \epsilon)\), where
\[ v_{T-1} := \delta_{T-1} (\gamma_{T-1})^{1-\epsilon} + \delta_T (1-\gamma_{T-1})^{1-\epsilon}\, \mathbb{E}_{T-1}\big[ (\tilde r_{T-1})^{1-\epsilon} \big] \]
In both cases, one can write \(V_{T-1}(w_{T-1}) = \alpha_{T-1} + v_{T-1}\, u(w_{T-1}; \epsilon)\) for a suitable additive constant \(\alpha_{T-1}\) (which is 0 in the CES case) and a suitable multiplicative constant \(v_{T-1}\).

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

The Time Line

In each period t, suppose:
1. the consumer starts with known wealth \(w_t\);
2. then the consumer chooses consumption \(c_t\), along with savings or residual wealth \(w_t - c_t\);
3. there is a cumulative distribution function \(F_t(r)\) on \(\mathbb{R}\) that determines the gross return \(\tilde r_t\) as a positive-valued random variable.

After these three steps have been completed, the problem starts again in period t + 1, with the consumer's wealth known to be \(w_{t+1} = r_t(w_t - c_t)\).
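A minimal simulation of this timing may help fix ideas. The fixed consumption ratio and the lognormal return distribution below are illustrative assumptions, not part of the notes.

```python
# Simulate the period-by-period timing w_t -> c_t -> r_t -> w_{t+1} (assumed policy and returns).
import numpy as np

rng = np.random.default_rng(1)
T, w0, gamma = 10, 100.0, 0.3                    # horizon, initial wealth, assumed consumption ratio

w = w0
for t in range(T):
    c = gamma * w                                # consumption chosen after observing w_t
    r = rng.lognormal(mean=0.04, sigma=0.15)     # gross return realized only after the choice
    w_next = r * (w - c)                         # wealth carried into period t + 1
    print(f"t={t}: w={w:8.2f}  c={c:7.2f}  r={r:5.3f}  w'={w_next:8.2f}")
    w = w_next
```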

Expected Conditionally Expected Utility

Starting at any t, suppose the consumer's choices, together with the random returns, jointly determine a cdf \(F_t^T\) over the space of intertemporal consumption streams \(c_t^T\). The associated expected utility is \(\mathbb{E}_t[U_t^T(c_t^T)]\), using the shorthand \(\mathbb{E}_t\) to denote integration w.r.t. the cdf \(F_t^T\).

Then, given that the consumer has chosen \(c_t\) at time t, let \(\mathbb{E}_{t+1}[\,\cdot \mid c_t]\) denote the conditional expected utility. This is found by integrating w.r.t. the conditional cdf \(F_{t+1}^T(c_{t+1}^T \mid c_t)\).

The law of iterated expectations allows us to write the unconditional expectation \(\mathbb{E}_t[U_t^T(c_t^T)]\) as the expectation \(\mathbb{E}_t\big[\mathbb{E}_{t+1}[U_t^T(c_t^T) \mid c_t]\big]\) of the conditional expectation.

The Expectation of Additively Separable Utility

Our hypothesis is that the intertemporal von Neumann-Morgenstern utility function takes the additively separable form
\[ U_t^T(c_t^T) = \sum_{\tau=t}^{T} u_\tau(c_\tau) \]
The conditional expectation given \(c_t\) must then be
\[ \mathbb{E}_{t+1}\big[ U_t^T(c_t^T) \mid c_t \big] = u_t(c_t) + \mathbb{E}_{t+1}\left[ \sum_{\tau=t+1}^{T} u_\tau(c_\tau) \,\Big|\, c_t \right] \]
whose expectation is
\[ \mathbb{E}_t\left[ \sum_{\tau=t}^{T} u_\tau(c_\tau) \right] = u_t(c_t) + \mathbb{E}_t\left[ \mathbb{E}_{t+1}\left[ \sum_{\tau=t+1}^{T} u_\tau(c_\tau) \,\Big|\, c_t \right] \right] \]

The Continuation Value

Let \(V_{t+1}(w_{t+1})\) be the state valuation function expressing the maximum of the continuation value
\[ \mathbb{E}_{t+1}\big[ U_{t+1}^T(\tilde c_{t+1}^T) \big] = \mathbb{E}_{t+1}\left[ \sum_{\tau=t+1}^{T} u_\tau(\tilde c_\tau) \right] \]
as a function of the wealth level or state \(w_{t+1} = r_t(w_t - c_t)\). Assume this maximum value is achieved by following an optimal policy from period t + 1 on. Then total expected utility at time t reduces to
\[ \mathbb{E}_t\big[ U_t^T(c_t^T) \big] = u_t(c_t) + \mathbb{E}_t\left[ \mathbb{E}_{t+1}\left[ \sum_{\tau=t+1}^{T} u_\tau(\tilde c_\tau) \,\Big|\, c_t \right] \right] = u_t(c_t) + \mathbb{E}_t[V_{t+1}(\tilde w_{t+1})] = u_t(c_t) + \mathbb{E}_t\big[ V_{t+1}\big(\tilde r_t(w_t - c_t)\big) \big] \]

The Principle of Optimality

Maximizing \(\mathbb{E}_s[U_s^T(c_s^T)]\) w.r.t. \(c_s\), taking as fixed the optimal consumption plans \(c_t^*(w_t)\) at times t = s+1, ..., T, therefore requires choosing \(c_s\) to maximize
\[ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \]
Let \(c_s^*(w_s)\) denote a solution to this maximization problem. Then the value of an optimal plan \((c_t^*(w_t))_{t=s}^T\) that starts with wealth \(w_s\) at time s is
\[ V_s(w_s) := u_s(c_s^*(w_s)) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s^*(w_s))\big) \big] \]
Together, these two properties can be expressed as
\[ V_s(w_s) = \max_{0 \le c_s \le w_s} \big\{ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \big\}, \qquad c_s^*(w_s) \in \arg\max_{0 \le c_s \le w_s} \big\{ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \big\} \]
which can be described as the principle of optimality.

An Induction Hypothesis

Consider once again the case when \(u_t(c) \equiv \delta_t u(c; \epsilon)\) for the CES (or logarithmic) utility function that satisfies \(u'(c; \epsilon) \equiv c^{-\epsilon}\) and, specifically,
\[ u(c; \epsilon) = \begin{cases} c^{1-\epsilon}/(1-\epsilon) & \text{if } \epsilon \ne 1; \\ \ln c & \text{if } \epsilon = 1. \end{cases} \]
Inspired by the solution we have already found for the final period T and penultimate period T - 1, we adopt the induction hypothesis that there are constants \(\alpha_t, \gamma_t, v_t\) (t = T, T-1, ..., s+1, s) for which
\[ c_t^*(w_t) = \gamma_t w_t \quad \text{and} \quad V_t(w_t) = \alpha_t + v_t\, u(w_t; \epsilon) \]
In particular, the consumption ratio \(\gamma_t\) and savings ratio \(1 - \gamma_t\) are both independent of the wealth level \(w_t\).

Applying Backward Induction

Under the induction hypotheses that \(c_t^*(w_t) = \gamma_t w_t\) and \(V_t(w_t) = \alpha_t + v_t\, u(w_t; \epsilon)\), the maximand \(u_s(c_s) + \mathbb{E}_s[V_{s+1}(\tilde r_s(w_s - c_s))]\) takes the form
\[ \delta_s u(c_s; \epsilon) + \mathbb{E}_s\big[ \alpha_{s+1} + v_{s+1}\, u\big(\tilde r_s(w_s - c_s); \epsilon\big) \big] \]
The first-order condition for this to be maximized w.r.t. \(c_s\) is
\[ 0 = \delta_s u'(c_s; \epsilon) - v_{s+1}\, \mathbb{E}_s\big[ \tilde r_s\, u'\big(\tilde r_s(w_s - c_s); \epsilon\big) \big] \]
or, equivalently,
\[ \delta_s (c_s)^{-\epsilon} = v_{s+1}\, \mathbb{E}_s\big[ \tilde r_s\, \big(\tilde r_s(w_s - c_s)\big)^{-\epsilon} \big] = v_{s+1} (w_s - c_s)^{-\epsilon}\, \mathbb{E}_s\big[ (\tilde r_s)^{1-\epsilon} \big] \]

Solving the Logarithmic Case

When \(\epsilon = 1\) and so \(u(c; \epsilon) = \ln c\), the first-order condition reduces to \(\delta_s (c_s)^{-1} = v_{s+1}(w_s - c_s)^{-1}\). Its solution is indeed \(c_s = \gamma_s w_s\), where \(\delta_s(\gamma_s)^{-1} = v_{s+1}(1-\gamma_s)^{-1}\), implying that \(\gamma_s = \delta_s/(\delta_s + v_{s+1})\).

The state valuation function then becomes
\[ \begin{aligned} V_s(w_s) &= \delta_s u(\gamma_s w_s; \epsilon) + \alpha_{s+1} + v_{s+1}\, \mathbb{E}_s\big[ u\big(\tilde r_s(1-\gamma_s)w_s; \epsilon\big) \big] \\ &= \delta_s \ln(\gamma_s w_s) + \alpha_{s+1} + v_{s+1}\, \mathbb{E}_s\big[ \ln\big(\tilde r_s(1-\gamma_s)w_s\big) \big] \\ &= \delta_s \ln(\gamma_s w_s) + \alpha_{s+1} + v_{s+1}\big\{ \ln\big[(1-\gamma_s)w_s\big] + \ln R_s \big\} \end{aligned} \]
where we define the geometric mean certainty equivalent return \(R_s\) so that \(\ln R_s := \mathbb{E}_s[\ln \tilde r_s]\).

The State Valuation Function

The formula
\[ V_s(w_s) = \delta_s \ln(\gamma_s w_s) + \alpha_{s+1} + v_{s+1}\big\{ \ln\big[(1-\gamma_s)w_s\big] + \ln R_s \big\} \]
reduces to the desired form \(V_s(w_s) = \alpha_s + v_s \ln w_s\) provided we take \(v_s := \delta_s + v_{s+1}\), which implies that \(\gamma_s = \delta_s/v_s\), and also
\[ \begin{aligned} \alpha_s &:= \delta_s \ln \gamma_s + \alpha_{s+1} + v_{s+1}\{ \ln(1-\gamma_s) + \ln R_s \} \\ &= \delta_s \ln(\delta_s/v_s) + \alpha_{s+1} + v_{s+1}\{ \ln(v_{s+1}/v_s) + \ln R_s \} \\ &= \delta_s \ln \delta_s + \alpha_{s+1} - v_s \ln v_s + v_{s+1}\{ \ln v_{s+1} + \ln R_s \} \end{aligned} \]
This confirms the induction hypothesis for the logarithmic case. The relevant constants \(v_s\) are found by summing backwards, starting with \(v_T = \delta_T\), implying that \(v_s = \sum_{\tau=s}^{T} \delta_\tau\).

The Stationary Logarithmic Case

In the stationary logarithmic case:
1. the felicity function in each period t is \(\beta^t \ln c_t\), so the one-period discount factor is the constant \(\beta\);
2. the certainty equivalent return \(R_t\) is also a constant R.

Then \(v_s = \sum_{\tau=s}^{T} \delta_\tau = \sum_{\tau=s}^{T} \beta^\tau = (\beta^s - \beta^{T+1})/(1-\beta)\), implying that \(\gamma_s = \beta^s/v_s = \beta^s(1-\beta)/(\beta^s - \beta^{T+1})\).

It follows that
\[ c_s^* = \gamma_s w_s = \frac{(1-\beta)\, w_s}{1 - \beta^{T-s+1}} = \frac{(1-\beta)\, w_s}{1 - \beta^{H+1}} \]
when there are H := T - s periods left before the horizon T.

As \(H \to \infty\), this solution converges to \(c_s^* = (1-\beta)w_s\), so the savings ratio equals the constant discount factor \(\beta\). Remarkably, this is also independent of the gross return to saving.
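The closed form \(c_s^* = (1-\beta)w_s/(1-\beta^{H+1})\) can be checked against the backward recursion \(v_s = \delta_s + v_{s+1}\) with \(\delta_t = \beta^t\). The short sketch below does so for an assumed \(\beta\) and horizon; the particular values are illustrative only.

```python
beta, T = 0.95, 40                     # assumed discount factor and horizon

# Backward recursion from the notes: v_T = beta^T, v_s = beta^s + v_{s+1}, gamma_s = beta^s / v_s.
v = beta ** T
gammas = {T: 1.0}                      # gamma_T = 1: consume all remaining wealth at T
for s in range(T - 1, -1, -1):
    v = beta ** s + v
    gammas[s] = beta ** s / v

# Closed form: gamma_s = (1 - beta) / (1 - beta^(H + 1)) with H = T - s periods to go.
for s in (0, 20, 39, 40):
    closed = (1.0 - beta) / (1.0 - beta ** (T - s + 1))
    print(f"s = {s:2d}:  recursion {gammas[s]:.6f}   closed form {closed:.6f}")
```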

First-Order Condition in the CES Case

Recall that the first-order condition in the CES case is
\[ \delta_s (c_s)^{-\epsilon} = v_{s+1}(w_s - c_s)^{-\epsilon}\, \mathbb{E}_s\big[ (\tilde r_s)^{1-\epsilon} \big] = v_{s+1}(w_s - c_s)^{-\epsilon} R_s^{1-\epsilon} \]
where we have defined the certainty equivalent return \(R_s\) as the solution to \(R_s^{1-\epsilon} := \mathbb{E}_s[(\tilde r_s)^{1-\epsilon}]\).

The first-order condition indeed implies that \(c_s^*(w_s) = \gamma_s w_s\), where \(\delta_s(\gamma_s)^{-\epsilon} = v_{s+1}(1-\gamma_s)^{-\epsilon} R_s^{1-\epsilon}\). This implies that
\[ \frac{1-\gamma_s}{\gamma_s} = \left( \frac{v_{s+1} R_s^{1-\epsilon}}{\delta_s} \right)^{1/\epsilon} \]
or
\[ 1 - \gamma_s = \frac{\big(v_{s+1} R_s^{1-\epsilon}/\delta_s\big)^{1/\epsilon}}{1 + \big(v_{s+1} R_s^{1-\epsilon}/\delta_s\big)^{1/\epsilon}} = \frac{\big(v_{s+1} R_s^{1-\epsilon}\big)^{1/\epsilon}}{(\delta_s)^{1/\epsilon} + \big(v_{s+1} R_s^{1-\epsilon}\big)^{1/\epsilon}} \quad\text{and hence}\quad \gamma_s = \frac{(\delta_s)^{1/\epsilon}}{(\delta_s)^{1/\epsilon} + \big(v_{s+1} R_s^{1-\epsilon}\big)^{1/\epsilon}} \]

Completing the Solution in the CES Case

Under the induction hypothesis that \(V_{s+1}(w) = v_{s+1} w^{1-\epsilon}/(1-\epsilon)\), one also has
\[ (1-\epsilon)V_s(w_s) = \delta_s(\gamma_s w_s)^{1-\epsilon} + v_{s+1}\, \mathbb{E}_s\big[ \big(\tilde r_s(1-\gamma_s)w_s\big)^{1-\epsilon} \big] \]
This reduces to the desired form \((1-\epsilon)V_s(w_s) = v_s (w_s)^{1-\epsilon}\), where
\[ v_s := \delta_s(\gamma_s)^{1-\epsilon} + v_{s+1}\, \mathbb{E}_s\big[ (\tilde r_s)^{1-\epsilon} \big](1-\gamma_s)^{1-\epsilon} = \delta_s(\gamma_s)^{1-\epsilon} + v_{s+1} R_s^{1-\epsilon}(1-\gamma_s)^{1-\epsilon} \]
Substituting the expressions for \(\gamma_s\) and \(1-\gamma_s\) from the previous slide and simplifying gives
\[ v_s = \Big[ (\delta_s)^{1/\epsilon} + \big(v_{s+1} R_s^{1-\epsilon}\big)^{1/\epsilon} \Big]^{\epsilon} \]
which agrees with \(v_s = \delta_s + v_{s+1}\) in the logarithmic case \(\epsilon = 1\).

This confirms the induction hypothesis for the CES case. Again, the relevant constants are found by working backwards.
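Purely as a check, the recursion for \(v_s\) and \(\gamma_s\) can be run backwards in a few lines of Python. The parameters below (\(\delta_t = \beta^t\), \(\epsilon\), a constant certainty equivalent return R, and the horizon) are illustrative assumptions; the script also prints \(\gamma_0\) next to its infinite-horizon limit \(1 - (\beta R^{1-\epsilon})^{1/\epsilon}\) discussed later in these notes.

```python
# Backward recursion for the CES case, checking the closed form for v_s (assumed parameters).
beta, eps, R, T = 0.95, 2.0, 1.03, 30

delta = [beta ** t for t in range(T + 1)]       # delta_t = beta^t
v = delta[T]                                    # v_T = delta_T: consume everything at T
for s in range(T - 1, -1, -1):
    a = delta[s] ** (1 / eps)
    b = (v * R ** (1 - eps)) ** (1 / eps)
    gamma = a / (a + b)                         # consumption ratio gamma_s
    v_direct = delta[s] * gamma ** (1 - eps) + v * R ** (1 - eps) * (1 - gamma) ** (1 - eps)
    v = (a + b) ** eps                          # closed form for v_s from the notes
    assert abs(v - v_direct) < 1e-9 * abs(v)    # both expressions for v_s agree
zeta = (beta * R ** (1 - eps)) ** (1 / eps)
print(f"gamma_0 = {gamma:.6f};  infinite-horizon limit 1 - zeta = {1 - zeta:.6f}")
```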

Histories and Strategies

For each time t = s, s+1, ..., T between the start s and the horizon T, let \(h^t\) denote a known history \((w_\tau, c_\tau, r_\tau)_{\tau=s}^{t}\) of the triples \((w_\tau, c_\tau, r_\tau)\) at successive times \(\tau = s, s+1, \ldots, t\) up to time t.

A general policy the consumer can choose involves a measurable function \(h^t \mapsto \psi_t(h^t)\) mapping each known history up to time t, which determines the consumer's information set, into a consumption level at that time. The collection of successive functions \(\psi_s^T = (\psi_t)_{t=s}^{T}\) is what a game theorist would call the consumer's strategy in the extensive form game against nature.

Markov Strategies

We found an optimal solution for the two-period problem when t = T - 1. It took the form of a Markov strategy \(\psi_t(h^t) := c_t^*(w_t)\), which depends only on \(w_t\) as the particular state variable.

The following analysis will demonstrate in particular that at each time t = s, s+1, ..., T, under the induction hypothesis that the consumer will follow a Markov strategy in periods \(\tau = t+1, t+2, \ldots, T\), there exists a Markov strategy that is optimal in period t.

It will follow by backward induction that there exists an optimal strategy \(h^t \mapsto \psi_t(h^t)\) for every period t = s, s+1, ..., T that takes the Markov form \(h^t \mapsto w_t \mapsto c_t^*(w_t)\). This treats history as irrelevant, except insofar as it determines current wealth \(w_t\) at the time when \(c_t\) has to be chosen.

A Stochastic Difference Equation

Accordingly, suppose that the consumer pursues a Markov strategy taking the form \(w_t \mapsto c_t(w_t)\). Then the Markov state variable \(w_t\) will evolve over time according to the stochastic difference equation
\[ \tilde w_{t+1} = \phi_t(w_t, \tilde r_t) := \tilde r_t\big(w_t - c_t(w_t)\big) \]
Starting at any time t, conditional on initial wealth \(w_t\), this equation will have a random solution \(\tilde w_{t+1}^T = (\tilde w_\tau)_{\tau=t+1}^{T}\) described by a unique joint conditional cdf \(F_{t+1}^T(w_{t+1}^T \mid w_t)\) on \(\mathbb{R}^{T-t}\). Combined with the Markov strategy \(w_t \mapsto c_t(w_t)\), this generates a random consumption stream \(\tilde c_{t+1}^T = (\tilde c_\tau)_{\tau=t+1}^{T}\) described by a unique joint conditional cdf \(G_{t+1}^T(c_{t+1}^T \mid w_t)\) on \(\mathbb{R}^{T-t}\).

General Finite Horizon Problem

Consider the objective of choosing \(y_s\) in order to maximize
\[ \mathbb{E}_s\left[ \sum_{t=s}^{T-1} u_t(x_t, y_t) + \phi_T(x_T) \right] \]
subject to the law of motion \(x_{t+1} = \xi_t(x_t, y_t, \epsilon_t)\), where the random shocks \(\epsilon_t\) at different times t = s, s+1, s+2, ..., T-1 are conditionally independent given \(x_t, y_t\). Here \(x_T \mapsto \phi_T(x_T)\) is the terminal state valuation function.

The stochastic law of motion can also be expressed through successive conditional probabilities \(P_{t+1}(x_{t+1} \mid x_t, y_t)\). The choices of \(y_t\) at successive times determine a controlled Markov process governing the stochastic transition from each state \(x_t\) to its immediate successor \(x_{t+1}\).

Backward Recurrence Relation

The optimal solution can be derived by solving the backward recurrence relation
\[ V_s(x_s) = \max_{y_s \in F_s(x_s)} \big\{ u_s(x_s, y_s) + \mathbb{E}_s[ V_{s+1}(\tilde x_{s+1}) \mid x_s, y_s ] \big\}, \qquad y_s^*(x_s) \in \arg\max_{y_s \in F_s(x_s)} \big\{ u_s(x_s, y_s) + \mathbb{E}_s[ V_{s+1}(\tilde x_{s+1}) \mid x_s, y_s ] \big\} \]
where:
1. \(x_s\) denotes the inherited state at time s;
2. \(V_s(x_s)\) is the current value in state \(x_s\) of the state valuation function \(X \ni x \mapsto V_s(x) \in \mathbb{R}\);
3. \(X \ni x \mapsto F_s(x) \subseteq Y\) is the feasible set correspondence;
4. \((x, y) \mapsto u_s(x, y)\) denotes the immediate return function in period s;
5. \(X \ni x \mapsto y_s^*(x) \in F_s(x)\) is the optimal strategy or policy function;
6. the relevant terminal condition is that \(V_T(x_T)\) is given by the exogenously specified function \(\phi_T(x_T)\).
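A generic grid-based sketch of this backward recurrence is given below. It is not from the notes: it assumes a finite, increasing state grid, a finite choice grid, and user-supplied return, transition, terminal-value and feasibility functions, with the expectation approximated by averaging over a supplied sample of shocks.

```python
# Generic finite-horizon backward induction on grids (illustrative sketch, assumed interfaces).
import numpy as np

def backward_induction(x_grid, y_grid, u, xi, shocks, phi_T, T, feasible):
    """Solve V_s(x) = max_y { u(s, x, y) + E[ V_{s+1}(xi(x, y, shock)) ] } for s = T-1, ..., 0.

    u(s, x, y): immediate return; xi(x, y, e): law of motion (vectorized over e);
    phi_T(x_grid): terminal values on the grid; feasible(x, y): True if y is in F(x);
    shocks: sample used for the expectation. x_grid must be sorted increasingly,
    since continuation values are evaluated by linear interpolation on it.
    """
    V = phi_T(x_grid)                               # V_T on the grid
    policy = np.zeros((T, x_grid.size))
    for s in range(T - 1, -1, -1):
        V_new = np.empty_like(V)
        for i, x in enumerate(x_grid):
            best_val, best_y = -np.inf, y_grid[0]
            for y in y_grid:
                if not feasible(x, y):
                    continue
                x_next = xi(x, y, shocks)           # vector of successor states
                cont = np.interp(x_next, x_grid, V).mean()
                val = u(s, x, y) + cont
                if val > best_val:
                    best_val, best_y = val, y
            V_new[i], policy[s, i] = best_val, best_y
        V = V_new
    return V, policy
```

For the optimal saving problem, for instance, one could take x = wealth, y = consumption, \(u(s, x, y) = \beta^s u(y; \epsilon)\), \(\xi(x, y, e) = e(x - y)\), \(\phi_T(x) = \beta^T u(x; \epsilon)\), feasibility \(0 \le y \le x\), and a simulated sample of gross returns as the shocks.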

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

An Infinite Horizon Savings Problem

Game theorists speak of the one-shot deviation principle. This states that if any deviation from a particular policy or strategy improves a player's payoff, then there exists a one-shot deviation that improves the payoff.

We consider the infinite horizon extension of the consumption/investment problem already considered. This takes the form of choosing a consumption policy \(c_t(w_t)\) in order to maximize the discounted sum of total utility, given by
\[ \sum_{t=s}^{\infty} \beta^{t-s} u(c_t) \]
subject to the accumulation equation \(\tilde w_{t+1} = \tilde r_t(w_t - c_t)\), where the initial wealth \(w_s\) is treated as given.

Some Assumptions

The parameter \(\beta \in (0, 1)\) is the constant discount factor. Note that the utility function \(\mathbb{R} \ni c \mapsto u(c)\) is independent of t; its first two derivatives are assumed to satisfy the inequalities \(u'(c) > 0\) and \(u''(c) < 0\) for all \(c \in \mathbb{R}_+\).

The investment returns \(\tilde r_t\) in successive periods are assumed to be i.i.d. random variables. It is assumed that \(w_t\) in each period t is known at time t, but not before.

Terminal Constraint

There has to be an additional constraint that imposes a lower bound on wealth at some time t. Otherwise there would be no optimal policy: the consumer can always gain by increasing debt (negative wealth), no matter how large existing debt may be.

In the finite horizon case, there was a constraint \(w_{T+1} \ge 0\) on terminal wealth. But here T is effectively infinite. One might try an alternative like
\[ \liminf_{t \to \infty} \beta^t w_t \ge 0 \]
But this places no limit on wealth at any finite time. We use the alternative constraint requiring that \(w_t \ge 0\) for all time.

The Stationary Problem

Our modified problem can be written in the following form that is independent of s:
\[ \max_{c_0, c_1, \ldots, c_t, \ldots} \sum_{t=0}^{\infty} \beta^t u(c_t) \]
subject to the constraints \(c_t \le w_t\) and \(\tilde w_{t+1} = \tilde r_t(w_t - c_t)\) for all t = 0, 1, 2, ..., with \(w_0 = w\), where w is given.

Because the starting time s is irrelevant, this is a stationary problem. Define the state valuation function \(w \mapsto V(w)\) as the maximum value of the objective, as a function of initial wealth w.

Bellman's Equation

For the finite horizon problem, the principle of optimality was
\[ V_s(w_s) = \max_{0 \le c_s \le w_s} \big\{ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \big\}, \qquad c_s^*(w_s) \in \arg\max_{0 \le c_s \le w_s} \big\{ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \big\} \]
For the stationary infinite horizon problem, however, the starting time s is irrelevant. So the principle of optimality can be expressed as
\[ V(w) = \max_{0 \le c \le w} \big\{ u(c) + \beta\, \mathbb{E}\big[ V\big(\tilde r(w - c)\big) \big] \big\}, \qquad c^*(w) \in \arg\max_{0 \le c \le w} \big\{ u(c) + \beta\, \mathbb{E}\big[ V\big(\tilde r(w - c)\big) \big] \big\} \]
The state valuation function \(w \mapsto V(w)\) appears on both the left- and right-hand sides of this equation. Solving it therefore involves finding a fixed point, or function, in an appropriate function space.

Isoelastic Case

We consider yet again the isoelastic case with a CES (or logarithmic) utility function that satisfies \(u'(c; \epsilon) \equiv c^{-\epsilon}\) and, specifically,
\[ u(c; \epsilon) = \begin{cases} c^{1-\epsilon}/(1-\epsilon) & \text{if } \epsilon \ne 1; \\ \ln c & \text{if } \epsilon = 1. \end{cases} \]
Recall the corresponding finite horizon case, where we found that the solution to the corresponding equations
\[ V_s(w_s) = \max_{0 \le c_s \le w_s} \big\{ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \big\}, \qquad c_s^*(w_s) \in \arg\max_{0 \le c_s \le w_s} \big\{ u_s(c_s) + \mathbb{E}_s\big[ V_{s+1}\big(\tilde r_s(w_s - c_s)\big) \big] \big\} \]
takes the form \(V_s(w) = \alpha_s + v_s\, u(w; \epsilon)\) for suitable real constants \(\alpha_s\) and \(v_s > 0\), where \(\alpha_s = 0\) if \(\epsilon \ne 1\).

First-Order Condition

Accordingly, we look for a solution to the stationary problem
\[ V(w) = \max_{0 \le c \le w} \big\{ u(c; \epsilon) + \beta\, \mathbb{E}\big[ V\big(\tilde r(w - c)\big) \big] \big\}, \qquad c^*(w) \in \arg\max_{0 \le c \le w} \big\{ u(c; \epsilon) + \beta\, \mathbb{E}\big[ V\big(\tilde r(w - c)\big) \big] \big\} \]
taking the isoelastic form \(V(w) = \alpha + v\, u(w; \epsilon)\) for suitable real constants \(\alpha\) and \(v > 0\), where \(\alpha = 0\) if \(\epsilon \ne 1\).

The first-order condition for solving this concave maximization problem is
\[ c^{-\epsilon} = \beta v\, \mathbb{E}\big[ \tilde r\big(\tilde r(w - c)\big)^{-\epsilon} \big] = v\, \zeta^{\epsilon}(w - c)^{-\epsilon} \]
where \(\zeta^{\epsilon} := \beta R^{1-\epsilon}\), with R as the certainty equivalent return defined by \(R^{1-\epsilon} := \mathbb{E}[\tilde r^{\,1-\epsilon}]\). Note that the coefficient v of the continuation value appears in this condition. Hence \(c = \gamma w\), where \(\gamma^{-\epsilon} = v\, \zeta^{\epsilon}(1-\gamma)^{-\epsilon}\), implying that \(\gamma = 1/(1 + v^{1/\epsilon}\zeta)\); the constant v is determined below.

Solution in the Logarithmic Case

When \(\epsilon = 1\) and so \(u(c; \epsilon) = \ln c\), one has
\[ \begin{aligned} V(w) &= u(\gamma w; \epsilon) + \beta\big\{ \alpha + v\, \mathbb{E}\big[ u\big(\tilde r(1-\gamma)w; \epsilon\big) \big] \big\} \\ &= \ln(\gamma w) + \beta\big\{ \alpha + v\, \mathbb{E}\big[ \ln\big(\tilde r(1-\gamma)w\big) \big] \big\} \\ &= \ln \gamma + (1 + \beta v) \ln w + \beta\big\{ \alpha + v \ln(1-\gamma) + v\, \mathbb{E}[\ln \tilde r] \big\} \end{aligned} \]
This is consistent with \(V(w) = \alpha + v \ln w\) in case:
1. \(v = 1 + \beta v\), implying that \(v = (1-\beta)^{-1}\);
2. and also \(\alpha = \ln \gamma + \beta\{ \alpha + v \ln(1-\gamma) + v\, \mathbb{E}[\ln \tilde r] \}\), which implies that
\[ \alpha = (1-\beta)^{-1}\Big[ \ln \gamma + \beta(1-\beta)^{-1}\big\{ \ln(1-\gamma) + \mathbb{E}[\ln \tilde r] \big\} \Big] \]
Given \(v = (1-\beta)^{-1}\) and \(\zeta = \beta\), the first-order condition gives \(\gamma = 1/(1 + v\zeta) = 1 - \beta\), matching the \(H \to \infty\) limit of the finite horizon solution. This confirms the solution for the logarithmic case.

Solution in the CES Case

When \(\epsilon \ne 1\) and so \(u(c; \epsilon) = c^{1-\epsilon}/(1-\epsilon)\), the equation
\[ V(w) = u(\gamma w; \epsilon) + \beta v\, \mathbb{E}\big[ u\big(\tilde r(1-\gamma)w; \epsilon\big) \big] \]
implies that
\[ (1-\epsilon)V(w) = (\gamma w)^{1-\epsilon} + \beta v\, \mathbb{E}\big[ \big(\tilde r(1-\gamma)w\big)^{1-\epsilon} \big] = v\, w^{1-\epsilon} \]
where \(v = \gamma^{1-\epsilon} + \beta v (1-\gamma)^{1-\epsilon} R^{1-\epsilon}\), and so
\[ v = \frac{\gamma^{1-\epsilon}}{1 - \beta(1-\gamma)^{1-\epsilon} R^{1-\epsilon}} = \frac{\gamma^{1-\epsilon}}{1 - (1-\gamma)^{1-\epsilon} \zeta^{\epsilon}} \]
But optimality requires \(\gamma = 1/(1 + v^{1/\epsilon}\zeta)\). Solving these two equations together (assuming \(\zeta < 1\), so that expected discounted utility is finite) yields finally
\[ \gamma = 1 - \zeta \quad\text{and}\quad v = (1-\zeta)^{-\epsilon} = \gamma^{-\epsilon} \]
This confirms the solution for the CES case.
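To see these formulas in action, the sketch below (with assumed \(\beta\), \(\epsilon\) and a lognormal return distribution) computes \(\zeta\), \(\gamma = 1 - \zeta\) and \(v = (1-\zeta)^{-\epsilon}\), then confirms by grid search that \(c = \gamma w\) maximizes \(u(c; \epsilon) + \beta\,\mathbb{E}[V(\tilde r(w - c))]\) when \(V(w) = v\, u(w; \epsilon)\). All numerical values are illustrative assumptions.

```python
# Check the stationary isoelastic solution gamma = 1 - zeta, v = (1 - zeta)^(-eps)
# against a brute-force maximization of the Bellman right-hand side (assumed parameters).
import numpy as np

rng = np.random.default_rng(2)
beta, eps, w = 0.95, 2.0, 10.0
r = rng.lognormal(mean=0.03, sigma=0.2, size=10_000)      # assumed i.i.d. gross returns

def u(c, eps):
    return np.log(c) if eps == 1.0 else c ** (1.0 - eps) / (1.0 - eps)

R = np.mean(r ** (1.0 - eps)) ** (1.0 / (1.0 - eps))      # certainty equivalent return
zeta = (beta * R ** (1.0 - eps)) ** (1.0 / eps)
gamma, v = 1.0 - zeta, (1.0 - zeta) ** (-eps)

# Right-hand side of the Bellman equation with V(w) = v * u(w; eps), over a grid of c.
c_grid = np.linspace(1e-3, w - 1e-3, 1000)
rhs = u(c_grid, eps) + beta * v * np.mean(u(np.outer(r, w - c_grid), eps), axis=0)
c_best = c_grid[np.argmax(rhs)]

print(f"closed-form consumption ratio : {gamma:.3f}")
print(f"grid-search consumption ratio : {c_best / w:.3f}")
print(f"V(w) = v*u(w) = {v * u(w, eps):.3f}   max of RHS = {rhs.max():.3f}")
```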

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

Bounded Returns

Suppose that the stochastic transition from each state x to the immediately succeeding state \(\tilde x\) is specified by a conditional probability measure \(B \mapsto P(\tilde x \in B \mid x, u)\) on a \(\sigma\)-algebra of the state space.

Consider the stationary problem of choosing a policy \(x \mapsto u^*(x)\) in order to maximize the infinite discounted sum of utility
\[ \mathbb{E} \sum_{t=1}^{\infty} \beta^{t-1} f(x_t, u_t) \]
where \(0 < \beta < 1\).

The return function \((x, u) \mapsto f(x, u) \in \mathbb{R}\) is uniformly bounded provided there exist a uniform lower bound \(\underline{M}\) and a uniform upper bound \(\bar M\) such that \(\underline{M} \le f(x, u) \le \bar M\) for all (x, u).

Existence and Uniqueness Theorem

Consider the Bellman equation system
\[ V(x) = \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u] \big\}, \qquad u^*(x) \in \arg\max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u] \big\} \]
Under the assumption of uniformly bounded returns:
1. there is a unique state valuation function \(x \mapsto V(x)\) that satisfies this equation system;
2. any associated policy solution \(x \mapsto u^*(x)\) determines an optimal policy that is stationary, i.e., independent of time.

The Function Space

The boundedness assumption \(\underline{M} \le f(x, u) \le \bar M\) for all (x, u) ensures that, because \(0 < \beta < 1\) and so \(\sum_{t=1}^{\infty} \beta^{t-1} = \frac{1}{1-\beta}\), the infinite discounted sum of utility
\[ \tilde W := \mathbb{E} \sum_{t=1}^{\infty} \beta^{t-1} f(x_t, u_t) \]
satisfies \((1-\beta)\tilde W \in [\underline{M}, \bar M]\).

This makes it natural to consider the linear space \(\mathcal{V}\) of all bounded functions \(X \ni x \mapsto V(x) \in \mathbb{R}\), equipped with its sup norm defined by \(\|V\| := \sup_{x \in X} |V(x)|\). We will pay special attention to the subset
\[ \mathcal{V}_M := \{ V \in \mathcal{V} \mid \forall x \in X : (1-\beta)V(x) \in [\underline{M}, \bar M] \} \]
of state valuation functions with values V(x) lying within the range of the possible values of \(\tilde W\).

Two Mappings

Given any measurable policy function \(X \ni x \mapsto u(x)\), denoted by u, define the mapping \(T_u : \mathcal{V}_M \to \mathcal{V}\) by
\[ [T_u V](x) := f(x, u(x)) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u(x)] \]
When the state is x, this gives the value \([T_u V](x)\) of choosing the policy u(x) for one period, and then experiencing a future discounted return \(V(\tilde x)\) after reaching each possible subsequent state \(\tilde x \in X\).

Define also the mapping \(T : \mathcal{V}_M \to \mathcal{V}\) by
\[ [T V](x) := \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u] \big\} \]
These definitions allow the Bellman equation system to be rewritten as
\[ V(x) = [T V](x), \qquad u^*(x) \in \arg\max_{u \in F(x)} [T_u V](x) \]
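For a finite state and action space, the two operators can be written down directly. The sketch below is an assumption-laden illustration, not from the notes: actions are indexed by a (in place of the u of the text), immediate returns are stored in a hypothetical array f[x, a], and transition probabilities in a hypothetical array P[x, a, y].

```python
# The operators T_u and T for a finite-state, finite-action problem (illustrative sketch).
#   f[x, a]    = immediate return f(x, a)          (assumed data layout)
#   P[x, a, y] = probability of moving from x to y when a is chosen
import numpy as np

def T_policy(V, policy, f, P, beta):
    """[T_u V](x) = f(x, u(x)) + beta * E[V(x') | x, u(x)] for a fixed policy array u."""
    idx = np.arange(V.size)
    return f[idx, policy] + beta * (P[idx, policy, :] @ V)

def T_optimal(V, f, P, beta):
    """[T V](x) = max_a { f(x, a) + beta * E[V(x') | x, a] }, with a maximizing policy."""
    Q = f + beta * P @ V               # Q[x, a] = f(x, a) + beta * sum_y P[x, a, y] V(y)
    return Q.max(axis=1), Q.argmax(axis=1)
```

Restricting a to a feasible set F(x) can be handled, for instance, by setting the corresponding entries of f to -np.inf so that they never attain the maximum.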

Two Mappings of V_M into Itself

For all \(V \in \mathcal{V}_M\), policies u, and \(x \in X\), we have defined
\[ [T_u V](x) := f(x, u(x)) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u(x)] \quad\text{and}\quad [T V](x) := \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u] \big\} \]
Because of the boundedness condition \(\underline{M} \le f(x, u) \le \bar M\), together with the assumption that V belongs to the domain \(\mathcal{V}_M\), these definitions jointly imply that
\[ (1-\beta)[T_u V](x) \le (1-\beta)\bar M + \beta \bar M = \bar M \quad\text{and}\quad (1-\beta)[T_u V](x) \ge (1-\beta)\underline{M} + \beta \underline{M} = \underline{M} \]
Similarly, given any \(V \in \mathcal{V}_M\), one has \(\underline{M} \le (1-\beta)[T V](x) \le \bar M\) for all \(x \in X\). Therefore both \(V \mapsto T_u V\) and \(V \mapsto T V\) map \(\mathcal{V}_M\) into itself.

A First Contraction Mapping

The definition \([T_u V](x) := f(x, u(x)) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u(x)]\) implies that for any two functions \(V_1, V_2 \in \mathcal{V}_M\), one has
\[ [T_u V_1](x) - [T_u V_2](x) = \beta\, \mathbb{E}[V_1(\tilde x) - V_2(\tilde x) \mid x, u(x)] \]
The definition of the sup norm therefore implies that
\[ \begin{aligned} \|T_u V_1 - T_u V_2\| &= \sup_{x \in X} \big| [T_u V_1](x) - [T_u V_2](x) \big| = \sup_{x \in X} \big| \beta\, \mathbb{E}[V_1(\tilde x) - V_2(\tilde x) \mid x, u(x)] \big| \\ &\le \beta \sup_{x \in X} \mathbb{E}\big[ |V_1(\tilde x) - V_2(\tilde x)| \,\big|\, x, u(x) \big] \le \beta \sup_{\tilde x \in X} |V_1(\tilde x) - V_2(\tilde x)| = \beta\, \|V_1 - V_2\| \end{aligned} \]
Hence \(V \mapsto T_u V\) is a contraction mapping with factor \(\beta < 1\) that maps the normed linear space \(\mathcal{V}_M\) into itself.

Applying the Contraction Mapping Theorem, I

For each fixed policy u, the contraction mapping \(V \mapsto T_u V\) mapping the space \(\mathcal{V}_M\) into itself has a unique fixed point in the form of a function \(V^u \in \mathcal{V}_M\).

Furthermore, given any initial function \(V \in \mathcal{V}_M\), consider the infinite sequence of mappings \([T_u]^k V\) (k \(\in \mathbb{N}\)) that result from applying the operator \(T_u\) iteratively k times. The contraction mapping property of \(T_u\) implies that \(\|[T_u]^k V - V^u\| \to 0\) as \(k \to \infty\).

Characterizing the Fixed Point, I

Starting from \(V^0 = 0\) and given any initial state \(\bar x \in X\), note that
\[ [T_u]^k V^0(x) = [T_u]\big( [T_u]^{k-1} V^0 \big)(x) = f(x, u(x)) + \beta\, \mathbb{E}\big[ \big([T_u]^{k-1} V^0\big)(\tilde x) \,\big|\, x, u(x) \big] \]
It follows by induction on k that \([T_u]^k V^0(\bar x)\) equals the expected discounted total payoff
\[ \mathbb{E} \sum_{t=1}^{k} \beta^{t-1} f(x_t, u_t) \]
of starting from \(x_1 = \bar x\) and then following the policy \(x \mapsto u(x)\) for k subsequent periods.

Taking the limit as \(k \to \infty\), it follows that for any state \(\bar x \in X\), the value \(V^u(\bar x)\) of the fixed point in \(\mathcal{V}_M\) is the expected discounted total payoff \(\mathbb{E} \sum_{t=1}^{\infty} \beta^{t-1} f(x_t, u_t)\) of starting from \(x_1 = \bar x\) and then following the policy \(x \mapsto u(x)\) for ever thereafter.
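This characterization can be illustrated numerically: iterate \(T_u\) k times from \(V^0 = 0\) and compare with a Monte Carlo average of \(\sum_{t=1}^{k} \beta^{t-1} f(x_t, u(x_t))\) over simulated k-period paths. The tiny two-state, two-action example below is an assumption chosen purely for illustration.

```python
# Compare k iterations of T_u from V0 = 0 with simulated k-period discounted payoffs
# for an assumed 2-state, 2-action example (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
beta = 0.9
f = np.array([[1.0, 0.5],                        # f[x, a] (assumed returns)
              [0.0, 2.0]])
P = np.array([[[0.8, 0.2], [0.5, 0.5]],          # P[x, a, y] (assumed transitions)
              [[0.3, 0.7], [0.9, 0.1]]])
policy = np.array([0, 1])                        # fixed stationary policy u(x)

def T_policy(V):
    idx = np.arange(V.size)
    return f[idx, policy] + beta * (P[idx, policy, :] @ V)

k = 15
V = np.zeros(2)
for _ in range(k):
    V = T_policy(V)                              # [T_u]^k applied to V0 = 0

# Monte Carlo estimate of E[ sum_{t=1}^{k} beta^(t-1) f(x_t, u(x_t)) ] starting from x = 0.
n_paths, total = 10_000, 0.0
for _ in range(n_paths):
    x, payoff = 0, 0.0
    for t in range(k):
        a = policy[x]
        payoff += beta ** t * f[x, a]
        x = rng.choice(2, p=P[x, a])
    total += payoff

print(f"[T_u]^k V0 at x=0     : {V[0]:.3f}")
print(f"simulated mean payoff : {total / n_paths:.3f}   (agreement up to Monte Carlo error)")
```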

A Second Contraction Mapping

Recall the definition
\[ [T V](x) := \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u] \big\} \]
Given any state \(x \in X\) and any two functions \(V_1, V_2 \in \mathcal{V}_M\), define \(u_1, u_2 \in F(x)\) so that for k = 1, 2 one has
\[ [T V_k](x) = f(x, u_k) + \beta\, \mathbb{E}[V_k(\tilde x) \mid x, u_k] \]
Note that \([T V_2](x) \ge f(x, u_1) + \beta\, \mathbb{E}[V_2(\tilde x) \mid x, u_1]\), implying that
\[ [T V_1](x) - [T V_2](x) \le \beta\, \mathbb{E}[V_1(\tilde x) - V_2(\tilde x) \mid x, u_1] \le \beta\, \|V_1 - V_2\| \]
Similarly, interchanging 1 and 2 in the above argument gives \([T V_2](x) - [T V_1](x) \le \beta\, \|V_1 - V_2\|\). Hence \(\|T V_1 - T V_2\| \le \beta\, \|V_1 - V_2\|\), so T is also a contraction.

Applying the Contraction Mapping Theorem, II

Similarly, the contraction mapping \(V \mapsto T V\) has a unique fixed point in the form of a function \(V^* \in \mathcal{V}_M\) such that \(V^*(\bar x)\) is the maximized expected discounted total payoff of starting in state \(x_1 = \bar x\) and following an optimal policy for ever thereafter.

Moreover, \(V^* = T V^* = T_{u^*} V^*\). This implies that \(V^*\) is also the value of following the policy \(x \mapsto u^*(x)\) throughout, which must therefore be an optimal policy.

Characterizing the Fixed Point, II

Starting from \(V^0 = 0\) and given any initial state \(\bar x \in X\), note that
\[ [T]^k V^0(x) = [T]\big( [T]^{k-1} V^0 \big)(x) = \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}\big[ \big([T]^{k-1} V^0\big)(\tilde x) \,\big|\, x, u \big] \big\} \]
It follows by induction on k that \([T]^k V^0(\bar x)\) equals the maximum possible expected discounted total payoff \(\mathbb{E} \sum_{t=1}^{k} \beta^{t-1} f(x_t, u_t)\) of starting from \(x_1 = \bar x\) and then following the backward sequence of optimal policies \((u_k^*, u_{k-1}^*, u_{k-2}^*, \ldots, u_2^*, u_1^*)\), where for each k the policy \(x \mapsto u_k^*(x)\) is optimal when k periods remain.

Method of Successive Approximation

The method of successive approximation starts with an arbitrary function \(V^0 \in \mathcal{V}_M\). For k = 1, 2, ..., it then repeatedly solves the pair of equations
\[ V^k = T V^{k-1} = T_{u_k^*} V^{k-1} \]
to construct sequences of:
1. state valuation functions \(X \ni x \mapsto V^k(x) \in \mathbb{R}\);
2. policies \(X \ni x \mapsto u_k^*(x) \in F(x)\) that are optimal given that one applies the preceding state valuation function \(X \ni \tilde x \mapsto V^{k-1}(\tilde x) \in \mathbb{R}\) to each immediately succeeding state \(\tilde x\).

Because the operator \(V \mapsto T V\) on \(\mathcal{V}_M\) is a contraction mapping, the method produces a convergent sequence \((V^k)_{k=1}^{\infty}\) of state valuation functions whose limit satisfies \(V^* = T V^* = T_{u^*} V^*\) for a suitable policy \(X \ni x \mapsto u^*(x) \in F(x)\).
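A minimal value-iteration sketch of this method, for the same assumed two-state example used above, is given below. The stopping rule uses the standard contraction-based error bound \(\|V^k - V^*\| \le \beta(1-\beta)^{-1}\|V^k - V^{k-1}\|\); the data and tolerance are assumptions for illustration.

```python
# Successive approximation (value iteration) for an assumed finite example;
# stops when the sup-norm change implies an approximation error below tol.
import numpy as np

beta, tol = 0.9, 1e-8
f = np.array([[1.0, 0.5],
              [0.0, 2.0]])                          # f[x, a] (assumed)
P = np.array([[[0.8, 0.2], [0.5, 0.5]],
              [[0.3, 0.7], [0.9, 0.1]]])            # P[x, a, y] (assumed)

def T(V):
    Q = f + beta * P @ V                            # Q[x, a]
    return Q.max(axis=1), Q.argmax(axis=1)

V = np.zeros(2)                                     # any V0 in the bounded set will do
for k in range(1, 10_000):
    V_new, policy = T(V)
    gap = np.max(np.abs(V_new - V))                 # sup norm ||V_k - V_{k-1}||
    V = V_new
    if gap * beta / (1.0 - beta) < tol:             # contraction-based error bound
        break
print(f"converged after {k} iterations: V* = {V}, optimal policy = {policy}")
```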

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

Monotonicity

For all functions \(V \in \mathcal{V}_M\), policies u, and states \(x \in X\), we have defined
\[ [T_u V](x) := f(x, u(x)) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u(x)] \quad\text{and}\quad [T V](x) := \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V(\tilde x) \mid x, u] \big\} \]
Notation. Given any pair \(V_1, V_2 \in \mathcal{V}_M\), we write \(V_1 \ge V_2\) to indicate that the inequality \(V_1(x) \ge V_2(x)\) holds for all \(x \in X\).

Definition. An operator \(\mathcal{V}_M \ni V \mapsto \mathbf{T}V \in \mathcal{V}_M\) is monotone just in case whenever \(V_1, V_2 \in \mathcal{V}_M\) satisfy \(V_1 \ge V_2\), one has \(\mathbf{T}V_1 \ge \mathbf{T}V_2\).

Theorem. The following operators on \(\mathcal{V}_M\) are monotone:
1. \(V \mapsto T_u V\) for all policies u;
2. \(V \mapsto T V\) for the optimal policy.

Proof that T_u is Monotone

Given any state \(x \in X\) and any two functions \(V_1, V_2 \in \mathcal{V}_M\), the definition of \(T_u\) implies that
\[ [T_u V_1](x) := f(x, u(x)) + \beta\, \mathbb{E}[V_1(\tilde x) \mid x, u(x)] \quad\text{and}\quad [T_u V_2](x) := f(x, u(x)) + \beta\, \mathbb{E}[V_2(\tilde x) \mid x, u(x)] \]
Subtracting the second equation from the first implies that
\[ [T_u V_1](x) - [T_u V_2](x) = \beta\, \mathbb{E}[V_1(\tilde x) - V_2(\tilde x) \mid x, u(x)] \]
If \(V_1 \ge V_2\), and so the inequality \(V_1(\tilde x) \ge V_2(\tilde x)\) holds for all \(\tilde x \in X\), it follows that \([T_u V_1](x) \ge [T_u V_2](x)\). Since this holds for all \(x \in X\), we have proved that \(T_u V_1 \ge T_u V_2\).

Proof that T is Monotone

Given any state \(x \in X\) and any two functions \(V_1, V_2 \in \mathcal{V}_M\), define \(u_1, u_2 \in F(x)\) so that for k = 1, 2 one has
\[ [T V_k](x) = \max_{u \in F(x)} \big\{ f(x, u) + \beta\, \mathbb{E}[V_k(\tilde x) \mid x, u] \big\} = f(x, u_k) + \beta\, \mathbb{E}[V_k(\tilde x) \mid x, u_k] \]
It follows that
\[ [T V_1](x) \ge f(x, u_2) + \beta\, \mathbb{E}[V_1(\tilde x) \mid x, u_2] \quad\text{and}\quad [T V_2](x) = f(x, u_2) + \beta\, \mathbb{E}[V_2(\tilde x) \mid x, u_2] \]
Subtracting the second equation from the first inequality gives
\[ [T V_1](x) - [T V_2](x) \ge \beta\, \mathbb{E}[V_1(\tilde x) - V_2(\tilde x) \mid x, u_2] \]
If \(V_1 \ge V_2\), and so the inequality \(V_1(\tilde x) \ge V_2(\tilde x)\) holds for all \(\tilde x \in X\), it follows that \([T V_1](x) \ge [T V_2](x)\). Since this holds for all \(x \in X\), we have proved that \(T V_1 \ge T V_2\).

Policy Improvement

The method of policy improvement starts with any fixed policy \(u^0\), or \(X \ni x \mapsto u^0(x) \in F(x)\), along with the value \(V^{u^0}\) of following that policy for ever, which is the unique fixed point that satisfies \(V^{u^0} = T_{u^0} V^{u^0}\).

At each step k = 1, 2, ..., given the previous policy \(u^{k-1}\) and associated value \(V^{u^{k-1}}\) satisfying \(V^{u^{k-1}} = T_{u^{k-1}} V^{u^{k-1}}\):
1. the policy \(u^k\) is chosen so that \(T V^{u^{k-1}} = T_{u^k} V^{u^{k-1}}\);
2. the state valuation function \(x \mapsto V^{u^k}(x)\) is chosen as the unique fixed point of the operator \(T_{u^k}\).

Theorem. The double infinite sequence \((u^k, V^{u^k})_{k \in \mathbb{N}}\) of policies and their associated state valuation functions satisfies:
1. \(V^{u^k} \ge V^{u^{k-1}}\) for all \(k \in \mathbb{N}\) (policy improvement);
2. \(\|V^{u^k} - V^*\| \to 0\) as \(k \to \infty\), where \(V^*\) is the infinite-horizon optimal state valuation function that satisfies \(T V^* = V^*\).
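The routine can be sketched for the same assumed finite example as before: policy evaluation solves the linear system \(V = f_u + \beta P_u V\) exactly, and the improvement step applies T to the result. This is Howard's policy iteration; the data below are illustrative assumptions.

```python
# Policy improvement (policy iteration) for an assumed two-state example.
import numpy as np

beta = 0.9
f = np.array([[1.0, 0.5],
              [0.0, 2.0]])                           # f[x, a] (assumed)
P = np.array([[[0.8, 0.2], [0.5, 0.5]],
              [[0.3, 0.7], [0.9, 0.1]]])             # P[x, a, y] (assumed)
n = f.shape[0]

def evaluate(policy):
    """Value of following `policy` for ever: solve (I - beta P_u) V = f_u."""
    idx = np.arange(n)
    P_u, f_u = P[idx, policy, :], f[idx, policy]
    return np.linalg.solve(np.eye(n) - beta * P_u, f_u)

policy = np.zeros(n, dtype=int)                      # arbitrary initial policy u^0
while True:
    V = evaluate(policy)                             # V^{u^k}, the fixed point of T_{u^k}
    Q = f + beta * P @ V
    new_policy = Q.argmax(axis=1)                    # u^{k+1} chosen so T_{u^{k+1}} V = T V
    if np.array_equal(new_policy, policy):
        break                                        # then T V = V, so V is optimal
    policy = new_policy

print(f"optimal policy: {policy}, V* = {V}")
```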

Proof of Policy Improvement

By definition of the optimality operator T, one has \(T V \ge T_u V\) for all functions \(V \in \mathcal{V}_M\) and all policies u. So at each step k of the policy improvement routine, one has
\[ T_{u^k} V^{u^{k-1}} = T V^{u^{k-1}} \ge T_{u^{k-1}} V^{u^{k-1}} = V^{u^{k-1}} \]
In particular, \(T_{u^k} V^{u^{k-1}} \ge V^{u^{k-1}}\). Now, applying successive iterations of the monotonic operator \(T_{u^k}\) implies that
\[ V^{u^{k-1}} \le T_{u^k} V^{u^{k-1}} \le [T_{u^k}]^2 V^{u^{k-1}} \le \cdots \le [T_{u^k}]^r V^{u^{k-1}} \le [T_{u^k}]^{r+1} V^{u^{k-1}} \le \cdots \]
But the definition of \(V^{u^k}\) implies that for all \(V \in \mathcal{V}_M\), including \(V = V^{u^{k-1}}\), one has \(\|[T_{u^k}]^r V - V^{u^k}\| \to 0\) as \(r \to \infty\). Hence \(V^{u^k} = \sup_r [T_{u^k}]^r V^{u^{k-1}} \ge V^{u^{k-1}}\), thus confirming that the policy \(u^k\) does improve on \(u^{k-1}\).

Proof of Convergence

Recall that at each step k of the policy improvement routine, one has \(T_{u^k} V^{u^{k-1}} = T V^{u^{k-1}}\) and also \(T_{u^k} V^{u^k} = V^{u^k}\). Now, for each state \(x \in X\), define \(\hat V(x) := \sup_k V^{u^k}(x)\).

Because \(V^{u^k} \ge V^{u^{k-1}}\) and \(T_{u^k}\) is monotonic, one has \(V^{u^k} = T_{u^k} V^{u^k} \ge T_{u^k} V^{u^{k-1}} = T V^{u^{k-1}}\). Next, because T is monotonic, it follows that
\[ \hat V = \sup_k V^{u^k} \ge \sup_k T V^{u^{k-1}} = T\Big( \sup_k V^{u^{k-1}} \Big) = T \hat V \]
Similarly, monotonicity of T and the definition of T imply that
\[ \hat V = \sup_k V^{u^k} = \sup_k T_{u^k} V^{u^k} \le \sup_k T V^{u^k} = T\Big( \sup_k V^{u^k} \Big) = T \hat V \]
Hence \(\hat V = T \hat V = V^*\), because T has a unique fixed point. Therefore \(V^* = \sup_k V^{u^k}\) and so, because the sequence \(V^{u^k}(x)\) is non-decreasing, one has \(V^{u^k}(x) \uparrow V^*(x)\) for each \(x \in X\).

Lecture Outline

Optimal Saving
The Two Period Problem
The T Period Problem
A General Problem
Infinite Time Horizon
Main Theorem
Policy Improvement

Unbounded Utility

In economics the boundedness condition \(\underline{M} \le f(x, u) \le \bar M\) is rarely satisfied! Consider for example the isoelastic utility function
\[ u(c; \epsilon) = \begin{cases} \dfrac{c^{1-\epsilon}}{1-\epsilon} & \text{if } \epsilon > 0 \text{ and } \epsilon \ne 1 \\[1ex] \ln c & \text{if } \epsilon = 1 \end{cases} \]
This function is obviously:
1. bounded below but unbounded above in case \(0 < \epsilon < 1\);
2. unbounded both above and below in case \(\epsilon = 1\);
3. bounded above but unbounded below in case \(\epsilon > 1\).

Also commonly used is the negative exponential utility function defined by \(u(c) = -e^{-\alpha c}\), where \(\alpha\) is the constant absolute rate of risk aversion (CARA). This function is bounded above and also below (provided \(c \ge 0\)).


Contents. An example 5. Mathematical Preliminaries 13. Dynamic programming under certainty 29. Numerical methods 41. Some applications 57 T H O M A S D E M U Y N C K DY N A M I C O P T I M I Z AT I O N Contents An example 5 Mathematical Preliminaries 13 Dynamic programming under certainty 29 Numerical methods 41 Some applications 57 Stochastic

More information

A simple macro dynamic model with endogenous saving rate: the representative agent model

A simple macro dynamic model with endogenous saving rate: the representative agent model A simple macro dynamic model with endogenous saving rate: the representative agent model Virginia Sánchez-Marcos Macroeconomics, MIE-UNICAN Macroeconomics (MIE-UNICAN) A simple macro dynamic model with

More information

Equilibrium Valuation with Growth Rate Uncertainty

Equilibrium Valuation with Growth Rate Uncertainty Equilibrium Valuation with Growth Rate Uncertainty Lars Peter Hansen University of Chicago Toulouse p. 1/26 Portfolio dividends relative to consumption. 2 2.5 Port. 1 Port. 2 Port. 3 3 3.5 4 1950 1955

More information

The Growth Model in Continuous Time (Ramsey Model)

The Growth Model in Continuous Time (Ramsey Model) The Growth Model in Continuous Time (Ramsey Model) Prof. Lutz Hendricks Econ720 September 27, 2017 1 / 32 The Growth Model in Continuous Time We add optimizing households to the Solow model. We first study

More information

MDP Preliminaries. Nan Jiang. February 10, 2019

MDP Preliminaries. Nan Jiang. February 10, 2019 MDP Preliminaries Nan Jiang February 10, 2019 1 Markov Decision Processes In reinforcement learning, the interactions between the agent and the environment are often described by a Markov Decision Process

More information

Reinforcement Learning and Optimal Control. ASU, CSE 691, Winter 2019

Reinforcement Learning and Optimal Control. ASU, CSE 691, Winter 2019 Reinforcement Learning and Optimal Control ASU, CSE 691, Winter 2019 Dimitri P. Bertsekas dimitrib@mit.edu Lecture 8 Bertsekas Reinforcement Learning 1 / 21 Outline 1 Review of Infinite Horizon Problems

More information

Dynamic stochastic general equilibrium models. December 4, 2007

Dynamic stochastic general equilibrium models. December 4, 2007 Dynamic stochastic general equilibrium models December 4, 2007 Dynamic stochastic general equilibrium models Random shocks to generate trajectories that look like the observed national accounts. Rational

More information

Lecture 4: The Bellman Operator Dynamic Programming

Lecture 4: The Bellman Operator Dynamic Programming Lecture 4: The Bellman Operator Dynamic Programming Jeppe Druedahl Department of Economics 15th of February 2016 Slide 1/19 Infinite horizon, t We know V 0 (M t ) = whatever { } V 1 (M t ) = max u(m t,

More information

Stochastic Problems. 1 Examples. 1.1 Neoclassical Growth Model with Stochastic Technology. 1.2 A Model of Job Search

Stochastic Problems. 1 Examples. 1.1 Neoclassical Growth Model with Stochastic Technology. 1.2 A Model of Job Search Stochastic Problems References: SLP chapters 9, 10, 11; L&S chapters 2 and 6 1 Examples 1.1 Neoclassical Growth Model with Stochastic Technology Production function y = Af k where A is random Let A s t

More information

Practical Dynamic Programming: An Introduction. Associated programs dpexample.m: deterministic dpexample2.m: stochastic

Practical Dynamic Programming: An Introduction. Associated programs dpexample.m: deterministic dpexample2.m: stochastic Practical Dynamic Programming: An Introduction Associated programs dpexample.m: deterministic dpexample2.m: stochastic Outline 1. Specific problem: stochastic model of accumulation from a DP perspective

More information

Dynamic (Stochastic) General Equilibrium and Growth

Dynamic (Stochastic) General Equilibrium and Growth Dynamic (Stochastic) General Equilibrium and Growth Martin Ellison Nuffi eld College Michaelmas Term 2018 Martin Ellison (Nuffi eld) D(S)GE and Growth Michaelmas Term 2018 1 / 43 Macroeconomics is Dynamic

More information

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:

More information

Small Open Economy RBC Model Uribe, Chapter 4

Small Open Economy RBC Model Uribe, Chapter 4 Small Open Economy RBC Model Uribe, Chapter 4 1 Basic Model 1.1 Uzawa Utility E 0 t=0 θ t U (c t, h t ) θ 0 = 1 θ t+1 = β (c t, h t ) θ t ; β c < 0; β h > 0. Time-varying discount factor With a constant

More information

Comprehensive Exam. Macro Spring 2014 Retake. August 22, 2014

Comprehensive Exam. Macro Spring 2014 Retake. August 22, 2014 Comprehensive Exam Macro Spring 2014 Retake August 22, 2014 You have a total of 180 minutes to complete the exam. If a question seems ambiguous, state why, sharpen it up and answer the sharpened-up question.

More information

Numerical Methods in Economics

Numerical Methods in Economics Numerical Methods in Economics MIT Press, 1998 Chapter 12 Notes Numerical Dynamic Programming Kenneth L. Judd Hoover Institution November 15, 2002 1 Discrete-Time Dynamic Programming Objective: X: set

More information

Solving a Dynamic (Stochastic) General Equilibrium Model under the Discrete Time Framework

Solving a Dynamic (Stochastic) General Equilibrium Model under the Discrete Time Framework Solving a Dynamic (Stochastic) General Equilibrium Model under the Discrete Time Framework Dongpeng Liu Nanjing University Sept 2016 D. Liu (NJU) Solving D(S)GE 09/16 1 / 63 Introduction Targets of the

More information

Concavity of the Consumption Function with Recursive Preferences

Concavity of the Consumption Function with Recursive Preferences Concavity of the Consumption Function with Recursive Preferences Semyon Malamud May 21, 2014 Abstract Carroll and Kimball 1996) show that the consumption function for an agent with time-separable, isoelastic

More information

Monetary Economics: Solutions Problem Set 1

Monetary Economics: Solutions Problem Set 1 Monetary Economics: Solutions Problem Set 1 December 14, 2006 Exercise 1 A Households Households maximise their intertemporal utility function by optimally choosing consumption, savings, and the mix of

More information

Advanced Macroeconomics

Advanced Macroeconomics Advanced Macroeconomics The Ramsey Model Marcin Kolasa Warsaw School of Economics Marcin Kolasa (WSE) Ad. Macro - Ramsey model 1 / 30 Introduction Authors: Frank Ramsey (1928), David Cass (1965) and Tjalling

More information

Value Function Iteration

Value Function Iteration Value Function Iteration (Lectures on Solution Methods for Economists II) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 26, 2018 1 University of Pennsylvania 2 Boston College Theoretical Background

More information

1 THE GAME. Two players, i=1, 2 U i : concave, strictly increasing f: concave, continuous, f(0) 0 β (0, 1): discount factor, common

1 THE GAME. Two players, i=1, 2 U i : concave, strictly increasing f: concave, continuous, f(0) 0 β (0, 1): discount factor, common 1 THE GAME Two players, i=1, 2 U i : concave, strictly increasing f: concave, continuous, f(0) 0 β (0, 1): discount factor, common With Law of motion of the state: Payoff: Histories: Strategies: k t+1

More information

1 Bewley Economies with Aggregate Uncertainty

1 Bewley Economies with Aggregate Uncertainty 1 Bewley Economies with Aggregate Uncertainty Sofarwehaveassumedawayaggregatefluctuations (i.e., business cycles) in our description of the incomplete-markets economies with uninsurable idiosyncratic risk

More information

Macroeconomics Qualifying Examination

Macroeconomics Qualifying Examination Macroeconomics Qualifying Examination August 2016 Department of Economics UNC Chapel Hill Instructions: This examination consists of 4 questions. Answer all questions. If you believe a question is ambiguously

More information

Q-Learning for Markov Decision Processes*

Q-Learning for Markov Decision Processes* McGill University ECSE 506: Term Project Q-Learning for Markov Decision Processes* Authors: Khoa Phan khoa.phan@mail.mcgill.ca Sandeep Manjanna sandeep.manjanna@mail.mcgill.ca (*Based on: Convergence of

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

"0". Doing the stuff on SVARs from the February 28 slides

0. Doing the stuff on SVARs from the February 28 slides Monetary Policy, 7/3 2018 Henrik Jensen Department of Economics University of Copenhagen "0". Doing the stuff on SVARs from the February 28 slides 1. Money in the utility function (start) a. The basic

More information

Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control

Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control Erhan Bayraktar University of Michigan joint work with Virginia R. Young, University of Michigan K αρλoβασi,

More information

Lecture 3: Dynamics of small open economies

Lecture 3: Dynamics of small open economies Lecture 3: Dynamics of small open economies Open economy macroeconomics, Fall 2006 Ida Wolden Bache September 5, 2006 Dynamics of small open economies Required readings: OR chapter 2. 2.3 Supplementary

More information

Lecture 5: The Bellman Equation

Lecture 5: The Bellman Equation Lecture 5: The Bellman Equation Florian Scheuer 1 Plan Prove properties of the Bellman equation (In particular, existence and uniqueness of solution) Use this to prove properties of the solution Think

More information

Economic Growth: Lectures 5-7, Neoclassical Growth

Economic Growth: Lectures 5-7, Neoclassical Growth 14.452 Economic Growth: Lectures 5-7, Neoclassical Growth Daron Acemoglu MIT November 7, 9 and 14, 2017. Daron Acemoglu (MIT) Economic Growth Lectures 5-7 November 7, 9 and 14, 2017. 1 / 83 Introduction

More information

Lecture 2. (1) Aggregation (2) Permanent Income Hypothesis. Erick Sager. September 14, 2015

Lecture 2. (1) Aggregation (2) Permanent Income Hypothesis. Erick Sager. September 14, 2015 Lecture 2 (1) Aggregation (2) Permanent Income Hypothesis Erick Sager September 14, 2015 Econ 605: Adv. Topics in Macroeconomics Johns Hopkins University, Fall 2015 Erick Sager Lecture 2 (9/14/15) 1 /

More information