Development Economics (PhD)
Intertemporal Utility Maximization
Department of Economics, University of Gothenburg
October 7, 2015
Two-Period Utility Maximization: The Lagrange Multiplier Method

Consider a two-period problem (periods 1 and 2).

The two-period utility function:

  U = u(c_1) + \frac{1}{1+\delta} u(c_2)    (1)

Standard assumptions: u'(\cdot) > 0 and u''(\cdot) < 0.

Period 1 budget constraint:

  Y_1 + (1+r)A_0 = c_1 + A_1    (2)

where A_0 > 0 is an inheritance and A_0 < 0 is debt from the previous generation.
Two-Period Utility Maximization: The Lagrange Multiplier Method (cont.)

Period 2 budget constraint:

  Y_2 + (1+r)A_1 = c_2 + A_2    (3)

For simplicity, assume A_0 = 0 (no inheritance and no debt) and A_2 = 0 (no bequest and no debt in the final period). The problem becomes:

  \max_{c_1, c_2} U = u(c_1) + \frac{1}{1+\delta} u(c_2)    (4)

  s.t.  Y_1 = c_1 + A_1    (5)

        Y_2 + (1+r)A_1 = c_2    (6)
Two-Period Utility Maximization: The Lagrange Multiplier Method (cont.)

Use the Lagrange multiplier method:

  \mathcal{L} = u(c_1) + \frac{1}{1+\delta} u(c_2) + \lambda_1 (Y_1 - c_1 - A_1) + \lambda_2 (Y_2 + (1+r)A_1 - c_2)    (7)

The FOCs with respect to c_1, c_2, A_1, \lambda_1 and \lambda_2 are:

  \frac{\partial \mathcal{L}}{\partial c_1} = u'(c_1) - \lambda_1 = 0    (8)

  \frac{\partial \mathcal{L}}{\partial c_2} = \frac{1}{1+\delta} u'(c_2) - \lambda_2 = 0    (9)
Two-Period Utility Maximization: The Lagrange Multiplier Method (cont.)

  \frac{\partial \mathcal{L}}{\partial A_1} = -\lambda_1 + (1+r)\lambda_2 = 0    (10)

  \frac{\partial \mathcal{L}}{\partial \lambda_1} = Y_1 - c_1 - A_1 = 0    (11)

  \frac{\partial \mathcal{L}}{\partial \lambda_2} = Y_2 + (1+r)A_1 - c_2 = 0    (12)

From (8) and (9):

  \lambda_1 = u'(c_1)    (13)

  \lambda_2 = \frac{1}{1+\delta} u'(c_2)    (14)
Two-Period Utility Maximization: The Lagrange Multiplier Method (cont.)

Rewriting (10):

  -\lambda_1 + (1+r)\lambda_2 = 0    (15)

Substituting equations (13) and (14) into (15) gives

  u'(c_1) = \frac{1+r}{1+\delta} u'(c_2)    (16)

Eq. (16) is known as the Euler equation.
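The two-period Euler equation (16) can be checked numerically. The sketch below is illustrative and not from the slides: it assumes log utility, u(c) = ln(c), for which u'(c) = 1/c and the optimal period-1 consumption has the closed form c_1 = W/(1 + \beta), where W is lifetime wealth and \beta = 1/(1+\delta). The income and rate values are arbitrary.

```python
# Two-period consumption choice with log utility (illustrative assumption):
# u(c) = ln(c), so u'(c) = 1/c.
Y1, Y2 = 100.0, 50.0             # arbitrary incomes in periods 1 and 2
r, delta = 0.05, 0.08            # interest rate and discount rate
beta = 1.0 / (1.0 + delta)

W = Y1 + Y2 / (1.0 + r)          # present value of lifetime income
c1 = W / (1.0 + beta)            # log-utility closed form for c_1
A1 = Y1 - c1                     # savings, from budget constraint (5)
c2 = Y2 + (1.0 + r) * A1         # period-2 budget constraint (6)

# Euler equation (16): u'(c1) = (1+r)/(1+delta) * u'(c2), with u'(c) = 1/c
lhs = 1.0 / c1
rhs = (1.0 + r) / (1.0 + delta) / c2
assert abs(lhs - rhs) < 1e-12    # Euler equation holds at the optimum
```

Changing delta relative to r tilts the consumption path: delta > r (as here) makes c_2 < c_1, and delta < r makes c_2 > c_1, exactly as (16) predicts.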
Dynamic Programming

Consider the lifetime utility function of the consumer:

  U = u(c_1) + \frac{1}{1+\delta} u(c_2) + \left(\frac{1}{1+\delta}\right)^2 u(c_3) + \ldots + \left(\frac{1}{1+\delta}\right)^{T-1} u(c_T)    (17)

    = \sum_{t=1}^{T} \left(\frac{1}{1+\delta}\right)^{t-1} u(c_t)    (18)

The optimization problem of the consumer is as follows:
Dynamic Programming (cont.)

  \max_{c_t} U = \sum_{t=1}^{T} \left(\frac{1}{1+\delta}\right)^{t-1} u(c_t)    (19)

  s.t.  A_t = (1+r)A_{t-1} + Y_t - c_t    (20)

In dynamic optimization, A_t is called the state variable, representing the total amount of resources available to the consumer.

c_t is called the control variable, which must be chosen by the consumer to maximize utility.

Note that the level of consumption chosen at any time t (given A_{t-1}) affects the level of wealth available in period t+1.

The optimization problem of the consumer is solved recursively (from period 1 to T).
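The wealth transition (20) is the law of motion linking the state A_t to the control c_t. A minimal sketch, with illustrative numbers not taken from the slides:

```python
# Wealth transition, eq. (20): A_t = (1+r) * A_{t-1} + Y_t - c_t
# (function name and numbers are illustrative, not from the slides)
def next_wealth(a_prev, y, c, r=0.05):
    """Next-period wealth given current wealth, income, and consumption."""
    return (1.0 + r) * a_prev + y - c

# Start with wealth 100, earn 50, consume 40 at r = 5%:
a1 = next_wealth(100.0, 50.0, 40.0)
assert abs(a1 - 115.0) < 1e-9   # 1.05*100 + 50 - 40 = 115
```

Consuming more today (a larger c) mechanically lowers A_t, and hence the resources (1+r)A_t + Y_{t+1} available tomorrow.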
Dynamic Programming (cont.)

Let the value function (the maximized value of the objective function) in t = 1, given an initial stock of wealth A_0, be

  V_1(A_0) = \max_{c_1} \sum_{t=1}^{T} \left(\frac{1}{1+\delta}\right)^{t-1} u(c_t)    (21)

  V_1(A_0) = \max_{c_1} \left\{ u(c_1) + \frac{1}{1+\delta} u(c_2) + \left(\frac{1}{1+\delta}\right)^2 u(c_3) + \ldots + \left(\frac{1}{1+\delta}\right)^{T-1} u(c_T) \right\}    (22)
Dynamic Programming (cont.)

  V_1(A_0) = \max_{c_1} \left\{ u(c_1) + \frac{1}{1+\delta} \left[ \sum_{t=2}^{T} \left(\frac{1}{1+\delta}\right)^{t-2} u(c_t) \right] \right\}    (23)

  s.t.  A_t = (1+r)A_{t-1} + Y_t - c_t    (24)

Given the stock of wealth carried over from period 1, A_1, the consumer again maximizes utility in period 2 subject to the wealth constraint given in (24) above.

Thus the value function in period 2 is

  V_2(A_1) = \max_{c_2} \sum_{t=2}^{T} \left(\frac{1}{1+\delta}\right)^{t-2} u(c_t)    (25)

Substituting equation (25) into (23) yields
Dynamic Programming (cont.)

  V_1(A_0) = \max_{c_1} \left[ u(c_1) + \frac{1}{1+\delta} V_2(A_1) \right]    (26)

If the consumer optimizes in this way in every period,

  V_t(A_{t-1}) = \max_{c_t} \left[ u(c_t) + \frac{1}{1+\delta} V_{t+1}(A_t) \right]    (27)

Equation (27) is known as the Bellman equation.

Dropping the time subscript on the value function:

  V(A_{t-1}) = \max_{c_t} \left[ u(c_t) + \frac{1}{1+\delta} V(A_t) \right]    (28)
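The finite-horizon Bellman equation (27) can be solved by backward induction on a grid. The sketch below is illustrative, not the slides' method made concrete: it assumes log utility, a discrete grid of non-negative wealth levels (so no borrowing), constant income, and a terminal condition A_T = 0 (no bequest), all choices of mine.

```python
# Backward induction on the Bellman equation (27), on a discrete wealth grid.
# Utility, incomes, rates, and grid are illustrative assumptions.
import math

T = 3                        # number of periods
r, delta = 0.10, 0.05        # here r > delta, so consumption should rise over time
beta = 1.0 / (1.0 + delta)
Y = [50.0, 50.0, 50.0]       # income in periods 1..T
grid = [float(i) for i in range(0, 201)]   # feasible end-of-period wealth A_t >= 0

def u(c):
    return math.log(c) if c > 0 else -math.inf

V_next = {a: 0.0 for a in grid}            # V_{T+1} = 0 for all wealth levels
policy = [None] * T
for t in range(T - 1, -1, -1):             # periods T, T-1, ..., 1
    V, pol = {}, {}
    for a_prev in grid:
        resources = (1.0 + r) * a_prev + Y[t]
        choices = [0.0] if t == T - 1 else grid   # force A_T = 0 (no bequest)
        best, best_a = -math.inf, 0.0
        for a_next in choices:
            c = resources - a_next                # transition (20) rearranged
            val = u(c) + beta * V_next[a_next]    # Bellman equation (27)
            if val > best:
                best, best_a = val, a_next
        V[a_prev], pol[a_prev] = best, best_a
    V_next, policy[t] = V, pol

# Simulate the optimal path forward from A_0 = 0
a, cons = 0.0, []
for t in range(T):
    a_next = policy[t][a]
    cons.append((1.0 + r) * a + Y[t] - a_next)
    a = a_next
print(cons)   # consumption path rises over time because r > delta
```

With r > delta the Euler equation implies u'(c_t) > u'(c_{t+1}), i.e. a rising consumption path, which the simulated path reproduces up to the grid's discretization error.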
Dynamic Programming (cont.)

Differentiate the Bellman equation w.r.t. c_t:

  \frac{\partial V(A_{t-1})}{\partial c_t} = 0    (29)

  u'(c_t) + \frac{1}{1+\delta} V'(A_t) \frac{\partial A_t}{\partial c_t} = 0    (30)

Since \partial A_t / \partial c_t = -1 from eq. (24), we can rewrite (30) as

  u'(c_t) - \frac{1}{1+\delta} V'(A_t) = 0    (31)

We need to solve for V'(A_t) in eq. (31) to complete the optimization.

Differentiate the Bellman equation given in (28) w.r.t. A_{t-1}:
Dynamic Programming (cont.)

  V'(A_{t-1}) = \frac{1}{1+\delta} V'(A_t) \frac{\partial A_t}{\partial A_{t-1}}    (32)

Since \partial A_t / \partial A_{t-1} = 1+r from (24),

  V'(A_{t-1}) = \frac{1+r}{1+\delta} V'(A_t)    (33)

Eq. (31) implies

  V'(A_t) = (1+\delta) u'(c_t)    (34)

and, one period earlier,

  V'(A_{t-1}) = (1+\delta) u'(c_{t-1})    (35)

Substituting (34) and (35) into (33), and shifting the time index forward one period, gives

  u'(c_t) = \frac{1+r}{1+\delta} u'(c_{t+1})    (36)

which is the same Euler equation as (16).
Dynamic Programming (cont.)

If \delta = r,

  u'(c_t) = u'(c_{t+1})    (37)

Since u''(\cdot) < 0, marginal utility is strictly decreasing, so (37) implies c_t = c_{t+1}: consumption is perfectly smoothed over time.
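The perfect-smoothing result (37) is easy to verify numerically. As before, this is an illustrative sketch assuming log utility and its two-period closed form (not part of the slides); the income values are arbitrary.

```python
# When delta = r, the Euler equation (37) implies constant consumption.
# Check in the two-period model with log utility (illustrative assumption).
Y1, Y2, r = 80.0, 40.0, 0.05
delta = r                           # the special case of slide 14
beta = 1.0 / (1.0 + delta)

W = Y1 + Y2 / (1.0 + r)             # lifetime wealth
c1 = W / (1.0 + beta)               # log-utility closed form for c_1
c2 = (1.0 + r) * (Y1 - c1) + Y2     # period-2 budget constraint

assert abs(c1 - c2) < 1e-9          # perfect consumption smoothing: c1 = c2
```

Even though income is uneven (80 then 40), the consumer saves in period 1 so that consumption is identical in both periods, which is the content of eq. (37).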