Introduction to Dynamic Programming Applied to Economics
1 Introduction to Dynamic Programming Applied to Economics Paulo Brito Instituto Superior de Economia e Gestão Universidade Técnica de Lisboa pbrito@iseg.utl.pt
Contents

1 Introduction
   1.1 A general overview
      Discrete time deterministic models
      Continuous time deterministic models
      Discrete time stochastic models
      Continuous time stochastic models
   1.2 References

I Deterministic Dynamic Programming

2 Discrete Time
   2.1 The optimal control problem
      Elements of the problem
      The simple cake-eating problem
      Calculus of variations problems
      Optimal control problems
      Dynamic programming
   2.2 Applications
      2.2.1 The cake-eating problem
      2.2.2 The consumption-portfolio choice problem
      2.2.3 The Ramsey problem

3 Continuous Time
   The dynamic programming principle and the HJB equation
      The simplest optimal control problem
      The infinite horizon discounted problem
   Bibliography
   Applications
      The cake-eating problem
      The dynamic consumption-saving problem
      The Ramsey model

II Stochastic Dynamic Programming

4 Discrete Time
   A short introduction to stochastic processes
      Filtrations
      Probabilities over filtrations
      Stochastic processes
      Some important processes
   Stochastic dynamic programming
   Applications
      The representative consumer
      The consumer problem: alternative methods of solution
      Infinite horizons

5 Continuous Time
   Introduction to continuous time stochastic processes
      Brownian motions
      Processes and functions of B
      Itô's integral
      Stochastic integrals
      Stochastic differential equations
   Stochastic optimal control
   Applications
      The representative agent problem
      The stochastic Ramsey model
Chapter 1

Introduction

We will study the three workhorses of modern macro and financial economics, using dynamic programming methods: the cake-eating (or resource depletion) problem, the intertemporal allocation problem for the representative agent in a finance economy, and the Ramsey model, in four different environments: discrete time and continuous time, deterministic and stochastic. Methodologically, we follow the dynamic programming approach, present some heuristic proofs, and derive explicit solutions whenever possible.
1.1 A general overview

We will consider the following types of problems.

Discrete time deterministic models

In the space of sequences \{u_t, x_t\}_{t=0}^{\infty}, such that (u_t, x_t) \in \mathbb{R}^m \times \mathbb{R} and t \mapsto (u_t, x_t) are deterministic processes, that verify the sequence of budget constraints

    x_{t+1} = g(x_t, u_t), \quad t = 0, 1, \ldots, \quad x_0 \text{ given,}

and, in some cases, the terminal condition \lim_{t\to\infty} \beta^t x_t \geq 0, choose an optimal sequence \{u_t^*, x_t^*\}_{t=0}^{\infty} that maximizes the sum

    \max_{\{u\}} \sum_{t=0}^{\infty} \beta^t f(u_t, x_t),

where 0 < \beta \equiv \frac{1}{1+\rho} < 1, with \rho > 0. By applying the principle of dynamic programming, the first order necessary conditions for this problem are given by the Hamilton-Jacobi-Bellman (HJB) equation, V(x_t) = \max_{u_t} \{f(u_t, x_t) + \beta V(x_{t+1})\}, which is usually written as

    V(x) = \max_u \{f(u, x) + \beta V(g(u, x))\}.    (1.1)

If an optimal control u^* exists, it has the form u^* = h(x), where h(x) is called the policy function. If we substitute it back in the HJB equation, we get the functional equation

    V(x) = f(h(x), x) + \beta V(g(h(x), x)).
Solving the HJB equation then means finding the function V(x) which solves this functional equation. If we are able to determine V(x) (explicitly or numerically), then we can also determine u_t^* = h(x_t^*). If we substitute into the difference equation x_{t+1} = g(x_t, h(x_t)), starting at x_0, the solution \{u_t^*, x_t^*\}_{t=0}^{\infty} of the optimal control problem can be determined, explicitly or implicitly. Only in very rare cases can we find V(\cdot) explicitly.
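As a minimal sketch of that last step, once a policy function h(·) is in hand, the optimal path follows from forward iteration. The particular forms of g and h below are illustrative assumptions only (they happen to match the cake-eating example solved in the next chapter, where h(x) = (1 - beta)x):

```python
# Recover the optimal path by iterating x_{t+1} = g(x_t, h(x_t)) from x_0.
# The forms of g and h are assumptions for illustration, matching the
# cake-eating example solved later.
beta = 0.9

def g(x, u):
    return x - u                 # law of motion of the state

def h(x):
    return (1 - beta) * x        # candidate policy function

x = 1.0                          # initial state x_0
xs, us = [x], []
for t in range(50):
    u = h(x)                     # u*_t = h(x*_t)
    us.append(u)
    x = g(x, u)                  # x*_{t+1} = g(x*_t, u*_t)
    xs.append(x)
# with these forms the state decays geometrically: x_t = beta**t * x_0
```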
Continuous time deterministic models

In the space of continuous functions of time (u(t), x(t)), choose an optimal flow [(u^*(t), x^*(t))]_{t \in \mathbb{R}_+} such that [u^*(t)]_{t \in \mathbb{R}_+} maximizes the functional

    V[u] = \int_0^{\infty} f(u(t), x(t)) e^{-\rho t} \, dt,

where \rho > 0, subject to the instantaneous budget constraint and the initial state constraint,

    \dot{x}(t) = g(x(t), u(t)), \quad t \geq 0,
    x(0) = x_0 \text{ given,}

and, in some cases, the terminal constraint \lim_{t\to\infty} e^{-\rho t} x(t) \geq 0. By applying the principle of dynamic programming, the first order conditions for this problem are represented by the HJB equation

    \rho V(x) = \max_u \{ f(u, x) + V'(x) g(u, x) \}.

Again, if an optimal control exists, it is determined from the policy function u^* = h(x), and the HJB equation is equivalent to the functional differential equation^1

    \rho V(x) = f(h(x), x) + V'(x) g(h(x), x).

Again, if we can find V(x) we can also find h(x), and we can determine the optimal flow [(u^*(t), x^*(t))]_{t \in \mathbb{R}_+} by solving the ordinary differential equation \dot{x} = g(h(x), x), given x(0) = x_0.

^1 We use the convention that \dot{x} = dx/dt denotes the time-derivative of x = x(t), and V'(x) = dV/dx denotes the derivative with respect to any other argument.
Discrete time stochastic models

The variables are random sequences \{u_t(\omega), x_t(\omega)\}_{t=0}^{\infty} which are adapted to the filtration \mathbb{F} = \{\mathcal{F}_t\}_{t=0}^{\infty} over a probability space (\Omega, \mathcal{F}, P); thus u_t \in \mathbb{R}^m and x_t \in \mathbb{R} are random variables. An economic agent chooses a random sequence \{u_t^*, x_t^*\}_{t=0}^{\infty} that maximizes the sum

    \max_u E_0 \left[ \sum_{t=0}^{\infty} \beta^t f(u_t, x_t) \right]

subject to the contingent sequence of budget constraints

    x_{t+1} = g(x_t, u_t, \omega_{t+1}), \quad t = 0, 1, \ldots, \quad x_0 \text{ given,}

where 0 < \beta < 1. By applying the principle of dynamic programming, the first order conditions of this problem are given by the HJB equation

    V(x_t) = \max_{u_t} \{ f(u_t, x_t) + \beta E_t[V(g(u_t, x_t, \omega_{t+1}))] \},

where E_t[V(g(u_t, x_t, \omega_{t+1}))] = E[V(g(u_t, x_t, \omega_{t+1})) \mid \mathcal{F}_t]. If it exists, the optimal control can take the form u_t^* = f^*(E_t[V(x_{t+1})]).

Continuous time stochastic models

The most common problem used in economics and finance is the following: in the space of flows [(u(\omega, t), x(\omega, t)) : \omega \in \Omega, \; t \in \mathbb{R}_+], adapted to the filtration of the probability space (\Omega, \mathcal{F}, P), choose a flow [u^*(t)] that maximizes the functional

    V[u] = E_0 \left[ \int_0^{\infty} f(u(t), x(t)) e^{-\rho t} \, dt \right],
where u(t) = u(\omega(t), t) and x(t) = x(\omega(t), t) are Itô processes and \rho > 0, such that the instantaneous budget constraint is represented by a stochastic differential equation,

    dx = g(x(t), u(t), t) \, dt + \sigma(x(t), u(t)) \, dB(t), \quad t \in \mathbb{R}_+,
    x(0) = x_0 \text{ given,}

where [B(t) : t \in \mathbb{R}_+] is a Wiener process. By applying the stochastic version of the principle of dynamic programming, the HJB equation is a second order functional equation,

    \rho V(x) = \max_u \left\{ f(u, x) + g(u, x) V'(x) + \frac{1}{2} \sigma(u, x)^2 V''(x) \right\}.

1.2 References

First contribution: Bellman (1957).
Discrete time: Bertsekas (1976), Sargent (1987), Stokey and Lucas (1989), Ljungqvist and Sargent (2000), Bertsekas (2005a), Bertsekas (2005b).
Continuous time: Fleming and Rishel (1975), Kamien and Schwartz (1991), Bertsekas (2005a), Bertsekas (2005b).
Part I

Deterministic Dynamic Programming
Chapter 2

Discrete Time

2.1 The optimal control problem

Elements of the problem

- We assume that time evolves in a discrete way, meaning that t \in \{0, 1, 2, \ldots\}, that is, t \in \mathbb{N}_0.
- The economy is described by two variables that evolve along time: a state variable, x_t, and a control variable, u_t.
- We know the initial value of the state variable, x_0, and the law of evolution of the state, which is a function of the control variable: x_{t+1} = g_t(x_t, u_t).
- In some cases, we can impose terminal conditions on the horizon, T, or on the state variable, v(x_T) = v_T, with v_T known.
- We assume that there are m control variables, and that they belong to the set u_t \in U \subseteq \mathbb{R}^m for any t \in \mathbb{N}_0.
- Then, for any sequence of controls u \equiv \{u_0, u_1, \ldots : u_t \in U\}, the economy can follow a large number of feasible paths, x \equiv \{x_0, x_1, \ldots\}, with x_{t+1} = g_t(x_t, u_t), u_t \in U.
- However, if we have a criterion that allows for the evaluation of all feasible paths, U(x_0, x_1, \ldots, u_0, u_1, \ldots), and if there is at least one optimal control u^* = \{u_0^*, u_1^*, \ldots\} which maximizes U, then there is at least one optimal path for the state variable, x^* \equiv \{x_0^*, x_1^*, \ldots\}.

The simple cake-eating problem

As an illustration, assume that there is a cake (or a resource) with initial size \phi_0 > 0 and that we want it to have size \phi_T < \phi_0 in period T. Let us denote the size of the cake at the beginning of period t by W_t. Then we know the initial and the terminal values of W_t: W_0 = \phi_0 and W_T = \phi_T. The cake (resource) is consumed (depleted) in every period. Denote by C_t the consumption of the cake in period t. Then

    W_0 = \phi_0
    W_1 = W_0 - C_0 = \phi_0 - C_0
    \ldots
    W_{T-1} = W_{T-2} - C_{T-2} = \phi_0 - \sum_{t=0}^{T-2} C_t
    W_T = \phi_T.
Therefore, in the interval between 0 and T the sequence \{C_0, \ldots, C_{T-1}\}, or, compactly, \{C_t\}_{t=0}^{T-1}, is consumed. Consuming the cake implies that its size along time is given by the sequence \{W_0, \ldots, W_t, \ldots, W_T\}. We can derive the sequence \{W_t\}_{t=0}^{T} from the difference equation

    W_{t+1} = W_t - C_t, \quad t = 0, \ldots, T-1,    (2.1)

together with the initial and terminal values. It has the solution

    W_t = W_0 - \sum_{s=0}^{t-1} C_s, \quad t = 1, \ldots, T.

How should we eat the cake (deplete the resource)? Two observations. First, the solution path for W depends on the path for C. Second, the problem seems to be misspecified, because there is an infinite number of paths for C which verify the constraint of the problem,

    \sum_{t=0}^{T-1} C_t = \phi_0 - \phi_T.

For instance, all the following are feasible solutions:

- consume everything at once: \{C_t\}_{t=0}^{T-1} = \{\phi_0 - \phi_T, 0, \ldots, 0\}, then \{W_t\}_{t=0}^{T} = \{\phi_0, \phi_T, \phi_T, \ldots, \phi_T\};
- consume everything at the end: \{C_t\}_{t=0}^{T-1} = \{0, 0, \ldots, \phi_0 - \phi_T\}, then \{W_t\}_{t=0}^{T} = \{\phi_0, \phi_0, \ldots, \phi_0, \phi_T\};
- stationary consumption strategy: C_t = \frac{\phi_0 - \phi_T}{T}, then W_t = \frac{1}{T}\left((T-t)\phi_0 + t\phi_T\right), so that \{W_t\}_{t=0}^{T} = \{\phi_0, \frac{1}{T}((T-1)\phi_0 + \phi_T), \ldots, \frac{1}{T}(\phi_0 + (T-1)\phi_T), \phi_T\}.
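The multiplicity can be checked mechanically. The sketch below (phi_0, phi_T and T are illustrative values) rolls equation (2.1) forward and confirms that all three strategies are feasible, i.e. they deplete the cake from phi_0 exactly to phi_T:

```python
# Roll W_{t+1} = W_t - C_t forward and check feasibility of the three
# strategies; phi0, phiT and T are illustrative values.
phi0, phiT, T = 1.0, 0.25, 5

def cake_path(C):
    """Sequence {W_t} implied by a consumption plan C, starting at W_0 = phi0."""
    W = [phi0]
    for c in C:
        W.append(W[-1] - c)
    return W

strategies = {
    "everything at once": [phi0 - phiT] + [0.0] * (T - 1),
    "everything at the end": [0.0] * (T - 1) + [phi0 - phiT],
    "stationary": [(phi0 - phiT) / T] * T,
}
for name, C in strategies.items():
    W = cake_path(C)
    assert abs(sum(C) - (phi0 - phiT)) < 1e-12   # resource constraint
    assert abs(W[-1] - phiT) < 1e-12             # terminal condition W_T = phiT

# the stationary strategy gives W_t = ((T - t)*phi0 + t*phiT) / T
W_stat = cake_path(strategies["stationary"])
```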
It is apparent that the specification of the cake-eating problem is incomplete. In order to choose a path for the cake (resource) we need an optimality criterion. A simple criterion could be to maximize the sum of consumption,

    \max_{\{C\}} \sum_{t=0}^{T-1} C_t.

But it is simple to see that, as \sum_{t=0}^{T-1} C_t = \phi_0 - \phi_T, all the previous paths are solutions. That is, there is an infinite number of solutions. Next we will see that criteria like

    \max_{\{C\}} \sum_{t=0}^{T-1} \beta^t f(C_t),

for 0 < \beta < 1 and f(C) increasing and concave, will produce a unique optimal path among the feasible paths. If we substitute equation (2.1) into the objective functional, we get a calculus of variations problem,

    \max_{\{W\}} \sum_{t=0}^{T-1} \beta^t f(W_t - W_{t+1}).
Calculus of variations problems

The simplest problem. The simplest calculus of variations problem consists in finding the maximum or the minimum of a functional over x \equiv \{x_t\}_{t=0}^{T}, given initial and terminal values x_0 and x_T. Assume that F is continuous and differentiable in its arguments. The simplest problem of the calculus of variations is to maximize the value functional

    \max_x \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t)    (2.2)

subject to

    x_0 = \phi_0 \text{ and } x_T = \phi_T,    (2.3)

where the function F(\cdot) is called the objective function and \phi_0 and \phi_T are given. We denote the solution of the calculus of variations problem by \{x_t^*\}_{t=0}^{T}. The value functional is defined as

    V(\{x\}) = \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t)

and the optimal value functional is

    V(\{x^*\}) = \sum_{t=0}^{T-1} F(x_{t+1}^*, x_t^*, t) = \max_{\{x\}} \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t).

Proposition 1 (Necessary condition for optimality). Let \{x_t^*\}_{t=0}^{T} be a solution for the problem defined by equations (2.2) and (2.3). Then it verifies the Euler-Lagrange equation

    \frac{\partial F(x_t, x_{t-1}, t-1)}{\partial x_t} + \frac{\partial F(x_{t+1}, x_t, t)}{\partial x_t} = 0, \quad t = 1, 2, \ldots, T-1,    (2.4)

and the initial and terminal conditions

    x_0 = \phi_0, \quad x_T = \phi_T.
Proof. Assume that we know the optimal solution \{x_t^*\}_{t=0}^{T}. Therefore, we also know the optimal value functional V(\{x^*\}) = \sum_{t=0}^{T-1} F(x_{t+1}^*, x_t^*, t). Consider an alternative candidate solution \{x_t\}_{t=0}^{T} such that x_t = x_t^* + \varepsilon_t, where \varepsilon_t \neq 0 for t = 1, \ldots, T-1 but \varepsilon_0 = \varepsilon_T = 0. That is, the alternative candidate solution departs from and reaches the same values as the optimal solution, but through a different path. In this case the value functional is V(\{x\}) = \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t). Observe that

    V(\{x\}) = F(x_1, x_0, 0) + F(x_2, x_1, 1) + \ldots + F(x_t, x_{t-1}, t-1) + F(x_{t+1}, x_t, t) + \ldots + F(x_T, x_{T-1}, T-1).

If F(\cdot) is differentiable, the first-order variation in the value functional is^1

    V(\{x\}) - V(\{x^*\}) = \sum_{t=1}^{T-1} \left( \frac{\partial F(x_t^*, x_{t-1}^*, t-1)}{\partial x_t} + \frac{\partial F(x_{t+1}^*, x_t^*, t)}{\partial x_t} \right) \varepsilon_t.

If \{x_t^*\}_{t=0}^{T} is an optimal solution, this variation must vanish for every admissible perturbation \{\varepsilon_t\}, which holds if (2.4) is verified.

Observations:
- If we have a minimum problem, we just have to consider the symmetric of the value functional:
    \min_y \sum_{t=0}^{T-1} F(y_{t+1}, y_t, t) = -\max_y \sum_{t=0}^{T-1} -F(y_{t+1}, y_t, t).
- If F is concave, then the necessary conditions are also sufficient.

^1 Recall that \varepsilon_0 = \varepsilon_T = 0.
Example: the cake-eating problem with logarithmic utility. The problem is to find the optimal paths C^* = \{C_t^*\}_{t=0}^{T-1} and W^* = \{W_t^*\}_{t=0}^{T} that solve

    \max_C \sum_{t=0}^{T-1} \beta^t \ln(C_t), \quad \text{subject to } W_{t+1} = W_t - C_t, \; W_0 = \phi, \; W_T = 0.    (2.5)

This problem can be transformed into the calculus of variations problem, because C_t = W_t - W_{t+1}:

    \max_W \sum_{t=0}^{T-1} \beta^t \ln(W_t - W_{t+1}), \quad \text{subject to } W_0 = \phi, \; W_T = 0.

The Euler equation is

    -\frac{\beta^{t-1}}{W_{t-1} - W_t} + \frac{\beta^t}{W_t - W_{t+1}} = 0, \quad t = 1, 2, \ldots, T-1,

or, equivalently,

    W_{t+2} = (1+\beta) W_{t+1} - \beta W_t, \quad t = 0, \ldots, T-2,

which has the general solution

    W_t^* = \frac{1}{1-\beta} \left( k_2 - \beta k_1 + (k_1 - k_2) \beta^t \right), \quad t = 0, 1, \ldots, T,

depending on two arbitrary constants, k_1 and k_2. We can evaluate them by using the initial and terminal conditions:

    W_0^* = \frac{1}{1-\beta} \left( k_2 - \beta k_1 + k_1 - k_2 \right) = \phi,
    W_T^* = \frac{1}{1-\beta} \left( k_2 - \beta k_1 + (k_1 - k_2) \beta^T \right) = 0,

so that k_1 = \phi and k_2 = \frac{\beta - \beta^T}{1 - \beta^T} \phi. Therefore, the solution (C^*, W^*) of the cake-eating problem is generated by

    W_t^* = \frac{\beta^t - \beta^T}{1 - \beta^T} \phi, \quad t = 0, 1, \ldots, T,    (2.6)

and, as C_t^* = W_t^* - W_{t+1}^*,

    C_t^* = \frac{1 - \beta}{1 - \beta^T} \beta^t \phi, \quad t = 0, 1, \ldots, T-1.    (2.7)
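The closed forms (2.6) and (2.7) can be verified numerically. The sketch below (beta, phi and T are illustrative values) checks the boundary conditions, the cake dynamics (2.1), and the condition C_{t+1} = beta * C_t implied by the Euler equation:

```python
beta, phi, T = 0.9, 1.0, 10      # illustrative parameter values

# closed-form solution (2.6)-(2.7)
W = [phi * (beta**t - beta**T) / (1 - beta**T) for t in range(T + 1)]
C = [phi * (1 - beta) / (1 - beta**T) * beta**t for t in range(T)]

assert abs(W[0] - phi) < 1e-12 and abs(W[T]) < 1e-12   # W_0 = phi, W_T = 0
for t in range(T):
    assert abs(W[t + 1] - (W[t] - C[t])) < 1e-12       # dynamics (2.1)
for t in range(T - 1):
    assert abs(C[t + 1] - beta * C[t]) < 1e-12         # Euler condition
```

Note that total consumption sums to phi, as the terminal constraint requires.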
The free terminal state problem. Now let us consider the problem

    \max_x \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t), \quad \text{subject to } x_0 = \phi_0 \text{ and } x_T \text{ free,}    (2.8)

where \phi_0 is given.

Proposition 2 (Necessary conditions for optimality for the free endpoint problem). Let \{x_t^*\}_{t=0}^{T} be a solution for the problem defined by equation (2.8). Then it verifies the Euler-Lagrange equation

    \frac{\partial F(x_t, x_{t-1}, t-1)}{\partial x_t} + \frac{\partial F(x_{t+1}, x_t, t)}{\partial x_t} = 0, \quad t = 1, 2, \ldots, T-1,    (2.9)

the initial condition x_0 = \phi_0, and the transversality condition

    \frac{\partial F(x_T, x_{T-1}, T-1)}{\partial x_T} = 0.

Proof. Again we assume that we know \{x_t^*\}_{t=0}^{T} and V(\{x^*\}), and we use the same method as in the proof for the simplest problem, with the difference that the perturbation for the final period is \varepsilon_T \neq 0. Now

    V(\{x\}) - V(\{x^*\}) = \sum_{t=1}^{T-1} \left( \frac{\partial F(x_t^*, x_{t-1}^*, t-1)}{\partial x_t} + \frac{\partial F(x_{t+1}^*, x_t^*, t)}{\partial x_t} \right) \varepsilon_t + \frac{\partial F(x_T^*, x_{T-1}^*, T-1)}{\partial x_T} \varepsilon_T.

Then V(\{x\}) - V(\{x^*\}) = 0 if and only if the Euler and the transversality conditions are verified.
The condition at t = T is called the transversality condition. Its meaning is the following: if the terminal state of the system is free, at the optimum there should be no gain from changing the trajectory at the horizon of the program. If \partial F(x_T, x_{T-1}, T-1)/\partial x_T > 0 then we could improve the solution by increasing x_T (remember that the utility functional is additive), and if \partial F(x_T, x_{T-1}, T-1)/\partial x_T < 0 we have a non-optimal terminal state by excess.

However, in free endpoint problems we may need an additional terminal condition in order to have a meaningful solution. To convince oneself, consider the following problem.

Cake-eating problem with free terminal size. Consider the previous example, but assume that T is known while W_T is free. The first order conditions from Proposition 2 are

    W_{t+1} = (1+\beta) W_t - \beta W_{t-1},
    W_0 = \phi,
    \frac{\beta^{T-1}}{W_{T-1} - W_T} = 0.

If we substitute the general solution of the Euler-Lagrange equation (for which W_{T-1} - W_T = (k_1 - k_2)\beta^{T-1}), we get

    \frac{\beta^{T-1}}{W_{T-1} - W_T} = \frac{1}{k_1 - k_2},

which can only be zero if k_1 - k_2 = \infty. Looking at the transversality condition directly, it only holds if W_{T-1} - W_T = \infty, which does not make sense. One way to solve this, which is very important in applications to economics, is to introduce a terminal constraint.

Free terminal state problem with a terminal constraint. Consider the problem

    \max_x \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t), \quad \text{subject to } x_0 = \phi_0 \text{ and } x_T \geq \phi_T,    (2.10)

where \phi_0 and \phi_T are given.
Proposition 3 (Necessary conditions for optimality for the free endpoint problem with a terminal constraint). Let \{x_t^*\}_{t=0}^{T} be a solution for the problem defined by equations (2.2) and (2.10). Then it verifies the Euler-Lagrange equation

    \frac{\partial F(x_t, x_{t-1}, t-1)}{\partial x_t} + \frac{\partial F(x_{t+1}, x_t, t)}{\partial x_t} = 0, \quad t = 1, 2, \ldots, T-1,    (2.11)

the initial condition x_0 = \phi_0, and the transversality condition

    \frac{\partial F(x_T, x_{T-1}, T-1)}{\partial x_T} (\phi_T - x_T) = 0.

Proof. Now we write V(\{x\}) as a Lagrangean,

    V(\{x\}) = \sum_{t=0}^{T-1} F(x_{t+1}, x_t, t) + \lambda (\phi_T - x_T),

where \lambda is a Lagrange multiplier. Using again the variational method with \varepsilon_0 = 0 and \varepsilon_T \neq 0, we have

    V(\{x\}) - V(\{x^*\}) = \sum_{t=1}^{T-1} \left( \frac{\partial F(x_t^*, x_{t-1}^*, t-1)}{\partial x_t} + \frac{\partial F(x_{t+1}^*, x_t^*, t)}{\partial x_t} \right) \varepsilon_t + \left( \frac{\partial F(x_T^*, x_{T-1}^*, T-1)}{\partial x_T} - \lambda \right) \varepsilon_T.

From the Kuhn-Tucker conditions we have, regarding the terminal state,

    \frac{\partial F(x_T, x_{T-1}, T-1)}{\partial x_T} - \lambda = 0, \quad \lambda (\phi_T - x_T^*) = 0.
The cake-eating problem again. Now, if we introduce the terminal condition W_T \geq 0, the first order conditions are

    W_{t+1} = (1+\beta) W_t - \beta W_{t-1},
    W_0 = \phi,
    \frac{\beta^{T-1}}{W_{T-1} - W_T} W_T = 0.

If T is finite, the last condition only holds if W_T = 0, which means that it is optimal to eat all the cake in finite time. The solution is thus formally, but not conceptually, the same as in the fixed endpoint case.

Infinite horizon problem. The most common problem in macroeconomics is the discounted infinite horizon problem:

    \max_x \sum_{t=0}^{\infty} \beta^t F(x_{t+1}, x_t),    (2.12)

where 0 < \beta < 1 and x_0 = \phi_0, with \phi_0 given.

Proposition 4 (Necessary conditions for optimality for the infinite horizon problem). Let \{x_t^*\}_{t=0}^{\infty} be a solution for the problem defined by equation (2.12). Then it verifies the Euler-Lagrange equation

    \frac{\partial F(x_t, x_{t-1})}{\partial x_t} + \beta \frac{\partial F(x_{t+1}, x_t)}{\partial x_t} = 0, \quad t = 1, 2, \ldots,

the initial condition x_0 = \phi_0, and the transversality condition

    \lim_{t\to\infty} \beta^{t-1} \frac{\partial F(x_t, x_{t-1})}{\partial x_t} = 0.
Infinite horizon with terminal conditions. If we assume the terminal condition \lim_{t\to\infty} x_t \geq 0, then the transversality condition becomes

    \lim_{t\to\infty} \beta^{t-1} \frac{\partial F(x_t, x_{t-1})}{\partial x_t} x_t = 0.

The cake-eating problem. The solution of the Euler-Lagrange equation was already derived as

    W_t^* = \frac{1}{1-\beta} \left( k_2 - \beta k_1 + (k_1 - k_2) \beta^t \right), \quad t = 0, 1, \ldots

If we substitute into the transversality condition for the infinite horizon problem without terminal conditions, using W_{t-1} - W_t = (k_1 - k_2)\beta^{t-1}, we get

    \lim_{t\to\infty} \beta^{t-1} \frac{\partial \ln(W_{t-1} - W_t)}{\partial W_t} = -\lim_{t\to\infty} \frac{\beta^{t-1}}{W_{t-1} - W_t} = \frac{-1}{k_1 - k_2},

which again is only zero if k_1 - k_2 = \infty. If we instead consider the infinite horizon problem with the terminal constraint \lim_{t\to\infty} W_t \geq 0 and substitute into the corresponding transversality condition, we get

    -\lim_{t\to\infty} \frac{\beta^{t-1} W_t}{W_{t-1} - W_t} = -\lim_{t\to\infty} \frac{W_t}{k_1 - k_2} = \frac{\beta k_1 - k_2}{(1-\beta)(k_1 - k_2)},

because \lim_{t\to\infty} \beta^t = 0 for 0 < \beta < 1, so that W_t \to (k_2 - \beta k_1)/(1-\beta). The transversality condition holds if and only if k_2 = \beta k_1. If we substitute into the solution for W_t^*, we get

    W_t^* = \frac{k_1 (1-\beta)}{1-\beta} \beta^t = k_1 \beta^t, \quad t = 0, 1, \ldots,

and, as the solution should verify the initial condition W_0 = \phi_0, we finally get the solution for the infinite horizon problem:

    W_t^* = \phi_0 \beta^t, \quad t = 0, 1, \ldots
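As a numerical check (beta is an illustrative value), the candidate W_t = phi_0 * beta^t satisfies the Euler condition C_{t+1} = beta * C_t, and the transversality expression beta^{t-1} W_t / (W_{t-1} - W_t), taken in absolute value, equals beta^t/(1-beta) and vanishes geometrically:

```python
beta, phi0 = 0.9, 1.0            # illustrative values

def W(t):
    return phi0 * beta**t        # candidate infinite-horizon solution

def C(t):
    return W(t) - W(t + 1)       # implied consumption C_t = W_t - W_{t+1}

for t in range(100):
    assert abs(C(t + 1) - beta * C(t)) < 1e-12      # Euler condition
# transversality term in absolute value: beta^{t-1} W_t / (W_{t-1} - W_t)
tv = [beta**(t - 1) * W(t) / (W(t - 1) - W(t)) for t in range(1, 200)]
```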
Optimal control problems

The most common dynamic optimization problems in economics and finance share the following assumptions:

- timing: the state variable x_t is usually a stock and is measured at the beginning of period t, while the control u_t is usually a flow and is measured at the end of period t;
- horizon: can be finite or infinite (T = \infty); the second case is more common;
- objective functional: there is an intertemporal utility function which is additively separable, stationary, and involves time-discounting (impatience),
    \sum_{t=0}^{T-1} \beta^t f(u_t, x_t),
  where 0 < \beta < 1 models impatience, as \beta^0 = 1 and \lim_{t\to\infty} \beta^t = 0, and f(\cdot) is well behaved: continuous, continuously differentiable, and concave in (u, x);
- the economy is described by an autonomous difference equation x_{t+1} = g(x_t, u_t), where g(\cdot) is autonomous, continuous, differentiable, and concave; then the difference equation verifies the conditions for existence and uniqueness of solutions;
- the non-Ponzi game condition holds,
    \lim_{t\to\infty} \varphi^t x_t \geq 0,
  for a discount factor 0 < \varphi < 1;
- there may be some side conditions, e.g., x_t \geq 0, u_t \geq 0, which may produce corner solutions; we will deal only with the case in which the solutions are interior (or the domain of the variables is open).
These assumptions are formalized as optimal control problems.

Definition 1 (The simplest optimal control problem (OCP)). Find \{u_t^*, x_t^*\}_{t=0}^{T} which solves

    \max_{\{u_t\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^t f(u_t, x_t), \quad \text{subject to } x_{t+1} = g(x_t, u_t),

such that u_t \in U, for x_0, x_T given and T free.

Definition 2 (The free terminal state optimal control problem (OCP)). Find \{u_t^*, x_t^*\}_{t=0}^{T} which solves

    \max_{\{u_t\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^t f(u_t, x_t), \quad \text{subject to } x_{t+1} = g(x_t, u_t),

such that u_t \in U, for x_0, T given and x_T free.

If T = \infty we have the infinite horizon discounted optimal control problem.

Methods for solving the OCP, in the sense of obtaining necessary conditions or necessary and sufficient conditions:

- the method of Lagrange (for the case of finite T);
- Pontryagin's maximum principle (Pontryagin et al. (1962));
- the dynamic programming principle (Bellman (1957)).

Necessary and sufficient optimality conditions. Intuitive meaning:
- necessary conditions: assuming that we know the optimal solution \{u_t^*, x_t^*\}, which optimality conditions should the variables of the problem verify? (This means that they hold for every extremal feasible solution.)
- sufficient conditions: if the functions defining the problem, f(\cdot) and g(\cdot), verify some conditions, then feasible paths verifying some optimality conditions are solutions of the problem.

In general, if the behavioral functions f(\cdot) and g(\cdot) are well behaved (continuous, continuously differentiable and concave), then the necessary conditions are also sufficient.
Dynamic programming

The principle of dynamic programming (Bellman, 1957, p. 83): "an optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

We next follow a heuristic approach for deriving necessary conditions for problem 1, following the principle of dynamic programming.

Finite horizon problem. Assume that we know an optimal solution \{u_t^*, x_t^*\}_{t=0}^{T}. Which properties should the optimal solution have?

Definition 3 (Value function at time \tau).

    V_\tau(x_\tau) = \sum_{t=\tau}^{T-1} \beta^{t-\tau} f(u_t^*, x_t^*)

Proposition 5. Given an optimal solution \{u_t^*, x_t^*\}_{t=0}^{T} to the optimal control problem, the value function verifies the Hamilton-Jacobi-Bellman equation

    V_t(x_t) = \max_{u_t} \{ f(x_t, u_t) + \beta V_{t+1}(x_{t+1}) \}, \quad t = 0, \ldots, T-1.    (2.13)
Proof. If we know a solution for problem 1, then at time \tau = 0 we have

    V_0(x_0) = \sum_{t=0}^{T-1} \beta^t f(u_t^*, x_t^*)
             = \max_{\{u_t\}_{t=0}^{T-1}} \sum_{t=0}^{T-1} \beta^t f(u_t, x_t)
             = \max_{\{u_t\}_{t=0}^{T-1}} \left( f(x_0, u_0) + \beta f(x_1, u_1) + \beta^2 f(x_2, u_2) + \ldots \right)
             = \max_{\{u_t\}_{t=0}^{T-1}} \left( f(x_0, u_0) + \beta \sum_{t=1}^{T-1} \beta^{t-1} f(x_t, u_t) \right)
             = \max_{u_0} \left( f(x_0, u_0) + \beta \max_{\{u_t\}_{t=1}^{T-1}} \sum_{t=1}^{T-1} \beta^{t-1} f(x_t, u_t) \right)

by the principle of dynamic programming. Then

    V_0(x_0) = \max_{u_0} \{ f(x_0, u_0) + \beta V_1(x_1) \}.

We can apply the same idea to the value function for any time 0 \leq t \leq T to get equation (2.13), which holds for feasible solutions, i.e., those verifying x_{t+1} = g(x_t, u_t) with x_0 given.

Intuition: we transform the maximization of a functional into a recursive two-period problem. We solve the control problem by solving the HJB equation. To do this we have to find the sequence \{V_0, \ldots, V_T\}, through the backward recursion

    V_t(x) = \max_u \{ f(x, u) + \beta V_{t+1}(g(x, u)) \}.    (2.14)
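The backward recursion (2.14) can be implemented directly on a grid. The sketch below uses the cake-eating specification f = ln(C), g(W, C) = W - C (an illustrative choice, borrowed from the earlier example so the output can be compared with the closed form (2.7)); the control is parameterized as a fraction of the cake:

```python
import numpy as np

# Backward recursion V_t(x) = max_u { f(x,u) + beta * V_{t+1}(g(x,u)) }
# on a grid, for the cake-eating specification (illustrative assumption).
beta, T = 0.9, 10
grid = np.geomspace(1e-4, 1.0, 600)       # log-spaced state grid
fracs = np.linspace(0.005, 0.995, 199)    # consume u = frac * x
V = np.zeros(grid.size)                   # terminal value V_T = 0

policies = []                             # policies[0] is the t = T-1 rule
for t in reversed(range(T)):
    U = np.outer(grid, fracs)             # candidate controls at each state
    cont = np.interp(grid[:, None] - U, grid, V)
    values = np.log(U) + beta * cont
    policies.append(fracs[values.argmax(axis=1)])
    V = values.max(axis=1)

# t = 0 consumption at W_0 = 1, to compare with the closed form (2.7):
# C_0 = (1 - beta) / (1 - beta**T)
c0 = policies[-1][-1] * grid[-1]
```

With these illustrative values the recursion reproduces the closed-form first-period consumption up to grid resolution.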
Infinite horizon problem.

Definition 4 (The infinite horizon optimal control problem (OCP)). Find \{u_t^*, x_t^*\}_{t=0}^{\infty} which solves

    \max_{\{u_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t f(u_t, x_t)

such that x_{t+1} = g(x_t, u_t), for x_0 given.

For the infinite horizon discounted optimal control problem, the limit function V = \lim_{j\to\infty} V_j is independent of j, so the Hamilton-Jacobi-Bellman equation becomes

    V(x) = \max_u \{ f(x, u) + \beta V(g(x, u)) \} \equiv \max_u H(x, u).

Properties of the value function: it is usually difficult to establish the properties of V(\cdot). In general continuity is assured, but not differentiability (this is a subject for advanced courses on dynamic programming; see Stokey and Lucas (1989)). If some regularity conditions hold, we may determine the optimal control through the optimality condition

    \frac{\partial H(x, u)}{\partial u} = 0;

if H(\cdot) is C^2 then we get the policy function u^* = h(x), which gives an optimal rule for setting the control, given the state of the economy. If we can determine such a relationship (or prove that it exists), then we say that our problem is recursive. In this case the HJB equation becomes a non-linear functional equation,

    V(x) = f(x, h(x)) + \beta V(g(x, h(x))).

Solving the HJB equation means finding the value function V(x). Methods: analytical (in some cases exact) and mostly numerical (value function iteration).
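A sketch of value function iteration for this fixed-point problem, using the cake-eating specification f = ln(u), g(x, u) = x - u as an illustrative assumption (it is the application solved analytically in the next section, where the policy is u* = (1 - beta)x and V has slope 1/(1 - beta) in ln(x)):

```python
import numpy as np

# Value function iteration: iterate V <- TV, where
#   (TV)(x) = max_u { ln(u) + beta * V(x - u) },
# with the control parameterized as a fraction of the state.
beta = 0.95
grid = np.geomspace(1e-6, 1.0, 400)     # log-spaced state grid
fracs = np.linspace(0.01, 0.99, 99)     # u = frac * x
V = np.zeros(grid.size)                 # initial guess V_0 = 0

for _ in range(500):                    # T is a contraction of modulus beta
    U = np.outer(grid, fracs)
    values = np.log(U) + beta * np.interp(grid[:, None] - U, grid, V)
    V = values.max(axis=1)

# recover the policy at x = 1; it should be close to u* = (1 - beta) * x
vals = np.log(fracs) + beta * np.interp(1.0 - fracs, grid, V)
c_share = fracs[vals.argmax()]
```

Here the computed consumption share approaches 1 - beta, and the slope of V with respect to ln(x) approaches 1/(1 - beta), matching the closed form derived below.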
2.2 Applications

2.2.1 The cake-eating problem

We consider the infinite horizon problem

    \max_{\{C_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t \ln(C_t), \quad \text{subject to } W_{t+1} = W_t - C_t,

for W_0 = \phi given. The Hamilton-Jacobi-Bellman equation is

    V(W) = \max_C \{ \ln(C) + \beta V(W - C) \}.

We will solve the problem if we can find the unknown function V(W). The optimality condition is

    \frac{\partial}{\partial C} \{ \ln(C) + \beta V(W - C) \} = \frac{1}{C} - \beta V'(W - C) = 0,

which we express as

    C = \frac{1}{\beta V'(W - C)}.

Possibly we can solve uniquely for C to get C = C(W), and W_1(W) = W - C(W). Then the HJB equation becomes a functional equation,

    V(W) = \ln(C(W)) + \beta V(W_1(W)).

We will solve it by using the method of undetermined coefficients. Assume that the solution has the form

    V(W) = a + b \ln(W),
where the coefficients a and b are unknown. It is indeed a solution if, upon substituting it into the HJB equation, we can determine a and b as functions of known parameters. First, we find that

    C = \frac{W}{1 + b\beta}, \quad W_1 = \frac{b\beta}{1 + b\beta} W.

Substituting into the HJB equation, we get

    a + b \ln(W) = \ln(W) - \ln(1 + b\beta) + \beta \left( a + b \ln\left( \frac{b\beta}{1 + b\beta} \right) + b \ln(W) \right),

which is equivalent to

    \left( b(1-\beta) - 1 \right) \ln(W) = -a(1-\beta) - \ln(1 + b\beta) + \beta b \ln\left( \frac{b\beta}{1 + b\beta} \right).

We determine b and a by setting both sides to zero:

    b = \frac{1}{1-\beta}, \quad a = \frac{1}{1-\beta} \left( \ln(1-\beta) + \frac{\beta \ln(\beta)}{1-\beta} \right).

Then the value function is

    V(W) = \frac{1}{1-\beta} \ln(\chi W), \quad \text{where } \chi \equiv \left( \beta^\beta (1-\beta)^{1-\beta} \right)^{1/(1-\beta)},

and C^* = (1-\beta) W. Therefore, we determine the optimal paths by substituting C^* into the restriction of the problem,

    W_{t+1} = W_t - (1-\beta) W_t = \beta W_t,

and solving it:

    W_t^* = \phi \beta^t, \quad t = 0, 1, \ldots,
and

    C_t^* = \phi (1-\beta) \beta^t, \quad t = 0, 1, \ldots

2.2.2 The consumption-portfolio choice problem

Assumptions:

- there are T > 1 periods;
- consumers are homogeneous and have an additive intertemporal utility functional; the instantaneous utility function is continuous, differentiable, increasing, concave, and homogeneous of degree \eta;
- consumers have a stream of endowments, y \equiv \{y_t\}_{t=0}^{T}, known with certainty;
- institutional setting: there are spot markets for the good and for a financial asset. The financial asset is an entitlement to receive the dividend D_t at the end of every period t. The spot prices are P_t and S_t for the good and for the financial asset, respectively. As regards market timing, we assume that the good market opens at the beginning and the asset market opens at the end of every period.

The consumer's problem is to choose a sequence of consumption \{c_t\}_{t=0}^{T} and of portfolios \{\theta_t\}_{t=1}^{T} (in this simple case, the quantity of the asset bought at the beginning of period t) in order to find

    \max_{\{c_t, \theta_{t+1}\}_{t=0}^{T}} \sum_{t=0}^{T-1} \beta^t u(c_t)
subject to the sequence of budget constraints:

    A_0 + y_0 = c_0 + \theta_1 S_0
    \theta_1 (S_1 + D_1) + y_1 = c_1 + \theta_2 S_1
    \ldots
    \theta_t (S_t + D_t) + y_t = c_t + \theta_{t+1} S_t
    \ldots
    \theta_T (S_T + D_T) + y_T = c_T,

where A_t is the stock of financial wealth (in real terms) at the beginning of period t. If we denote A_{t+1} = \theta_{t+1} S_t, then the generic period budget constraint is

    A_{t+1} = y_t - c_t + R_t A_t, \quad t = 0, \ldots, T,    (2.15)

where the asset return is

    R_{t+1} = 1 + r_{t+1} = \frac{S_{t+1} + D_{t+1}}{S_t}.

Then the HJB equation is

    V(A_t) = \max_{c_t} \{ u(c_t) + \beta V(A_{t+1}) \}.    (2.16)

We can write the equation as

    V(A) = \max_c \{ u(c) + \beta V(\tilde{A}) \},    (2.17)

where \tilde{A} = y - c + RA, so that V(\tilde{A}) = V(y - c + RA).
Deriving an intertemporal arbitrage condition. The optimality condition is

    u'(c^*) = \beta V'(\tilde{A}).

If we could find the optimal policy function c^* = h(A) and substitute it into the HJB equation, we would get

    V(A) = u(h(A)) + \beta V(\tilde{A}^*), \quad \tilde{A}^* = y - h(A) + RA.

Using the Benveniste and Scheinkman (1979) envelope theorem, we differentiate with respect to A to get

    V'(A) = u'(c^*) \frac{\partial h}{\partial A} + \beta V'(\tilde{A}^*) \left( R - \frac{\partial h}{\partial A} \right)
          = \left( u'(c^*) - \beta V'(\tilde{A}^*) \right) \frac{\partial h}{\partial A} + \beta R V'(\tilde{A}^*)
          = \beta R V'(\tilde{A}^*) = R u'(c^*),

from the optimality condition. If we shift both members of the last equation one period forward, we get V'(\tilde{A}^*) = R u'(\tilde{c}^*), and then

    R u'(\tilde{c}^*) = \beta^{-1} u'(c^*).

Then the optimal consumption path (we delete the * from now on) verifies the arbitrage condition u'(c) = \beta R u'(\tilde{c}). In the literature this relationship is called the consumer's intertemporal arbitrage condition,

    u'(c_t) = \beta u'(c_{t+1}) R_t.    (2.18)
Observe that

    \beta R_t = \frac{1 + r_t}{1 + \rho}

is the ratio between the market return and the psychological discount factor. If the utility function is homogeneous of degree \eta, it has the properties

    u(c) = c^\eta u(1), \quad u'(c) = c^{\eta - 1} u'(1),

so the arbitrage condition is a linear difference equation,

    c_{t+1} = \lambda c_t, \quad \lambda \equiv (\beta R_t)^{1/(1-\eta)}.

Determining the optimal value function. In some cases we can get an explicit solution for the HJB equation (2.17). We have to determine jointly the optimal policy function h(A) and the optimal value function V(A). We will use the same non-constructive method to derive both functions: first we make a conjecture on the form of V(\cdot), and then apply the method of undetermined coefficients.

Assumptions: let us assume that the utility function is logarithmic, u(c) = \ln(c), and assume for simplicity that y = 0.^2 In this case the optimality condition becomes

    c^* = [\beta V'(RA - c^*)]^{-1}.

Conjecture: let us assume that the value function is of the form

    V(A) = B_0 + B_1 \ln(A),    (2.19)

^2 Alternatively, if we denote by H_t the human capital at the beginning of period t, with H_{t+1} = y_t + R_t H_t, then we could define total capital at the beginning of period t as W_t = A_t + H_t, and the next results would be valid for the stock of total capital W_t.
where B_0 and B_1 are undetermined coefficients. From this point on we apply again the method of undetermined coefficients: if the conjecture is right, then we will get an equation without the independent variable A, which allows us to determine the coefficients B_0 and B_1 as functions of the parameters of the HJB equation. From the conjecture, V'(A) = B_1 / A. Applying this to the optimality condition, we get

    c^* = h(A) = \frac{RA}{1 + \beta B_1},

which is a linear function of A. Then

    \tilde{A} = RA - c^* = \left( \frac{\beta B_1}{1 + \beta B_1} \right) RA.

Substituting into the HJB equation, we get

    B_0 + B_1 \ln(A) = \ln\left( \frac{RA}{1 + \beta B_1} \right) + \beta \left[ B_0 + B_1 \ln\left( \frac{\beta B_1 RA}{1 + \beta B_1} \right) \right]    (2.20)
                     = \ln\left( \frac{R}{1 + \beta B_1} \right) + \ln(A) + \beta \left[ B_0 + B_1 \ln\left( \frac{\beta B_1 R}{1 + \beta B_1} \right) + B_1 \ln(A) \right].

The term in \ln(A) can be eliminated if B_1 = 1 + \beta B_1, that is, if

    B_1 = \frac{1}{1 - \beta},

and equation (2.20) reduces to

    B_0 (1 - \beta) = \ln(R(1-\beta)) + \frac{\beta}{1-\beta} \ln(R\beta),
which we can solve for B_0 to get

    B_0 = (1-\beta)^{-2} \ln(R\Theta), \quad \text{where } \Theta \equiv (1-\beta)^{1-\beta} \beta^\beta.

Finally, as our conjecture proved to be right, we can substitute B_0 and B_1 into equation (2.19): the optimal value function and the optimal policy function are

    V(A) = (1-\beta)^{-1} \ln\left( (R\Theta)^{(1-\beta)^{-1}} A \right) \quad \text{and} \quad c^* = (1-\beta) R A,

so optimal consumption is linear in financial wealth. We can also determine the optimal asset accumulation by substituting c_t^* into the period budget constraint:

    A_{t+1} = \beta R_t A_t.

If we assume that R_t = R, then the solution for that difference equation is

    A_t = (\beta R)^t A_0, \quad t = 0, 1, \ldots,

and therefore the optimal path for consumption is

    c_t = (1-\beta) \beta^t R^{t+1} A_0.

Observe that the transversality condition holds,

    \lim_{t\to\infty} R^{-t} A_t = \lim_{t\to\infty} A_0 \beta^t = 0,

because 0 < \beta < 1.
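The optimal plan can be simulated to confirm that it satisfies the period budget constraint (2.15) with y = 0, the arbitrage condition (2.18), and the transversality condition (beta, R, A_0 and the horizon below are illustrative values):

```python
# Simulate c_t = (1 - beta) * R * A_t, A_{t+1} = beta * R * A_t and check
# the optimality conditions of the y = 0, log-utility case.
beta, R, A0, T = 0.95, 1.04, 1.0, 200    # illustrative values

A, c = [A0], []
for t in range(T):
    c.append((1 - beta) * R * A[-1])
    A.append(beta * R * A[-1])

for t in range(T):
    assert abs(A[t + 1] - (R * A[t] - c[t])) < 1e-12   # budget constraint
for t in range(T - 1):
    # Euler / arbitrage condition u'(c_t) = beta * R * u'(c_{t+1}), u = ln
    assert abs(1 / c[t] - beta * R / c[t + 1]) < 1e-9
tv = R ** (-T) * A[T]                                   # equals A0 * beta**T
```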
Exercises

1. Solve the HJB equation for the case in which y > 0.

2. Solve the HJB equation for the case in which y > 0 and the utility function is CRRA: $u(c) = c^{1-\theta}/(1-\theta)$, for θ > 0.

3. Try to solve the HJB equation for the case in which y = 0 and the utility function is CARA: $u(c) = B - e^{-\beta c}/\beta$, for β > 0.

4. Try to solve the HJB equation for the case in which y > 0 and the utility function is CARA: $u(c) = B - e^{-\beta c}/\beta$, for β > 0.
The Ramsey problem

Find a sequence $\{c_t, k_t\}_{t=0}^{\infty}$ which solves the following optimal control problem:

$$\max_{\{c_t\}_{t=0}^{\infty}}\sum_{t=0}^{\infty}\beta^t u(c_t)$$

subject to

$$k_{t+1} = f(k_t) - c_t + (1-\delta)k_t,$$

where $0 \le \delta \le 1$, given $k_0$. Both the utility function and the production function are neoclassical: continuous, differentiable, smooth, and verifying the Inada conditions. These conditions ensure that the necessary conditions for optimality are also sufficient.

The HJB equation is

$$V(k) = \max_c\{u(c) + \beta V(k_{+1})\},$$

where $k_{+1} = f(k) - c + (1-\delta)k$. The optimality condition is

$$u'(c) = \beta V'(k_{+1}(k)).$$

If it allows us to find a policy function c = h(k), then the HJB equation becomes

$$V(k) = u[h(k)] + \beta V[f(k) - h(k) + (1-\delta)k]. \tag{2.21}$$

This equation has no explicit solution for generic utility and production functions. Even for explicit utility and production functions the HJB equation generally has no explicit solution. Next we present a benchmark case where we can find an explicit solution for the HJB equation.

Benchmark case: let $u(c) = \ln(c)$ and $f(k) = Ak^{\alpha}$, for $0 < \alpha < 1$, and $\delta = 1$.

Conjecture: $V(k) = B_0 + B_1\ln(k)$.
In this case the optimality condition yields

$$h(k) = \frac{Ak^{\alpha}}{1+\beta B_1}.$$

If we substitute in equation (2.21) we get

$$B_0 + B_1\ln(k) = \ln\left(\frac{Ak^{\alpha}}{1+\beta B_1}\right) + \beta\left[B_0 + B_1\ln\left(\frac{Ak^{\alpha}\beta B_1}{1+\beta B_1}\right)\right]. \tag{2.22}$$

Again we can eliminate the term in ln(k) by making

$$B_1 = \frac{\alpha}{1-\alpha\beta}.$$

Thus, (2.22) changes to

$$B_0(1-\beta) = \ln(A(1-\alpha\beta)) + \frac{\alpha\beta}{1-\alpha\beta}\ln(\alpha\beta A).$$

Finally, the optimal value function and the optimal policy function are

$$V(k) = \left[(1-\beta)(1-\alpha\beta)\right]^{-1}\ln(A\Theta) + \frac{\alpha}{1-\alpha\beta}\ln(k), \qquad \Theta \equiv (1-\alpha\beta)^{1-\alpha\beta}(\alpha\beta)^{\alpha\beta},$$

and

$$c = (1-\alpha\beta)Ak^{\alpha}.$$

Then the optimal capital accumulation is governed by the equation

$$k_{t+1} = \alpha\beta A k_t^{\alpha}.$$

This equation generates a forward path starting from the known initial capital stock $k_0$,

$$\{k_t\}_{t=0}^{\infty} = \{k_0,\ \alpha\beta A k_0^{\alpha},\ (\alpha\beta A)^{1+\alpha}k_0^{\alpha^2},\ \ldots,\ (\alpha\beta A)^{1+\alpha+\cdots+\alpha^{t-1}}k_0^{\alpha^t},\ \ldots\},$$

which converges to a stationary solution: $\bar{k} = (\alpha\beta A)^{1/(1-\alpha)}$.
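Both the closed-form value function and the forward path can be checked by computation. The sketch below (parameter values α = 0.3, β = 0.95, A = 1 are illustrative assumptions) verifies that V(k) = B0 + B1 ln k with c = (1 − αβ)Ak^α satisfies the Bellman equation (2.21) with δ = 1, and that iterating k_{t+1} = αβAk_t^α converges to the stationary solution:

```python
import math

# Illustrative parameters (assumed)
alpha, beta, A = 0.3, 0.95, 1.0

# Undetermined coefficients for V(k) = B0 + B1 ln k
B1 = alpha / (1.0 - alpha * beta)
Theta = (1 - alpha * beta) ** (1 - alpha * beta) * (alpha * beta) ** (alpha * beta)
B0 = math.log(A * Theta) / ((1.0 - beta) * (1.0 - alpha * beta))

def V(k):
    return B0 + B1 * math.log(k)

def policy(k):
    """Candidate optimal policy c = (1 - alpha beta) A k^alpha."""
    return (1.0 - alpha * beta) * A * k ** alpha

# Bellman residual at an arbitrary state (delta = 1, so k' = A k^alpha - c)
k = 0.2
c = policy(k)
residual = V(k) - (math.log(c) + beta * V(A * k ** alpha - c))
print(residual)                 # ~0 if the conjecture is right

# Forward simulation of k_{t+1} = alpha beta A k_t^alpha
k_bar = (alpha * beta * A) ** (1.0 / (1.0 - alpha))
k = 0.05
for _ in range(200):
    k = alpha * beta * A * k ** alpha
print(abs(k - k_bar))           # converges to the stationary solution
```

Since ln k_{t+1} deviates from ln k̄ by a factor α per period, convergence is geometric and 200 iterations bring the path to machine precision.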
Chapter 3

Continuous Time

3.1 The dynamic programming principle and the HJB equation

The simplest optimal control problem

In the space of the functions (u(t), x(t)), for $t_0 \le t \le t_1$, find functions (u*(t), x*(t)) which solve the problem:

$$\max_{u(t)}\int_{t_0}^{t_1} f(t, x(t), u(t))\,dt$$

subject to

$$\dot{x} \equiv \frac{dx(t)}{dt} = g(t, x(t), u(t)),$$

given $x(t_0) = x_0$. We assume that $t_1$ is known and that $x(t_1)$ is free.

The value function is, for the initial instant,

$$\mathcal{V}(t_0, x_0) = \int_{t_0}^{t_1} f(t, x^*, u^*)\,dt,$$

and, for the terminal time, $\mathcal{V}(t_1, x(t_1)) = 0$.
Lemma 1 (First order necessary conditions for optimality from the dynamic programming principle). Let $\mathcal{V} \in C^2$. Then the value function which is associated to the optimal path $\{(x^*(t), u^*(t)) : t_0 \le t \le t_1\}$ verifies the fundamental partial differential equation, or Hamilton-Jacobi-Bellman equation,

$$-\mathcal{V}_t(t,x) = \max_u\left[f(t,x,u) + \mathcal{V}_x(t,x)g(t,x,u)\right].$$

Proof. Consider the value function, for $\Delta t > 0$ small:

$$\mathcal{V}(t_0,x_0) = \max_u\int_{t_0}^{t_1} f(t,x,u)\,dt = \max_u\left(\int_{t_0}^{t_0+\Delta t} f(.)\,dt + \int_{t_0+\Delta t}^{t_1} f(.)\,dt\right)$$

$$= \max_{u,\ t_0\le t\le t_0+\Delta t}\left(\int_{t_0}^{t_0+\Delta t} f(.)\,dt + \max_{u,\ t_0+\Delta t\le t\le t_1}\int_{t_0+\Delta t}^{t_1} f(.)\,dt\right) \quad \text{(from the dynamic programming principle)}$$

$$= \max_{u,\ t_0\le t\le t_0+\Delta t}\left[\int_{t_0}^{t_0+\Delta t} f(.)\,dt + \mathcal{V}(t_0+\Delta t,\, x_0+\Delta x)\right] \quad \text{(approximating } x(t_0+\Delta t)\approx x_0+\Delta x\text{)}$$

$$\approx \max_u\left[f(t_0,x_0,u)\Delta t + \mathcal{V}(t_0,x_0) + \mathcal{V}_t(t_0,x_0)\Delta t + \mathcal{V}_x(t_0,x_0)\Delta x + \text{h.o.t.}\right]$$

(for u constant on the interval and $\mathcal{V}\in C^2$). Passing $\mathcal{V}(t_0,x_0)$ to the first member, dividing by $\Delta t$ and taking the limit $\Delta t \to 0$, we get, for every $t\in[t_0,t_1]$,

$$0 = \max_u\left[f(t,x,u) + \mathcal{V}_t(t,x) + \mathcal{V}_x(t,x)\dot{x}\right].$$
The policy function is now $u^* = h(t,x)$. If we substitute it in the HJB equation, then we get a first-order partial differential equation,

$$-\mathcal{V}_t(t,x) = f(t,x,h(t,x)) + \mathcal{V}_x(t,x)g(t,x,h(t,x)).$$

Though the differentiability of $\mathcal{V}$ is assured for the functions f and g which are common in the economics literature, we can get explicit solutions, for $\mathcal{V}(.)$ and for h(.), only in very rare cases. Proving that $\mathcal{V}$ is differentiable, even in the case in which we cannot determine it explicitly, is hard and requires proficiency in functional analysis.

Infinite horizon discounted problem

Lemma 2 (First order necessary conditions for optimality from the dynamic programming principle). Let $V \in C^2$. Then the value function associated to the optimal path $\{(x^*(t), u^*(t)) : t_0 \le t < +\infty\}$ verifies the fundamental non-linear ODE called the Hamilton-Jacobi-Bellman equation,

$$\rho V(x) = \max_u\left[f(x,u) + V'(x)g(x,u)\right].$$

Proof. Now, we have

$$\mathcal{V}(t_0,x_0) = \max_u\int_{t_0}^{+\infty} f(x,u)e^{-\rho t}\,dt = e^{-\rho t_0}\max_u\int_{t_0}^{+\infty} f(x,u)e^{-\rho(t-t_0)}\,dt = e^{-\rho t_0}V(x_0), \tag{3.1}$$

where V(.) is independent from $t_0$ and only depends on $x_0$. We can write

$$V(x_0) = \max_u\int_0^{+\infty} f(x,u)e^{-\rho t}\,dt.$$
If we let, for every (t,x), $\mathcal{V}(t,x) = e^{-\rho t}V(x)$, and substitute the derivatives in the HJB equation for the simplest problem, we get the new HJB equation.

Observations: if we determine the policy function $u^* = h(x)$ and substitute it in the HJB equation, we see that the new HJB equation is a non-linear ODE,

$$\rho V(x) = f(x,h(x)) + V'(x)g(x,h(x)).$$

Differently from the solution obtained from Pontryagin's principle, the HJB equation defines a recursion over x. Intuitively, it generates a rule which says: if we observe the state x, the optimal policy is h(x), in such a way that the annuity value of the optimal program, ρV(x), equals the current payoff plus the value of the variation of the state.

It is still very rare to find explicit solutions for V(x). There is a literature on how to compute it numerically, which is related to the numerical solution of ODEs, and not to approximating value functions as in the discrete-time case.

Bibliography

See Kamien and Schwartz (1991).
Applications

The cake eating problem

The problem is to find the optimal flows of cake munching $C = [C(t)]_{t\in[0,T]}$ and of the size of the cake $W = [W(t)]_{t\in[0,T]}$ such that

$$\max_{C}\int_0^T \ln(C(t))e^{-\rho t}\,dt, \quad \text{subject to} \quad \dot{W} = -C,\ t\in(0,T), \quad W(0) = \phi,\ W(T) = 0, \tag{3.2}$$

where φ > 0 is given.

Observation: the problem can be equivalently written as a calculus of variations problem,

$$\max_{W}\int_0^T \ln(-\dot{W}(t))e^{-\rho t}\,dt, \quad \text{subject to} \quad W(0) = \phi,\ W(T) = 0.$$

Consider again problem (3.2). Now, we want to solve it by using the principle of dynamic programming. In order to do it, we have to determine the value function V = V(t,W), which solves the HJB equation

$$-\frac{\partial V}{\partial t} = \max_C\left\{e^{-\rho t}\ln(C) - \frac{\partial V}{\partial W}C\right\}.$$

The optimal policy for consumption is

$$C^*(t) = e^{-\rho t}\left(\frac{\partial V}{\partial W}\right)^{-1}.$$

If we substitute back into the HJB equation we get the partial differential equation

$$-e^{\rho t}\frac{\partial V}{\partial t} = \ln\left[e^{-\rho t}\left(\frac{\partial V}{\partial W}\right)^{-1}\right] - 1.$$

To solve it, let us use the method of undetermined coefficients by conjecturing that the solution is of the type

$$V(t,W) = e^{-\rho t}(a + b\ln W),$$
where a and b are constants to be determined, if our conjecture is right. With this function, the HJB equation becomes

$$\rho(a + b\ln W) = \ln(W) - \ln b - 1.$$

If we set b = 1/ρ, we eliminate the term in ln W and get $a = (\ln(\rho) - 1)/\rho$. Therefore, the solution for the HJB equation is

$$V(t,W) = \frac{\ln(\rho) - 1 + \ln W}{\rho}e^{-\rho t},$$

and the optimal policy for consumption is

$$C^*(t) = \rho W(t).$$
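A minimal numerical check, with assumed parameter values: the coefficients b = 1/ρ and a = (ln(ρ) − 1)/ρ make the identity ρ(a + b ln W) = ln W − ln b − 1 hold for every W > 0, and Euler integration of Ẇ = −C with the feedback rule C = ρW reproduces the exponential depletion path W(t) = φe^{−ρt}:

```python
import math

rho, phi = 0.05, 1.0     # illustrative values (assumed)

a = (math.log(rho) - 1.0) / rho
b = 1.0 / rho

# The HJB identity should hold for every W > 0
for W in (0.1, 1.0, 10.0):
    lhs = rho * (a + b * math.log(W))
    rhs = math.log(W) - math.log(b) - 1.0
    assert abs(lhs - rhs) < 1e-12

# Euler integration of W' = -C with the feedback rule C = rho W
dt, T = 1e-4, 10.0
W = phi
for _ in range(int(T / dt)):
    W -= rho * W * dt

print(abs(W - phi * math.exp(-rho * T)))   # small discretization error
```

With a finite horizon T the terminal condition W(T) = 0 is only reached asymptotically under this stationary rule; the check above only concerns the HJB identity and the induced depletion dynamics.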
The dynamic consumption-saving problem

Assumptions:

- $T = \mathbb{R}_+$, i.e., decisions and transactions take place continuously in time;

- deterministic environment: the agents have perfect information over the flow of endowments $y \equiv [y(t)]_{t\in\mathbb{R}_+}$ and the relevant prices;

- preferences over flows of consumption, $[c(t)]_{t\in\mathbb{R}_+}$, are evaluated by the intertemporal utility functional

$$V[c] = \int_0^{\infty} u(c(t))e^{-\rho t}\,dt,$$

which displays impatience (the discount factor $e^{-\rho t}\in(0,1)$), stationarity (u(.) is not directly dependent on time) and time independence; the instantaneous utility function u(.) is continuous, differentiable, increasing and concave. Observe that, mathematically, the intertemporal utility function is in fact a functional, i.e., a mapping whose argument is a function (not a number, as in the case of functions). Therefore, solving the consumption problem means finding an optimal function; in particular, it consists in finding an optimal trajectory for consumption;

- institutional setting: there are spot real and financial markets that are continuously open. The price P(t) clears the real market instantaneously. There is an asset market in which a single asset is traded, which has the price S(t) and pays a dividend D(t).

Derivation of the budget constraint: the consumer chooses the number of assets θ(t). If we consider a small increment in time h, and assume that the flow variables are constant in the interval, then

$$S(t+h)\theta(t+h) = \theta(t)S(t) + \theta(t)D(t)h + P(t)(y(t) - c(t))h.$$
Define nominal financial wealth as A(t) = S(t)θ(t). The budget constraint is equivalently

$$A(t+h) - A(t) = i(t)A(t)h + P(t)(y(t) - c(t))h,$$

where $i(t) = D(t)/S(t)$ is the nominal rate of return. If we divide by h and take the limit when h → 0, then

$$\lim_{h\to 0}\frac{A(t+h) - A(t)}{h} \equiv \frac{dA(t)}{dt} = i(t)A(t) + P(t)(y(t) - c(t)).$$

If we define real wealth and the real interest rate as $a(t) \equiv A(t)/P(t)$ and $r(t) = i(t) - \dot{P}(t)/P(t)$, then we get the instantaneous budget constraint

$$\dot{a}(t) = r(t)a(t) + y(t) - c(t), \tag{3.3}$$

where we assume that a(0) = a0 is given.

Define human wealth, in real terms, as

$$h(t) = \int_t^{\infty} e^{-\int_t^s r(\tau)d\tau}y(s)\,ds;$$

then, from Leibniz's rule,

$$\dot{h}(t) \equiv \frac{dh(t)}{dt} = r(t)\int_t^{\infty} e^{-\int_t^s r(\tau)d\tau}y(s)\,ds - y(t) = r(t)h(t) - y(t).$$

Total wealth at time t is

$$w(t) = a(t) + h(t),$$

and we may represent the budget constraint as a function of w(.):

$$\dot{w} = \dot{a}(t) + \dot{h}(t) = r(t)w(t) - c(t).$$

The instantaneous budget constraint should not be confused with the intertemporal budget constraint. Assume that the solvency condition holds at time t = 0:

$$\lim_{t\to\infty} e^{-\int_0^t r(\tau)d\tau}a(t) = 0.$$
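The aggregation ẇ = ȧ + ḣ = rw − c can be illustrated numerically. With a constant endowment y and a constant rate r, human wealth reduces to h = y/r, so ḣ = rh − y = 0; the sketch below (all constants are illustrative assumptions) Euler-integrates the instantaneous budget constraint (3.3) and compares w = a + h with the closed-form solution of ẇ = rw − c, namely w(t) = (w₀ − c/r)e^{rt} + c/r:

```python
import math

r, y, c = 0.03, 1.0, 1.2     # illustrative constants (assumed)
a, h = 10.0, y / r           # human wealth is constant when y and r are
w0 = a + h

# Euler integration of the instantaneous budget constraint (3.3)
dt, T = 1e-4, 10.0
for _ in range(int(T / dt)):
    a += (r * a + y - c) * dt

w_numeric = a + h
w_closed = (w0 - c / r) * math.exp(r * T) + c / r
print(abs(w_numeric - w_closed))   # small Euler discretization error
```

The agreement shows that tracking total wealth w is equivalent to tracking financial wealth a when the endowment flow is capitalized into h.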
Then it is equivalent to the following intertemporal budget constraint,

$$w(0) = \int_0^{\infty} e^{-\int_0^t r(\tau)d\tau}c(t)\,dt:$$

the present value of the flow of consumption should be equal to the initial total wealth. To prove this, solve the instantaneous budget constraint (3.3) to get

$$a(t) = e^{\int_0^t r(\tau)d\tau}\left(a(0) + \int_0^t e^{-\int_0^s r(\tau)d\tau}(y(s) - c(s))\,ds\right),$$

multiply by $e^{-\int_0^t r(\tau)d\tau}$, pass to the limit t → ∞, apply the solvency condition, and use the definition of human wealth.

Therefore, the intertemporal optimization problem for the representative agent is to find $[c^*(t), w^*(t)]_{t\in\mathbb{R}_+}$ which maximizes

$$V[c] = \int_0^{+\infty} u(c(t))e^{-\rho t}\,dt$$

subject to the instantaneous budget constraint

$$\dot{w}(t) = r(t)w(t) - c(t),$$

given w(0) = w0.

Solving the consumer problem using DP. The HJB equation is

$$\rho V(w) = \max_c\left\{u(c) + V'(w)(rw - c)\right\},$$

where w = w(t), c = c(t), r = r(t).
We assume that the utility function is homogeneous of degree η. Therefore it has the properties

$$u(c) = c^{\eta}u(1), \qquad u'(c) = c^{\eta-1}u'(1).$$

The optimality condition is $u'(c^*) = V'(w)$; then

$$c^* = \left(\frac{V'(w)}{u'(1)}\right)^{\frac{1}{\eta-1}}.$$

Substituting in the HJB equation, we get the ODE, defined on V(w),

$$\rho V(w) = rwV'(w) + (u(1) - u'(1))\left(\frac{V'(w)}{u'(1)}\right)^{\frac{\eta}{\eta-1}}.$$

In order to solve it, we guess that its solution is of the form $V(w) = Bw^{\eta}$. If we substitute in the HJB equation, we get

$$\rho Bw^{\eta} = \eta r Bw^{\eta} + (u(1) - u'(1))\left(\frac{\eta B}{u'(1)}\right)^{\frac{\eta}{\eta-1}}w^{\eta}.$$

Then we can eliminate the term in $w^{\eta}$ and solve for B to get

$$B = \left[\left(\frac{u(1) - u'(1)}{\rho - \eta r}\right)\left(\frac{\eta}{u'(1)}\right)^{\frac{\eta}{\eta-1}}\right]^{1-\eta}.$$

Then, as B is a function of r = r(t), we determine explicitly the value function as a function of total wealth:

$$V(w(t)) = Bw(t)^{\eta} = \left[\left(\frac{u(1) - u'(1)}{\rho - \eta r(t)}\right)\left(\frac{\eta}{u'(1)}\right)^{\frac{\eta}{\eta-1}}\right]^{1-\eta}w(t)^{\eta}.$$
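The expression for B can be verified numerically for a concrete homogeneous utility function. Taking u(c) = c^η, so that u(1) = 1 and u′(1) = η (the parameter values below are illustrative assumptions), the residual of ρB = ηrB + (u(1) − u′(1))(ηB/u′(1))^{η/(η−1)} should vanish:

```python
# Check of the coefficient B for the homogeneous utility u(c) = c^eta,
# with illustrative parameter values (assumed, and rho > eta * r).
eta, rho, r = 0.5, 0.06, 0.03

u1, du1 = 1.0, eta   # u(1) and u'(1) for u(c) = c^eta
B = (((u1 - du1) / (rho - eta * r))
     * (eta / du1) ** (eta / (eta - 1))) ** (1 - eta)

# Residual of rho B = eta r B + (u(1) - u'(1)) (eta B / u'(1))^(eta/(eta-1))
residual = rho * B - eta * r * B - (u1 - du1) * (eta * B / du1) ** (eta / (eta - 1))
print(abs(residual))   # ~0
```

The condition ρ > ηr keeps B positive and finite; outside that region the guess V = Bw^η does not deliver a well-defined solution.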
Observation: this is one known case in which we can solve the HJB equation explicitly, as the constraint is a linear function of the state variable w and the objective function u(c) is homogeneous.

The optimal policy function can also be determined explicitly,

$$c^*(t) = \left(\frac{\eta B(t)}{u'(1)}\right)^{\frac{1}{\eta-1}}w(t) \equiv \pi(t)w(t), \tag{3.4}$$

as it sets the control as a function of the state variable, and not as depending on the paths of the co-state and state variables as in Pontryagin's case; sometimes this solution is called a robust feedback control. Substituting in the budget constraint, we get the optimal wealth accumulation,

$$w^*(t) = w_0 e^{\int_0^t (r(s) - \pi(s))\,ds}, \tag{3.5}$$

which is the solution of

$$\dot{w}^* = r(t)w^*(t) - c^*(t) = (r(t) - \pi(t))w^*(t).$$

Conclusion: the optimal paths for consumption and wealth accumulation (c*(t), w*(t)) are given by equations (3.4) and (3.5), for any $t\in\mathbb{R}_+$.

The Ramsey model

This is a problem for a centralized planner which chooses the optimal consumption flow c(t) in order to maximize the intertemporal utility functional

$$\max V[c] = \int_0^{\infty} u(c)e^{-\rho t}\,dt,$$

where ρ > 0, subject to

$$\dot{k}(t) = f(k(t)) - c(t),$$

$$k(0) = k_0 \text{ given.}$$
The HJB equation is

$$\rho V(k) = \max_c\{u(c) + V'(k)(f(k) - c)\}.$$

Benchmark assumptions: $u(c) = \frac{c^{1-\sigma}}{1-\sigma}$, where σ > 0, and $f(k) = Ak^{\alpha}$, where 0 < α < 1. The HJB equation is

$$\rho V(k) = \max_c\left\{\frac{c^{1-\sigma}}{1-\sigma} + V'(k)(Ak^{\alpha} - c)\right\}; \tag{3.6}$$

the optimality condition is

$$c = \left(V'(k)\right)^{-\frac{1}{\sigma}}.$$

After substituting in equation (3.6) we get

$$\rho V(k) = V'(k)\left[\frac{\sigma}{1-\sigma}V'(k)^{-\frac{1}{\sigma}} + Ak^{\alpha}\right]. \tag{3.7}$$

In some particular cases we can get explicit solutions, but in general we cannot.

Particular case: α = σ. Equation (3.7) becomes

$$\rho V(k) = V'(k)\left[\frac{\sigma}{1-\sigma}V'(k)^{-\frac{1}{\sigma}} + Ak^{\sigma}\right]. \tag{3.8}$$

Let us conjecture that the solution is

$$V(k) = B_0 + B_1 k^{1-\sigma},$$

where B0 and B1 are undetermined coefficients. Then $V'(k) = B_1(1-\sigma)k^{-\sigma}$. If we substitute in equation (3.8) we get

$$\rho(B_0 + B_1 k^{1-\sigma}) = B_1\left[\sigma\left((1-\sigma)B_1\right)^{-\frac{1}{\sigma}}k^{1-\sigma} + (1-\sigma)A\right]. \tag{3.9}$$
Equation (3.9) is true only if

$$B_0 = \frac{A(1-\sigma)B_1}{\rho} \tag{3.10}$$

$$B_1 = \left(\frac{1}{1-\sigma}\right)\left(\frac{\sigma}{\rho}\right)^{\sigma}. \tag{3.11}$$

Then the following function is indeed a solution of the HJB equation in this particular case:

$$V(k) = \left(\frac{\sigma}{\rho}\right)^{\sigma}\left(\frac{A}{\rho} + \frac{1}{1-\sigma}k^{1-\sigma}\right).$$

The optimal policy function is

$$c = h(k) = \frac{\rho}{\sigma}k.$$

We can determine the optimal flows of consumption and capital (c*(t), k*(t)) by substituting $c(t) = \frac{\rho}{\sigma}k(t)$ in the admissibility conditions to get the ODE

$$\dot{k}^* = Ak^*(t)^{\alpha} - \frac{\rho}{\sigma}k^*(t),$$

for given k(0) = k0. As α = σ, this is a Bernoulli ODE which has an explicit solution,

$$k^*(t) = \left[\frac{A\sigma}{\rho} + \left(k_0^{1-\sigma} - \frac{A\sigma}{\rho}\right)e^{-\frac{(1-\sigma)\rho}{\sigma}t}\right]^{\frac{1}{1-\sigma}}, \qquad t\in\mathbb{R}_+,$$

and

$$c^*(t) = \frac{\rho}{\sigma}k^*(t), \qquad t\in\mathbb{R}_+.$$
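The Bernoulli solution can be verified numerically. The sketch below (illustrative parameters with α = σ, assumed rather than taken from the text) Euler-integrates k̇ = Ak^σ − (ρ/σ)k and compares it with the closed-form path; both converge towards the stationary solution k̄ = (Aσ/ρ)^{1/(1−σ)}:

```python
import math

A, sigma, rho = 1.0, 0.5, 0.05   # illustrative; alpha = sigma
k0 = 50.0

z_bar = A * sigma / rho          # steady state of z = k^(1 - sigma)

def k_closed(t):
    """Closed-form Bernoulli solution of k' = A k^sigma - (rho/sigma) k."""
    z = z_bar + (k0 ** (1 - sigma) - z_bar) * math.exp(-(1 - sigma) * rho * t / sigma)
    return z ** (1.0 / (1.0 - sigma))

# Euler integration of the admissibility condition with c = (rho/sigma) k
dt, T = 1e-3, 20.0
k = k0
for _ in range(int(T / dt)):
    k += (A * k ** sigma - (rho / sigma) * k) * dt

print(abs(k - k_closed(T)) / k_closed(T))   # small relative error

k_bar = z_bar ** (1.0 / (1.0 - sigma))
print(k_bar)   # 100.0 here: the stationary capital stock
```

The substitution z = k^{1−σ} linearizes the ODE, which is exactly why the explicit solution exists in this α = σ case.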
Part II

Stochastic Dynamic Programming
Chapter 4

Discrete Time

4.1 A short introduction to stochastic processes

Filtrations

We assume that the underlying information is given by a filtration $\mathbb{F} = \{\mathcal{F}_0, \mathcal{F}_1, \ldots, \mathcal{F}_T\}$ drawn from a probability space $(\Omega, \mathcal{F}, P)$. The basic magnitudes of the problem are stochastic processes, $X^t \equiv \{X_{\tau} : \tau = 0, 1, \ldots, T\}$, where $X_t = X(\omega_t)$ is a random variable with $\omega_t \in \mathcal{F}_t$. The information available at period $t \in T$ will be represented by the σ-algebra $\mathcal{F}_t \subseteq \mathcal{F}$. A simple example is given by the binomial information tree in Figure 4.1.

The primitive probability space. The event space Ω is the set of all elementary events. It can be continuous or discrete, and have a finite or an infinite number of elements. In the following we will assume that

$$\Omega = \{\omega_1, \ldots, \omega_s, \ldots, \omega_N\}.$$

Then N = dim(Ω) is the number of possible states of nature.
[Figure 4.1: Information tree. A binomial tree over $\Omega = \{\omega_1, \ldots, \omega_8\}$: at t = 0 the only event is Ω; at t = 1 it splits into $\{\omega_1,\omega_2,\omega_3,\omega_4\}$ and $\{\omega_5,\omega_6,\omega_7,\omega_8\}$; at t = 2 into the pairs $\{\omega_1,\omega_2\}$, $\{\omega_3,\omega_4\}$, $\{\omega_5,\omega_6\}$, $\{\omega_7,\omega_8\}$; and at t = 3 into the singletons $\{\omega_1\}, \ldots, \{\omega_8\}$.]

The σ-algebra $\mathcal{F}$ is the set of all subsets of Ω, built by operations of union, and includes the empty set (no event). P is a probability measure over $\mathcal{F}$: that is, for any $A \in \mathcal{F}$, $0 \le P(A) \le 1$.

The information through time. The information available at time $t \in T$ will be represented by $\mathcal{F}_t$ and, for the sequence of periods t = 0, 1, ..., T, by a filtration $\mathbb{F} = \{\mathcal{F}_t\}_{t=0}^{T}$.

Definition 5. A filtration is a sequence of σ-algebras $\{\mathcal{F}_t\}$, $\mathbb{F} = \{\mathcal{F}_0, \mathcal{F}_1, \ldots, \mathcal{F}_T\}$. A filtration is non-anticipating if $\mathcal{F}_0 = \{\emptyset, \Omega\}$ and $\mathcal{F}_s \subseteq \mathcal{F}_t$, if $s \le t$,
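The information tree of Figure 4.1 can be encoded as a sequence of partitions of Ω, one per period, and the non-anticipating property checked directly: each period's partition must refine the previous one. A minimal sketch (the state labels follow the figure):

```python
# Partitions generating F_0, ..., F_3 for the binomial tree of Figure 4.1
Omega = frozenset(range(1, 9))          # states omega_1 ... omega_8
F = [
    [Omega],                                             # F_0: trivial
    [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8})],  # F_1
    [frozenset(b) for b in ({1, 2}, {3, 4}, {5, 6}, {7, 8})],  # F_2
    [frozenset({s}) for s in Omega],                     # F_3: singletons
]

def is_partition(blocks, omega):
    """Blocks are non-empty, pairwise disjoint, and cover omega."""
    seen = set()
    for b in blocks:
        if not b or seen & b:
            return False
        seen |= b
    return seen == omega

def refines(finer, coarser):
    """Every block of the finer partition fits inside a block of the coarser."""
    return all(any(b <= c for c in coarser) for b in finer)

assert all(is_partition(P, Omega) for P in F)
# Non-anticipating: information only accumulates through time
assert all(refines(F[t + 1], F[t]) for t in range(len(F) - 1))
print("filtration is non-anticipating")
```

Each partition generates the corresponding σ-algebra by unions of its blocks, so checking refinement of the partitions is equivalent to checking $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$ in this finite setting.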
More informationLecture 3: Growth Model, Dynamic Optimization in Continuous Time (Hamiltonians)
Lecture 3: Growth Model, Dynamic Optimization in Continuous Time (Hamiltonians) ECO 503: Macroeconomic Theory I Benjamin Moll Princeton University Fall 2014 1/16 Plan of Lecture Growth model in continuous
More informationECON 582: An Introduction to the Theory of Optimal Control (Chapter 7, Acemoglu) Instructor: Dmytro Hryshko
ECON 582: An Introduction to the Theory of Optimal Control (Chapter 7, Acemoglu) Instructor: Dmytro Hryshko Continuous-time optimization involves maximization wrt to an innite dimensional object, an entire
More informationDynamic stochastic general equilibrium models. December 4, 2007
Dynamic stochastic general equilibrium models December 4, 2007 Dynamic stochastic general equilibrium models Random shocks to generate trajectories that look like the observed national accounts. Rational
More informationEconomic Growth: Lecture 13, Stochastic Growth
14.452 Economic Growth: Lecture 13, Stochastic Growth Daron Acemoglu MIT December 10, 2013. Daron Acemoglu (MIT) Economic Growth Lecture 13 December 10, 2013. 1 / 52 Stochastic Growth Models Stochastic
More informationEconomic Growth: Lecture 8, Overlapping Generations
14.452 Economic Growth: Lecture 8, Overlapping Generations Daron Acemoglu MIT November 20, 2018 Daron Acemoglu (MIT) Economic Growth Lecture 8 November 20, 2018 1 / 46 Growth with Overlapping Generations
More informationCHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS. p. 1/73
CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS p. 1/73 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS Mixed Inequality Constraints: Inequality constraints involving control and possibly
More informationIntroduction to Dynamic Programming Lecture Notes
Introduction to Dynamic Programming Lecture Notes Klaus Neusser November 30, 2017 These notes are based on the books of Sargent (1987) and Stokey and Robert E. Lucas (1989). Department of Economics, University
More informationEconomic Growth (Continued) The Ramsey-Cass-Koopmans Model. 1 Literature. Ramsey (1928) Cass (1965) and Koopmans (1965) 2 Households (Preferences)
III C Economic Growth (Continued) The Ramsey-Cass-Koopmans Model 1 Literature Ramsey (1928) Cass (1965) and Koopmans (1965) 2 Households (Preferences) Population growth: L(0) = 1, L(t) = e nt (n > 0 is
More informationLecture 6: Competitive Equilibrium in the Growth Model (II)
Lecture 6: Competitive Equilibrium in the Growth Model (II) ECO 503: Macroeconomic Theory I Benjamin Moll Princeton University Fall 204 /6 Plan of Lecture Sequence of markets CE 2 The growth model and
More informationControlled Diffusions and Hamilton-Jacobi Bellman Equations
Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter
More informationEndogenous Growth: AK Model
Endogenous Growth: AK Model Prof. Lutz Hendricks Econ720 October 24, 2017 1 / 35 Endogenous Growth Why do countries grow? A question with large welfare consequences. We need models where growth is endogenous.
More informationLecture notes on modern growth theory
Lecture notes on modern growth theory Part 2 Mario Tirelli Very preliminary material Not to be circulated without the permission of the author October 25, 2017 Contents 1. Introduction 1 2. Optimal economic
More informationLecture 2 The Centralized Economy: Basic features
Lecture 2 The Centralized Economy: Basic features Leopold von Thadden University of Mainz and ECB (on leave) Advanced Macroeconomics, Winter Term 2013 1 / 41 I Motivation This Lecture introduces the basic
More informationS2500 ANALYSIS AND OPTIMIZATION, SUMMER 2016 FINAL EXAM SOLUTIONS
S25 ANALYSIS AND OPTIMIZATION, SUMMER 216 FINAL EXAM SOLUTIONS C.-M. MICHAEL WONG No textbooks, notes or calculators allowed. The maximum score is 1. The number of points that each problem (and sub-problem)
More informationViscosity Solutions for Dummies (including Economists)
Viscosity Solutions for Dummies (including Economists) Online Appendix to Income and Wealth Distribution in Macroeconomics: A Continuous-Time Approach written by Benjamin Moll August 13, 2017 1 Viscosity
More informationADVANCED MACROECONOMIC TECHNIQUES NOTE 3a
316-406 ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a Chris Edmond hcpedmond@unimelb.edu.aui Dynamic programming and the growth model Dynamic programming and closely related recursive methods provide an important
More informationEconomic Growth: Lecture 9, Neoclassical Endogenous Growth
14.452 Economic Growth: Lecture 9, Neoclassical Endogenous Growth Daron Acemoglu MIT November 28, 2017. Daron Acemoglu (MIT) Economic Growth Lecture 9 November 28, 2017. 1 / 41 First-Generation Models
More informationPart A: Answer question A1 (required), plus either question A2 or A3.
Ph.D. Core Exam -- Macroeconomics 5 January 2015 -- 8:00 am to 3:00 pm Part A: Answer question A1 (required), plus either question A2 or A3. A1 (required): Ending Quantitative Easing Now that the U.S.
More informationMathematical Economics. Lecture Notes (in extracts)
Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter
More informationarxiv: v1 [math.pr] 24 Sep 2018
A short note on Anticipative portfolio optimization B. D Auria a,b,1,, J.-A. Salmerón a,1 a Dpto. Estadística, Universidad Carlos III de Madrid. Avda. de la Universidad 3, 8911, Leganés (Madrid Spain b
More informationWorst Case Portfolio Optimization and HJB-Systems
Worst Case Portfolio Optimization and HJB-Systems Ralf Korn and Mogens Steffensen Abstract We formulate a portfolio optimization problem as a game where the investor chooses a portfolio and his opponent,
More informationIntroduction to Optimal Control Theory and Hamilton-Jacobi equations. Seung Yeal Ha Department of Mathematical Sciences Seoul National University
Introduction to Optimal Control Theory and Hamilton-Jacobi equations Seung Yeal Ha Department of Mathematical Sciences Seoul National University 1 A priori message from SYHA The main purpose of these series
More informationCHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME. Chapter2 p. 1/67
CHAPTER 2 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Chapter2 p. 1/67 THE MAXIMUM PRINCIPLE: CONTINUOUS TIME Main Purpose: Introduce the maximum principle as a necessary condition to be satisfied by any optimal
More informationSTATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics
STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics Ph. D. Comprehensive Examination: Macroeconomics Fall, 202 Answer Key to Section 2 Questions Section. (Suggested Time: 45 Minutes) For 3 of
More informationLecture 2. (1) Aggregation (2) Permanent Income Hypothesis. Erick Sager. September 14, 2015
Lecture 2 (1) Aggregation (2) Permanent Income Hypothesis Erick Sager September 14, 2015 Econ 605: Adv. Topics in Macroeconomics Johns Hopkins University, Fall 2015 Erick Sager Lecture 2 (9/14/15) 1 /
More informationMacro 1: Dynamic Programming 1
Macro 1: Dynamic Programming 1 Mark Huggett 2 2 Georgetown September, 2016 DP Warm up: Cake eating problem ( ) max f 1 (y 1 ) + f 2 (y 2 ) s.t. y 1 + y 2 100, y 1 0, y 2 0 1. v 1 (x) max f 1(y 1 ) + f
More information1 Bewley Economies with Aggregate Uncertainty
1 Bewley Economies with Aggregate Uncertainty Sofarwehaveassumedawayaggregatefluctuations (i.e., business cycles) in our description of the incomplete-markets economies with uninsurable idiosyncratic risk
More informationAn Application to Growth Theory
An Application to Growth Theory First let s review the concepts of solution function and value function for a maximization problem. Suppose we have the problem max F (x, α) subject to G(x, β) 0, (P) x
More informationThomas Knispel Leibniz Universität Hannover
Optimal long term investment under model ambiguity Optimal long term investment under model ambiguity homas Knispel Leibniz Universität Hannover knispel@stochastik.uni-hannover.de AnStAp0 Vienna, July
More informationGraduate Macro Theory II: Notes on Solving Linearized Rational Expectations Models
Graduate Macro Theory II: Notes on Solving Linearized Rational Expectations Models Eric Sims University of Notre Dame Spring 2017 1 Introduction The solution of many discrete time dynamic economic models
More informationIn the Ramsey model we maximized the utility U = u[c(t)]e nt e t dt. Now
PERMANENT INCOME AND OPTIMAL CONSUMPTION On the previous notes we saw how permanent income hypothesis can solve the Consumption Puzzle. Now we use this hypothesis, together with assumption of rational
More informationLecture 2. (1) Permanent Income Hypothesis (2) Precautionary Savings. Erick Sager. February 6, 2018
Lecture 2 (1) Permanent Income Hypothesis (2) Precautionary Savings Erick Sager February 6, 2018 Econ 606: Adv. Topics in Macroeconomics Johns Hopkins University, Spring 2018 Erick Sager Lecture 2 (2/6/18)
More informationLecture 5: The Bellman Equation
Lecture 5: The Bellman Equation Florian Scheuer 1 Plan Prove properties of the Bellman equation (In particular, existence and uniqueness of solution) Use this to prove properties of the solution Think
More informationOverlapping Generation Models
Overlapping Generation Models Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 122 Growth with Overlapping Generations Section 1 Growth with Overlapping Generations
More informationBasic Deterministic Dynamic Programming
Basic Deterministic Dynamic Programming Timothy Kam School of Economics & CAMA Australian National University ECON8022, This version March 17, 2008 Motivation What do we do? Outline Deterministic IHDP
More informationThe speculator: a case study of the dynamic programming equation of an infinite horizon stochastic problem
The speculator: a case study of the dynamic programming equation of an infinite horizon stochastic problem Pavol Brunovský Faculty of Mathematics, Physics and Informatics Department of Applied Mathematics
More informationCourse Handouts: Pages 1-20 ASSET PRICE BUBBLES AND SPECULATION. Jan Werner
Course Handouts: Pages 1-20 ASSET PRICE BUBBLES AND SPECULATION Jan Werner European University Institute May 2010 1 I. Price Bubbles: An Example Example I.1 Time is infinite; so dates are t = 0,1,2,...,.
More information