Basic Deterministic Dynamic Programming


1 Basic Deterministic Dynamic Programming. Timothy Kam, School of Economics & CAMA, Australian National University. ECON8022. This version: March 17, 2008.

2 Outline: Motivation (what do we do?); Deterministic IHDP; Deterministic IHDP: example; From sequence to recursive problem; Histories, strategies, and the value function; Principle of Optimality; Existence and uniqueness of the value function; Useful results; Strategy space; Backing out strategies: existence and uniqueness?; Stationary optimal strategies; Example: RCK model; Existence of S.O.S.; Properties of v; Unique π; Dynamic properties.

3 Motivation, Plan of Attack. Previously, we heuristically motivated the $\infty$-horizon (deterministic) planning problem as the limiting case of the smaller, finite $T$-horizon problem. But this was not precise enough!

4 Motivation, Plan of Attack. In this lecture we show rigorously how to characterize the $\infty$-horizon (deterministic) planning problem as a recursive infinite-horizon dynamic programming (IHDP) problem. This leads to the analysis of the solution to a functional equation, the Bellman equation (or the Bellman operator), of the form $B : C_b(X) \to C_b(X)$, where $C_b(X) := \{ w : X \to \mathbb{R} \mid w \text{ is continuous and bounded} \}$. We seek the fixed-point solution to the mapping $w \mapsto B(w)$ defined on a space of functions. (So we can think of functions $w : X \to \mathbb{R}$ as points living in such a space, just like points $x \in \mathbb{R}$.) What is the economic meaning of the value function $w$?

5 Motivation, Plan of Attack. In this lecture (cont'd)... Step 1. Characterize the fixed-point solution $v = B(v)$: existence of $v$; uniqueness of $v$; properties of $v$ (does it inherit continuity, boundedness...?). Step 2. Knowing $v$, reverse-engineer the optimal strategies that support the optimal value $v(x_0)$ of the decision process starting from state $x_0 \in X$: focus on strategies $\pi = (\pi_0, \pi_1, \dots)$ induced by stationary policy/decision functions $x_t \mapsto \pi_t(x_t)$ such that $\pi_t(x) = \pi_s(x) = \pi(x)$ for all $t, s$. Existence of $\pi$? Uniqueness of the optimal strategy $\pi = \{\pi(x_t)\}_{t=0}^{\infty}$? Concrete example: characterizing and solving the IHDP problem for the RCK model.

6 Setting Up: Deterministic IHDP. Key objects in the infinite-horizon discounted optimization problem $\{X, A, \Gamma, f, U, \beta\}$: 1. $X$ is the state space. 2. $A$ is the action space. 3. $\Gamma : X \to 2^A$ is the feasible action correspondence. 4. $f : X \times A \to X$ is the state transition law. 5. $U : X \times A \to \mathbb{R}$ is the per-period payoff/reward/utility. 6. $\beta$ is the constant subjective discount factor.

7 Setting Up: Example. In the RCK optimal growth model, $\{X, A, \Gamma, f, U, \beta\}$ is: 1. $X = \mathbb{R}_+$, the space of possible states of the capital stock. 2. $A = \mathbb{R}_+$, the space of saving/consumption choices in RCK. 3. For each current state $k \in X$, the feasible action set is $\Gamma(k) = \{ k' \in A : 0 \le k' \le F(k) + (1-\delta)k \}$. 4. The state transition law is given by $k' = F(k) + (1-\delta)k - c \equiv f(k, c)$. 5. Per-period payoff/reward/utility: $U(c) \in \mathbb{R}$. 6. $\beta \in (0, 1)$ is the constant subjective discount factor.
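
To fix ideas, here is one minimal way to encode these six primitives in Python. This is our own sketch, not part of the lecture: the Cobb-Douglas form $F(k) = k^{\alpha}$, log utility, and the parameter values $\alpha = 0.3$, $\delta = 0.1$, $\beta = 0.95$ are all assumptions chosen for concreteness.

```python
import numpy as np

# Hypothetical parameterization of the RCK primitives {X, A, Gamma, f, U, beta}.
alpha, delta, beta = 0.3, 0.1, 0.95      # assumed values, for illustration only

F = lambda k: k**alpha                   # assumed Cobb-Douglas production function
U = lambda c: np.log(c)                  # assumed per-period utility

def Gamma(k):
    """Feasible action set: k' must lie in [0, F(k) + (1 - delta) * k]."""
    return (0.0, F(k) + (1.0 - delta) * k)

def f_transition(k, c):
    """State transition law: k' = F(k) + (1 - delta) * k - c."""
    return F(k) + (1.0 - delta) * k - c

lo, hi = Gamma(1.0)
print(f"Gamma(1.0) = [{lo}, {hi:.3f}]; f(1.0, 0.5) = {f_transition(1.0, 0.5):.3f}")
```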

8 From Sequence to Recursive Problem. The sequence problem in general:
$$v(x_0) = \sup \left\{ \sum_{t=0}^{\infty} \beta^t U(x_t, u_t) : u_t \in \Gamma(x_t),\ x_{t+1} = f(x_t, u_t),\ x_0 \in X \text{ given} \right\} \tag{P1}$$
Remarks: $v(x_0)$ is the l.u.b., i.e. the maximal lifetime payoff w.r.t. initial state $x_0$, if the planner follows an optimal path of actions that solves the RHS problem. $v$ has an indirect-utility interpretation; recall the duality theorem of optimization in, e.g., consumer theory. Why sup and not max for the RHS problem in (P1)?

9 Histories, strategies. To understand (P1) and the recursive value function approach, describe the sequential decision-making problem as follows. Definition. A $t$-history is $h^t = \{x_0, u_0, \dots, x_{t-1}, u_{t-1}, x_t\}$. Set of all possible $t$-histories: let $H^0 = X$, and for $t = 1, 2, \dots$ let $H^t$ be the set of all possible $t$-histories $h^t$. The period-$t$ state under history $h^t$ is $x_t[h^t]$. A (feasible) strategy $\sigma = \{\sigma_t(h^t)\}_{t=0}^{\infty}$ is a plan of action such that $\sigma_t : H^t \to A$ and the actions are feasible for each history: $\sigma_t(h^t) \in \Gamma(x_t[h^t])$. Set of all feasible strategies: $\Sigma = \{ \sigma : \sigma_t(h^t) \in \Gamma(x_t[h^t]),\ \forall h^t \in H^t,\ t \in \mathbb{N} \}$. Note: distinguish between the action actually taken, $u_t$, and the component of the strategy, $\sigma_t$.

10 Histories, strategies Few will have the greatness to bend history itself; but each of us can work to change a small portion of events, and in the total of all those acts will be written the history of this generation. J.F.K.

11 Idea: Histories, strategies. The planner (decision maker) stands at an arbitrary initial date $t = 0$. The planner looks ahead; she knows the current state $x = x_0$, the transition law $f$, the reward function $U$, and the discount factor $\beta$. The planner considers a (possibly arbitrary) strategy $\sigma = \{\sigma_t\}$. Fix this plan. The strategy $\sigma$ then induces a fixed path for the state, $\{x_t(\sigma, x_0)\}_{t \in \mathbb{N}}$. Under strategy $\sigma$, at $t = 0$: $h^0 = x_0(\sigma, x_0) = x_0$, $u_0(\sigma, x_0) = \sigma_0(h^0(\sigma, x_0))$, and $x_1(\sigma, x_0) = f(x_0, u_0(\sigma, x_0))$. Under strategy $\sigma$, at $t = 1$: $h^1(\sigma, x_0) = (x_0, u_0(\sigma, x_0), x_1(\sigma, x_0))$, $u_1(\sigma, x_0) = \sigma_1(h^1(\sigma, x_0))$, and $x_2(\sigma, x_0) = f(x_1(\sigma, x_0), u_1(\sigma, x_0))$.

12 Histories, strategies, and reward. By induction, for each fixed strategy $\sigma \in \Sigma$ and given $x_0$, we have for all $t \in \mathbb{N}$: $h^t(\sigma, x_0) = \{x_0, u_0(\sigma, x_0), \dots, x_t(\sigma, x_0)\}$, $u_t(\sigma, x_0) = \sigma_t(h^t(\sigma, x_0))$, and $x_{t+1}(\sigma, x_0) = f(x_t(\sigma, x_0), u_t(\sigma, x_0))$. So we can generate the infinite sequence of states and actions $\{x_t(\sigma, x_0), u_t(\sigma, x_0)\}_{t \in \mathbb{N}}$ induced by the strategy $\sigma$.

13 Histories, strategies, and value function. Each period-$t$ action $u_t(\sigma, x_0)$, consistent with strategy $\sigma$ and starting from initial state $x_0$, induces a period-$t$ payoff: $U_t(\sigma)(x_0) = U[x_t(\sigma, x_0), u_t(\sigma, x_0)]$. Then the total discounted payoff/reward generated under $(\sigma, x_0)$ is $W(\sigma)(x_0) = \sum_{t=0}^{\infty} \beta^t U_t(\sigma)(x_0)$. Definition: The value function is the maximal total discounted payoff across all possible (i.e. feasible) strategies: $v(x_0) = \sup_{\sigma \in \Sigma} W(\sigma)(x_0)$.
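
As a concrete illustration (our own sketch, assuming the log-utility growth model from the earlier example and an arbitrary fixed-savings strategy), $W(\sigma)(x_0)$ can be approximated by truncating the infinite sum: since the per-period payoff is bounded along the path and $\beta < 1$, the omitted tail is $O(\beta^T)$.

```python
import numpy as np

alpha, beta = 0.3, 0.95          # assumed parameters
f = lambda k: k**alpha           # assumed production function (full depreciation)
U = lambda c: np.log(c)          # assumed per-period utility

def W(sigma, x0, T=500):
    """Truncated total discounted payoff of strategy sigma starting from x0.
    The omitted tail is O(beta**T), so T = 500 is effectively exact here."""
    x, total = x0, 0.0
    for t in range(T):
        u = sigma(x)                       # action prescribed by the strategy: u = k'
        total += beta**t * U(f(x) - u)     # period payoff U(c), with c = f(x) - k'
        x = u                              # next state under the transition law
    return total

sigma = lambda k: 0.5 * f(k)     # an arbitrary feasible strategy: save half of output
print(W(sigma, x0=1.0))          # lifetime payoff under this (generally suboptimal) plan
```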

14 Sequence Problem to Recursive Problem. First, we need to be able to compare $W(\sigma)(x_0)$ and $W(\tilde{\sigma})(x_0)$ for any $\sigma \neq \tilde{\sigma}$. This is not possible if $W$ is not bounded. Assumption 1. There exists $K < +\infty$ s.t. $-K \le U(x_t, u_t) \le K$ for all $(x_t, u_t) \in X \times A$. This buys us: Lemma 1. $v : X \to \mathbb{R}$ is bounded. And: Lemma 2. For any initial state $x_0 \in X$ and $\epsilon > 0$, there is a strategy $\sigma$ s.t. $W(\sigma)(x_0) \ge v(x_0) - \epsilon$.

15 The Bellman Principle of Optimality. So now we are ready to show that the sequence problem (P1), of picking an optimal strategy $\sigma$, yields an optimal value function $v$ that satisfies a recursive representation taking the form of a Bellman functional equation. Solving the Bellman equation for $v$ then implies we have solved for the value function of (P1), and we can recursively recover the supporting optimal strategy/strategies while solving the Bellman equation.

16 The Bellman Principle of Optimality. Theorem (Bellman principle of optimality). Let $x$ denote the current state and $x'$ the next-period state. For each $x \in X$, the value function $v : X \to \mathbb{R}$ of (P1) satisfies
$$v(x) = \sup_{u \in \Gamma(x)} \{U(x, u) + \beta v(x')\} \quad \text{s.t. } x' = f(x, u) \tag{1}$$
Remark. The Bellman equation says that one's actions or decisions along the optimal path have to be time consistent. That is, once we are on this path, there is no incentive to deviate from it at any future decision node.

17 Proof. Let $W : X \to \mathbb{R}$ be such that $W(x) = \sup_{u \in \Gamma(x)} \{U(x, u) + \beta v(f(x, u))\}$ for any $x \in X$. Trick: w.t.s. $v(x) \ge W(x)$, and $v(x) \le W(x)$, so that $v(x) = W(x)$ for any $x \in X$. So we will break this proof down into these TWO steps.

18 Proof (continued...). Step 1. Show $v(x) \ge W(x)$. Pick any feasible $u \in \Gamma(x)$, and note $x' = f(x, u)$. By Lemma 2 there is a continuation strategy $\sigma$ s.t. the continuation value satisfies $W(\sigma)(x') \ge v(x') - \epsilon$. So then $v(x) \ge U(x, u) + \beta W(\sigma)(f(x, u)) \ge U(x, u) + \beta v(f(x, u)) - \beta\epsilon$. Since this holds for all $u \in \Gamma(x)$,
$$v(x) \ge \sup_{u \in \Gamma(x)} \{U(x, u) + \beta v(f(x, u))\} - \beta\epsilon \quad \text{for every } \epsilon > 0,$$
and since $\epsilon > 0$ is arbitrary, $v(x) \ge \sup_{u \in \Gamma(x)} \{U(x, u) + \beta v(f(x, u))\} = W(x)$.

19 Proof (continued...). Step 2. Show $v(x) \le W(x)$. Fix any $x \in X$ and $\epsilon > 0$. Let $\sigma$ be s.t. $W(\sigma)(x) \ge v(x) - \epsilon$. Starting at $x$, pick $u = \sigma_0(x)$, so that $x_1 = f(x, u)$. Let $\sigma_1$ denote the continuation strategy under $\sigma$ following $\sigma_0(x)$. Since $\sigma_1$ is itself a feasible strategy, the continuation value it supports is either optimal or it is not, so $W(\sigma_1)(x_1) \le v(x_1)$. So then
$$v(x) - \epsilon \le W(\sigma)(x) = U(x, u) + \beta W(\sigma_1)[f(x, u)] \le U(x, u) + \beta v(f(x, u)) \le \sup_{u \in \Gamma(x)} \{U(x, u) + \beta v(f(x, u))\} = W(x).$$
Since $\epsilon > 0$ is arbitrary, $v(x) \le W(x)$.

20 Existence and Uniqueness of value function. Notes: We moved from a problem (P1) of picking an infinite plan of action $\sigma$ to one of solving recursively for the optimal value function $v : X \to \mathbb{R}$. The problem then becomes one of not knowing what the value function $v : X \to \mathbb{R}$ looks like. Idea: we can start with any guess of $v(x)$ for all $x \in X$, and apply the Bellman operator to produce another guess of $v(x)$. The theorem tells us that these successive approximations of $v$ will eventually converge to a unique value function $v : X \to \mathbb{R}$ that satisfies both sides of the Bellman equation.

21 Existence and Uniqueness of value function. Application of the Contraction Mapping Principle (a.k.a. the Banach* Fixed-Point Theorem). *Stefan Banach (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales," Fundamenta Mathematicae, 3: 133-181.

22 Existence and Uniqueness of value function. Let $B(X)$ denote the set of all bounded functions from $X$ to $\mathbb{R}$; then $v \in B(X)$. Use the sup-norm metric to measure how close two functions $v, w \in B(X)$ are: $d_{\infty}(v, w) = \sup_{x \in X} |v(x) - w(x)|$. Let the map $T$ on $B(X)$ be defined as follows. For $w \in B(X)$, $Tw(x) = \sup_{u \in \Gamma(x)} \{U(x, u) + \beta w(f(x, u))\}$ at any $x \in X$. Since $U, w \in B(X)$, then $Tw \in B(X)$. So our Bellman operator is $T : B(X) \to B(X)$. A fixed point of this operator will give us the value function of (P1).
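
A minimal numerical sketch of iterating this operator, under an assumed discretized log-utility growth model (the grid, the Cobb-Douglas technology, and all parameters are our own illustrative choices, not part of the lecture): start from any bounded guess, apply $T$ repeatedly, and stop when the sup-norm distance $d_{\infty}(Tw, w)$ is small.

```python
import numpy as np

alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)     # finite grid standing in for the state space X
f = lambda k: k**alpha                 # assumed production function

# payoff[i, j] = U(c) from choosing next capital grid[j] in state grid[i]; -inf if infeasible
c = f(grid)[:, None] - grid[None, :]
payoff = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

def T(w):
    """Discretized Bellman operator: (Tw)(k) = max_{k'} { U(f(k) - k') + beta * w(k') }."""
    return np.max(payoff + beta * w[None, :], axis=1)

w = np.zeros(len(grid))                # any bounded initial guess will do
dist = 1.0
while dist > 1e-8:
    Tw = T(w)
    dist = np.max(np.abs(Tw - w))      # sup-norm distance d_inf(Tw, w)
    w = Tw
print("converged; v(0.2) is approximately", np.interp(0.2, grid, w))
```

The stopping rule is exactly the sup-norm metric $d_{\infty}$; the convergence it relies on is the contraction property established next.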

23 Useful results. Warning: you should have pre-read the material on real analysis listed prior to Semester 1 (see e.g. SLP, Chapter 3). Concepts: metric spaces, sequences, limits, Cauchy sequences, vector spaces, Banach spaces. We say a metric space $(X, d)$ is complete if every Cauchy sequence in the set $X$ converges to a limit, and the limit is in $X$. Lemma. The metric space $(B(X), d_{\infty})$ is complete. Definition. Let $(S, d)$ be a metric space and $T : S \to S$ a map. Let $T(w) := Tw$ denote the value of $T$ at $w \in S$. $T$ is a contraction with modulus $0 \le \beta < 1$ if $d(Tw, Tv) \le \beta d(w, v)$ for all $w, v \in S$.

24 Useful results. Theorem (Banach Fixed-Point Theorem). If $(S, d)$ is a complete metric space and $T : S \to S$ is a contraction, then there is a fixed point for $T$ and it is unique. Proof. See lecture notes. You need to know how this works! Existence: prove, using the completeness of $(S, d)$ and the triangle inequality, that there exists at least one fixed point. Uniqueness: then prove by contradiction that there can be only one such fixed point.
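
To see the mechanics on a toy case (our own illustration, not from the lecture): the scalar map $T(w) = 1 + \beta w$ is a contraction on $(\mathbb{R}, |\cdot|)$ with unique fixed point $v = 1/(1-\beta)$, and the iterates obey the geometric bound $d(T^n w_0, v) \le \beta^n d(w_0, v)$ that drives the existence proof.

```python
beta = 0.9
T = lambda w: 1.0 + beta * w       # a contraction on (R, |.|) with modulus beta
v = 1.0 / (1.0 - beta)             # its unique fixed point: v = 1 + beta * v

w, err0 = 0.0, abs(0.0 - v)        # start from an arbitrary point w0 = 0
for n in range(1, 6):
    w = T(w)
    # actual error vs. the geometric bound beta**n * d(w0, v) from the proof
    print(n, abs(w - v), beta**n * err0)
```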

25 Useful Results. We can make use of the following result to verify whether $T$ is a contraction mapping. Lemma (Blackwell's sufficient conditions for a contraction). Let $M : B(X) \to B(X)$ be any map satisfying: 1. Monotonicity: for any $v, w \in B(X)$ such that $w \ge v$, $Mw \ge Mv$. 2. Discounting: there exists $0 \le \beta < 1$ such that $M(w + c) \le Mw + \beta c$ for all $w \in B(X)$ and $c \ge 0$. (Define $(w + c)(x) = w(x) + c$.) Then $M$ is a contraction with modulus $\beta$.
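
The two conditions are easy to spot-check numerically for the discretized Bellman operator sketched earlier (again our own assumed parameterization). Random test functions are not a proof, but they catch violations quickly; note that for this operator, discounting in fact holds with equality, which implies the required inequality.

```python
import numpy as np

alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 100)
c = grid[:, None]**alpha - grid[None, :]
payoff = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
T = lambda w: np.max(payoff + beta * w[None, :], axis=1)   # discretized Bellman operator

rng = np.random.default_rng(0)
v = rng.normal(size=len(grid))
w = v + rng.uniform(0.0, 1.0, size=len(grid))   # w >= v pointwise
const = 2.5                                     # an arbitrary constant c >= 0

print("monotonicity:", np.all(T(w) >= T(v)))    # w >= v should imply Tw >= Tv
# Here discounting holds with equality, T(v + c) = Tv + beta*c, which implies "<=".
print("discounting :", np.allclose(T(v + const), T(v) + beta * const))
```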

26 Application: Banach Fixed-Point Theorem. Theorem (Existence and Uniqueness of value function). $v : X \to \mathbb{R}$ is the unique fixed point of the operator $T : B(X) \to B(X)$; that is, if any $w \in B(X)$ satisfies $w(x) = \sup_{u \in \Gamma(x)} \{U(x, u) + \beta w(f(x, u))\}$ at any $x \in X$, then it must be that $w = v$. Proof. $T : B(X) \to B(X)$ is a contraction with modulus $\beta$. Since $(B(X), d_{\infty})$ is a complete metric space, $v : X \to \mathbb{R}$ is the unique fixed point of $T$ by Theorem 4.

27 Backing out Strategies. OK, so far we have done the following: 1. Shown the Bellman Principle of Optimality: from sequence problem to recursive problem. 2. Characterized the solution to the Bellman equation, i.e. finding the optimal value function $v$, as the fixed point of the Bellman operator. 3. Studied basic conditions for existence and uniqueness of the solution $v \in B(X)$, i.e. in value-function space. 4. What does finding $v$ imply about the supporting optimal strategy or strategies in the strategy space? 5. It turns out we get existence of optimal strategies for free, with our existing set of assumptions. 6. But we need further structure on the model's primitives before we can conclude uniqueness of the optimal strategy.

28 Optimal strategy. Theorem. If $U$ is bounded (Assumption 1), a strategy $\sigma$ is optimal if and only if $W(\sigma)$ satisfies the Bellman equation $W(\sigma)(x) = \sup_{u \in \Gamma(x)} \{U(x, u) + \beta W(\sigma)(f(x, u))\}$ at each $x \in X$.

29 Backing out Strategies: Stationary Strategies. Often we want to be able to say more about the optimal strategies. Focus on stationary optimal strategies. Definition. A Markovian strategy $\pi$ for $\{X, A, U, f, \Gamma, \beta\}$ is a strategy $\pi = \{\pi_t\}_{t \in \mathbb{N}}$ with $\pi_t = \pi_t(x_t[h^t])$, where for each $t$, $\pi_t : X \to A$ is such that $\pi_t(x_t) \in \Gamma(x_t)$. Definition. A Markovian strategy $\pi = \{\pi_t\}_{t \in \mathbb{N}}$ with the further property that $\pi_t(x) = \pi_{\tau}(x) = \pi(x)$ for all $t, \tau$ and all $x \in X$ is called a stationary strategy.

30 Stationary optimal strategies: existence. We need more structure: Assumption 2. $U$ is continuous on $X \times A$. Assumption 3. $f$ is continuous on $X \times A$. Assumption 4. $\Gamma$ is a continuous, compact-valued correspondence on $X$.

31 Stationary optimal strategies: existence. Together with the assumption that $U$ is bounded on $X \times A$ (Assumption 1), we conclude: Step 1. Existence of a unique continuous and bounded value function that satisfies the Bellman Principle of Optimality. Step 2. Existence of a well-defined feasible action correspondence admitting a stationary optimal strategy that satisfies the Bellman Principle of Optimality. Step 3. This stationary strategy delivers a total discounted payoff that is equal to the value function, and is indeed an optimal strategy.

32 Stationary optimal strategies: existence. Notice that with Assumptions 1-4 we can focus on the space of bounded and continuous functions from $X$ to $\mathbb{R}$, denoted by $C_b(X)$. Previously we defined the Bellman operator $T$ on $B(X)$. Now the space in which our candidate value functions live is $C_b(X)$. Define the operator $T : C_b(X) \to C_b(X)$ by $Tw(x) = \max_{u \in \Gamma(x)} \{U(x, u) + \beta w(f(x, u))\}$ for each $x \in X$. (The sup is now attained as a max: by Assumptions 2-4 the objective is continuous in $u$ and $\Gamma(x)$ is compact.)

33 Stationary optimal strategies: Step 1. Lemma. $T : C_b(X) \to C_b(X)$ is a contraction with modulus $\beta$. Finally, by Banach's fixed-point theorem, we can show the existence of a unique continuous and bounded value function that satisfies the Bellman Principle of Optimality. Theorem. There exists a unique $w^* \in C_b(X)$ such that for each $x \in X$, $w^*(x) = \max_{u \in \Gamma(x)} \{U(x, u) + \beta w^*(f(x, u))\}$.

34 Stationary optimal strategies: Step 2. 1. Define $G^* : X \to P(A)$ by $G^*(x) = \arg\max_{u \in \Gamma(x)} \{U(x, u) + \beta w^*(f(x, u))\}$. 2. By the Maximum Theorem, $G^*$ is a nonempty, upper-semicontinuous correspondence. 3. Thus there exists a function $\pi^* : X \to A$ such that for each $x \in X$, $\pi^*(x) \in G^*(x) \subseteq \Gamma(x)$. 4. By construction, at all $x \in X$, $w^*(x) = \max_{u \in \Gamma(x)} \{U(x, u) + \beta w^*(f(x, u))\} = U(x, \pi^*(x)) + \beta w^*[f(x, \pi^*(x))] \ge U(x, \tilde{u}) + \beta w^*[f(x, \tilde{u})]$ for all $\tilde{u} \in \Gamma(x)$. 5. So the function $\pi^* : X \to A$ defines a stationary optimal strategy.
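
Numerically, Step 2 amounts to one argmax over the grid once $w^*$ is in hand. Continuing our assumed discretized growth model from the earlier sketches: with full depreciation, log utility, and Cobb-Douglas technology this special case happens to have the known closed-form policy $\pi(k) = \alpha\beta k^{\alpha}$, which we use purely as a check.

```python
import numpy as np

alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)
c = grid[:, None]**alpha - grid[None, :]
payoff = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

w = np.zeros(len(grid))
for _ in range(2000):                          # iterate T to an approximate fixed point w*
    w = np.max(payoff + beta * w[None, :], axis=1)

# G*(k) = argmax_{k'} { U(f(k) - k') + beta * w*(k') }; pick a selection pi* from G*
policy_idx = np.argmax(payoff + beta * w[None, :], axis=1)
pi_star = grid[policy_idx]                     # stationary optimal strategy pi*: X -> A

# compare with the known closed form for this special case, pi(k) = alpha*beta*k**alpha
print(np.max(np.abs(pi_star - alpha * beta * grid**alpha)))   # small, up to grid error
```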

35 Stationary optimal strategies: Step 3. 1. Fix any initial state $x \in X$. 2. Generate the sequence $\{x_t(x, \pi^*), u_t(x, \pi^*)\}$ under the s.o.s. $\pi^*$, from $x$. 3. The payoff from $\pi^*$ beginning from $x$ is $U_t(\pi^*)(x) := U[x_t(x, \pi^*), u_t(x, \pi^*)]$. 4. Shorthand: $x_t := x_t(x, \pi^*)$ and $u_t := u_t(x, \pi^*)$. By definition, $W(\pi^*)(x) = \sum_{t=0}^{\infty} \beta^t U_t(\pi^*)(x)$. Show:
$$w^*(x) = U(x, \pi^*(x)) + \beta w^*[f(x, \pi^*(x))] = \lim_{T \to \infty} \left( \sum_{t=0}^{T-1} \beta^t U_t(\pi^*)(x) + \beta^T w^*[x_T(\pi^*, x)] \right) = W(\pi^*)(x).$$
5. Since $w^*$ is the unique fixed point satisfying the Bellman equation, $W(\pi^*)(x) = \max_{u \in \Gamma(x)} \{U(x, u) + \beta W(\pi^*)[f(x, u)]\}$.

36 Stationary optimal strategies. So Steps 1-3 give us the result: Theorem. If the stationary dynamic programming problem $\{X, A, \Gamma, f, U, \beta\}$ satisfies Assumptions 1-4, then there exists a stationary optimal policy $\pi^*$. Furthermore, the value function $v = W(\pi^*)$ is bounded and continuous on $X$, and satisfies, for each $x \in X$, $v(x) = \max_{u \in \Gamma(x)} \{U(x, u) + \beta v(f(x, u))\} = U(x, \pi^*(x)) + \beta W(\pi^*)(f(x, \pi^*(x)))$. Note: with additional strict concavity assumptions on $U$ and $f$, an optimal strategy not only exists but is also unique; i.e. the s.o.s. is unique.

37 A concrete example: The (T = ∞) RCK model. See lecture notes for details. Here we will add a few more assumptions. Things to look out for: 1. Strict concavity of $U$ buys monotonicity of the optimal saving/consumption path. 2. Strict concavity of $U$ and quasi-concavity of $f$ buy: monotonicity of the optimal saving/consumption path; steady-state values that are independent of preferences $U$ (they depend only on technology); a unique optimal strategy, i.e. a unique s.o.s. If in addition $U, f \in C^1$ and $f$ is also strictly concave, then $v$ is continuously differentiable at $k \in \mathrm{int}(X)$, and the optimal path can be described by Euler equations!

38 A concrete example: The (T = ∞) RCK model. The sequence problem in the RCK model from before can be written as:
$$\max_{\{c_t, k_{t+1}\}_{t \in \mathbb{N}}} \sum_{t=0}^{\infty} \beta^t U(c_t)$$
subject to $k_0 = k$ given, $f(k_t) = c_t + k_{t+1}$ for all $t \in \mathbb{N}$, and $0 \le k_{t+1} \le f(k_t)$.

39 A concrete example: The (T = ∞) RCK model. $\{X, A, \Gamma, U, g, \beta\}$ fully describes the stationary discounted dynamic programming problem in the RCK model. State space: $X = \mathbb{R}_+$; state variable $k \in X$. Action space: $A = \mathbb{R}_+$. State transition function $g : X \times A \to X$, such that for each $(k, c) \in X \times A$, next period's state is $k' = g(k, c) = f(k) - c$. Feasible action correspondence: $\Gamma(k) = [0, f(k)]$. Per-period payoff from action $c \in \Gamma(k)$ given state $k$: $U(k, c) = U(c)$.

40 We would like to check the following items for this example model: 1. When do (stationary) optimal strategies exist? 2. What are the properties of the value function? 3. Is an optimal strategy unique here? 4. What are the dynamic properties, i.e. the trajectory of $\{c_t, k_t\}_{t \in \mathbb{N}}$ under the optimal strategy? What is the behavior of the transitional path (short run)? The steady state (long run)?

41 A concrete example: The (T = ∞) RCK model. Alternative restrictions: Assumption. Instead of $U$ bounded, let $X = A = [0, \bar{k}]$ where $\bar{k} < \infty$. State and action spaces are compact, and $U$ is continuous on $X \times A$; so indirectly $U$ (restricted to the compact set $X \times A$) will be bounded. Specifically, in this model $U : A \to \mathbb{R}$. Assumption. $f : X \to \mathbb{R}_+$ is continuous and nondecreasing on $X$, and $f$ is bounded.

42 RCK model: 1. Existence of S.O.S. Theorem. There exists a stationary optimal strategy $\pi : X \to A$ for the optimal growth model given by $\{X, A, \Gamma, U, g, \beta\}$, such that $v(k) = \max_{k' \in \Gamma(k)} \{U(f(k) - k') + \beta v(k')\} = U(f(k) - \pi(k)) + \beta v(\pi(k))$.

43 RCK model: 2. Value function inherits primitives. Theorem. $v : X \to \mathbb{R}$ is a nondecreasing function on $X$. Proof. Define $T : C_b(X) \to C_b(X)$ by $Tw(k) = \max_{k' \in [0, f(k)]} \{U(f(k) - k') + \beta w(k')\}$. $T$ can be shown to be a contraction on $(C_b(X), d_{\infty})$. Since $C_b(X)$ is complete, $T : C_b(X) \to C_b(X)$ has a unique fixed point $v \in C_b(X)$. Since $f$ is nondecreasing on $X$, the feasible set $\Gamma(k) = [0, f(k)]$ expands with $k$ and the return $U(f(k) - k')$ is nondecreasing in $k$. So starting at any $w$ on $X$ that is nondecreasing, $Tw$ is also nondecreasing on $X$. Therefore $v$ is nondecreasing on $X$.
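
The key step of this proof, that $T$ maps nondecreasing functions to nondecreasing functions, is easy to watch numerically in our assumed discretized model (an illustration, not a proof):

```python
import numpy as np

alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 100)
c = grid[:, None]**alpha - grid[None, :]
payoff = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
T = lambda w: np.max(payoff + beta * w[None, :], axis=1)

w = np.sqrt(grid)                         # any nondecreasing initial guess
for _ in range(50):
    w = T(w)
    assert np.all(np.diff(w) >= -1e-12)   # every iterate Tw stays nondecreasing
print("monotonicity preserved at every iterate, so the limit v is nondecreasing")
```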

44 RCK model: 3. Uniqueness of optimal strategy. Theorem. Suppose: 1. all assumptions as before, and 2. $U$ is strictly increasing and strictly concave on $A$. Then: 1. the optimal savings level $\pi(k) := f(k) - c(k)$ under the optimal strategy $\pi$, where $k' = \pi(k)$, is nondecreasing on $X$; 2. if $f$ is (weakly) concave on $X$, then the value function $v$ is (weakly) concave on $X$; and 3. the correspondence $G^* : X \to 2^A$,
$$G^*(k) = \arg\max_{k' \in \Gamma(k)} \{U(f(k) - k') + \beta w^*(k')\}, \quad k \in X,$$
is a singleton set (a set of only one maximizer $k'$) for each state $k \in X$. Therefore $G^*$ admits a unique optimal strategy $\pi$. Furthermore, $\pi$ is a continuous function on $X$.

45 RCK model: 4. Dynamic properties. Theorem. Suppose: 1. all assumptions as before, 2. $U \in C^1((0, \infty))$ and $\lim_{c \to 0} U'(c) = \infty$, and 3. $f \in C^1((0, \infty))$ and $\lim_{k \to 0} f'(k) > 1/\beta$. Then: 1. the solution $k' = \pi(k)$ is such that $\pi(k) \in (0, f(k))$ for all $k \in X$; 2. (Benveniste and Scheinkman) $v : X \to \mathbb{R}$ is a $C^1$ function: $v$ is continuously differentiable at any feasible $k \in \mathrm{int}(X)$, with derivative $v'(k) = U'(f(k) - \pi(k)) f'(k)$; 3. so the optimal path can be described by the Euler equation
$$U_c[f(k) - \pi(k)] = \beta U_c[f(k') - \pi(k')] f_k(\pi(k)), \quad \text{where } k' = \pi(k).$$
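
As a sanity check (our own sketch): for the assumed log-utility, Cobb-Douglas, full-depreciation special case, the optimal policy is known in closed form, $\pi(k) = \alpha\beta k^{\alpha}$, and the Euler residual vanishes up to floating-point error.

```python
import numpy as np

alpha, beta = 0.3, 0.95
f  = lambda k: k**alpha                  # assumed production function
fk = lambda k: alpha * k**(alpha - 1.0)  # its derivative f'(k)
Uc = lambda c: 1.0 / c                   # marginal utility for U(c) = log(c)
pi = lambda k: alpha * beta * f(k)       # known closed-form policy for this special case

k = np.linspace(0.05, 0.5, 7)
kp = pi(k)                               # next-period capital k' = pi(k)
# Euler equation: U_c(f(k) - pi(k)) = beta * U_c(f(k') - pi(k')) * f'(pi(k))
lhs = Uc(f(k) - pi(k))
rhs = beta * Uc(f(kp) - pi(kp)) * fk(kp)
print(np.max(np.abs(lhs - rhs)))         # zero up to floating-point error
```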

46 RCK model: 4. Dynamic properties. Theorem. Under the assumptions above, the optimal saving decision function is increasing on $X$: for $k > \tilde{k}$, $\pi(k) > \pi(\tilde{k})$. Theorem. Given any initial condition $k \in X$, the sequence of states $\{k_{t+1}(k)\}_{t \in \mathbb{N}}$ under the optimal policy function $\pi : X \to A$, and the sequence of consumption levels $\{c_t(k)\}_{t \in \mathbb{N}}$, converge monotonically to $k^*$ and $c^*$ respectively. Furthermore, $k^*$ and $c^*$ are unique.
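
A quick simulation of the assumed closed-form policy illustrates the monotone convergence. In this special case the steady state solves $\beta f'(k^*) = 1$, i.e. $k^* = (\alpha\beta)^{1/(1-\alpha)}$; starting below $k^*$, the capital path rises monotonically toward it.

```python
alpha, beta = 0.3, 0.95
pi = lambda k: alpha * beta * k**alpha           # assumed closed-form optimal policy
k_star = (alpha * beta)**(1.0 / (1.0 - alpha))   # steady state: beta * f'(k*) = 1

k, path = 0.05, [0.05]                           # start below the steady state
for t in range(25):
    k = pi(k)                                    # k_{t+1} = pi(k_t)
    path.append(k)

print("k* =", round(k_star, 4))
print([round(x, 4) for x in path[:8]])           # monotonically increasing toward k*
```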
