DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION
Darleen Ward
UNIVERSITY OF MARYLAND: ECON 600

1. Alternative Methods of Discrete Time Intertemporal Optimization

We will start by solving a discrete time intertemporal optimization problem using two simple methods: the method of substitution and the Lagrange method. We will then study in more detail the Maximum Principle and the dynamic programming approach.

Suppose that an individual solves the following problem:

$$\max_{\{c_t, a_{t+1}\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^t u(c_t)$$

subject to

$$a_{t+1} = (1+r)(a_t + y_t - c_t), \quad t = 0, 1, \ldots, T, \qquad r > 0, \; a_0 \text{ given},$$

where $a_t$ denotes assets (or wealth) held at the beginning of period $t$, $y_t$ is labor income in period $t$, $c_t$ denotes consumption expenditure incurred in period $t$, $\beta$ is the discount factor, $r$ is the interest rate, and $u(\cdot)$ represents the period-by-period utility function, assumed to be twice continuously differentiable, strictly increasing and strictly concave. We also assume the Inada condition $\lim_{c_t \to 0} u'(c_t) = \infty$.

1.1. The method of substitution. Substitute the period-by-period budget constraint into the objective function to get:

$$\max_{\{a_{t+1}\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^t u\left(a_t + y_t - \frac{a_{t+1}}{1+r}\right)$$

Now we have an unconstrained optimization problem in the decision (or choice) variables $a_{t+1}$, $t = 0, 1, \ldots, T$. Since the objective function is strictly concave, the first-order conditions are necessary and sufficient to determine the unique global maximum. The FOC for $a_{t+1}$ is:

$$-\frac{\beta^t}{1+r}\, u'\left(a_t + y_t - \frac{a_{t+1}}{1+r}\right) + \beta^{t+1} u'\left(a_{t+1} + y_{t+1} - \frac{a_{t+2}}{1+r}\right) = 0,$$

or

$$u'(c_t) = \beta(1+r)\, u'(c_{t+1})$$

Date: Summer 2013. Notes compiled on August 23, 2013.
From the period-by-period budget constraint we can obtain the lifetime budget constraint:

$$\sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t c_t + \left(\frac{1}{1+r}\right)^{T+1} a_{T+1} = a_0 + \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t y_t$$

If the individual cannot die with unpaid debts, $a_{T+1} \geq 0$, then $a_{T+1} = 0$ will always hold, since it would not be optimal for the individual to die with unused resources.

Let us specify a functional form for $u(c_t)$:

$$u(c_t) = \frac{c_t^{1-\sigma}}{1-\sigma}, \quad \sigma > 0$$

Then the Euler equation becomes:

$$c_{t+1} = [\beta(1+r)]^{1/\sigma} c_t$$

Iterating on the Euler equation yields

$$c_1 = [\beta(1+r)]^{1/\sigma} c_0, \quad c_2 = [\beta(1+r)]^{1/\sigma} c_1 = [\beta(1+r)]^{2/\sigma} c_0, \quad \ldots, \quad c_t = [\beta(1+r)]^{t/\sigma} c_0$$

Replacing the expression for $c_t$ into the lifetime budget constraint yields (after some algebra):

$$c_0 = \frac{1 - \beta^{1/\sigma}(1+r)^{\frac{1-\sigma}{\sigma}}}{1 - \left[\beta^{1/\sigma}(1+r)^{\frac{1-\sigma}{\sigma}}\right]^{T+1}} \left[a_0 + \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t y_t\right]$$

In the limit as $T \to \infty$, and assuming $\beta^{1/\sigma}(1+r)^{\frac{1-\sigma}{\sigma}} < 1$,

$$c_0 = \left(1 - \beta^{1/\sigma}(1+r)^{\frac{1-\sigma}{\sigma}}\right)\left[a_0 + \sum_{t=0}^{\infty}\left(\frac{1}{1+r}\right)^t y_t\right]$$

If we need to solve an infinite horizon problem, it is usually simpler to solve the infinite horizon problem directly, instead of taking the limit of the finite horizon solution. Consider the infinite horizon problem:

$$\max_{\{c_t, a_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{s.t.} \quad a_{t+1} = (1+r)(a_t + y_t - c_t)$$

Using the same process of substitution as in the finite horizon case, we can derive the Euler equation

$$u'(c_t) = \beta(1+r)\, u'(c_{t+1})$$
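The finite-horizon closed form for $c_0$ derived above is easy to check numerically. The sketch below (the parameter values, the flat income path, and the function name `consumption_path` are illustrative, not part of the notes) computes $c_0$ from the formula, rolls consumption forward with the Euler equation and assets forward with the budget constraint, and confirms that terminal assets $a_{T+1}$ come out to zero:

```python
def consumption_path(beta, r, sigma, a0, y):
    """CRRA consumption path from the closed-form solution for c_0.

    y is the income sequence (y_0, ..., y_T). Returns (c, a), where a has
    T+2 entries so that a[-1] is terminal wealth a_{T+1}.
    """
    T = len(y) - 1
    R = 1.0 + r
    g = (beta * R) ** (1.0 / sigma)        # Euler equation: c_{t+1} = g * c_t
    x = g / R                              # growth factor of discounted consumption
    W0 = a0 + sum(y[t] / R ** t for t in range(T + 1))   # lifetime resources
    c0 = (1.0 - x) / (1.0 - x ** (T + 1)) * W0
    c = [c0 * g ** t for t in range(T + 1)]
    a = [a0]
    for t in range(T + 1):
        a.append(R * (a[t] + y[t] - c[t])) # period-by-period budget constraint
    return c, a

c, a = consumption_path(beta=0.96, r=0.05, sigma=2.0, a0=1.0, y=[1.0] * 10)
```

Since $c_0$ exhausts lifetime resources by construction, the simulated asset path should end at (numerically) zero, which is the terminal condition $a_{T+1} = 0$.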
As for the lifetime budget constraint, we need an additional condition, referred to as the transversality condition:

$$\lim_{T \to \infty}\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} = 0$$

What is the economic meaning of that condition? If $\lim_{T\to\infty}\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} < 0$, the present discounted value of the individual's lifetime expenditure would be greater than the present discounted value of her lifetime resources by an amount that does not converge to zero: her debt grows at a rate that is at least as great as the interest rate. To rule out that possibility, we impose the no-Ponzi-game condition:

$$\lim_{T \to \infty}\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} \geq 0$$

On the other hand, if $\lim_{T\to\infty}\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} > 0$, the present discounted value of the individual's lifetime expenditures is lower than the present discounted value of her lifetime resources by an amount that does not converge to zero. That means that the individual could increase her lifetime utility by consuming more. Therefore, under the no-Ponzi-game condition, the transversality condition must hold, and the lifetime budget constraint is:

$$\sum_{t=0}^{\infty}\left(\frac{1}{1+r}\right)^t c_t = a_0 + \sum_{t=0}^{\infty}\left(\frac{1}{1+r}\right)^t y_t$$

which, combined with the Euler equation, yields the same expression for $c_0$ as we derived using the limiting argument. (Show this.)

1.2. The Lagrange method. The finite horizon problem stated above, taking into account the terminal non-indebtedness condition, can be written as:

$$\max_{\{c_t\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^t u(c_t)$$

subject to

$$\sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t c_t + \left(\frac{1}{1+r}\right)^{T+1} a_{T+1} = a_0 + \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t y_t \quad \text{and} \quad \left(\frac{1}{1+r}\right)^{T+1} a_{T+1} \geq 0$$

We argued earlier that $\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} = 0$ using economic reasoning. That condition will be formally implied by the Kuhn-Tucker theorem when we use the Lagrange method. Note that the two constraints can be combined as

$$a_0 + \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t y_t - \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t c_t \geq 0$$
and the Lagrangian can be written as

$$\mathcal{L} = \sum_{t=0}^{T} \beta^t u(c_t) + \lambda\left[a_0 + \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t y_t - \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t c_t\right]$$

The first-order conditions can be written as

$$\beta^t u'(c_t) = \lambda\left(\frac{1}{1+r}\right)^t, \quad t = 0, 1, \ldots, T$$

$$\frac{\partial \mathcal{L}}{\partial \lambda} = a_0 + \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t y_t - \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t c_t \geq 0, \qquad \lambda \geq 0, \qquad \lambda \frac{\partial \mathcal{L}}{\partial \lambda} = 0$$

The first condition at times $t$ and $t+1$ can be used to derive the Euler equation

$$u'(c_t) = \beta(1+r)\, u'(c_{t+1})$$

Note that $\lambda = u'(c_0)$, and therefore the shadow value of the lifetime budget constraint is equal to the marginal utility of consumption at $t = 0$. Also, from the last FONC we can see that unless $\lambda = u'(c_0) = 0$, which cannot be true given economic scarcity, the lifetime budget constraint must hold with equality ($a_{T+1} = 0$). Then:

$$\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} = 0$$

Thus, by the Kuhn-Tucker theorem, we get the same solution for consumption as with the substitution method. We can also use the complementary slackness condition to derive the transversality condition for the infinite horizon problem. In the limit, as $T \to \infty$, we must have:

$$\lim_{T\to\infty}\left(\frac{1}{1+r}\right)^{T+1} a_{T+1} = 0$$

1.3. The Maximum Principle. Intertemporal optimization problems often have a special structure that allows us to characterize their solutions in a certain way. The most important aspect of that structure is the existence of stock-flow relationships among the variables. We will use $x_t$ to denote the stock variables (or state variables, in the mathematical terminology), measured at the beginning of period $t$, and $u_t$ to denote the flow variables (or control variables). Economic activity in one period determines the changes in stocks from that period to the next. Therefore, increments to stocks depend on both the stocks and the flows during this period:

$$x_{t+1} - x_t = f_t(x_t, u_t)$$

In addition to the constraints that govern the changes in the state variables, there may be constraints on all the variables pertaining to any one time period, such as:

$$g_t(x_t, u_t) \geq 0$$

Constraints requiring stocks and flows to be non-negative can also be included in the previous equation.
The objective function is often additively separable: it can be expressed as a sum of instantaneous return functions, where each return function depends on the variables pertaining to one time period only: $r_t(x_t, u_t)$. The optimization problem is given by:

$$\max_{\{u_t, x_{t+1}\}_{t=0}^{T}} \sum_{t=0}^{T} r_t(x_t, u_t) \quad \text{s.t.} \quad x_{t+1} - x_t = f_t(x_t, u_t), \quad g_t(x_t, u_t) \geq 0, \quad x_0 \text{ given}$$

We assume that $r$, $f$ and $g$ are at least $C^1$. Introduce the Lagrange multipliers $\lambda_t$ for the equation describing the transition of the state variable and $\mu_t$ for the additional constraint $g_t(x_t, u_t) \geq 0$. The Lagrangian is given by:

$$\mathcal{L} = \sum_{t=0}^{T}\Big\{ r_t(x_t, u_t) + \lambda_t\big[f_t(x_t, u_t) + x_t - x_{t+1}\big] + \mu_t g_t(x_t, u_t) \Big\}$$

Assuming interior solutions, the FONC are given by

(1.1) $\dfrac{\partial \mathcal{L}}{\partial u_t} = \dfrac{\partial r_t}{\partial u_t} + \lambda_t \dfrac{\partial f_t}{\partial u_t} + \mu_t \dfrac{\partial g_t}{\partial u_t} = 0$

(1.2) $\dfrac{\partial \mathcal{L}}{\partial x_t} = \dfrac{\partial r_t}{\partial x_t} + \lambda_t \dfrac{\partial f_t}{\partial x_t} + \lambda_t - \lambda_{t-1} + \mu_t \dfrac{\partial g_t}{\partial x_t} = 0$

(1.3) $\dfrac{\partial \mathcal{L}}{\partial \lambda_t} = f_t(x_t, u_t) + x_t - x_{t+1} = 0$

(1.4) $g_t(x_t, u_t) \geq 0, \quad \mu_t \geq 0, \quad \mu_t g_t(x_t, u_t) = 0$

These conditions can be written in a more compact way. Define the function $H_t$, called the Hamiltonian, by

$$H_t(x_t, u_t, \lambda_t) = r_t(x_t, u_t) + \lambda_t f_t(x_t, u_t)$$

Then the problem can be reformulated as maximizing $H_t$ subject to $g_t(x_t, u_t) \geq 0$. Denote by $H_t^*(x_t, \lambda_t)$ the resulting maximum value. The Lagrangian for this single-period optimization problem is given by

$$\tilde{H}_t = H_t(x_t, u_t, \lambda_t) + \mu_t g_t(x_t, u_t)$$

Now think of $u_t$ as the only choice variable: it has to be chosen to maximize $H_t(x_t, u_t, \lambda_t)$ subject to $g_t(x_t, u_t) \geq 0$. Thus we can rewrite (1.2) as

$$\lambda_{t-1} - \lambda_t = \frac{\partial \tilde{H}_t}{\partial x_t}$$

$\tilde{H}_t$ is the Lagrangian for the static optimization problem in which only the $u_t$ are choice variables, and the $x_t$ and $\lambda_t$ are parameters. Therefore, by the Envelope Theorem we have

$$\lambda_{t-1} - \lambda_t = \frac{\partial H_t^*}{\partial x_t}$$

The Envelope Theorem also yields $\partial H_t^*/\partial \lambda_t = \partial H_t/\partial \lambda_t = f_t$. Therefore, (1.3) can be written as

$$x_{t+1} - x_t = \frac{\partial H_t^*}{\partial \lambda_t}$$

The Maximum Principle: the first-order necessary conditions for the intertemporal optimization problem are:

(1) For each $t$, $u_t$ maximizes the Hamiltonian $H_t(x_t, u_t, \lambda_t)$ subject to the single-period constraints $g_t(x_t, u_t) \geq 0$;
(2) For each $t$, the change in $x_t$ over time is given by $x_{t+1} - x_t = \partial H_t^*/\partial \lambda_t$;
(3) For each $t$, the change in $\lambda_t$ over time is given by $\lambda_{t-1} - \lambda_t = \partial H_t^*/\partial x_t$.

The first-order conditions are sufficient for a unique optimum if the appropriate curvature conditions are imposed on $r$, $f$ and $g$. In particular, sufficiency for a unique optimum holds if a strictly concave function is maximized over a closed, strictly convex region.

We can interpret the maximization condition (1) by noting that we would not want to choose $u_t$ simply to maximize $r_t(x_t, u_t)$. We know that the choice of $u_t$ affects $x_{t+1}$ via the transition equation of $x_t$, and therefore affects the terms in the objective function at times $t+1$, $t+2$, etc. We can capture all these future effects by using the shadow price of the affected stock. The effect of $u_t$ on $x_{t+1}$ equals its effect on $f_t(x_t, u_t)$, and the resulting change in the objective function is found by multiplying this by the shadow price $\lambda_t$ of $x_{t+1}$. That is what we add to $r_t$ to get the Hamiltonian.

Note that the equation $\lambda_{t-1} - \lambda_t = \partial \tilde{H}_t/\partial x_t$ has a useful economic interpretation. We can write it as

$$\left[\frac{\partial r_t}{\partial x_t} + \mu_t \frac{\partial g_t}{\partial x_t}\right] + \lambda_t \frac{\partial f_t}{\partial x_t} + \lambda_t - \lambda_{t-1} = 0$$

A marginal unit of $x_t$ yields the marginal return $\frac{\partial r_t}{\partial x_t} + \mu_t \frac{\partial g_t}{\partial x_t}$ within period $t$, and an extra $\frac{\partial f_t}{\partial x_t}$ next period, valued at $\lambda_t$. We can think of these as a dividend. And the change in the price, $\lambda_t - \lambda_{t-1}$, is like a capital gain. When $x_t$ is optimal, the overall return (the sum of these components) should be zero. In other words, the shadow prices take values that do not allow for an excess return from holding the stock; this is an intertemporal no-arbitrage condition.
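For the consumption problem of Section 1.1, the Maximum Principle conditions reduce to the Euler equation plus the budget constraint, with the terminal condition $a_{T+1} = 0$. A standard way to solve such a boundary-value system numerically is shooting: guess $c_0$, simulate forward, and adjust $c_0$ until terminal assets are zero. Below is a minimal sketch for log utility ($\sigma = 1$); the parameters and the helper names `shoot` and `solve_c0` are my own, not from the notes:

```python
def shoot(c0, beta, r, a0, y):
    """Simulate the Euler equation and budget constraint forward from c0.

    Log utility, so the Euler equation is c_{t+1} = beta*(1+r)*c_t.
    Returns terminal assets a_{T+1}.
    """
    R = 1.0 + r
    a, c = a0, c0
    for yt in y:
        a = R * (a + yt - c)   # budget constraint a_{t+1} = (1+r)(a_t + y_t - c_t)
        c = beta * R * c       # Euler equation u'(c_t) = beta(1+r) u'(c_{t+1})
    return a

def solve_c0(beta, r, a0, y):
    """Bisect on c0: terminal assets are strictly decreasing in c0."""
    lo, hi = 1e-9, a0 + 2.0 * sum(y) + 1.0   # bracket that straddles the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if shoot(mid, beta, r, a0, y) > 0.0:
            lo = mid           # still dying with unused assets: consume more
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, r, a0, y = 0.95, 0.05, 1.0, [1.0] * 20
c0 = solve_c0(beta, r, a0, y)
```

With $\sigma = 1$, the closed form of Section 1.1 gives $c_0 = \frac{1-\beta}{1-\beta^{T+1}}\big[a_0 + \sum_t (1+r)^{-t} y_t\big]$, so the shooting solution can be checked against it.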
1.4. Dynamic Programming. Consider the optimization problem:

$$\max_{\{u_t\}_{t=0}^{T}} \sum_{t=0}^{T} r_t(x_t, u_t) \quad \text{s.t.} \quad x_{t+1} - x_t = f_t(x_t, u_t), \quad g_t(x_t, u_t) \geq 0, \quad x_0 \text{ given}$$

Define the value function as the resulting maximum value of the objective function, expressed as a function of the initial state variables, say $V_0(x_0)$. Since the objective function is separable, instead of starting at time $0$ we can start at another particular time, say $t = \tau$. Given that the state variables at a point in time determine all other variables both currently and at all future dates, they determine the maximum value that the objective function can attain. Let $V_\tau(x_\tau)$ denote the value function for the problem of maximizing $\sum_{t=\tau}^{T} r_t(x_t, u_t)$ subject to $x_{t+1} - x_t = f_t(x_t, u_t)$ and $g_t(x_t, u_t) \geq 0$, $t = \tau, \tau+1, \ldots, T$.

The idea underlying the dynamic programming approach is Bellman's principle of optimality. Pick any $t$ and consider the decision about the control variables at that time. This choice will lead to next period's state $x_{t+1}$ according to the transition equation for $x_t$. Thereafter it remains to solve the subproblem starting at $t+1$ and achieve the maximum value $V_{t+1}(x_{t+1})$. Then, the total value starting at $t$ can be broken down into two terms: $r_t(x_t, u_t)$, which accrues at once, and $V_{t+1}(x_{t+1})$, which accrues thereafter. The choice of $u_t$ should maximize the sum of these two terms:

$$V_t(x_t) = \max_{u_t}\; r_t(x_t, u_t) + V_{t+1}(x_{t+1}) \quad \text{s.t.} \quad x_{t+1} - x_t = f_t(x_t, u_t), \quad g_t(x_t, u_t) \geq 0, \quad x_t \text{ given}$$

The idea is that whatever the decision at $t$, the subsequent decisions should be optimal for the subproblem starting at $t+1$. This recursive relationship is known as Bellman's equation. Using this equation, we can start in the final period $T$ and proceed recursively to earlier time periods.
In period $T$, we have

$$V_T(x_T) = \max_{u_T}\; r_T(x_T, u_T) \quad \text{s.t.} \quad x_{T+1} - x_T = f_T(x_T, u_T), \quad g_T(x_T, u_T) \geq 0, \quad x_T \text{ given}$$

This is a static optimization problem, and it yields the policy function $u_T = h_T(x_T)$ which, together with the transition equation $x_{T+1} - x_T = f_T(x_T, u_T)$, gives the value function $V_T(x_T)$. The value function can then be used in the right-hand side of the static optimization problem for period $T-1$:

$$V_{T-1}(x_{T-1}) = \max_{u_{T-1}}\; r_{T-1}(x_{T-1}, u_{T-1}) + V_T(x_T) \quad \text{s.t.} \quad x_T - x_{T-1} = f_{T-1}(x_{T-1}, u_{T-1}), \quad g_{T-1}(x_{T-1}, u_{T-1}) \geq 0, \quad x_{T-1} \text{ given}$$

This is another static optimization problem, and it yields the value function $V_{T-1}(x_{T-1})$. We can proceed recursively backwards all the way to period $0$.

Note that the Bellman equation shows that dynamic programming problems are two-period problems, where the periods are "today" and "the future". But this works only when the instantaneous return function and the constraint function have the property that controls at $t$ influence the states $x_{t+s}$ and returns $r_{t+s}(x_{t+s}, u_{t+s})$, $s > 0$, only through their effect on $x_{t+1}$. If Bellman's principle of optimality is applicable, it leads to the same decision rules as the Maximum Principle. Consider the intertemporal optimization problem we have been discussing. Substituting for $x_{t+1}$ from the transition equation into the Bellman equation, we have

$$V_t(x_t) = \max_{u_t}\; r_t(x_t, u_t) + V_{t+1}(f_t(x_t, u_t) + x_t) \quad \text{s.t.} \quad g_t(x_t, u_t) \geq 0, \quad x_t \text{ given}$$

Letting $\mu_t$ denote the Lagrange multiplier on the constraint, the first-order condition is

$$\frac{\partial r_t}{\partial u_t} + \frac{\partial V_{t+1}}{\partial x_{t+1}}\frac{\partial f_t}{\partial u_t} + \mu_t \frac{\partial g_t}{\partial u_t} = 0$$

By the Envelope Theorem:

$$\frac{\partial V_t}{\partial x_t} = \frac{\partial r_t}{\partial x_t} + \frac{\partial V_{t+1}}{\partial x_{t+1}}\left(\frac{\partial f_t}{\partial x_t} + 1\right) + \mu_t \frac{\partial g_t}{\partial x_t}$$

Recall that with the Maximum Principle we had:

$$\frac{\partial r_t}{\partial x_t} + \lambda_t\left(\frac{\partial f_t}{\partial x_t} + 1\right) - \lambda_{t-1} + \mu_t \frac{\partial g_t}{\partial x_t} = 0$$

Comparing the two equations above reveals that

$$\frac{\partial V_{t+1}}{\partial x_{t+1}} = \lambda_t, \qquad \frac{\partial V_t}{\partial x_t} = \lambda_{t-1}$$

Using this in the first-order condition yields

$$\frac{\partial r_t}{\partial u_t} + \lambda_t \frac{\partial f_t}{\partial u_t} + \mu_t \frac{\partial g_t}{\partial u_t} = 0$$

which is exactly the FOC for $u_t$ to maximize the Hamiltonian defined in the earlier section.
Example. Consider a finite horizon consumption problem and let $u(c_t) = \ln(c_t)$ and $f(k_t) = k_t^\alpha$, $0 < \alpha < 1$. The discount factor is $0 < \beta < 1$. Assume that the horizon ends at $T$ and there is no value to capital carried over to $T+1$. Let $h$ be the decision rule for consumption and $g$ be the decision rule for capital. The problem can then be written as

$$\max_{\{c_t, k_{t+1}\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^t u(c_t) \quad \text{s.t.} \quad k_{t+1} = f(k_t) - c_t, \quad k_0 \text{ given}$$

Note that we can use either $c_t$ or $k_{t+1}$ as the control variable. Let us use the latter and substitute in using the resource constraint. The Bellman equation for period $T$ is:

$$V_T(k_T) = \max_{k_{T+1}} \ln(k_T^\alpha - k_{T+1})$$

The solution is $k_{T+1} = 0$ (so $g_T(k_T) = 0$). Then we have $c_T = k_T^\alpha = h_T(k_T)$ and $V_T(k_T) = \ln(k_T^\alpha)$. At $T-1$ we have

$$V_{T-1}(k_{T-1}) = \max_{k_T} \ln(k_{T-1}^\alpha - k_T) + \beta \ln(k_T^\alpha)$$

which yields the FOC

$$\frac{1}{k_{T-1}^\alpha - k_T} = \frac{\alpha\beta}{k_T}$$

which yields

$$k_T = \frac{\alpha\beta}{1+\alpha\beta}\, k_{T-1}^\alpha = g_{T-1}(k_{T-1})$$

Then,

$$c_{T-1} = h_{T-1}(k_{T-1}) = \frac{1}{1+\alpha\beta}\, k_{T-1}^\alpha$$

and

$$V_{T-1}(k_{T-1}) = \alpha\beta \ln(\alpha\beta) + (1+\alpha\beta)\ln\left(\frac{1}{1+\alpha\beta}\, k_{T-1}^\alpha\right)$$

Similarly we can keep going back to $t = 0$. It can be shown that the decision rules take the form:

$$c_t = h_t(k_t) = \frac{1}{\sum_{\tau=0}^{T-t}(\alpha\beta)^\tau}\, k_t^\alpha \quad \text{and} \quad k_{t+1} = g_t(k_t) = \frac{\sum_{\tau=0}^{T-t-1}(\alpha\beta)^\tau}{\sum_{\tau=0}^{T-t}(\alpha\beta)^\tau}\, \alpha\beta\, k_t^\alpha$$
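The decision rules above can be checked numerically: simulate the closed-form policies forward and verify that the Euler equation $1/c_t = \alpha\beta\, k_{t+1}^{\alpha-1}/c_{t+1}$ and the terminal condition $k_{T+1} = 0$ hold along the path. A small sketch (parameter values are illustrative, and `savings_rates` is a name I introduce, not the notes'):

```python
def savings_rates(alpha, beta, T):
    """Backward-induction savings rates s_t with k_{t+1} = s_t * k_t**alpha,
    taken from the closed-form decision rules above."""
    s = []
    for t in range(T + 1):
        num = sum((alpha * beta) ** tau for tau in range(T - t))      # tau = 0..T-t-1
        den = sum((alpha * beta) ** tau for tau in range(T - t + 1))  # tau = 0..T-t
        s.append(alpha * beta * num / den)
    return s

alpha, beta, T, k0 = 0.3, 0.95, 10, 0.5
s = savings_rates(alpha, beta, T)

k, cons = [k0], []
for t in range(T + 1):
    k.append(s[t] * k[t] ** alpha)          # k_{t+1} = g_t(k_t)
    cons.append(k[t] ** alpha - k[t + 1])   # resource constraint c_t = k_t^a - k_{t+1}

# Euler-equation residuals 1/c_t - alpha*beta*k_{t+1}**(alpha-1)/c_{t+1}, t < T
errs = [abs(1.0 / cons[t] - alpha * beta * k[t + 1] ** (alpha - 1) / cons[t + 1])
        for t in range(T)]
```

In the last period the empty sum in the numerator makes $s_T = 0$, so all remaining output is consumed and $k_{T+1} = 0$, exactly as in the backward induction above.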
1.4.1. Infinite Horizon. What happens as $T \to \infty$? In the finite-horizon case, the value function changes with time, since problems that start at different dates differ not only in the initial value of the state but also in the time remaining until the end of the planning period. If $T$ is very large, however, the starting point is less important. In fact, as $T \to \infty$, under certain conditions that we will explore later, the value function and the optimal decision rules are time invariant. If the primitives $r_t$, $f_t$, $g_t$ change radically with time, it will be hard to find a solution. A common setup to simplify this is to have

$$r_t(x_t, u_t) = \beta^t r(x_t, u_t), \; \beta \in (0,1); \qquad f_t(x_t, u_t) = f(x_t, u_t); \qquad g_t(x_t, u_t) = g(x_t, u_t)$$

There are two ways of solving the Bellman equation, which we will study next.

1.4.2. Guess and Verify. If one could figure out the form the value function would take, the method of undetermined coefficients would give a solution, in the following way:

(1) Guess a form for the value function. For instance, we could believe that the value function is of the form $V^G(x) = A + B h(x) + C z(x)$, for known functions $h(x)$ and $z(x)$ but unknown coefficients $A$, $B$ and $C$.
(2) Plug the conjecture $V^G$ into both sides of Bellman's equation:
$$V^G(x_t) = \max_{u_t}\; r(x_t, u_t) + \beta V^G(f(x_t, u_t) + x_t)$$
(3) Obtain the policy function from the FOC and plug it back into the Bellman equation.
(4) Find values of the coefficients that make the equation hold.

This method will work if we are close to the correct form of the value function, but trial and error on which functional forms to include will usually be fruitless.

Example. Consider the same example as above, but now let the problem be an infinite horizon problem.
Guess the form of the value function as

$$V(k_t) = A + B \ln(k_t)$$

Recall that the Bellman equation was

$$V(k_t) = \max_{k_{t+1}} \ln(k_t^\alpha - k_{t+1}) + \beta V(k_{t+1})$$

Plug the guess for the value function into the Bellman equation:

$$A + B \ln(k_t) = \max_{k_{t+1}} \ln(k_t^\alpha - k_{t+1}) + \beta\big(A + B \ln(k_{t+1})\big)$$

The FOC yields

$$k_{t+1} = \frac{\beta B}{1+\beta B}\, k_t^\alpha$$

Replace this into the RHS of the Bellman equation and rearrange to get:

$$A + B \ln(k_t) = \underbrace{\beta A + \ln\frac{1}{1+\beta B} + \beta B \ln\frac{\beta B}{1+\beta B}}_{A} + \underbrace{\alpha(1+\beta B)}_{B}\, \ln(k_t)$$

We can then solve for $B = \frac{\alpha}{1-\alpha\beta}$ and $A = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln(\alpha\beta)\right]$, and hence the value function. Then from the FOC we can get

$$k_{t+1} = \alpha\beta\, k_t^\alpha$$

and then from the resource constraint $c_t = (1-\alpha\beta) k_t^\alpha$. Note how these are the same as the finite horizon decision rules in the limit as $T \to \infty$.

1.4.3. Method of Successive Approximations. Denote by $V^j(x)$ the $j$-th guess of $V(x)$. We proceed as follows:

(1) Start with an arbitrary $V^0(x)$.
(2) Plug $V^0(x)$ into the right-hand side of the Bellman equation to generate a new function on the left-hand side, namely:
$$V^1(x_t) = \max_{u_t}\; r(x_t, u_t) + \beta V^0(f(x_t, u_t) + x_t)$$
(3) Obtain the policy function from the FOC and plug it back into the above equation.
(4) If $V^1(x) = V^0(x)$, the guess is correct and $V(x) = V^0(x)$.
(5) Otherwise, use $V^1(x)$ as the initial guess and repeat.

We could solve the previous example using this method also. To verify that, start with $V^0(x) = 0$ and find $V^1$ by inspection rather than brute-force calculus. Why? By substituting in for $x_{t+1}$ you lose the $x_{t+1} \geq 0$ constraint: either keep track of this as well, or have $x_{t+1}$ be your control variable. Then proceed normally. You should get the same sequence as above when recursing backwards in the finite horizon problem (note that this does not always happen!).

1.4.4. Unique solution. We need to explore the conditions under which the value function is time invariant and the sequence of functions $V^j(x)$ converges to a unique function, say $V(x)$, which reproduces itself if plugged into the right-hand side of the Bellman equation. For the optimization problem given at the beginning of the section, let $T \to \infty$; then we have

$$V_t(x_t) = \max_{u_t}\; \beta^t r(x_t, u_t) + V_{t+1}(x_{t+1}) \quad \text{s.t.} \quad x_{t+1} - x_t = f(x_t, u_t), \quad g(x_t, u_t) \geq 0$$

It seems natural to conjecture that $V_{t+1} = \beta^{t+1} V = \beta \cdot \beta^t V = \beta V_t$. Then the Bellman equation becomes

$$V(x_t) = \max_{u_t}\; r(x_t, u_t) + \beta V(x_{t+1})$$

Solving the problem defined above is equivalent to finding a fixed point of the mapping $K$, that is, a $V$ with $KV = V$, where

$$KV(x_t) = \max_{u_t}\; r(x_t, u_t) + \beta V(x_{t+1}) \quad \text{s.t.} \quad x_{t+1} - x_t = f(x_t, u_t), \quad g(x_t, u_t) \geq 0$$

Under what conditions can we ensure that such a fixed point exists? To answer this question we need to introduce some concepts.
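Before turning to those concepts, the method of successive approximations can be illustrated numerically for the log-utility growth example: iterate the Bellman operator on a grid of capital stocks starting from $V^0 = 0$, and compare the limit with the guess-and-verify solution $V(k) = A + B\ln k$. This is only a sketch (grid, parameters, and variable names such as `kgrid` and `flow` are my own choices), and the grid discretization introduces a small approximation error:

```python
import numpy as np

alpha, beta = 0.3, 0.95
kgrid = np.linspace(0.05, 0.5, 200)   # capital grid (the optimal policy maps it into itself)

# one-period returns ln(k**alpha - k'); infeasible choices get -inf
c = kgrid[:, None] ** alpha - kgrid[None, :]
flow = np.full_like(c, -np.inf)
flow[c > 0] = np.log(c[c > 0])

V = np.zeros_like(kgrid)              # start from V0 = 0
for _ in range(3000):                 # successive approximations V^{j+1} = K V^j
    V_new = (flow + beta * V[None, :]).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# grid policy, for comparison with k' = alpha*beta*k**alpha
pol = kgrid[(flow + beta * V[None, :]).argmax(axis=1)]

# guess-and-verify solution for comparison
B = alpha / (1.0 - alpha * beta)
A = (np.log(1.0 - alpha * beta)
     + alpha * beta / (1.0 - alpha * beta) * np.log(alpha * beta)) / (1.0 - beta)
V_exact = A + B * np.log(kgrid)
```

The remaining gap between `V` and `V_exact` is due to the discrete choice grid, and it shrinks as the grid is refined.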
Definition 1. Let $B(X)$ be the space of bounded functions from $X$ to $\mathbb{R}$, let $(B(X), d)$ be a metric space, and let $K : B(X) \to B(X)$. We say that $K$ is a contraction if there exists $\beta \in [0,1)$ such that

$$d[K(v), K(v')] \leq \beta\, d(v, v') \quad \forall\, v, v' \in B(X)$$

Fact 2. Let $(B(X), d)$ be a complete metric space, and $K : B(X) \to B(X)$ a contraction. Then:
- there is a unique point $v^* \in B(X)$ such that $K(v^*) = v^*$ (i.e. a fixed point), and
- the sequence $\{v_n\}$, defined by $v_1 = K(v_0)$, $v_2 = K(v_1)$, ..., $v_{n+1} = K(v_n)$, converges to $v^*$ for any starting point $v_0 \in B(X)$.

Thus, we want to show that the mapping $K$ is a contraction. The following theorem gives sufficient conditions for an operator on a useful function space to be a contraction.

Theorem 3 (Blackwell's sufficient conditions for a contraction). Let $K$ be an operator on a metric space $(B(X), d)$, where $B(X)$ is a space of functions. Suppose $K$ satisfies:
(1) Monotonicity: for any $v, v' \in B(X)$, $v(x) \leq v'(x)$ for all $x \in X$ implies $(Kv)(x) \leq (Kv')(x)$ for all $x \in X$.
(2) Discounting: there is a $\beta \in [0,1)$ such that $K(v + c) \leq K(v) + \beta c$ for all $v \in B(X)$ and any positive real $c$.
Then $K$ is a contraction with modulus $\beta$.

In light of Blackwell's sufficient conditions, then, we only need to show that the mapping $K$ defined above satisfies the monotonicity and discounting properties on a complete metric space. To this purpose, let us assume that the return function $r(x_t, u_t)$ is real valued, continuous, concave and bounded, and that the constraint set $\{(x_t, x_{t+1}, u_t) : x_{t+1} = f(x_t, u_t) + x_t,\; g(x_t, u_t) \geq 0\}$ is convex and compact. We work with the metric space of continuous bounded functions mapping $x \in X$ into the real line, with the sup metric $d_\infty$. This metric space can be shown to be complete, and it can also be shown that $K$ maps a continuous bounded function $V$ into a continuous bounded function $KV$. Furthermore, $K$ satisfies Blackwell's sufficient conditions.

Monotonicity: let $W(x) \leq V(x)$ for all $x \in X$, and define

$$u_t^W = \arg\max_{u_t}\; r(x_t, u_t) + \beta W(f(x_t, u_t) + x_t) \quad \text{s.t.} \quad g(x_t, u_t) \geq 0$$

Then

$$KW(x_t) = r(x_t, u_t^W) + \beta W\big(f(x_t, u_t^W) + x_t\big) \leq r(x_t, u_t^W) + \beta V\big(f(x_t, u_t^W) + x_t\big) \leq \max_{u_t}\big\{r(x_t, u_t) + \beta V(f(x_t, u_t) + x_t)\big\} = KV(x_t)$$

and $K$ is monotonic. Also, for any positive constant $c$,

$$K(V(x_t) + c) = \max_{u_t}\; r(x_t, u_t) + \beta[V(x_{t+1}) + c] = \beta c + \max_{u_t}\; r(x_t, u_t) + \beta V(x_{t+1}) = \beta c + K(V(x_t))$$

(subject to the constraints in each maximization), and $K$ discounts.
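The contraction property can be seen directly in computation: applying a (grid-approximated) Bellman operator $K$ to two different functions shrinks their sup-norm distance by at least the factor $\beta$ at every step, and adding a constant $c$ to $V$ shifts $KV$ by exactly $\beta c$ (Blackwell's discounting). A sketch with illustrative parameters and my own names (`K`, `kgrid`), using the log-utility growth example:

```python
import numpy as np

alpha, beta = 0.3, 0.95
kgrid = np.linspace(0.05, 0.5, 60)

c = kgrid[:, None] ** alpha - kgrid[None, :]
flow = np.full_like(c, -np.inf)          # infeasible choices excluded
flow[c > 0] = np.log(c[c > 0])

def K(V):
    """Grid approximation of the Bellman operator (KV)(k) = max_{k'} r + beta*V(k')."""
    return (flow + beta * V[None, :]).max(axis=1)

V0 = np.zeros(kgrid.size)
V1 = np.sin(kgrid)                       # a second, arbitrary starting function
dists = []
for _ in range(30):
    V0, V1 = K(V0), K(V1)
    dists.append(np.max(np.abs(V0 - V1)))   # sup-norm distance after each step

# contraction: each application shrinks the distance by at least the factor beta
ratios = [dists[i + 1] / dists[i] for i in range(len(dists) - 1) if dists[i] > 0]

shift = K(V0 + 1.0) - K(V0)              # discounting: equal to beta everywhere
```

The two iterate sequences collapse toward the same fixed point regardless of the starting functions, which is exactly the content of Fact 2.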
A simple macro dynamic model with endogenous saving rate: the representative agent model Virginia Sánchez-Marcos Macroeconomics, MIE-UNICAN Macroeconomics (MIE-UNICAN) A simple macro dynamic model with
More informationSession 4: Money. Jean Imbs. November 2010
Session 4: Jean November 2010 I So far, focused on real economy. Real quantities consumed, produced, invested. No money, no nominal in uences. I Now, introduce nominal dimension in the economy. First and
More informationFinal Exam - Math Camp August 27, 2014
Final Exam - Math Camp August 27, 2014 You will have three hours to complete this exam. Please write your solution to question one in blue book 1 and your solutions to the subsequent questions in blue
More informationOptimization Over Time
Optimization Over Time Joshua Wilde, revised by Isabel Tecu and Takeshi Suzuki August 26, 21 Up to this point, we have only considered constrained optimization problems at a single point in time. However,
More informationMath Camp Notes: Everything Else
Math Camp Notes: Everything Else Systems of Dierential Equations Consider the general two-equation system of dierential equations: Steady States ẋ = f(x, y ẏ = g(x, y Just as before, we can nd the steady
More informationSeptember Math Course: First Order Derivative
September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which
More informationNotes on Control Theory
Notes on Control Theory max t 1 f t, x t, u t dt # ẋ g t, x t, u t # t 0, t 1, x t 0 x 0 fixed, t 1 can be. x t 1 maybefreeorfixed The choice variable is a function u t which is piecewise continuous, that
More informationLecture 4: Optimization. Maximizing a function of a single variable
Lecture 4: Optimization Maximizing or Minimizing a Function of a Single Variable Maximizing or Minimizing a Function of Many Variables Constrained Optimization Maximizing a function of a single variable
More informationHOMEWORK #1 This homework assignment is due at 5PM on Friday, November 3 in Marnix Amand s mailbox.
Econ 50a (second half) Yale University Fall 2006 Prof. Tony Smith HOMEWORK # This homework assignment is due at 5PM on Friday, November 3 in Marnix Amand s mailbox.. Consider a growth model with capital
More informationChapter 3. Dynamic Programming
Chapter 3. Dynamic Programming This chapter introduces basic ideas and methods of dynamic programming. 1 It sets out the basic elements of a recursive optimization problem, describes the functional equation
More informationMathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7
Mathematical Foundations -- Constrained Optimization Constrained Optimization An intuitive approach First Order Conditions (FOC) 7 Constraint qualifications 9 Formal statement of the FOC for a maximum
More informationProblem Set # 2 Dynamic Part - Math Camp
Problem Set # 2 Dynamic Part - Math Camp Consumption with Labor Supply Consider the problem of a household that hao choose both consumption and labor supply. The household s problem is: V 0 max c t;l t
More informationThe Kuhn-Tucker and Envelope Theorems
The Kuhn-Tucker and Envelope Theorems Peter Ireland EC720.01 - Math for Economists Boston College, Department of Economics Fall 2010 The Kuhn-Tucker and envelope theorems can be used to characterize the
More informationDYNAMIC LECTURE 1 UNIVERSITY OF MARYLAND: ECON 600
DYNAMIC LECTURE 1 UNIVERSITY OF MARYLAND: ECON 6 1. differential Equations 1 1.1. Basic Concepts for Univariate Equations. We use differential equations to model situations which treat time as a continuous
More informationSTATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY
STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY UNIVERSITY OF MARYLAND: ECON 600 1. Some Eamples 1 A general problem that arises countless times in economics takes the form: (Verbally):
More informationproblem. max Both k (0) and h (0) are given at time 0. (a) Write down the Hamilton-Jacobi-Bellman (HJB) Equation in the dynamic programming
1. Endogenous Growth with Human Capital Consider the following endogenous growth model with both physical capital (k (t)) and human capital (h (t)) in continuous time. The representative household solves
More informationOptimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112
Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer
More informationIntroduction to Continuous-Time Dynamic Optimization: Optimal Control Theory
Econ 85/Chatterjee Introduction to Continuous-ime Dynamic Optimization: Optimal Control heory 1 States and Controls he concept of a state in mathematical modeling typically refers to a specification of
More informationPermanent Income Hypothesis Intro to the Ramsey Model
Consumption and Savings Permanent Income Hypothesis Intro to the Ramsey Model Lecture 10 Topics in Macroeconomics November 6, 2007 Lecture 10 1/18 Topics in Macroeconomics Consumption and Savings Outline
More informationAdvanced Macroeconomics
Advanced Macroeconomics The Ramsey Model Marcin Kolasa Warsaw School of Economics Marcin Kolasa (WSE) Ad. Macro - Ramsey model 1 / 30 Introduction Authors: Frank Ramsey (1928), David Cass (1965) and Tjalling
More informationLecture 2 The Centralized Economy: Basic features
Lecture 2 The Centralized Economy: Basic features Leopold von Thadden University of Mainz and ECB (on leave) Advanced Macroeconomics, Winter Term 2013 1 / 41 I Motivation This Lecture introduces the basic
More informationSuggested Solutions to Problem Set 2
Macroeconomic Theory, Fall 03 SEF, HKU Instructor: Dr. Yulei Luo October 03 Suggested Solutions to Problem Set. 0 points] Consider the following Ramsey-Cass-Koopmans model with fiscal policy. First, we
More informationOutline Today s Lecture
Outline Today s Lecture finish Euler Equations and Transversality Condition Principle of Optimality: Bellman s Equation Study of Bellman equation with bounded F contraction mapping and theorem of the maximum
More informationMacroeconomics I. University of Tokyo. Lecture 12. The Neo-Classical Growth Model: Prelude to LS Chapter 11.
Macroeconomics I University of Tokyo Lecture 12 The Neo-Classical Growth Model: Prelude to LS Chapter 11. Julen Esteban-Pretel National Graduate Institute for Policy Studies The Cass-Koopmans Model: Environment
More informationDynamic (Stochastic) General Equilibrium and Growth
Dynamic (Stochastic) General Equilibrium and Growth Martin Ellison Nuffi eld College Michaelmas Term 2018 Martin Ellison (Nuffi eld) D(S)GE and Growth Michaelmas Term 2018 1 / 43 Macroeconomics is Dynamic
More informationTopic 2. Consumption/Saving and Productivity shocks
14.452. Topic 2. Consumption/Saving and Productivity shocks Olivier Blanchard April 2006 Nr. 1 1. What starting point? Want to start with a model with at least two ingredients: Shocks, so uncertainty.
More informationAdvanced Macroeconomics
Advanced Macroeconomics The Ramsey Model Micha l Brzoza-Brzezina/Marcin Kolasa Warsaw School of Economics Micha l Brzoza-Brzezina/Marcin Kolasa (WSE) Ad. Macro - Ramsey model 1 / 47 Introduction Authors:
More informationSolutions for Homework #4
Econ 50a (second half) Prof: Tony Smith TA: Theodore Papageorgiou Fall 2004 Yale University Dept. of Economics Solutions for Homework #4 Question (a) A Recursive Competitive Equilibrium for the economy
More informationECON 582: The Neoclassical Growth Model (Chapter 8, Acemoglu)
ECON 582: The Neoclassical Growth Model (Chapter 8, Acemoglu) Instructor: Dmytro Hryshko 1 / 21 Consider the neoclassical economy without population growth and technological progress. The optimal growth
More informationProblem Set 2: Proposed solutions Econ Fall Cesar E. Tamayo Department of Economics, Rutgers University
Problem Set 2: Proposed solutions Econ 504 - Fall 202 Cesar E. Tamayo ctamayo@econ.rutgers.edu Department of Economics, Rutgers University Simple optimal growth (Problems &2) Suppose that we modify slightly
More informationMacroeconomics: A Dynamic General Equilibrium Approach
Macroeconomics: A Dynamic General Equilibrium Approach Mausumi Das Lecture Notes, DSE Jan 23-Feb 23, 2018 Das (Lecture Notes, DSE) DGE Approach Jan 23-Feb 23, 2018 1 / 135 Modern Macroeconomics: the Dynamic
More informationSeminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1
Seminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1 Session: 15 Aug 2015 (Mon), 10:00am 1:00pm I. Optimization with
More informationConstrained Optimization. Unconstrained Optimization (1)
Constrained Optimization Unconstrained Optimization (Review) Constrained Optimization Approach Equality constraints * Lagrangeans * Shadow prices Inequality constraints * Kuhn-Tucker conditions * Complementary
More informationMacroeconomic Theory II Homework 2 - Solution
Macroeconomic Theory II Homework 2 - Solution Professor Gianluca Violante, TA: Diego Daruich New York University Spring 204 Problem The household has preferences over the stochastic processes of a single
More informationLecture 4: Dynamic Programming
Lecture 4: Dynamic Programming Fatih Guvenen January 10, 2016 Fatih Guvenen Lecture 4: Dynamic Programming January 10, 2016 1 / 30 Goal Solve V (k, z) =max c,k 0 u(c)+ E(V (k 0, z 0 ) z) c + k 0 =(1 +
More informationLecture notes for Macroeconomics I, 2004
Lecture notes for Macroeconomics I, 2004 Per Krusell Please do NOT distribute without permission Comments and suggestions are welcome! 1 2 Chapter 1 Introduction These lecture notes cover a one-semester
More informationTraditional vs. Correct Transversality Conditions, plus answers to problem set due 1/28/99
Econ. 5b Spring 999 George Hall and Chris Sims Traditional vs. Correct Transversality Conditions, plus answers to problem set due /28/99. Where the conventional TVC s come from In a fairly wide class of
More informationECOM 009 Macroeconomics B. Lecture 2
ECOM 009 Macroeconomics B Lecture 2 Giulio Fella c Giulio Fella, 2014 ECOM 009 Macroeconomics B - Lecture 2 40/197 Aim of consumption theory Consumption theory aims at explaining consumption/saving decisions
More information1 Recursive Competitive Equilibrium
Feb 5th, 2007 Let s write the SPP problem in sequence representation: max {c t,k t+1 } t=0 β t u(f(k t ) k t+1 ) t=0 k 0 given Because of the INADA conditions we know that the solution is interior. So
More informationLecture 6: Competitive Equilibrium in the Growth Model (II)
Lecture 6: Competitive Equilibrium in the Growth Model (II) ECO 503: Macroeconomic Theory I Benjamin Moll Princeton University Fall 204 /6 Plan of Lecture Sequence of markets CE 2 The growth model and
More informationEconomic Growth: Lecture 13, Stochastic Growth
14.452 Economic Growth: Lecture 13, Stochastic Growth Daron Acemoglu MIT December 10, 2013. Daron Acemoglu (MIT) Economic Growth Lecture 13 December 10, 2013. 1 / 52 Stochastic Growth Models Stochastic
More informationBasic Techniques. Ping Wang Department of Economics Washington University in St. Louis. January 2018
Basic Techniques Ping Wang Department of Economics Washington University in St. Louis January 2018 1 A. Overview A formal theory of growth/development requires the following tools: simple algebra simple
More informationBEEM103 UNIVERSITY OF EXETER. BUSINESS School. January 2009 Mock Exam, Part A. OPTIMIZATION TECHNIQUES FOR ECONOMISTS solutions
BEEM03 UNIVERSITY OF EXETER BUSINESS School January 009 Mock Exam, Part A OPTIMIZATION TECHNIQUES FOR ECONOMISTS solutions Duration : TWO HOURS The paper has 3 parts. Your marks on the rst part will be
More informationMacroeconomic Theory and Analysis Suggested Solution for Midterm 1
Macroeconomic Theory and Analysis Suggested Solution for Midterm February 25, 2007 Problem : Pareto Optimality The planner solves the following problem: u(c ) + u(c 2 ) + v(l ) + v(l 2 ) () {c,c 2,l,l
More informationLecture 1: Dynamic Programming
Lecture 1: Dynamic Programming Fatih Guvenen November 2, 2016 Fatih Guvenen Lecture 1: Dynamic Programming November 2, 2016 1 / 32 Goal Solve V (k, z) =max c,k 0 u(c)+ E(V (k 0, z 0 ) z) c + k 0 =(1 +
More informationThe Ramsey Model. Alessandra Pelloni. October TEI Lecture. Alessandra Pelloni (TEI Lecture) Economic Growth October / 61
The Ramsey Model Alessandra Pelloni TEI Lecture October 2015 Alessandra Pelloni (TEI Lecture) Economic Growth October 2015 1 / 61 Introduction Introduction Introduction Ramsey-Cass-Koopmans model: di ers
More informationOptimization. A first course on mathematics for economists
Optimization. A first course on mathematics for economists Xavier Martinez-Giralt Universitat Autònoma de Barcelona xavier.martinez.giralt@uab.eu II.3 Static optimization - Non-Linear programming OPT p.1/45
More informationThe Real Business Cycle Model
The Real Business Cycle Model Macroeconomics II 2 The real business cycle model. Introduction This model explains the comovements in the fluctuations of aggregate economic variables around their trend.
More informationNeoclassical Growth Model / Cake Eating Problem
Dynamic Optimization Institute for Advanced Studies Vienna, Austria by Gabriel S. Lee February 1-4, 2008 An Overview and Introduction to Dynamic Programming using the Neoclassical Growth Model and Cake
More informationSimple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) Jonathan Heathcote. updated, March The household s problem X
Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) subject to for all t Jonathan Heathcote updated, March 2006 1. The household s problem max E β t u (c t ) t=0 c t + a t+1
More informationCHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS. p. 1/73
CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS p. 1/73 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS Mixed Inequality Constraints: Inequality constraints involving control and possibly
More informationECON607 Fall 2010 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2
ECON607 Fall 200 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2 The due date for this assignment is Tuesday, October 2. ( Total points = 50). (Two-sector growth model) Consider the
More informationThe Kuhn-Tucker and Envelope Theorems
The Kuhn-Tucker and Envelope Theorems Peter Ireland ECON 77200 - Math for Economists Boston College, Department of Economics Fall 207 The Kuhn-Tucker and envelope theorems can be used to characterize the
More information[A + 1 ] + (1 ) v: : (b) Show: the derivative of T at v = v 0 < 0 is: = (v 0 ) (1 ) ; [A + 1 ]
Homework #2 Economics 4- Due Wednesday, October 5 Christiano. This question is designed to illustrate Blackwell's Theorem, Theorem 3.3 on page 54 of S-L. That theorem represents a set of conditions that
More informationModern Macroeconomics II
Modern Macroeconomics II Katsuya Takii OSIPP Katsuya Takii (Institute) Modern Macroeconomics II 1 / 461 Introduction Purpose: This lecture is aimed at providing students with standard methods in modern
More informationThe Kuhn-Tucker Problem
Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker
More informationLecture 2. (1) Aggregation (2) Permanent Income Hypothesis. Erick Sager. September 14, 2015
Lecture 2 (1) Aggregation (2) Permanent Income Hypothesis Erick Sager September 14, 2015 Econ 605: Adv. Topics in Macroeconomics Johns Hopkins University, Fall 2015 Erick Sager Lecture 2 (9/14/15) 1 /
More informationRecursive Methods. Introduction to Dynamic Optimization
Recursive Methods Nr. 1 Outline Today s Lecture finish off: theorem of the maximum Bellman equation with bounded and continuous F differentiability of value function application: neoclassical growth model
More information