Math Camp Notes: Everything Else


Systems of Differential Equations

Consider the general two-equation system of differential equations:

$$\dot{x} = f(x, y)$$
$$\dot{y} = g(x, y)$$

Steady States

Just as before, we can find the steady state of the system by setting both $\dot{x} = 0$ and $\dot{y} = 0$.

Example #1: Let $\dot{x} = e^{x-1} - 1$ and $\dot{y} = y e^{x}$. Setting both of these equations equal to 0 yields

$$\dot{x} = 0 \;\Rightarrow\; e^{x-1} = 1 \;\Rightarrow\; x = 1$$
$$\dot{y} = 0 \;\Rightarrow\; y e = 0 \;\Rightarrow\; y = 0$$

Example #2: Let $\dot{x} = x + 2y$ and $\dot{y} = x^2 + y$. Setting both of these equations equal to 0 yields

$$\dot{x} = 0 \;\Rightarrow\; x = -2y$$
$$\dot{y} = 0 \;\Rightarrow\; y = -x^2 \;\Rightarrow\; x = -2(-x^2) \;\Rightarrow\; x(1 - 2x) = 0 \;\Rightarrow\; x \in \{0, \tfrac{1}{2}\},\; y \in \{0, -\tfrac{1}{4}\}$$

Therefore, the two steady states are $(x, y) \in \{(0, 0),\, (\tfrac{1}{2}, -\tfrac{1}{4})\}$.

Example #3: Let $\dot{x} = e^{1-x} - 1$ and $\dot{y} = (2 - y)e^{x}$. Setting both of these equations equal to 0 yields

$$\dot{x} = 0 \;\Rightarrow\; e^{1-x} = 1 \;\Rightarrow\; x = 1$$
$$\dot{y} = 0 \;\Rightarrow\; (2 - y)e = 0 \;\Rightarrow\; y = 2$$

Stability

For a single differential equation $\dot{y} = f(y)$, we could test whether a steady state was stable by checking whether

$$\left.\frac{\partial \dot{y}}{\partial y}\right|_{y_{ss}} < 0.$$

If so, then the differential equation was stable at that steady state. The condition for systems of differential equations is more complicated, and deals with the eigenvalues of the Jacobian matrix of the system. In order for the system to be stable, each eigenvalue of the Jacobian matrix at a steady state $y_{ss}$ must be negative or have a negative real part. If an eigenvalue is positive or has a positive real part, then the steady state is unstable. If the Jacobian at $y_{ss}$ has some pure imaginary or zero eigenvalues and no positive eigenvalues, then we cannot determine the stability of the steady state through the Jacobian.

Example #1 Revisited: Let $\dot{x} = e^{x-1} - 1$ and $\dot{y} = y e^{x}$. We already calculated that the steady state of the system is $z = (x, y) = (1, 0)$. The Jacobian of the system is

$$J = \begin{pmatrix} e^{x-1} & 0 \\ y e^{x} & e^{x} \end{pmatrix}, \qquad J(z) = \begin{pmatrix} 1 & 0 \\ 0 & e \end{pmatrix},$$

which implies that the eigenvalues of the system are $1$ and $e$. Since both of these are positive, we have an unstable system.
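A quick numerical confirmation of Example #1's eigenvalue computation (a sketch, not part of the original notes), using NumPy:

```python
# Evaluate Example #1's Jacobian at the steady state (1, 0) and inspect
# the eigenvalues.
import numpy as np

def jacobian(x, y):
    # Jacobian of (xdot, ydot) = (e^(x-1) - 1, y e^x)
    return np.array([[np.exp(x - 1), 0.0],
                     [y * np.exp(x), np.exp(x)]])

eigs = np.linalg.eigvals(jacobian(1.0, 0.0))   # evaluate at the steady state
print(eigs)                                    # [1.0, e]
print(np.all(eigs.real < 0))                   # False -> unstable steady state
```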

Example #2 Revisited: Let $\dot{x} = x + 2y$ and $\dot{y} = x^2 + y$. We already calculated that the steady states of the system are $z = (x, y) \in \{(0, 0),\, (\tfrac{1}{2}, -\tfrac{1}{4})\}$. The Jacobian of the system is

$$J = \begin{pmatrix} 1 & 2 \\ 2x & 1 \end{pmatrix}.$$

When $z = (0, 0)$, we have the Jacobian

$$J(z) = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix},$$

which implies that the repeated eigenvalue of the system is $1$. Since both eigenvalues are positive, we have an unstable system.

Example #3 Revisited: Let $\dot{x} = e^{1-x} - 1$ and $\dot{y} = (2 - y)e^{x}$. We already calculated that the steady state of the system is $z = (x, y) = (1, 2)$. The Jacobian of the system is

$$J = \begin{pmatrix} -e^{1-x} & 0 \\ (2 - y)e^{x} & -e^{x} \end{pmatrix}, \qquad J(z) = \begin{pmatrix} -1 & 0 \\ 0 & -e \end{pmatrix},$$

which implies that the eigenvalues of the system are $-1$ and $-e$. Since both of these are negative, we have a stable system.

Solution for Linear Systems

Consider the linear system of differential equations

$$\dot{x} = a_{11}x + a_{12}y$$
$$\dot{y} = a_{21}x + a_{22}y,$$

which can be expressed as $\dot{\mathbf{x}} = A\mathbf{x}$, where $\mathbf{x} = (x, y)'$, $\dot{\mathbf{x}} = (\dot{x}, \dot{y})'$, and $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$. Also assume that $x_0$ and $y_0$ are given.

Consider the case where $A$ is a diagonal matrix, i.e., $a_{12} = a_{21} = 0$. Then the system is

$$\dot{x} = a_{11}x, \qquad \dot{y} = a_{22}y,$$

whose solution is

$$x = x_0 e^{a_{11}t}, \qquad y = y_0 e^{a_{22}t}.$$

That was easy! We can also easily see that the eigenvalues of the Jacobian matrix will be $a_{11}$ and $a_{22}$, and therefore the system will be stable if both $a_{11}$ and $a_{22}$ are less than zero.

For the case where $a_{12} \neq 0$ or $a_{21} \neq 0$, the solution is more complicated. However, if we can diagonalize $A$ as $A = P\Lambda P^{-1}$, where $\Lambda$ is a diagonal matrix, we can transform the system $\dot{\mathbf{x}} = A\mathbf{x}$ into $\dot{\mathbf{x}} = P\Lambda P^{-1}\mathbf{x}$, then multiply both sides by $P^{-1}$ to get

$$P^{-1}\dot{\mathbf{x}} = \Lambda P^{-1}\mathbf{x}.$$

If we define $\dot{\mathbf{w}} = P^{-1}\dot{\mathbf{x}}$ and $\mathbf{w} = P^{-1}\mathbf{x}$, then we have

$$\dot{\mathbf{w}} = \Lambda\mathbf{w}.$$

The solution for this decoupled system is easy, and we can then transform it back into a solution of $\dot{\mathbf{x}} = A\mathbf{x}$.
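Before the worked example, here is a small sketch (my own addition) verifying the diagonalization $A = P\Lambda P^{-1}$ numerically with NumPy, using the matrix from the example that follows:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

lam, P = np.linalg.eig(A)             # eigenvalues, eigenvector matrix
Lam = np.diag(lam)

print(np.allclose(A, P @ Lam @ np.linalg.inv(P)))   # True

# In the transformed coordinates w = P^{-1} x, the system decouples, so
# each coordinate solves independently: w_i(t) = w_i(0) * exp(lam_i * t).
```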

Example: Solve the following system of differential equations:

$$\dot{x} = x + y$$
$$\dot{y} = 4x + y$$

The system can be rewritten as

$$\dot{\mathbf{x}} = A\mathbf{x} \;=\; \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$

The characteristic equation for $A$ is

$$(1 - \lambda)^2 - 4 = \lambda^2 - 2\lambda - 3 = (\lambda - 3)(\lambda + 1) = 0,$$

and therefore the eigenvalues of the matrix $A$ are $\lambda \in \{3, -1\}$. Therefore, we know the system will be unstable.

The matrix $A - \lambda I$ associated with $\lambda = 3$ is

$$\begin{pmatrix} -2 & 1 \\ 4 & -2 \end{pmatrix},$$

which implies that $y = 2x$, and therefore $(1, 2)$ is the corresponding eigenvector. For $\lambda = -1$, we have that

$$A - \lambda I = \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix},$$

which implies that $y = -2x$, or that $(1, -2)$ is the corresponding eigenvector. We can now form the matrices

$$P = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}, \qquad P^{-1} = -\frac{1}{4}\begin{pmatrix} -2 & -1 \\ -2 & 1 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{4} \\ \tfrac{1}{2} & -\tfrac{1}{4} \end{pmatrix}.$$

Let $\mathbf{w} = P^{-1}\mathbf{x}$:

$$\begin{pmatrix} w_x \\ w_y \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{4} \\ \tfrac{1}{2} & -\tfrac{1}{4} \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix},$$

which implies

$$w_x(0) = \tfrac{1}{2}x(0) + \tfrac{1}{4}y(0), \qquad w_y(0) = \tfrac{1}{2}x(0) - \tfrac{1}{4}y(0).$$

This gives us our initial conditions. We now have the system

$$\dot{\mathbf{w}} = \Lambda\mathbf{w},$$

whose solution is

$$w_x = w_x(0)e^{3t}, \qquad w_y = w_y(0)e^{-t}.$$

Now we plug in the initial conditions to get

$$w_x = \left\{\tfrac{1}{2}x(0) + \tfrac{1}{4}y(0)\right\}e^{3t}, \qquad w_y = \left\{\tfrac{1}{2}x(0) - \tfrac{1}{4}y(0)\right\}e^{-t}.$$

Finally, since $\mathbf{w} = P^{-1}\mathbf{x}$, we have $\mathbf{x} = P\mathbf{w}$:

$$x = w_x + w_y = \left\{\tfrac{1}{2}x(0) + \tfrac{1}{4}y(0)\right\}e^{3t} + \left\{\tfrac{1}{2}x(0) - \tfrac{1}{4}y(0)\right\}e^{-t}$$
$$y = 2w_x - 2w_y = \left\{x(0) + \tfrac{1}{2}y(0)\right\}e^{3t} - \left\{x(0) - \tfrac{1}{2}y(0)\right\}e^{-t}$$
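As a sanity check (my own sketch, not from the notes), the closed-form solution can be compared against the matrix-exponential solution $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$, here with SciPy and illustrative initial conditions of my choosing:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
x0, y0 = 1.0, 0.5                        # hypothetical initial conditions

wx0 = 0.5 * x0 + 0.25 * y0               # w_x(0)
wy0 = 0.5 * x0 - 0.25 * y0               # w_y(0)

for t in [0.0, 0.5, 1.0]:
    x_cf = wx0 * np.exp(3 * t) + wy0 * np.exp(-t)          # closed form
    y_cf = 2 * wx0 * np.exp(3 * t) - 2 * wy0 * np.exp(-t)
    x_num = expm(A * t) @ np.array([x0, y0])               # benchmark
    print(t, np.allclose([x_cf, y_cf], x_num))             # True at each t
```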

Non-Linear Systems

Finding general solutions of non-linear systems can be extremely difficult if not impossible. However, we can find a first-order estimate of the solution about a steady state using a Taylor expansion. For example, take our general system of equations:

$$\dot{x} = f(x, y)$$
$$\dot{y} = g(x, y)$$

Setting these equations equal to zero, we can solve for some steady state $(x^*, y^*)$. The Taylor expansion gives us an approximation of a function $h(x, y)$ around some point $(x^*, y^*)$:

$$h(x, y) \approx h(x^*, y^*) + h_x(x^*, y^*)(x - x^*) + h_y(x^*, y^*)(y - y^*).$$

This is a linear approximation of a nonlinear function about $(x^*, y^*)$. We can use the Taylor approximation to rewrite the system of differential equations about $(x^*, y^*)$:

$$\dot{x} \approx f(x^*, y^*) + f_x(x^*, y^*)(x - x^*) + f_y(x^*, y^*)(y - y^*)$$
$$\dot{y} \approx g(x^*, y^*) + g_x(x^*, y^*)(x - x^*) + g_y(x^*, y^*)(y - y^*).$$

This can be rewritten in the form $\dot{\mathbf{x}} = A\mathbf{x} + \mathbf{c}$, where

$$A = \begin{pmatrix} f_x(x^*, y^*) & f_y(x^*, y^*) \\ g_x(x^*, y^*) & g_y(x^*, y^*) \end{pmatrix}, \quad \dot{\mathbf{x}} = \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix}, \quad \mathbf{x} = \begin{pmatrix} x \\ y \end{pmatrix}, \quad \mathbf{c} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix},$$

and $c_1$, $c_2$ are constants. In a way, however, the constants don't matter, because they just shift our phase diagram around; they don't actually affect the stability or motion of the system. We can then proceed to find an approximation of the system according to the diagonalization presented in the last section.
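As a quick illustration (my own sketch), the code below applies this linearization to Example #2 ($\dot{x} = x + 2y$, $\dot{y} = x^2 + y$) at its second steady state $(\tfrac{1}{2}, -\tfrac{1}{4})$, which the notes do not analyze, and reads stability off the Jacobian's eigenvalues:

```python
import numpy as np

def jacobian(x, y):
    # J = [[d(xdot)/dx, d(xdot)/dy], [d(ydot)/dx, d(ydot)/dy]]
    return np.array([[1.0, 2.0],
                     [2.0 * x, 1.0]])

eigs = np.linalg.eigvals(jacobian(0.5, -0.25))
print(eigs)    # approximately 1 + sqrt(2) and 1 - sqrt(2)
# One positive and one negative eigenvalue: a saddle, hence unstable
# except along the stable arm.
```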

Optimization in Discrete Time

Up to this point, we have only considered constrained optimization problems at a single point in time. However, many constrained optimization problems in economics deal not only with the present, but with future time periods as well. We may wish to solve the optimization problem not only for today, but for all future periods as well.

The strategy for solving a discrete time optimization problem is as follows:

1. Write the proper Lagrangean function.
2. Find the first order conditions.
3. Solve the resulting difference equations of the control variables.
4. Use the constraints to find the initial conditions of the control variables.
5. Plug the initial conditions into the difference equations to solve for the path of the control variable over time.

A control variable is a variable you can control; for example, you may not be able to control how much capital is in the economy initially, but you can control how much you consume. Things we cannot control completely, but that are nevertheless affected by what we choose as our control, are called state variables. For example, the amount of capital you have tomorrow depends on the amount you consume today.

Example: Solve the following optimization problem in discrete time:

$$\max_{\{c_t\}} U(\{c_t\}) \quad \text{subject to} \quad A \geq \sum_{t=0}^{\infty} \frac{c_t}{(1+r)^t}.$$

In other words, we want to choose a level of consumption such that our lifetime utility will be maximized, given a fixed level of assets $A$. This is similar to the problem a retired person would face if she had no income. In order for the agent to satisfy the budget constraint, the present value of her lifetime consumption must be less than or equal to the level of her total assets. Also assume that utility is separable and is discounted by a factor $\beta \in (0, 1)$ each period:

$$U(\{c_t\}) = \sum_{t=0}^{\infty} \beta^t u(c_t),$$

where $u(c_t)$ is the within-period utility of consumption. We assume it is concave. The Lagrangean for this problem can be written as

$$\mathcal{L} = \sum_{t=0}^{\infty} \beta^t u(c_t) + \lambda\left(A - \sum_{t=0}^{\infty} \frac{c_t}{(1+r)^t}\right).$$

We know that the constraint will always be binding so long as our utility function exhibits nice properties such as local non-satiation. Therefore we can solve the Lagrangean as if the inequality constraint were an equality constraint.

Unfortunately, this Lagrangean will have an infinite number of first order conditions, since $t$ goes to infinity (i.e., there are an infinite number of $c_t$ inputs to the Lagrangean function). This is where a difference equation comes in handy. If we can come up with some sort of condition that must hold between consumption in any two periods, then we can write a difference equation, iterate it, and solve it for all $t$ using an initial condition. Find the first order conditions of the Lagrangean with respect to an arbitrary $c_t$ and $c_{t+1}$:

$$\frac{\partial\mathcal{L}}{\partial c_t} = \beta^t u'(c_t) - \frac{\lambda}{(1+r)^t} = 0$$
$$\frac{\partial\mathcal{L}}{\partial c_{t+1}} = \beta^{t+1} u'(c_{t+1}) - \frac{\lambda}{(1+r)^{t+1}} = 0$$

Dividing the top equation by the bottom equation, we have

$$\frac{u'(c_t)}{u'(c_{t+1})} = (1+r)\beta.$$

Say our within-period utility function is $u(c_t) = \ln(c_t)$, so that $u'(c_t) = 1/c_t$. Our first order condition becomes

$$c_{t+1} = (1+r)\beta\, c_t.$$

The solution to this linear difference equation is

$$c_t = c_0\left[(1+r)\beta\right]^t.$$

Now we have solved our maximization problem for all time periods. Well, not quite: we don't know what $c_0$ is. In order to find it, we need to plug this condition into our budget constraint:

$$A = \sum_{t=0}^{\infty} \frac{c_t}{(1+r)^t} = \sum_{t=0}^{\infty} \frac{c_0\left[(1+r)\beta\right]^t}{(1+r)^t} = c_0 \sum_{t=0}^{\infty} \beta^t = \frac{c_0}{1-\beta} \quad \Rightarrow \quad c_0 = (1-\beta)A.$$

Therefore, the solution to the agent's optimization problem is

$$c_t = (1-\beta)\left[(1+r)\beta\right]^t A.$$
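This solution is easy to verify numerically: at the optimum the lifetime budget constraint should hold with equality. A small sketch with hypothetical parameter values:

```python
import numpy as np

beta, r, A = 0.95, 0.04, 100.0
T = 2000                                    # truncation of the infinite sum

t = np.arange(T)
c = (1 - beta) * ((1 + r) * beta) ** t * A  # optimal consumption path
pv = np.sum(c / (1 + r) ** t)               # present value of consumption

print(pv)                                   # ~100.0 = A: the constraint binds
```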

Optimization in Continuous Time

To solve optimization problems in continuous time, we abstract from the Lagrangean and use a Hamiltonian. The proof behind why the Hamiltonian works will not be presented in this class, but will be presented in your first semester math class instead.

Suppose we have a value function $f(x, y)$. We can think of this as being like an instantaneous utility function. We want to control the flow of the value of this function over time so that the lifetime value of the function will be maximized. In other words, we want to maximize

$$\int_0^{\infty} f(x, y)\,dt$$

subject to constraints. Since time is continuous, the constraint cannot be a static function. It must tell me the change in my state variable at each point in time, and therefore it must be a differential equation. For example, if my objective function is instantaneous utility and my constraint is my assets, then the optimization problem would look like

$$\max_{c(t)} \int_0^{\infty} e^{-\beta t} U(c(t))\,dt \quad \text{subject to} \quad \dot{a} = ra - c(t).$$

Notice that the maximizer of this problem is itself a function. It gives us the time path of consumption, not just a particular level of consumption.

There are two equivalent formulations of the Hamiltonian: the current value Hamiltonian and the present value Hamiltonian. The current value Hamiltonian for this problem would be expressed as

$$H = U(c) + \lambda\dot{a}.$$

Notice this is almost exactly like the Lagrangean function. However, the first order conditions are slightly different:

$$\frac{\partial H}{\partial c} = 0, \qquad \frac{\partial H}{\partial \lambda} = \dot{a}, \qquad \dot{\lambda} = \beta\lambda - \frac{\partial H}{\partial a}.$$

We would need to solve this system using our analysis from differential equations. The present value Hamiltonian would be formulated this way:

$$H = e^{-\beta t} U(c) + \lambda g,$$

where $g$ is the law of motion of the state variable. Notice the objective function is now discounted in the Hamiltonian, whereas before it was not. The first order conditions are

$$\frac{\partial H}{\partial c} = 0, \qquad \frac{\partial H}{\partial \lambda} = \dot{a}, \qquad -\frac{\partial H}{\partial a} = \dot{\lambda}.$$

The strategy for solving the Hamiltonian is as follows:

1. Write the Hamiltonian.
2. Find the first order conditions.
3. Obtain differential equations in $c$ and $a$.
4. Solve one of them.
5. Use the budget constraint to find the initial conditions.
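Before the worked example, here is a sketch (my own addition) generating the current value Hamiltonian's first order conditions symbolically with SymPy, assuming log utility $u(c) = \ln(c)$ and the constraint $\dot{a} = ra - c$, the same specification the example below works out by hand:

```python
import sympy as sp

c, a, lam, r, beta = sp.symbols('c a lambda r beta', positive=True)

H = sp.log(c) + lam * (r * a - c)      # current value Hamiltonian

foc_c = sp.Eq(sp.diff(H, c), 0)        # dH/dc = 0
adot = sp.diff(H, lam)                 # dH/dlambda = adot
lamdot = beta * lam - sp.diff(H, a)    # lambdadot = beta*lambda - dH/da

print(foc_c)    # Eq(-lambda + 1/c, 0)
print(adot)     # a*r - c
print(lamdot)   # beta*lambda - lambda*r
```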

Example:

$$\max \int_0^{\infty} e^{-\beta t} \ln(c_t)\,dt \quad \text{subject to} \quad \dot{a} = ra - c.$$

Assume that $a_0$ is known. We form the current value Hamiltonian

$$H = \ln(c_t) + \lambda(ra - c).$$

The first order conditions are

$$\frac{\partial H}{\partial c} = \frac{1}{c} - \lambda = 0$$
$$\frac{\partial H}{\partial \lambda} = ra - c = \dot{a}$$
$$\dot{\lambda} = \beta\lambda - \frac{\partial H}{\partial a} = \beta\lambda - \lambda r$$

From the first condition, we get $\ln(c_t) = -\ln(\lambda_t)$. Taking the time derivative of both sides, we get

$$\frac{\dot{c}}{c} = -\frac{\dot{\lambda}}{\lambda}.$$

From the third condition, we get

$$\frac{\dot{\lambda}}{\lambda} = \beta - r.$$

Combining these two conditions, we get

$$\frac{\dot{c}}{c} = r - \beta.$$

The function which solves this differential equation is

$$c_t = c_0 e^{(r-\beta)t}.$$

Now it remains to find the initial condition for $c$, which we can find using the budget constraint. We know that the present discounted value of our consumption must equal our initial assets, so

$$a_0 = \int_0^{\infty} e^{-rt} c(t)\,dt = \int_0^{\infty} e^{-rt} c_0 e^{(r-\beta)t}\,dt = c_0 \int_0^{\infty} e^{-\beta t}\,dt = \frac{c_0}{\beta}.$$

Therefore, the solution is

$$c_0 = \beta a_0 \quad \Rightarrow \quad c_t = \beta a_0 e^{(r-\beta)t}.$$

Implicit Function Theorem

Let $G(x_1, \ldots, x_n, y)$ be a $C^1$ function on a ball about $(x_1^*, \ldots, x_n^*, y^*)$. Suppose that $(x_1^*, \ldots, x_n^*, y^*)$ satisfies

$$G(x_1^*, \ldots, x_n^*, y^*) = c$$

and that

$$\frac{\partial G}{\partial y}(x_1^*, \ldots, x_n^*, y^*) \neq 0.$$

Then there is a $C^1$ function $y = y(x_1, \ldots, x_n)$ defined on an open ball $B$ about $(x_1^*, \ldots, x_n^*)$ so that the following conditions hold:

1. $G(x_1, \ldots, x_n, y(x_1, \ldots, x_n)) = c$ for all $(x_1, \ldots, x_n) \in B$,
2. $y^* = y(x_1^*, \ldots, x_n^*)$,
3. for each index $i$,

$$\frac{\partial y}{\partial x_i}(x_1^*, \ldots, x_n^*) = -\frac{G_{x_i}(x_1^*, \ldots, x_n^*, y^*)}{G_y(x_1^*, \ldots, x_n^*, y^*)}.$$
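The slope formula in condition 3 is easy to illustrate. In the sketch below (a hypothetical example, not from the notes), $G(x, y) = x^2 + y^2$ with $c = 2$, and the implicit slope at $(1, 1)$ is compared against differentiating the explicit branch $y(x) = \sqrt{2 - x^2}$:

```python
import sympy as sp

x, y = sp.symbols('x y')
G = x**2 + y**2

# dy/dx = -G_x / G_y, evaluated at (1, 1)
slope = -sp.diff(G, x) / sp.diff(G, y)
print(slope.subs({x: 1, y: 1}))            # -1

# Compare with differentiating the explicit branch y(x) = sqrt(2 - x^2)
y_explicit = sp.sqrt(2 - x**2)
print(sp.diff(y_explicit, x).subs(x, 1))   # -1, as the theorem predicts
```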

Envelope Theorem

Unconstrained Problems

Let $f(x; a)$ be a $C^1$ function of $x \in \mathbb{R}^N$ and the scalar $a$. For each choice of the parameter $a$, consider the unconstrained maximization problem

$$\max_x f(x; a).$$

Let $x^*(a)$ be a solution to this problem, and suppose that it is a $C^1$ function of $a$. Then

$$\frac{d}{da} f(x^*(a); a) = \frac{\partial}{\partial a} f(x^*(a); a).$$

Constrained Problems

Let $f, h_1, \ldots, h_k : \mathbb{R}^N \times \mathbb{R}^1 \to \mathbb{R}^1$ be $C^1$ functions. Let $x^*(a) = (x_1^*(a), \ldots, x_n^*(a))$ denote the solution of the problem of maximizing $f(x; a)$ on the constraint set $h_1(x, a) = 0, \ldots, h_k(x, a) = 0$ for any choice of the parameter $a$. Suppose that $x^*(a)$ and the multipliers $\lambda_i(a)$ are all $C^1$ functions of $a$, and that the resulting NDCQ holds. Then

$$\frac{d}{da} f(x^*(a); a) = \frac{\partial \mathcal{L}}{\partial a}(x^*(a), \lambda(a); a),$$

where $\mathcal{L}$ is the Lagrangean of this problem.
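A quick numerical check of the unconstrained version (my own sketch): for the hypothetical objective $f(x; a) = -x^2 + ax$, the maximizer is $x^*(a) = a/2$ and the value function is $V(a) = a^2/4$, so both sides of the theorem should equal $a/2$:

```python
import numpy as np

def f(x, a):
    return -x**2 + a * x

a, h = 2.0, 1e-6
x_star = a / 2                            # maximizer for this a

# d/da of the value function V(a) = f(x*(a); a), by finite differences
V = lambda s: f(s / 2, s)
dV_da = (V(a + h) - V(a - h)) / (2 * h)

# partial of f with respect to a, holding x fixed at x*(a)
df_da = (f(x_star, a + h) - f(x_star, a - h)) / (2 * h)

print(dV_da, df_da)                       # both ~1.0 = a/2, as the theorem says
```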

Properties of Functions

Quasiconcavity

A function $f$ defined on a convex set $U \subset \mathbb{R}^N$ is quasiconcave if for every real number $a$,

$$C_a^+ = \{x \in U : f(x) \geq a\}$$

is convex. In other words, the better-than sets of the function $f$ are convex.

Quasiconvexity

A function $f$ defined on a convex set $U \subset \mathbb{R}^N$ is quasiconvex if for every real number $a$,

$$C_a^- = \{x \in U : f(x) \leq a\}$$

is convex. In other words, the worse-than sets of the function $f$ are convex.

Homogeneous Functions

For any scalar $k$, a real-valued function $f(x_1, \ldots, x_n)$ is homogeneous of degree $k$ if

$$f(tx_1, \ldots, tx_n) = t^k f(x_1, \ldots, x_n) \quad \forall\, (x_1, \ldots, x_n) \text{ and all } t > 0.$$

One implication of these functions is that the tangent planes to the level sets of $f$ have constant slope along each ray from the origin. Also, level sets are radial expansions and contractions of each other.

Hemicontinuity

Upper Hemicontinuity

Let $\varphi : S \to T$ be a correspondence, with $S$ and $T$ closed subsets of $\mathbb{R}^N$ and $\mathbb{R}^K$ respectively. Let $x^v \in S$ for $v = 1, 2, 3, \ldots$, with $x^v \to x \in S$, and let $y^v \in \varphi(x^v)$ for all $v$, with $y^v \to y$. Then $\varphi$ is upper hemicontinuous at $x$ iff $y \in \varphi(x)$. A correspondence is upper hemicontinuous iff its graph is closed in $S \times T$.

Lower Hemicontinuity

Let $\varphi : S \to T$ be a correspondence, with $S$ and $T$ closed subsets of $\mathbb{R}^N$ and $\mathbb{R}^K$ respectively. Let $x^v \in S$ for $v = 1, 2, 3, \ldots$, with $x^v \to x$, and let $y \in \varphi(x)$. Then $\varphi$ is lower hemicontinuous at $x$ iff there exists a sequence $y^v \in \varphi(x^v)$ with $y^v \to y$. Intuitively, a correspondence is lower hemicontinuous if you can draw a function through every point on the graph of the correspondence such that, near that point, the graph of the function is contained in the graph of the correspondence.

Fixed Point Theorems

Brouwer's Fixed Point Theorem (Functions)

Let $S$ be a nonempty, compact, and convex set, and let $f : S \to S$ be continuous. Then there exists an $x \in S$ such that $x = f(x)$.

Kakutani's Fixed Point Theorem (Correspondences)

Let $S$ be a nonempty, compact, and convex set. Let $\varphi : S \to S$ be a correspondence that is upper hemicontinuous everywhere on $S$, and let $\varphi(x)$ be nonempty and convex for all $x$. Then there exists an $x \in S$ such that $x \in \varphi(x)$.

Hyperplane Theorems

Separating Hyperplane Theorem

Suppose $B \subset \mathbb{R}^N$ is convex and closed, and that $x \notin B$. Then there exist $p \in \mathbb{R}^N \setminus \{0\}$ and $c \in \mathbb{R}$ such that $p \cdot x > c$ and $p \cdot y < c$ for every $y \in B$.

Supporting Hyperplane Theorem

Suppose $B \subset \mathbb{R}^N$ is convex, and that $x \notin \operatorname{int} B$. Then there exists $p \in \mathbb{R}^N \setminus \{0\}$ such that $p \cdot x \geq p \cdot y$ for every $y \in B$.
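To close, a sketch (my own illustrative example) of the separating hyperplane theorem for the closed unit disk $B$ and the outside point $x = (2, 1)$. One standard construction takes $p = x - y^*$, where $y^*$ is the point of $B$ closest to $x$, and places the hyperplane halfway between them:

```python
import numpy as np

x = np.array([2.0, 1.0])
y_star = x / np.linalg.norm(x)        # closest point of the unit disk to x
p = x - y_star                        # candidate separating normal
c = 0.5 * (p @ x + p @ y_star)        # hyperplane level between x and y*

print(p @ x > c)                      # True: x lies strictly above the plane

# Every point of the disk should lie strictly below the plane.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 2))
norms = np.linalg.norm(pts, axis=1, keepdims=True)
pts = pts / np.maximum(norms, 1.0)    # retract sample points into the disk
print(np.all(pts @ p < c))            # True
```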