Math Camp Notes: Everything Else

Systems of Differential Equations

Consider the general two-equation system of differential equations:

    ẋ = f(x, y)
    ẏ = g(x, y)

Steady States

Just as before, we can find the steady states of the system by setting both ẋ = 0 and ẏ = 0.

Example #1: Let ẋ = e^{x-1} - 1 and ẏ = y e^x. Setting both these equations equal to zero yields

    ẋ = 0  ⟹  e^{x-1} = 1    ⟹  x = 1
    ẏ = 0  ⟹  y e^1 = 0     ⟹  y = 0

Example #2: Let ẋ = x + 2y and ẏ = x^2 + y. Setting both these equations equal to zero yields

    ẋ = 0  ⟹  x = -2y
    ẏ = 0  ⟹  y = -x^2

Substituting, x = -2(-x^2) = 2x^2, so x(1 - 2x) = 0, giving x ∈ {0, 1/2} and y = -x^2 ∈ {0, -1/4}. Therefore, the two steady states are (x, y) ∈ {(0, 0), (1/2, -1/4)}.

Example #3: Let ẋ = e^{1-x} - 1 and ẏ = (2 - y)e^x. Setting both these equations equal to zero yields

    ẋ = 0  ⟹  e^{1-x} = 1      ⟹  x = 1
    ẏ = 0  ⟹  (2 - y)e^1 = 0  ⟹  y = 2

Stability

For a single differential equation ẏ = f(y), we could test whether a steady state y_ss was stable by checking whether ∂ẏ/∂y < 0 at y_ss. If so, then the differential equation was stable. The condition for systems of differential equations is more complicated, and deals with the eigenvalues of the Jacobian matrix of the system. For the system to be stable, each eigenvalue of the Jacobian matrix at a steady state y_ss must be negative or have a negative real part. If an eigenvalue is positive or has a positive real part, then the steady state is unstable. If the Jacobian at y_ss has some purely imaginary or zero eigenvalues and no positive eigenvalues, then we cannot determine the stability of the steady state through the Jacobian.

Example #1 Revisited: Let ẋ = e^{x-1} - 1 and ẏ = y e^x. We already calculated that the steady state of the system is z = (x, y) = (1, 0). The Jacobian of the system is

    J = [ e^{x-1}    0   ]        so        J(z) = [ 1   0 ]
        [ y e^x     e^x  ]                         [ 0   e ]

which implies that the eigenvalues of the system are 1 and e. Since both of these are positive, we have an unstable system.
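As a quick numerical sanity check (a sketch of our own, not part of the original notes), we can build the Jacobian of Example #1 at its steady state and confirm that both eigenvalues are positive:

```python
import numpy as np

# Example 1: xdot = e^(x-1) - 1, ydot = y*e^x, with steady state (1, 0).
x, y = 1.0, 0.0
# Hand-computed Jacobian entries: [[e^(x-1), 0], [y*e^x, e^x]]
J = np.array([[np.exp(x - 1.0), 0.0],
              [y * np.exp(x),   np.exp(x)]])
eigvals = np.linalg.eigvals(J)
print(eigvals)                     # 1 and e (up to ordering), both positive
assert np.all(eigvals.real > 0)    # consistent with an unstable steady state
```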
Example #2 Revisited: Let ẋ = x + 2y and ẏ = x^2 + y. We already calculated that the steady states of the system are z = (x, y) ∈ {(0, 0), (1/2, -1/4)}. The Jacobian of the system is

    J = [ 1    2 ]
        [ 2x   1 ]

When z = (0, 0), we have the Jacobian

    J(z) = [ 1   2 ]
           [ 0   1 ]

which implies that the system has the repeated eigenvalue 1. Since this is positive, we have an unstable system.

Example #3 Revisited: Let ẋ = e^{1-x} - 1 and ẏ = (2 - y)e^x. We already calculated that the steady state of the system is z = (x, y) = (1, 2). The Jacobian of the system is

    J = [ -e^{1-x}       0    ]        so        J(z) = [ -1    0 ]
        [ (2 - y)e^x   -e^x   ]                         [  0   -e ]

which implies that the eigenvalues of the system are -1 and -e. Since both of these are negative, we have a stable system.

Solution for Linear Systems

Consider the linear system of differential equations:

    ẋ = a_11 x + a_12 y
    ẏ = a_21 x + a_22 y

which can be expressed as ẋ = Ax, where

    x = (x, y)′,   ẋ = (ẋ, ẏ)′,   A = [ a_11   a_12 ]
                                       [ a_21   a_22 ]

Also assume that x_0 and y_0 are given.

Consider the case where A is a diagonal matrix, i.e. that a_12 = a_21 = 0. Then the system is

    ẋ = a_11 x,   ẏ = a_22 y,

whose solution is

    x = x_0 e^{a_11 t},   y = y_0 e^{a_22 t}.

That was easy! We can also easily see that the eigenvalues of the Jacobian matrix will be a_11 and a_22, and therefore the system will be stable if both a_11 and a_22 are less than zero.

For the case where a_12 ≠ 0 or a_21 ≠ 0, the solution is more complicated. However, if we can diagonalize A as A = PΛP^{-1}, where Λ is a diagonal matrix, we can transform the system ẋ = Ax into ẋ = PΛP^{-1}x, then multiply both sides by P^{-1} to get

    P^{-1}ẋ = ΛP^{-1}x.

If we define ẇ = P^{-1}ẋ and w = P^{-1}x, then we have

    ẇ = Λw.

The solution for this diagonal system is easy, and we can then transform it back into a solution of ẋ = Ax.

Example:
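The eigenvalue test above is easy to mechanize. The sketch below (our own; `jacobian` and `classify` are helper names, not from the notes) approximates the Jacobian by central differences and classifies the steady states of Examples #2 and #3:

```python
import numpy as np

def jacobian(f, z, h=1e-6):
    """Central-difference Jacobian of f: R^2 -> R^2 at the point z."""
    z = np.asarray(z, dtype=float)
    J = np.zeros((2, 2))
    for j in range(2):
        dz = np.zeros(2)
        dz[j] = h
        J[:, j] = (np.asarray(f(z + dz)) - np.asarray(f(z - dz))) / (2 * h)
    return J

def classify(f, z):
    """Label a steady state by the real parts of the Jacobian's eigenvalues."""
    lam = np.linalg.eigvals(jacobian(f, z))
    if np.all(lam.real < 0):
        return "stable"
    if np.any(lam.real > 0):
        return "unstable"
    return "indeterminate"

ex2 = lambda z: (z[0] + 2 * z[1], z[0] ** 2 + z[1])                 # Example 2
ex3 = lambda z: (np.exp(1 - z[0]) - 1, (2 - z[1]) * np.exp(z[0]))   # Example 3

print(classify(ex2, (0.0, 0.0)))   # unstable (repeated eigenvalue 1)
print(classify(ex3, (1.0, 2.0)))   # stable (eigenvalues -1 and -e)
```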
Solve the following system of differential equations:

    ẋ = x - y
    ẏ = -4x + y

The system can be rewritten as

    ẋ = Ax:   ( ẋ )  =  [  1   -1 ] ( x )
              ( ẏ )     [ -4    1 ] ( y )

The characteristic equation for A is

    (1 - λ)^2 - 4 = λ^2 - 2λ - 3 = (λ - 3)(λ + 1) = 0,

and therefore the eigenvalues of the matrix A are λ ∈ {3, -1}. Since one eigenvalue is positive, we know the system will be unstable. The matrix A - λI associated with λ = 3 is

    [ -2   -1 ]
    [ -4   -2 ]

which implies that y = -2x, and therefore (1, -2)′ is the corresponding eigenvector. For λ = -1, we have that

    A - λI = [  2   -1 ]
             [ -4    2 ]

which implies that y = 2x, or that (1, 2)′ is the corresponding eigenvector. We can now form the matrices

    P = [  1   1 ]        P^{-1} = (1/4) [ 2   -1 ]
        [ -2   2 ]                       [ 2    1 ]

Let w = P^{-1}x. Then

    ( w_x(0) )  =  (1/4) [ 2   -1 ] ( x(0) )
    ( w_y(0) )           [ 2    1 ] ( y(0) )

which implies

    w_x(0) = (1/2)x(0) - (1/4)y(0),   w_y(0) = (1/2)x(0) + (1/4)y(0).

This gives us our initial conditions. We now have the system

    ẇ = Λw,

whose solution is

    w_x = w_x(0) e^{3t},   w_y = w_y(0) e^{-t}.

Now we plug in the initial conditions to get

    w_x = {(1/2)x(0) - (1/4)y(0)} e^{3t}
    w_y = {(1/2)x(0) + (1/4)y(0)} e^{-t}.

Finally, since w = P^{-1}x, we have x = Pw:

    x = w_x + w_y = {(1/2)x(0) - (1/4)y(0)} e^{3t} + {(1/2)x(0) + (1/4)y(0)} e^{-t}
    y = -2w_x + 2w_y = -{x(0) - (1/2)y(0)} e^{3t} + {x(0) + (1/2)y(0)} e^{-t}
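We can verify the diagonalization machinery numerically. This sketch (our own; x0 is an arbitrary initial condition) reuses A, P, and P^{-1} from the example and checks that x(t) = P diag(e^{λt}) P^{-1} x(0) actually satisfies ẋ = Ax:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-4.0, 1.0]])
P = np.array([[1.0, 1.0],
              [-2.0, 2.0]])       # columns: eigenvectors for lambda = 3, -1
Pinv = np.linalg.inv(P)           # equals (1/4) * [[2, -1], [2, 1]]
Lam = np.diag([3.0, -1.0])

# sanity check on the diagonalization: A = P Lam P^{-1}
assert np.allclose(A, P @ Lam @ Pinv)

x0 = np.array([1.0, 0.5])         # arbitrary (x(0), y(0))

def x_of_t(t):
    # x(t) = P diag(e^{lambda t}) P^{-1} x(0)
    return P @ np.diag(np.exp(np.diag(Lam) * t)) @ Pinv @ x0

# the proposed solution should satisfy xdot = A x; check by central difference
t, h = 0.3, 1e-6
xdot_numeric = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)
assert np.allclose(xdot_numeric, A @ x_of_t(t), atol=1e-4)
```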
Non-Linear Systems

Finding general solutions of non-linear systems can be extremely difficult, if not impossible. However, we can find a first-order approximation of the solution about a steady state using a Taylor expansion. For example, take our general system of equations:

    ẋ = f(x, y)
    ẏ = g(x, y)

Setting these equations equal to zero, we can solve for some steady state (x*, y*). The Taylor expansion gives us an approximation of a function h(x, y) around some point (x*, y*). The first-order Taylor expansion in this case is

    h(x, y) ≈ h(x*, y*) + h_x(x*, y*)(x - x*) + h_y(x*, y*)(y - y*).

This is a linear approximation of a nonlinear function about (x*, y*). We can use the Taylor approximation to rewrite the system of differential equations about (x*, y*):

    ẋ ≈ f(x*, y*) + f_x(x*, y*)(x - x*) + f_y(x*, y*)(y - y*)
    ẏ ≈ g(x*, y*) + g_x(x*, y*)(x - x*) + g_y(x*, y*)(y - y*).

This can be rewritten in the form ẋ = Ax + c, where

    A = [ f_x(x*, y*)   f_y(x*, y*) ],   ẋ = ( ẋ ),   x = ( x ),   c = ( c_1 ),
        [ g_x(x*, y*)   g_y(x*, y*) ]        ( ẏ )        ( y )        ( c_2 )

and c_1, c_2 are constants. In a way, however, the constants don't matter: they just shift our phase diagram around, and they don't actually affect the stability or motion of the system. We can then proceed to find an approximation of the system according to the diagonalization presented in the last section.

Optimization in Discrete Time

Up to this point, we have only considered constrained optimization problems at a single point in time. However, many constrained optimization problems in economics deal not only with the present, but with future time periods as well. We may wish to solve the optimization problem not only today, but for all future periods as well.

The strategy for solving a discrete time optimization problem is as follows:

1. Write the proper Lagrangean function.
2. Find the first order conditions.
3. Solve the resulting difference equations of the control variables.
4. Use the constraints to find the initial conditions of the control variables.
5.
Plug the constraints into the difference equations to solve for the path of the control variable over time.

A control variable is a variable you can control; for example, you may not be able to control how much capital is in the economy initially, but you can control how much you consume. Things we cannot control completely, but that are nevertheless affected by what we choose as our control, are called state variables. For example, the amount of capital you have tomorrow depends on the amount you consume today.

Example: Solve the following optimization problem in discrete time:

    max_{c_t} U({c_t})   subject to   A ≥ Σ_{t=0}^∞ c_t / (1+r)^t
In other words, we want to choose a level of consumption such that our lifetime utility will be maximized, given a fixed level of assets. This is similar to the problem a retired person would face if she had no income. In order for the agent to satisfy the budget constraint, we must have that

    A ≥ Σ_{t=0}^∞ c_t / (1+r)^t.

In other words, the present value of her lifetime consumption must be less than or equal to the level of her total assets. Also assume that utility is separable and is discounted by a factor β ∈ (0, 1) each period:

    U({c_t}) = Σ_{t=0}^∞ β^t u(c_t),

where u(c_t) is the within-period utility of consumption. We assume it is concave. The Lagrangean for this problem can be written as

    L = Σ_{t=0}^∞ β^t u(c_t) + λ ( A - Σ_{t=0}^∞ c_t / (1+r)^t ).

We know that the constraint will always bind so long as our utility function exhibits nice properties such as local non-satiation. Therefore we can solve the Lagrangean as if the inequality constraint were an equality constraint. Unfortunately, this Lagrangean will have an infinite number of first order conditions, since t goes to infinity (i.e. there are an infinite number of c_t inputs to the Lagrangean function). This is where a difference equation comes in handy: if we can come up with some condition that must hold between consumption in any two periods, then we can write a difference equation, iterate it, and solve it for all t using an initial condition.

Find the first order conditions of the Lagrangean with respect to an arbitrary c_t and c_{t+1}:

    ∂L/∂c_t     = β^t u′(c_t)         - λ / (1+r)^t     = 0
    ∂L/∂c_{t+1} = β^{t+1} u′(c_{t+1}) - λ / (1+r)^{t+1} = 0

Dividing the top equation by the bottom equation, we have

    u′(c_t) / u′(c_{t+1}) = (1+r)β.

Say our within-period utility function is u(c_t) = ln(c_t), so that u′(c_t) = 1/c_t. Our first order condition becomes

    c_{t+1} = (1+r)β c_t.

The solution to this linear difference equation is

    c_t = c_0 [(1+r)β]^t.

Now we have solved our maximization problem for all time periods. Well, not quite: we don't know what c_0 is.
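Before pinning down c_0, we can check the Euler equation numerically. In this sketch (our own; β, r, and the placeholder c0 are arbitrary values), the first order condition says β^t u′(c_t)(1+r)^t = λ for every t, so with log utility the quantity β^t (1+r)^t / c_t should be constant along the path c_t = c_0[(1+r)β]^t:

```python
# With log utility, the FOC beta^t * u'(c_t) = lam / (1+r)^t implies that
# beta^t * (1+r)^t / c_t is constant in t along the optimal path.
beta, r, c0 = 0.95, 0.04, 1.0       # arbitrary illustrative values
path = [c0 * ((1 + r) * beta) ** t for t in range(50)]
ratios = [beta ** t * (1 + r) ** t / c for t, c in enumerate(path)]
assert all(abs(q - ratios[0]) < 1e-9 for q in ratios)

# adjacent periods satisfy the Euler equation c_{t+1} = (1+r)*beta*c_t
assert all(abs(path[t + 1] - (1 + r) * beta * path[t]) < 1e-12
           for t in range(49))
```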
In order to find it, we need to plug this consumption path into our budget constraint:

    A = Σ_{t=0}^∞ c_t / (1+r)^t = Σ_{t=0}^∞ c_0 [(1+r)β]^t / (1+r)^t = c_0 Σ_{t=0}^∞ β^t = c_0 / (1-β)

    ⟹  c_0 = (1-β)A.

Therefore, the solution to the agent's optimization problem is

    c_t = (1-β)[(1+r)β]^t A.
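As a check on the algebra (with arbitrary parameter values of our own), the present value of the path c_t = (1-β)[(1+r)β]^t A should sum back to A:

```python
# Discounted sum of the closed-form consumption path over a long finite
# horizon; the terms reduce to (1-beta)*beta^t*A, a geometric series.
beta, r, A = 0.95, 0.04, 100.0      # arbitrary illustrative values
pv = sum((1 - beta) * ((1 + r) * beta) ** t * A / (1 + r) ** t
         for t in range(2000))
print(pv)                           # approaches A = 100 as the horizon grows
assert abs(pv - A) < 1e-6
```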
Optimization in Continuous Time

To solve optimization problems in continuous time, we abstract from the Lagrangean and use a Hamiltonian. The proof of why the Hamiltonian works will not be presented in this class; it will be presented in your first-semester math class instead.

Suppose we have a value function f(x, y). We can think of this as being like an instantaneous utility function. We want to control the flow of the value of this function over time so that the lifetime value of the function will be maximized. In other words, we want to maximize

    ∫_0^∞ f(x, y) dt

subject to constraints. Since time is continuous, the constraint cannot be a static function. It must tell me the change in my state variable at each point in time, and therefore it must be a differential equation. For example, if my objective function is instantaneous utility and my constraint is my assets, then the optimization problem would look like

    max_{c(t)} ∫_0^∞ e^{-βt} U(c(t)) dt   subject to   Ȧ = rA - c(t).

Notice that the maximizer of this problem is a function itself. It gives us the time path of consumption, not just a particular level of consumption.

There are two equivalent formulations of the Hamiltonian: the current value Hamiltonian and the present value Hamiltonian. The current value Hamiltonian for this problem would be expressed as

    H = U(c) + λ(rA - c).

Notice this is almost exactly like the Lagrangean function. However, the first order conditions are slightly different:

    ∂H/∂c = 0,   λ̇ = βλ - ∂H/∂A,   Ȧ = ∂H/∂λ.

We would need to solve this system using our analysis from differential equations. The present value Hamiltonian would be formulated this way:

    H = e^{-βt} U(c) + λ(rA - c).

Notice the objective function is now discounted in the Hamiltonian, whereas before it was not. The first order conditions are

    ∂H/∂c = 0,   λ̇ = -∂H/∂A,   Ȧ = ∂H/∂λ.

The strategy for solving the Hamiltonian is as follows:

1. Write the Hamiltonian.
2. Find the first order conditions.
3. Obtain differential equations in c and a.
4. Solve one of them.
5. Use the budget constraint to find the initial conditions.

Example:

    max ∫_0^∞ e^{-βt} ln(c_t) dt   subject to   ȧ = ra - c.

Assume that a_0 is known. We form the current value Hamiltonian

    H = ln(c_t) + λ(ra - c).

The first order conditions are

    ∂H/∂c = 1/c - λ = 0
    ȧ = ra - c
    λ̇ = βλ - λr

From the first condition, we get ln(c_t) = -ln(λ_t). Taking the time derivative of both sides, we get

    ċ/c = -λ̇/λ.

From the third condition we get

    λ̇/λ = β - r.

Setting these two conditions equal to each other, we get

    ċ/c = r - β.

The differential equation which solves this is

    c_t = c_0 e^{(r-β)t}.

Now it remains to find the initial condition c_0, which we can find using the budget constraint. We know that the present discounted value of our consumption must equal our initial assets, so

    a_0 = ∫_0^∞ e^{-rt} c(t) dt = ∫_0^∞ e^{-rt} c_0 e^{(r-β)t} dt = c_0 ∫_0^∞ e^{-βt} dt = c_0 / β.

Therefore, the solution is

    c_0 = βa_0,   c_t = βa_0 e^{(r-β)t}.

Implicit Function Theorem

Let G(x_1, ..., x_n, y) be a C^1 function on a ball about (x_1*, ..., x_n*, y*). Suppose that (x_1*, ..., x_n*, y*) satisfies

    G(x_1*, ..., x_n*, y*) = c   and   ∂G/∂y (x_1*, ..., x_n*, y*) ≠ 0.

Then there is a C^1 function y = y(x_1, ..., x_n) defined on an open ball B about (x_1*, ..., x_n*) so that the following conditions hold:

1. G(x_1, ..., x_n, y(x_1, ..., x_n)) = c for all (x_1, ..., x_n) ∈ B,
2. y* = y(x_1*, ..., x_n*),
3. for each index i,

    ∂y/∂x_i (x_1*, ..., x_n*) = - G_{x_i}(x_1*, ..., x_n*, y*) / G_y(x_1*, ..., x_n*, y*).
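As an illustration (our own example, not from the notes), take G(x, y) = x^2 + y^2 with c = 1. Near (x_0, y_0) = (0.6, 0.8) we have G_y = 2y_0 = 1.6 ≠ 0, so the theorem applies, and the formula ∂y/∂x = -G_x/G_y can be checked against the explicit branch y(x) = √(1 - x^2):

```python
import math

# Implicit function check for G(x, y) = x^2 + y^2 = 1 near (0.6, 0.8).
x0, y0, c = 0.6, 0.8, 1.0

def y_of_x(x):
    # the explicit solution branch passing through (x0, y0)
    return math.sqrt(c - x * x)

h = 1e-6
dydx_numeric = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)
dydx_formula = -(2 * x0) / (2 * y0)     # -G_x/G_y evaluated at (x0, y0)
print(dydx_numeric, dydx_formula)       # both approximately -0.75
assert abs(dydx_numeric - dydx_formula) < 1e-6
```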
Envelope Theorem

Unconstrained Problems

Let f(x; a) be a C^1 function of x ∈ R^N and the scalar a. For each choice of the parameter a, consider the unconstrained maximization problem: maximize f(x; a) with respect to x. Let x*(a) be a solution to this problem, and suppose that it is a C^1 function of a. Then

    d/da f(x*(a); a) = ∂f/∂a (x*(a); a).

Constrained Problems

Let f, h_1, ..., h_k : R^N × R^1 → R^1 be C^1 functions. Let x*(a) = (x_1*(a), ..., x_n*(a)) denote the solution of the problem of maximizing f(x; a) on the constraint set h_1(x, a) = 0, ..., h_k(x, a) = 0, for any choice of the parameter a. Suppose that x*(a) and the multipliers λ_1(a), ..., λ_k(a) are all C^1 functions of a, and that the resulting NDCQ holds. Then

    d/da f(x*(a); a) = ∂L/∂a (x*(a), λ(a); a),

where L is the Lagrangean of this problem.

Properties of Functions

Quasiconcavity

A function f defined on a convex set U ⊂ R^N is quasiconcave if for every real number a,

    C_a^+ = {x ∈ U : f(x) ≥ a}

is convex. In other words, the "better than" sets of the function f are convex.

Quasiconvexity

A function f defined on a convex set U ⊂ R^N is quasiconvex if for every real number a,

    C_a^- = {x ∈ U : f(x) ≤ a}

is convex. In other words, the "worse than" sets of the function f are convex.

Homogeneous Functions

For any scalar k, a real-valued function f(x_1, ..., x_n) is homogeneous of degree k if

    f(tx_1, ..., tx_n) = t^k f(x_1, ..., x_n)   for all (x_1, ..., x_n) and all t > 0.

One implication is that the tangent planes to the level sets of f have constant slope along each ray from the origin. Also, the level sets are radial expansions and contractions of each other.

Hemicontinuity

Upper Hemicontinuity

Let φ : S ⇒ T be a correspondence, with S and T closed subsets of R^N and R^K respectively. Let x^v, x ∈ S, v = 1, 2, 3, .... Also let x^v → x, y^v ∈ φ(x^v) for all v = 1, 2, 3, ..., and y^v → y. Then φ is upper hemicontinuous at x iff y ∈ φ(x). A correspondence is upper hemicontinuous iff its graph is closed in S × T.
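A standard illustration of upper hemicontinuity (our own example; `phi_contains` is a helper name): the correspondence φ(0) = [0, 1], φ(x) = {0} for x ≠ 0 has a closed graph, so by the characterization above it is upper hemicontinuous. The sketch below walks one convergent sequence through the definition:

```python
# Correspondence: phi(0) = [0, 1], phi(x) = {0} for x != 0.
def phi_contains(x, y):
    """Membership test for the graph of the correspondence phi."""
    if x == 0.0:
        return 0.0 <= y <= 1.0
    return y == 0.0

# Take x_v -> 0 with y_v in phi(x_v); here y_v = 0 for all v, so y_v -> 0.
xs = [1.0 / v for v in range(1, 101)]
ys = [0.0] * len(xs)
assert all(phi_contains(x, y) for x, y in zip(xs, ys))

# Upper hemicontinuity requires the limit y = 0 to lie in phi(0) -- it does.
assert phi_contains(0.0, 0.0)
```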
Lower Hemicontinuity

Let φ : S ⇒ T be a correspondence, with S and T closed subsets of R^N and R^K respectively. Let x^v ∈ S, v = 1, 2, 3, ..., with x^v → x, and let y ∈ φ(x). Then φ is lower hemicontinuous at x iff there exists a sequence y^v ∈ φ(x^v) with y^v → y. A correspondence is lower hemicontinuous if, through every point on its graph, you can draw a function whose graph about that point is contained in the graph of the correspondence.

Fixed Point Theorems

Brouwer's Fixed Point Theorem (Functions)

Let S be a nonempty, compact, and convex set. Let f : S → S be continuous. Then there exists an x ∈ S such that x = f(x).

Kakutani's Fixed Point Theorem (Correspondences)

Let S be a nonempty, compact, and convex set. Let φ : S ⇒ S be a correspondence that is upper hemicontinuous everywhere on S. Also let φ(x) be nonempty and convex for all x. Then there exists an x ∈ S such that x ∈ φ(x).

Hyperplane Theorems

Separating Hyperplane Theorem

Suppose B ⊂ R^N is convex and closed, and that x ∉ B. Then there exist p ∈ R^N \ {0} and c ∈ R such that p · x > c and p · y < c for every y ∈ B.

Supporting Hyperplane Theorem

Suppose B ⊂ R^N is convex, and that x ∉ int B. Then there exists p ∈ R^N \ {0} such that p · x ≥ p · y for every y ∈ B.
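As an illustration of Brouwer's theorem (our own example): f(x) = cos(x) is a continuous self-map of the nonempty, compact, convex set [0, 1], since cos([0, 1]) = [cos 1, 1] ⊂ [0, 1], so a fixed point must exist. Simple iteration finds it:

```python
import math

# Iterate f(x) = cos(x) on [0, 1]; Brouwer guarantees a fixed point, and
# here the iteration happens to converge to it (f is a contraction nearby).
x = 0.5
for _ in range(200):
    x = math.cos(x)
print(x)                              # approximately 0.739085
assert abs(math.cos(x) - x) < 1e-12   # x is (numerically) a fixed point
```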