E 600, Chapter 4: Optimization
Simona Helmsmueller
August 8, 2018
Goals of this lecture: Every theorem in these slides is important! You should understand, remember, and be able to apply each and every one of them. The best way to remember a theorem is to apply it over and over again; luckily, that is exactly what you will be doing in your econ master courses. Some theorems (implicit function, Lagrange) have many conditions and scary formulas. I do not expect you to write them down exactly. However, you should know and actively remember the intuitive meaning behind each condition, and you should be able to check the conditions in a given (economic) context. The latter often requires being able to write down the economic problem in a form which fits the optimization framework as discussed in this lecture. You should be comfortable doing that.
Introduction
Economics is the study of optimal (efficient) allocations given limited resources. Examples include:
- Choice of the optimal ratio of input factors in production, given input and output prices
- Optimal division of time between labor and leisure, given a time budget
- Optimal allocation of tax money to investments and subsidies, given a federal budget
Definition: (Optimization Problem)
(P_min)  minimize f(x) over x ∈ dom(f)
         subject to g_i(x) = 0, i = 1, ..., m,
                    h_i(x) ≤ 0, i = 1, ..., k.

(P_max)  maximize f(x) over x ∈ dom(f)
         subject to g_i(x) = 0, i = 1, ..., m,
                    h_i(x) ≤ 0, i = 1, ..., k.
Example 1: Utility maximization
Consumer theory: agents maximize their utility by choosing an optimal consumption bundle of the n goods in the economy, x = (x_1, ..., x_n).
Objective function: f(x) = u(x_1, ..., x_n)
Constraints arise from the prices of goods p = (p_1, ..., p_n) and the available income I: p · x ≤ I
The maximization problem then reads:
max_{x: x_i ≥ 0} u(x) subject to p · x ≤ I.
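As a concrete illustration (an assumed example, not from the slides): for a Cobb-Douglas utility u(x) = x_1^a · x_2^(1-a), the standard closed-form solution is that the consumer spends the budget share a on good 1 and 1-a on good 2. A minimal Python sketch with made-up prices and income:

```python
# Hypothetical Cobb-Douglas consumer: u(x1, x2) = x1**a * x2**(1 - a).
# Closed-form demand: x1* = a*I/p1, x2* = (1 - a)*I/p2.

def cobb_douglas_demand(a, p1, p2, income):
    """Return the utility-maximizing bundle for u = x1^a * x2^(1-a)."""
    return a * income / p1, (1 - a) * income / p2

x1, x2 = cobb_douglas_demand(a=0.3, p1=2.0, p2=5.0, income=100.0)
print(x1, x2)                 # 15.0 14.0
print(2.0 * x1 + 5.0 * x2)    # 100.0 -- the budget constraint binds
```

Note that the budget constraint holds with equality at the optimum, as expected when utility is strictly increasing in both goods.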
Example 2: Expenditure minimization
Dual problem of utility maximization: minimize expenditures given a certain utility level ū. The minimization problem reads:
min_{x: x_i ≥ 0} p · x subject to u(x) ≥ ū.
Example 3: Profit maximization
Single-output producer (output y)
n input factors x = (x_1, ..., x_n), with prices w = (w_1, ..., w_n)
Production function y = g(x)
Price function p(y) (inverse demand function)
Maximization problem:
max_x p(g(x)) g(x) - w · x subject to x ≥ 0
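A minimal numerical sketch (an assumed example, not from the slides): with a single input, production g(x) = √x, a constant output price p (so p(y) = p), and input price w, the profit p√x - wx is maximized where the first-order condition p/(2√x) = w holds, i.e. at x* = (p/(2w))²:

```python
import math

# Hypothetical single-input firm: g(x) = sqrt(x), constant output price p,
# input price w. Profit: pi(x) = p*sqrt(x) - w*x; FOC gives x* = (p/(2w))**2.

def optimal_input(p, w):
    return (p / (2 * w)) ** 2

def profit(x, p, w):
    return p * math.sqrt(x) - w * x

x_star = optimal_input(p=10.0, w=2.5)
print(x_star)  # 4.0
# The FOC solution beats a nearby feasible input level:
print(profit(x_star, 10.0, 2.5) >= profit(x_star + 0.1, 10.0, 2.5))  # True
```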
Example 4: Cost minimization As an exercise, formulate the dual problem of the firm.
Definition: (Maximum and Minimum)
Let X be a subset of R. An element x̄ ∈ X is called a maximum of X if, for all x ∈ X, x ≤ x̄. An element x̄ ∈ X is called a minimum of X if, for all x ∈ X, x ≥ x̄. If the above inequalities are strict (for x ≠ x̄), one speaks of a strict maximum (resp. strict minimum). A maximum or a minimum is often simply referred to as an extremum.
Definition: (Local and Global Maximizers)
Let f be a real-valued function defined on X ⊆ R^n. A point x̄ ∈ X is:
- A global maximizer for f on X if and only if: ∀x ∈ X, f(x̄) ≥ f(x)
- A strict global maximizer for f on X if and only if: ∀x ∈ X with x ≠ x̄, f(x̄) > f(x)
- A local maximizer for f on X if ∃ε > 0 such that: ∀x ∈ X ∩ B_ε(x̄), f(x̄) ≥ f(x)
- A strict local maximizer for f on X if ∃ε > 0 such that: ∀x ∈ X ∩ B_ε(x̄) with x ≠ x̄, f(x̄) > f(x)
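The distinction can be made concrete with a brute-force check (an assumed example, not from the slides): f(x) = x³ - 3x on X = [-2, 2.5] has a local maximizer at x = -1 (where f = 2), but the global maximizer sits at the boundary point x = 2.5:

```python
# Numerical illustration of local vs. global maximizers for
# f(x) = x**3 - 3*x on the closed interval X = [-2, 2.5].

def f(x):
    return x**3 - 3*x

grid = [-2 + i * 0.001 for i in range(4501)]  # grid over [-2, 2.5]
global_max = max(grid, key=f)
print(round(global_max, 3))  # 2.5 -- the global maximizer is on the boundary

# x = -1 is only a local maximizer: it beats all grid points in a small ball.
eps = 0.05
neighbors = [x for x in grid if abs(x - (-1)) < eps]
print(all(f(-1) >= f(x) for x in neighbors))  # True
```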
Fact: (Equivalence Between Minimizers and Maximizers)
Consider a problem of the form P_min. x̄ is a local (resp. global) extremizer for P_min if and only if it is a local (resp. global) extremizer for a problem of the form P_max with identical constraints but -f(x) as the objective function.
PLEASE CONSIDER P_max AS A CANONICAL SHAPE AND ALWAYS TRY TO RESHAPE YOUR PROBLEM SO THAT IT FITS IT!
Objectives of Optimization Theory:
1. Identify conditions which guarantee existence of a solution.
2. Identify the set of optimal, feasible points:
   - Necessary conditions, which every solution must fulfill
   - Sufficient conditions, which guarantee that any point fulfilling them is a solution
3. Identify conditions which guarantee uniqueness of the solution.
4. Analyze how the solution depends on parameters of the optimization problem.
Theorem: (Weierstrass, Extreme Value Theorem)
Let X be a nonempty closed and bounded subset of R^n and f: X → R be continuous. Then f is bounded on X and attains both its maximum and its minimum in X. That is, there exist points x_M and x_m in X such that
f(x_m) ≤ f(x) ≤ f(x_M) for all x ∈ X.
What does this mean?
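A quick illustration of why closedness matters (an assumed example, not from the slides): f(x) = x attains its maximum on the closed set [0, 1] at x = 1, but on the open interval (0, 1) the supremum 1 is never attained:

```python
# Weierstrass needs a CLOSED, bounded set: compare f(x) = x on [0, 1]
# versus on (0, 1), approximated by fine grids.

def f(x):
    return x

closed_grid = [i / 1000 for i in range(1001)]   # includes endpoints 0 and 1
open_grid = [i / 1000 for i in range(1, 1000)]  # endpoints excluded

print(max(f(x) for x in closed_grid))  # 1.0 -- maximum attained at x = 1
print(max(f(x) for x in open_grid))    # 0.999 -- sup = 1 is never reached
```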
Exercise: what conditions would allow us to apply Weierstrass to the utility maximization and the cost minimization problems?
Consider the following optimization problem:
(P)  maximize f(x) over x ∈ dom(f),
where dom(f) ⊆ R^n and f is real-valued and continuously differentiable.
Theorem: (First Order Necessary Condition (FOC))
Consider (P). Let x̄ be an element of the interior of dom(f). If x̄ is a local extremum of f, then:
∇f(x̄) = 0
The converse is not necessarily true! Any point x ∈ dom(f) with ∇f(x) = 0 is called a critical point of f.
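Both directions of the theorem can be checked numerically (an assumed example, not from the slides): for f(x) = -(x - 2)² the FOC picks out the maximizer x = 2, while for g(x) = x³ the point x = 0 is critical yet is neither a maximum nor a minimum, showing that the converse fails:

```python
# Central-difference derivative to locate critical points numerically.

def deriv(func, x, h=1e-6):
    return (func(x + h) - func(x - h)) / (2 * h)

f = lambda x: -(x - 2)**2
g = lambda x: x**3

print(abs(deriv(f, 2.0)) < 1e-6)   # True: x = 2 is a critical point of f
print(abs(deriv(g, 0.0)) < 1e-6)   # True: x = 0 is a critical point of g...
print(g(0.1) > g(0.0) > g(-0.1))   # True: ...yet not a local extremum
```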
Figure: f(x, y) = x^2 - y^2; 0 is a Saddle Point
Theorem: (SOC: Necessity)
Consider (P). Let x̄ be an element of the interior of dom(f) and B_ε(x̄) an open ε-ball around x̄. Assume f ∈ C^2(B_ε(x̄)). If x̄ is a maximizer of f, then:
∀λ ∈ R^n: λᵀ H_f(x̄) λ ≤ 0,
i.e. the Hessian of f at x̄ is negative semidefinite.

Theorem: (SOC: Sufficiency)
Consider (P). Let x̄ be an element of the interior of dom(f) and B_ε(x̄) an open ε-ball around x̄. Assume f ∈ C^3(B_ε(x̄)). If ∇f(x̄) = 0 and H_f(x̄) is negative definite, then x̄ is a local maximizer of f.
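For a symmetric 2x2 Hessian, negative definiteness can be tested via the leading principal minors: the top-left entry must be negative and the determinant positive. A small sketch (assumed examples, not from the slides) contrasting a true local maximum with the saddle point from the figure above:

```python
# Leading-principal-minor test for a symmetric 2x2 Hessian [[a, b], [b, c]]:
# negative definite  <=>  a < 0 and det = a*c - b*b > 0.

def is_negative_definite_2x2(a, b, c):
    return a < 0 and a * c - b * b > 0

# Hessian of f(x, y) = -x**2 - y**2 is [[-2, 0], [0, -2]]:
# the SOC confirms a local maximum at the critical point (0, 0).
print(is_negative_definite_2x2(-2.0, 0.0, -2.0))  # True

# Hessian of the saddle x**2 - y**2 is [[2, 0], [0, -2]]: the test fails,
# so (0, 0) is a critical point but not a local maximizer.
print(is_negative_definite_2x2(2.0, 0.0, -2.0))   # False
```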
Consider the following optimization problem:
(P_min,c)  minimize f(x) over x ∈ dom(f)
           subject to g_i(x) = 0, i = 1, ..., m.
Definition: (Level Sets)
Let X be a nonempty subset of R^n, f: X → R, and c ∈ R.
- The c-level set of f is the set L^f_c := {x ∈ X : f(x) = c}.
- The c-lower level set of f is the set {x ∈ X : f(x) ≤ c}.
- The strict c-lower level set of f is the set {x ∈ X : f(x) < c}.
- The c-upper level set and the strict c-upper level set of f are defined symmetrically.
Figure: Level Sets in Geography (source: http://canebrake13.com/fieldcraft/map compass.php)
What are implicit functions? What do they have to do with constrained optimization?
Theorem: (Implicit Function Theorem)
Let X ⊆ R^n and f: X → R. Suppose that f belongs to C^1(A), where A is a neighborhood of x̄ in X, and that for some i ∈ {1, 2, ..., n},
∂f(x̄)/∂x_i ≠ 0.
Then there exists a function φ_i(x_{-i}), defined on a neighborhood B of x̄_{-i}, such that φ_i(x̄_{-i}) = x̄_i. Also, if x_{-i} ∈ B, then (x_1, ..., x_{i-1}, φ_i(x_{-i}), x_{i+1}, ..., x_n) ∈ A and
f(x_1, ..., x_{i-1}, φ_i(x_{-i}), x_{i+1}, ..., x_n) = f(x̄).
Finally, φ_i is differentiable at x̄_{-i} and, for j ≠ i,
∂φ_i(x̄_{-i})/∂x_j = - (∂f(x̄)/∂x_j) / (∂f(x̄)/∂x_i).
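The classic illustration (an assumed example, not from the slides) is the unit circle f(x, y) = x² + y² - 1 = 0: wherever ∂f/∂y = 2y ≠ 0 we can solve locally for y = φ(x) = √(1 - x²), and the theorem's formula gives φ'(x) = -(∂f/∂x)/(∂f/∂y) = -x/y. A quick numerical check:

```python
import math

# Implicit function theorem on the unit circle x**2 + y**2 = 1,
# solved locally for y = phi(x) on the upper half where y > 0.

def phi(x):
    return math.sqrt(1 - x**2)

x0 = 0.6
y0 = phi(x0)                      # 0.8
slope_formula = -x0 / y0          # theorem's formula: -(2*x0)/(2*y0) = -0.75

h = 1e-6
slope_numeric = (phi(x0 + h) - phi(x0 - h)) / (2 * h)
print(abs(slope_formula - slope_numeric) < 1e-6)  # True: the formula matches
```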
Theorem: (Lagrange optimization, several equality constraints)
Let f, g_1, ..., g_m ∈ C^1 be functions of n variables. Consider the problem of maximizing f(x) on the constraint set
C_g = {x = (x_1, ..., x_n) : g_1(x) = a_1, ..., g_m(x) = a_m}.
Suppose that x* ∈ C_g and that x* is a (local) max or min of f on C_g. Suppose further that x* satisfies the nondegenerate constraint qualification, i.e., the Jacobian matrix of the constraint functions has maximal rank at x*. Then there exist λ*_1, ..., λ*_m such that (x*_1, ..., x*_n, λ*_1, ..., λ*_m) = (x*, λ*) is a critical point of the Lagrangian
L(x, λ) = f(x) - λ_1(g_1(x) - a_1) - ... - λ_m(g_m(x) - a_m).
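A worked example (assumed, not from the slides): maximize f(x, y) = xy subject to the single equality constraint x + y = 1. Setting the partial derivatives of L = xy - λ(x + y - 1) to zero gives y = λ, x = λ, x + y = 1, hence x* = y* = λ* = 1/2:

```python
# Candidate from solving grad L = 0 for L = x*y - lam*(x + y - 1).
x_star = y_star = lam_star = 0.5

# The Lagrangian's partial derivatives vanish at the candidate:
dL_dx = y_star - lam_star
dL_dy = x_star - lam_star
print(dL_dx == 0.0 and dL_dy == 0.0)  # True: (x*, y*, lam*) is critical

# The candidate beats other feasible points of the form (x, 1 - x):
print(all(x_star * y_star >= x * (1 - x) for x in [0.1, 0.3, 0.7, 0.9]))  # True
```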
Consider the following optimization problem:
(P_min,c2)  minimize f(x) over x ∈ dom(f)
            subject to g_i(x) ≤ b_i, i = 1, ..., m.
Most economic problems are of this form. It is more difficult to prove or to illustrate graphically. Justin Leduc did an excellent job of illustrating the idea. Homework: carefully read through his script on the Kuhn-Tucker theorem (uploaded on my webpage).
Theorem: (Lagrange optimization, one inequality constraint)
Suppose that f and g are C^1 functions on R^2 and that (x*, y*) maximizes f on the constraint set g(x, y) ≤ b. If g(x*, y*) = b, suppose that ∂g/∂x(x*, y*) ≠ 0 or ∂g/∂y(x*, y*) ≠ 0. In any case, form the Lagrangian function
L(x, y, λ) = f(x, y) - λ(g(x, y) - b).
Then there is a multiplier λ* such that:
1. ∂L/∂x(x*, y*, λ*) = 0
2. ∂L/∂y(x*, y*, λ*) = 0
3. λ*(g(x*, y*) - b) = 0
4. λ* ≥ 0
5. g(x*, y*) ≤ b
Theorem: (Lagrange optimization, several inequality constraints)
Suppose that f and g_1, ..., g_k are C^1 functions of n variables and that x* ∈ R^n maximizes f on the constraint set defined by the k inequalities g_i(x) ≤ b_i, i = 1, ..., k. Suppose that the nondegenerate constraint qualification is satisfied at x*, i.e., the rank at x* of the Jacobian matrix of the binding constraints is maximal. Form the Lagrangian function
L(x, λ_1, ..., λ_k) = f(x) - λ_1(g_1(x) - b_1) - ... - λ_k(g_k(x) - b_k).
Then there are multipliers λ*_1, ..., λ*_k such that:
1. ∂L/∂x_1(x*, λ*) = 0, ..., ∂L/∂x_n(x*, λ*) = 0
2. λ*_i(g_i(x*) - b_i) = 0, i = 1, ..., k
3. λ*_i ≥ 0, i = 1, ..., k
4. g_i(x*) ≤ b_i, i = 1, ..., k
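These four conditions can be verified by hand on a tiny example (assumed, not from the slides): maximize f(x) = -(x - 2)² subject to the single constraint g(x) = x ≤ 1. The unconstrained peak at x = 2 is infeasible, so the constraint binds; L = f(x) - λ(x - 1) then yields the candidate x* = 1 with λ* = 2:

```python
# Candidate solution for: maximize -(x - 2)**2 subject to x <= 1.
# The constraint binds at the optimum, and dL/dx = -2*(x - 2) - lam = 0
# at x = 1 gives lam = 2.
x_star, lam_star = 1.0, 2.0

stationarity = -2 * (x_star - 2) - lam_star   # dL/dx at the candidate
comp_slack = lam_star * (x_star - 1.0)        # lam*(g(x*) - b)

print(stationarity == 0.0)   # True: condition 1 (stationarity)
print(comp_slack == 0.0)     # True: condition 2 (complementary slackness)
print(lam_star >= 0.0)       # True: condition 3 (nonnegative multiplier)
print(x_star <= 1.0)         # True: condition 4 (feasibility)
```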