
John Nachbar
Washington University
March 26, 2018

Finite Dimensional Optimization Part I: The KKT Theorem [1]

1 Introduction

These notes characterize maxima and minima in terms of first derivatives. I focus primarily on maximization. The problem of minimizing a function $f$ has the same solution (or solutions) as the problem of maximizing $-f$, so all of the results for maximization have easy corollaries for minimization.

The main result of these notes is the Karush-Kuhn-Tucker (KKT) Theorem, recorded as Theorem 2 in Section 4.4. The KKT Theorem was formulated independently, first in Karush (1939) and later in Kuhn and Tucker (1951). Karush's contribution was unknown for many years and it is common to see the KKT Theorem referred to as Kuhn-Tucker (and I still sometimes do this in my own notes).

These notes cover only necessary conditions, conditions that solutions to maximization problems must satisfy. Part II of these notes discusses how to guarantee that a candidate solution is indeed a maximum (rather than a minimum, an inflection point, a saddle point, etc.). Part III of these notes develops some of the complementary machinery of convex programming. One of the main attractions of convex programming is that it extends to situations where the functions are not differentiable.

2 Basic definitions.

Consider a function $f : X \to \mathbb{R}$, where $X \subseteq \mathbb{R}^N$. In economic applications, $X$ is often either $\mathbb{R}^N_+$ or $\mathbb{R}^N_{++}$. These notes gloss over the possibility that $x$ might not be interior to $X$, in which case one must work with a modified definition of derivative at $x$.

Suppose that we are interested in the behavior of $f$ on a non-empty subset $C \subseteq X$. For example, in a competitive demand problem, $X$ is the set of possible consumption bundles, $f$ is a utility function, and $C$ is the set of affordable consumption bundles. In general, I refer to $f$ as the objective function and $C$ as the constraint set, also called the feasible set or opportunity set.

Definition 1. Let $C \subseteq X \subseteq \mathbb{R}^N$, $f : X \to \mathbb{R}$, and $x^* \in C$.

[1] cbna. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.

1. $x^*$ is a local maximum iff there is an $\varepsilon > 0$ such that $f(x^*) \geq f(x)$ for any $x \in N_\varepsilon(x^*) \cap C$.

2. $x^*$ is an interior local maximum iff there is an $\varepsilon > 0$ such that $N_\varepsilon(x^*) \subseteq C$ and $f(x^*) \geq f(x)$ for all $x \in N_\varepsilon(x^*)$.

3. $x^*$ is a (global) maximum iff $f(x^*) \geq f(x)$ for all $x \in C$.

The definitions for minima are analogous, but with inequalities reversed.

The following examples illustrate these concepts. In all of the examples, $f : \mathbb{R} \to \mathbb{R}$.

Example 1. $f(x) = -x^2$. $C = \mathbb{R}$. Then $x^* = 0$ is the maximum.

Example 2. $f(x) = x^2$. $C = \mathbb{R}$. Then $x^* = 0$ is the minimum. There is no maximum.

Example 3. $f(x) = x$. $C = [-1, 1]$. Then $x^* = 1$ is the maximum. It is not interior. The restriction to $C$ is binding, meaning that the solution is at a boundary of $C$.

Example 4. Let $f(x) = -x(x-1)(x+1)^2$. $C = \mathbb{R}$. Here, $x = -1$ is a local maximum but not a global maximum. The global maximum is at $x^* = (1 + \sqrt{17})/8$.

3 Interior Maxima.

I begin by recording the fact, which should already be familiar, that at an interior local maximum, the gradient of the objective function $f$ is zero. The intuition is that the gradient of $f$ gives the direction of fastest increase in $f$, and at an interior maximum, $f$ is not increasing in any direction.

Theorem 1. Consider $f : X \to \mathbb{R}$. If $x^*$ is an interior local maximum of $f$ and $f$ is differentiable at $x^*$ then $\nabla f(x^*) = 0$.

Proof. By contraposition. Suppose that $\nabla f(x^*) \neq 0$. Then $\nabla f(x^*) \cdot \nabla f(x^*) > 0$, which implies that the directional derivative of $f$ at $x^*$ in the direction $\nabla f(x^*)$ is strictly positive. Therefore, there is a $\bar\gamma > 0$ such that for all $\gamma \in (0, \bar\gamma)$, $f(x^* + \gamma \nabla f(x^*)) - f(x^*) > 0$, which implies that $x^*$ is not a local maximum.

By an analogous argument, if $x^*$ is an interior local minimum then, again, $\nabla f(x^*) = 0$. I often refer to $\nabla f(x^*) = 0$ as the (unconstrained) first-order condition (FOC). It is first order in the sense that it involves only first derivatives. The condition $\nabla f(x^*) = 0$ is necessary for an interior local maximum but not sufficient for a local, let alone a global, maximum.
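To connect the first-order condition to computation, here is a quick numerical sketch (mine, not part of the original notes) that finds the critical points of the function from Example 4 by computing the roots of $f'$:

```python
# The function from Example 4: f(x) = -x(x-1)(x+1)^2 = -(x^4 + x^3 - x^2 - x).
# Its critical points are the roots of f'(x) = -(4x^3 + 3x^2 - 2x - 1).
import numpy as np

# Coefficients of f', highest degree first.
critical_points = np.sort(np.roots([-4.0, -3.0, 2.0, 1.0]).real)
print(critical_points)
# Roots: -1, (1 - sqrt(17))/8 ~ -0.390, and (1 + sqrt(17))/8 ~ 0.640.

f = lambda x: -x * (x - 1) * (x + 1) ** 2
print([f(x) for x in critical_points])
# f is largest at (1 + sqrt(17))/8, the global maximum identified in Example 7.
```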

In the examples below, $f : \mathbb{R} \to \mathbb{R}$ and $C = \mathbb{R}$.

Example 5. Let $f(x) = x^3$. Then $\nabla f(0) = 0$ but $x = 0$ is neither a local maximum nor a local minimum.

Example 6. Let $f(x) = x^2$. Then $\nabla f(0) = 0$ but $x = 0$ is not a local maximum; it is, however, a global minimum.

Example 7. Recall Example 4. The gradient is zero at $-1$, $(1 - \sqrt{17})/8$, and $(1 + \sqrt{17})/8$. The first is a local, but not global, maximum; the second is a local, but not global, minimum (there is no global minimum); and the last is the global maximum.

4 Constrained Maxima: The KKT Theorem.

4.1 The constrained maximization problem in standard form.

To proceed further I need to establish some notational conventions. As before, let $C \subseteq X \subseteq \mathbb{R}^N$ and consider the maximization problem

$$\max_{x \in C} f(x).$$

The main result of these notes, the Karush-Kuhn-Tucker (KKT) Theorem, assumes that $C$ is written in a particular manner, which I refer to as standard form. If $C$ is not expressed in standard form then KKT is still true provided its conclusion is reformulated. This is not difficult to do, but from a purely practical standpoint, one can avoid error in using KKT by remembering to write $C$ in standard form.

To illustrate standard form, consider the consumer utility maximization problem $\max_{x \in C} f(x)$ where $C = \{x \in \mathbb{R}^N_+ : p \cdot x \leq m\}$. This is often written

$$\max_x f(x) \quad \text{s.t.} \quad p \cdot x \leq m, \; x \geq 0.$$

In standard form, the problem is

$$\max_x f(x) \quad \text{s.t.} \quad p \cdot x - m \leq 0, \; -x_1 \leq 0, \; \dots, \; -x_N \leq 0.$$

More generally, a maximization problem is in standard form iff there are $K$ functions $g_k : X \to \mathbb{R}$ such that $C$ is the set of $x$ such that $g_k(x) \leq 0$ for all $k$.
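As an illustration, here is a minimal sketch (with assumed example values for $p$ and $m$; this code is not from the notes) that encodes the consumer problem in standard form, stacking all constraints into a single vector-valued $g$ with every component in the $g_k(x) \leq 0$ form:

```python
# A minimal sketch of the consumer problem in standard form.
import numpy as np

p = np.array([4.0, 8.0, 3.0])  # prices (illustrative values)
m = 9.0                        # income (illustrative value)

def g(x):
    """Budget constraint and non-negativity constraints, each as g_k(x) <= 0."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([[p @ x - m], -x])

def is_feasible(x, tol=1e-9):
    return bool(np.all(g(x) <= tol))

print(is_feasible([1.0, 0.25, 1.0]))  # True: 4 + 2 + 3 = 9 <= 9 and x >= 0
print(is_feasible([3.0, 0.0, 0.0]))   # False: 12 > 9
```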

More concisely, letting $g = (g_1, \dots, g_K)$, the canonical maximization problem in standard form, MAX, is

$$\max_x f(x) \quad \text{s.t.} \quad g(x) \leq 0.$$

In particular, $C = g^{-1}((-\infty, 0]^K)$. Similarly, the canonical minimization problem in standard form, MIN, is

$$\min_x f(x) \quad \text{s.t.} \quad g(x) \geq 0.$$

Note that constraints in MAX use $\leq$ while the constraints in MIN use $\geq$. As a mnemonic, in MAX problems think of constraints as imposing ceilings blocking further increases in $f$ via increases in $x$. This motivates the less-than-or-equal form of the standard constraint. Conversely, for MIN problems think of constraints as establishing floors preventing further decreases in $f$ via decreases in $x$, hence the greater-than-or-equal form of the standard constraint. This caricature of the MAX and MIN problems should not be taken too seriously. For example, the constraints $x \geq 0$ in the consumer maximization problem are really floors even though in the standard form they are expressed as ceilings ($-x \leq 0$).

4.2 The KKT Theorem: An Informal Statement.

Let $x^*$ be feasible, so that $g_k(x^*) \leq 0$ for all $k$. Say that constraint $k$ is binding at $x^*$ if $g_k(x^*) = 0$. If constraint $k$ is not binding, so that $g_k(x^*) < 0$, say that constraint $k$ is slack. Define $J$ to be the set of indices of the binding constraints:

$$J = \{k \in \{1, \dots, K\} : g_k(x^*) = 0\}.$$

From the perspective of first order conditions, only the binding constraints matter. KKT states that if $x^*$ is a local maximum, and if $J \neq \emptyset$, then, subject to a technical condition called Constraint Qualification discussed in Section 4.3, for each $k \in J$ there is a number $\lambda_k \geq 0$ such that

$$\nabla f(x^*) = \sum_{k \in J} \lambda_k \nabla g_k(x^*).$$

I refer to this as the KKT condition. (If $J = \emptyset$ then the KKT condition is $\nabla f(x^*) = 0$, as in Theorem 1.) The $\lambda_k$ are called KKT multipliers. The KKT condition for a MIN problem (in standard form) is identical.

For geometric intuition, suppose that $J \neq \emptyset$, and let $A$ be the set that is positively spanned by the gradients of the binding constraints:

$$A = \left\{ x \in \mathbb{R}^N : \exists\, \lambda_k \geq 0 \text{ s.t. } x = \sum_{k \in J} \lambda_k \nabla g_k(x^*) \right\}.$$

Then $A$ is a cone: for any $a \in A$ and any $\lambda \geq 0$, $\lambda a \in A$. KKT thus says that, if $J \neq \emptyset$, then $\nabla f(x^*)$ lies in the cone $A$. See Figure 1.

Very informally, the intuition behind KKT is that if $x^*$ is a local maximum but $\nabla f(x^*) \neq 0$ then moving in a direction that increases $f$ must violate one of the binding constraints, which means moving in a direction that increases at least one of the binding $g_k$ functions. The proof of KKT formalizes this intuition by verifying that $\nabla f(x^*) \in A$.

[Figure 1: The cone $A$ and the KKT condition. The gradient $\nabla f(x^*)$ lies in the cone $A$ positively spanned by $\nabla g_1(x^*)$ and $\nabla g_2(x^*)$.]

As I discuss in the notes on the Envelope Theorem, $\lambda_k$ measures how much the objective function would be increased if constraint $k$ were relaxed slightly. For example, in the standard utility maximization problem, the KKT multiplier on the budget constraint measures how much utility would increase if the consumer had a bit more income.

4.3 MF Constraint Qualification.

I give a formal statement and proof of KKT in Section 4.4. Before doing so, I need to discuss the technical condition called Constraint Qualification mentioned in Section 4.2. To see that some additional condition may be needed, consider the following example, in which the KKT condition does not hold at the solution.

Example 8. The domain is $\mathbb{R}$. Let $f(x) = x$ and let $g(x) = x^3$. Then the constraint set is $C = (-\infty, 0]$ and the solution is $x^* = 0$. The KKT condition fails: since $\nabla f(x^*) = 1$ while $\nabla g(x^*) = 0$, there is no $\lambda \geq 0$ such that $\nabla f(x^*) = \lambda \nabla g(x^*)$.

Here is a more subtle example of the KKT condition failing, in which the constraint gradients don't vanish.

Example 9. Let $f(x_1, x_2) = x_1$. Let the constraints be $g_1(x_1, x_2) = x_1^2 + x_2$ and $g_2(x_1, x_2) = x_1^2 - x_2$. The constraint boundaries are the parabolas $x_2 = -x_1^2$ and $x_2 = x_1^2$. The constraint set is $C = \{(0, 0)\}$ and the solution is therefore $x^* = (0, 0)$. Once again the KKT condition fails: $\nabla f(x^*) = (1, 0)$ while $\nabla g_1(x^*) = (0, 1)$ and $\nabla g_2(x^*) = (0, -1)$. There are no $\lambda_1, \lambda_2 \geq 0$ such that $\nabla f(x^*) = \lambda_1 \nabla g_1(x^*) + \lambda_2 \nabla g_2(x^*)$.

I rule out Examples 8 and 9 by assuming the following condition, an example of constraint qualification. Here and below, assume that $x^*$ is feasible and that $g_k$ is differentiable at $x^*$ for all $k \in J$.

Definition 2. Mangasarian-Fromovitz Constraint Qualification (MF) holds at $x^*$ iff there exists a $v \in \mathbb{R}^N$ such that for all $k \in J$, $\nabla g_k(x^*) \cdot v < 0$.

MF was introduced in Mangasarian and Fromovitz (1967). It is arguably the most useful form of constraint qualification but it is not the only one and it is not the weakest. Section 4.8 surveys other forms of constraint qualification. In Section 4.5, I show that a sufficient condition for MF is that (a) the $g_k$ are convex functions and (b) the constraint set $C$ has a non-empty interior. In practice, these conditions are typically very easy to check.

In Example 8, MF fails because $\nabla g(x^*) = 0$, hence $\nabla g(x^*) \cdot v = 0$ for any $v$. In Example 9, MF fails because $\nabla g_1(x^*) = -\nabla g_2(x^*)$, and hence for any $v$ such that $\nabla g_1(x^*) \cdot v < 0$, $\nabla g_2(x^*) \cdot v > 0$.

Whether MF holds can depend on the details of how the constraint set is described, rather than on the constraint set per se. Consider, for example, the following variant of Example 8.

Example 10. The domain is $\mathbb{R}$. Let $f(x) = g(x) = x$. Then the constraint set is $C = (-\infty, 0]$ and the solution is $x^* = 0$, as in Example 8. Although this problem has the same objective function as in Example 8 and the same constraint set, the KKT condition holds at the solution here, whereas it did not in Example 8. Explicitly, here $\nabla f(0) = \nabla g(0) = 1$ and so $\nabla f(0) = \lambda \nabla g(0)$ with $\lambda = 1$, which is positive.
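MF can be checked mechanically. The following sketch (my own construction, not from the notes; `mf_holds` is a hypothetical helper name) searches for a direction $v$ with $\nabla g_k(x^*) \cdot v < 0$ for all binding $k$ by solving a small linear program:

```python
# MF holds at x* iff some v satisfies grad g_k(x*) . v < 0 for all binding k.
# Solve the LP  max t  s.t.  grad g_k . v + t <= 0,  -1 <= v_i <= 1,  0 <= t <= 1;
# MF holds exactly when the optimal t is strictly positive.
import numpy as np
from scipy.optimize import linprog

def mf_holds(binding_grads, tol=1e-9):
    G = np.asarray(binding_grads, dtype=float)  # one row per binding gradient
    K, N = G.shape
    c = np.zeros(N + 1)
    c[-1] = -1.0                                # linprog minimizes, so use -t
    A_ub = np.hstack([G, np.ones((K, 1))])      # G v + t <= 0
    b_ub = np.zeros(K)
    bounds = [(-1.0, 1.0)] * N + [(0.0, 1.0)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.status == 0 and -res.fun > tol

# Example 9: grad g1(0) = (0, 1), grad g2(0) = (0, -1). MF fails.
print(mf_holds([[0.0, 1.0], [0.0, -1.0]]))  # False
# Example 10: grad g(0) = 1. MF holds (take v = -1).
print(mf_holds([[1.0]]))                    # True
```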

4.4 The Karush-Kuhn-Tucker Theorem (KKT).

Given a MAX problem, a local solution is a local maximum for the given objective function and constraint set. The main result of these notes is the following, which uses MF for constraint qualification. In their original formulations, Karush (1939) and Kuhn and Tucker (1951) used an even weaker form of constraint qualification. I discuss this briefly in Section 4.8.

Theorem 2 (Karush-Kuhn-Tucker). Let $x^*$ be a local solution to a differentiable MAX in standard form. If $J = \emptyset$, then $\nabla f(x^*) = 0$. If $J \neq \emptyset$ and MF holds at $x^*$, then for every $k \in J$ there is a $\lambda_k \geq 0$ such that

$$\nabla f(x^*) = \sum_{k \in J} \lambda_k \nabla g_k(x^*).$$

This equality is called the KKT condition and the $\lambda_k$ are called the KKT multipliers.

Proof. I claim that if $x^*$ is feasible but the KKT condition does not hold then there is a $v \in \mathbb{R}^N$ such that $\nabla f(x^*) \cdot v > 0$ and, for any $k \in J$, $\nabla g_k(x^*) \cdot v < 0$.

Suppose for the moment that this claim is true. Since $f$ is differentiable, $\nabla f(x^*) \cdot v$ equals the directional derivative of $f$ at $x^*$ in the direction $v$. Since $\nabla f(x^*) \cdot v > 0$, this implies that there is a $\gamma_f > 0$ such that for all $\gamma \in (0, \gamma_f)$, $f(x^* + \gamma v) - f(x^*) > 0$, hence $f(x^* + \gamma v) > f(x^*)$. By a similar argument, since $\nabla g_k(x^*) \cdot v < 0$ for any $k \in J$, there is a $\gamma_k > 0$ such that for all $\gamma \in (0, \gamma_k)$, $g_k(x^* + \gamma v) - g_k(x^*) < 0$, hence $g_k(x^* + \gamma v) < 0$ (since $k \in J$, hence $g_k(x^*) = 0$). Finally, for any $k \notin J$, since $g_k(x^*) < 0$ and $g_k$ is continuous (since it is differentiable), there is a $\gamma_k > 0$ such that for all $\gamma \in (0, \gamma_k)$, $g_k(x^* + \gamma v) < 0$. Let $\bar\gamma$ be the minimum of $\gamma_f$ and the $\gamma_k$; note that $\bar\gamma > 0$ since it is a minimum of a finite set of strictly positive numbers. Then for all $\gamma \in (0, \bar\gamma)$,

1. $f(x^* + \gamma v) > f(x^*)$,

2. $g_k(x^* + \gamma v) < 0$ for all $k$ (if there are any constraints at all).

Thus, for all $\gamma \in (0, \bar\gamma)$, $x^* + \gamma v$ is feasible and yields a higher value for the objective function. Therefore $x^*$ is not a local solution.

It remains to show that the above claim is true.

1. $J = \emptyset$. In this case, the KKT condition failing means that $\nabla f(x^*) \neq 0$. Then $\nabla f(x^*) \cdot \nabla f(x^*) > 0$, which proves the claim with $v = \nabla f(x^*)$. This is essentially the same argument as in the proof of Theorem 1.

2. $J \neq \emptyset$. As already discussed in Section 4.2, the KKT condition is equivalent to requiring that $\nabla f(x^*) \in A$, where

$$A = \left\{ x \in \mathbb{R}^N : \exists\, \lambda_k \geq 0 \text{ s.t. } x = \sum_{k \in J} \lambda_k \nabla g_k(x^*) \right\}$$

is the cone positively spanned by the gradients of the binding constraints. I now argue by contraposition: if $\nabla f(x^*) \notin A$ then there exists a $v \in \mathbb{R}^N$ such that $\nabla f(x^*) \cdot v > 0$ and for all $k \in J$, $\nabla g_k(x^*) \cdot v < 0$. As already discussed, this will complete the proof.

As proved in the notes on Cones, the cone $A$ is closed and convex. By the Separating Hyperplane Theorem for Cones in the notes on Cones, there is a $\hat v \in \mathbb{R}^N$ such that $\nabla f(x^*) \cdot \hat v > 0$ and for all $a \in A$, $a \cdot \hat v \leq 0$. [2]

[2] The notes, A Basic Separation Theorem for Cones, provide a separation theorem that is slightly weaker but is adequate for this application and has a more self-contained proof.

By MF, there is a $\tilde v \in \mathbb{R}^N$ such that $\nabla g_k(x^*) \cdot \tilde v < 0$ for all $k \in J$. Take $\theta \in (0, 1)$ and let $v = \theta \tilde v + (1 - \theta) \hat v$. Then $\nabla f(x^*) \cdot v = \theta\, \nabla f(x^*) \cdot \tilde v + (1 - \theta)\, \nabla f(x^*) \cdot \hat v$, which is strictly positive for $\theta$ small even if $\nabla f(x^*) \cdot \tilde v$ is negative. And $\nabla g_k(x^*) \cdot v = \theta\, \nabla g_k(x^*) \cdot \tilde v + (1 - \theta)\, \nabla g_k(x^*) \cdot \hat v$, which is strictly negative for any $\theta \in (0, 1)$. This completes the proof.

The picture is as in Figure 2.

[Figure 2: The separation argument behind KKT. The vector $v$ separates $\nabla f(x^*)$ from the cone $A$ positively spanned by $\nabla g_1(x^*)$ and $\nabla g_2(x^*)$.]

The KKT multipliers need not be unique.

Example 11. $N = 1$, $f(x) = x$, $g_1(x) = x$ and $g_2(x) = 2x$. Then the constraint set is $(-\infty, 0]$, the solution is at $x^* = 0$, MF holds (take $v = -1$), and the KKT condition is $1 = \lambda_1 + 2\lambda_2$. Any $\lambda_1, \lambda_2 \geq 0$ that satisfy this expression will work. For example, both $\lambda_1 = 1, \lambda_2 = 0$ and $\lambda_1 = 0, \lambda_2 = 1/2$ are valid.

The KKT multipliers are unique if the gradients of the binding constraints are linearly independent, a condition that I call LI. As I discuss in Section 4.8, LI implies MF.

The KKT Theorem is sometimes expressed in an alternate form, which I use in the notes on Convex Optimization.

Theorem 3 (KKT Theorem - Alternate Form). Let $x^*$ be a local solution to a differentiable MAX in standard form. If PI (defined in Section 4.8, where it is shown to be equivalent to MF) holds at $x^*$, then for every $k$ there is a $\lambda_k \geq 0$ such that

$$\nabla f(x^*) = \sum_k \lambda_k \nabla g_k(x^*),$$

with $\lambda_k = 0$ if $k \notin J$. In particular, if $J = \emptyset$ then $\nabla f(x^*) = 0$.

Proof. This is just a reinterpretation of Theorem 2.

Remark 1. Thus, in Theorem 3, for every $k$, $\lambda_k = 0$ if $g_k(x^*) < 0$ and $g_k(x^*) = 0$ if $\lambda_k > 0$. This is called complementary slackness and can also be written $\lambda_k g_k(x^*) = 0$.

4.5 Checking MF: The Slater Condition.

Theorem 4 below shows that a sufficient condition for MF is that the constraint functions are convex and that the following condition holds.

Definition 3. A MAX problem in standard form satisfies the Slater Condition iff there is a point $\bar x$ such that $g_k(\bar x) < 0$ for all $k$.

The Slater Condition is equivalent to requiring that the constraint set $C$ has a non-empty interior. For the modified MAX problem of Section 4.6, in which equality constraints are added to the problem, the Slater Condition is equivalent to requiring that $C$ have a non-empty relative interior (relative to the subset of $\mathbb{R}^N$ defined by the equality constraints).

Informally, Slater is a non-local version of MF. MF implies Slater: MF implies that there are points arbitrarily near $x^*$ that are interior to the constraint set, which implies, in particular, that the constraint set has a non-empty interior. The converse, that Slater implies MF, is not true in general but is true with additional conditions.

Theorem 4. In a differentiable MAX problem, at any feasible $x^*$, Slater is equivalent to MF if either

1. each binding constraint function $g_k$ is convex, or

2. each binding constraint function $g_k$ is quasi-convex and $\nabla g_k(x^*) \neq 0$.

An analogous result holds for MIN problems, with concave exchanged for convex.

Proof. As already noted above, if MF holds then Slater holds. Conversely, MF holds vacuously if no constraints are binding. Suppose then that at least one constraint is binding. Let $\bar x$ be as in the Slater condition and let $v = \bar x - x^*$. I claim that for any $k \in J$, $\nabla g_k(x^*) \cdot v < 0$. Consider any $k \in J$.

1. Suppose that $g_k$ is convex. Then $g_k(\bar x) \geq \nabla g_k(x^*) \cdot (\bar x - x^*) + g_k(x^*)$. Since $g_k(\bar x) < 0$ (by assumption) and $g_k(x^*) = 0$ (since $k \in J$), and since $v = \bar x - x^*$, this implies $\nabla g_k(x^*) \cdot v < 0$, as was to be shown.

2. Suppose that $g_k$ is quasi-convex and that $\nabla g_k(x^*) \neq 0$. By continuity, since $g_k(\bar x) < 0$, there is an $\varepsilon > 0$ such that for all $w$ on the unit sphere in $\mathbb{R}^N$, $g_k(\bar x + \varepsilon w) < 0$. Since $g_k(x^*) = 0$ (since $k \in J$), it follows that $g_k(\bar x + \varepsilon w) < g_k(x^*)$. Hence for any $\theta \in (0, 1)$, by quasi-convexity, $g_k(\theta(\bar x + \varepsilon w) + (1 - \theta)x^*) \leq g_k(x^*)$. Rewriting, $g_k(x^* + \theta(\bar x + \varepsilon w - x^*)) \leq g_k(x^*)$, or $g_k(x^* + \theta(v + \varepsilon w)) - g_k(x^*) \leq 0$. Dividing by $\theta$ and taking the limit as $\theta \to 0$ gives the directional derivative of $g_k$ at $x^*$ in the direction $v + \varepsilon w$. Since $g_k$ is differentiable, this directional derivative is equal to $\nabla g_k(x^*) \cdot (v + \varepsilon w)$, hence $\nabla g_k(x^*) \cdot (v + \varepsilon w) \leq 0$, or $\nabla g_k(x^*) \cdot v + \varepsilon\, \nabla g_k(x^*) \cdot w \leq 0$. This holds for any $w$ on the unit sphere. Since $\nabla g_k(x^*) \neq 0$, there is a $w$ for which $\nabla g_k(x^*) \cdot w > 0$, which implies $\nabla g_k(x^*) \cdot v < 0$.

In practice, in economic applications, all constraints are often convex in MAX problems (or concave in MIN problems) and so checking MF constraint qualification boils down to checking Slater, which is often trivial.

In the following example, the constraint function is convex but Slater does not hold and MF fails.

Example 12. The domain is $\mathbb{R}$. $f(x) = x$. $g(x) = x^2$, which is convex. The constraint set is $C = \{0\}$, which has an empty interior. The solution is (trivially) $x^* = 0$. MF fails since $\nabla g(0) = 0$ and hence there is no $v \in \mathbb{R}$ such that $\nabla g(0) \cdot v < 0$.

On the other hand, in the next example, Slater holds but the constraint function condition fails and MF fails, again because the gradient of the constraint function vanishes at $x^*$.

Example 13. Recall Example 8 in Section 4.3, with domain $\mathbb{R}$, $f(x) = x$, $g(x) = x^3$, and $x^* = 0$. MF fails. In this example, Slater holds (take $\bar x = -1$) but the constraint violates both of the two conditions in Theorem 4: $g$ is not convex in the constraint region, and while it is quasi-convex, $\nabla g(0) = 0$, so, once again, there is no $v \in \mathbb{R}$ such that $\nabla g(0) \cdot v < 0$.
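Here is a quick numeric illustration of the convex case in Theorem 4 (a sketch under an assumed constraint $g(x) = x_1^2 + x_2^2 - 1$; this example is not from the notes): the direction from the binding point toward a Slater point has a strictly negative inner product with the binding constraint gradient.

```python
# With g convex, a Slater point xbar gives a direction v = xbar - x* with
# grad g(x*) . v < 0 at any binding x*.
import numpy as np

grad_g = lambda x: 2.0 * np.asarray(x)  # gradient of g(x) = x1^2 + x2^2 - 1

x_star = np.array([1.0, 0.0])   # g is binding here: g(x_star) = 0
x_bar = np.array([0.0, 0.0])    # Slater point: g(x_bar) = -1 < 0

v = x_bar - x_star
print(grad_g(x_star) @ v)       # -2.0 < 0, so MF holds at x_star with this v
```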

Finally, here is a more elaborate example, a variation on Example 9 in Section 4.3, in which Slater holds, the gradients of the constraint functions do not vanish at $x^*$, but MF still fails.

Example 14. The domain is $\mathbb{R}^2$. Let $f(x_1, x_2) = x_1$. Let $g_1(x_1, x_2) = x_1^2(x_1 + 1) - x_2$, $g_2(x_1, x_2) = x_1^2(x_1 + 1) + x_2$. The constraint set is the union of the origin and the points $(x_1, x_2)$ with $x_1 \leq -1$ and $x_2 \in [x_1^2(x_1 + 1), -x_1^2(x_1 + 1)]$. The solution is $x^* = (0, 0)$. But $\nabla f(0, 0) = (1, 0)$ while $\nabla g_1(0, 0) = (0, -1)$ and $\nabla g_2(0, 0) = (0, 1)$, and hence there are no $\lambda_1, \lambda_2$ such that $\nabla f(0, 0) = \lambda_1 \nabla g_1(0, 0) + \lambda_2 \nabla g_2(0, 0)$. In this example, neither constraint function is quasi-convex.

4.6 Equality constraints.

Economists sometimes want to consider constraints that must hold with equality. A standard economic example is the budget constraint $p \cdot x = m$, or $p \cdot x - m = 0$. Requiring $p \cdot x = m$ is equivalent to requiring that both $p \cdot x \leq m$ and $p \cdot x \geq m$ hold simultaneously. There is thus a sense in which equality constraints are just special cases of inequality constraints, and accordingly equality constraints ought to fit within the KKT framework. Without belaboring the issue, a minor modification of the KKT theorem (which I will not prove or even state formally) says that if the equality constraints are given by $r_l(x) = 0$ then the first order conditions for a local maximum are that there are multipliers $\lambda_k \geq 0$ and $\mu_l \in \mathbb{R}$ such that

$$\nabla f(x^*) = \sum_{k \in J} \lambda_k \nabla g_k(x^*) + \sum_l \mu_l \nabla r_l(x^*).$$

Note that the $\mu_l$, the multipliers on the equality constraints, could be either positive or negative.

What economists model as equality constraints are often just binding inequality constraints. For example, in the case of utility maximization, the constraint $p \cdot x = m$ is really $p \cdot x \leq m$. The consumer is physically allowed to set $p \cdot x < m$ (i.e., to spend less than her income) but it is not optimal to do so (the constraint is binding at the solution) under standard assumptions. It is accordingly understood that the sign of the KKT multiplier on the budget constraint cannot be negative. I will not dwell further on equality constraints.
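The first order conditions with an equality constraint can be solved symbolically. Here is a minimal sketch (assuming illustrative values $p = (1, 1)$ and $m = 2$ with utility $x_1 x_2$; none of this appears in the notes):

```python
# Solve grad f = mu * grad r together with r(x) = 0; mu is free in sign.
import sympy as sp

x1, x2, mu = sp.symbols("x1 x2 mu", real=True)
f = x1 * x2
r = x1 + x2 - 2                      # equality constraint r(x) = 0

foc = [sp.diff(f, x1) - mu * sp.diff(r, x1),
       sp.diff(f, x2) - mu * sp.diff(r, x2),
       r]
print(sp.solve(foc, [x1, x2, mu]))   # solution: x1 = x2 = 1, mu = 1
```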

4.7 Non-negativity constraints.

It is common in economic applications to require that $x \geq 0$. This generates $N$ constraints of the form $-x_n \leq 0$. From the KKT condition,

$$\nabla f(x^*) = \sum_{k \in J} \lambda_k \nabla g_k(x^*).$$

Suppose that the non-negativity constraint for $x_n$ is constraint $k$, and that, moreover, this constraint is binding. If $\lambda_k > 0$, then $\lambda_k \nabla g_k(x^*) = (0, \dots, 0, -\lambda_k, 0, \dots, 0)$, with $-\lambda_k$ in the $n$th place, so this constraint is lowering the right-hand side of the above equality. Let $I$ be the set of binding constraints other than the non-negativity constraints. Then

$$\nabla f(x^*) \leq \sum_{k \in I} \lambda_k \nabla g_k(x^*).$$

I mention this because many authors do not treat the conditions $-x_n \leq 0$ as explicit constraints. Because of this, these authors state the KKT condition as an inequality. I think it is easier, and safer, to remember the KKT condition as an equality, with all binding constraints explicit.

4.8 Other forms of constraint qualification.

MF is just one of many forms of constraint qualification, any of which suffices to imply that the KKT condition holds at a local solution. I will not be exhaustive but here are some of the more important alternatives. Let $S$ be the set of the gradients of the binding constraints, $S = \{\nabla g_k(x^*)\}_{k \in J}$. Recall (from the notes on Cones) that a cone $A$ is pointed iff for any $a \in A$ with $a \neq 0$, $-a \notin A$.

Here are three other forms of constraint qualification.

Definition 4. Linearly independent constraint qualification (LI) holds at $x^*$ iff $S$ is linearly independent: if $\sum_{k \in J} \lambda_k \nabla g_k(x^*) = 0$ then $\lambda_k = 0$ for all $k \in J$. LI holds vacuously if $J = \emptyset$.

Definition 5. Positive linearly independent constraint qualification (PI) holds at $x^*$ iff $S$ is positively linearly independent: if $\sum_{k \in J} \lambda_k \nabla g_k(x^*) = 0$ and $\lambda_k \geq 0$ for all $k \in J$ then $\lambda_k = 0$ for all $k \in J$. PI holds vacuously if $J = \emptyset$.

Definition 6. Pointed cone constraint qualification (PC) holds at $x^*$ iff $\nabla g_k(x^*) \neq 0$ for all $k \in J$ and $A$ is pointed. PC holds vacuously if $J = \emptyset$.

It is, I hope, obvious that LI implies PI. Here are examples showing that the converse is not true in general.

Example 15. $N = 2$ and there are three binding constraints, with $\nabla g_1(x^*) = (1, 0)$, $\nabla g_2(x^*) = (0, 1)$, and $\nabla g_3(x^*) = (1, 1)$. The cone $A$ is the non-negative orthant. LI fails here, since $\nabla g_1(x^*) + \nabla g_2(x^*) - \nabla g_3(x^*) = 0$, but PI can be shown to hold (this will follow from Theorem 5 below). Note here that the cone $A$ is positively spanned by the independent vectors $\nabla g_1(x^*)$ and $\nabla g_2(x^*)$; $\nabla g_3(x^*)$ is, in a sense, redundant.
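PI, like MF, can be checked by linear programming. A sketch (my construction; `pi_holds` is a hypothetical helper name): PI fails exactly when some non-negative, non-zero $\lambda$ makes the gradients sum to zero, which after normalizing $\sum_k \lambda_k = 1$ becomes an LP feasibility question.

```python
import numpy as np
from scipy.optimize import linprog

def pi_holds(binding_grads):
    G = np.asarray(binding_grads, dtype=float)  # one row per gradient
    K, N = G.shape
    # Constraints: sum_k lambda_k grad g_k = 0 and sum_k lambda_k = 1, lambda >= 0.
    A_eq = np.vstack([G.T, np.ones((1, K))])
    b_eq = np.concatenate([np.zeros(N), [1.0]])
    res = linprog(np.zeros(K), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * K)
    return res.status != 0                      # infeasible LP <=> PI holds

# Example 15: PI holds even though LI fails.
print(pi_holds([[1, 0], [0, 1], [1, 1]]))        # True
# Example 9: grad g1 = (0, 1), grad g2 = (0, -1); PI fails (lambda = (1/2, 1/2)).
print(pi_holds([[0, 1], [0, -1]]))               # False
```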

Example 16. $N = 3$ and there are four binding constraints, with $\nabla g_1(x^*) = (1, 0, 0)$, $\nabla g_2(x^*) = (0, 1, 0)$, $\nabla g_3(x^*) = (1, 0, 1)$, $\nabla g_4(x^*) = (0, 1, 1)$. The cone $A$ has a square cross-section. LI fails here, since $\nabla g_1(x^*) - \nabla g_2(x^*) - \nabla g_3(x^*) + \nabla g_4(x^*) = 0$, but PI can be shown to hold (again, this will follow from Theorem 5 below). Note that all four binding constraint gradients are needed to positively span $A$, even though the gradients are not independent. No gradient is redundant in the same sense as in Example 15.

On the other hand, the three conditions MF, PI, and PC are all equivalent. Thus, if MF holds then the cone $A$ is pointed, as drawn in Figure 2.

Theorem 5. MF, PI, and PC are equivalent.

Proof. If $J = \emptyset$ then the claim holds vacuously. Assume, therefore, that $J \neq \emptyset$.

MF $\Rightarrow$ PI. By contraposition. Suppose that PI fails. Then there are $\lambda_k \geq 0$, with at least one $\lambda_k > 0$, such that $\sum_{k \in J} \lambda_k \nabla g_k(x^*) = 0$. Then for any $v \in \mathbb{R}^N$, $0 = 0 \cdot v = \left( \sum_{k \in J} \lambda_k \nabla g_k(x^*) \right) \cdot v = \sum_{k \in J} \lambda_k (\nabla g_k(x^*) \cdot v)$; since the $\lambda_k$ are non-negative and not all zero, the $\nabla g_k(x^*) \cdot v$ cannot all be strictly negative, which implies that MF cannot hold.

PI $\Rightarrow$ PC. By contraposition. If $\nabla g_k(x^*) = 0$ for any $k \in J$ then PI fails (take $\lambda_k > 0$ for this $k$ and zero otherwise). On the other hand, suppose that $A$ is not pointed. Then there is an $a \in A$ such that $a \neq 0$ and $-a \in A$. By definition of $A$, there are weights $\lambda_k \geq 0$ such that $a = \sum_{k \in J} \lambda_k \nabla g_k(x^*)$, and weights $\hat\lambda_k \geq 0$ such that $-a = \sum_{k \in J} \hat\lambda_k \nabla g_k(x^*)$. But then $0 = a - a = \sum_{k \in J} (\lambda_k + \hat\lambda_k) \nabla g_k(x^*)$. Since $a \neq 0$, $\lambda_k + \hat\lambda_k \neq 0$ for at least one $k$, hence PI fails.

PC $\Rightarrow$ MF. It is almost immediate that $A$ is a convex cone. Because $A$ is finitely generated, it is closed; see the notes on Cones. By assumption, $A$ is pointed. Then the Supporting Hyperplane Theorem for Pointed Cones in the notes on Cones implies that there is a $v$ such that $a \cdot v < 0$ for all $a \in A$, $a \neq 0$. By assumption, $\nabla g_k(x^*) \neq 0$ for every $k \in J$, and the result follows (take $a = \nabla g_k(x^*)$).

Taking stock, LI is stronger than PI and hence stronger than MF. Theorem 4 is false for LI (Slater plus convexity of the constraints does not imply LI) and, as a consequence, LI is potentially more difficult to verify, which is the main reason why I have emphasized MF rather than LI. But LI has some advantages. First, if LI holds then the KKT multipliers not only exist but are unique.

In this context, look back at Example 11, where MF holds but LI fails and the multipliers are not unique. Second, if LI holds then there is an existence proof for KKT multipliers that is elementary in the sense of not relying on a separation argument. I combine both of these observations in the next theorem and its proof.

Theorem 6. Let $x^*$ be a local solution to a differentiable MAX in standard form. If $J = \emptyset$, then $\nabla f(x^*) = 0$. If $J \neq \emptyset$ and LI holds at $x^*$, then there are unique $\lambda_k \geq 0$, $k \in J$, such that $\nabla f(x^*) = \sum_{k \in J} \lambda_k \nabla g_k(x^*)$.

Proof. Since LI implies MF, existence of the KKT multipliers is given by Theorem 2. Uniqueness is guaranteed by a standard linear algebra argument.

As noted above, if LI holds then one can prove existence of the KKT multipliers without invoking a separation argument. The following proof is attributed in Kreps (2012) to Ben Porath. Suppose that $J \neq \emptyset$. For ease of notation (and this is only for ease of notation), suppose that all $K$ constraints are binding at $x^*$.

Suppose first that $\nabla f(x^*)$ is not in the vector space spanned by the $\nabla g_k(x^*)$. LI then implies that the set $\{\nabla f(x^*), \nabla g_1(x^*), \dots, \nabla g_K(x^*)\}$ is independent, in which case the matrix, call it $B$, formed by using these $K + 1$ vectors as rows, has rank $K + 1$. Then there is a $v \in \mathbb{R}^N$ such that $Bv = (1, -1, \dots, -1) \in \mathbb{R}^{K+1}$, hence, in particular, $\nabla f(x^*) \cdot v > 0$ and $\nabla g_k(x^*) \cdot v < 0$ for all $k$. As in the above proof of KKT, this implies that $x^*$ is not a local maximum. By contraposition, the above set cannot be independent, and hence there exist $\lambda_k$ (not necessarily non-negative) such that $\nabla f(x^*) = \sum_k \lambda_k \nabla g_k(x^*)$.

It remains to show that $\lambda_k \geq 0$. Suppose that $\nabla f(x^*) = \sum_k \lambda_k \nabla g_k(x^*)$ but with some $\lambda_k < 0$. For ease of notation, suppose $\lambda_1 < 0$. Choose $M < 0$, hence $M\lambda_1 > 0$, such that $M\lambda_1 - \sum_{k > 1} \lambda_k > 0$. Let $\hat B$ be the matrix formed by using the $K$ constraint gradients as rows. Then by LI, $\hat B$ has rank $K$ and hence there is a $v$ such that $\hat B v = (M, -1, \dots, -1) \in \mathbb{R}^K$. Since $\nabla f(x^*) = \sum_k \lambda_k \nabla g_k(x^*)$,

$$\nabla f(x^*) \cdot v = \sum_k \lambda_k \nabla g_k(x^*) \cdot v = M\lambda_1 - \sum_{k > 1} \lambda_k > 0.$$

On the other hand, $\nabla g_k(x^*) \cdot v < 0$ for all $k$. Once again, this implies that $x^*$ is not a local maximum. Hence, by contraposition, $\lambda_k \geq 0$ for all $k$.

Remark 2. Continuing in the spirit of the proof of Theorem 6, suppose that, as in Example 15, one can find a subset of the binding constraint gradients that (a) positively spans $A$ and (b) is independent. In this case, one can still use the linear algebra argument in the proof of Theorem 6 even if LI is violated: work just with an independent set of binding constraint gradients that positively spans $A$ and set $\lambda_k = 0$ for all of the other binding constraints. But if, as in Example 16, there is no subset of binding constraint gradients that (a) positively spans $A$ and (b) is independent, then this proof of KKT no longer applies.
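Under LI, the KKT condition is a linear system in the multipliers, which is what drives the uniqueness claim in Theorem 6. A sketch (mine; the gradient of $f$ below is a made-up illustration):

```python
# Recover KKT multipliers as the unique solution of a linear system when the
# binding gradients are linearly independent, then check non-negativity.
import numpy as np

def kkt_multipliers(grad_f, binding_grads):
    G = np.asarray(binding_grads, dtype=float)  # rows: grad g_k(x*)
    lam, _, rank, _ = np.linalg.lstsq(G.T, np.asarray(grad_f, dtype=float), rcond=None)
    assert rank == G.shape[0], "LI fails: gradients not linearly independent"
    return lam

# Example 15's three gradients are NOT independent, so (as Remark 2 suggests)
# work with the independent pair that positively spans A; grad f = (2, 3) is
# hypothetical.
print(kkt_multipliers([2.0, 3.0], [[1, 0], [0, 1]]))  # [2. 3.]
```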

In practice, examples in which MF fails, or even where LI fails, are extremely rare. For completeness, however, I discuss briefly here weaker conditions for existence of KKT multipliers.

The set $A$ formed by the gradients of the binding constraints is always a closed, convex cone. Even without constraint qualification, if the KKT condition fails then the Separating Hyperplane Theorem for Cones in the notes on Cones implies that there is a vector $v \neq 0$ such that $\nabla f(x^*) \cdot v > 0$ while $\nabla g_k(x^*) \cdot v \leq 0$ for all binding $k$. Constraint qualification was introduced to guarantee that there is a $v$ such that, in fact, $\nabla g_k(x^*) \cdot v < 0$, which ensures that for small $\gamma > 0$, $g_k(x^* + \gamma v) < 0$, so that $x^* + \gamma v$ is feasible. But this is sometimes stronger than necessary. If the binding constraints are concave, which includes linear constraints as a special case, then $x^* + \gamma v$ is feasible even when $\nabla g_k(x^*) \cdot v = 0$: by concavity, $g_k(x^* + \gamma v) \leq \nabla g_k(x^*) \cdot (x^* + \gamma v - x^*) + g_k(x^*)$, hence $g_k(x^* + \gamma v) \leq 0$ since $\nabla g_k(x^*) \cdot (\gamma v) = \gamma\, \nabla g_k(x^*) \cdot v = 0$ and $g_k(x^*) = 0$.

Example 17. The domain is $\mathbb{R}^2_+$. $f(x) = x_1 x_2$. $g_1(x) = x_1 + x_2 - 2$. $g_2(x) = 2 - x_1 - x_2$. The constraint set is the diagonal line segment $C = \{(x_1, x_2) \in \mathbb{R}^2_+ : x_1 + x_2 = 2\}$. The solution is $x^* = (1, 1)$. MF fails since PC fails: $A = \{x \in \mathbb{R}^2 : x_1 = x_2\}$, which is not pointed. But KKT multipliers exist: $\nabla f(x^*) = \nabla g_1(x^*) = (1, 1)$ and $\nabla g_2(x^*) = (-1, -1)$, so take $\lambda_1 = 1$, $\lambda_2 = 0$. As one would expect since LI fails, these multipliers are not unique: $\lambda_1 = 2$, $\lambda_2 = 1$ also works.

But in the general case, if $\nabla g_k(x^*) \cdot v = 0$, then $x^* + \gamma v$ need not be feasible even for small $\gamma$. Essentially the same proof will, however, continue to work if I can find feasible points arbitrarily near $x^*$ in (approximately) the direction $v$. Example 9 in Section 4.3 illustrates one difficulty that can arise. In this example, at $x^* = 0$, $\nabla f(x^*) = (1, 0)$ and $A$ is the $x_2$ axis. A $v$ that separates $\nabla f(x^*)$ from $A$ must lie along the $x_1$ axis. Take $v = (1, 0)$. For each constraint $k$, there is a sequence of points $\{x^t\}$ satisfying that constraint that approximates $x^* + \gamma v$ in the sense that $(x^t - x^*)/\|x^t - x^*\| \to v$. For example, for $k = 1$, take $x^t = (1/t, -1/t^2)$. But the order of quantifiers matters: one needs a different sequence for each constraint. In particular, the sequence just given for $g_1$ violates $g_2$ and hence the sequence is not feasible.

Both Karush (1939) and Kuhn and Tucker (1951) assume that, for any $v$ such that $\nabla g_k(x^*) \cdot v \leq 0$ for all $k \in J$, there is a sequence of feasible points that approximates $x^* + \gamma v$ in the sense just discussed. Informally, they assume that any direction $v$ that should be feasible, given the derivatives of the binding $g_k$, actually is feasible. This form of constraint qualification is strictly weaker than MF, but it is difficult to check directly and examples in which the difference matters tend to be artificial.

4.9 Binding constraints, active constraints, and slack constraints.

Recall that a constraint $k$ is binding if $g_k(x^*) = 0$. It is slack otherwise. This terminology is somewhat misleading. First, a constraint can be binding but irrelevant.

Example 18. Let $f(x) = -x^2$ and let the constraint be $x \geq 0$; thus $g(x) = -x$. The solution is at $x^* = 0$. The constraint is binding but $\lambda = 0$ because $\nabla f(x^*) = 0$. Relaxing the constraint does not affect the solution.

In Example 18, $\lambda = 0$ even though the constraint is binding. Call a binding constraint $k$ active if $\lambda_k > 0$. [3] Although Example 18 provides a counterexample, binding constraints will typically be active.

[3] Many references use active to mean binding; here I am taking active to be more restrictive than binding.

Second, slack constraints can affect the global solution, as the next example illustrates.

Example 19. Let the constraints be $x \leq 1$ and $-x \leq 1$, hence $g_1(x) = x - 1$ and $g_2(x) = -x - 1$. Let $f(x) = -x^2 + x^4$. The graph of $f$ looks like a W. There are constrained maxima at $-1$, $0$, and $1$. At $x^* = 0$, $g_1(x^*) < 0$ and $g_2(x^*) < 0$, so the constraints are not binding. Nevertheless, the restriction to the constraint set matters. If either constraint were relaxed then the maximized value of the objective function would increase. For example, if the constraint $x \leq 1$ is changed to $x \leq 3$ then the unique constrained maximum is $x^* = 3$. In particular, 0 is no longer a maximum.

The underlying issue here is that KKT is a result about local, rather than global, maximization. Even if the constraint is relaxed, $x^* = 0$ remains a local maximum. The fact that the constraints are slack at $x^* = 0$ correctly reflects this. The problem illustrated by Example 19 does not occur if the objective function is concave, in which case local maxima are global maxima.

5 Using KKT.

The bottom line here is bad news: there is no simple procedure for finding points $x^*$ and multipliers $\lambda_k$ that satisfy KKT. A systematic procedure would be to try first to solve the unconstrained problem. If you can do so without violating a constraint, then you are done. If not, pick a constraint and look for solutions to the problem in which this one constraint is binding. For example, if the binding constraint happens to be labeled 1, then we get

$$\nabla f(x^*) = \lambda_1 \nabla g_1(x^*), \qquad g_1(x^*) = 0.$$

This gives $N + 1$ equations (not necessarily linear) in $N + 1$ unknowns, namely the $x^*$ and $\lambda_1$. With good fortune, you may be able to solve it analytically. Having gone through all $K$ constraints one by one, if you find solutions that do not violate the other constraints, choose the one that maximizes the value of $f$ (there may be more than one). If, on the other hand, there are no solutions within the constraint set $C$, then start looking at the constraints two at a time. And so on. You may be able to cut down on the tedium if you can be clever and figure out which constraints are likely to be binding. A small computational sketch of this search appears below.
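Here is that sketch (my own illustration on a hypothetical quadratic problem, not code from the notes): for each candidate binding set $J$, solve the KKT system numerically, then screen for feasibility and non-negative multipliers.

```python
# Enumerate candidate binding sets for the hypothetical problem
#   max -(x1 - 2)^2 - (x2 - 1)^2  s.t.  x1 + x2 - 2 <= 0,  -x1 <= 0.
import itertools
import numpy as np
from scipy.optimize import fsolve

grad_f = lambda x: np.array([-2.0 * (x[0] - 2.0), -2.0 * (x[1] - 1.0)])
g = [lambda x: x[0] + x[1] - 2.0, lambda x: -x[0]]
grad_g = [lambda x: np.array([1.0, 1.0]), lambda x: np.array([-1.0, 0.0])]

candidates = []
for r in range(len(g) + 1):
    for J in itertools.combinations(range(len(g)), r):
        def kkt_system(z, J=J):
            x, lam = z[:2], z[2:]
            stat = grad_f(x) - sum(l * grad_g[k](x) for l, k in zip(lam, J))
            return np.concatenate([stat, [g[k](x) for k in J]])
        z = fsolve(kkt_system, np.ones(2 + len(J)))
        x, lam = z[:2], z[2:]
        if all(g[k](x) <= 1e-8 for k in range(len(g))) and np.all(lam >= -1e-8):
            candidates.append((J, x.round(6), lam.round(6)))

print(candidates)  # [((0,), array([1.5, 0.5]), array([1.]))]
```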

Another useful fact to remember when doing maximization problems is the following. In the theorem below, $C$ could be any set at all, not necessarily a subset of $\mathbb{R}^N$.

Theorem 7. Consider any set $C$ and any function $f : X \to \mathbb{R}$. Let $h : \mathbb{R} \to \mathbb{R}$ be any strictly increasing function. Then $x^*$ maximizes $f$ on $C$ iff it maximizes $\hat f = h \circ f$ on $C$.

Proof. Suppose $f(x^*) \geq f(x)$ for any $x$ in $C$. Then, since $h$ is strictly increasing, $h(f(x^*)) \geq h(f(x))$. And conversely.

One can sometimes simplify calculations enormously by a clever choice of $h$. Note that if you modify the objective function in this way, while you won't change the solution, you will, typically, change the KKT multipliers.
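A quick numerical check of Theorem 7 (a sketch using SciPy's SLSQP solver on the utility function from Example 20 below; convergence from this particular starting point is assumed, not guaranteed):

```python
# Maximizing f and maximizing ln(f) over the same constraint set should give
# the same x*, though the multipliers generally differ.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 0.5 * x[1] ** (1.0 / 3.0) * x[2] ** (1.0 / 6.0)
budget = {"type": "ineq", "fun": lambda x: 9.0 - 4.0 * x[0] - 8.0 * x[1] - 3.0 * x[2]}
bounds = [(1e-9, None)] * 3
x0 = np.array([0.3, 0.3, 0.3])

sol_f = minimize(lambda x: -f(x), x0, bounds=bounds, constraints=[budget])
sol_log = minimize(lambda x: -np.log(f(x)), x0, bounds=bounds, constraints=[budget])
print(sol_f.x.round(4), sol_log.x.round(4))  # both ~ (1.125, 0.375, 0.5)
```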

In the following examples, the constraints are linear, and hence, as discussed in Section 4.8, constraint qualification holds.

Example 20. Consider the problem

$$\max f(x) = x_1^{1/2} x_2^{1/3} x_3^{1/6} \quad \text{s.t.} \quad 4x_1 + 8x_2 + 3x_3 \leq 9, \; x \geq 0.$$

This could, for example, be a utility maximization problem with utility function $f$ and budget constraint given by prices $p = (4, 8, 3)$ and income 9. I need to translate this into standard form:

$$\max f(x) = x_1^{1/2} x_2^{1/3} x_3^{1/6} \quad \text{s.t.} \quad 4x_1 + 8x_2 + 3x_3 - 9 \leq 0, \; -x \leq 0.$$

All constraint functions are linear (hence convex) and Slater holds (set $x_1 = 3/8$, $x_2 = 3/16$, and $x_3 = 1/2$, for example). Therefore, by Theorem 4, MF constraint qualification holds and hence any solution must satisfy the KKT condition. The check on constraint qualification in the other examples in this section is so similar that I won't bother to make those checks explicit.

Next, note that any solution $x^*$ will be strictly positive. As already noted, for example, it is feasible to take $x_1 = 3/8$, $x_2 = 3/16$, $x_3 = 1/2$. This yields a positive value for the objective function, whereas the objective function is 0 if any $x_n = 0$. Thus, the non-negativity constraints do not bind.

On the other hand, at any solution, the first constraint must bind: if $4x_1 + 8x_2 + 3x_3 - 9 < 0$, then one could increase the objective function, which is increasing in all of its arguments, by increasing one or all of the $x_n$, if only a little, without violating the constraint.

Since at an optimum only the first constraint binds, the KKT condition is

$$\left( \frac{\gamma}{2x_1}, \frac{\gamma}{3x_2}, \frac{\gamma}{6x_3} \right) = \lambda_1 (4, 8, 3),$$

where $\gamma = x_1^{1/2} x_2^{1/3} x_3^{1/6}$. Hence

$$4x_1 = \frac{\gamma}{2\lambda_1}, \qquad 8x_2 = \frac{\gamma}{3\lambda_1}, \qquad 3x_3 = \frac{\gamma}{6\lambda_1}.$$

Substituting this into the binding constraint, $4x_1 + 8x_2 + 3x_3 - 9 = 0$, yields

$$\frac{\gamma}{2\lambda_1} + \frac{\gamma}{3\lambda_1} + \frac{\gamma}{6\lambda_1} = 9,$$

or $\gamma/\lambda_1 = 9$. Substituting this back into the KKT condition yields $4x_1 = 9/2$, $8x_2 = 3$, and $3x_3 = 3/2$, or

$$x^* = \left( \frac{9}{8}, \frac{3}{8}, \frac{1}{2} \right) \geq 0,$$

while

$$\lambda_1 = \frac{1}{48}\, 3^{1/3}\, 2^{4/3} \approx 0.0757.$$

I finish with two remarks. First, note that I have not actually shown that $x^*$ is a solution. This will follow from Part II of these notes, which give sufficient conditions.

Second, I could have made the calculations tidier by working with the log of the objective function. By Theorem 7, this yields a new maximization problem with the same solution as the original one (as you can verify by direct calculation):

$$\max \hat f(x) = \frac{1}{2} \ln(x_1) + \frac{1}{3} \ln(x_2) + \frac{1}{6} \ln(x_3) \quad \text{s.t.} \quad 4x_1 + 8x_2 + 3x_3 - 9 \leq 0, \; -x \leq 0.$$

Strictly speaking, $\ln(f(x))$ is not defined if any $x_n = 0$. But I have just argued that no such $x$ can be a solution. This sort of technicality is common in economic optimization problems and is typically simply ignored.

Example 21. Consider the same problem as in Example 20 but now suppose that I had guessed that

$$\tilde x = \left( \frac{3}{4}, \frac{3}{8}, 1 \right),$$

which is not actually a solution but which is feasible. Since constraint 1 binds at $\tilde x$, the KKT condition requires $\nabla f(\tilde x) = \lambda_1 \nabla g_1(\tilde x)$, or (approximately and after some tedious calculation),

$$(0.41, 0.56, 0.10) = \lambda_1 (4, 8, 3).$$

There is no $\lambda_1$ that will work. For example, to satisfy the first coordinate, $\lambda_1$ would have to be about $1/10$, but to satisfy the third it would have to be about $1/30$. Geometrically, we are detecting the fact that the vectors $\nabla f(\tilde x)$ and $\nabla g_1(\tilde x)$ point in different directions, indicating that $\tilde x$ is not a solution. And, in fact, you can compute that $f(\tilde x) \approx 0.62$ whereas $f(x^*) \approx 0.68$.

Example 22. Once again consider the same problem as in Example 20 but now suppose that I had guessed that

$$\hat x = \left( \frac{1}{4}, \frac{1}{8}, \frac{1}{3} \right).$$

This is not a solution but it is feasible. Since no constraints bind at $\hat x$, the KKT condition requires $\nabla f(\hat x) = 0$, which direct calculation verifies is not true. You can compute that $f(\hat x) \approx 0.21$ whereas, again, $f(x^*) \approx 0.68$.
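The calculations in Examples 20 and 21 are easy to verify numerically. A sketch (mine, not from the notes): dividing $\nabla f$ componentwise by the budget gradient reveals whether a single multiplier $\lambda_1$ works.

```python
import numpy as np

def grad_f(x):
    gamma = x[0] ** 0.5 * x[1] ** (1 / 3) * x[2] ** (1 / 6)
    return np.array([gamma / (2 * x[0]), gamma / (3 * x[1]), gamma / (6 * x[2])])

p = np.array([4.0, 8.0, 3.0])  # gradient of the binding budget constraint

for x in [np.array([9 / 8, 3 / 8, 1 / 2]),   # x*  from Example 20
          np.array([3 / 4, 3 / 8, 1.0])]:    # x~  from Example 21
    print(grad_f(x) / p)  # candidate multipliers, component by component

# x*: all three ratios equal ~0.0757 (= 3^(1/3) 2^(4/3) / 48), so KKT holds.
# x~: ratios ~0.104, ~0.069, ~0.035 differ, so no single lambda_1 works.
```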

In the next example, more than one constraint binds at the solution.

Example 23. Consider

$$\max \sqrt{x_1 + 1} + 2\sqrt{x_2 + 1} + 3\sqrt{x_3 + 1} \quad \text{s.t.} \quad 4x_1 + 8x_2 + 3x_3 \leq 9, \; x \geq 0.$$

Translating this into standard form,

$$\max \sqrt{x_1 + 1} + 2\sqrt{x_2 + 1} + 3\sqrt{x_3 + 1} \quad \text{s.t.} \quad 4x_1 + 8x_2 + 3x_3 - 9 \leq 0, \; -x \leq 0.$$

The objective function is strictly increasing in all arguments so I again conclude that the first constraint will be binding at the solution. It is no longer obvious, however, whether the non-negativity constraints bind. Let's guess (correctly, as it turns out) that at the solution only $x_3$ is positive. This is intuitive, since $x_3$ gets the most weight in the objective function but the least weight in the first constraint. Don't place too much confidence in this sort of intuition, however; we're just making an educated guess and we could have been wrong.

If $x_1 = x_2 = 0$ then the binding constraint $4x_1 + 8x_2 + 3x_3 = 9$ implies $x_3 = 3$. I have

$$\nabla f(x^*) = (1/2, 1, 3/4), \quad \nabla g_1(x^*) = (4, 8, 3), \quad \nabla g_2(x^*) = (-1, 0, 0), \quad \nabla g_3(x^*) = (0, -1, 0).$$

From the KKT condition, I get

$$\begin{pmatrix} 1/2 \\ 1 \\ 3/4 \end{pmatrix} = \begin{pmatrix} 4 & -1 & 0 \\ 8 & 0 & -1 \\ 3 & 0 & 0 \end{pmatrix} \begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix},$$

which can be solved to yield

$$\begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix} = \begin{pmatrix} 1/4 \\ 1/2 \\ 1 \end{pmatrix} > 0.$$

Thus, I can satisfy the KKT condition at $x^* = (0, 0, 3)$. Again, the sufficient conditions of Part II of these notes guarantee that $x^*$ actually is the solution.
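The multiplier computation in Example 23 is just a $3 \times 3$ linear solve, which can be verified directly (a sketch, not from the notes):

```python
# Columns of G are the gradients of the binding constraints at x* = (0, 0, 3).
import numpy as np

grad_f = np.array([0.5, 1.0, 0.75])
G = np.array([[4.0, -1.0, 0.0],
              [8.0, 0.0, -1.0],
              [3.0, 0.0, 0.0]])
lam = np.linalg.solve(G, grad_f)
print(lam)  # [0.25 0.5  1.  ], all positive, as found in the text
```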

Example 24. Consider the same problem as in Example 23. Suppose that I had instead guessed that only the first constraint was binding. Then the KKT condition is

$$\left( \frac{1}{2\sqrt{x_1 + 1}}, \frac{1}{\sqrt{x_2 + 1}}, \frac{3}{2\sqrt{x_3 + 1}} \right) = \lambda_1 (4, 8, 3).$$

Manipulating this, together with the binding constraint, yields

$$x = \left( -\frac{3}{5}, -\frac{3}{5}, \frac{27}{5} \right) \quad \text{and} \quad \lambda_1 = \frac{1}{16}\sqrt{10} > 0.$$

Notice that this solution satisfies the KKT condition. BUT, it is not feasible, since constraints 2 and 3 are violated.

Example 25. Once again, consider the same problem as in Example 23. Suppose that I had instead guessed the solution to be $\hat x = (9/4, 0, 0)$, which is feasible. Working through a similar calculation to the one above, but with different binding constraints, yields

$$\begin{pmatrix} 1/\sqrt{13} \\ 1 \\ 3/2 \end{pmatrix} = \begin{pmatrix} 4 & 0 & 0 \\ 8 & -1 & 0 \\ 3 & 0 & -1 \end{pmatrix} \begin{pmatrix} \lambda_1 \\ \lambda_3 \\ \lambda_4 \end{pmatrix}.$$

Solving this yields

$$\begin{pmatrix} \lambda_1 \\ \lambda_3 \\ \lambda_4 \end{pmatrix} = \begin{pmatrix} \frac{1}{4\sqrt{13}} \\ \frac{2}{\sqrt{13}} - 1 \\ \frac{3}{4\sqrt{13}} - \frac{3}{2} \end{pmatrix} \approx \begin{pmatrix} 0.1 \\ -0.4 \\ -1.3 \end{pmatrix}.$$

The critical point to notice here is that $\lambda_3, \lambda_4 < 0$. KKT requires all multipliers on binding constraints to be non-negative. This means that $\hat x$ has failed the KKT condition and therefore cannot be a maximum. If you had neglected to write the problem in standard form you might instead have found all the multipliers to be positive, and concluded (mistakenly) that $\hat x$ satisfied the KKT conditions. You can compute that $f(\hat x) \approx 6.8$ whereas $f(x^*) = 9$.

References

Karush, William. 1939. "Minima of Functions of Several Variables with Inequalities as Side Conditions." Master's Thesis, University of Chicago.

Kreps, David M. 2012. Microeconomic Foundations I: Choice and Competitive Markets. Princeton University Press.

Kuhn, H. W. and A. W. Tucker. 1951. "Nonlinear Programming." In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, pp. 481-492.

Mangasarian, O. L. and S. Fromovitz. 1967. "The Fritz John Necessary Optimality Conditions in the Presence of Equality and Inequality Constraints." Journal of Mathematical Analysis and Applications 17: 37-47.