Constrained maxima and Lagrangean saddlepoints
Division of the Humanities and Social Sciences
Ec 181 KC Border, Convex Analysis and Economic Theory, Winter 2018

Topic 10: Constrained maxima and Lagrangean saddlepoints

10.1 An alternative

As an application of the separating hyperplane theorem we present the following result due to Fan, Glicksberg, and Hoffman [1].

Concave Alternative Theorem. Let $C$ be a nonempty convex subset of a vector space, and let $f_1, \dots, f_m \colon C \to \mathbf{R}$ be concave. Letting $f = (f_1, \dots, f_m) \colon C \to \mathbf{R}^m$, exactly one of the following is true:
$$(\exists \bar x \in C)\ [\, f(\bar x) \gg 0 \,]. \tag{1}$$
Or (exclusive),
$$(\exists p > 0)\ (\forall x \in C)\ [\, p \cdot f(x) \le 0 \,]. \tag{2}$$

Proof: Clearly both cannot be true. Suppose (1) fails. Define $A = \{f(x) : x \in C\}$ and let $\hat A$ be its decreasing hull,
$$\hat A = \{\, y \in \mathbf{R}^m : (\exists x \in C)\ [\, y \le f(x) \,] \,\}.$$
Since (1) fails, we see that $A$ and $\mathbf{R}^m_{++}$ are disjoint. Consequently $\hat A$ and $\mathbf{R}^m_{++}$ are disjoint. Now $\mathbf{R}^m_{++}$ is clearly convex, and observe that $\hat A$ is also convex. (The set $A$ itself need not be convex; see, for instance, the figure.) To see this, suppose $y^1, y^2 \in \hat A$. Then $y^i \le f(x^i)$, $i = 1, 2$. Therefore, for any $\lambda \in (0, 1)$,
$$\lambda y^1 + (1-\lambda) y^2 \le \lambda f(x^1) + (1-\lambda) f(x^2) \le f\bigl(\lambda x^1 + (1-\lambda) x^2\bigr),$$
since each $f_j$ is concave. Therefore $\lambda y^1 + (1-\lambda) y^2 \in \hat A$.

Thus, by the Separating Hyperplane Theorem 8.5.1, there is a nonzero vector $p \in \mathbf{R}^m$ properly separating $\hat A$ and $\mathbf{R}^m_{++}$. We may assume
$$p \cdot \hat A \le p \cdot \mathbf{R}^m_{++}. \tag{3}$$
By Exercise 8.2.4, $p > 0$. Let $\varepsilon > 0$, so that $\varepsilon \mathbf{1} \gg 0$, and note that (3) implies that for every $y \in \hat A$ we have $p \cdot y \le \varepsilon\, p \cdot \mathbf{1}$. Since $\varepsilon$ may be taken arbitrarily small, we conclude that $p \cdot y \le 0$ for all $y$ in $\hat A$. In particular, $p \cdot f(x) \le 0$ for all $x$ in $C$.

KC Border: for Ec 181, Winter 2018. src: SaddlePoint
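The theorem can be checked numerically on a toy instance. The functions and the witness vector below are invented for illustration and are not from the notes: on $C = [0, 1]$ take the affine (hence concave) functions $f_1(x) = x - \tfrac12$ and $f_2(x) = \tfrac14 - x$. No $x$ makes both strictly positive, so alternative (2) must hold; $p = (3, 2)$ is a witness, since $3 f_1(x) + 2 f_2(x) = x - 1 \le 0$ on $C$. A minimal sketch:

```python
# Concave Alternative Theorem on C = [0, 1] with the concave (affine)
# functions f1(x) = x - 1/2 and f2(x) = 1/4 - x (illustrative choices).

def f(x):
    return (x - 0.5, 0.25 - x)

grid = [i / 1000 for i in range(1001)]          # fine grid on C = [0, 1]

# Alternative (1) fails: no grid point has f(x) >> 0
# (f1 > 0 needs x > 1/2, f2 > 0 needs x < 1/4).
alt1 = any(f1 > 0 and f2 > 0 for (f1, f2) in map(f, grid))

# Alternative (2) holds with the semipositive witness p = (3, 2),
# because 3*(x - 1/2) + 2*(1/4 - x) = x - 1 <= 0 on [0, 1].
p = (3.0, 2.0)
alt2 = all(p[0] * f1 + p[1] * f2 <= 1e-12 for (f1, f2) in map(f, grid))
```

The grid check is of course no proof, but it makes the exclusivity of the two alternatives easy to experiment with.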
Saddlepoints

In this section we discuss the relation between constrained maxima of concave functions and saddlepoints of the so-called Lagrangean.

Definition. Let $f \colon X \times Y \to \mathbf{R}$. A point $(x^*; y^*)$¹ in $X \times Y$ is a saddlepoint of $f$ (over $X \times Y$) if it satisfies
$$f(x; y^*) \le f(x^*; y^*) \le f(x^*; y) \qquad \text{for all } x \in X,\ y \in Y.$$
That is, $(x^*; y^*)$ is a saddlepoint of $f$ if $x^*$ maximizes $f(\cdot\,; y^*)$ over $X$ and $y^*$ minimizes $f(x^*; \cdot)$ over $Y$.

Saddlepoints of a function have the following nice interchangeability property.

Lemma (Interchangeability of saddlepoints). Let $f \colon X \times Y \to \mathbf{R}$, and let $(x_1; y_1)$ and $(x_2; y_2)$ be saddlepoints of $f$. Then
$$f(x_1; y_1) = f(x_2; y_1) = f(x_1; y_2) = f(x_2; y_2).$$
Consequently $(x_1; y_2)$ and $(x_2; y_1)$ are also saddlepoints.

Exercise. Prove the lemma.

Lagrangeans

Saddlepoints play an important rôle in the analysis of constrained maximum problems via Lagrangean functions.

Definition. Given $f, g_1, \dots, g_m \colon X \to \mathbf{R}$, the associated Lagrangean $L \colon X \times \Lambda \to \mathbf{R}$ is defined by
$$L(x; \lambda) = f(x) + \sum_{j=1}^m \lambda_j g_j(x) = f(x) + \lambda \cdot g(x),$$
where $\Lambda$ is an appropriate subset of $\mathbf{R}^m$. (Usually $\Lambda = \mathbf{R}^m$ or $\mathbf{R}^m_+$.) The components of $\lambda$ are called Lagrange multipliers.²

The first result is that saddlepoints of Lagrangeans are constrained maxima. This result requires no restrictions on the functions.

¹ The use of a semicolon instead of a comma is to visually emphasize the differing rôles of $x$ and $y$, but mathematically it plays the same rôle as a comma. On occasion I may forget this convention and use a comma. Don't fret about it.

² You may have noticed that I have departed from my usual convention of letting Greek letters denote scalars. This usage of $\lambda$ to denote a vector of Lagrange multipliers is one of my many bad habits.
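The definition can be checked on the classic saddle $f(x; y) = y^2 - x^2$ (an illustrative choice, not from the notes), whose saddlepoint over $[-1,1] \times [-1,1]$ is the origin:

```python
# Grid check that (x*, y*) = (0, 0) is a saddlepoint of f(x; y) = y**2 - x**2
# over X = Y = [-1, 1]: x* maximizes f(., y*) and y* minimizes f(x*, .).

def f(x, y):
    return y * y - x * x

grid = [i / 100 - 1 for i in range(201)]        # grid on [-1, 1]
x_star, y_star = 0.0, 0.0

is_saddle = (all(f(x, y_star) <= f(x_star, y_star) for x in grid) and
             all(f(x_star, y_star) <= f(x_star, y) for y in grid))
```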
Theorem (Lagrangean saddlepoints are constrained maxima). Let $X$ be an arbitrary set, and let $f, g_1, \dots, g_m \colon X \to \mathbf{R}$. Suppose that $(x^*, \lambda^*)$ is a saddlepoint of the Lagrangean $L(x; \lambda) = f(x) + \lambda \cdot g(x)$ (over $X \times \mathbf{R}^m_+$). That is,
$$L(x; \lambda^*) \le L(x^*; \lambda^*) \le L(x^*; \lambda) \qquad \text{for all } x \in X,\ \lambda \ge 0, \tag{4}$$
where the first inequality is (4a) and the second is (4b). Then $x^*$ maximizes $f$ over $X$ subject to the constraints $g_j(x) \ge 0$, $j = 1, \dots, m$, and furthermore
$$\lambda^*_j g_j(x^*) = 0, \qquad j = 1, \dots, m. \tag{5}$$

Proof: Inequality (4b) implies $\lambda^* \cdot g(x^*) \le \lambda \cdot g(x^*)$ for all $\lambda \ge 0$. By the Nonnegativity Test 0.1.1, $g(x^*) \ge 0$, so $x^*$ satisfies the constraints. Setting $\lambda = 0$, we see that $\lambda^* \cdot g(x^*) \le 0$. This combined with $\lambda^* \ge 0$ and $g(x^*) \ge 0$ implies $\lambda^* \cdot g(x^*) = 0$. Indeed it implies $\lambda^*_j g_j(x^*) = 0$ for $j = 1, \dots, m$.

Now note that (4a) implies $f(x) + \lambda^* \cdot g(x) \le f(x^*)$ for all $x$. Therefore, if $x$ satisfies the constraints, $g(x) \ge 0$, we have $f(x) \le f(x^*)$, so $x^*$ is a constrained maximizer.

Condition (5) implies that if the multiplier $\lambda^*_j$ is strictly positive, then the corresponding constraint is binding, $g_j(x^*) = 0$; and if a constraint is slack, $g_j(x^*) > 0$, then the corresponding multiplier satisfies $\lambda^*_j = 0$. These conditions are sometimes called the complementary slackness conditions.

The converse of this theorem is not quite true, but almost. To state the correct result we now introduce the notion of a generalized Lagrangean.

Definition. A generalized Lagrangean $L_\mu \colon X \times \Lambda \to \mathbf{R}$, where $\mu \ge 0$, is defined by
$$L_\mu(x; \lambda) = \mu f(x) + \sum_{j=1}^m \lambda_j g_j(x),$$
where $\Lambda$ is an appropriate subset of $\mathbf{R}^m$.

Note that each choice of $\mu$ generates a different generalized Lagrangean. However for $\Lambda = \mathbf{R}^m_+$, as long as $\mu > 0$, a point $(x; \lambda)$ is a saddlepoint of the generalized Lagrangean if and only if $(x; \lambda/\mu)$ is a saddlepoint of the Lagrangean. Thus the only case to worry about is $\mu = 0$.
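The first theorem of this section and the complementary slackness conditions can be checked on an invented one-dimensional instance: $f(x) = -(x-2)^2$, $g(x) = 1 - x$, with the hand-computed saddlepoint $(x^*, \lambda^*) = (1, 2)$ (stationarity: $-2(x^*-2) - \lambda^* = 2 - \lambda^* = 0$). A sketch:

```python
# Illustrative instance (not from the notes):
# maximize f(x) = -(x - 2)**2 subject to g(x) = 1 - x >= 0.
# The pair (x*, lam*) = (1, 2) is a saddlepoint of L(x; lam) = f(x) + lam*g(x).

def f(x): return -(x - 2) ** 2
def g(x): return 1 - x
def L(x, lam): return f(x) + lam * g(x)

x_star, lam_star = 1.0, 2.0
xs = [i / 100 - 5 for i in range(1001)]         # grid on [-5, 5]
lams = [i / 100 for i in range(501)]            # grid on [0, 5]

# the saddlepoint inequalities (4a) and (4b)
saddle = (all(L(x, lam_star) <= L(x_star, lam_star) + 1e-9 for x in xs) and
          all(L(x_star, lam_star) <= L(x_star, lam) + 1e-9 for lam in lams))

comp_slack = lam_star * g(x_star) == 0          # complementary slackness (5)

# the theorem's conclusion: x* maximizes f on the feasible set {g >= 0}
constrained_max = all(f(x) <= f(x_star) for x in xs if g(x) >= 0)
```

Here the constraint binds ($g(x^*) = 0$) and the multiplier is strictly positive, consistent with (5).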
The next results state that for concave functions satisfying a regularity condition, constrained maxima are saddlepoints of some generalized Lagrangean.

Theorem (Concave constrained maxima are nearly Lagrangean saddlepoints). Let $C \subset \mathbf{R}^n$ be convex, and let $f, g_1, \dots, g_m \colon C \to \mathbf{R}$ be concave. Suppose $x^*$ maximizes $f$ subject to the constraints $g_j(x) \ge 0$, $j = 1, \dots, m$. Then there exist real numbers $\mu^*, \lambda^*_1, \dots, \lambda^*_m \ge 0$, not all zero, such that $(x^*; \lambda^*)$ is a saddlepoint of the generalized Lagrangean $L_{\mu^*}$. That is,
$$\mu^* f(x) + \sum_{j=1}^m \lambda^*_j g_j(x) \;\le\; \mu^* f(x^*) + \sum_{j=1}^m \lambda^*_j g_j(x^*) \;\le\; \mu^* f(x^*) + \sum_{j=1}^m \lambda_j g_j(x^*) \tag{6}$$
(the first inequality is (6a), the second is (6b))
for all $x \in C$ and all $\lambda_1, \dots, \lambda_m \ge 0$. Furthermore
$$\sum_{j=1}^m \lambda^*_j g_j(x^*) = 0. \tag{7}$$

Proof: Since $x^*$ is a constrained maximizer there is no $x \in C$ satisfying both $f(x) - f(x^*) > 0$ and $g(x) \gg 0$. Therefore the Concave Alternative Theorem implies the existence of nonnegative $\mu^*, \lambda^*_1, \dots, \lambda^*_m$, not all zero, satisfying
$$\mu^* f(x) + \sum_{j=1}^m \lambda^*_j g_j(x) \le \mu^* f(x^*) \qquad \text{for every } x \in C.$$
Evaluating this at $x = x^*$ yields $\sum_{j=1}^m \lambda^*_j g_j(x^*) \le 0$. But each term in this sum is the product of two nonnegative factors, so (7) holds. This in turn implies (6a). Given that $g_j(x^*) \ge 0$ for all $j$, (7) also implies (6b).

Corollary (Constrained maxima may be Lagrangean saddlepoints). Under the hypotheses of the preceding theorem suppose in addition that Slater's Condition,
$$(\exists \bar x \in C)\ [\, g(\bar x) \gg 0 \,], \tag{S}$$
is satisfied. Then $\mu^* > 0$, and may be taken equal to $1$. Consequently the pair $\bigl(x^*; (\lambda^*_1, \dots, \lambda^*_m)\bigr)$ is a saddlepoint of the Lagrangean $L(x; \lambda) = f(x) + \lambda \cdot g(x)$ for $x \in C$, $\lambda \ge 0$. That is,
$$L(x; \lambda^*) \le L(x^*; \lambda^*) \le L(x^*; \lambda) \qquad \text{for all } x \in C,\ \lambda \ge 0. \tag{8}$$

Proof: Suppose $\mu^* = 0$. Then evaluating (6) at $x = \bar x$ yields $\lambda^* \cdot g(\bar x) \le 0$, but $g(\bar x) \gg 0$ then implies $\lambda^*_j = 0$, $j = 1, \dots, m$. Thus $\mu^* = 0$ and $\lambda^*_j = 0$, $j = 1, \dots, m$, a contradiction. Therefore $\mu^* > 0$, and by dividing the Lagrangean by $\mu^*$, we may take $\mu^* = 1$. The remainder is then just the preceding theorem.

Combining these results gives us the following.

The Saddlepoint Theorem. Let $C \subset \mathbf{R}^n$ be convex, and let $f, g_1, \dots, g_m \colon C \to \mathbf{R}$ be concave. Assume in addition that Slater's Condition,
$$(\exists \bar x \in C)\ [\, g(\bar x) \gg 0 \,], \tag{S}$$
is satisfied. The following are equivalent.

1. The point $x^*$ maximizes $f$ over $C$ subject to the constraints $g_j(x) \ge 0$, $j = 1, \dots, m$.
2. There exist real numbers $\lambda^*_1, \dots, \lambda^*_m \ge 0$ such that $(x^*; \lambda^*)$ is a saddlepoint of the Lagrangean $L$. That is,
$$f(x) + \sum_{j=1}^m \lambda^*_j g_j(x) \;\le\; f(x^*) + \sum_{j=1}^m \lambda^*_j g_j(x^*) \;\le\; f(x^*) + \sum_{j=1}^m \lambda_j g_j(x^*) \tag{9}$$
for all $x \in C$ and all $\lambda_1, \dots, \lambda_m \ge 0$. Furthermore $\lambda^*_j g_j(x^*) = 0$, $j = 1, \dots, m$.

Remark. Sometimes it is hard to see the forest for the trees. The most important consequence of the Saddlepoint Theorem is that in the concave case satisfying Slater's Condition, the constrained maximizer $x^*$ of $f$ is an unconstrained maximizer of the Lagrangean $L(\cdot\,; \lambda^*)$. The rôle of a Lagrange multiplier is to act as a conversion factor between the $g$-values and the $f$-values that converts the constrained maximization problem into an unconstrained maximization problem.

Example. Concavity is crucial for these results. Even in the classical case of an interior constrained maximizer of a smooth function, even when the Lagrange Multiplier Theorem applies, the constrained maximizer may not be an unconstrained maximizer of the Lagrangean. Sydsaeter [4] points out the following simple example. Let $f(x, y) = x^2 y^2$ and $g(x, y) = 2 - (x + y)$, so the Lagrangean is
$$L(x, y; \lambda) = x^2 y^2 + \lambda\bigl(2 - x - y\bigr).$$
Note that $g$ is linear and so concave, but $f$ is quasiconcave, not concave, since it exhibits increasing returns. The point $(x^*, y^*) = (1, 1)$ maximizes $f$ subject to the constraints $g(x, y) \ge 0$, $x \ge 0$, $y \ge 0$. The point $(\bar x, \bar y) = (0, 0)$ verifies Slater's Condition. The multiplier $\lambda^* = 2$ satisfies the conclusion of the classical Lagrange multiplier theorem, namely the first order conditions
$$\frac{\partial L(x^*, y^*; \lambda^*)}{\partial x} = 2 x^* (y^*)^2 - \lambda^* = 0
\quad\text{and}\quad
\frac{\partial L(x^*, y^*; \lambda^*)}{\partial y} = 2 (x^*)^2 y^* - \lambda^* = 0.$$
But $(x^*, y^*)$ does not maximize
$$L(x, y; 2) = x^2 y^2 + 2(2 - x - y).$$
Indeed there is no unconstrained maximizer of the Lagrangean for any $\lambda \ge 0$.
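The example's two claims can be verified numerically (a sketch; the grid and the test point $(3,3)$ are arbitrary choices):

```python
# Sydsaeter's example: (1, 1) maximizes f(x, y) = x**2 * y**2 on the feasible
# set {x, y >= 0, x + y <= 2}, yet it does not maximize the Lagrangean
# L(., .; 2), since f eventually dominates the linear penalty.

def f(x, y): return (x * y) ** 2
def g(x, y): return 2 - (x + y)
def L(x, y, lam): return f(x, y) + lam * g(x, y)

grid = [i / 50 for i in range(101)]             # grid on [0, 2]
feasible_max = all(f(x, y) <= f(1, 1) + 1e-9
                   for x in grid for y in grid if g(x, y) >= 0)

# far from the constraint set the quartic term swamps the penalty:
# L(3, 3; 2) = 81 - 8 = 73 > 1 = L(1, 1; 2)
not_lagrangean_max = L(3, 3, 2) > L(1, 1, 2)
```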
Karlin [2, vol. 1, Theorem 7.1.1, p. 201] proposed the following alternative to Slater's Condition:
$$(\forall \lambda > 0)\ (\exists \bar x(\lambda) \in C)\ [\, \lambda \cdot g\bigl(\bar x(\lambda)\bigr) > 0 \,],$$
which we may as well call Karlin's Condition.

Theorem. Let $C \subset \mathbf{R}^n$ be convex, and let $g_1, \dots, g_m \colon C \to \mathbf{R}$ be concave. Then $g$ satisfies Slater's Condition if and only if it satisfies Karlin's Condition.

Proof: Clearly Slater's Condition implies Karlin's. Now suppose $g$ violates Slater's Condition, that is, no $x \in C$ satisfies $g(x) \gg 0$. Then by the Concave Alternative Theorem there is some $\lambda > 0$ with $\lambda \cdot g(x) \le 0$ for every $x \in C$, so $g$ must also violate Karlin's.

The next example shows what can go wrong when Slater's Condition fails.

Example. In this example, due to Slater [3], $C = \mathbf{R}$, $f(x) = x$, and $g(x) = -(1-x)^2$, so both $f$ and $g$ are concave. Note that Slater's Condition fails because $g \le 0$. The constraint set $\{g \ge 0\}$ is the singleton $\{1\}$. Therefore $f$ attains a constrained maximum at $x^* = 1$. There is however no saddlepoint over $\mathbf{R} \times \mathbf{R}_+$ of the Lagrangean
$$L(x; \lambda) = x - \lambda(1-x)^2 = -\lambda + (1 + 2\lambda)x - \lambda x^2.$$
To see that $L$ has no saddlepoint, consider an arbitrary $(\bar x, \bar\lambda) \in \mathbf{R} \times \mathbf{R}_+$. If $\bar\lambda = 0$, then since $L(x; 0) = x$, no value of $\bar x$ maximizes $L(\cdot\,; 0)$. On the other hand, if $\bar\lambda > 0$, the first order condition for a maximizer at $\bar x$ is $\partial L/\partial x = 0$, or $1 + 2\bar\lambda - 2\bar\lambda \bar x = 0$, which implies $\bar x = 1 + 1/(2\bar\lambda) > 1$. But for $\bar x > 1$, $L(\bar x; \cdot)$ is strictly decreasing, so no $\bar\lambda$ is a minimizer.

The rôle of Slater's Condition

In this section we present a geometric argument that illuminates the rôle of Slater's Condition in the saddlepoint theorem. The saddlepoint theorem was proved by invoking the Concave Alternative Theorem, so let us return to the underlying argument used in its proof. In the framework of that theorem, define the function $h \colon C \to \mathbf{R}^{m+1}$ by
$$h(x) = \bigl(g_1(x), \dots, g_m(x),\ f(x) - f(x^*)\bigr)$$
and set
$$A = \{h(x) : x \in C\} \quad\text{and}\quad \hat A = \{\, y \in \mathbf{R}^{m+1} : (\exists x \in C)\ [\, y \le h(x) \,] \,\}.$$
Then $\hat A$ is a convex set bounded in part by $A$.
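Before turning to the geometry, the no-saddlepoint argument for Slater's example can be checked numerically (a sketch; the grid and the sample values of $\bar\lambda$ are arbitrary):

```python
# Slater's example: L(x; lam) = x - lam*(1 - x)**2 has no saddlepoint
# over R x R_+.

def L(x, lam): return x - lam * (1 - x) ** 2

xs = [i / 100 - 5 for i in range(2001)]          # grid on [-5, 15]

# lam = 0: L(x; 0) = x is unbounded above in x, so no x maximizes it.
unbounded_at_zero = L(15.0, 0.0) > L(0.0, 0.0)

# lam > 0: the maximizer of L(.; lam) is x_bar = 1 + 1/(2*lam) > 1, and
# L(x_bar; .) is then strictly decreasing in lam (its slope is
# -(1 - x_bar)**2 < 0), so lam never minimizes L(x_bar; .).
lam_never_minimizes = True
for lam in (0.5, 1.0, 2.0, 10.0):
    x_bar = 1 + 1 / (2 * lam)
    maximizes = all(L(x, lam) <= L(x_bar, lam) + 1e-9 for x in xs)
    decreasing = L(x_bar, lam + 1) < L(x_bar, lam)
    lam_never_minimizes = lam_never_minimizes and maximizes and decreasing
```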
The figure depicts the sets $A$ and $\hat A$ for Slater's example, where $f(x) - f(x^*)$ is plotted on the vertical
[Figure: The sets $A$ and $\hat A$ for Slater's example.]

axis and $g(x)$ is plotted on the horizontal axis. Now if $x^*$ maximizes $f$ over the convex set $C$ subject to the constraints $g_j(x) \ge 0$, $j = 1, \dots, m$, then $h(x^*)$ has the largest vertical coordinate among all the points in $A$ whose horizontal coordinates are nonnegative.

The semipositive $(m+1)$-vector $\hat\lambda = (\lambda^*_1, \dots, \lambda^*_m, \mu^*)$ from the theorem above is obtained by separating the convex set $\hat A$ and $\mathbf{R}^{m+1}_{++}$. It has the property that
$$\hat\lambda \cdot h(x) \le \hat\lambda \cdot h(x^*) \qquad \text{for all } x \in C.$$
That is, the vector $\hat\lambda$ defines a hyperplane through $h(x^*)$ that supports $\hat A$ at $h(x^*)$. It is clear in the case of Slater's example that the supporting hyperplane is a vertical line. The fact that the hyperplane is vertical means that $\mu^*$ (the multiplier on $f$) must be zero. If there is a non-vertical supporting hyperplane through $h(x^*)$, then $\mu^*$ is nonzero, so we can divide by it and obtain a full saddlepoint of the true Lagrangean.

This is where Slater's Condition comes in. In the one dimensional, one constraint case, Slater's Condition reduces to the existence of $\bar x$ satisfying $g(\bar x) > 0$. This rules out having a vertical supporting line through $h(x^*)$. To see this, note that the vertical component of $h(x^*)$ is $f(x^*) - f(x^*) = 0$. If $g(x^*) = 0$, then the vertical line through $h(x^*)$ is simply the vertical axis, which cannot support $\hat A$, since $h(\bar x) \in A$ lies to the right of the axis. See the next figure.

[Figure note: the shaded area is included in $\hat A$. For instance, let $C = (-\infty, 0]$, $f(x) = x$, and $g(x) = x + 1$. Then the set $\hat A$ is just $\{\, y \in \mathbf{R}^2 : y \le (1, 0) \,\}$.]

Later we shall see that if $f$ and the $g_j$'s are linear, then Slater's Condition is not needed to guarantee a non-vertical supporting line.
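The vertical-supporting-line claim can be checked directly for Slater's example, where $h(x) = \bigl(-(1-x)^2,\ x-1\bigr)$ and $h(x^*) = (0, 0)$. In the sketch below, the probe direction $(1, 0.1)$ and the test point $x = 1.05$ are arbitrary choices:

```python
# Geometry of Slater's example: h(x) = (g(x), f(x) - f(x*)) = (-(1-x)**2, x-1).
# The vertical direction (1, 0) supports A-hat at h(x*) = (0, 0), while a
# direction putting positive weight mu on the f-coordinate fails just to the
# right of x* = 1, where g is nearly zero but f - f(x*) is positive.

def h(x):
    return (-(1 - x) ** 2, x - 1)

xs = [i / 100 - 5 for i in range(2001)]          # grid on [-5, 15]

vertical_supports = all(1.0 * h(x)[0] + 0.0 * h(x)[1] <= 0 for x in xs)

lam, mu = 1.0, 0.1                               # a non-vertical probe
x = 1.05
nonvertical_fails = lam * h(x)[0] + mu * h(x)[1] > 0
```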
Intuitively, the reason for this is that for the linear programming problems considered, the set $\hat A$ is polyhedral, so even if $g(x^*) = 0$, there is still a non-vertical line separating $\hat A$ and $\mathbf{R}^{m+1}_{++}$. The proof of this fact relies on some results on linear inequalities that will be proven
[Figure: Slater's condition rules out a vertical supporting line.]

later. It is subtle because Slater's condition rules out a vertical supporting line. In the linear case, there may be a vertical supporting line, but if there is, there is also a non-vertical supporting line that yields a Lagrangean saddlepoint. As a case in point, consider $C = (-\infty, 0]$, $f(x) = x$, and $g(x) = x$. Then the set $\hat A$ is just $\{\, y \in \mathbf{R}^2 : y \le 0 \,\}$, which is separated from $\mathbf{R}^2_{++}$ by every semipositive vector.

Decentralization and Lagrange Multipliers

This section is inspired by Uzawa [6]. Consider the case of a firm that can produce $n$ different products, each according to a one-product concave production function $f_j$ of $l$ inputs or factors. The firm has already contracted to have a quantity $\omega_k$ of factor $k$ available for use. (Alternatively, consider a country with $n$ industries that sell their outputs on world markets and each factor is immobile and fixed in supply.) If the sale price of the $j$th output is $p_j > 0$, then the firm (or country) faces the constrained maximization problem of maximizing the value of output, subject to the resource constraint on each factor:
$$\begin{aligned}
\underset{v_1, \dots, v_n}{\text{maximize}}\quad & \sum_{j=1}^n p_j f_j(v_j) \\
\text{subject to}\quad & \sum_{j=1}^n v_{jk} \le \omega_k, && k = 1, \dots, l, \\
& v_{jk} \ge 0, && j = 1, \dots, n,\ k = 1, \dots, l.
\end{aligned}$$
The Lagrangean for this problem is
$$L(v; \mu) = \sum_{j=1}^n p_j f_j(v_j) + \sum_{k=1}^l \mu_k \Bigl( \omega_k - \sum_{j=1}^n v_{jk} \Bigr).$$
By assumption, each $f_j$ is concave, so the objective function $\sum_{j=1}^n p_j f_j(v_j)$ is concave. Moreover, the constraint function $g_k(v) := \omega_k - \sum_{j=1}^n v_{jk}$ is affine in $v$, and so concave. Note that as long as each $\omega_k > 0$, then Slater's Condition is satisfied.
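To make the setup concrete, here is a tiny invented instance: one factor ($l = 1$), two products ($n = 2$) with $f_j(v) = \sqrt{v}$, prices $p = (2, 1)$, and endowment $\omega = 5$. The first-order conditions give the allocation $\bar v = (4, 1)$ and shadow wage $\bar\mu = p_j / (2\sqrt{\bar v_j}) = \tfrac12$, and each $\bar v_j$ separately maximizes the profit $p_j \sqrt{v} - \bar\mu v$, whose exact maximizer is $(p_j / 2\bar\mu)^2$. A sketch:

```python
import math

# A tiny invented instance of the firm's problem: one factor, two products,
# concave production f_j(v) = sqrt(v), prices p = (2, 1), endowment omega = 5.

p = (2.0, 1.0)
omega = 5.0

# FOC p_j/(2*sqrt(v_j)) = mu for all j gives v_j proportional to p_j**2
v_bar = tuple(omega * pj ** 2 / sum(q ** 2 for q in p) for pj in p)   # (4, 1)
mu = p[0] / (2 * math.sqrt(v_bar[0]))                                  # 0.5

# Decentralization: each v_bar[j] is the unconstrained maximizer of the
# profit p_j*sqrt(v) - mu*v, whose closed-form maximizer is (p_j/(2*mu))**2.
decentralized = tuple((pj / (2 * mu)) ** 2 for pj in p)

market_clears = abs(sum(v_bar) - omega) < 1e-9
```

The shadow wage $\bar\mu$ plays exactly the decentralizing rôle described next: each product line maximizes its own profit at that wage, and the factor market clears.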
Therefore by the Saddlepoint Theorem, a point $\bar v$ solves the constrained maximization problem if and only if there is a vector $\bar\mu$ such that $(\bar v; \bar\mu)$ is a saddlepoint of the Lagrangean over $\bigl(\mathbf{R}^l_+\bigr)^n \times \mathbf{R}^l_+$. Now regroup the Lagrangean as
$$L(v; \mu) = \Bigl( p_1 f_1(v_1) - \sum_{k=1}^l \mu_k v_{1k} \Bigr) + \cdots + \Bigl( p_n f_n(v_n) - \sum_{k=1}^l \mu_k v_{nk} \Bigr) + \sum_{k=1}^l \mu_k \omega_k.$$
By the Saddlepoint Theorem the constrained maximizer $\bar v$ is an unconstrained maximizer of the Lagrangean evaluated at $\mu = \bar\mu$. But note that this implies that for each $j = 1, \dots, n$,
$$\bar v_j \text{ maximizes } p_j f_j(v_j) - \sum_{k=1}^l \bar\mu_k v_{jk}.$$
In other words, the saddlepoint values of the Lagrange multipliers are factor wages such that each optimal $\bar v_j$ unconstrainedly maximizes the profit at price $p_j$ and factor wages $\bar\mu$. These multipliers, sometimes known as shadow wages, allow the constrained problem to be decentralized as $n$ independent profit maximization decisions.

You should verify that the converse is true, namely if each $\bar v_j$ unconstrainedly maximizes the profit at price $p_j$ and factor wages $\bar\mu$, and the market for inputs clears, that is, if
$$\sum_{j=1}^n \bar v_j \le \omega \quad\text{and}\quad \bar\mu_k \Bigl( \omega_k - \sum_{j=1}^n \bar v_{jk} \Bigr) = 0, \qquad k = 1, \dots, l,$$
then $\bar v$ solves the constrained maximization problem. Thus the prices of the outputs determine the wages of the factors as saddlepoint Lagrange multipliers.

References

[1] K. Fan, I. Glicksberg, and A. J. Hoffman. Systems of inequalities involving convex functions. Proceedings of the American Mathematical Society 8.

[2] S. Karlin. Mathematical methods and theory in games, programming, and economics. New York: Dover. Reprint of the 1959 two-volume edition published by Addison-Wesley.

[3] M. L. Slater. Lagrange multipliers revisited: A contribution to non-linear programming. Discussion Paper Math. 403, Cowles Commission. Reissued as Cowles Foundation Discussion Paper 80.

[4] K. Sydsaeter. Letter to the editor on some frequently occurring errors in the economic literature concerning problems of maxima and minima.
Journal of Economic Theory 9(4).
[5] H. Uzawa. The Kuhn-Tucker conditions in concave programming. In K. J. Arrow, L. Hurwicz, and H. Uzawa, eds., Studies in Linear and Nonlinear Programming, number 2 in Stanford Mathematical Studies in the Social Sciences, chapter 3. Stanford, California: Stanford University Press.

[6] H. Uzawa. A note on the Menger-Wieser theory of imputation. Zeitschrift für Nationalökonomie 18(3).
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationCalculus and optimization
Calculus an optimization These notes essentially correspon to mathematical appenix 2 in the text. 1 Functions of a single variable Now that we have e ne functions we turn our attention to calculus. A function
More informationProblem Set 2 Solutions
EC 720 - Math for Economists Samson Alva Department of Economics Boston College October 4 2011 1. Profit Maximization Problem Set 2 Solutions (a) The Lagrangian for this problem is L(y k l λ) = py rk wl
More informationDivision of the Humanities and Social Sciences. Sums of sets, etc.
Division of the Humanities and Social Sciences Sums of sets, etc. KC Border September 2002 Rev. November 2012 Rev. September 2013 If E and F are subsets of R m, define the sum E + F = {x + y : x E; y F
More informationNumerical Optimization
Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,
More informationMathematical Economics: Lecture 16
Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave
More informationConvex Sets with Applications to Economics
Convex Sets with Applications to Economics Debasis Mishra March 10, 2010 1 Convex Sets A set C R n is called convex if for all x, y C, we have λx+(1 λ)y C for all λ [0, 1]. The definition says that for
More informationInequalities and alternatives
Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 25: Inequalities and alternatives This chapter discusses both the solution of linear
More informationOPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003
OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 UNCONSTRAINED OPTIMIZATION 1. Consider the problem of maximizing a function f:ú n 6 ú within a set A f ú n. Typically, A might be all of ú
More informationNEW MAXIMUM THEOREMS WITH STRICT QUASI-CONCAVITY. Won Kyu Kim and Ju Han Yoon. 1. Introduction
Bull. Korean Math. Soc. 38 (2001), No. 3, pp. 565 573 NEW MAXIMUM THEOREMS WITH STRICT QUASI-CONCAVITY Won Kyu Kim and Ju Han Yoon Abstract. In this paper, we first prove the strict quasi-concavity of
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationModern Optimization Theory: Concave Programming
Modern Optimization Theory: Concave Programming 1. Preliminaries 1 We will present below the elements of modern optimization theory as formulated by Kuhn and Tucker, and a number of authors who have followed
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More informationEconomics 101A (Lecture 3) Stefano DellaVigna
Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal
More informationFinal Exam - Math Camp August 27, 2014
Final Exam - Math Camp August 27, 2014 You will have three hours to complete this exam. Please write your solution to question one in blue book 1 and your solutions to the subsequent questions in blue
More informationThe Kuhn-Tucker Problem
Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker
More informationTheory of Value Fall 2017
Division of the Humanities and Social Sciences Ec 121a KC Border Theory of Value Fall 2017 Lecture 2: Profit Maximization 2.1 Market equilibria as maximizers For a smooth function f, if its derivative
More informationEconomics 101. Lecture 2 - The Walrasian Model and Consumer Choice
Economics 101 Lecture 2 - The Walrasian Model and Consumer Choice 1 Uncle Léon The canonical model of exchange in economics is sometimes referred to as the Walrasian Model, after the early economist Léon
More informationRockefellar s Closed Functions
Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Fall 2018 Winter 2019 Topic 21: Rockefellar s Closed Functions 21.1 Closed convex functions Convex analysts
More informationEcon Slides from Lecture 14
Econ 205 Sobel Econ 205 - Slides from Lecture 14 Joel Sobel September 10, 2010 Theorem ( Lagrange Multipliers ) Theorem If x solves max f (x) subject to G(x) = 0 then there exists λ such that Df (x ) =
More informationARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1
AREA, Fall 5 LECTURE #: WED, OCT 5, 5 PRINT DATE: OCTOBER 5, 5 (GRAPHICAL) CONTENTS 1. Graphical Overview of Optimization Theory (cont) 1 1.4. Separating Hyperplanes 1 1.5. Constrained Maximization: One
More informationE 600 Chapter 4: Optimization
E 600 Chapter 4: Optimization Simona Helmsmueller August 8, 2018 Goals of this lecture: Every theorem in these slides is important! You should understand, remember and be able to apply each and every one
More informationProblem Set 3
4.02 Problem Set 3 Due Thursday, October 2, 2004, in class Starred (*) problems will not count for the grade on this problem set; they are based on material from lectures on 0/2 and 0/26, and provide practice
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationLagrange Relaxation and Duality
Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More information14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.
CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity
More informationEC400 Math for Microeconomics Syllabus The course is based on 6 sixty minutes lectures and on 6 ninety minutes classes.
London School of Economics Department of Economics Dr Francesco Nava Offi ce: 32L.3.20 EC400 Math for Microeconomics Syllabus 2016 The course is based on 6 sixty minutes lectures and on 6 ninety minutes
More informationLectures 9 and 10: Constrained optimization problems and their optimality conditions
Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationAppendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS
Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution
More informationSharpening the Karush-John optimality conditions
Sharpening the Karush-John optimality conditions Arnold Neumaier and Hermann Schichl Institut für Mathematik, Universität Wien Strudlhofgasse 4, A-1090 Wien, Austria email: Arnold.Neumaier@univie.ac.at,
More informationCONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS
CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics
More informationSTATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY
STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY UNIVERSITY OF MARYLAND: ECON 600 1. Some Eamples 1 A general problem that arises countless times in economics takes the form: (Verbally):
More information1.4 FOUNDATIONS OF CONSTRAINED OPTIMIZATION
Essential Microeconomics -- 4 FOUNDATIONS OF CONSTRAINED OPTIMIZATION Fundamental Theorem of linear Programming 3 Non-linear optimization problems 6 Kuhn-Tucker necessary conditions Sufficient conditions
More informationChapter 2 Convex Analysis
Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,
More informationNotes on the Implicit Function Theorem
CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences Notes on the Implicit Function Theorem KC Border v 20130827::1408 1 Implicit Function Theorems The Implicit Function Theorem
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More information1 Objective. 2 Constrained optimization. 2.1 Utility maximization. Dieter Balkenborg Department of Economics
BEE020 { Basic Mathematical Economics Week 2, Lecture Thursday 2.0.0 Constrained optimization Dieter Balkenborg Department of Economics University of Exeter Objective We give the \ rst order conditions"
More informationCONSTRAINED OPTIMALITY CRITERIA
5 CONSTRAINED OPTIMALITY CRITERIA In Chapters 2 and 3, we discussed the necessary and sufficient optimality criteria for unconstrained optimization problems. But most engineering problems involve optimization
More informationLagrangian Methods for Constrained Optimization
Pricing Communication Networks: Economics, Technology and Modelling. Costas Courcoubetis and Richard Weber Copyright 2003 John Wiley & Sons, Ltd. ISBN: 0-470-85130-9 Appendix A Lagrangian Methods for Constrained
More informationEC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline
LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 20010/11 Math for Microeconomics September Course, Part II Lecture Notes Course Outline Lecture 1: Tools for
More information