Econ 508-A FINITE DIMENSIONAL OPTIMIZATION - NECESSARY CONDITIONS. Carmen Astorne-Figari Washington University in St. Louis.


1 Econ 508-A FINITE DIMENSIONAL OPTIMIZATION - NECESSARY CONDITIONS Carmen Astorne-Figari Washington University in St. Louis August 12, 2010

2 INTRODUCTION General form of an optimization problem: max_x f(x) s.t. x ∈ X. x = (x_1, x_2, ..., x_N): N-dimensional vector of choice variables or instruments. f(x) = f(x_1, ..., x_N): real-valued function, called the objective function. X ⊆ R^N: constraint set, feasible set or opportunity set.

3 INTRODUCTION An optimization problem in R^N is one where, given a function f : R^N → R, we choose the value of x over a given set X ⊆ R^N to maximize or minimize the value of f. Alternative notation: max_{x ∈ X} f(x), min_{x ∈ X} f(x), or max {f(x) : x ∈ X}, min {f(x) : x ∈ X}.

4 BASIC DEFINITIONS Given f and X as above: The set of attainable values of f on X, or image of X under f, denoted f(X), is defined by f(X) = {w ∈ R : ∃ x ∈ X such that f(x) = w}. The interior of X is the set defined by int(X) = {x ∈ X : ∃ ε > 0 such that N_ε(x) ⊆ X}. [pictures: R, R^2]

5 SOLUTIONS A solution to the problem max{f(x) : x ∈ X}, or maximizer of f on X, is a point x* s.t. f(x*) ≥ f(x) ∀ x ∈ X. A solution to the problem min{f(x) : x ∈ X}, or minimizer of f on X, is a point x* s.t. f(x*) ≤ f(x) ∀ x ∈ X.

6 LOCAL VS. GLOBAL SOLUTIONS x* ∈ X is a global maximizer of f in X iff f(x*) ≥ f(x) ∀ x ∈ X. x* ∈ X is a global minimizer of f in X iff f(x*) ≤ f(x) ∀ x ∈ X. x* ∈ X is a local maximizer of f in X iff ∃ ε > 0 such that f(x*) ≥ f(x) ∀ x ∈ N_ε(x*) ∩ X. x* ∈ X is a local minimizer of f in X iff ∃ ε > 0 such that f(x*) ≤ f(x) ∀ x ∈ N_ε(x*) ∩ X. [pictures]

7 Most of the definitions and results for maximization problems have an exact analog for minimization problems. From now on, the minimization analog will be omitted. A solution x* ∈ X is interior iff there is an ε > 0 such that N_ε(x*) ⊆ X and f(x*) ≥ f(x) ∀ x ∈ N_ε(x*). [picture]

8 SET OF SOLUTIONS The set of solutions to a max problem is denoted argmax{f(x) : x ∈ X} = {x* ∈ X : f(x*) ≥ f(x) ∀ x ∈ X}. Consider the following example: let X = [−1, 1], and f : R → R be f(x) = x^2. [picture] Maximizing f on X has two solutions, x* = −1 and x* = 1. As we can see, the set argmax{f(x) : x ∈ X} can have more than one element.
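The point of this slide can be checked numerically. A minimal sketch (grid search, not from the slides), maximizing f(x) = x^2 on X = [−1, 1] and collecting every grid point that attains the maximum:

```python
import numpy as np

# Grid search for max of f(x) = x**2 on X = [-1, 1]:
# collect every grid point attaining the maximum value.
xs = np.linspace(-1.0, 1.0, 2001)
fs = xs ** 2
fmax = fs.max()
argmax_set = xs[np.isclose(fs, fmax)]
# argmax_set contains both endpoints: the solution set need not be a singleton.
```

Both −1 and 1 survive the filter, illustrating that argmax is a set, not necessarily a single point.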

9 EXAMPLE 1 Let X = [0, 1], and f : R → R be f(x) = x^2. [picture] The problem of maximizing f on X has exactly one solution, the point x* = 1. The set argmax{f(x) : x ∈ X} can be a singleton.

10 EXAMPLE 2 Let X = R_+, and f : R_+ → R be f(x) = x^2. [picture] The problem of maximizing f on X has no solution: the set argmax{f(x) : x ∈ X} = ∅.

11 EXISTENCE OF A SOLUTION A sufficient condition for existence of a solution is given by the Weierstrass theorem. THEOREM (Weierstrass): If f is continuous and X is closed and bounded (hence compact) and nonempty, then f attains a global maximum and a global minimum on X. However, Weierstrass is sufficient but not necessary for existence of a solution. Example: X = (0, 2] and f : R → R with f(x) = x^2. [picture]
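The example can be sketched numerically: X = (0, 2] is not closed, yet the max of f(x) = x^2 on X is attained at x* = 2, while the minimum is not attained (the infimum 0 is only approached). A grid illustration, with the sampling scheme an assumption of this sketch:

```python
import numpy as np

f = lambda x: x ** 2

def grid(n):
    return np.linspace(2.0 / n, 2.0, n)  # n sample points in (0, 2], excluding 0

max_vals = [f(grid(n)).max() for n in (10, 100, 1000)]
min_vals = [f(grid(n)).min() for n in (10, 100, 1000)]
# max_vals stay at 4.0 (the maximizer x* = 2 exists in X);
# min_vals keep shrinking toward 0 as the grid refines (no minimizer in X).
```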

12 OBJECTIVES OF OPTIMIZATION THEORY I. To identify a set of conditions on f and X that guarantee the existence of solutions to optimization problems. [Weierstrass theorem] II. To obtain a characterization of the set of solutions: A. Necessary conditions [picture] [differentiability] B. Sufficient conditions [picture] [convexity] C. Conditions that guarantee uniqueness of a solution D. A theory of parametric variation [envelope theorem]. Sometimes optimization problems are presented in parametric form: f and/or X depend on parameters θ ∈ Θ, where Θ is the set of feasible parameter values. We write: max{f(x, θ) : x ∈ X(θ)}.

13 MOTIVATION (1) Consumer's Utility Maximization Problem: max u(x) s.t. p·x ≤ m, x ≥ 0, where p ∈ R^l_{++} and x ∈ R^l. Does a solution exist? The constraint set X = B(p, m) = {x ∈ R^l_+ : p·x ≤ m} is compact. [picture] So if u is continuous, then a solution exists (by the Weierstrass theorem).
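A small computational sketch of this problem, with an assumed Cobb-Douglas utility, prices and income (none of these specifics are from the slides): the budget set is compact and u is continuous, so a maximizer exists, and a numerical solver finds it.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed example: u(x) = sqrt(x1 * x2), p = (1, 2), m = 12.
p = np.array([1.0, 2.0])
m = 12.0
u = lambda x: np.sqrt(x[0] * x[1])
res = minimize(lambda x: -u(x), x0=[1.0, 1.0],
               bounds=[(0.0, None), (0.0, None)],
               constraints=[{"type": "ineq", "fun": lambda x: m - p @ x}])
# Cobb-Douglas demand spends half of income on each good: x* = (6, 3).
```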

14 MOTIVATION (2) Firm's Cost Minimization Problem: min p·x s.t. f(x) ≥ y, x ≥ 0, where p ∈ R^l_{++}, y ∈ R_{++} and x ∈ R^l. Does a solution exist? The constraint set is not compact. [picture] If the constraint set is not empty, ∃ x̄ s.t. f(x̄) ≥ y. Compactify the constraint set: {x ≥ 0 : p·x ≤ p·x̄ and f(x) ≥ y}. Since the objective function is continuous and this set is compact, a solution exists.
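A numerical sketch of cost minimization, with an assumed technology, prices and output target (these specifics are illustrative, not from the slides): the feasible set is unbounded, but after the compactification above a minimum exists and a solver recovers it.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed example: technology f(x) = sqrt(x1 * x2), p = (1, 4), y = 2.
p = np.array([1.0, 4.0])
y = 2.0
tech = lambda x: np.sqrt(x[0] * x[1])
res = minimize(lambda x: p @ x, x0=[2.0, 2.0],
               bounds=[(1e-9, None), (1e-9, None)],
               constraints=[{"type": "ineq", "fun": lambda x: tech(x) - y}])
# Closed form: minimize x1 + 4*x2 s.t. x1*x2 = 4, giving x* = (4, 1), cost 8.
```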

15 USEFUL THEOREMS (1) THEOREM: Let −f denote the function whose value at any x is −f(x). Then x* is a maximum of f on X iff it is a minimum of −f on X; and z* is a minimum of f on X iff z* is a maximum of −f on X. PROOF: HW.

16 USEFUL THEOREMS (2) THEOREM: Let φ : R → R be a strictly increasing function, that is, a function such that y > y′ ⟹ φ(y) > φ(y′). Then x* is a maximum of f on X iff x* is a maximum of the composition φ∘f on X; and z* is a minimum of f on X iff z* is a minimum of φ∘f on X. REMARK: it suffices that φ be strictly increasing on just the set f(X), that is, that φ satisfy φ(y) > φ(y′) for all y, y′ ∈ f(X) with y > y′. PROOF: HW.
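A quick numerical sketch of this theorem with φ = ln, using an assumed positive objective f(x) = x e^{−x} (not from the slides): the transformation does not move the maximizer.

```python
import numpy as np

# f is positive on the sampled X, so ln is strictly increasing on f(X).
xs = np.linspace(0.01, 2.0, 500)
f = xs * np.exp(-xs)
i_f = int(np.argmax(f))
i_log = int(np.argmax(np.log(f)))
# Both pick the same grid point, near the true maximizer x* = 1.
```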

17 CONSTRAINED OPTIMIZATION Consider the problem (P): max f_0(x) s.t. x ∈ S; f_1(x) ≥ 0, ..., f_m(x) ≥ 0, where: S ⊆ R^N is convex, and f_i : S → R for i = 0, 1, ..., m are differentiable and concave.

18 EXAMPLE Consumer's Utility Maximization Problem: max u(x) s.t. x ∈ R^l_+ and m − p·x ≥ 0, where p ∈ R^l_{++}.

19 SLATER CONDITION Slater Condition: ∃ x̂ ∈ int(S) s.t. f_i(x̂) > 0 for i = 1, ..., m. EXAMPLE: f_0(x) = x, f_1(x) = −x^2. Let S = R, so the maximization problem is: max x s.t. −x^2 ≥ 0, x ∈ R. There is no x ∈ R s.t. −x^2 > 0, so Slater is violated.

20 THE LAGRANGEAN Associated with (P), we can define a function L : S × R^m_+ → R given by: L(x, λ) = f_0(x) + Σ_{i=1}^m λ_i f_i(x). Notice that, for a given value of λ, L(·, λ) is concave in x, and for a given value of x, L(x, ·) is linear, hence convex, in λ. A function with these properties is also called a saddle function.

21 SADDLE POINT DEFINITION: a point (x*, λ*) is a saddle point of L(x, λ) if L(x, λ*) ≤ L(x*, λ*) ≤ L(x*, λ) for all (x, λ) ∈ S × R^m_+.

22 SADDLE POINT: CHARACTERIZATION If x* ∈ int(S), (x*, λ*) is a saddle point iff: 1. Df_0(x*) + Σ_{i=1}^m λ*_i Df_i(x*) = 0; 2. f_i(x*) ≥ 0 and λ*_i ≥ 0 for i = 1, ..., m; 3. Σ_{i=1}^m λ*_i f_i(x*) = 0. Given condition 2, condition 3 can be replaced by 3′. λ*_i f_i(x*) = 0 for i = 1, ..., m.
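These three conditions can be verified by hand on a small concave problem (an assumed example, not from the slides): max f_0(x) = −(x − 2)^2 s.t. f_1(x) = 1 − x ≥ 0, with S = R, candidate x* = 1 and multiplier λ* = 2.

```python
# Conditions 1-3 at the candidate (x*, lam) = (1, 2).
Df0 = lambda x: -2.0 * (x - 2.0)
f1 = lambda x: 1.0 - x
Df1 = lambda x: -1.0
x_star, lam = 1.0, 2.0
cond1 = Df0(x_star) + lam * Df1(x_star)   # condition 1: equals 0
cond2 = f1(x_star) >= 0 and lam >= 0      # condition 2: feasibility and sign
cond3 = lam * f1(x_star)                  # condition 3: equals 0
```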

23 KUHN-TUCKER UNDER CONCAVITY THEOREM (Kuhn-Tucker I): Assume f_0, f_1, ..., f_M are concave, continuous functions from a convex set S ⊆ R^N into R. Let the problem (P) and L(x, λ) be as described above. Then: (i) If (x*, λ*) ∈ S × R^M_+ is a saddle point of L(x, λ), then x* solves (P). (ii) Assume that the Slater condition holds. Then if x* ∈ S is a solution to (P), ∃ λ* ∈ R^M_+ such that (x*, λ*) is a saddle point of L(x, λ). This version of Kuhn-Tucker doesn't require differentiability.

24 EFFECTIVE CONSTRAINTS DEFINITION: an inequality constraint is effective or binding at a certain point x* if f_i(x*) = 0, i.e. if the constraint holds with equality at x*.

25 KUHN-TUCKER WITHOUT CONCAVITY THEOREM (Kuhn-Tucker II): Let f_0 : S → R be a C^1 function on a certain open set S ⊆ R^N, and let f_i : R^N → R, i = 1, ..., M, be C^1 functions. Suppose that x* is a local maximum of f_0 on the set D = S ∩ {x ∈ R^N : f_i(x) ≥ 0, i = 1, ..., M}. Let E ⊆ {1, ..., M} denote the set of effective constraints at x*. Suppose that the derivatives {Df_i(x*) : i ∈ E} form a linearly independent set of vectors. Then there exist λ*_i ∈ R, i = 1, ..., M, s.t. (i) λ*_i ≥ 0, i = 1, ..., M; (ii) λ*_i f_i(x*) = 0, i = 1, ..., M; (iii) Df_0(x*) + Σ_{i=1}^M λ*_i Df_i(x*) = 0. NOTE: these conditions are only necessary.

26 ANOTHER APPROACH State problems in standard form. MAX PROBLEM: max f(x) s.t. g_1(x) ≤ 0, ..., g_K(x) ≤ 0. MIN PROBLEM: min f(x) s.t. g_1(x) ≥ 0, ..., g_K(x) ≥ 0.

27 EXAMPLE (i) Consumer: max u(x) s.t. p·x − m ≤ 0, −x_1 ≤ 0, ..., −x_N ≤ 0. (ii) Firm: min p·x s.t. f(x) − y ≥ 0, x_1 ≥ 0, ..., x_N ≥ 0.

28 CONSTRAINT QUALIFICATION Recall we defined the set E, which contains only the indices of the binding (effective) constraints. DEFINITION: constraint qualification (CQ) holds at x* iff {∇g_i(x*) : i ∈ E} is linearly independent.

29 KUHN-TUCKER II RESTATED THEOREM (KT): Consider a MAX problem in the standard form. Let f, g_k be C^1. Let x* be a local maximum. Suppose that CQ holds at x*. Then ∃ λ_k ≥ 0 s.t. (1) ∇f(x*) = Σ_{k=1}^K λ_k ∇g_k(x*); (2) λ_k g_k(x*) = 0 ∀ k (complementary slackness). NOTE: (2) says that if g_k(x*) < 0 (kth constraint not binding), then λ_k = 0. We can rewrite (1): ∇f(x*) = Σ_{k∈E} λ_k ∇g_k(x*).
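Conditions (1) and (2) can be traced on a tiny assumed example (not from the slides): max f(x) = −(x_1 − 1)^2 − (x_2 − 1)^2 in standard form with one constraint g(x) = x_1 + x_2 − c ≤ 0, once with the constraint slack and once with it binding.

```python
import numpy as np

grad_f = lambda x: np.array([-2.0 * (x[0] - 1.0), -2.0 * (x[1] - 1.0)])
# c = 4: the unconstrained maximizer (1, 1) satisfies g(1, 1) = -2 < 0, so the
# constraint is slack, lam = 0, and KT(1) reduces to grad_f(x*) = 0.
slack_case = grad_f([1.0, 1.0])
# c = 1: the constraint binds at x* = (1/2, 1/2); grad_g = (1, 1), so KT(1)
# gives grad_f(x*) = (1, 1) = lam * (1, 1), i.e. lam = 1 > 0.
lam = grad_f([0.5, 0.5])[0]
comp_slack = lam * (0.5 + 0.5 - 1.0)   # KT(2): lam * g(x*) = 0
```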

30 EXAMPLE Let f(x_1, x_2) = x_2. Let the constraints be g_1(x) = (x_1 − 1)^2 + x_2^2 − 1 ≤ 0 [a disk centered at (1, 0)] and g_2(x) = (x_1 + 1)^2 + x_2^2 − 1 ≤ 0 [a disk centered at (−1, 0)]. The only feasible point is (0, 0), so it is the solution. ∇f(x*) = (0, 1), ∇g_1(x*) = (−2, 0), ∇g_2(x*) = (2, 0).

31 EXAMPLE (2) There's no way of writing ∇f(x*) as a linear combination of ∇g_1(x*) and ∇g_2(x*)! Condition KT(1) fails. What is wrong with this example? Constraint qualification (CQ) fails at (0, 0). But CQ isn't necessary: there are other restrictions on the MAX problem sufficient to guarantee that the KT conditions hold under other assumptions (see the previous section, Kuhn-Tucker under concavity).
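The failure can be checked numerically. At (0, 0) the constraint gradients span only the x_1-axis, so no combination of them can produce ∇f = (0, 1), and their linear dependence is exactly the CQ failure. A small sketch (the multiplier grid is an assumption of the check):

```python
import numpy as np

grad_f = np.array([0.0, 1.0])
grad_g1 = np.array([-2.0, 0.0])
grad_g2 = np.array([2.0, 0.0])
cq_rank = np.linalg.matrix_rank(np.vstack([grad_g1, grad_g2]))  # 1 < 2: CQ fails
second_coords = [(l1 * grad_g1 + l2 * grad_g2)[1]
                 for l1 in np.linspace(0.0, 5.0, 11)
                 for l2 in np.linspace(0.0, 5.0, 11)]
# Every combination has second coordinate 0, never grad_f's 1: KT(1) fails.
```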

32 BINDING VS. ACTIVE CONSTRAINTS EXAMPLE: max f(x) = −x^2 s.t. g(x) = x ≤ 0. [picture] The solution is x* = 0. The constraint is binding, but λ = 0 because f′(x*) = 0: relaxing the constraint does not change the solution. Call constraint k active if λ_k > 0. Then g in this example is binding, but not active. If a constraint is active, then it is binding (by condition KT(2)). Most of the time, binding constraints will be active, but not always.

33 SLACK CONSTRAINTS CAN AFFECT THE GLOBAL SOLUTIONS EXAMPLE: max f(x) = −x^2 + x^4 s.t. g_1(x) = x − 1 ≤ 0, g_2(x) = −x − 1 ≤ 0. [picture: W] We have three constrained maxima, at −1, 0 and 1. At x* = 0, none of the constraints is binding: g_1(x*) < 0 and g_2(x*) < 0. However, if, say, we relax g_2 to −x − 3 ≤ 0, then there would be a unique maximum at x* = −3. KT is a result about local rather than global maximization. Even if we relax the constraint, x* = 0 remains a local maximum.
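A grid sketch of this W-shaped example (the objective is read here as f(x) = x^4 − x^2 and the relaxed bound as x ≥ −3, both reconstructions of the garbled source): on [−1, 1] the maximum value 0 is attained at the three points −1, 0 and 1, and relaxing the lower bound moves the unique global maximum to the new endpoint.

```python
import numpy as np

f = lambda x: x ** 4 - x ** 2
xs = np.linspace(-1.0, 1.0, 2001)
vals = f(xs)
maximizers = xs[np.isclose(vals, vals.max())]   # three of them: -1, 0, 1
# Relaxing the lower bound to x >= -3 makes x = -3 the unique global maximum,
# while x = 0 remains a local maximum.
relaxed = np.linspace(-3.0, 1.0, 4001)
best_relaxed = relaxed[np.argmax(f(relaxed))]
```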

34 INTUITION: ONE BINDING CONSTRAINT Mountain example. By (1), ∇f(x*) = λ∇g(x*). g(x*) = 0: g cannot be increased. ∇g(x*) is perpendicular to the level set of g (the fence): we can only move along the level set through x*. ∇f(x*) is then also perpendicular to that level set, so any movement that we're allowed to make does not increase f. [pictures]

35 EXAMPLE 1 max f(x) = √x s.t. x ≥ 0, x ≤ 1. [picture] In standard form the Lagrangean terms are λ_1·(−x) (from g_1(x) = −x ≤ 0) and λ_2·(x − 1) (from g_2(x) = x − 1 ≤ 0). At x* = 1: f′(x) = 1/(2√x), so f′(x*) = 1/2. λ_1 = 0 by KT(2). λ_2 solves f′(x*) = λ_2 g_2′(x*): 1/2 = λ_2 · 1, so λ_2 = 1/2. [picture of the gradients]
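The multiplier computation in this example is short enough to script directly:

```python
# KT check for max f(x) = sqrt(x) s.t. g1(x) = -x <= 0, g2(x) = x - 1 <= 0.
# At x* = 1 only g2 binds, so lam1 = 0 and KT(1) reads f'(x*) = lam2 * g2'(x*).
fprime = lambda x: 0.5 / x ** 0.5
lam1 = 0.0                      # g1(1) = -1 < 0 is slack, so KT(2) forces lam1 = 0
lam2 = fprime(1.0) / 1.0        # g2'(x) = 1, so lam2 = 1/2
comp = lam1 * (-1.0) + lam2 * (1.0 - 1.0)   # complementary slackness holds
```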

36 EXAMPLE 2 max f(x) = e^x s.t. x ≤ 0 (g(x) = x ≤ 0). [picture] At x* = 0: f′(x) = e^x, so ∇f(x*) = 1 and ∇g(x*) = 1. By (1), λ = 1. [picture of gradients]

37 EXAMPLE 3 For N = 2, suppose (1) doesn't hold (∇f(x*) and ∇g(x*) are not collinear). Then it's feasible to move and increase f simultaneously. [picture]

38 INTUITION: TWO BINDING CONSTRAINTS [picture] Mountain example. By (1), ∇f(x*) = λ_1∇g_1(x*) + λ_2∇g_2(x*): the gradient of the objective function lies in the cone spanned by the gradients of the binding constraints. [picture]

39 PROOF OF KUHN-TUCKER Define W = {x ∈ R^N : ∃ λ_k ≥ 0 s.t. x = Σ_{k∈E} λ_k ∇g_k(x*)}. WTS: if x* is a local max, then ∇f(x*) ∈ W. By contraposition: if ∇f(x*) ∉ W, then x* can't be a local max. Suppose ∇f(x*) ∉ W. Since W is convex and closed, and {∇f(x*)} is compact and convex (a point), by the (strict) Separating Hyperplane Theorem, ∃ v ≠ 0 in R^N and c ∈ R such that ∇f(x*)·v > c > w·v ∀ w ∈ W.
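The separation step can be illustrated with toy data (the gradients below are assumptions of the sketch, not from the proof): take one effective constraint with gradient (1, 0), so W is the ray {λ·(1, 0) : λ ≥ 0}, and suppose ∇f = (0, 1) ∉ W. The direction v = (0, 1) then separates them.

```python
import numpy as np

grad_f = np.array([0.0, 1.0])
grad_g = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
ascent = float(grad_f @ v)                  # > 0: moving along v increases f
cone_dots = [float((lam * grad_g) @ v) for lam in np.linspace(0.0, 10.0, 21)]
# grad_f . v > 0 while w . v <= 0 for every w sampled from the cone W.
```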

40 PROOF OF KUHN-TUCKER (CONTINUED 1) (i) Since 0 ∈ W (set all λ_k = 0), c > 0, so ∇f(x*)·v > 0. Given ∇f(x*)·v > 0, D_v f(x*) > 0 (there is a movement in direction v that increases f). (ii) Since λ_k∇g_k(x*) ∈ W, c > λ_k∇g_k(x*)·v. For any λ_k > 0, c/λ_k > ∇g_k(x*)·v. Taking limits as λ_k → ∞, 0 ≥ ∇g_k(x*)·v (∀ k ∈ E). The movement in direction v is feasible.

41 PROOF OF KUHN-TUCKER (CONTINUED 2) Define J = {k ∈ E : ∇g_k(x*)·v = 0}. If J = ∅, then ∇g_k(x*)·v < 0 ∀ k ∈ E, so for α sufficiently small, g_k(x* + αv) < g_k(x*) = 0 (g_k no longer binds); for k ∉ E, g_k(x*) < 0, so by continuity of g_k, g_k(x* + αv) < 0 for α sufficiently small. So ∀ k, g_k(x* + αv) < 0 for α sufficiently small, so the movement is feasible. Also, given that ∇f(x*)·v > 0, f(x* + αv) > f(x*) for α sufficiently small, so x* cannot be a local maximum.

42 PROOF OF KUHN-TUCKER (CONTINUED 3) If J ≠ ∅, then there is at least one k ∈ E for which ∇g_k(x*)·v = 0, so the point x* + αv might not be feasible. Use the Implicit Function Theorem to argue that there are points x that are (a) feasible, (b) arbitrarily close to x*, and (c) such that the movement from x* to x is arbitrarily close to v. Let K = |J|. Let S = {∇g_k(x*) : k ∈ J}. By CQ, S is linearly independent. By CQ and the Implicit Function Theorem, the equations g_k(x) = 0, k ∈ J, implicitly define a C^1 function ψ : R^{N−K} → R^N s.t. ∀ k ∈ J and z ∈ R^{N−K}, g_k(ψ(z)) = 0, Dψ(z) has full rank, and ψ(0) = x*. That is, ψ gives an (N − K)-dimensional surface consisting of all points near x* for which the constraints in J hold with equality.

43 PROOF OF KUHN-TUCKER (CONTINUED 4) Given that g_k(ψ(z)) = 0 ∀ z ∈ R^{N−K} and all k ∈ J, by the Chain Rule, Dg_k(x*)Dψ(0)z = 0 ∀ z ∈ R^{N−K} and all k ∈ J. Let A = [Dg_k(x*)]_{k∈J}, a K × N matrix. By CQ, A has full rank (K), so its null space has dimension N − K. Since Dψ(0) has full rank (N − K), it maps onto the null space of A. Since v lies in the null space of A, ∃ z_v ∈ R^{N−K} s.t. v = Dψ(0)z_v. For sufficiently small α > 0, ψ(αz_v) gives a point x that fulfills (a), (b) and (c).

44 PROOF OF KUHN-TUCKER (CONTINUED 5) We already know that ∀ k ∈ J, g_k(ψ(αz_v)) = 0 ∀ α > 0. For k ∈ E\J, by the Chain Rule, Dg_k(x*)Dψ(0)z_v = Dg_k(x*)v < 0, which implies that g_k(ψ(αz_v)) < g_k(x*) = 0 for α sufficiently small. For k ∉ E, g_k(x*) < 0; by continuity, g_k(ψ(αz_v)) < 0 for α sufficiently small. Thus, x = ψ(αz_v) is feasible. By the Chain Rule, Df(x*)Dψ(0)z_v = Df(x*)v. Since ∇f(x*)·v > 0, Df(x*)v > 0. Hence, for α sufficiently small, f(x) > f(x*), so x* is not a local maximum.

45 USING THE KT CONDITIONS Bad news! There is no easy way of finding points x* and multipliers λ* that satisfy the KT conditions. Cookbook? Try solving the unconstrained problem first. If the solution satisfies the constraints, you're done. If it doesn't, make an educated guess about which of the constraints might be binding.
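The cookbook on a toy problem (an assumed example, not from the slides): max f(x) = −(x − 3)^2 s.t. x ≤ 1. The unconstrained maximizer 3 violates the constraint, so guess that the constraint binds and re-solve with it imposed.

```python
from scipy.optimize import minimize

neg_f = lambda x: (x[0] - 3.0) ** 2
step1 = minimize(neg_f, x0=[0.0]).x[0]                        # about 3: infeasible
step2 = minimize(neg_f, x0=[0.0], bounds=[(None, 1.0)]).x[0]  # about 1: binds
```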

46 EXAMPLE 1 max f(x) = x_1^{1/2} x_2^{1/3} x_3^{1/6} s.t. 4x_1 + 8x_2 + 3x_3 ≤ 9, x ≥ 0. In the standard form: max f(x) = x_1^{1/2} x_2^{1/3} x_3^{1/6} s.t. g_1(x) = 4x_1 + 8x_2 + 3x_3 − 9 ≤ 0, −x ≤ 0.

47 EXAMPLE 1 i. None of the non-negativity constraints will bind. How do we know? If any x_n = 0, then f(x) = 0, while there are feasible points with f(x) > 0. For instance, plug (1/4, 1/8, 1/3) into the constraint: 4(1/4) + 8(1/8) + 3(1/3) − 9 = 3 − 9 = −6 ≤ 0, and this point yields f(x) = (1/2)^2 (1/3)^{1/6} > 0. ii. The constraint 4x_1 + 8x_2 + 3x_3 − 9 ≤ 0 is binding. How do we know? Df(x) >> 0, so f can always be increased until the budget is exhausted. From KT(2), we know that λ_2 = λ_3 = λ_4 = 0. Use KT(1) to find λ_1.

48 EXAMPLE 1 iii. Want to avoid messy calculations? Use the second useful theorem: apply a strictly increasing transformation to f(x), and solve that max problem. Let f̂(x) = ln(f(x)). This transformation is strictly increasing on f(X): D(ln y) = 1/y > 0 (we already know that f(x) > 0 at the solution). So solving the problem max f̂(x) = (1/2)ln x_1 + (1/3)ln x_2 + (1/6)ln x_3 s.t. 4x_1 + 8x_2 + 3x_3 − 9 ≤ 0, −x ≤ 0 yields the same solution as the original problem.

49 EXAMPLE 1 Use KT(1): 1/(2x_1) = 4λ_1, 1/(3x_2) = 8λ_1, 1/(6x_3) = 3λ_1, or equivalently 4x_1 = 1/(2λ_1), 8x_2 = 1/(3λ_1), 3x_3 = 1/(6λ_1). Substituting into the constraint, we get 1/(2λ_1) + 1/(3λ_1) + 1/(6λ_1) = 9, so λ_1 = 1/9. Substituting back, x* = (9/8, 3/8, 1/2). However, I haven't shown yet that x* is a solution. So far, we only know that it satisfies the KT necessary conditions. To show that x* is a solution, I need sufficient conditions.
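The KT(1) system above pins down the candidate x* = (9/8, 3/8, 1/2) with λ_1 = 1/9, and these values can be verified mechanically against the log objective f̂(x) = (1/2)ln x_1 + (1/3)ln x_2 + (1/6)ln x_3:

```python
import numpy as np

x = np.array([9/8, 3/8, 1/2])
lam1 = 1/9
grad_fhat = np.array([1/(2*x[0]), 1/(3*x[1]), 1/(6*x[2])])
grad_g1 = np.array([4.0, 8.0, 3.0])
stationarity = grad_fhat - lam1 * grad_g1   # KT(1): the zero vector
budget = 4*x[0] + 8*x[1] + 3*x[2]           # the constraint binds at 9
```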

50 EXAMPLE 2 max f(x) s.t. 4x_1 + 8x_2 + 3x_3 ≤ 9, x ≥ 0, for an objective f with Df(x) >> 0. In standard form: max f(x) s.t. g_1(x) = 4x_1 + 8x_2 + 3x_3 − 9 ≤ 0, −x ≤ 0. Now it's no longer obvious that the solution has x >> 0. What do we know now? (i) Df(x) >> 0, so the first constraint will bind: 4x_1 + 8x_2 + 3x_3 − 9 = 0.

51 EXAMPLE 2 Now start guessing! Guess 1: x >> 0? If only the first constraint binds, then by KT(2), λ_2 = λ_3 = λ_4 = 0, and by KT(1), Df(x) = λ_1∇g_1(x). Solving this system together with the binding budget constraint gives a candidate (x_1, x_2, x_3) and a value for λ_1. But this point is not in the feasible set!

52 EXAMPLE 2 This bad guess can give us a clue about what the solution looks like. Guess 2: x_1 = x_2 = 0. Then, since the first constraint binds, x_3 = 3, so x* = (0, 0, 3). The relevant gradients are ∇g_1(x*) = (4, 8, 3), ∇g_2(x*) = (−1, 0, 0), ∇g_3(x*) = (0, −1, 0). Write ∇f(x*) = λ_1∇g_1(x*) + λ_2∇g_2(x*) + λ_3∇g_3(x*) in matrix form and solve: the resulting λ_1, λ_2, λ_3 are all positive. Set λ_4 = 0, and both KT(1) and KT(2) hold.

53 EXAMPLE 2 What would have happened if I had guessed that x_2 = x_3 = 0? Then x = (9/4, 0, 0), which is feasible, so we would expect KT to fail. By KT(2), λ_2 = 0 (the constraint −x_1 ≤ 0 is slack). Solving KT(1) for the remaining multipliers, the condition λ_k ≥ 0 is violated. [The sign of the KT multipliers is important.] KT can catch bad guesses.


ARE202A, Fall 2005 CONTENTS. 1. Graphical Overview of Optimization Theory (cont) Separating Hyperplanes 1 AREA, Fall 5 LECTURE #: WED, OCT 5, 5 PRINT DATE: OCTOBER 5, 5 (GRAPHICAL) CONTENTS 1. Graphical Overview of Optimization Theory (cont) 1 1.4. Separating Hyperplanes 1 1.5. Constrained Maximization: One

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

Chapter 4 - Convex Optimization

Chapter 4 - Convex Optimization Chapter 4 - Convex Optimization Justin Leduc These lecture notes are meant to be used by students entering the University of Mannheim Master program in Economics. They constitute the base for a pre-course

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris University of California, Davis Department of Agricultural and Resource Economics ARE 5 Lecture Notes Quirino Paris Karush-Kuhn-Tucker conditions................................................. page Specification

More information

Microeconomics, Block I Part 1

Microeconomics, Block I Part 1 Microeconomics, Block I Part 1 Piero Gottardi EUI Sept. 26, 2016 Piero Gottardi (EUI) Microeconomics, Block I Part 1 Sept. 26, 2016 1 / 53 Choice Theory Set of alternatives: X, with generic elements x,

More information

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014 Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,

More information

Roles of Convexity in Optimization Theory. Efor, T. E and Nshi C. E

Roles of Convexity in Optimization Theory. Efor, T. E and Nshi C. E IDOSR PUBLICATIONS International Digital Organization for Scientific Research ISSN: 2550-7931 Roles of Convexity in Optimization Theory Efor T E and Nshi C E Department of Mathematics and Computer Science

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Date: July 5, Contents

Date: July 5, Contents 2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........

More information

Economics 101A (Lecture 3) Stefano DellaVigna

Economics 101A (Lecture 3) Stefano DellaVigna Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem

More information

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2 LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 2010/11 Math for Microeconomics September Course, Part II Problem Set 1 with Solutions 1. Show that the general

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

6 Optimization. The interior of a set S R n is the set. int X = {x 2 S : 9 an open box B such that x 2 B S}

6 Optimization. The interior of a set S R n is the set. int X = {x 2 S : 9 an open box B such that x 2 B S} 6 Optimization The interior of a set S R n is the set int X = {x 2 S : 9 an open box B such that x 2 B S} Similarly, the boundary of S, denoted@s, istheset @S := {x 2 R n :everyopenboxb s.t. x 2 B contains

More information

How to Characterize Solutions to Constrained Optimization Problems

How to Characterize Solutions to Constrained Optimization Problems How to Characterize Solutions to Constrained Optimization Problems Michael Peters September 25, 2005 1 Introduction A common technique for characterizing maximum and minimum points in math is to use first

More information

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to 1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the

More information

Summary Notes on Maximization

Summary Notes on Maximization Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject

More information

The Kuhn-Tucker and Envelope Theorems

The Kuhn-Tucker and Envelope Theorems The Kuhn-Tucker and Envelope Theorems Peter Ireland EC720.01 - Math for Economists Boston College, Department of Economics Fall 2010 The Kuhn-Tucker and envelope theorems can be used to characterize the

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Concave programming. Concave programming is another special case of the general constrained optimization. subject to g(x) 0

Concave programming. Concave programming is another special case of the general constrained optimization. subject to g(x) 0 1 Introduction Concave programming Concave programming is another special case of the general constrained optimization problem max f(x) subject to g(x) 0 in which the objective function f is concave and

More information

Economics 501B Final Exam Fall 2017 Solutions

Economics 501B Final Exam Fall 2017 Solutions Economics 501B Final Exam Fall 2017 Solutions 1. For each of the following propositions, state whether the proposition is true or false. If true, provide a proof (or at least indicate how a proof could

More information

Primal/Dual Decomposition Methods

Primal/Dual Decomposition Methods Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients

More information

On the (Non-)Differentiability of the Optimal Value Function When the Optimal Solution Is Unique

On the (Non-)Differentiability of the Optimal Value Function When the Optimal Solution Is Unique On the (Non-)Differentiability of the Optimal Value Function When the Optimal Solution Is Unique Daisuke Oyama Faculty of Economics, University of Tokyo Hongo, Bunkyo-ku, Tokyo 113-0033, Japan oyama@e.u-tokyo.ac.jp

More information

Mathematics For Economists

Mathematics For Economists Mathematics For Economists Mark Dean Final 2010 Tuesday 7th December Question 1 (15 Points) Let be a continuously differentiable function on an interval in R. Show that is concave if and only if () ()

More information

EC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline

EC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 20010/11 Math for Microeconomics September Course, Part II Lecture Notes Course Outline Lecture 1: Tools for

More information

OPTIMISATION /09 EXAM PREPARATION GUIDELINES

OPTIMISATION /09 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2 2008/09 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and

More information

Existence of minimizers

Existence of minimizers Existence of imizers We have just talked a lot about how to find the imizer of an unconstrained convex optimization problem. We have not talked too much, at least not in concrete mathematical terms, about

More information

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006 Quiz Discussion IE417: Nonlinear Programming: Lecture 12 Jeff Linderoth Department of Industrial and Systems Engineering Lehigh University 16th March 2006 Motivation Why do we care? We are interested in

More information

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and homework.

More information

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2002

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2002 Test 1 September 20, 2002 1. Determine whether each of the following is a statement or not (answer yes or no): (a) Some sentences can be labelled true and false. (b) All students should study mathematics.

More information

Constrained Optimization. Unconstrained Optimization (1)

Constrained Optimization. Unconstrained Optimization (1) Constrained Optimization Unconstrained Optimization (Review) Constrained Optimization Approach Equality constraints * Lagrangeans * Shadow prices Inequality constraints * Kuhn-Tucker conditions * Complementary

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

Competitive Consumer Demand 1

Competitive Consumer Demand 1 John Nachbar Washington University May 7, 2017 1 Introduction. Competitive Consumer Demand 1 These notes sketch out the basic elements of competitive demand theory. The main result is the Slutsky Decomposition

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

We are now going to move on to a discussion of Inequality constraints. Our canonical problem now looks as ( ) = 0 ( ) 0

We are now going to move on to a discussion of Inequality constraints. Our canonical problem now looks as ( ) = 0 ( ) 0 4 Lecture 4 4.1 Constrained Optimization with Inequality Constraints We are now going to move on to a discussion of Inequality constraints. Our canonical problem now looks as Problem 11 (Constrained Optimization

More information

Duality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities

Duality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form

More information

Convexity in R N Supplemental Notes 1

Convexity in R N Supplemental Notes 1 John Nachbar Washington University November 1, 2014 Convexity in R N Supplemental Notes 1 1 Introduction. These notes provide exact characterizations of support and separation in R N. The statement of

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal

More information

The Kuhn-Tucker and Envelope Theorems

The Kuhn-Tucker and Envelope Theorems The Kuhn-Tucker and Envelope Theorems Peter Ireland ECON 77200 - Math for Economists Boston College, Department of Economics Fall 207 The Kuhn-Tucker and envelope theorems can be used to characterize the

More information

Mathematical Preliminaries

Mathematical Preliminaries Mathematical Preliminaries Economics 3307 - Intermediate Macroeconomics Aaron Hedlund Baylor University Fall 2013 Econ 3307 (Baylor University) Mathematical Preliminaries Fall 2013 1 / 25 Outline I: Sequences

More information

1 General Equilibrium

1 General Equilibrium 1 General Equilibrium 1.1 Pure Exchange Economy goods, consumers agent : preferences < or utility : R + R initial endowments, R + consumption bundle, =( 1 ) R + Definition 1 An allocation, =( 1 ) is feasible

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Math 273a: Optimization Subgradients of convex functions

Math 273a: Optimization Subgradients of convex functions Math 273a: Optimization Subgradients of convex functions Made by: Damek Davis Edited by Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com 1 / 42 Subgradients Assumptions

More information

The Karush-Kuhn-Tucker (KKT) conditions

The Karush-Kuhn-Tucker (KKT) conditions The Karush-Kuhn-Tucker (KKT) conditions In this section, we will give a set of sufficient (and at most times necessary) conditions for a x to be the solution of a given convex optimization problem. These

More information

i) This is simply an application of Berge s Maximum Theorem, but it is actually not too difficult to prove the result directly.

i) This is simply an application of Berge s Maximum Theorem, but it is actually not too difficult to prove the result directly. Bocconi University PhD in Economics - Microeconomics I Prof. M. Messner Problem Set 3 - Solution Problem 1: i) This is simply an application of Berge s Maximum Theorem, but it is actually not too difficult

More information