Modern Optimization Theory: Concave Programming


1. Preliminaries

We will present below the elements of modern optimization theory as formulated by Kuhn and Tucker, and by a number of authors who have followed their general approach. Modern constrained maximization theory is concerned with the following problem:

$$\text{Maximize } f(x) \quad \text{subject to } g_j(x) \ge 0 \text{ for } j = 1, 2, \ldots, m, \quad \text{and } x \in X, \tag{P}$$

where $X$ is a non-empty subset of $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, m)$ are functions from $X$ to $\mathbb{R}$.

Constraint set: $C = \{x \in X : g_j(x) \ge 0 \text{ for } j = 1, 2, \ldots, m\}$.

A point $\hat{x} \in X$ is a point of constrained global maximum if $\hat{x}$ solves the problem (P). A point $\hat{x} \in X$ is a point of constrained local maximum if there exists an open ball $B(\hat{x})$ around $\hat{x}$ such that $f(\hat{x}) \ge f(x)$ for all $x \in B(\hat{x}) \cap C$.

A pair $(\hat{x}, \hat{\lambda}) \in X \times \mathbb{R}^m_+$ is a saddle point if

$$\Phi(x, \hat{\lambda}) \le \Phi(\hat{x}, \hat{\lambda}) \le \Phi(\hat{x}, \lambda) \quad \text{for all } x \in X \text{ and all } \lambda \in \mathbb{R}^m_+,$$

where

$$\Phi(x, \lambda) = f(x) + \lambda \cdot g(x) \quad \text{for all } (x, \lambda) \in X \times \mathbb{R}^m_+.$$

Thus $(\hat{x}, \hat{\lambda})$ is simultaneously a point of maximum and of minimum of $\Phi(x, \lambda)$: a maximum with respect to $x$, and a minimum with respect to $\lambda$.

The constrained minimization problem, and the corresponding constrained global minimum and constrained local minimum, can be defined analogously.
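To make the definition concrete, here is a minimal numerical sketch with a toy problem of our own (not one from these notes): maximize $f(x) = -(x-2)^2$ subject to $g(x) = 1 - x \ge 0$ with $X = \mathbb{R}$; the constrained maximum is $\hat{x} = 1$, and stationarity of $\Phi$ in $x$ gives $\hat{\lambda} = 2$.

```python
import numpy as np

# Toy problem (our own illustration): maximize f(x) = -(x - 2)^2
# subject to g(x) = 1 - x >= 0, with X = R.
f = lambda x: -(x - 2.0) ** 2
g = lambda x: 1.0 - x
Phi = lambda x, lam: f(x) + lam * g(x)   # Phi(x, lam) = f(x) + lam * g(x)

x_hat, lam_hat = 1.0, 2.0                # candidate saddle point
xs = np.linspace(-3.0, 5.0, 801)         # grid over X
lams = np.linspace(0.0, 10.0, 801)       # grid over R_+

# First saddle inequality: Phi(., lam_hat) is maximized at x_hat.
print(np.all(Phi(xs, lam_hat) <= Phi(x_hat, lam_hat) + 1e-12))   # True
# Second saddle inequality: Phi(x_hat, .) is minimized at lam_hat.
print(np.all(Phi(x_hat, lam_hat) <= Phi(x_hat, lams) + 1e-12))   # True
```

Here $\Phi(x, 2) = -(x-1)^2 - 1$, maximized at $\hat{x} = 1$, while $\Phi(1, \lambda) = -1$ for every $\lambda \ge 0$, so both inequalities hold.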

2. Constrained Global Maxima and Saddle Points

A major part of modern optimization theory is concerned with establishing (under suitable conditions) an equivalence between a point of constrained global maximum and a saddle point. We explore this theory in what follows.

Theorem 1: If $(\hat{x}, \hat{\lambda}) \in X \times \mathbb{R}^m_+$ is a saddle point, then (i) $\hat{\lambda} \cdot g(\hat{x}) = 0$; (ii) $g(\hat{x}) \ge 0$; and (iii) $\hat{x}$ is a point of constrained global maximum.

Proof: To be discussed in class. Hints:
- For (i) and (ii), use the second inequality in the definition of a saddle point.
- Then use (i), (ii), and the first inequality in the saddle-point definition to prove (iii).

A converse of Theorem 1 can be proved if $X$ is a convex set, $f, g_j\ (j = 1, 2, \ldots, m)$ are concave functions on $X$, and a condition on the constraints, generally known as Slater's condition, is satisfied.
- Notice that none of these conditions is needed for the validity of Theorem 1.

Slater's Condition: Given the problem (P), we say that Slater's condition holds if there exists $\bar{x} \in X$ such that $g_j(\bar{x}) > 0$ for $j = 1, 2, \ldots, m$.

Theorem 2 (Kuhn-Tucker): Suppose $\hat{x} \in X$ is a point of constrained global maximum. If $X$ is a convex set, $f, g_j\ (j = 1, 2, \ldots, m)$ are concave functions on $X$, and Slater's condition holds, then there is $\hat{\lambda} \in \mathbb{R}^m_+$ such that (i) $\hat{\lambda} \cdot g(\hat{x}) = 0$, and (ii) $(\hat{x}, \hat{\lambda})$ is a saddle point.

Examples: The following examples demonstrate why the assumptions of Theorem 2 are needed for the conclusion to be valid.

#1. Let $X = \mathbb{R}_+$, $f : X \to \mathbb{R}$ be given by $f(x) = x$, and $g : X \to \mathbb{R}$ be given by $g(x) = -x^2$.
(a) What is the point of constrained global maximum $\hat{x}$ for the problem (P) for this characterization of $X$, $f$, and $g$?
(b) Can you find a $\hat{\lambda} \in \mathbb{R}_+$ such that $(\hat{x}, \hat{\lambda})$ is a saddle point? Explain clearly.
(c) What goes wrong? Explain clearly.

#2. Let $X = \mathbb{R}_+$, $f : X \to \mathbb{R}$ be given by $f(x) = x^2$, and $g : X \to \mathbb{R}$ be given by $g(x) = 1 - x$.
(a) What is the point of constrained global maximum $\hat{x}$ for the problem (P) for this characterization of $X$, $f$, and $g$?
(b) Can you find a $\hat{\lambda} \in \mathbb{R}_+$ such that $(\hat{x}, \hat{\lambda})$ is a saddle point? Explain clearly.
(c) Is Slater's condition satisfied? What goes wrong? Explain clearly.
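One way to explore part (b) of Example #1 numerically, after attempting it by hand (a sketch we add; the log-spaced grid is there to catch behavior near zero): for every candidate $\hat{\lambda}$, $\Phi(x, \hat{\lambda}) = x - \hat{\lambda} x^2$ is strictly positive for $0 < x < 1/\hat{\lambda}$, so the first saddle inequality $\Phi(x, \hat{\lambda}) \le \Phi(0, \hat{\lambda}) = 0$ fails and no saddle point exists.

```python
import numpy as np

# Example #1: X = R_+, f(x) = x, g(x) = -x^2.  The only feasible point is
# x_hat = 0, and Slater's condition fails (g(x) > 0 is impossible).
xs = np.logspace(-12, 0, 2000)           # log grid on (0, 1]
for lam in [0.0, 1.0, 10.0, 1e3, 1e6]:
    phi = xs - lam * xs ** 2             # Phi(x, lam) on the grid
    print(f"lam = {lam:>9g}: max Phi on grid = {phi.max():.3e}")
# Every maximum is strictly positive (near x = 1/(2*lam) for lam > 0),
# so Phi(x, lam) <= Phi(0, lam) = 0 fails for every lam >= 0.
```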

3. The Kuhn-Tucker Conditions and Saddle Points

The Kuhn-Tucker Conditions: Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, m)$ be continuously differentiable on $X$. A pair $(\hat{x}, \hat{\lambda}) \in X \times \mathbb{R}^m_+$ satisfies the Kuhn-Tucker conditions if

(i) $\dfrac{\partial f}{\partial x_i}(\hat{x}) + \displaystyle\sum_{j=1}^{m} \hat{\lambda}_j \dfrac{\partial g_j}{\partial x_i}(\hat{x}) = 0$ for $i = 1, 2, \ldots, n$;
(ii) $g(\hat{x}) \ge 0$ and $\hat{\lambda} \cdot g(\hat{x}) = 0$.

The condition $\hat{\lambda} \cdot g(\hat{x}) = 0$ is called the 'Complementary Slackness' condition. Note that

$$\hat{\lambda} \cdot g(\hat{x}) = 0 \;\Rightarrow\; \hat{\lambda}_1 g_1(\hat{x}) + \cdots + \hat{\lambda}_m g_m(\hat{x}) = 0 \;\Rightarrow\; \hat{\lambda}_1 g_1(\hat{x}) = 0, \ldots, \hat{\lambda}_m g_m(\hat{x}) = 0,$$

since $\hat{\lambda}_j \ge 0$ (as $\hat{\lambda} \in \mathbb{R}^m_+$) and $g_j(\hat{x}) \ge 0$.
- So if $g_j(\hat{x}) > 0$, then $\hat{\lambda}_j = 0$: if a constraint is not binding, then the corresponding multiplier is $0$.
- But if $g_j(\hat{x}) = 0$, then $\hat{\lambda}_j$ can be either positive or zero.
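These requirements translate directly into a small numerical checker. Below is a sketch of our own (the helper names and the example problem are ours, not the notes'), using central finite differences for the gradients:

```python
import numpy as np

def grad(fun, x, h=1e-6):
    """Central finite-difference gradient of fun at x."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        out[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return out

def check_kkt(f, gs, x_hat, lam_hat, tol=1e-5):
    """Check conditions (i) and (ii) at a candidate pair (x_hat, lam_hat)."""
    x_hat, lam_hat = np.asarray(x_hat, float), np.asarray(lam_hat, float)
    station = grad(f, x_hat) + sum(l * grad(g, x_hat) for l, g in zip(lam_hat, gs))
    gvals = np.array([g(x_hat) for g in gs])
    return {
        "stationarity": np.all(np.abs(station) < tol),    # condition (i)
        "feasibility": np.all(gvals >= -tol),             # g(x_hat) >= 0
        "nonneg multipliers": np.all(lam_hat >= -tol),
        "compl. slackness": np.all(np.abs(lam_hat * gvals) < tol),
    }

# Hypothetical usage: maximize x + y subject to 1 - x^2 - y^2 >= 0;
# candidate x_hat = (1/sqrt(2), 1/sqrt(2)) with lam_hat = 1/sqrt(2).
f = lambda v: v[0] + v[1]
g1 = lambda v: 1.0 - v[0] ** 2 - v[1] ** 2
print(check_kkt(f, [g1], [2 ** -0.5, 2 ** -0.5], [2 ** -0.5]))
```

For the hypothetical problem shown, the candidate passes all four checks, since $\nabla f = (1,1)$ and $\hat{\lambda} \nabla g = -(1,1)$ at the point.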

A part of modern optimization theory is concerned with establishing the equivalence (under some suitable conditions) between a saddle point and a point where the Kuhn-Tucker conditions are satisfied.

Theorem 3: Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, m)$ be continuously differentiable on $X$. Suppose a pair $(\hat{x}, \hat{\lambda}) \in X \times \mathbb{R}^m_+$ satisfies the Kuhn-Tucker conditions. If $X$ is convex and $f, g_j\ (j = 1, 2, \ldots, m)$ are concave on $X$, then (i) $(\hat{x}, \hat{\lambda})$ is a saddle point, and (ii) $\hat{x}$ is a point of constrained global maximum.

Proof: To be discussed in class.

Theorem 4: Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, m)$ be continuously differentiable on $X$. Suppose a pair $(\hat{x}, \hat{\lambda}) \in X \times \mathbb{R}^m_+$ is a saddle point. Then $(\hat{x}, \hat{\lambda})$ satisfies the Kuhn-Tucker conditions.

Proof: To be discussed in class.

4. Sufficient Conditions for Constrained Global Maximum and Minimum

We now have all the ingredients to work out sufficient conditions for a constrained global maximum or minimum involving the Kuhn-Tucker conditions.

#3. State and prove rigorously a theorem that gives sufficient conditions for a constrained global maximum involving the Kuhn-Tucker conditions.

#4. State and prove rigorously a theorem that gives sufficient conditions for a constrained global minimum involving the Kuhn-Tucker conditions.

5. Constrained Local and Global Maxima

It is clear that if $\hat{x}$ is a point of constrained global maximum, then $\hat{x}$ is also a point of constrained local maximum. The circumstances under which the converse is true are given by the following theorem.

Theorem 5: Let $X$ be a convex set in $\mathbb{R}^n$. Let $f, g_j\ (j = 1, 2, \ldots, m)$ be concave functions on $X$. Suppose $\hat{x}$ is a point of constrained local maximum. Then $\hat{x}$ is also a point of constrained global maximum.

Proof: To be discussed in class. Hint: establish first that since $X$ is a convex set and the $g_j\ (j = 1, 2, \ldots, m)$ are concave functions, the constraint set $C$ is a convex set.

6. Necessary Conditions for Constrained Local Maximum and Minimum

We now establish the useful result (corresponding to the classical Lagrange Theorem) that if $x^* \in X$ is a point of constrained local maximum then, under suitable conditions, there exists $\lambda^* \in \mathbb{R}^k_+$ such that $(x^*, \lambda^*)$ satisfies the Kuhn-Tucker conditions.

Theorem 6 (Constrained Local Maximum): Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, k)$ be continuously differentiable on $X$. Suppose that $x^* \in X$ is a point of constrained local maximum of $f$ subject to the $k$ inequality constraints

$$g_1(x) \le b_1, \ldots, g_k(x) \le b_k.$$

Without loss of generality, assume that the first $k_0$ constraints are binding at $x^*$ and that the last $k - k_0$ constraints are not binding. Suppose that the following nondegenerate constraint qualification is satisfied at $x^*$: the rank at $x^*$ of the Jacobian matrix of the binding constraints,

$$\begin{pmatrix}
\frac{\partial g_1}{\partial x_1}(x^*) & \cdots & \frac{\partial g_1}{\partial x_n}(x^*)\\
\vdots & \ddots & \vdots\\
\frac{\partial g_{k_0}}{\partial x_1}(x^*) & \cdots & \frac{\partial g_{k_0}}{\partial x_n}(x^*)
\end{pmatrix},$$

is $k_0$. Form the Lagrangian

$$L(x, \lambda) \equiv f(x) - \lambda_1\,[g_1(x) - b_1] - \cdots - \lambda_k\,[g_k(x) - b_k].$$

Then there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*)$ such that

(a) $\frac{\partial L}{\partial x_1}(x^*, \lambda^*) = 0, \ldots, \frac{\partial L}{\partial x_n}(x^*, \lambda^*) = 0$;
(b) $\lambda_1^*\,[g_1(x^*) - b_1] = 0, \ldots, \lambda_k^*\,[g_k(x^*) - b_k] = 0$;
(c) $\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
(d) $g_1(x^*) \le b_1, \ldots, g_k(x^*) \le b_k$.

Proof: To be discussed in class (see Section 19.6, pages 480-482, of the textbook). Note that the conditions (a)-(d) are the Kuhn-Tucker conditions.

Example: Consider the following problem:

$$\text{Maximize } x \quad \text{subject to } (1 - x)^3 \ge y, \quad x \ge 0, \quad y \ge 0.$$

(a) Define carefully $X$, $f$, and the $g_j$'s and $b_j$'s for this problem.
(b) Draw carefully the constraint set for this problem and find $(x^*, y^*)$ such that $(x^*, y^*)$ solves this problem.
(c) Are there $\lambda_j^*$'s (the number of $\lambda_j^*$'s should accord with the number of $g_j$'s) such that $(x^*, y^*)$ and the $\lambda_j^*$'s satisfy the Kuhn-Tucker conditions? Explain carefully.
(d) What goes wrong? Explain carefully.
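After attempting (a)-(d) by hand, the failure can be verified symbolically. In this sketch (our own; one possible formulation writes the constraints in the $g_j \le b_j$ form of Theorem 6 as $g_1 = y - (1-x)^3 \le 0$, $g_2 = -x \le 0$, $g_3 = -y \le 0$), the Jacobian of the binding constraints at the solution $(x^*, y^*) = (1, 0)$ has rank $1 < k_0 = 2$, and the first-order system is inconsistent:

```python
import sympy as sp

# Constraints in g_j(x) <= 0 form; the solution is (x*, y*) = (1, 0),
# where g1 and g3 bind and g2 is slack (so l2 = 0 by complementary slackness).
x, y, l1, l2, l3 = sp.symbols('x y l1 l2 l3', real=True)
f = x
g = [y - (1 - x) ** 3, -x, -y]

pt = {x: 1, y: 0}
J = sp.Matrix([[sp.diff(gj, v).subs(pt) for v in (x, y)]
               for gj in g if gj.subs(pt) == 0])     # binding constraints only
print(J.tolist(), "rank:", J.rank())                 # [[0, 1], [0, -1]], rank 1

L = f - l1 * g[0] - l2 * g[1] - l3 * g[2]            # Lagrangian of Theorem 6
eqs = [sp.diff(L, v).subs(pt) for v in (x, y)] + [l2]
print(sp.solve(eqs, [l1, l2, l3]))                   # [] -- no multipliers exist
```

The constraint qualification fails because $\nabla f = (1, 0)$ lies outside the span of the degenerate binding-constraint gradients $(0, 1)$ and $(0, -1)$.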

Theorem 7 (Mixed Constraints): Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, k)$ and $h_i\ (i = 1, 2, \ldots, m)$ be continuously differentiable on $X$. Suppose that $x^* \in X$ is a point of constrained local maximum of $f$ subject to $k$ inequality constraints and $m$ equality constraints:

$$g_1(x) \le b_1, \ldots, g_k(x) \le b_k; \quad h_1(x) = c_1, \ldots, h_m(x) = c_m.$$

Without loss of generality, assume that the first $k_0$ inequality constraints are binding at $x^*$ and that the last $k - k_0$ constraints are not binding. Suppose that the following nondegenerate constraint qualification is satisfied at $x^*$: the rank at $x^*$ of the Jacobian matrix of the equality constraints and the binding inequality constraints,

$$\begin{pmatrix}
\frac{\partial g_1}{\partial x_1}(x^*) & \cdots & \frac{\partial g_1}{\partial x_n}(x^*)\\
\vdots & \ddots & \vdots\\
\frac{\partial g_{k_0}}{\partial x_1}(x^*) & \cdots & \frac{\partial g_{k_0}}{\partial x_n}(x^*)\\
\frac{\partial h_1}{\partial x_1}(x^*) & \cdots & \frac{\partial h_1}{\partial x_n}(x^*)\\
\vdots & \ddots & \vdots\\
\frac{\partial h_m}{\partial x_1}(x^*) & \cdots & \frac{\partial h_m}{\partial x_n}(x^*)
\end{pmatrix},$$

is $k_0 + m$. Form the Lagrangian

$$L(x, \lambda, \mu) \equiv f(x) - \lambda_1[g_1(x) - b_1] - \cdots - \lambda_k[g_k(x) - b_k] - \mu_1[h_1(x) - c_1] - \cdots - \mu_m[h_m(x) - c_m].$$

Then there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*, \mu_1^*, \ldots, \mu_m^*)$ such that

(a) $\frac{\partial L}{\partial x_1}(x^*, \lambda^*, \mu^*) = 0, \ldots, \frac{\partial L}{\partial x_n}(x^*, \lambda^*, \mu^*) = 0$;
(b) $\lambda_1^*[g_1(x^*) - b_1] = 0, \ldots, \lambda_k^*[g_k(x^*) - b_k] = 0$;
(c) $h_1(x^*) = c_1, \ldots, h_m(x^*) = c_m$;
(d) $\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
(e) $g_1(x^*) \le b_1, \ldots, g_k(x^*) \le b_k$.
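A hedged worked instance of Theorem 7 (our own example, not from the notes): maximize $f(x, y) = xy$ subject to the equality $x + y = 2$ and the inequality $x \le 1/2$. Guessing that the inequality binds (and checking $\lambda^* \ge 0$ afterwards), the first-order system can be solved directly:

```python
import sympy as sp

# Maximize x*y subject to h: x + y = 2 (equality) and g: x <= 1/2 (inequality).
x, y, lam, mu = sp.symbols('x y lam mu', real=True)
L = x * y - lam * (x - sp.Rational(1, 2)) - mu * (x + y - 2)

sol = sp.solve([sp.diff(L, x), sp.diff(L, y),   # (a) stationarity
                x - sp.Rational(1, 2),          # guess: g binds, x = 1/2
                x + y - 2],                     # (c) equality constraint
               [x, y, lam, mu], dict=True)
print(sol)   # x = 1/2, y = 3/2, lam = 1, mu = 1/2
```

Since $\lambda^* = 1 \ge 0$ and both constraints hold at $(x^*, y^*) = (1/2, 3/2)$, conditions (a)-(e) are all satisfied, confirming the binding guess.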

Theorem 8 (Constrained Local Minimum): Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, k)$ and $h_i\ (i = 1, 2, \ldots, m)$ be continuously differentiable on $X$. Suppose that $x^* \in X$ is a point of constrained local minimum of $f$ subject to $k$ inequality constraints and $m$ equality constraints:

$$g_1(x) \ge b_1, \ldots, g_k(x) \ge b_k; \quad h_1(x) = c_1, \ldots, h_m(x) = c_m.$$

Without loss of generality, assume that the first $k_0$ inequality constraints are binding at $x^*$ and that the last $k - k_0$ constraints are not binding. Suppose that the following nondegenerate constraint qualification is satisfied at $x^*$: the rank at $x^*$ of the Jacobian matrix of the equality constraints and the binding inequality constraints,

$$\begin{pmatrix}
\frac{\partial g_1}{\partial x_1}(x^*) & \cdots & \frac{\partial g_1}{\partial x_n}(x^*)\\
\vdots & \ddots & \vdots\\
\frac{\partial g_{k_0}}{\partial x_1}(x^*) & \cdots & \frac{\partial g_{k_0}}{\partial x_n}(x^*)\\
\frac{\partial h_1}{\partial x_1}(x^*) & \cdots & \frac{\partial h_1}{\partial x_n}(x^*)\\
\vdots & \ddots & \vdots\\
\frac{\partial h_m}{\partial x_1}(x^*) & \cdots & \frac{\partial h_m}{\partial x_n}(x^*)
\end{pmatrix},$$

is $k_0 + m$. Form the Lagrangian

$$L(x, \lambda, \mu) \equiv f(x) - \lambda_1[g_1(x) - b_1] - \cdots - \lambda_k[g_k(x) - b_k] - \mu_1[h_1(x) - c_1] - \cdots - \mu_m[h_m(x) - c_m].$$

Then there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*, \mu_1^*, \ldots, \mu_m^*)$ such that

(a) $\frac{\partial L}{\partial x_1}(x^*, \lambda^*, \mu^*) = 0, \ldots, \frac{\partial L}{\partial x_n}(x^*, \lambda^*, \mu^*) = 0$;
(b) $\lambda_1^*[g_1(x^*) - b_1] = 0, \ldots, \lambda_k^*[g_k(x^*) - b_k] = 0$;
(c) $h_1(x^*) = c_1, \ldots, h_m(x^*) = c_m$;
(d) $\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
(e) $g_1(x^*) \ge b_1, \ldots, g_k(x^*) \ge b_k$.

7. Sufficient Conditions for Constrained Local Maximum and Minimum

We use techniques similar to those used for the necessary conditions. Given a solution $(x^*, \lambda^*, \mu^*)$ of the Kuhn-Tucker conditions (the first-order conditions), divide the inequality constraints into binding constraints and non-binding constraints at $x^*$:
- on the one hand, we treat the binding inequality constraints like equality constraints;
- on the other hand, the multipliers for the non-binding constraints must be zero, so these constraints drop out of the Lagrangian.
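This bookkeeping step is mechanical; here is a tiny sketch of our own (a numerical tolerance stands in for exact equality when deciding which constraints bind):

```python
import numpy as np

def split_constraints(gs, bs, x_star, tol=1e-8):
    """Split inequality constraints g_j(x) <= b_j into binding and slack sets."""
    binding, slack = [], []
    for j, (g, b) in enumerate(zip(gs, bs)):
        (binding if abs(g(x_star) - b) < tol else slack).append(j)
    return binding, slack

# Hypothetical usage: g1(x) = x1 + x2 <= 1 (binding) and g2(x) = x1 <= 5 (slack).
gs = [lambda v: v[0] + v[1], lambda v: v[0]]
print(split_constraints(gs, [1.0, 5.0], np.array([0.5, 0.5])))  # ([0], [1])
```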

Theorem 9: Let $X$ be an open set in $\mathbb{R}^n$, and $f, g_j\ (j = 1, 2, \ldots, k)$ and $h_i\ (i = 1, 2, \ldots, m)$ be twice continuously differentiable on $X$. Consider the problem of maximizing $f$ on the constraint set

$$C_{g,h} = \{x \in X : g_j(x) \le b_j \text{ for } j = 1, 2, \ldots, k;\ h_i(x) = c_i \text{ for } i = 1, 2, \ldots, m\}.$$

Form the Lagrangian

$$L(x, \lambda, \mu) \equiv f(x) - \lambda_1[g_1(x) - b_1] - \cdots - \lambda_k[g_k(x) - b_k] - \mu_1[h_1(x) - c_1] - \cdots - \mu_m[h_m(x) - c_m].$$

(a) Suppose that there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*, \mu_1^*, \ldots, \mu_m^*)$ such that

$$\frac{\partial L}{\partial x_1}(x^*, \lambda^*, \mu^*) = 0, \ldots, \frac{\partial L}{\partial x_n}(x^*, \lambda^*, \mu^*) = 0;$$
$$\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0;$$
$$\lambda_1^*[g_1(x^*) - b_1] = 0, \ldots, \lambda_k^*[g_k(x^*) - b_k] = 0;$$
$$h_1(x^*) = c_1, \ldots, h_m(x^*) = c_m.$$

(b) Without loss of generality, assume that the first $k_0$ inequality constraints are binding at $x^*$ and that the last $k - k_0$ constraints are not binding. Write $(g_1, \ldots, g_{k_0})$ as $g_{k_0}$ and $(h_1, \ldots, h_m)$ as $h$; write the Jacobian derivative of $g_{k_0}$ at $x^*$ as $Dg_{k_0}(x^*)$ and the Jacobian derivative of $h$ at $x^*$ as $Dh(x^*)$. Suppose that the Hessian of $L$ with respect to $x$ at $(x^*, \lambda^*, \mu^*)$ is negative definite on the linearized constraint set $\{v : Dg_{k_0}(x^*)\,v = 0 \text{ and } Dh(x^*)\,v = 0\}$; that is,

$$v \ne 0,\ Dg_{k_0}(x^*)\,v = 0 \text{ and } Dh(x^*)\,v = 0 \;\Longrightarrow\; v^{T} H_L(x^*, \lambda^*, \mu^*)\, v < 0.$$

Then $x^*$ is a point of constrained local maximum of $f$ on the constraint set $C_{g,h}$.
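Condition (b) can be checked numerically by restricting the Hessian $H_L$ to the null space of the stacked Jacobians, as in this sketch of our own (`null_space` is from `scipy.linalg`; the usage example at the bottom is hypothetical):

```python
import numpy as np
from scipy.linalg import null_space

def neg_definite_on_constraint_set(H_L, Dg_k0, Dh):
    """True if v' H_L v < 0 for all v != 0 with Dg_k0 v = 0 and Dh v = 0."""
    A = np.vstack([Dg_k0, Dh])            # stacked constraint Jacobians
    Z = null_space(A)                     # columns span {v : A v = 0}
    if Z.size == 0:                       # null space is {0}: holds vacuously
        return True
    restricted = Z.T @ H_L @ Z            # H_L restricted to the null space
    return bool(np.all(np.linalg.eigvalsh(restricted) < 0))

# Hypothetical usage: H_L = -I (2x2), one binding inequality with gradient
# (1, 1), and no equality constraints.
print(neg_definite_on_constraint_set(-np.eye(2),
                                     np.array([[1.0, 1.0]]),
                                     np.empty((0, 2))))   # True
```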

To check condition (b), form the bordered Hessian

$$\bar{H} =
\begin{pmatrix}
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial g_1}{\partial x_1} & \cdots & \frac{\partial g_1}{\partial x_n}\\
\vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots\\
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial g_{k_0}}{\partial x_1} & \cdots & \frac{\partial g_{k_0}}{\partial x_n}\\
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_1}{\partial x_n}\\
\vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots\\
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial h_m}{\partial x_1} & \cdots & \frac{\partial h_m}{\partial x_n}\\
\frac{\partial g_1}{\partial x_1} & \cdots & \frac{\partial g_{k_0}}{\partial x_1} & \frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_m}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_n \partial x_1}\\
\vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots\\
\frac{\partial g_1}{\partial x_n} & \cdots & \frac{\partial g_{k_0}}{\partial x_n} & \frac{\partial h_1}{\partial x_n} & \cdots & \frac{\partial h_m}{\partial x_n} & \frac{\partial^2 L}{\partial x_1 \partial x_n} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix},$$

where all derivatives are evaluated at $(x^*, \lambda^*, \mu^*)$.

Check the signs of the last $n - (k_0 + m)$ leading principal minors of $\bar{H}$, starting with the determinant of $\bar{H}$ itself.

If $\det \bar{H}$ has the same sign as $(-1)^n$, and if these last $n - (k_0 + m)$ leading principal minors alternate in sign, then condition (b) holds.

We need to make the following changes in the wording of Theorem 9 for an inequality-constrained minimization problem: (i) write the inequality constraints as $g_j(x) \ge b_j$ in the presentation of the constraint set $C_{g,h}$; (ii) change "negative definite" and "$< 0$" in condition (b) to "positive definite" and "$> 0$". The bordered-Hessian check then requires that the last $n - (k_0 + m)$ leading principal minors of $\bar{H}$ all have the same sign as $(-1)^{k_0 + m}$.
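A concrete instance of this check (our own example): maximize $f(x, y) = xy$ subject to the single equality $x + y = 2$, so $n = 2$, $k_0 + m = 1$, and there is exactly $n - (k_0 + m) = 1$ minor to inspect, namely $\det \bar{H}$ itself. At $(x^*, y^*) = (1, 1)$ the Hessian of $L$ in $x$ has zero diagonal and unit off-diagonal entries, and the constraint gradient is $(1, 1)$:

```python
import numpy as np

# Bordered Hessian for: maximize x*y subject to x + y = 2, at (1, 1).
H = np.array([[0.0, 1.0, 1.0],    # [ 0       | grad h  ]
              [1.0, 0.0, 1.0],    # [ grad h' | D^2_x L ]
              [1.0, 1.0, 0.0]])
print(np.linalg.det(H))           # 2.0 > 0, the sign of (-1)^n = (-1)^2,
                                  # so condition (b) holds: (1, 1) is a local max
```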

Example 1: Consider the following constrained maximization problem:

$$\text{Maximize } \prod_{i=1}^{n} x_i \quad \text{subject to } \sum_{i=1}^{n} x_i \le n, \quad \text{and } x_i \ge 0,\ i = 1, 2, \ldots, n. \tag{P}$$

Find the solution to (P), showing your steps clearly.

Example 2: Consider the following constrained maximization problem:

$$\text{Maximize } x^2 + x + 4y^2 \quad \text{subject to } 2x + 2y \le 1, \quad \text{and } x \ge 0,\ y \ge 0. \tag{Q}$$

Find the solution to (Q), showing your steps clearly.
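As a numerical cross-check of (Q) after working it by hand (a sketch only, not a substitute for the analytical steps asked for): the objective $x^2 + x + 4y^2$ is convex rather than concave, so a local solver can stop at the wrong vertex, and we multi-start `scipy.optimize.minimize` on $-f$:

```python
from scipy.optimize import minimize

neg_f = lambda v: -(v[0] ** 2 + v[0] + 4 * v[1] ** 2)
cons = [{"type": "ineq", "fun": lambda v: 1 - 2 * v[0] - 2 * v[1]}]  # 2x+2y <= 1
bnds = [(0, None), (0, None)]                                        # x, y >= 0

best = None
for x0 in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.25, 0.25)]:        # multi-start
    r = minimize(neg_f, x0, method="SLSQP", bounds=bnds, constraints=cons)
    if r.success and (best is None or r.fun < best.fun):
        best = r
print(best.x, -best.fun)   # approximately (0, 0.5) with value 1.0
```

The best run lands at the vertex $(0, 1/2)$ with value $1$, consistent with comparing $f$ at the three vertices of the constraint set by hand.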

References

Read the following sections from the textbook:
- Sections 18.3, 18.4, 18.5, 18.6 (pages 424-447): Inequality Constraints, Mixed Constraints, Constrained Minimization Problems, and the Kuhn-Tucker Formulation;
- Section 19.3 (pages 466-469): Second-Order Conditions (Inequality Constraints);
- Section 19.6 (pages 480-482): Proofs of the First-Order Conditions.

This material is based on:
1. Mangasarian, O. L., Nonlinear Programming, Chapters 5 and 7.
2. Takayama, A., Mathematical Economics, Chapter 1.
3. Nikaido, H., Convex Structures and Economic Theory, Chapter 1.