Optimality conditions for Equality Constrained Optimization Problems

International Journal of Mathematics and Statistics Invention (IJMSI), E-ISSN: 2321-4767, P-ISSN: 2321-4759, Volume 4, Issue 4, April 2016, pp. 28-33.

EFOR, T. E
Department of Mathematics and Computer Science, Ebonyi State University, Abakaliki, Nigeria

ABSTRACT: This paper focuses on equality constrained optimization problems. It considers the Lagrange multiplier method for solving equality constrained problems, and discusses the Lagrange first- and second-order optimality conditions for such problems. The paper also uses the fact that the definiteness of a symmetric matrix can be characterized in terms of its submatrices to apply the bordered Hessian matrix method to check optimality conditions for equality constrained problems with many constraints. Examples are used to illustrate each method.

KEYWORDS: Bordered Hessian matrix, Constraint qualification, Equality constraint, Local minimizer, Lagrange multiplier.

I. Introduction

Optimization is a mathematical procedure for determining the optimal allocation of scarce resources. It has found practical applications in almost all sectors of life, for instance in economics and engineering. It can also be defined as the selection of the best element from the available resources so as to achieve maximum output. Generally, optimization problems are divided into two classes: unconstrained and constrained optimization problems. Unconstrained problems do not arise often in practical applications; Sundaram [1] considered such problems because optimality conditions for constrained problems are a logical extension of the conditions for unconstrained problems. Shuonan [2] applied constrained optimization methods to a simple toy problem. Constrained optimization is approached somewhat differently from unconstrained optimization, because the goal is not to find the optimum of the objective function over the whole space but over the feasible set; however, constrained optimization methods often use unconstrained optimization as a sub-step. In [3], the author used Newton's method to solve for minimizers of equality constrained problems. This paper will only consider the Lagrange multiplier method and the bordered Hessian matrix method to solve for optima of equality constrained optimization problems.

For the purpose of understanding, we state the standard form of an unconstrained optimization problem as follows:

minimize $f(x)$ subject to $x \in D$,

where $D$ is some open subset of $\mathbb{R}^n$. To study this type of optimization problem under assumptions of differentiability, the principal objective is to identify necessary conditions that the derivative of $f$ must satisfy at an optimum. "Unconstrained" does not literally mean the absence of constraints; it refers to a situation in which we can move (at least) a small distance away from the optimum point in any direction without leaving the feasible set.

II. Constrained Optimization Problems

Now we turn our attention to constrained optimization problems, which are our interest. In a constrained optimization problem, the objective function is subject to certain constraints; for instance, in industry, constraints such as transportation, technology, time, labour, etc., are encountered. The standard model for the constrained optimization problem is posed as follows:

minimize $f(x)$ subject to $x \in S$,

where $S$ is some special set in $\mathbb{R}^n$ that can assume any of the following forms in practice:

(i) $S = \{x \in U : g(x) = 0\}$ (equality constraints);
(ii) $S = \{x \in U : h(x) \le 0\}$ (inequality constraints);
(iii) $S = \{x \in U : g(x) = 0,\ h(x) \le 0\}$ (mixed constraints),

where $U$ is an open subset of $\mathbb{R}^n$, $g \colon \mathbb{R}^n \to \mathbb{R}^m$ and $h \colon \mathbb{R}^n \to \mathbb{R}^l$.
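To make the standard model concrete, here is a minimal numerical sketch (an added illustration, not taken from the paper): it solves the small equality constrained instance of minimizing $x^2 + y^2$ subject to $x + y = 1$, assuming NumPy and SciPy are available and using SciPy's SLSQP method, which accepts equality constraints.

```python
# Minimal sketch (assumes numpy and scipy are installed):
# minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[1]**2          # objective function
g = lambda v: v[0] + v[1] - 1.0          # equality constraint, g(v) = 0

res = minimize(f, x0=np.array([2.0, -1.0]),
               constraints=[{"type": "eq", "fun": g}],
               method="SLSQP")
print(res.x)   # approx [0.5, 0.5], the constrained minimizer
```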

Basically, there are essentially two types of constrained problems, namely equality and inequality constrained problems; for the purpose of this paper, we concentrate on equality constrained problems.

2.1 Equality Constrained Problems: The model is posed as

(P) minimize $f(x)$ subject to $g(x) = 0$, $x \in U$.

2.2 Inequality Constrained Problems: The model is posed as

minimize $f(x)$ subject to $h(x) \le 0$, $x \in U$.

We also have the general mixed constrained problem, which is given by

minimize $f(x)$ subject to $g(x) = 0$, $h(x) \le 0$, $x \in U$.

2.3 Notations

We give some notations and definitions that we will need throughout this paper.

(1) The gradient of a real-valued function $f \colon \mathbb{R}^n \to \mathbb{R}$ is given by $\nabla f(x) = \left(\frac{\partial f}{\partial x_1}(x), \ldots, \frac{\partial f}{\partial x_n}(x)\right)^T$.

(2) The Hessian matrix of $f$, denoted by $\nabla^2 f(x)$, is the $n \times n$ matrix of second partial derivatives $\left[\frac{\partial^2 f}{\partial x_i \partial x_j}(x)\right]$.

(3) The Jacobian matrix of a vector-valued function $g = (g_1, \ldots, g_m) \colon \mathbb{R}^n \to \mathbb{R}^m$ is the $m \times n$ matrix $Dg(x)$ whose $i$-th row is $\nabla g_i(x)^T$; it can also be written in the form $Dg(x) = (\nabla g_1(x), \ldots, \nabla g_m(x))^T$, where $T$ means the transpose.

The fundamental tool for deriving optimality conditions in optimization algorithms is the so-called Lagrangian function

$L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x)$.

The purpose of $L$ is to link the objective function $f$ and the constraint set. The variables $\lambda_1, \ldots, \lambda_m$ are called the Lagrange multipliers associated with the minimizer.
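As a small illustration of these notations (an added sketch, assuming SymPy is available), the following code builds the gradient, the Hessian, the Jacobian and the Lagrangian symbolically for a two-variable problem with one equality constraint:

```python
# Sketch: symbolic gradient, Hessian, Jacobian, and Lagrangian (assumes sympy).
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x**2 - y**2                      # an objective f : R^2 -> R
g = sp.Matrix([x**2 + y**2 - 1])     # one equality constraint, g(x) = 0

grad_f = sp.Matrix([f]).jacobian([x, y]).T   # gradient of f, a column vector
hess_f = sp.hessian(f, (x, y))               # Hessian matrix of f
jac_g  = g.jacobian([x, y])                  # Jacobian Dg, an m x n matrix

# Lagrangian L(x, lam) = f(x) + sum_i lam_i * g_i(x)
L = f + lam * g[0]
print(grad_f, hess_f, jac_g, sp.expand(L), sep="\n")
```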

III. Optimality Conditions for Equality Constrained Problems

Here we are interested in problem (P), that is,

minimize $f(x)$ subject to $g(x) = 0$, $x \in U$,

where $f \colon \mathbb{R}^n \to \mathbb{R}$ and $g \colon \mathbb{R}^n \to \mathbb{R}^m$. In this section we are interested in the conditions which $f$ and the constraint set will satisfy at an optimum.

3.1 The Theorem of Lagrange

In [4], Huijuan presented the rule for the Lagrange multipliers and introduced its application to the field of power systems economic operation. The Theorem of Lagrange provides a powerful characterization of optima of equality constrained optimization problems in terms of the behavior of the objective function $f$ and the constraint functions $g$ at this point.

Theorem 3.1 (The Theorem of Lagrange): Let $f \colon \mathbb{R}^n \to \mathbb{R}$ and $g \colon \mathbb{R}^n \to \mathbb{R}^m$ be $C^1$ functions. Suppose $x^*$ is a local maximum or minimum of $f$ on the set $D = U \cap \{x \in \mathbb{R}^n : g(x) = 0\}$, where $U \subseteq \mathbb{R}^n$ is open. Suppose also that $\operatorname{rank}(Dg(x^*)) = m$, that is, the rank of the gradient vectors $\nabla g_1(x^*), \ldots, \nabla g_m(x^*)$ equals the number of constraints. Then there exists a vector $\lambda^* = (\lambda_1^*, \ldots, \lambda_m^*) \in \mathbb{R}^m$ such that

$\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla g_i(x^*) = 0$,

where $\lambda_1^*, \ldots, \lambda_m^*$ are called the Lagrange multipliers associated with the local optimum $x^*$.

The function $L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x)$ is called the Lagrangian associated with problem (P). With this function, we get the Lagrangian equations

$\frac{\partial L}{\partial x_j}(x, \lambda) = 0$, $j = 1, \ldots, n$, and $\frac{\partial L}{\partial \lambda_i}(x, \lambda) = g_i(x) = 0$, $i = 1, \ldots, m$.

Definition 3.1: Let $x \in D$. We say that $x$ is a regular point of $g$ if the vectors $\nabla g_1(x), \ldots, \nabla g_m(x)$ are linearly independent. In matrix form, $x$ is a regular point of $g$ if the Jacobian matrix of $g$ at $x$, denoted by $Dg(x)$, has rank $m$ (i.e., full rank), where $m \le n$.

Remark 1: If a pair $(x^*, \lambda^*)$ satisfies the twin conditions that $\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) = 0$ and $g(x^*) = 0$, we will say that $(x^*, \lambda^*)$ meets the first-order conditions of the Theorem of Lagrange, or the first-order necessary conditions for equality constrained optimization problems. The Theorem of Lagrange only provides necessary conditions for local optima at $x^*$, and, at that, only for those local optima which also meet the condition that $\operatorname{rank}(Dg(x^*)) = m$. These conditions are not asserted to be sufficient: that is, the theorem does not mean that if there exists $\lambda^*$ such that $\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) = 0$ and $g(x^*) = 0$, then $x^*$ must be either a local maximum or a local minimum, even if $x^*$ also meets the rank condition. The following example illustrates this.

Example 3.2: Let $f$ and $g$ be functions on $\mathbb{R}^2$ defined by $f(x, y) = x^3 + y^3$ and $g(x, y) = x - y$. Consider the equality constrained optimization problem of minimizing $f$ over the set $D = \{(x, y) \in \mathbb{R}^2 : g(x, y) = 0\}$. Let $(x^*, y^*)$ be the point $(0, 0)$ and let $\lambda^* = 0$. Then $g(x^*, y^*) = 0$; therefore $(x^*, y^*)$ is a feasible point. Moreover, $Dg(x, y) = (1, -1)$ for any $(x, y)$; here $Dg(x^*, y^*)$ has rank 1, that is, full rank. Substituting in the first-order condition, we have $\nabla f(x^*, y^*) + \lambda^* \nabla g(x^*, y^*) = (0, 0) + 0 \cdot (1, -1) = (0, 0)$. This implies that $(x^*, y^*, \lambda^*)$ satisfies the conditions of the Theorem of Lagrange; but $(0, 0)$ is neither a local minimum nor a local maximum of $f$ on $D$, since along the constraint set $f(t, t) = 2t^3$ changes sign at $t = 0$. The conditions are therefore not sufficient.
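The first-order conditions of Theorem 3.1 can be checked mechanically. The sketch below (an added illustration, assuming SymPy) verifies them for Example 3.2 and confirms that the restriction of $f$ to the constraint set is $2t^3$, which has no local extremum at $t = 0$:

```python
# Sketch: verify the first-order Lagrange conditions of Example 3.2 (assumes sympy).
import sympy as sp

x, y, lam, t = sp.symbols("x y lam t", real=True)
f = x**3 + y**3
g = x - y

L = f + lam * g
foc = [sp.diff(L, v) for v in (x, y)] + [g]   # stationarity plus feasibility
sol = {x: 0, y: 0, lam: 0}
print([e.subs(sol) for e in foc])             # [0, 0, 0]: FOC hold at (0, 0)

# But on the constraint set y = x, the objective is 2t^3: no extremum at t = 0.
print(sp.simplify(f.subs({x: t, y: t})))      # 2*t**3
```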

3.2 Constraint Qualification

The condition in the Theorem of Lagrange that the rank of $Dg(x^*)$ be equal to the number of constraints $m$ is called the constraint qualification under equality constraints. It ensures that $Dg(x^*)$ contains an $m \times m$ invertible submatrix, which may be used to define the vector $\lambda^*$. However, if the constraint qualification is violated, then the conclusion of the theorem may also fail: that is, if $x^*$ is a local optimum at which $\operatorname{rank}(Dg(x^*)) < m$, then there need not exist a vector $\lambda^*$ such that $\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) = 0$.

The set of all critical points of $L$ contains the set of all local maxima and minima of $f$ on $D$ at which the constraint qualification is met. A particular implication of this property is the following proposition.

Proposition 3.4: Suppose the following two conditions hold:
(1) a global minimizer $x^*$ exists for the given problem;
(2) the constraint qualification is met at $x^*$.
Then there exists $\lambda^* \in \mathbb{R}^m$ such that $(x^*, \lambda^*)$ is a critical point of $L$.

Example 3.4: Consider the problem

minimize $f(x, y) = x^2 - y^2$ subject to $x^2 + y^2 = 1$.

The constraint equation reduces to $g(x, y) = x^2 + y^2 - 1 = 0$, and the feasible set is $D = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1\}$. Applying the Lagrange multiplier method, we construct the Lagrangian function

$L(x, y, \lambda) = x^2 - y^2 + \lambda (x^2 + y^2 - 1)$,

where $\lambda$ is the Lagrange multiplier (in general, $\lambda^*$ is called the vector of Lagrange multipliers). We first show that the two conditions of Proposition 3.4 (namely, existence of a global minimizer and the constraint qualification) are met, so that the set of critical points of the Lagrangian function will contain the set of global minimizers. $f$ is a continuous function on $D$ and $D$ is compact; thus, by the existence theorem, there exists a minimizer of $f$ on $D$. We check the constraint qualification. The derivative of the constraint function $g$ at any point $(x, y)$ is given by $Dg(x, y) = (2x, 2y)$, which fails to have full rank only if $2x = 0$ and $2y = 0$. Since $x$ and $y$ cannot be zero simultaneously on $D$ (otherwise $x^2 + y^2 = 1$ would fail), we must have $Dg(x, y) \neq (0, 0)$ at every $(x, y) \in D$. Therefore, the constraint qualification holds everywhere on $D$. By Proposition 3.4, there exists $\lambda^*$ such that for a minimizer $(x^*, y^*)$ we have $\nabla f(x^*, y^*) + \lambda^* \nabla g(x^*, y^*) = 0$. The Lagrangian equations reduce to:

(1) $2x + 2\lambda x = 0$
(2) $-2y + 2\lambda y = 0$
(3) $x^2 + y^2 - 1 = 0$

The critical points of $L$ are the solutions of these equations. Solving the equations above, we obtain the candidates $(0, 1)$ and $(0, -1)$ with $\lambda = 1$, and $(1, 0)$ and $(-1, 0)$ with $\lambda = -1$. Evaluating $f$ at these four points gives $f(0, \pm 1) = -1$ and $f(\pm 1, 0) = 1$. Hence the points $(0, 1)$ and $(0, -1)$ are the global minimizers of $f$ on $D$, while the points $(1, 0)$ and $(-1, 0)$ are the global maximizers of $f$ on $D$.
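The computations in Example 3.4 can be reproduced with a few lines of computer algebra. This sketch (an added illustration, assuming SymPy) solves the Lagrangian equations (1)-(3) and evaluates $f$ at each critical point:

```python
# Sketch: critical points of L for Example 3.4 (assumes sympy).
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x**2 - y**2
g = x**2 + y**2 - 1
L = f + lam * g

eqs = [sp.diff(L, x), sp.diff(L, y), g]      # equations (1)-(3)
sols = sp.solve(eqs, [x, y, lam], dict=True)
for s in sols:
    print(s, "f =", f.subs(s))
# Four candidates: (0, +/-1) with lam = 1 and f = -1 (global minimizers),
#                  (+/-1, 0) with lam = -1 and f = 1 (global maximizers).
```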

3.3 Lagrange Second-Order Optimality Conditions for Equality Constrained Problems

We now look into situations where the candidates for minimizers are so many that checking them involves too much computation. To avoid such computations, we give the Lagrange second-order conditions for equality constrained problems. Note that second-order conditions for equality constrained problems pertain only to local maxima and minima, just as in the case of unconstrained problems; this is because differentiability is only a local property.

Theorem 3.5: Let $f \colon \mathbb{R}^n \to \mathbb{R}$ and $g \colon \mathbb{R}^n \to \mathbb{R}^m$ be $C^2$ functions. Suppose there exist a point $x^* \in D$ and $\lambda^* \in \mathbb{R}^m$ such that $\operatorname{rank}(Dg(x^*)) = m$ and $\nabla f(x^*) + \sum_i \lambda_i^* \nabla g_i(x^*) = 0$. Define $Z(x^*) = \{z \in \mathbb{R}^n : Dg(x^*) z = 0\}$ and let $\nabla^2 L(x^*)$ denote the symmetric $n \times n$ matrix $\nabla^2 f(x^*) + \sum_i \lambda_i^* \nabla^2 g_i(x^*)$.
(1) If $f$ has a local minimum on $D$ at $x^*$, then $z^T \nabla^2 L(x^*) z \ge 0$ for all $z \in Z(x^*)$.
(2) If $f$ has a local maximum on $D$ at $x^*$, then $z^T \nabla^2 L(x^*) z \le 0$ for all $z \in Z(x^*)$.

Applying this theorem to Example 3.4, we have $\nabla^2 f = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}$ and $\nabla^2 g = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$. Substituting into $\nabla^2 L(x^*) = \nabla^2 f + \lambda^* \nabla^2 g$: for $\lambda^* = -1$ (the points $(\pm 1, 0)$) we obtain $\nabla^2 L = \operatorname{diag}(0, -4)$, and for $\lambda^* = 1$ (the points $(0, \pm 1)$) we obtain $\nabla^2 L = \operatorname{diag}(4, 0)$. Let $z \in Z(x^*)$. At $(\pm 1, 0)$ we have $Dg = (\pm 2, 0)$, so $z = t(0, 1)$ and $z^T \nabla^2 L(x^*) z = -4t^2 < 0$ for $t \neq 0$; hence these points are local maximizers. At $(0, \pm 1)$ we have $Dg = (0, \pm 2)$, so $z = t(1, 0)$ and $z^T \nabla^2 L(x^*) z = 4t^2 > 0$ for $t \neq 0$; hence these points are local minimizers of $f$ on $D$.

Observe in Theorem 3.5 that the definiteness of a symmetric matrix can be completely characterized in terms of its submatrices. Because of this observation we turn to a related problem: characterizing the definiteness of an $n \times n$ symmetric matrix $H$ on the set $\{z \in \mathbb{R}^n : Az = 0\}$, where $A$ is an $m \times n$ matrix of rank $m$. This characterization gives an alternative way to check the conditions of Theorem 3.5, using what is called the bordered Hessian matrix, denoted by $H^B$. The condition is stated as follows. Define

$H^B = \begin{pmatrix} 0 & Dg(x^*) \\ Dg(x^*)^T & \nabla^2 L(x^*) \end{pmatrix}$,

where $0$ is the $m \times m$ zero matrix and $\nabla^2 L(x^*)$ is as in Theorem 3.5. The matrix $H^B$ is called the bordered Hessian matrix. Once the critical points $(x^*, \lambda^*)$ of the Lagrangian function have been computed and the bordered Hessian matrix has been evaluated at $(x^*, \lambda^*)$, then $x^*$ is:
(1) a maximum point if, starting with the principal minor of order $2m + 1$, the last $n - m$ leading principal minors of $H^B$ form an alternating sign pattern with the first having the sign of $(-1)^{m+1}$;
(2) a minimum point if, starting with the principal minor of order $2m + 1$, the last $n - m$ leading principal minors of $H^B$ all have the sign of $(-1)^m$.
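The bordered Hessian test lends itself to a short numerical routine. The helper below is an illustrative sketch (the function name and structure are mine, assuming NumPy): it assembles $H^B$ from $Dg(x^*)$ and $\nabla^2 L(x^*)$ and checks the signs of the last $n - m$ leading principal minors against rules (1) and (2), here on the two kinds of critical points of Example 3.4 ($n = 2$, $m = 1$):

```python
# Sketch: bordered Hessian sign test (assumes numpy); illustrative helper names.
import numpy as np

def bordered_hessian_test(Dg, H):
    """Dg: m x n Jacobian at x*; H: n x n Hessian of L at x*."""
    m, n = Dg.shape
    HB = np.block([[np.zeros((m, m)), Dg], [Dg.T, H]])
    # Leading principal minors of orders 2m+1, ..., m+n (the last n-m of them).
    minors = [np.linalg.det(HB[:k, :k]) for k in range(2 * m + 1, m + n + 1)]
    is_min = all(np.sign(d) == (-1) ** m for d in minors)
    is_max = all(np.sign(d) == (-1) ** (m + i + 1) for i, d in enumerate(minors))
    return minors, is_min, is_max

# Example 3.4 at (0, 1), lam = 1: Dg = (0, 2), Hessian of L = diag(4, 0).
print(bordered_hessian_test(np.array([[0.0, 2.0]]),
                            np.diag([4.0, 0.0])))   # minor -16 < 0 -> minimum
# At (1, 0), lam = -1: Dg = (2, 0), Hessian of L = diag(0, -4).
print(bordered_hessian_test(np.array([[2.0, 0.0]]),
                            np.diag([0.0, -4.0])))  # minor 16 > 0 -> maximum
```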

Example 3.5: Consider the problem

minimize $f(x_1, x_2, x_3) = x_1^2 + x_2^2 + x_3^2$

subject to $g_1(x) = x_1 + x_2 + 3x_3 - 2 = 0$ and $g_2(x) = 5x_1 + 2x_2 + x_3 - 5 = 0$.

We construct the Lagrangian function

$L(x, \lambda) = x_1^2 + x_2^2 + x_3^2 + \lambda_1 (x_1 + x_2 + 3x_3 - 2) + \lambda_2 (5x_1 + 2x_2 + x_3 - 5)$.

The critical points will be obtained from:

(4) $\partial L/\partial x_1 = 2x_1 + \lambda_1 + 5\lambda_2 = 0$
(5) $\partial L/\partial x_2 = 2x_2 + \lambda_1 + 2\lambda_2 = 0$
(6) $\partial L/\partial x_3 = 2x_3 + 3\lambda_1 + \lambda_2 = 0$
(7) $\partial L/\partial \lambda_1 = x_1 + x_2 + 3x_3 - 2 = 0$
(8) $\partial L/\partial \lambda_2 = 5x_1 + 2x_2 + x_3 - 5 = 0$

Solving these equations, we have $x^* = (37/46,\ 8/23,\ 13/46) \approx (0.804, 0.348, 0.283)$ and $\lambda^* = (-2/23,\ -7/23) \approx (-0.087, -0.304)$; a numerical check of this solution is sketched after the references. We construct the bordered Hessian matrix

$H^B = \begin{pmatrix} 0 & 0 & 1 & 1 & 3 \\ 0 & 0 & 5 & 2 & 1 \\ 1 & 5 & 2 & 0 & 0 \\ 1 & 2 & 0 & 2 & 0 \\ 3 & 1 & 0 & 0 & 2 \end{pmatrix}$.

Since $n = 3$ and $m = 2$, we have $n - m = 1$ and $2m + 1 = 5$, so we need to check the determinant of $H^B$ only, and it must have the sign of $(-1)^2 > 0$ for the critical point to be a minimum. The determinant of $H^B$ is $460 > 0$; therefore $x^*$ is a minimum point.

IV. CONCLUSION

Herein, an attempt has been made to discuss the Lagrange first- and second-order optimality conditions. Methods for checking optimality conditions for equality constrained problems have been examined, and this paper has shown that, when the computations become complex, the bordered Hessian matrix method is the better approach for checking an optimal point.

REFERENCES
[1] Rangarajan K. Sundaram, A First Course in Optimization Theory, Cambridge University Press, 1998.
[2] Shuonan Dong, Methods for Constrained Optimization, 18.086 Project 2, Spring 2006.
[3] Jean P. Vert, Nonlinear Optimization (c), INSEAD, Spring 2006.
[4] Huijuan Li, Lagrange Multipliers and their Applications, University of Tennessee, Knoxville, TN 37921, USA, 2008.
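As noted above, here is a numerical check of Example 3.5 (an added sketch assuming NumPy; not part of the original paper). Because $f$ is quadratic and the constraints are linear, the Lagrangian equations (4)-(8) form a single linear system, and the bordered Hessian is constant:

```python
# Sketch: verify Example 3.5 numerically (assumes numpy).
import numpy as np

A = np.array([[1.0, 1.0, 3.0],    # Dg: the two constraint gradients as rows
              [5.0, 2.0, 1.0]])
b = np.array([2.0, 5.0])

# Stationarity 2x + A^T lam = 0 and feasibility A x = b, as one linear system:
K = np.block([[2.0 * np.eye(3), A.T], [A, np.zeros((2, 2))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(3), b]))
x_star, lam_star = sol[:3], sol[3:]
print(x_star)      # approx [0.804, 0.348, 0.283]
print(lam_star)    # approx [-0.087, -0.304]

# Bordered Hessian: the Hessian of L is 2I here, so
HB = np.block([[np.zeros((2, 2)), A], [A.T, 2.0 * np.eye(3)]])
print(np.linalg.det(HB))   # 460 > 0, the sign of (-1)^2: a minimum point
```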