GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION


Chapter 4

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

Alberto Cambini
Department of Statistics and Applied Mathematics, University of Pisa, Via Cosimo Ridolfi 10, 56124 Pisa, ITALY
acambini@ec.unipi.it

Laura Martein
Department of Statistics and Applied Mathematics, University of Pisa, Via Cosimo Ridolfi 10, 56124 Pisa, ITALY
lmartein@ec.unipi.it

Abstract: In this chapter, the role of generalized convex functions in optimization is stressed. Particular attention is devoted to local-global properties, to the optimality of stationary points, and to the sufficiency of first order necessary optimality conditions for scalar and vector problems. Despite the numerous classes of generalized convex functions suggested over the last fifty years, we limit ourselves to introducing and studying those classes of scalar and vector functions which are most used in the literature.

Keywords: generalized convexity, vector optimization, optimality conditions.

1. Introduction

In classical scalar optimization theory, convexity plays a fundamental role since it guarantees the validity of important properties such as: a local minimizer is also a global minimizer, a stationary point is a global

152 GENERALIZED CONVEXITY AND MONOTONICITY

minimizer, and the usual first order necessary optimality conditions are also sufficient for a point to be a global minimizer. For many mathematical models used in decision sciences, economics, management science, stochastics, applied mathematics and engineering, the notion of convexity no longer suffices. Various generalizations of convex functions have been introduced in the literature. Many of these functions preserve one or more properties of convex functions and give rise to models which are more adaptable to real-world situations than convex models. Starting from the pioneering work of Arrow-Enthoven [1], attempts have been made to weaken the convexity assumption and thus to explore the extent to which optimality conditions remain applicable. The results obtained in the scalar case have had a great influence on the area of vector optimization, which has been widely developed in recent years with the aim of extending and generalizing such results to this field (see for instance [15], [42], [62]).

In the scalar case various generalizations of convexity have been suggested and their properties studied (see for instance [3], [86]). In this chapter we limit ourselves to considering only the classes which preserve the properties of convex functions related to optimality, such as quasiconvex, semistrictly quasiconvex and pseudoconvex functions. Particular attention is devoted to pseudolinear functions, that is, functions which are both pseudoconvex and pseudoconcave, since they also have the nice property, very important from a computational point of view, that the set of all minimizers and the set of all maximizers are contained in the boundary of the feasible region; in particular, if such a region is a polyhedral set and if the minimum (maximum) value exists, it is attained at least at a vertex of the feasible set.
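The vertex property just described can be observed numerically. The sketch below is our own illustration: a linear fractional function is a classical example of a pseudolinear function, and on a polyhedral region (here the box [0, 1]²) both its minimum and its maximum values are already attained at vertices.

```python
import itertools

# f(x, y) = (x + y + 1) / (x + 2) is linear fractional, a classical example of a
# pseudolinear function for x > -2; the box [0, 1]^2 is the polyhedral feasible set.
f = lambda x, y: (x + y + 1.0) / (x + 2.0)

# The extreme points (vertices) of the box.
vertices = list(itertools.product([0.0, 1.0], repeat=2))
v_vals = [f(x, y) for x, y in vertices]

# A dense grid over the whole box, for comparison.
grid = [i / 100.0 for i in range(101)]
g_vals = [f(x, y) for x in grid for y in grid]

# Both the minimum and the maximum over the box are attained at vertices.
print(min(v_vals) <= min(g_vals) + 1e-12, max(v_vals) >= max(g_vals) - 1e-12)
```

For this function the minimum over the box is reached at the vertex (0, 0) and the maximum on the edge containing the vertices (0, 1) and (1, 1), consistent with the vertex property.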
In Section 3 we consider the class of the so-called invex functions, since it is the widest class for which the Kuhn-Tucker conditions become sufficient. These functions can be characterized as the ones for which a stationary point is a minimum point and, like the classes of generalized convex functions considered above, they also play an important role in establishing constraint qualifications.

In vector optimization, the concept of minimum is usually translated by means of an ordering cone in the space of the objectives. For the sake of simplicity, in this chapter we refer to the Pareto cone, that is, the nonnegative orthant of the space of the objectives. This cone induces only a partial order, and this is the main reason why there are several ways to extend the notion of generalized convexity to vector optimization. As in the scalar case, we have chosen to present some classes of vector generalized convex functions which preserve local-global properties and the sufficiency of the most important first order necessary vector

Optimality Conditions 153

optimality conditions. Furthermore, in Section 10 we suggest a possible way to extend the concept of pseudolinearity to a vector function, while in Section 11 the notion of vector invexity is developed.

In this chapter we give the fundamental notions, ideas and properties of generalized convex scalar and vector functions. For the sake of simplicity, we consider differentiable functions, even if recent papers are devoted to the nonsmooth case.

2. Generalized convex scalar functions and optimality conditions

In this section we will establish local-global properties and the sufficiency of the most important first order necessary optimality conditions. With this aim, we introduce some classes of real-valued generalized convex functions which properly contain the convex one.

Let f be a function defined on an open set X of R^n and let S be a convex subset of X.

Definition 4.1 The function f is quasiconvex on S if its lower level sets L(α) = {x ∈ S : f(x) ≤ α} are convex sets for all real numbers α.

As is known, a useful characterization of a quasiconvex function is the following: for every x1, x2 ∈ S and λ ∈ [0, 1],

f(x2) ≤ f(x1) implies f(x1 + λ(x2 − x1)) ≤ f(x1).

When f is a differentiable function, f is quasiconvex if and only if

f(x2) ≤ f(x1) implies ∇f(x1)^T (x2 − x1) ≤ 0.

Definition 4.2 The function f is semistrictly quasiconvex on S if for every x1, x2 ∈ S and λ ∈ (0, 1),

f(x2) < f(x1) implies f(x1 + λ(x2 − x1)) < f(x1).

Definition 4.3 The differentiable function f is pseudoconvex on S if

f(x2) < f(x1) implies ∇f(x1)^T (x2 − x1) < 0.

We recall that, in the differentiable case, a pseudoconvex function is semistrictly quasiconvex and that, in turn, a semistrictly quasiconvex function is quasiconvex.
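These definitions lend themselves to numerical spot checks along segments and sampled pairs of points. The sketch below is our own illustration (helper names and tolerances are ours); it verifies both properties for the convex, hence quasiconvex and pseudoconvex, function f(x) = ‖x‖².

```python
import numpy as np

def violates_quasiconvexity(f, x1, x2, lams):
    """True if f exceeds max(f(x1), f(x2)) somewhere on the segment [x1, x2]."""
    bound = max(f(x1), f(x2))
    return any(f(x1 + lam * (x2 - x1)) > bound + 1e-12 for lam in lams)

def violates_pseudoconvexity(f, grad, x1, x2):
    """Pseudoconvexity: f(x2) < f(x1) implies grad(x1)^T (x2 - x1) < 0."""
    if f(x2) < f(x1) - 1e-12:
        return grad(x1) @ (x2 - x1) >= 0.0
    return False

rng = np.random.default_rng(0)
lams = np.linspace(0.0, 1.0, 21)

# f(x) = ||x||^2 is convex, hence quasiconvex and pseudoconvex: no violations.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
ok = True
for _ in range(200):
    x1, x2 = rng.uniform(-2, 2, 2), rng.uniform(-2, 2, 2)
    if violates_quasiconvexity(f, x1, x2, lams) or violates_pseudoconvexity(f, grad, x1, x2):
        ok = False
print(ok)  # True: no violation found
```

Such a sampling check can of course only refute, never prove, quasiconvexity or pseudoconvexity, but it is a useful sanity test for candidate functions.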

The essence of the difference between quasiconvexity and pseudoconvexity is stated in the following known theorem, for which we give a very simple proof.

Theorem 4.1 Let f be a differentiable function on an open convex set S ⊆ R^n. If ∇f(x) ≠ 0 for all x ∈ S, then f is pseudoconvex if and only if it is quasiconvex.

Proof. Taking into account that a pseudoconvex function is quasiconvex too, we must prove that a quasiconvex function is pseudoconvex. Assume, to get a contradiction, that there exist x1, x2 ∈ S with f(x2) < f(x1) and ∇f(x1)^T (x2 − x1) ≥ 0. Since f is quasiconvex, necessarily we have ∇f(x1)^T (x2 − x1) = 0. By the continuity of f there exists ε > 0 such that f(x2 + ε∇f(x1)) < f(x1), with x2 + ε∇f(x1) ∈ S (observe that ∇f(x1) ≠ 0 since x1 ∈ S). Consequently, by the quasiconvexity of f, we have ∇f(x1)^T (x2 + ε∇f(x1) − x1) ≤ 0. On the other hand ∇f(x1)^T (x2 + ε∇f(x1) − x1) = ε ‖∇f(x1)‖^2 > 0, and this is absurd.

Let us note that a function f is quasiconcave, semistrictly quasiconcave or pseudoconcave if and only if the function −f is quasiconvex, semistrictly quasiconvex or pseudoconvex, so that all the results we are going to describe for generalized convex functions hold, with obvious changes, for the corresponding classes of generalized concave functions.

As in the convex case, a semistrictly quasiconvex function f (in particular a pseudoconvex one) has the nice properties that the set of points at which f attains its global minimum over S is a convex set and that a local minimum is also global. This last property is lost for a quasiconvex function; in this regard it is sufficient to consider a nondecreasing function of one variable which is constant on some interval without attaining its infimum there: every interior point of that interval is a local but not a global minimum point. Nevertheless, if x0 is a strict local minimum point for a quasiconvex function, then it is also a strict global minimum point.

The class of semistrictly quasiconvex functions is the widest class for which a local minimum is also global, in the following sense:

if f is a continuous quasiconvex function, then f is semistrictly quasiconvex if and only if every local minimum is also global.

In order to analyze local and global optimality at a point x0, we must compare f(x) with f(x0), and this suggests considering generalized convexity at the point x0. Following Mangasarian [65], we can also weaken the assumption of convexity of S, requiring that S is star-shaped at x0, that is, x ∈ S implies that the line-segment [x0, x] is contained in S. The following definitions hold:

Definition 4.4 Let f be a function defined on the set S, star-shaped at x0.
i) f is quasiconvex at x0 if, for every x ∈ S and λ ∈ [0, 1], f(x) ≤ f(x0) implies f(x0 + λ(x − x0)) ≤ f(x0);
ii) f is semistrictly quasiconvex at x0 if, for every x ∈ S and λ ∈ (0, 1), f(x) < f(x0) implies f(x0 + λ(x − x0)) < f(x0);
iii) f is pseudoconvex at x0 if f is differentiable at x0 and f(x) < f(x0) implies ∇f(x0)^T (x − x0) < 0.

It is well known that a point which is a minimum with respect to every feasible direction starting from it is not necessarily a local minimum point: a point can be a minimum along every line through it and yet fail to be a minimum along some curve contained in the set, as can be verified by restricting the function to such a curve. The following theorem points out that, under suitable generalized convexity assumptions, optimality along feasible directions at a point x0 implies the optimality of x0.

Theorem 4.2 Let f be a function defined on the set S, star-shaped at x0.
i) If f is quasiconvex at x0 and x0 is a strict local minimum point of f along every feasible direction, then x0 is a strict global minimum point of f on S.
ii) If f is semistrictly quasiconvex at x0 and x0 is a local minimum point of f along every feasible direction, then x0 is a global minimum point of f on S.
iii) If f is pseudoconvex at x0 and x0 is a local minimum point of f along every feasible direction, then x0 is a global minimum point of f on S.
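Theorem 4.1 above can be illustrated numerically in one variable. In this sketch (our own illustration), f(x) = x³ is quasiconvex but has a vanishing derivative at 0 and fails pseudoconvexity there, while g(x) = x³ + x has a nowhere-vanishing derivative and, by Theorem 4.1, is pseudoconvex.

```python
def pseudoconvex_violation(f, df, pts):
    """Search for a pair x1, x2 with f(x2) < f(x1) but df(x1) * (x2 - x1) >= 0."""
    for x1 in pts:
        for x2 in pts:
            if f(x2) < f(x1) and df(x1) * (x2 - x1) >= 0.0:
                return (x1, x2)
    return None

pts = [i / 10.0 - 2.0 for i in range(41)]   # grid on [-2, 2], containing 0

# f(x) = x^3 is quasiconvex (it is monotone) but its derivative vanishes at 0:
# with x1 = 0 we get f(x2) < f(0) while f'(0) * (x2 - 0) = 0, so not pseudoconvex.
print(pseudoconvex_violation(lambda x: x**3, lambda x: 3 * x**2, pts) is not None)

# g(x) = x^3 + x is quasiconvex with g'(x) = 3x^2 + 1 > 0 everywhere, hence
# pseudoconvex by Theorem 4.1: the search finds no violating pair.
print(pseudoconvex_violation(lambda x: x**3 + x, lambda x: 3 * x**2 + 1, pts) is None)
```

Both checks print True: the vanishing gradient at 0 is exactly what separates the two cases.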

Now we will stress the role of generalized convexity in establishing sufficient first order optimality conditions. With this aim, in what follows we will consider a real-valued differentiable function f defined on an open subset X of R^n. It is well known that a stationary point for f is not necessarily a minimum point for f; such a property holds under a suitable assumption of generalized convexity.

Theorem 4.3 Consider a differentiable function f on a convex set S ⊆ X. If f is pseudoconvex at the stationary point x0 ∈ S, then x0 is a global minimum point for the function f on S.

Theorem 4.3 does not hold if the pseudoconvexity assumption is replaced with quasiconvexity or semistrict quasiconvexity. In fact f(x) = x^3 is a strictly increasing function, so that it is both semistrictly quasiconvex and quasiconvex, but the stationary point x0 = 0 is not a minimum point for f.

Now we will see how generalized convexity assumptions are very important in guaranteeing the sufficiency of first order optimality conditions which are, in general, only necessary. When S is star-shaped at x0, one of the best known necessary conditions for x0 to be a local minimum point is

∇f(x0)^T d ≥ 0 for all d ∈ D(S, x0),

where D(S, x0) is the cone of feasible directions of S at x0, given by

D(S, x0) = {d ∈ R^n : x0 + t d ∈ S for all sufficiently small t > 0}.

Such a condition is not sufficient, even if strict inequality holds, as is shown in the following example.

Example 4.1 Consider a function f defined on a convex set S and a point x0 for which ∇f(x0)^T d > 0 for every nonzero d ∈ D(S, x0), but x0 is not a minimum point, as can be verified by restricting the function to a suitable curve.

In Example 4.1 the closure of the cone D(S, x0) properly contains D(S, x0), and directions belonging to cl D(S, x0) \ D(S, x0) are critical directions, that is, directions d with ∇f(x0)^T d = 0. In order to be sure that x0 is a minimum point, we must require the validity of the sufficient condition ∇f(x0)^T d > 0 for all nonzero d ∈ cl D(S, x0). When the function is pseudoconvex, the critical directions do not play any role, as stated in the following theorem, whose proof follows directly from the definition of pseudoconvexity.
Theorem 4.4 Let S be star-shaped at x0 and let f be pseudoconvex at x0. Then x0 is a minimum point for f on S if and only if ∇f(x0)^T (x − x0) ≥ 0 for all x ∈ S.

Remark 4.1 Let us note that if x0 is a stationary point, the condition ∇f(x0)^T (x − x0) ≥ 0 for all x ∈ S is always verified. Since the quasiconvexity assumption at a stationary point does not guarantee the optimality of x0, Theorem 4.4 does not hold for this class of functions. On the other hand, if f is quasiconvex at x0 and ∇f(x0) ≠ 0, then f is pseudoconvex at x0, so that we have the following corollary.

Corollary 4.1 Let S be star-shaped at x0. If f is quasiconvex at x0, ∇f(x0) ≠ 0 and ∇f(x0)^T (x − x0) ≥ 0 for all x ∈ S, then x0 is a minimum point for f on S.

Consider now the case where the feasible set S is described by means of constraint functions. More exactly, consider the problem:

P: min f(x), subject to g_i(x) ≤ 0, i = 1, ..., m,

where f, g_1, ..., g_m are differentiable functions defined on an open set X ⊆ R^n. The best known first order necessary optimality conditions for a constrained problem are the following Kuhn-Tucker conditions: let x0 be a feasible point and set I(x0) = {i : g_i(x0) = 0}; if x0 is a local minimum point for P and a constraint qualification holds, then there exist multipliers λ_i, i ∈ I(x0), such that:

(4.9) ∇f(x0) + Σ_{i ∈ I(x0)} λ_i ∇g_i(x0) = 0;

(4.10) λ_i ≥ 0, i ∈ I(x0).

The following example points out that (4.9) and (4.10) are not sufficient optimality conditions.

Example 4.2 Consider a suitable problem of type P in two variables. It is easy to verify that for the point (0, 0) conditions (4.9) and (4.10) hold, but (0, 0) is not a local minimum point for the problem.

Let us now show that (4.9) and (4.10) are also sufficient when the objective and the constraint functions are certain generalized convex functions.

Theorem 4.5 Let x0 be a feasible point for problem P and assume that x0 verifies the Kuhn-Tucker conditions (4.9) and (4.10). If f is

pseudoconvex at x0 and the g_i, i ∈ I(x0), are quasiconvex at x0, then x0 is a global minimum point for problem P.

Proof. Assume there exists a feasible point x̄ such that f(x̄) < f(x0). From the pseudoconvexity of f we have ∇f(x0)^T (x̄ − x0) < 0, and from the quasiconvexity of the g_i, i ∈ I(x0) (note that g_i(x̄) ≤ 0 = g_i(x0)), we have ∇g_i(x0)^T (x̄ − x0) ≤ 0. Taking into account that λ_i ≥ 0, it results that (∇f(x0) + Σ_{i ∈ I(x0)} λ_i ∇g_i(x0))^T (x̄ − x0) < 0, and this contradicts (4.9).

Remark 4.2 In Example 4.2, the objective and the constraint functions are quasiconvex and also semistrictly quasiconvex, and this points out that Theorem 4.5 does not hold if f is only quasiconvex or semistrictly quasiconvex. Taking into account Remark 4.1, Theorem 4.5 holds if f is quasiconvex at x0 and ∇f(x0) ≠ 0. Such a condition is verified, for instance, in Consumer Theory, where it is assumed that the partial derivatives of the utility function U are positive and −U is a quasiconvex function.

3. Invex scalar functions

In [38] Hanson introduced a new class of generalized convex functions (invex functions) with the aim of extending the validity of the sufficiency of the Kuhn-Tucker conditions. The term invex is due to Craven [27] and stands for invariant convex. Since the papers of Hanson and Craven, during the last twenty years a great deal of contributions related to invex functions, especially with regard to optimization problems, have been made (see for instance [8, 28, 29, 40, 80, 93]).

Definition 4.5 The differentiable real-valued function f defined on the open set X ⊆ R^n is invex if there exists a vector-valued function η defined on X × X such that

(4.11) f(x) − f(y) ≥ η(x, y)^T ∇f(y) for all x, y ∈ X.

Obviously a differentiable convex function (on an open convex set X) is also invex (it is sufficient to choose η(x, y) = x − y). A meaningful property characterizing invex functions is stated in the following theorem [8].

Theorem 4.6 A differentiable function f is invex (with respect to some η) if and only if every stationary point is a global minimum point.

Proof. Let f be invex with respect to some η. If y is a stationary point of f, from (4.11) we have f(x) − f(y) ≥ η(x, y)^T ∇f(y) = 0, so that y is a global minimum point. Conversely, we will prove that (4.11) holds for the function η defined as

η(x, y) = (f(x) − f(y)) ∇f(y) / ‖∇f(y)‖^2 if ∇f(y) ≠ 0, and η(x, y) = 0 otherwise.

If y is a stationary point, it is also a global minimum for f, so that f(x) − f(y) ≥ 0 = η(x, y)^T ∇f(y); otherwise η(x, y)^T ∇f(y) = f(x) − f(y), so that (4.11) holds.

It follows immediately from Theorem 4.6 that every function without stationary points is invex. The class of pseudoconvex functions is contained in the class of invex functions (it is sufficient to note that for a pseudoconvex function a stationary point is also a global minimum point), while there are no inclusion relationships between the class of quasiconvex functions and the class of invex functions. Indeed, the function f(x) = x^3 is quasiconvex but not invex, since x = 0 is a stationary point but not a minimum point; furthermore, the following example shows that there exist invex functions which are not quasiconvex.

Example 4.3 One can exhibit a differentiable function f on an open convex set whose stationary points are all global minimum points, so that f is invex, but for which there exist points A, B with f(B) ≤ f(A) and ∇f(A)^T (B − A) > 0, so that f is not quasiconvex.

Since a function which is not quasiconvex is also not pseudoconvex, the previous example also shows that the classes of pseudoconvex and convex functions are strictly contained in the class of invex functions. Some nice properties of convex functions are lost in the invex case. In fact, unlike the convex or pseudoconvex case, the restriction of an invex function to a non-open set does not maintain the local-global property: a function of the kind of Example 4.3, restricted to a closed set S, can have a local minimum point on S which is not global, since f attains smaller values elsewhere on S.

For the same function, on a suitable set, the set of all minimum points is a non-convex set; as a consequence, for an invex function the set of all minimum points is not necessarily a convex set.

Following Hanson [38], we will prove the sufficiency of the Kuhn-Tucker conditions under suitable invexity assumptions.

Theorem 4.7 Let x0 be a feasible point for problem P and assume that x0 verifies the Kuhn-Tucker conditions (4.9) and (4.10). If f and the g_i, i ∈ I(x0), are invex functions with respect to the same η, then x0 is a global minimum point for problem P.

Proof. For any feasible x we have

f(x) − f(x0) ≥ η(x, x0)^T ∇f(x0) = −Σ_{i ∈ I(x0)} λ_i η(x, x0)^T ∇g_i(x0) ≥ −Σ_{i ∈ I(x0)} λ_i (g_i(x) − g_i(x0)) = −Σ_{i ∈ I(x0)} λ_i g_i(x) ≥ 0.

Consequently x0 is a global minimum point.

The proof of Theorem 4.7 points out that it is sufficient to require the invexity of the functions at x0, that is, that (4.11) holds with y = x0. Invexity has also allowed the weakening of convexity requirements in duality theory, since duality results involving the Wolfe dual or alternative duals can be established [29, 37, 40, 67]. Furthermore, invex functions, as well as generalized convex functions, play some role in establishing constraint qualifications, as will be seen in the next section.

Since invexity requires a differentiability assumption, in [8, 40] the following new class of functions, not necessarily differentiable, has been introduced. Let η be a vector-valued function defined on a subset of R^n × R^n. We say that a subset X of R^n is invex with respect to η if, for every x, y ∈ X and λ ∈ [0, 1], the point y + λη(x, y) is contained in X.

Definition 4.6 Let f be a real-valued function defined on a set X invex with respect to η; f is pre-invex with respect to η if the following inequality holds:

(4.12) f(y + λη(x, y)) ≤ λ f(x) + (1 − λ) f(y) for all x, y ∈ X, λ ∈ [0, 1].

A differentiable function satisfying (4.12) is also invex, and this is the reason why functions verifying (4.12) are called pre-invex [93].
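The sufficiency results above can be checked on a small instance of problem P. The problem data below are our own illustrative choice (a convex, hence pseudoconvex and invex, objective and a linear constraint), and the candidate point and multiplier are computed by hand.

```python
import numpy as np

# Illustrative instance of problem P:
#   minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2  subject to  g1(x) = x1 + x2 - 2 <= 0.
# At the candidate x0 = (1.5, 0.5) the constraint g1 is active and the
# Kuhn-Tucker conditions (4.9)-(4.10) hold with multiplier lam1 = 1 >= 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g1 = lambda x: x[0] + x[1] - 2.0

x0 = np.array([1.5, 0.5])
grad_f = np.array([2.0 * (x0[0] - 2.0), 2.0 * (x0[1] - 1.0)])   # = (-1, -1)
grad_g1 = np.array([1.0, 1.0])
lam1 = 1.0

kkt = np.allclose(grad_f + lam1 * grad_g1, 0.0) and abs(g1(x0)) < 1e-12 and lam1 >= 0

# Since f is pseudoconvex and g1 quasiconvex (Theorem 4.5), x0 should be a
# global minimizer; we corroborate this on random feasible points.
rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(5000, 2))
feas = pts[pts[:, 0] + pts[:, 1] <= 2.0]
print(kkt, all(f(x) >= f(x0) - 1e-12 for x in feas))
```

Both flags print True: the Kuhn-Tucker point is indeed globally optimal here, as Theorem 4.5 (and, a fortiori, Theorem 4.7 with η(x, y) = x − y) guarantees.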

As with convex functions, for a pre-invex function every local minimum is also global, and nonnegative linear combinations of pre-invex functions with respect to the same η are pre-invex. Furthermore, it is possible to establish saddle point and duality theorems following the classical approach used in the convex case [93].

4. Generalized convexity and constraint qualifications

As we have pointed out in Section 2, the Kuhn-Tucker conditions (4.9) and (4.10) are necessary optimality conditions for problem P when certain regularity conditions on the constraints are satisfied. The aim of this section is to stress the role of generalized convexity and invexity in establishing constraint qualifications. For our purposes, it is useful to associate with the feasible point x0 suitable cones built from the gradients of the active constraints, singling out the indices i ∈ I(x0) for which g_i is pseudoconcave at x0. Denoting by cl conv T(S, x0) the closure of the convex hull of the Bouligand tangent cone to the feasible region S at x0, these cones satisfy a chain of inclusion relationships (4.13). In order to guarantee the validity of the Kuhn-Tucker conditions, the linearizing cone must coincide with cl conv T(S, x0) (the Guignard constraint qualification), so that any condition which implies this equality becomes a constraint qualification; in particular, equality throughout (4.13) is a constraint qualification. We will prove that, under suitable assumptions of generalized convexity on the constraint functions, the required equality holds. The following theorem holds.

Theorem 4.8 If one of the following conditions holds, then the Guignard constraint qualification is verified.
i) The functions g_i, i ∈ I(x0), are pseudoconvex at x0 and there exists a point x̄ such that g_i(x̄) < 0, i ∈ I(x0) [65].
ii) The functions g_i, i ∈ I(x0), are quasiconvex at x0 with ∇g_i(x0) ≠ 0, and there exists a point x̄ such that g_i(x̄) < 0, i ∈ I(x0) [1].

iii) The functions g_i, i ∈ I(x0), are pseudoconvex at x0 and there is some vector d such that ∇g_i(x0)^T d < 0, i ∈ I(x0) [69].
iv) The functions g_i, i ∈ I(x0), are pseudoconcave at x0 [65].
v) The functions g_i, i ∈ I(x0), are invex at x0 with respect to the same η and there exists a point x̄ such that g_i(x̄) < 0, i ∈ I(x0) [8].

Proof. i) Since g_i(x̄) < 0 = g_i(x0), by the pseudoconvexity of g_i we have ∇g_i(x0)^T (x̄ − x0) < 0, so that the direction x̄ − x0 certifies the required equality of cones.
ii) It follows from i), taking into account that the quasiconvexity of g_i at x0 together with ∇g_i(x0) ≠ 0 implies the pseudoconvexity of g_i at x0.
iii) Along a direction d with ∇g_i(x0)^T d < 0, the pseudoconvexity of the g_i guarantees the existence of feasible points, and the thesis follows as in i).
iv) It is sufficient to note that the pseudoconcavity of the active constraints implies that every direction of the linearizing cone is a feasible direction.
v) We have g_i(x̄) − g_i(x0) ≥ η(x̄, x0)^T ∇g_i(x0). By the invexity of the g_i it results that η(x̄, x0)^T ∇g_i(x0) ≤ g_i(x̄) − g_i(x0) < 0, and the thesis follows as in i).

Finally, we prove the necessity of the Kuhn-Tucker conditions under a generalized Karlin constraint qualification which involves invexity [37].

Theorem 4.9 Let x0 be an optimal solution of problem P, where the functions f and g_i, i ∈ I(x0), are invex with respect to the same η. Assume that there exists no vector μ ≥ 0, μ ≠ 0, such that Σ_{i ∈ I(x0)} μ_i g_i(x) ≥ 0 for all x ∈ X. Then conditions (4.9) and (4.10) hold.

Proof. The optimality of x0 implies the following F. John conditions: there exist multipliers λ0 ≥ 0 and λ_i ≥ 0, i ∈ I(x0), not all zero, such that λ0 ∇f(x0) + Σ_{i ∈ I(x0)} λ_i ∇g_i(x0) = 0. Assume that λ0 = 0, that is, Σ_{i ∈ I(x0)} λ_i ∇g_i(x0) = 0. By the invexity assumption we have, for every x ∈ X,

Σ_{i ∈ I(x0)} λ_i g_i(x) = Σ_{i ∈ I(x0)} λ_i (g_i(x) − g_i(x0)) ≥ η(x, x0)^T Σ_{i ∈ I(x0)} λ_i ∇g_i(x0) = 0,

so that Σ_{i ∈ I(x0)} λ_i g_i(x) ≥ 0 for all x ∈ X with λ ≥ 0, λ ≠ 0, and this contradicts the generalized Karlin constraint qualification. Hence λ0 > 0 and, dividing by λ0, conditions (4.9) and (4.10) follow.

5. Maximum points and generalized convexity

In this section we will show that, for a generalized convex function, a global maximum point, if one exists, is attained on the boundary of the feasible region S and, under suitable assumptions on S, is an extreme point of S. We begin by proving that if the maximum value of the function is reached at a relative interior point of S, then the function is constant on S. We recall that the relative interior of a convex set C, denoted by ri C, is defined as the interior which results when C is regarded as a subset of its affine hull aff C. In other words, ri C = {x ∈ aff C : (x + εB) ∩ aff C ⊆ C for some ε > 0}, where B is the Euclidean unit ball in R^n.

Lemma 4.1 Let f be a continuous and semistrictly quasiconvex function on a convex set S. If x0 ∈ ri S is such that f(x0) = max_{x ∈ S} f(x), then f is constant on S.

Proof. Assume that there exists x̄ ∈ S such that f(x̄) < f(x0). By a known property of convex sets [81], there exists y ∈ S such that x0 belongs to the relative interior of the segment [x̄, y]. Since f is a continuous function, without loss of generality we can assume f(y) ≠ f(x̄). The semistrict quasiconvexity of f then implies f(x0) < max{f(x̄), f(y)}, and this is absurd since max{f(x̄), f(y)} ≤ f(x0).

From Lemma 4.1 we directly obtain the following result.

Theorem 4.10 Let f be a continuous and semistrictly quasiconvex function on a closed convex set S. If f attains its maximum value on S, then the maximum is reached at some boundary point.

The previous theorem can be strengthened when the convex set S does not contain lines (such an assumption implies the existence of an extreme point [81]).

Theorem 4.11 Let f be a continuous and semistrictly quasiconvex function on a closed convex set S containing no lines. If f attains its maximum value on S, then the maximum is reached at an extreme point.

Proof. If f is constant, the thesis is trivial. Otherwise, let x0 be such that f(x0) = max_{x ∈ S} f(x). From Theorem 4.10, x0 belongs to the boundary of S. Let C be the minimal face of S containing x0; if x0 is not an extreme point, then x0 ∈ ri C. It follows from Lemma 4.1 that f is constant on C. On the other hand, C is a closed convex set containing

no lines, so that C has at least one extreme point, which is also an extreme point of S [81]. Consequently this extreme point is a global maximum point for f on S.

Obviously the previous result holds for a pseudoconvex function, while a quasiconvex function can have a global maximum point which is not a boundary point. In fact, a nondecreasing function of one variable is quasiconvex, and if it is constant at its maximum level, the interior points of the interval where it attains its maximum value are not boundary points. If we want to extend Theorem 4.11 to the class of quasiconvex functions, we must require additional assumptions on the convex set S.

Theorem 4.12 Let f be a continuous and quasiconvex function on a compact convex set S. Then there exists some extreme point at which f attains its maximum value.

Proof. By the Weierstrass Theorem, there exists x0 ∈ S with f(x0) = max_{x ∈ S} f(x). Since S is convex and compact, it is the convex hull of its extreme points, so that there exists a finite number of extreme points x1, ..., xk such that x0 = Σ_j α_j x_j with α_j ≥ 0, Σ_j α_j = 1. By the quasiconvexity of f we have f(x0) ≤ max_j f(x_j), and the thesis follows.

Taking into account that a pseudoconvex function is also semistrictly quasiconvex, we have the following corollaries:

Corollary 4.2 Let f be a pseudoconvex function on a closed convex set S. If f attains its maximum value on S, then the maximum is reached at some boundary point.

Corollary 4.3 Let f be a pseudoconvex function on a closed convex set S containing no lines. If f attains its maximum value on S, then the maximum is reached at an extreme point.

From a computational point of view, Theorems 4.11 and 4.12 and Corollaries 4.2 and 4.3 are very important, since they establish that we must investigate the boundary of the feasible set (in particular the extreme points, if any exist) in order to find a global maximum of a generalized convex function. Nevertheless, for these classes of functions a local maximum is not necessarily global, and the first order necessary optimality condition is not sufficient for x0 to be a

local maximum point, so that the problem of maximizing a quasiconvex or pseudoconvex function is a hard problem. This kind of difficulty vanishes if f is also quasiconcave or pseudoconcave. We will examine this aspect more deeply in the next section.

6. Quasilinear and pseudolinear scalar functions

A function f defined on a convex subset S of R^n is said to be quasilinear (pseudolinear) if it is both quasiconvex and quasiconcave (pseudoconvex and pseudoconcave). Pseudolinear functions have some properties, stated in [26, 54, 55], for which we propose simple proofs.

Theorem 4.13 Let f be a differentiable function defined on an open convex set S ⊆ R^n.
i) If f is pseudolinear and there exists x0 ∈ S such that ∇f(x0) = 0, then f is constant on S.
ii) f is pseudolinear if and only if, for all x1, x2 ∈ S,

(4.14) ∇f(x1)^T (x2 − x1) = 0 if and only if f(x1) = f(x2).

iii) Assume ∇f(x) ≠ 0 for all x ∈ S. Then f is pseudolinear on S if and only if its normalized gradient mapping ∇f(x)/‖∇f(x)‖ is constant on each level set of f.

Proof. i) It follows from Theorem 4.3, taking into account that the stationary point x0 is also a global maximum, since f is pseudoconcave.
ii) Let f be pseudolinear. Since f is also quasilinear, f(x1) = f(x2) implies that f is constant on the line-segment [x1, x2], so that the directional derivative ∇f(x1)^T (x2 − x1) is equal to zero. Assume now ∇f(x1)^T (x2 − x1) = 0. Since f(x1) ≠ f(x2) would imply, by pseudoconvexity or pseudoconcavity, ∇f(x1)^T (x2 − x1) ≠ 0, necessarily we have f(x1) = f(x2). Conversely, assume that (4.14) holds; we must prove that f is both pseudoconvex and pseudoconcave. If f is not pseudoconvex, there exist x1, x2 ∈ S with f(x2) < f(x1) such that ∇f(x1)^T (x2 − x1) ≥ 0. Since (4.14) excludes ∇f(x1)^T (x2 − x1) = 0 when f(x2) ≠ f(x1), we must have ∇f(x1)^T (x2 − x1) > 0, so that x2 − x1 is an increasing direction at x1. The continuity of the function implies the existence of λ ∈ ]0, 1[ such that f(x1 + λ(x2 − x1)) = f(x1). Consequently, by (4.14), λ ∇f(x1)^T (x2 − x1) = 0, and this is absurd. It follows that f is pseudoconvex. In an analogous way it can be proven that f is pseudoconcave.

iii) Let the function be pseudolinear with nonvanishing gradient. We must prove that the normalized gradient mapping is constant on each level set. Indeed, if two points belong to the same level set, from ii) it results that the directional derivative at each of them, along the direction joining them, vanishes; from ii) again it follows that the gradients at the two points are orthogonal to the same direction, so that the normalized gradients coincide. In an analogous way we can prove the reverse inclusion. Assume now that, for a suitable level, there are points at which the required relation fails. The continuity of the gradient implies the existence of some t ∈ ]0, 1[ for which the corresponding intermediate point belongs to the level set. From ii) we must have a vanishing directional derivative; on the other hand the opposite relation holds, so that, from ii), we reach a contradiction, and this is absurd. Consequently the normalized gradient mapping is constant on each level set.

Assume now that (4.15) holds. Let two points of S be given and consider the restriction of the function to the line segment joining them. If the derivative of this restriction is constant in sign, then the function is quasilinear on the line segment [0, 1]. Otherwise, from elementary analysis, there exist parameters at which the derivative takes opposite signs. We can assume, without loss of generality, that the first corresponds to the smaller value. From (4.15) we then obtain two incompatible relations for the normalized gradients, and this is absurd. It follows that the restriction of the function to every line segment contained in S is quasilinear, so that the function is quasilinear, and also pseudolinear since its gradient never vanishes.

Remark 4.3 Following the same lines as the proof given in ii) of the previous theorem, it can be shown that (4.14) is equivalent to two further statements, which point out that pseudolinearity is equivalent to requiring that the logical implication in the definition of a pseudoconvex (pseudoconcave) function can be reversed.
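The characterization in iii) of Theorem 4.13 can be checked numerically. The sketch below assumes a linear fractional function (the standard example of a pseudolinear function, mentioned later in this section); the specific coefficients are invented for illustration. It compares the normalized numerical gradient at two points of the same level set.

```python
import math

# Sketch of Theorem 4.13 iii): for a pseudolinear function with nonvanishing
# gradient, the normalized gradient is constant on each level set.  We test
# this on the (assumed) linear fractional, hence pseudolinear, function
# f(x1, x2) = (x1 + x2 + 1) / (x1 + 2), defined on the half-space x1 > -2.

def f(x1, x2):
    return (x1 + x2 + 1) / (x1 + 2)

def normalized_gradient(x1, x2, h=1e-6):
    # Central-difference gradient, then normalization.
    g1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    g2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    n = math.hypot(g1, g2)
    return (g1 / n, g2 / n)

# Two distinct points of the level set {f = c}, c = 0.5: solving f = c gives
# x2 = c*(x1 + 2) - x1 - 1.
c = 0.5
points = [(x1, c * (x1 + 2) - x1 - 1) for x1 in (0.0, 3.0)]
g_a = normalized_gradient(*points[0])
g_b = normalized_gradient(*points[1])
assert abs(g_a[0] - g_b[0]) < 1e-4 and abs(g_a[1] - g_b[1]) < 1e-4
print(g_a)
```

The (unnormalized) gradients at the two points differ by a positive factor, exactly as the level-set characterization predicts.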

Let us note that i) and ii) of Theorem 4.13 do not hold if the function is merely quasilinear, as it is easy to verify considering a suitable function; furthermore, i) and ii) of Theorem 4.13 hold even if S is a relatively open convex set, while in iii) this assumption cannot be weakened, as is shown in the following example.

Example 4.4 Consider the function defined on the relatively open convex set indicated, together with the convex subset specified. By simple calculation, the gradients can be made explicit. Let A = (0, 0, 1), B = (0, 0, 2); the relation required by iii) fails, while the function is pseudolinear on S.

Condition iii) of Theorem 4.13 can be strengthened when the function is defined on the whole space, in the sense stated in the following theorem.

Theorem 4.14 A nonconstant function is pseudolinear on the whole space if and only if its normalized gradient mapping is constant.

Proof. Sufficiency follows from iii) of Theorem 4.13. For necessity, let the function be pseudolinear on the whole space and assume that its normalized gradient mapping is not constant. Then there exist two points with distinct normalized gradients; from iii) of Theorem 4.13, they belong to distinct level sets. From ii) of Theorem 4.13, each level set is contained in the hyperplane through the corresponding point orthogonal to its gradient; analogously for the other point. Since the two hyperplanes are not parallel, there exists a common point, which would belong to both level sets, and this is absurd.

From a geometrical point of view, the previous theorem states that the level sets of a nonconstant pseudolinear function defined on the whole space are parallel hyperplanes; vice versa, if the level sets of a differentiable function with no critical points are hyperplanes, then the function is pseudolinear. In any case, if the level sets of a function are hyperplanes, then the function is quasilinear, but the converse is not true. In fact the function

where the auxiliary function is as in (4.5), is quasilinear, and its level set is not a hyperplane. When a nonconstant pseudolinear function is defined on a convex set, from iii) of Theorem 4.13 the level sets are the intersections of S with hyperplanes, which are not necessarily parallel (consider for instance the classic case of linear fractional functions).

The above considerations suggest a simple way to construct a pseudolinear function. Consider for instance the family of lines indicated. It is easy to verify that such lines are the level sets of a function which is pseudolinear on S. Another way to construct a pseudolinear function is to consider a composite function in which the inner function is pseudolinear and the outer one is differentiable with a strictly positive (or negative) derivative.

With respect to an optimization problem having a pseudolinear objective function defined on a polyhedral set S, we have the nice property that, when the maximum and the minimum value exist, they are reached at a vertex of S. Denoting the edges starting from a vertex, the necessary and sufficient optimality condition stated in Theorem 4.4, and the analogous one for a pseudoconcave function, can be specified by means of the following theorem.

Theorem 4.15 Let the function be pseudolinear on a polyhedral set S. Then:
i) A vertex is a minimum point on S if and only if the corresponding condition holds along the edges starting from it.
ii) A vertex is a maximum point on S if and only if the analogous reversed condition holds along those edges.

When S is a polyhedral compact set, the previous theorem can be extended to a quasilinear function.

Theorem 4.16 Let the function be quasilinear on a polyhedral compact set S. Then:
i) A vertex is a minimum point on S if and only if it is a minimum point along the edges starting from it.
ii) A vertex is a maximum point on S if and only if it is a maximum point along the edges starting from it.
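The vertex-optimality property behind Theorems 4.15 and 4.16 can be checked by brute force. In this hedged sketch, both the linear fractional objective and the triangle are invented data: the values at randomly sampled points of the polytope never improve on the extremes attained at the vertices.

```python
import random

# Hedged check of vertex optimality: a pseudolinear function on a polyhedral
# compact set attains both its minimum and its maximum at a vertex.  The
# linear fractional objective below is pseudolinear on the triangle, since
# its denominator stays positive there (assumed example data).

def f(x1, x2):
    return (x1 - x2 + 2) / (x1 + x2 + 3)   # denominator > 0 on the triangle

vertices = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
vmin = min(f(*v) for v in vertices)
vmax = max(f(*v) for v in vertices)

random.seed(2)
for _ in range(2000):
    # Uniform barycentric weights via sorted uniforms.
    w = sorted([0.0, random.random(), random.random(), 1.0])
    l = (w[1] - w[0], w[2] - w[1], w[3] - w[2])
    x = tuple(sum(li * v[k] for li, v in zip(l, vertices)) for k in (0, 1))
    assert vmin - 1e-12 <= f(*x) <= vmax + 1e-12

print(vmin, vmax)  # 0.0 at (0, 2) and 0.8 at (2, 0)
```

This is exactly the property that simplex-like procedures for pseudolinear programs exploit: only vertices, and the edges leaving them, need to be examined.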

The optimality conditions stated in the previous theorem have suggested some simplex-like procedures for pseudolinear problems. These problems include linear programs and linear fractional programs, which arise in many practical applications [30, 87, 89]. Algorithms for linear fractional problems have been suggested by several authors [9, 25, 68]. Computational comparisons between algorithms for linear fractional programming are given in [33].

7. Generalized convex vector functions

Let X be an open set of the n-dimensional space, S a convex subset of X, and F a vector function defined on X. In what follows, we will consider the partial order induced by the Paretian cone, even if most of the results which we are going to establish hold when the partial order is induced by any closed convex cone. For convenience, we adopt the standard notation for this cone and its interior.

As is known, there are different ways of extending the definitions of generalized convex functions to the vector case; we will address those classes which allow us to obtain several properties related to optimality, referring to the bibliography for further details (see for instance [15], [18], [62]). As in the scalar case, we can refer to vector generalized convexity or to vector generalized concavity. In this last case, it is sufficient to substitute C with −C in what follows, in order to obtain the corresponding definitions and results.

Definition 4.7 The function F is said to be C-convex (on S) if

It is easy to prove that F is C-convex if and only if every component of F is convex.

Definition 4.8 The function F is said to be C-quasiconvex (on S) if

When s = 1, Definition 4.8 reduces to (4.1). If every component of F is quasiconvex, then F is C-quasiconvex, but the converse is not true: for instance, the function indicated is C-quasiconvex on its domain but it is not componentwise quasiconvex. In [62], Luc suggests another definition of a quasiconvex vector function which is equivalent to requiring componentwise quasiconvexity and which plays an important role in establishing the connectedness of the set of all efficient points. This class of functions is strictly contained in the class of C-quasiconvex ones [15].

Remark 4.4 When F is a differentiable function, C-quasiconvexity implies the following property, in which the symbol denotes the Jacobian matrix of F evaluated at the point. Unlike the scalar case, the converse implication does not hold in general, and this points out that the study of vector generalized convexity is more complicated than that of the scalar case; this remark motivates once again the variety of definitions which have been suggested in the literature for the vector case.

Assume now that F is a differentiable function and consider the Jacobian matrix of F evaluated at a point. As for the quasiconvex case, there are different ways to extend scalar pseudoconvexity to the vector case. We introduce three classes of vector pseudoconvex functions which reduce, when s = 1, to the classical definition given by Mangasarian [65].

Definition 4.9 The function F is said to be (on S) if

Definition 4.10 The function F is said to be (on S) if

Definition 4.11 The function F is said to be (on S) if

If every component of F is pseudoconvex, then F belongs to the first two classes; also, if every component of F is

strictly pseudoconvex, then F belongs to the third class; if every component of F is quasiconvex and at least one is strictly pseudoconvex, the same conclusion holds. The converses of these statements are not true in general, as shown in the following example.

Example 4.5 The first function below belongs to the first two classes on its domain, but it is not componentwise quasiconvex or pseudoconvex. The second function belongs to the third class, but its components are not strictly pseudoconvex.

The following theorem states the inclusion relationships among the introduced classes of functions.

Theorem 4.17 Let F be a differentiable function.
i) If F is C-convex (on S), then it is C-quasiconvex and belongs to the first two pseudoconvex classes (on S).
ii) If F belongs to the third class (on S), then it belongs to the first two (on S).

Proof. In order to prove that C-convexity implies the first two pseudoconvexity properties, it is sufficient to note the stated implications between the defining inequalities. The other inclusion relationships follow directly from the definitions.

The following examples point out that there are no inclusion relationships between the first two pseudoconvex classes, nor between C-convexity and the third class.

Example 4.6 Consider the first function. It is easy to prove that no pair of points satisfies the relevant relations, so that F belongs to one of the two classes. On the other hand, a suitable choice of points yields the opposite relations, and thus F does not belong to the other class. Consider now the second function. A suitable choice of points shows that F does not belong to the first class, while simple calculations show that it belongs to the second.

Example 4.7 Consider the first function below. F is C-convex since it is componentwise convex. A suitable choice of points shows that the relevant relations fail, so that F does not belong to the third class. On the other hand, the second function belongs to that class but is not C-convex on its domain.

The following example points out that, unlike the scalar case, there is no inclusion relationship between quasiconvex and pseudoconvex vector functions.

Example 4.8 Consider the first function below. Since F is componentwise quasiconvex, it is also C-quasiconvex. A suitable choice of points shows that F belongs to neither of the first two pseudoconvex classes. Consider now the second function. It is easy to verify that the defining relation fails for every pair of points, so that F belongs to both of those classes. On the other hand, a suitable choice of points and of t ∈ ]0, 1[ shows that F is not C-quasiconvex.

8. Efficiency

In vector optimization, because the objective functions are generally in conflict, it is not possible, in general, to find a point which is optimal for all the objectives simultaneously. For this reason, a new concept of optimality was introduced by the economist Edgeworth in 1881. However, this concept is usually attributed to the French-Italian economist Pareto, who developed it further in 1896. Roughly speaking, a point is Pareto optimal if and only if it is possible to improve (in the sense of minimization) one of the objective functions only at the cost of making at least one of the remaining objective functions worse; a point is weakly Pareto optimal if and only if it is not possible to improve all the objective functions simultaneously. For a formal definition, consider the following vector optimization problem P, where X is an open set.

Definition 4.12 A point is said to be:
weakly efficient or weakly Pareto optimal if no feasible point improves it in every objective simultaneously;
efficient or Pareto optimal if no feasible point improves it in some objective without being worse in any other.

If the previous conditions are verified in the intersection of the feasible set with I, where I is a suitable neighbourhood of the point, then the point is said to be a local weakly efficient point or a local efficient point, respectively. In the scalar case (s = 1), a (local) weakly efficient point and a (local) efficient point reduce to the ordinary definition of a (local) minimum point. Obviously (local) efficiency implies (local) weak efficiency. With respect to the class of functions considered here, a point is (locally) weakly efficient if and only if it is (locally) efficient.

Let us note that all the results which will be established for problem P hold for the maximum problem, substituting the corresponding cones and requiring generalized concavity instead of generalized convexity. With regard to the existence of efficient points of problem P, we refer the interested reader to [12, 62, 84, 94].

Now we will stress the role played by vector generalized convexity in investigating relationships between local and global optima. As in the scalar case, from now on we will consider generalized convexity at a point, and we will require that the feasible set S be star-shaped at it; furthermore, when referring to a pseudoconvex vector function, the differentiability of F at the point is assumed.

The following theorem shows that, under suitable assumptions of generalized convexity, local efficiency implies efficiency.

Theorem 4.18 i) If the point is a local efficient point for P and F is pseudoconvex at it in the appropriate sense, then it is an efficient point for P.
ii) If the point is a local weakly efficient point for P and F is pseudoconvex at it in the corresponding weaker sense, then it is a weakly efficient point for P.

Proof. i) Assume that there exists a feasible point for which the efficiency relation fails. Since F is pseudoconvex at the point, the corresponding gradient relation holds, and this implies the existence of a suitable nearby point with better objective values. This contradicts the local efficiency of
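Definition 4.12 admits a direct brute-force reading on a finite set of objective vectors, which may help fix the two notions. The componentwise dominance tests below are the standard ones for the Paretian cone under minimization; the sample points are invented for illustration.

```python
# A brute-force reading of Definition 4.12 on a finite set of objective
# vectors (minimization): y dominates z when y <= z componentwise with at
# least one strict inequality; y strictly dominates z when y < z in every
# component.  Efficient points survive the first test, weakly efficient
# points survive the second.

def dominates(y, z):
    return all(a <= b for a, b in zip(y, z)) and any(a < b for a, b in zip(y, z))

def strictly_dominates(y, z):
    return all(a < b for a, b in zip(y, z))

points = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 4)]
efficient = [p for p in points if not any(dominates(q, p) for q in points)]
weakly_efficient = [p for p in points
                    if not any(strictly_dominates(q, p) for q in points)]

print(efficient)          # [(1, 4), (2, 2), (4, 1)]
print(weakly_efficient)   # additionally contains (2, 4)
```

Note that (2, 4) is weakly efficient but not efficient: it can be improved in the first objective at no cost in the second, but not in both objectives at once. This is exactly the gap between the two notions.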

ii) The proof is similar to the one given in i).

In general, a local efficient point for P is not an efficient point when F belongs to the weaker pseudoconvex class, as shown in the following example.

Example 4.9 Consider the function and the set indicated below. It is easy to prove that F belongs to that class at the point, and that the point is a local efficient point, but not efficient. Let us note that the function F is also C-quasiconvex, since its components are quasiconvex, so that, for this class of vector functions too, local efficiency does not imply efficiency.

The third class of functions is likewise not appropriate to guarantee that a local efficient point is also efficient, as is pointed out in the following example.

Example 4.10 Consider the function below. F belongs to the third class at the point, and the feasible point is a local efficient point, but it is not a (weakly) efficient point.

As we have pointed out in the previous examples, these two classes do not guarantee that a local efficient point is also efficient. An important subclass of both, for which such a property holds, is the componentwise convex one.

Theorem 4.19 Let F be C-convex at the point. If it is a local efficient (weakly efficient) point for P, then it is efficient (weakly efficient) for P.

Proof. Assume that there exists a feasible point for which the efficiency relation fails. Since F is C-convex at the point, the corresponding convexity inequality holds. Taking into account that int C + C = int C, we obtain a contradiction with the local efficiency (weak efficiency) of the point.

Unlike the scalar case, if F is componentwise pseudoconvex only at a point, local efficiency does not imply efficiency.

Consider for instance the function below, with S = [0, 1]. The first component of F is pseudoconvex at the point, since no feasible point satisfies the defining relation. Since the second component of F is convex, F is componentwise pseudoconvex at the point. It is easy to verify that the point is a local efficient point, but not efficient.

The following theorem shows that, by requiring the pseudoconvexity of the components of F not only at a point but on the whole feasible region, local efficiency implies efficiency.

Theorem 4.20 Consider problem P where F is componentwise pseudoconvex on the convex set S. If a point is a local efficient point for P, then it is efficient for P.

Proof. We recall that a pseudoconvex scalar function has the following property: if the value at a second point is smaller, then the function is decreasing at the first point with respect to the direction joining them; if the values are equal, then the function is decreasing at the first point, or the first point is a minimum point, with respect to that direction. Assume now that there exists a feasible point whose objective values are not worse, with at least one inequality strict. By the property just mentioned, it results that the point is not a local efficient point, and this is a contradiction.

Some other classes of generalized convex vector functions have been suggested in order to maintain local-global properties. For instance, in [11] a class of functions verifying property (4.18) is introduced. This class contains the componentwise semistrictly quasiconvex functions but, unlike the scalar case, an upper semicontinuous function verifying (4.18) is not necessarily C-quasiconvex. Consider in fact the continuous function and the point given below. Condition (4.18) is verified at the point; on the other hand, F is not C-quasiconvex at it, since the defining relation fails for a suitable choice of points. Another class of functions verifying local-global properties has been introduced in [64] by means of the following property: F is componentwise quasiconvex and, for all points such that

It is easy to prove that this last class is strictly contained in the previous one.

9. Optimality conditions in vector optimization

As in the scalar case, generalized convexity plays an important role in establishing sufficient optimality conditions in vector optimization. In order to develop a self-contained analysis, we will also present the proofs of the first order necessary optimality conditions, which are based on the following known separation theorems.

Theorem 4.21 Let W be a linear subspace.
i) The first separation property holds if and only if the corresponding multiplier condition holds.
ii) The second separation property holds if and only if the corresponding strict multiplier condition holds.

Theorem 4.22 Consider the product of two linear subspaces. Then the following hold.
i) The first separation property holds if and only if the corresponding multiplier condition holds.
ii) The second holds if and only if the corresponding condition holds.
iii) The third holds if and only if the corresponding condition holds.

Consider problem P where F is a differentiable function. The following theorem holds.

Theorem 4.23 If the point is an interior local weakly efficient point for P, then (4.19) holds.

Proof. Consider the line segment indicated. The local weak efficiency of the point implies the stated relation. Setting W accordingly, it results in the required separation, and the thesis follows from i) of Theorem 4.21.
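The multiplier conditions of this section have a simple finite-dimensional analogue that can be tested numerically: minimizing a strictly positive weighted combination of the objectives over the feasible set yields an efficient point, with no convexity needed when the feasible set is finite. The feasible set and the weights in this sketch are arbitrary invented data.

```python
# Sketch of the positive-multiplier (weighted-sum) sufficiency principle:
# a minimizer of a strictly positive weighted sum of the objectives is an
# efficient point.  Feasible objective vectors and weights are assumed data.

def dominates(y, z):
    return all(a <= b for a, b in zip(y, z)) and any(a < b for a, b in zip(y, z))

feasible = [(0.0, 3.0), (1.0, 1.0), (3.0, 0.0), (2.0, 2.0), (1.5, 2.5)]
weights = (0.7, 0.3)   # any strictly positive weights work

best = min(feasible, key=lambda y: sum(w * c for w, c in zip(weights, y)))

# If some q dominated best, its weighted sum would be strictly smaller,
# contradicting minimality -- so best must be efficient.
assert not any(dominates(q, best) for q in feasible)
print(best)
```

Different strictly positive weight vectors select different efficient points; with some weight equal to zero, only weak efficiency of the minimizer is guaranteed.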

Remark 4.5 In the scalar case, condition (4.19) is equivalent to stating that the point is a stationary point; for this reason, we will refer to points verifying (4.19) as stationary points of a vector function.

The following example shows that (4.19) is not, in general, a sufficient optimality condition.

Example 4.11 Consider problem P with the data below, and the feasible point indicated. The stated relations hold, so that the point is a stationary point for F, but not a local weakly efficient point.

Theorem 4.24 i) If the point is a stationary point for F and F is pseudoconvex at it in the appropriate sense, then it is an efficient point.
ii) If the point is a stationary point for F and F is pseudoconvex at it in the corresponding weaker sense, then it is a weakly efficient point.

Proof. Assume that the point is not an efficient point or not a weakly efficient point. Then there exists a feasible point for which the corresponding relation holds, and this contradicts (4.19). Hence i) and ii) hold.

Taking into account that a C-convex function also belongs to the relevant pseudoconvex class, we have the following corollary.

Corollary 4.4 If the point is a stationary point for F and F is C-convex at it, then it is a weakly efficient point.

Consider the C-convex function and the cone indicated below. The point given is a stationary point for F and also a weakly efficient point, but it is not efficient. From i) of Theorem 4.17, it follows that a stationary point is not in general an efficient point for the other two classes of functions. For the last class, a stationary point is not in general even a weakly efficient point; for instance, the function below belongs to that class at the stationary point indicated, which however is not a weakly efficient point.

Requiring in (4.19) the positivity of all the multipliers, it is possible to state some other sufficient optimality conditions.

Theorem 4.25 If F is pseudoconvex at the point in the appropriate sense and (4.20) holds,

then the point is an efficient point.

Proof. The proof is similar to the one given in Theorem 4.24.

Taking into account Theorems 4.19 and 4.20, we have the following corollary.

Corollary 4.5 i) If F is C-convex at the point and (4.20) holds, then it is an efficient point.
ii) If F is componentwise pseudoconvex on S and (4.20) holds, then the point is an efficient point.

Remark 4.6 Let us note that (4.20) is a sufficient condition for the point to be a local efficient point, without any requirement of generalized convexity, in the stated special cases. In Section 10 we will prove, for a wide class of functions including the linear ones, that (4.20) is a necessary and sufficient condition for a point to be an efficient point.

Consider now the case where the point is not necessarily an interior point. A necessary condition for it to be a weakly efficient point for P is (4.21). In the scalar case, (4.21) reduces to the usual condition, and this implies the corresponding property on the convex hull of S. In the vector case, when S is star-shaped at the point but not convex, condition (4.21) cannot be extended to the elements of the convex hull, and this implies that (4.21) cannot be expressed by means of multipliers, as shown in the following example.

Example 4.12 Consider the linear vector function below. With the indicated choices, it results that it is not possible to separate S and the relevant set.

In order to express (4.21) by means of multipliers, we must require the convexity of S. The following theorem holds.

Theorem 4.26 Consider problem P where S is a convex set. If the point is a weakly efficient point, then (4.22) holds.

Proof. Consider the set W defined below; since the map involved is linear and S is a convex set, W is convex too. The necessary optimality condition (4.21) implies the required separation, so that (4.22) follows from Theorem 4.21.

Under suitable assumptions of generalized convexity, (4.22) becomes a sufficient optimality condition.

Theorem 4.27 i) If (4.22) holds and F is pseudoconvex at the point in the appropriate sense, then it is an efficient point.
ii) If (4.22) holds and F is pseudoconvex at the point in the corresponding weaker sense, then it is a weakly efficient point.
iii) If (4.22) holds and F is C-convex at the point, then it is a weakly efficient point.

Proof. The proofs are similar to the one given in Theorem 4.24.

Consider now the case where the feasible region S is expressed by inequality constraints, that is, the problem P* below, where the functions involved are differentiable. For the sake of simplicity, corresponding to a feasible point, we will assume, without loss of generality, that all the constraints are binding at it.

The following theorem states a first order necessary optimality condition which can be considered the natural extension to vector optimization of the classical Fritz John condition.

Theorem 4.28 If the point is a weakly efficient point for P*, then (4.23) holds.

Proof. Let us note that the stated relation implies that the corresponding direction is a feasible direction of S at the point. Since the point is weakly efficient, the stated relation holds or, equivalently, the associated system has no solution. Setting the subspaces accordingly, the thesis follows from i) of Theorem 4.22.

As in the scalar case, it can happen that the multipliers associated with the objectives vanish in (4.23); when they do not, we will refer to (4.23) as the Kuhn-Tucker conditions in vector optimization. Under suitable assumptions of generalized convexity, the Kuhn-Tucker conditions become sufficient optimality conditions.
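As a small numerical companion to the vector Kuhn-Tucker conditions (4.23), the sketch below builds a hypothetical linear problem (the data are our own, not from the text), exhibits multipliers satisfying the stationarity equation, and confirms by sampling that the candidate point is efficient.

```python
import random

# Hypothetical problem:  minimize F(x) = (x1, x2)  subject to
# g(x) = 1 - x1 - x2 <= 0.  At xbar = (0.5, 0.5) the constraint is binding,
# and the multipliers lam = (1, 1), mu = 1 satisfy the Kuhn-Tucker-type
# stationarity equation  lam^T JF(xbar) + mu * grad g(xbar) = 0.

JF = [(1.0, 0.0), (0.0, 1.0)]       # Jacobian of F (constant here)
grad_g = (-1.0, -1.0)
lam, mu = (1.0, 1.0), 1.0

kt = tuple(lam[0] * JF[0][k] + lam[1] * JF[1][k] + mu * grad_g[k]
           for k in (0, 1))
assert kt == (0.0, 0.0)

# F and g are linear, hence pseudoconvex / C-quasiconvex, so xbar should be
# efficient: no feasible x can have x1 <= 0.5 and x2 <= 0.5 with one strict,
# because feasibility forces x1 + x2 >= 1.
random.seed(4)
for _ in range(1000):
    x = (random.uniform(-2, 2), random.uniform(-2, 2))
    if 1 - x[0] - x[1] <= 0:                      # feasible
        assert not (x[0] <= 0.5 and x[1] <= 0.5 and x != (0.5, 0.5))
print(kt)
```

With the objective multipliers strictly positive, this is a Kuhn-Tucker point rather than a mere Fritz John point, matching the sufficiency results stated next.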

Theorem 4.29 i) If (4.23) holds, F is pseudoconvex at the point in the appropriate sense, and G is C-quasiconvex at it, then the point is an efficient point for P*.
ii) If (4.23) holds, F is pseudoconvex at the point in the corresponding weaker sense, and G is C-quasiconvex at it, then the point is a weakly efficient point for P*.
iii) If (4.23) holds, F belongs to the third class at the point, and G is C-quasiconvex at it, then the point is a local efficient point for P*.
iv) If (4.23) holds, F is C-convex at the point, and G is C-quasiconvex at it, then the point is an efficient point for P*.

Corollary 4.6 i) If (4.23) holds, F is componentwise strictly pseudoconvex at the point, and G is C-quasiconvex at it, then the point is an efficient point for P*.
ii) If (4.23) holds, F is componentwise pseudoconvex at the point, and G is C-quasiconvex at it, then the point is a weakly efficient point for P*.
iii) If (4.23) holds, F is componentwise pseudoconvex on S, and G is C-quasiconvex at the point, then it is an efficient point for P*.

10. Pseudolinearity in vector optimization

Conditions (4.16) and (4.17) suggest defining pseudolinearity with respect to the Paretian cone by requiring that the logical implications in the definitions of the corresponding pseudoconvex classes can be reversed. More exactly, let X be an open set, F a differentiable function, and S a convex set.

Definition 4.13 The function F is said to be pseudolinear with respect to C (on S) if the following two statements hold:

Remark 4.7 If F is componentwise pseudolinear, then F is pseudolinear with respect to C; the converse is not true, as can easily be verified considering the class of functions indicated below. Pseudolinearity with respect to C implies that a local efficient point is efficient too, even if such a property does not hold for the related pseudoconvex class (see Example 4.10).
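For an affine, hence componentwise pseudolinear, vector function, the existence of strictly positive multipliers annihilating the Jacobian forces every point to be efficient, in the spirit of the results stated below. The sketch uses invented data: the two objectives trade off exactly, so no point can be improved in one without worsening the other.

```python
import random

# Assumed example: the affine (hence componentwise pseudolinear) function
# F(x) = (x1 - x2, x2 - x1).  The multipliers lam = (1, 1) > 0 satisfy
# lam^T JF = 0, so every point of the plane should be efficient.

JF = [(1.0, -1.0), (-1.0, 1.0)]
lam = (1.0, 1.0)
assert all(lam[0] * JF[0][k] + lam[1] * JF[1][k] == 0.0 for k in (0, 1))

def F(x):
    return (x[0] - x[1], x[1] - x[0])

def dominates(y, z):
    return all(a <= b for a, b in zip(y, z)) and any(a < b for a, b in zip(y, z))

# Since F1(x) + F2(x) = 0 identically, dominating any point is impossible:
# lowering one component raises the other by the same amount.
random.seed(5)
xbar = (0.3, -0.7)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    assert not dominates(F(x), F(xbar))
print("no sampled point dominates F(xbar)")
```

The same check fails as soon as the positive-multiplier condition does, e.g. for F(x) = (x1, x2), where points are freely dominated.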

Theorem 4.30 Let F be pseudolinear with respect to C (on S). If a point is a local efficient point for P, then it is efficient for P.

Proof. Assume that there exists a feasible point for which the efficiency relation fails, and consider the line segment joining the two points. By the pseudolinearity of F, the points of the segment close to the given one are better, and this contradicts its local efficiency.

Remark 4.8 If we are interested in maximizing a vector pseudolinear function, it is sufficient to consider efficiency with respect to the cone C instead of −C; thus, taking into account condition (4.25), the previous result and the ones that we are going to establish hold for a vector maximum problem too.

Another important property of vector pseudolinear functions is that the sufficient optimality condition (4.20) becomes also necessary.

Theorem 4.31 Let F be pseudolinear with respect to C (on S), and let the point be an interior point of S. Then it is an efficient point for P if and only if (4.26) holds.

Proof. The pseudolinearity of F and the efficiency of the point imply the stated separation; (4.26) follows from ii) of Theorem 4.21. The converse statement follows from Theorem 4.25.

Corollary 4.7 Let F be a componentwise pseudolinear or an affine function (on S). Then a point is an interior efficient point if and only if (4.26) holds.

When all the components of F are pseudolinear on the whole space, the existence of an efficient point implies that any point is efficient, as shown in the following theorem.

Theorem 4.32 Let F be componentwise pseudolinear on the whole space. If there exists an efficient point for F, then any point is efficient.

Proof. Without loss of generality, we can suppose that no component has a stationary point; indeed, if some component has one, then by i) of Theorem 4.13 that component is a constant function, and the efficiency of a point with respect