Vector Variational Principles; ε-efficiency and Scalar Stationarity


Journal of Convex Analysis, Volume 8 (2001), No. 1, 71–85

Vector Variational Principles; ε-efficiency and Scalar Stationarity

S. Bolintinéanu (H. Bonnel)
Université de La Réunion, Fac. des Sciences, IREMIA, 15 av. René Cassin, 97715 Saint Denis Messag. Cedex, France. email: hbonnel@univ-reunion.fr

Received November 24, 1998. Revised manuscript received April 19, 2000.

The author is thankful to the anonymous referees for their helpful comments. The author is also grateful to the people of the Institut Girard Desargues, Université Lyon 1, for their warm hospitality.

The aim of this paper is to present several versions of vector variational principles related to some type of metrically consistent ε-efficiency and to an approximate necessary first-order efficiency condition.

Keywords: Vector optimization, multicriteria optimization, ε-efficiency, stationary sequences, Kuhn-Tucker sequences, minimizing sequences, Pareto optimizing sequences, weakly efficient sequences, convex analysis, variational analysis, Ekeland variational principle

1. Introduction

For a scalar convex function, a point which is almost minimizing is not necessarily an almost stationary one, as we can see by considering the smooth convex function (see [11]) $f : \mathbb{R}^2\to\mathbb{R}$, $f(x_1,x_2) = \exp(x_1^2 - x_2)$ and the sequence $x_n = (n,\ n^2 + \ln\sqrt{n})$. We have $f(x_n) = 1/\sqrt{n}\to 0 = \inf f$, but $\nabla f(x_n) = (2\sqrt{n},\ -1/\sqrt{n})\not\to(0,0)$.

Relations between stationary and minimizing sequences have been considered in [11, 20] for constrained scalar mathematical programming problems, where the following result, immediate from the Ekeland variational principle, is also established.

Theorem 1.1. Let $f : \mathbb{R}^n\to\mathbb{R}$ be a continuously differentiable function. Then for any minimizing sequence $(x_k)$ there exists a nearby sequence $(y_k)$ such that
(i) $\lim_{k\to+\infty}(x_k - y_k) = 0$,
(ii) $\lim_{k\to+\infty} f(y_k) = \inf_{x\in\mathbb{R}^n} f(x)$,
(iii) $\lim_{k\to+\infty} \nabla f(y_k) = 0$.

The aim of this paper is to present some generalizations of the above result to nonsmooth vector optimization problems in infinite dimensional spaces. For a scalar minimization problem the optimal value $\inf f(\mathbb{R}^n)$ is a singleton (possibly $-\infty$). A different situation occurs in a vector minimization problem, where the set of optimal values (which is the infimal set) may be infinite and sometimes unbounded. There are several ways to define an approximate efficient solution. We introduce some types of ε-efficiency (with ε a positive scalar) which are metrically consistent in the sense that each ε-efficient solution is located in an ε-neighborhood of the infimal set. Approaching the infimal set in an asymptotic manner, we consider the asymptotically weakly Pareto optimizing (also called asymptotically weakly efficient) sequences. On the other hand, the approximate necessary first-order weak efficiency condition leads us to define the notion of weakly scalarly stationary (or weakly Kuhn-Tucker) sequence.
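The introductory example can be checked numerically. The sketch below is an added illustration (not part of the original paper): it evaluates $f$ and $\nabla f$ along the sequence $x_n$ and shows that the values tend to the infimum while the gradients do not tend to zero.

```python
import numpy as np

# Numerical check (an added illustration, not from the paper) of the introductory example:
# f(x1, x2) = exp(x1^2 - x2) along x_n = (n, n^2 + ln(sqrt(n))) is a minimizing sequence,
# yet the gradients do not tend to (0, 0).

def f(x):
    return np.exp(x[0] ** 2 - x[1])

def grad_f(x):
    v = f(x)
    return np.array([2.0 * x[0] * v, -v])

for n in [10, 100, 1000, 10000]:
    x = np.array([float(n), n ** 2 + np.log(np.sqrt(n))])
    print(n, f(x), grad_f(x))
# f(x_n) = 1/sqrt(n) -> 0 = inf f, while grad f(x_n) = (2*sqrt(n), -1/sqrt(n)) blows up.
```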

2. Preliminaries

2.1. The infimal set

Let $Y$ be a topological space endowed with a partial order relation denoted $\le_C$ ($C$ is a subset of $Y$ related to the order relation). Let $T\subseteq Y$. Denote

$\min(T|C) = \{a\in T \mid a\le_C x,\ \forall x\in T\},$
$\mathrm{MIN}(T|C) = \{a\in T \mid \forall x\in T,\ x\le_C a \Rightarrow x = a\}.$

Changing $C$ into $-C$ we obtain the sets $\max(T|C)$ and $\mathrm{MAX}(T|C)$. Also

$\inf(T|C) = \max(\{x\in Y \mid x\le_C y,\ \forall y\in T\}\,|\,C),$

and in the same way $\sup(T|C)$ is the least upper bound of $T$. Note that the above subsets may be empty, and that $\min(T|C)$, $\inf(T|C)$, $\max(T|C)$, $\sup(T|C)$ contain no more than one element. Remark also that their definition does not use the topological structure of $Y$.

The minimal set $\mathrm{MIN}(T|C)$ (resp. the maximal set $\mathrm{MAX}(T|C)$) is also called the Pareto or efficient set when dealing with a minimization (resp. maximization) problem. The efficient set may be infinite or even unbounded. The chronic fault of the sets $\inf(T|C)$ and $\sup(T|C)$ is that they may be far from $T$ (e.g. if $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$, $y\le_C y' \Leftrightarrow y' - y\in C$, and $T$ is the unit (closed or open) disk, then $\inf T = \{(-1,-1)\}$ and $\sup T = \{(1,1)\}$, which are located at Euclidean distance $\sqrt{2} - 1 > 0$ from $T$). That is why we will consider the infimal set (see also [18] or [24]) as follows:

$\mathrm{INF}(T|C) = \{a\in\bar T \mid \forall x\in T,\ x\le_C a \Rightarrow x = a\},$

where $\bar T$ stands for the closure of the set $T$ (and reversing the order we define the supremal set $\mathrm{SUP}(T|C)$). This set coincides with $\mathrm{MIN}(T|C)$ (resp. $\mathrm{MAX}(T|C)$) if $T$ is closed. Moreover

$\min(T|C)\subseteq\mathrm{MIN}(T|C);\quad \mathrm{MIN}(\bar T|C)\subseteq\mathrm{INF}(T|C);\quad \mathrm{MIN}(T|C)\subseteq\mathrm{INF}(T|C),$

and these inclusions may be strict. If $\min(T|C)$ is not empty (hence it is a singleton), then we have $\min(T|C) = \inf(T|C) = \mathrm{MIN}(T|C)$. Note also that we may have $\inf(T|C)\not\subseteq\mathrm{INF}(T|C)$.

Example 2.1. Let $Y = \mathbb{R}^2$ with the norm topology, $C = \mathbb{R}^2_+$, and $y\le_C y' \Leftrightarrow y' - y\in C$.
(1) For $T = [0,1[\,\times\,]0,1[$ we have $\min(T|C) = \mathrm{MIN}(T|C) = \emptyset$, $\mathrm{MIN}(\bar T|C) = \inf(T|C) = \{(0,0)\}$ and $\mathrm{INF}(T|C) = [0,1]\times\{0\}$.

(2) For $T = \{(\cos t, \sin t) \mid t\in[0,\pi[\,\}$ we have $\min(T|C) = \emptyset$, $\mathrm{MIN}(T|C) = \{(1,0)\}$, $\min(\bar T|C) = \mathrm{MIN}(\bar T|C) = \inf(T|C) = \{(-1,0)\}$, $\mathrm{INF}(T|C) = \{(1,0), (-1,0)\}$.
(3) For $T = \{(x_1,x_2) \mid x_1^2 + x_2^2 < 1\}$ we have $\min(T|C) = \mathrm{MIN}(T|C) = \emptyset$, $\inf(T|C) = \{(-1,-1)\}$, $\mathrm{INF}(T|C) = \{(\cos t, \sin t) \mid t\in[\pi, \tfrac{3\pi}{2}]\} = \mathrm{MIN}(\bar T|C)$.

2.2. The weakly infimal set in the policy space

Throughout this paper $Y$ denotes a real Banach space (the "policy space") endowed with a closed convex pointed cone $C$, i.e., $\bar C = C$, $\mathbb{R}_+C + \mathbb{R}_+C\subseteq C$ and $C\cap(-C) = \{0\}$. (Some results of this paper hold even when $C$ is not pointed, but to simplify the presentation we make this hypothesis.) This cone defines a (partial) order relation in $Y$ given by $x\le_C y \Leftrightarrow y - x\in C$, compatible with vector addition and positive scalar multiplication. Thus $Y$ is a (partially) ordered Banach space. We will assume that the interior $\operatorname{int}C$ is not empty. The cone $C_0 = \operatorname{int}C\cup\{0\}$ defines another partial order relation $x\le_{C_0} y \Leftrightarrow y - x\in C_0$, and when $x\le_{C_0} y$, $x\neq y$, we put $x <_C y$, i.e. $x <_C y \Leftrightarrow y - x\in\operatorname{int}C$. We will also denote by $B$ the closed unit ball in any normed vector space. If $S\subseteq Y$ and $y\in Y$, we denote $d(y,S) = \inf_{z\in S}\|y - z\|$, with the convention $d(y,\emptyset) = +\infty$.

Let $T\subseteq Y$. Since we are interested in vector minimization problems, in the sequel we will study only the (weakly) minimal or infimal sets, but obviously reversing the inequality (or replacing $C$ by $-C$) we may obtain analogous properties for the (weakly) maximal or supremal sets. We define the weakly efficient (or minimal) set by

$\mathrm{MIN}_w(T|C) = \mathrm{MIN}(T|C_0) = \{y\in T \mid \nexists z\in T:\ z <_C y\} = \{y\in T \mid -y + T\subseteq Y\setminus(-\operatorname{int}C)\}.$

It is easy to see the following.

Remark 2.2. $\mathrm{MIN}_w(T|C)$ contains $\mathrm{MIN}(T|C)$, and if $T$ is closed, then $\mathrm{MIN}_w(T|C)$ is closed.

Also, we define the weakly infimal set by

$\mathrm{INF}_w(T|C) = \mathrm{INF}(T|C_0) = \{y\in\bar T \mid \nexists z\in T:\ z <_C y\} = \{y\in\bar T \mid -y + T\subseteq Y\setminus(-\operatorname{int}C)\}.$

It is obvious that $\mathrm{INF}(T|C)\subseteq\mathrm{INF}_w(T|C)$.
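As a small numerical companion (added here, not part of the paper) to the disk example of Section 2.1 and to Example 2.1(3): the greatest lower bound $(-1,-1)$ lies at distance $\sqrt{2}-1$ from the disk, while the componentwise non-dominated points of a fine sample of the open disk cluster along the arc $\{(\cos t,\sin t) : t\in[\pi, 3\pi/2]\}$ that the infimal set singles out. All names below are ad hoc.

```python
import numpy as np

# Companion check (an added illustration, not from the paper) for Examples in Section 2.1:
# inf(T|C) = {(-1,-1)} lies far from T, while the non-dominated points of a sample of the
# open unit disk hug the lower-left boundary arc picked out by INF(T|C) in Example 2.1(3).

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
T = pts[np.linalg.norm(pts, axis=1) < 1.0]            # sample of the open unit disk

print(np.linalg.norm([-1.0, -1.0]) - 1.0)             # d((-1,-1), T) = sqrt(2) - 1 ~ 0.414

def non_dominated(points):
    """Sample points not componentwise strictly dominated by another sample point."""
    return np.array([p for p in points
                     if not np.any(np.all(points < p, axis=1))])

front = non_dominated(T)
# the retained points lie approximately on the unit circle, near the arc
# {(cos t, sin t) : t in [pi, 3*pi/2]}
print(np.max(np.abs(1.0 - np.linalg.norm(front, axis=1))))
```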

Proposition 2.3. $\mathrm{INF}_w(T|C) = \mathrm{MIN}_w(\bar T|C)$ (hence $\mathrm{INF}_w(T|C)$ is closed) and

$\mathrm{MIN}_w(T|C)\subseteq\mathrm{INF}_w(T|C)\subseteq\operatorname{bd}T,$

where $\operatorname{bd}T$ is the boundary of $T$.

Proof. The inclusion $\mathrm{MIN}_w(\bar T|C)\subseteq\mathrm{INF}_w(T|C)$ being obvious, let us prove the converse. Since $Y\setminus(-\operatorname{int}C)$ is closed, for each $y\in\mathrm{INF}_w(T|C)$ we have $-y + \bar T = \overline{-y + T}\subseteq Y\setminus(-\operatorname{int}C)$, hence $y\in\mathrm{MIN}_w(\bar T|C)$.

The inclusion $\mathrm{MIN}_w(T|C)\subseteq\mathrm{INF}_w(T|C)$ is obvious. Let $y\in\mathrm{INF}_w(T|C)$. If $y\notin\operatorname{bd}T$ it follows that $y\in\operatorname{int}T$. Hence for sufficiently small $\varepsilon > 0$ we have $y + \varepsilon B\subseteq T$. Take $v\in\operatorname{int}C\cap B$. It follows that $y - \varepsilon v <_C y$ with $y - \varepsilon v\in T$, contradicting $y\in\mathrm{INF}_w(T|C)$.

It is easy to adapt some existence results known for the efficient set (see e.g. [19, Chapter 2, Section 3]) to the case of the weakly infimal set. Thus, we can state the following.

Proposition 2.4. Let $T$ be such that, for some $y\in T$, the set $T_y = (y - C)\cap T$ is C-complete, i.e., $T_y$ contains no decreasing net $(y_\alpha)_{\alpha\in I}$ such that $T_y\subseteq\bigcup_{\alpha\in I}\bigl(T\setminus(y_\alpha - C)\bigr)$. Then $\mathrm{INF}_w(T|C)$ is not empty.

Remark 2.5. If the set $T_y$ is compact then it is C-complete.

Let us consider the set $C^+ = \{\lambda\in Y^* \mid \langle\lambda, y\rangle\ge 0\ \forall y\in C\}$, where $Y^*$ is the topological dual of the space $Y$ and $\langle\cdot,\cdot\rangle$ stands for the duality product.

Theorem 2.6. Assume $\operatorname{int}C^+\neq\emptyset$ and let $T\subseteq Y$ be such that there exists $\lambda\in\operatorname{int}C^+$ which is bounded from below on $T$. Then, for every $y\in T\setminus\mathrm{INF}_w(T|C)$, there exists $z\in\mathrm{INF}_w(T|C)$ such that $z <_C y$.

Proof. We need first the following.

Lemma 2.7. Suppose $\operatorname{int}C^+$ is not empty and let $\lambda\in\operatorname{int}C^+$. Then

$\inf_{y\in C,\ \|y\|=1}\langle\lambda, y\rangle \ \ge\ d(\lambda, Y^*\setminus C^+).$

Proof. For each positive number $r < \alpha = d(\lambda, Y^*\setminus C^+)$ we have $\lambda + rB\subseteq C^+$. Let $y\in C$, $\|y\| = 1$. According to the Hahn-Banach theorem, there exists $e^*\in Y^*$, $\|e^*\| = 1$, such that $\langle e^*, y\rangle = 1$. Thus $0\le\langle\lambda - re^*, y\rangle$, hence $r\le\langle\lambda, y\rangle$. Letting $r\to\alpha$ we obtain the result.

Now we go back to the proof of the theorem. This can be done using Phelps' extremization principle (see [1, Theorem 2.5]), but thanks to one of the referees we can give a direct proof.

Let $\lambda$ and $y$ be as stated in the theorem. There exists $y'\in T$ such that $y' <_C y$. Let us show that the section $T_{y'} = (y' - C)\cap T$ is C-complete. Indeed, if $(y_\alpha)$ is a decreasing net in $T_{y'}$, then $(\langle\lambda, y_\alpha\rangle)$ is a decreasing net in $\mathbb{R}$, bounded from below, hence it is fundamental. According to the previous lemma, for every $e\in C\setminus\{0\}$ we have $\langle\lambda, e\rangle\ge\|e\|\, d(\lambda, Y^*\setminus C^+)$. This implies that $(y_\alpha)$ is fundamental, hence it has a limit, say $\bar y\in\overline{T_{y'}}$. This shows that the family $\bigl(T_{y'}\setminus(y_\alpha - C)\bigr)_{\alpha}$ cannot cover $T_{y'}$, hence $T_{y'}$ is C-complete. Thus $\mathrm{INF}_w(T_{y'}|C)\neq\emptyset$, and since $\mathrm{INF}_w(T_{y'}|C)\subseteq\mathrm{INF}_w(T|C)$, any $z\in\mathrm{INF}_w(T_{y'}|C)$ satisfies the conclusion of the theorem.
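Lemma 2.7 can be tested in the simplest setting $Y = \mathbb{R}^2$ with the Euclidean norm and $C = \mathbb{R}^2_+$, for which $C^+ = \mathbb{R}^2_+$ as well. The short sketch below is an added illustration (not from the paper) comparing the two sides of the inequality for one particular $\lambda\in\operatorname{int}C^+$.

```python
import numpy as np

# Illustration (not from the paper) of Lemma 2.7 for Y = R^2, Euclidean norm, C = C^+ = R^2_+.
lam = np.array([3.0, 4.0])                      # a functional in int C^+

# left-hand side: inf{ <lam, y> : y in C, ||y|| = 1 }, sampled on the quarter circle
theta = np.linspace(0.0, np.pi / 2, 200001)
lhs = np.min(np.cos(theta) * lam[0] + np.sin(theta) * lam[1])

# right-hand side: d(lam, Y \ C^+); the nearest point outside the orthant differs from lam
# in its smallest coordinate only, so the distance equals min(lam)
rhs = np.min(lam)

print(lhs, rhs, lhs >= rhs - 1e-9)              # ~3.0, 3.0, True: the bound holds (with equality here)
```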

2.3. The extended space $Y^\bullet$

We will briefly recall some results presented in [8]. In real convex analysis we allow the values $+\infty$, $-\infty$ in order to handle convex functions defined on the whole space $X$, and for other practical reasons. For the same reasons we will extend the space $Y$ to a topological partially ordered space $Y^\bullet = Y\cup\{+\infty_C, -\infty_C\}$, where $+\infty_C$ and $-\infty_C$ are two distinct elements not belonging to $Y$ (some other results about the extended space can be found in [9, 22]). A neighborhood of $+\infty_C$ (of $-\infty_C$) in $Y^\bullet$ is a set $N$ (a set $-N$ resp.) such that $\{+\infty_C\}\cup(y + C)\subseteq N$ for some $y\in Y$. Then the topology $\tau^\bullet$ of $Y^\bullet$ is defined by

$\tau^\bullet = \tau\cup\{O\subseteq Y^\bullet \mid O\ \text{is a neighborhood of}\ +\infty_C\ \text{or of}\ -\infty_C\ \text{and}\ O\setminus\{+\infty_C, -\infty_C\}\in\tau\},$

where $\tau$ is the topology of $Y$. Thus the embedding $Y\subseteq Y^\bullet$ is continuous, and $Y$ is dense in $Y^\bullet$. The algebraic operations are extended in the same way as in convex analysis (see [8]). Note that $+\infty_C + (-\infty_C) = +\infty_C$ and $0\cdot(\pm\infty_C) = 0_Y$.

We will extend the relations $\le_C$, $\le_{C_0}$ and $<_C$ to $Y^\bullet$ by

$\forall y\in Y,\quad -\infty_C\le_C y\le_C +\infty_C,\quad -\infty_C\le_{C_0} y\le_{C_0} +\infty_C,\quad -\infty_C <_C y <_C +\infty_C.$

Note that, despite the analogy with the extended real line $\bar{\mathbb{R}}$, the space $Y^\bullet$ is in general not compact (even in the case $Y = \mathbb{R}^2$ and $C = \mathbb{R}^2_+$).

Remark 2.8. Following our definition, a sequence $(y_n)\subseteq Y$ tends to $+\infty_C$ in $Y^\bullet$ iff for every $y\in Y$ there exists some $n_0$ such that, for all $n\ge n_0$, we have $y\le_C y_n$. Also $y_n\to-\infty_C$ iff $-y_n\to+\infty_C$.

Remark 2.9. We also have that $y_n\to+\infty_C$ (or $y_n\to-\infty_C$) with $y_n\in Y$ implies $\|y_n\|\to+\infty$. Of course, the converse is false.

For $T\subseteq Y^\bullet$, $\bar T$ stands for the closure in the space $Y^\bullet$, and to simplify the notation we put $\mathrm{MIN}\,T = \mathrm{MIN}(T|C)$, $\mathrm{MIN}_w T = \mathrm{MIN}_w(T|C)$, $\mathrm{INF}_w T = \mathrm{INF}_w(T|C)$, etc. Note that, if $T\subseteq Y\cup\{+\infty_C\}$, then

$\mathrm{INF}_w T = \{y\in\bar T \mid -y + T\subseteq Y^\bullet\setminus(-\operatorname{int}C\cup\{-\infty_C\})\}.$

All the results about the (weakly) minimal or infimal set presented in the previous section for the space $Y$ hold for the space $Y^\bullet$ too, except the relation $\mathrm{INF}_w(T|C)\subseteq\operatorname{bd}T$. Moreover, according to the definition, we easily obtain the following.

Proposition 2.10. The following statements are equivalent:
(i) $-\infty_C\in\mathrm{INF}_w(T)$.
(ii) $\mathrm{INF}_w(T) = \{-\infty_C\}$.
(iii) $\exists (y_n)\subseteq T:\ y_n\to-\infty_C$.
(iv) $-\infty_C\in\bar T$.

Proposition 2.11. Let $T\subseteq Y\cup\{+\infty_C\}$ be such that $-\infty_C\notin\bar T$ and $T\neq\{+\infty_C\}$. Then $\mathrm{INF}_w T = \mathrm{INF}_w(T\setminus\{+\infty_C\})$, and these sets coincide with the weakly infimal set associated to $T\setminus\{+\infty_C\}$ in the space $Y$.

Proposition 2.12. Let $(y_n)$ be a sequence in $Y$. Then

$\forall\lambda\in C^+\setminus\{0\}:\qquad y_n\to+\infty_C \ \Longrightarrow\ \langle\lambda, y_n\rangle\to+\infty.$   (1)

Proof. Let $e\in\operatorname{int}C$. Then for any $\lambda\in C^+\setminus\{0\}$ we have $\langle\lambda, e\rangle > 0$ (otherwise, let $r > 0$ be such that $e + rB\subseteq C$; we have $\lambda(e + rB) = r\lambda(B)\subseteq\mathbb{R}_+$ and $\lambda(-B) = -\lambda(B)$, which implies $\lambda = 0$). For any $k\in\mathbb{R}_+$ there exists some $n_0\in\mathbb{N}$ such that $ke\le_C y_n$ for all $n > n_0$. Thus $k\langle\lambda, e\rangle\le\langle\lambda, y_n\rangle$.

We will extend each element $\lambda\in C^+\setminus\{0\}$ to a function (denoted $\lambda$ too) $\lambda : Y^\bullet\to\bar{\mathbb{R}}$ by putting $\langle\lambda, +\infty_C\rangle = +\infty$ and $\langle\lambda, -\infty_C\rangle = -\infty$. We will denote

$\Lambda = \{\lambda : Y^\bullet\to\bar{\mathbb{R}} \mid \lambda|_Y\in C^+;\ \|\lambda|_Y\| = 1;\ \lambda(-\infty_C) = -\infty,\ \lambda(+\infty_C) = +\infty\}.$

Notice also the following converse of Proposition 2.12.

Proposition 2.13. Let $C$ be a polyhedral convex pointed cone with nonempty interior (thus $Y$ must be a finite dimensional space!). Then, for any sequence $(y_n)\subseteq Y$ such that $\langle\lambda, y_n\rangle\to+\infty$ for all $\lambda\in\Lambda$, we have $y_n\to+\infty_C$.

Proof. Since $C^+$ is a polyhedral cone (see [23]), there exist $\{\lambda_1,\dots,\lambda_p\}\subseteq\Lambda$ such that

$C^+ = \Bigl\{\sum_{i=1}^{p}\alpha_i\lambda_i \ \Big|\ (\alpha_1,\dots,\alpha_p)\in\mathbb{R}^p_+\Bigr\}.$

Let $e\in\operatorname{int}C$. For each $i\in\{1,\dots,p\}$ there exists $n_i\in\mathbb{N}$ such that $\langle\lambda_i, y_n\rangle\ge\langle\lambda_i, e\rangle$ for all $n > n_i$.

Let $n_0 = \max(n_1,\dots,n_p)$. Then

$\forall\lambda\in C^+,\ \forall n > n_0:\qquad \langle\lambda, y_n\rangle\ge\langle\lambda, e\rangle.$

This implies

$\forall n > n_0:\qquad e\le_C y_n,$

hence $y_n\to+\infty_C$.

2.4. Extended-valued vector optimization problems

Unless otherwise stated, $X$ stands for a reflexive Banach space. Let us consider a map $F : X\to Y^\bullet$. We will denote by $\operatorname{dom}(F)$ the effective domain of $F$, i.e. $\operatorname{dom}(F) = \{x\in X \mid F(x)\neq+\infty_C\}$. We say that $F$ is proper if $\operatorname{dom}(F)\neq\emptyset$ and $F(x)\neq-\infty_C$ for all $x\in\operatorname{dom}(F)$.

For the vector minimization problem

(VMP)  minimize $F(x)$, $x\in X$,

we say that $a\in X$ is a weakly Pareto solution if $F(a)\in\mathrm{MIN}_w F(X)$, i.e., there is no $x\in X$ such that $F(x) <_C F(a)$. The weakly Pareto (or efficient) set will be denoted by $E_w = F^{-1}(\mathrm{MIN}_w F(X))$. Note that, if $F : X\to Y$ is continuous, then $E_w$ is closed. Note also that, if $-\infty_C\in F(X)$, then $E_w = F^{-1}(\{-\infty_C\})$.

Let $F : X\to Y^\bullet$ be a map.

Definition 2.14. We say that $F$ is C-convex if

$\forall\theta\in[0,1],\ \forall x, x'\in X:\qquad F((1-\theta)x + \theta x')\le_C (1-\theta)F(x) + \theta F(x').$

Definition 2.15. We say that $F$ is C-semicontinuous if the level sets

$L(F;\alpha) = \{x\in X \mid F(x)\le_C\alpha\}$

are closed for all $\alpha\in Y$.

For each $\lambda\in\Lambda$, we will consider the function $F_\lambda : X\to\bar{\mathbb{R}}$ given by $F_\lambda = \lambda\circ F$.

Definition 2.16. We say that $F$ is positively lower semicontinuous (continuous) if for any $\lambda\in\Lambda$ the function $F_\lambda$ is lower semicontinuous (continuous resp.).

Remark 2.17. $F$ is C-convex if and only if $F_\lambda$ is convex for every $\lambda\in C^+\setminus\{0\}$. If $F$ is C-convex then $L(F;\alpha)$ is a convex set in $X$ for all $\alpha\in Y$.

If $F$ is positively lower semicontinuous, then $F$ is C-semicontinuous (because $L(F;\alpha) = \bigcap_{\lambda\in C^+\setminus\{0\}}\{x\in X \mid F_\lambda(x)\le\langle\lambda,\alpha\rangle\}$). If $F$ is C-convex, the effective domain $\operatorname{dom}(F)$ is a convex set.

Definition 2.18. Let $F : X\to Y^\bullet$ be a C-convex map (a Gâteaux differentiable map with $\operatorname{dom}(F)$ open, resp.). A point $a\in X$ ($a\in\operatorname{dom}(F)$ resp.) is called weakly scalarly stationary if there exists $\lambda\in\Lambda$ such that $0\in\partial F_\lambda(a)$, where

$\partial F_\lambda(a) = \{x^*\in X^* \mid F_\lambda(x) - F_\lambda(a)\ge\langle x^*, x - a\rangle,\ \forall x\in X\}$

is the usual subdifferential of the extended real valued convex function $F_\lambda$ at $a$ (the Gâteaux derivative, resp.).

We denote by $S_w$ the set of the weakly scalarly stationary points.

Proposition 2.19.
(i) If $F : X\to Y^\bullet$ is C-convex, then $E_w = S_w$.
(ii) If $F : X\to Y$ is Gâteaux differentiable with $\operatorname{dom}(F)$ open, then $E_w\subseteq S_w$.

Proof. Part (i) has been proved in [8], and (ii) can be found in [6] for Fréchet differentiable functions; it is easy to generalize it to Gâteaux differentiable functions.

Let $F : X\to Y\cup\{+\infty_C\}$ be a proper map. We will denote

$\mathrm{INF}_w(F) = \mathrm{INF}_w(F(X)) = \{y\in\overline{F(X)} \mid -y + F(X)\subseteq Y^\bullet\setminus(-\operatorname{int}C\cup\{-\infty_C\})\} = \{y\in\overline{F(X)} \mid F(X)\cap(y - \operatorname{int}C) = \emptyset\}.$

Remark 2.20. It is easy to see that:
(i) $F(E_w) = F(X)\cap\mathrm{INF}_w(F)\subseteq\mathrm{INF}_w(F)$.
(ii) $E_w = F^{-1}(\mathrm{INF}_w(F))$.
(iii) $E_w$ may be empty while $\mathrm{INF}_w(F)$ is not empty (see [6]).

Definition 2.21. The point $a\in X$ is $\varepsilon$-weakly efficient if there exists $e\in B$ such that $F(a) - \varepsilon e\in\mathrm{INF}_w(F)$, where $\varepsilon > 0$.

Remark 2.22. Note that, for $F(a)\in Y$, the point $a$ is $\varepsilon$-weakly efficient iff $d(F(a);\mathrm{INF}_w(F))\le\varepsilon$.

Remark 2.23. In the literature we can find a different definition of approximate weak efficiency. Thus (see [18, 16]), given a vector $\varepsilon\in C$ (or $\varepsilon\in Y$), a point $x\in X$ is an $\varepsilon$-weakly efficient solution if there is no $x'\in X$ such that $F(x') <_C F(x) - \varepsilon$. The disadvantage of this definition is that the value $F(x)$ may be far from the set $\mathrm{INF}_w(F)$.
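A concrete finite dimensional instance of Proposition 2.19(i) may help fix ideas. The sketch below is an added illustration (not from the paper): it takes $X = \mathbb{R}$, $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and the C-convex map $F(x) = (x^2, (x-1)^2)$, whose weakly Pareto set is $E_w = [0,1]$; for each $a\in[0,1]$ the scalarization by $\lambda$ proportional to $(1-a, a)$ has zero derivative at $a$, so $E_w = S_w$ as the proposition asserts.

```python
import numpy as np

# Added illustration (not from the paper) of Proposition 2.19(i):
# X = R, Y = R^2, C = R^2_+, F(x) = (x^2, (x-1)^2) is C-convex and E_w = [0, 1].

def F(x):
    return np.array([x ** 2, (x - 1.0) ** 2])

def is_weakly_pareto(a, grid):
    """True if no grid point strictly improves both components of F at a."""
    Fa = F(a)
    return not any(np.all(F(x) < Fa) for x in grid)

grid = np.linspace(-3.0, 3.0, 6001)
candidates = [k / 20 for k in range(-40, 41)]
E_w = [a for a in candidates if is_weakly_pareto(a, grid)]
print(min(E_w), max(E_w))                 # 0.0 and 1.0 (up to the grid resolution)

# weak scalar stationarity: for a in [0,1], lambda proportional to (1-a, a) gives
# F_lambda'(a) = 2(1-a)a + 2a(a-1) = 0
for a in [0.0, 0.25, 0.5, 1.0]:
    lam = np.array([1.0 - a, a])
    lam = lam / np.linalg.norm(lam)
    print(a, abs(2 * lam[0] * a + 2 * lam[1] * (a - 1.0)) < 1e-12)   # True for each a
```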

Example 2.24. Consider $X = Y = \mathbb{R}^2$ with the Euclidean structure, $C = \mathbb{R}^2_+$, and $F : X\to Y$ such that

$\forall (x_1,x_2)\in\mathbb{R}^2:\qquad F(x_1,x_2) = (x_1^2 + x_2^2,\ 10^8 x_2^2).$

Thus $F(X) = \{(y_1,y_2)\in\mathbb{R}^2_+ \mid y_2\le 10^8 y_1\}$, hence $\mathrm{INF}_w(F)$ is the half-axis $\mathbb{R}_+(1,0)$. For the point $a = (0, 10^{-2})$ we have $F(0, 10^{-2}) = (10^{-4}, 10^4)$, hence $a$ is an $\varepsilon$-weakly efficient solution in the sense of Remark 2.23 with $\varepsilon = (10^{-4}, 0)$, but $d(F(a);\mathrm{INF}_w(F)) = 10^4$.

Definition 2.25. The point $a\in X$ is $\varepsilon$-correct weakly efficient if there exists $e\in C\cap B$ such that $F(a) - \varepsilon e\in\mathrm{INF}_w(F)$, where $\varepsilon > 0$.

Remark 2.26. It is obvious that every $\varepsilon$-correct weakly efficient point is also an $\varepsilon$-weakly efficient point. Also, a point $a\in X$ is weakly efficient $\Leftrightarrow$ $a$ is $\varepsilon$-correct weakly efficient for all $\varepsilon > 0$ $\Leftrightarrow$ $a$ is $\varepsilon$-weakly efficient for all $\varepsilon > 0$.

Remark 2.27. If we assume that $F_\lambda$ is bounded from below for some $\lambda\in\Lambda$ with $\lambda|_Y\in\operatorname{int}C^+$, then using Theorem 2.6 (with $T = F(X)\setminus\{+\infty_C\}$) and Proposition 2.11, we obtain that for each $a\in\operatorname{dom}(F)$ there exists $y\in\operatorname{int}C\cup\{0\}$, hence $y\in C$, such that $F(a) - y\in\mathrm{INF}_w(F)$. However, when $d(F(a);\mathrm{INF}_w(F)) < \varepsilon$, we cannot be sure that $\|y\|\le\varepsilon$. Of course, there exists $z\in\mathrm{INF}_w(F)$ such that $\|F(a) - z\| < \varepsilon$, but not necessarily $F(a) - z\in C$.

Example 2.28. Consider $X = Y = \mathbb{R}^2$ with the Euclidean structure, $C = \mathbb{R}^2_+$, and $F : X\to Y^\bullet$ such that $F(x) = x$ for all $x\in T = \{(x_1,x_2)\in\mathbb{R}\times\mathbb{R} \mid x_2\ge-10^3;\ (x_1 + x_2)(x_2 - 10^6 x_1) = 0\}$, and $F(x) = +\infty_C$ elsewhere. Then $\mathrm{INF}_w(F) = \mathrm{INF}_w(T)$ contains the ray $\{(t,-t) \mid t\le-10^{-3}\}$ together with the point $(-10^{-3}, -10^3)$, and the point $(0,0)$ is $10^{-3}\sqrt{2}$-weakly efficient, but it is not $10^{-3}\sqrt{2}$-correct weakly efficient (it is $\varepsilon$-correct weakly efficient only for $\varepsilon\ge 10^{-3}\sqrt{1 + 10^{12}}$).

Definition 2.29. A sequence $(x_n)$ in $\operatorname{dom}(F)$ will be called:
(1) asymptotically weakly Pareto optimizing (a.w.p.) if $d(F(x_n), \mathrm{INF}_w(F))\to 0$ or $F(x_n)\to-\infty_C$;
(2) correct asymptotically weakly Pareto optimizing (c.a.w.p.) if there exists a sequence $(\varepsilon_n)$ of positive real numbers such that (i) $\lim_n\varepsilon_n = 0$; (ii) for each $n$, $x_n$ is $\varepsilon_n$-correct weakly efficient;
(3) weakly scalarly stationary (w.s.s.) if $F$ is C-convex or $F$ is Gâteaux differentiable with $\operatorname{dom}(F)$ open, and there exists a sequence $(\lambda_n)\subseteq\Lambda$ verifying

$\forall n\in\mathbb{N},\ \exists\xi_n\in\partial F_{\lambda_n}(x_n):\qquad \|\xi_n\|_{X^*}\to 0,$   (2)

which is equivalent to saying that $\partial F_{\lambda_n}(x_n)\neq\emptyset$ for all $n\in\mathbb{N}$ and $d(0;\partial F_{\lambda_n}(x_n))\to 0$.

Remark 2.30. If $(x_n)$ is an a.w.p. sequence, then $\mathrm{INF}_w(F)\neq\emptyset$ and we have two possibilities:
(i) $-\infty_C\notin\mathrm{INF}_w(F)$, and in this case $d(F(x_n), \mathrm{INF}_w(F))\to 0$;
(ii) $\mathrm{INF}_w(F) = \{-\infty_C\}$, and in this case $F(x_n)\to-\infty_C$.
Note that (i) is equivalent to the following:
(i') there exists a sequence $(\varepsilon_n)$ of positive real numbers tending to 0 such that, for all $n\in\mathbb{N}$, $x_n$ is $\varepsilon_n$-weakly efficient.
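Returning to Example 2.24, the numbers there are easy to reproduce. The sketch below is an added illustration (not from the paper): it evaluates $F$ at $a = (0, 10^{-2})$ and shows why the vector-ε notion of Remark 2.23 accepts this point even though its value lies far from the weakly infimal set.

```python
import numpy as np

# Added numerical check (not from the paper) of Example 2.24:
# F(x1, x2) = (x1^2 + x2^2, 1e8 * x2^2); its weakly infimal set is the half-axis R_+ (1, 0).

def F(x1, x2):
    return np.array([x1 ** 2 + x2 ** 2, 1e8 * x2 ** 2])

a = (0.0, 1e-2)
Fa = F(*a)
print(Fa)                                    # [1.e-04  1.e+04]

# distance from F(a) to {(t, 0) : t >= 0} is its second coordinate
print(Fa[1])                                 # 10000.0 = d(F(a), INF_w(F))

# vector-epsilon sense of Remark 2.23 with eps = (1e-4, 0): F(x') <_C F(a) - eps = (0, 1e4)
# would need x1'^2 + x2'^2 < 0, which is impossible, so a passes that test despite the
# huge distance above -- exactly the defect that Definition 2.21 avoids.
```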

3. An a.w.p. sequence is close to a w.s.s. one

3.1. A vector variational principle

Here we consider that $X$ is a finite dimensional Euclidean space identified with its dual.

Theorem 3.1. Let $F : X\to Y\cup\{+\infty_C\}$ be a proper map, positively lower semicontinuous (l.s.c.), such that $\mathrm{INF}_w(F)\subseteq Y$. Then, for any $\varepsilon > 0$, $\Delta > 0$ and $a\in X$ such that $d(F(a);\mathrm{INF}_w(F)) < \varepsilon$, there exist $a'\in\operatorname{dom}F$ and $e\in B$ (the closed unit ball of $Y$) such that:
(i) $\|a' - a\|\le\Delta$;
(ii) $a'$ is a weakly efficient point for the map $G : X\to Y\cup\{+\infty_C\}$ given by

$G(x) = F(x) - \varepsilon\varphi\Bigl(1 - \dfrac{\|x-a\|^2}{\Delta^2}\Bigr)e,$   (3)

where $\varphi : \mathbb{R}\to\mathbb{R}$ is an increasing, convex, differentiable function such that $\varphi(\mathbb{R}_-) = \{0\}$, $\varphi'(1)\le 2$, $\varphi(1) = 1$ (for example $\varphi(t) = 0$ if $t < 0$, $\varphi(t) = t^2$ if $t\ge 0$);
(iii) for any $x\in\operatorname{dom}F$:

$\|F(x) - G(x)\|\le\varepsilon.$   (4)

If in addition $F$ is Gâteaux differentiable (we denote the Gâteaux derivative of a function $H$ at $a$ by $\nabla H(a)$ or by $H'(a)$) with $\operatorname{dom}F$ an open set, then

$\exists\lambda\in\Lambda:\qquad d(0;\nabla F_\lambda(a'))\le\dfrac{4\varepsilon}{\Delta}.$   (5)

Proof. Let $y\in\mathrm{INF}_w(F)$ be such that $\|F(a) - y\| < \varepsilon$. Denote by $e$ the vector verifying $F(a) - \varepsilon e = y$. If $F(a)\in\mathrm{INF}_w(F)$ take $e = 0$, $a' = a$, and (i), (ii), (iii) follow immediately. So we will suppose that $F(a)\notin\mathrm{INF}_w(F)$, hence $e\in B\setminus\{0\}$. Obviously the function $x\mapsto\varepsilon\varphi\bigl(1 - \|x-a\|^2/\Delta^2\bigr)$ is continuous on $X$. Thus the function $G$ defined by (3) is positively l.s.c., hence C-semicontinuous, which implies that the set $L(G;y)$ is closed. Note also that $a\in L(G;y)$ because $G(a) = y$. Consider $\lambda\in\Lambda$. The function $G_\lambda$ is l.s.c. as a sum of such functions, hence it attains its minimum on the compact set $(a + \Delta B)\cap L(G;y)\subseteq\operatorname{dom}G = \operatorname{dom}F$. Thus there exists $a'\in(a + \Delta B)\cap L(G;y)$ such that

$G_\lambda(a') = \min_{x\in(a+\Delta B)\cap L(G;y)} G_\lambda(x).$

Let $x\in X$ be such that $G(x) <_C G(a')$. Since $G(a')\le_C y$ we must have $x\in L(G;y)$. We have $G_\lambda(x) < G_\lambda(a')$, hence $x\notin(a + \Delta B)\cap L(G;y)$. Thus $x\notin a + \Delta B$, which implies that $G(x) = F(x) <_C G(a')\le_C y$, contradicting the fact that $y\in\mathrm{INF}_w(F)$. Hence there is no $x\in X$ such that $G(x) <_C G(a')$, which means that $a'$ is a weakly efficient point for $G$. Thus (i) and (ii) have been proved.

(4) is obvious. Let $F$ be Gâteaux differentiable with $\operatorname{dom}F$ an open set. Then $G$ is Gâteaux differentiable at any $x\in\operatorname{dom}F$, and for each $\lambda\in\Lambda$,

$\nabla G_\lambda(x) = \nabla F_\lambda(x) + \dfrac{2\varepsilon}{\Delta^2}\langle\lambda, e\rangle\,\varphi'\Bigl(1 - \dfrac{\|x-a\|^2}{\Delta^2}\Bigr)(x - a).$

Thus, according to Proposition 2.19, $\nabla G_\lambda(a') = 0$ for some $\lambda\in\Lambda$, and (5) follows immediately.
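The perturbation in (3) is a smooth bump supported on the ball $a + \Delta B$: it equals $\varepsilon e$ at $a$, vanishes where $\|x-a\|\ge\Delta$ (so $G$ and $F$ agree there), and its derivative is bounded by the constant $4\varepsilon/\Delta$ appearing in (5). The one-dimensional sketch below is an added illustration (not from the paper) checking these three facts for the model choice $\varphi(t) = \max(t,0)^2$.

```python
import numpy as np

# Added illustration (not from the paper) of the perturbation used in Theorem 3.1, in one variable:
# p(x) = eps * phi(1 - (x - a)^2 / Delta^2) with phi(t) = max(t, 0)^2.

eps, Delta, a = 0.1, 0.5, 1.0

def phi(t):
    return np.maximum(t, 0.0) ** 2                     # increasing, convex, C^1, phi(1) = 1, phi'(1) = 2

def phi_prime(t):
    return 2.0 * np.maximum(t, 0.0)

def p(x):
    return eps * phi(1.0 - (x - a) ** 2 / Delta ** 2)

def p_prime(x):
    return eps * phi_prime(1.0 - (x - a) ** 2 / Delta ** 2) * (-2.0 * (x - a) / Delta ** 2)

xs = np.linspace(a - 2 * Delta, a + 2 * Delta, 100001)
print(p(a), p(a + Delta), p(a + 1.5 * Delta))          # eps, 0.0, 0.0: the bump lives on a + Delta*B
print(np.max(np.abs(p_prime(xs))), 4 * eps / Delta)    # the maximal slope stays below 4*eps/Delta
```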

The following is obvious.

Corollary 3.2. Let $F : X\to Y\cup\{+\infty_C\}$ be Gâteaux differentiable, with $\operatorname{dom}F$ an open set. If $(a_k)$ is an a.w.p. sequence, then there exists a sequence $(a'_k)$ such that
(i) $\|a_k - a'_k\|\to 0$;
(ii) $(a'_k)$ is a w.s.s. sequence;
(iii) moreover, if $F$ is uniformly continuous on $\operatorname{dom}F$, then $(a'_k)$ is also an a.w.p. sequence.

3.2. Ekeland vector variational principle for ε-correct weakly efficient points

In this subsection we will see that much more can be said about an ε-correct weakly efficient point. Let $X$ be a reflexive Banach space, and $F : X\to Y\cup\{+\infty_C\}$.

Theorem 3.3. Suppose that $F$ is a proper, positively weakly l.s.c. map. Let $\varepsilon > 0$ and let $a\in X$ be any $\varepsilon$-correct weakly efficient point. Consider some $e\in C\cap B$ such that $F(a) - \varepsilon e\in\mathrm{INF}_w(F)$. Then, for any $\Delta > 0$, there exists some $b\in X$ such that:
(i) $\|a - b\|\le\Delta$;
(ii) $b$ is a weakly efficient point for the perturbed function

$x\mapsto G(x) = F(x) + \dfrac{\varepsilon}{\Delta}\|x - b\|\,e;$

(iii) $F(b)\le_C F(a) - \dfrac{\varepsilon}{\Delta}\|a - b\|\,e$.

Proof. Consider the function

$x\mapsto H(x) = F(x) - \varepsilon\Bigl(1 - \dfrac{\|x-a\|}{\Delta}\Bigr)_+ e,$

where $t_+ = \max(t, 0)$, $t\in\mathbb{R}$. Put $y = H(a) = F(a) - \varepsilon e$. For each $\lambda\in\Lambda$, the scalar function $H_\lambda$ is weakly l.s.c. as a sum of weakly l.s.c. functions (since $\langle\lambda, e\rangle\ge 0$ and $x\mapsto-\varepsilon\bigl(1 - \|x-a\|/\Delta\bigr)_+$ is weakly l.s.c., because $x\mapsto\|x-a\|$ is weakly l.s.c. and $t\mapsto t_+$ is continuous and increasing). Thus the level set $L(H;y)$ is weakly closed, hence the set $S = (a + \Delta B)\cap L(H;y)$ is weakly compact. Then, for each $\lambda\in\Lambda$, the set $\operatorname{arg\,min}_{x\in S} H_\lambda(x)$ is not empty. So we can choose $b\in\operatorname{arg\,min}_{x\in S} H_\lambda(x)$. Obviously we have (i).

To prove (ii), notice first that, for any $x\in S$, we have $H_\lambda(b)\le H_\lambda(x)$, which is equivalent to

$F_\lambda(b)\le F_\lambda(x) + \dfrac{\varepsilon}{\Delta}\langle\lambda, e\rangle\bigl(\|x - a\| - \|b - a\|\bigr).$

Since $\|x - a\| - \|b - a\|\le\|x - b\|$, we obtain

$\forall x\in S:\qquad G_\lambda(b)\le G_\lambda(x).$   (6)

Let us now consider $x\in X$ such that

$G(x) <_C G(b).$   (7)

From (6) we have that $x\notin S$. On the other hand, (7) is equivalent to

$F(x) <_C F(b) - \dfrac{\varepsilon}{\Delta}\|x - b\|\,e.$   (8)

If $x\notin a + \Delta B$, then $\|x - b\|\ge\|x - a\| - \|a - b\| > \Delta - \|a - b\|$, hence, from (8),

$F(x) <_C F(b) - \dfrac{\varepsilon}{\Delta}\bigl(\Delta - \|a - b\|\bigr)e = H(b)\le_C y,$

contradicting the fact that $y\in\mathrm{INF}_w(F)$. If $x\in a + \Delta B$, then (8) implies

$H(x) + \varepsilon\Bigl(1 - \dfrac{\|x-a\|}{\Delta}\Bigr)e <_C H(b) + \varepsilon\Bigl(1 - \dfrac{\|b-a\|}{\Delta}\Bigr)e - \dfrac{\varepsilon}{\Delta}\|x - b\|\,e,$

and, since $H(b)\le_C y$, we obtain

$H(x) <_C y + \dfrac{\varepsilon}{\Delta}\bigl(\|x - a\| - \|b - a\| - \|x - b\|\bigr)e\le_C y.$

It follows that $x\in L(H;y)$, hence $x\in S$, which is impossible. In conclusion, there is no $x\in X$ verifying (7), hence $b$ is weakly efficient for $G$.

(iii) follows immediately from the fact that $H(b)\le_C y$.

Corollary 3.4. With the hypotheses and notations of Theorem 3.3, if we assume in addition that $\operatorname{dom}F$ is an open set and $F$ is Gâteaux differentiable on $\operatorname{dom}F$, then there exists some $\lambda\in\Lambda$ such that

$\|(F_\lambda)'(b)\|\le\dfrac{\varepsilon}{\Delta}.$   (9)

Proof. The set $C_1 = Y\setminus(-\operatorname{int}C)$ is a closed cone (not convex), i.e. $\mathbb{R}_+C_1\subseteq C_1$. Let $h\in X$. Since $b$ is weakly efficient for $G$, for any positive real $t$ sufficiently small such that $b + th\in\operatorname{dom}F$, we have that $G(b + th) - G(b)\in C_1$, hence

$\dfrac{F(b + th) - F(b)}{t} + \dfrac{\varepsilon}{\Delta}\|h\|\,e\in C_1.$

Letting $t\to 0^+$, we obtain that

$F'(b)h + \dfrac{\varepsilon}{\Delta}\|h\|\,e\in C_1.$

Consider the set

$U = \Bigl\{F'(b)h + \dfrac{\varepsilon}{\Delta}\|h\|\,e \ \Big|\ h\in X\Bigr\}.$

It is easy to see that $U + C\subseteq C_1$, and $U + C$ is a convex set. Then, by the separation theorem, there exists $\lambda\in Y^*\setminus\{0\}$ such that

$\forall h\in X,\ \forall c\in C:\qquad \Bigl\langle\lambda,\ F'(b)h + \dfrac{\varepsilon}{\Delta}\|h\|\,e\Bigr\rangle\ \ge\ \langle\lambda, -c\rangle.$

It follows that $\lambda\in C^+$ (and dividing by $\|\lambda\|$ we can suppose that $\lambda\in\Lambda$), and using the fact that $\langle\lambda, F'(b)h\rangle = (F_\lambda)'(b)h = \langle\nabla F_\lambda(b), h\rangle$, we obtain

$\forall h\in\operatorname{bd}B:\qquad (F_\lambda)'(b)h + \dfrac{\varepsilon}{\Delta}\langle\lambda, e\rangle\ge 0.$

Changing $h$ into $-h$ we obtain that

$\forall h\in\operatorname{bd}B:\qquad |(F_\lambda)'(b)h|\le\dfrac{\varepsilon}{\Delta}\langle\lambda, e\rangle\le\dfrac{\varepsilon}{\Delta},$

which implies (9).

Corollary 3.5. With the hypotheses and notations of Theorem 3.3, if we assume in addition that $F$ is C-convex, then there exists some $\lambda\in\Lambda$ such that

$d(0, \partial F_\lambda(b))\le\dfrac{\varepsilon}{\Delta}.$   (10)

Proof. It is easy to see that the function $G$ is C-convex. Then, according to Proposition 2.19, there exists some $\lambda\in\Lambda$ such that $0\in\partial G_\lambda(b)$. Thus $0\in\partial F_\lambda(b) + \dfrac{\varepsilon}{\Delta}\langle\lambda, e\rangle B$ and (10) follows easily.

We obtain immediately the following.

Theorem 3.6. Let $F$ be a proper, positively weakly l.s.c. map. If $F$ is C-convex, or $F$ is Gâteaux differentiable with $\operatorname{dom}F$ open, then given any c.a.w.p. sequence $(a_k)$, there exists a sequence $(b_k)$ such that
(i) $\|a_k - b_k\|\to 0$;
(ii) $(b_k)$ is w.s.s.;
(iii) $F(b_k)\le_C F(a_k)$.

Remark 3.7. Unfortunately, in Theorem 3.6 we cannot be sure that the sequence $(b_k)$ is a.w.p., unless $F$ is C-convex and asymptotically well behaved as defined in [8], or $F$ is uniformly continuous on $\operatorname{dom}F$.

Remark 3.8. It is important to note the differences with the vector variational principle obtained by Loridan in [17], where the initial point (denoted $a$ in our theorem) is an ε-solution of some scalarized problem. If the problem is nonconvex, then there exist (weakly) efficient points $x$ such that $F(x)$ is far from any ε-solution of any scalarized function $F_\lambda$, $\lambda\in\Lambda$.

References

[1] H. Attouch, R. Riahi: Stability results for Ekeland's ε-variational principle and cone extremal solutions, Math. Oper. Res. 18 (1993) 173–201.
[2] J. P. Aubin, I. Ekeland: Applied Nonlinear Analysis, John Wiley & Sons, New York, 1984.
[3] A. Auslender, J. P. Crouzeix: Well behaved asymptotical convex functions, Ann. Inst. Henri Poincaré, Analyse Non Linéaire 6 (1989) 101–122.
[4] A. Auslender: How to deal with the unbounded in optimization: Theory and algorithms, Mathematical Programming 79 (1997) 3–18.
[5] A. Auslender, R. Cominetti, P. Crouzeix: Functions with unbounded level sets and applications to duality theory, SIAM J. Optimization 3 (1993) 669–687.
[6] B. Bernoussi, S. Bolintinéanu, C. C. Chou: Pareto optimizing and scalarly stationary sequences, J. Math. Analysis Appl. 220 (1998) 553–561.
[7] B. Bernoussi, S. Bolintinéanu, C. C. Chou, M. El Maghri: Efficient and Kuhn-Tucker sequences in constrained vector optimization problems, in: ISMP 97, Lausanne, Switzerland.
[8] S. Bolintinéanu: Approximate efficiency and scalar stationarity in unbounded nonsmooth convex vector optimization problems, J. Optim. Th. Appl. 106 (2000) 277–308.
[9] J. M. Borwein, J.-P. Penot, M. Théra: Conjugate convex operators, J. Math. Anal. Appl. 102 (1984) 399–414.
[10] F. H. Clarke: Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, 1983.
[11] C. C. Chou, K. F. Ng, J. S. Pang: Minimizing and stationary sequences of optimization problems, SIAM J. Control Optimization 36 (1998) 1908–1936.
[12] J. P. Dauer, W. Stadler: A survey of vector optimization in infinite-dimensional spaces, Part 2, J. Optim. Th. Appl. 51 (1986) 205–241.
[13] R. Deville, G. Godefroy, V. Zizler: A smooth variational principle with applications to Hamilton-Jacobi equations in infinite dimensions, J. Funct. Anal. 111 (1993) 197–212.
[14] S. Dolecki, C. Malivert: Polarities and stability in vector optimization, Lecture Notes in Economics and Mathematical Systems 294 (1987) 96–113.
[15] I. Ekeland: On the variational principle, J. Math. Anal. Appl. 47 (1974) 324–353.
[16] B. Lemaire: Approximation in multiobjective optimization, J. Global Optim. 2 (1992) 117–132.
[17] P. Loridan: ε-solutions in vector minimization problems, J. Optim. Th. Appl. 43 (1984) 265–276.
[18] P. Loridan: Well-posedness in vector optimization, in: Recent Developments in Well-Posed Variational Problems, R. Lucchetti, J. Revalski (eds.), Kluwer Academic Publishers, Math. Appl. 331 (1995) 171–192.
[19] D. T. Luc: Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems 319, Springer-Verlag, Berlin, 1989.

[20] J.-S. Pang: Error bounds in mathematical programming, Mathematical Programming 79 (1997) 229–332.
[21] J.-P. Penot, A. Sterna-Karwat: Parametrized multicriteria optimization: continuity and closedness of optimal multifunctions, J. Math. Anal. Appl. 120 (1986) 150–168.
[22] J.-P. Penot: Compact nets, filters and relations, J. Math. Anal. Appl. 93 (1983) 400–417.
[23] R. T. Rockafellar: Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[24] T. Staib: On two generalizations of Pareto minimality, J. Optim. Th. Appl. 59 (1988) 289–306.
[25] C. Tammer: A generalization of Ekeland's variational principle, Optimization 25 (1992) 129–141.