Optimality for set-valued optimization in the sense of vector and set criteria


Kong et al. Journal of Inequalities and Applications 2017, 2017:47
DOI 10.1186/s13660-017-1319-x
Research, Open Access

Xiangyu Kong, GuoLin Yu* and Wei Liu
*Correspondence: guolin_yu@126.com. Institute of Applied Mathematics, Beifang University of Nationalities, Yinchuan, Ningxia 750021, P.R. China

Abstract
The vector criterion and the set criterion are the two main approaches to defining solutions of set-valued optimization problems. In this paper, optimality conditions are established for both types of solutions. Using Studniarski derivatives, necessary and sufficient optimality conditions are derived in the sense of vector optimization and of set optimization.

MSC: Primary 90C29; 90C46; secondary 26B25

Keywords: vector optimization; set optimization; optimality conditions; pseudoconvexity; Studniarski derivative

1 Introduction
Let F be a set-valued mapping from a set S to a real normed linear space Y and let C ⊆ Y be a convex cone. The set-valued optimization problem is formalized as follows:

(P)  minimize F(x) subject to x ∈ S.

It is well known that there are two types of solution criteria for problem (P): the vector criterion and the set criterion. The vector criterion is the one most commonly used for problem (P); it consists of looking for efficient elements of the set F(S) = ∪_{x∈S} F(x). Problem (P) with the vector criterion is often called set-valued vector optimization. Over the past decades, set-valued vector optimization theory and its applications have been investigated extensively by many authors. Fruitful results have been produced in this field, and we refer the reader to the excellent books [1-4] and the references therein.

The other criterion, the set criterion, which gives rise to what is called set optimization, was proposed for the first time by Kuroiwa [5, 6] in 1999. Set optimization consists of looking for image sets F(x̄), with x̄ ∈ S, which satisfy certain set relations with the remaining image sets F(x), x ∈ S. In [3], p. 378, Jahn states that the set relation approach opens a new and exciting field of research. Although the set criterion seems more natural and interesting than the traditional one, the study of set optimization is still rather limited: for existence conditions, see [7-12]; for duality theory, see [13-16]; for optimality conditions, see [17-19]; for scalarization, see [20-23]; for well-posedness properties, see [24, 25]; for Ekeland variational principles, see [26, 27].
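To fix ideas about the two criteria just described, the following toy sketch (not taken from the paper; the finite data, the map F and the helper names are hypothetical) contrasts them for Y = ℝ² and C = ℝ²₊: the vector criterion inspects the union F(S) pointwise, while the set criterion compares whole images through a set relation of the kind recalled later in Section 4.

```python
# Toy illustration (not from the paper): vector criterion vs. set criterion
# for a finite set-valued problem with Y = R^2 and C = R^2_+.

# Hypothetical data: S = {0, 1, 2}, each image F(x) a finite subset of R^2.
F = {
    0: [(1.0, 3.0), (3.0, 1.0)],
    1: [(2.0, 2.0), (4.0, 0.5)],
    2: [(3.0, 3.0), (3.5, 2.5)],
}

def leq(a, b):
    """a <= b in the componentwise order induced by C = R^2_+."""
    return a[0] <= b[0] and a[1] <= b[1]

# Vector criterion: look at the union F(S) and keep its efficient (Pareto) points.
FS = [y for ys in F.values() for y in ys]
efficient = [y for y in FS if not any(leq(z, y) and z != y for z in FS)]
print("F(S) =", FS)
print("efficient points of F(S):", efficient)

# Set criterion (informally): compare whole images via the lower set relation
# A <=^l B  iff  every b in B is dominated by some a in A (i.e. B is in A + C).
def lower_le(A, B):
    return all(any(leq(a, b) for a in A) for b in B)

for x in F:
    for xp in F:
        if x != xp and lower_le(F[x], F[xp]):
            print(f"F({x}) <=^l F({xp})")
```

With this hypothetical data the vector criterion keeps points coming from both F(0) and F(1), while under the lower set relation F(0) and F(1) are incomparable and both lie below F(2), so the set criterion would retain the images F(0) and F(1) as a whole.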

The investigation of optimality conditions, especially with respect to the vector criterion, has received tremendous attention in the study of optimization problems. As is well known, most problems under consideration are nonsmooth, which has led many authors to introduce various kinds of generalized derivatives. A meaningful concept is the Studniarski derivative [28], which has some properties similar to the contingent derivative [1] and the Hadamard derivative [29]. Recently, much attention has been paid to optimality conditions and related topics for vector optimization by means of Studniarski derivatives; see [30, 31].

Inspired by the above observations, in this paper optimality conditions are established for problem (P) under both the vector criterion and the set criterion by using Studniarski derivatives. The rest of the paper is organized as follows. In Section 2, some well-known definitions and results used in the sequel are recalled. In Sections 3 and 4, the necessary and sufficient optimality conditions are given in the sense of the vector criterion and the set criterion, respectively.

2 Preliminaries
Throughout the rest of the paper it is assumed that X and Y are two real normed linear spaces and that C ⊆ Y is a closed convex pointed cone with nonempty interior, i.e., int C ≠ ∅. The partial order on Y is induced by the cone C: the ordering given by C on Y is denoted by ≤ and is defined as follows:

y₁ ≤ y₂ if y₂ − y₁ ∈ C.

Let S ⊆ X be a nonempty set and x̄ ∈ S. The contingent cone to S at x̄ is defined by (see [1])

T(S, x̄) = { x ∈ X : ∃ t_n → 0⁺, ∃ x_n → x with x̄ + t_n x_n ∈ S for all n ∈ ℕ }.

For a set-valued mapping F : X → 2^Y, the set

dom F := { x ∈ X : F(x) ≠ ∅ }

is called the domain of F, the set

graph F := { (x, y) ∈ X × Y : y ∈ F(x) }

is called the graph of F, and the set

epi F := { (x, y) ∈ X × Y : y ∈ F(x) + C }

is termed the epigraph of F. Let (x̄, ȳ) ∈ graph F. Based on the definition of the contingent cone, we can form T(epi F, (x̄, ȳ)), which is given by

T(epi F, (x̄, ȳ)) := { (x, y) ∈ X × Y : ∃ t_n → 0⁺, ∃ (x_n, y_n) → (x, y) with ȳ + t_n y_n ∈ F(x̄ + t_n x_n) + C for all n ∈ ℕ }.   (2.1)
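To make the ordering and the epigraph just introduced concrete, here is a minimal sketch (not from the paper; the particular map F below is a hypothetical example) that checks y₁ ≤ y₂ via y₂ − y₁ ∈ C and membership in epi F for Y = ℝ² and C = ℝ²₊.

```python
# Minimal sketch (not from the paper): the cone ordering and the epigraph of a
# set-valued map, with Y = R^2 and C = R^2_+.

def in_C(d):
    # d belongs to C = R^2_+
    return d[0] >= 0 and d[1] >= 0

def leq(y1, y2):
    # y1 <= y2  iff  y2 - y1 belongs to C
    return in_C((y2[0] - y1[0], y2[1] - y1[1]))

def F(x):
    # Hypothetical set-valued map F : R -> 2^{R^2} with two-point images.
    return [(x, 1.0 - x), (x + 1.0, -x)]

def in_epi(x, y):
    # (x, y) in epi F  iff  y in F(x) + C, i.e. some v in F(x) satisfies v <= y.
    return any(leq(v, y) for v in F(x))

print(leq((0.0, 0.0), (1.0, 2.0)))    # True:  (1, 2) - (0, 0) is in C
print(leq((0.0, 0.0), (1.0, -1.0)))   # False: second component is negative
print(in_epi(0.0, (0.5, 1.5)))        # True:  (0, 1) <= (0.5, 1.5)
print(in_epi(0.0, (0.5, -0.5)))       # False: no point of F(0) lies below it
```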

Definition 2.1 (see [1]) Let F : X → 2^Y be a set-valued map. F is called C-pseudoconvex at (x̄, ȳ) ∈ graph F if

epi F − (x̄, ȳ) ⊆ T(epi F, (x̄, ȳ)).

For a set-valued mapping F : X → 2^Y, the upper and lower Studniarski derivatives of F are defined as follows.

Definition 2.2 (see [28]) Let (x̄, ȳ) ∈ graph F.
(i) The upper Studniarski derivative of F at (x̄, ȳ) is defined by

D̄F(x̄, ȳ)(x) := lim sup_{(t, x′) → (0⁺, x)} [F(x̄ + t x′) − ȳ] / t.

(ii) The lower Studniarski derivative of F at (x̄, ȳ) is defined by

D̲F(x̄, ȳ)(x) := lim inf_{(t, x′) → (0⁺, x)} [F(x̄ + t x′) − ȳ] / t.

Remark 2.3 Regarding the upper and lower Studniarski derivatives of the map F, one has:
(a) It is easy to see from Definition 2.2 that

D̄F(x̄, ȳ)(x) = { y ∈ Y : ∃ t_n → 0⁺, ∃ (x_n, y_n) → (x, y) such that ȳ + t_n y_n ∈ F(x̄ + t_n x_n) for all n },
D̲F(x̄, ȳ)(x) = { y ∈ Y : ∀ t_n → 0⁺ and ∀ x_n → x, ∃ y_n → y such that ȳ + t_n y_n ∈ F(x̄ + t_n x_n) for all n }.

(b) The upper Studniarski derivative is exactly the contingent derivative [1], or the upper Hadamard derivative [29], and the lower Studniarski derivative is the lower Hadamard derivative introduced by Penot [29].

Lemma 2.4 states a useful property associated with C-pseudoconvexity of a set-valued mapping, which will be used in the next two sections.

Lemma 2.4 Let S ⊆ X be a convex set and let F : S → 2^Y be C-pseudoconvex at (x̄, ȳ) ∈ graph F. Then

F(x) − ȳ ⊆ D̄F(x̄, ȳ)(x − x̄) + C for all x ∈ S.   (2.2)

Proof Since F is C-pseudoconvex at (x̄, ȳ), it yields

(x − x̄, y − ȳ) ∈ T(epi F, (x̄, ȳ)) for all x ∈ S, y ∈ F(x).

Therefore, we see from (2.1) that there exist t_n → 0⁺ and (x_n, y_n) → (x − x̄, y − ȳ) such that

(x̄ + t_n x_n, ȳ + t_n y_n) ∈ epi F.

Thus

ȳ + t_n y_n ∈ F(x̄ + t_n x_n) + C.

Then one has

y_n ∈ [F(x̄ + t_n x_n) − ȳ] / t_n + C.

Taking the limit as n → +∞, we derive

y − ȳ ∈ D̄F(x̄, ȳ)(x − x̄) + C for all x ∈ S, y ∈ F(x).

So we get (2.2), as desired. □

Example 2.5 Let X = Y = ℝ, C = ℝ₊ and let F : X → 2^Y be defined by

F(x) := { y ∈ Y : y ≥ x² } for all x ∈ X.

Taking (x̄, ȳ) = (0, 0) ∈ graph F, we can derive that

D̄F(0, 0)(x) = D̲F(0, 0)(x) = ℝ₊ for all x ∈ X.

On the other hand, F is obviously C-pseudoconvex at (x̄, ȳ) = (0, 0) and satisfies

F(x) − ȳ ⊆ D̄F(x̄, ȳ)(x − x̄) + C for all x ∈ X.

3 Optimality conditions with the vector criterion
Let S ⊆ X be a nonempty set, let F : S → 2^Y be a set-valued mapping and let (x̄, ȳ) ∈ graph F. From now on, by convention, we always assume that the upper and lower Studniarski derivatives of F at (x̄, ȳ) exist and that dom D̄F(x̄, ȳ) = dom D̲F(x̄, ȳ) = S.

Consider the set-valued optimization problem (P) formulated in Section 1. We shall establish optimality conditions for the vector criterion of solutions of problem (P). In order to distinguish it from the case of the set criterion, we rewrite problem (P) as follows:

(VP)  min F(x)  s.t. x ∈ S ⊆ X.

Definition 3.1 The pair (x̄, ȳ) ∈ graph F is said to be a weak minimizer of (VP), denoted by ȳ ∈ WMin[F(S), C], if

(F(S) − ȳ) ∩ (−int C) = ∅.

Theorems 3.2 and 3.3 below are necessary optimality conditions for a weak minimizer of problem (VP).
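Before turning to them, here is a small numerical sanity check (not from the paper; the sampling scheme and helper names are ours) of Example 2.5 and Definition 3.1 for F(x) = {y ∈ ℝ : y ≥ x²}, C = ℝ₊ and (x̄, ȳ) = (0, 0): the difference quotient set (F(x̄ + t x′) − ȳ)/t is the ray [t x′², +∞), whose left endpoint tends to 0, which is why both Studniarski derivatives equal ℝ₊, and F(S) − ȳ never meets −int C.

```python
# Numerical sanity check (not from the paper) of Example 2.5 and Definition 3.1
# for F(x) = {y : y >= x^2}, C = R_+, (x̄, ȳ) = (0, 0).

def quotient_set_min(direction, t, perturbation):
    # (F(x̄ + t x') - ȳ)/t = {z : z >= t * x'^2}; return its smallest element.
    x_prime = direction + perturbation      # x' close to the chosen direction
    return t * x_prime**2

# The left endpoint of the quotient set tends to 0 as t -> 0+, x' -> x, so
# the upper and lower Studniarski derivatives at (0,0) are [0, +inf) = R_+
# in every direction x.
for direction in (-2.0, 0.0, 3.0):
    endpoints = [quotient_set_min(direction, t=10.0**(-k), perturbation=10.0**(-k))
                 for k in range(1, 7)]
    print(f"direction {direction:+}:", [round(e, 10) for e in endpoints])

# Definition 3.1 on a sample grid: (F(S) - ȳ) should not meet -int C,
# i.e. no image value drops below ȳ = 0; indeed min F(x) = x^2 >= 0.
grid = [i / 100.0 - 5.0 for i in range(1001)]
print("some value of F(S) lies below 0:", any(x**2 < 0.0 for x in grid))   # False
```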

Theorem 3.2 Let (x̄, ȳ) ∈ graph F be a weak minimizer of (VP). Then

D̄F(x̄, ȳ)(x) ∩ (−int C) = ∅, ∀ x ∈ S.   (3.1)

Proof Let y ∈ D̄F(x̄, ȳ)(x) with x ∈ S. Then there exist t_n → 0⁺, x_n → x and y_n ∈ F(x̄ + t_n x_n) such that

(y_n − ȳ) / t_n → y.   (3.2)

Because (x̄, ȳ) is a weak minimizer of (VP), one has

(F(x̄ + t_n x_n) − ȳ) ∩ (−int C) = ∅.   (3.3)

Now let us show that y ∉ −int C. In fact, assume that y ∈ −int C. Since −int C is an open cone, we obtain from (3.2)

y_n − ȳ ∈ −int C for large n.

Hence, it follows from y_n ∈ F(x̄ + t_n x_n) that

y_n − ȳ ∈ (F(x̄ + t_n x_n) − ȳ) ∩ (−int C) for large n,

which contradicts (3.3). □

Theorem 3.3 Let (x̄, ȳ) ∈ graph F. If (x̄, ȳ) is a weak minimizer of (VP), then

D̲F(x̄, ȳ)(x) ∩ (−int C) = ∅ for all x ∈ T(S, x̄).   (3.4)

Proof According to Theorem 3.2, one has, for all x ∈ S,

D̄F(x̄, ȳ)(x) ∩ (−int C) = ∅.   (3.5)

We proceed by contradiction. If the conclusion of Theorem 3.3 does not hold, then there exists x̂ ∈ T(S, x̄) such that

D̲F(x̄, ȳ)(x̂) ∩ (−int C) ≠ ∅.   (3.6)

Since x̂ ∈ T(S, x̄), there exist t_n → 0⁺ and x_n → x̂ such that x̄ + t_n x_n ∈ S for all n. In addition, we see from (3.6) that there exists ŷ ∈ D̲F(x̄, ȳ)(x̂) ∩ (−int C). Hence, with the above t_n and x_n, there are y_n → ŷ such that ȳ + t_n y_n ∈ F(x̄ + t_n x_n). So we see that there exist t_n → 0⁺, x_n → x̂ and y_n → ŷ such that

ȳ + t_n y_n ∈ F(x̄ + t_n x_n).

Thus ŷ ∈ D̄F(x̄, ȳ)(x̂) ∩ (−int C), which contradicts (3.5). □

Now we present a sufficient optimality condition for problem (VP) under the assumption of pseudoconvexity of F.

Theorem 3.4 Let S be a convex set and let F be C-pseudoconvex at (x̄, ȳ) ∈ graph F. If

D̄F(x̄, ȳ)(x − x̄) ∩ (−int C) = ∅, ∀ x ∈ S,   (3.7)

then (x̄, ȳ) is a weak minimizer of (VP).

Proof We proceed by contradiction. Suppose that (x̄, ȳ) is not a weak minimizer of (VP). Then there exists x ∈ S such that

(F(x) − ȳ) ∩ (−int C) ≠ ∅.   (3.8)

Since F is C-pseudoconvex at (x̄, ȳ), Lemma 2.4 shows that

F(x) − ȳ ⊆ D̄F(x̄, ȳ)(x − x̄) + C.

Therefore, one obtains from (3.8) that

(D̄F(x̄, ȳ)(x − x̄) + C) ∩ (−int C) ≠ ∅,

and as a consequence, since int C + C ⊆ int C,

D̄F(x̄, ȳ)(x − x̄) ∩ (−int C) ≠ ∅.

This contradicts (3.7). □

4 Optimality conditions with the set criterion
This section deals with optimality conditions for problem (P) in the sense of the set criterion. First, let us recall the concepts of set relations.

Definition 4.1 (see [5]) Let A and B be two nonempty subsets of Y. We write A ≤^l B if for all b ∈ B there exists a ∈ A such that a ≤ b. Here ≤^l is called the lower relation.

Remark 4.2 The lower relation defined above is equivalent to B ⊆ A + C, and it generalizes the order induced by the pointed convex cone C on Y, in the following sense: a ≤ b if b ∈ a + C.

Definition 4.3 (see [14]) Let A and B be two nonempty subsets of Y. We write A <^lw B if B ⊆ A + int C. Here <^lw is called the lower weak relation.
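The two set relations just recalled can be tested mechanically when the sets are finite. The following sketch (not from the paper; the two sample sets are hypothetical) implements A ≤^l B as B ⊆ A + C and A <^lw B as B ⊆ A + int C for C = ℝ²₊.

```python
# Minimal sketch (not from the paper) of the lower relation <=^l (Definition 4.1)
# and the lower weak relation <^lw (Definition 4.3) for finite subsets of R^2,
# with C = R^2_+.

def dominates(a, b, strict=False):
    """a <= b componentwise for C = R^2_+; strict=True uses int C."""
    if strict:
        return a[0] < b[0] and a[1] < b[1]
    return a[0] <= b[0] and a[1] <= b[1]

def lower_le(A, B):      # A <=^l B : every b in B is dominated by some a in A
    return all(any(dominates(a, b) for a in A) for b in B)

def lower_lt_weak(A, B): # A <^lw B : every b in B is strictly dominated
    return all(any(dominates(a, b, strict=True) for a in A) for b in B)

A = [(0.0, 2.0), (2.0, 0.0)]
B = [(1.0, 3.0), (3.0, 1.0)]
print(lower_le(A, B), lower_lt_weak(A, B))   # True True : B sits inside A + int C
print(lower_le(B, A), lower_lt_weak(B, A))   # False False
print(lower_le(A, A), lower_lt_weak(A, A))   # True False : <=^l is reflexive here, <^lw is not
```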

Definition 4.4 (see [14]) Let 𝒜 be a family of subsets of Y.
(1) A ∈ 𝒜 is called a lower minimal of 𝒜 if, for each B ∈ 𝒜 such that B ≤^l A, it satisfies A ≤^l B.
(2) A ∈ 𝒜 is called a lower weak minimal of 𝒜 if, for each B ∈ 𝒜 such that B <^lw A, it satisfies A <^lw B.

The relation ≤^l determines in 𝒜 the equivalence relation

A ∼ B  ⟺  A ≤^l B and B ≤^l A,

whose classes are denoted by [A]^l. Analogously, the relation <^lw determines in 𝒜 an equivalence relation whose classes are written [A]^lw.

In the sense of the set criterion, problem (P) can be rewritten in the following form:

(SP)  min_{≤^l} { F(x) }  s.t. x ∈ S ⊆ X.

In this case, a lower weak minimizer of problem (SP) is a pair (x̄, F(x̄)) such that x̄ ∈ S and F(x̄) is a lower weak minimal of the family of images of F, i.e., of the family ℱ = { F(x) : x ∈ S }. We say that F(x̄) is a lower weak minimal of (SP) instead of ℱ.

Example 4.5 Let S = [0, 1] ⊆ ℝ, Y = ℝ² and C = ℝ²₊. Let F₀ : [0, 1] → 2^{ℝ²} be the set-valued map defined by

F₀(x) = { (y₁, y₂) ∈ ℝ² : (y₁ − 1)² + (y₂ − 1)² = 1 }  if x = 0,
F₀(x) = [(x, x + 1), (1, x + 1)]  if 0 < x < 1,
F₀(x) = [(0, 1), (1, 0)]  if x = 1,

where [y, y′] with y, y′ ∈ ℝ² denotes the line segment, i.e., [y, y′] = { λ y + (1 − λ) y′ : 0 ≤ λ ≤ 1 }. Consider the following problem (SP₀):

(SP₀)  min_{≤^l} { F₀(x) }  s.t. x ∈ [0, 1].

By computing, one can derive that (1, F₀(1)) is the lower minimizer of (SP₀) and that (x, F₀(x)) with 0 < x ≤ 1 is a lower weak minimizer of (SP₀).

Definition 4.6 (see [19]) Let F(x̄) be a lower weak minimal of (SP). F(x̄) is called a strict lower weak minimal of (SP) if there exists a neighborhood U(x̄) of x̄ such that F(x) ≮^lw F(x̄) for all x ∈ U(x̄) ∩ S. Then x̄ is called a strict lower weak minimum of (SP).

Lemma 4.7 (see [19]) Let x̄ ∈ S. Then F(x̄) is a lower weak minimal of (SP) if and only if for each x ∈ S one of the conditions below is fulfilled:

(1) F(x) ∈ [F(x̄)]^lw;
(2) there exists ȳ ∈ F(x̄) such that (F(x) − ȳ) ∩ (−int C) = ∅.

Definition 4.8 (see [19]) (Domination property) A subset A ⊆ Y is said to have the C-weak minimal property if for every y ∈ A there exists a weakly minimal element a of A such that a ∈ y − (int C ∪ {0}).

Lemma 4.9 Let x, x̄ ∈ S and ȳ ∈ F(x̄). If F(x) ≮^lw F(x̄), WMin[F(x̄), C] = {ȳ} and F(x̄) has the C-weak minimal property, then

(F(x) − ȳ) ∩ (−int C) = ∅.   (4.1)

Proof Assume that (4.1) does not hold, that is,

(F(x) − ȳ) ∩ (−int C) ≠ ∅.

Then there exist y ∈ F(x) and d ∈ int C such that y − ȳ = −d. So it yields ȳ = y + d and ȳ ∈ F(x) + int C. On the other hand, it follows from WMin[F(x̄), C] = {ȳ} and the C-weak minimal property of F(x̄) that, for every y′ ∈ F(x̄), there is d′ ∈ int C ∪ {0} such that

y′ = ȳ + d′ ∈ F(x) + int C.

Therefore, we derive that F(x̄) ⊆ F(x) + int C, which means F(x) <^lw F(x̄). This contradicts the hypothesis. □

Theorem 4.10 Let x̄ be a strict lower weak minimum of (SP). If WMin[F(x̄), C] = {ȳ} and F(x̄) has the C-weak minimal property, then

D̄F(x̄, ȳ)(x) ∩ (−int C) = ∅ for all x ∈ S.   (4.2)

Proof Because x̄ is a strict lower weak minimum of (SP), we see from Definition 4.6 that there exists a neighborhood U(x̄) of x̄ such that F(x) ≮^lw F(x̄) for all x ∈ U(x̄) ∩ S. Let x ∈ S. Assume that y ∈ D̄F(x̄, ȳ)(x); then there exist t_n → 0⁺, x_n → x and y_n ∈ F(x̄ + t_n x_n) such that

(y_n − ȳ) / t_n → y.

Therefore, for n large enough we get x̄ + t_n x_n ∈ U(x̄) ∩ S, which gives F(x̄ + t_n x_n) ≮^lw F(x̄). Thus, it follows from Lemma 4.9 that

(F(x̄ + t_n x_n) − ȳ) ∩ (−int C) = ∅ for large n.   (4.3)

Let us prove that y ∉ −int C. Otherwise one has

y_n − ȳ ∈ −int C for large n.

Noticing that y_n ∈ F(x̄ + t_n x_n), we derive

y_n − ȳ ∈ (F(x̄ + t_n x_n) − ȳ) ∩ (−int C) for large n.

This contradicts (4.3). So we obtain (4.2), as desired. □

Theorem 4.11 Let x̄ be a strict lower weak minimum of (SP). If WMin[F(x̄), C] = {ȳ} and F(x̄) has the C-weak minimal property, then

D̲F(x̄, ȳ)(x) ∩ (−int C) = ∅ for all x ∈ T(S, x̄).   (4.4)

Proof Based upon Theorem 4.10, we see that equation (4.2) holds. The rest of the proof follows by arguments similar to those in the proof of Theorem 3.3. □

Theorem 4.12 Let S be a convex set and let F be C-pseudoconvex at (x̄, ȳ) ∈ graph F. If

D̄F(x̄, ȳ)(x − x̄) ∩ (−int C) = ∅ for every x ∈ S,

then x̄ is a lower weak minimum of (SP).

Proof Suppose that x̄ is not a lower weak minimum of (SP). Then it follows from Lemma 4.7 that there exists x ∈ S such that for each ȳ ∈ F(x̄) there is y ∈ F(x) with

y − ȳ ∈ −int C.   (4.5)

Since F is C-pseudoconvex at (x̄, ȳ), we derive from Lemma 2.4 that

F(x) − ȳ ⊆ D̄F(x̄, ȳ)(x − x̄) + C.

Therefore, we obtain

y − ȳ ∈ D̄F(x̄, ȳ)(x − x̄) + C.   (4.6)

Combining (4.5) with (4.6), we get

(D̄F(x̄, ȳ)(x − x̄) + C) ∩ (−int C) ≠ ∅,

and furthermore, since int C + C ⊆ int C,

D̄F(x̄, ȳ)(x − x̄) ∩ (−int C) ≠ ∅,

which is a contradiction. □

Remark 4.13 Comparing with the results derived in Section 3, it can be seen that, under suitable conditions, the optimality conditions for the set criterion and for the vector criterion take the same form in terms of Studniarski derivatives.
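To close the section with something concrete, the following toy sketch (not from the paper; the family of images is hypothetical and finite) tests Definition 4.4(2) directly: an image A is kept when every B with B <^lw A also satisfies A <^lw B.

```python
# Toy sketch (not from the paper) of Definition 4.4(2): selecting the lower weak
# minimal images from a finite family {F(x) : x in S}, with C = R^2_+.

def strictly_dominates(a, b):
    return a[0] < b[0] and a[1] < b[1]       # a lies in b - int C

def lower_lt_weak(A, B):                     # A <^lw B  iff  B is in A + int C
    return all(any(strictly_dominates(a, b) for a in A) for b in B)

def is_lower_weak_minimal(A, family):
    # Definition 4.4(2): whenever some B in the family satisfies B <^lw A,
    # A must satisfy A <^lw B as well.
    return all(lower_lt_weak(A, B) for B in family if lower_lt_weak(B, A))

# Hypothetical family of images.
family = {
    "F(x1)": [(0.0, 2.0), (2.0, 0.0)],
    "F(x2)": [(1.0, 1.0)],
    "F(x3)": [(2.0, 3.0), (3.0, 2.0)],
}
for name, A in family.items():
    print(name, "lower weak minimal:", is_lower_weak_minimal(A, family.values()))
```

With this data F(x1) and F(x2) come out lower weak minimal (no member of the family is <^lw-below them), while F(x3) does not, since F(x1) <^lw F(x3) holds but F(x3) <^lw F(x1) fails; this mirrors the dichotomy in Lemma 4.7.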

5 Conclusions
We have studied optimality conditions for both the vector criterion and the set criterion for a set-valued optimization problem in this note. For the vector criterion we have presented two necessary optimality conditions and a sufficient optimality condition for a weak minimizer in terms of Studniarski derivatives. For the set optimization criterion, utilizing known results, we have proved two necessary optimality conditions for a strict lower weak minimum, and a sufficient optimality condition for a lower weak minimum has been established under the assumption of pseudoconvexity.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Acknowledgements
This research was supported by the Natural Science Foundation of China under Grant No. 11361001 and the Natural Science Foundation of Ningxia under Grant No. NZ14101. The authors are grateful to the anonymous referees who have contributed to improving the quality of the paper.

Received: 13 October 2016   Accepted: 10 February 2017

References
1. Aubin, JP, Frankowska, H: Set-Valued Analysis. Birkhäuser, Boston (1990)
2. Luc, DT: Theory of Vector Optimization. Springer, Berlin (1989)
3. Jahn, J: Vector Optimization: Theory, Applications and Extensions. Springer, Berlin (2003)
4. Chen, G, Huang, X, Yang, X: Vector Optimization: Set-Valued and Variational Analysis. Springer, Berlin (2005)
5. Kuroiwa, D: Some duality theorems of set-valued optimization with natural criteria. In: Proceedings of the International Conference on Nonlinear Analysis and Convex Analysis, pp. 221-228. World Scientific, River Edge (1999)
6. Kuroiwa, D: On set-valued optimization. Nonlinear Anal. 47, 1395-1400 (2001)
7. Kuroiwa, D: Existence theorems of set optimization with set-valued maps. J. Inf. Optim. Sci. 24(1), 73-84 (2003)
8. Kuroiwa, D: Existence of efficient points of set optimization with weighted criteria. J. Nonlinear Convex Anal. 4(1), 117-123 (2003)
9. Hernández, E, Rodríguez-Marín, L, Sama, M: On solutions of set-valued optimization problems. Comput. Math. Appl. 60, 1401-1408 (2010)
10. Hernández, E, Rodríguez-Marín, L: Existence theorems for set optimization problems. Nonlinear Anal. 67, 1726-1736 (2007)
11. Hernández, E, Rodríguez-Marín, L, Sama, M: Some equivalent problems in set optimization. Oper. Res. Lett. 37, 61-64 (2009)
12. Maeda, T: On optimization problems with set-valued objective maps: existence and optimality. J. Optim. Theory Appl. 153, 263-279 (2012)
13. Hernández, E, Rodríguez-Marín, L: Duality in set optimization with set-valued maps. Pac. J. Optim. 3(2), 245-255 (2007)
14. Hernández, E, Rodríguez-Marín, L: Lagrangian duality in set-valued optimization. J. Optim. Theory Appl. 134, 119-134 (2007)
15. Hernández, E, Löhne, A, Rodríguez-Marín, L, Tammer, C: Lagrange duality, stability and subdifferentials in vector optimization. Optimization 72, 131-141 (2013)
16. Hamel, AH, Löhne, A: Lagrange duality in set optimization. J. Optim. Theory Appl. 161, 368-397 (2014)
17. Alonso, M, Rodríguez-Marín, L: Set-relations and optimality conditions in set-valued maps. Nonlinear Anal. 63, 1167-1179 (2005)
18. Alonso-Durán, M, Rodríguez-Marín, L: On approximate solutions in set-valued optimization problems. J. Comput. Appl. Math. 236, 4421-4427 (2012)
19. Alonso-Durán, M, Rodríguez-Marín, L: Optimality conditions for set-valued maps with set optimization. Nonlinear Anal. 70, 3057-3064 (2009)
20. Hernández, E, Rodríguez-Marín, L: Nonconvex scalarization in set optimization with set-valued maps. J. Math. Anal. Appl. 325, 1-18 (2007)
21. Araya, Y: Four types of nonlinear scalarizations and some applications in set optimization. Nonlinear Anal. 75, 3821-3835 (2012)
22. Yao, Y, Chen, R, Xu, HK: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 72, 3447-3456 (2010)
23. Gutiérrez, C, Jiménez, B, Miglierina, E, Molho, E: Scalarization in set optimization with solid and nonsolid ordering cones. J. Optim. Theory Appl. 61, 525-552 (2015)
24. Zhang, WY, Li, SJ, Teo, KL: Well-posedness for set optimization problems. Nonlinear Anal. 71, 3769-3778 (2009)
25. Gutiérrez, C, Miglierina, E, Molho, E, Novo, V: Pointwise well-posedness in set optimization with cone proper sets. Nonlinear Anal. 75, 1822-1833 (2012)
26. Ha, TXD: Some variants of the Ekeland variational principle for a set-valued map. J. Optim. Theory Appl. 124, 187-206 (2005)

27. Hamel, AH, Löhne, A: Minimal element theorems and Ekeland's principle with set relations. J. Nonlinear Convex Anal. 7(1), 19-37 (2006)
28. Studniarski, M: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM J. Control Optim. 24, 1044-1049 (1986)
29. Penot, JP: Differentiability of relations and differential stability of perturbed optimization problems. SIAM J. Control Optim. 22, 529-551 (1984)
30. Luu, DV: Higher-order necessary and sufficient conditions for strict local Pareto minima in terms of Studniarski's derivatives. Optimization 57, 593-605 (2008)
31. Sun, XK, Li, SJ: Lower Studniarski derivative of the perturbation map in parametrized vector optimization. Optim. Lett. 5, 601-614 (2011)