Approximations for Pareto and Proper Pareto solutions and their KKT conditions


P. Kesarwani, P. K. Shukla, J. Dutta and K. Deb

Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, India
Institute AIFB, Karlsruhe Institute of Technology, Karlsruhe, Germany
Department of Economic Sciences, Indian Institute of Technology Kanpur, India
College of Engineering, Michigan State University, Michigan, USA

October 6, 2018

Abstract

There have been numerous studies of proper Pareto points in multiobjective optimization theory. Geoffrion proper points are one of the most prevalent forms of proper optimality. Due to certain convergence issues, a restricted version of these proper points, Geoffrion proper points with preset bounds, has been introduced recently. Since any algorithm for a multiobjective optimization problem returns an approximate solution set, in this article we study the approximate version of Geoffrion proper points with preset bounds. We investigate scalarization results and saddle point conditions for these points. We also study the notion of approximate KKT conditions for multiobjective optimization problems in a general setting. Further, we discuss the notion of approximate Benson proper points and develop KKT conditions for the same.

1 Introduction

There has been a growing interest among optimizers in analyzing the nature of an approximate solution to an optimization problem. This stems from the fact that most optimization problems cannot be solved exactly. This fact about optimization is stated very clearly at the beginning of the monograph on convex optimization by Nesterov [?]. In practice we know that our algorithms only return some kind of approximate local or global solution. Thus it is natural to develop a formal theory of approximate solutions and to see to what extent their behavior parallels that of exact solutions. Optimizers have attempted to answer these questions over the past several decades; see for example [?], [?], [?] and [?]. In vector optimization, finding a Pareto minimizer or a weak Pareto minimizer hinges largely on scalarization techniques (see for example Ehrgott [?], Jahn [?], Luc [?], Chankong et al. [?]). Using these techniques, the solution that we generate is an approximate one. Hence it is natural to study the notion of approximate solutions in vector optimization. Several researchers have already analyzed various notions of approximate solutions in vector optimization; see for example Dutta et al. [?], Gutierrez et al. [?,?], White [?], Rong et al. [?].

It is also important to note that a decision maker, who takes decisions based on multiobjective optimization models, need not necessarily be interested in all the Pareto solutions of the problem at hand. In many cases the decision maker focuses only on a part of the Pareto frontier in the image space, which corresponds to a subset of the set of Pareto solutions. These subsets, when chosen in a particular way, give rise to various classes of proper Pareto solutions (see for example Ehrgott [?]). In practice, and more specifically in engineering, there is a growing importance of the use of evolutionary algorithms for solving multiobjective optimization problems (see for example Deb [?]). These evolutionary algorithms, which are population-based heuristics, produce an approximation of the Pareto frontier in the image space very quickly. Thus the proper Pareto solutions that one obtains in practice are in fact some kind of approximation of the proper Pareto solutions. We are therefore again faced with a situation where we need to build up an analysis for approximate proper Pareto solutions.

When we consider the natural ordering cone, i.e. the non-negative cone, the first notion of a proper Pareto optimal solution was put forward by Geoffrion [?]. In [?], Geoffrion defined as proper those Pareto optimal points at which the objective functions have bounded trade-offs. The importance of this notion became clear when Geoffrion [?] tied it to the weighted-sum scalarization. If we take all the weights positive, then solving the weighted-sum scalar problem leads to a proper Pareto solution in the sense of Geoffrion [?]. Remarkably, the converse holds true if all objective functions are convex (see [?]). Such proper solutions are now referred to as Geoffrion proper Pareto solutions. It has been shown through an example in Eichfelder [?] that in many situations the ordering cone that we need to use is not the non-negative cone. How does one then talk of a proper Pareto solution? Thus the idea of Geoffrion had to be generalized to this new setting. This was achieved, for example, in Benson [?], Borwein [?] and Henig [?].

Around thirteen years ago, in Shukla et al. [?], it was shown that a more robust version of the Geoffrion proper Pareto solution can in fact be developed. The notion due to Geoffrion suffers from the drawback that the decision maker does not know beforehand how large the trade-off bound is. A decision maker in many situations might like to be concerned only with those proper solutions whose trade-offs are bounded by a preset number provided by the decision maker herself/himself. In [?] such solutions are shown to be stable when approximate solutions are considered. To be more precise, a sequence of approximate Geoffrion proper Pareto solutions with a preset trade-off bound converges to a Geoffrion proper solution with the same trade-off bound. This property is not present if we consider the usual Geoffrion proper solutions. For more details on this new solution concept see [?].

1.1 Our Aim

The title of the paper reflects the fact that the celebrated Karush-Kuhn-Tucker conditions (KKT for short) will play a fundamental role in this paper. There is no doubt that the KKT conditions play a pivotal role in scalar optimization. However, their role in vector optimization is not clearly understood, except for the fact that they provide a necessary condition. In fact, if we look at the literature of multiobjective optimization, we do not see any direct use of the KKT conditions even in algorithms for solving linear multiobjective optimization problems. On the other hand, as we have mentioned, in most practical situations evolutionary algorithms are gaining prominence.
Since these are heuristics, it is important to check the quality of the points that we finally select as an approximation of the Pareto points.

To be more robust in our approach, the KKT conditions can be profitably used as a stopping criterion for such heuristics. In fact, we can accept a point generated by a heuristic if it violates the KKT conditions within a given margin of error. To the best of our knowledge, such a use of the KKT conditions for multiobjective optimization was first carried out in [?] (see also [?]). This motivated us to rethink how to develop approximate versions of the KKT conditions for approximate Geoffrion proper solutions with a preset trade-off bound. In fact, we show that such conditions are both necessary and sufficient when the problem data is convex. Thus, at least in the convex case, any point satisfying our approximate KKT conditions is an approximate Geoffrion proper Pareto optimal point with a preset bound. This is a novel feature, since usually such a converse does not hold so easily, even in the convex case, when we just consider approximate weak Pareto solutions; see for example Dutta et al. [?]. Further, we have developed a saddle point criterion for this type of solution point. We of course not only focus on Geoffrion solutions but also look at approximate versions of Benson proper Pareto solutions, for which the necessary approximate KKT conditions are also sufficient when the problem data is cone-convex. Thus, both for the Geoffrion and the Benson cases, we have a complete characterization when the problem data is convex and some qualification conditions are met. In order to make our presentation more complete, we also study some approximate KKT conditions for approximate Pareto and weak Pareto points. The study of approximate KKT conditions for vector optimization problems in a general setting was carried out, for example, in Durea et al. [?]. We use their approach but provide results which are very different from those in [?]. An interesting result which we present here is a kind of converse. We show that if the problem data is locally Lipschitz or convex and there is a sequence of feasible points converging to a weak Pareto minimum, then there exists a subsequence whose elements satisfy an approximate KKT type condition. We are yet to explore the complete implications of this result.

1.2 Organization of the Paper

The paper is divided into five major sections, including this introduction. In Section 2 we introduce the standard notation used in this paper and also present the various solution concepts used throughout. In this section we also introduce the notion of a Geoffrion proper Pareto solution with a fixed upper bound on the trade-offs. This solution concept was introduced, to the best of our knowledge, by Shukla et al. [?] and was then freshly looked into in Shukla et al. [?]. In [?] new and important properties of the above-mentioned solution concept were explored. In Section 3 we focus on the usual notion of approximate weak Pareto solutions in the light of approximate KKT conditions. A key notion of approximate KKT condition introduced in this section is that of a modified ɛ-KKT condition for a multiobjective programming problem with locally Lipschitz data. This is motivated by its scalar counterpart introduced in Dutta et al. [?]. The key result of this section is of converse type. It says that if we consider a multiobjective programming problem with locally Lipschitz data and there is a sequence converging to a local weak Pareto minimizer, then there is a subsequence of that sequence which is feasible and satisfies a modified ɛ-KKT type condition under a mild regularity assumption.
This result is made possible by the application of the Ekeland variational principle for the vector case, which is due to Tammer [?].

The key focus of Section 4 is to develop an approximate version of the KKT conditions for the approximate version of the Geoffrion proper Pareto solution with a fixed upper bound on the trade-off; these conditions are necessary in general and are also sufficient when the problem data is convex. In this section we also present some approximate KKT conditions associated with the approximate versions of some other classes of proper Pareto solutions, like the Benson proper solution, when the ordering cone is not the non-negative cone itself. In Section 5 we explore whether the approximate version of the Geoffrion proper Pareto solution with a fixed upper bound on the trade-off can be characterized through an approximate version of a saddle point condition when the problem data is convex.

2 Notations and Definitions

Our notation is fairly standard. Let $A \subseteq \mathbb{R}^n$ be a given set; the closure and interior of $A$ are denoted by $\mathrm{cl}\,A$ and $\mathrm{int}\,A$ respectively. A vector $x \in \mathbb{R}^n$ is usually written as a column vector $x = (x_1, x_2, \ldots, x_n)^T$. The inner product of two vectors is written $\langle x, y \rangle$ or $x^T y$; we use these two notations interchangeably. A set $A$ is a cone if for each $a \in A$ and each positive scalar $\lambda$ we have $\lambda a \in A$. A cone $A$ is pointed if $A \cap (-A) = \{0\}$. We write $\mathrm{cone}(A)$ for the cone generated by the set $A$, given as $\mathrm{cone}(A) = \{\lambda a : \lambda \geq 0,\ a \in A\}$. The dual cone $A^*$ of a set $A \subseteq \mathbb{R}^n$ is the cone $A^* = \{y \in \mathbb{R}^n : \langle x, y \rangle \geq 0 \ \forall x \in A\}$, and $(A^*)^0$ is defined by $(A^*)^0 = \{y \in \mathbb{R}^n : \langle x, y \rangle > 0 \ \forall x \in A \setminus \{0\}\}$.

Consider the following multiobjective optimization problem:

MOP:  $\min\ f(x) := (f_1(x), \ldots, f_m(x))$, subject to $g_j(x) \leq 0$, $j = 1, 2, \ldots, l$,

where each $f_i : \mathbb{R}^n \to \mathbb{R}$ and $g_j : \mathbb{R}^n \to \mathbb{R}$. Let us denote the constraint set by $X := \{x \in \mathbb{R}^n : g_j(x) \leq 0,\ j = 1, 2, \ldots, l\} \subseteq \mathbb{R}^n$, and set $I := \{1, 2, \ldots, m\}$, $L := \{1, 2, \ldots, l\}$ and the set of active indices at $x$ as $R(x) := \{r \in L : g_r(x) = 0\}$.

Solving MOP requires a (binary) ordering relation on $\mathbb{R}^m$. Given an ordering relation induced by a cone, one can compare two $m$-dimensional vectors from the set $f(X) := \{f(x) : x \in X\}$ and define an optimality notion for MOP. Consider $C \subseteq \mathbb{R}^m$ to be a closed, convex and pointed cone and define an ordering relation $\preceq_C$ on $\mathbb{R}^m$: for $x, y \in \mathbb{R}^m$ we say $x \preceq_C y$ if and only if $y - x \in C$. By choosing different cones $C$ we obtain different notions of optimality for MOP. These optimality notions are used in algorithms to find one or many optimal solutions of MOP. When the objective functions are conflicting in nature, there is no single feasible point which minimizes all the objective functions simultaneously. Thus a concept of optimality called (weak) Pareto optimality is considered. Unless MOP has a special structure, e.g. $X$ being polyhedral and each $f_i$ being linear or quadratic, almost all algorithms for solving MOP generate an infinite sequence of iterates (or an infinite sequence of sets of points) that converges to the set of optimal points of MOP. For computational reasons it is usually necessary to terminate the algorithm after some finite number of iterations. This leads to a sub-optimal point or, depending upon the distance of the obtained point to the set of optimal points of MOP, an ɛ-optimal point. To formalize our notions, in what follows we consider $\epsilon \in \mathbb{R}^m_+$, i.e. $\epsilon = (\epsilon_1, \epsilon_2, \ldots, \epsilon_m)$ with $\epsilon_i \geq 0$ for each $i \in I$. Our focus in this paper is on ɛ-solutions of MOP. In the following definitions the order on the image space $f(X)$ is induced by the natural cone $C = \mathbb{R}^m_+$.

Definition 2.1 Given $\epsilon \in \mathbb{R}^m_+$, if there is no $x \in X$ such that $f(x) + \epsilon - f(x^*) \in -\mathbb{R}^m_+ \setminus \{0\}$, then the point $x^* \in X$ is said to be an ɛ-Pareto optimal solution of MOP. Further, if there is no $x \in X$ such that $f(x) + \epsilon - f(x^*) \in -\mathrm{int}(\mathbb{R}^m_+)$, then the point $x^*$ is said to be a weak ɛ-Pareto optimal solution of MOP.

Let us make it clear at this stage that all our notation in this paper, specifically that denoting solution sets, is borrowed from our paper [?]. The current paper is in many ways a continuation of our study in [?]. Though not always seen in the literature, the following notion of a local solution is also relevant.

Definition 2.2 A point $x^*$ is said to be a local Pareto optimal solution of MOP if there exists $\delta > 0$ such that there is no $x \in X \cap B_\delta(x^*)$ with $f(x) - f(x^*) \in -\mathbb{R}^m_+ \setminus \{0\}$, where $B_\delta(x^*) \subseteq \mathbb{R}^n$ is the closed ball of radius $\delta$ centred at $x^*$. The weak counterpart of a local solution can be defined in a similar fashion, as in Definition 2.1.

We would like to mention that in several situations we will consider the particular form of the vector $\epsilon \in \mathbb{R}^m_+$ given by $\epsilon = \varepsilon e$, where $\varepsilon \geq 0$ and $e = (1, 1, \ldots, 1)^T$. In those cases the solutions will be referred to as ε-Pareto and weak ε-Pareto solutions respectively. The set of all ɛ-Pareto points will be denoted by $S^\epsilon(f, X)$ and the set of all weak ɛ-Pareto points by $S_w^\epsilon(f, X)$. A (weak) ɛ-Pareto optimal solution with ɛ = 0 is commonly known as a (weak) Pareto optimal solution. For simplicity, the set of Pareto optimal solutions and the set of weak Pareto optimal solutions will be denoted by $S(f, X)$ and $S_w(f, X)$, respectively.

As argued at the beginning of this paper, different Pareto optimal solutions might have different properties that a decision maker may desire. The need to filter out bad Pareto optimal solutions leads to the notion of a properly efficient point. There are different notions of proper optimality (see the nice survey in [?]). For example, if a closed, convex and pointed cone $C$ is used as the ordering cone, Benson proper optimality [?] is a widely studied notion. In an earlier work [?], we introduced the following approximate version of Benson proper optimality.

Definition 2.3 Given $\epsilon \in \mathbb{R}^m_+$, a point $x_0 \in X$ is called a Benson ɛ-proper solution (with respect to the ordering cone $C$) if
$$\mathrm{cl}\big(\mathrm{cone}(f(X) + (C + \epsilon) - f(x_0))\big) \cap (-C) = \{0\}.$$
The set of all Benson ɛ-proper solutions will be denoted by $S_B^\epsilon(f, X, C)$. If ɛ = 0, this reduces to the classical notion of Benson proper efficiency [?], and we will use the notation $S_B(f, X, C)$ instead of $S_B^0(f, X, C)$. In the case $C = \mathbb{R}^m_+$, Benson ɛ-proper optimality is equivalent to the following notion of Geoffrion ɛ-proper optimality (see [?,?], for example).

Definition 2.4 Given $\epsilon \in \mathbb{R}^m_+$, a point $x_0 \in X$ is called an ɛ-Geoffrion proper solution if $x_0 \in S^\epsilon(f, X)$ and if there exists a number $M > 0$ such that for all $(i, x) \in I \times X$ satisfying $f_i(x) < f_i(x_0) - \epsilon_i$, there exists an index $j \in I$ such that $f_j(x_0) - \epsilon_j < f_j(x)$ and
$$\frac{f_i(x_0) - f_i(x) - \epsilon_i}{f_j(x) - f_j(x_0) + \epsilon_j} \leq M.$$
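Since the motivation above is algorithms that return only a finite approximation of the Pareto set, it may help to see how Definition 2.1 can be checked on such a finite collection of objective vectors. The following Python sketch is our own illustration and not code from the paper; the function name and the array encoding of the image set are assumptions made purely for the example. (The trade-off condition of Definition 2.4 can be checked in the same brute-force manner; see the sketch given after Example 4.3 below.)

```python
import numpy as np

def is_eps_pareto(F, k, eps, weak=False):
    """Check Definition 2.1 for row k of a finite image set F (shape (N, m)).

    F[k] plays the role of f(x*), eps is a nonnegative vector of length m.
    With weak=True the weak eps-Pareto condition is tested instead, i.e. no
    row x may satisfy f(x) + eps - f(x*) in -int(R^m_+).
    """
    d = np.asarray(F, float) + np.asarray(eps, float) - np.asarray(F, float)[k]
    if weak:
        bad = (d < 0).all(axis=1)                          # in -int(R^m_+)
    else:
        bad = (d <= 0).all(axis=1) & (d < 0).any(axis=1)   # in -R^m_+ \ {0}
    return not bad.any()

# For instance, the four image points used later in Example 4.3 are all
# 0-Pareto optimal:
F = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0.577, 0.577, 0.577]]
print(all(is_eps_pareto(F, k, [0, 0, 0]) for k in range(4)))   # True
```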

Geoffrion ɛ-proper optimality says that the trade-offs between objective functions at two points are bounded. It is interesting to ask whether the trade-offs are bounded above uniformly over all points. This would mean that the same $M$ works for all the Geoffrion ɛ-proper solutions, which may not always be the case unless the set of all trade-off bounds is bounded above. We now state a practical version of the above definition, which was introduced in Shukla et al. [?].

Definition 2.5 Given $\epsilon \in \mathbb{R}^m_+$ and a scalar $\hat{M} > 0$, a point $x_0 \in X$ is called an $(\hat{M}, \epsilon)$-Geoffrion proper solution if $x_0 \in S^\epsilon(f, X)$ and for all $(i, x) \in I \times X$ satisfying $f_i(x) < f_i(x_0) - \epsilon_i$, there exists an index $j \in I$ such that $f_j(x_0) - \epsilon_j < f_j(x)$ and
$$\frac{f_i(x_0) - f_i(x) - \epsilon_i}{f_j(x) - f_j(x_0) + \epsilon_j} \leq \hat{M}.$$
Given $\hat{M} > 0$, we denote the set of all $(\hat{M}, \epsilon)$-Geoffrion proper and ɛ-Geoffrion proper solutions by $G_{\hat{M}, \epsilon}(f, X)$ and $G_\epsilon(f, X)$ respectively. For ɛ = 0, the sets of exact $\hat{M}$-Geoffrion proper and Geoffrion proper solutions are denoted by $G_{\hat{M}}(f, X)$ and $G(f, X)$ respectively.

Since in this article we are dealing with vector-valued objective functions, we need to understand the notions of continuity and boundedness for vector-valued functions.

Definition 2.6 Let $C$ be a closed, convex and pointed cone with non-empty interior and let $f : U \to \mathbb{R}^m$, where $U$ is a non-empty subset of $\mathbb{R}^n$. The function $f$ is $C$-bounded below if there exists $y \in \mathbb{R}^m$ such that $f(x) - y \in C$ for all $x \in U$. Let $c \in \mathrm{int}(C)$; the function $f$ is $(c, C)$-lower semicontinuous if for all $t \in \mathbb{R}$ the set $\{x \in U : tc - f(x) \in C\}$ is closed.

We now state the Ekeland variational principle for vector-valued functions (see [?]), which is going to play an important role in this article.

Theorem 2.7 Let $C$ be a closed, convex and pointed cone with non-empty interior, let $c_0 \in \mathrm{int}(C)$, and let $f : U \to \mathbb{R}^m$ be $(c_0, C)$-lower semicontinuous and $C$-bounded below. Given $\varepsilon > 0$, let $x_0 \in U$ satisfy
$$f(x) + \varepsilon c_0 - f(x_0) \notin -C \setminus \{0\}, \quad \forall x \in U. \qquad (2.1)$$
Then there exists $\bar{x}_0 = \bar{x}_0(\varepsilon) \in U$ such that
(a) $f(x) + \varepsilon c_0 - f(\bar{x}_0) \notin -\mathrm{int}(C)$ for all $x \in U \setminus \{\bar{x}_0\}$,
(b) $\|\bar{x}_0 - x_0\| \leq \sqrt{\varepsilon}$,
(c) $f(x) + \sqrt{\varepsilon}\, \|\bar{x}_0 - x\|\, c_0 - f(\bar{x}_0) \notin -\mathrm{int}(C)$ for all $x \in U \setminus \{\bar{x}_0\}$.

2.1 Tools from non-smooth analysis

In this article we rely on two major tools from non-smooth analysis, namely the subdifferential of a convex function and the Clarke subdifferential of a locally Lipschitz function. Though these notions are very well known in the optimization community, we provide the definitions for completeness. We shall, however, restrict ourselves to the class of functions considered here, namely finite-valued functions on $\mathbb{R}^n$.

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function. The subdifferential of $f$ at $x$ is the set of vectors in $\mathbb{R}^n$ given by
$$\partial f(x) = \{v \in \mathbb{R}^n : f(y) - f(x) \geq \langle v, y - x \rangle \ \ \forall y \in \mathbb{R}^n\}.$$
The subdifferential is a non-empty, convex and compact set for every $x \in \mathbb{R}^n$. The subdifferential is also deeply linked with the notion of the directional derivative of a convex function. The directional derivative of a convex function at a given $x$ in the direction $h$ is given by
$$f'(x, h) = \lim_{\lambda \downarrow 0} \frac{f(x + \lambda h) - f(x)}{\lambda}.$$
This directional derivative exists for each $x$ and in each direction $h$, and we have
$$\partial f(x) = \{v \in \mathbb{R}^n : f'(x, h) \geq \langle v, h \rangle \ \ \forall h \in \mathbb{R}^n\}.$$
Thus each of these objects can be recovered from the other. The most natural question to ask is whether this generalized notion of derivative has properties like the usual derivative of calculus. We begin with the most fundamental one, the sum rule. Let $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}$ be convex functions. Then
$$\partial(f + g)(x) = \partial f(x) + \partial g(x). \qquad (2.2)$$
For more details on subdifferentials of convex functions see for example [1]. It is important to note that a point $x_0$ is a global minimum of $f$ on $\mathbb{R}^n$ if and only if $0 \in \partial f(x_0)$. Since the subdifferential is a generalized version of the derivative, it has some limitations. The ε-subdifferential is a relaxed version of the subdifferential, which is a very useful tool in convex analysis and optimization. We begin by defining the ε-subdifferential of a convex function.

Definition 2.8 Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function and $\varepsilon \geq 0$. The ε-subdifferential of $f$ at $x$ is given by
$$\partial_\varepsilon f(x) = \{v \in \mathbb{R}^n : f(y) - f(x) \geq \langle v, y - x \rangle - \varepsilon \ \ \forall y \in \mathbb{R}^n\}.$$

The elements of $\partial_\varepsilon f(x)$ are called ε-gradients of $f$ at $x$, and $\partial_\varepsilon f(x) \neq \emptyset$ for all $x \in \mathbb{R}^n$. For instance, for $f(x) = x^2$ on $\mathbb{R}$ a direct computation gives $\partial_\varepsilon f(x) = [2x - 2\sqrt{\varepsilon},\, 2x + 2\sqrt{\varepsilon}]$. Property (2.2) holds for the ε-subdifferential as well: for convex real-valued functions $f, g$ and for $\varepsilon, \varepsilon_1, \varepsilon_2 \geq 0$ we have
$$\partial_\varepsilon(f + g)(x) = \bigcup_{\varepsilon_1 + \varepsilon_2 = \varepsilon} \big(\partial_{\varepsilon_1} f(x) + \partial_{\varepsilon_2} g(x)\big).$$
A point $x_0$ is called an ε-minimizer of $f$ on $\mathbb{R}^n$ if $f(y) \geq f(x_0) - \varepsilon$ for all $y \in \mathbb{R}^n$. Thus $x_0$ is an ε-minimizer of $f$ on $\mathbb{R}^n$ if and only if $0 \in \partial_\varepsilon f(x_0)$.

The above subdifferential is defined only for convex functions, so the obvious question is what can be done for non-convex functions. We now discuss the subdifferential of a non-convex function. It is the relation between the subdifferential and the directional derivative noted above that becomes the key to developing the notion of a subdifferential for a locally Lipschitz function.

A function $f : \mathbb{R}^n \to \mathbb{R}$ is Lipschitz around $x \in \mathbb{R}^n$ if there exist a neighbourhood $U_x$ of $x$ and a constant $L_x \geq 0$, called the Lipschitz constant of $f$ at $x$, such that
$$|f(y) - f(z)| \leq L_x \|y - z\|, \quad \forall y, z \in U_x.$$
A function $f$ is said to be locally Lipschitz if $f$ is Lipschitz around $x$ for every $x \in \mathbb{R}^n$. The Clarke directional derivative of a locally Lipschitz function $f$ at $x$ in the direction $h \in \mathbb{R}^n$ is given by
$$f^\circ(x, h) = \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + th) - f(y)}{t}.$$
The Clarke subdifferential of $f$ at $x \in \mathbb{R}^n$ is given by
$$\partial f(x) = \{\xi \in \mathbb{R}^n : f^\circ(x, h) \geq \langle \xi, h \rangle \ \ \forall h \in \mathbb{R}^n\}.$$
For each $x \in \mathbb{R}^n$, the set $\partial f(x)$ is non-empty, convex and compact. It is important to note that when the function $f$ is convex, the Clarke subdifferential coincides with the convex subdifferential for all $x \in \mathbb{R}^n$. As for the subdifferential of a convex function, the Clarke subdifferential has a sum rule, but it gives only a one-sided containment: for two given locally Lipschitz functions $f, g$ we have
$$\partial(f + g)(x) \subseteq \partial f(x) + \partial g(x).$$
Let us mention some important properties of the Clarke subdifferential which will be used in this article (for proofs see [?]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz function. Then
(i) for each given $x \in \mathbb{R}^n$, the function $f^\circ(x, h)$ is convex and positively homogeneous in $h$;
(ii) $f^\circ(x, h)$ is an upper semicontinuous function of $(x, h)$;
(iii) if $x_0 \in \mathbb{R}^n$ is a local minimum of $f$ over $\mathbb{R}^n$, then $0 \in \partial f(x_0)$.

3 Approximate KKT conditions

We have already explained in the previous section the importance of approximate solutions of a multiobjective optimization problem from a practical point of view. Thus it is natural to ask whether these approximate solutions satisfy some kind of approximate KKT condition. In the literature there are several approaches to approximate KKT conditions (see for example [?], [?], [?]). In this section we begin by defining a notion of approximate KKT points which suits very well the purpose of convex vector optimization problems. This is the notion of modified ɛ-KKT points, which is motivated by a similar notion defined in Dutta et al. [?] for scalar optimization problems and in Durea et al. [?] for vector optimization. The next natural question is the following: under what assumptions on the problem data does an ɛ-weak Pareto point satisfy the approximate KKT conditions, and under what assumptions can the converse also be established? We shall focus on these questions in this section. Throughout our discussion in this section we consider the $m$ objective functions $f_1, f_2, \ldots, f_m$ to be locally Lipschitz. We shall not repeat this assumption in the statements of our results.

Definition 3.1 A feasible point $x_0 \in X$ is said to be a modified ε-KKT point for a given $\varepsilon \in \mathbb{R}_+$ if there exists $x_\varepsilon$ such that $\|x_0 - x_\varepsilon\| \leq \sqrt{\varepsilon}$ and there exist $u_i \in \partial f_i(x_\varepsilon)$ for $i \in I$, $v_r \in \partial g_r(x_\varepsilon)$ for $r \in L$, and vectors $\lambda \in \mathbb{R}^m_+$ with $\|\lambda\| = 1$ and $\mu \in \mathbb{R}^l_+$ such that
$$\Big\| \sum_{i \in I} \lambda_i u_i + \sum_{r \in L} \mu_r v_r \Big\| \leq \sqrt{\varepsilon}, \qquad (3.1)$$
$$\sum_{r \in L} \mu_r g_r(x_0) \geq -\varepsilon. \qquad (3.2)$$

The following natural constraint qualification appears, for example, in Rockafellar and Wets [?]. It is called the basic constraint qualification (BCQ for short).

Definition 3.2 The problem MOP satisfies BCQ at $\bar{x}$ if there exists no $p \in \mathbb{R}^l_+ \setminus \{0\}$ such that $0 \in \sum_{r=1}^{l} p_r \partial g_r(\bar{x})$.

Theorem 3.3 Consider the problem MOP and let $\{\varepsilon_k\}$ be a decreasing sequence of positive real numbers such that $\varepsilon_k \to 0$ as $k \to \infty$. Let $\{x^k\}$ be a sequence of feasible points of MOP with $x^k \to x_0$ as $k \to \infty$. Assume that for each $k$, $x^k$ is a modified $\varepsilon_k$-KKT point of MOP. Further assume that the basic constraint qualification holds at $x_0$. Then $x_0$ is a KKT point of MOP.

Proof. By our assumption, for each $k$, $x^k$ is a modified $\varepsilon_k$-KKT point, so from Definition 3.1 there exists $\hat{x}^k$ such that $\|x^k - \hat{x}^k\| \leq \sqrt{\varepsilon_k}$ and there exist $u_i^k \in \partial f_i(\hat{x}^k)$ for all $i \in I$, $v_r^k \in \partial g_r(\hat{x}^k)$ for all $r \in L$, and vectors $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\| = 1$ and $\mu^k \in \mathbb{R}^l_+$ such that
$$\Big\| \sum_{i \in I} \lambda_i^k u_i^k + \sum_{r \in L} \mu_r^k v_r^k \Big\| \leq \sqrt{\varepsilon_k}, \qquad (3.3)$$
$$\sum_{r \in L} \mu_r^k g_r(x^k) \geq -\varepsilon_k. \qquad (3.4)$$
First we show that $\{\mu^k\}$ is bounded. On the contrary, assume that $\{\mu^k\}$ is unbounded; without loss of generality, $\|\mu^k\| \to \infty$ as $k \to \infty$. From (3.3) we have
$$\Big\| \sum_{i \in I} \frac{\lambda_i^k}{\|\mu^k\|}\, u_i^k + \sum_{r \in L} \frac{\mu_r^k}{\|\mu^k\|}\, v_r^k \Big\| \leq \frac{\sqrt{\varepsilon_k}}{\|\mu^k\|}. \qquad (3.5)$$
Let $p^k = \mu^k / \|\mu^k\| \in \mathbb{R}^l_+$. Since $\|p^k\| = 1$, $\{p^k\}$ is a bounded sequence, so by the Bolzano-Weierstrass theorem there exists a subsequence of $\{p^k\}$ which converges to some $\hat{p} \in \mathbb{R}^l_+$ with $\|\hat{p}\| = 1$. In fact we need not relabel $\{p^k\}$ and may assume, without loss of generality, that $p^k \to \hat{p}$. This shows that
$$\frac{\mu^k}{\|\mu^k\|} = p^k \to \hat{p} \quad \text{as } k \to \infty. \qquad (3.6)$$
By our assumption the $f_i$'s and $g_r$'s are locally Lipschitz functions, which implies that their Clarke subdifferentials are locally bounded. Hence the sequences $u_i^k \in \partial f_i(\hat{x}^k)$ and $v_r^k \in \partial g_r(\hat{x}^k)$ are bounded for all $i \in I$ and $r \in L$.

Thus without loss of generality we can assume that for each $i \in I$ and $r \in L$ there exist $\hat{u}_i$ and $\hat{v}_r$ such that $u_i^k \to \hat{u}_i$ and $v_r^k \to \hat{v}_r$ as $k \to \infty$. Since the Clarke subdifferential maps of the $f_i$'s and $g_r$'s have closed graphs and $\hat{x}^k \to x_0$ (because $\|x^k - \hat{x}^k\| \leq \sqrt{\varepsilon_k} \to 0$ and $x^k \to x_0$), it follows that $\hat{u}_i \in \partial f_i(x_0)$ for all $i \in I$ and $\hat{v}_r \in \partial g_r(x_0)$ for all $r \in L$. Since the sequences $\{u_i^k\}$ and $\{\lambda^k\}$ are bounded for all $i \in I$, we have for all $i \in I$,
$$\frac{\lambda_i^k}{\|\mu^k\|}\, u_i^k \to 0 \quad \text{as } k \to \infty. \qquad (3.7)$$
Since $\{\varepsilon_k\}$ is a decreasing sequence with $\varepsilon_k \to 0$, the sequence $\{\sqrt{\varepsilon_k}\}$ is bounded and hence
$$\frac{\sqrt{\varepsilon_k}}{\|\mu^k\|} \to 0 \quad \text{as } k \to \infty. \qquad (3.8)$$
Letting $k \to \infty$ in (3.5) and using (3.6), (3.7) and (3.8), we get $\big\| \sum_{r=1}^{l} \hat{p}_r \hat{v}_r \big\| \leq 0$. Hence
$$\sum_{r \in L} \hat{p}_r \hat{v}_r = 0,$$
where $\hat{p} \in \mathbb{R}^l_+$ with $\|\hat{p}\| = 1$ and $\hat{v}_r \in \partial g_r(x_0)$ for $r \in L$. This contradicts the assumption that BCQ holds at $x_0$. Therefore the sequence $\{\mu^k\}$ is bounded, and without loss of generality there exists $\hat{\mu} \in \mathbb{R}^l_+$ such that $\mu^k \to \hat{\mu}$ as $k \to \infty$. Now, as $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\| = 1$, the sequence $\{\lambda^k\}$ has a limit point, say $\hat{\lambda}$, with $\|\hat{\lambda}\| = 1$; without loss of generality we can assume that $\lambda^k \to \hat{\lambda}$. Letting $k \to \infty$ in (3.3), we get $\big\| \sum_{i=1}^{m} \hat{\lambda}_i \hat{u}_i + \sum_{r=1}^{l} \hat{\mu}_r \hat{v}_r \big\| \leq 0$. Thus
$$\sum_{i \in I} \hat{\lambda}_i \hat{u}_i + \sum_{r \in L} \hat{\mu}_r \hat{v}_r = 0, \qquad (3.9)$$
where $\hat{\lambda} \in \mathbb{R}^m_+$ with $\|\hat{\lambda}\| = 1$, $\hat{\mu} \in \mathbb{R}^l_+$, $\hat{u}_i \in \partial f_i(x_0)$ for $i \in I$ and $\hat{v}_r \in \partial g_r(x_0)$ for $r \in L$. Now, since the $x^k$ are feasible points, $g_r(x^k) \leq 0$ for all $r \in L$ and all $k$. Using the continuity of the $g_r$'s, we conclude that $g_r(x_0) \leq 0$ for all $r \in L$; hence $x_0$ is a feasible point of MOP. Since $\hat{\mu}_r \geq 0$ for all $r \in L$, feasibility gives $\sum_{r=1}^{l} \hat{\mu}_r g_r(x_0) \leq 0$, while letting $k \to \infty$ in (3.4) gives $\sum_{r=1}^{l} \hat{\mu}_r g_r(x_0) \geq 0$. Therefore
$$\sum_{r \in L} \hat{\mu}_r g_r(x_0) = 0. \qquad (3.10)$$
Thus (3.9) and (3.10) together imply that $x_0$ is a KKT point of MOP.

An interesting question one might ask is the following: does the previous theorem have some kind of converse? To be more precise, if we have a sequence of iterates from the feasible set converging to a weak Pareto minimizer, do these iterates satisfy some kind of approximate KKT condition? The answer, surprisingly, turns out to be affirmative, and thus it strengthens the whole premise of studying approximate versions of the KKT conditions for multiobjective optimization problems.
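Definition 3.1 is stated with Clarke subdifferentials, but when the data is differentiable these reduce to gradients, and the condition becomes straightforward to test numerically. The sketch below is our own illustration (the function name and the convention of passing precomputed gradients and multipliers are assumptions, not part of the paper); it merely verifies the three requirements of Definition 3.1 for a supplied candidate pair $(x_0, x_\varepsilon)$ and supplied multipliers.

```python
import numpy as np

def is_modified_eps_kkt(x0, x_eps, grads_f, grads_g, g_at_x0, lam, mu, eps, tol=1e-10):
    """Numerical check of Definition 3.1 for smooth data.

    grads_f : list of gradient vectors of f_i at x_eps
    grads_g : list of gradient vectors of g_r at x_eps
    g_at_x0 : values g_r(x0)
    lam, mu : candidate multipliers, lam >= 0 with ||lam|| = 1 and mu >= 0
    """
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    if np.any(lam < 0) or np.any(mu < 0) or abs(np.linalg.norm(lam) - 1.0) > tol:
        return False
    # ||x0 - x_eps|| <= sqrt(eps)
    if np.linalg.norm(np.asarray(x0, float) - np.asarray(x_eps, float)) > np.sqrt(eps) + tol:
        return False
    # condition (3.1): || sum_i lam_i u_i + sum_r mu_r v_r || <= sqrt(eps)
    stationarity = sum(l * np.asarray(u, float) for l, u in zip(lam, grads_f)) \
                 + sum(m * np.asarray(v, float) for m, v in zip(mu, grads_g))
    if np.linalg.norm(stationarity) > np.sqrt(eps) + tol:
        return False
    # condition (3.2): sum_r mu_r g_r(x0) >= -eps
    return float(mu @ np.asarray(g_at_x0, float)) >= -eps - tol
```

In practice the multipliers are not given; one would search for them, for example by minimizing the norm in (3.1) over $\lambda \geq 0$, $\|\lambda\| = 1$, $\mu \geq 0$, and then checking (3.2), which is the spirit in which such conditions are used as stopping criteria.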

Theorem 3.4 Consider the problem MOP where $g_1, \ldots, g_l$ are convex functions. Let $x_0$ be a local weak Pareto minimizer and let the Slater condition hold. Let $\{\varepsilon_k\}$ be a decreasing sequence of positive real numbers such that $\varepsilon_k \to 0$ as $k \to \infty$. Then there exist a feasible sequence $\{x^k\}$ with $x^k \to x_0$ and a subsequence $\{y^k\}$ of $\{x^k\}$ such that for each $y^k$ there exist $\hat{y}^k$ with $\|y^k - \hat{y}^k\| \leq \sqrt{\varepsilon_k}$, $u_i^k \in \partial f_i(\hat{y}^k)$ for all $i \in I$, $v_r^k \in \partial g_r(\hat{y}^k)$ for all $r \in L$, and $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\| = 1$, $\mu^k \in \mathbb{R}^l_+$ such that
$$\Big\| \sum_{i \in I} \lambda_i^k u_i^k + \sum_{r \in L} \mu_r^k v_r^k \Big\| \leq \sqrt{\varepsilon_k}, \qquad (3.11)$$
$$\sum_{r \in L} \mu_r^k g_r(\hat{y}^k) = 0. \qquad (3.12)$$

Proof. Since $x_0$ is a local weak Pareto minimizer of MOP, by Definition 2.2 there exists $\delta > 0$ such that for all $x \in V$,
$$f(x) - f(x_0) \notin -\mathrm{int}(\mathbb{R}^m_+), \qquad (3.13)$$
where $V = X \cap B_\delta(x_0)$. Since each $g_r$ is a convex function, the feasible set $X$ is closed and convex; thus $V$ is closed, convex and bounded. Now, since $x_0 \in V$, there exists a sequence $\{x^k\}$ in $X$ with $x^k \to x_0$, and for $k$ sufficiently large $x^k \in V$. Since the $f_i$'s, $i \in I$, are assumed to be locally Lipschitz, it follows that $f_i(x^k) \to f_i(x_0)$ as $k \to \infty$ for all $i \in I$. Now consider $\varepsilon_1 > 0$. Since $f_i(x^k) \to f_i(x_0)$ for all $i \in I$, for each $i \in I$ there exists a natural number $N_1^i$ such that for all $k > N_1^i$, $|f_i(x^k) - f_i(x_0)| < \varepsilon_1$. Choose $N_1 = \max\{N_1^1, N_1^2, \ldots, N_1^m\}$. Then for all $k > N_1$ and $i \in I$, $|f_i(x^k) - f_i(x_0)| < \varepsilon_1$. Set $y^1 = x^{N_1 + 1}$; then $|f_i(y^1) - f_i(x_0)| < \varepsilon_1$. This means that for all $i \in I$,
$$f_i(y^1) < f_i(x_0) + \varepsilon_1. \qquad (3.14)$$
Now (3.13) can be rephrased as
$$f(x) - f(x_0) \in W, \quad \forall x \in V, \qquad (3.15)$$
where $W = \mathbb{R}^m \setminus (-\mathrm{int}\,\mathbb{R}^m_+)$. It is important to note that $W$ is a closed cone but a non-convex one; further, one has $W + \mathbb{R}^m_+ \subseteq W$. Using (3.14) we can write
$$f(x_0) + \varepsilon_1 e - f(y^1) \in \mathrm{int}(\mathbb{R}^m_+). \qquad (3.16)$$
Adding (3.15) and (3.16) we have
$$f(x) + \varepsilon_1 e - f(y^1) \in \mathrm{int}(\mathbb{R}^m_+) + W \subseteq W, \quad \forall x \in V. \qquad (3.17)$$
Hence for $x \in V$, $f(x) + \varepsilon_1 e - f(y^1) \in W$, which shows that for $x \in V$, $f(x) + \varepsilon_1 e - f(y^1) \notin -\mathrm{int}(\mathbb{R}^m_+)$. Hence $y^1$ is a weak $\varepsilon_1$-Pareto minimizer of MOP over $V$. In a similar way, for each $k \in \mathbb{N}$ and $\varepsilon_k > 0$ we can pick an element $y^k$ of $\{x^k\}$ such that $y^k$ is a weak $\varepsilon_k$-Pareto minimizer of MOP over $V$, i.e. for all $x \in V$,
$$f(x) + \varepsilon_k e - f(y^k) \notin -\mathrm{int}(\mathbb{R}^m_+). \qquad (3.18)$$
Since each $f_i$ is locally Lipschitz, we conclude that $f$ is $(e, \mathbb{R}^m_+)$-lower semicontinuous and $\mathbb{R}^m_+$-bounded below on $V$. We can now apply the vector Ekeland variational principle (Theorem 2.7).

For each $y^k$ there exists $\hat{y}^k \in V$ such that for all $x \in V \setminus \{\hat{y}^k\}$:
(a) $f(x) + \varepsilon_k e - f(\hat{y}^k) \notin -\mathrm{int}(\mathbb{R}^m_+)$,
(b) $\|\hat{y}^k - y^k\| \leq \sqrt{\varepsilon_k}$,
(c) $f(x) + \sqrt{\varepsilon_k}\, \|\hat{y}^k - x\|\, e - f(\hat{y}^k) \notin -\mathrm{int}(\mathbb{R}^m_+)$.
Thus from (c) above we conclude that $\hat{y}^k$ is a weak Pareto minimizer of the problem
$$\min_{x \in V} \ f(x) + \sqrt{\varepsilon_k}\, \|x - \hat{y}^k\|\, e.$$
Now, using the necessary optimality condition for the above multiobjective problem, there exists $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\| = 1$ such that
$$0 \in \sum_{i \in I} \lambda_i^k\, \partial\big(f_i + \sqrt{\varepsilon_k}\, \|\cdot - \hat{y}^k\|\big)(\hat{y}^k) + N_V(\hat{y}^k),$$
where $\partial f_i(\hat{y}^k)$ denotes the Clarke subdifferential of the function $f_i$ at the point $\hat{y}^k$ and $N_V(\hat{y}^k)$ is the normal cone of $V$ at $\hat{y}^k$. Applying the sum rule for the Clarke subdifferential (see Clarke [?]) and using the fact that the subdifferential of the norm function at the origin is the unit ball, we get
$$0 \in \sum_{i \in I} \lambda_i^k\, \partial f_i(\hat{y}^k) + \sqrt{\varepsilon_k}\, B_{\mathbb{R}^n} + N_V(\hat{y}^k). \qquad (3.19)$$
Since $x^k \to x_0$ and $y^k \in \{x^k\}$, for $k$ sufficiently large we can conclude that $y^k \in X \cap B_\delta(x_0)$. Further, from (b) above we see that $\hat{y}^k \in B_{\sqrt{\varepsilon_k}}(y^k)$. Choosing $k$ large enough so that $B_{\sqrt{\varepsilon_k}}(y^k) \subseteq B_\delta(x_0)$ proves that $\hat{y}^k \in B_\delta(x_0)$. Now, since $X \cap B_\delta(x_0) \neq \emptyset$ (as $x_0 \in X$ and $x_0 \in B_\delta(x_0)$), we conclude that $\mathrm{ri}(X \cap B_\delta(x_0)) \neq \emptyset$. Further, using Theorem 6.5 in Rockafellar [?], we conclude that $\mathrm{ri}(X \cap B_\delta(x_0)) = \mathrm{ri}(X) \cap \mathrm{ri}(B_\delta(x_0))$. Using Theorem 23.8 in Rockafellar [?] we have
$$N_V(\hat{y}^k) = N_{X \cap B_\delta(x_0)}(\hat{y}^k) = N_X(\hat{y}^k) + N_{B_\delta(x_0)}(\hat{y}^k).$$
As $\hat{y}^k \in \mathrm{int}\, B_\delta(x_0)$, we see that $N_{B_\delta(x_0)}(\hat{y}^k) = \{0\}$. Thus $N_V(\hat{y}^k) = N_X(\hat{y}^k)$, and we can rewrite (3.19) as
$$0 \in \sum_{i \in I} \lambda_i^k\, \partial f_i(\hat{y}^k) + \sqrt{\varepsilon_k}\, B_{\mathbb{R}^n} + N_X(\hat{y}^k). \qquad (3.20)$$
Further, as the Slater condition holds, using the corresponding corollary in Rockafellar [?] we conclude that
$$N_X(\hat{y}^k) = \Big\{ \sum_{r=1}^{l} \mu_r^k v_r^k : v_r^k \in \partial g_r(\hat{y}^k),\ \mu_r^k \geq 0,\ \mu_r^k g_r(\hat{y}^k) = 0,\ r \in L \Big\}.$$
Using this form of $N_X(\hat{y}^k)$ and (3.20), we conclude that there exist $u_i^k \in \partial f_i(\hat{y}^k)$ for all $i \in I$, $v_r^k \in \partial g_r(\hat{y}^k)$ for all $r \in L$ and scalars $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\| = 1$, $\mu^k \in \mathbb{R}^l_+$ such that (3.11) and (3.12) hold. Thus the theorem follows.

Remark 3.5 In the above theorem the objective functions are taken to be locally Lipschitz only. If the objective functions $f_i$ are convex as well, then we have a more concrete result than the one above. To prove this result we need the following lemma.

Lemma 3.6 Consider the problem MOP with each of the objective functions $f_1, f_2, \ldots, f_m$ convex. Then every local weak Pareto minimizer is a global weak Pareto minimizer.

Proof. Let $x_0$ be a local weak Pareto minimizer of MOP, i.e. there exists $\delta > 0$ such that
$$f(x) - f(x_0) \notin -\mathrm{int}(\mathbb{R}^m_+), \quad \forall x \in X \cap B_\delta(x_0).$$
On the contrary, assume that $x_0$ is not a global weak Pareto minimizer, i.e. there exists $x^* \in X$ such that
$$f(x^*) - f(x_0) \in -\mathrm{int}(\mathbb{R}^m_+), \qquad (3.21)$$
which implies $f_i(x^*) < f_i(x_0)$ for all $i \in I$. Since each $f_i$ is a convex function, we have for $t \in (0, 1)$, using (3.21),
$$f_i((1 - t)x^* + t x_0) \leq (1 - t) f_i(x^*) + t f_i(x_0) < f_i(x_0), \quad i \in I. \qquad (3.22)$$
Now we can always choose $t \in (0, 1)$ such that $(1 - t)x^* + t x_0 \in B_\delta(x_0)$. For this particular $t$, (3.22) contradicts the fact that $x_0$ is a local weak Pareto minimizer.

Theorem 3.7 Consider the problem MOP with each of $f_1, \ldots, f_m$ and $g_1, \ldots, g_l$ convex and locally Lipschitz. Let $x_0$ be a local weak Pareto minimizer and let the Slater condition hold. Let $\{\varepsilon_k\}$ be a decreasing sequence of positive real numbers such that $\varepsilon_k \to 0$ as $k \to \infty$. Then there exist a feasible sequence $\{x^k\}$ with $x^k \to x_0$ and a subsequence $\{y^k\}$ of $\{x^k\}$ such that each $y^k$ is a modified $\varepsilon_k$-KKT point.

Proof. Follow the proof of the above theorem up to equation (3.18), which shows that $y^k$ is a weak $\varepsilon_k$-Pareto minimizer of MOP over $V = X \cap B_\delta(x_0)$ with $\delta > 0$, i.e. $y^k$ is a local weak $\varepsilon_k$-Pareto minimizer of MOP. Using the convexity assumption and Lemma 3.6, we conclude that $y^k$ is a weak $\varepsilon_k$-Pareto minimizer of MOP, i.e.
$$f(x) + \varepsilon_k e - f(y^k) \notin -\mathrm{int}(\mathbb{R}^m_+), \quad \forall x \in X.$$
Now, using Theorem 3.6 of Durea et al. [?], we conclude that $y^k$ is a modified $\varepsilon_k$-KKT point.

4 Approximate $\hat{M}$-Geoffrion solutions, saddle points and KKT conditions

In this section we analyze saddle point conditions and KKT conditions for approximate $\hat{M}$-Geoffrion solutions, which give a complete characterization of the considered proper points. We also discuss the correspondence between $\hat{M}$-Geoffrion solutions and solutions of a weighted-sum scalar problem. Before discussing these results, we observe that there is a characterization of Geoffrion properly efficient points by a system of inequalities. For given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M} > 0$, consider $x_0 \in X$ and $i \in I$, and define the following system of inequalities $(Q_i)$ in the variable $x \in X$:
$$-f_i(x_0) + f_i(x) + \epsilon_i < 0,$$
$$-f_i(x_0) + f_i(x) + \epsilon_i < \hat{M}\big(f_j(x_0) - f_j(x) - \epsilon_j\big), \quad j \in I \setminus \{i\}.$$

Proposition 4.1 For given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M} > 0$, consider the problem MOP. Then a point $x_0 \in G_{\hat{M}, \epsilon}(f, X)$ if and only if for each $i \in I$ the system $Q_i$ is inconsistent.

The proof of the above proposition can be found in [?]. Before discussing the saddle point conditions for approximate $\hat{M}$-Geoffrion proper points, let us discuss the correspondence between $(\hat{M}, \epsilon)$-Geoffrion proper solutions and solutions of a weighted-sum scalar problem. This correspondence plays a pivotal role in proving the saddle point conditions for $(\hat{M}, \epsilon)$-Geoffrion proper solutions. To this end, for $s^* \in \mathbb{R}^m$, let the weighted-sum scalar problem $P(s^*)$ be defined as follows:
$$\min_{x \in X} \ \langle s^*, f(x) \rangle. \qquad (P(s^*))$$

Theorem 4.2 For given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M} > 0$, let $x_0$ be an $\langle s^*, \epsilon \rangle$-minimum of $P(s^*)$, where $s^* \in \mathrm{int}(\mathbb{R}^m_+)$. If $\hat{M} \geq (m - 1) \max_{i,j} \{s_i^*/s_j^*\}$, then $x_0 \in G_{\hat{M}, \epsilon}(f, X)$.

Proof. Let us assume on the contrary that $x_0 \notin G_{\hat{M}, \epsilon}(f, X)$. Then from Proposition 4.1 we obtain an $i \in I$ such that $Q_i$ is consistent. Without loss of generality, we assume that $i = 1$. Thus the system $Q_1$, written as
$$-f_1(x_0) + f_1(x) + \epsilon_1 < 0,$$
$$-f_1(x_0) + f_1(x) + \epsilon_1 < \hat{M}\big(f_j(x_0) - f_j(x) - \epsilon_j\big), \quad j \in I \setminus \{1\},\ x \in X,$$
has a solution. As $\hat{M} \geq (m - 1)\, s_j^*/s_1^*$ for each $j \in I \setminus \{1\}$, the consistency of the system $Q_1$ implies that
$$s_1^*\big(-f_1(x_0) + f_1(x) + \epsilon_1\big) < s_j^*(m - 1)\big(f_j(x_0) - f_j(x) - \epsilon_j\big), \quad j \in I \setminus \{1\}.$$
Summing the above inequality over all $j \in I \setminus \{1\}$ and dividing by $m - 1$, we obtain
$$s_1^*\big(-f_1(x_0) + f_1(x) + \epsilon_1\big) < \sum_{j=2}^{m} s_j^*\big(f_j(x_0) - f_j(x) - \epsilon_j\big),$$
which further implies
$$\langle s^*, f(x_0) \rangle - \langle s^*, f(x) \rangle - \langle s^*, \epsilon \rangle > 0. \qquad (4.1)$$
But (4.1) contradicts the $\langle s^*, \epsilon \rangle$-minimality of $x_0$ for $P(s^*)$. Therefore the theorem follows.

All the solutions from $G_{\hat{M}, \epsilon}(f, X)$ satisfy an upper trade-off bound of $\hat{M}$ (in the sense of Geoffrion proper efficiency). Smaller bounds are more relevant to the decision maker, as they provide tighter trade-offs among the criteria values. It would therefore be of interest to find the minimum $M$ such that $G_{M, \epsilon}(f, X)$ is non-empty. Under the conditions of Theorem 4.2, the minimum admissible value of $\hat{M}$ equals $m - 1$, and this occurs when all components of $s^*$ are identical. The next examples show that if the conditions of Theorem 4.2 are not satisfied, then even smaller values of $\hat{M}$ are possible. This is the case with non-convex or discrete multicriteria optimization problems. In the following examples we consider ɛ = 0 and find $\hat{M}$-Geoffrion proper points.

Example 4.3 Let $X := \{(0, 0, 1), (0, 1, 0), (1, 0, 0), (1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3})\}$, $m = 3$, and let $f$ be the identity mapping. The sets $G_2(f, X)$ and $G_1(f, X)$ can easily be computed as follows:
$$G_2(f, X) = \{(0, 0, 1), (0, 1, 0), (1, 0, 0), (1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3})\},$$
$$G_1(f, X) = \{(0, 0, 1), (0, 1, 0), (1, 0, 0)\}.$$
Moreover, $G_M(f, X) = \emptyset$ for $M < 1$. Therefore the minimum value of $M$ is 1.
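The sets in Example 4.3 can be verified by brute force: for each point of the finite image set one computes the smallest trade-off bound $M$ for which the condition in Definition 2.4 (with ɛ = 0) holds. The Python sketch below is our own illustrative computation, not code accompanying the paper.

```python
import numpy as np

def geoffrion_bound(F, k):
    """Smallest M for which row k of the finite image set F (rows are f(x))
    satisfies Definition 2.4 with eps = 0; returns np.inf if row k is not
    Pareto optimal within F."""
    F = np.asarray(F, float)
    worst = 0.0
    for x in range(len(F)):
        if x == k:
            continue
        for i in range(F.shape[1]):
            gain = F[k, i] - F[x, i]        # improvement of objective i at x
            if gain <= 0:
                continue
            losses = F[x] - F[k]            # deterioration of the other objectives
            pos = losses > 0
            if not pos.any():               # improvement with no compensating loss
                return np.inf
            worst = max(worst, float((gain / losses[pos]).min()))
    return worst

X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [1/np.sqrt(3)]*3])
print([round(geoffrion_bound(X, k), 3) for k in range(4)])   # [1.0, 1.0, 1.0, 1.366]
```

The computed bounds are 1 for the three unit vectors and $(\sqrt{3} + 1)/2 \approx 1.366$ for the fourth point, which is why $G_1(f, X)$ excludes that point while $G_2(f, X)$ contains all four, in agreement with the example.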

Example 4.4 Let us consider the DTLZ2 test problem from [?]. This test problem has 12 variables, 3 objectives, and a concave efficient front. The set of Pareto-optimal solutions is given by
$$S(f, X) := \big\{ x \in [0, 1]^{12} : f_1(x)^2 + f_2(x)^2 + f_3(x)^2 = 1,\ f_1(x), f_2(x), f_3(x) \geq 0 \big\}.$$
Let $x, y, z \in X := [0, 1]^{12}$ be three Pareto-optimal solutions such that $f(x) = (0, 0, 1)$, $f(y) = (0, 1, 0)$, $f(z) = (1, 0, 0)$, and suppose we wish to check whether $f(x)$ belongs to $G_1(f, X)$ or not. From [?, Lemma 2], we know that $G_1(f, X) = G_1(f, S(f, X))$. Therefore we consider the trade-offs of $f(x)$ with respect to all efficient points, i.e. all points from the set $f(S(f, X))$. Let, for $\delta \in [0, 1]$, $S_\delta := \{x \in S(f, X) : f_3(x) = 1 - \delta\}$. From the definition of $S(f, X)$ it follows that
$$f(S_\delta) := \big\{ (u, v, w) \in \mathbb{R}^3 : w = 1 - \delta,\ u^2 + v^2 = 1 - (1 - \delta)^2,\ u, v \geq 0 \big\}.$$
The minimum value of the Geoffrion $M$ bound, denoted by $\hat{M}$, can now be computed as follows:
$$\hat{M} = \max_{\delta \in (0, 1]} \frac{\delta}{\max_{(u, v, w) \in f(S_\delta)} \max\{u, v\}} = \max_{\delta \in (0, 1]} \frac{\delta}{\sqrt{1 - (1 - \delta)^2}} = 1.$$

The converse of Theorem 4.2 also holds under a convexity assumption on the objective functions and the feasible set. Note that if each $g_r$ is convex, then the feasible set $X$ is a convex set. We have the following result.

Theorem 4.5 Let $f_i$ and $g_r$ be convex functions for each $i \in I$ and $r \in L$. If $x_0 \in G_{\hat{M}, \epsilon}(f, X)$, then there exists $s^* \in \mathrm{int}(\mathbb{R}^m_+)$ such that $x_0$ is an $\langle s^*, \epsilon \rangle$-minimum of $P(s^*)$.

Proof. Let $x_0 \in G_{\hat{M}, \epsilon}(f, X)$. Then, using Proposition 4.1, we obtain that the system $Q_i$ is inconsistent for each $i \in I$. Applying Gordan's theorem of the alternative we conclude, after some rearrangements, that for each $i \in I$ there exist scalars $\lambda_j^i \geq 0$, $j \in I$, with $\sum_{j=1}^{m} \lambda_j^i = 1$, such that for all $x \in X$,
$$f_i(x) + \hat{M} \sum_{j \neq i} \lambda_j^i f_j(x) \ \geq\ f_i(x_0) + \hat{M} \sum_{j \neq i} \lambda_j^i f_j(x_0) - \Big( \epsilon_i + \hat{M} \sum_{j \neq i} \lambda_j^i \epsilon_j \Big).$$

Hence, for all $x \in X$,
$$\sum_{j=1}^{m} \Big( 1 + \hat{M} \sum_{i \neq j} \lambda_j^i \Big) f_j(x) \ \geq\ \sum_{j=1}^{m} \Big( 1 + \hat{M} \sum_{i \neq j} \lambda_j^i \Big) f_j(x_0) - \sum_{j=1}^{m} \Big( 1 + \hat{M} \sum_{i \neq j} \lambda_j^i \Big) \epsilon_j.$$
Setting $s_j^* = 1 + \hat{M} \sum_{i \neq j} \lambda_j^i$, we conclude that $s^* \in \mathrm{int}(\mathbb{R}^m_+)$ and that $x_0$ is an $\langle s^*, \epsilon \rangle$-minimum of $P(s^*)$.

Remark 4.6 Theorem 4.5 can also be proved by noting that each $(\hat{M}, \epsilon)$-Geoffrion proper point is an ɛ-Geoffrion proper point with constant $\hat{M} > 0$; hence, using Theorem 3.15 from [?], we can deduce the above result. Now, if under the convexity assumption on the data we denote the set of $\langle s^*, \epsilon \rangle$-minima of $P(s^*)$ by $\mathrm{Sol}_\epsilon(P(s^*))$, then Theorems 4.2 and 4.5 imply that for a given $\hat{M}$ there exists $s^* \in \mathrm{int}\,\mathbb{R}^m_+$ such that
$$\mathrm{Sol}_\epsilon(P(s^*)) \subseteq G_{\hat{M}, \epsilon}(f, X) \subseteq \bigcup_{s \in \mathrm{int}\,\mathbb{R}^m_+} \mathrm{Sol}_\epsilon(P(s)).$$

Now we come to the main attraction of this section, the saddle point conditions for $(\hat{M}, \epsilon)$-Geoffrion proper solutions. For this study we consider the problem MOP in which each $f_i$, $i \in I$, and each $g_j$, $j \in L$, is a convex function. Whenever the data of the problem is convex, we denote the problem MOP by CMOP. Given $\hat{M} > 0$ and any index $i \in I$, we define the $(\hat{M}, i)$-Lagrangian associated with CMOP as follows:
$$L_i^{\hat{M}}(x, \tau^i, \mu^i) = f_i(x) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x) + \sum_{r \in L} \mu_r^i g_r(x), \qquad (4.2)$$
where $\tau^i = (\tau_1^i, \tau_2^i, \ldots, \tau_m^i) \in S_m$ and $\mu^i = (\mu_1^i, \mu_2^i, \ldots, \mu_l^i) \in \mathbb{R}^l_+$. Here $S_m$ is the unit simplex in $\mathbb{R}^m$, given as
$$S_m = \Big\{ x \in \mathbb{R}^m : 0 \leq x_i \leq 1,\ i \in I,\ \sum_{i \in I} x_i = 1 \Big\}. \qquad (4.3)$$
Our aim here is to show the key role played by the $(\hat{M}, i)$-Lagrangian in analyzing, or rather characterizing, the Geoffrion $(\hat{M}, \epsilon)$-proper solutions.

Theorem 4.7 For given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M} > 0$, consider the problem CMOP and assume that it satisfies the Slater condition, i.e. there exists $\hat{x} \in \mathbb{R}^n$ such that $g_r(\hat{x}) < 0$ for all $r \in L$. If $x_0 \in G_{\hat{M}, \epsilon}(f, X)$, then for each $i$ there exist $\tau^i \in S_m$ and $\mu^i \in \mathbb{R}^l_+$ such that for all $x \in \mathbb{R}^n$ and $\mu \in \mathbb{R}^l_+$,
(i) $L_i^{\hat{M}}(x_0, \tau^i, \mu) - \bar{\epsilon}_i \leq L_i^{\hat{M}}(x_0, \tau^i, \mu^i) \leq L_i^{\hat{M}}(x, \tau^i, \mu^i) + \bar{\epsilon}_i$,
(ii) $\sum_{r \in L} \mu_r^i g_r(x_0) \geq -\bar{\epsilon}_i$,
where $\bar{\epsilon}_i = \epsilon_i + \sum_{j=1, j \neq i}^{m} \tau_j^i \hat{M} \epsilon_j$. Conversely, if $x_0 \in \mathbb{R}^n$ is such that for each $i \in I$ there exists $(\tau^i, \mu^i) \in S_m \times \mathbb{R}^l_+$ such that (i) and (ii) hold, then $x_0 \in G_{M, 2\epsilon}(f, X)$, where $M \geq (1 + \hat{M})(m - 1)$.

Proof. Let $x_0 \in G_{\hat{M}, \epsilon}(f, X)$. Then, using Proposition 4.1, we conclude that for each $i$ the system $Q_i$,
$$-f_i(x_0) + f_i(x) + \epsilon_i < 0,$$
$$-f_i(x_0) + f_i(x) + \epsilon_i < \hat{M}\big(f_j(x_0) - f_j(x) - \epsilon_j\big), \quad j \in I \setminus \{i\},$$
$$g_r(x) \leq 0, \quad r \in L,$$
has no solution $x \in \mathbb{R}^n$. It is easy to observe that the system still has no solution if we replace $g_r(x) \leq 0$ by $g_r(x) < 0$ for all $r \in L$. Now, applying Gordan's theorem of the alternative, there exist $\tau^i = (\tau_1^i, \ldots, \tau_m^i) \in \mathbb{R}^m_+$ and $\mu^i = (\mu_1^i, \ldots, \mu_l^i) \in \mathbb{R}^l_+$ with $(\tau^i, \mu^i) \neq 0$ such that for all $x \in \mathbb{R}^n$,
$$\tau_i^i \big( f_i(x) - f_i(x_0) + \epsilon_i \big) + \sum_{j \neq i} \tau_j^i \big( f_i(x) + \hat{M} f_j(x) - f_i(x_0) - \hat{M} f_j(x_0) + \epsilon_i + \hat{M} \epsilon_j \big) + \sum_{r \in L} \mu_r^i g_r(x) \geq 0.$$
Hence for all $x \in \mathbb{R}^n$,
$$\Big( \sum_{j=1}^{m} \tau_j^i \Big) \big[ f_i(x) - f_i(x_0) + \epsilon_i \big] + \sum_{j \neq i} \big[ \tau_j^i \hat{M} f_j(x) - \tau_j^i \hat{M} f_j(x_0) + \tau_j^i \hat{M} \epsilon_j \big] + \sum_{r \in L} \mu_r^i g_r(x) \geq 0. \qquad (4.4)$$
First we claim that $\tau^i = (\tau_1^i, \ldots, \tau_m^i) \neq 0$. On the contrary, let $\tau^i = 0$; then from (4.4) we have, for all $x \in \mathbb{R}^n$,
$$\sum_{r \in L} \mu_r^i g_r(x) \geq 0.$$
Since the Slater condition holds with $\hat{x} \in \mathbb{R}^n$ and $\mu^i \neq 0$, we have $\sum_{r \in L} \mu_r^i g_r(\hat{x}) < 0$, which contradicts the fact that $\sum_{r \in L} \mu_r^i g_r(x) \geq 0$ for all $x$. Hence $\tau^i \neq 0$, and thus $\sum_{j=1}^{m} \tau_j^i > 0$. Dividing both sides of (4.4) by $\sum_{j=1}^{m} \tau_j^i$, we have, for all $x \in \mathbb{R}^n$,
$$f_i(x) - f_i(x_0) + \epsilon_i + \sum_{j \neq i} \big[ \bar{\tau}_j^i \hat{M} f_j(x) - \bar{\tau}_j^i \hat{M} f_j(x_0) + \bar{\tau}_j^i \hat{M} \epsilon_j \big] + \sum_{r \in L} \bar{\mu}_r^i g_r(x) \geq 0, \qquad (4.5)$$
where $\bar{\tau}_j^i = \tau_j^i / \sum_{k=1}^{m} \tau_k^i$ and $\bar{\mu}_r^i = \mu_r^i / \sum_{k=1}^{m} \tau_k^i$; note that $\bar{\tau}^i \in S_m$. Now set $x = x_0$ in (4.5) to get
$$\epsilon_i + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} \epsilon_j + \sum_{r \in L} \bar{\mu}_r^i g_r(x_0) \geq 0.$$
Setting $\bar{\epsilon}_i = \epsilon_i + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} \epsilon_j$, we get $\sum_{r \in L} \bar{\mu}_r^i g_r(x_0) \geq -\bar{\epsilon}_i$, which proves part (ii) of the theorem with $\tau^i = \bar{\tau}^i$ and $\mu^i = \bar{\mu}^i$. Further, from (4.5) we have, for all $x \in \mathbb{R}^n$,
$$f_i(x) + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} f_j(x) + \sum_{r \in L} \bar{\mu}_r^i g_r(x) + \bar{\epsilon}_i \ \geq\ f_i(x_0) + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} f_j(x_0). \qquad (4.6)$$
Further, as $x_0$ is feasible for CMOP, we have $\sum_{r \in L} \bar{\mu}_r^i g_r(x_0) \leq 0$.

Thus from (4.6) we get
$$f_i(x) + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} f_j(x) + \sum_{r \in L} \bar{\mu}_r^i g_r(x) + \bar{\epsilon}_i \ \geq\ f_i(x_0) + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \bar{\mu}_r^i g_r(x_0),$$
which implies, for each $i$ and for all $x \in \mathbb{R}^n$,
$$L_i^{\hat{M}}(x, \bar{\tau}^i, \bar{\mu}^i) + \bar{\epsilon}_i \geq L_i^{\hat{M}}(x_0, \bar{\tau}^i, \bar{\mu}^i). \qquad (4.7)$$
Further, from (4.2) we observe that for all $i$ and any $\mu \in \mathbb{R}^l_+$,
$$L_i^{\hat{M}}(x_0, \bar{\tau}^i, \mu) \leq f_i(x_0) + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} f_j(x_0),$$
which, using part (ii), can be written as
$$L_i^{\hat{M}}(x_0, \bar{\tau}^i, \mu) \leq f_i(x_0) + \sum_{j \neq i} \bar{\tau}_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \bar{\mu}_r^i g_r(x_0) + \bar{\epsilon}_i.$$
Thus for all $x \in \mathbb{R}^n$ and $\mu \in \mathbb{R}^l_+$,
$$L_i^{\hat{M}}(x_0, \bar{\tau}^i, \mu) \leq L_i^{\hat{M}}(x_0, \bar{\tau}^i, \bar{\mu}^i) + \bar{\epsilon}_i. \qquad (4.8)$$
Inequalities (4.7) and (4.8) together prove part (i) of the theorem.

Now for the sufficiency part, let us assume that for the given $x_0 \in \mathbb{R}^n$ and each $i \in I$ there exist $\tau^i \in S_m$ and $\mu^i \in \mathbb{R}^l_+$ such that (i) and (ii) hold. Our first step is to show that $x_0$ is feasible for CMOP. From the left-hand inequality in (i) we have, for all $\mu \in \mathbb{R}^l_+$,
$$L_i^{\hat{M}}(x_0, \tau^i, \mu) - \bar{\epsilon}_i \leq L_i^{\hat{M}}(x_0, \tau^i, \mu^i).$$
Thus
$$f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \mu_r g_r(x_0) - \bar{\epsilon}_i \ \leq\ f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \mu_r^i g_r(x_0).$$
This shows that for all $\mu \in \mathbb{R}^l_+$,
$$\sum_{r \in L} \mu_r g_r(x_0) \leq \bar{\epsilon}_i + \sum_{r \in L} \mu_r^i g_r(x_0). \qquad (4.9)$$
On the contrary, assume that $x_0$ is not feasible; then there exists $r_0 \in L$ such that $g_{r_0}(x_0) > 0$. Consider in particular $\mu = (0, \ldots, 0, \mu_{r_0}, 0, \ldots, 0)$ with $\mu_{r_0} > 0$. Then from (4.9) we have
$$\mu_{r_0} g_{r_0}(x_0) \leq \bar{\epsilon}_i + \sum_{r \in L} \mu_r^i g_r(x_0). \qquad (4.10)$$
But if we continue to increase the value of $\mu_{r_0}$, there exists a value of $\mu_{r_0}$ beyond which $\mu_{r_0} g_{r_0}(x_0)$ exceeds the fixed right-hand side of (4.10). This contradiction shows that $x_0$ is a feasible solution of CMOP. From the right-hand inequality in (i) we also have, for all $x \in \mathbb{R}^n$,
$$L_i^{\hat{M}}(x, \tau^i, \mu^i) + \bar{\epsilon}_i \geq L_i^{\hat{M}}(x_0, \tau^i, \mu^i), \qquad (4.11)$$
which implies
$$f_i(x) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x) + \sum_{r \in L} \mu_r^i g_r(x) + \epsilon_i + \sum_{j \neq i} \tau_j^i \hat{M} \epsilon_j \ \geq\ f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \mu_r^i g_r(x_0).$$

Now for any feasible $x$, $\sum_{r \in L} \mu_r^i g_r(x) \leq 0$. Thus from the above inequality we have
$$f_i(x) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x) + \epsilon_i + \sum_{j \neq i} \tau_j^i \hat{M} \epsilon_j \ \geq\ f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \mu_r^i g_r(x_0). \qquad (4.12)$$
Using (ii), this gives
$$f_i(x) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x) + \epsilon_i + \sum_{j \neq i} \tau_j^i \hat{M} \epsilon_j \ \geq\ f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) - \Big( \epsilon_i + \sum_{j \neq i} \tau_j^i \hat{M} \epsilon_j \Big).$$
Since this holds for each $i$, summing over all $i$ we get
$$\sum_{i \in I} \Big( 1 + \hat{M} \sum_{k \neq i} \tau_i^k \Big) f_i(x) + \sum_{i \in I} \Big( 1 + \hat{M} \sum_{k \neq i} \tau_i^k \Big) (2\epsilon_i) \ \geq\ \sum_{i \in I} \Big( 1 + \hat{M} \sum_{k \neq i} \tau_i^k \Big) f_i(x_0).$$
Hence $x_0$ is an $\langle s^*, 2\epsilon \rangle$-minimizer of $P(s^*)$, where the vector $s^*$ has the form $s_i^* = 1 + \hat{M} \sum_{k=1, k \neq i}^{m} \tau_i^k$, $i = 1, 2, \ldots, m$. Now, since $\tau^i \in S_m$ for all $i$, we have for all $i, j \in I$,
$$\frac{s_i^*}{s_j^*} = \frac{1 + \hat{M} \sum_{k \neq i} \tau_i^k}{1 + \hat{M} \sum_{k \neq j} \tau_j^k} = \frac{1 + \hat{M}(1 - \tau_i^i)}{1 + \hat{M}(1 - \tau_j^j)} \leq 1 + \hat{M}.$$
Since the above inequality is true for every $i$ and $j$, we have $\max_{i,j} \{ s_i^*/s_j^* \} \leq 1 + \hat{M}$. Taking $M \geq (1 + \hat{M})(m - 1)$ and using Theorem 4.2, we conclude that $x_0 \in G_{M, 2\epsilon}(f, X)$. This completes the proof.

Remark 4.8 The saddle point type conditions are useful as a sufficient condition if the number of objectives is small. In fact, for sufficiency we can use a much simpler condition. Suppose $x_0 \in \mathbb{R}^n$ is such that for each $i \in I$ there exist $\tau^i \in S_m$ and $\mu^i \in \mathbb{R}^l_+$ such that for all $\mu \in \mathbb{R}^l_+$ and $x \in \mathbb{R}^n$,
(a) $L_i^{\hat{M}}(x_0, \tau^i, \mu) - \epsilon_i \leq L_i^{\hat{M}}(x_0, \tau^i, \mu^i) \leq L_i^{\hat{M}}(x, \tau^i, \mu^i) + \epsilon_i$,
(b) $\sum_{r \in L} \mu_r^i g_r(x_0) \geq -\epsilon_i$,
hold. Then $x_0 \in G_{M, 2\epsilon}(f, X)$. To prove this statement, note that $\bar{\epsilon}_i = \epsilon_i + \sum_{j \neq i} \tau_j^i \hat{M} \epsilon_j$, so $\epsilon_i \leq \bar{\epsilon}_i$. Hence (a) and (b) above imply (i) and (ii) of Theorem 4.7, and we can simply apply the converse part of Theorem 4.7 to conclude that $x_0 \in G_{M, 2\epsilon}(f, X)$ for $M \geq (1 + \hat{M})(m - 1)$. Note that conditions (a) and (b) are much simpler to check than (i) and (ii), since $\bar{\epsilon}_i$ involves the multipliers $\tau_j^i$. Hence, for the sufficient part of the above theorem, we will be using items (a) and (b) in place of (i) and (ii).
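Conditions (a) and (b) of Remark 4.8 involve inequalities over all $x \in \mathbb{R}^n$ and all $\mu \in \mathbb{R}^l_+$, so they cannot be certified by sampling; they can, however, be falsified numerically. The Python sketch below is our own illustration (the function names and the sampling-based test are assumptions, not part of the paper): it evaluates the $(\hat{M}, i)$-Lagrangian of (4.2) and reports any violation of (a) or (b) found on supplied sample points.

```python
import numpy as np

def lagrangian(i, x, tau_i, mu_i, fs, gs, M_hat):
    """(M_hat, i)-Lagrangian (4.2): f_i(x) + sum_{j != i} tau^i_j*M_hat*f_j(x)
    + sum_r mu^i_r*g_r(x), for callables fs = [f_1, ..., f_m], gs = [g_1, ..., g_l]."""
    fx = np.array([f(x) for f in fs], float)
    gx = np.array([g(x) for g in gs], float)
    w = M_hat * np.asarray(tau_i, float).copy()
    w[i] = 0.0                         # objective i itself carries weight 1
    return fx[i] + w @ fx + np.asarray(mu_i, float) @ gx

def violates_saddle(i, x0, tau_i, mu_i, fs, gs, M_hat, eps_i, x_samples, mu_samples):
    """Falsification test for (a)-(b) of Remark 4.8 on sampled x and mu."""
    L0 = lagrangian(i, x0, tau_i, mu_i, fs, gs, M_hat)
    gx0 = np.array([g(x0) for g in gs], float)
    if np.asarray(mu_i, float) @ gx0 < -eps_i:                               # (b)
        return True
    if any(lagrangian(i, x0, tau_i, mu, fs, gs, M_hat) - eps_i > L0 for mu in mu_samples):
        return True                                                          # left part of (a)
    return any(L0 > lagrangian(i, x, tau_i, mu_i, fs, gs, M_hat) + eps_i for x in x_samples)
```

For convex data the right-hand inequality of (a) could instead be certified exactly by minimizing $L_i^{\hat{M}}(\cdot, \tau^i, \mu^i)$ with a convex solver; the sketch above only aims to make the content of (a) and (b) concrete.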

Of course, from the necessary part of Theorem 4.7 we can also derive a multiplier rule involving ε-subdifferentials; however, this rule will be quite different. Observe that if $x_0 \in G_{\hat{M}, \epsilon}(f, X)$, then from part (i) of Theorem 4.7, for any $i \in I$ there exist $\tau^i \in S_m$ and $\mu^i \in \mathbb{R}^l_+$ such that for all $x \in \mathbb{R}^n$,
$$L_i^{\hat{M}}(x_0, \tau^i, \mu^i) \leq L_i^{\hat{M}}(x, \tau^i, \mu^i) + \bar{\epsilon}_i,$$
which implies that $x_0$ is an $\bar{\epsilon}_i$-minimizer of $L_i^{\hat{M}}(\cdot, \tau^i, \mu^i)$ over $\mathbb{R}^n$. Thus for each $i \in I$, $0 \in \partial_{\bar{\epsilon}_i} L_i^{\hat{M}}(x_0, \tau^i, \mu^i)$. In fact, a more compact necessary condition of the KKT type is given as follows:
$$0 \in \sum_{i \in I} \partial_{\bar{\epsilon}_i} L_i^{\hat{M}}(x_0, \tau^i, \mu^i) \quad \text{with} \quad \sum_{r \in L} \mu_r^i g_r(x_0) \geq -\bar{\epsilon}_i \ \text{ for each } i \in I. \qquad (4.13)$$

Theorem 4.9 For given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M} > 0$, consider the problem CMOP. If $x_0 \in G_{\hat{M}, \epsilon}(f, X)$, then there exist $m$ vectors $\tau^i \in S_m$ and $\mu^i \in \mathbb{R}^l_+$, $i \in I$, such that
(i) $0 \in \sum_{i \in I} \partial_{\bar{\epsilon}_i} L_i^{\hat{M}}(x_0, \tau^i, \mu^i)$,
(ii) $\sum_{r \in L} \mu_r^i g_r(x_0) \geq -\bar{\epsilon}_i$ for each $i \in I$,
where $\bar{\epsilon}_i = \epsilon_i + \sum_{j \neq i} \tau_j^i \hat{M} \epsilon_j$, $i \in I$. Conversely, if $x_0 \in X$ is such that there exist vectors $(\tau^i, \mu^i) \in S_m \times \mathbb{R}^l_+$, $i \in I$, such that (i) and (ii) hold, then $x_0 \in G_{M, \epsilon}(f, X)$, where $M = (1 + \hat{M})(m - 1)$.

Proof. The necessary part has already been established in the discussion above. For the sufficiency part, let (i) and (ii) hold for $x_0 \in X$. This means that there exist $v_i \in \partial_{\bar{\epsilon}_i} L_i^{\hat{M}}(x_0, \tau^i, \mu^i)$ for all $i \in I$ such that
$$0 = v_1 + v_2 + \cdots + v_m. \qquad (4.14)$$
Thus, from the definition of the ε-subdifferential we have, for each $i \in I$ and all $x \in \mathbb{R}^n$,
$$L_i^{\hat{M}}(x, \tau^i, \mu^i) - L_i^{\hat{M}}(x_0, \tau^i, \mu^i) \geq \langle v_i, x - x_0 \rangle - \bar{\epsilon}_i.$$
Summing over $i \in I$ and using (4.14), we get
$$\sum_{i \in I} \Big( f_i(x) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x) + \sum_{r \in L} \mu_r^i g_r(x) \Big) \ \geq\ \sum_{i \in I} \Big( f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) + \sum_{r \in L} \mu_r^i g_r(x_0) \Big) - \sum_{i \in I} \bar{\epsilon}_i.$$

For a feasible point $x$ we have $\sum_{r \in L} \mu_r^i g_r(x) \leq 0$; using this and part (ii), we get
$$\sum_{i \in I} \Big( f_i(x) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x) \Big) \ \geq\ \sum_{i \in I} \Big( f_i(x_0) + \sum_{j \neq i} \tau_j^i \hat{M} f_j(x_0) \Big) - \sum_{i \in I} \bar{\epsilon}_i,$$
which can be rewritten as
$$\sum_{i \in I} \Big( 1 + \hat{M} \sum_{j \neq i} \tau_i^j \Big) f_i(x) \ \geq\ \sum_{i \in I} \Big( 1 + \hat{M} \sum_{j \neq i} \tau_i^j \Big) f_i(x_0) - \sum_{i \in I} \Big( 1 + \hat{M} \sum_{j \neq i} \tau_i^j \Big) \epsilon_i.$$
Hence $x_0$ is an $\langle s^*, \epsilon \rangle$-minimizer of $P(s^*)$, where $s_i^* = 1 + \hat{M} \sum_{k=1, k \neq i}^{m} \tau_i^k$. Now, using the same argument as in Theorem 4.2, we conclude that $x_0 \in G_{M, \epsilon}(f, X)$, where $M = (1 + \hat{M})(m - 1)$. This completes the proof.

5 Approximate KKT conditions for approximate Benson solutions

In this section we focus on Benson ɛ-proper solutions. Our aim is to develop scalarization results for these approximate proper solutions; scalarization is one of the keys to optimality conditions in vector optimization. It is important to note that when $C = \mathbb{R}^m_+$, Benson ɛ-proper Pareto solutions reduce to Geoffrion ɛ-proper solutions. We would like to note that our results are different from the optimality conditions for approximate solutions studied in Dutta and Vetrivel [?]. To develop scalarization techniques for Benson ɛ-proper solutions, we will make use of the following cone separation result.

Proposition 5.1 (Borwein [?]) Let $K, N$ be closed, convex cones in $\mathbb{R}^m$ and let $N \cap (-K) = \{0\}$. If the dual cone $K^*$ has non-empty interior, then
$$(K^*)^0 \cap N^* \neq \emptyset. \qquad (5.1)$$

The next two theorems present and analyze a scalarization technique for Benson ɛ-proper solutions.

Theorem 5.2 Let $C \subseteq \mathbb{R}^m$ be a cone such that $(C^*)^0 \neq \emptyset$ and let $s^* \in (C^*)^0$. If $x_0$ is an $\langle s^*, \epsilon \rangle$-minimum of $P(s^*)$, then $x_0 \in S_B^\epsilon(f, X, C)$.

Proof. Let $h \neq 0$ with $h \in \mathrm{cl}\big(\mathrm{cone}(f(X) + (C + \epsilon) - f(x_0))\big)$. Our aim is to establish that $h \notin -C$. From the definition of $h$, there exists a sequence $\{h_n\}$ with $h_n \in \mathrm{cone}(f(X) + (C + \epsilon) - f(x_0))$ and
$$h_n = t_n \big( f(x_n) + s_n + \epsilon - f(x_0) \big) \to h,$$
where $t_n \geq 0$, $x_n \in X$ and $s_n \in C$. Now, since $x_0$ is an $\langle s^*, \epsilon \rangle$-minimum of $P(s^*)$, for each $n$ it holds that
$$\langle s^*, f(x_0) \rangle \leq \langle s^*, f(x_n) \rangle + \langle s^*, \epsilon \rangle.$$
This shows that
$$\langle s^*, f(x_n) + \epsilon - f(x_0) \rangle \geq 0. \qquad (5.2)$$


A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

Identifying Active Constraints via Partial Smoothness and Prox-Regularity

Identifying Active Constraints via Partial Smoothness and Prox-Regularity Journal of Convex Analysis Volume 11 (2004), No. 2, 251 266 Identifying Active Constraints via Partial Smoothness and Prox-Regularity W. L. Hare Department of Mathematics, Simon Fraser University, Burnaby,

More information

Math 273a: Optimization Subgradients of convex functions

Math 273a: Optimization Subgradients of convex functions Math 273a: Optimization Subgradients of convex functions Made by: Damek Davis Edited by Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com 1 / 42 Subgradients Assumptions

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

Franco Giannessi, Giandomenico Mastroeni. Institute of Mathematics University of Verona, Verona, Italy

Franco Giannessi, Giandomenico Mastroeni. Institute of Mathematics University of Verona, Verona, Italy ON THE THEORY OF VECTOR OPTIMIZATION AND VARIATIONAL INEQUALITIES. IMAGE SPACE ANALYSIS AND SEPARATION 1 Franco Giannessi, Giandomenico Mastroeni Department of Mathematics University of Pisa, Pisa, Italy

More information

You should be able to...

You should be able to... Lecture Outline Gradient Projection Algorithm Constant Step Length, Varying Step Length, Diminishing Step Length Complexity Issues Gradient Projection With Exploration Projection Solving QPs: active set

More information

Vector Variational Principles; ε-efficiency and Scalar Stationarity

Vector Variational Principles; ε-efficiency and Scalar Stationarity Journal of Convex Analysis Volume 8 (2001), No. 1, 71 85 Vector Variational Principles; ε-efficiency and Scalar Stationarity S. Bolintinéanu (H. Bonnel) Université de La Réunion, Fac. des Sciences, IREMIA,

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

Dedicated to Michel Théra in honor of his 70th birthday

Dedicated to Michel Théra in honor of his 70th birthday VARIATIONAL GEOMETRIC APPROACH TO GENERALIZED DIFFERENTIAL AND CONJUGATE CALCULI IN CONVEX ANALYSIS B. S. MORDUKHOVICH 1, N. M. NAM 2, R. B. RECTOR 3 and T. TRAN 4. Dedicated to Michel Théra in honor of

More information

Date: July 5, Contents

Date: July 5, Contents 2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

ON NECESSARY OPTIMALITY CONDITIONS IN MULTIFUNCTION OPTIMIZATION WITH PARAMETERS

ON NECESSARY OPTIMALITY CONDITIONS IN MULTIFUNCTION OPTIMIZATION WITH PARAMETERS ACTA MATHEMATICA VIETNAMICA Volume 25, Number 2, 2000, pp. 125 136 125 ON NECESSARY OPTIMALITY CONDITIONS IN MULTIFUNCTION OPTIMIZATION WITH PARAMETERS PHAN QUOC KHANH AND LE MINH LUU Abstract. We consider

More information

Convex analysis and profit/cost/support functions

Convex analysis and profit/cost/support functions Division of the Humanities and Social Sciences Convex analysis and profit/cost/support functions KC Border October 2004 Revised January 2009 Let A be a subset of R m Convex analysts may give one of two

More information

Semi-infinite programming, duality, discretization and optimality conditions

Semi-infinite programming, duality, discretization and optimality conditions Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints. 1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,

More information

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark.

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. DUALITY THEORY Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. Keywords: Duality, Saddle point, Complementary

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

Convex Analysis and Optimization Chapter 4 Solutions

Convex Analysis and Optimization Chapter 4 Solutions Convex Analysis and Optimization Chapter 4 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Elements of Convex Optimization Theory

Elements of Convex Optimization Theory Elements of Convex Optimization Theory Costis Skiadas August 2015 This is a revised and extended version of Appendix A of Skiadas (2009), providing a self-contained overview of elements of convex optimization

More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, 2018 BORIS S. MORDUKHOVICH 1 and NGUYEN MAU NAM 2 Dedicated to Franco Giannessi and Diethard Pallaschke with great respect Abstract. In

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

A Descent Method for Equality and Inequality Constrained Multiobjective Optimization Problems

A Descent Method for Equality and Inequality Constrained Multiobjective Optimization Problems A Descent Method for Equality and Inequality Constrained Multiobjective Optimization Problems arxiv:1712.03005v2 [math.oc] 11 Dec 2017 Bennet Gebken 1, Sebastian Peitz 1, and Michael Dellnitz 1 1 Department

More information

Technische Universität Ilmenau Institut für Mathematik

Technische Universität Ilmenau Institut für Mathematik Technische Universität Ilmenau Institut für Mathematik Preprint No. M 13/05 Properly optimal elements in vector optimization with variable ordering structures Gabriele Eichfelder and Refail Kasimbeyli

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality

More information

Optimality, identifiability, and sensitivity

Optimality, identifiability, and sensitivity Noname manuscript No. (will be inserted by the editor) Optimality, identifiability, and sensitivity D. Drusvyatskiy A. S. Lewis Received: date / Accepted: date Abstract Around a solution of an optimization

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Stability of efficient solutions for semi-infinite vector optimization problems

Stability of efficient solutions for semi-infinite vector optimization problems Stability of efficient solutions for semi-infinite vector optimization problems Z. Y. Peng, J. T. Zhou February 6, 2016 Abstract This paper is devoted to the study of the stability of efficient solutions

More information

Computing Efficient Solutions of Nonconvex Multi-Objective Problems via Scalarization

Computing Efficient Solutions of Nonconvex Multi-Objective Problems via Scalarization Computing Efficient Solutions of Nonconvex Multi-Objective Problems via Scalarization REFAIL KASIMBEYLI Izmir University of Economics Department of Industrial Systems Engineering Sakarya Caddesi 156, 35330

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1.

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1. Chapter 1 Metric spaces 1.1 Metric and convergence We will begin with some basic concepts. Definition 1.1. (Metric space) Metric space is a set X, with a metric satisfying: 1. d(x, y) 0, d(x, y) = 0 x

More information

Lagrange Relaxation and Duality

Lagrange Relaxation and Duality Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global

More information

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation

More information

On constraint qualifications with generalized convexity and optimality conditions

On constraint qualifications with generalized convexity and optimality conditions On constraint qualifications with generalized convexity and optimality conditions Manh-Hung Nguyen, Do Van Luu To cite this version: Manh-Hung Nguyen, Do Van Luu. On constraint qualifications with generalized

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian

More information

Decision Science Letters

Decision Science Letters Decision Science Letters 8 (2019) *** *** Contents lists available at GrowingScience Decision Science Letters homepage: www.growingscience.com/dsl A new logarithmic penalty function approach for nonlinear

More information

BASICS OF CONVEX ANALYSIS

BASICS OF CONVEX ANALYSIS BASICS OF CONVEX ANALYSIS MARKUS GRASMAIR 1. Main Definitions We start with providing the central definitions of convex functions and convex sets. Definition 1. A function f : R n R + } is called convex,

More information

HIGHER ORDER OPTIMALITY AND DUALITY IN FRACTIONAL VECTOR OPTIMIZATION OVER CONES

HIGHER ORDER OPTIMALITY AND DUALITY IN FRACTIONAL VECTOR OPTIMIZATION OVER CONES - TAMKANG JOURNAL OF MATHEMATICS Volume 48, Number 3, 273-287, September 2017 doi:10.5556/j.tkjm.48.2017.2311 - - - + + This paper is available online at http://journals.math.tku.edu.tw/index.php/tkjm/pages/view/onlinefirst

More information

Existence of minimizers

Existence of minimizers Existence of imizers We have just talked a lot about how to find the imizer of an unconstrained convex optimization problem. We have not talked too much, at least not in concrete mathematical terms, about

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Convex Feasibility Problems

Convex Feasibility Problems Laureate Prof. Jonathan Borwein with Matthew Tam http://carma.newcastle.edu.au/drmethods/paseky.html Spring School on Variational Analysis VI Paseky nad Jizerou, April 19 25, 2015 Last Revised: May 6,

More information

An Approximate Lagrange Multiplier Rule

An Approximate Lagrange Multiplier Rule An Approximate Lagrange Multiplier Rule J. Dutta, S. R. Pattanaik Department of Mathematics and Statistics Indian Institute of Technology, Kanpur, India email : jdutta@iitk.ac.in, suvendu@iitk.ac.in and

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

Finite Dimensional Optimization Part I: The KKT Theorem 1

Finite Dimensional Optimization Part I: The KKT Theorem 1 John Nachbar Washington University March 26, 2018 1 Introduction Finite Dimensional Optimization Part I: The KKT Theorem 1 These notes characterize maxima and minima in terms of first derivatives. I focus

More information

Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function

Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function arxiv:1805.03847v1 [math.oc] 10 May 2018 Vsevolod I. Ivanov Department of Mathematics, Technical

More information

NONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1.

NONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1. TAIWANESE JOURNAL OF MATHEMATICS Vol. 19, No. 2, pp. 455-466, April 2015 DOI: 10.11650/tjm.19.2015.4360 This paper is available online at http://journal.taiwanmathsoc.org.tw NONLINEAR SCALARIZATION CHARACTERIZATIONS

More information

Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization

Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization RESEARCH Open Access Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization Kwan Deok Bae and Do Sang Kim * * Correspondence: dskim@pknu.ac. kr Department of Applied Mathematics, Pukyong

More information

Primal Solutions and Rate Analysis for Subgradient Methods

Primal Solutions and Rate Analysis for Subgradient Methods Primal Solutions and Rate Analysis for Subgradient Methods Asu Ozdaglar Joint work with Angelia Nedić, UIUC Conference on Information Sciences and Systems (CISS) March, 2008 Department of Electrical Engineering

More information

A projection-type method for generalized variational inequalities with dual solutions

A projection-type method for generalized variational inequalities with dual solutions Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method

More information

Martin Luther Universität Halle Wittenberg Institut für Mathematik

Martin Luther Universität Halle Wittenberg Institut für Mathematik Martin Luther Universität Halle Wittenberg Institut für Mathematik Lagrange necessary conditions for Pareto minimizers in Asplund spaces and applications T. Q. Bao and Chr. Tammer Report No. 02 (2011)

More information

Sequential Pareto Subdifferential Sum Rule And Sequential Effi ciency

Sequential Pareto Subdifferential Sum Rule And Sequential Effi ciency Applied Mathematics E-Notes, 16(2016), 133-143 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Sequential Pareto Subdifferential Sum Rule And Sequential Effi ciency

More information