Duality for vector optimization problems via a general scalarization


Radu Ioan Boţ, Sorin-Mihai Grad

Abstract. Considering a vector optimization problem for which properly efficient solutions are defined by using convex cone-monotone scalarization functions, we attach to it, by means of perturbation theory, new vector duals. When the primal problem, the scalarization function and the perturbation function are particularized, different dual vector problems are obtained, some of them already known in the literature. Weak and strong duality statements are delivered in each case.

Keywords. vector duality, conjugate functions, cone-monotone functions, cones

1 Introduction and preliminaries

One of the most used approaches to vector optimization problems is to attach to them scalar optimization problems whose optimal solutions then deliver efficient solutions to the original problems. The most used scalarization technique is the linear scalarization, but one can find in the literature various other scalarizations that fulfill the specific needs of certain problems better than the linear one, as can be seen, for instance, in most of the literature we cite in this paper. Consequently, in works like [3, 2, 5, 8, 9, 26] a general scalarization by using cone-monotone functions was proposed, sometimes in order to assign a vector dual to the original vector optimization problem. For a primal-dual pair of vector optimization problems one usually has weak vector duality and, under additional hypotheses, also strong vector duality. In this paper we continue and extend our research from [3, 5], where we proposed a Fenchel-Lagrange type vector duality approach for constrained vector optimization problems. Here we assign via perturbations vector duals to a general vector optimization problem, with respect to two different classes of properly efficient solutions. The relations between the vector duals are investigated in each case and weak and strong duality statements are provided.
Research partially supported by DFG (German Research Foundation), project WA 922/- 3. Faculty of Mathematics, Chemnitz University of Technology, D-0907 Chemnitz, Germany, radu.bot@mathematik.tu-chemnitz.de. Faculty of Mathematics, Chemnitz University of Technology, D-0907 Chemnitz, Germany, sorin-mihai.grad@mathematik.tu-chemnitz.de.

Moreover, we present situations where there are differences between the considered vector duals. Carefully choosing the scalarization functions, we then obtain various other dual vector problems to the primal vector problem, in the case of linear scalarization rediscovering some older results from the literature. Furthermore, particularizing the initial vector optimization problem to be first unconstrained and then constrained, for suitable choices of the perturbation function we deliver different vector duals for these classes of primal vector problems, in some cases rediscovering vector duals proposed in the literature.

Consider two separated locally convex vector spaces X and Y and their topological dual spaces X* and Y*, respectively, and denote by ⟨x*, x⟩ = x*(x) the value at x ∈ X of the linear continuous functional x* ∈ X*. A cone K ⊆ X is a nonempty subset of X which fulfills λK ⊆ K for all λ ≥ 0. A cone K ⊆ X is said to be nontrivial if K ≠ {0} and K ≠ X, while if K ∩ (−K) = {0} we call K pointed. When working in finite-dimensional spaces, all vectors are considered column vectors. An upper index T transposes a column vector to a row one and vice versa. On Y we consider the partial ordering ≦_C induced by the convex cone C ⊆ Y, defined by z ≦_C y ⇔ y − z ∈ C for z, y ∈ Y. We also use the notation z ≤_C y to write more compactly that z ≦_C y and z ≠ y, for z, y ∈ Y. To Y we attach a greatest element with respect to ≦_C, which does not belong to Y, denoted by ∞_C, and let Y• = Y ∪ {∞_C}. Then for any y ∈ Y• one has y ≦_C ∞_C and we consider on Y• the operations y + ∞_C = ∞_C + y = ∞_C for all y ∈ Y• and t · ∞_C = ∞_C for all t ≥ 0. Similarly, we assume that y ≤_C ∞_C for any y ∈ Y. When int(C) ≠ ∅, by z <_C y we mean y − z ∈ int(C), for z, y ∈ Y, and we assume that y <_C ∞_C for all y ∈ Y. The dual cone of C is C* = {y* ∈ Y* : ⟨y*, y⟩ ≥ 0 for all y ∈ C}. By convention, ⟨v*, ∞_C⟩ = +∞ for all v* ∈ C*. Given a subset U of X, by cl(U), δ_U and σ_U we denote its closure, indicator function and support function, respectively. In vector optimization one often uses also the quasi interior of the dual cone of K, K*0 = {x* ∈ K* : ⟨x*, x⟩ > 0 for all x ∈ K\{0}}.
Moreover, we consider the projection function Pr_X : X × Y → X, defined by Pr_X(x, y) = x for all (x, y) ∈ X × Y. For a function f : X → R̄ = R ∪ {±∞} we use the classical notations for its domain dom f = {x ∈ X : f(x) < +∞}, its lower semicontinuous hull cl f : X → R̄ and its conjugate function f* : X* → R̄, f*(x*) = sup{⟨x*, x⟩ − f(x) : x ∈ X}. We call f proper if f(x) > −∞ for all x ∈ X and dom f ≠ ∅. Between a function and its conjugate there always holds the Young-Fenchel inequality f*(x*) + f(x) ≥ ⟨x*, x⟩ for all x ∈ X and x* ∈ X*. The adjoint of a linear continuous mapping A : X → Y is A* : Y* → X*, given by ⟨A*y*, x⟩ = ⟨y*, Ax⟩ for any (x, y*) ∈ X × Y*. Let W ⊆ Y. A function g : Y → R̄ is said to be C-increasing on W if g(x) ≤ g(y) for all x, y ∈ W such that x ≦_C y; strongly C-increasing on W if g(x) < g(y) for all x, y ∈ W such that x ≤_C y; strictly C-increasing on W if g is C-increasing on W and, when int(C) ≠ ∅, for all x, y ∈ W fulfilling x <_C y it follows that g(x) < g(y).
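To make the conjugate function and the Young-Fenchel inequality concrete, the following sketch (our illustration, not part of the original text) approximates f* by a grid search for f(x) = x², whose conjugate is known in closed form to be f*(x*) = (x*)²/4, and then checks the inequality at sample points.

```python
# Approximate the conjugate f*(x_star) = sup_x [x_star*x - f(x)] on a grid
# and check the Young-Fenchel inequality f*(x_star) + f(x) >= x_star*x.
# Illustration for f(x) = x^2, whose exact conjugate is (x_star)^2 / 4.

def conjugate(f, x_star, grid):
    return max(x_star * x - f(x) for x in grid)

f = lambda x: x * x
grid = [i / 100.0 for i in range(-500, 501)]  # [-5, 5], step 0.01

for x_star in (-2.0, 0.0, 1.0, 3.0):
    approx = conjugate(f, x_star, grid)
    exact = x_star * x_star / 4.0
    assert abs(approx - exact) < 1e-9  # maximizer x = x_star/2 lies on the grid
    # Young-Fenchel: f*(x_star) + f(x) >= x_star * x for every grid point x
    assert all(approx + f(x) >= x_star * x - 1e-12 for x in grid)
```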

When W = Y we call these classes of functions C-increasing, strongly C-increasing and strictly C-increasing, respectively. When Y = R, the R₊-increasing functions are actually the increasing functions, while the strongly and strictly R₊-increasing functions coincide, being the strictly increasing functions. If int(C) ≠ ∅, we denote Ĉ = int(C) ∪ {0} and we have int(Ĉ) = int(C). In this case the strictly C-increasing functions on a set W ⊆ Y are strongly Ĉ-increasing functions on W, too. A vector function h : X → Y• is called proper if its domain dom h = {x ∈ X : h(x) ∈ Y} is nonempty and, respectively, C-convex if h(tx + (1−t)y) ≦_C th(x) + (1−t)h(y) for all x, y ∈ X and all t ∈ [0, 1]. The vector optimization problems we consider in this paper consist in vector-minimizing or vector-maximizing a vector function with respect to the partial ordering induced in the image space of the vector function by a nontrivial pointed convex cone. For the primal vector optimization problems we define different types of properly efficient solutions, with respect to the considered scalarization functions, while for the vector duals we consider efficient and weakly efficient solutions.

2 Duality for a vector optimization problem via a general scalarization

Let X, Y and V be separated locally convex vector spaces, with V partially ordered by the nontrivial pointed convex cone K ⊆ V. Let us introduce now the minimality notions for sets which we use later in order to consider different types of solutions to vector optimization problems. Let M be a nonempty subset of V and consider an arbitrary nonempty set of scalarization functions defined on V and taking values in R̄, denoted by M.

Definition 1. An element v̄ ∈ M is said to be a minimal element of M (regarding the partial ordering induced by K) if there is no v ∈ M satisfying v ≤_K v̄. The set of all minimal elements of M is denoted by Min(M, K).

Definition 2. An element v̄ ∈ M is said to be an M-properly minimal element of M if there exists an s ∈ M such that s(v̄) ≤ s(v) for all v ∈ M.
The set of all M-properly minimal elements of M is denoted by PMin_M(M, K).

Definition 3. Additionally assume that int(K) ≠ ∅. An element v̄ ∈ M is said to be a weakly minimal element of M (regarding the partial ordering induced by K) if (v̄ − int(K)) ∩ M = ∅. The set of all weakly minimal elements of M is denoted by WMin(M, K).

Corresponding maximality notions are defined by using the definitions from above. The elements of the set Max(M, K) := Min(M, −K) are called maximal elements of M, while the set WMax(M, K) := WMin(M, −K) contains the weakly maximal elements of M. Now we formulate the vector optimization problem we shall work with. Let

F : X → V• be a proper vector function and consider the general vector-minimization problem

(PVG)  Min_{x∈X} F(x).

The solution concepts we consider for (PVG) follow from the ones introduced above for sets. An element x̄ ∈ X fulfilling x̄ ∈ dom F is said to be an efficient solution to the vector optimization problem (PVG) if F(x̄) ∈ Min(F(dom F), K), respectively, when int(K) ≠ ∅, a weakly efficient solution to the same problem if F(x̄) ∈ WMin(F(dom F), K). Consider now the set of scalarization functions

S = {s : V• → R̄ : F(dom F) + K ⊆ dom s and s is proper, convex and strongly K-increasing on F(dom F) + K}.

By convention, we extend every s ∈ S with the value s(∞_K) = +∞. An element x̄ ∈ X is said to be an S-properly efficient solution to (PVG) if F(x̄) ∈ PMin_S(F(dom F), K).

Remark 1. Every S-properly efficient solution to (PVG) belongs to dom F and it is also an efficient solution to the same vector optimization problem and, if int(K) ≠ ∅, each efficient solution to (PVG) is a weakly efficient one, too.

In order to deal with (PVG) via duality, consider now the vector perturbation function Φ : X × Y → V• which fulfills Φ(x, 0) = F(x) for all x ∈ X. We call Y the perturbation space and its elements perturbation variables. Then 0 ∈ Pr_Y(dom Φ) and thus Φ is proper. The primal vector optimization problem introduced above can be reformulated as

(PVG)  Min_{x∈X} Φ(x, 0).

Inspired by the way conjugate dual problems are attached to a given scalar primal problem via perturbations, we attach to (PVG) the following dual vector problems with respect to S-properly efficient solutions:

(DVG^S_1)  Max_{(s,v*,y*,v) ∈ B^G_{S1}} h^G_{S1}(s, v*, y*, v),

where

B^G_{S1} = {(s, v*, y*, v) ∈ S × K* × Y* × V : s(v) ≤ −s*(v*) − (v*Φ)*(0, y*)}

and

h^G_{S1}(s, v*, y*, v) = v,

and, respectively,

(DVG^S_2)  Max_{(s,y*,v) ∈ B^G_{S2}} h^G_{S2}(s, y*, v),

where

B^G_{S2} = {(s, y*, v) ∈ S × Y* × V : s(v) ≤ −(sΦ)*(0, y*)}

and

h^G_{S2}(s, y*, v) = v.

Without resorting to the vector perturbation function Φ one can also attach to (PVG) another vector dual, inspired by the vector dual (DVC_P) from [5, Section 4.3.3], namely

(DVG^S_3)  Max_{(s,v) ∈ B^G_{S3}} h^G_{S3}(s, v),

where

B^G_{S3} = {(s, v) ∈ S × V : s(v) ≤ inf_{x∈X} s(F(x))}

and

h^G_{S3}(s, v) = v.

Remark 2. It is a simple verification to show that in general it holds (sΦ)*(0, y*) ≤ inf_{v*∈K*} [s*(v*) + (v*Φ)*(0, y*)], thus whenever (s, v*, y*, v) ∈ B^G_{S1} we have s(v) ≤ −(sΦ)*(0, y*), which yields (s, y*, v) ∈ B^G_{S2}. Consequently, h^G_{S1}(B^G_{S1}) ⊆ h^G_{S2}(B^G_{S2}). Sufficient conditions for having equality in this inclusion can be found in [5, Theorem 3.5.2]; we mention here only one, namely that, provided that Φ is proper and K-convex, for each s ∈ S there exists a point x' ∈ dom F such that s is continuous at F(x'). Since the scalarization functions most used in the literature (see [3, 5]) are also continuous at least over the sets they are strongly K-increasing on, we note that when the mentioned condition is satisfied the first two vector duals introduced above have the same images of their feasible sets through their objective vector functions, thus it is not necessary to particularize both of them when dealing with concrete scalarization functions from the literature in Section 3. However, we treat them in the general case, since the scalarization functions need not be continuous. In Example 1 one can find a possible scalarization function of this kind, while in Example 2 we deliver another one, in a situation where (DVG^S_1) and (DVG^S_2) do not coincide.

Example 1. Let X = R, V = R², K = R²₊, S = {s}, with s : R² → R̄, s(x, y) = x² + y² + δ_{R²₊}(x, y), and F : R → (R²)•, F(x) = (x, 0)ᵀ if x ∈ (0, 1) and F(x) = ∞_{R²₊} otherwise. One can easily see that F(dom F) = (0, 1) × {0} and dom s = R²₊, thus the condition F(dom F) + R²₊ ⊆ dom s is satisfied. Moreover, the scalarization function s is proper, convex and strongly R²₊-increasing on R²₊, but it is not continuous on F(dom F).

Example 2.
Let X = R, Y = R, V = R², K = {0} × (−∞, 0], S = {s}, with s : R² → R̄,

s(x, y) = x ln x − x + y²/2, if x > 0, y ≤ 0,
s(x, y) = y²/2, if x = 0, y ≤ 0,
s(x, y) = +∞, otherwise,

and Φ : R × R → (R²)•,

Φ(x, y) = (x, x)ᵀ, if x = y = 0 or (x ≠ 0 and y ≠ 0),
Φ(x, y) = ∞_K, otherwise.

Then the scalarization function s is proper, convex and strongly K-increasing on its domain, and (Φ(·, 0))(dom Φ(·, 0)) + K = {(0, 0)} + K ⊆ [0, +∞) × (−∞, 0] = dom s. Regarding the conjugates that appear in the formulation of (DVG^S_1) and (DVG^S_2), we have

s*(v₁*, v₂*) = e^{v₁*} + (v₂*)²/2, if v₁* ∈ R, v₂* ≤ 0,
s*(v₁*, v₂*) = e^{v₁*}, if v₁* ∈ R, v₂* > 0,

((v₁*, v₂*)ᵀΦ)*(0, y*) = 0, if v₁* + v₂* = y* = 0, and +∞ otherwise,

and (sΦ)*(0, y*) = 0 for all y* ∈ R. It is straightforward to see that s(0, 0) = 0 = −(sΦ)*(0, y*) for all y* ∈ R, thus (0, 0) ∈ h^G_{S2}(B^G_{S2}). On the other hand, s*(v₁*, v₂*) > 0 for all v₁*, v₂* ∈ R, thus −s*(v₁*, v₂*) − ((v₁*, v₂*)ᵀΦ)*(0, y*) < 0 whenever v₁*, v₂*, y* ∈ R. As s(0, 0) = 0, it is obvious that (0, 0) ∉ h^G_{S1}(B^G_{S1}). Consequently, (DVG^S_1) and (DVG^S_2) do not coincide in this situation.

Remark 3. Note also that we have inf_{x∈X} s(F(x)) ≥ −(sΦ)*(0, y*) for all (s, y*, v) ∈ B^G_{S2}, thus h^G_{S2}(B^G_{S2}) ⊆ h^G_{S3}(B^G_{S3}). To show that the opposite inclusion does not always hold we consider the situation presented in Example 3. However, as will be seen further in Theorem 4, for (DVG^S_3) strong duality holds whenever (PVG) has an S-properly efficient element, so we will not insist much on this vector dual.

Example 3. Let X = R², Y = R, C = R₊, V = R², K = R²₊,

U = {(x, y)ᵀ ∈ R² : 0 ≤ x ≤ 2; −3 ≤ y ≤ 4 if x = 0; y ≤ −4 if x ∈ (0, 2]},

F : R² → (R²)•, F(x, y) = (y, y)ᵀ if (x, y)ᵀ ∈ U and x ≤ 0, and F(x, y) = ∞_{R²₊} otherwise,

Φ : R² × R → (R²)•, Φ(x, y, z) = (y, y)ᵀ if (x, y)ᵀ ∈ U and x ≤ z, and Φ(x, y, z) = ∞_{R²₊} otherwise

(see also Subsection 3.1), and S = {s : R² → R, s(x, y) = ax + by : (a, b)ᵀ ∈ int(R²₊)}. Note first that F(x, y) = (y, y)ᵀ if x = 0 and −3 ≤ y ≤ 4, while otherwise F(x, y) = ∞_{R²₊}. Whenever s ∈ S there exists (v₁*, v₂*)ᵀ ∈ int(R²₊) such that s ∘ F = (v₁*, v₂*)ᵀF. We have (v₁*, v₂*)ᵀF(x, y) ≥ −3(v₁* + v₂*) for all (x, y)ᵀ ∈ R². Consequently, (−3, −3)ᵀ ∈ h^G_{S3}(B^G_{S3}). Assuming that (−3, −3)ᵀ ∈ h^G_{S2}(B^G_{S2}), it follows that there exist (v₁*, v₂*)ᵀ ∈ int(R²₊) and y* ∈ R such that −3(v₁* +

v₂*) ≤ −((v₁*, v₂*)ᵀΦ)*(0, y*), i.e. ((v₁*, v₂*)ᵀΦ)*(0, y*) ≤ 3(v₁* + v₂*). For all (v₁*, v₂*)ᵀ ∈ int(R²₊) and all y* ∈ R we have

((v₁*, v₂*)ᵀΦ)*(0, y*) = sup_{(x,y)ᵀ∈U, x≤z, z∈R} [y*z − y(v₁* + v₂*)] = sup_{(x,y)ᵀ∈U} [−y(v₁* + v₂*) + sup_{z≥x} y*z] = sup_{(x,y)ᵀ∈U} [y*x − y(v₁* + v₂*)] + δ_{(−∞,0]}(y*) = 4(v₁* + v₂*) + δ_{(−∞,0]}(y*) > 3(v₁* + v₂*).

Therefore, our assumption is false, i.e. (−3, −3)ᵀ ∉ h^G_{S2}(B^G_{S2}). Consequently, (DVG^S_2) and (DVG^S_3) do not coincide in this situation.

Remark 4. Replacing the inequalities from the feasible sets of the vector duals to (PVG) by equalities, we obtain other vector duals to (PVG) which have smaller feasible sets. All the investigations done in this paper can be considered for those vector duals, too. For these vector-maximization problems we consider efficient solutions, defined below for (DVG^S_1) and analogously for the others. An element (s̄, v̄*, ȳ*, v̄) ∈ B^G_{S1} is said to be an efficient solution to the vector optimization problem (DVG^S_1) if (s̄, v̄*, ȳ*, v̄) ∈ dom h^G_{S1} and h^G_{S1}(s̄, v̄*, ȳ*, v̄) ∈ Max(h^G_{S1}(dom h^G_{S1}), K). Let us show now that for the just introduced dual problems there is weak duality.

Theorem 1. There are no x ∈ X and (s, v) ∈ B^G_{S3} such that F(x) ≤_K h^G_{S3}(s, v).

Proof. Assume to the contrary that there exist x ∈ X and (s, v) ∈ B^G_{S3} fulfilling F(x) ≤_K h^G_{S3}(s, v). Then x ∈ dom F and it follows s(F(x)) < s(v), since s ∈ S. But from the way the feasible set of the vector dual is defined we get s(v) ≤ inf_{z∈X} s(F(z)), and combining these two inequalities we reach a contradiction.

The weak duality statements for the other two vector duals can be obtained as consequences of Theorem 1, having in mind the inclusions from Remark 2 and Remark 3.

Theorem 2. There are no x ∈ X and (s, v*, y*, v) ∈ B^G_{S1} such that F(x) ≤_K h^G_{S1}(s, v*, y*, v).

Theorem 3. There are no x ∈ X and (s, y*, v) ∈ B^G_{S2} such that F(x) ≤_K h^G_{S2}(s, y*, v).

Next we turn our attention to strong duality for the vector duals introduced in this paper. Due to the way it is constructed, for (DVG^S_3) strong duality follows at once, without any additional assumption.
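For intuition about the minimality notions the duality statements revolve around, the following sketch (an illustration we add here, not part of the original development) enumerates Min(M, K) and WMin(M, K) from Definitions 1 and 3 for a finite set M ⊂ R² and K = R²₊.

```python
# Enumerate minimal and weakly minimal elements of a finite set M in R^2
# ordered by the cone K = R^2_+ (componentwise order): v_bar is minimal if
# no v in M satisfies v <=_K v_bar with v != v_bar, and weakly minimal if
# no v in M satisfies v <_K v_bar (strictly smaller in every component).

def is_le_K(v, w):   # v <=_K w and v != w (the ordering <=_K from the text)
    return v != w and all(vi <= wi for vi, wi in zip(v, w))

def is_lt_K(v, w):   # v <_K w, i.e. w - v in int(R^2_+)
    return all(vi < wi for vi, wi in zip(v, w))

def Min(M):
    return {v for v in M if not any(is_le_K(w, v) for w in M)}

def WMin(M):
    return {v for v in M if not any(is_lt_K(w, v) for w in M)}

M = {(1, 3), (2, 2), (3, 1), (2, 3), (1, 4), (3, 3)}
print(Min(M))   # {(1, 3), (2, 2), (3, 1)}
print(WMin(M))  # the three minimal points plus (1, 4) and (2, 3)
```

As the output shows, Min(M, K) ⊆ WMin(M, K) and the inclusion can be strict: (1, 4) and (2, 3) are dominated componentwise, but not strictly in both components.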

Theorem 4. If x̄ ∈ X is an S-properly efficient solution to (PVG), there exists an efficient solution (s̄, v̄) ∈ B^G_{S3} to (DVG^S_3) such that F(x̄) = h^G_{S3}(s̄, v̄) = v̄.

Proof. As x̄ ∈ X is an S-properly efficient solution to (PVG), F(x̄) ∈ V and there exists a function s̄ ∈ S such that s̄(F(x̄)) ≤ s̄(F(x)) for all x ∈ X. Thus s̄(F(x̄)) ≤ inf_{x∈X} s̄(F(x)). Consequently, (s̄, F(x̄)) ∈ B^G_{S3} and F(x̄) = h^G_{S3}(s̄, F(x̄)). The efficiency of (s̄, F(x̄)) to (DVG^S_3) follows immediately via Theorem 1.

To obtain strong duality for the other two vector duals we assigned to (PVG) we need some additional hypotheses. Thus, we take the function F to be K-convex and we impose the fulfillment of a suitable regularity condition. One can introduce different regularity conditions for each of the two remaining vector duals, see for instance [2, 4, 5], but we consider here only a classical one involving continuity, namely

(RC_S)  for each s ∈ S there exists an x' ∈ X such that (x', 0) ∈ dom Φ, Φ(x', ·) is continuous at 0 and s is continuous at Φ(x', 0).

This regularity condition guarantees, as can be seen in the proof of the next statement, on the one hand, that there is strong duality for the scalarized problem attached to (PVG) and, on the other hand, that the conjugate of s ∘ Φ can be separated. Of course one can consider other regularity conditions, following for instance [2, 4, 5]. The strong duality statements for these two vector duals follow.

Theorem 5. If F is a K-convex vector function, the regularity condition (RC_S) is fulfilled and x̄ ∈ X is an S-properly efficient solution to (PVG), then there exist s̄ ∈ S, v̄* ∈ K*, ȳ* ∈ Y* and v̄ ∈ V such that (s̄, v̄*, ȳ*, v̄) ∈ B^G_{S1} is an efficient solution to (DVG^S_1), (s̄, ȳ*, v̄) ∈ B^G_{S2} is an efficient solution to (DVG^S_2) and F(x̄) = h^G_{S1}(s̄, v̄*, ȳ*, v̄) = h^G_{S2}(s̄, ȳ*, v̄) = v̄.

Proof. As x̄ ∈ X is an S-properly efficient solution to (PVG), F(x̄) ∈ V and there exists a function s̄ ∈ S such that s̄(F(x̄)) ≤ s̄(F(x)) for all x ∈ X. Thus s̄(F(x̄)) ≤ inf_{x∈X} s̄(F(x)) and, consequently, s̄(F(x̄)) = min_{x∈X} s̄(F(x)).
Using now [5, Theorem 3.2.], the fulfillment of (RC_S) yields the existence of ȳ* ∈ Y* such that sup_{y*∈Y*} {−(s̄Φ)*(0, y*)} is attained at ȳ* and s̄(F(x̄)) = −(s̄Φ)*(0, ȳ*). Taking v̄ = F(x̄), it follows that (s̄, ȳ*, v̄) ∈ B^G_{S2} and F(x̄) = h^G_{S2}(s̄, ȳ*, v̄) = v̄. On the other hand, the hypotheses yield (see Remark 2) also the existence of v̄* ∈ K* such that (s̄Φ)*(0, ȳ*) = s̄*(v̄*) + (v̄*Φ)*(0, ȳ*), thus h^G_{S1}(s̄, v̄*, ȳ*, v̄) = v̄, too, and (s̄, v̄*, ȳ*, v̄) ∈ B^G_{S1}. The efficiency of (s̄, v̄*, ȳ*, v̄) ∈ B^G_{S1} to (DVG^S_1) follows immediately by Theorem 2, while that of (s̄, ȳ*, v̄) ∈ B^G_{S2} to (DVG^S_2) is a consequence of Theorem 3.

Often, when int(K) ≠ ∅, the scalarization functions considered in the literature are not strongly K-increasing, but strictly K-increasing. Following ideas from [3, 5], one can notice that such scalarization functions can be brought into the vector duality framework we treat in this paper by employing the cone K̂ = int(K) ∪ {0}. Note that a weakly efficient solution to (PVG) is actually an efficient solution to it when working with the cone K̂. It can also be verified that every function which is strictly K-increasing on F(dom F) + K is actually strongly K̂-increasing on F(dom F) + K̂. Consider another set of scalarization functions

T = {s : V• → R̄ : F(dom F) + K ⊆ dom s and s is proper, convex and strictly K-increasing on F(dom F) + K}.

By convention, we extend any s ∈ T with the value s(∞_K) = +∞. We say that an element x̄ ∈ X is a T-properly efficient solution to (PVG) if F(x̄) ∈ PMin_T(F(dom F), K). With respect to the T-properly efficient solutions of the primal problem (PVG) one can define three vector duals that are obtained from (DVG^S_i), i = 1, 2, 3, by replacing S with T, namely

(DVG^T_1)  WMax_{(s,v*,y*,v) ∈ B^G_{T1}} h^G_{T1}(s, v*, y*, v),

where

B^G_{T1} = {(s, v*, y*, v) ∈ T × K* × Y* × V : s(v) ≤ −s*(v*) − (v*Φ)*(0, y*)}

and

h^G_{T1}(s, v*, y*, v) = v,

(DVG^T_2)  WMax_{(s,y*,v) ∈ B^G_{T2}} h^G_{T2}(s, y*, v),

where

B^G_{T2} = {(s, y*, v) ∈ T × Y* × V : s(v) ≤ −(sΦ)*(0, y*)}

and

h^G_{T2}(s, y*, v) = v,

and, respectively,

(DVG^T_3)  WMax_{(s,v) ∈ B^G_{T3}} h^G_{T3}(s, v),

where

B^G_{T3} = {(s, v) ∈ T × V : s(v) ≤ inf_{x∈X} s(F(x))}

and

h^G_{T3}(s, v) = v.

One can show that

h^G_{T1}(B^G_{T1}) ⊆ h^G_{T2}(B^G_{T2}) ⊆ h^G_{T3}(B^G_{T3}).  (1)
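The distinction between strictly and strongly K-increasing functions can be checked numerically; the sketch below (our illustrative example, with K = R²₊ and sample pairs chosen by us) shows that the linear functional s(v) = v₁ is strictly but not strongly R²₊-increasing.

```python
# For K = R^2_+: s is strongly K-increasing if v <=_K w with v != w implies
# s(v) < s(w); strictly K-increasing if v <_K w (strict in both components)
# implies s(v) < s(w). We test s(v) = v[0] on a few sample pairs.

s = lambda v: v[0]

def le_K(v, w):  # v <=_K w and v != w
    return v != w and all(a <= b for a, b in zip(v, w))

def lt_K(v, w):  # v <_K w, i.e. w - v in int(R^2_+)
    return all(a < b for a, b in zip(v, w))

pairs = [((0.0, 0.0), (0.0, 1.0)),
         ((0.0, 0.0), (1.0, 1.0)),
         ((1.0, 2.0), (2.0, 5.0))]

strictly_ok = all(s(v) < s(w) for v, w in pairs if lt_K(v, w))
strongly_ok = all(s(v) < s(w) for v, w in pairs if le_K(v, w))

print(strictly_ok)  # True: v <_K w forces v1 < w1
print(strongly_ok)  # False: (0,0) <=_K (0,1) but s gives 0 = 0
```

This is exactly why a nonzero v* ∈ K*\{0} only yields a strictly K-increasing linear scalarization, while strong monotonicity requires v* from the quasi interior K*0.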

To these problems we consider weakly efficient solutions, directly defined only for (DVG^T_1), since for the other two vector duals they can be given analogously. An element (s̄, v̄*, ȳ*, v̄) ∈ B^G_{T1} is said to be a weakly efficient solution to the vector optimization problem (DVG^T_1) if (s̄, v̄*, ȳ*, v̄) ∈ dom h^G_{T1} and h^G_{T1}(s̄, v̄*, ȳ*, v̄) ∈ WMax(h^G_{T1}(dom h^G_{T1}), K). The weak and strong duality statements concerning (PVG) and these vector duals follow as direct consequences of Theorems 1-5. Note that in this case the regularity condition (RC_S) can be weakened to

(RC)  there exists an x' ∈ X such that (x', 0) ∈ dom Φ and Φ(x', ·) is continuous at 0,

because the continuity assumptions for s are no longer necessary under the hypothesis int(K) ≠ ∅, as can be seen below in the proof of Theorem 7. Assuming also all over this paper that int(K) ≠ ∅ would make (RC) (and its special cases) the only regularity condition considered, since the proof of Theorem 5 could then be modified analogously to the one of Theorem 7.

Theorem 6. (a) There are no x ∈ X and (s, v*, y*, v) ∈ B^G_{T1} such that F(x) <_K h^G_{T1}(s, v*, y*, v).
(b) There are no x ∈ X and (s, y*, v) ∈ B^G_{T2} such that F(x) <_K h^G_{T2}(s, y*, v).
(c) There are no x ∈ X and (s, v) ∈ B^G_{T3} such that F(x) <_K h^G_{T3}(s, v).

Theorem 7. (a) If x̄ ∈ X is a T-properly efficient solution to (PVG), there exist s̄ ∈ T and v̄ ∈ V such that (s̄, v̄) ∈ B^G_{T3} is a weakly efficient solution to (DVG^T_3) and F(x̄) = h^G_{T3}(s̄, v̄) = v̄.
(b) If F is a K-convex vector function, the regularity condition (RC) is fulfilled and x̄ ∈ X is a T-properly efficient solution to (PVG), there exist s̄ ∈ T, v̄* ∈ K*, ȳ* ∈ Y* and v̄ ∈ V such that (s̄, v̄*, ȳ*, v̄) ∈ B^G_{T1} is a weakly efficient solution to (DVG^T_1), (s̄, ȳ*, v̄) ∈ B^G_{T2} is a weakly efficient solution to (DVG^T_2) and F(x̄) = h^G_{T1}(s̄, v̄*, ȳ*, v̄) = h^G_{T2}(s̄, ȳ*, v̄) = v̄.

Proof. We prove only (b), since (a) can be shown analogously to the proof of Theorem 4. As x̄ ∈ X is a T-properly efficient solution to (PVG), F(x̄) ∈ V and there exists a function s̄ ∈ T such that s̄(F(x̄)) ≤ s̄(F(x)) for all x ∈ X.
Thus s̄(F(x̄)) ≤ inf_{x∈X} s̄(F(x)) and, consequently, s̄(F(x̄)) = min_{x∈X} s̄(F(x)). Note also that the optimization problem inf_{x∈X} s̄(F(x)) is actually nothing else than

inf_{x∈X, y∈V, Φ(x,0) ≦_K y} s̄(y).

To the latter problem we attach its Lagrange dual

sup_{v*∈K*} inf_{x∈X, y∈V} [s̄(y) + ⟨v*, Φ(x, 0) − y⟩],

which can be rewritten as

sup_{v*∈K*} {−s̄*(v*) − ((v*Φ)(·, 0))*(0)}.

Because (RC) holds, we obtain an x' ∈ dom F such that Φ(x', 0) + int(K) ⊆ dom s̄ and also a y' ∈ dom s̄ such that y' − Φ(x', 0) ∈ int(K). Using now [5, Theorem 3.2.9], we obtain that for the primal-dual pair of scalar optimization problems introduced above there is strong duality, thus there exists v̄* ∈ K* such that s̄(F(x̄)) = −s̄*(v̄*) − ((v̄*Φ)(·, 0))*(0). Applying now [5, Theorem 3.2.], (RC) yields also the existence of ȳ* ∈ Y* such that ((v̄*Φ)(·, 0))*(0) = (v̄*Φ)*(0, ȳ*). Taking v̄ = F(x̄), it follows that (s̄, v̄*, ȳ*, v̄) ∈ B^G_{T1} and F(x̄) = h^G_{T1}(s̄, v̄*, ȳ*, v̄) = v̄. Using (1) (see also Remark 2), it follows that (s̄, ȳ*, v̄) ∈ B^G_{T2} and h^G_{T2}(s̄, ȳ*, v̄) = v̄. The weak efficiency of (s̄, v̄*, ȳ*, v̄) ∈ B^G_{T1} to (DVG^T_1) follows immediately by Theorem 6(a), while that of (s̄, ȳ*, v̄) ∈ B^G_{T2} to (DVG^T_2) is a consequence of Theorem 6(b).

3 Vector duality via several particular scalarizations

In the third section of this paper we consider several concrete scalarization functions, obtaining vector duals and corresponding duality statements by particularizing the set S or T, respectively. Namely, we present the linear scalarization, the set scalarization and the (semi)norm scalarization. Since all these scalarizations are made with continuous functions, taking into consideration Remark 2 and Theorem 4 it is clear that it makes no sense to deal in this section with all the vector duals we considered for (PVG), thus we work only with (DVG^S_1) and (DVG^T_1), respectively. Before proceeding, let us mention that various other scalarizations were considered in the literature, from which we recall some here. From the scalarizations involving strongly K-increasing functions we mention the one using continuous sublinear functions from [28], the one with quadratic functions in finite-dimensional spaces from [9] (see also [3]) and the one containing penalty functions from [37].
Regarding the scalarizations involving strictly K-increasing functions, we have the one with continuous sublinear functions from [23], the maximum-linear scalarization met in papers like [6, 25, 27] (see also [3, 5]) with its special case the weighted Tchebyshev scalarization, for which we refer to [0, 9, 20, 24, 33], the scalarization with oriented distances from [39] and the bottleneck scalarization considered in [7].
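As a concrete instance of the strictly K-increasing functions just mentioned, the following sketch (our illustration; the weight vector, reference point and sample set are assumptions, not taken from the text) applies the weighted Tchebyshev scalarization s(v) = max_i w_i(v_i − r_i) to a finite outcome set and picks out a weakly minimal element.

```python
# Weighted Tchebyshev scalarization over a finite outcome set M in R^2:
# s_w(v) = max_i w_i * (v_i - r_i) with weights w > 0 and a reference point
# r componentwise below M. A minimizer of s_w over M is weakly minimal
# w.r.t. the ordering cone R^2_+ (a standard property of this scalarization).

def tchebyshev(v, w, r):
    return max(wi * (vi - ri) for wi, vi, ri in zip(w, v, r))

M = [(1, 3), (2, 2), (3, 1), (2, 3), (3, 3)]
w = (1.0, 1.0)
r = (0.0, 0.0)   # ideal/reference point, below every element of M

best = min(M, key=lambda v: tchebyshev(v, w, r))
print(best)      # (2, 2): its largest coordinate, 2, beats 3 for all others

# the minimizer is weakly minimal: no point of M improves on it strictly
# in both components
assert not any(all(a < b for a, b in zip(v, best)) for v in M)
```

Varying the weight vector w traces out different weakly minimal elements, which is how this scalarization is typically used in practice.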

3.1 Linear scalarization

The linear scalarization is the most famous and most used scalarization method in the literature and it operates with strongly (and also strictly) K-increasing linear continuous functions. From the huge amount of works where it appears we mention here only [, 5, 9, 26]. We deal first with the case of the strongly K-increasing linear functions. Take the set of scalarization functions

S_l = {s_{v*} : V• → R̄ : v* ∈ K*0, s_{v*}(v) = ⟨v*, v⟩ for v ∈ V}

and recall that, as in the general case, every function s_{v*} ∈ S_l is extended with the value s_{v*}(∞_K) = +∞. Each s_{v*} ∈ S_l is a linear continuous strongly K-increasing function and dom s_{v*} = V. Note that the S_l-properly efficient solutions to (PVG) are actually the classical properly efficient solutions to it in the sense of linear scalarization. Noticing that for all v* ∈ K*0 one has s_{v*}* = δ_{{v*}}, the dual vector problem (DVG^S_1) becomes

(DVG^{S_l})  Max_{(v*,y*,v) ∈ B^G_{S_l}} h^G_{S_l}(v*, y*, v),

where

B^G_{S_l} = {(v*, y*, v) ∈ K*0 × Y* × V : ⟨v*, v⟩ ≤ −(v*Φ)*(0, y*)}

and

h^G_{S_l}(v*, y*, v) = v.

Note that this is actually the vector dual to (PVG) considered in [4] and [5, Section 4.3]. The weak and strong duality statements for (PVG) and (DVG^{S_l}) follow, with the remark that, due to the continuity of the scalarization functions, the regularity condition we consider is (RC).

Theorem 8. (a) There are no x ∈ X and (v*, y*, v) ∈ B^G_{S_l} such that F(x) ≤_K h^G_{S_l}(v*, y*, v).
(b) If F is a K-convex vector function, the regularity condition (RC) is fulfilled and x̄ ∈ X is an S_l-properly efficient solution to (PVG), there exist v̄* ∈ K*0, ȳ* ∈ Y* and v̄ ∈ V such that (v̄*, ȳ*, v̄) ∈ B^G_{S_l} is an efficient solution to (DVG^{S_l}) and F(x̄) = h^G_{S_l}(v̄*, ȳ*, v̄) = v̄.

On the other hand, in the case int(K) ≠ ∅ one can also take as set of scalarization functions

T_l = {s_{v*} : V• → R̄ : v* ∈ K*\{0}, s_{v*}(v) = ⟨v*, v⟩ for v ∈ V},

with every function s_{v*} ∈ T_l extended with the value s_{v*}(∞_K) = +∞. Each s_{v*} ∈ T_l is a linear continuous strictly K-increasing function and dom s_{v*} = V.
Note that the T_l-properly efficient solutions to (PVG) are actually the weakly efficient solutions to it. The dual vector problem (DVG^T_1) becomes

in this case

(DVG^{T_l})  WMax_{(v*,y*,v) ∈ B^G_{T_l}} h^G_{T_l}(v*, y*, v),

where

B^G_{T_l} = {(v*, y*, v) ∈ (K*\{0}) × Y* × V : ⟨v*, v⟩ ≤ −(v*Φ)*(0, y*)}

and

h^G_{T_l}(v*, y*, v) = v.

Note that this is actually the vector dual to (PVG) considered in [5, Subsection 4.3.4]. The weak and strong duality statements for (PVG) and (DVG^{T_l}) follow.

Theorem 9. (a) There are no x ∈ X and (v*, y*, v) ∈ B^G_{T_l} such that F(x) <_K h^G_{T_l}(v*, y*, v).
(b) If F is a K-convex vector function, the regularity condition (RC) is fulfilled and x̄ ∈ X is a weakly efficient solution to (PVG), there exist v̄* ∈ K*\{0}, ȳ* ∈ Y* and v̄ ∈ V such that (v̄*, ȳ*, v̄) ∈ B^G_{T_l} is a weakly efficient solution to (DVG^{T_l}) and F(x̄) = h^G_{T_l}(v̄*, ȳ*, v̄) = v̄.

3.2 Set scalarization

By set scalarizations we understand the scalarization approaches for which the scalarization functions are defined by means of some given sets. We consider here a quite general scalarization function inspired by the one used in works like [3, 3, 32, 35]. In this subsection int(K) is taken nonempty. Consider a fixed nonempty convex set E ⊆ V which satisfies cl(E) + int(K) ⊆ int(E). For all µ ∈ int(K) we define the scalarization function s_µ : V• → R̄ by s_µ(v) = inf{t ∈ R : v ∈ tµ − cl(E)}, extended with the value s_µ(∞_K) = +∞. According to [3, 35], for µ ∈ int(K) the function s_µ is convex, continuous and strictly K-increasing. The set of scalarization functions is then

T_s = {s_µ : V• → R̄ : µ ∈ int(K)}.

An element x̄ ∈ X is a T_s-properly efficient solution to (PVG) if there exists µ ∈ int(K) such that s_µ(F(x̄)) ≤ s_µ(F(x)) for all x ∈ X. Since the conjugate function of s_µ, for µ ∈ int(K), is (cf. [3, 5])

s_µ* : V* → R̄, s_µ*(v*) = σ_{cl(E)}(−v*), if ⟨v*, µ⟩ = 1, and s_µ*(v*) = +∞ otherwise,

the dual vector problem attached to (PVG) via the set scalarization turns out to be

(DVG^{T_s})  WMax_{(µ,v*,y*,v) ∈ B^G_{T_s}} h^G_{T_s}(µ, v*, y*, v),

where

B^G_{T_s} = {(µ, v*, y*, v) ∈ int(K) × (K*\{0}) × Y* × V : ⟨v*, µ⟩ = 1, s_µ(v) ≤ −σ_{cl(E)}(−v*) − (v*Φ)*(0, y*)}

and

h^G_{T_s}(µ, v*, y*, v) = v.

The weak and strong duality statements for (PVG) and (DVG^{T_s}) follow.

Theorem 10. (a) There are no x ∈ X and (µ, v*, y*, v) ∈ B^G_{T_s} such that F(x) <_K h^G_{T_s}(µ, v*, y*, v).
(b) If F is a K-convex vector function, the regularity condition (RC) is fulfilled and x̄ ∈ X is a weakly efficient solution to (PVG), there exist µ̄ ∈ int(K), v̄* ∈ K*\{0}, ȳ* ∈ Y* and v̄ ∈ V such that (µ̄, v̄*, ȳ*, v̄) ∈ B^G_{T_s} is a weakly efficient solution to (DVG^{T_s}) and F(x̄) = h^G_{T_s}(µ̄, v̄*, ȳ*, v̄) = v̄.

In the literature there are some interesting special cases of the set scalarization, from which we mention here the scalarization with conical sets, E = K, where, since K is a convex cone, the condition cl(E) + int(K) ⊆ int(E) is automatically fulfilled as equality, mentioned in papers like [28, 30]; the scalarization with sets generated by norms, for which we refer to [32, 38], having as a subcase the situation when oblique norms are employed and the solutions of (PVG) are then S-properly efficient (see [29, 32]); and, finally, the scalarization with polyhedral sets treated in [36]. Note also that in [35] one can find a deeper analysis of an approach for embedding older classical scalarization functions into the set scalarization concept and that the set scalarization with its special instances was employed in vector duality in [3, 5].

3.3 (Semi)Norm scalarization

The (semi)norm scalarization has its roots in the fact that in some circumstances some (semi)norms on V turn out to be strongly K-increasing functions, as noted in different works, from which we recall here only [9, 2, 22, 29, 37]. The scalarization functions we investigate in the following are based on strongly K-increasing gauges.
This kind of scalarization functions has been used in [34] for location problems and in [7] for goal programming, but also papers like [8, 8, 26, 40] can be mentioned here, since they contain different scalarizations involving (semi)norms. Assume first that there exists a b ∈ V such that Φ(dom Φ) ⊆ b + K. We consider a convex set E ⊆ V such that 0 ∈ int(E) and whose gauge (Minkowski functional) γ_E : V → R̄, defined by γ_E(v) = inf{λ ≥ 0 : v ∈ λE}, is strongly K-increasing on K. Since 0 ∈ int(E), it follows that γ_E(v) ∈ R for all v ∈ V.
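The gauge of a set can be evaluated directly in simple cases; the sketch below (our illustration, with E the closed ℓ₁ unit ball in R², an assumption not made in the text) computes γ_E by bisection and confirms that it reproduces the ℓ₁-norm, whose unit ball E is.

```python
# Minkowski gauge gamma_E(v) = inf{ lambda >= 0 : v in lambda * E }, computed
# by bisection, illustrated for E = { v in R^2 : |v1| + |v2| <= 1 } (the
# closed l1 unit ball), for which gamma_E is exactly the l1-norm.

def in_E(v):
    return abs(v[0]) + abs(v[1]) <= 1.0

def gauge(v, in_set, hi=1e6, tol=1e-9):
    if all(abs(c) < tol for c in v):
        return 0.0
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        # v in mid*E  <=>  v/mid in E (for mid > 0 and 0 in E)
        if mid > 0 and in_set((v[0] / mid, v[1] / mid)):
            hi = mid
        else:
            lo = mid
    return hi

v = (0.3, -0.9)
print(round(gauge(v, in_E), 6))  # 1.2 = |0.3| + |-0.9|, the l1-norm of v
```

The same routine works for any convex E with 0 in its interior, which is the setting required of E above.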

For every a ∈ b − K define the scalarization function s_a : V• → R̄ by

s_a(v) = γ_E(v − a), if v ∈ b + K, and s_a(v) = +∞ otherwise,

extended with the value s_a(∞_K) = +∞. All these functions are convex, continuous (because 0 ∈ int(E)) and strongly K-increasing on b + K, and one also has F(dom F) ⊆ b + K = dom s_a for all a ∈ b − K. Considering the following family of scalarization functions

S_g = {s_a : V• → R̄ : a ∈ b − K},

we say that an element x̄ ∈ X is an S_g-properly efficient solution to (PVG) if there exists an a ∈ b − K such that s_a(F(x̄)) ≤ s_a(F(x)) for all x ∈ X. Since from [3, 5] we know that

(s_a)*(v*) = ⟨v*, a⟩ + min{⟨w*, b − a⟩ : w* ∈ −K*, σ_E(v* − w*) ≤ 1}, v* ∈ V*,

the dual vector problem to (PVG) with respect to this scalarization is

(DVG^{S_g})  Max_{(a,v*,y*,w*,v) ∈ B^G_{S_g}} h^G_{S_g}(a, v*, y*, w*, v),

where

B^G_{S_g} = {(a, v*, y*, w*, v) ∈ (b − K) × K*0 × Y* × (−K*) × (b + K) : σ_E(v* − w*) ≤ 1, γ_E(v − a) ≤ ⟨w*, a − b⟩ − ⟨v*, a⟩ − (v*Φ)*(0, y*)}

and

h^G_{S_g}(a, v*, y*, w*, v) = v.

The weak and strong duality statements for (PVG) and (DVG^{S_g}) follow, with the remark that, due to the continuity of the scalarization functions, the regularity condition we consider is (RC).

Theorem 11. (a) There are no x ∈ X and (a, v*, y*, w*, v) ∈ B^G_{S_g} such that F(x) ≤_K h^G_{S_g}(a, v*, y*, w*, v).
(b) If F is a K-convex vector function, the regularity condition (RC) is fulfilled and x̄ ∈ X is an S_g-properly efficient solution to (PVG), there exist ā ∈ b − K, v̄* ∈ K*0, ȳ* ∈ Y*, w̄* ∈ −K* and v̄ ∈ b + K such that (ā, v̄*, ȳ*, w̄*, v̄) ∈ B^G_{S_g} is an efficient solution to (DVG^{S_g}) and F(x̄) = h^G_{S_g}(ā, v̄*, ȳ*, w̄*, v̄) = v̄.

Note that the duality approach described in this subsection can be considered also in the particular case when γ_E is a norm with unit ball E.

Remark 5. If V is a Hilbert space, then the norm of V is strongly K-increasing on K if and only if K ⊆ K* (cf. [9]). This is the case if, for instance, V = R^k and K is the nonnegative orthant in R^k. Not only the Euclidean norm is strongly R^k_+-increasing on R^k_+, but also the oblique norms (cf. [29, 32]) are strongly R^k_+-increasing on R^k_+. Other conditions which ensure that a norm is strongly K-increasing on a given set have been investigated in [8, 9, 37].

4 Vector duality for particular instances of (PVG)

This section is dedicated to the implementation of the vector duality approach introduced in Section 2 for two large classes of vector optimization problems that can be obtained as special cases of (PVG). For carefully chosen perturbation functions we obtain vector duals for these particular primal vector problems via the general scalarization approach considered in this paper. In some places we rediscover duals already known in the literature, pointing this out where this is the case.

4.1 Vector duality for unconstrained vector optimization problems

Let f : X → V ∪ {∞_K} and g : Y → V ∪ {∞_K} be given proper vector functions and A : X → Y a linear continuous mapping such that dom f ∩ A⁻¹(dom g) ≠ ∅. The primal unconstrained vector optimization problem we consider is

(PVA) Min_{x ∈ X} [f(x) + g(Ax)].

Since (PVA) is a special case of (PVG), obtained by taking F = f + g ∘ A, we use the approach developed in the second section in order to deal with it via duality. More precisely, for a convenient choice of the vector perturbation function Φ we obtain vector duals to (PVA) which are special cases of (DVG_S) and (DVG_T), respectively. In order to attach dual vector problems to (PVA), consider the vector perturbation function Φ^A : X × Y → V ∪ {∞_K}, Φ^A(x, y) = f(x) + g(Ax + y). For v* ∈ K* and y* ∈ Y* one has (v*Φ^A)*(0, y*) = (v*f)*(−A*y*) + (v*g)*(y*).
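The conjugate formula above can be sanity-checked numerically in the scalar case. The sketch below uses toy data of our own choosing (v* = 1, f(x) = x², g(y) = (y − 1)², and Ax = 2x, so A*y* = 2y*) and compares a direct grid evaluation of (v*Φ^A)*(0, y*) with (v*f)*(−A*y*) + (v*g)*(y*).

```python
import numpy as np

# Toy check of (v*Phi^A)*(0, y*) = (v*f)*(-A* y*) + (v*g)*(y*) in R.
xs = np.linspace(-30.0, 30.0, 1201)

def conj(h, u):                                  # numerical conjugate h*(u)
    return np.max(u * xs - h(xs))

def lhs(ystar):                                  # sup_{x,y} y*.y - Phi^A(x, y)
    X, Y = np.meshgrid(xs, xs)
    return np.max(ystar * Y - (X**2 + (2.0 * X + Y - 1.0) ** 2))

def rhs(ystar):                                  # decomposed right-hand side
    return conj(lambda x: x**2, -2.0 * ystar) + conj(lambda y: (y - 1.0) ** 2, ystar)

print(all(abs(lhs(u) - rhs(u)) < 1e-6 for u in [-2.0, -0.5, 0.0, 1.0, 3.0]))  # True
```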
Now we are ready to formulate the vector duals to (PVA) that are special cases of (DVG_S) and (DVG_T), namely

(DVA_S) Max_{(s,v*,y*,v) ∈ B^A_S} h^A_S(s, v*, y*, v),

B^A_S = {(s, v*, y*, v) ∈ S × K* × Y* × V : s(v) ≤ −s*(v*) − (v*f)*(−A*y*) − (v*g)*(y*)},

17 , when int(k), h A S (s, v, y, v) = v, (DV A T ) WMax h A T (s,v,y,v) B A (s, v, y, v), T B A T = (s, v, y, v) T K Y V : s(v) s (v ) (v f) ( A y ) (v g) (y ) h A T (s, v, y, v) = v. Due to the fact that in the case V = R these duals turn out to coincide with the classical Fenchel dual problem to the scalar optimization problem (P V A) we say that these two vector duals are the Fenchel vector duals to (P V A). The weak strong duality statements for these two vector duals to (P V A) follow as special instances of Theorem 2, Theorem 5 Theorem 7, with the regularity conditions from the general case becoming (RCA S ) s S x dom f A (dom g) such that g is continuous at Ax s is continuous at f(x ) + g(ax ),, respectively, (RCA) x dom f A (dom g) such that g is continuous at Ax. Theorem 2. (a) There are no x X (s, v, y, v) B A S such that f(x) + g(ax) K h A S (s, v, y, v). (b) If f g are K-convex vector functions, the regularity condition (RCA S ) is fulfilled x X is an S-properly efficient solution to (P V A), there exist s S, v K, ȳ Y v V such that ( s, v, ȳ, v) B A S is an efficient solution to (DV A S ) f( x)+g(a x) = ha S ( s, v, ȳ, v) = v. Theorem 3. Let int(k). (a) There are no x X (s, v, y, v) B A T such that f(x) + g(ax) < K h A T (s, v, y, v). (b) If f g are K-convex vector functions, the regularity condition (RCA) is fulfilled x X is a T -properly efficient solution to (P V A), there exist s T, v K, ȳ Y v V such that ( s, v, ȳ, v) B A T is a weakly efficient solution to (DV A T ) f( x)+g(a x) = ha T ( s, v, ȳ, v) = v. Special instances of the dual vector problems considered in this subsection can be found in [5, Section 4.], the linear scalarization is considered. 7

4.2 Vector duality for constrained vector optimization problems

Consider the nonempty convex set S ⊆ X and the proper vector functions f : X → V ∪ {∞_K} and g : X → Y ∪ {∞_C} fulfilling dom f ∩ S ∩ g⁻¹(−C) ≠ ∅. Let the primal vector optimization problem with geometric and cone constraints be

(PVC) Min_{x ∈ A} f(x), A = {x ∈ S : g(x) ∈ −C}.

Since (PVC) is a special case of (PVG), obtained by taking F : X → V ∪ {∞_K},

F(x) = f(x), if x ∈ A; F(x) = ∞_K, otherwise,

we use the approach developed in Section 2 in order to deal with it via duality. More precisely, for convenient choices of the vector perturbation function Φ we obtain vector duals to (PVC) which are special cases of (DVG_S) and (DVG_T). Consider first the Lagrange type vector perturbation function Φ^CL : X × Y → V ∪ {∞_K},

Φ^CL(x, y) = f(x), if x ∈ S and g(x) ∈ y − C; Φ^CL(x, y) = ∞_K, otherwise.

For v* ∈ K* and y* ∈ Y* we have (v*Φ^CL)*(0, y*) = ((v*f) − (y*g) + δ_S)*(0) + δ_{−C*}(y*), so the Lagrange vector duals to (PVC) are (note the change of sign of y*)

(DVCL_S) Max_{(s,v*,y*,v) ∈ B^CL_S} h^CL_S(s, v*, y*, v),

B^CL_S = {(s, v*, y*, v) ∈ S × K* × C* × V : s(v) ≤ −s*(v*) − ((v*f) + (y*g) + δ_S)*(0)},

h^CL_S(s, v*, y*, v) = v, and, when int(K) ≠ ∅,

(DVCL_T) WMax_{(s,v*,y*,v) ∈ B^CL_T} h^CL_T(s, v*, y*, v),

B^CL_T = {(s, v*, y*, v) ∈ T × K* × C* × V : s(v) ≤ −s*(v*) − ((v*f) + (y*g) + δ_S)*(0)}

and h^CL_T(s, v*, y*, v) = v.

The weak and strong duality statements follow as special instances of Theorem 2, Theorem 5 and Theorem 7, with the regularity conditions becoming

(RCCL_S) ∃s ∈ S and ∃x′ ∈ dom f ∩ S such that g(x′) ∈ −int(C) and s is continuous at f(x′),

and, respectively,

(RCCL) ∃x′ ∈ dom f ∩ S such that g(x′) ∈ −int(C),

which is the classical Slater constraint qualification extended to the vector case.

Theorem 4. (a) There are no x ∈ X and (s, v*, y*, v) ∈ B^CL_S such that f(x) ≤_K h^CL_S(s, v*, y*, v).

(b) If f is a K-convex vector function, g is a C-convex vector function, the regularity condition (RCCL_S) is fulfilled and x̄ ∈ X is an S-properly efficient solution to (PVC), then there exist s̄ ∈ S, v̄* ∈ K*, ȳ* ∈ C* and v̄ ∈ V such that (s̄, v̄*, ȳ*, v̄) ∈ B^CL_S is an efficient solution to (DVCL_S) and f(x̄) = h^CL_S(s̄, v̄*, ȳ*, v̄) = v̄.

Theorem 5. Let int(K) ≠ ∅. (a) There are no x ∈ X and (s, v*, y*, v) ∈ B^CL_T such that f(x) <_K h^CL_T(s, v*, y*, v).

(b) If f is a K-convex vector function, g is a C-convex vector function, the regularity condition (RCCL) is fulfilled and x̄ ∈ X is a T-properly efficient solution to (PVC), then there exist s̄ ∈ T, v̄* ∈ K*, ȳ* ∈ C* and v̄ ∈ V such that (s̄, v̄*, ȳ*, v̄) ∈ B^CL_T is a weakly efficient solution to (DVCL_T) and f(x̄) = h^CL_T(s̄, v̄*, ȳ*, v̄) = v̄.

When (PVG) is particularized to (PVC) and Φ to Φ^CL, one can find in the literature special instances of all three vector duals with respect to S-properly efficient solutions proposed in this article. For instance, the vector dual obtained from (DVG_S3) by means of linear scalarization was considered in [5], while in [2] one can find the vector dual which is a special case of (DVG_S2), but with the scalarization functions moreover taken continuous. In [5], in the same framework, a vector dual similar to (DVG_S2) is considered, but with the inequality from the constraints replaced by equality. In [5, 8, 9] the vector dual obtained in this framework from (DVG_S) by using the linear scalarization is treated.
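In the scalar case the Lagrange duality behind (DVCL) and the role of the Slater condition can be illustrated directly. The toy problem below is our own choice: min x² s.t. 1 − x ≤ 0, where Slater holds (e.g. at x = 2) and there is no duality gap.

```python
import numpy as np

# Scalar Lagrange duality: min f(x) s.t. g(x) <= 0, with f(x) = x^2, g(x) = 1 - x.
xs = np.linspace(-10.0, 10.0, 20001)
feasible = xs[1.0 - xs <= 0.0]
primal = np.min(feasible**2)                     # = 1 at x = 1

def q(y):                                        # dual function inf_x L(x, y)
    return np.min(xs**2 + y * (1.0 - xs))

ys = np.linspace(0.0, 10.0, 1001)
dual = max(q(y) for y in ys)                     # = 1 at y = 2

print(round(primal, 4), round(dual, 4))          # 1.0 1.0 -> no duality gap
```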
Regarding the vector duals with respect to T-properly efficient solutions, the one obtained from (DVG_T3) by means of linear scalarization was considered in [5], in [3] the special case of (DVG_T2) in this framework is mentioned, but with the scalarization functions also taken continuous, while (DVG_T) via the linear scalarization can be found in [5, 9].

A second vector perturbation function that can be considered for (PVC) is the Fenchel type vector perturbation function Φ^CF : X × X → V ∪ {∞_K},

Φ^CF(x, y) = f(x + y), if x ∈ A; Φ^CF(x, y) = ∞_K, otherwise.

Using it, the following Fenchel vector duals, obtained as special cases of (DVG_S) and (DVG_T), can be attached to (PVC):

(DVCF_S) Max_{(s,v*,y*,v) ∈ B^CF_S} h^CF_S(s, v*, y*, v),

B^CF_S = {(s, v*, y*, v) ∈ S × K* × X* × V : s(v) ≤ −s*(v*) − (v*f)*(−y*) − σ_A(y*)},

h^CF_S(s, v*, y*, v) = v, and, when int(K) ≠ ∅,

(DVCF_T) WMax_{(s,v*,y*,v) ∈ B^CF_T} h^CF_T(s, v*, y*, v),

B^CF_T = {(s, v*, y*, v) ∈ T × K* × X* × V : s(v) ≤ −s*(v*) − (v*f)*(−y*) − σ_A(y*)}

and h^CF_T(s, v*, y*, v) = v.

The way we named these vector duals comes from the fact that, rewriting (PVC) in the form of (PVA) (g is taken to be δ_A and A the identity operator), one can derive (DVCF_S) and (DVCF_T) directly from the Fenchel vector duals considered in the previous subsection. Consequently, in this case we do not give again the weak and strong duality statements, since they can be obtained directly from both the general case and the unconstrained case.

The last vector perturbation function we consider in this section is the Fenchel-Lagrange type vector perturbation function Φ^CFL : X × X × Y → V ∪ {∞_K},

Φ^CFL(x, z, y) = f(x + z), if x ∈ S and g(x) ∈ y − C; Φ^CFL(x, z, y) = ∞_K, otherwise.

For v* ∈ K*, z* ∈ X* and y* ∈ Y* one has

(v*Φ^CFL)*(0, z*, y*) = (v*f)*(z*) + (−(y*g) + δ_S)*(−z*) + δ_{−C*}(y*).

Consequently, the Fenchel-Lagrange vector duals to (PVC), obtained from the vector duals introduced in Section 2 by making use of the vector perturbation function Φ^CFL, are

(DVCFL_S) Max_{(s,v*,z*,y*,v) ∈ B^CFL_S} h^CFL_S(s, v*, z*, y*, v),

B^CFL_S = {(s, v*, z*, y*, v) ∈ S × K* × X* × C* × V : s(v) ≤ −s*(v*) − (v*f)*(z*) − ((y*g) + δ_S)*(−z*)},

h^CFL_S(s, v*, z*, y*, v) = v,

21 , when int(k), (DV CF L T ) B CF L T = Max h CF L T (s,v,z,y,v) B CF L (s, v, z, y, v), T (s, v, z, y, v) T K X C V : s(v) s (v ) (v f) (z ) ((y g) + δ S ) ( z ) h CF L T (s, v, z, y, v) = v. These are nothing but the vector duals introduced via the general scalarization in [3] in the finitely dimensional case then extended to infinite dimensions in [5, Section 4.4]. In both these works the scalarization functions are then particularized, like in Section 3. Before giving the weak strong duality statements for these vector duals, we consider the regularity conditions (RCCF L S ) s S x dom f S such that f is continuous at x, g(x ) int(c) s is continuous at f(x ),, respectively, (RCCF L) x dom f S such that f is continuous at x g(x ) int(c). Theorem 6. (a) There are no x X (s, v, z, y, v) B CF L S such that f(x) K h CF L S (s, v, z, y, v). (b) If f is a K-convex vector function, g is a C-convex vector function, the regularity condition (RCCF L S ) is fulfilled x X is an S-properly efficient solution to (P V C), there exist s S, v K, z X, ȳ C v V such that ( s, v, z, ȳ, v) B CF L S is an efficient solution to (DV CF L S ) f( x) = hcf L S ( s, v, z, ȳ, v) = v. Theorem 7. Let int(k). (a) There are no x X (s, v, z, y, v) B CF L T such that f(x) < K h CF L T (s, v, z, y, v). (b) If f is a K-convex vector function, g is a C-convex vector function, the regularity condition (RCCF L T ) is fulfilled x X is a T -properly efficient solution to (P V C), there exist s T, v K, z X, ȳ C v V such that ( s, v, z, ȳ, v) B CF L T is a weakly efficient solution to (DV CF L T ) f( x) = hcf L T ( s, v, z, ȳ, v) = v. Obviously, one can particularize the scalarization function in each situation treated in this section, using the scalarization functions dealt in Section 3 with or others from the literature. 2

References

[1] M. Adán, V. Novo (2004): Proper efficiency in vector optimization on real linear spaces, Journal of Optimization Theory and Applications 121(3).
[2] R.I. Boţ (2010): Conjugate duality in convex optimization, Lecture Notes in Economics and Mathematical Systems 637, Springer-Verlag, Berlin-Heidelberg.
[3] R.I. Boţ, S.-M. Grad, G. Wanka (2007): A general approach for studying duality in multiobjective optimization, Mathematical Methods of Operations Research (ZOR) 65(3).
[4] R.I. Boţ, S.-M. Grad, G. Wanka (2008): A new constraint qualification for the formula of the subdifferential of composed convex functions in infinite dimensional spaces, Mathematische Nachrichten 281(8).
[5] R.I. Boţ, S.-M. Grad, G. Wanka (2009): Duality in vector optimization, Springer-Verlag, Berlin-Heidelberg.
[6] R.I. Boţ, S.-M. Grad, G. Wanka (2009): Generalized Moreau-Rockafellar results for composed convex functions, Optimization 58(7).
[7] E. Carrizosa, J. Fliege (2002): Generalized goal programming: polynomial methods and applications, Mathematical Programming 93(2).
[8] J.P. Dauer, O.A. Saleh (1993): A characterization of proper minimal points as solutions of sublinear optimization problems, Journal of Mathematical Analysis and Applications 178(1).
[9] J. Fliege (2001): Approximation techniques for the set of efficient points, Habilitation Thesis, Faculty of Mathematics, University of Dortmund.
[10] J. Fliege, A. Heseler (2002): Constructing approximations to the efficient set of convex quadratic multiobjective problems, Reports on Applied Mathematics 2, Faculty of Mathematics, University of Dortmund.
[11] C. Gerstewitz (1983): Nichtkonvexe Dualität in der Vektoroptimierung, Wissenschaftliche Zeitschrift der Technischen Hochschule Carl Schorlemmer Leuna-Merseburg 25(3).
[12] C. Gerstewitz, E. Iwanow (1985): Dualität für nichtkonvexe Vektoroptimierungsprobleme, Workshop on Vector Optimization (Plauen, 1984), Wissenschaftliche Zeitschrift der Technischen Hochschule Ilmenau 31(2).
[13] C. Gerth, P. Weidner (1990): Nonconvex separation theorems and some applications in vector optimization, Journal of Optimization Theory and Applications 67(2).
[14] A. Göpfert (1986): Multicriterial duality, examples and advances, in: G. Fandel, M. Grauer, A. Kurzhanski, A.P. Wierzbicki (eds), Large-scale modelling and interactive decision analysis, Proceedings of the international workshop held in Eisenach, 1985, Lecture Notes in Economics and Mathematical Systems 273, Springer-Verlag, Berlin.
[15] A. Göpfert, C. Gerth (1986): Über die Skalarisierung und Dualisierung von Vektoroptimierungsproblemen, Zeitschrift für Analysis und ihre Anwendungen 5(4).
[16] C. Gutiérrez, B. Jiménez, V. Novo (2006): On approximate solutions in vector optimization problems via scalarization, Computational Optimization and Applications 35(3).
[17] S. Helbig (1987): Parametric optimization with a bottleneck objective and vector optimization, in: J. Jahn, W. Krabs (eds), Recent Advances and Historical Development of Vector Optimization, Springer-Verlag, Berlin.
[18] J. Jahn (1984): Scalarization in vector optimization, Mathematical Programming 29(2).
[19] J. Jahn (2004): Vector optimization - theory, applications, and extensions, Springer-Verlag, Berlin.
[20] I. Kaliszewski (1987): Generating nested subsets of efficient solutions, in: J. Jahn, W. Krabs (eds), Recent Advances and Historical Development of Vector Optimization, Springer-Verlag, Berlin.
[21] I. Kaliszewski (1986): Norm scalarization and proper efficiency in vector optimization, Foundations of Control Engineering 11(3).
[22] P.Q. Khanh (1993): Optimality conditions via norm scalarization in vector optimization, SIAM Journal on Control and Optimization 31(3).
[23] D.T. Luc, T.Q. Phong, M. Volle (2006): A new duality approach to solving concave vector maximization problems, Journal of Global Optimization 36(3).
[24] P. Mbunga (2003): Structural stability of vector optimization problems, in: Optimization and optimal control (Ulaanbaatar, 2002), Series on Computers and Operations Research, World Scientific Publishing, River Edge.
[25] K. Miettinen, M.M. Mäkelä (2002): On scalarizing functions in multiobjective optimization, OR Spectrum 24(2).
[26] E. Miglierina, E. Molho (2002): Scalarization and stability in vector optimization, Journal of Optimization Theory and Applications 114(3).
[27] K. Mitani, H. Nakayama (1997): A multiobjective diet planning support system using the satisficing trade-off method, Journal of Multi-Criteria Decision Analysis 6(3).
[28] A.M. Rubinov, R.N. Gasimov (2004): Scalarization and nonlinear scalar duality for vector optimization with preferences that are not necessarily a pre-order relation, Journal of Global Optimization 29(4).
[29] B. Schandl, K. Klamroth, M.M. Wiecek (2002): Norm-based approximation in multicriteria programming, in: Global optimization, control, and games IV, Computers & Mathematics with Applications 44(7).
[30] C. Tammer (1996): A variational principle and applications for vectorial control approximation problems, Preprint 96-09, Reports on Optimization and Stochastics, Martin-Luther University Halle-Wittenberg.
[31] C. Tammer, A. Göpfert (2002): Theory of vector optimization, in: M. Ehrgott, X. Gandibleux (eds), Multiple criteria optimization: state of the art annotated bibliographic surveys, International Series in Operations Research & Management Science 52, Kluwer Academic Publishers, Boston.
[32] C. Tammer, K. Winkler (2003): A new scalarization approach and applications in multicriteria d.c. optimization, Journal of Nonlinear and Convex Analysis 4(3).
[33] T. Tanino, H. Kuk (2002): Nonlinear multiobjective programming, in: M. Ehrgott, X. Gandibleux (eds), Multiple criteria optimization: state of the art annotated bibliographic surveys, International Series in Operations Research & Management Science 52, Kluwer Academic Publishers, Boston.
[34] G. Wanka, R.I. Boţ, E.T. Vargyas (2007): Duality for location problems with unbounded unit balls, European Journal of Operational Research 179(3).
[35] P. Weidner (1990): An approach to different scalarizations in vector optimization, Wissenschaftliche Zeitschrift der Technischen Hochschule Ilmenau 36(3).
[36] P. Weidner (1994): The influence of proper efficiency on optimal solutions of scalarizing problems in multicriteria optimization, OR Spektrum 16(4).
[37] A.P. Wierzbicki (1977): Basic properties of scalarizing functionals for multiobjective optimization, Mathematische Operationsforschung und Statistik, Series Optimization 8(1).
[38] K. Winkler (2003): Skalarisierung mehrkriterieller Optimierungsprobleme mittels schiefer Normen, in: W. Habenicht, B. Scheubrein, R. Scheubein (eds), Multi-Criteria- und Fuzzy-Systeme in Theorie und Praxis, Deutscher Universitäts-Verlag, Wiesbaden.
[39] A. Zaffaroni (2003): Degrees of efficiency and degrees of minimality, SIAM Journal on Control and Optimization 42(3).
[40] X.Y. Zheng (2000): Scalarization of Henig proper efficient points in a normed space, Journal of Optimization Theory and Applications 105(1).


More information

SOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS

SOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS APPLICATIONES MATHEMATICAE 22,3 (1994), pp. 419 426 S. G. BARTELS and D. PALLASCHKE (Karlsruhe) SOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS Abstract. Two properties concerning the space

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

Preprint Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN

Preprint Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN Fakultät für Mathematik und Informatik Preprint 2013-04 Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN 1433-9307 Stephan Dempe and

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Dedicated to Michel Théra in honor of his 70th birthday

Dedicated to Michel Théra in honor of his 70th birthday VARIATIONAL GEOMETRIC APPROACH TO GENERALIZED DIFFERENTIAL AND CONJUGATE CALCULI IN CONVEX ANALYSIS B. S. MORDUKHOVICH 1, N. M. NAM 2, R. B. RECTOR 3 and T. TRAN 4. Dedicated to Michel Théra in honor of

More information

The sum of two maximal monotone operator is of type FPV

The sum of two maximal monotone operator is of type FPV CJMS. 5(1)(2016), 17-21 Caspian Journal of Mathematical Sciences (CJMS) University of Mazandaran, Iran http://cjms.journals.umz.ac.ir ISSN: 1735-0611 The sum of two maximal monotone operator is of type

More information

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT EMIL ERNST AND MICHEL VOLLE Abstract. This article addresses a general criterion providing a zero duality gap for convex programs in the setting of

More information

Helly's Theorem and its Equivalences via Convex Analysis

Helly's Theorem and its Equivalences via Convex Analysis Portland State University PDXScholar University Honors Theses University Honors College 2014 Helly's Theorem and its Equivalences via Convex Analysis Adam Robinson Portland State University Let us know

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

On duality gap in linear conic problems

On duality gap in linear conic problems On duality gap in linear conic problems C. Zălinescu Abstract In their paper Duality of linear conic problems A. Shapiro and A. Nemirovski considered two possible properties (A) and (B) for dual linear

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

A Dual Condition for the Convex Subdifferential Sum Formula with Applications

A Dual Condition for the Convex Subdifferential Sum Formula with Applications Journal of Convex Analysis Volume 12 (2005), No. 2, 279 290 A Dual Condition for the Convex Subdifferential Sum Formula with Applications R. S. Burachik Engenharia de Sistemas e Computacao, COPPE-UFRJ

More information

Inertial forward-backward methods for solving vector optimization problems

Inertial forward-backward methods for solving vector optimization problems Inertial forward-backward methods for solving vector optimization problems Radu Ioan Boţ Sorin-Mihai Grad Dedicated to Johannes Jahn on the occasion of his 65th birthday Abstract. We propose two forward-backward

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, 2018 BORIS S. MORDUKHOVICH 1 and NGUYEN MAU NAM 2 Dedicated to Franco Giannessi and Diethard Pallaschke with great respect Abstract. In

More information

Optimality for set-valued optimization in the sense of vector and set criteria

Optimality for set-valued optimization in the sense of vector and set criteria Kong et al. Journal of Inequalities and Applications 2017 2017:47 DOI 10.1186/s13660-017-1319-x R E S E A R C H Open Access Optimality for set-valued optimization in the sense of vector and set criteria

More information

Additional Homework Problems

Additional Homework Problems Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.

More information

Centre d Economie de la Sorbonne UMR 8174

Centre d Economie de la Sorbonne UMR 8174 Centre d Economie de la Sorbonne UMR 8174 On alternative theorems and necessary conditions for efficiency Do Van LUU Manh Hung NGUYEN 2006.19 Maison des Sciences Économiques, 106-112 boulevard de L'Hôpital,

More information

LECTURE 12 LECTURE OUTLINE. Subgradients Fenchel inequality Sensitivity in constrained optimization Subdifferential calculus Optimality conditions

LECTURE 12 LECTURE OUTLINE. Subgradients Fenchel inequality Sensitivity in constrained optimization Subdifferential calculus Optimality conditions LECTURE 12 LECTURE OUTLINE Subgradients Fenchel inequality Sensitivity in constrained optimization Subdifferential calculus Optimality conditions Reading: Section 5.4 All figures are courtesy of Athena

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Robust Farkas Lemma for Uncertain Linear Systems with Applications

Robust Farkas Lemma for Uncertain Linear Systems with Applications Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

LECTURE 13 LECTURE OUTLINE

LECTURE 13 LECTURE OUTLINE LECTURE 13 LECTURE OUTLINE Problem Structures Separable problems Integer/discrete problems Branch-and-bound Large sum problems Problems with many constraints Conic Programming Second Order Cone Programming

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

arxiv: v1 [math.fa] 16 Jun 2011

arxiv: v1 [math.fa] 16 Jun 2011 arxiv:1106.3342v1 [math.fa] 16 Jun 2011 Gauge functions for convex cones B. F. Svaiter August 20, 2018 Abstract We analyze a class of sublinear functionals which characterize the interior and the exterior

More information

Maximal Monotone Inclusions and Fitzpatrick Functions

Maximal Monotone Inclusions and Fitzpatrick Functions JOTA manuscript No. (will be inserted by the editor) Maximal Monotone Inclusions and Fitzpatrick Functions J. M. Borwein J. Dutta Communicated by Michel Thera. Abstract In this paper, we study maximal

More information

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Michel Théra LACO, UMR-CNRS 6090, Université de Limoges michel.thera@unilim.fr reporting joint work with E. Ernst and M. Volle

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

The local equicontinuity of a maximal monotone operator

The local equicontinuity of a maximal monotone operator arxiv:1410.3328v2 [math.fa] 3 Nov 2014 The local equicontinuity of a maximal monotone operator M.D. Voisei Abstract The local equicontinuity of an operator T : X X with proper Fitzpatrick function ϕ T

More information

REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS

REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS REGULAR LAGRANGE MULTIPLIERS FOR CONTROL PROBLEMS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS fredi tröltzsch 1 Abstract. A class of quadratic optimization problems in Hilbert spaces is considered,

More information

Technische Universität Ilmenau Institut für Mathematik

Technische Universität Ilmenau Institut für Mathematik Technische Universität Ilmenau Institut für Mathematik Preprint No. M 13/05 Properly optimal elements in vector optimization with variable ordering structures Gabriele Eichfelder and Refail Kasimbeyli

More information

Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones

Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Noname manuscript No. (will be inserted by the editor Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Truong Q. Bao Suvendu R. Pattanaik

More information

arxiv: v1 [math.oc] 10 Feb 2016

arxiv: v1 [math.oc] 10 Feb 2016 CHARACTERIZING WEAK SOLUTIONS FOR VECTOR OPTIMIZATION PROBLEMS N. DINH, M.A. GOBERNA, D.H. LONG, AND M.A. LÓPEZ arxiv:1602.03367v1 [math.oc] 10 Feb 2016 Abstract. This paper provides characterizations

More information

NONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1.

NONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1. TAIWANESE JOURNAL OF MATHEMATICS Vol. 19, No. 2, pp. 455-466, April 2015 DOI: 10.11650/tjm.19.2015.4360 This paper is available online at http://journal.taiwanmathsoc.org.tw NONLINEAR SCALARIZATION CHARACTERIZATIONS

More information

Relationships between upper exhausters and the basic subdifferential in variational analysis

Relationships between upper exhausters and the basic subdifferential in variational analysis J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong

More information

Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation

Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Andreas H. Hamel Abstract Recently defined concepts such as nonlinear separation functionals due to

More information

On John type ellipsoids

On John type ellipsoids On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to

More information

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

Examples of Convex Functions and Classifications of Normed Spaces

Examples of Convex Functions and Classifications of Normed Spaces Journal of Convex Analysis Volume 1 (1994), No.1, 61 73 Examples of Convex Functions and Classifications of Normed Spaces Jon Borwein 1 Department of Mathematics and Statistics, Simon Fraser University

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

New Class of duality models in discrete minmax fractional programming based on second-order univexities

New Class of duality models in discrete minmax fractional programming based on second-order univexities STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 5, September 017, pp 6 77. Published online in International Academic Press www.iapress.org) New Class of duality models

More information

SOME REMARKS ON SUBDIFFERENTIABILITY OF CONVEX FUNCTIONS

SOME REMARKS ON SUBDIFFERENTIABILITY OF CONVEX FUNCTIONS Applied Mathematics E-Notes, 5(2005), 150-156 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ SOME REMARKS ON SUBDIFFERENTIABILITY OF CONVEX FUNCTIONS Mohamed Laghdir

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

Applied Mathematics Letters

Applied Mathematics Letters Applied Mathematics Letters 25 (2012) 974 979 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml On dual vector equilibrium problems

More information

Duality Theory of Constrained Optimization

Duality Theory of Constrained Optimization Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive

More information

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems Radu Ioan Boţ Ernö Robert Csetnek August 5, 014 Abstract. In this paper we analyze the

More information

On the use of semi-closed sets and functions in convex analysis

On the use of semi-closed sets and functions in convex analysis Open Math. 2015; 13: 1 5 Open Mathematics Open Access Research Article Constantin Zălinescu* On the use of semi-closed sets and functions in convex analysis Abstract: The main aim of this short note is

More information

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma 4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid

More information