
Applied Mathematical Sciences, Vol. 7, 2013, no. 58, 2841 - 2861
HIKARI Ltd

Minimality Concepts Using a New Parameterized Binary Relation in Vector Optimization ¹

Christian Sommer

Department Mathematik, Angewandte Mathematik 2
Universität Erlangen-Nürnberg, Erlangen, Germany
sommer@math.fau.de

Copyright (c) 2013 Christian Sommer. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A new parameterized binary relation is used to define minimality concepts in vector optimization. To simplify the problem of determining minimal elements, the method of scalarization is applied. Necessary and sufficient conditions for the existence of minimal elements with respect to the scalarized problems are given. The multiplier rule of Lagrange is generalized. As a necessary minimality condition a Karush-Kuhn-Tucker condition is obtained. The results are applied to optimization problems in finite-dimensional vector spaces.

Mathematics Subject Classification: 47N10, 49N10, 65K10

Keywords: Minimality concepts, scalarization, Lagrange multiplier rule, Karush-Kuhn-Tucker condition

1 Introduction

To compare elements of a real Hilbert space $Y$, in a recent paper [13] we have introduced a subset $C_\varphi \subseteq Y$ and a new parameterized binary relation $\leq_\varphi$ (see also [12]). We have defined $C_\varphi$ and $\leq_\varphi$ by a map $\varphi$ depending on linear operators $A$ and $B$ and a vector $a \in Y$. Relating to these parameters we have

¹ This paper contains the second part of the dissertation of the author, written at the University of Erlangen-Nürnberg, 2012.

investigated the geometrical and topological properties of $C_\varphi$, including the questions when $C_\varphi \neq Y$, when $C_\varphi$ is bounded, convex or closed, and so on. In this paper we are particularly interested in the properties of the binary relation $\leq_\varphi$. Using it, in Section 2 we define some new minimality concepts like strong, weak and proper minimality. Since it is often difficult to establish the existence of optimal elements in vector optimization, the given problem should be replaced by a suitable scalarized optimization problem with a real-valued objective function (see Jahn [6], [7], [9]). The solution of the transformed problem can in general be determined more easily. Using the methods of scalarization, in Section 3 we obtain necessary and sufficient conditions for minimal, strongly, properly and weakly minimal elements. In Section 4 we generalize the multiplier rule of Lagrange for our minimality concept and obtain, as a necessary condition for minimality, a Karush-Kuhn-Tucker condition. Finally we apply the Lagrange multiplier rule to special optimization problems in $Y = \mathbb{R}^n$ and $Y = \mathcal{S}^n$.

2 Properties of $\leq_\varphi$ and Some Definitions of Minimality

To define the binary relation $\leq_\varphi$ we follow the lines of [12], [13]: Let $Y$ denote a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle : Y \times Y \to \mathbb{R}$ and the induced norm $\|x\| := \sqrt{\langle x, x \rangle}$. Moreover, assume that $Y$ is partially ordered by a convex cone $C_Y \subseteq Y$. Let two linear, continuous and self-adjoint operators $A, B : Y \to Y$ and a vector $a \in Y$ be given. We define the map $\varphi : Y \times Y \to \mathbb{R}$ and the subset $C_\varphi$ of $Y$ by
$$\varphi(x, y) := \varphi_{A,B,a}(x, y) := \langle x, Ax \rangle - \langle y, By \rangle + \langle a, x - y \rangle \qquad (2.1)$$
and
$$C_\varphi := \{\, z \in Y : \exists\, x, y \in Y : z = y - x,\ \varphi(x, y) \le 0 \,\}. \qquad (2.2)$$
Furthermore, for $x, y \in Y$ we introduce the parameterized binary relation
$$x \leq_\varphi y \ :\Longleftrightarrow\ y - x \in C_Y \ \text{ or, if not, } \ y - x \in C_\varphi, \qquad (2.3)$$
which plays the central role in this paper. To obtain best possible results we make the following convention:
$$\text{let } A \text{ be positive semidefinite and } B \text{ be negative semidefinite.} \qquad (2.4)$$
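As a purely numerical illustration of definitions (2.1)-(2.3), the following hedged sketch (in Python) evaluates $\varphi$ and tests membership in $C_\varphi$ approximately. The operators $A$, $B$, the vector $a$ and the ordering cone $C_Y = \mathbb{R}^2_+$ below are assumptions chosen only for this sketch; they are not data taken from the paper.

```python
import numpy as np

# Illustrative data in Y = R^2 (assumptions for this sketch only): A positive
# semidefinite and B negative semidefinite, as required by convention (2.4).
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[-1.0, 0.0], [0.0, 0.0]])
a = np.array([1.0, 1.0])

def phi(x, y):
    """phi(x, y) = <x, Ax> - <y, By> + <a, x - y>, cf. (2.1)."""
    return x @ A @ x - y @ B @ y + a @ (x - y)

def in_C_phi(z, grid=np.linspace(-5.0, 5.0, 101)):
    """Approximate membership test for C_phi, cf. (2.2): z lies in C_phi iff
    there is some x with phi(x, x + z) <= 0.  The search over x is restricted
    to a finite grid, so 'True' is reliable while 'False' is only indicative."""
    best = min(phi(np.array([s, t]), np.array([s, t]) + z)
               for s in grid for t in grid)
    return best <= 0.0

def leq_phi(x, y, in_cone_Y=lambda z: bool(np.all(z >= 0))):
    """The relation x <=_phi y of (2.3): y - x in C_Y or, if not, y - x in C_phi.
    The ordering cone C_Y = R^2_+ is assumed purely for illustration."""
    z = y - x
    return in_cone_Y(z) or in_C_phi(z)

print(leq_phi(np.zeros(2), np.array([1.0, -0.5])))
```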

Remark 2.1 The additional hypotheses on $A$ and $B$ imply that $\langle x, Ax \rangle \ge 0$ and $\langle y, By \rangle \le 0$ for all $x, y \in Y$. We have shown in [12], [13] that then $\operatorname{int} C_\varphi \neq \emptyset$ and that $C_\varphi$ is convex and contained in the half space $H^+ := \{z \in Y : \langle a, z \rangle \ge 0\}$. Moreover, we have determined the dual set $C_\varphi^*$ of $C_\varphi$ and the contingent cone at an arbitrary $z \in C_\varphi \cap H$, where $H := \{z \in Y : \langle a, z \rangle = 0\}$. Thus, it follows that $C_\varphi$ has the most substantial mathematical structure under our convention.

Analogously to $C_\varphi$ we define the set $D_\varphi$ by
$$D_\varphi := \{\, z \in Y : \exists\, x, y \in Y : z = y - x,\ x \leq_\varphi y \,\}. \qquad (2.5)$$

Remark 2.2 (i) It is easily seen that $D_\varphi = C_Y \cup C_\varphi$. (ii) By a theorem in [12], the relation $\leq_\varphi$ represents a partial ordering iff $D_\varphi$ is a convex cone. (iii) An example of a convex cone $D_\varphi$ is given by an example in [12] (with $C_Y = \mathbb{R}^3_+$). On the other hand, the Examples 6.1 and 6.2 in [13] (with $C_Y = \mathbb{R}^2_+$) describe situations where $D_\varphi$ is neither convex nor a cone.

Minimal elements are usually defined with respect to a partial ordering, i.e., an ordering cone is needed (see Jahn [9]). Since in our situation $D_\varphi$ could fail to be a cone, we here present minimality concepts in a more general way. To obtain reasonable connections between our different concepts we assume the property
$$C_Y \subseteq H^+. \qquad (2.6)$$
Since by (2.4) the inclusion $C_\varphi \subseteq H^+$ holds, we are then actually concerned with the situation that $D_\varphi \subseteq H^+$. For the general case without assuming (2.6) we refer to Weidner [16]. In that paper several minimality concepts have been defined with respect to general subsets $D$ of $Y$ with the only property that $D \setminus \{0_Y\} \neq \emptyset$. Let us now assume that (2.6) is always satisfied in the following.

Definition 2.3 Let $T \subseteq Y$, $T \neq \emptyset$. An $\bar{x} \in T$ is called a $D_\varphi$-minimal element of $T$ if
$$(\{\bar{x}\} - D_\varphi) \cap T \subseteq \{\bar{x}\} + D_\varphi. \qquad (2.7)$$

The following characterization of $D_\varphi$-minimal elements can be easily verified (see [12], Lemma 3.3).

Lemma 2.4 Let $T \subseteq Y$, $T \neq \emptyset$ and $\bar{x} \in T$. The following statements are equivalent.

(a) $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

(b) There is no $x \in T$ such that $\bar{x} \in \{x\} + (D_\varphi \setminus (-D_\varphi))$.

Remark 2.5 Weidner [16], pp. 89, defines so-called efficient elements according to Lemma 2.4 (b) as a criterion for optimality.

The simple proof of the following lemma is given in [12], Lemma 3.6.

Lemma 2.6 If $D_\varphi$ is pseudo-pointed, i.e., $D_\varphi \cap H = \{0_Y\}$, property (2.7) can be replaced by
$$(\{\bar{x}\} - D_\varphi) \cap T = \{\bar{x}\}. \qquad (2.8)$$

Of course, a minimal or, more generally, an optimal element can be interpreted as a lower bound of a given set. This leads to the following notion.

Definition 2.7 Let $T \subseteq Y$, $T \neq \emptyset$. An $\bar{x} \in T$ is called a strongly $D_\varphi$-minimal element of $T$ if $T \subseteq \{\bar{x}\} + D_\varphi$.

Remark 2.8 It is obvious that an $\bar{x} \in T$ is a strongly $D_\varphi$-minimal element of $T$ iff $x - \bar{x} \in D_\varphi$ holds for all $x \in T$.

The following statement is an easy consequence of the above definitions.

Lemma 2.9 Let $T \subseteq Y$, $T \neq \emptyset$. Then every strongly $D_\varphi$-minimal element of $T$ is a $D_\varphi$-minimal element of $T$.
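On a finite set, the criteria above can be checked directly. The following hedged sketch tests criterion (2.7) of Definition 2.3 and criterion (2.8) of Lemma 2.6 for an arbitrary membership predicate standing in for $D_\varphi$; the concrete predicate used here (simply $\mathbb{R}^2_+$) and the test set are illustrative assumptions only.

```python
import numpy as np
from itertools import product

def is_D_minimal(xbar, T, in_D):
    """Criterion (2.7) of Definition 2.3 for a finite set T: every x in T with
    xbar - x in D_phi must also satisfy x - xbar in D_phi."""
    return all(in_D(x - xbar) for x in T if in_D(xbar - x))

def is_D_minimal_pseudo_pointed(xbar, T, in_D):
    """Criterion (2.8) of Lemma 2.6 (usable when D_phi is pseudo-pointed):
    the only x in T with xbar - x in D_phi is xbar itself."""
    return all(np.allclose(x, xbar) for x in T if in_D(xbar - x))

# Purely illustrative choice of D_phi: the ordering cone R^2_+ itself
# (i.e. the special case where C_phi contributes nothing new).
in_D = lambda z: bool(np.all(z >= -1e-12))

# A small finite test set T in R^2.
T = [np.array(p, dtype=float) for p in product([0, 1, 2], repeat=2)]

minimal = [x for x in T if is_D_minimal(x, T, in_D)]
print(minimal)   # for this toy D_phi only the componentwise-least point (0, 0) remains
assert is_D_minimal_pseudo_pointed(np.zeros(2), T, in_D)
```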

To define the next minimality notion we need the concept of a contingent cone.

Definition 2.10 Let $(X, \|\cdot\|)$ be a real normed space and $S \subseteq X$, $S \neq \emptyset$.

(a) Let some $\bar{x} \in \operatorname{cl} S$ (the closure of $S$) be given. An element $h \in X$ is called a tangent to $S$ at $\bar{x}$ if there are a sequence $(x_n)_{n \in \mathbb{N}} \subseteq S$ and a sequence $(\lambda_n)_{n \in \mathbb{N}}$ of positive real numbers such that $\bar{x} = \lim_{n \to \infty} x_n$ and $h = \lim_{n \to \infty} \lambda_n (x_n - \bar{x})$.

(b) The set $T(S, \bar{x})$ of all tangents to $S$ at $\bar{x}$ is called the contingent cone to $S$ at $\bar{x}$.

Remark 2.11 Vogel [14], p. 78, uses contingent cones to define the properties local-efficient, local Bayes and properly efficient. Moreover, proper minimality defined by contingent cones plays an important role in the research of Borwein [1], [2]. In this context we introduce proper $D_\varphi$-minimality as follows.

Definition 2.12 Let $T \subseteq Y$, $T \neq \emptyset$. An $\bar{x} \in T$ is called a properly $D_\varphi$-minimal element of $T$ if $\bar{x}$ is a $D_\varphi$-minimal element of $T$ and $0_Y$ is a $D_\varphi$-minimal element of the contingent cone $T(T + D_\varphi, \bar{x})$.

Finally, we discuss a weaker property than the minimality concepts defined above. For this purpose let us observe that $\operatorname{int} D_\varphi \neq \emptyset$ by Remark 2.1.

Definition 2.13 Let $T \subseteq Y$, $T \neq \emptyset$. An $\bar{x} \in T$ is called a weakly $D_\varphi$-minimal element of $T$ if
$$(\{\bar{x}\} - \operatorname{int} D_\varphi) \cap T = \emptyset.$$

The following lemma shows a connection between the concepts of $D_\varphi$-minimality and weak $D_\varphi$-minimality (see [12], Lemma 3.15 for the simple proof).

Lemma 2.14 Let $T \subseteq Y$, $T \neq \emptyset$. Then every $D_\varphi$-minimal element of $T$ is a weakly $D_\varphi$-minimal element of $T$.

The next result presents a useful property of weakly $D_\varphi$-minimal elements (see [12], Lemma 3.16 for the simple proof).

Lemma 2.15 Let $T \subseteq Y$, $T \neq \emptyset$. Then every weakly $D_\varphi$-minimal element $\bar{x} \in T$ of the set $T + D_\varphi$ is a weakly $D_\varphi$-minimal element of $T$.

The converse of Lemma 2.15 is not true in general. Let us consider the following example.

Example 2.16 Let $Y = \mathbb{R}^2$. We define $A$, $B$ and vectors $a, v, \bar{x} \in \mathbb{R}^2$ as in [12], Ex. 3.8; in particular $\bar{x} := 0_{\mathbb{R}^2}$. As we have shown in [12], Ex. 3.8, $C_\varphi$ admits an explicit description as a subset of $\mathbb{R}^2$, and for $C_Y = \mathbb{R}^2_+$ we then obtain $D_\varphi = \mathbb{R}^2_+ \cup C_\varphi$. Let us consider the set $T := (\mathbb{R}^2 \setminus D_\varphi) \cup \{0_{\mathbb{R}^2}\}$. Then it is easily seen that $\bar{x}$ is a $D_\varphi$-minimal element of $T$ and, in view of Lemma 2.14, also a weakly $D_\varphi$-minimal element of $T$.

To argue that the converse of Lemma 2.15 is not true, we observe (using the explicit description of $C_\varphi$ from [12], Ex. 3.8) that
$$v \in T + C_\varphi \subseteq T + D_\varphi \quad \text{and} \quad v \in -\operatorname{int} C_\varphi \subseteq -\operatorname{int} D_\varphi.$$
Hence we obtain $v \in (\{\bar{x}\} - \operatorname{int} D_\varphi) \cap (T + D_\varphi)$, which clearly implies that $(\{\bar{x}\} - \operatorname{int} D_\varphi) \cap (T + D_\varphi) \neq \emptyset$. Thus, it follows that $\bar{x}$ fails to be a weakly $D_\varphi$-minimal element of $T + D_\varphi$.

3 Scalarization

As we have already explained in the introduction, a given problem in vector optimization can sometimes be handled better after replacing it by a suitable scalarized optimization problem with a real-valued objective function. Since the theory of scalarized optimization has been studied extensively, the solution of the transformed problem can in general be determined more easily. Mainly two methods of scalarization are applied, using linear or nonlinear functionals, respectively norms or semi-norms (see [6], [7], [9]). In [3]-[5] nonlinear functionals are constructed to separate nonconvex sets. The properties of such functionals are discussed in detail in [17], [18]. Moreover, in [5] results on scalarization are obtained for the case of weakly and properly efficient elements, which have first been defined and studied in [15] and [16].

Let us again assume that $D_\varphi$ is given as in Section 2, i.e., $D_\varphi \subseteq H^+$. Applying the above mentioned methods of scalarization, in the following subsections we derive some necessary and sufficient conditions for $D_\varphi$-minimal, properly $D_\varphi$-minimal and weakly $D_\varphi$-minimal elements of a set $T$. For our studies we can come back to Definition 2.12 and Lemma 2.14, observing that every sufficient condition for proper $D_\varphi$-minimality is sufficient for $D_\varphi$-minimality and also for weak $D_\varphi$-minimality. On the other hand, every necessary condition for weak $D_\varphi$-minimality is necessary for $D_\varphi$-minimality and also for proper $D_\varphi$-minimality.

Definition 3.1 Let subsets $T_1$ and $T_2$ of $Y$ be given such that $T_1 \subseteq T_2$.

(a) A functional $f : T_2 \to \mathbb{R}$ is called monotonically increasing (resp. strongly monotonically increasing) on $T_1$ if for every $\bar{x} \in T_1$ the condition
$$x \in (\{\bar{x}\} - D_\varphi) \cap T_1,\ x \neq \bar{x} \ \Longrightarrow\ f(x) \le f(\bar{x}) \quad (\text{resp. } f(x) < f(\bar{x}))$$
holds.

(b) A functional $f : T_2 \to \mathbb{R}$ is called strictly monotonically increasing on $T_1$ if for every $\bar{x} \in T_1$ the condition
$$x \in (\{\bar{x}\} - \operatorname{int} D_\varphi) \cap T_1 \ \Longrightarrow\ f(x) < f(\bar{x})$$
holds.

Remark 3.2 (i) Of course, every strongly monotonically increasing functional on $T_1$ is also strictly monotonically increasing. (ii) It is well known that a continuous linear functional $l : Y \to \mathbb{R}$ on a real Hilbert space $(Y, \langle \cdot,\cdot \rangle)$ is given by a scalar product $\langle v, \cdot \rangle$ with $v = v(l) \in Y$. Since this special representation is irrelevant in the following, we use the general notation $l$.

3.1 Scalarization concerning $D_\varphi$-minimality

Let us begin our studies on scalarization with the property of $D_\varphi$-minimality.

Theorem 3.3 Let $T \subseteq Y$, $T \neq \emptyset$, and assume that the set $T + D_\varphi$ is convex. Moreover, assume for some $\bar{x} \in T$ that $\bar{x}$ is both a $D_\varphi$-minimal element of the set $T$ and a $D_\varphi$-minimal element of the set $T + D_\varphi$. Then there is a linear functional $l \in Y' \setminus \{0_{Y'}\}$ (where $Y'$ denotes the algebraic dual space of $Y$) such that
$$l(\bar{x}) \le l(x) \quad \text{for all } x \in T. \qquad (3.1)$$

This theorem has a generalized analogue in Section 3.3, namely Theorem 3.14. Hence, the statement follows immediately from that result.

Remark 3.4 As we have shown by Example 2.16, a $D_\varphi$-minimal element $\bar{x} \in T$ of $T$ is not necessarily a $D_\varphi$-minimal element of $T + D_\varphi$. Hence, such a property has to be assumed additionally in Theorem 3.3. However, this assumption becomes redundant if the minimality is defined by an ordering cone $C_Y$ (see [9]).

We are now in a position to establish a characterization of $D_\varphi$-minimality by using linear functionals.

Theorem 3.5 Let $T \subseteq Y$, $T \neq \emptyset$, and assume that $D_\varphi$ is closed, convex and pseudo-pointed. Then an element $\bar{x} \in T$ is a $D_\varphi$-minimal element of $T$ iff for every $x \in T \setminus \{\bar{x}\}$ there are a continuous linear functional $l \in Y^* \setminus \{0_{Y^*}\}$ and an $\alpha \in \mathbb{R}$ such that
$$l(\bar{x} - d) \le \alpha < l(x) \quad \text{for all } d \in D_\varphi. \qquad (3.2)$$
If this is the case, then
$$l(\bar{x}) < l(x). \qquad (3.3)$$

Proof: Let $\bar{x} \in T$ be a $D_\varphi$-minimal element of $T$. Since $D_\varphi$ is pseudo-pointed, by Lemma 2.6 this means
$$(\{\bar{x}\} - D_\varphi) \cap T = \{\bar{x}\}. \qquad (3.4)$$
This property is equivalent to
$$x \notin \{\bar{x}\} - D_\varphi \quad \text{for all } x \in T \setminus \{\bar{x}\}. \qquad (3.5)$$
Since $D_\varphi$ is nonempty, convex and closed, the set $\{\bar{x}\} - D_\varphi$ has the same properties. Hence we can apply a well-known separation theorem (see [9], pp. 75): (3.5) is then equivalent to the statement that for every $x \in T \setminus \{\bar{x}\}$ there are a continuous linear functional $\bar{l} \in Y^* \setminus \{0_{Y^*}\}$ and an $\bar{\alpha} \in \mathbb{R}$ such that
$$\bar{l}(x) < \bar{\alpha} \le \bar{l}(\bar{x} - d) \quad \text{for all } d \in D_\varphi. \qquad (3.6)$$
With $l := -\bar{l}$ and $\alpha := -\bar{\alpha}$, (3.6) is equivalent to assertion (3.2). Moreover, inequality (3.3) follows immediately from (3.2) by setting $d = 0_Y \in D_\varphi$.

We are now able to present another necessary and sufficient condition for $D_\varphi$-minimality in which the normal vector $a$ of $H$ plays an important role. The rather elementary proof can be found in [12].

Theorem 3.6 Let $T \subseteq Y$, $T \neq \emptyset$ and $\bar{x} \in T$. Assume that $D_\varphi$ is pseudo-pointed. Then the following statements hold.

(a) If the property $\langle a, \bar{x} \rangle \le \langle a, x \rangle$ is satisfied for all $x \in T$, then $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

(b) If $T$ is convex, then the converse of statement (a) is also true.

The hypothesis that $T$ be convex is essential in statement (b), as the following example shows.

Example 3.7 Let $Y = \mathbb{R}^2$. We define $A$, $B$, the nonconvex set $T$ and vectors $a, \bar{x} \in \mathbb{R}^2$ as in Example 2.16. Then $D_\varphi$ is pseudo-pointed and $\bar{x} = 0_{\mathbb{R}^2}$ is a $D_\varphi$-minimal element of $T$. On the other hand, setting $v := (-2, 1)^T \in T$ we obtain $a^T v = (1, 1)(-2, 1)^T = -1 < 0 = a^T \bar{x}$. Hence, the converse of Theorem 3.6 (a) does not hold.

Using the induced norm on $Y$ we give a sufficient condition for $D_\varphi$-minimality.

Theorem 3.8 Let $T \subseteq Y$, $T \neq \emptyset$. Moreover, let $\hat{x} \in Y$ and $\bar{x} \in T$ be given such that
$$T \subseteq \{\hat{x}\} + D_\varphi \qquad (3.7)$$
and
$$\|\bar{x} - \hat{x}\| \le \|x - \hat{x}\| \quad \text{for all } x \in T. \qquad (3.8)$$
Then the following statements hold.

(a) If the norm $\|\cdot\|$ is monotonically increasing on $D_\varphi$ and $\bar{x}$ is uniquely determined by (3.8), then $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

(b) If $\|\cdot\|$ is strongly monotonically increasing on $D_\varphi$, then $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

To prove the statement we need a lemma (see [9], pp. 129 for the proof).

Lemma 3.9 Let $T \subseteq Y$, $T \neq \emptyset$. Moreover, let a functional $f : T \to \mathbb{R}$ and an $\bar{x} \in T$ be given such that
$$f(\bar{x}) \le f(x) \quad \text{for all } x \in T. \qquad (3.9)$$
Then the following statements hold.

(a) If $f$ is monotonically increasing on $T$ and $\bar{x}$ is uniquely determined by (3.9), then $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

(b) If $f$ is strongly monotonically increasing on $T$, then $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

(c) If $f$ is strictly monotonically increasing on $T$, then $\bar{x}$ is a weakly $D_\varphi$-minimal element of $T$.

Proof of Theorem 3.8: We prove only statement (a); the proof of (b) follows analogously. For this purpose we follow the arguments in [9], p. 131, but without the concept of an ordering interval. Of course, we can apply Lemma 3.9 (a) after having shown that the functional $f : T \to \mathbb{R}$, $f(x) := \|x - \hat{x}\|$, is monotonically increasing on $T$. Let $\bar{t} \in T$ be given. We have to show that $f(x) \le f(\bar{t})$ holds for every $x \in (\{\bar{t}\} - D_\varphi) \cap T$. By (3.7) we obtain $\bar{t} - \hat{x} \in D_\varphi$ and
$$(\{\bar{t}\} - D_\varphi) \cap T \subseteq (\{\bar{t}\} - D_\varphi) \cap (\{\hat{x}\} + D_\varphi). \qquad (3.10)$$
Moreover, using that $x \in (\{\bar{t}\} - D_\varphi) \cap T$ and (3.10), we obtain
$$(\bar{t} - \hat{x}) - (x - \hat{x}) = \bar{t} - x \in D_\varphi \quad \text{and} \quad x \in \{\hat{x}\} + D_\varphi.$$
This implies $x - \hat{x} \in (\{\bar{t} - \hat{x}\} - D_\varphi) \cap D_\varphi$. Since by assumption $\|\cdot\|$ is monotonically increasing on $D_\varphi$, the relation $f(x) = \|x - \hat{x}\| \le \|\bar{t} - \hat{x}\| = f(\bar{t})$ follows. Hence, $f$ is monotonically increasing on $T$. Since by assumption $\bar{x}$ is uniquely determined by (3.8), it is a $D_\varphi$-minimal element of $T$ by Lemma 3.9 (a).
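The following is a small numerical sketch of the approximation approach of Theorem 3.8: for a finite set $T$ lying in $\{\hat{x}\} + D_\varphi$, a point of $T$ nearest to the lower bound $\hat{x}$ is computed. The concrete set $T$, the point $\hat{x}$ and the choice $D_\varphi = \mathbb{R}^2_+$ (for which the Euclidean norm is strongly monotonically increasing) are assumptions made only for this illustration.

```python
import numpy as np

# Hedged sketch of Theorem 3.8: if T lies in {x_hat} + D_phi and the norm is
# (strongly) monotonically increasing on D_phi, a point of T closest to the
# lower bound x_hat is a D_phi-minimal element of T.  Illustrative data:
# D_phi = R^2_+ and a small finite set T.
x_hat = np.array([0.0, 0.0])
T = [np.array(p) for p in [(1.0, 2.0), (2.0, 1.0), (0.5, 3.0), (1.5, 1.5)]]

# Check hypothesis (3.7): T is contained in {x_hat} + D_phi.
assert all(np.all(x - x_hat >= 0) for x in T)

# Minimize the distance to x_hat over T, cf. (3.8).
x_bar = min(T, key=lambda x: np.linalg.norm(x - x_hat))
print(x_bar)   # (1.5, 1.5) here; no other point of T dominates it componentwise
```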

Remark 3.10 Property (3.7) obviously means that $\hat{x}$ can be considered as a lower bound of $T$. But also in the case when $T$ does not satisfy such a property, approximation problems are suitable for determining $D_\varphi$-minimal elements of $T$. This is, however, only possible under additional hypotheses on $D_\varphi$, as the following result shows (see [12] for the proof).

Theorem 3.11 Let $T \subseteq Y$, $T \neq \emptyset$. Moreover, assume that $D_\varphi$ satisfies the property $D_\varphi + D_\varphi \subseteq D_\varphi$. Let $\bar{x} \in T$ and $\tilde{x} \in T \cap (\{\bar{x}\} - D_\varphi)$ be given such that
$$\|\tilde{x} - \bar{x}\| \ge \|x - \bar{x}\| \quad \text{for all } x \in T \cap (\{\bar{x}\} - D_\varphi). \qquad (3.11)$$

(a) If the norm $\|\cdot\|$ is monotonically increasing on $D_\varphi$ and $\tilde{x}$ is uniquely determined by (3.11), then $\tilde{x}$ is a $D_\varphi$-minimal element of $T$.

(b) If $\|\cdot\|$ is strongly monotonically increasing on $D_\varphi$, then $\tilde{x}$ is a $D_\varphi$-minimal element of $T$.

Remark 3.12 (i) We have shown in [12], Lemma 4.12 that the property $D_\varphi + D_\varphi \subseteq D_\varphi$ implies that $D_\varphi$ is a convex cone. This strong property is also needed in Theorems 3.13 and 3.18 below. (ii) In general, the set $D_\varphi$ fails to satisfy $D_\varphi + D_\varphi \subseteq D_\varphi$. (iii) In contrast to Theorem 3.8, where the minimal distance between $\hat{x}$ and $T$ is essential, in Theorem 3.11 the maximal distance between $\bar{x}$ and $T \cap (\{\bar{x}\} - D_\varphi)$ plays a central role.

3.2 Scalarization concerning proper $D_\varphi$-minimality

We give here a sufficient condition for proper $D_\varphi$-minimality under a strong additional hypothesis on $D_\varphi$.

Theorem 3.13 Let $T \subseteq Y$, $T \neq \emptyset$. Moreover, assume that $D_\varphi$ satisfies the property $D_\varphi + D_\varphi \subseteq D_\varphi$. Let an $\hat{x} \in Y$ be given such that $T \subseteq \{\hat{x}\} + \operatorname{int} D_\varphi$. If the norm $\|\cdot\|$ is strongly monotonically increasing on $D_\varphi$ and if there is an $\bar{x} \in T$ such that
$$\|\bar{x} - \hat{x}\| \le \|x - \hat{x}\| \quad \text{for all } x \in T, \qquad (3.12)$$
then $\bar{x}$ is a properly $D_\varphi$-minimal element of $T$.

Proof: We follow the arguments in [9]. But in contrast to that result, our set $D_\varphi$ is not assumed to be pseudo-pointed. Therefore, we have to make slight changes within the proof.

Since $\|\cdot\|$ is strongly monotonically increasing on $D_\varphi$ and, therefore, on $T - \{\hat{x}\} \subseteq \operatorname{int} D_\varphi \subseteq D_\varphi$, the functional $f : T \to \mathbb{R}$, $f(x) := \|x - \hat{x}\|$, is strongly monotonically increasing on $T$. Hence, by Lemma 3.9 (b), $\bar{x}$ is a $D_\varphi$-minimal element of $T$.

Next we want to show that $0_Y$ is a $D_\varphi$-minimal element of the contingent cone $T(T + D_\varphi, \bar{x})$. Since $\bar{x} \in T$, $T - \{\hat{x}\} \subseteq D_\varphi$ and $D_\varphi + D_\varphi \subseteq D_\varphi$, for all $x \in T$ and $d \in D_\varphi$ we obtain
$$x - \hat{x} + d \in (T - \{\hat{x}\}) + D_\varphi \subseteq D_\varphi + D_\varphi \subseteq D_\varphi \qquad (3.13)$$
and
$$x - \hat{x} \in (\{x - \hat{x} + d\} - D_\varphi) \cap D_\varphi. \qquad (3.14)$$
Moreover, since $\|\cdot\|$ is strongly monotonically increasing on $D_\varphi$, from (3.12)-(3.14) we conclude
$$\|\bar{x} - \hat{x}\| \le \|x - \hat{x}\| \le \|x + d - \hat{x}\| \quad \text{for all } x \in T,\ d \in D_\varphi.$$
This implies
$$\|\bar{x} - \hat{x}\| \le \|x - \hat{x}\| \quad \text{for all } x \in T + D_\varphi. \qquad (3.15)$$
It is easily seen that $f$ is continuous and convex. Then, using (3.15) and a known result on contingent cones (see [9], pp. 94), we obtain the inequality
$$\|\bar{x} - \hat{x}\| \le \|\bar{x} - \hat{x} + h\| \quad \text{for all } h \in T(T + D_\varphi, \bar{x}). \qquad (3.16)$$
Define the set $\tilde{T}$ and the functional $g$ by
$$\tilde{T} := T(T + D_\varphi, \bar{x}) \cap (\{\hat{x} - \bar{x}\} + D_\varphi) \quad \text{and} \quad g : \tilde{T} \to \mathbb{R},\ g(h) := \|\bar{x} - \hat{x} + h\|.$$
Since $\tilde{T} \subseteq T(T + D_\varphi, \bar{x})$, by (3.16) we obtain
$$g(0_Y) = \|\bar{x} - \hat{x}\| \le \|\bar{x} - \hat{x} + h\| = g(h) \quad \text{for all } h \in \tilde{T}.$$
Moreover, since $\|\cdot\|$ is strongly monotonically increasing on $D_\varphi$, $g$ has the same property on $\{\hat{x} - \bar{x}\} + D_\varphi \supseteq \tilde{T}$, and hence on $\tilde{T}$. Hence, by Lemma 3.9 (b), $0_Y$ is a $D_\varphi$-minimal element of $\tilde{T}$.

To complete the proof, let us assume that $0_Y$ fails to be a $D_\varphi$-minimal element of the contingent cone $T(T + D_\varphi, \bar{x})$. Then there is an element $x \in (\{0_Y\} - D_\varphi) \cap T(T + D_\varphi, \bar{x})$ such that $x \notin \{0_Y\} + D_\varphi$. Since $\bar{x} \in T$ and $T \subseteq \{\hat{x}\} + \operatorname{int} D_\varphi$, the property $\bar{x} - \hat{x} \in \operatorname{int} D_\varphi$ holds. Hence there is a $\lambda > 0$ such that $\bar{x} - \hat{x} + \lambda x \in D_\varphi$, i.e., $\lambda x \in \{\hat{x} - \bar{x}\} + D_\varphi$. Then, since $T(T + D_\varphi, \bar{x})$ and $D_\varphi$ (by Remark 3.12) are cones, we obtain
$$\lambda x \in (-D_\varphi) \cap T(T + D_\varphi, \bar{x}) \cap (\{\hat{x} - \bar{x}\} + D_\varphi) = (-D_\varphi) \cap \tilde{T}. \qquad (3.17)$$
Moreover, $\lambda x \notin D_\varphi$, because $x \notin D_\varphi$. Altogether, the inclusion $(\{0_Y\} - D_\varphi) \cap \tilde{T} \subseteq \{0_Y\} + D_\varphi$ is violated, a contradiction to the property that $0_Y$ is a $D_\varphi$-minimal element of $\tilde{T}$. This completes the proof.

3.3 Scalarization concerning weak $D_\varphi$-minimality

First we give a necessary condition for weak $D_\varphi$-minimality, generalizing Theorem 3.3.

Theorem 3.14 Let $T \subseteq Y$, $T \neq \emptyset$, and assume that the set $T + D_\varphi$ is convex. Moreover, assume for some $\bar{x} \in T$ that $\bar{x}$ is both a weakly $D_\varphi$-minimal element of the set $T$ and a weakly $D_\varphi$-minimal element of the set $T + D_\varphi$. Then there is a linear functional $l \in Y' \setminus \{0_{Y'}\}$ such that
$$l(\bar{x}) \le l(x) \quad \text{for all } x \in T. \qquad (3.18)$$

Proof: Since $\bar{x}$ is a weakly $D_\varphi$-minimal element of $T + D_\varphi$, by Definition 2.13 we obtain
$$(\{\bar{x}\} - \operatorname{int} D_\varphi) \cap (T + D_\varphi) = \emptyset. \qquad (3.19)$$
Let an arbitrary $t \in T$ be given. It is then easily seen that
$$\operatorname{int}(T + D_\varphi) \supseteq \operatorname{int}(\{t\} + D_\varphi) = \{t\} + \operatorname{int} D_\varphi.$$
We want to show that $\operatorname{int}(T + D_\varphi) \cap \{\bar{x}\} = \emptyset$. On the contrary, suppose that $\bar{x} \in \operatorname{int}(T + D_\varphi)$. Hence there is an $\varepsilon > 0$ such that
$$B_\varepsilon(\bar{x}) = \{\, v \in Y : \|v - \bar{x}\| < \varepsilon \,\} \subseteq T + D_\varphi.$$
Then, using Remark 2.2 and (3.19), we obtain
$$(\{\bar{x}\} - \operatorname{int} C_\varphi) \cap B_\varepsilon(\bar{x}) \subseteq (\{\bar{x}\} - \operatorname{int} D_\varphi) \cap (T + D_\varphi) = \emptyset. \qquad (3.20)$$
Let $z \in H^+ \setminus H$ and let $\lambda > 0$ be sufficiently small such that
$$\lambda \|z\| < \varepsilon \quad \text{and} \quad \lambda \langle z, Bz \rangle + \langle a, z \rangle > 0. \qquad (3.21)$$
We define $w \in Y$ by $w := \bar{x} - \lambda z$. Using (3.21) we obtain
$$\varphi(0_Y, \lambda z) = -\langle \lambda z, B(\lambda z) \rangle + \langle a, 0_Y - \lambda z \rangle = -\lambda^2 \langle z, Bz \rangle - \lambda \langle a, z \rangle = -\lambda\big(\lambda \langle z, Bz \rangle + \langle a, z \rangle\big) < 0.$$
This implies $\lambda z = \lambda z - 0_Y \in \operatorname{int} C_\varphi$ (see also [12]). Thus it follows that $w = \bar{x} - \lambda z \in \{\bar{x}\} - \operatorname{int} C_\varphi$ and $\|w - \bar{x}\| = \|\lambda z\| = \lambda \|z\| < \varepsilon$, i.e., $w \in B_\varepsilon(\bar{x})$, a contradiction to (3.20). Since $\operatorname{int}(T + D_\varphi) \cap \{\bar{x}\} = \emptyset$, we can apply a well-known separation theorem (see [9], pp. 72) to the convex sets $T + D_\varphi$ and $\{\bar{x}\}$. Hence there are a linear functional $\bar{l} \in Y' \setminus \{0_{Y'}\}$ and an $\bar{\alpha} \in \mathbb{R}$ such that
$$\bar{l}(t + d) \le \bar{\alpha} \le \bar{l}(\bar{x}) \quad \text{for all } t \in T,\ d \in D_\varphi. \qquad (3.22)$$
Setting $l := -\bar{l}$, $\alpha := -\bar{\alpha}$ and $d = 0_Y \in D_\varphi$ in (3.22), we obtain statement (3.18).

The next statement is similar to Theorem 3.6. The rather elementary proof can be found in [12].

Theorem 3.15 Let $T \subseteq Y$, $T \neq \emptyset$, and $\bar{x} \in T$. Then the following statements hold.

(a) If the property $\langle a, \bar{x} \rangle \le \langle a, x \rangle$ is satisfied for all $x \in T$, then $\bar{x}$ is a weakly $D_\varphi$-minimal element of $T$.

(b) If $T$ is convex, then the converse of statement (a) is also true.

Remark 3.16 The hypothesis that $T$ be convex is essential in statement (b). Let us again consider Example 3.7. As we have shown there, the vector $\bar{x} = 0_{\mathbb{R}^2}$ is a $D_\varphi$-minimal and, therefore, a weakly $D_\varphi$-minimal element of the nonconvex set $T$. On the other hand, setting $v := (-2, 1)^T \in T$ we obtain $a^T v = (1, 1)(-2, 1)^T = -1 < 0 = a^T \bar{x}$. Hence, the converse of Theorem 3.15 (a) does not hold.

We now give a sufficient condition for weak $D_\varphi$-minimality for subsets $T$ which have a lower bound.

Theorem 3.17 Let $T \subseteq Y$, $T \neq \emptyset$, and let an $\hat{x} \in Y$ be given such that
$$T \subseteq \{\hat{x}\} + D_\varphi. \qquad (3.23)$$
If the norm $\|\cdot\|$ is strictly monotonically increasing on $D_\varphi$ and if there is an element $\bar{x} \in T$ satisfying $\|\bar{x} - \hat{x}\| \le \|x - \hat{x}\|$ for all $x \in T$, then $\bar{x}$ is a weakly $D_\varphi$-minimal element of $T$.

Proof: The statement can be verified by similar arguments as in the proof of Theorem 3.8.

We finish this section by giving another sufficient condition for weak $D_\varphi$-minimality.

Theorem 3.18 Let $T \subseteq Y$, $T \neq \emptyset$, and assume that $D_\varphi + D_\varphi \subseteq D_\varphi$. Moreover, let an $\bar{x} \in T$ be given. If the norm $\|\cdot\|$ is strictly monotonically increasing on $D_\varphi$ and if there is an element $\tilde{x} \in T \cap (\{\bar{x}\} - D_\varphi)$ such that $\|\tilde{x} - \bar{x}\| \ge \|x - \bar{x}\|$ for all $x \in T \cap (\{\bar{x}\} - D_\varphi)$, then $\tilde{x}$ is a weakly $D_\varphi$-minimal element of $T$.

Proof: The statement can be verified by similar arguments as in the proof of Theorem 3.11 (see also [12]).
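To illustrate the linear scalarization of Theorems 3.6 and 3.15 numerically, the following hedged sketch minimizes $x \mapsto \langle a, x \rangle$ over a small finite set $T$; under the standing assumptions on $D_\varphi$, those theorems state that any such minimizer is a (weakly) $D_\varphi$-minimal element of $T$. The set $T$ is an assumption for this sketch, and $a = (1, 1)^T$ merely mirrors the vector appearing in Example 3.7.

```python
import numpy as np

# Hedged sketch of the scalarization in Theorems 3.6 and 3.15: with
# D_phi contained in H^+ = {z : <a, z> >= 0}, a minimizer of <a, .> over T
# is a (weakly) D_phi-minimal element of T.  Illustrative data only.
a = np.array([1.0, 1.0])
T = [np.array(p) for p in [(2.0, -1.0), (-1.0, 2.0), (0.0, 0.0), (3.0, 3.0)]]

x_bar = min(T, key=lambda x: float(a @ x))
print(x_bar, a @ x_bar)   # a minimizer of the linear scalarization <a, .> over T
# Note: for nonconvex T the converse may fail, cf. Example 3.7 and Remark 3.16.
```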

4 Karush-Kuhn-Tucker Condition

In this section we generalize the multiplier rule of Lagrange for the concept of $D_\varphi$-minimality. For this purpose we consider an abstract optimization problem with constraints given by equalities and inequalities. As a necessary condition for minimality we establish a Karush-Kuhn-Tucker condition (KKT). We restrict ourselves to the concept of weak $D_\varphi$-minimality, considering the fact that a necessary condition for weak $D_\varphi$-minimality is also necessary for $D_\varphi$-minimality (see Lemma 2.14). Moreover, to obtain such conditions we use Fréchet differentiable maps. It is also possible to formulate the Lagrange multiplier rule by means of a more general class of differentiable maps (see e.g. [10]). We finally apply our results to the case of finite-dimensional Hilbert spaces, especially studying two concrete optimization problems. Our studies have been motivated by known results on KKT conditions which can be found in [8], pp. 187 and [9].

In more detail, we are now concerned with the following situation: Assume that $(X, \|\cdot\|_X)$ and $(Z_2, \|\cdot\|_{Z_2})$ are real Banach spaces. Moreover, let $(Y, \langle \cdot,\cdot \rangle)$ be the given real Hilbert space and let $(Z_1, \|\cdot\|_{Z_1})$ be a partially ordered normed space with ordering cone $C_{Z_1}$ such that $\operatorname{int} C_{Z_1} \neq \emptyset$. Assume that $D_\varphi$ is convex and satisfies $D_\varphi \subseteq H^+$. Moreover, let $\hat{S} \subseteq X$ be such that $\operatorname{int} \hat{S} \neq \emptyset$ and $\hat{S}$ is convex. Let maps
$$f : X \to Y, \quad g : X \to Z_1 \quad \text{and} \quad h : X \to Z_2 \quad \text{be given.} \qquad (4.1)$$
We define the feasible set $S$ by
$$S := \{\, x \in \hat{S} : g(x) \in -C_{Z_1},\ h(x) = 0_{Z_2} \,\}.$$
Assuming that $S \neq \emptyset$, we consider the abstract optimization problem
$$\min_{x \in S} f(x). \qquad (4.2)$$
The map $f$ is called the objective function. This leads to the following definition.

Definition 4.1 Assume that (4.1) is satisfied and consider the abstract optimization problem (4.2). An element $\bar{x} \in S$ is called a $D_\varphi$-minimal (resp. weakly $D_\varphi$-minimal) solution of problem (4.2) if $f(\bar{x})$ is a $D_\varphi$-minimal (resp. weakly $D_\varphi$-minimal) element of the image set $f(S)$.

To establish a necessary condition for weakly $D_\varphi$-minimal solutions of problem (4.2) we are first concerned with contingent cones.

Theorem 4.2 Let $(X, \|\cdot\|_X)$ be a real normed space and $S \subseteq X$, $S \neq \emptyset$. Moreover, let $(Y, \langle \cdot,\cdot \rangle)$ be the given real Hilbert space and assume that $K \subseteq Y$ is convex with $\operatorname{int} K \neq \emptyset$. Let a map $r : X \to Y$ be given. If $r$ is Fréchet differentiable at an $\bar{x} \in S$ with $r(\bar{x}) \in K$, then
$$\{\, h \in T(S, \bar{x}) : r(\bar{x}) + r'(\bar{x})h \in \operatorname{int} K \,\} \subseteq T\big(\{x \in S : r(x) \in \operatorname{int} K\}, \bar{x}\big).$$

To prove this theorem we need a known auxiliary result (see [18], p. 7).

Lemma 4.3 Let $X$ be a real topological linear space and $S \subseteq X$ such that $S$ is convex and $\operatorname{int} S \neq \emptyset$. If $x \in S$ and $y \in \operatorname{int} S$, then $\lambda x + (1 - \lambda) y \in \operatorname{int} S$ for all $\lambda \in [0, 1)$.

Proof of Theorem 4.2: In the first part we follow the lines of [9]. In the second part we complete the proof by applying Lemma 4.3; we need this auxiliary result because $K$ fails to be a cone in general, so that some arguments used in [9] cannot be applied. Let an arbitrary $h \in T(S, \bar{x})$ be given such that
$$r(\bar{x}) + r'(\bar{x})h \in \operatorname{int} K. \qquad (4.3)$$
If $h = 0_X$, the statement trivially follows. Hence, let $h \neq 0_X$. Then there are a sequence $(x_n)_{n \in \mathbb{N}}$ of elements $x_n \in S$ and a sequence $(\lambda_n)_{n \in \mathbb{N}}$ of positive real numbers such that $\bar{x} = \lim_{n \to \infty} x_n$ and $h = \lim_{n \to \infty} \lambda_n (x_n - \bar{x})$. Since $h \neq 0_X$, it is easily seen that $\lim_{n \to \infty} \lambda_n = \infty$ and that there is an $\bar{N} \in \mathbb{N}$ such that $x_n - \bar{x} \neq 0_X$ for all $n \ge \bar{N}$. Now defining $h_n := \lambda_n (x_n - \bar{x})$, $n \in \mathbb{N}$, we obtain
$$r(x_n) = \frac{1}{\lambda_n}\Big[\lambda_n\big(r(x_n) - r(\bar{x}) - r'(\bar{x})(x_n - \bar{x})\big) + r'(\bar{x})(h_n - h) + r(\bar{x}) + r'(\bar{x})h\Big] + \Big(1 - \frac{1}{\lambda_n}\Big) r(\bar{x}) \quad \text{for all } n \ge \bar{N}. \qquad (4.4)$$
Next we verify the identity
$$\lim_{n \to \infty} \Big[\lambda_n\big(r(x_n) - r(\bar{x}) - r'(\bar{x})(x_n - \bar{x})\big) + r'(\bar{x})(h_n - h)\Big] = 0_Y. \qquad (4.5)$$
Since by assumption $r$ is Fréchet differentiable at $\bar{x}$, using the above arguments we obtain
$$\lim_{n \to \infty} \lambda_n\big(r(x_n) - r(\bar{x}) - r'(\bar{x})(x_n - \bar{x})\big) = \lim_{n \to \infty} \|h_n\|_X \, \frac{r\big(\bar{x} + (x_n - \bar{x})\big) - r(\bar{x}) - r'(\bar{x})(x_n - \bar{x})}{\|x_n - \bar{x}\|_X} = \|h\|_X \cdot 0_Y = 0_Y.$$

This together with $\lim_{n \to \infty} r'(\bar{x})(h_n - h) = r'(\bar{x}) 0_X = 0_Y$ implies (4.5). By (4.3) and (4.5) there is an $N_1 \in \mathbb{N}$ such that
$$y_n := \lambda_n\big(r(x_n) - r(\bar{x}) - r'(\bar{x})(x_n - \bar{x})\big) + r'(\bar{x})(h_n - h) + r(\bar{x}) + r'(\bar{x})h \in \operatorname{int} K \quad \text{for all } n \ge N_1. \qquad (4.6)$$
Moreover, since $\lim_{n \to \infty} \lambda_n = \infty$, there is an $N_2 \in \mathbb{N}$ such that $\frac{1}{\lambda_n} < 1$ for all $n \ge N_2$. Set $\tilde{N} := \max\{N_1, N_2\}$. Since $r(\bar{x}) \in K$, using (4.4), (4.6) and Lemma 4.3 we then conclude that
$$r(x_n) = \frac{1}{\lambda_n} y_n + \Big(1 - \frac{1}{\lambda_n}\Big) r(\bar{x}) \in \operatorname{int} K \quad \text{for all } n \ge \tilde{N},$$
i.e., $r(x_n) \in \operatorname{int} K$ for all $n \ge \tilde{N}$. But this implies that $h \in T\big(\{x \in S : r(x) \in \operatorname{int} K\}, \bar{x}\big)$, which completes the proof of Theorem 4.2.

Now, using Theorem 4.2 and the well-known theorem of Lyusternik (see e.g. [10], [11], [19] or [8], pp. 96), we can establish a necessary condition for a weakly $D_\varphi$-minimal solution of problem (4.2).

Theorem 4.4 Let the abstract optimization problem (4.2) be given such that (4.1) is satisfied, and assume that $\bar{x} \in S$ is a weakly $D_\varphi$-minimal solution of (4.2). Moreover, let $f$ and $g$ be Fréchet differentiable at $\bar{x}$, let $h$ be continuously Fréchet differentiable at $\bar{x}$ and let $h'(\bar{x})$ be surjective. Then there is no $x \in \operatorname{int} \hat{S}$ such that
$$f'(\bar{x})(x - \bar{x}) \in -\operatorname{int} D_\varphi, \qquad g(\bar{x}) + g'(\bar{x})(x - \bar{x}) \in -\operatorname{int} C_{Z_1}$$
and
$$h'(\bar{x})(x - \bar{x}) = 0_{Z_2}.$$

Proof: On the contrary, assume that there is an $x \in \operatorname{int} \hat{S}$ such that $f'(\bar{x})(x - \bar{x}) \in -\operatorname{int} D_\varphi$, $g(\bar{x}) + g'(\bar{x})(x - \bar{x}) \in -\operatorname{int} C_{Z_1}$ and $h'(\bar{x})(x - \bar{x}) = 0_{Z_2}$. We distinguish two cases.

First case: assume that $x = \bar{x}$. This implies $0_Y = f'(\bar{x}) 0_X = f'(\bar{x})(x - \bar{x}) \in -\operatorname{int} D_\varphi$, i.e., $0_Y \in \operatorname{int} D_\varphi$, contradicting the assumption $D_\varphi \subseteq H^+$ made in connection with (4.1).

Second case: assume that $x \neq \bar{x}$. Since Theorem 4.2 and also the theorem of Lyusternik are available, this case can be treated analogously to a more special case in [9].

We are now in a position to formulate the announced generalization of the Lagrange multiplier rule. For verifying the following result, the separation theorem of Eidelheit (see [9], p. 74) and Theorem 4.4 play an essential role.

Theorem 4.5 Let the abstract optimization problem (4.2) be given such that (4.1) is satisfied, and assume that $\bar{x} \in S$ is a weakly $D_\varphi$-minimal solution of (4.2). Moreover, let $f$ and $g$ be Fréchet differentiable at $\bar{x}$, let $h$ be continuously Fréchet differentiable at $\bar{x}$ and let the image set $h'(\bar{x})(X)$ be closed. Then there are a real number $\lambda \ge 0$ and continuous linear functionals $u \in C_{Z_1}^*$ and $v \in Z_2^*$ with $(\lambda, u, v) \neq (0, 0_{Z_1^*}, 0_{Z_2^*})$ such that
$$\lambda \langle a, f'(\bar{x})(x - \bar{x}) \rangle + u\big(g'(\bar{x})(x - \bar{x})\big) + v\big(h'(\bar{x})(x - \bar{x})\big) \ge 0 \quad \text{for all } x \in \hat{S} \qquad (4.7)$$
and
$$u(g(\bar{x})) = 0. \qquad (4.8)$$
If, in addition, there is an element $\hat{x} \in \operatorname{int} \hat{S}$ such that $g(\bar{x}) + g'(\bar{x})(\hat{x} - \bar{x}) \in -\operatorname{int} C_{Z_1}$ and $h'(\bar{x})(\hat{x} - \bar{x}) = 0_{Z_2}$, and if the map $h'(\bar{x})$ is surjective, then $\lambda > 0$.

To verify Theorem 4.5 we need an auxiliary result which is a direct consequence of a theorem in [12].

Lemma 4.6 Let $D_\varphi^*$ be the dual set of $D_\varphi$. If the property $D_\varphi \subseteq H^+$ is satisfied, then $D_\varphi^* = \operatorname{cone}\{a\}$.

Proof of Theorem 4.5: Using the same kind of arguments as in [9], pp. 166, we conclude that there are continuous linear functionals $t \in D_\varphi^*$, $u \in C_{Z_1}^*$ and $v \in Z_2^*$ with $(t, u, v) \neq (0_{Y^*}, 0_{Z_1^*}, 0_{Z_2^*})$, $u(g(\bar{x})) = 0$ and
$$\big(t \circ f'(\bar{x}) + u \circ g'(\bar{x}) + v \circ h'(\bar{x})\big)(x - \bar{x}) \ge 0 \quad \text{for all } x \in \hat{S}. \qquad (4.9)$$
Moreover, since $D_\varphi \subseteq H^+$, the equality $D_\varphi^* = \operatorname{cone}\{a\}$ follows from Lemma 4.6. Hence there is a $\lambda \ge 0$ with $t = \lambda \langle a, \cdot \rangle$. Then the assertion (4.7) follows directly from (4.9). If, in addition, there is an element $\hat{x} \in \operatorname{int} \hat{S}$ such that $g(\bar{x}) + g'(\bar{x})(\hat{x} - \bar{x}) \in -\operatorname{int} C_{Z_1}$ and $h'(\bar{x})(\hat{x} - \bar{x}) = 0_{Z_2}$, and if the map $h'(\bar{x})$ is surjective, then the property $t \neq 0_{Y^*}$ follows from arguments used in [9]. This implies $\lambda > 0$.

Remark 4.7 (i) The necessary optimality conditions presented in Theorem 4.5 generalize not only the Lagrange multiplier rule, but also the Fritz John conditions. Moreover, if $\lambda$ is positive (for instance, if the constraint qualification in the second part of Theorem 4.5 is satisfied), then we even obtain an extension of the Karush-Kuhn-Tucker conditions (compare e.g. [9]). (ii) If $\hat{S} = X$, then inequality (4.7) simplifies to
$$\lambda \langle a, f'(\bar{x})(\cdot) \rangle + u \circ g'(\bar{x}) + v \circ h'(\bar{x}) = 0_{X^*}.$$

We now apply the generalized multiplier rule stated in Theorem 4.5 to the case of finite-dimensional Hilbert spaces $Y$. Let us first consider a multiobjective optimization problem, i.e., we are concerned with problem (4.2) in the situation $Y = \mathbb{R}^n$.

Theorem 4.8 Let $Y = \mathbb{R}^n$ and assume that $f : \mathbb{R}^m \to \mathbb{R}^n$, $g : \mathbb{R}^m \to \mathbb{R}^k$ and $h : \mathbb{R}^m \to \mathbb{R}^p$ are given maps. Let $\mathbb{R}^k$ be partially ordered by the natural ordering cone $\mathbb{R}^k_+$. Assume that $D_\varphi$ is convex and satisfies $D_\varphi \subseteq H^+$. Let the feasible set $S$ be defined by
$$S := \{\, x \in \mathbb{R}^m : g_i(x) \le 0 \text{ for all } i \in \{1, \dots, k\},\ h_i(x) = 0 \text{ for all } i \in \{1, \dots, p\} \,\}.$$
Moreover, let $\bar{x} \in S$ be a weakly $D_\varphi$-minimal solution of the multiobjective optimization problem $\min_{x \in S} f(x)$, and assume that $f$ and $g$ are differentiable at $\bar{x}$ and that $h$ is continuously differentiable at $\bar{x}$. Let an $\hat{x} \in \mathbb{R}^m$ be given such that
$$\nabla g_i(\bar{x})^T (\hat{x} - \bar{x}) < 0 \ \text{ for all } i \in I(\bar{x}), \qquad \nabla h_i(\bar{x})^T (\hat{x} - \bar{x}) = 0 \ \text{ for all } i \in \{1, \dots, p\},$$
where the index set $I(\bar{x})$ is defined by $I(\bar{x}) := \{\, i \in \{1, \dots, k\} : g_i(\bar{x}) = 0 \,\}$ (the set of the active inequality restrictions at $\bar{x}$). Finally, assume that the gradients $\nabla h_1(\bar{x}), \dots, \nabla h_p(\bar{x})$ are linearly independent. Then there are multipliers $u_i \ge 0$ $(i \in I(\bar{x}))$ and $v_i \in \mathbb{R}$ $(i \in \{1, \dots, p\})$ with the property
$$\sum_{i=1}^{n} a_i \nabla f_i(\bar{x}) + \sum_{i \in I(\bar{x})} u_i \nabla g_i(\bar{x}) + \sum_{i=1}^{p} v_i \nabla h_i(\bar{x}) = 0_{\mathbb{R}^m}.$$

Proof: Applying Theorem 4.5 and Remark 4.7 we can argue similarly as in [9], pp. 173, where a more special case has been considered.
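The stationarity condition of Theorem 4.8 can be checked numerically on concrete data. The following hedged sketch does this for a small hypothetical bicriterial problem; the objective, constraint, candidate point and the vector $a$ are all assumptions made only for this illustration and are not taken from the paper.

```python
import numpy as np

# Hedged numerical check of the stationarity condition in Theorem 4.8 on a toy
# bicriterial problem (illustrative assumptions only): f(x) = (x1, x2), one
# inequality constraint g1(x) = 1 - x1 - x2 <= 0, no equality constraints,
# and a = (1, 1).
a = np.array([1.0, 1.0])
x_bar = np.array([0.5, 0.5])               # candidate weakly D_phi-minimal solution

grad_f = np.array([[1.0, 0.0],             # grad f_1(x_bar)
                   [0.0, 1.0]])            # grad f_2(x_bar)
grad_g_active = np.array([[-1.0, -1.0]])   # grad g_1(x_bar); g_1 is active at x_bar

# Stationarity: sum_i a_i grad f_i + sum_{i in I(x_bar)} u_i grad g_i = 0.
# Solve the resulting linear system for the multipliers u (least squares).
lhs = grad_g_active.T                      # columns: gradients of the active g_i
rhs = -(grad_f.T @ a)                      # -(sum_i a_i grad f_i)
u, residual, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

print(u)                                   # [1.]: a nonnegative multiplier
print(np.allclose(lhs @ u, rhs))           # True: the stationarity condition holds
```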

We are now interested in the case $Y = \mathcal{S}^n$, where $\mathcal{S}^n$ denotes the vector space of all real symmetric $n \times n$ matrices. Let this finite-dimensional space be endowed with the inner product $\langle \cdot,\cdot \rangle$ defined by $\langle M_1, M_2 \rangle := \operatorname{trace}(M_1 M_2)$ for all $M_1, M_2 \in \mathcal{S}^n$.

Theorem 4.9 Let $Y = \mathcal{S}^n$ and assume that $F : \mathbb{R}^m \to \mathcal{S}^n$ and $G : \mathbb{R}^m \to \mathcal{S}^k$ are given matrix-valued maps. Let the space $\mathcal{S}^k$ be partially ordered by a convex cone $C \subseteq \mathcal{S}^k$. Assume that $D_\varphi$ is convex and satisfies $D_\varphi \subseteq H^+$. Let the feasible set $S$ be defined by
$$S := \{\, x \in \mathbb{R}^m : G(x) \in -C \,\}.$$
Moreover, let $\bar{x} \in S$ be a weakly $D_\varphi$-minimal solution of the conic optimization problem $\min_{x \in S} F(x)$, and assume that $F$ and $G$ are elementwise differentiable at $\bar{x}$. Then there are a real number $\lambda \ge 0$ and a matrix $L \in C^*$ with $(\lambda, L) \neq (0, 0_{\mathcal{S}^k})$ such that
$$\lambda \nabla\big(\langle a, F(\cdot) \rangle\big)(\bar{x}) + \nabla\big(\langle L, G(\cdot) \rangle\big)(\bar{x}) = 0_{\mathbb{R}^m} \qquad (4.10)$$
and $\langle L, G(\bar{x}) \rangle = 0$. If, in addition, the equality
$$G'(\bar{x})(\mathbb{R}^m) + \operatorname{cone}\big(C + \{G(\bar{x})\}\big) = \mathcal{S}^k$$
holds, then $\lambda > 0$.

Proof: Applying Theorem 4.5 and Remark 4.7 we can argue similarly as in [8], pp. 204, where a more special case has been considered.

Remark 4.10 The vector space $\mathcal{S}^k$ considered in Theorem 4.9 is partially ordered by a convex cone $C \subseteq \mathcal{S}^k$. Typical examples of such convex cones can be found in [8], pp. 189, for instance the Löwner cone
$$\mathcal{S}^k_+ := \{\, M \in \mathcal{S}^k : M \text{ is positive semidefinite} \,\},$$
the $K$-copositive ordering cone (where $K \subseteq \mathbb{R}^k$ is a given convex cone)
$$C^k_K := \{\, M \in \mathcal{S}^k : x^T M x \ge 0 \text{ for all } x \in K \,\},$$
the nonnegative ordering cone
$$N^k := \{\, M \in \mathcal{S}^k : m_{ij} \ge 0 \text{ for all } i, j \in \{1, \dots, k\} \,\}$$
and the doubly nonnegative ordering cone $D^k := \mathcal{S}^k_+ \cap N^k$.

Remark 4.11 (i) It would be of interest under what additional hypotheses the necessary conditions given in Theorems 4.8 and 4.9 are also sufficient. Typically, certain generalized convexity concepts like quasiconvexity or pseudoconvexity are needed, adapted to the reference set $D_\varphi$. (ii) Another subject not studied in this paper would be a duality theory for optimization problems based on the relation $\leq_\varphi$.

References

[1] J.M. Borwein, Proper efficient points for maximizations with respect to cones, SIAM J. Control Optim.

[2] J.M. Borwein, The geometry of Pareto efficiency over cones, Math. Operationsforsch. Statist. Ser. Optim.

[3] C. Gerstewitz (Tammer), Nichtkonvexe Dualität in der Vektoroptimierung, Wissensch. Zeitschr. TH Leuna-Merseburg.

[4] C. Gerstewitz (Tammer), E. Iwanow, Dualität für nichtkonvexe Vektoroptimierungsprobleme, Wissensch. Zeitschr. TH Ilmenau.

[5] C. Gerth (Tammer), P. Weidner, Nonconvex separation theorems and some applications in vector optimization, J. Optim. Theory Appl.

[6] J. Jahn, Scalarization in vector optimization, Math. Program.

[7] J. Jahn, Existence theorems in vector optimization, J. Optim. Theory Appl.

[8] J. Jahn, Introduction to the Theory of Nonlinear Optimization, Springer, Berlin.

[9] J. Jahn, Vector Optimization - Theory, Applications, and Extensions, Springer, Berlin.

[10] A. Kirsch, W. Warth and J. Werner, Notwendige Optimalitätsbedingungen und ihre Anwendung, Lecture Notes in Economics and Mathematical Systems, 152, Springer, Berlin.

[11] L.A. Lyusternik, W.I. Sobolew, Elemente der Funktionalanalysis, Akademie-Verlag, Berlin.

[12] C. Sommer, Eine neue Ordnungsrelation in der Vektoroptimierung, Dissertation, Universität Erlangen-Nürnberg, 2012.

[13] C. Sommer, A new parameterized binary relation in vector optimization, Preprint, Erlangen, 2012.

[14] W. Vogel, Vektoroptimierung in Produkträumen, Hain, Meisenheim am Glan.

[15] P. Weidner, Dominanzmengen und Optimalitätsbegriffe in der Vektoroptimierung, Wissensch. Zeitschr. TH Ilmenau.

[16] P. Weidner, Charakterisierung von Mengen effizienter Elemente in linearen Räumen auf der Grundlage allgemeiner Bezugsmengen, Dissertation A, Universität Halle-Wittenberg.

[17] P. Weidner, Comparison of six types of separating functionals, in: H.-J. Sebastian, K. Tammer (eds.), System Modelling and Optimization, Leipzig, 1989, Springer, Berlin.

[18] P. Weidner, Ein Trennungskonzept und seine Anwendungen auf Vektoroptimierungsverfahren, Dissertation B, Universität Halle-Wittenberg.

[19] J. Werner, Optimization - Theory and Applications, Vieweg, Braunschweig.

Received: February 10, 2013


More information

Research Article Optimality Conditions and Duality in Nonsmooth Multiobjective Programs

Research Article Optimality Conditions and Duality in Nonsmooth Multiobjective Programs Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 2010, Article ID 939537, 12 pages doi:10.1155/2010/939537 Research Article Optimality Conditions and Duality in Nonsmooth

More information

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics

More information

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup International Mathematical Forum, Vol. 11, 2016, no. 8, 395-408 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2016.6220 The Split Hierarchical Monotone Variational Inclusions Problems and

More information

KKM-Type Theorems for Best Proximal Points in Normed Linear Space

KKM-Type Theorems for Best Proximal Points in Normed Linear Space International Journal of Mathematical Analysis Vol. 12, 2018, no. 12, 603-609 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ijma.2018.81069 KKM-Type Theorems for Best Proximal Points in Normed

More information

SHORT COMMUNICATION. Communicated by Igor Konnov

SHORT COMMUNICATION. Communicated by Igor Konnov On Some Erroneous Statements in the Paper Optimality Conditions for Extended Ky Fan Inequality with Cone and Affine Constraints and Their Applications by A. Capătă SHORT COMMUNICATION R.I. Boţ 1 and E.R.

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

Constrained Optimization Theory

Constrained Optimization Theory Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Technische Universität Ilmenau Institut für Mathematik

Technische Universität Ilmenau Institut für Mathematik Technische Universität Ilmenau Institut für Mathematik Preprint No. M 13/05 Properly optimal elements in vector optimization with variable ordering structures Gabriele Eichfelder and Refail Kasimbeyli

More information

NONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1.

NONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1. TAIWANESE JOURNAL OF MATHEMATICS Vol. 19, No. 2, pp. 455-466, April 2015 DOI: 10.11650/tjm.19.2015.4360 This paper is available online at http://journal.taiwanmathsoc.org.tw NONLINEAR SCALARIZATION CHARACTERIZATIONS

More information

Some Properties of D-sets of a Group 1

Some Properties of D-sets of a Group 1 International Mathematical Forum, Vol. 9, 2014, no. 21, 1035-1040 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2014.45104 Some Properties of D-sets of a Group 1 Joris N. Buloron, Cristopher

More information

LINEAR INTERVAL INEQUALITIES

LINEAR INTERVAL INEQUALITIES LINEAR INTERVAL INEQUALITIES Jiří Rohn, Jana Kreslová Faculty of Mathematics and Physics, Charles University Malostranské nám. 25, 11800 Prague, Czech Republic 1.6.1993 Abstract We prove that a system

More information

Constraint qualifications for nonlinear programming

Constraint qualifications for nonlinear programming Constraint qualifications for nonlinear programming Consider the standard nonlinear program min f (x) s.t. g i (x) 0 i = 1,..., m, h j (x) = 0 1 = 1,..., p, (NLP) with continuously differentiable functions

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

POLARS AND DUAL CONES

POLARS AND DUAL CONES POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.

More information

Lecture 6 - Convex Sets

Lecture 6 - Convex Sets Lecture 6 - Convex Sets Definition A set C R n is called convex if for any x, y C and λ [0, 1], the point λx + (1 λ)y belongs to C. The above definition is equivalent to saying that for any x, y C, the

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization

Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization RESEARCH Open Access Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization Kwan Deok Bae and Do Sang Kim * * Correspondence: dskim@pknu.ac. kr Department of Applied Mathematics, Pukyong

More information

Finite Dimensional Optimization Part I: The KKT Theorem 1

Finite Dimensional Optimization Part I: The KKT Theorem 1 John Nachbar Washington University March 26, 2018 1 Introduction Finite Dimensional Optimization Part I: The KKT Theorem 1 These notes characterize maxima and minima in terms of first derivatives. I focus

More information

Practice Exam 1: Continuous Optimisation

Practice Exam 1: Continuous Optimisation Practice Exam : Continuous Optimisation. Let f : R m R be a convex function and let A R m n, b R m be given. Show that the function g(x) := f(ax + b) is a convex function of x on R n. Suppose that f is

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Preprint Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN

Preprint Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN Fakultät für Mathematik und Informatik Preprint 2013-04 Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN 1433-9307 Stephan Dempe and

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS

CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS Abstract. The aim of this paper is to characterize in terms of classical (quasi)convexity of extended real-valued functions the set-valued maps which are

More information

Research Article Optimality Conditions of Vector Set-Valued Optimization Problem Involving Relative Interior

Research Article Optimality Conditions of Vector Set-Valued Optimization Problem Involving Relative Interior Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 2011, Article ID 183297, 15 pages doi:10.1155/2011/183297 Research Article Optimality Conditions of Vector Set-Valued Optimization

More information

Characterization of Weakly Primary Ideals over Non-commutative Rings

Characterization of Weakly Primary Ideals over Non-commutative Rings International Mathematical Forum, Vol. 9, 2014, no. 34, 1659-1667 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2014.49155 Characterization of Weakly Primary Ideals over Non-commutative Rings

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

More information

Additional Homework Problems

Additional Homework Problems Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

Math 5311 Constrained Optimization Notes

Math 5311 Constrained Optimization Notes ath 5311 Constrained Optimization otes February 5, 2009 1 Equality-constrained optimization Real-world optimization problems frequently have constraints on their variables. Constraints may be equality

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information