W-(H,Ω) CONJUGATE DUALITY THEORY IN MULTIOBJECTIVE NONLINEAR OPTIMIZATION
Management Science and Engineering, Vol. 1, No. 2, December 2007

W-(H,Ω) CONJUGATE DUALITY THEORY IN MULTIOBJECTIVE NONLINEAR OPTIMIZATION

Feng Junwen

Abstract: Duality now holds a major position in the theory of multiobjective programming, not only for its mathematical elegance but also for its economic implications. Working with weak efficiency instead of efficiency, this paper gives the definition and some fundamental properties of the weak supremum and infimum sets. Based on the weak supremum, the concepts, properties and interrelations of w-(H,Ω) conjugate maps, w-(H,Ω)-subgradients and w-Γ_H^p(Ω)-regularizations of vector-valued point-to-set maps are provided, and a new duality theory in multiobjective nonlinear optimization, the w-(H,Ω) conjugate duality theory, is established by means of the w-(H,Ω) conjugate maps. The w-(H,Ω)-Lagrangian map and the weak saddle-point are developed, together with their relations to the weak efficient solutions of the primal and dual problems. Finally, several special cases of H and Ω are discussed.

Key words: Conjugate duality theory, Multiobjective optimization, Weak efficiency

1. INTRODUCTION

In the duality theory of nonlinear programming, conjugate functions play important roles [1,2,8]. So, in order to discuss duality in multiobjective optimization problems, we need an extended notion of "conjugate" that fits multiobjective problems well. Tanino and Sawaragi [9] developed a duality theory for multiobjective convex programming problems under the Pareto optimality criterion. However, their duality theory is not reflexive in the sense of [1,2]. The nonreflexivity of their duality seems to be caused by the fact that the objective functions of the primal problems are vector-valued, while those of the dual problems are set-valued.
If we are concerned with the reflexivity of the duality theory, we have to start from set-valued objective functions in the primal problems. A new question then arises: what kinds of set-valued functions are suitable as objective functions? Feng [4] extended the results of [9] to a more general framework and provided a reflexive duality theory in multiobjective optimization based on efficiency. Based on weak efficiency rather than efficiency, Kawasaki [5,6] developed some interesting results by defining conjugates and subgradients via the weak supremum. In this article, a new

¹ Supported by NSFC No.
² School of Economics and Management, Nanjing University of Science and Technology, China.
* Received 3 July 2007; accepted 1 August 2007
duality theory for weak efficiency is developed with the help of the weak supremum and of the generalization of the conjugate relations discussed in [5]. In Section 2, the definitions of the supremum set and the infimum set, which are slightly different from Kawasaki's, are given and some of their properties are provided. In Section 3, we give the definitions of w-(H,Ω) conjugate maps, w-(H,Ω)-subgradients and w-Γ_H^p(Ω)-regularizations of point-to-set maps, and discuss some of their fundamental properties; the weak saddle-point problem is also discussed in that section. In Section 4, the w-(H,Ω) conjugate duality theory is developed, which states the relationships between the primal and dual problems. In Section 5, the w-(H,Ω)-Lagrangian map is defined and its weak saddle-point problem is discussed. In Section 6, several special cases of H and Ω are discussed.

Before going further, we introduce the following notation. Let R^p be the p-dimensional Euclidean space, let ε and −ε be the p-dimensional points whose components are all +∞ and −∞, respectively, and let R̄^p = R^p ∪ {ε, −ε}. Put K = {x ∈ R^p : x > 0} ∪ {0}. By K we obtain two partial orders ≤_K and <_K on R^p: for points a, b ∈ R^p, a ≤_K b if and only if b − a ∈ K, and a <_K b if and only if b − a ∈ int(K) = K∖{0}. These partial orders extend naturally: for any a ∈ R^p, −ε <_K a <_K ε. Thus for a, b ∈ R̄^p, a ≤_K b if and only if a_i = b_i for i = 1, 2, …, p or a_i < b_i for i = 1, 2, …, p, and a <_K b if and only if a_i < b_i for i = 1, 2, …, p; hence a ≤_K b if and only if a = b or a <_K b, and −ε <_K a <_K ε for all a ∈ R^p. For any a ∈ R̄^p, B ⊆ R̄^p and C ⊆ R̄^p, we define a + ε = ε, a + B = {a + b ∈ R̄^p : b ∈ B}, and B + C = {b + c ∈ R̄^p : b ∈ B, c ∈ C}. The p-dimensional point all of whose components equal a common a ∈ R̄ is simply denoted by a in this paper. For X ⊆ R^n and F : R^n → P(R̄^p), we sometimes write F(X) or ∪_x F(x) for ∪_{x∈X} F(x). Considering the length of the paper, we omit the proofs of all theorems.
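The componentwise orders just defined are easy to check mechanically. The following small Python sketch is our own illustration, not part of the original paper, and the function names are ours; it tests ≤_K and <_K on points of R^p:

```python
def lt_K(a, b):
    """a <_K b: every component of a is strictly below the corresponding one of b."""
    return all(x < y for x, y in zip(a, b))

def leq_K(a, b):
    """a <=_K b: a equals b, or a <_K b (the order induced by K = {x > 0} union {0})."""
    return tuple(a) == tuple(b) or lt_K(a, b)

# (1, 1) <_K (2, 3), while (1, 3) and (2, 2) are incomparable in either direction
print(lt_K((1, 1), (2, 3)))   # True
print(leq_K((1, 3), (2, 2)))  # False
print(leq_K((2, 2), (1, 3)))  # False
```

Note how far ≤_K is from a total order: distinct points that do not dominate each other in every component are simply incomparable, which is why the suprema and infima introduced below must be sets rather than single points.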
2. WEAK SUPREMUM AND INFIMUM SETS AND THEIR PROPERTIES

Definition 2.1 Let Y ⊆ R̄^p. A point y ∈ R̄^p is said to be a weak maximal (minimal) point of Y if y ∈ Y and there is no y′ ∈ Y satisfying y <_K y′ (y′ <_K y). The set of weak maximal (minimal) points of Y is denoted by w-max Y (w-min Y) and will be called the weak maximal (minimal) set of Y. In general, w-max Y or w-min Y may be empty; it is well known that if Y is a nonempty compact set in R^p, then w-max Y and w-min Y are nonempty [10].

Definition 2.2 Let Y ⊆ R̄^p. The sets S(Y−K) and I(Y+K) are defined by

S(Y−K) = {−ε}               if Y = ∅ or Y = {−ε};
          R̄^p               if R^p ⊆ Y−K or ε ∈ Y;
          cl((Y−K) ∩ R^p)   otherwise,

I(Y+K) = {ε}                if Y = ∅ or Y = {ε};
          R̄^p               if R^p ⊆ Y+K or −ε ∈ Y;
          cl((Y+K) ∩ R^p)   otherwise,

where, for B ⊆ R^p, cl(B) denotes the topological closure of B. Furthermore, we define

w-sup Y = w-max S(Y−K);  w-inf Y = w-min I(Y+K),
which will be called the weak supremum set and the weak infimum set of Y, respectively; their elements will be called weak supremum points and weak infimum points of Y, respectively. From the definition of the topological closure, we know that R^p ⊆ Y−K if and only if R^p ⊆ cl((Y−K) ∩ R^p), and R^p ⊆ Y+K if and only if R^p ⊆ cl((Y+K) ∩ R^p). When p = 1, w-sup Y = {sup Y} and w-inf Y = {inf Y}. For Y ⊆ R̄^p, w-sup Y = −w-inf(−Y) and w-inf Y = −w-sup(−Y); and if Y₁ ⊆ Y₂ ⊆ R̄^p, then S(Y₁−K) ⊆ S(Y₂−K) and I(Y₁+K) ⊆ I(Y₂+K).

We shall now introduce four new relations ≼, ⊑, ⊒ and ≍, which will be used repeatedly in this paper. Let Y₁, Y₂ ⊆ R̄^p. We write Y₁ ≼ Y₂ if there are no points y₁ ∈ Y₁ and y₂ ∈ Y₂ satisfying y₂ <_K y₁. We write Y₁ ⊑ Y₂ if Y₁ ≼ Y₂ and for any y₁ ∈ Y₁ there exists some y₂ ∈ Y₂ such that y₁ ≤_K y₂. We write Y₁ ⊒ Y₂ if −Y₂ ⊑ −Y₁. We write Y₁ ≍ Y₂ if both Y₁ ⊑ Y₂ and Y₁ ⊒ Y₂ hold. The following lemma is easily verified.

Lemma 2.1 Let Y₁, Y₂, Y₃ ⊆ R̄^p.
(1) Transitivity. If Y₁ ⊑ Y₂ and Y₂ ⊑ Y₃, then Y₁ ⊑ Y₃; if Y₁ ⊒ Y₂ and Y₂ ⊒ Y₃, then Y₁ ⊒ Y₃.
(2) Uniqueness. If Y₁ ⊑ Y₂ and Y₂ ⊑ Y₁, then Y₁ = Y₂; if Y₁ ⊒ Y₂ and Y₂ ⊒ Y₁, then Y₁ = Y₂. In each case, Y = w-max Y = w-min Y (Y = Y₁ or Y₂).
(3) Self-conversity. If Y₁ = w-min Y₁ or Y₁ = w-max Y₁, then Y₁ ⊑ Y₁ and Y₁ ⊒ Y₁.

Theorem 2.1 (Existence Theorem) Let Y ⊆ R̄^p. Then
(1) w-sup Y is a nonempty subset of R̄^p. Furthermore, for each y ∈ S(Y−K) there exists a point y′ ∈ w-sup Y such that y ≤_K y′; in particular, for each y ∈ S(Y−K)∖{−ε}, there exists a unique number t ∈ [0,+∞] such that y + t ∈ w-sup Y. Similarly, for each y ∉ S(Y−K) with y ≠ −ε, there exists some y′ ∈ w-sup Y such that y′ <_K y; in particular, if y ≠ ε, then there exists a unique number t ∈ [−∞,0] such that y + t ∈ w-sup Y.
(2) w-inf Y is a nonempty subset of R̄^p. Furthermore, for each y ∈ I(Y+K) there exists y′ ∈ w-inf Y such that y′ ≤_K y; in particular, for each y ∈ I(Y+K)∖{ε}, there exists a unique number t ∈ [−∞,0] such that y + t ∈ w-inf Y. Similarly, for each y ∉ I(Y+K) there exists y′ ∈ w-inf Y such that y <_K y′.
In particular, if y ≠ −ε, then there exists a unique number t ∈ [0,+∞] such that y + t ∈ w-inf Y.

Theorem 2.2 For any Y ⊆ R̄^p, w-max Y ⊆ w-sup Y and w-min Y ⊆ w-inf Y.

Theorem 2.3 Let Y ⊆ R̄^p. If w-sup Y ≠ {±ε}, then (w-sup Y + int(K))^c− = S(Y−K). Similarly, if w-inf Y ≠ {±ε}, then (w-inf Y − int(K))^c+ = I(Y+K). Here, for any B ⊆ R̄^p, we denote B^c− = R̄^p∖(B ∪ {−ε}) and B^c+ = R̄^p∖(B ∪ {ε}).

Theorem 2.4 Let Y ⊆ R̄^p. If w-sup Y ≠ {±ε}, then w-sup Y + K is closed in R^p. Similarly, if w-inf Y ≠ {±ε}, then w-inf Y − K is closed in R^p.

Theorem 2.5 Let a ∈ R^p, Y ⊆ R̄^p and Y′ ⊆ R̄^p. Then
(1) w-sup(Y+a) = a + w-sup Y; w-inf(Y+a) = a + w-inf Y.
(2) If Y ≠ ∅ and Y ≠ {ε}, then a ≼ Y if and only if a ∈ w-inf Y − K. Similarly, if Y ≠ ∅ and Y ≠ {−ε}, then Y ≼ a if and only if a ∈ w-sup Y + K. (Here a point a is identified with the singleton {a}.)
(3) If a ≼ Y and Y′ ⊑ Y, then a ≼ Y′.

Theorem 2.6 Let Y₁, Y₂ ⊆ R̄^p. If Y₁−K ⊆ Y₂−K, then w-sup Y₁ ⊑ w-sup Y₂. Similarly, if Y₁+K ⊆ Y₂+K, then w-inf Y₁ ⊒ w-inf Y₂.

Theorem 2.7 Let Y ⊆ R̄^p. Then
(1) w-inf(w-sup Y) = w-sup Y; w-sup(w-inf Y) = w-inf Y.
(2) w-max Y ⊆ w-inf(w-max Y); w-max Y ⊆ w-sup(w-max Y). With w-min in place of w-max, the conclusions still hold.

Theorem 2.8 Let Y, Y₁, Y₂ ⊆ R̄^p.
(1) If Y₁ ⊑ Y₂, then w-sup Y₁ ≼ w-inf Y₂.
(2) If the corresponding operations below make sense, i.e., no expression of the form ε + (−ε) appears, then w-sup Y₁ + w-sup Y₂ ≍ w-sup(Y₁+Y₂); hence w-sup Y₁ + w-sup Y₂ ⊆ w-sup(Y₁+Y₂) + K and w-sup(Y₁+Y₂) ⊆ w-sup Y₁ + w-sup Y₂ + K.

Theorem 2.9 Let Y ⊆ R̄^p and Y′ = {y ∈ R̄^p : y ≼ Y}; then w-sup Y′ = w-inf Y. Similarly, let Y″ = {y ∈ R̄^p : Y ≼ y}; then w-inf Y″ = w-sup Y.

Theorem 2.10 Let X be a vector space, let F : X → P(R̄^p) be a point-to-set map from X into R̄^p, and denote F(X) = ∪_{x∈X} F(x). Then w-sup F(X) = w-sup(∪_{x∈X} w-sup F(x)) and w-inf F(X) = w-inf(∪_{x∈X} w-inf F(x)). Hence, for Y ⊆ R̄^p, w-sup(w-sup Y) = w-sup Y and w-inf(w-inf Y) = w-inf Y.

3. W-(H,Ω) CONJUGATE MAPS, W-(H,Ω)-SUBGRADIENTS AND W-(H,Ω)-REGULARIZATIONS

Throughout this paper, X and Y designate two (finite-dimensional) locally convex Hausdorff spaces; usually we may suppose X = R^n and Y = R^m. For A any set in some vector space, P(A) denotes the power set of A.

Definition 3.1 Let F : X → P(R̄^p), let Ω be a family of vector-valued functions ω : X → Y, and let H be a family of vector-valued functions h : Y → R̄^p that is closed under pointwise w-sup, i.e., if H′ ⊆ H, then w-sup{h′ : h′ ∈ H′} ∈ H, where w-sup is taken pointwise: for y ∈ Y, (w-sup{h′})(y) = w-sup{h′(y) : h′ ∈ H′}. Under the above assumptions, the w-(H,Ω) conjugate map w-F^c : Ω → P(H) of F is defined by the formula

w-F^c(ω) = w-sup{h : h ∈ H, h∘ω ≼ F} for ω ∈ Ω,

where h∘ω ≼ F means that {h(ω(x))} ≼ F(x) for every x ∈ X;
that is, w-F^c(ω)(y) = w-sup{h(y) : h ∈ H, h∘ω ≼ F} for ω ∈ Ω and y ∈ Y. Conversely, let G : Ω → P(H); the w-(H,X) conjugate map w-G* : X → P(R̄^p) of G is defined by the formula

w-G*(x) = w-sup{h(ω(x)) : h ∈ H, ω ∈ Ω, h ∈ G(ω)} for x ∈ X.

It is easily verified that if w-sup G(ω) = G(ω) for ω ∈ Ω, then w-G*(x) = w-sup ∪_ω G(ω)(ω(x)) for x ∈ X. Moreover, the w-(H,X) conjugate map of w-F^c is called the w-(H,Ω) biconjugate map of F and is denoted by w-F^c*, i.e., w-F^c* : X → P(R̄^p), w-F^c* = w-(w-F^c)*, and w-F^c*(x) = w-sup ∪_{ω∈Ω} w-F^c(ω)(ω(x)) for x ∈ X.

Definition 3.2 (w-Γ_H^p(Ω) Family) The set of point-to-set maps F : X → P(R̄^p) which can be written in the form

F(x) = w-sup{h(ω(x)) : h ∈ H′, ω ∈ Ω′} for x ∈ X,

where H′ ⊆ H, Ω′ ⊆ Ω and Ω′ is nonempty, is denoted by w-Γ_H^p(Ω).

Definition 3.3 (w-Γ_H^p(Ω)-regularization) Let F : X → P(R̄^p). The point-to-set map F̃(H,Ω) : X → P(R̄^p) defined by the formula

F̃(H,Ω)(x) = w-sup{h(ω(x)) : h ∈ H, ω ∈ Ω, h∘ω ≼ F} for x ∈ X

is called the w-Γ_H^p(Ω)-regularization of F.

Let H_L = {h_b : h_b(t) = t + b (t ∈ R̄^p), b ∈ R^p} and Ω_L = {ω* : ω*(x) = ⟨⟨x, x*⟩⟩ (x ∈ X), x* ∈ X*}, where X* is the dual space of X, ⟨·,·⟩ is a bilinear pairing between X and X*, and ⟨⟨·,·⟩⟩ = (⟨·,·⟩, ⟨·,·⟩, …, ⟨·,·⟩).

Theorem 3.1 Let H = H_L, Y = R^p, F : X → P(R̄^p) and G : Ω → P(H). Then w-F^c can be written

w-F^c(ω)(t) = t − w-sup ∪_x (ω(x) − F(x)) for ω ∈ Ω and t ∈ Y.

Similarly, w-G* can be written w-G*(x) = w-sup ∪_ω (ω(x) + Ā(ω)) for x ∈ X, where Ā(ω) = w-inf A(ω) and A(ω) satisfies G(ω)(t) = t + A(ω) for t ∈ Y and ω ∈ Ω. In particular, w-F^c* can be written w-F^c*(x) = w-sup ∪_ω (ω(x) − F̄(ω)) for x ∈ X, where F̄(ω) = w-sup ∪_x (ω(x) − F(x)) for ω ∈ Ω.

We have seen that when H = H_L and Ω = Ω_L, w-F^c is nothing more than the conjugate map discussed in [8] with w-sup substituted for Max. The relationships between the w-(H,Ω) conjugate map and the w-Γ_H^p(Ω)-regularization of F are given as follows.

Theorem 3.2 Let F : X → P(R̄^p) and G : Ω → P(H). Then
(1) F̃(H,Ω) ∈ w-Γ_H^p(Ω), F̃(H,Ω) ≼ F, F̃(H,Ω) = w-F^c*, and w-G* ∈ w-Γ_H^p(Ω).
(2) If F′ ∈ w-Γ_H^p(Ω) and F′ ≼ F, then F′ ≼ F̃(H,Ω).
(3) F̃(H,Ω) = F if and only if F ∈ w-Γ_H^p(Ω).
(4) w-F^c(ω)(ω(x)) ≼ F(x) for ω ∈ Ω and x ∈ X.
Similar statements can be made for G : Ω → P(H).

Definition 3.4 (w-Σ_H^p(Ω) Family) The set of point-to-set maps G : Ω → P(H) which can be written G(ω) = w-sup{h : h ∈ H_ω} for ω ∈ Ω, where H_ω ⊆ H for each ω ∈ Ω, is denoted by w-Σ_H^p(Ω).

Definition 3.5 (w-Σ_H^p(Ω)-regularization) Let G : Ω → P(H). The point-to-set map G̃(H,Ω) : Ω → P(H) defined by

G̃(H,Ω)(ω) = w-sup{h : h ∈ H, h ≼ G(ω)} = w-sup{h : h ∈ H, {h(t)} ≼ G(ω)(t) for all t ∈ Y} for ω ∈ Ω

is called the w-Σ_H^p(Ω)-regularization of G. Dually to Theorem 3.2, we have:

Theorem 3.3 Let G : Ω → P(H) and F : X → P(R̄^p). Then
(1) G̃(H,Ω) ∈ w-Σ_H^p(Ω), G̃(H,Ω) ≼ G, w-G̃* = w-G*, G̃ ≼ w-G*^c, and w-F^c ∈ w-Σ_H^p(Ω).
(2) If G′ ∈ w-Σ_H^p(Ω) and G′ ≼ G, then G′ ≼ G̃(H,Ω).
(3) G̃(H,Ω) = G if and only if G ∈ w-Σ_H^p(Ω).
(4) If w-sup G(ω) = G(ω) for some ω ∈ Ω, then G(ω)(ω(x)) ≼ w-G*(x) for x ∈ X.

Definition 3.6 F : X → P(R̄^p) is said to be w-(H,Ω)-subdifferentiable at (x;y) if y ∈ F(x), x ∈ X, and there exist h ∈ H and ω ∈ Ω such that h(ω(x)) = y and h∘ω ≼ F. Such an ω ∈ Ω is called a w-(H,Ω)-subgradient of F at (x;y). The set of all w-(H,Ω)-subgradients of F at (x;y) is called the w-(H,Ω)-subdifferential of F at (x;y), denoted by w-∂_Ω F(x;y). Moreover, we define w-∂_Ω F(x) = ∪_y w-∂_Ω F(x;y). If w-∂_Ω F(x;y) ≠ ∅ for every y ∈ F(x), we say that F is w-(H,Ω)-subdifferentiable at x.

Definition 3.7 G : Ω → P(H) is said to be w-(H,X)-subdifferentiable at (ω;h) if h ∈ G(ω), ω ∈ Ω, and there exists x ∈ X such that h(ω(x)) ∈ w-G*(x) and h∘ω ≼ w-G*. Such an x ∈ X is called a w-(H,X)-subgradient of G at (ω;h). The set of all w-(H,X)-subgradients of G at (ω;h) is called the w-(H,X)-subdifferential of G at (ω;h) and will be denoted by w-∂_X G(ω;h). Moreover, let w-∂_X G(ω) = ∪_h w-∂_X G(ω;h). If w-∂_X G(ω;h) ≠ ∅ for each h ∈ G(ω), we say that G is w-(H,X)-subdifferentiable at ω.

Theorem 3.4 Let F : X → P(R̄^p).
(1) ω ∈ w-∂_Ω F(x;y) if and only if y ∈ F(x) and y ∈ w-F^c(ω)(ω(x)).
(2) w-∂_Ω F(x;y) ⊆ w-∂_Ω w-F^c*(x;y) for x ∈ X and y ∈ R̄^p.
(3) If w-F^c*(x) = F(x) and w-F^c = (w-F^c*)^c, then w-∂_Ω F(x;y) = w-∂_Ω w-F^c*(x;y) for y ∈ w-F^c*(x).
(4) Suppose that Ω contains an element 0_Ω satisfying 0_Ω(x) = 0_Y for x ∈ X, and that H has the following property: for each a ∈ R̄^p, there exists h_a ∈ H such that h_a(0) = a. Then 0_Ω ∈ w-∂_Ω F(x;y) if and only if y ∈ F(x) ∩ w-min ∪_x F(x).

Corollary Let F : X → P(R̄^p).
(1) w-∂_Ω F(x) ⊆ w-∂_Ω w-F^c*(x) for x ∈ X.
(2) If w-F^c*(x) = F(x) and w-F^c = (w-F^c*)^c, then w-∂_Ω F(x) = w-∂_Ω w-F^c*(x).
(3) ω ∈ w-∂_Ω F(x) if and only if F(x) ∩ w-F^c(ω)(ω(x)) ≠ ∅.
(4) Under the condition of Theorem 3.4(4), 0_Ω ∈ w-∂_Ω F(x) if and only if F(x) ∩ w-min ∪_x F(x) ≠ ∅.

Theorem 3.5 Let G : Ω → P(H).
(1) If x ∈ w-∂_X G(ω;h), then h(ω(x)) ∈ G(ω)(ω(x)) ∩ w-G*^c(ω)(ω(x)).
(2) If w-∂_X G(ω;h) ≠ ∅, then there exists h′ ∈ H such that w-∂_X w-G*^c(ω;h′) ≠ ∅.
(3) If w-G*^c(ω) = G(ω), then w-∂_X G(ω;h) ⊆ w-∂_X w-G*^c(ω;h) for h ∈ w-G*^c(ω).

Corollary Let G : Ω → P(H).
(1) If w-∂_X G(ω) ≠ ∅, then G(ω)(ω(x)) ∩ w-G*^c(ω)(ω(x)) ≠ ∅ for some x ∈ X.
(2) w-∂_X G(ω) ⊆ w-∂_X w-G*^c(ω).
(3) If w-G*^c(ω) = G(ω), then w-∂_X G(ω) = w-∂_X w-G*^c(ω).

The relationship between w-(H,Ω)-subgradients and w-(H,X)-subgradients is as follows.

Theorem 3.6 Let F : X → P(R̄^p) and G : Ω → P(H).
(1) If ω ∈ w-∂_Ω F(x;y), then there exists h ∈ w-F^c(ω) such that x ∈ w-∂_X w-F^c(ω;h). Conversely, if F ∈ w-Γ_H^p(Ω) and x ∈ w-∂_X w-F^c(ω;h), then there exists y ∈ R̄^p such that ω ∈ w-∂_Ω F(x;y).
(2) If x ∈ w-∂_X G(ω;h), then there exists y ∈ R̄^p such that ω ∈ w-∂_Ω w-G*(x;y). Conversely, if w-G*^c(ω) = G(ω) and ω ∈ w-∂_Ω w-G*(x;y), then there exists h ∈ H such that x ∈ w-∂_X G(ω;h).

In the following, we discuss the weak saddle-point problem for point-to-set maps. Let L : X × Y → P(R̄^p), where X and Y are two finite-dimensional locally convex Hausdorff spaces.

Definition 3.8 The point (x̄,ȳ) ∈ X × Y is called a weak saddle-point of the map L if there exists b ∈ L(x̄,ȳ) such that L(x̄,y) ≼ {b} ≼ L(x,ȳ) for all x ∈ X and y ∈ Y. When p = 1 and L : X × Y → R, this definition coincides, in a sense, with that of a saddle-point of a function. In the following, for convenience, we shall use the notations:
w-sup L_x = w-sup ∪_y L(x,y);  w-inf L_y = w-inf ∪_x L(x,y);
w-inf w-sup L = w-inf ∪_x w-sup L_x;  w-sup w-inf L = w-sup ∪_y w-inf L_y;
w-min w-sup L = w-min ∪_x w-sup L_x;  w-max w-inf L = w-max ∪_y w-inf L_y.

By the definition of the weak saddle-point, for b ∈ L(x,y), (x,y) is a weak saddle-point of L if and only if b ∈ (w-max ∪_y L(x,y)) ∩ (w-min ∪_x L(x,y)), if and only if b ∈ (w-max ∪_y w-min ∪_x L(x,y)) ∩ (w-min ∪_x w-max ∪_y L(x,y)).

4. W-(H,Ω) DUALITY IN MULTIOBJECTIVE OPTIMIZATION

Let X, U and Y be three locally convex Hausdorff spaces, where X is called the decision space and U the perturbation space. Let Φ be a point-to-set map from X × U into R̄^p, i.e., Φ : X × U → P(R̄^p); we assume that Φ is not identically empty on X × {0}. Let Ω_X and Ω_U be two families of functions from X into Y and from U into Y, respectively. We denote Ω′ = Ω_X ⊕ Ω_U and Ω = Ω_U, where Ω′ is a family of functions from X × U into Y: ω′ ∈ Ω′ if and only if there exist ω ∈ Ω_X and θ ∈ Ω_U such that ω′ = ω ⊕ θ, that is, ω′(x,u) = ω(x) + θ(u) for (x,u) ∈ X × U. For ω′ = ω ⊕ θ, we sometimes write ω′ = (ω,θ). In particular, we assume that Ω_X contains the element 0_X satisfying 0_X(x) = 0_Y, where 0_Y is the zero element of Y. Finally, let H be a family of functions from Y into R̄^p, closed under pointwise w-sup. We then consider the following optimization problems with set-valued objective functions:

(MO) The original problem: find x̄ ∈ X such that Φ(x̄,0) ∩ w-min ∪_x Φ(x,0) ≠ ∅.
(MP) The primal problem: find x̄ ∈ X such that Φ(x̄,0) ∩ w-inf ∪_x Φ(x,0) ≠ ∅.
(MD′) The w-(H,Ω) conjugate dual problem of (MO): find θ̄ ∈ Ω such that w-Φ^c(0_X,θ̄)(θ̄(0)) ∩ w-max ∪_θ w-Φ^c(0_X,θ)(θ(0)) ≠ ∅.
(MD) The w-(H,Ω) conjugate dual problem of (MP): find θ̄ ∈ Ω such that w-Φ^c(0_X,θ̄)(θ̄(0)) ∩ w-sup ∪_θ w-Φ^c(0_X,θ)(θ(0)) ≠ ∅.

The solutions of (MO), (MP), (MD′) and (MD) in X or Ω are called weak efficient solutions, and their sets are denoted by w-eff(MO), w-eff(MP), w-eff(MD′) and w-eff(MD), respectively.
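For a finite outcome set, the weak minimal and weak maximal sets of Definition 2.1, the objects that appear as w-min ∪_x Φ(x,0) in problem (MO), can be computed directly. The following Python sketch is our own illustration, with function names of our choosing, not code from the paper:

```python
def w_min(Y):
    """Weak minimal set of a finite Y in R^p: keep y unless some y' in Y is
    strictly smaller than y in every component (i.e. y' <_K y)."""
    pts = [tuple(p) for p in Y]
    return [y for y in pts
            if not any(all(a < b for a, b in zip(yp, y)) for yp in pts)]

def w_max(Y):
    """Weak maximal set, via the symmetry w-max Y = -w-min(-Y)."""
    neg = [[-a for a in p] for p in Y]
    return [tuple(-a for a in y) for y in w_min(neg)]

Y = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
print(w_min(Y))  # [(1, 4), (2, 2), (4, 1)]: (3, 3) and (4, 4) are strictly dominated by (2, 2)
print(w_max(Y))  # [(1, 4), (4, 1), (4, 4)]
```

Note that a point such as (1, 4) is both weakly minimal and weakly maximal here: only strict componentwise domination removes a point, which is precisely the difference between weak efficiency and Pareto efficiency.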
Moreover, denote

w-min(MP) = w-min ∪_x Φ(x,0);  w-inf(MP) = w-inf ∪_x Φ(x,0);
w-max(MD) = w-max ∪_θ w-Φ^c(0_X,θ)(θ(0));  w-sup(MD) = w-sup ∪_θ w-Φ^c(0_X,θ)(θ(0)).

Theorem 4.1 w-eff(MO) = w-eff(MP); w-eff(MD′) = w-eff(MD).

By virtue of this theorem, we only have to consider the duality between (MP) and (MD) instead of (MO) and (MD′). The first duality result is the following weak duality.

Theorem 4.2 (w-(H,Ω) Weak Duality Theorem)
(1) w-max(MD) ≼ w-min(MP); w-max(MD) ≼ Φ(x,0) for x ∈ X; w-Φ^c(0_X,θ)(θ(0)) ≼ w-min(MP) for θ ∈ Ω; w-max(MD) ≼ ∪_x Φ(x,0); ∪_θ w-Φ^c(0_X,θ)(θ(0)) ≼ w-min(MP).
(2) If x̄ ∈ X and θ̄ ∈ Ω satisfy w-Φ^c(0_X,θ̄)(θ̄(0)) ∩ Φ(x̄,0) ≠ ∅, then x̄ ∈ w-eff(MP) and θ̄ ∈ w-eff(MD).
(3) If x̄ ∈ X satisfies Φ(x̄,0) ∩ w-max(MD) ≠ ∅, then x̄ ∈ w-eff(MP) and w-min(MP) ∩ w-max(MD) ≠ ∅. With w-sup(MD) in place of w-max(MD), the conclusion still holds.
(4) If θ̄ ∈ Ω satisfies w-Φ^c(0_X,θ̄)(θ̄(0)) ∩ w-min(MP) ≠ ∅, then θ̄ ∈ w-eff(MD) and w-min(MP) ∩ w-max(MD) ≠ ∅. With w-inf(MP) in place of w-min(MP), the conclusion still holds.

Theorem 4.3 w-sup(MD) ≼ w-inf(MP).

The perturbed problems relative to (MP) and (MD) are given as follows.

(MP_u) The primal perturbed problem: find x̄ ∈ X such that Φ(x̄,u) ∩ w-inf ∪_x Φ(x,u) ≠ ∅.
(MD_ω) The dual perturbed problem: find θ̄ ∈ Ω such that w-Φ^c(ω,θ̄)(ω(0)+θ̄(0)) ∩ w-sup ∪_θ w-Φ^c(ω,θ)(ω(0)+θ(0)) ≠ ∅.

Definition 4.1 The point-to-set maps P : U → P(R̄^p) and D : Ω_X → P(H) defined by

P(u) = w-inf ∪_x Φ(x,u) for u ∈ U;
D(ω)(t) = w-sup ∪_θ w-Φ^c(ω,θ)(t+θ(0)) for ω ∈ Ω_X and t ∈ Y

are called the primal and dual perturbed maps, respectively. The following theorem shows how (MP) and (MD) are described by means of P and D.

Theorem 4.4
(1) w-P^c(θ) = w-Φ^c(0_X,θ) for θ ∈ Ω, that is, w-P^c(θ)(t) = w-Φ^c(0_X,θ)(t) for θ ∈ Ω and t ∈ Y.
(2) w-D*(x) = w-Φ^c*(x,0) for x ∈ X.
(3) If Φ ∈ w-Γ_H^p(Ω′), and in particular as long as Φ(x,0) = w-Φ^c*(x,0) for x ∈ X, then w-D*(x) = Φ(x,0) for x ∈ X.

ASSUMPTION (A1) For each a ∈ R̄^p, there is h_a ∈ H such that h_a(0) = a.
ASSUMPTION (A2) Φ ∈ w-Γ_H^p(Ω′), or Φ is w-Γ_H^p(Ω′)-regularizable at (x,0) ∈ X × U for each x ∈ X, i.e., w-Φ^c*(x,0) = Φ(x,0) for x ∈ X.
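In the scalar case p = 1, the weak duality of Theorem 4.3 collapses to classical Lagrangian weak duality, sup(MD) ≤ inf(MP), which can be checked numerically. The following Python sketch uses a toy convex program of our own choosing on a coarse grid; it is only an illustration of the scalar specialization, not the paper's set-valued machinery:

```python
# Toy problem: minimize x^2 subject to g(x) = 1 - x <= 0, i.e. x >= 1.
xs = [i / 100.0 for i in range(-300, 301)]   # grid for the decision variable x
lams = [i / 100.0 for i in range(0, 501)]    # grid for the multiplier, lam >= 0

primal = min(x * x for x in xs if x >= 1.0)  # optimal value over feasible grid points

def dual_val(lam):
    # Lagrangian dual function: inf_x [ x^2 + lam * (1 - x) ], inner inf on the grid
    return min(x * x + lam * (1.0 - x) for x in xs)

dual = max(dual_val(lam) for lam in lams)

# Weak duality: the dual value never exceeds the primal value.
assert dual <= primal + 1e-9
print(round(primal, 6), round(dual, 6))  # 1.0 1.0
```

Here the two values coincide, mirroring the strong duality of Theorem 4.9 for a stable convex problem; for nonconvex problems only the inequality is guaranteed.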
Theorem 4.5
(1) w-sup(MD) = w-P^c*(0); w-inf(MP) = P(0).
(2) Under assumptions (A1) and (A2), w-inf(MP) = w-D*^c(0_X)(0).

We now introduce two concepts: normality and stability.

Definition 4.2 (MP) is called w-(H,Ω)-stable if P is w-(H,Ω)-subdifferentiable at u = 0 ∈ U. Dually, (MD) is called w-(H,X)-stable if D is w-(H,X)-subdifferentiable at ω = 0_X ∈ Ω_X.

Definition 4.3 (MP) is called w-(H,Ω)-normal if P satisfies P(0) = w-P^c*(0). Dually, (MD) is called w-(H,X)-normal if D satisfies D(0_X)(0) = w-D*^c(0_X)(0).

Theorem 4.6
(1) (MP) is w-(H,Ω)-normal if and only if w-inf(MP) = w-sup(MD).
(2) Under assumptions (A1) and (A2), (MD) is w-(H,X)-normal if and only if w-inf(MP) = w-sup(MD), if and only if (MP) is w-(H,Ω)-normal.

Theorem 4.7
(1) If w-P^c*(0) = P(0), then w-eff(MD) = w-∂_Ω P(0).
(2) Under assumptions (A1) and (A2), w-eff(MP) = w-∂_X w-D*^c(0_X). Moreover, if w-D*^c(0_X)(0) = D(0_X)(0), then w-eff(MP) = w-∂_X D(0_X).

Theorem 4.8
(1) (MP) is w-(H,Ω)-stable if and only if w-inf(MP) = w-sup(MD) = w-max(MD).
(2) Under assumptions (A1) and (A2), (MD) is w-(H,X)-stable if and only if w-inf(MP) = w-sup(MD) = w-min(MP).

Definition 4.4 The set w-DGS(MP,MD) = w-inf(MP) ∩ w-sup(MD) is called the w-(H,Ω) conjugate dual gap set of (MP) and (MD), briefly the dual gap set.

From the results of the above theorems, we have the following w-(H,Ω) strong duality theorem based on w-(H,Ω)-stability, i.e., on w-(H,Ω)-subgradients.

Theorem 4.9 (w-(H,Ω) Strong Duality Theorem)
(1) If (MP) is w-(H,Ω)-stable, then (MD) has a solution θ̄ ∈ Ω, θ̄ ∈ w-eff(MD). Moreover, (MP) is w-(H,Ω)-stable if and only if for each y ∈ w-inf(MP) there is θ̄ ∈ w-eff(MD) such that y ∈ w-Φ^c(0_X,θ̄)(θ̄(0)). And if (MP) is w-(H,Ω)-stable and w-min(MP) ≠ ∅, then for each x̄ ∈ w-eff(MP) there is θ̄ ∈ Ω such that Φ(x̄,0) ∩ w-Φ^c(0_X,θ̄)(θ̄(0)) ≠ ∅.
(2) (MD) has a solution, i.e., w-eff(MD) ≠ ∅, and w-DGS(MP,MD) = w-inf(MP) or
w-DGS(MP,MD) = w-sup(MD), if and only if (MP) is w-(H,Ω)-stable. In this case, w-eff(MD) = w-∂_Ω P(0).

Theorem 4.10 (w-(H,Ω) Inverse Duality Theorem) Under assumptions (A1) and (A2), we have:
(1) (MD) is w-(H,X)-stable if and only if for each y ∈ w-sup(MD) there exists x̄ ∈ w-eff(MP) such that y ∈ Φ(x̄,0). In particular, if (MD) is w-(H,X)-stable, then w-eff(MP) ≠ ∅, and for each θ̄ ∈ w-eff(MD) there is x̄ ∈ w-eff(MP) such that w-Φ^c(0_X,θ̄)(θ̄(0)) ∩ Φ(x̄,0) ≠ ∅.
(2) w-eff(MP) ≠ ∅ and w-DGS(MP,MD) = w-inf(MP) or w-sup(MD), if and only if (MD) is w-(H,X)-stable. In this case, w-eff(MP) = w-∂_X D(0_X).

Corollary Under assumptions (A1) and (A2), (MP) is w-(H,Ω)-stable and w-eff(MP) ≠ ∅ if and only if (MD) is w-(H,X)-stable and w-eff(MD) ≠ ∅. In this case, w-inf(MP) = w-sup(MD) = w-max(MD) = w-min(MP).

5. W-(H,Ω)-LAGRANGIAN MAPS

In this section, we introduce the w-(H,Ω)-Lagrangian map of (MP) relative to the given perturbation Φ, and clarify the relationship between the pair (MP), (MD) and the weak saddle-points of the w-(H,Ω)-Lagrangian map.

Definition 5.1 The w-(H,Ω)-Lagrangian map L : X × Ω → P(R̄^p) of (MP) relative to Φ is defined by L(x,θ) = w-Φ_x^c(θ)(θ(0)) for x ∈ X and θ ∈ Ω, where, for each x ∈ X, Φ_x : U → P(R̄^p) is given by Φ_x(u) = Φ(x,u) for u ∈ U. The problems (MP) and (MD) are represented by the w-(H,Ω)-Lagrangian map L as follows.

Theorem 5.1
(1) w-Φ^c(0_X,θ)(θ(0)) ≼ w-inf L_θ for θ ∈ Ω. If H is closed under pointwise w-inf, then for θ ∈ Ω, w-Φ^c(0_X,θ)(θ(0)) = w-inf ∪_x L(x,θ) = w-inf L_θ.
(2) w-sup L_x ≼ w-inf Φ(x,0) for x ∈ X. If Φ_x ∈ w-Γ_H^p(Ω), then w-sup L_x = Φ(x,0); hence, if Φ_x ∈ w-Γ_H^p(Ω) for every x ∈ X, then w-sup L_x = Φ(x,0) for every x ∈ X.

Theorem 5.2 If H is closed under pointwise w-inf and Φ_x ∈ w-Γ_H^p(Ω) for x ∈ X, then the following conditions are equivalent to each other:
(1) (x̄,θ̄) ∈ X × Ω is a weak saddle-point of L;
(2) x̄ ∈ w-eff(MP), θ̄ ∈ w-eff(MD) and Φ(x̄,0) ∩ w-Φ^c(0_X,θ̄)(θ̄(0)) ≠ ∅;
(3) Φ(x̄,0) ∩ w-Φ^c(0_X,θ̄)(θ̄(0)) ≠ ∅.

Theorem 5.3 Under the condition of Theorem 5.2, if (MP) is w-(H,Ω)-stable, then the following conditions are equivalent to each other:
(1) x̄ ∈ w-eff(MP);
(2) there exists θ̄ ∈ Ω such that (x̄,θ̄) is a weak saddle-point of L; in this case, θ̄ ∈ w-eff(MD).

Theorem 5.4 Under the condition of Theorem 5.2, if assumptions (A1) and (A2) hold and (MD) is w-(H,X)-stable, then the following conditions are equivalent to each other:
(1) θ̄ ∈ w-eff(MD);
(2) there exists x̄ ∈ X such that (x̄,θ̄) is a weak saddle-point of L; in this case, x̄ ∈ w-eff(MP).

6. APPLICATIONS FOR SEVERAL SPECIAL CASES OF H AND Ω

CASE 1. Vector dual variable. Take H = H_L, Ω_X = {⟨⟨x*,·⟩⟩ : x* ∈ X*} and Ω_U = {⟨⟨u*,·⟩⟩ : u* ∈ U*}, where X* and U* are the dual spaces of X and U, respectively, and X and X* (or U and U*) are placed in duality by a linear pairing ⟨·,·⟩. In this special case for H and Ω, the results of [5,6] are nothing more than those of this paper.

CASE 2. Matrix dual variable. Consider the following original problem: w-min{f(x) : x ∈ X₀}, where X₀ = {x ∈ X : g(x) ≤_Q 0}, X ⊆ R^n, and
(1) Q is a pointed closed convex cone in R^m with nonempty interior int(Q);
(2) f : R^n → R^p is continuous and convex;
(3) g : R^n → R^m is continuous and Q-convex.
Let

Φ(x,u) = f(x) + K   if x ∈ X, g(x) ≤_Q u, u ∈ R^m;
         {ε}        otherwise,

and H = H_L, Ω = {Λ ∈ R^{p×m} : ΛQ ⊆ K∖∂K}, where ∂K denotes the boundary of K. In this special case, the results of this paper reduce to those of Chapter 6 of [8].

7. CONCLUSIONS

In this paper, based on weak efficiency, we have extended the concepts of conjugacy and subdifferentiability of functions to point-to-set maps, and developed a duality theory in
multiobjective optimization. One possible generalization of this paper is to consider other kinds of K, for example K = int(D) ∪ {0}, where D is a convex cone with nonempty interior. This form of K is useful for developing the duality theory of multiobjective optimization with a domination structure; [4] has discussed this generalization. Finally, we hope to contribute, in some sense, to solving multiobjective optimization problems by taking special cases of H and Ω.

REFERENCES

[1] Ekeland, I. and Temam, R. Convex Analysis and Variational Problems. North-Holland, Amsterdam.
[2] Feng, J. (H,Ω) duality theory for nonlinear programming. Systems Science and Mathematical Sciences, 10(2).
[3] Feng, J. (H,Ω) conjugate function theory. Systems Science and Mathematical Sciences, 10(1).
[4] Feng, J. Duality Theory, Stochastic Method and Their Applications of Multiobjective Decision-Making. Ph.D. Dissertation, School of Management, Beijing University of Aeronautics and Astronautics.
[5] Kawasaki, H. Conjugate relations and weak subdifferentials of relations. Mathematics of Operations Research, 6(4).
[6] Kawasaki, H. A duality theorem in multiobjective nonlinear programming. Mathematics of Operations Research, 7(1).
[7] Rockafellar, R.T. Convex Analysis. Princeton University Press.
[8] Sawaragi, Y., Nakayama, H. and Tanino, T. Theory of Multiobjective Optimization. Academic Press, New York.
[9] Tanino, T. and Sawaragi, Y. Conjugate maps and duality in multiobjective optimization. Journal of Optimization Theory and Applications, 31(4).
[10] Yu, P.L. Cone convexity, cone extreme points, and nondominated solutions in decision problems with multiobjectives. Journal of Optimization Theory and Applications, 14(3).
More informationEC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1
EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if
More informationSubgradients. subgradients. strong and weak subgradient calculus. optimality conditions via subgradients. directional derivatives
Subgradients subgradients strong and weak subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE364b, Stanford University Basic inequality recall basic inequality
More informationNotes on Ordered Sets
Notes on Ordered Sets Mariusz Wodzicki September 10, 2013 1 Vocabulary 1.1 Definitions Definition 1.1 A binary relation on a set S is said to be a partial order if it is reflexive, x x, weakly antisymmetric,
More informationNONDIFFERENTIABLE MULTIOBJECTIVE SECOND ORDER SYMMETRIC DUALITY WITH CONE CONSTRAINTS. Do Sang Kim, Yu Jung Lee and Hyo Jung Lee 1.
TAIWANESE JOURNAL OF MATHEMATICS Vol. 12, No. 8, pp. 1943-1964, November 2008 This paper is available online at http://www.tjm.nsysu.edu.tw/ NONDIFFERENTIABLE MULTIOBJECTIVE SECOND ORDER SYMMETRIC DUALITY
More informationCVaR and Examples of Deviation Risk Measures
CVaR and Examples of Deviation Risk Measures Jakub Černý Department of Probability and Mathematical Statistics Stochastic Modelling in Economics and Finance November 10, 2014 1 / 25 Contents CVaR - Dual
More informationMaximal Monotone Inclusions and Fitzpatrick Functions
JOTA manuscript No. (will be inserted by the editor) Maximal Monotone Inclusions and Fitzpatrick Functions J. M. Borwein J. Dutta Communicated by Michel Thera. Abstract In this paper, we study maximal
More informationRobust Duality in Parametric Convex Optimization
Robust Duality in Parametric Convex Optimization R.I. Boţ V. Jeyakumar G.Y. Li Revised Version: June 20, 2012 Abstract Modelling of convex optimization in the face of data uncertainty often gives rise
More informationVictoria Martín-Márquez
A NEW APPROACH FOR THE CONVEX FEASIBILITY PROBLEM VIA MONOTROPIC PROGRAMMING Victoria Martín-Márquez Dep. of Mathematical Analysis University of Seville Spain XIII Encuentro Red de Análisis Funcional y
More informationSubgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives
Subgradients subgradients and quasigradients subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE392o, Stanford University Basic inequality recall basic
More informationThe Subdifferential of Convex Deviation Measures and Risk Functions
The Subdifferential of Convex Deviation Measures and Risk Functions Nicole Lorenz Gert Wanka In this paper we give subdifferential formulas of some convex deviation measures using their conjugate functions
More informationDuality and optimality in multiobjective optimization
Duality and optimality in multiobjective optimization Von der Fakultät für Mathematik der Technischen Universität Chemnitz genehmigte D i s s e r t a t i o n zur Erlangung des akademischen Grades Doctor
More informationPreprint Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN
Fakultät für Mathematik und Informatik Preprint 2013-04 Stephan Dempe and Patrick Mehlitz Lipschitz continuity of the optimal value function in parametric optimization ISSN 1433-9307 Stephan Dempe and
More informationMerit Functions and Descent Algorithms for a Class of Variational Inequality Problems
Merit Functions and Descent Algorithms for a Class of Variational Inequality Problems Michael Patriksson June 15, 2011 Abstract. We consider a variational inequality problem, where the cost mapping is
More informationSTRONG AND CONVERSE FENCHEL DUALITY FOR VECTOR OPTIMIZATION PROBLEMS IN LOCALLY CONVEX SPACES
STUDIA UNIV. BABEŞ BOLYAI, MATHEMATICA, Volume LIV, Number 3, September 2009 STRONG AND CONVERSE FENCHEL DUALITY FOR VECTOR OPTIMIZATION PROBLEMS IN LOCALLY CONVEX SPACES Abstract. In relation to the vector
More informationBASICS OF CONVEX ANALYSIS
BASICS OF CONVEX ANALYSIS MARKUS GRASMAIR 1. Main Definitions We start with providing the central definitions of convex functions and convex sets. Definition 1. A function f : R n R + } is called convex,
More informationOn Gap Functions for Equilibrium Problems via Fenchel Duality
On Gap Functions for Equilibrium Problems via Fenchel Duality Lkhamsuren Altangerel 1 Radu Ioan Boţ 2 Gert Wanka 3 Abstract: In this paper we deal with the construction of gap functions for equilibrium
More informationRemark on a Couple Coincidence Point in Cone Normed Spaces
International Journal of Mathematical Analysis Vol. 8, 2014, no. 50, 2461-2468 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijma.2014.49293 Remark on a Couple Coincidence Point in Cone Normed
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationAbstract. In this paper, the concepts and the properties of conjugate maps and directional derivatives of convex vector functions from a subset of
ACTA MATHEMATICA VIETNAMICA Volume 25, Number 3, 2000, pp. 315 344 315 ON CONJUGATE MAPS AND DIRECTIONAL DERIVATIVES OF CONVEX VECTOR FUNCTIONS NGUYEN XUAN TAN AND PHAN NHAT TINH Abstract. In this paper,
More informationTranslative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation
Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Andreas H. Hamel Abstract Recently defined concepts such as nonlinear separation functionals due to
More informationResearch Article A Note on Optimality Conditions for DC Programs Involving Composite Functions
Abstract and Applied Analysis, Article ID 203467, 6 pages http://dx.doi.org/10.1155/2014/203467 Research Article A Note on Optimality Conditions for DC Programs Involving Composite Functions Xiang-Kai
More informationDual methods for the minimization of the total variation
1 / 30 Dual methods for the minimization of the total variation Rémy Abergel supervisor Lionel Moisan MAP5 - CNRS UMR 8145 Different Learning Seminar, LTCI Thursday 21st April 2016 2 / 30 Plan 1 Introduction
More informationA very complicated proof of the minimax theorem
A very complicated proof of the minimax theorem Jonathan Borwein FRSC FAAS FAA FBAS Centre for Computer Assisted Research Mathematics and its Applications The University of Newcastle, Australia http://carma.newcastle.edu.au/meetings/evims/
More information2 Sequences, Continuity, and Limits
2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs
More informationLagrange Relaxation and Duality
Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler
More informationSubdifferential representation of convex functions: refinements and applications
Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential
More informationAbstract. The connectivity of the efficient point set and of some proper efficient point sets in locally convex spaces is investigated.
APPLICATIONES MATHEMATICAE 25,1 (1998), pp. 121 127 W. SONG (Harbin and Warszawa) ON THE CONNECTIVITY OF EFFICIENT POINT SETS Abstract. The connectivity of the efficient point set and of some proper efficient
More informationConvex Analysis Notes. Lecturer: Adrian Lewis, Cornell ORIE Scribe: Kevin Kircher, Cornell MAE
Convex Analysis Notes Lecturer: Adrian Lewis, Cornell ORIE Scribe: Kevin Kircher, Cornell MAE These are notes from ORIE 6328, Convex Analysis, as taught by Prof. Adrian Lewis at Cornell University in the
More informationParcours OJD, Ecole Polytechnique et Université Pierre et Marie Curie 05 Mai 2015
Examen du cours Optimisation Stochastique Version 06/05/2014 Mastère de Mathématiques de la Modélisation F. Bonnans Parcours OJD, Ecole Polytechnique et Université Pierre et Marie Curie 05 Mai 2015 Authorized
More informationLECTURE 10 LECTURE OUTLINE
LECTURE 10 LECTURE OUTLINE Min Common/Max Crossing Th. III Nonlinear Farkas Lemma/Linear Constraints Linear Programming Duality Convex Programming Duality Optimality Conditions Reading: Sections 4.5, 5.1,5.2,
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationSymmetric and Asymmetric Duality
journal of mathematical analysis and applications 220, 125 131 (1998) article no. AY975824 Symmetric and Asymmetric Duality Massimo Pappalardo Department of Mathematics, Via Buonarroti 2, 56127, Pisa,
More informationZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT
ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT EMIL ERNST AND MICHEL VOLLE Abstract. This article addresses a general criterion providing a zero duality gap for convex programs in the setting of
More informationConvex Stochastic optimization
Convex Stochastic optimization Ari-Pekka Perkkiö January 30, 2019 Contents 1 Practicalities and background 3 2 Introduction 3 3 Convex sets 3 3.1 Separation theorems......................... 5 4 Convex
More informationMath 273a: Optimization Convex Conjugacy
Math 273a: Optimization Convex Conjugacy Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com Convex conjugate (the Legendre transform) Let f be a closed proper
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationEssential Supremum and Essential Maximum with Respect to Random Preference Relations
Essential Supremum and Essential Maximum with Respect to Random Preference Relations Yuri KABANOV a, Emmanuel LEPINETTE b a University of Franche Comté, Laboratoire de Mathématiques, 16 Route de Gray,
More informationTopology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA address:
Topology Xiaolong Han Department of Mathematics, California State University, Northridge, CA 91330, USA E-mail address: Xiaolong.Han@csun.edu Remark. You are entitled to a reward of 1 point toward a homework
More informationCOMPENSATION PAYMENTS FOR AIRCRAFT NOISE IN AN URBAN BUILDING CONFLICT SITUATION ON THE BASIS OF A SET-VALUED CONJUGATE DUALITY
COMPENSATION PAMENTS FOR AIRCRAFT NOISE IN AN URBAN BUILDING CONFLICT SITUATION ON THE BASIS OF A SET-VALUED CONJUGATE DUALIT Norman Neukel HIKARI LTD HIKARI LTD Hikari Ltd is a publisher of international
More informationON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction
J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties
More informationTHE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION
THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed
More informationON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION
ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION CHRISTIAN GÜNTHER AND CHRISTIANE TAMMER Abstract. In this paper, we consider multi-objective optimization problems involving not necessarily
More informationGeneral Notation. Exercises and Problems
Exercises and Problems The text contains both Exercises and Problems. The exercises are incorporated into the development of the theory in each section. Additional Problems appear at the end of most sections.
More informationLecture 7: Semidefinite programming
CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational
More informationSemicontinuous functions and convexity
Semicontinuous functions and convexity Jordan Bell jordan.bell@gmail.com Department of Mathematics, University of Toronto April 3, 2014 1 Lattices If (A, ) is a partially ordered set and S is a subset
More informationMaximal monotone operators are selfdual vector fields and vice-versa
Maximal monotone operators are selfdual vector fields and vice-versa Nassif Ghoussoub Department of Mathematics, University of British Columbia, Vancouver BC Canada V6T 1Z2 nassif@math.ubc.ca February
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian
More informationConvex Optimization & Lagrange Duality
Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT
More informationStochastic Optimization Lecture notes for part I. J. Frédéric Bonnans 1
Stochastic Optimization Lecture notes for part I Optimization Master, University Paris-Saclay Version of January 25, 2018 J. Frédéric Bonnans 1 1 Centre de Mathématiques Appliquées, Inria, Ecole Polytechnique,
More informationOPTIMALITY CONDITIONS FOR D.C. VECTOR OPTIMIZATION PROBLEMS UNDER D.C. CONSTRAINTS. N. Gadhi, A. Metrane
Serdica Math. J. 30 (2004), 17 32 OPTIMALITY CONDITIONS FOR D.C. VECTOR OPTIMIZATION PROBLEMS UNDER D.C. CONSTRAINTS N. Gadhi, A. Metrane Communicated by A. L. Dontchev Abstract. In this paper, we establish
More informationApplied Mathematics Letters
Applied Mathematics Letters 25 (2012) 974 979 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml On dual vector equilibrium problems
More informationOn the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean
On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean Renato D.C. Monteiro B. F. Svaiter March 17, 2009 Abstract In this paper we analyze the iteration-complexity
More informationDuality and dynamics in Hamilton-Jacobi theory for fully convex problems of control
Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control RTyrrell Rockafellar and Peter R Wolenski Abstract This paper describes some recent results in Hamilton- Jacobi theory
More informationDuality and Utility Maximization
Duality and Utility Maximization Bachelor Thesis Niklas A. Pfister July 11, 2013 Advisor: Prof. Dr. Halil Mete Soner Department of Mathematics, ETH Zürich Abstract This thesis explores the problem of maximizing
More informationOn Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective Optimization Problems
Int. Journal of Math. Analysis, Vol. 7, 2013, no. 60, 2995-3003 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijma.2013.311276 On Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective
More informationSummer School: Semidefinite Optimization
Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory
More informationDUALITY IN COUNTABLY INFINITE MONOTROPIC PROGRAMS
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 DUALITY IN COUNTABLY INFINITE MONOTROPIC PROGRAMS ARCHIS GHATE Abstract. Finite-dimensional
More informationThe Fitzpatrick Function and Nonreflexive Spaces
Journal of Convex Analysis Volume 13 (2006), No. 3+4, 861 881 The Fitzpatrick Function and Nonreflexive Spaces S. Simons Department of Mathematics, University of California, Santa Barbara, CA 93106-3080,
More informationNONLINEAR SCALARIZATION CHARACTERIZATIONS OF E-EFFICIENCY IN VECTOR OPTIMIZATION. Ke-Quan Zhao*, Yuan-Mei Xia and Xin-Min Yang 1.
TAIWANESE JOURNAL OF MATHEMATICS Vol. 19, No. 2, pp. 455-466, April 2015 DOI: 10.11650/tjm.19.2015.4360 This paper is available online at http://journal.taiwanmathsoc.org.tw NONLINEAR SCALARIZATION CHARACTERIZATIONS
More information(U 2 ayxtyj < (2 ajjxjxj) (2 a^jj),
PROCEEDINGS Ol THE AMERICAN MATHEMATICAL SOCIETY Volume 59, Number 1. August 1976 THE MEANING OF THE CAUCHY-SCHWARZ- BUNIAKOVSKY INEQUALITY eduardo h. zarantonello1 Abstract. It is proved that a mapping
More informationOptimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 28, 2017 1 / 40 The Key Idea of Newton s Method Let f : R n R be a twice differentiable function
More informationDuality and optimality conditions in stochastic optimization and mathematical finance
Duality and optimality conditions in stochastic optimization and mathematical finance Sara Biagini Teemu Pennanen Ari-Pekka Perkkiö April 25, 2015 Abstract This article studies convex duality in stochastic
More informationA Dual Condition for the Convex Subdifferential Sum Formula with Applications
Journal of Convex Analysis Volume 12 (2005), No. 2, 279 290 A Dual Condition for the Convex Subdifferential Sum Formula with Applications R. S. Burachik Engenharia de Sistemas e Computacao, COPPE-UFRJ
More information(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε
1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional
More informationConjugate duality in stochastic optimization
Ari-Pekka Perkkiö, Institute of Mathematics, Aalto University Ph.D. instructor/joint work with Teemu Pennanen, Institute of Mathematics, Aalto University March 15th 2010 1 / 13 We study convex problems.
More informationDuality for almost convex optimization problems via the perturbation approach
Duality for almost convex optimization problems via the perturbation approach Radu Ioan Boţ Gábor Kassay Gert Wanka Abstract. We deal with duality for almost convex finite dimensional optimization problems
More informationEpiconvergence and ε-subgradients of Convex Functions
Journal of Convex Analysis Volume 1 (1994), No.1, 87 100 Epiconvergence and ε-subgradients of Convex Functions Andrei Verona Department of Mathematics, California State University Los Angeles, CA 90032,
More informationDuality Theory of Constrained Optimization
Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive
More informationMaximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces
Journal of Convex Analysis Volume 17 (2010), No. 2, 553 563 Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces M. Marques Alves IMPA, Estrada Dona Castorina 110, 22460-320
More informationSOME REMARKS ON SUBDIFFERENTIABILITY OF CONVEX FUNCTIONS
Applied Mathematics E-Notes, 5(2005), 150-156 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ SOME REMARKS ON SUBDIFFERENTIABILITY OF CONVEX FUNCTIONS Mohamed Laghdir
More informationEnhanced Fritz John Optimality Conditions and Sensitivity Analysis
Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Dimitri P. Bertsekas Laboratory for Information and Decision Systems Massachusetts Institute of Technology March 2016 1 / 27 Constrained
More informationLocal strong convexity and local Lipschitz continuity of the gradient of convex functions
Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate
More informationConstraint qualifications for convex inequality systems with applications in constrained optimization
Constraint qualifications for convex inequality systems with applications in constrained optimization Chong Li, K. F. Ng and T. K. Pong Abstract. For an inequality system defined by an infinite family
More informationExtreme points of compact convex sets
Extreme points of compact convex sets In this chapter, we are going to show that compact convex sets are determined by a proper subset, the set of its extreme points. Let us start with the main definition.
More informationSOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS
APPLICATIONES MATHEMATICAE 22,3 (1994), pp. 419 426 S. G. BARTELS and D. PALLASCHKE (Karlsruhe) SOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS Abstract. Two properties concerning the space
More information