Optimality conditions in global optimization and their applications


Math. Program., Ser. B
DOI 10.1007/s

FULL LENGTH PAPER

Optimality conditions in global optimization and their applications

A. M. Rubinov · Z. Y. Wu

Received: 5 September 2005 / Accepted: 5 January 2006
© Springer-Verlag 2007

Abstract  In this paper we derive necessary and sufficient conditions for some problems of global minimization. Our approach is based on methods of abstract convexity: we use a representation of an upper semicontinuous function as the lower envelope of a family of convex functions. We discuss applications of the conditions obtained to the examination of some tractable sufficient conditions for the global minimum and to the theory of inequalities.

Keywords  Global optimization · Necessary and sufficient conditions · Abstract convexity · Inequalities

Mathematics Subject Classification (2000)  90C30 · 90C46 · 41A65

1 Introduction

The theory of local optimization is based on a local approximation of functions and sets that can be accomplished by methods of calculus and its modern generalizations. However, local approximation alone cannot help to examine global optimization problems, so different tools should be used instead of calculus or together with calculus. One of these tools is abstract convexity (see, for example, [9,2,4]), which deals with

The work was supported by a grant from the Australian Research Council.

A. M. Rubinov · Z. Y. Wu (B)
School of Information Technology and Mathematical Sciences, University of Ballarat, Ballarat 3353, VIC, Australia
e-mail: z.wu@ballarat.edu.au

A. M. Rubinov
e-mail: a.rubinov@ballarat.edu.au

functions that can be represented as the upper envelope or the lower envelope of a subset of a set of sufficiently simple functions. We use methods of abstract convexity in this paper.

We consider the global minimization of a function f over a convex set, assuming that f can be represented as the infimum of a family (f_t)_{t∈T} of convex functions, and derive necessary and sufficient conditions for the global minimum. The conditions obtained are expressed in terms of ε_t-subdifferentials of the functions f_t with certain ε_t ≥ 0. It is known (see, for example, [2]) that for each upper semicontinuous finite function f defined on a subset Ω of a Hilbert space X and bounded from above on Ω by a quadratic function of the form h(x) = a‖x‖² + [b, x] + c, there exists a family (f_t)_{t∈T} of convex quadratic functions such that f(x) = inf_{t∈T} f_t(x). For applications we need to know an explicit description of such a family. We give the required description for a differentiable function f with a Lipschitz continuous gradient mapping.

Necessary and sufficient conditions, or only sufficient conditions, in global optimization are known for some classes of problems, in particular for the minimization of DC functions (including concave functions) over a convex set and also for the minimization of quadratic functions subject to quadratic constraints and/or box constraints (see, for example, [,2,4,5,7,8,0,,5 8] and [Jeyakumar V., Rubinov A.M., Wu Z.Y., Non-convex quadratic minimization problems with quadratic constraints: global optimality conditions, submitted paper; Jeyakumar V., Rubinov A.M., Wu Z.Y., Sufficient global optimality conditions for non-convex quadratic minimization problems with box constraints, submitted paper; and Wu Z.Y., Jeyakumar V., Rubinov A.M., Sufficient conditions for global optimality of bivalent nonconvex quadratic programs, submitted paper]). We consider a different kind of problem in this paper.
In the simplest case of the unconstrained minimization of a function f : X → ℝ such that ‖∇f(x) − ∇f(y)‖ ≤ a‖x − y‖ for all x, y ∈ X we obtain the following result: if a point x̄ is a global minimizer of f over X then f(t) − (1/(4a))‖∇f(t)‖² ≥ f(x̄) for all t ∈ X. In other words,

f(t) − f(x̄) ≥ 0 (t ∈ X)  ⟹  f(t) − f(x̄) ≥ (1/(4a))‖∇f(t)‖²  (t ∈ X).  (1.1)

The conditions obtained are not tractable. Nevertheless these conditions have some interesting applications, and we examine two of them. First, we apply the conditions obtained to the examination of the tractable sufficient condition for a global minimum that was used in the papers [Jeyakumar V., Rubinov A.M., Wu Z.Y., Non-convex quadratic minimization problems with quadratic constraints: global optimality conditions, submitted paper; Jeyakumar V., Rubinov A.M., Wu Z.Y., Sufficient global optimality conditions for non-convex quadratic minimization problems with box constraints, submitted paper; and Wu Z.Y., Jeyakumar V., Rubinov A.M., Sufficient conditions for global optimality of bivalent nonconvex quadratic programs, submitted paper]. This condition is presented in terms of the abstract convex subdifferential ∂_L f(x̄), where L is a certain set of elementary functions. By definition

∂_L f(x̄) = { l ∈ L : f(y) ≥ f(x̄) − l(x̄) + l(y) for all y ∈ X }.
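The implication above (inequality (1.1)) can be illustrated numerically. The following is a minimal sketch, assuming the hypothetical choice f(x) = x² on the real line, for which ∇f(x) = 2x is Lipschitz with constant 2 and the global minimizer is x̄ = 0:

```python
import math
import random

# Numerical sketch of inequality (1.1) for the hypothetical choice
# f(x) = x^2 on the real line.  Here grad f(x) = 2x, so
# |grad f(x) - grad f(y)| <= a |x - y| holds for any a >= 2,
# and the global minimizer is xbar = 0 with f(xbar) = 0.

def f(x):
    return x * x

def grad_f(x):
    return 2.0 * x

a = 2.0          # any a >= Lipschitz constant of grad f (= 2 here)
f_min = 0.0      # f(xbar) at the global minimizer xbar = 0

random.seed(0)
for _ in range(1000):
    t = random.uniform(-100.0, 100.0)
    # condition (1.1): f(t) - (1/(4a)) * |grad f(t)|^2 >= f(xbar)
    lower_bound = f(t) - grad_f(t) ** 2 / (4.0 * a)
    assert lower_bound >= f_min - 1e-9
print("inequality (1.1) verified on 1000 sample points")
```

For this f the bound reads t² − t²/(2·2)·... = t²(1 − 1/a) ≥ 0, so any a ≥ 2 makes the check pass with room to spare.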

If subdifferential calculus rules are valid for L then the sufficient condition under consideration is also a necessary one. However, calculus rules do not hold for many sets L of nonlinear elementary functions, so it is interesting to estimate how far the sufficient condition under consideration is from a necessary condition. The developed approach allows us to examine this problem.

Second, we apply the conditions obtained to the theory of inequalities. Many inequalities (for example, inequalities involving means) can be represented in the form f(x) − f(x̄) ≥ 0, where f is a certain function. We say that the inequality f(x) − f(x̄) ≥ u(x) with u(x) ≥ 0 is sharper than the inequality f(x) − f(x̄) ≥ 0 if there exists x with u(x) > 0. It was observed in [3] that certain conditions for a global minimum can be used for sharpening some special inequalities. We demonstrate that conditions similar to (1.1) lead to a nontrivial sharpening of some well-known inequalities. In particular, we sharpen the inequality between the arithmetic mean and the geometric mean.

The outline of the paper is as follows. Section 2 collects definitions and preliminary results from abstract convexity. Section 3 contains a result about abstract concavity with respect to a set of quadratic functions. Approximate subdifferentiability is discussed in Sect. 4. Section 5 provides necessary and sufficient conditions for the global minimum over a convex set. A sufficient condition for a global minimum is examined in Sect. 6. The inequality (1.1) and its generalization are discussed in Sect. 7. Applications to the theory of inequalities are presented in Sect. 8.

2 Preliminaries

2.1 Notation

We use the following notation: ℝ is the real line, ℝ_{+∞} = ℝ ∪ {+∞}, ℝ_{−∞} = ℝ ∪ {−∞}, ℝ̄ = ℝ ∪ {+∞} ∪ {−∞}. X is a Hilbert space with the inner product [·,·] and the norm ‖x‖ = √[x, x]. B(y, r) = {x ∈ X : ‖x − y‖ ≤ r}. cl Ω is the closure of a set Ω ⊂ X. ℝⁿ is the n-dimensional Euclidean space.
ℝⁿ₊ = { x = (x₁, ..., xₙ)ᵀ ∈ ℝⁿ : xᵢ ≥ 0, i = 1, ..., n },
ℝⁿ₊₊ = { x = (x₁, ..., xₙ)ᵀ ∈ ℝⁿ : xᵢ > 0, i = 1, ..., n }.

Let Ω be a set and f : Ω → ℝ̄. Then dom f := { x ∈ Ω : −∞ < f(x) < +∞ }. If f : Ω → ℝ̄ and g : Ω → ℝ̄ then f ≥ g means that f(x) ≥ g(x) for all x ∈ Ω. If f : X → ℝ_{+∞} is a convex function and x ∈ dom f, then ∂f(x) and ∂_ε f(x) are the subdifferential and ε-subdifferential of f at x, respectively, in the sense of convex analysis. If Ω ⊂ X is a convex set, x ∈ cl Ω, and ε ≥ 0, then

N_{ε,Ω}(x) = { u ∈ X : [u, y] − [u, x] ≤ ε for all y ∈ Ω }.

In particular N_Ω(x) ≡ N_{0,Ω}(x) is the normal cone of Ω at x. We assume that the infimum over the empty set is equal to +∞ and the supremum over the empty set is equal to −∞.

2.2 Abstract convexity and abstract concavity

In this subsection we present basic definitions from abstract convexity that will be used throughout the paper. (See [9,2,4] for a detailed presentation of different aspects of abstract convexity.)

Let Ω be a set and H be a set of functions h : Ω → ℝ. A function f : Ω → ℝ_{+∞} is called abstract convex with respect to H (or H-convex) if there exists a set U ⊂ H such that f(x) = sup_{h∈U} h(x) for all x ∈ Ω. The set H is called a set of elementary functions in such a setting.

Let H be a set of functions h : Ω → ℝ_{+∞}. A function f : Ω → ℝ̄ is called abstract concave with respect to H (or H-concave) if there exists a set V ⊂ H such that f(x) = inf_{h∈V} h(x) for all x ∈ Ω. We again call H a set of elementary functions. If Ω is a convex closed subset of a Hilbert space X and H consists of convex functions h : Ω → ℝ_{+∞}, we use the term inf-convex functions for H-concave functions.

Let Ω ⊂ X, f : Ω → ℝ_{+∞} and x₀ ∈ dom f. Let L be a set of functions l : Ω → ℝ̄. An element l ∈ L is called an L-subgradient of f at the point x₀ if x₀ ∈ dom l and f(x) ≥ f(x₀) + l(x) − l(x₀) for each x ∈ Ω. The set ∂_L f(x₀) of all L-subgradients of f at x₀ is referred to as the L-subdifferential of f at x₀. An element l ∈ L is called an (ε, L)-subgradient of f at a point x₀ ∈ dom f if x₀ ∈ dom l and f(x) ≥ f(x₀) + l(x) − l(x₀) − ε for each x ∈ Ω. The set ∂_{ε,L} f(x₀) of all (ε, L)-subgradients of f at x₀ is referred to as the (ε, L)-subdifferential of f at x₀.

If L is the set of linear functions defined on X, f : X → ℝ_{+∞} is a lower semicontinuous convex function and x ∈ dom f, then ∂_L f(x) = ∂f(x), ∂_{ε,L} f(x) = ∂_ε f(x), where ∂f(x) and ∂_ε f(x) are the subdifferential and ε-subdifferential in the sense of convex analysis, respectively. For properties of L-subdifferentials and (ε, L)-subdifferentials see, for example, [2]. For a set Ω ⊂ X the indicator function δ_Ω is defined as

δ_Ω(x) = 0 if x ∈ Ω,  +∞ if x ∉ Ω.
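A small sketch of H-convexity, assuming the hypothetical choice Ω = ℝ and H = the linear functions h_u(x) = u·x: the convex function f(x) = |x| is H-convex, since it is the upper envelope of its linear minorants with slopes u ∈ [−1, 1]:

```python
import random

# Sketch of H-convexity, assuming Omega = R and H = the linear
# functions h_u(x) = u * x.  The function f(x) = |x| is H-convex:
# f(x) = sup over u in [-1, 1] of u * x.
# We take the sup over a finite grid of slopes u (which contains +-1).

def f(x):
    return abs(x)

slopes = [u / 100.0 for u in range(-100, 101)]   # grid of u in [-1, 1]

random.seed(1)
for _ in range(200):
    x = random.uniform(-5.0, 5.0)
    upper_envelope = max(u * x for u in slopes)
    # the upper envelope of the family reproduces f exactly here,
    # because the grid contains the extreme slopes u = -1 and u = 1
    assert abs(upper_envelope - f(x)) < 1e-12
print("f(x) = |x| recovered as the upper envelope of linear functions")
```

The same script with the family reversed (inf of concave elementary functions) would illustrate H-concavity.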
Let L be a set of elementary functions l : X → ℝ̄ such that dom l ⊃ Ω for all l ∈ L, and let x ∈ Ω. The normal set of Ω at x with respect to L is given by

N_{L,Ω}(x) := { l ∈ L : l(y) − l(x) ≤ 0 for all y ∈ Ω }.

(If L is a cone in a vector space, that is, (l ∈ L, λ > 0) ⟹ λl ∈ L, then N_{L,Ω} is also a cone, so we can use the term normal cone in such a case.) The ε-normal set

of Ω at x with respect to L is given by

N_{ε,L,Ω}(x) := { l ∈ L : l(y) − l(x) ≤ ε for all y ∈ Ω }.

It is easy to see that N_{L,Ω}(x) = ∂_L δ_Ω(x), N_{ε,L,Ω}(x) = ∂_{ε,L} δ_Ω(x), x ∈ Ω.

2.3 Approximate subdifferential of the sum

The following result will be used intensively in the paper (we formulate it only in the Hilbert space setting):

Theorem 1  Let f₀, f₁, ..., f_m be proper convex functions defined on a Hilbert space X and f = Σ_{i=0}^m f_i. Let ε > 0. Assume that there exists x̄ ∈ dom f₀ at which the functions f₁, ..., f_m are finite and continuous. Then

∂_ε f(x₀) = ∪ { Σ_{i=0}^m ∂_{ε_i} f_i(x₀) : ε_i ≥ 0, i = 0, ..., m, Σ_{i=0}^m ε_i = ε }

for each x₀ ∈ ∩_{i=0}^m dom f_i.

This result can be found in [3] (see also [9]). It was assumed in [3] that m = 1. The validity of the result for an arbitrary m can be easily obtained by induction.

3 Abstract concavity with respect to a set of quadratic functions

Let H be the set of all quadratic functions h of the form

h(x) = a‖x‖² + [l, x] + c, x ∈ X,  (3.1)

where a > 0, l ∈ X and c ∈ ℝ. We say that a function f : Ω → ℝ is majorized by H if there exists h ∈ H such that h ≥ f. The following result holds (see [2], Example 6.2).

Theorem 2  Let Ω ⊂ X and H be the set of quadratic functions defined by (3.1). Then a function f : Ω → ℝ is H-concave if and only if f is majorized by H and upper semicontinuous.

Since H consists of convex functions, it follows that an upper semicontinuous function majorized by H is inf-convex. It follows from Theorem 2 that for each upper semicontinuous function f : Ω → ℝ majorized by H there exists a family (f_t)_{t∈T} of quadratic functions f_t ∈ H such that f(x) = inf_{t∈T} f_t(x) (x ∈ Ω). For applications we need to have an explicit description of such a family. It is also important to describe functions f and families

(f_t)_{t∈T} such that f(x) = min_{t∈T} f_t(x). We now describe such a family for functions f with a Lipschitz continuous gradient mapping x ↦ ∇f(x). It follows from Proposition 1 below that such a function is majorized by the set H of quadratics.

Proposition 1  Let Ω ⊂ X be a convex set and let f be a differentiable function defined on an open set containing Ω. Assume that the mapping x ↦ ∇f(x) is Lipschitz continuous on Ω:

K := sup_{x,y∈Ω, x≠y} ‖∇f(x) − ∇f(y)‖ / ‖x − y‖ < +∞.  (3.2)

Let a ≥ K. For each t ∈ Ω consider the function

f_t(x) = f(t) + [∇f(t), x − t] + a‖x − t‖²  (x ∈ X).  (3.3)

Then f(x) = min_{t∈Ω} f_t(x), x ∈ Ω.

Proof  Applying the mean value theorem we have for each x, y ∈ Ω:

f(x) − f(y) = [∇f(y + θ(x − y)), x − y], θ ∈ [0, 1],

therefore

f(x) − f(y) − [∇f(y), x − y] = [∇f(y + θ(x − y)) − ∇f(y), x − y]
≤ ‖∇f(y + θ(x − y)) − ∇f(y)‖ ‖x − y‖ ≤ Kθ‖x − y‖² ≤ K‖x − y‖² ≤ a‖x − y‖².

This means that f(x) ≤ f(y) + [∇f(y), x − y] + a‖x − y‖² := g_y(x), x ∈ Ω. Since f(x) = g_x(x) it follows that f(x) = min_{y∈Ω} g_y(x) for all x ∈ Ω.

4 Approximate L^k-subdifferentiability

We start with the following assertion:

Proposition 2  Let L be a set of continuous concave functions l : X → ℝ. Let Ω ⊂ X be a nonempty convex set and let f : Ω → ℝ. Assume that f(x) = inf_{t∈T} f_t(x) (x ∈ Ω), where f_t : X → ℝ is a continuous convex function (t ∈ T). Let y ∈ Ω, η ≥ 0 and ε_t = f_t(y) − f(y) (t ∈ T). Then l ∈ ∂_{η,L} f(y) if and only if for each t ∈ T there exist ε_{i,t} ≥ 0 (i = 1, 2, 3) such that ε_{1,t} + ε_{2,t} + ε_{3,t} = ε_t + η and

0 ∈ ∂_{ε_{1,t}} f_t(y) + ∂_{ε_{2,t}}(−l)(y) + N_{ε_{3,t},Ω}(y).  (4.1)
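The quadratic majorization of Proposition 1 can be checked numerically. Here is a sketch, assuming the hypothetical choice f(x) = sin(x) on Ω = [0, 2π], for which ∇f = cos has Lipschitz constant K = 1, so any a ≥ 1 works in (3.3):

```python
import math

# Sketch of Proposition 1 for the hypothetical choice f(x) = sin(x)
# on Omega = [0, 2*pi].  Here f' = cos has Lipschitz constant K = 1,
# so any a >= 1 can be used in (3.3):
#   f_t(x) = f(t) + f'(t) * (x - t) + a * (x - t)**2.
a = 1.0
f = math.sin
df = math.cos

def f_t(t, x):
    return f(t) + df(t) * (x - t) + a * (x - t) ** 2

grid = [2.0 * math.pi * i / 2000 for i in range(2001)]
for x in grid[::10]:
    # every f_t majorizes f, and f_x touches f at x, so min_t f_t(x) = f(x)
    assert all(f_t(t, x) >= f(x) - 1e-9 for t in grid)
    assert abs(f_t(x, x) - f(x)) < 1e-12
print("f = min_t f_t verified on a grid")
```

The two assertions together are exactly the statement f(x) = min_t f_t(x): each quadratic lies above f, and the quadratic centred at x attains equality there.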

Proof  We have:

∂_{η,L} f(y) = { l ∈ L : f(x) − l(x) ≥ f(y) − l(y) − η ∀x ∈ Ω }  (4.2)
= { l ∈ L : inf_{t∈T} f_t(x) − l(x) ≥ f(y) − l(y) − η ∀x ∈ Ω }
= { l ∈ L : f_t(x) − l(x) ≥ f(y) − l(y) − η ∀t ∈ T, x ∈ Ω }
= { l ∈ L : f_t(x) + (−l)(x) + δ_Ω(x) ≥ (f_t(y) + (−l)(y)) + δ_Ω(y) − ε_t − η ∀t ∈ T, x ∈ X }.  (4.3)

Thus

∂_{η,L} f(y) = { l ∈ L : 0 ∈ ∂_{ε_t+η}(f_t + (−l) + δ_Ω)(y) }.  (4.4)

Convexity and nonemptiness of Ω imply that δ_Ω is a proper convex function. Since the functions f_t and −l are continuous and convex we can use Theorem 1. Then we obtain:

∂_{ε_t+η}(f_t + (−l) + δ_Ω)(y) = ∪ { ∂_{ε_{1,t}} f_t(y) + ∂_{ε_{2,t}}(−l)(y) + ∂_{ε_{3,t}} δ_Ω(y) : ε_{1,t}, ε_{2,t}, ε_{3,t} ≥ 0, ε_{1,t} + ε_{2,t} + ε_{3,t} = ε_t + η },

so 0 ∈ ∂_{ε_t+η}(f_t + (−l) + δ_Ω)(y) if and only if there exist ε_{1,t}, ε_{2,t}, ε_{3,t} ≥ 0 with ε_{1,t} + ε_{2,t} + ε_{3,t} = ε_t + η such that

0 ∈ ∂_{ε_{1,t}} f_t(y) + ∂_{ε_{2,t}}(−l)(y) + ∂_{ε_{3,t}} δ_Ω(y).  (4.5)

Since ∂_{ε_{3,t}} δ_Ω = N_{ε_{3,t},Ω}, we conclude that (4.5) is equivalent to (4.1).

Remark 1  Assume that the assumptions of Proposition 2 hold. Let f̄_t = f_t + δ_Ω. Then l ∈ ∂_{η,L} f(y) if and only if there exist ε_{1,t} ≥ 0, ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = ε_t + η and 0 ∈ ∂_{ε_{1,t}} f̄_t(y) + ∂_{ε_{2,t}}(−l)(y). The proof of this assertion is similar to the proof of Proposition 2 and we omit it.

We now consider a more general case when L is a set of concave functions l : X → ℝ_{−∞}.

Proposition 3  Let Ω ⊂ X be a nonempty convex set, f : Ω → ℝ and L be a set of concave functions l : X → ℝ_{−∞} such that dom l ⊃ Ω. Assume that f(x) = inf_{t∈T} f_t(x) (x ∈ Ω), where f_t : X → ℝ is a continuous convex function (t ∈ T). Let y ∈ Ω, η ≥ 0 and ε_t = f_t(y) − f(y) (t ∈ T). Then l ∈ ∂_{η,L} f(y) if and only if y ∈ dom l and for each t ∈ T there exist ε_{1,t} ≥ 0 and ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = ε_t + η and

0 ∈ ∂_{ε_{1,t}} f_t(y) + ∂_{ε_{2,t}} l̄(y),

where l̄ = −l + δ_Ω.

Proof  Since (4.4) can be represented in the form 0 ∈ ∂_{ε_t+η}(f_t + l̄)(y), the desired result follows from Theorem 1.

Let k ≥ 1. For each y ∈ X and a < 0 consider the function

φ_{y,a}^k(x) = a‖x − y‖^k, x ∈ X.  (4.6)

Let

L^k = { φ_{y,a}^k : y ∈ X, a < 0 }  (4.7)

be a set of elementary functions. We only consider the cases k = 1 and k = 2. If k = 2 then φ_{y,a}²(x) := a‖x − y‖² = a‖x‖² + [m, x] + c, where m = −2ay, c = a‖y‖². This presentation is sometimes more convenient. In this section we will study the approximate L^k-subdifferentiability of inf-convex functions f : Ω → ℝ for k = 1, 2, where Ω ⊂ X. It is known (see [2]) that for each Lipschitz function f the subdifferential ∂_{L¹} f(x) is not empty; for f ∈ C²(Ω) which is minorized by a quadratic function, the subdifferential ∂_{L²} f(x) is not empty.

Proposition 2 can be simplified if either L = L¹ or L = L². Let L = L¹; then l ∈ L if and only if l(x) = −a‖x − z‖ with a > 0 and z ∈ X.

Proposition 4  Let a > 0 and y, z ∈ X. Let l(x) = a‖x − z‖ (x ∈ X) and let

c_ε(m) = a‖y − z‖ − [m, y − z] − ε,  (4.8)

where ε ≥ 0. Then

∂_ε l(y) = { m ∈ B(0, a) : c_ε(m) ≤ 0 }.

Proof  Let p(x) = ‖x‖. Then ∂p(0) = B(0, 1). We have ∂_ε l(y) = a ∂_{ε/a} p(y − z). Since p is a positively homogeneous function it follows (see, for example, [2], Proposition 7.9) that ∂_{ε/a} p(y − z) = { m ∈ B(0, 1) : [m, y − z] ≥ ‖y − z‖ − ε/a }, so

∂_ε l(y) = { m ∈ B(0, a) : [m, y − z] ≥ a‖y − z‖ − ε } = { m ∈ B(0, a) : c_ε(m) ≤ 0 }.

Corollary 1  Assume that the assumptions of Proposition 2 hold and L = L¹. Let l(x) = −a‖x − z‖, where a > 0 and z ∈ X. Let y ∈ Ω. Then l ∈ ∂_{η,L} f(y) if and only if for each t ∈ T there exist ε_{1,t} and ε_{2,t} with the following properties:

(i) ε_{1,t} ≥ 0, ε_{2,t} ≥ 0, ε_{1,t} + ε_{2,t} = ε_t + η;
(ii) there exists m ∈ ∂_{ε_{1,t}} f̄_t(y) such that ‖m‖ ≤ a and a‖y − z‖ − [m, y − z] ≤ ε_{2,t},

where f̄_t = f_t + δ_Ω.
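Proposition 4 can be verified directly in one dimension. The following sketch uses hypothetical values a = 2, z = 1, y = 3 and checks, by minimizing the piecewise linear function behind the ε-subgradient inequality, that the description { m ∈ B(0, a) : c_ε(m) ≤ 0 } is correct:

```python
import random

# One-dimensional check of Proposition 4, for the hypothetical values
# a = 2.0, z = 1.0, y = 3.0.  For l(x) = a*|x - z|, m is an
# eps-subgradient of l at y iff
#   l(x) >= l(y) + m*(x - y) - eps for all x,
# and the proposition says this holds iff |m| <= a and
#   c_eps(m) = a*|y - z| - m*(y - z) - eps <= 0.
a, z, y = 2.0, 1.0, 3.0

def l(x):
    return a * abs(x - z)

def is_eps_subgradient(m, eps):
    # h(x) = l(x) - l(y) - m*(x - y) + eps is piecewise linear;
    # if |m| <= a its infimum over R is attained at x = z,
    # and if |m| > a the infimum is -infinity
    if abs(m) > a:
        return False
    return l(z) - l(y) - m * (z - y) + eps >= -1e-12

random.seed(3)
for _ in range(1000):
    m = random.uniform(-1.5 * a, 1.5 * a)
    eps = random.uniform(0.0, 3.0)
    c = a * abs(y - z) - m * (y - z) - eps
    if abs(c) < 1e-6 or abs(abs(m) - a) < 1e-6:
        continue  # skip samples too close to the boundary of the set
    assert is_eps_subgradient(m, eps) == (abs(m) <= a and c <= 0)
print("Proposition 4 verified on random (m, eps) pairs")
```

The skip near the boundary only avoids floating-point ties; the two descriptions agree everywhere else.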

This follows from Proposition 4 and Remark 1.

Remark 2  If y = z then c_ε(m) ≤ 0 for all m ∈ X and ε ≥ 0, so ∂_ε l(z) = B(0, a). In this case ∂_ε l(z) does not depend on ε.

We also need the following statement:

Proposition 5  Let a > 0 and y, z ∈ X. Let l(x) = a‖x − z‖² (x ∈ X). Then ∂_ε l(y) = B(2a(y − z), 2√(aε)) for each ε ≥ 0.

The result fairly easily follows from [6, Chap. 1]. The following argument can also be used: let

g(x) = a‖x − z‖² − a‖y − z‖² − [m, x − y] + ε, x ∈ X.

Then m ∈ ∂_ε l(y) if and only if g(x) ≥ 0 for all x ∈ X. Calculating the minimum of the quadratic function g we obtain the desired result.

5 Necessary and sufficient conditions for the global minimum over a convex set

Let f : X → ℝ_{+∞} and Ω ⊂ X.

Definition 1  Let η ≥ 0. We say that x̄ is an η-global minimizer of f over Ω if f(x) ≥ f(x̄) − η‖x − x̄‖ for all x ∈ Ω.

Let f_Ω = f + δ_Ω. Definition 1 can be represented in the following form: a point x̄ ∈ Ω is an η-global minimizer of f over Ω if l_{η,x̄} ∈ ∂_{L¹} f_Ω(x̄), where l_{η,x̄}(x) = −η‖x − x̄‖. It is clear that x̄ is a global minimizer if and only if it is an η-global minimizer for all η > 0. We now present necessary and sufficient conditions for an approximate global minimizer of an inf-convex function over a convex set in terms of approximate subdifferentials of the corresponding convex functions.

Theorem 3  Let Ω ⊂ X be a convex nonempty set and (f_t)_{t∈T} be a family of convex continuous functions defined on X. Let f : Ω → ℝ be a function such that f(x) = inf_{t∈T} f_t(x). Let x̄ ∈ Ω and ε_t = f_t(x̄) − f(x̄). Then

(1) x̄ is an η-global minimizer with η > 0 over Ω if and only if for each t ∈ T there exist ε_{1,t} ≥ 0, ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = ε_t and

0 ∈ ∂_{ε_{1,t}} f_t(x̄) + N_{ε_{2,t},Ω}(x̄) + B(0, η).  (5.1)

(2) x̄ is a global minimizer over Ω if and only if for each t ∈ T there exist ε_{1,t} ≥ 0, ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = ε_t and

∂_{ε_{1,t}} f_t(x̄) ∩ (−N_{ε_{2,t},Ω}(x̄)) ≠ ∅.  (5.2)

Proof  (1) Let η > 0. A point x̄ is an η-global minimizer if and only if the function l_η(x) = −η‖x − x̄‖ belongs to ∂_{L¹} f(x̄). In view of Proposition 2 this means that for all t ∈ T there exist ε_{1,t} ≥ 0, ε_{2,t} ≥ 0 and ε_{3,t} ≥ 0 such that ε_{1,t} + ε_{2,t} + ε_{3,t} = ε_t and

0 ∈ ∂_{ε_{1,t}} f_t(x̄) + N_{ε_{2,t},Ω}(x̄) + ∂_{ε_{3,t}}(−l_η)(x̄).  (5.3)

In view of Remark 2 we get ∂_ε(−l_η)(x̄) = B(0, η) for all ε ≥ 0, so (5.3) can be rewritten as 0 ∈ ∂_{ε_{1,t}} f_t(x̄) + N_{ε_{2,t},Ω}(x̄) + B(0, η), where ε_{1,t} + ε_{2,t} ≤ ε_t. Since the ε-subdifferential contains the ε′-subdifferential for ε′ ≤ ε, we can choose ε_{1,t} and ε_{2,t} for which ε_{1,t} + ε_{2,t} = ε_t.

(2) Let x̄ be a global minimizer. Then (5.1) holds for each η > 0; in other words, for each η > 0 there exist numbers ε_{1,t,η} ≥ 0 and ε_{2,t,η} ≥ 0 such that ε_{1,t,η} + ε_{2,t,η} = ε_t, and there exist vectors a_η ∈ ∂_{ε_{1,t,η}} f_t(x̄), b_η ∈ N_{ε_{2,t,η},Ω}(x̄) and c_η ∈ B(0, η) such that a_η + b_η + c_η = 0. Then c_η → 0 as η → 0+. Without loss of generality we can assume that the limits lim_{η→0+} ε_{1,t,η} := ε_{1,t} ≥ 0 and lim_{η→0+} ε_{2,t,η} := ε_{2,t} ≥ 0 exist. Obviously ε_{1,t} + ε_{2,t} = ε_t. Since the ε-subdifferential is bounded we can assume without loss of generality that there exists a weak limit lim_{η→0+} a_η := a. Then there exists a weak limit lim_{η→0+} b_η := b. We have a ∈ ∂_{ε_{1,t}} f_t(x̄), b ∈ N_{ε_{2,t},Ω}(x̄) and a + b = 0, so (5.2) holds.

Corollary 2  Assume that the assumptions of Theorem 3 hold and Ω = X. Then

(1) x̄ is an η-global minimizer with η > 0 over X if and only if for each t ∈ T

∂_{ε_t} f_t(x̄) ∩ B(0, η) ≠ ∅.  (5.4)

(2) x̄ is a global minimizer over X if and only if for each t ∈ T

0 ∈ ∂_{ε_t} f_t(x̄).  (5.5)

Indeed, (5.4) and (5.5) follow from (5.1) and (5.2), respectively, and the fact that N_{ε,X}(x̄) = {0} for all ε ≥ 0.

We now consider a special case where (f_t)_{t∈T} is the family of quadratic functions from Proposition 1.

Theorem 4  Let Ω ⊂ X be a convex nonempty set and f be a continuously differentiable function defined on an open set containing Ω. Assume that the mapping x ↦ ∇f(x) is Lipschitz continuous on Ω with Lipschitz constant K. Let f_t be the function defined on X by (3.3) and f̄_t = f_t + δ_Ω. Let η > 0 and a ≥ K. Then

(i) x̄ ∈ Ω is an η-global minimizer of f over Ω if and only if

min{ ‖y‖ : y ∈ ∂_{ε_t} f̄_t(x̄) } ≤ η, t ∈ Ω,  (5.6)

where

ε_t = f_t(x̄) − f(x̄) = [∇f(t), x̄ − t] + a‖x̄ − t‖² + f(t) − f(x̄), t ∈ Ω.  (5.7)

(ii) If Ω = X then x̄ is an η-global minimizer if and only if

‖2a(x̄ − t) + ∇f(t)‖ ≤ 2√([a∇f(t), x̄ − t] + a²‖x̄ − t‖² + a[f(t) − f(x̄)]) + η, t ∈ X.  (5.8)

Proof  (i) For t ∈ Ω consider the function (3.3): f_t(x) = f(t) + [∇f(t), x − t] + a‖x − t‖², x ∈ X. It follows from Proposition 1 that f(x) = min_{t∈Ω} f_t(x). It is easy to check that

f_t(x) = a‖v_t − x‖² − a‖v_t − t‖² + f(t), with v_t = t − (1/(2a))∇f(t).

In view of Theorem 3, a point x̄ is an η-global minimizer over Ω if and only if for each t ∈ T there exist ε_{1,t} and ε_{2,t} such that

ε_{1,t} ≥ 0, ε_{2,t} ≥ 0, ε_{1,t} + ε_{2,t} = ε_t  (5.9)

and (5.1) holds:

0 ∈ ∂_{ε_{1,t}} f_t(x̄) + B(0, η) + N_{ε_{2,t},Ω}(x̄).  (5.10)

Since f̄_t = f_t + δ_Ω, we have

∂_{ε_t} f̄_t(x̄) = ∪_{ε_{1,t}≥0, ε_{2,t}≥0, ε_{1,t}+ε_{2,t}=ε_t} ( ∂_{ε_{1,t}} f_t(x̄) + N_{ε_{2,t},Ω}(x̄) ),

so the intersection B(0, η) ∩ ∂_{ε_t} f̄_t(x̄) is nonempty if and only if there exist ε_{1,t} and ε_{2,t} such that (5.9) and (5.10) hold. This implies that (5.10) is equivalent to (5.6).

(ii) Let Ω = X and let l_t(x) = a‖v_t − x‖² (x ∈ X), (t ∈ Ω). Clearly ∂_ε f_t(x̄) = ∂_ε l_t(x̄) for all ε ≥ 0. Since Ω = X it follows that f̄_t = f_t; therefore, due to (5.6), x̄ is an η-global minimizer if and only if

min{ ‖y‖ : y ∈ ∂_{ε_t} l_t(x̄) } ≤ η.

Applying Proposition 5 we conclude that ∂_{ε_t} l_t(x̄) = B(2a(x̄ − v_t), 2√(aε_t)), so we need to calculate min{ ‖y‖ : y ∈ B(2a(x̄ − v_t), 2√(aε_t)) }. It is easy to check that this minimum is equal to max(0, q_t) with q_t := ‖2a(x̄ − v_t)‖ − 2√(aε_t), so the inequality (5.6) holds if and only if q_t ≤ η. Since

2a(x̄ − v_t) = 2a(x̄ − t + (1/(2a))∇f(t)) = 2a(x̄ − t) + ∇f(t)

and

aε_t = [a∇f(t), x̄ − t] + a²‖x̄ − t‖² + a[f(t) − f(x̄)],

we have

q_t = ‖2a(x̄ − t) + ∇f(t)‖ − 2√([a∇f(t), x̄ − t] + a²‖x̄ − t‖² + a[f(t) − f(x̄)]).

Thus the result follows.

6 Sufficient conditions for the constrained global minimum

The necessary and sufficient conditions presented in the previous section are not tractable. However, the results obtained can be useful for the examination of some tractable conditions. In this section we investigate a sufficient condition for the global minimum that is based on abstract convexity. Consider the following optimization problem (P):

minimize f(x) subject to x ∈ Ω,  (6.1)

where f : X → ℝ and Ω ⊂ X.

Proposition 6 (Sufficient condition for a global minimizer)  Let x̄ ∈ Ω. Let L be a set of elementary functions l : X → ℝ such that ∂_L f(x̄) ≠ ∅. If there exists a function l ∈ L such that

l ∈ ∂_L f(x̄) and l(x) ≥ l(x̄) for all x ∈ Ω,  (6.2)

then x̄ is a global minimizer of (P).

Proof  We have:

f(x) − f(x̄) ≥ l(x) − l(x̄) ≥ 0, x ∈ Ω.
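The unconstrained criterion of Theorem 4(ii), i.e. condition (5.8), can be exercised numerically. Here is a sketch, assuming the hypothetical choice f(x) = 1 − cos(x) with global minimizer x̄ = 0 (∇f = sin has Lipschitz constant K = 1, so take a = 1):

```python
import math

# Numerical sketch of the unconstrained criterion (5.8) (Theorem 4(ii)),
# for the hypothetical choice f(x) = 1 - cos(x) with global minimizer
# xbar = 0.  Here grad f = sin has Lipschitz constant K = 1; take a = 1.
a = 1.0
xbar = 0.0
f = lambda x: 1.0 - math.cos(x)
df = math.sin

def q(t):
    # q_t = |2a(xbar - t) + f'(t)| - 2*sqrt(a*eps_t), where
    # a*eps_t = a*f'(t)*(xbar - t) + a^2*(xbar - t)^2 + a*(f(t) - f(xbar))
    a_eps = a * df(t) * (xbar - t) + a * a * (xbar - t) ** 2 + a * (f(t) - f(xbar))
    return abs(2.0 * a * (xbar - t) + df(t)) - 2.0 * math.sqrt(max(a_eps, 0.0))

# xbar is a global minimizer, so q_t <= eta must hold for every eta > 0,
# which forces q_t <= 0 for all t
for i in range(-500, 501):
    t = i / 50.0
    assert q(t) <= 1e-9
print("criterion (5.8) holds at the global minimizer for all sampled t")
```

For this f one can also verify q_t ≤ 0 by hand: it reduces to sin²t ≤ 4(1 − cos t), i.e. (1 − cos t)(1 + cos t) ≤ 4(1 − cos t), which always holds.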

We can express the result of Proposition 6 in terms of the L-normal set N_{L,Ω}(x̄). Indeed, l(x) ≥ l(x̄) for all x ∈ Ω means that −l ∈ N_{L,Ω}(x̄), so the sufficient condition under consideration can be represented in the following form:

Proposition 7  Let x̄ ∈ Ω. If ∂_L f(x̄) ∩ (−N_{L,Ω}(x̄)) ≠ ∅ then x̄ is a global minimizer of (P).

Remark 3  Assume that subdifferential calculus rules hold for the set L and the functions f and δ_Ω, in particular

∂_L(f + δ_Ω)(x̄) ⊂ ∂_L f(x̄) + ∂_L δ_Ω(x̄).  (6.3)

Then (6.2) is also a necessary condition. Indeed, the definition of a global minimizer can be presented in terms of the L-subdifferentials as follows:

0 ∈ ∂_L(f + δ_Ω)(x̄).  (6.4)

If (6.3) holds and x̄ is a global minimizer then 0 ∈ ∂_L f(x̄) + ∂_L δ_Ω(x̄). This implies ∂_L f(x̄) ∩ (−∂_L δ_Ω(x̄)) ≠ ∅. Since l ∈ ∂_L δ_Ω(x̄) means that l(x) ≤ l(x̄) for all x ∈ Ω, we get (6.2). If L is an additive set (l₁, l₂ ∈ L ⟹ l₁ + l₂ ∈ L) then the inclusion opposite to (6.3) holds, so (6.3) is equivalent to

∂_L(f + δ_Ω)(x̄) = ∂_L f(x̄) + ∂_L δ_Ω(x̄).  (6.5)

Unfortunately (6.5) does not hold for many sets L of elementary functions (see the detailed discussion in [Jeyakumar V., Rubinov A.M., Wu Z.Y., Generalized Fenchel's conjugation formulas and duality for abstract convex functions, submitted paper]). There is an example in [Jeyakumar V., Rubinov A.M., Wu Z.Y., Sufficient global optimality conditions for non-convex quadratic minimization problems with box constraints, submitted paper] which shows that (6.2) is not a necessary condition for a set L of quadratic functions.

Condition (6.2) is tractable in some important cases (see [Jeyakumar V., Rubinov A.M., Wu Z.Y., Non-convex quadratic minimization problems with quadratic constraints: global optimality conditions, submitted paper; Jeyakumar V., Rubinov A.M., Wu Z.Y., Sufficient global optimality conditions for non-convex quadratic minimization problems with box constraints, submitted paper; and Wu Z.Y., Jeyakumar V., Rubinov A.M., Sufficient conditions for global optimality of bivalent nonconvex quadratic programs, submitted paper] for details) and there is hope that the number of these cases can be extended. It is interesting to compare (6.2) with a necessary condition in the general situation, where (6.3) is not valid. We will compare (6.2) with the necessary and sufficient condition (5.2) given in Theorem 3. Assume that Ω is a nonempty convex set; f : X → ℝ is a function such that f(x) = inf_{t∈T} f_t(x) for x ∈ X, where f_t : X → ℝ is a continuous convex function (t ∈ T); L is a set of concave functions l : X → ℝ_{−∞} such that dom l ⊃ Ω for all l ∈ L.

First we present condition (6.2) in terms that were used in Theorem 3. Let x̄ ∈ Ω. Assume that there exists a function l such that (6.2) holds, that is, l ∈ ∂_L f(x̄) and l(x) − l(x̄) ≥ 0 for all x ∈ Ω. In view of Proposition 3, l ∈ ∂_L f(x̄) if and only if for each t ∈ T there exist ε_{1,t} ≥ 0 and ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = f_t(x̄) − f(x̄) and

∂_{ε_{1,t}} f_t(x̄) ∩ (−∂_{ε_{2,t}} l̄(x̄)) ≠ ∅,  (6.6)

where l̄ = −l + δ_Ω. Condition (5.2) can be rewritten in the following form: x̄ is a global minimizer if and only if for each t ∈ T there exist ε_{1,t} ≥ 0 and ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = f_t(x̄) − f(x̄) and

∂_{ε_{1,t}} f_t(x̄) ∩ (−N_{ε_{2,t},Ω}(x̄)) ≠ ∅.  (6.7)

It follows from the aforesaid that there is only one distinction between the sufficient condition (6.2) and the necessary and sufficient condition (5.2): the subdifferential ∂_{ε_{2,t}} l̄(x̄) in (6.6) is replaced with the normal cone N_{ε_{2,t},Ω}(x̄) in (6.7). Note that (6.2) is expressed in terms of the L-subdifferential ∂_L f(x̄), and we do not need to have a set of elementary functions L in (5.2).

We can provide a more precise comparison between (6.2) and (5.2) for a special set of elementary functions L. This class depends on a point x̄. Recall that a function l : X → ℝ_{−∞} is called superlinear if l(x + y) ≥ l(x) + l(y) for all x, y ∈ X and l(λx) = λl(x) for x ∈ X and λ > 0. An upper semicontinuous superlinear function l has the following representation:

l(x) = inf{ [u, x] : u ∈ ∂l(0) }, x ∈ X,  (6.8)

where

∂l(0) := { u ∈ X : [u, x] ≥ l(x) for all x ∈ X }.  (6.9)

If l : X → ℝ_{−∞} is a superlinear function then the function l′ = −l is sublinear and its subdifferential at zero ∂l′(0) coincides with −∂l(0).

Let x̄ be a global minimizer of the problem (P) defined by (6.1). For the investigation of this global minimizer we will use a set L_{x̄} which consists of the functions of the form m(x) = l(x − x̄), where l is a superlinear function. For the sake of simplicity we assume that x̄ = 0; then we consider superlinear functions themselves.

Proposition 8  Consider the problem (P). Assume that Ω is a nonempty convex set with 0 ∈ Ω and f : X → ℝ is a function such that f(x) = inf_{t∈T} f_t(x), x ∈ X, where f_t : X → ℝ is a continuous convex function (t ∈ T). Let L be the set of all superlinear functions l : X → ℝ_{−∞} with dom l ⊃ Ω. Then

∂_L f(0) ∩ (−N_{L,Ω}(0)) ≠ ∅  (6.10)

if and only if

∂_{ε_t} f_t(0) ∩ (−N_Ω(0)) ≠ ∅, t ∈ T,  (6.11)

where ε_t = f_t(0) − f(0).

Proof  Assume that (6.10) holds, so there exists l ∈ L such that l ∈ ∂_L f(0) and −l ∈ N_{L,Ω}(0). This means that

l(x) ≤ f(x) − f(0) ∀x ∈ X and l(x) ≥ 0 ∀x ∈ Ω.  (6.12)

It follows from (6.8) and the second inequality in (6.12) that

∂l(0) ⊂ −N_Ω(0).  (6.13)

The first inequality in (6.12) is equivalent to l(x) ≤ f_t(x) − f_t(0) + ε_t for each x ∈ X and t ∈ T, that is, 0 ∈ ∂_{ε_t}(f_t + (−l))(0) for every t ∈ T. Applying Theorem 1 we obtain that

∂_{ε_t}(f_t + (−l))(0) = ∪_{ε_1,ε_2≥0, ε_1+ε_2=ε_t} ( ∂_{ε_1} f_t(0) + ∂_{ε_2}(−l)(0) ).  (6.14)

It is well known (see, for example, [2]) and easy to check that the ε-subdifferential of a sublinear function at zero with ε > 0 coincides with its subdifferential at zero; therefore (6.14) can be represented in the form

∂_{ε_t}(f_t + (−l))(0) = ∪_{0≤ε≤ε_t} ( ∂_ε f_t(0) + ∂(−l)(0) ).  (6.15)

Since ∂_{ε_t} f_t(0) ⊃ ∂_ε f_t(0) for 0 ≤ ε ≤ ε_t, (6.15) implies

∂_{ε_t}(f_t + (−l))(0) = ∂_{ε_t} f_t(0) + ∂(−l)(0).  (6.16)

It follows from (6.16), (6.13) and the equality ∂(−l)(0) = −∂l(0) that (6.11) holds.

Conversely, let (6.11) hold, i.e., for any t ∈ T there exists u_t ∈ X such that u_t ∈ ∂_{ε_t} f_t(0) and u_t ∈ −N_Ω(0). Let l(x) = inf_{t∈T} [u_t, x]. Then l : X → ℝ_{−∞} is a superlinear function and ∂l(0) = cl co{u_t}_{t∈T}. Since −N_Ω(0) is a closed and convex set, this implies that ∂l(0) ⊂ −N_Ω(0). The inclusion u_t ∈ −N_Ω(0) also implies that l(x) = inf_{t∈T} [u_t, x] ≥ 0, x ∈ Ω, so dom l ⊃ Ω. This means that l ∈ L. We have, for x ∈ X and t ∈ T,

l(x) ≤ [u_t, x] ≤ f_t(x) − f_t(0) + ε_t = f_t(x) − f(0),

and so l(x) ≤ f(x) − f(0) for all x ∈ X. Hence l ∈ ∂_L f(0). Since u_t ∈ −N_Ω(0) it follows that [u_t, x] ≥ 0 for each x ∈ Ω and t ∈ T. Hence l(x) ≥ 0 for each x ∈ Ω, whence −l ∈ N_{L,Ω}(0).

Proposition 8 allows us to compare the necessary and sufficient condition (5.2) and the sufficient condition (6.2) in the case under consideration. Let x̄ = 0 be a solution of the problem (P). Let L be the set of all superlinear functions l : X → ℝ_{−∞} with dom l ⊃ Ω. Then (5.2) can be represented in the following form: for any t ∈ T, there exist ε_{1,t} ≥ 0 and ε_{2,t} ≥ 0 such that ε_{1,t} + ε_{2,t} = ε_t and

∂_{ε_{1,t}} f_t(0) ∩ (−N_{ε_{2,t},Ω}(0)) ≠ ∅.  (6.17)

Due to Proposition 8, the sufficient condition (6.2) can be represented in the form

∂_{ε_t} f_t(0) ∩ (−N_Ω(0)) ≠ ∅, t ∈ T.  (6.18)

Thus (6.2) is a special case of (5.2) corresponding to ε_{1,t} = ε_t, ε_{2,t} = 0.

7 Some properties of the global minimum

In this section we will examine some properties of the global minimum. We start with functions f : X → ℝ having a Lipschitz continuous gradient mapping.

Theorem 5  Let f ∈ C¹(X) and let the mapping x ↦ ∇f(x) be Lipschitz continuous:

K := sup_{x,y∈X, x≠y} ‖∇f(x) − ∇f(y)‖ / ‖x − y‖ < +∞.

Let a ≥ K. Let a point x̄ ∈ X be a global minimizer of f. Then

f(t) − (1/(4a))‖∇f(t)‖² ≥ f(x̄), t ∈ X.  (7.1)

Proof  A point x̄ is a global minimizer on X if and only if (5.8) holds for all η > 0, that is,

‖2a(x̄ − t) + ∇f(t)‖² ≤ 4([a∇f(t), x̄ − t] + a²‖x̄ − t‖² + a[f(t) − f(x̄)]), t ∈ X.  (7.2)

This inequality can be represented in the form

[∇f(t), ∇f(t)] − 4a[f(t) − f(x̄)] ≤ 0, t ∈ X,

which is equivalent to (7.1).

We now give a version of Theorem 5 for functions defined on a set Ω ⊂ X with nonempty interior.

Theorem 6  Consider the space X and assume that X is equipped not only with the norm ‖·‖ but also with a norm ‖·‖′, which is equivalent to ‖·‖. Let Ω ⊂ X be a set with int Ω ≠ ∅ and let f be a continuously differentiable function defined on an open set containing Ω. Assume that the mapping x ↦ ∇f(x) is Lipschitz on Ω:

K := sup_{x≠y, x,y∈Ω} ‖∇f(x) − ∇f(y)‖ / ‖x − y‖ < +∞.

Let x̄ ∈ int Ω be a global minimizer of f over Ω. Consider the ball B(x̄, r) = {x : ‖x − x̄‖ ≤ r} ⊂ int Ω and let

M := max{ ‖∇f(t)‖ : t ∈ B(x̄, r) }.  (7.3)

Let q > 0 be a number such that B(x̄, r + q) ⊂ Ω and let

a ≥ max(K, M/(2q)).  (7.4)

Then

f(t) − (1/(4a))‖∇f(t)‖² ≥ f(x̄), t ∈ B(x̄, r).  (7.5)

Proof  Let r′ := r + M/(2a). Then r′ ≤ r + q, so B(x̄, r′) ⊂ Ω. Applying Proposition 1 we conclude that f(x) = min_{t∈B(x̄,r′)} f_t(x) for any x ∈ B(x̄, r′), where f_t(x) = f(t) + [∇f(t), x − t] + a‖x − t‖². Thus f(x) ≤ min_{t∈B(x̄,r)} f_t(x) for any x ∈ B(x̄, r′). Let us calculate v_t ∈ argmin_{x∈X} f_t(x). Since v_t is a solution of the equation ∇f_t(x) = 0, we have v_t = t − (1/(2a))∇f(t). So

min_{x∈X} f_t(x) = f_t(v_t) = f(t) + [∇f(t), −(1/(2a))∇f(t)] + (1/(4a))‖∇f(t)‖² = f(t) − (1/(4a))‖∇f(t)‖².

For any t ∈ B(x̄, r) it holds:

‖v_t − x̄‖ ≤ ‖t − x̄‖ + ‖∇f(t)‖/(2a) ≤ r + M/(2a) = r′.

So v_t ∈ B(x̄, r′) and

f(t) − (1/(4a))‖∇f(t)‖² = f_t(v_t) = min_{x∈Ω} f_t(x) = min_{x∈B(x̄,r′)} f_t(x).

Then

f(x̄) = min_{x∈B(x̄,r)} f(x) = min_{x∈B(x̄,r′)} f(x) ≤ min_{x∈B(x̄,r′)} min_{t∈B(x̄,r)} f_t(x) = min_{t∈B(x̄,r)} ( f(t) − (1/(4a))‖∇f(t)‖² ).

Thus (7.5) is valid.

We now present a version of Theorem 6 for functions f : X → ℝ that have a boundedly Lipschitz continuous gradient mapping x ↦ ∇f(x). This means that for

all $r > 0$ it holds:

$K_r := \sup_{x \neq y,\ \|x\|, \|y\| \leq r} \frac{\|\nabla f(x) - \nabla f(y)\|}{\|x - y\|} < +\infty.$  (7.6)

Theorem 7 Let $f \in C^1(X)$ and let the mapping $x \mapsto \nabla f(x)$ be boundedly Lipschitz continuous. For any $r > 0$, let

$M_r := \sup_{t \in B(0,r)} \|\nabla f(t)\| \quad \text{and} \quad r_1 := r + \frac{M_r}{2K_r},$

where $K_r$ is defined by (7.6). If $\bar x$ is a global minimizer of $f$ then for all $r \geq \|\bar x\|$ it holds:

$\frac{1}{4a_r}\|\nabla f(t)\|^2 \leq f(t) - f(\bar x), \quad t \in B(0, r),$  (7.7)

where $a_r \geq K_{r_1}$ is an arbitrary number.

The proof is similar to that of Theorem 6, so we present only its scheme.

Proof Let $r > 0$. It follows from Proposition 1 that $f(x) = \min_{t \in B(0, r_1)} f_t(x)$ for any $x \in B(0, r_1)$, where $f_t(x) = f(t) + [\nabla f(t), x - t] + a_r\|x - t\|^2$ and $a_r \geq K_{r_1}$. Thus $f(x) \leq \inf_{t \in B(0, r)} f_t(x)$ for any $x \in B(0, r_1)$. The same argument as in the proof of Theorem 6 shows that the global minimizer $v_t$ of $f_t$ over $X$ is located in $B(0, r_1)$. Therefore

$f_t(v_t) = \min_{x \in X} f_t(x) = \min_{x \in B(0, r_1)} f_t(x) = f(t) - \frac{1}{4a_r}\|\nabla f(t)\|^2.$

If $\bar x$ is a global minimizer of $f$ on $X$ and $r \geq \|\bar x\|$, then

$f(\bar x) = \min_{x \in B(0, r)} f(x) = \min_{x \in B(0, r_1)} f(x) \leq \min_{t \in B(0, r)} \Big(f(t) - \frac{1}{4a_r}\|\nabla f(t)\|^2\Big) \quad \text{for any } a_r \geq K_{r_1}.$

Thus (7.7) holds.

8 Global optimization and inequalities

In this section we demonstrate that the theory of global optimization has some interesting applications to the theory of inequalities. Indeed, many inequalities can be presented in the form

$f(x) \geq f(\bar x), \quad x \in \Omega,$

where $\Omega \subset X$, so the examination of such inequalities can be reduced to the examination of the global optimization problem

minimize $f(x)$ subject to $x \in \Omega$.

Assume that we have two inequalities:

$f(x) - f(\bar x) \geq 0$  (8.1)

and

$f(x) - f(\bar x) \geq u(x), \quad \text{where } u(x) \geq 0.$  (8.2)

We say that (8.2) is sharper than (8.1) if there is a vector $x$ such that $u(x) > 0$. Results obtained in the previous section can be used for the construction of inequalities that are sharper than a given one. For the sake of definiteness we consider the situation described in Theorem 6. We assume that the conditions of this theorem hold and we use the same notation. Consider the function $Q_a(f) := g : \Omega \to \mathbb{R}$, where

$g(x) = f(x) - \frac{1}{4a}\|\nabla f(x)\|^2, \quad x \in \Omega.$  (8.3)

Here $a$ is the number defined by (7.4). The function $g$ enjoys the following properties:

1. $g(x) \leq f(x)$ for all $x \in \Omega$, and $g(x) = f(x)$ if and only if $x$ is a stationary point of $f$; this follows directly from the definition of $g$.

2. Let $\bar x \in \operatorname{int}\Omega$ be a global minimizer of $f$ over $\Omega$, so $\bar x$ is a global minimizer of $f$ over $B(\bar x, r+q)$. Then $\bar x$ is also a global minimizer of $g$ over $B(\bar x, r+q)$ and $\min_{x \in B} f(x) = \min_{x \in B} g(x)$. Indeed, since $\bar x$ is a global minimizer of $f$, (7.5) holds; the inclusion $\bar x \in \operatorname{int}\Omega$ implies $\nabla f(\bar x) = 0$. Therefore (7.5) can be presented in the following form:

$f(\bar x) - \frac{1}{4a}\|\nabla f(\bar x)\|^2 \leq f(x) - \frac{1}{4a}\|\nabla f(x)\|^2, \quad x \in B(\bar x, r+q).$

This can be rewritten as $g(\bar x) \leq g(x)$, $x \in B(\bar x, r+q)$. Hence $\bar x$ is a global minimizer of $g$ over $B(\bar x, r+q)$. Since $\nabla f(\bar x) = 0$, it follows that $g(\bar x) = f(\bar x)$. Then

$\min_{x \in B(\bar x, r+q)} g(x) = \min_{x \in B(\bar x, r+q)} f(x).$  (8.4)

If $f$ is a twice continuously differentiable function then the function $g$ is differentiable and

$\nabla g(x) = \nabla f(x) - \frac{1}{2a}\nabla^2 f(x)\nabla f(x) = \Big(\mathrm{Id} - \frac{1}{2a}\nabla^2 f(x)\Big)\nabla f(x).$  (8.5)

It follows from (8.5) that a point $y$ is a stationary point of $g$ if either $y$ is a stationary point of $f$ or $\nabla f(y)$ is an eigenvector of $\nabla^2 f(y)$ corresponding to the eigenvalue $2a$.
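The two properties above lend themselves to a quick numerical sanity check. The following sketch is our own illustration, not part of the paper: the test function $f(x) = \|x\|^2$ (whose gradient is Lipschitz with constant $K = 2$), the choice $a = K$, and all identifiers are ours. It verifies that $Q_a f \leq f$ with equality exactly at the stationary point, and that the minimizer and minimal value are preserved, as in (8.4).

```python
import numpy as np

# Hypothetical test function: f(x) = ||x||^2, grad f(x) = 2x,
# whose gradient is Lipschitz with constant K = 2; take a = K.
a = 2.0
f = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x

def Q(f, grad, a):
    """Sharpening operator (8.3): (Q_a f)(x) = f(x) - ||grad f(x)||^2 / (4a)."""
    return lambda x: f(x) - np.dot(grad(x), grad(x)) / (4.0 * a)

g = Q(f, grad, a)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))

# Property 1: g <= f everywhere, with equality at stationary points of f.
assert all(g(x) <= f(x) + 1e-12 for x in pts)
x_star = np.zeros(2)             # the unique stationary point of f
assert abs(g(x_star) - f(x_star)) < 1e-12

# Property 2: the global minimizer of f also minimizes g, with the same value.
assert min(g(x) for x in pts) >= g(x_star)
```

Any smooth $f$ with a Lipschitz gradient and any $a \geq K$ could be substituted; the assertions are exactly properties 1 and 2.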

The inequality $g(x) - g(\bar x) \geq 0$ can be rewritten in the form

$f(x) - f(\bar x) \geq \frac{1}{4a}\|\nabla f(x)\|^2,$

which is sharper than the inequality $f(x) - f(\bar x) \geq 0$. If $f$ is a convex function defined on a convex set $\Omega$ then $g(x) > f(\bar x)$ for all $x \in \operatorname{int}\Omega$ which are different from a global minimizer of $f$.

Remark 4 Let $\Omega = X$ and let $f \in C^1(X)$ have a boundedly Lipschitz continuous gradient. Let $a > 0$ be an arbitrary number and let $Q_a : C^1(X) \to C(X)$ be the operator defined by

$Q_a(f)(x) = f(x) - \frac{1}{4a}\|\nabla f(x)\|^2.$

Then $Q_a(f)(x) \leq f(x)$ for all $x \in X$ and $Q_a(f)(x) = f(x)$ if $x$ is a stationary point of $f$. In particular, if $c$ denotes the constant function $c(x) = c$ for all $x \in X$, then $Q_a(c) = c$ for all $c \in \mathbb{R}$. Let $(a_k)$ be a sequence of positive numbers. Then

$f \geq Q_{a_1} f \geq Q_{a_2} Q_{a_1} f \geq \cdots \geq Q_{a_k} \cdots Q_{a_2} Q_{a_1} f,$

and for a stationary point $\bar x$ of $f$ we have

$f(\bar x) = Q_{a_1} f(\bar x) = Q_{a_2} Q_{a_1} f(\bar x) = \cdots = Q_{a_k} \cdots Q_{a_2} Q_{a_1} f(\bar x).$

Using the described construction we now present a sharper version of the well-known inequality between the arithmetic mean and the geometric mean, which asserts that

$\frac{1}{n}(x_1 + \cdots + x_n) > (x_1 \cdots x_n)^{1/n}, \quad x \in \mathbb{R}^n_+, \ x \neq \lambda \mathbf{1} \text{ with } \lambda \geq 0.$  (8.6)

Here $\mathbf{1} = (1, \ldots, 1)^T \in \mathbb{R}^n_+$. Recall that $\mathbb{R}^n_+$ is the cone of $n$-vectors with nonnegative coordinates and $\mathbb{R}^n_{++}$ is the cone of $n$-vectors with positive coordinates.

Theorem 8 Let $\lambda > r$ be positive numbers. Let

$a_{\lambda,r} = \min_{r < d < \lambda} \max\left(\frac{(n-1)^{1/2}(\lambda+d)}{n(\lambda-d)^2},\ \frac{1}{2(d-r)}\Big(\frac{\lambda+r}{\lambda-r}\Big)^{\frac{n-1}{n}}\right).$

Then for all $x$ such that $\|x - \lambda \mathbf{1}\|_\infty \leq r$ it holds:

$\frac{1}{n}(x_1 + \cdots + x_n) \geq (x_1 \cdots x_n)^{1/n} + \frac{1}{4a_{\lambda,r} n^2} \sum_{i=1}^n \Big(1 - \frac{\pi_n(x)^{1/n}}{x_i}\Big)^2,$  (8.7)

where

$\pi_n(x) = \prod_{i=1}^n x_i, \quad x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n_+.$  (8.8)

Proof Let

$f(x) = \frac{1}{n}(x_1 + \cdots + x_n) - (\pi_n(x))^{1/n}, \quad x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n_+.$  (8.9)

Then $f(x) \geq 0$, and $f(x) = 0$ if and only if $x = \lambda \mathbf{1}$, where $\lambda \geq 0$ and $\mathbf{1} = (1, \ldots, 1)^T$. So the vectors $\lambda \mathbf{1}$ are global minimizers of $f$ over $\mathbb{R}^n_+$. The function $f$ is sublinear. We will sharpen the inequality (8.6) by applying Theorem 6 to the inequality $f(x) \geq 0$. An easy calculation shows that

$\nabla f(x) = \frac{1}{n}\Big(1 - \frac{\pi_n(x)^{1/n}}{x_1}, \ldots, 1 - \frac{\pi_n(x)^{1/n}}{x_n}\Big).$  (8.10)

Hence

$\|\nabla f(x)\|^2 = \frac{1}{n^2} \sum_{i=1}^n \Big(1 - \frac{\pi_n(x)^{1/n}}{x_i}\Big)^2.$

Later on we will use not only the Euclidean norm $\|\cdot\|$ but also the norm $\|\cdot\|_\infty$. For $\lambda > d > 0$ consider the ball

$V_{\lambda,d} := B_\infty(\lambda \mathbf{1}, d) = \{x \in \mathbb{R}^n : \|\lambda \mathbf{1} - x\|_\infty \leq d\}$  (8.11)
$\phantom{V_{\lambda,d} :} = \{x \in \mathbb{R}^n : \lambda - d \leq x_i \leq \lambda + d, \ i = 1, \ldots, n\}.$  (8.12)

Since $d < \lambda$ it follows that $V_{\lambda,d} \subset \mathbb{R}^n_{++}$. Let $\rho_i(x) = \frac{\pi_n(x)^{1/n}}{x_i}$. We need to estimate $\|\nabla \rho_i(x)\|$ for $x \in V_{\lambda,d}$. We have

$\frac{\partial \rho_i(x)}{\partial x_j} = \frac{(x_1 \cdots x_n)^{1/n}}{n\, x_i x_j} \leq \frac{\lambda + d}{n(\lambda - d)^2}, \quad i \neq j,$

and

$\Big|\frac{\partial \rho_i(x)}{\partial x_i}\Big| = \frac{n-1}{n} \cdot \frac{\pi_n(x)^{1/n}}{x_i^2} \leq \frac{(n-1)(\lambda + d)}{n(\lambda - d)^2},$

so

$\|\nabla \rho_i(x)\|^2 \leq \frac{(n-1)(\lambda+d)^2}{n^2(\lambda-d)^4} + \frac{(n-1)^2(\lambda+d)^2}{n^2(\lambda-d)^4}, \qquad \|\nabla \rho_i(x)\| \leq \Big(\frac{n-1}{n}\Big)^{1/2} \frac{\lambda+d}{(\lambda-d)^2}, \quad x \in V_{\lambda,d}.$  (8.13)
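As reconstructed, the estimate (8.13) can be spot-checked numerically against the analytic gradient of $\rho_i$. The sketch below is our own (the values of $n$, $\lambda$, $d$ are arbitrary sample choices) and assumes the bound in the form $\|\nabla\rho_i(x)\| \leq ((n-1)/n)^{1/2}(\lambda+d)/(\lambda-d)^2$ on $V_{\lambda,d}$.

```python
import numpy as np

# Spot-check of (8.13) as reconstructed:
# ||grad rho_i(x)|| <= sqrt((n-1)/n) * (lam+d) / (lam-d)^2 on V_{lam,d}.
n, lam, d = 4, 2.0, 0.5
bound = np.sqrt((n - 1) / n) * (lam + d) / (lam - d) ** 2

def grad_rho(x, i):
    """Analytic gradient of rho_i(x) = (x_1...x_n)^{1/n} / x_i."""
    p = np.prod(x) ** (1.0 / n)
    g = p / (n * x[i] * x)          # off-diagonal entries (1/n) p / (x_i x_j)
    g[i] = (1.0 / n - 1.0) * p / x[i] ** 2
    return g

rng = np.random.default_rng(1)
# Sample points of the box V_{lam,d} = {x : lam-d <= x_j <= lam+d}.
pts = rng.uniform(lam - d, lam + d, size=(2000, n))
worst = max(np.linalg.norm(grad_rho(x, i)) for x in pts for i in range(n))
assert worst <= bound + 1e-9
```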

Let $x, y \in V_{\lambda,d}$. Applying the mean value theorem we conclude that there exist numbers $\theta_i \in [0, 1]$, $i = 1, \ldots, n$, such that

$\|\nabla f(x) - \nabla f(y)\|^2 = \frac{1}{n^2} \sum_{i=1}^n (\rho_i(x) - \rho_i(y))^2 = \frac{1}{n^2} \sum_{i=1}^n [\nabla \rho_i(x + \theta_i(y - x)), x - y]^2 \leq \frac{1}{n^2} \sum_{i=1}^n \|\nabla \rho_i(x + \theta_i(y - x))\|^2 \|x - y\|^2.$

Since $x, y \in V_{\lambda,d}$ it follows that $x + \theta_i(y - x) \in V_{\lambda,d}$ for all $i$. Applying (8.13) we conclude that

$\|\nabla f(x) - \nabla f(y)\| \leq a_1(\lambda, d) \|x - y\|, \quad x, y \in V_{\lambda,d},$

where

$a_1(\lambda, d) = \frac{(n-1)^{1/2}(\lambda + d)}{n(\lambda - d)^2}.$  (8.14)

Hence the mapping $x \mapsto \nabla f(x)$ is Lipschitz continuous on $V_{\lambda,d}$ with the Lipschitz constant $K \leq a_1(\lambda, d)$. We will apply Theorem 6 to the set $\Omega = V_{\lambda,d}$, where $r < d < \lambda$, and the global minimizer $\bar x = \lambda \mathbf{1}$ of the function $f$. Assume that the norm $\|\cdot\|'$ that was used in Theorem 6 coincides with $\|\cdot\|_\infty$. Let $q = d - r$. Let us estimate $M = \max\{\|\nabla f(x)\| : x \in V_{\lambda,r}\}$. Due to symmetry it is enough to estimate only the first coordinate $\frac{1}{n}\big(1 - \frac{\pi_n(x)^{1/n}}{x_1}\big)$ of the gradient $\nabla f(x)$. It is easy to see that

$\max_{x \in V_{\lambda,r}} \frac{\pi_n(x)^{1/n}}{x_1} = \max_{x \in V_{\lambda,r}} \Big(\frac{x_2 \cdots x_n}{x_1^{n-1}}\Big)^{1/n} \leq \Big(\frac{\lambda+r}{\lambda-r}\Big)^{\frac{n-1}{n}}$

and

$M \leq \Big(\frac{\lambda+r}{\lambda-r}\Big)^{\frac{n-1}{n}}.$

Let

$a_2(\lambda, d, r) = \frac{1}{2(d-r)}\Big(\frac{\lambda+r}{\lambda-r}\Big)^{\frac{n-1}{n}}$

and

$a(\lambda, d, r) = \max(a_1(\lambda, d), a_2(\lambda, d, r)) = \max\left(\frac{(n-1)^{1/2}(\lambda+d)}{n(\lambda-d)^2},\ \frac{1}{2(d-r)}\Big(\frac{\lambda+r}{\lambda-r}\Big)^{\frac{n-1}{n}}\right).$
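The minimization over $d$ in the definition of $a(\lambda, d, r)$ has no closed form here, but $a_{\lambda,r}$ is easy to approximate by a grid search. The sketch below is our own (the parameters $n$, $\lambda$, $r$ and the grid size are arbitrary choices): it computes an upper estimate of $a_{\lambda,r}$ and then spot-checks the resulting sharpened arithmetic-geometric mean inequality of Theorem 8 on random points of $V_{\lambda,r}$. Since a grid minimum over-estimates the true $a_{\lambda,r}$, and a larger $a$ only weakens the added term, the check remains valid.

```python
import numpy as np

# Grid search for a_{lam,r} = min_{r<d<lam} max(a1(lam,d), a2(lam,d,r)),
# with a1, a2 as reconstructed above, then a spot-check of Theorem 8.
n, lam, r = 3, 1.0, 0.2

def a1(d):  # Lipschitz constant of grad f on V_{lam,d}, eq. (8.14)
    return np.sqrt(n - 1) * (lam + d) / (n * (lam - d) ** 2)

def a2(d):  # the bound M/(2q) with q = d - r
    return ((lam + r) / (lam - r)) ** ((n - 1) / n) / (2 * (d - r))

ds = np.linspace(r + 1e-4, lam - 1e-4, 20_000)
a_lr = min(max(a1(d), a2(d)) for d in ds)   # over-estimate of a_{lam,r}

# Sharpened AM-GM (8.7) on the box ||x - lam*1||_inf <= r:
rng = np.random.default_rng(2)
for x in rng.uniform(lam - r, lam + r, size=(2000, n)):
    p = np.prod(x) ** (1 / n)
    extra = sum((1 - p / xi) ** 2 for xi in x) / (4 * a_lr * n ** 2)
    assert x.mean() >= p + extra - 1e-12
```

The blow-up of $a_1$ as $d \to \lambda$ and of $a_2$ as $d \to r$ guarantees the grid captures an interior minimum.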

Note that

$\lim_{d \to \lambda - 0} a(\lambda, d, r) = \lim_{d \to r + 0} a(\lambda, d, r) = +\infty,$

so the function $d \mapsto a(\lambda, d, r)$ attains its minimum on the interval $(r, \lambda)$. Let $a_{\lambda,r} = \min_{r < d < \lambda} a(\lambda, d, r)$. Applying Theorem 6 we conclude that

$\frac{1}{n}(x_1 + \cdots + x_n) \geq (x_1 \cdots x_n)^{1/n} + \frac{1}{4a_{\lambda,r} n^2} \sum_{i=1}^n \Big(1 - \frac{\pi_n(x)^{1/n}}{x_i}\Big)^2$

for $x \in V_{\lambda,r}$.

Acknowledgments The authors thank two anonymous referees whose valuable comments and suggestions helped significantly to improve the presentation and also strengthen the paper.

References

1. Beck, A., Teboulle, M.: Global optimality conditions for quadratic optimization problems with binary constraints. SIAM J. Optim. 11 (2000)
2. Dur, M., Horst, R., Locatelli, M.: Necessary and sufficient global optimality conditions for convex maximization revisited. J. Math. Anal. Appl. 272
3. Hiriart-Urruty, J.-B.: ε-subdifferential calculus. In: Convex Analysis and Optimization. Res. Notes Math. 57 (1982)
4. Hiriart-Urruty, J.-B.: Conditions for global optimality 2. J. Glob. Optim. 13
5. Hiriart-Urruty, J.-B.: Global optimality conditions in maximizing a convex quadratic function under convex quadratic constraints. J. Glob. Optim. 21, 445-455 (2001)
6. Hiriart-Urruty, J.-B., Lemarechal, C.: Convex Analysis and Minimization Algorithms, II. Springer, Berlin (1993)
7. Hiriart-Urruty, J.-B., Ledyaev, Y.A.: A note on the characterization of the global maxima of a (tangentially) convex function over a convex set. J. Convex Anal. 3 (1996)
8. Horst, R., Pardalos, P. (eds.): Handbook of Global Optimization. Nonconvex Optimization and its Applications. Kluwer, Dordrecht (1995)
9. Pallaschke, D., Rolewicz, S.: Foundations of Mathematical Optimization. Kluwer, Dordrecht (1997)
10. Pan, X.-C., Zheng, Q.: Global optimum shape design. Comput. Math. Appl. 37
11. Pinar, M.C.: Sufficient global optimality conditions for bivalent quadratic optimization. J. Optim. Theory Appl. 122(2)
12. Rubinov, A.M.: Abstract Convexity and Global Optimization. Kluwer, Dordrecht (2000)
13. Rubinov, A.M., Glover, B.M.: Toland-Singer formula cannot discriminate a global minimizer from a choice of stationary points. Numer. Funct. Anal. Optim. 20 (1999)
14. Singer, I.: Abstract Convex Analysis. Wiley, New York (1997)
15. Strekalovsky, A.S.: Global optimality conditions for nonconvex optimization. J. Glob. Optim. 12
16. Strekalovsky, A.S.: Elements of Nonconvex Optimization. Nauka, Novosibirsk (2003) (in Russian)
17. Tsevendorj, I.: Piecewise-convex maximization problems: global optimality conditions. J. Glob. Optim. 21 (2001)
18. Tuy, H.: Convex Analysis and Global Optimization. Kluwer, Dordrecht (1998)
19. Zalinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, London (2002)


More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION ALESSIO FIGALLI AND YOUNG-HEON KIM Abstract. Given Ω, Λ R n two bounded open sets, and f and g two probability densities concentrated

More information

Duality (Continued) min f ( x), X R R. Recall, the general primal problem is. The Lagrangian is a function. defined by

Duality (Continued) min f ( x), X R R. Recall, the general primal problem is. The Lagrangian is a function. defined by Duality (Continued) Recall, the general primal problem is min f ( x), xx g( x) 0 n m where X R, f : X R, g : XR ( X). he Lagrangian is a function L: XR R m defined by L( xλ, ) f ( x) λ g( x) Duality (Continued)

More information

Variational inequalities for set-valued vector fields on Riemannian manifolds

Variational inequalities for set-valued vector fields on Riemannian manifolds Variational inequalities for set-valued vector fields on Riemannian manifolds Chong LI Department of Mathematics Zhejiang University Joint with Jen-Chih YAO Chong LI (Zhejiang University) VI on RM 1 /

More information

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed

More information

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences

More information

A quasisecant method for minimizing nonsmooth functions

A quasisecant method for minimizing nonsmooth functions A quasisecant method for minimizing nonsmooth functions Adil M. Bagirov and Asef Nazari Ganjehlou Centre for Informatics and Applied Optimization, School of Information Technology and Mathematical Sciences,

More information

TWO MAPPINGS RELATED TO SEMI-INNER PRODUCTS AND THEIR APPLICATIONS IN GEOMETRY OF NORMED LINEAR SPACES. S.S. Dragomir and J.J.

TWO MAPPINGS RELATED TO SEMI-INNER PRODUCTS AND THEIR APPLICATIONS IN GEOMETRY OF NORMED LINEAR SPACES. S.S. Dragomir and J.J. RGMIA Research Report Collection, Vol. 2, No. 1, 1999 http://sci.vu.edu.au/ rgmia TWO MAPPINGS RELATED TO SEMI-INNER PRODUCTS AND THEIR APPLICATIONS IN GEOMETRY OF NORMED LINEAR SPACES S.S. Dragomir and

More information

Refined optimality conditions for differences of convex functions

Refined optimality conditions for differences of convex functions Noname manuscript No. (will be inserted by the editor) Refined optimality conditions for differences of convex functions Tuomo Valkonen the date of receipt and acceptance should be inserted later Abstract

More information

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN Thai Journal of Mathematics Volume 14 (2016) Number 1 : 53 67 http://thaijmath.in.cmu.ac.th ISSN 1686-0209 A New General Iterative Methods for Solving the Equilibrium Problems, Variational Inequality Problems

More information

The sum of two maximal monotone operator is of type FPV

The sum of two maximal monotone operator is of type FPV CJMS. 5(1)(2016), 17-21 Caspian Journal of Mathematical Sciences (CJMS) University of Mazandaran, Iran http://cjms.journals.umz.ac.ir ISSN: 1735-0611 The sum of two maximal monotone operator is of type

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 4, Issue 4, Article 67, 2003 ON GENERALIZED MONOTONE MULTIFUNCTIONS WITH APPLICATIONS TO OPTIMALITY CONDITIONS IN

More information

Dedicated to Michel Théra in honor of his 70th birthday

Dedicated to Michel Théra in honor of his 70th birthday VARIATIONAL GEOMETRIC APPROACH TO GENERALIZED DIFFERENTIAL AND CONJUGATE CALCULI IN CONVEX ANALYSIS B. S. MORDUKHOVICH 1, N. M. NAM 2, R. B. RECTOR 3 and T. TRAN 4. Dedicated to Michel Théra in honor of

More information

Helly's Theorem and its Equivalences via Convex Analysis

Helly's Theorem and its Equivalences via Convex Analysis Portland State University PDXScholar University Honors Theses University Honors College 2014 Helly's Theorem and its Equivalences via Convex Analysis Adam Robinson Portland State University Let us know

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 CHRISTIAN LÉONARD Contents Preliminaries 1 1. Convexity without topology 1 2. Convexity

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Metric regularity and systems of generalized equations

Metric regularity and systems of generalized equations Metric regularity and systems of generalized equations Andrei V. Dmitruk a, Alexander Y. Kruger b, a Central Economics & Mathematics Institute, RAS, Nakhimovskii prospekt 47, Moscow 117418, Russia b School

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

McMaster University. Advanced Optimization Laboratory. Title: A Proximal Method for Identifying Active Manifolds. Authors: Warren L.

McMaster University. Advanced Optimization Laboratory. Title: A Proximal Method for Identifying Active Manifolds. Authors: Warren L. McMaster University Advanced Optimization Laboratory Title: A Proximal Method for Identifying Active Manifolds Authors: Warren L. Hare AdvOl-Report No. 2006/07 April 2006, Hamilton, Ontario, Canada A Proximal

More information

CONTINUOUS CONVEX SETS AND ZERO DUALITY GAP FOR CONVEX PROGRAMS

CONTINUOUS CONVEX SETS AND ZERO DUALITY GAP FOR CONVEX PROGRAMS CONTINUOUS CONVEX SETS AND ZERO DUALITY GAP FOR CONVEX PROGRAMS EMIL ERNST AND MICHEL VOLLE ABSTRACT. This article uses classical notions of convex analysis over euclidean spaces, like Gale & Klee s boundary

More information

On Gap Functions for Equilibrium Problems via Fenchel Duality

On Gap Functions for Equilibrium Problems via Fenchel Duality On Gap Functions for Equilibrium Problems via Fenchel Duality Lkhamsuren Altangerel 1 Radu Ioan Boţ 2 Gert Wanka 3 Abstract: In this paper we deal with the construction of gap functions for equilibrium

More information

Week 3: Faces of convex sets

Week 3: Faces of convex sets Week 3: Faces of convex sets Conic Optimisation MATH515 Semester 018 Vera Roshchina School of Mathematics and Statistics, UNSW August 9, 018 Contents 1. Faces of convex sets 1. Minkowski theorem 3 3. Minimal

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Supplement A: Mathematical background A.1 Extended real numbers The extended real number

More information

Lecture 1: Background on Convex Analysis

Lecture 1: Background on Convex Analysis Lecture 1: Background on Convex Analysis John Duchi PCMI 2016 Outline I Convex sets 1.1 Definitions and examples 2.2 Basic properties 3.3 Projections onto convex sets 4.4 Separating and supporting hyperplanes

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

On Directed Sets and their Suprema

On Directed Sets and their Suprema XLIM UMR CNRS 6172 Département Mathématiques-Informatique On Directed Sets and their Suprema M. Ait Mansour & N. Popovici & M. Théra Rapport de recherche n 2006-03 Déposé le 1er mars 2006 (version corrigée)

More information

Mathematics for Economists

Mathematics for Economists Mathematics for Economists Victor Filipe Sao Paulo School of Economics FGV Metric Spaces: Basic Definitions Victor Filipe (EESP/FGV) Mathematics for Economists Jan.-Feb. 2017 1 / 34 Definitions and Examples

More information

FIRST ORDER CHARACTERIZATIONS OF PSEUDOCONVEX FUNCTIONS. Vsevolod Ivanov Ivanov

FIRST ORDER CHARACTERIZATIONS OF PSEUDOCONVEX FUNCTIONS. Vsevolod Ivanov Ivanov Serdica Math. J. 27 (2001), 203-218 FIRST ORDER CHARACTERIZATIONS OF PSEUDOCONVEX FUNCTIONS Vsevolod Ivanov Ivanov Communicated by A. L. Dontchev Abstract. First order characterizations of pseudoconvex

More information

Extended Monotropic Programming and Duality 1

Extended Monotropic Programming and Duality 1 March 2006 (Revised February 2010) Report LIDS - 2692 Extended Monotropic Programming and Duality 1 by Dimitri P. Bertsekas 2 Abstract We consider the problem minimize f i (x i ) subject to x S, where

More information

Self-dual Smooth Approximations of Convex Functions via the Proximal Average

Self-dual Smooth Approximations of Convex Functions via the Proximal Average Chapter Self-dual Smooth Approximations of Convex Functions via the Proximal Average Heinz H. Bauschke, Sarah M. Moffat, and Xianfu Wang Abstract The proximal average of two convex functions has proven

More information

On the convexity of piecewise-defined functions

On the convexity of piecewise-defined functions On the convexity of piecewise-defined functions arxiv:1408.3771v1 [math.ca] 16 Aug 2014 Heinz H. Bauschke, Yves Lucet, and Hung M. Phan August 16, 2014 Abstract Functions that are piecewise defined are

More information