Lagrange multipliers theorem and saddle point optimality criteria in mathematical programming
J. Math. Anal. Appl. 323 (2006)

Lagrange multipliers theorem and saddle point optimality criteria in mathematical programming

Duong Minh Duc, Nguyen Dinh Hoang, Lam Hoang Nguyen
National University of Hochiminh City, Viet Nam
Corresponding author: dmduc@hcmc.netnam.vn (D.M. Duc)

Received 16 July 2005; available online 28 November 2005. Submitted by B.S. Mordukhovich.

Abstract. We prove a version of the Lagrange multipliers theorem for nonsmooth functionals defined on normed spaces. Applying these results, we extend some results about saddle point optimality criteria in mathematical programming.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Lagrange multipliers theorem; Saddle point; Mathematical programming

1. Introduction

Let U be an open neighborhood of a vector x in a normed space E, and let g_1, ..., g_N and f be real functions on U. We consider the following optimization problem with inequality constraints:

    (PI)    min f(y)   s.t.  g_i(y) ≤ 0,  i = 1, ..., N.

If E is a Banach space, x is a solution of (PI), g_1, ..., g_N and f are Fréchet differentiable at x, and D(g_1, ..., g_N)(x)(E) = R^N, then Ioffe and Tihomirov [7] proved that there are a real number a_0 and nonpositive real numbers a_1, ..., a_N such that (a_0, ..., a_N) ≠ 0 in R^{N+1} and

    a_0 Df(x) = a_1 Dg_1(x) + ··· + a_N Dg_N(x).
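As a concrete numerical illustration of this multiplier rule (a toy instance of ours, not from the paper: the functions, the minimizer and the finite-difference step are all assumptions chosen for the sketch), one can check the condition a_0 Df(x) = a_1 Dg_1(x), with the normalization a_0 = 1 and a_1 ≤ 0, on an instance of (PI) with N = 1:

```python
import math

# Toy instance of (PI): minimize f(y) = y1 + y2 subject to
# g1(y) = y1^2 + y2^2 - 1 <= 0 (the closed unit disc).
def f(y):
    return y[0] + y[1]

def g1(y):
    return y[0] ** 2 + y[1] ** 2 - 1.0

# The minimizer lies on the boundary, at x = (-1/sqrt(2), -1/sqrt(2)).
x = (-1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))

def gradient(fun, p, h=1e-6):
    """Central finite-difference gradient (stand-in for the Frechet derivative)."""
    out = []
    for i in range(len(p)):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        out.append((fun(pp) - fun(pm)) / (2.0 * h))
    return out

df = gradient(f, x)    # numerically (1, 1)
dg = gradient(g1, x)   # numerically (-sqrt(2), -sqrt(2))

# Multiplier rule with a0 = 1: Df(x) = a1 * Dg1(x), with a1 nonpositive.
a1 = df[0] / dg[0]                                   # approximately -1/sqrt(2) < 0
residual = max(abs(df[i] - a1 * dg[i]) for i in range(2))  # near machine zero
```

The multiplier comes out as a_1 ≈ −1/√2 ≤ 0, as the cited result predicts for a boundary minimizer.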
In [4], Halkin extended this result to the following optimization problem:

    (PIE)    min f(y)   s.t.  g_i(y) ≤ 0,  i = 1, ..., n,   g_{n+j}(y) = 0,  j = 1, ..., N − n.

Halkin also proved that a_1, ..., a_n are nonpositive. Furthermore, if g_1, ..., g_N are not only Fréchet differentiable but also C^1 at x, that is, the Fréchet derivatives of g_1, ..., g_N exist on a neighborhood of x and are continuous at x, Ioffe and Tihomirov [7, p. 73] proved that a_0 in their cited result is not equal to 0.

Using the notion of generalized gradients, Clarke, Ioffe, Michel and Penot, Mordukhovich, Rockafellar and Treiman extended the above results to Lipschitz constraint functions in [3,5,6,10,11,17,19,20]. In [21,22], Ye considered the problem (PIE) under mixed assumptions of Gâteaux differentiability, Fréchet differentiability and Lipschitz continuity of the constraint functions. In [8,9,12–16], using the extremal principle, Kruger, Mordukhovich and Wang considered the problem (PIE) for locally Lipschitzian constraint functions on subsets of Asplund spaces.

If N = 1, we reduced the Lagrange multipliers rule to a two-dimensional problem in [1] and obtained a version of the Lagrange multipliers theorem in which we only required the smoothness of the restrictions of f and g to F ∩ U, where F is any two-dimensional vector subspace of E containing x. This smoothness is very weak and may not imply the continuity of the functions. In the next section of the present paper, we prove a discrete implicit mapping theorem (see Lemma 2.1) and apply it to extend the results in [1] to the case N > 1, for f and g whose restrictions to F ∩ U are smooth for any (N + 1)-dimensional vector subspace F of E containing x. We note that our results can be applied to functions which are neither C^1-Fréchet differentiable nor Lipschitz continuous, and which may even fail to be continuous at x (see Remark 2.3). Applying these results, we extend some results of Bector et al. [2] in the last section.

2.
Lagrange multipliers rule

Let U be a nonempty open subset of a normed space (E, ‖·‖_E), X be a linear subspace of E, Z be a finite-dimensional linear subspace of X and J be a mapping from U into a normed space Y. We consider Z as a normed subspace of X. Let v be a vector in X and x be in U. Denote by Z(v) and Z(x, v) the vector subspaces of E generated by Z ∪ {v} and Z ∪ {x, v}, respectively. We put

    U_{x,Z} = {y ∈ Z: x + y ∈ U},    J_{x,Z}(y) = J(x + y)  for every y ∈ U_{x,Z}.

Then U_{x,Z} is an open subset of Z. We say

(i) J is (X, Z)-continuous at x on U if and only if for every v in X there is a positive real number η_v such that J_{u,Z} is continuous at 0 for any u in Z(x, v) ∩ B_E(x, η_v);

(ii) J is X-differentiable at x if and only if there exists a linear mapping DJ(x) from X into Y such that

    lim_{t→0} (J(x + th) − J(x))/t = DJ(x)(h)  for every h ∈ X;
(iii) J is (X, Z)-differentiable at x if and only if J is X-differentiable at x and, for any v in X, if the sequence {(h_m, t_m)}_{m∈N} ⊂ Z(v) × R converges to (h, 0) in Z(v) × R then

    lim_{m→∞} (J(x + t_m h_m) − J(x))/t_m = DJ(x)(h).

Remark 2.1. Let J be a linear mapping from E into R^n, X be a linear subspace of E and Z be a finite-dimensional vector subspace of X. It is clear that J is (X, Z)-continuous at any x in E and (X, Z)-differentiable at any x in E, although it may not be continuous on E.

Remark 2.2. If J is (X, Z)-differentiable at x then, for any v in X, there are a positive real number ε_v and a mapping φ_v from B_E(0, ε_v) ∩ Z(v) into Y such that B_E(x, ε_v) ⊂ U, lim_{z→0} φ_v(z) = 0 and

    J(x + z) = J(x) + DJ(x)(z) + ‖z‖_E φ_v(z)  for every z ∈ B_E(0, ε_v) ∩ Z(v).

Indeed, it is sufficient to prove that

    lim_{z∈Z(v), z→0} (J(x + z) − J(x) − DJ(x)(z))/‖z‖_E = 0.

We assume by contradiction that there exist a sequence {z_m}_{m∈N} in Z(v) and a positive real number ε such that 0 < ‖z_m‖_E < m^{−1} and

    ‖(J(x + z_m) − J(x) − DJ(x)(z_m))/‖z_m‖_E‖_Y > ε  for every m ∈ N.    (∗)

We put s_m = ‖z_m‖_E^{−1} z_m ∈ Z(v). Because Z(v) is a finite-dimensional subspace of X, there exists a subsequence {s_{m_k}}_{k∈N} of {s_m}_{m∈N} such that lim_{k→∞} s_{m_k} = s in Z(v). Because J is (X, Z)-differentiable at x, we have

    lim_{k→∞} (J(x + z_{m_k}) − J(x))/‖z_{m_k}‖_E = lim_{k→∞} (J(x + ‖z_{m_k}‖_E s_{m_k}) − J(x))/‖z_{m_k}‖_E = DJ(x)(s).

Since Z(v) is finite-dimensional, DJ(x) is continuous on Z(v) and

    lim_{k→∞} DJ(x)(z_{m_k}/‖z_{m_k}‖_E) = lim_{k→∞} DJ(x)(s_{m_k}) = DJ(x)(s).

Then we have

    lim_{k→∞} (J(x + z_{m_k}) − J(x) − DJ(x)(z_{m_k}))/‖z_{m_k}‖_E = 0,

which contradicts (∗), and we get the result.

We have the following result.

Lemma 2.1 (Discrete Implicit Mapping Theorem). Let U be an open neighborhood of a vector x in a normed linear space E, X be a vector subspace of E, F be an n-dimensional vector subspace of X and g be a mapping from U into R^n with g = (g_1, ..., g_n). Assume that g is (X, F)-continuous and (X, F)-differentiable at x.
Put M = {y ∈ U: g(y) = g(x)} and e_i = (δ_i^1, ..., δ_i^n) for any i in {1, ..., n}, where δ_i^j is the Kronecker symbol. Let v be in X and h_1, ..., h_n be n vectors in F such that Dg(x)(v) = 0 and Dg(x)(h_i) = e_i for any i in {1, ..., n}.
Then there exist a sequence {s_m = (s_m^1, ..., s_m^n)}_{m∈N} converging to 0 in R^n and a sequence of positive real numbers {α_m}_{m∈N} converging to 0 in R such that

    u_m ≡ α_m (s_m^1 h_1 + ··· + s_m^n h_n) + α_m v + x ∈ M  for every m ∈ N.

Proof. We can assume without loss of generality that x = 0 and g(x) = 0. Fix a vector v in X such that Dg(0)(v) = 0. By the (X, F)-continuity of g at 0 and Remark 2.2, there are a positive real number ε_v and a mapping φ_v from B_{F(v)}(0, ε_v) ≡ F(v) ∩ B_E(0, ε_v) to R^n such that B_E(x, ε_v) ⊂ U, g_{y,F} is continuous at 0 for every y ∈ B_{F(v)}(0, ε_v), lim_{y→0} φ_v(y) = 0 and

    g(z) = Dg(0)(z) + ‖z‖_E φ_v(z)  for every z ∈ B_{F(v)}(0, ε_v).

There is a positive real number r such that α(s^1 h_1 + ··· + s^n h_n + v) belongs to B_{F(v)}(0, ε_v) for any (α, s^1, ..., s^n) ∈ (0, r) × B^n(0, r), where

    B^k(0, p) = {t = (t^1, ..., t^k) ∈ R^k: ‖t‖_{R^k} ≡ √((t^1)^2 + ··· + (t^k)^2) < p}.

We put

    η(s) = s^1 h_1 + ··· + s^n h_n  for every s = (s^1, ..., s^n) ∈ B^n(0, r),
    G_α(s) = α^{−1} g(α(η(s) + v))  for every (α, s) ∈ (0, r) × B^n(0, r).

Because g_{y,F} is continuous at 0 for every y in B_{F(v)}(0, ε_v), we see that G_α is continuous on B^n(0, r) for any fixed α in (0, r), and

    G_α(s) = α^{−1} g(α(s^1 h_1 + ··· + s^n h_n + v)) = α^{−1} g(α(η(s) + v))
           = α^{−1} Dg(0)(α(η(s) + v)) + α^{−1} ‖α(η(s) + v)‖_E φ_v(α(η(s) + v))
           = Dg(0)(η(s)) + ‖η(s) + v‖_E φ_v(α(η(s) + v))
           = (s^1, ..., s^n) + ‖η(s) + v‖_E φ_v(α(η(s) + v))  for every s = (s^1, ..., s^n) ∈ B^n(0, r).

Note that there is a positive real number M such that ‖η(s) + v‖_E ≤ M for any s = (s^1, ..., s^n) in B^n(0, r). Thus

    lim_{α→0} [sup{‖φ_v(α(η(s) + v))‖_E: (s^1, ..., s^n) ∈ B^n(0, r)}] = 0

and

    ⟨G_α(s), s⟩ = ⟨s, s⟩ + ⟨‖η(s) + v‖_E φ_v(α(η(s) + v)), s⟩
                ≥ m^{−2} {1 − mM ‖φ_v(α(η(s) + v))‖_E}  for every s on the boundary of B^n(0, m^{−1}),

where m is an integer greater than r^{−1}. Thus there is a real number α_m in (0, m^{−1}) such that

    ⟨G_{α_m}(s), s⟩ > 0  for every s on the boundary of B^n(0, m^{−1}).

By Lemma 4.1 in [18, p.
14], we have a solution s_m in B^n(0, m^{−1}) to the equation G_{α_m}(s) = 0 for any integer m greater than r^{−1}, which yields the lemma. □

Remark 2.3. If E is a Banach space, U is an open subset of E, g is Fréchet differentiable on U and Dg is continuous at x, then by the Ljusternik theorem (see [7, p. 41]), M is a manifold and the tangent space TM_x = Dg(x)^{−1}({0}). Moreover, M is a C^1-manifold if g is C^1-Fréchet
differentiable on U (see [23, Theorem 43.C]). The smoothness of g in Lemma 2.1 is so weak that M may not be a smooth manifold and we cannot define the tangent space of M at x. Note that the sequence {α_m^{−1}(u_m − x)}_{m∈N} converges to v in E. Therefore, we can regard v as a generalized tangent vector of M at x for any v in Dg(x)^{−1}({0}), if we consider {u_m}_{m∈N} as a discrete curve passing through x. This idea is illustrated by the following example.

Let E = X = U = R^3 and Z = R × {0} × {0}. We define

    P_1 = {(s, t) ∈ R^2: there exist rational numbers α, β, not both zero, with αs + βt = 0},
    P_2 = {(s, t) ∈ R^2: s^2 + t^2 ∈ Q},
    P = (P_1 ∩ P_2) ∪ ((R^2 \ P_1) ∩ (R^2 \ P_2)),
    A = {(r, s, t) ∈ R^3: (s, t) ∈ P},
    B = {(r, s, t) ∈ R^3: r = 0, s > 0, t = s^2},
    g(r, s, t) = r + (s^2 + t^2) χ_A(r, s, t) + χ_B(r, s, t)  for every (r, s, t) ∈ R^3,

where χ_C is the characteristic function of the set C. Let v = (0, s, t) in R^3 be such that st ≠ 0. Putting δ_v = (1/2) √(s^{−2} t^2 + s^{−4} t^4), we have B_E((0, 0, 0), δ_v) ∩ Z(v) ∩ B = ∅. Therefore, for any v in {0} × R^2, there is a positive real number δ_v such that B_E((0, 0, 0), δ_v) ∩ Z(v) ∩ B = ∅. Thus we see that g is (X, Z)-continuous at (0, 0, 0) and (X, Z)-differentiable at (0, 0, 0). But g is not Fréchet differentiable at (0, 0, 0), because g is not continuous at (0, 0, 0). Putting M = {(r, s, t) ∈ R^3: g(r, s, t) = g(0, 0, 0)}, we have Dg((0, 0, 0))(Z) = R and

    M = {(0, s, t): (s, t) ∉ P, (0, s, t) ∉ B} ∪ {(r, s, t): (s, t) ∈ P, r = −(s^2 + t^2)}.

Note that (0, s, t) is a generalized tangent vector of M at (0, 0, 0) for any (s, t) in R^2. The results in [3–7,17,19–22] cannot be applied to this case. It is easy to extend this example to the case of vector functions.

The idea of generalized tangent vectors is essential for the following generalized Lagrange multipliers theorem.

Theorem 2.1.
Let U be an open subset of a normed vector space E, X be a vector subspace of E, F be an n-dimensional vector subspace of X, u be in U, r be in R^n, f be a mapping from U into R, g = (g_1, ..., g_n) be a mapping from U into R^n, M = {x ∈ U: g(x) = r} and u ∈ M. Assume that

(i) f(u) is the minimum (or maximum) of f(M);
(ii) f is (X, F)-differentiable at u;
(iii) g is (X, F)-continuous at u and (X, F)-differentiable at u;
(iv) Dg(u)(F) = R^n.

Then there exists a unique mapping Λ ∈ L(R^n, R) such that

    Df(u)(k) = Λ(Dg(u)(k))  for every k ∈ X.
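Before the proof, here is a small smooth sanity check of the theorem's conclusion (a hypothetical instance of ours, not from the paper, with E = X = R^2 and n = 1; finite-difference gradients stand in for the derivatives):

```python
import math

# Toy instance of Theorem 2.1: minimize f(x, y) = x*y on the level set
# M = {g = 1}, where g(x, y) = x^2 + y^2. A minimizer is
# u = (1/sqrt(2), -1/sqrt(2)), with f(u) = -1/2.
def f(p):
    return p[0] * p[1]

def g(p):
    return p[0] ** 2 + p[1] ** 2

u = (1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))

def grad(fun, p, h=1e-6):
    """Central finite-difference gradient on R^2."""
    out = []
    for i in range(2):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        out.append((fun(pp) - fun(pm)) / (2.0 * h))
    return out

df, dg = grad(f, u), grad(g, u)

# The unique Lambda in L(R, R) is multiplication by the scalar lam below,
# and Df(u)(k) = lam * Dg(u)(k) must hold for every direction k.
lam = df[0] / dg[0]                                   # approximately -1/2
residual = max(abs(df[i] - lam * dg[i]) for i in range(2))
```

Both components of Df(u) are proportional to Dg(u) through the same scalar, which is exactly the uniqueness of Λ in the smooth case.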
Proof. Assume f(u) is the minimum of f(M). Choose n vectors h_1, ..., h_n in F such that Dg(u)(h_i) = e_i ≡ (δ_i^1, ..., δ_i^n) for any i in {1, ..., n}. We define a real linear mapping Λ on R^n as follows:

    Λ(e_i) = Df(u)(h_i)  for i = 1, ..., n.

Now fix a vector k in X. Put

    v = k − Σ_{i=1}^n Dg_i(u)(k) h_i ∈ X.

Then Dg(u)(v) = 0. By Lemma 2.1, there are a sequence {α_m}_{m∈N} of positive real numbers and a sequence {s_m = (s_m^1, ..., s_m^n)}_{m∈N} in R^n such that {α_m}_{m∈N} and {s_m}_{m∈N} converge to 0 in R and R^n respectively, u_m ∈ U, and g(u_m) = g(u), where

    u_m = u + α_m (s_m^1 h_1 + ··· + s_m^n h_n) + α_m v  for every m ∈ N,

that is, u_m ∈ M for every m ∈ N. Since f(u) is the minimum of f(M) and f is (X, F)-differentiable at u, we have

    Df(u)(k) − Λ(Dg(u)(k)) = Df(u)(k) − Λ(Σ_{i=1}^n Dg_i(u)(k) e_i)
        = Df(u)(k) − Σ_{i=1}^n Dg_i(u)(k) Λ(e_i)
        = Df(u)(k) − Σ_{i=1}^n Dg_i(u)(k) Df(u)(h_i)
        = Df(u)(k) − Df(u)(Σ_{i=1}^n Dg_i(u)(k) h_i)
        = Df(u)(k − Σ_{i=1}^n Dg_i(u)(k) h_i) = Df(u)(v)
        = lim_{m→∞} [f(u + α_m (s_m^1 h_1 + ··· + s_m^n h_n + v)) − f(u)]/α_m ≥ 0.

Therefore, Df(u)(k) ≥ Λ(Dg(u)(k)) for any k in X. Replacing k in the above inequality by −k, we get the theorem. Since Dg(u)(F) = R^n, we get the uniqueness of Λ. The proof for the case f(u) = max f(M) is similar and omitted. □

Remark 2.4. If E is a Banach space, g is Fréchet differentiable on U and Dg is continuous at u, then Theorem 2.1 has been proved in [7, p. 73]. Here we only need the differentiability of f and g at u. If n = 1, Theorem 2.1 has been proved in [1].

Theorem 2.2. Let U be an open subset of a normed vector space E, X be a vector subspace of E, F be an (n + m)-dimensional vector subspace of X, u be in U, f be a mapping from U into R, g = (g_1, ..., g_{n+m}) be a mapping from U into R^{n+m} and

    M = {x ∈ U: g_i(x) ≤ 0, g_{n+j}(x) = 0, i ∈ {1, ..., n}, j ∈ {1, ..., m}}.

Assume that

(i) u ∈ M and f(u) is the minimum of f(M);
(ii) f is (X, F)-differentiable at u;
(iii) g is (X, F)-continuous at u and (X, F)-differentiable at u;
(iv) Dg(u)(F) = R^{n+m}.

Then g(u) = 0 and there exists a unique (a_1, ..., a_{n+m}) in R^{n+m} such that a_1, ..., a_n are negative and

    Df(u)(k) = Σ_{i=1}^{n+m} a_i Dg_i(u)(k)  for every k ∈ X.

Proof. We prove the theorem by the following steps.

Step 1. Put α = (α^1, ..., α^{n+m}) = g(u) and S = {x ∈ U: g(x) = α}; we see that S ⊂ M. Since f(u) = min f(S), by Theorem 2.1, there exists a unique (a_1, ..., a_{n+m}) in R^{n+m} such that

    Df(u)(k) = Σ_{i=1}^{n+m} a_i Dg_i(u)(k)  for every k ∈ X.

Step 2. We prove that a_i < 0 for every i ∈ {1, ..., n}. First, we prove that a_1 < 0. Since Dg(u)(F) = R^{n+m}, there is k in F such that Dg_1(u)(k) = −1, Dg_2(u)(k) = ··· = Dg_n(u)(k) = −ε < 0 and Dg_{n+j}(u)(k) = 0 for any j in {1, ..., m}. Let h_1, ..., h_{n+m} be in F such that Dg(u)(h_i) = (δ_i^1, ..., δ_i^{n+m}) for any i ∈ {1, ..., n + m}; we have

    D(g_{n+1}, ..., g_{n+m})(u)(h_i) = (δ_i^{n+1}, ..., δ_i^{n+m})  for every i ∈ {n + 1, ..., n + m}.

By Lemma 2.1, there are a sequence {s_l = (s_l^{n+1}, ..., s_l^{n+m})}_{l∈N} converging to 0 in R^m and a sequence of positive real numbers {α_l}_{l∈N} converging to 0 in R such that

    u_l ≡ α_l (s_l^{n+1} h_{n+1} + ··· + s_l^{n+m} h_{n+m}) + α_l k + u,
    (g_{n+1}, ..., g_{n+m})(u_l) = (g_{n+1}, ..., g_{n+m})(u) = 0  for every l ∈ N.

Note that

    lim_{l→∞} (g_i(u_l) − g_i(u))/α_l = lim_{l→∞} [g_i(α_l (s_l^{n+1} h_{n+1} + ··· + s_l^{n+m} h_{n+m}) + α_l k + u) − g_i(u)]/α_l
        = Dg_i(u)(k) < 0  for every i ∈ {1, ..., n}.

Thus there exists l_0 ∈ N such that

    g_i(u_l) < g_i(u)  for every i ∈ {1, ..., n} and l ≥ l_0,

which implies that u_l is in M for any l ≥ l_0, and

    −a_1 − ε Σ_{i=2}^n a_i = Df(u)(k) = lim_{l→∞} (f(u_l) − f(u))/α_l ≥ 0  for every ε > 0.

Letting ε tend to 0, we get a_1 ≤ 0. Since Dg(u)(F) = R^{n+m}, we have a_1 < 0. Similarly, we have a_i < 0 for every i = 1, ..., n.
Step 3. We shall prove that g(u) = 0. Put

    I = {i ∈ {1, ..., n}: g_i(u) < 0}  and  J = {j ∈ {1, ..., n}: g_j(u) = 0}.

We assume by contradiction that I is not empty. In this case, we see that

    Σ_{i∈I} a_i < 0.    (1)

On the other hand, there is k in F such that

    Dg_i(u)(k) = 1 for i ∈ I,    Dg_j(u)(k) = −ε for j ∈ J,    Dg_{n+l}(u)(k) = 0 for l = 1, ..., m.

By Lemma 2.1, there are a sequence {s_l = (s_l^{n+1}, ..., s_l^{n+m})}_{l∈N} converging to 0 in R^m and a sequence of positive real numbers {α_l}_{l∈N} converging to 0 in R such that

    u_l ≡ α_l (s_l^{n+1} h_{n+1} + ··· + s_l^{n+m} h_{n+m}) + α_l k + u,
    0 = (g_{n+1}, ..., g_{n+m})(u_l) = (g_{n+1}, ..., g_{n+m})(u)  for every l ∈ N.

We have

    lim_{l→∞} (g_i(u_l) − g_i(u))/α_l = lim_{l→∞} [g_i(α_l (s_l^{n+1} h_{n+1} + ··· + s_l^{n+m} h_{n+m}) + α_l k + u) − g_i(u)]/α_l
        = Dg_i(u)(k) = 1  for every i ∈ I.

Thus lim_{l→∞} g_i(u_l) = g_i(u) < 0. Then there exists an integer l_1 such that

    g_i(u_l) < 0  for every i ∈ I and l ≥ l_1.

We have

    lim_{l→∞} (g_j(u_l) − g_j(u))/α_l = Dg_j(u)(k) = −ε < 0  for every j ∈ J.

Thus there exists l_2 ∈ N such that

    g_j(u_l) < g_j(u) = 0  for every j ∈ J and l ≥ l_2.

Therefore u_l is in M for any l ≥ max{l_1, l_2}, and

    Σ_{i∈I} a_i − ε Σ_{j∈J} a_j = Df(u)(k) = lim_{l→∞} (f(u_l) − f(u))/α_l ≥ 0  for every ε > 0.

It implies that Σ_{i∈I} a_i ≥ 0, which contradicts (1). Hence I is empty and we get the result. □
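A minimal smooth illustration of Theorem 2.2 (our own toy data, not the paper's, with one inequality and one equality constraint): for f(x, y) = x + y under g_1(x, y) = x^2 + y^2 − 1 ≤ 0 and g_2(x, y) = x − y = 0, the minimizer satisfies g_1(u) = 0, in line with the theorem's conclusion g(u) = 0, and the inequality multiplier is negative:

```python
import math

# Minimizer of f = x + y on {g1 <= 0, g2 = 0}: u = (-1/sqrt(2), -1/sqrt(2)).
s = -1.0 / math.sqrt(2.0)
u = (s, s)

df  = (1.0, 1.0)            # gradient of f (exact)
dg1 = (2.0 * s, 2.0 * s)    # gradient of g1 at u; note g1(u) = 0 (active)
dg2 = (1.0, -1.0)           # gradient of g2 at u

# Solve Df(u) = a1*Dg1(u) + a2*Dg2(u) for (a1, a2) by Cramer's rule.
det = dg1[0] * dg2[1] - dg2[0] * dg1[1]
a1 = (df[0] * dg2[1] - dg2[0] * df[1]) / det   # approximately -1/sqrt(2) < 0
a2 = (dg1[0] * df[1] - df[0] * dg1[1]) / det   # approximately 0 (equality multiplier unrestricted)
```

The inequality multiplier a_1 comes out negative, as Step 2 of the proof requires, while a_2, attached to the equality constraint, carries no sign restriction.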
3. Applications in programming problems

Let X ≡ R^n denote the n-dimensional Euclidean space and let R^n_+ be its nonnegative orthant. Let f, h_1, ..., h_m be real functions on an open subset S of R^n. We consider the following nonlinear programming problem:

    (P)    min f(x)   subject to  h_j(x) ≤ 0,  j ∈ {1, 2, ..., m}.

Put J = {1, ..., r} and K = {r + 1, ..., m}, h_J(x) = (h_1(x), ..., h_r(x)) and h_K(x) = (h_{r+1}(x), ..., h_m(x)) for any x in S. Let h(x) denote the column vector (h_1(x), h_2(x), ..., h_m(x))^T, partitioned as h(x) = (h_J(x), h_K(x))^T. We denote

    S_h = {x ∈ S: h_j(x) ≤ 0, j ∈ {1, ..., m}},
    S_K = {x ∈ S: h_k(x) ≤ 0, k ∈ K}.

Let a = (a_1, ..., a_N) ∈ R^N with N ∈ N. We say a ≥ 0 (or ≤ 0) if and only if a_i ≥ 0 (or ≤ 0) for every i ∈ {1, ..., N}.

Definition 3.1. Let S be an open subset of R^n, u be in S, g be a real function on S which is Gâteaux differentiable at u, and η be a function from S × S into R^n. We say

(i) g is invex at u with respect to the function η if, for every x ∈ S, we have g(x) − g(u) ≥ η(x, u)^T ∇g(u);

(ii) g is pseudo-invex at u with respect to the function η if, for every x ∈ S, η(x, u)^T ∇g(u) ≥ 0 implies g(x) ≥ g(u);

(iii) g is quasi-invex at u with respect to the function η if, for every x ∈ S, g(x) ≤ g(u) implies η(x, u)^T ∇g(u) ≤ 0.

Applying the results of the previous section, we study the following programming problems.

Problem 1: Nonlinear programming problem

Let f, h_1, ..., h_m be real functions on an open subset S of R^n. Let J, K, h_J and h_K be as in the beginning of this section. Put

    L(x, λ_J) = f(x) + (λ_J)^T h_J(x)  for any (x, λ_J) in S_K × R^J_+.

The map L is called the incomplete Lagrange function of the problem (P).

Definition 3.2. A point (x̄, λ̄_J) ∈ S_K × R^J_+ is called a saddle point of the incomplete Lagrange function L if

    L(x̄, λ_J) ≤ L(x̄, λ̄_J) ≤ L(x, λ̄_J)  for every (x, λ_J) ∈ S_K × R^J_+.
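To illustrate Definition 3.2 numerically, consider the following toy instance (ours, not from the paper), with r = m = 1, so that K = ∅ and S_K = S = R; the saddle inequality can then be checked on a grid:

```python
# Toy instance of (P): minimize f(x) = x^2 subject to h1(x) = 1 - x <= 0.
# The optimum is xbar = 1; stationarity f'(1) + lam*h1'(1) = 2 - lam = 0
# gives lam_bar = 2. L(., lam) is convex, hence invex (and pseudo-invex)
# with eta(x, u) = x - u, matching assumption (i) of Theorem 3.1 below.
def L(x, lam):
    return x * x + lam * (1.0 - x)

xbar, lam_bar = 1.0, 2.0
xs   = [i / 100.0 for i in range(-300, 301)]   # sample of S_K = R
lams = [i / 100.0 for i in range(0, 501)]      # sample of R_+

# The two halves of the saddle inequality of Definition 3.2:
left  = all(L(xbar, lam) <= L(xbar, lam_bar) + 1e-12 for lam in lams)
right = all(L(x, lam_bar) >= L(xbar, lam_bar) - 1e-12 for x in xs)
```

Since h_1(x̄) = 0, the left inequality holds with equality for every λ, while convexity of L(·, λ̄) makes x̄ its global minimizer, which is the right inequality.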
Theorem 3.1. Let S be an open subset of X ≡ R^n, f and h_1, ..., h_m be real functions on S which are Gâteaux differentiable at x̄ ∈ S, F be an m-dimensional vector subspace of X, and x̄ be optimal for (P). Assume that

(i) L(·, λ_J) is pseudo-invex and (λ_K)^T h_K(·) is quasi-invex at x̄ with respect to a function η, for any (λ_J, λ_K) in R^m_+;
(ii) f is (X, F)-differentiable at x̄;
(iii) h is (X, F)-differentiable at x̄ and (X, F)-continuous at x̄;
(iv) Dh(x̄)(F) = R^m.

Then there exists λ̄_J ∈ R^J_+ such that (x̄, λ̄_J) is a saddle point of the incomplete Lagrange function L.

Proof. Applying Theorem 2.2 in the case E = X = R^n, U = S, to f and {h_j}_{j=1,...,m}, there exists λ̄ = (λ̄_J, λ̄_K) ∈ R^m, λ̄_J ∈ R^J_+, λ̄_K ∈ R^K_+, such that

    ∇[f + (λ̄_J)^T h_J + (λ̄_K)^T h_K](x̄) = 0,
    (λ̄_J)^T h_J(x̄) = 0,    (λ̄_K)^T h_K(x̄) = 0,    λ̄ = (λ̄_J, λ̄_K) ≥ 0.

Now arguing as in the proof of Theorem 2.1 in [2], we get the theorem. □

We consider the following propositions:

(A1) x̄ is optimal for (P).
(A2) There are positive real numbers λ_1, ..., λ_m such that h_i(x̄) = 0 for any i in {1, ..., m} and

    Df(x̄) + λ_1 Dh_1(x̄) + ··· + λ_m Dh_m(x̄) = 0.

We have the following result.

Theorem 3.2. Let S be an open subset of X ≡ R^n, f and h_1, ..., h_m be real functions on S. Let x̄ be in S. Then

(i) if f, h_1, ..., h_m satisfy the conditions (ii), (iii) and (iv) of Theorem 3.1, then (A1) implies (A2);
(ii) if f(·) + Σ_{i=1}^m λ_i h_i(·) is Gâteaux differentiable at x̄ and invex with respect to the function η at x̄, then (A2) implies (A1).

Proof. Applying Theorem 2.2 in the case E = X = R^n, U = S, to f and {h_j}_{j=1,...,m}, we get (i). Consider (ii). By the invexity of f(·) + Σ_{i=1}^m λ_i h_i(·) with respect to the function η at x̄, we have

    [f(x) + Σ_{i=1}^m λ_i h_i(x)] − [f(x̄) + Σ_{i=1}^m λ_i h_i(x̄)] ≥ η(x, x̄)^T ∇[f + Σ_{i=1}^m λ_i h_i](x̄) = 0  for every x ∈ S_h.
It follows that

    f(x) ≥ f(x) + Σ_{i=1}^m λ_i h_i(x) ≥ f(x̄) + Σ_{i=1}^m λ_i h_i(x̄) = f(x̄)  for every x ∈ S_h,

and we get (ii). □

Problem 2: Fractional programming problem

Let f, g, h_1, ..., h_m be real functions on an open subset S of R^n. Let J, K, h_J and h_K be as in the beginning of this section. Put

    S_h = {x ∈ S: h_j(x) ≤ 0, j ∈ {1, ..., m}},
    S_K = {x ∈ S: h_k(x) ≤ 0, k ∈ K}.

Assume that g(x) ≠ 0 for any x in S and g(x) > 0 for any x in S_K. We now consider the fractional programming problem

    (FP)    min f(x)/g(x)   subject to  h_j(x) ≤ 0,  j ∈ {1, 2, ..., m}.

We note that if x̄ is optimal for the problem (FP), then x̄ is also optimal for the following problem:

    (FP1)    min f(x)/g(x)   subject to  h_j(x)/g(x) ≤ 0, j ∈ J,  and  h_k(x) ≤ 0, k ∈ K.

This form of (FP1) suggests the choice of the incomplete Lagrange function L_F: S_K × R^J_+ → R as follows:

    L_F(x, λ_J) = [f(x) + (λ_J)^T h_J(x)]/g(x).

Definition 3.3. A point (x̄, λ̄_J) ∈ S_K × R^J_+ is called a saddle point of the incomplete Lagrange function L_F if

    L_F(x̄, λ_J) ≤ L_F(x̄, λ̄_J) ≤ L_F(x, λ̄_J)  for every (x, λ_J) ∈ S_K × R^J_+.

Theorem 3.3. Let S be an open subset of X ≡ R^n, f, g and h_1, ..., h_m be real functions on S which are Gâteaux differentiable at x̄ ∈ S, F be an m-dimensional vector subspace of X, and x̄ be optimal for (FP). Assume that

(i) L_F(·, λ_J) is pseudo-invex and (λ_K)^T h_K(·) is quasi-invex at x̄ with respect to a function η, for any (λ_J, λ_K) in R^m_+;
(ii) f/g is (X, F)-differentiable at x̄;
(iii) h_1/g, ..., h_r/g, h_{r+1}, ..., h_m are (X, F)-differentiable at x̄ and (X, F)-continuous at x̄;
(iv) D(h_1/g, ..., h_r/g, h_{r+1}, ..., h_m)(x̄)(F) = R^m.
Then there exists λ̄_J ∈ R^J_+ such that (x̄, λ̄_J) is a saddle point of the incomplete Lagrange function L_F.

Proof. Since x̄ is optimal for (FP) (and hence for (FP1)), applying Theorem 2.2 in the case E = X = R^n, U = S, to f/g and h_1/g, ..., h_r/g, h_{r+1}, ..., h_m, we can find λ̄ = (λ̄_J, λ̄_K) in R^J_+ × R^K_+ such that

    ∇[f/g + (λ̄_J)^T (h_J/g) + (λ̄_K)^T h_K](x̄) = 0,
    (λ̄_J)^T h_J(x̄) = 0  and  (λ̄_K)^T h_K(x̄) = 0.

Fix x in S_K; we have (λ̄_K)^T h_K(x) ≤ 0 = (λ̄_K)^T h_K(x̄). By the quasi-invexity of (λ̄_K)^T h_K(·) with respect to the function η at x̄, we obtain

    η(x, x̄)^T ∇[(λ̄_K)^T h_K](x̄) ≤ 0.

Thus

    η(x, x̄)^T ∇[f/g + (λ̄_J)^T (h_J/g)](x̄) ≥ 0.

By the pseudo-invexity of [f(·) + (λ̄_J)^T h_J(·)]/g(·) with respect to the function η at x̄, we see that

    f(x)/g(x) + (λ̄_J)^T h_J(x)/g(x) ≥ f(x̄)/g(x̄) + (λ̄_J)^T h_J(x̄)/g(x̄),

and we get L_F(x, λ̄_J) ≥ L_F(x̄, λ̄_J). If λ_J is in R^J_+, we have

    L_F(x̄, λ̄_J) = f(x̄)/g(x̄) + (λ̄_J)^T h_J(x̄)/g(x̄) = f(x̄)/g(x̄) ≥ f(x̄)/g(x̄) + (λ_J)^T h_J(x̄)/g(x̄) = L_F(x̄, λ_J).

Thus we get the theorem. □

Problem 3: Generalized fractional programming problem

Let f_1, ..., f_p, g_1, ..., g_p, h_1, ..., h_m be real functions on an open subset S of R^n. Put

    S_h = {x ∈ S: h_j(x) ≤ 0, j ∈ {1, ..., m}},
    S_K = {x ∈ S: h_k(x) ≤ 0, k ∈ K}.

Assume that g_1(x), ..., g_p(x) ≠ 0 for any x in S and g_1(x), ..., g_p(x) are positive for every x in S_h. We consider the generalized fractional programming problem

    (GFP)    min_{x∈S} max_{1≤i≤p} f_i(x)/g_i(x)   subject to  h_j(x) ≤ 0,  j ∈ {1, 2, ..., m}.

Put Y = {y ∈ R^p_+: Σ_{i=1}^p y_i = 1}. The incomplete Lagrange function L_G: S_K × Y × R^J_+ → R for the problem (GFP) can be chosen as

    L_G(x, y, λ_J) = [y^T f(x) + (λ_J)^T h_J(x)]/(y^T g(x)).
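The saddle inequality of Definition 3.3 for the fractional Lagrange function L_F can also be checked numerically on a toy problem (a one-dimensional instance of ours, with r = m = 1, K = ∅, so S_K = S; the data and the multiplier value below are our own assumptions, obtained from the stationarity condition):

```python
# Toy (FP): minimize (x^2 + 1)/(x + 2) subject to h1(x) = 1 - x <= 0,
# on S = (-2, inf), where g(x) = x + 2 > 0. The optimum is xbar = 1 with
# value 2/3; solving d/dx L_F(x, lam) = 0 at xbar gives lam_bar = 4/3.
def L_F(x, lam):
    return (x * x + 1.0 + lam * (1.0 - x)) / (x + 2.0)

xbar, lam_bar = 1.0, 4.0 / 3.0
xs   = [i / 100.0 for i in range(-150, 401)]   # sample of S = (-2, inf)
lams = [i / 100.0 for i in range(0, 501)]      # sample of R_+

# The two halves of the saddle inequality of Definition 3.3:
left  = all(L_F(xbar, lam) <= L_F(xbar, lam_bar) + 1e-12 for lam in lams)
right = all(L_F(x, lam_bar) >= L_F(xbar, lam_bar) - 1e-12 for x in xs)
```

As in Problem 1, h_1(x̄) = 0 makes L_F(x̄, ·) constant in λ, so the left inequality is an equality, and x̄ minimizes L_F(·, λ̄) over the sampled region.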
Definition 3.4. A point (x̄, ȳ, λ̄_J) ∈ S_K × Y × R^J_+ is called a saddle point of the incomplete Lagrange function L_G if

    L_G(x̄, y, λ_J) ≤ L_G(x̄, ȳ, λ̄_J) ≤ L_G(x, ȳ, λ̄_J)  for every (x, y, λ_J) ∈ S_K × Y × R^J_+.

We consider the problem

    (EGFP)_v    min q   subject to  f_i(x) − v g_i(x) ≤ q  and  h_j(x) ≤ 0,  (i, j) ∈ {1, ..., p} × {1, ..., m}.

We have the following lemma (see [2, Lemma 2.3, p. 9]).

Lemma 3.1. The point x* is (GFP) optimal with corresponding optimal value of the (GFP) objective equal to v* if and only if (x*, q*) is (EGFP)_{v*} optimal with corresponding optimal value of the (EGFP)_{v*} objective equal to 0, i.e., q* = 0.

Now we define

    H^v(x, q) = q  for every (x, q) ∈ S × R,
    G^v_{1i}(x, q) = f_i(x) − v g_i(x) − q  for every (x, q, i) ∈ S × R × {1, ..., p},
    G^v_{2j}(x, q) = h_j(x)  for every (x, q, j) ∈ S × R × {1, ..., m}.

Theorem 3.4. Let S be an open subset of X ≡ R^n, f_1, ..., f_p, g_1, ..., g_p, h_1, ..., h_m be real functions on S which are Gâteaux differentiable at x̄ ∈ S, F be a (p + m)-dimensional vector subspace of X × R ≡ R^n × R, and x̄ be optimal for (GFP) with corresponding optimal value of the (GFP) objective equal to v̄. Assume that

(i) L_G(·, y, λ_J) is pseudo-invex and (λ_K)^T h_K(·) is quasi-invex at x̄ with respect to a function η, for any (y, λ_J, λ_K) in Y × R^m_+;
(ii) G^v̄_{11}, ..., G^v̄_{1p}, G^v̄_{21}, ..., G^v̄_{2m} are (R^n × R, F)-differentiable at (x̄, 0) and (R^n × R, F)-continuous at (x̄, 0);
(iii) D(G^v̄_{11}, ..., G^v̄_{1p}, G^v̄_{21}, ..., G^v̄_{2m})(x̄, 0)(F) = R^{p+m}.

Then there exists (ȳ, λ̄_J) ∈ Y × R^J_+ such that (x̄, ȳ, λ̄_J) is a saddle point of the incomplete Lagrange function L_G.

Proof. Because x̄ is optimal for (GFP) with corresponding optimal value of the (GFP) objective equal to v̄, by Lemma 3.1, (x̄, q̄) is (EGFP)_{v̄} optimal with corresponding optimal value of the (EGFP)_{v̄} objective equal to 0. Then the problem

    (L)    min H^v̄(x, q)   subject to  G^v̄_{1i}(x, q) ≤ 0  and  G^v̄_{2j}(x, q) ≤ 0,  (i, j) ∈ {1, ..., p} × {1, ..., m},

has an optimal solution (x̄, q̄) with q̄ = 0.
Applying Theorem 2.2 in the case E = X = R^n × R, U = S × R, to H^v̄ and G^v̄_{11}, ..., G^v̄_{1p}, G^v̄_{21}, ..., G^v̄_{2m}, there exists (ȳ, λ̄_J, λ̄_K) ∈ R^p_+ × R^J_+ × R^K_+ such that

    DH^v̄(x̄, 0)(k, l) + Σ_{i=1}^p ȳ_i DG^v̄_{1i}(x̄, 0)(k, l) + Σ_{j=1}^m λ̄_j DG^v̄_{2j}(x̄, 0)(k, l) = 0    (∗)

for every (k, l) ∈ R^n × R, and

    ȳ_i [f_i(x̄) − v̄ g_i(x̄)] = 0  for every i ∈ {1, ..., p},
    (λ̄_J)^T h_J(x̄) = 0,    (λ̄_K)^T h_K(x̄) = 0,
    λ̄ = (λ̄_J, λ̄_K) ≥ 0,    ȳ ≥ 0.

If l in (∗) is equal to 0, we have

    ∇[ȳ^T f − v̄ ȳ^T g + (λ̄_J)^T h_J + (λ̄_K)^T h_K](x̄) = 0.

If k in (∗) is equal to 0, we have Σ_{i=1}^p ȳ_i = 1, i.e., ȳ ∈ Y. Now arguing as in the proof of Theorem 2.5 in [2], we get the theorem. □

We consider the following propositions:

(A3) x̄ is optimal for (GFP) with corresponding optimal value of the (GFP) objective equal to v̄.
(A4) x̄ has the following properties:

    ∇[ȳ^T f − v̄ ȳ^T g + (λ̄_J)^T h_J + (λ̄_K)^T h_K](x̄) = 0,
    ȳ_i [f_i(x̄) − v̄ g_i(x̄)] = 0  for every i ∈ {1, ..., p},
    (λ̄_J)^T h_J(x̄) = 0,    (λ̄_K)^T h_K(x̄) = 0,
    λ̄ = (λ̄_J, λ̄_K) ≥ 0,    ȳ ≥ 0,    Σ_{i=1}^p ȳ_i = 1.

Theorem 3.5. Let S be an open subset of X ≡ R^n, f_1, ..., f_p, g_1, ..., g_p, h_1, ..., h_m be real functions on S. Let x̄ be in S.

(i) If the conditions (ii), (iii) of Theorem 3.4 are fulfilled, then (A3) implies (A4).
(ii) If ȳ^T f(·) − v̄ ȳ^T g(·) + (λ̄_J)^T h_J(·) + (λ̄_K)^T h_K(·) is Gâteaux differentiable at x̄ and invex with respect to the function η at x̄, then (A4) implies (A3).

Proof. Applying Theorem 2.2 in the case E = X = R^n × R, U = S × R, to H^v̄ and G^v̄_{11}, ..., G^v̄_{1p}, G^v̄_{21}, ..., G^v̄_{2m} (defined above), and arguing as in the proof of Theorem 3.4, we get (i).

Consider (ii). Since the map ȳ^T f(·) − v̄ ȳ^T g(·) + (λ̄_J)^T h_J(·) + (λ̄_K)^T h_K(·) is invex with respect to the function η at x̄, for every x ∈ S_h we have

    ȳ^T f(x) − v̄ ȳ^T g(x) + (λ̄_J)^T h_J(x) + (λ̄_K)^T h_K(x) − [ȳ^T f(x̄) − v̄ ȳ^T g(x̄) + (λ̄_J)^T h_J(x̄) + (λ̄_K)^T h_K(x̄)]
        ≥ η(x, x̄)^T ∇[ȳ^T f − v̄ ȳ^T g + (λ̄_J)^T h_J + (λ̄_K)^T h_K](x̄) = 0.
Thus

    ȳ^T f(x) − v̄ ȳ^T g(x) ≥ ȳ^T f(x) − v̄ ȳ^T g(x) + (λ̄_J)^T h_J(x) + (λ̄_K)^T h_K(x) ≥ 0.

It implies that max_{1≤i≤p} f_i(x)/g_i(x) ≥ v̄ for every x ∈ S_h, and we get (ii). □

Acknowledgments

The authors thank the referees for very helpful comments.

References

[1] L.H. An, P.X. Du, D.M. Duc, P.V. Tuoc, Lagrange multipliers for functions derivable along directions in a linear subspace, Proc. Amer. Math. Soc. 133 (2005).
[2] C.R. Bector, S. Chandra, A. Abha, On incomplete Lagrange function and saddle point optimality criteria in mathematical programming, J. Math. Anal. Appl. 251 (2000).
[3] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley Interscience, New York.
[4] H. Halkin, Implicit functions and optimization problems without continuous differentiability of the data, SIAM J. Control 12 (1974).
[5] A.D. Ioffe, Necessary conditions in nonsmooth optimization, Math. Oper. Res. 9 (1984).
[6] A.D. Ioffe, A Lagrange multiplier rule with small convex-valued subdifferentials for nonsmooth problems of mathematical programming involving equality and nonfunctional constraints, Math. Programming 58 (1993).
[7] A.D. Ioffe, V.M. Tihomirov, Theory of Extremal Problems, Nauka, Moscow, 1974 (in Russian).
[8] A.Y. Kruger, Generalized differentials of nonsmooth functions and necessary conditions for an extremum, Siberian Math. J. 26 (1985).
[9] A.Y. Kruger, B.S. Mordukhovich, Extremal points and the Euler equation in nonsmooth optimization, Dokl. Akad. Nauk BSSR 24 (1980).
[10] P. Michel, J.-P. Penot, Calcul sous-différentiel pour des fonctions lipschitziennes et non lipschitziennes, C. R. Acad. Sci. Paris Sér. I Math. 12 (1984).
[11] P. Michel, J.-P. Penot, A generalized derivative for calm and stable functions, Differential Integral Equations 5 (1992).
[12] B.S. Mordukhovich, Metric approximations and necessary optimality conditions for general classes of nonsmooth extremal problems, Soviet Math. Dokl. 22 (1980).
[13] B.S. Mordukhovich, On necessary conditions for an extremum in nonsmooth optimization, Soviet Math. Dokl. 32 (1985).
[14] B.S. Mordukhovich, An abstract extremal principle with applications to welfare economics, J. Math. Anal. Appl. 251 (2000).
[15] B.S. Mordukhovich, The extremal principle and its applications to optimization and economics, in: A. Rubinov, B. Glover (Eds.), Optimization and Related Topics, Appl. Optim., vol. 47, Kluwer Academic, Dordrecht, 2001.
[16] B.S. Mordukhovich, B. Wang, Necessary suboptimality and optimality conditions via variational principles, SIAM J. Control Optim. 41 (2002).
[17] R.T. Rockafellar, Convex Analysis, Princeton Univ. Press, Princeton, NJ.
[18] V. Skrypnik, Methods for Analysis of Nonlinear Elliptic Boundary Value Problems, Amer. Math. Soc., Providence, RI.
[19] J.S. Treiman, Shrinking generalized gradients, Nonlinear Anal. 12 (1988).
[20] J.S. Treiman, Lagrange multipliers for nonconvex generalized gradients with equality, inequality, and set constraints, SIAM J. Control Optim. 37 (1999).
[21] J.J. Ye, Multiplier rules under mixed assumptions of differentiability and Lipschitz continuity, SIAM J. Control Optim. 39 (2001).
[22] J.J. Ye, Nondifferentiable multiplier rules for optimization and bilevel optimization problems, SIAM J. Optim. 15 (1) (2004).
[23] E. Zeidler, Nonlinear Functional Analysis and Its Applications, III: Variational Methods and Optimization, Springer, Berlin, 1985.
More informationReceived 8 June 2003 Submitted by Z.-J. Ruan
J. Math. Anal. Appl. 289 2004) 266 278 www.elsevier.com/locate/jmaa The equivalence between the convergences of Ishikawa and Mann iterations for an asymptotically nonexpansive in the intermediate sense
More informationOn constraint qualifications with generalized convexity and optimality conditions
On constraint qualifications with generalized convexity and optimality conditions Manh-Hung Nguyen, Do Van Luu To cite this version: Manh-Hung Nguyen, Do Van Luu. On constraint qualifications with generalized
More informationOn Nonconvex Subdifferential Calculus in Banach Spaces 1
Journal of Convex Analysis Volume 2 (1995), No.1/2, 211 227 On Nonconvex Subdifferential Calculus in Banach Spaces 1 Boris S. Mordukhovich, Yongheng Shao Department of Mathematics, Wayne State University,
More informationCentre d Economie de la Sorbonne UMR 8174
Centre d Economie de la Sorbonne UMR 8174 On alternative theorems and necessary conditions for efficiency Do Van LUU Manh Hung NGUYEN 2006.19 Maison des Sciences Économiques, 106-112 boulevard de L'Hôpital,
More informationOn the validity of the Euler Lagrange equation
J. Math. Anal. Appl. 304 (2005) 356 369 www.elsevier.com/locate/jmaa On the validity of the Euler Lagrange equation A. Ferriero, E.M. Marchini Dipartimento di Matematica e Applicazioni, Università degli
More informationSOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS
APPLICATIONES MATHEMATICAE 22,3 (1994), pp. 419 426 S. G. BARTELS and D. PALLASCHKE (Karlsruhe) SOME REMARKS ON THE SPACE OF DIFFERENCES OF SUBLINEAR FUNCTIONS Abstract. Two properties concerning the space
More informationOPTIMALITY CONDITIONS FOR GLOBAL MINIMA OF NONCONVEX FUNCTIONS ON RIEMANNIAN MANIFOLDS
OPTIMALITY CONDITIONS FOR GLOBAL MINIMA OF NONCONVEX FUNCTIONS ON RIEMANNIAN MANIFOLDS S. HOSSEINI Abstract. A version of Lagrange multipliers rule for locally Lipschitz functions is presented. Using Lagrange
More informationViscosity approximation methods for nonexpansive nonself-mappings
J. Math. Anal. Appl. 321 (2006) 316 326 www.elsevier.com/locate/jmaa Viscosity approximation methods for nonexpansive nonself-mappings Yisheng Song, Rudong Chen Department of Mathematics, Tianjin Polytechnic
More informationIntroduction to Optimization Techniques. Nonlinear Optimization in Function Spaces
Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics EPI-DIFFERENTIABILITY AND OPTIMALITY CONDITIONS FOR AN EXTREMAL PROBLEM UNDER INCLUSION CONSTRAINTS T. AMAHROQ, N. GADHI AND PR H. RIAHI Université
More informationOn Optimality Conditions for Pseudoconvex Programming in Terms of Dini Subdifferentials
Int. Journal of Math. Analysis, Vol. 7, 2013, no. 18, 891-898 HIKARI Ltd, www.m-hikari.com On Optimality Conditions for Pseudoconvex Programming in Terms of Dini Subdifferentials Jaddar Abdessamad Mohamed
More informationSubdifferential representation of convex functions: refinements and applications
Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential
More informationOn Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective Optimization Problems
Int. Journal of Math. Analysis, Vol. 7, 2013, no. 60, 2995-3003 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijma.2013.311276 On Weak Pareto Optimality for Pseudoconvex Nonsmooth Multiobjective
More informationOPTIMALITY CONDITIONS FOR D.C. VECTOR OPTIMIZATION PROBLEMS UNDER D.C. CONSTRAINTS. N. Gadhi, A. Metrane
Serdica Math. J. 30 (2004), 17 32 OPTIMALITY CONDITIONS FOR D.C. VECTOR OPTIMIZATION PROBLEMS UNDER D.C. CONSTRAINTS N. Gadhi, A. Metrane Communicated by A. L. Dontchev Abstract. In this paper, we establish
More informationViscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces
Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces YUAN-HENG WANG Zhejiang Normal University Department of Mathematics Yingbing Road 688, 321004 Jinhua
More information1. Introduction. Consider the following parameterized optimization problem:
SIAM J. OPTIM. c 1998 Society for Industrial and Applied Mathematics Vol. 8, No. 4, pp. 940 946, November 1998 004 NONDEGENERACY AND QUANTITATIVE STABILITY OF PARAMETERIZED OPTIMIZATION PROBLEMS WITH MULTIPLE
More informationHIGHER ORDER OPTIMALITY AND DUALITY IN FRACTIONAL VECTOR OPTIMIZATION OVER CONES
- TAMKANG JOURNAL OF MATHEMATICS Volume 48, Number 3, 273-287, September 2017 doi:10.5556/j.tkjm.48.2017.2311 - - - + + This paper is available online at http://journals.math.tku.edu.tw/index.php/tkjm/pages/view/onlinefirst
More informationON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS
MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth
More informationA Solution Method for Semidefinite Variational Inequality with Coupled Constraints
Communications in Mathematics and Applications Volume 4 (2013), Number 1, pp. 39 48 RGN Publications http://www.rgnpublications.com A Solution Method for Semidefinite Variational Inequality with Coupled
More informationgeneralized Jacobians, nonsmooth analysis, mean value conditions, optimality
SIAM J. CONTROL OPTIM. c 1998 Society for Industrial and Applied Mathematics Vol. 36, No. 5, pp. 1815 1832, September 1998 013 APPROXIMATE JACOBIAN MATRICES FOR NONSMOOTH CONTINUOUS MAPS AND C 1 -OPTIMIZATION
More informationOptimality and Duality Theorems in Nonsmooth Multiobjective Optimization
RESEARCH Open Access Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization Kwan Deok Bae and Do Sang Kim * * Correspondence: dskim@pknu.ac. kr Department of Applied Mathematics, Pukyong
More informationarxiv: v1 [math.fa] 30 Oct 2018
On Second-order Conditions for Quasiconvexity and Pseudoconvexity of C 1,1 -smooth Functions arxiv:1810.12783v1 [math.fa] 30 Oct 2018 Pham Duy Khanh, Vo Thanh Phat October 31, 2018 Abstract For a C 2 -smooth
More informationThe best generalised inverse of the linear operator in normed linear space
Linear Algebra and its Applications 420 (2007) 9 19 www.elsevier.com/locate/laa The best generalised inverse of the linear operator in normed linear space Ping Liu, Yu-wen Wang School of Mathematics and
More informationNonlinear Analysis 72 (2010) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage:
Nonlinear Analysis 72 (2010) 4298 4303 Contents lists available at ScienceDirect Nonlinear Analysis journal homepage: www.elsevier.com/locate/na Local C 1 ()-minimizers versus local W 1,p ()-minimizers
More information0 < f(y) - fix) - (x*, y - x) < e\\y - x\\
proceedings of the american mathematical society Volume 121, Number 4, August 1994 A NOTE ON THE DIFFERENTIABILITY OF CONVEX FUNCTIONS WU CONGXIN AND CHENG LIXIN (Communicated by Palle E. T. Jorgensen)
More informationRegularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces
Int. Journal of Math. Analysis, Vol. 3, 2009, no. 12, 549-561 Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Nguyen Buong Vietnamse Academy of Science
More informationHelly's Theorem and its Equivalences via Convex Analysis
Portland State University PDXScholar University Honors Theses University Honors College 2014 Helly's Theorem and its Equivalences via Convex Analysis Adam Robinson Portland State University Let us know
More informationImplications of the Constant Rank Constraint Qualification
Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates
More informationCONVEX COMPOSITE FUNCTIONS IN BANACH SPACES AND THE PRIMAL LOWER-NICE PROPERTY
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 126, Number 12, December 1998, Pages 3701 3708 S 0002-9939(98)04324-X CONVEX COMPOSITE FUNCTIONS IN BANACH SPACES AND THE PRIMAL LOWER-NICE PROPERTY
More informationCharacterizations of the solution set for non-essentially quasiconvex programming
Optimization Letters manuscript No. (will be inserted by the editor) Characterizations of the solution set for non-essentially quasiconvex programming Satoshi Suzuki Daishi Kuroiwa Received: date / Accepted:
More informationConvergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces
Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese
More informationSOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction
ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems
More informationSECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS
Journal of Applied Analysis Vol. 9, No. 2 (2003), pp. 261 273 SECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS I. GINCHEV and V. I. IVANOV Received June 16, 2002 and, in revised form,
More informationITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI. Received December 14, 1999
Scientiae Mathematicae Vol. 3, No. 1(2000), 107 115 107 ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI Received December 14, 1999
More informationON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES
U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable
More informationImplicit Multifunction Theorems
Implicit Multifunction Theorems Yuri S. Ledyaev 1 and Qiji J. Zhu 2 Department of Mathematics and Statistics Western Michigan University Kalamazoo, MI 49008 Abstract. We prove a general implicit function
More informationCourse Summary Math 211
Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.
More informationGeneralized monotonicity and convexity for locally Lipschitz functions on Hadamard manifolds
Generalized monotonicity and convexity for locally Lipschitz functions on Hadamard manifolds A. Barani 1 2 3 4 5 6 7 Abstract. In this paper we establish connections between some concepts of generalized
More informationNecessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones
Noname manuscript No. (will be inserted by the editor Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Truong Q. Bao Suvendu R. Pattanaik
More informationGeneralized directional derivatives for locally Lipschitz functions which satisfy Leibniz rule
Control and Cybernetics vol. 36 (2007) No. 4 Generalized directional derivatives for locally Lipschitz functions which satisfy Leibniz rule by J. Grzybowski 1, D. Pallaschke 2 and R. Urbański 1 1 Faculty
More informationERROR BOUNDS FOR VECTOR-VALUED FUNCTIONS ON METRIC SPACES. Keywords: variational analysis, error bounds, slope, vector variational principle
ERROR BOUNDS FOR VECTOR-VALUED FUNCTIONS ON METRIC SPACES EWA M. BEDNARCZUK AND ALEXANDER Y. KRUGER Dedicated to Professor Phan Quoc Khanh on his 65th birthday Abstract. In this paper, we attempt to extend
More informationMath 147, Homework 5 Solutions Due: May 15, 2012
Math 147, Homework 5 Solutions Due: May 15, 2012 1 Let f : R 3 R 6 and φ : R 3 R 3 be the smooth maps defined by: f(x, y, z) = (x 2, y 2, z 2, xy, xz, yz) and φ(x, y, z) = ( x, y, z) (a) Show that f is
More informationOn nonexpansive and accretive operators in Banach spaces
Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive
More information(7: ) minimize f (x) subject to g(x) E C, (P) (also see Pietrzykowski [5]). We shall establish an equivalence between the notion of
SIAM J CONTROL AND OPTIMIZATION Vol 29, No 2, pp 493-497, March 1991 ()1991 Society for Industrial and Applied Mathematics 014 CALMNESS AND EXACT PENALIZATION* J V BURKE$ Abstract The notion of calmness,
More informationON THE INVERSE FUNCTION THEOREM
PACIFIC JOURNAL OF MATHEMATICS Vol. 64, No 1, 1976 ON THE INVERSE FUNCTION THEOREM F. H. CLARKE The classical inverse function theorem gives conditions under which a C r function admits (locally) a C Γ
More informationarxiv: v1 [math.ca] 5 Mar 2015
arxiv:1503.01809v1 [math.ca] 5 Mar 2015 A note on a global invertibility of mappings on R n Marek Galewski July 18, 2017 Abstract We provide sufficient conditions for a mapping f : R n R n to be a global
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationNONDIFFERENTIABLE SECOND-ORDER MINIMAX MIXED INTEGER SYMMETRIC DUALITY
J. Korean Math. Soc. 48 (011), No. 1, pp. 13 1 DOI 10.4134/JKMS.011.48.1.013 NONDIFFERENTIABLE SECOND-ORDER MINIMAX MIXED INTEGER SYMMETRIC DUALITY Tilak Raj Gulati and Shiv Kumar Gupta Abstract. In this
More informationYuqing Chen, Yeol Je Cho, and Li Yang
Bull. Korean Math. Soc. 39 (2002), No. 4, pp. 535 541 NOTE ON THE RESULTS WITH LOWER SEMI-CONTINUITY Yuqing Chen, Yeol Je Cho, and Li Yang Abstract. In this paper, we introduce the concept of lower semicontinuous
More informationChapter 2 Convex Analysis
Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,
More informationThe exact absolute value penalty function method for identifying strict global minima of order m in nonconvex nonsmooth programming
Optim Lett (2016 10:1561 1576 DOI 10.1007/s11590-015-0967-3 ORIGINAL PAPER The exact absolute value penalty function method for identifying strict global minima of order m in nonconvex nonsmooth programming
More informationDifferentiability of Convex Functions on a Banach Space with Smooth Bump Function 1
Journal of Convex Analysis Volume 1 (1994), No.1, 47 60 Differentiability of Convex Functions on a Banach Space with Smooth Bump Function 1 Li Yongxin, Shi Shuzhong Nankai Institute of Mathematics Tianjin,
More informationCONVERGENCE THEOREMS FOR STRICTLY ASYMPTOTICALLY PSEUDOCONTRACTIVE MAPPINGS IN HILBERT SPACES. Gurucharan Singh Saluja
Opuscula Mathematica Vol 30 No 4 2010 http://dxdoiorg/107494/opmath2010304485 CONVERGENCE THEOREMS FOR STRICTLY ASYMPTOTICALLY PSEUDOCONTRACTIVE MAPPINGS IN HILBERT SPACES Gurucharan Singh Saluja Abstract
More informationFixed Point Approach to the Estimation of Approximate General Quadratic Mappings
Int. Journal of Math. Analysis, Vol. 7, 013, no. 6, 75-89 Fixed Point Approach to the Estimation of Approximate General Quadratic Mappings Kil-Woung Jun Department of Mathematics, Chungnam National University
More informationDifferentiation. f(x + h) f(x) Lh = L.
Analysis in R n Math 204, Section 30 Winter Quarter 2008 Paul Sally, e-mail: sally@math.uchicago.edu John Boller, e-mail: boller@math.uchicago.edu website: http://www.math.uchicago.edu/ boller/m203 Differentiation
More informationCoderivatives of gap function for Minty vector variational inequality
Xue Zhang Journal of Inequalities Applications (2015) 2015:285 DOI 10.1186/s13660-015-0810-5 R E S E A R C H Open Access Coderivatives of gap function for Minty vector variational inequality Xiaowei Xue
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationOn the relations between different duals assigned to composed optimization problems
manuscript No. will be inserted by the editor) On the relations between different duals assigned to composed optimization problems Gert Wanka 1, Radu Ioan Boţ 2, Emese Vargyas 3 1 Faculty of Mathematics,
More informationDUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY
DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY R. T. Rockafellar* Abstract. A basic relationship is derived between generalized subgradients of a given function, possibly nonsmooth and nonconvex,
More informationAN INTERSECTION FORMULA FOR THE NORMAL CONE ASSOCIATED WITH THE HYPERTANGENT CONE
Journal of Applied Analysis Vol. 5, No. 2 (1999), pp. 239 247 AN INTERSETION FORMULA FOR THE NORMAL ONE ASSOIATED WITH THE HYPERTANGENT ONE M. ILIGOT TRAVAIN Received May 26, 1998 and, in revised form,
More informationScalar Asymptotic Contractivity and Fixed Points for Nonexpansive Mappings on Unbounded Sets
Scalar Asymptotic Contractivity and Fixed Points for Nonexpansive Mappings on Unbounded Sets George Isac Department of Mathematics Royal Military College of Canada, STN Forces Kingston, Ontario, Canada
More informationCoderivatives of multivalued mappings, locally compact cones and metric regularity
Nonlinear Analysis 35 (1999) 925 945 Coderivatives of multivalued mappings, locally compact cones and metric regularity A. Jourani a;, L. Thibault b a Universite de Bourgogne, Analyse Appliquee et Optimisation,
More informationABBAS NAJATI AND CHOONKIL PARK
ON A CAUCH-JENSEN FUNCTIONAL INEQUALIT ABBAS NAJATI AND CHOONKIL PARK Abstract. In this paper, we investigate the following functional inequality f(x) + f(y) + f ( x + y + z ) f(x + y + z) in Banach modules
More informationTOPOLOGICAL GROUPS MATH 519
TOPOLOGICAL GROUPS MATH 519 The purpose of these notes is to give a mostly self-contained topological background for the study of the representations of locally compact totally disconnected groups, as
More informationSemi-infinite programming, duality, discretization and optimality conditions
Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,
More informationON PEXIDER DIFFERENCE FOR A PEXIDER CUBIC FUNCTIONAL EQUATION
ON PEXIDER DIFFERENCE FOR A PEXIDER CUBIC FUNCTIONAL EQUATION S. OSTADBASHI and M. SOLEIMANINIA Communicated by Mihai Putinar Let G be an abelian group and let X be a sequentially complete Hausdorff topological
More informationEpiconvergence and ε-subgradients of Convex Functions
Journal of Convex Analysis Volume 1 (1994), No.1, 87 100 Epiconvergence and ε-subgradients of Convex Functions Andrei Verona Department of Mathematics, California State University Los Angeles, CA 90032,
More informationON NECESSARY OPTIMALITY CONDITIONS IN MULTIFUNCTION OPTIMIZATION WITH PARAMETERS
ACTA MATHEMATICA VIETNAMICA Volume 25, Number 2, 2000, pp. 125 136 125 ON NECESSARY OPTIMALITY CONDITIONS IN MULTIFUNCTION OPTIMIZATION WITH PARAMETERS PHAN QUOC KHANH AND LE MINH LUU Abstract. We consider
More informationDECENTRALIZED CONVEX-TYPE EQUILIBRIUM IN NONCONVEX MODELS OF WELFARE ECONOMICS VIA NONLINEAR PRICES
DECENTRALIZED CONVEX-TYPE EQUILIBRIUM IN NONCONVEX MODELS OF WELFARE ECONOMICS VIA NONLINEAR PRICES BORIS S. MORDUKHOVICH 1 Department of Mathematics, Wayne State University Detroit, MI 48202, U.S.A. boris@math.wayne.edu
More information1. Bounded linear maps. A linear map T : E F of real Banach
DIFFERENTIABLE MAPS 1. Bounded linear maps. A linear map T : E F of real Banach spaces E, F is bounded if M > 0 so that for all v E: T v M v. If v r T v C for some positive constants r, C, then T is bounded:
More informationExamples of Convex Functions and Classifications of Normed Spaces
Journal of Convex Analysis Volume 1 (1994), No.1, 61 73 Examples of Convex Functions and Classifications of Normed Spaces Jon Borwein 1 Department of Mathematics and Statistics, Simon Fraser University
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics MONOTONE TRAJECTORIES OF DYNAMICAL SYSTEMS AND CLARKE S GENERALIZED JACOBIAN GIOVANNI P. CRESPI AND MATTEO ROCCA Université de la Vallée d Aoste
More informationVariational Analysis and Tame Optimization
Variational Analysis and Tame Optimization Aris Daniilidis http://mat.uab.cat/~arisd Universitat Autònoma de Barcelona April 15 17, 2010 PLAN OF THE TALK Nonsmooth analysis Genericity of pathological situations
More informationCHARACTERIZATIONS OF LIPSCHITZIAN STABILITY
CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY IN NONLINEAR PROGRAMMING 1 A. L. DONTCHEV Mathematical Reviews, Ann Arbor, MI 48107 and R. T. ROCKAFELLAR Dept. of Math., Univ. of Washington, Seattle, WA 98195
More informationJournal of Mathematical Analysis and Applications. Riemann integrability and Lebesgue measurability of the composite function
J. Math. Anal. Appl. 354 (2009) 229 233 Contents lists available at ScienceDirect Journal of Mathematical Analysis and Applications www.elsevier.com/locate/jmaa Riemann integrability and Lebesgue measurability
More informationMonotone Point-to-Set Vector Fields
Monotone Point-to-Set Vector Fields J.X. da Cruz Neto, O.P.Ferreira and L.R. Lucambio Pérez Dedicated to Prof.Dr. Constantin UDRIŞTE on the occasion of his sixtieth birthday Abstract We introduce the concept
More informationON THE CONVERGENCE OF MODIFIED NOOR ITERATION METHOD FOR NEARLY LIPSCHITZIAN MAPPINGS IN ARBITRARY REAL BANACH SPACES
TJMM 6 (2014), No. 1, 45-51 ON THE CONVERGENCE OF MODIFIED NOOR ITERATION METHOD FOR NEARLY LIPSCHITZIAN MAPPINGS IN ARBITRARY REAL BANACH SPACES ADESANMI ALAO MOGBADEMU Abstract. In this present paper,
More informationDownloaded 09/27/13 to Redistribution subject to SIAM license or copyright; see
SIAM J. OPTIM. Vol. 23, No., pp. 256 267 c 203 Society for Industrial and Applied Mathematics TILT STABILITY, UNIFORM QUADRATIC GROWTH, AND STRONG METRIC REGULARITY OF THE SUBDIFFERENTIAL D. DRUSVYATSKIY
More informationarxiv: v1 [math.oc] 17 Oct 2017
Noname manuscript No. (will be inserted by the editor) Saddle representations of positively homogeneous functions by linear functions Valentin V. Gorokhovik Marina Trafimovich arxiv:1710.06133v1 [math.oc]
More informationChapter 3. Differentiable Mappings. 1. Differentiable Mappings
Chapter 3 Differentiable Mappings 1 Differentiable Mappings Let V and W be two linear spaces over IR A mapping L from V to W is called a linear mapping if L(u + v) = Lu + Lv for all u, v V and L(λv) =
More informationRelaxed Quasimonotone Operators and Relaxed Quasiconvex Functions
J Optim Theory Appl (2008) 138: 329 339 DOI 10.1007/s10957-008-9382-6 Relaxed Quasimonotone Operators and Relaxed Quasiconvex Functions M.R. Bai N. Hadjisavvas Published online: 12 April 2008 Springer
More informationAubin Criterion for Metric Regularity
Journal of Convex Analysis Volume 13 (2006), No. 2, 281 297 Aubin Criterion for Metric Regularity A. L. Dontchev Mathematical Reviews, Ann Arbor, MI 48107, USA ald@ams.org M. Quincampoix Laboratoire de
More informationJoint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018
EXTENDED EULER-LAGRANGE AND HAMILTONIAN CONDITIONS IN OPTIMAL CONTROL OF SWEEPING PROCESSES WITH CONTROLLED MOVING SETS BORIS MORDUKHOVICH Wayne State University Talk given at the conference Optimization,
More informationCharacterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function
Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function arxiv:1805.03847v1 [math.oc] 10 May 2018 Vsevolod I. Ivanov Department of Mathematics, Technical
More informationA convergence result for an Outer Approximation Scheme
A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento
More informationViscosity approximation method for m-accretive mapping and variational inequality in Banach space
An. Şt. Univ. Ovidius Constanţa Vol. 17(1), 2009, 91 104 Viscosity approximation method for m-accretive mapping and variational inequality in Banach space Zhenhua He 1, Deifei Zhang 1, Feng Gu 2 Abstract
More informationBORIS MORDUKHOVICH Wayne State University Detroit, MI 48202, USA. Talk given at the SPCOM Adelaide, Australia, February 2015
CODERIVATIVE CHARACTERIZATIONS OF MAXIMAL MONOTONICITY BORIS MORDUKHOVICH Wayne State University Detroit, MI 48202, USA Talk given at the SPCOM 2015 Adelaide, Australia, February 2015 Based on joint papers
More information