SIAM J. CONTROL OPTIM. © 1998 Society for Industrial and Applied Mathematics
Vol. 36, No. 5, pp. 1815–1831, September 1998

APPROXIMATE JACOBIAN MATRICES FOR NONSMOOTH CONTINUOUS MAPS AND C^1-OPTIMIZATION

V. JEYAKUMAR AND D. T. LUC

Abstract. The notion of approximate Jacobian matrices is introduced for a continuous vector-valued map. It is shown, for instance, that the Clarke generalized Jacobian is an approximate Jacobian for a locally Lipschitz map. The approach is based on the idea of convexificators of real-valued functions. Mean value conditions for continuous vector-valued maps and Taylor's expansions for continuously Gâteaux differentiable functions (i.e., C^1-functions) are presented in terms of approximate Jacobians and approximate Hessians, respectively. Second-order necessary and sufficient conditions for optimality and convexity of C^1-functions are also given.

Key words. generalized Jacobians, nonsmooth analysis, mean value conditions, optimality conditions

AMS subject classifications. 49A52, 90C30, 26A24

1. Introduction. Over the past two decades, a great deal of research has focused on the study of first- and second-order analysis of real-valued nonsmooth functions [2, 3, 4, 5, 11, 12, 14, 15, 21, 23, 24, 20, 25, 27, 28, 29, 30, 34, 35]. The results of nonsmooth analysis of real-valued functions now provide basic tools of modern analysis in many branches of mathematics, such as mathematical programming, control, and mechanics. Indeed, the range of applications of nonsmooth calculus demonstrates the basic nature of nonsmooth phenomena in the mathematical and engineering sciences. On the other hand, research in the area of nonsmooth analysis of vector-valued maps has been of substantial interest in recent years [2, 6, 7, 8, 9, 10, 18, 21, 22, 23, 24, 29, 31].
In particular, it is known that the development and analysis of generalized Jacobian matrices for nonsmooth vector-valued maps are crucial from the viewpoint of control problems and numerical methods of optimization. For instance, the Clarke generalized Jacobian matrices [2] of a locally Lipschitz map play an important role in Newton-based numerical methods for solving nonsmooth equations and optimization problems (see [26] and the references therein; see also [17, 18, 19] for other applications). Warga [32, 33] examined derivate containers (unbounded derivate containers) as set-valued derivatives for locally Lipschitz (continuous) vector-valued maps in the context of local and global inverse function theorems. Mordukhovich [21, 22] developed a generalized differential calculus for general nonsmooth vector-valued maps using set-valued derivatives called coderivatives [9, 21]. Our aim in this paper is to introduce a new concept of approximate Jacobian matrices for continuous vector-valued maps that are not necessarily locally Lipschitz, to develop certain calculus rules for approximate Jacobians, and to apply the concept to optimization problems involving continuously Gâteaux differentiable functions.

Received by the editors November 8, 1996; accepted for publication (in revised form) October 2, 1997; published electronically July 9, 1998. This research was partially supported by a grant from the Australian Research Council.

Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia (jeya@maths.unsw.edu.au). Some of the work of this author was carried out while visiting the Centre for Experimental and Constructive Mathematics at Simon Fraser University, Canada.

Institute of Mathematics, Hanoi, Vietnam (dtluc@thevinh.ac.vn). Some of the work of this author was done while visiting the University of New South Wales.
This concept is a generalization of the idea of convexificators of real-valued functions, studied recently in [4, 5, 13], to vector-valued maps. Convexificators provide two-sided convex approximations [30] for real-valued functions. Unlike the set-valued generalized derivatives [9, 21, 22, 32, 33] mentioned above for vector-valued maps, the approximate Jacobian is defined as a closed subset of the space of n × m matrices for a vector-valued map from R^n into R^m. Approximate Jacobians not only extend the nonsmooth analysis of locally Lipschitz maps to continuous maps but also unify and strengthen various results of nonsmooth analysis. They also enjoy a useful calculus, including a generalized mean value property and chain rules. Moreover, approximate Jacobians allow us to present second-order optimality conditions in easily verifiable forms in terms of approximate Hessian matrices for C^1-optimization problems, extending the corresponding results for C^{1,1}-problems [7].

The outline of the paper is as follows. In section 2, approximate Jacobian matrices are introduced, and it is shown that for a locally Lipschitz map the Clarke generalized Jacobian is an approximate Jacobian. Various examples of approximate Jacobians are also given. Section 3 establishes mean value conditions for continuous vector-valued maps and provides necessary and sufficient conditions in terms of approximate Jacobians for a continuous map to be locally Lipschitz. Various calculus rules for approximate Jacobians are given in section 4. Approximate Hessian matrices are introduced in section 5, and their connections to C^{1,1}-functions are discussed. Section 6 presents generalizations of Taylor's expansions for C^1-functions. In section 7, second-order necessary and sufficient conditions for optimality and convexity of C^1-functions are given.

2. Approximate Jacobians for continuous maps.
This section contains notation, definitions, and preliminaries that will be used throughout the paper. Let F : R^n → R^m be a continuous map with components (f_1, ..., f_m). For each v ∈ R^m, the composite function (vF) : R^n → R is defined by

(vF)(x) = ⟨v, F(x)⟩ = Σ_{i=1}^m v_i f_i(x).

The lower Dini directional derivative and the upper Dini directional derivative of vF at x in the direction u ∈ R^n are defined by

(vF)^-(x, u) := liminf_{t↓0} [(vF)(x + tu) − (vF)(x)] / t,

(vF)^+(x, u) := limsup_{t↓0} [(vF)(x + tu) − (vF)(x)] / t.

We denote by L(R^n, R^m) the space of all n × m matrices. The convex hull and the closed convex hull of a set A in a topological vector space are denoted by co(A) and co̅(A), respectively.

Definition 2.1. The map F : R^n → R^m admits an approximate Jacobian ∂F(x) at x ∈ R^n if ∂F(x) ⊆ L(R^n, R^m) is closed and, for each v ∈ R^m,

(2.1) (vF)^-(x, u) ≤ sup_{M ∈ ∂F(x)} ⟨Mv, u⟩ for all u ∈ R^n.
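The Dini directional derivatives and inequality (2.1) can be explored numerically. The following sketch is illustrative only (the function names, the sample step sizes, and the tolerance are ours, not from the paper): it approximates the lower Dini derivative of vF for F(x, y) = (|x|, |y|), the map of Example 2.4 below, and checks inequality (2.1) against the four diagonal sign matrices used there as an approximate Jacobian at the origin.

```python
# Illustrative sketch (not from the paper): estimate the lower Dini directional
# derivative of (vF) at x in direction u for F(x, y) = (|x|, |y|), and check
# inequality (2.1) against the candidate approximate Jacobian
# {diag(e1, e2) : e1, e2 = +/-1} at x = 0.

def vF(v, p):
    # (vF)(p) = <v, F(p)> for F(x, y) = (|x|, |y|).
    return v[0] * abs(p[0]) + v[1] * abs(p[1])

def lower_dini(v, x, u, ts=None):
    # (vF)^-(x, u) = liminf_{t -> 0+} [(vF)(x + tu) - (vF)(x)] / t,
    # approximated by the smallest difference quotient over a few small t > 0.
    ts = ts or [10.0 ** (-k) for k in range(3, 9)]
    return min((vF(v, (x[0] + t * u[0], x[1] + t * u[1])) - vF(v, x)) / t
               for t in ts)

# Candidate approximate Jacobian of F at 0: the four diagonal sign matrices,
# each written as a pair of rows.
JAC0 = [((e1, 0.0), (0.0, e2)) for e1 in (1.0, -1.0) for e2 in (1.0, -1.0)]

def mv_dot_u(M, v, u):
    # <Mv, u>, where M acts on v in R^m and the result pairs with u in R^n.
    Mv = (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])
    return Mv[0] * u[0] + Mv[1] * u[1]

def check_21(v, u, x=(0.0, 0.0)):
    # Inequality (2.1): (vF)^-(x, u) <= sup_{M in JAC0} <Mv, u>.
    return lower_dini(v, x, u) <= max(mv_dot_u(M, v, u) for M in JAC0) + 1e-9
```

Since F is positively homogeneous, the difference quotients at 0 are constant in t, so the numerical estimate is exact there; for general maps the step sizes would only give a rough approximation of the liminf.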
A matrix M ∈ ∂F(x) is called an approximate Jacobian matrix of F at x. Note that condition (2.1) is equivalent to the condition

(2.2) (vF)^+(x, u) ≥ inf_{M ∈ ∂F(x)} ⟨Mv, u⟩ for all u ∈ R^n.

It is worth noting that inequality (2.1) means that the set ∂F(x)v is an upper convexificator [13, 16] of the function vF at x. Similarly, inequality (2.2) states that ∂F(x)v is a lower convexificator of vF at x. In the case m = 1, inequality (2.1) (or (2.2)) is equivalent to the condition

(2.3) F^-(x, u) ≤ sup_{x* ∈ ∂F(x)} ⟨x*, u⟩ and F^+(x, u) ≥ inf_{x* ∈ ∂F(x)} ⟨x*, u⟩;

thus, the set ∂F(x) is a convexificator of F at x. Also note that in the case m = 1, condition (2.3) is equivalent to the condition that for each α ∈ R,

(2.4) (αF)^-(x, u) ≤ sup_{x* ∈ ∂F(x)} ⟨αx*, u⟩ for all u ∈ R^n.

Similarly, condition (2.3) is also equivalent to the condition that for each α ∈ R,

(2.5) (αF)^+(x, u) ≥ inf_{x* ∈ ∂F(x)} ⟨αx*, u⟩ for all u ∈ R^n.

For applications of convexificators, see [5, 13, 16]. To clarify the definition, let us consider some examples.

Example 2.2. If F : R^n → R^m is continuously differentiable at x, then any closed subset Φ(x) of L(R^n, R^m) containing the Jacobian ∇F(x) is an approximate Jacobian of F at x. In this case, for each v ∈ R^m,

(vF)^-(x, u) = ⟨∇F(x)v, u⟩ ≤ sup_{M ∈ Φ(x)} ⟨Mv, u⟩ for all u ∈ R^n.

Observe from the definition of the approximate Jacobian that for any map F : R^n → R^m, the whole space L(R^n, R^m) serves as a trivial approximate Jacobian for F at any point of R^n. Let us now examine approximate Jacobians for locally Lipschitz maps.

Example 2.3. Suppose that F : R^n → R^m is locally Lipschitz at x. Then the Clarke generalized Jacobian ∂_C F(x) is an approximate Jacobian of F at x. Indeed, for each v ∈ R^m,

(2.6) ∂(vF)(x) = ∂_C F(x)v.
Consequently, for each u ∈ R^n,

(vF)°(x, u) = max_{ξ ∈ ∂(vF)(x)} ⟨ξ, u⟩ = max_{M ∈ ∂_C F(x)} ⟨Mv, u⟩,

where

∂_C F(x) = co{ lim_{n→∞} ∇F(x_n)^T : x_n ∈ Ω, x_n → x },

Ω is the set of points in R^n at which F is differentiable, and the Clarke directional derivative of vF is given by

(vF)°(x, u) = limsup_{x′ → x, t↓0} ⟨v, F(x′ + tu) − F(x′)⟩ / t.
Since

(vF)^-(x, u) ≤ (vF)°(x, u) for all u ∈ R^n,

the set ∂_C F(x) is an approximate Jacobian of F at x. For the locally Lipschitz map F : R^n → R^m, the set

∂_B F(x) := { lim_{n→∞} ∇F(x_n)^T : x_n ∈ Ω, x_n → x }

is also an approximate Jacobian of F at x. The set ∂_B F(x) is known as the B-subdifferential of F at x, which plays a significant role in the development of nonsmooth Newton methods (see [26]). In passing, note that for each v ∈ R^m,

∂(vF)(x) = co(∂_M (vF)(x)) = co(D*F(x)(v)),

where the set-valued mapping D*F(x) from R^m into R^n is the coderivative of F at x and ∂_M (vF)(x) is the first-order subdifferential of vF at x in the sense of Mordukhovich [22]. However, for locally Lipschitz maps, the coderivative does not appear to have a representation of the form (2.6), which allowed us above to compare approximate Jacobians with the Clarke generalized Jacobian. The reader is referred to [9, 21, 22, 29] for a more general definition and associated properties of coderivatives. A second-order analogue of the coderivative for vector-valued maps was given recently in [10].

Let us look at a numerical example of a locally Lipschitz map for which the Clarke generalized Jacobian strictly contains an approximate Jacobian.

Example 2.4. Consider the function F : R^2 → R^2,

F(x, y) = (|x|, |y|).

Then, writing (a b; c d) for the 2 × 2 matrix with rows (a, b) and (c, d),

∂F(0) = { (1 0; 0 1), (−1 0; 0 1), (1 0; 0 −1), (−1 0; 0 −1) }

is an approximate Jacobian of F at 0. On the other hand, the Clarke generalized Jacobian is

∂_C F(0) = { (α 0; 0 β) : α, β ∈ [−1, 1] },

which is also an approximate Jacobian of F at 0 and contains ∂F(0). Observe in this example that ∂_C F(0) is the convex hull of ∂F(0). However, this is not always the case. The following example illustrates that, even in the case m = 1, the convex hull of an approximate Jacobian of a locally Lipschitz map may be strictly contained in the Clarke generalized Jacobian.

Example 2.5. Define F : R^2 → R by

F(x, y) = |x| − |y|.

Then it can easily be verified that
∂_1 F(0) = {(1, 1), (−1, −1)} and ∂_2 F(0) = {(1, −1), (−1, 1)}
are approximate Jacobians of F at 0, whereas

∂_B F(0) = {(1, 1), (−1, −1), (1, −1), (−1, 1)}

and

∂_C F(0) = co({(1, 1), (−1, −1), (1, −1), (−1, 1)}).

It is also worth noting that

co(∂_1 F(0)) ⊊ co(∂_M F(0)) = ∂_C F(0).

Clearly, this example shows that certain results, such as mean value conditions and necessary optimality conditions, that are expressed in terms of ∂F(x) may provide sharper conditions even for locally Lipschitz maps (see section 3). Let us now present an example of a continuous map for which the Clarke generalized Jacobian does not exist, whereas approximate Jacobians are quite easy to calculate.

Example 2.6. Define F : R^2 → R^2 by

F(x, y) = (√|x| sgn(x) + |y|, √|y| sgn(y) + |y|),

where sgn(x) = 1 for x > 0, 0 for x = 0, and −1 for x < 0. Then F is not locally Lipschitz at (0, 0), and so the Clarke generalized Jacobian does not exist there. However, for each c ∈ R, the set

∂F(0, 0) = { (α 1; 0 β), (α −1; 0 β) : α, β ≥ c }

is an approximate Jacobian of F at (0, 0).

3. Generalized mean value theorems. In this section we derive mean value theorems for continuous maps in terms of approximate Jacobians and show how locally Lipschitz vector-valued maps can be characterized using approximate Jacobians.

Theorem 3.1. Let a, b ∈ R^n and let F : R^n → R^m be continuous. Assume that for each x ∈ [a, b], ∂F(x) is an approximate Jacobian of F at x. Then

F(b) − F(a) ∈ co̅(∂F([a, b])(b − a)).

Proof. Let us first note that the right-hand side above is the closed convex hull of all points of the form M(b − a), where M ∈ ∂F(ζ) for some ζ ∈ [a, b]. Let v ∈ R^m be arbitrary and fixed. Consider the real-valued function g : [0, 1] → R,

g(t) = ⟨v, F(a + t(b − a)) − F(a) + t(F(a) − F(b))⟩.

Then g is continuous on [0, 1] with g(0) = g(1). So g attains a minimum or a maximum at some t_0 ∈ (0, 1). Suppose that t_0 is a minimum point. Then, for each α ∈ R, g^-(t_0, α) ≥ 0. It now follows from direct calculations that

g^-(t_0, α) = (vF)^-(a + t_0(b − a), α(b − a)) + α⟨v, F(a) − F(b)⟩.
Hence, for each α ∈ R,

(vF)^-(a + t_0(b − a), α(b − a)) ≥ α⟨v, F(b) − F(a)⟩.
Now, by taking α = 1 and α = −1, we obtain

−(vF)^-(a + t_0(b − a), a − b) ≤ ⟨v, F(b) − F(a)⟩ ≤ (vF)^-(a + t_0(b − a), b − a).

By (2.1), we get

inf_{M ∈ ∂F(a + t_0(b − a))} ⟨Mv, b − a⟩ ≤ ⟨v, F(b) − F(a)⟩ ≤ sup_{M ∈ ∂F(a + t_0(b − a))} ⟨Mv, b − a⟩.

Consequently,

⟨v, F(b) − F(a)⟩ ∈ co̅(⟨∂F(a + t_0(b − a))v, b − a⟩),

and so

(3.1) ⟨v, F(b) − F(a)⟩ ∈ co̅(⟨∂F([a, b])v, b − a⟩).

Since this inclusion holds for each v ∈ R^m, we claim that

F(b) − F(a) ∈ co̅(∂F([a, b])(b − a)).

If this were not so, then, since co̅(∂F([a, b])(b − a)) is a closed convex subset of R^m, it would follow from the separation theorem that for some p ∈ R^m and ε > 0,

⟨p, F(b) − F(a)⟩ − ε > sup_{u ∈ co̅(∂F([a, b])(b − a))} ⟨p, u⟩.

This implies

⟨p, F(b) − F(a)⟩ > sup{ α : α ∈ co̅(⟨∂F([a, b])p, b − a⟩) },

which contradicts (3.1). Similarly, if t_0 is a maximum point, then g^+(t_0, α) ≤ 0 for each α ∈ R. Using the same line of argument as above, we arrive at the same conclusion, and the proof is complete.

Corollary 3.2. Let a, b ∈ R^n and let F : R^n → R^m be continuous. Assume that ∂F(x) is a bounded approximate Jacobian of F at x for each x ∈ [a, b]. Then

(3.2) F(b) − F(a) ∈ co(∂F([a, b])(b − a)).

Proof. Since ∂F(x) is closed and bounded, hence compact, for each x ∈ [a, b], the set

co̅(∂F([a, b])(b − a)) = co(∂F([a, b])(b − a))

is closed, and so the conclusion follows from Theorem 3.1.

In the following corollary we deduce the mean value theorem for locally Lipschitz maps (see [1, 6]) as a special case of Theorem 3.1.

Corollary 3.3. Let a, b ∈ R^n and let F : R^n → R^m be locally Lipschitz on R^n. Then

(3.3) F(b) − F(a) ∈ co(∂_C F([a, b])(b − a)).
Proof. In this case the Clarke generalized Jacobian ∂_C F(x) is a convex and compact approximate Jacobian of F at x. Hence, the conclusion follows from Corollary 3.2.

Note that even in the case where F is locally Lipschitz, Corollary 3.2 may provide a stronger mean value condition than condition (3.3) of Corollary 3.3. To see this, let n = 2, m = 1, F(x, y) = |x| − |y|, a = (−1, −1), and b = (1, 1). Then condition (3.2) of Corollary 3.2 is verified by ∂F(0) = {(1, 1), (−1, −1)}, whereas condition (3.3) holds for ∂_C F(0), where

∂_C F(0) = co({(1, 1), (−1, −1), (1, −1), (−1, 1)}) ⊋ ∂F(0).

As a special case of the above theorem, we see that if F is real-valued, then an asymptotic mean value equality is obtained. This was shown in [13].

Corollary 3.4. Let a, b ∈ R^n and let F : R^n → R be continuous. Assume that, for each x ∈ [a, b], ∂F(x) is a convexificator of F. Then there exist c ∈ (a, b) and a sequence {x_k} ⊆ co(∂F(c)) such that

F(b) − F(a) = lim_{k→∞} ⟨x_k, b − a⟩.

Proof. The conclusion follows from the proof of Theorem 3.1 by noting that a convexificator ∂F(x) is an approximate Jacobian of F at x.

We now see how locally Lipschitz functions can be characterized using the above mean value theorem. We say that a set-valued mapping G : R^n → L(R^n, R^m) is locally bounded at x if there exist a neighborhood U of x and a positive α such that ‖A‖ ≤ α for each A ∈ G(U). Recall that the map G is said to be upper semicontinuous at x if for each open set V containing G(x) there is a neighborhood U of x such that G(U) ⊆ V. Clearly, if G is upper semicontinuous at x and G(x) is bounded, then G is locally bounded at x.

Theorem 3.5. Let F : R^n → R^m be continuous. Then F has a locally bounded approximate Jacobian map ∂F at x if and only if F is locally Lipschitz at x.

Proof. Assume that ∂F(y) is an approximate Jacobian of F for each y in a neighborhood U of x and that ∂F is locally bounded on U. Without loss of generality, we may assume that U is convex.
Then there exists α > 0 such that ‖A‖ ≤ α for each A ∈ ∂F(U). Let x′, y′ ∈ U. Then [x′, y′] ⊆ U, and by the mean value theorem,

F(x′) − F(y′) ∈ co̅(∂F([x′, y′])(x′ − y′)) ⊆ co̅(∂F(U)(x′ − y′)).

Hence,

‖F(x′) − F(y′)‖ ≤ ‖x′ − y′‖ sup{ ‖A‖ : A ∈ ∂F(U) } ≤ α‖x′ − y′‖,

and so F is locally Lipschitz at x. Conversely, if F is locally Lipschitz at x, then the Clarke generalized Jacobian can be chosen as an approximate Jacobian for F, and it is locally bounded at x.
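The remark following Corollary 3.3 can be sampled numerically. The sketch below is ours, not from the paper: for f(x, y) = |x| − |y| on the segment from a = (−1, −1) to b = (1, 1), it collects the ordinary gradients at the differentiable points of the segment together with the set {(1, 1), (−1, −1)} used at the kink, and checks that f(b) − f(a) lies in the resulting interval, as condition (3.2) asserts.

```python
# Numeric sketch (illustrative, not from the paper) of the mean value
# condition (3.2) for f(x, y) = |x| - |y|, a = (-1, -1), b = (1, 1).

def f(p):
    return abs(p[0]) - abs(p[1])

a, b = (-1.0, -1.0), (1.0, 1.0)
d = (b[0] - a[0], b[1] - a[1])                  # b - a = (2, 2)

# Gradients of f at the differentiable points (t, t), t != 0, of [a, b]
# ((1, -1) for t > 0 and (-1, 1) for t < 0), together with the approximate
# Jacobian {(1, 1), (-1, -1)} taken at the kink (0, 0).
jac_segment = [(1.0, -1.0), (-1.0, 1.0), (1.0, 1.0), (-1.0, -1.0)]

# In the scalar case, co(<jac_segment, b - a>) is just the interval [lo, hi].
vals = [g[0] * d[0] + g[1] * d[1] for g in jac_segment]
lo, hi = min(vals), max(vals)

diff = f(b) - f(a)
assert lo <= diff <= hi                         # 0 lies in [-4, 4]
```

For m = 1 the closed convex hull of a set of inner products is an interval, which is why a simple min/max check suffices here.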
4. Calculus rules for approximate Jacobians. In this section, we present some basic calculus rules for approximate Jacobians. We begin by introducing the notion of regular approximate Jacobians, which are useful in some applications.

Definition 4.1. The map F : R^n → R^m admits a regular approximate Jacobian ∂F(x) at x ∈ R^n if ∂F(x) ⊆ L(R^n, R^m) is closed and, for each v ∈ R^m,

(4.1) (vF)^+(x, u) = sup_{M ∈ ∂F(x)} ⟨Mv, u⟩ for all u ∈ R^n,

or equivalently,

(4.2) (vF)^-(x, u) = inf_{M ∈ ∂F(x)} ⟨Mv, u⟩ for all u ∈ R^n.

Note that in the case m = 1 this definition collapses to the notion of the regular convexificator studied in [13]. Thus, a closed set ∂h(x) ⊆ R^n is a regular convexificator of the real-valued function h at x if for each u ∈ R^n,

h^-(x, u) = inf_{ξ ∈ ∂h(x)} ⟨ξ, u⟩ and h^+(x, u) = sup_{ξ ∈ ∂h(x)} ⟨ξ, u⟩.

It is evident that these equalities follow from (4.1) and (4.2) by taking F = h and v = 1 and v = −1, respectively. It is immediate from the definition that if F is differentiable at x, then {∇F(x)} is a regular approximate Jacobian of F at x. However, if F is locally Lipschitz at x, then the Clarke generalized Jacobian ∂_C F(x) is not necessarily a regular approximate Jacobian of F at x. It is also worth noting that if ∂_1 F(x) and ∂_2 F(x) are two regular approximate Jacobians of F at x, then co̅(∂_1 F(x)) = co̅(∂_2 F(x)). In passing, we note that if F is locally Lipschitz on a neighborhood U of x, then there exists a dense set K ⊆ U such that F admits a regular approximate Jacobian at each point of K. By Rademacher's theorem, the dense subset can be chosen as the set of points where F is differentiable.

Theorem 4.2 (Rule 1). Let F and H be continuous maps from R^n to R^m. Assume that ∂F(x) is an approximate Jacobian of F at x and ∂H(x) is a regular approximate Jacobian of H at x. Then the set ∂F(x) + ∂H(x) is an approximate Jacobian of F + H at x.

Proof. Let v ∈ R^m and u ∈ R^n be arbitrary. By definition,

(v(F + H))^-(x, u) = liminf_{t↓0} ⟨v, F(x + tu) − F(x) + H(x + tu) − H(x)⟩ / t.
Let {s_n} be a sequence of positive numbers converging to 0 such that

(vF)^-(x, u) = lim_{n→∞} ⟨v, F(x + s_n u) − F(x)⟩ / s_n.

Then, by (2.1),

lim_{n→∞} ⟨v, F(x + s_n u) − F(x)⟩ / s_n ≤ sup_{M ∈ ∂F(x)} ⟨Mv, u⟩,

and, by the regularity of ∂H(x),

limsup_{n→∞} ⟨v, H(x + s_n u) − H(x)⟩ / s_n ≤ (vH)^+(x, u) = sup_{M ∈ ∂H(x)} ⟨Mv, u⟩.

Consequently,

(v(F + H))^-(x, u) ≤ liminf_{n→∞} ⟨v, F(x + s_n u) − F(x) + H(x + s_n u) − H(x)⟩ / s_n
≤ sup_{M ∈ ∂F(x)} ⟨Mv, u⟩ + sup_{N ∈ ∂H(x)} ⟨Nv, u⟩
= sup_{P ∈ ∂F(x) + ∂H(x)} ⟨Pv, u⟩.

Since u and v are arbitrary, we conclude that ∂F(x) + ∂H(x) is an approximate Jacobian of F + H at x.

Note that, as in the case of convexificators of real-valued functions [18], the set ∂F(x) + ∂H(x) is not necessarily regular at x.

Theorem 4.3 (Rule 2). Let F : R^n → R^m and H : R^m → R^l be continuous maps. Assume that ∂F(x) is a bounded approximate Jacobian of F at x and ∂H(F(x)) is a bounded approximate Jacobian of H at F(x). If the maps ∂F and ∂H are upper semicontinuous at x and F(x), respectively, then ∂H(F(x))∂F(x) is an approximate Jacobian of H ∘ F at x.

Proof. Let w ∈ R^l and u ∈ R^n be arbitrary. Consider the lower Dini directional derivative of w(H ∘ F) at x:

(w(H ∘ F))^-(x, u) = liminf_{t↓0} ⟨w, H(F(x + tu)) − H(F(x))⟩ / t.

By applying the mean value theorem (Theorem 3.1) to F and H, we obtain

F(x + tu) − F(x) ∈ t co̅(∂F([x, x + tu])u),

H(F(x + tu)) − H(F(x)) ∈ co̅(∂H([F(x), F(x + tu)])(F(x + tu) − F(x))).

It now follows from the upper semicontinuity of ∂F and ∂H that for an arbitrarily small ε > 0 we can find t_0 > 0 such that for t ∈ (0, t_0) we have

∂F([x, x + tu]) ⊆ ∂F(x) + εB_1, ∂H([F(x), F(x + tu)]) ⊆ ∂H(F(x)) + εB_2,

where B_1 and B_2 are the unit balls in L(R^n, R^m) and L(R^m, R^l), respectively. Using these inclusions, we obtain

⟨w, H(F(x + tu)) − H(F(x))⟩ / t ∈ ⟨w, A⟩,

where

A := co̅((∂H(F(x))∂F(x) + ε(∂H(F(x))B_1 + B_2 ∂F(x)) + ε² B_2 B_1)u).

Since ∂H(F(x)) and ∂F(x) are bounded, we can find α > 0 such that ‖M‖ ≤ α for all M ∈ ∂H(F(x)) ∪ ∂F(x). Consequently,

(w(H ∘ F))^-(x, u) ≤ sup_{M ∈ ∂H(F(x))∂F(x)} ⟨Mw, u⟩ + 2εα‖w‖‖u‖ + ε²‖w‖‖u‖.

As ε is arbitrary, we conclude that ∂H(F(x))∂F(x) is an approximate Jacobian of H ∘ F at x.
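Rule 1 can be sampled numerically in one dimension. The sketch below is ours, not from the paper: it takes F(x) = |x| with approximate Jacobian {−1, 1} at 0, and H(x) = 2x, whose regular approximate Jacobian at 0 is the singleton {2}, and checks inequality (2.1) for F + H against the Minkowski sum {1, 3} on a small grid of directions u and multipliers v.

```python
# Sketch of Rule 1 with n = m = 1 (illustrative, not from the paper):
# F(x) = |x| with approximate Jacobian {-1, 1} at 0, and H(x) = 2x with
# regular approximate Jacobian {2} at 0; their Minkowski sum {1, 3} should
# satisfy inequality (2.1) for F + H at 0.

def lower_dini(phi, x, u):
    # Rough liminf of the difference quotient over a few small step sizes.
    ts = [10.0 ** (-k) for k in range(3, 9)]
    return min((phi(x + t * u) - phi(x)) / t for t in ts)

F = abs
H = lambda x: 2.0 * x
dF0, dH0 = [-1.0, 1.0], [2.0]
d_sum = [m + n for m in dF0 for n in dH0]        # Minkowski sum {1, 3}

for v in (-1.0, 1.0, 0.5):
    for u in (-1.0, 1.0, 2.0):
        lhs = lower_dini(lambda x: v * (F(x) + H(x)), 0.0, u)
        rhs = max(p * v * u for p in d_sum)      # sup over the sum set
        assert lhs <= rhs + 1e-9                 # inequality (2.1) for F + H
```

Both F and H are positively homogeneous here, so the difference quotients are constant in t and the check is exact up to rounding; regularity of ∂H(0) is what licenses bounding the H-part by its supremum in the proof above.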
5. Approximate Hessian matrices. In this section, unless stated otherwise, we assume that f : R^n → R is a C^1-function, that is, a continuously Gâteaux differentiable function, and we introduce the notion of an approximate Hessian for such functions. Note that the derivative of f, denoted by ∇f, is a map from R^n to R^n.

Definition 5.1. The function f admits an approximate Hessian ∂²f(x) at x if this set is an approximate Jacobian of ∇f at x.

Thus ∂²f(x) = ∂(∇f)(x), and a matrix M ∈ ∂²f(x) is called an approximate Hessian matrix of f at x. Clearly, if f is twice differentiable at x, then ∇²f(x) is a symmetric approximate Hessian matrix of f at x. Let us now examine the relationship between approximate Hessians and the generalized Hessians studied for C^{1,1}-functions, that is, Gâteaux differentiable functions with locally Lipschitz derivatives. Recall that if f : R^n → R is C^{1,1}, then the generalized Hessian in the sense of Hiriart-Urruty, Strodiot, and Hien Nguyen [7] is given by

∂²_H f(x) = co{ M : M = lim_{n→∞} ∇²f(x_n), x_n ∈ Δ, x_n → x },

where Δ is the set of points in R^n at which f is twice differentiable. Clearly, ∂²_H f(x) is a nonempty convex compact set of symmetric matrices. The second-order directional derivative of f at x in the directions (u, v) ∈ R^n × R^n is defined by

f°°(x; u, v) = limsup_{y → x, s↓0} [⟨∇f(y + su), v⟩ − ⟨∇f(y), v⟩] / s.

Since

(v∇f)^-(x, u) ≤ f°°(x; u, v) for each (u, v) ∈ R^n × R^n

and

f°°(x; u, v) = max_{M ∈ ∂²_H f(x)} ⟨Mu, v⟩ = max_{M ∈ ∂²_H f(x)} ⟨Mv, u⟩,

∂²_H f(x) is an approximate Hessian of f at x. The generalized Hessian of f at x as a set-valued map, ∂²f(x) : R^n → R^n, which was given in Cominetti and Correa [3], is defined by

∂²f(x)(u) = { x* ∈ R^n : f°°(x; u, v) ≥ ⟨x*, v⟩ for all v ∈ R^n }.

It is known that the mapping (u, v) ↦ f°°(x; u, v) is finite and sublinear, that ∂²f(x)(u) is a nonempty, convex, and compact subset of R^n, and that for each x, u, v ∈ R^n,

f°°(x; u, v) = max{ ⟨x*, v⟩ : x* ∈ ∂²f(x)(u) }.

Moreover, for each u ∈ R^n,

∂²f(x)(u) = ∂²_H f(x)u.
If f is twice continuously differentiable at x, then the generalized Hessian ∂²f(x)(u) is a singleton for every u ∈ R^n. In [34, 35], another generalized second-order directional derivative and a generalized Hessian set-valued map for a C^{1,1}-function f at x were given as follows:

f^∞(x; u, v) = sup_{z ∈ R^n} limsup_{s↓0} [⟨∇f(x + sz + su), v⟩ − ⟨∇f(x + sz), v⟩] / s,
∂²_∞ f(x)(u) = { x* ∈ R^n : f^∞(x; u, v) ≥ ⟨x*, v⟩ for all v ∈ R^n }.

It was shown that the mapping (u, v) ↦ f^∞(x; u, v) is finite and sublinear; that ∂²_∞ f(x)(u) is a nonempty, convex, and compact subset of R^n; and that ∂²_∞ f(x)(u) is single-valued for each u ∈ R^n if and only if f is twice Gâteaux differentiable at x. Further, for each u ∈ R^n,

∂²_∞ f(x)(u) ⊆ ∂²f(x)(u) = ∂²_H f(x)u.

If for each (u, v) ∈ R^n × R^n the function y ↦ f^∞(y; u, v) is upper semicontinuous at x, then ∂²_∞ f(x)(u) = ∂²_H f(x)u.

The following proposition gives necessary and sufficient conditions, in terms of approximate Hessians, for a C^1-function to be C^{1,1}.

Proposition 5.2. Let f : R^n → R be a C^1-function. Then f has a locally bounded approximate Hessian map ∂²f at x if and only if f is C^{1,1} at x.

Proof. This follows from Theorem 3.5 by taking F = ∇f.

We complete this section with an example showing that for a C^{1,1}-function the approximate Hessian may be a singleton which is strictly contained in the generalized Hessian of Hiriart-Urruty, Strodiot, and Hien Nguyen [7].

Example 5.3. Let g be an odd, piecewise linear function on R defined as follows: g(x) = x for x ≥ 1 and g(0) = 0; g(x) = 2x − 1 for x ∈ [1/2, 1]; g(x) = (1/2)x for x ∈ [1/6, 1/2]; g(x) = 2x − 1/6 for x ∈ [1/12, 1/6]; g(x) = (1/4)x for x ∈ [1/60, 1/12]; etc. Let

G(x) = ∫_0^x g(t) dt, x ∈ R,

and define

f(x, y) = G(x) + y²/2.

Then the function f is a C^{1,1}-function, and the generalized Hessian of f at (0, 0) is

∂²_H f(0) = { (α 0; 0 1) : α ∈ [0, 2] }.

However, the approximate Hessian of f at (0, 0) is the singleton

∂²f(0) = { (1 0; 0 1) }.

6. Generalized Taylor's expansions for C^1-functions. In this section, we see how Taylor's expansions can be obtained for C^1-functions using approximate Hessians.

Theorem 6.1. Let f : R^n → R be continuously Gâteaux differentiable on R^n, and let x, y ∈ R^n. Suppose that for each z ∈ [x, y], ∂²f(z) is an approximate Hessian of f at z. Then there exists ζ ∈ (x, y) such that

f(y) ∈ f(x) + ⟨∇f(x), y − x⟩ + (1/2) co̅(⟨∂²f(ζ)(y − x), y − x⟩).

Proof.
Let

h(t) = f(y + t(x − y)) − f(y) + t⟨∇f(y + t(x − y)), y − x⟩ + (1/2)at²,

where a = −2(f(x) − f(y) + ⟨∇f(x), y − x⟩). Then h(0) = 0,
h(1) = f(x) − f(y) + ⟨∇f(x), y − x⟩ + (1/2)a = 0, and h is continuous. So h attains an extremum at some γ ∈ (0, 1). Suppose that γ is a minimum point of h. Then, by the necessary conditions, we have for all v ∈ R,

0 ≤ h^-(γ; v) = liminf_{λ↓0} [h(γ + λv) − h(γ)] / λ.

A direct calculation gives

h^-(γ; v) = v⟨∇f(y + γ(x − y)), x − y⟩ + aγv + v⟨∇f(y + γ(x − y)), y − x⟩
+ γ liminf_{λ↓0} [⟨∇f(y + (γ + λv)(x − y)), y − x⟩ − ⟨∇f(y + γ(x − y)), y − x⟩] / λ
= aγv + γ liminf_{λ↓0} [⟨∇f(y + (γ + λv)(x − y)), y − x⟩ − ⟨∇f(y + γ(x − y)), y − x⟩] / λ.

Let ζ = y + γ(x − y). Then ζ ∈ (x, y), and for v = 1 we get

0 ≤ aγ + γ liminf_{λ↓0} [⟨∇f(ζ + λ(x − y)), y − x⟩ − ⟨∇f(ζ), y − x⟩] / λ
≤ aγ + γ sup_{M ∈ ∂²f(ζ)} ⟨M(y − x), x − y⟩.

This gives us

a ≥ inf_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩.

Similarly, for v = −1, we obtain

0 ≤ −aγ + γ liminf_{λ↓0} [⟨∇f(ζ + λ(y − x)), y − x⟩ − ⟨∇f(ζ), y − x⟩] / λ
≤ −aγ + γ sup_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩;

thus,

a ≤ sup_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩.

Hence, it follows that

inf_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩ ≤ a ≤ sup_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩,

and so

a ∈ co̅(⟨∂²f(ζ)(y − x), y − x⟩);
thus,

(6.1) f(y) − f(x) − ⟨∇f(x), y − x⟩ = (1/2)a ∈ (1/2) co̅(⟨∂²f(ζ)(y − x), y − x⟩).

The case where γ is a maximum point of h also yields the same condition (6.1). The details are left to the reader.

Corollary 6.2. Let f : R^n → R be continuously Gâteaux differentiable on R^n and x, y ∈ R^n. Suppose that for each z ∈ [x, y], ∂²f(z) is a convex and compact approximate Hessian of f at z. Then there exist ζ ∈ (x, y) and M_ζ ∈ ∂²f(ζ) such that

f(y) = f(x) + ⟨∇f(x), y − x⟩ + (1/2)⟨M_ζ(y − x), y − x⟩.

Proof. It follows from the hypothesis that for each z ∈ [x, y], ∂²f(z) is convex and compact, so the closed convex hull in the conclusion of the previous theorem is superfluous. Thus, the inequalities above give

inf_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩ ≤ a ≤ sup_{M ∈ ∂²f(ζ)} ⟨M(y − x), y − x⟩,

and hence a ∈ ⟨∂²f(ζ)(y − x), y − x⟩.

Corollary 6.3 (see [7]). Let f : R^n → R be C^{1,1} and x, y ∈ R^n. Then there exist ζ ∈ (x, y) and M_ζ ∈ ∂²_H f(ζ) such that

f(y) = f(x) + ⟨∇f(x), y − x⟩ + (1/2)⟨M_ζ(y − x), y − x⟩.

Proof. In this case, the conclusion follows from the above corollary by choosing the generalized Hessian ∂²_H f(x) as an approximate Hessian of f at each x.

7. Second-order conditions for optimality and convexity of C^1-functions. In this section, we present second-order necessary and sufficient conditions for optimality and convexity of C^1-functions using approximate Hessian matrices. Consider the optimization problem

(P) minimize f(x) subject to x ∈ R^n,

where f : R^n → R is a continuously Gâteaux differentiable function on R^n. We say that a map F : R^n → R^m admits a semiregular approximate Jacobian ∂F(x) at x ∈ R^n if ∂F(x) ⊆ L(R^n, R^m) is closed and, for each v ∈ R^m,

(vF)^+(x, u) ≤ sup_{M ∈ ∂F(x)} ⟨Mv, u⟩ for all u ∈ R^n.

Similarly, the C^1-function f : R^n → R admits a semiregular approximate Hessian ∂²f(x) at x if this set is a semiregular approximate Jacobian of ∇f at x.
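The expansion in Corollary 6.2 can be checked numerically in one dimension. The sketch below is ours, not from the paper: for the C^{1,1} function f(x) = x|x|/2, whose derivative is |x| and whose approximate Hessian values all lie in [−1, 1], it solves the expansion for M and verifies that M is an admissible Hessian value.

```python
# Numeric sketch (illustrative, not from the paper) of Corollary 6.2 for the
# C^{1,1} function f(x) = 0.5 * x * |x|, with f'(x) = |x|.  Every approximate
# Hessian value of f lies in [-1, 1] (slopes of |x| to the left and right,
# and their convex hull at the kink), so the M solved for below must too.

def f(x):
    return 0.5 * x * abs(x)

def grad(x):
    return abs(x)

x, y = -1.0, 1.0
# Solve f(y) = f(x) + f'(x)(y - x) + 0.5 * M * (y - x)^2 for M.
M = 2.0 * (f(y) - f(x) - grad(x) * (y - x)) / (y - x) ** 2
assert -1.0 <= M <= 1.0     # here M = -0.5, attained at the kink zeta = 0
```

The recovered value M = −1/2 is not a slope of |x| at any smooth point; it belongs only to the convex hull [−1, 1] at the kink, which is exactly why the corollary needs the approximate Hessian to be convex at the intermediate point ζ.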
Of course, every semiregular approximate Hessian of f at x is an approximate Hessian at x. For a C^{1,1}-function f : R^n → R, the generalized Hessian ∂²_H f(x) is a bounded semiregular approximate Hessian of f at x since

(v∇f)^+(x, u) ≤ f°°(x; u, v) = max_{M ∈ ∂²_H f(x)} ⟨Mu, v⟩ = max_{M ∈ ∂²_H f(x)} ⟨Mv, u⟩.

Theorem 7.1. For the problem (P), let x̄ ∈ R^n. Assume that ∂²f(x̄) is a semiregular approximate Hessian of f at x̄.

(i) If x̄ is a local minimum of (P), then ∇f(x̄) = 0 and, for each u ∈ R^n,

sup_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩ ≥ 0.

(ii) If x̄ is a local maximum of (P), then ∇f(x̄) = 0 and, for each u ∈ R^n,

inf_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩ ≤ 0.

Proof. Let u ∈ R^n. Since x̄ is a local minimum of (P), there exists δ > 0 such that for each s ∈ [0, δ],

f(x̄ + su) ≥ f(x̄).

Then, by the mean value theorem, for each s ∈ (0, δ] there exists 0 < t < s such that ⟨∇f(x̄ + tu), u⟩ ≥ 0. So there exists a positive sequence {t_n} ↓ 0 such that ⟨∇f(x̄ + t_n u), u⟩ ≥ 0. Now, as ∇f(x̄) = 0, it follows that

(u∇f)^+(x̄, u) = limsup_{s↓0} [⟨∇f(x̄ + su), u⟩ − ⟨∇f(x̄), u⟩] / s ≥ 0.

Since ∂²f(x̄) is a semiregular approximate Hessian of f at x̄, we have

(u∇f)^+(x̄, u) ≤ sup_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩,

and hence

sup_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩ ≥ 0.

On the other hand, if f attains a local maximum at x̄, then it follows by similar arguments that for each u ∈ R^n,

inf_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩ ≤ 0.

Note that in this case it is convenient to use the inequality

(u∇f)^-(x̄, u) ≥ inf_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩.
Let us look at a numerical example illustrating the significance of the optimality conditions obtained in the previous theorem.

Example 7.2. Define f : R^2 → R by

f(x, y) = (2/3)|x|^{3/2} + (1/2)y².

Then f is C^1 but not C^{1,1}, since the gradient

∇f(x, y) = (|x|^{1/2} sgn(x), y)

is not locally Lipschitz at (0, 0). Evidently, (0, 0) is a minimum point of f, ∇f(0, 0) = (0, 0), and

∂²f(0) = { (α 0; 0 1) : α ≥ 0 }

is a semiregular approximate Hessian of f at (0, 0). For each u = (u_1, u_2) ∈ R^2,

sup_{M ∈ ∂²f(0)} ⟨Mu, u⟩ = sup{ αu_1² + u_2² : α ≥ 0 } ≥ 0.

Hence statement (i) of Theorem 7.1 is verified. However, the generalized Hessians [7] do not apply to this function.

Corollary 7.3. For the problem (P), let x̄ ∈ R^n. Suppose that ∂²f(x̄) is a bounded semiregular approximate Hessian of f at x̄.

(i) If x̄ is a local minimum of (P), then ∇f(x̄) = 0, and for each u ∈ R^n there exists a matrix M ∈ ∂²f(x̄) such that ⟨Mu, u⟩ ≥ 0.

(ii) If x̄ is a local maximum of (P), then ∇f(x̄) = 0, and for each u ∈ R^n there exists a matrix M ∈ ∂²f(x̄) such that ⟨Mu, u⟩ ≤ 0.

Proof. Since ∂²f(x̄) is closed and bounded, it follows from Theorem 7.1 that ∇f(x̄) = 0 and, for each u ∈ R^n,

max_{M ∈ ∂²f(x̄)} ⟨Mu, u⟩ ≥ 0,

so the first conclusion holds. The second conclusion similarly follows from Theorem 7.1.

We now see how optimality conditions for the problem (P) where f is C^{1,1} follow from Corollary 7.3 (cf. [7]).

Corollary 7.4. For the problem (P), assume that the function f is C^{1,1} and x̄ ∈ R^n.

(i) If x̄ is a local minimum of (P), then ∇f(x̄) = 0, and for each u ∈ R^n there exists a matrix M ∈ ∂²_H f(x̄) such that ⟨Mu, u⟩ ≥ 0.

(ii) If x̄ is a local maximum of (P), then ∇f(x̄) = 0, and for each u ∈ R^n there exists a matrix M ∈ ∂²_H f(x̄) such that ⟨Mu, u⟩ ≤ 0.

Proof. The conclusion follows from Corollary 7.3 by choosing ∂²_H f(x̄) as the bounded semiregular approximate Hessian of f at x̄.

Clearly, the conditions of Theorem 7.1 are not sufficient for a local minimum, even for a C²-function f.
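Example 7.2 can be accompanied by a quick numerical check. The sketch below is ours, not from the paper: it confirms that the gradient's difference quotients blow up at the origin (so ∇f is not locally Lipschitz there) and that the quadratic form bound of Theorem 7.1(i) holds for the semiregular approximate Hessian {diag(α, 1) : α ≥ 0}.

```python
# Numeric companion to Example 7.2 (illustrative, not from the paper):
# f(x, y) = (2/3)|x|^{3/2} + y^2/2 has gradient (sqrt(|x|) sgn(x), y).

import math

def grad(p):
    x, y = p
    gx = math.sqrt(abs(x)) * math.copysign(1.0, x) if x else 0.0
    return (gx, y)

# The difference quotient |grad(x, 0) - grad(0, 0)| / |x| = 1/sqrt(|x|)
# grows without bound, so grad is not locally Lipschitz at the origin.
q = [abs(grad((10.0 ** (-k), 0.0))[0]) / 10.0 ** (-k) for k in (2, 4, 6)]
assert q[0] < q[1] < q[2]

# Semiregular approximate Hessian at 0: {diag(a, 1) : a >= 0}.  For each u,
# sup_M <Mu, u> = sup_{a >= 0} (a*u1^2 + u2^2), which is never negative,
# exactly as Theorem 7.1(i) requires at the minimizer (0, 0).
def sup_quad(u):
    u1, u2 = u
    return math.inf if u1 else u2 ** 2

assert all(sup_quad(u) >= 0.0 for u in [(1.0, -2.0), (0.0, 3.0), (0.0, 0.0)])
```

The unboundedness of the α-entry is what lets an approximate Hessian exist here at all; a bounded one cannot exist, by Proposition 5.2, since ∇f is not locally Lipschitz at the origin.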
The generalized Taylor expansion is now applied to obtain a version of the second-order sufficient condition for a local minimum. For related results, see [34, 16].
Theorem 7.5. For the problem (P), let x̄ ∈ R^n. Assume that for each x in a neighborhood of x̄, ∂²f(x) is a bounded approximate Hessian of f at x. If ∇f(x̄) = 0 and, for each 0 < α < 1 and each sufficiently small u ∈ R^n with u ≠ 0,

(7.1) ⟨Mu, u⟩ ≥ 0 for all M ∈ co̅(∂²f(x̄ + αu)),

then x̄ is a local minimum of (P).

Proof. Suppose that x̄ is not a local minimum of (P). Then there exists a sequence {x_n} such that x_n ≠ x̄, x_n → x̄ as n → ∞, and f(x_n) < f(x̄) for each n. Let x_n = x̄ + u_n, where u_n → 0. By the generalized Taylor expansion (Theorem 6.1), there exists 0 < α_n < 1 such that

f(x_n) ∈ f(x̄) + ⟨∇f(x̄), x_n − x̄⟩ + (1/2) co̅(⟨∂²f(x̄ + α_n u_n)(u_n), u_n⟩).

Thus, there exists M_n ∈ co̅(∂²f(x̄ + α_n u_n)) such that f(x_n) = f(x̄) + (1/2)⟨M_n u_n, u_n⟩, and so ⟨M_n u_n, u_n⟩ < 0. This contradicts (7.1). Hence, x̄ is a local minimum of (P).

The following theorem gives second-order sufficient optimality conditions for a strict local minimum.

Theorem 7.6. For the problem (P), let x̄ ∈ R^n. Assume that, for each x in a neighborhood of x̄, ∂²f(x) is a bounded approximate Hessian of f at x. If ∇f(x̄) = 0 and, for each 0 < α < 1 and each sufficiently small u ∈ R^n with u ≠ 0,

(7.2) ⟨Mu, u⟩ > 0 for all M ∈ co̅(∂²f(x̄ + αu)),

then x̄ is a strict local minimum of (P).

Proof. The method of proof is similar to that of Theorem 7.5 and so is omitted.

We now see how the mean value theorem of section 3 and approximate Hessians can be used to characterize convexity of C^1-functions.

Theorem 7.7. Let f : R^n → R be a continuously Gâteaux differentiable function. Assume that ∂²f(x) is an approximate Hessian of f at each point x ∈ R^n. If the matrices M ∈ ∂²f(x) are positive semidefinite for each x ∈ R^n, then f is convex.

Proof. Let x, u ∈ R^n. By the mean value theorem,

∇f(x + u) − ∇f(x) ∈ co̅(∂²f([x, x + u])u),

and so

⟨∇f(x + u) − ∇f(x), u⟩ ∈ co̅(⟨∂²f([x, x + u])u, u⟩).

Thus, there exist z ∈ [x, x + u] and M ∈ co̅(∂²f(z)) such that

⟨∇f(x + u) − ∇f(x), u⟩ = ⟨Mu, u⟩.

It follows by the assumption that

⟨∇f(x + u) − ∇f(x), u⟩ ≥ 0.
Since x, u ∈ ℝⁿ are arbitrary, we get that ∇f is monotone in the sense that, for each x, u ∈ ℝⁿ, ⟨∇f(x + u) − ∇f(x), u⟩ ≥ 0.
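This gradient monotonicity can also be observed numerically on a concrete instance. A minimal sketch (not from the paper), assuming the hypothetical C^{1,1} function f(x, y) = ½ max(x, 0)² + ½ y², which fails to be twice differentiable on the line x = 0 and whose limiting Hessians there are diag(0, 1) and diag(1, 1):

```python
import numpy as np

# Hypothetical example (not from the paper): f(x, y) = 0.5*max(x,0)**2 + 0.5*y**2
# is C^{1,1}; it is not twice differentiable on the line x = 0.
def grad_f(p):
    x, y = p
    return np.array([max(x, 0.0), y])

# Limiting Hessians at a kink point (x = 0); their convex hull is
# {diag(t, 1) : t in [0, 1]}.
limiting_hessians = [np.diag([0.0, 1.0]), np.diag([1.0, 1.0])]
psd = all(np.linalg.eigvalsh(M).min() >= -1e-12 for M in limiting_hessians)

# Gradient monotonicity: <grad f(b) - grad f(a), b - a> >= 0 on random pairs.
rng = np.random.default_rng(0)
monotone = all(
    np.dot(grad_f(b) - grad_f(a), b - a) >= -1e-12
    for a, b in ((rng.normal(size=2), rng.normal(size=2)) for _ in range(1000))
)
print(psd, monotone)  # True True
```

Checking the two extreme matrices suffices, since any convex combination of positive semidefinite matrices is again positive semidefinite; thus the hypothesis of Theorem 7.7 holds for this f, and f is convex.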
The conclusion now follows from the standard result of convex analysis that f is convex if and only if ∇f is monotone.

Corollary 7.8. Let f : ℝⁿ → ℝ be C^{1,1}. Then f is convex if and only if, for each x ∈ ℝⁿ, the matrices M ∈ ∂²_H f(x) are positive semidefinite.

Proof. Since f is C^{1,1}, for each x ∈ ℝⁿ, ∂²_H f(x) is an approximate Hessian of f at x. Hence, it follows from Theorem 7.7 that f is convex. Conversely, assume that f is convex, and let Δ be the set of points of ℝⁿ at which f is twice differentiable. Then each matrix M of {lim ∇²f(xₙ) : {xₙ} ⊂ Δ, xₙ → x} is positive semidefinite, as it is a limit of a sequence of positive semidefinite matrices. Hence, each matrix M of

∂²_H f(x) = co{lim ∇²f(xₙ) : {xₙ} ⊂ Δ, xₙ → x}

is also positive semidefinite.

Acknowledgments. The authors are grateful to the referees for their detailed comments and valuable suggestions, which have contributed to the final preparation of the paper. The first author is grateful to Professor Jonathan Borwein for his helpful comments on an earlier version of the paper and for several useful references. The second author wishes to thank the first author for his kind invitation and hospitality.

REFERENCES

[1] F. H. Clarke, Necessary Conditions for Problems in Optimal Control and Calculus of Variations, Ph.D. thesis, University of Washington, Seattle.
[2] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, New York.
[3] R. Cominetti and R. Correa, A generalized second-order derivative in nonsmooth optimization, SIAM J. Control Optim., 28 (1990).
[4] V. F. Demyanov and V. Jeyakumar, Hunting for a smaller convex subdifferential, J. Global Optim., 10 (1997).
[5] V. F. Demyanov and A. M. Rubinov, Constructive Nonsmooth Analysis, Verlag Peter Lang, Frankfurt am Main.
[6] J.-B. Hiriart-Urruty, Mean value theorems for vector valued mappings in nonsmooth optimization, Numer. Funct. Anal. Optim., 2 (1980).
[7] J.-B. Hiriart-Urruty, J.-J. Strodiot, and V. Hien Nguyen, Generalized Hessian matrix and second-order optimality conditions for problems with C^{1,1} data, Appl. Math. Optim., 11 (1984).
[8] A. D. Ioffe, Nonsmooth analysis: differential calculus of nondifferentiable mappings, Trans. Amer. Math. Soc., 266 (1981).
[9] A. D. Ioffe, Approximate subdifferentials and applications I: The finite dimensional theory, Trans. Amer. Math. Soc., 281 (1984).
[10] A. D. Ioffe and J.-P. Penot, Limiting subhessians, limiting subjets and their calculus, Trans. Amer. Math. Soc., 349 (1997).
[11] V. Jeyakumar, On optimality conditions in nonsmooth inequality constrained minimization, Numer. Funct. Anal. Optim., 9 (1987).
[12] V. Jeyakumar, Composite nonsmooth programming with Gâteaux differentiability, SIAM J. Optim., 1 (1991).
[13] V. Jeyakumar and D. T. Luc, Nonsmooth Calculus, Minimality and Monotonicity of Convexificators, Applied Mathematics Research Report AMR96/29, University of New South Wales, Australia, 1996, submitted.
[14] V. Jeyakumar and X. Q. Yang, Convex composite multi-objective nonsmooth programming, Math. Programming, 59 (1993).
[15] V. Jeyakumar and X. Q. Yang, Convex composite minimization with C^{1,1} functions, J. Optim. Theory Appl., 86 (1995).
[16] V. Jeyakumar and X. Q. Yang, Approximate generalized Hessians and Taylor's expansions for continuously Gâteaux differentiable functions, Nonlinear Anal., 1998, to appear; also Applied Mathematics Research Report AMR96/20, University of New South Wales, Australia.
[17] D. T. Luc, Taylor's formula for C^{k,1} functions, SIAM J. Optim., 5 (1995).
[18] D. T. Luc and S. Schaible, On generalized monotone nonsmooth maps, J. Convex Anal., 3 (1996).
[19] D. T. Luc and S. Swaminathan, A characterization of convex functions, Nonlinear Anal., 20 (1993).
[20] P. Michel and J.-P. Penot, A generalized derivative for calm and stable functions, Differential Integral Equations, 5 (1992).
[21] B. S. Mordukhovich, Metric approximations and necessary optimality conditions for general classes of nonsmooth extremal problems, Soviet Math. Dokl., 22 (1980).
[22] B. S. Mordukhovich, Generalized differential calculus for nonsmooth and set-valued mappings, J. Math. Anal. Appl., 183 (1994).
[23] B. S. Mordukhovich and Y. Shao, On nonconvex subdifferential calculus in Banach spaces, J. Convex Anal., 2 (1995).
[24] B. S. Mordukhovich and Y. Shao, Nonsmooth sequential analysis in Asplund spaces, Trans. Amer. Math. Soc., 348 (1996).
[25] Z. Páles and V. Zeidan, Generalized Hessian for C^{1,1} functions in infinite dimensional normed spaces, Math. Programming, 74 (1996).
[26] J. S. Pang and L. Qi, Nonsmooth equations: motivation and algorithms, SIAM J. Optim., 3 (1993).
[27] R. T. Rockafellar, Generalized directional derivatives and subgradients of nonconvex functions, Canad. J. Math., 32 (1980).
[28] R. T. Rockafellar, Second-order optimality conditions in nonlinear programming obtained by way of epi-derivatives, Math. Oper. Res., 14 (1989).
[29] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer-Verlag, Berlin, New York, 1998, to appear.
[30] M. Studniarski and V. Jeyakumar, A generalized mean-value theorem and optimality conditions in composite nonsmooth minimization, Nonlinear Anal., 24 (1995).
[31] L. Thibault, On generalized differentials and subdifferentials of Lipschitz vector-valued functions, Nonlinear Anal., 6 (1982).
[32] J. Warga, Derivative containers, inverse functions and controllability, in Calculus of Variations and Control Theory, D. L. Russell, ed., Academic Press, New York.
[33] J. Warga, Fat homeomorphisms and unbounded derivative containers, J. Math. Anal. Appl., 81 (1981).
[34] X. Q. Yang, Generalized Second-Order Directional Derivatives and Optimality Conditions, Ph.D. thesis, University of New South Wales, Australia.
[35] X. Q. Yang and V. Jeyakumar, Generalized second-order directional derivatives and optimization with C^{1,1} functions, Optimization, 26 (1992).
SIAM J CONTROL AND OPTIMIZATION Vol 29, No 2, pp 493-497, March 1991 ()1991 Society for Industrial and Applied Mathematics 014 CALMNESS AND EXACT PENALIZATION* J V BURKE$ Abstract The notion of calmness,
More informationMetric regularity and systems of generalized equations
Metric regularity and systems of generalized equations Andrei V. Dmitruk a, Alexander Y. Kruger b, a Central Economics & Mathematics Institute, RAS, Nakhimovskii prospekt 47, Moscow 117418, Russia b School
More informationContinuity. Chapter 4
Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of
More informationCompactly epi-lipschitzian Convex Sets and Functions in Normed Spaces
Journal of Convex Analysis Volume 7 (2000), No. 2, 375 393 Compactly epi-lipschitzian Convex Sets and Functions in Normed Spaces Jonathan Borwein CECM, Department of Mathematics and Statistics, Simon Fraser
More informationOn an iterative algorithm for variational inequalities in. Banach space
MATHEMATICAL COMMUNICATIONS 95 Math. Commun. 16(2011), 95 104. On an iterative algorithm for variational inequalities in Banach spaces Yonghong Yao 1, Muhammad Aslam Noor 2,, Khalida Inayat Noor 3 and
More information