Journal of Applied Analysis, Vol. 6, No. 1 (2000), pp. 139-148

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A. W. A. TAHA

Received June 18, 1999 and, in revised form, December 27, 1999

Abstract. We present a new version of first order necessary optimality conditions for a static minmax problem with inequality constraints in the parametric constraint case. These conditions, after some modification, turn out to characterize strict local minimizers of order one for the given problem.

1991 Mathematics Subject Classification. 49K35, 49J52.
Key words and phrases. Minmax problem, optimality conditions, Clarke's subdifferential.
ISSN 1425-6908. © Heldermann Verlag.

1. Introduction

The concept of a strict local minimizer of order $m$ was considered by Cromme, under the name "strongly unique minimizer", in a study of iterative numerical methods (see [2]). Such minimizers play an important role in stability studies (see, e.g., [5], [8]). Some results concerning characterizations of such minimizers for standard nonlinear programming problems with both inequality and equality constraints have been obtained, for $m = 1$ or $m = 2$, in [7], [9], [12]. These results were derived in the presence of constraint qualifications, leading to statements in which there is no gap between the necessary and sufficient conditions.

Consider the following nonlinear programming problem:

$$\min\,\{f(x) : x \in S\}, \tag{1.1}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ and $S$ is a nonempty subset of $\mathbb{R}^n$ defined by

$$S := \{x \in \mathbb{R}^n : g_i(x) \le 0,\ i \in I\}, \tag{1.2}$$

with $I := \{1, \ldots, p\}$ and $g_i : \mathbb{R}^n \to \mathbb{R}$ ($i \in I$).

A special problem of the form (1.1)-(1.2) is the static minmax problem in the parametric constraint case, in which the objective function $f$ is given by

$$f(x) := \sup_{y \in Z(x)} \varphi(x, y),$$

where $\varphi : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ and

$$Z(x) := \{y \in \mathbb{R}^m : d(x, y) \le 0,\ w(x, y) = 0\},$$

for some $d : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^k$, $w : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^l$. In particular, if $Z(x) \ne \emptyset$ and the supremum above is attained at some point, we can write $f$ as

$$f(x) := \max_{y \in Z(x)} \varphi(x, y). \tag{1.3}$$

We can formulate problem (1.1)-(1.3) as

$$\min_{x \in S}\ \max_{y \in Z(x)} \varphi(x, y),$$

a typical two-stage minmax problem, in which the maximizing player $y \in Z(x)$ acts after the minimizing player $x \in S$ and with full knowledge of the choice of the minimizing player. Such problems arise in operations research (see [3], [4], [6]).

In problem (1.1)-(1.3), if the constraint set $Z(x)$ does not depend on $x$ (i.e., if $Z(x) = Y$ for all $x \in S$ and some set $Y$), then the problem becomes much simpler and is called the static minmax problem in the nonparametric constraint case. Recently, a characterization of strict local minimizers of order one for a nonsmooth static minmax problem with inequality constraints, in the nonparametric constraint case, has been investigated by Studniarski and the author in [10]. In this paper, we apply the ideas of [10] to derive a similar characterization for the more general problem (1.1)-(1.3) (i.e., in the parametric constraint case). However, this will require imposing differentiability assumptions on the maximization problem

$$\max\,\{\varphi(x, y) : y \in Z(x)\}, \tag{1.4}$$

which are stated below.
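To fix ideas, here is a minimal illustrative instance (ours, added for exposition; it does not appear in the original paper). Take $n = m = 1$, $k = 2$, $l = 0$, $p = 1$ and

$$\varphi(x, y) := xy, \qquad d(x, y) := (-y,\; y - 1 - x), \qquad g_1(x) := -x,$$

so that $Z(x) = [0, 1 + x]$ for $x > -1$ and $S = [0, \infty)$. A direct computation gives, near $x = 0$,

$$f(x) = \max_{y \in [0, 1+x]} xy = \max\{x(1+x),\, 0\},$$

with maximizer $y^* = 1 + x$ for $x > 0$, $y^* = 0$ for $x < 0$, and every $y \in [0, 1]$ maximal at $x = 0$. Thus $f$ is nonsmooth at $0$ even though $\varphi$ and $d$ are smooth, which is why the Clarke subdifferential enters below. We return to this instance after Theorems 1, 4 and 6.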

For problem (1.4), we define the Lagrangian

$$L(x, y, u, v) := \varphi(x, y) - u^T d(x, y) - v^T w(x, y),$$

and we denote by $P(x)$ and $K(x, y^*)$, respectively, the set of maximal solutions and the set of Kuhn-Tucker vectors associated with $y^* \in P(x)$:

$$P(x) := \{y^* \in Z(x) : \varphi(x, y^*) = f(x)\},$$

$$K(x, y^*) := \{(u^*, v^*) \in \mathbb{R}^k \times \mathbb{R}^l : \nabla_y L(x, y^*, u^*, v^*) = 0,\ (u^*)^T d(x, y^*) = 0,\ d(x, y^*) \le 0,\ w(x, y^*) = 0,\ u^* \ge 0\}.$$

We also define the index set of active inequality constraints

$$I(x, y) := \{i \in \{1, \ldots, k\} : d_i(x, y) = 0\}.$$

For a given point $x^0 \in \mathbb{R}^n$, let us assume that:

(a_1) $\varphi$, $d$, $w$ are continuously differentiable on $\mathbb{R}^n \times \mathbb{R}^m$;

(a_2) $Z(x^0)$ is not empty, and $Z$ is uniformly compact at $x^0$ (see [6, p. 17]);

(a_3) for every $y^* \in P(x^0)$, the gradients $\nabla_y d_i(x^0, y^*)$, $i \in I(x^0, y^*)$, and $\nabla_y w_i(x^0, y^*)$, $i = 1, \ldots, l$, are linearly independent.

Let us note that conditions (a_1)-(a_2) ensure that, for each $x$ in some neighborhood of $x^0$, the maximum in (1.4) is attained at some point $y \in Z(x)$, and condition (a_3) implies that the Kuhn-Tucker vector set $K(x^0, y^*)$ is a singleton for each $y^* \in P(x^0)$ (see [6, Proposition 6.5.1]).

Before stating the following theorem, we will need some notation and definitions which can be found in [1, Chapter 2]. For a locally Lipschitzian function $f : \mathbb{R}^n \to \mathbb{R}$, we denote by $\partial f(x)$ the generalized gradient of $f$ at $x$. We say that $f$ is regular (or subdifferentially regular) at $x$ if the usual one-sided directional derivative $f'(x; d)$ exists for all $d$ and is equal to the generalized directional derivative $f^{\circ}(x; d)$. Below, the symbol $\mathrm{co}\, A$ denotes the convex hull of the set $A$, and the dot between two vectors denotes the usual inner product in $\mathbb{R}^n$.

Theorem 1 ([6, Proposition 9.2.1]). Under assumptions (a_1)-(a_3), $f$ is locally Lipschitzian at $x^0$ and directionally differentiable at $x^0$, and we have

$$f^{\circ}(x^0; d) = f'(x^0; d) = \max_{y^* \in P(x^0)} \nabla_x L(x^0, y^*, u^*, v^*) \cdot d, \quad d \in \mathbb{R}^n, \tag{1.5}$$

$$\partial f(x^0) = \mathrm{co}\,\{\nabla_x L(x^0, y^*, u^*, v^*) : y^* \in P(x^0)\}, \tag{1.6}$$

where $(u^*, v^*)$ is the unique element of $K(x^0, y^*)$.
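For the illustrative instance above, Theorem 1 can be verified by hand at $x^0 = 0$ (an elementary computation of ours, not taken from the paper). Since $\varphi(0, y) \equiv 0 = f(0)$, we have $P(0) = Z(0) = [0, 1]$. Assumption (a_3) holds: at $y^* = 0$ only $d_1 = -y$ is active, with $\nabla_y d_1 = -1$; at $y^* = 1$ only $d_2 = y - 1 - x$, with $\nabla_y d_2 = 1$. Complementary slackness together with $\nabla_y L = 0$ forces $u^* = (0, 0)$ for every $y^* \in P(0)$, hence $\nabla_x L(0, y^*, u^*) = y^*$ and

$$\partial f(0) = \mathrm{co}\,\{y^* : y^* \in [0, 1]\} = [0, 1], \qquad f'(0; d) = \max_{y^* \in [0, 1]} y^* d = \max\{d, 0\},$$

in agreement with $f(x) = \max\{x(1+x), 0\}$.

Formula (1.5) can also be checked numerically. The following minimal sketch (our code, with the instance hard-wired; it assumes SciPy is available) computes $f$ by a bounded scalar maximization and compares difference quotients with $\nabla_x L$:

```python
from scipy.optimize import minimize_scalar

def phi(x, y):
    # inner objective of the toy instance (an illustration, not from the paper)
    return x * y

def f(x):
    # f(x) = max_{y in Z(x)} phi(x, y) with Z(x) = [0, 1 + x], computed numerically
    res = minimize_scalar(lambda y: -phi(x, y), bounds=(0.0, 1.0 + x),
                          method="bounded", options={"xatol": 1e-12})
    return -res.fun

h = 1e-4

# Smooth point x0 = 0.5: unique maximizer y* = 1.5, constraint d2 = y - 1 - x is
# active with multiplier u2 = x0, so (1.5) predicts f'(x0) = y* + u2 = 1 + 2*x0 = 2.
print((f(0.5 + h) - f(0.5 - h)) / (2 * h))   # ~2.0 (central difference)

# Kink x0 = 0: P(0) = [0, 1], and (1.5) gives f'(0; d) = max_{y in [0,1]} y*d = max(d, 0).
for d in (1.0, -1.0):
    print(d, (f(h * d) - f(0.0)) / h)        # ~1.0 for d = 1, ~0.0 for d = -1
```

At the smooth point the multiplier term in $\nabla_x L$ is essential: the naive guess $f'(x^0) = \partial \varphi / \partial x = y^*$ would miss the contribution $u_2^* = x^0$ coming from the moving constraint $y \le 1 + x$.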

We consider problem (1.1)-(1.3) in which the objective function (1.3) satisfies assumptions (a_1)-(a_3), while the functions defining the inequality constraints ($g_i$, $i \in I$) are locally Lipschitzian and regular in Clarke's sense. Section 2 is devoted to necessary optimality conditions which are satisfied by all local minimizers (not necessarily strict) for problem (1.1)-(1.3). These conditions include a restriction on the number of nonzero multipliers. In Section 3, we show that the previous necessary conditions can be modified so as to obtain a characterization of strict local minimizers of order one.

We conclude this section with a compilation of some notation and definitions that will be useful in the sequel. For $x^0 \in \mathbb{R}^n$ and $\delta > 0$, we denote $B(x^0, \delta) := \{x \in \mathbb{R}^n : \|x - x^0\| \le \delta\}$. We say that $x^0 \in S$ is a local minimizer for problem (1.1) if there exists $\varepsilon > 0$ such that $f(x) \ge f(x^0)$ for all $x \in S \cap B(x^0, \varepsilon)$. Let $m \ge 1$ be an integer. We say that $x^0 \in S$ is a strict local minimizer of order $m$ for problem (1.1) if there exist $\varepsilon > 0$, $\beta > 0$ such that

$$f(x) \ge f(x^0) + \beta \|x - x^0\|^m \quad \text{for all } x \in S \cap B(x^0, \varepsilon).$$

Throughout the paper, we will use the following notation for a given $x \in \mathbb{R}^n$:

$$I(x) := \{i \in I : g_i(x) = 0\}, \qquad I_0(x) := \{0\} \cup I(x).$$

2. First order necessary optimality conditions

It was shown in [6, Theorem 9.2.2] that Theorem 1 can be used to derive first order necessary optimality conditions for the static minmax problem (1.1)-(1.3) in which the functions $\varphi$, $d$, $w$ and $g_i$ are continuously differentiable. These conditions do not include a restriction on the number of nonzero multipliers. Therefore, in this section, we present a detailed proof of first order necessary optimality conditions for problem (1.1)-(1.3), in which the functions $\varphi$, $d$, $w$ are continuously differentiable and $g_i$, $i \in I$, are locally Lipschitzian, with appropriate modifications which allow us to obtain the mentioned restriction.

Let $x^0 \in S$. Consider the following unconstrained optimization problem:

$$\min\,\{h(x) : x \in \mathbb{R}^n\}, \tag{2.1}$$

where $h : \mathbb{R}^n \to \mathbb{R}$ is defined by

$$h(x) := \max\,\{f(x) - f(x^0),\ g_i(x),\ i \in I\}. \tag{2.2}$$
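In the illustrative instance, with $x^0 = 0$ (a feasible point, $g_1(0) = 0$), the function (2.2) becomes, near $0$,

$$h(x) = \max\{\max\{x(1+x),\, 0\},\ -x\} = \begin{cases} -x, & x \in (-1, 0], \\ x(1+x), & x \ge 0, \end{cases}$$

so $h(x) \ge |x|$ near $0$ and $x^0$ is an unconstrained strict local minimizer of order one for $h$.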

Observe that $h(x^0) = 0$. The relationship between problems (1.1)-(1.2) and (2.1)-(2.2) is given in the following lemma.

Lemma 2. If $x^0$ is a local minimizer for (1.1)-(1.2), then $x^0$ is also a local minimizer for (2.1)-(2.2).

Proof. The proof is elementary (see, e.g., [11, Theorem 2.1]).

The following lemma will be used in the proof of Theorem 4 below.

Lemma 3. For any subsets $A$, $B$ of $\mathbb{R}^n$, we have

$$\mathrm{co}((\mathrm{co}\, A) \cup B) = \mathrm{co}(A \cup B).$$

Proof. See [10, Lemma 2].

The following result is a generalization of [6, Theorem 9.2.2].

Theorem 4. Let $x^0$ be a local minimizer for (1.1)-(1.3). Suppose that assumptions (a_1)-(a_3) are satisfied and the functions $g_j$, $j \in I$, are locally Lipschitzian and regular. Then there exist a positive integer $q$, vectors $y_i^* \in P(x^0)$ together with scalars $\lambda_i \ge 0$, $i = 1, \ldots, q$, and scalars $\mu_j \ge 0$, $j \in I$, such that

$$0 \in \sum_{i=1}^{q} \lambda_i \nabla_x L(x^0, y_i^*, u_i^*, v_i^*) + \sum_{j \in I} \mu_j\, \partial g_j(x^0), \tag{2.3}$$

where $(u_i^*, v_i^*)$ is the unique element of $K(x^0, y_i^*)$, $i = 1, \ldots, q$, and

$$\mu_j g_j(x^0) = 0, \quad j \in I. \tag{2.4}$$

Furthermore, if $\alpha$ is the number of nonzero $\lambda_i$ and $\beta$ is the number of nonzero $\mu_j$, then

$$1 \le \alpha + \beta \le n + 1. \tag{2.5}$$

Proof. Let $x^0$ be a local minimizer for (1.1)-(1.3); then $x^0$ is a local minimizer for (2.1)-(2.2) by Lemma 2. Define $g_0(x) := f(x) - f(x^0)$. We have $h(x) = \max\{g_i(x) : i \in I_0(x^0)\}$ for all $x$ in some neighborhood of $x^0$, and $h(x^0) = 0 = g_0(x^0)$. By assumptions (a_1)-(a_3) and Theorem 1, $f$ is locally Lipschitzian and regular at $x^0$, so that $g_0$ has the same properties. Since $g_j$, $j \in I$, are locally Lipschitzian and regular at $x^0$ by hypothesis, then using [1, Propositions 2.3.2 and 2.3.12], we obtain

$$0 \in \partial h(x^0) = \mathrm{co} \bigcup_{j \in I_0(x^0)} \partial g_j(x^0) = \mathrm{co}\Big(\partial f(x^0) \cup \bigcup_{j \in I(x^0)} \partial g_j(x^0)\Big).$$

Now, applying formula (1.6) to $\partial f(x^0)$, we deduce

$$0 \in \mathrm{co}\Big(\mathrm{co}\,\{\nabla_x L(x^0, y^*, u^*, v^*) : y^* \in P(x^0)\} \cup \bigcup_{j \in I(x^0)} \partial g_j(x^0)\Big).$$

By Lemma 3, this is equivalent to

$$0 \in \mathrm{co}\Big(\{\nabla_x L(x^0, y^*, u^*, v^*) : y^* \in P(x^0)\} \cup \bigcup_{j \in I(x^0)} \partial g_j(x^0)\Big).$$

Hence, by Caratheodory's theorem, there exist scalars $\lambda_i > 0$, $i = 1, \ldots, \alpha$, and $\eta_s > 0$, $s = 1, \ldots, r$, such that

$$1 \le \alpha + r \le n + 1 \tag{2.6}$$

and

$$0 = \sum_{i=1}^{\alpha} \lambda_i \nabla_x L(x^0, y_i^*, u_i^*, v_i^*) + \sum_{s=1}^{r} \eta_s w_s \quad \text{for some } y_i^* \in P(x^0),\ w_s \in \bigcup_{j \in I(x^0)} \partial g_j(x^0). \tag{2.7}$$

(Here we allow the case $\alpha = 0$, in which the set of multipliers $\lambda_i$ is empty; similarly, if $r = 0$, then there are no multipliers $\eta_s$.)

For each $j \in I(x^0)$, we define

$$J(j) := \Big\{s \in \{1, \ldots, r\} : w_s \in \partial g_j(x^0) \setminus \bigcup_{t \in I(x^0),\, t < j} \partial g_t(x^0)\Big\}$$

and

$$\zeta_j := \begin{cases} \Big(\sum_{s \in J(j)} \eta_s w_s\Big) \Big/ \Big(\sum_{s \in J(j)} \eta_s\Big), & \text{if } J(j) \ne \emptyset, \\ \text{an arbitrary element of } \partial g_j(x^0), & \text{if } J(j) = \emptyset, \end{cases} \qquad \mu_j := \begin{cases} \sum_{s \in J(j)} \eta_s, & \text{if } J(j) \ne \emptyset, \\ 0, & \text{if } J(j) = \emptyset. \end{cases}$$

Then $\mu_j \ge 0$ and $\zeta_j \in \partial g_j(x^0)$ (by the convexity of $\partial g_j(x^0)$) for all $j \in I(x^0)$. Moreover, condition (2.7) implies that

$$0 = \sum_{i=1}^{\alpha} \lambda_i \nabla_x L(x^0, y_i^*, u_i^*, v_i^*) + \sum_{j \in I(x^0)} \mu_j \zeta_j.$$

To obtain (2.3) and (2.4), let $\mu_j = 0$ for $j \in I \setminus I(x^0)$. Also, if $\alpha > 0$, then we can take $q = \alpha$, and all $\lambda_i$ are nonzero. However, if $\alpha = 0$, then let $q = 1$, $\lambda_1 = 0$, and $y_1^*$ an arbitrary element of $P(x^0)$.

Let $\beta$ be the number of nonzero $\mu_j$. Then it follows from our construction that $\beta$ is not greater than the number $r$ of all the $\eta_s$. Moreover, if $r > 0$, then $\beta > 0$. Hence, condition (2.6) implies (2.5).
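In the illustrative instance, the conclusions of Theorem 4 can be checked at the local minimizer $x^0 = 0$ (again a computation of ours). Take $q = 1$ and $y_1^* = 1 \in P(0)$, for which $\nabla_x L(0, y_1^*, u_1^*) = 1$, and note that $g_1$ is active at $0$ with $\partial g_1(0) = \{-1\}$. Then, with $\lambda_1 = \mu_1 = 1$,

$$0 = 1 \cdot \nabla_x L(0, y_1^*, u_1^*) + 1 \cdot (-1), \qquad \mu_1 g_1(0) = 0, \qquad 1 \le \alpha + \beta = 2 \le n + 1 = 2,$$

so conditions (2.3)-(2.5) hold.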

3. Characterizations of strict local minimizers of order one

We begin by reviewing some results for the standard nonlinear programming problem (1.1)-(1.2). Throughout this section, we assume that the following constraint qualification is satisfied at $x^0$:

(a_4) For each $\eta \in \mathbb{R}^p$ satisfying the conditions

$$\eta_i = 0,\ i \in I \setminus I(x^0); \qquad \eta_i \ge 0,\ i \in I(x^0),$$

the following implication holds:

$$z_i \in \partial g_i(x^0)\ (i \in I),\quad \sum_{i \in I} \eta_i z_i = 0 \ \Longrightarrow\ \eta = 0.$$

Theorem 5 ([10, Theorem 7]). Consider problem (1.1)-(1.2), where the functions $f$ and $g_i$, $i \in I$, are locally Lipschitzian and possess one-sided directional derivatives at $x^0 \in S$. Suppose that assumption (a_4) holds. Then $x^0$ is a strict local minimizer of order one for (1.1)-(1.2) if and only if

$$f'(x^0; d) > 0, \quad d \in C(x^0) \setminus \{0\},$$

where

$$C(x^0) := \{d \in \mathbb{R}^n : g_i^{\circ}(x^0; d) \le 0,\ i \in I(x^0)\}. \tag{3.1}$$

We now formulate the main result of this section.

Theorem 6. Consider problem (1.1)-(1.3). Suppose that assumptions (a_1)-(a_4) are satisfied and the functions $g_j$, $j \in I$, are locally Lipschitzian and regular. Then $x^0$ is a strict local minimizer of order one for (1.1)-(1.3) if and only if the following conditions hold:

(a) $C(x^0) \cap \{d \in \mathbb{R}^n \setminus \{0\} : \max_{y^* \in P(x^0)} \nabla_x L(x^0, y^*, u^*, v^*) \cdot d = 0\} = \emptyset$, where $(u^*, v^*)$ is the unique element of $K(x^0, y^*)$;

(b) there exist a positive integer $\alpha$, vectors $y_i^* \in P(x^0)$ together with scalars $\lambda_i > 0$, $i = 1, \ldots, \alpha$, and scalars $\mu_j \ge 0$, $j \in I$, such that

$$0 \in \sum_{i=1}^{\alpha} \lambda_i \nabla_x L(x^0, y_i^*, u_i^*, v_i^*) + \sum_{j \in I} \mu_j\, \partial g_j(x^0), \tag{3.2}$$

$$\mu_j g_j(x^0) = 0, \quad j \in I, \tag{3.3}$$

where $(u_i^*, v_i^*)$ is the unique element of $K(x^0, y_i^*)$, $i = 1, \ldots, \alpha$, and

$$1 \le \alpha + \beta \le n + 1, \quad \text{where } \beta \text{ is the number of nonzero } \mu_j. \tag{3.4}$$

Proof. (i) Necessity. Suppose that $x^0$ is a strict local minimizer of order one for problem (1.1)-(1.3). Then it is also a local minimizer for (1.1)-(1.3); therefore, Theorem 4 implies that there exist a positive integer $q$, vectors $y_i^* \in P(x^0)$ together with scalars $\lambda_i \ge 0$, $i = 1, \ldots, q$, and scalars $\mu_j \ge 0$, $j \in I$, such that conditions (2.3)-(2.5) hold. Suppose that all $\lambda_i$ are zero; then condition (2.3) takes on the form

$$0 \in \sum_{j \in I} \mu_j\, \partial g_j(x^0).$$

Then it follows from assumption (a_4) and equalities (2.4) that all $\mu_j$ are zero, a contradiction with the left-hand inequality in (2.5). Therefore, at least one $\lambda_i$ must be nonzero, which means that condition (b) holds. Condition (a) follows from Theorem 5 and formula (1.5).

(ii) Sufficiency. By Theorem 5 and formula (1.5), it suffices to show that

$$f'(x^0; d) = \max_{y^* \in P(x^0)} \nabla_x L(x^0, y^*, u^*, v^*) \cdot d > 0 \tag{3.5}$$

for all $d \in C(x^0) \setminus \{0\}$. Define $\nu_i := \lambda_i / \lambda$, $i = 1, \ldots, \alpha$, where $\lambda := \sum_{i=1}^{\alpha} \lambda_i > 0$. Then, using condition (3.2) and the equality $\sum_{i=1}^{\alpha} \nu_i = 1$, we deduce

$$0 \in \lambda \sum_{i=1}^{\alpha} \nu_i \nabla_x L(x^0, y_i^*, u_i^*, v_i^*) + \sum_{j \in I} \mu_j\, \partial g_j(x^0) \subset \lambda\, \mathrm{co}\,\{\nabla_x L(x^0, y^*, u^*, v^*) : y^* \in P(x^0)\} + \sum_{j \in I} \mu_j\, \partial g_j(x^0) = \lambda\, \partial f(x^0) + \sum_{j \in I} \mu_j\, \partial g_j(x^0),$$

where the last equality follows from (1.6). By (3.3), we have $\mu_j = 0$ for all $j \in I \setminus I(x^0)$. Therefore,

$$0 \in \lambda\, \partial f(x^0) + \sum_{j \in I(x^0)} \mu_j\, \partial g_j(x^0).$$

Since $f$ and $g_j$, $j \in I$, are regular at $x^0$ by formula (1.5) and the hypotheses, we may apply [1, Corollary 3, p. 40] to obtain

$$0 \in \partial\Big(\lambda f + \sum_{j \in I(x^0)} \mu_j g_j\Big)(x^0). \tag{3.6}$$

Now, take any $d \in C(x^0) \setminus \{0\}$. Our assumptions on $f$ and $g_j$ imply that the function $\psi := \lambda f + \sum_{j \in I(x^0)} \mu_j g_j$ is regular at $x^0$ (see [1, Proposition 2.3.6(c)]). Hence, from (3.6) and the definition of the generalized gradient, it follows that

$$0 \le \psi^{\circ}(x^0; d) = \psi'(x^0; d) = \lambda f'(x^0; d) + \sum_{j \in I(x^0)} \mu_j g_j'(x^0; d) = \lambda \max_{y^* \in P(x^0)} \nabla_x L(x^0, y^*, u^*, v^*) \cdot d + \sum_{j \in I(x^0)} \mu_j g_j'(x^0; d) \le \lambda \max_{y^* \in P(x^0)} \nabla_x L(x^0, y^*, u^*, v^*) \cdot d, \tag{3.7}$$

where the last inequality is a consequence of (3.1). Now, the desired inequality (3.5) follows from (3.7) and (a).
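To see Theorem 6 at work, return once more to the illustrative instance (our example, with $x^0 = 0$). Assumption (a_4) holds, since $z_1 \in \partial g_1(0) = \{-1\}$ and $\eta_1 z_1 = 0$ force $\eta_1 = 0$. The cone (3.1) is $C(0) = \{d : -d \le 0\} = [0, \infty)$. Condition (a): for any $d > 0$ we have $\max_{y^* \in P(0)} \nabla_x L(0, y^*, u^*) \cdot d = \max_{y^* \in [0, 1]} y^* d = d > 0$, so the intersection in (a) is empty. Condition (b) was verified after Theorem 4, with $\alpha = \beta = 1$. Theorem 6 therefore certifies that $x^0 = 0$ is a strict local minimizer of order one, in agreement with the direct estimate $f(x) = x(1+x) \ge |x - x^0|$ for all $x \in S \cap B(0, \varepsilon)$.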

Acknowledgment. The author is extremely grateful to his supervisor, Professor Marcin Studniarski, for his valuable comments and continuing encouragement.

References

[1] Clarke, F. H., Optimization and Nonsmooth Analysis, Wiley-Interscience, New York, 1983.
[2] Cromme, L., Strong uniqueness: a far-reaching criterion for the convergence of iterative procedures, Numer. Math. 29 (1978), 179-193.
[3] Danskin, J. M., The theory of max-min with applications, SIAM J. Appl. Math. 14(4) (1966), 641-664.
[4] Danskin, J. M., The Theory of Max-Min and Its Application to Weapons Allocation Problems, Springer-Verlag, Berlin, 1967.
[5] Klatte, D., Stable local minimizers in semi-infinite optimization: regularity and second-order conditions, J. Comput. Appl. Math. 56 (1994), 137-157.
[6] Shimizu, K., Ishizuka, Y. and Bard, J. F., Nondifferentiable and Two-Level Mathematical Programming, Kluwer Academic Publishers, Boston, 1997.
[7] Still, G. and Streng, M., Optimality conditions in smooth nonlinear programming, J. Optim. Theory Appl. 90 (1996), 483-515.
[8] Studniarski, M., Sufficient conditions for the stability of local minimum points in nonsmooth optimization, Optimization 20 (1989), 27-35.
[9] Studniarski, M., Characterizations of strict local minima for some nonlinear programming problems, Nonlinear Anal. 30 (1997), 5363-5367 (Proc. 2nd World Congress of Nonlinear Analysts).
[10] Studniarski, M. and Taha, A., A characterization of strict local minimizers of order one for nonsmooth static minmax problems, submitted for publication.
[11] Sutti, C., Monotone generalized differentiability in nonsmooth optimization, Riv. Mat. Sci. Econom. Social. 18 (1995), 83-89.
[12] Ward, D. E., Characterizations of strict local minima and necessary conditions for weak sharp minima, J. Optim. Theory Appl. 80 (1994), 551-571.

Abdul Whab A. Taha
Faculty of Mathematics
University of Łódź
S. Banacha 22
90-238 Łódź, Poland
e-mail: wahab@math.uni.lodz.pl