
I. Ginchev, A. Guerraggio, M. Rocca
On second order conditions in vector optimization
2002/32

UNIVERSITÀ DELL'INSUBRIA
FACOLTÀ DI ECONOMIA
http://eco.uninsubria.it

These Working Papers collect the work of the Faculty of Economics of the University of Insubria. The publication of contributions by other authors, who have a stable teaching or scientific relationship with the Faculty, can be proposed by a professor of the Faculty, provided that the contribution has been presented in public. The name of the proposer is reported in a footnote. The views expressed in the Working Papers reflect the opinions of the authors only, and not necessarily those of the Faculty of Economics of the University of Insubria.

© Copyright I. Ginchev, A. Guerraggio, M. Rocca
Printed in Italy in October 2002
Università degli Studi dell'Insubria
Via Ravasi 2, 21100 Varese, Italy

All rights reserved. No part of this paper may be reproduced in any form without permission of the authors.

On second-order conditions in vector optimization

Ivan Ginchev
Technical University of Varna, Department of Mathematics, 9010 Varna, Bulgaria
ginchev@ms3.tu-varna.acad.bg

Angelo Guerraggio
University of Insubria, Department of Economics, 21100 Varese, Italy
aguerraggio@eco.uninsubria.it

Matteo Rocca
University of Insubria, Department of Economics, 21100 Varese, Italy
mrocca@eco.uninsubria.it

Abstract. Starting from second-order conditions for $C^{1,1}$ scalar unconstrained optimization problems expressed in terms of the second-order Dini directional derivative, we pose the problem whether similar conditions can be derived for $C^{1,1}$ vector optimization problems. We define second-order Dini directional derivatives for vector functions and apply them to formulate such conditions as a Conjecture. The proof of the Conjecture in the case of a $C^{1,1}$ function (called the nonsmooth case) will be given in another paper; the present paper provides the background leading to its correct formulation. Using the Lagrange multiplier technique, we prove the Conjecture in the case of a twice Fréchet differentiable function (called the smooth case) and show by an example the effectiveness of the obtained conditions. Another example shows that in the nonsmooth case it is important to take into account the whole set of Lagrange multipliers, instead of dealing with a particular multiplier.

Key Words: Nonsmooth vector optimization, second-order conditions.
Math. Subject Classification: 90C29, 90C30, 49J52.

1 Introduction

The paper of Hiriart-Urruty, Strodiot, Nguyen [5] shows that many decision-making models in economics and engineering are given by optimization problems involving $C^{1,1}$ rather than $C^2$ functions. Thereafter an intensive study of various aspects of $C^{1,1}$ functions was undertaken (see for instance [6, 7, 8, 9, 10, 11]). Guerraggio, Luc [4] propose second-order conditions for multicriteria $C^{1,1}$ decision problems based on the second-order Clarke subdifferential, presented below as Theorem 2. Theorem 1 below states second-order optimality conditions which, for scalar functions, give better results than those of Guerraggio, Luc. In the preprint Ginchev, Guerraggio, Rocca [3] we attempted to generalize these conditions to vector optimization problems. In the present paper this result is formulated more precisely as a Conjecture. We define and use second-order Dini directional derivatives for $C^{1,1}$ vector functions. Applying the Lagrange multiplier technique, we prove the Conjecture in the smooth case, that is, the case of a twice Fréchet differentiable function. In fact, the smooth case and the given examples help in arriving at the correct formulation of the Conjecture. In particular, it becomes clear that in the nonsmooth case it is important to take into account the whole set of Lagrange multipliers instead of dealing with a particular multiplier. The discussion underlines, as peculiarities of vector optimization problems, their nonsmooth nature (even for smooth problems) and their behaviour like problems with constraints (even for unconstrained problems).

Lecture given at the French-German-Polish Conference on Optimization, Cottbus, Germany, September 9-13, 2002.

2 The scalar problem

We use the notation $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty\} \cup \{+\infty\}$ for the extended real line. Let $m$ be a positive integer and let $\varphi : \mathbb{R}^m \to \overline{\mathbb{R}}$ be a given function. Recall that the domain of $\varphi$ is the set $\operatorname{dom} \varphi = \{x \in \mathbb{R}^m \mid \varphi(x) \neq \pm\infty\}$. We discuss the problem of finding second-order characterizations of the local minima of $\varphi$, where in general $\varphi$ is not assumed to be smooth. We write the optimization problem as

\[ \varphi(x) \to \min, \quad x \in \mathbb{R}^m. \]

It is well known that optimality conditions in nonsmooth optimization can be based on various definitions of directional derivatives. Next we give the definitions of the first and second-order lower Dini directional derivatives (for brevity we say just Dini derivatives). Given $\varphi : \mathbb{R}^m \to \overline{\mathbb{R}}$, $x^0 \in \operatorname{dom} \varphi$ and $u \in \mathbb{R}^m$, we put for the first-order Dini derivative

\[ \varphi'_D(x^0, u) = \liminf_{t \to +0} \frac{1}{t} \bigl( \varphi(x^0 + tu) - \varphi(x^0) \bigr). \]

Note that the difference on the right-hand side is well defined, since due to $x^0 \in \operatorname{dom} \varphi$ only $\varphi(x^0 + tu)$ could eventually take infinite values (we must be careful with the infinities, since operations like $\infty - \infty$ are not allowed). In general $\varphi'_D(x^0, u) \in \overline{\mathbb{R}}$. The second-order Dini derivative $\varphi''_D(x^0, u)$ is defined only if the first-order derivative $\varphi'_D(x^0, u)$ takes a finite value, and then

\[ \varphi''_D(x^0, u) = \liminf_{t \to +0} \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t\, \varphi'_D(x^0, u) \bigr). \]

The expression in the parentheses makes sense, since only $\varphi(x^0 + tu)$ eventually takes an infinite value.

Denote by $S = \{x \in \mathbb{R}^m \mid \|x\| = 1\}$ the unit sphere in $\mathbb{R}^m$. Sometimes the directional derivatives in the optimality conditions are restricted to $u \in S$. Recall that $x^0 \in \mathbb{R}^m$ is said to be a local minimizer (we prefer to say simply minimizer) of $\varphi$ if for some neighbourhood $U$ of $x^0$ it holds $\varphi(x) \geq \varphi(x^0)$ for all $x \in U$. The minimizer is strict if this inequality is strict for $x \in U \setminus \{x^0\}$. It is said to be an isolated minimizer of order $k$ ($k$ a positive number) if there is a constant $A > 0$ such that $\varphi(x) \geq \varphi(x^0) + A \|x - x^0\|^k$ for $x \in U$. Obviously, each isolated minimizer is a strict minimizer.

We confine ourselves to the case where $\varphi$ is a $C^{1,1}$ function. A function $\varphi : \mathbb{R}^m \to \mathbb{R}$ is said to be of class $C^{1,1}$ on $\mathbb{R}^m$ if it is Fréchet differentiable at each point $x \in \mathbb{R}^m$ and its derivative $\varphi'(x)$ is locally Lipschitz on $\mathbb{R}^m$. It can be shown that if $\varphi$ is $C^{1,1}$, then the first and second-order Dini derivatives of $\varphi$ exist and are finite, and the first-order Dini derivative coincides with the usual directional derivative:

\[ \varphi'_D(x^0, u) = \varphi'(x^0, u) = \lim_{t \to +0} \frac{1}{t} \bigl( \varphi(x^0 + tu) - \varphi(x^0) \bigr) = \varphi'(x^0)\, u. \]
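In practice the Dini derivatives can be explored numerically by sampling the difference quotients at small $t$. The following is a minimal sketch (Python with NumPy; the liminf is crudely approximated by a minimum over a grid of small $t$, and the test function is our own illustrative choice of a $C^{1,1}$ function that is not $C^2$):

```python
# Numerical approximation of the second-order Dini quotients of a
# scalar function phi at x0.  For a C^{1,1} function the first-order
# Dini derivative equals phi'(x0)u, so the gradient at x0 is passed
# explicitly.  Function names and the test function are illustrative
# choices, not fixed by the paper.
import numpy as np

def dini_quotients_2nd(phi, grad0, x0, u, ts):
    """Quotients 2/t^2 (phi(x0 + t u) - phi(x0) - t phi'(x0)u)."""
    d1 = grad0 @ u
    return np.array([2.0 / t ** 2 * (phi(x0 + t * u) - phi(x0) - t * d1)
                     for t in ts])

# A C^{1,1} but not C^2 test function: phi(x) = x1|x1| + x2^2, with
# gradient (2|x1|, 2x2), which is Lipschitz with constant L = 2.
phi = lambda x: x[0] * abs(x[0]) + x[1] ** 2
x0, grad0 = np.zeros(2), np.zeros(2)
ts = np.geomspace(1e-6, 1e-2, 100)

for u in ([1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]):
    q = dini_quotients_2nd(phi, grad0, x0, np.array(u), ts)
    print(u, q.min())   # approx phi''_D(x0,u): 2, -2, 2 respectively
```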

The next theorem establishes second-order optimality conditions for $C^{1,1}$ functions in terms of Dini derivatives. Before proving the theorem we need the following lemma.

Lemma 1 Let $\varphi : \mathbb{R}^m \to \mathbb{R}$ be a $C^{1,1}$ function whose derivative $\varphi'$ is Lipschitz with constant $L$ on $x^0 + r \operatorname{cl} B$, where $x^0 \in \mathbb{R}^m$ and $r > 0$. Then for $u, v \in \mathbb{R}^m$ and $0 < t < r$ it holds

\[ \Bigl| \frac{2}{t^2} \bigl( \varphi(x^0 + tv) - \varphi(x^0) - t \varphi'(x^0) v \bigr) - \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) \Bigr| \leq 2L \max(\|u\|, \|v\|)\, \|v - u\| \tag{1} \]

and consequently

\[ \bigl| \varphi''_D(x^0, v) - \varphi''_D(x^0, u) \bigr| \leq 2L \max(\|u\|, \|v\|)\, \|v - u\|. \tag{2} \]

For $v = 0$ we get

\[ \Bigl| \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) \Bigr| \leq 2L \|u\|^2 \quad \text{and} \quad \bigl| \varphi''_D(x^0, u) \bigr| \leq 2L \|u\|^2. \]

In particular, if $\varphi'(x^0) = 0$, inequality (1) implies

\[ \Bigl| \frac{2}{t^2} \bigl( \varphi(x^0 + tv) - \varphi(x^0 + tu) \bigr) \Bigr| \leq 2L \max(\|u\|, \|v\|)\, \|v - u\|. \tag{3} \]

Proof. We have

\[ \frac{2}{t^2} \bigl( \varphi(x^0 + tv) - \varphi(x^0) - t \varphi'(x^0) v \bigr) = \frac{2}{t^2} \bigl( \varphi(x^0 + tv) - \varphi(x^0 + tu) - t \varphi'(x^0)(v - u) \bigr) + \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) \]
\[ = \frac{2}{t} \bigl( \varphi'(x^0 + tu + t\theta(v - u))(v - u) - \varphi'(x^0)(v - u) \bigr) + \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) \]
\[ \leq 2L \|(1 - \theta) u + \theta v\|\, \|v - u\| + \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) \]
\[ \leq 2L \max(\|u\|, \|v\|)\, \|v - u\| + \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr). \]

Here $0 < \theta < 1$ comes from the Lagrange mean value theorem. Exchanging $u$ and $v$ we get inequality (1), and as a particular case also (3). Inequality (2) is obtained from (1) after passing to the limit (the fact that we take a liminf does not lead out of the routine treatment).
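Estimate (2) admits a quick numerical spot check on the same illustrative function (a sketch only; the closed form of $\varphi''_D$ below is specific to this $\varphi$):

```python
# Spot check of estimate (2) from Lemma 1 for phi(x) = x1|x1| + x2^2,
# whose gradient is Lipschitz with L = 2.  Here phi''_D(0, u) has the
# closed form 2 u1|u1| + 2 u2^2.  A sketch, not a proof.
import numpy as np

L = 2.0
d2 = lambda u: 2 * u[0] * abs(u[0]) + 2 * u[1] ** 2   # phi''_D(0, u)

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(10000):
    u, v = rng.normal(size=2), rng.normal(size=2)
    lhs = abs(d2(v) - d2(u))
    rhs = 2 * L * max(np.linalg.norm(u), np.linalg.norm(v)) \
              * np.linalg.norm(v - u)
    worst = max(worst, lhs / rhs)
print(worst)   # stays <= 1, consistent with inequality (2)
```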

Theorem 1 Let $\varphi : \mathbb{R}^m \to \mathbb{R}$ be a $C^{1,1}$ function.

(Necessary conditions) Let $x^0$ be a minimizer of $\varphi$. Then $\varphi'(x^0) = 0$ and for each $u \in \mathbb{R}^m$ (or each $u \in S$) it holds $\varphi''_D(x^0, u) \geq 0$.

(Sufficient conditions) Let $x^0 \in \mathbb{R}^m$ be a stationary point, that is, $\varphi'(x^0) = 0$. If for each $u \in \mathbb{R}^m \setminus \{0\}$ (or each $u \in S$) it holds $\varphi''_D(x^0, u) > 0$, then $x^0$ is an isolated minimizer of second order for $\varphi$. Conversely, each isolated minimizer of second order satisfies these sufficient conditions.

Proof. Since $\varphi''_D(x^0, u)$ is positively homogeneous of second degree with respect to $u$, we may confine ourselves to $u \in S$, that is, to the case put in parentheses in the formulation of the theorem. The proof of the necessity is trivial. Now we prove the sufficiency. Let $\varphi'$ be Lipschitz with constant $L$ on the ball $x^0 + r \operatorname{cl} B$, $r > 0$. Let $u \in S$ and $0 < 3\varepsilon(u) < \varphi''_D(x^0, u)$. Choose $0 < \delta(u) < r$ such that for $0 < t < \delta(u)$ it holds

\[ \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) > \varphi''_D(x^0, u) - \varepsilon(u). \]

Put $U(u) = u + (\varepsilon(u)/2L) B$ and let $v \in U(u) \cap S$. Then applying Lemma 1 we get

\[ \frac{2}{t^2} \bigl( \varphi(x^0 + tv) - \varphi(x^0) \bigr) = \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) \bigr) + \frac{2}{t^2} \bigl( \varphi(x^0 + tv) - \varphi(x^0 + tu) \bigr) \]
\[ \geq \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) - 2L \|v - u\| \geq \varphi''_D(x^0, u) - 2\varepsilon(u) > \varepsilon(u) > 0. \]

Therefore

\[ \varphi(x^0 + tv) \geq \varphi(x^0) + \tfrac{1}{2} \bigl( \varphi''_D(x^0, u) - 2\varepsilon(u) \bigr) t^2. \]

The compactness of $S$ yields that there exist points $u^1, \dots, u^N \in S$ such that $S \subset U(u^1) \cup \dots \cup U(u^N)$. Put $\delta = \min_i \delta(u^i)$ and $A = \min_i \tfrac{1}{2} \bigl( \varphi''_D(x^0, u^i) - 2\varepsilon(u^i) \bigr)$. The above chain of inequalities implies that for $\|x - x^0\| < \delta$ it holds $\varphi(x) \geq \varphi(x^0) + A \|x - x^0\|^2$. Therefore $x^0$ is an isolated minimizer of second order.

Conversely, if $x^0$ is an isolated minimizer of second order for the $C^{1,1}$ function $\varphi$, then from the necessary conditions $\varphi'(x^0) = 0$. Further, for some $A > 0$, $u \in S$ and $t > 0$ sufficiently small it holds $\varphi(x^0 + tu) \geq \varphi(x^0) + A t^2$, whence

\[ \frac{2}{t^2} \bigl( \varphi(x^0 + tu) - \varphi(x^0) - t \varphi'(x^0) u \bigr) \geq 2A \]

and consequently $\varphi''_D(x^0, u) \geq 2A > 0$.

The following example shows that the theorem is not true for a function that is only differentiable at $x^0$ but not $C^{1,1}$.

Example 1 Consider the function $\varphi : \mathbb{R}^2 \to \mathbb{R}$ defined by

\[ \varphi(x_1, x_2) = \begin{cases} x_1^2 + x_2^2, & x_1^2 \neq x_2, \\ -x_2^2, & x_1^2 = x_2. \end{cases} \]

The point $x^0 = (0, 0)$ is not a minimizer: along the parabola $x_2 = x_1^2$ we have $\varphi(x_1, x_1^2) = -x_1^4 < 0 = \varphi(x^0)$ for $x_1 \neq 0$. It is easy to see that the function $\varphi$ is differentiable at $x^0$ with $\varphi'(x^0) = (0, 0)$. Furthermore, for any nonzero $u = (u_1, u_2)$ we have $\varphi''_D(x^0, u) = 2(u_1^2 + u_2^2) > 0$, and so the sufficient conditions of Theorem 1 are satisfied.
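Example 1 is easy to illustrate numerically (a sketch, assuming NumPy): the second-order quotients are positive along every fixed direction, while $\varphi$ is negative along the parabola $x_2 = x_1^2$.

```python
# Numerical illustration of Example 1.  The exact comparison
# x1*x1 == x2 is safe here because the parabola points are fed in
# with exactly that arithmetic.  A sketch only.
import numpy as np

def phi(x):
    return -x[1] ** 2 if x[0] * x[0] == x[1] else x[0] ** 2 + x[1] ** 2

ts = np.geomspace(1e-4, 1e-1, 50)
for u in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 2.0]):
    u = np.array(u)
    q = min(2 / t ** 2 * phi(t * u) for t in ts)  # phi(0)=0, phi'(0)=0
    print(u, q)   # approx 2*(u1^2 + u2^2) > 0 in every direction

# ... yet along the parabola phi is negative, so (0,0) is no minimizer:
print([phi(np.array([t, t * t])) for t in (0.1, 0.01)])  # -1e-4, -1e-8
```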

3 The vector problem

Let $C$ be a pointed closed convex cone in $\mathbb{R}^n$ with $\operatorname{int} C \neq \emptyset$, and let $f : \mathbb{R}^m \to \mathbb{R}^n$ be a given function. We consider the minimization problem

\[ f(x) \to \min, \quad x \in \mathbb{R}^m. \tag{4} \]

There are different concepts of solutions. The point $x^0$ is said to be a weakly efficient (efficient) point if there is a neighbourhood $U$ of $x^0$ such that $x \in U$ implies $f(x) \notin f(x^0) - \operatorname{int} C$ (respectively $f(x) \notin f(x^0) - (C \setminus \{0\})$). In this paper the weakly efficient (efficient) solutions are called for brevity w-minimizers (e-minimizers). Writing $y \in \mathbb{R}^n$ we mean $y = (y_1, \dots, y_n)$; similarly, the point $x \in \mathbb{R}^m$ is $x = (x_1, \dots, x_m)$, the point $\xi \in \mathbb{R}^n$ is $\xi = (\xi_1, \dots, \xi_n)$, and the function $f : \mathbb{R}^m \to \mathbb{R}^n$ is $f = (f_1, \dots, f_n)$. Here we confine ourselves for simplicity mostly to optimization with respect to the cone $C = \mathbb{R}^n_+ = \{y \in \mathbb{R}^n \mid y_j \geq 0,\ j = 1, \dots, n\}$.

When $C = \mathbb{R}^n_+$, the point $x^0$ is a w-minimizer of $f$ if and only if it is a solution of the scalar optimization problem

\[ \varphi(x) := \max_{1 \leq j \leq n} \bigl( f_j(x) - f_j(x^0) \bigr) \to \min, \quad x \in \mathbb{R}^m, \tag{5} \]

in which case the minimum of $\varphi$ at $x^0$ is $\varphi(x^0) = 0$. Indeed, if $x^0$ is not a w-minimizer of $f$, then each neighbourhood $U$ of $x^0$ contains a point $x \in U$ for which $f(x) \in f(x^0) - \operatorname{int} \mathbb{R}^n_+$, whence $f_j(x) - f_j(x^0) < 0$, $j = 1, \dots, n$, and consequently $x^0$ is not a minimizer of problem (5). If $x^0$ is a w-minimizer of $f$, then for some neighbourhood $U$ of $x^0$ the cone $-\operatorname{int} \mathbb{R}^n_+$ does not contain points of the form $f(x) - f(x^0)$, $x \in U$. This means that for any $x \in U$ it holds $f_j(x) - f_j(x^0) \geq 0$ for at least one $j = 1, \dots, n$. Consequently $\varphi(x) \geq f_j(x) - f_j(x^0) \geq 0 = \varphi(x^0)$, whence $x^0$ is a minimizer of $\varphi$.

Now we call $x^0$ a strict or isolated minimizer of order $k$ for $f$ if, respectively, it is a strict or isolated minimizer of order $k$ for problem (5). Each strict minimizer (and hence each isolated minimizer) is an e-minimizer. Indeed, $x^0$ being a strict minimizer implies that there exists a neighbourhood $U$ of $x^0$ such that $\varphi(x) > \varphi(x^0) = 0$ for $x \in U \setminus \{x^0\}$. Then $f_j(x) - f_j(x^0) > 0$ for some $j = j(x)$. Consequently $f(x) - f(x^0) \notin -(\mathbb{R}^n_+ \setminus \{0\})$, and therefore $x^0$ is an e-minimizer.

Given in $\mathbb{R}^n$ a pointed closed convex cone $C$ with $\operatorname{int} C \neq \emptyset$, its polar cone is defined by

\[ C' = \{\xi \in \mathbb{R}^n \mid \langle \xi, y \rangle \geq 0 \text{ for all } y \in C\}. \]

Using polar cones, the problem of finding the w-minimizers of the vector function $f$ with respect to such a cone $C$ can be reduced to a scalar problem. On this basis one could think of generalizing the results of the present paper to optimization with respect to arbitrary cones of this kind. For simplicity, however, we confine ourselves mostly to the case $C = \mathbb{R}^n_+$.

We say that the vector function $f$ is $C^{1,1}$ if all of its components are $C^{1,1}$; equivalently, $f$ is $C^{1,1}$ if it is Fréchet differentiable with locally Lipschitz derivative $f'$. Our purpose is to find second-order conditions similar to those of Theorem 1 in the case of a $C^{1,1}$ vector function $f$. Note that the function $\varphi$ from the equivalent scalar problem (5) is in general only Lipschitz, but not $C^{1,1}$. Hence the minimizers of the vector function $f$ cannot be determined by applying Theorem 1 to the function (5). The next Example 2 shows, moreover, that the conclusion of Theorem 1 for such a function is wrong.

Example 2 Consider the optimization problem (4) for $f : \mathbb{R}^2 \to \mathbb{R}^2$, $f(x_1, x_2) = (-2x_1^2 + x_2,\ x_1^2 - x_2)$. The scalar function $\varphi$ at $x^0 = 0$ for the corresponding problem (5) is given by $\varphi(x_1, x_2) = \max(-2x_1^2 + x_2,\ x_1^2 - x_2)$, and the point $x^0 = (0, 0)$ is not a minimizer. Indeed, for $t \neq 0$ we have $\varphi(t, \tfrac{3}{2} t^2) = -\tfrac{1}{2} t^2 < 0 = \varphi(0, 0) = \varphi(x^0)$. In spite of this, sufficient conditions similar to those of Theorem 1 are satisfied, since the Dini derivatives for nonzero $u = (u_1, u_2)$ are

\[ \varphi'_D(x^0, u) = \begin{cases} u_2, & u_2 > 0, \\ 0, & u_2 = 0, \\ -u_2, & u_2 < 0, \end{cases} \qquad \varphi''_D(x^0, u) = 2u_1^2 > 0 \ \text{ for } u_2 = 0. \]
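The computations in Example 2 can be reproduced numerically; a minimal sketch (assuming NumPy), building the scalarization (5) from $f$:

```python
# Numerical check of Example 2: the scalarized function (5) is
# negative along the curve x2 = 1.5 t^2, so x0 = 0 is not a
# w-minimizer, although Dini-type sufficient conditions formally hold.
import numpy as np

f = lambda x: np.array([-2 * x[0] ** 2 + x[1], x[0] ** 2 - x[1]])
x0 = np.zeros(2)
scal = lambda x: np.max(f(x) - f(x0))        # phi of problem (5)

for t in (0.1, 0.01, 0.001):
    print(t, scal(np.array([t, 1.5 * t ** 2])))   # equals -t^2/2 < 0

# First-order Dini derivative: phi'_D(0,u) = |u2|; for u2 = 0 the
# second-order quotient is 2 u1^2 > 0:
ts = np.geomspace(1e-6, 1e-2, 100)
u = np.array([1.0, 0.0])
print(min(2 / t ** 2 * scal(t * u) for t in ts))   # approx 2
```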

Though problem (4) can be smooth (which means that $f$ has smooth, in some sense, components), the corresponding problem (5) is a nonsmooth one, and as a result some bad effects like those in Example 2 may occur. We call this effect the nonsmooth nature of vector optimization problems. It seems, still, that we can take advantage of the smoothness of $f$ if we succeed in formulating suitable optimality conditions directly on the function $f$ instead of on the corresponding scalar problem (5). Wishing, as in Theorem 1, to exploit Dini derivatives, we first need to define them for a vector function $f$; in fact we confine ourselves to a $C^{1,1}$ function $f$. For the first Dini derivative we put, as in the scalar case, $f'_D(x^0, u) = f'(x^0) u$. The second derivative $f''_D(x^0, u)$ we define as the set of cluster points of $\frac{2}{t^2} \bigl( f(x^0 + tu) - f(x^0) - t f'(x^0) u \bigr)$ as $t \to +0$, or in other words as the Kuratowski upper limit set

\[ f''_D(x^0, u) = \operatorname*{Limsup}_{t \to +0}\ \frac{2}{t^2} \bigl( f(x^0 + tu) - f(x^0) - t f'(x^0) u \bigr). \]

This definition is convenient for the vector case, but it does not coincide with the definition of the second-order Dini derivative in the scalar case, where in fact a liminf is taken in order to avoid set-valuedness. Note, however, that Theorem 1 can be stated also with this set-valued meaning of the second-order Dini derivative.

Next we formulate a Conjecture generalizing Theorem 1 to the vector problem (4). As for the notation, we make the following agreement: if $\xi$ is in $\mathbb{R}^n$, we write for brevity the scalar product as $\langle \xi, y \rangle = \xi y$, $y \in \mathbb{R}^n$. The notation is justified, since the scalar product determines a linear functional $y \mapsto \langle \xi, y \rangle$, and usually $\xi y$ is used for the value of the linear functional $\xi$ at $y$. Extending this notation to compositions, we write e.g. $\xi f'(x^0)$ for the sum $\xi f'(x^0) = \sum_{j=1}^n \xi_j f'_j(x^0)$.

Conjecture Assume that $f : \mathbb{R}^m \to \mathbb{R}^n$ is a $C^{1,1}$ function minimized with respect to the cone $C = \mathbb{R}^n_+$.

(Necessary conditions) Let $x^0$ be a w-minimizer of $f$. Then:
a) The set $D \subset \mathbb{R}^n$, consisting of all $\xi$ such that $\xi \in \mathbb{R}^n_+$, $\sum_{j=1}^n \xi_j = 1$ and $\xi f'(x^0) = 0$, is nonempty.
b) For each $u \in \mathbb{R}^m$ (or $u \in S$) such that $f'(x^0) u \in -(\mathbb{R}^n_+ \setminus \operatorname{int} \mathbb{R}^n_+)$ it holds
\[ \inf_{y \in f''_D(x^0, u)}\ \sup_{\xi \in D}\ \xi y \geq 0. \]

(Sufficient conditions) Assume that for $x^0 \in \mathbb{R}^m$ the following two conditions are satisfied:
a) The set $D \subset \mathbb{R}^n$, consisting of all $\xi$ such that $\xi \in \mathbb{R}^n_+$, $\sum_{j=1}^n \xi_j = 1$ and $\xi f'(x^0) = 0$, is nonempty.
b) For each $u \in \mathbb{R}^m \setminus \{0\}$ (or $u \in S$) such that $f'(x^0) u \in -(\mathbb{R}^n_+ \setminus \operatorname{int} \mathbb{R}^n_+)$ it holds
\[ \inf_{y \in f''_D(x^0, u)}\ \sup_{\xi \in D}\ \xi y > 0. \]
Then $x^0$ is an isolated minimizer of second order for $f$.

(Isolated minimizers) The sufficient conditions above are both sufficient and necessary for $x^0$ to be an isolated minimizer of second order for $f$.

The next Theorem 2 is from Guerraggio, Luc [4]. It gives second-order optimality conditions for $C^{1,1}$ vector functions in terms of the Clarke second-order subdifferential, defined as follows. Since $f'$ is Lipschitz, according to the Rademacher theorem the Hessian $f''$ exists almost everywhere, and the second-order subdifferential of $f$ at $x^0$ is defined by

\[ \partial^2 f(x^0) = \operatorname{cl} \operatorname{conv} \bigl\{ \lim f''(x^i) \mid x^i \to x^0,\ f''(x^i) \text{ exists} \bigr\}. \]

Theorem 2 Let $f : \mathbb{R}^m \to \mathbb{R}^n$ be a $C^{1,1}$ function minimized with respect to the closed pointed convex cone $C$ with $\operatorname{int} C \neq \emptyset$.

(Necessary conditions) Assume that $x^0$ is a w-minimizer of $f$. Then the following conditions hold for each $u \in S$:
a) $\xi f'(x^0) = 0$ for some $\xi \in C' \setminus \{0\}$;
b) if $f'(x^0)(u) \in -(C \setminus \operatorname{int} C)$, then $\partial^2 f(x^0)(u, u) \cap (-\operatorname{int} C)^c \neq \emptyset$.

(Sufficient conditions) Assume that for each $u \in S$ one of the following two conditions is satisfied:
a) $\xi f'(x^0) = 0$ for some $\xi \in \operatorname{int} C'$;
b) $f'(x^0)(u) \in -(C \setminus \operatorname{int} C)$ and $\partial^2 f(x^0)(u, u) \subset \operatorname{int} C$.
Then $x^0$ is an e-minimizer of $f$.

In Section 4, reducing the optimization problem (5) to a problem with constraints and using the Lagrange multiplier technique, we prove in Theorem 3 the Conjecture in the smooth case. The Conjecture is a generalization of Theorem 3 with an account of Theorem 1. Theorem 2 of Guerraggio, Luc [4] shows that known second-order conditions in vector optimization are designed similarly.
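The quantities entering the Conjecture can be approximated numerically: $f''_D(x^0, u)$ by the difference quotients on a grid of small $t$, and condition b) by an inf-sup over the sampled values. A sketch for $f$ from Example 2 (assuming NumPy; the multiplier set $D = \{(1/2, 1/2)\}$ of this $f$ is the one computed in Section 4, and for this smooth $f$ the Dini set reduces to a single point):

```python
# Approximating the inf-sup of condition b) of the Conjecture for the
# f of Example 2.  A sketch only.
import numpy as np

f = lambda x: np.array([-2 * x[0] ** 2 + x[1], x[0] ** 2 - x[1]])
fprime = lambda x: np.array([[-4 * x[0], 1.0], [2 * x[0], -1.0]])
x0 = np.zeros(2)
D = [np.array([0.5, 0.5])]               # multiplier set, see Section 4

def dini2_set(u, ts):
    """Sampled values of 2/t^2 (f(x0+tu) - f(x0) - t f'(x0)u)."""
    return [2 / t ** 2 * (f(x0 + t * u) - f(x0) - t * fprime(x0) @ u)
            for t in ts]

ts = np.geomspace(1e-4, 1e-2, 20)
u = np.array([1.0, 0.0])                 # critical direction: f'(x0)u = 0
vals = dini2_set(u, ts)                  # all approx (-4, 2)
infsup = min(max(xi @ y for xi in D) for y in vals)
print(infsup)   # approx -1 < 0: necessary condition b) is violated
```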

4 The smooth case

We are concerned with problem (5), which is obviously equivalent to the constrained scalar problem

\[ \varphi_0(x, z) := z \to \min, \qquad \varphi_j(x, z) := f_j(x) - f_j(x^0) - z \leq 0, \quad j = 1, \dots, n. \tag{6} \]

Here $(x, z) \in \mathbb{R}^m \times \mathbb{R}$. Problem (5) is solved by $x^0$ if and only if problem (6) is solved by $(x^0, z^0)$, where $z^0 = 0$, in which case $\varphi_j(x^0, z^0) = 0$, $j = 1, \dots, n$. We call the smooth case the case when $f : \mathbb{R}^m \to \mathbb{R}^n$ is twice Fréchet differentiable at $x^0$. The next Theorem 3 verifies the Conjecture in the smooth case.

Theorem 3 Assume that $f : \mathbb{R}^m \to \mathbb{R}^n$ is strictly differentiable at $x^0$ and possesses at this point a second-order Fréchet derivative $f''(x^0)$. Let $f$ be minimized with respect to the cone $C = \mathbb{R}^n_+$.

(Necessary conditions) Let $x^0$ be a w-minimizer of $f$. Then:
a) The set $D \subset \mathbb{R}^n$, consisting of all $\xi$ such that $\xi \in \mathbb{R}^n_+$, $\sum_{j=1}^n \xi_j = 1$ and $\xi f'(x^0) = 0$, is a nonempty convex set in $\mathbb{R}^n$.
b) For each $u \in \mathbb{R}^m$ such that $f'(x^0) u = 0$ there exists $\xi \in D$ such that $\xi f''(x^0)(u, u) \geq 0$.

(Sufficient conditions) Let $x^0 \in \mathbb{R}^m$ and assume there exist $\hat\xi \in \mathbb{R}^n$ and $\alpha > 0$ such that:
a) $\hat\xi \in \operatorname{int} \mathbb{R}^n_+$ and $\hat\xi f'(x^0) = 0$;
b) for each $u \in \mathbb{R}^m$ such that $f'(x^0) u = 0$ it holds $\hat\xi f''(x^0)(u, u) \geq 2\alpha \|u\|^2$.
Then $x^0$ is an isolated minimizer of second order for $f$.

Proof. Necessity. We reformulate for the particular problem (6) the second-order necessary conditions from [1, Theorem 3.4.2, p. 188]. Let $\varphi_j : \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}$, $j = 0, \dots, n$, be twice Fréchet differentiable at the point $(x^0, z^0)$ and let $\varphi_j(x^0, z^0) = 0$, $j = 1, \dots, n$. If $(x^0, z^0)$ supplies a local minimum of the problem

\[ \varphi_0(x, z) \to \min, \qquad \varphi_j(x, z) \leq 0, \quad j = 1, \dots, n, \tag{7} \]

then:
a) The set $D$ of the Lagrange multipliers $(\xi, \xi_0) \in \mathbb{R}^n \times \mathbb{R}$ such that

\[ \xi_j \geq 0, \ j = 1, \dots, n, \qquad \xi_0 \geq 0, \qquad \sum_{j=1}^n \xi_j = 1, \qquad L'_{(x,z)}(x^0, z^0, \xi, \xi_0) = 0, \]

is a nonempty convex set in $\mathbb{R}^n \times \mathbb{R}$.
b) For each $(u, v) \in \mathbb{R}^m \times \mathbb{R}$ belonging to the subspace

\[ \bigl\{ (u, v) \in \mathbb{R}^m \times \mathbb{R} \ \bigm|\ \bigl\langle \bigl( (\varphi_j)'_x(x^0, z^0), (\varphi_j)'_z(x^0, z^0) \bigr), (u, v) \bigr\rangle = 0, \ j = 0, \dots, n \bigr\} \]

there exist Lagrange multipliers $(\xi, \xi_0) = (\xi(u, v), \xi_0(u, v)) \in D$ such that

\[ L''_{(x,z)(x,z)}(x^0, z^0, \xi, \xi_0)\bigl( (u, v), (u, v) \bigr) \geq 0. \]

Here the Lagrange function is

\[ L(x, z, \xi, \xi_0) = \sum_{j=0}^n \xi_j\, \varphi_j(x, z). \tag{8} \]

We will apply the necessity conditions to problem (6), where

\[ \varphi_0(x, z) = z, \qquad \varphi_j(x, z) = f_j(x) - f_j(x^0) - z, \quad j = 1, \dots, n, \qquad z^0 = 0. \]

The Lagrange function is

\[ L(x, z, \xi, \xi_0) = \xi_0 z + \sum_{j=1}^n \xi_j \bigl( f_j(x) - f_j(x^0) - z \bigr). \]

We describe the set $D$. We have on $D$

\[ L'_{(x,z)}(x^0, z^0, \xi, \xi_0) = \Bigl( \sum_{j=1}^n \xi_j f'_j(x^0), \ \xi_0 - \sum_{j=1}^n \xi_j \Bigr) = 0. \]

Therefore $\xi_0 = \sum_{j=1}^n \xi_j = 1$. For such multipliers the variable $z$ vanishes from the Lagrange function, and it simplifies to

\[ L(x, \xi) = \sum_{j=1}^n \xi_j \bigl( f_j(x) - f_j(x^0) \bigr). \tag{9} \]

Consequently, the cited conditions admit a reformulation in terms of the function $L(x, \xi)$, and this reformulation is in fact the necessity part of Theorem 3.

Sufficiency. We reformulate for the particular problem (6) the second-order sufficient conditions from [1, Theorem 3.4.3, p. 190]. Suppose that $\varphi_j : \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}$, $j = 0, \dots, n$, are twice Fréchet differentiable at the point $(x^0, z^0)$, feasible for problem (7), and let $\varphi_j(x^0, z^0) = 0$, $j = 1, \dots, n$. Assume that there exist Lagrange multipliers $(\hat\xi, \hat\xi_0) \in \mathbb{R}^n \times \mathbb{R}$ and a number $\alpha > 0$ such that:
a) It holds $\hat\xi_j > 0$, $j = 1, \dots, n$, $\hat\xi_0 > 0$, $\sum_{j=1}^n \hat\xi_j = 1$ and $L'_{(x,z)}(x^0, z^0, \hat\xi, \hat\xi_0) = 0$.
b) For each $(u, v) \in \mathbb{R}^m \times \mathbb{R}$ belonging to the subspace

\[ \bigl\{ (u, v) \in \mathbb{R}^m \times \mathbb{R} \ \bigm|\ \bigl\langle \bigl( (\varphi_j)'_x(x^0, z^0), (\varphi_j)'_z(x^0, z^0) \bigr), (u, v) \bigr\rangle = 0, \ j = 0, \dots, n \bigr\} \]

it holds $L''_{(x,z)(x,z)}(x^0, z^0, \hat\xi, \hat\xi_0)\bigl( (u, v), (u, v) \bigr) \geq 2\alpha \|(u, v)\|^2$.

Then $(x^0, z^0)$ is an isolated minimizer of second order for the considered problem. Here the Lagrange function is given by (8).

Now, as in the proof of the necessity, we see that these conditions for problem (6) can be reformulated in terms of the Lagrange function (9), and this reformulation coincides with the sufficient conditions of Theorem 3. The equality $\sum_{j=1}^n \hat\xi_j = 1$ can be dropped, since the imposed conditions are homogeneous in $\hat\xi$. We keep $\alpha$, as in [1], but since finite-dimensional spaces are considered ([1] deals more generally with Banach spaces), the inequality $\hat\xi f''(x^0)(u, u) \geq 2\alpha \|u\|^2$ can be replaced by $\hat\xi f''(x^0)(u, u) > 0$.

Now we show that on the basis of Theorem 3 it follows that the point $x^0 = (0, 0)$ in Example 2 is not a minimizer, and that this result cannot be achieved on the basis of Theorem 2. Theorem 3 works with this example, since the second-order necessary conditions at $x^0 = (0, 0)$ are not satisfied. Indeed, obviously $f(x) = (-2x_1^2 + x_2,\ x_1^2 - x_2)$ is strictly differentiable and possesses second-order Fréchet derivatives at each point $(x_1, x_2)$. We have $f(x^0) = (0, 0)$ and

\[ L(x, \xi) \equiv \xi \bigl( f(x) - f(x^0) \bigr) = (-2x_1^2 + x_2)\, \xi_1 + (x_1^2 - x_2)\, \xi_2, \]
\[ L'_x(x, \xi) \equiv \xi f'(x) = (-4x_1 \xi_1 + 2x_1 \xi_2, \ \xi_1 - \xi_2), \qquad L'_x(x^0, \xi) = (0, \ \xi_1 - \xi_2), \]

whence the set $D$ is determined by the system

\[ \xi_1 - \xi_2 = 0, \qquad \xi_1 + \xi_2 = 1, \qquad \xi_1 \geq 0, \ \xi_2 \geq 0, \]

which gives $D = \{(1/2, 1/2)\}$.
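The determination of $D$ and the second-order check carried out next can be reproduced in a few lines; a minimal sketch (assuming NumPy):

```python
# Verifying the computation of D for Example 2 and the failure of the
# second-order necessary condition of Theorem 3.  The Hessians of
# f1, f2 are constant.  A sketch only.
import numpy as np

A = np.array([[1.0, -1.0],   # xi f'(x0) = 0 reduces to xi1 - xi2 = 0
              [1.0,  1.0]])  # normalization   xi1 + xi2 = 1
xi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(xi)                    # [0.5, 0.5]; both >= 0, so D = {(1/2,1/2)}

H1 = np.array([[-4.0, 0.0], [0.0, 0.0]])   # f1'' for f1 = -2 x1^2 + x2
H2 = np.array([[ 2.0, 0.0], [0.0, 0.0]])   # f2'' for f2 =  x1^2 - x2
for u1 in (1.0, -2.0):
    u = np.array([u1, 0.0])                # directions with f'(x0)u = 0
    q = xi[0] * (u @ H1 @ u) + xi[1] * (u @ H2 @ u)
    print(u, q)                            # -u1^2 < 0 for u1 != 0
```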

The directional derivative is $f'(x^0) u = (u_2, -u_2)$, whence $f'(x^0) u = 0$ iff $u_2 = 0$. The second-order derivatives are

\[ f''(x^0)(u, u) = (-4u_1^2, \ 2u_1^2), \qquad L''_{xx}(x^0, \xi)(u, u) \equiv \xi f''(x^0)(u, u) = -4u_1^2 \xi_1 + 2u_1^2 \xi_2. \]

For $f'(x^0) u = 0$, i.e. $u_2 = 0$, and $\xi \in D$, i.e. $\xi_1 = \xi_2 = 1/2$, we get $\xi f''(x^0)(u, u) = -u_1^2$. For $u_1 \neq 0$ this gives $\xi f''(x^0)(u, u) < 0$. Therefore point b) of the necessary conditions of Theorem 3 is not satisfied. On this basis we conclude that $x^0$ is not a w-minimizer and, moreover, that it is not an e-minimizer of $f$.

Theorem 2 does not work with this example. Indeed, the first-order condition at $x^0$ is satisfied with $\xi = (1/2, 1/2)$, as was shown above. If $f'(x^0) u = 0$, then $u_2 = 0$ and

\[ \partial^2 f(x^0)(u, u) = f''(x^0)(u, u) = (-4u_1^2, \ 2u_1^2) \in (-\operatorname{int} \mathbb{R}^2_+)^c, \]

which shows that also the second-order necessary condition, point b), is satisfied. Therefore on the basis of Theorem 2 the suspicion that $x^0$ is a w-minimizer cannot be rejected.

5 The nonsmooth case

We call the nonsmooth case the case of a $C^{1,1}$ function $f : \mathbb{R}^m \to \mathbb{R}^n$, which is the assumption of the Conjecture. In fact, there are also other differences between the Conjecture and Theorem 3, which make Theorem 3 almost a verification (and not, as it was declared, a verification) of the Conjecture in the smooth case. The most essential differences are two. The first is that the conditions are imposed on the vectors $u$ satisfying $f'(x^0) u \in -(\mathbb{R}^n_+ \setminus \operatorname{int} \mathbb{R}^n_+)$ instead of on those with $f'(x^0) u = 0$; from the geometrical point of view this looks more natural, and it is the way in which this condition appears in Theorem 2. The other difference is the use of the set $D$ both in the necessity and in the sufficiency part of the Conjecture. One reason is that one expects similar necessary and sufficient conditions. Another, more essential reason is that, confining the sufficient conditions to just one multiplier $\hat\xi$ as in the sufficiency part of Theorem 3, we do not get a characterization of the isolated minimizers of second order (that is, the part Isolated minimizers of the Conjecture would be wrong), which is shown in the next Example 3. Let us conclude this digression by noticing that another difference is the non-appearance of $\alpha$ in the Conjecture. We may drop $\alpha$ as long as we deal with finite-dimensional spaces, as was commented on in the proof of Theorem 3.

Example 3 Consider the function $f : \mathbb{R} \to \mathbb{R}^2$,

\[ f(x) = \begin{cases} (-2x^2, \ x^2), & x \geq 0, \\ (x^2, \ -2x^2), & x < 0, \end{cases} \]

optimized with respect to $C = \mathbb{R}^2_+$. The function $f$ is $C^{1,1}$. The point $x^0 = 0$ is an isolated minimizer of second order (at which the second-order Fréchet derivative $f''(x^0)$ does not exist), and it verifies both the necessary and the sufficient conditions of the Conjecture. At the same time, for each $\hat\xi \in \mathbb{R}^2_+ \setminus \{0\}$ there exists $u \in \mathbb{R}$ with $f'(x^0) u = 0$ such that $\hat\xi f''_D(x^0, u) < 0$ (here the Dini derivatives $f''_D(x^0, u)$ are one-point sets).

In this example $x^0 = 0$ is an isolated minimizer of second order, since the corresponding scalar problem (5) is $\varphi(x) = x^2 \to \min$, which has $x^0$ as an isolated minimizer of second order. We have $f(x^0) = (0, 0)$ and

\[ f'(x) = \begin{cases} (-4x, \ 2x), & x > 0, \\ (0, \ 0), & x = 0, \\ (2x, \ -4x), & x < 0, \end{cases} \]

which implies easily that $f'$ is Lipschitz, i.e. $f$ is $C^{1,1}$.
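The claims of Example 3 can be checked numerically before the analytic verification below; a sketch (assuming NumPy):

```python
# Numerical check of Example 3: the second-order Dini quotients of f
# at 0, the positivity of sup over D, and the failure of any single
# fixed multiplier.  A sketch only.
import numpy as np

def f(x):
    return np.array([-2 * x ** 2, x ** 2]) if x >= 0 else \
           np.array([x ** 2, -2 * x ** 2])

ts = np.geomspace(1e-6, 1e-2, 50)
D = [np.array([a, 1 - a]) for a in np.linspace(0, 1, 101)]

for u in (1.0, -1.0):
    # f(0) = 0 and f'(0) = 0, so the quotient is 2 f(t u)/t^2
    y = np.mean([2 * f(t * u) / t ** 2 for t in ts], axis=0)
    print(u, y, max(xi @ y for xi in D))   # sup over D equals 2 u^2 > 0

# ... but each fixed xi is negative in at least one of the directions
# u = 1, u = -1 (the quotients equal 2 f(u) for these f):
for xi in (np.array([0.5, 0.5]), np.array([0.9, 0.1])):
    print(xi, xi @ (2 * f(1.0)), xi @ (2 * f(-1.0)))
```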

Now $\xi f'(x^0) = (0, 0)$ for every $\xi$, whence

\[ D = \{\xi \in \mathbb{R}^2_+ \mid \xi_1 + \xi_2 = 1\} \]

is not empty, which verifies points a) of the Conjecture. For each $u \in \mathbb{R}$ we have $f'(x^0) u = (0, 0)$. An easy calculation gives that the second-order Dini derivatives are the singletons

\[ f''_D(x^0, u) = \begin{cases} (-4u^2, \ 2u^2), & u > 0, \\ (0, \ 0), & u = 0, \\ (2u^2, \ -4u^2), & u < 0, \end{cases} \]

whence for each $u$ it holds $\sup_{\xi \in D} \xi f''_D(x^0, u) = 2u^2 \geq 0$ (and strictly positive for $u \neq 0$), which verifies points b) of the Conjecture.

At the same time, let $\hat\xi \in D$ be fixed. Then

\[ \hat\xi f''_D(x^0, u) = \begin{cases} -4u^2 \hat\xi_1 + 2u^2 \hat\xi_2, & u > 0, \\ 2u^2 \hat\xi_1 - 4u^2 \hat\xi_2, & u < 0. \end{cases} \]

Assume that both $\hat\xi f''_D(x^0, 1) = -4\hat\xi_1 + 2\hat\xi_2 \geq 0$ and $\hat\xi f''_D(x^0, -1) = 2\hat\xi_1 - 4\hat\xi_2 \geq 0$. Adding these inequalities, we get $0 \leq -2(\hat\xi_1 + \hat\xi_2) = -2$, a contradiction. Hence $\hat\xi f''_D(x^0, u) < 0$ either for $u = 1$ or for $u = -1$.

We conclude the paper with some remarks. Our purpose was to establish reasonable second-order conditions for $C^{1,1}$ vector optimization. The result of this effort is the formulated Conjecture, the proof of which will be given in a separate paper. In Theorem 3 we observed a reduction to a constrained problem; hence even unconstrained vector optimization problems behave like constrained ones. This has been observed also in the paper of Bolintenéanu, El Maghri [2], where second-order conditions for $C^2$ functions are obtained by applying a reduction to a constrained problem different from the one used in the present paper. Let us underline that the approach in [2] assumes constraint qualifications on the objective function of Lyusternik type, that is, $f'(x^0)$ has full range. The Conjecture formulated in this paper does not involve constraint qualifications on the objective function, which in our opinion is the more desirable case. This property occurs due to the special structure of the scalar constrained problem (6). The proved Theorem 3 gives an effective tool to treat vector optimization problems, which was demonstrated on Example 2. Still, the Conjecture improves Theorem 3 in a natural way, since one expects, as in classical scalar optimization, the second-order sufficient conditions to characterize (that is, to be both sufficient and necessary for) the isolated minimizers of second order. In the case of a scalar $C^{1,1}$ function, the Conjecture obviously reduces to Theorem 1. Similarly to Theorem 3, we consider in the Conjecture optimization with respect to the cone $C = \mathbb{R}^n_+$. This may cause some simplifications in the proof, but, as in Theorem 2, there are no principal difficulties in carrying the formulation over to optimization with respect to more general cones.

References

[1] V. M. Alekseev, V. M. Tikhomirov, S. V. Fomin: Optimal Control. Consultants Bureau, New York, 1987.

[2] S. Bolintenéanu, M. El Maghri: Second-order efficiency conditions and sensitivity of efficient points. J. Optim. Theory Appl. 98 (1998), No. 3, 569-592.

[3] I. Ginchev, A. Guerraggio, M. Rocca: From scalar to vector optimization. Preprint M-02/02, Department of Mathematics, Technical University of Varna, 2002.

[4] A. Guerraggio, D. T. Luc: Optimality conditions for $C^{1,1}$ vector optimization problems. J. Optim. Theory Appl. 109 (2001), No. 3, 615-629.

[5] J.-B. Hiriart-Urruty, J.-J. Strodiot, V. Hien Nguyen: Generalized Hessian matrix and second order optimality conditions for problems with $C^{1,1}$ data. Appl. Math. Optim. 11 (1984), 43-56.

[6] D. Klatte, K. Tammer: On the second order sufficient conditions to perturbed $C^{1,1}$ optimization problems. Optimization 19 (1988), 169-180.

[7] D. La Torre, M. Rocca: $C^{1,1}$ functions and optimality conditions. Journal of Computational Analysis and Applications, to appear.

[8] D. T. Luc: Taylor's formula for $C^{k,1}$ functions. SIAM J. Optim. 5 (1995), No. 3, 659-669.

[9] X. Q. Yang, V. Jeyakumar: Generalized second-order directional derivatives and optimization with $C^{1,1}$ functions. Optimization 26 (1992), 165-185.

[10] X. Q. Yang: Second-order conditions in $C^{1,1}$ optimization with applications. Numer. Funct. Anal. Optim. 14 (1993), 621-632.

[11] X. Q. Yang: Generalized second-order derivatives and optimality conditions. Nonlinear Anal. 23 (1994), 767-784.