Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints


Journal of Global Optimization 21: 445-455, 2001.
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints

JEAN-BAPTISTE HIRIART-URRUTY
Laboratoire MIP, UMR 5640, U.F.R. Mathématiques, Informatique, Gestion, Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex 4, France (E-mail: jbhu@cict.fr)

(A preliminary version of this paper was presented at the French-Belgian-German Conference on Optimization, September 1998.)

Abstract. For the problem of maximizing a convex quadratic function under convex quadratic constraints, we derive conditions characterizing a globally optimal solution. The method consists in exploiting global optimality conditions expressed in terms of ε-subdifferentials of convex functions and ε-normal directions to convex sets. By specializing the general results on maximizing a convex function over a convex set, we find explicit conditions for optimality.

Key words: global optimality condition, quadratic objective function, convex quadratic constraints

1. Introduction

Optimization problems where all the data (objective function and constraint functions) are quadratic functions cover a large spectrum of situations; they constitute an important part of the field of optimization; see [1, Section 8] for a recent survey of the subject. Tackling them from the (global) optimality and duality viewpoints is not yet within reach. We consider here a special class of such optimization problems, with a convex objective function and convex inequality constraints:

(P)  Minimize (or maximize)  $f(x) := \frac{1}{2}\langle Ax, x\rangle + \langle a, x\rangle + \alpha$
     subject to  $g_i(x) := \frac{1}{2}\langle Q_i x, x\rangle + \langle b_i, x\rangle + c_i \leq 0$  for all $i = 1, \dots, m$,

where $A, Q_1, \dots, Q_m$ are positive semidefinite symmetric $n \times n$ matrices, $a, b_1, \dots, b_m$ are vectors in $\mathbb{R}^n$, and $\alpha, c_1, \dots, c_m$ are real numbers. The notation $\langle \cdot, \cdot \rangle$ stands for the standard inner product in $\mathbb{R}^n$.

In such a setting, two situations are well understood: when there is only one inequality constraint, or when the $f$ to be minimized as well as the constraint functions $g_i$ are convex (hence (P) is a convex minimization problem). When there is only one inequality constraint, surprisingly enough, the usual first-order Karush-Kuhn-Tucker (KKT) conditions can be complemented so as to provide a characterization of global solutions to (P) ([6, 8]). Under a slight assumption on the constraint function $g_1$, the KKT conditions at $\bar{x}$ with an associated Lagrange-KKT multiplier $\bar{\mu} \geq 0$, complemented with the condition that $A + \bar{\mu} Q_1$ is positive semidefinite, characterize a global solution to (P). In such a case one can assert that, roughly speaking, (P) is a convex problem in a hidden form. When all the matrices $A, Q_1, \dots, Q_m$ are positive semidefinite and (P) consists in minimizing the quadratic convex $f$ under the quadratic convex inequalities $g_i(x) \leq 0$, $i = 1, \dots, m$, (P) is a particular convex minimization problem for which the optimality conditions are known.

In the present work, we intend to derive conditions characterizing globally optimal solutions in the problem of maximizing the convex objective $f$ under several convex inequality constraints. This can be viewed as a particular case of the general situation considered by the author in ([2], [3, Section III]), where a convex function was maximized over a convex set. The conditions expressed there were given in terms of the ε-subdifferential of the objective function and the ε-normal directions to the constraint set. The nice thing in the quadratic world is that these global optimality conditions can be exploited in the calculations. We do not address here the questions of semidefinite relaxation, complexity, or obtaining good approximations of (P).

2. A global optimality condition

Let $f : \mathbb{R}^n \to \mathbb{R}$ be convex and let $C$ be a (nonempty) closed convex set in $\mathbb{R}^n$. Two mathematical objects are useful in deriving global optimality conditions in the problem of maximizing $f$ over $C$: the so-called ε-subdifferential of $f$ and the set of ε-normal directions to $C$. For $\varepsilon \geq 0$, the ε-subdifferential of $f$ at $\bar{x}$, denoted $\partial_\varepsilon f(\bar{x})$, is the set of (slopes) $s \in \mathbb{R}^n$ satisfying

$f(x) \geq f(\bar{x}) + \langle s, x - \bar{x}\rangle - \varepsilon$ for all $x \in \mathbb{R}^n$.  (1)

The set of ε-normal directions to $C$ at $\bar{x} \in C$, denoted $N_\varepsilon(C, \bar{x})$, is the set of (directions) $d \in \mathbb{R}^n$ satisfying

$\langle d, x - \bar{x}\rangle \leq \varepsilon$ for all $x \in C$.  (2)

For properties of ε-subdifferentials of convex functions and ε-normal directions to convex sets, we refer to [5, Chapter XI]. The following general result characterizes a global maximizer $\bar{x} \in C$ of $f$ over $C$.

THEOREM 1 ([2]). $\bar{x} \in C$ is a global maximizer of $f$ over $C$ if and only if

$\partial_\varepsilon f(\bar{x}) \subseteq N_\varepsilon(C, \bar{x})$ for all $\varepsilon > 0$.  (3)

Inclusion (3) has to be checked for all $\varepsilon > 0$, a priori.

Instead of using the rough definitions (1), (2), an alternate way of exploiting Theorem 1 is to go through the support functions of $\partial_\varepsilon f(\bar{x})$ and $N_\varepsilon(C, \bar{x})$. The support function of $\partial_\varepsilon f(\bar{x})$, denoted $f'_\varepsilon(\bar{x}, \cdot)$, is the so-called ε-directional derivative of $f$ at $\bar{x}$:

for $d \in \mathbb{R}^n$,  $f'_\varepsilon(\bar{x}, d) = \inf_{t > 0} \dfrac{f(\bar{x} + td) - f(\bar{x}) + \varepsilon}{t}$.  (4)

Likewise, the support function of $N_\varepsilon(C, \bar{x})$, denoted $(I_C)'_\varepsilon(\bar{x}, \cdot)$, is the ε-directional derivative of the indicator function $I_C$ at $\bar{x}$:

for $d \in \mathbb{R}^n$,  $(I_C)'_\varepsilon(\bar{x}, d) = \inf \{\varepsilon / t : t > 0, \ \bar{x} + td \in C\}$.  (5)

So, instead of writing the inclusions (3) between sets, we write the following inequalities between support functions:

$f'_\varepsilon(\bar{x}, d) \leq (I_C)'_\varepsilon(\bar{x}, d)$ for all $d \in \mathbb{R}^n$ and all $\varepsilon > 0$.  (6)

The trick now is to exchange the quantifiers "for all $d \in \mathbb{R}^n$" and "for all $\varepsilon > 0$". The strategy is to exploit thoroughly the condition "$f'_\varepsilon(\bar{x}, d) \leq (I_C)'_\varepsilon(\bar{x}, d)$ for all $\varepsilon > 0$" in given situations, and then let $d$ vary in $\mathbb{R}^n$. We begin by showing how this works in the presence of just one inequality constraint.

2.1. A GLOBAL OPTIMALITY CONDITION IN THE PRESENCE OF ONE INEQUALITY CONSTRAINT

Here is our optimization problem:

(P_1)  maximize  $f(x) := \frac{1}{2}\langle Ax, x\rangle + \langle a, x\rangle + \alpha$
       subject to  $x \in C := \{x \in \mathbb{R}^n : g(x) \leq 0\}$, where $g(x) := \frac{1}{2}\langle Qx, x\rangle + \langle b, x\rangle + c$.

We make the following assumptions on the data:
- $A \neq 0$ is positive semidefinite;
- $Q$ is positive semidefinite;
- there exists $x_0$ such that $g(x_0) < 0$ (Slater's condition).

Under such assumptions:
- the boundary of $C$ is $\{x : g(x) = 0\}$, while its interior is $\{x : g(x) < 0\}$;
- a maximizer of $f$ on $C$, even a local one, lies on the boundary of $C$.

Calculation of $f'_\varepsilon(\bar{x}, d)$. Due to the particular form of $f$, the calculation of $f'_\varepsilon(\bar{x}, d)$ is fairly easy ([5, Chapter XI]):

$f'_\varepsilon(\bar{x}, d) = \langle A\bar{x} + a, d\rangle + \sqrt{2\varepsilon \langle Ad, d\rangle}$ for all $d \in \mathbb{R}^n$ and all $\varepsilon > 0$.
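This closed form can be verified numerically: since $f$ is quadratic, the difference quotient in (4) equals $\langle A\bar{x} + a, d\rangle + \frac{t}{2}\langle Ad, d\rangle + \frac{\varepsilon}{t}$, which is minimized at $t = \sqrt{2\varepsilon / \langle Ad, d\rangle}$. Below is a minimal sketch in Python/NumPy; the random data are placeholders of mine, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T                      # positive semidefinite, A != 0
a = rng.standard_normal(n)
x_bar = rng.standard_normal(n)
d = rng.standard_normal(n)
eps = 0.3

# Closed form: f'_eps(x_bar, d) = <A x_bar + a, d> + sqrt(2 eps <Ad, d>)
closed = (A @ x_bar + a) @ d + np.sqrt(2 * eps * d @ A @ d)

# Definition (4): inf over t > 0 of [f(x_bar + t d) - f(x_bar) + eps] / t
# (the constant alpha cancels in the difference, so it is omitted here)
def f(x):
    return 0.5 * x @ A @ x + a @ x

ts = np.logspace(-4, 4, 20001)
grid = np.min([(f(x_bar + t * d) - f(x_bar) + eps) / t for t in ts])

print(closed, grid)   # the two values should agree to a few decimal places
```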

Calculation of $(I_C)'_\varepsilon(\bar{x}, d)$ at a boundary point $\bar{x}$ of $C$. Let $\bar{x}$ satisfy $g(\bar{x}) = 0$. We wish to calculate $\inf\{\varepsilon/t : t > 0, \ \bar{x} + td \in C\}$, which amounts to determining

$\sup\{t > 0 : g(\bar{x} + td) \leq 0\} = \sup\{t > 0 : \tfrac{1}{2} t^2 \langle Qd, d\rangle + t \langle Q\bar{x} + b, d\rangle \leq 0\} =: t_d$.

We know that $\nabla g(\bar{x}) = Q\bar{x} + b \neq 0$; so three cases arise:
- $\langle Q\bar{x} + b, d\rangle > 0$: there is no $t > 0$ for which $\tfrac{1}{2} t^2 \langle Qd, d\rangle + t \langle Q\bar{x} + b, d\rangle \leq 0$, whence $t_d = +\infty$.
- $\langle Q\bar{x} + b, d\rangle = 0$: the set of $t > 0$ for which $\tfrac{1}{2} t^2 \langle Qd, d\rangle + t \langle Q\bar{x} + b, d\rangle \leq 0$ is either empty or the whole half-line $(0, +\infty)$ (depending on whether $\langle Qd, d\rangle > 0$ or equals 0); whence $t_d = +\infty$ again.
- $\langle Q\bar{x} + b, d\rangle < 0$: then $t_d$ is $+\infty$ if $\langle Qd, d\rangle = 0$ and $-2\langle Q\bar{x} + b, d\rangle / \langle Qd, d\rangle$ if $\langle Qd, d\rangle > 0$; whence $t_d = -2\langle Q\bar{x} + b, d\rangle / \langle Qd, d\rangle$ in the two cases.

Therefore the necessary and sufficient condition for global optimality (6) is rewritten as:

$\langle A\bar{x} + a, d\rangle + \sqrt{2\varepsilon \langle Ad, d\rangle} + \dfrac{\varepsilon \langle Qd, d\rangle}{2 \langle Q\bar{x} + b, d\rangle} \leq 0$  (7)

for all $d$ satisfying $\langle Q\bar{x} + b, d\rangle < 0$ and all $\varepsilon > 0$.

The main question remains: how to get rid of the $\varepsilon > 0$? For a given $d$ satisfying $\langle Q\bar{x} + b, d\rangle < 0$, letting $\alpha = \sqrt{\varepsilon}$, the inequality in (7) becomes:

$\theta(\alpha) := \dfrac{\langle Qd, d\rangle}{2 \langle Q\bar{x} + b, d\rangle}\, \alpha^2 + \sqrt{2 \langle Ad, d\rangle}\, \alpha + \langle A\bar{x} + a, d\rangle \leq 0$ for all $\alpha > 0$.  (8)

$\theta$ is a polynomial function of degree at most 2, with nonpositive leading coefficient and with $\theta'(0) = \sqrt{2 \langle Ad, d\rangle} \geq 0$. Hence condition (8) is equivalent to "$\theta(\alpha) \leq 0$ for all $\alpha \in \mathbb{R}$", which in turn is checked through the sign of the discriminant of $\theta$ and reads:

$\Delta(d) := \langle A\bar{x} + a, d\rangle \langle Qd, d\rangle - \langle Q\bar{x} + b, d\rangle \langle Ad, d\rangle \leq 0$.  (9)

Consequently, condition (7) no longer contains $\varepsilon > 0$:

$\Delta(d) \leq 0$ for all $d$ satisfying $\langle Q\bar{x} + b, d\rangle < 0$.  (10)
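The elimination of ε can be sanity-checked numerically: for a fixed direction $d$ with $\langle Q\bar{x} + b, d\rangle < 0$, nonpositivity of $\theta$ over a grid of values of $\alpha$ should agree with the sign test $\Delta(d) \leq 0$ of (9). A small sketch in Python/NumPy, with synthetic placeholder data of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
MA = rng.standard_normal((n, n)); A = MA @ MA.T              # A PSD, A != 0
MQ = rng.standard_normal((n, n)); Q = MQ @ MQ.T + np.eye(n)  # Q positive definite
a, b = rng.standard_normal(n), rng.standard_normal(n)

# Place x_bar on the boundary {g = 0} by choosing c accordingly,
# where g(x) = 0.5 <Qx, x> + <b, x> + c
x_bar = rng.standard_normal(n)
c = -(0.5 * x_bar @ Q @ x_bar + b @ x_bar)

# Pick a direction with <Q x_bar + b, d> < 0
d = rng.standard_normal(n)
if (Q @ x_bar + b) @ d >= 0:
    d = -d

qd, qn = d @ Q @ d, (Q @ x_bar + b) @ d   # <Qd, d> and <Q x_bar + b, d> < 0
ad, an = d @ A @ d, (A @ x_bar + a) @ d   # <Ad, d> and <A x_bar + a, d>

# theta(alpha) from (8); its sign for all alpha encodes (7) for all eps = alpha^2
theta = lambda al: qd / (2 * qn) * al**2 + np.sqrt(2 * ad) * al + an

alphas = np.linspace(-100, 100, 200001)
brute = np.all(theta(alphas) <= 1e-9)      # theta <= 0 on a large grid
delta = an * qd - ad * qn                  # Delta(d) from (9)
print(brute, delta <= 1e-9)                # the two tests should agree
```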

But $\Delta(-d) = -\Delta(d)$ for all $d$, so that condition (10) can be read again in the form below.

THEOREM 2. Under the hypotheses posed at the beginning of the subsection, $\bar{x}$ is a global maximizer in (P_1) if and only if:

$\langle Ad, d\rangle - \dfrac{\langle A\bar{x} + a, d\rangle}{\langle Q\bar{x} + b, d\rangle}\, \langle Qd, d\rangle \leq 0$  (11)

for all $d$ satisfying $\langle Q\bar{x} + b, d\rangle \neq 0$.

There is no Lagrange-KKT multiplier showing up in (11); actually the first-order necessary condition for maximality is contained in (11) in a hidden form. Recall that the tangent cone $T(C, \bar{x})$ to $C$ at $\bar{x}$ is the half-space described as:

$T(C, \bar{x}) = \{d \in \mathbb{R}^n : \langle Q\bar{x} + b, d\rangle \leq 0\}$,

and that the first-order necessary condition for maximality reads as follows:

$\langle \nabla f(\bar{x}), d\rangle = \langle A\bar{x} + a, d\rangle \leq 0$ for all $d \in T(C, \bar{x})$.  (12)

Reformulated with the help of a multiplier, (12) is equivalent to: there exists $\bar{\mu} \geq 0$ such that $A\bar{x} + a = \bar{\mu}(Q\bar{x} + b)$. We now can see how our condition (11) takes the (equivalent) form of the necessary and sufficient condition for global optimality as given by Moré [6] (see also Stern and Wolkowicz [8]).

THEOREM 3. Under the hypotheses posed at the beginning of the subsection, $\bar{x}$ is a global maximizer in (P_1) if and only if there exists $\bar{\mu} \geq 0$ satisfying:

$A\bar{x} + a = \bar{\mu}(Q\bar{x} + b)$;  (13)
$-A + \bar{\mu} Q$ is positive semidefinite.

REMARKS 1. Condition (10) can easily be extended by continuity to all directions $d$ satisfying $\langle Q\bar{x} + b, d\rangle \leq 0$. Thus, while (12) is just a necessary condition for local maximality of $\bar{x}$, the following mixture of first- and second-order information on the data $f$ and $g$ at $\bar{x}$ provides a necessary and sufficient condition for global maximality of $\bar{x}$:

$\langle Ad, d\rangle \langle Q\bar{x} + b, d\rangle - \langle Qd, d\rangle \langle A\bar{x} + a, d\rangle \geq 0$  (14)

for all $d \in T(C, \bar{x})$.

The global optimality condition in Theorem 3 still holds in the following more general situation ([6, p. 199]): $Q \neq 0$ and $\inf_{\mathbb{R}^n} g < 0$.
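Computationally, the characterization in Theorem 3 amounts to fitting one multiplier and running one eigenvalue test. Below is a minimal sketch in Python/NumPy; the function name and the ball-constraint instance are illustrative assumptions of mine, not material from the paper.

```python
import numpy as np

def is_global_max_one_constraint(A, a, Q, b, x_bar, tol=1e-8):
    """Check Theorem 3 at a boundary point x_bar (g(x_bar) = 0):
    exists mu >= 0 with A x_bar + a = mu (Q x_bar + b) and mu*Q - A PSD."""
    grad_f = A @ x_bar + a
    grad_g = Q @ x_bar + b
    # Least-squares value of mu for the collinearity condition (13)
    mu = (grad_f @ grad_g) / (grad_g @ grad_g)
    if mu < -tol or np.linalg.norm(grad_f - mu * grad_g) > tol:
        return False
    # Second condition: -A + mu*Q positive semidefinite
    return np.linalg.eigvalsh(mu * Q - A)[0] >= -tol

# Usage: maximize 0.5 <Ax, x> over the unit ball, g(x) = 0.5 <x, x> - 0.5 <= 0
A = np.diag([3.0, 1.0])
a = np.zeros(2); Q = np.eye(2); b = np.zeros(2)
print(is_global_max_one_constraint(A, a, Q, b, np.array([1.0, 0.0])))  # True
print(is_global_max_one_constraint(A, a, Q, b, np.array([0.0, 1.0])))  # False
```

In this toy instance the first point is an eigenvector for the largest eigenvalue of $A$ (so $\bar\mu = 3$ and $\bar\mu Q - A \succeq 0$), while the second is a KKT point with $\bar\mu = 1$ that fails the semidefiniteness test, as expected.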

2.2. GLOBAL OPTIMALITY WITH TWO INEQUALITY CONSTRAINTS

We now admit two convex quadratic inequalities:

(P_2)  maximize  $f(x) := \frac{1}{2}\langle Ax, x\rangle + \langle a, x\rangle + \alpha$
       subject to  $x \in C := \{x \in \mathbb{R}^n : g_1(x) \leq 0 \text{ and } g_2(x) \leq 0\}$,

where $g_i(x) := \frac{1}{2}\langle Q_i x, x\rangle + \langle b_i, x\rangle + c_i$ for $i = 1, 2$.

The problem with two convex quadratic inequality constraints is often referred to as the CDT problem (see [7] and references therein). Such specific problems appear in designing trust region algorithms for constrained optimization. We make the following assumptions on the data:
- $A \neq 0$ is positive semidefinite;
- $Q_1$ and $Q_2$ are positive definite (formulas are then a little simpler to derive than when $Q_1$ and $Q_2$ are just positive semidefinite);
- there exists $x_0$ such that $g_1(x_0) < 0$ and $g_2(x_0) < 0$ (Slater's condition).

Under such assumptions:
- the boundary of $C$ is $\{x \in C : g_1(x) = 0 \text{ or } g_2(x) = 0\}$;
- a maximizer of $f$ on $C$ necessarily lies on the boundary of $C$.

The calculation of $f'_\varepsilon(\bar{x}, d)$ is the same as in the previous subsection. As for $(I_C)'_\varepsilon(\bar{x}, d)$, its calculation is trickier: we have to distinguish the case where both constraints are active at $\bar{x}$ from the case where only one constraint is active at $\bar{x}$.

2.2.1. Case where $g_1(\bar{x}) = g_2(\bar{x}) = 0$

To calculate $(I_C)'_\varepsilon(\bar{x}, d)$ for $d \neq 0$, we have to determine

$\sup\{t > 0 : g_i(\bar{x} + td) \leq 0 \text{ for } i = 1, 2\} = \sup\{t > 0 : \tfrac{1}{2} t^2 \langle Q_i d, d\rangle + t \langle Q_i \bar{x} + b_i, d\rangle \leq 0 \text{ for } i = 1, 2\} =: t_d$.

By exploring the various cases depending on whether or not $\langle \nabla g_i(\bar{x}), d\rangle = \langle Q_i \bar{x} + b_i, d\rangle < 0$, we easily obtain the following:
- $t_d = +\infty$ if either $\langle Q_1 \bar{x} + b_1, d\rangle \geq 0$ or $\langle Q_2 \bar{x} + b_2, d\rangle \geq 0$;
- $t_d = \min\left\{ -\dfrac{2\langle Q_1 \bar{x} + b_1, d\rangle}{\langle Q_1 d, d\rangle},\ -\dfrac{2\langle Q_2 \bar{x} + b_2, d\rangle}{\langle Q_2 d, d\rangle} \right\}$ if both $\langle Q_1 \bar{x} + b_1, d\rangle$ and $\langle Q_2 \bar{x} + b_2, d\rangle$ are $< 0$.

Consequently the necessary and sufficient condition for global optimality (6) becomes:

$\langle A\bar{x} + a, d\rangle + \sqrt{2\varepsilon \langle Ad, d\rangle} + \dfrac{\varepsilon}{2} \min\left\{ \dfrac{\langle Q_1 d, d\rangle}{\langle Q_1 \bar{x} + b_1, d\rangle},\ \dfrac{\langle Q_2 d, d\rangle}{\langle Q_2 \bar{x} + b_2, d\rangle} \right\} \leq 0$  (15)

for all $d$ satisfying $\langle Q_1 \bar{x} + b_1, d\rangle < 0$ and $\langle Q_2 \bar{x} + b_2, d\rangle < 0$, and all $\varepsilon > 0$.

In the present case the tangent cone $T(C, \bar{x})$ to $C$ at $\bar{x}$ is described as

$T(C, \bar{x}) = \{d \in \mathbb{R}^n : \langle Q_1 \bar{x} + b_1, d\rangle \leq 0 \text{ and } \langle Q_2 \bar{x} + b_2, d\rangle \leq 0\}$,

and the first-order necessary condition for maximality of $\bar{x}$ reads as follows:

$\langle A\bar{x} + a, d\rangle \leq 0$ for all $d \in T(C, \bar{x})$.  (16)

We get rid of the $\varepsilon > 0$ in (15) as we have done in the previous subsection, where only one inequality constraint was present. We skip the details of the calculation and directly give a necessary and sufficient condition for global optimality in a form parallel to (14).

THEOREM 4. Under the assumptions posed at the beginning of the subsection, $\bar{x}$ satisfying $g_1(\bar{x}) = g_2(\bar{x}) = 0$ is a global maximizer in (P_2) if and only if

$\langle Ad, d\rangle \langle Q_1 \bar{x} + b_1, d\rangle \langle Q_2 \bar{x} + b_2, d\rangle - \langle A\bar{x} + a, d\rangle \min\{\langle Q_1 d, d\rangle \langle Q_2 \bar{x} + b_2, d\rangle,\ \langle Q_2 d, d\rangle \langle Q_1 \bar{x} + b_1, d\rangle\} \leq 0$  (17)

for all $d \in T(C, \bar{x})$.

We know that, reformulated with the help of Lagrange-KKT multipliers $\bar{\mu}_i$, condition (16) is equivalent to the following: there exist $\bar{\mu}_1 \geq 0$ and $\bar{\mu}_2 \geq 0$ such that:

$A\bar{x} + a = \bar{\mu}_1 (Q_1 \bar{x} + b_1) + \bar{\mu}_2 (Q_2 \bar{x} + b_2)$.  (18)

Now, since the interior of $T(C, \bar{x})$ consists of those $d \in \mathbb{R}^n$ for which both $\langle Q_1 \bar{x} + b_1, d\rangle$ and $\langle Q_2 \bar{x} + b_2, d\rangle$ are $< 0$, a necessary and sufficient condition for global optimality written with the multipliers $\bar{\mu}_1, \bar{\mu}_2$, and parallel to (10) (or (11)), is as follows.

THEOREM 5. Under the assumptions posed at the beginning of the subsection, $\bar{x}$ satisfying $g_1(\bar{x}) = g_2(\bar{x}) = 0$ is a global maximizer in (P_2) if and only if there exist $\bar{\mu}_1 \geq 0$ and $\bar{\mu}_2 \geq 0$ satisfying:

$A\bar{x} + a = \bar{\mu}_1 (Q_1 \bar{x} + b_1) + \bar{\mu}_2 (Q_2 \bar{x} + b_2)$;  (19)

$\langle Ad, d\rangle - \bar{\mu}_1 \max\left\{\langle Q_1 d, d\rangle,\ \dfrac{\langle Q_1 \bar{x} + b_1, d\rangle}{\langle Q_2 \bar{x} + b_2, d\rangle} \langle Q_2 d, d\rangle\right\} - \bar{\mu}_2 \max\left\{\langle Q_2 d, d\rangle,\ \dfrac{\langle Q_2 \bar{x} + b_2, d\rangle}{\langle Q_1 \bar{x} + b_1, d\rangle} \langle Q_1 d, d\rangle\right\} \leq 0$  (20)

for all $d \in \operatorname{int} T(C, \bar{x})$.

Condition (20) looks like a second-order condition for maximality, actually mixing first- and second-order (differential) information on the data $f$, $g_1$ and $g_2$ at $\bar{x}$. Let $K := \operatorname{int} T(C, \bar{x})$ and let $H_2(\bar{x}, \bar{\mu}_1, \bar{\mu}_2; \cdot)$ denote the homogeneous (of degree two) function occurring in (20).
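Condition (20) is directly computable. The sketch below (Python/NumPy; the function names and the sampling strategy are my own) evaluates $H_2$ and probes it on random directions of $\operatorname{int} T(C, \bar{x})$: a positive value certifies non-optimality, while an all-nonpositive sample is only a numerical indication, not a proof.

```python
import numpy as np

def H2(A, Q1, b1, Q2, b2, x_bar, mu1, mu2, d):
    """Homogeneous degree-2 function of condition (20); requires d in
    int T(C, x_bar), i.e. <Q1 x_bar + b1, d> < 0 and <Q2 x_bar + b2, d> < 0."""
    g1d = (Q1 @ x_bar + b1) @ d
    g2d = (Q2 @ x_bar + b2) @ d
    assert g1d < 0 and g2d < 0
    t1 = max(d @ Q1 @ d, (g1d / g2d) * (d @ Q2 @ d))
    t2 = max(d @ Q2 @ d, (g2d / g1d) * (d @ Q1 @ d))
    return d @ A @ d - mu1 * t1 - mu2 * t2

def probe_thm5(A, Q1, b1, Q2, b2, x_bar, mu1, mu2, trials=10000, seed=0):
    """Sampling check of (20): H2 <= 0 for many random d in int T(C, x_bar)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        d = rng.standard_normal(len(x_bar))
        if (Q1 @ x_bar + b1) @ d < 0 and (Q2 @ x_bar + b2) @ d < 0:
            if H2(A, Q1, b1, Q2, b2, x_bar, mu1, mu2, d) > 1e-9:
                return False   # certificate of non-optimality found
    return True                # indication only; not a certified answer
```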

We indeed have:

$H_2(\bar{x}, \bar{\mu}_1, \bar{\mu}_2; d) \leq 0$ for all $d \in K \cup (-K)$,  (21)

but, contrary to the case where only one inequality constraint was present (Theorems 2 and 3), the closure of $K \cup (-K)$ is not the whole $\mathbb{R}^n$, whence (17) or (21) cannot be extended to all directions $d$ in $\mathbb{R}^n$.

$H_2(\bar{x}, \bar{\mu}_1, \bar{\mu}_2; \cdot)$ can be compared to the quadratic form associated with the Hessian matrix of the (usual) Lagrangian function $x \mapsto L(x, \bar{\mu}_1, \bar{\mu}_2) := f(x) - \bar{\mu}_1 g_1(x) - \bar{\mu}_2 g_2(x)$ at $\bar{x}$. Indeed,

$H_2(\bar{x}, \bar{\mu}_1, \bar{\mu}_2; d) \leq \langle Ad, d\rangle - \bar{\mu}_1 \langle Q_1 d, d\rangle - \bar{\mu}_2 \langle Q_2 d, d\rangle$  (22)
$= \langle \nabla^2_{xx} L(\bar{x}, \bar{\mu}_1, \bar{\mu}_2)\, d, d\rangle$ for all $d \in \mathbb{R}^n$.  (23)

One thus recovers from Theorem 5 the following well-known sufficient condition for global maximality: if, for $\bar{x}$ satisfying $g_1(\bar{x}) = g_2(\bar{x}) = 0$, there exist $\bar{\mu}_1 \geq 0$ and $\bar{\mu}_2 \geq 0$ such that (18) holds true and $A - \bar{\mu}_1 Q_1 - \bar{\mu}_2 Q_2$ is negative semidefinite, then $\bar{x}$ is a global maximizer of (P_2); a sketch of this eigenvalue test is given below. Note also that a (sharpened) necessary condition for global maximality was obtained in [7, p. 589]: under the assumptions of our problem, it states that if $\bar{x}$ is a global maximizer of (P_2), then there exist $\bar{\mu}_1 \geq 0$ and $\bar{\mu}_2 \geq 0$ such that (18) holds true and $A - \bar{\mu}_1 Q_1 - \bar{\mu}_2 Q_2$ has at most one strictly positive eigenvalue. Deriving the latter property from (20) is not straightforward.
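Both eigenvalue statements of the preceding paragraph are one-liners in practice. A minimal sketch (Python/NumPy; the function name is an assumption of mine):

```python
import numpy as np

def lagrangian_hessian_tests(A, Q1, Q2, mu1, mu2, tol=1e-9):
    """Eigenvalue tests on A - mu1*Q1 - mu2*Q2, the Hessian of the Lagrangian L.
    'sufficient': negative semidefinite, which (given (18)) guarantees that
                  x_bar is a global maximizer of (P_2);
    'necessary' : at most one strictly positive eigenvalue, the sharpened
                  necessary condition of [7, p. 589]."""
    w = np.linalg.eigvalsh(A - mu1 * Q1 - mu2 * Q2)   # ascending eigenvalues
    return {"sufficient": w[-1] <= tol, "necessary": np.sum(w > tol) <= 1}
```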

2.2.2. Case where $g_1(\bar{x}) = 0$ and $g_2(\bar{x}) < 0$

Here again the tangent cone $T(C, \bar{x})$ to $C$ at $\bar{x}$ is the half-space

$T(C, \bar{x}) = \{d \in \mathbb{R}^n : \langle Q_1 \bar{x} + b_1, d\rangle \leq 0\}$.

Even if only the first constraint function $g_1$ is active at $\bar{x}$, the second one $g_2$ necessarily plays a role, since we are considering a global optimality condition at $\bar{x}$. To calculate $(I_C)'_\varepsilon(\bar{x}, d)$ for $d \neq 0$, we need to determine

$t_d := \sup\{t > 0 : g_i(\bar{x} + td) \leq 0 \text{ for } i = 1, 2\}$
$\phantom{t_d :}= \sup\{t > 0 : \tfrac{1}{2} t^2 \langle Q_1 d, d\rangle + t \langle Q_1 \bar{x} + b_1, d\rangle \leq 0 \text{ and } \tfrac{1}{2} t^2 \langle Q_2 d, d\rangle + t \langle Q_2 \bar{x} + b_2, d\rangle + g_2(\bar{x}) \leq 0\}$.

The first inequality in the definition of $t_d$ is treated as in Section 2.2.1. It gives rise to

$t_d^{(1)} = \begin{cases} +\infty & \text{if } \langle Q_1 \bar{x} + b_1, d\rangle \geq 0, \\[4pt] -\dfrac{2\langle Q_1 \bar{x} + b_1, d\rangle}{\langle Q_1 d, d\rangle} & \text{if } \langle Q_1 \bar{x} + b_1, d\rangle < 0. \end{cases}$  (24)

Handling the second inequality in the definition of $t_d$ leads us to

$t_d^{(2)} = -\dfrac{2 R_2(\bar{x}, d)}{\langle Q_2 d, d\rangle}$, where $2 R_2(\bar{x}, d) := \langle Q_2 \bar{x} + b_2, d\rangle - \left[ (\langle Q_2 \bar{x} + b_2, d\rangle)^2 - 2 \langle Q_2 d, d\rangle\, g_2(\bar{x}) \right]^{1/2}$.

($R_2(\bar{x}, d)$ is a sort of residual term due to the fact that $g_2$ is not active at $\bar{x}$; $R_2(\bar{x}, d) < 0$ for $d \neq 0$.) Hence,

$t_d = \min(t_d^{(1)}, t_d^{(2)})$  (25)
$\phantom{t_d} = \begin{cases} +\infty & \text{if } \langle Q_1 \bar{x} + b_1, d\rangle \geq 0, \\[4pt] \min\left\{ -\dfrac{2\langle Q_1 \bar{x} + b_1, d\rangle}{\langle Q_1 d, d\rangle},\ -\dfrac{2 R_2(\bar{x}, d)}{\langle Q_2 d, d\rangle} \right\} & \text{if } \langle Q_1 \bar{x} + b_1, d\rangle < 0. \end{cases}$  (26)

Consequently the necessary and sufficient condition for global optimality (6) is reformulated as:

$\langle A\bar{x} + a, d\rangle + \sqrt{2\varepsilon \langle Ad, d\rangle} + \dfrac{\varepsilon}{2} \min\left\{ \dfrac{\langle Q_1 d, d\rangle}{\langle Q_1 \bar{x} + b_1, d\rangle},\ \dfrac{\langle Q_2 d, d\rangle}{R_2(\bar{x}, d)} \right\} \leq 0$  (27)

for all $d$ satisfying $\langle Q_1 \bar{x} + b_1, d\rangle < 0$ and all $\varepsilon > 0$.

We get rid of the $\varepsilon > 0$ in (27) as in the previous subsections. This leads us to the following analogue of Theorem 4.

THEOREM 6. Under the assumptions posed at the beginning of the subsection, $\bar{x}$ satisfying $g_1(\bar{x}) = 0$ and $g_2(\bar{x}) < 0$ is a global maximizer in (P_2) if and only if

$\langle Ad, d\rangle \langle Q_1 \bar{x} + b_1, d\rangle R_2(\bar{x}, d) - \langle A\bar{x} + a, d\rangle \min\{\langle Q_1 d, d\rangle R_2(\bar{x}, d),\ \langle Q_2 d, d\rangle \langle Q_1 \bar{x} + b_1, d\rangle\} \leq 0$  (28)

for all $d \in T(C, \bar{x})$.

Note that the expression above is no longer symmetric in $d$ (since $R_2(\bar{x}, -d) \neq -R_2(\bar{x}, d)$). Reformulated with the help of the Lagrange-KKT multiplier $\bar{\mu}_1$ associated with the constraint function $g_1$ active at $\bar{x}$, we get from Theorem 6 the following analogue of Theorem 5.

THEOREM 7. Under the same assumptions as before, $\bar{x}$ satisfying $g_1(\bar{x}) = 0$ and $g_2(\bar{x}) < 0$ is a global maximizer in (P_2) if and only if there exists $\bar{\mu}_1 \geq 0$ satisfying:

$A\bar{x} + a = \bar{\mu}_1 (Q_1 \bar{x} + b_1)$;  (29)

$\langle Ad, d\rangle - \bar{\mu}_1 \max\left\{\langle Q_1 d, d\rangle,\ \dfrac{\langle Q_1 \bar{x} + b_1, d\rangle}{R_2(\bar{x}, d)}\, \langle Q_2 d, d\rangle\right\} \leq 0$  (30)

for all $d \in \operatorname{int} T(C, \bar{x})$.
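The residual $R_2$ and condition (30) translate directly into code. A short sketch (Python/NumPy; the names are mine), valid under this subsection's assumptions ($Q_2$ positive definite, $g_2(\bar{x}) < 0$, and $d$ taken in $\operatorname{int} T(C, \bar{x})$):

```python
import numpy as np

def R2(Q2, b2, c2, x_bar, d):
    """Residual term: 2 R2 = <Q2 x_bar + b2, d>
       - sqrt(<Q2 x_bar + b2, d>^2 - 2 <Q2 d, d> g2(x_bar)).
    R2 < 0 for d != 0 since g2(x_bar) < 0 is assumed."""
    g2 = 0.5 * x_bar @ Q2 @ x_bar + b2 @ x_bar + c2
    s = (Q2 @ x_bar + b2) @ d
    return 0.5 * (s - np.sqrt(s * s - 2 * (d @ Q2 @ d) * g2))

def cond30(A, Q1, b1, Q2, b2, c2, x_bar, mu1, d):
    """LHS of (30) at a direction d with <Q1 x_bar + b1, d> < 0; given (29),
    global maximality requires this value to be <= 0 for all such d."""
    g1d = (Q1 @ x_bar + b1) @ d
    assert g1d < 0
    r2 = R2(Q2, b2, c2, x_bar, d)
    return d @ A @ d - mu1 * max(d @ Q1 @ d, (g1d / r2) * (d @ Q2 @ d))
```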

2.3. A GLOBAL OPTIMALITY CONDITION IN THE PRESENCE OF m INEQUALITY CONSTRAINTS

The general problem to be considered is:

(P_m)  maximize  $f(x) := \frac{1}{2}\langle Ax, x\rangle + \langle a, x\rangle + \alpha$
       subject to  $x \in C := \{x \in \mathbb{R}^n : g_1(x) \leq 0, \dots, g_m(x) \leq 0\}$,

where $g_i(x) := \frac{1}{2}\langle Q_i x, x\rangle + \langle b_i, x\rangle + c_i$ for $i = 1, 2, \dots, m$. We assume the following on the data:
- $A \neq 0$ is positive semidefinite;
- $Q_1, Q_2, \dots, Q_m$ are positive definite;
- there exists $x_0$ such that $g_i(x_0) < 0$ for all $i$ (Slater's condition).

For $\bar{x} \in C$, let $I(\bar{x}) := \{i : g_i(\bar{x}) = 0\}$. We have that

$T(C, \bar{x}) = \{d \in \mathbb{R}^n : \langle Q_i \bar{x} + b_i, d\rangle \leq 0 \text{ for all } i \in I(\bar{x})\}$,
$\operatorname{int} T(C, \bar{x}) = \{d \in \mathbb{R}^n : \langle Q_i \bar{x} + b_i, d\rangle < 0 \text{ for all } i \in I(\bar{x})\}$.

For $i \notin I(\bar{x})$, we denote by $R_i(\bar{x}, d)$ the associated residual term, defined as:

$2 R_i(\bar{x}, d) := \langle Q_i \bar{x} + b_i, d\rangle - \left[ (\langle Q_i \bar{x} + b_i, d\rangle)^2 - 2 \langle Q_i d, d\rangle\, g_i(\bar{x}) \right]^{1/2}$.

The method we now follow is the one developed in full detail for $m = 2$ in Section 2.2. We skip over the calculations and present the final statement.

THEOREM 8. Under the assumptions posed in this subsection, $\bar{x} \in C$ is a global maximizer in (P_m) if and only if there exist $\bar{\mu}_j \geq 0$, $j \in I(\bar{x})$, such that:

$A\bar{x} + a = \sum_{j \in I(\bar{x})} \bar{\mu}_j (Q_j \bar{x} + b_j)$;

$\langle Ad, d\rangle - \sum_{j \in I(\bar{x})} \bar{\mu}_j \max\left\{ \langle Q_j d, d\rangle;\ \dfrac{\langle Q_j \bar{x} + b_j, d\rangle}{\langle Q_i \bar{x} + b_i, d\rangle}\, \langle Q_i d, d\rangle,\ i \in I(\bar{x}),\ i \neq j;\ \dfrac{\langle Q_j \bar{x} + b_j, d\rangle}{R_i(\bar{x}, d)}\, \langle Q_i d, d\rangle,\ i \notin I(\bar{x}) \right\} \leq 0$

for all $d \in \operatorname{int} T(C, \bar{x})$.
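For completeness, a sampling-oriented sketch of the test in Theorem 8 (Python/NumPy; the function and its calling conventions are my own assumptions, and evaluating the condition at finitely many directions is a numerical probe, not a certified check):

```python
import numpy as np

def thm8_lhs(A, Qs, bs, cs, x_bar, mus, active, d):
    """LHS of the second condition in Theorem 8 at a direction d in
    int T(C, x_bar). Qs, bs, cs: lists of constraint data Q_i, b_i, c_i;
    active: the index set I(x_bar); mus: dict mapping j in active to mu_j.
    Global maximality requires a nonpositive value for all such d."""
    def slope(i):            # <Q_i x_bar + b_i, d>; must be < 0 for active i
        return (Qs[i] @ x_bar + bs[i]) @ d
    def resid(i):            # residual R_i(x_bar, d) < 0 for inactive i
        gi = 0.5 * x_bar @ Qs[i] @ x_bar + bs[i] @ x_bar + cs[i]
        s = slope(i)
        return 0.5 * (s - np.sqrt(s * s - 2 * (d @ Qs[i] @ d) * gi))
    m = len(Qs)
    val = d @ A @ d
    for j in active:
        terms = [d @ Qs[j] @ d]
        terms += [(slope(j) / slope(i)) * (d @ Qs[i] @ d)
                  for i in active if i != j]
        terms += [(slope(j) / resid(i)) * (d @ Qs[i] @ d)
                  for i in range(m) if i not in active]
        val -= mus[j] * max(terms)
    return val
```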

3. Conclusion

For our nonconvex quadratic optimization problems, the global optimality conditions consist in combinations of two conditions:
- the classical first-order condition for optimality;
- a complementary condition stating that some homogeneous (of degree two) function, mixing first- and second-order (differential) information about the data, should have a constant sign on a convex cone.

We agree that it is hard to assess the practical use of these global optimality conditions. The situation was the same some years ago for optimizing a convex quadratic function over a convex polyhedral set; several algorithms have since been designed that use such global optimality conditions. We expect the conditions presented here to provide material for future work in the same direction.

References

1. C. Floudas and V. Visweswaran, Quadratic optimization, in: Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, 1995, pp. 217-269.
2. J.-B. Hiriart-Urruty, From convex to nonconvex optimization, necessary and sufficient conditions for global optimality, in: Nonsmooth Optimization and Related Topics, Plenum, New York, 1989, pp. 219-239.
3. J.-B. Hiriart-Urruty, Conditions for global optimality, in: Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, 1995, pp. 1-26.
4. J.-B. Hiriart-Urruty, Global optimality conditions 2, Journal of Global Optimization 13 (1998), 349-367.
5. J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms, Grundlehren der mathematischen Wissenschaften 305 & 306, Springer, Berlin, Heidelberg, 1993.
6. J. Moré, Generalizations of the trust region problem, Optimization Methods & Software 2 (1993), 189-209.
7. J.-M. Peng and Y.-X. Yuan, Optimality conditions for the minimization of a quadratic with two quadratic constraints, SIAM Journal on Optimization 7 (1997), 576-594.
8. R. Stern and H. Wolkowicz, Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations, SIAM Journal on Optimization 5 (1995), 286-313.