International Journal of Pure and Applied Mathematics
Volume 91, No. 3, 2014, 291-303
ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version)
url: http://www.ijpam.eu
doi: http://dx.doi.org/10.12732/ijpam.v91i3.2

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION

Wei Wang^1, Miao Chen^2, Lingling Zhang^3
^1 School of Mathematics, Liaoning Normal University, Dalian, Liaoning, 116029, P.R. China
^2 School of Mathematics, Liaoning Normal University, Dalian, Liaoning, 116029, P.R. China

Received: August 28, 2013. © 2014 Academic Publications, Ltd. url: www.acadpubl.eu

Abstract: Nonsmooth convex optimization problems form an important class of problems in operations research. Bundle methods are considered to be among the most efficient methods for solving nonsmooth optimization problems, and they have already been applied to many practical problems. In this paper, a proximal bundle method based on an approximate model of the objective function is developed to minimize the sum of a maximum eigenvalue function and a general nonsmooth convex function. We prove that the sequence generated by the algorithm converges to an optimal solution of the original problem. Finally, the algorithm is applied to a class of constrained maximum eigenvalue problems.

AMS Subject Classification: 15A18, 49J52, 52A41
Key Words: nonsmooth optimization, bundle method, maximum eigenvalue function

1. Introduction

Nonsmooth optimization problems arise in many fields of application, for example in economics (see [1]), mechanics (see [2]), engineering (see [3]) and optimal control (see [4]). They are generally difficult to solve.
The methods for nonsmooth optimization can be divided into two main classes: subgradient methods and bundle methods. In this paper we focus on bundle methods, and in particular on their approximate model. Consider the following class of problems:
$$(P)\qquad \min_{y\in\mathbb{R}^m}\ \lambda_{\max}(A(y)) + g(y),$$
where $\lambda_{\max}(A(y))$ is a maximum eigenvalue function, $A$ is a linear operator from $\mathbb{R}^m$ to $S^n$, and $g(y)$ is a nonsmooth convex function. C. Helmberg and F. Oustry (see [5]) used a bundle method to minimize a class of unconstrained maximum eigenvalue functions. C. Sagastizábal and M. Solodov (see [6]) adopted a bundle-filter method for nonsmooth convex constrained optimization. Here, the problem of minimizing the sum of a maximum eigenvalue function and a general nonsmooth convex function is solved by a proximal bundle method. Furthermore, we extend the algorithm to a class of constrained maximum eigenvalue problems.

In order to obtain a minimizer of (P), we solve a sequence of subproblems. We consider a class of proximal bundle methods for (P). The sequence of stability centers $\{x^k\}$ is a subsequence of $\{y^k\}$, where $\{y^k\}$ is the sequence of sample points used to define an approximate model of the objective function $F(y) := \lambda_{\max}(A(y)) + g(y)$. In order to generate the candidate points $y^k$, we construct an approximate model $\hat F_k(y)$ of the objective function $F(y)$.

The structure of this paper is as follows. Section 2 presents the approximate model of the objective function, under the condition $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot)))\cap\mathrm{ri}(\mathrm{dom}\,g)\neq\emptyset$. The proximal bundle method is derived in Section 3 and its convergence is studied in Section 4. In Section 5, a class of constrained maximum eigenvalue optimization problems is solved by the proximal bundle method. Throughout the paper, $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ denote the standard norm and inner product of a Hilbert space.

2. The Approximate Model of the Objective Function

In this section we mainly study the approximate model of the objective function. We first introduce the subdifferential of the maximum eigenvalue function $\partial\lambda_{\max}(X)$, $X\in S^n$.
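For concreteness, the following minimal sketch fixes a hypothetical instance of problem (P): an affine operator $A(y) = A_0 + y_1A_1 + \dots + y_mA_m$ mapping $\mathbb{R}^m$ into $S^n$, its adjoint $A^*$ (used by the subdifferential formulas below), and, purely as an illustrative choice that the paper does not prescribe, $g(y)=\|y\|_1$. The matrices, names and dimensions are our own assumptions; the sketch is reused by the later examples.

```python
import numpy as np

# Hypothetical data for (P); the matrices A_j and the choice g = ||.||_1 are
# illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, m = 5, 3

def _sym(M):
    return (M + M.T) / 2.0

A0 = _sym(rng.standard_normal((n, n)))
As = [_sym(rng.standard_normal((n, n))) for _ in range(m)]

def A_op(y):
    """A(y) = A_0 + sum_j y_j A_j, a symmetric n x n matrix."""
    return A0 + sum(yj * Aj for yj, Aj in zip(y, As))

def A_adj(V):
    """Adjoint operator A*(V) = (<A_1, V>, ..., <A_m, V>) in R^m."""
    return np.array([np.sum(Aj * V) for Aj in As])

def g(y):
    """Illustrative nonsmooth convex term g(y) = ||y||_1."""
    return np.sum(np.abs(y))

def F(y):
    """Objective of (P): F(y) = lambda_max(A(y)) + g(y)."""
    return np.linalg.eigvalsh(A_op(y))[-1] + g(y)
```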

For convenience of calculation, we first deal with the maximum eigenvalue function itself. Convexity is an important property enjoyed by the maximum eigenvalue function: it is the support function of the compact convex set
$$C_n := \{V\in S^n : V\succeq 0,\ \mathrm{tr}\,V = 1\}$$
(see [7]). Then
$$\lambda_{\max}(X) = \max_{v\in\mathbb{R}^n,\ \|v\|=1} v^T X v = \max_{V\in C_n}\langle V, X\rangle,$$
where $\langle\cdot,\cdot\rangle$ is the standard scalar product in $S^n$. The subdifferential $\partial\lambda_{\max}(X)$ is the face of $C_n$ exposed by $X$. Let $r$ be the multiplicity of $\lambda_{\max}(X)$ and let $Q$ be an $n\times r$ matrix whose columns form an orthonormal basis of the corresponding eigenspace. Then
$$\partial\lambda_{\max}(X) = \{QZQ^T : Z\in C_r\}.$$

We consider the maximum eigenvalue function of the form $\lambda_{\max}(A(x))$, where $A$ is a linear operator from $\mathbb{R}^m$ to $S^n$: $A(x) = A_0 + Ax$. It follows that
$$\partial[\lambda_{\max}(A(x))] = A^*\,\partial\lambda_{\max}(A(x)) = A^*\{Q(A(x))ZQ(A(x))^T : Z\in C_r\},$$
and hence
$$\partial\lambda_{\max}(A(x^i)) = \{A^*(Q(A(x^i))ZQ(A(x^i))^T) : Z\in C_r\}.$$

Suppose $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot)))\cap\mathrm{ri}(\mathrm{dom}\,g)\neq\emptyset$. Then we construct the following approximate model for $F(y)$:
$$\hat F_k(y) := \max_{i=1,2,\dots,k}\{F(x^i) + \langle m^i,\ y - x^i\rangle\},\qquad m^i\in\partial F(x^i) = \partial[\lambda_{\max}(A(x^i)) + g(x^i)].$$
Choose $m^i = A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i$, where $A^*(Q(A(x^i))ZQ(A(x^i))^T)\in\partial\lambda_{\max}(A(x^i))$ and $s^i\in\partial g(x^i)$. By the condition $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot)))\cap\mathrm{ri}(\mathrm{dom}\,g)\neq\emptyset$ and a result from convex analysis [8], we have
$$A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i \in \partial\lambda_{\max}(A(x^i)) + \partial g(x^i) = \partial[\lambda_{\max}(A(x^i)) + g(x^i)].$$

Let the terms $e_i$ be the linearization errors at $x^k$,
$$e_i := F(x^k) - F(x^i) - \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i,\ x^k - x^i\rangle.$$
With this notation, the approximate model takes the form
$$\hat F_k(y) = F(x^k) + \max_{i=1,2,\dots,k}\bigl\{-e_i + \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i,\ y - x^k\rangle\bigr\}.$$
In the next section, the proximal bundle method based on this approximate model is given.
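Continuing the sketch above, one particular subgradient $m^i\in\partial F(x^i)$ can be obtained by taking $Z$ to be a rank-one element of $C_r$, so that $QZQ^T = qq^T$ for a unit eigenvector $q$ associated with $\lambda_{\max}(A(x^i))$, together with the sign vector as a subgradient of the illustrative $g=\|\cdot\|_1$. The helper names below are ours, and the cutting-plane model is evaluated directly from a list of pairs $(m^i, e_i)$.

```python
def subgrad_F(x):
    """One element m = A*(q q^T) + s of the subdifferential of F at x:
    take Z = e_1 e_1^T in C_r, so Q Z Q^T = q q^T for a unit eigenvector q of
    lambda_max(A(x)); s = sign(x) is a subgradient of the illustrative g = ||.||_1."""
    _, V = np.linalg.eigh(A_op(x))
    q = V[:, -1]                      # unit eigenvector of the largest eigenvalue
    return A_adj(np.outer(q, q)) + np.sign(x)

def model_value(y, center, bundle):
    """Cutting-plane model F_k(y) = F(x^k) + max_i { -e_i + <m^i, y - x^k> },
    where `bundle` is a list of pairs (m^i, e_i) of subgradients and
    linearization errors taken at the current stability center x^k."""
    return F(center) + max(-e_i + m_i @ (y - center) for m_i, e_i in bundle)
```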

3. The Proximal Bundle Method Algorithm

Algorithm (the Proximal Bundle Method).

Step 0. Let $\varepsilon\geq 0$ and $m\in(0,1)$ be given parameters. Choose $x^1$, call the black box with $y = x^1$ to obtain $F(x^1)$ and $A^*(Q(A(x^1))ZQ(A(x^1))^T) + s^1\in\partial(\lambda_{\max}(A(x^1)) + g(x^1))$, construct the model $\hat F_1$, and let $k = 1$, $\delta_1 = +\infty$.

Step 1. If $\delta_k\leq\varepsilon$, stop.

Step 2. Solve the quadratic program
$$(P_1)\qquad \min_{y\in\mathbb{R}^m}\ \hat F_k(y) + \tfrac{1}{2}\eta_k\|y - x^k\|^2,$$
whose solution is the candidate point $y^{k+1}$, and compute the nominal decrease
$$\delta_{k+1} := F(x^k) - \hat F_k(y^{k+1}) - \tfrac{1}{2}\eta_k\|y^{k+1} - x^k\|^2.$$

Step 3. Call the black box with $y = y^{k+1}$. If $F(x^k) - F(y^{k+1})\geq m\,\delta_{k+1}$, let $x^{k+1} = y^{k+1}$; we call this a serious step. Otherwise, let $x^{k+1} = x^k$; we call this a null step.

Step 4. Append $y^{k+1}$ to the bundle and construct $\hat F_{k+1}$. Set $k = k+1$ and go to Step 1.

At Step 2 we can obtain the candidate point $y^{k+1}$ from the dual problem of $(P_1)$. This is guaranteed by the following theorem.

Theorem 1. Let $y^{k+1}$ be the unique solution of $(P_1)$ and assume $\eta_k > 0$. Then
$$y^{k+1} = x^k - \frac{1}{\eta_k}\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr),$$
where $\bar\alpha = (\bar\alpha_1,\bar\alpha_2,\dots,\bar\alpha_{np_k})$ is a solution of
$$(D_1)\qquad \min\ \frac{1}{2\eta_k}\Bigl\|\sum_{i=1}^{np_k}\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)\Bigr\|^2 + \sum_{i=1}^{np_k}\alpha_i e_i$$
$$\text{s.t.}\quad \alpha\in\Delta_k = \Bigl\{\alpha :\ \alpha_i\in[0,1],\ \sum_{i=1}^{np_k}\alpha_i = 1,\ i = 1,2,\dots,np_k\Bigr\}.$$
In addition, the following relations hold:

(1) $\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)\in\partial\hat F_k(y^{k+1})$;
(2) $\delta_{k+1} = \varepsilon_k + \frac{1}{2\eta_k}\bigl\|\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)\bigr\|^2$, where $\varepsilon_k = \sum_{i=1}^{np_k}\bar\alpha_i e_i$;

(3) $\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)\in\partial_{\varepsilon_k}F(x^k)$.

Proof. Write $(P_1)$ as a QP with an extra scalar variable $r$: $(P_1)$ is equivalent to
$$(P_2)\qquad \min_{(y,r)\in\mathbb{R}^m\times\mathbb{R}}\ r + \tfrac{1}{2}\eta_k\|y - x^k\|^2$$
$$\text{s.t.}\quad F(x^k) - e_i + \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i,\ y - x^k\rangle\leq r,\quad i = 1,2,\dots,np_k.$$
In view of strong convexity, the dual problem of $(P_1)$ is equivalent to $(D_1)$, and $y^{k+1} = x^k - \frac{1}{\eta_k}\sum_{i=1}^{np_k}\bar\alpha_i(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i)$ is the solution of $(P_1)$; hence (1) holds. Because there is no duality gap, the optimal value of $(P_1)$ equals the optimal value of $(D_1)$, hence (2) holds. The relation
$$F(y)\geq\hat F_k(y)\geq\hat F_k(y^{k+1}) + \Bigl\langle\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr),\ y - y^{k+1}\Bigr\rangle$$
gives the desired result (3).

As the iterations proceed, the number of elements in the bundle increases. When the size of the bundle becomes too big, it is necessary to compress it and clean the model. Let $np_{\max}$ be the maximal size of the bundle, and $np_k$ its current size. The compression sub-algorithm to be appended at Step 4 is the following.

Step 4'. Let $n_a = |\{i\leq np_k : \bar\alpha_i > 0\}|$ be the number of active indices. If $n_a\leq np_{\max} - 1$, then delete all inactive couples from the bundle, set $n_{\mathrm{left}} = n_a$, and define $np_{k+1} = n_{\mathrm{left}} + 1$. Otherwise, delete all inactive couples from the bundle, and discard two or more couples $(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i,\ e_i)$,
then compress the discarded couples into a single aggregate couple. If $n_{\mathrm{left}}\leq np_{\max} - 2$, define $np_{k+1} = n_{\mathrm{left}} + 2$. If $n_a\leq np_{\max} - 1$, append $(A^*(Q(A(x^{np_{k+1}}))ZQ(A(x^{np_{k+1}}))^T) + s^{np_{k+1}},\ e_{np_{k+1}})$ to the bundle, with
$$e_{np_{k+1}} = \begin{cases} 0, & \text{if a serious step was made},\\[2pt] F(x^k) - F(y^{k+1}) - \langle A^*(Q(A(x^{np_{k+1}}))ZQ(A(x^{np_{k+1}}))^T) + s^{np_{k+1}},\ x^k - y^{k+1}\rangle, & \text{if a null step was made}.\end{cases}$$
Construct $\hat F_{k+1}$, let $k = k+1$, and go to Step 1.

Remark 1. When the algorithm reaches an iteration at which the number $np_k$ becomes too big, delete all inactive couples from the bundle. If the remaining couples are still too many, then synthesize the indispensable information of the active bundle elements. Using the information $\bigl(\sum_{i=1}^{np_k}\bar\alpha_i(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i),\ \varepsilon_k\bigr)$, which is defined in Theorem 1, construct the aggregate linearization
$$F_\alpha(y) := F(x^k) - \varepsilon_k + \Bigl\langle\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr),\ y - x^k\Bigr\rangle.$$
For $F_\alpha(y)$ it holds that:

(1) $F_\alpha(y) = \hat F_k(y^{k+1}) + \bigl\langle\sum_{i=1}^{np_k}\bar\alpha_i(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i),\ y - y^{k+1}\bigr\rangle$;

(2) for all $y\in\mathbb{R}^m$, $\hat F_k(y)\geq F_\alpha(y)$.

Remark 2. When the maximum capacity is reached, for instance when $k = np_{\max}$, suppose we decide to discard the elements $x^1, x^2,\dots,x^t$ ($t < k$) from the bundle and to append the aggregate couple. The resulting model is
$$\hat F_{k+1}(y) = \max\Bigl\{\max_{t+1\leq i\leq k+1}\bigl\{F(x^i) + \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i,\ y - y^i\rangle\bigr\},\ F_\alpha(y)\Bigr\}.$$
Then
$$F_\alpha(y)\leq\hat F_{k+1}(y)\leq F(y);\qquad \hat F_{k+1}(y)\geq F(x^{k+1}) + \langle A^*(Q(A(x^{k+1}))ZQ(A(x^{k+1}))^T) + s^{k+1},\ y - y^{k+1}\rangle.$$
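The following sketch puts Theorem 1 and Steps 1-4 together for the hypothetical instance introduced earlier (reusing F, subgrad_F and NumPy from the previous sketches): the dual problem $(D_1)$ is solved over the simplex with a generic solver (SciPy's SLSQP, purely as a stand-in for a QP solver), the candidate point and the nominal decrease are recovered as in Theorem 1, and the descent test of Step 3 is applied. Bundle compression and the aggregate couple of Remarks 1 and 2 are omitted, and the recentering of old linearization errors after a serious step is only indicated in a comment.

```python
from scipy.optimize import minimize

def bundle_step(center, bundle, eta, m_par=0.1):
    """One iteration of the proximal bundle method (Steps 1-4).
    `bundle` is a list of pairs (m_i, e_i); the candidate y^{k+1}, the nominal
    decrease delta_{k+1} and the aggregate information come from Theorem 1."""
    G = np.array([m_i for m_i, _ in bundle])        # np_k x m matrix of subgradients
    e = np.array([e_i for _, e_i in bundle])        # linearization errors at `center`
    np_k = len(bundle)

    def dual_obj(alpha):                            # objective of (D_1)
        p = G.T @ alpha
        return p @ p / (2.0 * eta) + e @ alpha

    res = minimize(dual_obj, np.full(np_k, 1.0 / np_k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * np_k,
                   constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}])
    alpha = res.x
    p_hat = G.T @ alpha                             # aggregate subgradient
    eps_k = e @ alpha
    y_new = center - p_hat / eta                    # Theorem 1: candidate point
    delta = eps_k + p_hat @ p_hat / (2.0 * eta)     # Theorem 1(2): nominal decrease

    serious = F(center) - F(y_new) >= m_par * delta  # Step 3: descent test
    new_center = y_new if serious else center
    # After a serious step the errors e_i of the old couples should be
    # recentered at the new stability center; omitted to keep the sketch short.
    m_new = subgrad_F(y_new)
    e_new = F(new_center) - F(y_new) - m_new @ (new_center - y_new)
    bundle.append((m_new, e_new))                   # Step 4: enlarge the bundle
    return new_center, delta, serious
```

Starting from some $x^1$ with `bundle = [(subgrad_F(x1), 0.0)]` and repeating `bundle_step` until `delta` falls below the tolerance $\varepsilon$ mimics Steps 0-4 above.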

4. Convergence Analysis

Next, we discuss the convergence results in two cases: (1) $\varepsilon > 0$; (2) $\varepsilon = 0$.

Case (1): When $\varepsilon > 0$, by the following theorem there is an index $k_{\mathrm{last}}$ for which $\delta_{k_{\mathrm{last}}}\leq\varepsilon$ if (P) has minimizers. Therefore $x^{k_{\mathrm{last}}}$ is the minimizer.

Theorem 2. Consider the algorithm and suppose it loops forever. Write $\bar F := \lim_{k\in K_s} F(x^k)$ and suppose $\bar F > -\infty$ ($K_s$ is the set of indices $k$ at which a serious step is made). Then
$$0\leq\sum_{k\in K_s}\delta_{k+1}\leq\frac{F(x^1) - \bar F + \varepsilon}{m}.$$

Proof. Note first that, since $\varepsilon\geq 0$, for the algorithm to loop forever the nominal decrease must satisfy $\delta_k > 0$ for all $k\in K_s$. Since the descent test is satisfied, $x^{k+1} = y^{k+1}$ and $F(x^k) - F(x^{k+1})\geq 0$. Let $k'$ be the index following $k$ in $K_s$. Between $k$ and $k'$ the algorithm makes null steps only: $x^{k+1} = x^{k+j}$ for all $j = 2,3,\dots,k'-k$. The descent test at $k'$ gives $F(x^{k+1}) - F(x^{k'+1})\geq m\,\delta_{k'+1}$. Hence, for any $k\in K_s$ and $\varepsilon > 0$,
$$m\sum_{k'\leq k,\ k'\in K_s}\delta_{k'+1}\leq\sum_{k'\leq k,\ k'\in K_s}\bigl(F(x^{k'}) - F(x^{k'+1})\bigr) = F(x^1) - F(x^{k'+1})\leq F(x^1) - \bar F + \varepsilon.$$
Now letting $k\to\infty$ gives the desired result.

Case (2): When $\varepsilon = 0$, if $\delta_{k+1} = 0$, then by the result of Theorem 1 the algorithm has found a solution $x^k$ of (P). If $\delta_{k+1} > 0$, the algorithm loops indefinitely. In this case there are two possibilities for the sequence of descent steps $\{x^k\}_{k\in K_s}$: either it has infinitely many elements, or there is an iteration $k_{\mathrm{last}}$ at which a last serious step is made, i.e., $x^k = x^{k_{\mathrm{last}}}$ for all $k\geq k_{\mathrm{last}}$. We consider these two situations separately.

Theorem 3. Suppose the algorithm generates infinitely many descent steps $x^k$. Then either (P) has an empty solution set and $\{F(x^k)\}\to -\infty$, or (P) has minimizers. In the latter case, the following holds:

(1) both $\{\delta_k\}\to 0$ and $\{\varepsilon_k\}\to 0$ as $k\in K_s$, $k\to\infty$;

(2) if $0 < \eta_{k+1}\leq\eta_k$ for all $k\in K_s$, then the sequence $\{x^k\}$ is bounded and converges to a minimizer of (P).

Proof. Note first that, since $\varepsilon = 0$ and the algorithm does not stop, we have $\delta_{k+1} > 0$ for all $k\in K_s$. If (P) has no solution, $\{F(x^k)\}$ goes to $-\infty$.

To see item (2), we first show that the sequence $\{x^k\}$ is minimizing for (P). Since $0 < \eta_{k+1}\leq\eta_k$ and $y^{k+1} = x^{k+1}$, we obtain
$$\langle x^{k+1} - x,\ \eta_{k+1}(x^{k+1} - x)\rangle\leq\langle x^{k+1} - x,\ \eta_k(x^{k+1} - x)\rangle.$$
We notice that
$$x^{k+1} = x^k - \frac{1}{\eta_k}\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr), \qquad (1)$$
so
$$\eta_k\langle x^{k+1} - x,\ x^{k+1} - x\rangle = \eta_k\|x^k - x\|^2 + \frac{1}{\eta_k}\Bigl\|\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)\Bigr\|^2 - 2\Bigl\langle x^k - x,\ \sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)\Bigr\rangle.$$
We bound the right-hand side terms by using Theorem 1(2) and 1(3), and obtain the relation
$$\eta_{k+1}\langle x^{k+1} - x,\ x^{k+1} - x\rangle\leq\eta_k\langle x^k - x,\ x^k - x\rangle + 2\bigl(F(x) - F(x^k) + \delta_{k+1}\bigr). \qquad (2)$$
It follows that the sequence $\{x^k\}$ is minimizing for (P). To see that the sequence $\{x^k\}$ is bounded, suppose $\bar x$ is a solution of (P); taking $x = \bar x$ in (2) and summing over $k\in K_s$, we obtain the desired result.

Theorem 4. Suppose the algorithm generates a last serious step $x^{k_{\mathrm{last}}}$, followed by infinitely many null steps. If $0 < \eta_{k+1}\leq\eta_k$, then the sequence $\{y^k\}$ converges to $x^{k_{\mathrm{last}}}$ and $x^{k_{\mathrm{last}}}$ is a minimizer of $F$.

Proof. For any $y\in\mathbb{R}^m$, consider the function
$$M_k(y) = \hat F_k(y^{k+1}) + \tfrac{1}{2}\eta_k\|y - x^{k_{\mathrm{last}}}\|^2 + \tfrac{1}{2}\eta_k\|y^{k+1} - y\|^2.$$
Since $y^{k+1}$ is the solution of $(P_1)$,
$$M_k(y^{k+1})\leq F(x^{k_{\mathrm{last}}}),\qquad k\geq k_{\mathrm{last}}. \qquad (3)$$
Furthermore, the equality
$$\hat F_{k+1}(y) = \max\Bigl\{\max_{t+1\leq i\leq k+1}\bigl\{F(x^i) + \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i,\ y - y^i\rangle\bigr\},\ F_\alpha(y)\Bigr\}$$
and the identity about $F_\alpha(y)$ give the relation
$$\hat F_{k+1}(y)\geq\hat F_k(y^{k+1}) + \Bigl\langle\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr),\ y - y^{k+1}\Bigr\rangle. \qquad (4)$$
Using inequality (4) written for $y = y^{k+2}$, we obtain
$$M_{k+1}(y^{k+2})\geq M_k(y^{k+1}) - \tfrac{1}{2}\eta_k\|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 + \tfrac{1}{2}\eta_k\|y^{k+2} - x^{k_{\mathrm{last}}}\|^2 + \Bigl\langle\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr),\ y^{k+2} - y^{k+1}\Bigr\rangle.$$
By expanding the difference of squares
$$\|y^{k+2} - x^{k_{\mathrm{last}}}\|^2 - \|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 = \|y^{k+2} - y^{k+1}\|^2 + 2\langle y^{k+2} - y^{k+1},\ y^{k+1} - x^{k_{\mathrm{last}}}\rangle,$$
it follows that
$$M_{k+1}(y^{k+2})\geq M_k(y^{k+1}) + \tfrac{1}{2}\eta_k\|y^{k+2} - y^{k+1}\|^2. \qquad (5)$$
Since the increasing sequence $M_k(y^{k+1})$ is bounded from above by (3), it must converge. We now show that the sequence $\{y^{k+1}\}$ is bounded, with $\|y^{k+1} - y^k\|\to 0$. Using the identity
$$\eta_k(x^{k_{\mathrm{last}}} - y^{k+1}) = \sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + s^i\bigr)$$
and the relation $M_k(y^{k+1}) = \hat F_k(y^{k+1}) + \tfrac{1}{2}\eta_k\|y^{k+1} - x^{k_{\mathrm{last}}}\|^2$, we see that
$$M_k(y^{k+1}) + \tfrac{1}{2}\eta_k\|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 = \hat F_k(y^{k+1}) + \eta_k\|y^{k+1} - x^{k_{\mathrm{last}}}\|^2 = F_\alpha(x^{k_{\mathrm{last}}})\leq F(x^{k_{\mathrm{last}}}).$$
It follows that the sequence $\{y^{k+1}\}$ is bounded. In addition, by (5) and passing to the limit, we conclude that $\|y^{k+1} - y^k\|\to 0$. It is obvious from the definition of convexity that
$$\hat F_k(y^{k+1}) - F(y^k)\leq 0. \qquad (6)$$

From the bounded sequence $\{y^k\}$, extract a subsequence $\{y^{k_i}\}$ with $y^{k_i}\to\bar y$ as $i\to\infty$. Since $\|y^{k+1} - y^k\|\to 0$, the sequence $\{y^{k_i+1}\}\to\bar y$ as well. Therefore
$$F(y^{k_i+1}) - \hat F_{k_i}(y^{k_i+1}) = F(y^{k_i+1}) - F(y^{k_i}) + F(y^{k_i}) - \hat F_{k_i}(y^{k_i+1})\to 0\quad\text{as } i\to\infty;$$
this implies that $\hat F_{k_i}(y^{k_i+1})\to F(\bar y)$ as $i\to\infty$.

To show that $x^{k_{\mathrm{last}}}$ minimizes (P), recall that for all $k > k_{\mathrm{last}}$ the descent test is never satisfied. This means that $F(y^{k_i+1}) - F(x^{k_{\mathrm{last}}}) > -m\,\delta_{k_i+1}$, so we obtain
$$0\leq(1 - m)\,\delta_{k_i+1}\leq F(y^{k_i+1}) - \hat F_{k_i}(y^{k_i+1}).$$
Passing to the limit as $i\to\infty$ and using (6), we conclude that $\delta_{k_i+1}\to 0$. Hence it follows from Theorem 1(2),(3) that $x^{k_{\mathrm{last}}}$ minimizes (P).

Finally, we show that $\bar y$ is equal to $x^{k_{\mathrm{last}}}$. Using the facts that $F(y)\geq F(x^{k_{\mathrm{last}}})$ and that $\hat F_{k_i}(y)\leq F(y)$, we have
$$F(\bar y)\geq\hat F_{k_i}(y^{k_i+1}) + \tfrac{1}{2}\eta_{k_{\mathrm{last}}}\|y^{k_i+1} - x^{k_{\mathrm{last}}}\|^2.$$
By (6), we obtain in the limit that
$$F(\bar y)\geq\lim_{i\to\infty}\Bigl(\hat F_{k_i}(y^{k_i+1}) + \tfrac{1}{2}\eta_{k_{\mathrm{last}}}\|y^{k_i+1} - x^{k_{\mathrm{last}}}\|^2\Bigr) = F(\bar y) + \tfrac{1}{2}\eta_{k_{\mathrm{last}}}\|\bar y - x^{k_{\mathrm{last}}}\|^2,$$
an inequality that is possible only if $\bar y = x^{k_{\mathrm{last}}}$, and the proof is complete.

5. Bundle Method for Constrained Maximum Eigenvalue Function

Consider the following problem:
$$(\tilde P)\qquad \min\ \lambda_{\max}(A(y))\quad\text{s.t.}\quad By = c,\ y\geq 0,$$
where $\lambda_{\max}(A(y))$ is the maximum eigenvalue function, $A$ is a linear operator from $\mathbb{R}^m$ to $S^n$, and $B\in\mathbb{R}^{m\times m}$. Then $(\tilde P)$ is equivalent to
$$(\tilde P_1)\qquad \min_{y\in\mathbb{R}^m}\ \lambda_{\max}(A(y)) + \delta_\Omega(y),$$
where $\delta_\Omega(y)$ denotes the indicator function of the set $\Omega = \{y\in\mathbb{R}^m_+ : By = c\}$. The function $\delta_\Omega(y)$ is a nonsmooth convex function, which satisfies the conditions of the proximal bundle method; therefore we can apply the proximal bundle method to $(\tilde P)$. Let $G(y) = \lambda_{\max}(A(y)) + \delta_\Omega(y)$ and suppose $\mathrm{ri}(\mathrm{dom}\,\lambda_{\max}(A(\cdot)))\cap\mathrm{ri}(\mathrm{dom}\,\delta_\Omega)\neq\emptyset$. The approximate model of $G(y)$ used in the subproblem is
$$\hat G_k(y) = G(x^k) + \max_{i=1,2,\dots,k}\bigl\{-e_i + \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i,\ y - x^k\rangle\bigr\},$$
where $A^*(Q(A(x^i))ZQ(A(x^i))^T)\in\partial\lambda_{\max}(A(x^i))$, $c^i\in\partial\delta_\Omega(x^i) = N_\Omega(x^i)$, and the terms $e_i$ are the linearization errors at $x^k$,
$$e_i := G(x^k) - G(x^i) - \langle A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i,\ x^k - x^i\rangle.$$

Note that
$$N_\Omega(x^i) = \{v = v_1 + v_2 : v_1\in N_{\Omega_1}(x^i),\ v_2 = B^T w,\ w\in\mathbb{R}^m\}.$$
Indeed, as we show next,
$$\Omega = \{y\in\mathbb{R}^m_+ : By = c\} = \{y\in\mathbb{R}^m_+\}\cap\{y\in\mathbb{R}^m : By - c = 0\}.$$
Let $\Omega_1 = \{y\in\mathbb{R}^m_+\}$ and $\Omega_2 = \{y\in\mathbb{R}^m : By - c = 0\}$. We have
$$N_\Omega(x^i) = N_{\Omega_1\cap\Omega_2}(x^i) = N_{\Omega_1}(x^i) + N_{\Omega_2}(x^i) = \{v\in\mathbb{R}^m : v\leq 0,\ \langle v, x^i\rangle = 0\} + \{v = B^T w : w\in\mathbb{R}^m\}.$$

Just as in the previous section, we can solve the problem $(\tilde P_1)$.

Theorem 5. Let $y^{k+1}$ be the unique solution of the corresponding subproblem $\min_{y\in\mathbb{R}^m}\hat G_k(y) + \tfrac{1}{2}\eta_k\|y - x^k\|^2$ and assume $\eta_k > 0$. Then
$$y^{k+1} = x^k - \frac{1}{\eta_k}\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i\bigr),$$
where $\bar\alpha = (\bar\alpha_1,\bar\alpha_2,\dots,\bar\alpha_{np_k})$ is a solution of
$$(\tilde D_1)\qquad \min\ \frac{1}{2\eta_k}\Bigl\|\sum_{i=1}^{np_k}\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i\bigr)\Bigr\|^2 + \sum_{i=1}^{np_k}\alpha_i e_i$$
$$\text{s.t.}\quad \alpha\in\Delta_k = \Bigl\{\alpha :\ \alpha_i\in[0,1],\ \sum_{i=1}^{np_k}\alpha_i = 1,\ i = 1,2,\dots,np_k\Bigr\}.$$
In addition, the following relations hold:
(1) $\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i\bigr)\in\partial\hat G_k(y^{k+1})$;

(2) $\delta_{k+1} = \varepsilon_k + \frac{1}{2\eta_k}\bigl\|\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i\bigr)\bigr\|^2$, where $\varepsilon_k = \sum_{i=1}^{np_k}\bar\alpha_i e_i$;

(3) $\sum_{i=1}^{np_k}\bar\alpha_i\bigl(A^*(Q(A(x^i))ZQ(A(x^i))^T) + c^i\bigr)\in\partial_{\varepsilon_k}G(x^k)$.

Then, by the algorithm it is easy to obtain the desired result.

References

[1] J. Outrata, M. Kočvara and J. Zowe, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results, Kluwer Academic Publishers, Dordrecht (1998).

[2] J.J. Moreau, P.D. Panagiotopoulos and G. Strang (Eds.), Topics in Nonsmooth Mechanics, Birkhäuser Verlag, Basel (1988).

[3] E.S. Mistakidis and G.E. Stavroulakis, Nonconvex Optimization in Mechanics: Smooth and Nonsmooth Algorithms, Heuristics and Engineering Applications by the F.E.M., Kluwer Academic Publishers, Dordrecht (1998).

[4] F.H. Clarke, Yu.S. Ledyaev, R.J. Stern and P.R. Wolenski, Nonsmooth Analysis and Control Theory, Springer, New York (1998).

[5] C. Helmberg, F. Oustry, Bundle methods to minimize the maximum eigenvalue function, in: Handbook of Semidefinite Programming, 27 (2000), 307-337.

[6] C. Sagastizábal, M. Solodov, An infeasible bundle method for nonsmooth convex constrained optimization without a penalty function or a filter, SIAM Journal on Optimization, 16 (2005), 146-169.

[7] C. Lemaréchal, F. Oustry, Nonsmooth algorithms to solve semidefinite programs, Society for Industrial and Applied Mathematics, Philadelphia (2000), 57-77.

[8] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, New Jersey (1970).
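To make the objects of Section 5 concrete, the short sketch below (in the same illustrative setting as the earlier ones) evaluates the indicator $\delta_\Omega$ and returns one element of $N_\Omega(x)$ via the decomposition $N_\Omega(x) = N_{\Omega_1}(x) + N_{\Omega_2}(x)$ derived above; the particular multipliers mu and w are hypothetical inputs chosen by the caller, not quantities prescribed by the paper.

```python
def delta_Omega(y, B, c, tol=1e-9):
    """Indicator function of Omega = {y >= 0 : By = c}: 0 if feasible, +inf otherwise."""
    feasible = np.all(y >= -tol) and np.allclose(B @ y, c, atol=tol)
    return 0.0 if feasible else np.inf

def normal_cone_element(x, B, mu, w):
    """One element c_i = v_1 + B^T w of N_Omega(x) for feasible x, using
    N_Omega(x) = N_{Omega_1}(x) + N_{Omega_2}(x): v_1 <= 0 is supported on the
    active indices x_j = 0 (so <v_1, x> = 0), and B^T w ranges over N_{Omega_2}(x).
    The multipliers mu >= 0 and w are hypothetical caller-supplied choices."""
    v1 = np.where(np.isclose(x, 0.0), -np.abs(mu), 0.0)   # element of N_{R^m_+}(x)
    return v1 + B.T @ w
```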
