
J. Korean Math. Soc. 38 (2001), No. 3, pp. 683–695

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE

Sangho Kum and Gue Myung Lee

Abstract. In this paper we are concerned with theoretical properties of gap functions for the extended variational inequality problem (EVI) in a general Banach space. We present a correction of a recent result of Chen et al. without assuming the convexity of the given function $\Omega$. Using this correction, we discuss the continuity and the differentiability of a gap function, and compute its gradient formula in two particular situations in a general Banach space. Our results can be regarded as infinite dimensional generalizations of the well-known results of Fukushima, and of Zhu and Marcotte, with some modifications.

Received November 14, 2000. Revised December 15, 2000.
2000 Mathematics Subject Classification: 49J40, 47J20, 49J50, 52A41, 90J26.
Key words and phrases: variational inequalities, gap functions, Gâteaux differentiable, the Clarke generalized gradient, nonconvex programming.
This work was supported by Korea Research Foundation BSRI-1998-015-D00039.

1. Introduction

Given a closed convex subset $C$ of $\mathbb{R}^n$ and a function $F : C \to \mathbb{R}^n$, the variational inequality problem (VI, for short) is to find $\bar{x} \in C$ such that

(1.1)    $\langle F(\bar{x}), y - \bar{x} \rangle \ge 0$ for all $y \in C$,

where $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathbb{R}^n$. Due to its applications to such diverse areas as partial differential equations, mathematical economics, and operations research, VI has been investigated by many authors, and various methods and algorithms for solving VI have been developed. Readers are referred to the comprehensive survey paper of Harker and Pang [5] and the references therein. Over the past two decades, an interesting approach to solving VI has been the introduction of gap functions. A function $\phi : C \to \mathbb{R} \cup \{+\infty\}$ is called a gap function for VI if

(i) $\phi(x) \ge 0$ for all $x \in C$; and
(ii) $\phi(\bar{x}) = 0$ if and only if $\bar{x}$ solves VI.

With a gap function, one can reformulate VI as the equivalent minimization problem: minimize $\phi(x)$ subject to $x \in C$. A gap function, e.g., Auslender's gap function [1], is in general nondifferentiable. However, Fukushima [4] proposed the projective gap function, provided an equivalent differentiable optimization problem formulation of VI, and treated a wide variety of applications. On the other hand, Zhu and Marcotte [8] studied a very general gap function which includes, as a special case, the projective gap function of Fukushima [4]. Recently, Chen et al. [2] gave an interpretation of the meaning of "gap" in terms of Young's inequality. In addition, they dealt with a modification [2, Theorem 4.1] and an extension [2, Theorem 5.1] of Zhu and Marcotte [8, Theorems 3.1 and 5.1]. All of the above results are presented in the Euclidean space $\mathbb{R}^n$.

In this paper we are concerned with theoretical properties of gap functions for the extended variational inequality problem (EVI) [2] in a general Banach space. To be more specific, we first find a condition under which the function $\phi$ of (4.1) in Chen et al. [2] is a gap function for EVI. We then discuss the continuity and the differentiability of the gap function in two particular situations. To begin our discussion, we provide a counterexample showing that Chen et al. [2, Theorem 4.1] is not correct.

2. Preliminaries

Consider the following EVI.

EVI: Let $E$ be a real Banach space and $E^*$ its dual space. Given a nonempty closed convex subset $C$ of $E$, a function $F : C \to E^*$, and a proper convex lower semicontinuous function $f : E \to \mathbb{R} \cup \{+\infty\}$, find $\bar{x} \in C$ such that

(2.1)    $\langle F(\bar{x}), x - \bar{x} \rangle \ge f(\bar{x}) - f(x)$ for all $x \in C$,

where $\langle \cdot, \cdot \rangle$ denotes the dual pairing on $E^* \times E$. If $f$ is the indicator function of $C$, EVI (2.1) becomes VI (1.1).
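As a concrete illustration of conditions (i) and (ii), the following sketch evaluates Auslender's gap function $\phi(x) = \sup_{y \in C} \langle F(x), x - y \rangle$ on toy data chosen purely for illustration ($C = [0, 1]$ and $F(x) = x - 2$, whose unique VI solution is $\bar{x} = 1$); the supremum is approximated on a grid.

```python
import numpy as np

# Numerical sketch of a gap function for VI (1.1) in one dimension.
# Illustrative data (not from the paper): C = [0, 1], F(x) = x - 2.
# Auslender's gap function: phi(x) = sup_{y in C} <F(x), x - y>.
# The unique VI solution is xbar = 1, since F(1) = -1 and y - 1 <= 0 on C.

ys = np.linspace(0.0, 1.0, 2001)      # discretization of C

def F(x):
    return x - 2.0

def phi(x):
    return np.max(F(x) * (x - ys))    # sup over the grid

# (i) phi >= 0 on C; (ii) phi vanishes exactly at the solution xbar = 1.
xs = np.linspace(0.0, 1.0, 101)
assert all(phi(x) >= -1e-12 for x in xs)
assert abs(phi(1.0)) < 1e-9           # phi(xbar) = 0
assert phi(0.5) > 0.1                 # a nonsolution has a positive gap
```

Here the grid maximum agrees with the closed form $\phi(x) = (x-2)(x-1)$, which is nonnegative on $[0,1]$ and vanishes only at $\bar{x} = 1$.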

For a locally Lipschitz function $g : E \to \mathbb{R}$, the Clarke generalized gradient [3] $\partial g(x)$ is well defined for each $x \in E$. The properties of $\partial g$ needed for our argument in the sequel are as follows.

Proposition 2.1. If $g$ is continuously (Gâteaux) differentiable at $\bar{x}$, then $g$ is locally Lipschitz at $\bar{x}$ and $\partial g(\bar{x}) = \{\nabla g(\bar{x})\}$, where $\nabla g(\bar{x})$ is the Gâteaux derivative of $g$ at $\bar{x}$.

Proof. See Clarke [3, Propositions 2.2.2 and 2.2.4].

Proposition 2.2. If $g$ is convex and locally Lipschitz at $\bar{x}$, then $\partial g(\bar{x})$ coincides with the subdifferential of $g$ at $\bar{x}$ in the sense of convex analysis.

Proof. See Clarke [3, Proposition 2.2.7].

Proposition 2.3. If $g$ is locally Lipschitz at $\bar{x}$, $C$ is a convex subset of $E$, and $g$ attains a minimum over $C$ at $\bar{x}$, then $0 \in \partial g(\bar{x}) + N_C(\bar{x})$, where $N_C(\bar{x}) = \{x^* \in E^* \mid \langle x^*, x - \bar{x} \rangle \le 0 \text{ for all } x \in C\}$ is the normal cone of $C$ at $\bar{x}$.

Proof. See Clarke [3, Corollary and Proposition 2.4.4].

Proposition 2.4. If $g_i$ ($i = 1, 2, \ldots, n$) is a finite family of functions, each locally Lipschitz at $\bar{x}$, then their sum $\sum g_i$ is also locally Lipschitz at $\bar{x}$, and $\partial(\sum g_i)(\bar{x}) \subset \sum \partial g_i(\bar{x})$.

Proof. See Clarke [3, Proposition 2.3.3].

Now we introduce an extra function $\Omega : C \times C \to \mathbb{R}$. Let $\Omega(x, y)$ be nonnegative and, for each $x \in C$, let $\Omega(x, \cdot)$ be continuously differentiable (not necessarily convex) on $C$. We further assume that $\Omega(x, x) = 0$ and $\nabla_y \Omega(x, x) = 0$ for all $x \in C$. Here $\nabla_y \Omega(x, x)$ denotes the Gâteaux derivative of $\Omega(x, y)$ with respect to the second variable, evaluated at $y = x$. Our main goal is to show that the following function $\phi : C \to \mathbb{R}$ is a gap function for EVI (2.1), and that $\phi$ is continuous and differentiable under suitable conditions:

(2.2)    $\phi(x) := \sup_{y \in C} \{\langle F(x), x - y \rangle + f(x) - f(y) - \Omega(x, y)\}$.
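The standing assumptions on $\Omega$ can be sanity-checked numerically for a typical choice. The sketch below (an illustrative choice, not one of the paper's examples) takes the quadratic $\Omega(x, y) = \frac{1}{2}\|y - x\|^2$ on $\mathbb{R}^3$ and verifies nonnegativity, $\Omega(x, x) = 0$, and $\nabla_y \Omega(x, x) = 0$ by central differences.

```python
import numpy as np

# Sketch: checking the standing assumptions on Omega from Section 2
# for the quadratic choice Omega(x, y) = 0.5 * ||y - x||^2.

def omega(x, y):
    return 0.5 * np.dot(y - x, y - x)

def grad_y_omega(x, y, h=1e-6):
    # central-difference approximation of the derivative in the
    # second variable, coordinate by coordinate
    g = np.zeros_like(y)
    for i in range(len(y)):
        e = np.zeros_like(y)
        e[i] = h
        g[i] = (omega(x, y + e) - omega(x, y - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert omega(x, y) >= 0.0                            # nonnegativity
assert omega(x, x) == 0.0                                # Omega(x, x) = 0
assert np.allclose(grad_y_omega(x, x), 0.0, atol=1e-8)   # grad_y Omega(x, x) = 0
```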

3. An example and a correction

Chen et al. [2, Theorem 4.1] asserted that the above $\phi$ (2.2) is a gap function for EVI (2.1) under the following general assumptions on $\Omega$ in the case $E = C = \mathbb{R}^n$.

Assertion. Let $\Omega : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be nonnegative with $\Omega(x, x) = 0$ for all $x \in \mathbb{R}^n$. Let $\Omega(x, \cdot)$ be convex for each $x \in \mathbb{R}^n$, and let $0 \in \partial_y \Omega(x, x)$, where $\partial_y \Omega(x, x)$ is the subdifferential of $\Omega(x, y)$ with respect to the second variable, evaluated at $y = x$. Assume that $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a proper convex lower semicontinuous function. Then $\phi$ (2.2) is a gap function for EVI (2.1).

However, a simple example shows that the Assertion is not correct.

Example 3.1. Define both $F$ and $f : \mathbb{R} \to \mathbb{R}$ to be the identity function on $\mathbb{R}$, and define $\Omega : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ by $\Omega(x, y) = |x - y|$. All the conditions of the Assertion are easily checked; indeed, $\partial_y \Omega(x, x) = [-1, 1]$ contains the origin $0$. By a direct computation, we get

$\phi(x) := \sup_{y \in \mathbb{R}} \{\langle F(x), x - y \rangle + f(x) - f(y) - \Omega(x, y)\} = \sup_{y \in \mathbb{R}} \{x(x - y) + x - y - |x - y|\} = \delta_{[-2,0]}(x),$

where $\delta_{[-2,0]}$ is the indicator function of the closed interval $[-2, 0]$, defined by $\delta_{[-2,0]}(x) = 0$ if $x \in [-2, 0]$ and $\delta_{[-2,0]}(x) = +\infty$ if $x \notin [-2, 0]$. On the other hand, $\bar{x}$ is a solution of EVI (2.1) if and only if

$\langle F(\bar{x}), x - \bar{x} \rangle \ge f(\bar{x}) - f(x)$ for all $x \in \mathbb{R}$
$\iff \bar{x}(x - \bar{x}) \ge \bar{x} - x$ for all $x \in \mathbb{R}$
$\iff (\bar{x} + 1)(x - \bar{x}) \ge 0$ for all $x \in \mathbb{R}$
$\iff \bar{x} = -1.$
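The computation in Example 3.1 can be checked numerically. In the sketch below, the supremum over $y \in \mathbb{R}$ is approximated on a large finite grid, so the value $+\infty$ of the indicator function shows up as a large finite number.

```python
import numpy as np

# Numerical check of Example 3.1: F = f = identity on R and
# Omega(x, y) = |x - y|.  The supremum is taken over a large grid.

ys = np.linspace(-50.0, 50.0, 200001)

def phi(x):
    return np.max(x * (x - ys) + x - ys - np.abs(x - ys))

# phi behaves like the indicator of [-2, 0]: it (numerically) vanishes
# on [-2, 0] and blows up outside.
assert abs(phi(-2.0)) < 1e-6 and abs(phi(-1.0)) < 1e-6 and abs(phi(0.0)) < 1e-6
assert phi(0.5) > 10.0 and phi(-2.5) > 10.0

# The unique EVI solution is xbar = -1, yet phi also vanishes at -2
# and 0, so condition (ii) of a gap function fails.
```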

Thus the unique solution of EVI (2.1) is $\bar{x} = -1$, whereas $\phi(x) = \delta_{[-2,0]}(x)$ vanishes at every point of $[-2, 0]$, not only at $\bar{x} = -1$. Hence $\phi$ is not a gap function for EVI (2.1), and therefore the Assertion does not hold even in the one-dimensional case $\mathbb{R}$.

Remarks 3.1. (i) The mistake in the proof of the Assertion in Chen et al. [2, Theorem 4.1] resides in the statement that the solution $\bar{x}$ of the convex optimization problem also solves the following variational inequality: find $x \in \mathbb{R}^n$ such that $\langle q, y - x \rangle \ge 0$ for all $y \in \mathbb{R}^n$, where $q \in \partial_y \Phi(x, x)$ (see Chen et al. [2, p. 667]). However, this is not true in general if $\Phi(x, \cdot)$ is not (continuously) differentiable.

(ii) Even though $F$ is a $C^\infty$ monotone operator and $f$ is a $C^\infty$ convex real-valued function, $\phi$ may fail to be a gap function for EVI.

We change the assumption on $\Omega(x, \cdot)$ to obtain a correction of Chen et al. [2, Theorem 4.1] in a Banach space. The point of the change is to impose a differentiability condition on $\Omega(x, \cdot)$ and, in exchange, to remove the convexity of $\Omega(x, \cdot)$.

Theorem 3.1. Let $E$ be a real Banach space and $E^*$ its dual space. Let $C$ be a nonempty closed convex subset of $E$, let $F : C \to E^*$ be a function, and let $f : E \to \mathbb{R}$ be a convex continuous function. Let $\Omega : C \times C \to \mathbb{R}$ be nonnegative and, for each $x \in C$, let $\Omega(x, \cdot)$ be continuously Gâteaux differentiable (not necessarily convex) on $C$. Assume further that for each $x \in C$, $\Omega(x, x) = 0$ and $\nabla_y \Omega(x, x) = 0$. Then $\phi$ (2.2) is a gap function for EVI (2.1).

Proof. Since $\Omega(x, x) = 0$ for each $x \in C$, it is clear that $\phi(x) \ge 0$ for each $x \in C$. Now assume that $\bar{x}$ solves EVI (2.1). Then

$\langle F(\bar{x}), x - \bar{x} \rangle \ge f(\bar{x}) - f(x)$ for all $x \in C$.

As $\Omega(\bar{x}, x) \ge 0$ for all $x \in C$,

$\langle F(\bar{x}), \bar{x} - x \rangle + f(\bar{x}) - f(x) - \Omega(\bar{x}, x) \le 0$ for all $x \in C$,

and hence $\phi(\bar{x}) \le 0$. Together with $\phi(\bar{x}) \ge 0$, this gives $\phi(\bar{x}) = 0$. Conversely, assume that $\phi(\bar{x}) = 0$. This implies that

$\langle F(\bar{x}), \bar{x} - y \rangle + f(\bar{x}) - f(y) - \Omega(\bar{x}, y) \le 0$ for all $y \in C$.

Thus $\bar{x}$ is a solution of the following nonconvex and nondifferentiable optimization problem:

minimize $\langle F(\bar{x}), y \rangle + f(y) + \Omega(\bar{x}, y)$ subject to $y \in C$.

By Propositions 2.1 to 2.4, we have

$0 \in F(\bar{x}) + \partial f(\bar{x}) + \nabla_y \Omega(\bar{x}, \bar{x}) + N_C(\bar{x}).$

Since $\nabla_y \Omega(\bar{x}, \bar{x}) = 0$, there exists $x^* \in \partial f(\bar{x})$ such that $-F(\bar{x}) - x^* \in N_C(\bar{x})$. Hence $\langle -F(\bar{x}) - x^*, x - \bar{x} \rangle \le 0$ for all $x \in C$; that is,

(3.1)    $\langle F(\bar{x}), x - \bar{x} \rangle + \langle x^*, x - \bar{x} \rangle \ge 0$ for all $x \in C$.

Because $f$ is convex and $x^* \in \partial f(\bar{x})$, we have

(3.2)    $f(x) - f(\bar{x}) \ge \langle x^*, x - \bar{x} \rangle$ for all $x \in C$.

From (3.1) and (3.2),

$\langle F(\bar{x}), x - \bar{x} \rangle + f(x) - f(\bar{x}) \ge 0$ for all $x \in C$.

Therefore $\bar{x}$ solves EVI (2.1), as desired.

Remarks 3.2. We would like to point out that Theorem 3.1 generalizes Zhu and Marcotte [8, Theorem 3.1] in the following ways: (i) by adopting the Clarke generalized gradient [3], we could remove the strong convexity assumption on $\Omega(x, \cdot)$, which is essential in Zhu and Marcotte [8, Theorem 3.1]; thus strong convexity of $\Omega(x, \cdot)$ is not needed merely for the result that $\phi$ (2.2) is a gap function for EVI (2.1); (ii) Theorem 3.1 is obtained in a Banach space, whereas Zhu and Marcotte [8, Theorem 3.1] is established in $\mathbb{R}^n$; (iii) Theorem 3.1 is an extension of Zhu and Marcotte [8, Theorem 3.1] from VI to EVI.

4. The continuity and differentiability of $\phi$

In this section we investigate the continuity and differentiability of $\phi$ in two special cases. The first case (Theorem 4.1) deals with the projective gap function of Fukushima [4] in a Hilbert space $H$ under the assumption that $C = E = H$. The second one (Theorem 4.3) is concerned with an infinite dimensional modification of Chen et al. [2, Theorem 3.3] in a reflexive Banach space. We derive the first case.

Theorem 4.1. Let $H$ be a real Hilbert space and $H^*$ its dual space. Let $F : H \to H^*$ be a continuous operator, and let $A : H \to H^*$ be

a continuous linear symmetric operator, in the sense that $\langle Ax, y \rangle = \langle Ay, x \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the usual pairing on $H^* \times H$. Assume further that the bilinear form $g : H \times H \to \mathbb{R}$ defined by $g(x, y) = \langle Ax, y \rangle$ is coercive on $H$; that is, there exists $\alpha > 0$ such that $g(x, x) = \langle Ax, x \rangle \ge \alpha \|x\|^2$ for all $x \in H$. Then

$\phi(x) = \sup_{y \in H} \{\langle F(x), x - y \rangle - \tfrac{1}{2}\langle A(x - y), x - y \rangle\}$

is a continuous gap function for VI (1.1). If $F$ is continuously Gâteaux differentiable, then $\phi$ is also continuously Gâteaux differentiable, and its gradient is given by $\nabla\phi(x) = J^{-1}(Fx) \circ \nabla F(x)$. Here $J$ is the duality mapping from $H$ to $H^*$.

Proof. First note that the bilinear form $g(x, y) = \langle Ax, y \rangle$ defines an inner product on $H$ and that the induced norm $\|x\|_A = \sqrt{\langle Ax, x \rangle}$ is equivalent to the original norm $\|x\|$ on $H$. Thus we may assume that $H$ is equipped with the norm $\|\cdot\|_A$ induced by the inner product $g(x, y) = \langle Ax, y \rangle$. Now we reformulate $\phi$ (2.2) as follows:

(4.1)    $\phi(x) = \sup_{y \in H} \{\langle F(x), x - y \rangle + f(x) - f(y) - \Omega(x, y)\}$
         $= \langle F(x), x \rangle + f(x) + \sup_{y \in H} \{\langle -F(x), y \rangle - (f + \Omega_x)(y)\}$
         $= \langle F(x), x \rangle + f(x) + (f + \Omega_x)^*(-Fx),$

where $\Omega_x(y) = \Omega(x, y)$ for each $x, y \in E$, and $(f + \Omega_x)^*$ denotes the Fenchel conjugate of $f + \Omega_x$, defined by $(f + \Omega_x)^*(x^*) = \sup_{y \in E} \{\langle x^*, y \rangle - (f + \Omega_x)(y)\}$. Taking $f \equiv 0$ and $\Omega(x, y) = \tfrac{1}{2}\langle A(x - y), x - y \rangle$ in (4.1), we obtain

(4.2)    $\phi(x) = \sup_{y \in H} \{\langle F(x), x - y \rangle - \tfrac{1}{2}\langle A(x - y), x - y \rangle\}$
         $= \langle F(x), x \rangle + \Omega_x^*(-Fx) = \tfrac{1}{2}\|Fx\|_*^2,$

where $\|\cdot\|_*$ denotes the equivalent dual norm on $H^*$ induced by the norm $\|\cdot\|_A$

on $H$. Indeed, for each $y^* \in H^*$, we have

$\Omega_x^*(y^*) = \sup_{y \in H} [\langle y^*, y \rangle - \tfrac{1}{2}\langle A(y - x), y - x \rangle]$
$= \sup_{z \in H} [\langle y^*, z + x \rangle - \tfrac{1}{2}\langle Az, z \rangle]$
$= \langle y^*, x \rangle + \sup_{z \in H} [\langle y^*, z \rangle - \tfrac{1}{2}\|z\|_A^2]$
$= \langle y^*, x \rangle + \sup_{\lambda \ge 0} \sup_{\|z\|_A = \lambda} [\langle y^*, z \rangle - \tfrac{1}{2}\lambda^2]$
$= \langle y^*, x \rangle + \sup_{\lambda \ge 0} [\lambda\|y^*\|_* - \tfrac{1}{2}\lambda^2]$
$= \langle y^*, x \rangle + \tfrac{1}{2}\|y^*\|_*^2.$

Hence $\Omega_x^*(-Fx) = -\langle Fx, x \rangle + \tfrac{1}{2}\|Fx\|_*^2$, and therefore (4.2) does hold.

Since $\Omega(x, y) = \tfrac{1}{2}\|y - x\|_A^2$, $\Omega(x, y)$ is nonnegative and clearly $\Omega(x, x) = 0$. Moreover, for each $x \in H$, we can directly compute $\partial_y(\Omega_x)(y) = J(y - x)$, where $J$ is the duality mapping defined by

$J(x) = \{x^* \in H^* \mid \langle x^*, x \rangle = \|x^*\|_* \|x\|_A \text{ and } \|x^*\|_* = \|x\|_A\}$

(see Phelps [7, Example 2.26, p. 27]), the derivative of $y \mapsto y - x$ being the identity operator $\mathrm{Id}_H$ on $H$. In particular, the duality mapping $J$ is the canonical isomorphism between $H$ and $H^*$ (in this sense $J$ can actually be identified with the identity operator on $H = H^*$). Thus $J$ is a single-valued continuous operator, so $\partial_y(\Omega_x)(y) = J(y - x) = \nabla_y(\Omega_x)(y)$, and $\Omega_x$ is continuously Gâteaux differentiable. In addition, $\nabla_y(\Omega_x)(x) = \nabla_y \Omega(x, x) = J(x - x) = J(0) = 0$, which implies that $\phi$ is a gap function for VI (1.1) by Theorem 3.1. The continuity and differentiability of $\phi$ then follow immediately from (4.2), and the gradient of $\phi$ is given by $\nabla\phi(x) = J^{-1}(Fx) \circ \nabla F(x)$.

Remarks 4.1. (i) In the case $H = \mathbb{R}^n$, the operator $A$ is represented by an $n \times n$ positive definite symmetric matrix (we denote it by $A$ for

notational simplicity). In this case we can easily show that the bilinear form $g(x, y) = y^t Ax$ is coercive. Hence the coercivity assumption on $A$ in Theorem 4.1 is natural, and Theorem 4.1 is a Hilbert space version of Fukushima [4, Theorem 3.2] with the modification $C = H$.

(ii) As pointed out in Chen et al. [2], the gradient formula in Fukushima [4] requires the solution of another optimization problem. In the gradient formula of Theorem 4.1, however, we do not need to solve any optimization problem.

We need some more definitions and propositions to reach our second case. Let $E$ be a Banach space. A function $f : E \to \mathbb{R} \cup \{+\infty\}$ is said to be coercive when

$\lim_{\|x\| \to +\infty} \dfrac{f(x)}{\|x\|} = +\infty.$

Proposition 4.1. For a proper convex lower semicontinuous and coercive function $f : E \to \mathbb{R} \cup \{+\infty\}$, $f^*(x^*) < +\infty$ for all $x^* \in E^*$; that is, $\operatorname{dom} f^* = E^*$. In this case, $f^*$ is a continuous convex real-valued function on $E^*$.

Proof. For the proof of the first part, refer to Hiriart-Urruty [6, Proposition 1.3.8, p. 46, Chapter X]. For the proof of the second part, see Phelps [7, Proposition 3.3].

A function $f : E \to \mathbb{R}$ is said to be strongly convex with modulus $a > 0$ if for all $x, y \in E$ and $\alpha \in [0, 1]$, we have

$f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y) - \tfrac{1}{2}a\alpha(1 - \alpha)\|x - y\|^2.$

For the sake of completeness, we provide a proof of the following.

Proposition 4.2. A continuous and strongly convex function $f : E \to \mathbb{R}$ with modulus $a > 0$ is coercive.

Proof. For $x, y \in E$ and $\alpha \in (0, 1)$, we first show that

(4.3)    $f(y) \ge f(x) + \langle x^*, y - x \rangle + \tfrac{1}{2}a\|y - x\|^2$ for all $x^* \in \partial f(x)$.

Indeed, the strong convexity of $f$ gives

$\dfrac{f((1 - \alpha)x + \alpha y) - f(x)}{\alpha} \le f(y) - f(x) - \tfrac{1}{2}a(1 - \alpha)\|y - x\|^2.$

Hence, letting $\alpha \to 0^+$,

$\lim_{\alpha \to 0^+} \dfrac{f(x + \alpha(y - x)) - f(x)}{\alpha} \le f(y) - f(x) - \tfrac{1}{2}a\|y - x\|^2.$

Since

$\langle x^*, y - x \rangle \le \lim_{\alpha \to 0^+} \dfrac{f(x + \alpha(y - x)) - f(x)}{\alpha}$ for all $x^* \in \partial f(x)$,

we get

$\langle x^*, y - x \rangle \le f(y) - f(x) - \tfrac{1}{2}a\|y - x\|^2,$

as desired. Dividing (4.3) by $\|y\|$ yields

$\dfrac{f(y)}{\|y\|} \ge \dfrac{f(x)}{\|y\|} - \|x^*\|\,\dfrac{\|y\| + \|x\|}{\|y\|} + \tfrac{1}{2}a\,\dfrac{(\|y\| - \|x\|)^2}{\|y\|}.$

Thus $\lim_{\|y\| \to +\infty} f(y)/\|y\| = +\infty$. This completes the proof.

Now we are in a position to establish the second case, in which $\Omega \equiv 0$.

Theorem 4.2. Let $E$ be a real Banach space and $E^*$ its dual space equipped with the norm topology. Let $F : C \to E^*$ be a continuous operator, and let $f : E \to \mathbb{R}$ be a continuous and strongly convex function with modulus $a > 0$. Then

$\phi(x) = \sup_{y \in C} \{\langle F(x), x - y \rangle + f(x) - f(y)\}$

is a continuous gap function for EVI (2.1).

Proof. Since $\Omega \equiv 0$ in (2.2), $\phi$ is obviously a gap function for EVI (2.1) by Theorem 3.1. Thus we only need to verify the continuity of $\phi$. In fact,

(4.4)    $\phi(x) = \sup_{y \in C} \{\langle F(x), x - y \rangle + f(x) - f(y)\}$
         $= \langle F(x), x \rangle + f(x) + \sup_{y \in E} \{\langle -F(x), y \rangle - f(y) - \delta_C(y)\}$
         $= \langle F(x), x \rangle + f(x) + (f + \delta_C)^*(-Fx),$

where $\delta_C$ denotes the indicator function of $C$. By Proposition 4.2, $f$ is coercive, and hence so is $f + \delta_C$. Thus $(f + \delta_C)^*$ is continuous by Proposition 4.1. Therefore $\phi$ is continuous whenever $F$ is. This completes the proof.

For the differentiability of $\phi$, we assume that $E$ is reflexive and $C = E$.

Theorem 4.3. Let $E$ be a real reflexive Banach space and $E^*$ its dual space equipped with the norm topology. Let $F : E \to E^*$ be a continuously Gâteaux differentiable operator, and let $f : E \to \mathbb{R}$ be a continuously Gâteaux differentiable and strongly convex function with modulus $a > 0$. Then $\phi$ (4.4), the gap function for EVI (2.1), is continuously Gâteaux differentiable, and its gradient is given by

$\nabla\phi(x) = F(x) + \langle \nabla F(x)(\cdot), x \rangle + \nabla f(x) - \langle \nabla F(x)(\cdot), (\nabla f)^{-1}(-Fx) \rangle.$

Proof. Since $C = E$, $\delta_C \equiv 0$, so (4.4) becomes

$\phi(x) = \langle F(x), x \rangle + f(x) + f^*(-Fx).$

To achieve our result, we have only to verify that $f^*$ and $h(x) = \langle F(x), x \rangle$ are continuously Gâteaux differentiable.

Step 1. $f^*$ is continuously Gâteaux differentiable. It is well known that $\partial f^* = (\partial f)^{-1}$. Hence, for the differentiability of $f^*$, it suffices to show that $(\partial f)^{-1}$ is single-valued, that is, $\partial f(x_1) \cap \partial f(x_2) = \emptyset$ whenever $x_1 \ne x_2$. Suppose $x^* \in \partial f(x_1) \cap \partial f(x_2)$, where $x_1 \ne x_2$. Then we have

(4.5)    $f^*(x^*) + f(x_i) = \langle x^*, x_i \rangle$ for $i = 1, 2$.

According to Young's inequality, it follows from (4.5) that

$f^*(x^*) + \sum_{i=1}^2 \alpha_i f(x_i) = \langle x^*, \sum_{i=1}^2 \alpha_i x_i \rangle \le f^*(x^*) + f(\sum_{i=1}^2 \alpha_i x_i),$

where $\sum_{i=1}^2 \alpha_i = 1$ and $\alpha_i \ge 0$ for $i = 1, 2$. Together with the convexity of $f$, this implies that $\sum_{i=1}^2 \alpha_i f(x_i) = f(\sum_{i=1}^2 \alpha_i x_i)$ on the line segment $[x_1, x_2]$, which contradicts the strong convexity of $f$.

It remains to check that $\nabla f^* = (\partial f)^{-1}$ is continuous. We can easily deduce the following from (4.3):

$\langle x_1^* - x_2^*, x_1 - x_2 \rangle \ge a\|x_1 - x_2\|^2,$

where $x_i^* \in \partial f(x_i)$, i.e., $x_i = \nabla f^*(x_i^*)$, for $i = 1, 2$. Thus we have

$\|\nabla f^*(x_1^*) - \nabla f^*(x_2^*)\| \le \dfrac{1}{a}\|x_1^* - x_2^*\|$ for all $x_1^*, x_2^* \in E^*,$

which implies that $\nabla f^*$ is Lipschitzian with constant $1/a$ on $E^*$.

Step 2. $h(x) = \langle F(x), x \rangle$ is continuously Gâteaux differentiable. To see this, we directly compute the Gâteaux derivative of $h$. For each $x, v \in E$,

$\nabla h(x)(v) = \lim_{t \to 0} \dfrac{h(x + tv) - h(x)}{t}$
$= \lim_{t \to 0} \dfrac{\langle F(x + tv), x + tv \rangle - \langle F(x), x \rangle}{t}$
$= \lim_{t \to 0} \left[ \left\langle \dfrac{F(x + tv) - F(x)}{t}, x \right\rangle + \langle F(x + tv), v \rangle \right]$
$= \langle \nabla F(x)(v), x \rangle + \langle F(x), v \rangle.$

Thus $\nabla h(x) = F(x) + \langle \nabla F(x)(\cdot), x \rangle$, which shows that $h$ is continuously Gâteaux differentiable, as desired. Furthermore, the gradient formula for $\phi$ comes immediately from (4.4):

$\nabla\phi(x) = \nabla h(x) + \nabla f(x) + \nabla f^*(-Fx) \circ (-\nabla F(x))$
$= F(x) + \langle \nabla F(x)(\cdot), x \rangle + \nabla f(x) - \langle \nabla F(x)(\cdot), (\nabla f)^{-1}(-Fx) \rangle.$

This completes our proof.

Remark 4.2. For the continuity of the gap function $\phi$, we did not need to suppose that $C = E$; however, $C = E$ had to be assumed for the differentiability of $\phi$. Theorem 4.3 is an infinite dimensional modification of Chen et al. [2, Theorem 3.3] in a reflexive Banach space.
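To illustrate the gradient formula of Theorem 4.3, here is a one-dimensional numerical sketch with data chosen purely for illustration: $F(x) = x$ and $f(x) = x^2$, which is strongly convex with modulus $a = 2$ and has $f^*(y) = y^2/4$ and $(\nabla f)^{-1}(y) = y/2$. The formula is compared against a finite-difference derivative of $\phi$.

```python
import numpy as np

# One-dimensional sketch of Theorem 4.3 (illustrative data, E = R):
# F(x) = x, so the Gateaux derivative dF(x) = 1, and f(x) = x^2.

def F(x): return x
def dF(x): return 1.0                  # Gateaux derivative of F
def f(x): return x * x
def grad_f(x): return 2.0 * x
def f_conj(y): return y * y / 4.0      # Fenchel conjugate of f

def phi(x):
    # phi(x) = <F(x), x> + f(x) + f*(-F(x))   (formula (4.4) with C = E)
    return F(x) * x + f(x) + f_conj(-F(x))

def grad_phi(x):
    # Theorem 4.3: grad phi(x) = F(x) + <dF(x)(.), x> + grad f(x)
    #                            - <dF(x)(.), (grad f)^{-1}(-F(x))>
    inv_grad_f = lambda y: y / 2.0     # inverse of grad f
    return F(x) + dF(x) * x + grad_f(x) - dF(x) * inv_grad_f(-F(x))

h = 1e-6
for x in np.linspace(-3.0, 3.0, 13):
    fd = (phi(x + h) - phi(x - h)) / (2 * h)   # finite-difference check
    assert abs(grad_phi(x) - fd) < 1e-5
    assert abs(phi(x) - 2.25 * x * x) < 1e-12  # closed form: phi = 9x^2/4
```

For this data $\phi(x) = \frac{9}{4}x^2$, so the formula gives $\nabla\phi(x) = \frac{9}{2}x$, in agreement with the finite-difference derivative.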

References

[1] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, 1976.
[2] G. Y. Chen, C. J. Goh, and X. Q. Yang, On gap functions and duality of variational inequality problems, J. Math. Anal. Appl. 214 (1997), 658–673.
[3] F. H. Clarke, Optimization and Nonsmooth Analysis, SIAM, Philadelphia, 1990.
[4] M. Fukushima, Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems, Math. Programming 53 (1992), 99–110.
[5] P. T. Harker and J. S. Pang, Finite dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Math. Programming 48 (1990), 161–220.
[6] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms I and II, Grundlehren Math. Wiss. 305, 306, Springer-Verlag, Berlin/New York, 1993.
[7] R. R. Phelps, Convex Functions, Monotone Operators and Differentiability, 2nd ed., Lecture Notes in Math. 1364, Springer-Verlag, Berlin/New York, 1993.
[8] D. L. Zhu and P. Marcotte, An extended descent framework for variational inequalities, J. Optim. Theory Appl. 80 (1994), 349–366.

Sangho Kum
Department of Applied Mathematics
Korea Maritime University
Pusan 606-791, Korea
E-mail: kum@hanara.kmaritime.ac.kr

Gue Myung Lee
Department of Applied Mathematics
Pukyong National University
Pusan 608-737, Korea
E-mail: gmlee@dolphin.pknu.ac.kr