FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR

PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY
Volume 00, Number 0, Pages 000–000
S 0002-9939(XX)0000-0

FIXED POINTS IN THE FAMILY OF CONVEX REPRESENTATIONS OF A MAXIMAL MONOTONE OPERATOR

B. F. SVAITER

Abstract. Any maximal monotone operator can be characterized by a convex function. The family of such convex functions is invariant under a transformation connected with the Fenchel-Legendre conjugation. We prove that there exists a convex representation of the operator which is a fixed point of this conjugation.

1. Introduction

Let X be a real Banach space and X* its dual. It is usual to identify a point-to-set operator T : X ⇉ X* with its graph, {(x, x*) ∈ X × X* | x* ∈ T(x)}. We will use the notation ⟨x, x*⟩ for the duality product x*(x) of x ∈ X, x* ∈ X*. An operator T : X ⇉ X* is monotone if

  (x, x*), (y, y*) ∈ T  ⟹  ⟨x − y, x* − y*⟩ ≥ 0,

and is maximal monotone if it is monotone and

  ⟨x − y, x* − y*⟩ ≥ 0, ∀(y, y*) ∈ T  ⟹  (x, x*) ∈ T.

Krauss [11] managed to represent maximal monotone operators by subdifferentials of saddle functions on X × X. After that, Fitzpatrick [8] proved that maximal monotone operators can be represented by convex functions on X × X*. Later on, Simons [19] studied maximal monotone operators using a min-max approach. Recently, the convex representation of maximal monotone operators was rediscovered by Burachik and Svaiter [7] and by Martinez-Legaz and Théra [13]. In [7], some results on enlargements are used to perform a systematic study of the family of convex functions which represent a given maximal monotone operator. Here we are concerned with this kind of representation.

Given f : X → R̄ (the extended reals R ∪ {+∞}), the Fenchel-Legendre conjugate of f is f* : X* → R̄,

  f*(x*) := sup_{x ∈ X} ⟨x, x*⟩ − f(x).

The subdifferential of f is the operator ∂f : X ⇉ X*,

  ∂f(x) := {x* ∈ X* | f(y) ≥ f(x) + ⟨y − x, x*⟩, ∀y ∈ X}.

If f is convex, lower semicontinuous and proper, then ∂f is maximal monotone [17]. From the previous definitions, we have the Fenchel-Young inequality: for all x ∈ X, x* ∈ X*,

  f(x) + f*(x*) ≥ ⟨x, x*⟩,    f(x) + f*(x*) = ⟨x, x*⟩ ⟺ x* ∈ ∂f(x).
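As a concrete sanity check of the Fenchel-Young inequality (our illustration, not part of the paper), take X = R and f(x) = |x|, whose conjugate is the indicator function of [−1, 1]; a discretized sup recovers f* on that interval, and the equality case corresponds to x* ∈ ∂f(x):

```python
import numpy as np

# Hypothetical illustration: f(x) = |x| on the real line, for which
# f*(x*) = 0 if |x*| <= 1 and +inf otherwise, and the subdifferential
# is {sign(x)} for x != 0 and [-1, 1] at x = 0.  The bounded grid
# truncates the +inf values, so we only probe |x*| <= 1 below.

xs = np.linspace(-5.0, 5.0, 2001)      # grid used to approximate the sup

def f(x):
    return abs(x)

def f_conj(x_star):
    # f*(x*) = sup_x <x, x*> - f(x), approximated over the grid
    return np.max(xs * x_star - np.abs(xs))

# Fenchel-Young: f(x) + f*(x*) >= <x, x*> for every pair
for x in (-2.0, 0.0, 1.5):
    for x_star in (-0.5, 0.3, 1.0):
        assert f(x) + f_conj(x_star) >= x * x_star - 1e-9

# Equality holds exactly when x* is a subgradient of f at x:
# at x = 2, df(2) = {1}, and f(2) + f*(1) = 2 + 0 = <2, 1>.
assert abs(f(2.0) + f_conj(1.0) - 2.0 * 1.0) < 1e-9
```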
This work was partially supported by CNPq Grant 301200/93-9(RN) and by PRONEX-Optimization.
©1997 American Mathematical Society

So, defining h_FY : X × X* → R̄,

(1.1)  h_FY(x, x*) := f(x) + f*(x*),

we observe that this function fully characterizes ∂f. Assume that f is convex, lower semicontinuous and proper. In this case, ∂f is maximal monotone. Moreover, if we use the canonical injection of X into X**, then f**(x) = f(x) for all x ∈ X. Hence, for all (x, x*) ∈ X × X*,

  (h_FY)*(x*, x) = h_FY(x, x*).

Our aim is to prove that any maximal monotone operator has a convex representation with a similar property.

From now on, T : X ⇉ X* is a maximal monotone operator. Define, as in [8], H(T) to be the family of convex lower semicontinuous functions h : X × X* → R̄ such that

(1.2)  ∀(x, x*) ∈ X × X*, h(x, x*) ≥ ⟨x, x*⟩;  (x, x*) ∈ T ⟹ h(x, x*) = ⟨x, x*⟩.

This family is nonempty [8]. Moreover, for any h ∈ H(T), h(x, x*) = ⟨x, x*⟩ if and only if (x, x*) ∈ T [7]. Hence, any element of H(T) fully characterizes, or represents, T. Since the sup of convex lower semicontinuous functions is also convex and lower semicontinuous, using also (1.2) we conclude that the sup of any (nonempty) subfamily of H(T) is still in H(T).

The dual of X × X* is X* × X**, so, for (x, x*) ∈ X × X* and (y*, y**) ∈ X* × X**,

  ⟨(x, x*), (y*, y**)⟩ = ⟨x, y*⟩ + ⟨x*, y**⟩.

Given a function h : X × X* → R̄, define Jh : X × X* → R̄,

(1.3)  Jh(x, x*) := h*(x*, x),

where h* stands for the Fenchel-Legendre conjugate of h and the canonical inclusion of X in X** is being used. Equivalently,

(1.4)  Jh(x, x*) = sup_{(y, y*) ∈ X × X*} ⟨x, y*⟩ + ⟨y, x*⟩ − h(y, y*).

Trivially, J inverts the natural order of functions, i.e., if h ≤ h′ then Jh ≥ Jh′. The family H(T) is invariant under the application J [7]. The aim of this paper is to prove that there exists an element h ∈ H(T) such that Jh = h.

The application J can be studied in the framework of generalized conjugation [18, Ch. 11, Sec. L]. With this aim, define Φ : (X × X*) × (X × X*) → R,

  Φ((x, x*), (y, y*)) := ⟨x, y*⟩ + ⟨y, x*⟩.

Given h : X × X* → R̄, let h^Φ be the conjugate of h with respect to the coupling function Φ,

(1.5)  h^Φ(x, x*) := sup_{(y, y*) ∈ X × X*} Φ((x, x*), (y, y*)) − h(y, y*).
Now we have Jh = h^Φ and, in particular,

(1.6)  h ≥ h^{ΦΦ} = J²h.
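The transformation J can be probed numerically. The sketch below is our illustration (grid sizes and names are assumptions, not from the paper): for X = R and f(x) = x²/2 we have f* = f, so h_FY(x, x*) = (x² + x*²)/2, and a discretized version of the sup in (1.4) confirms that Jh_FY = h_FY at grid points, an instance of the fixed-point property the paper establishes in general:

```python
import numpy as np

# Illustrative check for X = R, f(x) = x^2/2, so f* = f and
# h_FY(x, x*) = (x^2 + x*^2)/2.  Names and grid are ours.

grid = np.linspace(-10.0, 10.0, 801)            # step 0.025
Y, Ys = np.meshgrid(grid, grid, indexing="ij")  # all (y, y*) pairs

def h_fy(x, x_star):
    return 0.5 * (x**2 + x_star**2)

def J(h, x, x_star):
    # Jh(x, x*) = sup_{(y, y*)} <x, y*> + <y, x*> - h(y, y*)  (eq. (1.4)),
    # approximated by a maximum over the grid.  For this h the sup is
    # attained at (y, y*) = (x*, x), which lies on the grid below.
    return np.max(x * Ys + Y * x_star - h(Y, Ys))

# J h_FY = h_FY: h_FY is a fixed point of the conjugation J.
for x, x_star in [(1.0, 2.0), (-3.0, 0.5), (0.0, 0.0)]:
    assert abs(J(h_fy, x, x_star) - h_fy(x, x_star)) < 1e-9
```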

2. Proof of the Main Theorem

Define σ_T : X × X* → R̄,

  σ_T := sup_{h ∈ H(T)} h.

Since H(T) is closed under the sup operation, we conclude that σ_T is the biggest element of H(T). Combining this fact with the inclusion Jσ_T ∈ H(T), we conclude that

  σ_T ≥ Jσ_T.

For a more detailed discussion of σ_T, we refer the reader to [7, eq. (35)]. The above inequality will be, in some sense, our departure point. Now define

  H_a(T) := {h ∈ H(T) | h ≥ Jh}.

The family H_a(T) is connected with a family of enlargements of T which shares with the ε-subdifferential a special property (see [7]). We already know that σ_T ∈ H_a(T). Later on, we will use the following construction of elements in this set.

Proposition 2.1. Take h ∈ H(T) and define ĥ := max{h, Jh}. Then ĥ ∈ H_a(T).

Proof. Since h and Jh are in H(T), ĥ ∈ H(T). By definition, ĥ ≥ h and ĥ ≥ Jh. Applying J to these inequalities and using (1.6) to majorize J²h, we obtain Jh ≥ Jĥ and h ≥ J²h ≥ Jĥ. Hence, ĥ ≥ Jĥ. □

For h ∈ H(T) define

  L(h) := {g ∈ H(T) | h ≥ g ≥ Jg}.

The operator J inverts the order; therefore L(h) ≠ ∅ if and only if h ≥ Jh, i.e., h ∈ H_a(T). We already know that L(σ_T) ≠ ∅.

Proposition 2.2. For any h ∈ H_a(T), the family L(h) has a minimal element.

Proof. We shall use Zorn's Lemma. Let C ⊂ L(h) be a (nonempty) chain, that is, C is totally ordered. Take h′ ∈ C. For any h″ ∈ C, h″ ≥ h′ or h″ ≤ h′. In the first case we have h″ ≥ h′ ≥ Jh′, and in the second case, h″ ≥ Jh″ ≥ Jh′. Therefore,

(2.1)  h″ ≥ Jh′, ∀ h′, h″ ∈ C.

Now define

(2.2)  ĝ := sup_{h′ ∈ C} Jh′.

Since H(T) is invariant under J and also closed with respect to the sup, we have ĝ ∈ H(T). From (2.1), (2.2) it follows that

  h″ ≥ ĝ ≥ Jh″, ∀ h″ ∈ C.

Applying J to the above inequalities, and also using (1.6), we conclude that

(2.3)  h″ ≥ Jĝ ≥ Jh″, ∀ h″ ∈ C.
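For a concrete picture of σ_T (our illustration, with all names as assumptions), take T = ∂f with f(x) = x²/2 on R, i.e., the identity operator. Here the Fitzpatrick function is ϕ_T(x, x*) = (x + x*)²/4, while π + δ_T (equal to ⟨x, x*⟩ on the diagonal x = x* and +∞ off it) is already convex and lower semicontinuous, so it equals σ_T. Since conjugation ignores the cl conv operation, Jσ_T = J(π + δ_T) = ϕ_T: J maps the biggest element of H(T) onto the smallest, so σ_T ≥ Jσ_T and σ_T ∈ H_a(T). A discretized check:

```python
import numpy as np

# Illustration for T = Id on the real line (T = df with f(x) = x^2/2).
# The closed forms below are standard for this example; the code just
# cross-checks them with a discretized sup.

ys = np.linspace(-10.0, 10.0, 801)   # diagonal points (y, y) of the graph of T

def phi_T(x, x_star):
    # Fitzpatrick function: smallest element of H(T) for this T
    return 0.25 * (x + x_star) ** 2

def J_sigma(x, x_star):
    # sigma_T = pi + delta_T here (finite only on the diagonal), so the
    # sup in (1.4) only runs over pairs (y, y*) = (y, y):
    return np.max(x * ys + ys * x_star - ys**2)

for x, x_star in [(1.0, 2.0), (-0.5, 0.25), (3.0, -1.0)]:
    # J sends the biggest representation sigma_T to the smallest, phi_T,
    # hence sigma_T >= J sigma_T, i.e. sigma_T lies in H_a(T).
    assert abs(J_sigma(x, x_star) - phi_T(x, x_star)) < 1e-6
    # both functions majorize the duality product <x, x*>  (eq. (1.2))
    assert phi_T(x, x_star) >= x * x_star - 1e-12
```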

Since ĝ ∈ H(T), Jĝ ∈ H(T). Taking the sup over h″ ∈ C on the right-hand side of the last inequality, we get Jĝ ≥ ĝ. Applying J again, we obtain Jĝ ≥ J(Jĝ). Take some h″ ∈ C. By the definition of L(h) and (2.3), we conclude that h ≥ h″ ≥ Jĝ. Hence Jĝ belongs to L(h) and is a lower bound for any element of C. Now we apply Zorn's Lemma to conclude that L(h) has a minimal element. □

The minimal elements of L(h) (for h ∈ H_a(T)) are the natural candidates for being fixed points of J. First we will show that they are fixed points of J². Observe that, since J inverts the order of functions, J² preserves it, i.e., if h ≤ h′, then J²h ≤ J²h′. Moreover, J² maps H(T) into itself.

Proposition 2.3. Take h ∈ H_a(T) and let h₀ be a minimal element of L(h). Then J²h₀ = h₀.

Proof. First observe that J²h₀ ∈ H(T). By assumption, h₀ ≥ Jh₀. Applying J² to this inequality we get J²h₀ ≥ J²(Jh₀) = J(J²h₀). Since h ≥ h₀ and, by (1.6), h₀ ≥ J²h₀, we conclude that h ≥ J²h₀ ≥ J(J²h₀). Hence J²h₀ ∈ L(h). Again using the inequality h₀ ≥ J²h₀ and the minimality of h₀, the conclusion follows. □

Theorem 2.4. Take h ∈ H(T) such that h ≥ Jh. Then h₀ ∈ L(h) is minimal (on L(h)) if and only if h₀ = Jh₀.

Proof. Assume first that h₀ = Jh₀. If h′ ∈ L(h) and h₀ ≥ h′, then, applying J to this inequality and using the definition of L(h), we conclude that

  h′ ≥ Jh′ ≥ Jh₀ = h₀.

Combining the above inequalities we obtain h′ = h₀. Hence h₀ is minimal on L(h).

Assume now that h₀ is minimal on L(h). By the definition of L(h), h₀ ≥ Jh₀. Suppose that for some (x₀, x₀*),

(2.4)  h₀(x₀, x₀*) > Jh₀(x₀, x₀*).

We shall prove that this assumption is contradictory. By Proposition 2.3, h₀ = J(Jh₀). Hence, the above inequality can be expressed as

  J(Jh₀)(x₀, x₀*) > Jh₀(x₀, x₀*),

or equivalently,

  sup_{(y, y*) ∈ X × X*} ⟨y, x₀*⟩ + ⟨x₀, y*⟩ − Jh₀(y, y*) > Jh₀(x₀, x₀*).

Therefore, there exists some (y₀, y₀*) ∈ X × X* such that

(2.5)  ⟨y₀, x₀*⟩ + ⟨x₀, y₀*⟩ − Jh₀(y₀, y₀*) > Jh₀(x₀, x₀*).
In particular, Jh₀(y₀, y₀*), Jh₀(x₀, x₀*) ∈ R. Interchanging Jh₀(y₀, y₀*) with Jh₀(x₀, x₀*), we get

  ⟨y₀, x₀*⟩ + ⟨x₀, y₀*⟩ − Jh₀(x₀, x₀*) > Jh₀(y₀, y₀*).

Therefore, using also (1.4), we get

  J(Jh₀)(y₀, y₀*) > Jh₀(y₀, y₀*).

Using again the equality J²h₀ = h₀, we conclude that

(2.6)  h₀(y₀, y₀*) > Jh₀(y₀, y₀*).

Define γ : X × X* → R and g : X × X* → R̄,

(2.7)  γ(x, x*) := ⟨x, y₀*⟩ + ⟨y₀, x*⟩ − Jh₀(y₀, y₀*),
(2.8)  g := max{γ, Jh₀}.

By (1.4), h₀ ≥ γ. Since h₀ ∈ L(h), h₀ ≥ Jh₀. Therefore,

  h₀ ≥ g ≥ Jh₀.

We claim that g ∈ H(T). Indeed, g is a lower semicontinuous convex function. Moreover, since h₀, Jh₀ ∈ H(T), it follows from (1.2) and the above inequalities that g ∈ H(T). Now apply J to the above inequalities to conclude that

  h₀ ≥ Jg ≥ Jh₀.

Therefore, defining

(2.9)  ĝ := max{g, Jg},

we have h ≥ h₀ ≥ ĝ. By Proposition 2.1, ĝ ∈ H(T) and ĝ ≥ Jĝ. Combining these results with the minimality of h₀, it follows that ĝ = h₀. In particular,

(2.10)  ĝ(y₀, y₀*) = h₀(y₀, y₀*).

To conclude the proof we shall evaluate ĝ(y₀, y₀*). Using (2.7) we obtain

  γ(y₀, y₀*) = 2⟨y₀, y₀*⟩ − Jh₀(y₀, y₀*).

Since Jh₀ ∈ H(T), Jh₀(y₀, y₀*) ≥ ⟨y₀, y₀*⟩. Hence γ(y₀, y₀*) ≤ ⟨y₀, y₀*⟩, and by (2.8),

(2.11)  g(y₀, y₀*) = Jh₀(y₀, y₀*).

Again using the inequality g ≥ γ, we have Jγ(y₀, y₀*) ≥ Jg(y₀, y₀*). Direct calculation yields Jγ(y₀, y₀*) = Jh₀(y₀, y₀*). Therefore,

(2.12)  Jh₀(y₀, y₀*) ≥ Jg(y₀, y₀*).

Combining (2.11), (2.12) and (2.9), we obtain ĝ(y₀, y₀*) = Jh₀(y₀, y₀*). This equality, together with (2.10), yields h₀(y₀, y₀*) = Jh₀(y₀, y₀*), in contradiction with (2.6). Therefore, h₀(x, x*) = Jh₀(x, x*) for all (x, x*). □

Since σ_T ∈ H_a(T), L(σ_T) ≠ ∅ and there exists some h ∈ L(σ_T) such that Jh = h. (Indeed, L(σ_T) = H_a(T).)

3. Application

Let f : X → R̄ be a proper lower semicontinuous convex function. We already know that ∂f is maximal monotone. Define, for ε ≥ 0,

  ∂_ε f(x) := {x* ∈ X* | f(y) ≥ f(x) + ⟨y − x, x*⟩ − ε, ∀y ∈ X}.

Note that ∂_0 f = ∂f. We also have

(3.1)  ∂f(x) ⊂ ∂_ε f(x), ∀x ∈ X, ε ≥ 0,
(3.2)  0 ≤ ε₁ ≤ ε₂ ⟹ ∂_{ε₁} f(x) ⊂ ∂_{ε₂} f(x), ∀x ∈ X.

Property (3.1) tells us that ∂_ε f enlarges ∂f. Property (3.2) shows that ∂_ε f is nondecreasing (or increasing) in ε. The operator ∂_ε f was introduced in [3], and since then it has had many theoretical and algorithmic applications [1, 14, 9, 10, 22, 12, 2]. Since ∂f is maximal monotone, the enlarged operator ∂_ε f loses monotonicity in general. Even so, we have

(3.3)  x* ∈ ∂_ε f(x) ⟹ ⟨x − y, x* − y*⟩ ≥ −ε, ∀(y, y*) ∈ ∂f.

Now take

(3.4)  x₁* ∈ ∂_{ε₁} f(x₁),  x₂* ∈ ∂_{ε₂} f(x₂),  p, q ≥ 0,  p + q = 1,

and define

(3.5)  (x̄, x̄*) := p(x₁, x₁*) + q(x₂, x₂*),  ε̄ := pε₁ + qε₂ + pq⟨x₁ − x₂, x₁* − x₂*⟩.

Using the previous definitions, and the convexity of f, it is trivial to check that

(3.6)  ε̄ ≥ 0,  x̄* ∈ ∂_{ε̄} f(x̄).

Properties (3.4), (3.5), (3.6) will be called a transportation formula. If ε₁ = ε₂ = 0, then we are using elements in the graph of ∂f to construct elements in the graph of ∂_ε f. In (3.5), the product of elements in ∂_ε f appears. This product admits the following estimate:

(3.7)  x₁* ∈ ∂_{ε₁} f(x₁), x₂* ∈ ∂_{ε₂} f(x₂) ⟹ ⟨x₁ − x₂, x₁* − x₂*⟩ ≥ −(ε₁ + ε₂).

Moreover, ∂_ε f is maximal with respect to property (3.7). We will call property (3.7) additivity. The enlargement ∂_ε f can be characterized by the function h_FY defined in (1.1):

  x* ∈ ∂_ε f(x) ⟺ h_FY(x, x*) ≤ ⟨x, x*⟩ + ε.

The transportation formula (3.4), (3.5), (3.6) now follows directly from the convexity of h_FY. Additivity follows from the fact that h_FY ≥ Jh_FY, and maximality of the additivity follows from the fact that h_FY = Jh_FY. Define the graph of ∂_(·)f(·) as

  G(∂_(·)f(·)) := {(x, x*, ε) | x* ∈ ∂_ε f(x)}.

Note that G(∂_(·)f(·)) is closed, so we say that ∂_ε f is closed.

Given a maximal monotone T : X ⇉ X*, it would be desirable to have an enlargement of T, say T^ε, with properties similar to those of the ∂_ε f enlargement of ∂f.
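For f(x) = x²/2 on R, a standard computation gives the closed form ∂_ε f(x) = [x − √(2ε), x + √(2ε)], which makes the transportation formula (3.4)-(3.6) and additivity (3.7) easy to verify numerically. The snippet below is our illustration (all names are assumptions, not the paper's):

```python
import math

# Sketch for f(x) = x^2/2 on R, where
# d_eps f(x) = [x - sqrt(2*eps), x + sqrt(2*eps)].

def in_eps_subdiff(x, x_star, eps):
    # x* in d_eps f(x)  <=>  h_FY(x, x*) <= <x, x*> + eps,
    # which for this f reduces to (x - x*)^2 / 2 <= eps.
    return 0.5 * (x - x_star) ** 2 <= eps + 1e-12

# two enlarged subgradients, taken on the boundary of the enlargement
x1, e1 = 1.0, 0.5
x1s = x1 + math.sqrt(2 * e1)
x2, e2 = -2.0, 0.125
x2s = x2 - math.sqrt(2 * e2)
assert in_eps_subdiff(x1, x1s, e1) and in_eps_subdiff(x2, x2s, e2)

# transportation formula (3.5)-(3.6): the convex combination stays inside
p, q = 0.3, 0.7
xb, xbs = p * x1 + q * x2, p * x1s + q * x2s
eb = p * e1 + q * e2 + p * q * (x1 - x2) * (x1s - x2s)
assert eb >= 0 and in_eps_subdiff(xb, xbs, eb)

# additivity (3.7): <x1 - x2, x1* - x2*> >= -(e1 + e2)
assert (x1 - x2) * (x1s - x2s) >= -(e1 + e2)
```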
With this aim, such an object was defined in [4, 5] (in finite-dimensional spaces and in Banach spaces, respectively), for ε ≥ 0, as

(3.8)  T^ε(x) := {x* ∈ X* | ⟨x − y, x* − y*⟩ ≥ −ε, ∀(y, y*) ∈ T}.

The T^ε enlargement of T shares many properties with the ∂_ε f enlargement of ∂f: the transportation formula, Lipschitz continuity (in the interior of its domain), and even the Brøndsted-Rockafellar property (in reflexive Banach spaces). Since its introduction, it has had both theoretical and algorithmic applications [4, 6, 20, 21, 15, 16]. Even so, T^ε is not the extension of the construct ∂_ε f to a generic maximal monotone operator. Indeed, taking T = ∂f, we obtain

  ∂_ε f(x) ⊂ (∂f)^ε(x),

with examples of strict inclusion even in finite-dimensional cases [4]. Therefore, in general, T^ε lacks the additivity property (3.7). The T^ε enlargement satisfies a weaker property [5]:

  x₁* ∈ T^{ε₁}(x₁), x₂* ∈ T^{ε₂}(x₂) ⟹ ⟨x₁ − x₂, x₁* − x₂*⟩ ≥ −(√ε₁ + √ε₂)².

The enlargement T^ε is also connected with a convex function. Indeed,

  x* ∈ T^ε(x) ⟺ ⟨x − y, x* − y*⟩ ≥ −ε, ∀(y, y*) ∈ T ⟺ sup_{(y, y*) ∈ T} ⟨x − y, y* − x*⟩ ≤ ε.

The Fitzpatrick function ϕ_T is the smallest element of H(T) [8], and is defined as

(3.9)  ϕ_T(x, x*) := sup_{(y, y*) ∈ T} ⟨x − y, y* − x*⟩ + ⟨x, x*⟩.

Therefore,

  x* ∈ T^ε(x) ⟺ ϕ_T(x, x*) ≤ ⟨x, x*⟩ + ε.

Now, the transportation formula for T^ε follows from the convexity of ϕ_T. In [7] it is proven that each enlargement T̂^ε of T which has a closed graph, is nondecreasing in ε, and satisfies the transportation formula, is characterized by a function ĥ ∈ H(T) via the formula

  x* ∈ T̂^ε(x) ⟺ ĥ(x, x*) ≤ ⟨x, x*⟩ + ε.

So, if we want to retain additivity,

  x₁* ∈ T̂^{ε₁}(x₁), x₂* ∈ T̂^{ε₂}(x₂) ⟹ ⟨x₁ − x₂, x₁* − x₂*⟩ ≥ −(ε₁ + ε₂),

we shall require ĥ ≥ Jĥ. The enlargements in this family which are also maximal with respect to additivity are structurally closer to the ∂_ε f enlargement, and are characterized by ĥ ∈ H(T) with ĥ = Jĥ.
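The strict inclusion ∂_ε f ⊂ (∂f)^ε can already be seen for f(x) = x²/2 on R: minimizing (x − y)(x* − y) over y gives (∂f)^ε(x) = [x − 2√ε, x + 2√ε], which for ε > 0 is strictly larger than ∂_ε f(x) = [x − √(2ε), x + √(2ε)]. A small check (our example, not taken from the paper):

```python
import math
import numpy as np

# For f(x) = x^2/2 on R, T = df is the identity, so (y, y*) = (y, y)
# in definition (3.8).  We test membership in T^eps over a grid of y.

ys = np.linspace(-50.0, 50.0, 20001)   # step 0.005

def in_T_eps(x, x_star, eps):
    # definition (3.8): <x - y, x* - y*> >= -eps for all (y, y) in T
    return np.min((x - ys) * (x_star - ys)) >= -eps - 1e-9

x, eps = 1.0, 1.0
inner = math.sqrt(2 * eps)             # radius of d_eps f(x): sqrt(2) ~ 1.414
outer = 2 * math.sqrt(eps)             # radius of T^eps(x):   2

# a point of T^eps(x) that is NOT an eps-subgradient of f at x:
x_star = x + 1.8                       # inner < 1.8 < outer
assert in_T_eps(x, x_star, eps)
assert 0.5 * (x - x_star) ** 2 > eps   # fails the d_eps f membership test

# every eps-subgradient does lie in T^eps(x)  (property (3.3))
assert in_T_eps(x, x + inner, eps)
```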
If there were only one element of H(T) which is a fixed point of J, then this element would be the canonical representation of T by a convex function, and the associated enlargement would be the extension of the ε-subdifferential enlargement to T. Unfortunately, it is not clear whether such fixed points are unique. Existence of an additive enlargement of T, maximal with respect to additivity, was proved in [23]. The convex representation of this enlargement turned out to be minimal in the family H_a(T), but the characterization of these minimal elements of H_a(T) as fixed points of J was lacking.

Since the function σ_T has played a fundamental role in our proof, we describe it again here. Let δ_T be the indicator function of T, i.e., its value is 0 in T and elsewhere,

in (X × X*) \ T, its value is +∞. Denote the duality product by π : X × X* → R, π(x, x*) = ⟨x, x*⟩. Then

  σ_T = cl conv(π + δ_T),

where cl conv f stands for the biggest lower semicontinuous convex function majorized by f. We refer the reader to [7] for a detailed analysis of this function.

Acknowledgement. We thank the anonymous referee for the suggestions which helped to improve this paper.

References

[1] D. P. Bertsekas and S. K. Mitter, A descent numerical method for optimization problems with nondifferentiable cost functionals, SIAM Journal on Control 11 (1973), no. 4, 637–652.
[2] F. Bonnans, J. Ch. Gilbert, C. Lemaréchal, and C. A. Sagastizábal, Optimisation Numérique: aspects théoriques et pratiques, Collection Mathématiques et Applications, SMAI-Springer-Verlag, Berlin, 1997.
[3] A. Brøndsted and R. T. Rockafellar, On the subdifferentiability of convex functions, Proceedings of the American Mathematical Society 16 (1965), 605–611.
[4] R. S. Burachik, A. N. Iusem, and B. F. Svaiter, Enlargement of monotone operators with applications to variational inequalities, Set-Valued Analysis 5 (1997), no. 2, 159–180.
[5] R. S. Burachik and B. F. Svaiter, ε-enlargements of maximal monotone operators in Banach spaces, Set-Valued Analysis 7 (1999), no. 2, 117–132.
[6] R. S. Burachik, C. A. Sagastizábal, and B. F. Svaiter, ε-enlargements of maximal monotone operators: theory and applications, in M. Fukushima and L. Qi (eds.), Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods (Lausanne, 1997), Applied Optimization 22, Kluwer Acad. Publ., Dordrecht, 1999, pp. 25–43.
[7] R. S. Burachik and B. F. Svaiter, Maximal monotone operators, convex functions and a special family of enlargements, Set-Valued Analysis (to appear).
[8] S. Fitzpatrick, Representing monotone operators by convex functions, in Workshop/Miniconference on Functional Analysis and Optimization (Canberra, 1988), Proc. Centre Math. Anal. Austral. Nat. Univ. 20, Austral. Nat. Univ., Canberra, 1988, pp. 59–65.
[9] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms (two volumes), Grundlehren der mathematischen Wissenschaften 305–306, Springer-Verlag, 1993.
[10] K. C. Kiwiel, Proximity control in bundle methods for convex nondifferentiable minimization, Mathematical Programming 46 (1990), 105–122.
[11] E. Krauss, A representation of maximal monotone operators by saddle functions, Rev. Roumaine Math. Pures Appl. 30 (1985), 823–837.
[12] C. Lemaréchal, A. Nemirovskii, and Yu. Nesterov, New variants of bundle methods, Mathematical Programming 69 (1995), 111–148.
[13] J.-E. Martinez-Legaz and M. Théra, A convex representation of maximal monotone operators, Journal of Nonlinear and Convex Analysis 2 (2001), 243–247.
[14] E. A. Nurminski, ε-subgradient mapping and the problem of convex optimization, Cybernetics 21 (1986), no. 6, 796–800.
[15] J. P. Revalski and M. Théra, Generalized sums of monotone operators, C. R. Acad. Sci. Paris Sér. I Math. 329 (1999), no. 11, 979–984.
[16] J. P. Revalski and M. Théra, Variational and extended sums of monotone operators, in Ill-posed Variational Problems and Regularization Techniques (Trier, 1998), Lecture Notes in Econom. and Math. Systems 477, Springer, Berlin, 1999, pp. 229–246.
[17] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pacific Journal of Mathematics 33 (1970), 209–216.
[18] R. T. Rockafellar and R. J-B. Wets, Variational Analysis, Springer-Verlag, Berlin Heidelberg, 1998.
[19] S. Simons, Minimax and Monotonicity, Lecture Notes in Mathematics 1693, Springer-Verlag, Berlin, 1998.

[20] M. V. Solodov and B. F. Svaiter, An inexact hybrid extragradient-proximal point algorithm using the enlargement of a maximal monotone operator, Set-Valued Analysis 7 (1999), no. 4, 323–345.
[21] M. V. Solodov and B. F. Svaiter, Error bounds for proximal point subproblems and associated inexact proximal point algorithms, Mathematical Programming 88 (2000), no. 2, 371–389.
[22] H. Schramm and J. Zowe, A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results, SIAM Journal on Optimization 2 (1992), no. 1, 121–152.
[23] B. F. Svaiter, A family of enlargements of maximal monotone operators, Set-Valued Analysis 8 (2000), no. 4, 311–328.

IMPA, Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Rio de Janeiro, RJ, CEP 22460-320, Brazil
E-mail address: benar@impa.br