
arXiv:0802.1347v2 [math.FA] 12 Feb 2008

Fixed points in the family of convex representations of a maximal monotone operator

Published in: Proc. Amer. Math. Soc. 131 (2003), 3851–3859.

B. F. Svaiter
IMPA, Instituto de Matemática Pura e Aplicada
Estrada Dona Castorina 110, Rio de Janeiro, RJ, CEP 22460-320, Brazil
email: benar@impa.br

7 August 2002

Abstract. Any maximal monotone operator can be characterized by a convex function. The family of such convex functions is invariant under a transformation connected with the Fenchel-Legendre conjugation. We prove that there exists a convex representation of the operator which is a fixed point of this conjugation.

2000 Mathematics Subject Classification: 47H05.

Keywords: maximal monotone operators, conjugation, convex functions.

Partially supported by CNPq Grant 301200/93-9(RN) and by PRONEX-Optimization.

1 Introduction

Let $X$ be a real Banach space and $X^*$ its dual. It is usual to identify a point-to-set operator $T : X \rightrightarrows X^*$ with its graph, $\{(x,x^*) \in X \times X^* \mid x^* \in T(x)\}$. We will use the notation $\langle x, x^*\rangle$ for the duality product $x^*(x)$ of $x \in X$, $x^* \in X^*$. An operator $T : X \rightrightarrows X^*$ is monotone if
\[ (x,x^*),\,(y,y^*) \in T \implies \langle x-y,\, x^*-y^*\rangle \geq 0, \]
and it is maximal monotone if it is monotone and
\[ \langle x-y,\, x^*-y^*\rangle \geq 0 \ \ \forall (y,y^*) \in T \implies (x,x^*) \in T. \]

Krauss [11] managed to represent maximal monotone operators by subdifferentials of saddle functions on $X \times X$. After that, Fitzpatrick [8] proved that maximal monotone operators can be represented by convex functions on $X \times X^*$. Later on, Simons [19] studied maximal monotone operators using a min-max approach. Recently, the convex representation of maximal monotone operators was rediscovered by Burachik and Svaiter [7] and by Martinez-Legaz and Théra [13]. In [7], some results on enlargements are used to perform a systematic study of the family of convex functions which represent a given maximal monotone operator. Here we are concerned with this kind of representation.

Given $f : X \to \bar{\mathbb{R}}$, the Fenchel-Legendre conjugate of $f$ is $f^* : X^* \to \bar{\mathbb{R}}$,
\[ f^*(x^*) := \sup_{x \in X}\ \langle x, x^*\rangle - f(x). \]
The subdifferential of $f$ is the operator $\partial f : X \rightrightarrows X^*$,
\[ \partial f(x) := \{x^* \in X^* \mid f(y) \geq f(x) + \langle y-x,\, x^*\rangle,\ \forall y \in X\}. \]
If $f$ is convex, lower semicontinuous and proper, then $\partial f$ is maximal monotone [17]. From the previous definitions we have the Fenchel-Young inequality: for all $x \in X$, $x^* \in X^*$,
\[ f(x) + f^*(x^*) \geq \langle x, x^*\rangle, \qquad f(x) + f^*(x^*) = \langle x, x^*\rangle \iff x^* \in \partial f(x). \]
So, defining
\[ h_{FY} : X \times X^* \to \bar{\mathbb{R}}, \qquad h_{FY}(x,x^*) := f(x) + f^*(x^*), \tag{1.1} \]
we observe that this function fully characterizes $\partial f$.
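As a sanity check of these definitions (a standard example, added here for illustration and not part of the original text): let $X$ be a Hilbert space, identified with its dual, and $f = \frac{1}{2}\|\cdot\|^2$. Then $f^* = \frac{1}{2}\|\cdot\|^2$, $\partial f$ is the identity, and
\[ h_{FY}(x,x^*) = \tfrac{1}{2}\|x\|^2 + \tfrac{1}{2}\|x^*\|^2 = \langle x, x^*\rangle + \tfrac{1}{2}\|x - x^*\|^2 \geq \langle x, x^*\rangle, \]
with equality if and only if $x^* = x$, that is, exactly on the graph of $\partial f$.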

Assume that $f$ is convex, lower semicontinuous and proper. In this case, $\partial f$ is maximal monotone. Moreover, if we use the canonical injection of $X$ into $X^{**}$, then $f^{**}(x) = f(x)$ for all $x \in X$. Hence, for all $(x,x^*) \in X \times X^*$,
\[ (h_{FY})^*(x^*,x) = h_{FY}(x,x^*). \]
Our aim is to prove that any maximal monotone operator has a convex representation with a similar property.

From now on, $T : X \rightrightarrows X^*$ is a maximal monotone operator. Define, as in [8], $H(T)$ to be the family of convex lower semicontinuous functions $h : X \times X^* \to \bar{\mathbb{R}}$ such that
\[ \forall (x,x^*) \in X \times X^*,\ \ h(x,x^*) \geq \langle x, x^*\rangle; \qquad (x,x^*) \in T \implies h(x,x^*) = \langle x, x^*\rangle. \tag{1.2} \]
This family is nonempty [8]. Moreover, for any $h \in H(T)$, $h(x,x^*) = \langle x, x^*\rangle$ if and only if $(x,x^*) \in T$ [7]. Hence, any element of $H(T)$ fully characterizes, or represents, $T$. Since the sup of a family of convex lower semicontinuous functions is also convex and lower semicontinuous, using also (1.2) we conclude that the sup of any (nonempty) subfamily of $H(T)$ is still in $H(T)$.

The dual of $X \times X^*$ is $X^* \times X^{**}$. So, for $(x,x^*) \in X \times X^*$ and $(y^*,y^{**}) \in X^* \times X^{**}$,
\[ \langle (x,x^*),\,(y^*,y^{**})\rangle = \langle x, y^*\rangle + \langle x^*, y^{**}\rangle. \]
Given a function $h : X \times X^* \to \bar{\mathbb{R}}$, define $Jh : X \times X^* \to \bar{\mathbb{R}}$,
\[ Jh(x,x^*) := h^*(x^*,x), \tag{1.3} \]
where $h^*$ stands for the Fenchel-Legendre conjugate of $h$ and the canonical inclusion of $X$ in $X^{**}$ is being used. Equivalently,
\[ Jh(x,x^*) = \sup_{(y,y^*) \in X \times X^*} \langle x, y^*\rangle + \langle y, x^*\rangle - h(y,y^*). \tag{1.4} \]
Trivially, $J$ inverts the natural order of functions, i.e., if $h \leq h'$ then $Jh \geq Jh'$. The family $H(T)$ is invariant under the application $J$ [7]. The aim of this paper is to prove that there exists an element $h \in H(T)$ such that $Jh = h$.
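Continuing the Hilbert-space example above (again an added illustration, not part of the original argument): for $f = \frac{1}{2}\|\cdot\|^2$ the supremum in (1.4) splits into two independent maximizations, attained at $y = x^*$ and $y^* = x$, so that
\[ Jh_{FY}(x,x^*) = \sup_{(y,y^*)} \left(\langle y, x^*\rangle - \tfrac{1}{2}\|y\|^2\right) + \left(\langle x, y^*\rangle - \tfrac{1}{2}\|y^*\|^2\right) = \tfrac{1}{2}\|x^*\|^2 + \tfrac{1}{2}\|x\|^2 = h_{FY}(x,x^*). \]
Thus $h_{FY}$ is itself a fixed point of $J$ in this case; the theorem proved below guarantees that such a fixed point exists for every maximal monotone $T$.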

The application $J$ can be studied in the framework of generalized conjugation [18, Ch. 11, Sec. L]. With this aim, define $\Phi : (X \times X^*) \times (X \times X^*) \to \bar{\mathbb{R}}$,
\[ \Phi((x,x^*),(y,y^*)) := \langle x, y^*\rangle + \langle y, x^*\rangle. \]
Given $h : X \times X^* \to \bar{\mathbb{R}}$, let $h^\Phi$ be the conjugate of $h$ with respect to the coupling function $\Phi$,
\[ h^\Phi(x,x^*) := \sup_{(y,y^*) \in X \times X^*} \Phi((x,x^*),(y,y^*)) - h(y,y^*). \tag{1.5} \]
Now we have $Jh = h^\Phi$ and, in particular,
\[ h \geq h^{\Phi\Phi} = J^2 h. \tag{1.6} \]
(To verify (1.6), note that $\Phi$ is symmetric and, by (1.5), $Jh(y,y^*) \geq \Phi((x,x^*),(y,y^*)) - h(x,x^*)$ for every $(x,x^*)$; hence every term in the sup defining $J^2h(x,x^*)$ is at most $h(x,x^*)$.)

2 Proof of the Main Theorem

Define, as in [7], $\sigma_T : X \times X^* \to \bar{\mathbb{R}}$,
\[ \sigma_T := \sup_{h \in H(T)} h. \]
Since $H(T)$ is closed under the sup operation, $\sigma_T$ is the biggest element of $H(T)$. Combining this fact with the inclusion $J\sigma_T \in H(T)$, we conclude that $\sigma_T \geq J\sigma_T$. For a more detailed discussion of $\sigma_T$ we refer the reader to [7, eq. (35)]. The above inequality will be, in some sense, our departure point. Define now
\[ H_a(T) := \{h \in H(T) \mid h \geq Jh\}. \]
The family $H_a(T)$ is connected with a family of enlargements of $T$ which shares with the $\varepsilon$-subdifferential a special property (see [7]). We already know that $\sigma_T \in H_a(T)$. Later on, we will use the following construction of elements of this set.

Proposition 2.1. Take $h \in H(T)$ and define
\[ \hat{h} := \max\{h, Jh\}. \]
Then $\hat{h} \in H_a(T)$.

Proof. Since $h$ and $Jh$ are in $H(T)$, $\hat{h} \in H(T)$. By definition, $\hat{h} \geq h$ and $\hat{h} \geq Jh$. Applying $J$ to these inequalities and using (1.6) to majorize $J^2h$, we obtain
\[ Jh \geq J\hat{h}, \qquad h \geq J^2h \geq J\hat{h}. \]
Hence, $\hat{h} \geq J\hat{h}$.

For $h \in H(T)$ define
\[ L(h) := \{g \in H(T) \mid h \geq g \geq Jg\}. \]
The operator $J$ inverts the order. Therefore, $L(h) \neq \emptyset$ if and only if $h \geq Jh$, i.e., $h \in H_a(T)$. We already know that $L(\sigma_T) \neq \emptyset$.

Proposition 2.2. For any $h \in H_a(T)$, the family $L(h)$ has a minimal element.

Proof. We shall use Zorn's Lemma. Let $C \subseteq L(h)$ be a (nonempty) chain, that is, a totally ordered subset. Take $h', h'' \in C$. Then $h' \geq h''$ or $h'' \geq h'$. In the first case we have $h' \geq h'' \geq Jh''$, and in the second case $h' \geq Jh' \geq Jh''$. Therefore,
\[ h' \geq Jh'', \qquad \forall\, h', h'' \in C. \tag{2.1} \]
Define now
\[ \hat{g} := \sup_{h' \in C} Jh'. \tag{2.2} \]
Since $H(T)$ is invariant under $J$ and also closed with respect to the sup, we have $\hat{g} \in H(T)$. From (2.1) and (2.2) it follows that
\[ h' \geq \hat{g} \geq Jh', \qquad \forall\, h' \in C. \]
Applying $J$ to these inequalities, and using also (1.6), we conclude that
\[ h' \geq J\hat{g} \geq Jh', \qquad \forall\, h' \in C. \tag{2.3} \]
Since $\hat{g} \in H(T)$, also $J\hat{g} \in H(T)$. Taking the sup over $h' \in C$ on the right-hand side of the last inequality, we get $J\hat{g} \geq \hat{g}$.

Applying $J$ again, we obtain $J\hat{g} \geq J(J\hat{g})$. Take some $h' \in C$. By the definition of $L(h)$ and (2.3), we conclude that
\[ h \geq h' \geq J\hat{g}. \]
Hence $J\hat{g}$ belongs to $L(h)$ and is a lower bound for every element of $C$. Now we apply Zorn's Lemma to conclude that $L(h)$ has a minimal element.

The minimal elements of $L(h)$ (for $h \in H_a(T)$) are the natural candidates for being fixed points of $J$. First we will show that they are fixed points of $J^2$. Observe that, since $J$ inverts the order of functions, $J^2$ preserves it, i.e., if $h \geq h'$ then $J^2h \geq J^2h'$. Moreover, $J^2$ maps $H(T)$ into itself.

Proposition 2.3. Take $h \in H_a(T)$ and let $h_0$ be a minimal element of $L(h)$. Then $J^2h_0 = h_0$.

Proof. First observe that $J^2h_0 \in H(T)$. By assumption, $h_0 \geq Jh_0$. Applying $J^2$ to this inequality we get
\[ J^2h_0 \geq J^2(Jh_0) = J(J^2h_0). \]
Since $h \geq h_0$ and, by (1.6), $h_0 \geq J^2h_0$, we conclude that
\[ h \geq J^2h_0 \geq J(J^2h_0). \]
Hence $J^2h_0 \in L(h)$. Using again the inequality $h_0 \geq J^2h_0$ and the minimality of $h_0$, the conclusion follows.

Theorem 2.4. Take $h \in H(T)$ such that $h \geq Jh$. Then $h_0 \in L(h)$ is minimal (in $L(h)$) if and only if $h_0 = Jh_0$.

Proof. Assume first that $h_0 = Jh_0$. If $h' \in L(h)$ and $h_0 \geq h'$, then, applying $J$ to this inequality and using the definition of $L(h)$, we conclude that
\[ h' \geq Jh' \geq Jh_0 = h_0. \]
Combining the above inequalities we obtain $h' = h_0$. Hence $h_0$ is minimal in $L(h)$.

Assume now that $h_0$ is minimal in $L(h)$. By the definition of $L(h)$, $h_0 \geq Jh_0$. Suppose that for some $(x_0,x_0^*)$,
\[ h_0(x_0,x_0^*) > Jh_0(x_0,x_0^*). \tag{2.4} \]

We shall prove that this assumption leads to a contradiction. By Proposition 2.3, $h_0 = J(Jh_0)$. Hence, the above inequality can be expressed as
\[ J(Jh_0)(x_0,x_0^*) > Jh_0(x_0,x_0^*), \]
or equivalently,
\[ \sup_{(y,y^*) \in X \times X^*} \langle y, x_0^*\rangle + \langle x_0, y^*\rangle - Jh_0(y,y^*) > Jh_0(x_0,x_0^*). \]
Therefore, there exists some $(y_0,y_0^*) \in X \times X^*$ such that
\[ \langle y_0, x_0^*\rangle + \langle x_0, y_0^*\rangle - Jh_0(y_0,y_0^*) > Jh_0(x_0,x_0^*). \tag{2.5} \]
In particular, $Jh_0(y_0,y_0^*),\, Jh_0(x_0,x_0^*) \in \mathbb{R}$. Interchanging $Jh_0(y_0,y_0^*)$ with $Jh_0(x_0,x_0^*)$ we get
\[ \langle y_0, x_0^*\rangle + \langle x_0, y_0^*\rangle - Jh_0(x_0,x_0^*) > Jh_0(y_0,y_0^*). \]
Therefore, using also (1.4), we get
\[ J(Jh_0)(y_0,y_0^*) > Jh_0(y_0,y_0^*). \]
Using again the equality $J^2h_0 = h_0$, we conclude that
\[ h_0(y_0,y_0^*) > Jh_0(y_0,y_0^*). \tag{2.6} \]
Define $\gamma : X \times X^* \to \bar{\mathbb{R}}$ and $g : X \times X^* \to \bar{\mathbb{R}}$ by
\[ \gamma(x,x^*) := \langle x, y_0^*\rangle + \langle y_0, x^*\rangle - Jh_0(y_0,y_0^*), \tag{2.7} \]
\[ g := \max\{\gamma,\, Jh_0\}. \tag{2.8} \]
By (1.4), $h_0 \geq \gamma$. Since $h_0 \in L(h)$, $h_0 \geq Jh_0$. Therefore,
\[ h_0 \geq g \geq Jh_0. \]
We claim that $g \in H(T)$. Indeed, $g$ is a lower semicontinuous convex function; moreover, since $h_0, Jh_0 \in H(T)$, it follows from (1.2) and the above inequalities that $g \in H(T)$. Now apply $J$ to the above inequalities to conclude that
\[ h_0 \geq Jg \geq Jh_0. \]
Therefore, define
\[ \hat{g} := \max\{g,\, Jg\}. \tag{2.9} \]

We then have
\[ h \geq h_0 \geq \hat{g}. \]
By Proposition 2.1, $\hat{g} \in H(T)$ and $\hat{g} \geq J\hat{g}$. Combining these results with the minimality of $h_0$, it follows that $\hat{g} = h_0$. In particular,
\[ \hat{g}(y_0,y_0^*) = h_0(y_0,y_0^*). \tag{2.10} \]
To end the proof we shall evaluate $\hat{g}(y_0,y_0^*)$. Using (2.7) we obtain
\[ \gamma(y_0,y_0^*) = 2\langle y_0, y_0^*\rangle - Jh_0(y_0,y_0^*). \]
Since $Jh_0 \in H(T)$, $Jh_0(y_0,y_0^*) \geq \langle y_0, y_0^*\rangle$. Hence $\gamma(y_0,y_0^*) \leq \langle y_0, y_0^*\rangle$ and, by (2.8),
\[ g(y_0,y_0^*) = Jh_0(y_0,y_0^*). \tag{2.11} \]
Using the inequality $g \geq \gamma$, we have $J\gamma(y_0,y_0^*) \geq Jg(y_0,y_0^*)$. Direct calculation yields $J\gamma(y_0,y_0^*) = Jh_0(y_0,y_0^*)$. Therefore,
\[ Jh_0(y_0,y_0^*) \geq Jg(y_0,y_0^*). \tag{2.12} \]
Combining (2.11), (2.12) and (2.9) we obtain
\[ \hat{g}(y_0,y_0^*) = Jh_0(y_0,y_0^*). \]
This equality, together with (2.10), yields $h_0(y_0,y_0^*) = Jh_0(y_0,y_0^*)$, in contradiction with (2.6). Therefore, $h_0(x,x^*) = Jh_0(x,x^*)$ for all $(x,x^*)$.

Since $\sigma_T \in H_a(T)$, $L(\sigma_T) \neq \emptyset$ and there exists some $h \in L(\sigma_T)$ such that $Jh = h$. (Indeed, $L(\sigma_T) = H_a(T)$.)

3 Application

Let $f : X \to \bar{\mathbb{R}}$ be a proper lower semicontinuous convex function. We already know that $\partial f$ is maximal monotone. Define, for $\varepsilon \geq 0$,
\[ \partial_\varepsilon f(x) := \{x^* \in X^* \mid f(y) \geq f(x) + \langle y-x,\, x^*\rangle - \varepsilon,\ \forall y \in X\}. \]
Note that $\partial_0 f = \partial f$. We also have
\[ \partial f(x) \subseteq \partial_\varepsilon f(x), \qquad \forall x \in X,\ \varepsilon \geq 0, \tag{3.1} \]
\[ 0 \leq \varepsilon_1 \leq \varepsilon_2 \implies \partial_{\varepsilon_1} f(x) \subseteq \partial_{\varepsilon_2} f(x), \qquad \forall x \in X. \tag{3.2} \]
Property (3.1) says that $\partial_\varepsilon f$ enlarges $\partial f$; property (3.2) shows that $\partial_\varepsilon f$ is nondecreasing in $\varepsilon$.
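In the simplest one-dimensional case (an added illustration, not from the original text): for $f(x) = \frac{1}{2}x^2$ on $X = \mathbb{R}$, minimizing $y \mapsto \frac{1}{2}y^2 - \frac{1}{2}x^2 - (y-x)x^* + \varepsilon$ over $y$ shows that
\[ \partial_\varepsilon f(x) = \{x^* \in \mathbb{R} \mid (x^*-x)^2 \leq 2\varepsilon\} = [\,x-\sqrt{2\varepsilon},\; x+\sqrt{2\varepsilon}\,], \]
an interval which collapses to $\partial f(x) = \{x\}$ as $\varepsilon \downarrow 0$ and grows with $\varepsilon$, in accordance with (3.1) and (3.2).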

The operator $\partial_\varepsilon f$ was introduced in [3] and, since then, it has had many theoretical and algorithmic applications [1, 14, 9, 10, 22, 12, 2]. Since $\partial f$ is maximal monotone, the enlarged operator $\partial_\varepsilon f$ loses monotonicity in general. Even so, we have
\[ x^* \in \partial_\varepsilon f(x) \implies \langle x-y,\, x^*-y^*\rangle \geq -\varepsilon, \qquad \forall (y,y^*) \in \partial f. \tag{3.3} \]
Now, take
\[ x_1^* \in \partial_{\varepsilon_1} f(x_1), \quad x_2^* \in \partial_{\varepsilon_2} f(x_2), \qquad p, q \geq 0,\ p+q = 1, \tag{3.4} \]
and define
\[ (\bar{x}, \bar{x}^*) := p\,(x_1,x_1^*) + q\,(x_2,x_2^*), \qquad \bar{\varepsilon} := p\varepsilon_1 + q\varepsilon_2 + pq\,\langle x_1-x_2,\, x_1^*-x_2^*\rangle. \tag{3.5} \]
Using the previous definitions and the convexity of $f$, it is trivial to check that
\[ \bar{\varepsilon} \geq 0, \qquad \bar{x}^* \in \partial_{\bar{\varepsilon}} f(\bar{x}). \tag{3.6} \]
Properties (3.4), (3.5), (3.6) will be called a transportation formula. If $\varepsilon_1 = \varepsilon_2 = 0$, then we are using elements in the graph of $\partial f$ to construct elements in the graph of $\partial_\varepsilon f$. In (3.5), the duality product of elements of $\partial_\varepsilon f$ appears. This product admits the following estimate:
\[ x_1^* \in \partial_{\varepsilon_1} f(x_1),\ x_2^* \in \partial_{\varepsilon_2} f(x_2) \implies \langle x_1-x_2,\, x_1^*-x_2^*\rangle \geq -(\varepsilon_1+\varepsilon_2). \tag{3.7} \]
Moreover, $\partial_\varepsilon f$ is maximal with respect to property (3.7). We will call property (3.7) additivity (a short derivation of it is given below). The enlargement $\partial_\varepsilon f$ can be characterized by the function $h_{FY}$ defined in (1.1):
\[ x^* \in \partial_\varepsilon f(x) \iff h_{FY}(x,x^*) \leq \langle x, x^*\rangle + \varepsilon. \]
The transportation formula (3.4)-(3.6) now follows directly from the convexity of $h_{FY}$. Additivity follows from the fact that $h_{FY} \geq Jh_{FY}$, and maximality of the additivity follows from the fact that $h_{FY} = Jh_{FY}$. Define the graph of $\partial_{(\cdot)} f(\cdot)$ as
\[ G(\partial_{(\cdot)} f(\cdot)) := \{(x,x^*,\varepsilon) \mid x^* \in \partial_\varepsilon f(x)\}. \]
Note that $G(\partial_{(\cdot)} f(\cdot))$ is closed; so we say that $\partial_\varepsilon f$ is closed.
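Here is the short derivation of the additivity property (3.7) announced above (a routine computation, spelled out here although it is omitted in the original text): the definitions of $\partial_{\varepsilon_1} f$ and $\partial_{\varepsilon_2} f$ give
\[ f(x_2) \geq f(x_1) + \langle x_2-x_1,\, x_1^*\rangle - \varepsilon_1, \qquad f(x_1) \geq f(x_2) + \langle x_1-x_2,\, x_2^*\rangle - \varepsilon_2; \]
adding these two inequalities and cancelling $f(x_1) + f(x_2)$ yields $\langle x_1-x_2,\, x_1^*-x_2^*\rangle \geq -(\varepsilon_1+\varepsilon_2)$.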

Given $T : X \rightrightarrows X^*$ maximal monotone, it would be desirable to have an enlargement of $T$, say $T^\varepsilon$, with properties similar to those of the $\partial_\varepsilon f$ enlargement of $\partial f$. With this aim, such an object was defined in [4, 5] (in finite-dimensional spaces and in Banach spaces, respectively): for $\varepsilon \geq 0$,
\[ T^\varepsilon(x) := \{x^* \in X^* \mid \langle x-y,\, x^*-y^*\rangle \geq -\varepsilon,\ \forall (y,y^*) \in T\}. \tag{3.8} \]
The $T^\varepsilon$ enlargement of $T$ shares many properties with the $\partial_\varepsilon f$ enlargement of $\partial f$: the transportation formula, Lipschitz continuity (in the interior of its domain), and even the Brøndsted-Rockafellar property (in reflexive Banach spaces). Since its introduction, it has had both theoretical and algorithmic applications [4, 6, 20, 21, 15, 16]. Even so, $T^\varepsilon$ is not the extension of the construction $\partial_\varepsilon f$ to a generic maximal monotone operator. Indeed, taking $T = \partial f$, we obtain
\[ \partial_\varepsilon f(x) \subseteq (\partial f)^\varepsilon(x), \]
with examples of strict inclusion even in finite-dimensional cases [4]. Therefore, in general, $T^\varepsilon$ lacks the additivity property (3.7). The $T^\varepsilon$ enlargement satisfies a weaker property [5]:
\[ x_1^* \in T^{\varepsilon_1}(x_1),\ x_2^* \in T^{\varepsilon_2}(x_2) \implies \langle x_1-x_2,\, x_1^*-x_2^*\rangle \geq -\left(\sqrt{\varepsilon_1}+\sqrt{\varepsilon_2}\right)^2. \]
The enlargement $T^\varepsilon$ is also connected with a convex function. Indeed,
\[ x^* \in T^\varepsilon(x) \iff \langle x-y,\, x^*-y^*\rangle \geq -\varepsilon\ \ \forall (y,y^*) \in T \iff \sup_{(y,y^*) \in T} \langle x-y,\, y^*-x^*\rangle \leq \varepsilon. \]
The Fitzpatrick function $\varphi_T$, which is the smallest element of $H(T)$ [8], is defined as
\[ \varphi_T(x,x^*) := \sup_{(y,y^*) \in T} \langle x-y,\, y^*-x^*\rangle + \langle x, x^*\rangle. \tag{3.9} \]
Therefore,
\[ x^* \in T^\varepsilon(x) \iff \varphi_T(x,x^*) \leq \langle x, x^*\rangle + \varepsilon. \]
Now, the transportation formula for $T^\varepsilon$ follows from the convexity of $\varphi_T$. In [7] it is proven that each enlargement $\hat{T}^\varepsilon$ of $T$ which has a closed graph, is nondecreasing in $\varepsilon$ and satisfies the transportation formula, is characterized by a function $\hat{h} \in H(T)$ via the formula
\[ x^* \in \hat{T}^\varepsilon(x) \iff \hat{h}(x,x^*) \leq \langle x, x^*\rangle + \varepsilon. \]
So, if we want to retain additivity,
\[ x_1^* \in \hat{T}^{\varepsilon_1}(x_1),\ x_2^* \in \hat{T}^{\varepsilon_2}(x_2) \implies \langle x_1-x_2,\, x_1^*-x_2^*\rangle \geq -(\varepsilon_1+\varepsilon_2), \]
we must require $\hat{h} \geq J\hat{h}$.
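A concrete illustration of these objects (our computation; the example is not in the original paper): let $X$ be a Hilbert space, identified with its dual, and let $T$ be the identity operator, so $T = \partial f$ with $f = \frac{1}{2}\|\cdot\|^2$. The supremum in (3.9) then runs over pairs $(y,y)$ and is attained at $y = \frac{1}{2}(x+x^*)$, giving
\[ \varphi_T(x,x^*) = \sup_{y} \langle x-y,\, y-x^*\rangle + \langle x, x^*\rangle = \tfrac{1}{4}\|x+x^*\|^2. \]
One checks directly that $\varphi_T \leq h_{FY}$, where $h_{FY}(x,x^*) = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|x^*\|^2$ satisfies $Jh_{FY} = h_{FY}$ (Section 1); so for this $T$ an additive enlargement characterized by a fixed point of $J$ is explicitly available.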

The enlargements in this family which are also maximal with respect to additivity are structurally closer to the $\partial_\varepsilon f$ enlargement, and are characterized by $\hat{h} \in H(T)$, $\hat{h} = J\hat{h}$. If there were only one element of $H(T)$ which is a fixed point of $J$, then this element would be the canonical representation of $T$ by a convex function, and the associated enlargement would be the extension of the $\varepsilon$-subdifferential enlargement to $T$. Unfortunately, it is not clear whether such fixed points are unique.

Existence of an additive enlargement of $T$, maximal with respect to additivity, was proved in [23]. The convex representation of this enlargement turned out to be minimal in the family $H_a(T)$, but the characterization of these minimal elements of $H_a(T)$ as fixed points of $J$ was lacking.

Since the function $\sigma_T$ has played a fundamental role in our proof, we describe it again here. Let $\delta_T$ be the indicator function of $T$, i.e., the function whose value is $0$ on $T$ and $+\infty$ elsewhere (on $X \times X^* \setminus T$). Denote the duality product by $\pi : X \times X^* \to \mathbb{R}$, $\pi(x,x^*) = \langle x, x^*\rangle$. Then
\[ \sigma_T = \operatorname{cl\,conv}(\pi + \delta_T), \]
where $\operatorname{cl\,conv} f$ stands for the biggest lower semicontinuous convex function majorized by $f$. We refer the reader to [7] for a detailed analysis of this function.

4 Acknowledgements

We thank the anonymous referee for the suggestions which helped to improve this paper.

References

[1] D.P. Bertsekas and S.K. Mitter. A descent numerical method for optimization problems with nondifferentiable cost functionals. SIAM Journal on Control, 11(4):637–652, 1973.

[2] F. Bonnans, J.Ch. Gilbert, C.L. Lemaréchal, and C.A. Sagastizábal. Optimisation Numérique: aspects théoriques et pratiques. Collection Mathématiques et Applications, SMAI-Springer-Verlag, Berlin, 1997.

[3] A. Brøndsted and R.T. Rockafellar. On the subdifferentiability of convex functions. Proceedings of the American Mathematical Society, 16:605–611, 1965.

[4] R.S. Burachik, A.N. Iusem, and B.F. Svaiter. Enlargement of monotone operators with applications to variational inequalities. Set-Valued Analysis, 5(2):159–180, 1997.

[5] R.S. Burachik and B.F. Svaiter. ε-enlargements of maximal monotone operators in Banach spaces. Set-Valued Analysis, 7(2):117–132, 1999.

[6] R.S. Burachik, C.A. Sagastizábal, and B.F. Svaiter. ε-enlargements of maximal monotone operators: theory and applications. In M. Fukushima and L. Qi, editors, Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods (Lausanne, 1997), volume 22 of Applied Optimization, pages 25–43. Kluwer Academic Publishers, Dordrecht, 1999.

[7] R.S. Burachik and B.F. Svaiter. Maximal monotone operators, convex functions and a special family of enlargements. Set-Valued Analysis, 10(4):297–316, 2002.

[8] S. Fitzpatrick. Representing monotone operators by convex functions. In Workshop/Miniconference on Functional Analysis and Optimization (Canberra, 1988), volume 20 of Proc. Centre Math. Anal. Austral. Nat. Univ., pages 59–65. Austral. Nat. Univ., Canberra, 1988.

[9] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms. Number 305–306 in Grundlehren der mathematischen Wissenschaften. Springer-Verlag, 1993. (Two volumes.)

[10] K.C. Kiwiel. Proximity control in bundle methods for convex nondifferentiable minimization. Mathematical Programming, 46:105–122, 1990.

[11] E. Krauss. A representation of maximal monotone operators by saddle functions. Rev. Roumaine Math. Pures Appl., 30:823–837, 1985.

[12] C. Lemaréchal, A. Nemirovskii, and Yu. Nesterov. New variants of bundle methods. Mathematical Programming, 69:111–148, 1995.

[13] J.-E. Martinez-Legaz and M. Théra. A convex representation of maximal monotone operators. Journal of Nonlinear and Convex Analysis, 2:243–247, 2001.

[14] E.A. Nurminski. ε-subgradient mapping and the problem of convex optimization. Cybernetics, 21(6):796–800, 1986.

[15] J.P. Revalski and M. Théra. Generalized sums of monotone operators. C. R. Acad. Sci. Paris Sér. I Math., 329(11):979–984, 1999.

[16] J.P. Revalski and M. Théra. Variational and extended sums of monotone operators. In Ill-posed Variational Problems and Regularization Techniques (Trier, 1998), volume 477 of Lecture Notes in Econom. and Math. Systems, pages 229–246. Springer, Berlin, 1999.

[17] R.T. Rockafellar. On the maximal monotonicity of subdifferential mappings. Pacific Journal of Mathematics, 33:209–216, 1970.

[18] R. Tyrrell Rockafellar and Roger J-B. Wets. Variational Analysis. Springer-Verlag, Berlin Heidelberg, 1998.

[19] S. Simons. Minimax and Monotonicity, volume 1693 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1998.

[20] M.V. Solodov and B.F. Svaiter. An inexact hybrid extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Analysis, 7(4):323–345, December 1999.

[21] M.V. Solodov and B.F. Svaiter. Error bounds for proximal point subproblems and associated inexact proximal point algorithms. Mathematical Programming, 88(2):371–389, 2000.

[22] H. Schramm and J. Zowe. A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results. SIAM Journal on Optimization, 2(1):121–152, 1992.

[23] B.F. Svaiter. A family of enlargements of maximal monotone operators. Set-Valued Analysis, 8(4):311–328, December 2000.