AN OUTER APPROXIMATION METHOD FOR THE VARIATIONAL INEQUALITY PROBLEM


R. S. Burachik
Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP , Brazil. regi@cos.ufrj.br

J. O. Lopes
Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP , Brazil. jurandir@cos.ufrj.br

B. F. Svaiter
Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro, RJ, CEP , Brazil. benar@impa.br

Preprint version

Abstract. We study two outer approximation schemes applied to the variational inequality problem in reflexive Banach spaces. First we propose a generic outer approximation scheme whose convergence analysis unifies a wide class of outer approximation methods applied to the constrained optimization problem. As is standard in this setting, boundedness and optimality of weak limit points are proved to hold under two alternative conditions: (i) boundedness of the feasible set, or (ii) coerciveness of the operator. In order to develop a convergence analysis where neither (i) nor (ii) holds, we consider a second scheme in which the approximated subproblems use a coercive approximation of the original operator. Under conditions alternative to both (i) and (ii), we obtain standard convergence results. Furthermore, when the space is uniformly convex, we establish full strong convergence of the second scheme to a solution.

Acknowledgments. Research of this author was supported by CAPES Grant BEX /2. Partially supported by PICDT/UFPI-CAPES. Partially supported by CNPq Grant /93-9(RN) and by PRONEX Optimization.

Key words. maximal monotone operators, Banach spaces, outer approximation algorithm, semi-infinite programs.

AMS subject classifications. 49M27, 65J05, 65K05, 90C25.

1 Introduction

We investigate a broad class of outer approximation methods for solving the classical monotone variational inequality problem in a reflexive Banach space. First we recall this problem, and then we describe those methods.

Let $B$ be a real reflexive Banach space with dual $B^*$. The notation $\langle v, x\rangle$ stands for the duality product $v(x)$ of $v \in B^*$ and $x \in B$. Given a maximal monotone operator $T : B \rightrightarrows B^*$ and a nonempty closed convex set $\Omega \subseteq B$, the variational inequality problem for $T$ and $\Omega$, $VIP(T,\Omega)$, is:

Find $x^*$ such that $x^* \in \Omega$ and there exists $u^* \in T(x^*)$ with
$$\langle u^*, x - x^*\rangle \ge 0 \quad \forall x \in \Omega. \tag{1.1}$$

The set $\Omega$ will be called the feasible set for Problem (1.1). In the particular case in which $T$ is the subdifferential of a proper, convex, lower semicontinuous function $f : B \to \mathbb{R}\cup\{+\infty\}$, problem (1.1) reduces to the convex optimization problem
$$\min_{x \in \Omega} f(x). \tag{1.2}$$

Outer approximation methods solve Problem (1.1) by generating and solving a sequence of problems with feasible sets $\Omega^k$ which contain the original feasible set $\Omega$ but have a simpler structure.
These methods were introduced for solving optimization problems four decades ago, in the form of cutting plane methods [10, 32].
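In finite dimensions, the cutting-plane mechanism behind these methods can be sketched on a hypothetical one-dimensional instance (our own illustration, not taken from [10, 32]): minimize $f(x) = x$ over $\Omega = \{x \in \mathbb{R} : x^2 - 1 \le 0\} = [-1,1]$, where each outer set $\Omega^k \supseteq \Omega$ is cut out by the linearization of the constraint at the current iterate:

```python
# Kelley-style cutting planes for min x s.t. g(x) = x**2 - 1 <= 0,
# i.e. Omega = [-1, 1].  Each outer set Omega_k = [lb_k, 3] contains Omega,
# and lb_k is tightened by the linearization of g at the current iterate.
def cutting_plane(lb=-3.0, iters=20):
    for _ in range(iters):
        x = lb                      # minimizer of f(x) = x over Omega_k
        if x * x - 1 <= 1e-12:      # feasible for the original problem
            break
        # cut: g(x) + g'(x)(z - x) <= 0  =>  z >= (1 + x**2) / (2*x) for x < 0
        lb = (1 + x * x) / (2 * x)
    return lb

x_star = cutting_plane()
print(x_star)   # approaches the true solution -1
```

Each cut discards the current infeasible iterate while keeping all of $\Omega$, so the sets $\Omega^k$ remain nested outer approximations of the feasible set.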

Outer approximation schemes typically arise when the set $\Omega$ is of the form $\Omega = \bigcap_{y\in Y}\Omega_y$, where $Y$ is infinite and $\Omega_y := \{x \in B : g(x,y) \le 0\}$ with each $g(\cdot,y) : B \to \mathbb{R}$. Feasible sets of this kind appear in several areas of application (see, e.g., [23, 22, 17, 13, 3, 19]). In this situation, a common choice is to replace the feasible set of the original problem by $\Omega^k := \bigcap_{y\in Y^k}\Omega_y = \{x \in B : g(x,y) \le 0 \ \forall y \in Y^k\}$, where $Y^k \subseteq Y$ is finite and conveniently chosen. For this kind of $\Omega$, outer approximation methods for problem (1.2) are classified according to the way in which the sets $\Omega^k \supseteq \Omega = \bigcap_{y\in Y}\Omega_y$ are defined. We mention here cutoff methods (e.g., those in [10, 28, 32, 45, 48] for $B = \mathbb{R}^n$), filtered cutoff methods (e.g., those in [2, 14, 16, 43, 44] for $B = \mathbb{R}^n$), and disintegration schemes [20], like the ones proposed in [12, 21, 36, 38, 39] for the minimization of quadratic functions in Hilbert spaces and extended in [33] to the minimization of a convex function in a Banach space. Recently, Combettes gave in [11] a unified convergence analysis which includes and extends all the above-mentioned outer approximation methods. A basic assumption for obtaining convergence of these methods is either that all sets $\Omega^k$ are contained in some bounded set, or some coerciveness property of the objective function $f$. These boundedness assumptions are a standard requirement in the analysis of all the methods mentioned above (see the excellent surveys [40, 24] and references therein). The goal of the present work is twofold. First, we develop a convergence analysis which can be applied to more general and flexible schemes for successive approximation of variational inequalities, under the standard boundedness assumptions. Our analysis covers as a particular case the outer approximation scheme studied in [11] for problem (1.2) (and hence all the above-mentioned algorithms).
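The construction $\Omega^k := \bigcap_{y\in Y^k}\Omega_y$ with a conveniently chosen finite $Y^k$ can be illustrated by a small hypothetical semi-infinite instance (the function $g$ and the index grid below are our own choices, not taken from the cited works): the interval $[-1,1]$ written as $\{x : 2yx - y^2 - 1 \le 0 \ \forall y \in [-1,1]\}$, with $Y^k$ enlarged by a most violated index at each step:

```python
# Hypothetical semi-infinite instance of the Omega^k construction: with
# g(x, y) = 2*y*x - y**2 - 1 and Y = [-1, 1], the set Omega = {x : g(x, y) <= 0
# for all y in Y} is the interval [-1, 1], described by infinitely many cuts.
# We minimize f(x) = x over Omega^k = {x in [-3, 3] : g(x, y) <= 0, y in Y_k},
# enlarging the finite index set Y_k with the most violated index at each step.
def g(x, y):
    return 2.0 * y * x - y * y - 1.0

def solve_subproblem(Y_k):
    # min x over [-3, 3] subject to 2*y*x <= 1 + y**2 for y in Y_k:
    # each y < 0 yields the lower bound x >= (1 + y**2) / (2*y)
    lb = -3.0
    for y in Y_k:
        if y < 0.0:
            lb = max(lb, (1.0 + y * y) / (2.0 * y))
    return lb

def most_violated(x, samples=2001):
    # pick the index of largest violation over a fine grid of Y = [-1, 1]
    Y = [-1.0 + 2.0 * i / (samples - 1) for i in range(samples)]
    return max(Y, key=lambda y: g(x, y))

Y_k = [-1.0 / 3.0]
for _ in range(50):
    x_k = solve_subproblem(Y_k)
    y_next = most_violated(x_k)
    if g(x_k, y_next) <= 1e-10:        # x_k is feasible for Omega: stop
        break
    Y_k.append(y_next)
print(x_k)   # -1.0, the solution of min x over [-1, 1]
```

Only finitely many of the infinitely many constraints are ever generated, which is the practical appeal of this family of methods.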
We prove that our generic scheme generates a bounded sequence and that all weak accumulation points are solutions of $VIP(T,\Omega)$. Second, we obtain the same convergence results in the absence of boundedness assumptions. To do this, we consider subproblems $(P_k)$ in which the original operator is replaced by a suitable coercive regularization. Our work is built around the following generic outer approximation scheme for solving $VIP(T,\Omega)$.

Algorithm.
Initialization: Take $\Omega^1 \supseteq \Omega$.
Iterations: For $k = 1, 2, \ldots$, find $x^k \in \Omega^k$, a solution of the approximated problem $(P_k)$, defined as:

$$u^k \in T(x^k) \quad \text{with} \tag{1.3}$$
$$\langle u^k, x - x^k\rangle \ge 0 \quad \forall x \in \Omega^k. \tag{1.4}$$

In our first generic scheme we relax the inequality in (1.4). Namely, the iterate $x^k \in \Omega^k$ is taken such that
$$u^k \in T(x^k) \quad \text{with} \quad \langle u^k, x - x^k\rangle \ge -\varepsilon_k \quad \forall x \in \Omega^k, \tag{1.5}$$
where $\varepsilon_k > 0$ and $\Omega^k \supseteq \Omega$ is convex and closed. In the second scheme, we relax (1.3). More precisely, the approximate solution $x^k \in \Omega^k$ is such that
$$u^k \in T_{\lambda_k}(x^k) \quad \text{with} \quad \langle u^k, x - x^k\rangle \ge 0 \quad \forall x \in \Omega^k, \tag{1.6}$$
where $\lambda_k > 0$, $\Omega^k \supseteq \Omega$ is closed and convex, and $T_{\lambda_k}$ is a suitable coercive approximation of $T$.

Schemes in which the approximated subproblems use a coercive regularization of $T$ are a common approach for solving non-coercive variational inequalities. Two classical examples are proximal-like regularizations (see, e.g., [29, 30]) and Tikhonov regularizations [42, 18]. The latter kind of regularization has been extensively studied in the last two decades (see [34] and the references therein). Mosco [35] studied the convergence of what is now called the Mosco scheme, which combines Tikhonov regularization with a perturbation of the feasible set. This approach is followed in [34], where, as in [35, Section 5], the approximating feasible sets are assumed to converge, in the sense of set-convergence, to the original feasible set. In [34] the authors study variational inequalities in Hilbert spaces and the operator $T$ is assumed to be point-to-point. A classical application of all these approximating schemes is in the perturbation theory of variational boundary value problems for the operator $T$ (see, e.g., problems $(p)$ and $(p_n)$ and Corollary 1 in [35, pages ]).

The paper is organized as follows. Section 2 contains some theoretical preliminaries which are necessary for our analysis. In Section 3 we give a unified analysis for a broad family of outer approximation algorithms in which the iterates solve problem (1.5). We prove existence of this sequence and establish optimality of all weak accumulation points under standard boundedness assumptions.
In Section 4 we relax the boundedness assumptions and consider a sequence $\{x^k\}$ as in (1.6). Under suitable assumptions, we prove that the iterates are bounded and all of their weak accumulation points are optimal. Moreover, we establish strong convergence of the whole sequence to a solution when $B$ is uniformly convex.
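Before turning to the preliminaries, the reduction of (1.1) to (1.2) when $T = \partial f$ can be checked on a minimal one-dimensional sketch (a hypothetical instance of our own, with $f(x) = \frac12(x-2)^2$ and $\Omega = [-1,1]$):

```python
# VIP(T, Omega) with T = grad f, f(x) = (1/2)(x - 2)**2 and Omega = [-1, 1]:
# the constrained minimizer is x* = 1, and (1.1) holds there with u* = T(x*).
T = lambda x: x - 2.0
f = lambda x: 0.5 * (x - 2.0) ** 2

grid = [i / 50.0 - 1.0 for i in range(101)]           # discretization of Omega
x_star = min(grid, key=f)                             # solves (1.2) on the grid
u_star = T(x_star)

# (1.1): <u*, x - x*> >= 0 for every x in Omega
vip_holds = all(u_star * (x - x_star) >= -1e-12 for x in grid)
print(x_star, vip_holds)   # 1.0 True
```

Note that $u^* = T(x^*) = -1 \neq 0$: at a constrained minimizer the gradient need not vanish, but it cannot form an acute angle with any feasible direction, which is exactly what (1.1) expresses.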

2 Theoretical preliminaries

From now on, $B$ is a real Banach space. Let $T : B \rightrightarrows B^*$ be an arbitrary point-to-set operator. We recall some basic definitions:

Domain of $T$: $D(T) := \{x \in B : T(x) \neq \emptyset\}$.
Graph of $T$: $G(T) := \{(x,u) \in B \times B^* : u \in T(x)\}$.
Range of $T$: $R(T) := \{u \in B^* : u \in T(x) \text{ for some } x \in B\}$.

The operator $T$ is monotone if $\langle u - v, x - y\rangle \ge 0$ for all $x, y \in B$, $u \in T(x)$, and $v \in T(y)$. If this inequality holds strictly whenever $x, y \in B$, $u \in T(x)$, $v \in T(y)$, and $x \neq y$, then $T$ is strictly monotone. The operator $T$ is maximal monotone if it is monotone and, for any monotone $\tilde T : B \rightrightarrows B^*$,
$$G(T) \subseteq G(\tilde T) \implies T = \tilde T.$$

An example of a maximal monotone operator is the normality operator. Let $\Omega \subseteq B$ be closed, convex, and nonempty. The normality operator of $\Omega$ is $N_\Omega : B \rightrightarrows B^*$,
$$N_\Omega(x) = \begin{cases} \{u \in B^* : \langle u, z - x\rangle \le 0 \ \forall z \in \Omega\} & \text{if } x \in \Omega,\\ \emptyset & \text{otherwise.}\end{cases}$$
Maximal monotonicity of $N_\Omega$ follows from $\Omega$ being closed, convex and nonempty. It is easy to verify that $VIP(T,\Omega)$, as defined in (1.1), is equivalent to the inclusion problem (or generalized equation): Find $x^*$ such that
$$0 \in (T + N_\Omega)(x^*).$$
If $T + N_\Omega$ is onto, then this problem, and hence $VIP(T,\Omega)$, has a solution. In this paper, existence of solutions of variational inequality problems will be based on surjectivity of sums of maximal monotone operators. The following definitions will be needed.

Definition 2.1 (see [37]). An operator $T : B \rightrightarrows B^*$ is:
1. coercive if $D(T)$ is bounded, or if for any $x \in D(T)$,
$$\lim_{\|z\|\to+\infty} \frac{\langle v, z - x\rangle}{\|z\|} = +\infty$$
holds for each selection $v \in T(z)$.

2. regular if for any $y \in D(T)$ and $u \in R(T)$,
$$\sup_{(z,v)\in G(T)} \langle v - u, y - z\rangle < \infty.$$

The proposition below will be used for establishing existence of solutions of the subproblems $(P_k)$.

Proposition 2.2 ([8, Lemma 2.7]). Suppose that $B$ is reflexive. Let $T_1, T_2 : B \rightrightarrows B^*$ be maximal monotone operators such that
(a) $T_1 + T_2$ is maximal monotone;
(b) $T_1$ is regular and onto.
Then $T_1 + T_2$ is onto.

To establish condition (a) of the above proposition, we will use a classical theorem due to Rockafellar [41]. Denote by $\operatorname{int}(A)$ the topological interior of the set $A$.

Proposition 2.3 ([41, Theorem 1]). Suppose that $B$ is reflexive. Let $T_1, T_2 : B \rightrightarrows B^*$ be maximal monotone operators. If $D(T_1) \cap \operatorname{int}(D(T_2)) \neq \emptyset$, then $T_1 + T_2$ is a maximal monotone operator.

To check condition (b) of Proposition 2.2, we will need two auxiliary results.

Theorem 2.4 ([4, page 147]). Suppose that $B$ is reflexive. Let $T : B \rightrightarrows B^*$ be a maximal monotone operator. If $T$ is coercive (in particular, if $D(T)$ is bounded), then $T$ is surjective.

The fact stated next relates the two concepts given in Definition 2.1.

Theorem 2.5 ([37, page 122]). Let $T : B \rightrightarrows B^*$ be a monotone operator. If $T$ is coercive, then $T$ is regular.

The discussion of the maximality of monotone operators and their surjectivity properties requires the introduction of duality maps. Asplund [1] has shown that, when $B$ is a reflexive Banach space, there exists an equivalent norm on $B$ which is everywhere Gâteaux differentiable except at the origin and such that the corresponding dual norm on $B^*$ is also everywhere Gâteaux differentiable except at the origin. From now on, we assume that $B$ is a reflexive real Banach space. For simplicity of notation, we assume also from now on that the given norm

on $B$ already has these special properties. We use the same notation for this norm on $B$ and its associated norm on the dual $B^*$. Denote by $J$ the Gâteaux gradient of the function $\varphi(x) := (1/2)\|x\|^2$. Thus $J$ is the duality mapping, which assigns to each $x \in B$ the unique $J(x) \in B^*$ such that
$$\langle x, J(x)\rangle = \|x\|^2 = \|J(x)\|^2. \tag{2.1}$$

Proposition 2.6. Let $J : B \to B^*$ be the duality mapping described above. The following assertions hold:
(i) $J(-x) = -J(x)$ and $J(\lambda x) = \lambda J(x)$ for all $\lambda > 0$;
(ii) if $w = J(x)$, then $\langle w, z - x\rangle \le \frac12\left(\|z\|^2 - \|x\|^2\right)$ for all $z \in B$;
(iii) if $T$ is maximal monotone, then for all $\lambda > 0$, $T + \lambda J$ is maximal monotone and onto, and $(T + \lambda J)^{-1}$ is a single-valued and maximal monotone operator.

Proof. Item (i) follows from (2.1), and (ii) uses the fact that $J(\cdot) = \nabla\varphi(\cdot)$. In order to prove (iii), note that Proposition 2.3 implies maximal monotonicity of $T + \lambda J$. The surjectivity of $T + \lambda J$ and the assertion on $(T + \lambda J)^{-1}$ follow from [41, Proposition 1]. $\Box$

Our convergence theorems require two conditions on the operator $T$, namely para- and pseudomonotonicity, which we discuss next. The notion of paramonotonicity was introduced in [7] and further studied in [9, 26]. It is defined as follows.

Definition 2.7. The operator $T$ is paramonotone in $\Omega$ if it is monotone and $\langle v - u, y - z\rangle = 0$ with $y, z \in \Omega$, $v \in T(y)$, $u \in T(z)$ implies that $u \in T(y)$ and $v \in T(z)$. The operator $T$ is paramonotone if this property holds in the whole space.

Proposition 2.8 (see [26, Proposition 4]). Assume that $T$ is paramonotone on $\Omega$ and let $x^*$ be a solution of $VIP(T,\Omega)$. Let $\bar x \in \Omega$ be such that there exists an element $\bar u \in T(\bar x)$ with $\langle \bar u, x^* - \bar x\rangle \ge 0$. Then $\bar x$ also solves $VIP(T,\Omega)$.

Paramonotonicity can be seen as a condition which is weaker than strict monotonicity. The remark below contains some examples of operators which are paramonotone.

Remark 2.9. If $T$ is the subdifferential of a convex function $f : B \to \mathbb{R}\cup\{\infty\}$, then $T$ is paramonotone. When $B = \mathbb{R}^n$, a condition which guarantees paramonotonicity of $T : \mathbb{R}^n \to \mathbb{R}^n$ is that $T$ be differentiable and the symmetrization of its Jacobian matrix have the same rank as the Jacobian matrix itself. However, relevant operators fail to satisfy this condition. More precisely, the saddle-point operator $\Lambda(x,y) := (\nabla_x L(x,y), -\nabla_y L(x,y))$, where $L$ is the Lagrangian associated to a constrained convex optimization problem, is not paramonotone, except in trivial instances. For more details on paramonotone operators, see [26].

Next we recall the definition of pseudomonotonicity, which is taken from [6] and should not be confused with other uses of the same word (see, e.g., [31]).

Definition 2.10. Let $B$ be a reflexive Banach space and let $T$ be an operator such that $D(T)$ is closed and convex. $T$ is said to be pseudomonotone if it satisfies the following condition: if the sequence $\{(x^k, u^k)\} \subseteq G(T)$ satisfies
(a) $\{x^k\}$ converges weakly to $x^* \in D(T)$, and
(b) $\limsup_k \langle u^k, x^k - x^*\rangle \le 0$,
then for every $w \in D(T)$ there exists an element $u \in T(x^*)$ such that
$$\langle u, x^* - w\rangle \le \liminf_k \langle u^k, x^k - w\rangle.$$

Remark 2.11. If $T$ is the gradient of a Gâteaux differentiable convex function $\varphi : \mathbb{R}^n \to \mathbb{R}\cup\{\infty\}$, then $T$ is pseudomonotone. Indeed, by a fact proved in [37, p. 94], $\nabla\varphi$ is hemicontinuous (i.e., for every fixed $x, y \in \mathbb{R}^n$, the real-valued mapping $t \mapsto \langle \nabla\varphi((1-t)x + ty), x - y\rangle$ is continuous). On the other hand, point-to-point hemicontinuous operators defined on $\mathbb{R}^n$ are always pseudomonotone (see, e.g., [37, p. 107]), which yields the pseudomonotonicity of $T = \nabla\varphi$, as claimed. Combining the latter statement with Remark 2.9, we conclude that every $T$ of this kind is both para- and pseudomonotone. An example of a non-strictly monotone operator which is both para- and pseudomonotone is the subdifferential of the function $\varphi : \mathbb{R} \to \mathbb{R}$ defined by $\varphi(t) = |t|$ for all $t$.
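The failure of paramonotonicity for saddle-point operators, mentioned in Remark 2.9, can be seen concretely on the toy Lagrangian $L(x,y) = xy$ (our own minimal example): the associated operator $\Lambda(x,y) = (y, -x)$ is monotone, with $\langle \Lambda(p) - \Lambda(q), p - q\rangle = 0$ for all $p, q$, yet the implication of Definition 2.7 fails:

```python
# The rotation operator A(x, y) = (y, -x), i.e. the saddle-point operator of
# the toy Lagrangian L(x, y) = x*y, is monotone but not paramonotone:
# <A(p) - A(q), p - q> = 0 for ALL p, q, yet A(p) != A(q) in general.
def A(p):
    x, y = p
    return (y, -x)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

p, q = (1.0, 0.0), (0.0, 0.0)
Ap, Aq = A(p), A(q)
diff_A = (Ap[0] - Aq[0], Ap[1] - Aq[1])
diff_p = (p[0] - q[0], p[1] - q[1])

inner = dot(diff_A, diff_p)       # = 0: the paramonotonicity hypothesis holds
same_image = (Ap == Aq)           # False: the conclusion of Def. 2.7 fails
print(inner, same_image)          # 0.0 False
```

Since $\Lambda$ is single-valued, $u \in \Lambda(y)$ and $v \in \Lambda(z)$ would force $\Lambda(p) = \Lambda(q)$ here, which is exactly what the pair above contradicts.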
Definition 2.12 ([5, page 51]). We say that $B$ is uniformly convex if for every $\varepsilon > 0$ there exists $\delta > 0$ such that for all $x, y \in B$ with $\|x\| \le 1$, $\|y\| \le 1$ and $\|x - y\| > \varepsilon$,
$$\left\|\frac{x+y}{2}\right\| < 1 - \delta.$$
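A finite-dimensional sanity check (our own illustration, not part of the original argument): the Euclidean norm on $\mathbb{R}^2$ is uniformly convex, with $\delta = 1 - \sqrt{1 - \varepsilon^2/4}$ by the parallelogram law, while the $\ell_1$ norm is not, as the pair $(1,0)$, $(0,1)$ shows:

```python
import math

# Uniform convexity probe on R^2: for unit vectors x, y with ||x - y|| > eps,
# the Euclidean midpoint norm stays below 1 - delta (parallelogram law gives
# delta = 1 - sqrt(1 - eps**2 / 4)), while the l1 norm admits far-apart unit
# vectors whose midpoint still has norm 1, so l1 is not uniformly convex.
def l2(v): return math.hypot(v[0], v[1])
def l1(v): return abs(v[0]) + abs(v[1])

eps = 1.0
delta = 1.0 - math.sqrt(1.0 - eps ** 2 / 4.0)

angles = [2 * math.pi * k / 360 for k in range(360)]
euclidean_ok = True
for a in angles:
    for b in angles:
        x, y = (math.cos(a), math.sin(a)), (math.cos(b), math.sin(b))
        if l2((x[0] - y[0], x[1] - y[1])) > eps:
            mid = ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)
            euclidean_ok = euclidean_ok and l2(mid) < 1 - delta + 1e-12

# counterexample for l1: unit vectors (1, 0) and (0, 1) are far apart,
# yet their midpoint still has l1-norm exactly 1
x, y = (1.0, 0.0), (0.0, 1.0)
l1_far = l1((x[0] - y[0], x[1] - y[1])) > eps
l1_mid = l1(((x[0] + y[0]) / 2, (x[1] + y[1]) / 2))
print(euclidean_ok, l1_far, l1_mid)   # True True 1.0
```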

Proposition 2.13 ([5, Proposition III.30]). Assume that $B$ is uniformly convex and that $\{y^k\}$ is a sequence converging weakly to $y$. Suppose further that $\limsup_k \|y^k\| \le \|y\|$. Then $\{y^k\}$ converges strongly to $y$.

3 A General Outer Approximation Scheme for $VIP(T,\Omega)$

Let $\Omega \subseteq B$ be a nonempty closed convex set and let $T : B \rightrightarrows B^*$ be a maximal monotone operator. In this section we present a unified convergence analysis of a general and flexible scheme for successive approximation of variational inequalities. Recall that $VIP(T,\Omega)$ is defined by: Find $x^* \in \Omega$ such that there exists $u^* \in T(x^*)$ with
$$\langle u^*, x - x^*\rangle \ge 0 \quad \text{for all } x \in \Omega. \tag{3.1}$$
In order to present a convergence analysis which can be applied to a wide family of outer approximation schemes, we fix a sequence $\{\Omega^k\}$ of closed convex subsets of $B$ and a sequence $\{\varepsilon_k\} \subseteq \mathbb{R}_+ := \{t \in \mathbb{R} : t \ge 0\}$ verifying
(i) $\Omega \subseteq \Omega^k$ for all $k$;
(ii) $\lim_k \varepsilon_k = 0$.
These sequences define our approximating problems. Namely, given $\Omega^k$ and $\varepsilon_k$, define the $k$-th approximating problem as:
$$(P_k)\quad \text{Find } x^k \in \Omega^k \text{ such that there exists } u^k \in T(x^k) \text{ with } \langle u^k, x - x^k\rangle \ge -\varepsilon_k \text{ for all } x \in \Omega^k.$$

Definition 3.1. Fix $\{\Omega^k\}$ and $\{\varepsilon_k\}$ as in (i)-(ii).
(a) A sequence $\{x^k\}$ will be called an orbit when $x^k$ solves $(P_k)$ for all $k$.
(b) An orbit $\{x^k\}$ will be called asymptotically feasible (af, for short) when all weak accumulation points of $\{x^k\}$ belong to $\Omega$.

Example 1. A broad family of outer approximation methods for the convex constrained optimization problem (1.2) generates af orbits. This is shown in [11], where the author provides a unified convergence analysis for a wide

class of outer approximation schemes devised for problem (1.2). The feasible set considered in [11] is explicitly defined in the form
$$\Omega := \{x \in B : g(x,y) \le 0 \ \forall y \in Y\}, \tag{3.2}$$
where $Y$ is an arbitrary index set and the constraint functions $g(\cdot,y) : B \to \mathbb{R}\cup\{+\infty\}$ satisfy the basic assumptions:
$(G_1)$ $\{x \in B : g(x,y) \le 0\}$ is nonempty and convex for all $y \in Y$;
$(G_2)$ for all $y \in Y$ and for every sequence $\{z^k\} \subseteq B$ such that $w\text{-}\lim_k z^k = z$ and $\limsup_k g(z^k, y) \le 0$, it holds that $g(z,y) \le 0$, where $w\text{-}\lim$ stands for the weak limit in $B$.
In [11], $T = \partial f$, where $f$ is lower semicontinuous, convex and verifies the following assumptions.
$(F_1)$ For some closed convex set $E \supseteq \Omega$, there exists a point $u \in \operatorname{dom} f \cap \Omega$ such that the set $C := \{x \in E : f(x) \le f(u)\}$ is bounded;
$(F_2)$ $f$ is uniformly convex with modulus of convexity $c$ on $C$, i.e. [46, 47], for all $x, y \in C$,
$$f\left(\frac{x+y}{2}\right) \le \frac{f(x)+f(y)}{2} - c\left(\|x-y\|\right), \tag{3.3}$$
where $c : \mathbb{R}_+ \to \mathbb{R}_+$ is nondecreasing and, for all $\tau \in \mathbb{R}_+$, $c(\tau) = 0 \iff \tau = 0$.
The outer approximation scheme studied in [11, Algorithm 1.1] has as its $k$-th approximating problem the minimization of $f$ on a set $\Omega^k \supseteq \Omega$. Hence this $(P_k)$ verifies conditions (i)-(ii), with $T = \partial f$ and $\varepsilon_k = 0$ for all $k$. As a conclusion, [11, Algorithm 1.1] also generates an orbit in the sense of Definition 3.1(a). Therefore, our approach also contains the methods extended by [11, Algorithm 1.1]. The analysis presented in [11] establishes conditions under which [11, Algorithm 1.1] generates an af orbit. However, assumptions $(F_1)$-$(F_2)$ force every af orbit to converge strongly to the unique solution of (1.2). See also [40, Theorem 4.1], where other methods (from the family of cutting plane methods [10] for the convex semi-infinite programming problem) are proved to generate af orbits. When the objective and constraint functions are smooth, a family of schemes which generates af orbits is found in [24, Theorem 7.2]. Throughout our work we consider the following assumptions:

$(H_1)$ $D(T) \cap \operatorname{int}(\Omega) \neq \emptyset$ or $\operatorname{int}(D(T)) \cap \Omega \neq \emptyset$;
$(H_2)$ $T$ is paramonotone and pseudomonotone with closed domain;
$(H_3)$ the solution set $S$ of $VIP(T,\Omega)$ is nonempty.

A relevant question regarding af orbits is which extra conditions guarantee optimality of all weak accumulation points. In our analysis, we use the assumption of para- and pseudomonotonicity, which is always verified when $T = \nabla\varphi$, with $\varphi : \mathbb{R}^n \to \mathbb{R}\cup\{\infty\}$ convex and Gâteaux differentiable, or when $T$ is point-to-point (see Remarks 2.9 and 2.11).

Lemma 3.2. Let $\{x^k\}$ be an af orbit for $VIP(T,\Omega)$. If $(H_2)$ and $(H_3)$ hold, then every weak accumulation point of $\{x^k\}$ is a solution of $VIP(T,\Omega)$.

Proof. Assume that $\bar x$ is a weak accumulation point of $\{x^k\}$, so there exists a subsequence $\{x^{k_j}\}$ converging weakly to $\bar x$. For each $j$, $x^{k_j}$ solves $(P_{k_j})$; therefore there exists $u^{k_j} \in T(x^{k_j})$ such that
$$\langle u^{k_j}, x - x^{k_j}\rangle \ge -\varepsilon_{k_j} \quad \forall x \in \Omega^{k_j} \text{ and } \forall k_j.$$
Then by (i) we have
$$\langle u^{k_j}, x - x^{k_j}\rangle \ge -\varepsilon_{k_j} \quad \forall x \in \Omega \text{ and } \forall k_j. \tag{3.4}$$
Since $\{x^k\}$ is af, $\bar x \in \Omega$, and hence $\langle u^{k_j}, x^{k_j} - \bar x\rangle \le \varepsilon_{k_j}$ for all $k_j$. Using also (ii), we have
$$\limsup_j \langle u^{k_j}, x^{k_j} - \bar x\rangle \le \limsup_j \varepsilon_{k_j} = 0. \tag{3.5}$$
Take $x^* \in S$. By pseudomonotonicity of $T$, we conclude that there exists $\bar u \in T(\bar x)$ such that
$$\langle \bar u, \bar x - x^*\rangle \le \liminf_j \langle u^{k_j}, x^{k_j} - x^*\rangle.$$
Since $x^* \in \Omega$, (3.4) implies that
$$\liminf_j \langle u^{k_j}, x^{k_j} - x^*\rangle \le \liminf_j \varepsilon_{k_j} = 0.$$

Combining the last two inequalities, we have that $\langle \bar u, \bar x - x^*\rangle \le 0$. Finally, by paramonotonicity of $T$ and Proposition 2.8, we conclude that $\bar x$ is a solution of $VIP(T,\Omega)$. $\Box$

We have seen in Example 1 that many well-known outer approximation schemes for the convex constrained problem solve the subproblems $(P_k)$ exactly, i.e., with $\varepsilon_k = 0$ for all $k$. Thus a relevant question is whether problem $(P_k)$ admits an exact solution.

Proposition 3.3. Assume $(H_1)$ holds and suppose that one of the following assumptions holds:
(a) there is a bounded set $K$ such that $K \supseteq \Omega^k$ for all $k$;
(b) $T$ is coercive.
Then every $(P_k)$ admits an exact solution.

Proof. Define the operator $T^k := T + N_{\Omega^k}$. For all $k$ it holds that:
(1) $D(T^k) = D(T) \cap D(N_{\Omega^k}) = D(T) \cap \Omega^k \neq \emptyset$, by $(H_1)$ and (i);
(2) $T^k = T + N_{\Omega^k}$ is maximal monotone, by $(H_1)$ and Proposition 2.3.
If (a) holds, then by (1) and (i), $D(T^k)$ is bounded, so $T^k$ is onto by Theorem 2.4. This implies that $(P_k)$ admits an exact solution in this case. Now suppose that $T$ is coercive. In this case, $T$ is regular by Theorem 2.5. On the other hand, $R(T) = B^*$ by Theorem 2.4. Since (2) holds, it follows from Proposition 2.2 that $T^k$ is onto. Thus $(P_k)$ has a solution. $\Box$

Now we are in a position to present our first convergence result. We point out that this result is an extension to $VIP(T,\Omega)$ of, e.g., [2, Theorem 2.1], [11, Proposition 3.1(i)], [24, Theorem 7.2] and [40, Theorem 4.1].

Theorem 3.4. Let the sequence $\{x^k\}$ be an af orbit. Assume that $(H_2)$ and $(H_3)$ hold. If one of the following conditions holds:
(a) there exists a bounded set $\tilde\Omega$ such that $\tilde\Omega \supseteq \Omega^k$ for all $k$;
(b) $T$ is coercive;
then $\{x^k\}$ is bounded and each weak accumulation point is a solution of $VIP(T,\Omega)$.

Proof. By Lemma 3.2, it is enough to establish boundedness of $\{x^k\}$. Assume (a) holds. Since $x^k \in \Omega^k \subseteq \tilde\Omega$, the sequence is bounded. Assume now that (b) occurs, and suppose that $\{x^k\}$ is unbounded. By the definition of $(P_k)$, there exists $u^k \in T(x^k)$ with
$$\lim_k \frac{\langle u^k, x^k - x\rangle}{\|x^k\|} \le \lim_k \frac{\varepsilon_k}{\|x^k\|} = 0,$$
where we used the unboundedness of $\{x^k\}$ and condition (ii). But the above expression contradicts the coercivity of $T$. Hence $\{x^k\}$ is bounded. $\Box$

Example 2. Consider now problem $VIP(T,\Omega)$, where the feasible set $\Omega$ is given as in (3.2). We describe next an outer approximation scheme for this problem. Assume that the set $Y$ and the function $g : B \times Y \to \mathbb{R}\cup\{+\infty\}$ satisfy the assumptions:
$(G_1)$ $Y$ is a weakly compact set contained in a reflexive Banach space;
$(G_2)$ $g(\cdot,y)$ is a proper, lower semicontinuous and convex function for all $y \in Y$;
$(G_3)$ $g$ is weakly continuous on $B \times Y$.
Before stating the algorithm, we need some notation: $Y^k$ is a finite subset of $Y$, and $\Omega^k := \{x \in B : g(x,y) \le 0 \ \forall y \in Y^k\}$. Given $x^k$, a solution of $(P_k)$, define the $k$-th auxiliary problem as:
$$(A_k)\quad \text{Find } y^{k+1} \in Y \text{ such that } y^{k+1} \in \arg\max_{y\in Y} g(x^k, y).$$

Algorithm 1.
Step 0 (Initialize). Set $k = 1$ and choose any finite nonempty $Y^1 \subseteq Y$.
Iteration: For $k = 1, 2, \ldots$,
Step 1. Given $\Omega^k$, find $x^k$, a solution of $(P_k)$.
Step 2. For the $x^k$ obtained in Step 1, solve $(A_k)$.
Step 3 (Check for solution and update if necessary). If $g(x^k, y^{k+1}) \le 0$, stop. Otherwise, set
$$Y^{k+1} := Y^k \cup \{y^{k+1}\}. \tag{3.6}$$
Step 4. Set $k := k + 1$ and return to Step 1.

If Algorithm 1 stops at Step 3, then $x^k \in S$. Indeed, if the solution $y^{k+1}$ of the $k$-th auxiliary problem $(A_k)$ obtained in Step 2 satisfies $g(x^k, y^{k+1}) \le 0$,

then it holds that $g(x^k, y) \le 0$ for all $y \in Y$, i.e., $x^k \in \Omega$. Thus $\langle u^k, x - x^k\rangle \ge 0$ for all $x \in \Omega$, so that $x^k$ is a solution of problem (1.1). This justifies the stopping rule of Step 3. In the particular case in which $B$ is finite dimensional, $T = \partial f$, and $\varepsilon_k = 0$ for all $k$ in problem $(P_k)$, the scheme above is the simplest exchange method [24, Section 7.1] for the numerical solution of semi-infinite optimization problems. The analysis for this particular case was developed in [2, Theorem 2.1], where the authors prove, under the hypothesis of boundedness of $\Omega^1$, that the orbit generated by Algorithm 1 is af and every accumulation point of the orbit is a solution. The proof of the asymptotic feasibility of the orbit is omitted here, since it can be transferred, in a straightforward way, to our more general setting. Optimality of the weak accumulation points also holds in our setting, under the conditions of Theorem 3.4.

Remark 3.5. Under the assumptions $(G_1)$-$(G_3)$ of Example 2, define $h(x) := \max_{y\in Y} g(x,y)$. Then $h$ is convex and weakly continuous. We claim that, if the conditions of Theorem 3.4 are met, then for every $\beta > 0$ and every orbit $\{x^k\}$ generated by Algorithm 1, there exists $k_0$ such that $h(x^k) \le \beta$ for all $k \ge k_0$. Indeed, suppose that there exist $\beta_0 > 0$ and a subsequence $\{x^{k_j}\} \subseteq \{x^k\}$ such that $h(x^{k_j}) \ge \beta_0$ for all $j$. By Theorem 3.4, the subsequence $\{x^{k_j}\}$ is bounded and af. Without loss of generality, assume the whole subsequence $\{x^{k_j}\}$ converges weakly to some $\bar x \in \Omega$. Using also the assumption on $\beta_0$, we have that
$$\beta_0 \le \lim_j h(x^{k_j}) = h(\bar x) \le 0,$$
a contradiction. This fact will be useful later on, in order to define suitable approximated solutions for this particular instance of $VIP(T,\Omega)$, in the case in which boundedness assumptions do not hold.

4 Convergence without boundedness assumptions

In the convergence analysis of the previous section, existence and boundedness of the iterates are proved under the boundedness assumptions of Theorem 3.4.
Our aim in this section is to define outer approximating schemes

with the same convergence properties, but under alternative assumptions. In order to achieve this goal, we define subproblems using a Tikhonov regularization of $T$. As is standard for this kind of regularization, we will force the parameters to go to zero in order to establish convergence of the method. We will consider outer approximations $\Omega^k$ for an arbitrary closed convex set $\Omega$, as well as for the case in which the set $\Omega$ is given as in Example 2:
$$\Omega := \{x \in B : g(x,y) \le 0 \ \forall y \in Y\}. \tag{4.1}$$
Throughout this section, we will always assume that the index set $Y$ and the constraint functions $g(\cdot,y) : B \to \mathbb{R}\cup\{+\infty\}$ verify:
$(G_1)$ $Y$ is a weakly compact set contained in a reflexive Banach space;
$(G_2)$ $g(\cdot,y)$ is a proper, lower semicontinuous and convex function for all $y \in Y$;
$(G_3)$ $g$ is weakly continuous on $B \times Y$.

4.1 Approximated solutions of $VIP(T,\Omega)$

Fix $x^0 \in B$ and $\lambda > 0$. Define
$$T_\lambda(x) := T(x) + \lambda J(x - x^0),$$
where $J$ is the duality mapping given in (2.1). It is well known that $T_\lambda$ is coercive. We use $T_\lambda$ to define approximated solutions of $VIP(T,\Omega)$.

Definition 4.1. Fix $\lambda > 0$, $\beta > 0$ and $x^0 \in B$. An element $\tilde x =: \tilde x(\lambda, \beta, x^0)$ (i.e., depending on $\lambda$, $\beta$, $x^0$) is said to be an approximated solution of $VIP(T,\Omega)$ when
(1) $\tilde\Omega \supseteq \Omega$ and $\tilde x \in \tilde\Omega$ solves $VIP(T_\lambda, \tilde\Omega)$;
(2) $\tilde x \in x^0 + (1+\beta)(\Omega - x^0)$.
In the particular case in which $\Omega$ is given as in (4.1), an element $\tilde x =: \tilde x(\lambda, \beta, x^0)$ is an approximated solution of $VIP(T,\Omega)$ if it verifies condition (1) and
(2') $h(\tilde x) \le \beta$, where $h(x) = \max_{y\in Y} g(x,y)$.
The parameter $\lambda$ used in Definition 4.1(1) has a regularizing role, since it defines subproblems with the coercive operator $T_\lambda$. The role of the parameter $\beta > 0$ in Definition 4.1(2) or (2') is to control the infeasibility of the iterates $\tilde x$. When $\Omega$ is as in (4.1), we can connect conditions (2) and (2') above. For doing this we need the following simple lemma.

Lemma 4.2. Fix $x^0 \in B$, let $\hat h : B \to \mathbb{R}\cup\{\infty\}$ be convex and continuous, and suppose that $\emptyset \neq \Omega_0 \subseteq \{x \in B : \hat h(x) \le 0\}$. Then for all $\gamma > 0$ there exists $\beta = \beta(\gamma) > 0$ such that
$$\inf_{z\in\Omega_0} \hat h\left(x^0 + (1+\beta)(z - x^0)\right) < \gamma.$$

Proof. The proof is a consequence of the continuity of $\hat h$. Suppose that for some $\gamma_0 > 0$ the conclusion of the lemma does not hold, and fix $z^0 \in \Omega_0$. Then for all $k$ we must have $\hat h\left(x^0 + (1 + \frac1k)(z^0 - x^0)\right) \ge \gamma_0$. Call $\tilde x^k := x^0 + (1 + \frac1k)(z^0 - x^0)$. Then
$$\gamma_0 \le \lim_k \hat h(\tilde x^k) = \hat h(z^0) \le 0,$$
a contradiction. $\Box$

Take $\Omega_0 := \Omega$ and $\hat h(\cdot) := \max_{y\in Y} g(\cdot,y)$ in the lemma above, and choose an arbitrary $\gamma > 0$. By the lemma, there exist $\beta$ small enough and $z \in \Omega$ such that $\tilde x := x^0 + (1+\beta)(z - x^0)$ verifies $h(\tilde x) < \gamma$, i.e., $\tilde x$ verifies condition (2') for $\gamma$. In this sense, we can say that condition (2) is stronger than (2'). On the other hand, condition (2) may never be met, no matter how small the parameter $\beta$ in condition (2') is. Indeed, observe that condition (2) only holds for a vector $\tilde x$ when $[\tilde x, x^0] \cap \Omega \neq \emptyset$. It is easy to find examples in which $h$ and $\Omega$ admit a sequence $\{\tilde x^k\}$ verifying $\lim_k h(\tilde x^k) = 0$ with $[\tilde x^k, x^0] \cap \Omega = \emptyset$ for all $k$.

Remark 4.3. Observe that, when $\lambda = \beta = 0$, $\tilde x$ as in Definition 4.1 solves $VIP(T,\Omega)$.

Remark 4.4. Algorithm 1 in Example 2 provides a set $\tilde\Omega$ and a point $\tilde x$ verifying (1) and (2') of the definition above. More precisely, fix $\lambda > 0$ and consider an orbit $\{x^k\}$ of Algorithm 1 applied to solve problem $VIP(T_\lambda, \Omega)$. Assume also that all problems $(P_k)$ in Algorithm 1 are exact, i.e., $\varepsilon_k = 0$ for all $k$. Since $T_\lambda$ is coercive, the assumptions of Theorem 3.4(b) hold. Therefore $\{x^k\}$ is well defined and bounded, and every weak accumulation point is a solution. Given an arbitrary $\beta > 0$, and using Remark 3.5, we conclude that for some $k_0$ it holds that $h(x^{k_0}) \le \beta$, so (2') holds for $\tilde x := x^{k_0}$. By definition, $\tilde x = x^{k_0} \in \Omega^{k_0} = \{x \in B : g(x,y) \le 0 \ \forall y \in Y^{k_0}\} \supseteq \Omega$ and solves $VIP(T_\lambda, \Omega^{k_0})$, so $\tilde x = x^{k_0}$ and $\tilde\Omega = \Omega^{k_0}$ verify (1).
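In a Hilbert space the duality mapping $J$ is the identity, so $T_\lambda(x) = T(x) + \lambda(x - x^0)$. The regularizing role of $\lambda$ in Definition 4.1(1) can be visualized on a hypothetical one-dimensional instance (our own choice of data): with $T = \nabla\left(\frac12\operatorname{dist}(\cdot,[-1,1])^2\right)$ and $\Omega = [-3,3]$, the solution set is $S = [-1,1]$, and the unique solution of $VIP(T_\lambda,\Omega)$ tends, as $\lambda \downarrow 0$, to the point of $S$ closest to $x^0$:

```python
# Tikhonov-regularized VIP on the real line (where J is the identity):
# T is monotone with solution set S = [-1, 1] inside Omega = [-3, 3], and
# T_lambda(x) = T(x) + lam * (x - x0).  The regularized solutions x(lam)
# converge to the point of S closest to x0 as lam goes to 0.
def T(x):                       # gradient of (1/2) * dist(x, [-1, 1])**2
    return min(x + 1.0, 0.0) + max(x - 1.0, 0.0)

def solve_regularized(lam, x0=3.0, lo=-3.0, hi=3.0):
    # T + lam * (. - x0) is strictly increasing: bisect for its root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if T(mid) + lam * (mid - x0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sols = [solve_regularized(lam) for lam in (1.0, 0.1, 0.01, 0.001)]
print(sols)    # approaches 1.0, the projection of x0 = 3 onto S
```

This selection of the solution nearest to $x^0$ is the behavior one expects from Tikhonov regularization; the strong-convergence result announced for Section 4 is its analogue in uniformly convex spaces.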
Using the approximated solutions given by Definition 4.1, we consider the following outer approximation scheme.

Algorithm 2.
Step 0 (Initialize). Take $x^0 \in B$ and $\{\lambda_k\}, \{\beta_k\} \subseteq \mathbb{R}_+$.
Iteration: For $k = 1, 2, \ldots$,

Step 1. Given $\lambda_k, \beta_k$, find $\tilde x^k = \tilde x(\lambda_k, \beta_k, x^0)$, an approximated solution of $VIP(T,\Omega)$ (in the sense of Definition 4.1).
Step 2. If $\tilde x^k = x^0$ and $\tilde x^k \in \Omega$, stop. Otherwise,
Step 3. Set $k := k + 1$ and return to Step 1.

Remark 4.5. Regarding the stopping criterion in Step 2, if $\tilde x^k = x^0$ and $\tilde x^k \in \Omega$, we have by condition (1) of Definition 4.1 that
$$0 \in T_{\lambda_k}(x^0) + N_{\tilde\Omega^k}(x^0) = T(x^0) + \lambda_k J(x^0 - x^0) + N_{\tilde\Omega^k}(x^0) = T(x^0) + N_{\tilde\Omega^k}(x^0).$$
Then $x^0$ is a solution of $VIP(T, \tilde\Omega^k)$. Since $x^0 \in \Omega \subseteq \tilde\Omega^k$, it is also a solution of $VIP(T,\Omega)$.

Iterates as in Definition 4.1(1) always exist.

Proposition 4.6. Assume $(H_1)$ holds. Given $\tilde\Omega \supseteq \Omega$ and $\lambda > 0$, there exists a unique solution $\tilde x$ of $VIP(T_\lambda, \tilde\Omega)$.

Proof. Uniqueness of the solution follows from the fact that $T_\lambda$ is strictly monotone. Existence is a consequence of the surjectivity of $T_\lambda + N_{\tilde\Omega}$. Indeed, since $T_\lambda$ is coercive, by Theorems 2.4 and 2.5 it is regular and onto. In order to use Proposition 2.2, we only have to check that $T_\lambda + N_{\tilde\Omega}$ is maximal monotone. But this fact follows readily from Proposition 2.3, $(H_1)$ and the fact that $\tilde\Omega \supseteq \Omega$. $\Box$

Proposition 4.7. Assume $(H_3)$ holds. Let the sequence $\{\tilde x^k\}$ be generated by Algorithm 2 with approximated solutions as in Definition 4.1(1)-(2). Take $\lambda_k, \beta_k \to 0$ with $\sup_k \left[\frac{\beta_k}{\lambda_k} + \frac{\lambda_k}{\beta_k}\right] \le c_0$ for some $c_0 > 0$. Then every orbit of Algorithm 2 is bounded and af.

Proof. By condition (2) in Definition 4.1, there exists $y^k \in \Omega$ such that
$$\tilde x^k = x^0 + (1+\beta_k)(y^k - x^0), \tag{4.2}$$
where $x^0 \in B$ is given in Step 0 of Algorithm 2. Therefore, boundedness of the orbit will be guaranteed if we show that $\{y^k\}$ is bounded. Using condition (1) in Definition 4.1, we have that $\tilde x^k$ verifies
$$u^k + \eta^k + \lambda_k J(\tilde x^k - x^0) = 0, \quad \text{with } u^k \in T(\tilde x^k), \ \eta^k \in N_{\tilde\Omega^k}(\tilde x^k).$$

Since $\eta^k \in N_{\tilde\Omega^k}(\tilde x^k)$, we can write
$$\langle u^k, x - \tilde x^k\rangle \ge \lambda_k \langle J(x^0 - \tilde x^k), x - \tilde x^k\rangle \quad \forall x \in \tilde\Omega^k, \tag{4.3}$$
where we also used Proposition 2.6(i). Take $\bar x \in S$; then there exists $v \in T(\bar x)$ such that
$$\langle v, z - \bar x\rangle \ge 0 \quad \forall z \in \Omega. \tag{4.4}$$
Using the monotonicity of $T$ and (4.3) for $x = \bar x$, we have that
$$\langle v, \bar x - \tilde x^k\rangle \ge \lambda_k \langle J(x^0 - \tilde x^k), \bar x - \tilde x^k\rangle.$$
By Proposition 2.6(ii) with $z = x^0 - \bar x$ and $x = x^0 - \tilde x^k$, we get
$$\langle J(x^0 - \tilde x^k), \bar x - \tilde x^k\rangle \ge \frac12\left(\|x^0 - \tilde x^k\|^2 - \|x^0 - \bar x\|^2\right).$$
The last two expressions yield
$$\langle v, \bar x - \tilde x^k\rangle \ge \frac{\lambda_k}{2}\left(\|x^0 - \tilde x^k\|^2 - \|x^0 - \bar x\|^2\right). \tag{4.5}$$
Use (4.4) for $z = y^k$ to conclude that $\langle v, y^k - \bar x\rangle \ge 0$. Adding and subtracting $\tilde x^k$ in the right-hand side of this inner product, we obtain
$$\beta_k \langle v, x^0 - y^k\rangle = \langle v, y^k - \tilde x^k\rangle \ge \langle v, \bar x - \tilde x^k\rangle,$$
where we also used (4.2). Combine the last expression with (4.5) to get
$$\beta_k \langle v, x^0 - y^k\rangle \ge \frac{\lambda_k}{2}\left(\|x^0 - \tilde x^k\|^2 - \|x^0 - \bar x\|^2\right) = \frac{\lambda_k}{2}\left((1+\beta_k)^2\|y^k - x^0\|^2 - \|x^0 - \bar x\|^2\right),$$
where we used (4.2) in the equality. Rearranging the last expression, we get
$$0 \ge \langle v, y^k - x^0\rangle + \frac{(1+\beta_k)^2\lambda_k}{2\beta_k}\|y^k - x^0\|^2 - \frac{\lambda_k}{2\beta_k}\|x^0 - \bar x\|^2 \ge \langle v, y^k - x^0\rangle + \frac{1}{2c_0}\|y^k - x^0\|^2 - \frac{c_0}{2}\|x^0 - \bar x\|^2, \tag{4.6}$$
where we used $\frac{(1+\beta_k)^2\lambda_k}{\beta_k} \ge \frac{1}{c_0}$ and $\frac{\lambda_k}{\beta_k} \le c_0$ in the second inequality. The right-hand side of the last inequality is a convex quadratic function of $y^k$; thus the sequence $\{y^k\}$ must be bounded. Equivalently, $\{\tilde x^k\}$ is bounded. Now we proceed to prove that every weak accumulation point is feasible. Let

{x̃^{k_j}} ⊂ {x̃^k} be a subsequence weakly convergent to x*. By definition of x̃^{k_j} we have that

x̃^{k_j} − x* = y^{k_j} − x* + β_{k_j}(y^{k_j} − x^0).

Since lim_j β_{k_j} = 0 and {y^{k_j}} ⊂ Ω is bounded, we conclude that w-lim_j y^{k_j} = x*. Since Ω is closed and convex, it is weakly closed, and hence x* ∈ Ω.

Recall that a Slater condition for Ω as in (4.1) requires the existence of a point x^0 such that g(x^0, y) < 0 for all y ∈ Y. However, by assumptions (G_1) and (G_3), the Slater condition is equivalent to the existence of some α > 0 such that

g(x^0, y) ≤ −α  for all y ∈ Y.    (4.7)

So, under our hypotheses there is no loss of generality in calling (4.7) a Slater condition for Ω. The next result is analogous to Proposition 4.7, for the case in which the set Ω is as in (4.1) and the approximate solutions are taken as in Definition 4.1(1)(2′).

Proposition 4.8 Assume that (H_3) holds. Let Ω be as in (4.1) and such that a Slater condition holds for Ω. Take λ_k ↓ 0 and suppose that there exists c_0 > 0 with sup_k β_k/λ_k ≤ c_0 < ∞. Assume also that in Step 0 of Algorithm 2 we use the x^0 provided by (4.7). Then every orbit {x̃^k} of Algorithm 2 is bounded and af.

Proof. Let x^0 and α > 0 be such that (4.7) holds, and fix c > c_0. Since λ_k ↓ 0, there exists k_0 ∈ ℕ such that

0 < cλ_k/α < 1  for all k ≥ k_0.    (4.8)

Using that c > c_0, we get

0 < β_k ≤ c_0 λ_k < cλ_k = cαλ_k/α < cαλ_k/(α − cλ_k),    (4.9)

for all k ≥ k_0. Define now the auxiliary sequence x^k := x̃^k + (cλ_k/α)(x^0 − x̃^k). We claim that x^k ∈ Ω for all k > k_0. Indeed, by convexity of h and (4.8) we have that

h(x^k) ≤ (cλ_k/α) h(x^0) + (1 − cλ_k/α) h(x̃^k)  for all k > k_0.

Using (4.7) and the fact that x̃^k is an approximated solution, we get

h(x^k) ≤ (cλ_k/α)(−α) + (1 − cλ_k/α) β_k;

now, using (4.9), for all k > k_0 we get

h(x^k) ≤ −cλ_k + (1 − cλ_k/α)(cαλ_k/(α − cλ_k)) = −cλ_k + cλ_k = 0.    (4.10)

Therefore x^k ∈ Ω for all k > k_0. Using the fact that x̃^k verifies condition (1) in Definition 4.1, and following the same steps as in the proof of Proposition 4.7 (see equations (4.3)–(4.5)), we arrive at the inequality

⟨v, x̄ − x̃^k⟩ ≥ (λ_k/2)(‖x^0 − x̃^k‖² − ‖x^0 − x̄‖²),    (4.11)

where x̄ is a solution of VIP(T, Ω) and v ∈ T(x̄). Using the definition of x̄ and the fact that x^k ∈ Ω, we get ⟨v, x^k − x̄⟩ ≥ 0. Adding and subtracting x̃^k in this duality product, we obtain

⟨v, x^k − x̃^k⟩ ≥ ⟨v, x̄ − x̃^k⟩.    (4.12)

Using the definition of {x^k} and combining (4.12) and (4.11), we have

(cλ_k/α)⟨v, x^0 − x̃^k⟩ ≥ (λ_k/2)(‖x^0 − x̃^k‖² − ‖x^0 − x̄‖²).    (4.13)

Dividing by λ_k > 0, we conclude that

(2c/α)⟨v, x^0 − x̃^k⟩ − (‖x^0 − x̃^k‖² − ‖x^0 − x̄‖²) ≥ 0,    (4.14)

which, in the same way as in the last part of the proof of Proposition 4.7, yields boundedness of {x̃^k}. Now let {x̃^{k_j}} ⊂ {x̃^k} be a subsequence weakly convergent to x*. Using Definition 4.1(2′) we have that h(x̃^{k_j}) ≤ β_{k_j} for all j. Since h is weakly lower semicontinuous and β_k ↓ 0, we get that h(x*) ≤ 0. Thus x* ∈ Ω and hence the orbit {x̃^k} is af.

We have proved so far that, under suitable assumptions on the data, Algorithm 2 generates an orbit which is bounded and af. We next establish conditions under which these two properties guarantee optimality of weak accumulation points.

Theorem 4.9 Let the sequence {x̃^k} be generated by Algorithm 2, where the iterates x̃^k verify condition (1) of Definition 4.1 with parameters {λ_k} such that lim_k λ_k = 0. Suppose that (H_2) and (H_3) hold. If {x̃^k} is bounded and af, then one of the following holds.

(i) Algorithm 2 has finite termination, and the last iterate is a solution of VIP(T, Ω); or

(ii) Algorithm 2 generates an infinite orbit, and every weak accumulation point of {x̃^k} is a solution of VIP(T, Ω).

Proof. Case (i) has been taken care of in Remark 4.5, so it is enough to consider the case in which Algorithm 2 generates an infinite orbit. Since x̃^k verifies condition (1) of Definition 4.1, we have, as in (4.3),

⟨u^k, x − x̃^k⟩ ≥ λ_k ⟨J(x^0 − x̃^k), x − x̃^k⟩  for all x ∈ Ω̃_k.    (4.15)

Take x* a weak accumulation point of {x̃^k}, and a subsequence {x̃^{k_j}} ⊂ {x̃^k} weakly converging to x*. Using the last expression, together with the fact that Ω ⊂ Ω̃_k, we can write for every x ∈ Ω

⟨u^k, x̃^k − x⟩ ≤ λ_k ⟨J(x^0 − x̃^k), x̃^k − x⟩ ≤ λ_k ‖J(x^0 − x̃^k)‖ ‖x̃^k − x‖ = λ_k ‖x^0 − x̃^k‖ ‖x̃^k − x‖,    (4.16)

where we used the Cauchy–Schwarz inequality and the definition of J. Since lim_k λ_k = 0, x* ∈ Ω and {x̃^k} is bounded, from (4.16) for k = k_j we get that

lim sup_j ⟨u^{k_j}, x̃^{k_j} − x*⟩ ≤ 0.    (4.17)

Using x̃^{k_j} ⇀ x*, (4.17) and pseudomonotonicity of T, for x̄ ∈ S we conclude that there exists u ∈ T(x*) such that

lim inf_j ⟨u^{k_j}, x̃^{k_j} − x̄⟩ ≥ ⟨u, x* − x̄⟩.    (4.18)

On the other hand, using (4.16) for x = x̄ ∈ S and k = k_j, we get

lim inf_j ⟨u^{k_j}, x̃^{k_j} − x̄⟩ ≤ 0.    (4.19)

Using also (4.18), we get

⟨u, x* − x̄⟩ ≤ 0.    (4.20)

Finally, by paramonotonicity of T and the above inequality, we can use Proposition 2.8 to guarantee that x* is a solution of VIP(T, Ω), as we wanted to prove.
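The mechanism behind Theorem 4.9 can be seen in a toy example. The following numerical sketch is ours, not part of the paper: it works in the Hilbert space B = ℝ (where J is the identity), with the hypothetical data T(x) = x − 2, Ω = [−1, 1], outer sets Ω̃_k = [−1 − ε_k, 1 + ε_k], and the schedule λ_k = ε_k = 2^{−k}. Each regularized subproblem then has a closed-form solution.

```python
import numpy as np

# Toy illustration (ours) of Theorem 4.9 in B = R.  T(x) = x - 2 is the
# gradient of f(x) = (x - 2)^2 / 2, so VIP(T, Omega) with Omega = [-1, 1]
# has the unique solution x* = 1.  The outer sets Omega_k contain Omega and
# shrink onto it.  Each subproblem VIP(T_{lam_k}, Omega_k) amounts to
# minimizing f(x) + (lam_k / 2) * (x - x0)^2 over Omega_k: its solution is
# the unconstrained minimizer clipped to the interval.

x0 = 0.0  # anchor point of the Tikhonov regularization

def subproblem(lam, eps):
    x_unc = (2.0 + lam * x0) / (1.0 + lam)        # zero of T(x) + lam*(x - x0)
    return float(np.clip(x_unc, -1.0 - eps, 1.0 + eps))

# lam_k -> 0 while the outer sets shrink; the iterates live in
# Omega_k \ Omega (infeasible, but asymptotically feasible).
iterates = [subproblem(lam=2.0 ** -k, eps=2.0 ** -k) for k in range(1, 31)]
gap = abs(iterates[-1] - 1.0)
```

Every iterate exceeds 1, so the orbit never enters Ω, yet it accumulates at the solution x* = 1, exactly as case (ii) of the theorem predicts.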

Our next convergence result requires the approximated solutions x̃ to stay close enough to the original set Ω. Fix θ > 0 and r > 1. We say that x̃ is a metric-approximated solution if it verifies Definition 4.1(1) and the condition

(2″) d(x̃, Ω) := inf_{z ∈ Ω} ‖x̃ − z‖ ≤ θλ^r,

where λ is the parameter used in Definition 4.1(1).

Remark 4.10 When Ω is as in (4.1), condition (2″) can be guaranteed when Hoffman's global error bound [25] holds: there exists θ > 0 such that

d(x, Ω) ≤ θ max_{y ∈ Y} g(x, y) = θ h(x)  for all x ∉ Ω.    (4.21)

An error bound of the kind above has been proved to hold in a Banach space in [15]. Namely, assume that (G_1)–(G_3) hold. Suppose also that there exist δ, γ > 0 such that

(G_4) Ω(δ) := {x ∈ B : h(x) ≤ −δ} ≠ ∅, and

(G_5) Haus(Ω, Ω(δ)) < γ

(where Haus(·, ·) stands for the Hausdorff distance between sets). Then [15, Proposition 1] proves that

d(x, Ω) ≤ (γ/δ) h(x)  for all x ∉ Ω.    (4.22)

Therefore, when assumptions (G_1)–(G_5) hold for Ω, condition (2″) is a consequence of Definition 4.1(2′) with β = λ^r. For alternative conditions under which a global error bound of the kind (4.21) holds in a Banach space, see [27].

In the theorem below we prove boundedness of the orbit generated by Algorithm 2, as well as optimality of all weak limit points. In the case in which B is uniformly convex, we get full strong convergence, and we characterize the limit of {x̃^k} as the closest point to x^0 in the solution set.

Theorem 4.11 Let the sequence {x̃^k} be generated by Algorithm 2, where the iterates satisfy condition (1) of Definition 4.1 and condition (2″) for λ = λ_k. Assume that λ_k ↓ 0 and that (H_2) and (H_3) hold. Then the sequence {x̃^k} is bounded and every weak limit point is a solution of VIP(T, Ω). Moreover, if B is uniformly convex, then the sequence {x̃^k} converges strongly and its limit is the unique x* characterized by

{x*} = argmin_{y ∈ S} ‖x^0 − y‖².
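As a concrete illustration of the last assertion, here is a sketch of ours (toy data, not taken from the paper) in the Hilbert space B = ℝ², where J is the identity and strong and weak convergence coincide. We choose T and Ω so that the solution set S is a whole segment; the regularized iterates then single out the point of S closest to x^0.

```python
import numpy as np

# Our toy example for Theorem 4.11: T = grad f with f(x) = (x1 - 1)^2 / 2
# and Omega = [-2, 2]^2, so the solution set of VIP(T, Omega) is the whole
# segment S = {1} x [-2, 2].  Taking Omega_k = Omega (exact feasibility, so
# condition (2'') holds trivially), the k-th subproblem minimizes
#   f(x) + (lam_k / 2) * ||x - x0||^2  over Omega,
# which decouples coordinate-wise and has the closed form below.

x0 = np.array([0.0, 0.7])  # starting point of the algorithm

def subproblem(lam):
    # stationary point of (x1 - 1)^2/2 + lam*(x1 - x0[0])^2/2, interior to [-2, 2]
    x1 = (1.0 + lam * x0[0]) / (1.0 + lam)
    x2 = x0[1]  # only the regularization acts on the second coordinate
    return np.array([x1, x2])

iterates = [subproblem(2.0 ** -k) for k in range(1, 41)]
limit = iterates[-1]
# The iterates converge to (1, 0.7), the point of S closest to x0,
# i.e. argmin over y in S of ||x0 - y||^2, as the theorem predicts.
err = float(np.linalg.norm(limit - np.array([1.0, 0.7])))
```

Even though every point of the segment S solves the variational inequality, the vanishing Tikhonov term selects the projection of x^0 onto S, which is the selection property the theorem establishes in uniformly convex spaces.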

Proof. We prove first that {x̃^k} is bounded. The approximated solution x̃^k is such that

u^k + η^k + λ_k J(x̃^k − x^0) = 0, with u^k ∈ T(x̃^k), η^k ∈ N_{Ω̃_k}(x̃^k).    (4.23)

Let y ∈ S; then there exists v ∈ T(y) such that

⟨v, x − y⟩ ≥ 0  for all x ∈ Ω.    (4.24)

Thus, by Proposition 2.6(ii),

‖x^0 − y‖² ≥ ‖x^0 − x̃^k‖² + 2⟨J(x^0 − x̃^k), x̃^k − y⟩.    (4.25)

Define A := 2⟨J(x^0 − x̃^k), x̃^k − y⟩. Then (4.23) implies that

A = (2/λ_k)(⟨u^k, x̃^k − y⟩ + ⟨η^k, x̃^k − y⟩).

Using the definition of the normality operator and monotonicity of T, we obtain

A ≥ (2/λ_k)⟨v, x̃^k − y⟩.    (4.26)

Combining (4.26) and (4.25), we get

‖x^0 − y‖² ≥ ‖x^0 − x̃^k‖² + (2/λ_k)⟨v, x̃^k − y⟩.    (4.27)

We claim that, for all y ∈ S,

‖x^0 − y‖² ≥ ‖x^0 − x̃^k‖² − 2θλ_k^{r−1}‖v‖.    (4.28)

Indeed, assume first that x̃^k ∈ Ω; by (4.24) and (4.27) we have that ‖x^0 − y‖² ≥ ‖x^0 − x̃^k‖², and hence (4.28) holds. Assume now that x̃^k ∉ Ω. Letting p^k be the projection of x̃^k onto Ω, we obtain

(2/λ_k)⟨v, x̃^k − y⟩ = (2/λ_k)[⟨v, x̃^k − p^k⟩ + ⟨v, p^k − y⟩] ≥ (2/λ_k)⟨v, x̃^k − p^k⟩ ≥ −(2/λ_k)‖v‖ ‖x̃^k − p^k‖ ≥ −2θλ_k^{r−1}‖v‖,    (4.29)

where we used the Cauchy–Schwarz inequality, (4.24) and condition (2″). Combining (4.27) with (4.29), we conclude (4.28). Thus our claim is true and

hence {x̃^k} is bounded. Let {x̃^{k_j}} ⊂ {x̃^k} be a subsequence weakly convergent to x*. Since λ_{k_j} ↓ 0, condition (2″) readily implies the existence of a sequence {y^{k_j}} ⊂ Ω such that w-lim_j y^{k_j} = x*. Using the fact that Ω is convex and (strongly) closed, it is also weakly closed, and hence x* ∈ Ω. Now, using the last part of the proof of Theorem 4.9 (see equations (4.16) to (4.20)), every weak limit point of {x̃^k} is a solution of VIP(T, Ω).

We proceed now to prove the last assertion of the theorem. Assume that B is uniformly convex, and take again the subsequence {x̃^{k_j}} weakly converging to x*. We know that x* ∈ S. By (4.28) for k = k_j and y = x* we get

‖x^0 − x*‖² ≥ ‖x^0 − x̃^{k_j}‖² − 2θλ_{k_j}^{r−1}‖v‖.

Taking lim sup for j → ∞ in the expression above and using that λ_k ↓ 0 (recall that r > 1), we have that

‖x^0 − x*‖² ≥ lim sup_j ‖x^0 − x̃^{k_j}‖².    (4.30)

Since {x^0 − x̃^{k_j}} weakly converges to x^0 − x*, we apply Proposition 2.13 to conclude that {x̃^{k_j}} converges strongly to x*. Now, taking limits for j → ∞ in (4.28) for k = k_j, we get

‖x^0 − y‖² ≥ ‖x^0 − x*‖²  for all y ∈ S.

Thus x* ∈ argmin_{y ∈ S} ‖x^0 − y‖². This point is unique by strict convexity of ‖·‖².

5 Acknowledgment

The authors are very grateful to the two referees, whose helpful and essential corrections greatly improved an earlier version of the manuscript.

References

[1] E. Asplund, Positivity of duality mappings, Bull. Amer. Math. Soc., 73 (1967).

[2] J. W. Blankenship and J. E. Falk, Infinitely constrained optimization problems, J. Optim. Theory Appl., 19, 2 (1976).

[3] J. Bracken and J. F. McGill, Mathematical programs with optimization problems in the constraints, Operations Research, 21, 1 (1973).

[4] H. Brézis, Opérateurs Monotones Maximaux et Semi-groupes de Contractions dans les Espaces de Hilbert, Université de Paris-CNRS, Paris (1971).

[5] H. Brézis, Analyse fonctionnelle: Théorie et applications, Masson, Paris (1983).

[6] F. E. Browder, Nonlinear operators and nonlinear equations of evolution in Banach spaces, Proceedings of Symposia in Pure Mathematics, 18, 2, American Mathematical Society (1976).

[7] R. E. Bruck, An iterative solution of a variational inequality for certain monotone operators in a Hilbert space, Bulletin of the American Math. Soc., 81 (1975) (with corrigendum, in 82 (1976), p. 353).

[8] R. S. Burachik and S. M. Scheimberg, A proximal point algorithm for the variational inequality problem in Banach spaces, SIAM J. Control Optim., 39, 5 (2001).

[9] Y. Censor, A. Iusem and S. Zenios, An interior point method with Bregman functions for the variational inequality problem with paramonotone operators, Math. Programming, 81 (1998).

[10] W. E. Cheney and A. A. Goldstein, Newton's method for convex programming and Tchebycheff approximation, Numer. Math., 1 (1959).

[11] P. L. Combettes, Strong convergence of block-iterative outer approximation methods for convex optimization, SIAM J. Control Optim., 38, 2 (2000).

[12] G. Crombez, Finding projections onto the intersection of convex sets in Hilbert spaces, Numer. Funct. Anal. Optim., 16 (1995).

[13] J. M. Danskin, The Theory of Max-Min, Springer-Verlag, Berlin, Germany (1967).

[14] M. A. H. Dempster and R. R. Merkovsky, A practical geometrically convergent cutting plane algorithm, SIAM J. Numer. Anal., 32 (1995).

[15] S. Deng, Computable error bounds for convex inequality systems in Banach spaces, SIAM J. Control Optim., 36, 4 (1998).

[16] B. C. Eaves and W. I. Zangwill, Generalized cutting plane algorithms, SIAM J. Control Optim., 9 (1971).

[17] A. V. Fiacco and K. O. Kortanek, Semi-Infinite Programming and Applications, Lecture Notes in Economics and Math. Systems, 215, Springer-Verlag, New York (1983).

[18] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston (1984).

[19] S. A. Gustavson and K. O. Kortanek, Numerical treatment of a class of semi-infinite programming problems, Naval Research Logistics Quarterly, 20, 3 (1970).

[20] Y. Haugazeau, Sur la minimisation de formes quadratiques avec contraintes, C. R. Acad. Sci. Paris Sér. A Math., 264 (1967).

[21] Y. Haugazeau, Sur les Inéquations Variationnelles et la Minimisation de Fonctionnelles Convexes, Thèse, Université de Paris, Paris, France (1968).

[22] R. Hettich, Semi-Infinite Programming, Lecture Notes in Control and Inform. Sci., 15, Springer-Verlag, New York (1979).

[23] R. Hettich, A review of numerical methods for semi-infinite optimization, in Semi-Infinite Programming and Applications, Lecture Notes in Economics and Math. Systems, 215, Springer-Verlag, New York (1983).

[24] R. Hettich and K. O. Kortanek, Semi-infinite programming: theory, methods and applications, SIAM Review, 35, 3 (1993).

[25] A. J. Hoffman, On approximate solutions of systems of linear inequalities, Journal of the National Bureau of Standards, 49 (1952).

[26] A. N. Iusem, On some properties of paramonotone operators, J. Convex Anal., 5 (1998).

[27] A. Jourani, Hoffman's error bound, local controllability, and sensitivity analysis, SIAM J. Control Optim., 38, 3 (2000).

[28] A. A. Kaplan, Determination of the extremum of a linear function on a convex set, Soviet Math. Dokl., 9 (1968).

[29] A. A. Kaplan and R. Tichatschke, Stable Methods for Ill-Posed Variational Problems, Akademie Verlag, Berlin (1994).

[30] A. A. Kaplan and R. Tichatschke, Variational inequalities and convex semi-infinite programming problems, Optimization, 26 (1992).

[31] S. Karamardian, Complementarity problems over cones with monotone and pseudomonotone maps, J. Optim. Theory Appl., 18 (1976).

[32] J. E. Kelley, The cutting-plane method for solving convex programs, J. SIAM, 8 (1960).

[33] P. J. Laurent and B. Martinet, Méthodes duales pour le calcul du minimum d'une fonction convexe sur une intersection de convexes, in Symposium on Optimization, Lecture Notes in Mathematics, 132, Springer-Verlag, New York (1970).

[34] F. Liu and M. Z. Nashed, Regularization of nonlinear ill-posed variational inequalities and convergence rates, Set-Valued Analysis, 6 (1998).

[35] U. Mosco, Convergence of convex sets and of solutions of variational inequalities, Advances in Mathematics, 3, 4 (1969).

[36] W. Oettli, Solving optimization problems with many constraints by a sequence of subproblems containing only two constraints, Math. Nachr., 71 (1976).

[37] D. Pascali and S. Sburlan, Nonlinear Mappings of Monotone Type, Ed. Academiei, Bucharest, Romania (1978).

[38] G. Pierra, Éclatement de contraintes en parallèle pour la minimisation d'une forme quadratique, Lecture Notes in Comput. Sci., 41, Springer-Verlag, New York (1976).

[39] G. Pierra, Decomposition through formalization in a product space, Math. Programming, 28 (1984).

[40] R. Reemtsen and S. Goerner, Numerical methods for semi-infinite programming: a survey, in R. Reemtsen and J.-J. Rückmann (eds.), Semi-Infinite Programming, Kluwer, Boston (1998).

[41] R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Transactions of the American Mathematical Society, 149 (1970).

[42] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, John Wiley & Sons, Washington, D.C. (1977); translation editor Fritz John.

[43] D. M. Topkis, Cutting-plane methods without nested constraint sets, Oper. Res., 18 (1970).

[44] D. M. Topkis, A cutting-plane algorithm with linear and geometric rates of convergence, J. Optim. Theory Appl., 36 (1982).

[45] A. F. Veinott, The supporting hyperplane method for unimodal programming, Oper. Res., 15 (1967).

[46] A. A. Vladimirov, Yu. E. Nesterov and Yu. N. Cekanov, Uniformly convex functionals, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet., 3 (1978).

[47] C. Zalinescu, On uniformly convex functions, J. Math. Anal. Appl., 94 (1983).

[48] W. I. Zangwill, Nonlinear Programming: A Unified Approach, Prentice-Hall, Englewood Cliffs, NJ (1969).


More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 4, Issue 4, Article 67, 2003 ON GENERALIZED MONOTONE MULTIFUNCTIONS WITH APPLICATIONS TO OPTIMALITY CONDITIONS IN

More information

A proximal-like algorithm for a class of nonconvex programming

A proximal-like algorithm for a class of nonconvex programming Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,

More information

AW -Convergence and Well-Posedness of Non Convex Functions

AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

Existence results for quasi-equilibrium problems

Existence results for quasi-equilibrium problems Existence results for quasi-equilibrium problems D. Aussel J. Cotrina A. Iusem January 03, 2014 Abstract Recently in Castellani-Guili (J. Optim. Th. Appl., 147 (2010), 157-168), it has been showed that

More information

SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES

SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES ARCHIVUM MATHEMATICUM (BRNO) Tomus 42 (2006), 167 174 SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES ABDELHAKIM MAADEN AND ABDELKADER STOUTI Abstract. It is shown that under natural assumptions,

More information

Examples of Convex Functions and Classifications of Normed Spaces

Examples of Convex Functions and Classifications of Normed Spaces Journal of Convex Analysis Volume 1 (1994), No.1, 61 73 Examples of Convex Functions and Classifications of Normed Spaces Jon Borwein 1 Department of Mathematics and Statistics, Simon Fraser University

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

A projection-type method for generalized variational inequalities with dual solutions

A projection-type method for generalized variational inequalities with dual solutions Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method

More information

Existence and Approximation of Fixed Points of. Bregman Nonexpansive Operators. Banach Spaces

Existence and Approximation of Fixed Points of. Bregman Nonexpansive Operators. Banach Spaces Existence and Approximation of Fixed Points of in Reflexive Banach Spaces Department of Mathematics The Technion Israel Institute of Technology Haifa 22.07.2010 Joint work with Prof. Simeon Reich General

More information

SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES

SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 76, Iss. 2, 2014 ISSN 1223-7027 SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES

More information

arxiv: v1 [math.na] 25 Sep 2012

arxiv: v1 [math.na] 25 Sep 2012 Kantorovich s Theorem on Newton s Method arxiv:1209.5704v1 [math.na] 25 Sep 2012 O. P. Ferreira B. F. Svaiter March 09, 2007 Abstract In this work we present a simplifyed proof of Kantorovich s Theorem

More information

ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES

ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 124, Number 11, November 1996 ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES HASSAN RIAHI (Communicated by Palle E. T. Jorgensen)

More information

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems Extensions of the CQ Algorithm for the Split Feasibility Split Equality Problems Charles L. Byrne Abdellatif Moudafi September 2, 2013 Abstract The convex feasibility problem (CFP) is to find a member

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space

Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space Mathematica Moravica Vol. 19-1 (2015), 95 105 Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space M.R. Yadav Abstract. In this paper, we introduce a new two-step iteration process to approximate

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:

More information

Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces

Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces Journal of Convex Analysis Volume 17 (2010), No. 2, 553 563 Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces M. Marques Alves IMPA, Estrada Dona Castorina 110, 22460-320

More information

MAXIMALITY OF SUMS OF TWO MAXIMAL MONOTONE OPERATORS

MAXIMALITY OF SUMS OF TWO MAXIMAL MONOTONE OPERATORS MAXIMALITY OF SUMS OF TWO MAXIMAL MONOTONE OPERATORS JONATHAN M. BORWEIN, FRSC Abstract. We use methods from convex analysis convex, relying on an ingenious function of Simon Fitzpatrick, to prove maximality

More information

CONSTRUCTION OF BEST BREGMAN APPROXIMATIONS IN REFLEXIVE BANACH SPACES

CONSTRUCTION OF BEST BREGMAN APPROXIMATIONS IN REFLEXIVE BANACH SPACES PROCEEINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 131, Number 12, Pages 3757 3766 S 0002-9939(03)07050-3 Article electronically published on April 24, 2003 CONSTRUCTION OF BEST BREGMAN APPROXIMATIONS

More information

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES Fixed Point Theory, 12(2011), No. 2, 309-320 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.html WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES S. DHOMPONGSA,

More information

Epiconvergence and ε-subgradients of Convex Functions

Epiconvergence and ε-subgradients of Convex Functions Journal of Convex Analysis Volume 1 (1994), No.1, 87 100 Epiconvergence and ε-subgradients of Convex Functions Andrei Verona Department of Mathematics, California State University Los Angeles, CA 90032,

More information

Yuqing Chen, Yeol Je Cho, and Li Yang

Yuqing Chen, Yeol Je Cho, and Li Yang Bull. Korean Math. Soc. 39 (2002), No. 4, pp. 535 541 NOTE ON THE RESULTS WITH LOWER SEMI-CONTINUITY Yuqing Chen, Yeol Je Cho, and Li Yang Abstract. In this paper, we introduce the concept of lower semicontinuous

More information

YET MORE ON THE DIFFERENTIABILITY OF CONVEX FUNCTIONS

YET MORE ON THE DIFFERENTIABILITY OF CONVEX FUNCTIONS PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 103, Number 3, July 1988 YET MORE ON THE DIFFERENTIABILITY OF CONVEX FUNCTIONS JOHN RAINWATER (Communicated by William J. Davis) ABSTRACT. Generic

More information

Solution existence of variational inequalities with pseudomonotone operators in the sense of Brézis

Solution existence of variational inequalities with pseudomonotone operators in the sense of Brézis Solution existence of variational inequalities with pseudomonotone operators in the sense of Brézis B. T. Kien, M.-M. Wong, N. C. Wong and J. C. Yao Communicated by F. Giannessi This research was partially

More information

SOME ELEMENTARY GENERAL PRINCIPLES OF CONVEX ANALYSIS. A. Granas M. Lassonde. 1. Introduction

SOME ELEMENTARY GENERAL PRINCIPLES OF CONVEX ANALYSIS. A. Granas M. Lassonde. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 5, 1995, 23 37 SOME ELEMENTARY GENERAL PRINCIPLES OF CONVEX ANALYSIS A. Granas M. Lassonde Dedicated, with admiration,

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Monotone operators and bigger conjugate functions

Monotone operators and bigger conjugate functions Monotone operators and bigger conjugate functions Heinz H. Bauschke, Jonathan M. Borwein, Xianfu Wang, and Liangjin Yao August 12, 2011 Abstract We study a question posed by Stephen Simons in his 2008

More information

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES Dan Butnariu and Elena Resmerita Abstract. In this paper we establish criteria for the stability of the proximal mapping Prox f ϕ =( ϕ+ f)

More information

Monotone variational inequalities, generalized equilibrium problems and fixed point methods

Monotone variational inequalities, generalized equilibrium problems and fixed point methods Wang Fixed Point Theory and Applications 2014, 2014:236 R E S E A R C H Open Access Monotone variational inequalities, generalized equilibrium problems and fixed point methods Shenghua Wang * * Correspondence:

More information

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators Radu Ioan Boţ Ernö Robert Csetnek April 23, 2012 Dedicated to Jon Borwein on the occasion of his 60th birthday Abstract. In this note

More information

Fixed points in the family of convex representations of a maximal monotone operator

Fixed points in the family of convex representations of a maximal monotone operator arxiv:0802.1347v2 [math.fa] 12 Feb 2008 Fixed points in the family of convex representations of a maximal monotone operator published on: Proc. Amer. Math. Soc. 131 (2003) 3851 3859. B. F. Svaiter IMPA

More information

("-1/' .. f/ L) I LOCAL BOUNDEDNESS OF NONLINEAR, MONOTONE OPERA TORS. R. T. Rockafellar. MICHIGAN MATHEMATICAL vol. 16 (1969) pp.

(-1/' .. f/ L) I LOCAL BOUNDEDNESS OF NONLINEAR, MONOTONE OPERA TORS. R. T. Rockafellar. MICHIGAN MATHEMATICAL vol. 16 (1969) pp. I l ("-1/'.. f/ L) I LOCAL BOUNDEDNESS OF NONLINEAR, MONOTONE OPERA TORS R. T. Rockafellar from the MICHIGAN MATHEMATICAL vol. 16 (1969) pp. 397-407 JOURNAL LOCAL BOUNDEDNESS OF NONLINEAR, MONOTONE OPERATORS

More information

Keywords. 1. Introduction.

Keywords. 1. Introduction. Journal of Applied Mathematics and Computation (JAMC), 2018, 2(11), 504-512 http://www.hillpublisher.org/journal/jamc ISSN Online:2576-0645 ISSN Print:2576-0653 Statistical Hypo-Convergence in Sequences

More information

Proximal Point Methods and Augmented Lagrangian Methods for Equilibrium Problems

Proximal Point Methods and Augmented Lagrangian Methods for Equilibrium Problems Proximal Point Methods and Augmented Lagrangian Methods for Equilibrium Problems Doctoral Thesis by Mostafa Nasri Supervised by Alfredo Noel Iusem IMPA - Instituto Nacional de Matemática Pura e Aplicada

More information

EXISTENCE RESULTS FOR OPERATOR EQUATIONS INVOLVING DUALITY MAPPINGS VIA THE MOUNTAIN PASS THEOREM

EXISTENCE RESULTS FOR OPERATOR EQUATIONS INVOLVING DUALITY MAPPINGS VIA THE MOUNTAIN PASS THEOREM EXISTENCE RESULTS FOR OPERATOR EQUATIONS INVOLVING DUALITY MAPPINGS VIA THE MOUNTAIN PASS THEOREM JENICĂ CRÎNGANU We derive existence results for operator equations having the form J ϕu = N f u, by using

More information

ITERATIVE ALGORITHMS WITH ERRORS FOR ZEROS OF ACCRETIVE OPERATORS IN BANACH SPACES. Jong Soo Jung. 1. Introduction

ITERATIVE ALGORITHMS WITH ERRORS FOR ZEROS OF ACCRETIVE OPERATORS IN BANACH SPACES. Jong Soo Jung. 1. Introduction J. Appl. Math. & Computing Vol. 20(2006), No. 1-2, pp. 369-389 Website: http://jamc.net ITERATIVE ALGORITHMS WITH ERRORS FOR ZEROS OF ACCRETIVE OPERATORS IN BANACH SPACES Jong Soo Jung Abstract. The iterative

More information

Continuous Sets and Non-Attaining Functionals in Reflexive Banach Spaces

Continuous Sets and Non-Attaining Functionals in Reflexive Banach Spaces Laboratoire d Arithmétique, Calcul formel et d Optimisation UMR CNRS 6090 Continuous Sets and Non-Attaining Functionals in Reflexive Banach Spaces Emil Ernst Michel Théra Rapport de recherche n 2004-04

More information

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o Journal of Convex Analysis Volume 6 (1999), No. 1, pp. xx-xx. cheldermann Verlag A HYBRID PROJECTION{PROXIMAL POINT ALGORITHM M. V. Solodov y and B. F. Svaiter y January 27, 1997 (Revised August 24, 1998)

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN Thai Journal of Mathematics Volume 14 (2016) Number 1 : 53 67 http://thaijmath.in.cmu.ac.th ISSN 1686-0209 A New General Iterative Methods for Solving the Equilibrium Problems, Variational Inequality Problems

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (06), 5536 5543 Research Article On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

More information

A characterization of essentially strictly convex functions on reflexive Banach spaces

A characterization of essentially strictly convex functions on reflexive Banach spaces A characterization of essentially strictly convex functions on reflexive Banach spaces Michel Volle Département de Mathématiques Université d Avignon et des Pays de Vaucluse 74, rue Louis Pasteur 84029

More information

c 1998 Society for Industrial and Applied Mathematics

c 1998 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 9, No. 1, pp. 179 189 c 1998 Society for Industrial and Applied Mathematics WEAK SHARP SOLUTIONS OF VARIATIONAL INEQUALITIES PATRICE MARCOTTE AND DAOLI ZHU Abstract. In this work we

More information

arxiv: v3 [math.oc] 18 Apr 2012

arxiv: v3 [math.oc] 18 Apr 2012 A class of Fejér convergent algorithms, approximate resolvents and the Hybrid Proximal-Extragradient method B. F. Svaiter arxiv:1204.1353v3 [math.oc] 18 Apr 2012 Abstract A new framework for analyzing

More information

arxiv: v1 [math.oc] 21 Mar 2015

arxiv: v1 [math.oc] 21 Mar 2015 Convex KKM maps, monotone operators and Minty variational inequalities arxiv:1503.06363v1 [math.oc] 21 Mar 2015 Marc Lassonde Université des Antilles, 97159 Pointe à Pitre, France E-mail: marc.lassonde@univ-ag.fr

More information

The Journal of Nonlinear Science and Applications

The Journal of Nonlinear Science and Applications J. Nonlinear Sci. Appl. 2 (2009), no. 2, 78 91 The Journal of Nonlinear Science and Applications http://www.tjnsa.com STRONG CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS AND FIXED POINT PROBLEMS OF STRICT

More information

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup International Mathematical Forum, Vol. 11, 2016, no. 8, 395-408 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2016.6220 The Split Hierarchical Monotone Variational Inclusions Problems and

More information

Topological Degree and Variational Inequality Theories for Pseudomonotone Perturbations of Maximal Monotone Operators

Topological Degree and Variational Inequality Theories for Pseudomonotone Perturbations of Maximal Monotone Operators University of South Florida Scholar Commons Graduate Theses and Dissertations Graduate School January 2013 Topological Degree and Variational Inequality Theories for Pseudomonotone Perturbations of Maximal

More information