Iteration-complexity of Rockafellar's proximal method of multipliers for convex programming based on second-order approximations

Iteration-complexity of Rockafellar's proximal method of multipliers for convex programming based on second-order approximations

M. Marques Alves    R.D.C. Monteiro    Benar F. Svaiter

February 1, 2016

Abstract

This paper studies the iteration-complexity of a new primal-dual algorithm based on Rockafellar's proximal method of multipliers (PMM) for solving smooth convex programming problems with inequality constraints. In each step, either a step of Rockafellar's PMM for a second-order model of the problem is computed or a relaxed extragradient step is performed. The resulting algorithm is a large-step relaxed hybrid proximal extragradient (r-HPE) method of multipliers, which combines Rockafellar's PMM with the r-HPE method.

2000 Mathematics Subject Classification: 90C25, 90C30, 47H05.

Keywords: convex programming, proximal method of multipliers, second-order methods, hybrid extragradient, complexity.

1 Introduction

The smooth convex programming problem with (for the sake of simplicity) only inequality constraints is

    min f(x)   s.t.   g(x) ≤ 0                                              (1)

where f : R^n → R and the components of g = (g_1, ..., g_m) : R^n → R^m are smooth convex functions. Dual methods for this problem solve the associated dual problem

    max inf_{x ∈ R^n} f(x) + ⟨y, g(x)⟩   s.t.   y ≥ 0

and, en passant, find a solution of the original primal problem. Notice that a pair (x, y) satisfies the Karush-Kuhn-Tucker conditions for problem (1) if and only if x is a solution of this problem, y is a solution of the associated dual problem, and there is no duality gap.

The method of multipliers, which was proposed by Hestenes [3, 4] and Powell [10] for equality constrained optimization problems and extended by Rockafellar [11] (see also [12]) to inequality constrained convex programming problems, is a typical example of a dual method.

Departamento de Matemática, Universidade Federal de Santa Catarina, Florianópolis, Brazil, maicon.alves@ufsc.br. This author was partially supported by CNPq grants no. /2013-8, 37068/2013-3 and /.

School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, monteiro@isye.gatech.edu. This author was partially supported by CNPq Grant 40650/2013-8 and NSF Grant CMMI.

IMPA, Estrada Dona Castorina 110, Rio de Janeiro, Brazil, benar@impa.br. This author was partially supported by CNPq grants /2013-1, 3096/2011-5, FAPERJ grant E-6/10.940/2011, and PRONEX-Optimization.

The method of multipliers generates iteratively sequences (x_k) and (y_k) as follows:

    x_k ≈ argmin_{x ∈ R^n} L(x, y_{k-1}, λ_k),    y_k = y_{k-1} + λ_k ∇_y L(x_k, y_{k-1}, λ_k),

where ≈ stands for approximate solution, λ_k > 0, and L(x, y, λ) is the augmented Lagrangian

    L(x, y, λ) = f(x) + (1/(2λ)) [ ‖(y + λ g(x))_+‖² − ‖y‖² ] = max_{y' ≥ 0} { f(x) + ⟨y', g(x)⟩ − (1/(2λ)) ‖y' − y‖² }.

The method of multipliers is also called the augmented Lagrangian method. In the seminal article [12], Rockafellar proved that the method of multipliers is an instance of his proximal point method (hereafter PPM) [13] applied to the dual objective function. Still in [12], Rockafellar proposed a new primal-dual method for (1), which we discuss next and which we will use in this paper to design a new primal-dual method for this problem.

Rockafellar's proximal method of multipliers (hereafter PMM) [12] generates, for any starting point (x_0, y_0), a sequence (x_k, y_k) as the approximate solution of a regularized saddle-point problem

    (x_k, y_k) ≈ argmin_{x ∈ R^n} max_{y ∈ R^m} f(x) + ⟨y, g(x)⟩ + (1/(2λ)) [ ‖x − x̊‖² − ‖y − ẙ‖² ],    k ∈ N,    (2)

where (x̊, ẙ) = (x_{k-1}, y_{k-1}) is the current iterate and λ = λ_k > 0 is a stepsize parameter. Notice that the objective function of the above saddle-point problem is obtained by adding to the augmented Lagrangian a proximal term for the primal variable x. If

    inf_k λ_k > 0   and   Σ_{k=1}^∞ ‖(x_k, y_k) − (x_k*, y_k*)‖ < ∞,    (3)

where (x_k*, y_k*) is the exact solution of (2), then (x_k, y_k)_{k ∈ N} converges to a solution of the Karush-Kuhn-Tucker conditions for (1), provided that there exists a pair satisfying these conditions. This result follows from the facts that the satisfaction of the KKT conditions for (1) can be formulated as a monotone inclusion problem and that (2) is the Rockafellar PPM iteration for this inclusion problem (see the comments after Proposition 3.2). Although (2) is a strongly convex-concave problem, and hence has a unique solution, the computation of its exact (or an approximate) solution can be very hard.

We assume in this paper that f and g_i (i = 1, ..., m) are C² convex functions with Lipschitz continuous Hessians. The method proposed in this paper either solves a second-order model of (2), in which second-order approximations of f and g_i (i = 1, ..., m) replace these functions in (2), or performs a relaxed extragradient step.

In its general form, the PMM is an inexact PPM in that each iteration approximately solves (2) according to the summable error criterion (3). The method proposed in this paper can also be viewed as an inexact PPM, but one based on a relative error criterion instead of the one in (3). More specifically, it can be viewed as an instance of the large-step relaxed hybrid proximal extragradient (r-HPE) method [8, 16, 22], which we briefly discuss next.

Given a point-to-set maximal monotone operator T : R^p ⇉ R^p, the large-step r-HPE method computes approximate solutions of the monotone inclusion problem 0 ∈ T(z) by means of extragradient steps

    z_k = z_{k-1} − τ λ_k v_k,    (4)

where z_{k-1} is the current iterate, τ ∈ (0, 1] is a relaxation parameter, λ_k > 0 is the stepsize, and v_k together with the pair (z̃_k, ε_k) satisfies the conditions

    v_k ∈ T^{[ε_k]}(z̃_k),    ‖λ_k v_k + z̃_k − z_{k-1}‖² + 2 λ_k ε_k ≤ σ² ‖z̃_k − z_{k-1}‖²,    λ_k ‖z̃_k − z_{k-1}‖ ≥ η,    (5)

where σ ∈ [0, 1) and η > 0 are given constants and T^{[ε]} denotes the ε-enlargement of T.
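As an illustration of how the test (5) and the update (4) can be carried out in practice, the following minimal Python sketch checks the relative-error and large-step conditions for a candidate triple (λ, z̃, v, ε) and, if both hold, performs the extragradient step; the function and argument names are illustrative, the iterates are assumed to be flat numpy vectors, and the inner procedure that produces the candidate triple is left unspecified.

    import numpy as np

    def hpe_accept_and_step(z_prev, z_tilde, v, eps, lam, sigma, tau, eta):
        # relative-error condition of (5):
        #   ||lam*v + z_tilde - z_prev||^2 + 2*lam*eps <= sigma^2 * ||z_tilde - z_prev||^2
        r = lam * v + z_tilde - z_prev
        d = z_tilde - z_prev
        rel_err_ok = r @ r + 2.0 * lam * eps <= sigma**2 * (d @ d)
        # large-step condition of (5): lam * ||z_tilde - z_prev|| >= eta
        large_step_ok = lam * np.linalg.norm(d) >= eta
        if rel_err_ok and large_step_ok:
            return z_prev - tau * lam * v   # extragradient step (4)
        return None                          # candidate rejected; refine (lam, z_tilde, v, eps)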

The ε-enlargement has the property that T^{[ε]}(z) ⊇ T(z) for every z.

The method proposed in this paper for solving the minimization problem (1) can be viewed as a realization of the above framework where the operator T is the standard saddle-point operator defined as

    T(z) := ( ∇f(x) + ∇g(x) y , −g(x) + N_{R^m_+}(y) )   for every z = (x, y).

More specifically, the method consists of two types of iterations. The ones which perform extragradient steps can be viewed as a realization of (4)-(5). On the other hand, each one of the other iterations updates the stepsize by increasing it by a multiplicative factor larger than one and then solves a suitable second-order model of (2). After a few of these iterations, an approximate solution satisfying (5) is then obtained. Hence, in contrast to the PMM, which does not specify how to obtain an approximate solution (x_k, y_k) of (2), or equivalently of the prox inclusion 0 ∈ λ_k T(z) + z − z_{k-1} with T as above, these iterations provide a concrete scheme for computing an approximate solution of this prox inclusion according to the relative criterion in (5). Pointwise and ergodic iteration-complexity bounds are then derived for our method using the fact that the large-step r-HPE method has pointwise and ergodic global convergence rates of O(1/k) and O(1/k^{3/2}), respectively.

The paper is organized as follows. Section 2 reviews some basic properties of ε-enlargements of maximal monotone operators and briefly reviews the basic properties of the PPM and the large-step r-HPE method. Section 3 presents the basic properties of the minimization problem of interest and some equivalences between certain saddle-point, complementarity and monotone inclusion problems, as well as of their regularized versions. Section 4 introduces an error measure, shows some of its properties and how it is related to the relative error criterion of the large-step r-HPE method. Section 5 studies the smooth convex programming problem (1) and its second-order approximations. The proposed method (Algorithm 1) is presented in Section 6 and its (pointwise and ergodic) iteration-complexities are studied in Section 7.

2 Rockafellar's proximal point method and the hybrid proximal extragradient method

This work is based on Rockafellar's proximal point method (PPM). The new method presented in this paper is a particular instance of the large-step relaxed hybrid proximal extragradient (r-HPE) method [14]. For these reasons, in this section we review Rockafellar's PPM, the large-step r-HPE method, and some convergence properties of these methods.

Maximal monotone operators, the monotone inclusion problem, and Rockafellar's proximal point method

A point-to-set operator in R^p, T : R^p ⇉ R^p, is a relation T ⊂ R^p × R^p and

    T(z) := {v | (z, v) ∈ T},   z ∈ R^p.

The inverse of T is T^{-1} : R^p ⇉ R^p, T^{-1} := {(v, z) | (z, v) ∈ T}. The domain and the range of T are, respectively,

    D(T) := {z | T(z) ≠ ∅},    R(T) := {v | ∃ z ∈ R^p, v ∈ T(z)}.

When T(z) is a singleton for all z ∈ D(T), it is usual to identify T with the map D(T) ∋ z ↦ v ∈ R^p where T(z) = {v}. If T_1, T_2 : R^p ⇉ R^p and λ ∈ R, then T_1 + T_2 : R^p ⇉ R^p and λT_1 : R^p ⇉ R^p are defined as

    (T_1 + T_2)(z) := {v_1 + v_2 | v_1 ∈ T_1(z), v_2 ∈ T_2(z)},    (λT_1)(z) := {λv | v ∈ T_1(z)}.

A point-to-set operator T : R^p ⇉ R^p is monotone if

    ⟨z − z', v − v'⟩ ≥ 0   for all (z, v), (z', v') ∈ T,

and it is maximal monotone if it is a maximal element in the family of monotone point-to-set operators in R^p with respect to the partial order of set inclusion.

The subdifferential of a proper closed convex function is a classical example of a maximal monotone operator. Minty's theorem [5] states that if T is maximal monotone and λ > 0, then the proximal map (λT + I)^{-1} is a point-to-point nonexpansive operator with domain R^p.

The monotone inclusion problem is: given T : R^p ⇉ R^p maximal monotone,

    find z such that 0 ∈ T(z).    (6)

Rockafellar's PPM [13] generates, for any starting z_0 ∈ R^p, a sequence (z_k) by the approximate rule

    z_k ≈ (λ_k T + I)^{-1}(z_{k-1}),

where (λ_k) is a sequence of strictly positive stepsizes. Rockafellar proved [13] that if (6) has a solution and

    ‖z_k − (λ_k T + I)^{-1}(z_{k-1})‖ ≤ e_k,    Σ_{k=1}^∞ e_k < ∞,    inf_k λ_k > 0,    (7)

then (z_k) converges to a solution of (6). In each step of the PPM, computation of the proximal map (λT + I)^{-1}(z̄) amounts to solving the proximal subproblem

    0 ∈ λ T(z) + z − z̄,

a regularized inclusion problem which, although well-posed, is almost as hard as (6). From this fact stems the necessity of using approximations of the proximal map, for example, as prescribed in (7). Moreover, since each new iterate is, hopefully, just a better approximation to the solution than the old one, if it were computed with high accuracy, then the computational cost of each iteration would be too high or even prohibitive, and this would impair the overall performance of the method or even make it infeasible. So, it seems natural to try to improve Rockafellar's PPM by devising a variant of this method that accepts a relative error tolerance and in which the progress of the iterates towards the solution set can be estimated. In the next subsection we discuss the hybrid proximal extragradient (HPE) method, a variant of the PPM which aims to satisfy these goals.

Enlargements of maximal monotone operators and the hybrid proximal extragradient method

The HPE method [16, 17] is a modification of Rockafellar's PPM wherein: (a) the proximal subproblem, in each iteration, is to be solved within a relative error tolerance, and (b) the update rule is modified so as to guarantee that the next iterate is closer to the solution set by a quantifiable amount. An additional feature of (a) is that, in some sense, errors in the inclusion of the proximal subproblems are allowed. Recall that the ε-enlargement [2] of a maximal monotone operator T : R^p ⇉ R^p is

    T^{[ε]}(z) := {v | ⟨z − z', v − v'⟩ ≥ −ε  for all (z', v') ∈ T},    z ∈ R^p, ε ≥ 0.    (8)

From now on in this section T : R^p ⇉ R^p is a maximal monotone operator. The r-HPE method [20] for the monotone inclusion problem (6) proceeds as follows: choose z_0 ∈ R^p and σ ∈ [0, 1); for i = 1, 2, ... compute λ_i > 0 and (z̃_i, v_i, ε_i) ∈ R^p × R^p × R_+ such that

    v_i ∈ T^{[ε_i]}(z̃_i),    ‖λ_i v_i + z̃_i − z_{i-1}‖² + 2 λ_i ε_i ≤ σ² ‖z̃_i − z_{i-1}‖²,

choose τ_i ∈ (0, 1] and set

    z_i = z_{i-1} − τ_i λ_i v_i.    (9)

In practical applications, each problem has a particular structure which may render feasible the computation of λ_i, z̃_i, v_i, and ε_i as prescribed above. For example, T may be Lipschitz continuous, it may be differentiable, or it may be a sum of an operator whose proximal map is easily computable with others having some of these

5 properties. Prescriptions for computing λ i, z i, v i, and ε i under each one of these assumptions were presented in [6, 7, 8, 9, 15, 16, 19, 1]. Computation of λt I 1 z is equivalent to the resolution of an inclusion-equation system: z = λt I 1 z v T z, λv z z = 0. Whence, the error criterion in the first line of 9 relaxes both the inclusion and the equality at the right-hand side of the above equivalence. Altogether, each r-hpe iteration consists in: 1 solving with a relative error tolerance a proximal inclusion-equation system; updating z i 1 to z i by means of an extragradient step, that is, using v i T [εi] z i. In the remainder part of this section we present some convergence properties of the r-hpe method which were essentially proved in [14] and revised in []. The next proposition shows that z i is closer than z i 1 to the solution set with respect to the square of the norm, by a quantifiable amount, and present some useful estimations. Proposition.1 [, Proposition.]. For any i 1 and z T 1 0, a 1 σ z i z i 1 λ i v i 1 σ z i z i 1 and λ i ε i σ z i z i 1 ; b z z i 1 z z i τ i 1 σ z i z i 1 z z i ; c z z 0 z z i 1 σ i j=1 τ j z j z j 1 ; d z z i z z i 1 / 1 σ and z i z i 1 z z i 1 / 1 σ. The aggregate stepsize Λ i and the ergodic sequences z a i, ṽa i, εa i associated with the sequences λ i, z i, v i, and ε i are, respectively, Λ i := i τ j λ j, j=1 z a i := 1 Λ i i τ j λ j z j, j=1 ε a i := 1 Λ i i j=1 v a i := 1 Λ i i τ j λ j v j, j=1 τ j λ j ε j z j z a i, v j v a i. 10 Next we present the pointwise and ergodic iteration-complexities of the large-step r-hpe method, i.e., the r-hpe method with a large-step condition [7, 8]. We also assume that the sequence of relaxation parameters τ i is bounded away from zero. Theorem. [, Theorem.4]. If d 0 is the distance from z 0 to T 1 0 and then, for any i 1, λ i z i z i 1 η > 0, τ i τ > 0 i = 1,,... a there exists j {1,..., i} such that v j T [εj] z j and v j d 0 iτ1 ση, ε j σ d 3 0 iτ 3/ 1 σ 3/ η ; b v a i T [εa i ] z a i, va i d 0 iτ 3/ 1 σ η, and εa i d 3 0 iτ 3/ 1 σ η. 5

6 Remark. We mention that the inclusion in Item a of Theorem. is in the enlargement of T which appears in the inclusion in 9. To be more precise, in some applications the operator T may have a special structure, like for instance T = S N X, where S is point-to-point and N X is the normal cone operator of a closed convex set X, and the inclusion in 9, in this case, is v i S N [εi] X such a case, Item a would guarantee that v j S N [εj] X for the Item b. The next result was proved in [18, Corollary 1] Lemma.3. If z R p, λ > 0, and v T [ε] z, then z i, which is stronger than v i T [εi] z i. In z j. Unfortunately, the observation is not true λv z z λε z λt I 1 z λv z λt I 1 z. 3 The smooth convex programming problem Consider the smooth convex optimization problem 1, i.e., P minfx s.t. gx 0, 11 where f : R n R and g = g 1,..., g m : R n R m. From now on we assume that: O.1 f, g 1,..., g m are convex C functions; O. the Hessians of f and g 1,..., g m are Lipschitz continuous with Lipschitz constants L 0 and L 1,..., L m, respectively, with L i 0 for some i 1; O.3 there exists x, y R n R m satisfying Karush-Kuhn-Tucker conditions for 11, fx gxy = 0, gx 0, y 0, y, gx = 0. 1 The canonical Lagrangian of problem 11 L : R n R m R and the corresponding saddle-point operator S : R n R m R n R m are, respectively, [ ] [ ] x L x, y fx gxy L x, y := fx y, gx, Sx, y := =. 13 y L x, y gx The normal cone operator of R n R m, N R n R m : Rn R m R n R m, is the subdifferential of the indicator function of this set δ R n R m : Rn R m R, that is, { δ R n R m x, y := 0, if y 0; otherwise, Next we review some reformulations of 1. N R n R m := δ R n R m. 14 Proposition 3.1. The point-to-set operator S N R n R m the following conditions are equivalent: is maximal monotone and for any x, y Rn R m a fx gxy = 0, gx 0, y 0, and y, gx = 0; b x, y is a solution of the saddle-point problem max y R m min x R n fx y, gx ; c x, y is a solution of the complementarity problem x, y R n R m ; w R m ; Sx, y 0, w = 0; y, w 0; y, w = 0; 6

(d) (x, y) is a solution of the monotone inclusion problem 0 ∈ (S + N_{R^n × R^m_+})(x, y).

Next we review some reformulations of the saddle-point problem in (2).

Proposition 3.2. Take (x̊, ẙ) ∈ R^n × R^m and λ > 0. The following conditions are equivalent:

(a) (x, y) is the solution of the regularized saddle-point problem

    min_{x ∈ R^n} max_{y ∈ R^m} f(x) + ⟨y, g(x)⟩ + (1/(2λ)) [ ‖x − x̊‖² − ‖y − ẙ‖² ];

(b) (x, y) is the solution of the regularized complementarity problem

    (x, y) ∈ R^n × R^m;  ∃ w ∈ R^m:  λ(S(x, y) − (0, w)) + (x, y) − (x̊, ẙ) = 0;  y ≥ 0, w ≥ 0;  ⟨y, w⟩ = 0;

(c) (x, y) = (λ(S + N_{R^n × R^m_+}) + I)^{-1}(x̊, ẙ).

It follows from Propositions 3.1 and 3.2 that (1) is equivalent to the monotone inclusion problem 0 ∈ (S + N_{R^n × R^m_+})(x, y) and that (2) is the PPM iteration for this inclusion problem. Therefore, the convergence analysis of Rockafellar's PMM follows from Rockafellar's convergence analysis of the PPM.

4 An error measure for regularized saddle-point problems

We will present a modification of Rockafellar's PMM which uses approximate solutions of the regularized saddle-point problem (2) satisfying a relative error tolerance. To that effect, in this section we define a generic instance of problem (2), define an error measure for approximate solutions of this generic instance, and analyze some properties of the proposed error measure.

Consider, for λ > 0 and (x̊, ẙ) ∈ R^n × R^m, a generic instance of the regularized saddle-point problem to be solved in each iteration of Rockafellar's PMM,

    min_{x ∈ R^n} max_{y ∈ R^m} f(x) + ⟨y, g(x)⟩ + (1/(2λ)) [ ‖x − x̊‖² − ‖y − ẙ‖² ].    (15)

Define, for λ ∈ R and (x̊, ẙ) ∈ R^n × R^m,

    Ψ_{S,x̊,ẙ,λ} : R^n × R^m → R,    Ψ_{S,x̊,ẙ,λ}(x, y) := min_{w ∈ R^m_+} ‖λ(S(x, y) − (0, w)) + (x, y) − (x̊, ẙ)‖² + 2λ⟨y, w⟩.    (16)

For λ > 0, this function is trivially an error measure for the complementarity problem in Proposition 3.2(b), a problem which is equivalent to (15) by Proposition 3.2(a); hence, Ψ_{S,x̊,ẙ,λ}(x, y) is an error measure for (15). In the context of complementarity problems, the quantity ⟨y, w⟩ in (16) is referred to as the complementarity gap.

Next we show that the complementarity gap is related to the ε-subdifferential of δ_{R^n × R^m_+} and to the ε-enlargement of the normal cone operator of R^n × R^m_+. Direct use of (8) and of the definition of the ε-subdifferential [1] yields, for every (x, y) ∈ R^n × R^m_+ and ε ≥ 0,

    ∂_ε δ_{R^n × R^m_+}(x, y) = N^{[ε]}_{R^n × R^m_+}(x, y) = { (0, −w) | w ∈ R^m_+, ⟨y, w⟩ ≤ ε }.    (17)
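Since (16) is a bound-constrained convex quadratic minimization in w, it can be evaluated directly with an off-the-shelf solver. The Python sketch below simply mirrors definition (16) under the assumption that the gradient of f, the Jacobian of g, and g itself are available as callables; all names are illustrative and the computation is not part of the original development.

    import numpy as np
    from scipy.optimize import minimize

    def S(x, y, grad_f, jac_g, g):
        # saddle-point operator S(x, y) = (grad f(x) + jac_g(x)^T y, -g(x)), cf. (13)
        return np.concatenate([grad_f(x) + jac_g(x).T @ y, -g(x)])

    def Psi(x, y, x0, y0, lam, grad_f, jac_g, g):
        # error measure (16): minimize over w >= 0 of
        #   || lam*(S(x, y) - (0, w)) + (x, y) - (x0, y0) ||^2 + 2*lam*<y, w>
        Sxy = S(x, y, grad_f, jac_g, g)
        shift = np.concatenate([x - x0, y - y0])
        def obj(w):
            r = lam * (Sxy - np.concatenate([np.zeros_like(x), w])) + shift
            return r @ r + 2.0 * lam * (y @ w)
        res = minimize(obj, np.zeros(y.size), method="L-BFGS-B",
                       bounds=[(0.0, None)] * y.size)
        return res.fun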

8 Since argmin λ Sx, y 0, w x, y x, ẙ λ y, w = gx λ 1 ẙ, 18 w R m it follows from definition 16 that for any x, y R n R m. Ψ S, x,ẙ,λ x, y = λ fx gxy x x y λgx ẙ y, λgx ẙ = λsx, y x, y x, ẙ λgx ẙ 19 Lemma 4.1. If λ > 0, z = x, ẙ R n R m, z = x, y R m R m and w, v, ε are defined as then a 0, w ε δ R n R m z = N[ε] R n R z; m w := gx λ 1 ẙ, v := Sz 0, w, ε := y, w, b v S N [ε] R n R z S N m Rn R m [ε] z, λv z z λε = Ψ S, z,λ z; c z λs N Rn R m I 1 z ΨS, z,λ z. Proof. Item a follows trivially from the definitions of w and ε, and 17. The first inclusion in item b follows from the definition of v and item a; the second inclusion follows from direct calculations and 8; the identity in item b follows from the definitions of w and ε, 16 and 18. Finally, item c follows from item b and Lemma.3. Now we will show how to update λ so that Ψ S, z,λ x, y does not increase when z is updated like z k 1 is updated to z k in 9. Proposition 4.. Suppose that λ > 0, z = x, ẙ R n R m, z = x, y R n R m and define For any τ [0, 1], w := gx λ 1 ẙ, v := Sz 0, w, zτ := z τλv =: xτ, ẙτ. xτ = x τλ fx gxy, ẙτ = ẙ τλgx gx λ 1 ẙ, Ψ S, zτ,1 τλ z Ψ S, z,λ z. If, additionally, ẙ 0 then, for any τ [0, 1], ẙτ 0. Proof. Direct use of the definitions of w, v and zτ yields the expressions for xτ and ẙτ as well as the identity 1 τλ Sz 0, w z zτ = λ Sz 0, w z z, which, in turn, combined with 16 gives, for any τ [0, 1], Ψ S, zτ,1 τλ z 1 τλ Sz 0, w z zτ 1 τλ y, w = λ Sz 0, w z z 1 τλ y, w Ψ S, z,λ z, where the second inequality follows from 16, 18, the assumption τ [0, 1] and the definition of w. To prove the second part of the proposition, observe that, for any τ [0, 1], ẙτ is a convex combination of ẙ and ẙ1 = λgx ẙ. 8

9 The next lemma and the next proposition provide quantitative and qualitative estimations of the dependence of Ψ S, x,ẙ,λ x, y on λ. Lemma 4.3. If ψλ := Ψ S, z,λ z for λ R, where z = x, ẙ R n R m, and z = x, y R n R m, then a ψ is convex, differentiable and piecewise quadratic; b d dλ ψλ = Sz, λsz 0, w z z where w = gx λ 1 ẙ ; c ψλ λsz z z ; d lim λ ψλ < if and only if x, y is a solution of 1. Proof. The proof follows trivially from 19. Proposition 4.4. If z = x, ẙ R n R m, z = x, y R n R m and 0 < µ λ then Ψ S, z,λ z λ Ψ S, z,µ z λ µ z z. µ µ Proof. Let w := gx µ 1 ẙ and r µ := µ Sz 0, w z z, r λ := λ Sz 0, w z z. It follows from the latter definitions, 16 and 18 that Ψ S, z,µ z = r µ µ y, w and Ψ S, z,λ z r λ λ y, w = λ µ r µ µ λ z z µ λ µ y, w µ λ rµ µ y, w λ λ µ µ µ µ r µ z z λ µ z z µ λ Ψ S, z,µz λ Ψ S, z,µ z λ µ z z µ µ µ λ µ z z, µ where the first inequality follows from the assumption 0 < µ λ. The conclusion follows trivially from the latter inequality. 5 Quadratic approximations of the smooth convex programming problem In this section we use second-order approximations of f and g around a point x R n to define a second-order approximation of problem 11 around such a point. We also define a local model of, where second-order approximations of f and g around x substitute these functions, and give conditions on a point x, ỹ under which a solution of the local model is a better approximation to the solution of than this point. For x R n, let f [ x] and g [ x] = g 1,[ x],..., g m,[ x] be the quadratic approximations of f and g = g 1,..., g m around x, that is, f [ x] x := f x f x T x x 1 x xt f xx x g i,[ x] x := g i x g i x T x x 1 x xt g i xx x, i = 1,..., m. 0 9

10 We define P [ x ] minf [ x] x s.t. g [ x] x 0 1 as the quadratic approximation of problem 11 around x. The canonical Lagrangian of 1, L [ x] : R n R m R, and the corresponding saddle-point operator, S [ x] : R n R m R n R m, are, respectively, L [ x] x, y := f [ x] x y, g [ x] x, [ ] x L S [ x] x, y := [ x] x, y = y L [ x] x, y [ f[ x] x g [ x] xy g [ x] x ]. Since L [ x] x, y is a 3rd-degree polynomial in x, y and the components of S [ x] x, y are nd-degree polynomials in x, y, neither L [ x] is a quadratic approximation of L nor S [ x] is a linear approximation of S; nevertheless, this 3-rd degree functional and that componentwise quadratic operator are, respectively, the canonical Lagrangian and the associated saddle-point operator of a quadratic approximation of P around x, namely, P [ x]. So, we may say that L [ x] and S [ x] are approximations of L and S based on quadratic approximations of f and g. Each iteration of Rockafellar s PMM applied to problem P [ x] requires the solution of an instance of the generic regularized saddle-point problem min max x R n y R m f [ x] x y, g [ x] x 1 λ x x y ẙ, 3 where λ > 0 and x, ẙ R n R m. It follows from Proposition 3. that this problem is equivalent to the complementarity problem x, y R n R m ; λs [ x] x, y x, y x, ẙ = 0, w; y, w 0; y, w = 0. To analyze the error of substituting S by S [ x] we introduce the notation: L g = L 1,..., L m ; y 1,..., y m = y 1,..., y m, y 1,..., y m R m. 4 Lemma 5.1. For any x, y R n R m and x R n Sx, y S[ x] x, y L 0 L g, y x x L g 6 x x 3. Proof. It follows from triangle inequality, 0 and assumption O. that and x L [ x] x, y x L x, y f [ x] x fx g [ x] x gxy m L 0 y i L i x x g [ x] x gx = m gi,[ x] x g i x m Li x x 3 6 i=1 i=1 To end the proof, use the above inequalities, 13 and. i=1 = L g 6 x x 3. 10
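The models in (20) are ordinary second-order Taylor expansions, so they are cheap to form once the gradient and Hessian at the reference point are available. A small helper of the following kind (illustrative names; value, gradient and Hessian are assumed to be supplied as callables) builds one such model; the models f_[x̄] and g_{i,[x̄]} of problem P[x̄] are obtained by applying it to f and to each component g_i.

    import numpy as np

    def quadratic_model(fun, grad, hess, x_bar):
        # second-order Taylor model around x_bar, as in (20):
        #   q(x) = fun(x_bar) + <grad(x_bar), x - x_bar> + 0.5*(x - x_bar)^T hess(x_bar) (x - x_bar)
        f0, g0, H0 = fun(x_bar), grad(x_bar), hess(x_bar)
        def q(x):
            d = x - x_bar
            return f0 + g0 @ d + 0.5 * d @ (H0 @ d)
        return q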

11 Define, for x, ẙ R n R m and λ > 0, N θ x, ẙ, λ := x, y Rn R m where ρ = L0 L g, y λ L g ρ 3 Ψ S, x,ẙ,λ x, y ρ θ,. 5 The next proposition shows that, if x, ỹ N θ x, ẙ, λ with θ 1/4, then the solution of the regularized saddle-point problem 3 is a better approximation than x, ỹ to the solution of the regularized saddle-point problem 15, with respect to the merit function Ψ S,x,y,λ. Proposition 5.. If λ > 0, x, ẙ R n R m, x, ỹ R n R m and x, y = arg min x R n max y R m f [ x] x y, g [ x] x 1 x x y ẙ, λ then x, ỹ x, y Ψ S, x,ẙ,λ x, ỹ. If, additionally, x, ỹ N θ x, ẙ, λ with 0 θ 1/4, then Ψ S, x,ẙ,λ x, y θ Ψ S, x,ẙλ x, ỹ and x, y N θ x, ẙ, λ. Proof. Applying Lemma 4.1 to 3 and using we conclude that x, ỹ x, y Ψ S[ x], x,ẙ,λ x, ỹ. It follows from 0,, and 13 that S [ x] x, ỹ = S x, ỹ, which, combined with 16, implies that Ψ S[ x], x,ẙ,λ x, ỹ = Ψ S x,ẙ,λ x, ỹ. To prove the first part of the proposition, combine this result with the above inequality. To simplify the proof of the second part of the proposition, define ρ = Ψ S[ x], x,ẙ,λ x, ỹ, w = gx λ 1 ẙ, r = λ Sx, y 0, w x, y x, ẙ. Since x, y is the solution of 3, λ S [ x] x, y 0, w x, y x, ẙ = 0, y, w 0, y, w = 0. Therefore, r = λsx, y S [ x] x, y. Using also 16, Lemma 5.1 and the first part of the proposition we conclude that Ψ S, x,ẙ,λ x, y r λ y, w = λ Sx, y S[ x] x, y L0 L g, y λ L g ρ 6 ρ. Moreover, it follows from the Cauchy-Schwarz inequality, the first part of the proposition and the definition of ρ that L g, y L g, ỹ L g y ỹ L g, ỹ L g ρ. 11

12 Therefore L0 L g, ỹ Ψ S, x,ẙ,λ x, y λ L g ρ ρ. 3 Suppose that x, ỹ N θ x, ẙ, λ with 0 θ 1/4. It follows trivially from this assumption, 5, the definition of ρ, and the above inequality, that the inequality in the second part of the proposition holds. To end the proof of the second part, let ρ = Ψ S[ x], x,ẙ,λx, y. Since ρ θ ρ ρ/4 and L g, y L g, ỹ L g ρ, L0 L g, y λ L g ρ 3 L0 L g, ỹ L g ρ ρ λ L0 L g, ỹ = λ L g ρ 3 L g ρ 3 4 θ ρ θ, where the last inequality follows from the assumption x, ỹ N θ x, ẙ, λ and 5. To end the proof use the definition of ρ, the above inequality and 5. In view of the preceding proposition, for a given x, ẙ R n R m and θ > 0, it is natural to search for λ > 0 and x, y N θ x, ẙ, λ. Proposition 5.3. For any x, ẙ R n R m, x, y R n R m, and θ > 0 there exists λ > 0 such that x, y N θ x, ẙ, λ for any λ 0, λ]. Proof. The proof follows from the definition 5 and Lemma 4.3c. The neighborhoods N θ as well as the next defined function will be instrumental in the definition and analysis of Algorithm 1, to be presented in the next section. Definition 5.4. For α > 0 and y R m, ρy, α stands for the largest root of L0 L g, y L g ρ ρ = α. 3 Observe that for any λ, θ > 0 and x, ẙ R n R m, { } N θ x, ẙ, λ = x, y R n R m Ψ S, x,ẙ,λ x, y ρ y, θ/λ. 6 Moreover, since ρy, α is the largest root of a quadratic it follows that it has an explicit formula. 6 A relaxed hybrid proximal extragradient method of multipliers based on second-order approximations In this section we consider the smooth convex programming problem 11 where assumptions O.1, O. and O.3 are assumed to hold. Aiming at finding approximate solutions of the latter problem, we propose a new method, called relaxed hybrid proximal extragradient method of multipliers based on quadratic approximations hereafter rhpemm-o, which is a modification of Rockafellar s PMM in the following senses: in each iteration either a relaxed extragradient step is executed or a second order approximation of 15 is solved. More specifically, each iteration k uses the available variables x k 1, y k 1, x k, ỹ k R n R m, and λ k > 0 θρ to generate x k, y k, x k1, ỹ k1 R n R m, and λ k1 > 0 1

in one of two ways. Either (1) (x_k, y_k) is obtained from (x_{k-1}, y_{k-1}) via a relaxed extragradient step,

    (x_k, y_k) = (x_{k-1}, y_{k-1}) − τ λ_k v_k,    v_k ∈ (S + N_{R^n × R^m_+})^{[ε_k]}(x̃_k, ỹ_k),

in which case (x̃_{k+1}, ỹ_{k+1}) = (x̃_k, ỹ_k) and λ_{k+1} < λ_k; or (2) (x_k, y_k) = (x_{k-1}, y_{k-1}) and the point (x̃_{k+1}, ỹ_{k+1}) is the outcome of one iteration at (x_k, y_k) of Rockafellar's PMM for problem (21) with x̄ = x̃_k and λ = λ_{k+1}.

Next we present our algorithm, where N_θ, f_[x̄], g_[x̄] and ρ(y, α) are as in (25), (20), and Definition 5.4, respectively.

Algorithm 1: Relaxed hybrid proximal extragradient method of multipliers based on 2nd-order approximations (r-HPEMM-o)

initialization: choose (x_0, y_0) = (x̃_1, ỹ_1) ∈ R^n × R^m_+, 0 < σ < 1, 0 < θ ≤ 1/4; define h := positive root of 2θ(1 + h)(1 + h(1 + 1/σ)) = 1 and τ := h/(1 + h); choose λ_1 > 0 such that (x̃_1, ỹ_1) ∈ N_θ(x_0, y_0, λ_1) and set k ← 1

(1) if ρ(ỹ_k, θ²/λ_k) ≤ σ ‖(x̃_k, ỹ_k) − (x_{k-1}, y_{k-1})‖ then
(2)   λ_{k+1} := (1 − τ) λ_k;
(3)   (x̃_{k+1}, ỹ_{k+1}) := (x̃_k, ỹ_k);
(4)   x_k := x_{k-1} − τ λ_k [ ∇f(x̃_k) + ∇g(x̃_k) ỹ_k ],   y_k := y_{k-1} + τ [ λ_k g(x̃_k) + (λ_k g(x̃_k) + y_{k-1})_− ];
(5) else
(6)   λ_{k+1} := (1 − τ)^{-1} λ_k;
(7)   (x_k, y_k) := (x_{k-1}, y_{k-1});
(8)   (x̃_{k+1}, ỹ_{k+1}) := argmin_{x ∈ R^n} max_{y ∈ R^m} f_[x̃_k](x) + ⟨y, g_[x̃_k](x)⟩ + ( ‖x − x_k‖² − ‖y − y_k‖² ) / (2 λ_{k+1});
(9) end if
(10) set k ← k + 1 and go to step 1;

To simplify the presentation of Algorithm 1, we have omitted a stopping test. First, we discuss its initialization. In the absence of additional information on the dual variables y, one shall consider the initialization

    (x_0, y_0) = (x̃_1, ỹ_1) = (x̄, 0),    (27)

where x̄ is close to the feasible set. If (x̄, ȳ) ∈ R^n × R^m_+, an approximate solution of (12), is available, one can do a warm start by setting (x_0, y_0) = (x̃_1, ỹ_1) = (x̄, ȳ). Note that h > 0 and 0 < τ < 1. Existence of λ_1 > 0 as prescribed in this step follows from the inclusion (x̃_1, ỹ_1) ∈ R^n × R^m_+ and from Proposition 5.3. Moreover, if we compute λ = λ_1 > 0 satisfying the inequality

    (L_g ‖S(x_0, y_0)‖² / 3) λ³ + (L_0 / 2 + ‖(L_g, y_0)‖) ‖S(x_0, y_0)‖ λ² ≤ θ²,

where the operator S is defined in (13), then, using Lemma 4.3(c) and Definition 5.4, we find

    Ψ_{S,x_0,y_0,λ_1}(x_0, y_0) ≤ λ_1² ‖S(x_0, y_0)‖² ≤ ρ(y_0, θ²/λ_1)²,

which, in turn, combined with the fact that (x̃_1, ỹ_1) = (x_0, y_0), gives the inclusion in the initialization of Algorithm 1.

The computational cost of the block of steps [2, 3, 4] is negligible. The initialization λ_1 > 0, together with the update of λ_k by step 2 or 6, guarantees that λ_k > 0 for all k. Therefore, the saddle-point problem to be solved in step 8 is strongly convex-concave and hence has a unique solution. The computational burden of the algorithm lies in the computation of the solution of this problem.

We will assume that (x̃_1, ỹ_1) does not satisfy (12), i.e., the KKT conditions for (11), since otherwise we would already have a solution of the KKT system and x̃_1 would be a solution of (11). For the sake of conciseness

14 we introduce, for k = 1,..., the notation z k 1 = x k 1, y k 1, z k = x k, ỹ k, ρ k = ρỹ k, θ /λ k. 8 Since there are two kinds of iterations in Algorithm 1, its is convenient to have a notation for them. Define A := {k N \ {0} ρ k σ z k z k 1 }, B := {k N \ {0} ρ k > σ z k z k 1 }. 9 Observe that in iteration k, either k A and steps, 3, 4 are executed, or k B and steps 6, 7, 8 are executed. Proposition 6.1. For k = 1,..., a z k N θ z k 1, λ k ; b Ψ S,zk 1,λ k z k ρ k. c z k 1 R n R m. Proof. We will use induction on k 1 for proving a. In view of the initialization of Algorithm 1, this inclusion holds trivially for k = 1. Suppose that this inclusion holds for k = k 0. We shall consider two possibilities. i k 0 A: It follows from Proposition 4. and the update rules in steps and 4 that Ψ S,zk0,λ z k0 1 k 0 Ψ S,zk0 1,λ k0 z k0 ρỹ k0, θ /λ k0 ρỹ k0, θ /λ k01 where the second inequality follows from the inclusion z k0 N θ z k0 1, λ k0 and 6; and the third inequality follows from step and Definition 5.4. It follows from the above inequalities and 6 that z k0 N θ z k0, λ k01. By step 3, z k01 = z k0. Therefore, the inclusion of Item a holds for k = k 0 1 in case i. ii k 0 B: In this case, by step 7, z k0 = z k0 1 and, using definition 9, the notation 8, and the assumption that the inclusion in Item a holds for k = k 0 we conclude that z k0 z k0 < ρ k0 /σ, z k0 N θ z k0, λ k0, Ψ S,zk0,λ k0 z k0 ρ k0. Direct use of the definitions of h, τ, and step 6 gives λ k01 = 1 hλ k0. Defining ρ = it follows from the above inequalities and from Proposition 4.4 that, ρ 1 h Ψ S,zk0,λ k0 z k0 h z k0 z k0 1 h1 1/σρ k0. Ψ S,zk0,λ k0 1 z k 0, Therefore, λ k01 L0 L g, ỹ k0 L g ρ ρ 1 h 1 h σ L0 L g, ỹ k0 λ k0 = 1 h 1 h L g ρ k0 ρ k σ θ = θ, where we also have used Definition 5.4 and the definition of h in the initialization of Algorithm 1. It follows from the above inequality, the definition of ρ and 5 that z k0 N θ z k0, λ k01. Using this inclusion, step 8 and Proposition 5. we conclude that the inclusion in Item a also holds for k = k 0 1. Item b follows trivially from Item a, 6 and 8. Item c follows from the fact that y 0 0, the definitions of steps 3, 4, and the last part of Proposition
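To summarize the control flow of Algorithm 1, the following Python sketch reproduces its outer loop at a purely structural level. It is not part of the paper: ρ(y, α), the parameter τ from the initialization, and the solver for the second-order saddle-point subproblem of step 8 (here solve_model) are assumed to be supplied by the caller, and all names are illustrative.

    import numpy as np

    def r_hpemm_o(x0, y0, lam1, sigma, tau, theta, rho, grad_f, jac_g, g, solve_model, n_iter=50):
        # structural sketch of Algorithm 1; solve_model(x_tilde, x, y, lam) is expected to
        # return the unique solution of the regularized saddle-point problem of step 8
        # for the quadratic model built at x_tilde, with prox center (x, y) and stepsize lam
        x, y = x0.copy(), y0.copy()        # (x_{k-1}, y_{k-1})
        xt, yt = x0.copy(), y0.copy()      # (x_tilde_k, y_tilde_k)
        lam = lam1
        for _ in range(n_iter):
            dist = np.linalg.norm(np.concatenate([xt - x, yt - y]))
            if rho(yt, theta**2 / lam) <= sigma * dist:
                # steps 2-4: relaxed extragradient step, the stepsize shrinks
                x_new = x - tau * lam * (grad_f(xt) + jac_g(xt).T @ yt)
                y_new = y + tau * (np.maximum(y + lam * g(xt), 0.0) - y)  # multiplier update of step 4, projected form
                x, y, lam = x_new, y_new, (1.0 - tau) * lam
                # (x_tilde, y_tilde) are kept, as in step 3
            else:
                # steps 6-8: enlarge the stepsize and solve the second-order model of (2)
                lam = lam / (1.0 - tau)
                xt, yt = solve_model(xt, x, y, lam)
        return x, y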

15 Algorithm 1 as a realization of the large-step r-hpe method In this subsection, we will show that a subsequence generated by Algorithm 1 happens to be a sequence generated by the large-step r-hpe method described in 9 for solving a monotone inclusion problem associated with 11. This result will be instrumental for evaluating in the next section the iteration-complexity of Algorithm 1. In fact, we will prove that iterations with k A, where steps, 3, 4 are executed, are large-step r-hpe iterations for the monotone inclusion problem 0 T z := where the operator S is defined in 13. Define, for k = 1,,..., S N Rn R m z, z = x, y R n R m, 30 w k = g x k λ 1 k y k 1, v k = S z k 0, w k, ε k = ỹ k, w k, 31 where z k is defined in 8. We will show that, whenever k A, the variables z k, v k, and ε k provide an approximated solution of the proximal inclusion-equation system v S N R n R m z, λ kv z z k 1 = 0, as required in the first line of 9. We divided the proof of this fact in two parts, the next proposition and the subsequent lemma. Proposition 6.. For k = 1,,..., a 0, w k εk δ R n R m z k = N [ε k] R n R z m k ; b v k S N [ε k] R n R z m k S N R n R m [εk] z k ; c λ k v k z k z k 1 λ k ε k = Ψ S,zk 1,λ k z k ρ k. Proof. Items a, b and the equality in Item c follow from definitions 8, 31 and Items a and b of Lemma 4.1. The inequality in Item c follows from Proposition 6.1b. Define and observe that A k := {j N j k, steps, 3, 4 are executed at iteration j}, B k := {j N j k, steps 6, 7, 8 are executed at iteration j} A = A k, B = B k. k N k N 3 From now on, #C stands for the number of elements of a set C. To further simplify the converge analysis, define I = {i N 1 i #A}, k 0 = 0, k i = i-th element of A. 33 Note that k 0 < k 1 < k, A = {k i i I} and, in view of 9 and step 7 of Algorithm 1, In particular, we have z k = z ki 1, for k i 1 k < k i, i I. 34 z ki 1 = z ki 1 i I. 35 In the next lemma we show that for indexes in the set A, Algorithm 1 generates a subsequence which can be regarded as a realization of the large-step r-hpe method described in 9, for solving the problem

16 Lemma 6.3. The sequences z ki i I, z ki i I, v ki i I, ε ki i I, λ ki i I are generated by a realization of the r-hpe method described in 9 for solving 30, that is, 0 S N R n R m z, in the following sense: for all i I, v ki S N [ε k i ] R n R z m ki S N R n R m [ε k ] i z ki, λ ki v ki z ki z ki 1 λ ki ε ki ρ k i σ z ki z ki 1, z ki = z ki 1 τλ ki v ki. 36 Moreover, if I is finite and i M := max I then z k = z kim for k k im. Proof. The two inclusions in the first line of 36 follow trivially from Proposition 6.b. The first inequality in the second line of 36 follows from 35 and Proposition 6.c; the second inequality follows from the inclusion k i A, 9, step 1 of Algorithm 1 and 35. The equality in the last line of 36 follows from the inclusion k i A, 9 step 4 of Algorithm 1, 35 and 31. Finally, the last statement of the lemma is a direct consequence of 8, 9 and step 7 of Algorithm 1. As we already observed in Proposition 3.1, 30 and 1 are equivalent, in the sense that both problems have the same solution set. From now on we will use the notation K for this solution set, that is, 1 K = S N Rn R m 0 37 = {x, y R n R m fx gxy = 0, gx 0, y 0, y, gx = 0}. We assumed in O.3 that this set is nonempty. Let z = x, y be the projection of z 0 = x 0, y 0 onto K and d 0 the distance from z 0 to K, z K, d 0 = z z 0 = min z K z z To complement Lemma 6.3, we will prove that the large-step condition for the large-step r-hpe method stated in Theorem. is satisfied for the realization of the method presented in Lemma 6.3. Define c := L [ ] 0 L g, y 0 1 1/ σ/3 d 0 L g, η := θ 1 σ σc. 39 Proposition 6.4. Let z K and d 0, and η as in 38, and 39, respectively. For all i I, z z ki d 0, z z ki d 0 1 σ, z k i z ki 1 d 0 1 σ. 40 As a consequence, λ ki z ki z ki 1 η. 41 Proof. Note first that 40 follows from Lemma 6.3, items c and d of Proposition.1, 35 and 38. Using 8, 9, 33 and step 1 of Algorithm 1 we obtain σ z ki z ki 1 ρỹ ki, θ /λ ki i I, which, in turn, combined with the definition of ρ, see Definition 5.4 yields L0 L g, ỹ ki L g σ z ki z ki 1 σ z ki z ki 1 θ 3 λ ki i I. 4 Using the triangle inequality, 38 and the second inequality in 40 we obtain z 0 z ki d 0 z 1 z ki d σ 16

17 Now, using the latter inequality, the fact that z 0 z ki y 0 ỹ ki i I and the triangle inequality we find L g, ỹ ki L g z 0 z ki L g, y 0 d 0 L g σ L g, y 0 i I. 43 To finish the proof of 41, use 35, substitute the terms in the right hand side of the last inequalities in 40 and 43 in the term inside the parentheses in 4 and use Complexity analysis In this section we study the pointwise and ergodic iteration-complexity of Algorithm 1. The main results are essentially a consequence of Lemma 6.3 and Proposition 6.4 which guarantee that the subsequences z ki i I, z ki i I,... can be regarded as realizations of the large-step r-hpe method of Section, for which pointwise and ergodic iteration-complexity results are known. To study the ergodic iteration-complexity of Algorithm 1 we need to define the ergodic sequences associated to λ ki i I, z ki i I, v ki i I and ε ki i I, respectively see 10, namely Define also Λ i := τ i λ kj, j=1 z a i = x a i, ỹ a i := 1 Λ i τ ε a i := 1 Λ i τ i j=1 i λ kj z kj, j=1 v a i := 1 Λ i τ λ kj ε kj z kj z a i, v kj v a i. L x, y := { fx y, gx, y 0, otherwise. i λ kj v kj, j= Observe that that a pair x, y K, i.e., it is a solution of the KKT system 1 if and only if 0, 0 L, y L x, x, y. Since 30 and 1 are equivalent, the latter observation leads us to consider in this section the notion of approximate solution for 30 which consists in: for given tolerances δ > 0 and ε > 0 find x, y, v, ε such that v ε L, y L x, x, y, v δ, ε ε. 46 We will also consider as approximate solution of 30 any triple x, y, p, q, ε such that p, q δ, ε ε and or p = fx gxy, gx q 0, y 0, y, gx q = ε 47 p x,ε L x, y, gx q 0, y 0, y, gx q ε, 48 where ε := ε y, gx q. It is worthing to compare the latter two conditions with 1 and also note that whenever ε = 0 then 48 reduces to 47, that is, the latter condition is a special case of 48. Moreover, as Theorems 7.3 and 7.4 will show, 47 and 48 are related to the pointwise and ergodic iteration-complexity of Algorithm 1, respectively. We start by studying rates of convergence of Algorithm 1. 17

18 Theorem 7.1. Let z ki i I = x ki, ỹ ki i I, v ki i I and ε ki i I be subsequences generated by Algorithm 1 where the the set of indexes I is defined in 33. Let also z a i i I = x a i, ỹa i i I, v a i i I and ε a i i I be as in 44. Then, for any i I, a [pointwise] there exists j {1,..., i} such that v kj εkj L, ỹkj L x kj, x kj, ỹ kj 49 and v kj d 0 iτ1 ση, ε k j σ d 3 0 iτ 3/ 1 σ 3/ η ; 50 b [pointwise] there exists j {1,..., i} and p j, q j R n R m such that and c [ergodic] we have and p j = f x kj g x kj ỹ kj, g x kj q j 0, ỹ kj 0, ỹ kj, g x kj q j = ε kj 51 p j, q j d 0 iτ1 ση, ε σ d 3 0 k j iτ 3/ 1 σ 3/ η ; 5 v a i ε a i L, ỹ a i L x a i, x a i, ỹ a i 53 v a i d 0 iτ 3/ 1 σ η, εa i d 3 0 iτ 3/ 1 σ η ; 54 d [ergodic] there exists p a i, qa i Rn R m such that and p a i x,ε i L x a i, ỹ a i, g x a i q a i 0, ỹ a i 0, ỹ a i, g x a i q a i ε a i 55 p a i, q a i where ε i := εa i ỹa i, g xa i qa i. d 0 iτ 3/ 1 σ η, εa i d 3 0 iτ 3/ 1 σ η, 56 Proof. We first prove Items a and c. Using Lemma 6.3, the last statement in Proposition 6.4 and 30 we have that Items a and b of Theorem. hold for the sequences z ki i I, v ki i I and ε ki i I. As a consequence, to finish the proof of Items a and c of the theorem, it remains to prove the inclusions 49 and 53. To this end, note first that from the equivalence between Items a and c of Proposition A.1 with ε = 0 we have the following equivalence for all i I v ki S N [ε k i ] R n R x m ki, ỹ ki v ki εki L, ỹki L x ki, x ki, ỹ ki. Hence, using the latter equivalence, the first inclusion in the first line of 36, the inclusion in Theorem.a, the remark after the latter theorem, and 30 we obtain 49. Likewise, using an analogous reasoning and Proposition A. we also obtain 53, which finishes the proof of Items a and c. We claim that Item b follows from Item a. Indeed, letting p i, q i := v ki for all i I, using the definition of v kj and ε kj in 31, the definition of S in 13 and the equivalence between Items a and b of Proposition A.1 with ε = 0 we obtain that p j, q j := v kj satisfies 51 and 5. Using an analogous reasoning we obtain that Item d follows from Item c. 18
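The ergodic objects appearing in items (c) and (d) above are plain weighted averages of the stored iterates, with weights τ λ_{k_j} as in (44). A short helper assembling them (illustrative names; the iterates, residuals and errors are assumed to be stored in Python lists of numpy arrays and floats) could read:

    import numpy as np

    def ergodic_means(lams, z_tildes, vs, epss, tau):
        # weighted averages with weights tau * lam_{k_j}, following (44)
        w = tau * np.array(lams)
        Lam = w.sum()
        z_a = sum(wj * zj for wj, zj in zip(w, z_tildes)) / Lam
        v_a = sum(wj * vj for wj, vj in zip(w, vs)) / Lam
        eps_a = sum(wj * (ej + (zj - z_a) @ (vj - v_a))
                    for wj, zj, vj, ej in zip(w, z_tildes, vs, epss)) / Lam
        return z_a, v_a, eps_a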

19 Next we analyze the sequence generated by Algorithm 1 for the set of indexes k B. Algorithm 1 s definition shows that Direct use of Define λ k1 = ρ = λ 1 #Bk #A 1 k λ 1 k τ θ. 58 L L0 0 8 L g θ 3λ 1 In the next proposition we obtain a rate of convergence result for the sequence generated by Algorithm 1 with k B. Proposition 7.. Let ρ k for all k 1 be as in 8 and let also ρ > 0 be as in 58. Then, for all k B, and Moreover, if λ k λ 1 then ρ k ρ. v k εk L, ỹk L x k, x k, ỹ k v k 1 1/σρ k λ k, ε k ρ k λ k. Proof. First note that the desired inclusion follows from Proposition 6.b and the equivalence between items a and c of Proposition A.1. Moreover, by Proposition 6. c we have λ k v k z k z k 1 λ k ε k ρ k k 1. Note that the desired bound on ε k is a direct consequence of the latter inequality. Moreover, this inequality combined with the definition of B see 9 gives λ k v k λ k v k z k z k 1 z k z k 1 1 1/σρ k for all k B, which proves the desired bound on v k. Assume now that λ k λ 1. Using Definition 5.4 and 8 we obtain for all k 1 ρ k = ρỹ k, θ /λ k = λ k L 0 L g, ỹ k θ, L0 L g, ỹ k 8 L g θ 3λ k which, in turn, combined with 58, the assumption that λ k λ 1 and the fact that L g, ỹ k 0 gives ρ k ρ. Next we present the two main results of this paper, namely, the pointwise and ergodic iteration-complexities of Algorithm 1. Theorem 7.3 pointwise iteration-complexity. For given tolerances δ > 0 and ε > 0, after at most { d 0 M := max δτ1 ση, σ 4/3 d } 0 ε /3 τ1 σ η /3 { max log 1 1/σρ/δλ 1, log ρ /ελ 1 } log1/1 τ iterations, Algorithm 1 finds x, y, v, ε satisfying 46 with the property that x, y, p, q, ε where p, q := v also satisfies p = fx gxy, gx q 0, y 0, y, gx q = ε, p, q δ, ε ε

20 Proof. First define M 1 := { d 0 max δτ1 ση, σ 4/3 d } 0 ε /3 τ1 σ η /3 and M := M M The proof is divided in two cases: i #A M 1 and ii #A < M 1. In the first case, the existence of x, y, v, ε resp. x, y, p, q, ε satisfying 46 resp. 60 in at most M 1 iterations follows from Theorem 7.1a resp. Theorem 7.1b. Since M = M 1 M M 1, it follows that, in this case, the number of iterations is not bigger than M. Consider now the case ii and let k 1 be such that #A = #A k = #A k for all k k. As a consequence of the latter property and the fact that #A < M 1 we conclude that if #B k M 1 M, for some k k, then Using the latter inequality, 59 and 61 we find β k := #B k #A k #B k M 1 M. 6 β k M max { log 1 1/σρ/δλ 1, log ρ /ελ 1 }, log1/1 τ which is clearly equivalent to βk 1 log λ 1 1 τ βk 1 log λ 1 1 τ log log ρ 1 1 1/σ ρ log 1 ε log, 63 1 δ. 64 Now using the definition in 6, 63 resp. 64 and 57 we obtain logλ k /[1 1/σ ρ] log1/ δ resp. logλ k / ρ log1/ ε which yields 1 1/σ ρ λ δ ρ resp. ε. k λ k It follows from the latter inequality and Proposition 7. that x, y, v, ε := x k, ỹ k, v k, ε k satisfies 46 and, due to Proposition A.1, that x, y, p, q, ε := x k, ỹ k, v k, ε k satisfies 60. Since the index k has been chosen to satisfy #A k < M 1 and #B k M 1 M we conclude that the total number of iterations is at most M 1 M 1 M = M. Theorem 7.4 ergodic iteration-complexity. For given tolerances δ > 0 and ε > 0, after at most { } M /3 d 4/3 0 := max δ /3 τ η 1 σ, /3d0 /3 ε /3 τ η1 σ /3 { max log 1 1/σρ/δλ 1, log ρ /ελ 1 } log1/1 τ 65 iterations, Algorithm 1 finds x, y, v, ε satisfying 46 with the property that x, y, p, q, ε where p, q := v also satisfies p x,ε L x, y, gx q 0, y 0, y, gx q ε, p, q δ, ε ε, 66 where ε := ε y, gx q. Proof. The proof follows the same outline of Theorem 7.3 s proof. 0

21 A Appendix Proposition A.1. Let x, ỹ R n R m, v = p, q R n R m and ε 0 be given and define w := g x q, ε := ε ỹ, w. 67 The following conditions are equivalent: a v ε L, ỹ L x, x, ỹ; b w 0, ỹ, w ε, p x,ε L x, ỹ; c 0 ε ε and w N [ε ε ] R ỹ, m p x,ε L x, ỹ. Proof. a b. Note that the inclusion in a is equivalent to L x, ỹ L x, y p, x x q, y ỹ ε x, y R n R m, 68 which, in view of 45 and 67, is equivalent to L x, ỹ L x, ỹ inf y 0 w, y p, x x ε x R n. The latter inequality is clearly equivalent to b. b c. Using 17, the fact that ỹ 0 and the definition of ε in 67 we obtain that the first inequality in b is equivalent to ε ε and the second inequality in c. To finish the proof note that the second inequality in b is equivalent to ε 0. Proposition A.. Let X R n and Y R m be given convex sets and Γ : X Y R be a function such that, for each x, y X Y, the function Γ, y Γx, : X Y R is convex. Suppose that, for j = 1,..., i, x j, ỹ j X Y and p j, q j R n R m satisfy p j, q j εj Γ, ỹ j Γ x j, x j, ỹ j. Let α 1,, α i 0 be such that i j=1 α j = 1, and define x a i, ỹ a i = i α j x j, ỹ j, p a i, qi a = j=1 i α j p j, q j, j=1 Then, ε a i 0 and Proof. See [6, Proposition 5.1]. References ε a i = i α j [ε j x j x a i, p j ỹ j ỹi a, q j ]. j=1 p a i, q a i ε a i Γ, ỹ a i Γ x a i, x a i, ỹ a i. [1] A. Brøndsted and R. T. Rockafellar. On the subdifferentiability of convex functions. Proc. Amer. Math. Soc., 16: ,

[2] R. S. Burachik, A. N. Iusem, and B. F. Svaiter. Enlargement of monotone operators with applications to variational inequalities. Set-Valued Anal., 5(2):159–180, 1997.

[3] M. R. Hestenes. Multiplier and gradient methods. In Computing Methods in Optimization Problems–2, edited by L. A. Zadeh, L. W. Neustadt, and A. V. Balakrishnan. Academic Press, New York, 1969.

[4] M. R. Hestenes. Multiplier and gradient methods. Journal of Optimization Theory and Applications, 4:303–320, 1969.

[5] G. J. Minty. Monotone (nonlinear) operators in Hilbert space. Duke Math. J., 29:341–346, 1962.

[6] R. D. C. Monteiro and B. F. Svaiter. Complexity of variants of Tseng's modified F-B splitting and Korpelevich's methods for hemivariational inequalities with applications to saddle point and convex optimization problems. SIAM Journal on Optimization, 21:1688–1720, 2011.

[7] R. D. C. Monteiro and B. F. Svaiter. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM Journal on Optimization, 20:2755–2787, 2010.

[8] R. D. C. Monteiro and B. F. Svaiter. Iteration-complexity of a Newton proximal extragradient method for monotone variational inequalities and inclusion problems. SIAM J. Optim., 22(3):914–935, 2012.

[9] R. D. C. Monteiro and B. F. Svaiter. Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers. SIAM J. Optim., 23(1):475–507, 2013.

[10] M. J. D. Powell. A method for nonlinear constraints in minimization problems. In Optimization, edited by R. Fletcher. Academic Press, New York, 1969.

[11] R. T. Rockafellar. The multiplier method of Hestenes and Powell applied to convex programming. Journal of Optimization Theory and Applications, 12:555–562, 1973.

[12] R. T. Rockafellar. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res., 1(2):97–116, 1976.

[13] R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM J. Control Optimization, 14(5):877–898, 1976.

[14] M. R. Sicre and B. F. Svaiter. An O(1/k^{3/2}) hybrid proximal extragradient primal-dual interior point method for non-linear monotone complementarity problems. Preprint A735/2013, IMPA – Instituto Nacional de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Rio de Janeiro, RJ, Brasil, 2013.

[15] M. V. Solodov and B. F. Svaiter. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: nonsmooth, piecewise smooth, semismooth and smoothing methods (Lausanne, 1997), volume 22 of Appl. Optim. Kluwer Acad. Publ., Dordrecht, 1998.

[16] M. V. Solodov and B. F. Svaiter. A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Anal., 7(4):323–345, 1999.

[17] M. V. Solodov and B. F. Svaiter. A hybrid projection-proximal point algorithm. J. Convex Anal., 6(1):59–70, 1999.

[18] M. V. Solodov and B. F. Svaiter. Error bounds for proximal point subproblems and associated inexact proximal point algorithms. Math. Program., 88, Ser. B:371–389, 2000. Error bounds in mathematical programming (Kowloon, 1998).

[19] M. V. Solodov and B. F. Svaiter. A truly globally convergent Newton-type method for the monotone nonlinear complementarity problem. SIAM J. Optim., 10(2):605–625 (electronic), 2000.

[20] M. V. Solodov and B. F. Svaiter. A unified framework for some inexact proximal point algorithms. Numer. Funct. Anal. Optim., 22(7-8):1013–1035, 2001.

[21] M. V. Solodov and B. F. Svaiter. A new proximal-based globalization strategy for the Josephy-Newton method for variational inequalities. Optim. Methods Softw., 17(5), 2002.

[22] B. F. Svaiter. Complexity of the relaxed hybrid proximal-extragradient method under the large-step condition. Preprint A766/2015, IMPA – Instituto Nacional de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Rio de Janeiro, RJ, Brasil, 2015.


More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F.

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F. AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M. V. Solodov and B. F. Svaiter May 14, 1998 (Revised July 8, 1999) ABSTRACT We present a

More information

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. May 14, 1998 (Revised March 12, 1999)

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. May 14, 1998 (Revised March 12, 1999) AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M. V. Solodov and B. F. Svaiter May 14, 1998 (Revised March 12, 1999) ABSTRACT We present

More information

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o Journal of Convex Analysis Volume 6 (1999), No. 1, pp. xx-xx. cheldermann Verlag A HYBRID PROJECTION{PROXIMAL POINT ALGORITHM M. V. Solodov y and B. F. Svaiter y January 27, 1997 (Revised August 24, 1998)

More information

On the convergence properties of the projected gradient method for convex optimization

On the convergence properties of the projected gradient method for convex optimization Computational and Applied Mathematics Vol. 22, N. 1, pp. 37 52, 2003 Copyright 2003 SBMAC On the convergence properties of the projected gradient method for convex optimization A. N. IUSEM* Instituto de

More information

Journal of Convex Analysis Vol. 14, No. 2, March 2007 AN EXPLICIT DESCENT METHOD FOR BILEVEL CONVEX OPTIMIZATION. Mikhail Solodov. September 12, 2005

Journal of Convex Analysis Vol. 14, No. 2, March 2007 AN EXPLICIT DESCENT METHOD FOR BILEVEL CONVEX OPTIMIZATION. Mikhail Solodov. September 12, 2005 Journal of Convex Analysis Vol. 14, No. 2, March 2007 AN EXPLICIT DESCENT METHOD FOR BILEVEL CONVEX OPTIMIZATION Mikhail Solodov September 12, 2005 ABSTRACT We consider the problem of minimizing a smooth

More information

Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces

Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces Journal of Convex Analysis Volume 17 (2010), No. 2, 553 563 Maximal Monotonicity, Conjugation and the Duality Product in Non-Reflexive Banach Spaces M. Marques Alves IMPA, Estrada Dona Castorina 110, 22460-320

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

On convergence rate of the Douglas-Rachford operator splitting method

On convergence rate of the Douglas-Rachford operator splitting method On convergence rate of the Douglas-Rachford operator splitting method Bingsheng He and Xiaoming Yuan 2 Abstract. This note provides a simple proof on a O(/k) convergence rate for the Douglas- Rachford

More information

A HYBRID PROXIMAL EXTRAGRADIENT SELF-CONCORDANT PRIMAL BARRIER METHOD FOR MONOTONE VARIATIONAL INEQUALITIES

A HYBRID PROXIMAL EXTRAGRADIENT SELF-CONCORDANT PRIMAL BARRIER METHOD FOR MONOTONE VARIATIONAL INEQUALITIES SIAM J. OPTIM. Vol. 25, No. 4, pp. 1965 1996 c 215 Society for Industrial and Applied Mathematics A HYBRID PROXIMAL EXTRAGRADIENT SELF-CONCORDANT PRIMAL BARRIER METHOD FOR MONOTONE VARIATIONAL INEQUALITIES

More information

On the iterate convergence of descent methods for convex optimization

On the iterate convergence of descent methods for convex optimization On the iterate convergence of descent methods for convex optimization Clovis C. Gonzaga March 1, 2014 Abstract We study the iterate convergence of strong descent algorithms applied to convex functions.

More information

1 Introduction and preliminaries

1 Introduction and preliminaries Proximal Methods for a Class of Relaxed Nonlinear Variational Inclusions Abdellatif Moudafi Université des Antilles et de la Guyane, Grimaag B.P. 7209, 97275 Schoelcher, Martinique abdellatif.moudafi@martinique.univ-ag.fr

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

COMPLEXITY OF A QUADRATIC PENALTY ACCELERATED INEXACT PROXIMAL POINT METHOD FOR SOLVING LINEARLY CONSTRAINED NONCONVEX COMPOSITE PROGRAMS

COMPLEXITY OF A QUADRATIC PENALTY ACCELERATED INEXACT PROXIMAL POINT METHOD FOR SOLVING LINEARLY CONSTRAINED NONCONVEX COMPOSITE PROGRAMS COMPLEXITY OF A QUADRATIC PENALTY ACCELERATED INEXACT PROXIMAL POINT METHOD FOR SOLVING LINEARLY CONSTRAINED NONCONVEX COMPOSITE PROGRAMS WEIWEI KONG, JEFFERSON G. MELO, AND RENATO D.C. MONTEIRO Abstract.

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

An adaptive accelerated first-order method for convex optimization

An adaptive accelerated first-order method for convex optimization An adaptive accelerated first-order method for convex optimization Renato D.C Monteiro Camilo Ortiz Benar F. Svaiter July 3, 22 (Revised: May 4, 24) Abstract This paper presents a new accelerated variant

More information

A strongly convergent hybrid proximal method in Banach spaces

A strongly convergent hybrid proximal method in Banach spaces J. Math. Anal. Appl. 289 (2004) 700 711 www.elsevier.com/locate/jmaa A strongly convergent hybrid proximal method in Banach spaces Rolando Gárciga Otero a,,1 and B.F. Svaiter b,2 a Instituto de Economia

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

An inexact subgradient algorithm for Equilibrium Problems

An inexact subgradient algorithm for Equilibrium Problems Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,

More information

Splitting methods for decomposing separable convex programs

Splitting methods for decomposing separable convex programs Splitting methods for decomposing separable convex programs Philippe Mahey LIMOS - ISIMA - Université Blaise Pascal PGMO, ENSTA 2013 October 4, 2013 1 / 30 Plan 1 Max Monotone Operators Proximal techniques

More information

ENLARGEMENT OF MONOTONE OPERATORS WITH APPLICATIONS TO VARIATIONAL INEQUALITIES. Abstract

ENLARGEMENT OF MONOTONE OPERATORS WITH APPLICATIONS TO VARIATIONAL INEQUALITIES. Abstract ENLARGEMENT OF MONOTONE OPERATORS WITH APPLICATIONS TO VARIATIONAL INEQUALITIES Regina S. Burachik* Departamento de Matemática Pontíficia Universidade Católica de Rio de Janeiro Rua Marques de São Vicente,

More information

Forcing strong convergence of proximal point iterations in a Hilbert space

Forcing strong convergence of proximal point iterations in a Hilbert space Math. Program., Ser. A 87: 189 202 (2000) Springer-Verlag 2000 Digital Object Identifier (DOI) 10.1007/s101079900113 M.V. Solodov B.F. Svaiter Forcing strong convergence of proximal point iterations in

More information

arxiv: v1 [math.na] 25 Sep 2012

arxiv: v1 [math.na] 25 Sep 2012 Kantorovich s Theorem on Newton s Method arxiv:1209.5704v1 [math.na] 25 Sep 2012 O. P. Ferreira B. F. Svaiter March 09, 2007 Abstract In this work we present a simplifyed proof of Kantorovich s Theorem

More information

AN INEXACT HYBRIDGENERALIZEDPROXIMAL POINT ALGORITHM ANDSOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS

AN INEXACT HYBRIDGENERALIZEDPROXIMAL POINT ALGORITHM ANDSOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 25, No. 2, May 2000, pp. 214 230 Printed in U.S.A. AN INEXACT HYBRIDGENERALIZEDPROXIMAL POINT ALGORITHM ANDSOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M.

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,

More information

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems Math. Program., Ser. A (2013) 142:591 604 DOI 10.1007/s10107-012-0586-z SHORT COMMUNICATION A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

A doubly stabilized bundle method for nonsmooth convex optimization

A doubly stabilized bundle method for nonsmooth convex optimization Mathematical Programming manuscript No. (will be inserted by the editor) A doubly stabilized bundle method for nonsmooth convex optimization Welington de Oliveira Mikhail Solodov Received: date / Accepted:

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

ε-enlargements of maximal monotone operators: theory and applications

ε-enlargements of maximal monotone operators: theory and applications ε-enlargements of maximal monotone operators: theory and applications Regina S. Burachik, Claudia A. Sagastizábal and B. F. Svaiter October 14, 2004 Abstract Given a maximal monotone operator T, we consider

More information

Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints

Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints Comput Optim Appl (2009) 42: 231 264 DOI 10.1007/s10589-007-9074-4 Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints A.F. Izmailov M.V. Solodov Received:

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

The effect of calmness on the solution set of systems of nonlinear equations

The effect of calmness on the solution set of systems of nonlinear equations Mathematical Programming manuscript No. (will be inserted by the editor) The effect of calmness on the solution set of systems of nonlinear equations Roger Behling Alfredo Iusem Received: date / Accepted:

More information

A proximal-newton method for monotone inclusions in Hilbert spaces with complexity O(1/k 2 ).

A proximal-newton method for monotone inclusions in Hilbert spaces with complexity O(1/k 2 ). H. ATTOUCH (Univ. Montpellier 2) Fast proximal-newton method Sept. 8-12, 2014 1 / 40 A proximal-newton method for monotone inclusions in Hilbert spaces with complexity O(1/k 2 ). Hedy ATTOUCH Université

More information

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE Fixed Point Theory, Volume 6, No. 1, 2005, 59-69 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.htm WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE YASUNORI KIMURA Department

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present

More information

On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-newton SQP methods to critical multipliers

On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-newton SQP methods to critical multipliers Mathematical Programming manuscript No. (will be inserted by the editor) A.F. Izmailov M.V. Solodov On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-newton SQP methods

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

LAGRANGIAN TRANSFORMATION IN CONVEX OPTIMIZATION

LAGRANGIAN TRANSFORMATION IN CONVEX OPTIMIZATION LAGRANGIAN TRANSFORMATION IN CONVEX OPTIMIZATION ROMAN A. POLYAK Abstract. We introduce the Lagrangian Transformation(LT) and develop a general LT method for convex optimization problems. A class Ψ of

More information

Convergence Analysis of Perturbed Feasible Descent Methods 1

Convergence Analysis of Perturbed Feasible Descent Methods 1 JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS Vol. 93. No 2. pp. 337-353. MAY 1997 Convergence Analysis of Perturbed Feasible Descent Methods 1 M. V. SOLODOV 2 Communicated by Z. Q. Luo Abstract. We

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Upper sign-continuity for equilibrium problems

Upper sign-continuity for equilibrium problems Upper sign-continuity for equilibrium problems D. Aussel J. Cotrina A. Iusem January, 2013 Abstract We present the new concept of upper sign-continuity for bifunctions and establish a new existence result

More information

A globally convergent Levenberg Marquardt method for equality-constrained optimization

A globally convergent Levenberg Marquardt method for equality-constrained optimization Computational Optimization and Applications manuscript No. (will be inserted by the editor) A globally convergent Levenberg Marquardt method for equality-constrained optimization A. F. Izmailov M. V. Solodov

More information

Projective Splitting Methods for Pairs of Monotone Operators

Projective Splitting Methods for Pairs of Monotone Operators Projective Splitting Methods for Pairs of Monotone Operators Jonathan Eckstein B. F. Svaiter August 13, 2003 Abstract By embedding the notion of splitting within a general separator projection algorithmic

More information

Differentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA

Differentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA Master s Thesis Differentiable exact penalty functions for nonlinear optimization with easy constraints Guidance Assistant Professor Ellen Hidemi FUKUDA Takuma NISHIMURA Department of Applied Mathematics

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

Identifying Active Constraints via Partial Smoothness and Prox-Regularity

Identifying Active Constraints via Partial Smoothness and Prox-Regularity Journal of Convex Analysis Volume 11 (2004), No. 2, 251 266 Identifying Active Constraints via Partial Smoothness and Prox-Regularity W. L. Hare Department of Mathematics, Simon Fraser University, Burnaby,

More information

Rejoinder on: Critical Lagrange multipliers: what we currently know about them, how they spoil our lives, and what we can do about it

Rejoinder on: Critical Lagrange multipliers: what we currently know about them, how they spoil our lives, and what we can do about it TOP manuscript No. (will be inserted by the editor) Rejoinder on: Critical Lagrange multipliers: what we currently know about them, how they spoil our lives, and what we can do about it A. F. Izmailov

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

Kantorovich s Majorants Principle for Newton s Method

Kantorovich s Majorants Principle for Newton s Method Kantorovich s Majorants Principle for Newton s Method O. P. Ferreira B. F. Svaiter January 17, 2006 Abstract We prove Kantorovich s theorem on Newton s method using a convergence analysis which makes clear,

More information

Une méthode proximale pour les inclusions monotones dans les espaces de Hilbert, avec complexité O(1/k 2 ).

Une méthode proximale pour les inclusions monotones dans les espaces de Hilbert, avec complexité O(1/k 2 ). Une méthode proximale pour les inclusions monotones dans les espaces de Hilbert, avec complexité O(1/k 2 ). Hedy ATTOUCH Université Montpellier 2 ACSIOM, I3M UMR CNRS 5149 Travail en collaboration avec

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

for all u C, where F : X X, X is a Banach space with its dual X and C X

for all u C, where F : X X, X is a Banach space with its dual X and C X ROMAI J., 6, 1(2010), 41 45 PROXIMAL POINT METHODS FOR VARIATIONAL INEQUALITIES INVOLVING REGULAR MAPPINGS Corina L. Chiriac Department of Mathematics, Bioterra University, Bucharest, Romania corinalchiriac@yahoo.com

More information

A projection-type method for generalized variational inequalities with dual solutions

A projection-type method for generalized variational inequalities with dual solutions Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method

More information

Dual and primal-dual methods

Dual and primal-dual methods ELE 538B: Large-Scale Optimization for Data Science Dual and primal-dual methods Yuxin Chen Princeton University, Spring 2018 Outline Dual proximal gradient method Primal-dual proximal gradient method

More information

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane Conference ADGO 2013 October 16, 2013 Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions Marc Lassonde Université des Antilles et de la Guyane Playa Blanca, Tongoy, Chile SUBDIFFERENTIAL

More information

Primal/Dual Decomposition Methods

Primal/Dual Decomposition Methods Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

On the acceleration of augmented Lagrangian method for linearly constrained optimization

On the acceleration of augmented Lagrangian method for linearly constrained optimization On the acceleration of augmented Lagrangian method for linearly constrained optimization Bingsheng He and Xiaoming Yuan October, 2 Abstract. The classical augmented Lagrangian method (ALM plays a fundamental

More information

Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables

Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong 2014 Workshop

More information

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Int. Journal of Math. Analysis, Vol. 1, 2007, no. 4, 175-186 Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Haiyun Zhou Institute

More information