Complexity of the relaxed Peaceman-Rachford splitting method for the sum of two maximal strongly monotone operators


Renato D.C. Monteiro, Chee-Khian Sim

November 3, 2016

Abstract

This paper considers the relaxed Peaceman-Rachford (PR) splitting method for finding an approximate solution of a monotone inclusion whose underlying operator is the sum of two maximal strongly monotone operators. Using general results obtained in the setting of a non-Euclidean hybrid proximal extragradient framework, convergence of the iterates, as well as pointwise and ergodic convergence rates, are established for the above method whenever an associated relaxation parameter lies in a certain interval. An example is also discussed to demonstrate that the iterates may not converge when the relaxation parameter is outside this interval.

1 Introduction

In this paper, we consider the relaxed Peaceman-Rachford (PR) splitting method for solving the monotone inclusion

0 ∈ (A + B)(u)  (1)

where, for some β ≥ 0, A and B are maximal β-strongly monotone operators (with the convention that 0-strongly monotone means simply monotone). Recall that the relaxed PR splitting method is given by

x_k = x_{k−1} + θ (J_B(2J_A(x_{k−1}) − x_{k−1}) − J_A(x_{k−1})),

where θ > 0 is a fixed relaxation parameter and J_T := (I + T)^{−1}. The special case of the relaxed PR splitting method in which θ = 2 is known as the Peaceman-Rachford (PR) splitting method, and the one with θ = 1 is the widely-studied Douglas-Rachford splitting method (see for example [1, 6, 7]). The analysis of the relaxed PR splitting method for the case in which β = 0 has been undertaken in a number of papers. More specifically, convergence of the sequence of iterates generated by the relaxed PR splitting method is well known when θ < 2 (see for example [1, 7]) and, according to [8], its limiting behavior for the case in which θ ≥ 2 is not known.
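The iteration just described can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the paper's method of analysis: the operators A(u) = u − 2 and B(v) = v − 4 are our own choice of maximal 1-strongly monotone operators (so that 0 = (A + B)(u) has the unique solution u* = 3), and the closed-form resolvents are worked out by hand.

```python
def relaxed_pr(J_A, J_B, x0, theta, num_iters):
    """Relaxed Peaceman-Rachford iteration:
    x_k = x_{k-1} + theta*(J_B(2*J_A(x_{k-1}) - x_{k-1}) - J_A(x_{k-1}))."""
    x = x0
    u = v = x0
    for _ in range(num_iters):
        u = J_A(x)               # u_k = J_A(x_{k-1})
        v = J_B(2.0 * u - x)     # v_k = J_B(2*u_k - x_{k-1})
        x = x + theta * (v - u)  # relaxation step with parameter theta
    return x, u, v

# Illustrative 1-D instance (our own, not from the paper): A(u) = u - 2 and
# B(v) = v - 4 are maximal 1-strongly monotone, and 0 = (A + B)(u) gives
# u* = 3.  The resolvent of A(u) = u - p is J_A(x) = (I + A)^{-1}(x) = (x + p)/2.
J_A = lambda x: (x + 2.0) / 2.0
J_B = lambda x: (x + 4.0) / 2.0
x, u, v = relaxed_pr(J_A, J_B, 0.0, theta=1.0, num_iters=100)
```

With θ = 1 this is the Douglas-Rachford method; both u and v approach u* = 3 here, consistent with the convergence theory discussed below.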
Actually, as a special case of the class of examples discussed in Section 5, it is shown that such a sequence does not necessarily converge. An O(1/√k) (strong) pointwise convergence rate result is established in [10] for the relaxed PR splitting method when θ ∈ (0, 2). Moreover, when A = ∂f and B = ∂g, where f and g are proper lower semi-continuous convex functions, [4, 5] derive strong pointwise (resp., ergodic) convergence rate bounds for the relaxed PR method when θ ∈ (0, 2) (resp., θ ∈ (0, 2]) under different assumptions on the functions. Assuming only β-strong monotonicity of A = ∂f and some smoothness property on f, [8] shows that the relaxed PR splitting method has a linear convergence rate for θ ∈ (0, 2 + τ) for some τ > 0. Finally, convergence rate results for special classes of A and B are derived in [3] for θ lying in an interval of the latter type.

This paper considers the case in which β > 0, and hence the case when both A and B are strongly monotone. To the best of our knowledge, the analysis of the relaxed PR splitting method for solving (1) when β > 0 is new. Unlike the analyses of [4, 5, 8], where A = ∂f and B = ∂g, this paper considers maximal β-strongly monotone point-to-set operators A and B which are not necessarily subdifferentials and do not necessarily satisfy any regularity assumptions. Strong pointwise and ergodic convergence rate results (Theorems 4.7 and 4.9) when θ ∈ (0, 2 + β) and θ ∈ (0, 2 + β], respectively, and convergence of the iterates when θ ∈ (0, 2 + β), are established for the relaxed PR splitting method for every β ≥ 0. As a consequence, convergence of the iterates generated by the PR splitting method is shown under the condition that θ = 2 and β > 0, which clearly contrasts with the case θ = 2 and β = 0, where non-convergence might occur (see Section 5). Our analysis of the relaxed PR splitting method for solving (1) is based on viewing it as an inexact proximal point method, more specifically, as an instance of a non-Euclidean hybrid proximal extragradient (HPE) framework for solving the monotone inclusion problem.

* School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA (monteiro@isye.gatech.edu). The work of this author was partially supported by NSF Grant CMMI.
† Department of Mathematics, University of Portsmouth, Lion Gate Building, Lion Terrace, Portsmouth PO 3HF (chee-khian.sim@port.ac.uk). This research was made possible through support by the Centre of Operational Research and Logistics, University of Portsmouth.
The proximal point method, proposed by Rockafellar [8], is a classical iterative scheme for solving the latter problem. Paper [9] introduces a Euclidean version of the HPE framework, which is an inexact version of the proximal point method based on a certain relative error criterion. Iteration-complexities of the latter framework are established in [5] (see also [4]). Generalizations of the HPE framework to the non-Euclidean setting are studied in [9, 3, 20]. Applications of the HPE framework can be found for example in [1, 2, 4, 5].

This paper is organized as follows. Section 2 describes basic concepts and notation used in the paper. Section 3 discusses the non-Euclidean HPE framework, which is applied to the study of the relaxed PR splitting method in Section 4. Finally, Section 5 provides an example showing that the iterates generated by the relaxed PR splitting method may not converge when θ ≥ 2(1 + β).

2 Basic concepts and notation

This section presents some definitions, notation and terminology which will be used in the paper. Let f and g be functions with the same domain and whose values are in the set of positive real numbers. We write f(·) = Ω(g(·)) if there exists a constant K > 0 such that f(·) ≥ K g(·). Also, we write f(·) = Θ(g(·)) if f(·) = Ω(g(·)) and g(·) = Ω(f(·)). Let Z be a finite-dimensional real vector space with inner product denoted by ⟨·,·⟩ (an example of Z is R^n endowed with the standard inner product) and let ‖·‖ denote an arbitrary seminorm in Z. Its dual (extended) seminorm, denoted by ‖·‖*, is defined as ‖v‖* := sup{⟨v, z⟩ : ‖z‖ ≤ 1}. The following result, whose simple proof is omitted, states some basic properties of the dual seminorm.

Proposition 2.1 Let A : Z → Z be a self-adjoint positive semidefinite linear operator and consider the seminorm in Z given by ‖z‖ = ⟨Az, z⟩^{1/2} for every z ∈ Z. Then, dom ‖·‖* = Im(A) and ‖Az‖* = ‖z‖ for every z ∈ Z.

Given a set-valued operator T : Z ⇒ Z, its domain is denoted by Dom(T) := {z ∈ Z : T(z) ≠ ∅} and its inverse operator T^{−1} : Z ⇒ Z is given by T^{−1}(v) := {z : v ∈ T(z)}. The graph of T is defined by Gr(T) := {(z, t) : t ∈ T(z)}. The operator T is said to be monotone if

⟨z − z′, t − t′⟩ ≥ 0  for all (z, t), (z′, t′) ∈ Gr(T).

Moreover, T is maximal monotone if it is monotone and, additionally, if T̃ is a monotone operator such that T(z) ⊆ T̃(z) for every z ∈ Z, then T̃ = T. The sum T + T′ : Z ⇒ Z of two set-valued operators T, T′ : Z ⇒ Z is defined by (T + T′)(z) := {t + t′ ∈ Z : t ∈ T(z), t′ ∈ T′(z)} for every z ∈ Z. Given a scalar ε ≥ 0, the ε-enlargement T^{[ε]} : Z ⇒ Z of a monotone operator T : Z ⇒ Z is defined as

T^{[ε]}(z) := {t ∈ Z : ⟨t − t′, z − z′⟩ ≥ −ε for all z′ ∈ Z, t′ ∈ T(z′)}  for all z ∈ Z.  (2)

3 A non-Euclidean hybrid proximal extragradient framework

This section discusses the non-Euclidean hybrid proximal extragradient (HPE) framework and its associated convergence and complexity results. We start by introducing the definition of a distance generating function and its corresponding Bregman distance.

Definition 3.1 A proper lower semi-continuous convex function w : Z → (−∞, +∞] is called a distance generating function if int(dom w) = Dom(∇w) and w is continuously differentiable on this interior. Moreover, w induces the Bregman distance dw : Z × int(dom w) → R defined as

(dw)(z′; z) := w(z′) − w(z) − ⟨∇w(z), z′ − z⟩  for all (z′, z) ∈ Z × int(dom w).  (3)

For simplicity, for every z ∈ int(dom w), the function (dw)(·; z) will be denoted by (dw)_z, so that

(dw)_z(z′) = (dw)(z′; z)  for all (z′, z) ∈ Z × int(dom w).

The following useful identities follow straightforwardly from (3):

∇(dw)_z(z′) = −∇(dw)_{z′}(z) = ∇w(z′) − ∇w(z)  for all z, z′ ∈ int(dom w),  (4)

(dw)_v(z′) − (dw)_v(z) = ⟨∇(dw)_v(z), z′ − z⟩ + (dw)_z(z′)  for all z′ ∈ Z, v, z ∈ int(dom w).  (5)

Our analysis of the non-Euclidean HPE framework presented in this section requires an extra property (first introduced in [9]) of the distance generating function, namely, that of being regular with respect to a seminorm.
Definition 3.2 Let a distance generating function w : Z → (−∞, +∞], a seminorm ‖·‖ in Z and a convex set Z̄ ⊆ int(dom w) be given. For given positive constants m and M, w is said to be (m, M)-regular with respect to (Z̄, ‖·‖) if

(dw)_z(z′) ≥ (m/2)‖z′ − z‖²  for all z, z′ ∈ Z̄,  (6)

‖∇w(z) − ∇w(z′)‖* ≤ M‖z − z′‖  for all z, z′ ∈ Z̄.  (7)

Note that if the seminorm ‖·‖ in Definition 3.2 is a norm, then (6) implies that w is strongly convex, in which case the corresponding dw is said to be nondegenerate. However, since we are not necessarily assuming that ‖·‖ is a norm, our approach includes the case in which w is not strongly convex, or equivalently, dw is degenerate. We observe that iteration-complexities of non-Euclidean HPE frameworks have already been studied in [9] and [3]. However, our approach in this section differs from these two papers as follows. Paper [3] deals with distance generating functions w (see Definition 3.1) which are not necessarily regular in the sense of Definition 3.2. Since it considers a larger class of distance generating functions than the one considered in this work, the results obtained in [3] are more limited in scope, i.e., only an ergodic convergence rate result is obtained for operators with bounded feasible domains (or, more generally, for the case in which the sequence generated by the HPE framework is bounded). On the other hand, by introducing the class of distance generating functions w satisfying Definition 3.2, paper [9] analyzes the behavior of an HPE framework for solving inclusions whose operators are strongly monotone with respect to w, and uses these results to solve (not necessarily strongly) monotone inclusions via regularization. The following result gives some useful properties of regular distance generating functions.

Lemma 3.3 Let w : Z → (−∞, +∞] be an (m, M)-regular distance generating function with respect to (Z̄, ‖·‖) as in Definition 3.2. Then, the following statements hold:

(a) for every z, z′ ∈ Z̄, we have

‖∇(dw)_z(z′)‖*² ≤ (2M²/m) min{(dw)_z(z′), (dw)_{z′}(z)};  (8)

(b) for every l ≥ 1 and z₀, z₁, ..., z_l ∈ Z̄, we have

(dw)_{z₀}(z_l) ≤ (lM/m) Σ_{i=1}^{l} min{(dw)_{z_{i−1}}(z_i), (dw)_{z_i}(z_{i−1})}.  (9)

Proof: (a) It is easy to see that (8) immediately follows from (4), (6) and (7).

(b) Using (3) and (7), it is easy to see that

(dw)_z(z′) = w(z′) − w(z) − ⟨∇w(z), z′ − z⟩ ≤ (M/2)‖z′ − z‖²  for all z, z′ ∈ Z̄.  (10)

Then, we have

(dw)_{z₀}(z_l) ≤ (M/2)‖z_l − z₀‖² ≤ (M/2)(Σ_{i=1}^{l} ‖z_i − z_{i−1}‖)² ≤ (lM/2) Σ_{i=1}^{l} ‖z_i − z_{i−1}‖²,

which, in view of (6), immediately implies (9).

Throughout this section, we assume that w : Z → (−∞, +∞] is an (m, M)-regular distance generating function with respect to (Z̄, ‖·‖), where Z̄ ⊆ int(dom w) is a closed convex set and ‖·‖ is a seminorm in Z. Our problem of interest in this section is the monotone inclusion problem (MIP)

0 ∈ T(z)  (11)

where T : Z ⇒ Z is a maximal monotone operator and the following conditions hold:

(A0) Dom(T) ⊆ Z̄;

(A1) the solution set T^{−1}(0) of (11) is nonempty.

We now state a non-Euclidean HPE (NE-HPE) framework for solving the MIP (11) which generalizes the ones studied in [5, 6, 9].

Framework 1 (An NE-HPE framework for solving (11)).

(0) Let z₀ ∈ Z̄ and σ ∈ [0, 1] be given, and set k = 1;

(1) choose λ_k > 0 and find (z̃_k, z_k, ε_k) ∈ Z̄ × Z̄ × R₊ such that

r_k := (1/λ_k) ∇(dw)_{z_k}(z_{k−1}) ∈ T^{[ε_k]}(z̃_k),  (12)

(dw)_{z_k}(z̃_k) + λ_k ε_k ≤ σ (dw)_{z_{k−1}}(z̃_k);  (13)

(2) set k ← k + 1 and go to step (1).

end

We now make some remarks about Framework 1. First, it does not specify how to find λ_k and (z̃_k, z_k, ε_k) satisfying (12) and (13). The particular scheme for computing λ_k and (z̃_k, z_k, ε_k) will depend on the instance of the framework under consideration and the properties of the operator T. Second, if w is strongly convex on Z̄ and σ = 0, then (13) implies that ε_k = 0 and z_k = z̃_k for every k ≥ 1, and hence that r_k ∈ T(z_k) in view of (12). Therefore, the HPE error conditions (12)-(13) can be viewed as a relaxation of an iteration of the exact non-Euclidean proximal point method, namely,

0 ∈ (1/λ_k) ∇(dw)_{z_{k−1}}(z_k) + T(z_k).

Third, if w is strongly convex on Z̄, then it can be shown that the above inclusion has a unique solution z_k, and hence that, for any given λ_k > 0, there exists a triple (z̃_k, z_k, ε_k) of the form (z_k, z_k, 0) satisfying (12)-(13) with σ = 0. Clearly, computing the triple in this (exact) manner is expensive, and hence computation of (inexact) triples satisfying the HPE (relative) error conditions with σ > 0 is more computationally appealing. Fourth, even though the definition of a regular distance generating function does not exclude the case in which w is constant, such a case is not interesting from an algorithmic point of view. In fact, if w is constant, then z̃₁ is already a solution of (11), since it follows from (13) with k = 1 that ε₁ = 0, and hence that 0 ∈ T(z̃₁) in view of (12) with k = 1.
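The simplest instance of the framework is the exact Euclidean one. The following sketch (our own illustration, under the assumptions stated in the comments) uses w(z) = ‖z‖²/2, so (dw)_z(z′) = ‖z′ − z‖²/2, takes σ = 0, ε_k = 0 and z̃_k = z_k, and solves the exact proximal inclusion in closed form for an affine monotone operator.

```python
import numpy as np

def ne_hpe_exact(T_mat, t_vec, z0, lam, num_iters):
    """Exact Euclidean instance of the NE-HPE framework (sigma = 0).

    With w(z) = ||z||^2/2, the proximal inclusion
    0 in (1/lam)*(z_k - z_{k-1}) + T(z_k), for the affine monotone operator
    T(z) = T_mat @ z + t_vec, becomes the linear system
    (I + lam*T_mat) z_k = z_{k-1} - lam*t_vec.
    Taking z~_k = z_k and eps_k = 0 makes the error conditions hold exactly.
    """
    n = len(z0)
    z = z0
    for _ in range(num_iters):
        z = np.linalg.solve(np.eye(n) + lam * T_mat, z - lam * t_vec)
    return z

# Monotone affine operator (our example): symmetric part of T_mat is the
# identity, so T is 1-strongly monotone; the skew part does not affect this.
T_mat = np.array([[1.0, 2.0], [-2.0, 1.0]])
t_vec = np.array([1.0, -1.0])
z = ne_hpe_exact(T_mat, t_vec, np.zeros(2), lam=1.0, num_iters=200)
z_star = np.linalg.solve(T_mat, -t_vec)  # the unique zero of T
```

Since the resolvent of this strongly monotone operator is a contraction, the iterates converge linearly to the unique zero of T.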
Lemma 3.4 For every k ≥ 1, the following statements hold:

(a) for every z ∈ dom w, we have

(dw)_{z_k}(z) − (dw)_{z_{k−1}}(z) = −(dw)_{z_{k−1}}(z̃_k) + (dw)_{z_k}(z̃_k) − λ_k ⟨r_k, z̃_k − z⟩;

(b) for every z ∈ dom w, we have

(dw)_{z_k}(z) − (dw)_{z_{k−1}}(z) ≤ −(1 − σ)(dw)_{z_{k−1}}(z̃_k) − λ_k (⟨r_k, z̃_k − z⟩ + ε_k);

(c) for every z* ∈ T^{−1}(0), we have

(dw)_{z_k}(z*) − (dw)_{z_{k−1}}(z*) ≤ −(1 − σ)(dw)_{z_{k−1}}(z̃_k).  (14)

Proof: (a) Using (5) twice and the definition of r_k given by (12), we obtain

(dw)_{z_k}(z) − (dw)_{z_{k−1}}(z) = −(dw)_{z_{k−1}}(z_k) − ⟨∇(dw)_{z_{k−1}}(z_k), z − z_k⟩
= −(dw)_{z_{k−1}}(z̃_k) + (dw)_{z_k}(z̃_k) + ⟨∇(dw)_{z_{k−1}}(z_k), z̃_k − z⟩
= −(dw)_{z_{k−1}}(z̃_k) + (dw)_{z_k}(z̃_k) − λ_k ⟨r_k, z̃_k − z⟩.

(b) This statement follows as an immediate consequence of (a) and (13).

(c) This statement follows from (b), the fact that 0 ∈ T(z*) and r_k ∈ T^{[ε_k]}(z̃_k), and the definition of T^{[ε]}(·).

For the purpose of stating the results below, define

(dw)₀ := inf{(dw)_{z₀}(z*) : z* ∈ T^{−1}(0)}.  (15)

It is not known whether the infimum in (15) is attained.

Lemma 3.5 For every i ≥ 1, define

Δ_i := max{ λ_i² ‖r_i‖*² / (τ²(1 + √σ)²), λ_i ε_i / σ },

where

τ := M √(2/m).

Then,

(1 − σ) Σ_{i≥1} Δ_i ≤ (dw)₀.

Proof: For every i ≥ 1, we have

λ_i ‖r_i‖* = ‖∇(dw)_{z_i}(z_{i−1})‖* = ‖∇(dw)_{z_i}(z̃_i) − ∇(dw)_{z_{i−1}}(z̃_i)‖* ≤ ‖∇(dw)_{z_i}(z̃_i)‖* + ‖∇(dw)_{z_{i−1}}(z̃_i)‖*
≤ τ [(dw)_{z_i}(z̃_i)^{1/2} + (dw)_{z_{i−1}}(z̃_i)^{1/2}] ≤ τ(1 + √σ)(dw)_{z_{i−1}}(z̃_i)^{1/2}.

The last inequality, (13) and the definition of Δ_i then imply that Δ_i ≤ (dw)_{z_{i−1}}(z̃_i) for every i ≥ 1. Hence, if z* ∈ T^{−1}(0), it follows from Lemma 3.4(c) that

(1 − σ) Σ_{i=1}^{k} Δ_i ≤ (1 − σ) Σ_{i=1}^{k} (dw)_{z_{i−1}}(z̃_i) ≤ (dw)_{z₀}(z*) − (dw)_{z_k}(z*) ≤ (dw)_{z₀}(z*).

The lemma now follows from the latter inequality and the definition of (dw)₀.

Lemma 3.6 Assume that σ < 1. Then, for every α ∈ R and every k ≥ 1, there exists an i ≤ k such that

‖r_i‖* ≤ τ(1 + √σ) [ (dw)₀ λ_i^{α−2} / ((1 − σ) Σ_{j=1}^{k} λ_j^α) ]^{1/2},  ε_i ≤ σ (dw)₀ λ_i^{α−1} / ((1 − σ) Σ_{j=1}^{k} λ_j^α).  (16)

Proof: It is easy to verify that the conclusion of the lemma is equivalent to the condition

min_{i=1,...,k} { Δ_i / λ_i^α } ≤ (dw)₀ / ((1 − σ) Σ_{j=1}^{k} λ_j^α),

which in turn follows as an easy consequence of Lemma 3.5.

Theorem 3.7 (Pointwise convergence) If σ < 1, then the following statements hold:

(a) if λ := inf λ_k > 0, then for every k ∈ N there exists i ≤ k such that

‖r_i‖* ≤ τ(1 + √σ) [ (dw)₀ / ((1 − σ) λ_i Σ_{j=1}^{k} λ_j) ]^{1/2} ≤ (τ(1 + √σ)/λ) [ (dw)₀ / ((1 − σ)k) ]^{1/2},
ε_i ≤ σ (dw)₀ / ((1 − σ) Σ_{j=1}^{k} λ_j) ≤ σ (dw)₀ / ((1 − σ) λ k);

(b) for every k ∈ N, there exists an index i ≤ k such that

‖r_i‖* ≤ τ(1 + √σ) [ (dw)₀ / ((1 − σ) Σ_{j=1}^{k} λ_j²) ]^{1/2},  ε_i ≤ σ (dw)₀ λ_i / ((1 − σ) Σ_{j=1}^{k} λ_j²).  (17)

Proof: Statements (a) and (b) follow from Lemma 3.6 with α = 1 and α = 2, respectively.

Due to the possible degeneracy of the Bregman distance dw, it is not possible to establish that the sequences {z̃_k} and {z_k} converge. However, under mild conditions on {λ_k}, Proposition 3.9 below shows that {(dw)_{z_k}(z*)} and {(dw)_{z̃_k}(z*)} both converge to 0 for some z* ∈ T^{−1}(0). Before stating this proposition, we first give the following technical lemma.

Lemma 3.8 Assume that lim_{k→∞} (dw)_{z_{k−1}}(z̃_k) = 0 and that z* ∈ T^{−1}(0) satisfies lim inf_{k→∞} (dw)_{z̃_k}(z*) = 0. Then,

lim_{k→∞} (dw)_{z_k}(z*) = 0,  lim_{k→∞} (dw)_{z̃_k}(z*) = 0.  (18)

Proof: Since σ ∈ [0, 1] and z* ∈ T^{−1}(0), it follows from Lemma 3.4(c) that {(dw)_{z_k}(z*)} is non-increasing. Moreover, since by Lemma 3.3(b) with l = 2 we have

(dw)_{z_k}(z*) ≤ (2M/m)[(dw)_{z_k}(z̃_k) + (dw)_{z̃_k}(z*)],

we conclude from the assumptions of the lemma and (13) that lim inf_{k→∞} (dw)_{z_k}(z*) = 0. The above two remarks then imply that lim_{k→∞} (dw)_{z_k}(z*) = 0. Using this conclusion together with the inequality

(dw)_{z̃_k}(z*) ≤ (2M/m)[(dw)_{z_k}(z*) + (dw)_{z_k}(z̃_k)],

we conclude that lim_{k→∞} (dw)_{z̃_k}(z*) = 0.

Proposition 3.9 Assume that σ < 1, Σ_k λ_k² = ∞ and {z̃_k} is bounded. Then, there exists z* ∈ T^{−1}(0) such that (18) holds.

Proof: The assumptions that σ < 1 and Σ_k λ_k² = ∞, together with Theorem 3.7(b), imply that there exists a subsequence {(r_k, ε_k)}_{k∈K} converging to zero. Now, let z* be an accumulation point of {z̃_k}_{k∈K}. Then, by passing the inclusion r_k ∈ T^{[ε_k]}(z̃_k) to the limit, we conclude that 0 ∈ T^{[0]}(z*) = T(z*), and hence that z* ∈ T^{−1}(0). Hence, Lemma 3.4(c) and the assumption that σ < 1 imply that lim_{k→∞} (dw)_{z_{k−1}}(z̃_k) = 0. Clearly, (10) and the fact that z* is an accumulation point of {z̃_k}_{k∈K} imply that lim inf_{k→∞} (dw)_{z̃_k}(z*) = 0. The conclusion of the proposition now follows directly from the previous lemma.

From now on, we focus on the ergodic convergence rate of the NE-HPE framework. For k ≥ 1, define Λ_k := Σ_{i=1}^{k} λ_i and the ergodic sequences

z̃_k^a := (1/Λ_k) Σ_{i=1}^{k} λ_i z̃_i,  r_k^a := (1/Λ_k) Σ_{i=1}^{k} λ_i r_i,  ε_k^a := (1/Λ_k) Σ_{i=1}^{k} λ_i (ε_i + ⟨r_i, z̃_i − z̃_k^a⟩).  (19)

The following result derives the convergence rate of the above ergodic sequences.

Theorem 3.10 (Ergodic convergence) For every k ≥ 1, we have

ε_k^a ≥ 0,  r_k^a ∈ T^{[ε_k^a]}(z̃_k^a)

and

‖r_k^a‖* ≤ 2τ (dw)₀^{1/2} / Λ_k,  ε_k^a ≤ (3M/m)(2(dw)₀ + ρ_k) / Λ_k,

where

ρ_k := max_{i=1,...,k} (dw)_{z_i}(z̃_i).

Moreover, the sequence {ρ_k} is bounded under either one of the following situations:

(a) σ < 1, in which case

ρ_k ≤ σ (dw)₀ / (1 − σ);  (20)

(b) Dom T is bounded, in which case

ρ_k ≤ (2M/m)[(dw)₀ + D]  (21)

where D := sup{min{(dw)_y(y′), (dw)_{y′}(y)} : y, y′ ∈ Dom T} is the diameter of Dom T with respect to dw.

Proof: Let z* ∈ T^{−1}(0) be given. Using (19), (12) and (4), we easily see that

Λ_k r_k^a = Σ_{i=1}^{k} ∇(dw)_{z_i}(z_{i−1}) = ∇w(z₀) − ∇w(z_k) = ∇(dw)_{z_k}(z₀),

which, together with (8) and (14), implies that

Λ_k ‖r_k^a‖* ≤ ‖∇(dw)_{z₀}(z*)‖* + ‖∇(dw)_{z_k}(z*)‖* ≤ τ [(dw)_{z₀}(z*)^{1/2} + (dw)_{z_k}(z*)^{1/2}] ≤ 2τ (dw)_{z₀}(z*)^{1/2}.

This inequality together with the definition of (dw)₀ clearly implies the bound on ‖r_k^a‖*. To show the bound on ε_k^a, first note that Lemma 3.4(b) implies that for every z ∈ dom w,

(dw)_{z₀}(z) − (dw)_{z_k}(z) ≥ (1 − σ) Σ_{i=1}^{k} (dw)_{z_{i−1}}(z̃_i) + Σ_{i=1}^{k} λ_i (⟨r_i, z̃_i − z⟩ + ε_i).

Letting z = z̃_k^a in the last inequality and using the fact that (dw)_{z₀}(·) is convex and σ ≤ 1, we conclude that

max_{i=1,...,k} (dw)_{z₀}(z̃_i) ≥ (dw)_{z₀}(z̃_k^a) ≥ Σ_{i=1}^{k} λ_i (⟨r_i, z̃_i − z̃_k^a⟩ + ε_i) = Λ_k ε_k^a,

where the last equality is due to (19). On the other hand, (9) with l = 3 implies that for every i ≤ k and z* ∈ T^{−1}(0),

(dw)_{z₀}(z̃_i) ≤ (3M/m)[(dw)_{z_i}(z̃_i) + (dw)_{z_i}(z*) + (dw)_{z₀}(z*)] ≤ (3M/m)[(dw)_{z_i}(z̃_i) + 2(dw)_{z₀}(z*)],

where the last inequality is due to Lemma 3.4(c). Combining the above two relations and using the definitions of ρ_k and (dw)₀, we then conclude that the bound on ε_k^a holds.

We now establish the bounds on ρ_k under either one of the conditions (a) or (b). First, if σ < 1, then it follows from (13) and Lemma 3.4(c) that

(1 − σ)(dw)_{z_i}(z̃_i) ≤ σ(1 − σ)(dw)_{z_{i−1}}(z̃_i) ≤ σ (dw)_{z_{i−1}}(z*) ≤ σ (dw)_{z₀}(z*)

for every i ≥ 1 and z* ∈ T^{−1}(0), and hence that (20) holds. Assume now that Dom T is bounded. Then, for every i ≥ 1 and z* ∈ T^{−1}(0), we have

(dw)_{z_i}(z̃_i) ≤ (2M/m)[(dw)_{z_i}(z*) + min{(dw)_{z*}(z̃_i), (dw)_{z̃_i}(z*)}] ≤ (2M/m)[(dw)_{z₀}(z*) + D],

which, in view of the definitions of ρ_k and (dw)₀, clearly implies that (21) holds.

4 The relaxed Peaceman-Rachford splitting method

This section studies the relaxed Peaceman-Rachford (PR) splitting method by applying the results of the previous section. For a given β > 0, an operator T : X ⇒ X is said to be β-strongly monotone if

⟨w − w′, x − x′⟩ ≥ β ‖x − x′‖²_X  for all (x, w), (x′, w′) ∈ Gr(T).

In what follows, we refer to monotone operators as 0-strongly monotone operators. This terminology has the benefit of allowing us to treat both the monotone and strongly monotone cases simultaneously.
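The defining inequality of β-strong monotonicity can be checked numerically on sampled pairs of points. The sketch below is our own illustration (the helper name and the test operator are our assumptions, not from the paper); the operator T(x) = βx + Sx with S skew-symmetric satisfies the inequality with equality.

```python
import random

def is_beta_strongly_monotone(T, beta, dim, trials=1000, seed=0):
    """Empirically check <T(x)-T(y), x-y> >= beta*||x-y||^2 on random pairs.
    A failed trial disproves beta-strong monotonicity; passing trials only
    support it (a numerical sanity check, not a proof)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.uniform(-10, 10) for _ in range(dim)]
        y = [rng.uniform(-10, 10) for _ in range(dim)]
        d = [a - b for a, b in zip(x, y)]
        td = [a - b for a, b in zip(T(x), T(y))]
        lhs = sum(a * b for a, b in zip(td, d))
        if lhs < beta * sum(a * a for a in d) - 1e-9:
            return False
    return True

# Our example: T(x) = beta*x + S x with S skew-symmetric, so that
# <T(x)-T(y), x-y> equals beta*||x-y||^2 exactly.
beta = 0.5
T = lambda x: [beta * x[0] - x[1], x[0] + beta * x[1]]
```

For this T the check passes with the exact modulus β = 0.5 and fails for any larger modulus, reflecting the fact that the skew part contributes nothing to the inner product.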
Throughout this section, we consider the monotone inclusion (1), where A, B : X ⇒ X satisfy the following assumptions:

(B0) for some β ≥ 0, A and B are maximal β-strongly monotone operators;

(B1) the solution set (A + B)^{−1}(0) is non-empty.

We start by observing that (1) is equivalent to solving the augmented inclusion

0 ∈ γA(u) + u − x
0 ∈ γB(v) + x − v
0 = u − v

where γ > 0 is an arbitrary scalar. Observe the skew-symmetry of the linear part of the above operator. Another way of writing the above system is as

0 ∈ γA(u) + u − x
0 ∈ γB(v) + v + x − 2u
0 = u − v.

Note that the first and second equations of the latter system are equivalent to

u = u(x) := J_{γA}(x),  v = v(x) := J_{γB}(2u(x) − x) = J_{γB}(2J_{γA}(x) − x)  (22)

so that the third equation reduces to

0 = u(x) − v(x) = J_{γA}(x) − J_{γB}(2J_{γA}(x) − x).

The Douglas-Rachford (DR) splitting method is the iterative procedure

x_k = x_{k−1} + v(x_{k−1}) − u(x_{k−1}),  k ≥ 1,

started from some x₀ ∈ X. It is known that the Douglas-Rachford splitting method is an exact proximal point method for some maximal monotone operator [6, 7]. Hence, convergence of the iterates of the method is guaranteed. More generally, the relaxed Peaceman-Rachford (PR) splitting method with relaxation parameter θ > 0 iterates as

(u_k, v_k) := (u(x_{k−1}), v(x_{k−1})),  x_k := x_{k−1} + θ(v_k − u_k),  k ≥ 1.  (23)

When θ = 1, we have the Douglas-Rachford splitting method, while when θ = 2, we have the Peaceman-Rachford splitting method. The relaxed PR splitting method is an iterative process in the x-space, but another way of looking at it is as an iterative process in the whole (u, v, x)-space, as we will now discuss. We now introduce a monotone inclusion which plays an important role in our analysis. For every θ > 0, define the linear map L_θ : X × X × X → X × X × X as

L_θ(z) = L_θ(u, v, x) := ( (1−θ)u + θv − x, (θ−2)u + (1−θ)v + x, u − v )  (24)

and the operator C : X × X × X ⇒ X × X × X as

C(z) = C(u, v, x) := A(u) × B(v) × {0}.  (25)

The following result relates the relaxed PR splitting method to the inclusion 0 ∈ (L_θ + γC)(z), and the subsequent one shows that this inclusion is monotone.

Lemma 4.1 For given x_{k−1} ∈ X and θ > 0, define

z_k = (u_k, v_k, x_k)  (26)

where u_k, v_k and x_k are as in (23), and set

a_k := (1/γ)(x_{k−1} − u_k),  b_k := (1/γ)(2u_k − v_k − x_{k−1}).  (27)

Then, we have:

(θ − 1)u_k − θv_k + x_k = γa_k ∈ γA(u_k),  (28)

(2 − θ)u_k + (θ − 1)v_k − x_k = γb_k ∈ γB(v_k).  (29)

As a consequence, we have

(0, 0, u_k − v_k) = L_θ(z_k) + γc_k ∈ (L_θ + γC)(z_k)  (30)

where c_k := (a_k, b_k, 0).

Proof: Using the definition of (u(·), v(·)), the definition of (u_k, v_k, x_k) in (23), and the definitions of a_k and b_k, we easily see that (28) and (29) hold. Clearly, (30) follows as an immediate consequence of (28) and (29), definitions (24) and (25), and the definition of c_k.

The following result gives sufficient conditions for the maximal monotonicity of L_θ + γC.

Proposition 4.2 Assume that A, B : X ⇒ X satisfy (B0). Then, L_θ + γC is a maximal monotone operator on X × X × X for every θ ∈ (0, 1 + (γβ)/2].

Proof: Assumption (B0) implies that A = βI + A₀ and B = βI + B₀ for some maximal monotone operators A₀, B₀ : X ⇒ X. Therefore, L_θ + γC can be written as L_θ + γC = L̃_θ + γC̃, where

L̃_θ(z) = L̃_θ(u, v, x) = ( (1−θ+γβ)u + θv − x, (θ−2)u + (1−θ+γβ)v + x, u − v )

and

C̃(z) = C̃(u, v, x) = A₀(u) × B₀(v) × {0}.

It is easy to see that for every z = (u, v, x) ∈ X × X × X,

⟨z, L̃_θ(z)⟩ = [(1−θ) + γβ](‖u‖²_X + ‖v‖²_X) + 2(θ−1)⟨u, v⟩ = (1−θ)‖u − v‖²_X + γβ(‖u‖²_X + ‖v‖²_X).

This expression can easily be seen to be non-negative for every θ ∈ (0, 1 + (γβ)/2], since ‖u‖²_X + ‖v‖²_X ≥ ‖u − v‖²_X/2. Hence, L̃_θ is a monotone linear map for any θ ∈ (0, 1 + (γβ)/2]. The conclusion of the proposition then follows by noting that the sum of a monotone linear map and a maximal monotone operator is a maximal monotone operator [1, 7].

Let us now denote

θ₀ := 1 + (γβ)/2.  (31)

Note that the value of θ₀ depends on γ and β. Clearly, θ₀ = 1 when β = 0. The following result shows that the relaxed PR splitting method with θ ∈ (0, 2θ₀] can be viewed as an inexact instance of the NE-HPE framework with respect to the monotone inclusion 0 ∈ (L_{θ̂₀} + γC)(z), where

θ̂₀ := min{θ, θ₀}.  (32)

Lemma 4.3 For any θ > 0, L_{θ̂₀} + γC is maximal monotone and the solution set (L_{θ̂₀} + γC)^{−1}(0) is given by

(L_{θ̂₀} + γC)^{−1}(0) = {(u*, u*, x*) : (1/γ)(x* − u*) ∈ A(u*) ∩ (−B(u*))} = {(u*, u*, u* + γa*) : a* ∈ A(u*), −a* ∈ B(u*)}.

As a consequence, if z* = (u*, u*, x*) ∈ (L_{θ̂₀} + γC)^{−1}(0), then u* ∈ (A + B)^{−1}(0) and u* = J_{γA}(x*).

Proof: The conclusion of the lemma follows immediately from Proposition 4.2, the definitions of L_{θ̂₀} and θ̂₀ in (24) and (32), respectively, and some simple algebraic manipulation.

Proposition 4.4 Consider the sequence {z_k = (u_k, v_k, x_k)} generated according to the relaxed PR splitting method (23) with any θ > 0, and define the sequences {z̃_k = (ũ_k, ṽ_k, x̃_k)}, {ε_k} and {λ_k} as

ũ_k := u_k,  ṽ_k := v_k,  x̃_k := x_{k−1} + θ̂₀(v_k − u_k),  ε_k := 0,  λ_k := 1,

where θ̂₀ is defined above. Consider also the (degenerate) distance generating function w given by

w(z) = w(u, v, x) = ‖x‖²_X / (2θ)  for all z = (u, v, x) ∈ X × X × X.  (33)

Then, the following statements hold:

(a) for every k ≥ 1, {(z_k, z̃_k, ε_k)} satisfies

r_k := ∇(dw)_{z_k}(z_{k−1}) = (0, 0, u_k − v_k) = L_{θ̂₀}(z̃_k) + γc_k ∈ (L_{θ̂₀} + γC)(z̃_k),

and hence {(z_k, z̃_k, ε_k)} satisfies (12) with T = L_{θ̂₀} + γC;

(b) the sequence {(z_k, z̃_k, ε_k)} satisfies (13) with σ = (θ/θ̂₀ − 1)² and w as in (33).

As a consequence, the relaxed PR splitting method with θ ∈ (0, 2θ₀) (resp., θ = 2θ₀) is an NE-HPE instance with respect to the monotone inclusion 0 ∈ (L_{θ̂₀} + γC)(z) in which σ < 1 (resp., σ = 1).

Proof: First note that (26) and the definition of z̃_k imply that z̃_k is the iterate obtained from x_{k−1} with relaxation parameter θ̂₀. Hence, the second equality and the inclusion in (a) follow from (30) with θ = θ̂₀. The first equality in (a) follows immediately
As a consequence, the relaxed PR splitting method with (0, 2 0 ) (resp., = 2 0 ) is an NE-HPE instance with respect to the monotone inclusion 0 (L 0 + γc)(z) in which σ < (resp., σ = ). Proof: First note that (26) and the definition of z k imply that z k = z 0 k. Hence, the second equality and the inclusion in (a) follow from (30) with = 0. The first equality in (a) follows immediately 2

from (23) and (33). Moreover, using (23), (33) and the definition of x̃_k, we conclude that for any θ ∈ (0, 2θ₀],

(dw)_{z_k}(z̃_k) = ‖x̃_k − x_k‖²_X / (2θ) = (θ/θ̂₀ − 1)² ‖x̃_k − x_{k−1}‖²_X / (2θ) = (θ/θ̂₀ − 1)² (dw)_{z_{k−1}}(z̃_k),

and hence that (13) is satisfied with σ = (θ/θ̂₀ − 1)². Finally, the last conclusion follows from statements (a) and (b), and Lemma 4.3.

We now make a remark about the special case of Proposition 4.4 in which θ ∈ (0, θ₀]. Indeed, in this case, θ̂₀ = θ, and hence σ = 0 and z̃_k = z_k for every k ≥ 1. Thus, the relaxed PR splitting method with θ ∈ (0, θ₀] can be viewed as an exact non-Euclidean proximal point method with distance generating function w as in (33) with respect to the monotone inclusion 0 ∈ T(z) := (L_θ + γC)(z). Note also that the latter inclusion depends on θ.

We will now show that the sequence {z_k = (u_k, v_k, x_k)} generated by the relaxed PR splitting method with θ ∈ (0, 2θ₀) converges, and also establish that this sequence and the sequence {z̃_k = (ũ_k, ṽ_k, x̃_k)} (see Proposition 4.4) are bounded whenever θ ∈ (0, 2θ₀].

Theorem 4.5 Consider the sequence {z_k = (u_k, v_k, x_k)} generated by the relaxed PR splitting method with θ ∈ (0, 2θ₀] and the sequence {z̃_k = (ũ_k, ṽ_k, x̃_k)} defined in Proposition 4.4. Then, the following statements hold:

(a) {z_k} and {z̃_k} are bounded;

(b) if θ < 2θ₀, then {z_k} converges to some z* ∈ (L_{θ̂₀} + γC)^{−1}(0), and hence both {u_k} and {v_k} converge to some u* ∈ (A + B)^{−1}(0).

Proof: (a) The assumption that θ ∈ (0, 2θ₀], together with the last conclusion of Proposition 4.4, implies that the relaxed PR splitting method is an NE-HPE instance. Hence, for any z* ∈ (L_{θ̂₀} + γC)^{−1}(0), it follows from Lemma 3.4(c) that the sequence {(dw)_{z_k}(z*)} is nonincreasing, where w is the distance generating function given by (33). Clearly, this observation implies that {x_k} is bounded. This conclusion together with (22) and the nonexpansiveness of J_{γA} and J_{γB} imply that {u_k} and {v_k} are also bounded.
Finally, {x̃_k} is bounded due to the definition of x̃_k in Proposition 4.4 and the boundedness of {x_k}, {u_k} and {v_k}.

(b) When θ ∈ (0, 2θ₀), it follows from Proposition 4.4 that the relaxed PR splitting method is an NE-HPE instance with σ < 1 and λ_k = 1 for all k. Also, (a) implies that {z̃_k} is bounded. Therefore, by Proposition 3.9, there exists z* = (u*, u*, x*) ∈ (L_{θ̂₀} + γC)^{−1}(0) such that (18) holds, where w is as in (33). Hence, we conclude that {x_k} converges to x*. The second identity in (23) then implies that {u_k − v_k} converges to 0. Since J_{γA} is continuous, it follows from the definition of u(x) in (22) and the first identity in (23) that {u_k} converges to J_{γA}(x*), and hence to u*, in view of Lemma 4.3. The latter two conclusions imply that {v_k} also converges to u*. We have thus shown the first conclusion of (b). The second conclusion follows from the first one and Lemma 4.3.

When β = 0, the above result shows that the sequence {z_k = (u_k, v_k, x_k)} generated by the relaxed PR splitting method with θ ∈ (0, 2) converges. The specific instance of Section 5 corresponding to β = 0 shows that the sequence {z_k = (u_k, v_k, x_k)} generated by the relaxed PR splitting method with θ = 2 does not necessarily converge. Note that when β > 0 and θ = 2, the latter sequence always converges in view of Theorem 4.5(b) and the fact that 2θ₀ = 2 + γβ > 2.

In the rest of this section, we consider the pointwise and ergodic convergence rates for the relaxed PR splitting method. We first endow the space Z := X × X × X with the seminorm

‖(u, v, x)‖ := ‖x‖_X,

and hence Proposition 2.1 implies that

‖(0, 0, x)‖* = ‖x‖_X.  (34)

It is also easy to see that the distance generating function w defined in (33) is (m, M)-regular with respect to (Z, ‖·‖) with M = m = 1/θ (see Definition 3.2). Our next goal is to state a pointwise convergence rate bound for the relaxed PR splitting method. We start by stating a technical result which is well known for the case where β = 0 (see for example Lemma 2.4 of [10]). The proof for the general case, i.e., β ≥ 0, is similar and is given in the Appendix for the sake of completeness.

Lemma 4.6 Assume that θ ∈ (0, 2θ₀]. Then, for every k ≥ 2, we have

‖Δx_k‖_X ≤ ‖Δx_{k−1}‖_X,

where Δx_k := x_k − x_{k−1}.

We now state the pointwise convergence rate result for the relaxed PR splitting method.

Theorem 4.7 Consider the sequence {z_k = (u_k, v_k, x_k)} generated by the relaxed PR splitting method with θ ∈ (0, 2θ₀). Then, for every k ≥ 1 and z* = (u*, u*, x*) ∈ (L_{θ̂₀} + γC)^{−1}(0),

a_k + b_k ∈ A(u_k) + B(v_k),  γ‖a_k + b_k‖_X = ‖u_k − v_k‖_X ≤ 2‖x₀ − x*‖_X / (k θ(2θ₀ − θ))^{1/2},

where ‖·‖_X denotes the inner product norm on X.

Proof: The inclusion in the theorem follows from the two inclusions in (28) and (29). It follows from Proposition 4.4(a), and relations (27) and (33), that

r_k = (1/θ)(0, 0, x_{k−1} − x_k) = (0, 0, u_k − v_k) = γ(0, 0, a_k + b_k)  (35)

and hence, in view of (34), that

‖r_k‖* = (1/θ)‖x_k − x_{k−1}‖_X = ‖u_k − v_k‖_X = γ‖a_k + b_k‖_X.

Since by Proposition 4.4 the relaxed PR splitting method with θ ∈ (0, 2θ₀) is an NE-HPE instance for solving the monotone inclusion 0 ∈ (L_{θ̂₀} + γC)(z) in which σ = (θ/θ̂₀ − 1)² < 1 and λ_k = 1 for all k, it follows from the above relation, Lemma 4.6, Theorem 3.7, the fact that M = m = 1/θ, and relations (15) and (33), that

‖r_k‖* ≤ τ(1 + √σ) [ (dw)₀ / ((1 − σ)k) ]^{1/2} ≤ 2‖x₀ − x*‖_X / (k θ(2θ₀ − θ))^{1/2}.

The inequality of the theorem then follows from the above two observations.
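Lemma 4.6 and the pointwise rate can be observed numerically. The sketch below is our own illustration (instance and helper names are our assumptions): it records the step lengths ‖x_k − x_{k−1}‖, which are proportional to the pointwise residual ‖u_k − v_k‖, on a 1-strongly monotone instance with γ = 1, so θ₀ = 1.5 and any θ ∈ (0, 3] is admissible; note that θ = 2.5 > 2 is allowed only because β > 0.

```python
def relaxed_pr_residuals(J_A, J_B, x0, theta, num_iters):
    """Run the relaxed PR iteration and record |x_k - x_{k-1}|
    (= theta*|u_k - v_k|, proportional to the pointwise residual)."""
    x, steps = x0, []
    for _ in range(num_iters):
        u = J_A(x)
        v = J_B(2 * u - x)
        x_new = x + theta * (v - u)
        steps.append(abs(x_new - x))
        x = x_new
    return x, steps

# Our 1-D instance with beta = 1 (gamma = 1): A(u) = u - 2, B(v) = v - 6,
# so theta_0 = 1 + beta/2 = 1.5 and theta = 2.5 lies in (0, 2*theta_0).
J_A = lambda x: (x + 2.0) / 2.0
J_B = lambda x: (x + 6.0) / 2.0
x, steps = relaxed_pr_residuals(J_A, J_B, 50.0, theta=2.5, num_iters=60)
monotone = all(s2 <= s1 + 1e-12 for s1, s2 in zip(steps, steps[1:]))
```

On this instance the step lengths are nonincreasing, as Lemma 4.6 predicts, and x_k converges even though θ exceeds 2, illustrating the role of strong monotonicity.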
Our main goal in the remaining part of this section is to derive ergodic convergence rate bounds for the relaxed PR splitting method for any θ ∈ (0, 2θ₀]. We start by stating the following variation of the transportation lemma for maximal β-strongly monotone operators.

Proposition 4.8 Assume that T is a maximal β-strongly monotone operator for some β ≥ 0. Assume also that t_i ∈ T(u_i) for i = 1, ..., k, and define

t̄_k := (1/k) Σ_{i=1}^{k} t_i,  ū_k := (1/k) Σ_{i=1}^{k} u_i,  ε̄_k := (1/k) Σ_{i=1}^{k} ⟨t_i − βu_i, u_i − ū_k⟩.  (36)

Then, ε̄_k ≥ 0 and t̄_k ∈ T^{[ε̄_k]}(ū_k).

Proof: The assumption that T is a maximal β-strongly monotone operator implies that T − βI is maximal monotone. Hence, it follows from the weak transportation formula (see Theorem 2.3 of [2]) applied to T − βI that ε̄_k ≥ 0 and t̄_k − βū_k ∈ (T − βI)^{[ε̄_k]}(ū_k). The result then follows by observing that (T − βI)^{[ε̄_k]}(ū_k) + βū_k ⊆ T^{[ε̄_k]}(ū_k).

In order to state the ergodic iteration-complexity bound for the relaxed PR splitting method, we introduce the ergodic sequences

ū_k := (1/k) Σ_{i=1}^{k} u_i,  v̄_k := (1/k) Σ_{i=1}^{k} v_i,  ā_k := (1/k) Σ_{i=1}^{k} a_i,  b̄_k := (1/k) Σ_{i=1}^{k} b_i  (37)

and the scalar sequences

ε̄_k := (1/k) Σ_{i=1}^{k} ⟨a_i − βu_i, u_i − ū_k⟩,  ε̄′_k := (1/k) Σ_{i=1}^{k} ⟨b_i − βv_i, v_i − v̄_k⟩.

Theorem 4.9 Assume that θ ∈ (0, 2θ₀] and consider the ergodic sequences above. Then, for every k ≥ 1 and z* = (u*, u*, x*) ∈ (L_{θ̂₀} + γC)^{−1}(0),

ā_k ∈ A^{[ε̄_k]}(ū_k),  b̄_k ∈ B^{[ε̄′_k]}(v̄_k),

γ‖ā_k + b̄_k‖_X = ‖ū_k − v̄_k‖_X ≤ 2‖x₀ − x*‖_X / (θk),  ε̄_k + ε̄′_k ≤ 3(1 + 2(1 − θ̂₀/θ)²)‖x₀ − x*‖²_X / (θγk),

where ‖·‖_X denotes the inner product norm on X.

Proof: The first two inclusions follow from relations (28), (29) and (37), Assumption (B0) and Proposition 4.8. We will now derive the two inequalities of the theorem using the fact that the relaxed PR splitting method with θ ∈ (0, 2θ₀] is an instance of the NE-HPE framework. Letting r_k^a and ε_k^a be defined as in (19), we easily see from Proposition 4.4(a), and relations (35) and (37), that

r_k^a = (0, 0, ū_k − v̄_k) = γ(0, 0, ā_k + b̄_k)

and

ε_k^a = (1/k) Σ_{i=1}^{k} ⟨r_i, z̃_i − z̃_k^a⟩ = (1/k) Σ_{i=1}^{k} ⟨L_{θ̂₀}(z̃_i) + γc_i, z̃_i − z̃_k^a⟩
≥ (1/k) Σ_{i=1}^{k} γ⟨c_i, z̃_i − z̃_k^a⟩ − (γβ/(2k)) Σ_{i=1}^{k} ‖(u_i − ū_k) − (v_i − v̄_k)‖²_X
≥ (γ/k) Σ_{i=1}^{k} (⟨a_i − βu_i, u_i − ū_k⟩ + ⟨b_i − βv_i, v_i − v̄_k⟩) = γ(ε̄_k + ε̄′_k),

where in the first inequality we use the fact that
$$\langle L_{\theta_0}(z_i), z_i - \bar z^a_k\rangle = \langle L_{\theta_0}(z_i) - L_{\theta_0}(\bar z^a_k),\, z_i - \bar z^a_k\rangle \ge -(\theta_0 - 1)\|(u_i - \bar u_k) - (v_i - \bar v_k)\|_X^2 = -\frac{\gamma\beta}{2}\|(u_i - \bar u_k) - (v_i - \bar v_k)\|_X^2$$
and the second inequality follows from
$$\frac{1}{2}\sum_{i=1}^k \|(u_i - \bar u_k) - (v_i - \bar v_k)\|_X^2 \le \sum_{i=1}^k \big(\langle u_i, u_i - \bar u_k\rangle + \langle v_i, v_i - \bar v_k\rangle\big).$$
Hence, it follows from the definition of $\theta_0$, the definition of $w$ in (33), relations (5) and (34), and Theorem 3.10 with $T = L_{\theta_0} + \gamma C$, $M = m = 1/\theta$ and $\lambda_k = 1$ for all $k$, that
$$\gamma\|\bar a_k + \bar b_k\|_X = \|\bar u_k - \bar v_k\|_X = \theta\|\bar r^a_k\| \le \frac{2M\big(2(dw)_0\big)^{1/2}}{m\Lambda_k} \le \frac{2\|x_0 - x^*\|_X}{k}$$
and
$$\gamma(\bar\varepsilon^a_k + \bar\varepsilon^b_k) \le \hat\varepsilon^a_k \le \left(\frac{3M}{m}\right)\frac{2(dw)_0 + \rho_k}{\Lambda_k} \le \frac{3\big(1 + 2(\theta_0/\theta)^2\big)\|x_0 - x^*\|_X^2}{k},$$
since
$$\rho_k := \max_{i=1,\ldots,k} (dw)_{z_i}(\tilde z_i) = \max_{i=1,\ldots,k} \frac{(\theta_0/\theta)^2\|x_i - x_{i-1}\|_X^2}{2} \le 2(\theta_0/\theta)^2\|x_0 - x^*\|_X^2.$$

We now make some remarks about the convergence rate bounds obtained in Theorem 4.9. In view of Lemma 4.3, $x^*$ depends on $\gamma$ according to
$$x^* = \gamma a^* + u^*, \qquad a^* \in A(u^*) \cap \big(-B(u^*)\big).$$
Hence, letting
$$d_0 := \inf\{\|x_0 - u^*\|_X : u^* \in (A+B)^{-1}(0)\}, \qquad S := \sup\{\|a^*\| : a^* \in A(u^*) \cap (-B(u^*)),\ u^* \in (A+B)^{-1}(0)\},$$
and assuming that $S < \infty$, it is easy to see that Theorem 4.9 and (3) imply that the relaxed PR splitting method with $\theta = 2\theta_0$ satisfies
$$\|\bar a_k + \bar b_k\|_X \le \frac{C_1(\gamma)}{\gamma k}, \qquad \|\bar u_k - \bar v_k\|_X \le \frac{C_1(\gamma)}{k}, \qquad \bar\varepsilon^a_k + \bar\varepsilon^b_k \le \frac{C_2(\gamma)}{k},$$
where
$$C_1(\gamma) = C_1(\gamma; \beta, d_0) = \Theta\left(\frac{d_0 + \gamma S}{1 + \beta\gamma}\right), \qquad C_2(\gamma) = C_2(\gamma; \beta, d_0) = \Theta\left(\frac{(d_0 + \gamma S)^2}{\gamma(1 + \beta\gamma)}\right).$$
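The behavior of the constants $C_1(\gamma)$ and $C_2(\gamma)$ is easy to explore numerically. The sketch below (our illustration; it ignores the hidden $\Theta$-constants and picks arbitrary values for $d_0$, $S$, $\beta$) checks that in the regime $S/\beta \ge d_0$ the stepsize $\gamma = d_0/S$ is within a small multiplicative factor of the best value of both constants over a wide grid.

```python
# Numerical illustration of the discussion following Theorem 4.9:
#   C1(g) = (d0 + g*S) / (1 + beta*g)
#   C2(g) = (d0 + g*S)**2 / (g * (1 + beta*g))
# When S/beta >= d0, the choice g = d0/S is near-optimal for both.

def C1(g, d0, S, beta):
    return (d0 + g * S) / (1.0 + beta * g)

def C2(g, d0, S, beta):
    return (d0 + g * S) ** 2 / (g * (1.0 + beta * g))

d0, S, beta = 1.0, 2.0, 0.5                      # here S/beta = 4 >= d0 = 1
grid = [10 ** (j / 10.0) for j in range(-60, 61)]  # gammas from 1e-6 to 1e6
g_star = d0 / S

best_C1 = min(C1(g, d0, S, beta) for g in grid)
best_C2 = min(C2(g, d0, S, beta) for g in grid)

# gamma = d0/S is optimal up to a small multiplicative constant:
assert C1(g_star, d0, S, beta) <= 2.0 * best_C1
assert C2(g_star, d0, S, beta) <= 2.0 * best_C2
```

Varying `beta` so that $S/\beta < d_0$ instead shows $C_1$ and $C_2$ flattening out for all large $\gamma$, matching the second case of the discussion.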

When $S/\beta \ge d_0$, then $\gamma = d_0/S$ minimizes both $C_1(\cdot)$ and $C_2(\cdot)$ up to a multiplicative constant, in which case $C_1^* = \Theta(d_0)$, $C_1^*/\gamma = \Theta(S)$ and $C_2^* = \Theta(S d_0)$, where
$$C_1^* = C_1^*(\beta, d_0) := \inf\{C_1(\gamma) : \gamma > 0\}, \qquad C_2^* = C_2^*(\beta, d_0) := \inf\{C_2(\gamma) : \gamma > 0\}.$$
Note that this case includes the case in which $\beta = 0$. On the other hand, when $S/\beta < d_0$, then both $C_1^*$ and $C_2^*$ are minimized up to a multiplicative constant by any $\gamma \ge d_0/S$, in which case $C_1^* = \Theta(S/\beta)$ and $C_2^* = \Theta(S^2/\beta)$. Clearly, in this case, $C_1^*/\gamma$ converges to zero as $\gamma$ tends to infinity. Indeed, assume first that $S/\beta \ge d_0$. Then, up to some multiplicative constants, we have
$$C_1(\gamma) = \frac{d_0 + \gamma S}{1 + \beta\gamma} \ge \frac{d_0 + \gamma S}{1 + S\gamma/d_0} = d_0, \qquad C_2(\gamma) = \frac{(d_0 + \gamma S)^2}{\gamma(1 + \beta\gamma)} \ge \frac{(d_0 + \gamma S)^2}{\gamma(1 + S\gamma/d_0)} = \frac{d_0(d_0 + \gamma S)}{\gamma} = \frac{d_0^2}{\gamma} + S d_0,$$
and hence that $C_1^* = \Omega(d_0)$ and $C_2^* = \Omega(S d_0)$. Moreover, if $\gamma = d_0/S$, then the assumption $S/\beta \ge d_0$ implies that $\beta\gamma \le 1$, and hence that $C_1^* = \Theta(d_0)$ and $C_2^* = \Theta(S d_0)$. Assume now that $S/\beta < d_0$. Then, up to multiplicative constants, it is easy to see that
$$C_1(\gamma) = \frac{d_0 + \gamma S}{1 + \beta\gamma} \ge \frac{S/\beta + \gamma S}{1 + \beta\gamma} = \frac{S}{\beta}, \qquad C_2(\gamma) = \frac{(d_0 + \gamma S)^2}{\gamma(1 + \beta\gamma)} \ge \frac{(S/\beta + \gamma S)^2}{\gamma(1 + \beta\gamma)} = \frac{S^2(1 + \gamma\beta)}{\gamma\beta^2},$$
and hence that $C_1^* = \Omega(S/\beta)$ and $C_2^* = \Omega(S^2/\beta)$. Moreover, if $\gamma \ge d_0/S$, then it is easy to see that $C_1^* = \Theta(S/\beta)$ and $C_2^* = \Theta(S^2/\beta)$. Based on the above discussion, the choice $\gamma = d_0/S$ is optimal but has the disadvantage that $d_0$ is generally difficult to compute. One possibility around this difficulty is to use $\gamma = D_0/S$ where $D_0$ is an upper bound on $d_0$.

5 An Example

By Theorem 4.5(b), the sequence $\{x_k\}$ generated by the relaxed PR splitting method converges whenever $\theta \in (0, 2+\gamma\beta)$. This section gives an instance of (1) for which the sequence $\{x_k\}$ generated by the relaxed PR splitting method does not converge when $\theta \ge 2(1+\gamma\beta)$. Recall from (22) and (23) that the relaxed PR splitting method iterates as
$$x_{k+1} = x_k + \theta\big(J_{\gamma B}(2J_{\gamma A}(x_k) - x_k) - J_{\gamma A}(x_k)\big) \tag{38}$$
where $\theta > 0$. Without any loss of generality, we assume that $\gamma = 1$ in (38).
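Iteration (38) is straightforward to implement once the two resolvents are available. The following minimal sketch (our illustration, not the paper's example) runs it with $\gamma = 1$ on the toy instance $A(u) = u - 2$ and $B(u) = u$ in $X = \mathbb{R}$, where both operators are $\beta$-strongly monotone with $\beta = 1$; the solution of $0 \in (A+B)(u)$ is $u^* = 1$, and the fixed point of the iteration is $x^* = u^* + \gamma A(u^*) = 0$.

```python
# Relaxed PR splitting, iteration (38) with gamma = 1, on a toy 1-D instance.

def J(x, slope, shift):
    """Resolvent (I + T)^{-1} of the affine operator T(u) = slope*u + shift."""
    return (x - shift) / (1.0 + slope)

def relaxed_pr_step(x, theta):
    u = J(x, 1.0, -2.0)              # u_k = J_A(x_k) with A(u) = u - 2
    v = J(2.0 * u - x, 1.0, 0.0)     # v_k = J_B(2 J_A(x_k) - x_k) with B(u) = u
    return x + theta * (v - u)       # x_{k+1} per (38)

theta = 2.5                          # inside (0, 2 + gamma*beta) = (0, 3)
x = 10.0
for _ in range(200):
    x = relaxed_pr_step(x, theta)

assert abs(x - 0.0) < 1e-10          # iterates converge to x* = 0
assert abs(J(x, 1.0, -2.0) - 1.0) < 1e-10  # J_A(x_k) -> solution u* = 1
```

Note that $\theta = 2.5$ lies above $2$, so this run already exercises the regime beyond the Peaceman-Rachford value that the strong monotonicity of $A$ and $B$ makes available.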

We will now describe our instance. First, let $A_0, B_0 : \mathbb{R} \rightrightarrows \mathbb{R}$ be defined as
$$A_0(u) := \begin{cases} -\theta + \dfrac{1+\beta}{4}, & \text{if } u < -\dfrac{1}{4},\\ \left[-\dfrac{\theta}{2}\left(1+\dfrac{1}{k}\right) + \dfrac{1+\beta}{2k(k+1)},\ -\dfrac{\theta}{2}\left(1+\dfrac{1}{k+1}\right) + \dfrac{1+\beta}{2(k+1)(k+2)}\right], & \text{if } u = -\dfrac{1}{2k(k+1)},\ k = 1, \ldots,\\ -\dfrac{\theta}{2}\left(1+\dfrac{1}{k+1}\right) + \dfrac{1+\beta}{2(k+1)(k+2)}, & \text{if } -\dfrac{1}{2k(k+1)} < u < -\dfrac{1}{2(k+1)(k+2)},\ k = 1, \ldots,\\ \left[-\dfrac{\theta}{2},\ \dfrac{\theta}{2}\right], & \text{if } u = 0,\\ \dfrac{\theta}{2}\left(1+\dfrac{1}{k+1}\right) - \dfrac{1+\beta}{2(k+1)(k+2)}, & \text{if } \dfrac{1}{2(k+1)(k+2)} < u < \dfrac{1}{2k(k+1)},\ k = 1, \ldots,\\ \left[\dfrac{\theta}{2}\left(1+\dfrac{1}{k+1}\right) - \dfrac{1+\beta}{2(k+1)(k+2)},\ \dfrac{\theta}{2}\left(1+\dfrac{1}{k}\right) - \dfrac{1+\beta}{2k(k+1)}\right], & \text{if } u = \dfrac{1}{2k(k+1)},\ k = 1, \ldots,\\ \theta - \dfrac{1+\beta}{4}, & \text{if } u > \dfrac{1}{4}, \end{cases}$$
and
$$B_0(u) := \begin{cases} 2 - \theta + \dfrac{3\beta}{2}, & \text{if } u < -\dfrac{3}{2},\\ \left[-\left(\dfrac{\theta}{2}-1\right)\left(1+\dfrac{1}{k}\right) + \beta\left(1+\dfrac{1}{k+1}\right),\ -\left(\dfrac{\theta}{2}-1\right)\left(1+\dfrac{1}{k+1}\right) + \beta\left(1+\dfrac{1}{k+2}\right)\right], & \text{if } u = -\left(1+\dfrac{1}{k+1}\right),\ k = 1, \ldots,\\ -\left(\dfrac{\theta}{2}-1\right)\left(1+\dfrac{1}{k+1}\right) + \beta\left(1+\dfrac{1}{k+2}\right), & \text{if } -\left(1+\dfrac{1}{k+1}\right) < u < -\left(1+\dfrac{1}{k+2}\right),\ k = 1, \ldots,\\ \left(\dfrac{\theta}{2} - 1 - \beta\right)u, & \text{if } -1 \le u \le 1,\\ \left(\dfrac{\theta}{2}-1\right)\left(1+\dfrac{1}{k+1}\right) - \beta\left(1+\dfrac{1}{k+2}\right), & \text{if } 1+\dfrac{1}{k+2} < u < 1+\dfrac{1}{k+1},\ k = 1, \ldots,\\ \left[\left(\dfrac{\theta}{2}-1\right)\left(1+\dfrac{1}{k+1}\right) - \beta\left(1+\dfrac{1}{k+2}\right),\ \left(\dfrac{\theta}{2}-1\right)\left(1+\dfrac{1}{k}\right) - \beta\left(1+\dfrac{1}{k+1}\right)\right], & \text{if } u = 1+\dfrac{1}{k+1},\ k = 1, \ldots,\\ \theta - 2 - \dfrac{3\beta}{2}, & \text{if } u > \dfrac{3}{2}. \end{cases}$$
Note that $A_0$ and $B_0$ are maximal monotone operators, where $A_0$ and $B_0$ are monotone since $\theta \ge 2(1+\beta)$. Letting $A := A_0 + \beta I$ and $B := B_0 + \beta I$ where $\beta \ge 0$, it then follows that $A$ and $B$ are maximal $\beta$-strongly monotone operators. It is easy to see that
$$J_A(x) = J_{\frac{1}{1+\beta}A_0}\left(\frac{x}{1+\beta}\right), \qquad J_B(x) = J_{\frac{1}{1+\beta}B_0}\left(\frac{x}{1+\beta}\right).$$
Hence, if $x_k = \frac{\theta}{2}\left(1 + \frac{1}{k}\right)$, it is easy to see that
$$J_A(x_k) = J_{\frac{1}{1+\beta}A_0}\left(\frac{x_k}{1+\beta}\right) = \frac{1}{2k(k+1)},$$
and hence that
$$J_B(2J_A(x_k) - x_k) = J_{\frac{1}{1+\beta}B_0}\left(\frac{2J_A(x_k) - x_k}{1+\beta}\right) = -\left(1 + \frac{1}{k+1}\right) = 2J_A(x_k) - \frac{2}{\theta}x_k.$$
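The closed-form identity $J_B(2J_A(x_k) - x_k) = 2J_A(x_k) - (2/\theta)x_k$ can be verified exactly in rational arithmetic. The sketch below (a consistency check of the displayed formulas, assuming the stated values of $J_A(x_k)$; not part of the paper) confirms that it equals $-(1 + 1/(k+1))$ for every $k$.

```python
from fractions import Fraction as F

# Exact check: with gamma = 1 and theta >= 2(1+beta), if
#   x_k = (theta/2)(1 + 1/k)   and   J_A(x_k) = 1/(2k(k+1)),
# then 2*J_A(x_k) - (2/theta)*x_k = -(1 + 1/(k+1)).

beta = F(1, 2)
theta = 2 * (1 + beta)               # theta = 2(1+beta) = 3
for k in range(1, 50):
    x = (theta / 2) * (1 + F(1, k))
    jA = F(1, 2 * k * (k + 1))
    jB = 2 * jA - (2 / theta) * x    # claimed value of J_B(2 J_A(x_k) - x_k)
    assert jB == -(1 + F(1, k + 1))
```

Because `Fraction` arithmetic is exact, the assertion confirms the algebra symbolically for the chosen $\theta$ rather than up to floating-point error.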

Thus, (38) yields
$$x_{k+1} = x_k + \theta\left[J_B(2J_A(x_k) - x_k) - J_A(x_k)\right] = x_k + \theta\left(2J_A(x_k) - \frac{2}{\theta}x_k - J_A(x_k)\right) = -x_k + \theta J_A(x_k) = -\frac{\theta}{2}\left(1 + \frac{1}{k+1}\right).$$
Similarly, if $x_k = -\frac{\theta}{2}\left(1 + \frac{1}{k}\right)$, it is easy to check that $x_{k+1} = \frac{\theta}{2}\left(1 + \frac{1}{k+1}\right)$. Hence, if $x_0 = \theta$, then the sequence $\{x_k\}$ generated by the relaxed PR splitting method with $\theta \ge 2(1+\beta)$ for the above instance does not converge.

The above example shows that the sequence $\{x_k\}$ does not converge whenever $\theta \ge 2(1+\gamma\beta)$. On the other hand, Theorem 4.5(b) shows that $\{x_k\}$ converges whenever $0 < \theta < 2 + \gamma\beta$. It is an open problem to investigate the behavior of $\{x_k\}$ when $\theta \in [2+\gamma\beta, 2+2\gamma\beta)$ and $\beta > 0$. Finally, we mention without providing a proof that $\{x_k\}$ converges when $\theta = 2+\gamma\beta$, $\beta > 0$ and $X = \mathbb{R}$.

6 Concluding remarks

This paper establishes convergence of the iterates and an $O(1/\sqrt{k})$ pointwise convergence rate bound for the relaxed PR splitting method for any $\theta \in (0, 2+\gamma\beta)$ by viewing it as an instance of a non-Euclidean HPE framework. It also establishes an ergodic $O(1/k)$ convergence rate bound for it for any $\theta \in (0, 2+\gamma\beta]$. Furthermore, an example showing that its iterates do not necessarily converge for $\theta \ge 2(1+\gamma\beta)$ is given. We observe that our analysis, in contrast to the ones in [5, 8], does not impose any regularity condition on $A$ and $B$ such as assuming one of them to be a Lipschitz, and hence point-to-point, operator. Also, if only one of the operators, say $A$, is assumed to be maximal $\beta$-strongly monotone, then (1) is equivalent to $0 \in (A' + B')(u)$ where $A' := A - (\beta/2)I$ and $B' := B + (\beta/2)I$ are now both $(\beta/2)$-strongly monotone. Thus, to solve (1), the relaxed PR method with $(A, B)$ replaced by $(A', B')$ can be applied, thereby ensuring convergence of the iterates, as well as pointwise and ergodic convergence rate bounds, for values of $\theta > 2$.

References

[1] H.H. Bauschke and P.L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.

[2] R.S. Burachik, C.A. Sagastizábal, and B.F. Svaiter.
ε-enlargements of maximal monotone operators: Theory and applications. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods (Lausanne, 1997), volume 22 of Applied Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.

[3] D. Davis. Convergence rate analysis of the forward-Douglas-Rachford splitting scheme. SIAM Journal on Optimization, 25, 2015.

[4] D. Davis and W. Yin. Convergence rate analysis of several splitting schemes. arXiv preprint, 2015.

[5] D. Davis and W. Yin. Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions. arXiv preprint, 2015.

[6] J. Eckstein and D.P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55:293-318, 1992.

[7] F. Facchinei and J.-S. Pang. Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume II. Springer-Verlag, New York, 2003.

[8] P. Giselsson and S. Boyd. Linear convergence and metric selection for Douglas-Rachford splitting and ADMM. arXiv preprint, 2016.

[9] M.L.N. Goncalves, J.G. Melo, and R.D.C. Monteiro. Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework. arXiv preprint, 2016.

[10] B. He and X. Yuan. On the convergence rate of Douglas-Rachford operator splitting method. Mathematical Programming, Series A, 153:715-722, 2015.

[11] Y. He and R.D.C. Monteiro. Accelerating block-decomposition first-order methods for solving composite saddle-point and two-player Nash equilibrium problems. SIAM Journal on Optimization, 25:2182-2211, 2015.

[12] Y. He and R.D.C. Monteiro. An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems. SIAM Journal on Optimization, 26:29-56, 2016.

[13] O. Kolossoski and R.D.C. Monteiro. An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems. Preprint, 2015.

[14] R.D.C. Monteiro and B.F. Svaiter. Complexity of variants of Tseng's modified F-B splitting and Korpelevich's methods for hemivariational inequalities with applications to saddle-point and convex optimization problems. SIAM Journal on Optimization, 21(4), 2011.

[15] R.D.C. Monteiro and B.F. Svaiter.
On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM Journal on Optimization, 20, 2010.

[16] R.D.C. Monteiro and B.F. Svaiter. Complexity of variants of Tseng's modified F-B splitting and Korpelevich's methods for hemivariational inequalities with applications to saddle-point and convex optimization problems. SIAM Journal on Optimization, 21, 2011.

[17] R.T. Rockafellar. On the maximality of sums of nonlinear monotone operators. Transactions of the American Mathematical Society, 149:75-88, 1970.

[18] R.T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14:877-898, 1976.

[19] M.V. Solodov and B.F. Svaiter. A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Analysis, 7, 1999.

[20] M.V. Solodov and B.F. Svaiter. An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Mathematics of Operations Research, 25:214-230, 2000.

Appendix

Proof of Lemma 4.6: To simplify notation, let
$$\Delta x := x_k - x_{k-1}, \quad \Delta x^+ := x_{k+1} - x_k, \quad \Delta u := u_k - u_{k-1}, \quad \Delta v := v_k - v_{k-1}, \quad \Delta a := a_k - a_{k-1}, \quad \Delta b := b_k - b_{k-1}.$$
Then, it follows from the second identity in (23) and relation (27) that
$$\Delta x^+ = \Delta x + \theta(\Delta v - \Delta u), \qquad \gamma\Delta a = \Delta x - \Delta u, \qquad \gamma\Delta b = 2\Delta u - \Delta v - \Delta x. \tag{39}$$
Also, the two inclusions in (28) and (29) together with the $\beta$-strong monotonicity of $A$ and $B$ imply that
$$\langle \Delta a, \Delta u\rangle \ge \beta\|\Delta u\|_X^2, \qquad \langle \Delta b, \Delta v\rangle \ge \beta\|\Delta v\|_X^2.$$
Combining the last two identities in (39) with the above inequalities, we obtain
$$\langle \Delta x - \Delta u, \Delta u\rangle \ge \gamma\beta\|\Delta u\|_X^2, \qquad \langle 2\Delta u - \Delta v - \Delta x, \Delta v\rangle \ge \gamma\beta\|\Delta v\|_X^2.$$
Adding these two last inequalities and simplifying the resulting expression, we obtain
$$\langle \Delta x, \Delta u - \Delta v\rangle + 2\langle \Delta u, \Delta v\rangle \ge (1 + \gamma\beta)\left[\|\Delta u\|_X^2 + \|\Delta v\|_X^2\right]. \tag{40}$$
From the first equality in (39), we have
$$2\theta\langle \Delta x, \Delta u - \Delta v\rangle = \|\Delta x\|_X^2 - \|\Delta x^+\|_X^2 + \theta^2\|\Delta v - \Delta u\|_X^2,$$
which upon substituting into (40) shows that the following is true:
$$\|\Delta x\|_X^2 - \|\Delta x^+\|_X^2 \ge 2\theta\left(\left[1 + \gamma\beta - \frac{\theta}{2}\right]\big(\|\Delta u\|_X^2 + \|\Delta v\|_X^2\big) + 2\left[\frac{\theta}{2} - 1\right]\langle \Delta u, \Delta v\rangle\right).$$
Note that the right-hand side in the above inequality is greater than or equal to zero if $\theta \in (0, 2\theta_0]$. Hence, if $\theta \in (0, 2\theta_0]$, then $\|\Delta x^+\|_X \le \|\Delta x\|_X$.
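The conclusion of Lemma 4.6 — that the increment norms $\|x_{k+1} - x_k\|_X$ are nonincreasing for $\theta \in (0, 2\theta_0]$ — can be observed numerically. The sketch below (our illustration on the same toy instance $A(u) = u - 2$, $B(u) = u$ with $\gamma = 1$ and $\beta = 1$ used earlier; not the paper's example) also shows the oscillation phenomenon at $\theta = 2(1 + \gamma\beta)$, where on this instance the iteration map reduces to $x \mapsto -x$.

```python
# Increment monotonicity of the relaxed PR method (Lemma 4.6) on a toy
# 1-D instance with gamma = 1, beta = 1, so 2 + gamma*beta = 3 and
# 2(1 + gamma*beta) = 4.

def step(x, theta):
    u = (x + 2.0) / 2.0              # J_A(x) for A(u) = u - 2
    v = (2.0 * u - x) / 2.0          # J_B(2 J_A(x) - x) for B(u) = u
    return x + theta * (v - u)

def increments(theta, x0=5.0, n=30):
    """Return |x_{k+1} - x_k| along a trajectory started at x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], theta))
    return [abs(b - a) for a, b in zip(xs, xs[1:])]

for theta in (0.5, 1.0, 2.0, 3.0):   # all within (0, 2 + gamma*beta]
    inc = increments(theta)
    assert all(b <= a + 1e-12 for a, b in zip(inc, inc[1:]))

# At theta = 2(1 + gamma*beta) = 4 the map here is x -> -x: no convergence.
assert step(step(5.0, 4.0), 4.0) == 5.0
```

On this particular affine instance the increments happen to be nonincreasing for a slightly larger range of $\theta$ than the lemma guarantees; the lemma's interval $(0, 2\theta_0]$ is what holds for every pair of maximal $\beta$-strongly monotone operators.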


More information

Heinz H. Bauschke and Walaa M. Moursi. December 1, Abstract

Heinz H. Bauschke and Walaa M. Moursi. December 1, Abstract The magnitude of the minimal displacement vector for compositions and convex combinations of firmly nonexpansive mappings arxiv:1712.00487v1 [math.oc] 1 Dec 2017 Heinz H. Bauschke and Walaa M. Moursi December

More information

Primal-dual first-order methods with O(1/ǫ) iteration-complexity for cone programming

Primal-dual first-order methods with O(1/ǫ) iteration-complexity for cone programming Mathematical Programming manuscript No. (will be inserted by the editor) Primal-dual first-order methods with O(1/ǫ) iteration-complexity for cone programming Guanghui Lan Zhaosong Lu Renato D. C. Monteiro

More information

Monotone operators and bigger conjugate functions

Monotone operators and bigger conjugate functions Monotone operators and bigger conjugate functions Heinz H. Bauschke, Jonathan M. Borwein, Xianfu Wang, and Liangjin Yao August 12, 2011 Abstract We study a question posed by Stephen Simons in his 2008

More information

A Dykstra-like algorithm for two monotone operators

A Dykstra-like algorithm for two monotone operators A Dykstra-like algorithm for two monotone operators Heinz H. Bauschke and Patrick L. Combettes Abstract Dykstra s algorithm employs the projectors onto two closed convex sets in a Hilbert space to construct

More information

Monotone variational inequalities, generalized equilibrium problems and fixed point methods

Monotone variational inequalities, generalized equilibrium problems and fixed point methods Wang Fixed Point Theory and Applications 2014, 2014:236 R E S E A R C H Open Access Monotone variational inequalities, generalized equilibrium problems and fixed point methods Shenghua Wang * * Correspondence:

More information

arxiv: v2 [math.oc] 21 Nov 2017

arxiv: v2 [math.oc] 21 Nov 2017 Unifying abstract inexact convergence theorems and block coordinate variable metric ipiano arxiv:1602.07283v2 [math.oc] 21 Nov 2017 Peter Ochs Mathematical Optimization Group Saarland University Germany

More information

Projective Splitting Methods for Pairs of Monotone Operators

Projective Splitting Methods for Pairs of Monotone Operators Projective Splitting Methods for Pairs of Monotone Operators Jonathan Eckstein B. F. Svaiter August 13, 2003 Abstract By embedding the notion of splitting within a general separator projection algorithmic

More information

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem

Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem Advances in Operations Research Volume 01, Article ID 483479, 17 pages doi:10.1155/01/483479 Research Article Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem

More information

Downloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see

Downloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see SIAM J. OPTIM. Vol. 11, No. 4, pp. 962 973 c 2001 Society for Industrial and Applied Mathematics MONOTONICITY OF FIXED POINT AND NORMAL MAPPINGS ASSOCIATED WITH VARIATIONAL INEQUALITY AND ITS APPLICATION

More information

On duality gap in linear conic problems

On duality gap in linear conic problems On duality gap in linear conic problems C. Zălinescu Abstract In their paper Duality of linear conic problems A. Shapiro and A. Nemirovski considered two possible properties (A) and (B) for dual linear

More information

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Proximal-like contraction methods for monotone variational inequalities in a unified framework Proximal-like contraction methods for monotone variational inequalities in a unified framework Bingsheng He 1 Li-Zhi Liao 2 Xiang Wang Department of Mathematics, Nanjing University, Nanjing, 210093, China

More information

Maximal monotone operators are selfdual vector fields and vice-versa

Maximal monotone operators are selfdual vector fields and vice-versa Maximal monotone operators are selfdual vector fields and vice-versa Nassif Ghoussoub Department of Mathematics, University of British Columbia, Vancouver BC Canada V6T 1Z2 nassif@math.ubc.ca February

More information

arxiv: v1 [math.oc] 16 May 2018

arxiv: v1 [math.oc] 16 May 2018 An Algorithmic Framewor of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method Li Shen 1 Peng Sun 1 Yitong Wang 1 Wei Liu 1 Tong Zhang 1 arxiv:180506137v1 [mathoc] 16 May 2018 Abstract We

More information

A New Use of Douglas-Rachford Splitting and ADMM for Identifying Infeasible, Unbounded, and Pathological Conic Programs

A New Use of Douglas-Rachford Splitting and ADMM for Identifying Infeasible, Unbounded, and Pathological Conic Programs A New Use of Douglas-Rachford Splitting and ADMM for Identifying Infeasible, Unbounded, and Pathological Conic Programs Yanli Liu Ernest K. Ryu Wotao Yin October 14, 2017 Abstract In this paper, we present

More information

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4890 4900 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A generalized forward-backward

More information

Maximal Monotone Inclusions and Fitzpatrick Functions

Maximal Monotone Inclusions and Fitzpatrick Functions JOTA manuscript No. (will be inserted by the editor) Maximal Monotone Inclusions and Fitzpatrick Functions J. M. Borwein J. Dutta Communicated by Michel Thera. Abstract In this paper, we study maximal

More information

Error bounds for proximal point subproblems and associated inexact proximal point algorithms

Error bounds for proximal point subproblems and associated inexact proximal point algorithms Error bounds for proximal point subproblems and associated inexact proximal point algorithms M. V. Solodov B. F. Svaiter Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico,

More information

A GENERALIZATION OF THE REGULARIZATION PROXIMAL POINT METHOD

A GENERALIZATION OF THE REGULARIZATION PROXIMAL POINT METHOD A GENERALIZATION OF THE REGULARIZATION PROXIMAL POINT METHOD OGANEDITSE A. BOIKANYO AND GHEORGHE MOROŞANU Abstract. This paper deals with the generalized regularization proximal point method which was

More information

An Algorithm for Solving Triple Hierarchical Pseudomonotone Variational Inequalities

An Algorithm for Solving Triple Hierarchical Pseudomonotone Variational Inequalities , March 15-17, 2017, Hong Kong An Algorithm for Solving Triple Hierarchical Pseudomonotone Variational Inequalities Yung-Yih Lur, Lu-Chuan Ceng and Ching-Feng Wen Abstract In this paper, we introduce and

More information

An adaptive accelerated first-order method for convex optimization

An adaptive accelerated first-order method for convex optimization An adaptive accelerated first-order method for convex optimization Renato D.C Monteiro Camilo Ortiz Benar F. Svaiter July 3, 22 (Revised: May 4, 24) Abstract This paper presents a new accelerated variant

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Patrick L. Combettes joint work with J.-C. Pesquet) Laboratoire Jacques-Louis Lions Faculté de Mathématiques

More information

Convergence Rates for Projective Splitting

Convergence Rates for Projective Splitting Convergence Rates for Projective Splitting Patric R. Johnstone Jonathan Ecstein August 8, 018 Abstract Projective splitting is a family of methods for solving inclusions involving sums of maximal monotone

More information

Abstract In this article, we consider monotone inclusions in real Hilbert spaces

Abstract In this article, we consider monotone inclusions in real Hilbert spaces Noname manuscript No. (will be inserted by the editor) A new splitting method for monotone inclusions of three operators Yunda Dong Xiaohuan Yu Received: date / Accepted: date Abstract In this article,

More information

Une méthode proximale pour les inclusions monotones dans les espaces de Hilbert, avec complexité O(1/k 2 ).

Une méthode proximale pour les inclusions monotones dans les espaces de Hilbert, avec complexité O(1/k 2 ). Une méthode proximale pour les inclusions monotones dans les espaces de Hilbert, avec complexité O(1/k 2 ). Hedy ATTOUCH Université Montpellier 2 ACSIOM, I3M UMR CNRS 5149 Travail en collaboration avec

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Douglas-Rachford splitting for nonconvex feasibility problems

Douglas-Rachford splitting for nonconvex feasibility problems Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying

More information

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems Extensions of the CQ Algorithm for the Split Feasibility Split Equality Problems Charles L. Byrne Abdellatif Moudafi September 2, 2013 Abstract The convex feasibility problem (CFP) is to find a member

More information

On nonexpansive and accretive operators in Banach spaces

On nonexpansive and accretive operators in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive

More information

On the acceleration of the double smoothing technique for unconstrained convex optimization problems

On the acceleration of the double smoothing technique for unconstrained convex optimization problems On the acceleration of the double smoothing technique for unconstrained convex optimization problems Radu Ioan Boţ Christopher Hendrich October 10, 01 Abstract. In this article we investigate the possibilities

More information

Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings

Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings Mathematical Programming manuscript No. (will be inserted by the editor) Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings Heinz H. Bauschke

More information

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Int. Journal of Math. Analysis, Vol. 1, 2007, no. 4, 175-186 Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Haiyun Zhou Institute

More information

Maximal Monotone Operators with a Unique Extension to the Bidual

Maximal Monotone Operators with a Unique Extension to the Bidual Journal of Convex Analysis Volume 16 (2009), No. 2, 409 421 Maximal Monotone Operators with a Unique Extension to the Bidual M. Marques Alves IMPA, Estrada Dona Castorina 110, 22460-320 Rio de Janeiro,

More information

arxiv: v4 [math.oc] 29 Jan 2018

arxiv: v4 [math.oc] 29 Jan 2018 Noname manuscript No. (will be inserted by the editor A new primal-dual algorithm for minimizing the sum of three functions with a linear operator Ming Yan arxiv:1611.09805v4 [math.oc] 29 Jan 2018 Received:

More information

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT Linear and Nonlinear Analysis Volume 1, Number 1, 2015, 1 PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT KAZUHIRO HISHINUMA AND HIDEAKI IIDUKA Abstract. In this

More information

About Split Proximal Algorithms for the Q-Lasso

About Split Proximal Algorithms for the Q-Lasso Thai Journal of Mathematics Volume 5 (207) Number : 7 http://thaijmath.in.cmu.ac.th ISSN 686-0209 About Split Proximal Algorithms for the Q-Lasso Abdellatif Moudafi Aix Marseille Université, CNRS-L.S.I.S

More information

INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS. Yazheng Dang. Jie Sun. Honglei Xu

INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS. Yazheng Dang. Jie Sun. Honglei Xu Manuscript submitted to AIMS Journals Volume X, Number 0X, XX 200X doi:10.3934/xx.xx.xx.xx pp. X XX INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS Yazheng Dang School of Management

More information

Algorithms for Bilevel Pseudomonotone Variational Inequality Problems

Algorithms for Bilevel Pseudomonotone Variational Inequality Problems Algorithms for Bilevel Pseudomonotone Variational Inequality Problems B.V. Dinh. L.D. Muu Abstract. We propose easily implementable algorithms for minimizing the norm with pseudomonotone variational inequality

More information

SEMI-SMOOTH SECOND-ORDER TYPE METHODS FOR COMPOSITE CONVEX PROGRAMS

SEMI-SMOOTH SECOND-ORDER TYPE METHODS FOR COMPOSITE CONVEX PROGRAMS SEMI-SMOOTH SECOND-ORDER TYPE METHODS FOR COMPOSITE CONVEX PROGRAMS XIANTAO XIAO, YONGFENG LI, ZAIWEN WEN, AND LIWEI ZHANG Abstract. The goal of this paper is to study approaches to bridge the gap between

More information