ADMM for monotone operators: convergence analysis and rates


ADMM for monotone operators: convergence analysis and rates

Radu Ioan Boţ    Ernö Robert Csetnek

May 4, 2017

Abstract. We propose in this paper a unifying scheme for several algorithms from the literature dedicated to the solving of monotone inclusion problems involving compositions with linear continuous operators in infinite dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions, and also the classical ADMM numerical scheme for convex optimization problems along with some of its variants, can be embedded in this unifying scheme. While in the first part of the paper convergence results for the iterates are reported, the second part is devoted to the derivation of convergence rates obtained by combining variable metric techniques with strategies based on a suitable choice of dynamical step sizes.

Key Words. monotone operators, primal-dual algorithm, ADMM algorithm, subdifferential, convex optimization, Fenchel duality

AMS subject classification. 47H05, 65K05, 90C25

1 Introduction and preliminaries

Consider the convex optimization problem

inf_{x ∈ H} { f(x) + g(Lx) + h(x) },   (1)

where H and G are real Hilbert spaces, f : H → R̄ := R ∪ {±∞} and g : G → R̄ are proper, convex and lower semicontinuous functions, h : H → R is a convex and Fréchet differentiable function with Lipschitz continuous gradient, and L : H → G is a linear continuous operator. Due to numerous applications in fields like signal and image processing, portfolio optimization, cluster analysis, location theory, network communication and machine learning, the design and investigation of numerical algorithms for solving convex optimization problems of type (1) have attracted in the last couple of years huge interest from the applied mathematics community. The most prominent methods one can find in the literature for solving (1) are the primal-dual proximal splitting algorithms and the ADMM algorithms. We briefly describe the two classes of algorithms.
The first proximal splitting algorithms for solving convex optimization problems involving compositions with linear continuous operators have been reported by Combettes and Wajs [6],

University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria, radu.bot@univie.ac.at. Research partially supported by FWF (Austrian Science Fund), project I 2419-N32.

University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria, ernoe.robert.csetnek@univie.ac.at. Research supported by FWF (Austrian Science Fund), project P 29809-N32.

Esser, Zhang and Chan [] and Chambolle and Pock [3]. Further investigations have been made in the more general framework of finding zeros of sums of linearly composed maximally monotone operators, and monotone and Lipschitz, respectively, cocoercive operators. The resulting numerical schemes have been employed in the solving of the inclusion problem

find x ∈ H such that 0 ∈ ∂f(x) + (L* ∘ ∂g ∘ L)(x) + ∇h(x),   (2)

which represents the system of optimality conditions of problem (1). Briceño-Arias and Combettes pioneered this approach in [], by reformulating the general monotone inclusion in an appropriate product space as the sum of a maximally monotone operator and a linear and skew one, and by solving the resulting inclusion problem via a forward-backward-forward type algorithm (see also [4]). Afterwards, by using the same product space approach, this time in a suitably renormed space, Vũ succeeded in [9] in formulating a primal-dual splitting algorithm of forward-backward type, in other words, in saving a forward step. Condat has presented in [7], in the variational case, algorithms of the same nature as the one in [9]. Under strong monotonicity/convexity assumptions and the use of dynamic step size strategies, convergence rates have been provided in [8] for the primal-dual algorithm in [9] (see also [3]), and in [9] for the primal-dual algorithm in [4].

We describe the ADMM algorithm for solving (1) in the case h = 0, which corresponds to the standard setting in the literature. By introducing an auxiliary variable one can rewrite (1) as

inf_{(x,z) ∈ H × G, Lx − z = 0} { f(x) + g(z) }.   (3)

For a fixed real number c > 0 we consider the augmented Lagrangian associated with problem (3), which is defined as

L_c : H × G × G → R̄,  L_c(x, z, y) = f(x) + g(z) + ⟨y, Lx − z⟩ + (c/2)‖Lx − z‖².

The ADMM algorithm relies on the alternating minimization of the augmented Lagrangian with respect to the variables x and z (see [,0,3 5] and Remark 4 for the exact formulation of the iterative scheme).
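As a concrete finite-dimensional illustration of the alternating minimization just described, the following is a minimal sketch (not the general Hilbert-space algorithm studied in this paper) of classical ADMM for the model problem min_x (1/2)‖x − b‖² + λ‖Dx‖₁, where the matrix D plays the role of the linear operator L; the function names, the data and the choice c = 1 are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t*||.||_1 (componentwise soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(D, b, lam, c=1.0, iters=200):
    """ADMM sketch for min_x 0.5*||x - b||^2 + lam*||D x||_1,
    written as f(x) = 0.5*||x - b||^2, g(z) = lam*||z||_1, constraint D x - z = 0."""
    n, m = D.shape[1], D.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    # the x-minimization of the augmented Lagrangian is a linear solve:
    # (I + c D^T D) x = b + D^T (c z - y)
    K = np.eye(n) + c * D.T @ D
    for _ in range(iters):
        x = np.linalg.solve(K, b + D.T @ (c * z - y))
        z = soft_threshold(D @ x + y / c, lam / c)   # prox of (lam/c)*||.||_1
        y = y + c * (D @ x - z)                      # multiplier update
    return x
```

With D the identity the scheme reduces to a fixed point iteration whose limit is the soft-thresholded data, which gives a quick sanity check of the implementation.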
Generally, the minimization with respect to the variable x does not lead to a proximal step. This drawback has been overcome by Shefi and Teboulle in [7] by introducing additional suitably chosen metrics, and also in [5] for an extension of the ADMM algorithm designed for problems which involve also smooth parts in the objective.

The aim of this paper is to provide a unifying algorithmic scheme for solving monotone inclusion problems which encompasses several primal-dual iterative methods [7, 3, 7, 9], and also the ADMM algorithm (and its variants from [7]) in the particular case of convex optimization problems. A closer look at the structure of the new algorithmic scheme shows that it translates the paradigm behind ADMM methods for optimization problems to the solving of monotone inclusions. We carry out a convergence analysis for the proposed iterative scheme by making use of techniques relying on the Opial Lemma applied in a variable metric setting. Furthermore, we derive convergence rates for the iterates under supplementary strong monotonicity assumptions. To this aim we use a dynamic step size strategy, based on which we can provide a unifying scheme for the algorithms in [8, 3]. Not least, we also provide accelerated versions of the classical ADMM algorithm (and of its variable metric variants).

In what follows we recall some elements of the theory of monotone operators in Hilbert spaces and refer for more details to [, 3, 8]. Let H be a real Hilbert space with inner product ⟨·, ·⟩ and associated norm ‖·‖ = √⟨·, ·⟩. For an arbitrary set-valued operator A : H → 2^H we denote by Gr A = {(x, u) ∈ H × H : u ∈ Ax} its graph, by dom A = {x ∈ H : Ax ≠ ∅} its domain and by A^{-1} : H → 2^H its inverse operator, defined by (u, x) ∈ Gr A^{-1} if and only if (x, u) ∈ Gr A. We say that A is monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ Gr A. A monotone operator A is said to be maximally monotone if there exists no proper monotone extension of the graph of A on H × H. The resolvent of A, J_A : H → 2^H, is defined by J_A = (Id + A)^{-1}, where Id : H → H, Id(x) = x for all x ∈ H, is the identity operator on H. If A is maximally monotone, then J_A : H → H is single-valued and maximally monotone (see [, Proposition 3.7 and Corollary 3.0]). For an arbitrary γ > 0 we have (see [, Proposition 3.])

p ∈ J_{γA} x if and only if (p, γ^{-1}(x − p)) ∈ Gr A

and (see [, Proposition 3.8])

J_{γA} + γ J_{γ^{-1}A^{-1}} ∘ γ^{-1} Id = Id.   (4)

When G is another Hilbert space and L : H → G is a linear continuous operator, then L* : G → H, defined by ⟨L*y, x⟩ = ⟨y, Lx⟩ for all (x, y) ∈ H × G, denotes the adjoint operator of L, while the norm of L is defined as ‖L‖ = sup{‖Lx‖ : x ∈ H, ‖x‖ ≤ 1}.

Let γ > 0 be arbitrary. We say that A is γ-strongly monotone if ⟨x − y, u − v⟩ ≥ γ‖x − y‖² for all (x, u), (y, v) ∈ Gr A. A single-valued operator A : H → H is said to be γ-cocoercive if ⟨x − y, Ax − Ay⟩ ≥ γ‖Ax − Ay‖² for all (x, y) ∈ H × H. Moreover, A is γ-Lipschitz continuous if ‖Ax − Ay‖ ≤ γ‖x − y‖ for all (x, y) ∈ H × H. A single-valued linear operator A : H → H is said to be skew if ⟨x, Ax⟩ = 0 for all x ∈ H. The parallel sum of two operators A, B : H → 2^H is defined by A □ B : H → 2^H, A □ B = (A^{-1} + B^{-1})^{-1}.

Since the variational case will also be in the focus of our investigations, we recall next some elements of convex analysis. For a function f : H → R̄ we denote by dom f = {x ∈ H : f(x) < +∞} its effective domain and say that f is proper if dom f ≠ ∅ and f(x) > −∞ for all x ∈ H.
We denote by Γ(H) the family of proper, convex and lower semicontinuous extended real-valued functions defined on H. Let f* : H → R̄, f*(u) = sup_{x ∈ H} { ⟨u, x⟩ − f(x) } for all u ∈ H, be the conjugate function of f. The subdifferential of f at x ∈ H, with f(x) ∈ R, is the set ∂f(x) := {v ∈ H : f(y) ≥ f(x) + ⟨v, y − x⟩ for all y ∈ H}. We take by convention ∂f(x) := ∅ if f(x) ∈ {±∞}. If f ∈ Γ(H), then ∂f is a maximally monotone operator (cf. [6]) and it holds (∂f)^{-1} = ∂f*. For f, g : H → R̄ two proper functions, we consider also their infimal convolution, which is the function f □ g : H → R̄, defined by (f □ g)(x) = inf_{y ∈ H} { f(y) + g(x − y) } for all x ∈ H.

When f ∈ Γ(H) and γ > 0, for every x ∈ H we denote by prox_{γf}(x) the proximal point of parameter γ of f at x, which is the unique optimal solution of the optimization problem

inf_{y ∈ H} { f(y) + (1/(2γ))‖y − x‖² }.   (5)

Notice that J_{γ∂f} = (Id + γ∂f)^{-1} = prox_{γf}, thus prox_{γf} : H → H is a single-valued operator fulfilling the extended Moreau decomposition formula

prox_{γf} + γ prox_{(1/γ)f*} ∘ γ^{-1} Id = Id.   (6)
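The decomposition formula (6) can be verified numerically for a concrete choice of f. The sketch below assumes f = ‖·‖₁ on R^n, for which prox_{γf} is soft thresholding and f* is the indicator of the unit ℓ∞ ball, so every prox of f* is the projection onto [−1, 1]^n; the function names are illustrative.

```python
import numpy as np

def prox_l1(x, gamma):
    # prox_{gamma f} for f = ||.||_1: componentwise soft thresholding
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_l1_conjugate(u):
    # f* is the indicator of the unit l_inf ball, so prox_{(1/gamma) f*}
    # is the projection onto [-1, 1]^n, independently of the scaling
    return np.clip(u, -1.0, 1.0)

def moreau_lhs(x, gamma):
    # left-hand side of (6): prox_{gamma f}(x) + gamma * prox_{(1/gamma) f*}(x / gamma)
    return prox_l1(x, gamma) + gamma * prox_l1_conjugate(x / gamma)
```

For every x and γ > 0 the value of `moreau_lhs(x, gamma)` should reproduce x exactly, which is identity (6).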

Finally, we say that the function f : H → R̄ is γ-strongly convex for γ > 0 if f − (γ/2)‖·‖² is a convex function. This property implies that ∂f is γ-strongly monotone (see [, Example .3]).

2 The ADMM paradigm employed to monotone inclusions

In this section we propose an algorithm for solving monotone inclusion problems involving compositions with linear continuous operators in infinite dimensional Hilbert spaces which is designed in the spirit of the ADMM paradigm.

2.1 Problem formulation, algorithm and particular cases

The following problem represents the central point of our investigations.

Problem 1 Let H and G be real Hilbert spaces, A : H → 2^H and B : G → 2^G maximally monotone operators and C : H → H an η-cocoercive operator for η > 0. Let L : H → G be a linear continuous operator. To solve is the primal monotone inclusion

find x ∈ H such that 0 ∈ Ax + (L* ∘ B ∘ L)x + Cx,   (7)

together with its dual monotone inclusion

find v ∈ G such that there exists x ∈ H with −L*v ∈ Ax + Cx and v ∈ B(Lx).   (8)

Simple algebraic manipulations yield that (8) is equivalent to the problem

find v ∈ G such that 0 ∈ B^{-1}v + ((−L) ∘ (A + C)^{-1} ∘ (−L*))v,

which can be equivalently written as

find v ∈ G such that 0 ∈ B^{-1}v + ((−L) ∘ (A^{-1} □ C^{-1}) ∘ (−L*))v.   (9)

We say that (x, v) ∈ H × G is a primal-dual solution to the primal-dual pair of monotone inclusions (7)-(8) if

−L*v ∈ Ax + Cx and v ∈ B(Lx).   (10)

If x ∈ H is a solution to (7), then there exists v ∈ G such that (x, v) is a primal-dual solution to (7)-(8). On the other hand, if v ∈ G is a solution to (8), then there exists x ∈ H such that (x, v) is a primal-dual solution to (7)-(8). Furthermore, if (x, v) ∈ H × G is a primal-dual solution to (7)-(8), then x is a solution to (7) and v is a solution to (8).

Next we relate this general setting to the solving of a primal-dual pair of convex optimization problems.

Problem 2 Let H and G be real Hilbert spaces, f ∈ Γ(H), g ∈ Γ(G), h : H → R a convex and Fréchet differentiable function with (1/η)-Lipschitz continuous gradient, for η > 0, and L : H → G a linear continuous operator.
Consider the primal convex optimization problem

inf_{x ∈ H} { f(x) + h(x) + g(Lx) }   (11)

and its Fenchel dual problem

sup_{v ∈ G} { −(f* □ h*)(−L*v) − g*(v) }.   (12)
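The relation between (11) and (12) can be made tangible on a toy instance. The sketch below assumes h = 0, L = Id, f(x) = (1/2)‖x − b‖² and g = λ‖·‖₁ on R^n, so that f*(u) = ⟨u, b⟩ + (1/2)‖u‖² and g* is the indicator of [−λ, λ]^n; the concrete data are illustrative. The primal and dual optimal values then coincide.

```python
import numpy as np

# toy instance of (11)-(12): h = 0, L = Id, f(x) = 0.5*||x - b||^2, g = lam*||.||_1
b = np.array([3.0, -0.5, 1.0])
lam = 1.0

# primal solution by soft thresholding, dual solution via the optimality conditions
x_star = np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)
v_star = b - x_star                      # satisfies -v* in ∂f(x*) shifted form and v* in ∂g(x*)

primal_value = 0.5 * np.linalg.norm(x_star - b) ** 2 + lam * np.abs(x_star).sum()
# dual objective: -f*(-v*) - g*(v*) with g*(v*) = 0 since ||v*||_inf <= lam
dual_value = np.dot(v_star, b) - 0.5 * np.linalg.norm(v_star) ** 2
```

Equality of `primal_value` and `dual_value` reflects the coincidence of the optimal objective values stated below for pairs satisfying the optimality conditions.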

The system of optimality conditions for the primal-dual pair of optimization problems (11)-(12) reads:

−L*v − ∇h(x) ∈ ∂f(x) and v ∈ ∂g(Lx),   (13)

which is actually a particular formulation of (10) when

A := ∂f, C := ∇h, B := ∂g.   (14)

Notice that, due to the Baillon-Haddad Theorem (see [, Corollary 8.6]), ∇h is η-cocoercive. If (11) has an optimal solution x ∈ H and a suitable qualification condition is fulfilled, then there exists v ∈ G, an optimal solution to (12), such that (13) holds. If (12) has an optimal solution v ∈ G and a suitable qualification condition is fulfilled, then there exists x ∈ H, an optimal solution to (11), such that (13) holds. Furthermore, if the pair (x, v) ∈ H × G satisfies relation (13), then x is an optimal solution to (11), v is an optimal solution to (12) and the optimal objective values of (11) and (12) coincide.

One of the most popular and useful qualification conditions guaranteeing the existence of a dual optimal solution is the one known under the name Attouch-Brézis, which requires that

0 ∈ sqri(dom g − L(dom f))   (15)

holds. Here, for S ⊆ G a convex set, we denote by

sqri S := {x ∈ S : ∪_{λ>0} λ(S − x) is a closed linear subspace of G}

its strong quasi-relative interior. The topological interior is contained in the strong quasi-relative interior: int S ⊆ sqri S; however, in general this inclusion may be strict. If G is finite-dimensional, then for a nonempty and convex set S ⊆ G one has sqri S = ri S, which denotes the topological interior of S relative to its affine hull. Considering again the infinite dimensional setting, we remark that condition (15) is fulfilled if there exists x ∈ dom f such that Lx ∈ dom g and g is continuous at Lx. For further considerations on convex duality we refer to [ 4,, 30].

Throughout the paper the following additional notations and facts will be used. We denote by S_+(H) the family of operators U : H → H which are linear, continuous, self-adjoint and positive semidefinite. For U ∈ S_+(H) we consider the semi-norm defined by

‖x‖_U = √⟨x, Ux⟩, x ∈ H.
The Loewner partial ordering is defined for U₁, U₂ ∈ S_+(H) by

U₁ ≽ U₂ :⟺ ‖x‖_{U₁} ≥ ‖x‖_{U₂} for all x ∈ H.

Finally, for α > 0, we set

P_α(H) := {U ∈ S_+(H) : U ≽ α Id}.

Let α > 0, U ∈ P_α(H) and A : H → 2^H a maximally monotone operator. Then the operator (U + A)^{-1} : H → H is single-valued with full domain; in other words, for every x ∈ H there exists a unique p ∈ H such that p = (U + A)^{-1}x. Indeed, this is a consequence of the relation

(U + A)^{-1} = (Id + U^{-1}A)^{-1} ∘ U^{-1}

and of the maximal monotonicity of the operator U^{-1}A in the renormed Hilbert space (H, ⟨·, ·⟩_U) (see for example [5, Lemma 3.7]), where ⟨x, y⟩_U := ⟨x, Uy⟩ for all x, y ∈ H.

We are now in the position to formulate the algorithm relying on the ADMM paradigm for solving the primal-dual pair of monotone inclusions (7)-(8).

Algorithm 3 For all k ≥ 0, let M_1^k ∈ S_+(H), M_2^k ∈ S_+(G) and c > 0 be such that cL*L + M_1^k ∈ P_α(H) for α > 0. Choose (x^0, z^0, y^0) ∈ H × G × G. For all k ≥ 0 generate the sequence (x^k, z^k, y^k)_{k≥0} as follows:

x^{k+1} = (cL*L + M_1^k + A)^{-1} [ cL*(z^k − c^{-1}y^k) + M_1^k x^k − Cx^k ]   (16)

z^{k+1} = (Id + c^{-1}M_2^k + c^{-1}B)^{-1} [ Lx^{k+1} + c^{-1}y^k + c^{-1}M_2^k z^k ]   (17)

y^{k+1} = y^k + c(Lx^{k+1} − z^{k+1}).   (18)

As shown below, several algorithms from the literature can be embedded in this numerical scheme.

Remark 4 For all k ≥ 0, the equations (16) and (17) are equivalent to

cL*(z^k − Lx^{k+1} − c^{-1}y^k) + M_1^k(x^k − x^{k+1}) − Cx^k ∈ Ax^{k+1}   (19)

and, respectively,

c(Lx^{k+1} − z^{k+1}) + y^k + M_2^k(z^k − z^{k+1}) ∈ Bz^{k+1}.   (20)

Notice that the latter is equivalent to

y^{k+1} + M_2^k(z^k − z^{k+1}) ∈ Bz^{k+1}.   (21)

In the variational setting as described in Problem 2, namely, by choosing the operators as in (14), the inclusion (19) becomes

0 ∈ ∂f(x^{k+1}) + cL*(Lx^{k+1} − z^k + c^{-1}y^k) + M_1^k(x^{k+1} − x^k) + ∇h(x^k),

which is equivalent to

x^{k+1} = argmin_{x ∈ H} { f(x) + ⟨x − x^k, ∇h(x^k)⟩ + (c/2)‖Lx − z^k + c^{-1}y^k‖² + (1/2)‖x − x^k‖²_{M_1^k} }.   (22)

On the other hand, (20) becomes

c(Lx^{k+1} − z^{k+1} + c^{-1}y^k) + M_2^k(z^k − z^{k+1}) ∈ ∂g(z^{k+1}),

which is equivalent to

z^{k+1} = argmin_{z ∈ G} { g(z) + (c/2)‖Lx^{k+1} − z + c^{-1}y^k‖² + (1/2)‖z − z^k‖²_{M_2^k} }.   (23)

Consequently, the iterative scheme (16)-(18) reads

x^{k+1} = argmin_{x ∈ H} { f(x) + ⟨x − x^k, ∇h(x^k)⟩ + (c/2)‖Lx − z^k + c^{-1}y^k‖² + (1/2)‖x − x^k‖²_{M_1^k} }   (24)

z^{k+1} = argmin_{z ∈ G} { g(z) + (c/2)‖Lx^{k+1} − z + c^{-1}y^k‖² + (1/2)‖z − z^k‖²_{M_2^k} }   (25)

y^{k+1} = y^k + c(Lx^{k+1} − z^{k+1}),   (26)

which is the algorithm formulated and investigated by Banert, Boţ and Csetnek in [5]. The case when h = 0 and M_1^k, M_2^k are constant for every k ≥ 0 has been considered in the setting of finite dimensional Hilbert spaces by Shefi and Teboulle [7]. We want to emphasize that when h = 0 and M_1^k = M_2^k = 0 for all k ≥ 0 the iterative scheme (24)-(26) collapses into the classical version of the ADMM algorithm.

Remark 5 For all k ≥ 0, consider the particular choices M_1^k := (1/τ_k) Id − cL*L for τ_k > 0, and M_2^k := 0.

(i) Let k ≥ 0 be fixed. Relation (16) written for x^{k+2} reads

x^{k+2} = ((1/τ_{k+2}) Id + A)^{-1} [ cL*(z^{k+1} − c^{-1}y^{k+1}) + (1/τ_{k+2}) x^{k+1} − cL*Lx^{k+1} − Cx^{k+1} ].

From (18) we have cL*(z^{k+1} − c^{-1}y^{k+1}) = −L*(2y^{k+1} − y^k) + cL*Lx^{k+1}, hence

x^{k+2} = ((1/τ_{k+2}) Id + A)^{-1} [ (1/τ_{k+2}) x^{k+1} − L*(2y^{k+1} − y^k) − Cx^{k+1} ]
        = J_{τ_{k+2}A} ( x^{k+1} − τ_{k+2} Cx^{k+1} − τ_{k+2} L*(2y^{k+1} − y^k) ).   (27)

On the other hand, by using (4), relation (17) reads

z^{k+1} = J_{c^{-1}B} ( Lx^{k+1} + c^{-1}y^k ) = Lx^{k+1} + c^{-1}y^k − c^{-1} J_{cB^{-1}} ( cLx^{k+1} + y^k ),

which is equivalent to cLx^{k+1} + y^k − cz^{k+1} = J_{cB^{-1}}(cLx^{k+1} + y^k). By using again (18), this can be reformulated as

y^{k+1} = J_{cB^{-1}} ( y^k + cLx^{k+1} ).   (28)

The iterative scheme (27)-(28) generates for a given starting point (x^1, y^0) ∈ H × G and c > 0 a sequence (x^k, y^k) as follows: for all k ≥ 0,

y^{k+1} = J_{cB^{-1}} ( y^k + cLx^{k+1} )   (29)

x^{k+2} = J_{τ_{k+2}A} ( x^{k+1} − τ_{k+2} Cx^{k+1} − τ_{k+2} L*(2y^{k+1} − y^k) ).   (30)
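For intuition, here is a minimal finite-dimensional sketch of a primal-dual iteration of the above type in the variational case with C = 0, A = ∂f and B = ∂g, applied to min_x (1/2)‖x − b‖² + λ‖Lx‖₁; the problem data, the function names and the step sizes (chosen with cτ‖L‖² < 1) are illustrative assumptions, not the general scheme of this paper.

```python
import numpy as np

def primal_dual_c0(L, b, lam, tau, c, iters=500):
    """Sketch of a primal-dual proximal iteration with extrapolated dual variable:
    f(x) = 0.5*||x - b||^2, so prox_{tau f}(v) = (v + tau*b)/(1 + tau), and
    g = lam*||.||_1, so prox_{c g*} is the projection onto [-lam, lam]^m."""
    x = np.zeros(L.shape[1])
    y = y_old = np.zeros(L.shape[0])
    for _ in range(iters):
        # primal step with 2*y^k - y^{k-1} in place of y^k
        x = (x - tau * L.T @ (2 * y - y_old) + tau * b) / (1.0 + tau)
        # dual step: resolvent of c*B^{-1} is here a projection
        y_old, y = y, np.clip(y + c * L @ x, -lam, lam)
    return x
```

With L the identity the iterates approach the soft-thresholded data, matching the known closed-form minimizer of this toy problem.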

When τ_k = τ for all k, the algorithm (29)-(30) recovers a numerical scheme for solving monotone inclusion problems proposed by Vũ in [9, Theorem 3.]. More precisely, the error-free variant of the algorithm in [9, Theorem 3.], formulated for a constant sequence (λ_n)_{n∈N} equal to 1 and employed to the solving of the primal-dual pair (9)-(7) (by reversing the order in Problem 1, that is, by treating (9) as the primal monotone inclusion and (7) as its dual monotone inclusion), is nothing else than the iterative scheme (29)-(30).

In case C = 0, (29)-(30) becomes for all k ≥ 0

x^{k+1} = J_{τ_{k+1}A} ( x^k − τ_{k+1} L*(2y^k − y^{k−1}) )   (31)

y^{k+1} = J_{cB^{-1}} ( y^k + cLx^{k+1} ),   (32)

which, in case τ_k = τ for all k and cτ‖L‖² < 1, is nothing else than the algorithm introduced by Boţ, Csetnek and Heinrich in [7, Algorithm , Theorem ] applied to the solving of the primal-dual pair (9)-(7) (by reversing the order in Problem 1).

(ii) Considering again the variational setting as described in Problem 2, the algorithm (29)-(30) reads for all k ≥ 0

y^{k+1} = prox_{cg*} ( y^k + cLx^{k+1} )   (33)

x^{k+2} = prox_{τ_{k+2}f} ( x^{k+1} − τ_{k+2} ∇h(x^{k+1}) − τ_{k+2} L*(2y^{k+1} − y^k) ).   (34)

When τ_k = τ > 0 for all k, one recovers a primal-dual algorithm investigated under the assumption 1/τ − c‖L‖² > 1/(2η) by Condat in [7, Algorithm 3., Theorem 3.]. Not least, (31)-(32) reads in the variational setting (which corresponds to the case h = 0) for all k ≥ 0

x^{k+1} = prox_{τf} ( x^k − τ L*(2y^k − y^{k−1}) )   (35)

y^{k+1} = prox_{cg*} ( y^k + cLx^{k+1} ).   (36)

When τ_k = τ > 0 for all k, this iterative scheme becomes the algorithm proposed by Chambolle and Pock in [3, Algorithm , Theorem ] for solving the primal-dual pair of optimization problems (11)-(12) (in this order).

2.2 Convergence analysis

In this subsection we will address the convergence of the sequence of iterates generated by Algorithm 3. One of the tools we will use in the proof of the convergence statement is the following version of the Opial Lemma formulated in the setting of variable metrics (see [5, Theorem 3.3]).

Lemma 6 Let S be a nonempty subset of H and (x^k)_{k≥0} a sequence in H. Let α > 0 and (W_k)_{k≥0} ⊆ P_α(H) be such that W_k ≽ W_{k+1} for all k ≥ 0.
Assume that:

(i) for all z ∈ S and for all k ≥ 0: ‖x^{k+1} − z‖_{W_{k+1}} ≤ ‖x^k − z‖_{W_k};

(ii) every weak sequential cluster point of (x^k)_{k≥0} belongs to S.

Then (x^k)_{k≥0} converges weakly to an element in S.

We present the first main theorem of this manuscript.

Theorem 7 Consider the setting of Problem 1 and assume that the set of primal-dual solutions to the primal-dual pair of monotone inclusions (7)-(8) is nonempty. Let (x^k, z^k, y^k)_{k≥0} be the sequence generated by Algorithm 3 and assume that M_1^k − (1/(2η)) Id ∈ S_+(H), M_1^k ≽ M_1^{k+1}, M_2^k ∈ S_+(G), M_2^k ≽ M_2^{k+1} for all k ≥ 0. If one of the following assumptions:

(I) there exists α > 0 such that M_1^k − (1/(2η)) Id ∈ P_α(H) for all k ≥ 0;

(II) there exist α₁, α₂ > 0 such that L*L ∈ P_{α₁}(H) and M_2^k ∈ P_{α₂}(G) for all k ≥ 0;

is fulfilled, then there exists (x, v), a primal-dual solution to (7)-(8), such that (x^k, z^k, y^k)_{k≥0} converges weakly to (x, Lx, v).

Proof. Let S ⊆ H × G × G be defined by

S = {(x*, Lx*, v*) : (x*, v*) is a primal-dual solution to (7)-(8)}.   (37)

Let (x*, Lx*, y*) ∈ S be fixed. Then it holds −L*y* − Cx* ∈ Ax* and y* ∈ B(Lx*). Let k ≥ 0 be fixed. From (19) and the monotonicity of A we have

⟨cL*(z^k − Lx^{k+1} − c^{-1}y^k) + M_1^k(x^k − x^{k+1}) − Cx^k + L*y* + Cx*, x^{k+1} − x*⟩ ≥ 0,   (38)

while from (20) and the monotonicity of B we have

⟨c(Lx^{k+1} − z^{k+1}) + y^k + M_2^k(z^k − z^{k+1}) − y*, z^{k+1} − Lx*⟩ ≥ 0.   (39)

Since C is η-cocoercive, we have ⟨Cx^k − Cx*, x^k − x*⟩ ≥ η‖Cx^k − Cx*‖². Summing up the three inequalities obtained above we get

c⟨z^k − Lx^{k+1}, Lx^{k+1} − Lx*⟩ + ⟨y* − y^k, Lx^{k+1} − Lx*⟩ + ⟨Cx* − Cx^k, x^{k+1} − x*⟩ + ⟨M_1^k(x^k − x^{k+1}), x^{k+1} − x*⟩ + c⟨Lx^{k+1} − z^{k+1}, z^{k+1} − Lx*⟩ + ⟨y^k − y*, z^{k+1} − Lx*⟩ + ⟨M_2^k(z^k − z^{k+1}), z^{k+1} − Lx*⟩ + ⟨Cx^k − Cx*, x^k − x*⟩ − η‖Cx^k − Cx*‖² ≥ 0.

According to (18) we also have

⟨y* − y^k, Lx^{k+1} − Lx*⟩ + ⟨y^k − y*, z^{k+1} − Lx*⟩ = −⟨y^k − y*, Lx^{k+1} − z^{k+1}⟩ = −c^{-1}⟨y^k − y*, y^{k+1} − y^k⟩.

By expressing the inner products through norms we further derive

(c/2)( ‖z^k − Lx*‖² − ‖z^k − Lx^{k+1}‖² − ‖Lx^{k+1} − Lx*‖² ) + (c/2)( ‖Lx^{k+1} − Lx*‖² − ‖Lx^{k+1} − z^{k+1}‖² − ‖z^{k+1} − Lx*‖² ) + (1/(2c))( ‖y^k − y*‖² + ‖y^{k+1} − y^k‖² − ‖y^{k+1} − y*‖² ) + (1/2)( ‖x^k − x*‖²_{M_1^k} − ‖x^k − x^{k+1}‖²_{M_1^k} − ‖x^{k+1} − x*‖²_{M_1^k} ) + (1/2)( ‖z^k − Lx*‖²_{M_2^k} − ‖z^k − z^{k+1}‖²_{M_2^k} − ‖z^{k+1} − Lx*‖²_{M_2^k} ) + ⟨Cx* − Cx^k, x^{k+1} − x^k⟩ − η‖Cx^k − Cx*‖² ≥ 0.

By expressing Lx^{k+1} − z^{k+1} using again relation (18) and by taking into account that

⟨Cx* − Cx^k, x^{k+1} − x^k⟩ − η‖Cx^k − Cx*‖² = −η‖Cx^k − Cx* − (1/(2η))(x^k − x^{k+1})‖² + (1/(4η))‖x^k − x^{k+1}‖²,

we obtain

(1/2)‖z^{k+1} − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^{k+1} − y*‖² + (1/2)‖x^{k+1} − x*‖²_{M_1^k} ≤ (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² + (1/2)‖x^k − x*‖²_{M_1^k} − (c/2)‖z^k − Lx^{k+1}‖² − (1/2)‖x^k − x^{k+1}‖²_{M_1^k} − (1/2)‖z^k − z^{k+1}‖²_{M_2^k} − η‖Cx^k − Cx* − (1/(2η))(x^k − x^{k+1})‖² + (1/(4η))‖x^k − x^{k+1}‖².

From here, using the monotonicity assumptions on (M_1^k)_{k≥0} and (M_2^k)_{k≥0}, it yields

(1/2)‖x^{k+1} − x*‖²_{M_1^{k+1}} + (1/2)‖z^{k+1} − Lx*‖²_{M_2^{k+1} + c Id} + (1/(2c))‖y^{k+1} − y*‖² ≤ (1/2)‖x^k − x*‖²_{M_1^k} + (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² − (c/2)‖z^k − Lx^{k+1}‖² − (1/2)‖x^k − x^{k+1}‖²_{M_1^k − (1/(2η)) Id} − (1/2)‖z^k − z^{k+1}‖²_{M_2^k} − η‖Cx^k − Cx* − (1/(2η))(x^k − x^{k+1})‖².   (40)

Discarding the negative terms on the right-hand side of the above inequality (notice that M_1^k − (1/(2η)) Id ∈ S_+(H) for all k ≥ 0), it follows that statement (i) in the Opial Lemma (Lemma 6) holds, when applied in the product space H × G × G, for the sequence (x^k, z^k, y^k)_{k≥0}, for W_k := (M_1^k, M_2^k + c Id, c^{-1} Id), k ≥ 0, and for S defined as in (37). Furthermore, summing up the inequalities in (40), we get

Σ_{k≥0} ‖z^k − Lx^{k+1}‖² < +∞,  Σ_{k≥0} ‖x^k − x^{k+1}‖²_{M_1^k − (1/(2η)) Id} < +∞,  Σ_{k≥0} ‖z^k − z^{k+1}‖²_{M_2^k} < +∞.   (41)

Consider first the hypotheses in assumption (I). Since M_1^k − (1/(2η)) Id ∈ P_α(H) for all k ≥ 0 with α > 0, we get

x^k − x^{k+1} → 0 (k → +∞)   (42)

and

z^k − Lx^{k+1} → 0 (k → +∞).   (43)

A direct consequence of (42) and (43) is

z^k − z^{k+1} → 0 (k → +∞).   (44)

From (18), (43) and (44) we derive

y^k − y^{k+1} → 0 (k → +∞).   (45)

Next we show that the relations (42)-(45) are fulfilled also under assumption (II). Indeed, in this situation we derive from (41) that (43) and (44) hold. From (18), (43) and (44) we obtain (45). Finally, the inequalities

√α₁ ‖x^{k+1} − x^{k+2}‖ ≤ ‖Lx^{k+1} − Lx^{k+2}‖ ≤ ‖Lx^{k+1} − z^{k+1}‖ + ‖z^{k+1} − Lx^{k+2}‖ → 0 (k → +∞)   (46)

yield (42). The relations (42)-(45) will play an essential role when verifying assumption (ii) in the Opial Lemma for S taken as in (37). Let (x, z, y) ∈ H × G × G be such that there exists (k_n)_{n≥0}, k_n → +∞ (as n → +∞), and (x^{k_n}, z^{k_n}, y^{k_n}) converges weakly to (x, z, y) (as n → +∞). From (42) and the linearity and the continuity of L we obtain that (Lx^{k_n+1})_{n∈N} converges weakly to Lx (as n → +∞), which combined with (43) yields z = Lx.

We use now the following notations for n ≥ 0:

a*_n := cL*(z^{k_n} − Lx^{k_n+1} − c^{-1}y^{k_n}) + M_1^{k_n}(x^{k_n} − x^{k_n+1}) + Cx^{k_n+1} − Cx^{k_n}
a_n := x^{k_n+1}
b*_n := y^{k_n+1} + M_2^{k_n}(z^{k_n} − z^{k_n+1})
b_n := z^{k_n+1}.

From (19) we have for all n ≥ 0

a*_n ∈ (A + C)(a_n).   (47)

Further, from (20) and (18) we have for all n ≥ 0

b*_n ∈ B(b_n).   (48)

Furthermore, from (42) we have

a_n converges weakly to x (as n → +∞).   (49)

From (45) and (44) we obtain

b*_n converges weakly to y (as n → +∞).   (50)

Moreover, (18) and (45) yield

La_n − b_n converges strongly to 0 (as n → +∞).   (51)

Finally, we have

a*_n + L*b*_n = cL*(z^{k_n} − Lx^{k_n+1}) + L*(y^{k_n+1} − y^{k_n}) + M_1^{k_n}(x^{k_n} − x^{k_n+1}) + L*M_2^{k_n}(z^{k_n} − z^{k_n+1}) + Cx^{k_n+1} − Cx^{k_n}.

By using the fact that C is (1/η)-Lipschitz continuous, from (42)-(45) we get

a*_n + L*b*_n converges strongly to 0 (as n → +∞).   (52)

Let us define T : H × G → 2^{H×G} by T(x, y) = ((Ax + Cx)) × (B^{-1}y) and K : H × G → H × G by K(x, y) = (L*y, −Lx) for all (x, y) ∈ H × G. Since C is maximally monotone with full domain (see []), A + C is maximally monotone, too (see []), thus T is maximally monotone. Since K is a skew operator, it is also maximally monotone (see []). Due to the fact that K has full domain, we conclude that

T + K is a maximally monotone operator.   (53)

Moreover, from (47) and (48) we have

(a*_n + L*b*_n, b_n − La_n) ∈ (T + K)(a_n, b*_n) for all n ≥ 0.   (54)

Since the graph of a maximally monotone operator is sequentially closed with respect to the weak-strong topology (see [, Proposition 0.33]), from (53), (54), (49), (50), (51) and (52) we derive that

(0, 0) ∈ (T + K)(x, y) = (A + C, B^{-1})(x, y) + (L*y, −Lx).

The latter is nothing else than saying that (x, y) is a primal-dual solution to (7)-(8), which combined with z = Lx implies that the second assumption of the Opial Lemma is verified, too. In conclusion, (x^k, z^k, y^k)_{k≥0} converges weakly to (x, Lx, v), where (x, v) is a primal-dual solution to (7)-(8).

Remark 8 (i) Choosing as in Remark 5 M_1^k := (1/τ_k) Id − cL*L, with τ_k > 0 and τ := sup_{k≥0} τ_k ∈ R, and M_2^k := 0 for all k ≥ 0, we have

⟨x, (M_1^k − (1/(2η)) Id)x⟩ ≥ (1/τ_k − c‖L‖² − 1/(2η))‖x‖² ≥ (1/τ − c‖L‖² − 1/(2η))‖x‖² for all x ∈ H,

which means that under the assumption 1/τ − c‖L‖² > 1/(2η) (which recovers the one in [7, Algorithm 3. and Theorem 3.]) the operators M_1^k − (1/(2η)) Id belong for all k ≥ 0 to the class P_α(H) with α := 1/τ − c‖L‖² − 1/(2η) > 0.

(ii) Let us briefly discuss the condition considered in (II):

there exists α₁ > 0 such that L*L ∈ P_{α₁}(H).   (55)

If L is injective and ran L is closed, then (55) holds (see [, Fact .9]). Moreover, (55) implies that L is injective. This means that if ran L is closed, then (55) is equivalent to the injectivity of L. Hence, in finite dimensional spaces, namely, if H = R^n and G = R^m, with m ≥ n, (55) is nothing else than saying that L has full column rank, which is a widely used assumption in the proofs of the convergence of the classical ADMM algorithm.

In the second convergence result of this section we consider the case when C is identically 0.
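In the finite-dimensional case, condition (55) can be checked directly: L*L ≽ α₁ Id holds for some α₁ > 0 exactly when the smallest eigenvalue of LᵀL is positive, i.e. when L has full column rank. The snippet below is a small illustrative check; the matrices and the tolerance are assumptions made for the example.

```python
import numpy as np

def satisfies_55(L, tol=1e-10):
    """Finite-dimensional check of (55): does L^T L >= alpha*Id hold for some alpha > 0?
    If so, the largest such alpha is the smallest eigenvalue of L^T L."""
    alpha = np.linalg.eigvalsh(L.T @ L).min()
    return alpha > tol, max(alpha, 0.0)

ok_full, alpha_full = satisfies_55(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))  # full column rank
ok_def, _ = satisfies_55(np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]))            # rank-deficient
```

For the first matrix the smallest eigenvalue of LᵀL is 1, so (55) holds with α₁ = 1; for the second the columns are linearly dependent and the check fails.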
We notice that this case cannot be encompassed in the above theorem due to the assumptions which involve the cocoercivity constant η in the denominator of some fractions and which do not allow us to take it equal to zero.

Theorem 9 Consider the setting of Problem 1 in the situation when C = 0 and assume that the set of primal-dual solutions to the primal-dual pair of monotone inclusions (7)-(8) is nonempty. Let (x^k, z^k, y^k)_{k≥0} be the sequence generated by Algorithm 3 and assume that M_1^k ∈ S_+(H), M_1^k ≽ M_1^{k+1}, M_2^k ∈ S_+(G), M_2^k ≽ M_2^{k+1} for all k ≥ 0. If one of the following assumptions:

(I) there exists α > 0 such that M_1^k ∈ P_α(H) for all k ≥ 0;

(II) there exist α₁, α₂ > 0 such that L*L ∈ P_{α₁}(H), M_1^k = 0 and M_2^k ∈ P_{α₂}(G) for all k ≥ 0;

(III) there exists α₁ > 0 such that L*L ∈ P_{α₁}(H), M_1^k = 0 and M_2^k ≼ 2M_2^{k+1} for all k ≥ 0;

is fulfilled, then there exists (x, v), a primal-dual solution to (7)-(8), such that (x^k, z^k, y^k)_{k≥0} converges weakly to (x, Lx, v).

Proof. We fix an element (x*, Lx*, y*) with the property that (x*, y*) is a primal-dual solution to (7)-(8). Take an arbitrary k ≥ 0. As in the proof of Theorem 7, we can derive the inequality

(1/2)‖x^{k+1} − x*‖²_{M_1^{k+1}} + (1/2)‖z^{k+1} − Lx*‖²_{M_2^{k+1} + c Id} + (1/(2c))‖y^{k+1} − y*‖² ≤ (1/2)‖x^k − x*‖²_{M_1^k} + (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² − (c/2)‖z^k − Lx^{k+1}‖² − (1/2)‖x^k − x^{k+1}‖²_{M_1^k} − (1/2)‖z^k − z^{k+1}‖²_{M_2^k}.   (56)

Under assumption (I) the conclusion follows as in the proof of Theorem 7 by making use of the Opial Lemma. Consider the situation when the hypotheses in assumption (II) are fulfilled. In this case, (56) becomes

(1/2)‖z^{k+1} − Lx*‖²_{M_2^{k+1} + c Id} + (1/(2c))‖y^{k+1} − y*‖² ≤ (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² − (c/2)‖z^k − Lx^{k+1}‖² − (1/2)‖z^k − z^{k+1}‖²_{M_2^k}.   (57)

By using telescoping sum techniques, it follows that (43) and (44) hold. From (18), (43) and (44) we obtain (45). Finally, by using again the inequality (46), relation (42) holds, too. On the other hand, (57) yields that the limit

lim_{k→+∞} ( (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² )   (58)

exists, hence (y^k)_{k≥0} and (z^k)_{k≥0} are bounded. Combining this with the condition imposed on L, we derive that (x^k)_{k≥0} is bounded, too. Hence there exists a weakly convergent subsequence of (x^k, z^k, y^k)_{k≥0}. By using the same arguments as in the second part of the proof of Theorem 7, one can see that every sequential weak cluster point of (x^k, z^k, y^k)_{k≥0} belongs to the set S defined in (37). In the remaining part of the proof we show that the set of sequential weak cluster points of (x^k, z^k, y^k)_{k≥0} is a singleton.

Let (x₁, z₁, y₁) and (x₂, z₂, y₂) be two such sequential weak cluster points. Then there exist (k_p)_{p≥0}, (k_q)_{q≥0}, k_p → +∞ (as p → +∞), k_q → +∞ (as q → +∞), a subsequence (x^{k_p}, z^{k_p}, y^{k_p})_{p≥0} which converges weakly to (x₁, z₁, y₁) (as p → +∞), and a subsequence (x^{k_q}, z^{k_q}, y^{k_q})_{q≥0} which converges weakly to (x₂, z₂, y₂) (as q → +∞). As shown

above, (x₁, z₁, y₁) and (x₂, z₂, y₂) belong to the set S (see (37)), thus z_i = Lx_i, i ∈ {1, 2}. From (58), which is true for every primal-dual solution to (7)-(8), we derive that the limit

lim_{k→+∞} ( (1/2)‖z^k − Lx₁‖²_{M_2^k + c Id} − (1/2)‖z^k − Lx₂‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y₁‖² − (1/(2c))‖y^k − y₂‖² )   (59)

exists. Further, we have for all k ≥ 0

‖z^k − Lx₁‖²_{M_2^k + c Id} − ‖z^k − Lx₂‖²_{M_2^k + c Id} = ‖Lx₂ − Lx₁‖²_{M_2^k + c Id} + 2⟨z^k − Lx₂, (M_2^k + c Id)(Lx₂ − Lx₁)⟩

and

‖y^k − y₁‖² − ‖y^k − y₂‖² = ‖y₂ − y₁‖² + 2⟨y^k − y₂, y₂ − y₁⟩.

Now, the monotonicity condition imposed on (M_2^k)_{k≥0} implies that sup_{k≥0} ‖M_2^k + c Id‖ < +∞. Thus, according to [5], there exist α > 0 and M ∈ P_α(G) such that (M_2^k + c Id)_{k≥0} converges pointwise to M (as k → +∞). Taking the limit in (59) along the subsequences (k_p)_{p≥0} and (k_q)_{q≥0} and using the last two relations above we obtain

(1/2)‖Lx₂ − Lx₁‖²_M + ⟨Lx₁ − Lx₂, M(Lx₂ − Lx₁)⟩ + (1/(2c))‖y₂ − y₁‖² + (1/c)⟨y₁ − y₂, y₂ − y₁⟩ = (1/2)‖Lx₂ − Lx₁‖²_M + (1/(2c))‖y₂ − y₁‖²,

hence ‖Lx₂ − Lx₁‖²_M + (1/c)‖y₂ − y₁‖² = 0, thus Lx₁ = Lx₂ and y₁ = y₂. Assumption (II) implies that x₁ = x₂. In conclusion, (x^k, z^k, y^k)_{k≥0} converges weakly to an element in S (see (37)).

Finally, we consider the situation when the hypotheses in assumption (III) hold. As noticed above, since M_1^k = 0 for all k ≥ 0, relation (56) becomes (57). Let k ≥ 1 be fixed. By considering the relation (21) for consecutive iterates and by taking into account the monotonicity of B we derive

⟨z^{k+1} − z^k, y^{k+1} − y^k + M_2^k(z^k − z^{k+1}) − M_2^{k−1}(z^{k−1} − z^k)⟩ ≥ 0,

hence

⟨z^{k+1} − z^k, y^{k+1} − y^k⟩ ≥ ‖z^{k+1} − z^k‖²_{M_2^k} + ⟨z^{k+1} − z^k, M_2^{k−1}(z^{k−1} − z^k)⟩ ≥ ‖z^{k+1} − z^k‖²_{M_2^k} − (1/2)‖z^{k+1} − z^k‖²_{M_2^{k−1}} − (1/2)‖z^k − z^{k−1}‖²_{M_2^{k−1}}.

Substituting y^{k+1} − y^k = c(Lx^{k+1} − z^{k+1}) in the last inequality, it follows

‖z^{k+1} − z^k‖²_{M_2^k} − (1/2)‖z^{k+1} − z^k‖²_{M_2^{k−1}} − (1/2)‖z^k − z^{k−1}‖²_{M_2^{k−1}} ≤ (c/2)( ‖z^k − Lx^{k+1}‖² − ‖z^{k+1} − z^k‖² − ‖Lx^{k+1} − z^{k+1}‖² ),

which, after adding it to (57), leads to

(1/2)‖z^{k+1} − Lx*‖²_{M_2^{k+1} + c Id} + (1/(2c))‖y^{k+1} − y*‖² + (1/2)‖z^{k+1} − z^k‖²_{3M_2^k − M_2^{k−1}} ≤ (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² + (1/2)‖z^k − z^{k−1}‖²_{M_2^{k−1}} − (c/2)‖z^{k+1} − z^k‖² − (1/(2c))‖y^{k+1} − y^k‖².   (60)

Taking into account that according to (III) we have 3M_2^k − M_2^{k−1} ≽ M_2^k, we can conclude that for all k ≥ 1 it holds

(1/2)‖z^{k+1} − Lx*‖²_{M_2^{k+1} + c Id} + (1/(2c))‖y^{k+1} − y*‖² + (1/2)‖z^{k+1} − z^k‖²_{M_2^k} ≤ (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² + (1/2)‖z^k − z^{k−1}‖²_{M_2^{k−1}} − (c/2)‖z^{k+1} − z^k‖² − (1/(2c))‖y^{k+1} − y^k‖².   (61)

Using telescoping sum arguments, we obtain that y^k − y^{k+1} → 0 and z^k − z^{k+1} → 0 as k → +∞. Using (18), it follows that L(x^k − x^{k+1}) → 0 as k → +∞, which, combined with the hypotheses imposed on L, further implies that x^k − x^{k+1} → 0 as k → +∞. Consequently, z^k − Lx^{k+1} → 0 as k → +∞. Hence the relations (42)-(45) are fulfilled. On the other hand, from (61) we also derive that the limit

lim_{k→+∞} ( (1/2)‖z^k − Lx*‖²_{M_2^k + c Id} + (1/(2c))‖y^k − y*‖² + (1/2)‖z^k − z^{k−1}‖²_{M_2^{k−1}} )   (62)

exists. By using that

‖z^k − z^{k−1}‖²_{M_2^{k−1}} ≤ ‖z^k − z^{k−1}‖²_{M_2^0} ≤ ‖M_2^0‖ ‖z^k − z^{k−1}‖²,

it follows that lim_{k→+∞} ‖z^k − z^{k−1}‖²_{M_2^{k−1}} = 0, which further implies that (58) holds. From here the conclusion follows by arguing as in the second part of the proof provided above under the hypotheses in (II).

Remark 10 In the variational case, the sequences generated by the classical ADMM algorithm, which corresponds to the iterative scheme (24)-(26) for h = 0 and M_1^k = M_2^k = 0 for all k ≥ 0, are convergent, provided that L has full column rank. This situation is covered by the theorem above in the context of assumption (III).

3 Convergence rates under strong monotonicity and by means of dynamic step sizes

We state the problem on which we focus throughout this section.

Problem 11 In the setting of Problem 1 we replace the cocoercivity of C by the assumptions that C is monotone and μ-Lipschitz continuous for μ ≥ 0. Moreover, we assume that A + C is γ-strongly monotone for γ > 0.

Remark 12 If C is an η-cocoercive operator for η > 0, then C is monotone and (1/η)-Lipschitz continuous. However, the converse statement may fail. The skew operator (x, y) ↦ (L*y, −Lx) is for instance monotone and Lipschitz continuous, but not cocoercive. This operator appears in a natural way when formulating the system of optimality conditions for convex optimization problems involving compositions with linear continuous operators (see []). Notice that, due to the celebrated Baillon-Haddad Theorem (see, for instance, [, Corollary 8.6]), the gradient of a convex and Fréchet differentiable function is η-cocoercive if and only if it is (1/η)-Lipschitz continuous.

Remark 13 In the setting of Problem 11 the operator A + L* ∘ B ∘ L + C is strongly monotone, thus the monotone inclusion problem (7) has at most one solution. Hence, if (x, v) is a primal-dual solution to the primal-dual pair (7)-(8), then x is the unique solution to (7). Notice that the problem (8) may not have a unique solution.

We propose the following algorithm, for the formulation of which we use dynamic step sizes.

Algorithm 14 For all k ≥ 0, let M_2^k : G → G be a linear, continuous and self-adjoint operator such that τ_k LL* + M_2^k ∈ P_α(G) for α > 0. Choose (x^0, z^0, y^0) ∈ H × G × G. For all k ≥ 0 generate the sequence (x^k, z^k, y^k)_{k≥0} as follows:

y^{k+1} = (τ_k LL* + M_2^k + B^{-1})^{-1} [ −τ_k L(z^k − τ_k^{-1} x^k) + M_2^k y^k ]   (63)

z^{k+1} = Cx^k + (Id + (θ_k/τ_{k+1}) A^{-1})^{-1} [ −L*y^{k+1} + (θ_k/τ_{k+1}) x^k − Cx^k ]   (64)

x^{k+1} = x^k + (τ_{k+1}/θ_k)( −L*y^{k+1} − z^{k+1} ),   (65)

where σ_k, τ_k, θ_k > 0 for all k ≥ 0.

Remark 15 We would like to emphasize that when C = 0 Algorithm 14 has a similar structure to Algorithm 3. Indeed, in this setting, the monotone inclusion problems (7) and (9) become

find x ∈ H such that 0 ∈ Ax + (L* ∘ B ∘ L)x   (66)

and, respectively,

find v ∈ G such that 0 ∈ B^{-1}v + ((−L) ∘ A^{-1} ∘ (−L*))v.   (67)

The two problems (66) and (67) are dual to each other in the sense of the Attouch-Théra duality (see []).
By taking in (63)-(65) θ_k = 1 (which corresponds to the limit case μ = 0 and γ = 0 in the equation (74) below) and τ_k = c > 0 for all k ≥ 0, the resulting iterative scheme reads

y^{k+1} = (cLL* + M_2^k + B^{-1})^{-1} [ −cL(z^k − c^{-1}x^k) + M_2^k y^k ]   (68)

z^{k+1} = (Id + c^{-1}A^{-1})^{-1} [ −L*y^{k+1} + c^{-1}x^k ]   (69)

x^{k+1} = x^k + c( −L*y^{k+1} − z^{k+1} ).   (70)

This is nothing else than Algorithm 3 employed to the solving of the primal-dual system of monotone inclusions (67)-(66), that is, by treating (67) as the primal monotone inclusion and (66) as its dual monotone inclusion (notice that in this case we take in relation (17) of Algorithm 3 M_2^k = 0 for all k ≥ 0).

Concerning the parameters involved in Algorithm 14 we assume that

μτ_1 < γ,   (71)

μ ≤ 1,   (72)

that there exists σ_0 > 0 such that

σ_0 τ_1 ‖L‖² ≤ 1,   (73)

and that for all k ≥ 0:

θ_k = 1/√(1 + 2τ_{k+1}(γ − μτ_{k+1}))   (74)

τ_{k+2} = θ_k τ_{k+1}   (75)

σ_{k+1} = σ_k/θ_k   (76)

τ_k LL* + M_2^k ≽ (1/σ_k) Id   (77)

(1/τ_{k+1})(τ_k LL* + M_2^k) ≽ (1/τ_{k+2})(τ_{k+1} LL* + M_2^{k+1}).   (78)

Remark 16 Fix an arbitrary k ≥ 1. From (63) we have

−τ_k L(z^k − τ_k^{-1} x^k) + M_2^k y^k ∈ M^k y^{k+1} + B^{-1} y^{k+1},   (79)

where

M^k := τ_k LL* + M_2^k.   (80)

Due to (65) we have

τ_k z^k = −τ_k L*y^k − θ_{k−1}(x^k − x^{k−1}),

which combined with (79) delivers

M^k(y^k − y^{k+1}) + L( x^k + θ_{k−1}(x^k − x^{k−1}) ) ∈ B^{-1} y^{k+1}.   (81)

Fix now an arbitrary k ≥ 0. From (4) and (64) we have

z^{k+1} + L*y^{k+1} = (θ_k/τ_{k+1}) ( x^k − J_{(τ_{k+1}/θ_k)A} ( x^k − (τ_{k+1}/θ_k)(L*y^{k+1} + Cx^k) ) ).

By using (65) we obtain

x^{k+1} = J_{(τ_{k+1}/θ_k)A} ( x^k − (τ_{k+1}/θ_k)(L*y^{k+1} + Cx^k) ).   (82)

Finally, the definition of the resolvent yields the relation

(θ_k/τ_{k+1})(x^k − x^{k+1}) − L*y^{k+1} + Cx^{k+1} − Cx^k ∈ (A + C)x^{k+1}.   (83)

Remark 17. The choice

τ_k LL* + M_k = (1/σ_k) Id for all k ≥ 0   (84)

leads to so-called accelerated versions of primal-dual algorithms which have been intensively studied in the literature. Indeed, under these auspices, (63) becomes (by taking into account also (65))

y_{k+1} = ((1/σ_k) Id + B^{-1})^{-1} [(1/σ_k) y_k + L(x_k + θ_{k−1}(x_k − x_{k−1}))]
        = (Id + σ_k B^{-1})^{-1} [y_k + σ_k L(x_k + θ_{k−1}(x_k − x_{k−1}))].

This, together with (82), gives rise for all k ≥ 0 to the following numerical scheme:

x_{k+1} = J_{τ_{k+1} A} [x_k − τ_{k+1}(L*y_{k+1} + Cx_k)]   (85)
y_{k+2} = J_{σ_{k+1} B^{-1}} [y_{k+1} + σ_{k+1} L(x_{k+1} + θ_k(x_{k+1} − x_k))],   (86)

which has been investigated by Boţ, Csetnek, Heinrich and Hendrich in [8, Algorithm 5]. Not least, assuming that C = 0, the variational case A = ∂f and B = ∂g leads for all k ≥ 0 to the numerical scheme

y_{k+1} = prox_{σ_k g*} [y_k + σ_k L(x_k + θ_{k−1}(x_k − x_{k−1}))]   (87)
x_{k+1} = prox_{τ_{k+1} f} [x_k − τ_{k+1} L*y_{k+1}],   (88)

which has been considered by Chambolle and Pock in [13, Algorithm 2]. We also notice that condition (84) guarantees the fulfillment of both (77) and (78), due to the fact that the sequence (τ_{k+1} σ_k)_{k≥0} is constant (see (75) and (76)).

Remark 18. Assume again that C = 0 and consider the variational case A = ∂f and B = ∂g. From (79) and (80) we derive for all k ≥ 1 the relation

0 ∈ ∂g*(y_{k+1}) + τ_k L(L*y_{k+1} − z_k + τ_k^{-1} x_k) + M_k(y_{k+1} − y_k),

which in case M_k ∈ S_+(G) is equivalent to

y_{k+1} = argmin_{y ∈ G} [ g*(y) + (τ_k/2) ‖L*y − z_k + τ_k^{-1} x_k‖² + (1/2) ‖y − y_k‖²_{M_k} ].

Algorithm 14 becomes in this case, for all k ≥ 1:

y_{k+1} = argmin_{y ∈ G} [ g*(y) + (τ_k/2) ‖L*y − z_k + τ_k^{-1} x_k‖² + (1/2) ‖y − y_k‖²_{M_k} ]
z_{k+1} = (1 − θ_k) L*y_{k+1} + θ_k argmin_{x ∈ H} [ f(x) + (1/(2τ_{k+1})) ‖x − x_k + τ_{k+1} L*y_{k+1}‖² ]
x_{k+1} = x_k + (τ_{k+1}/θ_k)(L*y_{k+1} − z_{k+1}),

which can be regarded as an accelerated version of the algorithm (4)-(6) in Remark 4.
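The accelerated scheme (87)-(88) can be sketched in a few lines. The code below is a minimal illustration of our own (not the authors' implementation) for the toy strongly convex problem min_x ½‖x − b‖² + ½‖Lx‖², where both proximal maps are affine and the 1-strong convexity of f plays the role of γ:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(3, 2))
b = rng.normal(size=2)
gamma = 1.0                                # strong convexity modulus of f = 0.5||. - b||^2

# Direct solution of min_x 0.5||x - b||^2 + 0.5||Lx||^2:  (I + L^T L) x = b
x_star = np.linalg.solve(np.eye(2) + L.T @ L, b)

norm_L = np.linalg.norm(L, 2)
tau, sigma = 1.0 / norm_L, 1.0 / norm_L    # tau * sigma * ||L||^2 = 1
x = x_bar = np.zeros(2)
y = np.zeros(3)

for _ in range(5000):
    # (87): y-update, prox of sigma * g* with g = 0.5||.||^2 (so g* = 0.5||.||^2)
    y = (y + sigma * (L @ x_bar)) / (1.0 + sigma)
    # (88): x-update, prox of tau * f with f = 0.5||. - b||^2
    x_new = (x - tau * (L.T @ y) + tau * b) / (1.0 + tau)
    # dynamic step sizes, cf. (74)-(76) in the case mu = 0
    theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
    tau, sigma = theta * tau, sigma / theta
    x_bar = x_new + theta * (x_new - x)
    x = x_new

assert np.linalg.norm(x - x_star) < 1e-3
print("accelerated primal-dual iterate matches the direct solution")
```

Since both proximal operators here are simple scalings, each iteration costs two matrix-vector products, and the iterate approaches the solution of the linear system at the accelerated rate.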

We present the main theorem of this section.

Theorem 19. Consider the setting of our problem and let (x, v) be a primal-dual solution to the primal-dual system of monotone inclusions (7)-(8). Let (x_k, z_k, y_k)_{k≥0} be the sequence generated by Algorithm 14 and assume that the relations (71)-(78) are fulfilled. Then we have for all n ≥ 2:

‖x_n − x‖²/(2τ_{n+1}²) + ((1 − σ_0 τ_1 ‖L‖²)/(2σ_0 τ_1)) ‖y_n − v‖²
  ≤ ‖y_1 − v‖²_{τ_1 LL* + M_1}/(2τ_2) + ‖x_1 − x‖²/(2τ_2²) + ‖x_1 − x_0‖²/(2τ_1²) + (1/τ_1) ⟨L(x_1 − x_0), y_1 − v⟩.

Moreover,

lim_{n→+∞} n τ_n = 1/γ,

hence one obtains for (x_n)_{n≥0} an order of convergence of O(1/n).

Proof. Let k ≥ 1 be fixed. From (81), the relation Lx ∈ B^{-1}v (see (10)) and the monotonicity of B^{-1} we obtain

⟨y_{k+1} − v, M̃_k(y_k − y_{k+1}) + L[x_k + θ_{k−1}(x_k − x_{k−1})] − Lx⟩ ≥ 0

or, equivalently,

(1/2)‖y_k − v‖²_{M̃_k} − (1/2)‖y_{k+1} − v‖²_{M̃_k} − (1/2)‖y_k − y_{k+1}‖²_{M̃_k}
  ≥ ⟨y_{k+1} − v, L(x − x_k − θ_{k−1}(x_k − x_{k−1}))⟩.   (89)

Further, from (83), the relation −L*v ∈ (A + C)x (see (10)) and the γ-strong monotonicity of A + C we obtain

⟨x_{k+1} − x, (1/τ_{k+1})(x_k − x_{k+1}) − L*y_{k+1} + Cx_{k+1} − Cx_k + L*v⟩ ≥ γ ‖x_{k+1} − x‖²

or, equivalently,

(1/(2τ_{k+1}))(‖x_k − x‖² − ‖x_{k+1} − x‖² − ‖x_k − x_{k+1}‖²)
  ≥ γ‖x_{k+1} − x‖² + ⟨x_{k+1} − x, Cx_k − Cx_{k+1}⟩ + ⟨y_{k+1} − v, L(x_{k+1} − x)⟩.   (90)

Since C is µ-Lipschitz continuous, we have

⟨x_{k+1} − x, Cx_k − Cx_{k+1}⟩ ≥ −(µ²τ_{k+1}/2)‖x_{k+1} − x‖² − (1/(2τ_{k+1}))‖x_k − x_{k+1}‖²,

which combined with (90) implies

(1/(2τ_{k+1}))‖x_k − x‖² ≥ (1/(2τ_{k+1}) + γ − µ²τ_{k+1}/2)‖x_{k+1} − x‖² + ⟨y_{k+1} − v, L(x_{k+1} − x)⟩.   (91)

By adding the inequalities (89) and (91), we obtain

(1/2)‖y_k − v‖²_{M̃_k} + (1/(2τ_{k+1}))‖x_k − x‖²
  ≥ (1/2)‖y_{k+1} − v‖²_{M̃_k} + (1/(2τ_{k+1}) + γ − µ²τ_{k+1}/2)‖x_{k+1} − x‖²
    + (1/2)‖y_k − y_{k+1}‖²_{M̃_k} + ⟨y_{k+1} − v, L[(x_{k+1} − x_k) − θ_{k−1}(x_k − x_{k−1})]⟩.   (92)

Further, we have

⟨L[(x_{k+1} − x_k) − θ_{k−1}(x_k − x_{k−1})], y_{k+1} − v⟩
  = ⟨L(x_{k+1} − x_k), y_{k+1} − v⟩ − θ_{k−1}⟨L(x_k − x_{k−1}), y_k − v⟩ − θ_{k−1}⟨L(x_k − x_{k−1}), y_{k+1} − y_k⟩
  ≥ ⟨L(x_{k+1} − x_k), y_{k+1} − v⟩ − θ_{k−1}⟨L(x_k − x_{k−1}), y_k − v⟩
    − (θ_{k−1}²‖L‖²σ_k/2)‖x_k − x_{k−1}‖² − (1/(2σ_k))‖y_k − y_{k+1}‖².

By combining this inequality with (92), we obtain after dividing by τ_{k+1}:

‖y_k − v‖²_{M̃_k}/(2τ_{k+1}) + ‖x_k − x‖²/(2τ_{k+1}²)
  ≥ ‖y_{k+1} − v‖²_{M̃_k}/(2τ_{k+1}) + (1/(2τ_{k+1}²) + γ/τ_{k+1} − µ²/2)‖x_{k+1} − x‖²
    + (1/(2τ_{k+1}))(‖y_k − y_{k+1}‖²_{M̃_k} − (1/σ_k)‖y_k − y_{k+1}‖²)   (93)
    + (1/τ_{k+1})⟨L(x_{k+1} − x_k), y_{k+1} − v⟩ − (θ_{k−1}/τ_{k+1})⟨L(x_k − x_{k−1}), y_k − v⟩
    − (θ_{k−1}²‖L‖²σ_k/(2τ_{k+1}))‖x_k − x_{k−1}‖².

From (77) and (80) we have that the third term on the right-hand side of (93) is nonnegative. Further, noticing that (see (74), (75), (76) and (73))

θ_{k−1}/τ_{k+1} = 1/τ_k,  τ_{k+1}σ_k = τ_kσ_{k−1} = ... = τ_1σ_0

and

θ_{k−1}²‖L‖²σ_k/τ_{k+1} = σ_0τ_1‖L‖²/τ_k² ≤ 1/τ_k²,

we obtain (see also (72) and (78))

‖y_k − v‖²_{M̃_k}/(2τ_{k+1}) + ‖x_k − x‖²/(2τ_{k+1}²) + ‖x_k − x_{k−1}‖²/(2τ_k²) + (1/τ_k)⟨L(x_k − x_{k−1}), y_k − v⟩
  ≥ ‖y_{k+1} − v‖²_{M̃_{k+1}}/(2τ_{k+2}) + ‖x_{k+1} − x‖²/(2τ_{k+2}²) + ‖x_{k+1} − x_k‖²/(2τ_{k+1}²) + (1/τ_{k+1})⟨L(x_{k+1} − x_k), y_{k+1} − v⟩.

Let n ≥ 2 be a natural number. Summing up the above inequality for k = 1, ..., n − 1, it follows that

‖y_1 − v‖²_{M̃_1}/(2τ_2) + ‖x_1 − x‖²/(2τ_2²) + ‖x_1 − x_0‖²/(2τ_1²) + (1/τ_1)⟨L(x_1 − x_0), y_1 − v⟩
  ≥ ‖y_n − v‖²_{M̃_n}/(2τ_{n+1}) + ‖x_n − x‖²/(2τ_{n+1}²) + ‖x_n − x_{n−1}‖²/(2τ_n²) + (1/τ_n)⟨L(x_n − x_{n−1}), y_n − v⟩.

The inequality in the statement of the theorem follows by combining this relation with (see (77))

‖y_n − v‖²_{M̃_n}/(2τ_{n+1}) ≥ ‖y_n − v‖²/(2σ_nτ_{n+1}),
(1/τ_n)⟨L(x_n − x_{n−1}), y_n − v⟩ ≥ −‖x_n − x_{n−1}‖²/(2τ_n²) − (‖L‖²/2)‖y_n − v‖²

and σ_nτ_{n+1} = σ_0τ_1. Finally, we notice that for all n ≥ 1 (see (74) and (75))

τ_{n+2} = τ_{n+1}/√(1 + τ_{n+1}(2γ − µ²τ_{n+1})).   (94)

From here it follows that τ_{n+1} < τ_n for all n ≥ 1 and lim_{n→+∞} n τ_n = 1/γ (see [8]). The proof is complete.

Remark 20. In Remark 17 we provided an example of a family of linear, continuous and self-adjoint operators (M_k)_{k≥0} for which the relations (77) and (78) are fulfilled. In the following we furnish more examples in this sense. To begin, we notice that simple algebraic manipulations easily lead to the conclusion that if

µ² τ_1 < γ,   (95)

then (θ_k)_{k≥0} is monotonically increasing. In the examples below we replace (71) with the stronger assumption (95).

(i) For all k ≥ 0, take M_k := (1/σ_k) Id.

Then (77) trivially holds, while (78), which in this case amounts to

(τ_k/τ_{k+1}) LL* + (1/(τ_{k+1}σ_k)) Id ⪰ (τ_{k+1}/τ_{k+2}) LL* + (1/(τ_{k+2}σ_{k+1})) Id,

follows from the fact that (τ_{k+1}σ_k)_{k≥0} is constant (see (75) and (76)) and (θ_k)_{k≥0} is monotonically increasing.

(ii) For all k ≥ 0, take M_k := 0. Relation (78) holds, since (θ_k)_{k≥0} is monotonically increasing. Condition (77) becomes in this setting

σ_k τ_k LL* − Id ⪰ 0.   (96)

Since τ_k > τ_{k+1} for all k ≥ 1 and (τ_{k+1}σ_k)_{k≥0} is constant, (96) holds if

LL* ∈ P_{1/(σ_0τ_1)}(G).   (97)

In this case one has to take in (73)

σ_0 τ_1 ‖L‖² = 1.

In view of the above theorem, the iterative scheme obtained in this particular instance (see Remark 18) can be regarded as an accelerated version of the classical ADMM algorithm (see Remark 4 and Remark 8(ii)).

(iii) For all k ≥ 0, take M_k := (1/τ_k) Id. Relation (78) holds, since (θ_k)_{k≥0} is monotonically increasing. On the other hand, condition (77) is equivalent to

σ_k (τ_k LL* + (1/τ_k) Id) ⪰ Id.   (98)

Since τ_k > τ_{k+1} for all k ≥ 1 and (τ_{k+1}σ_k)_{k≥0} is constant, (98) holds if

σ_0 τ_1 LL* ⪰ (1 − σ_0 τ_1) Id.   (99)

In case σ_0 τ_1 ≥ 1 (which is allowed according to (73) if ‖L‖ ≤ 1) this is obviously fulfilled. Otherwise, in order to guarantee (99), we have to impose that

LL* ∈ P_{(1 − σ_0τ_1)/(σ_0τ_1)}(G).   (100)

References

[1] H. Attouch, M. Théra, A general duality principle for the sum of two operators, Journal of Convex Analysis 3(1), 1-24, 1996
[2] H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, New York, 2011
[3] J.M. Borwein, J.D. Vanderwerff, Convex Functions: Constructions, Characterizations and Counterexamples, Cambridge University Press, Cambridge, 2010

[4] R.I. Boţ, Conjugate Duality in Convex Optimization, Lecture Notes in Economics and Mathematical Systems, Vol. 637, Springer, Berlin Heidelberg, 2010
[5] S. Banert, R.I. Boţ, E.R. Csetnek, Fixing and extending some recent results on the ADMM algorithm, arXiv preprint, 2016
[6] R.I. Boţ, E.R. Csetnek, An inertial alternating direction method of multipliers, Minimax Theory and its Applications 1(1), 29-49, 2016
[7] R.I. Boţ, E.R. Csetnek, A. Heinrich, A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators, SIAM Journal on Optimization 23(4), 2011-2036, 2013
[8] R.I. Boţ, E.R. Csetnek, A. Heinrich, C. Hendrich, On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems, Mathematical Programming 150(2), 251-279, 2015
[9] R.I. Boţ, C. Hendrich, Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization, Journal of Mathematical Imaging and Vision 49(3), 551-568, 2014
[10] R.I. Boţ, C. Hendrich, A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators, SIAM Journal on Optimization 23(4), 2541-2565, 2013
[11] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning 3(1), 1-122, 2011
[12] L.M. Briceño-Arias, P.L. Combettes, A monotone + skew splitting model for composite monotone inclusions in duality, SIAM Journal on Optimization 21(4), 1230-1250, 2011
[13] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision 40(1), 120-145, 2011
[14] P.L. Combettes, J.-C. Pesquet, Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators, Set-Valued and Variational Analysis 20(2), 307-330, 2012
[15] P.L. Combettes, B.C. Vũ, Variable metric quasi-Fejér monotonicity, Nonlinear Analysis 78, 17-31, 2013
[16] P.L. Combettes, V.R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Modeling and Simulation 4(4), 1168-1200, 2005
[17] L. Condat, A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms, Journal of Optimization Theory and Applications 158(2), 460-479, 2013

[18] J. Eckstein, Augmented Lagrangian and alternating direction methods for convex optimization: a tutorial and some illustrative computational results, RUTCOR Research Report 32-2012, 2012
[19] J. Eckstein, Some saddle-function splitting methods for convex programming, Optimization Methods and Software 4, 75-83, 1994
[20] J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Mathematical Programming 55, 293-318, 1992
[21] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, North-Holland Publishing Company, Amsterdam, 1976
[22] E. Esser, Applications of Lagrangian-based alternating direction methods and connections to split Bregman, CAM Report 09-31, UCLA, Center for Applied Mathematics, 2009
[23] M. Fortin, R. Glowinski, On decomposition-coordination methods using an augmented Lagrangian, in: M. Fortin and R. Glowinski (eds.), Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems, North-Holland, Amsterdam, 1983
[24] D. Gabay, Applications of the method of multipliers to variational inequalities, in: M. Fortin and R. Glowinski (eds.), Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems, North-Holland, Amsterdam, 1983
[25] D. Gabay, B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximations, Computers and Mathematics with Applications 2, 17-40, 1976
[26] R.T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pacific Journal of Mathematics 33(1), 209-216, 1970
[27] R. Shefi, M. Teboulle, Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM Journal on Optimization 24(1), 269-297, 2014
[28] S. Simons, From Hahn-Banach to Monotonicity, Springer-Verlag, Berlin, 2008
[29] B.C. Vũ, A splitting algorithm for dual monotone inclusions involving cocoercive operators, Advances in Computational Mathematics 38(3), 667-681, 2013
[30] C. Zălinescu, Convex Analysis in General Vector Spaces, World Scientific, Singapore, 2002


More information

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Michel Théra LACO, UMR-CNRS 6090, Université de Limoges michel.thera@unilim.fr reporting joint work with E. Ernst and M. Volle

More information

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems Extensions of the CQ Algorithm for the Split Feasibility Split Equality Problems Charles L. Byrne Abdellatif Moudafi September 2, 2013 Abstract The convex feasibility problem (CFP) is to find a member

More information

BASICS OF CONVEX ANALYSIS

BASICS OF CONVEX ANALYSIS BASICS OF CONVEX ANALYSIS MARKUS GRASMAIR 1. Main Definitions We start with providing the central definitions of convex functions and convex sets. Definition 1. A function f : R n R + } is called convex,

More information

Monotone Operator Splitting Methods in Signal and Image Recovery

Monotone Operator Splitting Methods in Signal and Image Recovery Monotone Operator Splitting Methods in Signal and Image Recovery P.L. Combettes 1, J.-C. Pesquet 2, and N. Pustelnik 3 2 Univ. Pierre et Marie Curie, Paris 6 LJLL CNRS UMR 7598 2 Univ. Paris-Est LIGM CNRS

More information

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES JEREMY J. BECNEL Abstract. We examine the main topologies wea, strong, and inductive placed on the dual of a countably-normed space

More information

Proximal methods. S. Villa. October 7, 2014

Proximal methods. S. Villa. October 7, 2014 Proximal methods S. Villa October 7, 2014 1 Review of the basics Often machine learning problems require the solution of minimization problems. For instance, the ERM algorithm requires to solve a problem

More information

The proximal mapping

The proximal mapping The proximal mapping http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes Outline 2/37 1 closed function 2 Conjugate function

More information

A Parallel Block-Coordinate Approach for Primal-Dual Splitting with Arbitrary Random Block Selection

A Parallel Block-Coordinate Approach for Primal-Dual Splitting with Arbitrary Random Block Selection EUSIPCO 2015 1/19 A Parallel Block-Coordinate Approach for Primal-Dual Splitting with Arbitrary Random Block Selection Jean-Christophe Pesquet Laboratoire d Informatique Gaspard Monge - CNRS Univ. Paris-Est

More information

Semi-infinite programming, duality, discretization and optimality conditions

Semi-infinite programming, duality, discretization and optimality conditions Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,

More information

A New Fenchel Dual Problem in Vector Optimization

A New Fenchel Dual Problem in Vector Optimization A New Fenchel Dual Problem in Vector Optimization Radu Ioan Boţ Anca Dumitru Gert Wanka Abstract We introduce a new Fenchel dual for vector optimization problems inspired by the form of the Fenchel dual

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

SHORT COMMUNICATION. Communicated by Igor Konnov

SHORT COMMUNICATION. Communicated by Igor Konnov On Some Erroneous Statements in the Paper Optimality Conditions for Extended Ky Fan Inequality with Cone and Affine Constraints and Their Applications by A. Capătă SHORT COMMUNICATION R.I. Boţ 1 and E.R.

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 2, 26 Abstract. We begin by considering second order dynamical systems of the from

More information

On proximal-like methods for equilibrium programming

On proximal-like methods for equilibrium programming On proximal-lie methods for equilibrium programming Nils Langenberg Department of Mathematics, University of Trier 54286 Trier, Germany, langenberg@uni-trier.de Abstract In [?] Flam and Antipin discussed

More information

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces Filomat 28:7 (2014), 1525 1536 DOI 10.2298/FIL1407525Z Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Convergence Theorems for

More information

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4890 4900 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A generalized forward-backward

More information

arxiv: v1 [math.oc] 16 May 2018

arxiv: v1 [math.oc] 16 May 2018 An Algorithmic Framewor of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method Li Shen 1 Peng Sun 1 Yitong Wang 1 Wei Liu 1 Tong Zhang 1 arxiv:180506137v1 [mathoc] 16 May 2018 Abstract We

More information

Heinz H. Bauschke and Walaa M. Moursi. December 1, Abstract

Heinz H. Bauschke and Walaa M. Moursi. December 1, Abstract The magnitude of the minimal displacement vector for compositions and convex combinations of firmly nonexpansive mappings arxiv:1712.00487v1 [math.oc] 1 Dec 2017 Heinz H. Bauschke and Walaa M. Moursi December

More information

Maximal monotone operators are selfdual vector fields and vice-versa

Maximal monotone operators are selfdual vector fields and vice-versa Maximal monotone operators are selfdual vector fields and vice-versa Nassif Ghoussoub Department of Mathematics, University of British Columbia, Vancouver BC Canada V6T 1Z2 nassif@math.ubc.ca February

More information

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT Linear and Nonlinear Analysis Volume 1, Number 1, 2015, 1 PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT KAZUHIRO HISHINUMA AND HIDEAKI IIDUKA Abstract. In this

More information

Conditions for zero duality gap in convex programming

Conditions for zero duality gap in convex programming Conditions for zero duality gap in convex programming Jonathan M. Borwein, Regina S. Burachik, and Liangjin Yao April 14, revision, 2013 Abstract We introduce and study a new dual condition which characterizes

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

An Algorithmic Framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method

An Algorithmic Framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method An Algorithmic Framewor of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method Li Shen 1 Peng Sun 1 Yitong Wang 1 Wei Liu 1 Tong Zhang 1 Abstract We propose a novel algorithmic framewor

More information

The resolvent average of monotone operators: dominant and recessive properties

The resolvent average of monotone operators: dominant and recessive properties The resolvent average of monotone operators: dominant and recessive properties Sedi Bartz, Heinz H. Bauschke, Sarah M. Moffat, and Xianfu Wang September 30, 2015 (first revision) December 22, 2015 (second

More information

Examples of Convex Functions and Classifications of Normed Spaces

Examples of Convex Functions and Classifications of Normed Spaces Journal of Convex Analysis Volume 1 (1994), No.1, 61 73 Examples of Convex Functions and Classifications of Normed Spaces Jon Borwein 1 Department of Mathematics and Statistics, Simon Fraser University

More information

Tight Rates and Equivalence Results of Operator Splitting Schemes

Tight Rates and Equivalence Results of Operator Splitting Schemes Tight Rates and Equivalence Results of Operator Splitting Schemes Wotao Yin (UCLA Math) Workshop on Optimization for Modern Computing Joint w Damek Davis and Ming Yan UCLA CAM 14-51, 14-58, and 14-59 1

More information

Strongly convex functions, Moreau envelopes and the generic nature of convex functions with strong minimizers

Strongly convex functions, Moreau envelopes and the generic nature of convex functions with strong minimizers University of Wollongong Research Online Faculty of Engineering and Information Sciences - Papers: Part B Faculty of Engineering and Information Sciences 206 Strongly convex functions, Moreau envelopes

More information

The Brezis-Browder Theorem in a general Banach space

The Brezis-Browder Theorem in a general Banach space The Brezis-Browder Theorem in a general Banach space Heinz H. Bauschke, Jonathan M. Borwein, Xianfu Wang, and Liangjin Yao March 30, 2012 Abstract During the 1970s Brezis and Browder presented a now classical

More information

Visco-penalization of the sum of two monotone operators

Visco-penalization of the sum of two monotone operators Visco-penalization of the sum of two monotone operators Patrick L. Combettes a and Sever A. Hirstoaga b a Laboratoire Jacques-Louis Lions, Faculté de Mathématiques, Université Pierre et Marie Curie Paris

More information

EE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1

EE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1 EE 546, Univ of Washington, Spring 2012 6. Proximal mapping introduction review of conjugate functions proximal mapping Proximal mapping 6 1 Proximal mapping the proximal mapping (prox-operator) of a convex

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

Dedicated to Michel Théra in honor of his 70th birthday

Dedicated to Michel Théra in honor of his 70th birthday VARIATIONAL GEOMETRIC APPROACH TO GENERALIZED DIFFERENTIAL AND CONJUGATE CALCULI IN CONVEX ANALYSIS B. S. MORDUKHOVICH 1, N. M. NAM 2, R. B. RECTOR 3 and T. TRAN 4. Dedicated to Michel Théra in honor of

More information

Firmly Nonexpansive Mappings and Maximally Monotone Operators: Correspondence and Duality

Firmly Nonexpansive Mappings and Maximally Monotone Operators: Correspondence and Duality Firmly Nonexpansive Mappings and Maximally Monotone Operators: Correspondence and Duality Heinz H. Bauschke, Sarah M. Moffat, and Xianfu Wang June 8, 2011 Abstract The notion of a firmly nonexpansive mapping

More information

On the use of semi-closed sets and functions in convex analysis

On the use of semi-closed sets and functions in convex analysis Open Math. 2015; 13: 1 5 Open Mathematics Open Access Research Article Constantin Zălinescu* On the use of semi-closed sets and functions in convex analysis Abstract: The main aim of this short note is

More information

Maximal monotonicity for the precomposition with a linear operator

Maximal monotonicity for the precomposition with a linear operator Maximal monotonicity for the precomposition with a linear operator Radu Ioan Boţ Sorin-Mihai Grad Gert Wanka July 19, 2005 Abstract. We give the weakest constraint qualification known to us that assures

More information