Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators

Size: px
Start display at page:

Download "Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators"

Transcription

1 Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators Radu Ioan Boţ Christopher Hendrich 2 April 28, 206 Abstract. The aim of this article is to present two different primal-dual methods for solving structured monotone inclusions involving parallel sums of compositions of maximally monotone operators with linear bounded operators. By employing some elaborated splitting techniques, all of the operators occurring in the problem formulation are processed individually via forward or backward steps. The treatment of parallel sums of linearly composed maximally monotone operators is motivated by applications in imaging which involve first- second-order total variation functionals, to which a special attention is given. Keywords. convex optimization, Fenchel duality, infimal convolution, monotone inclusion, parallel sum, primal-dual algorithm AMS subject classification. 90C25, 90C46, 47A52 Introduction In applied mathematics, a wide variety of convex optimization problems such as singleor multifacility location problems, support vector machine problems for classification regression, problems in clustering portfolio optimization as well as signal image processing problems, all of them potentially possessing nonsmooth terms in their objectives, can be reduced to the solving of inclusion problems involving mixtures of monotone set-valued operators. Therefore, the solving of monotone inclusion problems involving maximally monotone operators continues to be one of the most attractive branches of research see [, 3, 5, 6, 9, 2, 3, 5 24, 26 29].. Motivation The problem formulation we consider in this article is inspired by a real-world application in image denoising, where first- second-order total variation functionals are linked via infimal convolutions in order to reduce staircasing effects in the reconstructed images. Let b R n be the observed vectorized noisy image of size M N with n = MN for greyscale n = 3MN for colored images. For k N ω = ω,..., ω k R k ++ University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz, A-090 Vienna, Austria, radu.bot@univie.ac.at. Research partially supported by DFG German Research Foundation, project BO 256/4-. 2 Chemnitz University of Technology, Department of Mathematics, D-0907 Chemnitz, Germany, christopher.hendrich@mathematik.tu-chemnitz.de. Research supported by a Graduate Fellowship of the Free State Saxony, Germany.

2 we consider on R k n the following norm defined for y = y,..., y k T R k n as y,ω = ω y ω k y 2 k where addition, multiplication square root of vectors are understood to be componentwise. Further, we consider the forward difference matrix D k := , R k k, which models the discrete first-order derivative. Note that Dk T D k is then an approximation of the second-order derivative. We denote by A B the Kronecker product of the matrices A B define ] D x = I N D M, D y = D N I M D = [ Dx D y,. where D x D y represent the vertical horizontal difference operators, respectively, I N I M are the identity matrices of sizes N M, respectively. Further, we define the discrete second-order derivatives matrices ] D xx = I N D T MD M, D yy = D T ND N I M, D 2 = [ Dxx D yy.2 L = [ D T x 0 0 D T y ] notice that D 2 = L D. This approach was initially proposed in [4] further investigated in [27]. We refer the reader to [27] for other discrete second-order derivatives involving also mixed partial derivatives in horizontal-vertical direction vice versa. The reconstructed image is obtained by solving one of the following convex optimization problems see [27, Example 2.2 Example 3.] } l 2 2-IC/P inf x R n 2 x b 2 + α,ω D α 2,ω2 D 2 x.3 } l 2 2-MIC/P inf x R n 2 x b 2 + α,ω α 2,ω2 L D x,.4 respectively, where α, α 2 R ++ are the regularization parameters the regularizers correspond to anistropic total variation functionals. This is the reason why we are going to treat the following more general primal-dual pair of complexly structured convex optimization problems. Problem.. Let H be a real Hilbert space, z H f, h ΓH such that h is differentiable with µ-lipschitzian gradient for µ R ++. Furthermore, for every i =,..., m, let G i, X i, Y i be real Hilbert spaces, r i G i, let g i ΓX i l i ΓY i 2

3 consider the nonzero linear bounded operators L i : H G i, K i : G i X i M i : G i Y i. We want to solve the primal optimization problem inf fx + x H gi } K i li M i L i x r i + hx x, z.5 together with its conjugate dual problem [ ] } sup f h z L i Ki p i gi p i + li q i + p i, K i r i. p,q X Y, Ki p i=mi q i,,...,m.6 By R ++ we denote the set of strictly positive real numbers by R + := R ++ 0}. For a function f : H R := R ± }, where H is a real Hilbert space, we denote by dom f := x H : fx < + } its effective domain call f proper, if dom f fx > for all x H. We let ΓH := f : H R f is proper, convex lower semicontinuous}. The conjugate function of f is f : H R, f p = sup p, x fx : x H} for all p H,, if f ΓH, then f ΓH, as well. For a linear bounded operator L : H G, where G is another real Hilbert space, the operator L : G H defined via Lx, y = x, L y for all x H all y G denotes its adjoint. Having two proper functions f, g : H R, their infimal convolution is defined by f g : H R, f gx = inf y H fy + gx y} for all x H. If f g are convex, then f g is convex, too. In order to solve the primal-dual pair of optimization problems.5-.6, we will actually solve the corresponding system of optimality conditions see 3.2, which is nothing else than a system of monotone inclusions with a complex intricate structure. This motivates the investigation of the following primal-dual pair of monotone inclusion problems. Problem.2. Let H be a real Hilbert space, z H, A : H 2 H a maximally monotone operator C : H H a monotone µ -cocoercive operator for µ R ++. Furthermore, for every i =,..., m, let G i, X i, Y i be real Hilbert spaces, r i G i, B i : X i 2 X i D i : Y i 2 Y i be maximally monotone operators consider the nonzero linear bounded operators L i : H G i, K i : G i X i M i : G i Y i. We want to solve the primal inclusion K find x H such that z Ax + L i i B i K i M i D i M i L i x r i + Cx.7 together with its dual inclusion p i X i, i =,..., m, find q i Y i, i =,..., m, such that x H : y i G i, i =,..., m, z m L i K i p i Ax + Cx, K i L i x y i r i Bi p i, i =,..., m, M i y i Di q i, i =,..., m, Ki p i = Mi q i, i =,..., m..8 3

4 We propose in this paper two iterative methods of forward-backward forwardbackward-forward type, respectively, for solving this primal-dual pair of monotone inclusion problems investigate their asymptotic behavior. The two methods share the common feature to reduce the solving of the primal-dual pair of monotone inclusions to the determination of the set of the zeros of the sum of two maximally monotone operators defined on an suitable product space endowed with a topology that is synchronized with the problem. Depending on the nature of the two operators we employ the forwardbackward algorithm see [2] or Tseng s forward-backward-forward algorithm see [28] obtain easily implementable iterative schemes. These have the property that each of the operators arising in the formulation of the monotone inclusion problem.7 is evaluated separately. More precisely, the set-valued operators are evaluated via their resolvents, called backward steps, while the single-valued ones are accessed via explicit forward steps. A forward-backward-forward algorithm for solving the primal-dual pair of monotone inclusions.7 -.8, in the particular situation when L i is the identity operator r i = 0 for any i =,..., m, has been recently investigated in [3]. However, since it makes a forward step less, the forward-backward method is naturally more attractive from the perspective numerical implementations. This phenomenon is supported by our experimental results reported in Section 4. After the appearance of the proximal point algorithm for approximating the set of zeros of a maximal monotone operator defined on a Hilbert space see [26], the attention of the community was drawn to iterative schemes for determining the zeros of the sum of two maximally monotone operators, due to the role played by these schemes in the minimization of the sum of two convex functions. To the most classical methods of this type belongs the Douglas-Rachford splitting algorithm see [22], which has the property that at each iteration the operators are processed separately via their resolvents. Of equal importance are methods designed to determine the zeros of the sum of a single-valued monotone operator a maximally monotone operator, like the forward-backward [2] Tseng s forward-backward-forward [28] algorithms, which evaluate the single-valued operator via a forward step the set-valued one via its resolvent. In the last years, motivated by different applications, the complexity of the monotone inclusion problems increased, by including sums of maximally monotone operators composed with linear bounded operators see [3, 5], single-valued Lipschitzian or cocoercive monotone operators parallel sums of maximally monotone operators see [3, 5, 2, 9 2, 29]. For some of these iterative schemes, under strong monotonicity assumptions accelerated versions have been provided see [6, 9, 5]. The article is organized as follows. In the remaining of this section we introduce notations preliminary results in convex analysis monotone operator theory. In Section 2 we formulate the two algorithms study their convergence behavior. In Section 3 we employ the outcomes of the previous one to the simultaneously solving of convex minimization problems their conjugate dual problems. Numerical experiments in the context of image denoising problems with first- second-order total variation functionals are made in Section 4..2 Notation preliminaries We are considering the real Hilbert space H endowed with inner product, associated norm =,. 
The symbols denote weak strong convergence, respectively. Having the sequences x n n 0 y n n 0 in H, we mind errors in the 4

5 implementation of the algorithm by using the following notation taken from [3] x n y n n 0 n 0 x n y n < +..9 Let M : H 2 H be a set-valued operator. We denote by zer M = x H : 0 Mx} its set of zeros, by gra M = x, u H H : u Mx} its graph by ran M = u H : x H, u Mx} its range. The inverse of M is M : H 2 H, u x H : u Mx}. The operator M is said to be monotone if x y, u v 0 for all x, u, y, v gra M. The operator M is said to be maximally monotone if it is monotone there exists no monotone operator M : H 2 H such that gra M properly contains gra M. The operator M is said to be uniformly monotone with modulus φ M : R + [0, + ] if φ M is increasing, vanishes only at 0, x y, u v φ M x y for all x, u, y, v gra M. Let µ > 0 be arbitrary. A single-valued operator M : H H is said to be µ-cocoercive if x y, Mx My µ Mx My 2 for all x, y H H. Moreover, M is µ-lipschitzian if Mx My µ x y for all x, y H H. A linear bounded operator M : H H is said to be self-adjoint, if M = M skew, if M = M. The sum the parallel sum of two set-valued operators M, M 2 : H 2 H are defined as M + M 2 : H 2 H, M + M 2 x = M x + M 2 x x H M M 2 : H 2 H, M M 2 = M + M2, respectively. If M M 2 are monotone, then M + M 2 M M 2 are monotone, too. However, if M M 2 are maximally monotone, this property is in general not true neither for M + M 2 nor for M M 2, unless some qualification conditions are fulfilled see [2, 4, 30]. The resolvent of an operator M : H 2 H is J M = Id + M, where the operator Id denotes the identity on H. When M is maximally monotone, its resolvent is a single-valued firmly nonexpansive operator, by [2, Proposition 23.8], we have for γ R ++ Id = JγM + γjγ M γ Id..0 Moreover, for f ΓH γ R ++, the subdifferential γf is maximally monotone cf. [25] it holds J γ f = Id + γ f = Prox γf. Recall that the convex subdifferential of f : H R at x H is the set fx = p H : fy fx p, y x y H}, if fx R, is taken to be the empty set, otherwise. Furthermore, Prox γf x denotes the proximal point of γf at x H, representing the unique optimal solution of the optimization problem inf γfy + } y H 2 y x 2.. In this particular situation, relation.0 becomes Moreau s decomposition formula Id = Prox γf +γ Prox γ f γ Id..2 When Ω H is a nonempty, convex closed set, the function δ Ω : H R, defined by δ Ω x = 0 for x Ω δ Ω x = +, otherwise, denotes the indicator function of the set Ω. For each γ > 0 the proximal point of γδ Ω at x H is nothing else than Prox γδω x = Prox δω x = P Ω x = arg min y Ω 5 2 y x 2,

6 which is the projection of x on Ω. Finally, when for i =,..., m the real Hilbert spaces H i are endowed with inner product, Hi associated norm Hi =, Hi, we denote by H = H... H m their direct sum. For v = v,..., v m, q = q,..., q m H, this real Hilbert space is endowed with inner product associated norm defined via v, q H = v i, q i Hi, respectively, v H = m v i 2 H i. 2 The primal-dual iterative schemes Within this section we provide two different algorithms for solving the primal-dual inclusions introduced in Problem.2 discuss their asymptotic convergence. In Subsection 2.2, however, the assumptions imposed on the monotone operator C : H H are weakened by assuming that C is only µ-lipschitz continuous for some µ R ++. In the following, we let X = X... X m, Y = Y... Y m, G = G... G m p = p,..., p m X, q = q,..., q m Y, y = y,..., y m G. We say that x, p, q, y H X Y G is a primal-dual solution to Problem.2, if z L i Ki p i Ax + Cx K i L i x y i r i Bi p i, M i y i Di q i, Ki p i = Mi q i, i =,..., m. 2. If x, p, q, y H X Y G is a primal-dual solution to Problem.2, then x is a solution to.7 p, q, y is a solution to.8. Notice also that K x solves.7 z Ax + L i i B i K i M i D i M i L i x r i + Cx z m L i v i Ax + Cx, v G such that L i x r i Ki B vi i K i + Mi D vi i M i, i =,..., m v, y G G such that p, q, y X Y G such that z m L i v i Ax + Cx, v i Ki B i K i Li x y i r i, i =,..., m, v i Mi D i M i yi, i =,..., m z m L i K i p i Ax + Cx, p i B i K i Li x y i r i, i =,..., m, q i D i M i yi, i =,..., m, K i p i = M i q i, i =,..., m 6

7 p, q, y X Y G such that z m L i K i p i Ax + Cx, K i L i x y i r i Bi p i, i =,..., m, M i y i Di q i, i =,..., m, Ki p i = Mi q i, i =,..., m 2.2 Thus, if x is a solution to.7, then there exists p, q, y X Y G such that x, p, q, y is a primal-dual solution to Problem.2 if p, q, y is a solution to.8, then there exists x H such that x, p, q, y is a primal-dual solution to Problem.2. Remark 2.. The notations.9 have been introduced in order to allow errors in the implementation of the algorithm, without affecting the readability of the paper in the sequel. This is reasonable since errors preserve their summability under addition, scalar multiplication linear bounded mappings. 2. An algorithm of forward-backward type In this subsection we propose a forward-backward type algorithm for solving Problem.2 prove its convergence by showing that it can be reduced to an error-tolerant forwardbackward iterative scheme. Algorithm 2.. Let x 0 H, for any i =,..., m, let p i,0 X i, q i,0 Y i z i,0, y i,0, v i,0 G i. For any i =,..., m, let τ, θ,i, θ 2,i, γ,i, γ 2,i σ i be strictly positive real numbers such that for α = max τ 2µ α min,...,m τ,, θ,i σ i L i 2, max j=,...,m θ 2,i, γ,i, θ,j γ,j K j 2,, } >, 2.3 γ 2,i σ i } θ 2,j γ 2,j M j 2. Furthermore, let ε 0,, λ n n 0 be a sequence in [ε, ] set x n J τa x n τ Cx n + m L i v i,n z For i =,..., m p i,n J θ,i B p i,n + θ,i K i z i,n i q i,n J θ2,i D q i,n + θ 2,i M i y i,n i u,i,n z i,n + γ,i Ki p i,n 2 p i,n + v i,n + σ i L i 2 x n x n r i u 2,i,n y i,n + γ 2,i M i q i,n 2 q i,n + v i,n + σ i L i 2 x n x n r i +σ z i,n i γ 2,i +σ i γ,i +γ 2,i u,i,n σ iγ,i +σ i γ 2,i u 2,i,n n 0 ỹ i,n +σ i γ 2,i u 2,i,n σ i γ 2,i z i,n ṽ i,n v i,n + σ i L i 2 x n x n r i z i,n ỹ i,n x n+ = x n + λ n x n x n For i =,..., m p i,n+ = p i,n + λ n p i,n p i,n q i,n+ = q i,n + λ n q i,n q i,n z i,n+ = z i,n + λ n z i,n z i,n y i,n+ = y i,n + λ n ỹ i,n y i,n v i,n+ = v i,n + λ n ṽ i,n v i,n

8 Theorem 2.. For Problem.2, suppose that z ran A + K L i i B i K i M i D i M i L i r i + C, 2.5 consider the sequences generated by Algorithm 2.. Then there exists a primal-dual solution x, p, q, y to Problem.2 such that i x n x, p i,n p i, q i,n q i y i,n y i for any i =,..., m as n +. ii if C is uniformly monotone at x, then x n x as n +. Proof. We introduce the real Hilbert space K = H X Y G G G let p = p,..., p m q = q,..., q m y = y,..., y m We introduce the maximally monotone operators z = z,..., z m v = v,..., v m r = r,..., r m. 2.6 B : X 2 X, p B p... B m p m D : Y 2 Y, q D q... D m q m. Further, consider the set-valued operator M : K 2 K, x, p, q, z, y, v z + Ax B p D q v, v, r + z + y, which is maximally monotone, since A, B D are maximally monotone cf. [2, Proposition Proposition 20.23] the linear bounded operator x, p, q, y, z, v 0, 0, 0, v, v, z + y is skew hence maximally monotone cf. [2, Example 20.30]. Therefore, M can be written as the sum of two maximally monotone operators, one of them having full domain, fact which leads to the maximality of M see, for instance, [2, Corollary 24.4i]. Furthermore, consider the linear bounded operators K : G X, z K z,..., K m z m, M : G Y, y M y,..., M m y m, S : K K, m x, p, q, z, y, v L i v i, Kz, My, K p, M q, L x,..., L m x. The operator S is skew as well therefore maximally monotone. As dom S = K, the sum M + S is maximally monotone see [2, Corollary 24.4i]. Finally, we introduce the monotone operator Q : K K, x, p, q, z, y, v Cx, 0, 0, 0, 0, 0 which is, obviously, µ -cocoercive. By making use of 2.2, we observe that z m L i K i p i Ax + Cx, K 2.5 x, p, q, y H X Y G : i L i x y i r i Bi p i, i =,..., m, M i y i Di q i, i =,..., m, Ki p i = Mi q i, i =,..., m. 8

9 x, p, q H X Y z, y, v G G G : x, p, q, z, y, v zerm + S + Q. From here it follows that 0 z + Ax + m L i v i + Cx, 0 K i z i + Bi p i, i =,..., m, 0 M i y i + Di q i, i =,..., m, 0 = Ki p i v i, i =,..., m, 0 = M i q i v i, i =,..., m, 0 = r i + z i + y i L i x, i =,..., m x, p, q, z, y, v zerm + S + Q z m L i K i p i Ax + Cx, K i L i x y i r i Bi p i, i =,..., m, M i y i Di q i, i =,..., m, Ki p i = Mi q i, i =,..., m. x, p, q, y is a primal-dual solution to Problem Further, for positive real values τ, θ,i, θ 2,i, γ,i, γ 2,i, σ i R ++, i =,..., m, we introduce the notations p θ = p θ,,..., pm z θ,m γ q θ 2 = q, = z z γ,,..., m γ,m θ 2,,..., qm y θ 2,m γ 2 = y, v γ 2,,..., ym σ = v σ,..., vm σ m, γ 2,m define the linear bounded operator x V : K K, x, p, q, z, y, v τ, p, q, z, y, v + θ θ 2 γ γ 2 σ L i v i, Kz, My, K p, M q, L x,..., L m x. It is a simple calculation to prove that V is self-adjoint. Furthermore, the operator V is ρ-strongly positive with ρ = α min,...,m τ,,,,, } > 0, θ,i θ 2,i γ,i γ 2,i σ i for α = max τ σ i L i 2, max j=,...,m θ,j γ,j K j 2, } θ 2,j γ 2,j M j 2. The fact that ρ is a positive real number follows by the assumptions made in Algorithm 2.. Indeed, using that 2ab αa 2 + b2 α for every a, b R every α R ++, it yields for any i =,..., m 2 L i x H v i Gi σ i L i 2 τ m σ i L i τ m σ i L i 2 x 2 H K i p i Xi z i Gi γ,i K i θ p i 2,i γ,i K i 2 X θ,i γ i + z i 2 G,i γ i,,i 2 M i q i Yi y i Gi γ 2,i M i θ q i 2 2,i γ 2,i M i 2 Y θ2,i γ i + y i 2 G 2,i γ i. 2,i σ i v i 2 G i, 2.8 9

10 Consequently, for each x = x, p, q, z, y, v K, using the Cauchy Schwarz inequality 2.8, it follows that [ x, V x K = x 2 pi 2 H X + i + q i 2 Y i + z i 2 G i + y i 2 G i + v i 2 ] G i τ θ,i θ 2,i γ,i γ 2,i σ i 2 L i x, v i Gi + 2 p i, K i z i Xi + 2 q i, M i y i Yi α min,...,m τ,,,,, } x 2 K θ,i θ 2,i γ,i γ 2,i σ i = ρ x 2 K. 2.9 Since V is maximally monotone cf. [2, Example 20.29] ρ-strongly positive, it is strongly monotone therefore, by [2, Proposition 22.8], it holds that V is surjective. Consequently, V exists V ρ. In consideration of.9, the algorithmic scheme 2.4 can equivalently be written in the form n 0 x n x n τ m L i v i,n ṽ i,n Cx n z + A x n a n + m L i ṽi,n an τ For i =,..., m p i,n p i,n θ,i + K i z i,n z i,n Bi p i,n b i,n K i z i,n b i,n θ,i q i,n q i,n θ 2,i + M i y i,n ỹ i,n Di q i,n d i,n M i ỹ i,n d i,n θ 2,i z i,n z i,n γ,i + Ki p i,n p i,n = ṽ i,n + Ki p i,n e,i,n y i,n ỹ i,n γ 2,i + Mi q i,n q i,n = ṽ i,n + Mi q i,n e 2,i,n v i,n ṽ i,n σ i L i x n x n = r i + z i,n + ỹ i,n L i x n e 3,i,n x n+ = x n + λ n x n x n, 2.0 where p n = p,n,... p m,n X q n = q,n,..., q m,n Y z n = z,n,..., z m,n G y n = y,n,..., y m,n G v n = v,n,..., v m,n G p n = p,n,... p m,n X q n = q,n,..., q m,n Y z n = z,n,..., z m,n G ỹ n = ỹ,n,..., ỹ m,n G ṽ n = ṽ,n,..., ṽ m,n G xn = x n, p n, q n, z n, y n, v n K x n = x n, p n, q n, z n, ỹ n, ṽ n K. Also, for any n 0, we consider sequences defined by a n H b n = b,n,... b m,n X d n = d,n,..., d m,n Y e,n = e,,n,..., e,m,n G e 2,n = e 2,,n,..., e 2,m,n G, e 3,n = e 3,,n,..., e 3,m,n G 2. that are summable in the corresponding norm. Further, by denoting for any n 0 en = a n, b n, d n, 0, 0, 0 K e τ n = an τ, bn θ, dn θ 2, e,n, e 2,n, e 3,n K, 0

11 which are also terms of summable sequences in the corresponding norm, it yields that the scheme in 2.0 is equivalent to n 0 V xn x n Qx n M + S x n e n + Se n e τ n x n+ = x n + λ n x n x n. 2.2 We now introduce the notations A K := V M + S B K := V Q 2.3 the summable sequence with terms e V n = V V + Se n e τ n for any n 0. Then, for any n 0, we have V x n x n Qx n M + S x n e n + Se n e τ n V x n Qx n V + M + S x n e n + V + Se n e τ n x n V Qx n Id + V M + S x n e n + V V + Se n e τ n x n = Id + V M + S x n V Qx n e V n + e n x n = Id + A K x n B K x n e V n + e n. 2.4 Taking into account that the resolvent is Lipschitz continuous, the sequence having as terms e A K n = J AK x n B K x n e V n J AK x n B K x n + e n n 0 is summable we have Thus, the iterative scheme in 2.2 becomes x n = J AK x n B K x n + e A K n n 0. n 0 xn J AK x n B K x n x n+ = x n + λ n x n x n, 2.5 which shows that the algorithm we propose in this subsection has the structure of a forward-backward method. In addition, let us observe that zer A K + B K = zer V M + S + Q = zer M + S + Q. We then introduce the Hilbert space K V with inner product norm respectively defined, for x, y K, via x, y KV = x, V y K x KV = x, V x K. 2.6 Since M + S Q are maximally monotone on K, the operators A K B K are maximally monotone on K V. Moreover, since V is self-adjoint ρ-strongly positive, one can easily see that weak strong convergence in K V are equivalent with weak strong convergence in K, respectively. By making use of V ρ, one can show that

12 B K is µ ρ-cocoercive on K V. Indeed, we get for x, y K V 3.35] that see, also, [29, Eq. x y, B K x B K y KV = x y, Qx Qy K µ Qx Qy 2 K µ V V Qx V Qy K Qx Qy K µ V B K x B K y, Qx Qy K = µ V B K x B K y 2 K V µ ρ B K x B K y 2 K V. 2.7 As our assumption imposes that 2µ ρ >, we can use the statements given in [7, Corollary 6.5] in the context of an error tolerant forward-backward algorithm in order to establish the desired convergence results. i By Corollary 6.5 in [7], the sequence x n n 0 converges weakly in K V therefore in K to some x = x, p, q, z, y, v zer A K + B K = zer M + S + Q. By 2.7, it thus follows that x, p, q, y is a primal-dual solution with respect to Problem.2. ii From [7, Remark 3.4], it follows B K x n B K x 2 K V < +, n 0 therefore we have B K x n B K x n or, equivalently, Qx n Qx as n +. Considering the definition of Q, one can see that this implies Cx n Cx as n +. As C is uniformly monotone, there exists an increasing function φ C : [0, + [0, + ] vanishing only at 0 such that φ C x n x x n x, Cx n Cx x n x Cx n Cx n 0. The boundedness of x n x n 0 the convergence Cx n x n x as n +. Cx further imply that Remark 2.2. Suppose that C : H H, x 0}, in Problem.2. Then condition 2.3 simplifies to max τ σ i L i 2, max j=,...,m θ,j γ,j K j 2, θ 2,j γ 2,j M j 2} } <. In this case, the scheme 2.5 reads n 0 x n+ x n + λ n J AK x n x n, 2.8 it can be shown to convergence under the relaxed assumption that λ n n 0 [ε, 2 ε], for ε 0, see, for instance, [6, 7, 23]. Remark 2.3. i When implementing Algorithm 2., the term L i 2 x n x n should be stored in a separate variable for any i =,..., m. Taking this into account, each linear bounded operator occurring in Problem.2 needs to be processed once via some forward evaluation once via its adjoint. ii The maximally monotone operators A, B i D i, i =,..., m, in Problem.2 are accessed via their resolvents so-called backward steps, also by taking into account the relation between the resolvent of a maximally monotone operator its inverse given in.0. 2

13 iii The possibility of performing a forward step for the cocoercive monotone operator C is an important aspect, since forward steps are usually much easier to implement than resolvent steps resp. evaluations of proximal operators. Due to the Baillon Haddad theorem cf. [2, Corollary 8.6], each µ-lipschitzian gradient with µ R ++ of a convex Fréchet differentiable function f : H R is µ -cocoercive. 2.2 An algorithm of forward-backward-forward type In this subsection we propose a forward-backward-forward type algorithm for solving Problem.2, with the modification that the operator C : H H is assumed to be µ-lipschitz continuous for some µ R ++, but not necessarily µ -cocoercive. Algorithm 2.2. Let x 0 H, for any i =,..., m, let p i,0 X i, q i,0 Y i, z i,0, y i,0, v i,0 G i. Set let ε β = µ + m max L i 2, [ 0, β+, γ n n 0 be a sequence in n 0 max j=,...,m ε, ε β Kj 2, M j 2}}, 2.9 ] set x n J γna x n γ n Cx n + m L i v i,n z For i =,..., m p i,n J γnb p i,n + γ n K i z i,n i q i,n J γnd q i,n + γ n M i y i,n i u,i,n z i,n γ n Ki p i,n v i,n γ n L i x n r i u 2,i,n y i,n γ n Mi q i,n v i,n γ n L i x n r i z i,n +γ2 n u +2γ 2,i,n γ2 n u n +γ 2 2,i,n n ỹ i,n +γ 2 u2,i,n γ n z 2 i,n n ṽ i,n v i,n + γ n L i x n r i z i,n ỹ i,n x n+ x n + γ n Cx n C x n + m L i v i,n ṽ i,n For i =,..., m p i,n+ p i,n γ n K i z i,n z i,n q i,n+ q i,n γ n M i y i,n ỹ i,n z i,n+ z i,n + γ n Ki p i,n p i,n y i,n+ ỹ i,n + γ n Mi q i,n q i,n v i,n+ ṽ i,n γ n L i x n x n Theorem 2.2. In Problem.2, let C : H H be µ-lipschitz continuous for µ R ++, suppose that z ran A + K L i i B i K i M i D i M i L i r i + C, 2.2 consider the sequences generated by Algorithm 2.2. Then there exists a primal-dual solution x, p, q, y to Problem.2 such that i n 0 x n x n 2 < + for any i =,..., m p i,n p i,n 2 < +, q i,n q i,n 2 < + y i,n ỹ i,n 2 < +. n 0 n 0 n 0 3

14 ii x n x, x n x, for any i =,..., m pi,n p i,n p i,n p i,n, qi,n q i,n q i,n q i,n yi,n y i,n ỹ i,n y i,n. Proof. As in the proof of Theorem 2., consider K = H X Y G G G along with the notations introduced in 2.6. Further, let the operators M : K 2 K, S : K K Q : K K be defined as in the proof of the same result. The operator S + Q is monotone, Lipschitz continuous, hence maximally monotone cf. [2, Corollary 20.25], it fulfills doms + Q = K. Therefore the sum M + S + Q is maximally monotone as well see [2, Corollary 24.4i]. In the following we derive the Lipschitz constant of S + Q. For arbitrary x = x, p, q, z, y, v x = x, p, q, z, ỹ, ṽ K, by using the Cauchy Schwarz inequality, it yields S + Qx S + Q x Qx Q x + Sx S x m µ x x + L i v i ṽ i, Kz z, My ỹ, K p p, M q q, L x x,..., L m x x m = µ x x + L i v i ṽ i 2 + [ K i z i z i 2 + M i y i ỹ i 2 + K i p i p i 2 + M i q i q i 2 + L i x x 2] 2 m µ x x + L i 2 x x 2 + µ + m max L i 2, v i ṽ i 2 + [ K i 2 z i z i 2 + M i 2 y i ỹ i 2 + K i 2 p i p i 2 + M i 2 q i q i 2] 2 max j=,...,m Kj 2, M j 2}} x x In the following we use the sequences in 2. for modeling summable errors in the implementation. In addition we consider the summable sequences in K with terms defined for any n 0 as e n = a n, b n, d n, 0, 0, 0 ẽ n = 0, 0, 0, e,n, e 2,n, e 3,n. 4

15 Note that 2.20 can equivalently be written as x n γ n Cxn + m L i v i,n Id + γn z + A x n a n For i =,..., m p i,n + γ n K i z i,n Id + γ n B i pi,n b i,n q i,n + γ n M i y i,n Id + γ n D i qi,n d i,n z i,n γ n Ki p i,n = z i,n γ n ṽ i,n e,i,n y i,n γ n Mi q i,n = ỹ i,n γ n ṽ i,n e 2,i,n n 0 v i,n + γ n L i x n = ṽ i,n + γ n r i + z i,n + ỹ i,n e 3,i,n x n+ x n + γ n Cx n C x n + m L i v i,n ṽ i,n For i =,..., m p i,n+ p i,n γ n K i z i,n z i,n q i,n+ q i,n γ n M i y i,n ỹ i,n z i,n+ z i,n + γ n Ki p i,n p i,n y i,n+ ỹ i,n + γ n Mi q i,n q i,n v i,n+ ṽ i,n γ n L i x n x n Therefore, 2.23 is nothing else than n 0 xn γ n S + Qx n Id + γ n M x n e n ẽ n x n+ x n + γ n S + Qx n S + Qp n We now introduce the notations Then 2.24 is A K := M B K := S + Q n 0 xn = J γna K x n γ n B K x n + ẽ n + e n x n+ x n + γ n B K x n B K x n We observe that for e K n := J γna K x n γ n B K x n + ẽ n J γna K x n γ n B K x n + e n, one has x n = J γna K x n γ n B K x n + e K n for any n 0 it holds e K n = J γnak x n γ n B K x n + ẽ n J γnak x n γ n B K x n + e n n 0 n 0 [ J γnak x n γ n B K x n + ẽ n J γnak x n γ n B K x n + e n ] n 0 [ ẽ n + e n ] < +. n 0 Thus, 2.26 becomes n 0 xn J γna K x n γ n B K x n x n+ x n + γ n B K x n B K x n, 2.27 which is an error-tolerant forward-backward-forward method in K whose convergence has been investigated in [3]. Note that the exact version of this algorithm was proposed by Tseng in [28]. 5

16 i By [3, Theorem 2.5i] we have x n x n 2 < +, n 0 which yields n 0 x n x n 2 < + for any i =,..., m, p i,n p i,n 2 < +, q i,n q i,n 2 < + y i,n ỹ i,n 2 < +. n 0 n 0 n 0 ii Let x = x, p, q, z, y, v zerm + S + Q. Using [3, Theorem 2.5ii], we obtain x n x x n x. In consideration of 2.7, it follows that x, p, q, v is a primal-dual solution to Problem.2, x n x, x n x, for i =,..., m pi,n p i,n p i,n p i,n, qi,n q i,n q i,n q i,n, yi,n y i,n ỹ i,n y i,n. Remark 2.4. i In contrast to Algorithm 2., the iterative scheme in Algorithm 2.2 requires twice the amount of forward steps is therefore more time-intensive. On the other h, many steps in Algorithm 2.2 can be processed in parallel. ii A forward-backward-forward type algorithm for solving the primal-dual pair of monotone inclusions.7 -.8, in the particular situation when L i is the identity operator r i = 0 for any i =,..., m, has been recently investigated in [3]. 3 Application to convex minimization In this section we employ the algorithms introduced in the previous one in the context of solving the primal-dual pairs of convex optimization problems introduced in Problem.. For every x H p, q X Y with Ki p i = Mi q i, i =,..., m, by the Young- Fenchel inequality, it holds fx + hx + f h z L i Ki p i z L i Ki p i, x, for any i =,..., m y i G, This yields g i K i L i x r i y i + g i p i p i, K i L i x r i y i = K i p i, L i x r i y i inf fx + x H = inf x,y H G l i M i y i + l i q i q i, M i y i = M i q i, y i. gi } K i li M i L i x r i + hx x, z fx + sup p,q X Y, K i p i=m i q i,,...,m } g i K i L i x r i y i + l i M i y i + hx x, z f h z L i Ki p i 3. [ g i p i + l i q i + p i, K i r i ] }, 6

17 which means that for the primal-dual pair of optimization problems.5-.6 weak duality is always given. Considering x, p, q, y H X Y G a solution of the primal-dual system of monotone inclusions z L i Ki p i fx + hx K i L i x y i r i gi p i, M i y i li q i, Ki p i = Mi q i, i =,..., m, 3.2 it follows that x is an optimal solution to.5 that p, q is an optimal solution to.6. Indeed, as h is convex everywhere differentiable, it holds thus z L i Ki p i fx + hx f + hx, fx + hx + f h z L i Ki p i = z L i Ki p i, x. On the other h, since g i ΓX i l i ΓY i, we have for any i =,..., m g i K i L i x y i r i + g i p i = K i p i, L i x r i y i l i M i y i + l i q i = M i q i, y i. By summing up these equations using 3.2, it yields gi fx + K i li M i L i x r i + hx x, z fx + g i K i L i x r i y i + l i M i y i + hx x, z [ ] = f h z L i Ki p i gi p i + li q i + p i, K i r i, which, together with 3., leads to the desired conclusion. In the following, by extending the result in [3, Proposition 4.2] to our setting, we provide sufficient conditions which guarantee the validity of 2.5 when applied to convex minimization problems. To this end we mention that the strong quasi-relative interior of a nonempty convex set Ω H is defined as sqri Ω = x Ω : λω x is a closed linear subspace. λ 0 Proposition 3.. Suppose that the primal problem.5 has an optimal solution, that 0 sqri domg i K i doml i M i, i =,..., m sqri E, 3.4 7

18 where m } E := K i L i dom f r i y i dom g i Then z ran f + m M i y i dom l i } } : y i G i, i =,..., m. L i Ki g i K i Mi l i M i L i r i + h. Proof. Let x H be an optimal solution to.5. Since 3.4 holds, we have that g i K i, l i M i ΓG i, i =,..., m. Further, because of 3.3, [2, Proposition 5.7] guarantees for any i =,..., m the existence of y i G i such that gi K i l i M i x = g i K i x y i + l i M i y i. Hence, x, y = x, y,..., y m is an optimal solution to the convex optimization problem inf fx + hx x, z + x,y H G By denoting [ g i K i L i x r i y i + l i M i y i ] } 3.5 f : H G R, fx, y = fx + hx x, z [ ] g : X Y R, gx, y = g i x i K i r i + l i y i L : H G X Y, x, y m problem 3.5 can be equivalently written as } K i L i x y i m } M i y i, 3.6 Thus, inf fx, y + glx, y}. 3.7 x,y H G 0 f + g Lx, y. Since E = Ldom f dom g 3.4 is fulfilled, it holds see, for instance, [2, 4, 7] where 0 f + g L x, y = fx, y + L g L x, y, L m : X Y H G, p, q L i Ki p i, Kp + M q,..., Kmp m + Mmq m. We obtain 0 fx, y + L g L x, y 0 fx + hx z + m L i K i g i K i Li x r i y i 0 Ki g i K i Li x r i y i + Mi l i M i yi, i =,..., m 0 fx + hx z + m L i v i v G : v i Ki g i K i Li x r i y i, i =,..., m v i Mi l i M i yi, i =,..., m 8

19 0 fx + hx z + m L i v i v G : L i x r i y i Ki g vi i K i, i =,..., m y i Mi l vi i M i, i =,..., m 0 fx + hx z + m L i v G : v i K v i i g i K i M i l i M i L i x r i, i =,..., m K z fx + L i i g i K i M i l i M i L i x r i + hx, which completes the proof. Remark 3.. If one of the following two conditions f is real-valued the operators L i, K i M i are surjective for any i =,..., m; the functions g i l i are real-valued for any i =,..., m; is fulfilled, then E = X Y 3.4 is obviously true. On the other h, if H, G i, X i Y i, i =,..., m are finite dimensional Ki y for any i =,..., m, there exists y i G i : i K i L i ri dom f r i ri dom g i,, M i y i ri dom l i then 3.4 is also true. This follows by using that in finite dimensional spaces the strong quasi-relative interior of a convex set is nothing else than its relative interior by taking into account the properties of the latter. 3. An algorithm of forward-backward type When applied to 3.2, the iterative scheme introduced in 2.4 the corresponding convergence statements read as follows. Algorithm 3.. Let x 0 H, for any i =,..., m, let p i,0 X i, q i,0 Y i y i,0, z i,0, v i,0 G i. For any i =,..., m, let τ, θ,i, θ 2,i, γ,i, γ 2,i σ i be strictly positive real numbers such that 2µ α min,...,m τ,, θ,i θ 2,i, γ,i,, } >, 3.8 γ 2,i σ i for α = max τ σ i L i 2, max j=,...,m θ,j γ,j K j 2, } θ 2,j γ 2,j M j 2. 9

20 Furthermore, let ε 0,, λ n n 0 be a sequence in [ε, ] set n 0 x n Prox τf x n τ Cx n + m L i v i,n z For i =,..., m p i,n Prox θ,i g p i i,n + θ,i K i z i,n q i,n Prox θ2,i l q i i,n + θ 2,i M i y i,n u,i,n z i,n + γ,i Ki p i,n 2 p i,n + v i,n + σ i L i 2 x n x n r i u 2,i,n y i,n + γ 2,i M i q i,n 2 q i,n + v i,n + σ i L i 2 x n x n r i +σ z i,n i γ 2,i +σ i γ,i +γ 2,i u,i,n σ iγ,i +σ i γ 2,i u 2,i,n ỹ i,n +σ i γ 2,i u 2,i,n σ i γ 2,i z i,n ṽ i,n v i,n + σ i L i 2 x n x n r i z i,n ỹ i,n x n+ = x n + λ n x n x n For i =,..., m p i,n+ = p i,n + λ n p i,n p i,n q i,n+ = q i,n + λ n q i,n q i,n z i,n+ = z i,n + λ n z i,n z i,n y i,n+ = y i,n + λ n ỹ i,n y i,n v i,n+ = v i,n + λ n ṽ i,n v i,n. 3.9 Theorem 3.2. For the optimization problem.5, suppose that z ran f + L i Ki g i K i Mi l i M i L i r i + h 3.0 consider the sequences generated by Algorithm 3.. Then there exists an optimal solution x to.5 optimal solution p, q to.6 such that i x n x, p i,n p i q i,n q i for any i =,..., m as n +. ii if h is uniformly convex at x, then x n x as n +. Proof. The results is a direct consequence of Theorem 2. when taking A = f, C = h, B i = g i, D i = l i, i =,..., m. 3. We also notice that, according to Theorem in [2], the operators in 3. are maximally monotone, while, by [2, Corollary 6.24], we have A = f, C = h, Bi = gi D i = li for i =,..., m. Furthermore, by [2, Corollary 8.6], C = h is µ -cocoercive, while, if h is uniformly convex at x H, then C = h is uniformly monotone at x cf. [30, Section 3.4]. Remark 3.2. If h ΓH such that hx = 0 for all x H, then condition 3.8 simplifies to max τ σ i L i 2, max θ,j γ,j K j 2, θ 2,j γ 2,j M j 2} } <. j I In this situation Algorithm 3. converges under the relaxed assumption that λ n n 0 [ε, 2 ε] for ε 0, see also Remark

21 3.2 An algorithm of forward-backward-forward type On the other h, when applied to 3.2, the iterative scheme introduced in 2.20 the corresponding convergence statements read as follows. Algorithm 3.2. Let x 0 H, for any i =,..., m, let p i,0 X i, q i,0 Y i, z i,0, y i,0, v i,0 G i. Set let ε β = µ + m max L i 2, [ 0, β+, γ n n 0 be a sequence in n 0 max j=,...,m ε, ε β Kj 2, M j 2}}, 3.2 ] set x n Prox γnf x n γ n Cx n + m L i v i,n z For i =,..., m p i,n Prox γng p i i,n + γ n K i z i,n q i,n Prox γnl q i i,n + γ n M i y i,n u,i,n z i,n γ n Ki p i,n v i,n γ n L i x n r i u 2,i,n y i,n γ n Mi q i,n v i,n γ n L i x n r i z i,n +γ2 n u +2γ 2,i,n γ2 n u n +γ 2 2,i,n n ỹ i,n +γ 2 u2,i,n γ n z 2 i,n n ṽ i,n v i,n + γ n L i x n r i z i,n ỹ i,n x n+ x n + γ n Cx n C x n + m L i v i,n ṽ i,n For i =,..., m p i,n+ p i,n γ n K i z i,n z i,n q i,n+ q i,n γ n M i y i,n ỹ i,n z i,n+ z i,n + γ n Ki p i,n p i,n y i,n+ ỹ i,n + γ n Mi q i,n q i,n v i,n+ ṽ i,n γ n L i x n x n. Theorem 3.3. For the optimization problem.5, suppose that z ran f K L i i g i K i M i l i M i L i r i + h, 3.4 consider the sequences generated by Algorithm 3.2. Then there exists an optimal solution x to.5 optimal solution p, q to.6 such that i n 0 x n x n 2 < + for any i =,..., m p i,n p i,n 2 < + q i,n q i,n 2 < +. n 0 n 0 ii x n x, x n x for any i =,..., m pi,n p i,n p i,n p i,n qi,n q i,n q i,n q i,n. Proof. The conclusions follow by using the statements in the proof of Theorem 3.2 by applying Theorem

22 4 Numerical experiments Within this section we show how the two algorithms we propose in this paper have performed when solving the image denoising problems.3.4 formulated in the introductory section, namely } l 2 2-IC/P inf x R n 2 x b 2 + α,ω D α 2,ω2 D 2 x } l 2 2-MIC/P inf x R n 2 x b 2 + α,ω α 2,ω2 L D x, respectively, also in comparison with other numerical schemes from the literature. First, we notice that for both problems a condition of type 3.3 is fulfilled, thus the infimal convolutions are proper, convex lower semicontinuous functions. Due to the fact that the objective functions of the two optimization problems are proper, strongly convex lower semicontinuous, these have unique optimal solutions. Finally, in the light of Remark 3., a condition of type 3.4 holds. Thus, according to Proposition 3., the hypotheses of the theorems are for both optimization problems l 2 2 -IC/P l 2 2-MIC/P fulfilled. In order to compare our two primal-dual iterative schemes with algorithms relying on augmented Lagrangian smoothing techniques, we formulated using the definition of the infimal convolution.3.4 as optimization problems with constraints of the form } l 2 2-IC/P inf x,x 2,z,z 2 2 x + x 2 b 2 + α z + α,ω 2 z 2,ω2, 4. D 0 x z subject to = 0 D 2 x 2 z 2 } l 2 2-MIC/P inf x,y,y 2,z 2 x b 2 + α y + α,ω 2 z,ω2, 4.2 D Id x y subject to = 0 L y 2 z respectively. We performed our numerical tests on the colored test image lichtenstein see Figure 4. of size making each color ranging in the closed interval from 0 to. By adding white Gaussian noise with stard deviation 0.08, we obtained the noisy image b R n. We took ω =, ω 2 =,, the regularization parameters in l 2 2 -IC/P l 2 2 -MIC/P were set to α = 0.06 α 2 = 0.2, while the tests were made on an Intel Core i processor. When measuring the quality of the restored images, we used the improvement in signalto-noise ratio ISNR, which is given by x b 2 ISNR k = 0 log 0 x x k 2, 22

23 a Original image b Noisy image c Reconstructed image Figure 4.: Figure a shows the clean lichtenstein test image, b shows the image obtained after adding white Gaussian noise with stard deviation 0.08 c shows the reconstructed image. a ISNR values for l 2 2-IC/P b ISNR values for l 2 2-MIC/P FB FBF DS VS PDHG ADMM CPU time in seconds FB FBF DS VS PDHG CPU time in seconds Figure 4.2: Figure a shows the evolution of the ISNR for the l 2 2-IC/P problem w.r.t. the CPU times in seconds in log scale. Figure b shows the evolution of the ISNR for the l 2 2-MIC/P problem w.r.t. the CPU times in seconds in log scale. where x, b, x k are the original, the observed noisy the reconstructed image at iteration k N, respectively. In Figure 4.2 we compare the performances of Algorithm 3. FB Algorithm 3.2 FBF in the context of solving the optimization problems.3.4 to the ones of different optimization algorithms. The double smoothing DS algorithm, as proposed in [], is applied to the Fenchel dual problems of by considering the acceleration strategies in [0]. One should notice that, since the smoothing parameters are constant, DS solves continuously differentiable approximations of does therefore not necessarily converge to the unique minimizers of.3.4. As a second smoothing algorithm, we considered the variable smoothing technique VS in [8], which successively reduces the smoothing parameter in each iteration therefore solves the primal optimization problems as the iteration counter increases. We further considered the primal-dual hybrid gradient method PDHG as discussed in [27], which is nothing else than the primal-dual method in [5]. Finally, the alternating direction method of multipliers ADMM was 23

24 applied to 4., as it was also done in [27]. Here, one makes use of the Moore-Penrose inverse of a special linear bounded operator which can be implemented in this setting efficiently, since D T D D2 T D 2 can be diagonalized by the discrete cosine transform. The problem which arises in 4.2, however, is far more difficult to be solved with this method was therefore not implemented, since the linear bounded operator assumed to be inverted has a more complicated structure. This reveals a typical drawback of ADMM given by the fact that this method does not provide a full splitting, like primal-dual or smoothing algorithms do. As it follows from the comparisons shown in Figure 4.2, the FBF method suffers because of its additional forward step. However, many time-intensive steps in this algorithm could have been executed in parallel, which would lead to a significant decrease of the execution time. On the other h, the FB method performs fast stable in both examples, while optical differences in the reconstructions for l 2 2 -IC/P l2 2-MIC/P are not observable. Acknowledgements. The authors are thankful to two anonymous reviewers for comments recommendations which improved the quality of the paper. References [] H. Attouch, L.M. Briceño-Arias P.L. Combettes. A parallel splitting method for coupled monotone inclusions. SIAM J. Control Optim. 485, , 200. [2] H.H. Bauschke P.L. Combettes. Convex Analysis Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics, Springer, New York, 20. [3] S. Becker P.L. Combettes. An algorithm for splitting parallel sums of linearly composed monotone operators, with applications to signal recovery. J. Nonlinear Convex A. 5, 37 59, 205. [4] R.I. Boţ. Conjugate Duality in Convex Optimization. Lecture Notes in Economics Mathematical Systems, Vol. 637, Springer, Berlin, 200. [5] R.I. Boţ, E.R. Csetnek A. Heinrich. A primal-dual splitting algorithm for finding zeros of sums of maximally monotone operators. SIAM J. Optim. 234, , 203. [6] R.I. Boţ, E.R. Csetnek, A. Heinrich C. Hendrich. On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems. Math. Program. 502, , 205. [7] R.I. Boţ, S.M. Grad G. Wanka. Duality in Vector Optimization. Springer, Berlin, [8] R.I. Boţ C. Hendrich. A variable smoothing algorithm for solving convex optimization problems. TOP 23, 24 50, 205. [9] R.I. Boţ C. Hendrich. Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization. J. Math. Imaging Vis. 493, , 204. [0] R.I. Boţ C. Hendrich. On the acceleration of the double smoothing technique for unconstrained convex optimization problems. Optimization 642, , 205. [] R.I. Boţ C. Hendrich. A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems. Comput. Optim. Appl. 542, , 203. [2] R.I. Boţ C. Hendrich. A Douglas Rachford type primal-dual method for solving inclusions with mixtures of composite parallel-sum type monotone operators. SIAM J. Optim. 234, , 203. [3] L.M. Briceño-Arias P.L. Combettes. A monotone + skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 24, , 20. [4] A. Chambolle P.-L. Lions. Image recovery via total variation minimization related problems. Numer. Math. 762, 67 88, 997. [5] A. Chambolle T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 20 45,

25 [6] P.L. Combettes. Quasi-Fejérian analysis of some optimization algorithms. In: D. Butnariu, Y. Censor S. Reich Eds., Inherently Parallel Algorithms in Feasibility Optimization their Applications, Elsevier, New York, 5 52, 200. [7] P.L. Combettes. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization, 535 6: , [8] P.L. Combettes. Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 63 4, , [9] P.L. Combettes. Systems of structured monotone inclusions: duality, algorithms, applications. SIAM J. Optim. 234, , 203. [20] P.L. Combettes J.-C. Pesquet. Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, parallel-sum type monotone operators. Set-Valued Var. Anal. 202, , 202. [2] L. Condat. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable linear composite terms. J. Optim. Theory Appl. 582, , 203. [22] J. Douglas H.H. Rachford. On the numerical solution of the heat conduction problem in two three space variables. Trans. of the Amer. Math. Soc. 822, , 956. [23] J. Eckstein D.P. Bertsekas. On the Douglas Rachford splitting method the proximal point algorithm for maximal monotone operators. Math. Program. 55 3, , 992. [24] S. Harizanov, J.-C. Pesquet G. Steidl. Epigraphical projection for solving least squares anscombe transformed constrained optimization problems. In: Scale Space Variational Methods in Computer Vision, Springer, Berlin Heidelberg, 25 36, 203. [25] R.T. Rockafellar. On the maximal monotonicity of subdifferential mappings. Pacific J. Math. 33, , 970. [26] R.T. Rockafellar. Monotone operators the proximal point algorithm. SIAM J. Control Optim. 45, , 976. [27] S. Setzer, G. Steidl T. Teuber. Infimal convolution regularizations with discrete l -type functionals. Commun. Math. Sci. 93, , 20. [28] P. Tseng. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 382, , [29] B.C. Vũ. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comp. Math. 383, , 203. [30] C. Zălinescu. Convex Analysis in General Vector Spaces. World Scientific, Singapore,

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems Radu Ioan Boţ Ernö Robert Csetnek August 5, 014 Abstract. In this paper we analyze the

More information

Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization

Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization Radu Ioan Boţ Christopher Hendrich November 7, 202 Abstract. In this paper we

More information

arxiv: v1 [math.oc] 12 Mar 2013

arxiv: v1 [math.oc] 12 Mar 2013 On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems arxiv:303.875v [math.oc] Mar 03 Radu Ioan Boţ Ernö Robert Csetnek André Heinrich February

More information

ADMM for monotone operators: convergence analysis and rates

ADMM for monotone operators: convergence analysis and rates ADMM for monotone operators: convergence analysis and rates Radu Ioan Boţ Ernö Robert Csetne May 4, 07 Abstract. We propose in this paper a unifying scheme for several algorithms from the literature dedicated

More information

Inertial Douglas-Rachford splitting for monotone inclusion problems

Inertial Douglas-Rachford splitting for monotone inclusion problems Inertial Douglas-Rachford splitting for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek Christopher Hendrich January 5, 2015 Abstract. We propose an inertial Douglas-Rachford splitting algorithm

More information

On the acceleration of the double smoothing technique for unconstrained convex optimization problems

On the acceleration of the double smoothing technique for unconstrained convex optimization problems On the acceleration of the double smoothing technique for unconstrained convex optimization problems Radu Ioan Boţ Christopher Hendrich October 10, 01 Abstract. In this article we investigate the possibilities

More information

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Patrick L. Combettes joint work with J.-C. Pesquet) Laboratoire Jacques-Louis Lions Faculté de Mathématiques

More information

Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping

Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping March 0, 206 3:4 WSPC Proceedings - 9in x 6in secondorderanisotropicdamping206030 page Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping Radu

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 6, 25 Abstract. We begin by considering second order dynamical systems of the from

More information

A Dykstra-like algorithm for two monotone operators

A Dykstra-like algorithm for two monotone operators A Dykstra-like algorithm for two monotone operators Heinz H. Bauschke and Patrick L. Combettes Abstract Dykstra s algorithm employs the projectors onto two closed convex sets in a Hilbert space to construct

More information

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators

A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators A Brøndsted-Rockafellar Theorem for Diagonal Subdifferential Operators Radu Ioan Boţ Ernö Robert Csetnek April 23, 2012 Dedicated to Jon Borwein on the occasion of his 60th birthday Abstract. In this note

More information

A New Fenchel Dual Problem in Vector Optimization

A New Fenchel Dual Problem in Vector Optimization A New Fenchel Dual Problem in Vector Optimization Radu Ioan Boţ Anca Dumitru Gert Wanka Abstract We introduce a new Fenchel dual for vector optimization problems inspired by the form of the Fenchel dual

More information

On the Brézis - Haraux - type approximation in nonreflexive Banach spaces

On the Brézis - Haraux - type approximation in nonreflexive Banach spaces On the Brézis - Haraux - type approximation in nonreflexive Banach spaces Radu Ioan Boţ Sorin - Mihai Grad Gert Wanka Abstract. We give Brézis - Haraux - type approximation results for the range of the

More information

Coordinate Update Algorithm Short Course Operator Splitting

Coordinate Update Algorithm Short Course Operator Splitting Coordinate Update Algorithm Short Course Operator Splitting Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 25 Operator splitting pipeline 1. Formulate a problem as 0 A(x) + B(x) with monotone operators

More information

Iterative Convex Optimization Algorithms; Part One: Using the Baillon Haddad Theorem
