A splitting algorithm for coupled system of primal dual monotone inclusions


A splitting algorithm for coupled system of primal-dual monotone inclusions

Bằng Công Vũ
UPMC Université Paris 06, Laboratoire Jacques-Louis Lions, UMR CNRS, Paris, France

Abstract. We propose a splitting algorithm for solving a coupled system of primal-dual monotone inclusions in real Hilbert spaces. The weak convergence of the proposed algorithm is proved. Applications to minimization problems are demonstrated.

Keywords: coupled system, monotone inclusion, monotone operator, operator splitting, cocoercivity, forward-backward algorithm, composite operator, duality, primal-dual algorithm

Mathematics Subject Classifications: 47H05, 49M29, 49M27, 90C25

1 Introduction

Various problems in applied mathematics such as evolution inclusions [2], partial differential equations [1, 30, 32], mechanics [31], variational inequalities [10, 29], Nash equilibria [4], and optimization problems [6, 16, 22, 27, 37, 43] reduce to solving monotone inclusions. The simplest monotone inclusion is to find a zero of a maximally monotone operator $A$ acting on a real Hilbert space $\mathcal{H}$. This problem can be solved efficiently by the proximal point algorithm when the resolvent of $A$ is easy to implement numerically [41]; see [11, 13, 14, 25, 34, 35, 38] in the context of variable metric. This problem was then extended to the problem of finding a zero of the sum of a maximally monotone operator $A$ and a cocoercive operator $B$. In this case, we can use the forward-backward splitting algorithm [2, 18, 32, 43]; see [26] in the context of variable metric. When $A$ has a structure, for example, mixtures of composite, Lipschitzian or cocoercive, and parallel-sum type monotone operators as in [23, 26, 44, 45], existing purely primal splitting methods do not offer satisfactory options to solve the problem due to the appearance of the composite components, and hence alternative primal-dual strategies must be explored. Very recently, these

frameworks are unified into a system of monotone inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators in [19]. In this paper, we address the numerical solution of a coupled system of primal-dual inclusions in real Hilbert spaces.

Problem 1.1 Let $m, s$ be strictly positive integers. For every $i \in \{1,\ldots,m\}$, let $\mathcal{H}_i$ be a real Hilbert space, let $z_i \in \mathcal{H}_i$, let $A_i\colon \mathcal{H}_i \to 2^{\mathcal{H}_i}$ be maximally monotone, and let $C_i\colon \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m \to \mathcal{H}_i$ be such that

$(\exists\, \nu_0 \in\, ]0,+\infty[)(\forall (x_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)(\forall (y_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)$
$\sum_{i=1}^{m} \langle x_i - y_i \mid C_i(x_1,\ldots,x_m) - C_i(y_1,\ldots,y_m)\rangle \ge \nu_0 \sum_{i=1}^{m} \|C_i(x_1,\ldots,x_m) - C_i(y_1,\ldots,y_m)\|^2.$ (1.1)

For every $k \in \{1,\ldots,s\}$, let $\mathcal{G}_k$ be a real Hilbert space, let $r_k \in \mathcal{G}_k$, let $B_k\colon \mathcal{G}_k \to 2^{\mathcal{G}_k}$ be maximally monotone, and let $S_k\colon \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s \to \mathcal{G}_k$ be such that

$(\exists\, \mu_0 \in\, ]0,+\infty[)(\forall (v_k)_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s)(\forall (w_k)_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s)$
$\sum_{k=1}^{s} \langle v_k - w_k \mid S_k(v_1,\ldots,v_s) - S_k(w_1,\ldots,w_s)\rangle \ge \mu_0 \sum_{k=1}^{s} \|S_k(v_1,\ldots,v_s) - S_k(w_1,\ldots,w_s)\|^2.$ (1.2)

For every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $L_{k,i}\colon \mathcal{H}_i \to \mathcal{G}_k$ be a bounded linear operator. The problem is to solve the following system of primal-dual inclusions: find $\overline{x}_1 \in \mathcal{H}_1, \ldots, \overline{x}_m \in \mathcal{H}_m$ and $\overline{v}_1 \in \mathcal{G}_1, \ldots, \overline{v}_s \in \mathcal{G}_s$ such that

$z_1 - \sum_{k=1}^{s} L_{k,1}^*\overline{v}_k \in A_1\overline{x}_1 + C_1(\overline{x}_1,\ldots,\overline{x}_m)$  and  $\sum_{i=1}^{m} L_{1,i}\overline{x}_i - r_1 \in B_1\overline{v}_1 + S_1(\overline{v}_1,\ldots,\overline{v}_s)$
$\quad\vdots$
$z_m - \sum_{k=1}^{s} L_{k,m}^*\overline{v}_k \in A_m\overline{x}_m + C_m(\overline{x}_1,\ldots,\overline{x}_m)$  and  $\sum_{i=1}^{m} L_{s,i}\overline{x}_i - r_s \in B_s\overline{v}_s + S_s(\overline{v}_1,\ldots,\overline{v}_s).$ (1.3)

We denote by $\Omega$ the set of solutions to (1.3).

In the case when every linear operator $(L_{k,i})_{1\le k\le s,\,1\le i\le m}$ is zero, we can use the algorithm in [2] to solve the inclusions on the left-hand side and on the right-hand side of (1.3) separately. Let us note that the nonlinear coupling terms $(C_i)_{1\le i\le m}$ and $(S_k)_{1\le k\le s}$ were introduced in [2]; they are cocoercive operators which often play a central role; see for instance [2, 10, 17, 18, 29, 30, 31, 32, 42, 43, 46]. Let us add that the general algorithm in [19] can solve Problem 1.1 in the case when the $C_i$ and $S_k$ are univariate, monotone, and Lipschitzian.
Furthermore, the primal-dual algorithm in [26, Section 6] can solve Problem 1.1 in the case when $m = 1$ and each $S_k$ is univariate, monotone, and Lipschitzian. To sum up, the recent general frameworks can solve special cases of the above problem, but no existing algorithm can solve it in the general case. In the present paper, we propose a primal-dual splitting algorithm for solving Problem 1.1 in Section 3. We recall some notation and background on monotone operator theory in Section 2. In Section 4, we provide an application to coupled systems of monotone inclusions in duality. Section 5 is devoted to applications to minimization problems. In the last section, an application to multi-dictionary signal representation is presented.
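To fix ideas, the forward-backward splitting recalled above can be sketched numerically. The example below is ours, not part of the paper: it applies the iteration $x_{n+1} = J_{\gamma A}(x_n - \gamma Bx_n)$ to the illustrative problem $\min_x \mu\|x\|_1 + \tfrac{1}{2}\|Mx - b\|^2$, where $A = \partial(\mu\|\cdot\|_1)$ (whose resolvent is soft thresholding) and $B = M^*(M\cdot{} - b)$ is cocoercive with constant $\beta = 1/\|M\|^2$; all names and data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Resolvent J_{tA} of A = subdifferential of the l1 norm (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(M, b, mu, gamma, n_iter):
    """x_{n+1} = J_{gamma A}(x_n - gamma * B x_n): explicit (forward) step on the
    cocoercive operator B = grad of 0.5*||Mx - b||^2, implicit (backward) step on A."""
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ x - b)                          # forward step
        x = soft_threshold(x - gamma * grad, gamma * mu)  # backward step
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 40))
x_true = np.zeros(40); x_true[[3, 17]] = [1.0, -2.0]
b = M @ x_true
beta = 1.0 / np.linalg.norm(M, 2) ** 2                    # cocoercivity constant of B
x_hat = forward_backward(M, b, mu=0.05, gamma=beta, n_iter=3000)  # gamma in ]0, 2*beta[
```

Any step size $\gamma \in\, ]0, 2\beta[$ guarantees weak convergence of this scheme; here $\gamma = \beta$ is used.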

2 Notation, background, and technical results

2.1 Notation and background

Throughout, $\mathcal{H}$, $\mathcal{G}$, and $(\mathcal{G}_i)_{1\le i\le m}$ are real Hilbert spaces. Their scalar products and associated norms are respectively denoted by $\langle\cdot\mid\cdot\rangle$ and $\|\cdot\|$. We denote by $\mathcal{B}(\mathcal{H},\mathcal{G})$ the space of bounded linear operators from $\mathcal{H}$ to $\mathcal{G}$. The adjoint of $L \in \mathcal{B}(\mathcal{H},\mathcal{G})$ is denoted by $L^*$. We set $\mathcal{B}(\mathcal{H}) = \mathcal{B}(\mathcal{H},\mathcal{H})$. The symbols $\rightharpoonup$ and $\to$ denote respectively weak and strong convergence, and $\mathrm{Id}$ denotes the identity operator. We denote by $\ell^1_+(\mathbb{N})$ the set of summable sequences in $[0,+\infty[$ and by $\ell^2(\mathbb{K})$ ($\mathbb{K} \subset \mathbb{N}$) the set of square-summable sequences, indexed by $\mathbb{K}$, in $\mathbb{R}$. Let $M_1$ and $M_2$ be self-adjoint operators in $\mathcal{B}(\mathcal{H})$; we write $M_1 \succcurlyeq M_2$ if and only if $(\forall x \in \mathcal{H})\ \langle M_1x \mid x\rangle \ge \langle M_2x \mid x\rangle$. Let $\alpha \in\, ]0,+\infty[$. We set

$\mathcal{P}_\alpha(\mathcal{H}) = \big\{ M \in \mathcal{B}(\mathcal{H}) \mid M = M^* \text{ and } M \succcurlyeq \alpha\,\mathrm{Id} \big\}.$ (2.1)

The square root of $M \in \mathcal{P}_\alpha(\mathcal{H})$ is denoted by $\sqrt{M}$. Moreover, for every $M \in \mathcal{P}_\alpha(\mathcal{H})$, we define respectively a scalar product and a norm by

$(\forall x \in \mathcal{H})(\forall y \in \mathcal{H})\quad \langle x \mid y\rangle_M = \langle Mx \mid y\rangle \quad\text{and}\quad \|x\|_M = \sqrt{\langle Mx \mid x\rangle},$ (2.2)

and, for any $L \in \mathcal{B}(\mathcal{H})$, we define

$\|L\|_M = \sup_{\|x\|_M \le 1} \|Lx\|_M.$ (2.3)

Let $A\colon \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued operator. The domain of $A$ is $\mathrm{dom}\,A = \{ x \in \mathcal{H} \mid Ax \neq \varnothing \}$, and the graph of $A$ is $\mathrm{gra}\,A = \{ (x,u) \in \mathcal{H}\times\mathcal{H} \mid u \in Ax \}$. The set of zeros of $A$ is $\mathrm{zer}\,A = \{ x \in \mathcal{H} \mid 0 \in Ax \}$, and the range of $A$ is $\mathrm{ran}\,A = \{ u \in \mathcal{H} \mid (\exists\, x \in \mathcal{H})\ u \in Ax \}$. The inverse of $A$ is $A^{-1}\colon \mathcal{H} \to 2^{\mathcal{H}}\colon u \mapsto \{ x \in \mathcal{H} \mid u \in Ax \}$, and the resolvent of $A$ is

$J_A = (\mathrm{Id} + A)^{-1}.$ (2.4)

Moreover, $A$ is monotone if

$(\forall (x,y) \in \mathcal{H}\times\mathcal{H})(\forall (u,v) \in Ax\times Ay)\quad \langle x - y \mid u - v\rangle \ge 0,$ (2.5)

and maximally monotone if it is monotone and there exists no monotone operator $B\colon \mathcal{H} \to 2^{\mathcal{H}}$ such that $\mathrm{gra}\,A \subset \mathrm{gra}\,B$ and $A \neq B$. A single-valued operator $B\colon \mathcal{H} \to \mathcal{H}$ is $\beta$-cocoercive, for some $\beta \in\, ]0,+\infty[$, if

$(\forall x \in \mathcal{H})(\forall y \in \mathcal{H})\quad \langle x - y \mid Bx - By\rangle \ge \beta\|Bx - By\|^2.$ (2.6)

The parallel sum of $A\colon \mathcal{H} \to 2^{\mathcal{H}}$ and $B\colon \mathcal{H} \to 2^{\mathcal{H}}$ is

$A \,\square\, B = (A^{-1} + B^{-1})^{-1}.$ (2.7)

Let $\Gamma_0(\mathcal{H})$ be the class of proper lower semicontinuous convex functions from $\mathcal{H}$ to $]-\infty,+\infty]$. For any $U \in \mathcal{P}_\alpha(\mathcal{H})$ and $f \in \Gamma_0(\mathcal{H})$, we define

$J_{U^{-1}\partial f} = \mathrm{prox}^{U}_{f}\colon \mathcal{H} \to \mathcal{H}\colon x \mapsto \underset{y \in \mathcal{H}}{\mathrm{argmin}}\ f(y) + \frac{1}{2}\|x - y\|^2_U,$ (2.8)

and

$J_{\partial f} = \mathrm{prox}_f\colon \mathcal{H} \to \mathcal{H}\colon x \mapsto \underset{y \in \mathcal{H}}{\mathrm{argmin}}\ f(y) + \frac{1}{2}\|x - y\|^2,$ (2.9)

and the conjugate function of $f$ is

$f^*\colon a \mapsto \sup_{x \in \mathcal{H}}\ \langle a \mid x\rangle - f(x).$ (2.10)

Note that

$(\forall f \in \Gamma_0(\mathcal{H}))(\forall x \in \mathcal{H})(\forall y \in \mathcal{H})\quad y \in \partial f(x) \iff x \in \partial f^*(y),$ (2.11)

or equivalently,

$(\forall f \in \Gamma_0(\mathcal{H}))\quad (\partial f)^{-1} = \partial f^*.$ (2.12)

The infimal convolution of the two functions $f$ and $g$ from $\mathcal{H}$ to $]-\infty,+\infty]$ is

$f \,\square\, g\colon x \mapsto \inf_{y \in \mathcal{H}}\ f(y) + g(x - y).$ (2.13)

The indicator function of a nonempty closed convex set $C$ is denoted by $\iota_C$, its dual function is the support function $\sigma_C$, and the distance function of $C$ is denoted by $d_C$. Finally, the strong relative interior of a subset $C$ of $\mathcal{H}$ is the set of points $x \in C$ such that the cone generated by $-x + C$ is a closed vector subspace of $\mathcal{H}$.

2.2 Technical results

We recall some results on monotone operators.

Definition 2.1 [2, Definition 2.3] An operator $A\colon \mathcal{H} \to 2^{\mathcal{H}}$ is demiregular at $x \in \mathrm{dom}\,A$ if, for every sequence $(x_n, u_n)$ in $\mathrm{gra}\,A$ and every $u \in Ax$ such that $x_n \rightharpoonup x$ and $u_n \to u$, we have $x_n \to x$.

Lemma 2.2 [2, Proposition 2.4] Let $A\colon \mathcal{H} \to 2^{\mathcal{H}}$ be monotone and suppose that $x \in \mathrm{dom}\,A$. Then $A$ is demiregular at $x$ in each of the following cases.

(i) $A$ is uniformly monotone at $x$, i.e., there exists an increasing function $\phi\colon [0,+\infty[ \to [0,+\infty]$ that vanishes only at $0$ such that $(\forall u \in Ax)(\forall (y,v) \in \mathrm{gra}\,A)\ \langle x - y \mid u - v\rangle \ge \phi(\|x - y\|)$.

(ii) $A$ is strongly monotone, i.e., there exists $\alpha \in\, ]0,+\infty[$ such that $A - \alpha\,\mathrm{Id}$ is monotone.

(iii) $J_A$ is compact, i.e., for every bounded set $C \subset \mathcal{H}$, the closure of $J_A(C)$ is compact. In particular, $\mathrm{dom}\,A$ is boundedly relatively compact, i.e., the intersection of its closure with every closed ball is compact.

(iv) $A\colon \mathcal{H} \to \mathcal{H}$ is single-valued with a single-valued continuous inverse.

(v) $A$ is single-valued on $\mathrm{dom}\,A$ and $\mathrm{Id} - A$ is demicompact, i.e., for every bounded sequence $(x_n)$ in $\mathrm{dom}\,A$ such that $(Ax_n)$ converges strongly, $(x_n)$ admits a strong cluster point.

(vi) $A = \partial f$, where $f \in \Gamma_0(\mathcal{H})$ is uniformly convex at $x$, i.e., there exists an increasing function $\phi\colon [0,+\infty[ \to [0,+\infty]$ that vanishes only at $0$ such that $(\forall \alpha \in\, ]0,1[)(\forall y \in \mathrm{dom}\,f)\ f(\alpha x + (1-\alpha)y) + \alpha(1-\alpha)\phi(\|x - y\|) \le \alpha f(x) + (1-\alpha)f(y)$.

(vii) $A = \partial f$, where $f \in \Gamma_0(\mathcal{H})$ and, for every $\xi \in \mathbb{R}$, $\{ x \in \mathcal{H} \mid f(x) \le \xi \}$ is boundedly compact.

Lemma 2.3 [26, Lemma 3.7] Let $A\colon \mathcal{H} \to 2^{\mathcal{H}}$ be maximally monotone, let $\alpha \in\, ]0,+\infty[$, let $U \in \mathcal{P}_\alpha(\mathcal{H})$, and let $\mathcal{G}$ be the real Hilbert space obtained by endowing $\mathcal{H}$ with the scalar product $(x,y) \mapsto \langle x \mid y\rangle_{U^{-1}} = \langle x \mid U^{-1}y\rangle$. Then the following hold.

(i) $UA\colon \mathcal{G} \to 2^{\mathcal{G}}$ is maximally monotone.

(ii) $J_{UA}\colon \mathcal{G} \to \mathcal{G}$ is $1$-cocoercive, i.e., firmly nonexpansive, hence nonexpansive.

(iii) $J_{UA} = (U^{-1} + A)^{-1}\circ U^{-1}$.

Lemma 2.4 Let $\alpha$ and $\beta$ be strictly positive reals, let $B\colon \mathcal{H} \to \mathcal{H}$ be $\beta$-cocoercive, let $U \in \mathcal{P}_\alpha(\mathcal{H})$ be such that $\|U^{-1}\| < 2\beta$, and set $P = \mathrm{Id} - U^{-1}B$. Then,

$(\forall x \in \mathcal{H})(\forall y \in \mathcal{H})\quad \|Px - Py\|^2_U \le \|x - y\|^2_U - (2\beta - \|U^{-1}\|)\|Bx - By\|^2.$ (2.14)

Hence, $P$ is nonexpansive with respect to the norm $\|\cdot\|_U$.

Proof. Let $x \in \mathcal{H}$ and $y \in \mathcal{H}$. Then, using the cocoercivity of $B$, we have

$\|Px - Py\|^2_U = \|x - y\|^2_U - 2\langle x - y \mid Bx - By\rangle + \|U^{-1}(Bx - By)\|^2_U$
$\le \|x - y\|^2_U - 2\beta\|Bx - By\|^2 + \langle Bx - By \mid U^{-1}(Bx - By)\rangle$
$\le \|x - y\|^2_U - (2\beta - \|U^{-1}\|)\|Bx - By\|^2,$ (2.15)

which proves (2.14).

Theorem 2.5 [26, Theorem 4.1] Let $\mathcal{K}$ be a real Hilbert space with scalar product $\langle\cdot\mid\cdot\rangle$ and associated norm $\|\cdot\|$. Let $A\colon \mathcal{K} \to 2^{\mathcal{K}}$ be maximally monotone, let $\alpha \in\, ]0,+\infty[$, let $\beta \in\, ]0,+\infty[$, let $B\colon \mathcal{K} \to \mathcal{K}$ be $\beta$-cocoercive, let $(\eta_n) \in \ell^1_+(\mathbb{N})$, and let $(U_n)$ be a sequence in $\mathcal{P}_\alpha(\mathcal{K})$ such that

$\mu = \sup_{n\in\mathbb{N}}\|U_n\| < +\infty \quad\text{and}\quad (\forall n \in \mathbb{N})\ (1 + \eta_n)U_{n+1} \succcurlyeq U_n.$ (2.16)

Let $\varepsilon \in\, ]0, \min\{1, 2\beta/(\mu+1)\}]$, let $(\lambda_n)$ be a sequence in $[\varepsilon, 1]$, let $(\gamma_n)$ be a sequence in $[\varepsilon, (2\beta-\varepsilon)/\mu]$, let $x_0 \in \mathcal{K}$, and let $(a_n)$ and $(b_n)$ be absolutely summable sequences in $\mathcal{K}$. Suppose that

$Z = \mathrm{zer}(A + B) \neq \varnothing,$ (2.17)

and set

$(\forall n \in \mathbb{N})\quad y_n = x_n - \gamma_n U_n(Bx_n + b_n), \qquad x_{n+1} = x_n + \lambda_n\big(J_{\gamma_n U_n A}\,y_n + a_n - x_n\big).$ (2.18)

Then the following hold for some $\overline{x} \in Z$.

(i) $x_n \rightharpoonup \overline{x}$.

(ii) $\sum_{n\in\mathbb{N}} \|Bx_n - B\overline{x}\|^2 < +\infty$.

(iii) Suppose that, at every point in $Z$, $A$ or $B$ is demiregular; then $x_n \to \overline{x}$.
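Lemma 2.4 lends itself to a quick numerical sanity check (our illustration, not part of the paper): take $Bx = Qx$ with $Q$ symmetric positive semidefinite, which is $\beta$-cocoercive for $\beta = 1/\|Q\|$, and a diagonal metric $U$ scaled so that $\|U^{-1}\| < 2\beta$; inequality (2.14) then holds at arbitrary points.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n))
Q = G.T @ G                                   # symmetric PSD: Bx = Qx is 1/||Q||-cocoercive
beta = 1.0 / np.linalg.norm(Q, 2)
U = np.linalg.norm(Q, 2) * np.diag(rng.uniform(1.0, 1.5, n))  # metric with ||U^{-1}|| < 2*beta
Uinv = np.linalg.inv(U)
norm_Uinv = np.linalg.norm(Uinv, 2)

def norm_U_sq(z):                             # ||z||_U^2 = <Uz, z>
    return z @ (U @ z)

def P(z):                                     # P = Id - U^{-1} B
    return z - Uinv @ (Q @ z)

x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = norm_U_sq(P(x) - P(y))
rhs = norm_U_sq(x - y) - (2 * beta - norm_Uinv) * np.linalg.norm(Q @ (x - y)) ** 2
```

Since `lhs <= rhs <= norm_U_sq(x - y)`, the map $P$ is indeed nonexpansive in the metric $\|\cdot\|_U$, as the lemma asserts.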

3 Algorithm and convergence

We propose the following algorithm for solving Problem 1.1.

Algorithm 3.1 Let $\alpha \in\, ]0,+\infty[$ and, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $(U_{i,n})_{n\in\mathbb{N}}$ be a sequence in $\mathcal{P}_\alpha(\mathcal{H}_i)$ and let $(V_{k,n})_{n\in\mathbb{N}}$ be a sequence in $\mathcal{P}_\alpha(\mathcal{G}_k)$. Set $\beta = \min\{\nu_0, \mu_0\}$, let $\varepsilon \in\, ]0, \min\{1,\beta\}[$, and let $(\lambda_n)$ be a sequence in $[\varepsilon, 1]$. Let $(x_{i,0})_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m$ and $(v_{k,0})_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$. Set

For $n = 0, 1, \ldots$
  For $i = 1, \ldots, m$
    $p_{i,n} = J_{U_{i,n}A_i}\Big(x_{i,n} - U_{i,n}\Big(\sum_{k=1}^{s} L_{k,i}^* v_{k,n} + C_{i,n}(x_{1,n},\ldots,x_{m,n}) + c_{i,n} - z_i\Big)\Big) + a_{i,n}$
    $y_{i,n} = 2p_{i,n} - x_{i,n}$
    $x_{i,n+1} = x_{i,n} + \lambda_{i,n}(p_{i,n} - x_{i,n})$
  For $k = 1, \ldots, s$
    $q_{k,n} = J_{V_{k,n}B_k}\Big(v_{k,n} + V_{k,n}\Big(\sum_{i=1}^{m} L_{k,i}y_{i,n} - S_{k,n}(v_{1,n},\ldots,v_{s,n}) - d_{k,n} - r_k\Big)\Big) + b_{k,n}$
    $v_{k,n+1} = v_{k,n} + \lambda_{m+k,n}(q_{k,n} - v_{k,n}),$ (3.1)

where, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, the following conditions hold:

(i) $(\forall n \in \mathbb{N})\ U_{i,n+1} \succcurlyeq U_{i,n}$ and $V_{k,n+1} \succcurlyeq V_{k,n}$, and

$\mu = \sup_{n\in\mathbb{N}}\big\{\|U_{1,n}\|,\ldots,\|U_{m,n}\|,\|V_{1,n}\|,\ldots,\|V_{s,n}\|\big\} < +\infty.$ (3.2)

(ii) $(C_{i,n})_{n\in\mathbb{N}}$ are operators from $\mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m$ to $\mathcal{H}_i$ such that:
(a) $C_{i,n} - C_i$ are Lipschitz continuous with respective constants $\kappa_{i,n} \in\, ]0,+\infty[$ satisfying $\sum_{n\in\mathbb{N}} \kappa_{i,n} < +\infty$.
(b) There exists $s \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m$, not depending on $i$, such that $(\forall n \in \mathbb{N})\ C_{i,n}(s) = C_i(s)$.

(iii) $(S_{k,n})_{n\in\mathbb{N}}$ are operators from $\mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$ to $\mathcal{G}_k$ such that:
(a) $S_{k,n} - S_k$ are Lipschitz continuous with respective constants $\eta_{k,n} \in\, ]0,+\infty[$ satisfying $\sum_{n\in\mathbb{N}} \eta_{k,n} < +\infty$.
(b) There exists $w \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$, not depending on $k$, such that $(\forall n \in \mathbb{N})\ S_{k,n}(w) = S_k(w)$.

(iv) $(a_{i,n})_{n\in\mathbb{N}}$ and $(c_{i,n})_{n\in\mathbb{N}}$ are absolutely summable sequences in $\mathcal{H}_i$.

(v) $(b_{k,n})_{n\in\mathbb{N}}$ and $(d_{k,n})_{n\in\mathbb{N}}$ are absolutely summable sequences in $\mathcal{G}_k$.

(vi) $(\lambda_{i,n})_{n\in\mathbb{N}}$ and $(\lambda_{m+k,n})_{n\in\mathbb{N}}$ are in $]0,1]$ and such that

$\sum_{n\in\mathbb{N}} |\lambda_{i,n} - \lambda_n| + \sum_{n\in\mathbb{N}} |\lambda_{m+k,n} - \lambda_n| < +\infty.$ (3.3)

Remark 3.2 Here are some remarks.

(i) Our algorithm basically has the structure of the variable metric forward-backward splitting: the multi-valued operators are used individually in the backward steps via their resolvents, while the single-valued operators are used individually in the forward steps via their values.

(ii) The algorithm allows the metric to vary over the course of the iterations. Even when restricted to the constant metric case, which is the case where $(U_{i,n})_{1\le i\le m}$ and $(V_{k,n})_{1\le k\le s}$ are identity operators, the algorithm is new.

(iii) Condition (i) is used in [26, 45], while conditions (ii), (iii), and (vi) are used in [2], and conditions (iv) and (v), which quantify the tolerance allowed in the inexact implementation of the resolvents and the approximations of the single-valued operators, are widely used in the literature.

(iv) Algorithm 3.1 is an extension of [26, Corollary 6.2], where $m = 1$ and, for every $n \in \mathbb{N}$, $C_{1,n} = C$ and, for every $k \in \{1,\ldots,s\}$, $S_{k,n} = D_k^{-1}$ are restricted to being univariate and cocoercive, $B_k$ is replaced by $B_k^{-1}$, and, for every $j \in \{1,\ldots,m+s\}$, $\lambda_{j,n} = \lambda_n$.

The main result of the paper can now be stated.

Theorem 3.3 Suppose, in Problem 1.1, that $\Omega \neq \varnothing$ and that $L_{k_0,i_0} \neq 0$ for some $i_0 \in \{1,\ldots,m\}$ and $k_0 \in \{1,\ldots,s\}$. For every $n \in \mathbb{N}$, set

$\delta_n = \Big(\sum_{k=1}^{s}\sum_{i=1}^{m} \big\|\sqrt{V_{k,n}}\,L_{k,i}\sqrt{U_{i,n}}\big\|^2\Big)^{-1/2} - 1 \quad\text{and}\quad \zeta_n = \frac{\delta_n}{(1+\delta_n)\max_{1\le i\le m,\,1\le k\le s}\{\|U_{i,n}\|, \|V_{k,n}\|\}},$ (3.4)

and suppose that

$(\forall n \in \mathbb{N})\quad \zeta_n \ge \frac{1}{2\beta - \varepsilon}.$ (3.5)

For every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $(x_{i,n})_{n\in\mathbb{N}}$ and $(v_{k,n})_{n\in\mathbb{N}}$ be sequences generated by Algorithm 3.1. Then the following hold for some $(\overline{x}_1,\ldots,\overline{x}_m,\overline{v}_1,\ldots,\overline{v}_s) \in \Omega$.

(i) $(\forall i \in \{1,\ldots,m\})\ x_{i,n} \rightharpoonup \overline{x}_i$ and $(\forall k \in \{1,\ldots,s\})\ v_{k,n} \rightharpoonup \overline{v}_k$.

(ii) Suppose that the operator $(x_i)_{1\le i\le m} \mapsto \big(C_j((x_i)_{1\le i\le m})\big)_{1\le j\le m}$ is demiregular (see Lemma 2.2 for special cases) at $(\overline{x}_1,\ldots,\overline{x}_m)$; then $(\forall i \in \{1,\ldots,m\})\ x_{i,n} \to \overline{x}_i$.

(iii) Suppose that the operator $(v_k)_{1\le k\le s} \mapsto \big(S_j((v_k)_{1\le k\le s})\big)_{1\le j\le s}$ is demiregular (see Lemma 2.2 for special cases) at $(\overline{v}_1,\ldots,\overline{v}_s)$; then $(\forall k \in \{1,\ldots,s\})\ v_{k,n} \to \overline{v}_k$.
(iv) Suppose that there exist $j \in \{1,\ldots,m\}$ and an operator $C\colon \mathcal{H}_j \to \mathcal{H}_j$ such that $(\forall (x_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)\ C_j(x_1,\ldots,x_m) = Cx_j$, and that $C$ is demiregular (see Lemma 2.2 for special cases) at $\overline{x}_j$; then $x_{j,n} \to \overline{x}_j$.

(v) Suppose that there exist $j \in \{1,\ldots,s\}$ and an operator $D\colon \mathcal{G}_j \to \mathcal{G}_j$ such that $(\forall (v_k)_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s)\ S_j(v_1,\ldots,v_s) = Dv_j$, and that $D$ is demiregular (see Lemma 2.2 for special cases) at $\overline{v}_j$; then $v_{j,n} \to \overline{v}_j$.
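Before turning to the proof, it may help to see the structure of iteration (3.1) in the smallest configuration. The sketch below is our illustration, not part of the paper: it takes $m = s = 1$, constant metrics $U_{1,n} = \tau\,\mathrm{Id}$ and $V_{1,n} = \sigma\,\mathrm{Id}$, no coupling terms, no errors, and unit relaxation, applied to the assumed test problem $\min_x \mu\|x\|_1 + \tfrac{1}{2}\|Lx - b\|^2$ with $A_1 = \partial(\mu\|\cdot\|_1)$ and $B_1 = \partial g^*$ for $g = \tfrac{1}{2}\|\cdot - b\|^2$.

```python
import numpy as np

def soft(x, t):                                # resolvent of t * d||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def primal_dual(L, b, mu, n_iter=3000):
    """One-block (m = s = 1) instance of iteration (3.1): constant metrics
    tau*Id and sigma*Id, no coupling terms, no errors, lambda = 1."""
    tau = sigma = 0.9 / np.linalg.norm(L, 2)   # so that tau*sigma*||L||^2 < 1
    x = np.zeros(L.shape[1])
    v = np.zeros(L.shape[0])
    for _ in range(n_iter):
        p = soft(x - tau * (L.T @ v), tau * mu)        # backward step on A_1
        y = 2.0 * p - x                                # reflected primal point
        x = p                                          # primal update (lambda = 1)
        v = (v + sigma * (L @ y - b)) / (1.0 + sigma)  # resolvent of sigma*B_1, B_1 = d g*
    return x, v

rng = np.random.default_rng(2)
L = rng.standard_normal((30, 60))
x_true = np.zeros(60); x_true[[5, 40]] = [2.0, -1.5]
b = L @ x_true
x_hat, v_hat = primal_dual(L, b, mu=0.1)
```

With these choices the scheme coincides with a standard primal-dual (Chambolle-Pock-type) iteration; the general algorithm additionally accommodates several blocks, variable metrics, coupling operators, and summable errors.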

Proof. Let us introduce the Hilbert direct sums

$\boldsymbol{\mathcal{H}} = \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m, \quad \boldsymbol{\mathcal{G}} = \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s, \quad\text{and}\quad \boldsymbol{\mathcal{K}} = \boldsymbol{\mathcal{H}}\oplus\boldsymbol{\mathcal{G}}.$ (3.6)

We denote by $\boldsymbol{x} = (x_i)_{1\le i\le m}$, $\boldsymbol{y} = (y_i)_{1\le i\le m}$ and $\boldsymbol{v} = (v_k)_{1\le k\le s}$, $\boldsymbol{w} = (w_k)_{1\le k\le s}$ the generic elements in $\boldsymbol{\mathcal{H}}$ and $\boldsymbol{\mathcal{G}}$, respectively. The generic elements in $\boldsymbol{\mathcal{K}}$ will be of the form $\boldsymbol{p} = (\boldsymbol{x},\boldsymbol{v})$. The scalar product and the norm of $\boldsymbol{\mathcal{H}}$ are respectively defined by

$\langle\cdot\mid\cdot\rangle\colon (\boldsymbol{x},\boldsymbol{y}) \mapsto \sum_{i=1}^{m} \langle x_i \mid y_i\rangle$ (3.7)

and

$\|\cdot\|\colon \boldsymbol{x} \mapsto \sqrt{\langle \boldsymbol{x}\mid\boldsymbol{x}\rangle}.$ (3.8)

The scalar product and the norm of $\boldsymbol{\mathcal{G}}$ are defined in the same fashion as those of $\boldsymbol{\mathcal{H}}$,

$\langle\cdot\mid\cdot\rangle\colon (\boldsymbol{v},\boldsymbol{w}) \mapsto \sum_{k=1}^{s} \langle v_k \mid w_k\rangle$ (3.9)

and

$\|\cdot\|\colon \boldsymbol{v} \mapsto \sqrt{\langle \boldsymbol{v}\mid\boldsymbol{v}\rangle}.$ (3.10)

We next define the scalar product and the norm of $\boldsymbol{\mathcal{K}}$ respectively by

$\langle\cdot\mid\cdot\rangle\colon \big((\boldsymbol{x},\boldsymbol{v}),(\boldsymbol{y},\boldsymbol{w})\big) \mapsto \sum_{i=1}^{m}\langle x_i \mid y_i\rangle + \sum_{k=1}^{s}\langle v_k \mid w_k\rangle$ (3.11)

and

$\|\cdot\|\colon (\boldsymbol{x},\boldsymbol{v}) \mapsto \sqrt{\langle(\boldsymbol{x},\boldsymbol{v})\mid(\boldsymbol{x},\boldsymbol{v})\rangle}.$ (3.12)

Set

$\boldsymbol{A}\colon \boldsymbol{\mathcal{H}} \to 2^{\boldsymbol{\mathcal{H}}}\colon \boldsymbol{x} \mapsto \times_{i=1}^{m} A_ix_i, \quad \boldsymbol{C}\colon \boldsymbol{\mathcal{H}} \to \boldsymbol{\mathcal{H}}\colon \boldsymbol{x} \mapsto (C_i\boldsymbol{x})_{1\le i\le m}, \quad \boldsymbol{L}\colon \boldsymbol{\mathcal{H}} \to \boldsymbol{\mathcal{G}}\colon \boldsymbol{x} \mapsto \Big(\sum_{i=1}^{m} L_{k,i}x_i\Big)_{1\le k\le s}, \quad \boldsymbol{z} = (z_1,\ldots,z_m),$
$\boldsymbol{B}\colon \boldsymbol{\mathcal{G}} \to 2^{\boldsymbol{\mathcal{G}}}\colon \boldsymbol{v} \mapsto \times_{k=1}^{s} B_kv_k, \quad \boldsymbol{D}\colon \boldsymbol{\mathcal{G}} \to \boldsymbol{\mathcal{G}}\colon \boldsymbol{v} \mapsto (S_k\boldsymbol{v})_{1\le k\le s}, \quad \boldsymbol{r} = (r_1,\ldots,r_s),$ (3.13)

and

$(\forall n \in \mathbb{N})\quad \boldsymbol{C}_n\colon \boldsymbol{\mathcal{H}} \to \boldsymbol{\mathcal{H}}\colon \boldsymbol{x} \mapsto (C_{i,n}\boldsymbol{x})_{1\le i\le m} \quad\text{and}\quad \boldsymbol{D}_n\colon \boldsymbol{\mathcal{G}} \to \boldsymbol{\mathcal{G}}\colon \boldsymbol{v} \mapsto (S_{k,n}\boldsymbol{v})_{1\le k\le s}.$ (3.14)

Then it follows from (1.1) that

$(\forall \boldsymbol{x} \in \boldsymbol{\mathcal{H}})(\forall \boldsymbol{y} \in \boldsymbol{\mathcal{H}})\quad \langle \boldsymbol{x}-\boldsymbol{y} \mid \boldsymbol{C}\boldsymbol{x}-\boldsymbol{C}\boldsymbol{y}\rangle \ge \nu_0\|\boldsymbol{C}\boldsymbol{x}-\boldsymbol{C}\boldsymbol{y}\|^2,$ (3.15)

and from (1.2) that

$(\forall \boldsymbol{v} \in \boldsymbol{\mathcal{G}})(\forall \boldsymbol{w} \in \boldsymbol{\mathcal{G}})\quad \langle \boldsymbol{v}-\boldsymbol{w} \mid \boldsymbol{D}\boldsymbol{v}-\boldsymbol{D}\boldsymbol{w}\rangle \ge \mu_0\|\boldsymbol{D}\boldsymbol{v}-\boldsymbol{D}\boldsymbol{w}\|^2,$ (3.16)

9 which shows that C and D are respectively ν 0 -cocoercive and µ 0 -cocoercive and hence they are maximally monotone [10, Example 20.28]. Moreover, it follows from [10, Proposition 20.23] that A and B are maximally monotone. Furthermore, L : G H: v L k,i v k i m Then, using 3.13, we can rewrite the system of monotone inclusions 1.3 as a monotone inclusion in K, Set and find x,v K such that z L v A+Cx and Lx r B +Dv n N M: K 2 K : x,v z +Ax,r +Bv S: K K: x,v L v, Lx Q: K K: x,v Cx,Dv, Q n : K K: x,v C n x,d n v Λ n : K K: x,v λ i,n x i 1 i m,λ m+k,n v k 1 k s U n : K K: x,v U i,n x i 1 i m,v k,n v k 1 k s V n : K K: x,v U 1 n x,v L v,lx Then M, S are maximally monotone operators and 3.15, 3.16 implies that Q is β-cocoercive and hence it is maximally monotone [10, Example 20.28]. Therefore, M + S + Q is maximally monotone [10, Corollary 24.4]. Furthermore, the problem 3.18 is reduced to find a zero point of M +S +Q. Note that Ω implies that Moreover, we also have Hence, where zerm +S +Q 3.21 n N Λ n V n = max 1 j m+s λ j,n 1 and Id Λ n V n = 1 min 1 j m+s λ j,n Λ n V n + Id Λ n V n = 1+ max 1 j m+s λ j,n λ n min 1 j m+s λ j,n λ n 1+τ n, 3.23 We derive from the condition vi in Algorithm 3.1 that n N τ n = 2 max 1 j m+s λ j,n λ n τ n 2 m+s λ j,n λ n < j=1 We next derive from the condition i in Algorithm 3.1 that µ = sup U n < +, and n N U n+1 U n P α K,

10 and it follows from 3.12 and [25, Lemma 2.1ii] that n N p = x,v K p 2 U 1 n = m x i 2 + U 1 i,n v k 2 V 1 k,n m x i 2 Ui,n 1 + v k 2 V 1 k,n p 2 min { U i,n 1, V k,n 1 } i m,1 k s Note that V n are self-adjoint, let us check that V n are strongly monotone. To this end, let us introduce m T n : H G: x Vk,n L k,i x i n N 1 k s V1,n 1 R n : G G: v v1,..., V s,n vs. Then, by using Cauchy-Schwartz s inequality, we have where we set n Nx H T n x 2 = m 1 2 Vk,n L k,i Ui,n Ui,n xi m 2 1 V k,n L k,i Ui,n Ui,n xi m 2 m V k,n L k,i Ui,n 1 2 U i,n xi m = x i 2 U 1 i,n = β n m n N β n = which together with 3.4 imply that Moreover, m 2 V k,n L k,i Ui,n x i 2, 3.29 U 1 i,n m 2 V k,n L k,i Ui,n, 3.30 n N 1+δ n β n = 1 1+δ n n Nv G R n v 2 = = 1 V k,n vk 2 v k V 1 k,n 10

11 Therefore, for every p = x,v K, and every n N, it follows from 3.20, 3.28, 3.31, 3.32 and 3.27, 3.5 that p V n p = p 2 U 1 n = p 2 U 1 n = p 2 U 1 n p 2 U 1 n p 2 U 1 n 2 Lx v m 2 Vk,n L k,i x i 1 V k,n vk δn β n Tn x 1+δ n β n R n v T n x 2 +1+δ n β n R n v 2 1+δ n β n m x i 2 U 1 i,n +1+δ n β n v k 2 1+δ n V 1 k,n = δ m n x i δ U 1 n i,n v k 2 V 1 k,n ζ n p In turn, V n are invertible, by [25, Lemma 2.1iii] and 3.5, n N V 1 n 1 ζ n 2β ε, 3.34 and by [25, Lemma 2.1i], 3.26, n N U n+1 U n U 1 n U 1 1 V n. Furthermore, we derive from [25, Lemma 2.1ii] that V 1 n+1 Altogether, n+1 V n V n+1 p K V 1 n p p V n 1 p 2 1 ρ p 2, where ρ = α 1 + S sup V 1 n 2β ε and n N V 1 1 n+1 V n P 1/ρ K Moreover, using [25, Lemma 2.1iii] and 3.36, we obtain zn K N z n < + z n V 1 n < and and zn K N z n < + z n V n < +, 3.38 p K sup p V n <

12 Now we can reformulate the algorithm 3.1 as iterations in the space K. We first observe that 3.1 is equivalent to For n = 0,1... For i = 1,...,m For k = 1,...,s U 1 i,n x i,n p i,n s L k,i v k,n C i,n x 1,n,...,x m,n z i +A i p i,n a i,n +c i,n U 1 i,n a i,n x i,n+1 = x i,n +λ i,n p i,n x i,n V 1 k,n v k,n q k,n m L k,ix i,n p i,n S k,n v 1,n,...,v s,n r k +B k q k,n b k,n m L k,ip i,n +d k,n V 1 k,n b k,n v k,n+1 = v k,n +λ m+k,n q k,n v k,n Set p n = x 1,n,...x m,n,v 1,n,...,v s,n y n = p 1,n,...,p m,n,q 1,n,...,q s,n a n = a 1,n,...,a m,n,b 1,n,...,b s,n c n = c 1,n,...,c m,n,d 1,n,...,d s,n d n = U1,n 1 a 1,n,...,Um,n 1 a m,n,v1,n 1 b 1,n,...,V 1 b n = S +V n a n +c n d n. s,n b s,n Then, using the same arguments as in [44, Eqs ], using 3.19, 3.20, 3.40 yields We have n N V n p n y n Q n p n M +Sy n a n +Sa n +c n d n p n+1 = p n +Λ n y n p n n N V n p n y n Q n p n M +Sy n a n +Sa n +c n d n n N V n Q n p n M +S +V n y n a n +S +V n a n +c n d n n N y n = 1 M +S +V n V n Q n p n S +V n a n c n +d n +a n n N y n = Id+V 1 n N y n = J V 1 n M+S Therefore, 3.41 becomes +a n 1 Id V n M +S 1 n Q n pn V 1 n b n Id V 1 n Q n pn V 1 n b n +a n n N p n+1 = p n +Λ n J V 1 n M+S p n V 1 n Q np n +b n +a n p n By setting n N M: K 2 K : x,v Mx,v+Sx,v, P n = Id V 1 n Q n and P n = Id V 1 n Q, E n = Q n Q and Qn = V 1 n E n, e 1,n = Q n p n +V 1 n b n, e n = a n + 1 λ λn n Id Λ n p n J V 1 Pn p n e 1,n an, n M

13 we have 3.43 n N p n+1 = p n +Λ n J V 1 Pn M p n n V 1 n b n +an p n = Id Λ n p n +Λ n J V 1 Pn M p n n V 1 n b n +Λn a n = Id Λ n p n +Λ n J V 1 n M Pn p n e 1,n +Λn a n 3.45 = 1 λ n p n +λ n J V 1 M Pn p n n e 1,n +en Algorithm 3.46 is a special instance of the variable metric forward-backward splitting 2.18 with [ ] n N γ n = 1 ε,2β ε/sup V 1 n see Moreover, since M is maximally monotone, Q is β-cocoercive, and n N λ n [ε,1], since 3.36 and 3.21 respectively show that 2.16 and 2.17 are satisfied. In view of Theorem 2.5, it is sufficient to prove that e 1,n and e n are absolutely summable in K, i.e, we prove that and e 1,n < +, 3.48 e n < For every i {1,...,m} and every k {1,...,s}, since a i,n, c i,n and b k,n and d k,n are absolutely summable, we have and a n m a i,n + c n m c i,n + b k,n < +, 3.50 d k,n < Moreover, for every n N, U n P α K, it follows from [25, Lemma 2.1iii] that U 1 n α 1. Hence, d n α 1 a n < and b n ρ a n + cn + d n < Therefore, a n, b n, c n and d n are absolutely summable in K. Next it follows fromtheconditionsii, iiiinalgorithm 3.1 and3.36, 3.33, 3.5that, forevery p = x,v K 13

14 and q = y,w K, n N Qn p Q n q 2 V n = Q n p Q n q V n Qn p Q n q = E n p E n q V 1 n E np V 1 n E nq V 1 n E n p E n q 2 2β ε C n Cx C n Cy 2 + D n Dv D n Dw 2 m = 2β ε C i,n C i x C i,n C i y 2 + S k,n S k v S k,n S k w 2 m 2β ε κ 2 i,n x y 2 + m 2β ε κ 2 i,n + 2β εζ 1 n η 2 k,n m κ 2 i,n + m 2β ε 2 κ 2 i,n + η 2 k,n ηk,n 2 v w 2 p q 2 η 2 k,n p q 2 V n p q 2 V n, 3.54 which implies that Q n is Lipschitz continuous in the norm V n with respectively constant κ n = 2β ε m κ 2 i,n + ηk,n 2, 3.55 that satisfies κ n < Let p = x,v zerm +S +Q and noting that n N Q n s,w = 0, n N e 1,n V n Q n p n V n + V 1 n b n V n Since p zerm +S +Q, we have Q n p n Q n p V n + Q n p Q n s,w V n + V 1 n b n V n κ n p n p V n +κ n p s,w V n + V 1 n b n V n = κ n p n p V n +κ n p s,w V n + b n V 1 n n N p = J V 1 n M P np

15 Hence, since J V 1 n M and P n are nonexpansive with respect to the norm V n by Lemma 2.3ii and Lemma 2.4, on one hand, we have J V 1 n M Pn p n e 1,n p V n = J V 1 n M Pn p n e 1,n JV 1 n M P np V n p n p V n + e 1,n V n, 3.59 which and 3.45, 3.22, 3.23 imply that n N p n+1 p V n Id Λn p n p V n where + Λ M n JV 1 Pn p n n e 1,n p V n + Λ n a n V n Id Λ n + V Λ n n V p n n p + V Λ n n V e 1,n n V n + Λ n a n V n 1+τ n +κ n p n p V n +α n, 3.60 n N α n = κ n p s,w V n + bn V 1 n + an V n Noting that, by 3.25, 3.56, 3.39, 3.38, 3.37 and 3.53, 3.50, we have α n < + and τ n +κ n < Therefore, we derive from 3.60 and n N V n V n+1 that n N pn+1 p V n+1 1+τ n +κ n p n p V n +α n, 3.63 and hence, by [36, Lemma 2.2.2], sup p n p V n < +, 3.64 which and 3.57,3.56,3.62, 3.53, 3.37, 3.38, 3.39 imply that e 1,n V n < On the other hand, p n J V 1 M Pn p n n e 1,n an V n 2 p n p V n + e 1,n V n + a n V n, 3.66 which and 3.64, 3.65 imply that ν = sup p n J V 1 n M Pn p n e 1,n an V n < Now using the condition vi, 3.67 and the definition of e 1,n in 3.44, we obtain e n V n a n V n +νε m 1 λ i,n λ n + λ m+k,n λ n <

By using (3.38), we derive from (3.68) and (3.65) that

$\sum_{n\in\mathbb{N}} \|e_n\| < +\infty \quad\text{and}\quad \sum_{n\in\mathbb{N}} \|e_{1,n}\| < +\infty,$ (3.69)

which prove (3.48) and (3.49).

(i): By Theorem 2.5(i), $\boldsymbol{p}_n \rightharpoonup \overline{\boldsymbol{p}} \in \mathrm{zer}(\boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q})$.

(ii)&(iii): By Theorem 2.5(ii) and (iii),

$\|\boldsymbol{Q}\boldsymbol{p}_n - \boldsymbol{Q}\overline{\boldsymbol{p}}\| \to 0,$ (3.70)

which implies that, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$,

$C_i(x_{1,n},\ldots,x_{m,n}) - C_i(\overline{x}_1,\ldots,\overline{x}_m) \to 0 \quad\text{and}\quad S_k(v_{1,n},\ldots,v_{s,n}) - S_k(\overline{v}_1,\ldots,\overline{v}_s) \to 0.$ (3.71)

Moreover, by (i), $(\forall i \in \{1,\ldots,m\})\ x_{i,n} \rightharpoonup \overline{x}_i$ and $(\forall k \in \{1,\ldots,s\})\ v_{k,n} \rightharpoonup \overline{v}_k$. Therefore, the conclusions follow from the definition of demiregular operators.

(iv)&(v): The conclusions follow from our assumptions and the definition of demiregular operators.

4 Application to coupled systems of monotone inclusions in duality

We provide an application to coupled systems of monotone inclusions.

Problem 4.1 Let $m, s$ be strictly positive integers. For every $i \in \{1,\ldots,m\}$, let $\mathcal{H}_i$ be a real Hilbert space, let $z_i \in \mathcal{H}_i$, let $A_i\colon \mathcal{H}_i \to 2^{\mathcal{H}_i}$ be maximally monotone, and let $C_i\colon \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m \to \mathcal{H}_i$ be such that

$(\exists\, \nu_0 \in\, ]0,+\infty[)(\forall (x_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)(\forall (y_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)$
$\sum_{i=1}^{m} \langle x_i - y_i \mid C_i(x_1,\ldots,x_m) - C_i(y_1,\ldots,y_m)\rangle \ge \nu_0 \sum_{i=1}^{m} \|C_i(x_1,\ldots,x_m) - C_i(y_1,\ldots,y_m)\|^2.$ (4.1)

For every $k \in \{1,\ldots,s\}$, let $\mathcal{G}_k$ be a real Hilbert space, let $r_k \in \mathcal{G}_k$, let $D_k\colon \mathcal{G}_k \to 2^{\mathcal{G}_k}$ be maximally monotone and $\nu_k$-strongly monotone for some $\nu_k \in\, ]0,+\infty[$, and let $B_k\colon \mathcal{G}_k \to 2^{\mathcal{G}_k}$ be maximally monotone. For every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $L_{k,i}\colon \mathcal{H}_i \to \mathcal{G}_k$ be a bounded linear operator. The primal problem is to solve the primal inclusion: find $\overline{x}_1 \in \mathcal{H}_1, \ldots, \overline{x}_m \in \mathcal{H}_m$ such that

$z_1 \in A_1\overline{x}_1 + \sum_{k=1}^{s} L_{k,1}^*\Big((D_k \,\square\, B_k)\Big(\sum_{i=1}^{m} L_{k,i}\overline{x}_i - r_k\Big)\Big) + C_1(\overline{x}_1,\ldots,\overline{x}_m)$
$\quad\vdots$
$z_m \in A_m\overline{x}_m + \sum_{k=1}^{s} L_{k,m}^*\Big((D_k \,\square\, B_k)\Big(\sum_{i=1}^{m} L_{k,i}\overline{x}_i - r_k\Big)\Big) + C_m(\overline{x}_1,\ldots,\overline{x}_m).$ (4.2)

We denote by $\mathcal{P}$ the set of solutions to (4.2). The dual problem is to solve the dual inclusion: find $\overline{v}_1 \in \mathcal{G}_1, \ldots, \overline{v}_s \in \mathcal{G}_s$ such that

$(\exists\, (\overline{x}_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)$
$z_1 - \sum_{k=1}^{s} L_{k,1}^*\overline{v}_k \in A_1\overline{x}_1 + C_1(\overline{x}_1,\ldots,\overline{x}_m)$  and  $\sum_{i=1}^{m} L_{1,i}\overline{x}_i - r_1 \in B_1^{-1}\overline{v}_1 + D_1^{-1}\overline{v}_1$
$\quad\vdots$
$z_m - \sum_{k=1}^{s} L_{k,m}^*\overline{v}_k \in A_m\overline{x}_m + C_m(\overline{x}_1,\ldots,\overline{x}_m)$  and  $\sum_{i=1}^{m} L_{s,i}\overline{x}_i - r_s \in B_s^{-1}\overline{v}_s + D_s^{-1}\overline{v}_s.$ (4.3)

The set of solutions to (4.3) is denoted by $\mathcal{D}$.

Problem 4.1 covers not only a wide class of monotone inclusion and duality frameworks in the literature [3, 5, 6, 12, 17, 18, 26, 28, 33, 37, 39, 40, 42, 43, 44] and the coupled systems of monotone inclusions unified in [2] and the references therein, but also a wide class of minimization formulations, in particular in multi-component signal decomposition and recovery [2, 5, 7] and the references therein.

Algorithm 4.2 Let $\alpha \in\, ]0,+\infty[$ and, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $(U_{i,n})_{n\in\mathbb{N}}$ be a sequence in $\mathcal{P}_\alpha(\mathcal{H}_i)$ and let $(V_{k,n})_{n\in\mathbb{N}}$ be a sequence in $\mathcal{P}_\alpha(\mathcal{G}_k)$. Set $\beta = \min\{\nu_0, \nu_1, \ldots, \nu_s\}$, let $\varepsilon \in\, ]0, \min\{1,\beta\}[$, and let $(\lambda_n)$ be a sequence in $[\varepsilon, 1]$. Let $(x_{i,0})_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m$ and $(v_{k,0})_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$. Set

For $n = 0, 1, \ldots$
  For $i = 1, \ldots, m$
    $p_{i,n} = J_{U_{i,n}A_i}\Big(x_{i,n} - U_{i,n}\Big(\sum_{k=1}^{s} L_{k,i}^* v_{k,n} + C_i(x_{1,n},\ldots,x_{m,n}) + c_{i,n} - z_i\Big)\Big) + a_{i,n}$
    $y_{i,n} = 2p_{i,n} - x_{i,n}$
    $x_{i,n+1} = x_{i,n} + \lambda_{i,n}(p_{i,n} - x_{i,n})$
  For $k = 1, \ldots, s$
    $q_{k,n} = J_{V_{k,n}B_k^{-1}}\Big(v_{k,n} + V_{k,n}\Big(\sum_{i=1}^{m} L_{k,i}y_{i,n} - D_k^{-1}v_{k,n} - d_{k,n} - r_k\Big)\Big) + b_{k,n}$
    $v_{k,n+1} = v_{k,n} + \lambda_{m+k,n}(q_{k,n} - v_{k,n}),$ (4.4)

where, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, the following conditions hold:

(i) $(\forall n \in \mathbb{N})\ U_{i,n+1} \succcurlyeq U_{i,n}$ and $V_{k,n+1} \succcurlyeq V_{k,n}$, and

$\mu = \sup_{n\in\mathbb{N}}\big\{\|U_{1,n}\|,\ldots,\|U_{m,n}\|,\|V_{1,n}\|,\ldots,\|V_{s,n}\|\big\} < +\infty.$ (4.5)

(ii) $(a_{i,n})_{n\in\mathbb{N}}$ and $(c_{i,n})_{n\in\mathbb{N}}$ are absolutely summable sequences in $\mathcal{H}_i$.

(iii) $(b_{k,n})_{n\in\mathbb{N}}$ and $(d_{k,n})_{n\in\mathbb{N}}$ are absolutely summable sequences in $\mathcal{G}_k$.

(iv) $(\lambda_{i,n})_{n\in\mathbb{N}}$ and $(\lambda_{m+k,n})_{n\in\mathbb{N}}$ are in $]0,1]$ and such that

$\sum_{n\in\mathbb{N}} |\lambda_{i,n} - \lambda_n| + \sum_{n\in\mathbb{N}} |\lambda_{m+k,n} - \lambda_n| < +\infty.$ (4.6)
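In (4.4), the resolvent of the inverse operator $B_k^{-1}$ can be evaluated from the resolvent of $B_k$ itself through the Moreau-type identity $J_{\sigma A^{-1}}(x) = x - \sigma J_{\sigma^{-1}A}(\sigma^{-1}x)$. A quick numerical check of this identity (our illustration, assuming $A = \partial|\cdot|$ on $\mathbb{R}$, so that $A^{-1} = \partial\iota_{[-1,1]}$ and $J_{\sigma A^{-1}}$ is the projection onto $[-1,1]$ for every $\sigma > 0$):

```python
import numpy as np

def soft(x, t):                         # J_{tA} for A = d|.| (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def resolvent_inverse(x, sigma):        # J_{sigma A^{-1}} via the Moreau-type identity
    return x - sigma * soft(x / sigma, 1.0 / sigma)

x = np.linspace(-3.0, 3.0, 25)
# A^{-1} = d(iota_[-1,1]), so its resolvent is clipping to [-1, 1], for any sigma.
proj = np.clip(x, -1.0, 1.0)
```

This is why algorithms such as (4.4) never need to invert $B_k$ explicitly: one resolvent call on $B_k$ suffices.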

Corollary 4.3 Suppose that $\mathcal{P} \neq \varnothing$, that $L_{k_0,i_0} \neq 0$ for some $i_0 \in \{1,\ldots,m\}$ and $k_0 \in \{1,\ldots,s\}$, and that (3.5) is satisfied. For every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $(x_{i,n})_{n\in\mathbb{N}}$ and $(v_{k,n})_{n\in\mathbb{N}}$ be sequences generated by Algorithm 4.2. Then the following hold for some $(\overline{x}_1,\ldots,\overline{x}_m) \in \mathcal{P}$ and $(\overline{v}_1,\ldots,\overline{v}_s) \in \mathcal{D}$.

(i) $(\forall i \in \{1,\ldots,m\})\ x_{i,n} \rightharpoonup \overline{x}_i$ and $(\forall k \in \{1,\ldots,s\})\ v_{k,n} \rightharpoonup \overline{v}_k$.

(ii) Suppose that the operator $(x_i)_{1\le i\le m} \mapsto \big(C_j((x_i)_{1\le i\le m})\big)_{1\le j\le m}$ is demiregular (see Lemma 2.2 for special cases) at $(\overline{x}_1,\ldots,\overline{x}_m)$; then $(\forall i \in \{1,\ldots,m\})\ x_{i,n} \to \overline{x}_i$.

(iii) Suppose that $D_j^{-1}$ is demiregular at $\overline{v}_j$ for some $j \in \{1,\ldots,s\}$; then $v_{j,n} \to \overline{v}_j$.

(iv) Suppose that there exist $j \in \{1,\ldots,m\}$ and an operator $C\colon \mathcal{H}_j \to \mathcal{H}_j$ such that $(\forall (x_i)_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m)\ C_j(x_1,\ldots,x_m) = Cx_j$, and that $C$ is demiregular (see Lemma 2.2 for special cases) at $\overline{x}_j$; then $x_{j,n} \to \overline{x}_j$.

Proof. Set $\mu_0 = \min\{\nu_1,\ldots,\nu_s\}$ and define

$(\forall k \in \{1,\ldots,s\})\quad S_k\colon \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s \to \mathcal{G}_k\colon (v_1,\ldots,v_s) \mapsto D_k^{-1}v_k.$ (4.7)

Then, for every $(v_k)_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$ and every $(w_k)_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$, we obtain

$\sum_{k=1}^{s} \langle v_k - w_k \mid S_k(v_1,\ldots,v_s) - S_k(w_1,\ldots,w_s)\rangle = \sum_{k=1}^{s} \langle v_k - w_k \mid D_k^{-1}v_k - D_k^{-1}w_k\rangle$
$\ge \sum_{k=1}^{s} \nu_k\|D_k^{-1}v_k - D_k^{-1}w_k\|^2 \ge \mu_0\sum_{k=1}^{s} \|D_k^{-1}v_k - D_k^{-1}w_k\|^2$
$= \mu_0\sum_{k=1}^{s} \|S_k(v_1,\ldots,v_s) - S_k(w_1,\ldots,w_s)\|^2,$ (4.8)

which shows that (1.2) is satisfied. Moreover, upon setting

$(\forall n \in \mathbb{N})\quad (\forall i \in \{1,\ldots,m\})\ C_{i,n} = C_i \quad\text{and}\quad (\forall k \in \{1,\ldots,s\})\ S_{k,n} = S_k,$ (4.9)

the conditions (ii) and (iii) in Algorithm 3.1 are satisfied. Note that the conditions (i), (ii), (iii), and (iv) in Algorithm 4.2 are the same as in Algorithm 3.1. Moreover, algorithm (3.1) reduces to (4.4), where $B_k$ is replaced by $B_k^{-1}$. Next, since $\mathcal{P} \neq \varnothing$, we derive from (4.2) that, for every $k \in \{1,\ldots,s\}$, there exists $\overline{v}_k \in \mathcal{G}_k$ such that

$\overline{v}_k \in (D_k \,\square\, B_k)\Big(\sum_{i=1}^{m} L_{k,i}\overline{x}_i - r_k\Big) \iff \sum_{i=1}^{m} L_{k,i}\overline{x}_i - r_k \in B_k^{-1}\overline{v}_k + D_k^{-1}\overline{v}_k,$ (4.10)

and

$(\forall i \in \{1,\ldots,m\})\quad z_i - \sum_{k=1}^{s} L_{k,i}^*\overline{v}_k \in A_i\overline{x}_i + C_i(\overline{x}_1,\ldots,\overline{x}_m),$ (4.11)

which shows that $\Omega \neq \varnothing$ and $\mathcal{D} \neq \varnothing$. Conversely, if $(\overline{x}_1,\ldots,\overline{x}_m,\overline{v}_1,\ldots,\overline{v}_s) \in \Omega$, then the inclusions (4.10) and (4.11) are satisfied. Hence $(\overline{v}_1,\ldots,\overline{v}_s) \in \mathcal{D}$ and $(\overline{x}_1,\ldots,\overline{x}_m) \in \mathcal{P}$. Therefore, the conclusions follow from Theorem 3.3.

5 Applications to minimization problems

We provide applications to minimization problems involving infimal convolutions, composite functions, and coupling.

Problem 5.1 Let $m, s$ be strictly positive integers. For every $i \in \{1,\ldots,m\}$, let $\mathcal{H}_i$ be a real Hilbert space, let $z_i \in \mathcal{H}_i$, and let $f_i \in \Gamma_0(\mathcal{H}_i)$. For every $k \in \{1,\ldots,s\}$, let $\mathcal{G}_k$ be a real Hilbert space, let $r_k \in \mathcal{G}_k$, let $l_k \in \Gamma_0(\mathcal{G}_k)$ be a $\nu_k$-strongly convex function for some $\nu_k \in\, ]0,+\infty[$, and let $g_k \in \Gamma_0(\mathcal{G}_k)$. For every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $L_{k,i}\colon \mathcal{H}_i \to \mathcal{G}_k$ be a bounded linear operator. Let $\varphi\colon \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m \to \mathbb{R}$ be a convex differentiable function with $\nu_0^{-1}$-Lipschitz continuous gradient. The primal problem is to

$\underset{x_1\in\mathcal{H}_1,\ldots,x_m\in\mathcal{H}_m}{\text{minimize}}\quad \sum_{i=1}^{m}\big(f_i(x_i) - \langle x_i \mid z_i\rangle\big) + \sum_{k=1}^{s}(l_k \,\square\, g_k)\Big(\sum_{i=1}^{m} L_{k,i}x_i - r_k\Big) + \varphi(x_1,\ldots,x_m),$ (5.1)

under the assumption that

$(\forall i \in \{1,\ldots,m\})\quad z_i \in \mathrm{ran}\Big(\partial f_i + \sum_{k=1}^{s} L_{k,i}^*\circ(\partial l_k \,\square\, \partial g_k)\circ\Big(\sum_{j=1}^{m} L_{k,j}\cdot{} - r_k\Big) + \nabla_i\varphi\Big),$ (5.2)

where $\nabla_i\varphi$ is the $i$th component of the gradient $\nabla\varphi$, and the dual problem is to

$\underset{v_1\in\mathcal{G}_1,\ldots,v_s\in\mathcal{G}_s}{\text{minimize}}\quad \Big(\varphi^* \,\square\, \bigoplus_{i=1}^{m} f_i^*\Big)\Big(\Big(z_i - \sum_{k=1}^{s} L_{k,i}^*v_k\Big)_{1\le i\le m}\Big) + \sum_{k=1}^{s}\big(l_k^*(v_k) + g_k^*(v_k) + \langle v_k \mid r_k\rangle\big).$ (5.3)

In the case when the infimal convolutions are absent, Problem 5.1 often appears in multi-component signal decomposition and recovery problems [2, 5, 4] and the references therein.

Example 5.2 Some special cases of this problem are listed in the following.

(i) In the case when $\varphi\colon (x_1,\ldots,x_m) \mapsto \sum_{i=1}^{m} h_i(x_i)$, where, for every $i \in \{1,\ldots,m\}$, $h_i\colon \mathcal{H}_i \to \mathbb{R}$ is a convex differentiable function with $\tau_i^{-1}$-Lipschitz continuous gradient, for some $\tau_i \in\, ]0,+\infty[$, Problem 5.1 reduces to the general minimization problem [19, Problem 5.1], which covers a wide class of convex minimization problems in the literature.
(ii) In the case when $\varphi\colon (x_1,\ldots,x_m) \mapsto 0$ and, for every $k \in \{1,\ldots,s\}$, $l_k = \iota_{\{0\}}$ and $g_k$ is differentiable with $\tau_k^{-1}$-Lipschitz continuous gradient, for some $\tau_k \in\, ]0,+\infty[$, Problem 5.1 reduces to [5, Problem 1.1].

(iii) In the case when $m = 1$, Problem 5.1 reduces to [23, Problem 4.1], which is also studied in [26, 44].

Algorithm 5.3 Let $\alpha \in\, ]0,+\infty[$ and, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $(U_{i,n})_{n\in\mathbb{N}}$ be a sequence in $\mathcal{P}_\alpha(\mathcal{H}_i)$ and let $(V_{k,n})_{n\in\mathbb{N}}$ be a sequence in $\mathcal{P}_\alpha(\mathcal{G}_k)$. Set $\beta = \min\{\nu_0, \nu_1, \ldots, \nu_s\}$, let $\varepsilon \in\, ]0, \min\{1,\beta\}[$, and let $(\lambda_n)$ be a sequence in $[\varepsilon, 1]$. Let $(x_{i,0})_{1\le i\le m} \in \mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m$ and $(v_{k,0})_{1\le k\le s} \in \mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s$. Set

For $n = 0, 1, \ldots$
  For $i = 1, \ldots, m$
    $p_{i,n} = \mathrm{prox}^{U_{i,n}^{-1}}_{f_i}\Big(x_{i,n} - U_{i,n}\Big(\sum_{k=1}^{s} L_{k,i}^* v_{k,n} + \nabla_i\varphi(x_{1,n},\ldots,x_{m,n}) + c_{i,n} - z_i\Big)\Big) + a_{i,n}$
    $y_{i,n} = 2p_{i,n} - x_{i,n}$
    $x_{i,n+1} = x_{i,n} + \lambda_{i,n}(p_{i,n} - x_{i,n})$
  For $k = 1, \ldots, s$
    $q_{k,n} = \mathrm{prox}^{V_{k,n}^{-1}}_{g_k^*}\Big(v_{k,n} + V_{k,n}\Big(\sum_{i=1}^{m} L_{k,i}y_{i,n} - \nabla l_k^*(v_{k,n}) - d_{k,n} - r_k\Big)\Big) + b_{k,n}$
    $v_{k,n+1} = v_{k,n} + \lambda_{m+k,n}(q_{k,n} - v_{k,n}),$ (5.4)

where, for every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, the following conditions hold:

(i) $(\forall n \in \mathbb{N})\ U_{i,n+1} \succcurlyeq U_{i,n}$ and $V_{k,n+1} \succcurlyeq V_{k,n}$, and

$\mu = \sup_{n\in\mathbb{N}}\big\{\|U_{1,n}\|,\ldots,\|U_{m,n}\|,\|V_{1,n}\|,\ldots,\|V_{s,n}\|\big\} < +\infty.$ (5.5)

(ii) $(a_{i,n})_{n\in\mathbb{N}}$ and $(c_{i,n})_{n\in\mathbb{N}}$ are absolutely summable sequences in $\mathcal{H}_i$.

(iii) $(b_{k,n})_{n\in\mathbb{N}}$ and $(d_{k,n})_{n\in\mathbb{N}}$ are absolutely summable sequences in $\mathcal{G}_k$.

(iv) $(\lambda_{i,n})_{n\in\mathbb{N}}$ and $(\lambda_{m+k,n})_{n\in\mathbb{N}}$ are in $]0,1]$ and such that

$\sum_{n\in\mathbb{N}} |\lambda_{i,n} - \lambda_n| + \sum_{n\in\mathbb{N}} |\lambda_{m+k,n} - \lambda_n| < +\infty.$ (5.6)

Corollary 5.4 Suppose that $L_{k_0,i_0} \neq 0$ for some $i_0 \in \{1,\ldots,m\}$ and $k_0 \in \{1,\ldots,s\}$, and that (3.5) is satisfied. For every $i \in \{1,\ldots,m\}$ and every $k \in \{1,\ldots,s\}$, let $(x_{i,n})_{n\in\mathbb{N}}$ and $(v_{k,n})_{n\in\mathbb{N}}$ be sequences generated by Algorithm 5.3. Then the following hold for some solution $(\overline{x}_1,\ldots,\overline{x}_m)$ to (5.1) and some solution $(\overline{v}_1,\ldots,\overline{v}_s)$ to (5.3).

(i) $(\forall i \in \{1,\ldots,m\})\ x_{i,n} \rightharpoonup \overline{x}_i$ and $(\forall k \in \{1,\ldots,s\})\ v_{k,n} \rightharpoonup \overline{v}_k$.

(ii) Suppose that $\varphi$ is defined as in Example 5.2(i) and that $h_j$ is uniformly convex at $\overline{x}_j$ for some $j \in \{1,\ldots,m\}$; then $x_{j,n} \to \overline{x}_j$.

(iii) Suppose that $l_j^*$ is uniformly convex at $\overline{v}_j$ for some $j \in \{1,\ldots,s\}$; then $v_{j,n} \to \overline{v}_j$.

Proof. Set

$(\forall i \in \{1,\ldots,m\})\ A_i = \partial f_i \ \text{and}\ C_i = \nabla_i\varphi, \qquad (\forall k \in \{1,\ldots,s\})\ B_k = \partial g_k \ \text{and}\ D_k = \partial l_k.$ (5.7)

Then it follows from [10, Theorem 20.40] that $(A_i)_{1\le i\le m}$, $(B_k)_{1\le k\le s}$, and $(D_k)_{1\le k\le s}$ are maximally monotone. Moreover, $(C_1,\ldots,C_m) = \nabla\varphi$ is $\nu_0$-cocoercive [8, 9]. Moreover, since, for every $k \in$

$\{1,\ldots,s\}$, $l_k$ is $\nu_k$-strongly convex, $\partial l_k$ is $\nu_k$-strongly monotone. Therefore, all the conditions on the operators in Problem 1.1 are satisfied. Since, for every $k \in \{1,\ldots,s\}$, $\mathrm{dom}\,l_k^* = \mathcal{G}_k$, we next derive from [10, Proposition 20.47] that

$(\forall k \in \{1,\ldots,s\})\quad \partial l_k \,\square\, \partial g_k = \partial g_k \,\square\, \partial l_k = B_k \,\square\, D_k.$ (5.8)

Let $\boldsymbol{\mathcal{H}}$ and $\boldsymbol{\mathcal{G}}$ be defined as in the proof of Theorem 3.3, let $\boldsymbol{L}$, $\boldsymbol{z}$, and $\boldsymbol{r}$ be defined as in (3.13), and define

$f\colon \boldsymbol{\mathcal{H}} \to\, ]-\infty,+\infty]\colon \boldsymbol{x} \mapsto \sum_{i=1}^{m} f_i(x_i), \quad g\colon \boldsymbol{\mathcal{G}} \to\, ]-\infty,+\infty]\colon \boldsymbol{v} \mapsto \sum_{k=1}^{s} g_k(v_k), \quad l\colon \boldsymbol{\mathcal{G}} \to\, ]-\infty,+\infty]\colon \boldsymbol{v} \mapsto \sum_{k=1}^{s} l_k(v_k).$ (5.9)

Observe that [10, Proposition 13.27]

$f^*\colon \boldsymbol{y} \mapsto \sum_{i=1}^{m} f_i^*(y_i), \quad g^*\colon \boldsymbol{v} \mapsto \sum_{k=1}^{s} g_k^*(v_k), \quad\text{and}\quad l^*\colon \boldsymbol{v} \mapsto \sum_{k=1}^{s} l_k^*(v_k).$ (5.10)

We also have

$l \,\square\, g\colon \boldsymbol{v} \mapsto \sum_{k=1}^{s} (l_k \,\square\, g_k)(v_k).$ (5.11)

Then the primal problem becomes

$\underset{\boldsymbol{x}\in\boldsymbol{\mathcal{H}}}{\text{minimize}}\quad f(\boldsymbol{x}) - \langle \boldsymbol{x} \mid \boldsymbol{z}\rangle + (l \,\square\, g)(\boldsymbol{L}\boldsymbol{x} - \boldsymbol{r}) + \varphi(\boldsymbol{x}),$ (5.12)

and the dual problem becomes

$\underset{\boldsymbol{v}\in\boldsymbol{\mathcal{G}}}{\text{minimize}}\quad (\varphi^* \,\square\, f^*)(\boldsymbol{z} - \boldsymbol{L}^*\boldsymbol{v}) + l^*(\boldsymbol{v}) + g^*(\boldsymbol{v}) + \langle \boldsymbol{v} \mid \boldsymbol{r}\rangle.$ (5.13)

Then, let $\overline{\boldsymbol{x}} = (\overline{x}_1,\ldots,\overline{x}_m)$ be a solution to (4.2), i.e., for every $i \in \{1,\ldots,m\}$,

$z_i \in \partial f_i(\overline{x}_i) + \sum_{k=1}^{s} L_{k,i}^*\Big((\partial l_k \,\square\, \partial g_k)\Big(\sum_{j=1}^{m} L_{k,j}\overline{x}_j - r_k\Big)\Big) + \nabla_i\varphi(\overline{x}_1,\ldots,\overline{x}_m).$ (5.14)

Then, using (5.7), (5.8), [10, Corollary 16.38(iii)], and [10, Proposition 16.8],

$0 \in \partial f(\overline{\boldsymbol{x}}) - \boldsymbol{z} + \boldsymbol{L}^*\big(\partial(l \,\square\, g)(\boldsymbol{L}\overline{\boldsymbol{x}} - \boldsymbol{r})\big) + \nabla\varphi(\overline{\boldsymbol{x}}).$ (5.15)

Therefore, by [10, Proposition 16.5(ii)], we derive from (5.15) that

$0 \in \partial\big(f - \langle\cdot \mid \boldsymbol{z}\rangle + (l \,\square\, g)(\boldsymbol{L}\cdot{} - \boldsymbol{r}) + \varphi\big)(\overline{\boldsymbol{x}}).$ (5.16)

Hence, by Fermat's rule [10, Theorem 16.2], $\overline{\boldsymbol{x}}$ is a solution to (5.12), i.e., $\overline{\boldsymbol{x}}$ is a solution to (5.1). We next let $\overline{\boldsymbol{v}}$ be a solution to (4.3). Then, using [10, Theorem 15.3] and (2.12),

$-\boldsymbol{r} \in -\boldsymbol{L}\big((\partial f + \nabla\varphi)^{-1}(\boldsymbol{z} - \boldsymbol{L}^*\overline{\boldsymbol{v}})\big) + (\partial g)^{-1}\overline{\boldsymbol{v}} + (\partial l)^{-1}\overline{\boldsymbol{v}} = -\boldsymbol{L}\big(\partial(f + \varphi)^*(\boldsymbol{z} - \boldsymbol{L}^*\overline{\boldsymbol{v}})\big) + \partial g^*(\overline{\boldsymbol{v}}) + \partial l^*(\overline{\boldsymbol{v}}) = -\boldsymbol{L}\big(\partial(f^* \,\square\, \varphi^*)(\boldsymbol{z} - \boldsymbol{L}^*\overline{\boldsymbol{v}})\big) + \partial g^*(\overline{\boldsymbol{v}}) + \partial l^*(\overline{\boldsymbol{v}}).$ (5.17)

Therefore, by [10, Proposition 16.5(ii)], we derive from (5.17) that

0 ∈ ∂( (ϕ* □ f*)(z − L*·) + l* + g* + ⟨· | r⟩ )(v).    (5.18)

Hence, by Fermat's rule [10, Theorem 16.2], v is a solution to (5.13), i.e., v is a solution to (5.3). Now, in view of (2.8), algorithm (5.4) is a special case of algorithm (4.4), and all the specific conditions of Corollary 4.3 are satisfied.

(i) It follows from Corollary 4.3(i) that (x_{1,n},...,x_{m,n}) ⇀ (x_1,...,x_m), which solves the primal problem (5.1), and (v_{1,n},...,v_{s,n}) ⇀ (v_1,...,v_s), which solves the dual problem (5.3).

(ii)–(iii) The conclusions follow from Corollary 4.3(iii)–(iv) and Lemma 2.2(vi).

Remark 5.5 Here are some remarks.

(i) Sufficient conditions which ensure that condition (5.2) is satisfied are provided in [19, Proposition 5.3]. For instance, it is enough that (5.1) have at least one solution and that (r_1,...,r_s) belong to the strong relative interior of

E = { ( Σ_{i=1}^{m} L_{k,i} x_i − v_k )_{1≤k≤s}  |  (∀i ∈ {1,...,m}) x_i ∈ dom f_i and (∀k ∈ {1,...,s}) v_k ∈ dom g_k + dom l_k }.

(ii) In the case when m = 1 and (∀n ∈ ℕ)(∀i ∈ {1,...,m+s}) λ_{i,n} = λ_n, algorithm (5.4) reduces to [26, Eq. (5.26)], where the connections to existing work are available.

6 Multi-dictionary signal representation

Dictionaries have been used in minimization problems in signal processing in [24, Section 4.3]. Let us recall that a sequence of unit-norm vectors (o_k)_{k∈K} (K ⊂ ℕ) in H is a dictionary with dictionary constant µ ∈ ]0,+∞[ if

(∀x ∈ H) Σ_{k∈K} |⟨x | o_k⟩|² ≤ µ ‖x‖².    (6.1)

The dictionary operator is then defined by

F: H → ℓ²(K) : x ↦ ( ⟨x | o_k⟩ )_{k∈K}    (6.2)

and its adjoint is

F*: ℓ²(K) → H : (ω_k)_{k∈K} ↦ Σ_{k∈K} ω_k o_k.    (6.3)

Dictionaries extend the notions of orthonormal bases and frames, which play an important role in the theory of signal processing due to their ability to efficiently capture a wide range of signal features; see [2, 15, 20, 21] and the references therein. The focus of this section is to exploit the information available on the original signals (x_i)_{1≤i≤m}, namely their dictionary coefficients (⟨x_i | o_{i,j}⟩)_{1≤i≤m, j∈K} and their closeness to soft constraints, i.e., nonempty closed convex subsets (C_i)_{1≤i≤m} modeling prior information. The rest of the available information is modeled by potential functions (f_i)_{1≤i≤m} (hard constraints). Furthermore, the data-fitting terms are measured by nonsmooth functions.

Problem 6.1 Let H be a real Hilbert space, let m and s be strictly positive integers such that s > m, let γ ∈ ]0,+∞[, and let K be a nonempty subset of ℕ. For every i ∈ {1,...,m}, let G_i = ℓ²(K), let f_i ∈ Γ_0(H), let (o_{i,j})_{j∈K} be a dictionary in H with associated dictionary operator F_i and dictionary constant µ_i, let (φ_{i,j})_{j∈K} be a sequence in Γ_0(ℝ) such that (∀j ∈ K) φ_{i,j} ≥ φ_{i,j}(0) = 0, and let C_i be a nonempty closed convex subset of H. For every k ∈ {m+1,...,s}, let Y_k be a real Hilbert space, let r_k ∈ Y_k, and let β_k ∈ ]0,+∞[. For every i ∈ {1,...,m} and every k ∈ {m+1,...,s}, let R_{k,i}: H_i → Y_k be a bounded linear operator. Set C = ⨉_{i=1}^{m} C_i. The primal problem is to

minimize_{x_1 ∈ H,...,x_m ∈ H}  Σ_{i=1}^{m} f_i(x_i) + Σ_{i=1}^{m} Σ_{j∈K} φ_{i,j}( ⟨x_i | o_{i,j}⟩ ) + Σ_{k=m+1}^{s} β_k ‖ r_k − Σ_{i=1}^{m} R_{k,i} x_i ‖ + γ d_C²(x_1,...,x_m)/2,    (6.4)

and the dual problem is to

minimize_{ξ_1 ∈ ℓ²(K),...,ξ_m ∈ ℓ²(K), ‖v_{m+1}‖ ≤ β_{m+1},...,‖v_s‖ ≤ β_s}  Σ_{i=1}^{m} ( ( σ_{C_i} + ‖·‖²/(2γ) ) □ f_i* )( −F_i* ξ_i − Σ_{k=m+1}^{s} R_{k,i}* v_k ) + Σ_{i=1}^{m} Σ_{j∈K} φ_{i,j}*(ξ_{i,j}) + Σ_{k=m+1}^{s} ⟨r_k | v_k⟩.    (6.5)

Lemma 6.2 Problem 6.1 is a special case of Problem 5.1 with

(∀i ∈ {1,...,m}) z_i = 0, ϕ = γ d_C²/2, and ν_0 = γ;
(∀k ∈ {1,...,s}) l_k = ι_{{0}};
(∀k ∈ {1,...,m})(∀i ∈ {1,...,m}) L_{k,i} = F_i if k = i and L_{k,i} = 0 otherwise, r_k = 0, G_k = ℓ²(K), and g_k: ℓ²(K) → ]−∞,+∞] : ξ_k ↦ Σ_{j∈K} φ_{k,j}(ξ_{k,j});
(∀k ∈ {m+1,...,s}) G_k = Y_k, g_k = β_k ‖·‖, and (∀i ∈ {1,...,m}) L_{k,i} = R_{k,i}.    (6.6)

Proof. Let us note that, by [10, Corollary 12.30], ϕ is a convex differentiable function with

(∀x = (x_i)_{1≤i≤m} ∈ ⊕_{i=1}^{m} H_i) ∇ϕ(x) = γ( x − P_C x ) = ( γ( x_i − P_{C_i} x_i ) )_{1≤i≤m}.    (6.7)

Since Id − P_C is firmly nonexpansive [10, Proposition 4.8], ∇ϕ is γ-cocoercive.
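Equation (6.7) is the only smooth ingredient of the coupling term, and it is inexpensive to evaluate. The sketch below (Python/NumPy; the box and the Euclidean ball standing in for the sets C_i are illustrative assumptions, not part of Problem 6.1) computes ∇ϕ(x) = γ(x − P_C x) componentwise and checks numerically the firm-nonexpansiveness inequality of Id − P_{C_i} invoked above.

```python
import numpy as np

# Sketch of the coupling gradient (6.7): for C = C_1 x ... x C_m,
# grad phi(x_1,...,x_m) = (gamma * (x_i - P_{C_i} x_i))_{1<=i<=m}.
# The concrete sets below (a box and a Euclidean ball) are assumed examples.

def project_box(x, lo=-1.0, hi=1.0):
    # P_C for the box C = [lo, hi]^d.
    return np.clip(x, lo, hi)

def project_ball(x, radius=0.5):
    # P_C for the closed Euclidean ball C = B(0; radius).
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def grad_phi(xs, projections, gamma):
    # Componentwise evaluation of grad phi = gamma * (Id - P_C).
    return [gamma * (x - P(x)) for x, P in zip(xs, projections)]

rng = np.random.default_rng(0)
projs = [project_box, project_ball]
x = [rng.standard_normal(4), rng.standard_normal(4)]
y = [rng.standard_normal(4), rng.standard_normal(4)]

# Firm nonexpansiveness of Id - P_C ([10, Proposition 4.8]), componentwise:
# <u - v, (Id-P)u - (Id-P)v> >= ||(Id-P)u - (Id-P)v||^2.
for u, v, P in zip(x, y, projs):
    ru, rv = u - P(u), v - P(v)
    assert np.dot(u - v, ru - rv) >= np.dot(ru - rv, ru - rv) - 1e-12

g = grad_phi(x, projs, gamma=2.0)
assert len(g) == 2 and all(gi.shape == (4,) for gi in g)
```

In the splitting iteration this gradient is handled by the forward (explicit) step, so only projections onto the individual sets C_i are ever needed, never a projection involving the whole coupled problem.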
Next, for every k ∈ {1,...,s}, G_k is a real Hilbert space, l_k ∈ Γ_0(G_k), and, by [27, Example 2.19], g_k ∈ Γ_0(G_k). Hence, the conditions imposed on the functions in Problem 5.1 are satisfied. Now we have

(∀k ∈ {1,...,s})(∀v ∈ G_k) (l_k □ g_k)(v) = inf_{w ∈ G_k} ( l_k(w) + g_k(v − w) ) = g_k(v).    (6.8)
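The identity (6.8) is what makes the conjugate functions g_k* tractable: the algorithm derived in this section only evaluates proximity operators of these conjugates, namely coordinatewise prox operators of φ_{k,j}* and a projection onto a ball (cf. (6.18)–(6.19)). A small numerical sketch, in which the choice φ = |·| is an assumed example not fixed by the paper, with the conjugate prox obtained through the Moreau decomposition:

```python
import numpy as np

# Prox operators of conjugates, as used in the dual updates.  phi = |.| is an
# assumed example: then phi* = iota_{[-1,1]} and prox_{t phi*} is a clip.

def prox_abs(x, t):
    # prox_{t|.|}: soft-thresholding with threshold t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_conjugate(x, t, prox_f):
    # Moreau decomposition: prox_{t f*}(x) = x - t * prox_{f/t}(x/t).
    return x - t * prox_f(x / t, 1.0 / t)

def project_ball(v, beta):
    # P_{B(0;beta)} v = beta * v / max{beta, ||v||}, as in (6.19).
    return beta * v / max(beta, np.linalg.norm(v))

xi = np.array([-2.0, -0.3, 0.0, 0.7, 3.0])
t = 0.5
# For phi = |.|, prox_{t phi*} must coincide with the clip to [-1, 1].
p = prox_conjugate(xi, t, prox_abs)
assert np.allclose(p, np.clip(xi, -1.0, 1.0))

v = np.array([3.0, 4.0])  # ||v|| = 5
assert np.allclose(project_ball(v, 2.0), np.array([1.2, 1.6]))
assert np.allclose(project_ball(v, 10.0), v)
```

For a general φ_{k,j}, the same Moreau decomposition reduces the prox of γφ_{k,j}* to a prox of φ_{k,j} itself, so no conjugate ever has to be computed explicitly.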

Therefore, in view of (6.2) and (6.6), we have

(∀(x_i)_{1≤i≤m}) Σ_{k=1}^{m} (l_k □ g_k)( Σ_{i=1}^{m} L_{k,i} x_i − r_k ) = Σ_{i=1}^{m} g_i(F_i x_i) = Σ_{i=1}^{m} Σ_{j∈K} φ_{i,j}( ⟨x_i | o_{i,j}⟩ ).    (6.9)

We derive from (6.9), (6.6), and (6.8) that (5.1) reduces to (6.4). For every k ∈ {m+1,...,s}, let B(0;β_k) be the closed ball of Y_k centered at 0 with radius β_k. Using [10, Example 13.3(v)], [10, Proposition 13.27], and [10, Example 13.23], we obtain

(∀k ∈ {m+1,...,s}) g_k* = (β_k ‖·‖)* = ι_{B(0;β_k)}    and    (∀i ∈ {1,...,m}) g_i* : (ξ_{i,j})_{j∈K} ↦ Σ_{j∈K} φ_{i,j}*(ξ_{i,j}),    (6.10)

and

(∀k ∈ {1,...,s}) l_k* = ι_{{0}}* = 0.    (6.11)

Moreover,

ϕ* = ( γ d_C²/2 )* = σ_C + ‖·‖²/(2γ),  and hence  ϕ* □ f* : (y_i)_{1≤i≤m} ↦ Σ_{i=1}^{m} ( ( σ_{C_i} + ‖·‖²/(2γ) ) □ f_i* )(y_i).    (6.12)

We derive from (6.9), (6.6), (6.10), (6.11), and (6.12) that (5.3) reduces to (6.5).

Lemma 6.2 allows us to solve Problem 6.1 by means of Algorithm 5.3. More precisely:

Algorithm 6.3 Let ε ∈ ]0, min{1,γ}[, let (λ_n)_{n∈ℕ} be a sequence in [ε,1], and let (γ_i)_{1≤i≤s+m} be a finite sequence in [ε,+∞[ such that

( 2γ − ε ) ( 1 − sqrt( max_{1≤i≤m} ( γ_i γ_{m+i} µ_i + Σ_{k=m+1}^{s} γ_i γ_{m+k} ‖R_{k,i}‖² ) ) ) ≥ max_{1≤i≤m, 1≤k≤s} { γ_i, γ_{m+k} }.    (6.13)

For every i ∈ {1,...,m}, let (α_{i,n,j})_{j∈K} (n ∈ ℕ) be sequences in ℝ such that

Σ_{n∈ℕ} ( Σ_{j∈K} |α_{i,n,j}|² )^{1/2} < +∞,    (6.14)

let (a_{i,n})_{n∈ℕ} be an absolutely summable sequence in H, let (λ_{i,n})_{n∈ℕ} be a sequence in ]0,1[, and, for every k ∈ {1,...,s}, let (λ_{m+k,n})_{n∈ℕ} be a sequence in ]0,1[ such that

Σ_{n∈ℕ} ( |λ_{i,n} − λ_n| + |λ_{m+k,n} − λ_n| ) < +∞.    (6.15)

Let (x_{i,0})_{1≤i≤m} ∈ H_1 ⊕ ··· ⊕ H_m, for every i ∈ {1,...,m} let (ξ_{i,0,j})_{j∈K} ∈ ℓ²(K), and let (v_{k,0})_{m+1≤k≤s} ∈ G_{m+1} ⊕ ··· ⊕ G_s. Set

For n = 0,1,...
    For i = 1,...,m
        p_{i,n} = prox_{γ_i f_i} ( x_{i,n} − γ_i ( Σ_{j∈K} ξ_{i,n,j} o_{i,j} + Σ_{k=m+1}^{s} R_{k,i}* v_{k,n} + γ( x_{i,n} − P_{C_i} x_{i,n} ) ) ) + a_{i,n}
        y_{i,n} = 2 p_{i,n} − x_{i,n}
        x_{i,n+1} = x_{i,n} + λ_{i,n} ( p_{i,n} − x_{i,n} )
    For k = 1,...,m
        For every j ∈ K
            ξ_{k,n+1,j} = ξ_{k,n,j} + λ_{m+k,n} ( prox_{γ_{m+k} φ_{k,j}*} ( ξ_{k,n,j} + γ_{m+k} ⟨y_{k,n} | o_{k,j}⟩ ) + α_{k,n,j} − ξ_{k,n,j} )
    For k = m+1,...,s
        v_{k,n+1} = v_{k,n} + λ_{m+k,n} ( β_k ( v_{k,n} + γ_{m+k}( Σ_{i=1}^{m} R_{k,i} y_{i,n} − r_k ) ) / max{ β_k, ‖ v_{k,n} + γ_{m+k}( Σ_{i=1}^{m} R_{k,i} y_{i,n} − r_k ) ‖ } − v_{k,n} ).    (6.16)

Corollary 6.4 Suppose that (6.4) has at least one solution and that (0,...,0,r_{m+1},...,r_s) belongs to the strong relative interior of

E = { ( Σ_{i=1}^{m} L_{k,i} x_i − v_k )_{1≤k≤s}  |  (∀i ∈ {1,...,m}) x_i ∈ dom f_i,  (∀k ∈ {1,...,m}) v_k ∈ ℓ²(K) with Σ_{j∈K} φ_{k,j}(v_{k,j}) < +∞,  and (∀k ∈ {m+1,...,s}) v_k ∈ Y_k },    (6.17)

where L_{k,i} is defined as in (6.6). Let (x_{1,n},...,x_{m,n})_{n∈ℕ} and (ξ_{1,n},...,ξ_{m,n},v_{m+1,n},...,v_{s,n})_{n∈ℕ} be sequences generated by Algorithm 6.3. Then (x_{1,n},...,x_{m,n}) ⇀ (x_1,...,x_m), a solution to (6.4), and (ξ_{1,n},...,ξ_{m,n},v_{m+1,n},...,v_{s,n}) ⇀ (ξ_1,...,ξ_m,v_{m+1},...,v_s), a solution to (6.5). Furthermore, if C_j = {0} for some j ∈ {1,...,m}, then x_{j,n} → x_j.

Proof. For every i ∈ {1,...,m} and every j ∈ K, we have φ_{i,j} ≥ φ_{i,j}(0) = 0. Therefore, we derive from (6.10) and [10, Proposition 23.31] that

(∀ξ = (ξ_j)_{j∈K} ∈ ℓ²(K)) prox_{g_i*} ξ = ( prox_{φ_{i,j}*} ξ_j )_{j∈K}.    (6.18)

Next, for every k ∈ {m+1,...,s}, using (6.10) again, we have

(∀v ∈ G_k) prox_{g_k*} v = P_{B(0;β_k)} v = β_k v / max{ β_k, ‖v‖ }.    (6.19)

In view of (6.18), (6.19), (6.7), and the definition of (L_{k,i})_{1≤k≤s, 1≤i≤m} in (6.6), algorithm (6.16) is a special case of algorithm (5.4) with

(∀n ∈ ℕ)(∀i ∈ {1,...,m})(∀k ∈ {1,...,s}) U_{i,n} = γ_i Id, V_{k,n} = γ_{m+k} Id, c_{i,n} = 0, d_{k,n} = 0, and b_{i,n} = (α_{i,n,j})_{j∈K}.    (6.20)

Moreover, we derive from (6.14) that the sequences (b_{i,n})_{n∈ℕ} (1 ≤ i ≤ m) are absolutely summable, and from (6.13) that (3.5) holds. Finally, since (5.1) has at least one solution and (0,...,0,r_{m+1},...,r_s)

belongs to the strong relative interior of E, it follows from Remark 5.5(i) that (5.2) holds. To sum up, all the specific conditions of Algorithm 5.3 and Corollary 5.4 are satisfied. Therefore, the conclusions follow from Corollary 5.4(i)–(iii).

Acknowledgement. I thank Professor Patrick L. Combettes for bringing this problem to my attention and for helpful discussions.

References

[1] H. Attouch, J. Bolte, P. Redont, and A. Soubeyran, Alternating proximal algorithms for weakly coupled convex minimization problems. Applications to dynamical games and PDE's, J. Convex Anal., vol. 15.
[2] H. Attouch, L. M. Briceño-Arias, and P. L. Combettes, A parallel splitting method for coupled monotone inclusions, SIAM J. Control Optim., vol. 48.
[3] H. Attouch and M. Théra, A general duality principle for the sum of two operators, J. Convex Anal., vol. 3.
[4] L. M. Briceño-Arias and P. L. Combettes, Monotone operator methods for Nash equilibria in non-potential games, in: Computational and Analytical Mathematics (D. Bailey, H. H. Bauschke, P. Borwein, F. Garvan, M. Théra, J. Vanderwerff, and H. Wolkowicz, eds.), Springer, New York.
[5] L. M. Briceño-Arias and P. L. Combettes, Convex variational formulation with smooth coupling for multicomponent signal decomposition and recovery, Numer. Math. Theory Methods Appl., vol. 2.
[6] L. M. Briceño-Arias and P. L. Combettes, A monotone+skew splitting model for composite monotone inclusions in duality, SIAM J. Optim., vol. 21.
[7] L. M. Briceño-Arias, P. L. Combettes, J.-C. Pesquet, and N. Pustelnik, Proximal algorithms for multicomponent image recovery problems, J. Math. Imaging Vision, vol. 41.
[8] J.-B. Baillon and G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones, Israel J. Math., vol. 26.
[9] H. H. Bauschke and P. L. Combettes, The Baillon-Haddad theorem revisited, J. Convex Anal., vol. 17.
[10] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, New York.
[11] J. F. Bonnans, J. Ch. Gilbert, C. Lemaréchal, and C. A. Sagastizábal, A family of variable metric proximal methods, Math. Programming, vol. 68.
[12] R. I. Boţ and C. Hendrich, A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators.

[13] J. V. Burke and M. Qian, A variable metric proximal point algorithm for monotone operators, SIAM J. Control Optim., vol. 37.
[14] J. V. Burke and M. Qian, On the superlinear convergence of the variable metric proximal point algorithm using Broyden and BFGS matrix secant updating, Math. Program., vol. 88.
[15] J.-F. Cai, R. H. Chan, L. Shen, and Z. Shen, Convergence analysis of tight framelet approach for missing data recovery, Adv. Comput. Math., vol. 31.
[16] J.-F. Cai, R. H. Chan, and Z. Shen, Simultaneous cartoon and texture inpainting, Inverse Probl. Imaging, vol. 4.
[17] G. H-G. Chen and R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., vol. 7.
[18] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, vol. 53.
[19] P. L. Combettes, Systems of structured monotone inclusions: duality, algorithms, and applications.
[20] P. L. Combettes and J.-C. Pesquet, Proximal thresholding algorithm for minimization over orthonormal bases, SIAM J. Optim., vol. 18.
[21] P. L. Combettes and J.-C. Pesquet, A proximal decomposition method for solving convex variational inverse problems, Inverse Problems, vol. 24, 27 pp.
[22] P. L. Combettes and J.-C. Pesquet, Proximal splitting methods in signal processing, in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke et al., eds.), Springer, New York.
[23] P. L. Combettes and J.-C. Pesquet, Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum monotone operators, Set-Valued Var. Anal., vol. 20.
[24] P. L. Combettes, Đinh Dũng, and B. C. Vũ, Dualization of signal recovery problems, Set-Valued Var. Anal., vol. 18.
[25] P. L. Combettes and B. C. Vũ, Variable metric quasi-Fejér monotonicity, Nonlinear Anal., vol. 78.
[26] P. L. Combettes and B. C. Vũ, Variable metric forward-backward splitting with applications to monotone inclusions in duality, Optimization, to appear.
[27] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., vol. 4.
[28] J. Eckstein and B. F. Svaiter, General projective splitting methods for sums of maximal monotone operators, SIAM J. Control Optim., vol. 48.

[29] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York.
[30] D. Gabay, Applications of the method of multipliers to variational inequalities, in: Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary Value Problems (M. Fortin and R. Glowinski, eds.), North-Holland, Amsterdam.
[31] R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, SIAM, Philadelphia.
[32] B. Mercier, Topics in Finite Element Solution of Elliptic Problems (Lectures on Mathematics, no. 63), Tata Institute of Fundamental Research, Bombay.
[33] U. Mosco, Dual variational inequalities, J. Math. Anal. Appl., vol. 40.
[34] C. Lemaréchal and C. Sagastizábal, Variable metric bundle methods: from conceptual to implementable forms, Math. Program., vol. 76.
[35] L. A. Parente, P. A. Lotito, and M. V. Solodov, A class of inexact variable metric proximal point algorithms, SIAM J. Optim., vol. 19.
[36] B. T. Polyak, Introduction to Optimization, Optimization Software Inc., New York.
[37] J.-C. Pesquet and N. Pustelnik, A parallel inertial proximal optimization method, Pacific J. Optim., vol. 8.
[38] L. Qi and X. Chen, A preconditioning proximal Newton method for nondifferentiable convex optimization, Math. Program., vol. 76.
[39] H. Raguet, J. Fadili, and G. Peyré, Generalized forward-backward splitting, SIAM J. Imaging Sci., to appear.
[40] R. T. Rockafellar, Duality and stability in extremum problems involving convex functions, Pacific J. Math., vol. 21.
[41] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., vol. 14.
[42] P. Tseng, Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming, Math. Programming, vol. 48.
[43] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim., vol. 29.
[44] B. C. Vũ, A splitting algorithm for dual monotone inclusions involving cocoercive operators, Adv. Comput. Math., vol. 38.
[45] B. C. Vũ, A variable metric extension of the forward-backward-forward algorithm for monotone operators, Numer. Funct. Anal. Optim., to appear.
[46] D. L. Zhu and P. Marcotte, Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities, SIAM J. Optim., vol. 6.


More information

SIGNAL RECOVERY BY PROXIMAL FORWARD-BACKWARD SPLITTING

SIGNAL RECOVERY BY PROXIMAL FORWARD-BACKWARD SPLITTING Multiscale Model. Simul. To appear SIGNAL RECOVERY BY PROXIMAL FORWARD-BACKWARD SPLITTING PATRICK L. COMBETTES AND VALÉRIE R. WAJS Abstract. We show that various inverse problems in signal recovery can

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

A Unified Approach to Proximal Algorithms using Bregman Distance

A Unified Approach to Proximal Algorithms using Bregman Distance A Unified Approach to Proximal Algorithms using Bregman Distance Yi Zhou a,, Yingbin Liang a, Lixin Shen b a Department of Electrical Engineering and Computer Science, Syracuse University b Department

More information

M. Marques Alves Marina Geremia. November 30, 2017

M. Marques Alves Marina Geremia. November 30, 2017 Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng s F-B four-operator splitting method for solving monotone inclusions M. Marques Alves Marina Geremia November

More information

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES Dan Butnariu and Elena Resmerita Abstract. In this paper we establish criteria for the stability of the proximal mapping Prox f ϕ =( ϕ+ f)

More information

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces Filomat 28:7 (2014), 1525 1536 DOI 10.2298/FIL1407525Z Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Convergence Theorems for

More information

arxiv: v1 [math.fa] 30 Jun 2014

arxiv: v1 [math.fa] 30 Jun 2014 Maximality of the sum of the subdifferential operator and a maximally monotone operator arxiv:1406.7664v1 [math.fa] 30 Jun 2014 Liangjin Yao June 29, 2014 Abstract The most important open problem in Monotone

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

On convergence rate of the Douglas-Rachford operator splitting method

On convergence rate of the Douglas-Rachford operator splitting method On convergence rate of the Douglas-Rachford operator splitting method Bingsheng He and Xiaoming Yuan 2 Abstract. This note provides a simple proof on a O(/k) convergence rate for the Douglas- Rachford

More information

Sum of two maximal monotone operators in a general Banach space is maximal

Sum of two maximal monotone operators in a general Banach space is maximal arxiv:1505.04879v1 [math.fa] 19 May 2015 Sum of two maximal monotone operators in a general Banach space is maximal S R Pattanaik, D K Pradhan and S Pradhan May 20, 2015 Abstract In a real Banach space,

More information

Existence and convergence theorems for the split quasi variational inequality problems on proximally smooth sets

Existence and convergence theorems for the split quasi variational inequality problems on proximally smooth sets Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (206), 2364 2375 Research Article Existence and convergence theorems for the split quasi variational inequality problems on proximally smooth

More information

On the Brézis - Haraux - type approximation in nonreflexive Banach spaces

On the Brézis - Haraux - type approximation in nonreflexive Banach spaces On the Brézis - Haraux - type approximation in nonreflexive Banach spaces Radu Ioan Boţ Sorin - Mihai Grad Gert Wanka Abstract. We give Brézis - Haraux - type approximation results for the range of the

More information

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane Conference ADGO 2013 October 16, 2013 Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions Marc Lassonde Université des Antilles et de la Guyane Playa Blanca, Tongoy, Chile SUBDIFFERENTIAL

More information

A Primal-dual Three-operator Splitting Scheme

A Primal-dual Three-operator Splitting Scheme Noname manuscript No. (will be inserted by the editor) A Primal-dual Three-operator Splitting Scheme Ming Yan Received: date / Accepted: date Abstract In this paper, we propose a new primal-dual algorithm

More information

Variational inequalities for set-valued vector fields on Riemannian manifolds

Variational inequalities for set-valued vector fields on Riemannian manifolds Variational inequalities for set-valued vector fields on Riemannian manifolds Chong LI Department of Mathematics Zhejiang University Joint with Jen-Chih YAO Chong LI (Zhejiang University) VI on RM 1 /

More information

1. Introduction A SYSTEM OF NONCONVEX VARIATIONAL INEQUALITIES IN BANACH SPACES

1. Introduction A SYSTEM OF NONCONVEX VARIATIONAL INEQUALITIES IN BANACH SPACES Commun. Optim. Theory 2016 (2016), Article ID 20 Copyright c 2016 Mathematical Research Press. A SYSTEM OF NONCONVEX VARIATIONAL INEQUALITIES IN BANACH SPACES JONG KYU KIM 1, SALAHUDDIN 2, 1 Department

More information

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,

More information

BREGMAN DISTANCES, TOTALLY

BREGMAN DISTANCES, TOTALLY BREGMAN DISTANCES, TOTALLY CONVEX FUNCTIONS AND A METHOD FOR SOLVING OPERATOR EQUATIONS IN BANACH SPACES DAN BUTNARIU AND ELENA RESMERITA January 18, 2005 Abstract The aim of this paper is twofold. First,

More information

Abstract In this article, we consider monotone inclusions in real Hilbert spaces

Abstract In this article, we consider monotone inclusions in real Hilbert spaces Noname manuscript No. (will be inserted by the editor) A new splitting method for monotone inclusions of three operators Yunda Dong Xiaohuan Yu Received: date / Accepted: date Abstract In this article,

More information

Self-dual Smooth Approximations of Convex Functions via the Proximal Average

Self-dual Smooth Approximations of Convex Functions via the Proximal Average Chapter Self-dual Smooth Approximations of Convex Functions via the Proximal Average Heinz H. Bauschke, Sarah M. Moffat, and Xianfu Wang Abstract The proximal average of two convex functions has proven

More information

Weak sharp minima on Riemannian manifolds 1

Weak sharp minima on Riemannian manifolds 1 1 Chong Li Department of Mathematics Zhejiang University Hangzhou, 310027, P R China cli@zju.edu.cn April. 2010 Outline 1 2 Extensions of some results for optimization problems on Banach spaces 3 4 Some

More information

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT EMIL ERNST AND MICHEL VOLLE Abstract. This article addresses a general criterion providing a zero duality gap for convex programs in the setting of

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 2, 26 Abstract. We begin by considering second order dynamical systems of the from

More information

Firmly Nonexpansive Mappings and Maximally Monotone Operators: Correspondence and Duality

Firmly Nonexpansive Mappings and Maximally Monotone Operators: Correspondence and Duality Firmly Nonexpansive Mappings and Maximally Monotone Operators: Correspondence and Duality Heinz H. Bauschke, Sarah M. Moffat, and Xianfu Wang June 8, 2011 Abstract The notion of a firmly nonexpansive mapping

More information

Convex Feasibility Problems

Convex Feasibility Problems Laureate Prof. Jonathan Borwein with Matthew Tam http://carma.newcastle.edu.au/drmethods/paseky.html Spring School on Variational Analysis VI Paseky nad Jizerou, April 19 25, 2015 Last Revised: May 6,

More information

Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings

Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings Mathematical Programming manuscript No. (will be inserted by the editor) Near Equality, Near Convexity, Sums of Maximally Monotone Operators, and Averages of Firmly Nonexpansive Mappings Heinz H. Bauschke

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o Journal of Convex Analysis Volume 6 (1999), No. 1, pp. xx-xx. cheldermann Verlag A HYBRID PROJECTION{PROXIMAL POINT ALGORITHM M. V. Solodov y and B. F. Svaiter y January 27, 1997 (Revised August 24, 1998)

More information

On pseudomonotone variational inequalities

On pseudomonotone variational inequalities An. Şt. Univ. Ovidius Constanţa Vol. 14(1), 2006, 83 90 On pseudomonotone variational inequalities Silvia Fulina Abstract Abstract. There are mainly two definitions of pseudomonotone mappings. First, introduced

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES

ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 124, Number 11, November 1996 ON THE RANGE OF THE SUM OF MONOTONE OPERATORS IN GENERAL BANACH SPACES HASSAN RIAHI (Communicated by Palle E. T. Jorgensen)

More information

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Int. Journal of Math. Analysis, Vol. 3, 2009, no. 12, 549-561 Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Nguyen Buong Vietnamse Academy of Science

More information

Maximal monotone operators are selfdual vector fields and vice-versa

Maximal monotone operators are selfdual vector fields and vice-versa Maximal monotone operators are selfdual vector fields and vice-versa Nassif Ghoussoub Department of Mathematics, University of British Columbia, Vancouver BC Canada V6T 1Z2 nassif@math.ubc.ca February

More information

Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems

Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems Robert Hesse and D. Russell Luke December 12, 2012 Abstract We consider projection algorithms for solving

More information

Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings

Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings Mathematica Moravica Vol. 20:1 (2016), 125 144 Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings G.S. Saluja Abstract. The aim of

More information

The Brezis-Browder Theorem in a general Banach space

The Brezis-Browder Theorem in a general Banach space The Brezis-Browder Theorem in a general Banach space Heinz H. Bauschke, Jonathan M. Borwein, Xianfu Wang, and Liangjin Yao March 30, 2012 Abstract During the 1970s Brezis and Browder presented a now classical

More information

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (06), 5536 5543 Research Article On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

More information

Existence and Approximation of Fixed Points of. Bregman Nonexpansive Operators. Banach Spaces

Existence and Approximation of Fixed Points of. Bregman Nonexpansive Operators. Banach Spaces Existence and Approximation of Fixed Points of in Reflexive Banach Spaces Department of Mathematics The Technion Israel Institute of Technology Haifa 22.07.2010 Joint work with Prof. Simeon Reich General

More information

STRONG CONVERGENCE OF AN ITERATIVE METHOD FOR VARIATIONAL INEQUALITY PROBLEMS AND FIXED POINT PROBLEMS

STRONG CONVERGENCE OF AN ITERATIVE METHOD FOR VARIATIONAL INEQUALITY PROBLEMS AND FIXED POINT PROBLEMS ARCHIVUM MATHEMATICUM (BRNO) Tomus 45 (2009), 147 158 STRONG CONVERGENCE OF AN ITERATIVE METHOD FOR VARIATIONAL INEQUALITY PROBLEMS AND FIXED POINT PROBLEMS Xiaolong Qin 1, Shin Min Kang 1, Yongfu Su 2,

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Math 273a: Optimization Overview of First-Order Optimization Algorithms

Math 273a: Optimization Overview of First-Order Optimization Algorithms Math 273a: Optimization Overview of First-Order Optimization Algorithms Wotao Yin Department of Mathematics, UCLA online discussions on piazza.com 1 / 9 Typical flow of numerical optimization Optimization

More information

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup International Mathematical Forum, Vol. 11, 2016, no. 8, 395-408 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2016.6220 The Split Hierarchical Monotone Variational Inclusions Problems and

More information

Iterative common solutions of fixed point and variational inequality problems

Iterative common solutions of fixed point and variational inequality problems Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,

More information

Brézis - Haraux - type approximation of the range of a monotone operator composed with a linear mapping

Brézis - Haraux - type approximation of the range of a monotone operator composed with a linear mapping Brézis - Haraux - type approximation of the range of a monotone operator composed with a linear mapping Radu Ioan Boţ, Sorin-Mihai Grad and Gert Wanka Faculty of Mathematics Chemnitz University of Technology

More information

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (206), 424 4225 Research Article Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Jong Soo

More information

Monotone Linear Relations: Maximality and Fitzpatrick Functions

Monotone Linear Relations: Maximality and Fitzpatrick Functions Monotone Linear Relations: Maximality and Fitzpatrick Functions Heinz H. Bauschke, Xianfu Wang, and Liangjin Yao November 4, 2008 Dedicated to Stephen Simons on the occasion of his 70 th birthday Abstract

More information

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem Iranian Journal of Mathematical Sciences and Informatics Vol. 12, No. 1 (2017), pp 35-46 DOI: 10.7508/ijmsi.2017.01.004 Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point

More information