A FORWARD-BACKWARD VIEW OF SOME PRIMAL-DUAL OPTIMIZATION METHODS IN IMAGE RECOVERY


P. L. Combettes,¹ L. Condat,² J.-C. Pesquet,³ and B. C. Vũ⁴

¹ Sorbonne Universités, UPMC Univ. Paris 06, Laboratoire Jacques-Louis Lions, Paris, France
² University of Grenoble Alpes, GIPSA-lab, St Martin d'Hères, France
³ Université Paris-Est, LIGM, UMR CNRS 8049, Marne-la-Vallée, France
⁴ LCSL, Istituto Italiano di Tecnologia and MIT, Genova, Italy

ABSTRACT

A wide array of image recovery problems can be abstracted into the problem of minimizing a sum of composite convex functions in a Hilbert space. To solve such problems, primal-dual proximal approaches have been developed which provide efficient solutions to large-scale optimization problems. The objective of this paper is to show that a number of existing algorithms can be derived from a general form of the forward-backward algorithm applied in a suitable product space. Our approach also allows us to develop useful extensions of existing algorithms by introducing a variable metric. An illustration to image restoration is provided.

Index Terms: convex optimization, duality, parallel computing, proximal algorithm, variational methods, image recovery.

1. INTRODUCTION

Many image recovery problems can be formulated in Hilbert spaces H and G_i (1 ≤ i ≤ m) as structured optimization problems of the form

  minimize_{x ∈ H}  Σ_{i=1}^m g_i(L_i x),   (1)

where, for every i ∈ {1,…,m}, g_i is a proper lower semicontinuous convex function from G_i to ]−∞,+∞] and L_i is a bounded linear operator from H to G_i. For example, the functions (g_i ∘ L_i)_{1≤i≤m} may model data fidelity terms, smooth or nonsmooth measures of regularity, or hard constraints on the solution. In recent years, many algorithms have been developed to solve such a problem by taking advantage of recent advances in convex optimization, especially in the development of proximal tools (see [12, 29] and the references therein). In image processing, however, solving such a problem still poses a number of conceptual and numerical challenges.
First of all, one often looks for methods which have the ability to split the problem by activating each of the functions through elementary processing steps which can be computed in parallel. This makes it possible to reduce the complexity of the original problem and to benefit from existing parallel computing architectures. Secondly, it is often useful to design algorithms which can exploit, in a flexible manner, the structure of the problem. In particular, some of the functions may be Lipschitz differentiable, in which case they should be exploited through their gradient rather than through their proximity operator, which is usually harder to implement (examples of proximity operators with closed-form expressions can be found in [6, 12]). In some problems, the functions (g_i)_{1≤i≤m} can be expressed as the infimal convolution of simpler functions (see [9] and the references therein). Last but not least, in image recovery, the operators (L_i)_{1≤i≤m} may be of very large size, so that their inversion is costly (e.g., in reconstruction problems). Finding algorithms which do not require inversions of these operators is thus of paramount importance. Note that not all existing convex optimization algorithms have these desirable properties. For example, the Alternating Direction Method of Multipliers (ADMM) [18, 17, 20] requires a stringent assumption of invertibility of the involved linear operator. Parallel versions of ADMM [28] and the related Parallel Proximal Algorithm (PPXA) [11, 25] usually necessitate a linear inversion to be performed at each iteration. Also, early primal-dual algorithms [4, 5, 7, 10, 16, 21] did not make it possible to handle smooth functions through their gradients. Only recently have primal-dual methods been proposed with this feature. Such work was initiated in [13] in the line of [4], and subsequent developments can be found in [2, 3, 8, 9, 15, 27, 30]. (This work was supported by the CNRS MASTODONS project grant 2013 MesureHD.)
As will be seen in the present paper, another advantage of these approaches is that they can be coupled with variable metric strategies which can potentially accelerate their convergence. In Section 2, we provide some background on convex analysis and monotone operator theory. In Section 3, we introduce a general form of the forward-backward algorithm which uses a variable metric. This algorithm is employed in Section 4 to develop a versatile family of primal-dual proximal methods. Several particular instances of this framework are discussed. Finally, we provide illustrating numerical results in Section 5.

2. NOTATION AND BACKGROUND

Monotone operator theory [1] provides a both insightful and elegant framework for dealing with convex optimization problems and for developing new solution algorithms that could not be devised using purely variational tools. We summarize a number of related concepts that will be needed. Throughout, H, G, and G_i (1 ≤ i ≤ m) are real Hilbert spaces. We denote the scalar product of a Hilbert space by ⟨· | ·⟩ and the associated norm by ‖·‖. The symbol ⇀ denotes weak convergence,¹ and Id denotes the identity operator. We denote by B(H,G) the space of bounded linear operators from H to G.

¹ In a finite-dimensional space, weak convergence is equivalent to strong convergence.

We set S(H) = {L ∈ B(H,H) | L = L*}, where L* denotes the adjoint of L. The Loewner partial ordering on S(H) is denoted by ≽. For every α ∈ [0,+∞[, we set P_α(H) = {U ∈ S(H) | U ≽ α Id}, and we denote by U^{1/2} the square root of U ∈ P_α(H). Moreover, for every U ∈ P_α(H) with α > 0, we define the norm ‖x‖_U = √⟨Ux | x⟩. We denote by G = G_1 ⊕ ⋯ ⊕ G_m the Hilbert direct sum of the Hilbert spaces (G_i)_{1≤i≤m}, i.e., their product space equipped with the scalar product (x, y) ↦ Σ_{i=1}^m ⟨x_i | y_i⟩, where x = (x_i)_{1≤i≤m} and y = (y_i)_{1≤i≤m} denote generic elements in G.

Let A: H → 2^H be a set-valued operator. We denote by gra A = {(x,u) ∈ H × H | u ∈ Ax} the graph of A, by zer A = {x ∈ H | 0 ∈ Ax} the set of zeros of A, and by ran A = {u ∈ H | (∃ x ∈ H) u ∈ Ax} its range. The inverse of A is A⁻¹: H → 2^H: u ↦ {x ∈ H | u ∈ Ax}, and the resolvent of A is J_A = (Id + A)⁻¹. Moreover, A is monotone if

  (∀(x,y) ∈ H × H)(∀(u,v) ∈ Ax × Ay)  ⟨x − y | u − v⟩ ≥ 0,   (2)

and maximally monotone if it is monotone and there exists no monotone operator B: H → 2^H such that gra A ⊂ gra B and A ≠ B. An operator B: H → H is β-cocoercive for some β ∈ ]0,+∞[ if

  (∀x ∈ H)(∀y ∈ H)  ⟨x − y | Bx − By⟩ ≥ β ‖Bx − By‖².   (3)

The conjugate of a function f: H → ]−∞,+∞] is

  f*: H → [−∞,+∞]: u ↦ sup_{x ∈ H} (⟨x | u⟩ − f(x)),   (4)

and the infimal convolution of f with g: H → ]−∞,+∞] is

  f □ g: H → [−∞,+∞]: x ↦ inf_{y ∈ H} (f(y) + g(x − y)).   (5)

The class of lower semicontinuous convex functions f: H → ]−∞,+∞] such that dom f = {x ∈ H | f(x) < +∞} ≠ ∅ is denoted by Γ₀(H). If f ∈ Γ₀(H), then f* ∈ Γ₀(H) and the subdifferential of f is the maximally monotone operator

  ∂f: H → 2^H: x ↦ {u ∈ H | (∀y ∈ H) ⟨y − x | u⟩ + f(x) ≤ f(y)}.   (6)

Let U ∈ P_α(H) for some α ∈ ]0,+∞[. The proximity operator of f ∈ Γ₀(H) relative to the metric induced by U is [22, Section XV.4]

  prox_f^U: H → H: x ↦ argmin_{y ∈ H} (f(y) + ½ ‖x − y‖²_U).   (7)

When U = Id, we retrieve the standard definition of the proximity operator [1, 24]. Let C be a nonempty subset of H. The indicator function of C is defined on H as

  ι_C: x ↦ 0 if x ∈ C;  +∞ if x ∉ C.   (8)

Finally, ℓ¹₊(N) denotes the set of summable sequences in [0,+∞[.
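As a concrete numerical illustration of (7) (ours, not part of the paper), take f = λ‖·‖₁ on H = Rⁿ and a diagonal metric U = diag(u) with u_i > 0: the minimization in (7) then separates across coordinates, and prox_f^U is soft-thresholding with coordinate-wise thresholds λ/u_i. A minimal sketch:

```python
import numpy as np

def prox_l1_diag_metric(x, lam, u):
    """Proximity operator of f = lam*||.||_1 relative to the metric U = diag(u),
    i.e. argmin_y lam*||y||_1 + 0.5*||x - y||_U^2.  The problem separates into
        min_{y_i} lam*|y_i| + (u_i/2)*(x_i - y_i)^2,
    whose solution is soft-thresholding with threshold lam / u_i."""
    t = lam / u
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([3.0, -0.2, 1.0])
print(prox_l1_diag_metric(x, 0.5, np.array([1.0, 1.0, 2.0])))  # → [2.5, 0.0, 0.75]
```

Note how a larger diagonal weight u_i (a "heavier" metric in that coordinate) shrinks the corresponding entry less, which is precisely the preconditioning effect exploited later in the paper. With u ≡ 1 we recover the standard proximity operator of λ‖·‖₁.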
3. A GENERAL FORM OF THE FORWARD-BACKWARD ALGORITHM

Optimization problems can often be reduced to finding a zero of a sum of two maximally monotone operators A and B acting on H. When B is cocoercive (see (3)), a useful algorithm to solve this problem is the forward-backward algorithm, which can be formulated in a general form involving a variable metric, as shown in the next result.

Theorem 3.1 Let α ∈ ]0,+∞[, let β ∈ ]0,+∞[, let A: H → 2^H be maximally monotone, and let B: H → H be cocoercive. Let (η_n)_{n∈N} ∈ ℓ¹₊(N), and let (V_n)_{n∈N} be a sequence in P_α(H) such that

  sup_{n∈N} ‖V_n‖ < +∞ and (∀n ∈ N) (1 + η_n) V_{n+1} ≽ V_n,   (9)

and such that V_n^{1/2} B V_n^{1/2} is β-cocoercive. Let (λ_n)_{n∈N} be a sequence in ]0,1] such that inf_{n∈N} λ_n > 0, and let (γ_n)_{n∈N} be a sequence in ]0,2β[ such that inf_{n∈N} γ_n > 0 and sup_{n∈N} γ_n < 2β. Let x_0 ∈ H, and let (a_n)_{n∈N} and (b_n)_{n∈N} be absolutely summable sequences in H. Suppose that Z = zer(A + B) ≠ ∅, and set

  (∀n ∈ N)  y_n = x_n − γ_n V_n (B x_n + b_n);  x_{n+1} = x_n + λ_n (J_{γ_n V_n A}(y_n) + a_n − x_n).   (10)

Then x_n ⇀ x for some x ∈ Z.

At iteration n, the variables a_n and b_n model numerical errors possibly arising when applying J_{γ_n V_n A} or B. Note also that, if B is μ-cocoercive with μ ∈ ]0,+∞[, one can choose β = μ (sup_{n∈N} ‖V_n‖)⁻¹, which allows us to retrieve [14, Theorem 4.1]. In the next section, we shall see how a judicious use of this result allows us to derive a variety of flexible convex optimization algorithms.

4. A VARIABLE METRIC PRIMAL-DUAL METHOD

4.1. Formulation

A wide array of optimization problems encountered in image processing are instances of the following one, which was first investigated in [13] and can be viewed as a more structured version of the minimization problem in (1):

Problem 4.1 Let z ∈ H, let m be a strictly positive integer, let f ∈ Γ₀(H), and let h: H → R be convex and differentiable with a Lipschitzian gradient. For every i ∈ {1,…,m}, let r_i ∈ G_i, let g_i ∈ Γ₀(G_i), let l_i ∈ Γ₀(G_i) be strongly convex,² and suppose that 0 ≠ L_i ∈ B(H,G_i). Suppose that

  z ∈ ran(∂f + Σ_{i=1}^m L_i* (∂g_i □ ∂l_i)(L_i · − r_i) + ∇h).   (11)

Consider the problem

  minimize_{x ∈ H}  f(x) + Σ_{i=1}^m (g_i □ l_i)(L_i x − r_i) + h(x) − ⟨x | z⟩,   (12)

and the dual problem

  minimize_{v_1 ∈ G_1, …, v_m ∈ G_m}  (f* □ h*)(z − Σ_{i=1}^m L_i* v_i) + Σ_{i=1}^m (g_i*(v_i) + l_i*(v_i) + ⟨v_i | r_i⟩).   (13)

² For every i ∈ {1,…,m}, l_i is ν_i⁻¹-strongly convex with ν_i ∈ ]0,+∞[ if and only if l_i* is ν_i-Lipschitz differentiable [1, Theorem 18.15].
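To make iteration (10) concrete, here is a minimal sketch (ours, not the paper's) of its simplest constant-metric case: V_n ≡ Id, λ_n ≡ 1, a_n = b_n = 0, with A = ∂(λ‖·‖₁) and B = ∇h for h = ½‖· − b‖², so that B is 1-cocoercive (β = 1) and the resolvent J_{γA} is soft-thresholding. This is the classical iterative soft-thresholding scheme:

```python
import numpy as np

def soft(x, t):
    # resolvent J_{tA} of A = subdifferential of ||.||_1, i.e. prox of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(b, lam, gamma=1.0, n_iter=200):
    """Iteration (10) with A = d(lam*||.||_1), B = grad h for h = 0.5*||x - b||^2
    (so B: x -> x - b is 1-cocoercive, beta = 1), V_n = Id, lambda_n = 1:
        y_n     = x_n - gamma * (x_n - b)   # forward (gradient) step on B
        x_{n+1} = J_{gamma A}(y_n)          # backward (proximal) step on A
    Convergence requires 0 < gamma < 2*beta = 2."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        y = x - gamma * (x - b)
        x = soft(y, gamma * lam)
    return x

b = np.array([2.0, -0.3, 0.9])
x = forward_backward(b, lam=0.5)
# the minimizer of lam*||x||_1 + 0.5*||x - b||^2 is soft(b, lam) = [1.5, 0.0, 0.4]
```

The variable-metric version of Theorem 3.1 replaces Id by the operators V_n, which reweights both the gradient step and the resolvent, as exploited by the primal-dual schemes below.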

Note that in the special case when l_i = ι_{{0}}, g_i □ l_i reduces to g_i in (12). Let us now examine how Problem 4.1 can be reformulated from the standpoint of monotone operators. To this end, let us define g ∈ Γ₀(G), l ∈ Γ₀(G), and L ∈ B(H,G) by

  g: v ↦ Σ_{i=1}^m g_i(v_i),  l: v ↦ Σ_{i=1}^m l_i(v_i),  and  L: x ↦ (L_1 x, …, L_m x).   (14)

Let us now introduce the product space K = H ⊕ G and the operators

  A: K → 2^K: (x, v) ↦ (∂f(x) − z + L* v) × (−L x + ∂g*(v) + r)   (15)

  B: K → K: (x, v) ↦ (∇h(x), ∇l*(v)),   (16)

where r = (r_1, …, r_m). The operator A can be shown to be maximally monotone, whereas B is cocoercive. A key observation in this context is that, if there exists (x, v) ∈ K such that (x, v) ∈ zer(A + B), then (x, v) is a pair of primal-dual solutions to Problem 4.1 [13]. This connection with the construction of a zero of A + B makes it possible to apply a forward-backward algorithm as discussed in Section 3, by using a linear operator V_n ∈ B(K,K) to change the metric at each iteration n. Depending on the form of this operator, various algorithms can be obtained.

4.2. A first class of primal-dual algorithms

Let α ∈ ]0,+∞[, and let (U_n)_{n∈N} be a sequence in P_α(H) such that (∀n ∈ N) U_{n+1} ≽ U_n. For every i ∈ {1,…,m}, let (U_{i,n})_{n∈N} be a sequence in P_α(G_i) such that (∀n ∈ N) U_{i,n+1} ≽ U_{i,n}. A first possible choice for (V_n)_{n∈N} is given by

  (∀n ∈ N)  V_n⁻¹: (x, v) ↦ (U_n⁻¹ x − L* v, −L x + Ũ_n⁻¹ v),   (17)

where

  Ũ_n: G → G: (v_1, …, v_m) ↦ (U_{1,n} v_1, …, U_{m,n} v_m).   (18)

The following result constitutes a direct extension of [14, Example 6.4]:

Proposition 4.2 Let x_0 ∈ H, and let (a_n)_{n∈N} and (c_n)_{n∈N} be absolutely summable sequences in H. For every i ∈ {1,…,m}, let v_{i,0} ∈ G_i, and let (b_{i,n})_{n∈N} and (d_{i,n})_{n∈N} be absolutely summable sequences in G_i. For every n ∈ N, let μ_n ∈ ]0,+∞[ be a Lipschitz constant of U_n^{1/2} ∇h U_n^{1/2} and, for every i ∈ {1,…,m}, let ν_{i,n} ∈ ]0,+∞[ be a Lipschitz constant of U_{i,n}^{1/2} ∇l_i* U_{i,n}^{1/2}. Let (λ_n)_{n∈N} be a sequence in ]0,1] such that inf_{n∈N} λ_n > 0. For every n ∈ N, set

  δ_n = (Σ_{i=1}^m ‖U_{i,n}^{1/2} L_i U_n^{1/2}‖²)^{−1/2} − 1,   (19)

and suppose that

  inf_{n∈N} δ_n / ((1 + δ_n) max{μ_n, ν_{1,n}, …, ν_{m,n}}) > 1/2.   (20)

Set

  For n = 0, 1, …
    p_n = prox_f^{U_n⁻¹}(x_n − U_n(Σ_{i=1}^m L_i* v_{i,n} + ∇h(x_n) + c_n − z)) + a_n
    y_n = 2 p_n − x_n
    x_{n+1} = x_n + λ_n (p_n − x_n)
    For i = 1, …, m
      q_{i,n} = prox_{g_i*}^{U_{i,n}⁻¹}(v_{i,n} + U_{i,n}(L_i y_n − ∇l_i*(v_{i,n}) − d_{i,n} − r_i)) + b_{i,n}
      v_{i,n+1} = v_{i,n} + λ_n (q_{i,n} − v_{i,n}).   (21)

Then (x_n)_{n∈N} converges weakly to a solution to (12), for every i ∈ {1,…,m} (v_{i,n})_{n∈N} converges weakly to some v_i ∈ G_i, and (v_1, …, v_m) is a solution to (13).

In the special case when U_n ≡ τ Id with τ ∈ ]0,+∞[ and, for every i ∈ {1,…,m}, U_{i,n} ≡ σ_i Id with σ_i ∈ ]0,+∞[, we recover the parallel algorithm proposed in [30]. Variants of this algorithm where, for every i ∈ {1,…,m}, l_i = ι_{{0}} are also investigated in [15]; in this case, less restrictive assumptions on the choice of (τ, σ_1, …, σ_m) can be made. Note that this algorithm itself can be viewed as a generalization of the algorithm which constitutes the main topic of [5, 16, 21], designated by some authors as PDHG. A preconditioned version of this algorithm was proposed in [26], corresponding to the case when m = 1, (∀n ∈ N) U_n and U_{1,n} are constant matrices, and no error term is taken into account. Algorithm (21) where, for every n ∈ N, λ_n ≡ 1, U_n and (U_{i,n})_{1≤i≤m} are diagonal matrices, h = 0, and (∀i ∈ {1,…,m}) l_i = ι_{{0}}, also appears to be closely related to the adaptive method in [19].

4.3. A second class of primal-dual algorithms

Let α ∈ ]0,+∞[, and let (U_n)_{n∈N} be a sequence in P_α(H) such that (∀n ∈ N) U_{n+1} ≽ U_n. For every i ∈ {1,…,m}, let (U_{i,n})_{n∈N} be a sequence in P_α(G_i) such that (∀n ∈ N) U_{i,n+1} ≽ U_{i,n}. A second possible choice for (V_n)_{n∈N} is given by the following diagonal form:

  (∀n ∈ N)  V_n⁻¹: (x, v) ↦ (U_n⁻¹ x, Ũ_n⁻¹ v − L U_n L* v),   (22)

where Ũ_n is given by (18). The following result can then be deduced from Theorem 3.1. Its proof is skipped due to lack of space.

Proposition 4.3 Let x_0 ∈ H, and let (c_n)_{n∈N} be an absolutely summable sequence in H. For every i ∈ {1,…,m}, let v_{i,0} ∈ G_i, and let (b_{i,n})_{n∈N} and (d_{i,n})_{n∈N} be absolutely summable sequences in G_i.
For every n ∈ N, let μ_n ∈ ]0,+∞[ be a Lipschitz constant of U_n^{1/2} ∇h U_n^{1/2} and, for every i ∈ {1,…,m}, let ν_{i,n} ∈ ]0,+∞[ be a Lipschitz constant of U_{i,n}^{1/2} ∇l_i* U_{i,n}^{1/2}. Let (λ_n)_{n∈N} be a sequence in ]0,1] such that inf_{n∈N} λ_n > 0. For every n ∈ N, set

  ζ_n = 1 − Σ_{i=1}^m ‖U_{i,n}^{1/2} L_i U_n^{1/2}‖²,   (23)

and suppose that

  inf_{n∈N} ζ_n / max{ζ_n μ_n, ν_{1,n}, …, ν_{m,n}} > 1/2.   (24)

[Fig. 1. Original image x̄ (a), noisy image w_1 (SNR = 5.87 dB) (b), blurred image w_2 (SNR = 16.63 dB) (c), and restored image x̂ (SNR = 21.61 dB) (d).]

[Fig. 2. Normalized norm of the error on the iterates vs computation time in seconds for Experiment 1 (blue, dash-dot line) and Experiment 2 (red, continuous line).]

Set

  For n = 0, 1, …
    s_n = x_n − U_n (∇h(x_n) + c_n − z)
    y_n = s_n − U_n Σ_{i=1}^m L_i* v_{i,n}
    For i = 1, …, m
      q_{i,n} = prox_{g_i*}^{U_{i,n}⁻¹}(v_{i,n} + U_{i,n}(L_i y_n − ∇l_i*(v_{i,n}) − d_{i,n} − r_i)) + b_{i,n}
      v_{i,n+1} = v_{i,n} + λ_n (q_{i,n} − v_{i,n})
    p_n = s_n − U_n Σ_{i=1}^m L_i* q_{i,n}
    x_{n+1} = x_n + λ_n (p_n − x_n).   (25)

Assume that f = 0. Then (x_n)_{n∈N} converges weakly to a solution to (12), for every i ∈ {1,…,m} (v_{i,n})_{n∈N} converges weakly to some v_i ∈ G_i, and (v_1, …, v_m) is a solution to (13).

The algorithm proposed in [23, 8] is a special case of the previous one, in the absence of errors, when m = 1, H and G_1 are finite-dimensional spaces, l_1 = ι_{{0}}, U_n ≡ τ Id with τ ∈ ]0,+∞[, U_{1,n} ≡ σ Id with σ ∈ ]0,+∞[, and no relaxation (λ_n ≡ 1) or a constant one (λ_n ≡ κ < 1) is performed.

5. APPLICATION TO IMAGE RESTORATION

We illustrate the flexibility of the proposed primal-dual algorithms on an image recovery example. Two observed images w_1 and w_2 of the same scene x̄ ∈ R^N are available (see Fig. 1(a)-(c)). The first one is corrupted with a noise with variance θ_1² = 576, while the second one has been degraded by a linear operator H ∈ R^{N×N} (7 × 7 uniform blur) and a noise with variance θ_2² = 25. The noise components are mutually statistically independent, additive, zero-mean, white, and Gaussian distributed. Note that this kind of multivariate restoration problem is encountered in some push-broom satellite imaging systems. An estimate x̂ of x̄ is computed as a solution to (12), where m = 2, z = 0, r_1 = 0, r_2 = 0,

  h: x ↦ (1/(2θ_1²)) ‖x − w_1‖² + (1/(2θ_2²)) ‖H x − w_2‖²,   (26)

  g_1 = ι_{[0,255]^N},  g_2 = κ ‖·‖_{1,2},   (27)

  f = 0,  l_1 = l_2 = ι_{{0}},   (28)

where the second function in (27) denotes the ℓ_{1,2}-norm and κ ∈ ]0,+∞[. In addition, L_1 = Id and L_2 = [G_1; G_2], where G_1 ∈ R^{N×N} and G_2 ∈ R^{N×N} are horizontal and vertical discrete gradient operators.
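As an illustration of the regularizer just defined, the term g_2(L_2 x) = κ‖[G_1; G_2]x‖_{1,2} couples the horizontal and vertical gradient components pixel-wise, which is the isotropic total variation. A minimal sketch (ours; the paper does not specify its discretization, so forward differences with zero boundary conditions are assumed here as one common convention):

```python
import numpy as np

def grad_ops(img):
    """Horizontal and vertical discrete gradients (forward differences,
    zero boundary), playing the role of G_1 x and G_2 x."""
    gh = np.zeros_like(img)
    gv = np.zeros_like(img)
    gh[:, :-1] = img[:, 1:] - img[:, :-1]
    gv[:-1, :] = img[1:, :] - img[:-1, :]
    return gh, gv

def tv_l12(img, kappa):
    """g_2(L_2 x) = kappa * ||(G_1 x, G_2 x)||_{1,2}: sum over pixels of the
    Euclidean norm of the 2-component gradient (isotropic total variation)."""
    gh, gv = grad_ops(img)
    return kappa * np.sum(np.sqrt(gh**2 + gv**2))

img = np.array([[0.0, 3.0], [4.0, 3.0]])
print(tv_l12(img, 1.0))  # → 6.0
```

Replacing the pixel-wise Euclidean norm by a sum of absolute values would give the anisotropic variant; the ℓ_{1,2} choice in (27) penalizes edge magnitude independently of orientation.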
Function g_1 introduces an a priori constraint on the range values of the target image, while function g_2 ∘ L_2 corresponds to a classical total variation regularization. The minimization problem is solved numerically by using Algorithm (25) with λ_n ≡ 1. In a first experiment, standard choices of the algorithm parameters are made by setting U_n ≡ τ Id, U_{1,n} ≡ σ_1 Id, and U_{2,n} ≡ σ_2 Id with (τ, σ_1, σ_2) ∈ ]0,+∞[³. In a second experiment, a more sophisticated choice of the metric is made. The operators (U_n)_{n∈N}, (U_{1,n})_{n∈N}, and (U_{2,n})_{n∈N} are still chosen diagonal and constant in order to facilitate the implementation of the algorithm, but the diagonal values are optimized in an empirical manner. A similar strategy was applied in [26] in the case of Algorithm (21). The regularization parameter κ has been set so as to get the highest value of the resulting signal-to-noise ratio (SNR). The restored image is displayed in Fig. 1(d). Fig. 2 shows the convergence profile of the algorithm: we plot the evolution of the normalized Euclidean distance (in log scale) between the iterates and x̂, in terms of computation time (Matlab R2011b codes running on a single-core Intel i7-2620M CPU @ 2.7 GHz with 8 GB of RAM). An approximation of x̂ obtained after 5000 iterations is used. This result illustrates the fact that an appropriate choice of the metric may be beneficial in terms of speed of convergence.
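For completeness, the following sketch (ours, not the paper's code) implements the scalar-metric special case of Algorithm (21) that recovers the method of [30]: m = 1, f = 0, l_1 = ι_{{0}}, r_1 = 0, z = 0, U_n ≡ τ Id, U_{1,n} ≡ σ Id, λ_n ≡ 1, no error terms, applied to min_x ½‖x − b‖² + κ‖Lx‖₁. The step sizes below satisfy τ(σ‖L‖² + μ/2) ≤ 1 with μ = 1 the Lipschitz constant of ∇h, a commonly used sufficient condition for this scalar case:

```python
import numpy as np

def primal_dual(L, b, kappa, tau, sigma, n_iter=5000):
    """Scalar-metric instance of the first primal-dual scheme for
    min_x 0.5*||x - b||^2 + kappa*||L x||_1, with h(x) = 0.5*||x - b||^2
    (so grad h = x - b), g = kappa*||.||_1, f = 0, l = indicator of {0}.
    The prox of sigma*g^* is the projection onto {v : |v| <= kappa}."""
    x = np.zeros_like(b)
    v = np.zeros(L.shape[0])
    for _ in range(n_iter):
        p = x - tau * (L.T @ v + (x - b))                 # primal step (f = 0: prox is Id)
        y = 2.0 * p - x                                   # extrapolation y_n = 2 p_n - x_n
        v = np.clip(v + sigma * (L @ y), -kappa, kappa)   # dual step: prox of g*
        x = p                                             # relaxation lambda_n = 1
    return x, v

# Sanity check with L = Id: the minimizer of 0.5*||x - b||^2 + kappa*||x||_1
# is soft-thresholding of b at level kappa.
b = np.array([2.0, 0.1])
x, v = primal_dual(np.eye(2), b, kappa=0.5, tau=0.9, sigma=0.5)
```

The diagonal-metric variants used in the experiments above replace τ and σ by per-pixel weights, at essentially no extra cost per iteration.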

6. REFERENCES

[1] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces. New York: Springer, 2011.
[2] S. R. Becker and P. L. Combettes, "An algorithm for splitting parallel sums of linearly composed monotone operators, with applications to signal recovery," J. Nonlinear Convex Anal., vol. 15, no. 1, Jan. 2014.
[3] R. I. Boţ and C. Hendrich, "Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization," J. Math. Imaging Vision, 2013, accepted.
[4] L. M. Briceño-Arias and P. L. Combettes, "A monotone + skew splitting model for composite monotone inclusions in duality," SIAM J. Optim., vol. 21, no. 4, Oct. 2011.
[5] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," J. Math. Imaging Vision, vol. 40, no. 1, 2011.
[6] C. Chaux, P. L. Combettes, J.-C. Pesquet, and V. R. Wajs, "A variational formulation for frame-based inverse problems," Inverse Problems, vol. 23, no. 4, Jun. 2007.
[7] G. Chen and M. Teboulle, "A proximal-based decomposition method for convex minimization problems," Math. Program., vol. 64, 1994.
[8] P. Chen, J. Huang, and X. Zhang, "A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration," Inverse Problems, vol. 29, no. 2, 2013.
[9] P. L. Combettes, "Systems of structured monotone inclusions: duality, algorithms, and applications," SIAM J. Optim., vol. 23, no. 4, Dec. 2013.
[10] P. L. Combettes, D. Dũng, and B. C. Vũ, "Dualization of signal recovery problems," Set-Valued Var. Anal., vol. 18, Dec. 2010.
[11] P. L. Combettes and J.-C. Pesquet, "A proximal decomposition method for solving convex variational inverse problems," Inverse Problems, vol. 24, no. 6, Dec. 2008.
[12] P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, Eds. New York: Springer-Verlag, 2011.
[13] P. L. Combettes and J.-C. Pesquet, "Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators," Set-Valued Var. Anal., vol. 20, no. 2, Jun. 2012.
[14] P. L. Combettes and B. C. Vũ, "Variable metric forward-backward splitting with applications to monotone inclusions in duality," Optimization, 2012, published online.
[15] L. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms," J. Optim. Theory Appl., vol. 158, no. 2, Aug. 2013.
[16] E. Esser, X. Zhang, and T. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM J. Imaging Sci., vol. 3, no. 4, 2010.
[17] M. A. T. Figueiredo and R. D. Nowak, "Deconvolution of Poissonian images using variable splitting and augmented Lagrangian optimization," in IEEE Workshop on Stat. Sig. Proc., Cardiff, United Kingdom, Aug.-Sept. 2009.
[18] M. Fortin and R. Glowinski, Eds., Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. Amsterdam: North-Holland/Elsevier Science, 1983.
[19] T. Goldstein, E. Esser, and R. Baraniuk, "Adaptive primal-dual hybrid gradient methods for saddle-point problems," 2013.
[20] T. Goldstein and S. Osher, "The split Bregman method for l1-regularized problems," SIAM J. Imaging Sci., vol. 2, 2009.
[21] B. He and X. Yuan, "Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective," SIAM J. Imaging Sci., vol. 5, no. 1, 2012.
[22] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms, Part II: Advanced Theory and Bundle Methods. New York: Springer-Verlag, 1993.
[23] I. Loris and C. Verhoeven, "On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty," Inverse Problems, vol. 27, no. 12, 2011.
[24] J.-J. Moreau, "Proximité et dualité dans un espace hilbertien," Bull. Soc. Math. France, vol. 93, 1965.
[25] J.-C. Pesquet and N. Pustelnik, "A parallel inertial proximal optimization method," Pac. J. Optim., vol. 8, no. 2, Apr. 2012.
[26] T. Pock and A. Chambolle, "Diagonal preconditioning for first order primal-dual algorithms in convex optimization," in Proc. IEEE Int. Conf. Comput. Vis., Barcelona, Spain, Nov. 2011.
[27] A. Repetti, E. Chouzenoux, and J.-C. Pesquet, "A penalized weighted least squares approach for restoring data corrupted with signal-dependent noise," in Proc. Eur. Sig. and Image Proc. Conference, Bucharest, Romania, Aug. 2012.
[28] S. Setzer, G. Steidl, and T. Teuber, "Deblurring Poissonian images by split Bregman techniques," J. Visual Communication and Image Representation, vol. 21, no. 3, Apr. 2010.
[29] S. Sra, S. Nowozin, and S. J. Wright, Optimization for Machine Learning. Cambridge, MA: MIT Press, 2011.
[30] B. C. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators," Adv. Comput. Math., vol. 38, no. 3, Apr. 2013.

arXiv:1406.5439v1 [math.OC] 20 Jun 2014

An Algorithm for Splitting Parallel Sums of Linearly Composed Monotone Operators, with Applications to Signal Recovery An Algorithm for Splitting Parallel Sums of Linearly Composed Monotone Operators, with Applications to Signal Recovery Stephen R. Becker and Patrick L. Combettes UPMC Université Paris 06 Laboratoire Jacques-Louis

More information

A General Framework for a Class of Primal-Dual Algorithms for TV Minimization

A General Framework for a Class of Primal-Dual Algorithms for TV Minimization A General Framework for a Class of Primal-Dual Algorithms for TV Minimization Ernie Esser UCLA 1 Outline A Model Convex Minimization Problem Main Idea Behind the Primal Dual Hybrid Gradient (PDHG) Method

More information

1 Introduction and preliminaries

1 Introduction and preliminaries Proximal Methods for a Class of Relaxed Nonlinear Variational Inclusions Abdellatif Moudafi Université des Antilles et de la Guyane, Grimaag B.P. 7209, 97275 Schoelcher, Martinique abdellatif.moudafi@martinique.univ-ag.fr

More information

On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting

On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting Daniel O Connor Lieven Vandenberghe September 27, 2017 Abstract The primal-dual hybrid gradient (PDHG) algorithm

More information

Adaptive Primal Dual Optimization for Image Processing and Learning

Adaptive Primal Dual Optimization for Image Processing and Learning Adaptive Primal Dual Optimization for Image Processing and Learning Tom Goldstein Rice University tag7@rice.edu Ernie Esser University of British Columbia eesser@eos.ubc.ca Richard Baraniuk Rice University

More information

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane

Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions. Marc Lassonde Université des Antilles et de la Guyane Conference ADGO 2013 October 16, 2013 Brøndsted-Rockafellar property of subdifferentials of prox-bounded functions Marc Lassonde Université des Antilles et de la Guyane Playa Blanca, Tongoy, Chile SUBDIFFERENTIAL

More information

Learning with stochastic proximal gradient

Learning with stochastic proximal gradient Learning with stochastic proximal gradient Lorenzo Rosasco DIBRIS, Università di Genova Via Dodecaneso, 35 16146 Genova, Italy lrosasco@mit.edu Silvia Villa, Băng Công Vũ Laboratory for Computational and

More information

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim Korean J. Math. 25 (2017), No. 4, pp. 469 481 https://doi.org/10.11568/kjm.2017.25.4.469 GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS Jong Kyu Kim, Salahuddin, and Won Hee Lim Abstract. In this

More information

Victoria Martín-Márquez

Victoria Martín-Márquez A NEW APPROACH FOR THE CONVEX FEASIBILITY PROBLEM VIA MONOTROPIC PROGRAMMING Victoria Martín-Márquez Dep. of Mathematical Analysis University of Seville Spain XIII Encuentro Red de Análisis Funcional y

More information

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties

More information

Inertial Douglas-Rachford splitting for monotone inclusion problems

Inertial Douglas-Rachford splitting for monotone inclusion problems Inertial Douglas-Rachford splitting for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek Christopher Hendrich January 5, 2015 Abstract. We propose an inertial Douglas-Rachford splitting algorithm

More information

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT

PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT Linear and Nonlinear Analysis Volume 1, Number 1, 2015, 1 PARALLEL SUBGRADIENT METHOD FOR NONSMOOTH CONVEX OPTIMIZATION WITH A SIMPLE CONSTRAINT KAZUHIRO HISHINUMA AND HIDEAKI IIDUKA Abstract. In this

More information

On nonexpansive and accretive operators in Banach spaces

On nonexpansive and accretive operators in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (206), 424 4225 Research Article Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Jong Soo

More information

Proximal methods. S. Villa. October 7, 2014

Proximal methods. S. Villa. October 7, 2014 Proximal methods S. Villa October 7, 2014 1 Review of the basics Often machine learning problems require the solution of minimization problems. For instance, the ERM algorithm requires to solve a problem

More information

A Monotone + Skew Splitting Model for Composite Monotone Inclusions in Duality

A Monotone + Skew Splitting Model for Composite Monotone Inclusions in Duality A Monotone + Skew Splitting Model for Composite Monotone Inclusions in Duality arxiv:1011.5517v1 [math.oc] 24 Nov 2010 Luis M. Briceño-Arias 1,2 and Patrick L. Combettes 1 1 UPMC Université Paris 06 Laboratoire

More information

A characterization of essentially strictly convex functions on reflexive Banach spaces

A characterization of essentially strictly convex functions on reflexive Banach spaces A characterization of essentially strictly convex functions on reflexive Banach spaces Michel Volle Département de Mathématiques Université d Avignon et des Pays de Vaucluse 74, rue Louis Pasteur 84029

More information

Math 273a: Optimization Overview of First-Order Optimization Algorithms

Math 273a: Optimization Overview of First-Order Optimization Algorithms Math 273a: Optimization Overview of First-Order Optimization Algorithms Wotao Yin Department of Mathematics, UCLA online discussions on piazza.com 1 / 9 Typical flow of numerical optimization Optimization

More information

Auxiliary-Function Methods in Iterative Optimization

Auxiliary-Function Methods in Iterative Optimization Auxiliary-Function Methods in Iterative Optimization Charles L. Byrne April 6, 2015 Abstract Let C X be a nonempty subset of an arbitrary set X and f : X R. The problem is to minimize f over C. In auxiliary-function

More information

Splitting methods for decomposing separable convex programs

Splitting methods for decomposing separable convex programs Splitting methods for decomposing separable convex programs Philippe Mahey LIMOS - ISIMA - Université Blaise Pascal PGMO, ENSTA 2013 October 4, 2013 1 / 30 Plan 1 Max Monotone Operators Proximal techniques

More information

Monotone operators and bigger conjugate functions

Monotone operators and bigger conjugate functions Monotone operators and bigger conjugate functions Heinz H. Bauschke, Jonathan M. Borwein, Xianfu Wang, and Liangjin Yao August 12, 2011 Abstract We study a question posed by Stephen Simons in his 2008

More information

SPARSE SIGNAL RESTORATION. 1. Introduction

SPARSE SIGNAL RESTORATION. 1. Introduction SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful

More information

Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems)

Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Donghwan Kim and Jeffrey A. Fessler EECS Department, University of Michigan

More information

Operator Splitting for Parallel and Distributed Optimization

Operator Splitting for Parallel and Distributed Optimization Operator Splitting for Parallel and Distributed Optimization Wotao Yin (UCLA Math) Shanghai Tech, SSDS 15 June 23, 2015 URL: alturl.com/2z7tv 1 / 60 What is splitting? Sun-Tzu: (400 BC) Caesar: divide-n-conquer

More information

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints Journal of Global Optimization 21: 445 455, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. 445 Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic

More information

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems

Extensions of the CQ Algorithm for the Split Feasibility and Split Equality Problems Extensions of the CQ Algorithm for the Split Feasibility Split Equality Problems Charles L. Byrne Abdellatif Moudafi September 2, 2013 Abstract The convex feasibility problem (CFP) is to find a member

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

Primal-dual fixed point algorithms for separable minimization problems and their applications in imaging

Primal-dual fixed point algorithms for separable minimization problems and their applications in imaging 1/38 Primal-dual fixed point algorithms for separable minimization problems and their applications in imaging Xiaoqun Zhang Department of Mathematics and Institute of Natural Sciences Shanghai Jiao Tong

More information

arxiv: v2 [math.oc] 27 Nov 2015

arxiv: v2 [math.oc] 27 Nov 2015 arxiv:1507.03291v2 [math.oc] 27 Nov 2015 Asynchronous Block-Iterative Primal-Dual Decomposition Methods for Monotone Inclusions Patrick L. Combettes 1 and Jonathan Eckstein 2 1 Sorbonne Universités UPMC

More information

Abstract In this article, we consider monotone inclusions in real Hilbert spaces

Abstract In this article, we consider monotone inclusions in real Hilbert spaces Noname manuscript No. (will be inserted by the editor) A new splitting method for monotone inclusions of three operators Yunda Dong Xiaohuan Yu Received: date / Accepted: date Abstract In this article,

More information

Tight Rates and Equivalence Results of Operator Splitting Schemes

Tight Rates and Equivalence Results of Operator Splitting Schemes Tight Rates and Equivalence Results of Operator Splitting Schemes Wotao Yin (UCLA Math) Workshop on Optimization for Modern Computing Joint w Damek Davis and Ming Yan UCLA CAM 14-51, 14-58, and 14-59 1

More information

Accelerated primal-dual methods for linearly constrained convex problems

Accelerated primal-dual methods for linearly constrained convex problems Accelerated primal-dual methods for linearly constrained convex problems Yangyang Xu SIAM Conference on Optimization May 24, 2017 1 / 23 Accelerated proximal gradient For convex composite problem: minimize

More information

A Dual Condition for the Convex Subdifferential Sum Formula with Applications

A Dual Condition for the Convex Subdifferential Sum Formula with Applications Journal of Convex Analysis Volume 12 (2005), No. 2, 279 290 A Dual Condition for the Convex Subdifferential Sum Formula with Applications R. S. Burachik Engenharia de Sistemas e Computacao, COPPE-UFRJ

More information

A memory gradient algorithm for l 2 -l 0 regularization with applications to image restoration

A memory gradient algorithm for l 2 -l 0 regularization with applications to image restoration A memory gradient algorithm for l 2 -l 0 regularization with applications to image restoration E. Chouzenoux, A. Jezierska, J.-C. Pesquet and H. Talbot Université Paris-Est Lab. d Informatique Gaspard

More information

Uniqueness of DRS as the 2 Operator Resolvent-Splitting and Impossibility of 3 Operator Resolvent-Splitting

Uniqueness of DRS as the 2 Operator Resolvent-Splitting and Impossibility of 3 Operator Resolvent-Splitting Uniqueness of DRS as the 2 Operator Resolvent-Splitting and Impossibility of 3 Operator Resolvent-Splitting Ernest K Ryu February 21, 2018 Abstract Given the success of Douglas-Rachford splitting (DRS),

More information

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem Iranian Journal of Mathematical Sciences and Informatics Vol. 12, No. 1 (2017), pp 35-46 DOI: 10.7508/ijmsi.2017.01.004 Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

A General Iterative Method for Constrained Convex Minimization Problems in Hilbert Spaces

A General Iterative Method for Constrained Convex Minimization Problems in Hilbert Spaces A General Iterative Method for Constrained Convex Minimization Problems in Hilbert Spaces MING TIAN Civil Aviation University of China College of Science Tianjin 300300 CHINA tianming963@6.com MINMIN LI

More information

Coordinate Update Algorithm Short Course Proximal Operators and Algorithms

Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 36 Why proximal? Newton s method: for C 2 -smooth, unconstrained problems allow

More information

Denoising of NIRS Measured Biomedical Signals

Denoising of NIRS Measured Biomedical Signals Denoising of NIRS Measured Biomedical Signals Y V Rami reddy 1, Dr.D.VishnuVardhan 2 1 M. Tech, Dept of ECE, JNTUA College of Engineering, Pulivendula, A.P, India 2 Assistant Professor, Dept of ECE, JNTUA

More information

Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping

Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping March 0, 206 3:4 WSPC Proceedings - 9in x 6in secondorderanisotropicdamping206030 page Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping Radu

More information

Dual and primal-dual methods

Dual and primal-dual methods ELE 538B: Large-Scale Optimization for Data Science Dual and primal-dual methods Yuxin Chen Princeton University, Spring 2018 Outline Dual proximal gradient method Primal-dual proximal gradient method

More information

Proximal tools for image reconstruction in dynamic Positron Emission Tomography

Proximal tools for image reconstruction in dynamic Positron Emission Tomography Proximal tools for image reconstruction in dynamic Positron Emission Tomography Nelly Pustelnik 1 joint work with Caroline Chaux 2, Jean-Christophe Pesquet 3, and Claude Comtat 4 1 Laboratoire de Physique,

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

A Unified Approach to Proximal Algorithms using Bregman Distance

A Unified Approach to Proximal Algorithms using Bregman Distance A Unified Approach to Proximal Algorithms using Bregman Distance Yi Zhou a,, Yingbin Liang a, Lixin Shen b a Department of Electrical Engineering and Computer Science, Syracuse University b Department

More information

arxiv: v3 [math.oc] 18 Apr 2012

arxiv: v3 [math.oc] 18 Apr 2012 A class of Fejér convergent algorithms, approximate resolvents and the Hybrid Proximal-Extragradient method B. F. Svaiter arxiv:1204.1353v3 [math.oc] 18 Apr 2012 Abstract A new framework for analyzing

More information

Variational Image Restoration

Variational Image Restoration Variational Image Restoration Yuling Jiao yljiaostatistics@znufe.edu.cn School of and Statistics and Mathematics ZNUFE Dec 30, 2014 Outline 1 1 Classical Variational Restoration Models and Algorithms 1.1

More information

On the acceleration of the double smoothing technique for unconstrained convex optimization problems

On the acceleration of the double smoothing technique for unconstrained convex optimization problems On the acceleration of the double smoothing technique for unconstrained convex optimization problems Radu Ioan Boţ Christopher Hendrich October 10, 01 Abstract. In this article we investigate the possibilities

More information

Generalized greedy algorithms.

Generalized greedy algorithms. Generalized greedy algorithms. François-Xavier Dupé & Sandrine Anthoine LIF & I2M Aix-Marseille Université - CNRS - Ecole Centrale Marseille, Marseille ANR Greta Séminaire Parisien des Mathématiques Appliquées

More information

An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions

An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions Radu Ioan Boţ Ernö Robert Csetnek Szilárd Csaba László October, 1 Abstract. We propose a forward-backward

More information

Dual methods for the minimization of the total variation

Dual methods for the minimization of the total variation 1 / 30 Dual methods for the minimization of the total variation Rémy Abergel supervisor Lionel Moisan MAP5 - CNRS UMR 8145 Different Learning Seminar, LTCI Thursday 21st April 2016 2 / 30 Plan 1 Introduction

More information

A first-order primal-dual algorithm with linesearch

A first-order primal-dual algorithm with linesearch A first-order primal-dual algorithm with linesearch Yura Malitsky Thomas Pock arxiv:608.08883v2 [math.oc] 23 Mar 208 Abstract The paper proposes a linesearch for a primal-dual method. Each iteration of

More information

A Relaxed Explicit Extragradient-Like Method for Solving Generalized Mixed Equilibria, Variational Inequalities and Constrained Convex Minimization

A Relaxed Explicit Extragradient-Like Method for Solving Generalized Mixed Equilibria, Variational Inequalities and Constrained Convex Minimization , March 16-18, 2016, Hong Kong A Relaxed Explicit Extragradient-Like Method for Solving Generalized Mixed Equilibria, Variational Inequalities and Constrained Convex Minimization Yung-Yih Lur, Lu-Chuan

More information

An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems

An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems Bingsheng He Feng Ma 2 Xiaoming Yuan 3 January 30, 206 Abstract. The primal-dual hybrid gradient method

More information

Maximal monotone operators are selfdual vector fields and vice-versa

Maximal monotone operators are selfdual vector fields and vice-versa Maximal monotone operators are selfdual vector fields and vice-versa Nassif Ghoussoub Department of Mathematics, University of British Columbia, Vancouver BC Canada V6T 1Z2 nassif@math.ubc.ca February

More information

Sequential Unconstrained Minimization: A Survey

Sequential Unconstrained Minimization: A Survey Sequential Unconstrained Minimization: A Survey Charles L. Byrne February 21, 2013 Abstract The problem is to minimize a function f : X (, ], over a non-empty subset C of X, where X is an arbitrary set.

More information

Solving DC Programs that Promote Group 1-Sparsity

Solving DC Programs that Promote Group 1-Sparsity Solving DC Programs that Promote Group 1-Sparsity Ernie Esser Contains joint work with Xiaoqun Zhang, Yifei Lou and Jack Xin SIAM Conference on Imaging Science Hong Kong Baptist University May 14 2014

More information

Douglas-Rachford Splitting: Complexity Estimates and Accelerated Variants

Douglas-Rachford Splitting: Complexity Estimates and Accelerated Variants 53rd IEEE Conference on Decision and Control December 5-7, 204. Los Angeles, California, USA Douglas-Rachford Splitting: Complexity Estimates and Accelerated Variants Panagiotis Patrinos and Lorenzo Stella

More information

A Majorize-Minimize subspace approach for l 2 -l 0 regularization with applications to image processing

A Majorize-Minimize subspace approach for l 2 -l 0 regularization with applications to image processing A Majorize-Minimize subspace approach for l 2 -l 0 regularization with applications to image processing Emilie Chouzenoux emilie.chouzenoux@univ-mlv.fr Université Paris-Est Lab. d Informatique Gaspard

More information

On the Convergence of the Iterative Shrinkage/Thresholding Algorithm With a Weakly Convex Penalty

On the Convergence of the Iterative Shrinkage/Thresholding Algorithm With a Weakly Convex Penalty On the Convergence of the Iterative Shrinkage/Thresholding Algorithm With a Weakly Convex Penalty İlker Bayram arxiv:5.78v [math.oc] Nov 5 Abstract We consider the iterative shrinkage/thresholding algorithm

More information

Gauge optimization and duality

Gauge optimization and duality 1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel

More information

2D HILBERT-HUANG TRANSFORM. Jérémy Schmitt, Nelly Pustelnik, Pierre Borgnat, Patrick Flandrin

2D HILBERT-HUANG TRANSFORM. Jérémy Schmitt, Nelly Pustelnik, Pierre Borgnat, Patrick Flandrin 2D HILBERT-HUANG TRANSFORM Jérémy Schmitt, Nelly Pustelnik, Pierre Borgnat, Patrick Flandrin Laboratoire de Physique de l Ecole Normale Suprieure de Lyon, CNRS and Université de Lyon, France first.last@ens-lyon.fr

More information

A Tutorial on Primal-Dual Algorithm

A Tutorial on Primal-Dual Algorithm A Tutorial on Primal-Dual Algorithm Shenlong Wang University of Toronto March 31, 2016 1 / 34 Energy minimization MAP Inference for MRFs Typical energies consist of a regularization term and a data term.

More information

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 6, 25 Abstract. We begin by considering second order dynamical systems of the from

More information