A Parallel Block-Coordinate Approach for Primal-Dual Splitting with Arbitrary Random Block Selection


Slide 1: A Parallel Block-Coordinate Approach for Primal-Dual Splitting with Arbitrary Random Block Selection

Jean-Christophe Pesquet, Laboratoire d'Informatique Gaspard Monge, CNRS, Univ. Paris-Est Marne-la-Vallée, France. EUSIPCO 2015, Nice.

Slide 2: In collaboration with A. Repetti and E. Chouzenoux.

Slides 3–4: Variational formulation

OBJECTIVE: Find a solution to the convex optimization problem
$$\underset{x\in\mathcal{H}}{\text{minimize}}\ \Phi(x),$$
where $\mathcal{H}$ is the signal space (a separable real Hilbert space) and $\Phi\in\Gamma_0(\mathcal{H})$, the class of convex lower-semicontinuous functions from $\mathcal{H}$ to $\left]-\infty,+\infty\right]$ with a nonempty domain.

In the context of large-scale problems, how can we find an optimization algorithm able to deliver a reliable numerical solution in a reasonable time, with low memory requirements?

Slides 5–7: Fundamental tools in convex analysis

The inf-convolution of $f\colon\mathcal{H}\to\left]-\infty,+\infty\right]$ and $g\colon\mathcal{H}\to\left]-\infty,+\infty\right]$ is
$$f\,\square\,g\colon\mathcal{H}\to\left[-\infty,+\infty\right]\colon x\mapsto\inf_{y\in\mathcal{H}}\ f(y)+g(x-y).$$
PARTICULAR CASE: $f\,\square\,\iota_{\{0\}}=f$, where, for $C\subset\mathcal{H}$,
$$(\forall x\in\mathcal{H})\quad \iota_C(x)=\begin{cases}0 & \text{if } x\in C,\\ +\infty & \text{otherwise.}\end{cases}$$

The conjugate of $f\colon\mathcal{H}\to\left]-\infty,+\infty\right]$ is $f^*\colon\mathcal{H}\to\left[-\infty,+\infty\right]$ such that
$$(\forall u\in\mathcal{H})\quad f^*(u)=\sup_{x\in\mathcal{H}}\ \langle x\mid u\rangle-f(x).$$

Let $f\in\Gamma_0(\mathcal{H})$ and let $U\colon\mathcal{H}\to\mathcal{H}$ be a strongly positive self-adjoint linear operator. The proximity operator $\operatorname{prox}^U_f(x)$ of $f$ at $x\in\mathcal{H}$, relative to the metric induced by $U$, is the unique vector $\hat{y}\in\mathcal{H}$ such that
$$f(\hat{y})+\tfrac12\|\hat{y}-x\|_U^2=\inf_{y\in\mathcal{H}}\ f(y)+\tfrac12\langle y-x\mid U(y-x)\rangle.$$
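
To make the proximity operator concrete, here is a minimal Python sketch (an illustration, not part of the talk) for $f=\gamma\|\cdot\|_1$ under a diagonal metric $U=\mathrm{Diag}(u_1,\dots,u_d)$: the minimization separates across coordinates and the prox reduces to soft-thresholding with componentwise thresholds $\gamma/u_i$.

```python
import numpy as np

def prox_l1_diag_metric(x, gamma, u):
    """Prox of f = gamma * ||.||_1 in the metric induced by U = diag(u), u > 0:
    argmin_y gamma * ||y||_1 + 0.5 * (y - x)^T U (y - x).
    The diagonal metric makes the problem separable, so each coordinate is
    soft-thresholded with its own threshold gamma / u_i."""
    thresh = gamma / u
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

x = np.array([1.5, -0.2, 0.7])
u = np.array([1.0, 2.0, 0.5])
print(prox_l1_diag_metric(x, gamma=0.5, u=u))  # [1.0, 0.0, 0.0]
```

With $U=\mathrm{Id}$ this recovers the usual soft-thresholding operator; non-identity metrics are exactly what the preconditioners $W$ and $U_k$ introduced below exploit.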

Slides 8–9: Parallel proximal primal-dual problem

PRIMAL PROBLEM:
$$\underset{x\in\mathcal{H}}{\text{minimize}}\ h(x)+\sum_{k=1}^{q}(g_k\,\square\,l_k)(L_k x).$$

DUAL PROBLEM:
$$\underset{v_1\in\mathcal{G}_1,\dots,v_q\in\mathcal{G}_q}{\text{minimize}}\ h^*\Big(-\sum_{k=1}^{q}L_k^* v_k\Big)+\sum_{k=1}^{q}\big(g_k^*(v_k)+l_k^*(v_k)\big),$$
where
- $h\colon\mathcal{H}\to\mathbb{R}$ is convex and $\mu$-Lipschitz differentiable with $\mu\in\left]0,+\infty\right[$;
- $g_k\in\Gamma_0(\mathcal{G}_k)$, with $\mathcal{G}_k$ a separable real Hilbert space;
- $l_k\in\Gamma_0(\mathcal{G}_k)$ is $\nu_k$-strongly convex with $\nu_k\in\left]0,+\infty\right[$, so that $l_k^*$ is $(1/\nu_k)$-Lipschitz differentiable;
- $L_k\colon\mathcal{H}\to\mathcal{G}_k$ is linear and bounded.

DIFFICULTIES:
- Large-size optimization problem.
- The functions $g_k$ are often nonsmooth (indicator functions of constraint sets, sparsity measures, ...).
- The linear operator inversions required by standard optimization methods (e.g. ADMM) are difficult to perform due to the form of the operators $L_k$ (e.g. weighted incidence matrices of graphs).
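
For concreteness (an illustration of mine, not from the slides), take $q=1$, $h(x)=\tfrac12\|x-z\|^2$, $g=\|\cdot\|_1$ and $l=\iota_{\{0\}}$, so that $g\,\square\,l=g$ and $l^*=0$. The primal-dual pair then specializes as follows:

```latex
% Primal:  minimize_x  (1/2)||x - z||^2 + ||L x||_1 .
% With h^*(u) = (1/2)||u||^2 + <u, z>  and  g^* = iota_{ ||.||_inf <= 1 },
% the dual objective h^*(-L^* v) + g^*(v) becomes
\begin{equation*}
  \min_{\|v\|_\infty \le 1}\ \tfrac12\|L^* v\|^2 - \langle L^* v, z\rangle
  \;=\; \min_{\|v\|_\infty \le 1}\ \tfrac12\|L^* v - z\|^2 \;-\; \tfrac12\|z\|^2 .
\end{equation*}
```

Solving either problem yields the other's solution through the optimality conditions, which is exactly what the algorithm below exploits by updating primal and dual variables jointly.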

Slides 10–14: Parallel proximal primal-dual algorithm

For $n=0,1,\dots$
$$\begin{array}{l}
s_n = x_n - W\nabla h(x_n)\\[2pt]
y_n = s_n - W\sum_{k=1}^{q} L_k^* v_{k,n}\\[2pt]
\text{for } k=1,\dots,q\\[2pt]
\quad u_{k,n} = \operatorname{prox}^{U_k^{-1}}_{g_k^*}\big(v_{k,n}+U_k(L_k y_n-\nabla l_k^*(v_{k,n}))\big)\\[2pt]
\quad v_{k,n+1} = v_{k,n}+\lambda_n(u_{k,n}-v_{k,n})\\[2pt]
p_n = s_n - W\sum_{k=1}^{q} L_k^* u_{k,n}\\[2pt]
x_{n+1} = x_n+\lambda_n(p_n-x_n),
\end{array}$$
where
- $W\colon\mathcal{H}\to\mathcal{H}$ and, for every $k\in\{1,\dots,q\}$, $U_k\colon\mathcal{G}_k\to\mathcal{G}_k$ are strongly positive self-adjoint bounded linear operators such that
$$\min\big\{\mu^{-1}\|W\|^{-1},\ \nu^{-1}\big(1-\|U^{1/2}LW^{1/2}\|^2\big)\big\}>\tfrac12,\qquad \nu=\max\{\nu_1\|U_1\|,\dots,\nu_q\|U_q\|\},$$
with $U$ and $L$ the block operators built from $(U_k)_{1\le k\le q}$ and $(L_k)_{1\le k\le q}$;
- for every $n\in\mathbb{N}$, $\lambda_n\in\left]0,1\right]$ with $\inf_{n\in\mathbb{N}}\lambda_n>0$.

CONVERGENCE: Assume that there exists $\overline{x}\in\mathcal{H}$ such that
$$0\in\nabla h(\overline{x})+\sum_{k=1}^{q}L_k^*\big(\partial g_k\,\square\,\partial l_k\big)(L_k\overline{x}).$$
We have: $x_n\rightharpoonup\widehat{x}$, where $\widehat{x}$ is a solution to the primal problem, and, for every $k\in\{1,\dots,q\}$, $v_{k,n}\rightharpoonup\widehat{v}_k$, where $(\widehat{v}_k)_{1\le k\le q}$ is a solution to the dual problem.
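
The following is a minimal runnable sketch (my illustration, not the authors' code) of this iteration in the simplest setting: scalar metrics $W=\tau\,\mathrm{Id}$ and $U_1=\sigma\,\mathrm{Id}$, a single dual block ($q=1$), $h(x)=\tfrac12\|x-z\|^2$, $g=\gamma\|\cdot\|_1$, and $l=\iota_{\{0\}}$ (so $\nabla l^*=0$). The step sizes satisfy the classical Condat-Vũ-type bound $\tau(\mu/2+\sigma\|L\|^2)<1$, which plays the role of the $\min\{\cdot\}>1/2$ condition above, and $\operatorname{prox}_{\sigma g^*}$ is obtained from Moreau's decomposition (here a projection onto an $\ell_\infty$ ball).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize 0.5*||x - z||^2 + gamma*||L x||_1  (q = 1, l = iota_{0}).
m, d = 40, 30
L = rng.standard_normal((m, d)) / np.sqrt(d)
z = rng.standard_normal(d)
gamma = 0.1
mu = 1.0                                   # Lipschitz constant of grad h

def grad_h(x):
    return x - z                           # h(x) = 0.5 * ||x - z||^2

def prox_sigma_gstar(w):
    # g = gamma*||.||_1, so g* = iota_{||.||_inf <= gamma}; the prox of
    # sigma*g* is the projection onto that l_inf ball (Moreau decomposition).
    return np.clip(w, -gamma, gamma)

normL = np.linalg.norm(L, 2)
sigma = 1.0
tau = 0.9 / (mu / 2 + sigma * normL**2)    # tau * (mu/2 + sigma*||L||^2) < 1
lam = 1.0                                  # relaxation parameter in ]0, 1]

x, v = np.zeros(d), np.zeros(m)
for n in range(500):
    s = x - tau * grad_h(x)                # forward (gradient) step on h
    y = s - tau * (L.T @ v)                # s_n minus W L^* v_n
    u = prox_sigma_gstar(v + sigma * (L @ y))   # dual proximal step
    v = v + lam * (u - v)
    p = s - tau * (L.T @ u)
    x = x + lam * (p - x)

print("objective:", 0.5 * np.sum((x - z)**2) + gamma * np.abs(L @ x).sum())
```

Note that no inversion involving $L$ is ever needed: only forward applications of $L$ and $L^*$, which is the point emphasized on the next slide.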

Slide 15: Proximal primal-dual algorithm

ADVANTAGES:
- No linear operator inversion.
- Can mix proximable and/or differentiable functions.
- Less restrictive convergence conditions than other primal-dual algorithms.

DISADVANTAGES:
- At each iteration, all the dual variables are updated in parallel, and the full primal variable must be updated.

Slide 16: Proximal primal-dual algorithm

BIBLIOGRAPHICAL REMARKS:
- Pioneering work in the 1950s: the Arrow-Hurwicz method.
- Methods based on the Forward-Backward iteration:
  type I: [Vũ] [Condat] (extensions of [Esser et al.] [Chambolle and Pock]);
  type II: [Combettes et al.] (extensions of [Loris and Verhoeven] [Chen et al.]).
- Methods based on the Forward-Backward-Forward iteration: [Combettes and Pesquet] [Boţ and Hendrich].
- Projection-based methods: [Alotaibi et al.].

Slide 17: Improvement via block alternation

Idea: split the variable into blocks,
$$x=(x_1,\dots,x_p)\in\mathcal{H}=\bigoplus_{j=1}^{p}\mathcal{H}_j,\qquad x_j\in\mathcal{H}_j,$$
where $\mathcal{H}_1,\dots,\mathcal{H}_p$ are real separable Hilbert spaces.

Slide 18: Improvement via block alternation

Assumption: $h$ is an additively block-separable function,
$$h(x)=\sum_{j=1}^{p}h_j(x_j),$$
where, for every $j\in\{1,\dots,p\}$, $h_j$ is convex and $\mu_j$-Lipschitz differentiable with $\mu_j\in\left]0,+\infty\right[$.

Slide 19: Block-coordinate strategy

At each iteration $n\in\mathbb{N}$, update only a subset of the components (cf. Gauss-Seidel methods).

ADVANTAGES:
- Reduced computational cost at each iteration.
- Reduced memory requirement.
- More flexibility.

Slides 20–21: Primal-dual problem

PRIMAL PROBLEM:
$$\underset{x_1\in\mathcal{H}_1,\dots,x_p\in\mathcal{H}_p}{\text{minimize}}\ \sum_{j=1}^{p}h_j(x_j)+\sum_{k=1}^{q}(g_k\,\square\,l_k)\Big(\sum_{j=1}^{p}L_{k,j}x_j\Big).$$

DUAL PROBLEM:
$$\underset{v_1\in\mathcal{G}_1,\dots,v_q\in\mathcal{G}_q}{\text{minimize}}\ \sum_{j=1}^{p}h_j^*\Big(-\sum_{k=1}^{q}L_{k,j}^*v_k\Big)+\sum_{k=1}^{q}\big(g_k^*(v_k)+l_k^*(v_k)\big).$$

For every $j\in\{1,\dots,p\}$ and $k\in\{1,\dots,q\}$:
- $\mathcal{H}_j$ and $\mathcal{G}_k$ are real separable Hilbert spaces;
- $h_j\colon\mathcal{H}_j\to\mathbb{R}$ is convex and $\mu_j$-Lipschitz differentiable, with $\mu_j\in\left]0,+\infty\right[$;
- $g_k\in\Gamma_0(\mathcal{G}_k)$;
- $l_k\in\Gamma_0(\mathcal{G}_k)$ is $\nu_k$-strongly convex, with $\nu_k\in\left]0,+\infty\right[$;
- $L_{k,j}\colon\mathcal{H}_j\to\mathcal{G}_k$ is linear and bounded;
- $\mathbb{L}_k=\{j\in\{1,\dots,p\}\mid L_{k,j}\neq 0\}$ and $\mathbb{L}_j^*=\{k\in\{1,\dots,q\}\mid L_{k,j}\neq 0\}$.

Assume that there exists $(\overline{x}_1,\dots,\overline{x}_p)\in\mathcal{H}_1\oplus\dots\oplus\mathcal{H}_p$ such that
$$(\forall j\in\{1,\dots,p\})\quad 0\in\nabla h_j(\overline{x}_j)+\sum_{k=1}^{q}L_{k,j}^*\big(\partial g_k\,\square\,\partial l_k\big)\Big(\sum_{j'=1}^{p}L_{k,j'}\overline{x}_{j'}\Big).$$

OBJECTIVE: Let $F$ and $F^*$ be the sets of solutions to the primal and dual problems. Find an $(F\times F^*)$-valued random variable $(\widehat{x},\widehat{v})$.

Slides 22–25: Random block-coordinate proximal primal-dual algorithm

For $n=0,1,\dots$
$$\begin{array}{l}
\text{for } j=1,\dots,p\\[2pt]
\quad \eta_{j,n}=\max\{\varepsilon_{p+k,n}\mid k\in\mathbb{L}_j^*\}\\[2pt]
\quad s_{j,n}=\eta_{j,n}\big(x_{j,n}-W_j(\nabla h_j(x_{j,n})+a_{j,n})\big)\\[2pt]
\quad y_{j,n}=\eta_{j,n}\big(s_{j,n}-W_j\textstyle\sum_{k\in\mathbb{L}_j^*}L_{k,j}^*v_{k,n}\big)\\[2pt]
\text{for } k=1,\dots,q\\[2pt]
\quad u_{k,n+1}=\varepsilon_{p+k,n}\Big(\operatorname{prox}^{U_k^{-1}}_{g_k^*}\big(v_{k,n}+U_k\big(\textstyle\sum_{j\in\mathbb{L}_k}L_{k,j}y_{j,n}-\nabla l_k^*(v_{k,n})+c_{k,n}\big)\big)+b_{k,n}\Big)\\[2pt]
\quad v_{k,n+1}=v_{k,n}+\lambda_n\varepsilon_{p+k,n}(u_{k,n+1}-v_{k,n})\\[2pt]
\text{for } j=1,\dots,p\\[2pt]
\quad p_{j,n+1}=\varepsilon_{j,n}\big(s_{j,n}-W_j\textstyle\sum_{k\in\mathbb{L}_j^*}L_{k,j}^*v_{k,n+1}\big)\\[2pt]
\quad x_{j,n+1}=x_{j,n}+\lambda_n\varepsilon_{j,n}(p_{j,n+1}-x_{j,n}),
\end{array}$$
where
- $(\varepsilon_n)_{n\in\mathbb{N}}$ are identically distributed $\mathbb{D}$-valued random variables, with $\mathbb{D}=\{0,1\}^{p+q}\setminus\{\boldsymbol{0}\}$: binary variables signaling the blocks to be activated;
- $x_0$ and $(a_n)_{n\in\mathbb{N}}$ are $\mathcal{H}$-valued random variables; $v_0$, $(b_n)_{n\in\mathbb{N}}$, and $(c_n)_{n\in\mathbb{N}}$ are $\mathcal{G}$-valued random variables, with $\mathcal{G}=\mathcal{G}_1\oplus\dots\oplus\mathcal{G}_q$; $(a_n)_{n\in\mathbb{N}}$, $(b_n)_{n\in\mathbb{N}}$, and $(c_n)_{n\in\mathbb{N}}$ are error terms;
- for every $j\in\{1,\dots,p\}$, $W_j\colon\mathcal{H}_j\to\mathcal{H}_j$ and, for every $k\in\{1,\dots,q\}$, $U_k\colon\mathcal{G}_k\to\mathcal{G}_k$ are strongly positive self-adjoint preconditioning linear operators such that
$$\min\big\{\mu^{-1},\ \nu^{-1}\big(1-\|U^{1/2}LW^{1/2}\|^2\big)\big\}>\tfrac12,$$
with $\mu=\max\{\mu_1\|W_1\|,\dots,\mu_p\|W_p\|\}$ and $\nu=\max\{\nu_1\|U_1\|,\dots,\nu_q\|U_q\|\};$
- for every $n\in\mathbb{N}$, $\lambda_n\in\left]0,1\right]$ with $\inf_{n\in\mathbb{N}}\lambda_n>0$.
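
A schematic sketch (my construction; the block sizes, names, and the simple choices $h_j=\tfrac12\|\cdot-z_j\|^2$, $g_k=\|\cdot\|_1$, $l_k=\iota_{\{0\}}$ are illustrative, and the error terms are set to zero) of how the activation variables gate the computation: a primal block $j$ is pre-processed whenever one of its neighboring dual blocks is active ($\eta_{j,n}=1$), and only active blocks are actually updated. The dual activations are derived from the primal ones through the rule $\varepsilon_{p+k,n}=\max_{j\in\mathbb{L}_k}\varepsilon_{j,n}$ required by the convergence theorem on slide 26.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy block structure: p primal blocks, q dual blocks; L[k][j] is the
# coupling L_{k,j}, or None where L_{k,j} = 0. Step sizes are not tuned:
# this only illustrates the gating logic, not a converged solver.
p, q, d, m = 3, 2, 5, 4
L = [[rng.standard_normal((m, d)) if rng.random() < 0.7 else None
      for j in range(p)] for k in range(q)]
for j in range(p):                           # ensure every primal block is
    if all(L[k][j] is None for k in range(q)):  # coupled to some dual block
        L[0][j] = rng.standard_normal((m, d))
Lk = [[j for j in range(p) if L[k][j] is not None] for k in range(q)]   # L_k
Ljs = [[k for k in range(q) if L[k][j] is not None] for j in range(p)]  # L*_j

z = [rng.standard_normal(d) for _ in range(p)]
x = [np.zeros(d) for _ in range(p)]
v = [np.zeros(m) for _ in range(q)]
tau, sigma, lam = 0.05, 0.05, 1.0
grad_h = lambda j, xj: xj - z[j]                  # h_j = 0.5 * ||. - z_j||^2
prox_sig_gstar = lambda w: np.clip(w, -1.0, 1.0)  # g_k = ||.||_1, l_k = iota_{0}

for n in range(200):
    eps = (rng.random(p) < 0.5).astype(int)       # primal activations
    if eps.sum() == 0:
        eps[rng.integers(p)] = 1                  # D excludes the zero pattern
    # Dual activations derived from the primal pattern (slide 26's rule).
    eps_dual = [max((eps[j] for j in Lk[k]), default=0) for k in range(q)]

    s, y = [None] * p, [None] * p
    for j in range(p):
        if max((eps_dual[k] for k in Ljs[j]), default=0):   # eta_{j,n} = 1
            s[j] = x[j] - tau * grad_h(j, x[j])
            y[j] = s[j] - tau * sum(L[k][j].T @ v[k] for k in Ljs[j])
    for k in range(q):
        if eps_dual[k]:
            u = prox_sig_gstar(v[k] + sigma * sum(L[k][j] @ y[j] for j in Lk[k]))
            v[k] = v[k] + lam * (u - v[k])
    for j in range(p):
        if eps[j]:
            pj = s[j] - tau * sum(L[k][j].T @ v[k] for k in Ljs[j])
            x[j] = x[j] + lam * (pj - x[j])
```

The definition of $\eta_{j,n}$ guarantees that $y_{j,n}$ is available for every primal block needed by an active dual block, even when that primal block is not itself updated at iteration $n$.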

Slide 26: Random block-coordinate proximal primal-dual algorithm

Let $(\Omega,\mathcal{F},\mathsf{P})$ be the underlying probability space and set, for every $n\in\mathbb{N}$, $\boldsymbol{X}_n=(x_{n'},v_{n'})_{0\le n'\le n}$. Assume that:
- $\sum_{n\in\mathbb{N}}\sqrt{\mathsf{E}(\|a_n\|^2\mid\boldsymbol{X}_n)}<+\infty$, $\sum_{n\in\mathbb{N}}\sqrt{\mathsf{E}(\|b_n\|^2\mid\boldsymbol{X}_n)}<+\infty$, and $\sum_{n\in\mathbb{N}}\sqrt{\mathsf{E}(\|c_n\|^2\mid\boldsymbol{X}_n)}<+\infty$ a.s.;
- the variables $(\varepsilon_n)_{n\in\mathbb{N}}$ are identically distributed with $(\forall j\in\{1,\dots,p\})\ \mathsf{P}[\varepsilon_{j,0}=1]>0$;
- for every $n\in\mathbb{N}$, $\varepsilon_n$ and $\boldsymbol{X}_n$ are independent;
- for every $k\in\{1,\dots,q\}$ and $n\in\mathbb{N}$, $\varepsilon_{p+k,n}=\max\{\varepsilon_{j,n}\mid j\in\mathbb{L}_k\}$.

Then $(x_n)_{n\in\mathbb{N}}$ converges weakly a.s. to an $F$-valued random variable, and $(v_n)_{n\in\mathbb{N}}$ converges weakly a.s. to an $F^*$-valued random variable.

Proof: based on properties of stochastic quasi-Fejér sequences [Combettes and Pesquet 2014].

Slides 27–31: Illustration of the random sampling strategy

Variable selection: for every $n\in\mathbb{N}$ and $j\in\{1,\dots,6\}$, the block $x_{j,n}$ is activated when $\varepsilon_{j,n}=1$. How should the variable $\varepsilon_n=(\varepsilon_{1,n},\dots,\varepsilon_{6,n})$ be chosen? One example of a sampling distribution:
- $\mathsf{P}[\varepsilon_n=(1,1,0,0,0,0)]=0.1$
- $\mathsf{P}[\varepsilon_n=(1,0,1,0,0,0)]=0.2$
- $\mathsf{P}[\varepsilon_n=(1,0,0,1,1,0)]=0.2$
- $\mathsf{P}[\varepsilon_n=(0,1,1,1,1,1)]=0.5$
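
A small sketch (illustrative, matching the probabilities above) of drawing activation patterns from this distribution and checking the per-block marginals, which must all be positive for the convergence theorem of slide 26 to apply:

```python
import numpy as np

rng = np.random.default_rng(2)

# Activation patterns and their probabilities (slides 28-31).
patterns = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 1, 1, 1, 1],
])
probs = np.array([0.1, 0.2, 0.2, 0.5])

draws = patterns[rng.choice(len(patterns), size=100_000, p=probs)]
# Marginal activation probability of each block: P[eps_{j,0} = 1] > 0.
print(draws.mean(axis=0))   # approximately [0.5, 0.6, 0.7, 0.7, 0.7, 0.5]
```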

Slide 32: Application: 3D mesh denoising

[Figures: original mesh x; observed mesh z; undirected nonreflexive graph.]

OBJECTIVE: Estimate $x=(x_i)_{1\le i\le M}$ from the noisy observations $z=(z_i)_{1\le i\le M}$, where, for every $i\in\{1,\dots,M\}$, $x_i\in\mathbb{R}^3$ is the vector of 3D coordinates of the $i$-th vertex of a mesh; $\mathcal{H}=(\mathbb{R}^3)^M$.

Slide 33: Application: 3D mesh denoising

COST FUNCTION:
$$\Phi(x)=\sum_{j=1}^{M}\psi_j(x_j-z_j)+\iota_{C_j}(x_j)+\eta_j\big\|(x_j-x_i)_{i\in N_j}\big\|_{1,2},$$
where, for every $j\in\{1,\dots,M\}$:
- $\psi_j\colon\mathbb{R}^3\to\mathbb{R}$ is an $\ell_2$-$\ell_1$ Huber function: a robust data-fidelity measure that is convex and Lipschitz differentiable;
- $C_j$ is a nonempty convex subset of $\mathbb{R}^3$;
- $N_j$ is the neighborhood of the $j$-th vertex;
- $(\eta_j)_{1\le j\le M}$ are nonnegative regularization constants.

Slide 34: Application: 3D mesh denoising

IMPLEMENTATION DETAILS:
- one block per vertex: $p=M$, $q=2M$;
- $(\forall j\in\{1,\dots,M\})\ h_j=\psi_j(\cdot-z_j)$;
- $(\forall k\in\{1,\dots,M\})(\forall x\in(\mathbb{R}^3)^M)\ g_k(L_kx)=\eta_k\|(x_k-x_i)_{i\in N_k}\|_{1,2}$ and $g_{M+k}(L_{M+k}x)=\iota_{C_k}(x_k)$;
- $l_k=\iota_{\{0\}}$.
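
Two minimal sketches (my illustration, not the authors' code; the exact Huber parametrization is an assumption) of the building blocks this choice requires: the gradient of an $\ell_2$-$\ell_1$ Huber function for $h_j$, and the prox of $g_k=\eta\|\cdot\|_{1,2}$, which acts as a groupwise soft-thresholding of the per-neighbor 3-vectors. The dual prox of $g_k^*$ then follows from Moreau's decomposition.

```python
import numpy as np

def grad_huber(t, delta=1.0):
    """Gradient of an l2-l1 Huber function psi(t) = 0.5*||t||^2 if ||t|| <= delta,
    and delta*||t|| - 0.5*delta**2 otherwise (t in R^3). The talk does not spell
    out its exact parametrization; this is one standard choice."""
    r = np.linalg.norm(t)
    return t if r <= delta else (delta / r) * t

def prox_l12(v, eta):
    """Prox of eta*||.||_{1,2} applied to a stack of 3-vectors v (shape (n, 3)):
    each row's Euclidean norm is soft-thresholded independently."""
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    scale = np.maximum(1.0 - eta / np.maximum(norms, 1e-12), 0.0)
    return scale * v

# Example: differences (x_k - x_i) over 4 neighbors of a vertex.
v = np.random.default_rng(3).standard_normal((4, 3))
print(prox_l12(v, eta=0.5))
```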

Slide 35: Simulation results

- The vertex positions of the original mesh are corrupted by an additive i.i.d. zero-mean Gaussian-mixture noise model.
- A limited number $r$ of variables can be handled at each iteration: $\sum_{j=1}^{p}\varepsilon_{j,n}=r\le p$.
- The mesh is decomposed into $p/r$ non-overlapping sets.

[Figures: original mesh (M = ...); noisy mesh (MSE = ...).]

Slide 36: Simulation results

[Figures: proposed reconstruction (MSE = ...); Laplacian smoothing (MSE = ...).]

Slide 37: Complexity

[Figure: reconstruction time (s) and required memory (MB) versus p/r; dashed line: required memory; solid line: reconstruction time.]

Slide 38: Conclusion

- No linear operator inversion.
- Flexibility in the random activation of primal/dual components.
- Existing parallel proximal primal-dual algorithms are recovered when $p=1$ and $\varepsilon_n\equiv(1,\dots,1)$.
- Possibility to address graph processing problems other than denoising [Couprie et al., 2013].
- Available extensions: asynchronous distributed algorithms (stochastic, primal-dual, proximal, defined on a hypergraph).

Slide 39: Some references

- P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke, R. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, eds.), Springer-Verlag, New York.
- C. Couprie, L. Grady, L. Najman, J.-C. Pesquet, and H. Talbot, "Dual constrained TV-based regularization on graphs," SIAM Journal on Imaging Sciences, vol. 6, no. 3.
- P. L. Combettes, L. Condat, J.-C. Pesquet, and B. C. Vũ, "A forward-backward view of some primal-dual optimization methods in image recovery," IEEE International Conference on Image Processing (ICIP 2014), Paris, France, Oct. 2014.
- P. L. Combettes and J.-C. Pesquet, "Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping," SIAM Journal on Optimization, vol. 25, no. 2.
- N. Komodakis and J.-C. Pesquet, "Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems," to appear in IEEE Signal Processing Magazine.
- J.-C. Pesquet and A. Repetti, "A class of randomized primal-dual algorithms for distributed optimization," to appear in Journal of Nonlinear and Convex Analysis.
- A. Repetti, E. Chouzenoux, and J.-C. Pesquet, "A random block-coordinate primal-dual proximal algorithm with application to 3D mesh denoising," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015), South Brisbane, Australia, April 2015.
