An Overview of Recent and Brand New Primal-Dual Methods for Solving Convex Optimization Problems


An Overview of Recent and Brand New Primal-Dual Methods for Solving Convex Optimization Problems
Emilie Chouzenoux
Laboratoire d'Informatique Gaspard Monge - CNRS, Univ. Paris-Est Marne-la-Vallée, France
28 Oct. 2015

In collaboration with A. Repetti and J.-C. Pesquet

Problem statement
GOAL: Find a solution to the optimization problem
    minimize_{x ∈ H} f(x)
where H is a real Hilbert space and f : H → ]−∞,+∞] is convex.
In the context of large-scale problems, how can one find an optimization algorithm able to deliver a reliable numerical solution in a reasonable time, with low memory requirements?

Fundamental tools in convex analysis

Notation and definitions
Let f : H → ]−∞,+∞].
- The domain of f is dom f = { x ∈ H | f(x) < +∞ }. If dom f ≠ ∅, f is said to be proper.
- f is convex if (∀ (x,y) ∈ H²)(∀ λ ∈ [0,1]) f(λx + (1−λ)y) ≤ λ f(x) + (1−λ) f(y).
- f is lower semicontinuous (lsc) on H if, for every x ∈ H and every sequence (x_k)_{k∈N} of H, x_k → x ⇒ lim inf f(x_k) ≥ f(x).
In the remainder of this talk, all functions are assumed to be proper, lsc and convex.

Notation and definitions
- The set of strongly positive, self-adjoint, bounded linear operators from H to H is denoted by S₊(H). Let U ∈ S₊(H). The weighted norm induced by U is ‖·‖_U = ⟨· | U ·⟩^{1/2}, with the convention ‖·‖ = ‖·‖_{Id}.
- Let f : H → ]−∞,+∞[. f is said to be β-Lipschitz differentiable if it is differentiable on H and its gradient satisfies
    (∀ (x,y) ∈ H²) ‖∇f(x) − ∇f(y)‖ ≤ β ‖x − y‖,
with β ∈ ]0,+∞[.

Subdifferential
The subdifferential of f : H → ]−∞,+∞] at x is the set
    ∂f(x) = { t ∈ H | (∀ y ∈ H) f(y) ≥ f(x) + ⟨t | y − x⟩ }.
An element t of ∂f(x) is called a subgradient of f at x.
[Figure: the affine minorant y ↦ f(x) + ⟨t | y − x⟩ touches the graph of f at x and lies below it elsewhere.]
If f is differentiable at x ∈ H, then ∂f(x) = {∇f(x)}.
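As a standard one-dimensional example (not on the slide): the absolute value is not differentiable at the origin, yet its subdifferential there is a whole interval,

    \partial |\cdot|(x) =
    \begin{cases}
      \{\operatorname{sign}(x)\} & \text{if } x \neq 0, \\
      [-1,1] & \text{if } x = 0,
    \end{cases}

so at the kink every slope t ∈ [−1,1] yields a valid affine minorant.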

Conjugate function
The conjugate of f : H → ]−∞,+∞] is f* : H → [−∞,+∞] such that
    (∀ u ∈ H) f*(u) = sup_{x ∈ H} ( ⟨x | u⟩ − f(x) ).
[Figure: f*(u) is the largest gap between the linear form x ↦ ⟨x | u⟩ and the graph of f.]
ORIGINS: A. C. Clairaut, A.-M. Legendre, W. Fenchel.
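A standard worked example (not on the slide): the squared norm is its own conjugate,

    f = \tfrac{1}{2}\|\cdot\|^2 \;\Longrightarrow\; f^*(u) = \sup_{x \in \mathcal{H}} \bigl( \langle x \mid u \rangle - \tfrac{1}{2}\|x\|^2 \bigr) = \tfrac{1}{2}\|u\|^2,

the supremum being attained at x = u (set the gradient of the concave objective to zero).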

Notation and definitions
The inf-convolution of f : H → ]−∞,+∞] and g : H → ]−∞,+∞] is
    f □ g : H → [−∞,+∞] : x ↦ inf_{y ∈ H} f(y) + g(x − y),
to be compared with the usual convolution (f ∗ g)(x) = ∫_H f(y) g(x − y) dy.
PARTICULAR CASE: f □ ι_{{0}} = f, where, for C ⊂ H,
    (∀ x ∈ H) ι_C(x) = 0 if x ∈ C, +∞ otherwise.
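A classical instance (not on the slide) linking the inf-convolution to the next slide: the Moreau envelope of f is its inf-convolution with a scaled squared norm,

    \Bigl( f \,\square\, \tfrac{1}{2\gamma}\|\cdot\|^2 \Bigr)(x) = \min_{y \in \mathcal{H}} f(y) + \tfrac{1}{2\gamma}\|x - y\|^2,

and the minimizer is precisely prox_{γf}(x).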

Proximity operator
The proximity operator prox_{U,f}(x) of f at x ∈ H, relative to the metric induced by U ∈ S₊(H), is the unique vector ŷ ∈ H such that
    f(ŷ) + (1/2) ‖ŷ − x‖²_U = inf_{y ∈ H} f(y) + (1/2) ⟨y − x | U(y − x)⟩.
CHARACTERIZATION OF THE PROXIMITY OPERATOR:
    (∀ x ∈ H) ŷ = prox_{U,f}(x) ⇔ x − ŷ ∈ U⁻¹ ∂f(ŷ).
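For U = Id and f = γ‖·‖₁, the proximity operator has the well-known closed form of componentwise soft-thresholding; a minimal NumPy sketch (the function name is mine, for illustration only):

    import numpy as np

    def prox_l1(x, gamma):
        # prox of gamma*||.||_1 at x in the Euclidean metric (U = Id):
        # unique minimizer of gamma*||y||_1 + 0.5*||y - x||^2,
        # i.e. componentwise soft-thresholding.
        return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    x = np.array([1.5, -0.2, 0.7])
    y = prox_l1(x, 0.5)  # -> [1.0, 0.0, 0.2]; check: x - y lies in 0.5 * subdifferential of ||.||_1 at y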

Primal-dual proximal algorithms

Primal-dual problem
PRIMAL PROBLEM
    minimize_{x ∈ H} f(x) + g(Lx) + h(x)
where f : H → ]−∞,+∞], g : G → ]−∞,+∞], h : H → R is β-Lipschitz differentiable with β ∈ ]0,+∞[, H and G are real Hilbert spaces, and L : H → G is linear and bounded.
DUAL PROBLEM
    minimize_{v ∈ G} (f* □ h*)(−L* v) + g*(v).

Primal-dual problem
KKT CONDITIONS: If a pair (x̂, v̂) fulfills the Karush-Kuhn-Tucker conditions
    −L* v̂ − ∇h(x̂) ∈ ∂f(x̂) and L x̂ ∈ ∂g*(v̂),
then x̂ is a solution to the primal problem and v̂ is a solution to the dual problem.
LAGRANGIAN FORMULATION: The problem is equivalent to searching for a saddle point of the Lagrangian
    (∀ (x,y,v) ∈ H × G²) L(x,y,v) = f(x) + h(x) + g(y) + ⟨v | Lx − y⟩.

Link with Lagrange duality
SADDLE POINTS: (x̂, ŷ, v̂) is a saddle point of the Lagrange function L if
    (∀ (x,y,v) ∈ H × G²) L(x̂, ŷ, v) ≤ L(x̂, ŷ, v̂) ≤ L(x, y, v̂).
EQUIVALENCE: Let H and G be two real Hilbert spaces and let (x̂, ŷ, v̂) ∈ H × G². Assume that
    ri(dom g) ∩ L(dom f) ≠ ∅ or dom g ∩ ri(L(dom f)) ≠ ∅.
Then (x̂, ŷ, v̂) is a saddle point of the Lagrange function ⇔ (x̂, v̂) is a Kuhn-Tucker point and ŷ = L x̂.

Primal-dual proximal algorithm
For k = 0, 1, ...
    y_k = prox_{τf}( x_k − τ( ∇h(x_k) + c_k + L* v_k ) ) + a_k
    u_k = prox_{σg*}( v_k + σ L(2 y_k − x_k) ) + b_k
    (x_{k+1}, v_{k+1}) = (x_k, v_k) + λ_k ( (y_k, u_k) − (x_k, v_k) )
Assume that
- (τ, σ) ∈ ]0,+∞[² are such that τ⁻¹ − σ‖L‖² > β/2,
- for every k ∈ N, λ_k ∈ ]0, λ̄[, where λ̄ = 2 − (β/2)(τ⁻¹ − σ‖L‖²)⁻¹ ∈ [1,2[,
- Σ_{k∈N} ‖a_k‖ < +∞, Σ_{k∈N} ‖b_k‖ < +∞, and Σ_{k∈N} ‖c_k‖ < +∞,
- there exists x ∈ H such that 0 ∈ ∂f(x) + ∇h(x) + L* ∂g(Lx).
Then x_k ⇀ x̂, where x̂ is a solution to the primal problem, and v_k ⇀ v̂, where v̂ is a solution to the dual problem.
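A minimal NumPy transcription of this iteration (a sketch under simplifying assumptions: no errors a_k, b_k, c_k, relaxation λ_k ≡ 1, L given as a matrix; prox_f, prox_gstar and grad_h are user-supplied callables, and (τ, σ) are assumed to satisfy the step-size condition above):

    import numpy as np

    def primal_dual(prox_f, prox_gstar, grad_h, L, x0, v0, tau, sigma, n_iter=1000):
        # Primal-dual iteration for min_x f(x) + g(Lx) + h(x),
        # assuming 1/tau - sigma*||L||^2 > beta/2 (beta: Lipschitz constant of grad h).
        x, v = x0.copy(), v0.copy()
        for _ in range(n_iter):
            y = prox_f(x - tau * (grad_h(x) + L.T @ v), tau)      # primal proximal step
            u = prox_gstar(v + sigma * (L @ (2 * y - x)), sigma)  # dual proximal step
            x, v = y, u                                           # lambda_k = 1 (no relaxation)
        return x, v

If only prox_g is available, prox_gstar can be deduced from Moreau's decomposition: prox_{σg*}(v) = v − σ prox_{σ⁻¹g}(v/σ).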

Preconditioned primal-dual proximal algorithm
For k = 0, 1, ...
    y_k = prox_{W⁻¹,f}( x_k − W( ∇h(x_k) + c_k + L* v_k ) ) + a_k
    u_k = prox_{U⁻¹,g*}( v_k + U L(2 y_k − x_k) ) + b_k
    (x_{k+1}, v_{k+1}) = (x_k, v_k) + λ_k ( (y_k, u_k) − (x_k, v_k) )
Assume that
- W ∈ S₊(H) and U ∈ S₊(G) are such that 1 − ‖U^{1/2} L W^{1/2}‖² > (β/2) ‖W‖,
- for every k ∈ N, λ_k ∈ ]0,1] and inf_{k∈N} λ_k > 0,
- Σ_{k∈N} ‖a_k‖ < +∞, Σ_{k∈N} ‖b_k‖ < +∞, and Σ_{k∈N} ‖c_k‖ < +∞,
- there exists x ∈ H such that 0 ∈ ∂f(x) + ∇h(x) + L* ∂g(Lx).
Then x_k ⇀ x̂, where x̂ is a solution to the primal problem, and v_k ⇀ v̂, where v̂ is a solution to the dual problem.
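With diagonal metrics W = Diag(w) and U = Diag(u), the step-size condition can be checked numerically before running the iteration; a small sketch (my helper, under the condition as reconstructed above):

    import numpy as np

    def check_metrics(L, w, u, beta):
        # Check 1 - ||U^{1/2} L W^{1/2}||^2 > (beta/2) * ||W||
        # for diagonal metrics W = Diag(w), U = Diag(u).
        M = np.sqrt(u)[:, None] * L * np.sqrt(w)[None, :]
        return 1.0 - np.linalg.norm(M, 2) ** 2 > 0.5 * beta * w.max()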

Parallel primal-dual proximal algorithm
PRIMAL PROBLEM
    minimize_{x ∈ H} f(x) + Σ_{n=1}^q g_n(L_n x) + h(x)
with, for all n ∈ {1,...,q}, g_n : G_n → ]−∞,+∞], (G_n)_{1≤n≤q} real Hilbert spaces such that G = G_1 ⊕ ... ⊕ G_q, and L_n : H → G_n bounded linear operators.
DUAL PROBLEM
    minimize_{(v^(1),...,v^(q)) ∈ G} (f* □ h*)( −Σ_{n=1}^q L_n* v^(n) ) + Σ_{n=1}^q g_n*(v^(n)).
The separability of g implies that prox_{g*} can be obtained by computing in parallel the proximity operators of the functions (g_n*)_{1≤n≤q} ⇒ PARALLEL PRIMAL-DUAL SCHEMES.

Parallel primal-dual proximal algorithm
For k = 0, 1, ...
    y_k = prox_{W⁻¹,f}( x_k − W( ∇h(x_k) + c_k + Σ_{n=1}^q L_n* v_k^(n) ) ) + a_k
    x_{k+1} = x_k + λ_k ( y_k − x_k )
    for n = 1, ..., q
        u_k^(n) = prox_{U_n⁻¹,g_n*}( v_k^(n) + U_n L_n(2 y_k − x_k) ) + b_k^(n)
        v_{k+1}^(n) = v_k^(n) + λ_k ( u_k^(n) − v_k^(n) )
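Given y_k and x_k, the q dual updates are mutually independent, so they can be dispatched to parallel workers; an illustrative sketch (thread pool; prox_gstar_n and the matrices L_n are user-supplied, and each U_n is taken scalar for simplicity):

    from concurrent.futures import ThreadPoolExecutor

    def dual_step(args):
        # one dual block: u_n = prox_{U_n^{-1}, g_n*}(v_n + U_n * L_n (2y - x))
        prox_gstar_n, L_n, U_n, v_n, y, x = args
        return prox_gstar_n(v_n + U_n * (L_n @ (2 * y - x)), U_n)

    def parallel_dual_updates(blocks, y, x):
        # blocks: list of (prox_gstar_n, L_n, U_n, v_n) tuples, one per n = 1..q
        jobs = [(p, L, U, v, y, x) for (p, L, U, v) in blocks]
        with ThreadPoolExecutor() as pool:
            return list(pool.map(dual_step, jobs))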

Bibliographical remarks
- Pioneering work: Arrow-Hurwicz-Uzawa method for saddle point problems [Arrow et al.] [Nedić and Ozdaglar]
- Methods based on the Forward-Backward iteration:
    type I: [Vũ] [Condat] (extensions of [Esser et al.] [Chambolle and Pock])
    type II: [Combettes et al.] (extensions of [Loris and Verhoeven] [Chen et al.])
- Methods based on the Forward-Backward-Forward iteration: [Combettes and Pesquet] [Boţ and Hendrich, 2014]
- Projection-based methods: [Alotaibi et al.]

Random block coordinate strategy

Acceleration via block alternation
Idea: variable splitting.
    x = (x^(1), ..., x^(p)) ∈ H = H_1 ⊕ ... ⊕ H_p, with x^(j) ∈ H_j,
where H_1, ..., H_p are separable real Hilbert spaces.

Acceleration via block alternation
Simplifying assumption: f and h are block separable functions:
    f(x) = Σ_{j=1}^p f_j(x^(j)),  h(x) = Σ_{j=1}^p h_j(x^(j)),
where (∀ j ∈ {1,...,p}) h_j is β_j-Lipschitz differentiable with β_j ∈ ]0,+∞[.

Acceleration via block alternation
At each iteration k ∈ N, update only a subset of the components (in the spirit of Gauss-Seidel methods).
ADVANTAGES:
- reduced computational cost per iteration,
- reduced memory requirement,
- more flexibility.

Primal-dual problem
PRIMAL PROBLEM: Let F be the set of solutions to the problem
    minimize_{x^(1) ∈ H_1, ..., x^(p) ∈ H_p} Σ_{j=1}^p ( f_j(x^(j)) + h_j(x^(j)) ) + Σ_{n=1}^q g_n( Σ_{j=1}^p L_{n,j} x^(j) )
where (∀ j ∈ {1,...,p})(∀ n ∈ {1,...,q}):
- H_j and G_n are separable real Hilbert spaces,
- f_j : H_j → ]−∞,+∞],
- h_j : H_j → R is β_j-Lipschitz differentiable, with β_j ∈ ]0,+∞[,
- g_n : G_n → ]−∞,+∞],
- L_{n,j} : H_j → G_n is linear and bounded,
- 𝕃_n = { j ∈ {1,...,p} | L_{n,j} ≠ 0 } and 𝕃*_j = { n ∈ {1,...,q} | L_{n,j} ≠ 0 }.
DUAL PROBLEM: Let F* be the set of solutions to the problem
    minimize_{v^(1) ∈ G_1, ..., v^(q) ∈ G_q} Σ_{j=1}^p (f_j* □ h_j*)( −Σ_{n=1}^q L_{n,j}* v^(n) ) + Σ_{n=1}^q g_n*(v^(n)).
Assume that there exists (x^(1),...,x^(p)) ∈ H_1 × ... × H_p such that
    (∀ j ∈ {1,...,p}) 0 ∈ ∂f_j(x^(j)) + ∇h_j(x^(j)) + Σ_{n=1}^q L_{n,j}* ∂g_n( Σ_{j'=1}^p L_{n,j'} x^(j') ).
GOAL: Find an F × F*-valued random variable (x̂, v̂).

Random block coordinate primal-dual proximal algorithm
For k = 0, 1, ...
    for j = 1, ..., p
        y_k^(j) = ε_k^(j) ( prox_{W_j⁻¹,f_j}( x_k^(j) − W_j( ∇h_j(x_k^(j)) + c_k^(j) + Σ_{n=1}^q L_{n,j}* v_k^(n) ) ) + a_k^(j) )
        x_{k+1}^(j) = x_k^(j) + λ_k ε_k^(j) ( y_k^(j) − x_k^(j) )
    for n = 1, ..., q
        u_k^(n) = ε_k^(p+n) ( prox_{U_n⁻¹,g_n*}( v_k^(n) + U_n Σ_{j=1}^p L_{n,j}(2 y_k^(j) − x_k^(j)) ) + b_k^(n) )
        v_{k+1}^(n) = v_k^(n) + λ_k ε_k^(p+n) ( u_k^(n) − v_k^(n) )
(ε_k)_{k∈N} is a vector of p+q Boolean variables randomly chosen at each iteration k ∈ N in order to signal the primal and dual blocks to be updated.

Random block coordinate primal-dual proximal algorithm
Assume that
- the variables (ε_k)_{k∈N} are independent and identically distributed, and such that, for every k ∈ N, (∀ n ∈ {1,...,q}) P[ε_k^(p+n) = 1] > 0 and (∀ j ∈ {1,...,p}) ε_k^(j) = max{ ε_k^(p+n) | n ∈ 𝕃*_j },
- (∀ j ∈ {1,...,p}) W_j ∈ S₊(H_j) and (∀ n ∈ {1,...,q}) U_n ∈ S₊(G_n), with
    1 − ( Σ_{j=1}^p Σ_{n=1}^q ‖ U_n^{1/2} L_{n,j} W_j^{1/2} ‖² )^{1/2} > (1/2) max{ ‖W_j‖ β_j | 1 ≤ j ≤ p },
- (∀ k ∈ N) λ_k ∈ ]0,1] and inf_{k∈N} λ_k > 0,
- Σ_{k∈N} E‖a_k‖² < +∞, Σ_{k∈N} E‖b_k‖² < +∞, and Σ_{k∈N} E‖c_k‖² < +∞ a.s.
Then (x_k)_{k∈N} converges weakly a.s. to an F-valued random variable, and (v_k)_{k∈N} converges weakly a.s. to an F*-valued random variable.
Proof: based on properties of stochastic quasi-Fejér sequences [Combettes & Pesquet, 2015].
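One simple way to realize the first assumption is to activate each dual block independently with a fixed positive probability and to derive the primal activations through the sets 𝕃*_j; a NumPy sketch (the function name and the Bernoulli choice are mine; the theorem only requires the ε_k to be i.i.d. with P[ε_k^(p+n) = 1] > 0):

    import numpy as np

    def sample_activations(p, q, prob, Lstar):
        # eps_dual[n] = 1 iff dual block n is updated (i.i.d. Bernoulli(prob), prob > 0)
        eps_dual = (np.random.rand(q) < prob).astype(int)
        # eps_primal[j] = max{ eps_dual[n] : n in Lstar[j] } (each Lstar[j] assumed nonempty)
        eps_primal = np.array([max(eps_dual[n] for n in Lstar[j]) for j in range(p)])
        return eps_primal, eps_dual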

Illustration of the random sampling strategy
Block selection (∀ k ∈ N): for j = 1, ..., 6, the block x^(j) is activated when ε_k^(j) = 1.
How to choose, for every k ∈ N, the variable ε_k = (ε_k^(1),...,ε_k^(6))? For instance:
    P[ε_k = (1,1,0,0,0,0)] = 0.1
    P[ε_k = (1,0,1,0,0,0)] = 0.2
    P[ε_k = (1,0,0,1,1,0)] = 0.2
    P[ε_k = (0,1,1,1,1,1)] = 0.5
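This discrete distribution over activation patterns can be sampled directly; a minimal NumPy sketch:

    import numpy as np

    patterns = np.array([[1, 1, 0, 0, 0, 0],
                         [1, 0, 1, 0, 0, 0],
                         [1, 0, 0, 1, 1, 0],
                         [0, 1, 1, 1, 1, 1]])
    probs = np.array([0.1, 0.2, 0.2, 0.5])  # must sum to 1

    def sample_eps():
        # draw one activation pattern eps_k according to the table above
        return patterns[np.random.choice(len(patterns), p=probs)]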

Extension to distributed algorithms...
OPTIMIZATION PROBLEM: Find an element of the set F of the solutions to
    minimize_{x ∈ H} Σ_{i=1}^m f_i(x) + h_i(x) + g_i(M_i x)
with H, G_1, ..., G_m separable Hilbert spaces and, for all i ∈ {1,...,m}:
- h_i β_i-Lipschitz differentiable with β_i ∈ ]0,+∞[,
- M_i : H → G_i non-zero bounded linear operators.
EQUIVALENT CONSENSUS PROBLEM: Find (x_1,...,x_m) ∈ H^m such that
    minimize_{(x_i)_{1≤i≤m} ∈ Λ^m} Σ_{i=1}^m f_i(x_i) + h_i(x_i) + g_i(M_i x_i)
with Λ^m = { (x_i)_{1≤i≤m} ∈ H^m | x_1 = ... = x_m }.
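Λ^m is a closed vector subspace of H^m, and the projection onto it (equivalently, the prox of ι_{Λ^m}) simply replaces each local copy by the common average; a one-line NumPy sketch:

    import numpy as np

    def project_consensus(X):
        # X: (m, d) array stacking the local copies x_1, ..., x_m.
        # Projection onto {x_1 = ... = x_m}: broadcast the mean to every copy.
        return np.tile(X.mean(axis=0), (X.shape[0], 1))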

3D mesh denoising (ANR GRAPHSIP)
[Figure: original mesh x̄, observed mesh z, and the underlying undirected nonreflexive graph.]
OBJECTIVE: Estimate x̄ = (x̄_i)_{1≤i≤M} from noisy observations z = (z_i)_{1≤i≤M}, where, for every i ∈ {1,...,M}, x̄_i ∈ R³ is the vector of 3D coordinates of the i-th vertex of a mesh (H = (R³)^M).
COST FUNCTION:
    Φ(x) = Σ_{j=1}^M ψ_j(x_j − z_j) + ι_{C_j}(x_j) + η_j ‖(x_j − x_i)_{i∈N_j}‖_{1,2}
where, for every j ∈ {1,...,M}:
- ψ_j : R³ → R is an l₂-l₁ Huber function: a robust data fidelity measure, convex and Lipschitz differentiable,
- C_j is a nonempty convex subset of R³,
- N_j is the neighborhood of the j-th vertex,
- (η_j)_{1≤j≤M} are nonnegative regularization constants.
IMPLEMENTATION DETAILS: one block per vertex, so p = q = M, with
    (∀ j ∈ {1,...,M}) h_j = ψ_j(· − z_j), f_j = ι_{C_j},
    (∀ n ∈ {1,...,M})(∀ x ∈ R^{3M}) g_n(L_n x) = ‖(x_n − x_i)_{i∈N_n}‖_{1,2}.
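For concreteness, one common form of the two nonstandard ingredients of Φ (a sketch; the threshold delta and this exact Huber variant are my assumptions, not taken from the talk):

    import numpy as np

    def huber(t, delta):
        # l2-l1 Huber function of t in R^3: quadratic near the origin,
        # linear in the tails; convex and Lipschitz-differentiable, as required for h_j.
        r = np.linalg.norm(t)
        return 0.5 * r**2 if r <= delta else delta * r - 0.5 * delta**2

    def l12(D):
        # ||(d_i)_i||_{1,2}: sum of the Euclidean norms of the rows of D,
        # here the differences (x_j - x_i) over the neighborhood N_j.
        return np.linalg.norm(D, axis=1).sum()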

Simulation results
- The vertex positions of the original mesh are corrupted by an additive i.i.d. zero-mean Gaussian mixture noise model.
- A limited number r of variables can be handled at each iteration: Σ_{j=1}^p ε_k^(j) = r ≤ p.
- The mesh is decomposed into p/r non-overlapping sets.
[Figures: original mesh (M vertices) and noisy mesh, with its MSE.]

Simulation results
[Figures: proposed reconstruction vs. Laplacian smoothing, with their respective MSE values.]

Complexity
[Figure: reconstruction time in seconds (solid line) and required memory in MB (dashed line) as functions of p/r.]

Conclusion
Primal-dual proximal algorithms:
- resolution of (non)smooth convex optimization problems,
- simple handling of constraints,
- no inversion of linear operators,
- proven convergence and robustness to errors,
- proven efficiency in some signal/image processing applications.
Block-coordinate strategies:
- flexibility in the random selection of the primal/dual components,
- distributed versions of the algorithms available.

Some references...
- F. Abboud, E. Chouzenoux, J.-C. Pesquet, J.-H. Chenot and L. Laborelli, A Dual Block Coordinate Proximal Algorithm with Application to Deconvolution of Interlaced Video Sequences, IEEE International Conference on Image Processing (ICIP 2015), 5 p., Québec, Canada, Sep. 2015.
- G. Chierchia, N. Pustelnik, B. Pesquet-Popescu and J.-C. Pesquet, A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems, IEEE Transactions on Image Processing, 23(12), Dec. 2014.
- P. L. Combettes and J.-C. Pesquet, Proximal Splitting Methods in Signal Processing, in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, H. H. Bauschke, R. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Springer-Verlag, New York, 2011.
- P. L. Combettes, L. Condat, J.-C. Pesquet and B. C. Vũ, A Forward-Backward View of Some Primal-Dual Optimization Methods in Image Recovery, IEEE International Conference on Image Processing (ICIP 2014), 5 p., Paris, France, Oct. 2014.
- P. L. Combettes and J.-C. Pesquet, Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping, SIAM Journal on Optimization, 25(2), 2015.
- A. Florescu, E. Chouzenoux, J.-C. Pesquet, P. Ciuciu and S. Ciochina, A Majorize-Minimize Memory Gradient Method for Complex-Valued Inverse Problems, Signal Processing, 103, 2014.
- A. Jezierska, E. Chouzenoux, J.-C. Pesquet and H. Talbot, A Convex Approach for Image Restoration with Exact Poisson-Gaussian Likelihood, to appear in SIAM Journal on Imaging Sciences.
- N. Komodakis and J.-C. Pesquet, Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems, to appear in IEEE Signal Processing Magazine.
- J.-C. Pesquet and A. Repetti, A Class of Randomized Primal-Dual Algorithms for Distributed Optimization, to appear in Journal of Nonlinear and Convex Analysis.
- A. Repetti, E. Chouzenoux and J.-C. Pesquet, A Random Block-Coordinate Primal-Dual Proximal Algorithm with Application to 3D Mesh Denoising, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015), 5 p., Brisbane, Australia, Apr. 2015.
