Variable Metric Forward-Backward Algorithm

Slide 1/37: Variable Metric Forward-Backward Algorithm for minimizing the sum of a differentiable function and a convex function

E. Chouzenoux, in collaboration with A. Repetti and J.-C. Pesquet
Laboratoire d'Informatique Gaspard Monge, UMR CNRS 8049, Université Paris-Est, France
Séminaire PGMO, 15 May 2013

Slide 2/37: Outline

1. Problem statement: minimization problem; existence of minimizers
2. Theoretical background: variational analysis; proximity operator; Kurdyka-Łojasiewicz inequality
3. Variable Metric Forward-Backward algorithm: Majorize-Minimize algorithm; proposed algorithm; convergence results
4. Application to image reconstruction: signal-dependent Gaussian noise; results

Slide 3/37: Minimization problem

Find $\hat x \in \operatorname{Argmin}\{G = F + R\}$,  (1)

with:
- $R\colon \mathbb{R}^N \to (-\infty,+\infty]$ proper, lsc, convex, and continuous on its domain,
- $F\colon \mathbb{R}^N \to \mathbb{R}$ differentiable, with an $L$-Lipschitz gradient on $\operatorname{dom} R$, i.e. $(\forall (x,y) \in (\operatorname{dom} R)^2)$ $\|\nabla F(x) - \nabla F(y)\| \le L\,\|x - y\|$,
- $G$ coercive, i.e. $\lim_{\|x\|\to+\infty} G(x) = +\infty$.

Slides 4-7/37: Existence of minimizers

Under the above assumptions on $F$, $R$ and $G$:
- $\varnothing \neq \operatorname{dom} R \subset \operatorname{dom} F$, hence $\operatorname{dom} R = \operatorname{dom} G$;
- $\operatorname{dom} G\ (= \operatorname{dom} R)$ is a nonempty convex set;
- $G$ is proper, lsc on $\mathbb{R}^N$, and continuous on $\operatorname{dom} G$;
- $(\forall x \in \operatorname{dom} G)$ $\operatorname{lev}_{\le G(x)} G$ is compact, and $\operatorname{Argmin} G \neq \varnothing$.

Slide 8/37: Variational analysis

Let $\psi\colon \mathbb{R}^N \to (-\infty,+\infty]$.
- Domain of $\psi$: $\operatorname{dom}\psi = \{x \in \mathbb{R}^N \mid \psi(x) < +\infty\}$.
- Proper function: $\psi$ is proper if $\operatorname{dom}\psi \neq \varnothing$.
- Level set of $\psi$ at height $\delta \in \mathbb{R}$: $\operatorname{lev}_{\le\delta}\psi = \{x \in \mathbb{R}^N \mid \psi(x) \le \delta\}$.

[Figure: graph of $\psi$ with the level set $\operatorname{lev}_{\le\delta}\psi \subset \operatorname{dom}\psi$.]

Slide 9/37: Variational analysis [Rockafellar and Wets, 1998]

Sub-differential. Let $\psi\colon \mathbb{R}^N \to (-\infty,+\infty]$ be a lsc proper function, and let $x \in \operatorname{dom}\psi$.
- Fréchet sub-differential:
$$\hat\partial\psi(x) = \Big\{ t \in \mathbb{R}^N \;\Big|\; \liminf_{\substack{y \to x \\ y \neq x}} \frac{1}{\|x - y\|}\big(\psi(y) - \psi(x) - (y - x)^\top t\big) \ge 0 \Big\}.$$
- Limiting sub-differential:
$$\partial\psi(x) = \big\{ \hat t \in \mathbb{R}^N \mid \exists\, y_k \to x,\ \psi(y_k) \to \psi(x),\ t_k \in \hat\partial\psi(y_k),\ t_k \to \hat t \big\}.$$

- If $\psi$ is convex, $\partial\psi$ coincides with the usual sub-differential of convex analysis.
- $x \in \mathbb{R}^N$ is a critical point of $\psi$ iff $0 \in \partial\psi(x)$.
- Convex particular case: $x$ minimizer of $\psi$ $\Leftrightarrow$ $0 \in \partial\psi(x)$. General case: $x$ minimizer of $\psi$ $\Rightarrow$ $0 \in \partial\psi(x)$.

Slide 10/37: Proximity operator

Let $\psi\colon \mathbb{R}^N \to (-\infty,+\infty]$ be proper, lsc, convex, and let $x \in \mathbb{R}^N$:
$$\operatorname{prox}_\psi(x) = \operatorname*{argmin}_{y \in \mathbb{R}^N}\ \psi(y) + \tfrac12 \|y - x\|^2.$$
Characterization of the proximity operator: $p = \operatorname{prox}_\psi(x) \Leftrightarrow x - p \in \partial\psi(p)$.
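
To make the definition concrete (an illustration added here, not from the original deck): for $\psi = \lambda\|\cdot\|_1$ the proximity operator has the closed form of soft-thresholding, and the characterization above can be checked componentwise. A minimal NumPy sketch:

```python
import numpy as np

def prox_l1(x, lam):
    """prox of psi = lam * ||.||_1, i.e. componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([1.5, -0.3, 0.0, 2.0])
lam = 0.5
p = prox_l1(x, lam)

# Characterization p = prox_psi(x) <=> x - p in d(psi)(p): componentwise,
# |x - p| <= lam everywhere, and x - p = lam * sign(p) wherever p != 0.
assert np.all(np.abs(x - p) <= lam + 1e-12)
assert np.allclose((x - p)[p != 0], lam * np.sign(p[p != 0]))
```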

Slide 11/37: Proximity operator relative to a metric

Let $U \in \mathbb{R}^{N\times N}$ be a symmetric positive definite matrix, and let $x \in \mathbb{R}^N$.
- Weighted norm: $\|x\|_U = (x^\top U x)^{1/2}$.
- Loewner partial ordering on $\mathbb{R}^{N\times N}$: $(\forall (U_1,U_2) \in (\mathbb{R}^{N\times N})^2)$ $U_1 \succeq U_2 \Leftrightarrow (\forall x \in \mathbb{R}^N)$ $x^\top U_1 x \ge x^\top U_2 x$.

Proximity operator relative to the metric induced by $U$: for $\psi\colon \mathbb{R}^N \to (-\infty,+\infty]$ proper, lsc, convex,
$$\operatorname{prox}_{U,\psi}(x) = \operatorname*{argmin}_{y \in \mathbb{R}^N}\ \psi(y) + \tfrac12 \|y - x\|_U^2,$$
and $\operatorname{prox}_{I_N,\psi} = \operatorname{prox}_\psi$.

Slide 12/37: Proximity operator relative to a metric

Characterization of the proximity operator: let $\psi\colon \mathbb{R}^N \to (-\infty,+\infty]$ be a proper, lsc, convex function. Then
$$(\forall x \in \mathbb{R}^N)\quad p = \operatorname{prox}_{U,\psi}(x) \Leftrightarrow U(x - p) \in \partial\psi(p).$$

Property: if $(\forall x \in \mathbb{R}^N)$ $\psi(x) = \sum_{n=1}^N \psi^{(n)}(x^{(n)})$ and $U = \operatorname{Diag}(u^{(1)},\dots,u^{(N)})$ with $(u^{(n)})_{1\le n\le N} \in (0,+\infty)^N$, then
$$(\forall x \in \mathbb{R}^N)\quad \operatorname{prox}_{U,\psi}(x) = \big(\operatorname{prox}_{\psi^{(n)}/u^{(n)}}(x^{(n)})\big)_{1\le n\le N}.$$
Other properties can be found in [Becker and Fadili, 2012].
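
A sketch of this separability property (again an added illustration): for $\psi = \lambda\|\cdot\|_1$ and a diagonal metric $U = \operatorname{Diag}(u)$, the metric prox reduces to componentwise soft-thresholding with thresholds $\lambda/u^{(n)}$, which can be verified against the characterization $U(x - p) \in \partial\psi(p)$:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_diag_l1(x, u, lam):
    """prox_{U,psi} for psi = lam*||.||_1 and U = Diag(u), u > 0:
    componentwise prox of psi^(n)/u^(n), i.e. thresholds lam/u."""
    return soft(x, lam / u)

x = np.array([2.0, -1.0, 0.2])
u = np.array([1.0, 4.0, 0.5])
lam = 0.5
p = prox_diag_l1(x, u, lam)

g = u * (x - p)        # U(x - p), must lie in lam * d(||.||_1)(p)
assert np.all(np.abs(g) <= lam + 1e-12)
assert np.allclose(g[p != 0], lam * np.sign(p[p != 0]))
```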

Slide 13/37: Kurdyka-Łojasiewicz inequality

Kurdyka-Łojasiewicz function. $G$ satisfies the Kurdyka-Łojasiewicz inequality if, for every $\xi \in \mathbb{R}$ and every bounded subset $E$ of $\mathbb{R}^N$, there exist three constants $\kappa > 0$, $\zeta > 0$ and $\theta \in [0,1)$ such that
$$(\forall t(x) \in \partial G(x))\quad \|t(x)\| \ge \kappa\, |G(x) - \xi|^{\theta},$$
for every $x \in E$ such that $|G(x) - \xi| \le \zeta$ (with the convention $0^0 = 0$).

Note that other forms of the KL inequality can be found in the literature [Bolte et al., 2007; Bolte et al., 2010]. It is satisfied by a wide class of functions: real analytic functions, semi-algebraic functions, ...
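
A worked one-dimensional example (added here for intuition, not on the original slide): take $G(x) = |x|^p$ with $p > 1$, $\xi = 0$, and any bounded set $E$. For $x \neq 0$, $\partial G(x) = \{p\,|x|^{p-1}\operatorname{sign}(x)\}$, so
$$\|t(x)\| = p\,|x|^{p-1} = p\,\big(|x|^p\big)^{(p-1)/p} = p\,|G(x) - \xi|^{\theta}, \qquad \theta = 1 - \tfrac{1}{p} \in [0,1),$$
so the KL inequality holds with $\kappa = p$; at $x = 0$ both sides vanish.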

Slide 14/37: Forward-Backward algorithm

FB algorithm:
$$x_0 \in \mathbb{R}^N, \quad \text{for } k = 0,1,\dots \quad
\begin{cases}
\bar y_k = x_k - \gamma_k \nabla F(x_k),\\
y_k = \operatorname{prox}_{\gamma_k R}(\bar y_k),\\
x_{k+1} = x_k + \lambda_k (y_k - x_k).
\end{cases}$$

Convergence is established if:
- $F$ convex with $L$-Lipschitz gradient, $R$ convex lsc proper, and $0 < \inf_{l\in\mathbb{N}} \gamma_l \le \sup_{l\in\mathbb{N}} \gamma_l < 2L^{-1}$, $(\forall k \in \mathbb{N})$ $0 < \inf_{l\in\mathbb{N}} \lambda_l \le \lambda_k \le 1$ [Combettes and Pesquet, 2007];
- $F$ and $R$ nonconvex, $F$ with a Lipschitz gradient, $\lambda_k \equiv 1$ [Attouch, Bolte and Svaiter, 2013].
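
A minimal sketch of the FB iteration on a synthetic toy problem (my own illustration: $F = \frac12\|Hx - z\|^2$, $R = \lambda\|\cdot\|_1$, so the backward step is soft-thresholding):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 50))
z = rng.standard_normal(20)
lam_reg = 0.1                                # weight of R = lam_reg * ||.||_1

grad_F = lambda x: H.T @ (H @ x - z)         # gradient of F(x) = 0.5*||Hx - z||^2
L = np.linalg.norm(H, 2) ** 2                # Lipschitz constant of grad F
gamma, relax = 1.0 / L, 1.0                  # gamma_k < 2/L, lambda_k in (0, 1]

x = np.zeros(50)
for k in range(300):
    y_bar = x - gamma * grad_F(x)                                        # forward step
    y = np.sign(y_bar) * np.maximum(np.abs(y_bar) - gamma * lam_reg, 0)  # prox_{gamma_k R}
    x = x + relax * (y - x)                                              # relaxation
```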

Slide 15/37: Variable Metric Forward-Backward algorithm

VMFB algorithm:
$$x_0 \in \mathbb{R}^N, \quad \text{for } k = 0,1,\dots \quad
\begin{cases}
\bar y_k = x_k - \gamma_k A_k^{-1} \nabla F(x_k),\\
y_k = \operatorname{prox}_{\gamma_k^{-1} A_k,\, R}(\bar y_k),\\
x_{k+1} = x_k + \lambda_k (y_k - x_k).
\end{cases}$$

Convergence is established [Combettes and Vũ, 2013] if $F$ is convex with an $L$-Lipschitz gradient, $R$ is convex lsc proper, and:
- $\exists (\eta_k)_{k\in\mathbb{N}} \in \ell_+^1(\mathbb{N})$ such that $(\forall k \in \mathbb{N})$ $(1+\eta_k) A_{k+1} \succeq A_k$;
- $\exists (\nu,\bar\nu) \in (0,+\infty)^2$ such that $(\forall k \in \mathbb{N})$ $\nu I_N \preceq A_k \preceq \bar\nu I_N$;
- $\exists \varepsilon \in \big(0, \min\{1, 2/(L(1+\bar\nu))\}\big]$ such that $(\forall k \in \mathbb{N})$ $\varepsilon \le \gamma_k \le (2 - L\bar\nu\varepsilon)/\bar\nu$ and $\varepsilon \le \lambda_k \le 1$.

Slide 16/37: Our contribution [Chouzenoux et al., 2013]

- Convergence of the VMFB algorithm for nonconvex $F$? → Kurdyka-Łojasiewicz inequality.
- Choice of the variable metric $(A_k)_{k\in\mathbb{N}}$? → Majorize-Minimize principle.
- Calculation of the proximity operator? → Inexact VMFB algorithm.

Slide 17/37: Majorize-Minimize assumption

MM assumption. For every $k \in \mathbb{N}$, there exists a symmetric positive definite matrix $A_k \in \mathbb{R}^{N\times N}$ such that, for every $x \in \mathbb{R}^N$,
$$Q(x, x_k) = F(x_k) + (x - x_k)^\top \nabla F(x_k) + \tfrac12 (x - x_k)^\top A_k (x - x_k)$$
is a majorant function of $F$ at $x_k$ on $\operatorname{dom} R$, i.e.
$$F(x_k) = Q(x_k, x_k) \quad\text{and}\quad (\forall x \in \operatorname{dom} R)\ F(x) \le Q(x, x_k).$$
Moreover, there exists $(\nu, \bar\nu) \in (0,+\infty)^2$ such that $(\forall k \in \mathbb{N})$ $\nu I_N \preceq A_k \preceq \bar\nu I_N$.

If $F$ is differentiable with an $L$-Lipschitz gradient on $\operatorname{dom} R$, then $A_k \equiv L I_N$ satisfies the above assumption [Bertsekas, 1999].
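
As a quick numerical illustration of the last point (added sketch, synthetic data): for a least-squares $F$, the quadratic function built with $A_k = L I_N$ indeed majorizes $F$, which can be checked on random test points:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((30, 10))
z = rng.standard_normal(30)

F = lambda x: 0.5 * np.sum((H @ x - z) ** 2)
grad_F = lambda x: H.T @ (H @ x - z)
L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of grad F

def Q(x, xk):
    """Quadratic majorant of F at xk, with the simple choice A_k = L * I_N."""
    d = x - xk
    return F(xk) + d @ grad_F(xk) + 0.5 * L * (d @ d)

xk = rng.standard_normal(10)
assert np.isclose(Q(xk, xk), F(xk))      # tangency: F(x_k) = Q(x_k, x_k)
for _ in range(100):                     # majorization: F(x) <= Q(x, x_k)
    x = rng.standard_normal(10)
    assert F(x) <= Q(x, xk) + 1e-9
```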

Slide 18/37: Majorize-Minimize algorithm [Jacobson and Fessler, 2007]

MM algorithm: $x_{k+1} \in \operatorname{Argmin}_x Q(x, x_k)$, i.e. the VMFB algorithm with $R \equiv 0$, $\lambda_k \equiv 1$, $\gamma_k \equiv 1$.

With the penalty term: $x_{k+1} \in \operatorname{Argmin}_x Q(x, x_k) + R(x)$, i.e. the VMFB algorithm with $\lambda_k \equiv 1$, $\gamma_k \equiv 1$.

[Figure: the quadratic majorant $Q(\cdot, x_k)$ lying above $F$; minimizing it yields $x_{k+1}$ from $x_k$.]

Slide 19/37: Proposed algorithm

VMFB algorithm:
$$x_0 \in \operatorname{dom} R, \quad \text{for } k = 0,1,\dots \quad
\begin{cases}
\bar y_k = x_k - \gamma_k A_k^{-1} \nabla F(x_k),\\
y_k = \operatorname{prox}_{\gamma_k^{-1} A_k,\, R}(\bar y_k),\\
x_{k+1} = (1 - \lambda_k) x_k + \lambda_k y_k,
\end{cases}$$
where
- $\exists (\eta, \bar\eta) \in (0,+\infty)^2$ such that $(\forall k \in \mathbb{N})$ $\eta \le \gamma_k \lambda_k \le 2 - \bar\eta$;
- $\exists \underline\lambda \in (0,+\infty)$ such that $(\forall k \in \mathbb{N})$ $\underline\lambda \le \lambda_k \le 1$.
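
A sketch of the proposed iteration in the simplest implementable setting (added illustration): with a diagonal metric $A_k$ and $R = \lambda\|\cdot\|_1$, the metric prox of slide 12 makes every step explicit. The `metric_diag` and `grad_F` callables are placeholders to be supplied by the application:

```python
import numpy as np

def vmfb(x0, grad_F, metric_diag, lam_reg, gamma=1.0, relax=1.0, iters=100):
    """Proposed VMFB iteration for R = lam_reg*||.||_1 with diagonal metrics:
    metric_diag(x) must return the (positive) diagonal of A_k at x."""
    x = x0.copy()
    for _ in range(iters):
        a = metric_diag(x)                        # diag(A_k) > 0
        y_bar = x - gamma * grad_F(x) / a         # x_k - gamma_k A_k^{-1} grad F(x_k)
        t = gamma * lam_reg / a                   # thresholds of prox_{gamma_k^{-1} A_k, R}
        y = np.sign(y_bar) * np.maximum(np.abs(y_bar) - t, 0.0)
        x = (1.0 - relax) * x + relax * y         # x_{k+1} = (1-lambda_k)x_k + lambda_k y_k
    return x
```

With `metric_diag` returning the constant $L$ in every entry and `relax = 1`, this reduces to the FB sketch given after slide 14.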

Slide 19/37 (cont.): Proposed algorithm

Inexact VMFB algorithm: $x_0 \in \operatorname{dom} R$, $\tau \in (0,+\infty)$; for $k = 0,1,\dots$, find $y_k \in \mathbb{R}^N$ and $r(y_k) \in \partial R(y_k)$ such that
$$R(y_k) + (y_k - x_k)^\top \nabla F(x_k) + \gamma_k^{-1} \|y_k - x_k\|_{A_k}^2 \le R(x_k),$$
$$\|\nabla F(x_k) + r(y_k)\| \le \tau\, \|y_k - x_k\|_{A_k},$$
and set $x_{k+1} = (1 - \lambda_k) x_k + \lambda_k y_k$, where
- $\exists (\eta, \bar\eta) \in (0,+\infty)^2$ such that $(\forall k \in \mathbb{N})$ $\eta \le \gamma_k \lambda_k \le 2 - \bar\eta$;
- $\exists \underline\lambda \in (0,+\infty)$ such that $(\forall k \in \mathbb{N})$ $\underline\lambda \le \lambda_k \le 1$.

Slide 20/37: Inexact proximal step

If the proximal step is computed exactly, i.e. $y_k = \operatorname{prox}_{\gamma_k^{-1} A_k,\, R}\big(x_k - \gamma_k A_k^{-1} \nabla F(x_k)\big)$, then $r(y_k) = -\nabla F(x_k) + \gamma_k^{-1} A_k (x_k - y_k) \in \partial R(y_k)$, and:
- convexity of $R$ gives $(y_k - x_k)^\top r(y_k) \ge R(y_k) - R(x_k)$, hence
$$R(y_k) + (y_k - x_k)^\top \nabla F(x_k) + \gamma_k^{-1} \|y_k - x_k\|_{A_k}^2 \le R(x_k);$$
- $\|\nabla F(x_k) + r(y_k)\| = \gamma_k^{-1} \|A_k (y_k - x_k)\| \le \gamma_k^{-1} \sqrt{\bar\nu}\, \|y_k - x_k\|_{A_k} \le \eta^{-1} \sqrt{\bar\nu}\, \|y_k - x_k\|_{A_k}$,

so both inexactness conditions hold with $\tau = \eta^{-1} \sqrt{\bar\nu}$.

Slide 21/37: Assumptions

- $R$ proper, lsc, convex and continuous on $\operatorname{dom} R$; $F$ differentiable with $\nabla F$ $L$-Lipschitz on $\operatorname{dom} R$; $G$ coercive.
- $G$ satisfies the Kurdyka-Łojasiewicz inequality.
- $(A_k)_{k\in\mathbb{N}}$ satisfies the majorization conditions.
- $(\lambda_k)_{k\in\mathbb{N}}$ and $(\gamma_k)_{k\in\mathbb{N}}$ bounded as above.
- There exists $\alpha \in (0,1]$ such that $(\forall k \in \mathbb{N})$ $G(x_{k+1}) \le (1-\alpha)\, G(x_k) + \alpha\, G(y_k)$.

The last condition is satisfied:
- if $(\forall k \in \mathbb{N})$ $x_{k+1}$ is such that $G(x_{k+1}) \le G(y_k)$, with $\alpha = 1$;
- if $(\lambda_k)_{k\in\mathbb{N}}$ is such that $(\forall k \in \mathbb{N})$ $\lambda_k = \alpha = 1$;
- if $(\forall k \in \mathbb{N})$ $G$ is convex on $[x_k, y_k]$;
- and it holds iff $(\forall k \in \mathbb{N})$ there exists $\alpha_k \in [\alpha, 1]$ such that $G(x_{k+1}) \le (1-\alpha_k)\, G(x_k) + \alpha_k\, G(y_k)$.

Slide 22/37: Descent properties

Property 1. There exists $\mu_1 \in (0,+\infty)$ such that $(\forall k \in \mathbb{N})$
$$G(x_{k+1}) \le G(x_k) - \tfrac{\mu_1}{2} \|x_{k+1} - x_k\|^2 \quad\text{and}\quad G(x_{k+1}) \le G(x_k) - \tfrac{\mu_1}{2} \|y_k - x_k\|^2.$$

Property 2. There exists $\mu_2 \in (0,+\infty)$ such that $(\forall k \in \mathbb{N})$
$$G(y_k) \le G(x_k) - \tfrac{\mu_2}{2} \|y_k - x_k\|^2.$$

Slide 23/37: Convergence results

Convergence theorem. The sequences $(x_k)_{k\in\mathbb{N}}$ and $(y_k)_{k\in\mathbb{N}}$ generated by the inexact VMFB algorithm (or the VMFB algorithm) both converge to a critical point $\hat x$ of $G$. Moreover,
$$\sum_{k=0}^{+\infty} \|x_{k+1} - x_k\| < +\infty \quad\text{and}\quad \sum_{k=0}^{+\infty} \|y_{k+1} - y_k\| < +\infty,$$
and $(G(x_k))_{k\in\mathbb{N}}$ and $(G(y_k))_{k\in\mathbb{N}}$ both converge to $G(\hat x)$.

Local convergence to a global minimizer. Let $(x_k)_{k\in\mathbb{N}}$ and $(y_k)_{k\in\mathbb{N}}$ be generated by the inexact VMFB algorithm (or the VMFB algorithm). There exists $\upsilon > 0$ such that, if $G(x_0) \le \inf_{x\in\mathbb{R}^N} G(x) + \upsilon$, then $(x_k)_{k\in\mathbb{N}}$ and $(y_k)_{k\in\mathbb{N}}$ both converge to a solution to Problem (1).

Slide 24/37: Image reconstruction under signal-dependent noise

Observation model:
$$z = H\bar x + w(H\bar x) \in [0,+\infty)^M.$$
- Observation matrix: $H \in [0,+\infty)^{M\times N}$; target image $\bar x \in [0,+\infty)^N$.
- Signal-dependent noise: $w(H\bar x) = \big(w^{(m)}([H\bar x]^{(m)})\big)_{1\le m\le M}$, where, for every $m \in \{1,\dots,M\}$, $a^{(m)} \in [0,+\infty)$, $b^{(m)} \in (0,+\infty)$, and $w^{(m)}([H\bar x]^{(m)})$ is a realization of $W^{(m)} \sim \mathcal{N}\big(0,\, a^{(m)} [H\bar x]^{(m)} + b^{(m)}\big)$.

OBJECTIVE: produce an estimate $\hat x \in [0,+\infty)^N$ of the target image $\bar x$ from the observed data $z$.
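
A small simulation of this observation model (added sketch; sizes and parameter values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 64, 256
H = rng.random((M, N)) / N                  # nonnegative observation matrix
x_bar = rng.random(N)                       # target image (vectorized), >= 0
a = np.full(M, 0.01)                        # a^(m) >= 0
b = np.full(M, 0.1)                         # b^(m) > 0

Hx = H @ x_bar
w = rng.normal(0.0, np.sqrt(a * Hx + b))    # W^(m) ~ N(0, a^(m)[Hx]^(m) + b^(m))
z = Hx + w                                  # observed data
```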

Slide 25/37: Optimization problem

Solve Problem (1): Find $\hat x \in \operatorname{Argmin}\{G = F + R\}$, where:

Data fidelity term (neg-log-likelihood of the data):
$$F(x) = \begin{cases} F_1(x) + F_2(x) & \text{if } x \in [0,+\infty)^N,\\ +\infty & \text{otherwise,} \end{cases}$$
with
$$F_1(x) = \frac12 \sum_{m=1}^M \frac{\big([Hx]^{(m)} - z^{(m)}\big)^2}{a^{(m)} [Hx]^{(m)} + b^{(m)}} \quad \text{(convex function)},$$
$$F_2(x) = \frac12 \sum_{m=1}^M \log\big(a^{(m)} [Hx]^{(m)} + b^{(m)}\big) \quad \text{(concave function)}.$$

Penalization term, serving to incorporate a priori information: $(\forall x \in \mathbb{R}^N)$ $R(x) = R_1(x) + R_2(x)$, where
$$R_1(x) = \iota_{[x_{\min}, x_{\max}]^N}(x) = \begin{cases} 0 & \text{if } x \in [x_{\min}, x_{\max}]^N,\\ +\infty & \text{otherwise,} \end{cases}$$
and $R_2$ is a sparsity prior in an analysis frame, or the total variation.
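
The two terms of the data fidelity translate directly into code (added sketch; it reuses H, z, a, b from the simulation above):

```python
import numpy as np

def F1(x, H, z, a, b):
    """Convex part of the neg-log-likelihood."""
    Hx = H @ x
    return 0.5 * np.sum((Hx - z) ** 2 / (a * Hx + b))

def F2(x, H, a, b):
    """Concave part: log of the signal-dependent variance."""
    return 0.5 * np.sum(np.log(a * (H @ x) + b))

def F(x, H, z, a, b):
    """Data fidelity F = F1 + F2 on [0,+inf)^N, +inf outside the domain."""
    if np.any(x < 0):
        return np.inf
    return F1(x, H, z, a, b) + F2(x, H, a, b)
```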

Slide 26/37: Majorization strategy for $F_1$

$F_1$ is a convex and additively separable function:
$$(\forall x \in [0,+\infty)^N)\quad F_1(x) = \sum_{m=1}^M \rho_1^{(m)}\big([Hx]^{(m)}\big),$$
where $(\forall m \in \{1,\dots,M\})$ $(\forall u \in [0,+\infty))$ $\rho_1^{(m)}(u) = \dfrac12 \dfrac{(u - z^{(m)})^2}{a^{(m)} u + b^{(m)}}$.

Then, for every $k \in \mathbb{N}$, a majorant function of $F_1$ at $x_k$ on $[0,+\infty)^N$ is given by
$$Q_1(\cdot, x_k) = F_1(x_k) + (\cdot - x_k)^\top \nabla F_1(x_k) + \tfrac12 (\cdot - x_k)^\top A_k (\cdot - x_k),$$
$$A_k = \operatorname{Diag}\big(P^\top \omega(H x_k)\big) + \varepsilon I_N, \quad \varepsilon \ge 0,$$
with $\omega\colon (v^{(m)})_{1\le m\le M} \in [0,+\infty)^M \mapsto \big(\omega^{(m)}(v^{(m)})\big)_{1\le m\le M} \in \mathbb{R}^M$, where
$$(\forall m \in \{1,\dots,M\})\quad \omega^{(m)}(u) = \begin{cases} \ddot\rho_1^{(m)}(0) & \text{if } u = 0,\\[4pt] 2\,\dfrac{\rho_1^{(m)}(0) - \rho_1^{(m)}(u) + u\,\dot\rho_1^{(m)}(u)}{u^2} & \text{if } u > 0, \end{cases}$$
and $(\forall m \in \{1,\dots,M\})$ $(\forall n \in \{1,\dots,N\})$ $P^{(m,n)} = H^{(m,n)} \sum_{p=1}^N H^{(m,p)}$.

The proof is based on the strict concavity of $\dot\rho_1^{(m)}$ and Jensen's inequality [Erdogan and Fessler, 1999].
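
A sketch of the resulting metric construction (added illustration; for simplicity, the $u = 0$ branch of $\omega^{(m)}$ is replaced here by a finite-difference estimate of $\ddot\rho_1^{(m)}(0)$):

```python
import numpy as np

def omega(u, z, a, b, h=1e-6):
    """Curvatures omega^(m)(u) for rho_1^(m)(u) = 0.5*(u - z)^2/(a*u + b)."""
    rho = lambda v: 0.5 * (v - z) ** 2 / (a * v + b)
    # analytic first derivative of rho
    drho = lambda v: (v - z) / (a * v + b) - 0.5 * a * (v - z) ** 2 / (a * v + b) ** 2
    pos = 2.0 * (rho(0.0) - rho(u) + u * drho(u)) / np.maximum(u, h) ** 2
    zero = (rho(2 * h) - 2 * rho(h) + rho(0.0)) / h ** 2   # ~ second derivative at 0
    return np.where(u > 0, pos, zero)

def metric_diag(H, x, z, a, b, eps=1e-10):
    """Diagonal of A_k = Diag(P^T omega(H x_k)) + eps*I_N,
    with P^(m,n) = H^(m,n) * sum_p H^(m,p)."""
    P = H * H.sum(axis=1, keepdims=True)
    return P.T @ omega(H @ x, z, a, b) + eps
```

Frozen to the current data via a lambda or `functools.partial`, this `metric_diag` plugs directly into the VMFB sketch given after slide 19.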

Slide 27/37: Implementation

Construction of the majorant:
$$F(x) = \begin{cases} F_1(x) + F_2(x) & \text{if } x \in [0,+\infty)^N,\\ +\infty & \text{otherwise,} \end{cases}$$
- $F_1$ convex: majorized at $x_k$ by $Q_1(\cdot, x_k)$.
- $F_2$ concave: majorized at $x_k$ by its tangent $Q_2(\cdot, x_k) = F_2(x_k) + (\cdot - x_k)^\top \nabla F_2(x_k)$.

Backward step: with $\bar y_k = x_k - \gamma_k A_k^{-1} \nabla F(x_k)$,
$$y_k = \operatorname*{argmin}_{x \in \mathbb{R}^N}\ R(x) + \frac{\gamma_k^{-1}}{2} \|x - \bar y_k\|_{A_k}^2
= \gamma_k^{1/2} A_k^{-1/2} \operatorname*{argmin}_{x \in \mathbb{R}^N}\ \Big\{ R\big(\gamma_k^{1/2} A_k^{-1/2} x\big) + \frac12 \big\|x - \gamma_k^{-1/2} A_k^{1/2} \bar y_k\big\|^2 \Big\},$$
solved with the Dual Forward-Backward algorithm [Combettes et al., 2011].

Slide 28/37: Reconstruction with sparsity prior

- $H$: Radon matrix modeling $M = 128 \times 128 = 16384$ parallel projections, from 128 acquisition lines and 128 angles.
- $(\forall m \in \{1,\dots,M\})$ $a^{(m)} = 0.01$ and $b^{(m)} = 0.1$.

[Figures: original Zubal phantom image; degraded sinogram.]

Slide 29/37: Results: restored images

[Figures: FBP reconstruction, SNR = 7 dB; VMFB reconstruction, SNR = 18.9 dB.]

Slide 30/37: Results

[Figure: decay of $G(x_k) - G(\hat x)$ and of $\|x_k - \hat x\|$ versus time (s), for the VMFB algorithm with $\lambda_k \equiv 1$ and $\gamma_k \equiv 1$ (solid line) or $\gamma_k \equiv 1.9$ (dashed line), the FB algorithm with the same step-size choices, and FISTA.]

Slide 31/37: Deblurring with Total Variation

- $H$: blur operator corresponding to a truncated Gaussian kernel of standard deviation 1 and size $7 \times 7$.
- $(\forall m \in \{1,\dots,M\})$ $a^{(m)} = 0.5$ and $b^{(m)} = 1$.

[Figures: original Jetplane image; degraded image, SNR = 21.95 dB.]

Slide 32/37: Results: restored images

[Figures: degraded image, SNR = 21.95 dB; restored image, SNR = 27.09 dB.]

Slide 33/37: Results

[Figure: decay of $G(x_k) - G(\hat x)$ and of $\|x_k - \hat x\|$ versus time (s), for the VMFB algorithm ($\lambda_k \equiv 1$; $\gamma_k \equiv 1$ solid, $\gamma_k \equiv 1.9$ dashed), the FB algorithm (same step-size choices), and FISTA.]

Slide 34/37: Conclusion

- Convergence of the VMFB algorithm for the sum of a nonconvex differentiable function $F$ and a nonsmooth convex function $R$.
- Choice of the variable metric $(A_k)_{k\in\mathbb{N}}$ based on the MM principle.
- Inexact VMFB algorithm for the calculation of the proximity operator.
- In each experiment, the variable metric strategy leads to a significant acceleration in the decay of both the objective function and the error on the iterates.

Slide 35/37: Bibliography

- D. P. Bertsekas. Nonlinear Programming. 2nd edn., Athena Scientific, Belmont, MA, 1999.
- R. T. Rockafellar and R. J. B. Wets. Variational Analysis. 1st edn., Grundlehren der Mathematischen Wissenschaften, vol. 317, Springer, Berlin, 1998.
- H. Attouch, J. Bolte and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program., vol. 137, pp. 91-129, Feb. 2013.
- J. Bolte, A. Daniilidis, A. Lewis and M. Shiota. Clarke subgradients of stratifiable functions. SIAM J. Optim., vol. 18, no. 2, pp. 556-572, 2007.
- J. Bolte, A. Daniilidis, O. Ley and L. Mazet. Characterizations of Łojasiewicz inequalities and applications. Trans. Amer. Math. Soc., vol. 362, no. 6, pp. 3319-3363, 2010.
- S. Becker and J. Fadili. A quasi-Newton proximal splitting method. Tech. Rep., 2012.
- E. Chouzenoux, J.-C. Pesquet and A. Repetti. Variable Metric Forward-Backward algorithm for minimizing the sum of a differentiable function and a convex function. Tech. Rep., 2013. Available on http://www.optimization-online.org/DB_HTML/2013/01/3749.html

Slide 36/37: Bibliography

- P. L. Combettes, D. Dũng and B. C. Vũ. Proximity for sums of composite functions. J. Math. Anal. Appl., vol. 380, no. 2, pp. 680-688, Aug. 2011.
- P. L. Combettes and J.-C. Pesquet. Proximal thresholding algorithm for minimization over orthonormal bases. SIAM J. Optim., vol. 18, no. 4, pp. 1351-1376, Nov. 2007.
- P. L. Combettes and B. C. Vũ. Variable metric forward-backward splitting with applications to monotone inclusions in duality. To appear in Optimization, 2013.
- H. Erdogan and J. A. Fessler. Monotonic algorithms for transmission tomography. IEEE Trans. Med. Imag., vol. 18, no. 9, pp. 801-814, Sep. 1999.
- M. W. Jacobson and J. A. Fessler. An expanded theoretical treatment of iteration-dependent Majorize-Minimize algorithms. IEEE Trans. Image Process., vol. 16, no. 10, pp. 2411-2422, Oct. 2007.
- J. J. Moreau. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France, vol. 93, pp. 273-299, 1965.

Slide 37/37: Thank you!
