A memory gradient algorithm for $\ell_2$-$\ell_0$ regularization with applications to image restoration


1. A memory gradient algorithm for $\ell_2$-$\ell_0$ regularization with applications to image restoration

E. Chouzenoux, A. Jezierska, J.-C. Pesquet and H. Talbot
Université Paris-Est, Laboratoire d'Informatique Gaspard Monge, UMR CNRS 8049
Journées GDR MOA-MSPC, "Optimisation et traitement d'images" (Optimization and image processing)

2. Outline

1. General context
2. $\ell_2$-$\ell_0$ regularization functions: existence of minimizers; epi-convergence property
3. Minimization of $F_\delta$: proposed algorithm; convergence results
4. Application to image restoration: image denoising; image deblurring; image segmentation; texture+geometry decomposition
5. Conclusion

3. Context: image restoration

We observe data $y \in \mathbb{R}^Q$, related to the original image $x \in \mathbb{R}^N$ through
$$y = Hx + u, \qquad H \in \mathbb{R}^{Q \times N}.$$
Objective: restore the unknown original image $x$ from $H$ and $y$.

[Figure: degraded observation $y$ and original image $x$]

4-5. Context: penalized optimization problem

Find
$$\min_{x \in \mathbb{R}^N} \; F(x) = \Phi(Hx - y) + \lambda R(x)$$
where
- $\Phi$: data-fidelity term, related to the noise
- $R$: regularization term, related to some a priori assumptions
- $\lambda$: regularization weight

Assumption: $x$ is sparse in a dictionary $V$ of analysis vectors in $\mathbb{R}^N$, leading to
$$F_0(x) = \Phi(Hx - y) + \lambda\, \ell_0(Vx).$$
Since $F_0$ is discontinuous and difficult to minimize directly, the $\ell_0$ norm is replaced by a smooth surrogate:
$$F_\delta(x) = \Phi(Hx - y) + \lambda \sum_{c=1}^{C} \psi_\delta(V_c x),$$
where $\psi_\delta$ is a differentiable, non-convex approximation of the $\ell_0$ norm.


8. $\ell_2$-$\ell_0$ regularization functions

We consider the following class of potential functions:
1. $(\forall \delta \in (0,+\infty))$ $\psi_\delta$ is differentiable.
2. $(\forall \delta \in (0,+\infty))$ $\lim_{t \to \infty} \psi_\delta(t) = 1$.
3. $(\forall \delta \in (0,+\infty))$ $\psi_\delta(t) = O(t^2)$ for small $t$.

Examples: $\psi_\delta(t) = \dfrac{t^2}{2\delta^2 + t^2}$ and $\psi_\delta(t) = 1 - \exp\!\left(-\dfrac{t^2}{2\delta^2}\right)$.

[Figure: plots of the two potentials as functions of $t$]
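As a quick sanity check (ours, not from the slides), the following Python snippet evaluates the two example potentials and verifies the class properties numerically. It assumes numpy; the names `psi_gm` and `psi_exp` are our own.

```python
# Numerical sanity check of the two example potentials.
import numpy as np

def psi_gm(t, delta):
    """Geman-McClure-type potential: t^2 / (2*delta^2 + t^2)."""
    return t**2 / (2 * delta**2 + t**2)

def psi_exp(t, delta):
    """Gaussian-type potential: 1 - exp(-t^2 / (2*delta^2))."""
    return 1.0 - np.exp(-t**2 / (2 * delta**2))

delta = 0.5
for psi in (psi_gm, psi_exp):
    # property 2: psi_delta(t) -> 1 as |t| -> infinity
    assert abs(psi(1e6, delta) - 1.0) < 1e-6
    # property 3: psi_delta(t) = O(t^2) near the origin
    t = 1e-4
    print(psi.__name__, "psi(t)/t^2 near 0:", psi(t, delta) / t**2)
```

Both printed ratios stay close to $1/(2\delta^2)$, consistent with the $O(t^2)$ behaviour near the origin.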

9-10. Existence of minimizers

$$F_\delta(x) = \Phi(Hx - y) + \lambda \sum_{c=1}^{C} \psi_\delta(V_c x) + \|\Pi x\|^2$$

Difficulty: $F_\delta$ is a non-convex, non-coercive function.

Proposition. Assume that
$$\lim_{\|x\| \to +\infty} \Phi(x) = +\infty$$
and that
$$\operatorname{Ker} H \cap \operatorname{Ker} \Pi = \{0\}.$$
Then, for every $\delta > 0$, $F_\delta$ has a minimizer. (Without the additional quadratic term $\|\Pi x\|^2$, the same conclusion holds under the stronger condition $\operatorname{Ker} H = \{0\}$.)

11. Epi-convergence to the $\ell_0$-penalized objective function

Assumptions:
1. $(\forall (\delta_1,\delta_2) \in (0,+\infty)^2)$ $\delta_1 \le \delta_2 \;\Rightarrow\; (\forall t \in \mathbb{R})\ \psi_{\delta_1}(t) \ge \psi_{\delta_2}(t) \ge 0$.
2. $(\forall t \in \mathbb{R})$ $\lim_{\delta \to 0} \psi_\delta(t) = \begin{cases} 0 & \text{if } t = 0 \\ 1 & \text{otherwise.} \end{cases}$
3. $\Phi$ is coercive and $\operatorname{Ker} H \cap \operatorname{Ker} \Pi = \{0\}$.

Proposition. Let $(\delta_n)_{n \in \mathbb{N}}$ be a decreasing sequence of positive real numbers converging to 0. Under the above assumptions,
$$\inf F_{\delta_n} \to \inf F_0 \quad \text{as } n \to +\infty.$$
In addition, if for every $n \in \mathbb{N}$, $\bar{x}_n$ is a minimizer of $F_{\delta_n}$, then the sequence $(\bar{x}_n)_{n \in \mathbb{N}}$ is bounded and all its cluster points are minimizers of $F_0$.
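The proposition can be illustrated on a one-dimensional toy problem. The sketch below is our construction (the values $y = 2$ and $\lambda = 1.5$ are arbitrary): it minimizes $F_\delta$ on a grid for a decreasing sequence $\delta_n$ and compares with the $\ell_0$-penalized objective.

```python
# 1-D illustration of epi-convergence: inf F_delta_n -> inf F_0.
import numpy as np

y, lam = 2.0, 1.5
grid = np.linspace(-5.0, 5.0, 200001)

def F(delta):
    return 0.5 * (grid - y)**2 + lam * (1.0 - np.exp(-grid**2 / (2 * delta**2)))

F0 = 0.5 * (grid - y)**2 + lam * (grid != 0)   # l0 penalty on the grid

for delta in (1.0, 0.3, 0.1, 0.03):
    print(f"delta = {delta:4.2f}   inf F_delta ~ {F(delta).min():.4f}")
print(f"inf F_0 ~ {F0.min():.4f}")
```

The grid minima of $F_{\delta_n}$ approach $\inf F_0 = \lambda$, the hard-thresholding value here since $\lambda < y^2/2$.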


13-14. Iterative minimization of $F_\delta(x)$

Descent algorithm:
$$x_{k+1} = x_k + \alpha_k d_k \qquad (\forall k \in \{0,\dots,K\})$$
- $d_k$: search direction satisfying $g_k^T d_k < 0$, where $g_k = \nabla F_\delta(x_k)$. Examples: gradient, conjugate gradient, Newton, truncated Newton, ...
- stepsize $\alpha_k$: approximate minimizer of $f_{k,\delta}(\alpha) = F_\delta(x_k + \alpha d_k)$.

Generalization: subspace algorithm [Zibulevsky10]
$$x_{k+1} = x_k + \sum_{i=1}^{r} s_{i,k}\, d_k^i \qquad (\forall k \in \{0,\dots,K\})$$
- $[d_k^1,\dots,d_k^r] = D_k$: set of search directions. Example: super-memory gradient, $D_k = [-g_k, d_{k-1}, \dots, d_{k-m}]$.
- stepsize $s_k$: approximate minimizer of $f_{k,\delta}(s) = F_\delta(x_k + D_k s)$.

15. Majorize-Minimize principle [Hunter04]

Objective: find $\hat{x} \in \operatorname{Argmin}_x F_\delta(x)$.

For all $x'$, let $Q(\cdot, x')$ be a tangent majorant of $F_\delta$ at $x'$, i.e.,
$$Q(x, x') \ge F_\delta(x) \quad \forall x, \qquad Q(x', x') = F_\delta(x').$$

MM algorithm: $\forall j \in \{0,\dots,J\}$,
$$x_{j+1} \in \operatorname{Argmin}_x\, Q(x, x_j).$$

[Figure: $F_\delta$ and the tangent majorant $Q(\cdot, x_j)$; minimizing the majorant moves $x_j$ to $x_{j+1}$]
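In one dimension the MM recursion takes a simple closed form. The sketch below is ours (all parameter values arbitrary); it uses the half-quadratic weight $\omega(t) = \dot\psi_\delta(t)/t$ introduced on the next slides, and checks the monotone decrease guaranteed by the MM principle.

```python
# 1-D MM illustration for F(x) = 0.5*(x - y)^2 + lam*psi(x),
# psi(t) = t^2 / (2*delta^2 + t^2).
import numpy as np

y, lam, delta = 2.0, 1.0, 0.5

def F(x):
    return 0.5 * (x - y)**2 + lam * x**2 / (2 * delta**2 + x**2)

def omega(t):
    # psi'(t)/t = 4*delta^2 / (2*delta^2 + t^2)^2
    return 4 * delta**2 / (2 * delta**2 + t**2)**2

x = 0.0
for j in range(10):
    x_new = y / (1.0 + lam * omega(x))   # exact minimizer of Q(., x)
    assert F(x_new) <= F(x) + 1e-12      # MM guarantees monotone decrease
    x = x_new
print("MM fixed point:", x, "with F(x) =", F(x))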

16-17. Quadratic tangent majorant function

Assumption: for all $x'$, there exists $A(x')$, positive definite, such that
$$Q(x, x') = F_\delta(x') + \nabla F_\delta(x')^T (x - x') + \tfrac{1}{2}(x - x')^T A(x') (x - x')$$
is a quadratic tangent majorant of $F_\delta$ at $x'$.

Construction of $A(\cdot)$ for $G(x) = \sum_c \phi([Vx - w]_c)$:
- If $\phi$ is $C^1$ on $\mathbb{R}$ and $\phi$ has an $L$-Lipschitz gradient on $\mathbb{R}$, then $A(x') = a\, V^T V$ with $a \ge L$.
- [Allain06] If $\phi$ is $C^1$ and even on $\mathbb{R}$ and $\phi(\sqrt{\cdot})$ is concave on $\mathbb{R}_+$, then
$$A(x') = V^T \operatorname{Diag}\{\omega([Vx' - w])\}\, V, \qquad \omega(u) = \dot{\phi}(u)/u \in (0, +\infty).$$
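The [Allain06]-type construction can be checked numerically. The sketch below is our code (with $w = 0$, random $V$ and $x'$, and the Geman-McClure-type potential): it builds $A(x')$ and verifies the majorization at many random points.

```python
# Numerical check of the half-quadratic majorant A(x') = V^T Diag{omega(Vx')} V.
import numpy as np

rng = np.random.default_rng(0)
delta = 0.7
phi = lambda u: u**2 / (2 * delta**2 + u**2)
omega = lambda u: 4 * delta**2 / (2 * delta**2 + u**2)**2   # phi'(u)/u

V = rng.standard_normal((8, 5))
G = lambda x: phi(V @ x).sum()
gradG = lambda x: V.T @ (omega(V @ x) * (V @ x))            # chain rule

xp = rng.standard_normal(5)
A = V.T @ (omega(V @ xp)[:, None] * V)                      # majorant metric
for _ in range(1000):
    x = xp + rng.standard_normal(5)
    d = x - xp
    Q = G(xp) + gradG(xp) @ d + 0.5 * d @ A @ d
    assert Q >= G(x) - 1e-10
print("majorization of G by Q(., x') verified at 1000 random points")
```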

18. Majorize-Minimize multivariate stepsize [Chouzenoux11]

$$x_{k+1} = x_k + D_k s_k \qquad (\forall k \in \{0,\dots,K\})$$
- $D_k$: set of directions
- $s_k$: result of the MM minimization of $f_{k,\delta}(s) = F_\delta(x_k + D_k s)$
- $q_k(s, s_k^j)$: quadratic tangent majorant of $f_{k,\delta}$ at $s_k^j$, with Hessian
$$B_{s_k^j} = D_k^T A(x_k + D_k s_k^j)\, D_k$$

MM minimization in the subspace:
$$s_k^0 = 0, \qquad s_k^{j+1} \in \operatorname{Argmin}_s\, q_k(s, s_k^j) \quad (\forall j \in \{0,\dots,J-1\}), \qquad s_k = s_k^J.$$

19. Proposed algorithm

Majorize-Minimize Memory Gradient (MM-MG) algorithm. For $k = 1,\dots,K$:
1. Compute the set of directions, for example $D_k = [-g_k,\, x_k - x_{k-1}]$.
2. $s_k^0 = 0$.
3. For $j \in \{0,\dots,J-1\}$:
$$B_{s_k^j} = D_k^T A(x_k + D_k s_k^j)\, D_k, \qquad s_k^{j+1} = s_k^j - B_{s_k^j}^{-1} \nabla f_{k,\delta}(s_k^j).$$
4. $s_k = s_k^J$.
5. Update $x_{k+1} = x_k + D_k s_k$.
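Gathering the pieces, here is a compact Python sketch of the MM-MG iteration. It is ours, not the authors' code: it hardcodes the quadratic data term $\Phi = \tfrac{1}{2}\|\cdot\|^2$ and the Geman-McClure-type potential, and adds a tiny ridge to $B$ as a numerical safeguard.

```python
# Minimal MM-MG sketch for F(x) = 0.5*||Hx - y||^2 + lam * sum_c psi([Vx]_c).
import numpy as np

def mm_mg(H, V, y, lam, delta, K=200, J=1):
    omega = lambda u: 4 * delta**2 / (2 * delta**2 + u**2)**2  # psi'(u)/u
    def grad(x):
        u = V @ x
        return H.T @ (H @ x - y) + lam * V.T @ (omega(u) * u)
    x, x_old = np.zeros(H.shape[1]), None
    for k in range(K):
        g = grad(x)
        if x_old is None:
            D = -g[:, None]                       # first step: gradient only
        else:
            D = np.column_stack((-g, x - x_old))  # memory-gradient subspace
        HD, VD = H @ D, V @ D
        s = np.zeros(D.shape[1])
        for _ in range(J):                        # MM stepsize in the subspace
            xs = x + D @ s
            w = omega(V @ xs)
            B = HD.T @ HD + lam * VD.T @ (VD * w[:, None])
            s = s - np.linalg.solve(B + 1e-10 * np.eye(len(s)), D.T @ grad(xs))
        x_old, x = x, x + D @ s
    return x
```

Here $A(x') = H^T H + \lambda V^T \operatorname{Diag}\{\omega(Vx')\} V$, so the subspace Hessian $B$ and the inner update match steps 3-4 above.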

20. Convergence results

Assumptions:
1. $\Phi$ is coercive and $\operatorname{Ker} H \cap \operatorname{Ker} \Pi = \{0\}$.
2. The gradient of $\Phi$ is $L$-Lipschitzian.
3. $\psi_\delta$ is even and $\psi_\delta(\sqrt{\cdot})$ is concave on $\mathbb{R}_+$. Moreover, there exists $\bar{\omega} \in [0,+\infty)$ such that $(\forall t \in (0,+\infty))$ $0 \le \dot{\psi}_\delta(t) \le \bar{\omega}\, t$. In addition, $\lim_{t \to 0}\, \dot{\psi}_\delta(t)/t \in \mathbb{R}$.
4. $F_\delta$ satisfies the Łojasiewicz inequality [Attouch10a, Attouch10b]: for every $\bar{x} \in \mathbb{R}^N$ and every bounded neighborhood $E$ of $\bar{x}$, there exist constants $\kappa > 0$, $\zeta > 0$ and $\theta \in [0,1)$ such that
$$\|\nabla F_\delta(x)\| \ge \kappa\, |F_\delta(x) - F_\delta(\bar{x})|^{\theta}$$
for every $x \in E$ such that $|F_\delta(x) - F_\delta(\bar{x})| < \zeta$.

21. Convergence results

Theorem. Under Assumptions 1, 2 and 3, for all $J \ge 1$, the MM-MG algorithm is such that
$$\lim_{k \to \infty} \|\nabla F_\delta(x_k)\| = 0.$$
Furthermore, if Assumption 4 is fulfilled, then:
- the MM-MG algorithm generates a sequence converging to a critical point $\bar{x}$ of $F_\delta$;
- the sequence $(x_k)_{k \in \mathbb{N}}$ has a finite length, in the sense that $\sum_{k=0}^{+\infty} \|x_{k+1} - x_k\| < +\infty$.


23. Image denoising

[Figure: original image $x$; noisy image $y$, SNR = 15 dB]

$$F_\delta(x) = \tfrac{1}{2}\|x - y\|^2 + \lambda \sum_c \psi_\delta(V_c x), \qquad \psi_\delta(t) = 1 - \exp(-t^2/2\delta^2)$$

Algorithms compared: MM-MG / Beck-Teboulle / half-quadratic. Comparison with discrete (combinatorial) methods using the truncated quadratic potential $\psi_\delta(t) = \delta^{-2} \min(t^2, \delta^2)$.
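A hypothetical 1-D analogue of this denoising experiment, reusing the `mm_mg()` sketch above with $H$ the identity and $V$ the first-order finite differences. All parameter values are our own choices, and the Geman-McClure-type potential hardcoded in `mm_mg()` stands in for $1 - \exp(-t^2/2\delta^2)$.

```python
# Toy 1-D denoising with the mm_mg() sketch (H = identity, V = differences).
import numpy as np

rng = np.random.default_rng(1)
n = 200
x_true = np.repeat([0.0, 1.0, -0.5, 2.0], n // 4)          # piecewise constant
y = x_true + 0.1 * rng.standard_normal(n)

H = np.eye(n)
V = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]                   # [Vx]_c = x[c+1]-x[c]
x_hat = mm_mg(H, V, y, lam=0.05, delta=0.05, K=100, J=1)

snr = lambda ref, est: 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))
print(f"SNR noisy: {snr(x_true, y):.1f} dB, restored: {snr(x_true, x_hat):.1f} dB")
```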

24. Results

[Figure: denoised image, SNR = 24.4 dB]

25. Results

Algorithm                            Time      SNR (dB)
MM-MG                                28 s      24.4
Beck-Teboulle [Beck09]               45 s      --
Half-Quadratic [Allain06]            97 s      --
Graph Cut Swap [Boykov02]            365 s     --
Tree-Reweighted [Felzenszwalb10]     181 s     --
Belief Propagation [Kolmogorov06]    1958 s    --

26. Image deblurring

[Figure: original image $x$; noisy blurred image $y$ — motion blur (9 pixels), SNR = 10 dB]

$$F_\delta(x) = \tfrac{1}{2}\|Hx - y\|^2 + \lambda \sum_c \psi_\delta(V_c x) + \tau \|x\|^2, \qquad \tau > 0$$
- Non-convex penalty: $\psi_\delta(t) = 1 - \exp(-t^2/2\delta^2)$
- Convex penalty: $\psi_\delta(t) = \sqrt{1 + t^2/\delta^2} - 1$
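A 1-D deblurring sketch along the same lines (ours): the Tikhonov term $\tau\|x\|^2$ can be folded into the data term by stacking $\sqrt{2\tau}\, I$ under $H$, since $\tfrac{1}{2}\|Hx - y\|^2 + \tau\|x\|^2 = \tfrac{1}{2}\|\tilde{H}x - \tilde{y}\|^2$ with $\tilde{H} = [H; \sqrt{2\tau} I]$ and $\tilde{y} = [y; 0]$, so the `mm_mg()` sketch applies unchanged. The blur length and all weights below are our toy choices.

```python
# Toy 1-D deblurring: banded motion-blur matrix + Tikhonov term via stacking.
import numpy as np

rng = np.random.default_rng(2)
n, blur_len, tau = 200, 9, 1e-4
H = sum(np.eye(n, k=-i) for i in range(blur_len)) / blur_len   # causal blur
x_true = np.repeat([0.0, 1.0, -0.5, 2.0], n // 4)
y = H @ x_true + 0.05 * rng.standard_normal(n)

H_aug = np.vstack((H, np.sqrt(2 * tau) * np.eye(n)))
y_aug = np.concatenate((y, np.zeros(n)))
V = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
x_hat = mm_mg(H_aug, V, y_aug, lam=0.02, delta=0.05, K=200, J=1)

snr = lambda ref, est: 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))
print(f"SNR blurred: {snr(x_true, y):.1f} dB, restored: {snr(x_true, x_hat):.1f} dB")
```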

27. Results: non-convex penalty

[Figure: restored image]

Algorithm    Time     SNR (dB)
MM-MG        29 s     --
B-T          32 s     --
H-Q          141 s    --

28. Results: convex penalty

[Figure: restored image]

Algorithm    Time      SNR (dB)
MM-MG        194 s     --
B-T          900 s     --
H-Q          1824 s    --

29. Image segmentation

[Figure: original image $\bar{x}$]

$$F_\delta(x) = \tfrac{1}{2}\|x - \bar{x}\|^2 + \lambda \sum_c \psi_\delta(V_c x)$$
with large $\lambda$ and small $\delta$.
- Non-convex penalty: $\psi_\delta(t) = 1 - \exp(-t^2/2\delta^2)$
- Convex penalty: $\psi_\delta(t) = \sqrt{1 + t^2/\delta^2} - 1$
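A hypothetical reuse of the same `mm_mg()` sketch for this setting (our 1-D toy, with arbitrary parameters): the clean signal itself is used as data, and a large $\lambda$ with a small $\delta$ should flatten the output toward a piecewise-constant, label-like result.

```python
# Toy 1-D segmentation-style flattening with the mm_mg() sketch.
import numpy as np

n = 200
x_bar = np.linspace(0.0, 1.0, n) ** 2             # smooth toy "image" (1-D)
V = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
x_seg = mm_mg(np.eye(n), V, x_bar, lam=2.0, delta=0.01, K=300, J=1)
print("distinct levels (rounded):", np.unique(np.round(x_seg, 2)).size)
```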

30. Results

[Figure: segmentation with the non-convex penalty; segmentation with the convex penalty]

31-32. Results

[Figure: profile of the 50th line of the initial image and of the non-convex and convex results]

Algorithm         Time (non convex)   Time (convex)
MM-MG             15 s                5 s
Beck-Teboulle     21 s                50 s
Half-Quadratic    39 s                31 s

33. Texture+geometry decomposition

[Figure: original image $\bar{x}$; noisy image $y$, SNR = 15 dB]

Decompose $y = \hat{x} + \check{x}$, with $\hat{x}$ the geometry component and $\check{x}$ the texture + noise component, where
$$\hat{x} \in \operatorname{Argmin}_x\; \tfrac{1}{2}\big\|\nabla^{-1}(x - y)\big\|^2 + \lambda \sum_c \psi_\delta(V_c^T x) \qquad \text{[Osher03]}$$
Non-convex penalty / convex penalty.

34. Results: non-convex penalty

[Figure: geometry component $\hat{x}$; texture + noise component $\check{x}$]

MM-MG algorithm: convergence in 91 s.

35. Results: convex penalty

[Figure: geometry component $\hat{x}$; texture + noise component $\check{x}$]

MM-MG algorithm: convergence in 134 s.


37. Conclusion

- Majorize-Minimize memory gradient algorithm for $\ell_2$-$\ell_0$ minimization
- Faster than combinatorial optimization techniques
- Simple to implement

Future work:
- Constrained case
- Non-differentiable case
- How to ensure global convergence?

38. Bibliography

- M. Allain, J. Idier and Y. Goussard. On global and local convergence of half-quadratic algorithms. IEEE Transactions on Image Processing, 15(5), 2006.
- H. Attouch, J. Bolte and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming, Ser. A, 137:91-129, 2013.
- H. Attouch, J. Bolte, P. Redont and A. Soubeyran. Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Mathematics of Operations Research, 35(2), 2010.
- A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11), 2009.
- E. Chouzenoux, J. Idier and S. Moussaoui. A Majorize-Minimize strategy for subspace optimization applied to image restoration. IEEE Transactions on Image Processing, 20(6), 2011.

39. Thanks for your attention!
