A Majorize-Minimize subspace approach for $\ell_2$-$\ell_0$ regularization with applications to image processing


1 A Majorize-Minimize subspace approach for $\ell_2$-$\ell_0$ regularization with applications to image processing. Emilie Chouzenoux (emilie.chouzenoux@univ-mlv.fr), Université Paris-Est, Lab. d'Informatique Gaspard Monge, UMR CNRS 8049. Séminaire Image, GREYC, 2 February 2012.

2 Outline
1. General context
2. $\ell_2$-$\ell_0$ regularization functions: existence of minimizers; epi-convergence property
3. Minimization of $F_\delta$: proposed algorithm; convergence results
4. Application to image processing: image denoising; image segmentation; texture+geometry decomposition; image reconstruction
5. Conclusion

3 Context: image restoration. We observe data $y \in \mathbb{R}^Q$, related to the original image $x \in \mathbb{R}^N$ through $y = Hx + w$, with $H \in \mathbb{R}^{Q \times N}$. Objective: restore the unknown original image $x$ from $H$ and $y$. [Figures: observed data $y$ and original image $x$.]

5 Context: penalized optimization problem. Find $\min_{x \in \mathbb{R}^N} F(x) = \Phi(Hx - y) + \Psi(x)$, where $\Phi$ is the data fidelity term, related to the noise, and $\Psi$ is the regularization term, related to some a priori assumptions. Assumption: there exist $V = [V_1^\top \cdots V_S^\top]^\top$ and $c = [c_1^\top, \dots, c_S^\top]^\top$, with $V_s \in \mathbb{R}^{P_s \times N}$ and $c_s \in \mathbb{R}^{P_s}$, such that $Vx - c$ is block-sparse. This leads to $F_0(x) = \Phi(Hx - y) + \sum_{s=1}^{S} \lambda \chi_{\mathbb{R}\setminus\{0\}}(\|V_s x - c_s\|)$, where $\chi_{\mathbb{R}\setminus\{0\}}(t) = 0$ if $t = 0$ and $1$ otherwise (group $\ell_0$ penalty [Eldar10]).

6 Context: penalized optimization problem (smoothed formulation). With the same $V_s$ and $c_s$, the group $\ell_0$ penalty is replaced by $F_\delta(x) = \Phi(Hx - y) + \sum_{s=1}^{S} \psi_{s,\delta}(\|V_s x - c_s\|)$, where $\psi_{s,\delta}$ is an approximation of $\lambda \chi_{\mathbb{R}\setminus\{0\}}$ depending on $\delta > 0$.

7 Examples of regularization functions. $\ell_2$-$\ell_1$ functions: asymptotically linear with a quadratic behavior near 0. Example: $(\forall s \in \{1,\dots,S\})(\forall t \in \mathbb{R})$, $\psi_{s,\delta}(t) = \lambda\left(\sqrt{1 + t^2/\delta^2} - 1\right)$. Limit case: when $\delta \to 0$, $\psi_\delta(t) = \lambda |t|$ ($\ell_1$ penalty). Convex functions: Majorize-Minimize algorithms [Allain06, Chouzenoux11], proximal algorithms [Combettes11, Condat11, Raguet11].

8 Examples of regularization functions (continued). $\ell_2$-$\ell_0$ functions: asymptotically constant with a quadratic behavior near 0. Example: $(\forall s \in \{1,\dots,S\})(\forall t \in \mathbb{R})$, $\psi_{s,\delta}(t) = \lambda \min\left(t^2/(2\delta^2), 1\right)$. Limit case: when $\delta \to 0$, $\psi_\delta(t) = \lambda \chi_{\mathbb{R}\setminus\{0\}}(t)$ ($\ell_0$ penalty). Non-convex functions: which algorithms?

11 $\ell_2$-$\ell_0$ regularization functions. We consider the following class of potential functions:
1. $(\forall s \in \{1,\dots,S\})(\forall \delta \in (0,+\infty))$ $\lim_{t \to \infty} \psi_{s,\delta}(t) = \lambda$.
2. $(\forall s \in \{1,\dots,S\})(\forall \delta \in (0,+\infty))$ $\psi_{s,\delta}(t) = O(t^2)$ for small $t$.
Examples: $\psi_\delta(t) = \min\left(\frac{t^2}{2\delta^2}, 1\right)$; $\psi_\delta(t) = \frac{t^2}{2\delta^2 + t^2}$; $\psi_\delta(t) = 1 - \exp\left(-\frac{t^2}{2\delta^2}\right)$; $\psi_\delta(t) = \tanh\left(\frac{t^2}{2\delta^2}\right)$. [Plot of these four example potentials as functions of $t$.]
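
As a concrete illustration (not part of the slides), here is a minimal NumPy sketch of the four example potentials; the function names are mine, and each potential behaves like $t^2/(2\delta^2)$ near 0 and tends to 1 as $|t| \to \infty$, in line with conditions 1 and 2 above (with $\lambda = 1$).

```python
import numpy as np

def psi_truncated_quadratic(t, delta):
    # psi_delta(t) = min(t^2 / (2 delta^2), 1)
    return np.minimum(t**2 / (2.0 * delta**2), 1.0)

def psi_geman_mcclure(t, delta):
    # psi_delta(t) = t^2 / (2 delta^2 + t^2)
    return t**2 / (2.0 * delta**2 + t**2)

def psi_welsch(t, delta):
    # psi_delta(t) = 1 - exp(-t^2 / (2 delta^2))
    return 1.0 - np.exp(-t**2 / (2.0 * delta**2))

def psi_tanh(t, delta):
    # psi_delta(t) = tanh(t^2 / (2 delta^2))
    return np.tanh(t**2 / (2.0 * delta**2))

if __name__ == "__main__":
    t = np.linspace(-5.0, 5.0, 11)
    for psi in (psi_truncated_quadratic, psi_geman_mcclure, psi_welsch, psi_tanh):
        print(psi.__name__, np.round(psi(t, delta=1.0), 3))
```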

12 Existence of minimizers (I). $F_\delta(x) = \Phi(Hx - y) + \sum_{s=1}^{S} \psi_{s,\delta}(\|V_s x - c_s\|)$. Difficulty: $F_\delta$ is a non-convex, non-coercive function. Proposition 1. Assume that (i) $\Phi$ is continuous and coercive, i.e. $\lim_{\|x\| \to +\infty} \Phi(x) = +\infty$; (ii) for every $\delta > 0$ and $s \in \{1,\dots,S\}$, $\psi_{s,\delta}$ is continuous and takes nonnegative values; (iii) $\operatorname{Ker} H = \{0\}$. Then, for every $\delta > 0$, $F_\delta$ has a minimizer.

13 Existence of minimizers (II). $F_\delta(x) = \Phi(Hx - y) + \sum_{s=1}^{S} \psi_{s,\delta}(\|V_s x - c_s\|) + \|V_0 x\|^2$. Difficulty: $F_\delta$ is a non-convex, non-coercive function. Proposition 1. Assume that (i) $\Phi$ is continuous and coercive, i.e. $\lim_{\|x\| \to +\infty} \Phi(x) = +\infty$; (ii) for every $\delta > 0$ and $s \in \{1,\dots,S\}$, $\psi_{s,\delta}$ is continuous and takes nonnegative values; (iii) $\operatorname{Ker} H \cap \operatorname{Ker} V_0 = \{0\}$. Then, for every $\delta > 0$, $F_\delta$ has a minimizer.

14 Existence of minimizers (III). $F_\delta(x) = \Phi(Hx - y) + \sum_{s=1}^{S} \psi_{s,\delta}(\|V_s x - c_s\|)$. Difficulty: $F_\delta$ is a non-convex, non-coercive function. Proposition 1. Assume that (i) $\Phi$ is continuous and coercive, i.e. $\lim_{\|x\| \to +\infty} \Phi(x) = +\infty$; (ii) for every $\delta > 0$ and $s \in \{1,\dots,S\}$, $\psi_{s,\delta}$ is continuous, takes nonnegative values, and $\psi_{s,\delta}^{-1}(0)$ is a nonempty bounded set; (iii) $\operatorname{Ker} H \cap \bigcap_{s=1}^{S} \operatorname{Ker} V_s = \{0\}$. Then, for every $\delta > 0$, $F_\delta$ has a minimizer.

15 Epi-convergence to the group $\ell_0$-penalized objective function. Assumptions:
1. $(\forall s \in \{1,\dots,S\})(\forall (\delta_1,\delta_2) \in (0,+\infty)^2)$ $\delta_1 \le \delta_2 \Rightarrow (\forall t \in \mathbb{R})\ \psi_{s,\delta_1}(t) \ge \psi_{s,\delta_2}(t)$.
2. $(\forall s \in \{1,\dots,S\})(\forall t \in \mathbb{R})$ $\lim_{\delta \to 0} \psi_{s,\delta}(t) = \lambda \chi_{\mathbb{R}\setminus\{0\}}(t)$.
3. Assumptions of Proposition 1.
Proposition 2. Let $(\delta_n)_{n \in \mathbb{N}}$ be a decreasing sequence of positive real numbers converging to 0. Under the above assumptions, $\inf F_{\delta_n} \to \inf F_0$ as $n \to +\infty$. In addition, if for every $n \in \mathbb{N}$, $\hat{x}_n$ is a minimizer of $F_{\delta_n}$, then the sequence $(\hat{x}_n)_{n \in \mathbb{N}}$ is bounded and all its cluster points are minimizers of $F_0$.
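
In practice, Proposition 2 suggests a continuation (graduated non-convexity) strategy: solve a sequence of smoothed problems with decreasing $\delta_n$, warm-starting each solve at the previous minimizer. The sketch below assumes a generic `minimize_F_delta(delta, x0)` routine (hypothetical; for instance the MM subspace algorithm of the next section) and is only an illustration, not part of the slides.

```python
import numpy as np

def graduated_minimization(minimize_F_delta, x0, delta0=1.0, factor=0.5, n_steps=6):
    """Approximately minimize F_0 by minimizing F_delta for a decreasing
    sequence delta_n -> 0, warm-starting each solve at the previous solution."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(n_steps):
        x = minimize_F_delta(delta, x)   # approximate minimizer of F_delta
        delta *= factor                  # delta_{n+1} = factor * delta_n
    return x
```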

17 Iterative minimization of $F_\delta$. In the sequel, we assume that $F_\delta$ is differentiable. Descent algorithm: $x_{k+1} = x_k + \alpha_k d_k$ $(\forall k \ge 0)$, where $d_k$ is a search direction satisfying $g_k^\top d_k < 0$ with $g_k = \nabla F_\delta(x_k)$. Ex: gradient, conjugate gradient, Newton, truncated Newton, ... Stepsize $\alpha_k$: approximate minimizer of $f_{k,\delta}(\alpha) \colon \alpha \mapsto F_\delta(x_k + \alpha d_k)$.

18 Iterative minimization of $F_\delta$ (continued). Generalization: subspace algorithm [Zibulevsky10]: $x_{k+1} = x_k + \sum_{m=1}^{M} u_{m,k}\, d_k^m$ $(\forall k \ge 0)$, where $D_k = [d_k^1, \dots, d_k^M]$ is the set of search directions. Ex: super-memory gradient, $D_k = [-g_k, d_{k-1}, \dots, d_{k-l}]$. Stepsize $u_k$: approximate minimizer of $f_{k,\delta}(u) \colon u \mapsto F_\delta(x_k + D_k u)$.

19 Majorize-Minimize principle [Hunter04]. Objective: find $\hat{x} \in \operatorname{Argmin}_x F_\delta(x)$. For all $x'$, let $Q(\cdot, x')$ be a tangent majorant of $F_\delta$ at $x'$, i.e., $Q(x, x') \ge F_\delta(x)$ for all $x$, and $Q(x', x') = F_\delta(x')$. MM algorithm: $\forall j \in \{0,\dots,J\}$, $x^{j+1} \in \operatorname{Argmin}_x Q(x, x^j)$. [Figure: tangent majorant $Q(\cdot, x^j)$ lying above $F_\delta$, with iterates $x^j$ and $x^{j+1}$.]

20 Quadratic tangent majorant function. Assumptions: (i) $\Phi$ is differentiable with an $L$-Lipschitzian gradient; (ii) for every $s \in \{1,\dots,S\}$, $\psi_{s,\delta}$ is a differentiable function; (iii) for every $s \in \{1,\dots,S\}$, $\psi_{s,\delta}(\sqrt{\cdot})$ is concave on $[0,+\infty)$; (iv) for every $s \in \{1,\dots,S\}$, there exists $\omega_s \in [0,+\infty)$ such that $(\forall t \in (0,+\infty))$ $0 \le \dot{\psi}_{s,\delta}(t) \le \omega_s t$, where $\dot{\psi}_{s,\delta}$ is the derivative of $\psi_{s,\delta}$; in addition, $\lim_{t \to 0,\, t > 0} \omega_{s,\delta}(t) \in \mathbb{R}$, where $\omega_{s,\delta}(t) = \dot{\psi}_{s,\delta}(t)/t$. Lemma 1 [Allain06]. For every $x' \in \mathbb{R}^N$, let $A(x') = \mu H^\top H + V^\top \operatorname{Diag}\{b(x')\} V + 2 V_0^\top V_0$, where $\mu \in [L,+\infty)$ and $b(x') \in \mathbb{R}^{SP}$ with $b_{s,p}(x') = \omega_{s,\delta}(\|V_s x' - c_s\|)$. Then, $Q(x, x') = F_\delta(x') + \nabla F_\delta(x')^\top (x - x') + \frac{1}{2}(x - x')^\top A(x')(x - x')$ is a convex quadratic tangent majorant of $F_\delta$ at $x'$.
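
As an illustration of Lemma 1 (not the authors' code), the sketch below builds the curvature matrix $A(x')$ for a small problem where $H$, $V$ and (optionally) $V_0$ are explicit NumPy arrays, under the simplifying assumption that each $V_s$ is a single row ($P_s = 1$). The weight function shown is $\omega_{s,\delta}(t) = \dot{\psi}_{s,\delta}(t)/t$ for the potential $\psi_{s,\delta}(t) = \lambda t^2/(2\delta^2 + t^2)$, for which $\omega_{s,\delta}(t) = 4\lambda\delta^2/(2\delta^2 + t^2)^2$.

```python
import numpy as np

def omega_geman_mcclure(t, lam, delta):
    # omega(t) = psi'(t) / t for psi(t) = lam * t^2 / (2 delta^2 + t^2)
    return 4.0 * lam * delta**2 / (2.0 * delta**2 + t**2) ** 2

def majorant_matrix(x, H, V, c, lam, delta, mu, V0=None):
    """A(x) = mu H^T H + V^T Diag{b(x)} V + 2 V0^T V0, assuming one row per
    block (P_s = 1), so that b_s(x) = omega(|V_s x - c_s|)."""
    b = omega_geman_mcclure(np.abs(V @ x - c), lam, delta)
    A = mu * H.T @ H + V.T @ (b[:, None] * V)
    if V0 is not None:
        A += 2.0 * V0.T @ V0
    return A
```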

21 Majorize-Minimize multivariate stepsize [Chouzenoux11]. $x_{k+1} = x_k + D_k u_k$ $(\forall k \ge 0)$, where $D_k$ is the set of directions and $u_k$ results from an MM minimization of $f_{k,\delta}(u) \colon u \mapsto F_\delta(x_k + D_k u)$. $q_k(u, u_k^j)$: quadratic tangent majorant of $f_{k,\delta}$ at $u_k^j$, with Hessian $B_{k,u_k^j} = D_k^\top A(x_k + D_k u_k^j) D_k$. MM minimization in the subspace: $u_k^0 = 0$, $u_k^{j+1} \in \operatorname{Argmin}_u q_k(u, u_k^j)$ $(\forall j \in \{0,\dots,J-1\})$, $u_k = u_k^J$.

22 Proposed algorithm: Majorize-Minimize subspace algorithm. For all $k \ge 0$:
1. Compute the set of directions $D_k = [d_k^1, \dots, d_k^M]$.
2. Set $u_k^0 = 0$.
3. For $j \in \{0,\dots,J-1\}$: compute $B_{k,u_k^j} = D_k^\top A(x_k + D_k u_k^j) D_k$ and set $u_k^{j+1} = u_k^j - B_{k,u_k^j}^{-1} \nabla f_{k,\delta}(u_k^j)$.
4. Set $u_k = u_k^J$.
5. Update $x_{k+1} = x_k + D_k u_k$.
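
A minimal NumPy sketch of this iteration, specialized to the memory-gradient subspace $D_k = [-g_k,\, x_k - x_{k-1}]$ used later (MM-MG), is given below. It assumes user-supplied `grad_F` (the gradient $\nabla F_\delta$) and `majorant_A` (the matrix $A(\cdot)$ of Lemma 1), solves the small inner systems with a pseudo-inverse for robustness, and is only an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def mm_subspace(x0, grad_F, majorant_A, n_iter=100, J=1):
    """MM memory-gradient iteration: x_{k+1} = x_k + D_k u_k,
    with D_k = [-g_k, x_k - x_{k-1}]."""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for k in range(n_iter):
        g = grad_F(x)
        # Set of directions D_k (gradient direction only at the first iteration)
        D = -g[:, None] if k == 0 else np.column_stack((-g, x - x_prev))
        # MM stepsize search in the subspace: u^{j+1} = u^j - B^{-1} grad f_{k,delta}(u^j)
        u = np.zeros(D.shape[1])
        for _ in range(J):
            z = x + D @ u
            B = D.T @ majorant_A(z) @ D            # Hessian of the tangent majorant
            u = u - np.linalg.pinv(B) @ (D.T @ grad_F(z))
        x_prev, x = x, x + D @ u
    return x
```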

23 Convergence results. Assumptions:
➀ Assumptions of Proposition 1.
➁ Assumptions of Lemma 1.
➂ For every $k \in \mathbb{N}$, the matrix of directions $D_k$ is of size $N \times M$ with $1 \le M \le N$, and the first subspace direction $d_k^1$ is gradient-related, i.e., there exist $\gamma_0 > 0$ and $\gamma_1 > 0$ such that, for every $k \in \mathbb{N}$, $-g_k^\top d_k^1 \ge \gamma_0 \|g_k\|^2$ and $\|d_k^1\| \le \gamma_1 \|g_k\|$.
➃ $F_\delta$ satisfies the Łojasiewicz inequality [Attouch10a, Attouch10b]: for every $\bar{x} \in \mathbb{R}^N$ and every bounded neighborhood $E$ of $\bar{x}$, there exist constants $\kappa > 0$, $\zeta > 0$ and $\theta \in [0,1)$ such that $\|\nabla F_\delta(x)\| \ge \kappa\, |F_\delta(x) - F_\delta(\bar{x})|^\theta$ for every $x \in E$ such that $|F_\delta(x) - F_\delta(\bar{x})| < \zeta$.

24 Convergence results. Theorem. Under Assumptions ➀, ➁ and ➂, for all $J \ge 1$, the MM subspace algorithm is such that $\lim_{k \to \infty} \|\nabla F_\delta(x_k)\| = 0$. Furthermore, if Assumption ➃ is fulfilled, then the MM subspace algorithm generates a sequence converging to a critical point $\hat{x}$ of $F_\delta$, and the sequence $(x_k)_{k \in \mathbb{N}}$ has a finite length in the sense that $\sum_{k=0}^{+\infty} \|x_{k+1} - x_k\| < +\infty$.

26 Simulation settings. Considered penalization functions:
- SC: $\psi_{s,\delta}(t) = \lambda\left(\sqrt{1 + t^2/\delta^2} - 1\right)$
- SNC-(i): $\psi_{s,\delta}(t) = \lambda t^2/(2\delta^2 + t^2)$
- SNC-(ii): $\psi_{s,\delta}(t) = \lambda\left(1 - \exp(-t^2/(2\delta^2))\right)$
- SNC-(iii): $\psi_{s,\delta}(t) = \lambda \tanh(t^2/(2\delta^2))$
- NSNC: $\psi_{s,\delta}(t) = \lambda \min\left(t^2/(2\delta^2), 1\right)$
Optimization algorithms: MM subspace algorithm with $D_k = [-g_k,\, x_k - x_{k-1}]$ (MM-MG); NLCG [Hager06], L-BFGS [Liu89] and HQ [Allain06] algorithms; for NSNC, four state-of-the-art combinatorial optimization algorithms: α-exp [Boykov01], QCSM [Jezierska11], TRW [Kolmogorov06] and BP [Felzenszwalb10].

27 Image denoising. Original image $x$ (left) and noisy image $y$, degraded by i.i.d. Gaussian noise, SNR = 15 dB (right). $F_\delta(x) = \frac{1}{2}\|x - y\|^2 + \sum_{s=1}^{S} \psi_{s,\delta}(|V_s x|) + \beta\, d_B^2(x)$, where $d_B^2$ is the squared distance to the closed convex interval $B = [0, 255]$. Anisotropic penalization on the gradients of $x$.
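
As a concrete (and purely illustrative) setup for this denoising objective, the following sketch evaluates $F_\delta$ with anisotropic first-order differences playing the role of the $V_s$, the box $B = [0, 255]$, and the truncated quadratic potential as one possible choice; all names are assumptions, not the slides' code.

```python
import numpy as np

def psi(t, lam, delta):
    # Truncated quadratic potential: lam * min(t^2 / (2 delta^2), 1)
    return lam * np.minimum(t**2 / (2.0 * delta**2), 1.0)

def denoising_objective(x, y, lam, delta, beta, lo=0.0, hi=255.0):
    """F_delta(x) = 0.5 ||x - y||^2 + sum_s psi(|V_s x|) + beta * d_B(x)^2,
    where V collects horizontal and vertical pixel differences (anisotropic)."""
    dh = np.diff(x, axis=1)                              # horizontal differences
    dv = np.diff(x, axis=0)                              # vertical differences
    fidelity = 0.5 * np.sum((x - y) ** 2)
    penalty = np.sum(psi(np.abs(dh), lam, delta)) + np.sum(psi(np.abs(dv), lam, delta))
    dist_B_sq = np.sum((x - np.clip(x, lo, hi)) ** 2)    # squared distance to [lo, hi]
    return fidelity + penalty + beta * dist_B_sq

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.clip(128.0 + 30.0 * rng.standard_normal((64, 64)), 0.0, 255.0)
    print(denoising_objective(y, y, lam=10.0, delta=5.0, beta=1.0))
```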

28 Results. Denoising result (left) and absolute reconstruction error (right) with the SC penalty using MM-MG; SNR = … dB, MSSIM = …

29 Results. Denoising result (left) and absolute reconstruction error (right) with the SNC-(i) penalty using MM-MG; SNR = … dB, MSSIM = …

30 Results. Denoising result (left) and absolute reconstruction error (right) with the NSNC penalty using TRW; SNR = 22.8 dB, MSSIM = …

31 Results. [Table: number of iterations, computation time, final value of $F_\delta$ and SNR (dB) for the SC and SNC-(i) penalties with the MM-MG, NLCG, L-BFGS and HQ algorithms, and for the NSNC penalty with the α-exp, QCSM, TRW and BP algorithms.]

32 Image segmentation. Original image $\bar{x}$. $F_\delta(x) = \frac{1}{2}\|x - \bar{x}\|^2 + \sum_{s=1}^{S} \psi_{s,\delta}(|V_s x|)$. Anisotropic penalization on the gradients of $x$.

33 Results. Segmented image (left) and its gradient (right) with the SC penalty using MM-MG.

34 Results. Segmented image (left) and its gradient (right) with the SNC-(ii) penalty using MM-MG.

35 Results. Segmented image (left) and its gradient (right) with the NSNC penalty using TRW.

36 Results. Detail of the segmented image (left) and its gradient (right) with the SC penalty using MM-MG.

37 Results. Detail of the segmented image (left) and its gradient (right) with the SNC-(ii) penalty using MM-MG.

38 Results. Detail of the segmented image (left) and its gradient (right) with the NSNC penalty using TRW.

39 Results. Comparison of the 50th line of the segmented images obtained with the NSNC, SNC-(ii) and SC potential functions. The 50th line of the original image is shown as a dotted plot.

40 Results. [Table: number of iterations, computation time and final value of $F_\delta$ for the SC and SNC-(ii) penalties with the MM-MG, NLCG, L-BFGS and HQ algorithms, and for the NSNC penalty with the α-exp, QCSM, TRW and BP algorithms.]

41 Texture+Geometry decomposition. Original image $x$ (left) and noisy image $y$, degraded with i.i.d. Gaussian noise, SNR = 15 dB (right). Decomposition $y = \hat{x} + \check{x}$, with $\hat{x}$ the geometry part and $\check{x}$ the texture + noise part, where $\hat{x}$ minimizes ([Osher03]) $F_\delta(x) = \|x - y\|^2 + \lambda \sum_{s=1}^{S} \psi_{s,\delta}(\|V_s x\|)$. Isotropic penalization on the gradients of $x$.
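
The difference with the previous experiments is the isotropic coupling of the gradients: the potential is applied to the gradient magnitude at each pixel rather than to each difference separately. A minimal sketch (illustrative only, names are mine):

```python
import numpy as np

def isotropic_gradient_penalty(x, psi):
    """Sum over pixels of psi(||gradient of x||), using forward differences with
    zero-padding at the borders; psi is an elementwise scalar potential,
    e.g. lambda t: np.minimum(t**2 / (2.0 * delta**2), 1.0)."""
    dh = np.zeros_like(x, dtype=float)
    dv = np.zeros_like(x, dtype=float)
    dh[:, :-1] = np.diff(x, axis=1)          # horizontal forward differences
    dv[:-1, :] = np.diff(x, axis=0)          # vertical forward differences
    return np.sum(psi(np.sqrt(dh**2 + dv**2)))
```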

42 Results. Recovered geometry part $\hat{x}$ (left) and texture+noise part $\check{x}$ (right) with the SC penalty using MM-MG.

43 Results. Recovered geometry part $\hat{x}$ (left) and texture+noise part $\check{x}$ (right) with the SNC-(iii) penalty using MM-MG.

44 Results. [Table: number of iterations, computation time and final value of $F_\delta$ for the SC and SNC-(iii) penalties with the MM-MG, NLCG, L-BFGS and HQ algorithms.]

45 Image reconstruction. Original image $x$ (left) and noisy sinogram $y$, degraded by i.i.d. Laplacian noise, SNR = 23.5 dB (right). $F_\delta(x) = \sum_{q=1}^{Q} \varphi_{q,\rho}((Rx)_q - y_q) + \sum_{s=1}^{S} \psi_{s,\delta}(\|V_s x\|) + \beta\, d_B^2(x) + \tau \|x\|^2$, where $R$ is the Radon projection matrix and $\varphi_{q,\rho}$ is the SC function. Isotropic penalization on the gradients of $x$.

46 Results. Reconstructed image (left) and detail (right) with the SC penalty using MM-MG; SNR = … dB, MSSIM = …

47 Results. Reconstructed image (left) and detail (right) with the SNC-(i) penalty using MM-MG; SNR = … dB, MSSIM = …

48 Results. [Table: number of iterations, computation time, final value of $F_\delta$ and SNR for the SC and SNC-(i) penalties with the MM-MG, NLCG, L-BFGS and HQ algorithms.]

50 Conclusion. Majorize-Minimize subspace algorithm for $\ell_2$-$\ell_0$ minimization: faster than combinatorial optimization techniques; simple to implement. Future work: constrained case; non-differentiable case; application to a wider class of problems (e.g., MRI).

51 Bibliography
- M. Allain, J. Idier and Y. Goussard. On global and local convergence of half-quadratic algorithms. IEEE Transactions on Image Processing, 15(5), 2006.
- H. Attouch, J. Bolte and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods.
- H. Attouch, J. Bolte, P. Redont and A. Soubeyran. Proximal alternating minimization and projection methods for nonconvex problems. An approach based on the Kurdyka-Łojasiewicz inequality. Mathematics of Operations Research, 35(2), 2010.
- E. Chouzenoux, J. Idier and S. Moussaoui. A Majorize-Minimize strategy for subspace optimization applied to image restoration. IEEE Transactions on Image Processing, 20(6), 2011.
- E. Chouzenoux, A. Jezierska, J.-C. Pesquet and H. Talbot. A Majorize-Minimize subspace approach for $\ell_2$-$\ell_0$ image regularization.

52 Thanks for your attention!

