Combining multiresolution analysis and non-smooth optimization for texture segmentation

1 Combining multiresolution analysis and non-smooth optimization for texture segmentation. Nelly Pustelnik, CNRS, Laboratoire de Physique de l'ENS de Lyon

2-3 Geometric textures: periodic. Stochastic textures: scale-free?

4 [Figure: three 1-D examples, each shown as the signal vs. time and its log power vs. log frequency: sinusoidal signal (periodic), sinusoidal signal + noise (periodic), monofractal signal (scale-free).]

5 Texture segmentation. [Figure: mask with regions Ω_1 and Ω_2, synthetic image, real texture.] Segmentation: estimate the boundary between Ω_1 and Ω_2.
- Contribution 1: Discrete Mumford-Shah,
- Contribution 2: Chan-Vese model.
Texture = local dependence = local regularity.
- Contribution 3: Joint estimation and segmentation.

6 SIROCCO project (start), Projet Jeunes Chercheur.e.s GdR ISIS, Défi Imag'In CNRS 2017. Joint work with: B. Pascal, M. Foare, P. Abry, V. Vidal, J.-C. Géminard (LPENSL), L. Condat (GIPSA-Lab), H. Wendt, N. Dobigeon (IRIT). Difficulties: large-size data (> 2 million pixels), accurate transition localization, avoiding irregular contours.

7 Summary: 1. Basics: wavelets and proximal tools; 2. Segmentation by means of proximal tools; 3. Two-step texture segmentation relying on a scale-free descriptor; 4. Joint texture segmentation.

8 Wavelet transform and sparsity. Wavelets: sparse representation of most natural signals. Dyadic wavelet transform, denoted F ∈ R^{|Ω|×|Ω|}: filterbank implementation; orthonormal transform: FF^* = F^*F = I. For g ∈ R^{|Ω|}, ζ = Fg.
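As a quick sanity check of the orthonormality claim (a minimal sketch, assuming PyWavelets and the orthonormal Haar filterbank; image size and level are arbitrary choices), the transform should conserve the l2 energy, i.e. ‖Fg‖ = ‖g‖:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
g = rng.standard_normal((256, 256))

# zeta = F g for an orthonormal dyadic wavelet transform (Haar, 3 levels).
coeffs = pywt.wavedec2(g, wavelet='haar', level=3)
zeta, _ = pywt.coeffs_to_array(coeffs)  # flatten the coefficient pyramid

# Parseval: F F* = F* F = I implies equal norms (up to rounding error).
print(np.linalg.norm(g), np.linalg.norm(zeta))
```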

9-12 Wavelet transform and sparsity: denoising by soft-thresholding.

soft_λ(ζ) = ( max{|ζ_i| − λ, 0} sign(ζ_i) )_{i∈Ω} = prox_{λ‖·‖₁}(ζ),
ν̂ = arg min_ν (1/2)‖ν − ζ‖₂² + λ‖ν‖₁ = soft_λ(ζ),
û = arg min_u (1/2)‖u − g‖₂² + λ‖Fu‖₁ = F^* soft_λ(Fg) = prox_{λ‖F·‖₁}(g).

[Plot: identity vs. soft-thresholding; the soft-thresholding function has a dead zone on [−λ, λ].]
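Putting the pieces together, a minimal sketch of the denoiser û = F^* soft_λ(Fg), again assuming PyWavelets; wavelet_soft_denoise is a hypothetical helper name, and the noise level and λ are illustrative:

```python
import numpy as np
import pywt

def wavelet_soft_denoise(g, lam, wavelet='haar', level=3):
    """u_hat = F* soft_lam(F g) for an orthonormal wavelet transform."""
    coeffs = pywt.wavedec2(g, wavelet=wavelet, level=level)
    # Soft-threshold the detail coefficients; keep the approximation as-is.
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, lam, mode='soft') for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet=wavelet)

rng = np.random.default_rng(1)
clean = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((32, 32)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = wavelet_soft_denoise(noisy, lam=0.5)
```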

13 Non-smooth optimization: û ∈ Argmin_{u∈R^{|Ω|}} Σ_{s=1}^{S} f_s(F_s u), where the F_s are linear operators and the f_s are proper, convex, l.s.c. functions. Since 2004, numerous proximal algorithms [Bauschke-Combettes, 2017]:
- Forward-Backward: S = 2, f_1 with Lipschitz gradient and F_2 = I
- Douglas-Rachford: S = 2 and F_1 = F_2 = I
- PPXA: F_1 = ... = F_S = I
- ADMM: requires inverting Σ_{s=1}^{S} F_s^* F_s
- Primal-dual: ...
Flexibility in the design of objective functions.

14 Non-smooth optimization: û ∈ Argmin_{u∈R^{|Ω|}} Σ_{s=1}^{S} f_s(F_s u). Handling large-size problems:
- Closed-form expression of the proximity operators: prox_{f_s}(u) = arg min_ν (1/2)‖ν − u‖₂² + f_s(ν).
- Avoid splitting: compute prox_{Σ_s f_s} directly.
- Exploit properties of f_s (strong convexity) and of F_s.
- Block-coordinate approaches.
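For concreteness, a minimal forward-backward (proximal gradient) sketch for the case S = 2 with f_1 = (1/2)‖· − g‖² (Lipschitz gradient), f_2 = λ‖·‖₁, and F_1 = F_2 = I; the step size and iteration count are illustrative choices:

```python
import numpy as np

def soft(x, t):
    """Proximity operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(g, lam, n_iter=200, gamma=1.0):
    """Minimize 0.5*||u - g||^2 + lam*||u||_1 by forward-backward splitting.
    The smooth term has gradient u - g with Lipschitz constant 1, so any
    step gamma in (0, 2) is admissible."""
    u = np.zeros_like(g)
    for _ in range(n_iter):
        u = soft(u - gamma * (u - g), gamma * lam)  # gradient step, then prox
    return u

g = np.array([3.0, -0.2, 0.5, -4.0])
print(forward_backward(g, lam=1.0))  # equals soft(g, 1): [2., 0., 0., -3.]
```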

15 Summary: 1. Basics: wavelets and proximal tools; 2. Segmentation by means of proximal tools; 3. Two-step texture segmentation relying on a scale-free descriptor; 4. Joint texture segmentation.

16-17 Mumford-Shah [Mumford-Shah, 1989]:

minimize_{u,K} (1/2) ∫_Ω (u − g)² dxdy + β ∫_{Ω\K} |∇u|² dxdy + λ H¹(K ∩ Ω)
               (fidelity)                 (smoothness)             (length)

Ω: image domain; g ∈ L^∞(Ω): input (possibly noisy); u ∈ W^{1,2}(Ω\K): piecewise smooth approximation of g, where W^{1,2}(Ω) = { u ∈ L²(Ω) : ∇u ∈ L²(Ω) } and ∇ denotes the weak derivative operator; K: set of discontinuities; H¹: 1-D Hausdorff measure. [Figure: input g and estimate (û, K̂).]

18 Total variation model:

minimize_{u,K} (1/2) ∫_Ω (u − g)² dxdy + β ∫_{Ω\K} |∇u|² dxdy + λ H¹(K ∩ Ω)

Discrete piecewise-constant relaxation:

minimize_u (1/2)‖u − g‖² + λ TV(u)

+ Convex. + Fast implementation due to strong convexity.

TV denotes some form of the 2-D discrete total variation, i.e.,

(∀u ∈ R^{|Ω|}) TV(u) = Σ_{i₁=1}^{N₁} Σ_{i₂=1}^{N₂} ( |u_{i₁+1,i₂} − u_{i₁,i₂}|² + |u_{i₁,i₂+1} − u_{i₁,i₂}|² )^{1/2} = ‖Du‖_{2,1}.
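To experiment with this relaxed model, an off-the-shelf solver is enough; a minimal sketch assuming scikit-image, whose denoise_tv_chambolle solves an equivalent TV-regularized least-squares problem (its weight parameter plays the role of λ):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(2)
# Piecewise-constant test image plus Gaussian noise.
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Larger weight = stronger smoothing (flatter, fewer pieces).
u_hat = denoise_tv_chambolle(noisy, weight=0.15)
```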

19 Total variation model. [Figure: input g, TV estimate û with λ = 100, TV estimate û with λ = 500.]

20 Proposed Discrete Mumford-Shah [Foare-Pustelnik-Condat, 2018]:

minimize_{u,e} (1/2)‖u − g‖² + β‖(1 − e) ⊙ Du‖² + λ R(e)

Ω = {1, ..., N₁} × {1, ..., N₂}; g ∈ R^{|Ω|}: input (possibly noisy); u ∈ R^{|Ω|}: piecewise smooth approximation of g; D ∈ R^{|E|×|Ω|}: models a finite difference operator; e ∈ R^{|E|}: edge variables between nodes, whose value is 1 where a contour change is detected and 0 otherwise; R: non-smooth penalty favoring sparse solutions (i.e., a short contour K).

21 Proposed Discrete Mumford-Shah (continued): hybrid linearized proximal alternating minimization, an alternative to PALM [Bolte et al., 2014].

22-23 Segmentation methods: summary.
- Total Variation: + fast; + piecewise constant; − contours not accurate.
- Discrete MS: + extracts contours; + identifies smooth variations; + piecewise smooth or piecewise constant; − time consuming; − parameters to tune.
- Chan-Vese [Pustelnik-Condat, 2017]: + good segmentation results; − time consuming; − parameters to tune: number of labels, mean values µ_q.

24 Summary: 1. Basics: wavelets and proximal tools; 2. Segmentation by means of proximal tools; 3. Two-step texture segmentation relying on a scale-free descriptor; 4. Joint texture segmentation.

25 Local regularity (1D)

26 Local regularity (1D): f is α-regular at y if |f(x) − f(y)| ≤ χ|x − y|^α. Example: α = 1/10.

27 Local regularity (1D): f is α-regular at y if |f(x) − f(y)| ≤ χ|x − y|^α. Example: α = 1/2.

28 Local regularity (1D). Definition: (∀y) h(y) = sup{ α : f is α-regular at y }. How to compute h(y) at every point?

29 Pointwise regularity and wavelet transform modulus [extracted from Mallat, 1998]: log₂ |Wf(u, s)| ≤ log₂ A + (α + 1/2) log₂ s. Extract α at each location by computing the slope of log₂ |Wf(u, s)| versus log₂ s. But the continuous wavelet transform is not adapted to large-size images.

30 Local regularity and wavelet leaders. Discrete wavelet coefficients:
- Coefficients at scale j ∈ {1, ..., J} and subband m ∈ {1, 2, 3}: ζ_{j,m} = H_{j,m} g;
- Orthonormal transform: F = [H_{1,1}^⊤, ..., H_{J,3}^⊤, L_{J,4}^⊤]^⊤ with ζ = Fg, where H_{j,m} ∈ R^{(N/4^j)×N} and L_{J,4} ∈ R^{(N/4^J)×N} for g ∈ R^N.

31 Local regularity and wavelet leaders. Wavelet leader at scale j and location k: local supremum of all wavelet coefficients taken within a spatial neighborhood across all finer scales j′ ≤ j:

L_{j,k} = sup_{m∈{1,2,3}, λ_{j′,k′} ⊂ Λ_{j,k}} |ζ_{j′,m,k′}|, where λ_{j,k} = [k2^j, (k+1)2^j) and Λ_{j,k} = ∪_{p∈{−1,0,1}²} λ_{j,k+p}.
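A possible numerical sketch of this definition, assuming PyWavelets, dyadic image sizes, and the Haar wavelet so that coefficient grids halve exactly across scales; wavelet_leaders and block_max are hypothetical helper names:

```python
import numpy as np
import pywt
from scipy.ndimage import maximum_filter

def block_max(a):
    """2x2 block maximum: brings a coefficient grid from scale j to scale j+1."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def wavelet_leaders(g, J, wavelet='haar'):
    """Leaders per scale: sup of |coefficients| over the 3 subbands, the 3x3
    spatial neighborhood, and all finer scales j' <= j."""
    coeffs = pywt.wavedec2(g, wavelet=wavelet, level=J)
    # coeffs = [cA_J, (cH_J, cV_J, cD_J), ..., (cH_1, cV_1, cD_1)]
    details = coeffs[1:][::-1]            # reorder: finest scale first
    leaders, cum = [], None
    for detail in details:                # j = 1, ..., J
        s = np.maximum.reduce([np.abs(c) for c in detail])  # max over subbands
        cum = s if cum is None else np.maximum(s, block_max(cum))
        leaders.append(maximum_filter(cum, size=3))         # 3x3 neighborhood
    return leaders                        # leaders[j-1] has shape (N/2^j, N/2^j)
```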

32 Multiresolution + nonlinearity: local regularity. Behavior through the scales [Jaffard, 2004]: L_{j,k} ≃ s_n 2^{j h_n} when 2^j → 0 (where k = 2^{−j} n). Linear regression across scales [Wendt et al., 2009]: ĥ_n = Σ_j w_{j,k} log₂ L_{j,k}.

33 Multiresolution + nonlinearity: local regularity. L_{j,k} ≃ s_n 2^{j h_n} when 2^j → 0 (where k = 2^{−j} n); ĥ_n = Σ_j w_{j,k} log₂ L_{j,k}, unbiased when Σ_j w_{j,k} = 0 and Σ_j j w_{j,k} = 1. [Figure: mask with regions Ω_1 and Ω_2, original g, estimate ĥ.]
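A minimal sketch of such an unbiased regression, using ordinary least-squares weights over the scales j = 1, ..., J (a uniform weighting choice for illustration); regression_weights and local_regularity are hypothetical helper names:

```python
import numpy as np

def regression_weights(J):
    """Least-squares weights for the slope of log2(L_j) vs j:
    they satisfy sum_j w_j = 0 and sum_j j*w_j = 1 (unbiasedness)."""
    j = np.arange(1, J + 1, dtype=float)
    return (j - j.mean()) / np.sum((j - j.mean()) ** 2)

def local_regularity(log2_leaders):
    """log2_leaders: array of shape (J, ...) holding log2 L_{j,k} per scale
    (resampled to a common grid). Returns h_hat at every location."""
    w = regression_weights(log2_leaders.shape[0])
    return np.tensordot(w, log2_leaders, axes=1)

w = regression_weights(4)
print(w.sum(), (np.arange(1, 5) * w).sum())   # ~0 and ~1
```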

34 Multiresolution + nonlinearity + nonsmooth. Total variation: piecewise constant estimate

ĥ_TV = arg min_u (1/2)‖u − Σ_j w_j log₂ L_j‖² + λ‖Du‖₁

Pipeline: wavelets (linear transform) → log₂ leaders (nonlinear transform) → linear regression (linear transform) → ĥ → ℓ₁ minimization (nonlinear transform).

35 Multiresolution + nonlinearity + nonsmooth. [Figure: mask with regions Ω_1 and Ω_2, original g, estimate ĥ, estimate ĥ_TV.]

36 Summary: 1. Basics: wavelets and proximal tools; 2. Segmentation by means of proximal tools; 3. Two-step texture segmentation relying on a scale-free descriptor; 4. Joint texture segmentation.

37 Multiresolution + nonlinearity + nonsmooth. Total variation: joint estimation and segmentation [Pustelnik et al., 2016]

(ĥ_TVw, ŵ) = arg min_{u,w} (1/2)‖u − Σ_j w_j log₂ L_j‖² + λ‖Du‖₁ + d_C(w)

Relaxed unbiasedness constraint: C = { w ∈ R^{J×|Ω|} : (∀k) Σ_j w_{j,k} = 0 and Σ_j j w_{j,k} = 1 }, with d_C(w) = ‖w − P_C(w)‖₂ and P_C(w) = arg min_{ν∈C} (1/2)‖ν − w‖₂².
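Since C imposes two affine constraints per pixel k on (w_{1,k}, ..., w_{J,k}), P_C has the closed form w − A^⊤(AA^⊤)^{−1}(Aw − b) with A = [1 ... 1; 1 2 ... J] and b = (0, 1); a minimal sketch built on that observation (project_C is a hypothetical helper name):

```python
import numpy as np

def project_C(w):
    """Project each column of w (shape (J, K): one weight profile per pixel)
    onto {sum_j w_j = 0, sum_j j*w_j = 1} via the affine projection formula
    P_C(w) = w - A^T (A A^T)^{-1} (A w - b)."""
    J = w.shape[0]
    A = np.vstack([np.ones(J), np.arange(1, J + 1)])   # the two constraints
    b = np.array([0.0, 1.0])
    return w - A.T @ np.linalg.solve(A @ A.T, A @ w - b[:, None])

w = np.random.default_rng(3).standard_normal((4, 5))
p = project_C(w)
print(p.sum(axis=0))                          # ~0 per pixel
print((np.arange(1, 5)[:, None] * p).sum(0))  # ~1 per pixel
```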

38-39 Multiresolution + nonlinearity + nonsmooth. [Figure: mask with regions Ω_1 and Ω_2, original g, estimates ĥ, ĥ_TV, and ĥ_TVw.]

40 Multiresolution + nonlinearity + nonsmooth. [Figure: comparison on original g of [Yuan et al., 2015], [Arbelaez et al., 2011], and the estimates ĥ, ĥ_TV, ĥ_TVw.]

41 Multiresolution + nonlinearity + nonsmooth.

(ĥ_TVw, ŵ) = arg min_{u,w} (1/2)‖u − Σ_j w_j log₂ L_j‖² + λ‖Du‖₁ + d_C(w)

+ Good texture segmentation performance
+ Convex minimization formulation
+ Combined estimation and segmentation (contrary to ĥ_TV)
− Computational cost: not adapted to large-scale data.

42 Multiresolution + nonlinearity: local regularity. Behavior through the scales [Jaffard, 2004]: L_{j,k} ≃ s_n 2^{j h_n} as 2^j → 0 (where k = 2^{−j} n), hence log₂ L_{j,k} ≈ log₂ s_n + j h_n as 2^j → 0.

PLOVER: Piecewise constant LOcal VariancE and Regularity estimation [Pascal et al., 2018]:

Find (v̂, ĥ) ∈ Argmin_{v,h} Σ_j ‖log₂ L_j − v − jh‖² + η‖Dh‖₁ + ζ‖Dv‖₁, with ŝ = 2^{v̂}.

+ Strongly convex: computationally efficient
+ Combines estimation and segmentation
+ Joint estimation of the local variance and the local regularity.
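To see where the strong convexity comes from, a minimal sketch of the pixelwise least-squares part of the PLOVER objective with the TV terms switched off (η = ζ = 0); plover_fit_no_tv is a hypothetical helper, and the full method handles the TV coupling with a proximal scheme:

```python
import numpy as np

def plover_fit_no_tv(log2_leaders):
    """Pixelwise least squares for sum_j ||log2 L_j - v - j*h||^2.
    log2_leaders has shape (J, ...); returns (v_hat, h_hat).
    The 2x2 normal matrix [[J, sum j], [sum j, sum j^2]] is invertible,
    which is the source of strong convexity of the fidelity term."""
    J = log2_leaders.shape[0]
    j = np.arange(1, J + 1, dtype=float)
    M = np.array([[J, j.sum()], [j.sum(), (j ** 2).sum()]])
    rhs = np.stack([log2_leaders.sum(axis=0),
                    np.tensordot(j, log2_leaders, axes=1)])
    sol = np.linalg.solve(M, rhs.reshape(2, -1))
    shape = log2_leaders.shape[1:]
    return sol[0].reshape(shape), sol[1].reshape(shape)  # (v_hat, h_hat)
```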

43 Multiresolution + nonlinearity + nonsmooth + fast. [Figure: (a) synthetic texture x, (b) mask for s, (c) mask for h; local variance (top row) and local regularity (bottom row) estimated by linear regression, disjoint TV, PLOVER, disjoint re-estimation, and PLOVER re-estimation, with the SNR reported for each estimate.]

44 Multiresolution + nonlinearity + nonsmooth + fast. [Figure: image g ∈ R^N, zoom of g, PLOVER estimates ŝ and ĥ; comparison with [Arbelaez, 2011], [Yuan, 2015], and disjoint TV.]

45 Conclusions.
- HL-PAM for a fast discrete Mumford-Shah: several applications ranging from image restoration to graph analysis.
- Proximity operator of a sum of two functions: application to segmentation and depth map estimation.
- Scale-free descriptors in a variational framework: a large-scale texture segmentation procedure.

46 Perspectives.
- TV denoising / Chan-Vese / D-MS: a procedure offering experts accurate estimation and segmentation. D-MS allows going from piecewise smooth to piecewise constant; both are of interest for the applications.
- HL-PAM and strong convexity? Quantify the dead zone w.r.t. scale.
- Regularization parameter selection.
- Integrate anisotropy.

47 References
- N. Pustelnik, H. Wendt, P. Abry, N. Dobigeon, Combining local regularity estimation and total variation optimization for scale-free texture segmentation, IEEE Trans. on Computational Imaging, vol. 2, no. 4, Dec. 2016.
- N. Pustelnik, L. Condat, Proximity operator of a sum of functions; application to depth map estimation, IEEE Signal Processing Letters, 2017.
- M. Foare, N. Pustelnik, L. Condat, A new proximal method for joint image restoration and edge detection with the Mumford-Shah model, accepted at ICASSP 2018.
- B. Pascal, N. Pustelnik, P. Abry, M. Serres, V. Vidal, Joint estimation of local variance and local regularity for texture segmentation. Application to multiphase flow characterization, submitted to IEEE ICIP 2018.
- J. Frecon, N. Pustelnik, N. Dobigeon, H. Wendt, P. Abry, Bayesian selection for the regularization parameter in TV-l0 denoising problems, IEEE Trans. on Signal Processing, 2017.

48 Proposed Discrete Mumford-Shah: minimize_{u,e} (1/2)‖u − g‖² + β‖(1 − e) ⊙ Du‖² + λR(e). [Figure: input g, estimates û and ê.]

49 Proposed Discrete Mumford-Shah: minimize_{u,e} (1/2)‖u − g‖² + β‖(1 − e) ⊙ Du‖² + λR(e), where R favors binary (i.e., {0,1}^{|E|}) and sparse solutions (i.e., a short contour K):
1. Ambrosio-Tortorelli approximation [Ambrosio-Tortorelli, 1990] [Foare-Lachaud-Talbot, 2016]: R(e) = ε‖De‖₂² + (1/(4ε))‖e‖₂², with ε > 0;
2. ℓ₁-norm: R(e) = ‖e‖₁;
3. Quadratic-ℓ₁ [Foare-Pustelnik-Condat, 2017]: R(e) = Σ_{i=1}^{|E|} max{ |e_i|, e_i²/(4ε) }.

50 Proposed Discrete Mumford-Shah:

minimize_{u,e} Ψ(u, e) := (1/2)‖u − g‖² + β‖(1 − e) ⊙ Du‖² + λR(e), with S(e, Du) := β‖(1 − e) ⊙ Du‖².

PALM [Bolte et al., 2014]: set u^{[0]} ∈ R^{|Ω|} and e^{[0]} ∈ R^{|E|}; for l ∈ N:
- set γ > 1 and c_l = γχ(e^{[l]});
- u^{[l+1]} = prox_{(1/c_l)(1/2)‖·−g‖²}( u^{[l]} − (1/c_l) ∇_u S(e^{[l]}, Du^{[l]}) );
- set δ > 1 and d_l = δν(u^{[l+1]});
- e^{[l+1]} = prox_{(1/d_l)λR}( e^{[l]} − (1/d_l) ∇_e S(e^{[l]}, Du^{[l+1]}) );
where χ(e^{[l]}) and ν(u^{[l+1]}) are the Lipschitz moduli of the corresponding partial gradients of S.

Under technical assumptions, the sequence (u^{[l]}, e^{[l]})_{l∈N} converges to a critical point (u*, e*) of Ψ.

51 Proposed Discrete Mumford-Shah:

minimize_{u,e} Ψ(u, e) := (1/2)‖u − g‖² + β‖(1 − e) ⊙ Du‖² + λR(e), with S(e, Du) := β‖(1 − e) ⊙ Du‖².

Proposed HL-PAM [Foare-Pustelnik-Condat, 2017]: set u^{[0]} ∈ R^{|Ω|} and e^{[0]} ∈ R^{|E|}; for l ∈ N:
- set γ > 1 and c_l = γχ(e^{[l]});
- u^{[l+1]} = prox_{(1/c_l)(1/2)‖·−g‖²}( u^{[l]} − (1/c_l) ∇_u S(e^{[l]}, Du^{[l]}) );
- set d_l > 0;
- e^{[l+1]} = prox_{(1/d_l)(λR + S(·, Du^{[l+1]}))}( e^{[l]} ).

Under technical assumptions, the sequence (u^{[l]}, e^{[l]})_{l∈N} converges to a critical point (u*, e*) of Ψ.
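To make the iterations concrete, a minimal PALM sketch for the ℓ₁ penalty R = ‖·‖₁ (whose prox is soft-thresholding), assuming periodic-boundary finite differences so the adjoint of D is simple and using ‖D‖² ≤ 8 as a crude bound in the step sizes; palm_dms and the edge layout e ∈ R^{2×N₁×N₂} are illustrative choices, not the authors' implementation. HL-PAM would replace the e-step by the exact prox of slide 53.

```python
import numpy as np

def D(u):
    """Periodic-boundary forward differences, stacked as shape (2, H, W)."""
    return np.stack([np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u])

def Dt(p):
    """Adjoint of D for periodic boundaries."""
    return (np.roll(p[0], 1, 1) - p[0]) + (np.roll(p[1], 1, 0) - p[1])

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def palm_dms(g, beta, lam, n_iter=300, gamma=1.1, delta=1.1):
    """PALM for 0.5||u-g||^2 + beta||(1-e) o Du||^2 + lam||e||_1."""
    u, e = g.copy(), np.zeros((2,) + g.shape)
    for _ in range(n_iter):
        # u-step: c_l from the Lipschitz modulus of grad_u S (||D||^2 <= 8).
        c = gamma * 2 * beta * 8 * max(np.max((1 - e) ** 2), 1e-12)
        grad_u = 2 * beta * Dt((1 - e) ** 2 * D(u))
        u_tmp = u - grad_u / c
        u = (c * u_tmp + g) / (c + 1)          # prox of (1/c)*0.5||.-g||^2
        # e-step: d_l from the Lipschitz modulus of grad_e S.
        du = D(u)
        d = delta * 2 * beta * max(np.max(du ** 2), 1e-12)
        grad_e = -2 * beta * (1 - e) * du ** 2
        e = soft(e - grad_e / d, lam / d)      # prox of (1/d)*lam||.||_1
    return u, e
```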

52 Proposed Discrete Mumford-Shah. Assumptions:
1. The updating steps of u^{[l+1]} and e^{[l+1]} have closed-form expressions;
2. u ↦ ∇_u S(e^{[l]}, Du) is globally Lipschitz with modulus χ(e^{[l]}) for every l ∈ N, and there exist χ₋, χ₊ > 0 such that χ₋ ≤ χ(e^{[l]}) ≤ χ₊;
3. (d_l)_{l∈N} is a positive sequence such that the step sizes d_l belong to (d₋, d₊) for some positive d₋ ≤ d₊.

53 Proposed Discrete Mumford-Shah. Proposition [Foare-Pustelnik-Condat, 2017]: assume that R is separable, i.e., (∀e = (e_i)_{1≤i≤|E|}) R(e) = Σ_i σ_i(e_i), where σ_i: R → ]−∞, +∞] has a closed-form proximity operator. Let d_l > 0; then

prox_{(1/d_l)(λR + S(·, Du^{[l+1]}))}(e^{[l]}) = ( prox_{λσ_i / (2β(Du^{[l+1]})_i² + d_l)} ( (2β(Du^{[l+1]})_i² + d_l e_i^{[l]}) / (2β(Du^{[l+1]})_i² + d_l) ) )_{1≤i≤|E|}.

54 Proposed Discrete Mumford-Shah. Proposition [Foare-Pustelnik-Condat, 2017]: for every η ∈ R and τ, ε > 0,

prox_{τ max{|·|, (·)²/(4ε)}}(η) = sign(η) max{ 0, min( |η| − τ, max( 4ε, |η| / (τ/(2ε) + 1) ) ) }.
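A sketch of this closed form, under the reconstruction above, together with a brute-force check of the prox definition by grid minimization (prox_quad_l1 and prox_brute are hypothetical helper names):

```python
import numpy as np

def prox_quad_l1(eta, tau, eps):
    """Closed form of prox_{tau * max(|.|, (.)^2/(4*eps))}."""
    mag = np.minimum(np.abs(eta) - tau,
                     np.maximum(4 * eps, np.abs(eta) / (tau / (2 * eps) + 1)))
    return np.sign(eta) * np.maximum(0.0, mag)

def prox_brute(eta, tau, eps, span=50.0, n=2_000_001):
    """Direct minimization of 0.5*(t-eta)^2 + tau*max(|t|, t^2/(4*eps))."""
    t = np.linspace(-span, span, n)
    cost = 0.5 * (t - eta) ** 2 + tau * np.maximum(np.abs(t), t ** 2 / (4 * eps))
    return t[np.argmin(cost)]

for eta in [-9.0, -2.0, 0.3, 1.5, 12.0]:
    print(eta, prox_quad_l1(eta, tau=1.0, eps=0.5), prox_brute(eta, 1.0, 0.5))
```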

55 Proposed Discrete Mumford-Shah. Convergence, PALM versus HL-PAM: [Plot: Ψ(u^{[l]}, e^{[l]}) w.r.t. iterations l, for PALM with d_l = 0.5/β and HL-PAM with d_l = 0.5/β, 5/β, 50/β, 500/β.]

56-59 Proposed Discrete Mumford-Shah. [Figures: four test images g with results for TV, [Strekalovskiy-Cremers, 2014], [Foare-Lachaud-Talbot, 2016], the ℓ₁ penalty, and the quadratic-ℓ₁ penalty.]

60 Proposed Discrete Mumford-Shah. Convergence speed, stopping criterion Ψ(u^{[l+1]}, e^{[l+1]}) − Ψ(u^{[l]}, e^{[l]}) < 10^{−4}: [Table: computation times for TV, [Foare-Lachaud-Talbot, 2016], ℓ₁, and quadratic-ℓ₁ on the dots, ellipse, and peppers images at three sizes |Ω| each.]

61-63 Chan-Vese model:

minimize_{u,K} (1/2) ∫_Ω (u − g)² dxdy + β ∫_{Ω\K} |∇u|² dxdy + λ H¹(K ∩ Ω)

Discrete piecewise-constant relaxation with fixed label number [Chan-Vese, 2001]:

minimize_{(θ^{(q)})_{1≤q≤Q−1}} Σ_{q=1}^{Q} ⟨θ^{(q−1)} − θ^{(q)}, (µ_q − g)²⟩ + λ Σ_{q=1}^{Q} TV(θ^{(q−1)} − θ^{(q)})
s.t. 1 ≡ θ^{(0)} ≥ θ^{(1)} ≥ ... ≥ θ^{(Q−1)} ≥ θ^{(Q)} ≡ 0.

[Figures: image g with regions Ω_1, Ω_2, Ω_3; the level functions θ^{(0)}, θ^{(1)}, θ^{(2)}, θ^{(3)}; the indicator differences θ^{(q−1)} − θ^{(q)} selecting each region.]

64 Chan-Vese model:

minimize_{Θ=(θ^{(q)})_{1≤q≤Q−1}} Σ_{q=1}^{Q−1} ⟨β^{(q)}, θ^{(q)}⟩ + λ Σ_{q=1}^{Q} ‖DH_qΘ‖_{2,1} + ι_{[0,1]^{Q×|Ω|}}(Θ) + ι_E(Θ)

where β^{(q)} = (µ_{q+1} − g)² − (µ_q − g)², H_q: R^{Q×|Ω|} → R^{|Ω|}: Θ ↦ θ^{(q−1)} − θ^{(q)}, and E = { Θ ∈ R^{Q×|Ω|} : θ^{(1)} ≥ ... ≥ θ^{(Q−1)} }. Use splitting proximal algorithms to deal with a sum of convex but non-smooth functions.

65 Chan-Vese model.
Three-term splitting: minimize_Θ Σ_{q=1}^{Q−1} ⟨β^{(q)}, θ^{(q)}⟩ + λ Σ_{q=1}^{Q} ‖DH_qΘ‖_{2,1} + ι_{[0,1]^{Q×|Ω|}}(Θ) + ι_E(Θ), splitting the three non-smooth terms separately.
Two-term splitting: the same objective, handling ι_{[0,1]^{Q×|Ω|}} + ι_E jointly.
Question: when is it possible to compute the proximity operator of a sum of functions rather than splitting it? Would it be more efficient?

66 Chan-Vese model. Proposition [Pustelnik-Condat, 2017]:
(i) For some function h₀ ∈ Γ₀(R), h is separable, with (∀x = (x_i)_{i∈Ω}) h(x) = Σ_{i∈Ω} h₀(x_i).
(ii) g has the following form: (∀x = (x_i)_{i∈Ω}) g(x) = Σ_{(m,m′)∈Υ⊂Ω²} σ_{C_{m,m′}}(x_m − x_{m′}), where σ_{C_{m,m′}}: t ∈ R ↦ sup{ tp : p ∈ C_{m,m′} } is the support function of a closed real interval C_{m,m′}, with inf C_{m,m′} = a_{m,m′} and sup C_{m,m′} = b_{m,m′} for some a_{m,m′} ∈ R ∪ {−∞} and b_{m,m′} ∈ R ∪ {+∞}, a_{m,m′} ≤ b_{m,m′}. Explicitly,

(∀t ∈ R) σ_{C_{m,m′}}(t) = a_{m,m′} t if t < 0; 0 if t = 0; b_{m,m′} t if t > 0.

Under assumptions (i) and (ii), prox_{g+h} = prox_h ∘ prox_g.

67 Chan-Vese model. Particular cases:
- Fused Lasso: Ω = {1, ..., N}, Υ = {(1,2), (2,3), ..., (N−1,N)}, b_{n,n+1} = −a_{n,n+1} = ω_n ≥ 0, h₀ = λ|·|, g(x) = Σ_{n=1}^{N−1} ω_n |x_{n+1} − x_n|.
- Chan-Vese: Ω = {1, ..., Q}, Υ = {(1,2), (2,3), ..., (Q−1,Q)}, a_{n,n+1} = 0, b_{n,n+1} = +∞, h₀ = ι_{[0,1]}, g = ι_E. Compute P_E with the Pool Adjacent Violators Algorithm (PAVA) [Ayer et al., 1955], as sketched below.
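A minimal PAVA sketch for P_E (isotonic projection) and the resulting composition prox_h ∘ prox_g = P_{[0,1]} ∘ P_E; the decreasing ordering in E is handled by negating, and scikit-learn's IsotonicRegression would be an off-the-shelf alternative:

```python
import numpy as np

def pava_increasing(y):
    """Pool Adjacent Violators: projection onto {x_1 <= ... <= x_N}."""
    merged = []                            # stack of [mean, size] blocks
    for v in y:
        merged.append([float(v), 1])
        # Pool while the last two block means violate monotonicity.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, n2 = merged.pop()
            m1, n1 = merged.pop()
            merged.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return np.concatenate([np.full(n, m) for m, n in merged])

def prox_chan_vese_constraints(x):
    """prox_{iota_E + iota_[0,1]} = P_[0,1] o P_E for decreasing sequences."""
    p_e = -pava_increasing(-np.asarray(x, float))  # projection onto decreasing
    return np.clip(p_e, 0.0, 1.0)

print(prox_chan_vese_constraints([0.2, 0.9, 1.4, -0.3]))
```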

68 Chan-Vese model. [Figure: input g and segmentation results for λ = 10³ and λ = 10⁴.]

69 Chan-Vese model. [Plot: convergence comparison of minimal splitting (proposed method), intermediate splitting, and full splitting.]
