Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise


Minru Bai, College of Mathematics and Econometrics, Hunan University
Joint work with Xiongjun Zhang and Qianqian Shao
June 30, 2018

Outline
1. Introduction
2. Adaptive corrected procedure
3. Algorithm
4. Numerical experiments
5. Conclusions

Image denoising and deblurring

Let x ∈ R^{n^2} be an original image concatenated into an n^2-vector, K ∈ R^{n^2 × n^2} a blurring operator, and f ∈ R^{n^2} an observation of x satisfying

  f = N_imp(Kx),

where N_imp represents the degradation by impulse noise.

[Figure: a noisy and blurry observation]

Two types of impulsive noise

Let y ∈ R^{n^2} denote an original image whose dynamic range is [d_min, d_max], i.e. d_min ≤ y_i ≤ d_max for all i.

Salt-and-pepper noise:

  f_i = d_min with probability r/2,
        d_max with probability r/2,
        y_i   with probability 1 − r,

where f_i and y_i are the i-th pixel values of f and y, respectively, and 0 ≤ r ≤ 1.

Random-valued noise:

  f_i = d_i with probability r,
        y_i with probability 1 − r,

where the d_i are independent, uniformly distributed random numbers in [d_min, d_max], and 0 ≤ r ≤ 1.
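As a small sketch (not from the slides), the two corruption models above can be simulated directly; the helper names `salt_and_pepper` and `random_valued` are mine, and the default dynamic range [0, 1] is an illustrative choice:

```python
import numpy as np

def salt_and_pepper(y, r, d_min=0.0, d_max=1.0, seed=None):
    """Each pixel is set to d_min or d_max with probability r/2 each,
    and kept unchanged with probability 1 - r."""
    rng = np.random.default_rng(seed)
    f = y.copy()
    u = rng.random(y.shape)
    f[u < r / 2] = d_min
    f[(u >= r / 2) & (u < r)] = d_max
    return f

def random_valued(y, r, d_min=0.0, d_max=1.0, seed=None):
    """Each pixel is replaced by an independent Uniform[d_min, d_max] draw
    with probability r, and kept unchanged with probability 1 - r."""
    rng = np.random.default_rng(seed)
    f = y.copy()
    mask = rng.random(y.shape) < r
    f[mask] = rng.uniform(d_min, d_max, size=int(mask.sum()))
    return f
```

For example, `salt_and_pepper(y, 0.3)` corrupts roughly 30% of the pixels, half to d_min and half to d_max, matching the noise level r used throughout the experiments.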

Maximum a posteriori (MAP) estimator

How do we recover the original image from the noisy and blurred observation?

Based on a MAP estimator for the unknown image, Chen et al.¹ obtained the discrete reconstruction model (the TVL0 model):

  min_x Σ_{i=1}^{n^2} ‖D_i x‖ + μ ‖Kx − f‖_0,

where, for each i, D_i x ∈ R^2 denotes a local first-order finite difference of x at pixel i in the horizontal and vertical directions; ‖·‖_0 denotes the number of non-zero elements of a vector; K is the blurring operator; μ is a regularization parameter; and f ∈ R^{n^2} is an observation of the original image x.

¹ Chen F, Shen L, Xu Y, Zeng X 2014 The Moreau envelope approach for the L1/TV image denoising model. Inverse Problems and Imaging.

Convex relaxation

Unfortunately, ‖·‖_0 is not convex, and minimizing it is NP-hard.

The ℓ_1 norm is a well-known convex approximation of the ℓ_0 norm, so replacing ‖·‖_0 by ‖·‖_1 yields the TVL1 model

  min_x Σ_{i=1}^{n^2} ‖D_i x‖ + μ ‖Kx − f‖_1.

Motivation

- Nikolova² pointed out, from the MAP viewpoint, that solutions of the TVL1 model deviate substantially from both the data-acquisition model and the prior model.
- The ℓ_1-norm penalty has long been known to yield biased estimators in simultaneous estimation³.
- The adaptive lasso was proposed by Zou⁴, in which adaptive weights penalize different coefficients in the ℓ_1-norm penalty.
- The key to overcoming the limitation of TVL1 is improving the sparsity promoted by the ℓ_1 term.

² Nikolova M 2007 Model distortions in Bayesian MAP reconstruction. Inverse Problems and Imaging.
³ Tibshirani R 1996 Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Ser. B (Methodological).
⁴ Zou H 2006 The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476).

Motivation

[Figure: Histograms of the restoration error x̂ − x for the House image corrupted by Average blur and impulsive noise, where x is the original image and x̂ is the image recovered by TVL1. (a) Salt-and-pepper noise with noise level 30%. (b) Random-valued noise with noise level 40%.]

The TVL1 model can effectively remove abnormal (outlier) noise values.

These observations imply that the small biased estimates still contain some information about the sparsity of Kx − f.

Corrected model

Variable substitution:

  min_{x,z} Σ_{i=1}^{n^2} ‖D_i x‖ + μ ‖z‖_1   s.t. z = Kx − f.

Corrected TVL1 (CTVL1):

  min_{x,z} Σ_{i=1}^{n^2} ‖D_i x‖ + μ( ‖z‖_1 − ⟨F(z̄), z⟩ )   s.t. z = Kx − f,

where μ > 0 is a regularization parameter that depends on the noise level and the blurring operator of the corrupted image.

F : R^{n^2} → R^{n^2} is the operator⁵ defined by

  F_i(z) = φ( z_i / ‖z‖_∞ )  for z ∈ R^{n^2} \ {0},   and F_i(z) = 0 for z = 0.

The scalar function φ : R → R is defined as

  φ(t) := sgn(t) (1 + ε^τ) |t|^τ / ( |t|^τ + ε^τ ),   t ∈ R,

for some τ > 0 and ε > 0, and z̄ is a reasonable initial estimator. In particular, when F ≡ 0, CTVL1 reduces to TVL1.

⁵ Miao W, Pan S and Sun D 2015 A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program., Ser. A.
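A minimal numerical sketch of φ and F, assuming the reconstruction above; the function names `phi` and `F_op` are mine, and τ = 1, ε = 0.1 are illustrative choices rather than values from the talk:

```python
import numpy as np

def phi(t, tau=1.0, eps=0.1):
    """Correction function phi(t) = sgn(t) * (1 + eps^tau) |t|^tau / (|t|^tau + eps^tau)."""
    t = np.asarray(t, dtype=float)
    a = np.abs(t) ** tau
    return np.sign(t) * (1.0 + eps ** tau) * a / (a + eps ** tau)

def F_op(z_bar, tau=1.0, eps=0.1):
    """Operator F: F_i(z) = phi(z_i / ||z||_inf) for z != 0, and F(0) = 0."""
    z_bar = np.asarray(z_bar, dtype=float)
    m = np.abs(z_bar).max()
    if m == 0.0:
        return np.zeros_like(z_bar)
    return phi(z_bar / m, tau=tau, eps=eps)
```

Note that φ(0) = 0 and φ(±1) = ±1, so entries of z̄ that are large relative to ‖z̄‖_∞ get almost no ℓ_1 penalty after correction, while small entries keep (almost) the full penalty.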

Analysis of correction term

Notice that if z̄ ≠ 0,

  ‖z̄‖_1 − ⟨F(z̄), z̄⟩ = Σ_{i=1}^{n^2} ( |z̄_i| − φ(t_i) |z̄_i| ),

where t_i = |z̄_i| / ‖z̄‖_∞ and 0 ≤ t_i ≤ 1 for i = 1, …, n^2 (using that φ is odd).

- If t_i is near zero, the fidelity (sparsity) term becomes important, and |z̄_i| − φ(t_i)|z̄_i| ≈ |z̄_i| as t_i → 0.
- If t_i is near 1, the TV term (smoothness) becomes more important, and |z̄_i| − φ(t_i)|z̄_i| → 0 as t_i → 1.

Adaptive corrected procedure

Initialization: input f, K.
Step 1: Compute x̂ by solving TVL1.
Step 2: Set z̄ = K x̂ − f.
Step 3: Compute x* by solving CTVL1.
Step 4: If a further correction is needed, set x̂ = x* and go to Step 2; otherwise return x* and stop.
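The steps above can be sketched as the following loop. Here `solve_tvl1` and `solve_ctvl1` stand for the two inner solvers (e.g. the PADMM of the next section) and are assumptions supplied by the caller, as is the fixed correction count used in place of a real stopping test:

```python
import numpy as np

def adaptive_corrected_procedure(f, K, solve_tvl1, solve_ctvl1, n_corrections=3):
    """Adaptive corrected procedure (sketch).

    f : observed image as a vector, K : blurring matrix,
    solve_tvl1(f, K) -> x and solve_ctvl1(f, K, z_bar) -> x are
    user-supplied solvers for the TVL1 and CTVL1 models."""
    x = solve_tvl1(f, K)              # Step 1: initial TVL1 estimate
    for _ in range(n_corrections):    # Step 4: repeat the correction if needed
        z_bar = K @ x - f             # Step 2: residual used as the initial estimator
        x = solve_ctvl1(f, K, z_bar)  # Step 3: solve the corrected model CTVL1
    return x
```

In the experiments below, a handful of correction steps (one to ten) already improves the restoration noticeably.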

Proximal alternating direction method of multipliers

Introducing auxiliary variables to enforce the constraints on x, the corrected TVL1 model can be written as

  min_{y,z,x} Σ_{i=1}^{n^2} ‖y_i‖_2 + μ( ‖z‖_1 − ⟨F(z̄), z⟩ )
  s.t. z = Kx − f,  y_i = D_i x,  i = 1, …, n^2.

The augmented Lagrangian function is

  L(x, y, z, λ_1, λ_2) = Σ_{i=1}^{n^2} ‖y_i‖_2 − λ_1^T (y − Dx) + (β_1/2) Σ_{i=1}^{n^2} ‖y_i − D_i x‖_2^2
                         + μ( ‖z‖_1 − ⟨F(z̄), z⟩ ) − λ_2^T [z − (Kx − f)] + (β_2/2) ‖z − (Kx − f)‖_2^2,

where β_1, β_2 > 0 are penalty parameters and λ = (λ_1^T, λ_2^T)^T is the Lagrange multiplier.

Iterative procedure⁶:

  y^{k+1} = argmin_y L(x^k, y, z^k, λ_1^k, λ_2^k),
  z^{k+1} = argmin_z L(x^k, y^k, z, λ_1^k, λ_2^k),
  x^{k+1} = argmin_x { L(x, y^{k+1}, z^{k+1}, λ_1^k, λ_2^k) + (1/2) ‖x − x^k‖_S^2 },
  λ_1^{k+1} = λ_1^k − γ β_1 (y^{k+1} − D x^{k+1}),
  λ_2^{k+1} = λ_2^k − γ β_2 [ z^{k+1} − (K x^{k+1} − f) ].

⁶ Fazel M, Pong T K, Sun D and Tseng P 2013 Hankel matrix rank minimization with applications to system identification and realization. SIAM J. Matrix Anal. Appl.

The details of PADMM

Compute y:

  y_i^{k+1} = max{ ‖D_i x^k + (λ_1^k)_i / β_1‖_2 − 1/β_1, 0 } · (D_i x^k + (λ_1^k)_i / β_1) / ‖D_i x^k + (λ_1^k)_i / β_1‖_2,  i = 1, …, n^2.

Compute z:

  z^{k+1} = sgn( K x^k − f + [λ_2^k + μ F(z̄)]/β_2 ) ⊙ max{ | K x^k − f + [λ_2^k + μ F(z̄)]/β_2 | − μ/β_2, 0 }.

Compute x: solve the linear system

  (β_1 D^T D + β_2 K^T K + S) x = D^T (β_1 y^{k+1} − λ_1^k) + K^T (β_2 z^{k+1} − λ_2^k) + β_2 K^T f + S x^k.
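The y- and z-updates are closed-form shrinkage operations; a sketch, with helper names `group_shrink` and `soft_threshold` of my own choosing:

```python
import numpy as np

def group_shrink(v, kappa):
    """y-update: isotropic 2D shrinkage applied to each row v_i in R^2:
    y_i = max(||v_i||_2 - kappa, 0) * v_i / ||v_i||_2  (zero rows stay zero)."""
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    scale = np.maximum(norms - kappa, 0.0) / np.maximum(norms, np.finfo(float).tiny)
    return scale * v

def soft_threshold(w, kappa):
    """z-update: component-wise soft thresholding sgn(w) * max(|w| - kappa, 0)."""
    return np.sign(w) * np.maximum(np.abs(w) - kappa, 0.0)
```

With v stacking the vectors v_i = D_i x^k + (λ_1^k)_i/β_1 and w = K x^k − f + [λ_2^k + μF(z̄)]/β_2, the updates read y^{k+1} = group_shrink(v, 1/β_1) and z^{k+1} = soft_threshold(w, μ/β_2).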

Convergence of PADMM

Theorem. Assume that S + β_1 D^T D + β_2 K^T K is positive definite. Let {y^k, z^k, x^k, λ_1^k, λ_2^k} be generated by Algorithm 1. If γ ∈ (0, (1 + √5)/2), then {y^k, z^k, x^k} converges to an optimal solution of CTVL1 and {λ_1^k, λ_2^k} converges to an optimal solution of the dual problem of CTVL1.

- The theorem removes the condition N(D) ∩ N(K) = {0} required by the convergence result of J. Yang et al. [Theorem 3.4]⁷.
- The positive-definiteness assumption on S + β_1 D^T D + β_2 K^T K is very easy to satisfy:
  - If β_1 D^T D + β_2 K^T K is positive definite, we can choose S = 0.
  - If β_1 D^T D + β_2 K^T K is only positive semidefinite, we can choose a positive semidefinite S such that S + β_1 D^T D + β_2 K^T K = αI, where α ≥ σ_max(β_1 D^T D + β_2 K^T K).

⁷ Yang J, Zhang Y and Yin W 2009 An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise. SIAM J. Sci. Comput.

Sparsity comparison

Define the sparse rate of Kx − f as

  s := |{ i : |(Kx − f)_i| < 10^{-4}, i = 1, …, n^2 }| / n^2,

where |·| denotes the number of elements in the set.

Table: Sparse rate s (%) of Kx − f for various methods on the Lena and House images corrupted by Average blur and salt-and-pepper noise (sp) or random-valued noise (rv).

  noise | level |  Lena: Original / TVL1 / CTVL1  |  House: Original / TVL1 / CTVL1
  sp    | 30%   |  69.98 / 25.71 / 60.37          |  69.78 / 28.29 / 62.25
  sp    | 50%   |  49.98 /  6.09 / 46.35          |  49.89 /  9.77 / 34.65
  sp    | 70%   |  29.93 /  1.68 / 12.12          |  29.88 /  2.22 / 12.14
  rv    | 25%   |  74.99 / 30.35 / 66.71          |  74.85 / 46.16 / 68.06
  rv    | 40%   |  59.97 / 11.70 / 46.35          |  59.83 /  9.01 / 45.85
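The sparse rate can be computed in one line; a sketch in which the function name and the `tol` argument are mine:

```python
import numpy as np

def sparse_rate(residual, tol=1e-4):
    """Fraction of entries of the residual Kx - f with magnitude below tol."""
    r = np.asarray(residual).ravel()
    return float(np.mean(np.abs(r) < tol))
```

A higher sparse rate means the restored image explains more pixels exactly, which is the behavior CTVL1 aims to recover relative to plain TVL1.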

Deblurring images with random-valued noise

[Figure: Recovered images (with SNR in dB) of TVL1 and CTVL1 on the Cameraman and Lena images corrupted by Average blur and random-valued noise with noise level 40%. First column: corrupted images. Second column: restorations by TVL1. Third to fifth columns: restorations after the first, second, and third correction steps.]

Deblurring images with random-valued noise

[Figure: Recovered images of TVL1 and CTVL1 on the Lena image corrupted by Average blur and random-valued noise with noise level 70% (corrupted image SNR 8.15 dB, TVL1 restoration 9.93 dB). From left to right and top to bottom: corrupted image, the restoration by TVL1, then the first through tenth correction steps.]

Deblurring images with random-valued noise

[Table: SNR (dB) values for the deblurring and denoising results of TVL1, Two-phase⁸, and CTVL1 on the Boat, House, Man, and Goldhill test images corrupted by Average or Gaussian blur and random-valued noise at several noise levels; the numeric entries were not preserved in this transcription.]

⁸ Cai J, Chan R H and Nikolova M 2008 Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise. Inverse Problems and Imaging.

Deblurring images with salt-and-pepper noise

[Table: SNR (dB) values for the deblurring and denoising results of TVL1, Two-phase, and CTVL1 on the Boat, House, Man, and Goldhill test images corrupted by Average or Gaussian blur and salt-and-pepper noise at several noise levels; the numeric entries were not preserved in this transcription.]

Conclusions

- We propose a corrected model (CTVL1) for TVL1.
- We present a proximal ADMM to solve the corrected model.
- The accuracy of the proposed method is verified by numerical examples.

References

- Minru Bai, Xiongjun Zhang, Qianqian Shao, Adaptive correction procedure for TVL1 image deblurring under impulse noise, Inverse Problems, 2016, 32:085004 (23pp).
- Minru Bai, Xiongjun Zhang, Guyan Ni, Chunfeng Cui, An adaptive correction approach for tensor completion, SIAM J. Imaging Sciences, 2016, 9(3).

Thank you for your attention!


More information

Minimizing Isotropic Total Variation without Subiterations

Minimizing Isotropic Total Variation without Subiterations MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Minimizing Isotropic Total Variation without Subiterations Kamilov, U. S. TR206-09 August 206 Abstract Total variation (TV) is one of the most

More information

Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery

Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery Daniel L. Pimentel-Alarcón, Ashish Tiwari Georgia State University, Atlanta, GA Douglas A. Hope Hope Scientific Renaissance

More information

Sparse PCA with applications in finance

Sparse PCA with applications in finance Sparse PCA with applications in finance A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon 1 Introduction

More information

Sparse Gaussian conditional random fields

Sparse Gaussian conditional random fields Sparse Gaussian conditional random fields Matt Wytock, J. ico Kolter School of Computer Science Carnegie Mellon University Pittsburgh, PA 53 {mwytock, zkolter}@cs.cmu.edu Abstract We propose sparse Gaussian

More information

Confidence Intervals for Low-dimensional Parameters with High-dimensional Data

Confidence Intervals for Low-dimensional Parameters with High-dimensional Data Confidence Intervals for Low-dimensional Parameters with High-dimensional Data Cun-Hui Zhang and Stephanie S. Zhang Rutgers University and Columbia University September 14, 2012 Outline Introduction Methodology

More information

Denoising of NIRS Measured Biomedical Signals

Denoising of NIRS Measured Biomedical Signals Denoising of NIRS Measured Biomedical Signals Y V Rami reddy 1, Dr.D.VishnuVardhan 2 1 M. Tech, Dept of ECE, JNTUA College of Engineering, Pulivendula, A.P, India 2 Assistant Professor, Dept of ECE, JNTUA

More information

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem Kangkang Deng, Zheng Peng Abstract: The main task of genetic regulatory networks is to construct a

More information

Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables

Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong 2014 Workshop

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Sparse Recovery using L1 minimization - algorithms Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

Coordinate Update Algorithm Short Course Operator Splitting

Coordinate Update Algorithm Short Course Operator Splitting Coordinate Update Algorithm Short Course Operator Splitting Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 25 Operator splitting pipeline 1. Formulate a problem as 0 A(x) + B(x) with monotone operators

More information

Sparse Regularization via Convex Analysis

Sparse Regularization via Convex Analysis Sparse Regularization via Convex Analysis Ivan Selesnick Electrical and Computer Engineering Tandon School of Engineering New York University Brooklyn, New York, USA 29 / 66 Convex or non-convex: Which

More information

1 Sparsity and l 1 relaxation

1 Sparsity and l 1 relaxation 6.883 Learning with Combinatorial Structure Note for Lecture 2 Author: Chiyuan Zhang Sparsity and l relaxation Last time we talked about sparsity and characterized when an l relaxation could recover the

More information

Sparse and Low-Rank Matrix Decomposition Via Alternating Direction Method

Sparse and Low-Rank Matrix Decomposition Via Alternating Direction Method Sparse and Low-Rank Matrix Decomposition Via Alternating Direction Method Xiaoming Yuan a, 1 and Junfeng Yang b, 2 a Department of Mathematics, Hong Kong Baptist University, Hong Kong, China b Department

More information

MMSE Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm

MMSE Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm Bodduluri Asha, B. Leela kumari Abstract: It is well known that in a real world signals do not exist without noise, which may be negligible

More information

Bias-free Sparse Regression with Guaranteed Consistency

Bias-free Sparse Regression with Guaranteed Consistency Bias-free Sparse Regression with Guaranteed Consistency Wotao Yin (UCLA Math) joint with: Stanley Osher, Ming Yan (UCLA) Feng Ruan, Jiechao Xiong, Yuan Yao (Peking U) UC Riverside, STATS Department March

More information

Generalized Orthogonal Matching Pursuit- A Review and Some

Generalized Orthogonal Matching Pursuit- A Review and Some Generalized Orthogonal Matching Pursuit- A Review and Some New Results Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur, INDIA Table of Contents

More information

Proximal Newton Method. Zico Kolter (notes by Ryan Tibshirani) Convex Optimization

Proximal Newton Method. Zico Kolter (notes by Ryan Tibshirani) Convex Optimization Proximal Newton Method Zico Kolter (notes by Ryan Tibshirani) Convex Optimization 10-725 Consider the problem Last time: quasi-newton methods min x f(x) with f convex, twice differentiable, dom(f) = R

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.11

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.11 XI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.11 Alternating direction methods of multipliers for separable convex programming Bingsheng He Department of Mathematics

More information

Multiple Change Point Detection by Sparse Parameter Estimation

Multiple Change Point Detection by Sparse Parameter Estimation Multiple Change Point Detection by Sparse Parameter Estimation Department of Econometrics Fac. of Economics and Management University of Defence Brno, Czech Republic Dept. of Appl. Math. and Comp. Sci.

More information

Using ADMM and Soft Shrinkage for 2D signal reconstruction

Using ADMM and Soft Shrinkage for 2D signal reconstruction Using ADMM and Soft Shrinkage for 2D signal reconstruction Yijia Zhang Advisor: Anne Gelb, Weihong Guo August 16, 2017 Abstract ADMM, the alternating direction method of multipliers is a useful algorithm

More information

Sparse signals recovered by non-convex penalty in quasi-linear systems

Sparse signals recovered by non-convex penalty in quasi-linear systems Cui et al. Journal of Inequalities and Applications 018) 018:59 https://doi.org/10.1186/s13660-018-165-8 R E S E A R C H Open Access Sparse signals recovered by non-conve penalty in quasi-linear systems

More information

An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss

An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss arxiv:1811.04545v1 [stat.co] 12 Nov 2018 Cheng Wang School of Mathematical Sciences, Shanghai Jiao

More information

SOLVING NON-CONVEX LASSO TYPE PROBLEMS WITH DC PROGRAMMING. Gilles Gasso, Alain Rakotomamonjy and Stéphane Canu

SOLVING NON-CONVEX LASSO TYPE PROBLEMS WITH DC PROGRAMMING. Gilles Gasso, Alain Rakotomamonjy and Stéphane Canu SOLVING NON-CONVEX LASSO TYPE PROBLEMS WITH DC PROGRAMMING Gilles Gasso, Alain Rakotomamonjy and Stéphane Canu LITIS - EA 48 - INSA/Universite de Rouen Avenue de l Université - 768 Saint-Etienne du Rouvray

More information

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Adrien Todeschini Inria Bordeaux JdS 2014, Rennes Aug. 2014 Joint work with François Caron (Univ. Oxford), Marie

More information

Sparse regression. Optimization-Based Data Analysis. Carlos Fernandez-Granda

Sparse regression. Optimization-Based Data Analysis.   Carlos Fernandez-Granda Sparse regression Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 3/28/2016 Regression Least-squares regression Example: Global warming Logistic

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization /

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization / Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear

More information

MIXED GAUSSIAN-IMPULSE NOISE IMAGE RESTORATION VIA TOTAL VARIATION

MIXED GAUSSIAN-IMPULSE NOISE IMAGE RESTORATION VIA TOTAL VARIATION MIXED GAUSSIAN-IMPULSE NOISE IMAGE RESTORATION VIA TOTAL VARIATION P. Rodríguez, R. Rojas Department of Electrical Engineering Pontificia Universidad Católica del Perú Lima, Peru B. Wohlberg T5: Applied

More information

A General Framework for a Class of Primal-Dual Algorithms for TV Minimization

A General Framework for a Class of Primal-Dual Algorithms for TV Minimization A General Framework for a Class of Primal-Dual Algorithms for TV Minimization Ernie Esser UCLA 1 Outline A Model Convex Minimization Problem Main Idea Behind the Primal Dual Hybrid Gradient (PDHG) Method

More information

Lecture 2 Part 1 Optimization

Lecture 2 Part 1 Optimization Lecture 2 Part 1 Optimization (January 16, 2015) Mu Zhu University of Waterloo Need for Optimization E(y x), P(y x) want to go after them first, model some examples last week then, estimate didn t discuss

More information

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille

More information

Optimization for Learning and Big Data

Optimization for Learning and Big Data Optimization for Learning and Big Data Donald Goldfarb Department of IEOR Columbia University Department of Mathematics Distinguished Lecture Series May 17-19, 2016. Lecture 1. First-Order Methods for

More information

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing

Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar

More information

Adaptive one-bit matrix completion

Adaptive one-bit matrix completion Adaptive one-bit matrix completion Joseph Salmon Télécom Paristech, Institut Mines-Télécom Joint work with Jean Lafond (Télécom Paristech) Olga Klopp (Crest / MODAL X, Université Paris Ouest) Éric Moulines

More information

Going off the grid. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison

Going off the grid. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Going off the grid Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Joint work with Badri Bhaskar Parikshit Shah Gonnguo Tang We live in a continuous world... But we work

More information

Parcimonie en apprentissage statistique

Parcimonie en apprentissage statistique Parcimonie en apprentissage statistique Guillaume Obozinski Ecole des Ponts - ParisTech Journée Parcimonie Fédération Charles Hermite, 23 Juin 2014 Parcimonie en apprentissage 1/44 Classical supervised

More information

arxiv: v3 [math.na] 9 May 2014

arxiv: v3 [math.na] 9 May 2014 arxiv:3.683v3 [math.na] 9 May 04 New explicit thresholding/shrinkage formulas for one class of regularization problems with overlapping group sparsity and their applications Gang Liu, Ting-Zhu Huang, Xiao-Guang

More information

OWL to the rescue of LASSO

OWL to the rescue of LASSO OWL to the rescue of LASSO IISc IBM day 2018 Joint Work R. Sankaran and Francis Bach AISTATS 17 Chiranjib Bhattacharyya Professor, Department of Computer Science and Automation Indian Institute of Science,

More information

Scaled gradient projection methods in image deblurring and denoising

Scaled gradient projection methods in image deblurring and denoising Scaled gradient projection methods in image deblurring and denoising Mario Bertero 1 Patrizia Boccacci 1 Silvia Bonettini 2 Riccardo Zanella 3 Luca Zanni 3 1 Dipartmento di Matematica, Università di Genova

More information

arxiv: v3 [math.oc] 29 Jun 2016

arxiv: v3 [math.oc] 29 Jun 2016 MOCCA: mirrored convex/concave optimization for nonconvex composite functions Rina Foygel Barber and Emil Y. Sidky arxiv:50.0884v3 [math.oc] 9 Jun 06 04.3.6 Abstract Many optimization problems arising

More information

A GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR TV MINIMIZATION

A GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR TV MINIMIZATION A GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR TV MINIMIZATION ERNIE ESSER XIAOQUN ZHANG TONY CHAN Abstract. We generalize the primal-dual hybrid gradient (PDHG) algorithm proposed

More information

Non-negative Quadratic Programming Total Variation Regularization for Poisson Vector-Valued Image Restoration

Non-negative Quadratic Programming Total Variation Regularization for Poisson Vector-Valued Image Restoration University of New Mexico UNM Digital Repository Electrical & Computer Engineering Technical Reports Engineering Publications 5-10-2011 Non-negative Quadratic Programming Total Variation Regularization

More information

Generalized Elastic Net Regression

Generalized Elastic Net Regression Abstract Generalized Elastic Net Regression Geoffroy MOURET Jean-Jules BRAULT Vahid PARTOVINIA This work presents a variation of the elastic net penalization method. We propose applying a combined l 1

More information

Markov Random Fields

Markov Random Fields Markov Random Fields Umamahesh Srinivas ipal Group Meeting February 25, 2011 Outline 1 Basic graph-theoretic concepts 2 Markov chain 3 Markov random field (MRF) 4 Gauss-Markov random field (GMRF), and

More information

Primal-dual algorithms for the sum of two and three functions 1

Primal-dual algorithms for the sum of two and three functions 1 Primal-dual algorithms for the sum of two and three functions 1 Ming Yan Michigan State University, CMSE/Mathematics 1 This works is partially supported by NSF. optimization problems for primal-dual algorithms

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley A. d Aspremont, INFORMS, Denver,

More information

Solving l 1 Regularized Least Square Problems with Hierarchical Decomposition

Solving l 1 Regularized Least Square Problems with Hierarchical Decomposition Solving l 1 Least Square s with 1 mzhong1@umd.edu 1 AMSC and CSCAMM University of Maryland College Park Project for AMSC 663 October 2 nd, 2012 Outline 1 The 2 Outline 1 The 2 Compressed Sensing Example

More information

Convex Optimization and l 1 -minimization

Convex Optimization and l 1 -minimization Convex Optimization and l 1 -minimization Sangwoon Yun Computational Sciences Korea Institute for Advanced Study December 11, 2009 2009 NIMS Thematic Winter School Outline I. Convex Optimization II. l

More information

Solving DC Programs that Promote Group 1-Sparsity

Solving DC Programs that Promote Group 1-Sparsity Solving DC Programs that Promote Group 1-Sparsity Ernie Esser Contains joint work with Xiaoqun Zhang, Yifei Lou and Jack Xin SIAM Conference on Imaging Science Hong Kong Baptist University May 14 2014

More information

Sparsity Regularization

Sparsity Regularization Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation

More information

An interior-point stochastic approximation method and an L1-regularized delta rule

An interior-point stochastic approximation method and an L1-regularized delta rule Photograph from National Geographic, Sept 2008 An interior-point stochastic approximation method and an L1-regularized delta rule Peter Carbonetto, Mark Schmidt and Nando de Freitas University of British

More information

Supplemental Figures: Results for Various Color-image Completion

Supplemental Figures: Results for Various Color-image Completion ANONYMOUS AUTHORS: SUPPLEMENTAL MATERIAL (NOVEMBER 7, 2017) 1 Supplemental Figures: Results for Various Color-image Completion Anonymous authors COMPARISON WITH VARIOUS METHODS IN COLOR-IMAGE COMPLETION

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Spatially adaptive alpha-rooting in BM3D sharpening

Spatially adaptive alpha-rooting in BM3D sharpening Spatially adaptive alpha-rooting in BM3D sharpening Markku Mäkitalo and Alessandro Foi Department of Signal Processing, Tampere University of Technology, P.O. Box FIN-553, 33101, Tampere, Finland e-mail:

More information

Smoothly Clipped Absolute Deviation (SCAD) for Correlated Variables

Smoothly Clipped Absolute Deviation (SCAD) for Correlated Variables Smoothly Clipped Absolute Deviation (SCAD) for Correlated Variables LIB-MA, FSSM Cadi Ayyad University (Morocco) COMPSTAT 2010 Paris, August 22-27, 2010 Motivations Fan and Li (2001), Zou and Li (2008)

More information

New inexact explicit thresholding/ shrinkage formulas for inverse problems with overlapping group sparsity

New inexact explicit thresholding/ shrinkage formulas for inverse problems with overlapping group sparsity Liu et al EURASIP Journal on Image and Video Processing 06 06:8 DOI 086/s3640-06-08-5 EURASIP Journal on Image and Video Processing RESEARCH Open Access New inexact explicit thresholding/ shrinkage formulas

More information

Variational Image Restoration

Variational Image Restoration Variational Image Restoration Yuling Jiao yljiaostatistics@znufe.edu.cn School of and Statistics and Mathematics ZNUFE Dec 30, 2014 Outline 1 1 Classical Variational Restoration Models and Algorithms 1.1

More information

A Randomized Approach for Crowdsourcing in the Presence of Multiple Views

A Randomized Approach for Crowdsourcing in the Presence of Multiple Views A Randomized Approach for Crowdsourcing in the Presence of Multiple Views Presenter: Yao Zhou joint work with: Jingrui He - 1 - Roadmap Motivation Proposed framework: M2VW Experimental results Conclusion

More information

A REWEIGHTED l 2 METHOD FOR IMAGE RESTORATION WITH POISSON AND MIXED POISSON-GAUSSIAN NOISE. Jia Li. Zuowei Shen. Rujie Yin.

A REWEIGHTED l 2 METHOD FOR IMAGE RESTORATION WITH POISSON AND MIXED POISSON-GAUSSIAN NOISE. Jia Li. Zuowei Shen. Rujie Yin. Volume X, No. 0X, 0xx, X XX doi:10.3934/ipi.xx.xx.xx A REWEIGHTED l METHOD FOR IMAGE RESTORATION WITH POISSON AND MIXED POISSON-GAUSSIAN NOISE JIA LI AND ZUOWEI SHEN AND RUJIE YIN AND XIAOQUN ZHANG Jia

More information