Artifact-free Wavelet Denoising: Non-convex Sparse Regularization, Convex Optimization


IEEE SIGNAL PROCESSING LETTERS, 22(9):1364-1368, SEPTEMBER 2015. (PREPRINT)

Artifact-free Wavelet Denoising: Non-convex Sparse Regularization, Convex Optimization

Yin Ding and Ivan W. Selesnick

Abstract—Algorithms for signal denoising that combine wavelet-domain sparsity and total variation (TV) regularization are relatively free of artifacts, such as pseudo-Gibbs oscillations, normally introduced by pure wavelet thresholding. This paper formulates wavelet-TV (WATV) denoising as a unified problem. To strongly induce wavelet sparsity, the proposed approach uses non-convex penalty functions. At the same time, in order to draw on the advantages of convex optimization (unique minimum, reliable algorithms, simplified regularization parameter selection), the non-convex penalties are chosen so as to ensure the convexity of the total objective function. A computationally efficient, fast converging algorithm is derived.

I. INTRODUCTION

Simple thresholding of wavelet coefficients is a reasonably effective procedure for noise reduction when the signal of interest possesses a sparse wavelet representation. However, wavelet thresholding often introduces artifacts such as spurious noise spikes and pseudo-Gibbs oscillations around discontinuities [13]. In contrast, total variation (TV) denoising [34] does not introduce such artifacts; but it often produces undesirable staircase artifacts. Roughly stated, noise spikes resulting from wavelet thresholding are due to noisy wavelet coefficients exceeding the threshold, whereas pseudo-Gibbs artifacts are due to nonzero coefficients being erroneously set to zero. An approach to alleviate pseudo-Gibbs artifacts is to estimate thresholded coefficients by variational means [5], [14]. The use of TV minimization as a way to recover thresholded coefficients is particularly effective [10], [11], [18], [19]. In this approach, thresholding is performed first. Those coefficients that survive thresholding (i.e., significant coefficients) are maintained.
Those coefficients that do not survive (i.e., insignificant coefficients) are then optimized to minimize a TV objective function. TV regularization was subsequently used with wavelet packets [26], curvelets [7], complex wavelets [25], shearlets [21], tetrolets [23], [37], and generalized in [20]. Powerful proximal algorithms have been developed for signal restoration with general hybrid regularization (including wavelet-TV) allowing constraints and non-Gaussian noise [32]. In this letter, we propose a unified wavelet-TV (WATV) approach that estimates all wavelet coefficients (significant and insignificant) simultaneously via the minimization of a single objective function. To induce wavelet-domain sparsity, we employ non-convex penalties due to their strong sparsity-inducing properties [29]-[31]. In general practice, when non-convex penalties are used, the convexity of the objective function is usually sacrificed. However, in our approach, we restrict the non-convex penalty so as to ensure the strict convexity of the objective function; then, the minimizer is unique and can be reliably obtained through convex optimization. The proposed WATV denoising method is relatively resistant to pseudo-Gibbs oscillations and spurious noise spikes. We derive a computationally efficient, fast converging optimization algorithm for the proposed objective function.

Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. The authors are with the Department of Electrical and Computer Engineering, School of Engineering, New York University, 6 Metrotech Center, Brooklyn, NY 11201. Email: yd72@nyu.edu and selesi@nyu.edu. MATLAB software available at http://eeweb.poly.edu/iselesni/watv
The formulation of convex optimization problems for linear inverse problems, where a non-convex regularizer is designed in such a way that the total objective function is convex, was proposed by Blake and Zisserman [4] and Nikolova [27], [28], [31]. The use of semidefinite programming (SDP) to attain such a regularizer has been considered [35]. Recently, the concept has been applied to group-sparse denoising [12], TV denoising [36], and the non-convex fused lasso [3].

II. PROBLEM FORMULATION

We consider the estimation of a signal x ∈ R^N observed in additive white Gaussian noise (AWGN),

  y_n = x_n + v_n,  n = 0, 1, ..., N - 1.   (1)

We denote the wavelet transform by W and the wavelet coefficients of signal x as w = Wx. We index the coefficients as w_{j,k}, where j and k are the scale and time indices, respectively. In this work, we use the translation-invariant (i.e., undecimated) wavelet transform [13], [24], which satisfies the Parseval frame condition,

  W^T W = I.   (2)

But the proposed WATV denoising algorithm can be used with any transform W satisfying (2). The total variation (TV) of a signal x ∈ R^N is defined as TV(x) := ||Dx||_1, where D is the first-order difference matrix

  D = [ -1  1
           -1  1
              ⋱  ⋱
                -1  1 ],   (3)

and ||x||_1 denotes the l1 norm of x, i.e., ||x||_1 = Σ_n |x_n|. We also denote ||x||_2^2 := Σ_n |x_n|^2. For a set of doubly-indexed wavelet coefficients w, we denote ||w||_2^2 := Σ_{j,k} |w_{j,k}|^2.
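The quantities just defined are easy to compute directly. The sketch below (not part of the original paper; the helper name `tv` is ours) evaluates TV(x) = ||Dx||_1 using NumPy's first-order difference:

```python
import numpy as np

def tv(x):
    # TV(x) = ||D x||_1, with D the first-order difference matrix of Eq. (3).
    return np.sum(np.abs(np.diff(x)))

# A piecewise-constant signal has small TV; additive noise inflates it.
x = np.concatenate([np.zeros(50), np.ones(50)])
assert tv(x) == 1.0

rng = np.random.default_rng(0)
y = x + 0.1 * rng.standard_normal(100)
assert tv(y) > tv(x)
```

This is the sense in which TV regularization favors piecewise-constant estimates: flat segments contribute nothing to ||Dx||_1, while every noise-induced wiggle adds to it.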

The proposed WATV denoising method finds wavelet coefficients ŵ by solving the optimization problem

  ŵ = arg min_w { F(w) = (1/2) ||Wy - w||_2^2 + Σ_{j,k} λ_j φ(w_{j,k}; a_j) + β ||D W^T w||_1 }.   (4)

The estimate of the signal x is then given by the inverse wavelet transform of ŵ, i.e., x̂ = W^T ŵ. The penalty term ||D W^T w||_1 is the total variation of the signal estimate x̂. The regularization parameters are λ_j > 0 and β > 0. We allow the wavelet regularization and penalty parameters (λ_j and a_j) to vary with scale j.

A. Parameterized Penalty Function

In the proposed approach, the function φ(·; a): R → R is a non-convex sparsity-inducing penalty function with parameter a ≥ 0. We assume φ satisfies the following conditions:

1) φ is continuous on R
2) φ is twice continuously differentiable, increasing, and concave on R+
3) φ(x; 0) = |x|
4) φ(0; a) = 0
5) φ(-x; a) = φ(x; a)
6) φ′(0^+; a) = 1
7) φ″(x; a) ≥ -a for all x > 0

The following proposition indicates a suitable range of values for the parameter a. This proposition combines Propositions 1 and 2 of [12].

Proposition 1. Suppose the parameterized penalty function φ satisfies the above-listed properties and λ > 0. If

  0 ≤ a < 1/λ,   (5)

then the function f: R → R,

  f(x) = (1/2)(y - x)^2 + λ φ(x; a),   (6)

is strictly convex, and the function θ: R → R,

  θ(y; λ, a) := arg min_x { (1/2)(y - x)^2 + λ φ(x; a) },   (7)

is a continuous nonlinear threshold function with threshold value λ. That is, θ(y; λ, a) = 0 for all |y| < λ.

Note that (7) is the definition of the proximity operator, or proximal mapping, of λφ [2], [15]. Several common penalties satisfy the above-listed properties, for example, the logarithmic and arctangent penalties (when suitably normalized), both of which have been advocated for sparse regularization [8], [29]. We use the arctangent penalty as it induces sparsity more strongly [12], [35],

  φ(x; a) = (2 / (a√3)) [ arctan( (1 + 2a|x|) / √3 ) - π/6 ],  a > 0,
  φ(x; a) = |x|,  a = 0.   (8)
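To make the threshold behavior of Proposition 1 concrete, the sketch below evaluates the arctangent penalty (8) and approximates the threshold function (7) by brute-force minimization over a grid. The paper instead uses a closed-form cubic-root solution (Eqn. (21) in [35]), so `theta_numeric` is only an illustrative stand-in:

```python
import numpy as np

def atan_penalty(x, a):
    # Arctangent penalty, Eq. (8); reduces to |x| when a = 0.
    x = np.abs(x)
    if a == 0:
        return x
    return (2.0 / (a * np.sqrt(3.0))) * (np.arctan((1 + 2*a*x) / np.sqrt(3.0)) - np.pi/6)

def theta_numeric(y, lam, a):
    # Threshold function, Eq. (7): the prox of lam*phi(.; a), found here by
    # brute-force grid search (the paper solves a cubic equation instead).
    grid = np.linspace(-10.0, 10.0, 200001)
    cost = 0.5 * (y - grid)**2 + lam * atan_penalty(grid, a)
    return grid[np.argmin(cost)]

lam, a = 1.0, 0.95   # a < 1/lam, so the scalar cost (6) is strictly convex
assert abs(atan_penalty(0.0, a)) < 1e-12        # phi(0; a) = 0
assert abs(theta_numeric(0.5, lam, a)) < 1e-3   # |y| < lam  ->  theta(y) = 0
w = theta_numeric(3.0, lam, a)
assert 2.0 < w < 3.0   # shrinks less than the soft threshold (which gives y - lam = 2)
```

The last assertion illustrates the point made next in the text: with a close to 1/λ, large inputs are barely shrunk, so θ approaches the hard threshold, while a = 0 recovers the soft threshold exactly.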
Figure 1 illustrates this penalty and its corresponding threshold function θ, which is given by the solution to a cubic polynomial; see Eqn. (21) in [35].

Fig. 1. (a) Non-convex penalty. (b) Threshold function θ(y) compared with the soft threshold soft(y) (λ = 1, a = 0.95).

The parameter a (with 0 ≤ a < 1/λ) controls the non-convexity of the penalty φ; specifically, φ″(0^+; a) = -a. This parameter also controls the shape of the threshold function θ; namely, the right-sided derivative at the threshold λ is θ′(λ^+) = 1/(1 - aλ). If a = 0, then θ′(λ^+) = 1 and θ is the soft-threshold function. If a ≈ 1/λ (and a < 1/λ), then θ is a continuous approximation of the hard-threshold function. But, if a > 1/λ, then f in (6) is not convex, the associated threshold function is not continuous, and the threshold value is not λ. As discussed in [35], the value a = 1/λ is the critical value that maximizes the sparsity-inducing behavior of the penalty while ensuring convexity of f.

B. Convexity of the Objective Function

In the WATV denoising problem (4), we use the non-convex penalty φ to strongly promote sparsity, but we seek to ensure the strict convexity of the objective function F.

Proposition 2. Suppose the parameterized penalty function φ satisfies the properties listed in Sec. II-A. The objective function F in (4) is strictly convex if

  a_j < 1/λ_j   (9)

for each scale j.

Proof. We write the objective function in (4) as

  F(w) = Σ_{j,k} { (1/2)([Wy]_{j,k} - w_{j,k})^2 + λ_j φ(w_{j,k}; a_j) } + β ||D W^T w||_1.   (10)

By Proposition 1, condition (9) ensures each term in the curly braces is strictly convex. Also, the l1 norm term is convex. It follows that F, being the sum of strictly convex and convex functions, is strictly convex.

When β = 0 in (4) and a_j < 1/λ_j for all j, then ŵ_{j,k} = θ([Wy]_{j,k}; λ_j, a_j). That is, the solution is obtained by wavelet-domain thresholding.

C. Parameters

The proposed formulation (4) requires parameters λ_j, a_j, and β. We suggest using a_j = 1/λ_j or slightly less (e.g., 0.95/λ_j) to maximally induce sparsity while keeping F convex. To set λ_j and β, we consider separately the special cases of (i) β = 0 and (ii) λ_j = 0 for all j. In case (i), problem (4) reduces to pure wavelet denoising, for which we presume suitable parameters, denoted λ_j^W, are known. In case (ii), problem (4) reduces to pure TV denoising, for which we presume a suitable parameter, denoted β^TV, is known. We suggest setting

  λ_j = η λ_j^W,  β = (1 - η) β^TV,   (11)

where 0 < η < 1 controls the relative weight of wavelet and TV regularization. We further suggest the smaller range 0.9 < η < 1.0, so that TV regularization makes a relatively minor adjustment of the wavelet solution, to reduce spurious noise spikes and pseudo-Gibbs phenomena. In this work, we set the wavelet thresholds λ_j^W = 2.5σ_j, where σ_j^2 denotes the noise variance in wavelet scale j. For an undecimated wavelet transform satisfying (2), we have σ_j = σ/2^{j/2}, where σ^2 is the noise variance in the signal domain. For pure TV denoising, we use β^TV = √N σ/4 in accordance with [17]. Therefore, we suggest using

  λ_j = 2.5 η σ/2^{j/2},  β = (1 - η) √N σ/4,   (12)

with a nominal value of η = 0.95. Additionally, we omit regularization of low-pass wavelet coefficients.

III. OPTIMIZATION ALGORITHM

To solve the WATV denoising problem (4) we use the split augmented Lagrangian shrinkage algorithm (SALSA) [1], which in turn is based on the alternating direction method of multipliers (ADMM) [6], [22]. Proximal methods (e.g., Douglas-Rachford) could also be used [15]. We assume that a_j satisfies (9) for each j, so that F is strictly convex. With variable splitting, problem (4) is expressed as a constrained problem,

  arg min_{w,u} g_1(w) + g_2(u)  subject to  u = w,   (13)

where

  g_1(w) = (1/2) ||Wy - w||_2^2 + Σ_{j,k} λ_j φ(w_{j,k}; a_j),   (14a)
  g_2(u) = β ||D W^T u||_1.   (14b)

Both g_1 and g_2 are convex, hence we may utilize the convergence theory of [22].
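The parameter rule (12) is simple enough to write down directly. The sketch below uses our own helper name `watv_params` and assumes a Parseval undecimated transform, so that the scale-j noise level is σ/2^(j/2):

```python
import numpy as np

def watv_params(sigma, N, J, eta=0.95):
    # Suggested parameters from Eq. (12):
    #   lam_j = 2.5 * eta * sigma / 2^(j/2),   beta = (1 - eta) * sqrt(N) * sigma / 4.
    lam = np.array([2.5 * eta * sigma / 2**(j / 2) for j in range(1, J + 1)])
    beta = (1 - eta) * np.sqrt(N) * sigma / 4
    a = 1.0 / lam   # a_j = 1/lam_j: critical value for sparsity while keeping F convex
    return lam, a, beta

lam, a, beta = watv_params(sigma=4.0, N=1024, J=5)
assert np.all(a * lam <= 1.0 + 1e-12)   # convexity condition (9), at the critical value
assert np.all(np.diff(lam) < 0)         # thresholds shrink as the scale gets coarser
```

Note that η near 1 puts almost all the regularization weight on the wavelet term, with TV acting only as the small corrective term the text recommends.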
Even though φ is non-convex, the function g_1 is strictly convex because a_j satisfies (9). The augmented Lagrangian is given by

  L(w, u, d) = g_1(w) + g_2(u) + (µ/2) ||u - w - d||_2^2.   (15)

The solution of (13) can be found by iteratively minimizing L with respect to w and u alternately, as proven in [22]. Thereby, we obtain an iterative algorithm to solve (4), where each iteration consists of three steps:

  w ← arg min_w { g_1(w) + (µ/2) ||u - w - d||_2^2 },   (16a)
  u ← arg min_u { g_2(u) + (µ/2) ||u - w - d||_2^2 },   (16b)
  d ← d - (u - w),   (16c)

for some µ > 0. We initialize u = Wy and d = 0. Note that both (16a) and (16b) are strictly convex. (Sub-problem (16a) is strictly convex because a_j satisfies (9).)

Sub-problem (16a) can be solved exactly and explicitly. Combining the quadratic terms, (16a) may be expressed as

  arg min_w Σ_{j,k} { (1/2)(p_{j,k} - w_{j,k})^2 + (λ_j/(µ + 1)) φ(w_{j,k}; a_j) },

where p = (Wy + µ(u - d))/(µ + 1). Since this problem is separable, it can be further written as

  arg min_{v ∈ R} { (1/2)(p_{j,k} - v)^2 + (λ_j/(µ + 1)) φ(v; a_j) },   (17)

for each j and k. This is the same form as (6). Consequently, the solution to sub-problem (16a) is given by

  w_{j,k} = θ( p_{j,k}; λ_j/(µ + 1), a_j ),   (18)

where θ is the threshold function (7).

Sub-problem (16b) can also be solved exactly. We use properties of proximity operators to express (16b) in terms of a TV denoising problem [15]. In turn, the TV denoising problem can be solved exactly by a fast finite-time algorithm [16]. With v := w + d, sub-problem (16b) is

  u = arg min_u { β ||D W^T u||_1 + (µ/2) ||u - v||_2^2 }   (19)
    = arg min_u { ψ(W^T u) + (1/2) ||u - v||_2^2 }   (20)
    = arg min_u { h(u) + (1/2) ||u - v||_2^2 }   (21)
    = prox_h(v)   (22)
    = v + W ( prox_ψ(W^T v) - W^T v ),   (23)

where

  ψ(x) := (β/µ) ||Dx||_1,  h(u) := ψ(W^T u),   (24)

and where we have used the semi-orthogonal linear transform property of proximity operators to obtain (23) from (22), which depends on (2). See Table 1.1 and Example 1.7.4 in [15], and see [9], [33] for details. Note that

  prox_ψ(W^T v) = arg min_x { (β/µ) ||Dx||_1 + (1/2) ||x - W^T v||_2^2 }   (25)
                = tvd( W^T v, β/µ )   (26)

is total variation denoising. We use the fast finite-time exact software of [16].
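The semi-orthogonal property used to pass from (22) to (23) is easy to verify numerically. In the sketch below, a small Parseval frame W (stacked identity and random orthonormal basis, scaled by 1/√2 — an illustrative stand-in for the undecimated wavelet transform) and the l1 norm with its soft-threshold prox (a stand-in for the TV seminorm ψ of (24)) are used; the vector produced by the composition formula is checked to minimize h(u) + (1/2)||u - v||²:

```python
import numpy as np

def soft(x, t):
    # Soft threshold: the prox of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
n, lam = 8, 0.5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
W = np.vstack([np.eye(n), Q]) / np.sqrt(2)   # Parseval frame: W^T W = I
assert np.allclose(W.T @ W, np.eye(n))

v = rng.standard_normal(2 * n)
# Semi-orthogonal property: prox of h(u) = psi(W^T u) via the prox of psi.
u_star = v + W @ (soft(W.T @ v, lam) - W.T @ v)

# u_star should minimize the strongly convex F below, so random
# perturbations can only increase the objective value.
F = lambda u: lam * np.sum(np.abs(W.T @ u)) + 0.5 * np.sum((u - v)**2)
for _ in range(200):
    d = 1e-3 * rng.standard_normal(2 * n)
    assert F(u_star) <= F(u_star + d) + 1e-12
```

The same composition applies verbatim in (23): only the scalar prox changes, from soft thresholding to the TV denoiser (26).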
Hence, (16b) can be implemented as

  v = d + w,   (27)
  u = v + W ( tvd(W^T v, β/µ) - W^T v ).   (28)

Hence, solving problem (4) by (16a)-(16c) consists of iteratively thresholding (18) [to solve (16a)] and performing total variation denoising (28) [to solve (16b)]. Table I summarizes the whole algorithm. Each iteration involves one wavelet transform, one inverse wavelet transform, and one total variation denoising problem. (The term Wy in the algorithm may be precomputed.)

TABLE I
ITERATIVE ALGORITHM FOR WATV DENOISING (4)

  Input: y, λ_j, β, µ
  Initialization: u = Wy, d = 0, a_j = 1/λ_j
  Repeat:
    p = (Wy + µ(u - d))/(µ + 1)
    w_{j,k} = θ(p_{j,k}; λ_j/(µ + 1), a_j) for all j, k
    v = d + w
    u = v + W ( tvd(W^T v, β/µ) - W^T v )
    d = d - (u - w)
  Until convergence
  Return: x̂ = W^T w

Since both minimization problems, (16a) and (16b), are solved exactly, the algorithm is guaranteed to converge to the unique global minimizer according to the convergence theory of [22].

IV. EXAMPLE

We illustrate the proposed WATV denoising algorithm using noisy data of length 1024 (Fig. 2a), generated by Wavelab (http://www-stat.stanford.edu/~wavelab/), with AWGN of σ = 4.0. We use a 5-scale undecimated wavelet transform with two vanishing moments. Hard-thresholding (with λ_j^W = 2.5σ_j) results in noise spikes due to wavelet coefficients exceeding the threshold (Fig. 2b). Two-step WATV denoising [19] (wavelet hard-thresholding followed by TV optimization of coefficients thresholded to zero) does not correct noise spikes due to noisy coefficients that exceed the threshold, so we use higher thresholds of σ_j √(2 log N) to avoid noise spikes from occurring in the first step. The result (Fig. 2c) is relatively free of noise spikes and pseudo-Gibbs oscillations. The proposed WATV algorithm [with parameters (12), η = 0.95, and a_j = 1/λ_j] is resistant to spurious noise spikes, preserves sharp discontinuities, and results in a lower root-mean-square error (RMSE) (Fig. 2d). The proposed algorithm was run for 20 iterations in a time of 0.15 seconds using Matlab 7.14 on a MacBook Air (1.3 GHz CPU).

TABLE II
AVERAGE RMSE FOR DENOISING EXAMPLE

  σ      Thresholding   Two-step WATV   Proposed WATV
  1.0    0.44           0.39            0.37
  2.0    0.81           0.69            0.67
  4.0    1.54           1.44            1.28
  8.0    2.90           2.75            2.46
  16.0   5.25           4.77            4.19
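A toy version of Table I can be sketched in a few lines. The code below is ours, not the authors' MATLAB software: it takes W = I (which trivially satisfies (2)), uses the soft threshold (the a = 0 case) in place of the arctangent threshold, and replaces the exact TV solver of [16] with a fixed number of projected dual-gradient iterations, so it only approximates the method:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tvd(z, t, iters=200):
    # Approximate 1-D TV denoising, min_x 0.5||x - z||^2 + t ||Dx||_1,
    # by projected gradient ascent on the dual (stand-in for the exact solver).
    p = np.zeros(len(z) - 1)
    for _ in range(iters):
        x = z - np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))  # x = z - D^T p
        p = np.clip(p + 0.25 * np.diff(x), -t, t)
    return z - np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))

def watv_toy(y, lam, beta, mu=1.0, iters=50):
    # Table I with W = I, so the prox step (28) is plain TV denoising.
    u, d = y.copy(), np.zeros_like(y)
    for _ in range(iters):
        p = (y + mu * (u - d)) / (mu + 1)
        w = soft(p, lam / (mu + 1))   # thresholding step (18), a = 0 case
        v = d + w
        u = tvd(v, beta / mu)         # TV step (28) with W = I
        d = d - (u - w)
    return w

rng = np.random.default_rng(0)
x = np.zeros(200); x[80:120] = 3.0            # sparse, piecewise-constant signal
y = x + 0.5 * rng.standard_normal(200)
xhat = watv_toy(y, lam=0.2, beta=2.0)
rmse = lambda e: np.sqrt(np.mean(e**2))
assert rmse(xhat - x) < rmse(y - x)           # the estimate improves on the noisy data
```

Swapping in a genuine undecimated wavelet transform for W and the arctangent threshold for `soft` recovers the full method; the loop structure is unchanged.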
Table II shows the RMSE for several values of the noise standard deviation (σ) for each method. Each RMSE value is obtained by averaging 20 independent realizations. The higher RMSE of two-step WATV appears to be due to distortion induced by the greater threshold value.

Fig. 2. (a) Simulated data (σ = 4.0). (b) Wavelet denoising (hard-thresholding) (RMSE = 1.68). (c) Two-step WATV denoising [19] (RMSE = 1.58). (d) Proposed WATV denoising (RMSE = 1.41).

V. CONCLUSION

Inspired by [18], [19], this paper describes a wavelet-TV (WATV) denoising method that is relatively free of artifacts (pseudo-Gibbs oscillations and noise spikes) normally introduced by simple wavelet thresholding. The method is formulated as an optimization problem incorporating both wavelet sparsity and TV regularization. To strongly induce wavelet sparsity, the approach uses parameterized non-convex penalty functions. Proposition 2 gives the critical parameter value to maximally promote wavelet sparsity while ensuring convexity of the objective function as a whole. A fast iterative implementation is derived. We expect the approach to be useful with transforms other than the wavelet transform, provided they satisfy the Parseval frame property (2).

ACKNOWLEDGMENT

The authors gratefully acknowledge constructive comments by anonymous reviewers. In particular, a reviewer noted that (16b) can be expressed explicitly in terms of total variation denoising, an improvement upon the inexact solution in the original manuscript.

REFERENCES

[1] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo. Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process., 19(9):2345-2356, September 2010.
[2] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[3] I. Bayram, P.-Y. Chen, and I. Selesnick. Fused lasso with a non-convex sparsity inducing penalty. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP), May 2014.
[4] A. Blake and A. Zisserman. Visual Reconstruction. MIT Press, 1987.
[5] Y. Bobichon and A. Bijaoui. Regularized multiresolution methods for astronomical image enhancement. In Proc. SPIE, volume 3169 (Wavelet Applications in Signal and Image Processing V), pages 36-45, October 1997.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[7] E. J. Candès and F. Guo. New multiscale transforms, minimum total variation synthesis: applications to edge-preserving image reconstruction. Signal Processing, 82(11):1519-1543, 2002.
[8] E. J. Candès, M. B. Wakin, and S. Boyd. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl., 14(5):877-905, December 2008.
[9] L. Chaâri, N. Pustelnik, C. Chaux, and J.-C. Pesquet. Solving inverse problems with overcomplete transforms and convex optimization techniques. In Proceedings of SPIE, volume 7446 (Wavelets XIII), August 2009.
[10] T. F. Chan and H. M. Zhou. Total variation improved wavelet thresholding in image compression. In Proc. IEEE Int. Conf. Image Processing (ICIP), volume 2, pages 391-394, September 2000.
[11] T. F. Chan and H.-M. Zhou. Total variation wavelet thresholding. J. of Scientific Computing, 32(2):315-341, August 2007.
[12] P.-Y. Chen and I. W. Selesnick. Group-sparse signal denoising: Non-convex regularization, convex optimization. IEEE Trans. Signal Process., 62(13):3464-3478, July 2014.
[13] R. R. Coifman and D. L. Donoho. Translation-invariant de-noising. In A. Antoniadis and G. Oppenheim, editors, Wavelets and Statistics, pages 125-150. Springer-Verlag, 1995.
[14] R. R. Coifman and A. Sowa. Combining the calculus of variations and wavelets for image enhancement. J. of Appl. and Comp. Harm. Analysis, 9(1):1-18, 2000.
[15] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke et al., editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185-212. Springer-Verlag, 2011.
[16] L. Condat. A direct algorithm for 1-D total variation denoising. IEEE Signal Processing Letters, 20(11):1054-1057, November 2013.
[17] L. Dümbgen and A. Kovac. Extensions of smoothing via taut strings. Electron. J. Statist., 3:41-75, 2009.
[18] S. Durand and J. Froment. Artifact free signal denoising with wavelets. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP), 2001.
[19] S. Durand and J. Froment. Reconstruction of wavelet coefficients using total variation minimization. SIAM J. Sci. Comput., 24(5):1754-1767, 2003.
[20] S. Durand and M. Nikolova. Denoising of frame coefficients using l1 data-fidelity term and edge-preserving regularization. Multiscale Modeling & Simulation, 6(2):547-576, 2007.
[21] G. R. Easley, D. Labate, and F. Colonna. Shearlet-based total variation diffusion for denoising. IEEE Trans. Image Process., 18(2):260-268, February 2009.
[22] J. Eckstein and D. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program., 55:293-318, 1992.
[23] J. Krommweh and J. Ma. Tetrolet shrinkage with anisotropic total variation minimization for image approximation. Signal Processing, 90(8):2529-2539, 2010.
[24] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Noise reduction using an undecimated discrete wavelet transform. IEEE Signal Processing Letters, 3(1):10-12, January 1996.
[25] J. Ma. Towards artifact-free characterization of surface topography using complex wavelets and total variation minimization. Applied Mathematics and Computation, 170(2):1014-1030, 2005.
[26] F. Malgouyres. Minimizing the total variation under a general convex constraint for image restoration. IEEE Trans. Image Process., 11(12):1450-1456, December 2002.
[27] M. Nikolova. Estimation of binary images by minimizing convex criteria. In Proc. IEEE Int. Conf. Image Processing (ICIP), pages 108-112 vol. 2, 1998.
[28] M. Nikolova. Markovian reconstruction using a GNC approach. IEEE Trans. Image Process., 8(9):1204-1220, 1999.
[29] M. Nikolova. Energy minimization methods. In O. Scherzer, editor, Handbook of Mathematical Methods in Imaging, chapter 5, pages 138-186. Springer, 2011.
[30] M. Nikolova, M. K. Ng, and C. Tam. On l1 data fitting and concave regularization for image recovery. SIAM J. Sci. Comput., 35(1):A397-A430, 2013.
[31] M. Nikolova, M. K. Ng, and C.-P. Tam. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process., 19(12):3073-3088, December 2010.
[32] N. Pustelnik, C. Chaux, and J.-C. Pesquet. Parallel proximal algorithm for image restoration using hybrid regularization. IEEE Trans. Image Process., 20(9):2450-2462, September 2011.
[33] N. Pustelnik, J.-C. Pesquet, and C. Chaux. Relaxing tight frame condition in parallel proximal methods for signal restoration. IEEE Trans. Signal Process., 60(2):968-973, February 2012.
[34] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259-268, 1992.
[35] I. W. Selesnick and I. Bayram. Sparse signal estimation by maximally sparse convex optimization. IEEE Trans. Signal Process., 62(5):1078-1092, March 2014.
[36] I. W. Selesnick, A. Parekh, and I. Bayram. Convex 1-D total variation denoising with non-convex regularization. IEEE Signal Processing Letters, 22(2):141-144, February 2015.
[37] L. Wang, L. Xiao, J. Zhang, and Z. Wei. New image restoration method associated with tetrolets shrinkage and weighted anisotropic total variation. Signal Processing, 93(4):661-670, 2013.