IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 16, NO. 11, NOVEMBER 2007

Morphological Component Analysis: An Adaptive Thresholding Strategy

Jérôme Bobin, Jean-Luc Starck, Jalal M. Fadili, Yassir Moudden, and David L. Donoho

Abstract—In a recent paper, a method called morphological component analysis (MCA) has been proposed to separate the texture from the natural part in images. MCA relies on an iterative thresholding algorithm, using a threshold which decreases linearly towards zero along the iterations. This paper shows how the MCA convergence can be drastically improved using the mutual incoherence of the dictionaries associated with the different components. This modified MCA algorithm is then compared to basis pursuit (BP), and experiments show that MCA and BP solutions are similar in terms of sparsity, as measured by the ℓ1 norm, but MCA is much faster and makes it possible to handle large-scale data sets.

Index Terms—Feature extraction, morphological component analysis (MCA), sparse representations.

I. INTRODUCTION

In a series of recent papers [1]–[4], the morphological component analysis (MCA) concept was developed, and it has been shown that MCA can be used for separating the texture from the piecewise smooth component [2], for inpainting applications [3], or, more generally, for separating several components which have different morphologies. MCA has also been extended to the multichannel case in [5], [6]. The main idea behind MCA is to use the morphological diversity of the different features contained in the data, and to associate each morphology with a dictionary of atoms for which a fast transform is available. Thanks to recent developments in harmonic analysis, many new multiscale transforms are now available [7]–[10], which greatly increases the potential of MCA. In Section II, we recall the MCA methodology.
In [2], the authors introduced the MCA algorithm based on an iterative thresholding scheme depending on two key parameters: the threshold and the number of iterations. In Section II-B, we introduce a new way of tuning the threshold. We show that this new strategy, called MOM for "mean of max," is very practical as it is adaptive to the data, and has a finite stopping behavior since it relieves us of the delicate choice of the number of iterations. We also prove conditions that guarantee no false detections with overwhelming probability. Numerical examples are provided in Section II-F. Furthermore, MCA/MOM also turns out to be a practical way to decompose a signal in an overcomplete representation made of a union of bases. In Section III-B, we provide a comparison between MCA and basis pursuit (BP) which clearly indicates that MCA/MOM is a practical alternative to classical sparse decomposition algorithms such as BP [11].

Manuscript received July 4, 2006; revised August 6. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Srdjan Stankovic. J. Bobin and Y. Moudden are with the DAPNIA/SEDI-SAP, Service d'Astrophysique, CEA/Saclay, Gif-sur-Yvette, France (e-mail: jerome.bobin@cea.fr; ymoudden@cea.fr). J.-L. Starck is with the DAPNIA/SEDI-SAP, Service d'Astrophysique, CEA/Saclay, Gif-sur-Yvette, France, and also with the Laboratoire APC, Paris, France (e-mail: jstarck@cea.fr). J. Fadili is with the GREYC CNRS UMR 6072, Image Processing Group, ENSICAEN, 14050 Caen Cedex, France (e-mail: jalal.fadili@greyc.ensicaen.fr). D. L. Donoho is with the Department of Statistics, Stanford University, Stanford, CA, USA (e-mail: donoho@stanford.edu).

II. NEW THRESHOLDING STRATEGY IMPROVING MCA
A. Overview of MCA

For a given N-sample signal y consisting of a sum of K signals s_k having different morphologies, MCA assumes that a dictionary of bases {Φ_1, …, Φ_K} exists such that, for each k, s_k is sparse in Φ_k and not, or at least not as sparse, in the other Φ_{k'}, k' ≠ k, sparsity being measured by ‖Φ_k^T s_k‖_0, where ‖·‖_0 denotes the ℓ0 pseudo-norm of a vector (i.e., the number of its nonzero coefficients). For the sake of simplicity, we consider in this paper a number of morphological components equal to two, but our results can easily be extended to any K. We write α_k = Φ_k^T s_k for the decomposition of s_k in Φ_k. In [1] and [2], it was proposed to estimate the components s_1 and s_2 by solving the following constrained optimization problem:

{s̃_1, s̃_2} = arg min ‖Φ_1^T s_1‖_0 + ‖Φ_2^T s_2‖_0 subject to ‖y − s_1 − s_2‖_2 ≤ σ   (1)

where σ is the noise standard deviation in the noisy case. For now, we assume that no noise perturbs the data (i.e., an equality constraint); the extension of our method to deal with noise is discussed in Section II-D. The MCA algorithm given below relies on an iterative alternating matched filtering and thresholding scheme.
1) Set the number of iterations N_max and the initial thresholds λ_1^(0) and λ_2^(0).
2) While the threshold is higher than a given lower bound λ_min (e.g., λ_min can depend on the noise variance), proceed with the following iteration to estimate the components at iteration t. For k = 1, 2:

Compute the residual term r_k^(t) = y − s̃_{k'}^(t), assuming the current estimate s̃_{k'}^(t) of the other component is fixed. Estimate the current coefficients of s_k by hard thresholding the analysis coefficients Φ_k^T r_k^(t) with threshold λ_k^(t). Get the new estimate s̃_k^(t+1) of s_k by reconstructing from the selected coefficients. Then decrease the thresholds following a given strategy.

At the t-th iteration, we thus have two estimates s̃_1^(t), s̃_2^(t) of s_1 and s_2, each obtained by an operator which consists in decomposing over the transform Φ_k (i.e., matched filtering), thresholding the obtained coefficients with the threshold λ_k^(t), and reconstructing from the result. The thresholding operator can be either a hard or a soft thresholding; in practice, hard thresholding generally leads to better results [1], [2]. The threshold decreases linearly towards zero:

λ^(t) = λ^(0) (1 − t/N_max)   (2)

where λ^(0) is the first threshold and N_max is the number of iterations; both are parameters of the method. The first threshold λ^(0) can be set automatically to a large enough value (for instance, the maximum in magnitude of all the coefficients Φ_k^T y). For an exact representation of the data with the morphological components, λ_min must be set to zero. When noise is contained in the data, as discussed in Section II-D, λ_min should be set to a few times the noise standard deviation. Intuitively, MCA distinguishes between the morphological components by taking advantage of the mutual incoherence of the subdictionaries Φ_1 and Φ_2. Quantitatively, the mutual coherence of a dictionary Φ is defined, assuming that its columns are normalized to unit ℓ2-norm, in terms of the Gram matrix G = Φ^T Φ:

μ_Φ = max_{i ≠ j} |G_{i,j}|   (3)

The mutual incoherence of the dictionaries is a key assumption for the MCA algorithm; it states that the image we wish to decompose contains features with different morphologies which are sparse in different representations. MCA will provide a good separation of the components, using this morphological diversity concept, when the transforms amalgamated in the dictionary are mutually incoherent enough.
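The alternating matched-filtering/thresholding loop described above can be sketched in a few lines of NumPy. This is a minimal illustration only, assuming two orthonormal bases whose columns are the atoms (so analysis is Φ^T x and synthesis is Φ α) and the linear threshold decrease of (2); the function names are ours, not from any toolbox.

```python
import numpy as np

def hard_threshold(a, lam):
    """Keep coefficients whose magnitude exceeds lam; zero the rest."""
    return a * (np.abs(a) > lam)

def mca(y, phi1, phi2, n_iter=100, lam_min=0.0):
    """Minimal MCA sketch for two orthonormal bases phi1, phi2.

    Analysis of x in basis phi is phi.T @ x; synthesis of coefficients
    alpha is phi @ alpha. Returns the two morphological components.
    """
    s1 = np.zeros_like(y)
    s2 = np.zeros_like(y)
    # first threshold: largest coefficient magnitude over both bases
    lam0 = max(np.abs(phi1.T @ y).max(), np.abs(phi2.T @ y).max())
    for t in range(n_iter):
        lam = max(lam0 * (1.0 - t / n_iter), lam_min)  # linear decrease
        # update component 1, holding component 2 fixed
        s1 = phi1 @ hard_threshold(phi1.T @ (y - s2), lam)
        # update component 2, holding component 1 fixed
        s2 = phi2 @ hard_threshold(phi2.T @ (y - s1), lam)
    return s1, s2
```

With hard thresholding and λ_min = 0, the final iterations drive the total residual y − s̃_1 − s̃_2 towards zero, as required for an exact representation of the data.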
Although the simplicity of this linear strategy is an advantage, there is no way to estimate the minimum number of iterations yielding a successful separation. Too small a number of iterations leads to a bad separation, while a large number of iterations is computationally expensive. Experiments have clearly shown that the optimal number of iterations depends on the data (typically hundreds). Therefore, a good thresholding strategy must provide a fast decomposition with the least number of iterations.

B. Mean-of-Max: The Intuition

As we stated before, the choice of the threshold management strategy is vital to good performance of the MCA separation mechanism. A good thresholding strategy is one that meets three main requirements: i) applicability to the data for a wide range of dictionary pairs; ii) low global computational cost; and iii) no need to prespecify a number of iterations. Thus, in this section, we discuss a new threshold management method aiming at satisfying these three requirements. We assume now that the entries of α_k = Φ_k^T s_k are identically and independently distributed (iid) random variables. We further assume a sparse distribution, so that most of the samples of a given α_k are zeros. We can, thus, divide this vector into two subsets: the first one contains the nonzero coefficients and is called the support of α_k; the second one is complementary to it. During the MCA process, at the t-th iteration in the transform Φ_k, MCA selects coefficients from the following vector:

α̃_k^(t) = Φ_k^T (y − s̃_{k'}^(t)) = α_k + ε_k^(t)   (4)

which is a perturbed version of α_k with an error term ε_k^(t). This term is due to i) the imperfect estimation of s_{k'} (i.e., some of the coefficients in α_{k'} have not yet been recovered), and ii) the mutual coherence of the transforms Φ_1 and Φ_2. We define δ_k^(t) as the restriction of α_k to the coefficients not yet selected at iteration t; this quantity is the estimation error of the k-th morphological component in the coefficient domain. Without loss of generality, let us assume that at iteration t

‖δ_1^(t)‖_∞ ≥ ‖δ_2^(t)‖_∞.   (5)
This equation states that the most significant (in terms of ℓ∞-norm) coefficient that has not been selected yet belongs to the support of α_1. We then discuss the way we ought to tune λ^(t+1) so as to estimate s_1 at iteration t + 1; the converse case would yield the same discussion. Concretely, we would like to update λ while avoiding false detections and, thus, fix the threshold in order to select new coefficients in the support of α_1. These new coefficients are guaranteed to be in the support of α_1 with overwhelming probability if their amplitudes are higher

than the highest (in absolute value) amplitude of the entries of the nuisance term ε_1^(t).   (6)

Note that, for extremely incoherent dictionaries, the error terms ε_k^(t) are negligible. Furthermore, according to the incoherence of the subdictionaries Φ_1 and Φ_2, δ_1^(t) is sparser than ε_1^(t). This entails that, with high probability, λ^(t+1) can be chosen above the nuisance level while remaining below the largest remaining support coefficient (7). Basically, (7) says that, to qualify, the chosen threshold must avoid false detections; the upper bound in (7) ensures that at least one coefficient in the support of α_1 will be recovered at iteration t + 1. Nonetheless, these two bounds depend on the true coefficients and cannot be computed in practice. Instead, we intuitively guess that knowing the maximum values of the total residual in the Φ_1 and Φ_2 domains will give us some information about the relative values of the useful parts and the perturbation terms. Through the iterations of MCA, it is easy to estimate online the following quantities:

m_k^(t) = ‖Φ_k^T r^(t)‖_∞, k = 1, 2   (8)

where r^(t) = y − s̃_1^(t) − s̃_2^(t) is the total residual. The top (respectively, bottom) picture in Fig. 1 depicts an intuitive view of m_1^(t) (respectively, m_2^(t)).

Fig. 1. Rationale governing the MCA/MOM thresholding strategy.

C. MOM Strategy: Analysis

In this section, we give arguments to support and prove that the following choice

λ^(t+1) = (1/2) (m_1^(t) + m_2^(t))   (9)

is sufficient for the set of inequalities (7) to hold true under some mild, nonrestrictive conditions. This strategy, coined MOM, is illustrated in Fig. 1, as explained in the previous section. A key advantage is that it only requires assessing the maximum absolute values of the total residual transformed in each basis of the dictionary (m_1^(t) and m_2^(t) in Fig. 1). Compared to the initial linear decrease strategy, MOM resorts to an additional use of the implicit transform operators associated with the dictionary, which induces an additional computational cost (low for most popular transforms, for which fast implicit transforms are available). It does not require any other parameters and provides a data-adaptive strategy. We denote by φ_i the i-th column of Φ. Before proving the sufficiency of the bounds in (7), note that straightforward calculations (10)–(16) relate m_1^(t) and m_2^(t) to the nuisance terms and to the coefficients not yet selected, and show that choosing λ^(t+1) as the mean of m_1^(t) and m_2^(t) intuitively fulfills the conditions in (7).
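The MOM rule of (9) is cheap to evaluate online: two analysis transforms of the current total residual and a mean of the two maxima. A NumPy sketch follows, assuming orthonormal bases stored with atoms as columns; the function name is ours.

```python
import numpy as np

def mom_threshold(residual, phi1, phi2):
    """Mean-of-max threshold (9): average of the largest analysis-coefficient
    magnitudes of the total residual in each of the two bases."""
    m1 = np.abs(phi1.T @ residual).max()
    m2 = np.abs(phi2.T @ residual).max()
    return 0.5 * (m1 + m2)
```

For more than two components, the text prescribes averaging the two dominant values of the set of maxima; that variant would sort the m_k and average the top two.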

Hence, defining λ^(t+1) as in (9) is sufficient to guarantee the inequalities in (7) under conditions (17) and (18). A straightforward development of (17) provides condition (19). Since the right-hand side of (19) is bounded below, (17) is always verified for sufficiently sparse remaining coefficients; it is worth noting that a sparsity condition of the form (20) at iteration t then necessarily implies it. Note that this inequality has a flavor of the sparsity criterion ensuring unique maximally sparse solutions of ℓ0-norm underdetermined linear systems in [12]–[15]. In words, the lower bound we arrive at stipulates that, as long as the set of coefficients still to be identified at iteration t in the support of α_1 (resp. α_2) is sparse enough, then, with high probability, we are ensured to avoid false detections (i.e., the selection of coefficients not belonging to the support). That is, the atoms newly selected by the MCA/MOM algorithm at each iteration to enter the active set are guaranteed (with overwhelming probability) to belong to the support of the true solution. Thus, for rather sparse signals, (17) is valid, entailing the lower bound inequality. The same correct term selection property, with a condition similar to (20), has also been shown in a recent paper of Donoho and Tsaig [16] studying the properties of the Homotopy algorithm. Note that the Homotopy method is different from MCA: Homotopy is an exact path-following method that identifies one coefficient of the solution at a time entering or leaving the active set, while MCA/MOM is an adaptive stagewise iterative thresholding that follows the solution path by identifying groups of coefficients entering the active set at each iteration.
As far as the upper bound inequality in (7) is concerned, (18) implies condition (21). Note that although this condition is much more restrictive [condition (18) is not always verified], it is also less important, as it only guarantees that MCA picks at least one new coefficient at each iteration. Let us add that, in the case where the number of morphological components K > 2, the threshold in (9) is computed as the mean of the two dominant terms of the set {m_k^(t)}.

D. Handling Additive Gaussian Noise

From a probabilistic point of view, the ℓ2-norm constraint in (1) is equivalent to the anti-log-likelihood assuming the data are perturbed by additive white Gaussian noise (AWGN). Hence, MCA can intrinsically handle data perturbed by additive Gaussian noise. Furthermore, MCA being a thresholding-based algorithm, it is very robust to noise, since thresholding techniques belong to the best approaches for noise removal [17] in image processing. MCA is an iterative coarse-to-fine procedure (in terms of coefficient amplitude in the dictionary), and AWGN can be handled by simply stopping the iteration when the residual is at the noise level. Assuming that the noise variance is known, the algorithm stops at the iteration where the ℓ2-norm of the residual reaches the level expected for pure noise. Another way consists in using a strategy similar to that of denoising methods: the iteration stops when the threshold becomes smaller than kσ, where k is a constant, typically between 3 and 5. A similar criterion was used in the combined (wavelet/curvelet) filtering method [18]. For non-Gaussian noise, a similar strategy can also be used, but a noise-modeling step in the transform domain must be accomplished in order to derive, for each coefficient of the dictionary, the probability that it is due to noise. The convergence criterion is then recast as a test of the null hypothesis that the remaining coefficients are not significant.
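The two stopping rules above can be made concrete. Below is a sketch, assuming a known noise standard deviation σ; the ℓ2 test compares the residual norm with the expected norm of pure noise, and the kσ test follows the denoising-style rule with k between 3 and 5. Function and variable names are ours.

```python
import numpy as np

def residual_at_noise_level(residual, sigma, k=3.0):
    """Return True when the residual is compatible with pure AWGN:
    either its l2-norm is at the expected noise norm sqrt(N)*sigma,
    or no sample exceeds k standard deviations."""
    n = residual.size
    l2_test = np.linalg.norm(residual) <= np.sqrt(n) * sigma
    max_test = np.abs(residual).max() < k * sigma
    return bool(l2_test or max_test)
```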
The distributions of many noise models in different dictionaries, such as wavelets or curvelets, have been established in the literature [19]. Another line of action for handling some specific noise distributions, such as Poisson or Rayleigh noise, is to use variance-stabilizing transforms [19]–[21], and the problem is brought back to the Gaussian case.

E. Related Work

Donoho et al. [22] have also recently focused on iterative thresholding schemes applied to solving underdetermined sparse linear problems. Interestingly, models similar to theirs can easily be transposed to our MCA setting. In their context, the perturbation term ε^(t) [see (4)] is assumed to be a Gaussian process with iid samples. Estimating its standard deviation at iteration t using a median absolute deviation estimate then provides a straightforward thresholding strategy, setting λ^(t) to a constant multiple of this estimate; its validity stands as long as the Gaussianity of ε^(t) is a valid assumption (i.e., for dictionaries with low mutual coherence μ_Φ).

F. Success Phase Diagram

In order to evaluate the success/failure of MCA, we define a class of signals y such that each y in the class is the linear combination y = Φ_1 α_1 + Φ_2 α_2, where α_1 and α_2 are Bernoulli–Gaussian random vectors whose samples are nonzero with probabilities p_1 and p_2, respectively. Nonzero coefficients are generated from a Gaussian law with mean zero and variance 1. Fig. 2 illustrates

the efficiency of MCA/MOM. It shows the opposite estimation error of the first morphological component extracted with MCA/MOM for different values of p_1 and p_2 between 0.01 and 0.5. Each point has been evaluated from 25 different decompositions of signals. The redundant dictionary used in this experiment is the Spikes/Sines dictionary (in practice, Φ is the concatenation of the identity matrix and the DCT matrix). One observes what can be interpreted as a rather sharp phase transition. For couples (p_1, p_2) below the phase transition, the estimation of the morphological components is successful. The bottom-left bound is the theoretical recovery bound (see [23]). The top-right bound is the MCA empirical bound, defined such that the estimation error in decibels is below a given level. Clearly, the theoretical bound is too pessimistic, and MCA/MOM has a wider successful area.

Fig. 2. Opposite ℓ2 estimation error, in decibels, of the first morphological component extracted with MCA/MOM (−20 log10 ‖y_1 − ỹ_1‖_2). Abscissa: p_1; ordinate: p_2.

Fig. 3. Left: Original image used to compare the (dashed) linear and the (solid) MOM strategies. Middle: Number of recovered coefficients ‖α̃_1‖_0 + ‖α̃_2‖_0 while the algorithm is in progress; abscissa: number of iterations (log scale). Right: Reconstruction error (ℓ2-norm) ‖y − ỹ_1 − ỹ_2‖_2 as a function of the number of iterations.

III. EXPERIMENTAL RESULTS

A. Texture/Natural Part Separation

Here, we compare the decompositions obtained with the linear strategy and with MOM. The redundant dictionary we used is composed of the union of the curvelet transform and the global DCT. The previous results (see Section II) have been proven assuming that Φ_1 and Φ_2 are orthogonal. In [24], the author shows that thresholding is still relevant for sparse minimization over overcomplete representations.
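The signal model behind the phase diagram is easy to reproduce. Below is a sketch of one draw from the Bernoulli–Gaussian class over the Spikes/DCT dictionary, using an orthonormal DCT-II as a stand-in for the DCT matrix mentioned in the text; for this dictionary the mutual coherence (3) is close to √(2/N). All names are ours.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II matrix; column j is the atom of frequency j."""
    k = np.arange(n)[:, None]                      # frequency index
    t = np.arange(n)[None, :]                      # sample index
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)
    return c.T

def bernoulli_gaussian(n, p, rng):
    """Entries nonzero with probability p; nonzero values drawn from N(0, 1)."""
    return rng.binomial(1, p, n) * rng.normal(0.0, 1.0, n)

rng = np.random.default_rng(0)
n = 128
spikes, sines = np.eye(n), dct_basis(n)
phi = np.hstack([spikes, sines])                   # Spikes/DCT dictionary
y = spikes @ bernoulli_gaussian(n, 0.05, rng) \
    + sines @ bernoulli_gaussian(n, 0.05, rng)

# mutual coherence (3): largest off-diagonal Gram entry (columns are unit-norm)
gram = np.abs(phi.T @ phi)
np.fill_diagonal(gram, 0.0)
mu = gram.max()
```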
Furthermore, in [22], the authors developed a thresholding-based sparse decomposition algorithm in overcomplete representations. Dealing with redundant subdictionaries, such as the curvelet representation, therefore remains reliable. In practice, even though the curvelet transform is not orthogonal, the same MOM thresholding strategy is used and achieves astounding results. Fig. 3 shows that MCA/MOM is more adaptive than MCA/Linear. Indeed, it continually selects new atoms (middle plot of Fig. 3), while MCA/Linear selects most of the coefficients in the last iterations. The same phenomenon is observed in the plot on the right, which shows the evolution of the ℓ2-norm of the total residual: with MCA/MOM, it decreases continually down to zero. These two plots shed light on how the MOM strategy adapts to the data. Indeed, the MOM strategy computes a new threshold at each iteration by taking into account, online, the evolution of the total residual term. Conversely, the linear strategy is not concerned with the way the coefficients are distributed.

Fig. 4. Left: Original signal. Middle: First morphological component (curvelet transform) extracted with MCA/MOM. Right: Second morphological component (global DCT) extracted with MCA/MOM.

Fig. 4 shows the result of a texture/cartoon separation experiment. The picture on the left depicts the original image; the first morphological component is shown in the middle, and the second one on the right. The overcomplete dictionary in this case is taken as the union of the curvelet transform and the global DCT. Hence, MCA/MOM extracts in the DCT basis a globally oscillatory part of the original image, and leaves edges and piecewise smooth parts (i.e., the cartoon) in the first morphological component. Visually, MCA/MOM performs well as a feature extraction/separation algorithm. Compared to MCA/Linear, which needs hundreds of iterations to converge, only 25 iterations were required for a similar separation quality.
Note that the total variation (TV) constraint (see [2]) has not been taken into account here. Adding such a constraint would certainly improve the result slightly.

B. Comparing MCA and Basis Pursuit

So far, MCA has been considered as an efficient feature extraction/separation algorithm based on sparse decompositions. In practice, we also observe that MCA gives rather good results as a sparse decomposition algorithm in a union of bases. Several references ([12], [23], and references therein) have shown that algorithms such as BP solve the ℓ1 minimization problem, and even the ℓ0 minimization problem, provided that certain conditions are verified. The results of experiments we carried out to compare BP and MCA/MOM are pictured in Fig. 5. The graphs were computed as an average over 50 decompositions of signals belonging to the Bernoulli–Gaussian model with random p_1 and p_2. The dictionary used in this experiment is the union of an orthogonal 1-D wavelet transform and the global DCT. The top-left and top-right plots in Fig. 5 show sparsity curves for both algorithms: the MCA/MOM curve is below the BP curve. Thus, in terms of ℓ1-norm sparsity, MCA/MOM and BP perform rather similarly.
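The sparsity measures used in this comparison are straightforward to compute from an estimated coefficient vector. A small sketch follows; the tolerance for declaring an entry significant is our own choice, not from the paper.

```python
import numpy as np

def sparsity_measures(alpha, tol=1e-8):
    """Return the l0 measure (number of entries above tol in magnitude)
    and the l1 norm of a coefficient vector."""
    a = np.abs(np.asarray(alpha, dtype=float))
    return int((a > tol).sum()), float(a.sum())
```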

Fig. 5. Top left and top right: sparsity curves of (solid line) MCA/MOM and (dashed line) BP. Bottom left: Nonlinear approximation curve of (solid line) MCA/MOM and (dashed line) BP. Bottom right: Comparison of execution times (in seconds): (solid line) MCA/MOM and (dashed line) BP.

Finally, the bottom-left picture features the nonlinear approximation error curve computed from the set of coefficients estimated with both BP and MCA/MOM. According to the success phase diagram in Fig. 2, for most signals belonging to this class, the ℓ0 and ℓ1 sparse problems are not equivalent. Indeed, as it uses a hard thresholding scheme, MCA/MOM is likely to converge to an ℓ0-sparse solution; on the contrary, BP solves the ℓ1 sparse problem. Hence, in terms of ℓ0 sparsity, MCA/MOM achieves a better nonlinear approximation, as illustrated in the bottom-left plot of Fig. 5. The bottom right of Fig. 5 shows how the computational cost increases as the number of samples increases: MCA/MOM is clearly faster than BP. To conclude this section, MCA/MOM is a practical, fast, and efficient way to decompose signals in a union of bases.¹

IV. CONCLUSION

As a feature extraction/separation algorithm, MCA has proved its efficiency. However, previous work did not propose any carefully designed strategy to manage the threshold. This paper first insists on the impact of a particular choice of thresholding scheme. Then, we exhibit a new, advantageous way to tune the thresholds, which we call MOM. The MOM strategy differs from other heuristics in the sense that it is fast, accurate, and adaptive. It also no longer requires hand-tuned parameters, as the MOM strategy is parameter free. Last, but not least, we put forward conditions that guarantee, with overwhelming probability, good performance in the estimation of the morphological components, as they particularly avoid false detections. We also showed that MCA is at least as efficient as BP in achieving sparse decompositions in redundant dictionaries. As it performs drastically faster than BP, MCA/MOM provides a practical alternative to this well-known sparse decomposition algorithm. In the future, it would be interesting to extend MCA/MOM to be able to deal with other types of noise. A multichannel version of MCA, in which MOM could be very useful, has also recently been proposed [5], [6].

¹More experiments with other images and sparse representations can be found at perso.orange.fr/jbobin/mca2.html.

REFERENCES

[1] J.-L. Starck, M. Elad, and D. Donoho, "Redundant multiscale transforms and their application for morphological component analysis," Adv. Imag. Electron. Phys., vol. 132.
[2] J.-L. Starck, M. Elad, and D. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Trans. Image Process., vol. 14.
[3] M. Elad, J.-L. Starck, D. Donoho, and P. Querre, "Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA)," Appl. Comput. Harmon. Anal.
[4] M. J. Fadili and J.-L. Starck, "Sparse representations and Bayesian image inpainting," presented at SPARS.
[5] J. Bobin, Y. Moudden, J.-L. Starck, and M. Elad, "Morphological diversity and source separation," IEEE Signal Process. Lett., vol. 13, no. 7, Jul. 2006.
[6] J. Bobin, Y. Moudden, and J.-L. Starck, "Enhanced source separation by morphological component analysis," presented at ICASSP.
[7] E. Candès, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," SIAM Multiscale Model. Simul., vol. 5, no. 3.
[8] J.-L. Starck, E. J. Candès, and D. L. Donoho, "The curvelet transform for image denoising," IEEE Trans. Image Process., vol. 11, no. 6, Jun.
[9] E. Le Pennec and S. Mallat, "Sparse geometric image representations with bandelets," IEEE Trans. Image Process., vol. 14, no. 4, Apr.
[10] M. N. Do and M. Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Trans. Image Process., vol. 14, no. 12, Dec.
[11] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, no. 1.
[12] D. Donoho and M. Elad, "Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ1 minimization," Proc. Nat. Acad. Sci., vol. 100.
[13] A. Bruckstein and M. Elad, "A generalized uncertainty principle and sparse representation in pairs of bases," IEEE Trans. Inf. Theory, vol. 48, no. 9, Sep.
[14] J. A. Tropp, "Just relax: Convex programming methods for subset selection and sparse approximation," IEEE Trans. Inf. Theory, vol. 52.
[15] J.-J. Fuchs, "On sparse representations in arbitrary redundant bases," IEEE Trans. Inf. Theory, vol. 50.
[16] D. Donoho and Y. Tsaig, "Fast solution of ℓ1-norm minimization problems when the solution may be sparse," preprint, Oct.
[17] J.-L. Starck, F. Murtagh, and A. Bijaoui, Image Processing and Data Analysis: The Multiscale Approach. Cambridge, U.K.: Cambridge Univ. Press.
[18] J.-L. Starck, D. L. Donoho, and E. J. Candès, "Very high quality image restoration by combining wavelets and curvelets," in Proc. SPIE, A. F. Laine, M. A. Unser, and A. Aldroubi, Eds., vol. 4478, Dec., pp. 9–19.
[19] J.-L. Starck and F. Murtagh, Astronomical Image and Data Analysis, 2nd ed. New York: Springer-Verlag.
[20] F. J. Anscombe, "The transformation of Poisson, binomial and negative-binomial data," Biometrika, vol. 35.
[21] B. Zhang, M. J. Fadili, and J.-L. Starck, "Multiscale variance stabilizing transform for multidimensional Poisson count image denoising," presented at the Int. Conf. Image Processing, Toulouse, France, May 14–19.
[22] D. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," IEEE Trans. Inf. Theory, to be published.
[23] R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Trans. Inf. Theory, vol. 49, no. 12, Dec. 2003.
[24] M. Elad, "Why simple shrinkage is still relevant for redundant representations?," IEEE Trans. Inf. Theory, vol. 52, no. 12, Dec. 2006.

Jérôme Bobin graduated from the Ecole Normale Supérieure (ENS) de Cachan, France, in 2005, and received the M.Sc. degree in signal and image processing from ENS Cachan and the Université Paris XI, Orsay, France. He received the Agrégation de Physique and is currently pursuing the Ph.D. degree at the CEA, France. His research interests include statistics, information theory, multiscale methods, and sparse representations in signal and image processing.

Jean-Luc Starck received the Ph.D. degree from the University of Nice-Sophia Antipolis, France, and the Habilitation degree from the University Paris XI, Paris, France. He was a Visitor at the European Southern Observatory (ESO) in 1993, at the Statistics Department, Stanford University, Stanford, CA, in 2000 and 2005, and at the University of California, Los Angeles. He has been a Researcher at the CEA, France. He is Leader of the project Multiresolution at CEA and a core team member of the PLANCK ESA project. He has published more than 200 papers in different areas in scientific journals and conference proceedings. He is also the author of two books, Image Processing and Data Analysis: The Multiscale Approach (Cambridge University Press, 1998) and Astronomical Image and Data Analysis (Springer, 2006, 2nd ed.). His research interests include image processing, statistical methods in astrophysics, and cosmology. He is an expert in multiscale methods such as wavelets and curvelets.

Jalal M. Fadili graduated from the Ecole Nationale Supérieure d'Ingénieurs (ENSI) de Caen, Caen, France, and received the M.Sc. and Ph.D. degrees in signal and image processing from the University of Caen. He was a Research Associate with the University of Cambridge (McDonnell-Pew Fellow), Cambridge, U.K., beginning in 1999, and has been an Associate Professor of signal and image processing at ENSI since September 2001.
He was a Visitor at the Queensland University of Technology, Brisbane, Australia, and at Stanford University, Stanford, CA. His research interests include statistical approaches in signal and image processing, inverse problems in image processing, multiscale methods, and sparse representations in signal and image processing. Areas of application include medical and astronomical imaging.

Yassir Moudden graduated in electrical engineering from SUPELEC, Gif-sur-Yvette, France, and received the M.S. degree in physics from the Université de Paris VII, France, in 1997, and the Ph.D. degree in signal processing from the Université de Paris XI, Orsay, France. He was a visitor at the University of California, Los Angeles, in 2004, and is currently with the CEA, France, working on applications of signal processing to astronomy. His research interests include signal and image processing, data analysis, statistics, and information theory.

David L. Donoho received the A.B. degree (summa cum laude) in statistics from Princeton University, Princeton, NJ, and the Ph.D. degree in statistics from Harvard University, Cambridge, MA. He is a Professor of statistics at Stanford University, Stanford, CA. He was a Professor at the University of California, Berkeley, and a Visiting Professor at the Université de Paris, Paris, France, as well as a Sackler Fellow at Tel Aviv University, Tel Aviv, Israel. His research interests are in harmonic analysis, image representation, and mathematical statistics. Dr. Donoho is a member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences.


More information

Sparse representation classification and positive L1 minimization

Sparse representation classification and positive L1 minimization Sparse representation classification and positive L1 minimization Cencheng Shen Joint Work with Li Chen, Carey E. Priebe Applied Mathematics and Statistics Johns Hopkins University, August 5, 2014 Cencheng

More information

Robust Principal Component Analysis

Robust Principal Component Analysis ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M

More information

Recovery of Sparse Signals Using Multiple Orthogonal Least Squares

Recovery of Sparse Signals Using Multiple Orthogonal Least Squares Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang, Ping i Department of Statistics and Biostatistics arxiv:40.505v [stat.me] 9 Oct 04 Department of Computer Science Rutgers University

More information

Applications of Polyspline Wavelets to Astronomical Image Analysis

Applications of Polyspline Wavelets to Astronomical Image Analysis VIRTUAL OBSERVATORY: Plate Content Digitization, Archive Mining & Image Sequence Processing edited by M. Tsvetkov, V. Golev, F. Murtagh, and R. Molina, Heron Press, Sofia, 25 Applications of Polyspline

More information