VARIATIONAL ALGORITHMS TO REMOVE STRIPES: A GENERALIZATION OF THE NEGATIVE NORM MODELS.


Jérôme Fehrenbach 1, Pierre Weiss 1 and Corinne Lorenzo 2
1 Institut de Mathématiques de Toulouse, Toulouse University, France
2 ITAV, Toulouse canceropole, France
jerome.fehrenbach@math.univ-toulouse.fr, pierre.armand.weiss@gmail.com, corinne.lorenzo@itav-recherche.fr

Keywords: denoising, cartoon+texture decomposition, primal-dual algorithm, stationary noise, fluorescence microscopy.

Abstract: Starting with a book of Y. Meyer in 2001, negative norm models have attracted the attention of the imaging community over the last decade. Despite numerous works, these norms seem to have provided only lukewarm results in practical applications. In this work, we propose a framework and an algorithm to remove stationary noise from images. This algorithm has numerous practical applications, and we demonstrate it on 3D data from a newly developed microscope called the SPIM. We also show that this model generalizes Meyer's model and its successors in the discrete setting, and allows them to be interpreted in a Bayesian framework. This sheds new light on these models and allows one to pick a model according to some a priori knowledge of the texture statistics. Further results are available at http://www.math.univ-toulouse.fr/~weiss/pagepublications.html.

1 INTRODUCTION

The purpose of this article is to provide variational models and algorithms to remove stationary noise from images. By stationary, we mean that the noise is generated by convolving white noise with a given kernel. The noise thus appears structured, in the sense that some pattern might be visible; see Figure 3(b),(c),(d).

This work was primarily motivated by the recent development of a new microscope called the Selective Plane Illumination Microscope (SPIM). The SPIM is a fluorescence microscope that performs optical sectioning of a specimen; see (Huisken et al., 2004) for details. One of the differences with conventional microscopy is that the fluorescence light is detected at an angle of 90 degrees to the illumination axis. This procedure tends to degrade the images with stripes aligned with the illumination axis; see Figure 5(a). This kind of noise is well described by a stationary process. The first contribution of this paper is to provide effective denoising algorithms dedicated to this imaging modality.

Even though this work was primarily designed for the SPIM microscope, our models generalize the negative norm models proposed by Y. Meyer (Meyer, 2001) and the subsequent works (Aujol et al., 2006; Vese and Osher, 2003; Osher et al., 2003; Garnett et al., 2007). In his seminal book, Meyer initiated numerous lines of research in the domain of texture+cartoon decomposition methods; the algorithm presented in this work belongs to this class. Meyer's idea is to decompose an image into a piecewise smooth component and an oscillatory component. Usually the use of these norms, denoted $\|\cdot\|_N$, is motivated by the fact that if $(v_n)$ converges weakly to $0$, then $\|v_n\|_N \to 0$, so the negative norms should capture oscillating patterns well. This nice result is, however, not really informative about which kinds of textures are well captured by negative norms.

The second contribution of this paper is to propose a Bayesian interpretation of these models in the discrete setting. This allows a better understanding of the decomposition models:

- We can associate a probability density function (p.d.f.) with each negative norm. This allows choosing a model depending on some a priori knowledge of the texture.
- We can synthesize textures that are adapted to these negative norms.

The Bayesian interpretation also suggests a new, broader and more versatile class of translation invariant models, used e.g. for SPIM imaging.

Connection to previous works. This work shares flavors with some previous works. In (Aujol et al., 2006) the authors present algorithms and results using similar approaches. However, they do not propose a Bayesian interpretation and consider a narrower class of models. An alternative way of decomposing images was proposed in (Starck et al., 2005). The idea is to seek components that are sparse in given dictionaries. Different choices for the elementary atoms composing the dictionary allow different kinds of textures to be recovered. See (Fadili et al., 2010) for a review of these methods and a generalization to decompositions into an arbitrary number of components. The main novelties of the present work are:

1. We do not only consider sparse components, but allow for a more general class of random processes.
2. Similarly to (Fadili et al., 2010), the texture is described through a dictionary. In this work each dictionary is composed of a single pattern shifted in space, ensuring translation invariance.
3. A Bayesian approach is provided to take the statistical nature of textures into account more precisely.
4. The decomposition problem is recast as a convex optimization problem that is solved with a recent algorithm (Chambolle and Pock, 2011), allowing results to be obtained in an interactive time.
5. Codes are provided on http://www.math.univ-toulouse.fr/~weiss/pagecodes.html.

Notation: Let $u$ be a gray-scale image. It is composed of $n = n_x n_y$ pixels, and $u(x)$ denotes the intensity at pixel $x$. The convolution product between $u$ and $v$ is denoted $u \star v$. The discrete gradient operator is denoted $\nabla$. Let $\varphi : \mathbb{R}^n \to \mathbb{R}$ be a convex closed function (see (Rockafellar, 1970)); $\partial\varphi$ denotes its sub-differential. The Fenchel conjugate of $\varphi$ is denoted $\varphi^*$, and its resolvent is defined by

$$(\mathrm{Id} + \partial\varphi)^{-1}(u) = \operatorname*{arg\,min}_{v \in \mathbb{R}^n} \varphi(v) + \frac{1}{2}\|v - u\|_2^2.$$

2 NOISE MODEL

One way of formulating our objective is the following: we want to recover an original image $u$, given an observed image $u_0 = u + b$, where $b$ is a sample of some random process. The most standard denoising techniques explicitly or implicitly assume that the noise is the realization of a random process that is pixelwise independent and identically distributed (i.e. a white noise). Under this assumption, the maximum a posteriori (MAP) approach leads to optimization problems of the kind:

$$\text{Find } u \in \operatorname*{Arg\,min}_{u \in \mathbb{R}^n} J(u) + \sum_x \varphi(u(x) - u_0(x)),$$

where:

1. $\exp(-\varphi)$ is proportional to the probability density function of the noise at each pixel,
2. $J(u)$ is an image prior.

The assumption that the noise is i.i.d. is too restrictive in some situations, and in particular is not adapted to structured noise (see Figures 2 and 3). The general noise model considered in this work is the following:

$$b = \sum_{i=1}^{m} \lambda_i \star \psi_i, \qquad (1)$$

where $\{\psi_i\}_{i=1}^{m}$ are filters that describe the patterns of the noise, and $\{\lambda_i\}_{i=1}^{m}$ are samples of white noise processes $\{\Lambda_i\}_{i=1}^{m}$. Each process $\Lambda_i$ is a set of $n$ i.i.d. random variables with a probability density function proportional to $\exp(-\varphi_i)$. In short, the convolutions appearing in the right-hand side of (1) state that the noise $b$ is composed of a certain number of patterns $\psi_1, \dots, \psi_m$ that are replicated in space. The noise $b$ in (1) is a wide-sense stationary noise (Shiryaev, 1996). Examples of noises that can be generated using this model are shown in Figure 3:

1. Example (b) is a Gaussian white noise. It is the convolution of a Gaussian white noise with a Dirac.
2. Example (c) is a sine function in the x direction. It is a sample of a uniform white noise in $[-1, 1]$ convolved with the filter that is constant equal to $1/n_y$ in the first column and zero otherwise.
3. Example (d) is composed of a single pattern located at random places. It is the convolution of a sample of a Bernoulli process with the elementary pattern.
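To make the noise model concrete, the following sketch generates the three example noises of Figure 3 from Equation (1), using circular convolutions computed with the FFT. The sizes, amplitudes, and the 5x5 square standing in for the elementary pattern of example (d) are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_y, n_x = 256, 256

def stationary_noise(lam, psi):
    # Eq. (1), one component: circular convolution of a white-noise
    # sample lam with a noise pattern psi, computed in the Fourier domain.
    return np.real(np.fft.ifft2(np.fft.fft2(lam) * np.fft.fft2(psi, s=(n_y, n_x))))

# (b) Gaussian white noise: a Gaussian white noise convolved with a Dirac.
psi_dirac = np.zeros((n_y, n_x)); psi_dirac[0, 0] = 1.0
b_white = stationary_noise(0.2 * rng.standard_normal((n_y, n_x)), psi_dirac)

# (c) stripe noise: uniform white noise in [-1, 1] convolved with a filter
# equal to 1/n_y on the first column and zero otherwise.
psi_col = np.zeros((n_y, n_x)); psi_col[:, 0] = 1.0 / n_y
b_stripes = stationary_noise(rng.uniform(-1.0, 1.0, (n_y, n_x)), psi_col)

# (d) pattern noise: a Bernoulli(5e-4) sample convolved with an elementary
# pattern (here a 5x5 square; the paper uses a small "fish" pattern).
psi_pat = np.zeros((n_y, n_x)); psi_pat[:5, :5] = 1.0
b_pattern = stationary_noise((rng.random((n_y, n_x)) < 5e-4).astype(float), psi_pat)
```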
3 RESTORATION ALGORITHM

The Bayesian approach requires a p.d.f. on the space of images. We assume that the probability of an image $u$ reads $p(u) \propto \exp(-J(u))$. In this work we consider priors of the form

$$J(u) = F(\nabla u), \quad \text{with } F(q) = \alpha \|q\|_{1,\varepsilon} = \alpha \sum_x \psi_\varepsilon\!\left(\sqrt{q_1(x)^2 + q_2(x)^2}\right),$$

where $q = (q_1, q_2) \in \mathbb{R}^{n \times 2}$ and

$$\psi_\varepsilon(t) = \begin{cases} |t| & \text{if } |t| \geq \varepsilon, \\ t^2/(2\varepsilon) + \varepsilon/2 & \text{otherwise.} \end{cases}$$
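In code, this prior is a Huber-type function of the gradient magnitude. A minimal sketch, assuming forward differences with a replicated boundary (one common discretization, not necessarily the paper's exact choice):

```python
import numpy as np

def huber(t, eps):
    # psi_eps: |t| for |t| >= eps, quadratic t^2/(2*eps) + eps/2 below eps
    return np.where(np.abs(t) >= eps, np.abs(t), t**2 / (2 * eps) + eps / 2)

def prior_J(u, alpha=1.0, eps=1e-2):
    # J(u) = F(grad u) = alpha * sum_x psi_eps(|grad u(x)|)
    gx = np.diff(u, axis=1, append=u[:, -1:])   # horizontal forward difference
    gy = np.diff(u, axis=0, append=u[-1:, :])   # vertical forward difference
    return alpha * huber(np.sqrt(gx**2 + gy**2), eps).sum()
```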

Note that $\lim_{\varepsilon \to 0} \|\nabla u\|_{1,\varepsilon} = TV(u)$, the discrete total variation of $u$, and that for large $\varepsilon$, $\|\nabla u\|_{1,\varepsilon} = \frac{1}{2\varepsilon}\|\nabla u\|_2^2$ up to an additive constant. This model thus includes the TV and $H^1$ regularizations as limit cases.

The maximum a posteriori approach in a Bayesian framework leads to retrieving the image $u$ and the weights $\{\lambda_i\}_{i=1}^{m}$ that maximize the conditional probability

$$p(u, \lambda_1, \dots, \lambda_m \mid u_0) = \frac{p(u_0 \mid u, \lambda_1, \dots, \lambda_m)\, p(u, \lambda_1, \dots, \lambda_m)}{p(u_0)}.$$

Assuming that the image $u$ and the noise components $\lambda_i$ are samples of independent processes, standard arguments show that maximizing $p(u, \lambda_1, \dots, \lambda_m \mid u_0)$ amounts to solving the following minimization problem:

$$\text{Find } \{\lambda_i\}_{i=1}^{m} \in \operatorname*{Arg\,min}_{\{\lambda_i\}_{i=1}^{m}} \sum_{i=1}^{m} \varphi_i(\lambda_i) + F\!\left(\nabla\Big(u_0 - \sum_{i=1}^{m} \lambda_i \star \psi_i\Big)\right). \qquad (2)$$

The denoised image is then $u = u_0 - \sum_{i=1}^{m} \lambda_i \star \psi_i$.

We propose to solve problem (2) with a primal-dual algorithm developed in (Chambolle and Pock, 2011). Let $A$ be the linear operator

$$A : (\mathbb{R}^n)^m \to \mathbb{R}^{n \times 2}, \quad \lambda \mapsto \nabla\Big(\sum_{i=1}^{m} \lambda_i \star \psi_i\Big).$$

By denoting

$$G(\lambda) = \sum_{i=1}^{m} \varphi_i(\lambda_i), \qquad (3)$$

problem (2) can be recast as the following convex-concave saddle-point problem:

$$\min_{\lambda \in (\mathbb{R}^n)^m} \max_{q \in \mathbb{R}^{n \times 2}} \langle A\lambda, q \rangle - F^*(q) + G(\lambda). \qquad (4)$$

We denote by $\Delta(\lambda, q)$ the duality gap of this problem (Rockafellar, 1970). The problem is solved using the following algorithm (Chambolle and Pock, 2011), where $\sigma, \tau > 0$ are step sizes and $\theta$ is the over-relaxation parameter of the method:

Algorithm 1: Primal-dual algorithm.
  Input: ε: the desired precision; (λ^0, q^0): a starting point.
  Output: λ_ε: an approximate solution of problem (4).
  begin
    n = 0;
    while Δ(λ^n, q^n) > ε Δ(λ^0, q^0) do
      q^{n+1} = (Id + σ ∂F*)^{-1}(q^n + σ A λ̄^n);
      λ^{n+1} = (Id + τ ∂G)^{-1}(λ^n − τ A* q^{n+1});
      λ̄^{n+1} = λ^{n+1} + θ (λ^{n+1} − λ^n);
      n = n + 1;
    end
  end

In practice, for a correct choice of inner products and of the parameters σ and τ, this algorithm requires around 50 low-cost iterations to reach ε = 10^{-3}. More details will be provided in a forthcoming research report.

4 BAYESIAN INTERPRETATION OF THE DISCRETIZED NEGATIVE NORM MODELS

In the last decade, texture+cartoon decomposition models based on negative norms attracted the attention of the scientific community. These models often take the following form:

$$\inf_{u \in BV(\Omega),\, v \in V,\, u + v = u_0} TV(u) + \|v\|_N, \qquad (5)$$

where:
- $u_0$ is an image to decompose as the sum of a texture $v$ in $V$ and a structure $u$ in $BV$,
- $V$ is a Sobolev space of negative index, and $\|\cdot\|_N$ is an associated semi-norm.

Y. Meyer's seminal model consists in taking $V = W^{-1,\infty}$ and

$$\|v\|_N = \|v\|_{-1,\infty} = \inf_{g \in L^\infty(\Omega)^2,\, \mathrm{div}(g) = v} \|g\|_\infty.$$

In the discrete setting the previous model can be rewritten as:

$$\text{Find } (u, g) \in \operatorname*{arg\,min} \|g\|_p + \alpha \|\nabla u\|_1 \qquad (6)$$
$$\text{subject to } u_0 = u + v, \quad v = \nabla^T g,$$

where $u_0$, $u$ and $v$ are in $\mathbb{R}^n$, $g = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \in \mathbb{R}^{n \times 2}$ and $\nabla^T g = \nabla_1^T g_1 + \nabla_2^T g_2$. If $p = \infty$, we get the discrete Meyer model. From an experimental point of view, the choices $p = 2$ and $p = 1$ seem to provide better practical results (Vese and Osher, 2003).

In order to show the equivalence of these models with the ones proposed in Equation (2), we express the differential operators as convolution products. As the discrete derivative operators are usually translation invariant, this reads:

$$\nabla^T g = h_1 \star g_1 + h_2 \star g_2,$$

where $\star$ denotes the convolution product and $h_1$ and $h_2$ are derivative filters (typically $h_1 = (1, -1)$ and $h_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$).

This simple remark leads to an interesting interpretation of $g$: it represents the coefficients of an image $v$ in a dictionary composed of the vectors $h_1$ and $h_2$ translated in space.
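As a small illustration of this remark, the sketch below synthesizes $v = h_1 \star g_1 + h_2 \star g_2$ from its dictionary coefficients $(g_1, g_2)$; the circular boundary condition is an assumption made here for simplicity.

```python
import numpy as np
from scipy.signal import convolve2d

h1 = np.array([[1.0, -1.0]])      # horizontal derivative filter (1, -1)
h2 = np.array([[1.0], [-1.0]])    # vertical derivative filter (1, -1)^T

def synthesize(g1, g2):
    # v = h1 * g1 + h2 * g2: each coefficient of g1 (resp. g2) places a
    # shifted copy of h1 (resp. h2), so g indexes a translated dictionary.
    return (convolve2d(g1, h1, mode="same", boundary="wrap")
            + convolve2d(g2, h2, mode="same", boundary="wrap"))
```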

The negative norm models can thus be interpreted as decomposition models over a very simple texture dictionary.

Next, let us show that problem (5) can be interpreted in a MAP formalism. We first define a probability density function:

Definition 1 (Negative norm p.d.f.). Let $\Gamma$ be a random vector in $\mathbb{R}^n$ and $\Theta$ be a random vector in $[0, 2\pi]^n$. Assume that $p(\gamma) \propto \exp(-\|\gamma\|_p)$ and that $\Theta$ has a uniform distribution. These two random vectors allow us to define a third one:

$$G = \begin{pmatrix} \Gamma \cos(\Theta) \\ \Gamma \sin(\Theta) \end{pmatrix}.$$

Now let us show that problem (6) actually corresponds to a MAP decomposition. Assume that $u_0 = u + v$ with $u$ and $v$ realizations of independent random vectors such that $p(u) \propto \exp(-\alpha\|\nabla u\|_1)$ and $v = \nabla^T g$ with $g$ a realization of $G$. Then the classical Bayes reasoning leads to the following equations:

$$\begin{aligned}
\operatorname*{arg\,max}_{u \in \mathbb{R}^n,\, v \in \mathbb{R}^n} p(u, v \mid u_0)
&= \operatorname*{arg\,max}_{u \in \mathbb{R}^n,\, v \in \mathbb{R}^n} \frac{p(u_0 \mid u, v)\, p(u, v)}{p(u_0)} \\
&= \operatorname*{arg\,max}_{u + v = u_0,\, u \in \mathbb{R}^n,\, v \in \mathbb{R}^n} \frac{p(u, v)}{p(u_0)} \\
&= \operatorname*{arg\,min}_{u + v = u_0,\, u \in \mathbb{R}^n,\, v \in \mathbb{R}^n} -\log(p(v)) - \log(p(u)) \\
&= \operatorname*{arg\,min}_{u + v = u_0,\, u \in \mathbb{R}^n,\, v \in \mathbb{R}^n} -\log(p(v)) + \alpha\|\nabla u\|_1 \\
&= \operatorname*{arg\,min}_{u + v = u_0,\, u \in \mathbb{R}^n,\, v = \nabla^T g} \|g\|_p + \alpha\|\nabla u\|_1,
\end{aligned}$$

which is exactly problem (6). Also note that the model above is equivalent to a slight variant of the model defined in Equation (2) in the case $m = 2$:

$$\begin{aligned}
\operatorname*{Arg\,min}_{u + v = u_0,\, u \in \mathbb{R}^n,\, v = \nabla^T g} \|g\|_p + \alpha\|\nabla u\|_1
&= \operatorname*{Arg\,min}_{g = (g_1, g_2) \in \mathbb{R}^{2n}} \|g\|_p + \alpha\|\nabla(u_0 - \nabla^T g)\|_1 \\
&= \operatorname*{Arg\,min}_{(g_1, g_2) \in \mathbb{R}^{2n}} G(g_1, g_2) + F(\nabla(u_0 - h_1 \star g_1 - h_2 \star g_2)),
\end{aligned}$$

where:
- $G(g_1, g_2) = \left(\sum_x (g_1(x)^2 + g_2(x)^2)^{p/2}\right)^{1/p}$ is a mixed-norm variant of the function $G$ defined in Equation (3) (Kowalski, 2009),
- $F(q) = \alpha\|q\|_1$,
- the filters $h_1$ and $h_2$ are the discrete derivative filters defined above.

The same reasoning holds for most negative norm models proposed lately (Meyer, 2001; Aujol et al., 2006; Vese and Osher, 2003; Osher et al., 2003; Garnett et al., 2007), and problem (2) actually generalizes all these models. To our knowledge, the Chambolle-Pock implementation (Chambolle and Pock, 2011) proposed here and the ADMM method (Ng et al., 2010) (for strongly monotone problems) are the most efficient numerical approaches.

5 NEGATIVE NORM TEXTURE SYNTHESIS

The MAP approach to negative norm models described above also sheds new light on the kind of textures favored by the negative norms. In order to synthesize a texture with the negative norm p.d.f. of Definition 1, it suffices to run the following algorithm (a code sketch follows):

1. Generate a sample $\theta$ of a uniform random vector in $[0, 2\pi]^n$.
2. Generate a sample of a random vector $\gamma$ with p.d.f. proportional to $\exp(-\|\gamma\|_p)$.
3. Form the two vectors $g_1 = \gamma\cos(\theta)$ and $g_2 = \gamma\sin(\theta)$.
4. Generate the texture $v = \nabla^T \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}$.

The results of this simple algorithm are presented in Figure 1.
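A sketch of this synthesis for $p = 2$: in this case the density $\exp(-\|\gamma\|_2)$ is radial, so it can be sampled exactly by drawing a uniform direction and a Gamma$(n, 1)$ radius. The circular boundary handling of $\nabla^T$ and all sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gamma_p2(n):
    # p = 2: exp(-||gamma||_2) is radial, so gamma = (uniform direction)
    # times a radius with density r^(n-1) e^(-r), i.e. a Gamma(n, 1) draw.
    d = rng.standard_normal(n)
    return rng.gamma(shape=n, scale=1.0) * d / np.linalg.norm(d)

def synthesize_texture(n_y, n_x):
    n = n_y * n_x
    theta = rng.uniform(0.0, 2.0 * np.pi, n)         # step 1: uniform phases
    gamma = sample_gamma_p2(n)                       # step 2: gamma ~ exp(-||.||_2)
    g1 = (gamma * np.cos(theta)).reshape(n_y, n_x)   # step 3: g1 = gamma cos(theta)
    g2 = (gamma * np.sin(theta)).reshape(n_y, n_x)   #         g2 = gamma sin(theta)
    # step 4: v = grad^T g with h1 = (1, -1) and h2 = (1, -1)^T,
    # computed with circular boundary conditions.
    return (g1 - np.roll(g1, 1, axis=1)) + (g2 - np.roll(g2, 1, axis=0))

v = synthesize_texture(128, 128)
```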

Figure 1: Left: standard noises. Right: different textures synthesized with the negative norm p.d.f., for p = 2, p = 1 and p = ∞. Note: we synthesize the Laplace noise by approximating it with a Bernoulli process.

Figure 2: Synthetic image used for the toy example.

Figure 3: Toy example. Left column: real components; right column: estimated components using our algorithm. (a,e): cartoon component - (b,f): Gaussian noise, std 0.2 - (c,g): stripes component (sine) - (d,h): Poisson noise component (poisson means fish in French).

6.2 Real SPIM image

Algorithm 1 was applied to a zebrafish embryo image obtained with the SPIM microscope. Two filters ψ_1 and ψ_2 were used to denoise this image. The first filter ψ_1 is a Dirac (which allows the recovery of the Gaussian white noise), and the second filter ψ_2 is an anisotropic Gabor filter whose principal axis is directed along the stripes (this orientation was provided by the user). The filter ψ_2 is shown in Figure 4 (an illustrative construction is sketched after the comparison below). The original image is presented in Figure 5(a), and the result of Algorithm 1 is presented in Figure 5(e). We also present a comparison with two other algorithms in Figures 5(d,b):

- a standard TV-L2 denoising algorithm, which is unable to remove the stripes since its prior is not adapted to the noise;
- an H^1-Gabor algorithm, which consists in setting $F(\cdot) = \frac{1}{2}\|\cdot\|_2^2$ in Equation (2). The image prior then promotes smooth solutions and produces blurry results.
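For concreteness, an anisotropic Gabor filter of the kind used for ψ_2 could be built as below. All parameters (support size, orientation, wavelength, envelope widths) are illustrative assumptions; the paper does not state its exact values.

```python
import numpy as np

def anisotropic_gabor(size=31, angle_deg=90.0, wavelength=8.0,
                      sigma_along=12.0, sigma_across=2.0):
    # An oriented Gaussian envelope, elongated along the stripe axis,
    # modulated by a cosine across the stripes.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    a = np.deg2rad(angle_deg)
    u = x * np.cos(a) + y * np.sin(a)    # coordinate along the stripes
    w = -x * np.sin(a) + y * np.cos(a)   # coordinate across the stripes
    env = np.exp(-(u**2 / (2 * sigma_along**2) + w**2 / (2 * sigma_across**2)))
    psi = env * np.cos(2.0 * np.pi * w / wavelength)
    return psi / np.abs(psi).sum()       # one possible normalization
```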

Figure 4: A detailed view of the filter ψ_2.

Figure 5: Top-Left: original image (zebrafish embryo Tg.SMYH1:GFP, slow myosin chain I specific fibers) - Top-Right: TV-L2 denoising - Mid-Left: H^1-Gabor restoration - Mid-Right: TV-Gabor restoration - Bottom-Left: stripes identified by our algorithm - Bottom-Right: white noise.

ACKNOWLEDGEMENTS

The authors wish to thank Julie Batut for providing the images of the zebrafish. They also thank Valérie Lobjois, Bernard Ducommun, Raphael Jorand and François De Vieilleville for their support of this work.

REFERENCES

Aujol, J.-F., Gilboa, G., Chan, T., and Osher, S. (2006). Structure-texture image decomposition - modeling, algorithms, and parameter selection. Int. J. Comput. Vision, 67(1):111-136.

Chambolle, A. and Pock, T. (2011). A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis., 40(1):120-145.

Fadili, M., Starck, J.-L., Bobin, J., and Moudden, Y. (2010). Image decomposition and separation using sparse representations: an overview. Proc. of the IEEE, Special Issue: Applications of Sparse Representation, 98(6):983-994.

Garnett, J., Le, T., Meyer, Y., and Vese, L. (2007). Image decompositions using bounded variation and generalized homogeneous Besov spaces. Appl. Comput. Harmon. Anal., 23:25-56.

Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J., and Stelzer, E. (2004). Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science, 305(5686):1007.

Kowalski, M. (2009). Sparse regression using mixed norms. Appl. Comput. Harmon. Anal., 27(3):303-324.

Meyer, Y. (2001). Oscillating patterns in image processing and in some nonlinear evolution equations. 15th Dean Jacqueline B. Lewis Memorial Lectures. AMS.

Ng, M., Weiss, P., and Yuan, X.-M. (2010). Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM Journal on Scientific Computing, 32.

Osher, S., Sole, A., and Vese, L. (2003). Image decomposition and restoration using total variation minimization and the H^{-1} norm. SIAM Multiscale Model. Simul., 1(3):339-370.

Rockafellar, T. (1970). Convex Analysis. Princeton University Press.

Shiryaev, A. (1996). Probability. Graduate Texts in Mathematics 95. Springer.

Starck, J., Elad, M., and Donoho, D. (2005). Image decomposition via the combination of sparse representations and a variational approach. IEEE Trans. Image Proc., 14(10).

Vese, L. and Osher, S. (2003). Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput., 19(1-3):553-572.