A CONVEX VARIATIONAL MODEL FOR RESTORING BLURRED IMAGES WITH MULTIPLICATIVE NOISE

YIQIU DONG AND TIEYONG ZENG

Abstract. In this paper, a new variational model for restoring blurred images with multiplicative noise is proposed. Based on the statistical properties of the noise, a quadratic penalty technique is utilized in order to obtain a strictly convex model under a mild condition, which guarantees the uniqueness of the solution and the stability of the algorithm. For solving the new convex variational model, a primal-dual algorithm is proposed and its convergence is studied. The paper ends with a report on numerical tests for the simultaneous deblurring and denoising of images subject to multiplicative noise. A comparison with other methods is provided as well.

Key words. Convexity, deblurring, multiplicative noise, primal-dual algorithm, total variation regularization, variational model.

AMS subject classifications. 5A4, 5K0, 5K5, 0C30, 0C47.

1. Introduction. In real applications, degradation effects are unavoidable during image acquisition and transmission. For instance, photos produced by astronomical telescopes are often blurred by atmospheric turbulence. Since restoration benefits subsequent image-processing tasks, image deblurring and denoising continue to attract attention in the applied mathematics community. Depending on the imaging system, various kinds of noise have been considered, such as additive Gaussian noise, impulse noise, and Poisson noise. We refer the reader to [7, 35, 40, 44, 45] and the references therein for an overview of these noise models and the corresponding restoration methods. Multiplicative noise, however, is a different noise model, and it commonly appears in synthetic aperture radar (SAR), ultrasound imaging, laser imaging, and so on [8, 3, 43]. For a mathematical description of such degradations, suppose that an image û is a real function defined on Ω, a connected bounded open subset of ℝ² with compact Lipschitz boundary, i.e., û : Ω → ℝ.
(Affiliations: Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark (yido@dtu.dk); Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (zeng@hkbu.edu.hk).)

The degraded image f is given by

    f = (Aû)η,    (1.1)

where A ∈ L(L²(Ω)) is a known linear and continuous blurring operator, and η represents multiplicative noise with mean 1. Here, f is obtained from û by first applying the blurring operator A and then corrupting the result with the multiplicative noise η. Usually we assume that f > 0. In this paper, we concentrate on the assumption that η follows a Gamma distribution, which commonly occurs in SAR. Deblurring in the presence of noise is well known to be an ill-posed problem in the sense of Hadamard [30]. Since the degraded image provides only partial restrictions on the original data, many different solutions can match the observed degraded image under the given blurring operator. Hence, in order to utilize variational methods, the main challenge in image restoration is to design a reasonable and easily solved optimization problem based on the degradation model and on prior information about the original image. Over the past decade, a few variational methods have been proposed to handle the restoration problem with multiplicative noise. Given the statistical properties
of the multiplicative noise η, in [3] the recovery of the image û was based on solving the following constrained optimization problem:

    inf_{u ∈ S(Ω)} ∫_Ω |Du|  subject to  ∫_Ω f/(Au) dx = 1  and  ∫_Ω ( f/(Au) − 1 )² dx = θ,    (1.2)

where θ denotes the variance of η, S(Ω) = {v ∈ BV(Ω) : v > 0}, BV(Ω) is the space of functions of bounded variation (see [] or below), and the total variation (TV) of u is used as the objective function in order to preserve significant edges in images. In (1.2), only basic statistical properties of the noise η, namely the mean and the variance, are considered, which somewhat limits the restored results. For this reason, based on Bayes' rule and the Gamma distribution with mean 1, Aubert and Aujol [5] introduced, via a maximum a posteriori (MAP) estimator, the variational model

    inf_{u ∈ S(Ω)} ∫_Ω ( log(Au) + f/(Au) ) dx + λ ∫_Ω |Du|,    (1.3)

where the TV of u serves as the regularization term and λ > 0 is the regularization parameter controlling the trade-off between a good fit to f and the smoothness requirement imposed by the TV regularization. Below we refer to (1.3) as the AA model. Since both (1.2) and (1.3) are non-convex, the gradient projection algorithms proposed in [3] and [5] may get stuck at local minimizers, and the restored results depend strongly on the initialization and the numerical scheme. To overcome this problem and provide a convex model, Huang et al. [3] introduced an auxiliary variable z = log u in (1.3), and Shi et al. [4] modified (1.3) by adding a quadratic term. With convex models, these two methods both provide better restored results than the method proposed in [5], and they are independent of the initial estimate. In addition, Steidl et al. combined the I-divergence data fidelity term with TV regularization or nonlocal means to remove multiplicative Gamma noise [4]. A general patch-based denoising filter was proposed in [0]. In [], the denoising problem was handled by using an L1 fidelity term on frame coefficients.
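As a quick illustration of the degradation model f = (Aû)η with mean-one Gamma noise, the following sketch generates a blurred and speckled observation. It is only a toy setup under stated assumptions: the 5×5 mean filter standing in for A, the image size, and the replicate boundary handling are illustrative choices, not the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_speckle(shape, K, rng):
    # Gamma noise with shape K and scale 1/K, so that E[eta] = 1
    # and Var[eta] = 1/K, as assumed for the model f = (A u) * eta.
    return rng.gamma(shape=K, scale=1.0 / K, size=shape)

def mean_blur(u, r=2):
    # A crude nonnegative blurring operator A: averaging over a
    # (2r+1) x (2r+1) window with replicate boundary handling.
    pad = np.pad(u, r, mode="edge")
    out = np.zeros_like(u, dtype=float)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            out += pad[r + i : r + i + u.shape[0], r + j : r + j + u.shape[1]]
    return out / (2 * r + 1) ** 2

u_true = np.full((64, 64), 2.0)
u_true[16:48, 16:48] = 4.0        # a bright square as the clean image
f = mean_blur(u_true) * gamma_speckle(u_true.shape, K=10, rng=rng)
```

Note that f stays strictly positive, as the model assumes, because both the blurred image and the Gamma noise are positive.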
In [3], spatially varying regularization parameters in the AA model (1.3) were considered in order to restore more texture details in the denoised image. In [3], multiplicative noise removal was addressed via a learned dictionary, and extensive experimental results illustrated the leading performance of this approach. However, all of the above methods only address the multiplicative noise removal problem, and it remains an open question how to extend them to the deblurring case. In this paper, we focus on the restoration of images that are simultaneously blurred and corrupted by multiplicative noise. Since the non-convexity of the model (1.3) proposed in [5] raises both a uniqueness problem and convergence issues for numerical algorithms, we introduce a new convex model by adding a quadratic penalty term based on the statistical properties of multiplicative Gamma noise. Furthermore, we study the existence and uniqueness of a solution to the new model at the continuous, i.e., functional-space, level. Here, we still use TV regularization in order to preserve edges during the reconstruction. Evidently, the model can readily be extended to other modern regularization terms such as nonlocal TV [] or the framelet approach []. The minimization problem in our method is solved by the primal-dual algorithm proposed in [5,4] instead of the gradient projection method used in [5,3]. The numerical results in this paper show that our method has the potential to outperform the other approaches in simultaneous multiplicative noise removal and deblurring.
The rest of the paper is organized as follows. In Section 2, we briefly review total variation regularization and give its main properties. In Section 3, based on the statistical properties of multiplicative Gamma noise, we propose a new convex model for denoising and study the existence and uniqueness of a solution together with several other properties. In Section 4 we extend the model and those properties to the case of simultaneous denoising and deblurring. Section 5 gives the primal-dual algorithm for solving our restoration model, based on the work proposed in [5]. The numerical results shown in Section 6 demonstrate the efficiency of the new method. Finally, conclusions are drawn in Section 7.

2. Review of Total Variation Regularization. In order to preserve significant edges in images, Rudin et al. introduced total variation regularization into image restoration in their seminal work [40]. In this approach, the image is recovered in BV(Ω), the space of functions of bounded variation: u ∈ BV(Ω) iff u ∈ L¹(Ω) and the BV-seminorm

    ∫_Ω |Du| := sup { ∫_Ω u div(ξ(x)) dx : ξ ∈ C₀¹(Ω, ℝ²), ‖ξ‖_{L∞(Ω,ℝ²)} ≤ 1 }    (2.1)

is finite. The space BV(Ω) endowed with the norm ‖u‖_BV = ‖u‖_{L¹} + ∫_Ω |Du| is a Banach space. If u ∈ BV(Ω), the distributional derivative Du is a bounded Radon measure, and the quantity defined in (2.1) corresponds to the total variation (TV). Based on the compactness properties of BV(Ω), in the two-dimensional case we have the embedding BV(Ω) ⊂ L^p(Ω) for p ≤ 2, which is compact for p < 2. See [3, , ] for more details.

3. A Convex Multiplicative Denoising Model. To propose a convex multiplicative denoising model, we start from the multiplicative Gamma noise. Suppose that the random variable η follows a Gamma distribution, i.e., its probability density function (PDF) is

    p_η(x; θ, K) = x^{K−1} e^{−x/θ} / (θ^K Γ(K))  for x ≥ 0,    (3.1)

where Γ is the usual Gamma function, and θ and K denote the scale and shape parameters of the Gamma distribution, respectively.
Furthermore, the mean of η is Kθ and the variance of η is Kθ², see []. Since the noise is multiplicative, we assume in general that the mean of η equals 1; then Kθ = 1 and the variance is 1/K. Now set the random variable Y = η^{−1/2}, whose PDF is

    p_Y(y) = 2 y^{−2K−1} e^{−1/(θy²)} / (θ^K Γ(K))  for y > 0.    (3.2)

We can prove the following properties of Y.

Proposition 3.1. Suppose that the random variable η follows a Gamma distribution with mean 1. Set Y = η^{−1/2}. Then the means of Y and Y² are

    E(Y) = √K Γ(K − 1/2) / Γ(K)  and  E(Y²) = K/(K − 1),

respectively. Furthermore, we have

    E(Y^n) = K^{n/2} Γ(K − n/2) / Γ(K)
for any n ∈ ℕ with 2K > n.

Proof. Based on the PDF of η shown in (3.1) and Kθ = 1, we obtain

    E(Y) = ∫₀^{+∞} x^{−1/2} x^{K−1} e^{−x/θ} / (θ^K Γ(K)) dx
         = ( Γ(K − 1/2) / (√θ Γ(K)) ) ∫₀^{+∞} x^{K−3/2} e^{−x/θ} / (θ^{K−1/2} Γ(K − 1/2)) dx
         = ( Γ(K − 1/2) / (√θ Γ(K)) ) ∫₀^{+∞} p_η(x; θ, K − 1/2) dx
         = √K Γ(K − 1/2) / Γ(K).

Similarly, we have

    E(Y²) = ∫₀^{+∞} x^{−1} x^{K−1} e^{−x/θ} / (θ^K Γ(K)) dx
          = ( Γ(K − 1) / (θ Γ(K)) ) ∫₀^{+∞} x^{K−2} e^{−x/θ} / (θ^{K−1} Γ(K − 1)) dx
          = K/(K − 1).

Further, for any n ∈ ℕ with 2K > n, we can calculate E(Y^n) in the same way and obtain E(Y^n) = K^{n/2} Γ(K − n/2) / Γ(K).

According to the properties of the Gamma function [4] and Proposition 3.1, if K is large, the mean of Y is close to 1. Furthermore, we have:

Proposition 3.2. Suppose that the random variable η follows a Gamma distribution with mean 1. Set Y = η^{−1/2}. Then

    lim_{K→+∞} E((Y − 1)²) = 0.    (3.3)

Proof. With Proposition 3.1, we readily have

    E((Y − 1)²) = K/(K − 1) − 2√K Γ(K − 1/2)/Γ(K) + 1.

Based on the property of the Gamma function (see [4])

    lim_{K→+∞} Γ(K + α) / (Γ(K) K^α) = 1,  α ∈ ℝ,

taking α = −1/2 we immediately obtain

    lim_{K→+∞} E((Y − 1)²) = 0.
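The moment formulas just proved are easy to sanity-check by Monte Carlo simulation. The sketch below (numpy; the value K = 10 and the sample size are arbitrary choices, not taken from the paper) draws mean-one Gamma samples, forms Y = η^(−1/2), and compares the empirical means of Y and Y² with the closed forms:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
K = 10.0
eta = rng.gamma(shape=K, scale=1.0 / K, size=2_000_000)  # mean-1 Gamma noise
Y = 1.0 / np.sqrt(eta)                                   # Y = eta**(-1/2)

# closed forms: E(Y) = sqrt(K)*Gamma(K-1/2)/Gamma(K), E(Y^2) = K/(K-1)
EY_exact = math.sqrt(K) * math.gamma(K - 0.5) / math.gamma(K)
EY2_exact = K / (K - 1.0)
pen_exact = EY2_exact - 2.0 * EY_exact + 1.0             # E((Y-1)^2)

print(abs(Y.mean() - EY_exact))        # small sampling error
print(abs((Y**2).mean() - EY2_exact))  # small sampling error
```

The quantity E((Y − 1)²) computed from these formulas shrinks as K grows, in line with the limit above.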
Proposition 3.2 ensures that the value of E((Y − 1)²) is small for large K. On the other hand, from Table 3.1 we can see that even for small K its value is still rather small.

Remark 3.3. Since p_Y(y) defined in (3.2) is monotonic on either side of its single maximal point, it is a strictly unimodal function. In statistics, the Gaussian distribution is commonly used to approximate unimodal and other distributions, see [3, 38]. To further understand the relationship between Y and the Gaussian distribution, we need the concept of the Kullback-Leibler (KL) divergence. Suppose that P and Q are continuous random variables with PDFs p(x) and q(x), respectively. The KL-divergence of P and Q is defined as

    D_KL(P ‖ Q) := ∫ log( p(x)/q(x) ) p(x) dx,

which is always non-negative and is zero if and only if P = Q almost everywhere, see [7]. Based on the PDF of the random variable Y, we can prove that the KL-divergence of Y from the Gaussian distribution N(μ_K, σ_K²) tends to 0 as K goes to infinity, where μ_K and σ_K² are the mean and the variance of Y, respectively, i.e.,

    μ_K := E(Y)  and  σ_K² := E(Y²) − (E(Y))².    (3.4)

Proposition 3.4. Suppose that the random variable η follows a Gamma distribution with mean 1. Set Y = η^{−1/2}. Then,
(i) ∫₀^{+∞} p_Y(y) log p_Y(y) dy = log 2 − log(√K Γ(K)) + (K + 1/2)ψ(K) − K, where ψ(K) := d log Γ(K)/dK is the digamma function (see []);
(ii) ∫₀^{+∞} p_Y(y) log p_{N(μ_K,σ_K²)}(y) dy = −(1/2) log(2πe σ_K²), where p_{N(μ_K,σ_K²)}(y) denotes the PDF of the Gaussian distribution N(μ_K, σ_K²);
(iii) lim_{K→+∞} D_KL(Y ‖ N(μ_K, σ_K²)) = 0.

Proof. See Appendix I.

Furthermore, let us consider a rescaled version of Y, the random variable Z := (Y − μ_K)/σ_K. As K goes to infinity, Y degenerates to the constant 1, whereas Z tends to follow the standard Gaussian distribution N(0, 1). Indeed, since the KL-divergence is invariant under this affine rescaling, based on Proposition 3.4 we have

    lim_{K→+∞} D_KL(Z ‖ N(0, 1)) = lim_{K→+∞} D_KL(Y ‖ N(μ_K, σ_K²)) = 0.

Using Proposition 3.4 and the definition of the KL-divergence, in Table 3.1
we list the KL-divergence between Y and the Gaussian distribution N(μ_K, σ_K²) for different values of K. Evidently, even for small K, D_KL(Y ‖ N(μ_K, σ_K²)) is already very small. In addition, in Figure 3.1 we show the PDFs of Y and N(μ_K, σ_K²) for increasing values of K (among them K = 10 and K = 30). We can see that the PDF of Y decreases quickly and becomes close to symmetric as K increases, which are essential properties of the Gaussian distribution. In the denoising case, that is, when A is the identity operator, the degradation model (1.1) gives Y = η^{−1/2} = √(u/f). Inspired by (3.3), we introduce a quadratic penalty term into the AA model (1.3), which leads to

    inf_{u ∈ S(Ω)} E(u) := ∫_Ω ( log u + f/u ) dx + α ∫_Ω ( √(u/f) − 1 )² dx + λ ∫_Ω |Du|    (3.5)
with the penalty parameter α > 0.

[Table 3.1: the values of E((Y − 1)²) and of the KL-divergence D_KL(Y ‖ N(μ_K, σ_K²)) for different values of K; both are already very small even for small K.]

[Fig. 3.1: comparison of the PDFs of Y and N(μ_K, σ_K²) for different values of K.]

In addition, we set

    S(Ω) := {v ∈ BV(Ω) : v ≥ 0},

which is closed and convex, and we define log 0 = −∞ and 1/0 = +∞. Note that, as f > 0, we need not define 0/0.

3.1. Existence and uniqueness of a solution. For the existence and uniqueness of a solution to (3.5), we start by discussing the convexity of the model. Since the quadratic penalty term provides extra convexity, we prove that E(u) in (3.5) is convex provided the parameter α satisfies a certain condition.

Proposition 3.5. If α ≥ 2√6/9, then the model (3.5) is strictly convex.

Proof. For t ∈ ℝ⁺ and a fixed α, define the function

    g(t) := log t + 1/t + α(√t − 1)².

The second-order derivative of g satisfies

    g''(t) = −1/t² + 2/t³ + α/(2t^{3/2}).

A direct computation shows that the function t³ g''(t) reaches its unique minimum, 2 − 16/(27α²), at t = 16/(9α²). Hence, if α ≥ 2√6/9, we have g''(t) ≥ 0, i.e., g is convex. Furthermore, since t³ g''(t) has a unique minimizer, g'' vanishes at no more than one point, and g is strictly convex when α ≥ 2√6/9. Setting t = u(x)/f(x) for each x, we obtain the strict convexity of the first two terms in (3.5). Together with the convexity of the TV regularization, we deduce that E(u) in (3.5) is strictly convex if α ≥ 2√6/9. Since the feasible set S(Ω) is convex, the assertion is an immediate consequence.

Based on Proposition 3.5, we see that with a suitable α, (3.5) is a convex approximation of the non-convex model (1.3) with A as the identity operator. Now, we
argue the existence and uniqueness of a solution to (3.5) and a minimum-maximum principle.

Theorem 3.6. Let f be in L∞(Ω) with inf f > 0. Then the model (3.5) has a solution u* in BV(Ω) satisfying

    0 < inf f ≤ u* ≤ sup f.

Moreover, if α ≥ 2√6/9, the solution of (3.5) is unique.

Proof. Set c₁ := inf f, c₂ := sup f, and let

    E₀(u) := ∫_Ω ( log u + f/u ) dx + α ∫_Ω ( √(u/f) − 1 )² dx.

For each fixed x, we easily have log t + f(x)/t ≥ 1 + log f(x) for t ∈ ℝ⁺ ∪ {0}. Recalling that E(u) is defined in (3.5), we thus have

    E(u) ≥ ∫_Ω ( log u + f/u ) dx ≥ ∫_Ω (1 + log f) dx.

In other words, E(u) in (3.5) is bounded from below, and we can choose a minimizing sequence {u_n ∈ S(Ω) : n = 1, 2, …}. Since for each fixed x the real function on ℝ⁺ ∪ {0}

    g(t) := log t + f(x)/t + α( √(t/f(x)) − 1 )²

is decreasing on [0, f(x)) and increasing on (f(x), +∞), one always has g(min(t, M)) ≤ g(t) for M ≥ f(x). Hence, we obtain

    E₀(inf(u, c₂)) ≤ E₀(u).

Combining this with ∫_Ω |D inf(u, c₂)| ≤ ∫_Ω |Du| (see Lemma 1 in Section 4.3 of [34]), we have E(inf(u, c₂)) ≤ E(u). In the same way we get E(sup(u, c₁)) ≤ E(u). Hence, we can assume that 0 < c₁ ≤ u_n ≤ c₂, which implies that {u_n} is bounded in L∞(Ω). As {u_n} is a minimizing sequence, E(u_n) is bounded. Furthermore, ∫_Ω |Du_n| is bounded, and {u_n} is bounded in BV(Ω). Therefore, there exists a subsequence {u_{n_k}} which converges strongly in L¹(Ω) to some u* ∈ BV(Ω), and {Du_{n_k}} converges weakly as measures to Du* []. Since S(Ω) is closed and convex, by the lower semi-continuity of the total variation and Fatou's lemma, we conclude that u* is a solution of the model (3.5), and necessarily 0 < c₁ ≤ u* ≤ c₂. Moreover, if α ≥ 2√6/9, uniqueness follows directly from the strict convexity of the functional E.

As in the corresponding existence theorem of [5], here we also need the assumption inf f > 0, which ensures that E(u) defined in (3.5) is bounded from below. In numerical practice, this assumption is always maintained by using max(f, ϵ) as the observed image, with ϵ a very small positive value. In [5] a comparison principle was given concerning the model (1.3).
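The convexity threshold in Proposition 3.5 can also be verified numerically. With g(t) = log t + 1/t + α(√t − 1)² as in the proposition, a direct computation gives g''(t) = −1/t² + 2/t³ + α/(2t^1.5), and the threshold works out to α = 2√6/9 (this value is reconstructed here from that computation). The sketch below (numpy; the grid bounds are arbitrary) checks that g'' is nonnegative at the threshold and dips below zero for smaller α:

```python
import math
import numpy as np

def g_second(t, alpha):
    # g''(t) for g(t) = log t + 1/t + alpha*(sqrt(t) - 1)**2
    return -1.0 / t**2 + 2.0 / t**3 + 0.5 * alpha / t**1.5

alpha_star = 2.0 * math.sqrt(6.0) / 9.0        # convexity threshold
t = np.linspace(1e-3, 50.0, 500_000)

print(g_second(t, alpha_star).min())           # nonnegative up to rounding
print(g_second(t, 0.8 * alpha_star).min())     # convexity fails below threshold
```

Analytically, t³ g''(t) attains its minimum 2 − 16/(27α²) at t = 16/(9α²), which vanishes exactly at α = 2√6/9.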
With the α-term in (3.5), it is satisfied under a certain condition on α as well.

Proposition 3.7. Let f₁ and f₂ be in L∞(Ω) with a₁ > 0 and a₂ > 0, where a₁ = inf f₁ and a₂ = inf f₂. Further, set b₁ = sup f₁ and b₂ = sup f₂. Assume
f₁ < f₂. Suppose u₁ (resp. u₂) is a solution of (3.5) with f = f₁ (resp. f = f₂). Then, when α < 2a₁a₂/(b₁b₂ − 2a₁a₂), we have u₁ ≤ u₂.

Proof. Following [5], we define u ∧ v = inf(u, v) and u ∨ v = sup(u, v). Since uᵢ is a minimizer of E(u) defined in (3.5) with f = fᵢ, for i = 1, 2, and since u₁ ∧ u₂ and u₁ ∨ u₂ belong to S(Ω), we have

    E₁(u₁) + E₂(u₂) ≤ E₁(u₁ ∧ u₂) + E₂(u₁ ∨ u₂).

Based on the result ∫_Ω |D(u₁ ∧ u₂)| + ∫_Ω |D(u₁ ∨ u₂)| ≤ ∫_Ω |Du₁| + ∫_Ω |Du₂| from [3,7], the data terms satisfy

    ∫_Ω ( log u₁ + f₁/u₁ + α(√(u₁/f₁) − 1)² ) dx + ∫_Ω ( log u₂ + f₂/u₂ + α(√(u₂/f₂) − 1)² ) dx
    ≤ ∫_Ω ( log(u₁ ∧ u₂) + f₁/(u₁ ∧ u₂) + α(√((u₁ ∧ u₂)/f₁) − 1)² ) dx
      + ∫_Ω ( log(u₁ ∨ u₂) + f₂/(u₁ ∨ u₂) + α(√((u₁ ∨ u₂)/f₂) − 1)² ) dx.

Writing Ω₁ = {u₁ > u₂}, on which u₁ ∧ u₂ = u₂ and u₁ ∨ u₂ = u₁, we deduce after factoring that

    ∫_{Ω₁} (√f₂ − √f₁)(√u₁ − √u₂) [ 2α/√(f₁f₂) − (√f₁ + √f₂)(√u₁ + √u₂)/(u₁u₂) − α(√f₁ + √f₂)(√u₁ + √u₂)/(f₁f₂) ] dx ≥ 0.

Based on Theorem 3.6, we have 0 < a₁ ≤ u₁ ≤ b₁ and 0 < a₂ ≤ u₂ ≤ b₂, which, together with (√f₁ + √f₂)(√u₁ + √u₂) ≥ (√a₁ + √a₂)² ≥ 4√(a₁a₂), imply

    (√f₁ + √f₂)(√u₁ + √u₂)/(u₁u₂) ≥ 4√(a₁a₂)/(b₁b₂),  (√f₁ + √f₂)(√u₁ + √u₂)/(f₁f₂) ≥ 4√(a₁a₂)/(b₁b₂),  and  1/√(f₁f₂) ≤ 1/√(a₁a₂).

Hence

    (√f₁ + √f₂)(√u₁ + √u₂)/(u₁u₂) + α(√f₁ + √f₂)(√u₁ + √u₂)/(f₁f₂) − 2α/√(f₁f₂) ≥ 4√(a₁a₂)(1 + α)/(b₁b₂) − 2α/√(a₁a₂) > 0

as soon as α < 2a₁a₂/(b₁b₂ − 2a₁a₂). Taking account of f₁ < f₂, in this case the integrand above is negative on Ω₁, so {u₁ > u₂} has zero Lebesgue measure, i.e., u₁ ≤ u₂ a.e. in Ω.

4. The Extension to Simultaneous Deblurring and Denoising. The model (3.5) is based on the statistical properties of the Gamma distribution and is specifically devoted to multiplicative Gamma noise removal. In this section, we extend it to the simultaneous deblurring and denoising case, i.e., to restoring the image û in (1.1) with the blurring operator A. The restoration is obtained by solving the optimization problem

    inf_{u ∈ S(Ω)} E_A(u) := ∫_Ω ( log Au + f/(Au) ) dx + α ∫_Ω ( √(Au/f) − 1 )² dx + λ ∫_Ω |Du|,    (4.1)
where A ∈ L(L²(Ω)). As a blurring operator, we assume that A is nonnegative, written A ≥ 0 for short. Then Au ≥ 0 for u ∈ S(Ω). As in Proposition 3.5, since A is linear, we can readily establish the following convexity result.

Proposition 4.1. If α ≥ 2√6/9, then the model (4.1) is convex.

4.1. Existence of a solution. Based on the properties of the total variation and the space of functions of bounded variation, we prove the existence and uniqueness of a solution to (4.1).

Theorem 4.2. Recall that Ω ⊂ ℝ² is a connected bounded set with compact Lipschitz boundary. Suppose that A ∈ L(L²(Ω)) is nonnegative and does not annihilate constant functions, i.e., A1 ≠ 0. Let f be in L∞(Ω) with inf f > 0. Then the model (4.1) admits a solution u*. Moreover, if α ≥ 2√6/9 and A is injective, then the solution is unique.

Proof. As in the proof of Theorem 3.6, E_A is readily seen to be bounded from below, and we can choose a minimizing sequence {u_n} ⊂ S(Ω) for (4.1). Then {∫_Ω |Du_n|}, n = 1, 2, …, is bounded. Using the Poincaré inequality (see Remark 3.50 of [3]), we obtain

    ‖u_n − m_Ω(u_n)‖ ≤ C ∫_Ω |D(u_n − m_Ω(u_n))| = C ∫_Ω |Du_n|,    (4.2)

where m_Ω(u_n) = (1/|Ω|) ∫_Ω u_n dx, |Ω| denotes the measure of Ω, and C is a constant. Recalling that Ω is bounded, it follows that {u_n − m_Ω(u_n)} is bounded. Since A ∈ L(L²(Ω)) is continuous, {A(u_n − m_Ω(u_n))} must be bounded in L²(Ω) and in L¹(Ω). On the other hand, by the boundedness of E_A(u_n), the sequence {√(Au_n/f) − 1} is bounded in L²(Ω), which implies that {√(Au_n)} is bounded in L²(Ω), and hence {Au_n} is bounded in L¹(Ω). Moreover, we have

    ‖m_Ω(u_n) A1‖ = ‖A(u_n − m_Ω(u_n)) − Au_n‖ ≤ ‖A(u_n − m_Ω(u_n))‖ + ‖Au_n‖.

Hence |m_Ω(u_n)| ‖A1‖ is bounded. Thanks to A1 ≠ 0, we obtain that m_Ω(u_n) is uniformly bounded. Together with the boundedness of {u_n − m_Ω(u_n)}, this yields the boundedness of {u_n} in L²(Ω) and thus in L¹(Ω). Since S(Ω) is closed and convex, {u_n} is bounded in S(Ω) as well. Therefore, there exists a subsequence {u_{n_k}} which converges weakly in L²(Ω) to some u* ∈ L²(Ω), and {Du_{n_k}} converges weakly as measures to Du*.
Due to the continuity of the linear operator A, {Au_{n_k}} converges weakly to Au* in L²(Ω). Then, based on the lower semi-continuity of the total variation and Fatou's lemma, we obtain that u* is a solution of the model (4.1). Based on Proposition 4.1, when α ≥ 2√6/9, the model (4.1) is convex. Furthermore, if A is injective, (4.1) is strictly convex, and its minimizer has to be unique.

As in Theorem 3.6, the assumption that inf f > 0 basically guarantees the well-posedness of the model. Moreover, in the discrete setting, adding a few more technicalities, the assumption that A is injective may be weakened to ker(A) ∩ ker(∇) = {0}, where ∇ denotes the discrete gradient operator.

Remark 4.3. In the proof of Theorem 4.2, it is because of the α-term that we obtain the boundedness of the sequence {Au_n} in L¹(Ω). Thus, when α = 0, that is, in the case of the model (1.3), it is difficult to get the same result, and indeed the existence of a solution to (1.3) is still an open question in [5].
According to the constraint in (4.1), its minimizer is nonnegative. Further, we have the following result.

Proposition 4.4. Suppose that u* is the solution of (4.1). Then there exists a constant C such that

    |{x ∈ Ω : (Au*)(x) ≤ εf(x)}| ≤ ( C − ∫_Ω (1 + log f) dx ) / ( 1/ε + log ε − 1 )

for any 0 < ε < 1.

Proof. Let C be the minimal value of (4.1), which is independent of ε. Set w = Au*/f. Then

    ∫_Ω (1 + log f) dx ≤ ∫_Ω ( log w + 1/w + log f ) dx ≤ C,

where we have used the fact that log t + 1/t ≥ 1 for each t > 0. Moreover, if t ≤ ε < 1, then log t + 1/t ≥ log ε + 1/ε, and we get

    |{x ∈ Ω : w(x) ≤ ε}| ≤ ( C − ∫_Ω (1 + log f) dx ) / ( 1/ε + log ε − 1 ).

Since w = Au*/f, we obtain the assertion.

As a consequence, |{x : (Au*)(x) = 0}| = 0, i.e., Au* is positive almost everywhere. In particular, in the discrete situation, Au* is strictly positive.

4.2. Bias correction. In [], a variational model was proposed in the log-image domain for multiplicative noise removal. In order to ensure that the mean of the restored image equals that of the observed image, the method ends with an exponential transform along with a bias correction. In addition, we recall that through the theoretical analysis in [4, ] the classical ROF model of [40],

    inf_{u ∈ BV(Ω)} ∫_Ω (Au − f)² dx + λ ∫_Ω |Du|,    (4.3)

is proved to preserve the mean, i.e., m_Ω(Au*) = m_Ω(f) for a solution u*. However, (4.3) was proposed to remove additive white Gaussian noise. In this section, we study our new model (4.1) through a similar theoretical analysis as in [4] and try to find the relation between the observed image f and the solution u*.

Proposition 4.5. Suppose that A1 = 1. Let u* be a solution of (4.1) and assume that inf Au* > 0. Then the following properties hold true:
(i) ∫_Ω (f − Au*)/(Au*)² dx = α ∫_Ω ( 1/f − 1/√(f·Au*) ) dx;
(ii) if there exists a solution in the case α = 0, then we have

    ∫_Ω 1/f dx ≥ ∫_Ω 1/(Au*) dx.

Proof.
(i) We define a function of the single variable t ∈ (−inf Au*, +∞):

    e(t) := ∫_Ω ( log A(u* + t) + f/A(u* + t) ) dx + α ∫_Ω ( √(A(u* + t)/f) − 1 )² dx + λ ∫_Ω |D(u* + t)|.

Since A1 = 1, we necessarily have

    e(t) = ∫_Ω ( log(Au* + t) + f/(Au* + t) ) dx + α ∫_Ω ( √((Au* + t)/f) − 1 )² dx + λ ∫_Ω |Du*|.

Since t = 0 is a (local) minimizer of e(t), we have e'(0) = 0, which leads to

    ∫_Ω [ (Au* − f)/(Au*)² + α( 1/f − 1/√(f·Au*) ) ] dx = 0.

(ii) With α = 0 the result in (i) becomes

    ∫_Ω f/(Au*)² dx = ∫_Ω 1/(Au*) dx.

Moreover, according to Hölder's inequality and the nonnegativity of Au* and f, we obtain

    ( ∫_Ω 1/(Au*) dx )² ≤ ∫_Ω f/(Au*)² dx · ∫_Ω 1/f dx.

Combining both, we have

    ∫_Ω 1/f dx ≥ ∫_Ω 1/(Au*) dx.

Note that in the discrete situation the assumption inf Au* > 0 is always satisfied, based on the conclusion of Proposition 4.4. From Proposition 4.5 we cannot obtain for the model (4.1) the analogous theoretical result to [4, ], namely that the mean of the observed image is automatically preserved. In order to reduce the influence of the bias and keep the restored image on the same scale as f, we improve the model (4.1) to

    inf_{u ∈ S(Ω), m_Ω(u) = m_Ω(f)} ∫_Ω ( log Au + f/(Au) ) dx + α ∫_Ω ( √(Au/f) − 1 )² dx + λ ∫_Ω |Du|.    (4.4)

It is straightforward to show that the feasible set {u ∈ S(Ω) : m_Ω(u) = m_Ω(f)} is closed and convex, so the existence and uniqueness of a solution to (4.4) are easily obtained by extending Theorem 4.2. Note that the bias-variance trade-off in statistics does not always advocate unbiased estimators: it is possible to obtain better PSNR results with a small (but nonzero) bias. However, in our numerical simulations, we do observe an improvement in PSNR with our bias-correction step.
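In the discrete setting, the constraint m_Ω(u) = m_Ω(f) together with nonnegativity can be enforced by a simple multiplicative rescaling, as in the projection step used by the algorithm of the next section. A minimal numpy sketch (illustrative, not the paper's code):

```python
import numpy as np

def mean_preserving_projection(u, f):
    # Clip to nonnegative values, then rescale so that the mean
    # (equivalently, the sum) of u matches that of the observed image f.
    u_pos = np.maximum(u, 0.0)
    return (f.sum() / u_pos.sum()) * u_pos
```

This is a rescaling rather than an orthogonal projection: it keeps u nonnegative and matches the mean of f exactly.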
In (4.4), we implicitly suppose that

    m_Ω(u) ≈ m_Ω(Au)  and  m_Ω((Au)η) ≈ m_Ω(Au).

Under some independence conditions, these assumptions are theoretically rooted in statistics. Moreover, in practical simulations, we find that they lead to rather reasonable results.

5. Primal-Dual Algorithm. Since the model (4.4) is convex, many methods can be extended to solve the minimization problem in (4.4), for example the alternating direction method [,5], which is convergent and well suited to large-scale convex problems, and its variant, the split Bregman algorithm [8], which is widely used for L1-regularized problems such as TV regularization. In this section, we apply the primal-dual algorithm proposed in [5, 4, 37] to solve the minimization problem in (4.4). This algorithm can be used for solving a large family of non-smooth convex optimization problems, see the applications in [5]. It is simple and comes with convergence guarantees. We now focus on the discrete version of (4.4). For the sake of simplicity, we keep the same notation as in the continuous context. The discrete model reads as follows:

    min_{u ∈ X} E_A(u) := ⟨log Au, 1⟩ + ⟨f/(Au), 1⟩ + α ‖√(Au/f) − 1‖₂² + λ ‖∇u‖₁,    (5.1)

where X = {v ∈ ℝⁿ : vᵢ ≥ 0 for i = 1, …, n, and Σᵢ vᵢ = Σᵢ fᵢ}, n is the number of pixels in the images, f ∈ X is obtained from the two-dimensional pixel array by concatenation in the usual columnwise fashion, and A ∈ ℝ^{n×n}. Moreover, the vector inner product ⟨u, v⟩ = Σᵢ uᵢvᵢ is used, and ‖·‖₂ denotes the l2-vector norm. The discrete gradient operator ∇ ∈ ℝ^{2n×n} is defined by

    ∇v = [ ∇_x v ; ∇_y v ]

for v ∈ ℝⁿ, with ∇_x, ∇_y ∈ ℝ^{n×n} corresponding to the discrete derivatives in the x-direction and y-direction, respectively. In our numerics, ∇_x and ∇_y are obtained by applying finite-difference approximations for the derivatives with symmetric boundary conditions in the respective coordinate directions.
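Concretely, the discrete gradient with forward differences and symmetric boundary conditions, together with a divergence defined as its negative adjoint (div = −∇ᵀ), can be sketched as follows (numpy, on the 2-D pixel array; an illustrative implementation, not the authors' code):

```python
import numpy as np

def grad(u):
    # Forward differences with symmetric (replicate) boundary conditions:
    # the difference across the last row/column is zero.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence chosen as the negative adjoint of grad, so that
    # <grad u, (px, py)> = -<u, div(px, py)> holds exactly.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy
```

The adjointness identity ⟨∇u, p⟩ = −⟨u, div p⟩ is exact for these definitions, which is what the primal-dual formulation below relies on.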
In addition, ‖∇v‖₁ denotes the discrete total variation of v, which is defined as

    ‖∇v‖₁ = Σ_{i=1}^n √( (∇_x v)ᵢ² + (∇_y v)ᵢ² ).

Define the function G : X → ℝ as

    G(u) := ⟨log Au, 1⟩ + ⟨f/(Au), 1⟩ + α ‖√(Au/f) − 1‖₂².

Based on the definition of the total variation in Section 2, we give the primal-dual formulation of (5.1):

    max_{p ∈ Y} min_{u ∈ X} G(u) − λ⟨u, div p⟩,    (5.2)
where Y = {q ∈ ℝ^{2n} : ‖q‖∞ ≤ 1}, ‖q‖∞ = max_{i ∈ {1,…,n}} √(qᵢ² + q_{i+n}²) denotes the corresponding l∞-vector norm, p is the dual variable, and the divergence operator is div = −∇ᵀ. This is a generic saddle-point problem, and we can apply the primal-dual algorithm proposed in [5] to solve the above optimization task. The algorithm is summarized as follows.

Algorithm for solving the model (5.1)
1: Fix σ, τ > 0. Initialize u⁰ = f, ū⁰ = f and p⁰ = (0, …, 0)ᵀ ∈ ℝ^{2n}.
2: Calculate p^{k+1} and u^{k+1} from

    p^{k+1} = arg max_{p ∈ Y} −λ⟨ū^k, div p⟩ − ‖p − p^k‖₂²/(2σ),    (5.3)
    u^{k+1} = arg min_{u ∈ X} G(u) − λ⟨u, div p^{k+1}⟩ + ‖u − u^k‖₂²/(2τ),    (5.4)
    ū^{k+1} = 2u^{k+1} − u^k.    (5.5)

3: Stop; or set k := k + 1 and go to step 2.

In order to apply the algorithm to (5.1), the main questions are how to solve the dual problem in (5.3) and the primal problem in (5.4). For (5.3), the solution is given explicitly by

    p^{k+1}ᵢ = π( λσ(∇ū^k)ᵢ + p^kᵢ ),  for i = 1, …, 2n,    (5.6)

where π is the projector onto the unit ball of the above l∞-norm, i.e.,

    π(q)ᵢ = qᵢ / max(1, |q|ᵢ)  and  π(q)_{n+i} = q_{n+i} / max(1, |q|ᵢ),  for i = 1, …, n,

with |q|ᵢ = √(qᵢ² + q_{i+n}²). Since the minimization problem in (5.4) is strictly convex and its objective function is twice differentiable, it can be solved efficiently by Newton's method, followed by one projection step,

    u^kᵢ := ( Σ_{j=1}^n f_j / Σ_{j=1}^n max(u^k_j, 0) ) max(u^kᵢ, 0),  for i = 1, …, n,    (5.7)

to ensure that u^k is nonnegative and preserves the mean of f. This projection step is inspired by propositions in [8] and []. Based on Theorem 1 of [5], we end this section with the convergence properties of our algorithm. The proof refers to [5].

Proposition 5.1. The iterates (u^k, p^k) of our algorithm converge to a saddle point of (5.2) provided that στλ²‖∇‖² < 1.

According to the estimate ‖∇‖² ≤ 8 for unit pixel spacing in [3], we only need στλ² < 1/8 in order to keep the convergence condition. In our numerical practice, λ is tuned empirically (usually it is around 0.1, see the next section for details), and we simply set σ = 3 and τ = 3, which in most cases ensures the convergence of the algorithm.
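To make the scheme concrete, here is a compact numpy sketch of steps (5.3)-(5.5) for the pure denoising case A = I. It is an illustrative re-implementation under simplifying assumptions — a fixed iteration count instead of the stopping rule, a pointwise Newton inner loop for (5.4), and demonstration parameter values — not the authors' tuned code:

```python
import numpy as np

def grad(u):
    # forward differences, symmetric boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def primal_dual_denoise(f, lam=0.1, alpha=2.0, sigma=3.0, tau=3.0, iters=300):
    u = f.astype(float).copy(); u_bar = u.copy()
    px = np.zeros_like(u); py = np.zeros_like(u)
    for _ in range(iters):
        # dual step (5.3): gradient ascent, then pixelwise projection (5.6)
        gx, gy = grad(u_bar)
        px = px + lam * sigma * gx
        py = py + lam * sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm
        # primal step (5.4) with A = I: pointwise Newton on the strictly convex
        # objective log u + f/u + alpha*(sqrt(u/f)-1)^2 - lam*u*div p
        #           + (u - u_k)^2 / (2*tau)
        u_k = u.copy()
        dp = div(px, py)
        for _ in range(15):
            d1 = (1.0/u - f/u**2 + alpha*(1.0/f - 1.0/np.sqrt(f*u))
                  - lam*dp + (u - u_k)/tau)
            d2 = -1.0/u**2 + 2.0*f/u**3 + 0.5*alpha/(np.sqrt(f)*u**1.5) + 1.0/tau
            u = np.maximum(u - d1/d2, 1e-8)
        # projection (5.7): nonnegativity and mean preservation
        u = (f.sum() / u.sum()) * u
        # extrapolation (5.5)
        u_bar = 2.0*u - u_k
    return u
```

With σ = τ = 3 the convergence condition στλ²‖∇‖² < 1 holds for λ up to about 0.1, which is the regime used here.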
[Fig. 6.1: original test images. (a) Phantom, (b) Cameraman, (c) Parrot.]

6. Numerical Results. In this section we provide numerical results to study the behavior of our method with respect to its image restoration capabilities and CPU-time consumption. Here, we compare our method with the one proposed in [5] (the AA method), which solves (1.3), and the one in [3] (the RLO method), which solves (1.2); both are able to remove multiplicative noise and deblur simultaneously. In the denoising case, we also provide numerical results from [0], where a probabilistic patch-based method was proposed for several kinds of noise, including multiplicative noise; we abbreviate this method as the PPB filter. Since the AA method and the RLO method do not guarantee preservation of the mean of the observed image, in order to compare fairly, we add the same projection step as in (5.7) before outputting their results. Indeed, if u^o is the result of the AA method or the RLO method, we revise it as

    u^{new}ᵢ := ( Σ_{j=1}^n f_j / Σ_{j=1}^n max(u^o_j, 0) ) max(u^oᵢ, 0),  for i = 1, …, n.    (6.1)

Note that usually this improves the result by around 0.1 to 0.3 dB, depending on the image and the noise level. For the PPB filter, we numerically find that this correction is not necessary. For illustration, the results for the 256-by-256 gray-level images Phantom, Cameraman and Parrot are presented; see the original test images in Figure 6.1. The quality of the restoration results is compared quantitatively by means of the peak signal-to-noise ratio (PSNR) [0], which is a widely used image quality assessment measure. In addition, all simulations listed here are run in Matlab 7 on a PC equipped with a 2.40 GHz CPU and 4 GB of RAM.

6.1. Image denoising. Although our method is proposed for the simultaneous deblurring and denoising of images subject to multiplicative noise, here we show that it also provides good results for noise removal alone.
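The PSNR measure used below can be computed as follows (a standard definition with peak value 255 for 8-bit gray levels; the exact implementation used in the experiments may differ):

```python
import numpy as np

def psnr(u, u_ref, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((np.asarray(u, dtype=float) - np.asarray(u_ref, dtype=float))**2)
    return 10.0 * np.log10(peak**2 / mse)
```

Higher PSNR indicates a restoration closer to the reference image; each 20 dB corresponds to a tenfold reduction of the root-mean-square error.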
However, since our model is rather basic, more advanced approaches such as the patch-based method [0] or the dictionary-learning method [3] for multiplicative noise removal may produce better results. In our examples, the test images are corrupted by multiplicative noise with K = 10 and with a second, smaller value of K (i.e., stronger noise), respectively. The results are shown in Figures 6.2-6.7. For the AA method and the RLO method, we use the time-marching algorithm to solve the minimization models as proposed in [5, 3]. We set the step size to 0.1 in order to obtain a stable iterative procedure. The algorithms are stopped when the maximum
number of iterations is reached.

[Fig. 6.2: results of different methods when removing multiplicative noise with K = 10; row 3 shows the plots of the objective versus iterations. (a) Noisy Phantom, (b) PPB filter, (c) AA method, (d) RLO method, (e) our method (α = 8).]

In addition, after many experiments with different λ-values in the AA model, the RLO model and ours, the ones that provide the best PSNRs are presented here. For the PPB filter [0], we use the code provided by the authors in [], written as C++ mex-functions. Since the code was written for removing multiplicative noise with a Nakagami-Rayleigh distribution, a square-root transform is applied in order to bridge the gap between the Gamma distribution and the Nakagami-Rayleigh distribution. We use the suggested parameter values for the PPB filter, except for Phantom, where the search window size has to be enlarged to get a better result, which costs more time (see Table 6.1). In our method, we stop the iterative procedure as soon as the value of the objective function shows no significant relative decrease, i.e.,

    ( E(u^k) − E(u^{k+1}) ) / E(u^k) < ε.    (6.2)

In the denoising case, we fix ε to a small tolerance. Notice that the computing time of the PPB filter [0] is not directly comparable with that of the remaining three TV-based methods, since the
Fig. 6.3. Results of different methods when removing the multiplicative noise with K = 0. Row 3: the plots of the objective versus iterations. (a) Noisy Cameraman, (b) PPB filter, (c) AA method (λ = 0.4), (d) RLO method (λ = 0.4), (e) our method (λ = 0. and α = ).

programming languages are different. Comparing the results of the three TV-based methods, i.e., the AA method, the RLO method and ours, in Figures 6.2–6.5, we see that all of their objectives decrease monotonically, and our method performs best visually with the fewest iterations. Note that in the results restored by the AA method and the RLO method much more noise remains compared with those restored by our method; see, e.g., the white boundary of Phantom and the background in Cameraman. Moreover, the contrast of the details restored by the AA method and the RLO method is noticeably reduced because of oversmoothing during noise removal, whereas our method preserves more details. In this respect, observe the tripod and the trousers in Cameraman, especially when recovering the images corrupted by high-level noise. In order to compare the capability of recovering details, in Figures 6.6 and 6.7 we show the results for denoising the image Parrot, which contains more details. Comparing the textures surrounding the eye and the background of the parrot, we can clearly see that our method suppresses noise successfully while preserving significantly more details.
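For reference, the relative-decrease stopping rule used for our method above amounts to a simple driver loop. This is an illustrative sketch; `iterate_until_stopped` and its arguments are our own names, not taken from the compared codes:

```python
def iterate_until_stopped(step, u0, energy, eps=1e-4, max_iter=5000):
    """Apply u <- step(u) until the relative decrease of the objective,
    (E(u^k) - E(u^{k+1})) / E(u^k), falls below eps (or max_iter is hit)."""
    u = u0
    E_prev = energy(u)
    for _ in range(max_iter):
        u = step(u)
        E = energy(u)
        if (E_prev - E) / E_prev < eps:
            break
        E_prev = E
    return u
```

With a monotonically decreasing objective, as observed for all three TV-based methods, the quotient is nonnegative, so the loop stops exactly when the progress per iteration becomes negligible.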
Fig. 6.4. Results of different methods when removing the multiplicative noise with K = . Row 3: the plots of the objective versus iterations. (a) Noisy Phantom, (b) PPB filter, (c) AA method (λ = 0.3), (d) RLO method (λ = 0.3), (e) our method (λ = 0. and α = 4).

Since the PPB filter in [20] is a patch-based method, whose framework differs from that of the TV-based methods, it can remove the multiplicative noise more effectively and preserve more details. But we still find that some unpleasant artifacts are noticeable in the results obtained by the PPB filter; see, e.g., the homogeneous regions in Cameraman, Phantom and Parrot. Furthermore, some tiny structures in the images are completely missing; see, e.g., the spots in Phantom and the buildings in Cameraman. Since we utilize TV regularization in our method, those spurious artifacts are absent from our results, but at the same time there might be some undesirable staircasing effects; see the beak and neck of the parrot. To compare the quantitative performance and the computational efficiency, in Table 6.1 we list the PSNR values of the restored results, the number of iterations and the CPU times. Among the three TV-based methods, we observe that the PSNR values from our method are more than 0.5 dB higher than those from the others. Due to a large step size, our method reaches the stopping rule with far fewer iterations and also spends much less CPU time. However, in order to obtain a stable iterative procedure, the AA method and the RLO method have to use a small step size, and
Fig. 6.5. Results of different methods when removing the multiplicative noise with K = . Row 3: the plots of the objective versus iterations. (a) Noisy Cameraman, (b) PPB filter, (c) AA method (λ = 0.), (d) RLO method (λ = 0.), (e) our method (λ = 0. and α = ).

then need more than 0 times as many iterations to provide the results with the best PSNRs.

6.2. Image deblurring and denoising. In this section, we consider the restoration of noisy blurred images. In our experiments, we test two blurring operators: motion blur with length 5 and angle 30, and Gaussian blur with a window size of 7 × 7 and a standard deviation of . Further, after being blurred, the test images are corrupted by multiplicative noise with K = 0. In the blurring case, we set $\varepsilon = 10^{-4}$ in (6.2) as the stopping rule for our method. In Figures 6.8–6.10, we show the degraded images and the restored results of all three TV-based methods, and Table 6.2 lists the PSNR values, the number of iterations, and the CPU times. In contrast to the results by the AA method and the RLO method, our method again performs best both visually and quantitatively, and it preserves more details; see, e.g., the tripod in Cameraman and the texture near the eye in Parrot. Due to the blurring, more iterations are needed in all three methods, but our method still provides the best results in far fewer iterations with
Fig. 6.6. Restored images by different methods for restoring the image Parrot. (a) Noisy image with K = 0, (b) PPB filter, (c) AA method (λ = 0.4), (d) RLO method (λ = 0.4), (e) our method (λ = 0. and α = ).

the least CPU time. In conclusion, our method turns out to be more efficient and outperforms the other methods that are able to deblur while removing multiplicative noise simultaneously.

7. Conclusion. In this paper, we propose a new variational model for restoring blurred images subject to multiplicative Gamma noise. In order to obtain convexity, we add a quadratic penalty term, based on the statistical properties of the multiplicative Gamma noise, to the model proposed in [5], which combines a MAP estimator with total variation regularization. The existence and uniqueness of a solution to the new model are established. Furthermore, some other properties are studied, such as the minimum-maximum principle and the bias correction. Due to the convexity, we can employ the primal-dual algorithm proposed in [15] to solve the corresponding minimization problem in the new model, and its convergence is guaranteed. Compared to other recently proposed methods, our method appears to be very competitive with respect to image restoration capabilities and CPU-time consumption.

Acknowledgement. This work is supported by RGC 70, RGC, NSFC 704 and the FRGs of Hong Kong Baptist University. The authors would like to thank the reviewers for the careful reading of the manuscript and the insightful and constructive comments.

8. Appendix I: Proof of Proposition 3.4. In order to prove Proposition 3.4, we need the following lemma.
Fig. 6.7. Restored images by different methods for restoring the image Parrot. (a) Noisy image with K = , (b) PPB filter, (c) AA method (λ = 0.8), (d) RLO method (λ = 0.8), (e) our method (λ = 0. and α = 8).

Lemma 8.1. Let B(K) denote the Beta function B(K, K) with $K > \frac{1}{2}$, i.e., $B(K) := \frac{\Gamma^2(K)}{\Gamma(2K)}$. Then, we have

$$\frac{B(K - \frac{1}{2})}{B(K)} = 2 + \frac{1}{2K} + O\Big(\frac{1}{K^2}\Big). \tag{8.1}$$

Proof. Based on a property of the Gamma function,

$$\frac{\Gamma(x + 1)}{\Gamma(x)} = x \quad \text{for } x > 0, \tag{8.2}$$

we readily obtain:

$$\log\frac{B(K - \frac{1}{2})}{B(K)} = \log(2K - 1) + 2\big(\log\Gamma(K - \tfrac{1}{2}) - \log\Gamma(K)\big).$$

Then using the expansion of $\log\Gamma(x)$ with $x > 0$ in [2]:

$$\log\Gamma(x) = x\log x - x - \frac{1}{2}\log x + \frac{1}{2}\log 2\pi + \frac{1}{12x} + O\Big(\frac{1}{x^2}\Big) \tag{8.3}$$
Fig. 6.8. Results of different methods when restoring the degraded images corrupted by motion blur and then multiplicative noise with K = 0. Row 1: degraded images. Rows 2 and 4: restored images with different methods. Rows 3 and 5: the plots of the objective versus iterations. (a) Degraded Phantom, (b) degraded Cameraman, (c) AA method (row 2: λ = 0.05; row 4: λ = 0.0), (d) RLO method (row 2: λ = 0.05; row 4: λ = 0.0), (e) our method (row 2: λ = 0.0 and α = ; row 4: λ = 0.0 and α = ).
Fig. 6.9. Results of different methods when restoring the degraded images corrupted by Gaussian blur and then multiplicative noise with K = 0. Row 1: degraded images. Rows 2 and 4: restored images with different methods. Rows 3 and 5: the plots of the objective versus iterations. (a) Degraded Phantom, (b) degraded Cameraman, (c) AA method (row 2: λ = 0.03; row 4: λ = 0.05), (d) RLO method (row 2: λ = 0.03; row 4: λ = 0.05), (e) our method (row 2: λ = 0.07 and α = ; row 4: λ = 0.07 and α = ).
Table 6.1. The comparisons of PSNR values, the number of iterations and the CPU time in seconds by different methods for the denoising case. For each noise level (K = 0 and K = ), the columns report PSNR (dB), Iter and Time (s) for the AA, RLO, PPB and our method on Phantom, Cameraman and Parrot.

Table 6.2. The comparisons of PSNR values, the number of iterations and the CPU time in seconds by different methods for deblurring with denoising. For each blur (motion blur and Gaussian blur), the columns report PSNR (dB), Iter and Time (s) for the AA, RLO and our method on Phantom, Cameraman and Parrot.

and the Taylor expansion of $\log(1 + x)$ for $x \in (-1, 1)$, we get:

$$\log\frac{B(K - \frac{1}{2})}{B(K)} = \log 2 + \frac{1}{4K} + O\Big(\frac{1}{K^2}\Big).$$

Further, in view of the Taylor expansion of $e^x$, we have

$$\frac{B(K - \frac{1}{2})}{B(K)} = 2\exp\Big(\frac{1}{4K} + O\Big(\frac{1}{K^2}\Big)\Big) = 2 + \frac{1}{2K} + O\Big(\frac{1}{K^2}\Big).$$

Now we are ready to prove Proposition 3.4:

Proof.
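As a quick numerical sanity check (not part of the proof), the expansion in Lemma 8.1 can be verified via the log-Gamma function. This is an illustrative sketch; `beta_ratio` is our own name:

```python
import math

def beta_ratio(K):
    """B(K - 1/2) / B(K) with B(K) := Gamma(K)^2 / Gamma(2K),
    evaluated stably through math.lgamma."""
    log_B = lambda a: 2.0 * math.lgamma(a) - math.lgamma(2.0 * a)
    return math.exp(log_B(K - 0.5) - log_B(K))

# Lemma 8.1 predicts B(K - 1/2)/B(K) = 2 + 1/(2K) + O(1/K^2);
# the gap to 2 + 1/(2K) shrinks like 1/K^2 as K grows.
for K in (10.0, 100.0, 1000.0):
    print(K, beta_ratio(K), 2.0 + 1.0 / (2.0 * K))
```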
Fig. 6.10. Results by different methods for restoring the image Parrot blurred by different kernels and then corrupted by multiplicative noise with K = 0 (row 1: by motion blur; row 3: by Gaussian blur). (a) Degraded image by motion blur, (b) degraded image by Gaussian blur, (c) AA method (row 1: λ = 0.05; row 3: λ = 0.04), (d) RLO method (row 1: λ = 0.05; row 3: λ = 0.04), (e) our method (row 1: λ = 0.08 and α = ; row 3: λ = 0.07 and α = ).

(i) Based on (3.) and $\theta = \frac{1}{K}$, we have:

$$\int_0^{+\infty} p_Y(y)\log p_Y(y)\,dy = \int_0^{+\infty} p_Y(y)\Big(\log\frac{2}{\theta^K\Gamma(K)} - (2K + 1)\log y - \frac{K}{y^2}\Big)dy.$$
Let $x = \frac{1}{y^2}$. Note that the mean of $\eta$ equals 1; then we get:

$$\int_0^{+\infty} p_Y(y)\log p_Y(y)\,dy = \int_0^{+\infty} p_\eta(x)\Big(\log\frac{2}{\theta^K\Gamma(K)} + \big(K + \tfrac{1}{2}\big)\log x - Kx\Big)dx$$
$$= \log 2 - \log\big(\theta^K\Gamma(K)\big) + \big(K + \tfrac{1}{2}\big)\big(\psi(K) - \log K\big) - K$$
$$= \log 2 - \log\big(\sqrt{K}\,\Gamma(K)\big) + \big(K + \tfrac{1}{2}\big)\psi(K) - K,$$

where we use a property of the Gamma distribution, $E_\eta(\log x) = \psi(K) - \log K$ (Theorem . in [33]), and recall that $\psi(K) := \frac{d\log\Gamma(K)}{dK}$ is the digamma function (see [2]).

(ii) Based on the PDFs of $Y$ and the Gaussian distribution $N(\mu_K, \sigma_K^2)$, we have:

$$\int_{-\infty}^{+\infty} p_Y(y)\log p_{N(\mu_K,\sigma_K^2)}(y)\,dy = \int_{-\infty}^{+\infty} p_Y(y)\log\Big(\frac{1}{\sqrt{2\pi}\sigma_K}\,e^{-\frac{(y - \mu_K)^2}{2\sigma_K^2}}\Big)dy$$
$$= -\int_{-\infty}^{+\infty} p_Y(y)\Big(\log\big(\sqrt{2\pi}\sigma_K\big) + \frac{(y - \mu_K)^2}{2\sigma_K^2}\Big)dy = -\log\big(\sqrt{2\pi}\sigma_K\big) - \frac{1}{2} = -\frac{1}{2}\log\big(2\pi e\,\sigma_K^2\big),$$

where we have utilized the definition of the variance, i.e., $\int_{-\infty}^{+\infty} p_Y(y)(y - \mu_K)^2\,dy = \sigma_K^2$.

(iii) In view of the results in (i) and (ii), using the approximation of $\psi(x)$ in [2]:

$$\psi(x) = \log x - \frac{1}{2x} + O\Big(\frac{1}{x^2}\Big),$$

we have:

$$D_{KL}\big(Y \,\big\|\, N(\mu_K, \sigma_K^2)\big) = \log 2 - \log\big(\sqrt{K}\,\Gamma(K)\big) + \big(K + \tfrac{1}{2}\big)\psi(K) - K + \frac{1}{2}\log\big(2\pi e\,\sigma_K^2\big)$$
$$= \log 2 - \log\big(\sqrt{K}\,\Gamma(K)\big) + \big(K + \tfrac{1}{2}\big)\Big(\log K - \frac{1}{2K}\Big) - K + \frac{1}{2}\log\big(2\pi e\,\sigma_K^2\big) + O\Big(\frac{1}{K}\Big)$$
$$= \log 2 + \frac{1}{2}\log\big(K\sigma_K^2\big) + O\Big(\frac{1}{K}\Big). \tag{8.4}$$

Here, we obtain (8.4) by simplifying $D_{KL}(Y \,\|\, N(\mu_K, \sigma_K^2))$ with (8.3). Moreover, based on the results in Proposition 3., we have:

$$\sigma_K^2 = E(Y^2) - E^2(Y) = \frac{K}{K - 1} - \frac{K\,\Gamma^2(K - \frac{1}{2})}{\Gamma^2(K)} = \frac{K}{K - 1} - \frac{K}{2K - 1}\,\frac{B(K - \frac{1}{2})}{B(K)}. \tag{8.5}$$
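Formula (8.5) can likewise be checked numerically; combined with Lemma 8.1 it gives $\sigma_K^2 = \frac{1}{4K} + O(1/K^2)$, i.e., $4K\sigma_K^2 \to 1$. A sketch (requires $K > 1$; `sigma2` is our own name):

```python
import math

def sigma2(K):
    """sigma_K^2 = E(Y^2) - E^2(Y) for Y = eta^(-1/2), eta ~ Gamma(K, 1/K):
    K/(K - 1) - K * (Gamma(K - 1/2) / Gamma(K))^2, as in (8.5); needs K > 1."""
    r = math.exp(math.lgamma(K - 0.5) - math.lgamma(K))  # Gamma(K-1/2)/Gamma(K)
    return K / (K - 1.0) - K * r * r

# As K grows, 4 * K * sigma2(K) approaches 1:
for K in (10.0, 100.0, 1000.0):
    print(K, sigma2(K), 1.0 / (4.0 * K))
```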
Substituting the result (8.5) into (8.4) and applying the expansion in Lemma 8.1, we get:

$$D_{KL}\big(Y \,\big\|\, N(\mu_K, \sigma_K^2)\big) = \log 2 + \frac{1}{2}\log\bigg(K\bigg(\frac{K}{K - 1} - \frac{K}{2K - 1}\Big(2 + \frac{1}{2K} + O\Big(\frac{1}{K^2}\Big)\Big)\bigg)\bigg) + O\Big(\frac{1}{K}\Big)$$
$$= \log 2 + \frac{1}{2}\log\Big(\frac{1}{4} + O\Big(\frac{1}{K}\Big)\Big) + O\Big(\frac{1}{K}\Big) = O\Big(\frac{1}{K}\Big),$$

which yields the assertion.

REFERENCES

[1] Probabilistic patch-based filter (PPB and NL-SAR). http://www.math.u-bordeaux.fr/~cdeledal/ppb.php.
[2] M. Abramowitz and I. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series 55, seventh printing edition.
[3] L. Ambrosio, N. Fusco, and D. Pallara. Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press, 2000.
[4] G. Andrews, R. Askey, and R. Roy. Special Functions. Cambridge University Press.
[5] G. Aubert and J. Aujol. A variational approach to remove multiplicative noise. SIAM J. Appl. Math., 68:925–946, 2008.
[6] G. Aubert and P. Kornprobst. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, volume 147. Springer, 2006.
[7] P. Bickel and K. Doksum. Mathematical Statistics: Basic Ideas and Selected Topics, volume 1. Pearson, second edition, 2007.
[8] J. Bioucas-Dias and M. Figueiredo. Total variation restoration of speckled images using a split-Bregman algorithm. In Proc. IEEE ICIP, 2010.
[9] J. Bioucas-Dias and M. Figueiredo. Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process., 19:1720–1730, 2010.
[10] A. Bovik. Handbook of Image and Video Processing. Academic Press, 2000.
[11] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learning, 3(1):1–122, 2010.
[12] J. Cai, R. Chan, and Z. Shen. A framelet-based image inpainting algorithm. Appl. Comput. Harmon. Anal., 24:131–149, 2008.
[13] A. Chambolle. An algorithm for total variation minimization and applications. J. Math.
Imaging Vis., 20:89–97, 2004.
[14] A. Chambolle and P. Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76, 1997.
[15] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis., 40(1):120–145, 2011.
[16] T. Chan and J. Shen. Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. SIAM, 2005.
[17] T. Chartrand and T. Asaki. A variational approach to reconstructing images corrupted by Poisson noise. J. Math. Imaging Vis., 27:257–263, 2007.
[18] C. Chaux, J. Pesquet, and N. Pustelnik. Nested iterative algorithms for convex constrained image recovery problems. SIAM J. Imaging Sci., 2:730–762, 2009.
[19] P. Combettes and J. Pesquet. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Selected Topics in Signal Processing, 1:564–574, 2007.
[20] C.-A. Deledalle, L. Denis, and F. Tupin. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process., 18(12):2661–2672, 2009.
[21] Y. Dong, M. Hintermüller, and M. Neri. An efficient primal-dual method for L1-TV image restoration. SIAM J. Imaging Sci., 2:1168–1189, 2009.
[22] S. Durand, J. Fadili, and M. Nikolova. Multiplicative noise removal using L1 fidelity on frame coefficients. J. Math. Imaging Vis., 36:201–226, 2010.
[23] J. Einmahl. Poisson and Gaussian approximation of weighted local empirical processes. Stoch. Proc. Appl., 70:31–58, 1997.
[24] E. Esser, X. Zhang, and T. Chan. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci., 3(4):1015–1046, 2010.
[25] M. Figueiredo and J. Bioucas-Dias. Restoration of Poissonian images using alternating direction optimization. IEEE Trans. Image Process., 19:3133–3145, 2010.
[26] G. Gilboa and S. Osher. Nonlocal operators with applications to image processing. Multiscale Model. Simul., 7:1005–1028, 2008.
[27] E. Giusti. Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, Boston, 1984.
[28] T. Goldstein and S. Osher. The split Bregman algorithm for L1 regularized problems. SIAM J. Imaging Sci., 2:323–343, 2009.
[29] G. Grimmett and D. Welsh. Probability: An Introduction. Oxford Science Publications, 1986.
[30] J. Hadamard. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin, 1902.
[31] Y. Huang, L. Moisan, M. Ng, and T. Zeng. Multiplicative noise removal via a learned dictionary. IEEE Trans. Image Process., 21, 2012.
[32] Y. Huang, M. Ng, and Y. Wen. A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci., 2:20–40, 2009.
[33] M. Khodabin and A. Ahmadabadi. Some properties of generalized gamma distribution. Mathematical Sciences, 4(1), 2010.
[34] P. Kornprobst, R. Deriche, and G. Aubert. Image sequence analysis via partial differential equations. J. Math. Imaging Vis., 11(1):5–26, 1999.
[35] T. Le, R. Chartrand, and T. Asaki. A variational approach to reconstructing images corrupted by Poisson noise. J. Math. Imaging Vis., 27:257–263, 2007.
[36] F. Li, M. Ng, and C. Shen. Multiplicative noise removal with spatially varying regularization parameters. SIAM J. Imaging Sci., 3:1–20, 2010.
[37] T. Pock, D. Cremers, H.
Bischof, and A. Chambolle. An algorithm for minimizing the Mumford-Shah functional. In Proc. 12th IEEE Int'l Conf. Computer Vision, pages 1133–1140, 2009.
[38] B. Quinn and H. MacGillivray. Normal approximations to discrete unimodal distributions. J. Appl. Probab., 23(4):1013–1018, 1986.
[39] L. Rudin, P. Lions, and S. Osher. Geometric Level Set Methods in Imaging, Vision, and Graphics, chapter Multiplicative denoising and deblurring: theory and algorithms, pages 103–119. Springer, 2003.
[40] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.
[41] J. Shi and S. Osher. A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci., 1:294–321, 2008.
[42] G. Steidl and T. Teuber. Removing multiplicative noise by Douglas-Rachford splitting methods. J. Math. Imaging Vis., 36:168–184, 2010.
[43] T. Teuber and A. Lang. Nonlocal filters for removing multiplicative noise. In SSVM, volume 6667, pages 50. Springer, 2011.
[44] Y. Xiao, T. Zeng, J. Yu, and M. Ng. Restoration of images corrupted by mixed Gaussian-impulse noise via l1–l0 minimization. Pattern Recogn., 44(8):1708–1728, 2011.
[45] T. Zeng and M. Ng. On the total variation dictionary model. IEEE Trans. Image Process., 19(3):821–825, 2010.
SIAM J. Imaging Sciences, Vol. 6, No. 3, pp. 1598–1625. © 2013 Society for Industrial and Applied Mathematics.
More informationregularization parameter choice
A duality-based splitting method for l 1 -TV image restoration with automatic regularization parameter choice Christian Clason Bangti Jin Karl Kunisch November 30, 2009 A novel splitting method is presented
More informationIMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND
IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND JIAN-FENG CAI, BIN DONG, STANLEY OSHER, AND ZUOWEI SHEN Abstract. The variational techniques (e.g., the total variation based method []) are
More informationarxiv: v1 [math.oc] 3 Jul 2014
SIAM J. IMAGING SCIENCES Vol. xx, pp. x c xxxx Society for Industrial and Applied Mathematics x x Solving QVIs for Image Restoration with Adaptive Constraint Sets F. Lenzen, J. Lellmann, F. Becker, and
More informationContraction Methods for Convex Optimization and Monotone Variational Inequalities No.16
XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of
More informationA GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR CONVEX OPTIMIZATION IN IMAGING SCIENCE
A GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR CONVEX OPTIMIZATION IN IMAGING SCIENCE ERNIE ESSER XIAOQUN ZHANG TONY CHAN Abstract. We generalize the primal-dual hybrid gradient
More informationADMM algorithm for demosaicking deblurring denoising
ADMM algorithm for demosaicking deblurring denoising DANIELE GRAZIANI MORPHEME CNRS/UNS I3s 2000 route des Lucioles BP 121 93 06903 SOPHIA ANTIPOLIS CEDEX, FRANCE e.mail:graziani@i3s.unice.fr LAURE BLANC-FÉRAUD
More informationMMSE Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm
Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm Bodduluri Asha, B. Leela kumari Abstract: It is well known that in a real world signals do not exist without noise, which may be negligible
More informationTRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS
TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS Martin Kleinsteuber and Simon Hawe Department of Electrical Engineering and Information Technology, Technische Universität München, München, Arcistraße
More informationCONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence
1 CONVERGENCE THEOR G. ALLAIRE CMAP, Ecole Polytechnique 1. Maximum principle 2. Oscillating test function 3. Two-scale convergence 4. Application to homogenization 5. General theory H-convergence) 6.
More informationFast Multilevel Algorithm for a Minimization Problem in Impulse Noise Removal
Fast Multilevel Algorithm for a Minimization Problem in Impulse Noise Removal Raymond H. Chan and Ke Chen September 25, 7 Abstract An effective 2-phase method for removing impulse noise was recently proposed.
More informationTools from Lebesgue integration
Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given
More informationA GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR TV MINIMIZATION
A GENERAL FRAMEWORK FOR A CLASS OF FIRST ORDER PRIMAL-DUAL ALGORITHMS FOR TV MINIMIZATION ERNIE ESSER XIAOQUN ZHANG TONY CHAN Abstract. We generalize the primal-dual hybrid gradient (PDHG) algorithm proposed
More informationA NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang
A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.
More informationPDEs in Image Processing, Tutorials
PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower
More informationImage Restoration with Mixed or Unknown Noises
Image Restoration with Mixed or Unknown Noises Zheng Gong, Zuowei Shen, and Kim-Chuan Toh Abstract This paper proposes a simple model for image restoration with mixed or unknown noises. It can handle image
More informationA First Order Primal-Dual Algorithm for Nonconvex T V q Regularization
A First Order Primal-Dual Algorithm for Nonconvex T V q Regularization Thomas Möllenhoff, Evgeny Strekalovskiy, and Daniel Cremers TU Munich, Germany Abstract. We propose an efficient first order primal-dual
More informationAdaptive methods for control problems with finite-dimensional control space
Adaptive methods for control problems with finite-dimensional control space Saheed Akindeinde and Daniel Wachsmuth Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy
More informationImage Cartoon-Texture Decomposition and Feature Selection using the Total Variation Regularized L 1 Functional
Image Cartoon-Texture Decomposition and Feature Selection using the Total Variation Regularized L 1 Functional Wotao Yin 1, Donald Goldfarb 1, and Stanley Osher 2 1 Department of Industrial Engineering
More informationContents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.
Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological
More informationCoordinate Update Algorithm Short Course Operator Splitting
Coordinate Update Algorithm Short Course Operator Splitting Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 25 Operator splitting pipeline 1. Formulate a problem as 0 A(x) + B(x) with monotone operators
More informationQuantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation
1 Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation Xiaohao Cai, Marcelo Pereyra, and Jason D. McEwen arxiv:1811.02514v1 [eess.sp] 4 Nov 2018 Abstract Inverse problems
More informationRIGOROUS MATHEMATICAL INVESTIGATION OF A NONLINEAR ANISOTROPIC DIFFUSION-BASED IMAGE RESTORATION MODEL
Electronic Journal of Differential Equations, Vol. 2014 (2014), No. 129, pp. 1 9. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu RIGOROUS MATHEMATICAL
More informationLinear Diffusion and Image Processing. Outline
Outline Linear Diffusion and Image Processing Fourier Transform Convolution Image Restoration: Linear Filtering Diffusion Processes for Noise Filtering linear scale space theory Gauss-Laplace pyramid for
More informationFundamentals of Non-local Total Variation Spectral Theory
Fundamentals of Non-local Total Variation Spectral Theory Jean-François Aujol 1,2, Guy Gilboa 3, Nicolas Papadakis 1,2 1 Univ. Bordeaux, IMB, UMR 5251, F-33400 Talence, France 2 CNRS, IMB, UMR 5251, F-33400
More informationPatch similarity under non Gaussian noise
The 18th IEEE International Conference on Image Processing Brussels, Belgium, September 11 14, 011 Patch similarity under non Gaussian noise Charles Deledalle 1, Florence Tupin 1, Loïc Denis 1 Institut
More informationLecture 4 Lebesgue spaces and inequalities
Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how
More informationNew Identities for Weak KAM Theory
New Identities for Weak KAM Theory Lawrence C. Evans Department of Mathematics University of California, Berkeley Abstract This paper records for the Hamiltonian H = p + W (x) some old and new identities
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationAn Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems
An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems Bingsheng He Feng Ma 2 Xiaoming Yuan 3 January 30, 206 Abstract. The primal-dual hybrid gradient method
More informationLecture 4 Colorization and Segmentation
Lecture 4 Colorization and Segmentation Summer School Mathematics in Imaging Science University of Bologna, Itay June 1st 2018 Friday 11:15-13:15 Sung Ha Kang School of Mathematics Georgia Institute of
More informationAn unconstrained multiphase thresholding approach for image segmentation
An unconstrained multiphase thresholding approach for image segmentation Benjamin Berkels Institut für Numerische Simulation, Rheinische Friedrich-Wilhelms-Universität Bonn, Nussallee 15, 53115 Bonn, Germany
More informationNon-negative Quadratic Programming Total Variation Regularization for Poisson Vector-Valued Image Restoration
University of New Mexico UNM Digital Repository Electrical & Computer Engineering Technical Reports Engineering Publications 5-10-2011 Non-negative Quadratic Programming Total Variation Regularization
More informationA Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem
A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem Kangkang Deng, Zheng Peng Abstract: The main task of genetic regulatory networks is to construct a
More informationFast Angular Synchronization for Phase Retrieval via Incomplete Information
Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department
More informationOn the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities
On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities Caihua Chen Xiaoling Fu Bingsheng He Xiaoming Yuan January 13, 2015 Abstract. Projection type methods
More informationA primal-dual approach for a total variation Wasserstein flow
A primal-dual approach for a total variation Wasserstein flow Martin Benning 1, Luca Calatroni 2, Bertram Düring 3, Carola-Bibiane Schönlieb 4 1 Magnetic Resonance Research Centre, University of Cambridge,
More informationOn convergence rate of the Douglas-Rachford operator splitting method
On convergence rate of the Douglas-Rachford operator splitting method Bingsheng He and Xiaoming Yuan 2 Abstract. This note provides a simple proof on a O(/k) convergence rate for the Douglas- Rachford
More informationOptimal stopping time formulation of adaptive image filtering
Optimal stopping time formulation of adaptive image filtering I. Capuzzo Dolcetta, R. Ferretti 19.04.2000 Abstract This paper presents an approach to image filtering based on an optimal stopping time problem
More informationRegularity and compactness for the DiPerna Lions flow
Regularity and compactness for the DiPerna Lions flow Gianluca Crippa 1 and Camillo De Lellis 2 1 Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy. g.crippa@sns.it 2 Institut für Mathematik,
More informationSYMMETRY OF POSITIVE SOLUTIONS OF SOME NONLINEAR EQUATIONS. M. Grossi S. Kesavan F. Pacella M. Ramaswamy. 1. Introduction
Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 12, 1998, 47 59 SYMMETRY OF POSITIVE SOLUTIONS OF SOME NONLINEAR EQUATIONS M. Grossi S. Kesavan F. Pacella M. Ramaswamy
More informationA Convex Relaxation of the Ambrosio Tortorelli Elliptic Functionals for the Mumford Shah Functional
A Convex Relaxation of the Ambrosio Tortorelli Elliptic Functionals for the Mumford Shah Functional Youngwook Kee and Junmo Kim KAIST, South Korea Abstract In this paper, we revisit the phase-field approximation
More information