New inexact explicit thresholding/shrinkage formulas for inverse problems with overlapping group sparsity


Liu et al. EURASIP Journal on Image and Video Processing (2016) 2016:8

RESEARCH (Open Access)

New inexact explicit thresholding/shrinkage formulas for inverse problems with overlapping group sparsity

Gang Liu, Ting-Zhu Huang*, Xiao-Guang Lv and Jun Liu

*Correspondence: Ting-Zhu Huang, Research Center for Image and Vision Computing / School of Mathematical Sciences, University of Electronic Science and Technology of China, Xiyuan Ave., Chengdu, China. Full list of author information is available at the end of the article.

Abstract

Least-squares regression problems, or inverse problems, have been widely studied in many fields such as compressive sensing, signal processing, and image processing. To solve this kind of ill-posed problem, a regularization term (i.e., a regularizer) must be introduced, under the assumption that the solution has some specific property, such as sparsity or group sparsity. Widely used regularizers include the l1 norm, the total variation (TV) semi-norm, and so on. Recently, a new regularization term with overlapping group sparsity has been considered. Majorization-minimization iteration methods or variable-duplication methods are often applied to solve it; however, there has been no direct method for the relevant problems, because of the difficulty caused by the overlapping. In this paper, we propose new inexact explicit shrinkage formulas for one class of these problems, namely those whose regularization terms have translation-invariant overlapping groups. Moreover, we apply our results to TV deblurring and denoising problems with overlapping group sparsity, using the alternating direction method of multipliers (ADMM) to solve them iteratively. Numerical results verify the validity and effectiveness of the new inexact explicit shrinkage formulas.

Keywords: Overlapping group sparsity, Total variation, ADMM, Image deblurring, Regularization, Explicit shrinkage formula

1 Introduction

Least-squares regression problems, or inverse problems, have been widely studied in many fields such as compressive sensing, signal processing, image processing, statistics, and machine learning. Regularization terms with sparse representations (for instance, the l1-norm regularizer) have become an important tool in these applications, and a list of methods have been proposed [1-4]. These methods are based on the assumption that signals or images have a sparse representation, that is, that they contain only a few nonzero entries. The corresponding task is to solve the following problem:

    min_z ‖z‖_1 + (β/2)‖z - x‖_2^2,    (1)

where x ∈ R^n is a given vector, z ∈ R^n, and ‖z‖_p = (Σ_{i=1}^n |z_i|^p)^{1/p} with p = 1, 2 denotes the l_p norm of z. The first term of (1) is called the regularization term, the second term is called the fidelity term, and β > 0 is the regularization parameter.

To further improve the solutions, recent studies suggested going beyond sparsity and taking into account additional information about the underlying structure of the solutions [1, 2, 5]. In particular, a wide class of solutions with a specific group sparsity structure has been considered. In this case, a group sparse vector z can be divided into groups of components satisfying (a) only a few of the groups contain nonzero values and (b) these groups need not themselves be sparse. If we insert such group vectors into a matrix as its row vectors, the matrix will have only a few nonzero rows, and these rows may not be sparse. This property is called group sparsity or joint sparsity, and a large literature has considered these new sparse problems [1-6]. These problems can be formulated as the following expression:
following expression 06Liu et al Open Access This article is distributed under the terms of the Creative Commons Attribution 40 International License which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original authors and the source, provide a link to the Creative Commons license, and indicate if changes were made

    min_z Σ_{i=1}^r ‖z_[i]‖_2 + (β/2)‖z - x‖_2^2,    (2)

where |z_i| is the absolute value of z_i, and z_[i] is the i-th group of z, with z_[i] ∩ z_[j] = ∅ (i ≠ j) and ∪_{i=1}^r z_[i] = z. Group sparse solutions from (2) have better representation and have been widely studied both for convex and nonconvex cases [1, 3, 6-9].

More recently, overlapping group sparsity (OGS) has been considered [1, 3, 10-19]. These methods are based on the assumption that signals or images have a special sparse representation with OGS. The task is to solve the following problem:

    min_z ‖z‖_{2,1} + (β/2)‖z - x‖_2^2,    (3)

where ‖z‖_{2,1} = Σ_{i=1}^n ‖z_{i,g}‖_2 is the generalized l_{2,1} norm. Here, each z_{i,g} is a group vector containing s (called the group size) elements that surround the i-th entry of z. For example, z_{i,g} = (z_{i-1}, z_i, z_{i+1}) with s = 3. In this case, z_{i-1,g}, z_{i,g}, and z_{i+1,g} all contain the i-th entry z_i, which means overlapping, in contrast to the form of group sparsity in (2). In particular, if s = 1, the generalized l_{2,1} norm degenerates into the original l_1 norm, and the regularization problem (3) degenerates to (1).

To be more general, researchers considered the weighted generalized l_{2,1} norm, ‖z‖_{w,2,1} = Σ_{i=1}^n ‖w_g ∘ z_{i,g}‖_2 (we only consider the case where each group has the same weight, which means translation invariance), instead of the former generalized l_{2,1} norm. The task can be extended to

    min_z ‖z‖_{w,2,1} + (β/2)‖z - x‖_2^2,    (4)

where w_g is a nonnegative real vector with the same size as z_{i,g} and ∘ is the pointwise (Hadamard) product. For instance, w_g ∘ z_{i,g} = (w_{g,1} z_{i-1}, w_{g,2} z_i, w_{g,3} z_{i+1}) with s = 3, as in the former example. In particular, the weighted generalized l_{2,1} norm degenerates into the generalized l_{2,1} norm if every entry of w_g equals 1.

The problems (3) and (4) have been considered in [12, 14-19]. These works solve the relevant problems by variable-duplication methods (variable splitting, latent/auxiliary variables, etc.). For example, Deng et al. [12] introduced a diagonal matrix G in their method. However, the matrix G is not easy to find and breaks the structure of the coefficient matrix, which makes it difficult to compute the solution in the high-dimensional vector case. Moreover, it is difficult to extend this method to the matrix case.

Considering the matrix case of the problem (4), we get

    min_A ‖A‖_{W,2,1} + (β/2)‖A - X‖_F^2,    (5)

where X, A ∈ R^{m×n} and ‖A‖_{W,2,1} = Σ_{i=1}^m Σ_{j=1}^n ‖W_g ∘ A_{i,j,g}‖_F. Here, each A_{i,j,g} is a group matrix containing K1 × K2 (called the group size) elements that surround the (i, j)-th entry of A. It is defined entrywise by

    (W_g ∘ A_{i,j,g})_{p,q} = W_{g,p,q} A_{i-l1+p-1, j-l2+q-1} ∈ R^{K1×K2},  p = 1, ..., K1,  q = 1, ..., K2,

where l1 = ⌊(K1-1)/2⌋, l2 = ⌊(K2-1)/2⌋, r1 = ⌊K1/2⌋, r2 = ⌊K2/2⌋, so that l1 + r1 = K1 - 1 and l2 + r2 = K2 - 1, and ⌊x⌋ denotes the largest integer less than or equal to x. In particular, if K1 = 1, this problem degenerates to the former vector case (4); if K1 = K2 = 1, it degenerates to the original l_1 regularization problem for the matrix case.

Much more recently, Chen et al. [20] considered the problem (5) with W_{g,i,j} ≡ 1 for i = 1, ..., K1, j = 1, ..., K2. They used an iterative algorithm based on the principle of majorization minimization (MM) to solve this problem. Their experiments showed that their method for solving the problem (3) is more efficient than other variable-duplication methods (variable splitting, latent/auxiliary variables, etc.) [12, 14-19]. Although their method is efficient for computing the solution, it can still take considerable time, since the solution is obtained after many iterations rather than by a direct one-step computation.

To our knowledge, there have been no direct methods for solving the relevant problems (3), (4), and (5). In this paper, we propose new inexact explicit shrinkage formulas for solving them directly; these are direct
methods and are faster or more convenient than other methods, for instance, the most recent MM iteration method [20]. This new direct method can save much time compared with the MM method in [20] while producing very similar solutions. Numerical results are given to show the effectiveness of our new explicit shrinkage formulas. Moreover, the new method can be applied to the corresponding subproblems in many other OGS problems, such as the wavelet regularizer with OGS in compressive sensing and the total variation (TV) regularizer with OGS in image restoration [21, 22]. In particular, we apply our results to image restoration using TV with OGS in this work, similarly to [21, 22]. Moreover, we extend the works [21, 22], which only consider anisotropic TV (ATV), to both the ATV and isotropic TV (ITV) cases; more

details can be found in Section 4. Numerical results will show that our method can save much time while obtaining very similar results.

The outline of the paper is as follows. In Section 2, we deduce our explicit shrinkage formulas in detail for the OGS problems (3), (4), and (5). In Section 3, we propose some extensions of these shrinkage formulas. In Section 4, we apply our results to image deblurring and denoising problems with OGS TV. Numerical results are given in Section 5. Finally, we conclude the paper in Section 6.

2 OGS shrinkage

2.1 Classic shrinkage

In this subsection, we review the original shrinkage formulas and their principles, since our new explicit OGS shrinkage formulas are based on them. Firstly, we give the following definition.

Definition 1. Define the shrinkage mappings Sh1 and Sh2 from R^N × R to R^N by

    Sh1(x, β)_i = sgn(x_i) max{ |x_i| - 1/β, 0 },    (6)

    Sh2(x, β) = (x / ‖x‖_2) max{ ‖x‖_2 - 1/β, 0 },    (7)

where the expression (7) is taken to be zero when its second factor is zero, and sgn represents the signum function indicating the sign of a number; more precisely, sgn(x) = 0 if x = 0, sgn(x) = -1 if x < 0, and sgn(x) = 1 if x > 0.

The shrinkage (6) is known as soft thresholding and occurs in many algorithms related to sparsity, since it is the proximal mapping of the l_1 norm. The shrinkage (7) is known as the high-dimensional (multidimensional) shrinkage formula. Now, we consider solving the following problems:

    min_z ‖z‖_p + (β/2)‖z - x‖_2^2,  p = 1, 2.    (8)

The minimizer of (8) with p = 1 is given by

    argmin_z ‖z‖_1 + (β/2)‖z - x‖_2^2 = Sh1(x, β).    (9)

Due to the additivity and separability of both the l_1 norm and the square of the l_2 norm, Eq. (9) can be deduced easily from the following decomposition:

    min_z ‖z‖_1 + (β/2)‖z - x‖_2^2 = Σ_{i=1}^n min_{z_i} |z_i| + (β/2)(z_i - x_i)^2.    (10)

The minimizer of (8) with p = 2 is given by

    argmin_z ‖z‖_2 + (β/2)‖z - x‖_2^2 = Sh2(x, β).    (11)

This formula is deduced from the Euler equation of (8) with p = 2. In particular, without considering the cases x = 0 or z = 0, we obtain the Euler equation

    z/‖z‖_2 + β(z - x) = 0,  z ≠ 0, x ≠ 0,    (12)

that is,

    z (1 + 1/(β‖z‖_2)) = x.    (13)

From (12) we can easily see that a necessary condition is that the vector z is parallel to the vector x, that is, z/‖z‖_2 = x/‖x‖_2. Substituting this into (13), and then considering the cases x = 0 or z = 0, we obtain the formula (7). More details can be found in [20, 21].

Our new explicit OGS shrinkage formulas are based on these observations, especially the properties of additivity, separability, and parallelism.

Remarks 1. The non-overlapping group problem (2) is easy to solve by a simple shrinkage formula applied group by group, which is not used in this work. More details can be found in [1, 2, 3].

2.2 The OGS shrinkage formulas

Now, we focus first on the problem (3). The difficulty of this problem is the overlapping; therefore, we must use some special techniques to avoid the overlapping. That is the point of our new explicit OGS shrinkage formulas.

It is obvious that the first term of the problem (3) is additive and separable. So if we find some rule such that the second term of the problem (3) has the same properties with respect to the same group variables as the first term, one approximate solution of (3) can easily be found, similarly to the classic shrinkage. Assuming the periodic boundary condition (PBC) is used here (a boundary condition (BC) is necessary in OGS functionals), we observe that each entry z_i of the vector z appears exactly s times among the groups in the first term. Therefore, to treat the second term group-wise in the same way while maintaining the problem (3) unchanged, we rewrite ‖z - x‖_2^2 as (1/s) Σ_{i=1}^n ‖z_{i,g} - x_{i,g}‖_2^2. After these manipulations, we have

    f_m(z) = Σ_{i=1}^n ‖z_{i,g}‖_2 + (β/2)‖z - x‖_2^2
           = Σ_{i=1}^n ‖z_{i,g}‖_2 + (β/2) (1/s) Σ_{i=1}^n ‖z_{i,g} - x_{i,g}‖_2^2
           = Σ_{i=1}^n ( ‖z_{i,g}‖_2 + (β/(2s)) ‖z_{i,g} - x_{i,g}‖_2^2 ),    (14)

where x_{i,g} is defined similarly to z_{i,g}.
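Before moving on, here is a minimal NumPy sketch of the two shrinkage mappings of Definition 1, which the OGS formulas below apply group by group; the function names are ours, and the 1/β threshold follows the convention of (6) and (7).

```python
import numpy as np

def soft_threshold(x, beta):
    """Sh1(x, beta), Eq. (6): componentwise soft thresholding,
    the minimizer of ||z||_1 + (beta/2)||z - x||_2^2."""
    return np.sign(x) * np.maximum(np.abs(x) - 1.0 / beta, 0.0)

def shrink2(x, beta):
    """Sh2(x, beta), Eq. (7): multidimensional shrinkage,
    the minimizer of ||z||_2 + (beta/2)||z - x||_2^2."""
    nrm = np.linalg.norm(x)
    if nrm == 0.0:
        return np.zeros_like(x)   # x = 0: the expression is taken to be zero (Definition 1)
    return x * max(nrm - 1.0 / beta, 0.0) / nrm
```

For example, shrink2(np.array([3.0, 4.0]), 1.0) returns the input scaled by (5 - 1)/5 = 0.8, since the norm of (3, 4) is 5 and the threshold is 1/β = 1.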

For example, set s = 3 and define z_{i,g} = (z_{i-1}, z_i, z_{i+1}). The generalized l_{2,1} norm ‖z‖_{2,1} can then be treated as a generalized l_1 norm of "generalized points": each entry of this generalized l_1 norm is itself a vector z_{i,g}, and the absolute value of an entry is replaced by the l_2 norm of z_{i,g}. See Fig. 1a: the top line is the vector z, the rectangles with dashed lines are the original groups z_{i,g}, and the rectangles with solid lines are the generalized points. In the case of PBC, each line of Fig. 1a is a translated copy of the others. We operate on the vector x in the same way as on z. Putting these generalized points (the rectangles with solid lines in the figure) as the columns of a matrix, we can regard Σ_i ‖z_{i,g} - x_{i,g}‖_2^2 as the squared matrix Frobenius norm ‖[z_{i,g}] - [x_{i,g}]‖_F^2, where [z_{i,g}] is the matrix whose rows are the lines of Fig. 1a ([x_{i,g}] is defined similarly). This is why the second equality in (14) holds.

Therefore, for each i in the last line of (14), from Eqs. (7) and (11) we can obtain

    argmin_{z_{i,g}} ‖z_{i,g}‖_2 + (β/(2s)) ‖z_{i,g} - x_{i,g}‖_2^2
      = Sh2(x_{i,g}, β/s)
      = max{ ‖x_{i,g}‖_2 - s/β, 0 } x_{i,g} / ‖x_{i,g}‖_2,    (15)

where z_{i,g} = ((z_{i,g})_1, (z_{i,g})_2, ..., (z_{i,g})_s).

As in Fig. 1a, for each i, the i-th entry x_i (or z_i) of the vector x (or z) appears s times, so each z_i is computed s times in s different groups. However, the results from (15) cannot all be satisfied simultaneously, because the values of z_i obtained from (15) in the s different groups are different. Moreover, for each i in the last line of (14), the result (15) requires the vector z_{i,g} to be parallel to the vector x_{i,g}, as in Section 2.1. Keeping this point in mind, and ignoring for the moment the cases z_{i,g} = 0 or x_{i,g} = 0, take in particular s = 4 and z_{i,g} = (z_{i-1}, z_i, z_{i+1}, z_{i+2}). The vector z can then be split as

    z = (z_1, z_2, ..., z_n) = (1/4) Σ_{i=1}^n z̄_{i,g},    (16)

where z̄_{i,g} = (0, ..., 0, z_{i-1}, z_i, z_{i+1}, z_{i+2}, 0, ..., 0) denotes the expansion of z_{i,g} to length n by zeros, with periodic wrap-around for the groups near the ends (e.g., z̄_{1,g} = (z_1, z_2, z_3, 0, ..., 0, z_n)). Let x̄_{i,g} be the expansion of x_{i,g} defined in the same way. Then x = (1/4) Σ_{i=1}^n x̄_{i,g}, and moreover ‖z̄_{i,g}‖_2 = ‖z_{i,g}‖_2 and ‖x̄_{i,g}‖_2 = ‖x_{i,g}‖_2 for every i.

On one hand, the Euler equation of f_m with s = 4 is given by

    Σ_{i=1}^n z̄_{i,g}/‖z̄_{i,g}‖_2 + (β/4) Σ_{i=1}^n (z̄_{i,g} - x̄_{i,g}) = 0,    (17)

that is,

    Σ_{i=1}^n ( z̄_{i,g}/‖z̄_{i,g}‖_2 + (β/4)(z̄_{i,g} - x̄_{i,g}) ) = 0.    (18)

From the deduction of the two-dimensional shrinkage formula (7) in Section 2.1, we know that the necessary condition for minimizing the i-th term of the last line in (14) is that z_{i,g} is parallel to x_{i,g}.

Fig. 1 Vector case. The top line is the vector z; the rectangles with dashed lines are the original group vectors z_{i,g}, and the rectangles with solid lines are the generalized points, which change the overlapping into non-overlapping. a Vector. b Weighted vector.
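To see the inconsistency mentioned above concretely, the following tiny sketch (data and names are our own, for illustration only) applies the per-group formula (15) to the three groups that contain one particular entry; the three resulting estimates of that entry do not agree, which is exactly what the averaged formulas derived below reconcile.

```python
import numpy as np

def shrink2(v, beta):
    # Sh2 of Eq. (7); see the sketch in Section 2.1
    nrm = np.linalg.norm(v)
    return v * max(nrm - 1.0 / beta, 0.0) / nrm if nrm > 0 else np.zeros_like(v)

x, s, beta = np.array([0.0, 3.0, -1.0, 2.0, 0.5, 0.0]), 3, 1.0
n, left = len(x), (s - 1) // 2
# per-group minimizers z_{i,g} = Sh2(x_{i,g}, beta/s) from Eq. (15), periodic BC
z_groups = [shrink2(x[np.arange(i - left, i - left + s) % n], beta / s) for i in range(n)]
# three different estimates of the single entry z_2, one from each group containing it
print(z_groups[1][2], z_groups[2][1], z_groups[3][0])   # not equal in general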

This necessary condition means that z̄_{i,g} is parallel to x̄_{i,g} for every i; for example, z̄_{2,g} = (z_1, z_2, z_3, z_4, 0, ..., 0) is parallel to x̄_{2,g} = (x_1, x_2, x_3, x_4, 0, ..., 0). Then, we obtain

    z̄_{i,g}/‖z̄_{i,g}‖_2 = x̄_{i,g}/‖x̄_{i,g}‖_2.    (19)

Therefore, (18) can be changed to

    (β/4) Σ_{i=1}^n (z̄_{i,g} - x̄_{i,g}) + Σ_{i=1}^n x̄_{i,g}/‖x̄_{i,g}‖_2 = 0,    (20)

that is,

    β(z - x) + x̄_{1,g}/‖x̄_{1,g}‖_2 + ... + x̄_{n,g}/‖x̄_{n,g}‖_2 = 0,    (21)

    z = x - (1/β)( x̄_{1,g}/‖x̄_{1,g}‖_2 + ... + x̄_{n,g}/‖x̄_{n,g}‖_2 ).    (22)

For each component, we obtain

    z_i = x_i - (x_i/β)( 1/‖x_{i-2,g}‖_2 + 1/‖x_{i-1,g}‖_2 + 1/‖x_{i,g}‖_2 + 1/‖x_{i+1,g}‖_2 ),    (23)

where the four group norms are exactly those of the groups containing the component x_i. Therefore, when z_{i,g} ≠ 0 and x_{i,g} ≠ 0 for all i, we find a minimizer of (3) along the direction in which all the vectors z̄_{i,g} are parallel to the vectors x̄_{i,g}. In addition, if each summand of (18), (β/4)(z̄_{i,g} - x̄_{i,g}) + z̄_{i,g}/‖z̄_{i,g}‖_2 = 0, holds individually, then z̄_{i,g} is parallel to x̄_{i,g}, and therefore z̄_{i,g}/‖z̄_{i,g}‖_2 = x̄_{i,g}/‖x̄_{i,g}‖_2 holds. Moreover, because of the strict convexity of f_m, we know that the minimizer is unique.

On the other hand, when z_{i,g} = 0 or x_{i,g} = 0, our method may not obtain good results. When x_{i,g} = 0, the minimizer of the subproblem min_{z_{i,g}} ‖z_{i,g}‖_2 + (β/(2s))‖z_{i,g} - x_{i,g}‖_2^2 is exactly z_{i,g} = 0. When x_{i,g} ≠ 0 but the minimizer of this subproblem is z_{i,g} = 0 because the parameter β/s is too small, our method is not able to obtain good results: having z_{i,g} = 0 in one subproblem while z_i ≠ 0 in another makes the element z_i take different values in different subproblems. However, we can still obtain an approximate minimizer in this case, in which the element z_i is a simple summation over the corresponding subproblems containing z_i. We will show in the experiments of Section 5 that this approximate minimizer is also good. Moreover, when we use this problem as a subproblem of an image processing problem, we can set the parameter β large enough to avoid this drawback. In addition, from (23) we see that the element z_i of the minimizer can be treated independently in its s subproblems and the results then combined.

In conclusion, after some manipulations, we obtain the following two general formulas.

I.
    argmin_z ‖z‖_{2,1} + (β/2)‖z - x‖_2^2 = Sh_OGS1(x, β),    (24)
with
    Sh_OGS1(x, β)_i = z_i = max{ 1 - F(x)_i/β, 0 } x_i.    (25)
Here, for instance, when the group size is s = 4 and z_{i,g} = (z_{i-1}, z_i, z_{i+1}, z_{i+2}) (with x_{i,g} defined similarly), we have F(x)_i = 1/‖x_{i-2,g}‖_2 + 1/‖x_{i-1,g}‖_2 + 1/‖x_{i,g}‖_2 + 1/‖x_{i+1,g}‖_2. In general, the term 1/‖x_{j,g}‖_2 is contained in F(x)_i if and only if x_{j,g} contains the component x_i, and we follow the convention 1/0 = ∞, because x_{i,g} = 0 implies x_i = 0, so that the value of F(x)_i is then insignificant in (25).

II.
    argmin_z ‖z‖_{2,1} + (β/2)‖z - x‖_2^2 = Sh_OGS2(x, β),    (26)
with
    Sh_OGS2(x, β)_i = z_i = G(x)_i x_i.    (27)
Here the symbols are the same as in I, and (for s = 4)

    G(x)_i = max{ 1/s - 1/(β‖x_{i-2,g}‖_2), 0 } + max{ 1/s - 1/(β‖x_{i-1,g}‖_2), 0 }
           + max{ 1/s - 1/(β‖x_{i,g}‖_2), 0 } + max{ 1/s - 1/(β‖x_{i+1,g}‖_2), 0 }.

We call them Formula I and Formula II, respectively, throughout this paper. When β is sufficiently large or sufficiently small, the two formulas coincide. For the other values of β, the experiments show that Formula II approximates the MM iteration method better than Formula I, so we list the algorithm for Formula II (Algorithm 1 below) for finding the minimizer of (3); to use Formula I instead, only the final three steps change, which does not change the computational cost.

We can see that Algorithm 1 only needs two convolution computations, with time complexity 2ns, which is exactly the time complexity of a single iteration of the MM method of [20]. Therefore, our method is much more efficient than the MM method or other variable-duplication methods. In Section 5, we give numerical experiments comparing our method with the MM method. Moreover, if s = 1, Algorithm 1 degenerates to classic soft thresholding, just as formula (7) degenerates to (6). Moreover, when β is sufficiently large

or sufficiently small, the results of our new explicit algorithm are almost the same as those of the MM method after 20 or more iteration steps in [20].

Algorithm 1 Direct shrinkage algorithm for the minimization problem (3)
Input: a given vector x = (x_1, x_2, ..., x_n), the group size s, and the parameter β.
Compute:
1. Define x_{i,g} = (x_{i-s_l}, ..., x_i, ..., x_{i+s_r}), with s_l = ⌊(s-1)/2⌋ and s_r = ⌊s/2⌋ (s_l + s_r = s - 1). Set w = ones(1, s) = (1, 1, ..., 1); if s is even, set w = [w, 0] and s = s + 1. Let w_r = fliplr(w) be w flipped by 180 degrees.
2. Compute X_n = (‖x_{1,g}‖_2, ..., ‖x_{i,g}‖_2, ..., ‖x_{n,g}‖_2) by convolution of w with the pointwise square of x, followed by a pointwise square root.
3. Compute X_n2 = max( 1/s - 1/(X_n β), 0 ) pointwise.
4. Compute G(x)_i by correlation of w with X_n2, or equivalently by convolution of w_r with X_n2.
5. Compute z by multiplying x with G(x) pointwise.

Remarks 2. Our new explicit algorithm can be treated as an averaged estimate obtained by solving all the overlapping group subproblems independently. In Section 5, our numerical experiments show that the results of our two formulas are almost the same as the MM method of [20] when β is sufficiently large or sufficiently small, and close to the MM method for the other values of β. Moreover, this illustrates that the OGS regularizers both maintain the property of the classical sparse regularizers and smooth local regions; for instance, TV with OGS can preserve edges and simultaneously suppress staircase effects, as seen in our experiments and in [21, 22].

For the problems (4) (see Fig. 1b) and (5), we can obtain similar algorithms. We omit the derivation details here and give them in the Appendix. Below, we directly give the algorithm for solving the problem (5) under Formula II.

Algorithm 2 Direct shrinkage algorithm for the minimization problem (5)
Input: a given matrix A, the group size s = K1 × K2, the parameter β, and the weight matrix W_g.
Compute:
1. Pad W_g to be an odd-by-odd matrix: if K1 is even, then W_g = [zeros(1, K2); W_g] and K1 = K1 + 1; if K2 is even, then W_g = [zeros(K1, 1), W_g] and K2 = K2 + 1. Let W_g^r be W_g flipped by 180 degrees.
2. Compute A_temp = ( ‖W_g ∘ A_{i,j,g}‖_F )_{i,j} by correlation of the pointwise square of W_g with the pointwise square of A (followed by a pointwise square root), or equivalently by convolution of the pointwise square of W_g^r with the pointwise square of A.
3. Compute A_temp2 = max( 1/‖W_g‖_F^2 - 1/(A_temp β), 0 ) pointwise.
4. Compute G(A)_{i,j} by convolution of W_g with A_temp2, or by correlation of W_g^r with A_temp2.
5. Compute X by multiplying A with G(A) pointwise.

We can also see that Algorithm 2 only needs two convolution computations, with time complexity 2n^2 s for an n × n matrix, which is exactly the time complexity of a single iteration of the MM method of [20]. Therefore, our method is much more efficient than the MM method.

3 Several extensions

3.1 Other boundary conditions

In Section 2, we gave the explicit shrinkage formulas for one class of OGS problems, (3), (4), and (5). In order to keep the deduction simple, we assumed that PBC is used. One may wonder whether PBC is always appropriate for regularization problems in signal or image processing, since natural signals and images are often asymmetric. However, in these problems, assuming some boundary condition is necessary to simplify the problem and make the computation possible [24]. There are several kinds of BCs, such as the zero BC (ZBC) and the reflective BC. PBC is often used in optimization because it allows fast computation, as above, or by other means, for example, computations with matrices of block circulant with circulant blocks (BCCB) structure via fast Fourier transforms [20, 21, 24]. In this section, we consider other BCs, such as ZBC and the reflective BC. For simplicity, we only consider the vector case; the results can easily be extended to the matrix case as in Section 2. When
ZBC is used, we can extend the original vector (or signal) z = (z_i)_{i=1}^n and x by two s-length vectors at both ends of the original vectors, namely

    z̃ = (z_{1-s}, ..., z_0, z_1, ..., z_n, z_{n+1}, ..., z_{n+s}) = (0_s, z_1, ..., z_n, 0_s),

and x̃ defined analogously. Then the results and algorithms of Section 2 apply to z̃ and x̃ just as in the case of PBC; see Fig. 2a. Moreover, our numerical results will show that the results obtained with the different BCs are almost the same in practice (see Section 5), and according to the definition of the weighted generalized l_{2,1} norm, ZBC seems better than PBC. Therefore, we choose ZBC to solve the problems (3), (4), and (5) in the following sections.
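As an illustration, here is a minimal NumPy sketch of Algorithm 1 (Formula II) for the one-dimensional problem (3), using the zero-padding behaviour of np.convolve, i.e., the ZBC just chosen; the function name is ours, and for simplicity the sketch assumes an odd group size s (the paper pads w with a zero when s is even).

```python
import numpy as np

def ogs_shrink_1d(x, s, beta):
    """Inexact direct OGS shrinkage (Algorithm 1, Formula II): an approximate
    minimizer of sum_i ||z_{i,g}||_2 + (beta/2)||z - x||_2^2, odd group size s, ZBC."""
    w = np.ones(s)
    # step 2: group norms ||x_{i,g}||_2 via convolution of the box filter with x**2
    group_norms = np.sqrt(np.convolve(x ** 2, w, mode='same'))
    # step 3: per-group shrinkage factors max(1/s - 1/(beta * ||x_{i,g}||_2), 0)
    with np.errstate(divide='ignore'):
        factors = np.maximum(1.0 / s - 1.0 / (beta * group_norms), 0.0)
    factors[group_norms == 0.0] = 0.0      # a zero group contributes nothing
    # step 4: sum the factors of the s groups containing each entry (box filter again)
    g = np.convolve(factors, w, mode='same')
    # step 5: pointwise product
    return g * x
```

With s = 1, the factor reduces to max(1 - 1/(β|x_i|), 0) and the routine coincides with soft thresholding (6), matching the degeneracy noted in Section 2.2.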

Fig. 2 The case of extending the vectors under other BCs so as to use the formulas derived under PBC. a ZBC. b Reflective BC.

When the reflective BC is assumed, the results are also the same. We only need to extend the original vector x to x̂ with the reflective BC; see Fig. 2b.

3.2 Nonpositive weights and different weights within groups

In this section, we show that the weight vector w_g and weight matrix W_g of the former sections can contain an arbitrary number of entries with arbitrary real values. On the one hand, zero values may appear in arbitrary entries of the weight vector or matrix. For example, the original l_1 regularization problem (1) can be seen as a special form of the weighted generalized l_{2,1} norm regularization problem (4) with s = 1, or with s = 3 and w_g = (1, 0, 0) in the setting of Fig. 1b. On the other hand, every norm is positively homogeneous, so ‖(-w_g) ∘ z‖_any = ‖w_g ∘ z‖_any, where ‖·‖_any can be an arbitrary norm such as ‖·‖_2 or ‖·‖_F. Therefore, for the regularization problems (4) or (5), using the weight vector w_g or matrix W_g is the same as using |w_g| or |W_g|, where |·| denotes the pointwise absolute value. In general, therefore, our results hold whatever the number of entries included in the weight vector w_g or matrix W_g and whatever the real values of these entries. However, if the weight depends on the group index i, for example, w_{i,g} ∘ z_{i,g} = (w_{i,g,1} z_{i-1}, w_{i,g,2} z_i, w_{i,g,3} z_{i+1}) with w_{i,g} varying in i, we cannot solve the relevant problems easily, since our method fails. As mentioned above, we only consider weights that are independent of the group index i, that is, w_{i,g} = w_g for all i, which means translation-invariant overlapping groups.

4 Applications to TV regularization problems with OGS

The TV regularizer was first introduced by Rudin et al. [25] (ROF) and is widely used in many fields, e.g., denoising and deblurring problems [26-31]. Several fast algorithms, such as Chambolle's algorithm [26] and fast TV deconvolution (FTVd), have been proposed. The corresponding minimization task is

    min_f ‖f‖_TV + (μ/2)‖Hf - g‖_2^2,    (28)

where ‖f‖_TV := Σ_{1≤i,j≤n} ‖∇f_{i,j}‖_2 = Σ_{1≤i,j≤n} sqrt( (∇_x f_{i,j})^2 + (∇_y f_{i,j})^2 ), called isotropic TV (ITV), or ‖f‖_TV := Σ_{1≤i,j≤n} ( |∇_x f_{i,j}| + |∇_y f_{i,j}| ), called anisotropic TV (ATV); H denotes the blur matrix, and g denotes the given observed image with blur and noise. The operator ∇ : R^{n^2} → R^{2n^2} denotes the discrete gradient operator under PBC, defined by ∇f_{i,j} = (∇_x f_{i,j}, ∇_y f_{i,j}) with

    ∇_x f_{i,j} = f_{i+1,j} - f_{i,j} if i < n,  and f_{1,j} - f_{n,j} if i = n,
    ∇_y f_{i,j} = f_{i,j+1} - f_{i,j} if j < n,  and f_{i,1} - f_{i,n} if j = n,

for i, j = 1, 2, ..., n, where f_{i,j} refers to the ((j-1)n + i)-th entry of the vector f (it is the (i, j)-th pixel of the n × n image); this notation remains valid throughout the paper unless otherwise specified. Notice that H is a matrix of BCCB structure when PBC is applied [24].

Recently, Selesnick et al. [23] proposed an OGS TV regularizer for one-dimensional signal denoising. They applied the MM method to solve their model. Their numerical experiments showed that their method can effectively overcome staircase effects and obtain better results. However, their method has the disadvantages of low computation speed and the difficulty of a direct extension to the two-dimensional image case. More recently, Liu et al. [21] proposed an OGS TV regularizer for two-dimensional image denoising and deblurring under Gaussian noise, and Liu et al. [22] proposed an OGS TV regularizer for image deblurring under impulse noise. Both of them used a variable-substitution method and the ADMM framework with an inner MM iteration for solving subproblems similar to (3). Therefore, their methods may spend
more time than our methods. Moreover, since the MM method is used in the inner iterations of [21, 22], those works can only handle the ATV case, not the ITV case with OGS, while our methods can solve both the ATV and ITV cases.
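For reference, the two forward-difference operators just defined are, under PBC, simple rolls of the image array. A minimal sketch (treating the first array index as the x direction, as in the definitions above):

```python
import numpy as np

def grad_x(f):
    """(nabla_x f)_{i,j} = f_{i+1,j} - f_{i,j}, with periodic wrap-around at i = n."""
    return np.roll(f, -1, axis=0) - f

def grad_y(f):
    """(nabla_y f)_{i,j} = f_{i,j+1} - f_{i,j}, with periodic wrap-around at j = n."""
    return np.roll(f, -1, axis=1) - f
```

The conjugate transposes appearing in the normal equations below are the corresponding backward differences, i.e., rolls in the opposite direction with a sign change.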

Firstly, we define the ATV case with OGS under Gaussian noise and impulse noise, respectively, similarly to [21, 22]. For the Gaussian noise case,

    min_f ‖∇_x f‖_{W,2,1} + ‖∇_y f‖_{W,2,1} + (μ/2)‖Hf - g‖_2^2.    (29)

For the impulse noise case,

    min_f ‖∇_x f‖_{W,2,1} + ‖∇_y f‖_{W,2,1} + μ‖Hf - g‖_1.    (30)

We call the former the TVOGS-L2 model and the latter the TVOGS-L1 model, respectively. Then we define the ITV case with OGS. For Gaussian noise and impulse noise, we only change the first two terms of (29) and (30), respectively, to ‖A‖_{W,2,1}. Here, A is a higher-dimensional matrix (tensor) with each entry A_{i,j} = (∇_x f_{i,j}; ∇_y f_{i,j}).

Remarks 3. In particular, in this work A can be treated as (∇_x f; ∇_y f) for simplicity in vector and matrix computations, although the computation of ‖A‖_{W,2,1} should be treated pointwise with A_{i,j} = (∇_x f_{i,j}; ∇_y f_{i,j}), since the computations on A are almost all pointwise.

Moreover, we consider the constrained models as listed in [21, 22, 32]. For any true digital image, the pixel values can attain only a finite number of values. Hence, it is natural to require all pixel values of the restored image to lie in a certain interval [a, b]; see [32] for more details. In general, following the simple computation and the certified results in [32], we only consider images located in the standard range [0, 1]. Therefore, we define the projection operator P_Ω onto the set Ω = { f ∈ R^{n×n} : 0 ≤ f ≤ 1 } by

    (P_Ω f)_{i,j} = 0 if f_{i,j} < 0;  f_{i,j} if f_{i,j} ∈ [0, 1];  1 if f_{i,j} > 1.    (31)

4.1 Constrained TVOGS-L2 model

For the constrained model in the ATV case (called CATVOGSL2), we have

    min_{u, v_x, v_y, f} { ‖v_x‖_{W,2,1} + ‖v_y‖_{W,2,1} + (μ/2)‖Hf - g‖_2^2 : v_x = ∇_x f, v_y = ∇_y f, u = f, u ∈ Ω }.    (32)

The augmented Lagrangian function [33] of (32) is

    L(v_x, v_y, u, f; λ_1, λ_2, λ_3)
      = ‖v_x‖_{W,2,1} - λ_1^T (v_x - ∇_x f) + (β_1/2)‖v_x - ∇_x f‖_2^2
      + ‖v_y‖_{W,2,1} - λ_2^T (v_y - ∇_y f) + (β_1/2)‖v_y - ∇_y f‖_2^2
      - λ_3^T (u - f) + (β_2/2)‖u - f‖_2^2 + (μ/2)‖Hf - g‖_2^2,    (33)

where β_1, β_2 > 0 are penalty parameters and λ_1, λ_2, λ_3 ∈ R^{n^2} are the Lagrange multipliers. The solving algorithm follows the scheme of ADMM in Gabay [34]; we also refer to some applications in image processing that can be solved by ADMM, e.g., [35-43]. For a given (v_x^k, v_y^k, u^k, f^k; λ_1^k, λ_2^k, λ_3^k), the next iterate (v_x^{k+1}, v_y^{k+1}, u^{k+1}, f^{k+1}; λ_1^{k+1}, λ_2^{k+1}, λ_3^{k+1}) is generated as follows.

1. Fix f = f^k, λ_1 = λ_1^k, λ_2 = λ_2^k, λ_3 = λ_3^k and minimize (33) with respect to v_x, v_y, and u. With respect to v_x and v_y,

    v_x^{k+1} = argmin_{v_x} ‖v_x‖_{W,2,1} - (λ_1^k)^T (v_x - ∇_x f^k) + (β_1/2)‖v_x - ∇_x f^k‖_2^2
              = argmin_{v_x} ‖v_x‖_{W,2,1} + (β_1/2)‖v_x - ∇_x f^k - λ_1^k/β_1‖_2^2,    (34)

    v_y^{k+1} = argmin_{v_y} ‖v_y‖_{W,2,1} - (λ_2^k)^T (v_y - ∇_y f^k) + (β_1/2)‖v_y - ∇_y f^k‖_2^2
              = argmin_{v_y} ‖v_y‖_{W,2,1} + (β_1/2)‖v_y - ∇_y f^k - λ_2^k/β_1‖_2^2.    (35)

It is obvious that problems (34) and (35) match the framework of the problem (5); thus, the minimizers of (34) and (35) can be obtained by using the formulas in Section 2. With respect to u,

    u^{k+1} = argmin_u - (λ_3^k)^T (u - f^k) + (β_2/2)‖u - f^k‖_2^2
            = argmin_{u ∈ Ω} (β_2/2)‖u - f^k - λ_3^k/β_2‖_2^2.

The minimizer is given explicitly by

    u^{k+1} = P_Ω( f^k + λ_3^k/β_2 ).    (36)

2. Compute f^{k+1} by solving the normal equation

    ( β_1 ∇_x^* ∇_x + β_1 ∇_y^* ∇_y + μ H^* H + β_2 I ) f^{k+1}
      = ∇_x^* ( β_1 v_x^{k+1} - λ_1^k ) + ∇_y^* ( β_1 v_y^{k+1} - λ_2^k ) + μ H^* g + β_2 u^{k+1} - λ_3^k,    (37)

where * denotes the conjugate transpose; see [44] for more details. Since all the parameters are positive, the coefficient matrix in (37) is always invertible and symmetric positive-definite. In addition, H, ∇_x, ∇_y and their conjugate transposes have BCCB structure under PBC, and computations with BCCB matrices can be carried out very efficiently by fast Fourier transforms.

3. Update the multipliers via

    λ_1^{k+1} = λ_1^k - γ β_1 ( v_x^{k+1} - ∇_x f^{k+1} ),
    λ_2^{k+1} = λ_2^k - γ β_1 ( v_y^{k+1} - ∇_y f^{k+1} ),
    λ_3^{k+1} = λ_3^k - γ β_2 ( u^{k+1} - f^{k+1} ).    (38)
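Subproblems (34) and (35) have exactly the form of problem (5), so each ADMM iteration solves them with one pass of the matrix-case shrinkage of Algorithm 2. A minimal sketch, assuming the all-ones K-by-K weight used later in the experiments and zero boundary handling for the gradient variables (since the all-ones kernel is symmetric, scipy's convolve2d covers both the correlation and the convolution of Algorithm 2):

```python
import numpy as np
from scipy.signal import convolve2d

def ogs_shrink_2d(X, K, beta):
    """Inexact direct OGS shrinkage for problem (5) (Algorithm 2, Formula II),
    with an all-ones K-by-K weight matrix (odd K) and zero boundary condition."""
    W = np.ones((K, K))
    s = K * K
    # group Frobenius norms ||X_{i,j,g}||_F via 2-D convolution of X**2 with W
    group_norms = np.sqrt(convolve2d(X ** 2, W, mode='same', boundary='fill'))
    with np.errstate(divide='ignore'):
        factors = np.maximum(1.0 / s - 1.0 / (beta * group_norms), 0.0)
    factors[group_norms == 0.0] = 0.0
    # accumulate the factors of all groups containing pixel (i, j)
    G = convolve2d(factors, W, mode='same', boundary='fill')
    return G * X
```

In step 1 of the scheme above, one would call this once for v_x^{k+1} (with X = ∇_x f^k + λ_1^k/β_1 and β = β_1) and once for v_y^{k+1}.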

Based on the discussion above, we present the ADMM algorithm for solving the convex CATVOGSL2 model (32) as Algorithm 3.

Algorithm 3 CATVOGSL2 for the minimization problem (32)
Initialization: starting point f^0 = g, k = 0; parameters β_1, β_2, γ, μ; group size K1 × K2; weight matrix W_g; λ_i^0 = 0, i = 1, 2, 3.
Iteration:
1. Compute v_x^{k+1} and v_y^{k+1} according to (34) and (35).
2. Compute u^{k+1} according to (36).
3. Compute f^{k+1} by solving (37).
4. Update λ_i^{k+1}, i = 1, 2, 3, according to (38).
5. k = k + 1.
until a stopping criterion is satisfied.

Algorithm 3 is an application of ADMM for the case with two blocks of variables, (v_x, v_y, u) and f. Thus, if step 1 of Algorithm 3 could be solved exactly, its convergence would be guaranteed by the theory of ADMM [34, 35, 37, 43]. Although step 1 of Algorithm 3 cannot be solved exactly, one can find a convergent control series to ensure convergence, as in [35]. Besides, our numerical experiments verify the convergence of Algorithm 3.

Remarks 4. For the image f, we use PBC for fast computation. However, for v_x and v_y we use ZBC, because v_x and v_y represent the gradient of the image and ZBC seems better suited to the definition of the generalized l_{2,1} norm on v_x and v_y, as mentioned in Section 3. Therefore, the two kinds of BCs, for the image and for its gradient, are different and independent. This remains valid throughout the paper unless otherwise specified.

For the ITV case, the constrained model CITVOGSL2 is

    min_{u, A, f} { ‖A‖_{W,2,1} + (μ/2)‖Hf - g‖_2^2 : A = (∇_x f; ∇_y f), u = f, u ∈ Ω }.    (39)

The details are analogous to those presented in the next subsection for the constrained TVOGS-L1 model. We call the corresponding algorithm CITVOGSL2.

4.2 Constrained TVOGS-L1 model

For the constrained model in the ITV case (called CITVOGSL1), we have

    min_{u, A, z, f} { ‖A‖_{W,2,1} + μ‖z‖_1 : z = Hf - g, A = (∇_x f; ∇_y f), u = f, u ∈ Ω }.    (40)

The augmented Lagrangian function of (40) is

    L(A, z, u, f; λ_1, λ_2, λ_3, λ_4)
      = ‖A‖_{W,2,1} - λ_1^T (A_1 - ∇_x f) - λ_2^T (A_2 - ∇_y f)
        + (β_1/2)( ‖A_1 - ∇_x f‖_2^2 + ‖A_2 - ∇_y f‖_2^2 )
      + μ‖z‖_1 - λ_3^T ( z - (Hf - g) ) + (β_2/2)‖z - (Hf - g)‖_2^2
      - λ_4^T (u - f) + (β_3/2)‖u - f‖_2^2,    (41)

where A = (A_1; A_2), β_1, β_2, β_3 > 0 are penalty parameters, and λ_1, λ_2, λ_3, λ_4 ∈ R^{n^2} are the Lagrange multipliers. For a given (A^k, z^k, u^k, f^k; λ_1^k, λ_2^k, λ_3^k, λ_4^k), the next iterate (A^{k+1}, z^{k+1}, u^{k+1}, f^{k+1}; λ_1^{k+1}, λ_2^{k+1}, λ_3^{k+1}, λ_4^{k+1}) is generated as follows.

1. Fix f = f^k, λ_1 = λ_1^k, λ_2 = λ_2^k, λ_3 = λ_3^k, λ_4 = λ_4^k and minimize (41) with respect to A, z, and u. With respect to A,

    A^{k+1} = argmin_A ‖A‖_{W,2,1} - (λ_1^k)^T (A_1 - ∇_x f^k) - (λ_2^k)^T (A_2 - ∇_y f^k)
                       + (β_1/2)( ‖A_1 - ∇_x f^k‖_2^2 + ‖A_2 - ∇_y f^k‖_2^2 )
            = argmin_A ‖A‖_{W,2,1} + (β_1/2)( ‖A_1 - ∇_x f^k - λ_1^k/β_1‖_2^2 + ‖A_2 - ∇_y f^k - λ_2^k/β_1‖_2^2 ).    (42)

It is obvious that problem (42) matches the framework of the problem (5); thus, the solution of (42) can be obtained by using the formulas in Section 2. With respect to z,

    z^{k+1} = argmin_z μ‖z‖_1 - (λ_3^k)^T ( z - (Hf^k - g) ) + (β_2/2)‖z - (Hf^k - g)‖_2^2
            = argmin_z μ‖z‖_1 + (β_2/2)‖z - (Hf^k - g) - λ_3^k/β_2‖_2^2.

This minimization with respect to z is given explicitly by (6) and (9), that is,

    z^{k+1} = sgn( Hf^k - g + λ_3^k/β_2 ) ∘ max{ |Hf^k - g + λ_3^k/β_2| - μ/β_2, 0 }.    (43)

With respect to u,

    u^{k+1} = argmin_u - (λ_4^k)^T (u - f^k) + (β_3/2)‖u - f^k‖_2^2
            = argmin_{u ∈ Ω} (β_3/2)‖u - f^k - λ_4^k/β_3‖_2^2.

The minimizer is given explicitly by

    u^{k+1} = P_Ω( f^k + λ_4^k/β_3 ).    (44)

2. Compute f^{k+1} by solving the following normal equation, similarly to the last subsection:

    ( β_1 ∇_x^* ∇_x + β_1 ∇_y^* ∇_y + β_2 H^* H + β_3 I ) f^{k+1}
      = ∇_x^* ( β_1 A_1^{k+1} - λ_1^k ) + ∇_y^* ( β_1 A_2^{k+1} - λ_2^k )
        + H^* ( β_2 z^{k+1} - λ_3^k ) + β_2 H^* g + β_3 u^{k+1} - λ_4^k.    (45)

3. Update the multipliers via

    λ_1^{k+1} = λ_1^k - γ β_1 ( A_1^{k+1} - ∇_x f^{k+1} ),
    λ_2^{k+1} = λ_2^k - γ β_1 ( A_2^{k+1} - ∇_y f^{k+1} ),
    λ_3^{k+1} = λ_3^k - γ β_2 ( z^{k+1} - Hf^{k+1} + g ),
    λ_4^{k+1} = λ_4^k - γ β_3 ( u^{k+1} - f^{k+1} ).    (46)

Based on the discussion above, we present the ADMM algorithm for solving the convex CITVOGSL1 model (40) as Algorithm 4.

Algorithm 4 CITVOGSL1 for the minimization problem (40)
Initialization: starting point f^0 = g, k = 0; parameters β_1, β_2, β_3, γ, μ; group size K1 × K2; weight matrix W_g; λ_i^0 = 0, i = 1, 2, 3, 4.
Iteration:
1. Compute A^{k+1} according to (42) and z^{k+1} according to (43).
2. Compute u^{k+1} according to (44).
3. Compute f^{k+1} by solving (45).
4. Update λ_i^{k+1}, i = 1, 2, 3, 4, according to (46).
5. k = k + 1.
until a stopping criterion is satisfied.

Algorithm 4 is an application of ADMM for the case with two blocks of variables, (A, z, u) and f. Thus, if step 1 of Algorithm 4 could be solved exactly, its convergence would be guaranteed by the theory of ADMM [34, 35, 37, 43]. Although step 1 of Algorithm 4 cannot be solved exactly, one can find a convergent control series to ensure convergence, as in [35]. Besides, our numerical experiments verify the convergence of Algorithm 4.

For the ATV case, the constrained model CATVOGSL1 is

    min_{u, v_x, v_y, z, f} { ‖v_x‖_{W,2,1} + ‖v_y‖_{W,2,1} + μ‖z‖_1 : z = Hf - g, v_x = ∇_x f, v_y = ∇_y f, u = f, u ∈ Ω }.    (47)

The details are analogous to those presented in the last subsection for the constrained TVOGS-L2 model. We call the corresponding algorithm CATVOGSL1.

5 Numerical results

In this section, we present several numerical results to illustrate the performance of the proposed method. All experiments are carried out on a desktop computer using MATLAB R2010a. The computer is equipped with an Intel Core i3-2130 CPU (3.4 GHz), about 3.4 GB of RAM, and a 32-bit Windows 7 operating system.

5.1 Comparison with the MM method for one-dimensional signal denoising

As an illustrative example, we apply our direct formulas to one-dimensional signal denoising. As a simple example, we only compare the results of our explicit shrinkage formulas with the most recent MM iteration method proposed in [20]. The one-dimensional group sparse signal is shown in the top left of Fig. 3. The noisy signal x, shown in the top right of Fig. 3, is obtained by adding independent white Gaussian noise with standard deviation σ = 0.5, the same as in [20]. The denoising model is

    argmin_z Fun(z) = ‖z‖_{2,1} + (β/2)‖z - x‖_2^2.    (48)

We test several different parameters β (every integer from 1 to 50) for the comparison. The results of our formulas are almost the same as those of the MM method for almost all of these β. Due to limited space, we only illustrate part of the results (β = 3, 10, 25) in Fig. 3. We take both our Formula I and Formula II for the comparison with the MM method. From the figure, we can see visually that the results produced by our two formulas are almost the same as those of the MM method for the different β; moreover, for practical problems, the middle value β = 10 or so can be better than the others, judging from the figure. This shows that our formulas are feasible, useful, and effective, because our method needs only the same computational cost as a single iteration of the MM method, while the MM method needs 25 iterations as in [20]; that means our method is about 25 times faster than the MM method. From the bottom row of Fig. 3, we can also see that the MM method may take many iterations to converge, which is the reason why Liu et al. in [21] and Liu et al. in [22] choose only 5 inner iterations; even in that case, our method is still 5 times faster than the MM method.

Remarks 5. Although this
model is not the best model for signal denoising, it shows the superiority of our method. Moreover, the authors in [20] performed similar experiments.

5.2 Comparison with the MM method on problem (5)

In this section, we give another example, solving the matrix-case problem (5), for comparison with the MM method. Without loss of generality, we let A = rand(100, 100) and A(45:55, 45:55) = 0 in this example.

Fig. 3 Comparison between our method and the MM method for one-dimensional signal denoising. From the second row to the bottom row: results of the MM method, of our Formula I, and of our Formula II, respectively; the absolute error; and the function value history. From left to right: β = 3, 10, 25, respectively.

In particular, we set the weight matrix W_g ∈ R^{3×3} with (W_g)_{i,j} ≡ 1, and we restate (5) for this test as

    min_X Fun(X) = ‖X‖_{W,2,1} + (β/2)‖X - A‖_F^2.    (49)

We use different parameters β and run the MM iteration for 20 steps. For comparison, we replicate our one-step result from the explicit shrinkage formulas to length 20, the same as the number of MM iteration steps. The values of the objective function Fun(X) against iterations are shown in the top row of Fig. 4. The cross-sections of the minimizer matrices are shown in the bottom row of Fig. 4. We test both ZBC and PBC for our method and for the MM method; it can be seen that there is only a small difference between the two kinds of BCs.

It is obvious that our method is much more efficient than the MM method, since our time complexity is just that of a single MM iteration, thanks to the explicit shrinkage formula. From the top row of Fig. 4, we observe that the MM method is also fast, because it needs fewer than 20 steps (sometimes about 5) to converge. The relative error between the final function values of the two methods is much less than 1 % for the different parameters β. From the bottom row of Fig. 4, we can see that when β is sufficiently small and when β ≥ 30, which is sufficiently large, our result is almost the same as the final result of the MM iteration method. This shows that the results of our method are then almost the same as those of the MM method. For intermediate values of β (below 30), the minimizer computed by our method is close to that of the MM method, both in the function value and in the minimizer X itself.

In Table 1, we show the numerical comparison of our method and the MM method in three respects: the relative error of the function value (ReE_fA), the relative error of the minimizer X (ReE_X), and the mean absolute error of X (MAE_X). The three quantities are defined by ReE_fA = |f_A(X_MM) - f_A(X_ours)| / |f_A(X_ours)|, ReE_X = ‖X_MM - X_ours‖_F / ‖X_ours‖_F, and MAE_X = mean absolute error of X_MM - X_ours, where X_MM and X_ours are the final solutions X of the MM method and of our method, respectively, and f_A(X_MM) and f_A(X_ours) are the corresponding final function values. From the table and the figure, we can see that our formula obtains almost the same results as the MM method when β is sufficiently large, and approximate results, similar to the MM method, for the other β. This is another confirmation of the feasibility of our formula.

Besides the weight matrix W_g ∈ R^{3×3} with (W_g)_{i,j} ≡ 1, we also tested other weight matrices, in more than 1000 runs. For example, when A = rand(100, 100) and A(A >= 0.5) = 0, or A(A <= 0.5) = 1 (so that every element is in [0, 1]), we find that, generally, a value of β below roughly 2 (scaled by the weights and the group size) is sufficiently small and a value above roughly 30 (similarly scaled) is sufficiently large. These results agree with the observation above that, for W_g ∈ R^{3×3} with (W_g)_{i,j} ≡ 1, β ≥ 30 is sufficiently large. However, in practice, if β is too large, then X = A and the minimization problem (49) becomes meaningless. In our experiments, after more than 1000 tests, we find that, when every element of A is in [0, 1], choosing β between about 30 and 300 (again scaled by the weights and the group size) is sufficiently large but not so large as to make the minimization problem meaningless; in that case, the parameter can still be tuned for better results in practice. On the other hand, when some elements of A are not in [0, 1], we can first project or stretch A into the range [0, 1] and then choose a better parameter β. Moreover, even if the parameter is not chosen in this region, the result can still be treated as an approximate solution with a small error. This means that our formulas are feasible, useful, and effective.
Fig. 4 Comparison between our method and the MM method for two kinds of BCs, ZBC and PBC. Top row: the function value history. Bottom row: the cross-section of the final minimizer matrix X. From left to right: β = 2, 5, 10, 20, 30.

Table 1 Numerical comparison of our method and the MM method for two kinds of BCs, ZBC and PBC. For each tested β, the table reports ReE_fA (the relative error of the final function value Fun(X)), ReE_X (the relative error of the minimizer X), and MAE_X (the mean absolute error of X), for both ZBC and PBC; all three measures are very small and decrease as β grows.

5.3 Comparison with TV methods and with OGS TV methods using inner MM iterations, for image deblurring and denoising

In this section, we compare all our algorithms with other methods. All the test images are shown in Fig. 5: one image, (a) Man, and three 512 × 512 images, (b) Car, (c) Parlor, and (d) Housepole. The quality of the restoration results is measured quantitatively by the peak signal-to-noise ratio (PSNR), in decibels (dB), and the relative error (ReE):

    PSNR = 10 log_10 ( n^2 Max_I^2 / ‖f̃ - f‖_2^2 ),   ReE = ‖f̃ - f‖_2 / ‖f‖_2,

where f and f̃ denote the original and restored images, respectively, and Max_I represents the maximum possible pixel value of the image; in our experiments, Max_I = 1. The stopping criterion used in our work, as in the other methods, is

    |F_k - F_{k-1}| / |F_{k-1}| < 10^{-5},    (50)

where F_k is the objective function value of the respective model at the k-th iteration.

We compare our methods with some other methods, namely Chan et al.'s TV method proposed in [32], Liu et al.'s method proposed in [21], and Liu et al.'s method proposed in [22]. Both of the latter two methods [21, 22] use inner MM iterations for the OGS TV subproblems, with the number of inner iterations set to 5 by their authors. In particular, for a fair comparison, we set the weight matrix W_g ∈ R^{3×3} with (W_g)_{i,j} ≡ 1 in all the experiments of our methods, as in [21, 22].

5.3.1 Experiments for the constrained TVOGS-L2 model

In this section, we compare our methods CATVOGSL2 and CITVOGSL2 with some other methods, namely Chan et al.'s method proposed in [32] (the algorithm in [32] for the constrained TV-L2 model) and Liu et al.'s method proposed in [21]. We set the penalty parameters β_1 and β_2 (with separate values for the ATV and the ITV cases) and the relaxation parameter γ = 1.618. The blur kernels are generated by the MATLAB built-in function fspecial('average', 9) for a 9 × 9 average blur. We generate all blurring effects using the MATLAB built-in function imfilter(I, psf, 'circular', 'conv') under PBC, with I the original image and psf the blur kernel. We first generated the blurred images by applying the blur above to the images (a)-(c) and further corrupted them by zero-mean Gaussian noise with BSNR = 40.

Fig. 5 Original images. The versions of the images used in this paper are in special formats, converted by Photoshop from the downloaded sources. a Man. b Car. c Parlor. d Housepole.
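For completeness, the two quality measures above can be computed as follows (a small helper with our own function names; Max_I = 1 as stated):

```python
import numpy as np

def psnr(f_true, f_restored, max_i=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(max_i^2 / mean squared error)."""
    mse = np.mean((f_restored - f_true) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)

def rel_err(f_true, f_restored):
    """Relative error ||f_restored - f_true||_F / ||f_true||_F."""
    return np.linalg.norm(f_restored - f_true) / np.linalg.norm(f_true)
```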

Table 2 Numerical comparison of Chan [32], Liu [21], CATVOGSL2, and CITVOGSL2 for the images (a)-(c) in Fig. 5. For each image, the table lists the regularization parameter μ, the number of iterations (Itr), the PSNR (dB), the time (s), and the ReE for each method; CAOL2 and CIOL2 abbreviate CATVOGSL2 and CITVOGSL2.

The BSNR is given by

    BSNR = 10 log_10 ( variance of g / variance of η ),

where g and η are the observed image and the noise, respectively. The numerical results of the compared methods are shown in Table 2; we have tuned the parameters of all the methods as reported there. From Table 2, we can see that the PSNR and ReE of our methods (both the ATV and the ITV cases) are almost the same as those of [21], which applies MM inner iterations to solve the subproblems (34) and (35), and only for the ATV case. However, each outer iteration of our methods is nearly twice as fast as that of [21] in the experiments, and the time per outer iteration of our methods is almost the same as that of the traditional TV method in [32]. In Fig. 6, we display the restored Parlor images produced by the different algorithms. We can see that the OGS TV regularizers recover clearer edges on the desk in the image than the TV regularizer.

Now, we estimate the complexity of each step of our methods and of Liu et al.'s method [21].

Fig. 6 Top row: blurred and noisy image (left); restored images of Chan [32] (middle) and Liu [21] (right). Bottom row: original image (left); restored images of CATVOGSL2 (middle) and CITVOGSL2 (right).

Firstly, the complexity of every method is a few operations of order n^2 log n per iteration (FFT-based steps), apart from the OGS subproblems. Then, for the OGS subproblems, the MM method in [21] uses a five-step inner iteration, so its cost for these subproblems is five times that of a single shrinkage pass, whereas our methods need only one pass per subproblem in CATVOGSL2 and one pass in CITVOGSL2. Therefore, the total per-iteration cost of [21] is clearly larger than that of CATVOGSL2 and CITVOGSL2, which means each step of our methods is more than twice as fast as the inner-iteration method in [21]. In the next subsection, the computation shared by our methods and the inner-iteration method makes up a larger fraction of the total, and then our methods are only nearly twice as fast.

Remarks 6. We do not list the results for image (d), because they are almost the same as those for images (b) and (c). Moreover, when β_1 < 30, which is not sufficiently large, the numerical results are also good, although we do not list them; this shows that the approximate regime of our shrinkage formula also works well, and that when the number of inner steps in [21, 22] is chosen to be 5, the numerical experiments converge even though no convergence control sequence was given there. In addition, in these experiments we find that the results of the ATV case are better than those of the ITV case, which differs a little from the classic TV methods. Moreover, we find that if the parameters of [21] are chosen to be the same as ours, the solutions of our method and of their method are always the same in PSNR and visual quality, while our method saves more time (the same holds for [22]).

5.3.2 Experiments for the constrained TVOGS-L1 model

In this section, we compare our methods CATVOGSL1 and CITVOGSL1 with some other methods, namely Chan et al.'s method proposed in [32] (the algorithm in [32] for the constrained TV-L1 model) and Liu et al.'s method proposed in [22]. Similarly to the last section, we set the penalty parameters β_1, β_2, β_3 (with separate values for the ATV and the ITV cases) and the relaxation parameter γ = 1.618. The blur kernel is generated by the MATLAB built-in function fspecial('gaussian', 7, 5) for a 7 × 7 Gaussian blur with standard deviation 5. We first generated the blurred images by applying this Gaussian blur to the images (b)-(d) and further corrupted them by salt-and-pepper noise at levels from 30 to 40 %. We generate all noise effects by the MATLAB built-in function imnoise(Bl, 'salt & pepper', level), with Bl the blurred image, and fix the same random matrix for the different methods. The numerical results of the methods are shown in Table 3. We tuned the parameters manually to give the best PSNR improvement for Chan [32], as reported in Table 3, for the different images.

Table 3 Numerical comparison of Chan [32], Liu [22], CATVOGSL1, and CITVOGSL1 for the test images in Fig. 5 under 30 and 40 % salt-and-pepper noise. For each image and noise level (NL, %), the table lists μ/Itr, PSNR (dB), time (s), and ReE for each method.

For Liu [22], we use the default values of the parameter μ given there for the 30 and 40 % noise levels, respectively. For our methods CATVOGSL1 and CITVOGSL1, we likewise fix one value of μ for each noise level. In the experiments, we find that the parameters of our methods are robust and can be chosen from a wide range; therefore, we use the same μ for the different images. From Table 3, we can again see that the PSNR and ReE of our methods (both the ATV and the ITV cases) are almost the same as those of Liu [22], which applies MM inner iterations to solve the subproblems analogous to (34) and (35), and only for the ATV case. However, each outer iteration of our methods is nearly twice as fast as that of Liu [22] in the experiments, and the time per outer iteration of our methods is almost the same as that of the traditional TV method of Chan [32]. Moreover, we can also see that sometimes ATV is better than ITV and sometimes the contrary for OGS TV. Finally, in Fig. 7, we display the degraded image, the original image, and the images restored by the four methods for the 30 % noise level on image (d). From the figure, we can see that both our methods and Liu [22] recover better edges (handrail and window) than Chan [32].

6 Conclusions

In this paper, we propose explicit shrinkage formulas for one class of OGS regularization problems, namely those with translation-invariant overlapping groups. These formulas can be extended to several other OGS regularization problems arising as subproblems in many fields. In this work, we apply our results to OGS TV regularization problems (deblurring and denoising). Furthermore, we also extend the image deblurring problems with OGS ATV in [21, 22] to both the ATV and ITV cases. Since the formulas are very simple, these results can easily be extended to many other applications, such as multichannel deconvolution and compressive sensing, which we will consider in the future. In addition, in the experiments we only choose all the entries of the weight matrix W_g equal to 1; in the future we will test other weights in more experiments, in order to choose better or best weights for other applications.

Fig. 7 Top row: blurred and noisy image (left); restored images of Chan [32] (middle) and Liu [22] (right). Bottom row: original image (left); restored images of CATVOGSL1 (middle) and CITVOGSL1 (right).


EE 367 / CS 448I Computational Imaging and Display Notes: Noise, Denoising, and Image Reconstruction with Noise (lecture 10) EE 367 / CS 448I Computational Imaging and Display Notes: Noise, Denoising, and Image Reconstruction with Noise (lecture 0) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement

More information

Lecture 8: February 9

Lecture 8: February 9 0-725/36-725: Convex Optimiation Spring 205 Lecturer: Ryan Tibshirani Lecture 8: February 9 Scribes: Kartikeya Bhardwaj, Sangwon Hyun, Irina Caan 8 Proximal Gradient Descent In the previous lecture, we

More information

Assorted DiffuserCam Derivations

Assorted DiffuserCam Derivations Assorted DiffuserCam Derivations Shreyas Parthasarathy and Camille Biscarrat Advisors: Nick Antipa, Grace Kuo, Laura Waller Last Modified November 27, 28 Walk-through ADMM Update Solution In our derivation

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

Homework 4. Convex Optimization /36-725

Homework 4. Convex Optimization /36-725 Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

TWO-PHASE APPROACH FOR DEBLURRING IMAGES CORRUPTED BY IMPULSE PLUS GAUSSIAN NOISE. Jian-Feng Cai. Raymond H. Chan. Mila Nikolova

TWO-PHASE APPROACH FOR DEBLURRING IMAGES CORRUPTED BY IMPULSE PLUS GAUSSIAN NOISE. Jian-Feng Cai. Raymond H. Chan. Mila Nikolova Manuscript submitted to Website: http://aimsciences.org AIMS Journals Volume 00, Number 0, Xxxx XXXX pp. 000 000 TWO-PHASE APPROACH FOR DEBLURRING IMAGES CORRUPTED BY IMPULSE PLUS GAUSSIAN NOISE Jian-Feng

More information

Digital Signal Processing

Digital Signal Processing JID:YDSPR AID:53 /FLA [m5g; v1.9; Prn:17/07/2017; 11:02] P.1 (1-13) Digital Signal Processing ( ) 1 Contents lists available at ScienceDirect 2 68 3 69 Digital Signal Processing 6 72 7 www.elsevier.com/locate/dsp

More information

Enhanced Compressive Sensing and More

Enhanced Compressive Sensing and More Enhanced Compressive Sensing and More Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Nonlinear Approximation Techniques Using L1 Texas A & M University

More information

Sparse regression. Optimization-Based Data Analysis. Carlos Fernandez-Granda

Sparse regression. Optimization-Based Data Analysis.   Carlos Fernandez-Granda Sparse regression Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 3/28/2016 Regression Least-squares regression Example: Global warming Logistic

More information

LPA-ICI Applications in Image Processing

LPA-ICI Applications in Image Processing LPA-ICI Applications in Image Processing Denoising Deblurring Derivative estimation Edge detection Inverse halftoning Denoising Consider z (x) =y (x)+η (x), wherey is noise-free image and η is noise. assume

More information

Robust Principal Component Analysis Based on Low-Rank and Block-Sparse Matrix Decomposition

Robust Principal Component Analysis Based on Low-Rank and Block-Sparse Matrix Decomposition Robust Principal Component Analysis Based on Low-Rank and Block-Sparse Matrix Decomposition Gongguo Tang and Arye Nehorai Department of Electrical and Systems Engineering Washington University in St Louis

More information

Image Noise: Detection, Measurement and Removal Techniques. Zhifei Zhang

Image Noise: Detection, Measurement and Removal Techniques. Zhifei Zhang Image Noise: Detection, Measurement and Removal Techniques Zhifei Zhang Outline Noise measurement Filter-based Block-based Wavelet-based Noise removal Spatial domain Transform domain Non-local methods

More information

Introduction to Alternating Direction Method of Multipliers

Introduction to Alternating Direction Method of Multipliers Introduction to Alternating Direction Method of Multipliers Yale Chang Machine Learning Group Meeting September 29, 2016 Yale Chang (Machine Learning Group Meeting) Introduction to Alternating Direction

More information

Science Insights: An International Journal

Science Insights: An International Journal Available online at http://www.urpjournals.com Science Insights: An International Journal Universal Research Publications. All rights reserved ISSN 2277 3835 Original Article Object Recognition using Zernike

More information

ACCELERATED FIRST-ORDER PRIMAL-DUAL PROXIMAL METHODS FOR LINEARLY CONSTRAINED COMPOSITE CONVEX PROGRAMMING

ACCELERATED FIRST-ORDER PRIMAL-DUAL PROXIMAL METHODS FOR LINEARLY CONSTRAINED COMPOSITE CONVEX PROGRAMMING ACCELERATED FIRST-ORDER PRIMAL-DUAL PROXIMAL METHODS FOR LINEARLY CONSTRAINED COMPOSITE CONVEX PROGRAMMING YANGYANG XU Abstract. Motivated by big data applications, first-order methods have been extremely

More information

Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method

Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method Article Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method Andreas Langer Department of Mathematics, University of Stuttgart,

More information

Introduction to Compressed Sensing

Introduction to Compressed Sensing Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral

More information

Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery

Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery Leveraging Machine Learning for High-Resolution Restoration of Satellite Imagery Daniel L. Pimentel-Alarcón, Ashish Tiwari Georgia State University, Atlanta, GA Douglas A. Hope Hope Scientific Renaissance

More information

Variational Image Restoration

Variational Image Restoration Variational Image Restoration Yuling Jiao yljiaostatistics@znufe.edu.cn School of and Statistics and Mathematics ZNUFE Dec 30, 2014 Outline 1 1 Classical Variational Restoration Models and Algorithms 1.1

More information

Fast Nonnegative Matrix Factorization with Rank-one ADMM

Fast Nonnegative Matrix Factorization with Rank-one ADMM Fast Nonnegative Matrix Factorization with Rank-one Dongjin Song, David A. Meyer, Martin Renqiang Min, Department of ECE, UCSD, La Jolla, CA, 9093-0409 dosong@ucsd.edu Department of Mathematics, UCSD,

More information

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille

More information

Generalized Tensor Total Variation Minimization for Visual Data Recovery

Generalized Tensor Total Variation Minimization for Visual Data Recovery Generalized Tensor Total Variation Minimization for Visual Data Recovery Xiaojie Guo 1 and Yi Ma 2 1 State Key Laboratory of Information Security IIE CAS Beijing 193 China 2 School of Information Science

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Convex Optimization and l 1 -minimization

Convex Optimization and l 1 -minimization Convex Optimization and l 1 -minimization Sangwoon Yun Computational Sciences Korea Institute for Advanced Study December 11, 2009 2009 NIMS Thematic Winter School Outline I. Convex Optimization II. l

More information

Accelerated primal-dual methods for linearly constrained convex problems

Accelerated primal-dual methods for linearly constrained convex problems Accelerated primal-dual methods for linearly constrained convex problems Yangyang Xu SIAM Conference on Optimization May 24, 2017 1 / 23 Accelerated proximal gradient For convex composite problem: minimize

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization

Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization Yuan Shen Zaiwen Wen Yin Zhang January 11, 2011 Abstract The matrix separation problem aims to separate

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

SOS Boosting of Image Denoising Algorithms

SOS Boosting of Image Denoising Algorithms SOS Boosting of Image Denoising Algorithms Yaniv Romano and Michael Elad The Technion Israel Institute of technology Haifa 32000, Israel The research leading to these results has received funding from

More information

Wavelet Footprints: Theory, Algorithms, and Applications

Wavelet Footprints: Theory, Algorithms, and Applications 1306 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 5, MAY 2003 Wavelet Footprints: Theory, Algorithms, and Applications Pier Luigi Dragotti, Member, IEEE, and Martin Vetterli, Fellow, IEEE Abstract

More information

Sparse linear models

Sparse linear models Sparse linear models Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 2/22/2016 Introduction Linear transforms Frequency representation Short-time

More information

MMSE Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm

MMSE Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm Denoising of 2-D Signals Using Consistent Cycle Spinning Algorithm Bodduluri Asha, B. Leela kumari Abstract: It is well known that in a real world signals do not exist without noise, which may be negligible

More information

Optimal XOR based (2,n)-Visual Cryptography Schemes

Optimal XOR based (2,n)-Visual Cryptography Schemes Optimal XOR based (2,n)-Visual Cryptography Schemes Feng Liu and ChuanKun Wu State Key Laboratory Of Information Security, Institute of Software Chinese Academy of Sciences, Beijing 0090, China Email:

More information

IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND

IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND JIAN-FENG CAI, BIN DONG, STANLEY OSHER, AND ZUOWEI SHEN Abstract. The variational techniques (e.g., the total variation based method []) are

More information

Superiorized Inversion of the Radon Transform

Superiorized Inversion of the Radon Transform Superiorized Inversion of the Radon Transform Gabor T. Herman Graduate Center, City University of New York March 28, 2017 The Radon Transform in 2D For a function f of two real variables, a real number

More information

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem

A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem Kangkang Deng, Zheng Peng Abstract: The main task of genetic regulatory networks is to construct a

More information

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece.

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece. Machine Learning A Bayesian and Optimization Perspective Academic Press, 2015 Sergios Theodoridis 1 1 Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens,

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Elaine T. Hale, Wotao Yin, Yin Zhang

Elaine T. Hale, Wotao Yin, Yin Zhang , Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

A Fast Algorithm for Computing High-dimensional Risk Parity Portfolios

A Fast Algorithm for Computing High-dimensional Risk Parity Portfolios A Fast Algorithm for Computing High-dimensional Risk Parity Portfolios Théophile Griveau-Billion Quantitative Research Lyxor Asset Management, Paris theophile.griveau-billion@lyxor.com Jean-Charles Richard

More information

Coordinate descent. Geoff Gordon & Ryan Tibshirani Optimization /

Coordinate descent. Geoff Gordon & Ryan Tibshirani Optimization / Coordinate descent Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Adding to the toolbox, with stats and ML in mind We ve seen several general and useful minimization tools First-order methods

More information

Coordinate Update Algorithm Short Course Operator Splitting

Coordinate Update Algorithm Short Course Operator Splitting Coordinate Update Algorithm Short Course Operator Splitting Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 25 Operator splitting pipeline 1. Formulate a problem as 0 A(x) + B(x) with monotone operators

More information

2.3. Clustering or vector quantization 57

2.3. Clustering or vector quantization 57 Multivariate Statistics non-negative matrix factorisation and sparse dictionary learning The PCA decomposition is by construction optimal solution to argmin A R n q,h R q p X AH 2 2 under constraint :

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

ARestricted Boltzmann machine (RBM) [1] is a probabilistic

ARestricted Boltzmann machine (RBM) [1] is a probabilistic 1 Matrix Product Operator Restricted Boltzmann Machines Cong Chen, Kim Batselier, Ching-Yun Ko, and Ngai Wong chencong@eee.hku.hk, k.batselier@tudelft.nl, cyko@eee.hku.hk, nwong@eee.hku.hk arxiv:1811.04608v1

More information

Scale-Invariance of Support Vector Machines based on the Triangular Kernel. Abstract

Scale-Invariance of Support Vector Machines based on the Triangular Kernel. Abstract Scale-Invariance of Support Vector Machines based on the Triangular Kernel François Fleuret Hichem Sahbi IMEDIA Research Group INRIA Domaine de Voluceau 78150 Le Chesnay, France Abstract This paper focuses

More information

EUSIPCO

EUSIPCO EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning Proximal-Gradient Mark Schmidt University of British Columbia Winter 2018 Admin Auditting/registration forms: Pick up after class today. Assignment 1: 2 late days to hand in

More information

Lecture 17: October 27

Lecture 17: October 27 0-725/36-725: Convex Optimiation Fall 205 Lecturer: Ryan Tibshirani Lecture 7: October 27 Scribes: Brandon Amos, Gines Hidalgo Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These

More information

Minimizing the Difference of L 1 and L 2 Norms with Applications

Minimizing the Difference of L 1 and L 2 Norms with Applications 1/36 Minimizing the Difference of L 1 and L 2 Norms with Department of Mathematical Sciences University of Texas Dallas May 31, 2017 Partially supported by NSF DMS 1522786 2/36 Outline 1 A nonconvex approach:

More information

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March

More information

Comparative Study of Restoration Algorithms ISTA and IISTA

Comparative Study of Restoration Algorithms ISTA and IISTA Comparative Study of Restoration Algorithms ISTA and IISTA Kumaresh A K 1, Kother Mohideen S 2, Bremnavas I 3 567 Abstract our proposed work is to compare iterative shrinkage thresholding algorithm (ISTA)

More information

arxiv: v1 [cs.cv] 3 Dec 2018

arxiv: v1 [cs.cv] 3 Dec 2018 Tensor N-tubal rank and its convex relaxation for low-rank tensor recovery Yu-Bang Zheng a, Ting-Zhu Huang a,, Xi-Le Zhao a, Tai-Xiang Jiang a, Teng-Yu Ji b, Tian-Hui Ma c a School of Mathematical Sciences/Research

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 6, JUNE

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 6, JUNE IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 6, JUNE 2010 2935 Variance-Component Based Sparse Signal Reconstruction and Model Selection Kun Qiu, Student Member, IEEE, and Aleksandar Dogandzic,

More information

Learning MMSE Optimal Thresholds for FISTA

Learning MMSE Optimal Thresholds for FISTA MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Learning MMSE Optimal Thresholds for FISTA Kamilov, U.; Mansour, H. TR2016-111 August 2016 Abstract Fast iterative shrinkage/thresholding algorithm

More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

About Split Proximal Algorithms for the Q-Lasso

About Split Proximal Algorithms for the Q-Lasso Thai Journal of Mathematics Volume 5 (207) Number : 7 http://thaijmath.in.cmu.ac.th ISSN 686-0209 About Split Proximal Algorithms for the Q-Lasso Abdellatif Moudafi Aix Marseille Université, CNRS-L.S.I.S

More information

Erkut Erdem. Hacettepe University February 24 th, Linear Diffusion 1. 2 Appendix - The Calculus of Variations 5.

Erkut Erdem. Hacettepe University February 24 th, Linear Diffusion 1. 2 Appendix - The Calculus of Variations 5. LINEAR DIFFUSION Erkut Erdem Hacettepe University February 24 th, 2012 CONTENTS 1 Linear Diffusion 1 2 Appendix - The Calculus of Variations 5 References 6 1 LINEAR DIFFUSION The linear diffusion (heat)

More information

Lecture 3: Linear Filters

Lecture 3: Linear Filters Lecture 3: Linear Filters Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Images as functions Linear systems (filters) Convolution and correlation Discrete Fourier Transform (DFT)

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

ABSTRACT. Recovering Data with Group Sparsity by Alternating Direction Methods. Wei Deng

ABSTRACT. Recovering Data with Group Sparsity by Alternating Direction Methods. Wei Deng ABSTRACT Recovering Data with Group Sparsity by Alternating Direction Methods by Wei Deng Group sparsity reveals underlying sparsity patterns and contains rich structural information in data. Hence, exploiting

More information

Sparse Optimization Lecture: Dual Methods, Part I

Sparse Optimization Lecture: Dual Methods, Part I Sparse Optimization Lecture: Dual Methods, Part I Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know dual (sub)gradient iteration augmented l 1 iteration

More information

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Jeevan K. Pant, Wu-Sheng Lu, and Andreas Antoniou University of Victoria May 21, 2012 Compressive Sensing 1/23

More information

SIGNAL AND IMAGE RESTORATION: SOLVING

SIGNAL AND IMAGE RESTORATION: SOLVING 1 / 55 SIGNAL AND IMAGE RESTORATION: SOLVING ILL-POSED INVERSE PROBLEMS - ESTIMATING PARAMETERS Rosemary Renaut http://math.asu.edu/ rosie CORNELL MAY 10, 2013 2 / 55 Outline Background Parameter Estimation

More information

Inverse Problems in Image Processing

Inverse Problems in Image Processing H D Inverse Problems in Image Processing Ramesh Neelamani (Neelsh) Committee: Profs. R. Baraniuk, R. Nowak, M. Orchard, S. Cox June 2003 Inverse Problems Data estimation from inadequate/noisy observations

More information

Fourier Transforms 1D

Fourier Transforms 1D Fourier Transforms 1D 3D Image Processing Alireza Ghane 1 Overview Recap Intuitions Function representations shift-invariant spaces linear, time-invariant (LTI) systems complex numbers Fourier Transforms

More information

l 0 TV: A Sparse Optimization Method for Impulse Noise Image Restoration

l 0 TV: A Sparse Optimization Method for Impulse Noise Image Restoration IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 l 0 TV: A Sparse Optimization Method for Impulse Noise Image Restoration Ganzhao Yuan, Bernard Ghanem arxiv:1802.09879v2 [cs.na] 28 Dec

More information