Digital Signal Processing
Introducing anisotropic tensor to high order variational model for image restoration

Jinming Duan (a), Wil O.C. Ward (a), Luke Sibbett (a), Zhenkuan Pan (b), Li Bai (a)
(a) School of Computer Science, University of Nottingham, UK
(b) School of Computer Science and Technology, Qingdao University, China

Keywords: Total variation; Hessian Frobenius norm; Anisotropic tensor; ADMM; Fast Fourier transform

Abstract: Second order total variation (SOTV) models have advantages for image restoration over their first order counterparts, including their ability to remove the staircase artefact in the restored image. However, such models tend to blur the reconstructed image when discretised for numerical solution [1-5]. To overcome this drawback, we introduce a new tensor weighted second order (TWSO) model for image restoration. Specifically, we develop a novel regulariser for the SOTV model that uses the Frobenius norm of the product of the isotropic SOTV Hessian matrix and an anisotropic tensor. We then adapt the alternating direction method of multipliers (ADMM) to solve the proposed model by breaking down the original problem into several subproblems. All the subproblems have closed forms and can be solved efficiently. The proposed method is compared with state-of-the-art approaches such as tensor-based anisotropic diffusion, total generalised variation, and Euler's elastica. We validate the proposed TWSO model using extensive experimental results on a large number of images from the Berkeley database BSDS500. We also demonstrate that our method effectively reduces both the staircase and blurring effects and outperforms existing approaches for image inpainting and denoising applications.

© 2017 The Author(s). Published by Elsevier Inc.
This is an open access article under the CC BY license.

1. Introduction

Anisotropic diffusion tensors can be used to describe the local geometry at an image pixel, thus making them appealing for various image processing tasks [6-13]. Variational methods allow easy integration of constraints and use of powerful modern optimisation techniques such as primal-dual [14-16], the fast iterative shrinkage-thresholding algorithm [17,18], and the alternating direction method of multipliers [2-4,19-24]. Recent advances on how to automatically select parameters for different optimisation algorithms [16,18] have dramatically boosted the performance of variational methods, leading to increased research interest in this field.

As such, the combination of diffusion tensors and variational methods has been investigated by researchers for image processing. Krajsek and Scharr [7] developed a linear anisotropic regularisation term that forms the basis of a tensor-valued energy functional for image denoising. Grasmair and Lenzen [8,9] penalised image variation by introducing a diffusion tensor that depends on the structure tensor of the image. Roussos and Maragos [10] developed a functional that utilises only the eigenvalues of the structure tensor. Similar work by Lefkimmiatis et al. [13] used the Schatten norm of the structure tensor eigenvalues. Åström et al. proposed a tensor-based variational formulation for colour image denoising [11], which computes the structure tensor without Gaussian convolution so that the Euler equation of the functional can be derived elegantly. They further introduced a tensor-based functional named the gradient energy total variation [12] that utilises both eigenvalues and eigenvectors of the gradient energy tensor.

However, the existing works mentioned above only consider the standard first order total variation (FOTV) energy. A drawback of the FOTV model is that it favours piecewise-constant solutions. Thus, it can create strong staircase artefacts in the smooth regions of the restored image. Another drawback of the FOTV model is its use of the gradient magnitude to penalise image variation at pixel locations, which uses less neighbouring pixel information than high order derivatives in a discrete space. As such, the FOTV has difficulty inpainting images with large gaps.

High order variational models can thus be applied to remedy these side effects. Among these is the second order total variation (SOTV) model [1,2,26,27]. Unlike other high order variational models, such as Gaussian curvature [28], mean curvature [23,29], Euler's elastica, etc., the SOTV is a convex high order extension of the FOTV, which guarantees a global solution. The SOTV is also more efficient to implement [4] than the convex total generalised variation (TGV) [30,31].
However, the inpainting results of the SOTV model depend highly on the geometry of the inpainting region, and it also tends to blur the inpainted area [2]. Further, according to the numerical experimental results displayed in [1-5], the SOTV model tends to blur object edges for image denoising.

In this paper, we propose a tensor weighted second order (TWSO) variational model for image inpainting and denoising. A novel regulariser has been developed for the TWSO that uses the Frobenius norm of the product of the isotropic SOTV Hessian matrix and an anisotropic tensor, so that the TWSO is a nonlinear anisotropic high order model which effectively integrates orientation information. In numerous experiments, we found that the TWSO can reduce the staircase effect whilst preserving the sharp edges/boundaries of objects in the denoised image. For image inpainting problems, the TWSO model is able to connect large gaps regardless of the geometry of the inpainting region, and it does not introduce much blur into the inpainted image. As the proposed TWSO model is based on the variational framework, ADMM can be adapted to solve the model efficiently. Extensive numerical results show that the new TWSO model outperforms state-of-the-art approaches for both image inpainting and denoising.

The contributions of the paper are twofold: 1) a novel anisotropic nonlinear second order variational model is proposed for image restoration; to the best of our knowledge, this is the first time the Frobenius norm of the product of the Hessian matrix and a tensor has been used as a regulariser for variational image denoising and inpainting; 2) a fast ADMM algorithm is developed for image restoration based on a forward-backward finite difference scheme.

The rest of the paper is organised as follows: Section 2 introduces the proposed TWSO model and the anisotropic tensor T; Section 3 presents the discretisation of the differential operators used for ADMM based on a forward-backward finite difference scheme; Section 4 describes ADMM for solving the variational model efficiently; Section 5 gives details of the experiments using the proposed TWSO model and the state-of-the-art approaches for image inpainting and denoising; Section 6 concludes the paper.

2. The TWSO model for image restoration

2.1. The TWSO model

The TWSO model is defined as

$$\min_u \left\{ \frac{\eta}{p}\,\|1_\Gamma(u-f)\|_p^p + \|T\nabla^2 u\|_1 \right\}, \quad (2.3)$$

where p ∈ {1, 2} selects the L1 or L2 data fidelity term (i.e. ||1_Γ(u−f)||_1 or (1/2)||1_Γ(u−f)||_2^2), and Γ is a subset of Ω (i.e. Γ ⊆ Ω ⊂ R²). For image processing applications, Ω is normally a rectangular domain. For image inpainting, f is the given image in Γ = Ω\D, where D is the inpainting region with missing or degraded information. The values of f on the boundaries of D need to be propagated into the inpainting region via minimisation of the weighted regularisation term of the TWSO model. For image denoising, f is the noisy image and Ω = Γ. In this case, the choice of p depends on the type of noise found in f, e.g. p = 2 for Gaussian noise and p = 1 for impulsive noise.

In the regularisation term ||T∇²u||_1 of (2.3), T is a symmetric positive semi-definite 2×2 diffusion tensor whose four components are T11, T12, T21 and T22. They are computed from the input image f. It is worth pointing out that the regularisation term ||T∇²u||_1 is the Frobenius norm of the 2×2 tensor T multiplied by the 2×2 Hessian matrix ∇²u, which has the form

$$T\nabla^2 u = \begin{pmatrix} T_{11}\,\partial_{xx}u + T_{12}\,\partial_{xy}u & T_{11}\,\partial_{yx}u + T_{12}\,\partial_{yy}u \\ T_{21}\,\partial_{xx}u + T_{22}\,\partial_{xy}u & T_{21}\,\partial_{yx}u + T_{22}\,\partial_{yy}u \end{pmatrix}, \quad (2.4)$$

where the two orthogonal eigenvectors of T span the rotated coordinate system in which the gradient of the input image is computed. As such, T can introduce orientations to the bounded Hessian regulariser ||∇²u||_1. The eigenvalues of the tensor measure the degree of anisotropy in the regulariser and weight the four second order derivatives in ∇²u in the two directions given by the eigenvectors of the structure tensor introduced in [32]. It should be noted that the original SOTV (2.1) is an isotropic nonlinear model, while the proposed TWSO (2.3) is an anisotropic nonlinear one.
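Since the regulariser ||T∇²u||_1 is just the sum over pixels of the Frobenius norm of the 2×2 product in (2.4), it can be evaluated directly once the tensor and Hessian components are available. Below is a minimal NumPy sketch; the (2, 2, M, N) array layout and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tensor_weighted_hessian(T, H):
    """Pointwise 2x2 matrix product T @ H, cf. (2.4).

    T, H: arrays of shape (2, 2, M, N) holding the four components
    at every pixel (an assumed layout for illustration)."""
    return np.einsum('ikmn,kjmn->ijmn', T, H)

def twso_regulariser(T, H):
    """||T grad^2 u||_1: sum over pixels of the Frobenius norm of T @ H."""
    P = tensor_weighted_hessian(T, H)
    return np.sqrt((P ** 2).sum(axis=(0, 1))).sum()
```

With T set to the identity at every pixel, the product reduces to the Hessian itself and the regulariser collapses to the isotropic SOTV term ||∇²u||_1, matching the remark that TWSO generalises SOTV.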
As a result, the new tensor-weighted regulariser ||T∇²u||_1 is powerful for image inpainting and denoising, as illustrated in the experimental section.

We now recall the SOTV model on which the TWSO builds. In [26], the authors considered the following SOTV model for image processing:

$$\min_u \left\{ \frac{\eta}{2}\,\|u-f\|_2^2 + \|\nabla^2 u\|_1 \right\}, \quad (2.1)$$

where η > 0 is a regularisation parameter and ∇²u is the second order Hessian matrix of the form

$$\nabla^2 u = \begin{pmatrix} \partial_{xx}u & \partial_{yx}u \\ \partial_{xy}u & \partial_{yy}u \end{pmatrix}, \quad (2.2)$$

and ||∇²u||_1 in (2.1) is the Frobenius norm of the Hessian matrix (2.2). By using such a norm, (2.1) has several capabilities: 1) allowing discontinuity of the gradient of u; 2) imposing smoothness on u; 3) satisfying the rotation-invariance property. Though this high order model (2.1) is able to reduce the staircase artefact associated with the FOTV for image denoising, it can blur object edges in the image [1-4]. For inpainting, as investigated in [2], though the SOTV has the ability to connect large gaps in the image, such ability depends on the geometry of the inpainting region, and it can blur the inpainted image. In order to remedy these side effects in both image inpainting and denoising, we propose the more flexible and generalised tensor weighted second order (TWSO) model, which takes advantage of both the tensor and the second order derivative. In the next subsection, we introduce the derivation of the tensor T.

2.2. Tensor estimation

In [32], the author defined the structure tensor J_ρ of an image u as

$$J_\rho(\nabla u_\sigma) = K_\rho * (\nabla u_\sigma \otimes \nabla u_\sigma), \quad (2.5)$$

where K_ρ is a Gaussian kernel whose standard deviation is ρ, and ∇u_σ is the gradient of the smoothed image obtained by convolving u with K_σ. The use of (∇u_σ ⊗ ∇u_σ) := ∇u_σ ∇u_σ^T as a structure descriptor makes J_ρ insensitive to noise but sensitive to changes in orientation. The structure tensor J_ρ is positive semi-definite and has two orthonormal eigenvectors v_1 ∥ ∇u_σ (in the direction of the gradient) and v_2 ⊥ ∇u_σ (in the direction of the isolevel lines). The corresponding eigenvalues μ_1 and μ_2 can be calculated as

$$\mu_{1,2} = \frac{1}{2}\left( j_{11} + j_{22} \pm \sqrt{(j_{11}-j_{22})^2 + 4\,j_{12}^2} \right), \quad (2.6)$$

where j_11, j_12 and j_22 are the components of J_ρ, given by

$$j_{11} = K_\rho * (\partial_x u_\sigma)^2, \qquad j_{12} = j_{21} = K_\rho * (\partial_x u_\sigma\, \partial_y u_\sigma), \qquad j_{22} = K_\rho * (\partial_y u_\sigma)^2. \quad (2.7)$$

The eigenvalues of J_ρ describe the ρ-averaged contrast in the eigendirections: if μ_1 = μ_2 = 0, the image is in a homogeneous area; if μ_1 ≫ μ_2 = 0, it is on a straight line; and if μ_1 ≥ μ_2 ≫ 0, it is at an object corner. Based on the eigenvalues, we can define the following local structural coherence quantity:

$$\mathrm{Coh} = (\mu_1-\mu_2)^2 = (j_{11}-j_{22})^2 + 4\,j_{12}^2. \quad (2.8)$$
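The structure tensor pipeline (2.5)-(2.8) amounts to Gaussian smoothing, outer products of the gradient, and a closed-form 2×2 eigenvalue formula. The NumPy sketch below is a hedged illustration: the helper names, the periodic boundary handling, and the central-difference gradient are assumed choices, not the authors' code.

```python
import numpy as np

def gaussian_blur(a, sigma):
    """Separable Gaussian smoothing with periodic boundaries (illustrative)."""
    if sigma <= 0:
        return a
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    for axis in (0, 1):
        a = sum(w * np.roll(a, s, axis=axis) for w, s in zip(k, x))
    return a

def structure_tensor(u, sigma=1.0, rho=2.0):
    """Components (2.7), eigenvalues (2.6), and coherence (2.8) of J_rho."""
    us = gaussian_blur(u, sigma)
    # central differences for the smoothed gradient
    ux = (np.roll(us, -1, 1) - np.roll(us, 1, 1)) / 2.0
    uy = (np.roll(us, -1, 0) - np.roll(us, 1, 0)) / 2.0
    j11 = gaussian_blur(ux * ux, rho)
    j12 = gaussian_blur(ux * uy, rho)
    j22 = gaussian_blur(uy * uy, rho)
    root = np.sqrt((j11 - j22) ** 2 + 4.0 * j12 ** 2)
    mu1 = 0.5 * (j11 + j22 + root)   # along the gradient direction
    mu2 = 0.5 * (j11 + j22 - root)   # along the isolevel lines
    coh = (mu1 - mu2) ** 2           # coherence (2.8)
    return j11, j12, j22, mu1, mu2, coh
```

The eigenvalues mu1 and mu2 returned here are exactly the quantities that the diffusion weights of the next subsection are built from.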
This quantity is large for linear structures and small for homogeneous areas in the image. With the derived structure tensor (2.5), we define a new tensor T = D(J_ρ(∇u_σ)) whose eigenvectors are parallel to those of J_ρ(∇u_σ) and whose eigenvalues λ_1 and λ_2 are chosen depending on the image processing application. For denoising problems, we need to prohibit diffusion across image edges and encourage strong diffusion along edges. We therefore consider the following two diffusion coefficients for the eigenvalues:

$$\lambda_1 = \begin{cases} 1 & s = 0 \\ 1 - e^{-\bar{C}/(s/C)^8} & s > 0 \end{cases}, \qquad \lambda_2 = 1, \quad (2.9)$$

with s := |∇u_σ| the gradient magnitude, C the contrast parameter, and C̄ a fixed normalising constant as in [32]. For inpainting problems, we want to preserve linear structures, and thus regularisation along isophotes of the image is appropriate. We therefore consider the weights that Weickert [32] used for enhancing the coherence of linear structures. With μ_1, μ_2 being the eigenvalues of J_ρ as before, we define

$$\lambda_1 = \gamma, \qquad \lambda_2 = \begin{cases} \gamma & \mu_1 = \mu_2 \\ \gamma + (1-\gamma)\, e^{-C/\mathrm{Coh}} & \text{else} \end{cases}, \quad (2.10)$$

where γ ∈ (0, 1), γ ≪ 1. The constant γ determines how steep the exponential function is. The structure threshold C affects how the approach interprets local structures: the larger the parameter value, the more coherent the method will be. With these eigenvalues, the regularisation is stronger in the neighbourhood of coherent structures (note that ρ determines the radius of the neighbourhood) and weaker in homogeneous areas, at corners, and in incoherent areas of the image.

3. Discretisation of differential operators

In order to implement ADMM for the proposed TWSO model, it is necessary to discretise the derivatives involved. We note that different discretisations may lead to different numerical results. In this paper, we use the forward-backward finite difference scheme. Let Ω denote the two-dimensional grey scale image domain of size M×N, and let x and y denote the coordinates along the image column and row directions, respectively. The discrete second order derivatives of u at point (i, j) along the x and y directions can then be written as

$$\partial_{xx} u_{i,j} = \partial_x^- \partial_x^+ u_{i,j} = \begin{cases} u_{i,N} - 2u_{i,j} + u_{i,j+1} & 1 \le i \le M,\ j = 1 \\ u_{i,j-1} - 2u_{i,j} + u_{i,j+1} & 1 \le i \le M,\ 1 < j < N \\ u_{i,j-1} - 2u_{i,j} + u_{i,1} & 1 \le i \le M,\ j = N \end{cases} \quad (3.1)$$

$$\partial_{yy} u_{i,j} = \partial_y^- \partial_y^+ u_{i,j} = \begin{cases} u_{M,j} - 2u_{i,j} + u_{i+1,j} & i = 1,\ 1 \le j \le N \\ u_{i-1,j} - 2u_{i,j} + u_{i+1,j} & 1 < i < M,\ 1 \le j \le N \\ u_{i-1,j} - 2u_{i,j} + u_{1,j} & i = M,\ 1 \le j \le N \end{cases} \quad (3.2)$$

$$\partial_x^+ \partial_y^+ u_{i,j} = \partial_y^+ \partial_x^+ u_{i,j} = \begin{cases} u_{i,j} - u_{i+1,j} - u_{i,j+1} + u_{i+1,j+1} & 1 \le i < M,\ 1 \le j < N \\ u_{i,j} - u_{1,j} - u_{i,j+1} + u_{1,j+1} & i = M,\ 1 \le j < N \\ u_{i,j} - u_{i+1,j} - u_{i,1} + u_{i+1,1} & 1 \le i < M,\ j = N \\ u_{i,j} - u_{1,j} - u_{i,1} + u_{1,1} & i = M,\ j = N \end{cases} \quad (3.3)$$

$$\partial_x^- \partial_y^- u_{i,j} = \partial_y^- \partial_x^- u_{i,j} = \begin{cases} u_{i,j} - u_{i,N} - u_{M,j} + u_{M,N} & i = 1,\ j = 1 \\ u_{i,j} - u_{i,j-1} - u_{M,j} + u_{M,j-1} & i = 1,\ 1 < j \le N \\ u_{i,j} - u_{i,N} - u_{i-1,j} + u_{i-1,N} & 1 < i \le M,\ j = 1 \\ u_{i,j} - u_{i,j-1} - u_{i-1,j} + u_{i-1,j-1} & 1 < i \le M,\ 1 < j \le N \end{cases} \quad (3.4)$$

Fig. 1 summarises these discrete differential operators (Fig. 1. Discrete second order derivatives). Based on the above forward-backward finite difference scheme, the second order Hessian matrix ∇²u in (2.2) can be discretised as

$$\nabla^2 u = \begin{pmatrix} \partial_x^- \partial_x^+ u & \partial_y^+ \partial_x^+ u \\ \partial_x^+ \partial_y^+ u & \partial_y^- \partial_y^+ u \end{pmatrix}. \quad (3.5)$$

In (3.1)-(3.5), we assume that u satisfies the periodic boundary condition, so that the fast Fourier transform (FFT) solver can be applied to solve (2.3) analytically. The numerical approximation of the second order divergence operator div² is based on the following expansion:

$$\mathrm{div}^2(P) = \partial_x^+\partial_x^-(P_1) + \partial_y^-\partial_x^-(P_2) + \partial_x^-\partial_y^-(P_3) + \partial_y^+\partial_y^-(P_4), \quad (3.6)$$

where P is a 2×2 matrix whose four components are P_1, P_2, P_3 and P_4, respectively. For a more detailed description of the discretisation of other high order differential operators, we refer the reader to [3,4]. More advanced discretisations on a staggered grid can be found in [33,34].

Finally, we address the implementation of the first order derivatives of u_σ in (2.7). Since differentiation and convolution commute, we can take the derivative and smooth the image in either order. In this sense, we have

$$\partial_x u_\sigma = \partial_x K_\sigma * u, \qquad \partial_y u_\sigma = \partial_y K_\sigma * u. \quad (3.7)$$

Alternatively, the central finite difference scheme can be used to approximate ∂_x u_σ and ∂_y u_σ so as to satisfy the rotation-invariance property. Once all the necessary discretisation is done, the numerical computation can be implemented.

4. Numerical optimisation algorithm

It is nontrivial to solve the TWSO model directly, due to two facts: 1) it is nonsmooth; 2) it couples the tensor T and the Hessian matrix ∇²u in ||T∇²u||_1, as shown in (2.4), making the resulting high order Euler-Lagrange equation drastically difficult to discretise and solve computationally. To address these two difficulties, we present an efficient numerical algorithm based on ADMM to minimise the variational model (2.3).

4.1. Alternating direction method of multipliers (ADMM)

ADMM combines the decomposability of dual ascent with the superior convergence properties of the method of multipliers. Recent research [20] reveals that ADMM is also closely related to Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, the split Bregman iterative algorithm, etc.
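Under the periodic boundary condition, the stencils (3.1)-(3.4) are circular shifts of the image, so `np.roll` reproduces them exactly. The following sketch is an illustration of that observation, not the authors' implementation; the function names are assumed.

```python
import numpy as np

def dxx(u):
    """Periodic second difference along columns (x), cf. (3.1)."""
    return np.roll(u, 1, axis=1) - 2.0 * u + np.roll(u, -1, axis=1)

def dyy(u):
    """Periodic second difference along rows (y), cf. (3.2)."""
    return np.roll(u, 1, axis=0) - 2.0 * u + np.roll(u, -1, axis=0)

def dxy_forward(u):
    """Forward-forward mixed difference, cf. (3.3)."""
    return (u - np.roll(u, -1, axis=0) - np.roll(u, -1, axis=1)
            + np.roll(u, -1, (0, 1)))

def dxy_backward(u):
    """Backward-backward mixed difference, cf. (3.4)."""
    return (u - np.roll(u, 1, axis=0) - np.roll(u, 1, axis=1)
            + np.roll(u, 1, (0, 1)))

def hessian(u):
    """Discrete Hessian field of shape (2, 2, M, N), cf. (3.5)."""
    return np.array([[dxx(u), dxy_forward(u)],
                     [dxy_forward(u), dyy(u)]])
```

Because every stencil wraps around, each difference field sums to zero over the image, which is a quick sanity check on the boundary handling.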
Given a constrained optimisation problem

$$\min_{x,z}\ \{ f(x) + g(z) \} \quad \text{s.t.} \quad Ax + Bz = c, \quad (4.1)$$

where x ∈ R^{d_1}, z ∈ R^{d_2}, A ∈ R^{m×d_1}, B ∈ R^{m×d_2}, c ∈ R^m, and both f(·) and g(·) are assumed to be convex, the augmented Lagrange function of problem (4.1) can be written as

$$L_A(x, z; \rho) = f(x) + g(z) + \frac{\beta}{2}\,\|Ax + Bz - c - \rho\|_2^2, \quad (4.2)$$

where ρ is an augmented Lagrangian multiplier and β > 0 is an augmented penalty parameter. At the kth iteration, ADMM attempts to solve problem (4.2) by iteratively minimising L_A with respect to x and z, and updating ρ accordingly. The resulting optimisation procedure is summarised in Algorithm 1.

Algorithm 1 ADMM.
1: Initialisation: set β > 0, z^0 and ρ^0.
2: while a stopping criterion is not satisfied do
3:   x^{k+1} = argmin_x { f(x) + (β/2)||Ax + Bz^k − c − ρ^k||_2^2 }.
4:   z^{k+1} = argmin_z { g(z) + (β/2)||Ax^{k+1} + Bz − c − ρ^k||_2^2 }.
5:   ρ^{k+1} = ρ^k − (Ax^{k+1} + Bz^{k+1} − c).
6: end while

4.2. Application of ADMM to solve the TWSO model

We now use ADMM to solve the minimisation problem of the proposed TWSO model (2.3). The basic idea of ADMM is to first split the original nonsmooth minimisation problem into several subproblems by introducing some auxiliary variables, and then to solve each subproblem separately. This numerical algorithm benefits from both solution stability and fast convergence.

In order to implement ADMM, one scalar auxiliary variable ũ and two 2×2 matrix-valued auxiliary variables W and V are introduced to reformulate (2.3) into the following constrained optimisation problem:

$$\min_{\tilde u, u, W, V} \left\{ \frac{\eta}{p}\,\|1_\Gamma(\tilde u - f)\|_p^p + \|W\|_1 \right\} \quad \text{s.t.} \quad \tilde u = u,\ V = \nabla^2 u,\ W = TV, \quad (4.3)$$

where W = (W11 W12; W21 W22) and V = (V11 V12; V21 V22). The constraints ũ = u and W = TV are respectively applied to handle the non-smoothness of the data fidelity term (p = 1) and of the regularisation term, whilst V = ∇²u decouples T and ∇²u in the TWSO. The three constraints together make the calculation for each subproblem point-wise, and thus no huge matrix multiplication or inversion is required. To guarantee an optimal solution, the above constrained problem (4.3) can be solved through ADMM as summarised in Algorithm 1. Let L_A(ũ, u, W, V; s, d, b) be the augmented Lagrange functional of (4.3), defined as follows:

$$L_A(\tilde u, u, W, V; s, d, b) = \frac{\eta}{p}\,\|1_\Gamma(\tilde u - f)\|_p^p + \|W\|_1 + \frac{\theta_1}{2}\|\tilde u - u - s\|_2^2 + \frac{\theta_2}{2}\|V - \nabla^2 u - d\|_2^2 + \frac{\theta_3}{2}\|W - TV - b\|_2^2, \quad (4.4)$$

where s, d = (d11 d12; d21 d22) and b = (b11 b12; b21 b22) are the augmented Lagrangian multipliers, and θ1, θ2 and θ3 are positive penalty constants controlling the weights of the penalty terms. We now decompose the optimisation problem (4.4) into four subproblems with respect to ũ, u, W and V, and then update the Lagrangian multipliers s, d and b until the optimal solution is found and the process converges.

1) ũ-subproblem: the problem ũ^{k+1} = argmin_ũ L_A(ũ, u^k, W^k, V^k; s^k, d^k, b^k) can be solved by considering the following minimisation problem:

$$\tilde u^{k+1} = \arg\min_{\tilde u} \left\{ \frac{\eta}{p}\,\|1_\Gamma(\tilde u - f)\|_p^p + \frac{\theta_1}{2}\,\|\tilde u - u^k - s^k\|_2^2 \right\}. \quad (4.5)$$

The solution of (4.5) depends on the choice of p. Given the domain Γ for image denoising or inpainting, the closed-form formulae for the minimiser ũ^{k+1} under the different conditions are

$$\tilde u^{k+1} = \begin{cases} f + \max\left(|\psi^k| - \dfrac{1_\Gamma\,\eta}{\theta_1},\, 0\right) \circ \mathrm{sign}(\psi^k) & \text{if } p = 1 \\[2ex] \left(1_\Gamma\,\eta f + \theta_1 (u^k + s^k)\right) \big/ \left(1_\Gamma\,\eta + \theta_1\right) & \text{if } p = 2 \end{cases} \quad (4.6)$$

where ψ^k = u^k + s^k − f, and the ∘ and sign symbols denote componentwise multiplication and the signum function, respectively.

2) u-subproblem: we then solve the u-subproblem u^{k+1} = argmin_u L_A(ũ^{k+1}, u, W^k, V^k; s^k, d^k, b^k) by minimising the following problem:

$$u^{k+1} = \arg\min_u \left\{ \frac{\theta_1}{2}\,\|\tilde u^{k+1} - u - s^k\|_2^2 + \frac{\theta_2}{2}\,\|V^k - \nabla^2 u - d^k\|_2^2 \right\}, \quad (4.7)$$

whose closed form can be obtained using the following FFT under the assumption of the circulant boundary condition (note that to benefit from the fast FFT solver for image inpainting problems, the introduction of ũ is compulsory, due to the fact that F(1_Γ u) ≠ 1_Γ F(u)):

$$u^{k+1} = \mathcal{F}^{-1}\left( \frac{\mathcal{F}\left(\theta_1(\tilde u^{k+1} - s^k) + \theta_2\, \mathrm{div}^2(V^k - d^k)\right)}{\theta_1 + \theta_2\, \mathcal{F}(\mathrm{div}^2\nabla^2)} \right), \quad (4.8)$$

where F and F^{-1} respectively denote the discrete Fourier transform and its inverse; div² is the second order divergence operator whose discrete form is defined in (3.6); and the division is pointwise. The values of the coefficient matrix F(div²∇²) equal 4(cos(2πq/N) + cos(2πr/M) − 2)², where M and N respectively stand for the image height and width, and r ∈ [0, M) and q ∈ [0, N) are the frequencies in the frequency domain. Note that, in addition to FFT, AOS and Gauss-Seidel iteration can be applied to minimise problem (4.7) at very low cost.

3) W-subproblem: we now solve the W-subproblem W^{k+1} = argmin_W L_A(ũ^{k+1}, u^{k+1}, W, V^k; s^k, d^k, b^k). Note that the unknown matrix-valued variable W is componentwise separable, so it can be solved effectively through the analytical shrinkage operation, also known as generalised soft thresholding:

$$W^{k+1} = \arg\min_W \left\{ \|W\|_1 + \frac{\theta_3}{2}\,\|W - TV^k - b^k\|_2^2 \right\},$$

whose solution W^{k+1} is given by

$$W^{k+1} = \max\left(\left|TV^k + b^k\right| - \frac{1}{\theta_3},\, 0\right) \frac{TV^k + b^k}{\left|TV^k + b^k\right|}, \quad (4.9)$$

with the convention 0·(0/0) = 0.

4) V-subproblem: given fixed u^{k+1}, W^{k+1}, d^k and b^k, the solution V^{k+1} of the V-subproblem V^{k+1} = argmin_V L_A(ũ^{k+1}, u^{k+1}, W^{k+1}, V; s^k, d^k, b^k) is equivalent to solving the following least-squares optimisation problem:

$$V^{k+1} = \arg\min_V \left\{ \frac{\theta_2}{2}\,\|V - \nabla^2 u^{k+1} - d^k\|_2^2 + \frac{\theta_3}{2}\,\|W^{k+1} - TV - b^k\|_2^2 \right\},$$
which results in the following linear system with respect to the components of the variable V^{k+1}:

$$\begin{aligned}
R_{11} V_{11}^{k+1} + R_{12} V_{21}^{k+1} &= \theta_2\left(\partial_x^+\partial_x^- u^{k+1} + d_{11}^k\right) - \theta_3\left(T_{11}Q_{11} + T_{21}Q_{21}\right) \\
R_{11} V_{12}^{k+1} + R_{12} V_{22}^{k+1} &= \theta_2\left(\partial_x^+\partial_y^+ u^{k+1} + d_{12}^k\right) - \theta_3\left(T_{11}Q_{12} + T_{21}Q_{22}\right) \\
R_{12} V_{11}^{k+1} + R_{22} V_{21}^{k+1} &= \theta_2\left(\partial_y^+\partial_x^+ u^{k+1} + d_{21}^k\right) - \theta_3\left(T_{12}Q_{11} + T_{22}Q_{21}\right) \\
R_{12} V_{12}^{k+1} + R_{22} V_{22}^{k+1} &= \theta_2\left(\partial_y^+\partial_y^- u^{k+1} + d_{22}^k\right) - \theta_3\left(T_{12}Q_{12} + T_{22}Q_{22}\right)
\end{aligned} \quad (4.10)$$

where we define R_{11} = θ3(T11² + T21²) + θ2, R_{12} = R_{21} = θ3(T11 T12 + T21 T22), R_{22} = θ3(T12² + T22²) + θ2, and Q11 = b11^k − W11^{k+1}, Q12 = b12^k − W12^{k+1}, Q21 = b21^k − W21^{k+1}, Q22 = b22^k − W22^{k+1}. Note that the closed forms of V11^{k+1} and V21^{k+1} can be calculated from the two equations in (4.10) that involve them, whilst the remaining two equations lead to the analytical solutions of V12^{k+1} and V22^{k+1}.

5) s, d and b update: at each iteration, we update the augmented Lagrangian multipliers s, d and b as shown in Steps 09 to 11 of Algorithm 2.

Algorithm 2 ADMM for the proposed TWSO model for image restoration.
01: Input: f, 1_Γ, p, ρ, σ, γ, C, C̄, η and (θ1, θ2, θ3).
02: Initialise: u^0 = f, W^0 = 0, V^0 = 0, d^0 = 0, b^0 = 0.
03: while some stopping criterion is not satisfied do
04:   Compute ũ^{k+1} according to (4.6).
05:   Compute u^{k+1} according to (4.8).
06:   Compute W^{k+1} according to (4.9).
07:   Compute V^{k+1} according to (4.10).
08:   Compute T^{k+1} = D(J_ρ(∇u_σ^{k+1})) according to (2.9) or (2.10).
09:   Update Lagrangian multiplier s^{k+1} = s^k + u^{k+1} − ũ^{k+1}.
10:   Update Lagrangian multiplier d^{k+1} = d^k + ∇²u^{k+1} − V^{k+1}.
11:   Update Lagrangian multiplier b^{k+1} = b^k + (TV)^{k+1} − W^{k+1}.
12: end while

In summary, an ADMM-based iterative algorithm was developed to decompose the original nonsmooth minimisation problem into four simple subproblems, each of which has a closed-form solution that can be computed point-wise using efficient numerical methods (i.e. FFT and shrinkage). The overall numerical optimisation algorithm of our proposed method for image restoration is summarised in Algorithm 2. We note that, in order to obtain better denoising and inpainting results, we iteratively refine the diffusion tensor T using the recovered image (i.e. u^{k+1}, as shown in Step 08 of Algorithm 2). This refinement is crucial, as it provides more accurate tensor information for the next iteration, thus leading to more pleasing restoration results. Due to the reweighting process, the convergence of the algorithm is not guaranteed. However, the algorithm is still fast and stable, as evident in our experiments.

5. Numerical experiments

We conduct numerical experiments to compare the TWSO model with state-of-the-art approaches for image inpainting and denoising. The images used in this study are normalised to [0, 255] before inpainting and denoising. The metrics for quantitative evaluation of the different methods are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM); the higher the PSNR and SSIM a method obtains, the better the method is. In order to maximise the performance of all the compared methods, we carefully adjust their built-in parameters such that the resulting PSNR and SSIM are maximised. In the following, we introduce how to select these parameters for the TWSO model in detail.

5.1. Parameters

For image inpainting, there are 8 parameters in the proposed model: the standard deviations ρ and σ in (2.5), the constant γ and structure threshold C in (2.10), the regularisation parameter η in (4.4), and the penalty parameters θ1, θ2 and θ3 in (4.4).

The parameter ρ averages orientation information and helps to stabilise the directional behaviour of the diffusion. Large interrupted lines can be connected if ρ is equal to or larger than the gap size. The selection of ρ for each example is listed in Table 1. σ is used to decrease the influence of noise and guarantee numerical stabilisation, thereby giving a more robust estimation of the structure tensor J_ρ in (2.5). γ in (2.10) determines how steep the exponential function is; Weickert [32] suggested γ ∈ (0, 1), γ ≪ 1, so γ is fixed accordingly. The structure threshold C affects how the method interprets local image structures: the larger the parameter value, the more sensitive the method will be. However, if C goes to infinity, our TWSO model reduces to the isotropic SOTV model. The selection of C for the different examples is presented in Table 1. The regularisation parameter η controls the smoothness of the restored image: smaller η leads to smoother restoration, and η should be relatively large to make the inpainted image close to the original image. The same value of η is used for all the inpainting experiments.

We now illustrate how to choose the penalty parameters θ1, θ2 and θ3. Due to the augmented Lagrangian multipliers used, different combinations of the three parameters provide similar inpainting results; however, the convergence speed is affected. In order to guarantee the convergence speed, we use the rule introduced in [35] to tune θ1, θ2 and θ3: namely, the residuals R_i^k (i = 1, 2, 3) defined in (5.1), the relative errors of the Lagrangian multipliers L_i^k (i = 1, 2, 3) defined in (5.2), and the relative errors of u defined in (5.3) should reduce to zero at nearly the same speed as the iteration proceeds. For example, in all the inpainting experiments, setting θ1 = 0.01 and θ2 = 0.01, together with an appropriately chosen θ3, gives a good convergence speed, as each pair of curves in Fig. 2(a) and (b) decreases to zero at very similar speed. In addition, Fig. 2(d) shows that the numerical energy of the TWSO model (2.3) reaches a steady state after only a few iterations. Fig. 2 shows these plots for a real inpainting example.

The residuals R_i^k (i = 1, 2, 3) are defined as

$$\left(R_1^k, R_2^k, R_3^k\right) = \left( \|\tilde u^k - u^k\|_{L^1},\ \|V^k - \nabla^2 u^k\|_{L^1},\ \|W^k - (TV)^k\|_{L^1} \right), \quad (5.1)$$

where ||·||_{L^1} denotes the L1 norm on the image domain. The relative errors of the augmented Lagrangian multipliers L_i^k (i = 1, 2, 3) are defined as

$$\left(L_1^k, L_2^k, L_3^k\right) = \left( \frac{\|s^k - s^{k-1}\|_{L^1}}{\|s^{k-1}\|_{L^1}},\ \frac{\|d^k - d^{k-1}\|_{L^1}}{\|d^{k-1}\|_{L^1}},\ \frac{\|b^k - b^{k-1}\|_{L^1}}{\|b^{k-1}\|_{L^1}} \right). \quad (5.2)$$

The relative errors of u are defined as

$$R_u^k = \frac{\|u^k - u^{k-1}\|_{L^1}}{\|u^{k-1}\|_{L^1}}. \quad (5.3)$$

In summary, there are only two built-in parameters, ρ and C, that need to be adjusted for inpainting different images (see Table 1 for their selections). Consequently, the proposed TWSO is a very robust model in terms of parameter tuning.
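The convergence diagnostics (5.1)-(5.3) and the PSNR metric are straightforward to monitor during the iterations. A small sketch with assumed helper names follows; SSIM is omitted since it needs a windowed computation.

```python
import numpy as np

def l1(a):
    """L1 norm over the image domain."""
    return np.abs(a).sum()

def residuals(u_tilde, u, V, hess_u, W, TV):
    """Constraint residuals (R_1, R_2, R_3) of (5.1)."""
    return l1(u_tilde - u), l1(V - hess_u), l1(W - TV)

def relative_error(new, old):
    """Relative L1 change, as used in (5.2) for s, d, b and in (5.3) for u."""
    return l1(new - old) / max(l1(old), 1e-15)

def psnr(u, ref, peak=255.0):
    """Peak signal-to-noise ratio for images normalised to [0, 255]."""
    mse = np.mean((u.astype(float) - ref.astype(float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(peak ** 2 / mse)
```

Tracking these quantities per iteration is exactly how the balancing rule for θ1, θ2, θ3 described above is applied: the three residual curves should decay at roughly the same rate.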
Fig. 2. Plots of the residuals (5.1), the relative errors of the Lagrangian multipliers (5.2), the relative errors of the function u in (5.3), and the energy of the proposed model (2.3) against the number of iterations for the real inpainting example.

Fig. 3. Plots of the residuals (5.1), relative errors of the Lagrangian multipliers (5.2), relative errors of the function u in (5.3), and the energy of the TWSO model (2.3) against the number of iterations for the synthetic image in Fig. 9.
Table 1. Parameters (ρ, σ, γ, C/C̄, η, θ1, θ2, θ3) used in the TWSO model for the examples in Figs. 4-7 and Figs. 9-10.

For image denoising, there are 7 parameters in the TWSO model: ρ and σ in (2.5), the contrast parameter C in (2.9), and η, θ1, θ2 and θ3 in (4.4). Each parameter plays a similar role to its counterpart in the inpainting model. However, in contrast to inpainting, ρ here should be small, because we do not need to connect gaps for image denoising. We have also found that σ = 1 is sufficient for the cases considered in this paper. C determines how much anisotropy the TWSO model brings to the resulting image; we illustrate the effect of this parameter using the example in Fig. 10. The regularisation parameter η should be small when the image noise level is high. The three penalty parameters are chosen based on the quantities defined in (5.1), (5.2) and (5.3). Fig. 3 shows that using θ1 = 5, θ2 = 5 and θ3 = 10 leads to fast convergence for the synthetic image in Fig. 9, and these values are used for the rest of the image denoising examples. Finally, the parameters selected are given in the last two columns of Table 1.

Fig. 4. Performance comparison of the SOTV and TWSO models on some degraded images. The red regions in the first row are inpainting domains. The second and third rows show the corresponding inpainting results by the SOTV and the TWSO models, respectively. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

5.2. Image inpainting

In this section we test the capability of the TWSO model for image inpainting and compare it with several state-of-the-art inpainting methods, namely the TV [36], TGV [37], SOTV [2], and Euler's elastica [38] models. We denote the inpainting domain as D in the following experiments. The region Γ in equation (2.3) is Ω\D. We use p = 2 for all examples in image inpainting.

Fig. 4 illustrates the importance of the tensor T in the proposed TWSO model. The damaged images overlaid with the red inpainting domains are shown in the first row. The second and third rows show that the TWSO performs much better than the SOTV: the SOTV introduces blurring into the inpainted regions, whilst the TWSO recovers these shapes very well without causing much blurring. In addition to its capability of inpainting large gaps, the TWSO interpolates smoothly along the level curves of the image in the inpainting domain, indicating the effectiveness of the tensor in TWSO for inpainting.

Fig. 5. Test on a black stripe with different geometries. (a): images with red inpainting regions; the last three inpainting gaps have the same width but different geometry. (b)-(f): results of TV, SOTV, TGV, Euler's elastica, and TWSO, respectively. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

In Fig. 5, we show how different geometries of the inpainting region affect the results of the different methods. Column (b) illustrates that TV performs satisfactorily if the inpainting area is thin; however, it fails on images that contain large gaps. Column (c) indicates that the inpainting result of SOTV depends on the geometry of the inpainting region: it is able to inpaint large gaps, but at the cost of introducing blurring.

Fig. 6. Inpainting a smooth image with large gaps. First row, from left to right: the original image, the degraded image, and the inpainting results of TV, SOTV, TGV, Euler's elastica, and TWSO, respectively. Second row: the intermediate results obtained by the TWSO model. Image is from [33].
Fig. 7. Real inpainting example: Cornelia, Mother of the Gracchi by J. Suvée (Louvre). First row, from left to right: the degraded image and the inpainting results of TV, SOTV, TGV, Euler's elastica, and TWSO, respectively. Second row: local magnifications of the corresponding results in the first row.

Table 2. Comparison of PSNR and SSIM of the different methods for the examples in Figs. 5–7.

Table 3. Mean and standard deviation (SD) of PSNR and SSIM calculated for five different methods for image inpainting over 100 images from the Berkeley database BSDS500 with four different random masks.

Mathematically, TGV is a combination of TV and SOTV, so its results, as shown in column (d), are similar to those of TV and SOTV; the TGV results also seem to depend on the geometry of the inpainting area. The last column shows that TWSO inpaints the images without any blurring, regardless of the local geometry. TWSO is also slightly better than Euler's elastica, which has proven to be very effective in dealing with larger gaps.

We now show an example of inpainting a smooth image with large gaps. Fig. 6 shows that all the methods except the TWSO model fail to inpaint the large gaps in the image. This is because TWSO integrates the tensor T with its eigenvalues defined in (2.10), which makes it suitable for restoring linear structures, as shown in the second row of Fig. 6.

Fig. 7 compares the inpainting of a real image by all the methods. The inpainting result of the TV model is more satisfactory than those of SOTV and TGV, though it produces a piecewise constant restoration. From the second row, neither TV nor TGV is able to connect the gaps in the cloth. SOTV reduces some gaps but blurs the connecting region. Euler's elastica performs better than TV, SOTV and TGV, but not better than the proposed TWSO model. Table 2 shows that TWSO is the most accurate among all the methods compared for the examples in Figs. 5–7.

Finally, we evaluate TV, SOTV, TGV, Euler's elastica and TWSO on the Berkeley database BSDS500 for image inpainting. We use four random masks (i.e. 40%, 60%, 80% and 90% of pixels missing) for each of 100 images randomly selected from the database. The performance of each method for each mask is measured in terms of the mean and standard deviation of PSNR and SSIM over all images. The results are given in Table 3 and Fig. 8. The accuracy of the inpainting results obtained by the different methods decreases as the percentage of missing pixels becomes larger. The highest average PSNR values are again achieved by TWSO, demonstrating its effectiveness for image inpainting.

5.3. Image denoising

For denoising images corrupted by Gaussian noise, we use p = 2 in (2.3), and the region Γ in (2.3) is the same as the image domain. We compare the proposed TWSO model with the Perona–Malik (PM) [39], coherence enhancing diffusion (CED) [32], total variation (TV) [40], second order total variation (SOTV) [1,26], total generalised variation (TGV) [30], and extended anisotropic diffusion (EAD) [11] models on both synthetic and real images.

Fig. 9 shows the denoising results for a synthetic image (a) by the different methods. The results of the PM and TV models, shown in (c) and (e), have a jagged appearance (i.e. the staircase artefact).
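The random-mask protocol described above can be sketched as follows (a hypothetical illustration of the mask generation only; the paper's own benchmarking code is not shown here):

```python
import numpy as np

def random_missing_mask(shape, missing_frac, rng):
    """Boolean mask marking roughly `missing_frac` of the pixels as missing."""
    return rng.random(shape) < missing_frac

rng = np.random.default_rng(0)
shape = (64, 64)
masks = {frac: random_missing_mask(shape, frac, rng)
         for frac in (0.4, 0.6, 0.8, 0.9)}   # the four mask densities of Table 3
# Each image/mask pair would then be inpainted by every method, scored with
# PSNR and SSIM, and the scores averaged over the 100 images.
```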
Fig. 8. Plots of the means and standard deviations in Table 3 obtained by the five different methods over 100 images from the Berkeley database BSDS500 with four different random masks. (a) and (b): mean and standard deviation plots of PSNR; (c) and (d): mean and standard deviation plots of SSIM.

Fig. 9. Denoising results for a synthetic test image. (a): clean data; (b): image corrupted by Gaussian noise of variance 0.02; (c–i): results of PM, CED, TV, SOTV, TGV, EAD, and TWSO, respectively. The second row shows isosurface renderings of the corresponding results above.

However, the scale-space-based PM model shows a much stronger staircase effect than the energy-based variational TV model when denoising piecewise smooth images. Due to its anisotropy, the result (d) of the CED method displays strong directional characteristics. Due to the high order derivatives involved, the SOTV, TGV and TWSO models can all reduce the staircase artefact. However, because SOTV imposes too much regularity on the image, it smears the sharp edges of the objects in (f). Better results are obtained by the TGV, EAD and TWSO models, which show neither staircase nor blurring effects, though TGV leaves some noise near the discontinuities and EAD over-smooths image edges, as shown in (g) and (h).

Fig. 10 presents the denoising results for a real image (a) by the different methods. Both the CED and the proposed TWSO models use the anisotropic diffusion tensor T, yet CED distorts the image while TWSO does not. The reason is that TWSO uses the eigenvalues of T defined in (2.9), which has two advantages: i) it allows us to control the degree of anisotropy of TWSO. If the contrast parameter C in (2.9) goes to infinity, the TWSO model degenerates to the isotropic SOTV model; the larger C is, the less anisotropy TWSO has. The eigenvalues (2.10) used in CED, however, are not able to adjust the anisotropy. ii) It determines whether there is diffusion in TWSO along the direction parallel to the image gradient: the diffusion halts along the image gradient direction if λ1 in (2.9) is small, and is encouraged if λ1 is large. By choosing a suitable C, (2.9) allows TWSO to diffuse the noise along object edges and to prohibit diffusion across edges. The eigenvalue λ1 used in (2.10), however, remains small (i.e. λ1 ≪ 1) at all times, meaning that diffusion along the image gradient in CED is always prohibited. CED thus only favours the orientation perpendicular to the image gradient, which explains why it distorts the image.

In addition to the qualitative evaluation of the different methods in Figs. 9 and 10, we also calculate the PSNR and SSIM values for these methods and show them in Table 4 and Fig. 11. These metrics show that the PDE-based methods (i.e. PM and CED) perform worse than the variational methods (i.e. TV, SOTV, TGV, EAD and TWSO).
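The tensor construction described above can be illustrated with a short sketch. The paper's eigenvalue formula (2.9) is not reproduced in this excerpt, so the choice λ1 = exp(−|∇u|²/C²), λ2 = 1 below is a hypothetical stand-in, picked only so that the limiting behaviour matches the text: λ1 → 1 everywhere as C → ∞ (the tensor becomes isotropic), and λ1 is small across strong edges, halting diffusion in the gradient direction.

```python
import numpy as np

def anisotropic_tensor(u, C):
    """Illustrative 2x2 diffusion tensor field T = lam2*I + (lam1 - lam2)*v1 v1^T.

    v1 is the unit image-gradient direction; lam1 governs diffusion across
    edges and lam2 (fixed at 1 here) diffusion along them.  The eigenvalue
    formula is an assumption for illustration, not the paper's (2.9).
    """
    gy, gx = np.gradient(u)
    mag2 = gx**2 + gy**2
    lam1 = np.exp(-mag2 / C**2)   # small at strong edges: halts cross-edge diffusion
    lam2 = 1.0                    # unconstrained diffusion along edges
    n = np.sqrt(mag2) + 1e-12     # avoid division by zero in flat regions
    nx, ny = gx / n, gy / n
    T11 = lam2 + (lam1 - lam2) * nx * nx
    T12 = (lam1 - lam2) * nx * ny
    T22 = lam2 + (lam1 - lam2) * ny * ny
    return T11, T12, T22

# Flat region: zero gradient, so lam1 = 1 and the tensor is the identity.
flat = np.zeros((8, 8))
T11, T12, T22 = anisotropic_tensor(flat, C=1.0)

# Ramp with gradient along x: T11 = lam1 < 1 across the edge, T22 stays 1 along it.
ramp = np.tile(np.arange(8.0), (8, 1))
R11, R12, R22 = anisotropic_tensor(ramp, C=1.0)
```

Increasing C in this sketch pushes λ1 towards 1 everywhere, reproducing the degeneration to the isotropic case noted in the text.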
Fig. 10. Noise reduction results for a real test image. (a): clean data; (b): noisy data corrupted by Gaussian noise; (c–i): results of PM, CED, TV, SOTV, TGV, EAD, and TWSO, respectively. The second row shows local magnifications of the corresponding results in the first row.

Table 4. Comparison of PSNR and SSIM of the different methods for Figs. 9 and 10 with different noise variances.

Fig. 11. Quantitative image quality evaluation for Gaussian noise reduction in Table 4. (a) and (b): PSNR and SSIM values of the various denoising methods for Fig. 9(a) corrupted by different noise levels; (c) and (d): PSNR and SSIM values of the various denoising methods for Fig. 10(a) corrupted by different noise levels.
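The PSNR values reported in Tables 2–6 follow the standard definition; a minimal sketch (our own helper, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with intensities in [0, peak].

    A perfect restoration has zero MSE, giving PSNR = infinity, which is
    how Inf entries can arise in such tables.
    """
    mse = np.mean((np.asarray(reference) - np.asarray(restored)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(peak**2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1     # uniform error of 0.1, so MSE = 0.01 and PSNR is about 20 dB
score = psnr(ref, noisy)
```

SSIM is more involved, comparing local means, variances and covariances over a sliding window; in practice it is usually taken from a library, e.g. `structural_similarity` in scikit-image.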
Table 5. Mean and standard deviation (SD) of PSNR and SSIM calculated for seven different methods for image denoising over 100 images from the Berkeley database BSDS500 with different noise variances.

Fig. 12. Plots of the means and standard deviations in Table 5 obtained by the seven different methods over 100 images from the Berkeley database BSDS500 with five different noise variances. (a) and (b): mean and standard deviation plots of PSNR; (c) and (d): mean and standard deviation plots of SSIM.

The TV model introduces the staircase effect, SOTV blurs the image edges, and EAD tends to over-smooth the image structure. The proposed TWSO model performs well in all cases, showing its effectiveness for image denoising.

We now evaluate PM, CED, TV, SOTV, TGV, EAD and TWSO on the Berkeley database BSDS500 for image denoising. 100 images are randomly selected from the database, and each chosen image is corrupted by zero-mean Gaussian noise at five different variances. The performance of each method for each noise variance is measured in terms of the mean and standard deviation of PSNR and SSIM over all 100 images. The final quantitative results are shown in Table 5 and Fig. 12. The mean PSNR and SSIM values obtained by TWSO remain the largest in all cases, and the standard deviations of its PSNR and SSIM are smaller and relatively stable, indicating that the proposed TWSO is robust against increasing levels of noise and performs the best among all seven methods compared for image denoising.

In addition to demonstrating TWSO for Gaussian noise removal, Fig. 13 and Table 6 show the denoising results for images with salt-and-pepper noise by different methods. Here, we use p = 1 in (2.3), and Γ in (2.3) is the same as the domain of the noise in the image. We compare our TWSO method with two further state-of-the-art methods (the median filter and TVL1 [41]) that have been widely employed for salt-and-pepper noise removal. From the first and second rows of Table 6, it is clear that all three methods perform very well when the noise level is low. As the noise level increases, the median filter and TVL1 become increasingly worse, introducing more and more artefacts into the resulting images, and both fail to restore the image when the noise level reaches 90%. In contrast, TWSO produces visually more pleasing results as the noise increases, and its PSNR and SSIM remain the highest in all cases, as the introduction of the tensor in TWSO offers richer neighbourhood information.
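The salt-and-pepper degradation and the median-filter baseline can be sketched as follows (helper names are our own; the median filter here is a plain 3×3 sliding-window median, not any specific library's implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def add_salt_and_pepper(u, density, rng):
    """Corrupt a [0, 1] image: `density` of the pixels become 0 (pepper) or 1 (salt)."""
    noisy = u.copy()
    r = rng.random(u.shape)
    noisy[r < density / 2] = 0.0
    noisy[(r >= density / 2) & (r < density)] = 1.0
    return noisy

def median3x3(u):
    """Plain 3x3 median filter with edge replication at the borders."""
    padded = np.pad(u, 1, mode="edge")
    windows = sliding_window_view(padded, (3, 3))
    return np.median(windows.reshape(u.shape[0], u.shape[1], 9), axis=-1)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
low = median3x3(add_salt_and_pepper(clean, 0.2, rng))    # mostly recovered
high = median3x3(add_salt_and_pepper(clean, 0.9, rng))   # the filter breaks down
```

This mirrors the behaviour reported above: at low density most 3×3 windows still contain a majority of clean pixels, so the median recovers them, while at 90% density almost every window is dominated by outliers and the median filter fails.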
Fig. 13. Denoising results for a real colour image from the Berkeley database BSDS500. First row: images corrupted by salt-and-pepper noise of density 20%, 40%, 60%, 80% and 90%; second row: median filter results; third row: TVL1 results; last row: TWSO results.

Table 6. Comparison of PSNR and SSIM of the different methods for Fig. 13 with different noise densities (20%–90%).

6. Conclusion

In this paper, we have presented the TWSO model for image processing, which introduces an anisotropic diffusion tensor into the SOTV model. Specifically, the model integrates a novel regulariser into the SOTV model that uses the Frobenius norm of the product of the SOTV Hessian matrix and the anisotropic tensor. The advantages of the TWSO model include its ability to reduce both the staircase and blurring artefacts in the restored image. To avoid numerically solving the high order PDEs associated with the TWSO model, we developed a fast alternating direction method of multipliers based on a discrete finite difference scheme. Extensive numerical experiments demonstrate that the proposed TWSO model outperforms several state-of-the-art methods for image restoration in different applications. Future work will combine the capabilities of the TWSO model in image denoising and inpainting for medical imaging, such as optical coherence tomography [42].

Acknowledgments

We would like to thank Prof. Xue-Cheng Tai of the University of Bergen, Norway, and Dr. Wei Zhu of the University of Alabama, USA, for providing the Euler's elastica code for this research. We also thank Dr. Jooyoung Hahn, who initially wrote the Euler's elastica code.

References

[1] Maïtine Bergounioux, Loic Piffet, A second-order model for image denoising, Set-Valued Var. Anal. 18 (3–4) (2010).
[2] Konstantinos Papafitsoros, Carola-Bibiane Schönlieb, A combined first and second order variational approach for image reconstruction, J. Math. Imaging Vis. 48 (2) (2014).
[3] Wenqi Lu, Jinming Duan, Zhaowen Qiu, Zhenkuan Pan, Ryan Wen Liu, Li Bai, Implementation of high-order variational models made easy for image processing, Math. Methods Appl. Sci. 39 (14) (2016).
More informationGauge optimization and duality
1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel
More informationCHAPTER 11. A Revision. 1. The Computers and Numbers therein
CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of
More informationEquations of fluid dynamics for image inpainting
Equations of fluid dynamics for image inpainting Evelyn M. Lunasin Department of Mathematics United States Naval Academy Joint work with E.S. Titi Weizmann Institute of Science, Israel. IMPA, Rio de Janeiro
More informationPDE-based image restoration, I: Anti-staircasing and anti-diffusion
PDE-based image restoration, I: Anti-staircasing and anti-diffusion Kisee Joo and Seongjai Kim May 16, 2003 Abstract This article is concerned with simulation issues arising in the PDE-based image restoration
More informationCOMPUTATIONAL ISSUES RELATING TO INVERSION OF PRACTICAL DATA: WHERE IS THE UNCERTAINTY? CAN WE SOLVE Ax = b?
COMPUTATIONAL ISSUES RELATING TO INVERSION OF PRACTICAL DATA: WHERE IS THE UNCERTAINTY? CAN WE SOLVE Ax = b? Rosemary Renaut http://math.asu.edu/ rosie BRIDGING THE GAP? OCT 2, 2012 Discussion Yuen: Solve
More informationUsing Entropy and 2-D Correlation Coefficient as Measuring Indices for Impulsive Noise Reduction Techniques
Using Entropy and 2-D Correlation Coefficient as Measuring Indices for Impulsive Noise Reduction Techniques Zayed M. Ramadan Department of Electronics and Communications Engineering, Faculty of Engineering,
More informationGradient Descent and Implementation Solving the Euler-Lagrange Equations in Practice
1 Lecture Notes, HCI, 4.1.211 Chapter 2 Gradient Descent and Implementation Solving the Euler-Lagrange Equations in Practice Bastian Goldlücke Computer Vision Group Technical University of Munich 2 Bastian
More informationA Localized Linearized ROF Model for Surface Denoising
1 2 3 4 A Localized Linearized ROF Model for Surface Denoising Shingyu Leung August 7, 2008 5 Abstract 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 1 Introduction CT/MRI scan becomes a very
More informationl0tv: A Sparse Optimization Method for Impulse Noise Image Restoration
l0tv: A Sparse Optimization Method for Impulse Noise Image Restoration Item Type Article Authors Yuan, Ganzhao; Ghanem, Bernard Citation Yuan G, Ghanem B (2017) l0tv: A Sparse Optimization Method for Impulse
More informationWavelet Footprints: Theory, Algorithms, and Applications
1306 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 5, MAY 2003 Wavelet Footprints: Theory, Algorithms, and Applications Pier Luigi Dragotti, Member, IEEE, and Martin Vetterli, Fellow, IEEE Abstract
More informationImage restoration: edge preserving
Image restoration: edge preserving Convex penalties Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS BINP 1 / 32 Topics Image restoration,
More informationIntroduction to Computer Vision. 2D Linear Systems
Introduction to Computer Vision D Linear Systems Review: Linear Systems We define a system as a unit that converts an input function into an output function Independent variable System operator or Transfer
More informationDistributed Convex Optimization
Master Program 2013-2015 Electrical Engineering Distributed Convex Optimization A Study on the Primal-Dual Method of Multipliers Delft University of Technology He Ming Zhang, Guoqiang Zhang, Richard Heusdens
More informationCHAPTER 4 PRINCIPAL COMPONENT ANALYSIS-BASED FUSION
59 CHAPTER 4 PRINCIPAL COMPONENT ANALYSIS-BASED FUSION 4. INTRODUCTION Weighted average-based fusion algorithms are one of the widely used fusion methods for multi-sensor data integration. These methods
More informationSPARSE SIGNAL RESTORATION. 1. Introduction
SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful
More informationA General Framework for a Class of Primal-Dual Algorithms for TV Minimization
A General Framework for a Class of Primal-Dual Algorithms for TV Minimization Ernie Esser UCLA 1 Outline A Model Convex Minimization Problem Main Idea Behind the Primal Dual Hybrid Gradient (PDHG) Method
More informationSolution Methods. Steady convection-diffusion equation. Lecture 05
Solution Methods Steady convection-diffusion equation Lecture 05 1 Navier-Stokes equation Suggested reading: Gauss divergence theorem Integral form The key step of the finite volume method is to integrate
More informationUsing ADMM and Soft Shrinkage for 2D signal reconstruction
Using ADMM and Soft Shrinkage for 2D signal reconstruction Yijia Zhang Advisor: Anne Gelb, Weihong Guo August 16, 2017 Abstract ADMM, the alternating direction method of multipliers is a useful algorithm
More informationTWO-PHASE APPROACH FOR DEBLURRING IMAGES CORRUPTED BY IMPULSE PLUS GAUSSIAN NOISE. Jian-Feng Cai. Raymond H. Chan. Mila Nikolova
Manuscript submitted to Website: http://aimsciences.org AIMS Journals Volume 00, Number 0, Xxxx XXXX pp. 000 000 TWO-PHASE APPROACH FOR DEBLURRING IMAGES CORRUPTED BY IMPULSE PLUS GAUSSIAN NOISE Jian-Feng
More informationSupplemental Figures: Results for Various Color-image Completion
ANONYMOUS AUTHORS: SUPPLEMENTAL MATERIAL (NOVEMBER 7, 2017) 1 Supplemental Figures: Results for Various Color-image Completion Anonymous authors COMPARISON WITH VARIOUS METHODS IN COLOR-IMAGE COMPLETION
More informationLinear Diffusion. E9 242 STIP- R. Venkatesh Babu IISc
Linear Diffusion Derivation of Heat equation Consider a 2D hot plate with Initial temperature profile I 0 (x, y) Uniform (isotropic) conduction coefficient c Unit thickness (along z) Problem: What is temperature
More informationSparsity Regularization
Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation
More informationKasetsart University Workshop. Multigrid methods: An introduction
Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available
More informationSIGNAL AND IMAGE RESTORATION: SOLVING
1 / 55 SIGNAL AND IMAGE RESTORATION: SOLVING ILL-POSED INVERSE PROBLEMS - ESTIMATING PARAMETERS Rosemary Renaut http://math.asu.edu/ rosie CORNELL MAY 10, 2013 2 / 55 Outline Background Parameter Estimation
More informationConvex Generalizations of Total Variation Based on the Structure Tensor with Applications to Inverse Problems
Convex Generalizations of Total Variation Based on the Structure Tensor with Applications to Inverse Problems Stamatios Lefkimmiatis, Anastasios Roussos 2, Michael Unser, and Petros Maragos 3 Biomedical
More informationarxiv: v1 [math.na] 23 May 2013
ADI SPLITTING SCHEMES FOR A FOURTH-ORDER NONLINEAR PARTIAL DIFFERENTIAL EQUATION FROM IMAGE PROCESSING Luca Calatroni Cambridge Centre for Analysis, University of Cambridge Wilberforce Road, CB3 0WA, Cambridge,
More informationLecture Notes 5: Multiresolution Analysis
Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and
More information