A Dual Formulation of the TV-Stokes Algorithm for Image Denoising


Christoffer A. Elo, Alexander Malyshev, and Talal Rahman

Department of Mathematics, University of Bergen, Johannes Bruns gate 12, 5007 Bergen, Norway
christoffer.elo@gmail.com, alexander.malyshev@math.uib.no, talal.rahman@math.uib.no

Abstract. We propose a fast algorithm for image denoising, based on a dual formulation of a recent denoising model involving total variation minimization of the tangential vector field under the incompressibility condition that the tangential vector field be divergence free. The model turns noisy images into smooth, visually pleasant ones and preserves edges quite well. While the original TV-Stokes algorithm, based on the primal formulation, is extremely slow, our new dual algorithm drastically improves the computational speed and possesses the same denoising quality. Numerical experiments demonstrate the practical efficiency of our algorithm.

1 Introduction

We suppose that the observed image $d_0(x,y)$, $(x,y) \in \Omega \subset \mathbb{R}^2$, is an original image $d(x,y)$ perturbed by additive noise $\eta$,

$$d_0 = d + \eta. \tag{1}$$

The problem of recovering the image $d$ from the noisy image $d_0$ is an inverse problem that is often solved by variational methods using total variation (TV) minimization. The corresponding Euler equation, a set of nonlinear partial differential equations, is typically solved by applying a gradient-descent method to a finite difference approximation of these equations. A classical total variation denoising model is the primal formulation due to Rudin, Osher and Fatemi [1] (the ROF model):

$$\min_d \|\nabla d\|_{L^1} + \frac{\lambda}{2}\,\|d - d_0\|_{L^2}^2. \tag{2}$$

The parameter $\lambda > 0$ can be chosen, e.g., to approximately fulfill the condition $\|d - d_0\|_{L^2} \approx \sigma$, where $\sigma$ is an estimate of $\|\eta\|_{L^2}$. The Euler equation

$$-\operatorname{div}\!\left(\frac{\nabla d}{|\nabla d|}\right) + \lambda(d - d_0) = 0$$

is usually replaced by a regularized one,

$$-\operatorname{div}\!\left(\frac{\nabla d}{|\nabla d|_\beta}\right) + \lambda(d - d_0) = 0, \tag{3}$$

where $|\nabla d|_\beta = \sqrt{|\nabla d|^2 + \beta^2}$ is a necessary regularization, since images contain flat areas where $|\nabla d| = \sqrt{d_x^2 + d_y^2} \approx 0$. When solving (3) numerically, an explicit time marching scheme with an artificial time variable $t$ is typically used. However, such an algorithm is rather slow because convergence requires severely restricted, small time steps.

It is well known that the ROF model suffers from the so-called staircase effect, which is a disadvantage when denoising images with affine regions. To overcome this defect, we advocate a two-step approach, in which the fourth-order model studied in [2-4] is decoupled into two second-order problems. Such methods are known to overcome the staircase effect, but tend to have computational difficulties due to very large condition numbers. The authors of [5, 6] used the same two-step approach as in [7], but, adopting ideas from [8, 9], they proposed to preserve the divergence-free condition on the tangential vector field. Recall that the tangential vector field $\tau$ is orthogonal to the normal (gradient) vector field $n$ of the image $d$:

$$n = \nabla d = (d_x, d_y)^T, \qquad \tau = \nabla^{\perp} d = (-d_y, d_x)^T. \tag{4}$$

Hence $\operatorname{div}\tau = 0$. The first step of the TV-Stokes algorithm smoothes the tangential vector field $\tau_0 = \nabla^{\perp} d_0$ of a given noisy image $d_0$ by solving the minimization problem

$$\min_{\tau}\left\{\|\nabla\tau\|_{L^1} + \frac{1}{2\delta}\,\|\tau - \tau_0\|_{L^2}^2\right\} \quad \text{subject to} \quad \operatorname{div}\tau = 0, \tag{5}$$

where $\delta > 0$ is some carefully chosen parameter. Once a smoothed tangential vector field $\tau$ is obtained, the second step reconstructs the image $d$ by fitting it to the normal vector field, i.e., by solving the minimization problem

$$\min_{d}\left\{\|\nabla d\|_{L^1} - \left(\nabla d, \frac{n}{|n|}\right)_{L^2}\right\} \quad \text{subject to} \quad \|d - d_0\|_{L^2} = \sigma, \tag{6}$$

where $\sigma$ is an estimate of $\|\eta\|_{L^2}$. In [5] the minimization problems (5) and (6) are solved numerically by means of an explicit time marching scheme, while existence and uniqueness for the modified TV-Stokes model are proven in [6].

The TV-Stokes approach results in an algorithm that does not suffer from the staircase effect, preserves edges, and produces denoised images that look visually pleasant. However, the TV-Stokes algorithm from [5] converges extremely slowly and is therefore practically unusable, as demonstrated in the last section of the present paper.
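The explicit time marching scheme for the regularized Euler equation (3) can be sketched in a few lines of NumPy. The forward/backward difference discretization and all parameter values below are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rof_denoise(d0, lam=10.0, beta=1e-2, dt=1e-3, iters=200):
    """Explicit time marching for the regularized Euler equation (3):
    d_t = div(grad(d) / |grad(d)|_beta) - lam * (d - d0)."""
    d = d0.astype(float).copy()
    for _ in range(iters):
        # forward differences, zero at the far boundary
        dx = np.diff(d, axis=1, append=d[:, -1:])
        dy = np.diff(d, axis=0, append=d[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + beta**2)      # |grad d|_beta
        px, py = dx / mag, dy / mag
        # backward differences: the discrete divergence (negative adjoint of grad)
        div = (np.diff(px, axis=1, prepend=np.zeros_like(px[:, :1]))
               + np.diff(py, axis=0, prepend=np.zeros_like(py[:1, :])))
        d += dt * (div - lam * (d - d0))
    return d
```

Note how the stable time step is tied to the regularization: the curvature term has magnitude up to roughly $1/\beta$, which is exactly the restriction that makes this scheme slow.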

We adopt the TV-Stokes denoising model but reduce the primal formulation presented above to the so-called dual formulation, which is then solved numerically by a variant of Chambolle's fast iteration [10]. The reduction exploits the orthogonal projector $\Pi_K$ onto the subspace $K = \{\tau : \operatorname{div}\tau = 0\}$ for elimination of the divergence-free constraint.

2 The TV-Stokes denoising algorithm in dual formulation

To overcome difficulties with non-differentiability in the primal formulation, Carter [11], Chambolle [10] and Chan, Golub and Mulet [12] have proposed dual formulations of the ROF model, where a dual variable $p = (p_1(x,y), p_2(x,y))$ is used to express the total variation:

$$\|\nabla d\|_{L^1} = \max_{p}\left\{(d, \operatorname{div} p)_{L^2} : |p_j(x,y)| \le 1 \ \ \forall (x,y) \in \Omega,\ j = 1,2\right\}. \tag{7}$$

For instance, a variant of the dual formulation from [10] consists in minimization of the distance $\|\operatorname{div} p - \lambda d_0\|_{L^2}$. In [10] Chambolle also proposed a fast iteration for solving this minimization problem that produces a denoised image after only a few steps. Below we show how to reduce the TV-Stokes model to a dual formulation.

2.1 Step 1

To derive a dual formulation of the first step we take advantage of the following analog of (7) for the total variation of the tangential vector field $\tau = (\tau_1, \tau_2)^T$:

$$\|\nabla\tau\|_{L^1} = \max_{p}\left\{(\tau, \operatorname{div} p)_{L^2} : |p_i(x,y)| \le 1 \ \ \forall (x,y) \in \Omega,\ i = 1,2\right\}, \tag{8}$$

where the dual variable $p$ is a pair of two rows, $p_1 = (p_{11}, p_{12})$ and $p_2 = (p_{21}, p_{22})$. The divergence is defined as follows:

$$\operatorname{div} p = (\operatorname{div} p_1, \operatorname{div} p_2)^T, \quad \text{where} \quad \operatorname{div} p_i = \frac{\partial p_{i1}}{\partial x} + \frac{\partial p_{i2}}{\partial y}, \quad i = 1,2. \tag{9}$$

This definition is similar to the vectorial dual norm from [13] for vectorial images, e.g. color images. Plugging (8) into (5) yields

$$\min_{\operatorname{div}\tau = 0}\ \max_{|p_i| \le 1}\left\{(\tau, \operatorname{div} p)_{L^2} + \frac{1}{2\delta}\,(\tau - \tau_0, \tau - \tau_0)_{L^2}\right\}. \tag{10}$$

Results from convex analysis (see, for instance, the corresponding theorem in [14]) allow us to exchange the order of max and min in (10) and obtain an equivalent optimization problem

$$\max_{|p_i| \le 1}\ \min_{\operatorname{div}\tau = 0}\left\{(\tau, \operatorname{div} p)_{L^2} + \frac{1}{2\delta}\,(\tau - \tau_0, \tau - \tau_0)_{L^2}\right\}. \tag{11}$$
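The duality formula (7) can be checked numerically on a discrete image: summation by parts turns $(d, \operatorname{div} p)_{L^2}$ into $(\nabla d, p)_{L^2}$, and the maximizing dual variable is $p = \nabla d / |\nabla d|$ wherever the gradient is nonzero. A minimal sketch, where the forward-difference discretization is our own assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.random((32, 32))

# forward-difference gradient of the image
gx = np.diff(d, axis=1, append=d[:, -1:])
gy = np.diff(d, axis=0, append=d[-1:, :])
mag = np.sqrt(gx**2 + gy**2)
tv = mag.sum()                                   # discrete ||grad d||_{L^1}

# maximizing dual variable p = grad(d)/|grad(d)| (p = 0 on flat pixels);
# summation by parts turns (d, div p) into (grad d, p)
px = np.divide(gx, mag, out=np.zeros_like(gx), where=mag > 0)
py = np.divide(gy, mag, out=np.zeros_like(gy), where=mag > 0)
pairing = (gx * px + gy * py).sum()

assert abs(pairing - tv) < 1e-9 * tv             # the maximum in (7) is attained
```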

Now comes a trick. Let us introduce the orthogonal projection $\Pi_K$ onto the constraint subspace $K = \{\tau : \operatorname{div}\tau = 0\}$. Note that $\tau_0 \in K$. By means of the pseudoinverse $\Delta^{+}$ we may write

$$\Pi_K \begin{pmatrix} \tau_1 \\ \tau_2 \end{pmatrix} = \begin{pmatrix} \tau_1 \\ \tau_2 \end{pmatrix} - \nabla \Delta^{+} \operatorname{div} \begin{pmatrix} \tau_1 \\ \tau_2 \end{pmatrix}. \tag{12}$$

The constraint $\operatorname{div}\tau = 0$ means that $\Pi_K \tau = \tau$, and the latter implies the equalities $(\tau, \operatorname{div} p) = (\Pi_K \tau, \operatorname{div} p) = (\tau, \Pi_K \operatorname{div} p)$. Hence (11) is equivalent to

$$\max_{|p_i| \le 1}\ \min_{\operatorname{div}\tau = 0}\left\{(\tau, \Pi_K \operatorname{div} p)_{L^2} + \frac{1}{2\delta}\,(\tau - \tau_0, \tau - \tau_0)_{L^2}\right\}. \tag{13}$$

The solution of the minimization problem (without the constraint $\operatorname{div}\tau = 0$!)

$$\min_{\tau}\left\{(\tau, \Pi_K \operatorname{div} p)_{L^2} + \frac{1}{2\delta}\,(\tau - \tau_0, \tau - \tau_0)_{L^2}\right\}$$

is

$$\tau = \tau_0 - \delta\,\Pi_K \operatorname{div} p \tag{14}$$

and satisfies the constraint $\operatorname{div}\tau = 0$. Owing to (14) we have the equality

$$(\tau, \Pi_K \operatorname{div} p) + \frac{1}{2\delta}(\tau - \tau_0, \tau - \tau_0) = \frac{1}{2\delta}\left[(\tau_0, \tau_0) - (\delta\,\Pi_K \operatorname{div} p - \tau_0,\ \delta\,\Pi_K \operatorname{div} p - \tau_0)\right],$$

which together with (13) gives our dual formulation:

$$\min_{p}\left\{\left\|\Pi_K \operatorname{div} p - \delta^{-1}\tau_0\right\|_{L^2} : |p_i| \le 1,\ i = 1,2\right\}. \tag{15}$$

The numerical solution of (15) is computed by Chambolle's iteration from [10]:

$$p^0 = 0, \qquad p^{n+1} = \frac{p^n + \Delta t\,\nabla\!\left(\Pi_K \operatorname{div} p^n - \delta^{-1}\tau_0\right)}{1 + \Delta t\left|\nabla\!\left(\Pi_K \operatorname{div} p^n - \delta^{-1}\tau_0\right)\right|}. \tag{16}$$

The iteration converges rapidly when $\Delta t \le 1/4$. The smoothed tangential field after $n$ iterations is given by $\tau^n = \tau_0 - \delta\,\Pi_K \operatorname{div} p^n$.

2.2 Step 2

The image $d$ is reconstructed at the second step by fitting it to the normal vector field built from the tangential vector field computed at step 1, $(n_1, n_2) = (\tau_2, -\tau_1)$. Again we introduce a dual variable $r = (r_1(x,y), r_2(x,y))$ and use the formula $\|\nabla d\|_{L^1} = \max_{|r| \le 1} (\nabla d, r)_{L^2}$. Then the minimization problem (6) is equivalent to the problem

$$\min_{d}\ \max_{|r| \le 1}\left\{\left(d, \operatorname{div}\!\left(r + \frac{n}{|n|}\right)\right)_{L^2} + \frac{1}{2\mu}\,\|d - d_0\|_{L^2}^2\right\}, \tag{17}$$

where $\mu > 0$ is a Lagrange multiplier. After interchanging min and max in (17) we find the condition for attaining the minimum:

$$d = d_0 - \mu\,\operatorname{div}\!\left(r + \frac{n}{|n|}\right). \tag{18}$$
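Iteration (16) is structurally identical to Chambolle's fixed-point iteration for the plain ROF model; the only TV-Stokes-specific ingredient is the projector $\Pi_K$. As a reference point, here is a minimal NumPy sketch of the ROF version, i.e. with $\Pi_K$ replaced by the identity and the tangential field by the image itself; the difference discretization and the parameter values are our own assumptions:

```python
import numpy as np

def _grad(u):
    # forward differences, zero at the far boundary
    return (np.diff(u, axis=1, append=u[:, -1:]),
            np.diff(u, axis=0, append=u[-1:, :]))

def _div(px, py):
    # backward differences: the negative adjoint of the gradient
    return (np.diff(px, axis=1, prepend=np.zeros_like(px[:, :1]))
            + np.diff(py, axis=0, prepend=np.zeros_like(py[:1, :])))

def chambolle_rof(d0, lam=0.1, dt=0.25, iters=100):
    # Fixed-point update of the same shape as (16), with Pi_K = identity:
    # p^{n+1} = (p^n + dt*grad(div p^n - d0/lam)) / (1 + dt*|grad(div p^n - d0/lam)|)
    px = np.zeros_like(d0, dtype=float)
    py = np.zeros_like(d0, dtype=float)
    for _ in range(iters):
        gx, gy = _grad(_div(px, py) - d0 / lam)
        denom = 1.0 + dt * np.sqrt(gx**2 + gy**2)
        px = (px + dt * gx) / denom
        py = (py + dt * gy) / denom
    return d0 - lam * _div(px, py)   # denoised image
```

The normalization by $1 + \Delta t\,|\cdot|$ keeps the constraint $|p| \le 1$ satisfied at every iterate, which is why no extra projection of $p$ is needed.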

By analogy with (15) we can derive the dual formulation for step 2:

$$\min_{r}\left\{\left\|\operatorname{div}\!\left(r + \frac{n}{|n|}\right) - \frac{d_0}{\mu}\right\|_{L^2} : |r| \le 1\right\}. \tag{19}$$

Chambolle's iteration for (19) is as follows:

$$r^{n+1} = \frac{r^n + \Delta t\,\nabla\!\left(\operatorname{div}\!\left(r^n + \frac{n}{|n|}\right) - \mu^{-1} d_0\right)}{1 + \Delta t\left|\nabla\!\left(\operatorname{div}\!\left(r^n + \frac{n}{|n|}\right) - \mu^{-1} d_0\right)\right|}. \tag{20}$$

2.3 The discrete algorithm

A staggered grid is used for the discretization, as in [5]. For convenience we introduce the differentiation matrices

$$B = \frac{1}{h}\begin{pmatrix} -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{pmatrix} \in \mathbb{R}^{(N-1)\times N}, \tag{21}$$

where $B$ is the forward difference operator and its transpose $B^T$ is the backward difference operator. The discrete gradient operator applied to a matrix $d$ is then defined as

$$\nabla^h d = \left(d B_x^T,\ B_y d\right), \tag{22}$$

where $B_x$ ($B_y$) stands for differentiation in the $x$ (resp. $y$) direction. The discrete divergence operator is given by

$$\operatorname{div}^h(p_1, p_2) = p_1 B_x + B_y^T p_2. \tag{23}$$

The discrete analog of the projection operator $\Pi_K$ has the form

$$\Pi_K^h = I - \nabla^h (\Delta^h)^{+} \operatorname{div}^h, \tag{24}$$

where the gradient and divergence are applied in a slightly different manner to the components of the tangential field:

$$\operatorname{div}^h\begin{pmatrix} \tau_1 \\ \tau_2 \end{pmatrix} = \tau_1 B_x + B_y^T \tau_2, \qquad \nabla^h d = \begin{pmatrix} d B_x^T \\ B_y d \end{pmatrix}. \tag{25}$$

To complete the definition (24) we need a description of the pseudoinverse operator $(\Delta^h)^{+}$ for the discrete Laplacian

$$\Delta^h d = d\,B_x^T B_x + B_y^T B_y\,d. \tag{26}$$

Let us introduce the orthogonal $N \times N$ matrix of the Discrete Cosine Transform, $C$, which is defined by dct(eye(N)) in MATLAB. The symmetric matrix
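The matrices of (21)-(22) are easy to reproduce. The following sketch builds $B$, applies the discrete gradient, and checks the singular values stated in (28); the grid spacing $h = 1$ is our own choice:

```python
import numpy as np

def forward_diff_matrix(N, h=1.0):
    # the (N-1) x N forward difference matrix B of (21): (B v)_i = (v_{i+1} - v_i)/h
    B = np.zeros((N - 1, N))
    i = np.arange(N - 1)
    B[i, i] = -1.0 / h
    B[i, i + 1] = 1.0 / h
    return B

N = 8
B = forward_diff_matrix(N)

# discrete gradient (22): x-differences act along rows, y-differences along columns
d = np.random.default_rng(1).random((N, N))
gx = d @ B.T        # N x (N-1)
gy = B @ d          # (N-1) x N

# the singular values of B match (28): sigma_k = (2/h) sin(pi k / (2N))
sv = np.linalg.svd(B, compute_uv=False)
expected = np.sort(2.0 * np.sin(np.pi * np.arange(1, N) / (2 * N)))[::-1]
assert np.allclose(sv, expected)
```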

of the Discrete Sine Transform, $S$, defined in MATLAB by dst(eye(N-1)), satisfies the equation $S^T S = (N/2)\,I$, where $I$ is the identity matrix. We prefer to use the orthogonal symmetric matrix $S/\sqrt{N/2}$ of order $N-1$, which we again denote by $S$. The singular value decomposition of $B$ has the form

$$B = S\,[0, \Sigma]\,C, \qquad \Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_{N-1}), \tag{27}$$

where the diagonal matrix $\Sigma$ has the diagonal entries

$$\sigma_k = \frac{2}{h}\,\sin\frac{\pi k}{2N}, \qquad k = 1, 2, \ldots, N-1. \tag{28}$$

With the aid of (27), equation (26) can be rewritten as

$$f = \Delta^h d = d\,C^T \begin{bmatrix} 0 & 0 \\ 0 & \Sigma_x^2 \end{bmatrix} C + C^T \begin{bmatrix} 0 & 0 \\ 0 & \Sigma_y^2 \end{bmatrix} C\,d. \tag{29}$$

Denoting $\tilde f = C f C^T$ and $\tilde d = C d C^T$ we arrive at the equation

$$\tilde f = \tilde d \begin{bmatrix} 0 & 0 \\ 0 & \Sigma_x^2 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \Sigma_y^2 \end{bmatrix} \tilde d. \tag{30}$$

This equation is easily solved with respect to $\tilde d$. Suppose that the matrices $\tilde f$ and $\tilde d$ have the entries $\tilde f_{ij}$ and $\tilde d_{ij}$ for $i,j = 0,1,\ldots$ Note that in our case $\tilde f_{00} = 0$. Then the solution $\tilde d = G(\tilde f)$ is as follows:

$$\tilde d_{00} = 0, \qquad \tilde d_{i,0} = \tilde f_{i,0}/\sigma_{i,y}^2, \quad i \ge 1, \qquad \tilde d_{0,j} = \tilde f_{0,j}/\sigma_{j,x}^2, \quad j \ge 1, \qquad \tilde d_{ij} = \tilde f_{ij}/(\sigma_{i,y}^2 + \sigma_{j,x}^2), \quad i,j \ge 1. \tag{31}$$

Thus the pseudoinverse operator $(\Delta^h)^{+}$ can be computed efficiently with the help of the Discrete Cosine Transform:

$$(\Delta^h)^{+} f = C^T\,G(C f C^T)\,C, \tag{32}$$

where the function $G$ is defined in (31). In conclusion we recall that multiplication of an $N \times N$ matrix by $C$ or $C^T = C^{-1}$ is typically implemented by means of the fast Fourier transform and requires only $O(N^2 \log_2 N)$ arithmetical operations. All other computations have cost $O(N^2)$.
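Formulas (29)-(32) translate directly into a few lines of NumPy/SciPy: multiplication by $C$ and $C^T$ along each axis is an orthonormal DCT-II, so `scipy.fft.dctn` with `norm='ortho'` plays the role of MATLAB's dct. That correspondence and the choice $h = 1$ are our own assumptions in this sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

def laplacian_pinv(f, h=1.0):
    """(Delta^h)^+ f following (29)-(32): transform with the DCT, divide by
    sigma_{i,y}^2 + sigma_{j,x}^2 (the zero mode is set to 0), transform back."""
    Ny, Nx = f.shape
    # sigma_k from (28), with the zero singular value included at k = 0
    sy = (2.0 / h) * np.sin(np.pi * np.arange(Ny) / (2 * Ny))
    sx = (2.0 / h) * np.sin(np.pi * np.arange(Nx) / (2 * Nx))
    denom = sy[:, None]**2 + sx[None, :]**2
    ft = dctn(f, type=2, norm='ortho')             # f~ = C f C^T
    dt = np.zeros_like(ft)
    np.divide(ft, denom, out=dt, where=denom > 0)  # the function G of (31)
    return idctn(dt, type=2, norm='ortho')         # d = C^T G(f~) C
```

Applying $\Delta^h$ from (26) to a zero-mean image and then `laplacian_pinv` recovers the image, since the zero-mean condition is exactly $\tilde d_{00} = 0$.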

Fig. 1. Original images: (a) Lena, (b) Cameraman, (c) Barbara.

Algorithm 1: Dual TV-Stokes algorithm for image denoising

Given $d_0$, $k$, $\delta$ and $\mu$.

Step one:
Let $p^0 = 0$ and $q^0 = 0$. Calculate $\tau_0 = (v_0, u_0)$: $v_0 = B d_0$ and $u_0 = d_0 B^T$. Initialize the counter: $n = 0$.
while not converged do
  Calculate the projections
  $$(\pi_p, \pi_q) = \Pi_K^h(\operatorname{div}^h p^n, \operatorname{div}^h q^n) \tag{33}$$
  and update
  $$p^{n+1} = \frac{p^n + k\,\nabla^h(\pi_p - \delta^{-1} v_0)}{1 + k\,\left|\nabla^h(\pi_p - \delta^{-1} v_0)\right|}, \tag{34}$$
  $$q^{n+1} = \frac{q^n + k\,\nabla^h(\pi_q - \delta^{-1} u_0)}{1 + k\,\left|\nabla^h(\pi_q - \delta^{-1} u_0)\right|}. \tag{35}$$
  Update the counter: $n = n + 1$.
end
Calculate $\tau$:
$$\tau = \tau_0 - \Pi_K^h(\delta \operatorname{div}^h p^{n+1}, \delta \operatorname{div}^h q^{n+1}). \tag{36}$$

Step two:
Let $r^0 = 0$ and calculate the normal field $n = (n_1, n_2)$, $n_1 = u(v^2 + u^2)^{-1/2}$ and $n_2 = -v(v^2 + u^2)^{-1/2}$. Initialize the counter: $n = 0$.
while not converged do
  Update
  $$r^{n+1} = \frac{r^n + k\,\nabla^h\!\left(\operatorname{div}^h(r^n + n) - \mu^{-1} d_0\right)}{1 + k\,\left|\nabla^h\!\left(\operatorname{div}^h(r^n + n) - \mu^{-1} d_0\right)\right|}. \tag{37}$$
  Update the counter: $n = n + 1$.
end
Recover the image $d$:
$$d = d_0 - \mu \operatorname{div}^h(r^{n+1} + n). \tag{38}$$

Fig. 2. Energy vs. iterations for the first step: (a) the dual TV-Stokes algorithm, (b) the TV-Stokes algorithm from [5].

2.4 Numerical experiments

In what follows we present several examples to show how the TV-Stokes method works for different images. All the images we have tested are normalized to gray-scale values ranging from 0 (black) to 1 (white). In the experiments we start with a clean image, shown in Figure 1, and then add random noise with zero mean. This is done by the imnoise MATLAB command, with the variance parameter set individually for the Barbara and the Lena images. The Cameraman image is taken directly from the paper [5], so we compare the results with the same noisy image as input. In [5] this model is further compared to the two-step LOT method [7] and to the famous ROF model. The signal-to-noise ratio, measured in decibels before denoising, is

$$\mathrm{SNR} = 20\,\log_{10}\sqrt{\frac{\int_\Omega (d - \bar d)^2\,dx}{\int_\Omega (\eta - \bar\eta)^2\,dx}}, \tag{39}$$

where

$$\bar d = \frac{1}{|\Omega|}\int_\Omega d\,dx, \qquad \bar\eta = \frac{1}{|\Omega|}\int_\Omega \eta\,dx. \tag{40}$$

The numerical procedures used in [5] were based on explicit finite difference schemes. This process is very slow, as the constraint converges slowly. In the proposed dual method, by contrast, the constraint is satisfied at each step by the orthogonal projection. The energy and the number of iterations required for convergence in step one are shown in Figure 2. The figure clearly illustrates that the dual TV-Stokes algorithm requires fewer iterations to reach a stable energy than the primal TV-Stokes algorithm. Although each iteration of the dual TV-Stokes algorithm requires more computational effort, it is much faster than using sparse linear solvers.
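On a discrete image, definition (39) reduces to the ratio of centered sums of squares; a small sketch (the discretization is the obvious one, not spelled out in the paper):

```python
import numpy as np

def snr_db(d, eta):
    # SNR (39): 20 log10 of the square root of the ratio of centered
    # sums of squares of the signal d and the noise eta
    num = np.sum((d - d.mean())**2)
    den = np.sum((eta - eta.mean())**2)
    return 20.0 * np.log10(np.sqrt(num / den))
```

For example, noise with one tenth the amplitude of the signal gives an SNR of 20 dB.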

Inverting the Laplacian for the orthogonal projection in each iteration is a bottleneck for very large images. In all these examples the projection was applied by means of the Fast Fourier Transform, which needs $O(N^2 \log N)$ operations in each iteration. For very large images one should consider using a multigrid solver for applying the projection; this reduces the cost to $O(N^2)$ operations.

All methods were coded in MATLAB, and in Table 1 the CPU time is given in seconds for each test image; the table compares the dual TV-Stokes algorithm with the primal TV-Stokes algorithm from [5]. We use the $L^2$-norm of the energy in (15) and (19) for the stopping criterion and stop the iteration when the difference of the energy falls below a fixed tolerance. For the TV-Stokes algorithm we used the same stopping criteria as in [5], with fixed tolerances on the $L^2$-norm of the constraint and on the difference in the energy. The time step for the first step of the TV-Stokes algorithm was set to $10^{-3}$; the second step used a separately chosen time step.

Our first test is the well-known Lena image, which we recover after adding a high level of noise. We have cropped the image to show the face, which consists of smooth areas and edges that are important to preserve. The denoised image in Figure 3 shows that the dual TV-Stokes method recovers the smooth areas without inducing any staircase effect; the smoothing parameters $\delta$ and $\mu$ were chosen individually for this image. Since this is a highly noisy image, the ROF model fails to give a visually pleasant result, because the smooth surfaces become piecewise constant. The TV-Stokes algorithm, however, attains nearly the same quality as the dual TV-Stokes algorithm, with its own choice of $\delta$.

The next test is the Cameraman image, which consists of a smooth skyline and some low-intensity buildings in the background. The buildings are difficult to recover, as they get smeared out by the denoising. The results are shown in Figure 4, with $\delta$ and $\mu$ again chosen for this image. The TV-Stokes result is taken from [5], where the SNR is the same as the one we report, $20\log_{10}(8.21) \approx 18.3$. Figure 4(e) shows the TV-Stokes reconstruction for the same noisy image, with its own value of the parameter $\delta$.

The last example is the Barbara image, which is quite detailed, with high- and low-intensity textures. The high-intensity textures and the smooth areas are preserved quite well, but the low-intensity textures disappear in the same way as for the Cameraman. This image is larger, which makes the algorithm slower because of the rather large number of matrix operations per iteration. However, a result with well-tuned parameters is still obtainable, since the method produces a denoised image after only a few steps; thus one can run the method multiple times to find the optimal parameters. For this image we used $\delta$ equal to 0.05, with $\mu$ chosen accordingly. We do not report an optimal result for the TV-Stokes algorithm in this particular case, due to the page limit and the amount of running time required.

Clearly, using the dual formulation is more effective than solving the model with the explicit gradient descent method. The CPU time is reported for only one

Fig. 3. Lena image, denoised using the dual TV-Stokes, the TV-Stokes and the ROF algorithms: (a) noisy image, SNR 14.0; (b) denoised using the dual TV-Stokes algorithm; (c) contour plot, dual TV-Stokes image; (d) difference image, dual TV-Stokes; (e) denoised using ROF [1]; (f) difference image, ROF; (g) denoised using the TV-Stokes algorithm [5]; (h) difference image, TV-Stokes.

Fig. 4. Cameraman, denoised using the dual and the primal formulation of the TV-Stokes algorithm: (a) noisy image; (b) denoised using the dual TV-Stokes algorithm; (c) contour plot, dual TV-Stokes image; (d) difference image, dual TV-Stokes; (e) denoised using the TV-Stokes algorithm [5]; (f) difference image, TV-Stokes.

Fig. 5. Barbara, denoised using the dual formulation of the TV-Stokes algorithm: (a) noisy image, SNR 20.0; (b) denoised image; (c) contour plot; (d) difference image.

Table 1. Runtimes (in seconds) of the first and second steps of the dual TV-Stokes algorithm compared to the TV-Stokes algorithm [5] for the Lena, Cameraman and Barbara images. The test system has two Opteron 270 dual-core 64-bit processors and 8 GB RAM. Both steps in the dual TV-Stokes algorithm are computed with 150 iterations, while the first and the second step of the primal TV-Stokes algorithm require far larger iteration counts.

runtime, since computing an average of many runtimes is very time consuming for the TV-Stokes method. Although the times shown are for a single runtime, they clearly indicate that our method is much faster and stable. The comparison with the primal method also shows that the proposed dual method has the same denoising quality.

References

1. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60(1-4) (1992)
2. Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2) (2000)
3. Chambolle, A., Lions, P.L.: Image recovery via total variation minimization and related problems. Numer. Math. 76 (1997)
4. Lysaker, O., Lundervold, A., Tai, X.C.: Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Proc. 12 (2003)
5. Rahman, T., Tai, X.C., Osher, S.: A TV-Stokes denoising algorithm. In: Sgallari, F., Murli, A., Paragios, N. (eds.) SSVM. Lecture Notes in Computer Science, vol. 4485. Springer (2007)
6. Litvinov, W., Rahman, T., Tai, X.C.: A modified TV-Stokes model for image processing. (Submitted 2008)
7. Lysaker, O.M., Osher, S., Tai, X.C.: Noise removal using smoothed normals and surface fitting. IEEE Transactions on Image Processing 13(10) (2004)
8. Bertalmio, M., Bertozzi, A., Sapiro, G.: Navier-Stokes, fluid dynamics, and image and video inpainting. In: Proc. IEEE Computer Vision and Pattern Recognition (CVPR) (2001)
9. Tai, X., Osher, S., Holm, R.: Image inpainting using TV-Stokes equation. In: Image Processing Based on Partial Differential Equations (2006)
10. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1-2) (2004)
11. Carter, J.: Dual methods for total variation-based image restoration. PhD thesis, UCLA (2001)
12. Chan, T.F., Golub, G.H., Mulet, P.: A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 20(6) (1999)
13. Bresson, X., Chan, T.F.: Fast minimization of the vectorial total variation norm and applications to color image processing. CAM Report (2007)
14. Ciarlet, P.G.: Introduction to Numerical Linear Algebra and Optimisation (with the assistance of Miara, B., Thomas, J.-M.). Cambridge University Press (1989)


More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

LECTURE 7. Least Squares and Variants. Optimization Models EE 127 / EE 227AT. Outline. Least Squares. Notes. Notes. Notes. Notes.

LECTURE 7. Least Squares and Variants. Optimization Models EE 127 / EE 227AT. Outline. Least Squares. Notes. Notes. Notes. Notes. Optimization Models EE 127 / EE 227AT Laurent El Ghaoui EECS department UC Berkeley Spring 2015 Sp 15 1 / 23 LECTURE 7 Least Squares and Variants If others would but reflect on mathematical truths as deeply

More information

Lecture 4 Colorization and Segmentation

Lecture 4 Colorization and Segmentation Lecture 4 Colorization and Segmentation Summer School Mathematics in Imaging Science University of Bologna, Itay June 1st 2018 Friday 11:15-13:15 Sung Ha Kang School of Mathematics Georgia Institute of

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches

More information

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization /

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization / Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Nonlinear diffusion filtering on extended neighborhood

Nonlinear diffusion filtering on extended neighborhood Applied Numerical Mathematics 5 005) 1 11 www.elsevier.com/locate/apnum Nonlinear diffusion filtering on extended neighborhood Danny Barash Genome Diversity Center, Institute of Evolution, University of

More information

A Primal-Dual Method for Total Variation-Based. Wavelet Domain Inpainting

A Primal-Dual Method for Total Variation-Based. Wavelet Domain Inpainting A Primal-Dual Method for Total Variation-Based 1 Wavelet Domain Inpainting You-Wei Wen, Raymond H. Chan, Andy M. Yip Abstract Loss of information in a wavelet domain can occur during storage or transmission

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

Asymmetric Cheeger cut and application to multi-class unsupervised clustering

Asymmetric Cheeger cut and application to multi-class unsupervised clustering Asymmetric Cheeger cut and application to multi-class unsupervised clustering Xavier Bresson Thomas Laurent April 8, 0 Abstract Cheeger cut has recently been shown to provide excellent classification results

More information

Gauge optimization and duality

Gauge optimization and duality 1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel

More information

Novel integro-differential equations in image processing and its applications

Novel integro-differential equations in image processing and its applications Novel integro-differential equations in image processing and its applications Prashant Athavale a and Eitan Tadmor b a Institute of Pure and Applied Mathematics, University of California, Los Angeles,

More information

Recent developments on sparse representation

Recent developments on sparse representation Recent developments on sparse representation Zeng Tieyong Department of Mathematics, Hong Kong Baptist University Email: zeng@hkbu.edu.hk Hong Kong Baptist University Dec. 8, 2008 First Previous Next Last

More information

8 A pseudo-spectral solution to the Stokes Problem

8 A pseudo-spectral solution to the Stokes Problem 8 A pseudo-spectral solution to the Stokes Problem 8.1 The Method 8.1.1 Generalities We are interested in setting up a pseudo-spectral method for the following Stokes Problem u σu p = f in Ω u = 0 in Ω,

More information

On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1,

On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1, Math 30 Winter 05 Solution to Homework 3. Recognizing the convexity of g(x) := x log x, from Jensen s inequality we get d(x) n x + + x n n log x + + x n n where the equality is attained only at x = (/n,...,

More information

A Study on Numerical Solution to the Incompressible Navier-Stokes Equation

A Study on Numerical Solution to the Incompressible Navier-Stokes Equation A Study on Numerical Solution to the Incompressible Navier-Stokes Equation Zipeng Zhao May 2014 1 Introduction 1.1 Motivation One of the most important applications of finite differences lies in the field

More information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

Sparse Regularization via Convex Analysis

Sparse Regularization via Convex Analysis Sparse Regularization via Convex Analysis Ivan Selesnick Electrical and Computer Engineering Tandon School of Engineering New York University Brooklyn, New York, USA 29 / 66 Convex or non-convex: Which

More information

POISSON noise, also known as photon noise, is a basic

POISSON noise, also known as photon noise, is a basic IEEE SIGNAL PROCESSING LETTERS, VOL. N, NO. N, JUNE 2016 1 A fast and effective method for a Poisson denoising model with total variation Wei Wang and Chuanjiang He arxiv:1609.05035v1 [math.oc] 16 Sep

More information

Image processing and Computer Vision

Image processing and Computer Vision 1 / 1 Image processing and Computer Vision Continuous Optimization and applications to image processing Martin de La Gorce martin.de-la-gorce@enpc.fr February 2015 Optimization 2 / 1 We have a function

More information

Compressive Imaging by Generalized Total Variation Minimization

Compressive Imaging by Generalized Total Variation Minimization 1 / 23 Compressive Imaging by Generalized Total Variation Minimization Jie Yan and Wu-Sheng Lu Department of Electrical and Computer Engineering University of Victoria, Victoria, BC, Canada APCCAS 2014,

More information

arxiv: v1 [math.na] 3 Jan 2019

arxiv: v1 [math.na] 3 Jan 2019 arxiv manuscript No. (will be inserted by the editor) A Finite Element Nonoverlapping Domain Decomposition Method with Lagrange Multipliers for the Dual Total Variation Minimizations Chang-Ock Lee Jongho

More information

Nonlinear Flows for Displacement Correction and Applications in Tomography

Nonlinear Flows for Displacement Correction and Applications in Tomography Nonlinear Flows for Displacement Correction and Applications in Tomography Guozhi Dong 1 and Otmar Scherzer 1,2 1 Computational Science Center, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Wien,

More information

CSCI5654 (Linear Programming, Fall 2013) Lectures Lectures 10,11 Slide# 1

CSCI5654 (Linear Programming, Fall 2013) Lectures Lectures 10,11 Slide# 1 CSCI5654 (Linear Programming, Fall 2013) Lectures 10-12 Lectures 10,11 Slide# 1 Today s Lecture 1. Introduction to norms: L 1,L 2,L. 2. Casting absolute value and max operators. 3. Norm minimization problems.

More information

arxiv: v1 [math.na] 17 Nov 2018

arxiv: v1 [math.na] 17 Nov 2018 A New Operator Splitting Method for Euler s Elastica Model Liang-Jian Deng a, Roland Glowinski b, Xue-Cheng Tai c a School of Mathematical Sciences, University of Electronic Science and Technology of China,

More information

Weighted Nonlocal Laplacian on Interpolation from Sparse Data

Weighted Nonlocal Laplacian on Interpolation from Sparse Data Noname manuscript No. (will be inserted by the editor) Weighted Nonlocal Laplacian on Interpolation from Sparse Data Zuoqiang Shi Stanley Osher Wei Zhu Received: date / Accepted: date Abstract Inspired

More information

IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND

IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND JIAN-FENG CAI, BIN DONG, STANLEY OSHER, AND ZUOWEI SHEN Abstract. The variational techniques (e.g., the total variation based method []) are

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

6 The SVD Applied to Signal and Image Deblurring

6 The SVD Applied to Signal and Image Deblurring 6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

Tensor-based Image Diffusions Derived from Generalizations of the Total Variation and Beltrami Functionals

Tensor-based Image Diffusions Derived from Generalizations of the Total Variation and Beltrami Functionals Generalizations of the Total Variation and Beltrami Functionals 29 September 2010 International Conference on Image Processing 2010, Hong Kong Anastasios Roussos and Petros Maragos Computer Vision, Speech

More information

ETNA Kent State University

ETNA Kent State University C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

More information

A Four-Pixel Scheme for Singular Differential Equations

A Four-Pixel Scheme for Singular Differential Equations A Four-Pixel Scheme for Singular Differential Equations Martin Welk 1, Joachim Weickert 1, and Gabriele Steidl 1 Mathematical Image Analysis Group Faculty of Mathematics and Computer Science, Bldg. 7 Saarland

More information

Solution-driven Adaptive Total Variation Regularization

Solution-driven Adaptive Total Variation Regularization 1/15 Solution-driven Adaptive Total Variation Regularization Frank Lenzen 1, Jan Lellmann 2, Florian Becker 1, Stefania Petra 1, Johannes Berger 1, Christoph Schnörr 1 1 Heidelberg Collaboratory for Image

More information

SEQUENTIAL SUBSPACE FINDING: A NEW ALGORITHM FOR LEARNING LOW-DIMENSIONAL LINEAR SUBSPACES.

SEQUENTIAL SUBSPACE FINDING: A NEW ALGORITHM FOR LEARNING LOW-DIMENSIONAL LINEAR SUBSPACES. SEQUENTIAL SUBSPACE FINDING: A NEW ALGORITHM FOR LEARNING LOW-DIMENSIONAL LINEAR SUBSPACES Mostafa Sadeghi a, Mohsen Joneidi a, Massoud Babaie-Zadeh a, and Christian Jutten b a Electrical Engineering Department,

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

EE 381V: Large Scale Optimization Fall Lecture 24 April 11

EE 381V: Large Scale Optimization Fall Lecture 24 April 11 EE 381V: Large Scale Optimization Fall 2012 Lecture 24 April 11 Lecturer: Caramanis & Sanghavi Scribe: Tao Huang 24.1 Review In past classes, we studied the problem of sparsity. Sparsity problem is that

More information

Nonlinear Optimization for Optimal Control

Nonlinear Optimization for Optimal Control Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 9 Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 2 Separable convex optimization a special case is min f(x)

More information

Nonlinear regularizations of TV based PDEs for image processing

Nonlinear regularizations of TV based PDEs for image processing Nonlinear regularizations of TV based PDEs for image processing Andrea Bertozzi, John Greer, Stanley Osher, and Kevin Vixie ABSTRACT. We consider both second order and fourth order TV-based PDEs for image

More information

A posteriori error control for the binary Mumford Shah model

A posteriori error control for the binary Mumford Shah model A posteriori error control for the binary Mumford Shah model Benjamin Berkels 1, Alexander Effland 2, Martin Rumpf 2 1 AICES Graduate School, RWTH Aachen University, Germany 2 Institute for Numerical Simulation,

More information

Sparse linear models

Sparse linear models Sparse linear models Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 2/22/2016 Introduction Linear transforms Frequency representation Short-time

More information

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Patrick L. Combettes joint work with J.-C. Pesquet) Laboratoire Jacques-Louis Lions Faculté de Mathématiques

More information

Indefinite and physics-based preconditioning

Indefinite and physics-based preconditioning Indefinite and physics-based preconditioning Jed Brown VAW, ETH Zürich 2009-01-29 Newton iteration Standard form of a nonlinear system F (u) 0 Iteration Solve: Update: J(ũ)u F (ũ) ũ + ũ + u Example (p-bratu)

More information

A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization

A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization Panos Parpas Department of Computing Imperial College London www.doc.ic.ac.uk/ pp500 p.parpas@imperial.ac.uk jointly with D.V.

More information

UPRE Method for Total Variation Parameter Selection

UPRE Method for Total Variation Parameter Selection UPRE Method for Total Variation Parameter Selection Youzuo Lin School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287 USA. Brendt Wohlberg 1, T-5, Los Alamos National

More information

Final Examination. CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University

Final Examination. CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University Final Examination CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University The exam runs for 3 hours. The exam contains eight problems. You must complete the first

More information

Conjugate gradient acceleration of non-linear smoothing filters Iterated edge-preserving smoothing

Conjugate gradient acceleration of non-linear smoothing filters Iterated edge-preserving smoothing Cambridge, Massachusetts Conjugate gradient acceleration of non-linear smoothing filters Iterated edge-preserving smoothing Andrew Knyazev (knyazev@merl.com) (speaker) Alexander Malyshev (malyshev@merl.com)

More information

Regular article Nonlinear multigrid methods for total variation image denoising

Regular article Nonlinear multigrid methods for total variation image denoising Comput Visual Sci 7: 199 06 (004) Digital Object Identifier (DOI) 10.1007/s00791-004-0150-3 Computing and Visualization in Science Regular article Nonlinear multigrid methods for total variation image

More information

Discontinuous Galerkin Methods

Discontinuous Galerkin Methods Discontinuous Galerkin Methods Joachim Schöberl May 20, 206 Discontinuous Galerkin (DG) methods approximate the solution with piecewise functions (polynomials), which are discontinuous across element interfaces.

More information

Image enhancement. Why image enhancement? Why image enhancement? Why image enhancement? Example of artifacts caused by image encoding

Image enhancement. Why image enhancement? Why image enhancement? Why image enhancement? Example of artifacts caused by image encoding 13 Why image enhancement? Image enhancement Example of artifacts caused by image encoding Computer Vision, Lecture 14 Michael Felsberg Computer Vision Laboratory Department of Electrical Engineering 12

More information

SIGNAL AND IMAGE RESTORATION: SOLVING

SIGNAL AND IMAGE RESTORATION: SOLVING 1 / 55 SIGNAL AND IMAGE RESTORATION: SOLVING ILL-POSED INVERSE PROBLEMS - ESTIMATING PARAMETERS Rosemary Renaut http://math.asu.edu/ rosie CORNELL MAY 10, 2013 2 / 55 Outline Background Parameter Estimation

More information

ALADIN An Algorithm for Distributed Non-Convex Optimization and Control

ALADIN An Algorithm for Distributed Non-Convex Optimization and Control ALADIN An Algorithm for Distributed Non-Convex Optimization and Control Boris Houska, Yuning Jiang, Janick Frasch, Rien Quirynen, Dimitris Kouzoupis, Moritz Diehl ShanghaiTech University, University of

More information

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned

More information

Deep Learning: Approximation of Functions by Composition

Deep Learning: Approximation of Functions by Composition Deep Learning: Approximation of Functions by Composition Zuowei Shen Department of Mathematics National University of Singapore Outline 1 A brief introduction of approximation theory 2 Deep learning: approximation

More information

Lecture 1: Numerical Issues from Inverse Problems (Parameter Estimation, Regularization Theory, and Parallel Algorithms)

Lecture 1: Numerical Issues from Inverse Problems (Parameter Estimation, Regularization Theory, and Parallel Algorithms) Lecture 1: Numerical Issues from Inverse Problems (Parameter Estimation, Regularization Theory, and Parallel Algorithms) Youzuo Lin 1 Joint work with: Rosemary A. Renaut 2 Brendt Wohlberg 1 Hongbin Guo

More information

A Convex Relaxation of the Ambrosio Tortorelli Elliptic Functionals for the Mumford Shah Functional

A Convex Relaxation of the Ambrosio Tortorelli Elliptic Functionals for the Mumford Shah Functional A Convex Relaxation of the Ambrosio Tortorelli Elliptic Functionals for the Mumford Shah Functional Youngwook Kee and Junmo Kim KAIST, South Korea Abstract In this paper, we revisit the phase-field approximation

More information

Proximal Gradient Descent and Acceleration. Ryan Tibshirani Convex Optimization /36-725

Proximal Gradient Descent and Acceleration. Ryan Tibshirani Convex Optimization /36-725 Proximal Gradient Descent and Acceleration Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: subgradient method Consider the problem min f(x) with f convex, and dom(f) = R n. Subgradient method:

More information

c Springer, Reprinted with permission.

c Springer, Reprinted with permission. Zhijian Yuan and Erkki Oja. A FastICA Algorithm for Non-negative Independent Component Analysis. In Puntonet, Carlos G.; Prieto, Alberto (Eds.), Proceedings of the Fifth International Symposium on Independent

More information