Scaled gradient projection methods in image deblurring and denoising

Mario Bertero¹, Patrizia Boccacci¹, Silvia Bonettini², Riccardo Zanella³, Luca Zanni³
¹ Dipartimento di Matematica, Università di Genova
² Dipartimento di Matematica, Università di Ferrara
³ Dipartimento di Matematica, Università di Modena e Reggio Emilia

Conference on Applied Inverse Problems (AIP), Vienna, July 2009

Outline

1. Examples of imaging problems
2. Optimization problem
3. Gradient methods and step-length selections
4. Scaled Gradient Projection (SGP) method
5. Test results
6. Conclusions and future works

Image deblurring example

Image acquisition model:
$$y = Hx + b + n,$$
where $y \in \mathbb{R}^n$ is the observed image, $H \in \mathbb{R}^{n \times n}$ the blurring operator, $b \in \mathbb{R}^n$ the background radiation, and $n \in \mathbb{R}^n$ the unknown noise.

Goal: find an approximation of the true image $x \in \mathbb{R}^n$.

Maximum likelihood approach (and early stopping):
$$\min L_y(x) \quad \text{subject to} \quad x \in \Omega.$$
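As a concrete illustration, here is a minimal NumPy sketch that generates data according to this acquisition model, assuming a periodic Gaussian blur and (anticipating the Poisson-noise experiments below) photon-counting noise; all names and parameter values are illustrative, not taken from the talk.

```python
import numpy as np

def gaussian_psf(n, sigma):
    # Centered Gaussian point-spread function on an n x n grid, sum-normalized.
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def blur(x, psf):
    # Periodic (circulant) blurring H x, computed with the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros((n, n)); x_true[24:40, 24:40] = 100.0   # toy object
psf = gaussian_psf(n, sigma=2.0)
b = 10.0                                                  # constant background
y = rng.poisson(blur(x_true, psf) + b).astype(float)      # observed image
```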

Image denoising example

Image acquisition model:
$$y = x + n,$$
where $y \in \mathbb{R}^n$ is the observed image and $n \in \mathbb{R}^n$ the unknown noise.

Goal: remove the noise from $y \in \mathbb{R}^n$, while preserving some features.

Regularized approach:
$$\min J_y^{(0)}(x) + \mu J_R(x) \quad \text{subject to} \quad x \in \Omega,$$
where $J_R(x)$ is (for example): $\|x\|_2^2$ (Tikhonov), $\|x\|_1$ (sparsity inducing), or $\mathrm{TV}(x)$ (total variation).
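For reference, here are minimal NumPy implementations of the three penalties, using one common forward-difference discretization of TV for 2-D images; the boundary handling (replicated last row/column) is an illustrative choice.

```python
import numpy as np

def tikhonov(x):          # ||x||_2^2
    return np.sum(x**2)

def l1(x):                # ||x||_1, sparsity inducing
    return np.sum(np.abs(x))

def total_variation(x):   # discrete isotropic TV of a 2-D image
    dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences,
    dy = np.diff(x, axis=1, append=x[:, -1:])   # replicated boundary
    return np.sum(np.sqrt(dx**2 + dy**2))
```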

Problem setting

Both examples lead to a constrained optimization problem:
$$\min f(x) \quad \text{subject to} \quad x \in \Omega,$$
where $\Omega$ is a convex and closed set and $f(x)$ is continuously differentiable in $\Omega$.

Why gradient-type methods?

Gradient methods are first-order optimization methods.

Pros:
- simplicity of implementation (first-order iterative method);
- low memory requirements (suitable for high-dimensional problems);
- ability to provide medium-accuracy solutions;
- semiconvergence (observed in numerical practice).

Cons:
- low convergence rate (hundreds or thousands of iterations).

The Barzilai-Borwein (BB) step-length selection rules

Consider the gradient method
$$x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}, \qquad k = 0, 1, \ldots,$$
with $g(x) = \nabla f(x)$.

Problem: how can the step length $\alpha_k > 0$ be chosen to improve the convergence rate?

Solution: regard the matrix $B(\alpha_k) = (\alpha_k I)^{-1}$ as an approximation of the Hessian $\nabla^2 f(x^{(k)})$, and determine $\alpha_k$ by forcing a quasi-Newton property on $B(\alpha_k)$:
$$\alpha_k^{BB1} = \mathop{\mathrm{argmin}}_{\alpha \in \mathbb{R}} \| B(\alpha)\, s^{(k-1)} - z^{(k-1)} \| \quad \text{or} \quad \alpha_k^{BB2} = \mathop{\mathrm{argmin}}_{\alpha \in \mathbb{R}} \| s^{(k-1)} - B(\alpha)^{-1} z^{(k-1)} \|,$$
where $s^{(k-1)} = x^{(k)} - x^{(k-1)}$ and $z^{(k-1)} = g^{(k)} - g^{(k-1)}$.

The BB step-length selection rules (cont.)

It follows that
$$\alpha_k^{BB1} = \frac{s^{(k-1)T} s^{(k-1)}}{s^{(k-1)T} z^{(k-1)}} \quad \text{or} \quad \alpha_k^{BB2} = \frac{s^{(k-1)T} z^{(k-1)}}{z^{(k-1)T} z^{(k-1)}},$$
where $s^{(k-1)} = x^{(k)} - x^{(k-1)}$ and $z^{(k-1)} = g^{(k)} - g^{(k-1)}$.

Remarkable improvements in comparison with the steepest descent method are observed: [Barzilai-Borwein, IMA J. Num. Anal. 1988], [Raydan, IMA J. Num. Anal. 1993], [Raydan, SIAM J. Optim. 1997], [Friedlander et al., SIAM J. Num. Anal. 1999], [Fletcher, Tech. Rep. 207, 2001], [Dai-Liao, IMA J. Num. Anal. 2002].
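In code, for flattened (1-D) iterates and gradients, the two rules are one line each. This bare sketch has no safeguards; positivity of the denominators and bounds on the step are added in the SGP step-length selection below.

```python
import numpy as np

def bb_steplengths(x, x_old, g, g_old):
    # Unsafeguarded BB rules: BB1 = s's / s'z, BB2 = s'z / z'z.
    s = x - x_old          # s^{(k-1)}
    z = g - g_old          # z^{(k-1)}
    return (s @ s) / (s @ z), (s @ z) / (z @ z)
```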

Effective use of the BB rules

Further improvements are obtained by using adaptive alternations of the two BB rules; for example:
$$\alpha_k = \begin{cases} \alpha_k^{BB2} & \text{if } \alpha_k^{BB2} / \alpha_k^{BB1} < \tau, \\ \alpha_k^{BB1} & \text{otherwise.} \end{cases}$$

Many suggestions for the alternation are available: [Dai, Optim. 2003], [Dai-Fletcher, Math. Prog. 2005], [Serafini et al., Opt. Meth. Soft. 2005], [Dai et al., IMA J. Num. Anal. 2006], [Zhou et al., Comput. Opt. Appl. 2006], [Frassoldati et al., J. Ind. Manag. Opt. 2008].

The BB step-lengths and scaled gradient methods

Consider the scaled gradient method
$$x^{(k+1)} = x^{(k)} - \alpha_k D_k g^{(k)}, \qquad k = 0, 1, \ldots,$$
where $D_k$ is a symmetric positive definite scaling matrix. By forcing the quasi-Newton properties on $B(\alpha_k) = (\alpha_k D_k)^{-1}$ we have
$$\alpha_k^{BB1} = \frac{s^{(k-1)T} D_k^{-1} D_k^{-1} s^{(k-1)}}{s^{(k-1)T} D_k^{-1} z^{(k-1)}} \quad \text{and} \quad \alpha_k^{BB2} = \frac{s^{(k-1)T} D_k z^{(k-1)}}{z^{(k-1)T} D_k D_k z^{(k-1)}},$$
where $s^{(k-1)} = x^{(k)} - x^{(k-1)}$ and $z^{(k-1)} = g^{(k)} - g^{(k-1)}$.
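A sketch of the scaled rules for a diagonal $D_k$, the common case in the experiments below, with the diagonal stored in a vector `d` (the function name is illustrative):

```python
import numpy as np

def scaled_bb_steplengths(s, z, d):
    # Scaled BB rules for diagonal D_k with entries d:
    # BB1 = s' D^{-2} s / (s' D^{-1} z),  BB2 = s' D z / (z' D^2 z).
    bb1 = np.sum((s / d) ** 2) / np.sum(s * z / d)
    bb2 = np.sum(s * d * z) / np.sum((d * z) ** 2)
    return bb1, bb2
```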

Scaled Gradient Projection (SGP) method: basic notations [Bonettini et al., Inv. Prob. 2009]

Scaling matrix: $D_k \in \mathcal{D}_L = \{ D \in \mathbb{R}^{n \times n} \text{ s.p.d.} : \|D\| \le L,\ \|D^{-1}\| \le L \}$, $L > 1$; if $D_k$ is diagonal, the requirement leads to $L^{-1} \le (D_k)_{ii} \le L$.

Projection operator: $P_{\Omega,D}(x) \equiv \mathop{\mathrm{argmin}}_{y \in \Omega} \|x - y\|_D$, where $\|x\|_D = \sqrt{x^T D x}$.
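Note that for a diagonal $D$ and a separable feasible set such as the nonnegative orthant, the weighted projection decouples componentwise and does not depend on the weights; a sketch:

```python
import numpy as np

def project_nonneg(x):
    # P_{Omega,D}(x) for Omega = {x : x_i >= 0} and diagonal D: the problem
    # min_{y >= 0} sum_i d_i (x_i - y_i)^2 separates over the components,
    # and each term is minimized at y_i = max(x_i, 0) for any weight d_i > 0.
    return np.maximum(x, 0.0)
```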

Scaled Gradient Projection (SGP) method

Given $0 < \alpha_{\min} < \alpha_{\max}$, line-search parameters $\beta, \gamma \in (0, 1)$, and a positive integer $M$:

1. Initialization. Set $x^{(0)} \in \Omega$, $D_0 \in \mathcal{D}_L$, $\alpha_0 \in [\alpha_{\min}, \alpha_{\max}]$.
For $k = 0, 1, 2, \ldots$
2. Projection. $y^{(k)} = P_{\Omega, D_k^{-1}}\big(x^{(k)} - \alpha_k D_k \nabla f(x^{(k)})\big)$; if $y^{(k)} = x^{(k)}$, then stop.
3. Descent direction. $d^{(k)} = y^{(k)} - x^{(k)}$.
4. Line search. Set $\lambda_k = 1$ and $\bar f = \max_{0 \le j \le \min\{k, M-1\}} f(x^{(k-j)})$.
   While $f(x^{(k)} + \lambda_k d^{(k)}) > \bar f + \gamma \lambda_k \nabla f(x^{(k)})^T d^{(k)}$: set $\lambda_k = \beta \lambda_k$. End.
   Set $x^{(k+1)} = x^{(k)} + \lambda_k d^{(k)}$.
5. Update. Define $D_{k+1} \in \mathcal{D}_L$ and $\alpha_{k+1} \in [\alpha_{\min}, \alpha_{\max}]$.
End
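The following is a minimal NumPy sketch of this loop for $\Omega = \{x : x_i \ge 0\}$ and a diagonal scaling, under which the projection reduces to the componentwise clamp noted above. The function names, the user-supplied `steplength` and `scaling` rules, and all default parameter values are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def sgp(f, grad, x0, steplength, scaling,
        alpha0=1.0, alpha_min=1e-3, alpha_max=1e5,
        beta=0.4, gamma=1e-4, M=10, max_iter=500):
    # `scaling(x, k)` returns the diagonal of D_k; `steplength(s, z, d)`
    # returns a trial alpha (e.g. one of the scaled BB rules above).
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    g = grad(x)
    d = scaling(x, 0)
    alpha = alpha0
    f_hist = [f(x)]                                 # nonmonotone reference values
    for k in range(max_iter):
        y = np.maximum(x - alpha * d * g, 0.0)      # step 2: projection
        dk = y - x                                  # step 3: descent direction
        if not np.any(dk):
            break                                   # y == x: stationary point
        lam = 1.0
        f_ref = max(f_hist[-M:])                    # max over the last M values
        g_dk = np.sum(g * dk)
        while f(x + lam * dk) > f_ref + gamma * lam * g_dk and lam > 1e-12:
            lam *= beta                             # step 4: backtracking
        x_old, g_old = x, g
        x = x + lam * dk
        g = grad(x)
        f_hist.append(f(x))
        d = scaling(x, k + 1)                       # step 5: update D_{k+1}
        s, z = x - x_old, g - g_old
        alpha = min(alpha_max, max(alpha_min, steplength(s, z, d)))
    return x
```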

SGP acceleration techniques

The acceleration technique involves:
- the selection of the step length $\alpha_k$: a general algorithm (detailed below);
- the definition of the scaling matrix $D_k$: problem dependent (see the experiment section).

SGP step-length selection

Let $\alpha_{\min} = 10^{-3}$, $\alpha_{\max} = 10^{5}$, $M_\alpha = 3$, $\tau = 0.5$.

If $s^{(k-1)T} D_k^{-1} z^{(k-1)} \le 0$, then $\alpha_k^{BB1} = \alpha_{\max}$;
else set $\alpha = \dfrac{s^{(k-1)T} D_k^{-1} D_k^{-1} s^{(k-1)}}{s^{(k-1)T} D_k^{-1} z^{(k-1)}}$ and $\alpha_k^{BB1} = \min\{\alpha_{\max}, \max\{\alpha_{\min}, \alpha\}\}$.

If $s^{(k-1)T} D_k z^{(k-1)} \le 0$, then $\alpha_k^{BB2} = \alpha_{\max}$;
else set $\alpha = \dfrac{s^{(k-1)T} D_k z^{(k-1)}}{z^{(k-1)T} D_k D_k z^{(k-1)}}$ and $\alpha_k^{BB2} = \min\{\alpha_{\max}, \max\{\alpha_{\min}, \alpha\}\}$.

If $\alpha_k^{BB2} / \alpha_k^{BB1} < \tau$, then $\alpha_k = \min\{\alpha_{k-j}^{BB2},\ j = 0, \ldots, M_\alpha - 1\}$ and $\tau \leftarrow 0.9\,\tau$;
else $\alpha_k = \alpha_k^{BB1}$ and $\tau \leftarrow 1.1\,\tau$.
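A sketch of this rule in NumPy, for a diagonal scaling with entries in `d`; the function name and the handling of the BB2 history are illustrative. A small closure over `bb2_hist` and `tau` turns it into the `steplength` argument of the `sgp` loop sketched above.

```python
import numpy as np

def sgp_steplength(s, z, d, bb2_hist, tau,
                   alpha_min=1e-3, alpha_max=1e5, m_alpha=3):
    # Safeguarded scaled BB1.
    s_dinv_z = np.sum(s * z / d)
    if s_dinv_z <= 0:
        bb1 = alpha_max
    else:
        bb1 = min(alpha_max, max(alpha_min, np.sum((s / d) ** 2) / s_dinv_z))
    # Safeguarded scaled BB2.
    s_d_z = np.sum(s * d * z)
    if s_d_z <= 0:
        bb2 = alpha_max
    else:
        bb2 = min(alpha_max, max(alpha_min, s_d_z / np.sum((d * z) ** 2)))
    bb2_hist.append(bb2)          # caller keeps the BB2 history across iterations
    # Adaptive alternation with a variable threshold tau.
    if bb2 / bb1 < tau:
        alpha = min(bb2_hist[-m_alpha:])   # min over the last M_alpha BB2 values
        tau *= 0.9
    else:
        alpha = bb1
        tau *= 1.1
    return alpha, tau
```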

Convergence of SGP

$$\min f(x) \quad \text{subject to} \quad x \in \Omega \qquad (1)$$
where $\Omega$ is a convex and closed set and $f(x)$ is continuously differentiable in $\Omega$.

Theorem. Assume that the level set $\Omega_0 = \{x \in \Omega : f(x) \le f(x^{(0)})\}$ is bounded. Then every accumulation point of the sequence $\{x^{(k)}\}$ generated by the algorithm SGP is a stationary point of (1).

Image deblurring: Poisson noise

(Figure: object; blurred noisy image.)

$$f(x) = D_{KL}(Hx + b, y) = \sum_{i=1}^{n} \left( \sum_{j=1}^{n} H_{ij} x_j + b_i - y_i - y_i \log \frac{\sum_{j=1}^{n} H_{ij} x_j + b_i}{y_i} \right),$$
$$\Omega = \{ x \in \mathbb{R}^n : x_i \ge 0,\ i = 1, \ldots, n \}.$$

A suitable reconstruction is obtained by early stopping of the SGP iterations.
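A sketch of this objective and its gradient $\nabla f(x) = H^T (1 - y/(Hx + b))$, with the blurring operator and its adjoint passed as callables (`blur_op` and `blur_adj` are illustrative names); it assumes $y_i > 0$. For the symmetric, periodic PSF of the earlier acquisition sketch, $H$ is symmetric and the adjoint coincides with the blur itself.

```python
import numpy as np

def kl_objective(x, blur_op, b, y):
    # f(x) = sum_i [ (Hx+b)_i - y_i - y_i log((Hx+b)_i / y_i) ]; assumes y > 0.
    t = blur_op(x) + b
    return np.sum(t - y - y * np.log(t / y))

def kl_gradient(x, blur_op, blur_adj, b, y):
    # grad f(x) = H^T (1 - y / (Hx + b)).
    return blur_adj(1.0 - y / (blur_op(x) + b))
```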

Image deblurring: Poisson noise (II)

Algorithms:
- SGP: adaptive selection of $\alpha_k$; scaling matrix $D_k = \min\{ L, \max\{ L^{-1}, \operatorname{diag}(x^{(k)}) \} \}$, taken componentwise.
- EM: Richardson-Lucy, or Expectation Maximization, algorithm.
- EM_MATLAB: the deconvlucy function of the Matlab Image Processing Toolbox.
- WMRNSD: Weighted Minimum Residual Norm Steepest Descent [Bardsley-Nagy].

Table: iteration number, $\ell_2$ relative error and time in seconds for SGP, EM, EM_MATLAB and WMRNSD. Test environment: Matlab on an AMD Opteron Dual Core 2.4 GHz processor.
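This scaling is one line of code; the clipping to $[L^{-1}, L]$ keeps $D_k$ inside the set $\mathcal{D}_L$ required by the convergence theorem. The default value of $L$ here is an illustrative placeholder.

```python
import numpy as np

def deblurring_scaling(x, L=1e10):
    # Diagonal of D_k = min{L, max{1/L, x^{(k)}}}, taken componentwise.
    return np.clip(x, 1.0 / L, L)
```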

Image deblurring: SGP reconstruction

(Figure: object; blurred noisy image; SGP reconstruction.)

Image denoising: Poisson noise

(Figure: object; noisy image.)

$$f(x) = D_{KL}(x, y) + \beta\, \mathrm{TV}(x), \qquad \Omega = \{ x \in \mathbb{R}^n : x_i \ge \eta,\ i = 1, \ldots, n \}.$$
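A sketch of this objective using a smoothed total variation, a standard device to keep $f$ continuously differentiable; the smoothing parameter `delta` is an illustrative assumption, and the constraint $x_i \ge \eta > 0$ keeps the logarithm well defined.

```python
import numpy as np

def denoising_objective(x, y, beta, delta=1e-3):
    # f(x) = D_KL(x, y) + beta * TV_delta(x), with TV smoothed by delta.
    kl = np.sum(x - y - y * np.log(x / y))
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    tv = np.sum(np.sqrt(dx ** 2 + dy ** 2 + delta ** 2))
    return kl + beta * tv
```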

Image denoising: Poisson noise (II)

Algorithms:
- SGP: adaptive selection of $\alpha_k$; scaling matrix $D_k = x^{(k)} / (1 + \beta V)$, where $\nabla f(x^{(k)}) = V - U$ with $V_i \ge 0$ and $U_i \ge 0$ [Lanteri et al., Inv. Prob. 2002].
- GP: adaptive selection of $\alpha_k$; scaling matrix $D_k = I$.
- GP-BB: only $\alpha_k^{BB1}$; scaling matrix $D_k = I$.

Table: iteration number, $\ell_2$ relative error and time in seconds for SGP, GP and GP-BB. Test environment: Matlab on an AMD Opteron Dual Core 2.4 GHz processor. [Zanella et al., Inv. Prob. 2009]

Image denoising: SGP reconstruction

(Figure: object; noisy image; SGP reconstruction.)

An application in medical imaging

(Figure: object; noisy image; SGP reconstruction.)

Image size: …; parameter: β = 0.3. Noisy image relative error: 17.9%; reconstructed image relative error: 2.9%. Computational time: … seconds (Matlab on an AMD Opteron Dual Core 2.4 GHz processor).

GPU implementation: deblurring

Table: for several image sizes $N = n \times n$, iteration number, $\ell_2$ relative error and time in seconds of the CPU and GPU implementations, together with the speedup.

C implementation: Microsoft Visual Studio 2005, AMD Athlon X2 Dual-Core at 3.11 GHz. C-CUDA implementation: CUDA 2.0, NVIDIA GTX 280, AMD Athlon X2 Dual-Core at 3.11 GHz. [Ruggiero et al., J. Global Optim. 2009]

GPU implementation: denoising

Table: for several image sizes $N = n \times n$, iteration number, $\ell_2$ relative error and time in seconds of the CPU and GPU implementations, together with the speedup.

C implementation: Microsoft Visual Studio 2005, AMD Athlon X2 Dual-Core at 3.11 GHz. C-CUDA implementation: CUDA 2.0, NVIDIA GTX 280, AMD Athlon X2 Dual-Core at 3.11 GHz. [Serafini et al., ParCo 2009]

Other examples of applications

- Least-squares minimization: F. Benvenuto, "Iterative methods for constrained and regularized least-squares problems", M20, 23 July, 15:15-17:15, C2.
- Sparsity constraints: C. De Mol, "Iterative Algorithms for Sparse Recovery", M19, 21 July, 15:15-17:15, C2.

Conclusions and Future Works

Conclusions:
- by exploiting both the scaling matrix and the Barzilai-Borwein step-length rules, the SGP method is able to achieve a satisfactory reconstruction in a reasonable time;
- it is easy to implement;
- it gives remarkable results on massively parallel architectures (GPUs).

Work in progress: comparative analysis.
- TV image reconstruction [S. Wright, M39, 23 July, 10:30-12:30, D]: duality-based algorithms [Zhu-Wright, COAP 2008]; primal-dual approaches [Zhu-Chan, CAM Rep. UCLA 2008], [Lee-Wright, 2008].
- Regularized deblurring [G. Landi, M39, 23 July, 10:30-12:30, D]: quasi-Newton approaches [Landi-Loli Piccolomini, Num. Alg. 2008].
