Blind image restoration as a convex optimization problem

Int. J. Simul. Multidisci. Des. Optim. 4 (2010), © ASMDO 2010, DOI: /ijsmdo/

Blind image restoration as a convex optimization problem

A. Bouhamidi (1,a), K. Jbilou (1)

(1) Université de Lille Nord de France, L.M.P.A., ULCO, 50 rue F. Buisson, BP 699, Calais Cedex, France

Received 12 December 2009, Accepted 15 February 2010

Abstract - In this paper, we consider blind image restoration as a convex constrained problem and propose to solve it by a conditional gradient method. The method is based on a Tikhonov regularization technique and on an approximation of the blur matrix by a Kronecker product of two matrices, each given as a sum of a Toeplitz and a Hankel matrix. Numerical examples are given to show the efficiency of the proposed method.

Key words: Image restoration; Kronecker product; Tikhonov regularization; convex optimization

1 Introduction

The problem of image restoration consists of the reconstruction of an original image that has been digitized and then degraded by a blur and an additive noise. Image restoration techniques apply an inverse procedure to obtain an estimate of the original image. The background literature on image restoration has become quite large; some treatments and overviews are found in [1, 2, 3]. The blur and the additive noise may arise from many sources such as thermal effects, atmospheric turbulence, recording errors and imperfections in the process of digitization. The blurring process is described mathematically by a point spread function (PSF), a function that specifies how pixels in the image are distorted. We assume that the degradation process is represented by the following linear model

g(i, j) = (f ∗ h)(i, j) + ν(i, j)

where the pair (i, j) is the discrete pixel coordinate and ∗ denotes the discrete convolution operator. Here f represents the true image, h is the PSF, ν is the additive noise and g is the degraded image.
More explicitly,

g(i, j) = Σ_{l,k} f(l, k) h(i − l, j − k) + ν(i, j)   (1)

Blind restoration refers to the image processing task of restoring the original image from a blurred version without knowledge of the point spread function. Hence, both the PSF and the restored image must be estimated directly from the observed noisy blurred image. The PSF is often assumed to be spatially invariant [2], which means that the blur is independent of the position of the points. The discrete model with spatially invariant PSF in the presence of an additive noise can also be written in matrix form as

g = H x + n   (2)

(a) Corresponding author: bouhamidi@lmpa.univ-littoral.fr

where x, g and n are n²-vectors representing the true image X, the degraded image G and the additive noise N of size n × n, respectively. The vectors x, g and n are obtained by stacking the columns of the matrices X, G and N, respectively. It is well known that the blurring n² × n² matrix H is in general very ill-conditioned: many of its singular values, of different orders of magnitude, are close to the origin [4]. Another difficulty is the size of H, which is extremely large. We note that if the PSF is separable, then the matrix H may be decomposed into a Kronecker product of matrices of smaller size. When the PSF is not separable, the matrix H can still be approximated by a Kronecker product [5, 6, 7].

2 Approximation of the blurring matrix

In practical restoration problems the PSF is unknown; in this case, the problem is known as blind image restoration, see for instance [8, 9, 10, 11]. We then need to estimate the point spread function characterizing the blur, namely the matrix P that contains the image of the point spread function. An estimate of the matrix P may be obtained by using the iterative deconvolution scheme introduced by Ayers and Dainty [8].
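The matrix form (2) and the Kronecker structure mentioned above rest on the identity vec(H₁ X H₂ᵀ) = (H₂ ⊗ H₁) vec(X). A quick numerical check of this identity (a NumPy sketch; the matrices here are random stand-ins, not real blur operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
H1 = rng.standard_normal((n, n))  # stand-in for the vertical blur factor
H2 = rng.standard_normal((n, n))  # stand-in for the horizontal blur factor
X = rng.standard_normal((n, n))   # stand-in for the true image

H = np.kron(H2, H1)               # full n^2 x n^2 blurring matrix H = H2 ⊗ H1
x = X.flatten(order="F")          # vec(X): stack the columns of X

lhs = H @ x                                # H x, a vector of length n^2
rhs = (H1 @ X @ H2.T).flatten(order="F")   # vec(H1 X H2^T)
assert np.allclose(lhs, rhs)
```

Working with the small factors H₁ and H₂ costs two n × n matrix products instead of one product with the huge n² × n² matrix H, which is the point of the Kronecker approximation below.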
The algorithm starts with a guess f_0 for the true image and a guess h_0 for the PSF, with k = 0. At step k, we pass to the Fourier domain by computing F̂_k = FFT(f_k) and Ĥ_k = FFT(h_k), and we compute

F̂_{k+1} = F̂_k + ΔF̂_k,   Ĥ_{k+1} = Ĥ_k + ΔĤ_k,

where the incremental Wiener filter update of the image spectrum is given entrywise by

ΔF̂_k = (Ĝ − F̂_k ⊙ Ĥ_k) ⊙ Ĥ_k* / (|Ĥ_k|² + α²)

and the update of the PSF spectrum by

ΔĤ_k = (Ĝ − F̂_{k+1} ⊙ Ĥ_k) ⊙ F̂_{k+1}* / (|F̂_{k+1}|² + α²)

Here Ĝ = FFT(g), the product ⊙ stands for the Hadamard (entrywise) product and M* stands for the conjugate of a matrix M. The constant parameter α² represents the noise-to-signal ratio and is determined as an approximation of the variance of the additive noise. Then, we compute the new approximation of the image, f_{k+1} = IFFT(F̂_{k+1}), and of the PSF, h_{k+1} = IFFT(Ĥ_{k+1}). In each iteration, the image constraints are imposed:

f_k(i, j) = f_k(i, j) if f_k(i, j) ∈ [0, 255],   f_k(i, j) = 0 if f_k(i, j) < 0,   f_k(i, j) = 255 if f_k(i, j) > 255,

together with the following blur constraints of nonnegativity and normalization of the PSF:

h_k(i, j) ≥ 0,   Σ_{i,j} h_k(i, j) = 1.

We increment the step k from k = 0 to k = k_max. At the end of the algorithm we obtain an approximation, denoted by P = h_{k_max}, of the image of the PSF.

We recall that the Kronecker product of a matrix A = (a_ij) of size n × p and a matrix B of size s × q is defined as the matrix A ⊗ B = (a_ij B) of size (ns) × (pq). The vec operator transforms a matrix A of size n × p into a vector a of size np × 1 by stacking the columns of A. For two matrices A and B in R^{n×p}, we define the inner product ⟨A, B⟩_F = trace(AᵀB). The well-known Frobenius norm, denoted here by ‖·‖_F, is then given by ‖A‖_F = √⟨A, A⟩_F. In the context of image restoration, when the point spread function (PSF) is separable the blurring matrix H given in (2) can be decomposed as a Kronecker product H = H₂ ⊗ H₁ of two smaller blurring matrices of appropriate sizes. In the non-separable case, one can approximate the matrix H by solving the Kronecker product approximation (KPA) problem [7]

min_{H₁,H₂} ‖H − H₂ ⊗ H₁‖_F   (3)

Kamm and Nagy [5, 6] introduced an efficient algorithm for computing a solution of the KPA problem in image restoration. Let us now give a brief description of the algorithm given in [6].
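The Ayers-Dainty iteration described earlier in this section can be sketched as follows. This is a minimal NumPy version assuming periodic boundary conditions; the function name and its default parameters are illustrative, not from the paper:

```python
import numpy as np

def ayers_dainty(g, f0, h0, alpha2=1e-3, kmax=20):
    """Sketch of the iterative blind deconvolution of Ayers and Dainty.
    g: observed blurred noisy image; f0, h0: initial guesses for the
    image and the PSF; alpha2: noise-to-signal ratio alpha^2."""
    G = np.fft.fft2(g)
    f, h = f0.astype(float), h0.astype(float)
    for _ in range(kmax):
        F, H = np.fft.fft2(f), np.fft.fft2(h)
        # incremental Wiener update of the image spectrum
        F = F + (G - F * H) * np.conj(H) / (np.abs(H) ** 2 + alpha2)
        f = np.clip(np.real(np.fft.ifft2(F)), 0.0, 255.0)  # image constraints
        F = np.fft.fft2(f)
        # incremental Wiener update of the PSF spectrum
        H = H + (G - F * H) * np.conj(F) / (np.abs(F) ** 2 + alpha2)
        h = np.maximum(np.real(np.fft.ifft2(H)), 0.0)      # h >= 0
        s = h.sum()
        if s > 0:
            h /= s                                         # normalization
    return f, h
```

The two entrywise Wiener updates, the clipping of the image to [0, 255], and the nonnegativity-plus-normalization of the PSF correspond to the constraints stated above.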
We assume that the size of the image is n × n. For a given vector a = (a_1, …, a_n)ᵀ ∈ R^n, the matrix toep(a, k) is a banded Toeplitz matrix of size n × n whose diagonals are constant and whose k-th column is a = (a_1, …, a_n)ᵀ; the other elements are zero. The matrix hank(a, k) is a Hankel matrix of size n × n whose anti-diagonals are constant and whose first row and last column are defined by the vectors (a_{k+1}, …, a_n, 0, …, 0) and (0, …, 0, a_1, …, a_{k−1})ᵀ, respectively. We assume that the center of the PSF (the location of the point source) is at p_{l,k}, where P = (p_ij) is the n × n matrix containing the image of the point spread function. The aim of the following algorithm is to compute vectors â and b̂ of length n such that the matrices Ĥ₁ = Â_t + Â_h and Ĥ₂ = B̂_t + B̂_h, where

Â_t = toep(â, l),   Â_h = hank(â, l),   B̂_t = toep(b̂, k),   B̂_h = hank(b̂, k),

solve the Kronecker product approximation (3). Let R_n be the Cholesky factor of the n × n symmetric Toeplitz matrix T_n = Toeplitz(v_n) with first row v_n = (n, 1, 0, 1, 0, …). The algorithm, given in [6], for constructing the matrices Â_t, Â_h, B̂_t and B̂_h is as follows.

ALGORITHM
1. Compute R_n,
2. Construct P_r = R_n P R_nᵀ,
3. Compute the SVD: P_r = Σ σ_i u_i v_iᵀ,
4. Construct the vectors: â = √σ₁ R_n⁻¹ v₁ and b̂ = √σ₁ R_n⁻¹ u₁,
5. Construct the matrices: Â_t = toep(â, l), Â_h = hank(â, l), B̂_t = toep(b̂, k), B̂_h = hank(b̂, k).

3 Convex Tikhonov minimization problem

In order to determine an approximation x̂ = vec(X̂) of the true image, we consider the following convex optimization problem

min_{x ∈ Ω} ‖Hx − g‖₂²   (4)

The set Ω ⊂ R^{n²} could be a simple convex set (e.g., a sphere or a box) or the intersection of some simple convex sets. Due to the ill-conditioning of the matrix H, we replace the original problem by a better conditioned one in order to diminish the effects of the noise in the data. One of the most popular regularization methods is due to Tikhonov.
The method replaces the problem (4) by the new one

min_{x ∈ Ω} ( ‖Hx − g‖₂² + λ² ‖Lx‖₂² )   (5)

where L is a regularization operator chosen to obtain a solution with desirable properties, such as small norm or good smoothness, and the parameter λ is a scalar to be determined. The most popular methods for determining such a parameter λ are the generalized cross-validation (GCV) method and the L-curve criterion, see [12, 13, 14, 15, 16, 17, 18].
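For reference, the unconstrained version of (5) is an ordinary damped least-squares problem. A minimal NumPy sketch (dense matrices only, small sizes; the constraint set Ω of the paper is ignored here):

```python
import numpy as np

def tikhonov(H, g, L, lam):
    """Solve min ||H x - g||^2 + lam^2 ||L x||^2 (no constraint set) by
    stacking the augmented system [H; lam*L] and calling a least-squares
    solver on it."""
    A = np.vstack([H, lam * L])
    b = np.concatenate([g, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

For λ = 0 and a well-conditioned H this reproduces the naive solution; for an ill-conditioned H, increasing λ trades data fit for a smaller ‖Lx‖, which is exactly the effect the regularization term is meant to have.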

Here, we assume that H = H₂ ⊗ H₁ and L = L₂ ⊗ L₁, where H₁, H₂, L₁ and L₂ are square matrices of dimension n × n. Using the relations vec(AXB) = (Bᵀ ⊗ A) vec(X) and (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), the problem (5) can be reformulated as

min_{X ∈ Ω} ( ‖H₁ X H₂ᵀ − G‖_F² + λ² ‖L₁ X L₂ᵀ‖_F² )   (6)

where the set Ω ⊂ R^{n×n} is such that x = vec(X) lies in the corresponding subset of R^{n²} if and only if X ∈ Ω. The problem (5) is thus replaced by a new one involving matrix equations of small dimensions. Now, we consider the function f_λ : R^{n×n} → R given by

f_λ(X) = ‖H₁ X H₂ᵀ − G‖_F² + λ² ‖L₁ X L₂ᵀ‖_F²

The convex constrained minimization problem (6) considered here is

Minimize f_λ(X) subject to X ∈ Ω.   (7)

The function f_λ : R^{n×n} → R is differentiable and, writing A(X) = H₁ X H₂ᵀ and L(X) = L₁ X L₂ᵀ, its gradient is given by the formula

∇f_λ(X) = 2 ( H₁ᵀ (A(X) − G) H₂ + λ² L₁ᵀ L(X) L₂ )
        = 2 ( H₁ᵀ (H₁ X H₂ᵀ − G) H₂ + λ² L₁ᵀ L₁ X L₂ᵀ L₂ )

The set Ω could be a simple convex set (e.g., a sphere or a box) or the intersection of some simple convex sets. Specific cases that will be considered are

Ω₁ = {X ∈ R^{n×p} : L ≤ X ≤ U}   (8)
Ω₂ = {X ∈ R^{n×p} : ‖X‖_F ≤ δ}   (9)

Here, Y ≤ Z means Y_ij ≤ Z_ij for all entries (i, j), L and U are given matrices, and δ > 0 is a given scalar. Another option to be considered is Ω = Ω₁ ∩ Ω₂. In this section, we describe the conditional gradient method for solving the convex constrained optimization problem (7). This method is well known and was one of the first successful algorithms used to solve nonlinear optimization problems. It is also called the Frank-Wolfe method. The algorithm can be summarized as follows.

Algorithm 1. The Conditional Gradient Algorithm
1. Choose a tolerance tol and an initial guess X₀ ∈ Ω, set k = 0.
2. Solve the minimization problem of a linear function over the set Ω:

   min_{X ∈ Ω} ⟨∇f_λ(X_k), X⟩_F.   (∗)

   Let X̄_k be a solution to problem (∗).
3. Compute the value η_k = ⟨∇f_λ(X_k), X_k − X̄_k⟩_F.
4. If η_k < tol, stop; else continue.
5. Solve the one-dimensional minimization problem

   min_{α ∈ [0,1]} f_λ(X_k + α(X̄_k − X_k)).   (∗∗)
   Let α_k* be a solution to problem (∗∗).
6. Update X_{k+1} = X_k + α_k* (X̄_k − X_k), set k = k + 1 and go to Step 2.

If the convex set Ω is the set Ω₁ given by (8), then a solution of the problem (∗) in Step 2 of Algorithm 1 is given by

[X̄_k]_ij = L_ij if [∇f_λ(X_k)]_ij ≥ 0,   [X̄_k]_ij = U_ij if [∇f_λ(X_k)]_ij < 0,   (10)

where M_ij denotes the (i, j) entry of a matrix M. Indeed, from (10) we have [∇f_λ(X_k)]_ij [X̄_k]_ij ≤ [∇f_λ(X_k)]_ij X_ij for all X ∈ Ω₁. Then, for X̄_k given by (10), we have

⟨∇f_λ(X_k), X̄_k⟩_F ≤ ⟨∇f_λ(X_k), X⟩_F,   ∀ X ∈ Ω₁.

If Ω is chosen to be Ω₂ given by (9), then a solution of the problem (∗) in Step 2 of Algorithm 1 is given by

X̄_k = −δ ∇f_λ(X_k) / ‖∇f_λ(X_k)‖_F   (11)

Indeed, for all X ∈ Ω₂, we have

⟨∇f_λ(X_k), X⟩_F ≥ −‖∇f_λ(X_k)‖_F ‖X‖_F ≥ −δ ‖∇f_λ(X_k)‖_F = ⟨∇f_λ(X_k), X̄_k⟩_F.

It follows that, for all X ∈ Ω₂, we have ⟨∇f_λ(X_k), X̄_k⟩_F ≤ ⟨∇f_λ(X_k), X⟩_F, where X̄_k is given by (11). Now, let H_k = X̄_k − X_k. It is then easy to obtain

f_λ(X_k + α H_k) = a_k α² + b_k α + c_k

where

a_k = ‖A(H_k)‖_F² + λ² ‖L(H_k)‖_F²
b_k = ⟨∇f_λ(X_k), H_k⟩_F
c_k = ‖A(X_k) − G‖_F² + λ² ‖L(X_k)‖_F²
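The pieces above assemble into the following sketch of one full conditional gradient run for the box constraint Ω₁. The objective f and its gradient are passed as callables; the exact quadratic step is replaced here by a generic sampled line search over [0, 1], so the sketch does not assume the particular structure of f_λ:

```python
import numpy as np

def cond_grad_box(f, grad_f, X0, Lb, Ub, tol=1e-8, kmax=100):
    """Conditional gradient (Frank-Wolfe) sketch for min f(X) over the
    box Lb <= X <= Ub. The linear subproblem is solved entrywise as in
    (10); eta matches the stopping value of Steps 3-4 of Algorithm 1."""
    X = X0.copy()
    for _ in range(kmax):
        G = grad_f(X)
        Xbar = np.where(G >= 0, Lb, Ub)   # box vertex minimizing <G, X>_F
        eta = np.sum(G * (X - Xbar))      # duality gap eta_k
        if eta < tol:
            break
        D = Xbar - X
        # stand-in for the exact step: sample alpha on a grid in [0, 1]
        alphas = np.linspace(0.0, 1.0, 101)
        best = min(alphas, key=lambda a: f(X + a * D))
        X = X + best * D
    return X
```

For the quadratic f_λ of the paper, the sampled search would be replaced by the closed-form step of the next paragraph.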

Then, it follows that the minimum of the one-dimensional quadratic problem min_α f_λ(X_k + α H_k) is given analytically by

α̃_k = −b_k / (2 a_k) = −⟨∇f_λ(X_k), H_k⟩_F / ( 2 ( ‖A(H_k)‖_F² + λ² ‖L(H_k)‖_F² ) )   (12)

which may be written in the following form

α̃_k = −( ⟨A(X_k) − G, A(H_k)⟩_F + λ² ⟨L(X_k), L(H_k)⟩_F ) / ( ‖A(H_k)‖_F² + λ² ‖L(H_k)‖_F² )   (13)

Then, the solution of the problem (∗∗) in Step 5 of Algorithm 1 is given by

α_k* = α̃_k if 0 ≤ α̃_k ≤ 1,   α_k* = 1 if α̃_k > 1,   α_k* = 0 if α̃_k < 0.

The following algorithm combines the conditional gradient method with the Tikhonov regularization. The convex set Ω is the one given by (8) or (9).

Algorithm 2. The Conditional Gradient-Tikhonov Algorithm
1. Choose a tolerance tol and an initial guess X₀ ∈ Ω, set k = 0.
2. Determine λ by the L-curve method.
3. While k < k_max
   3.1- Compute the matrix X̄_k by using the relation (10),
   3.2- Compute the value η_k = ⟨∇f_λ(X_k), X_k − X̄_k⟩_F,
   3.3- If η_k < tol, stop; else continue,
   3.4- Compute α̃_k by using (12) or (13),
   3.5- If α̃_k > 1 then α_k* = 1, ElseIf α̃_k < 0 then α_k* = 0, Else α_k* = α̃_k, EndIf
   3.6- Update X_{k+1} = X_k + α_k* (X̄_k − X_k),
   3.7- Set k = k + 1,
4. EndWhile.

4 Numerical examples

In this section we give a numerical example to illustrate the proposed method. The original fruit image was degraded by a speckle multiplicative noise with different values of the variance σ_m plus an additive white Gaussian noise with zero mean and different values of the variance σ_a. Figure 1 shows the original image. The degraded image, corrupted with a multiplicative noise of variance σ_m = 0.01 plus an additive white Gaussian noise of variance σ_a = 0.02, is presented in Figure 2. In order to define local smoothing constraints, we determine the bound matrices L_b and U_b from the parameters that describe the local properties of an image.
For the degraded image G, the local mean matrix Ḡ and the local variance σ_G² are measured over a 3 × 3 window and are given by

Ḡ(i, j) = (1/9) Σ_{l=i−1}^{i+1} Σ_{k=j−1}^{j+1} G(l, k)

σ_G²(i, j) = (1/9) Σ_{l=i−1}^{i+1} Σ_{k=j−1}^{j+1} [G(l, k) − Ḡ(l, k)]²

The maximum local variance over the entire image G, denoted by σ²_max, is given by

σ²_max = max_{i,j} σ_G²(i, j)

Let β > 0 be a positive constant. The matrices L_b and U_b defining the domain Ω₁ are given by

L_b(i, j) = max( Ḡ(i, j) − β σ_G²(i, j) / σ²_max, 0 )
U_b(i, j) = Ḡ(i, j) + β σ_G²(i, j) / σ²_max

The constant β controls the tightness of the bounds. In the following numerical tests, the domain was chosen with β = 50. Here we mainly compare the visual quality of the restored images and the values of the PSNR. We recall that the PSNR is the peak signal-to-noise ratio; it measures the distortion between the original image I₀ and the degraded image I = G or the restored image I = X̂, and is defined by

PSNR(I) = 10 log₁₀( d² / ( (1/(nm)) ‖I₀ − I‖_F² ) )

where d = 255 in the case of gray-scale images and n × m is the size of the images; in our case we have n = m = 500. We recall that

‖I₀ − I‖_F² = Σ_{i=1}^{n} Σ_{j=1}^{m} [I₀(i, j) − I(i, j)]²

The value PSNR0 = PSNR(G) for the degraded image was 16.94 dB. The restored image is presented in Figure 3 and the value PSNR1 = PSNR(X̂) was improved to 26.42 dB.

Table 1: PSNR for different values of the variance of the multiplicative and the additive noises
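The PSNR formula above translates directly into a small helper (d = 255 for 8-bit gray images; a sketch, not the authors' code):

```python
import numpy as np

def psnr(I0, I, d=255.0):
    """Peak signal-to-noise ratio, in dB, between the original image I0
    and a degraded or restored image I of the same size n x m."""
    mse = np.mean((np.asarray(I0, float) - np.asarray(I, float)) ** 2)
    return 10.0 * np.log10(d ** 2 / mse)
```

A larger PSNR means less distortion: the mean squared error sits in the denominator, so halving the error adds about 3 dB.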

σ_m   σ_a   PSNR0   PSNR1

Fig. 1: Original image
Fig. 2: Degraded image
Fig. 3: Restored image

References

1. H. Andrews, B. Hunt, Digital image restoration, Prentice-Hall, Englewood Cliffs, NJ, (1977).
2. A.K. Jain, Fundamentals of digital image processing, Prentice-Hall, Englewood Cliffs, NJ, (1989).
3. R.L. Lagendijk, J. Biemond, Iterative identification and restoration of images, Kluwer Academic Publishers, Norwell, MA, (1991).
4. H.W. Engl, M. Hanke, A. Neubauer, Regularization of inverse problems, Kluwer, Dordrecht, The Netherlands, (1996).
5. J. Kamm, J.G. Nagy, Kronecker product and SVD approximations in image restoration, Linear Algebra and its Applications, 284, (1998).
6. J. Kamm, J.G. Nagy, Kronecker product approximations for image restoration with reflexive boundary conditions, SIAM J. Matrix Anal. Appl., 25(3), (2004).
7. C.F. Van Loan, N.P. Pitsianis, Approximation with Kronecker products, in: M.S. Moonen, G.H. Golub (Eds.), Linear Algebra for Large Scale and Real Time Applications, Kluwer Academic Publishers, Dordrecht, (1993).
8. G.R. Ayers, J.C. Dainty, Iterative blind deconvolution method and its applications, Optics Letters, 13(7), (1988).
9. L.B. Lucy, An iterative technique for the rectification of observed distributions, Astronomical Journal, 79, (1974).

10. W.H. Richardson, Bayesian-based iterative method of image restoration, J. Optic. Soc. Amer. A, 62, 55-59, (1972).
11. A. Pruessner, D.P. O'Leary, Blind deconvolution using a regularized structured total least norm algorithm, SIAM J. Matrix Anal. Appl., 24(4), (2003).
12. G.H. Golub, U. von Matt, Tikhonov regularization for large scale problems, in: G.H. Golub, S.H. Lui, F. Luk, R. Plemmons (Eds.), Workshop on Scientific Computing, Springer, New York, 3-26, (1997).
13. A. Bouhamidi, K. Jbilou, Sylvester Tikhonov-regularization methods in image restoration, J. Comput. Appl. Math., 206(1), 86-98, (2007).
14. M. Hanke, P.C. Hansen, Regularization methods for large-scale problems, Surveys Math. Indust., 3, (1993).
15. P.C. Hansen, Analysis of discrete ill-posed problems by means of the L-curve, SIAM Rev., 34, (1992).
16. D. Calvetti, G.H. Golub, L. Reichel, Estimation of the L-curve via Lanczos bidiagonalization, BIT, 39, (1999).
17. D. Calvetti, B. Lewis, L. Reichel, GMRES, L-curves, and discrete ill-posed problems, BIT, 42, 44-65, (2002).
18. G.H. Golub, M. Heath, G. Wahba, Generalized cross-validation as a method for choosing a good ridge parameter, Technometrics, 21, (1979).


More information

Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank

Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank Taewon Cho Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial

More information

RESCALING THE GSVD WITH APPLICATION TO ILL-POSED PROBLEMS

RESCALING THE GSVD WITH APPLICATION TO ILL-POSED PROBLEMS RESCALING THE GSVD WITH APPLICATION TO ILL-POSED PROBLEMS L. DYKES, S. NOSCHESE, AND L. REICHEL Abstract. The generalized singular value decomposition (GSVD) of a pair of matrices expresses each matrix

More information

Two-parameter generalized Hermitian and skew-hermitian splitting iteration method

Two-parameter generalized Hermitian and skew-hermitian splitting iteration method To appear in the International Journal of Computer Mathematics Vol. 00, No. 00, Month 0XX, 1 Two-parameter generalized Hermitian and skew-hermitian splitting iteration method N. Aghazadeh a, D. Khojasteh

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 28, pp. 49-67, 28. Copyright 28,. ISSN 68-963. A WEIGHTED-GCV METHOD FOR LANCZOS-HYBRID REGULARIZATION JULIANNE CHUNG, JAMES G. NAGY, AND DIANNE P.

More information

c 1999 Society for Industrial and Applied Mathematics

c 1999 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 21, No. 1, pp. 185 194 c 1999 Society for Industrial and Applied Mathematics TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES GENE H. GOLUB, PER CHRISTIAN HANSEN, AND DIANNE

More information

arxiv: v1 [math.na] 15 Jun 2009

arxiv: v1 [math.na] 15 Jun 2009 Noname manuscript No. (will be inserted by the editor) Fast transforms for high order boundary conditions Marco Donatelli arxiv:0906.2704v1 [math.na] 15 Jun 2009 the date of receipt and acceptance should

More information

Tikhonov Regularization for Weighted Total Least Squares Problems

Tikhonov Regularization for Weighted Total Least Squares Problems Tikhonov Regularization for Weighted Total Least Squares Problems Yimin Wei Naimin Zhang Michael K. Ng Wei Xu Abstract In this paper, we study and analyze the regularized weighted total least squares (RWTLS)

More information

UPRE Method for Total Variation Parameter Selection

UPRE Method for Total Variation Parameter Selection UPRE Method for Total Variation Parameter Selection Youzuo Lin School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287 USA. Brendt Wohlberg 1, T-5, Los Alamos National

More information

INVERSE SUBSPACE PROBLEMS WITH APPLICATIONS

INVERSE SUBSPACE PROBLEMS WITH APPLICATIONS INVERSE SUBSPACE PROBLEMS WITH APPLICATIONS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Given a square matrix A, the inverse subspace problem is concerned with determining a closest matrix to A with a

More information

AMS classification scheme numbers: 65F10, 65F15, 65Y20

AMS classification scheme numbers: 65F10, 65F15, 65Y20 Improved image deblurring with anti-reflective boundary conditions and re-blurring (This is a preprint of an article published in Inverse Problems, 22 (06) pp. 35-53.) M. Donatelli, C. Estatico, A. Martinelli,

More information

Deconvolution. Parameter Estimation in Linear Inverse Problems

Deconvolution. Parameter Estimation in Linear Inverse Problems Image Parameter Estimation in Linear Inverse Problems Chair for Computer Aided Medical Procedures & Augmented Reality Department of Computer Science, TUM November 10, 2006 Contents A naive approach......with

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

ON THE REGULARIZING PROPERTIES OF THE GMRES METHOD

ON THE REGULARIZING PROPERTIES OF THE GMRES METHOD ON THE REGULARIZING PROPERTIES OF THE GMRES METHOD D. CALVETTI, B. LEWIS, AND L. REICHEL Abstract. The GMRES method is a popular iterative method for the solution of large linear systems of equations with

More information

Statistically-Based Regularization Parameter Estimation for Large Scale Problems

Statistically-Based Regularization Parameter Estimation for Large Scale Problems Statistically-Based Regularization Parameter Estimation for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova March 1, 2010 National Science Foundation: Division of Computational

More information

Dedicated to Adhemar Bultheel on the occasion of his 60th birthday.

Dedicated to Adhemar Bultheel on the occasion of his 60th birthday. SUBSPACE-RESTRICTED SINGULAR VALUE DECOMPOSITIONS FOR LINEAR DISCRETE ILL-POSED PROBLEMS MICHIEL E. HOCHSTENBACH AND LOTHAR REICHEL Dedicated to Adhemar Bultheel on the occasion of his 60th birthday. Abstract.

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Two-parameter regularization method for determining the heat source

Two-parameter regularization method for determining the heat source Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 8 (017), pp. 3937-3950 Research India Publications http://www.ripublication.com Two-parameter regularization method for

More information

An Interior-Point Trust-Region-Based Method for Large-Scale Nonnegative Regularization

An Interior-Point Trust-Region-Based Method for Large-Scale Nonnegative Regularization An Interior-Point Trust-Region-Based Method for Large-Scale Nonnegative Regularization Marielba Rojas Trond Steihaug July 6, 2001 (Revised December 19, 2001) CERFACS Technical Report TR/PA/01/11 Abstract

More information

An interior-point method for large constrained discrete ill-posed problems

An interior-point method for large constrained discrete ill-posed problems An interior-point method for large constrained discrete ill-posed problems S. Morigi a, L. Reichel b,,1, F. Sgallari c,2 a Dipartimento di Matematica, Università degli Studi di Bologna, Piazza Porta S.

More information

Blind Image Deconvolution Using The Sylvester Matrix

Blind Image Deconvolution Using The Sylvester Matrix Blind Image Deconvolution Using The Sylvester Matrix by Nora Abdulla Alkhaldi A thesis submitted to the Department of Computer Science in conformity with the requirements for the degree of PhD Sheffield

More information

DEBLURRING AND SPARSE UNMIXING OF HYPERSPECTRAL IMAGES USING MULTIPLE POINT SPREAD FUNCTIONS G = XM + N,

DEBLURRING AND SPARSE UNMIXING OF HYPERSPECTRAL IMAGES USING MULTIPLE POINT SPREAD FUNCTIONS G = XM + N, DEBLURRING AND SPARSE UNMIXING OF HYPERSPECTRAL IMAGES USING MULTIPLE POINT SPREAD FUNCTIONS SEBASTIAN BERISHA, JAMES G NAGY, AND ROBERT J PLEMMONS Abstract This paper is concerned with deblurring and

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Signal Identification Using a Least L 1 Norm Algorithm

Signal Identification Using a Least L 1 Norm Algorithm Optimization and Engineering, 1, 51 65, 2000 c 2000 Kluwer Academic Publishers. Manufactured in The Netherlands. Signal Identification Using a Least L 1 Norm Algorithm J. BEN ROSEN Department of Computer

More information

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary

More information

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation Rosemary Renaut DEPARTMENT OF MATHEMATICS AND STATISTICS Prague 2008 MATHEMATICS AND STATISTICS

More information

Cosine transform preconditioners for high resolution image reconstruction

Cosine transform preconditioners for high resolution image reconstruction Linear Algebra and its Applications 36 (000) 89 04 www.elsevier.com/locate/laa Cosine transform preconditioners for high resolution image reconstruction Michael K. Ng a,,, Raymond H. Chan b,,tonyf.chan

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Noisy Word Recognition Using Denoising and Moment Matrix Discriminants

Noisy Word Recognition Using Denoising and Moment Matrix Discriminants Noisy Word Recognition Using Denoising and Moment Matrix Discriminants Mila Nikolova Département TSI ENST, rue Barrault, 753 Paris Cedex 13, France, nikolova@tsi.enst.fr Alfred Hero Dept. of EECS, Univ.

More information

Key words. Boundary conditions, fast transforms, matrix algebras and Toeplitz matrices, Tikhonov regularization, regularizing iterative methods.

Key words. Boundary conditions, fast transforms, matrix algebras and Toeplitz matrices, Tikhonov regularization, regularizing iterative methods. REGULARIZATION OF IMAGE RESTORATION PROBLEMS WITH ANTI-REFLECTIVE BOUNDARY CONDITIONS MARCO DONATELLI, CLAUDIO ESTATICO, AND STEFANO SERRA-CAPIZZANO Abstract. Anti-reflective boundary conditions have been

More information

Satellite image deconvolution using complex wavelet packets

Satellite image deconvolution using complex wavelet packets Satellite image deconvolution using complex wavelet packets André Jalobeanu, Laure Blanc-Féraud, Josiane Zerubia ARIANA research group INRIA Sophia Antipolis, France CNRS / INRIA / UNSA www.inria.fr/ariana

More information

Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning

Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning Adv Comput Math DOI 10.1007/s10444-013-9330-3 Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning Mohammad Khorsand Zak Faezeh Toutounian Received:

More information

Total least squares. Gérard MEURANT. October, 2008

Total least squares. Gérard MEURANT. October, 2008 Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares

More information

NUMERICAL OPTIMIZATION METHODS FOR BLIND DECONVOLUTION

NUMERICAL OPTIMIZATION METHODS FOR BLIND DECONVOLUTION NUMERICAL OPTIMIZATION METHODS FOR BLIND DECONVOLUTION ANASTASIA CORNELIO, ELENA LOLI PICCOLOMINI, AND JAMES G. NAGY Abstract. This paper describes a nonlinear least squares framework to solve a separable

More information

SIGNAL AND IMAGE RESTORATION: SOLVING

SIGNAL AND IMAGE RESTORATION: SOLVING 1 / 55 SIGNAL AND IMAGE RESTORATION: SOLVING ILL-POSED INVERSE PROBLEMS - ESTIMATING PARAMETERS Rosemary Renaut http://math.asu.edu/ rosie CORNELL MAY 10, 2013 2 / 55 Outline Background Parameter Estimation

More information

Inverse problem and optimization

Inverse problem and optimization Inverse problem and optimization Laurent Condat, Nelly Pustelnik CNRS, Gipsa-lab CNRS, Laboratoire de Physique de l ENS de Lyon Decembre, 15th 2016 Inverse problem and optimization 2/36 Plan 1. Examples

More information

A Limited Memory, Quasi-Newton Preconditioner. for Nonnegatively Constrained Image. Reconstruction

A Limited Memory, Quasi-Newton Preconditioner. for Nonnegatively Constrained Image. Reconstruction A Limited Memory, Quasi-Newton Preconditioner for Nonnegatively Constrained Image Reconstruction Johnathan M. Bardsley Department of Mathematical Sciences, The University of Montana, Missoula, MT 59812-864

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Data Preprocessing Tasks

Data Preprocessing Tasks Data Tasks 1 2 3 Data Reduction 4 We re here. 1 Dimensionality Reduction Dimensionality reduction is a commonly used approach for generating fewer features. Typically used because too many features can

More information

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Rosemary Renaut, Jodi Mead Arizona State and Boise State September 2007 Renaut and Mead (ASU/Boise) Scalar

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

AIR FORCE RESEARCH LABORATORY Directed Energy Directorate 3550 Aberdeen Ave SE AIR FORCE MATERIEL COMMAND KIRTLAND AIR FORCE BASE, NM

AIR FORCE RESEARCH LABORATORY Directed Energy Directorate 3550 Aberdeen Ave SE AIR FORCE MATERIEL COMMAND KIRTLAND AIR FORCE BASE, NM AFRL-DE-PS-JA-2007-1004 AFRL-DE-PS-JA-2007-1004 Noise Reduction in support-constrained multi-frame blind-deconvolution restorations as a function of the number of data frames and the support constraint

More information

Using Hankel structured low-rank approximation for sparse signal recovery

Using Hankel structured low-rank approximation for sparse signal recovery Using Hankel structured low-rank approximation for sparse signal recovery Ivan Markovsky 1 and Pier Luigi Dragotti 2 Department ELEC Vrije Universiteit Brussel (VUB) Pleinlaan 2, Building K, B-1050 Brussels,

More information

PROJECTED TIKHONOV REGULARIZATION OF LARGE-SCALE DISCRETE ILL-POSED PROBLEMS

PROJECTED TIKHONOV REGULARIZATION OF LARGE-SCALE DISCRETE ILL-POSED PROBLEMS PROJECED IKHONOV REGULARIZAION OF LARGE-SCALE DISCREE ILL-POSED PROBLEMS DAVID R. MARIN AND LOHAR REICHEL Abstract. he solution of linear discrete ill-posed problems is very sensitive to perturbations

More information

1 Non-negative Matrix Factorization (NMF)

1 Non-negative Matrix Factorization (NMF) 2018-06-21 1 Non-negative Matrix Factorization NMF) In the last lecture, we considered low rank approximations to data matrices. We started with the optimal rank k approximation to A R m n via the SVD,

More information

Integer Least Squares: Sphere Decoding and the LLL Algorithm

Integer Least Squares: Sphere Decoding and the LLL Algorithm Integer Least Squares: Sphere Decoding and the LLL Algorithm Sanzheng Qiao Department of Computing and Software McMaster University 28 Main St. West Hamilton Ontario L8S 4L7 Canada. ABSTRACT This paper

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information

COMPLEX CONSTRAINED CRB AND ITS APPLICATION TO SEMI-BLIND MIMO AND OFDM CHANNEL ESTIMATION. Aditya K. Jagannatham and Bhaskar D.

COMPLEX CONSTRAINED CRB AND ITS APPLICATION TO SEMI-BLIND MIMO AND OFDM CHANNEL ESTIMATION. Aditya K. Jagannatham and Bhaskar D. COMPLEX CONSTRAINED CRB AND ITS APPLICATION TO SEMI-BLIND MIMO AND OFDM CHANNEL ESTIMATION Aditya K Jagannatham and Bhaskar D Rao University of California, SanDiego 9500 Gilman Drive, La Jolla, CA 92093-0407

More information

Advanced Numerical Linear Algebra: Inverse Problems

Advanced Numerical Linear Algebra: Inverse Problems Advanced Numerical Linear Algebra: Inverse Problems Rosemary Renaut Spring 23 Some Background on Inverse Problems Constructing PSF Matrices The DFT Rosemary Renaut February 4, 23 References Deblurring

More information

Non-Negative Matrix Factorization with Quasi-Newton Optimization

Non-Negative Matrix Factorization with Quasi-Newton Optimization Non-Negative Matrix Factorization with Quasi-Newton Optimization Rafal ZDUNEK, Andrzej CICHOCKI Laboratory for Advanced Brain Signal Processing BSI, RIKEN, Wako-shi, JAPAN Abstract. Non-negative matrix

More information

Iterative Methods for Smooth Objective Functions

Iterative Methods for Smooth Objective Functions Optimization Iterative Methods for Smooth Objective Functions Quadratic Objective Functions Stationary Iterative Methods (first/second order) Steepest Descent Method Landweber/Projected Landweber Methods

More information