Preconditioning Noisy, Ill-Conditioned Linear Systems


James G. Nagy, Emory University, Atlanta, GA

Outline:
1. The Basic Problem
2. Regularization / Iterative Methods
3. Preconditioning
4. Example: Image Restoration
5. Summary

Basic Problem

Linear system of equations

$$b = Ax + e,$$

where A and b are known, e is (unknown) noise, and A is large, structured, and ill-conditioned. Goal: compute an approximation of x.

Applications: ill-posed inverse problems, such as geomagnetic prospecting, tomography, and image restoration. In image restoration:

- b = observed image
- A = blurring matrix (structured)
- e = noise
- x = true image

Basic Problem: Properties

Computational difficulties are revealed through the SVD. Let $A = U \Sigma V^T$, where $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_N)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_N \ge 0$, and $U^T U = I$, $V^T V = I$.

For ill-posed inverse problems:
- $\sigma_1 \approx 1$, and the small singular values cluster at 0
- small singular values correspond to oscillating singular vectors

Basic Problem: Properties

Inverse solution for noisy, ill-posed problems. If $A = U \Sigma V^T$, then the exact solution is

$$x = A^{-1}b = V \Sigma^{-1} U^T b = \sum_{i=1}^{n} \frac{u_i^T b}{\sigma_i}\, v_i,$$

but from noisy data we compute

$$\hat{x} = A^{-1}(b+e) = V \Sigma^{-1} U^T (b+e) = \sum_{i=1}^{n} \frac{u_i^T (b+e)}{\sigma_i}\, v_i = \sum_{i=1}^{n} \frac{u_i^T b}{\sigma_i}\, v_i + \sum_{i=1}^{n} \frac{u_i^T e}{\sigma_i}\, v_i = x + \text{error}.$$

Division by the small $\sigma_i$ amplifies the noise components $u_i^T e$, so the error term dominates.
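A small numerical illustration of this noise amplification (a minimal sketch, not from the talk; the Hilbert matrix is a stand-in for a severely ill-conditioned A):

```python
import numpy as np
from scipy.linalg import hilbert

# Illustrative setup: the 12x12 Hilbert matrix is severely ill-conditioned.
n = 12
A = hilbert(n)
x_true = np.ones(n)
b = A @ x_true
e = 1e-8 * np.random.default_rng(0).standard_normal(n)  # tiny noise

U, s, Vt = np.linalg.svd(A)
# Naive inverse solution: sum_i (u_i^T (b+e) / sigma_i) v_i
x_hat = Vt.T @ ((U.T @ (b + e)) / s)

# The noise terms u_i^T e / sigma_i blow up for tiny sigma_i.
print("error:", np.linalg.norm(x_hat - x_true))
```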

Regularization

Basic idea: filter out the effects of the small singular values:

$$x_{\mathrm{reg}} = \sum_{i=1}^{n} \phi_i\, \frac{u_i^T b}{\sigma_i}\, v_i,$$

where the filter factors satisfy

$$\phi_i \approx \begin{cases} 1 & \text{if } \sigma_i \text{ is large} \\ 0 & \text{if } \sigma_i \text{ is small.} \end{cases}$$

Regularization

Some regularization methods:

1. Truncated SVD: $x_{\mathrm{tsvd}} = \sum_{i=1}^{k} \frac{u_i^T b}{\sigma_i}\, v_i$

2. Tikhonov: $x_{\mathrm{tik}} = \sum_{i=1}^{n} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} \cdot \frac{u_i^T b}{\sigma_i}\, v_i$

3. Wiener: $x_{\mathrm{wien}} = \sum_{i=1}^{n} \frac{\delta_i \sigma_i^2}{\delta_i \sigma_i^2 + \gamma_i^2} \cdot \frac{u_i^T b}{\sigma_i}\, v_i$
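A sketch of how these filters might be applied given a computed SVD (function names are illustrative, not from the talk):

```python
import numpy as np

def filtered_solution(U, s, Vt, b, phi):
    """x_reg = sum_i phi_i * (u_i^T b / sigma_i) * v_i."""
    return Vt.T @ (phi * (U.T @ b) / s)

def tsvd_filters(s, k):
    """Truncated SVD: phi_i = 1 for the k largest singular values, else 0."""
    phi = np.zeros_like(s)
    phi[:k] = 1.0
    return phi

def tikhonov_filters(s, alpha):
    """Tikhonov: phi_i = sigma_i^2 / (sigma_i^2 + alpha^2)."""
    return s**2 / (s**2 + alpha**2)
```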

Iterative Regularization

Basic idea: use an iterative method (e.g., conjugate gradients) and terminate the iteration before theoretical convergence:
- early iterations reconstruct the solution
- later iterations reconstruct the noise

Some important methods:
- CGLS, LSQR, GMRES
- MR2 (Hanke, '95)
- MRNSD (Kaufman, '93; N., Strakoš)

Iterative Regularization

These methods are efficient for large problems, provided:

1. Multiplication with A is not expensive. For image restoration, use FFTs.
2. Convergence is rapid enough. CGLS, LSQR, GMRES, and MR2 are often fast, especially for severely ill-posed, noisy problems; MRNSD is based on steepest descent and typically converges very slowly.
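A minimal CGLS sketch (the standard algorithm, written here for a dense NumPy matrix; the talk's large-scale setting would use an implicit operator instead). The iteration count plays the role of the regularization parameter:

```python
import numpy as np

def cgls(A, b, n_iters):
    """CG applied to the normal equations A^T A x = A^T b, without
    forming A^T A. Stopping after few iterations acts as regularization."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()   # residual b - A x (x starts at 0)
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```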

Preconditioning

Purposes of preconditioning:

1. Accelerate convergence. Apply the iterative method to $P^{-1}Ax = P^{-1}b$. In this case we minimize $\|x\|_2$.

2. Enforce a regularization constraint on the solution (Hanke, '92; Hansen, '98). Apply the iterative method to $AL^{-1}\,Lx = b$. In this case we minimize $\|Lx\|_2$.

Preconditioning for Regularization

Basic idea: find a matrix L that enforces a smoothness constraint, $\min \|Lx\|_2$. Typically L approximates a derivative operator.
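For instance, a common choice of L (a sketch for 1-D signals; the 2-D image case is analogous) is a discrete first-derivative operator:

```python
import numpy as np

def first_derivative_operator(n):
    """(n-1) x n forward-difference matrix; minimizing ||L x||_2
    penalizes oscillatory (non-smooth) solutions."""
    L = np.zeros((n - 1, n))
    i = np.arange(n - 1)
    L[i, i] = -1.0
    L[i, i + 1] = 1.0
    return L
```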

Preconditioning for Speed

Typical approach for Ax = b: find a matrix P so that $P^{-1}A \approx I$. The ideal choice is P = A, in which case the iteration converges in one step to $x = A^{-1}b$.

For ill-conditioned, noisy problems, however, the inverse solution is corrupted with noise, so P = A reconstructs exactly what regularization tries to suppress. The ideal regularized preconditioner (Hanke, N., Plemmons, '93): if $A = U \Sigma V^T$, take

$$P = U \Sigma_k V^T = U\, \mathrm{diag}(\sigma_1, \ldots, \sigma_k, 1, \ldots, 1)\, V^T.$$

Preconditioning for Speed

Notice that the preconditioned system is

$$P^{-1}A = (U \Sigma_k V^T)^{-1}(U \Sigma V^T) = V \Sigma_k^{-1} \Sigma V^T = V \widehat{\Sigma} V^T, \qquad \widehat{\Sigma} = \mathrm{diag}(1, \ldots, 1, \sigma_{k+1}, \ldots, \sigma_n).$$

That is, the large (good) singular values are clustered at 1, while the small (bad) singular values are not clustered.

Preconditioning for Speed

Remaining questions:

1. How do we choose the truncation index k? Use regularization parameter choice methods, e.g., GCV, the L-curve, or the Picard condition.
2. We can't compute the SVD of a large matrix, so now what? Use an SVD approximation.
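Given a (possibly approximate) SVD, the regularized preconditioner can be applied without ever forming P explicitly; a minimal sketch:

```python
import numpy as np

def apply_regularized_preconditioner(U, s, Vt, k, y):
    """Apply P^{-1} y, where P = U diag(s_1,...,s_k, 1,...,1) V^T:
    the k largest singular values of P^{-1} A cluster at 1, while the
    small (noise-dominated) ones are left untouched."""
    s_k = s.copy()
    s_k[k:] = 1.0
    return Vt.T @ ((U.T @ y) / s_k)
```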

Preconditioning for Speed

An SVD approximation (Van Loan and Pitsianis, '93). Decompose A as

$$A = C_1 \otimes D_1 + C_2 \otimes D_2 + \cdots + C_k \otimes D_k,$$

where $C_1 \otimes D_1 = \arg\min_{C, D} \|A - C \otimes D\|_F$. Then:

- Choose a structured (or sparse) U and V.
- Let $\Sigma = \arg\min \|A - U \Sigma V^T\|_F$ over diagonal $\Sigma$; that is,

$$\Sigma = \mathrm{diag}\left(U^T A V\right) = \mathrm{diag}\left(U^T \left(\sum_{i=1}^{k} C_i \otimes D_i\right) V\right).$$

Choices for U and V depend on the problem (application). Since $C_1 \otimes D_1$ is the closest single Kronecker product to A, we might use the singular vectors of $C_1 \otimes D_1$. For image restoration, we also use Fourier transforms (FFTs) and discrete cosine transforms (DCTs).
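The nearest Kronecker product $C_1 \otimes D_1$ comes from the dominant singular triplet of a rearrangement of A (the Van Loan-Pitsianis construction); a sketch for a dense A, with block sizes passed in explicitly:

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    """Best C (m1 x n1) and D (m2 x n2) minimizing ||A - kron(C, D)||_F,
    for A of size (m1*m2) x (n1*n2): take the leading singular triplet of
    the rearrangement R(A) whose rows are the vectorized blocks of A."""
    R = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    C = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    D = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2)
    return C, D
```

Repeating with the next singular triplets of R(A) yields the remaining terms $C_i \otimes D_i$ of the sum.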

Example: Image Restoration

1. Matrix structure
2. Efficiently computing the SVD approximation

Matrix Structure in Image Restoration

First, how do we get the matrix A? Using linear algebra notation, the i-th column of A is

$$A e_i = \begin{bmatrix} a_1 & \cdots & a_i & \cdots & a_n \end{bmatrix} \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} = a_i.$$

In an imaging system, $e_i$ is a point source, and $Ae_i$ is the corresponding point spread function (PSF).

[Figure: a point source and its PSF]

Matrix Structure in Image Restoration

A spatially invariant PSF means that shifting the point source simply shifts the PSF: if $e_i$, $e_j$, $e_k$ are point sources at different locations, then $Ae_i$, $Ae_j$, $Ae_k$ are shifted copies of the same PSF. That is, spatial invariance implies:

- each column of A is identical, modulo a shift
- one PSF is enough to fully describe A
- A has Toeplitz structure

Matrix Structure in Image Restoration

For a 3×3 image and a 3×3 PSF $P = (p_{ij})$ with center $p_{22}$, under zero boundary conditions: blurring the center point source $e_5$ reproduces the PSF, so $Ae_5 = \mathrm{vec}(P)$, and shifting the point source into every other position fills in the remaining columns, giving the banded BTTB matrix

$$A = \begin{bmatrix}
p_{22} & p_{21} & & p_{12} & p_{11} & & & & \\
p_{23} & p_{22} & p_{21} & p_{13} & p_{12} & p_{11} & & & \\
 & p_{23} & p_{22} & & p_{13} & p_{12} & & & \\
p_{32} & p_{31} & & p_{22} & p_{21} & & p_{12} & p_{11} & \\
p_{33} & p_{32} & p_{31} & p_{23} & p_{22} & p_{21} & p_{13} & p_{12} & p_{11} \\
 & p_{33} & p_{32} & & p_{23} & p_{22} & & p_{13} & p_{12} \\
 & & & p_{32} & p_{31} & & p_{22} & p_{21} & \\
 & & & p_{33} & p_{32} & p_{31} & p_{23} & p_{22} & p_{21} \\
 & & & & p_{33} & p_{32} & & p_{23} & p_{22}
\end{bmatrix}$$

(blank entries are zero).
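In code, the action of this zero-boundary BTTB matrix on an image is just 2-D convolution with the PSF, so A never needs to be formed explicitly; a sketch (assuming a small PSF array P with its center at the middle entry):

```python
import numpy as np
from scipy.signal import convolve2d

def blur(P, X):
    """Compute A @ vec(X), returned as an image, for zero boundary
    conditions: 2-D convolution of the image X with the PSF P."""
    return convolve2d(X, P, mode="same", boundary="fill", fillvalue=0.0)

# Sanity check mirroring the slide: blurring the center point source
# reproduces the PSF.
P = np.arange(1.0, 10.0).reshape(3, 3)   # stand-in 3x3 PSF
e5 = np.zeros((3, 3)); e5[1, 1] = 1.0
assert np.allclose(blur(P, e5), P)
```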

Matrix Structure in Image Restoration

Matrix summary:

  Boundary condition                      Matrix structure
  zero                                    BTTB
  periodic                                BCCB (use FFT)
  reflexive (strongly symmetric PSF)      BTTB + BTHB + BHTB + BHHB (use DCT)

(B = block, T = Toeplitz, C = circulant, H = Hankel)
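With periodic boundary conditions the BCCB matrix is diagonalized by the 2-D FFT; a sketch of reading off its eigenvalues and applying A (assuming a PSF array the same size as the image, with its center at index `center`):

```python
import numpy as np

def bccb_eigenvalues(psf, center):
    """Eigenvalues of the BCCB blurring matrix: FFT of the PSF after
    circularly shifting its center to position (0, 0)."""
    return np.fft.fft2(np.roll(psf, (-center[0], -center[1]), axis=(0, 1)))

def bccb_multiply(psf, center, X):
    """Apply the BCCB matrix to an image X using FFTs only."""
    s = bccb_eigenvalues(psf, center)
    return np.real(np.fft.ifft2(s * np.fft.fft2(X)))
```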

Matrix Structure in Image Restoration

For a separable PSF we get

$$P = \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix} = c\,d^T = \begin{bmatrix} c_1 d_1 & c_1 d_2 & c_1 d_3 \\ c_2 d_1 & c_2 d_2 & c_2 d_3 \\ c_3 d_1 & c_3 d_2 & c_3 d_3 \end{bmatrix},$$

so every Toeplitz block of A is a scalar multiple of the same Toeplitz matrix built from d, with the multipliers themselves forming a Toeplitz matrix built from c. That is, A is a Kronecker product:

$$A = \begin{bmatrix} c_2 & c_1 & \\ c_3 & c_2 & c_1 \\ & c_3 & c_2 \end{bmatrix} \otimes \begin{bmatrix} d_2 & d_1 & \\ d_3 & d_2 & d_1 \\ & d_3 & d_2 \end{bmatrix} = C \otimes D.$$

Matrix Structure in Image Restoration

If the PSF is not separable, we can still compute

$$P = \sum_{i=1}^{r} c_i d_i^T \quad \text{(a sum of rank-1 matrices)},$$

and therefore

$$A = \sum_{i=1}^{r} C_i \otimes D_i \quad \text{(a sum of Kronecker products)}.$$

In fact, we can get optimal decompositions (Kamm, N.; N., Ng, Perrone, '03).
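A Kronecker product (or a short sum of them) is also cheap to apply; using the identity $(C \otimes D)\,\mathrm{vec}(X) = \mathrm{vec}(D X C^T)$, a sketch:

```python
import numpy as np

def kron_sum_multiply(Cs, Ds, X):
    """Apply A = sum_i kron(C_i, D_i) to vec(X) without forming A,
    via (C (x) D) vec(X) = vec(D X C^T); returns the result as an image."""
    return sum(D @ X @ C.T for C, D in zip(Cs, Ds))
```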

SVD Approximation for Image Restoration

Use $A \approx U \Sigma V^T$, where:

- If the FFT basis is best, i.e. $F \left(\sum C_i \otimes D_i\right) F^*$ is closest to diagonal: take $U = V = F^*$ and $\Sigma = \mathrm{diag}\left(F \left(\sum C_i \otimes D_i\right) F^*\right)$.
- If the DCT basis is best, $C \left(\sum C_i \otimes D_i\right) C^T$: take $U = V = C^T$ and $\Sigma = \mathrm{diag}\left(C \left(\sum C_i \otimes D_i\right) C^T\right)$.
- If the Kronecker singular-vector basis is best, $(U_c \otimes U_d)^T \left(\sum C_i \otimes D_i\right)(V_c \otimes V_d)$: take $U = U_c \otimes U_d$, $V = V_c \otimes V_d$, and $\Sigma = \mathrm{diag}\left((U_c \otimes U_d)^T \left(\sum C_i \otimes D_i\right)(V_c \otimes V_d)\right)$.
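A sketch of the last option, assembling $\Sigma$ from small factors only (the sign and ordering fixes a careful implementation needs are omitted): take $U_c, V_c, U_d, V_d$ from the SVDs of the dominant pair $C_1, D_1$, then use $\mathrm{diag}(M \otimes N) = \mathrm{diag}(M) \otimes \mathrm{diag}(N)$ term by term:

```python
import numpy as np

def kron_svd_approx(Cs, Ds):
    """Approximate SVD A ~ (Uc (x) Ud) diag(sigma) (Vc (x) Vd)^T for
    A = sum_i kron(C_i, D_i), using the singular vectors of C_1 (x) D_1."""
    Uc, _, Vct = np.linalg.svd(Cs[0])
    Ud, _, Vdt = np.linalg.svd(Ds[0])
    sigma = np.zeros(Uc.shape[0] * Ud.shape[0])
    for C, D in zip(Cs, Ds):
        dc = np.diag(Uc.T @ C @ Vct.T)   # diag of Uc^T C Vc
        dd = np.diag(Ud.T @ D @ Vdt.T)   # diag of Ud^T D Vd
        sigma += np.kron(dc, dd)         # diag(M (x) N) = kron(diag M, diag N)
    return Uc, Ud, Vct, Vdt, sigma
```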

The End

Preconditioning ill-posed problems is difficult, but possible:

- We can build an approximate SVD from Kronecker product approximations.
- We can implement it efficiently for image restoration.

Matlab software:

- RestoreTools (Lee, N., Perrone, '02): an object-oriented approach for image restoration. nagy/restoretools/
- Related software for ill-posed problems (Hansen, Jacobsen): pch/regutools/
