Lanczos Bidiagonalization with Subspace Augmentation for Discrete Inverse Problems
1  Lanczos Bidiagonalization with Subspace Augmentation for Discrete Inverse Problems

Per Christian Hansen, Technical University of Denmark
Ongoing work with Kuniyoshi Abe, Gifu
Dedicated to Dianne P. O'Leary
2  Many Many Thanks to Dianne for Inspiration, Insight, and Very Nice Collaborations

Dianne, Jim and I wrote a book and tried it on students in Bari. It is water!
Pictures by Nicola Mastronardi
3  Overview of Talk

Discrete inverse problem: $Ax = b$.
1. Iterative Krylov-subspace methods: regularizing iterations.
2. Augmenting the Krylov subspace for improved solutions.
3. The Lanczos bidiagonalization algorithm with an augmented subspace.
4. Numerical examples that illustrate the advantage of this idea.
4  Regularization Algorithms

Variational formulations take the form
    $\min_x \; \|Ax - b\|_2^2 + \lambda^2 R(x)$,
where $R(x)$ is a regularization term that penalizes unwanted features in the solution, and $\lambda$ is a user-chosen regularization parameter.
Hansen & O'Leary 1993, O'Leary 2001, Rust & O'Leary 2008: choosing $\lambda$.

Projection formulations take the form
    $\min_x \; \|Ax - b\|_2^2 \quad \text{s.t.} \quad x \in \mathcal{S}_k$,
where the "signal subspace" $\mathcal{S}_k$ is a linear subspace of dimension $k$. If $\mathcal{S}_k$ is chosen such that it captures the main features of the solution, then this approach is well suited for large-scale problems.

Hybrid methods apply regularization to the projected problem.
Chung, Nagy & O'Leary 2008: hybrid method with GCV.
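To make the variational formulation above concrete, here is a minimal MATLAB sketch, assuming a dense matrix A and right-hand side b are given, and taking the standard-form penalty R(x) = ||x||_2^2 (an assumption for illustration; the talk allows a general R):

    % Minimal Tikhonov sketch: min ||A*x - b||_2^2 + lambda^2*||x||_2^2,
    % solved as a stacked least-squares problem.
    lambda = 1e-2;                        % user-chosen regularization parameter
    n = size(A, 2);
    x_tik = [A; lambda*eye(n)] \ [b; zeros(n,1)];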
5  The Signal Subspace

In some applications we can use a pre-determined subspace, e.g., spanned by the Fourier basis, the discrete cosine basis, a wavelet basis, etc.
An example: truncated SVD, $\mathcal{S}_k = \mathrm{span}\{v_1, v_2, \ldots, v_k\}$.

Alternatively we can use a subspace determined by the given problem, e.g., the Krylov subspace $\mathcal{K}_k$ associated with a specific iterative method:
    CGLS:    $\mathrm{span}\{A^T b,\; (A^T A)A^T b,\; (A^T A)^2 A^T b,\; \ldots\}$,
    GMRES:   $\mathrm{span}\{b,\; Ab,\; A^2 b,\; \ldots\}$,
    RRGMRES: $\mathrm{span}\{Ab,\; A^2 b,\; A^3 b,\; \ldots\}$.

O'Leary & Simmons 1981, Kilmer & O'Leary 2001: regularizing iterations.
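For illustration, a minimal MATLAB sketch that builds an orthonormal basis of the CGLS subspace $\mathcal{K}_k(A^T A, A^T b)$ by modified Gram-Schmidt on the power sequence (assuming A and b are given; in practice one would use Golub-Kahan bidiagonalization rather than the numerically poor power basis):

    % Orthonormal basis V of K_k(A'*A, A'*b) via modified Gram-Schmidt.
    k = 5;
    v = A'*b;
    V = v/norm(v);
    for j = 2:k
        w = A'*(A*V(:, j-1));             % next Krylov direction (A'A)*v_{j-1}
        for i = 1:j-1                     % MGS: remove components along V
            w = w - (V(:, i)'*w)*V(:, i);
        end
        V = [V, w/norm(w)];
    end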
6  Illustration of Semi-Convergence

[Figure: semi-convergence of the iterates, with the naive solution $A^\dagger b$ and the exact solution $\bar{x}$ marked.]
7  Augmented Signal Subspace

Let $\mathcal{W}_p$ denote a linear subspace that captures additional specific components of the desired solution; $\dim(\mathcal{W}_p) = p \ll k$ = no. of iterations. Then it can be advantageous to use an augmented linear subspace
    $\mathcal{S}_{p,k} = \mathcal{W}_p + \mathcal{K}_k$, with $\mathcal{W}_p = \mathcal{R}(W_p) = \mathrm{span}\{w_1, \ldots, w_p\}$.

Example: deriv2 & GMRES. All vectors in the Krylov subspace $\to 0$ at the end points. Now use $w_1 = (1, 1, \ldots, 1)^T$ and $w_2 = (1, 2, \ldots, n)^T$.
[Figure: GMRES vs. augmented GMRES solutions.]

Here we want an efficient CGLS-type algorithm to solve the problem
    $\min_x \|Ax - b\|_2^2 \quad \text{s.t.} \quad x \in \mathcal{S}_{p,k} = \mathcal{W}_p + \mathcal{K}_k(A^T A, A^T b)$.
8  Overview of Methods

Square matrix $A \in \mathbb{R}^{n \times n}$:
- "Augmented (RR)GMRES" (Baglama, Reichel 2007), where the subspace augmentation idea was originally formulated. An elegant and efficient algorithm that uses an incorrect subspace.
- "R³GMRES" (Dong, Garde, H 2014) uses the correct subspace; less elegant, still efficient.

Rectangular matrix $A \in \mathbb{R}^{m \times n}$ (this work):
- In some problems (e.g., tomography) the matrix A is rectangular.
- In some problems (tomography, inverse heat equation) the Arnoldi subspace is not suited.
- "LBAS": Lanczos bidiagonalization with augmented subspace.
- Open question: can we use LSQR or LSMR to implement this?
9  Towards our Algorithm LBAS

We want to solve $\min_x \|Ax - b\|_2^2$ s.t. $x \in \mathcal{W}_p + \mathcal{K}_k(A^T A, A^T b)$.

In principle we could use, say, a Hessenberg decomposition
    $A\,[\,W_p,\; A^T b,\; A^T A A^T b,\; \ldots,\; (A^T A)^{k-1} A^T b\,] = V_{p+k+1} H_{p+k}$
and compute the solution as
    $x^{(k)} = [\,W_p,\; A^T b,\; A^T A A^T b,\; \ldots,\; (A^T A)^{k-1} A^T b\,]\, y^{(k)}$,
    $y^{(k)} = \arg\min_y \|H_{p+k}\, y - V_{p+k+1}^T b\|_2^2$.

But we prefer to use a stable and efficient "standard" algorithm: run the bidiagonalization algorithm to compute an orthonormal basis of $\mathcal{K}_k(A^T A, A^T b)$, and augment it by $\mathcal{W}_p$ in each step of the algorithm. This seems cumbersome, but the overhead is favorably small!
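A minimal MATLAB sketch of this "in principle" approach, assuming A, b, a basis matrix W of $\mathcal{W}_p$, and k are given. It forms an explicit orthonormal basis of $\mathcal{S}_{p,k}$ and solves the projected least-squares problem directly; this uses the correct subspace, but it is not the stable and efficient algorithm developed next:

    % Naive augmented-subspace solver (for illustration only).
    K = A'*b;                             % power basis of K_k(A'A, A'b)
    for j = 2:k
        K = [K, A'*(A*K(:, end))];
    end
    [Z, ~] = qr([W, K], 0);               % orthonormal basis of S_{p,k}
    y = (A*Z) \ b;                        % small projected least-squares problem
    x = Z*y;                              % solution constrained to S_{p,k}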
10  Setting the Stage for Our Algorithm

At step k we have the decomposition
    $A\,[\,V_k,\; W_p\,] = [\,U_{k+1},\; \tilde{U}_k\,] \begin{bmatrix} B_k & G_k \\ 0 & F_k \end{bmatrix}$,
where
- $A V_k = U_{k+1} B_k$ is obtained after k steps of the bidiagonalization process,
- $V_k \in \mathbb{R}^{n \times k}$ has orthonormal columns that span $\mathcal{K}_k(A^T A, A^T b)$,
- $U_{k+1} \in \mathbb{R}^{m \times (k+1)}$ has orthonormal columns, $u_1 = b/\|b\|_2$,
- $\tilde{U}_k \in \mathbb{R}^{m \times p}$: $\mathrm{range}(A W_p) = \mathrm{range}(U_{k+1} G_k + \tilde{U}_k F_k)$ and $\tilde{U}_k^T U_{k+1} = 0$,
- $B_k \in \mathbb{R}^{(k+1) \times k}$ is a lower bidiagonal matrix,
- $F_k \in \mathbb{R}^{p \times p}$ and changes in every iteration,
- $G_k$ is $(k+1) \times p$ and is updated along with $B_k$.

The columns of $[\,V_k,\; W_p\,]$ form a basis for $\mathcal{S}_{p,k}$.
11  More Details

Recall that $A\,[\,V_k,\; W_p\,] = [\,U_{k+1},\; \tilde{U}_k\,] \begin{bmatrix} B_k & G_k \\ 0 & F_k \end{bmatrix}$.

The matrices $G_k \in \mathbb{R}^{(k+1) \times p}$ and $F_k \in \mathbb{R}^{p \times p}$ are composed of the coefficients of $A W_p$ with respect to the bases of $\mathrm{range}(U_{k+1})$ and $\mathrm{range}(\tilde{U}_k)$, respectively:
    $G_k = U_{k+1}^T A W_p, \qquad F_k = \tilde{U}_k^T A W_p$.

Then the iterate $x^{(k)} \in \mathcal{S}_{p,k}$ is given by $x^{(k)} = [\,V_k,\; W_p\,]\, y^{(k)}$, where
    $y^{(k)} = \arg\min_y \left\| \begin{bmatrix} B_k & G_k \\ 0 & F_k \end{bmatrix} y - \begin{bmatrix} U_{k+1}^T \\ \tilde{U}_k^T \end{bmatrix} b \right\|_2$.
12  Algorithm: LBAS

1. Set $U_1 = b/\|b\|_2$, $V_0 = [\,]$, $B_0 = [\,]$, $G_0 = U_1^T A W_p$, and $k = 1$.
2. Use the bidiagonalization process to obtain $v_k$ and $u_{k+1}$ such that $A V_k = U_{k+1} B_k$, where $V_k = [V_{k-1}, v_k]$, $U_{k+1} = [U_k, u_{k+1}]$, and $B_k \in \mathbb{R}^{(k+1) \times k}$ is $B_{k-1}$ extended with the new bidiagonal elements.
3. Compute $G_k = \begin{bmatrix} G_{k-1} \\ u_{k+1}^T A W_p \end{bmatrix} \in \mathbb{R}^{(k+1) \times p}$.
4. Orthonormalize $A W_p$ with respect to $U_{k+1}$ to obtain $\tilde{U}_k \in \mathbb{R}^{m \times p}$.
5. Compute $F_k = \tilde{U}_k^T A W_p \in \mathbb{R}^{p \times p}$.
6. Solve $\min_y \left\| \begin{bmatrix} B_k & G_k \\ 0 & F_k \end{bmatrix} y - \begin{bmatrix} U_{k+1}^T \\ \tilde{U}_k^T \end{bmatrix} b \right\|_2$ to obtain $y^{(k)}$.
7. Then $x^{(k)} = [\,V_k,\; W_p\,]\, y^{(k)}$.
8. Stop, or set $k := k + 1$ and return to step 2.

$\tilde{U}_k$ and $F_k$ are recomputed in each step; but p is small!
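A minimal MATLAB sketch of the augmentation bookkeeping in steps 3-7 of one LBAS iteration, assuming the bidiagonalization has already produced U = $U_{k+1}$, V = $V_k$, and the lower bidiagonal B = $B_k$, and that AW = A*W (with W the basis matrix of $\mathcal{W}_p$) is precomputed. For simplicity $G_k$ is recomputed from scratch rather than updated row by row:

    % One LBAS update of the augmentation part (steps 3-7).
    k  = size(B, 2);  p = size(AW, 2);
    G  = U'*AW;                           % step 3: coefficients w.r.t. range(U)
    Ut = AW - U*G;                        % step 4: project AW away from range(U)
    [Ut, ~] = qr(Ut, 0);                  %         and orthonormalize -> U-tilde
    F  = Ut'*AW;                          % step 5: coefficients w.r.t. range(U-tilde)
    M  = [B, G; zeros(p, k), F];          % step 6: projected least-squares problem
    y  = M \ [U'*b; Ut'*b];
    x  = [V, W]*y;                        % step 7: iterate x^(k) in S_{p,k}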
13  Efficient and Stable Implementation

In each step we update the orthogonal factorization
    $\begin{bmatrix} B_k & G_k \\ 0 & F_k \end{bmatrix} = Q \begin{bmatrix} T_k^{(11)} & T_k^{(12)} \\ 0 & T_k^{(22)} \\ 0 & 0 \end{bmatrix}$,
where $T_k^{(11)} \in \mathbb{R}^{k \times k}$ and $T_k^{(22)} \in \mathbb{R}^{p \times p}$ are upper triangular and Q is orthogonal.
Update $T_k^{(11)}$ via Givens rotations that are also applied to $G_k$ and $U_{k+1}^T b$.

$\tilde{U}_k$ is already orthogonal to $U_k$; hence (in principle) we can perform the update
    $\tilde{U}_{k+1} = (I_m - u_{k+1} u_{k+1}^T)\, \tilde{U}_k$.

For numerical stability we must reorthogonalize the columns of $V_k$, $U_{k+1}$, and $\tilde{U}_k$; consider the use of partial reorthogonalization. Algorithm HyBR (Chung, Nagy, O'Leary 2008) uses full reorthogonalization.
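A two-line MATLAB sketch of this in-principle update: only the new Lanczos vector $u_{k+1}$ (here u_new) needs to be projected out of U-tilde, followed by a reorthonormalization to guard against the loss of orthogonality in floating point:

    % In-principle update of U-tilde when u_{k+1} arrives.
    Ut = Ut - u_new*(u_new'*Ut);          % (I - u_{k+1}*u_{k+1}') * U-tilde
    [Ut, ~] = qr(Ut, 0);                  % reorthonormalize the p columns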
14  Numerical Examples

Setting up the test problems:
1. Generate a noise-free system: $A x_{\mathrm{exact}} = b_{\mathrm{exact}}$.
2. Add noise: $b = b_{\mathrm{exact}} + e$, where e is a random vector of Gaussian white noise scaled to a prescribed relative noise level $\|e\|_2 / \|b_{\mathrm{exact}}\|_2$.
3. We show the best solution within the iterations plus:
   - the relative error $\|x_{\mathrm{exact}} - x^{(k)}\|_2 / \|x_{\mathrm{exact}}\|_2$,
   - the relative residual norm $\|b - A x^{(k)}\|_2 / \|b\|_2$.

We compare combinations of the following algorithms:
- CGLS: the implementation from Regularization Tools.
- RRGMRES: the implementation from Regularization Tools.
- R³GMRES: our implementation (Dong, Garde, H 2014).
- LBAS: our new algorithm.
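A MATLAB sketch of this setup, assuming the Regularization Tools package is on the path (it provides deriv2, gravity, and cgls); the noise level 1e-3 is an illustrative value:

    % Generate a noisy discrete inverse problem for the experiments.
    n = 32;
    [A, b_exact, x_exact] = deriv2(n);    % noise-free test problem A*x = b
    eta = 1e-3;                           % relative noise level (illustrative)
    e = randn(size(b_exact));
    e = eta*norm(b_exact)*e/norm(e);      % scale so ||e||/||b_exact|| = eta
    b = b_exact + e;
    rel_err = @(x) norm(x_exact - x)/norm(x_exact);
    rel_res = @(x) norm(b - A*x)/norm(b);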
15  Large Component in the Augmentation Subspace

Test problem deriv2(n,2) with n = 32 and noise added as described above.
    $\mathcal{W}_2 = \mathrm{span}\{w_1, w_2\}$, $w_1 = (1, 1, \ldots, 1)^T$, $w_2 = (1, 2, \ldots, n)^T$.

For this problem
    $\|W_2 W_2^T x_{\mathrm{exact}}\|_2 / \|x_{\mathrm{exact}}\|_2 = 0.99$,
    $\|(I - W_2 W_2^T)\, x_{\mathrm{exact}}\|_2 / \|x_{\mathrm{exact}}\|_2 = 0.035$,
so we only need to spend effort in capturing the small component in $\mathcal{W}_2^\perp$.
16  Capture a Discontinuity

Test problem gravity(n), n = 100, relative noise level $10^{-3}$, with the exact solution changed to include a discontinuity between elements $\ell = 50$ and $\ell + 1 = 51$.

The augmentation matrix $W_2$ allows us to represent this discontinuity:
    $w_1 = \begin{bmatrix} \mathrm{ones}(\ell,1) \\ \mathrm{zeros}(n-\ell,1) \end{bmatrix}, \qquad w_2 = \begin{bmatrix} \mathrm{zeros}(\ell,1) \\ \mathrm{ones}(n-\ell,1) \end{bmatrix}$.
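The corresponding MATLAB construction, with $\ell = 50$ and n = 100 as on the slide:

    % Augmentation vectors that can represent the jump between elements l and l+1.
    n = 100;  l = 50;
    w1 = [ones(l,1);  zeros(n-l,1)];
    w2 = [zeros(l,1); ones(n-l,1)];
    W  = [w1, w2];                        % augmentation matrix W_2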
17  Fix Boundary Conditions

Here $W_2$ compensates for the "incorrect" or "incompatible" boundary conditions implicit in A, by allowing the regularized solutions to have nonzero values and nonzero derivatives at the endpoints.
18  Fix Boundary Conditions, Rectangular A

The matrix A is rectangular, so RRGMRES and R³GMRES cannot be used.
19  Compute the Spectrum of an X-Ray Source

The spectrum of an X-ray source (where accelerated electrons hit an anode) consists of a continuous spectrum superimposed with line spectra. We know the frequencies of the spectral lines, so we can easily incorporate this information through the augmentation subspace.

We experiment with two choices:
- $W_{\mathrm{delta}}$: two delta functions at the right frequencies,
- $W_{\mathrm{Gauss}}$: two narrow Gauss functions at the right frequencies.

[Figure: the exact spectrum.]
Many thanks to Jan Sijbers for the inspiration for this example.
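A sketch of how such augmentation vectors could be discretized; the grid, the line positions f1 and f2, and the width s are all hypothetical choices for illustration (the slide does not give the actual values):

    % Hypothetical discretization of the two augmentation choices.
    n = 200;
    t = linspace(0, 1, n)';               % energy/frequency grid (assumed)
    f1 = 0.3;  f2 = 0.6;  s = 0.01;       % illustrative line positions and width
    g = @(c) exp(-(t - c).^2/(2*s^2));
    W_Gauss = [g(f1), g(f2)];             % two narrow Gaussians
    [~, i1] = min(abs(t - f1));
    [~, i2] = min(abs(t - f2));
    W_delta = zeros(n, 2);                % two "delta functions" on the grid
    W_delta(i1, 1) = 1;  W_delta(i2, 2) = 1;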
20  Conclusions

We consider (again) how to augment the Krylov subspace, focusing here on rectangular matrices and Lanczos bidiagonalization. We develop an efficient algorithm, LBAS, and numerical examples demonstrate its advantage.

Future work:
- Selective reorthogonalization? Is it occasionally necessary to do the MGS twice?
- A similar algorithm based on MINRES/MR-II?
- A hybrid algorithm with regularization of the projected problem!