Block Bidiagonal Decomposition and Least Squares Problems
1 Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27–29, 2008
2 Outline Bidiagonal Decomposition and LSQR Block Bidiagonalization Methods Multiple Right-Hand Sides in LS and TLS Rank Deficient Blocks and Deflation Summary
3 The Bidiagonal Decomposition

In 1965 Golub and Kahan gave two algorithms for computing a bidiagonal decomposition (BD)
\[ A = U \begin{pmatrix} B \\ 0 \end{pmatrix} V^T \in \mathbb{R}^{m \times n}, \]
where B is lower bidiagonal,
\[ B = \begin{pmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \beta_3 & \ddots & \\ & & \ddots & \alpha_n \\ & & & \beta_{n+1} \end{pmatrix} \in \mathbb{R}^{(n+1) \times n}, \]
and U and V are square orthogonal matrices. Given u_1 = Q_1 e_1, the decomposition is uniquely determined if α_k β_{k+1} ≠ 0, k = 1:n.
4 The Bidiagonal Decomposition: The Householder Algorithm

For a given Householder matrix Q_1, set A_0 = Q_1 A, and apply Householder transformations alternately from the right and the left:
A_k = Q_{k+1} (A_{k−1} P_k), k = 1, 2, ....
P_k zeros n − k elements in the kth row; Q_{k+1} zeros m − k − 1 elements in the kth column. If Q_1 is chosen so that Q_1 b = β_1 e_1, then
\[ U^T (\, b \;\; AV \,) = \begin{pmatrix} \beta_1 e_1 & B \\ 0 & 0 \end{pmatrix}, \]
where B is lower bidiagonal and
U = Q_1 Q_2 ⋯ Q_{n+1}, V = P_1 P_2 ⋯ P_{n−1}.
5 The Bidiagonal Decomposition: Lanczos Algorithm

In the Lanczos process, successive columns of U = (u_1, u_2, ..., u_{n+1}) and V = (v_1, v_2, ..., v_n), with β_1 u_1 = b, β_1 = ‖b‖_2, are generated recursively. Equating columns in A^T U = V B^T and A V = U B gives the coupled two-term recurrence relations (v_0 ≡ 0):
α_k v_k = A^T u_k − β_k v_{k−1},
β_{k+1} u_{k+1} = A v_k − α_k u_k, k = 1:n.
Here α_k and β_{k+1} are determined by the condition ‖v_k‖_2 = ‖u_k‖_2 = 1.
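As a concrete illustration, here is a minimal NumPy sketch of these coupled recurrences. The function name and array layout are mine; no reorthogonalization is performed, so orthogonality degrades in finite precision, and breakdown (α_k = 0 or β_{k+1} = 0) is not handled.

```python
import numpy as np

def golub_kahan_bidiag(A, b, k):
    """k steps of the coupled two-term Lanczos recurrences (sketch)."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    # beta_1 u_1 = b, with beta_1 = ||b||_2
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    v_prev = np.zeros(n)
    for j in range(k):
        # alpha_k v_k = A^T u_k - beta_k v_{k-1}
        w = A.T @ U[:, j] - beta[j] * v_prev
        alpha[j] = np.linalg.norm(w)
        V[:, j] = w / alpha[j]
        # beta_{k+1} u_{k+1} = A v_k - alpha_k u_k
        w = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(w)
        U[:, j + 1] = w / beta[j + 1]
        v_prev = V[:, j]
    return U, V, alpha, beta
```

One can check that A @ V agrees with U @ B_k, where B_k is the lower bidiagonal matrix built from alpha and beta.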
6 The Bidiagonal Decomposition

Since the BD is unique, both algorithms generate the same decomposition in exact arithmetic. From the Lanczos recurrences it follows that U_k = (u_1, ..., u_k) and V_k = (v_1, ..., v_k) form orthonormal bases for the left and right Krylov subspaces
K_k(AA^T, u_1) = span{ b, AA^T b, ..., (AA^T)^{k−1} b },
K_k(A^T A, v_1) = span{ A^T b, ..., (A^T A)^{k−1} A^T b }.
If either α_k = 0 or β_{k+1} = 0, the process terminates. Then the Krylov space of maximal dimension has been reached.
7 Least Squares and LSQR

The LSQR algorithm (Paige and Saunders, 1982) is a Krylov subspace method for the LS problem min_x ‖b − Ax‖_2. After k steps, one seeks an approximate solution
x_k = V_k y_k ∈ K_k(A^T A, A^T b).
From the recurrence formulas it follows that A V_k = U_{k+1} B_k. Thus
b − A V_k y_k = U_{k+1} (β_1 e_1 − B_k y_k),
where B_k is lower bidiagonal. By the orthogonality of U_{k+1}, the optimal y_k is obtained by solving the bidiagonal subproblem
min_y ‖β_1 e_1 − B_k y‖_2.
8 Least Squares and LSQR

This subproblem is solved using Givens rotations that transform B_k into upper bidiagonal form, giving
\[ G\, (\, \beta_1 e_1 \;\; B_k \,) = \begin{pmatrix} f_k & R_k \\ \varphi_{k+1} & 0 \end{pmatrix}, \qquad x_k = V_k R_k^{-1} f_k, \qquad \|b - A x_k\|_2 = \varphi_{k+1}. \]
Then x_k, k = 1, 2, 3, ..., gives approximations on a nested sequence of Krylov subspaces, often superior to truncated SVD. LSQR uses the Lanczos recursion and interleaves the solution of the LS subproblems. A Householder implementation could be used for dense A.
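SciPy ships an implementation of Paige and Saunders' algorithm; a minimal usage sketch (the random test data is illustrative only):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)

# lsqr runs the Lanczos bidiagonalization internally and reduces the
# bidiagonal subproblem with Givens rotations, as sketched above.
x, istop, itn = lsqr(A, b, atol=1e-12, btol=1e-12)[:3]
print(itn, np.linalg.norm(b - A @ x))
```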
9 Least Squares and LSQR

Bidiagonalization can also be used for total least squares (TLS) problems. Here the error function to be minimized is
\[ \frac{\|b - A x_k\|_2^2}{\|x_k\|_2^2 + 1} = \frac{\|\beta_1 e_1 - B_k y_k\|_2^2}{\|y_k\|_2^2 + 1}. \]
The solutions of the lower bidiagonal TLS subproblems are obtained from the SVD of the matrix (β_1 e_1, B_k), k = 1, 2, ....
Partial least squares (PLS) is widely used in data analysis and statistics, where the predictive variables tend to be many and are often highly correlated. PLS is mathematically equivalent to LSQR but implemented differently.
10 Outline Bidiagonal Decomposition and LSQR Block Bidiagonalization Methods Multiple Right-Hand Sides in LS and TLS Rank Deficient Blocks and Deflation Summary
11 Block Bidiagonalization Methods

Let Q_1 be a given m × m orthogonal matrix and set
\[ U_1 = Q_1 \begin{pmatrix} I_p \\ 0 \end{pmatrix} \in \mathbb{R}^{m \times p}. \]
Next form Q_1^T A and compute the LQ factorization of its first p rows,
(I_p 0)(Q_1^T A) P_1 = (L_1 0),
where L_1 is lower triangular. Proceed by alternately performing QR factorizations of blocks of p columns and LQ factorizations of blocks of p rows.
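NumPy has no LQ routine, but the LQ factorization used here can be obtained from a QR factorization of the transpose. A small helper sketch (the function name and the commented usage are mine):

```python
import numpy as np

def lq_of_rows(rows):
    """LQ factorization of a p x n block of rows: rows @ P = (L 0)."""
    # QR of the transpose: rows.T = P @ R  =>  rows = R.T @ P.T
    P, R = np.linalg.qr(rows.T, mode='complete')
    return R.T, P  # R.T is (L 0) with L lower triangular p x p

# First step of the block reduction (Q1, A, p assumed given):
# rows = (Q1.T @ A)[:p, :]
# L0, P1 = lq_of_rows(rows)   # L0[:, :p] is L_1
```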
12 Block Bidiagonalization Methods

After k steps we have computed a block bidiagonal matrix
\[ T_k = \begin{pmatrix} L_1 & & & \\ R_2 & L_2 & & \\ & \ddots & \ddots & \\ & & R_k & L_k \\ & & & R_{k+1} \end{pmatrix} \in \mathbb{R}^{(k+1)p \times kp}, \]
and two block matrices with orthogonal columns,
\[ (U_1, U_2, \dots, U_{k+1}) = Q_{k+1} \cdots Q_2 Q_1 \begin{pmatrix} I_{(k+1)p} \\ 0 \end{pmatrix}, \qquad (V_1, V_2, \dots, V_k) = P_k \cdots P_2 P_1 \begin{pmatrix} I_{kp} \\ 0 \end{pmatrix}. \]
The matrix T_k is a banded lower triangular matrix with p + 1 nonzero diagonals.
13 Block Bidiagonalization Methods

The Householder algorithm gives a constructive proof of the existence of such a block bidiagonal decomposition. If the LQ and QR factorizations have full rank, then the decomposition is uniquely determined by U_1. To derive a block Lanczos algorithm we use the identities
A (V_1 V_2 ⋯ V_n) = (U_1 U_2 ⋯ U_{n+1}) T_n,
A^T (U_1 U_2 ⋯ U_{n+1}) = (V_1 V_2 ⋯ V_n) T_n^T.
Start by forming Z_1 = A^T U_1 and compute its thin QR factorization Z_1 = V_1 L_1^T.
14 Block Bidiagonalization Methods

For k = 1:n we then do:
- Compute the residuals W_k = A V_k − U_k L_k ∈ R^{m×p}, and compute the QR factorization W_k = U_{k+1} R_{k+1}.
- Compute the residuals Z_{k+1} = A^T U_{k+1} − V_k R_{k+1}^T ∈ R^{n×p}, and compute the QR factorization Z_{k+1} = V_{k+1} L_{k+1}^T.
Householder QR factorizations can be used to guarantee orthogonality within each block U_k and V_k. A sketch of the recursion in code follows.
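A minimal NumPy sketch of this block recursion, assuming every residual block has full column rank (so no deflation is needed); the function name and list-of-blocks layout are mine:

```python
import numpy as np

def block_lanczos_bidiag(A, U1, k):
    """k steps of block Lanczos bidiagonalization; U1 (m x p) orthonormal."""
    # Z_1 = A^T U_1, thin QR gives Z_1 = V_1 L_1^T
    V1, L1T = np.linalg.qr(A.T @ U1)
    U, V, L, R = [U1], [V1], [L1T.T], []
    for j in range(k):
        # W = A V_j - U_j L_j, QR: W = U_{j+1} R_{j+1}
        Unew, Rnew = np.linalg.qr(A @ V[j] - U[j] @ L[j])
        U.append(Unew)
        R.append(Rnew)
        # Z = A^T U_{j+1} - V_j R_{j+1}^T, QR: Z = V_{j+1} L_{j+1}^T
        Vnew, LnewT = np.linalg.qr(A.T @ Unew - V[j] @ Rnew.T)
        V.append(Vnew)
        L.append(LnewT.T)
    return U, V, L, R
```

Since np.linalg.qr returns an upper triangular factor, L_{k+1}^T is upper triangular and hence L_{k+1} lower triangular, matching the LQ convention above.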
15 Block Bidiagonalization Methods

The block Lanczos bidiagonalization was given by Golub, Luk, and Overton in 1981. From the Lanczos recurrence relations it follows by induction that for k = 1, 2, 3, ..., the block algorithm generates orthogonal bases for the Krylov spaces
span(U_1, ..., U_k) = K_k(AA^T, U_1),
span(V_1, ..., V_k) = K_k(A^T A, A^T U_1).
The process can be continued as long as the residual matrices have full column rank. Rank deficiency will be considered later.
16 Outline Bidiagonal Decomposition and LSQR Block Bidiagonalization Methods Multiple Right-Hand Sides in LS and TLS Rank Deficient Blocks and Deflation Summary
17 Multiple Right-Hand Sides in LS and TLS

In many applications one needs to solve least squares problems with multiple right-hand sides B = (b_1, ..., b_p), p ≥ 2. There are two possible approaches:
- Select a seed right-hand side and use the Krylov subspace it generates to start up the solution of the second, etc.
- Use a block Krylov solver where all right-hand sides are treated simultaneously.
Block Krylov methods use a much larger search space from the start and introduce matrix-matrix multiplies into the algorithm.
18 Multiple Right-Hand Sides in LS and TLS

A natural generalization of LSQR is to compute a sequence of approximate solutions of the form X_k = V_k Y_k, k = 1, 2, 3, .... That is, X_k is restricted to lie in the block Krylov subspace K_k(A^T A, A^T B). It follows that
B − A X_k = B − A V_k Y_k = U_{k+1} (R_1 E_1 − T_k Y_k).
Using the orthogonality of the columns of U_{k+1} gives
‖B − A X_k‖_F = ‖R_1 E_1 − T_k Y_k‖_F.
19 Block LSQR

Hence ‖B − A X_k‖_F is minimized for X_k ∈ R(V_k) by taking X_k = V_k Y_k, where Y_k solves
min_{Y_k} ‖R_1 E_1 − T_k Y_k‖_F.
The approximations X_k are the optimal solutions on the nested sequence of block Krylov subspaces K_k(A^T A, A^T B), k = 1, 2, 3, .... The block bidiagonal LS subproblems are solved by using orthogonal transformations to bring the lower triangular banded matrix T_k into upper triangular banded form.
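For small projected dimensions, the banded subproblem can be solved with a dense LS solver as a stand-in for the structured Givens sweep. A sketch (a production code would exploit the band structure of T_k):

```python
import numpy as np

def block_lsqr_step(Tk, R1, p):
    """Solve min || R_1 E_1 - T_k Y ||_F for the projected block LS problem."""
    # R_1 E_1 is the first block column of the identity times R_1,
    # i.e. (R_1; 0; ...; 0) of shape ((k+1)p x p).
    rhs = np.zeros((Tk.shape[0], p))
    rhs[:p, :] = R1
    # Dense QR-based solve, columnwise over the p right-hand sides
    Y, *_ = np.linalg.lstsq(Tk, rhs, rcond=None)
    return Y
```

The outer iterate is then X_k = V_k Y_k with V_k the stacked blocks (V_1, ..., V_k).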
20 Multiple Right-Hand Sides in LS and TLS

As in LSQR, the solution can be interleaved with the block bidiagonalization. For example (p = 2), the orthogonal reduction brings the subproblem to the form
\[ \begin{pmatrix} C_1 & R \\ C_2 & 0 \end{pmatrix}, \]
with R upper triangular banded. At this step the approximate solution to min ‖AX − B‖_F is
X = V_k (R^{−1} C_1), ‖B − A X‖_F = ‖C_2‖_F.
21 Multiple Right-Hand Sides in LS and TLS

The block algorithm can also be used for TLS problems with multiple right-hand sides,
min_{E,F} ‖(E, F)‖_F subject to (A + E)X = B + F.
Unlike the LS problem, this cannot be reduced to p separate problems. Using the orthogonal invariance, the error function to be minimized is
\[ \frac{\|B - A X\|_F^2}{\|X\|_F^2 + 1} = \frac{\|R_1 E_1 - T Y\|_F^2}{\|Y\|_F^2 + 1}. \]
22 Multiple Right-Hand Sides in LS and TLS

The TLS solution can be expressed in terms of the SVD
\[ (\, B \;\; A \,) = (\, U_1 \;\; U_2 \,) \begin{pmatrix} \Sigma_1 & \\ & \Sigma_2 \end{pmatrix} \begin{pmatrix} V_1^T \\ V_2^T \end{pmatrix}, \]
where Σ_1 = diag(σ_1, ..., σ_n). The solution of the lower triangular block bidiagonal TLS problem can be constructed from the SVD of the block bidiagonal matrix.
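For reference, a minimal sketch of the SVD-based TLS solution for the full (unprojected) problem. It uses the common (A B) column ordering rather than the slide's (B A); the genericity assumption that the trailing block V_22 is nonsingular is mine.

```python
import numpy as np

def tls_solution(A, B):
    """Classical TLS solution X = -V12 V22^{-1} from the SVD of (A B)."""
    n = A.shape[1]
    # SVD of the compound matrix; with full_matrices=True, Vt is square
    _, _, Vt = np.linalg.svd(np.hstack([A, B]), full_matrices=True)
    V = Vt.T
    V12 = V[:n, n:]   # top-right block of V
    V22 = V[n:, n:]   # bottom-right block, assumed nonsingular
    return -V12 @ np.linalg.inv(V22)
```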
23 Outline Bidiagonal Decomposition and LSQR Block Bidiagonalization Methods Multiple Right-Hand Sides in LS and TLS Rank Deficient Blocks and Deflation Summary
24 Rank Deficient Blocks and Deflation

When rank deficient blocks occur, the bidiagonalization must be modified. For example (p = 1), at a particular step
\[ Q_3 Q_2 Q_1 (\, b \;\; A P_1 P_2 \,) = \begin{pmatrix} \beta_1 & \alpha_1 & & \\ & \beta_2 & \alpha_2 & \\ & & \beta_3 & \times \\ & & & \times \end{pmatrix}, \]
where × denotes the part not yet reduced. If α_3 = 0 or β_4 = 0, the reduced matrix decomposes into block diagonal form. Then the LS problem decomposes and the bidiagonalization can be terminated.
25 Rank Deficient Blocks and Deflation

If α_k = 0, then
\[ U_k^T (\, b \;\; A V_k \,) = \begin{pmatrix} \beta_1 e_1 & T_k \\ 0 & A_k \end{pmatrix}, \]
and the problem is reduced to
min_y ‖β_1 e_1 − T_k y‖_2.
Then the right Krylov vectors
A^T b, (A^T A) A^T b, ..., (A^T A)^{k−1} A^T b
are linearly dependent.
26 Rank Deficient Blocks and Deflation

If β_{k+1} = 0, then
\[ U_{k+1}^T (\, b \;\; A V_k \,) = \begin{pmatrix} \beta_1 e_1 & \tilde{T}_k \\ 0 & A_k \end{pmatrix}, \]
where \tilde{T}_k = (T_{k−1}, α_k e_k) is square. Then the original system is consistent and the solution satisfies
\tilde{T}_k y = β_1 e_1.
The left Krylov vectors
b, (AA^T) b, ..., (AA^T)^k b
are then linearly dependent.
27 Rank Deficient Blocks and Deflation

From well known properties of tridiagonal matrices it follows:
- The matrix T_k has full column rank and its singular values are simple.
- The right-hand side β_1 e_1 has nonzero components along each left singular vector of T_k.
Paige and Strakoš call this a core subproblem and show that it is minimally dimensioned. If we scale b := γb, then only β_1 changes. Thus we have a core subproblem for any weighted TLS problem
min ‖(γr, E)‖_F subject to (A + E)x = b + r.
28 Rank Deficient Blocks and Deflation

How do we proceed in the block algorithm if a rank deficient block occurs in an LQ or QR factorization? The triangular factor then typically has the form (p = 4 and r = rank(R_k) = 3):
\[ R_k = \begin{pmatrix} \times & \times & \times & \times \\ & \times & \times & \times \\ & & \times & \times \\ & & & 0 \end{pmatrix} \in \mathbb{R}^{p \times p}. \]
(Recall that the factorizations are performed without pivoting.) All following elements in this diagonal of T_k will then be zero, and the bandwidth and block size are reduced by one.
29 Rank Deficient Blocks and Deflation

Example (p = 2): assume that the first element in the diagonal of the block R_2 becomes zero. If a diagonal element (say in L_3) becomes zero, the problem decomposes. In general, the reduction can be terminated when the bandwidth has been reduced p times.
30 Rank Deficient Blocks and Deflation

Deflation is related to linear dependencies in the associated Krylov subspaces:
- If (AA^T)^k b_j is linearly dependent on previous vectors, then all left and right Krylov vectors of the form (AA^T)^p b_j, (A^T A)^p A^T b_j, p ≥ k, should be removed.
- If (A^T A)^k A^T b_j is linearly dependent on previous vectors, then all vectors of the form (A^T A)^p A^T b_j, (AA^T)^{p+1} b_j, p ≥ k, should be removed.
The reduction terminates when both the left and right Krylov subspaces have reached their maximal dimensions.
31 Rank Deficient Blocks and Deflation

The block Lanczos algorithm is modified similarly when a rank deficient block occurs. Recall the block Lanczos recurrence: for k = 1:n,
V_k L_k^T = A^T U_k − V_{k−1} R_k^T,
U_{k+1} R_{k+1} = A V_k − U_k L_k.
Suppose the first rank deficient block is L_k ∈ R^{p×r}, r < p. Then V_k is n × r, i.e. only r < p vectors are determined. In the next step A V_k − U_k L_k ∈ R^{m×r}. The recurrence still works! The block size has been reduced to r.
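A sketch of how a QR step can detect and carry out this block-size reduction. It uses column-pivoted QR to estimate the numerical rank, which is a substitute for the talk's pivot-free factorizations, and the tolerance is an assumption:

```python
import numpy as np
from scipy.linalg import qr

def deflating_qr(W, tol=1e-12):
    """Orthonormal basis for range(W) with numerical rank r <= p."""
    # Column-pivoted QR: W[:, piv] = Q @ R
    Q, R, piv = qr(W, mode='economic', pivoting=True)
    d = np.abs(np.diag(R))
    r = int(np.sum(d > tol * d[0])) if d.size and d[0] > 0 else 0
    # Undo the column permutation so that W ~= Q[:, :r] @ S
    S = np.zeros((r, W.shape[1]))
    S[:, piv] = R[:r, :]
    return Q[:, :r], S  # block size drops from p to r when r < p
```

The next block Lanczos step then continues with the r columns of Q, exactly as described above.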
32 Outline Bidiagonal Decomposition and LSQR Block Bidiagonalization Methods Multiple Right-Hand Sides in LS and TLS Rank Deficient Blocks and Deflation Summary
33 Summary

We have emphasized the similarity and (mathematical) equivalence of the Householder and Lanczos algorithms for block bidiagonalization. For dense problems in data analysis and statistics the Householder algorithm should be used because of its backward stability. Today quite large dense problems can be handled in seconds on a desktop computer! The relationship of block LSQR to multivariate PLS needs further investigation.
34 References

G. H. Golub and W. Kahan. Calculating the singular values and pseudo-inverse of a matrix. SIAM J. Numer. Anal. Ser. B, 2:205–224, 1965.
G. H. Golub, F. T. Luk, and M. L. Overton. A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix. ACM Trans. Math. Software, 7:149–169, 1981.
S. Karimi and F. Toutounian. The block least squares method for solving nonsymmetric linear systems with multiple right-hand sides. Appl. Math. Comput., 177:852–862, 2006.
C. C. Paige and M. A. Saunders. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Software, 8:43–71, 1982.