Rank Revealing QR factorization. F. Guyomarc'h, D. Mezher and B. Philippe


Outline

- Introduction
- Classical Algorithms: full matrices, sparse matrices
- Rank-Revealing QR
- Conclusion

Situation of the problem

See Björck, SIAM, 1996: Numerical Methods for Least Squares Problems.

Data: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$.

Problem P: find $x \in S$ such that $\|x\|_2$ is minimum, where $S = \{x \in \mathbb{R}^n : \|b - Ax\|_2 \text{ is minimum}\}$.

The solution is unique: $\hat{x} = A^+ b$.

Property: $x \in S$ iff $A^T(b - Ax) = 0$, iff $A^T A x = A^T b$. The last equation is consistent since $\mathcal{R}(A^T A) = \mathcal{R}(A^T)$.
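This characterization is easy to check numerically; a minimal sketch (the random rank-deficient matrix and its dimensions are illustrative only):

MATLAB:
m = 8; n = 5;
A = randn(m, 3) * randn(3, n);    % rank 3 < n by construction
b = randn(m, 1);
xhat = pinv(A) * b;               % xhat = A^+ b, the minimum-norm solution
norm(A' * (b - A * xhat))         % ~ 0: A^T (b - A x) = 0 holds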

Rank-Revealing QR factorization

Theorem: let $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(A) = r$ ($r \le \min(m, n)$). There exist $Q$, $E$, $R_{11}$ and $R_{12}$ such that $Q \in \mathbb{R}^{m \times m}$ is orthogonal, $E \in \mathbb{R}^{n \times n}$ is a permutation, $R_{11} \in \mathbb{R}^{r \times r}$ is upper triangular with positive diagonal entries, $R_{12} \in \mathbb{R}^{r \times (n-r)}$, and
$$AE = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}. \qquad \text{(RRQR)}$$
The factorization is not unique. Let RRQR denote any of them.
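In MATLAB, the three-output qr computes such a factorization with column pivoting; a small sketch on an exactly rank-deficient matrix (sizes are illustrative):

MATLAB:
A = randn(10, 4) * randn(4, 6);   % rank 4
[Q, R, E] = qr(A);                % A*E = Q*R, |diag(R)| non-increasing
abs(diag(R))'                     % entries 5 and 6 drop to roundoff: r = 4 revealed
norm(A*E - Q*R)                   % factorization residual, ~ eps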

Actually, if we consider a complete orthogonal decomposition
$$A = Q \begin{pmatrix} T & 0 \\ 0 & 0 \end{pmatrix} V^T$$
where $Q$ and $V$ are orthogonal and $T$ is triangular with positive diagonal entries, we have
$$A^+ = V \begin{pmatrix} T^{-1} & 0 \\ 0 & 0 \end{pmatrix} Q^T.$$
Such a factorization can be obtained from the previous one by performing a QR factorization of $\begin{pmatrix} R_{11}^T \\ R_{12}^T \end{pmatrix}$.
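A sketch of this construction, assuming the numerical rank r has already been decided from diag(R) (MATLAB's qr does not normalize signs, so T is triangular only up to signs, which does not affect the pseudoinverse):

MATLAB:
A = randn(10, 4) * randn(4, 6); r = 4;
[Q, R, E] = qr(A);                          % RRQR as on the previous slide
[V, S] = qr(R(1:r, :)');                    % QR of [R11 R12]^T
T = S(1:r, 1:r)';                           % A*E = Q*[T 0; 0 0]*V', T lower triangular
Apinv = E * V(:, 1:r) * (T \ Q(:, 1:r)');   % A^+ via the decomposition
norm(Apinv - pinv(A))                       % agrees with pinv up to roundoff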

Column pivoting strategy

Householder QR factorization using Householder reflections, illustrated on a matrix with 12 rows: column 1 is annihilated below the diagonal by $H_1$, then column 2 of the trailing block by $H_2$, and so on, with
$$v = A_{1:12,\,1} + \|A_{1:12,\,1}\|\, e_1, \qquad H_1 = I_{12} - 2\,\frac{v v^T}{v^T v},$$
$$v = A_{2:12,\,2} + \|A_{2:12,\,2}\|\, e_1, \qquad H_2 = I_{11} - 2\,\frac{v v^T}{v^T v}.$$
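A minimal sketch of the full sweep with the slide's sign convention $v = x + \|x\| e_1$ (which maps $x$ to $-\|x\| e_1$, so the diagonal of R comes out negative; dimensions are illustrative and nonzero pivot columns are assumed):

MATLAB:
A = randn(12, 6); R = A; Q = eye(12);
for k = 1:size(A, 2)
  x = R(k:end, k);
  v = x; v(1) = v(1) + norm(x);       % v = A(k:end,k) + ||A(k:end,k)|| e1
  beta = 2 / (v' * v);                % H_k = I - beta*v*v'
  R(k:end, k:end) = R(k:end, k:end) - beta * v * (v' * R(k:end, k:end));
  Q(:, k:end) = Q(:, k:end) - beta * (Q(:, k:end) * v) * v';   % Q = H1*...*Hk
end
norm(Q * R - A)                       % ~ eps: A = Q*R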

Sparse QR factorization

Factorizing a sparse matrix implies fill-in in the factors. The situation is worse with QR than with LU since, when updating the trailing matrix:

- LU: the elementary transformation $x \mapsto y = (I - \alpha v e_k^T)x$ keeps $x$ invariant when $x_k = 0$;
- QR: the elementary transformation $x \mapsto y = (I - \alpha v v^T)x$ keeps $x$ invariant only when $x \perp v$.

Since $A^T A = R^T R$, the QR factorization of $A$ is related to the Cholesky factorization of $A^T A$. It is known that a symmetric permutation of a sparse s.p.d. matrix changes the level of fill-in.
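The effect can be observed with MATLAB's Q-less sparse qr and a fill-reducing column ordering such as colamd (a sketch; the random test matrix and its density are illustrative, and exact counts vary by instance):

MATLAB:
A = [speye(100); sparse(100, 100)] + sprandn(200, 100, 0.02);
R1 = qr(A);              % R in the natural column order
p  = colamd(A);          % fill-reducing column permutation
R2 = qr(A(:, p));        % R after permuting the columns
[nnz(R1), nnz(R2)]       % typically nnz(R2) is much smaller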

Therefore, a permutation of the columns of A changes the fill-in of R: there is a conflict between pivoting to minimize fill-in and pivoting associated with numerical properties. We choose to decouple the sparse factorization phase and the rank-revealing phase. For a standard QR factorization of a sparse matrix: multifrontal QR method [Amestoy-Duff-Puglisi 94], routine MA49AD in the HSL library.

Column pivoting strategy (continued)

Householder QR with Column Pivoting (Businger and Golub): at step $k$, the column $j_k \ge k$ defining the Householder transformation is chosen so that
$$\|A(k{:}m,\, j_k)\| \ge \|A(k{:}m,\, j)\| \quad \text{for all } j \ge k.$$
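A compact sketch of this pivoting rule (column norms are recomputed at every step for clarity; real implementations downdate them instead):

MATLAB:
A = randn(12, 6) * randn(6, 8);           % rank 6 < 8
[m, n] = size(A); R = A; piv = 1:n;
for k = 1:min(m, n)
  [~, j] = max(sum(R(k:m, k:n).^2, 1));   % largest trailing column norm
  j = j + k - 1;
  R(:, [k j]) = R(:, [j k]); piv([k j]) = piv([j k]);
  x = R(k:m, k);
  v = x; v(1) = v(1) + norm(x);
  R(k:m, k:n) = R(k:m, k:n) - (2 / (v' * v)) * v * (v' * R(k:m, k:n));
end
abs(diag(R))'                             % drops to roundoff after entry 6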

Some properties of $R_{11}$

At step $k$:
$$H_k \cdots H_1\, A\, E^{(k)} = \begin{pmatrix} R_{11}^{(k)} & R_{12}^{(k)} \\ 0 & A_{22}^{(k)} \end{pmatrix}.$$
Any column $a$ of $\begin{pmatrix} R_{12}^{(k)}(k,:) \\ A_{22}^{(k)} \end{pmatrix}$ satisfies $R_{11}^{(k)}(k,k) \ge \|a\|$. Moreover,
$$\|A\| \ge \|R_{11}^{(k)}\| \ge R_{11}^{(k)}(1,1) \ge R_{11}^{(k)}(k,k) \ge \sigma_{\min}(R_{11}^{(k)}) \ge \sigma_{\min}(A).$$
This implies that
$$\mathrm{cond}(A) \ge \mathrm{cond}(R_{11}^{(k)}) \ge \frac{R_{11}^{(k)}(1,1)}{R_{11}^{(k)}(k,k)}.$$
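A quick numerical check of the chain of inequalities (a sketch; abs is used because MATLAB's qr does not normalize the signs of the diagonal):

MATLAB:
A = randn(10, 8); [Q, R, E] = qr(A); k = 5;
R11 = R(1:k, 1:k);
vals = [norm(A), norm(R11), abs(R11(1,1)), abs(R11(k,k)), ...
        min(svd(R11)), min(svd(A))]
all(diff(vals) <= 1e-12)    % non-increasing, as the inequalities state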

Bad news

The quantity $\frac{R_{11}^{(k)}(1,1)}{R_{11}^{(k)}(k,k)}$ cannot be considered as an estimator of $\mathrm{cond}(R_{11}^{(k)})$, since the two can be of different orders of magnitude.

Example (Kahan's matrix of order $n$): for $\theta \in (0, \frac{\pi}{2})$, $c = \cos\theta$ and $s = \sin\theta$:

MATLAB: d=c.^[0:n-1]; M=diag(d)*(eye(n)-s*triu(ones(n),1));

For $n = 100$ and $\theta = \arcsin(0.2)$: $\sigma_{\min}(R_{11}) = 3.7\mathrm{e}{-9}$ while $R_{11}(100,100) = 1.3\mathrm{e}{-1}$.

Therefore a better estimate of $\mathrm{cond}(R_{11}^{(k)})$ is needed.
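The example is easy to reproduce (a sketch; the matrix is already upper triangular, so it is its own R factor up to signs):

MATLAB:
n = 100; theta = asin(0.2); c = cos(theta); s = sin(theta);
M = diag(c.^(0:n-1)) * (eye(n) - s * triu(ones(n), 1));
[abs(M(n, n)), min(svd(M))]    % ~ 1.3e-1 versus ~ 3.7e-9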

Incremental Condition Estimator (ICE) [Bischof 90], which is implemented in LAPACK: it estimates $\mathrm{cond}(R_{11}^{(k)})$ from an estimation of $\mathrm{cond}(R_{11}^{(k-1)})$. However, the strategy which consists in stopping the factorization when
$$\mathrm{cond}(R_{11}^{(k)}) < \tfrac{1}{\epsilon} \le \mathrm{cond}(R_{11}^{(k+1)})$$
may fail in very special situations.

Counterexample: $M$ = Kahan's matrix of order 30 with $\theta = \arccos(0.2)$. Then
$$\mathrm{cond}(R_{11}^{(16)}) < \tfrac{1}{\epsilon} < \mathrm{cond}(R_{11}^{(17)})$$
indicates a numerical rank equal to 16, although the numerical rank of $M$ computed by the SVD is 22.
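A sketch of the counterexample, assuming (as holds for Kahan matrices) that the pivoted QR leaves the columns in place, so that $R_{11}^{(k)}$ is just the leading $k \times k$ block:

MATLAB:
n = 30; theta = acos(0.2); c = cos(theta); s = sin(theta);
M = diag(c.^(0:n-1)) * (eye(n) - s * triu(ones(n), 1));
rank(M)                                        % SVD rank: the slide reports 22
conds = arrayfun(@(k) cond(M(1:k, 1:k)), 1:n);
find(conds >= 1/eps, 1) - 1                    % stopping rule: slide reports 16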

Pivoting strategies

Convert R to reveal the rank by pushing the singularities towards the right end:
- apply an Incremental Condition Estimator to evaluate $\sigma_{\max}$ and $\sigma_{\min}$ of $R_{1:i,\,1:i}$;
- if $\sigma_{\max}/\sigma_{\min} > \tau$, move column $i$ towards the right end and restore R to triangular form using Givens rotations (see the sketch below).
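The re-triangularization step can be sketched as follows: moving column i to the end makes R upper Hessenberg in columns i onwards, and planerot supplies the Givens rotations (in a real code the rotations would also be applied to Q or accumulated):

MATLAB:
function R = push_column_right(R, i)
  n = size(R, 2);
  R = R(:, [1:i-1, i+1:n, i]);       % cyclic shift: column i goes last
  for k = i:n-1
    [G, ~] = planerot(R(k:k+1, k));  % zero the subdiagonal entry R(k+1,k)
    R(k:k+1, k:n) = G * R(k:k+1, k:n);
  end
end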

Pivoting strategies (continued)

RRQR uses a set of thresholds to overcome numerical singularities. A predefined set of thresholds might fail:
$$A = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 10^{-6} & 1 & 1 \\ 0 & 0 & 10^{-3} & 10^{-3} \end{pmatrix}$$
gives rank = 2 if $\tau = \{10^7\}$, but rank = 3 if $\tau = \{10^4, 10^7\}$.

RRQR adapts the thresholds to avoid failure:
- upon completion, check whether $\|R_{22}\| < \frac{\sigma_{\max}}{\tau} \le \sigma_{\min}$;
- on failure, insert new thresholds and restart the algorithm.
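For intuition, the singular values of this example span widely separated scales, so where the rank is declared depends on which thresholds are supplied (a sketch, assuming the slide's entries denote $10^{-6}$ and $10^{-3}$):

MATLAB:
A = [2 0    0    0;
     0 1e-6 1    1;
     0 0    1e-3 1e-3];
svd(A)'    % ~ [2, 1.4, 1e-9]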

Numerical results

[Figure: condition number (log scale, $10^0$ to $10^{50}$) versus step for kahan(50, acos(0.2)), cond = 2.4721e+49, rank = 20, size 50x50, req = 0. Curves: smax/smin; cond from [Q,R,P] = qr(A); RRQR with thresholds 1e3, 1e7, 1e12, 1e25; RRQR with thresholds 1e3, 1e9, 1e25.]

Numerical results

[Figure: condition number (log scale, $10^0$ to $10^{50}$) versus step for kahan(50, acos(0.2)), cond = 2.4721e+49, rank = 20, size 50x50. Curves: smax/smin; cond from [Q,R,P] = qr(A); RRQR with threshold 1e30; RRQR with threshold 1e20; RRQR with threshold 1e25 and req = 2.]

Second strategy

The $(k+1)$-st column is usually not the worst one for the condition number. Can we select the column $j \ge k+1$ that is worst for the conditioning of $R_{11}$, and reject that worst column to the end? We then need to recompute the estimates of the singular values and vectors: Reverse ICE.

Second strategy

[Figure: singular value estimates (log scale, $10^0$ to $10^{20}$) versus index for a random matrix, n = 500. Curves: Exact; Estimated, tol = 1e-12; Estimated, tol = 1e-7.]

Conclusion

Work is still in progress for the case where R is sparse:
- ICE might fail, and so does the Reverse ICE;
- using the elimination tree, we can try to keep R as sparse as possible.