THE RELATION BETWEEN THE QR AND LR ALGORITHMS


SIAM J. MATRIX ANAL. APPL., Vol. 19, No. 2, pp. 551-555, April 1998
© 1998 Society for Industrial and Applied Mathematics

THE RELATION BETWEEN THE QR AND LR ALGORITHMS*

HONGGUO XU†

Abstract. For an Hermitian matrix the QR transform is diagonally similar to two steps of the LR transform. Even for non-Hermitian matrices the QR transform may be written in rational form.

Key words. QR algorithm, LR algorithm, triangular factorization, Cholesky factorization

AMS subject classification. 65F15

PII. S0895479896299937

1. Summary. In this section we assume that the reader is familiar with the LR and QR algorithms; the transformations themselves are presented in the next section. A slight variation of the LR algorithm that is suitable for positive definite Hermitian matrices is the Cholesky (LR) algorithm: $A = CC^*$ is mapped into $\bar A = C^*C$. In the positive definite Hermitian case, two steps of Cholesky yield the same matrix as one step of QR. At first glance this is surprising (how can LR produce a unitary similarity?) and the proof is sometimes given as an exercise in textbooks; see [4], [1], and [9]. Students are often left with the (false) impression that positive definiteness is essential.

In recent years understanding of these algorithms has improved. Both the LR and QR algorithms are instances of GR algorithms [10]. For all such algorithms, $k$ steps applied to $A$ are equivalent to a similarity driven by a factorization of $A^k$:

(1.1)    $A^k = G_k R_k, \qquad A \leftarrow G_k^{-1} A G_k.$

Consequently, two steps of LR on $A$ are equivalent to a similarity driven by $A^2$:

    $A^2 = LU, \qquad A \leftarrow L^{-1} A L.$

On the other hand, one step of QR on $A$ is equivalent (up to a diagonal similarity) to a similarity driven by $A^*A$:

    $A = QR, \qquad A^*A = R^*R \ (= LD^2L^*), \qquad A \leftarrow Q^* A Q = R A R^{-1}.$

If $A$ is Hermitian, then $A^*A = A^2$, and two steps of LR must be equivalent to one of QR. Despite these remarks it is still interesting to see the equivalence in detail, and that is the topic of section 2.
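The equivalence in the positive definite case is easy to check numerically. The following NumPy sketch (my illustration, not part of the paper; the helper names are arbitrary) applies two Cholesky-LR steps and one QR step to a random symmetric positive definite matrix and confirms that the results agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)          # symmetric positive definite test matrix

def qr_step(A):
    """One QR step: factor A = QR with diag(R) >= 0, return RQ = Q* A Q."""
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))          # flip signs so R has nonnegative diagonal
    return (s[:, None] * R) @ (Q * s)

def cholesky_step(A):
    """One Cholesky-LR step: factor A = CC*, return C*C = C^{-1} A C."""
    C = np.linalg.cholesky(A)
    return C.T @ C

assert np.allclose(qr_step(A), cholesky_step(cholesky_step(A)))
```

The sign normalization in `qr_step` matches the uniqueness convention of section 2: the QR factorization is made unique by requiring $R$ to have nonnegative diagonal entries.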
The catch is that LR can break down, so the more careful statement is that two LR steps (if they exist) are equivalent to one QR step (which always exists). What is of more than passing interest is that LR is entirely rational in operation, whereas QR requires square roots and is not rational. The remarks made above show that these square roots in QR are somehow not essential; QR may be better thought of as LR driven by $A^*A$. It is this viewpoint that leads to the various root-free QR algorithms that have been so successful for symmetric tridiagonal matrices. Four versions are described in [6] and an even faster one appeared recently in [3].

* Received by the editors March 1, 1996; accepted for publication (in revised form) by J. Varah April 17, 1997. http://www.siam.org/journals/simax/19-2/29993.html
† Department of Mathematics, Fudan University, Shanghai, 200433, P. R. China. Current address: Fakultät für Mathematik, TU Chemnitz-Zwickau, D-09107 Chemnitz, Germany (hxu@mathematik.tu-chemnitz.de). This work was supported in part by the National Natural Science Foundation of China.

For non-Hermitian matrices there is a similar rational version of QR, and it is described in section 3. We follow Householder conventions for notation, except that we denote the conjugate transpose of $F$ by $F^*$ instead of $F^H$. We ignore shifts because they complicate the analysis and add nothing to a theoretical paper.

2. Connection of QR to LR. Two algorithms that play an important role in matrix eigenvalue computations are the LR and QR algorithms. The former was discovered by Rutishauser in 1958 [7], and the latter was developed by Francis in 1959-1960 [2]. A formal derivation of QR was given in [5]. Both algorithms have been widely studied, and good references are [9], [8], and [6].

Recall the basic decompositions.

Triangular factorization (LU). If and only if $B \in \mathbb{C}^{n \times n}$ has nonzero leading principal minors of orders $1, 2, \ldots, n-1$, then $B$ has a unique decomposition $B = LDU$, where $L$ is unit lower triangular, $D$ is diagonal, and $U$ is unit upper triangular.

Gram-Schmidt factorization (QR). Every $B \in \mathbb{C}^{n \times n}$ may be written $B = QR$, where $Q^* = Q^{-1}$ and $R$ is upper triangular with nonnegative diagonal entries. The factorization is unique if and only if the columns of $B$ are linearly independent.

From the basic factorizations come the basic transforms.

LR transform. If $B = LDU$, then its LR transform is defined by $\tilde B = DUL = L^{-1} B L = (DU) B (DU)^{-1}$. Here is the irritating ambiguity in LR; the definition $\tilde B = ULD$ would be equally legitimate. For theoretical purposes one could consider the equivalence class of all diagonal similarities on a given matrix.

QR transform. If $B = QR$ (uniquely), then its QR transform is defined by $\hat B = RQ = Q^* B Q = R B R^{-1}$.

Remark 1. If $A$ is Hermitian and positive definite, then $A = LD^2L^*$ and its Cholesky transform is given by $\bar A = DL^*LD$. Note that $\bar A = D^{-1} \tilde A D$ is a diagonal similarity transformation of the LR transform $\tilde A = D^2L^*L$, but it uses square roots. LR destroys the Hermitian property, but only by a diagonal similarity.
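The two basic transforms take only a few lines of code. The NumPy sketch below is my illustration, not the paper's: `ldu` is a textbook unpivoted Doolittle factorization (it assumes the nonzero-leading-minors condition above), and `lr_transform` / `qr_transform` follow the definitions just given, with the sign fix normalizing $R$ to nonnegative diagonal as the uniqueness statement requires.

```python
import numpy as np

def ldu(B):
    """Unpivoted triangular factorization B = L D U with L, U unit triangular.
    Exists (uniquely) when the leading principal minors of orders 1..n-1 are nonzero."""
    n = B.shape[0]
    L, W = np.eye(n), B.astype(float).copy()
    for k in range(n - 1):
        L[k+1:, k] = W[k+1:, k] / W[k, k]
        W[k+1:, :] -= np.outer(L[k+1:, k], W[k, :])
    d = np.diag(W).copy()
    return L, np.diag(d), W / d[:, None]

def lr_transform(B):
    """LR transform: B = L D U  ->  B~ = D U L = L^{-1} B L."""
    L, D, U = ldu(B)
    return D @ U @ L

def qr_transform(B):
    """QR transform: B = Q R with diag(R) >= 0  ->  B^ = R Q = Q* B Q."""
    Q, R = np.linalg.qr(B)
    s = np.sign(np.diag(R))          # normalize so R has nonnegative diagonal
    return (s[:, None] * R) @ (Q * s)

# Both transforms are similarities, so they preserve the spectrum.
B = np.array([[4., 1., 2.], [1., 3., 0.], [2., 0., 5.]])  # nonzero leading minors
assert np.isclose(np.trace(lr_transform(B)), np.trace(B))
assert np.allclose(np.linalg.eigvalsh(qr_transform(B)), np.linalg.eigvalsh(B))
```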
Denote the Cholesky transform of $A$ by $\bar A$, the LR transform of $A$ by $\tilde A$, and the result of two LR steps by $\tilde{\tilde A}$. If $A$ is Hermitian and positive definite, then $\bar{\bar A} = \hat A$: two steps of Cholesky equal one of QR. However, the positive definite property is not essential, as the following result shows. All $L$'s are unit lower triangular and all $D$'s are diagonal and real. For completeness we include all the diagonal matrices.

THEOREM 2.1. If $A$ is Hermitian and permits triangular factorization, then $\tilde{\tilde A}$ is diagonally similar to $\hat A$.

Proof. By hypothesis $A = L_1 D_1 L_1^*$, and so $\tilde A = D_1 L_1^* L_1$. Since $L_1^* L_1$ is positive definite, it permits triangular factorization

(2.1)    $L_1^* L_1 = L_2 D_2^2 L_2^*$    ($D_2$ positive).

Consequently, the triangular factorization of $\tilde A$ is

    $\tilde A = (D_1 L_2 D_1^{-1})(D_1 D_2^2) L_2^*.$

Thus,

    $\tilde{\tilde A} = D_1 D_2^2 L_2^* D_1 L_2 D_1^{-1} = (D_1 D_2) M (D_1 D_2)^{-1},$

where

    $M := D_2 L_2^* D_1 L_2 D_2.$

It remains to show that $M$ is similar to $\hat A$ with a diagonal unitary transformation. Rewrite (2.1) as

    $I = (L_1^{-*} L_2 D_2)(D_2 L_2^* L_1^{-1}).$

Since $D_2$ is real,

(2.2)    $Q = L_1^{-*} L_2 D_2$

is unitary. Use $Q^* = Q^{-1}$ to obtain another triangular factorization of $Q$:

(2.3)    $Q = L_1 L_2^{-*} D_2^{-1}.$

Now use $Q$ to rewrite $M$ as

(2.4)    $M = (D_2 L_2^* L_1^{-1})(L_1 D_1 L_1^*)(L_1^{-*} L_2 D_2) = Q^* A Q.$

Finally, using (2.3),

    $A = L_1 D_1 L_1^* = (L_1 L_2^{-*} D_2^{-1})(D_2 L_2^* D_1 L_1^*) = Q \operatorname{sign}(D_1) \bigl[\operatorname{sign}(D_1) D_2 L_2^* D_1 L_1^*\bigr] = Q \operatorname{sign}(D_1) R$

reveals the QR factorization of $A$, since $R$ has nonnegative diagonal. By (2.4),

    $\hat A = \operatorname{sign}(D_1)\, M \operatorname{sign}(D_1)^{-1} = \operatorname{sign}(D_1) (D_1 D_2)^{-1}\, \tilde{\tilde A}\, (D_1 D_2) \operatorname{sign}(D_1)^{-1},$

as claimed.
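The construction in the proof can be traced numerically. The sketch below (my illustration, with arbitrary helper names; `ldl` is a hand-rolled unpivoted $LDL^*$ factorization) runs it on a 2-by-2 indefinite Hermitian matrix and checks (2.2), (2.4), and the conclusion of Theorem 2.1:

```python
import numpy as np

A = np.array([[1., 2.], [2., 1.]])   # Hermitian, indefinite (eigenvalues 3 and -1)

def ldl(A):
    """Unpivoted A = L D L* for Hermitian A; assumes nonzero leading minors."""
    n = A.shape[0]
    L, d, S = np.eye(n), np.zeros(n), A.astype(float).copy()
    for k in range(n):
        d[k] = S[k, k]
        L[k+1:, k] = S[k+1:, k] / d[k]
        S[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k]) * d[k]
    return L, np.diag(d)

L1, D1 = ldl(A)

# (2.1): Cholesky of the positive definite L1* L1 gives L2 D2 as one factor C.
C = np.linalg.cholesky(L1.T @ L1)    # C = L2 D2, diag(C) = diag(D2) > 0

# M := D2 L2* D1 L2 D2 = C* D1 C, and Q := L1^{-*} L2 D2 is unitary by (2.2).
M = C.T @ D1 @ C
Q = np.linalg.solve(L1.T, C)
assert np.allclose(Q.T @ Q, np.eye(2))     # Q is unitary
assert np.allclose(M, Q.T @ A @ Q)         # (2.4)

# One QR step on A, with R normalized to nonnegative diagonal.
Qr, R = np.linalg.qr(A)
s = np.sign(np.diag(R))
A_hat = (s[:, None] * R) @ (Qr * s)

# Theorem 2.1: A_hat = sign(D1) M sign(D1)^{-1}; here sign(D1) is its own inverse.
S1 = np.sign(D1)
assert np.allclose(A_hat, S1 @ M @ S1)
```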

Remark 2. When the LR transform is to be applied to an Hermitian matrix, it is possible to modify the algorithm so that the Hermitian property is restored after two steps. In the notation used above,

    $\tilde A = D_1 L_1^* L_1 = D_1 (L_2 D_2)(L_2 D_2)^*,$

and one then redefines the second transform by

    $\tilde{\tilde A} := (L_2 D_2)^* D_1 (L_2 D_2) = M.$

Such a modification forces a different mapping for odd and even steps and employs square roots.

The advantage of Theorem 2.1 over the explanation (1.1) mentioned in section 1 is that it reveals explicitly, in (2.2) and (2.3), how the triangular factors $L_1$ and $L_2 D_2$ from LR yield the triangular factorization of $Q$ from QR.

Remark 3. The QR transform does not require that $A$ permit triangular factorization. In fact, $\hat A$ cannot be derived from two steps of LR when, and only when, the orthogonal factor $Q$ does not permit the triangular factorization $Q = L_1 D_2^{-1} (D_2 L_2^{-*} D_2^{-1})$. In many cases, but not all, a well-chosen symmetric permutation $A \to \Pi A \Pi^t$ will give rise to a new $Q$ that permits triangular factorization.

3. The non-Hermitian case. For general matrices the LR transform preserves band structure while the QR transform destroys the upper bandwidth, so the two procedures are not equivalent. Nevertheless, it is legitimate to ask whether the QR transform can be represented in an alternative form related to triangular factorization. The answer is yes. The key to extending the result of the previous section is to factor the given matrix $B$ with a congruence transformation $B = FCF^*$. This appears to be a strange representation of a non-Hermitian matrix.

Suppose $B$ permits triangular factorization

    $B = L_1 D_1 U_1.$

Rewrite this as

    $B = L_1 (D_1 U_1 L_1^{-*}) L_1^*$

and note that the middle factor is upper triangular instead of diagonal. Define, as earlier,

    $\tilde B = (D_1 U_1 L_1^{-*})(L_1^* L_1),$

and use the Cholesky factorization $L_1^* L_1 = (L_2 D_2)(L_2 D_2)^*$ to define

    $\bar B = D_2 L_2^* (D_1 U_1 L_1^{-*}) L_2 D_2 = (D_2 L_2^* L_1^{-1})(L_1 D_1 U_1)(L_1^{-*} L_2 D_2) = Q^* B Q,$

using (2.2).
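The non-Hermitian construction can be exercised the same way. The following sketch (again my illustration, not the paper's; `ldu` re-implements the unpivoted factorization under the nonzero-leading-minors assumption) verifies the rational form $\bar B = Q^* B Q$ for a non-symmetric matrix, and that the unitary QR factor of $B$ comes from its lower triangular factor $L_1$ alone:

```python
import numpy as np

B = np.array([[1., 2.], [3., 4.]])   # non-symmetric, leading minors 1 and -2

def ldu(B):
    """Unpivoted B = L1 D1 U1 with L1, U1 unit triangular."""
    n = B.shape[0]
    L, W = np.eye(n), B.astype(float).copy()
    for k in range(n - 1):
        L[k+1:, k] = W[k+1:, k] / W[k, k]
        W[k+1:, :] -= np.outer(L[k+1:, k], W[k, :])
    d = np.diag(W).copy()
    return L, np.diag(d), W / d[:, None]

L1, D1, U1 = ldu(B)

# Cholesky of L1* L1 supplies L2 D2 as a single factor C, as in (2.1).
C = np.linalg.cholesky(L1.T @ L1)          # C = L2 D2
Q = np.linalg.solve(L1.T, C)               # Q = L1^{-*} L2 D2, unitary by (2.2)

# Rational form of the QR transform: B_bar = C* (D1 U1 L1^{-*}) C = Q* B Q.
mid = D1 @ U1 @ np.linalg.inv(L1.T)        # the upper triangular middle factor
B_bar = C.T @ mid @ C
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(B_bar, Q.T @ B @ Q)

# The unitary QR factor of B is Q sign(D1), determined by L1 up to signs.
Qb, R = np.linalg.qr(B)
s = np.sign(np.diag(R))                    # normalize diag(R) >= 0
assert np.allclose(Qb * s, Q @ np.sign(D1))
```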

Moreover,

    $B = L_1 D_1 U_1 = (L_1 L_2^{-*} D_2^{-1})(D_2 L_2^* D_1 U_1) = Q \operatorname{sign}(D_1) \bigl[\operatorname{sign}(D_1) D_2 L_2^* D_1 U_1\bigr]$

is the QR factorization of $B$. Now, in general,

    $\operatorname{sign}(D_1) = \operatorname{diag}(\exp(i\varphi_1), \ldots, \exp(i\varphi_n)).$

Another way to interpret these expressions is to observe that, ignoring diagonal unitary matrices, the Q factor of $B$ is the Q factor of its lower triangular factor $L_1$. Note that

    $\tilde B = L_1^{-1} B L_1, \qquad \bar B = (D_2 L_2^*)\, \tilde B\, (D_2 L_2^*)^{-1},$

and so the nice unitary matrix $Q$ is again split into its two triangular factors $L_1$ and $(D_2 L_2^*)^{-1}$.

Acknowledgments. The author would like to express his gratitude to Professors E.-X. Jiang, V. Mehrmann, B. Parlett, and D. Watkins for their encouragement and suggestions. He also thanks the referees for their valuable comments and criticism. Special thanks should be given to one of the referees, who so kindly helped the author revise the manuscript.

REFERENCES

[1] J. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, PA, 1997.
[2] J. G. F. Francis, The QR transformation: A unitary analogue to the LR transformation, Parts I and II, Comput. J., 4 (1961), pp. 265-272 and pp. 332-345.
[3] K. Gates and W. B. Gragg, Notes on TQR algorithms, J. Comput. Appl. Math., 86 (1997), pp. 195-203.
[4] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed., The Johns Hopkins University Press, Baltimore, MD, 1989.
[5] V. N. Kublanovskaya, On some algorithms for the solution of the complete eigenvalue problem, U.S.S.R. Comput. Math. and Math. Phys., 3 (1961), pp. 637-657.
[6] B. N. Parlett, The Symmetric Eigenvalue Problem, Prentice Hall, Englewood Cliffs, NJ, 1980.
[7] H. Rutishauser, Solution of eigenvalue problems with the LR transformation, Nat. Bur. Standards Appl. Math. Ser., 49 (1958), pp. 47-81.
[8] D. S. Watkins, Understanding the QR algorithm, SIAM Rev., 24 (1982), pp. 427-440.
[9] D. S. Watkins, Fundamentals of Matrix Computations, John Wiley, New York, 1991.
[10] D. S. Watkins and L. Elsner, Convergence of algorithms of decomposition type for the eigenvalue problem, Linear Algebra Appl., 143 (1991), pp. 19-47.