QR-decomposition

The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which

  A = QR.

In Matlab: [Q,R] = qr(A);

Note. The QR-decomposition is unique up to a change of signs of the columns of Q: A = (QD)(D*R) for any diagonal D with |D| = I.
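
A quick Matlab check of the decomposition and of this sign ambiguity (a minimal sketch; the 5 × 3 test matrix is arbitrary):

  A = randn(5,3);            % any n-by-k test matrix, here n = 5, k = 3
  [Q,R] = qr(A);             % Q is 5-by-5 unitary, R is 5-by-3 upper triangular
  norm(A - Q*R)              % of the order of machine precision
  D = diag([1 -1 1 -1 1]);   % diagonal sign matrix, |D| = I
  norm(A - (Q*D)*(D*R))      % (QD)(D*R) is an equally valid QR-decomposition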

QR-decomposition

The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which

  A = QR.

If A is n × k with column rank ℓ and ℓ ≤ k ≤ n, then the economical QR-decomposition is an n × ℓ orthonormal matrix Q and an ℓ × k upper triangular matrix R for which

  A = QR.

In Matlab: [Q,R] = qr(A,0);

QR-decomposition

The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which

  A = QR.

If A is n × k with column rank ℓ and ℓ ≤ k ≤ n, then the economical QR-decomposition is an n × ℓ orthonormal matrix Q and an ℓ × k upper triangular matrix R for which

  A = QR.

Note. The columns of Q form an orthonormal basis of the space spanned by the columns of A: the QR-decomposition represents the results of the Gram-Schmidt process.
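
To make the Gram-Schmidt connection concrete, a minimal classical Gram-Schmidt sketch in Matlab (assuming A has full column rank; Householder reflections, discussed next, are numerically preferable in practice):

  function [Q,R] = cgs(A)
  % Classical Gram-Schmidt: A = Q*R, Q with orthonormal columns, R upper triangular
  [n,k] = size(A);
  Q = zeros(n,k); R = zeros(k,k);
  for j = 1:k
    R(1:j-1,j) = Q(:,1:j-1)'*A(:,j);      % components along previous q's
    v = A(:,j) - Q(:,1:j-1)*R(1:j-1,j);   % orthogonalize column j
    R(j,j) = norm(v);
    Q(:,j) = v/R(j,j);                    % normalize
  end
  end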

QR-decomposition

The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which

  A = QR.

Theorem. The QR-decomposition can be stably computed with Householder reflections.

QR-decomposition

The QR-decomposition of an n × k matrix A, k ≤ n, is an n × n unitary matrix Q and an n × k upper triangular matrix R for which

  A = QR.

Theorem. The QR-decomposition can be stably computed with Householder reflections: let R̃ be the computed R and Q̃ ≡ (H_{v_k} ⋯ H_{v_1})*, with H_{v_j} the Householder reflection as actually used in step j. Then A + Δ_A = Q̃ R̃ for some Δ_A with ‖Δ_A‖_F ≤ n k u ‖A‖_F (u the machine precision).

Note. The claim is not that Q̃ is close to the Q that would have been obtained in exact arithmetic, but that Q̃ is unitary (a product of Householder reflections).
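
A bare-bones Householder QR in Matlab, to show the mechanism the theorem refers to (a sketch for real A; the library qr is the stable, optimized version to use in practice):

  function [Q,R] = house_qr(A)
  % QR by Householder reflections H_v = I - 2*v*v' (with norm(v) = 1)
  [n,k] = size(A);
  Q = eye(n); R = A;
  for j = 1:k
    x = R(j:n,j);
    v = x;
    v(1) = v(1) + sign(x(1) + (x(1)==0))*norm(x);  % sign chosen to avoid cancellation
    if norm(v) > 0
      v = v/norm(v);
      R(j:n,:) = R(j:n,:) - 2*v*(v'*R(j:n,:));     % apply H_v from the left
      Q(:,j:n) = Q(:,j:n) - 2*(Q(:,j:n)*v)*v';     % accumulate Q = H_v1*...*H_vk
    end
  end
  end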

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);
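
Note that for real A, Matlab's schur(A) returns by default the real Schur form, which is only quasi-triangular (2 × 2 diagonal blocks for complex conjugate eigenvalue pairs); the triangular S above is the complex Schur form:

  A = randn(6);                 % a real test matrix (eigenvalues may be complex)
  [U,S] = schur(A,'complex');   % complex Schur form: S is upper triangular
  norm(A*U - U*S)               % of the order of machine precision
  norm(U'*U - eye(6))           % U is unitary
  diag(S)                       % the eigenvalues of A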

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

Theorem. If ST = TΛ is the eigenvalue decomposition of S, i.e., T is non-singular and Λ is diagonal, then A(UT) = (UT)Λ is the eigenvalue decomposition of A. In particular, Λ(A) = Λ(S) = diag(S) and C_2(T) = C_2(UT).

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

The columns u_j ≡ U e_j of U are called Schur vectors.

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

Observation. Many techniques that exploit the eigenvalue decomposition of A can be based on the Schur decomposition as well. For practical computations, the Schur decomposition is preferable, since it is stable: C_2(U) = 1, while C_2(UT) can be huge.

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

Note. The first column u_1 ≡ U e_1 of U is an eigenvector of A with eigenvalue λ_1 ≡ e_1* S e_1.

Proof. S is upper triangular: S e_1 = λ_1 e_1.

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

Note. The last column u_n ≡ U e_n of U is an eigenvector of A* with eigenvalue λ̄_n, where λ_n ≡ e_n* S e_n.

Proof. S* is lower triangular: S* e_n = λ̄_n e_n.

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

Note. The second column u_2 of U is an eigenvector of A_1 ≡ (I − u_1 u_1*) A (I − u_1 u_1*) with eigenvalue λ_2 ≡ e_2* S e_2.

Proof. S is upper triangular: S e_2 = α e_1 + λ_2 e_2 for α = S_{1,2}. Hence, USU* u_2 = α u_1 + λ_2 u_2.

Schur decomposition

The Schur decomposition of an n × n matrix A is an n × n unitary matrix U and an n × n upper triangular matrix S such that

  A = USU*, or, equivalently, AU = US.

In Matlab: [U,S] = schur(A);

Note. The second column u_2 of U is an eigenvector of A_1 ≡ (I − u_1 u_1*) A (I − u_1 u_1*) with eigenvalue λ_2 ≡ e_2* S e_2. In A_1, the eigenvector u_1 is deflated from A.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Theorem. With a proper shift strategy: U_k → U, U unitary; S_k → S, S upper triangular; AU = US: the QR-algorithm converges to the Schur decomposition.
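
A direct Matlab transcription of this loop (a sketch with U_0 = I, a fixed number of steps, and the naive shift σ_k = S_{k−1}(n,n); shift selection is discussed below):

  n = 6; A = randn(n);
  U = eye(n); S = A;                 % U_0 = I, so S_0 = U_0'*A*U_0 = A
  for k = 1:100
    sigma = S(n,n);                  % 1) a simple shift choice
    [Q,R] = qr(S - sigma*eye(n));    % 2) QR-decomposition of S_{k-1} - sigma*I
    S = R*Q + sigma*eye(n);          % 3) S_k = R_k*Q_k + sigma*I
    U = U*Q;                         % 4) U_k = U_{k-1}*Q_k
  end
  norm(A*U - U*S)                    % lemma b): A*U_k = U_k*S_k holds throughout
  norm(tril(S,-1))                   % strictly lower part: small if S has converged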

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k.

Proof. S_0 Q_1 = (S_0 − σ_1 I + σ_1 I) Q_1 = (Q_1 R_1 + σ_1 I) Q_1 = Q_1 (R_1 Q_1 + σ_1 I) = Q_1 S_1.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k.

Proof. S_{k−1} Q_k = Q_k S_k, whence A U_k = U_0 S_0 Q_1 Q_2 ⋯ Q_k = U_0 Q_1 S_1 Q_2 ⋯ Q_k = ⋯ = U_k S_k.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k.

Proof. (A − σ_k I) U_{k−1} = U_{k−1} (S_{k−1} − σ_k I) = U_{k−1} Q_k R_k = U_k R_k.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Proof. Take the adjoint of c): U_k* (A − σ_k I) = R_k U_{k−1}*.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Corollary. With x_k ≡ U_k e_1 and τ_k ≡ e_1* R_k e_1, we have (A − σ_k I) x_{k−1} = τ_k x_k (the shifted power method).

Proof. R_k is upper triangular ⇒ R_k e_1 = τ_k e_1.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Corollary. With x_k ≡ U_k e_1 and τ_k ≡ e_1* R_k e_1, we have (A − σ_k I) x_{k−1} = τ_k x_k (the shifted power method). With p(λ) ≡ (λ − σ_k) ⋯ (λ − σ_1), we have x_k = τ p(A) x_0 for some τ ∈ ℂ.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Corollary. With x_k ≡ U_k e_1 and τ_k ≡ e_1* R_k e_1, we have (A − σ_k I) x_{k−1} = τ_k x_k (the shifted power method). Note that λ^(k) ≡ x_k* A x_k = e_1* U_k* A U_k e_1 = e_1* S_k e_1.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Suppose v is the dominant eigenvector of A − σI. With σ_k = σ and x_k ≡ U_k e_1, for k → ∞ we have that ∠(x_k, v) → 0, λ^(k) ≡ e_1* S_k e_1 → λ, ‖S_k e_1 − λ^(k) e_1‖ → 0, where λ is the eigenvalue of A associated with v.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Corollary. With x_k ≡ U_k e_n and τ_k ≡ e_n* R_k e_n, we have (A − σ_k I)* x_k = τ̄_k x_{k−1} (Shift & Invert).

Proof. R_k* is lower triangular ⇒ R_k* e_n = τ̄_k e_n.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Corollary. With x_k ≡ U_k e_n and τ_k ≡ e_n* R_k e_n, we have (A − σ_k I)* x_k = τ̄_k x_{k−1} (Shift & Invert). Note that λ^(k) ≡ x_k* A x_k = e_n* U_k* A U_k e_n = e_n* S_k e_n.

QR-algorithm

Select U_0 unitary. Compute S_0 = U_0* A U_0.
for k = 1, 2, ... do
  1) Select a shift σ_k
  2) Compute Q_k unitary and R_k upper triangular such that S_{k−1} − σ_k I = Q_k R_k
  3) Compute S_k = R_k Q_k + σ_k I
  4) U_k = U_{k−1} Q_k
end for

Lemma. a) U_k is unitary; b) A U_k = U_k S_k; c) (A − σ_k I) U_{k−1} = U_k R_k; d) (A − σ_k I)* U_k = U_{k−1} R_k*.

Suppose v is the dominant eigenvector of (A − σI)^{−*}. With σ_k = σ and x_k ≡ U_k e_n, for k → ∞ we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n* S_k e_n → λ, ‖e_n* S_k − λ^(k) e_n*‖ → 0, where λ is the eigenvalue of A associated with v.

Selecting shifts

The QR-algorithm incorporates the Shift & Invert power method (for A*).

Rayleigh Quotient Iteration is Shift & Invert with shifts equal to the Rayleigh quotients, σ_k = x_{k−1}* A x_{k−1}. In this case, with x_k = U_k e_n, σ_k = e_n* S_{k−1} e_n.

Theorem. The asymptotic convergence of RQI is quadratic.

By "the asymptotic convergence of this method is quadratic" we mean: the method produces sequences (x_k) that converge provided x_0 is close enough to some (limit) eigenvector and, for k large, the error reduces quadratically.
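
A stand-alone Matlab sketch of Rayleigh Quotient Iteration (assuming the shifted system stays solvable; a practical code would guard against a numerically singular A − σI):

  function [x,sigma] = rqi(A, x, maxit)
  % Rayleigh Quotient Iteration: Shift & Invert with sigma = x'*A*x
  n = size(A,1);
  x = x/norm(x);
  for k = 1:maxit
    sigma = x'*A*x;                 % Rayleigh quotient of the current iterate
    y = (A - sigma*eye(n)) \ x;     % shift-and-invert step
    x = y/norm(y);
  end
  end

With A = [0 1; 1 0] and x = e_1 this sketch cycles with period 2, which is exactly the stagnation exhibited on the next slide.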

Selecting shifts

The QR-algorithm incorporates the Shift & Invert power method (for A*).

Rayleigh Quotient Iteration is Shift & Invert with shifts equal to the Rayleigh quotients, σ_k = x_{k−1}* A x_{k−1}. In this case, with x_k = U_k e_n, σ_k = e_n* S_{k−1} e_n.

Theorem. The asymptotic convergence of RQI is quadratic.

RQI need not converge, as the following example shows.

Example. With A = [0 1; 1 0] and x_0 = e_1, RQI produces the sequence (x_k) = (e_1, e_2, e_1, e_2, ...). Note that x_k* A x_k = 0.

Observation. The shifts σ_k = e_n* S_{k−1} e_n may lead to stagnation.

Selecting shifts

The QR-algorithm incorporates the Shift & Invert power method (for A*).

The Wilkinson shift is the eigenvalue of smallest absolute value of the 2 × 2 lower-right block of S_k.

Selecting shifts

The QR-algorithm incorporates the Shift & Invert power method (for A*).

The Wilkinson shift is the eigenvalue of smallest absolute value of the 2 × 2 lower-right block of S_k.

Theorem. For σ_k take the Wilkinson shift, and take x_k ≡ U_k e_n. Then, for some eigenpair (v, λ) of A, we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n* S_k e_n → λ, ‖e_n* S_k − λ^(k) e_n*‖ → 0. The convergence is quadratic (and cubic if A is Hermitian).
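
As a Matlab helper, the shift exactly as defined here (the function name is ours):

  function sigma = wilkinson_shift(S)
  % Eigenvalue of smallest absolute value of the lower-right 2-by-2 block of S
  m = size(S,1);
  ev = eig(S(m-1:m, m-1:m));
  [~,i] = min(abs(ev));
  sigma = ev(i);
  end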

Selecting shifts

The QR-algorithm incorporates the Shift & Invert power method (for A*).

The Wilkinson shift is the eigenvalue of smallest absolute value of the 2 × 2 lower-right block of S_k.

Theorem. For σ_k take the Wilkinson shift, and take x_k ≡ U_k e_n. Then, for some eigenpair (v, λ) of A, we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n* S_k e_n → λ, ‖e_n* S_k − λ^(k) e_n*‖ → 0. The convergence is quadratic (and cubic if A is Hermitian).

The QR-algorithm: if ‖e_n* S_k − λ^(k) e_n*‖_2 ≤ ɛ, then accept U_k e_n as an eigenvector of A and deflate: delete the last row and column of S_k and continue (the search for an eigenpair of the lower-dimensional matrix).

Selecting shifts

The QR-algorithm incorporates the Shift & Invert power method (for A*).

The Wilkinson shift is the eigenvalue of smallest absolute value of the 2 × 2 lower-right block of S_k.

Theorem. For σ_k take the Wilkinson shift, and take x_k ≡ U_k e_n. Then, for some eigenpair (v, λ) of A, we have that ∠(x_k, v) → 0, λ^(k) ≡ e_n* S_k e_n → λ, ‖e_n* S_k − λ^(k) e_n*‖ → 0. The convergence is quadratic (and cubic if A is Hermitian).

The QR-algorithm: if ‖e_n* S_k − λ^(k) e_n*‖_2 ≤ ɛ, then accept U_k e_n as the nth Schur vector of A and deflate: delete the last row and column of S_k and continue (the search for an eigenpair of the lower-dimensional matrix).

Deflation

Consider the kth step of the QR-algorithm. Put u_n ≡ U_k e_n. Note that (I − u_n u_n*) U_k = U_k (I − e_n e_n*). Hence

  (I − u_n u_n*) A (I − u_n u_n*) U_k = U_k (I − e_n e_n*) S_k (I − e_n e_n*).

Deflating the nth Schur vector from A can easily be performed within the QR-algorithm: simply delete the last row and the last column of the active matrix S_k (assuming that U_k e_n is the nth Schur vector to the required accuracy).

QR-algorithm

Select U unitary. S = U* A U, m = size(A,1), N = [1 : m], I = I_m.
repeat until m = 1
  1) Select the Wilkinson shift σ
  2) [Q, R] = qr(S − σ I)
  3) S ← R Q + σ I
  4) U(:, N) ← U(:, N) Q
  5) if |S(m, m−1)| ≤ ɛ |S(m, m)|   %% Deflate
       m ← m − 1, N ← [1 : m], I ← I_m
       S ← S(N, N)
     end if
end repeat

Theorem. U_k → U with U unitary; S_k → S with S upper triangular; AU = US.
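
The pseudocode translates almost line by line into runnable Matlab; a dense, unoptimized sketch (using the wilkinson_shift helper above; eps_tol is the deflation tolerance ɛ):

  function [U,S] = qr_alg(A, eps_tol)
  % Shifted QR-algorithm with deflation; returns A*U ~ U*S, S upper triangular
  n = size(A,1);
  U = eye(n); S = A;                  % U_0 = I, S_0 = A
  m = n;                              % size of the active block S(1:m,1:m)
  while m > 1
    sigma = wilkinson_shift(S(1:m,1:m));
    [Q,R] = qr(S(1:m,1:m) - sigma*eye(m));
    S(1:m,1:m) = R*Q + sigma*eye(m);
    S(1:m,m+1:n) = Q'*S(1:m,m+1:n);   % keep the already deflated columns consistent
    U(:,1:m) = U(:,1:m)*Q;
    % S here is not reduced to Hessenberg form, so we test the whole last
    % row of the active block (in Hessenberg form only S(m,m-1) can be nonzero):
    if norm(S(m,1:m-1)) <= eps_tol*abs(S(m,m))
      S(m,1:m-1) = 0;                 % discard the negligible entries
      m = m - 1;                      % deflate
    end
  end
  end

For example, A = randn(6); [U,S] = qr_alg(A, 1e-12); norm(A*U - U*S) should be of the order of machine precision.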

Observations.
- The QR-algorithm converges quickly to the eigenvalue targeted by the Wilkinson shift (on average, 8 steps of the QR-algorithm seem to be required for accurate detection of the first eigenvalue).
- While converging to a target eigenvalue, other eigenvalues are also approximated. Therefore, the next eigenvalues are detected more quickly (from the 5th eigenvalue on, 2 steps appear to be sufficient).
- All eigenvalues are computed (according to multiplicity).
- Computation of all eigenvalues (actually, of the Schur decomposition of A) requires approximately 2n steps of the QR-algorithm.
- The order in which the eigenvalues are computed cannot be controlled.

Initiation of the QR-algorithm

Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0* A U_0 is upper Hessenberg.

Start of the QR-algorithm: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.

Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.
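
Matlab exposes this reduction directly:

  A = randn(6);
  [U0,S0] = hess(A);       % S0 = U0'*A*U0 is upper Hessenberg, U0 unitary
  norm(U0'*A*U0 - S0)      % of the order of machine precision
  norm(tril(S0,-2))        % zero below the first subdiagonal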

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

R_1 = G_1 R_0. Here, G_1 coincides with the identity except for the 2 × 2 block

  [c_1 s_1; −s_1 c_1] = [cos(φ_1) sin(φ_1); −sin(φ_1) cos(φ_1)]

in rows and columns 1, 2; it annihilates the subdiagonal entry (2,1). Empty matrix entries are 0.
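
In Matlab, one shifted QR step on a real upper Hessenberg S via Givens rotations might look as follows (a sketch; it assumes no exact zero is hit on the subdiagonal, in which case one would deflate instead):

  function Snew = hess_qr_step(S, sigma)
  % One step: S - sigma*I = QR via n-1 Givens rotations, Snew = R*Q + sigma*I.
  n = size(S,1);
  R = S - sigma*eye(n);
  c = zeros(n-1,1); s = zeros(n-1,1);    % store only the cosines and sines
  for j = 1:n-1                          % left sweep: R_j = G_j*R_{j-1}
    r = hypot(R(j,j), R(j+1,j));
    c(j) = R(j,j)/r; s(j) = R(j+1,j)/r;
    R(j:j+1, j:n) = [c(j) s(j); -s(j) c(j)] * R(j:j+1, j:n);  % zero R(j+1,j)
  end
  for j = 1:n-1                          % right sweep: R*G_1'*...*G_{n-1}'
    R(1:j+1, j:j+1) = R(1:j+1, j:j+1) * [c(j) s(j); -s(j) c(j)]';
  end
  Snew = R + sigma*eye(n);               % upper Hessenberg again
  end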

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

R_2 = G_2 R_1. Here, G_2 coincides with the identity except for the 2 × 2 block

  [c_2 s_2; −s_2 c_2] = [cos(φ_2) sin(φ_2); −sin(φ_2) cos(φ_2)]

in rows and columns 2, 3; it annihilates the entry (3,2).

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

R_3 = G_3 R_2. Here, G_3 coincides with the identity except for the 2 × 2 block

  [c_3 s_3; −s_3 c_3] = [cos(φ_3) sin(φ_3); −sin(φ_3) cos(φ_3)]

in rows and columns 3, 4; it annihilates the entry (4,3).

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

R_4 = G_4 R_3. Here, G_4 coincides with the identity except for the 2 × 2 block

  [c_4 s_4; −s_4 c_4] = [cos(φ_4) sin(φ_4); −sin(φ_4) cos(φ_4)]

in rows and columns 4, 5; it annihilates the entry (5,4).

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

Then R = R_{n−1} and Q = G_1* ⋯ G_{n−1}*.

Note. Q need not be formed explicitly: it suffices to store the sequences of cosines and sines.

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

Then R = R_{n−1} and Q = G_1* ⋯ G_{n−1}*, and S̃ = R G_1* ⋯ G_{n−1}*.

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

G_3 (R_2) G_1*: the left and the right rotations can be interleaved. The right multiplication by G_1* (acting on columns 1, 2) re-introduces a nonzero in the subdiagonal position (2,1), while G_3 continues the left sweep on rows 3, 4.

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

G_3 (R_2) G_1*: each right multiplication fills one subdiagonal entry, which the subsequent rotations push down the matrix: chasing the bulge.

Upper Hessenberg matrices

Theorem. If S is an upper Hessenberg matrix and S = QR is the QR-decomposition, then Q is upper Hessenberg, and S̃ ≡ RQ and S̃ + σI are upper Hessenberg.

Q can be obtained as the product of n − 1 Givens rotations: with R_0 ≡ S, R_j ≡ G_j R_{j−1} (j = 1, ..., n−1), where G_j rotates in the (j, j+1) plane (i.e., in span(e_j, e_{j+1})).

Then R = R_{n−1} and Q = G_1* ⋯ G_{n−1}*, and S̃ = R G_1* ⋯ G_{n−1}*.

Property. S̃ = ⋯ G_4 (G_3 ((G_2 (G_1 S)) G_1*) G_2*) ⋯ Only two sines and two cosines have to be stored at a time.

Initiation of the QR-algorithm

Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0* A U_0 is upper Hessenberg.

Start of the QR-algorithm: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.

Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.

Observation. For computing the eigenvalues only, step 4 can be skipped.

Initiation of the QR-algorithm

Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0* A U_0 is upper Hessenberg.

Start of the QR-algorithm: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.

Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.

If the eigenvalue λ_j is available, then the associated eigenvector can also be computed with Shift & Invert: solve (A − λ_j I) v_j = e_1 for v_j. Note that the LU-decomposition can be computed cheaply if A is upper Hessenberg.
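
A sketch of that eigenvector computation (the tiny perturbation of the shift is our guard against an exactly singular system; with S0 Hessenberg, the solve costs only O(n²)):

  A = randn(6);
  [U0,S0] = hess(A);                        % Hessenberg form of A
  lam = eig(A); lam_j = lam(1);             % a computed eigenvalue
  n = size(A,1); e1 = eye(n,1);
  w = (S0 - lam_j*(1+10*eps)*eye(n)) \ e1;  % one Shift & Invert step
  v = U0*(w/norm(w));                       % approximate eigenvector of A
  norm(A*v - lam_j*v)                       % small residual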

Initiation of the QR-algorithm

Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0* A U_0 is upper Hessenberg.

Start of the QR-algorithm: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.

Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.

Observation. The QR-algorithm requires approximately 8n³ flop to compute the Schur decomposition to full accuracy.

Initiation of the QR-algorithm

Theorem. There is a unitary matrix U_0 (a product of Householder reflections) such that S_0 ≡ U_0* A U_0 is upper Hessenberg.

Start of the QR-algorithm: bring A to upper Hessenberg form (i.e., S = S_0, U = U_0). The computation requires 4/3 n³ flop.

Theorem. If S_{k−1} is upper Hessenberg, then S_k is upper Hessenberg. Moreover, if S_{k−1} is m × m, then Q_k can be obtained as a product of m − 1 Givens rotations, i.e., rotations in the (j, j+1) plane. Steps 2 and 3 of the QR-algorithm can then be performed in (together) 3m² flop.

Observation. The QR-algorithm cannot exploit any sparsity structure of A.

Benefits of the QR-RQ steps
- The Shift & Invert power method is implicitly incorporated for one eigenvalue.
- The power method is implicitly incorporated for all other eigenvalues.
- Easy deflation is allowed.
- The computations are stable (when a stable QR-decomposition is used).

When combined with an upper Hessenberg start:
- The upper Hessenberg structure is preserved, leading to relatively low computational costs per step.
- Simple error control: the norm of the residual equals |S_k(n, n−1)|.
- Effective shifts can easily be computed: with an eigenvalue of the 2 × 2 lower-right block of S_k, quadratic convergence is achieved and stagnation avoided.

Excellent performance of the QR-algorithm relies on:
- QR-RQ steps (see the previous transparency).
- A good shift strategy, leading to fast convergence (quadratic and, if A is Hermitian, cubic) to one eigenvalue.
- While quickly converging to one eigenvalue, other eigenvalues are also approximated, yielding good starting points for the subsequent eigenvalue computations.
- Deflation, allowing a fast search for the next eigenvalue. Deflation is performed simply by deleting the last row and the last column of the active matrix.
- Preservation of the upper Hessenberg structure, allowing relatively cheap QR steps.

Theorem. The QR-algorithm is stable: for the computed matrix U and the computed upper triangular matrix S we have that

  (A + Δ_A) U = U S, with ‖U* U − I‖_2 of the order of the machine precision u,

where Δ_A is an n × n matrix such that ‖Δ_A‖_2 ≤ u ‖A‖_2 (up to a modest constant).