A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem


Bor Plestenjak, Department of Mathematics, University of Ljubljana, Slovenia
Ellen Van Camp and Marc Van Barel, Department of Computer Science, Katholieke Universiteit Leuven, Belgium

International Conference on Matrix Methods and Operator Equations, Moscow, June 20-25, 2005

Outline

- Introduction
- Cholesky LR algorithm
- Laguerre's shift
- Implementation
- Numerical examples
- Conclusions

Introduction

- semiseparable: every submatrix taken from the lower or the upper triangular part has rank at most 1.
- diagonal-plus-semiseparable (DPSS): the sum D + S of a diagonal matrix D and a semiseparable matrix S.

The Givens-vector representation of a symmetric DPSS matrix is based on a vector $f = [f_1, \ldots, f_n]^T$, Givens rotations
$$G_i = \begin{bmatrix} c_i & s_i \\ -s_i & c_i \end{bmatrix}, \qquad i = 1, \ldots, n-1,$$
and a diagonal $d = [d_1, \ldots, d_n]^T$ as
$$D + S = \begin{bmatrix}
c_1 f_1 + d_1 & c_2 s_1 f_1 & \cdots & c_{n-1} s_{n-2:1} f_1 & s_{n-1:1} f_1 \\
c_2 s_1 f_1 & c_2 f_2 + d_2 & \cdots & c_{n-1} s_{n-2:2} f_2 & s_{n-1:2} f_2 \\
\vdots & \vdots & & \vdots & \vdots \\
c_{n-1} s_{n-2:1} f_1 & c_{n-1} s_{n-2:2} f_2 & \cdots & c_{n-1} f_{n-1} + d_{n-1} & s_{n-1} f_{n-1} \\
s_{n-1:1} f_1 & s_{n-1:2} f_2 & \cdots & s_{n-1} f_{n-1} & f_n + d_n
\end{bmatrix},$$
where $s_{a:b} = s_a s_{a-1} \cdots s_b$. Whenever they appear, we assume that $c_n = 1$ and $s_n = 0$. We denote $D + S = \mathrm{diag}(d) + \mathrm{Giv}(c, s, f)$.

Our goal is to compute the smallest (or all) eigenvalues of a s.p.d. DPSS matrix.
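To make the representation concrete, here is a minimal NumPy sketch that assembles the dense matrix $\mathrm{diag}(d) + \mathrm{Giv}(c, s, f)$ from its Givens-vector representation. The function name dpss_dense and the NumPy setting are our own illustration, not part of the talk (whose implementation was in Matlab).

```python
import numpy as np

def dpss_dense(c, s, f, d):
    """Assemble the dense symmetric DPSS matrix diag(d) + Giv(c, s, f).

    c, s : rotation parameters c_1..c_{n-1}, s_1..s_{n-1} (c_n = 1 by convention)
    f, d : length-n vectors of the Givens-vector representation
    """
    n = len(f)
    A = np.zeros((n, n))
    for k in range(n):
        prod = 1.0  # running product s_{j-1} * ... * s_k
        for j in range(k, n):
            cj = c[j] if j < n - 1 else 1.0  # convention c_n = 1
            A[j, k] = cj * prod * f[k]
            if j < n - 1:
                prod *= s[j]
        A[k, k] += d[k]
    return np.tril(A) + np.tril(A, -1).T  # symmetrize
```

Every submatrix taken from the strictly lower triangular part of a matrix built this way has rank at most 1, which is exactly the semiseparable structure.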

Motivation

Let A be an $n \times n$ symmetric matrix. One can find an orthogonal matrix Q such that $B = Q^T A Q$ is tridiagonal in $O(n^3)$ flops. Next, the eigendecomposition of B is computed in $O(n^2)$ flops.

Instead of the tridiagonal structure one can use DPSS matrices. Vandebril, Van Camp, Van Barel, Mastronardi (2004): for an arbitrary diagonal matrix D there exists an orthogonal matrix Q such that $Q^T A Q = D + S$ is a DPSS matrix. The reduction is $O(n^2)$ flops more expensive than in the tridiagonal case. The reduction algorithm has a Lanczos-Ritz convergence behaviour and performs a kind of nested subspace iteration at each step, so a good choice of the diagonal can compensate for the slower reduction.

Algorithms for the eigendecomposition of a symmetric DPSS matrix:
- Chandrasekaran, Gu (2004), Numer. Math.: divide and conquer,
- Mastronardi, Van Camp, Van Barel (2003), tech. report: divide and conquer,
- Bini, Gemignani, Pan (2003), tech. report: QR algorithm,
- Fasino (2004), tech. report: QR algorithm,
- Van Camp, Delvaux, Van Barel, Vandebril, Mastronardi (2005), tech. report: implicit QR algorithm,
- Mastronardi, Van Camp, Van Barel, Vandebril (2004), tech. report: computation of eigenvectors.

Cholesky LR algorithm

Let A be a s.p.d. matrix.

$A_0 = A$
for $k = 0, 1, 2, \ldots$
    choose a shift $\sigma_k$
    $A_k - \sigma_k I = V_k V_k^T$ (Cholesky decomposition, $V_k$ is lower triangular)
    $A_{k+1} = V_k^T V_k + \sigma_k I$

Two steps of the zero-shift Cholesky LR are equivalent to one step of the zero-shift QR. The shift $\sigma_k$ should be such that $A_k - \sigma_k I$ is positive definite. When the algorithm is applied to a s.p.d. DPSS matrix D + S, the shift can be absorbed into the diagonal part.

Grad, Zakrajšek (1972): Cholesky LR with Laguerre's shifts for symmetric tridiagonal matrices.
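As a dense reference point, here is a minimal sketch of the shifted Cholesky LR step (our own illustration, assuming the shift keeps $A_k - \sigma_k I$ positive definite); the structured algorithm developed below performs the same step in O(n) flops.

```python
import numpy as np

def cholesky_lr_step(A, sigma):
    """One shifted Cholesky LR step: A - sigma*I = V V^T, return V^T V + sigma*I.

    The new iterate equals V^{-1} A V, so it is similar to A
    and the eigenvalues are preserved.
    """
    n = A.shape[0]
    V = np.linalg.cholesky(A - sigma * np.eye(n))  # raises if not s.p.d.
    return V.T @ V + sigma * np.eye(n)

# zero-shift iteration: the diagonal converges to the eigenvalues
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
for _ in range(50):
    A = cholesky_lr_step(A, 0.0)
print(np.diag(A))  # approximates the eigenvalues of the original matrix
```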

DPSS structure is invariant under Cholesky LR

Theorem: Let $A = \mathrm{Giv}(c, s, f) + \mathrm{diag}(d)$ be a s.p.d. DPSS matrix.
1. If $A = V V^T$ is the Cholesky decomposition of A, then $V = \mathrm{tril}(\mathrm{Giv}(c, s, \tilde f)) + \mathrm{diag}(\tilde d)$.
2. If $B = V^T V$, then B is a s.p.d. DPSS matrix $B = \mathrm{Giv}(\hat c, \hat s, \hat f) + \mathrm{diag}(d)$.

For n = 4:
$$A_4 = \begin{bmatrix}
c_1 f_1 + d_1 & c_2 s_1 f_1 & c_3 s_2 s_1 f_1 & s_3 s_2 s_1 f_1 \\
c_2 s_1 f_1 & c_2 f_2 + d_2 & c_3 s_2 f_2 & s_3 s_2 f_2 \\
c_3 s_2 s_1 f_1 & c_3 s_2 f_2 & c_3 f_3 + d_3 & s_3 f_3 \\
s_3 s_2 s_1 f_1 & s_3 s_2 f_2 & s_3 f_3 & f_4 + d_4
\end{bmatrix},$$
$$V_4 = \begin{bmatrix}
c_1 \tilde f_1 + \tilde d_1 & & & \\
c_2 s_1 \tilde f_1 & c_2 \tilde f_2 + \tilde d_2 & & \\
c_3 s_2 s_1 \tilde f_1 & c_3 s_2 \tilde f_2 & c_3 \tilde f_3 + \tilde d_3 & \\
s_3 s_2 s_1 \tilde f_1 & s_3 s_2 \tilde f_2 & s_3 \tilde f_3 & \tilde f_4 + \tilde d_4
\end{bmatrix},$$
$$B_4 = \begin{bmatrix}
\hat c_1 \hat f_1 + d_1 & \hat c_2 \hat s_1 \hat f_1 & \hat c_3 \hat s_2 \hat s_1 \hat f_1 & \hat s_3 \hat s_2 \hat s_1 \hat f_1 \\
\hat c_2 \hat s_1 \hat f_1 & \hat c_2 \hat f_2 + d_2 & \hat c_3 \hat s_2 \hat f_2 & \hat s_3 \hat s_2 \hat f_2 \\
\hat c_3 \hat s_2 \hat s_1 \hat f_1 & \hat c_3 \hat s_2 \hat f_2 & \hat c_3 \hat f_3 + d_3 & \hat s_3 \hat f_3 \\
\hat s_3 \hat s_2 \hat s_1 \hat f_1 & \hat s_3 \hat s_2 \hat f_2 & \hat s_3 \hat f_3 & \hat f_4 + d_4
\end{bmatrix}.$$

Cholesky decomposition

With $A_4$ and $V_4$ as above, we compare the diagonal and the main subdiagonal of A and $V V^T$:
$$c_k f_k + d_k = \sum_{j=1}^{k-1} (c_k s_{k-1} \cdots s_j \tilde f_j)^2 + (c_k \tilde f_k + \tilde d_k)^2 = c_k^2 q_k + (c_k \tilde f_k + \tilde d_k)^2,$$
$$c_{k+1} s_k f_k = c_k c_{k+1} s_k \sum_{j=1}^{k-1} (s_{k-1} \cdots s_j \tilde f_j)^2 + c_{k+1} s_k \tilde f_k (c_k \tilde f_k + \tilde d_k) = c_k c_{k+1} s_k q_k + c_{k+1} s_k \tilde f_k (c_k \tilde f_k + \tilde d_k),$$
where
$$q_k := \sum_{j=1}^{k-1} (s_{k-1} s_{k-2} \cdots s_j \tilde f_j)^2.$$
The solution is
$$\tilde f_k = \frac{f_k - c_k q_k}{\sqrt{d_k + c_k (f_k - c_k q_k)}}, \qquad \tilde d_k = \frac{d_k}{\sqrt{d_k + c_k (f_k - c_k q_k)}}, \qquad k = 1, \ldots, n.$$

Algorithm for the Cholesky decomposition

If $A = \mathrm{Giv}(c, s, f) + \mathrm{diag}(d)$ is s.p.d., then $A = V V^T$ for $V = \mathrm{tril}(\mathrm{Giv}(c, s, \tilde f)) + \mathrm{diag}(\tilde d)$.

function $[\tilde f, \tilde d] = \mathrm{Cholesky}(c, s, f, d)$
    $c_n = 1$
    $q_1 = 0$
    for $k = 1, \ldots, n$:
        $z_k = f_k - c_k q_k$
        $y_k = \sqrt{d_k + c_k z_k}$
        $\tilde f_k = z_k / y_k$
        $\tilde d_k = d_k / y_k$
        $q_{k+1} = s_k^2 (q_k + \tilde f_k^2)$

Flops: $11n + O(1)$.

It follows from $c_k f_k + d_k = c_k^2 q_k + (c_k \tilde f_k + \tilde d_k)^2$ that $y_k$ is in fact the diagonal element of V:
$$c_k \tilde f_k + \tilde d_k = \sqrt{d_k + c_k (f_k - c_k q_k)} = \sqrt{d_k + c_k z_k} = y_k.$$
A negative or zero value under the square root appears if A is not positive definite.
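A direct transcription of the slide's pseudocode into NumPy (our own sketch; the talk's implementation was in Matlab). It also returns the auxiliary quantities $z_k$ and $y_k$, since the $V^T V$ step below reuses them:

```python
import numpy as np

def dpss_cholesky(c, s, f, d):
    """Structured Cholesky A = V V^T of a s.p.d. DPSS matrix in O(n) flops.

    Input : Givens-vector representation A = diag(d) + Giv(c, s, f).
    Output: f_t, d_t with V = tril(Giv(c, s, f_t)) + diag(d_t),
            plus z (z_k = f_k - c_k q_k) and y (diagonal of V), reused later.
    """
    n = len(f)
    f_t, d_t, z, y = (np.empty(n) for _ in range(4))
    q = 0.0
    for k in range(n):
        ck = c[k] if k < n - 1 else 1.0       # convention c_n = 1
        z[k] = f[k] - ck * q
        yk2 = d[k] + ck * z[k]
        if yk2 <= 0.0:
            raise ValueError("matrix is not positive definite")
        y[k] = np.sqrt(yk2)
        f_t[k] = z[k] / y[k]
        d_t[k] = d[k] / y[k]
        if k < n - 1:
            q = s[k] ** 2 * (q + f_t[k] ** 2)
    return f_t, d_t, z, y
```

With dpss_dense from the introduction sketch, np.tril(dpss_dense(c, s, f_t, d_t)) reproduces V, and V @ V.T matches dpss_dense(c, s, f, d) to rounding error.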

Equations for V^T V

The product $B = V^T V$ is a s.p.d. DPSS matrix, $B = \mathrm{Giv}(\hat c, \hat s, \hat f) + \mathrm{diag}(d)$. The lower triangular elements of B are
$$b_{kk} = (c_k \tilde f_k + \tilde d_k)^2 + (s_k \tilde f_k)^2, \qquad b_{jk} = s_k s_{k+1} \cdots s_{j-1} \tilde f_k (\tilde f_j + c_j \tilde d_j),$$
where $k = 1, \ldots, n$ and $j > k$.

For $k = 1, \ldots, n-1$ we have
$$\hat s_k^2 \hat f_k^2 = \sum_{j=k+1}^{n} b_{jk}^2 = \tilde f_k^2 p_k, \qquad \text{where} \quad p_k = \sum_{j=k+1}^{n} (s_k s_{k+1} \cdots s_{j-1})^2 (\tilde f_j + c_j \tilde d_j)^2.$$
For $p_k$ we can apply the recursion $p_n = 0$ and
$$p_k = s_k^2 \left( p_{k+1} + (\tilde f_{k+1} + c_{k+1} \tilde d_{k+1})^2 \right).$$
From $\hat c_k \hat f_k + d_k = (c_k \tilde f_k + \tilde d_k)^2 + (s_k \tilde f_k)^2$ and the Cholesky decomposition we get $\hat c_k \hat f_k = c_k z_k + (s_k \tilde f_k)^2$.

Now $\hat c_k$, $\hat s_k$, and $\hat f_k$ can be computed from
$$\hat c_k \hat f_k = c_k z_k + (s_k \tilde f_k)^2, \qquad \hat s_k \hat f_k = \tilde f_k \sqrt{p_k}.$$

Algorithm for the V^T V product

Let $A = \mathrm{Giv}(c, s, f) + \mathrm{diag}(d)$ be s.p.d. Then $A = V V^T$ with $V = \mathrm{tril}(\mathrm{Giv}(c, s, \tilde f)) + \mathrm{diag}(\tilde d)$, and $B = V^T V = \mathrm{Giv}(\hat c, \hat s, \hat f) + \mathrm{diag}(d)$.

function $[\hat c, \hat s, \hat f] = \mathrm{VTV}(c, s, \tilde f, z)$
    $c_n = 1$
    $\hat f_n = (\tilde f_n + \tilde d_n)^2 - d_n$
    $p_n = 0$
    for $k = n-1, \ldots, 2, 1$:
        $p_k = s_k^2 \left( p_{k+1} + (\tilde f_{k+1} + c_{k+1} \tilde d_{k+1})^2 \right)$
        $[\hat c_k, \hat s_k, \hat f_k] = \mathrm{Givens}(c_k z_k + s_k^2 \tilde f_k^2, \ \tilde f_k \sqrt{p_k})$

Here $[c, s, f] = \mathrm{Givens}(x, y)$ returns the Givens transformation such that
$$\begin{bmatrix} c & s \\ -s & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix}.$$

Flops: $16n + O(1)$, so one step of the zero-shift Cholesky LR algorithm costs $27n + O(1)$ flops. As the eigenvalues are invariant to the sign of $\hat s_k$, $k = 1, \ldots, n-1$, we do not care about it.
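A NumPy sketch of the V^T V step, paired with dpss_cholesky and dpss_dense from the earlier sketches to form one complete O(n) zero-shift Cholesky LR step (function names are ours; the sign of $\hat s_k$ is left positive, which changes at most a diagonal similarity and hence not the eigenvalues):

```python
import numpy as np

def dpss_vtv(c, s, f_t, d_t, z, d):
    """Given V = tril(Giv(c, s, f_t)) + diag(d_t) and z from dpss_cholesky,
    return (c_h, s_h, f_h) with B = V^T V = Giv(c_h, s_h, f_h) + diag(d)."""
    n = len(f_t)
    c_h, s_h, f_h = np.empty(n - 1), np.empty(n - 1), np.empty(n)
    f_h[n - 1] = (f_t[n - 1] + d_t[n - 1]) ** 2 - d[n - 1]
    p = 0.0
    for k in range(n - 2, -1, -1):
        ck1 = c[k + 1] if k + 1 < n - 1 else 1.0      # convention c_n = 1
        p = s[k] ** 2 * (p + (f_t[k + 1] + ck1 * d_t[k + 1]) ** 2)
        x = c[k] * z[k] + s[k] ** 2 * f_t[k] ** 2     # = c_h[k] * f_h[k]
        w = f_t[k] * np.sqrt(p)                       # = s_h[k] * f_h[k]
        r = np.hypot(x, w)                            # Givens(x, w)
        c_h[k], s_h[k], f_h[k] = x / r, w / r, r
    return c_h, s_h, f_h

# one zero-shift LR step on a random s.p.d. DPSS matrix, checked densely
rng = np.random.default_rng(0)
n = 6
theta = rng.uniform(0.1, 1.4, n - 1)
c, s = np.cos(theta), np.sin(theta)
f = rng.uniform(0.5, 1.5, n)
d = rng.uniform(1.0, 2.0, n) + 2 * n                  # strong diagonal => s.p.d.
f_t, d_t, z, y = dpss_cholesky(c, s, f, d)
c_h, s_h, f_h = dpss_vtv(c, s, f_t, d_t, z, d)
V = np.tril(dpss_dense(c, s, f_t, d_t))
print(np.allclose(V @ V.T, dpss_dense(c, s, f, d)))   # True: A = V V^T
B_rep = dpss_dense(c_h, s_h, f_h, d)
print(np.allclose(np.linalg.eigvalsh(V.T @ V),
                  np.linalg.eigvalsh(B_rep)))         # True: same spectrum
```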

Laguerre's shift

Let A be a s.p.d. $n \times n$ matrix with eigenvalues $0 < \lambda_n \le \lambda_{n-1} \le \cdots \le \lambda_1$. We apply Laguerre's method to the characteristic polynomial $f(\lambda) = \det(A - \lambda I)$. If x is an approximation for an eigenvalue of A and
$$S_1(x) = \sum_{i=1}^{n} \frac{1}{\lambda_i - x} = \frac{f'(x)}{f(x)} = \mathrm{trace}\big((A - xI)^{-1}\big),$$
$$S_2(x) = \sum_{i=1}^{n} \frac{1}{(\lambda_i - x)^2} = \frac{f'^2(x) - f(x) f''(x)}{f^2(x)} = \mathrm{trace}\big((A - xI)^{-2}\big),$$
then the next approximation $\bar x$ by Laguerre's method is given by
$$\bar x = x + \frac{n}{S_1(x) + \sqrt{(n-1)\big(n S_2(x) - S_1^2(x)\big)}}.$$

Two important properties of Laguerre's method:
- if $\lambda_n$ is a simple eigenvalue and $x < \lambda_n$, then $x < \bar x < \lambda_n$,
- the convergence towards $\lambda_n$ is cubic (linear for a multiple eigenvalue).
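A dense reference sketch of one Laguerre step (our illustration; it forms the inverse explicitly in $O(n^3)$, whereas the talk evaluates the same traces in O(n) from the structured Cholesky factor):

```python
import numpy as np

def laguerre_shift(A, x):
    """Next Laguerre approximation for the smallest eigenvalue of s.p.d. A,
    starting from a lower bound x < lambda_min."""
    n = A.shape[0]
    W = np.linalg.inv(A - x * np.eye(n))   # (A - xI)^{-1}, dense for illustration
    S1 = np.trace(W)                       # trace((A - xI)^{-1})
    S2 = np.trace(W @ W)                   # trace((A - xI)^{-2})
    return x + n / (S1 + np.sqrt((n - 1) * (n * S2 - S1 ** 2)))

# iterates increase monotonically towards lambda_min (cubically for simple eigenvalues)
A = np.diag([1.0, 2.0, 5.0]) + 0.1 * np.ones((3, 3))
x = 0.0
for _ in range(5):
    x = laguerre_shift(A, x)
print(x, np.linalg.eigvalsh(A)[0])  # both ~ lambda_min
```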

Computation of Laguerre's shift

We need $S_1(\sigma) = \mathrm{trace}((A - \sigma I)^{-1})$ and $S_2(\sigma) = \mathrm{trace}((A - \sigma I)^{-2})$. If $A - \sigma I = V V^T$ is the Cholesky decomposition and $W = V^{-1}$, then
$$S_1(\sigma) = \mathrm{trace}(W^T W) = \|W\|_F^2,$$
$$S_2(\sigma) = \mathrm{trace}(W^T W W^T W) = \mathrm{trace}(W W^T W W^T) = \|W W^T\|_F^2.$$

Delvaux, Van Barel (2004): let $V = \mathrm{tril}(\mathrm{Giv}(c, s, \tilde f)) + \mathrm{diag}(\tilde d)$ be nonsingular with $\tilde d_i \ne 0$ for $i = 1, \ldots, n$. Then $W = V^{-1} = \mathrm{tril}(\mathrm{Giv}(\bar c, \bar s, \bar f)) + \mathrm{diag}(\bar d)$, where $\bar d_i = \tilde d_i^{-1}$ for $i = 1, \ldots, n$.

Let us assume that $W = \mathrm{tril}(\mathrm{Giv}(\bar c, \bar s, \bar f)) + \mathrm{diag}(\bar d)$. The final algorithm will also be correct when W is not DPSS, which happens when $\tilde d_i = 0$ for some $i = 2, \ldots, n-1$.
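A dense sanity check of these trace identities (our sketch; the talk evaluates the same quantities in O(n) via the representation of W):

```python
import numpy as np

A = np.diag([3.0, 4.0, 6.0]) + 0.2 * np.ones((3, 3))
sigma = 0.5
V = np.linalg.cholesky(A - sigma * np.eye(3))
W = np.linalg.inv(V)
S1 = np.linalg.norm(W, 'fro') ** 2          # = trace((A - sigma I)^{-1})
S2 = np.linalg.norm(W @ W.T, 'fro') ** 2    # = trace((A - sigma I)^{-2})
M = np.linalg.inv(A - sigma * np.eye(3))
print(np.isclose(S1, np.trace(M)), np.isclose(S2, np.trace(M @ M)))  # True True
```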

Computation of Laguerre's shift

Lemma: If $A = \mathrm{Giv}(c, s, f) + \mathrm{diag}(d)$ is a symmetric $n \times n$ DPSS matrix, then
$$\|A\|_F^2 = \sum_{k=1}^{n} (c_k f_k + d_k)^2 + 2 \sum_{k=1}^{n-1} s_k^2 f_k^2.$$

Lemma: If $W = \mathrm{tril}(\mathrm{Giv}(\bar c, \bar s, \bar f)) + \mathrm{diag}(\bar d)$ and $\bar c_k \ne 0$ for $k = 2, \ldots, n-1$, then
$$\|W W^T\|_F^2 = \sum_{k=1}^{n} (W W^T)_{kk}^2 + 2 \sum_{k=1}^{n-1} \left( \frac{(W W^T)_{k+1,k}}{\bar c_{k+1}} \right)^2,$$
$$\|W\|_F^2 = \sum_{k=1}^{n} (W W^T)_{kk}.$$
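The first lemma is easy to confirm numerically (reusing the illustrative dpss_dense helper from the introduction); the off-diagonal sums telescope because $c_j^2 + s_j^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
theta = rng.uniform(0.1, 1.4, n - 1)
c, s = np.cos(theta), np.sin(theta)
f, d = rng.standard_normal(n), rng.standard_normal(n)
A = dpss_dense(c, s, f, d)          # helper from the introduction sketch
lhs = np.linalg.norm(A, 'fro') ** 2
ck = np.append(c, 1.0)              # convention c_n = 1
rhs = np.sum((ck * f + d) ** 2) + 2 * np.sum(s ** 2 * f[:-1] ** 2)
print(np.isclose(lhs, rhs))         # True
```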

Algorithm for Laguerre's shift

The following algorithm computes $S_1 = \|W\|_F^2$ and $S_2 = \|W W^T\|_F^2$.

function $[S_1, S_2] = \mathrm{invtrace}(c, s, \tilde f, \tilde d, y)$
    $c_n = \bar c_n = 1$
    for $k = n-1, \ldots, 2, 1$:
        $[\bar c_k, \bar s_k] = \mathrm{Givens}(c_k \bar c_{k+1} y_{k+1}, \ \bar c_{k+1} s_k \tilde d_k)$
    $r_1 = 0$
    for $k = 1, \ldots, n-1$:
        $\beta_k = c_{k+1} s_k \tilde f_k / (\bar c_{k+1} y_k y_{k+1}) = w_{k+1,k} / \bar c_{k+1}$
        $\omega_k = \bar c_k^2 r_k + y_k^{-2} = (W W^T)_{kk}$
        $\xi_k = \bar c_k \bar s_k r_k + \beta_k / y_k = (W W^T)_{k+1,k} / \bar c_{k+1}$
        $r_{k+1} = \bar s_k^2 r_k + \beta_k^2$
    $\omega_n = r_n + y_n^{-2}$
    $S_1 = \sum_{k=1}^{n} \omega_k$
    $S_2 = \sum_{k=1}^{n} \omega_k^2 + 2 \sum_{k=1}^{n-1} \xi_k^2$

Flops: $31n + O(1)$. The algorithm is also correct if $\tilde d_k = 0$ for some $k = 2, \ldots, n-1$.

Implementation

- If $s_k$ is small enough for some $k = 1, \ldots, n-1$, we decouple the problem into two smaller problems with the matrices $A(1\!:\!k, 1\!:\!k)$ and $A(k+1\!:\!n, k+1\!:\!n)$.
- If $s_{n-1}$ is small enough, we take $f_n + d_n$ as an eigenvalue of A and continue with the vectors $c(1\!:\!n-2)$, $s(1\!:\!n-2)$, $f(1\!:\!n-1)$, and $d(1\!:\!n-1)$. The new initial shift is $f_n + d_n$. (A sketch of this deflation test follows below.)
- Even if $\sigma_k < \lambda_n$, the Cholesky factorization can fail if the difference is too small. This is a problem since Laguerre's shifts converge faster to the smallest eigenvalue than $(A_k)_{nn}$ does. One strategy is to relax the shift by a factor $\tau$ close to 1, for instance $\tau = 1 - 10^{-4}$.
- The computation of Laguerre's shift requires more than half of the operations of one Cholesky LR step. We can save work by fixing the shift once the shift improvement is small enough.
- The algorithm for Laguerre's shift fails if $\bar c_k = 0$ for some $k = 2, \ldots, n-1$. A simple solution is to perturb such a $c_k$ to some small $\delta$ (for instance, $10^{-20}$) whenever $c_k = 0$.
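A minimal sketch of the deflation step from the second bullet (the function name and the tolerance default are ours; the talk's cutoff criterion is $10^{-16}$):

```python
import numpy as np

def deflate(c, s, f, d, tol=1e-16):
    """Split off converged eigenvalues f[n-1] + d[n-1] while s[n-2] is negligible."""
    eigs = []
    n = len(f)
    while n > 1 and abs(s[n - 2]) < tol:
        eigs.append(f[n - 1] + d[n - 1])                  # converged eigenvalue
        c, s, f, d = c[:n - 2], s[:n - 2], f[:n - 1], d[:n - 1]
        n -= 1
    return c, s, f, d, eigs
```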

Numerical examples

Numerical results were obtained with Matlab 7.0 running on a Pentium 4 2.6 GHz under Windows XP. We compared:
- a Matlab implementation of the Cholesky LR algorithm,
- a Matlab implementation of the implicit QR algorithm for DPSS matrices [Van Camp, Delvaux, Van Barel, Vandebril, Mastronardi (2005)],
- the Matlab function eig.

Exact eigenvalues were computed in Mathematica 5 using variable precision. The cutoff criterion for both Cholesky LR and implicit QR is $10^{-16}$. The maximum relative error denotes $\max_{1 \le i \le n} |\lambda_i - \tilde\lambda_i| / \lambda_i$.

When the initial matrix is not DPSS, the reduction to a similar DPSS matrix is done by the algorithm of Vandebril, Van Camp, Van Barel, and Mastronardi (2004). Because of the connection between the Lanczos method and this reduction, the largest eigenvalues of A are approximated by the lower-right diagonal elements of the DPSS matrix. As this is unfavourable for the Cholesky LR method, we reverse the order of the rows and columns of the DPSS matrix; this can be done in linear time, see, e.g., Vandebril (2004).

Numerical example 1

We take random s.p.d. DPSS matrices of the form
$$A = \mathrm{diag}(1, \ldots, n) + \mathrm{triu}(u v^T, 1) + \mathrm{triu}(u v^T, 1)^T + \alpha I,$$
where u and v are vectors with uniformly distributed random entries on [0, 1], obtained by the Matlab function rand, and the shift $\alpha$ is such that the smallest eigenvalue of A is 1. The condition numbers of these matrices are approximately n.

          Cholesky LR                 Implicit QR                  eig
   n      t     steps   error        t      steps   error        t      error
   50    0.06    274   9.2e-15      0.19     83    4.3e-15      0.00   1.8e-14
  100    0.16    557   1.0e-14      0.69    164    4.8e-14      0.00   1.0e-13
  150    0.28    832   1.8e-14      1.45    242    2.9e-13      0.00   1.4e-13
  200    0.49   1104   2.6e-14      2.59    311    3.6e-13      0.02   1.2e-13
  250    0.72   1390   6.4e-14      4.22    414    6.7e-13      0.03   7.6e-13
  300    0.97   1660   1.3e-13      6.18    486    4.7e-13      0.05   1.2e-13
  350    1.25   1933   4.8e-14      8.86    564    1.5e-12      0.09   5.6e-13
  400    1.59   2194   1.3e-13     11.95    684    4.3e-12      0.14   7.8e-13
  450    1.94   2479   9.8e-14     15.78    730    2.2e-12      0.22   6.8e-13
  500    2.34   2741   1.0e-13     19.72    821    4.5e-12      0.28   3.8e-13
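A sketch of this test-matrix construction in NumPy (the talk used Matlab's rand; here eigvalsh determines the shift $\alpha$ that moves the smallest eigenvalue to 1):

```python
import numpy as np

def example1_matrix(n, rng):
    """Random s.p.d. DPSS test matrix with smallest eigenvalue 1, cond ~ n."""
    u = rng.uniform(0.0, 1.0, n)
    v = rng.uniform(0.0, 1.0, n)
    R = np.triu(np.outer(u, v), 1)           # strictly upper triangular rank-1 part
    A = np.diag(np.arange(1.0, n + 1)) + R + R.T
    alpha = 1.0 - np.linalg.eigvalsh(A)[0]   # shift smallest eigenvalue to 1
    return A + alpha * np.eye(n)

A = example1_matrix(100, np.random.default_rng(42))
print(np.linalg.eigvalsh(A)[0], np.linalg.cond(A))  # ~1.0, ~n
```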

Numerical example 2

We take symmetric positive definite matrices $A = Q\,\mathrm{diag}(1, \ldots, n)\,Q^T$, where Q is a random orthogonal matrix. Now we have to transform the matrix into a similar DPSS matrix before we can apply Cholesky LR or implicit QR.

          Cholesky LR                  Implicit QR                  eig
    n     t*     steps   error        t*      steps   error        t       error
   500     3.1    3455  2.2e-13       41.5     942   2.9e-13       1.8    2.4e-13
  1000    12.7    6923  2.8e-13      190.9    1804   6.6e-13      13.4    6.2e-13
  1500    26.4   10384  4.6e-13      496.1    2641   4.6e-13      50.9    4.5e-13
  2000    46.1   13859  7.6e-13     1067.9    3448   2.8e-12     123.0    3.0e-12
  2500    72.8   17326  9.5e-13     1962.3    4263   4.7e-12     279.6    4.1e-12

*: times for Cholesky LR and implicit QR do not include the reduction to a DPSS matrix.

Numerical example 3

We take symmetric positive definite matrices $A = Q\,\mathrm{diag}(\lambda_1, \ldots, \lambda_n)\,Q^T$, where Q is a random orthogonal matrix and
$$\lambda_k = 10^{-8 + 7\frac{k-1}{n-1}}.$$

          Cholesky LR                 Implicit QR                  eig
   n      t*     steps   error       t*      steps   error        t      error
   50    0.05    276    4.3e-8      0.14     78     3.1e-8      0.00    1.1e-8
  100    0.14    585    5.4e-8      0.52    153     3.8e-8      0.03    8.0e-9
  150    0.24    857    8.3e-8      1.20    233     7.3e-8      0.08    7.2e-9
  200    0.38   1143    8.7e-8      2.14    307     1.3e-7      0.17    6.6e-9
  250    0.52   1438    7.4e-8      3.36    377     8.4e-8      0.34    1.1e-8
  300    0.72   1712    1.6e-7      5.23    466     7.0e-8      0.66    7.9e-9

*: times for Cholesky LR and implicit QR do not include the reduction to a DPSS matrix.

Conclusions

- The Cholesky LR algorithm exploits the structure of s.p.d. DPSS matrices and can be combined with Laguerre's shifts. It is therefore natural to compare the method to the implicit QR algorithm for DPSS matrices.
- In Cholesky LR the eigenvalues are computed from the smallest to the largest, so the method is well suited to applications where one is interested in a few of the smallest eigenvalues.
- If the complete spectrum is computed, Cholesky LR is more expensive than implicit QR, but, as it tends to be slightly more accurate, it presents an alternative.
- Combined with the reduction to DPSS form, the proposed method can also be applied to a general s.p.d. matrix.