where J " 0 I n?i n 0 # () and I n is the nn identity matrix (Note that () is in general not equivalent to L T JL N T JN) In most applications system-

Size: px
Start display at page:

Download "where J " 0 I n?i n 0 # () and I n is the nn identity matrix (Note that () is in general not equivalent to L T JL N T JN) In most applications system-"

Transcription

The Symplectic Eigenvalue Problem, the Butterfly Form, the SR Algorithm, and the Lanczos Method

Peter Benner*    Heike Faßbender**

August 1, 1997

Abstract

We discuss some aspects of the recently proposed symplectic butterfly form, which is a condensed form for symplectic matrices. Any 2n x 2n symplectic matrix can be reduced to this condensed form, which contains 8n - 4 nonzero entries and is determined by 4n - 1 parameters. The symplectic eigenvalue problem can be solved using the SR algorithm based on this condensed form. The SR algorithm preserves this form and can be modified to work only with the 4n - 1 parameters instead of the 4n^2 matrix elements. The reduction of symplectic matrices to symplectic butterfly form has a close analogy to the reduction of arbitrary matrices to Hessenberg form. A Lanczos-like algorithm for reducing a symplectic matrix to butterfly form is also presented.

Key words: butterfly form, symplectic Lanczos method, symplectic matrix, eigenvalues

AMS(MOS) subject classifications: 65F15, 65F50

1 Introduction

The computation of eigenvalues and eigenvectors or deflating subspaces of symplectic pencils/matrices is an important task in applications like discrete linear-quadratic regulator problems, discrete Kalman filtering, computation of discrete stability radii, and the problem of solving discrete-time algebraic Riccati equations. See, e.g., [9] for applications and further references. A matrix M in R^{2n x 2n} is called symplectic (or J-orthogonal) if

    M J M^T = J                                                          (1)

(or, equivalently, M^T J M = J), and a symplectic matrix pencil L - lambda*N, with L, N in R^{2n x 2n}, is defined by the property

    L J L^T = N J N^T,                                                   (2)

----
*  Universität Bremen, Fachbereich 3 - Mathematik und Informatik, Bremen, FRG; peter@mathematik.uni-bremen.de. This work was completed while this author was with the Technische Universität Chemnitz-Zwickau and was supported by Deutsche Forschungsgemeinschaft, research grant Me 790/7-1 "Singuläre Steuerungsprobleme".
** Universität Bremen, Fachbereich 3 - Mathematik und Informatik, Bremen, FRG; heike@mathematik.uni-bremen.de.

2 where J " 0 I n?i n 0 # () and I n is the nn identity matrix (Note that () is in general not equivalent to L T JL N T JN) In most applications system-theoretic conditions are satised, which guarantee the existence of an n-dimensional invariant subspace (resp deating subspace) corresponding to the eigenvalues of the symplectic matrix M (resp the symplectic pencil L? N) inside the open unit disk This is the subspace one wishes to compute The solution of the (generalized) symplectic eigenvalue problem with small and dense coecient matrices has been the topic of numerous publications during the last 0 years Even for these problems a numerically sound method, ie, a strongly backward stable method in the sense of [], is yet not known The numerical computation of a deating subspace is usually carried out by an iterative procedure like the QZ algorithm which transforms L? N into a generalized Schur form, from which the deating subspace can be read o See, eg, [9, 1] The QZ algorithm is numerically backward stable but it ignores the symplectic structure Thus the computed eigenvalues will in general not come in reciprocal pairs, although the exact eigenvalues have this property Even worse, small perturbations may cause eigenvalues close to the unit disk to cross the unit circle such that the number of true and computed eigenvalues inside the open unit disk may dier Hence it is crucial to make use of the symplectic structure Dierent structure-preserving methods which avoid the above mentioned problems have been proposed Mehrmann [8] describes a symplectic QZ algorithm This algorithm has all desirable properties, but its applicability is limited to the single input/output case due to the lacking reduction to symplectic J{Hessenberg form in the general case [1] In [], Lin uses the S +S?1 - transformation in order to solve the symplectic eigenvalue problem But the method cannot be used to compute eigenvectors and/or invariant subspaces atel [] shows that these ideas can also be 
used to derive a structure-preserving method for the generalized symplectic eigenvalue problem similar to Van Loan's square-reduced method for the Hamiltonian eigenvalue problem [ ]. Based on the multishift idea presented in [1], he also describes a method working on a condensed symplectic pencil which uses implicit QZ steps to compute the stable deflating subspace of a symplectic pencil [ ]. Using the analogy to the continuous-time case, i.e., Hamiltonian eigenvalue problems, Flaschka, Mehrmann, and Zywietz show in [1] how to construct structure-preserving methods for the symplectic eigenproblem based on the SR method [1, ]. This method is a QR-like method based on the SR decomposition. In an initial step, the symplectic matrix is reduced to a more condensed form, the symplectic J-Hessenberg form. As in the general framework of GR algorithms [0], the SR iteration preserves the symplectic J-Hessenberg form at each step and is supposed to converge to a form from which eigenvalues and deflating subspaces can be read off. The authors note that "the resulting methods have significantly worse numerical properties than their corresponding analogues in the Hamiltonian case" [1, abstract].

Recently, Banse and Bunse-Gerstner [ ] presented a new condensed form for symplectic matrices which can be computed by an elimination process using elementary unitary and symplectic similarity transformations. The 2n x 2n condensed matrix is symplectic, contains 8n - 4 nonzero entries, and is determined by 4n - 1 parameters. This condensed form, called

the symplectic butterfly form, can be depicted as a symplectic matrix of the form

    B = [ B11  B12 ]
        [ B21  B22 ],

where B11, B21 in R^{n x n} are diagonal and B12, B22 in R^{n x n} are tridiagonal. The reduction of a symplectic matrix to butterfly form, and also the existence of a numerically stable method to compute this reduction, depends strongly on the first column of the transformation matrix that carries out the reduction. Once the reduction to butterfly form is achieved, the SR algorithm [1, ] is a suitable tool for computing the eigenvalues/eigenvectors of a symplectic matrix. It preserves the butterfly form in its iterations and can be rewritten in a parameterized form that works with 4n - 1 parameters instead of the 4n^2 matrix elements in each iteration. Hence the symplectic structure, which will be destroyed in the numerical process due to roundoff errors, can easily be restored in each iteration for this condensed form.

In [ ], a strict butterfly matrix is introduced, in which the upper left diagonal block B11 of the butterfly form is nonsingular. A strict butterfly matrix can be factored as

    B = [ B11  0        ] [ I  B11^{-1}B12 ]
        [ B21  B11^{-1} ] [ 0  I           ].

We will introduce an unreduced butterfly form, in which the lower right tridiagonal block B22 is unreduced. An unreduced butterfly matrix can be factored as

    B = [ B21^{-1}  B11 ] [ 0  -I           ]
        [ 0         B21 ] [ I   B21^{-1}B22 ].

Any unreduced butterfly matrix is similar to a strict butterfly matrix, but not vice versa. We will show that unreduced butterfly matrices have certain desirable properties which are helpful when examining the properties of the SR algorithm based on the butterfly form; a strict butterfly matrix does not necessarily have these properties. In [ ], an elimination process for computing the butterfly form of a symplectic matrix is given which uses elementary unitary symplectic transformations as well as non-unitary symplectic transformations. Here, we also consider a structure-preserving symplectic Lanczos method which creates the symplectic butterfly form if no breakdown occurs. Such a symplectic Lanczos method will suffer from the well-known numerical difficulties inherent to any Lanczos method for nonsymmetric matrices. In [ ], a symplectic look-ahead Lanczos algorithm is presented which
overcomes breakdown by giving up the strict butterfly form. Unfortunately, so far there do not exist eigenvalue methods that can make use of that special reduced form; standard eigenvalue methods such as QR or SR have to be employed, resulting in a full symplectic matrix after only a few iteration steps. We propose to employ an implicit restart technique instead of a look-ahead mechanism in order to deal with the numerical difficulties of the symplectic Lanczos method. This approach is based on the fundamental work of Sorensen [ ].

In Section 2, existence and uniqueness of the reduction of a symplectic matrix to butterfly form are reviewed. Unreduced butterfly matrices are introduced and their properties are presented. An SR algorithm based on the symplectic butterfly form is discussed in Section 3. The

symplectic Lanczos method which reduces a symplectic matrix to butterfly form is derived in Section 4, where we also give the basic idea of an implicit restart for such a Lanczos process.

2 The Symplectic Butterfly Form

Here, we review the known results on existence and uniqueness of the reduction of a symplectic matrix to butterfly form and derive some new properties showing the analogy of the butterfly form to the Hessenberg form in generic chasing algorithms. As the reduction of a general matrix to upper Hessenberg form serves as a preparatory step for the QR algorithm, the reduction of a symplectic matrix to butterfly form can be used as a preparatory step for the SR algorithm. We will state results corresponding to those in the Hessenberg/QR case for the symplectic butterfly form and the SR algorithm. Our main concern are symplectic matrices and the symplectic butterfly form, but we will briefly mention how the results presented here can be used for symplectic matrix pencils.

In order to state results concerning existence and uniqueness of the reduction of a symplectic matrix to symplectic butterfly form we need the following definitions. A matrix A in R^{2n x 2n} is called a J-triangular matrix if

    A = [ A11  A12 ]
        [ A21  A22 ],

where A11, A12, A21, A22 in R^{n x n} are upper triangular matrices and A21 has a zero main diagonal. For a vector v1 in R^{2n} and M in R^{2n x 2n}, define the generalized Krylov matrix

    K(M, v1, l) := [v1, M^{-1}v1, M^{-2}v1, ..., M^{-(l-1)}v1, Mv1, M^2 v1, ..., M^l v1].    (4)

(Note the similarity of this generalized Krylov matrix to the generalized ones in [9, 1, 1, 8].) Further, let P_{2n} be the permutation matrix

    P_{2n} = [e1, e3, ..., e_{2n-1}, e2, e4, ..., e_{2n}] in R^{2n x 2n}.                    (5)

If the dimension of P_{2n} is clear from the context, we leave off the subscript.

Theorem 1. Let X be a 2n x 2n nonsingular matrix. Let M and S be 2n x 2n symplectic matrices, and denote by v1 the first column of S.
a) There exists a 2n x 2n symplectic matrix S and a J-triangular matrix R such that X = SR if and only if all leading principal minors of even dimension of X^T J X are nonzero.
b) Let X = SR and X = S~R~ be SR factorizations of X. Then there exists a symplectic matrix

    D = [ C  F      ]
        [ 0  C^{-1} ],                                                                       (6)

where C = diag(c1, ..., cn) and F = diag(f1, ..., fn), such that S~ = SD^{-1} and R~ = DR.
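The two auxiliary objects introduced here, the generalized Krylov matrix (4) and the shuffle permutation (5), are easy to build explicitly; the following sketch does so for a generic nonsingular test matrix (our own choice):

```python
import numpy as np

def krylov(M, v, ell):
    """Generalized Krylov matrix K(M, v, ell) of (4):
    [v, M^{-1}v, ..., M^{-(ell-1)}v, Mv, M^2 v, ..., M^ell v]."""
    Minv = np.linalg.inv(M)
    cols, w = [v], v
    for _ in range(ell - 1):
        w = Minv @ w          # negative powers
        cols.append(w)
    w = v
    for _ in range(ell):
        w = M @ w             # positive powers
        cols.append(w)
    return np.column_stack(cols)

def shuffle(n):
    """Permutation matrix P of (5): columns e1, e3, ..., e_{2n-1}, e2, ..., e_{2n}."""
    idx = list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2))
    return np.eye(2 * n)[:, idx]

n = 3
rng = np.random.default_rng(1)
# diagonally dominant, hence nonsingular, test matrix
M = np.diag(rng.uniform(1.0, 2.0, 2 * n)) + 0.1 * rng.standard_normal((2 * n, 2 * n))
v1 = np.zeros(2 * n); v1[0] = 1.0

K = krylov(M, v1, n)
P = shuffle(n)

assert K.shape == (2 * n, 2 * n)            # 2*ell columns with ell = n
assert np.allclose(P @ P.T, np.eye(2 * n))  # P is a permutation, hence orthogonal
```

For ell = n the matrix K(M, v1, n) is square, which is exactly the situation of parts c) and d) of Theorem 1.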

c) Let K(M, v1, n) be nonsingular. If K(M, v1, n) = SR is an SR decomposition, then S^{-1}MS is a butterfly matrix.
d) If S^{-1}MS = B is a symplectic butterfly matrix, then K(M, v1, n) has an SR decomposition K(M, v1, n) = SR.
e) Let S, S~ in R^{2n x 2n} be symplectic matrices such that S^{-1}MS = B and S~^{-1}MS~ = B~ are butterfly matrices. Then there exists a symplectic matrix D as in (6) such that S~ = SD and B = DB~D^{-1}.

Proof: For the original statement and proof of a) see Theorem 11 in [1]. For the original statement and proof of b) see the corresponding proposition in [10]. For the original statements and proofs of c), d), and e) see the corresponding theorem in [ ].  []

The theorem introduces the SR decomposition of a matrix X. The SR decomposition has been studied, e.g., in [8, 10, 1]. Theorem 1 e) shows that the transformation to butterfly form is unique up to scaling with a matrix D as in (6). From the proof of c) it follows that the tridiagonal matrix in the lower right corner of the butterfly form is an unreduced tridiagonal matrix, that is, none of the upper and lower subdiagonal elements are zero. Similarly, one needs that these elements are nonzero to show in d) that R is nonsingular. Because of this we will call a symplectic matrix B in R^{2n x 2n} an unreduced butterfly matrix if

    B = [ B11  B12 ]
        [ B21  B22 ],                                                                        (7)

where B11, B21 in R^{n x n} are diagonal matrices, B12, B22 in R^{n x n} are tridiagonal matrices, and B22 is unreduced, that is, its subdiagonal elements are nonzero.

Lemma 2. If B as in (7) is an unreduced butterfly matrix, then B21 is nonsingular and B can be factored as

    [ B11  B12 ]   [ B21^{-1}  B11 ] [ 0  -I           ]
    [ B21  B22 ] = [ 0         B21 ] [ I   B21^{-1}B22 ].

This factorization is unique. Note that B21^{-1}B22 is a symmetric tridiagonal matrix.

Proof: The fact that B is symplectic implies B11 B22 - B21 B12 = I (recall that B11 and B21 are diagonal). Assume that B21 is singular, that is, (B21)_jj = 0 for some j. Then the jth row of B11 B22 - B21 B12 = I gives

    (B11)_jj (B22)_{j,j-1} = 0,    (B11)_jj (B22)_jj = 1,    (B11)_jj (B22)_{j,j+1} = 0.

This can only happen for (B11)_jj != 0 and (B22)_{j,j-1} = (B22)_{j,j+1} = 0, but B22 is unreduced. Hence B21 has to be nonsingular if B is unreduced. Thus, for an unreduced butterfly matrix we obtain

    [ B21  -B11     ] [ B11  B12 ]   [ 0  -I           ]
    [ 0     B21^{-1} ] [ B21  B22 ] = [ I   B21^{-1}B22 ].

As both matrices on the left are symplectic, their product is symplectic, and hence B21^{-1}B22 has to be a symmetric tridiagonal matrix. Thus

    [ B11  B12 ]   [ B21^{-1}  B11 ] [ 0  -I           ]
    [ B21  B22 ] = [ 0         B21 ] [ I   B21^{-1}B22 ].

The uniqueness of this factorization follows from the choice of signs in the identities.  []

We will frequently make use of this decomposition and will denote it by B = B1 B2^{-1}, where

    B1 = [ diag(a1^{-1}, ..., an^{-1})  diag(b1, ..., bn) ]
         [ 0                            diag(a1, ..., an) ],                                 (8)

    B2 = [ T   I ]
         [ -I  0 ],                                                                          (9)

with T the symmetric tridiagonal matrix with diagonal entries c1, ..., cn and off-diagonal entries d2, ..., dn, so that B11 = diag(b1, ..., bn), B21 = diag(a1, ..., an), and T = B21^{-1}B22,
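This factored structure is easy to verify numerically. The sketch below builds the two factors for arbitrary parameter values (our own choices, with a_i and d_i nonzero as required) and checks that each factor, and hence their product, is symplectic:

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
a = rng.uniform(0.5, 1.5, n)       # a_i != 0
b = rng.standard_normal(n)
c = rng.standard_normal(n)
d = rng.uniform(0.5, 1.5, n - 1)   # d_i != 0, so B22 is unreduced

# T = B21^{-1} B22: symmetric tridiagonal (diagonal c, off-diagonals d)
T = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)

I, Z = np.eye(n), np.zeros((n, n))
B1 = np.block([[np.diag(1 / a), np.diag(b)], [Z, np.diag(a)]])   # first factor
B2 = np.block([[T, I], [-I, Z]])                                 # second factor

J = np.block([[Z, I], [-I, Z]])
assert np.allclose(B1 @ J @ B1.T, J)   # each factor is symplectic ...
assert np.allclose(B2 @ J @ B2.T, J)

B = B1 @ np.linalg.inv(B2)             # ... hence so is B = B1 B2^{-1}
assert np.allclose(B @ J @ B.T, J)
assert np.allclose(B[:n, :n], np.diag(b))   # B11 is diagonal
assert np.allclose(B[n:, :n], np.diag(a))   # B21 is diagonal
```

Note that the symplecticity of the second factor hinges on T being symmetric, which is exactly the point of the lemma.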

and hence

    B = B1 B2^{-1} = [ diag(b1, ..., bn)   diag(b1, ..., bn) T - diag(a1^{-1}, ..., an^{-1}) ]
                     [ diag(a1, ..., an)   diag(a1, ..., an) T                               ],    (10)

so the diagonal entries of the two tridiagonal blocks are b_i c_i - a_i^{-1} and a_i c_i, respectively, and their off-diagonal entries are b_i d_{i+1} (superdiagonal) and b_{i+1} d_{i+1} (subdiagonal), and analogously with a in place of b. From (8) - (10) we obtain

Corollary 3. Any unreduced butterfly matrix B in R^{2n x 2n} can be represented by 4n - 1 parameters: a1, ..., an, d2, ..., dn in R \ {0} and b1, ..., bn, c1, ..., cn in R.

Remark 4. Any unreduced butterfly matrix is similar to an unreduced butterfly matrix with b_i = 1 for i = 1, ..., n and sign(a_i) = sign(d_i) for i = 2, ..., n (this follows from Theorem 1 e)).

Remark 5. We will have deflation if d_j = 0 for some j. Then the eigenproblem can be split into two smaller ones with unreduced symplectic butterfly matrices.

The next result is well known for Hessenberg matrices (see, e.g., [0]) and will turn out to be essential when examining the properties of the SR algorithm based on the butterfly form.

Lemma 6. If lambda is an eigenvalue of an unreduced symplectic butterfly matrix B in R^{2n x 2n}, then its geometric multiplicity is one.

Proof: Since B is symplectic, B is nonsingular and its eigenvalues are nonzero. For any lambda in C we have rank(B - lambda*I) >= 2n - 1, because the first 2n - 1 columns of B - lambda*I are linearly independent. This can be seen by looking at the permuted expression P(B - lambda*I)P^T = B^P - lambda*I:

    [ b1-L   b1c1-a1^{-1}   0      b1d2                                       ]
    [ a1     a1c1-L         0      a1d2                                       ]
    [ 0      b2d2           b2-L   b2c2-a2^{-1}   0      b2d3                 ]
    [ 0      a2d2           a2     a2c2-L         0      a2d3                 ]
    [                       ...    ...            ...    ...                  ]
    [                              0              bndn   bn-L   bncn-an^{-1}  ]
    [                              0              andn   an     ancn-L        ]

(writing L for lambda).
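The parameter count in the corollary can be checked directly: assembling B from its parameters via (10) yields exactly 8n - 4 nonzero entries determined by the 4n - 1 numbers. A small sketch (parameter values are arbitrary positive choices of our own, which keeps every allowed entry nonzero):

```python
import numpy as np

def butterfly(a, b, c, d):
    """Assemble B = B1 B2^{-1} directly from its 4n-1 parameters, cf. (10)."""
    n = len(a)
    T = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)
    return np.block([[np.diag(b), np.diag(b) @ T - np.diag(1 / a)],
                     [np.diag(a), np.diag(a) @ T]])

n = 5
rng = np.random.default_rng(3)
a, b, c = (rng.uniform(1.0, 2.0, n) for _ in range(3))
d = rng.uniform(1.0, 2.0, n - 1)
B = butterfly(a, b, c, d)

# 4n - 1 parameters determine 8n - 4 nonzero entries
assert len(a) + len(b) + len(c) + len(d) == 4 * n - 1
assert np.count_nonzero(B) == 8 * n - 4

# and B is symplectic by construction
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(B @ J @ B.T, J)
```

With all parameters in (1, 2), the diagonal entries b_i c_i - a_i^{-1} of the upper right block are strictly positive, so no accidental cancellation disturbs the count.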

Obviously, the first two columns of the above matrix are linearly independent, as B is unreduced. We cannot express the third column as a linear combination of the first two columns: suppose

    [0, 0, b2 - lambda, a2, 0, ...]^T = alpha [b1 - lambda, a1, 0, ...]^T + beta [b1c1 - a1^{-1}, a1c1 - lambda, b2d2, a2d2, 0, ...]^T.

From the fourth row we obtain beta = d2^{-1}. With this the third row yields b2 - lambda = b2. As lambda is an eigenvalue of B and is therefore nonzero, this equation cannot hold. Hence the first three columns are linearly independent. Similarly, one sees that the first 2n - 1 columns are linearly independent. Hence the eigenspaces are one-dimensional.  []

Remark 7. In [ ] a slightly different point of view is taken in order to argue that a butterfly matrix can be represented by 4n - 1 parameters. There, a strict butterfly form is introduced, in which the upper left diagonal block B11 of the butterfly form is nonsingular. Then, using similar arguments as above, since

    [ B11^{-1}  0   ] [ B11  B12 ]   [ I  B11^{-1}B12 ]
    [ -B21      B11 ] [ B21  B22 ] = [ 0  I           ]

and B11^{-1}B12 is a symmetric tridiagonal matrix (by the same argument as used above), one obtains

    [ B11  B12 ]   [ B11  0        ] [ I  B11^{-1}B12 ]
    [ B21  B22 ] = [ B21  B11^{-1} ] [ 0  I           ].

Therefore a strict butterfly matrix can be represented by 4n - 1 parameters. Unfortunately, strict butterfly matrices do not have all the desirable properties of unreduced butterfly matrices. In particular, Lemma 6 does not hold for strict butterfly matrices, as can be seen from the next example.

Example 8. Let B = diag(1, -1, 1, -1) in R^{4 x 4}. Then B is a strict symplectic butterfly matrix that is not unreduced. It is easy to see that the spectrum of B is given by {1, -1}, each eigenvalue having geometric multiplicity two.

From Remark 4 and (10) it also follows that any unreduced symplectic butterfly matrix is similar to a strict butterfly matrix. But Example 8 also shows that the converse does not hold.

Finally, consider a symplectic matrix pencil L - lambda*N, that is, LJL^T = NJN^T, where L, N in R^{2n x 2n}. If N is nonsingular, then M = N^{-1}L is a symplectic matrix, and the results of this section can be applied to M. Assume that S transforms M to unreduced butterfly form: S^{-1}MS = B = B1 B2^{-1}. Then the symplectic matrix pencil L - lambda*N is equivalent to the pencil

    Q(L - lambda*N)S = B2^{-1} - lambda*B1^{-1},   where Q = B1^{-1}S^{-1}N^{-1}.

Such a pencil is called a symplectic butterfly pencil. If N is singular, then the pencil L - lambda*N has at least one eigenvalue 0 and at least one eigenvalue infinity. Assume that there are k eigenvalues 0 and k eigenvalues infinity. In a preprocessing step these eigenvalues can be deflated out using Algorithm 1 in [1]. In the resulting symplectic pencil L' - lambda*N' of dimension 2(n-k) x 2(n-k), N' is nonsingular. Hence we can build M' = (N')^{-1}L' and transform it to butterfly form B' = B1'(B2')^{-1}. Thus, L' - lambda*N' is similar to the symplectic butterfly pencil (B2')^{-1} - lambda*(B1')^{-1}. Adding k rows and columns of zeros to each block of B1' and B2', and appropriate entries on the diagonals, we can expand the symplectic butterfly pencil (B2')^{-1} - lambda*(B1')^{-1} to a symplectic butterfly pencil of dimension 2n x 2n that is equivalent to L - lambda*N.

3 The SR Algorithm for Symplectic Butterfly Matrices

Based on the SR decomposition introduced in Theorem 1, a symplectic QR-like method for solving eigenvalue problems of arbitrary real matrices is developed in [10]. The QR decomposition and the orthogonal similarity transformation to upper Hessenberg form in the QR process are replaced by the SR decomposition and the symplectic similarity reduction to J-Hessenberg form. Unfortunately, a symplectic matrix in butterfly form is not a J-Hessenberg matrix, so we cannot simply use the results of [10] for computing the eigenvalues of a symplectic butterfly matrix. But, as we will see in this section, an SR step preserves the butterfly form. If B is an unreduced symplectic butterfly matrix, p(B) a polynomial such that p(B) in R^{2n x 2n}, p(B) = SR, and if R is invertible, then S^{-1}BS is a symplectic butterfly matrix again. This was already noted and proved in [ ], but no results for singular p(B) are given there. The next theorem shows that singular p(B) are desirable (that is, at least one shift is an eigenvalue of B), as they allow the problem to be deflated after one step.

First, we need to introduce some notation. Let p(B) be a polynomial such that p(B) in R^{2n x 2n}. Write p(B) in factored form

    p(B) := (B - mu1 I_{2n})(B - mu2 I_{2n}) ... (B - muk I_{2n}).                           (11)

From p(B) in R^{2n x 2n} it follows that if mu in C and mu in {mu1, ..., muk}, then conj(mu) in {mu1, ..., muk}. p(B) is singular if and only if at least one of the shifts mu_i is an eigenvalue of B. Let nu denote the

number of shifts that are equal to eigenvalues of B. Here we count a repeated shift according to its multiplicity as a zero of p, except that the number of times we count it must not exceed its algebraic multiplicity (as an eigenvalue of B).

Lemma 9. Let B in R^{2n x 2n} be an unreduced symplectic butterfly matrix. The rank of p(B) in (11) is 2n - nu, with nu as defined above.

Proof: Since B is an unreduced butterfly matrix, its eigenspaces are one-dimensional by Lemma 6. Hence, we can use the same arguments as in the proof of the corresponding lemma in [9] in order to prove the statement of this lemma.  []

In the following we will consider only the case that rank(p(B)) is even. In a real implementation, one would choose a polynomial p such that each perfect shift is accompanied by its reciprocal, since the eigenvalues of a symplectic matrix always appear in reciprocal pairs. As noted before, if mu in C is a perfect shift, then we will choose conj(mu) as a shift as well; that is, in that case we will choose mu, conj(mu), mu^{-1}, and conj(mu)^{-1} as shifts. Further, if mu in R is a perfect shift, then we choose mu^{-1} as a shift as well. Because of this, rank(p(B)) will always be even.

Theorem 10. Let B in R^{2n x 2n} be an unreduced symplectic butterfly matrix. Let p(B) be a polynomial with p(B) in R^{2n x 2n} and rank(p(B)) = 2n - nu =: 2k. If p(B) = SR exists, then B~ = S^{-1}BS is a symplectic matrix of the form

    B~ = [ B~11  0     B~13  0    ]   } k
         [ 0     B~22  0     B~24 ]   } n - k
         [ B~31  0     B~33  0    ]   } k
         [ 0     B~42  0     B~44 ]   } n - k
           k     n-k   k     n-k

where

    [ B~11  B~13 ]
    [ B~31  B~33 ]

is a symplectic butterfly matrix and the eigenvalues of

    [ B~22  B~24 ]
    [ B~42  B~44 ]

are just the shifts that are eigenvalues of B.

In order to simplify the notation for the proof of this theorem and the subsequent derivations, we use in the following permuted versions of B, R, and S. Let

    B^P = P B P^T,   R^P = P R P^T,   S^P = P S P^T,   J^P = P J P^T,

with P as in (5). From S^T J S = J we obtain (S^P)^T J^P S^P = J^P, where J^P is block diagonal with n copies of the 2 x 2 matrix

    [ 0  1 ]
    [ -1 0 ],

while the permuted butterfly matrix B^P is of the form

    B^P = [ b1   b1c1-a1^{-1}   0    b1d2                                      ]
          [ a1   a1c1           0    a1d2                                      ]
          [ 0    b2d2           b2   b2c2-a2^{-1}   0    b2d3                  ]
          [ 0    a2d2           a2   a2c2           0    a2d3                  ]
          [                     ...  ...            ...  ...                   ]
          [                          0              bndn  bn   bncn-an^{-1}    ]
          [                          0              andn  an   ancn            ].   (12)

Proof of Theorem 10: B^P is an upper triangular matrix with two additional subdiagonals, where the second additional subdiagonal has a nonzero entry only in every other position (see (12)). Since R is a J-triangular matrix, R^P is an upper triangular matrix. In the following, we denote by Z_k the first 2k columns of a 2n x 2n matrix Z, while Z_rest denotes its last 2(n - k) columns; Z^{k,k} denotes the leading 2k x 2k principal submatrix of Z, and I_k denotes the first 2k columns of the 2n x 2n identity matrix. Now partition the permuted matrices as

    B^P = [B^P_k | B^P_rest],   S^P = [S^P_k | S^P_rest],   J^P = [J^P_k | J^P_rest],

    R^P = [ R^{k,k}  X ]
          [ 0        Y ],

where X in R^{2k x 2(n-k)} and Y in R^{2(n-k) x 2(n-k)}.

First we will show that the first 2k columns and rows of B~ are in the desired form. We will need the following observations. The first 2k columns of p(B^P) are linearly independent, since B is unreduced. To see this, consider the identity

    p(B)K(B, e1, n) = [p(B)e1, p(B)B^{-1}e1, ..., p(B)B^{-(n-1)}e1, p(B)Be1, ..., p(B)B^n e1]
                    = [p(B)e1, B^{-1}p(B)e1, ..., B^{-(n-1)}p(B)e1, Bp(B)e1, ..., B^n p(B)e1]
                    = K(B, p(B)e1, n),

where we have used p(B)B^r = B^r p(B). From Theorem 1 d) we know that, since B is unreduced, K(B, e1, n) is a nonsingular upper J-triangular matrix. As rank(p(B)) = 2k, K(B, p(B)e1, n) has rank 2k. If a matrix of the form K(X, v, n) = [v, X^{-1}v, ..., X^{-(n-1)}v, Xv, ..., X^n v] has rank 2k, then the columns v, X^{-1}v, ..., X^{-(k-1)}v, Xv, ..., X^k v are linearly independent. Further we obtain

    p(B) = K(B, p(B)e1, n)(K(B, e1, n))^{-1} =: [p1, p2, ..., p_{2n}].
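The banded structure of B^P claimed here can be confirmed numerically; the sketch below permutes a butterfly matrix built from arbitrary positive parameters (our own choices) and checks the band and the every-other-position pattern of the second subdiagonal:

```python
import numpy as np

def butterfly(a, b, c, d):
    n = len(a)
    T = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)
    return np.block([[np.diag(b), np.diag(b) @ T - np.diag(1 / a)],
                     [np.diag(a), np.diag(a) @ T]])

def shuffle(n):
    idx = list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2))
    return np.eye(2 * n)[:, idx]

n = 5
rng = np.random.default_rng(4)
B = butterfly(*(rng.uniform(1.0, 2.0, m) for m in (n, n, n, n - 1)))
P = shuffle(n)
Bp = P @ B @ P.T   # permuted butterfly matrix

# upper triangular with two additional subdiagonals ...
assert np.allclose(np.tril(Bp, -3), 0.0)
# ... where the second subdiagonal is nonzero only in every other position
assert all(Bp[i + 2, i] == 0.0 for i in range(0, 2 * n - 2, 2))
assert all(abs(Bp[i + 2, i]) > 0.0 for i in range(1, 2 * n - 2, 2))
```

The zero entries below the band are exact (they arise as products of structural zeros), which is why an exact comparison with 0.0 is safe here.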

Due to the special form of K(B, e1, n) (J-triangular!) and the fact that the columns 1 to k and n+1 to n+k of K(B, p(B)e1, n) are linearly independent, the columns p1, ..., pk, p_{n+1}, ..., p_{n+k} of p(B) are linearly independent. Hence the first 2k columns of p(B^P) = P p(B) P^T are linearly independent. The columns of S^P_k are linearly independent, since S is nonsingular. Hence the matrix R^{k,k} is nonsingular, since p(B^P)I_k = S^P_k R^{k,k}. It follows that

    S^P_k = p(B^P) I_k (R^{k,k})^{-1}.                                                       (13)

Moreover, since rank(p(B)) = 2k, we have rank(R^P) = 2k. Since rank(R^{k,k}) = 2k, we obtain rank(Y) = 0 and therefore Y = 0. From this we see

    R^P = [ R^{k,k}  X ]
          [ 0        0 ].                                                                    (14)

Further we need the following identities:

    B^P p(B^P) = p(B^P) B^P,                                                                 (15)
    (B^P)^{-1} p(B^P) = p(B^P) (B^P)^{-1},                                                   (16)
    (B^P)^T (J^P)^{-1} = (J^P)^{-1} (B^P)^{-1},                                              (17)
    (B^P)^{-1} (J^P)^{-1} = (J^P)^{-1} (B^P)^T,                                              (18)
    (S^P)^T (J^P)^{-1} = (J^P)^{-1} (S^P)^{-1},                                              (19)
    (S^P)^{-1} (J^P)^{-1} = (J^P)^{-1} (S^P)^T,                                              (20)
    S^P J^P I_k = S^P_k J^{k,k}.                                                             (21)

Equations (17) - (21) follow from the fact that B and S are symplectic, while (15) and (16) result from the fact that Z and p(Z) commute for any matrix Z and any polynomial p. The first 2k columns of B~^P are given by the expression

    B~^P I_k = (S^P)^{-1} B^P S^P I_k
             = (S^P)^{-1} B^P S^P_k
             = (S^P)^{-1} B^P p(B^P) I_k (R^{k,k})^{-1}     by (13)
             = (S^P)^{-1} p(B^P) B^P I_k (R^{k,k})^{-1}     by (15)
             = R^P B^P I_k (R^{k,k})^{-1}.

For the last expression we use (14), the fact that (R^{k,k})^{-1} is a 2k x 2k upper triangular matrix, and the banded form (12) of B^P: since the rows 2k+1, ..., 2n of R^P are zero, the first 2k columns of B~^P vanish below row 2k, and the leading 2k x 2k block of B~^P inherits the banded butterfly pattern of (12). Transforming back to the unpermuted blocking of the theorem, this means

    B~ = [ B~11  B~12  B~13  B~14 ]   } k
         [ 0     B~22  0     B~24 ]   } n - k
         [ B~31  B~32  B~33  B~34 ]   } k
         [ 0     B~42  0     B~44 ]   } n - k                                                (22)

with [[B~11, B~13],[B~31, B~33]] of butterfly form. The first 2k columns of (B~^P)^T are given by the expression

    (B~^P)^T I_k = (S^P)^T (B^P)^T (S^P)^{-T} I_k
                 = (S^P)^T (B^P)^T (J^P)^{-1} S^P J^P I_k                         by (20)
                 = (S^P)^T (B^P)^T (J^P)^{-1} S^P_k J^{k,k}                       by (21)
                 = (S^P)^T (B^P)^T (J^P)^{-1} p(B^P) I_k (R^{k,k})^{-1} J^{k,k}   by (13)

                 = (S^P)^T (J^P)^{-1} (B^P)^{-1} p(B^P) I_k (R^{k,k})^{-1} J^{k,k}            by (17)
                 = (S^P)^T (J^P)^{-1} p(B^P) (B^P)^{-1} I_k (R^{k,k})^{-1} J^{k,k}            by (16)
                 = (J^P)^{-1} (S^P)^{-1} p(B^P) (B^P)^{-1} I_k (R^{k,k})^{-1} J^{k,k}         by (19)
                 = (J^P)^{-1} R^P (B^P)^{-1} I_k (R^{k,k})^{-1} J^{k,k}
                 = ((J^P)^{-1} R^P (J^P)^{-1}) (B^P)^T I_k J^{k,k} (R^{k,k})^{-1} J^{k,k}     by (18).

For the last expression we again use (14), the fact that (R^{k,k})^{-1} is a 2k x 2k upper triangular matrix, and that (B^P)^T is the transpose of the banded matrix (12): the rows 2k+1, ..., 2n of (J^P)^{-1}R^P(J^P)^{-1} are zero, so the first 2k columns of (B~^P)^T vanish below row 2k. Hence, in the unpermuted blocking,

    B~ = [ B~11  0     B~13  0    ]   } k
         [ B~21  B~22  B~23  B~24 ]   } n - k
         [ B~31  0     B~33  0    ]   } k
         [ B~41  B~42  B~43  B~44 ]   } n - k                                                (23)

where the blocks have the same sizes as before. Comparing (22) and (23), we obtain the form asserted in the theorem. This proves the first part of the theorem. The result about the eigenvalues now follows with the same arguments as in the proof of the corresponding theorem in [9], where a similar statement for a generic chasing algorithm is proved.  []

Algorithm: SR algorithm for symplectic butterfly matrices

    B_1 := B  (symplectic butterfly matrix)
    for j = 1, 2, ... until satisfied
        choose a polynomial p_j such that p_j(B_j) in R^{2n x 2n}
        compute p_j(B_j) = S_j R_j  (SR decomposition)
        set B_{j+1} := S_j^{-1} B_j S_j
    end

Table 1: SR algorithm for symplectic butterfly matrices

Hence, assuming its existence, the SR decomposition and the SR step (that is, B := S^{-1}BS) possess many of the desirable properties of the QR step. An SR algorithm can thus be formulated similarly to the QR algorithm [8, 10]. In Table 1 we present a general SR algorithm for symplectic butterfly matrices. There are different possibilities for choosing the polynomial p_j in the algorithm given in Table 1, e.g.:

    single shift:    p(B) = B - mu*I for mu in R;
    double shift:    p(B) = (B - mu*I)(B - conj(mu)*I) for mu in C,
                     or p(B) = (B - mu*I)(B - mu^{-1}*I) for mu in R;
    quadruple shift: p(B) = (B - mu*I)(B - conj(mu)*I)(B - mu^{-1}*I)(B - conj(mu)^{-1}*I) for mu in C.

In particular, the double shift for mu in R and the quadruple shift for mu in C make use of the symmetries of the spectrum of symplectic matrices. An algorithm for explicitly computing an SR decomposition for general matrices is presented in [10]. As with explicit QR steps, the expense of explicit SR steps comes from the fact that p(B) has to be computed explicitly. A preferred alternative is the implicit SR step, an analogue of the Francis QR step [1, 0, ]. The first implicit transformation S_1 is selected so that the first columns of the implicit and the explicit S are equivalent; that is, a symplectic matrix S_1 is determined such that

    S_1^{-1} p(B) e_1 = alpha e_1,   alpha in R.

Applying this first transformation to the butterfly matrix yields a symplectic matrix S_1^{-1}BS_1 of almost butterfly form with a small bulge. The remaining implicit transformations perform a bulge-chasing sweep down the subdiagonal to restore the butterfly form; that is, a symplectic matrix S_2 is determined such that S_2^{-1}S_1^{-1}BS_1S_2 is of butterfly form again. Banse presents in [ ] an algorithm to reduce an arbitrary symplectic matrix to butterfly form. Depending on the size of the bulge in S_1^{-1}BS_1, the algorithm can be greatly simplified to reduce S_1^{-1}BS_1 to butterfly form. The algorithm uses elementary symplectic Givens matrices [0]

    G_k = [ C_k  -S_k ]
          [ S_k   C_k ],
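The iteration in Table 1 has the generic "factor, then swap" GR shape. The following sketch illustrates only that skeleton: it substitutes the QR decomposition for the (nonorthogonal) SR decomposition and a simple Rayleigh-quotient shift for the symplectic shift polynomials above, so it does not preserve symplectic structure; it is a structural sketch, not the paper's method:

```python
import numpy as np

def gr_step(B, mu):
    """One step of the generic GR iteration of Table 1, with QR standing in
    for SR: factor p(B) = B - mu*I = G R, then form the similar matrix
    G^{-1} B G."""
    G, R = np.linalg.qr(B - mu * np.eye(B.shape[0]))
    return G.T @ B @ G   # G is orthogonal here, so G^{-1} = G^T

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))
A = A + A.T              # symmetric test matrix (real eigenvalues)

B = A.copy()
for _ in range(40):
    B = gr_step(B, B[-1, -1])   # Rayleigh-quotient shift (illustrative)

# every step is a similarity transformation: the spectrum is unchanged
assert np.allclose(np.sort(np.linalg.eigvalsh(B)),
                   np.sort(np.linalg.eigvalsh(A)))
```

In the SR algorithm proper, G would be symplectic rather than orthogonal, and the butterfly form of B_j would be preserved from step to step.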

where C_k = I + (c_k - 1) e_k e_k^T, S_k = s_k e_k e_k^T, and c_k^2 + s_k^2 = 1; elementary symplectic Householder matrices [0]

    H_k = [ P_k  0   ]                [ I_{k-1}  0                               ]
          [ 0    P_k ],   with P_k =  [ 0        I_{n-k+1} - (2/(v^T v)) v v^T  ];

and elementary symplectic Gaussian elimination matrices [10]

    L_k = [ W_k  V_k      ]
          [ 0    W_k^{-1} ],

where

    W_k = I + (w_k - 1)(e_{k-1} e_{k-1}^T + e_k e_k^T),   V_k = v_k (e_{k-1} e_k^T + e_k e_{k-1}^T).

As L_k is nonorthogonal, it might be ill-conditioned or might not even exist at all. This means that the SR decomposition of p(B) does not exist or is close to the set of matrices for which an SR decomposition does not exist. As the set of these matrices is of measure zero [8], the polynomial p is discarded in this case and an implicit SR step with a random shift is performed, as proposed in [10] in the context of the Hamiltonian SR algorithm. For an actual implementation this might be realized by checking the condition number of L_k and performing an exceptional step if it exceeds a given tolerance. The algorithm for reducing an arbitrary symplectic matrix to butterfly form as given in [ ] can be summarized as in Table 2 (in MATLAB-like notation). Note that pivoting is incorporated in order to increase numerical stability.
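The elementary transformations G_k and L_k can be written down directly from these formulas. The sketch below builds one of each for arbitrary parameter values (our own choices) and checks symplecticity, and for G_k also orthogonality:

```python
import numpy as np

def sympl_givens(n, k, c, s):
    """G_k = [[C_k, -S_k], [S_k, C_k]] with C_k = I + (c-1) e_k e_k^T,
    S_k = s e_k e_k^T, c^2 + s^2 = 1 (k is 1-based)."""
    e = np.zeros(n); e[k - 1] = 1.0
    C = np.eye(n) + (c - 1.0) * np.outer(e, e)
    S = s * np.outer(e, e)
    return np.block([[C, -S], [S, C]])

def sympl_gauss(n, k, w, v):
    """L_k = [[W_k, V_k], [0, W_k^{-1}]] with W_k, V_k acting on rows k-1, k."""
    W = np.eye(n)
    W[k - 2, k - 2] = W[k - 1, k - 1] = w
    V = np.zeros((n, n))
    V[k - 2, k - 1] = V[k - 1, k - 2] = v
    return np.block([[W, V], [np.zeros((n, n)), np.linalg.inv(W)]])

n = 4
Z, I = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, I], [-I, Z]])

G = sympl_givens(n, 2, np.cos(0.7), np.sin(0.7))
L = sympl_gauss(n, 3, 2.0, -0.5)

assert np.allclose(G @ J @ G.T, J) and np.allclose(G.T @ G, np.eye(2 * n))
assert np.allclose(L @ J @ L.T, J)              # symplectic ...
assert not np.allclose(L.T @ L, np.eye(2 * n))  # ... but nonorthogonal
```

The last assertion is the crux of the stability discussion above: L_k is symplectic but not orthogonal, so its condition number can be arbitrarily large.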

Algorithm: Reduction to butterfly form

    input:  2n x 2n symplectic matrix M
    output: 2n x 2n symplectic butterfly matrix M

    for j = 1 : n-1
        for k = n : -1 : j+1
            compute G_k such that (G_k M)_{k+n,j} = 0
            M = G_k M G_k^{-1}
        end
        if j < n then
            compute H_{j+1} such that (H_{j+1} M)_{j+2:n,j} = 0
            M = H_{j+1} M H_{j+1}^{-1}
        end
        compute L_{j+1} such that (L_{j+1} M)_{j+1,j} = 0
        M = L_{j+1} M L_{j+1}^{-1}
        if |M(j,j)| > |M(j+n,j)| then p = j+n else p = j end
        for k = n : -1 : j+1
            compute G_k such that (M G_k)_{p,k} = 0
            M = G_k^{-1} M G_k
        end
        if j < n then
            compute H_{j+1} such that (M H_{j+1})_{p,j+2+n:2n} = 0
            M = H_{j+1}^{-1} M H_{j+1}
        end
    end

Table 2: Reduction to butterfly form

Let us assume for the moment that p is chosen to perform a quadruple shift. Then p(B)e_1 has eight nonzero entries:

    p(B)e_1 = [x, x, x, x, 0, ..., 0, x, x, x, x, 0, ..., 0]^T.

In order to compute S_1 such that S_1^{-1} p(B) e_1 = alpha e_1, we have to eliminate the entries n+1 to n+4 by symplectic Givens transformations and the entries 2 to 4 by a symplectic Householder transformation. Hence S_1^{-1}BS_1 is of butterfly form except for a bulge of fill-in entries in the leading rows and columns of each of the four blocks. Now the algorithm given in Table 2 can be used to reduce S_1^{-1}BS_1 to butterfly form again. Making use of all the zeros in S_1^{-1}BS_1, the given algorithm simplifies greatly. The resulting algorithm requires O(n) floating point operations (+, -, *, /, sqrt) to restore the butterfly form.

Remark 11. In order to implement such an implicit butterfly SR step, we do not need to form the intermediate symplectic matrices, but can apply the elimination matrices G_k, L_k, and H_k directly to the parameters a_1, ..., a_n, b_1, ..., b_n, c_1, ..., c_n, d_2, ..., d_n. In that case, we could also work directly with the symplectic butterfly pencil B1 - lambda*B2 with B1, B2 as in (8), (9). In [ ], Banse also presents an algorithm to reduce a symplectic matrix pencil L - lambda*N, where L and N are symplectic matrices, to a symplectic butterfly pencil. As strict butterfly matrices are used there, that pencil is based on the factorization of Remark 7, involving the factor [[I, B11^{-1}B12],[0, I]]. Hence we cannot make direct use of this algorithm, as our symplectic butterfly pencil involves the factor [[0, -I],[I, B21^{-1}B22]] of Lemma 2; but an algorithm based on our form can be derived in a similar way.

Working with the parameters is similar to the parameterized SR algorithm given in [1], which is based on the parameterization of a symplectic J-Hessenberg matrix. That parameterization is determined by 4n - 1 parameters. Besides using nonorthogonal elimination matrices,
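The eight-entry sparsity pattern of p(B)e_1 is a direct consequence of the band structure of the butterfly form, and it is easy to confirm numerically. In the sketch below, the degree-four polynomial uses arbitrary real shift values of our own choosing (in an actual quadruple-shift step the shifts would be the quadruple mu, conj(mu), mu^{-1}, conj(mu)^{-1}):

```python
import numpy as np

def butterfly(a, b, c, d):
    n = len(a)
    T = np.diag(c) + np.diag(d, 1) + np.diag(d, -1)
    return np.block([[np.diag(b), np.diag(b) @ T - np.diag(1 / a)],
                     [np.diag(a), np.diag(a) @ T]])

n = 8
rng = np.random.default_rng(6)
B = butterfly(*(rng.uniform(1.0, 2.0, m) for m in (n, n, n, n - 1)))

I2n = np.eye(2 * n)
e1 = np.zeros(2 * n); e1[0] = 1.0

# degree-4 shift polynomial applied to e1 (shift values are illustrative)
x = (B - 2.0 * I2n) @ ((B - 0.5 * I2n) @ ((B - 3.0 * I2n) @ ((B - I2n / 3.0) @ e1)))

# only entries 1..4 and n+1..n+4 (1-based) can be nonzero
pattern = list(range(4)) + list(range(n, n + 4))
outside = [i for i in range(2 * n) if i not in pattern]
assert np.all(x[outside] == 0.0)
assert abs(x[3]) > 0 and abs(x[n + 3]) > 0   # the full pattern is attained
```

Each application of B widens the support of the vector by one position in each half, so a degree-four polynomial fills exactly four positions per half.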

in order to obtain the parameterized version in [1], the explicit inversion of some of the matrix elements is necessary. Therefore, this parameterized SR algorithm "is highly numerically unstable" [1]. We expect an implicit butterfly SR step to be more robust in the presence of roundoff errors, as such explicit inversions can be avoided.

4 A Symplectic Lanczos Method for Symplectic Matrices

In this section, we describe a symplectic Lanczos method to compute the unreduced butterfly form (10) of a symplectic matrix M. A symplectic Lanczos method for computing a strict butterfly matrix is given in [ ]. The usual nonsymmetric Lanczos algorithm generates two sequences of vectors. Due to the symplectic structure of M, it is easily seen that one of the two sequences can be eliminated here, and thus work and storage can essentially be halved. (This property is valid for a broader class of matrices; see [18].) In order to simplify the notation, we use in the following again the permuted versions of M and B given by

    M^P = P M P^T,   B^P = P B P^T,   S^P = P S P^T,   J^P = P J P^T,

with the permutation matrix P as in (5). We want to compute a symplectic matrix S such that S transforms the symplectic matrix M to a symplectic butterfly matrix B. In the permuted version, MS = SB yields

    M^P S^P = S^P B^P.                                                                       (24)

Equivalently, as B = B1 B2^{-1}, we can consider

    M^P S^P B2^P = S^P B1^P,                                                                 (25)

where B1^P = P B1 P^T and B2^P = P B2 P^T are the permuted versions of the factors

    B1 = [ diag(a1^{-1}, ..., an^{-1})  diag(b1, ..., bn) ]
         [ 0                            diag(a1, ..., an) ],                                 (26)

    B2 = [ T   I ]
         [ -I  0 ],                                                                          (27)

with T the symmetric tridiagonal matrix with diagonal entries c1, ..., cn and off-diagonal entries d2, ..., dn (cf. (8), (9)).

The structure preserving Lanczos method generates a sequence of permuted symplectic matrices (that is, the columns of S_P^{2n,2k} are J_P-orthogonal)

    S_P^{2n,2k} = [v_1, w_1, v_2, w_2, ..., v_k, w_k] in IR^{2n x 2k}

satisfying

    M_P S_P^{2n,2k} = S_P^{2n,2k} B_P^{2k,2k} + d_{k+1}(b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k}^T,   (8)

or equivalently, as B_P^{2k,2k} = (B_1^{2k,2k})_P (B_2^{2k,2k})_P^{-1} and e_{2k}^T (B_2^{2k,2k})_P = -e_{2k-1}^T, we have

    M_P S_P^{2n,2k} (B_2^{2k,2k})_P = S_P^{2n,2k} (B_1^{2k,2k})_P - d_{k+1}(b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k-1}^T.   (9)

Here, B_P^{2k,2k} = P_{2k} B^{2k,2k} P_{2k}^T is a permuted 2k x 2k symplectic butterfly matrix as in (), and (B_j^{2k,2k})_P = P_{2k} B_j^{2k,2k} P_{2k}^T, j = 1, 2, is a permuted 2k x 2k symplectic matrix of the form (), resp. (). The space spanned by the columns of S^{2n,2k} = P_{2n}^T S_P^{2n,2k} P_{2k} is J-orthogonal, since (S_P^{2n,2k})^T J_P^{2n} S_P^{2n,2k} = J_P^{2k}, where J_P^{2j} = P_{2j} J^{2j} P_{2j}^T and J^{2j} is a 2j x 2j matrix of the form (). The vector

    r_{k+1} := d_{k+1}(b_{k+1} v_{k+1} + a_{k+1} w_{k+1})

is the residual vector and is J_P-orthogonal to the columns of S_P^{2n,2k}, called Lanczos vectors. The matrix B_P^{2k,2k} = (J_P^{2k})^T (S_P^{2n,2k})^T J_P^{2n} M_P S_P^{2n,2k} is the J_P-orthogonal projection of M_P onto the range of S_P^{2n,2k}. Equation (8) (resp. (9)) defines a length 2k Lanczos factorization of M_P. If the residual vector r_{k+1} is the zero vector, then equation (8) (resp. (9)) is called a truncated Lanczos factorization if k < n. Note that theoretically, r_{n+1} must vanish since (S_P^{2n,2n})^T J_P^{2n} r_{n+1} = 0 and the columns of S_P^{2n,2n} form a J_P-orthogonal basis for IR^{2n}. In this case the symplectic Lanczos method computes a reduction to butterfly form.

Before developing the symplectic Lanczos method itself, we state the following theorem, which shows that the symplectic Lanczos factorization is completely specified by the starting vector v_1.

Theorem 1. Let two length 2k Lanczos factorizations be given by

    M_P S_P^{2n,2k} = S_P^{2n,2k} B_P^{2k,2k} + d_{k+1}(b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k}^T,
    M_P Ŝ_P^{2n,2k} = Ŝ_P^{2n,2k} B̂_P^{2k,2k} + d̂_{k+1}(b̂_{k+1} v̂_{k+1} + â_{k+1} ŵ_{k+1}) e_{2k}^T,

where S_P^{2n,2k}, Ŝ_P^{2n,2k} have J_P-orthogonal columns, and B_P^{2k,2k}, B̂_P^{2k,2k} are permuted unreduced symplectic butterfly matrices with

    (B_P^{2k,2k})_{jj} = (B̂_P^{2k,2k})_{jj} = 1,   |(B_P^{2k,2k})_{j+1,j}| = |(B̂_P^{2k,2k})_{j+1,j}| = 1,   for j = 1, 3, 5, ..., 2k - 1,

    (B_P^{2k,2k})_{j+1,j-1} > 0,   (B̂_P^{2k,2k})_{j+1,j-1} > 0,   for j = 3, 5, ..., 2k - 1,

and

    (J_P^{2k})^T (S_P^{2n,2k})^T J_P^{2n} (b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) = (J_P^{2k})^T (Ŝ_P^{2n,2k})^T J_P^{2n} (b̂_{k+1} v̂_{k+1} + â_{k+1} ŵ_{k+1}) = 0.

If the first columns of S_P^{2n,2k} and Ŝ_P^{2n,2k} are equal, then B_P^{2k,2k} = B̂_P^{2k,2k}, S_P^{2n,2k} = Ŝ_P^{2n,2k}, and

    d_{k+1}(b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) = d̂_{k+1}(b̂_{k+1} v̂_{k+1} + â_{k+1} ŵ_{k+1}).

Proof: This is a direct consequence of Theorem 1 e) and Remark .

Next we will see how the factorization (8) (resp. (9)) may be computed. As this reduction depends strongly on the first column of the transformation matrix that carries out the reduction, we must expect breakdown or near-breakdown in the Lanczos process, as they also occur in the reduction process to J-Hessenberg form, e.g., [10]. Assuming that no such breakdowns occur, a symplectic Lanczos method can be derived as follows. Let S_P = [v_1, w_1, v_2, w_2, ..., v_n, w_n]. For a given vector v_1, a Lanczos method constructs the matrix S_P columnwise from the equations

    M_P S_P (B_2)_P e_j = S_P (B_1)_P e_j,   j = 1, 2, ....

That is, for even numbered columns

    M_P v_m = b_m v_m + a_m w_m   ()
    ==>  a_m w_m = M_P v_m - b_m v_m =: w~_m   (0)

and for odd numbered columns

    a_m^{-1} v_m = M_P (d_m v_{m-1} + c_m v_m - w_m + d_{m+1} v_{m+1})   ()
    ==>  d_{m+1} v_{m+1} = -d_m v_{m-1} - c_m v_m + w_m + a_m^{-1} M_P^{-1} v_m =: v~_{m+1}.   (1)

Note that M_P^{-1} = -J_P M_P^T J_P, since M_P is symplectic. Thus M_P^{-1} v_m is just a matrix-vector product with the transpose of M_P. Now we have to choose the parameters a_m, b_m, c_m, d_{m+1} such that S_P^T J_P S_P = J_P is satisfied, that is, we have to choose the parameters such that v_{m+1}^T J_P w_{m+1} = 1. One possibility is to choose

    d_{m+1} = ||v~_{m+1}||_2,   a_{m+1} = v_{m+1}^T J_P M_P v_{m+1}.

Premultiplying v~_{m+1} by w_m^T J_P and using S_P^T J_P S_P = J_P yields

    c_m = -a_m^{-1} w_m^T J_P M_P^{-1} v_m = a_m^{-1} v_m^T J_P M_P w_m.

Thus we obtain the algorithm given in Table . There is still some freedom in the choice of the parameters that occur in this algorithm. Essentially, the parameters b_m can be chosen freely. Here we set b_m = 1. Likewise a different choice of the parameters a_m, d_m is possible.
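The recurrences just derived can be prototyped directly. The following NumPy sketch is our own illustration (it fixes b_m = 1, performs no breakdown handling, and builds a test matrix from elementary symplectic factors -- all assumptions for the demo, not prescriptions from the paper). It works in the permuted, interleaved coordinates, exploits M_P^{-1} = -J_P M_P^T J_P so that only products with M_P and M_P^T are needed, and checks that the computed columns are J_P-orthogonal.

```python
import numpy as np

def symplectic_lanczos(M_P, J_P, v_start, k, b=1.0):
    """Sketch of the symplectic Lanczos recurrences (b_m = b fixed).
    Returns S = [v_1, w_1, ..., v_k, w_k]; no breakdown handling."""
    M_inv = lambda x: -J_P @ (M_P.T @ (J_P @ x))  # M_P^{-1} x, M_P symplectic
    v = v_start / np.linalg.norm(v_start)         # v_1 = v~_1 / d_1
    v_prev, d, cols = np.zeros_like(v), 0.0, []
    for _ in range(k):
        a = v @ (J_P @ (M_P @ v))                 # a_m = v_m^T J_P M_P v_m
        w = (M_P @ v - b * v) / a                 # w_m from the w-recurrence
        c = (v @ (J_P @ (M_P @ w))) / a           # c_m = a_m^{-1} v_m^T J_P M_P w_m
        v_tilde = -d * v_prev - c * v + w + M_inv(v) / a
        cols += [v, w]
        d = np.linalg.norm(v_tilde)               # d_{m+1}
        v_prev, v = v, v_tilde / d                # v_{m+1}
    return np.column_stack(cols)

# Demo: a small symplectic M_P in the permuted (interleaved) coordinates.
n, k = 6, 3
rng = np.random.default_rng(1)
Z, I = np.zeros((n, n)), np.eye(n)
G = rng.standard_normal((n, n)); G = (G + G.T) / 2
F = rng.standard_normal((n, n)); F = (F + F.T) / 2
M = np.block([[I, Z], [G, I]]) @ np.block([[I, F], [Z, I]])  # symplectic
P = np.zeros((2 * n, 2 * n))
for j in range(n):
    P[2 * j, j] = P[2 * j + 1, n + j] = 1.0
J = np.block([[Z, I], [-I, Z]])
M_P, J_P = P @ M @ P.T, P @ J @ P.T

S = symplectic_lanczos(M_P, J_P, rng.standard_normal(2 * n), k)
J_2k = np.kron(np.eye(k), np.array([[0.0, 1.0], [-1.0, 0.0]]))
assert np.allclose(S.T @ J_P @ S, J_2k, atol=1e-6)  # columns J_P-orthogonal
```

Note that v_m^T J_P w_m = 1 holds by the very choice of a_m (since J_P is skew-symmetric, v^T J_P v = 0), while the off-pair entries of S^T J_P S vanish in exact arithmetic only; for larger k this is exactly where re-J-orthogonalization becomes necessary.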

Algorithm: Symplectic Lanczos method

    Choose an initial vector v~_1 in IR^{2n}, v~_1 != 0.
    Set v_0 = 0 in IR^{2n}.
    Set d_1 = ||v~_1||_2 and v_1 = (1/d_1) v~_1.
    for m = 1, 2, ... do
        (update of w_m)
            w~_m = M_P v_m - b_m v_m
            a_m = v_m^T J_P M_P v_m
            w_m = (1/a_m) w~_m
        (computation of c_m)
            c_m = a_m^{-1} v_m^T J_P M_P w_m
        (update of v_{m+1})
            v~_{m+1} = -d_m v_{m-1} - c_m v_m + w_m + a_m^{-1} M_P^{-1} v_m
            d_{m+1} = ||v~_{m+1}||_2
            v_{m+1} = (1/d_{m+1}) v~_{m+1}

Table : Symplectic Lanczos Method

Choosing b_m = 0, a different interpretation of the algorithm in Table can be given. The resulting butterfly matrix B = S^{-1} M S is of the form

    B = [ 0      -A ]
        [ A^{-1}  T ],

where A is a diagonal matrix and T is an unsymmetric tridiagonal matrix. As S^{-1} M S = B, we have S^{-1} M^{-1} S = B^{-1} and

    S^{-1}(M + M^{-1}) S = B + B^{-1} = [ T^T  0 ]
                                        [ 0    T ].

Obviously there is no need to compute both T and T^T. It is sufficient to compute the first n columns of S. This corresponds to computing the v_m in our algorithm. This case is not considered here any further. See also [].

Note that only one matrix-vector product is required for each computed Lanczos vector w_m or v_m. Thus an efficient implementation of this algorithm requires on the order of (nz + n)k flops(1), where nz is the number of nonzero elements in M_P and k is the number of Lanczos vectors

(1) Following [20], we define each floating point arithmetic operation together with the associated integer indexing as a flop.

computed (that is, the loop is executed k times). The algorithm as given in Table computes an odd number of Lanczos vectors; for a practical implementation one has to omit the computation of the last vector v_{k+1} (or one has to compute an additional vector w_{k+1}).

In the symplectic Lanczos method as given above we have to divide by parameters that may be zero or close to zero. If such a case occurs for the normalization parameter d_{m+1}, the corresponding vector v~_{m+1} is zero or close to the zero vector. In this case, a symplectic invariant subspace of M_P (or a good approximation to such a subspace) is detected. By redefining v~_{m+1} to be any vector satisfying

    v_j^T J_P v~_{m+1} = 0,   w_j^T J_P v~_{m+1} = 0,   for j = 1, ..., m,

the algorithm can be continued. The resulting butterfly matrix is no longer unreduced; the eigenproblem decouples into two smaller subproblems. In case w~_m is zero (or close to zero), an invariant subspace of M_P of dimension 2m - 1 is found (or a good approximation to such a subspace). From (0) it is easy to see that in this case the parameter a_m will be zero (or close to zero). Thus, if either v~_{m+1} or w~_{m+1} vanishes, the breakdown is benign. If v~_{m+1} != 0 and w~_{m+1} != 0 but a_{m+1} = 0, then the breakdown is serious: no reduction of the symplectic matrix to a symplectic butterfly matrix with v_1 as first column of the transformation matrix exists. On the other hand, an initial vector v_1 exists so that the symplectic Lanczos process does not encounter serious breakdown. However, determining this vector requires knowledge of the minimal polynomial of M_P. Thus, no algorithm for successfully choosing v_1 at the start of the computation yet exists. Furthermore, in theory, the above recurrences for v_m and w_m are sufficient to guarantee the J-orthogonality of these vectors. Yet, in practice, the J-orthogonality will be lost; re-J-orthogonalization is necessary, increasing the computational cost significantly.

The numerical difficulties of the symplectic Lanczos method described above are inherent to all Lanczos-like methods for nonsymmetric matrices. Different approaches to overcome these difficulties have been proposed. Taylor [36] and Parlett, Taylor, and Liu [32] were the first to propose a look-ahead Lanczos algorithm that skips over breakdowns and near-breakdowns. Freund, Gutknecht, and Nachtigal present in [19] a look-ahead Lanczos code that can handle look-ahead steps of any length. Banse adapted this method to the symplectic Lanczos method given in []. The price paid is that the resulting matrix is no longer of butterfly form, but has a small bulge in the butterfly form to mark each occurrence of a (near) breakdown. Unfortunately, so far there exists no eigenvalue method that can make use of that special reduced form.

A different approach to deal with the numerical difficulties of the Lanczos process is to modify the starting vectors by an implicitly restarted Lanczos process (see the fundamental work in [11, 35]). The problems are addressed by fixing the number of steps in the Lanczos process at a prescribed value k, which depends on the required number of approximate eigenvalues. J-orthogonality of the 2k Lanczos vectors is secured by re-J-orthogonalizing these vectors when necessary. The purpose of the implicit restart is to determine initial vectors such that the associated residual vectors are tiny. Given that a 2n x 2k matrix S_P^{2n,2k} is known such

that

    M_P S_P^{2n,2k} = S_P^{2n,2k} B_P^{2k,2k} + d_{k+1}(b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k}^T   ()

as in (8), an implicit Lanczos restart computes the Lanczos factorization

    M_P S̄_P^{2n,2k} = S̄_P^{2n,2k} B̄_P^{2k,2k} + d̄_{k+1}(b̄_{k+1} v̄_{k+1} + ā_{k+1} w̄_{k+1}) e_{2k}^T   ()

which corresponds to the starting vector

    v̄_1 = p(M_P) v_1

(where p(M_P) in IR^{2n x 2n} is a polynomial in M_P) without having to explicitly restart the Lanczos process with the vector v̄_1. Such an implicit restarting mechanism is derived in [] analogous to the technique introduced in [, 1, ].

Concluding Remarks

Several aspects of the recently proposed new condensed form for symplectic matrices, called the symplectic butterfly form [2, 3, 4], are considered in detail. The 2n x 2n symplectic butterfly form contains 8n - 4 nonzero entries and is determined by 4n - 1 parameters. The reduction to butterfly form can serve as a preparatory step for the SR algorithm, as the SR algorithm preserves the symplectic butterfly form in its iterations. Hence, its role is similar to that of the reduction of an arbitrary nonsymmetric matrix to upper Hessenberg form as a preparatory step for the QR algorithm. We have shown that an unreduced symplectic butterfly matrix in the context of the SR algorithm has properties similar to those of an unreduced upper Hessenberg matrix in the context of the QR algorithm. The SR algorithm not only preserves the symplectic butterfly form, but can be rewritten in terms of the 4n - 1 parameters that determine the symplectic butterfly form. Therefore, the symplectic structure, which will be destroyed in the numerical computation due to roundoff errors, can be restored in each iteration step. We have also briefly described an implicitly restarted symplectic Lanczos method which can be used to compute a few eigenvalues and eigenvectors of a symplectic matrix. The symplectic matrix is reduced to a symplectic butterfly matrix of lower dimension, whose eigenvalues can be used as approximations to the eigenvalues of the original matrix.

Acknowledgment

Part of the work on this paper was carried out while the second author was visiting the University of California at Santa Barbara, despite the fact that her office was located only yards from the beach. She would like to thank Alan Laub for making this visit possible. Both authors would like to thank David Watkins for many helpful suggestions which improved the paper significantly.

References

[1] G. S. Ammar and V. Mehrmann. On Hamiltonian and symplectic Hessenberg forms. Linear Algebra Appl., 149:55-72, 1991.

[2] G. Banse. Symplektische Eigenwertverfahren zur Lösung zeitdiskreter optimaler Steuerungsprobleme. Dissertation, Universität Bremen, Fachbereich 3 - Mathematik und Informatik, Bremen, Germany, 1995.
[3] G. Banse. Condensed forms for symplectic matrices and symplectic pencils in optimal control. ZAMM, 75(Suppl.), 1995.
[4] G. Banse and A. Bunse-Gerstner. A condensed form for the solution of the symplectic eigenvalue problem. In U. Helmke, R. Menniken, and J. Sauer, editors, Systems and Networks: Mathematical Theory and Applications. Akademie Verlag, 1994.
[5] P. Benner and H. Faßbender. An implicitly restarted symplectic Lanczos method for the symplectic eigenvalue problem. In preparation.
[6] P. Benner and H. Faßbender. An implicitly restarted symplectic Lanczos method for the Hamiltonian eigenvalue problem. Linear Algebra Appl., to appear. See also: Tech. Report SPC 95_28, Fak. f. Mathematik, TU Chemnitz-Zwickau, 09107 Chemnitz, FRG, 1995.
[7] J. R. Bunch. The weak and strong stability of algorithms in numerical algebra. Linear Algebra Appl., 88:49-66, 1987.
[8] A. Bunse-Gerstner. Matrix factorizations for symplectic QR-like methods. Linear Algebra Appl., 83:49-77, 1986.
[9] A. Bunse-Gerstner and L. Elsner. Schur parameter pencils for the solution of the unitary eigenproblem. Linear Algebra Appl., 154-156:741-778, 1991.
[10] A. Bunse-Gerstner and V. Mehrmann. A symplectic QR-like algorithm for the solution of the real algebraic Riccati equation. IEEE Trans. Automat. Control, AC-31:1104-1113, 1986.
[11] D. Calvetti, L. Reichel, and D. C. Sorensen. An implicitly restarted Lanczos method for large symmetric eigenvalue problems. Electr. Trans. Num. Anal., 2:1-21, March 1994.
[12] J. Della-Dora. Numerical linear algorithms and group theory. Linear Algebra Appl., 10:267-283, 1975.
[13] L. Elsner. On some algebraic problems in connection with general eigenvalue algorithms. Linear Algebra Appl., 26:123-138, 1979.
[14] L. Elsner and K. Ikramov. On a normal form for normal matrices under finite sequences of unitary similarities. Submitted for publication, 1997.
[15] H. Faßbender. On numerical methods for discrete least-squares approximation by trigonometric polynomials. Math. Comp., 66:719-741, 1997.
[16] U. Flaschka, V. Mehrmann, and D. Zywietz. An analysis of structure preserving methods for symplectic eigenvalue problems. RAIRO Automatique Productique Informatique Industrielle, 25:165-190, 1991.

[17] J. G. F. Francis. The QR transformation, part I and part II. Comput. J., 4:265-271 and 332-345, 1961.
[18] R. Freund. Transpose-free quasi-minimal residual methods for non-Hermitian linear systems. In G. Golub et al., editors, Recent Advances in Iterative Methods. Papers from the IMA workshop on iterative methods for sparse and structured problems, held in Minneapolis, MN, February-March 1, 1992, volume 60 of IMA Vol. Math. Appl., pages 69-94, New York, NY, 1994. Springer-Verlag.
[19] R. W. Freund, M. H. Gutknecht, and N. M. Nachtigal. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices. SIAM J. Sci. Comput., 14(1):137-158, January 1993.
[20] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 2nd edition, 1989.
[21] E. J. Grimme, D. C. Sorensen, and P. Van Dooren. Model reduction of state space systems via an implicitly restarted Lanczos method. Num. Alg., 12:1-31, 1996.
[22] D. Hinrichsen and N. K. Son. Stability radii of linear discrete-time systems and symplectic pencils. Int. J. Robust Nonlinear Control, 1:79-97, 1991.
[23] V. N. Kublanovskaja. On some algorithms for the solution of the complete eigenvalue problem. USSR Comput. Math. and Math. Phys., 1961.
[24] P. Lancaster and L. Rodman. The Algebraic Riccati Equation. Oxford University Press, Oxford, 1995.
[25] A. J. Laub. Invariant subspace methods for the numerical solution of Riccati equations. In S. Bittanti, A. J. Laub, and J. C. Willems, editors, The Riccati Equation, pages 163-196. Springer-Verlag, Berlin, 1991.
[26] W.-W. Lin. A new method for computing the closed loop eigenvalues of a discrete-time algebraic Riccati equation. Linear Algebra Appl., 96:157-180, 1987.
[27] V. Mehrmann. Der SR-Algorithmus zur Berechnung der Eigenwerte einer Matrix. Diplomarbeit, Universität Bielefeld, Bielefeld, FRG, 1979.
[28] V. Mehrmann. A symplectic orthogonal method for single input or single output discrete time optimal linear quadratic control problems. SIAM J. Matrix Anal. Appl., 9:221-248, 1988.
[29] V. Mehrmann. The Autonomous Linear Quadratic Control Problem, Theory and Numerical Solution. Number 163 in Lecture Notes in Control and Information Sciences. Springer-Verlag, Heidelberg, July 1991.
[30] C. C. Paige and C. F. Van Loan. A Schur decomposition for Hamiltonian matrices. Linear Algebra Appl., 41:11-32, 1981.

[31] T. Pappas, A. J. Laub, and N. R. Sandell. On the numerical solution of the discrete-time algebraic Riccati equation. IEEE Trans. Automat. Control, AC-25:631-641, 1980.
[32] B. N. Parlett, D. R. Taylor, and Z. A. Liu. A look-ahead Lanczos algorithm for unsymmetric matrices. Math. Comp., 44(169):105-124, January 1985.
[33] R. V. Patel. Computation of the stable deflating subspace of a symplectic pencil using structure preserving orthogonal transformations. In Proceedings of the 31st Annual Allerton Conference on Communication, Control and Computing, University of Illinois, 1993.
[34] R. V. Patel. On computing the eigenvalues of a symplectic pencil. Linear Algebra Appl., 188/189:591-611, 1993. See also: Proc. CDC-31, Tucson, AZ, 1992, pp. 1921-1926.
[35] D. C. Sorensen. Implicit application of polynomial filters in a k-step Arnoldi method. SIAM J. Matrix Anal. Appl., 13(1):357-385, January 1992.
[36] D. R. Taylor. Analysis of the look ahead Lanczos algorithm. PhD thesis, Center for Pure and Applied Mathematics, University of California, Berkeley, CA, 1982.
[37] C. F. Van Loan. A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix. Linear Algebra Appl., 61:233-251, 1984.
[38] D. S. Watkins. Some perspectives on the eigenvalue problem. SIAM Rev., 35:430-471, 1993.
[39] D. S. Watkins and L. Elsner. Chasing algorithms for the eigenvalue problem. SIAM J. Matrix Anal. Appl., 12:374-384, 1991.
[40] D. S. Watkins and L. Elsner. Convergence of algorithms of decomposition type for the eigenvalue problem. Linear Algebra Appl., 143:19-47, 1991.


Institute for Advanced Computer Studies. Department of Computer Science. Two Algorithms for the The Ecient Computation of University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{98{12 TR{3875 Two Algorithms for the The Ecient Computation of Truncated Pivoted QR Approximations

More information

The rate of convergence of the GMRES method

The rate of convergence of the GMRES method The rate of convergence of the GMRES method Report 90-77 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wiskunde en Informatica Faculty of Technical Mathematics

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis Volume 4, pp 64-74, June 1996 Copyright 1996, ISSN 1068-9613 ETNA A NOTE ON NEWBERY S ALGORITHM FOR DISCRETE LEAST-SQUARES APPROXIMATION BY TRIGONOMETRIC POLYNOMIALS

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv: v1 [math.

Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv: v1 [math. Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv:1307.186v1 [math.na] 8 Jul 013 Roberto Bevilacqua, Gianna M. Del Corso and Luca Gemignani

More information

Eigenvalue Problems and Singular Value Decomposition

Eigenvalue Problems and Singular Value Decomposition Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software

More information

Peter Deuhard. for Symmetric Indenite Linear Systems

Peter Deuhard. for Symmetric Indenite Linear Systems Peter Deuhard A Study of Lanczos{Type Iterations for Symmetric Indenite Linear Systems Preprint SC 93{6 (March 993) Contents 0. Introduction. Basic Recursive Structure 2. Algorithm Design Principles 7

More information

Total least squares. Gérard MEURANT. October, 2008

Total least squares. Gérard MEURANT. October, 2008 Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares

More information

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm References: Trefethen & Bau textbook Eigenvalue problem: given a matrix A, find

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

Block Bidiagonal Decomposition and Least Squares Problems

Block Bidiagonal Decomposition and Least Squares Problems Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

Outline Background Schur-Horn Theorem Mirsky Theorem Sing-Thompson Theorem Weyl-Horn Theorem A Recursive Algorithm The Building Block Case The Origina

Outline Background Schur-Horn Theorem Mirsky Theorem Sing-Thompson Theorem Weyl-Horn Theorem A Recursive Algorithm The Building Block Case The Origina A Fast Recursive Algorithm for Constructing Matrices with Prescribed Eigenvalues and Singular Values by Moody T. Chu North Carolina State University Outline Background Schur-Horn Theorem Mirsky Theorem

More information

Contents 1 Introduction 1 Preliminaries Singly structured matrices Doubly structured matrices 9.1 Matrices that are H-selfadjoint and G-selfadjoint...

Contents 1 Introduction 1 Preliminaries Singly structured matrices Doubly structured matrices 9.1 Matrices that are H-selfadjoint and G-selfadjoint... Technische Universitat Chemnitz Sonderforschungsbereich 9 Numerische Simulation auf massiv parallelen Rechnern Christian Mehl, Volker Mehrmann, Hongguo Xu Canonical forms for doubly structured matrices

More information

THE QR ALGORITHM REVISITED

THE QR ALGORITHM REVISITED THE QR ALGORITHM REVISITED DAVID S. WATKINS Abstract. The QR algorithm is still one of the most important methods for computing eigenvalues and eigenvectors of matrices. Most discussions of the QR algorithm

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

Institute for Advanced Computer Studies. Department of Computer Science. Iterative methods for solving Ax = b. GMRES/FOM versus QMR/BiCG

Institute for Advanced Computer Studies. Department of Computer Science. Iterative methods for solving Ax = b. GMRES/FOM versus QMR/BiCG University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{96{2 TR{3587 Iterative methods for solving Ax = b GMRES/FOM versus QMR/BiCG Jane K. Cullum

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Orthogonal iteration to QR

Orthogonal iteration to QR Notes for 2016-03-09 Orthogonal iteration to QR The QR iteration is the workhorse for solving the nonsymmetric eigenvalue problem. Unfortunately, while the iteration itself is simple to write, the derivation

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

2 DAVID S. WATKINS QR Past and Present. In this paper we discuss the family of GR algorithms, which includes the QR algorithm. The subject was born in

2 DAVID S. WATKINS QR Past and Present. In this paper we discuss the family of GR algorithms, which includes the QR algorithm. The subject was born in QR-like Algorithms for Eigenvalue Problems David S. Watkins Abstract. In the year 2000 the dominant method for solving matrix eigenvalue problems is still the QR algorithm. This paper discusses the family

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

The antitriangular factorisation of saddle point matrices

The antitriangular factorisation of saddle point matrices The antitriangular factorisation of saddle point matrices J. Pestana and A. J. Wathen August 29, 2013 Abstract Mastronardi and Van Dooren [this journal, 34 (2013) pp. 173 196] recently introduced the block

More information

Block Lanczos Tridiagonalization of Complex Symmetric Matrices

Block Lanczos Tridiagonalization of Complex Symmetric Matrices Block Lanczos Tridiagonalization of Complex Symmetric Matrices Sanzheng Qiao, Guohong Liu, Wei Xu Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7 ABSTRACT The classic

More information

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE BRANKO CURGUS and BRANKO NAJMAN Denitizable operators in Krein spaces have spectral properties similar to those of selfadjoint operators in Hilbert spaces.

More information

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop Perturbation theory for eigenvalues of Hermitian pencils Christian Mehl Institut für Mathematik TU Berlin, Germany 9th Elgersburg Workshop Elgersburg, March 3, 2014 joint work with Shreemayee Bora, Michael

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

KEYWORDS. Numerical methods, generalized singular values, products of matrices, quotients of matrices. Introduction The two basic unitary decompositio

KEYWORDS. Numerical methods, generalized singular values, products of matrices, quotients of matrices. Introduction The two basic unitary decompositio COMPUTING THE SVD OF A GENERAL MATRIX PRODUCT/QUOTIENT GENE GOLUB Computer Science Department Stanford University Stanford, CA USA golub@sccm.stanford.edu KNUT SLNA SC-CM Stanford University Stanford,

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1 Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1 B. N. Datta, IEEE Fellow 2 D. R. Sarkissian 3 Abstract Two new computationally viable algorithms are proposed for

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3 Chasing the Bulge Sebastian Gant 5/9/207 Contents Precursers and Motivation 2 The Reduction to Hessenberg Form 3 3 The Algorithm 5 4 Concluding Remarks 8 5 References 0 ntroduction n the early days of

More information

Begin accumulation of transformation matrices. This block skipped when i=1. Use u and u/h stored in a to form P Q.

Begin accumulation of transformation matrices. This block skipped when i=1. Use u and u/h stored in a to form P Q. 11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix 475 f += e[j]*a[i][j]; hh=f/(h+h); Form K, equation (11.2.11). for (j=1;j

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

On the reduction of matrix polynomials to Hessenberg form

On the reduction of matrix polynomials to Hessenberg form Electronic Journal of Linear Algebra Volume 3 Volume 3: (26) Article 24 26 On the reduction of matrix polynomials to Hessenberg form Thomas R. Cameron Washington State University, tcameron@math.wsu.edu

More information

Singular-value-like decomposition for complex matrix triples

Singular-value-like decomposition for complex matrix triples Singular-value-like decomposition for complex matrix triples Christian Mehl Volker Mehrmann Hongguo Xu December 17 27 Dedicated to William B Gragg on the occasion of his 7th birthday Abstract The classical

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Chapter 6. Algebraic eigenvalue problems Introduction Introduction 113. Das also war des Pudels Kern!

Chapter 6. Algebraic eigenvalue problems Introduction Introduction 113. Das also war des Pudels Kern! 6.0. Introduction 113 Chapter 6 Algebraic eigenvalue problems Das also war des Pudels Kern! GOETHE. 6.0. Introduction Determination of eigenvalues and eigenvectors of matrices is one of the most important

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2 The QR algorithm The most common method for solving small (dense) eigenvalue problems. The basic algorithm: QR without shifts 1. Until Convergence Do: 2. Compute the QR factorization A = QR 3. Set A :=

More information

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x Technical Report CS-93-08 Department of Computer Systems Faculty of Mathematics and Computer Science University of Amsterdam Stability of Gauss-Huard Elimination for Solving Linear Systems T. J. Dekker

More information

Jurgen Garlo. the inequality sign in all components having odd index sum. For these intervals in

Jurgen Garlo. the inequality sign in all components having odd index sum. For these intervals in Intervals of Almost Totally Positive Matrices Jurgen Garlo University of Applied Sciences / FH Konstanz, Fachbereich Informatik, Postfach 100543, D-78405 Konstanz, Germany Abstract We consider the class

More information

Exponential Decomposition and Hankel Matrix

Exponential Decomposition and Hankel Matrix Exponential Decomposition and Hankel Matrix Franklin T Luk Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong luk@csecuhkeduhk Sanzheng Qiao Department

More information

EXPLICIT BLOCK-STRUCTURES FOR BLOCK-SYMMETRIC FIEDLER-LIKE PENCILS

EXPLICIT BLOCK-STRUCTURES FOR BLOCK-SYMMETRIC FIEDLER-LIKE PENCILS EXPLICIT BLOCK-STRUCTURES FOR BLOCK-SYMMETRIC FIEDLER-LIKE PENCILS M I BUENO, M MARTIN, J PÉREZ, A SONG, AND I VIVIANO Abstract In the last decade, there has been a continued effort to produce families

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

ETNA Kent State University

ETNA Kent State University C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information