where J " 0 I n?i n 0 # () and I n is the nn identity matrix (Note that () is in general not equivalent to L T JL N T JN) In most applications system-

Similar documents
Theorem Let X be a symmetric solution of DR(X) = 0. Let X b be a symmetric approximation to Xand set V := X? X. b If R b = R + B T XB b and I + V B R

Index. for generalized eigenvalue problem, butterfly form, 211

The restarted QR-algorithm for eigenvalue computation of structured matrices

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

An SVD-Like Matrix Decomposition and Its Applications

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries

Contents 1 Introduction and Preliminaries 1 Embedding of Extended Matrix Pencils 3 Hamiltonian Triangular Forms 1 4 Skew-Hamiltonian/Hamiltonian Matri

On the influence of eigenvalues on Bi-CG residual norms

Matrices, Moments and Quadrature, cont d

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

Numerical Methods in Matrix Computations

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Solving large scale eigenvalue problems

Nonlinear palindromic eigenvalue problems and their numerical solution

Solving large scale eigenvalue problems

THE RELATION BETWEEN THE QR AND LR ALGORITHMS

ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1. Department of Mathematics. and Lawrence Berkeley Laboratory

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

ETNA Kent State University

On Solving Large Algebraic. Riccati Matrix Equations

Scientific Computing: An Introductory Survey

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

ON THE REDUCTION OF A HAMILTONIAN MATRIX TO HAMILTONIAN SCHUR FORM

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

ANONSINGULAR tridiagonal linear system of the form

Solving large Hamiltonian eigenvalue problems

Arnoldi Methods in SLEPc

Convergence Analysis of Structure-Preserving Doubling Algorithms for Riccati-Type Matrix Equations

p q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q

Math 504 (Fall 2011) 1. (*) Consider the matrices

arxiv: v1 [math.na] 24 Jan 2019

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

The Lanczos and conjugate gradient algorithms

11.0 Introduction. An N N matrix A is said to have an eigenvector x and corresponding eigenvalue λ if. A x = λx (11.0.1)

Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras

Foundations of Matrix Analysis

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices

Two Results About The Matrix Exponential

Positive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract

Solution of Linear Equations

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

Model Reduction of State Space. Systems via an Implicitly. Restarted Lanczos Method. E.J. Grimme. D.C. Sorensen. P. Van Dooren.

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

Main matrix factorizations

Institute for Advanced Computer Studies. Department of Computer Science. Two Algorithms for the The Ecient Computation of

The rate of convergence of the GMRES method

ETNA Kent State University

Eigenvalue and Eigenvector Problems

Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv: v1 [math.

Eigenvalue Problems and Singular Value Decomposition

Peter Deuhard. for Symmetric Indenite Linear Systems

Total least squares. Gérard MEURANT. October, 2008

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

Block Bidiagonal Decomposition and Least Squares Problems

APPLIED NUMERICAL LINEAR ALGEBRA

Outline Background Schur-Horn Theorem Mirsky Theorem Sing-Thompson Theorem Weyl-Horn Theorem A Recursive Algorithm The Building Block Case The Origina

Contents 1 Introduction 1 Preliminaries Singly structured matrices Doubly structured matrices 9.1 Matrices that are H-selfadjoint and G-selfadjoint...

THE QR ALGORITHM REVISITED

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Institute for Advanced Computer Studies. Department of Computer Science. Iterative methods for solving Ax = b. GMRES/FOM versus QMR/BiCG

Iterative methods for Linear System

Orthogonal iteration to QR

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

2 DAVID S. WATKINS QR Past and Present. In this paper we discuss the family of GR algorithms, which includes the QR algorithm. The subject was born in

AMS526: Numerical Analysis I (Numerical Linear Algebra)

The antitriangular factorisation of saddle point matrices

Block Lanczos Tridiagonalization of Complex Symmetric Matrices

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

KEYWORDS. Numerical methods, generalized singular values, products of matrices, quotients of matrices. Introduction The two basic unitary decompositio

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

6.4 Krylov Subspaces and Conjugate Gradients

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1

Eigenvalues and Eigenvectors

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3

Begin accumulation of transformation matrices. This block skipped when i=1. Use u and u/h stored in a to form P Q.

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

Linear Algebra Review

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

M.A. Botchev. September 5, 2014

On the reduction of matrix polynomials to Hessenberg form

Singular-value-like decomposition for complex matrix triples

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

Numerical Methods - Numerical Linear Algebra

Chapter 6. Algebraic eigenvalue problems Introduction Introduction 113. Das also war des Pudels Kern!

Numerical Linear Algebra

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x

Jurgen Garlo. the inequality sign in all components having odd index sum. For these intervals in

Exponential Decomposition and Hankel Matrix

EXPLICIT BLOCK-STRUCTURES FOR BLOCK-SYMMETRIC FIEDLER-LIKE PENCILS

Eigenvalues and eigenvectors

ETNA Kent State University

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

Transcription:

The Symplectic Eigenvalue Problem, the Butterfly Form, the SR Algorithm, and the Lanczos Method

Peter Benner*    Heike Faßbender†

August 1, 199

Abstract

We discuss some aspects of the recently proposed symplectic butterfly form, which is a condensed form for symplectic matrices. Any 2n × 2n symplectic matrix can be reduced to this condensed form, which contains 8n − 4 nonzero entries and is determined by 4n − 1 parameters. The symplectic eigenvalue problem can be solved using the SR algorithm based on this condensed form. The SR algorithm preserves this form, and it can be modified to work only with the 4n − 1 parameters instead of the 4n² matrix elements. The reduction of symplectic matrices to symplectic butterfly form has a close analogy to the reduction of arbitrary matrices to Hessenberg form. A Lanczos-like algorithm for reducing a symplectic matrix to butterfly form is also presented.

Key words: butterfly form, symplectic Lanczos method, symplectic matrix, eigenvalues

AMS(MOS) subject classifications: F1, F0

1 Introduction

The computation of eigenvalues and eigenvectors or deflating subspaces of symplectic pencils/matrices is an important task in applications like discrete linear-quadratic regulator problems, discrete Kalman filtering, computation of discrete stability radii, and the problem of solving discrete-time algebraic Riccati equations. See, e.g., [,,, 9] for applications and further references. A matrix M ∈ R^{2n×2n} is called symplectic (or J-orthogonal) if

    M J M^T = J    (1)

(or, equivalently, M^T J M = J), and a symplectic matrix pencil L − λN, L, N ∈ R^{2n×2n}, is defined by the property

    L J L^T = N J N^T,    (2)

----
* Universität Bremen, Fachbereich 3 – Mathematik und Informatik, 8 Bremen, FRG. E-mail: peter@mathematik.uni-bremen.de. This work was completed while this author was with the Technische Universität Chemnitz–Zwickau and was supported by Deutsche Forschungsgemeinschaft, research grant Me 90/1 "Singuläre Steuerungsprobleme".
† Universität Bremen, Fachbereich 3 – Mathematik und Informatik, 8 Bremen, FRG. E-mail: heike@mathematik.uni-bremen.de
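Definition (1) is easy to exercise numerically. The sketch below is our own illustration (the factors and parameter values are arbitrary choices, not from the paper): it assembles a symplectic matrix as a product of elementary symplectic factors and verifies both (1) and the reciprocal pairing of the eigenvalues discussed below.

```python
import numpy as np

def J(n):
    """The 2n x 2n matrix J = [[0, I],[-I, 0]]."""
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

def random_symplectic(n, rng):
    """Assemble a symplectic matrix as a product of elementary symplectic
    factors [[I,G],[0,I]], [[I,0],[F,I]] (G, F symmetric) and
    [[D,0],[0,D^-1]] (D diagonal); each factor satisfies (1), hence so
    does the product.  Illustrative construction only."""
    G = rng.standard_normal((n, n)); G = G + G.T
    F = rng.standard_normal((n, n)); F = F + F.T
    d = rng.uniform(0.5, 2.0, n)
    I, Z = np.eye(n), np.zeros((n, n))
    M1 = np.block([[I, G], [Z, I]])
    M2 = np.block([[I, Z], [F, I]])
    M3 = np.block([[np.diag(d), Z], [Z, np.diag(1.0 / d)]])
    return M1 @ M2 @ M3

rng = np.random.default_rng(0)
n = 4
M = random_symplectic(n, rng)
Jn = J(n)

# Definition (1): M J M^T = J
print(np.allclose(M @ Jn @ M.T, Jn))                      # True

# The exact eigenvalues occur in reciprocal pairs lambda, 1/lambda
ev = np.linalg.eigvals(M)
recip_ok = all(np.min(np.abs(ev - 1.0 / lam)) < 1e-6 for lam in ev)
print(recip_ok)                                           # True
```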

where J " 0 I n?i n 0 # () and I n is the nn identity matrix (Note that () is in general not equivalent to L T JL N T JN) In most applications system-theoretic conditions are satised, which guarantee the existence of an n-dimensional invariant subspace (resp deating subspace) corresponding to the eigenvalues of the symplectic matrix M (resp the symplectic pencil L? N) inside the open unit disk This is the subspace one wishes to compute The solution of the (generalized) symplectic eigenvalue problem with small and dense coecient matrices has been the topic of numerous publications during the last 0 years Even for these problems a numerically sound method, ie, a strongly backward stable method in the sense of [], is yet not known The numerical computation of a deating subspace is usually carried out by an iterative procedure like the QZ algorithm which transforms L? N into a generalized Schur form, from which the deating subspace can be read o See, eg, [9, 1] The QZ algorithm is numerically backward stable but it ignores the symplectic structure Thus the computed eigenvalues will in general not come in reciprocal pairs, although the exact eigenvalues have this property Even worse, small perturbations may cause eigenvalues close to the unit disk to cross the unit circle such that the number of true and computed eigenvalues inside the open unit disk may dier Hence it is crucial to make use of the symplectic structure Dierent structure-preserving methods which avoid the above mentioned problems have been proposed Mehrmann [8] describes a symplectic QZ algorithm This algorithm has all desirable properties, but its applicability is limited to the single input/output case due to the lacking reduction to symplectic J{Hessenberg form in the general case [1] In [], Lin uses the S +S?1 - transformation in order to solve the symplectic eigenvalue problem But the method cannot be used to compute eigenvectors and/or invariant subspaces atel [] shows that these ideas can also be 
used to derive a structure-preserving method for the generalized symplectic eigenvalue problem similar to Van Loan's square-reduced method for the Hamiltonian eigenvalue problem [] Based on the multishift idea presented in [1], he also describes a method working on a condensed symplectic pencil using implicit QZ steps to compute the stable deating subspace of a symplectic pencil [] Using the analogy to the continuous-time case, ie, Hamiltonian eigenvalue problems, Flaschka, Mehrmann, and Zywietz show in [1] how to construct structure-preserving methods for the symplectic eigenproblem based on the SR method [1, ] This method is a QR-like method based on the SR decomposition In an initial step, the symplectic matrix is reduced to a more condensed form, the symplectic J-Hessenberg form As in the general framework of GR algorithms [0], the SR iteration preserves the symplectic J-Hessenberg form at each step and is supposed to converge to a form from which eigenvalues and deating subspaces can be read o The authors note that \the resulting methods have signicantly worse numerical properties than their corresponding analogues in the Hamiltonian case" [1, abstract] Recently, Banse and Bunse-Gerstner [,, ] presented a new condensed form for symplectic matrices which can be computed by an elimination process using elementary unitary and symplectic similarity transformations The n n condensed matrix is symplectic, contains 8n? nonzero entries, and is determined by n? 1 parameters This condensed form, called

symplectic butterfly form, can be depicted as a symplectic matrix of the following block form:

    [ diagonal   tridiagonal ]
    [ diagonal   tridiagonal ]

The reduction of a symplectic matrix to butterfly form, and also the existence of a numerically stable method to compute this reduction, depends strongly on the first column of the transformation matrix that carries out the transformation. Once the reduction to butterfly form is achieved, the SR algorithm [1, ] is a suitable tool for computing the eigenvalues/eigenvectors of a symplectic matrix. It preserves the butterfly form in its iterations and can be rewritten in a parameterized form that works with the 4n − 1 parameters instead of the 4n² matrix elements in each iteration. Hence the symplectic structure, which will be destroyed in the numerical process due to roundoff errors, can easily be restored in each iteration for this condensed form.

In [], a strict butterfly matrix is introduced, in which the upper left diagonal block of the butterfly form is nonsingular. A strict butterfly matrix can be factored as

    [ diagonal  0        ] [ I  tridiagonal ]
    [ diagonal  diagonal ] [ 0  I           ].

We will introduce an unreduced butterfly form, in which the lower right tridiagonal block is unreduced. An unreduced butterfly matrix can be factored as

    [ diagonal  diagonal ] [ 0  −I          ]
    [ 0         diagonal ] [ I  tridiagonal ].

Any unreduced butterfly matrix is similar to a strict butterfly matrix, but not vice versa. We will show that unreduced butterfly matrices have certain desirable properties which are helpful when examining the properties of the SR algorithm based on the butterfly form; a strict butterfly matrix does not necessarily have these properties.

In [, ] an elimination process for computing the butterfly form of a symplectic matrix is given which uses elementary unitary symplectic transformations as well as non-unitary symplectic transformations. Here, we also consider a structure-preserving symplectic Lanczos method which creates the symplectic butterfly form if no breakdown occurs. Such a symplectic Lanczos method will suffer from the well-known numerical difficulties inherent to any Lanczos method for nonsymmetric matrices. In [], a symplectic look-ahead Lanczos algorithm is presented which overcomes breakdown by giving up the strict butterfly form. Unfortunately, so far there do not exist eigenvalue methods that can make use of that special reduced form; standard eigenvalue methods such as QR or SR have to be employed, resulting in a full symplectic matrix after only a few iteration steps. We propose to employ an implicit restart technique instead of a look-ahead mechanism in order to deal with the numerical difficulties of the symplectic Lanczos method. This approach is based on the fundamental work of Sorensen [].

In Section 2, existence and uniqueness of the reduction of a symplectic matrix to butterfly form are reviewed. Unreduced butterfly matrices are introduced and their properties are presented. An SR algorithm based on the symplectic butterfly form is discussed in Section 3. The

symplectic Lanczos method which reduces a symplectic matrix to butterfly form is derived in Section 4, where we also give the basic idea of an implicit restart for such a Lanczos process.

2 The Symplectic Butterfly Form

Here we review the known results on existence and uniqueness of the reduction of a symplectic matrix to butterfly form and derive some new properties showing the analogy of the butterfly form to the Hessenberg form in generic chasing algorithms. As the reduction of a general matrix to upper Hessenberg form serves as a preparatory step for the QR algorithm, the reduction of a symplectic matrix to butterfly form can be used as a preparatory step for the SR algorithm. We will state results corresponding to those in the Hessenberg/QR case for the symplectic butterfly form and the SR algorithm. Our main concern are symplectic matrices and the symplectic butterfly form, but we will briefly mention how the results presented here can be used for symplectic matrix pencils.

In order to state results concerning existence and uniqueness of the reduction of a symplectic matrix to symplectic butterfly form, we need the following definitions. A matrix

    A = [ A11  A12 ]  ∈ R^{2n×2n}
        [ A21  A22 ]

is called a J-triangular matrix if A11, A12, A21, A22 ∈ R^{n×n} are upper triangular matrices and A21 has a zero main diagonal. For a vector v1 ∈ R^{2n} and M ∈ R^{2n×2n} define the generalized Krylov matrix

    K(M, v1, ℓ) := [v1, M^{−1}v1, M^{−2}v1, …, M^{−(ℓ−1)}v1, Mv1, M²v1, …, M^ℓ v1].    (4)

(Note the similarity of this generalized Krylov matrix to the generalized ones in [9, 1, 1, 8].) Further, let P be the permutation matrix

    P = [e1, e3, …, e_{2n−1}, e2, e4, …, e_{2n}] ∈ R^{2n×2n}.    (5)

Theorem 1 Let X be a 2n × 2n nonsingular matrix. Let M and S be 2n × 2n symplectic matrices and denote by v1 the first column of S.

a) There exists a 2n × 2n symplectic matrix S and a J-triangular matrix R such that X = SR if and only if all leading principal minors of even dimension of X^T J X are nonzero.

b) Let X = SR and X = S̃R̃ be SR factorizations of X. Then there exists a symplectic matrix

    D = [ C  F      ]    (6)
        [ 0  C^{−1} ],

where C = diag(c1, …, cn) and F = diag(f1, …, fn), such that S̃ = SD^{−1} and R̃ = DR.
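The generalized Krylov matrix (4) is straightforward to form numerically. A minimal sketch (our own illustration; the symplectic test matrix is an arbitrary choice, not from the paper):

```python
import numpy as np

def krylov_K(M, v, ell):
    """Generalized Krylov matrix (4):
    K(M, v, ell) = [v, M^-1 v, ..., M^-(ell-1) v, M v, M^2 v, ..., M^ell v],
    a (2n) x (2*ell) matrix built from negative and positive powers of M."""
    Minv = np.linalg.inv(M)
    neg = [v]
    for _ in range(ell - 1):
        neg.append(Minv @ neg[-1])      # columns with negative powers of M
    pos = [M @ v]
    for _ in range(ell - 1):
        pos.append(M @ pos[-1])         # columns with positive powers of M
    return np.column_stack(neg + pos)

n = 3
I, Z = np.eye(n), np.zeros((n, n))
G = np.array([[1., 2., 0.], [2., 0., 1.], [0., 1., 3.]])   # symmetric
F = np.array([[0., 1., 0.], [1., 2., 1.], [0., 1., 1.]])   # symmetric
M = np.block([[I, G], [Z, I]]) @ np.block([[I, Z], [F, I]])  # symplectic

K = krylov_K(M, np.eye(2 * n)[:, 0], n)   # K(M, e1, n) is square, 2n x 2n
print(K.shape)                             # (6, 6)
print(np.linalg.matrix_rank(K))
```

For ℓ = n this matrix is square; by Theorem 1 its nonsingularity and the existence of its SR decomposition govern the reduction to butterfly form.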

c) Let K(M, v1, n) be nonsingular. If K(M, v1, n) = SR is an SR decomposition, then S^{−1}MS is a butterfly matrix.

d) If S^{−1}MS = B is a symplectic butterfly matrix, then K(M, v1, n) has an SR decomposition K(M, v1, n) = SR.

e) Let S, S̃ ∈ R^{2n×2n} be symplectic matrices such that S^{−1}MS = B and S̃^{−1}MS̃ = B̃ are butterfly matrices. Then there exists a symplectic matrix D as in (6) such that S = S̃D and B = D B̃ D^{−1}.

Proof: For the original statement and proof of a) see Theorem 11 in [1]. For the original statement and proof of b) see Proposition in [10]. For the original statement and proof of c), d), and e) see Theorem in []. □

The theorem introduces the SR decomposition of a matrix X. The SR decomposition has been studied, e.g., in [8, 10, 1]. Theorem 1 e) shows that the transformation to butterfly form is unique up to scaling with a matrix D as in (6). From the proof of c) it follows that the tridiagonal matrix in the lower right corner of the butterfly form is an unreduced tridiagonal matrix, that is, none of the upper and lower subdiagonal elements are zero. Similarly, one needs that these elements are nonzero to show in d) that R is nonsingular. Because of this we will call a symplectic matrix B ∈ R^{2n×2n} an unreduced butterfly matrix if

    B = [ B1  B2 ]    (7)
        [ B3  B4 ],

where B1, B3 ∈ R^{n×n} are diagonal matrices, B2, B4 ∈ R^{n×n} are tridiagonal matrices, and B4 is unreduced, that is, its subdiagonal elements are nonzero.

Lemma 2 If B as in (7) is an unreduced butterfly matrix, then B3 is nonsingular and B can be factored as

    [ B1  B2 ]   [ B3^{−1}  B1 ] [ 0  −I          ]
    [ B3  B4 ] = [ 0        B3 ] [ I  B3^{−1}B4 ].

This factorization is unique. Note that B3^{−1}B4 is symmetric.
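A minimal numerical sketch of this factorization (parameter values and variable names are our own, not from the paper): assemble the two factors with diagonal blocks diag(a_i), diag(b_i) and a symmetric tridiagonal block T, multiply them, and confirm that the product is symplectic with the butterfly zero pattern.

```python
import numpy as np

def butterfly(a, b, T):
    """Assemble B = [[B3^-1, B1],[0, B3]] @ [[0, -I],[I, T]] as in the
    factorization of the lemma, with B1 = diag(b), B3 = diag(a) and
    T = B3^-1 B4 symmetric tridiagonal.  a, b, T are illustrative choices."""
    n = len(a)
    I, Z = np.eye(n), np.zeros((n, n))
    F1 = np.block([[np.diag(1.0 / a), np.diag(b)], [Z, np.diag(a)]])
    F2 = np.block([[Z, -I], [I, T]])
    return F1 @ F2

n = 3
a = np.array([2., -1., 3.])                  # nonzero diagonal of B3
b = np.array([1., 2., 1.])                   # diagonal of B1
T = (np.diag([.5, 1., .5]) + np.diag([1., 2.], 1) + np.diag([1., 2.], -1))
B = butterfly(a, b, T)

Jn = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
print(np.allclose(B.T @ Jn @ B, Jn))         # symplectic: True
print(np.allclose(B[:n, :n], np.diag(b)),    # B1 is diagonal: True
      np.allclose(B[n:, :n], np.diag(a)))    # B3 is diagonal: True
```

Since both factors are symplectic by construction, the product is automatically symplectic; the check on the blocks confirms the butterfly pattern of (7).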

Proof: The fact that B is symplectic implies B1B4 − B3B2 = I. Assume that B3 is singular, that is, (B3)_jj = 0 for some j. Then the jth row of B1B4 − B3B2 = I gives

    (B1)_jj (B4)_{j,j−1} = 0,   (B1)_jj (B4)_{jj} = 1,   (B1)_jj (B4)_{j,j+1} = 0.

This can only happen for (B1)_jj ≠ 0, (B4)_jj ≠ 0, and (B4)_{j,j−1} = (B4)_{j,j+1} = 0; but B4 is unreduced. Hence B3 has to be nonsingular if B4 is unreduced. Thus, for an unreduced butterfly matrix we obtain

    [ B3  −B1     ] [ B1  B2 ]   [ 0  −I          ]
    [ 0   B3^{−1} ] [ B3  B4 ] = [ I  B3^{−1}B4 ].

As both matrices on the left are symplectic, their product is symplectic, and hence B3^{−1}B4 has to be a symmetric tridiagonal matrix. Thus

    [ B1  B2 ]   [ B3^{−1}  B1 ] [ 0  −I          ]
    [ B3  B4 ] = [ 0        B3 ] [ I  B3^{−1}B4 ].

The uniqueness of this factorization follows from the choice of signs in the identities. □

We will frequently make use of this decomposition and will denote it by B = B^(1) B^(2), where

    B^(1) = [ diag(a1^{−1}, …, an^{−1})  diag(b1, …, bn) ]    (8)
            [ 0                          diag(a1, …, an) ],

    B^(2) = [ 0  −I ]    (9)
            [ I   T ],

with T the symmetric tridiagonal matrix with diagonal entries c1, …, cn and off-diagonal entries d2, …, dn, so that

    B = B^(1) B^(2) = [ B1  B2 ]    (10)
                      [ B3  B4 ],

where

    B1 = diag(b1, …, bn),   B3 = diag(a1, …, an),
    (B2)_{ii} = bi ci − ai^{−1},   (B2)_{i,i+1} = bi d_{i+1},   (B2)_{i+1,i} = b_{i+1} d_{i+1},
    (B4)_{ii} = ai ci,             (B4)_{i,i+1} = ai d_{i+1},   (B4)_{i+1,i} = a_{i+1} d_{i+1}.

From (8) – (10) we obtain

Corollary 3 Any unreduced butterfly matrix B ∈ R^{2n×2n} can be represented by 4n − 1 parameters:

    a1, …, an, d2, …, dn ∈ R \ {0},   b1, …, bn, c1, …, cn ∈ R.

Remark 4 Any unreduced butterfly matrix is similar to an unreduced butterfly matrix with b_i = 1 and |a_i| = 1 for i = 1, …, n and sign(a_i) = sign(d_i) for i = 2, …, n (this follows from Theorem 1 e)).

Remark 5 We will have deflation if d_j = 0 for some j. Then the eigenproblem can be split into two smaller ones with unreduced symplectic butterfly matrices.

The next result is well known for Hessenberg matrices (e.g., [0, Theorem ]) and will turn out to be essential when examining the properties of the SR algorithm based on the butterfly form.

Lemma 6 If λ is an eigenvalue of an unreduced symplectic butterfly matrix B ∈ R^{2n×2n}, then its geometric multiplicity is one.

Proof: Since B is symplectic, B is nonsingular and its eigenvalues are nonzero. For any λ ∈ C we have rank(B − λI) = 2n − 1, because the first 2n − 1 columns of B − λI are linearly independent. This can be seen by looking at the permuted expression

    P(B − λI)P^T = P B P^T − λI =

    [ b1−λ   b1c1−a1^{−1}   0      b1d2                                        ]
    [ a1     a1c1−λ         0      a1d2                                        ]
    [ 0      b2d2           b2−λ   b2c2−a2^{−1}   0      b2d3                  ]
    [ 0      a2d2           a2     a2c2−λ         0      a2d3                  ]
    [                                  ⋱                                       ]
    [                       0      bn dn          bn−λ   bncn−an^{−1}          ]
    [                       0      an dn          an     ancn−λ                ].

Obviously, the first two columns of the above matrix are linearly independent, as B is unreduced. We cannot express the third column as a linear combination of the first two columns: suppose

    [ 0    ]       [ b1−λ ]       [ b1c1−a1^{−1} ]
    [ 0    ]       [ a1   ]       [ a1c1−λ       ]
    [ b2−λ ]  = α  [ 0    ]  + β  [ b2d2         ]
    [ a2   ]       [ 0    ]       [ a2d2         ]
    [ ⋮    ]       [ ⋮    ]       [ ⋮            ].

From the fourth row we obtain β = d2^{−1}. With this, the third row yields b2 − λ = b2. As λ is an eigenvalue of B and is therefore nonzero, this equation cannot hold. Hence the first three columns are linearly independent. Similarly, we can see that the first 2n − 1 columns are linearly independent. □

Hence, the eigenspaces are one-dimensional.

Remark 7 In [] a slightly different point of view is taken in order to argue that a butterfly matrix can be represented by 4n − 1 parameters. There, a strict butterfly form is introduced, in which the upper left diagonal block B1 of the butterfly form is nonsingular. Then, using similar arguments as above, since

    [ B1^{−1}  0  ] [ B1  B2 ]   [ I  B1^{−1}B2 ]
    [ −B3      B1 ] [ B3  B4 ] = [ 0  I         ]

and B1^{−1}B2 is a symmetric tridiagonal matrix (same argument as used above), one obtains

    [ B1  B2 ]   [ B1  0       ] [ I  B1^{−1}B2 ]
    [ B3  B4 ] = [ B3  B1^{−1} ] [ 0  I         ].

Therefore a strict butterfly matrix can also be represented by 4n − 1 parameters. Unfortunately, strict butterfly matrices do not have all the desirable properties of unreduced butterfly matrices. In particular, Lemma 6 does not hold for strict butterfly matrices, as can be seen from the next example.

Example 8 Let

    B = [ 1  0  0  1 ]
        [ 0  1  1  0 ]
        [ 0  0  1  0 ]
        [ 0  0  0  1 ].

Then B is a strict symplectic butterfly matrix that is not unreduced. It is easy to see that the spectrum of B consists of the eigenvalue 1, with geometric multiplicity two.
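Example 8 can be checked in a few lines (our own verification sketch): the matrix is symplectic and strict, and the geometric multiplicity of its eigenvalue 1 is 4 − rank(B − I) = 2, so the conclusion of Lemma 6 indeed fails for strict butterfly matrices.

```python
import numpy as np

# The strict (B1 = I nonsingular) but not unreduced butterfly matrix
# of Example 8
B = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])

J2 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [-np.eye(2), np.zeros((2, 2))]])

print(np.allclose(B.T @ J2 @ B, J2))                 # symplectic: True
# geometric multiplicity of the eigenvalue 1
print(4 - np.linalg.matrix_rank(B - np.eye(4)))      # 2
```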

From Remark 4 and (10) it also follows that any unreduced symplectic butterfly matrix is similar to a strict butterfly matrix. But Example 8 also shows that the converse does not hold.

Finally, consider a symplectic matrix pencil L − λN, that is, LJL^T = NJN^T, where L, N ∈ R^{2n×2n}. If N is nonsingular, then M = N^{−1}L is a symplectic matrix, and the results of this section can be applied to M. Assume that S transforms M to unreduced butterfly form: S^{−1}MS = B = B^(1)B^(2). Then the symplectic matrix pencil L − λN is equivalent to the matrix pencil

    Q(L − λN)S = B^(2) − λ(B^(1))^{−1},

where Q = (B^(1))^{−1}S^{−1}N^{−1}. Such a pencil is called a symplectic butterfly pencil. If N is singular, then the pencil L − λN has at least one eigenvalue 0 and one eigenvalue ∞. Assume that there are k eigenvalues 0 and k eigenvalues ∞. In a preprocessing step these eigenvalues can be deflated out using Algorithm 1 in [1]. In the resulting symplectic pencil L′ − λN′ of dimension 2(n−k) × 2(n−k), N′ is nonsingular. Hence we can build M′ = (N′)^{−1}L′ and transform it to butterfly form B′ = B′^(1)B′^(2). Thus L′ − λN′ is similar to the symplectic butterfly pencil B′^(2) − λ(B′^(1))^{−1}. Adding k rows and columns of zeros to each block of B′^(1) and B′^(2), and appropriate entries on the diagonals, we can expand the symplectic butterfly pencil B′^(2) − λ(B′^(1))^{−1} to a symplectic butterfly pencil B̂2 − λB̂1 of dimension 2n × 2n that is equivalent to L − λN.

3 The SR Algorithm for Symplectic Butterfly Matrices

Based on the SR decomposition introduced in Theorem 1, a symplectic QR-like method for solving eigenvalue problems of arbitrary real matrices is developed in [10]. The QR decomposition and the orthogonal similarity transformation to upper Hessenberg form in the QR process are replaced by the SR decomposition and the symplectic similarity reduction to J-Hessenberg form. Unfortunately, a symplectic matrix in butterfly form is not a J-Hessenberg matrix, so we cannot simply use the results of [10] for computing the eigenvalues of a symplectic butterfly matrix. But, as we will see in this section, an SR step preserves the butterfly form: if B is an unreduced symplectic butterfly matrix, p(B) a polynomial such that p(B) ∈ R^{2n×2n}, p(B) = SR, and if R is invertible, then S^{−1}BS is again a symplectic butterfly matrix. This was already noted and proved in [], but no results for singular p(B) are given there. The next theorem shows that singular p(B) are desirable (that is, at least one shift is an eigenvalue of B), as they allow the problem to be deflated after one step.

First, we need to introduce some notation. Let p(B) be a polynomial such that p(B) ∈ R^{2n×2n}. Write p(B) in factored form

    p(B) := (B − μ1 I_{2n})(B − μ2 I_{2n}) ⋯ (B − μk I_{2n}).    (11)

From p(B) ∈ R^{2n×2n} it follows that if μ ∈ C and μ ∈ {μ1, …, μk}, then μ̄ ∈ {μ1, …, μk}. The matrix p(B) is singular if and only if at least one of the shifts μi is an eigenvalue of B. Let ν denote the

number of shifts that are equal to eigenvalues of B. Here we count a repeated shift according to its multiplicity as a zero of p, except that the number of times we count it must not exceed its algebraic multiplicity (as an eigenvalue of B).

Lemma 9 Let B ∈ R^{2n×2n} be an unreduced symplectic butterfly matrix. The rank of p(B) in (11) is 2n − ν, with ν as defined above.

Proof: Since B is an unreduced butterfly matrix, its eigenspaces are one-dimensional by Lemma 6. Hence we can use the same arguments as in the proof of the corresponding lemma in [9] in order to prove the statement of this lemma. □

In the following we will consider only the case that rank(p(B)) is even. In a real implementation, one would choose a polynomial p such that each perfect shift is accompanied by its reciprocal, since the eigenvalues of a symplectic matrix always appear in reciprocal pairs. As noted before, if μ ∈ C is a perfect shift, then we will choose μ̄ as a shift as well; that is, in that case we will choose μ, μ̄, μ^{−1}, and μ̄^{−1} as shifts. Further, if μ ∈ R is a perfect shift, then we choose μ^{−1} as a shift as well. Because of this, rank(p(B)) will always be even.

Theorem 10 Let B ∈ R^{2n×2n} be an unreduced symplectic butterfly matrix. Let p(B) be a polynomial with p(B) ∈ R^{2n×2n} and rank(p(B)) = 2n − ν =: 2k. If p(B) = SR exists, then B̃ = S^{−1}BS is a symplectic matrix of the form

    B̃ = [ B̃11  0    B̃12  0  ]  } k
         [ 0    B̃1   0    B̃2 ]  } n − k
         [ B̃21  0    B̃22  0  ]  } k
         [ 0    B̃3   0    B̃4 ]  } n − k
           k   n−k   k   n−k

where

    [ B̃11  B̃12 ]  ∈ R^{2k×2k}
    [ B̃21  B̃22 ]

is a symplectic butterfly matrix and the eigenvalues of

    [ B̃1  B̃2 ]  ∈ R^{2(n−k)×2(n−k)}
    [ B̃3  B̃4 ]

are just the shifts that are eigenvalues of B.

In order to simplify the notation for the proof of this theorem and the subsequent derivations, we use in the following permuted versions of B, R, and S. Let

    B_P = P B P^T,   R_P = P R P^T,   S_P = P S P^T,   J_P = P J P^T,

with P as in (5). From S^T J S = J we obtain S_P^T J_P S_P = J_P, where J_P is the block diagonal matrix

    J_P = diag( [ 0  1 ],  …,  [ 0  1 ] ),
                [ −1 0 ]       [ −1 0 ]

while the permuted butterfly matrix B_P is of the form

    B_P = [ b1   b1c1−a1^{−1}   0    b1d2                                     ]    (12)
          [ a1   a1c1           0    a1d2                                     ]
          [ 0    b2d2           b2   b2c2−a2^{−1}   0    b2d3                 ]
          [ 0    a2d2           a2   a2c2           0    a2d3                 ]
          [                               ⋱                                   ]
          [                     0    bn dn          bn   bncn−an^{−1}         ]
          [                     0    an dn          an   ancn                 ].

Proof of Theorem 10: B_P is an upper triangular matrix with two additional subdiagonals, where the second additional subdiagonal has a nonzero entry only in every other position (see (12)). Since R is a J-triangular matrix, R_P is an upper triangular matrix. In the following we denote by Z_c the first 2k columns of a 2n × 2n matrix Z, while Z_rest denotes its last 2n − 2k columns; Z_{2k,2k} denotes the leading 2k × 2k principal submatrix of Z. Now partition the permuted matrices B_P, S_P, J_P, and R_P as

    B_P = [B_P,c | B_P,rest],   S_P = [S_P,c | S_P,rest],   J_P = [J_P,c | J_P,rest],   R_P = [ R_{2k,2k}  X ]
                                                                                              [ 0          Y ],

where the matrix blocks are defined as before; X ∈ R^{2k×(2n−2k)}, Y ∈ R^{(2n−2k)×(2n−2k)}. First we will show that the first 2k columns and rows of B̃_P are in the desired form. We will need the following observations.

The first 2k columns of p(B_P) are linearly independent, since B is unreduced. To see this, consider the following identity:

    p(B)K(B, e1, n) = [p(B)e1, p(B)B^{−1}e1, …, p(B)B^{−(n−1)}e1, p(B)Be1, …, p(B)B^n e1]
                    = [p(B)e1, B^{−1}p(B)e1, …, B^{−(n−1)}p(B)e1, Bp(B)e1, …, B^n p(B)e1]
                    = K(B, p(B)e1, n),

where we have used p(B)B^r = B^r p(B). From Theorem 1 d) we know that, since B is unreduced, K(B, e1, n) is a nonsingular upper J-triangular matrix. As rank(p(B)) = 2k, K(B, p(B)e1, n) has rank 2k. If a matrix of the form K(X, v, n) = [v, X^{−1}v, …, X^{−(n−1)}v, Xv, …, X^n v] has rank 2k, then the columns v, X^{−1}v, …, X^{−(k−1)}v, Xv, …, X^k v are linearly independent. Further we obtain

    p(B) = K(B, p(B)e1, n)(K(B, e1, n))^{−1} =: [p1, p2, …, p_{2n}].

Due to the special form of K(B; e 1 ; n) (J-triangular!) and the fact that the columns 1 to k and n + 1 to n + k of K(B; p(b)e 1 ; n) are linear independent, the columns p 1 ; p ; : : : ; p k ; p n+1 ; p n+ ; : : : ; p n+k of p(b) are linear independent Hence the rst k columns of p(b ) p(b) T are linear independent The columns of S k are linear independent, since S is nonsingular Hence the matrix R k;k is nonsingular, since It follows that S k p(b )I k S k Rk;k : p(b )I k (R k;k )?1 : (1) Moreover, since rank(p(b)) k, we have that rank(r ) k Since rank(r k;k ) k, we obtain rank(y ) 0 and therefore Y 0 From this we see R Further we need the following identities " X 0 0 R k;k # : (1) B p(b ) p(b )B ; (1) B?1 p(b ) p(b )B?1 ; (1) B T J?1 J?1 B?1 ; (1) B?1 J?1 BT J ; (18) S T J?1 J?1 S?1 ; (19) S?1 J?1 ST J ; (0) S J k S k J k;k : (1) Equations (1) { (1) follow from the fact that B and S are symplectic while (1) { (1) result from the fact that Z and p(z) commute for any matrix Z and any polynomial p The rst k columns of e B are given by the expression eb k e B I k S?1 B S I k S?1 B S k S?1 B p(b )I k (R k;k )?1 by (1) S?1 p(b )B k (R k;k )?1 by (1) R B k (R k;k )?1 1

x x x x : : : : : : x x x x x x : : : : : : x x 0 x x x 0 x x x x x x x 0 x x x 0 x x x 0 0 0 0 0 : : : 0 0 : : : 0 : For the last equation we used (1), that (R k;k )?1 is a k k upper triangular matrix, and that B is of the form given in (1) Hence eb x x x x : : : : : : x x x x : : : : : : x x x x x x : : : : : : x x x x : : : : : : x x 0 x x x 0 x x x x x : : : : : : x x x x : : : : : : x x x x x x : : : : : : x x x x x x : : : : : : x x 0 x x x x x : : : : : : x x 0 x x x x x : : : : : : x x 0 0 x x : : : : : : x x 0 0 x x : : : : : : x x x x : : : : : : x x x x : : : : : : x x and thus, eb @ @ @ 0 0 @ 0 0 {z} {z} {z} {z} k n? k k n? k g k g n? k g k g n? k : () The rst k columns of ( e B ) T are given by the expression ( B e ) T I k S T B T S?T I k S T B T J?1 S J I k by (0) S T BT J?1 S J k S T BT J?1 Sk J k;k S T BT J?1 p(b ) by (1) " # (R k;k )?1 0 1 J k;k by (1)

S T J?1 B?1 p(b ) S T J?1 p(b )B?1 J?1 S?1 p(b )B?1 J?1 R B?1 " (R k;k )?1 0 " (R k;k )?1 0 " (R k;k )?1 0 " (R k;k )?1 0 # # # J k;k " (R k;k (J?1 R J?1 )BT (J )?1 0 x x x x : : : : : : x x x x x x : : : : : : x x 0 0 x x x x x x x x x x 0 0 x x x x x x 0 0 0 0 0 : : : 0 0 : : : 0 : # J k;k by (1) J k;k by (1) J k;k by (19) # J k;k ) by (18) For the last equation we used (1), that (R k;k )?1 a k k is an upper triangular matrix, and that B is of the form (1) Hence, we can conclude that eb T @ @ @ 0 0 @ 0 0 ; () where the blocks have the same size as before Comparing () and () we obtain eb @ 0 @ @ 0 0 0 @ 0 @ @ 0 0 0 This proves the rst part of the theorem The result about the eigenvalues now follows with the arguments as in the proof of Theorem in [9] There, a similar p statement for a generic chasing algorithm is proved 1 :

Algorithm: SR algorithm for symplectic butterfly matrices

    B_1 := B  (symplectic butterfly matrix)
    for j = 1, 2, ... until satisfied
        Choose a polynomial p_j such that p_j(B_j) ∈ R^{2n x 2n}.
        Compute p_j(B_j) = S_j R_j  (SR decomposition).
        Set B_{j+1} := S_j^{-1} B_j S_j.
    end

Table 1: SR algorithm for symplectic butterfly matrices

Hence, assuming its existence, the SR decomposition and the SR step (that is, B := S^{-1} B S) possess many of the desirable properties of the QR step. An SR algorithm can thus be formulated similarly to the QR algorithm [8, 10]. In Table 1 we present a general SR algorithm for symplectic butterfly matrices. There are different possibilities to choose the polynomial p_j in the algorithm given in Table 1, e.g.:

    single shift: p(B) = B - μI for μ ∈ R;
    double shift: p(B) = (B - μI)(B - μ̄I) for μ ∈ C, or p(B) = (B - μI)(B - μ^{-1}I) for μ ∈ R;
    quadruple shift: p(B) = (B - μI)(B - μ̄I)(B - μ^{-1}I)(B - μ̄^{-1}I) for μ ∈ C.

In particular, the double shift for μ ∈ R and the quadruple shift for μ ∈ C make use of the symmetries of the spectrum of symplectic matrices. An algorithm for explicitly computing an SR decomposition for general matrices is presented in [10]. As with explicit QR steps, the expense of explicit SR steps comes from the fact that p(B) has to be computed explicitly. A preferred alternative is the implicit SR step, an analogue of the Francis QR step [17, 20, 38]. The first implicit transformation S_1 is selected so that the first columns of the implicit and the explicit S are equivalent. That is, a symplectic matrix S_1 is determined such that

    S_1^{-1} p(B) e_1 = α_1 e_1,  α_1 ∈ R.

Applying this first transformation to the butterfly matrix yields a symplectic matrix S_1^{-1} B S_1 with almost butterfly form having a small bulge. The remaining implicit transformations perform a bulge-chasing sweep down the subdiagonal to restore the butterfly form. That is, a symplectic matrix S is determined such that S^{-1} S_1^{-1} B S_1 S is of butterfly form again. Banse presents in [2] an algorithm to reduce an arbitrary symplectic matrix to butterfly form. Depending on the size of the bulge in S_1^{-1} B S_1,
the algorithm can be greatly simplified to reduce S_1^{-1} B S_1 to butterfly form. The algorithm uses elementary symplectic Givens matrices [20]

    G_k = [  C_k  -S_k
             S_k   C_k ],

where C_k = I + (c_k - 1) e_k e_k^T, S_k = s_k e_k e_k^T, c_k^2 + s_k^2 = 1; elementary symplectic Householder matrices [20]

    H_k = diag(I_{k-1}, P, I_{k-1}, P),  where  P = I_{n-k+1} - (2 / (v^T v)) v v^T;

and elementary symplectic Gaussian elimination matrices [10]

    L_k = [ W_k     V_k
            0    W_k^{-1} ],

where

    W_k = I + (w_k - 1)(e_{k-1} e_{k-1}^T + e_k e_k^T),  V_k = v_k (e_{k-1} e_k^T + e_k e_{k-1}^T).

As L_k is nonorthogonal, it might be ill-conditioned or might not even exist at all. This means that the SR decomposition of p(B) does not exist or is close to the set of matrices for which an SR decomposition does not exist. As the set of these matrices is of measure zero [8], the polynomial p is discarded and an implicit SR step with a random shift is performed, as proposed in [10] in the context of the Hamiltonian SR algorithm. For an actual implementation this might be realized by checking the condition number of L_k and performing an exceptional step if it exceeds a given tolerance. The algorithm for reducing an arbitrary symplectic matrix to butterfly form as given in [2] can be summarized as in Table 2 (in MATLAB-like notation). Note that pivoting is incorporated in order to increase numerical stability.
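A minimal NumPy sketch of the three elementary transformation types, assuming the block forms quoted above (0-based indices; all helper names are ours, not from the paper). Each satisfies the symplecticity condition T^T J T = J; only L_k is nonorthogonal:

```python
import numpy as np

def J_mat(n):
    # J = [[0, I_n], [-I_n, 0]]
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

def sympl_givens(n, k, theta):
    # G_k = [[C_k, -S_k], [S_k, C_k]]: a rotation in the (k, n+k) plane,
    # C_k = I + (c_k - 1) e_k e_k^T, S_k = s_k e_k e_k^T, c_k^2 + s_k^2 = 1.
    c, s = np.cos(theta), np.sin(theta)
    C, S = np.eye(n), np.zeros((n, n))
    C[k, k], S[k, k] = c, s
    return np.block([[C, -S], [S, C]])

def sympl_householder(n, k, v):
    # H_k = diag(I_k, P, I_k, P) with P = I - 2 v v^T / (v^T v)
    # an ordinary Householder reflector on the trailing n-k coordinates.
    P = np.eye(n - k) - 2.0 * np.outer(v, v) / (v @ v)
    H1 = np.eye(n); H1[k:, k:] = P
    Z = np.zeros((n, n))
    return np.block([[H1, Z], [Z, H1]])

def sympl_gauss(n, k, w, v):
    # L_k = [[W_k, V_k], [0, W_k^{-1}]] with W_k = I + (w-1)(e_{k-1}e_{k-1}^T
    # + e_k e_k^T) and V_k = v (e_{k-1} e_k^T + e_k e_{k-1}^T);
    # nonorthogonal, hence possibly ill-conditioned (w near 0).
    W, V = np.eye(n), np.zeros((n, n))
    W[k - 1, k - 1] = W[k, k] = w
    V[k - 1, k] = V[k, k - 1] = v
    Z = np.zeros((n, n))
    return np.block([[W, V], [Z, np.linalg.inv(W)]])

n = 4
J = J_mat(n)
G = sympl_givens(n, 2, 0.7)
H = sympl_householder(n, 1, np.array([1.0, -2.0, 0.5]))
L = sympl_gauss(n, 2, 1.5, -0.3)
for T in (G, H, L):
    assert np.allclose(T.T @ J @ T, J)   # each factor is symplectic
```

Checking cond(L_k) before accepting a Gaussian elimination step, as suggested above, is a one-line addition with `np.linalg.cond`.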

Algorithm: Reduction to butterfly form

input: 2n x 2n symplectic matrix M
output: 2n x 2n symplectic butterfly matrix M

    for j = 1 : n-1
        for k = n : -1 : j+1
            compute G_k such that (G_k M)_{k+n,j} = 0
            M = G_k M G_k^{-1}
        end
        if j < n then
            compute H_k such that (H_k M)_{j+2:n,j} = 0
            M = H_k M H_k^{-1}
        end
        compute L_{j+1} such that (L_{j+1} M)_{j+1,j} = 0
        M = L_{j+1} M L_{j+1}^{-1}
        if |M(j,j)| > |M(j+n,j)| then p = j+n else p = j end
        for k = n : -1 : j+1
            compute G_k such that (M G_k)_{p,k} = 0
            M = G_k^{-1} M G_k
        end
        if j < n then
            compute H_k such that (M H_k)_{p,j+2+n:2n} = 0
            M = H_k^{-1} M H_k
        end
    end

Table 2: Reduction to butterfly form

Let us assume for the moment that p is chosen to perform a quadruple shift. Then p(B)e_1 has eight nonzero entries,

    p(B)e_1 = [x, x, x, x, 0, ..., 0, x, x, x, x, 0, ..., 0]^T.

In order to compute S_1 such that S_1^{-1} p(B)e_1 = α_1 e_1, we have to eliminate the entries n+1 to n+4 by symplectic Givens transformations and the entries 2 to 4 by a symplectic Householder

transformation. Hence S_1^{-1} B S_1 is of the form

    [butterfly zero-structure schematic with a bulge; an "x" marks a butterfly entry, a "+" marks fill-in]

where a "+" denotes fill-in. Now the algorithm given in Table 2 can be used to reduce S_1^{-1} B S_1 to butterfly form again. Making use of all the zeros in S_1^{-1} B S_1, the given algorithm simplifies greatly. The resulting algorithm requires O(n) floating point operations (+, -, *, /, sqrt) to restore the butterfly form.

Remark. In order to implement such an implicit butterfly SR step, we do not need to form the intermediate symplectic matrices, but can apply the elimination matrices G_k, L_k, and H_k directly to the parameters a_1, ..., a_n, b_1, ..., b_n, c_1, ..., c_n, d_2, ..., d_n. In that case, we could also work directly with the symplectic butterfly pencil B_1 - λB_2 with B_1, B_2 as in (8), (9).

In [3] Banse also presents an algorithm to reduce a symplectic matrix pencil L - λN, where L and N are symplectic matrices, to a symplectic butterfly pencil B̃_1 - λB̃_2. As in [3], strict butterfly matrices are used, so this matrix pencil is of the form (see the Remark above)

    [block schematic: triangular blocks paired with an identity block]

Hence we cannot make direct use of this algorithm, as our symplectic butterfly pencil is of the form

    [block schematic: triangular blocks paired with the blocks I and -I]

but an algorithm based on our form can be derived in a similar way. Working with the parameters is similar to the parameterized SR algorithm given in [16], which is based on the parameterization of a symplectic J-Hessenberg matrix. That parameterization is determined by 4n - 1 parameters. Besides using nonorthogonal elimination matrices,

in order to obtain the parameterized version in [16], the explicit inversion of some of the matrix elements is necessary. Therefore, this parameterized SR algorithm "is highly numerically unstable" [16]. We expect an implicit butterfly SR step to be more robust in the presence of roundoff errors, as such explicit inversions can be avoided.

A Symplectic Lanczos Method for Symplectic Matrices

In this section, we describe a symplectic Lanczos method to compute the unreduced butterfly form (10) for a symplectic matrix M. A symplectic Lanczos method for computing a strict butterfly matrix is given in [2]. The usual nonsymmetric Lanczos algorithm generates two sequences of vectors. Due to the symplectic structure of M it is easily seen that one of the two sequences can be eliminated here, and thus work and storage can essentially be halved. (This property is valid for a broader class of matrices; see [18].) In order to simplify the notation, we use in the following again the permuted versions of M and B, given by

    M_P = P M P^T,  B_P = P B P^T,  S_P = P S P^T,  J_P = P J P^T,

with the permutation matrix P as before. We want to compute a symplectic matrix S such that S transforms the symplectic matrix M to a symplectic butterfly matrix B. In the permuted version, MS = SB yields

    M_P S_P = S_P B_P.    (24)

Equivalently, as B = B_1 B_2^{-1}, we can consider

    M_P S_P (B_2)_P = S_P (B_1)_P,    (25)

where (B_1)_P and (B_2)_P denote the correspondingly permuted versions of B_1 and B_2 from (8) and (9): block tridiagonal matrices built from 2 x 2 blocks in the parameters a_j, b_j, c_j, and d_j.

The structure preserving Lanczos method generates a sequence of permuted symplectic matrices

    S_P^{2k} = [v_1, w_1, v_2, w_2, ..., v_k, w_k] ∈ R^{2n x 2k}

(that is, the columns of S_P^{2k} are J_P-orthogonal) satisfying

    M_P S_P^{2k} = S_P^{2k} B_P^{2k,2k} + d_{k+1} (b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k}^T,    (28)

or equivalently, as B^{2k,2k} = B_1^{2k,2k} (B_2^{2k,2k})^{-1} and e_{2k}^T (B_2^{2k,2k})_P = -e_{2k-1}^T, we have

    M_P S_P^{2k} (B_2^{2k,2k})_P = S_P^{2k} (B_1^{2k,2k})_P - d_{k+1} (b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k-1}^T.    (29)

Here, B_P^{2k,2k} = P_{2k} B^{2k,2k} P_{2k}^T is a permuted 2k x 2k symplectic butterfly matrix, and (B_j^{2k,2k})_P = P_{2k} B_j^{2k,2k} P_{2k}^T, j = 1, 2, is a permuted 2k x 2k symplectic matrix of the form of (B_1)_P, resp. (B_2)_P. The space spanned by the columns of S^{2k} = P_{2n}^T S_P^{2k} P_{2k} is J-orthogonal, since (S_P^{2k})^T J_P^n S_P^{2k} = J_P^k, where J_P^j = P_{2j} J_j P_{2j}^T and J_j is a 2j x 2j matrix of the same form as J. The vector

    r_{k+1} := d_{k+1} (b_{k+1} v_{k+1} + a_{k+1} w_{k+1})

is the residual vector and is J_P-orthogonal to the columns of S_P^{2k}, the Lanczos vectors. The matrix B_P^{2k,2k} = (J_P^k)^{-1} (S_P^{2k})^T J_P^n M_P S_P^{2k} is the J_P-orthogonal projection of M_P onto the range of S_P^{2k}. Equation (28) (resp. (29)) defines a length 2k Lanczos factorization of M_P; if the residual vector r_{k+1} is the zero vector, then equation (28) (resp. (29)) is called a truncated Lanczos factorization if k < n. Note that, theoretically, r_{n+1} must vanish, since (S_P^{2n})^T J_P r_{n+1} = 0 and the columns of S_P^{2n} form a J_P-orthogonal basis for R^{2n}. In this case the symplectic Lanczos method computes a reduction to butterfly form.

Before developing the symplectic Lanczos method itself, we state the following theorem, which shows that the symplectic Lanczos factorization is completely specified by the starting vector v_1.

Theorem 1. Let two length 2k Lanczos factorizations be given by

    M_P S_P^{2k} = S_P^{2k} B_P^{2k,2k} + d_{k+1} (b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) e_{2k}^T,
    M_P Ŝ_P^{2k} = Ŝ_P^{2k} B̂_P^{2k,2k} + d̂_{k+1} (b̂_{k+1} v̂_{k+1} + â_{k+1} ŵ_{k+1}) e_{2k}^T,

where S_P^{2k}, Ŝ_P^{2k} have J_P-orthogonal columns, and B_P^{2k,2k}, B̂_P^{2k,2k} are permuted unreduced symplectic butterfly matrices with

    (B_P^{2k,2k})_{jj} = (B̂_P^{2k,2k})_{jj} = 1,  |(B_P^{2k,2k})_{j+1,j}| = |(B̂_P^{2k,2k})_{j+1,j}| = 1  for j = 1, 2, 3, ..., 2k-1,

and

    (B_P^{2k,2k})_{j+1,j-1} > 0,  (B̂_P^{2k,2k})_{j+1,j-1} > 0  for j = 2, 3, ..., 2k-1,

    J_P^k (S_P^{2k})^T J_P (b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) = J_P^k (Ŝ_P^{2k})^T J_P (b̂_{k+1} v̂_{k+1} + â_{k+1} ŵ_{k+1}) = 0.

If the first columns of S_P^{2k} and Ŝ_P^{2k} are equal, then B_P^{2k,2k} = B̂_P^{2k,2k}, S_P^{2k} = Ŝ_P^{2k}, and

    d_{k+1} (b_{k+1} v_{k+1} + a_{k+1} w_{k+1}) = d̂_{k+1} (b̂_{k+1} v̂_{k+1} + â_{k+1} ŵ_{k+1}).

Proof. This is a direct consequence of Theorem 1 e) and the Remark above. □

Next we will see how the factorization (28) (resp. (29)) may be computed. As this reduction depends strongly on the first column of the transformation matrix that carries out the reduction, we must expect breakdowns or near-breakdowns in the Lanczos process, as they also occur in the reduction process to J-Hessenberg form, e.g., [10]. Assuming that no such breakdowns occur, a symplectic Lanczos method can be derived as follows. Let S_P = [v_1, w_1, v_2, w_2, ..., v_n, w_n]. For a given vector v_1, a Lanczos method constructs the matrix S_P columnwise from the equations

    M_P S_P (B_2)_P e_j = S_P (B_1)_P e_j,  j = 1, 2, ....

That is, for even numbered columns

    M_P v_m = b_m v_m + a_m w_m,
    a_m w_m = M_P v_m - b_m v_m =: w̃_m,    (30)

and for odd numbered columns

    a_m^{-1} v_m = M_P (d_m v_{m-1} + c_m v_m - w_m + d_{m+1} v_{m+1}),
    d_{m+1} v_{m+1} = -d_m v_{m-1} - c_m v_m + w_m + a_m^{-1} M_P^{-1} v_m =: ṽ_{m+1}.    (31)

Note that M_P^{-1} = -J_P M_P^T J_P, since M_P is symplectic. Thus M_P^{-1} v_m is just a matrix-vector product with the transpose of M_P. Now we have to choose the parameters a_m, b_m, c_m, d_{m+1} such that S_P^T J_P S_P = J_P is satisfied; that is, we have to choose the parameters such that v_{m+1}^T J_P w_{m+1} = 1. One possibility is to choose

    d_{m+1} = ||ṽ_{m+1}||_2,  a_{m+1} = v_{m+1}^T J_P M_P v_{m+1}.

Premultiplying ṽ_{m+1} by w_m^T J_P and using S_P^T J_P S_P = J_P yields

    c_m = -a_m^{-1} w_m^T J_P M_P^{-1} v_m = a_m^{-1} v_m^T J_P M_P w_m.

Thus we obtain the algorithm given in Table 3. There is still some freedom in the choice of the parameters that occur in this algorithm. Essentially, the parameters b_m can be chosen freely; here we set b_m = 1. Likewise, a different choice of the parameters a_m, d_m is possible.
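The recurrences just derived can be sketched in code as follows (with b_m = 1 as in the text; `random_symplectic`, the seed, and the tolerances are our illustrative assumptions, not from the paper). The normalization v_m^T J w_m = 1 holds by construction of a_m:

```python
import numpy as np

def J_mat(n):
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

def random_symplectic(n, rng):
    # Product of symplectic factors [[I, G],[0, I]] and [[I, 0],[F, I]],
    # with F and G symmetric.
    F = rng.standard_normal((n, n)); F = (F + F.T) / 2
    G = rng.standard_normal((n, n)); G = (G + G.T) / 2
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[I, G], [Z, I]]) @ np.block([[I, Z], [F, I]])

def symplectic_lanczos(M, v1, k, b=1.0):
    # k steps of the symplectic Lanczos recurrence; returns the Lanczos
    # vectors v_m, w_m and the parameters a_m, c_m, d_m.
    n = M.shape[0] // 2
    J = J_mat(n)
    Minv = lambda x: -J @ (M.T @ (J @ x))   # M^{-1} x = -J M^T J x
    V, W, a, c = [], [], [], []
    d = [np.linalg.norm(v1)]                # d_1
    v_prev, v = np.zeros(2 * n), v1 / d[0]  # v_0 = 0, v_1 normalized
    for m in range(k):
        Mv = M @ v
        a_m = v @ (J @ Mv)                  # a_m = v_m^T J M v_m
        w = (Mv - b * v) / a_m              # a_m w_m = M v_m - b_m v_m
        c_m = (v @ (J @ (M @ w))) / a_m     # c_m = a_m^{-1} v_m^T J M w_m
        v_t = -d[-1] * v_prev - c_m * v + w + Minv(v) / a_m
        V.append(v); W.append(w); a.append(a_m); c.append(c_m)
        d.append(np.linalg.norm(v_t))       # d_{m+1}
        v_prev, v = v, v_t / d[-1]
    return V, W, a, c, d

rng = np.random.default_rng(1)
n = 4
M = random_symplectic(n, rng)
V, W, a, c, d = symplectic_lanczos(M, rng.standard_normal(2 * n), 3)
J = J_mat(n)
for v, w in zip(V, W):
    assert abs(v @ (J @ w) - 1.0) < 1e-8    # v_m^T J w_m = 1
```

In exact arithmetic the full J-orthogonality of the Lanczos vectors follows from the choice of c_m and d_{m+1}; in floating point it is gradually lost, which is why re-J-orthogonalization is discussed below.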

Algorithm: Symplectic Lanczos method

    Choose an initial vector ṽ_1 ∈ R^{2n}, ṽ_1 ≠ 0.
    Set v_0 = 0 ∈ R^{2n}.
    Set d_1 = ||ṽ_1||_2 and v_1 = (1/d_1) ṽ_1.
    for m = 1, 2, ... do
        (update of w_m)
        w̃_m = M_P v_m - b_m v_m
        a_m = v_m^T J_P M_P v_m
        w_m = (1/a_m) w̃_m
        (computation of c_m)
        c_m = a_m^{-1} v_m^T J_P M_P w_m
        (update of v_{m+1})
        ṽ_{m+1} = -d_m v_{m-1} - c_m v_m + w_m + a_m^{-1} M_P^{-1} v_m
        d_{m+1} = ||ṽ_{m+1}||_2
        v_{m+1} = (1/d_{m+1}) ṽ_{m+1}
    end

Table 3: Symplectic Lanczos method

Choosing b_m = 0, a different interpretation of the algorithm in Table 3 can be given. The resulting butterfly matrix B = S^{-1} M S is of the form

    B = [ 0  -A
          A   T ],

where A is a diagonal matrix and T is an unsymmetric tridiagonal matrix. As S^{-1} M S = B, we have S^{-1} M^{-1} S = B^{-1} and

    S^{-1} (M + M^{-1}) S = B + B^{-1} = [ -T^T  0
                                            0    T ].

Obviously there is no need to compute both T and -T^T; it is sufficient to compute the first n columns of S. This corresponds to computing the v_m in our algorithm. This case is not considered here any further; see also [26].

Note that only one matrix-vector product is required for each computed Lanczos vector w_m or v_m. Thus an efficient implementation of this algorithm requires n + (nz + n)k flops (following [20], we define each floating point arithmetic operation together with the associated integer indexing as a flop), where nz is the number of nonzero elements in M and k is the number of Lanczos vectors

computed (that is, the loop is executed k times). The algorithm as given in Table 3 computes an odd number of Lanczos vectors; for a practical implementation one has to omit the computation of the last vector v_{k+1} (or one has to compute an additional vector w_{k+1}).

In the symplectic Lanczos method as given above, we have to divide by parameters that may be zero or close to zero. If such a case occurs for the normalization parameter d_{m+1}, the corresponding vector ṽ_{m+1} is zero or close to the zero vector. In this case, a symplectic invariant subspace of M (or a good approximation to such a subspace) is detected. By redefining ṽ_{m+1} to be any vector satisfying

    v_j^T J_P ṽ_{m+1} = 0,  w_j^T J_P ṽ_{m+1} = 0,

for j = 1, ..., m, the algorithm can be continued. The resulting butterfly matrix is no longer unreduced; the eigenproblem decouples into two smaller subproblems. In case w̃_m is zero (or close to zero), an invariant subspace of M with dimension m - 1 is found (or a good approximation to such a subspace). From (30) it is easy to see that in this case the parameter a_m will be zero (or close to zero). Thus, if either ṽ_{m+1} or w̃_{m+1} vanishes, the breakdown is benign. If ṽ_{m+1} ≠ 0 and w̃_{m+1} ≠ 0 but a_{m+1} = 0, then the breakdown is serious: no reduction of the symplectic matrix to a symplectic butterfly matrix with v_1 as first column of the transformation matrix exists. On the other hand, an initial vector v_1 exists so that the symplectic Lanczos process does not encounter serious breakdown. However, determining this vector requires knowledge of the minimal polynomial of M. Thus, no algorithm for successfully choosing v_1 at the start of the computation yet exists. Furthermore, in theory, the above recurrences for v_m and w_m are sufficient to guarantee the J-orthogonality of these vectors. Yet, in practice, the J-orthogonality will be lost; re-J-orthogonalization is necessary, increasing the computational cost significantly. The numerical difficulties of the symplectic Lanczos method described above are inherent
to all Lanczos-like methods for nonsymmetric matrices. Different approaches to overcome these difficulties have been proposed. Taylor [36] and Parlett, Taylor, and Liu [32] were the first to propose a look-ahead Lanczos algorithm that skips over breakdowns and near-breakdowns. Freund, Gutknecht, and Nachtigal present in [19] a look-ahead Lanczos code that can handle look-ahead steps of any length. Banse adapted this method to the symplectic Lanczos method given in [2]. The price paid is that the resulting matrix is no longer of butterfly form, but has a small bulge in the butterfly form to mark each occurrence of a (near) breakdown. Unfortunately, so far there exists no eigenvalue method that can make use of that special reduced form. A different approach to deal with the numerical difficulties of the Lanczos process is to modify the starting vectors by an implicitly restarted Lanczos process (see the fundamental work in [11, 35]). The problems are addressed by fixing the number of steps in the Lanczos process at a prescribed value k, which depends on the required number of approximate eigenvalues. J-orthogonality of the k Lanczos vectors is secured by re-J-orthogonalizing these vectors when necessary. The purpose of the implicit restart is to determine initial vectors such that the associated residual vectors are tiny. Given that a 2n x 2k matrix S_P^{2k} is known such

that M S k S k B k;k + d k+1 (b k+1 v k+1 + a k+1 w k+1 )e T k () as in (8), an implicit Lanczos restart computes the Lanczos factorization M S k S k which corresponds to the starting vector B k;k + d k+1 ( b k+1 v k+1 + a k+1 w k+1 )e T k () v 1 p(m )v 1 (where p(m ) IR nn is a polynomial) without having to explicitly restart the Lanczos process with the vector v 1 Such an implicit restarting mechanism is derived in [] analogous to the technique introduced in [, 1, ] Concluding Remarks Several aspects of the recently proposed, new condensed form for symplectic matrices, called the symplectic buttery form, [,, ], are considered in detail The n n symplectic buttery form contains 8n? nonzero entries and is determined by n? 1 parameters The reduction to buttery form can serve as a preparatory step for the SR algorithm, as the SR algorithm preserves the symplectic buttery form in its iterations Hence, its role is similar to that of the reduction of an arbitrary nonsymmetric matrix to upper Hessenberg form as a preparatory step for the QR algorithm We have shown that an unreduced symplectic buttery matrix in the context of the SR algorithm has properties similar to those of an unreduced upper Hessenberg matrix in the context of the QR algorithm The SR algorithm not only preserves the symplectic buttery form, but can be rewritten in terms of the n? 
1 parameters that determine the symplectic buttery form Therefore, the symplectic structure, which will be destroyed in the numerical computation due to roundo errors, can be restored in each iteration step We have also briey described an implicitly restarted symplectic Lanczos method which can be used to compute a few eigenvalues and eigenvectors of a symplectic matrix The symplectic matrix is reduced to a symplectic buttery matrix of lower dimension, whose eigenvalues can be used as approximations to the eigenvalues of the original matrix Acknowledgment art of the work on this paper was carried out while the second author was visiting the University of California at Santa Barbara despite the fact that her oce was located only 0 yards from the beach She would like to thank Alan Laub for making this visit possible Both authors would like to thank David Watkins for many helpful suggestions which improved the paper signicantly References [1] G S Ammar and V Mehrmann On Hamiltonian and symplectic Hessenberg forms Linear Algebra Appl, 19:{, 1991

[2] G. Banse. Symplektische Eigenwertverfahren zur Lösung zeitdiskreter optimaler Steuerungsprobleme. Dissertation, Universität Bremen, Fachbereich 3, Mathematik und Informatik, Bremen, Germany, 1995.
[3] G. Banse. Condensed forms for symplectic matrices and symplectic pencils in optimal control. ZAMM, Suppl., 1995.
[4] G. Banse and A. Bunse-Gerstner. A condensed form for the solution of the symplectic eigenvalue problem. In U. Helmke, R. Mennicken, and J. Saurer, editors, Systems and Networks: Mathematical Theory and Applications. Akademie Verlag, 1994.
[5] P. Benner and H. Faßbender. An implicitly restarted symplectic Lanczos method for the symplectic eigenvalue problem. In preparation.
[6] P. Benner and H. Faßbender. An implicitly restarted symplectic Lanczos method for the Hamiltonian eigenvalue problem. Linear Algebra Appl., to appear. See also: Tech. Report SPC 95_28, Fak. f. Mathematik, TU Chemnitz-Zwickau, 09107 Chemnitz, FRG, 1995.
[7] J. R. Bunch. The weak and strong stability of algorithms in numerical algebra. Linear Algebra Appl., 88/89:49-66, 1987.
[8] A. Bunse-Gerstner. Matrix factorizations for symplectic QR-like methods. Linear Algebra Appl., 83:49-77, 1986.
[9] A. Bunse-Gerstner and L. Elsner. Schur parameter pencils for the solution of the unitary eigenproblem. Linear Algebra Appl., 154-156:741-778, 1991.
[10] A. Bunse-Gerstner and V. Mehrmann. A symplectic QR-like algorithm for the solution of the real algebraic Riccati equation. IEEE Trans. Automat. Control, AC-31:1104-1113, 1986.
[11] D. Calvetti, L. Reichel, and D. C. Sorensen. An implicitly restarted Lanczos method for large symmetric eigenvalue problems. Electr. Trans. Num. Anal., 2:1-21, March 1994.
[12] J. Della-Dora. Numerical linear algorithms and group theory. Linear Algebra Appl., 10, 1975.
[13] L. Elsner. On some algebraic problems in connection with general eigenvalue algorithms. Linear Algebra Appl., 26:123-138, 1979.
[14] L. Elsner and K. Ikramov. On a normal form for normal matrices under finite sequences of unitary similarities. Submitted for publication.
[15] H. Faßbender. On
numerical methods for discrete least-squares approximation by trigonometric polynomials. Math. Comp., 66:719-741, 1997.
[16] U. Flaschka, V. Mehrmann, and D. Zywietz. An analysis of structure preserving methods for symplectic eigenvalue problems. RAIRO Automatique Productique Informatique Industrielle, 25:165-190, 1991.

[17] J. G. F. Francis. The QR transformation, Part I and Part II. Comput. J., 4:265-271 and 332-345, 1961 and 1962.
[18] R. Freund. Transpose-free quasi-minimal residual methods for non-Hermitian linear systems. In G. Golub et al., editors, Recent Advances in Iterative Methods. Papers from the IMA workshop on iterative methods for sparse and structured problems, held in Minneapolis, MN, February-March 1992, volume 60 of IMA Vol. Math. Appl. Springer-Verlag, New York, NY, 1994.
[19] R. W. Freund, M. H. Gutknecht, and N. M. Nachtigal. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices. SIAM J. Sci. Comput., 14(1):137-158, January 1993.
[20] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 2nd edition, 1989.
[21] E. J. Grimme, D. C. Sorensen, and P. Van Dooren. Model reduction of state space systems via an implicitly restarted Lanczos method. Numer. Algorithms, 12:1-31, 1996.
[22] D. Hinrichsen and N. K. Son. Stability radii of linear discrete-time systems and symplectic pencils. Int. J. Robust Nonlinear Control, 1:79-97, 1991.
[23] V. N. Kublanovskaya. On some algorithms for the solution of the complete eigenvalue problem. USSR Comput. Math. and Math. Phys., 1:637-657, 1961.
[24] P. Lancaster and L. Rodman. The Algebraic Riccati Equation. Oxford University Press, Oxford, 1995.
[25] A. J. Laub. Invariant subspace methods for the numerical solution of Riccati equations. In S. Bittanti, A. J. Laub, and J. C. Willems, editors, The Riccati Equation, pages 163-196. Springer-Verlag, Berlin, 1991.
[26] W.-W. Lin. A new method for computing the closed-loop eigenvalues of a discrete-time algebraic Riccati equation. Linear Algebra Appl., 96:157-180, 1987.
[27] V. Mehrmann. Der SR-Algorithmus zur Berechnung der Eigenwerte einer Matrix. Diplomarbeit, Universität Bielefeld, Bielefeld, FRG, 1979.
[28] V. Mehrmann. A symplectic orthogonal method for single input or single output discrete time optimal linear quadratic control problems. SIAM J. Matrix Anal. Appl., 9, 1988.
[29] V. Mehrmann. The Autonomous Linear Quadratic Control Problem, Theory and Numerical Solution. Number 163 in
Lecture Notes in Control and Information Sciences. Springer-Verlag, Heidelberg, July 1991.
[30] C. C. Paige and C. F. Van Loan. A Schur decomposition for Hamiltonian matrices. Linear Algebra Appl., 41:11-32, 1981.

[31] T. Pappas, A. J. Laub, and N. R. Sandell. On the numerical solution of the discrete-time algebraic Riccati equation. IEEE Trans. Automat. Control, AC-25:631-641, 1980.
[32] B. N. Parlett, D. R. Taylor, and Z. A. Liu. A look-ahead Lanczos algorithm for unsymmetric matrices. Math. Comp., 44(169):105-124, January 1985.
[33] R. V. Patel. Computation of the stable deflating subspace of a symplectic pencil using structure preserving orthogonal transformations. In Proceedings of the 31st Annual Allerton Conference on Communication, Control and Computing, University of Illinois, 1993.
[34] R. V. Patel. On computing the eigenvalues of a symplectic pencil. Linear Algebra Appl., 188/189:591-611, 1993. See also: Proc. CDC-31, Tucson, AZ, 1992.
[35] D. C. Sorensen. Implicit application of polynomial filters in a k-step Arnoldi method. SIAM J. Matrix Anal. Appl., 13(1):357-385, January 1992.
[36] D. R. Taylor. Analysis of the look ahead Lanczos algorithm. Ph.D. thesis, Center for Pure and Applied Mathematics, University of California, Berkeley, CA, 1982.
[37] C. F. Van Loan. A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix. Linear Algebra Appl., 61:233-251, 1984.
[38] D. S. Watkins. Some perspectives on the eigenvalue problem. SIAM Rev., 35:430-471, 1993.
[39] D. S. Watkins and L. Elsner. Chasing algorithms for the eigenvalue problem. SIAM J. Matrix Anal. Appl., 12:374-384, 1991.
[40] D. S. Watkins and L. Elsner. Convergence of algorithms of decomposition type for the eigenvalue problem. Linear Algebra Appl., 143:19-47, 1991.