Key words. Companion matrix, quasiseparable structure, QR iteration, eigenvalue computation,

Size: px
Start display at page:

Download "Key words. Companion matrix, quasiseparable structure, QR iteration, eigenvalue computation,"

Transcription

1 A FAST IMPLICIT QR EIGENVALUE ALGORITHM FOR COMPANION MATRICES D. A. BINI, P. BOITO, Y. EIDELMAN, L. GEMIGNANI, AND I. GOHBERG Abstract. An implicit version of the QR eigenvalue algorithm given in [D. A. Bini, Y. Eidelman, I. Gohberg, L. Gemignani, SIAM J. Matrix Anal. Appl , no. 2, ] is presented for computing the eigenvalues of an n n companion matrix using On 2 flops and On memory storage. Numerical experiments and comparisons confirm the effectiveness and the stability of the proposed method. Key words. Companion matrix, quasiseparable structure, QR iteration, eigenvalue computation, complexity AMS subject classifications. 65F15, 65H17 1. Introduction. The paper concerns the efficient computation of the eigenvalues of companion matrices. A fast and numerically robust algorithm for this tas, devised in [4], is based on the use of the explicit QR iteration applied to an input companion matrix A C n n represented as a ran-one perturbation of a unitary matrix, namely, A U zw T, where U C n n is unitary and z, w C n. The computational improvement with respect to the customary methods is achieved by exploiting the quasiseparable Hessenberg structure in the QR iterates inherited from the ran properties of the input companion matrix A. More specifically, in [4] it is shown that each iterate A is a Hessenberg matrix expressed as the sum of a unitary matrix U plus a ran-one correction. The fast implementation of the explicit QR iteration taes in input a complete set of generators for the quasiseparable structure of A and returns as output a complete set of generators for the quasiseparable structure of A +1. The reduction of matrix operations to manipulating a set of On parameters enables one to perform each QR step in On floating point operations flops with On memory storage. In [4] it is also pointed out that in practice, due to rounding errors, some additional computations must be carried out in order to maintain both the quasiseparable structure of A and the unitary property of U. In the classical numerical linear algebra literature [12, 16, 2] it is generally claimed that the implicit QR method should be preferred to its explicit counterpart since it can be faster in the case of multiple shifts, more stable and, moreover, admits suitable variants for the case of real inputs. The multishift QR iteration proceeds as follows: A A q A Q R, QR factorization A +1 : Q H A Q, 1.1 where q z is a monic polynomial of degree one single-shift step or two double-shift step suitably chosen to accelerate the convergence. The bulge-chasing implicit QR techniques manage to perform the transformation A A +1 without explicitly forming the matrix q A. The implicit determi- Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, Pisa, Italy, {bini,gemignan}@dm.unipi.it, boito@mail.dm.unipi.it School of Mathematical Sciences, Raymond and Beverly Sacler Faculty of Exact Sciences, Tel-Aviv University, Ramat-Aviv, 69978, Israel, {eideyu, gohberg}@post.tau.ac.il This wor was partially supported by MIUR, grant number , and by GNCS-INDAM. 1

2 2 nation of A +1 from A was first described by Francis [11, 1] see also [12] and [16] and the references given therein. Let Q 1 be a Householder matrix chosen to annihilate the subdiagonal entries in the first column of q A. The transformation A Q H 1 A Q 1 corrupts the upper Hessenberg form of A by creating a bulge of size degq z at the top left corner of the matrix. It is shown that the computation of A +1 essentially consists of chasing the bulge down and to the right of the matrix until it disappears. The tas can be accomplished by a standard Hessenberg reduction employing a sequence Q 2,..., Q n 1 of Householder matrices. The resulting algorithm requires On 2 flops and it is provably bacward stable [14]. The first implicit fast and accurate QR eigenvalue algorithm for real companion matrices has been presented in [6]. The algorithm employs a factored representation of A as the product of a unitary Hessenberg by a quasiseparable upper triangular matrix. A similar representation was previously considered in [1] for the efficient eigenvalue computation of certain ran-one corrections of unitary matrices. The bulge-chasing procedure is performed in linear time by taing advantage of the special form of A. Specifically, the multiplication on the left by the elementary unitary matrix Q H j amounts to a suitable rearrangement of the Schur factorization of the unitary factor by performing a sequence of swapping operations. Moreover, the multiplication on the right by Q j involves the updating of the quasiseparable structure of the upper triangular factor via manipulations of its generators. At the end of this updating process an auxiliary compression step is still required to recover a minimal-order quasiseparable parametrization for the triangular factor. In this paper we modify the algorithm given in [4] to incorporate single-shift and double-shift implicit techniques, thus obtaining a fast adaptation of the implicit QR method for the case where the initial matrix A A C n n is in companion form. The novel scheme still follows from the representation of each iterate A as a ranone correction of a unitary matrix, namely, A U z w T. The resulting eigenvalue algorithm taes in input a condensed representation of U, the perturbation vectors z and w as well as the coefficients of the shift polynomial q z and returns as output the generators of A +1 computed by means of 1.1. Differently from the approach pursued in [3] and [4], here the use of a suitable factored representation of U maes it possible the updating of the decomposition during the bulge-chasing process in a stable way without any additional compression and/or re-orthogonalization step. A minimal quasiseparable representation of U is easily computed from its factored form and then used for finding the unitary matrices Q j involved in the bulge-chasing process. The proposed algorithm is therefore logically simple, efficient and numerically stable. It turns out that the QR iteration can be performed at the cost of On flops using On storage. Numerical experiments confirm that the algorithm is stable. Experimental comparisons are also included by showing that in the considered cases our algorithm outperforms the one in [6]. Algorithms for explicit and implicit QR iterations with Hermitian and small ran perturbations of Hermitian matrices may be found in [5, 7, 8, 9]. An implicit QR algorithm for companion matrices was also discussed in [15]. The paper is organized as follows. In Sect. 
2 we recall the structural properties and introduce condensed representations for the matrices generated by the QR process applied to an input companion matrix. Fast algorithms to carry out both the single-shift and the double-shift implicit QR iteration applied to such matrices are described in Sect. 3 and Sect. 4, respectively. In Sect. 5 we address some complementary issues concerning deflation and stopping techniques while in Sect. 6 the results

3 of extensive numerical experiments are given and compared. Finally, the conclusion and a discussion are the subjects of Sect Matrix structures under the QR iteration applied to a companion matrix: The classes H n and U n. In this section we analyze some matrix structures which play a fundamental role in the design of a fast adaptation of the shifted QR eigenvalue algorithm 1.1 applied to an input matrix A A C n n in companion form. For a given monic polynomial pz of degree n, pz p + p 1 z +... p n 1 z n 1 + z n n z ξ i, i1 3 the associated companion matrix A is defined by p 1 p 1 A p n 1 It is well nown that the set of eigenvalues of A coincides with the set of zeros ξ 1,..., ξ n of pz and this property provides the classical reduction between the computation of polynomial zeros and eigenvalues of companion matrices. Matrix methods based on the QR iteration 1.1 applied to a companion matrix are customary for polynomial root-finding: in fact, the MATLAB command roots relies on this approach. The general routines for Hessenberg matrices require On 2 flops and On 2 memory space per iteration. Fast adaptations of the QR eigenvalue algorithm applied to the companion matrix A can achieve better linear estimates both for the cost and for the storage. The computational improvement is due to the exploitation of certain additional matrix structures in the iterates A,, that follow from the companion form of A A. Specifically, it is worth noting that A A is an upper Hessenberg matrix which can be expressed as a ran-one perturbation of the unitary companion matrix U U associated with the polynomial z n 1, i.e., A p + 1 p 1. p n The Hessenberg shape is preserved under the QR iteration and, moreover, from 1.1 it follows that A +1 is a ran-one perturbation of a unitary matrix whenever A fulfills the same property. Therefore each matrix A,, generated by the QR iteration 1.1 applied to the initial companion matrix A A can be recognized as a member of a larger class of structured matrices. Definition 2.1. Let H n C n n be the class of n n upper Hessenberg matrices defined as ran-one perturbations of unitary matrices. That is, B H n if there exist MATLAB is a registered trademar of The Mathwors, Inc..

4 4 U C n n unitary and z, w C n such that B U zw T. 2.1 The vectors z z i n i1, w w i n i1 are called the perturbation vectors of the matrix B. The Hessenberg form of B implies some additional properties of the unitary matrix U. Definition 2.2. We define the class U n to be the set of n n unitary matrices U u i,j n i,j1 such that u i,j z i w j, 1 j i 2, 3 i n, 2.2 for suitable complex numbers z i and w j referred to as lower generators of the matrix U. Let B be a matrix from the class H n represented in the form 2.1 with the unitary matrix U and the vectors z z i n i1 and w w i n i1. Then the matrix U belongs to the class U n and the numbers z i, i 1,..., n and w i, i 1,..., n 2 are lower generators of the matrix U. In the reverse direction let U be a unitary matrix from the class U n with lower generators z i i 3,..., n and w i i 1,..., n 2. Tae arbitrary numbers z 1, z 2 and w n 1, w n and set z z i n i1, w w i n i1, the matrix B U zw T belongs to the class H n. In this paper we use essentially representations of the matrices from the class U n as a product U V F, where V and F are unitary matrices with zero entries above some superdiagonal and below some subdiagonal respectively. We consider in more details properties of such matrices. Denote by the direct sum of two matrices such that A B A B Lemma 2.3. Let W w ij n i,j1 be unitary matrix and let m be positive integer. The matrix W satisfies the conditions w ij for i > j + m if and only if W admits the factorization where. W W 1 W 2 W n m, 2.3 W i I i 1 W i I n i m, i 1,..., n m 2.4 with m + 1 m + 1 unitary matrices W i. The matrix W satisfies the conditions w ij for j > i + m if and only if W admits the factorization W W n m W n m 1 W 2 W with the unitary matrices W i i 1,..., n 2 of the form 2.3. Proof. Assume that w i,j, i > j + m. 2.6 The first column of the matrix W has the form w1 W :, 1,

5 where w 1 is an m + 1-dimensional column with the unit norm. We tae a unitary m+1 m+1 matrix W 1 such that W1 H f 1 1 T and then determine the matrix W 1 by the formula 2.4. We get W1 H W :, 1 e 1, where e 1 is the first vector from the standard basis in C n and since the matrix W1 W is unitary we conclude that W W 1 1 Ŵ 2 with a unitary n 1 n 1 matrix Ŵ2 which satisfies the condition 2.6. Next we apply the same procedure to the matrix ˆF 2 and so on and on the n m 1-th step we obtain the factorization Assume that the matrix W has the form 2.3 with the matrices W i i 1,..., n m of the form 2.4. We prove by induction in n that the condition 2.6 holds. For n m + 2 we have obviously W1 1 W W 1 W 2 1 W 2 and therefore w n,1 w m+2,1. Let for some n 2 the n 1 n 1 matrix where W W 1 W n m 1, W i I i 1 W i I n i m 1, i 1,..., n m 1 with m + 1 m + 1 matrices W i, satisfies the condition The matrix W defined via 2.3, 2.4 may be done in the form W In m 1 W W 1 W n m 1 W n m. 1 W n m 5 Hence it follows that W m + i + 1 : n 1, 1 : i W m + i + 1 : n, 1 : i, i 1,..., n m 1 which completes the proof of the first part of the lemma. Applying the first part of the lemma to the transposed matrix W T we prove the second part. Remar 2.4. Every matrix W i i 1,..., n 2 may be taen either as a m + 1 m + 1 Householder matrix or as a product of complex Givens rotations. For a matrix U U n we derive a condensed representation as a product of elementary unitary matrices. Definition 2.5. Let z z i n i1 be a vector and let V i, i 2,..., n be 2 2 unitary matrices. We say that the vector z is reduced by the matrices V i i 2,..., n if there exist complex numbers β i, i 2,..., n such that the relations β n z n, V H i zi β i+1 βi hold. Define the unitary matrix V C n n as the product, i n 1,..., 2, 2.7 V V n 1 V 2, 2.8

6 6 where V i I i 1 V i I n i 1, 2 i n We say also that the vector z is reduced by the matrix V. From 2.8 by Lemma 2.3 it follows that V v i,j is a unitary lower Hessenberg matrix, i.e., v i,j, j > i Lemma 2.6. Let U C n n be a unitary matrix from the class U n with lower generators z i i 3,..., n, w j j 1,..., n 2 and let z 1, z 2, w n 1, w n be arbitrary numbers. Assume that the vector z z i n i1 is reduced by 2 2 unitary matrices V i, i 2,..., n 1, define the unitary matrix V via relations 2.8, 2.9. Then the unitary matrix F V H U f i,j has null entries below the second subdiagonal, i.e., f i,j, i > j Proof. Set w w 1,..., w n T, z z 1,..., z n T and define the matrix B C n n as in 2.1. It is clear that B is an upper Hessenberg matrix. Furthermore the relations imply that the vector g V H z g i has null components g i for i > 3. By Lemma 2.3 V is a lower Hessenberg matrix. Hence using the fact that B and V H are upper Hessenberg matrices we conclude that the matrix F V H U V H B + V H z w T satisfies the relations Lemma 2.7. Every matrix U from the class U n admits the decomposition U V F, 2.12 where V is a unitary lower Hessenberg matrix and F f ij is a unitary matrix satisfying the condition f i,j, i > j + 2. Moreover one can tae the matrix V in the form where V V n 1 V 2, 2.13 V i I i 1 V i I n i 1, 2 i n with 2 2 unitary matrices V i and the matrix F in the form where F F 1 F 2 F n 2, 2.15 F i I i 1 F i I n i 2, 1 i n with 3 3 unitary matrices F i Proof. Let z i i 3,..., n, w j j 1,..., n 2 be lower generators of the matrix U and let z 1, z 2, w n 1, w n be arbitrary numbers. Determine 2 2 unitary matrices V i such that the relations 2.7 hold, and define the unitary matrix V via

7 2.13, By Lemma 2.3 the matrix V is lower Hessenberg. Moreover the vector z z i n i1 is reduced by the matrix V and hence by Lemma 2.6 there exists a unitary matrix V of the form 2.13 such that V H U F is lower banded with bandwidth 2. Summing up, we obtain that any matrix B from the class H n can be represented as B U zw T V F zw T, where V and F are unitary matrices represented in the factored form as specified by 2.13, 2.14 and 2.15, 2.16, and z and w are the perturbation vectors. Hence, the matrix B H n is completely specified by the following parameters: 1. the unitary matrices V, 2,..., n 1 defining the matrix V ; 2. the unitary matrix F, 1,..., n 2 defining the matrix F ; 3. the perturbation vectors z and w. These parameters are also called the generating elements of the matrix B. The decomposition of U U n by means of the elementary matrices V and F is numerically robust but not easy to be manipulated under the QR iteration applied to B. In this respect a more suited condensed form of U is its quasiseparable parametrization introduced in [4]. The representation gives an explicit description of each entry of U as the product of certain vectors and matrices of small size. We will show that for every matrix from the class U n there exist vectors g j C 2, 1 j n, h j C 2, 1 j n, and matrices B j C 2 2, 2 j n, such that u i,j g T i B i,j h j, for j i, 2.17 where B i,j B i+1 B j for n j i + 1 and B i,j I 2 if i j. The elements g j, h j j 1,..., n, B j j 2,..., n are called upper generators of the matrix U. The next result shows that generators of U can be easily reconstructed from the two sequences of elementary matrices V and F defining the unitary factors V and F, respectively. The recurrence relations for the upper generators can also be used to compute some elements in the lower triangular part of U. In particular we provide the formulas for the subdiagonal elements of U involved in the QR iteration. Theorem 2.8. Let U u i,j be a unitary matrix from the class U n with the given factorization U V F, where the factors V and F defined by the formulas 2.13, 2.14 and 2.15, 2.14 using the given unitary 2 2 matrices V i i 2,..., n 1 and unitary 3 3 matrices F i i 1,..., n 2. The entries u i,j, max{1, i 2} j n, 1 i n, satisfy the following relations: u i,j g T i B i,j h j for j i ; u i,j σ j for 2 i j + 1 n, 2.19 where the vectors h and the matrices B are determined by the formulas h F 1: 2, 1, B +1 F 1: 2, 2: 3, 1 n 2, 2.2 and the vectors g and the numbers σ are computed recursively Γ 1 1, g T 1 1 ; 2.21

8 8 σ g T +1 Γ +1 V +1 Γ 1 F, 1,..., n 2, 2.22 σ n 1 Γ n 1 h n 1, g T n Γ n 1 B n with the auxiliary variables Γ C 1 2. Proof. Let the elements g i, h i i 1,..., N, B 2,..., n, σ 1,..., n 1 be given via Using the elements g, B we define the matrices G 1,..., N of sizes 2 via relations G colg T i B i, i1, 1,..., n It is easy to see that the relations 2.18, 2.19 are equivalent to G h U1 : + 1,, 1,..., n 1; U1 : n, n G n h n Next we set σ C V +1 V 2 F 1 F, 1,..., n 2. Using the formulas 2.13, 2.16 we get C 1 : + 2, 1 : + 2 C, I n 2 1,..., n Moreover using 2.12, 2.13, 2.15 we have U V N 1 V +2 C F +1 F N 2, 1,..., n 3; U C n 2. Furthermore from 2.13, 2.16 we get I+1 V n 1 V +2 and therefore I, F +1 F n 2 U1 : + 1, 1 : C 1 : + 1, 1 :, 1,..., n Next we prove by induction that C 1 : + 2, 1 : + 2 C 1 1 :, 1 : 1 G h G B +1 σ g T +1, 1,..., n 2. Γ For 1 using the formula C 1 V 2 F 1 and the formulas 2.13, 2.16 we have therefore 1 C 1 1 : 3, 1 : 3 F V 1. 2 From here using 2.21, g T 1 G 1 and 2.2, 2.22 with 1 we get C 1 1, 1 : 3 F 1 1, 1 : 3 g 1 h1 B 2 G1 h 1 G 1 B 2,

9 C 1 2 : 3, 1 : 3 V F 1 σ1 g T 2 Γ 2 which implies 2.28 with 1. Let for some with 1 n 3 the relation 2.28 holds. Using the equality G B G +1 one can rewrite 2.28 in the form g T +1 C 1 : + 1, 1 : G C 1 : + 2, 1 : Γ +1 Using the formula C +1 V +2 C F and the formulas 2.26, 2.13, 2.16 we obtain C +1 1 : + 3, + 1 : + 3 I+1 C 1 : + 1, 1 : G +1 Γ V +1 I. +2 F 1 +1 From here using the relation F +1 1 : 2, 1 : 3 h +1 B +2 and the formula 2.22 we get. C +1 1 : + 1, + 1 : + 3 G +1 h +1 G +1 B +2, C : + 3, + 1 : + 3 V +2 Γ+1 1 F +1 σ+1 g T +2 Γ +2 which completes the proof of Now combining the relations 2.27, 2.28 together we obtain the equalities 2.25 for 1,..., n 2. Moreover using the equality C n 2 U and the relation 2.28 with n 2 we have U1 : n, n 1 : n G n 2 B n 1 g T n 1 Γ n 1 Taing here B n, h n 1, h n, σ n 1, g T n as in 2.2, 2.23 we get Gn 1 h U1 : n, n 1 : n n 1 G n 1 B n h n g T n h n σ N 1 which completes the proof of the theorem. Remar 2.9. One can chec easily that the relations 2.17 are equivalent to the equalities. U, : N g T H, i 1,..., n 2.29 with 2 n + 1 matrices H defined via relations H rowb,i h i n i, 1,..., n In the next sections we provide a fast adaptation of the implicit QR iteration 1.1 applied for computing the complete set of eigenvalues of the input companion matrix A A H n. It taes in input the generating elements of the matrix A H n together with the coefficients of the shift polynomial q z and returns as output the generating elements of the matrix A +1 H n. The computation is carried out at the total cost of On flops using On memory space.

10 1 3. The Structured QR Iteration. In this section we present a fast modification of the implicit QR iteration 1.1 applied to an input matrix A H n expressed in terms of its generating elements, that is, given in the form A U zw T V F zw T V n 1 V 2 F 1 F n 2 zw T, where U V F is the factored representation of U U n and z and w are the perturbation vectors of A. For the sae of notational simplicity we omit the superscript which labels the QR steps. Let qz be a monic polynomial of degree l {1, 2} determined to speed up the convergence of the QR iteration. The cases l 1 and l 2 are referred to as the single-shift and the double-shift iteration, respectively. Moreover, let Q 1 G 1 I n l 1 be a unitary matrix suitably chosen to annihilate the subdiagonal entries in the first column of qa. The transformation A Q H 1 AQ 1 corrupts the upper Hessenberg form of A by creating a bulge of size l at the top left corner of the matrix. It can be shown that the computation of A 1 essentially consists of chasing the bulge down and to the right of the matrix until it disappears. The tas can be accomplished by a standard Hessenberg reduction employing a sequence Q 2,..., Q n 1 of unitary matrices. The cumulative unitary factor Q 1 such that A 1 Q 1H AQ 1 is given by the product Q 1 : Q 1 Q 2 Q n 1. The updating of the matrices A A 1 and U U 1 and of the vectors z z 1 and w w 1 can be carried out in n 1 steps according to the following rules: A 1 Q H 1 A; A A Q, A +1 Q H +1A, 1,..., n 1; A 1 : A n 1 Q n Here every A, 1,..., n 1 is an upper Hessenberg matrix. Moreover using the formula 2.1 and setting U 1 Q H 1 U; U U Q, U +1 Q H +1U, 1,..., n 1; U 1 : U n 1 Q n 1, 3.2 we get z 1 Q H 1 z; z +1 Q H +1z, 1,..., n 2; z 1 : z n w 1 w; w T +1 w T Q T, 1,..., N 1; w 1 : w n. 3.4 A U z w T, 1,..., n Here U are unitary matrices and z, w are vectors. Moreover the matrices Q H +1 are chosen to remove the bulge in the matrix A and therefore A is an upper Hessenberg matrix. Hence it follows that every matrix A belongs to the class H n. The structured QR iteration essentially consists of an efficient procedure for evaluating the unitary matrices Q j, 1 j n 1, combined with an efficient scheme for updating the factorized representation of U under the multiplication on the right and on the left by the matrices Q j. In the next two subsections we describe the structured QR iteration in the single-shift and in the double-shift case. Specifically, in Subsection 3.1 we present a detailed description of the fast single-shift iteration and give a formal proof of its correctness. Then in Subsection 3.2 we setch the fast double shift iteration putting in evidence the basic differences with the single-shift step. A proof of the correctness of the double-shift iteration proceeds in exactly the same way as for the single-shift case.

11 3.1. The Structured QR Iteration: The Single-Shift Case. The procedure FastQR ss for the fast computation of the generating elements of A 1 such that 11 A αi QR, QR factorization A 1 : Q H AQ 3.6 is outlined below. Given in input the generating elements V 2,..., n 1, F 1,..., n 2, z z i n i1, w w i n i1 of A together with the shift parameter α, the algorithm calculates the generating elements V 1 2,..., N 1, F 1 1,..., N 2 z 1 z 1 i n i1, w1 w 1 i n i1 of A1. The condensed representations of A 1 and U 1 are computed by the scheme , where Q j are Givens rotation matrices determined in the bulge-chasing process. Procedure FastQR ss 1. Using algorithm from Theorem 2.8 compute upper generators g i, h i i 1,..., n, B 2,..., n and subdiagonal entries σ 1,..., n 1 of the matrix U. 2. Set β n z n and for n 1,..., 3 compute β V H z 1, 1 : 2 β The third stage of the algorithm is to compute the Givens rotation matrices G, 1,..., n 1 and the updated perturbation vectors z 1, w 1 a Set z τ 1 z 1. Determine the complex Givens rotation matrix G 1 such that G1 H g T 1 h 1 z 1w 1 α 3.8 σ 1 z 2w 1 and compute z 1 1 z τ 2 σ τ 1 g τ 2 T G H 1 z 1 1 g T 1 h1 gt 1 B1 z 2 σ 1 g T b For 1,..., n 2 perform the following. Compute ρ ρ ρ ρ z 1 z τ +1 σ τ g τ +1 T h +1 z +2 w τ σ +1 w τ w +1 G. 3.1 Determine the complex Givens rotation matrix G +1 from the condition G H +1 Compute z 1 +1 z τ +2 σ τ +1 g τ +2 T ρ z τ +1 w1 ρ z +2 w 1 G H +1 G H +1 ρ z τ +1 z +2 g τ +1 T B +1 ρ g

12 12 c Compute and set w 1 n 1 w n 1 z 1 n w τ n 1 w n G n z τ n The fourth stage of the algorithm is to compute the generating elements V 1 2,..., n 1, F 1 1,..., n 2 of the matrix A 1. a Determine the complex Givens rotation matrix V τ 2 such that V τ 2 H τ 1 z 2 β with some number β 1 2 and compute the 3 3 matrix 1 F τ G H V τ 2 H 1 V 2 b For 1,..., n 3 perform the following. Compute the 3 3 matrix G H X V +2 β 3 V τ +1 1 F and the 4 4 matrix τ F Y 1 1 F +1 G I Determine the complex Givens rotation matrix Z from the condition Compute the matrix and the matrix X +1 1, 2 : 3Z. 3.2 W X +1 1 Z I2 D Z H 3.21 Y The unitary matrix W has the zero entry in the position 1, 3 and the unitary matrix D has the zero entry in the position 4, 1. Compute the factorization W 1 V τ +2 V with unitary 2 2 matrices V τ +2, V1 +1 and the factorization D F F τ with 3 3 unitary matrices F 1 c Compute F 1 n 2 F τ n 2, F τ +1. V 1 n 1 GH n 1V τ Gn 2 1 n 1, G n 1

13 Theorem 3.1. Let A U zw T be a matrix from the class H n with generating elements V 2,..., n 1, F 1,..., n 2, z z i n i1, w w i n i1 such that the vector z is reduced by the matrix V V n 1 V 2, V i I i 1 V i I n i 1, 2 i n 1. Then the output data V 1 2,..., N 1, F 1 1,..., N 2, z 1 z 1 i n i1, w1 w 1 i n i1 returned by FastQR ss are the generating elements of the matrix A 1 satisfying 3.6 and, moreover, z 1 is reduced by the matrix V 1 V 1 n 1 V 1 2, V 1 i I i 1 V 1 i I n i 1, 2 i n Proof. Using Stage 1 of the algorithm we obtain upper generators and subdiagonal entries of the matrix U. On Stage 2 of the algorithm we use the fact that the vector z is reduced by the matrix V and compute the parameters β, N,..., 3 using the formulas 2.7. To justify Stage 3 of the algorithm we prove by induction that the formulas 3.8, 3.11 determine the unitary matrices G, 1,..., n 1 such that the unitary matrices Q I 1 G I n 1 mae the transformations and moreover for 1,..., n 1 the relations U + 1, : n hold and the vectors z, w have the form σ τ g τ +1 T H z z 1 1,..., z1, zτ +1, z +2,..., z n T, 3.28 w w 1 1,..., w1 1, wτ, z +1,..., z n T where the matrices U and the vectors z, w are determined in , hold. On the first step we deal with the matrix A 1 Q H 1 A U 1 z 1 w T 1 with U 1 Q H 1 U, z 1 Q H 1 z, w 1 w. We have g T A αi1 : 2, 1 1 h 1 z 1 w 1 α σ 1 z 2 w 1 and determine the complex Givens rotation matrix G 1 from the condition 3.8. Next using the formula 2.29 with 1, 2, the formula 2.19 with 1 and the relation H 1 h 1 B 2 H 2 we get g T U1 : 2, 1 : N 1 h 1 g T 1 B 2 H 2 σ 1 g T. 2 H 2 Using the formula 3.9 we obtain 3.27, 3.28, 3.29 with 1. Let for some with 1 n 2 the statement of induction holds. From the formulas 3.2 it follows that U + 2, : n U + 2, : n.

14 14 Furthermore using the formulas 2.29, 2.19 we obtain U + 2, + 1 : N σ +1 g T +2 H +2 and from the formulas 3.5, A + 2, and 3.28, 3.29 it follows that U + 2, z +2 w τ. Moreover using the formula 3.27 and the equality H +1 h +1 B +2 H +2 we obtain U +1 : +2, : N We have σ τ g τ +1 T h +1 g τ +1 T B +2 H +2 z +2 w τ σ +1 g T +2 H A A Q U Q z w T Q U z w T +1. Postmultiplication of a matrix C by the matrix Q means only postmultiplication of the columns, + 1 of C by the matrix G. Hence applying the formula 3.1 we obtain and U + 1 : + 2, : n ρ ρ g τ +2 T B +2 H +2 ρ ρ g T +2 H +2 w +1 w 1 1,..., w1, wτ +1, w +2,..., w n T. Now the elements in the, + 1,, + 2 positions in the matrix A are ρ z τ +1 w1, ρ z +2 w 1 and we determine the matrix of Givens rotation G+1 H from the condition Next consider the matrix A +1 Q H +1A Q H +1U Q H +1z w T +1 U +1 z +1 w T +1. Premultiplication of a matrix C by the matrix Q H +1 means only premultiplication of the rows + 1, + 2 of C by the matrix G+1 H. Hence using the formulas 3.13, 3.12 we obtain U , + 1 : N σ τ +1 g τ +2 T H +2 and z +1 z 1 1,..., z1, z1 +1, zτ +2, z +3,..., z n T. Thus we have determined the Givens rotation matrices G, 1,..., n 1 and the vectors z n 1, w n 1. To finish the proof of Stage 3 notice that the last coordinates of the vectors z n 1 z 1 and w n w 1 are given by the formulas 3.15 and Now we will justify Stage 4 of the algorithm. By the definition of generating elements we have U V n 1 V 2 F 1 F n 2, 3.31

15 15 where V i I i 1 V i I n i 1, F i I i 1 F i I n i 2. We should prove that the matrix U 1 admits the factorization where U 1 V 1 n 1 V 1 2 F 1 1 F 1 n 2, 3.32 V 1 i I i 1 V 1 i I n i 1, F 1 i I i 1 F 1 i I n i 2 with the matrices V 1 i, F 1 i determined via 3.16, 3.26, and moreover the vector z 1 is reduced by the matrix V 1. To do this we prove by induction that every unitary matrix U from 3.2 admits the factorization with where U ˆV ˆF, 1,..., n ˆV V n 1 V +2 V τ +1 V 1 V 1 2, ˆF F 1 1 F 1 1 F τ F +1 F n 2, V τ +1 I V τ +1 I n 2, F τ I 1 F τ I n 2 and the matrices V i, F i and V 1 i, F 1 i are defined above. Moreover we will prove that the vector z defined in 3.3 is reduced by the matrix ˆV, i.e., β n z n, V H i zi β i+1 βi, i n 1,..., + 2; 3.34 V τ +1 β H z τ β 1 +1, 3.35 V 1 i H z 1 i β 1 i+1 β 1 i, i,..., From 3.31 it follows that the matrix U 1 Q H 1 U has the form U 1 V n 1 Ṽ3Q H 1 V 2 F 1 F 2 F n 2. Taing the matrix V τ 2 from 3.16 and the matrix F τ 1 from 3.17 we obtain U 1 ˆV 1 ˆF 1 with ˆV 1 V n 1 V 3 V τ 2, ˆF 1 F τ 1 F 2 F n 2. Furthermore using the relations 2.7 with i n,..., 3 and 3.16 we conclude that the vector z 1 is reduced by the matrix ˆV 1 V n 1 V 3 V τ 2.

16 16 Assume that for some with 1 n 3 the induction statement holds. Hence the matrix U +1 Q H +1 U Q has the form with U +1 T +1 S +1 T +1 V n 1 Ṽ+3Q H +1V +2 V τ 1 +1 V V 1 2, We have and S +1 F 1 1 F 1 τ 1 F F +1Q F +2 F n 2. Q H +1V +2 V τ +1 I X +1 I n 1 F τ F +1Q I 1 Y I n 3 with the 3 3 unitary matrix X +1 from 3.18 and the 4 4 unitary matrix Y from Next we determine a 2 2 unitary matrix Z from the condition 3.2 and compute the matrix W by the formula The unitary 3 3 matrix W has the zero entry in the position 1, 3. Hence this matrix can be factorized in the form 3.23 with unitary 2 2 matrices V τ +2, V1 +1. Set Z I +1 Z I n 3 and furthermore ˆV +1 T +1 Z, ˆF +1 Z H S +1. We have where U +1 ˆV +1 ˆF +1, 3.37 ˆV +1 V n 1 V +3 Q H +1V +2 V τ +1 Z V 1 V 1 2 V n 1 V +3 V τ +2 V V Ṽ 1 2 which yields the desired representation for the matrix ˆV +1. Let us chec that the vector z +1 is reduced by the matrix V +1. Indeed using the relations 3.18, 3.21, 3.23 we have 1 Z H 1 V +1 H 1 1 V τ τ V +1 H 1 1 V+2 H +2 H G+1 1 Moreover using the relations 3.12, 3.35 and the relation 3.34 with i + 2 we.

17 17 get 1 1 Z H Z H V τ +1 H 1 1 Z H V τ +1 H 1 1 V τ +1 H 1 zτ +1 β +2 V H +2 1 V+2 H 1 Z H G+1 1 zτ +1 z +2 β +3 β +1 z 1 +1 z τ +2 β +3 β +1. Hence it follows that V τ +2 β H z τ β 1 +2, V 1 +1 H z 1 +1 β 1 +2 β 1 +1 which together with the relations 3.34 with i N 1,..., + 3 and 3.36 implies that the vector z +1 is reduced by the matrix V +1. Thus applying Lemma 2.6 to the matrix U +1 we get Furthermore we get Here we have ˆF , ˆF +1 F 1 1 F 1 1 ZH F τ F +1Q F +2 F n 2. Z H F τ F +1Q with the 4 4 matrix D from This implies ˆF +1 F 1 1 F 1 1 I 1 D I N 3 I 1 D I n 3 F +2 F N 2. We have D 4, 1 ˆF +1 +3, and using 3.38 we conclude that the 4 4 unitary matrix D has the zero entry in the 4, 1 position. Hence the matrix D admits the factorization 3.24 with 3 3 unitary matrices F 1, F τ +1. Hence it follows that ˆF +1 F 1 1 F F F τ +1 F +2 F N 2. which completes the proof of Thus the unitary matrix U n 2 admits the factorization U n 2 ˆV n 2 ˆF N 2 and therefore the matrix U 1 Q H n 1U n 2 Q n 2 Q n 1 has the form U 1 Q H n 1V τ 1 n 1 V n 2 V 1 2 F 1 1 F 1 τ n 3 F n 2 Q n 2Q n 1.

18 18 From here using 3.25, 3.26 we obtain the desired factorization 3.32 for the unitary matrix U 1. Next using 3.12 with n 1 and 3.25 we get V 1 n 1 H z 1 n 1 β 1 n V 1 n 1 H G n 1 Gn 1 H 1 p n 1 p 1 N β 1 n 1 which together with 3.36 with n 2 means that the vector z 1 is reduced by the matrix V 1 V 1 n 1 V 1 2. Remar 3.2. Notice that in the subsequent iteration the parameter β 3 may be updated by the formula 3.35 with 2, i.e., 3 τ z 3 V τ β 4 β 1 3. In this case the Stage 2 of FastQR ss can be sipped The Structured QR Iteration: The Double Shift Case. The double shift technique follows again the scheme In this case, however, the matrices Q are of the type Q I 1 G I n 2, where G is a 3 3 unitary matrix and 1 n 2. The goal is to reduce the matrix qa A α 1 I n A α 2 I n in upper Hessenberg form, where α 1 and α 2 are the eigenvalues of the 2 2 trailing bloc in the matrix A. At the beginning, Q 1 is chosen in such a way that it reduces the first column of qa. The lower triangular part of the matrix A 1 obtained after 1 steps of is given by trila 1, a 1 1 γ 1 1 a γ 1 2 a 1 1 δ 1 â θ 1 ˆγ â +1 ξ 1 η ˆγ +1 a γ n 1 a n In general, the matrix G is chosen so that G H δ 1 â θ 1 ˆγ â +1 ξ 1 η ˆγ +1 β 1 1 a 1 ˆγ 1 a 1 +1 γ 1 +1 Observe that, while in classical implicit QR with double shift the matrix G is only required to reduce the first column of the bulge, in this case the whole bulge should be eliminated in order to ensure that A attains again an upper Hessenberg form. The bulge chasing step to be applied to the matrix U can be described as follows..

19 19 Set R U + 1 : + 4, : + 3, we have and next and R σ t ρ +1 g t +1 T h +2 g t +1 T B +3 h +3 z t +2 wt z +3 w t z +4 w t σ 1 z t +2 w1 σ τ +1 g τ +2 T h +2 g τ +2 T B +3 h +3 z +3 w τ +1 σ +2 g +3 T h +3 z +4 w τ +1 z +4 w +2 σ +4 G H +1 1 R G 1 σ t +1 ρ +2 g t +2 T h +3 z τ +3 w1 z +4 w 1 z τ +3 wt +1 σ τ +2 g τ +3 T h +3 z +4 w t +1 z +4 w τ +2 σ +4 The updated vectors z +1 and w +1 are defined so that w 1 w t w t +1 G H w τ w τ +1 w z 1 z t +1 z τ +2 G+1 H z t z τ +1 z +2 Once determined through the bulge chasing process, the matrices Q j are used to compute the product representation for the updated U. Let us describe the general update step, using for the product representation the same notation as in the previous section. In order to exploit the bulge chasing properties of the matrices Q j, consider an update step where the current unitary term U is multiplied by Q on the right and by Q H +1 on the left. We have: Q H +1 U Q Q H +1 V n 1 V +3 V τ +2 V t +1 V 1 2 F 1 1 F t.. F τ +1 F +2F n 2 Q V n 1 QH +1 V +3 V τ +2 V t +1 V 1 2 F 1 1 F t F τ +1 F +2 Q F n 2. Let us examine more closely the product P +1 Q H +1 V +3 V τ +2 V t +1. We have: G H +1 G H +1 1 I2 1 V +3 1 V τ +2 1 V t +1, I 2

20 2 that is, multiplying on the left by Q H +1 spoils the lower Hessenberg structure of V +3 V τ +2 V t +1. Analogously, multiplying on the right by Q spoils the structure of the product F t F t F τ +1 F +2: I 2 1 F τ +1 G 1 I 2 I2 F +2 G In order to restore these structures, define a 3 3 unitary matrix Z +2 such that P +1 Z H +2 is again lower Hessenberg, where Z +2 I +1 Z +2 I n 4, and consider the updated U in the form Q H +1 U Q V n 1 Q H +1 V +3 V τ +2 V t +1 ZH +2 V 1 2 I 2. F 1 1 Z +2 F t F τ +1 F +2 Q F n 2. Since P +1 Z+2 H is lower Hessenberg, it can be re-factorized as where P +1 Z H +2 V τ +3 V t +2 V 1 +1, V τ +3 I +2 V τ +3 I n 4, V t +2 I +1 V t +2 I n 3, V 1 +1 I V 1 +1 I n 2, for suitable 2 2 unitary matrices V τ +3, Vt +2 and V1 +1. Because of the way in which the matrices Q j have been chosen, the matrix Z +2 must also have the property of restoring the structure of the product F t F τ +1 F +2. It is therefore possible to compute 3 3 unitary matrices F 1, F t τ +1 and F +2 such that where Z +2 F t F τ +1 F +2 Q F 1 F t +1 F τ +2, F 1 I 1 F 1 I n 2, F t +1 I F t +1 I n 3, F τ +2 I +1 F τ +2 I n Convergence Criteria and Deflation Techniques. Deflation is an important concept in the practical implementation of the QR iteration. Deflation amounts to setting a small subdiagonal element of the Hessenberg matrix to zero. This is called

21 deflation because it splits the Hessenberg matrix into two smaller subproblems which may be independently refined further. For the sae of clarity, suppose that after iterations of the QR algorithm the subdiagonal entry β n s of A becomes small enough to be considered negligible. A customary criterion requires that such an entry be small compared with its diagonal neighbors β n s u a n s,n s + a n s+1,n s+1. If this condition is satisfied, then β n s is set to zero which maes A a bloc upper triangular matrix: A A 1 A 2 where A 1 C n s n s and A 2 C s s. Now A 1 and A 2 can be reduced into upper triangular form separately. The process is called deflation and continuing in this fashion, by operating on smaller and smaller matrices, we may approximate all the eigenvalues of A. Specifically, if s 1 or s 2 then the computed eigenvalues of A 2 may be taen to be eigenvalues of the initial matrix A. The matrix U can be partitioned accordingly with A : where u u 1 u 2 U, w U 1 H u 2 w 1 w 1 w 2, U 2,, u 1, w 1 C n s, u 2, w 2 C s. Observe that U 1 and U 2 are not generally unitary matrices and, therefore, the matrices A 1 and A 2 are specified in terms of the whole matrix U as follows: A 1 I n s U In s + u H 1 w 1, 21 and A 2 I s U Is + u H 2 w 2. These representations enable the fast QR iteration to be applied to the input matrices A 1 and A 2. Observe that, even though the whole matrix U is required for representation purposes, woring only on A 1 or A 2 means that only part of the generators are actually involved in the computation. Remar 4.1. The implicit QR method for eigenvalue computation relies heavily on the implicit Q theorem see e.g. [12], which applies to irreducible Hessenberg matrices. As a consequence, deflation is a crucial step in implicit QR and it is necessary to perform it whenever possible.

22 22 A more subtle form of deflation should also be applied when two consecutive subdiagonal entries of A become small typically, small enough for their product to be less than or equal to the machine epsilon. In this case, the next bulge chasing process should only be applied to the submatrix A r+1 : n, r+1 : n, where r is the index such that the small subdiagonal entries are A r + 1, r and A r + 2, r + 1. Indeed, it turns out that beginning the bulge chasing process from the r + 1-st row introduces negligible perturbations of the Hessenberg structure in the entries of indices r + 2, r and r + 3, r in the double shift case. This is an idea that dates bac to Francis and is explained in detail in Wilinson s classical boo [17], Chapter 8 Section 38. It is however often overlooed in textboos, so we recall it briefly here. Let A be an upper Hessenberg matrix having small elements ɛ 1 and ɛ 2 in positions r + 1, r and r + 2, r + 1, for a certain index r, and consider the following partition: A ɛ 1 ɛ 2 X Y E W. 4.1 With this notation we have: Proposition 4.2. Let µ be any eigenvalue of W. Then µ is also an eigenvalue of a matrix A, of the same size as A, which differs from A only by an element ɛ 1 ɛ 2 /Ar + 1, r + 1 µ in position r + 2, r. As a consequence of Proposition 4.2, if the product ɛ 1 ɛ 2 is small enough then the next QR iteration can be applied to the submatrix W, ignoring the fact that E is not numerically zero and performing in fact a sort of deflation. In practical implementations, the search for a suitable partition is generally performed from bottom up, so that one always wors on the smallest possible trailing submatrix. More precisely, before starting each QR iteration on the current iterate A, the following procedure is performed: - loo for the smallest index s such that A n s + 1, n s is negligible; - if s 1 or s 2 for the double shift case, then one or two eigenvalues have been found; exit procedure; - else loo for the largest index r, with n s r n 2 or n s r n 3 for the double shift case, such that beginning the bulge chasing process from the r + 1-st row introduces negligible perturbations of the Hessenberg structure in the entries of indices r + 2, r and r + 3, r for the double shift case; - perform bulge chasing on the trailing submatrix A r + 1 : n, r + 1 : n. See also [13] for practical criteria and suggestions in the classical non-structured case. 5. Numerical Experiments. The algorithms for eigenvalue computation described in the previous sections have been implemented in Fortran 95, for the single and double shift versions. These programs deal with the particular case of the companion matrix A associated with a given polynomial P x n c x and have been used to test the efficiency and stability of the proposed algorithms on several examples. The software is available upon request to the authors. The following definitions of errors are used in tests for stability: Absolute forward error a.f.e.: this is the absolute distance between the

23 LAPACK FASTQR LAPACK FASTQR time.5.4 logtime deg deg Fig Comparison of running times Example 5.1. eigenvalues computed by the structured method which will be called here FASTQR for reference purposes and the eigenvalues computed by LAPACK routines ZGEEV and DGEEV, which are assumed to be correct. The absolute forward error is computed as the -norm of the difference between the two output vectors obtained from FASTQR and LAPACK and sorted by decreasing absolute value. An estimate for this error, based on theoretical arguments, is given by the quantity δ ɛ A 2 max{condeiga}, where ɛ is the machine epsilon and, using Matlab terminology, condeiga is the vector of eigenvalue condition numbers for the matrix A. Matrix relative bacward error: it is defined as m.b.e. Q aa Q a V f F f p f q T f A, where A A is the initial matrix in the QR iteration process, Q a is the accumulated unitary similarity transformation and V f, F f, p f, q f are generators for the product structure of the final iterate A f. Coefficient relative bacward error: it is a vector whose entries are defined as c.b.e c c c if c, where { c } 1,...n are the coefficients, computed in high precision, of the polynomial whose roots are the eigenvalues of A given by our structured method. We start with some examples of the behavior of the single shift version of FASTQR applied to complex polynomials. Most examples in this section are taen from [4] and [6]. Example 5.1. Given a positive integer n, define P x n a + ib x, where a and b are uniformly distributed random numbers in [ 1, 1].

24 y 2.2*x logtime logdeg Fig Experimental estimate of time growth Example 5.1. Random polynomials as in Example 5.1 are particularly useful for tests on computational speed and behavior on polynomials of high degree. Figure 5.1 shows, on the left, running times for FASTQR and LAPACK on such random polynomials, for degrees ranging between 5 and 35. On the right we report the log-scale plot of the same data, where it can be seen that FASTQR becomes faster than LAPACK for polynomials of degree about 1. Figure 5.2 is a log-log plot of the running times of FASTQR on polynomials of degrees ranging between 5 and 2. A linear fit of this plot shows that the growth of running time is indeed quadratic in n, since the slope of the line is about 2.2. The number of required QR iteration is generally less than 3 per eigenvalue. Polynomials in Example 5.1 are generally rather well-conditioned and the behavior of FASTQR is satisfactory both for bacward and forward stability more details are given in Example 5.4 for the double shift version. In the next example, coefficients are still random but chosen with an unbalanced distribution: Example 5.2. Given a positive integer n, define P x x n + n 1 j a jx j, with a j u j 1 vj, where u j is a uniformly distributed random number in [ 1, 1] and v j is a uniformly distributed random number in [ 5, 5]. The behavior of FASTQR on such polynomials is shown in the following table: n m.b.e. a.f.e. δ Bacward errors are small, which ensures bacward stability. The growth of the

25 LAPACK SSS QR FASTQR LAPACK SSS QR FASTQR time.8.6 time deg deg Fig Comparison of running times Example 5.4. forward error mirrors closely the behavior of the quantity δ, therefore the output of our method is consistent with theoretical expectations. Example 5.3. Given a positive even integer n, define P x n/2 1 n/2 x 2+.5 n 1 i sin 2+.5 n 1, where i 2 1. This is the monic polynomial with zeros equally spaced on the curve x t + i sinπt with 1 t 1. Results for this class of test polynomials are shown in the following table: n a.f.e. δ Next, let us consider some examples with real polynomials, to which we apply the double shift version of FASTQR. Example 5.4. Given a positive integer n, define P x n c x, where c is a random number uniformly distributed in [ 1, 1]. As in Example 5.1, we use these polynomials to compare running times and chec that the growth of the running time is indeed quadratic. In this case we are also able to introduce a comparison with the structured implicit QR method proposed in [6], denoted as SSS-QR, a Fortran 95 implementation of which is available on-line for the case of real polynomials. Figure 5.3 shows on the left the running times for FASTQR, LAPACK and SSS-QR for polynomials of degrees ranging between 5 and 4 and on the right the corresponding logarithmic plot. FASTQR becomes faster than LAPACK for polynomials of degrees between 15 and 2. Figure 5.4 shows a log-log plot of the running times of FASTQR for polynomials of degrees ranging between 5 and 2, with a linear fit; the slope here is about 2.2. The number of double shift iterations is generally less than 1.5 per eigenvalue. The logarithms of the absolute forward errors computed for these same polynomials are plotted in Figure 5.5. It is interesting to notice that, while for degrees up to 1 there

26 y 2.2*x logtime FASTQR linear fit logdeg Fig Experimental estimate of time growth Example 5.4. is a growth of the forward errors which is however expected, for degrees higher than 1 the errors stay roughly the same. This is also the case for experiments made for degrees 3 and 5. This suggests that FASTQR reliably computes eigenvalues even for very high degrees. Example 5.5 Roots of 1. Given a positive integer n, define P x x n 1. We use these polynomials to compare the running times of FASTQR to the running times of LAPACK and SSS-QR; the regular and logarithmic plot for polynomials of degrees ranging from 5 to 4 are found in Figure 5.6. Here FASTQR becomes faster than LAPACK for n 5. The computation of the roots of unity with FASTQR is generally very accurate forward errors are about 1 15 for the polynomials used in this experiment. Example 5.6 Multiple roots. Given a positive integer n, define P x 2 c x x 1 n, where c is a random number in [ 1, 1]. In this case, the computation of roots with multiplicity 1 is very accurate, whereas the multiple root 1 suffers from ill conditioning. The following table shows forward and bacward errors for several values of n. For the coefficient bacward error we give an interval containing all entries of the error vector. n a.f.e. c.b.e. δ , , , , Example 5.7. Define P x x 2 + x x + 1. We obtain here a.f.e ; the entries in the coefficient bacward error

27 log1forward error deg Fig Absolute forward errors Example LAPACK SSS QR FASTQR 1 LAPACK SSS QR FASTQR 3 time time deg deg Fig Comparison of running times Example 5.5. are in the range, , which is roughly the same result given in [6] and the same as one would obtain using the Matlab function roots. Example 5.8. Define P x as the monic polynomial whose roots are, in Matlab notation, [ 2.1 :.2 : 1.7]. For this example we have a forward error of , which is consistent with the estimate given by δ The entries in the coefficient bacward error

28 28 are in the range, , which is again comparable to the result given in [6]. 6. Conclusion. The analysis and development of efficient numerical methods for eigenvalue computation of ran structured matrices constitutes one of the major challenges in the field of numerical linear algebra. In this paper we have presented an implicit version of the QR algorithm for companion matrices designed in [4]. The novel method is logically simple, computationally fast and numerically stable. The complexity is an order of magnitude less than the customary implementation of the QR algorithm for Hessenberg matrices in LAPACK. Experimental results are also reported to compare the performances of our method with other existing fast eigensolvers for companion matrices. REFERENCES [1] G. S. Ammar, W. B. Gragg, and C. He. An efficient QR algorithm for a Hessenberg submatrix of a unitary matrix. In New directions and applications in control theory, volume 321 of Lecture Notes in Control and Inform. Sci., pages Springer, Berlin, 25. [2] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. Templates for the solution of algebraic eigenvalue problems, volume 11 of Software, Environments, and Tools. Society for Industrial and Applied Mathematics SIAM, Philadelphia, PA, 2. A practical guide. [3] D. A. Bini, Y. Eidelman, L. Gemignani, and I. Gohberg. The unitary completion and QR iterations for a class of structured matrices. Math. Comp., 77261: , 28. Submitted for publication, 25. [4] D. A. Bini, Y. Eidelman, L. Gemignani, and I. Gohberg. Fast QR eigenvalue algorithms for Hessenberg matrices which are ran-one perturbations of unitary matrices. SIAM J. Matrix Anal. Appl., 292: , 27. [5] D. A. Bini, L. Gemignani, and V. Y. Pan. Fast and stable QR eigenvalue algorithms for generalized companion matrices and secular equations. Numer. Math., 13:373 48, 25. [6] S. Chandrasearan, M. Gu, J. Xia, and J. Zhu. A fast QR algorithm for companion matrices. In Recent advances in matrix and operator theory, volume 179 of Oper. Theory Adv. Appl., pages Birhäuser, Basel, 28. [7] Y. Eidelman, L. Gemignani, and I. Gohberg. On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms. Linear Algebra Appl., 42:86 11, 27. [8] Y. Eidelman, L. Gemignani, and I. Gohberg. Efficient eigenvalue computation for quasiseparable Hermitian matrices under low ran perturbations. Numer. Algorithms, 471: , 28. [9] Y. Eidelman, I. Gohberg, and V. Olshevsy. The QR iteration method for Hermitian quasiseparable matrices of an arbitrary order. Linear Algebra Appl., 44:35 324, 25. [1] J. G. F. Francis. The QR transformation: a unitary analogue to the LR transformation. I. Comput. J., 4: , 1961/1962. [11] J. G. F. Francis. The QR transformation. II. Comput. J., 4: , 1961/1962. [12] G. H. Golub and C. F. Van Loan. Matrix computations. Johns Hopins Studies in the Mathematical Sciences. Johns Hopins University Press, Baltimore, MD, third edition, [13] W. H. Press, S. A. Teuolsy, W. T. Vetterling, and B. P. Flannery. Numerical recipes. Cambridge University Press, Cambridge, third edition, 27. The art of scientific computing. [14] F. Tisseur. Bacward stability of the QR algorithm. Technical Report 239, UMR 5585 Lyon Saint-Etienne, [15] M. Van Barel, P. Van Dooren and R. Vandebril. Computing the eigenvalues of a companion matrix. Slides of the tal on the conference Structured Linear Algebra Problems: Analysis, Algorithms, and Applications, Cortona, Italy, September 15-19, Barel.pdf. [16] D. S. Watins. 
Fundamentals of matrix computations. Pure and Applied Mathematics New Yor. Wiley-Interscience [John Wiley & Sons], New Yor, 22. Second editon. [17] J. H. Wilinson. The Algebraic Eigenvalue Problem. The Oxford University Press, 1965.

A Fast Implicit QR Eigenvalue Algorithm for Companion Matrices

A Fast Implicit QR Eigenvalue Algorithm for Companion Matrices A Fast Implicit QR Eigenvalue Algorithm for Companion Matrices D. A. Bini a,1 P. Boito a,1 Y. Eidelman b L. Gemignani a,1 I. Gohberg b a Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo

More information

Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv: v1 [math.

Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv: v1 [math. Compression of unitary rank structured matrices to CMV-like shape with an application to polynomial rootfinding arxiv:1307.186v1 [math.na] 8 Jul 013 Roberto Bevilacqua, Gianna M. Del Corso and Luca Gemignani

More information

Structured Matrix Methods for Polynomial Root-Finding

Structured Matrix Methods for Polynomial Root-Finding Structured Matrix Methods for Polynomial Root-Finding Luca Gemignani Department of Mathematics University of Pisa Largo B. Pontecorvo 5, 5617 Pisa, Italy gemignan@dm.unipi.it ABSTRACT In this paper we

More information

arxiv: v1 [math.na] 24 Jan 2019

arxiv: v1 [math.na] 24 Jan 2019 ON COMPUTING EFFICIENT DATA-SPARSE REPRESENTATIONS OF UNITARY PLUS LOW-RANK MATRICES R BEVILACQUA, GM DEL CORSO, AND L GEMIGNANI arxiv:190108411v1 [mathna 24 Jan 2019 Abstract Efficient eigenvalue solvers

More information

Numerical Experiments for Finding Roots of the Polynomials in Chebyshev Basis
