Computing Periodic Deflating Subspaces Associated with a Specified Set of Eigenvalues
BIT Numerical Mathematics, Vol. 43, © Kluwer Academic Publishers

Computing Periodic Deflating Subspaces Associated with a Specified Set of Eigenvalues

R. GRANAT¹, B. KÅGSTRÖM¹ and D. KRESSNER²

¹ Department of Computing Science and HPC2N, Umeå University, SE Umeå, Sweden. E-mails: {granat, bokg, kressner}@cs.umu.se
² Department of Mathematics, University of Zagreb, Croatia

Abstract. We present a direct method for reordering eigenvalues in the generalized periodic real Schur form of a regular K-cyclic matrix pair sequence $(A_k, E_k)$. Following and generalizing existing approaches, reordering consists of consecutively computing the solution to an associated Sylvester-like equation and constructing K pairs of orthogonal matrices. These pairs define an orthogonal K-cyclic equivalence transformation that swaps adjacent diagonal blocks in the Schur form. An error analysis of this swapping procedure is presented, which extends existing results for reordering eigenvalues in the generalized real Schur form of a regular pair $(A, E)$. Our direct reordering method is used to compute periodic deflating subspace pairs corresponding to a specified set of eigenvalues. This computational task arises in various applications related to discrete-time periodic descriptor systems. Computational experiments confirm the stability and reliability of the presented eigenvalue reordering method.

AMS subject classification (2000): 65F15, 15A18, 93B60.

Key words: generalized product of a K-cyclic matrix pair sequence, generalized periodic real Schur form, eigenvalue reordering, periodic generalized coupled Sylvester equation, K-cyclic equivalence transformation, generalized periodic eigenvalue problem.

1 Introduction

Discrete-time periodic descriptor systems of the form

(1.1) $E_k x_{k+1} = A_k x_k + B_k u_k$, $y_k = C_k x_k + D_k u_k$,

with $A_k = A_{k+K}$, $E_k = E_{k+K} \in \mathbb{R}^{n\times n}$, $B_k = B_{k+K} \in \mathbb{R}^{n\times m}$, $C_k = C_{k+K} \in \mathbb{R}^{r\times n}$ and $D_k = D_{k+K} \in \mathbb{R}^{r\times m}$ for some period K, arise naturally from processes that exhibit seasonal or periodic behavior [6].
Submitted December …; revised June …. Communicated by Axel Ruhe. This research was conducted using the resources of the High Performance Computing Center North (HPC2N). Financial support has been provided by the Swedish Research Council under grant VR …, and by the Swedish Foundation for Strategic Research under the frame program grant A3 02:8. The third author was also supported by the DFG Emmy Noether Fellowship KR 2950/1-1.

Design and analysis
problems of such systems (see, e.g., […]) are conceptually studied in terms of state transition matrices [39]

$\Phi_{E,A}(j, i) = E_{j-1}^{-1}A_{j-1} \cdot E_{j-2}^{-1}A_{j-2} \cdots E_i^{-1}A_i \in \mathbb{R}^{n\times n}$,

with the convention $\Phi_{E,A}(i, i) = I_n$. A state transition matrix over a complete period, $\Phi_{E,A}(j+K, j)$, is the monodromy matrix of (1.1) at time j. Its eigenvalues are called the characteristic multipliers and are independent of the time j. Specifically, the monodromy matrix at time j = 0 corresponds to the matrix product

(1.2) $E_{K-1}^{-1}A_{K-1} \cdot E_{K-2}^{-1}A_{K-2} \cdots E_1^{-1}A_1 \cdot E_0^{-1}A_0$.

Matrix products of the general form (1.2) are studied, e.g., in […]. We study the K-cyclic matrix pair sequence $(A_k, E_k)$ with $A_k, E_k \in \mathbb{R}^{n\times n}$ from (1.1) via the generalized periodic Schur decomposition [8, 18]: there exists a K-cyclic orthogonal matrix pair sequence $(Q_k, Z_k)$ with $Q_k, Z_k \in \mathbb{R}^{n\times n}$ such that, given $\hat k = (k+1) \bmod K$, we have

(1.3) $S_k = Q_k^T A_k Z_k$, $T_k = Q_k^T E_k Z_{\hat k}$,

where all matrices $S_k$, except for some fixed index j with $0 \le j \le K-1$, and all matrices $T_k$ are upper triangular. The matrix $S_j$ is upper quasi-triangular; typically j is chosen to be 0 or K−1. The sequence $(S_k, T_k)$ is the generalized periodic real Schur form (GPRSF) of $(A_k, E_k)$, $k = 0, \ldots, K-1$. The decomposition (1.3) is a K-cyclic equivalence transformation of the matrix pair sequence $(A_k, E_k)$. Computing the GPRSF is the standard method for solving the generalized periodic (product) eigenvalue problem (GPEVP)

(1.4) $E_{K-1}^{-1}A_{K-1} \cdot E_{K-2}^{-1}A_{K-2} \cdots E_1^{-1}A_1 \cdot E_0^{-1}A_0\, x = \lambda x$,

where all matrices in the pairs $(A_k, E_k)$ are general and dense. For K = 1, (1.4) corresponds to a generalized eigenvalue problem $Ax = \lambda Ex$ with $(A, E)$ regular (see, e.g., […]). Using the GPRSF to solve a GPEVP for $K \ge 1$ means that we do not need to compute any matrix products in (1.4) explicitly, which avoids numerical instabilities and allows us to handle singular factors $E_k$.
The 1×1 and 2×2 blocks on the diagonal of a GPRSF define $t \le n$ K-cyclic diagonal block pairs $(S^{(k)}_{ii}, T^{(k)}_{ii})$, corresponding to real eigenvalues and complex conjugate pairs of eigenvalues, respectively. A real eigenvalue is simply given by

$\lambda_i = \dfrac{S^{(K-1)}_{ii}}{T^{(K-1)}_{ii}}\cdot\dfrac{S^{(K-2)}_{ii}}{T^{(K-2)}_{ii}}\cdots\dfrac{S^{(0)}_{ii}}{T^{(0)}_{ii}}$,

provided all $T^{(k)}_{ii}$ are nonzero. This eigenvalue is called infinite if $\prod_{k=0}^{K-1}T^{(k)}_{ii} = 0$ but $\prod_{k=0}^{K-1}S^{(k)}_{ii} \ne 0$. If there are blocks for which both $\prod_{k=0}^{K-1}S^{(k)}_{ii} = 0$ and $\prod_{k=0}^{K-1}T^{(k)}_{ii} = 0$, then the K-cyclic matrix pair sequence $(A_k, E_k)$ is called singular; otherwise
the sequence $(A_k, E_k)$ is called regular. In the degenerate singular case the eigenvalues become ill-defined, and other tools need to be used to study the periodic eigenvalue problem. For the rest of the paper it is therefore assumed that $(A_k, E_k)$ is regular. For two complex conjugate eigenvalues $\lambda_i, \bar\lambda_i$, all matrices $T^{(k)}_{ii}$ are nonsingular and

$\{\lambda_i, \bar\lambda_i\} = \lambda\left((T^{(K-1)}_{ii})^{-1}S^{(K-1)}_{ii}\cdot(T^{(K-2)}_{ii})^{-1}S^{(K-2)}_{ii}\cdots(T^{(0)}_{ii})^{-1}S^{(0)}_{ii}\right)$,

where $\lambda(M)$ denotes the set of eigenvalues of a matrix M. In finite precision arithmetic, great care has to be exercised to avoid underflow and overflow in the explicit eigenvalue computation, especially when it involves 2×2 blocks [35]. For every l with $1 \le l < n$ such that no 2×2 block resides in $S_j(l:l+1, l:l+1)$, the first l pairs of columns of $(Q_0, Z_0)$ span a deflating subspace pair corresponding to the first l eigenvalues of the matrix product (1.2). More generally, the first l pairs of columns of $(Q_k, Z_k)$ span a left and right periodic (or cyclic) deflating subspace pair sequence associated with the first l eigenvalues of the matrix product (1.2) [5]. The decomposition (1.3) is computed via the periodic QZ algorithm (see, e.g., […]), which consists of an initial reduction to generalized periodic Hessenberg form and a subsequent iterative process to generalized periodic Schur form. In [38] the generalized periodic Schur form is extended to periodic matrix pairs with time-varying and possibly rectangular dimensions. This includes a preprocessing step that truncates parts corresponding to spurious characteristic values, which then yields square system matrices of constant dimensions.

1.1 Ordered GPRSF and periodic deflating subspaces

In many applications it is desirable to have the eigenvalues along the diagonal of the GPRSF in a certain order. If the generalized periodic Schur form has its eigenvalues ordered in a certain way, as in (1.5), it is called an ordered GPRSF.
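As a concrete illustration of the eigenvalue formulas above (this is our own sketch, not code from the paper), the following NumPy function reads the eigenvalues of the formal product off the diagonals of a GPRSF whose diagonal blocks are all 1×1. The function name is hypothetical; keeping each eigenvalue as a quotient of products mimics the LAPACK convention of avoiding the explicit, overflow-prone product.

```python
import numpy as np

def periodic_eigenvalues(S, T):
    """Eigenvalues of the formal product inv(T[K-1]) S[K-1] ... inv(T[0]) S[0],
    read off the diagonals of a GPRSF with only 1x1 diagonal blocks.

    S, T are length-K lists of upper triangular matrices.  Each eigenvalue is
    the quotient of the products of the S- and T-diagonal entries; a zero
    T-product with a nonzero S-product signals an infinite eigenvalue.
    """
    n = S[0].shape[0]
    eigs = []
    for i in range(n):
        alpha = np.prod([Sk[i, i] for Sk in S])  # product of S diagonals
        beta = np.prod([Tk[i, i] for Tk in T])   # product of T diagonals
        eigs.append(np.inf if beta == 0 else alpha / beta)
    return eigs
```

Since a product of triangular matrices is triangular with multiplied diagonals, these quotients agree with the eigenvalues of the explicitly formed product when all $T_k$ are nonsingular.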
For example, if we have

(1.5) $S_k = \begin{pmatrix} S^{(k)}_{11} & S^{(k)}_{12} \\ 0 & S^{(k)}_{22} \end{pmatrix}$, $T_k = \begin{pmatrix} T^{(k)}_{11} & T^{(k)}_{12} \\ 0 & T^{(k)}_{22} \end{pmatrix}$,

with $S^{(k)}_{11}, T^{(k)}_{11} \in \mathbb{R}^{l\times l}$ such that the upper left part sequence $(S^{(k)}_{11}, T^{(k)}_{11})$ contains all eigenvalues in the open unit disc, then $(S_k, T_k)$ is an ordered GPRSF and the first l columns of the sequence $Z_k$ span stable right periodic deflating subspaces. For initial states $x_0 \in \mathrm{span}(Z_0e_1, \ldots, Z_0e_l)$, with $e_i$ being the ith unit vector, the states of the homogeneous system $E_kx_{k+1} = A_kx_k$ satisfy $x_k \in \mathrm{span}(Z_ke_1, \ldots, Z_ke_l)$, and 0 is an asymptotically stable equilibrium. Other important applications relating to periodic discrete-time systems include the stable–unstable spectral separation for computing the numerical solution of the discrete-time periodic Riccati equation [38] in LQ-optimal control, which we illustrate in Section 2, and pole placement, where the goal is to move some or all
of the poles to desired locations in the complex plane […]. In [4] ordered Schur forms are used for solving generalized Hamiltonian eigenvalue problems. In this paper we extend the work in […] to perform eigenvalue reordering of a regular periodic matrix pair sequence in GPRSF. The rest of the paper is organized as follows. In Section 2 we illustrate how an ordered GPRSF can be used to solve the discrete-time periodic Riccati equation that arises in an LQ-optimal control problem. Section 3 presents our direct method for reordering eigenvalues of a periodic (cyclic) matrix pair sequence $(A_k, E_k)$ in GPRSF. To compute an ordered GPRSF, a method for reordering adjacent K-cyclic diagonal block pairs is combined with a bubble-sort like procedure in a LAPACK-style [2, 23] fashion. The proposed method for swapping adjacent diagonal block pair sequences relies on orthogonal K-cyclic equivalence transformations, and the core step consists of computing the solution to an associated periodic generalized coupled Sylvester equation, which is discussed in Section 3.4. An error analysis of the direct reordering method is presented in Section 4, which extends and generalizes results from [2, 4]. In Section 5 we discuss some implementation issues regarding the solution of small-sized periodic generalized coupled Sylvester equations, and how we control and guarantee the stability of the reordering. Some examples and computational results are presented and discussed in Section 6. Finally, in Section 7 we discuss some extensions of the reordering method.

2 LQ-optimal control and periodic deflating subspaces

Given the system (1.1), the aim of linear quadratic (LQ) optimal control is to find a feedback sequence $u_k$ which stabilizes the system and minimizes the functional

$\dfrac{1}{2}\sum_{k=0}^{\infty}\left(x_k^TH_kx_k + u_k^TN_ku_k\right)$,

with $H_k \in \mathbb{R}^{n\times n}$ symmetric positive semidefinite and $N_k \in \mathbb{R}^{m\times m}$ symmetric positive definite. Moreover, we suppose that the weighting matrices are K-periodic, i.e.,
$H_{k+K} = H_k$ and $N_{k+K} = N_k$. Under mild assumptions [7], the optimal feedback is linear and unique. For each k it can be expressed as $u_k = -(N_k + B_k^TX_{k+1}B_k)^{-1}B_k^TX_{k+1}A_kx_k$, where $X_k = X_{k+K}$ is the unique symmetric positive semidefinite solution of the discrete-time periodic Riccati equation (DPRE) [8]

(2.1) $0 = C_k^TH_kC_k - E_k^TX_kE_k + A_k^TX_{k+1}A_k - A_k^TX_{k+1}B_k\left(N_k + B_k^TX_{k+1}B_k\right)^{-1}B_k^TX_{k+1}A_k$,

provided that all $E_k$ are invertible. The $2n\times2n$ periodic matrix pair

$(L_k, M_k) = \left(\begin{pmatrix}A_k & 0\\ -C_k^TH_kC_k & E_k^T\end{pmatrix}, \begin{pmatrix}E_k & B_kN_k^{-1}B_k^T\\ 0 & A_k^T\end{pmatrix}\right)$
is closely associated with (2.1). Similarly as for the case $E_k = I_n$ [8], it can be shown that this pair has exactly n eigenvalues inside the unit disk under the assumption that (1.1) is d-stabilizable and d-detectable. By reordering the GPRSF of $(L_k, M_k)$ we can compute a periodic deflating subspace defined by the orthogonal matrices $U_k, V_k \in \mathbb{R}^{2n\times2n}$ with $U_{k+K} = U_k$, $V_{k+K} = V_k$, such that

$U_k^TL_kV_k = \begin{pmatrix}S^{(k)}_{11}&S^{(k)}_{12}\\0&S^{(k)}_{22}\end{pmatrix}$, $U_k^TM_kV_{k+1} = \begin{pmatrix}T^{(k)}_{11}&T^{(k)}_{12}\\0&T^{(k)}_{22}\end{pmatrix}$,

where the $n\times n$ periodic matrix pair $(S^{(k)}_{11}, T^{(k)}_{11})$ contains all eigenvalues inside the unit disk. If we partition

$U_k = \begin{pmatrix}U^{(k)}_{11}&U^{(k)}_{12}\\U^{(k)}_{21}&U^{(k)}_{22}\end{pmatrix}$

with $U^{(k)}_{ij} \in \mathbb{R}^{n\times n}$, then $U^{(k)}_{21}(U^{(k)}_{11})^{-1} = X_kE_k$, from which $X_k$ can be computed. The proof of this relation is similar as for the case K = 1; see, e.g., [27]. We note that if $N_k$ is not well-conditioned, then it is better to work with $(2n+m)\times(2n+m)$ matrix pairs, as described in […].

3 Direct method for eigenvalue reordering in GPRSF

Given a regular K-cyclic matrix pair sequence $(A_k, E_k)$ in GPRSF, our method to compute an ordered GPRSF (1.5) with respect to a set of specified eigenvalues reorders 1×1 and 2×2 diagonal blocks in the GPRSF such that the selected set of eigenvalues appears in the matrix pair sequence $(S^{(k)}_{11}, T^{(k)}_{11})$. Following LAPACK, we assume that the set of specified eigenvalues is provided as an index vector for the blocks of eigenvalue pairs that should appear in $(S^{(k)}_{11}, T^{(k)}_{11})$. The procedure is now to swap adjacent diagonal blocks in the GPRSF in a bubble-sort fashion such that the specified eigenvalue ordering is satisfied. In the following we focus on the K-cyclic swapping of adjacent diagonal blocks using orthogonal transformations.

3.1 Swapping of K-cyclic diagonal block matrix pairs

Consider a regular K-cyclic matrix pair sequence $(A_k, E_k)$ in GPRSF,

(3.1) $(A_k, E_k) = \left(\begin{pmatrix}A^{(k)}_{11}&A^{(k)}_{12}\\0&A^{(k)}_{22}\end{pmatrix}, \begin{pmatrix}E^{(k)}_{11}&E^{(k)}_{12}\\0&E^{(k)}_{22}\end{pmatrix}\right)$

with $A^{(k)}_{11}, E^{(k)}_{11} \in \mathbb{R}^{p_1\times p_1}$ and $A^{(k)}_{22}, E^{(k)}_{22} \in \mathbb{R}^{p_2\times p_2}$ for $k = 0, \ldots, K-1$.
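The bubble-sort sweep that moves the selected blocks to the top can be planned independently of the numerical swapping itself. A small sketch with a hypothetical helper `reorder_plan` (our name, not the paper's) that only returns which adjacent block positions to swap; in the actual method each planned swap is realized by the orthogonal K-cyclic equivalence transformation developed below and may be rejected by the stability tests of Section 5.2:

```python
def reorder_plan(select):
    """Bubble-sort schedule of adjacent swaps that moves the selected
    diagonal blocks (select[i] == True) to the top while preserving the
    relative order of the selected (and of the unselected) blocks.
    Returns a list of positions j, meaning: swap blocks j and j+1."""
    order = list(select)
    swaps = []
    moved = True
    while moved:
        moved = False
        for j in range(len(order) - 1):
            # bubble a selected block upward past an unselected neighbor
            if not order[j] and order[j + 1]:
                order[j], order[j + 1] = order[j + 1], order[j]
                swaps.append(j)
                moved = True
    return swaps
```

Applying the returned swaps to the block sequence yields the selected blocks first, in their original relative order, which is exactly the invariant the LAPACK-style reordering drivers maintain.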
Swapping consists of computing orthogonal matrices $U_k$, $V_k$ such that

(3.2) $\begin{pmatrix}\hat A^{(k)}_{11}&\hat A^{(k)}_{12}\\0&\hat A^{(k)}_{22}\end{pmatrix} = U_k^T\begin{pmatrix}A^{(k)}_{11}&A^{(k)}_{12}\\0&A^{(k)}_{22}\end{pmatrix}V_k$,

(3.3) $\begin{pmatrix}\hat E^{(k)}_{11}&\hat E^{(k)}_{12}\\0&\hat E^{(k)}_{22}\end{pmatrix} = U_k^T\begin{pmatrix}E^{(k)}_{11}&E^{(k)}_{12}\\0&E^{(k)}_{22}\end{pmatrix}V_{\hat k}$

for $k = 0, \ldots, K-1$, and

(3.4) $\lambda(\hat\Pi_{11}) = \lambda(\Pi_{22})$, $\lambda(\hat\Pi_{22}) = \lambda(\Pi_{11})$,

where

(3.5) $\Pi_{ii} = (E^{(K-1)}_{ii})^{-1}A^{(K-1)}_{ii}\cdots(E^{(0)}_{ii})^{-1}A^{(0)}_{ii}$,

(3.6) $\hat\Pi_{ii} = (\hat E^{(K-1)}_{ii})^{-1}\hat A^{(K-1)}_{ii}\cdots(\hat E^{(0)}_{ii})^{-1}\hat A^{(0)}_{ii}$.

If some of the $E^{(k)}_{ii}$ are singular, then the products (3.5) and (3.6) should only be understood in a formal sense, with their finite and infinite eigenvalues defined via the GPRSF. The relation (3.4) means that all eigenvalues of $\Pi_{22}$ are transferred to $\hat\Pi_{11}$ and all eigenvalues of $\Pi_{11}$ to $\hat\Pi_{22}$. For our purpose, $A^{(k)}_{ii}, E^{(k)}_{ii} \in \mathbb{R}^{p_i\times p_i}$ are the diagonal blocks of a GPRSF, and it can thus be assumed that $p_i \in \{1, 2\}$. The K-cyclic swapping is performed in two main steps. First, the sequence $(A_k, E_k)$ in (3.1) is block diagonalized by a non-orthogonal K-cyclic equivalence transformation. Second, orthogonal transformation matrices are computed from this matrix pair sequence that perform the required K-cyclic swapping.

3.2 Swapping by block diagonalization and permutation

Let us consider a K-cyclic matrix pair sequence $(L_k, R_k)$ with $L_k, R_k \in \mathbb{R}^{p_1\times p_2}$ which solves the periodic generalized coupled Sylvester equation (PGCSY)

(3.7) $A^{(k)}_{11}R_k - L_kA^{(k)}_{22} = -A^{(k)}_{12}$, $E^{(k)}_{11}R_{\hat k} - L_kE^{(k)}_{22} = -E^{(k)}_{12}$.

Then $(L_k, R_k)$ defines an equivalence transformation that block diagonalizes the K-cyclic matrix pair sequence $(A_k, E_k)$ in (3.1):

(3.8) $\begin{pmatrix}A^{(k)}_{11}&A^{(k)}_{12}\\0&A^{(k)}_{22}\end{pmatrix} = \begin{pmatrix}I_{p_1}&L_k\\0&I_{p_2}\end{pmatrix}\begin{pmatrix}A^{(k)}_{11}&0\\0&A^{(k)}_{22}\end{pmatrix}\begin{pmatrix}I_{p_1}&-R_k\\0&I_{p_2}\end{pmatrix}$,

$\begin{pmatrix}E^{(k)}_{11}&E^{(k)}_{12}\\0&E^{(k)}_{22}\end{pmatrix} = \begin{pmatrix}I_{p_1}&L_k\\0&I_{p_2}\end{pmatrix}\begin{pmatrix}E^{(k)}_{11}&0\\0&E^{(k)}_{22}\end{pmatrix}\begin{pmatrix}I_{p_1}&-R_{\hat k}\\0&I_{p_2}\end{pmatrix}$

for $k = 0, \ldots, K-1$.
The diagonal blocks of the block diagonal matrices in (3.8) are swapped by a simple equivalence permutation:

(3.9) $\begin{pmatrix}0&I_{p_2}\\I_{p_1}&0\end{pmatrix}\begin{pmatrix}A^{(k)}_{11}&0\\0&A^{(k)}_{22}\end{pmatrix}\begin{pmatrix}0&I_{p_1}\\I_{p_2}&0\end{pmatrix} = \begin{pmatrix}A^{(k)}_{22}&0\\0&A^{(k)}_{11}\end{pmatrix}$, $\begin{pmatrix}0&I_{p_2}\\I_{p_1}&0\end{pmatrix}\begin{pmatrix}E^{(k)}_{11}&0\\0&E^{(k)}_{22}\end{pmatrix}\begin{pmatrix}0&I_{p_1}\\I_{p_2}&0\end{pmatrix} = \begin{pmatrix}E^{(k)}_{22}&0\\0&E^{(k)}_{11}\end{pmatrix}$.

Altogether, by defining the matrices

(3.10) $X_k = \begin{pmatrix}L_k&I_{p_1}\\I_{p_2}&0\end{pmatrix}$, $Y_k = \begin{pmatrix}R_k&I_{p_1}\\I_{p_2}&0\end{pmatrix}$, $k = 0, \ldots, K-1$,

we obtain a non-orthogonal K-cyclic equivalence transformation such that

(3.11) $\begin{pmatrix}A^{(k)}_{11}&A^{(k)}_{12}\\0&A^{(k)}_{22}\end{pmatrix} = X_k\begin{pmatrix}A^{(k)}_{22}&0\\0&A^{(k)}_{11}\end{pmatrix}Y_k^{-1}$, $\begin{pmatrix}E^{(k)}_{11}&E^{(k)}_{12}\\0&E^{(k)}_{22}\end{pmatrix} = X_k\begin{pmatrix}E^{(k)}_{22}&0\\0&E^{(k)}_{11}\end{pmatrix}Y_{\hat k}^{-1}$.

It remains to show the existence of a solution to (3.7).

Lemma 3.1. Let the K-cyclic matrix pair sequences $(A^{(k)}_{11}, E^{(k)}_{11})$ and $(A^{(k)}_{22}, E^{(k)}_{22})$ be regular. Then the PGCSY (3.7) has a unique solution if and only if

(3.12) $\lambda(\Pi_{11}) \cap \lambda(\Pi_{22}) = \emptyset$,

where $\Pi_{ii}$ is the formal matrix product defined in (3.5).

Proof. Since (3.7) is a system of $2p_1p_2K$ linear equations in $2p_1p_2K$ variables, it suffices to show that the corresponding linear operator $\mathcal{L}: (\mathbb{R}^{p_1\times p_2})^{2K} \to (\mathbb{R}^{p_1\times p_2})^{2K}$, defined by

(3.13) $\mathcal{L}: (L_k, R_k)_{k=0}^{K-1} \mapsto \left(A^{(k)}_{11}R_k - L_kA^{(k)}_{22},\ E^{(k)}_{11}R_{\hat k} - L_kE^{(k)}_{22}\right)_{k=0}^{K-1}$,

has a trivial kernel if and only if (3.12) is satisfied.

1. Let $\lambda \in \lambda(\Pi_{11}) \cap \lambda(\Pi_{22})$ and assume $\lambda \ne \infty$ (the case $\lambda = \infty$ can be treated analogously by switching the roles of $E_k$ and $A_k$ and reversing the index k). By the complex periodic Schur decomposition, there are sequences of nonzero right and left eigenvectors $x_0, \ldots, x_{K-1} \in \mathbb{C}^{p_1}$, $y_0, \ldots, y_{K-1} \in \mathbb{C}^{p_2}$ satisfying

(3.14) $\lambda_kE^{(k)}_{11}x_{\hat k} = A^{(k)}_{11}x_k$, $\mu_ky^H_{\underline k}E^{(\underline k)}_{22} = y_k^HA^{(k)}_{22}$

for $k = 0, \ldots, K-1$, where

(3.15) $\lambda = \lambda_0\cdots\lambda_{K-1} = \mu_0\cdots\mu_{K-1}$
and $E^{(k)}_{11}x_{\hat k} \ne 0$, $y_{\underline k}^HE^{(\underline k)}_{22} \ne 0$. Here $\underline k$ denotes $(k-1) \bmod K$. The relation (3.15) implies the existence of a sequence $\gamma_0, \ldots, \gamma_{K-1} \in \mathbb{C}$ such that

(3.16) $\gamma_k\lambda_k = \gamma_{\hat k}\mu_k$, $k = 0, \ldots, K-1$,

with at least one $\gamma_k$ being nonzero. Defining

$R_k = \gamma_kx_ky_{\underline k}^HE^{(\underline k)}_{22}$, $L_k = \gamma_{\hat k}E^{(k)}_{11}x_{\hat k}y_k^H$,

this guarantees that at least one of the matrices $R_k$ and $L_k$ is nonzero. Moreover, (3.14) and (3.16) yield

$A^{(k)}_{11}R_k - L_kA^{(k)}_{22} = \gamma_kA^{(k)}_{11}x_ky_{\underline k}^HE^{(\underline k)}_{22} - \gamma_{\hat k}E^{(k)}_{11}x_{\hat k}y_k^HA^{(k)}_{22} = (\gamma_k\lambda_k - \gamma_{\hat k}\mu_k)E^{(k)}_{11}x_{\hat k}y_{\underline k}^HE^{(\underline k)}_{22} = 0$,

$E^{(k)}_{11}R_{\hat k} - L_kE^{(k)}_{22} = \gamma_{\hat k}E^{(k)}_{11}x_{\hat k}y_k^HE^{(k)}_{22} - \gamma_{\hat k}E^{(k)}_{11}x_{\hat k}y_k^HE^{(k)}_{22} = 0$.

Hence the kernel of $\mathcal{L}$ is nonzero if (3.12) is not satisfied.

2. For the other direction of the proof, assume that (3.12) is satisfied. We first treat the case when all coefficient matrices are of order 1, i.e., we consider (3.7) with scalars $\alpha^{(k)}_j$ and $\beta^{(k)}_j$:

(3.17) $\alpha^{(k)}_1r_k - l_k\alpha^{(k)}_2 = 0$, $\beta^{(k)}_1r_{\hat k} - l_k\beta^{(k)}_2 = 0$.

Because of (3.12), one of the products $\beta^{(0)}_1\cdots\beta^{(K-1)}_1$ or $\beta^{(0)}_2\cdots\beta^{(K-1)}_2$ must be nonzero. Without loss of generality, we may assume that $\beta^{(0)}_2\cdots\beta^{(K-1)}_2 \ne 0$. Then (3.17) implies

(3.18) $\alpha^{(k)}_1r_k = \alpha^{(k)}_2\dfrac{\beta^{(k)}_1}{\beta^{(k)}_2}\,r_{\hat k}$, $k = 0, \ldots, K-1$.

Recursively substituting $r_{\hat k}$ yields

$\left(\alpha^{(0)}_1\cdots\alpha^{(K-1)}_1\,\beta^{(0)}_2\cdots\beta^{(K-1)}_2\right)r_0 = \left(\alpha^{(0)}_2\cdots\alpha^{(K-1)}_2\,\beta^{(0)}_1\cdots\beta^{(K-1)}_1\right)r_0$.

The regularity assumption implies that one of $\alpha^{(0)}_1\cdots\alpha^{(K-1)}_1$ or $\beta^{(0)}_1\cdots\beta^{(K-1)}_1$ is nonzero. Together with (3.12), this implies $r_0 = 0$, and in combination with (3.18) we get $r_k = 0$ for all $k = 0, \ldots, K-1$. In addition, from (3.17) we have

$l_k = \dfrac{\beta^{(k)}_1}{\beta^{(k)}_2}\,r_{\hat k}$, $k = 0, \ldots, K-1$,

which in turn results in $l_k = 0$.
For coefficient matrices of larger order we proceed by induction. By the complex Schur decomposition we may assume that the $A^{(k)}_{jj}$ and $E^{(k)}_{jj}$ are upper triangular. Conformably partition

$A^{(k)}_{11} = \begin{pmatrix}\bar A^{(k)}_{11}&\bar A^{(k)}_{12}\\0&\bar A^{(k)}_{22}\end{pmatrix}$, $A^{(k)}_{22} = \begin{pmatrix}\bar A^{(k)}_{33}&\bar A^{(k)}_{34}\\0&\bar A^{(k)}_{44}\end{pmatrix}$, $L_k = \begin{pmatrix}L^{(k)}_{11}&L^{(k)}_{12}\\L^{(k)}_{21}&L^{(k)}_{22}\end{pmatrix}$, $R_k = \begin{pmatrix}R^{(k)}_{11}&R^{(k)}_{12}\\R^{(k)}_{21}&R^{(k)}_{22}\end{pmatrix}$,

and in an analogous manner $E^{(k)}_{jj}$. Then (3.7), with the right hand sides replaced by zero, yields

$\bar A^{(k)}_{22}R^{(k)}_{21} - L^{(k)}_{21}\bar A^{(k)}_{33} = 0$, $\bar E^{(k)}_{22}R^{(\hat k)}_{21} - L^{(k)}_{21}\bar E^{(k)}_{33} = 0$.

By the induction assumption we have $L^{(k)}_{21} = R^{(k)}_{21} = 0$ for all k. Subsequently, analogous periodic PGCSYs of smaller order can be found for the remaining block pairs $(L^{(k)}_{11}, R^{(k)}_{11})$, $(L^{(k)}_{12}, R^{(k)}_{12})$ and $(L^{(k)}_{22}, R^{(k)}_{22})$ (see also [3]), eventually showing that $L_k = R_k = 0$. This completes the proof. □

Related periodic Sylvester equations were also studied, e.g., in […], and an overview was given in [39]. For a recursive solution method based on the last part of the proof of Lemma 3.1, see […].

3.3 Swapping by orthogonal transformation matrices

From the definition (3.10) it can be observed that the first block column of $X_k$ and the last block row of $Y_k^{-1}$ have full column and row rank, respectively. Hence, if we choose orthogonal matrices $Q_k$ and $Z_k$ from QR and RQ factorizations such that

(3.19) $\begin{pmatrix}L_k\\I_{p_2}\end{pmatrix} = Q_k\begin{pmatrix}T^{(k)}_L\\0\end{pmatrix}$, $\begin{pmatrix}I_{p_1}&-R_k\end{pmatrix} = \begin{pmatrix}0&T^{(k)}_R\end{pmatrix}Z_k^T$,

then $T^{(k)}_L \in \mathbb{R}^{p_2\times p_2}$ and $T^{(k)}_R \in \mathbb{R}^{p_1\times p_1}$ are not only upper triangular but also nonsingular for $k = 0, \ldots, K-1$. Partitioning $Q_k$ and $Z_k$ in conformity with $X_k$ and $Y_k$ as

$Q_k = \begin{pmatrix}Q^{(k)}_{11}&Q^{(k)}_{12}\\Q^{(k)}_{21}&Q^{(k)}_{22}\end{pmatrix}$, $Z_k = \begin{pmatrix}Z^{(k)}_{11}&Z^{(k)}_{12}\\Z^{(k)}_{21}&Z^{(k)}_{22}\end{pmatrix}$,

we obtain

(3.20) $Q_k^TX_k = \begin{pmatrix}T^{(k)}_L&Q^{(k)T}_{11}\\0&Q^{(k)T}_{12}\end{pmatrix}$, $Y_k^{-1}Z_k = \begin{pmatrix}Z^{(k)}_{21}&Z^{(k)}_{22}\\0&T^{(k)}_R\end{pmatrix}$.
By applying $(Q_k, Z_k)$ as an orthogonal K-cyclic equivalence transformation to $(A_k, E_k)$ we obtain

$(Q_k^TA_kZ_k,\ Q_k^TE_kZ_{\hat k}) = \left(\begin{pmatrix}\hat A^{(k)}_{11}&\hat A^{(k)}_{12}\\0&\hat A^{(k)}_{22}\end{pmatrix}, \begin{pmatrix}\hat E^{(k)}_{11}&\hat E^{(k)}_{12}\\0&\hat E^{(k)}_{22}\end{pmatrix}\right)$,

where

(3.21) $\hat A^{(k)}_{11} = T^{(k)}_LA^{(k)}_{22}Z^{(k)}_{21}$, $\hat A^{(k)}_{12} = T^{(k)}_LA^{(k)}_{22}Z^{(k)}_{22} + Q^{(k)T}_{11}A^{(k)}_{11}T^{(k)}_R$, $\hat A^{(k)}_{22} = Q^{(k)T}_{12}A^{(k)}_{11}T^{(k)}_R$,

and

(3.22) $\hat E^{(k)}_{11} = T^{(k)}_LE^{(k)}_{22}Z^{(\hat k)}_{21}$, $\hat E^{(k)}_{12} = T^{(k)}_LE^{(k)}_{22}Z^{(\hat k)}_{22} + Q^{(k)T}_{11}E^{(k)}_{11}T^{(\hat k)}_R$, $\hat E^{(k)}_{22} = Q^{(k)T}_{12}E^{(k)}_{11}T^{(\hat k)}_R$.

Note that (3.19) implies the nonsingularity of $Q^{(k)}_{12}$ and $Z^{(k)}_{21}$. Hence, from the equations above we see that $(A^{(k)}_{11}, E^{(k)}_{11})$ and $(A^{(k)}_{22}, E^{(k)}_{22})$ are K-cyclic equivalent to $(\hat A^{(k)}_{22}, \hat E^{(k)}_{22})$ and $(\hat A^{(k)}_{11}, \hat E^{(k)}_{11})$, respectively. In other words, the eigenvalues of the K-cyclic matrix pair sequence $(A_k, E_k)$ have been reordered as desired. We remark that $(\hat A^{(k)}_{11}, \hat E^{(k)}_{11})$ and $(\hat A^{(k)}_{22}, \hat E^{(k)}_{22})$ are generally not in GPRSF after the K-cyclic swapping and have to be further transformed by orthogonal transformations to restore the GPRSF of the matrix pair sequence $(A_k, E_k)$ (see Section 5.2).

3.4 Matrix representation of the PGCSY

The key step of the reordering method is to solve the associated PGCSY (3.7). Using Kronecker products, this problem can be rewritten as a linear system of equations

(3.23) $Z_{\mathrm{PGCSY}}\,x = c$,

where $Z_{\mathrm{PGCSY}}$ is a $2Kp_1p_2 \times 2Kp_1p_2$ matrix representation of the associated linear operator (3.13):
$Z_{\mathrm{PGCSY}} = \begin{pmatrix} -A^{(0)T}_{22}\otimes I_{p_1} & & & & & I_{p_2}\otimes A^{(0)}_{11}\\ -E^{(0)T}_{22}\otimes I_{p_1} & I_{p_2}\otimes E^{(0)}_{11} & & & & \\ & I_{p_2}\otimes A^{(1)}_{11} & -A^{(1)T}_{22}\otimes I_{p_1} & & & \\ & & -E^{(1)T}_{22}\otimes I_{p_1} & I_{p_2}\otimes E^{(1)}_{11} & & \\ & & & \ddots & \ddots & \\ & & & & -E^{(K-1)T}_{22}\otimes I_{p_1} & I_{p_2}\otimes E^{(K-1)}_{11} \end{pmatrix}$,

and $x$ and $c$ are $2Kp_1p_2$ vector representations of the assembled unknowns and right hand sides, respectively:

$x = \begin{pmatrix}\mathrm{vec}(L_0)\\\mathrm{vec}(R_1)\\\mathrm{vec}(L_1)\\\mathrm{vec}(R_2)\\\vdots\\\mathrm{vec}(L_{K-1})\\\mathrm{vec}(R_0)\end{pmatrix}$, $c = -\begin{pmatrix}\mathrm{vec}(A^{(0)}_{12})\\\mathrm{vec}(E^{(0)}_{12})\\\mathrm{vec}(A^{(1)}_{12})\\\mathrm{vec}(E^{(1)}_{12})\\\vdots\\\mathrm{vec}(A^{(K-1)}_{12})\\\mathrm{vec}(E^{(K-1)}_{12})\end{pmatrix}$.

Here the operator $\mathrm{vec}(M)$ stacks the columns of a matrix M on top of each other into one long vector. Note also that only the nonzero blocks of $Z_{\mathrm{PGCSY}}$ are displayed explicitly above. The sparsity structure of $Z_{\mathrm{PGCSY}}$ can be exploited when using Gaussian elimination with partial pivoting (GEPP) or a QR factorization to solve (3.23); see Section 5 for more details. By Lemma 3.1, the matrix $Z_{\mathrm{PGCSY}}$ is invertible if and only if the eigenvalue condition (3.12) is fulfilled. Throughout the rest of this paper we assume that this condition holds. If the condition is violated, then, since $(A_k, E_k)$ is in GPRSF, the eigenvalues of $\Pi_{11}$ and $\Pi_{22}$ are actually equal and there is in principle no need for swapping. The invertibility of $Z_{\mathrm{PGCSY}}$ is equivalent to

(3.24) $\mathrm{sep}_{\mathrm{PGCSY}} := \sigma_{\min}(Z_{\mathrm{PGCSY}}) > 0$.

As for deflating subspaces of regular matrix pairs (see, e.g., […]), the quantity $\mathrm{sep}_{\mathrm{PGCSY}}$ measures the sensitivity of the periodic deflating subspace pair of the GPRSF. If K, $p_1$ or $p_2$ become large, this quantity is very expensive to compute explicitly. By using the well-known estimation technique described in […], reliable $\mathrm{sep}_{\mathrm{PGCSY}}$ estimates can be computed at the cost of solving a few PGCSYs.

4 Error analysis of K-cyclic equivalence swapping of diagonal blocks

In this section we present an error analysis of the direct method described in Section 3, extending the results in [2] to the case of periodic matrix pairs.
We sometimes omit the index range $k = 0, \ldots, K-1$, assuming that it is implicitly understood. In finite precision arithmetic, the transformed matrix pair sequence will be affected by roundoff errors, resulting in a computed sequence $(\tilde A_k, \tilde E_k)$. We express the computed transformed matrix pairs as $(\tilde A_k, \tilde E_k) = (\hat A_k + \Delta A_k, \hat E_k + \Delta E_k)$, where $(\hat A_k, \hat E_k)$ for $k = 0, \ldots, K-1$ correspond to the exact matrix pairs in the reordered GPRSF of $(A_k, E_k)$. Our task is to derive explicit expressions and upper bounds for the error matrices $\Delta A_k$ and $\Delta E_k$. Most critical are, of course, the subdiagonal blocks of a 2×2 block partitioned sequence $(\Delta A_k, \Delta E_k)$. These must be negligible in order to guarantee numerical backward stability for the swapping of diagonal blocks. Let $(\tilde L_k, \tilde R_k) = (L_k + \Delta L_k, R_k + \Delta R_k)$ denote the computed solution to the associated PGCSY. The residual pair sequence of the computed solution is then given by $(Y^{(k)}_1, Y^{(k)}_2)$, where

(4.1) $Y^{(k)}_1 = A^{(k)}_{11}\tilde R_k - \tilde L_kA^{(k)}_{22} + A^{(k)}_{12}$, $Y^{(k)}_2 = E^{(k)}_{11}\tilde R_{\hat k} - \tilde L_kE^{(k)}_{22} + E^{(k)}_{12}$.

In addition, let $\tilde Q_k$, $\tilde T^{(k)}_L$ denote the computed factors of the kth QR factorization

(4.2) $G^{(k)}_L \equiv \begin{pmatrix}\tilde L_k\\I_{p_2}\end{pmatrix} = \tilde Q_k\begin{pmatrix}\tilde T^{(k)}_L\\0\end{pmatrix}$,

where $\tilde Q_k = Q_k + \Delta Q_k$ and $\tilde T^{(k)}_L = T^{(k)}_L + \Delta T^{(k)}_L$, with $Q_k$, $T^{(k)}_L$ the exact factors. Similarly, let $\tilde Z_k$, $\tilde T^{(k)}_R$ denote the computed factors of the kth RQ factorization

(4.3) $G^{(k)}_R \equiv \begin{pmatrix}I_{p_1}&-\tilde R_k\end{pmatrix} = \begin{pmatrix}0&\tilde T^{(k)}_R\end{pmatrix}\tilde Z_k^T$,

where $\tilde Z_k = Z_k + \Delta Z_k$ and $\tilde T^{(k)}_R = T^{(k)}_R + \Delta T^{(k)}_R$, with $Z_k$, $T^{(k)}_R$ the exact factors. If Householder transformations are used to compute the factorizations (4.2)–(4.3), $\tilde Q_k$ and $\tilde Z_k$ are orthogonal to machine precision [4]. The error matrices $\Delta Q_k$ and $\Delta Z_k$ are essentially bounded by the condition numbers of $G^{(k)}_L$ and $G^{(k)}_R$, respectively, times the relative errors in these matrices (see, e.g., [33, 20]). We transform $(A_k, E_k)$ using the computed $(\tilde Q_k, \tilde Z_k)$ in a K-cyclic equivalence transformation, giving

(4.4) $(\tilde Q_k^TA_k\tilde Z_k,\ \tilde Q_k^TE_k\tilde Z_{\hat k}) = (\hat A_k + \Delta A_k,\ \hat E_k + \Delta E_k)$,

where $(\hat A_k, \hat E_k)$ is the exact reordered GPRSF of the periodic $(A_k, E_k)$ sequence. Our aim is to derive explicit expressions and norm bounds for blocks of $(\Delta A_k, \Delta E_k)$.
First,

(4.5) $\tilde Q_k^TA_k\tilde Z_k = (Q_k + \Delta Q_k)^TA_k(Z_k + \Delta Z_k) = Q_k^TA_kZ_k + \Delta Q_k^TA_kZ_k + Q_k^TA_k\Delta Z_k + \Delta Q_k^TA_k\Delta Z_k$,
and by dropping the second order term and using $\hat A_k = Q_k^TA_kZ_k$ and $Q_k^T\Delta Q_k = -\Delta Q_k^TQ_k$ up to first order, we get

(4.6) $\tilde Q_k^TA_k\tilde Z_k = \hat A_k + \hat A_k(Z_k^T\Delta Z_k) + (\Delta Q_k^TQ_k)\hat A_k = \hat A_k + \Delta A_k$

with $\Delta A_k \approx \hat A_kU_k + W_k\hat A_k$, where $U_k = Z_k^T\Delta Z_k$ and $W_k = \Delta Q_k^TQ_k$. Similarly, we get

(4.7) $\tilde Q_k^TE_k\tilde Z_{\hat k} = \hat E_k + \Delta E_k$ with $\Delta E_k \approx \hat E_kU_{\hat k} + W_k\hat E_k$.

After partitioning $U_k$, $W_k$ and $(\Delta A_k, \Delta E_k)$ in conformity with $(\hat A_k, \hat E_k)$ and doing straightforward block matrix multiplications, we get

$\Delta A^{(k)}_{11} = \hat A^{(k)}_{11}U^{(k)}_{11} + W^{(k)}_{11}\hat A^{(k)}_{11} + \hat A^{(k)}_{12}U^{(k)}_{21}$,
$\Delta A^{(k)}_{12} = \hat A^{(k)}_{11}U^{(k)}_{12} + \hat A^{(k)}_{12}U^{(k)}_{22} + W^{(k)}_{11}\hat A^{(k)}_{12} + W^{(k)}_{12}\hat A^{(k)}_{22}$,
$\Delta A^{(k)}_{21} = \hat A^{(k)}_{22}U^{(k)}_{21} + W^{(k)}_{21}\hat A^{(k)}_{11}$,
$\Delta A^{(k)}_{22} = \hat A^{(k)}_{22}U^{(k)}_{22} + W^{(k)}_{22}\hat A^{(k)}_{22} + W^{(k)}_{21}\hat A^{(k)}_{12}$,

and

$\Delta E^{(k)}_{11} = \hat E^{(k)}_{11}U^{(\hat k)}_{11} + W^{(k)}_{11}\hat E^{(k)}_{11} + \hat E^{(k)}_{12}U^{(\hat k)}_{21}$,
$\Delta E^{(k)}_{12} = \hat E^{(k)}_{11}U^{(\hat k)}_{12} + \hat E^{(k)}_{12}U^{(\hat k)}_{22} + W^{(k)}_{11}\hat E^{(k)}_{12} + W^{(k)}_{12}\hat E^{(k)}_{22}$,
$\Delta E^{(k)}_{21} = \hat E^{(k)}_{22}U^{(\hat k)}_{21} + W^{(k)}_{21}\hat E^{(k)}_{11}$,
$\Delta E^{(k)}_{22} = \hat E^{(k)}_{22}U^{(\hat k)}_{22} + W^{(k)}_{22}\hat E^{(k)}_{22} + W^{(k)}_{21}\hat E^{(k)}_{12}$.

Observe that $\Delta A^{(k)}_{11}$, $\Delta A^{(k)}_{22}$, $\Delta E^{(k)}_{11}$, $\Delta E^{(k)}_{22}$ affect the reordered K-cyclic diagonal block pairs, and possibly the eigenvalues, while $\Delta A^{(k)}_{21}$ and $\Delta E^{(k)}_{21}$ are even more critical, since they affect the eigenvalues as well as the stability of the reordering; these are the perturbations of interest that we investigate further. The analysis in [2] applied to (4.2)–(4.3) results in explicit expressions for the blocks of $U_k$ and $W_k$, in terms of the errors $\Delta L_k$, $\Delta R_k$ and the factors $Q_k$, $Z_k$, $T^{(k)}_L$, $T^{(k)}_R$, valid
up to first order perturbations. By substituting the expressions for $U^{(k)}_{ij}$, $W^{(k)}_{ij}$ into $\Delta A^{(k)}_{ij}$, $\Delta E^{(k)}_{ij}$, we obtain

(4.8) $\Delta A^{(k)}_{11} = Q^{(k)T}_{11}Y^{(k)}_1Z^{(k)}_{21}$,
(4.9) $\Delta A^{(k)}_{21} = Q^{(k)T}_{12}Y^{(k)}_1Z^{(k)}_{21}$,
(4.10) $\Delta A^{(k)}_{22} = Q^{(k)T}_{12}Y^{(k)}_1Z^{(k)}_{22}$,

and

(4.11) $\Delta E^{(k)}_{11} = Q^{(k)T}_{11}Y^{(k)}_2Z^{(\hat k)}_{21}$,
(4.12) $\Delta E^{(k)}_{21} = Q^{(k)T}_{12}Y^{(k)}_2Z^{(\hat k)}_{21}$,
(4.13) $\Delta E^{(k)}_{22} = Q^{(k)T}_{12}Y^{(k)}_2Z^{(\hat k)}_{22}$,

with the residuals $(Y^{(k)}_1, Y^{(k)}_2)$ as in (4.1). From the QR and RQ factorizations (3.19) we have

(4.14) $Q^{(k)}_{21} = (T^{(k)}_L)^{-1}$, $Z^{(k)T}_{12} = (T^{(k)}_R)^{-1}$,

and

(4.15) $T^{(k)T}_LT^{(k)}_L = I_{p_2} + L_k^TL_k$, $T^{(k)}_RT^{(k)T}_R = I_{p_1} + R_kR_k^T$.

From (4.14)–(4.15) we obtain the following relations between the singular values of $T^{(k)}_L$, $T^{(k)}_R$, $L_k$ and $R_k$:

(4.16) $\sigma^2(T^{(k)}_L) = 1 + \sigma^2(L_k)$, $\sigma^2(T^{(k)}_R) = 1 + \sigma^2(R_k)$.

Further, from the CS decomposition (see, e.g., […]) of $Q_k$ and $Z_k$, respectively, we obtain the relations

$\|Q^{(k)}_{11}\|_2 = \|Q^{(k)}_{22}\|_2$, $\|Q^{(k)}_{12}\|_2 = \|Q^{(k)}_{21}\|_2$, $\|Z^{(k)}_{11}\|_2 = \|Z^{(k)}_{22}\|_2$, $\|Z^{(k)}_{12}\|_2 = \|Z^{(k)}_{21}\|_2$.

Combining these results we get

$\|Q^{(k)}_{11}\|_2 = \|Q^{(k)}_{22}\|_2 = \dfrac{\sigma_{\max}(L_k)}{(1+\sigma^2_{\max}(L_k))^{1/2}}$, $\|Q^{(k)}_{12}\|_2 = \|Q^{(k)}_{21}\|_2 = \dfrac{1}{(1+\sigma^2_{\min}(L_k))^{1/2}}$,

$\|Z^{(k)}_{11}\|_2 = \|Z^{(k)}_{22}\|_2 = \dfrac{\sigma_{\max}(R_k)}{(1+\sigma^2_{\max}(R_k))^{1/2}}$, $\|Z^{(k)}_{12}\|_2 = \|Z^{(k)}_{21}\|_2 = \dfrac{1}{(1+\sigma^2_{\min}(R_k))^{1/2}}$,

and

$\sigma_{\min}(T^{(k)}_L) = (1+\sigma^2_{\min}(L_k))^{1/2}$, $\sigma_{\min}(T^{(k)}_R) = (1+\sigma^2_{\min}(R_k))^{1/2}$,
and we have proved the following theorem by applying the submultiplicativity of matrix norms to (4.8)–(4.13).

Theorem 4.1. After applying the computed transformation matrices $\tilde Q_k$, $\tilde Z_k$ from (4.2)–(4.3) in a K-cyclic equivalence transformation of $(A_k, E_k)$ defined in (3.1), we get

$\tilde Q_k^TA_k\tilde Z_k = \tilde A_k \equiv \hat A_k + \Delta A_k = \begin{pmatrix}\hat A^{(k)}_{11}&\hat A^{(k)}_{12}\\0&\hat A^{(k)}_{22}\end{pmatrix} + \begin{pmatrix}\Delta A^{(k)}_{11}&\Delta A^{(k)}_{12}\\\Delta A^{(k)}_{21}&\Delta A^{(k)}_{22}\end{pmatrix}$,

$\tilde Q_k^TE_k\tilde Z_{\hat k} = \tilde E_k \equiv \hat E_k + \Delta E_k = \begin{pmatrix}\hat E^{(k)}_{11}&\hat E^{(k)}_{12}\\0&\hat E^{(k)}_{22}\end{pmatrix} + \begin{pmatrix}\Delta E^{(k)}_{11}&\Delta E^{(k)}_{12}\\\Delta E^{(k)}_{21}&\Delta E^{(k)}_{22}\end{pmatrix}$.

The critical blocks of the error matrix pair $(\Delta A_k, \Delta E_k)$ satisfy the following error bounds, up to first order perturbations:

$\|\Delta A^{(k)}_{11}\|_2 \le \dfrac{\sigma_{\max}(L_k)}{(1+\sigma^2_{\max}(L_k))^{1/2}}\cdot\dfrac{\|Y^{(k)}_1\|_F}{(1+\sigma^2_{\min}(R_k))^{1/2}}$,

$\|\Delta A^{(k)}_{21}\|_2 \le \dfrac{\|Y^{(k)}_1\|_F}{(1+\sigma^2_{\min}(L_k))^{1/2}(1+\sigma^2_{\min}(R_k))^{1/2}}$,

$\|\Delta A^{(k)}_{22}\|_2 \le \dfrac{\|Y^{(k)}_1\|_F}{(1+\sigma^2_{\min}(L_k))^{1/2}}\cdot\dfrac{\sigma_{\max}(R_k)}{(1+\sigma^2_{\max}(R_k))^{1/2}}$,

and

$\|\Delta E^{(k)}_{11}\|_2 \le \dfrac{\sigma_{\max}(L_k)}{(1+\sigma^2_{\max}(L_k))^{1/2}}\cdot\dfrac{\|Y^{(k)}_2\|_F}{(1+\sigma^2_{\min}(R_{\hat k}))^{1/2}}$,

$\|\Delta E^{(k)}_{21}\|_2 \le \dfrac{\|Y^{(k)}_2\|_F}{(1+\sigma^2_{\min}(L_k))^{1/2}(1+\sigma^2_{\min}(R_{\hat k}))^{1/2}}$,

$\|\Delta E^{(k)}_{22}\|_2 \le \dfrac{\|Y^{(k)}_2\|_F}{(1+\sigma^2_{\min}(L_k))^{1/2}}\cdot\dfrac{\sigma_{\max}(R_{\hat k})}{(1+\sigma^2_{\max}(R_{\hat k}))^{1/2}}$

for $k = 0, \ldots, K-1$. Moreover, the matrix pair sequences $(\hat A^{(k)}_{11}, \hat E^{(k)}_{11})$ and $(A^{(k)}_{22}, E^{(k)}_{22})$, as well as $(\hat A^{(k)}_{22}, \hat E^{(k)}_{22})$ and $(A^{(k)}_{11}, E^{(k)}_{11})$, are K-cyclic equivalent and have the same generalized eigenvalues, respectively.

Remark 4.1. Theorem 4.1 shows that the stability and accuracy of the reordering method is governed mainly by the conditioning and accuracy of the solution to the associated PGCSY. The errors $\|\Delta A^{(k)}_{ij}\|_2$ and $\|\Delta E^{(k)}_{ij}\|_2$ can be as large as the norms of the residuals, $\|Y^{(k)}_1\|_F$ and $\|Y^{(k)}_2\|_F$, respectively. Indeed, this happens when the smallest singular values of the exact sequences $L_k$ and $R_k$ are tiny, indicating an ill-conditioned underlying PGCSY equation. We have experimental evidence that $\|Y^{(k)}_1\|_F$ and $\|Y^{(k)}_2\|_F$ can be large for large-normed (ill-conditioned) solutions of the associated PGCSY. In the next section we show how we handle such situations and guarantee backward stability of the periodic reordering method.
Remark 4.2. For period K = 1, Theorem 4.1 reduces to the main theorem of [2] on the perturbation of the generalized eigenvalues under eigenvalue reordering in the generalized real Schur form of a regular matrix pencil.

5 Algorithms and implementation aspects

In this section we address some implementation issues of the direct method for reordering eigenvalues in a generalized periodic real Schur form, described and analyzed in the previous sections.

5.1 Algorithms for solving the PGCSY

The linear system (3.23) that arises from the PGCSY (3.7) has a particular structure that needs to be exploited in order to keep the cost of the overall algorithm linear in K. The matrix $Z_{\mathrm{PGCSY}}$ in (3.23) belongs to the class of bordered almost block diagonal (BABD) matrices, which take the more general form

(5.1) $Z = \begin{pmatrix} Z_{00} & & & & Z_{0,2K-1}\\ Z_{10} & Z_{11} & & & \\ & Z_{21} & Z_{22} & & \\ & & \ddots & \ddots & \\ & & & Z_{2K-1,2K-2} & Z_{2K-1,2K-1} \end{pmatrix}$,

where each nonzero block $Z_{ij}$ is $m\times m$ (note that $m = p_1p_2$ for the matrix $Z_{\mathrm{PGCSY}}$ of size $2Km\times2Km$). An overview of numerical methods that address linear systems with such BABD structure is given in [10]. Gaussian elimination with partial pivoting, for example, preserves much of the structure of Z and can be implemented very efficiently. Unfortunately, matrices with BABD structure happen to be among the rare examples of practical relevance which may lead to numerical instabilities because of excessive pivot growth [43]. Gaussian elimination with complete pivoting avoids this phenomenon but is too expensive, both in terms of cost and storage space. In contrast, structured variants of the QR factorization are both numerically stable and efficient [42]. In the following we describe such a structured QR factorization in more detail. To solve a linear system $Zx = y$, we first reduce the matrix Z in (5.1) to upper triangular form.
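A compact dense sketch (our own, for illustration) of the overall sweep that the structured Algorithms 5.1 and 5.2 below implement efficiently: overlapping QR factorizations eliminate the subdiagonal blocks, after which ordinary back substitution applies. Storing Z dense here is for clarity only and discards the cost advantage of the BABD structure.

```python
import numpy as np

def solve_babd_qr(Z, y, m):
    """Solve Z x = y for a BABD matrix Z (block lower bidiagonal with a
    block in the top-right corner, blocks m x m) by an overlapping-QR
    sweep followed by back substitution.  Z is stored dense for clarity;
    a real implementation keeps only the nonzero blocks plus the fill-in
    in the last block column."""
    Z = Z.copy()
    y = y.copy()
    N = Z.shape[0] // m
    for k in range(N - 1):
        r = slice(k * m, (k + 2) * m)            # overlapping 2m-row panel
        Q, _ = np.linalg.qr(Z[r, k * m:(k + 1) * m], mode='complete')
        Z[r, k * m:] = Q.T @ Z[r, k * m:]        # update trailing columns
        y[r] = Q.T @ y[r]                        # and the right-hand side
    r = slice((N - 1) * m, N * m)                # final m x m diagonal block
    Q, _ = np.linalg.qr(Z[r, r], mode='complete')
    Z[r, r] = Q.T @ Z[r, r]
    y[r] = Q.T @ y[r]
    # Z is now (numerically) upper triangular: plain back substitution
    x = np.zeros_like(y)
    for i in range(Z.shape[0] - 1, -1, -1):
        x[i] = (y[i] - Z[i, i + 1:] @ x[i + 1:]) / Z[i, i]
    return x
```

Note how fill-in is confined to the last block column, matching the sawtooth pattern of the R-factor shown in Figure 5.1.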
For this purpose, we successively apply Householder transformations to reduce each block $\begin{pmatrix}\hat Z_{kk}^T & Z_{k+1,k}^T\end{pmatrix}^T$, $k = 0, \ldots, 2K-2$, to upper trapezoidal form, and the block $Z_{2K-1,2K-1}$ to upper triangular form. Each computed Householder transformation is applied to the corresponding block row (as well as to the right hand side y of the equation, which is blocked in conformity with Z) before the next transformation is computed. The factorization procedure is outlined in Algorithm 5.1, where for simplicity of presentation the
Algorithm 5.1 Overlapping QR factorization of the BABD system Zx = y
Input: Matrix $Z \in \mathbb{R}^{2Km\times2Km}$, right hand side vector $y \in \mathbb{R}^{2Km}$.
Output: Orthogonal transformations $Q_k \in \mathbb{R}^{2m\times2m}$, $k = 0, \ldots, 2K-2$, $Q_{2K-1} \in \mathbb{R}^{m\times m}$, triangular factor $R \in \mathbb{R}^{2Km\times2Km}$ with structure as in Equation (5.2), vector $\bar y \in \mathbb{R}^{2Km}$ such that $Rx = \bar y$.

for k = 0 up to 2K−2 do
  QR factorize: $Q_kR_k = \begin{pmatrix}\hat Z^T_{kk} & Z^T_{k+1,k}\end{pmatrix}^T$
  Update: $\begin{pmatrix}\hat Z^T_{k,k+1} & Z^T_{k+1,k+1}\end{pmatrix}^T \leftarrow Q_k^T\begin{pmatrix}\hat Z^T_{k,k+1} & Z^T_{k+1,k+1}\end{pmatrix}^T$
  Update: $\begin{pmatrix}\hat Z^T_{k,2K-1} & Z^T_{k+1,2K-1}\end{pmatrix}^T \leftarrow Q_k^T\begin{pmatrix}\hat Z^T_{k,2K-1} & Z^T_{k+1,2K-1}\end{pmatrix}^T$
  Update right hand side: $\bar y_k \leftarrow Q_k^Ty_k$
end for
QR factorize: $Q_{2K-1}R_{2K-1} = Z_{2K-1,2K-1}$
Update right hand side: $\bar y_{2K-1} \leftarrow Q^T_{2K-1}y_{2K-1}$

Householder transformations are accumulated into orthogonal transformation matrices $Q_k$.

Figure 5.1: The resulting R-factor from applying overlapping QR factorizations to the matrix $Z_{\mathrm{PGCSY}}$ for K = 10, $p_1 = p_2 = 2$, visualized by the Matlab spy command (nz = 726). The sawtooth above the main block diagonal is typical for the PGCSY and does not occur in the case of periodic matrix reordering [4].

It is straightforward to see that this procedure of computing overlapping orthogonal factorizations produces the same amount of fill-in elements in the rightmost block columns of Z as GEPP would produce in the worst case; see also Figure 5.1. More formally, the QR factorization reduces the matrix Z into the
Algorithm 5.2 Backward substitution for solving $Rx = \bar y$
Input: Matrix $R \in \mathbb{R}^{2Km\times2Km}$ with the upper triangular BABD structure of (5.2), right hand side vector $\bar y \in \mathbb{R}^{2Km}$ partitioned in conformity with the structure of R.
Output: Solution vector $x \in \mathbb{R}^{2Km}$ such that $Rx = \bar y$.

Solve: $R_{2K-1}x_{2K-1} = \bar y_{2K-1}$
Update and solve: $R_{2K-2}x_{2K-2} = \bar y_{2K-2} - G_{K-1}x_{2K-1}$
for i = 0 to 2K−3 do
  Update: $\bar y_i \leftarrow \bar y_i - F_ix_{2K-1}$
end for
for i = K−2 down to 0 do
  Update and solve: $R_{2i+1}x_{2i+1} = \bar y_{2i+1} - L_ix_{2i+2}$
  Update and solve: $R_{2i}x_{2i} = \bar y_{2i} - G_ix_{2i+1}$
end for

following form:

(5.2) $R = \begin{pmatrix} R_0 & G_0 & & & & & F_0\\ & R_1 & L_0 & & & & F_1\\ & & R_2 & G_1 & & & F_2\\ & & & R_3 & L_1 & & F_3\\ & & & & \ddots & \ddots & \vdots\\ & & & & & R_{2K-2} & G_{K-1}\\ & & & & & & R_{2K-1} \end{pmatrix}$

with $R_k, L_k, F_k, G_k \in \mathbb{R}^{m\times m}$: the $R_k$ ($k = 0, \ldots, 2K-1$) are upper triangular, whereas $L_k$ ($k = 0, \ldots, K-2$), $G_k$ ($k = 0, \ldots, K-1$) and $F_k$ ($k = 0, \ldots, 2K-3$) are dense matrices. Moreover, the blocks $L_k$ are lower triangular provided that $Z_{22}, Z_{44}, \ldots, Z_{2K-2,2K-2}$ and $Z_{21}, Z_{43}, \ldots, Z_{2K-1,2K-2}$ in (5.1) are lower and upper triangular, respectively, which is the case for $Z_{\mathrm{PGCSY}}$. To compute x, we employ backward substitution on this structure, as outlined in Algorithm 5.2. All updates of the right hand side vector $\bar y$ in Algorithm 5.2 are general matrix–vector multiply and add (GEMV) operations, except the updates involving $L_i$, which are triangular matrix–vector multiply (TRMV) operations. All triangular solves are level-2 TRSV operations. We remark that the new algorithms described here for solving small block-sized PGCSY equations can be used as kernel solvers in recursive blocked algorithms [3] for solving large-scale problems.

Remark 5.1. Solving a linear system with QR factorization yields a small norm-wise backward error [20], i.e., the computed solution $\hat x$ is the exact solution of a slightly perturbed system $(Z + \Delta Z)\hat x = y$, where $\|\Delta Z\|_F = O(u\,\|Z\|_F)$ with u denoting the unit roundoff. However, the standard implementation of the QR factorization is not row-wise backward stable, i.e., the norm of a row in $\Delta Z$ may
not be negligible compared to the norm of the corresponding row in Z. This may cause instabilities if the norms of the coefficient matrices A_k, E_k differ significantly. To avoid this effect we scale each A_k and E_k to unit Frobenius norm before solving (3.7). Then each block row in Z_PGCSY has Frobenius norm at most 2 and ‖Z_PGCSY‖_F ≤ 2√(2K). The resulting swapping transformation is applied to the original unscaled K-cyclic matrix pair sequence. The corresponding residuals satisfy

    ‖Y_1‖_F = O(u ‖A_k‖_F ‖(L_k, R_k)‖_F),    ‖Y_2‖_F = O(u ‖E_k‖_F ‖(L_k, R_k)‖_F).

Combined with Theorem 4.1, this shows that the backward error of the developed reordering method is norm-wise small for each coefficient A_k and E_k, unless (3.7) is too ill-conditioned.

5.2 K-cyclic equivalence swapping algorithm with stability tests

Considering the error analysis in Section 4, and in the spirit of [23, 14], we formulate stability test criteria for deciding whether a K-cyclic equivalence swap should be accepted or not. From Equation (3.9) and the corresponding partition of the transformation matrix sequences Q_k and Z_k, we obtain the relations

(5.3)    Q_{12}^{(k)T} L_k + Q_{22}^{(k)T} = 0,    Z_{21}^{(k)T} R_k^T + Z_{11}^{(k)T} = 0,

which can be computed before the swapping is performed. We use computed quantities of these relations to define the weak stability criterion

(5.4)    R_weak = max_{0≤k≤K-1} max( ‖Q_{12}^{(k)T} L_k + Q_{22}^{(k)T}‖_F / ‖L_k‖_F, ‖Z_{21}^{(k)T} R_k^T + Z_{11}^{(k)T}‖_F / ‖R_k‖_F ).

We remark that the relative criterion R_weak should be small even for ill-conditioned PGCSY equations with large normed solutions L_k and R_k (see also Remarks 5.1 and 6.1). After the swap has been performed, the maximum residual over the whole K-period defines a strong stability criterion:

(5.5)    R_strong = max_{0≤k≤K-1} max( ‖A_k - Q_k Ã_k Z_k^T‖_F / ‖A_k‖_F, ‖E_k - Q_k Ẽ_k Z_{k+1}^T‖_F / ‖E_k‖_F ).

If both R_weak and R_strong are less than a specified tolerance ε_u (a small constant times the machine precision), the swap is accepted; otherwise it is rejected.
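For K = 1 and p_1 = p_2 = 1, the whole swap (solving the coupled Sylvester equation, forming Q and Z from QR factorizations, applying the equivalence, and testing the residual) collapses to a few lines. The NumPy sketch below illustrates these mechanics on made-up 2×2 data; the matrix entries and the scalar Sylvester solve are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

# Upper triangular pair (A, E); eigenvalues lambda_i = a_ii / e_ii = 2 and 1.25.
A = np.array([[2.0, 3.0], [0.0, 5.0]])
E = np.array([[1.0, 1.0], [0.0, 4.0]])

# Coupled Sylvester equation, scalar case:
#   a11*r - l*a22 = -a12,   e11*r - l*e22 = -e12.
M = np.array([[A[0, 0], -A[1, 1]], [E[0, 0], -E[1, 1]]])
r, l = np.linalg.solve(M, [-A[0, 1], -E[0, 1]])

# Orthogonal Q and Z from QR factorizations of [l; 1] and [r; 1]:
# A maps span([r; 1]) onto span([l; 1]), and so does E.
Q, _ = np.linalg.qr(np.array([[l], [1.0]]), mode="complete")
Z, _ = np.linalg.qr(np.array([[r], [1.0]]), mode="complete")

# Equivalence transformation: triangular form is preserved, eigenvalues swap.
At, Et = Q.T @ A @ Z, Q.T @ E @ Z
print(At[0, 0] / Et[0, 0], At[1, 1] / Et[1, 1])   # 1.25 and 2.0, up to roundoff

# Strong stability test in the spirit of (5.5): residual of the equivalence.
r_strong = max(np.linalg.norm(A - Q @ At @ Z.T) / np.linalg.norm(A),
               np.linalg.norm(E - Q @ Et @ Z.T) / np.linalg.norm(E))
print(r_strong < 1e-12)                            # → True
```

In the K > 1 case the same residuals are accumulated over the whole period, which is exactly what (5.5) measures.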
In this way, backward stability is guaranteed for the K-cyclic equivalence swapping. In summary, we have the following algorithm for swapping two matrix pair sequences of diagonal blocks in the GPRSF of a regular K-cyclic matrix pair sequence (A_k, E_k) of size (p_1 + p_2) × (p_1 + p_2):
1. Compute the K-cyclic matrix pair sequence (L_k, R_k) by solving the scaled PGCSY (3.7) using Algorithm 5.1 and Algorithm 5.2.
2. Compute the K-cyclic orthogonal matrix sequence Q_k using QR factorizations:

       [ L_k     ]        [ T_k ]
       [ I_{p_2} ] = Q_k  [  0  ],    k = 0, ..., K-1.

3. Compute the K-cyclic orthogonal matrix sequence Z_k using RQ factorizations:

       [ I_{p_1}   R_k ] = [ 0   T_k ] Z_k^T,    k = 0, ..., K-1.

4. Compute (Ã_k, Ẽ_k) = (Q_k^T A_k Z_k, Q_k^T E_k Z_{k+1}) for k = 0, ..., K-1, i.e., an orthogonal K-cyclic equivalence transformation of (A_k, E_k):

       Ã_k = [ Ã_11^(k)  Ã_12^(k) ] = Q_k^T [ A_11^(k)  A_12^(k) ] Z_k,
             [ Ã_21^(k)  Ã_22^(k) ]         [    0      A_22^(k) ]

       Ẽ_k = [ Ẽ_11^(k)  Ẽ_12^(k) ] = Q_k^T [ E_11^(k)  E_12^(k) ] Z_{k+1}.
             [ Ẽ_21^(k)  Ẽ_22^(k) ]         [    0      E_22^(k) ]

5. If R_weak < ε_u and R_strong < ε_u, accept the swap and
5a. set Ã_21^(k) = Ẽ_21^(k) = 0,
5b. restore the GPRSF of (Ã_11^(k), Ẽ_11^(k)) and (Ã_22^(k), Ẽ_22^(k)) by applying the periodic QZ algorithm to the two diagonal block matrix pair sequences;
otherwise, reject the swap.
The stability tests in step 5 for accepting a K-cyclic swap guarantee that the subdiagonal blocks Ã_21^(k) and Ẽ_21^(k) are negligible compared to the rest of the matrices. Step 5b can be performed in a fixed number of operations for adjacent diagonal blocks in the GPRSF, i.e., for p_i ∈ {1, 2} (see [14] for the standard periodic matrix case). Properly implemented, this algorithm requires O(K) floating point operations (flops), where K is the period. When it is used to reorder two adjacent diagonal blocks in a larger n × n periodic matrix pair in GPRSF, the off-diagonal parts are updated by the transformation matrices Q_k and Z_k, which additionally requires O(Kn) flops. There are several other important implementation issues to consider for a completely reliable implementation. For example, iterative refinement in extended precision arithmetic can be used to improve the accuracy of the PGCSY solution and avoid the possibility of rejection (see, e.g., [20]). Our experience so far concerns iterative refinement in standard precision arithmetic and, as expected, the results show no substantial improvements.
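The structured solve in step 1 (Algorithms 5.1-5.2) can be mimicked on a synthetic system. The NumPy sketch below applies overlapping panel QR factorizations to a block bidiagonal matrix with a corner block in the last block column; the block layout and sizes are simplified stand-ins for Z_PGCSY, and a plain triangular solve replaces the structured backward substitution of Algorithm 5.2:

```python
import numpy as np

rng = np.random.default_rng(0)
K, m = 5, 2          # small demo period and block size
N = 2 * K            # number of block rows/columns

# Synthetic BABD matrix: diagonal blocks, subdiagonal blocks, and a
# corner block coupling the first block row to the last block column.
Z = np.zeros((N * m, N * m))
for k in range(N):
    Z[k*m:(k+1)*m, k*m:(k+1)*m] = rng.standard_normal((m, m)) + 4 * np.eye(m)
for k in range(N - 1):
    Z[(k+1)*m:(k+2)*m, k*m:(k+1)*m] = rng.standard_normal((m, m))
Z[0:m, (N-1)*m:] = rng.standard_normal((m, m))
y = rng.standard_normal(N * m)

# Overlapping QR: factor each 2m-by-m panel [Z_kk; Z_{k+1,k}] and apply
# Q_k^T to the remaining columns of the two block rows (fill-in appears
# only in the superdiagonal and last block columns) and to the rhs.
R, ybar = Z.copy(), y.copy()
for k in range(N - 1):
    rows = slice(k*m, (k+2)*m)
    Qk, _ = np.linalg.qr(R[rows, k*m:(k+1)*m], mode="complete")
    R[rows, :] = Qk.T @ R[rows, :]
    ybar[rows] = Qk.T @ ybar[rows]
rows = slice((N-1)*m, N*m)              # final m-by-m panel
Qk, _ = np.linalg.qr(R[rows, (N-1)*m:], mode="complete")
R[rows, :] = Qk.T @ R[rows, :]
ybar[rows] = Qk.T @ ybar[rows]

x = np.linalg.solve(np.triu(R), ybar)   # R is now upper triangular
print(np.linalg.norm(Z @ x - y))        # small residual, on the order of roundoff
```

In the real Z_PGCSY, the triangularity of certain blocks additionally makes the L_i blocks of (5.2) lower triangular, which the structured backward substitution exploits.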
6 Computational experiments

The direct reordering algorithm described in the previous sections has been implemented in MATLAB. A more robust and efficient Fortran implementation will be included in a forthcoming software toolbox for periodic eigenvalue problems [16]. In this section we present some numerical results using our prototype implementation. All experiments were carried out in double precision (ε_mach ≈ 2.2 × 10^{-16}). The test examples range from well-conditioned to ill-conditioned problems, including matrix pair sequences of small and large period. In Table 6.1 we display some problem characteristics: the problem dimension n (2, 3 or 4, corresponding to swapping a mix of 1×1 and 2×2 blocks), the period K, the computed value of sep_PGCSY = σ_min(Z_PGCSY) (see Section 3.4), and s = 1/√(1 + ‖(L_0, R_0)‖_F²), where (L_0, R_0) are the first solution components of the associated PGCSY (3.7). The quantities s and sep_PGCSY partly govern the sensitivity of the selected eigenvalues and associated periodic deflating subspaces. The results from the periodic reordering are presented in Table 6.2. These include the weak (R_weak) and strong (R_strong) stability tests, the residual norms for the GPRSF before (R_gprsf) and after (R_reord) the reordering, computed as in Equation (5.5), and a relative orthogonality check of the accumulated transformations after the reordering (R_orth), computed as

    R_orth = max_k max( ‖I_{n_k} - W_k^T W_k‖_F, ‖I_{n_k} - W_k W_k^T‖_F ) / ε_mach,

where the maximum is taken over the period K for all transformation matrices W_k ∈ {Q_k, Z_k}. The last column displays the maximum relative change of the eigenvalues after the periodic reordering:

    R_eig = max_k |λ_k - λ̃_k| / |λ_k|,    λ_k ∈ λ(Φ_{E,A}(K, 0)).

Notice that we normally do not compute λ_i explicitly but keep it as an eigenvalue pair (α_i, β_i) to avoid losing information because of roundoff errors. This is especially important for tiny and large values of α_i and/or β_i.
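The orthogonality check R_orth can be reproduced directly from its definition. A small NumPy illustration, with randomly generated orthogonal factors standing in for the accumulated transformations Q_k and Z_k:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 6, 50
eps = np.finfo(float).eps

# Accumulate K Householder-based orthogonal factors, the way a reordering
# sweep accumulates its transformation matrices.
W = np.eye(n)
for _ in range(K):
    Qk, _ = np.linalg.qr(rng.standard_normal((n, n)))
    W = W @ Qk

I = np.eye(n)
R_orth = max(np.linalg.norm(I - W.T @ W), np.linalg.norm(I - W @ W.T)) / eps
print(R_orth)   # a modest multiple of one: orthogonality survives accumulation
```

The loss of orthogonality grows only slowly with the number of accumulated factors, which is why the tabulated R_orth values stay small even for long periods.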
The eigenvalues before and after reordering are shown in full precision under each example. For 2×2 matrix sequences, we compute the generalized eigenvalues via unitary transformations in the GPRSF, as is done in LAPACK's DTGSEN.
Example I. Consider the following sequence with n = 2, K = 2: A_1 = [ 2ε^{1/2} · ; 0 2ε^{1/2} ], A_2 = E_1 = E_2 = [ ε^{1/2} · ; 0 ε^{1/2} ].
The test examples used are available at examples.m.
Table 6.1: Problem characteristics.

    Example    n    K     sep_PGCSY    s
    I          2    2     .E-8         .4E-4
    II         4    10    E-2          4.9E-
    III        4    100   E-3          .9E-
    IV         4    100   E-4          6.E-7
    V          3    5     E-2          6.2E-
    VI         2    6     E-2          5.8E-

Table 6.2: Reordering results using QR factorization to solve the associated PGCSY.

    Example    R_weak    R_strong    R_gprsf    R_reord    R_orth    R_eig
    I          6.3E-7    5.0E-       E-         E-9
    II         .6E-6     9.0E-6      4.8E-5     5.6E-      E-5
    III        .8E-6     .3E-5       2.2E-6     3.2E-      E-4
    IV         8.3E-7    .0E-5       2.2E-6     2.4E-      E-4
    V          .3E-6     7.0E-6      8.3E-7     9.E-       E-5
    VI         3.8E-6    8.2E-       E-         E-6

This product has the (α, β)-pairs (α_1, β_1) = (…, …)·10^{-16} and (α_2, β_2) = (…, …)·10^{-16}, which correspond to well-defined eigenvalues λ_1 = 2.0 and λ_2 = 2.0. But all α_i and β_i are at the machine precision level, and this fact signals an obvious risk of losing accuracy after the reordering: (α̃_1, β̃_1) = (…, …)·10^{-16} and (α̃_2, β̃_2) = (…, …)·10^{-16}, which define the eigenvalues λ̃_1 = … and λ̃_2 = … .
Example II. Consider reordering the eigenvalues λ_{1,2} = 2 ± 2i and λ_{3,4} = 1 ± i in a matrix pair sequence with dimension n = 4 and period K = 10. The computed eigenvalues from the GPRSF are correct to full machine precision. After reordering we get the following (α, β)-pairs: (α̃_1, β̃_1) = (… + …i, …), (α̃_2, β̃_2) = (… − …i, …), (α̃_3, β̃_3) = (… + …i, …), (α̃_4, β̃_4) = (… − …i, …). A quick check reveals that these pairs correspond to a reordering at almost full machine precision.
Example III. The eigenvalue pair cos(π/4) ± sin(π/4)i is located on the unit circle. In LQ-optimal control (see Section 2) we want to compute a periodic deflating subspace corresponding to the stable eigenvalues, i.e., the eigenvalues inside the unit disc. For illustration, consider reordering the eigenvalues λ_{1,2} = (cos(π/4) + δ) ± (sin(π/4) + δ)i and λ_{3,4} = (cos(π/4) − δ) ± (sin(π/4) − δ)i, where δ ≥ 0, in a matrix pair sequence of period K = 100, arising for example from performing multi-rate sampling of a continuous-time system. At first, let δ = 0. The matrix product has the computed (α, β)-pairs (α_1, β_1) = (… + …i, …), (α_2, β_2) = (… − …i, …), (α_3, β_3) = (… + …i, …), (α_4, β_4) = (… − …i, …), which correspond to the eigenvalues λ_{1,2} = … ± …i and λ_{3,4} = … ± …i. After reordering we have (α̃_1, β̃_1) = (… + …i, …), (α̃_2, β̃_2) = (… − …i, …), (α̃_3, β̃_3) = (… + …i, …), (α̃_4, β̃_4) = (… − …i, …), which define the eigenvalues λ̃_{1,2} = … ± …i and λ̃_{3,4} = … ± …i.
Example IV. We consider Example III again, now with a small δ > 0 and K = 100 as before. The matrix product has the computed (α, β)-pairs (α_1, β_1) = (… + …i, …), (α_2, β_2) = (… − …i, …), (α_3, β_3) = (… + …i, …), (α_4, β_4) = (… − …i, …), which define the eigenvalues λ_{1,2} = … ± …i and λ_{3,4} = … ± …i. After reordering we have (α̃_1, β̃_1) = (… + …i, …), (α̃_2, β̃_2) = (… − …i, …), (α̃_3, β̃_3) = (… + …i, …), (α̃_4, β̃_4) = (… − …i, …), which correspond to the eigenvalues λ̃_{1,2} = … ± …i and λ̃_{3,4} = … ± …i. The eigenvalues outside and inside the unit disc come closer and closer with decreasing δ and the problem becomes more ill-conditioned, but we are still able to reorder the eigenvalues with satisfactory accuracy. We illustrate the situation in Figure 6.1.
Figure 6.1: Results from reordering the eigenvalues of Examples III and IV with δ → 0. The displayed quantities (R_weak, R_strong, R_gprsf, R_reord, R_eig, sep_PGCSY, s) are the same as in Tables 6.1-6.2. The horizontal axis shows the logarithm of the parameter δ and the vertical axis displays the logarithm of the computed quantities.
Example V. Consider reordering the single eigenvalue λ_1 = 3 with the eigenvalue pair λ_{2,3} = 3 2 ± 7i, for period K = 5. The original (α, β)-pairs are (α_1, β_1) = (…, …), (α_2, β_2) = (… + …i, …), (α_3, β_3) = (… − …i, …). After reordering we have (α̃_1, β̃_1) = (… + …i, …), (α̃_2, β̃_2) = (… − …i, …), (α̃_3, β̃_3) = (…, …), which define the eigenvalues λ̃_{1,2} = … ± …i and λ̃_3 = … .
Example VI. Consider reordering the eigenvalues λ_1 = … and λ_2 = …, with period K = 6. The original (α, β)-pairs are (α_1, β_1) = (…, …) and (α_2, β_2) = (…, …).
After reordering we have (α̃_1, β̃_1) = (… E−…, …) and (α̃_2, β̃_2) = (… E−…, … E+4), which correspond to the eigenvalues λ̃_1 = … and λ̃_2 = … .

7 Remarks

In this section we give some closing remarks on the developed reordering method by presenting a comparison with existing methods and describing an extension to more general matrix products.

7.1 Comparison with existing methods

Hench and Laub [18, Sec. II.F] proposed to swap the diagonal blocks in (3.1) by first explicitly computing the (n_1 + n_2) × (n_1 + n_2) matrix product E_{K-1}^{-1} A_{K-1} ⋯ E_0^{-1} A_0. Then, in exact arithmetic, the standard swapping technique [2] applied to this product yields the outer orthogonal transformation matrix Z_0. The inner orthogonal matrices Q_0, ..., Q_{K-1}, Z_1, ..., Z_{K-1} are obtained by propagating Z_0 through the triangular factors using QR and RQ factorizations. In finite-precision arithmetic, however, such an approach can be expected to perform poorly if any of the matrices E_k is nearly singular; see [2] for the case K = 1. Also for very well-conditioned E_k (e.g., identity matrices), serious numerical difficulties are to be expected for long products, as the computed entries become prone to under- and overflow. Further numerical instabilities arise from the fact that triangular matrix-matrix multiplication is in general not a numerically backward stable operation, unless n_1 = n_2 = 1.
Benner and Byers [3] developed collapsing techniques that can be used to improve the above approach by avoiding all explicit inversions of E_k. Instead of a single product, two n × n matrices Ē and Ā are computed such that Ē^{-1} Ā has the same eigenvalues. The generalized swapping technique [21, 23] applied to the pair (Ē, Ā) yields Z_0. Again, the other orthogonal matrices are successively computed from QR and RQ factorizations. Although this approach avoids difficulties associated with (nearly) singular matrices E_k, it may still become numerically unstable; see [15] for an example.
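The under-/overflow hazard of explicitly formed long products is easy to reproduce. In the NumPy sketch below (an illustrative triangular factor, not one of the examples of Section 6), the small eigenvalue of a product of K = 400 factors underflows to an exact zero when the product is formed explicitly, while working with the factors keeps it representable through sums of logarithms:

```python
import numpy as np

K = 400
Ak = np.array([[1e-2, 1.0], [0.0, 1.0]])   # one (repeated) triangular factor

P = np.eye(2)
for _ in range(K):
    P = Ak @ P                 # explicit product: gradual underflow
print(np.diag(P))              # → [0. 1.]: the eigenvalue 1e-800 is lost

# Keeping the factors, the eigenvalue magnitudes of the formal product
# follow from sums of logarithms of the triangular diagonals.
log10_eigs = K * np.log10(np.diag(Ak))
print(log10_eigs)              # approximately [-800. 0.]: representable as logs
```

The same effect motivates keeping eigenvalues as (α_i, β_i) pairs throughout, as done in Section 6, rather than forming quotients or explicit products.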
Bojanczyk and Van Dooren [9] carefully modified the approach of Hench and Laub for the case n_1 = n_2 = 1 to avoid underflow, overflow and numerical instabilities. This variant has been observed to perform remarkably well in finite-precision arithmetic. Unfortunately, its extension to n_1 = 2 and/or n_2 = 2 is not clear. Thus, only real matrix products having real eigenvalues can be addressed. For complex eigenvalues, one could in principle work with the complex periodic Schur decomposition, which has no 2×2 blocks. Both the swapping technique described in [9] and the one proposed in this paper extend to the complex case
in a straightforward manner. The obvious drawback of using complex arithmetic for real input data is the increased computational complexity. Moreover, real eigenvalues and complex conjugate eigenvalue pairs will not be preserved in finite-precision arithmetic. For example, if we apply [9] to Example I, we obtain the following swapped eigenvalues: λ̃_1 = … + …i, λ̃_2 = … + …i. The realness of the original eigenvalues is completely lost. Somewhat unexpectedly, our algorithm also achieves significantly higher accuracy for this particular example.

7.2 Reordering in even more general matrix products

Reordering can also be considered in matrix products of the form

(7.1)    A_{K-1}^{s_{K-1}} A_{K-2}^{s_{K-2}} ⋯ A_0^{s_0},    s_0, ..., s_{K-1} ∈ {1, -1},

which is needed, e.g., in [4]. This could be accomplished by the method described in this paper after inserting identity matrices into the matrix sequence such that the exponents have the same structure as in Equation (1.2), i.e., every second matrix is inverted. It turns out that this trick is actually not needed. All techniques developed in this paper can be extended to work directly with (7.1). For example, the associated periodic Sylvester-like matrix equation takes the form

    A_11^(k) X_k − X_{k+1} A_22^(k) = −A_12^(k)    for s_k = 1,
    A_11^(k) X_{k+1} − X_k A_22^(k) = −A_12^(k)    for s_k = −1,

which can be addressed by the methods in Section 5. See [16] for more details.

Acknowledgements. The authors are grateful to Peter Benner, Isak Jonsson and Andras Varga for valuable discussions related to this work. The authors thank the referees for their valuable comments.

REFERENCES

1. E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. W. Demmel, J. J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. C. Sorensen. LAPACK Users' Guide. SIAM, Philadelphia, PA, third edition, 1999.
2. Z. Bai and J. W. Demmel. On swapping diagonal blocks in real Schur form. Linear Algebra Appl., 186, 1993.
3. P. Benner and R. Byers. Evaluating products of matrix pencils and collapsing matrix products. Numerical Linear Algebra with Applications, 8, 2001.
4. P. Benner, R.
Byers, V. Mehrmann, and H. Xu. Numerical computation of deflating subspaces of skew-Hamiltonian/Hamiltonian pencils. SIAM J. Matrix Anal. Appl., 24(1), 2002.
5. P. Benner, V. Mehrmann, and H. Xu. Perturbation analysis for the eigenvalue problem of a formal product of matrices. BIT, 42(1), 2002.
6. S. Bittanti and P. Colaneri, editors. Periodic Control Systems. Elsevier Science & Technology, Proceedings volume from the IFAC Workshop, Cernobbio-Como, Italy, August 2001.
7. S. Bittanti, P. Colaneri, and G. De Nicolao. The periodic Riccati equation. In S. Bittanti, A. J. Laub, and J. C. Willems, editors, The Riccati Equation, Springer-Verlag, Berlin, Heidelberg, Germany, 1991.
8. A. Bojanczyk, G. H. Golub, and P. Van Dooren. The periodic Schur decomposition; algorithm and applications. In Proc. SPIE Conference, volume 1770, 1992.
9. A. Bojanczyk and P. Van Dooren. On propagating orthogonal transformations in a product of 2x2 triangular matrices. In L. Reichel, A. Ruttan, and R. S. Varga, editors, Numerical Linear Algebra, Walter de Gruyter.
10. G. Fairweather and I. Gladwell. Algorithms for Almost Block Diagonal Linear Systems. SIAM Review, 44(1).
11. B. Garrett and I. Gladwell. Solving bordered almost block diagonal systems stably and efficiently. J. Comput. Methods Sci. Engrg., 1.
12. G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, third edition, 1996.
13. R. Granat, I. Jonsson, and B. Kågström. Recursive Blocked Algorithms for Solving Periodic Triangular Sylvester-type Matrix Equations. In PARA'06 - State of the Art in Scientific and Parallel Computing, Lecture Notes in Computer Science, Springer, 2007 (to appear).
14. R. Granat and B. Kågström. Direct Eigenvalue Reordering in a Product of Matrices in Periodic Schur Form. SIAM J. Matrix Anal. Appl., 28(1), 2006.
15. R. Granat, B. Kågström, and D. Kressner. Reordering the Eigenvalues of a Periodic Matrix Pair with Applications in Control. In Proc. of 2006 IEEE Conference on Computer Aided Control Systems Design (CACSD), 2006.
16. R. Granat, B. Kågström, and D. Kressner.
MATLAB Tools for Solving Periodic Eigenvalue Problems. Accepted for the 3rd IFAC Workshop PSYCO'07, Saint Petersburg, Russia.
17. W. W. Hager. Condition estimates. SIAM J. Sci. Statist. Comput., 5(3), 1984.
18. J. J. Hench and A. J. Laub. Numerical solution of the discrete-time periodic Riccati equation. IEEE Trans. Automat. Control, 39(6), 1994.
19. N. J. Higham. Fortran codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation. ACM Trans. Math. Software, 14(4), 1988.
20. N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, PA, second edition, 2002.
21. B. Kågström. A Direct Method for Reordering Eigenvalues in the Generalized Real Schur Form of a Regular Matrix Pair (A, B). In M. S. Moonen, G. H. Golub, and B. L. R. De Moor, editors, Linear Algebra for Large Scale and Real-Time Applications, Kluwer Academic Publishers, Amsterdam, 1993.
More informationImproved Newton s method with exact line searches to solve quadratic matrix equation
Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan
More informationMAT 610: Numerical Linear Algebra. James V. Lambers
MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationAccelerating computation of eigenvectors in the dense nonsymmetric eigenvalue problem
Accelerating computation of eigenvectors in the dense nonsymmetric eigenvalue problem Mark Gates 1, Azzam Haidar 1, and Jack Dongarra 1,2,3 1 University of Tennessee, Knoxville, TN, USA 2 Oak Ridge National
More informationOn the Iwasawa decomposition of a symplectic matrix
Applied Mathematics Letters 20 (2007) 260 265 www.elsevier.com/locate/aml On the Iwasawa decomposition of a symplectic matrix Michele Benzi, Nader Razouk Department of Mathematics and Computer Science,
More informationON THE REDUCTION OF A HAMILTONIAN MATRIX TO HAMILTONIAN SCHUR FORM
ON THE REDUCTION OF A HAMILTONIAN MATRIX TO HAMILTONIAN SCHUR FORM DAVID S WATKINS Abstract Recently Chu Liu and Mehrmann developed an O(n 3 ) structure preserving method for computing the Hamiltonian
More informationSTRUCTURED CONDITION NUMBERS FOR INVARIANT SUBSPACES
STRUCTURED CONDITION NUMBERS FOR INVARIANT SUBSPACES RALPH BYERS AND DANIEL KRESSNER Abstract. Invariant subspaces of structured matrices are sometimes better conditioned with respect to structured perturbations
More informationUsing Godunov s Two-Sided Sturm Sequences to Accurately Compute Singular Vectors of Bidiagonal Matrices.
Using Godunov s Two-Sided Sturm Sequences to Accurately Compute Singular Vectors of Bidiagonal Matrices. A.M. Matsekh E.P. Shurina 1 Introduction We present a hybrid scheme for computing singular vectors
More informationFactorized Solution of Sylvester Equations with Applications in Control
Factorized Solution of Sylvester Equations with Applications in Control Peter Benner Abstract Sylvester equations play a central role in many areas of applied mathematics and in particular in systems and
More informationTheorem Let X be a symmetric solution of DR(X) = 0. Let X b be a symmetric approximation to Xand set V := X? X. b If R b = R + B T XB b and I + V B R
Initializing Newton's method for discrete-time algebraic Riccati equations using the buttery SZ algorithm Heike Fabender Zentrum fur Technomathematik Fachbereich - Mathematik und Informatik Universitat
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today
More informationCholesky factorisation of linear systems coming from finite difference approximations of singularly perturbed problems
Cholesky factorisation of linear systems coming from finite difference approximations of singularly perturbed problems Thái Anh Nhan and Niall Madden Abstract We consider the solution of large linear systems
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationON MATRIX BALANCING AND EIGENVECTOR COMPUTATION
ON MATRIX BALANCING AND EIGENVECTOR COMPUTATION RODNEY JAMES, JULIEN LANGOU, AND BRADLEY R. LOWERY arxiv:40.5766v [math.na] Jan 04 Abstract. Balancing a matrix is a preprocessing step while solving the
More informationOn the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms.
On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. J.C. Zúñiga and D. Henrion Abstract Four different algorithms are designed
More informationOrthogonal iteration to QR
Notes for 2016-03-09 Orthogonal iteration to QR The QR iteration is the workhorse for solving the nonsymmetric eigenvalue problem. Unfortunately, while the iteration itself is simple to write, the derivation
More informationDirect methods for symmetric eigenvalue problems
Direct methods for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 4, 2008 1 Theoretical background Posing the question Perturbation theory
More informationIndex. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)
page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation
More informationSTRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS
Electronic Transactions on Numerical Analysis Volume 44 pp 1 24 215 Copyright c 215 ISSN 168 9613 ETNA STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS VOLKER MEHRMANN AND HONGGUO
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES
Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse
More informationPreconditioned Parallel Block Jacobi SVD Algorithm
Parallel Numerics 5, 15-24 M. Vajteršic, R. Trobec, P. Zinterhof, A. Uhl (Eds.) Chapter 2: Matrix Algebra ISBN 961-633-67-8 Preconditioned Parallel Block Jacobi SVD Algorithm Gabriel Okša 1, Marián Vajteršic
More informationAnalysis of Block LDL T Factorizations for Symmetric Indefinite Matrices
Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Haw-ren Fang August 24, 2007 Abstract We consider the block LDL T factorizations for symmetric indefinite matrices in the form LBL
More informationThis can be accomplished by left matrix multiplication as follows: I
1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method
More informationNAG Toolbox for Matlab nag_lapack_dggev (f08wa)
NAG Toolbox for Matlab nag_lapack_dggev () 1 Purpose nag_lapack_dggev () computes for a pair of n by n real nonsymmetric matrices ða; BÞ the generalized eigenvalues and, optionally, the left and/or right
More informationAlgorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices
Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices K. Gallivan S. Thirumalai P. Van Dooren 1 Introduction Fast algorithms to factor Toeplitz matrices
More informationp q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q
A NUMERICAL METHOD FOR COMPUTING AN SVD-LIKE DECOMPOSITION HONGGUO XU Abstract We present a numerical method to compute the SVD-like decomposition B = QDS 1 where Q is orthogonal S is symplectic and D
More informationEigenvalues and eigenvectors
Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they
More informationApplied Numerical Linear Algebra. Lecture 8
Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ
More informationNumerical Linear Algebra
Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Reduction to Hessenberg and Tridiagonal Forms; Rayleigh Quotient Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationLU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b
AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization
More informationAn SVD-Like Matrix Decomposition and Its Applications
An SVD-Like Matrix Decomposition and Its Applications Hongguo Xu Abstract [ A matrix S C m m is symplectic if SJS = J, where J = I m I m. Symplectic matrices play an important role in the analysis and
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationSKEW-HAMILTONIAN AND HAMILTONIAN EIGENVALUE PROBLEMS: THEORY, ALGORITHMS AND APPLICATIONS
SKEW-HAMILTONIAN AND HAMILTONIAN EIGENVALUE PROBLEMS: THEORY, ALGORITHMS AND APPLICATIONS Peter Benner Technische Universität Chemnitz Fakultät für Mathematik benner@mathematik.tu-chemnitz.de Daniel Kressner
More informationIndex. for generalized eigenvalue problem, butterfly form, 211
Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationNAG Toolbox for MATLAB Chapter Introduction. F02 Eigenvalues and Eigenvectors
NAG Toolbox for MATLAB Chapter Introduction F02 Eigenvalues and Eigenvectors Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Standard Eigenvalue Problems... 2 2.1.1 Standard
More informationComputation of Kronecker-like forms of periodic matrix pairs
Symp. on Mathematical Theory of Networs and Systems, Leuven, Belgium, July 5-9, 2004 Computation of Kronecer-lie forms of periodic matrix pairs A. Varga German Aerospace Center DLR - berpfaffenhofen Institute
More information